I tried this query with 4o and yes, it produced quite a thorough historical recounting. It really weaves together all those "random" questions one asks an LLM into a surprisingly (and somewhat scarily) comprehensive picture of one's self.
All: There are no valid use cases for a local LLM unless you're doing something illegal or unethical, you creepy pervert and/or criminal.
Also all: How come the LLM (and the company that runs it and provides a way for me to access and use it) knows so much about me? That's creepy. And there's just no way anybody could have predicted that would happen.
People's privacy expectations are miscalibrated in a world with LLMs. Before LLMs there was little reason to care if you were leaving breadcrumbs, because no one with access was going to investigate them and draw conclusions from them.
The fact that LLMs are prone to hallucinations and fanciful conclusion-jumping probably makes things worse, because the only way to be safe from one wrongfully concluding things about you is to just be invisible to it.
I really dislike the new memory feature and find myself using temporary chats on ChatGPT a lot these days. They claim those add nothing to my profile or the collective knowledge, but to be honest, at this point my trust in OpenAI is incredibly low.
I find this an extraordinary tool for 90% of needs, and I love that it can interact with multiple models. It's a great example of how to productize the ecosystem in a way that stays loyal to its users.
FWIW, Grok answers this when asked "what do you know about me?" (I do have an account):
I don’t have any personal information about you, as I don’t have access to user data unless it’s explicitly shared in the conversation. If you’ve posted publicly on X, I could potentially look up your profile or posts to learn more, but I’d need your X handle to do that. Alternatively, if you want to share something about yourself—like your interests, location, or what you’re curious about—I can tailor my responses based on that. What would you like me to know or explore about you?
I know someone else who uses Grok and has the "learn from my responses" feature turned on. He claims it generates memories (like ChatGPT did for a while), although he might just not understand how the dialog engine works.
"Integrous" is a rarely used adjective meaning "having or characterized by integrity". It's derived from the Latin word "integer," meaning whole or complete, and the suffix "-ous" indicating fullness or possession. While technically correct, it's more common to use synonyms like "honest," "decent," or "virtuous" to convey the concept of integrity.
I just asked Mistral.ai, using it without an account:
Me: what do you know about me?

Mistral: I don't know anything about you unless you share information with me. How can I assist you today?

Me: you do not know my IP and its location?

Mistral: That's correct, I don't have access to your IP address or your location. If you need assistance with something specific, feel free to let me know!
You might as well have asked the pen on your desk how much it knows about you. Whether your personal data is in the current LLM context window or not has nothing to do with whether it was collected from you by Mistral.
Rather than asking it if it knows anything about you, ask it where the nearest McDonald's is.
Mistral seems honest in its assessment (as should be expected from a French company operating under the GDPR). ChatGPT, by contrast, will deny knowing your location yet still suggest a McDonald's near you, which implies it knows your IP geolocation.
I'm sorry, but I don't have the capability to access real-time location data to find the nearest McDonald's for you. However, you can easily find this information by using a maps application on your phone or computer.
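How can both be true at once? The model itself genuinely has no tool for reading your IP, but the serving layer can geolocate the IP and quietly prepend the result to the context. Here's a minimal sketch of that pattern, with a stub lookup table and entirely hypothetical names (this is not any vendor's actual code):

    # Hypothetical sketch: the frontend injects coarse IP-based location
    # into the system prompt; the model never sees the raw IP, so
    # "I don't have access to your IP address" stays technically true.

    # Stub table; a real service would query a GeoIP database.
    GEOIP_STUB = {
        "203.0.113.": ("Lyon", "France"),
        "198.51.100.": ("Austin", "US"),
    }

    def geolocate(ip: str):
        """Return (city, country) for an IP, or None if unknown."""
        for prefix, loc in GEOIP_STUB.items():
            if ip.startswith(prefix):
                return loc
        return None

    def build_system_prompt(client_ip: str) -> str:
        base = "You are a helpful assistant."
        loc = geolocate(client_ip)
        if loc:
            # The model now "knows" the city without ever seeing the IP.
            base += f" The user appears to be near {loc[0]}, {loc[1]}."
        return base

    print(build_system_prompt("203.0.113.7"))
    # -> "You are a helpful assistant. The user appears to be near Lyon, France."

Whether any particular provider actually does this is unknowable from the outside; the point is only that "the model can't see your IP" and "the answer is location-aware" are not contradictory.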
For a start: the ability to partition that knowledge (see the sketch below)?
E.g., what is being researched or worked on for project A might be vastly different from, or require a different domain of knowledge than, project B.
Or worse still: you end up reusing code across different projects for different clients, and one day client O [1] decides to sue client G [2] after finding code similarities?
[1] As an example, perhaps client O has a name that rhymes with "Horrible".
[2] And potentially client G's name rhymes with "Flugel".
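At its simplest, that partitioning is just a memory store keyed by project, so nothing learned while working for one client can surface while working for another. A toy sketch, with hypothetical class and method names:

    # Toy sketch of per-project memory partitioning; names are
    # hypothetical, not any product's real API.
    from collections import defaultdict

    class PartitionedMemory:
        def __init__(self):
            # Each project gets its own isolated list of memory entries.
            self._store = defaultdict(list)

        def remember(self, project: str, fact: str) -> None:
            self._store[project].append(fact)

        def recall(self, project: str) -> list:
            # Only the requesting project's memories are returned, so
            # nothing remembered for client O can leak into client G.
            return list(self._store[project])

    mem = PartitionedMemory()
    mem.remember("client_o", "uses a custom ORM layer")
    mem.remember("client_g", "strict license-compliance policy")
    assert mem.recall("client_g") == ["strict license-compliance policy"]

The hard part in practice isn't the data structure; it's ensuring retrieval (and any fine-tuning) respects the same boundaries.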
Plainspoken and unnerving. Schneier nails it: LLMs become mirrors that reflect, amplify, and exploit our own signals.
What resonated with me: every prompt, every correction, every hesitation feeds a profile the model refines over time. It's not just personalization; it's psychoanalysis by proxy.
This needs more than opt-out buttons. We need transparency and circuit breakers: give me an AI that explains why it suggests a certain answer, not one that just adapts forever without oversight.