Originally published on LinkedIn
In episode two of our fledgling podcast, The Next Thing Now, we explored the shifting landscape of AI chat apps: ChatGPT, Grok, Claude, and more. Each of these models has its own moat, its own differentiator. But one of the most compelling features of ChatGPT is its memory, the thing that makes it sticky. It feels personal, like a relationship that develops over time. The more you interact, the more it remembers, shaping responses that feel increasingly tailored to you.
That feeling of familiarity is powerful. But it also raises important questions. How much data is being shared? Where is it going? We know Google’s entire business model revolves around monetising user data—what does that mean for AI assistants that remember details about us? Shouldn’t we own our own memories? Could we store them in a private memory vault, where we control what gets shared and what stays locked away?
Does this vision, let's call it "Personal AI", require a fundamental rethink of how these systems handle memory, privacy, and context-sharing?
This article explores the direction of travel for Personal AI, the challenges of data ownership and privacy, the role of AI memory systems, and early solutions emerging from research and industry.
Today's AI models, including ChatGPT, have started incorporating memory. This means they can persist some user-specific information across interactions, allowing for a more personalised experience over time. However, while memory is evolving, it remains limited.
The future of Personal AI requires a more nuanced, user-controlled, and privacy-first approach to memory. With recent improvements in AI memory systems, a truly personalised assistant is finally within reach.
To achieve this, we need stronger frameworks for memory architecture, enhanced privacy safeguards, and secure context-sharing protocols that put users in control.
AI memory systems must balance retaining useful context with avoiding intrusive data collection. Unlike traditional software, which stores user settings or preferences explicitly, AI memory needs to be adaptive yet controlled.
Recent research is exploring how AI can achieve this balance.
The key challenge is who decides what should be remembered—the user, the AI, or a combination of both? One possible solution is user-curated memory, where people explicitly approve what AI retains, akin to managing bookmarks in a browser. But does anyone really want to do this?
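To make that concrete, here is a minimal sketch of what user-curated memory could look like in code. Everything in it is hypothetical, invented for illustration rather than taken from any product's API: the assistant proposes a memory, and nothing is surfaced to the model until the user approves it.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryItem:
    """A single candidate memory, pending explicit user approval."""
    text: str
    approved: bool = False


@dataclass
class CuratedMemory:
    """A bookmark-style store: the AI proposes, the user decides."""
    items: list[MemoryItem] = field(default_factory=list)

    def propose(self, text: str) -> MemoryItem:
        # The assistant suggests a memory; it counts only once approved.
        item = MemoryItem(text)
        self.items.append(item)
        return item

    def forget(self, item: MemoryItem) -> None:
        # Hard delete, honouring the user's right to be forgotten.
        self.items.remove(item)

    def context(self) -> list[str]:
        # Only approved memories are ever surfaced to the model.
        return [i.text for i in self.items if i.approved]


memory = CuratedMemory()
suggestion = memory.propose("User prefers concise answers")
suggestion.approved = True        # the explicit, bookmark-like step
print(memory.context())           # ['User prefers concise answers']
```

The friction the article questions lives in that single approval step; a realistic product would likely batch approvals or learn sensible defaults rather than interrupt every conversation.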
The biggest concern with AI memory is data ownership. If an AI assistant retains user preferences, interactions, or behaviours, who has access to that information? How can users trust that their data remains private?
There are several possible approaches. Anthropic's Model Context Protocol (MCP), for instance, proposes a standardised way for AI models to request, retrieve, and use memory context dynamically. A similar system for Personal AI could allow users to control when, where, and how an AI retrieves their history.
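MCP itself is a broader protocol for connecting models to tools and data, so the sketch below illustrates only the gating idea behind such a system, with made-up scope names and a made-up policy table; it is not the MCP specification. The shape is what matters: the model asks for a named category of context, and the user's standing policy decides whether the request is served.

```python
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ASK_USER = "ask_user"


# Hypothetical per-category policy the user sets once, not per request.
POLICY = {
    "preferences": Decision.ALLOW,      # tone, formatting, dietary needs
    "work_history": Decision.ASK_USER,
    "health": Decision.DENY,
}


def load_memories(scope: str) -> list[str]:
    # Stand-in for a real query against the user's memory vault.
    return [f"(memories tagged '{scope}')"]


def handle_context_request(scope: str, prompt_user):
    """Gate the model's request for memory behind the user's policy.

    `prompt_user` is a callback for one-off consent. Returns the
    requested context, or None if access is refused.
    """
    decision = POLICY.get(scope, Decision.ASK_USER)  # unknown scopes ask
    if decision is Decision.DENY:
        return None
    if decision is Decision.ASK_USER and not prompt_user(scope):
        return None
    return load_memories(scope)


# Preferences flow freely; health data never leaves, whatever the model asks.
print(handle_context_request("preferences", prompt_user=lambda s: False))
print(handle_context_request("health", prompt_user=lambda s: True))  # None
```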
The future of AI memory should be built around data sovereignty, where users—not corporations—decide what is remembered, shared, and forgotten.
Beyond remembering facts, a truly intelligent Personal AI must be context-aware. This means understanding situational relevance and adapting responses based on past conversations while filtering out unnecessary details.
Research is already moving in this direction. The challenge is how AI should decide what's important.
A promising approach is transparent memory interfaces, where users approve or modify AI memory as part of their interactions.
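One way to combine those two ideas, relevance filtering and transparency, is to score each memory against the current query and then show the user exactly which memories made the cut. This sketch assumes a hypothetical `embed` function that maps text to a numpy vector; the 0.75 threshold is an arbitrary placeholder:

```python
import numpy as np


def select_context(memories, query, embed, threshold=0.75):
    """Choose which memories enter the prompt, and show the user why.

    Cosine similarity stands in for whatever relevance model is used;
    `embed` is assumed to map a string to a numpy vector.
    """
    q = embed(query)
    chosen = []
    for m in memories:
        v = embed(m)
        score = float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q)))
        if score >= threshold:
            chosen.append((m, score))

    # Transparency: surface the selection instead of applying it silently,
    # giving the user the chance to veto a memory before it is used.
    for m, score in chosen:
        print(f"Using memory (relevance {score:.2f}): {m}")
    return [m for m, _ in chosen]
```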
While no fully realised Personal AI exists yet, early solutions emerging from research and industry provide a glimpse into its potential.
The likely future model will involve a hybrid: memory stored locally under the user's ownership, with selective, scoped context shared with cloud models only when it is needed.
This hybrid approach would enable personalisation without centralisation, making AI feel tailored yet private.
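A minimal sketch of that hybrid, again with invented names: memories live in a local file the user owns (in practice it would be encrypted with a user-held key), and each cloud request carries only the categories the user has chosen to share. The `retain` flag is aspirational; no provider is obliged to honour it today.

```python
import json
from pathlib import Path

# Hypothetical on-device store; the user owns this file outright.
VAULT = Path.home() / ".personal_ai" / "vault.json"


def load_vault() -> dict:
    # Memory never leaves the device by default. At rest it should be
    # encrypted with a user-held key (omitted here for brevity).
    return json.loads(VAULT.read_text()) if VAULT.exists() else {}


def build_request(user_message: str, share_scopes: set) -> dict:
    """Assemble a cloud request carrying only user-selected context."""
    vault = load_vault()
    return {
        "message": user_message,
        # The only memory that leaves the device:
        "context": {s: vault.get(s, []) for s in share_scopes},
        "retain": False,   # ask the provider not to persist the exchange
    }


# Share food preferences for a restaurant query; everything else stays home.
request = build_request("Book somewhere for dinner on Friday",
                        share_scopes={"preferences"})
```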
For Personal AI to become mainstream, several developments must still occur.
The shift toward personal, private, and context-aware AI will likely be gradual. Companies will need to rethink data collection models, focusing on privacy-first AI architectures rather than centralised analytics-driven personalisation.
AI already has memory, but today’s implementations are still evolving. The challenge isn’t just giving AI persistence—it’s ensuring that AI memory is transparent, user-controlled, and privacy-first.
The ideal Personal AI should remember what matters, be transparent about what it retains, and leave the user in control of what is shared and what is forgotten.
As AI progresses, the balance between personalisation and privacy will be key. The companies and developers who figure this out first will shape the next era of AI.
The real question is: Will Personal AI be built for users—or for the companies controlling the AI? That answer will define the future of digital intelligence.
This article was originally written and published on LinkedIn by Kevin Smith, CTO and founder of Dootrix.