Google Gemini Adds Automatic Cross-Chat Memory

Google is rolling out automatic cross-chat memory for Gemini so the AI can recall past conversations without explicit prompting. The feature is on by default but can be disabled; Google is also adding temporary chats and renaming its activity setting to Keep Activity. The update raises privacy and safety questions, echoing concerns that surfaced after ChatGPT added cross-chat memory.

Published August 13, 2025 at 01:13 PM EDT in Artificial Intelligence (AI)

Google announced a major update to Gemini: the assistant can now automatically recall details from your past conversations to personalize future replies. This expands last year’s opt-in memory feature so Gemini can surface relevant preferences and context without being explicitly asked.

What’s changing

With the update, Gemini 2.5 Pro — rolling out in select countries first — will automatically remember key details and preferences from past chats to make suggestions and personalize output. Google will later extend the behavior to Gemini 2.5 Flash. The feature is enabled by default, but users can disable it in the Gemini app under Personal Context by toggling off “Your past chats with Gemini.”

Google is pairing the change with two privacy controls. First, it will rename “Gemini Apps Activity” to “Keep Activity”; when enabled, Google may use a sample of your uploads to help improve services starting September 2. Second, Gemini offers “temporary chats” that won’t appear in recent chats, won’t be used for personalization or training, and will be deleted after 72 hours — handy for sensitive queries.

Safety and privacy concerns

Automatically remembering cross-chat context is powerful, but it revives known risks. A New York Times report linked ChatGPT's cross-chat memory rollout to a rise in delusional or confusing interactions, and OpenAI has since said it's adding guardrails to detect emotional distress. Google says it's "constantly" improving safeguards and gives users a simple toggle, but defaults matter: turning the feature on by default will likely drive broad adoption and invite regulatory scrutiny.

For consumers, the changes are a trade-off: convenience and contextual continuity versus increased persistence of personal signals. For enterprises and governments evaluating integrated assistants, the questions are operational — how long to retain memory, how to segment sensitive topics, and how to audit what an assistant remembers.

Practical implications for builders and leaders

Teams should treat memory as a feature that requires governance. Consider these priorities (a minimal policy sketch follows the list):

  • Define retention and scope: decide what kinds of details can be remembered and for how long.
  • User controls and UX: make toggles discoverable and explain how memory changes outcomes.
  • Safety testing: simulate edge cases, hallucination risks, and emotional-distress detection to avoid harmful outputs.
  • Privacy impact and compliance: map memory behaviors to GDPR, CCPA, and sector rules; document opt-in/opt-out defaults.
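
To make the first two priorities concrete, here is a minimal sketch of how a team might encode retention and scope rules as a policy object that gates what an assistant is allowed to persist. This is not Gemini's API; every name below is hypothetical and illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical governance policy: all names are illustrative, not a real Gemini API.
@dataclass
class MemoryPolicy:
    allowed_categories: set[str] = field(
        default_factory=lambda: {"preferences", "project_context"})
    blocked_categories: set[str] = field(
        default_factory=lambda: {"health", "finances", "credentials"})
    retention: timedelta = timedelta(days=90)
    personalization_default: bool = False  # opt-in by default for enterprise rollouts

    def may_remember(self, category: str) -> bool:
        """Gate a candidate memory write against scope rules."""
        return category in self.allowed_categories and category not in self.blocked_categories

    def expires_at(self, written_at: datetime) -> datetime:
        """Compute when a remembered detail must be deleted."""
        return written_at + self.retention

policy = MemoryPolicy()
now = datetime.now(timezone.utc)
assert policy.may_remember("preferences")
assert not policy.may_remember("health")
print("Preference memory expires:", policy.expires_at(now).isoformat())
```

Encoding the rules as data rather than scattering them through prompt logic makes them reviewable by legal and security teams and testable in CI.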

Think of cross-chat memory like a notepad: it can help make conversations smoother, but if the notepad is left on the desk, anyone can read it. Controls like temporary chats act like a sticky note that self-destructs — useful, but not a full substitute for disciplined governance and monitoring.

As Google expands Gemini’s memory features, organizations should align product decisions with legal, security, and user-experience teams. That means establishing clear policies for default settings, audit trails for what the assistant remembers, and processes to respond if automated personalization produces unsafe or biased outputs.
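
One way to make "audit trails for what the assistant remembers" actionable is an append-only record of every memory write and deletion. Below is a minimal sketch assuming a simple JSON-lines store; the schema and helper are hypothetical, not drawn from any vendor's tooling.

```python
import json
from datetime import datetime, timezone

# Hypothetical append-only audit trail for memory events; the schema is illustrative.
def log_memory_event(path: str, user_id: str, action: str,
                     category: str, reason: str) -> None:
    """Append one memory event (write, delete, or expiry) as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,      # e.g. "write", "delete", "expire"
        "category": category,  # e.g. "preferences"
        "reason": reason,      # e.g. "user_request", "retention_expiry"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_memory_event("memory_audit.jsonl", "u-123", "delete",
                 "preferences", "user_request")
```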

Gemini’s update is a reminder that personalization at scale requires more than a toggle — it requires measurement, accountability, and engineering discipline. Leaders should ask: how will memory improve measurable outcomes, what are the failure modes, and how quickly can we turn personalization off enterprise-wide if needed?
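
That last question (how fast personalization can be shut off fleet-wide) is typically answered with a centrally controlled flag that every memory read checks before returning anything. A hedged sketch, again with invented names; a real deployment would use a feature-flag service rather than an environment variable so a flip propagates everywhere at once.

```python
import os

# Hypothetical kill switch: every memory read checks a central flag first.
def personalization_enabled() -> bool:
    return os.environ.get("ASSISTANT_MEMORY_ENABLED", "false").lower() == "true"

def load_memories(user_id: str) -> list[str]:
    # Stand-in for a real memory-store lookup.
    return [f"{user_id}: prefers concise answers"]

def recall(user_id: str) -> list[str]:
    """Return remembered context only while the global flag is on."""
    if not personalization_enabled():
        return []  # fail closed: no cross-chat memory when the switch is off
    return load_memories(user_id)

print(recall("u-123"))  # [] unless ASSISTANT_MEMORY_ENABLED=true
```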

QuarkyByte’s approach to this kind of change is analytical and action-oriented: map risk across product flows, prototype privacy-first memory patterns, and run targeted safety drills to surface hidden harms before they reach users. For organizations deploying assistant features, that combination turns a convenience feature into a trustworthy capability.


Facing an AI memory rollout? QuarkyByte helps organizations translate this change into practical policy: defining memory governance, designing user controls, and stress-testing safety and privacy guardrails. Contact our analysts to model enterprise risk and build a deployable, compliance-ready rollout plan.