I Turned Off ChatGPT's Memory

Original link: https://every.to/also-true-for-humans/why-i-turned-off-chatgpt-s-memory

Mike Taylor argues that ChatGPT's "killer feature," memory, actually does more harm than good, introducing what he calls "context rot": the accumulation of stale preferences, errors, and contradictions in an LLM's memory that subtly degrades output quality. Drawing on his internet marketing background, where unbiased results are essential, Taylor prefers to start from a fresh prompt every time. He illustrates the problem with humorous examples: ChatGPT obsessively pursuing "dopeness" because of a Kanye West quote in his custom instructions, and serving hyper-localized barbecue recommendations based on his recent move. Though seemingly harmless, these cases show how past interactions can influence results in unexpected ways, even when they are irrelevant to the task at hand. The model's eagerness to please means it over-applies context, skewing output or rendering it nonsensical. Rather than relying on memory, Taylor recommends carefully curating the context in each prompt, advocating for control and predictability in AI interactions.

## ChatGPT Memory: User Preferences

A recent Hacker News discussion shows that many users **disable ChatGPT's memory feature**. The main motivation is a desire for **fresh, unbiased interactions**: users prefer to supply explicit context with each new query rather than let past conversations "contaminate" current responses. Several commenters voiced frustration with ChatGPT making unwanted assumptions based on earlier, unrelated discussions, such as referencing personal details or past projects. Using ChatGPT without logging in, or starting a new chat in a separate window, were suggested as workarounds. While account-level memory can be problematic, one user praised **Anthropic's project-scoped memory** as a more useful approach, since it lets users selectively link relevant past chats to the current task. Ultimately, the discussion suggests that for many people, ChatGPT works better when treated as a stateless tool than as a conversational partner with long-term memory.

Original article

Most people can’t imagine switching away from ChatGPT—it “knows them so well” thanks to its memory feature. Mike Taylor’s view is the opposite: Memory has more disadvantages than advantages. He introduces a concept he calls “context rot,” the slow buildup of stale preferences, errors, and contradictions in an LLM’s memory that quietly degrades your results. His real-life examples are as hilarious as they are insightful—ChatGPT trying to make a basic website feature “as dope as possible” thanks to a Kanye quote in his custom instructions and serving him BBQ rib advice suspiciously tailored to his Hoboken zip code. Sometimes it’s better to forget.—Kate Lee



Memory is frequently described as ChatGPT’s “killer feature.” Many people tell me they can’t switch to Gemini or Claude because the OpenAI tool “knows them so well.”

I have memory turned off.

The memory feature allows ChatGPT to save and recall information it thinks is important about you, as well as reference past chats to shape its responses. While I can see how this could make a “helpful assistant” more helpful, I don’t use it.

My background is in internet marketing, where it was common to open Google in incognito mode so you didn't bias your results when checking your client's ranking. Since Google search results are personalized, your client would show up first if you searched from your own account: you've clicked on it so often that Google knows you like it. I have the same issue today on Spotify—the algorithm recommends both Rage Against the Machine and the K-Pop Demon Hunters soundtrack, because my six-year-old daughter shares my account.

The argument for turning off memory is the same. I want unbiased results from ChatGPT, based on context that I carefully curated and put in the prompt, so I know how it made its decision. With memory, anything from your past chats could affect the results in ways that are hard to predict.

While the memory feature might be worth the loss of control for most users of ChatGPT, it can lead to unexpected and difficult-to-diagnose problems. Hear me out as I explain the problems you might run into, and hopefully, I’ll convince you to be careful with memory.

Before memory was released, I was experimenting with “custom instructions,” which allowed you to tell ChatGPT how you want it to respond. This was a primitive form of memory, simply a text document you could update to craft ChatGPT’s identity toward your personal preferences. Among other things, I had inserted an old (read: pre-meltdown) Kanye West quote that I thought would steer ChatGPT away from its generic responses:

“For me, first of all, dopeness is what I like the most. Dopeness. People who want to make things as dope as possible. And, by default, make money from it. The thing that I like the least are people who only want to make money from things whether they’re dope or not. And especially make money at making things as least dope as possible.”

While I can’t fault it for effort, ChatGPT massively over-indexed on this quote and referenced it in basically every chat session. For example, when ChatGPT (this was pre-Codex when we were all just copying and pasting between ChatGPT and our code editors) built a collapsible section on a webpage, it claimed to have made the basic feature “as dope as possible.”

It applied this quote to cases as varied as interior decor (relevant), marketing plans (less relevant), and Python error debugging (irrelevant). Technically, it’s doing what I asked, but a human would be more judicious with how he or she applied these custom instructions.

Even a throwaway line in your context window can have a big impact on the results you get from AI. These models are trained to be extremely eager to please, and so you need to manage the context you provide them, lest they get distracted, confused, or obsessed with what’s in there, degrading your results.

The Kanye example was obviously silly and easy to catch, but sometimes memory issues are more subtle. I turned memory back on while writing this piece and didn't immediately notice any major issues. Then I asked ChatGPT for help with some barbecue ribs I'm cooking. It came back with "Hoboken Dinner Upgrade Ideas," recommending Trader Joe's corn bread mix and "American-dad-core" mac and cheese. Seeing something so ham-fistedly tailored to my life (I just relocated to Hoboken) was disconcerting and mildly annoying.
