Simon Willison has a new article explaining an important and often misunderstood aspect of LLMs: there's a fundamental difference between chatting with an LLM, as we users do, and training it.
Short version: ChatGPT and similar tools do not directly learn from or memorize everything you say to them.
Every time you start a new chat conversation, you clear the slate. Each conversation is an entirely new sequence, carried out independently of previous conversations, both your own and other users'. Understanding this is key to working effectively with these models: every time you hit "new chat" you are effectively wiping the model's short-term memory and starting again from scratch. This has a number of important consequences.
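To make the "clean slate" point concrete, here is a minimal sketch (not Willison's code, just an assumed illustration of how a typical chat client works): the only "memory" is the message list the client resends with every turn, starting a new chat simply creates an empty list, and the model's weights are never updated by anything you type.

```python
# Hypothetical sketch: the model itself is frozen and stateless;
# the only "memory" is the message history the client resends each turn.

def call_frozen_model(messages) -> str:
    # Stand-in for an API call. Training happened long before this point
    # and nothing in this conversation changes the model's weights.
    return "(model output)"


class ChatSession:
    def __init__(self, system_prompt: str):
        # The entire short-term memory is this client-side list.
        self.messages = [{"role": "system", "content": system_prompt}]

    def send(self, user_text: str) -> str:
        self.messages.append({"role": "user", "content": user_text})
        # Every request sends the full history; the model sees nothing else.
        reply = call_frozen_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply


# Hitting "new chat" is just constructing a fresh session: the slate is clear.
chat_a = ChatSession("You are a helpful assistant.")
chat_a.send("My name is Alice.")

chat_b = ChatSession("You are a helpful assistant.")
# chat_b has no access to anything said in chat_a, or in anyone else's chats.
chat_b.send("What is my name?")
```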
More here.