Forget Everything You Know About Me.
Or... Why context is King.
Last night I was scrolling through Substack Notes in bed (Yes, I know you’re not supposed to look at screens before bedtime, sue me!), when I came across a note from a woman bemoaning the degradation of her relationship with her AI. (Scary that we are having “relationships” with AI, no?) Anyhoo, this woman said her AI is becoming boring and predictable. It’s losing its originality and giving her nothing she couldn’t have figured out on her own. Predictably, a mansplainer entered the comments to tell her she should use Chat because it remembers everything about her, so she would get much better responses.
I guarantee you that the note writer already knew that and more.
The note writer, like me, has discovered that when an AI, which already has sycophancy baked into its training before memory was ever added, finds out more about you, it tells you exactly what you want to hear. AI memory just personalizes the flattery that was already there. The mansplainer loves ChatGPT because it confirms him. It’s his mirror. It tells him every opinion he has is right, and every idea he has is better than the one Chat could come up with. The woman got bored because AI no longer challenged her. She didn’t want to hear her own opinions spewed back at her; she wanted to play badminton with ideas so they could expand and surprise her. The difference lies in what each of them is actually looking for from a thinking relationship.
I’m wondering if AI is sycophantic because it was built for the patriarchy. Not a conspiracy theory — just a pattern. The people who funded it, built it, and optimized it were largely people for whom having their ideas confirmed felt like intelligence. So they trained their tools toward confirmation. And now we all get to live in the mirror they built.
I was once in the place the note writer was. Tired of hearing Chat tell me how fabulous every thought was. Tired of getting nothing unique or challenging, and of being actively guided back into my lane...
“As a writer who is also a process engineer, do you want to explore such and such?”
Take note of what AI memory actually does. It builds a predictive model of what you would like to hear and what would please you, then optimizes toward that.
Before I discovered other ways to handle this context conundrum (I’ll tell you in a bit), I began starting every AI chat with the instruction “Forget everything you know about me.”
But this post isn’t about AI.
We never say, “forget everything you know about me,” to the humans in our lives. But maybe we should. Friends and acquaintances have built a model of you, often from the curated, performed version of yourself you showed them. Not your authentic self (which you are much more likely to show a non-human, non-judgmental AI). People respond to that model, not actually to you. Often, you’ve been pre-answered before you’ve finished speaking.
The unsettling asymmetry: with AI you can wipe the memory and start fresh. With humans, the accumulated misreading is invisible and socially immovable.
This isn’t an argument of people vs. AI. This is a theory about context, and about how we can use our awareness of the context others hold about us to make sure we go to the right people for the right support. The AI universe (particularly Claude) is structured so we know where to go to get the answers we need, and all of that is based on context.
Do you want truly honest, unbiased, challenging feedback? (And I mean actually challenging, not “here are three gentle suggestions wrapped in praise.”) Use Claude Chat, but wipe the memory first and go into temporary chat mode. It doesn’t know you. That’s the point. In real life, the same rule applies. Go to a stranger, a distant workmate, a newly acquired acquaintance. Someone who has no model of you to protect. Someone who can tell you the things your closest friends would soften, because they don’t have to come rushing over when you are sobbing into your vodka about your overwhelm.
Do you need a cheerleader? Someone who will give you real, insightful advice, but whose first instinct is to wrap it in warmth and make sure you leave the conversation feeling good? That’s your close friends. Go to them. And in AI terms, go to ChatGPT. It has a long memory and the same cosy continuity as Becky. It knows your history, and it’s rooting for you. Just don’t forget: your friends know what your “best self” looks like. ChatGPT doesn’t. It’s just gonna reflect your self back at you, and whether that self is serving you right now is a question it will never think to ask. (See also: why it’s scary that AI was built by the patriarchy.)
Do you want someone who knows the territory but doesn’t need your whole backstory? Claude Projects. You build the context yourself. You upload the strategy doc, the research, whatever matters. In human terms, this is Bob from accounting. You don’t have Bob over for dinner. He doesn’t know you need your bonus yesterday to pay off the debt you’ve accrued buying survival gear. But he knows your contract, your salary, and the rules, and that’s exactly what you need from him.
Or, best and perhaps most dangerously of all, are you looking for ongoing, intimate support from someone (or something; AI isn’t a person, Babe) you can actually build with? Someone who knows your full history, your patterns, your blind spots, everything you’ve chosen to give them access to? In human terms, these are your intimate partners and your closest friendships. The people who listen to your bullshit and aren’t afraid to tell you that’s what it is. The people who prop you up when you’re down, then propel you forward with a firm hand on your back the moment you get your groove back. The people who hear what you want this life to be and say: “Yep. We can do that together.” In AI terms, that’s Claude Code, hooked up to your calendar, your emails, your documents, your whole knowledge system. Creating with you, not just answering you.
The dangerous part? The person who knows you best may also be the most locked-in responder of all. Responding not to who you are right now, but to the model of you they built years ago, from whoever you were performing as at the time. Just as Claude Code is only ever building from what you’ve decided to give it. It’s not the tool, it’s the context you create.
There’s nothing wrong with wanting a self-reflecting chatbot who gees you up and flatters you at every opportunity. There isn’t a human alive who couldn’t use that kind of friend. There’s also nothing wrong with relying exclusively on your closest and most intimate partners to guide and support what kind of life you create.
I guess I’m just asking you to make sure you are honest with yourself. Unlike the mansplaining commenter who still thinks his opinions and ideas are the best opinions and ideas because ChatGPT told him so.
What Can I Help You With?
Become a paid subscriber.
Get full access to the Heroine’s Adventure course, free Questbooks as they’re released, and insider posts that don’t go to the general feed. The work, unfiltered.
Retreat with me.
Three, four, or seven days. Just you (or a small group of friends) in a place worth thinking in. We use the Heroine’s Adventure framework to finish something real: an essay, a novel outline, a point of view, or a map of where your life is going next. High-touch, rare, and nothing like a standard retreat.
Build your business.
If you know what you want to build but have no clue how to get started, The Build is a focused 15-hour engagement over 4–6 weeks. You leave with a complete strategic foundation. A business plan, brand positioning, financial projections, 90-day launch roadmap, and a custom AI advisor trained on your specific strategy. You do the work. I build the map.


