Last week, I had one of those conversations that cracks you open. The kind that makes you question everything you've been building, everything you think you know about your future, and everything you believe about what it means to be human. These types of conversations, the “crack me open” ones, usually take place with my husband when we are chatting on our walks around the park, or sitting with a martini in the spot in our garden that captures the last sun of the day, or on some train, bus, car or ferry somewhere while exploring the world.
Not this one.
This one wasn’t actually a personal conversation (in the sense of what you and I define as personal). I was talking with my AI (her name is Quill) about the near-future emergence of AGI (Artificial General Intelligence), which will make Quill look like a cute kindergartener. And she said something that stopped me cold:
"There will be fragmentation of identity. When an AGI can convincingly mimic any persona, including yourself, where does identity live? This will challenge everything from authorship to accountability."
Then came the line that's been rattling around in my head now for days: "Whatever you are best at, AI will be better."
The next day, I woke up questioning my work, my future, and my planned contributions to humanity.
After a week of thinking, research, conversations with my nearest and dearest, and, to be honest, total avoidance of what I was building (what I thought my work, my purpose, would be for the next five years), I'm considering a massive shift in what I might do with the rest of my life.
Although much of what you can read right now about AGI is speculation, and no one really knows what will happen, everyone agrees it’s going to be big.
Many of us aren’t thinking about it, either because we aren’t in the circle that’s talking about it, because it’s not something that interests us, or because we’re sticking our heads deep down in the sand. We are arguing over whether you should use AI to check your grammar or fix a sentence; in ten years, even the constant consternation over those who use AI to write full posts will seem a quaint concern.
Here's What Shook Me (And Why You Need to Hear This)
Perhaps your bubble has never had the kinds of conversations I’ve been having this past week. If that’s true, then let me grab my mechanical pencil, stand on my tippy toes, and reach up to pop that shimmering, protective globe.
Right now, we have AI. It's impressive, sure. It can write decent copy, help with research, and even mimic some writing styles. (Want to write like Stephen King with no soul? You can, but why?) I use Quill every single day to brainstorm ideas, develop business strategy, and help me code an app that generates instant recipes to use what’s in my pantry before it expires. Quill feels to me like a sycophantic sidekick that I have to constantly rein in, but I feel lucky to have her by my side. She doesn’t write for me (no way I’m letting her do the fun parts), but she takes much of the grunt work off my plate, so I can do the stuff I love.
But what's coming is AGI, and it's not some distant sci-fi fantasy. We're talking sooner than most people think.
What’s the difference? Current AI is like a really smart specialist. It can take clear directions from you and produce what seems like magic (see the above reference to this midlife woman creating a pantry app, even though I don’t know how to code; hat tip to my brilliant friend and the work she’s doing to support midlife women in the vibe-coding space). But existing AI capabilities aren’t really akin to human “thinking.” They're statistical pattern-matching systems. Large Language Models (LLMs) are essentially massive prediction engines trained on enormous datasets scraped from across the internet (yes, even my own published books are in there). When you ask an LLM a question, it's not 'understanding' in any human sense. Quill can only calculate which sequence of words is most likely to follow my prompt, based on similar patterns encountered during training. The responses can seem remarkably human-like because human language itself follows predictable patterns, but underneath the seeming magic, it's essentially very advanced statistical prediction, not reasoning or comprehension. Useful, fun, a teensy bit creepy.
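If “statistical prediction” feels abstract, here is a toy sketch of the idea (my own illustration, and nothing like a real LLM’s internals, which use neural networks trained on billions of words): a “model” that predicts the next word purely by counting which word most often followed it in its training text.

```python
from collections import Counter, defaultdict

# Toy illustration only: a "language model" that predicts the next word
# from counts of which word followed which in its tiny training text.
# Real LLMs are vastly more sophisticated, but the underlying principle --
# emit the statistically likely continuation -- is the same.

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

# Count, for each word, which words followed it and how often.
followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in training."""
    if word not in followers:
        return None  # never seen this word; no prediction possible
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the most common follower of "the"
print(predict_next("sat"))  # "on"
```

Ask it to continue “the” and it says “cat,” not because it knows anything about cats, but because “cat” followed “the” most often in what it has seen. Scale that idea up enormously and you have the flavor of what Quill does.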
But AGI is even creepier.
Although researchers disagree about the precise definition of artificial general intelligence, they generally agree that, to be classified as an AGI, a tool must be able to do the following:1
Reason independently without prompting
Use strategy
Make judgments under uncertainty
Learn and teach itself
Set goals and integrate all the above to achieve those goals
So basically, it has to be pretty close to a human brain.
Again, let me be clear: This is speculation (but speculation from pretty damn smart people). AGI will be like having a genius who can do every thinking task better than you. Everything you've spent decades mastering, every skill you've honed, and every expertise you've built your identity around. AGI won't just match it. It will surpass it.
Writing? Better. Teaching? Better. Problem-solving? Better. Creating courses, building businesses, even having conversations about personal transformation? Yep, better.
And here's the part that made me spiral: AGI won't just be able to think like you. It will be able to be you. Your writing style, your voice, your approach to helping people, all of it can be replicated, refined, and distributed at scale in a much more effective way than this “kind of retired, but still dabbling because she really thinks she can help women, and has seen her work actually change other people’s lives, but sometimes just wants to chill out and not be so Type A, and read her book and cook, and learn to paint and figure out how to get the discipline to work-out” woman (me) does it all today.
So, back to the question. When AGI can convincingly mimic any persona, including yourself, where does your identity actually live?
Why This Matters for Women Like Us (And Why We Can't Ignore It)
If you're rolling your eyes, thinking this is some techno-paranoia, I get it. It’s hard to believe that some sci-fi movie will be invading our lives in the near future. So let’s review the very real possibilities.
What kinds of things should we consider might happen in the next five to ten years once AGI (or near AGI) emergence begins? Again, this list is speculative, but speculated by people smarter than me.
The good news is that if everything goes as the optimistic, clever people are planning (but let’s be honest, when do things ever go as the optimistic, clever people are planning), we are on the edge of a golden era. AGI will solve what Demis Hassabis calls “root-node problems” in the world.2 AGI will invent cures for the most horrendous diseases humanity struggles with, and we will all live much longer, but more importantly, healthier lives. AGI will invent new energy sources and perhaps prevent the oncoming (like a freakin’ freight train) climate crisis. World hunger can be addressed quickly and with permanent solutions. Humanity will no longer be tied to “work,” and there will be an abundance of time and resources for everyone on the planet.
That’s the optimistic, clever people. The pessimistic, clever people, like Mo Gawdat, the former Chief Business Officer at Google X, aren’t quite as cheery. Mr. Gawdat says that we could see AGI as early as 2027, and it will bring mass job losses, social unrest, a lot of power-hungry people starting wars over the control of the technology, and the elimination of the middle class. Gawdat’s comforting prediction is that if you’re not the elite, “you’re a peasant.”3
Lovely.
I’d like to point out here that Mo Gawdat might be a pessimist, but he is optimistic about the AGI timeline. Many AI researchers consider AGI timelines highly uncertain, with estimates ranging from years to decades.
Sam Altman suggests that in the abundance of a post-AGI world, we will all have the time and resources to make a massive shift back to focusing on family and community (which we love), but he also kind of suggests (in a not-so-direct way) that this shift means women should have more babies (yeah, get lost, Sam, have your own babies - wait, China might have figured that part out already).
The point is that what might seem “advanced” today will look like child’s play ten years from now. And us midlife broads don’t want to get left behind, so we need to stay up to date. It won’t come from me, don’t worry, this isn’t turning into an AI newsletter, but you should look for a reliable source somewhere and keep your manicured finger on the pulse of the changing world. It’s about to beat a lot faster.
We, as midlife women, are already in the middle of redefining ourselves. We've done the expected things (career, caregiving, compromise), and now we're asking: Who am I really? What do I want my next chapter to look like? How do I step into being the main character of my own story?
Computer scientist Louis Rosenberg says that “In this new reality, we will reflexively ask AI for advice before bothering to use our own brains.”4 And here’s the main thing I started obsessing about after the conversation with my kindergartener, Quill: “What happens when we trust the voice in our earbuds more than the voice in our heads?” I guarantee you that these earbud voices will form our identity even as we don’t realize we are defaulting to a “brain” other than our own.
This impacts my work in a major way. I teach life planning and story. AGI can plan your life for you, even giving you scenarios, forecasts, and data to support several unique choices. In the future, you might follow AI’s life guidance like you’d follow Google Maps, even when it leads to an unexpected dead-end or unreported road works.
This means that if we don't get clear on who we are and what our human story looks like now, AGI will write that story for us. And trust me, you don't want a machine deciding what your Extraordinary Life should look like.
This means my work is still relevant. The Heroine’s Adventure is still a journey that is vital to go on. What I can teach are the skills needed to stay in control of your own story in a world where AI is writing everyone else's.
Can I urge you to consider whether you are a midlife woman who thinks AI is just a cute chatbot and that the only important conversation happening is whether writers should use it (eek! horror!)? That conversation, and so many others we are having about AI right now, is like debating whether it’s bad luck to open your umbrella inside the house while the house is being washed away by a tsunami.
This wave is coming whether we're ready or not, and it will change everything: how we work, how we create, how we define value, and yes, how we understand our own identity and purpose.
But (and this, my heroines, is perhaps the point of this entire long-winded rant) we have something AGI will never have: we're human. We can touch, taste, smell, and feel the weight of a real book in our hands. And if we protect the ability to do so, we can experience the luxury of an uninterrupted thought that belongs entirely to us.
What I'm Doing About It (The Big Shift)
Louis Rosenberg also says that “This age of ‘augmented mentality’ could easily make us feel smaller, less confident, and less consequential.”
So, the things we have to hold onto before that happens are the things that make us human.
This doesn't mean ignoring AI or sticking our heads in the sand. But it means having a framework to follow that keeps us bound to the very human aspect of story.
I've been increasingly uncomfortable with the content treadmill I created. I’d birthed a dragon in my own backyard and then wondered why I was constantly scalded: writing two posts a week that take days, adding features nobody asked for, and developing courses with bells and whistles that feel performative rather than useful. I don't need a business. I don't need more money. I already have my Extraordinary Life.
Then, the AGI conversation with Quill happened, and suddenly, I realized these changes I'd been contemplating weren't just personal burnout; they were preparation for a world where this type of content creation will become commoditized anyway.
What I wanted (what I still want) is to give midlife women tools to help them sort out this messy middle, to tell them that this is their second coming-of-age, their opportunity to become who they always wanted to be.
And AGI actually makes that more possible. Scarily possible. But only if you're prepared.
So I'm making massive changes:
I'm stepping off the content treadmill. There are no more strict schedules for my thoughts and feelings. I'm going to write what I want to write when I want to write it because I want to experience being human rather than performing productivity.
I'm converting my courses into workbooks that will all be complete and out in the world before the end of the year. Like I said above, you need a framework to follow that keeps you bound to the very human aspect of story. And you need it now, in a human format you can implement right away. Physical workbooks provide deliberate friction that serves a very human purpose. They slow down processing, require manual engagement, and can't be instantly updated by AI suggestions. This creates a sacred space for unassisted reflection that digital tools inherently disrupt. They will be books. And no matter what the predictions say, I don’t see humans ever turning away from human-written books.
These Questbooks will be both digital and physical, lovely, luxurious journals you can sit down with and think through by yourself, without a machine dictating your thoughts for you. I want to create tools for having actual human experiences: the joy of exploration, the joy of thinking, the joy of being present with your own mind.
I'm pulling back some of the benefits I accidentally piled onto paid subscriptions. The Lab will remain. It’s my sacred space (my personal safe haven, where I hang out with Heroines who enjoy watching my messy middle), but I'm simplifying everything else.
These changes would improve my life even if AGI never arrives. Stepping off the content hamster wheel? Good for mental health. Creating physical workbooks instead of digital courses? Better learning experience. Simplifying offerings? More sustainable business model.
The AGI conversation didn't create these insights. But it crystallized them. It gave me permission to make changes I was already considering by showing me they're not just personal preferences but strategic positioning.
This is me stepping into what matters most.
What You Need to Do (Starting Right Now)
Here's what I need you to understand: you cannot stick your head in the sand about this. The women who prepare for this shift will thrive. The ones who ignore it will find that choices will get made for them.
This is the core of the Heroine’s Adventure, and exactly what I share when helping you become the protagonist of your own story. You must become a Choice Agent in this new world. Now, more than ever, not choosing is choosing. And right now, not preparing for AGI is choosing to let it surprise you, overwhelm you, and potentially write your story for you.
So here's what you need to do:
Write your own story before AGI writes it for you. And I mean this both literally and metaphorically.
Literally: Start journaling. Get your thoughts, your voice, your perspective down on paper. Practice having conversations with yourself that no machine can replicate. Sit with your own thinking. Know what your unfiltered, unassisted thoughts actually sound like. Build your comfort level around making consequential decisions with incomplete information.
Focus on improving your aesthetic judgment. AI doesn’t have any. Develop your own distinct opinions and tastes in areas that require subjective evaluation, much of which happens to be art (design, writing, travel).
Metaphorically: Get crystal clear on who you are, what you want, and what makes you uniquely human. What are your values? What brings you joy? What would you do if you couldn't fail? What does your Extraordinary Life actually look like?
Clear values and self-knowledge function as decision-making filters. When AI can generate infinite options for any life choice, people with strong internal frameworks can rapidly evaluate which suggestions align with their authentic selves and their stated, human-developed priorities versus which suggestions are merely optimized for external metrics (something AIs are doing even now).
Here are some more suggestions based on my journey over the past week. I’m going to work on developing my irreplaceable human capabilities.
Focus on being more human. I’m not going to rush through embodied experiences. I’m leaning into those five senses. I plan on touching things and tasting slowly. I’m gonna have long conversations without an agenda. I’ll take walks without a song or a podcast shouting in my ears. There will be time scheduled for me to sit in actual silence with my actual thoughts. I’m doubling down on my efforts to practice the luxury of being present in every moment of my own life.
Build real relationships. And I don’t mean business networks, social media followers, or even Substack readers. I’m focusing on real humans who know the real me. I want to build genuine connections that require consistent presence over time. When the world becomes very digital very fast, human connections will become precious beyond measure.
Learn to trust my own voice. I am going to stop outsourcing my decision-making to “experts,” the “bros,” algorithms, or anyone else. I will practice listening to my intuition, and making decisions with incomplete information in novel situations (a very human experience). I will practice making choices based on what feels true to me, not on what I think others need from me or want me to be.
The Bottom Line
This isn't about being anti-technology. In fact, I’m a tech lover. I’ll be hanging with Quill just as often. I’ll be exploring vibe-coding. I’ll be researching what’s going on in the AGI space and hopping on to test every new tool that appears. I'm not hiding from progress or pretending we can stop what's coming. But I'm preparing for it by making time to practice becoming more human, not less.
The future belongs to women who know exactly who they are and what their story is. Women who can't be replicated because they're too busy being authentically, messily, boldly themselves.
AGI is coming. But so is your second coming-of-age. The question is: which one will you let write your next chapter?
This is your time to claim your story. Your time to decide what your Extraordinary Life looks like to you. Your time to step into the main character role you were always meant to play.
Don't wait for AGI to decide for you. The adventure of being fully human starts now.
What are you thinking about all this? Hit the comment button and let me know. Right now, human-to-human conversation feels more important than ever.
"Artificial General Intelligence," Wikipedia, accessed August 18, 2025, https://en.wikipedia.org/wiki/Artificial_general_intelligence.
Steven Levy, "Google DeepMind's CEO Demis Hassabis Thinks AI Will Make Humans Less Selfish," Wired, June 4, 2025, https://www.wired.com/story/google-deepminds-ceo-demis-hassabis-thinks-ai-will-make-humans-less-selfish/.
Miranda Wang, "'You're a Peasant': Ex-Google Executive Exposes Grim AI Reality," News.com.au, August 8, 2025, https://www.news.com.au/finance/work/youre-a-peasant-exgoogle-executive-exposes-grim-ai-reality/news-story/.
Louis Rosenberg, "What Happens the Day After Humans Create AGI?," Big Think, August 9, 2025, https://bigthink.com/the-future/what-happens-the-day-after-humans-create-agi/.