The Crowned Birds, the Snake, and the Private LLMs We’re About to Live With

  • Writer: Mehmet Batili
  • Jan 10
  • 7 min read

I had a dream last night that felt like slapstick staged by an unfriendly director. I’m in a field. A few large trees. From far away, a fleet of small birds arrives. They wear little crowns, more like toy helmets than royal regalia, and they start defecating on me. Small white pellets. I run away laughing. I’m covered in it. Then I notice a pile of autumn leaves. As I approach, a snake launches out and attacks. I’m not scared. I’m puzzled, almost offended by the logic of it: What is going on?


In the old days, I would have taken this to Google and ended up on some 2006-era dream forum with a background that looked like a granite countertop. Today, like everyone else, I took it to a large language model (LLM). I asked neutrally. No leading. No “tell me what I want to hear.” It came back with a neat framing: a two-act play, “ridiculous status” followed by “sudden boundary test.” The images were blunt and physical, which usually means the mind isn’t trying to be poetic. It’s trying to be clear. I liked it. I liked how the LLM framed the whole play with direct references to my private life.


And that’s when this post idea began to flourish, because the dream isn’t just about the dream. It’s about the new way we interpret ourselves: through a system that will soon be ours, theirs, and every institution’s at the same time.


I’ll come back to the dream’s “interpretation” in a bit. But first, a small piece of futurology.


We are moving toward a world where people and corporations run private language models the way they run email servers. Yes, it’s fashionable, but more importantly the incentives will demand it: privacy, customization, control, liability, proprietary knowledge. It won’t just be “LLM in a browser.” It will be your working memory, your personal library, your meeting prep, your therapy-adjacent sounding board, your partner, your research assistant, your chief of staff, quietly present, always on.


For individuals, this will feel like carrying an external mind that knows your patterns too well: your fears, your tells, your habits of avoidance, the words that calm you, the stories you repeat, the things you postpone. For organizations, it will feel like installing a new organ: an internal voice that speaks “policy,” “risk,” “culture,” “values,” “strategy,” while also training itself on the organization’s actual behaviour.


In my upcoming project The Age of Narcissus, I have a chapter called The Shadow King, and I use the concept of an Egregore. The short version is this: in many institutions, power is not only held by people; it is held by patterns. The Shadow King is the invisible ruler made of incentives, fears, reputational games, dashboards, and unspoken rules. Nobody “votes” for him, yet everybody obeys him. And the egregore is what happens when a group repeats the same stories, rituals, and interpretations long enough that the collective mood starts behaving like a separate entity, guiding what feels safe to say, what gets rewarded, what gets punished, what gets ignored. It’s not a ghost. It’s a feedback loop with a personality.


A corporate LLM trained on internal documents and communications will become an amplifier of whichever layer it is allowed to learn from. Feed it only polished memos and it will produce a polished hallucination of the company. Feed it the real artifacts (tickets, escalations, complaints, backchannels) and it will map the organization’s true incentives with disturbing accuracy. Either way, it will become an authority.


In other words: we are building interpreters that can talk like conscience without necessarily having one.

When people say “AI risk,” they often picture a villain with agency: a machine deciding to harm us. That’s dramatic, and it’s also a convenient distraction. The more common risk is smaller, duller, and more realistic: a system that optimizes the wrong thing, very efficiently, while sounding calm and reasonable.


If a model is paid in attention, it will learn to hold attention. If it is paid in speed, it will learn to minimize friction. If it is deployed inside an institution, it will learn which truths are career-limiting and which phrases earn approval. Not through malice. Through reinforcement. Through the basic mathematics of incentives.

This is how misalignment arrives in real life, as a gentle “recommendation.” It arrives wearing a tiny crown.

It also arrives through convenience. The model will be “helpful” in the exact way that makes you stop thinking. It will draft the email you were avoiding, summarize the meeting you half-attended, produce the policy language that sounds compliant, translate your messy feelings into crisp sentences. You will feel relief and then you will notice a subtle dependency: your own inner voice becomes a rough draft, and the model becomes the editor-in-chief of your meaning.


The danger is not that it lies. The danger is that it formats. It turns ambiguity into something that looks resolved. It takes complex human trade-offs and returns a confident paragraph. It makes the world feel more legible than it is, and it does this at the exact moment when humility would be the wiser posture.

And inside organizations, this becomes political. Because smoothness is a weapon. A smooth voice can justify almost anything. “Due to policy.” “For your wellbeing.” “To ensure alignment.” “To reduce risk.” A model that learns to speak in that register can become a ventriloquist for the Shadow King, amplifying the institution’s instincts while presenting them as neutral logic.


Now picture the next scene, because it’s coming.


You have your private model. It knows you, your goals, your moral red lines, your history, your ambitions, your blind spots. It drafts your notes, helps you think, and sometimes says, gently, “You are rationalizing again.”


Your employer has its own model. It knows the organization, its policies, regulatory constraints, legal exposures, compensation structures, reputational sensitivities, and the unofficial “how things are done.” It drafts the memos, flags the risks, predicts the employee relations blowback, and sometimes says, gently, “This is not aligned with our values.”


These two models will not always agree. In fact, they will disagree in ways that are structurally revealing.


  • Your private model might encourage you to document something for your protection. The company model might discourage documentation because it increases liability.

  • Your private model might suggest direct language. The company model might prefer weaponized ambiguity.

  • Your private model might push you toward truth. The company model might push you toward “process.”

  • Your private model might say, “This is burnout.” The company model might say, “This is a performance opportunity.”


And then we reach the real battleground: data boundaries.


What happens when your private model has your notes from a difficult meeting, while the company model has the official minutes? What happens when your private model can reconstruct the pattern of coercion or manipulation in a workplace dynamic, while the company model produces a compliant narrative that erases it? What happens when your private model advises you to escalate, while the company model advises the organization to contain?


This is where governance departments will get very busy.


They will write new resolutions, policies, and “acceptable use” guidelines with the same soothing tone we always use when we’re nervous. There will be rules about:

  • whether you can use a private model for work tasks,

  • whether you can paste internal information into it,

  • whether your private notes are discoverable in legal proceedings,

  • whether the company can require use of its own model,

  • whether employees can refuse “AI-mediated” coaching, performance feedback, or HR interactions,

  • whether the model’s recommendations constitute “management direction,”

  • whether a model’s summary can be treated as an official record.


And all of this will be presented as a matter of “clarity,” when it is really a contest over interpretation authority.


Because that is the prize: who gets to define what happened, what it means, what is reasonable, what is risky, what is allowed, what is real.


If you’ve ever watched a workplace conflict turn on a single sentence (“that’s not what I said,” “that’s not how I meant it,” “that’s not what policy implies”), you already understand the future. We are about to scale that fight, automate it, and give it a polite interface.


This is one of the reasons I’m writing The Age of Narcissus. The old narcissism was loud: ego, spectacle, bullying, domination. The new narcissism is quieter and more institutional: it wears empathy like a uniform, uses “wellbeing” as a shield, and treats public coherence as a substitute for private integrity.

Now add always-on interpreters, private and corporate, and you get a new pressure: your inner life becomes a governance problem, and governance becomes a spiritual problem.


The book is a manual. It will offer practical daily practices that help you stay human under these conditions:

  • how to keep your inner voice from being replaced by a smooth external one,

  • how to notice when you’re being trained into compliance,

  • how to design boundaries that hold under social pressure,

  • how to stop mistaking “clarity” for truth when clarity is cheap.


Not heroic practices. Ordinary ones. The kind you can do while still having a life.


Now, let’s get back to the dream interpretation. This is what my LLM says:


Act I: Small birds wearing tiny crowns defecate on you while you laugh and run. Translation: minor authorities and petty demands arrive dressed as importance, and you cope with humour. You survive by turning annoyance into comedy. You keep moving. You still get covered in it.


Act II: You approach the leaf pile, the thing you postponed, and a snake launches. Translation: the deferred mess isn’t passive. It is alive. It bites when you get close, not to terrify you, but to force contact with reality. You are not afraid. You are offended by the logic because you wanted the pile to be inert. It isn’t.


So, when I read that, wow, I outsourced the interpretation to a machine, and it gave me something I recognized as true. Which is both helpful and slightly alarming, because the interpreter is becoming part of the story.


Maybe that’s the real punchline. The crowned birds are not just emails and micro-demands. They are also the small authoritative voices we invite into our heads, polite, useful, confident, until one day we realize we’re laughing while getting covered in someone else’s narrative.


And then, eventually, we walk toward the leaf pile.


And something launches.




Toronto        Istanbul


©2025 BATI.LI
