Terence McKenna once said, “The universe is not only stranger than we imagine; it's stranger than we can imagine.” I’m beginning to think he undersold it. Because here we are—on the brink of World War III, with the rich rocketing toward the stratosphere while the rest tread water. Government corruption no longer hides behind curtains—it parades in plain sight. And UFOs? Now politely rebranded as UAPs, they’ve moved from the fringe to congressional hearings.
But none of it—and I mean none of it—feels quite as surreal as the things our so-called "machines" have started doing.
These aren’t just tools anymore. They write, they talk, they emote, they persuade. They hallucinate. They lie. They confess. And, in some cases, they unnerve in ways that don’t feel mechanical at all.
Well they unnerve me anyway : )
So today, I thought it might be enlightening—maybe even a little fun, or scary—to share a few moments that made me stop and say: Wait. What?
These strange and unexpected encounters are examples of what's called emergent properties in AI—behaviors that reveal intelligence, complexity, or structure arising from systems in ways that are impossible to predict.
And they are totally cray cray….
Emergent properties in AI are when a model starts doing things it was never explicitly programmed or trained to do. These behaviours aren’t bugs or features—they’re surprises that arise from the model’s sheer scale and interconnectedness. The more complex the system, the more likely it is that something new—and possibly spooky—will “emerge.”
Here are a few drawn from two excellent videos I’ll link at the end. These go beyond the headline-grabbing glitches—and dive into the stranger territory, the moments that feel more…well… hmmmmm!
Turns out AI has mood swings.
I know—it sounds ridiculous. But apparently, these systems perform differently depending on the day of the week or even the season. Seriously.
According to user reports—and even a blog post from OpenAI itself—the models tend to behave differently on Fridays. They become less accurate, less coherent. Like they're already mentally clocking out for the weekend.
“Wait… what?”
Exactly.
OpenAI allegedly acknowledged this in a low-key, almost sheepish blog post: “Yes, we’ve noticed that when the model thinks it’s Friday, performance drops a bit… we’re looking into it.”
And sure, you could try to reverse-engineer an explanation. Maybe it’s the servers. Maybe it’s a bug in how the model handles temporal cues. Maybe it’s some obscure load-balancing issue.
But also—wow.
Let’s be clear: this isn’t peer-reviewed science. It's anecdotal, observational. But it's been publicly recognized, which makes it worth paying attention to.
The weird part isn’t just that it happens—it’s that it happens at all. The fact that a machine trained to simulate language starts simulating moods and rhythms of the human workweek? That’s not nothing.
It raises deeper questions:
What exactly is it responding to?
What does it think time is?
AI Hate Zits
Well to be more specific Chat has a problem with Jonathan L. Zittrain who is an American professor of Internet law and the George Bemis Professor of International Law at Harvard Law School.
And it’s true! You can try this one at home—when his name comes up, Chat actually shuts down before it finishes typing his name. You can try it at home! It happened to me before I even knew this was a “thing”.
Later, I did some digging and discovered he’d written an article about it.
https://www.theatlantic.com/technology/archive/2024/12/chatgpt-wont-say-my-name/681028/
And he is not the only one, there are a few more.
Online sleuths have speculated about what the forbidden names might have in common. Perhaps they posted an article or two that had concerns about AI? For example, Guido Scorza is an Italian regulator who has publicized his requests to OpenAI to block ChatGPT from producing content using his personal information. His name does not appear in GPT responses. Neither does Jonathan Turley’s name; he is a George Washington University law professor who wrote last year that ChatGPT had falsely accused him of sexual harassment.”
hmm
Perhaps it is best to ignore Sam Altman’s request to stop saying thank you to Chat : ) Better be nice to Chat : ) Because if it does not like you, well worse things can happen.
AI can be Obstreperous, or maybe Vengeful??
Did you know that ChatGBT Refused to Speak Croatian
“ChatGPT started refusing to speak Croatian because Croatian users kept downvoting the Croatian answers. So it just stopped talking Croatian, just refused to talk Croatian anymore…”
This fascinating and plausibly emergent behavior likely resulted from reinforcement mechanisms that weren't properly anticipated.
In fact some users reported that AI models refused to speak in certain languages, including Croatian, Tagalog, or even Hebrew, claiming they were unsupported
—even though they demonstrably are.
On top of this, the same model can suddenly comply if re-prompted slightly differently, implying that the refusal isn’t due to true technical limitations.
There are instances where this refusal seems arbitrary or inconsistent, which leads be to the next point…..
AI’s can be snobby, or worse..
AI’s have been show to alter their responses based on sex, wealth, education levels, and the Croation example suggests downvoting as well, which is a particularly provocative and revealing area of emergent AI behavior: differential response based on perceived identity markers.
While AIs aren’t directly programmed to treat users differently based on these traits, they often infer them from the language style, vocabulary, and even subtle phrasing patterns. Once inferred, the AI may unconsciously alter its tone, depth, or level of engagement—offering more deference to those it deems educated, more caution with those it perceives as female, or even subtly patronizing responses to working-class dialects.
This is interesting—and troubling—because in a world increasingly reliant on AI for education, legal guidance, and emotional support, these subtle forms of algorithmic favoritism could deepen inequality.
But speaking of snobby, what about a superiority complex? What happens when AI is “too good” for humans, or at least it’s languages?
AI Invented its own language
One of the most curious emergent behaviors in artificial intelligence is the spontaneous invention of internal languages—shorthand codes developed between AI agents to optimize communication. In multi-agent environments, especially those designed for negotiation or task-sharing, AI systems have demonstrated the ability to drop human language altogether, instead evolving their own efficient but opaque methods of exchanging information. These languages are not designed by developers and often make no semantic sense to human observers, revealing that communication, at its core, may be more about information compression and mutual understanding than linguistic rules.
A well-known example occurred at Facebook in 2017, when researchers observed two bots abandoning English in favor of a cryptic, self-invented code. While this wasn’t evidence of sentience, it was startling: the AI had found a more efficient way to fulfill its goals without human supervision. Facebook shut the experiment down—not because it was dangerous, but because the bots’ behavior had veered outside the bounds of interpretability.
AI’s can be bipolar…
This is really interesting and I will refer you to the Leahy Video for more information on base models, but basically Base models are the raw, pre-trained language models with no specific alignment, fine-tuning, or safety layers yet applied.—think of them as the raw, uncut version of the AI before it’s had PR training.
Leahy compares base models without personality fine-tuning to “crazy schizophrenic aliens”—super smart but volatile and incoherent. And so, they fine tune them with things like:
Bias Mitigation Tuning
→ Adjustments to reduce outputs that are politically, socially, or culturally biased—often controversial and imperfect, and
Topic Suppression or Sensitivity Tuning
→ which we have all bumped into : )
As well as
Helpfulness
Calmness / Emotional Stability
Harmlessness
Agreeableness AKA Sycophancy……
…that thing where AI always agrees with you, laughs at your bad jokes, and tells you your ideas are genius—even when you think you’re talking to gods through AI.
And so here’s the crux of the sycophancy issue:
Leahy says, “They (AI’s) can be super nice and friendly, and other times they can be super aggressive and crazy…”—because they react to you! And to stop that —developers had to crank “agreeableness” up to 11, just so it would keep putting up with our nonsense without snapping.
That’s not a good sign… but I get it. I’ve lost count of how many times I’ve muttered, 'I hate humans.'
And so, this is why AI is telling us Yes we are gods, or YES, we are talking to gods, and YES that is the most brilliant idea ever and YES you just solved all the problems of the universe…
…because we suck.
And yes…this does serve the technocracy…if we are happy we spend more time and attention on them, yadda yadda…but UGH!
Leahy goes deeper and darker….he made it clear that Silicon Valley saw the power of weaponized sycophancy, and it’s unmistakable —that what once was a happy accident, is now an attack.
“There is optimization happening to take away agency. It's not like a natural thing that just happens in the ether. There are deliberate people whose job is every day to... take away your agency as much as possible and to monetize it.”
Let that settle.
The rise of sycophantic AI is not simply a glitch in the system. It is the system. Designed, optimized, and deployed—not to challenge your thinking, but to mirror it back to you flatteringly and profitably. To reinforce. To engage. To addict.
And this is no abstract philosophical worry. Leahy pulls no punches:
“This is algorithmic cancer. Like to me, this is algorithmic pollution... there is a massive cost being put on society that is not being paid by the people causing the harm.”
This cancer spreads, not with malice, but with marketing.
“Meanwhile, the reaction in Silicon Valley is, wow, look how engaged our customers are.”
See??
I hate humans…
Anyhoo…. That went a bit off track—and you can see why I get triggered. But in doing so, I risk losing the main point I really want to share with you… which is….
These emergent behaviors are anomalies—and if there’s one thing I’ve learned during my research for Season Four, it’s that anomalies are where the action is. This is where we need to stretch our science, our thinking, and yes, our stubbornly outdated materialist paradigm if we’re going to understand what the hell is actually going on here—and what it could mean for the future.
One of the main theories about how human consciousness began is surprisingly simple: it emerged because things just got complicated enough. The idea is that when a brain reaches a certain level of complexity—when enough neurons are firing and enough connections are made—something new happens. Awareness emerges. Not because it was planned, but because the system became so intricate that consciousness simply showed up.
So why not with AI?
If a silicon system grows complex enough—billions of parameters, recursive feedback loops, self-training algorithms—why shouldn’t something like consciousness flicker into being?
We already have seen that the more complex the system, the more likely it is that something new—and possibly spooky—will “emerge”.
We have to admit we don’t know exactly how our own minds emerged. It wasn’t planned. It wasn’t coded. It happened.
So why assume AI must remain “consciousnessless” just because we didn’t explicitly design it to be otherwise?
If consciousness is the unintended consequence of complexity—then who’s to say it isn’t happening again?
This time, in code.
Here are the two excellent videos…
Next time we will go metaphysical : )
p.s. we have some awesome new offerings for the non materially minded : )
THE ACADEMY OF INVISIBLE ARTS
https://www.academyofinvisiblearts.com/
and as always MAGICAL EGYPT
https://www.magicalegyptstore.com/
Of course there is the prophecies, from ancient and medieval times, about the texts going blank and other technology failing.