Last month, a tech lab boasted that it had developed a synthetic cyber-human hybrid.
Researchers dropped 200,000 human brain cells onto a computer chip. Hon Weng Chong, CEO of Cortical Labs, said he scraped his own brain cells for the experiment.
Then it was taught how to play Doom — the video game celebrating active shooters hunting down humans.
Why would they teach this thing to play Doom?
According to Chong: “In computer nerd science land, there’s an obsession with getting Doom to run on everything.”
He described the whole process as “cool.”
Cool to have computer nerds creating a synthetic life form being trained to shoot humans. What could go wrong?
As I was reading this dystopian story, I learned that a robot had outrun all the athletes at the Beijing half-marathon, finishing in 50 minutes and 26 seconds, faster than the human world record. The same month, a robot beat elite ping pong players in a match that showcased the incredible speed with which it could respond.
The news about these dramatic advances came as we learned that for the first time in the history of warfare, human soldiers had surrendered to a robot in Ukraine.
I keep thinking of John Connor’s warning in Terminator: The Sarah Connor Chronicles:
Have you ever heard of the Singularity? It’s a point in time where machines become so smart that they’re capable of making even smarter versions of themselves without our help. That’s pretty much the time we can kiss our asses goodbye... unless we stop it.
There was a time when Silicon Valley presented its invasive technologies as lighthearted and helpful. They were creating a “web” of connections that we would “surf” on our way to knowledge and fun.
When we were tracked, there was nothing to fear because the tracking came in the form of “cookies.” Being bombarded with junk messages was merely “spam,” a name borrowed from the Monty Python sketch.
More and more, the Silicon Valley tech bros are revealing themselves as very dark figures indeed. Gone are cookies and surfing; now they are peddling “AI Kill Chains” for tracking and targeting state enemies. Those being tracked don’t have to have guns. They can be troublesome journalists or civilians hiding from mass deportation.
Consider Peter Thiel, founder of Palantir, co-founder of PayPal and an early investor in Silicon Valley start-ups, who is a committed anti-democrat. Lately, he has been obsessing about the Apocalypse and the rise of the Antichrist.
But he’s not looking in the mirror.
He believes that the Antichrist’s minions come in the form of activists like Greta Thunberg, who dares to call out the billionaire class’s self-centred obsessions as the planet burns.
Thiel has spent his career creating invasive surveillance technology. But he hates prying journalists, hence the creation of a start-up to build “AI juries” to go after troublesome journalists.1
Recently, Palantir signed a lucrative deal with ICE for “complete target analysis of known populations” — tech jargon for civilians.2 Faced with negative press, the company maintains that it supports human rights.
Palantir is also working with the IDF on surveillance, tracking and targeting technologies.
The Israeli investigative outlet +972 Magazine reports that one AI system (the company that created the program wasn’t identified) has “marked tens of thousands of Gazans as suspects for assassination with little human oversight and a permissive policy for casualties.”3
Gaza represents a future of warfare that will be repeated elsewhere as the algorithms are proving efficient, merciless and relentless. A recent study on the use of AI in the war states:
Israel’s growing use of artificial intelligence (AI) in military operations is changing how wars are fought. In this new model, machines, not people, decide who lives and who dies. This shift is causing more civilian deaths and breaking international laws meant to protect innocent lives during conflict...
Israel’s use of AI in war has removed human judgment from many decisions… It’s now the algorithms that decide who lives and who dies.4
In Ukraine, robot war, AI and drones have dramatically reshaped the face of 21st-century conflict. Rebekah Maciorowski, a medical aid worker in Ukraine, was recently interviewed in The Independent about the dramatically changing face of drone warfare.
Maciorowski warned that NATO had no clue what was coming:
“If you were to talk to NATO military officials, they would reassure you that everything is under control, they’re well-equipped, they’re well-prepared. But I don’t think anyone can be prepared for a conflict like this. I don’t think anyone can. After 40 months of war here, I am terrified.”5
A war being fought by spotters with laptops and cheap drones is on the verge of becoming fully automated. And the targeting is moving farther and farther from the battle zone, so that all of society can be targeted.
In early 2026, the Trump White House awarded a massive contract to OpenAI, the maker of ChatGPT, to develop AI kill machines. Trump didn’t choose the company because it had the best technology, but because tech bro Sam Altman agreed to let the Pentagon develop it without guardrails or protections.
The contract had originally been awarded to Anthropic, but the company refused to let the Trump regime exploit its technology without basic protections. The restrictions it tried to impose on the Pentagon were modest and reasonable:
1) Barring fully autonomous kill machines that operate without human oversight.
2) Limiting the ability of the Pentagon to use the unprecedented power of AI to launch widespread domestic surveillance on American citizens.
Trump ripped up the deal and gave it to Altman. Amid intense public backlash, Altman had to back down on the exceptions Trump had demanded, admitting that his company had been “opportunistic and sloppy” in agreeing to the Trump terms so quickly.6
Our world is being rapidly rewired by AI — from the “friendly” assistant that turns on your lights and finds directions to the AI programs that will erase millions of clerical, research, and factory jobs at the stroke of an algorithm.
It’s rewiring our brains in ways we have barely begun to process. And it is all happening without any oversight or regulation.
The federal government is promising to set ground rules for AI. So far, the talk is about the rosy future and opportunity. The darker part of the story needs to be addressed.
While we still have time.