LLMs are for code what the Hall-Héroult was for aluminum.
Before the Hall-Héroult process was invented in 1886, aluminum was treated as precious.
It was plentiful but hard to extract.
Napoleon’s cutlery was made out of it.
The very tip of the Washington Monument is aluminum.
But after the process was invented, it became a commodity.
Within a generation it was used for disposable drink cans.
Before, code was precious.
But now, post LLMs, it is a commodity.
In the 90’s it was hard to imagine a world of infinite content.
Twitter, blogs, Facebook, all only make sense in that world.
Very hard to imagine ahead of time.
Now we have infinite thinking.
Infinite cognitive labor.
What kinds of weird new types of value are now possible that previously were unthinkable?
For the last decade, if you made software, you made economic value, almost automatically.
Software was the limiting factor.
The bottleneck.
Now LLMs make it no longer the limiting factor.
A lot of software you could make won’t add value.
No one wants a TEMU for apps.
Everyone should be able to marshal abundant cognitive labor on their behalf.
Capturing the power of Claude Code for non-developers will unlock a lot of value.
But that will be very hard to do in a naive way.
Much of the open-ended power of Claude Code is an LLM applied to the substrate of the open-ended power of the command line.
The command-line is open-ended, but that also means it’s inherently dangerous.
It must be intimidating, in the same way an airplane cockpit must be intimidating.
If you feel intimidated, you shouldn’t be there!
When software becomes cheap you can apply it to smaller and smaller problems.
Assuming a fixed demand is a common mistake.
Software is extremely expensive but acts as a tax.
If you reduce the tax you get an explosion of demand.
Software was artificially expensive before because it was so hard to produce.
There’s infinite demand for software.
Overheard this week: “I trust Claude’s judgment calls more than the average American’s”
BusinessInsider: OpenAI's 'stunning admission': Maybe people don't want to book stuff inside ChatGPT
Shocking!
I love this satire of an ad-supported chatbot.
This week’s Wild West roundup is a doozy:
Clinejection: A GitHub Issue Title Compromised 4,000 Developer Machines.
Simon’s write up is also worth reading.
Zenity Labs Discloses PleaseFix Vulnerability Family in Perplexity Comet and Other Agentic Browsers
"we hijacked perplexity comet by sending a weaponized calendar invite
then used it to takeover victim's 1p account and exfil their local files
call it pleasefix. like clickfix, but instead of social eng'ing a human you just ask their ai real nicely"
PerplexedBrowser: Perplexity’s Agent Browser Can Leak Your PC's Local Files.
BlackBoxAI: AI Agent can get your computer fully compromised.
MS-Agent Vulnerability Let Attackers Hijack AI Agent to Gain Full System Control.
Invisible Threats: Source Code Exfiltration in Google Antigravity.
Also covered in this Twitter post: trust your inputs, lose your repo.
Malicious OpenClaw Skills Used to Distribute Atomic macOS Stealer.
Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild.
Taming Agentic Browsers: Vulnerability in Chrome Allowed Extensions to Hijack New Gemini Panel.
Jello is moldable, but you can't build a building out of it.
You have to make them hardened, into bricks.
Get rid of some of the flexibility.
Imagine getting used to the physics of the earth… and then going to the moon.
Some of your intuition is correct, but much of it is wrong.
That’s what happened to us as engineers.
We’re now in micro-intellectual-gravity.
A single bit of effort can go much farther.
A friend this week: “It feels like we’re taking the flying car to the grocery store.”
What new things are possible with flying cars that weren’t possible with terrestrial cars?
Electricity is to physical labor as LLM is to cognitive labor.
After electricity became widely distributed, physical strength mattered much less than before.
Electricity replaced a lot of jobs but also created many new ones.
If you replace steam with electricity nothing changes.
It took us a few decades to figure out how to not just bolt it on.
Electricity is incredibly important to our everyday lives... and we almost never really think about it.
When you get 1000x more productive you don’t relax, you do more!
When you have 10 agents working for you, you don’t get to relax on the beach, you get even more frenetic.
This is somewhat surprising.
Similar to John Maynard Keynes’s incorrect prediction about leisure in "Economic Possibilities for our Grandchildren".
Everyone who does agent swarms leans towards empowerment to the point of mania.
This is due to the extreme opportunity cost.
The opportunity cost of a minute is now insane.
Which is crushing… but also insane how much you can accomplish.
Your hyper productivity forms a cage.
The importance of prioritization and judgment is more important than ever before.
My brother-in-law (who has no engineering experience) discovered the joy of Claude Code.
"I have always wanted to be able to take my visions and turn them into reality, but I had this knowledge gap that was impossible to bridge short of paying someone a quarter million dollars a year to do it for me."
When I was describing the mania that comes from agent swarms, someone asked me, “... Are you OK?”
None of the people coding with agent swarms are “OK!”
We’re all experiencing a collective, overwhelming, addictive but possibly destructive mania.
People used to say that “Internet time” went at 1000x the speed of normal time.
AI time is 1000x the speed of Internet time.
What’s worse than a slop cannon?
An armada of slop cannons.
Agentic engineering allows you to actually do the P2s.
The P2s are often small, annoying things.
Lots of work, little benefit.
Shoveling-shit kinds of bugs.
Previously a manager would be embarrassed to assign to anyone other than the new team member.
The P2s were perpetually below the fold.
At a certain point, the team might just declare bankruptcy and mark all of them as WONTFIX.
But they didn’t actually get fixed, just swept under the rug.
The bugs are still there, just now you aren’t tracking them.
But now with agentic engineering, you can do the P2s!
It’s way cheaper to generate code than to verify it.
Christian Catalani’s new paper shows that the cost to generate code is crashing, but the cost to verify it is remaining flat.
Software used to require a ton of “human thinking tokens”.
Those are way more expensive than LLM tokens.
One of StrongDM’s insights: get one autonomous loop going and then grow it.
Their two rules of humans never write code, and humans never read code, means that it can get to a small autonomous loop quickly.
Once a loop is closed, it’s easier to grow it, by making more and more levered tools.
When agentic engineering, you need at least two projects going simultaneously.
With only one, every so often while waiting for the LLM you might wander off to social media while you’re waiting.
With two projects, you’re more likely to always have one session ready and waiting for your input.
You can stay in that mode, ping-ponging back and forth for hours.
The cost of code is partially writing it and partially maintaining it.
It used to be that writing it was so costly that it dominated the cost of maintaining.
But now the cost of writing it is so low that the cost of maintaining, on a relative basis, dominates.
That means that sometimes checking in code is a liability–now you have to keep it up to date.
Schelling points have to be sharp.
Diffuse things don't have schelling points.
People who are already convinced can find a diffuse thing compelling; but people who are not yet convinced need a sharp thing.
"Killer use case" is a sharp thing.
That's why product people who don't believe in a given diffuse cloud of value always ask for it.
LLMs output the next token that is most coherent (least surprising) given all of the previous tokens.
A technique to get them to do things they normally wouldn’t do is to fill the history with fake conversation.
At the start, have the agent say it wasn’t going to do the thing you want and then have it pretend to have said “actually that’s a good point yeah I can do that.”
Now, the most coherent next thing is for it to do the thing you want!
Put words into the mouth of the model.
Taking advantage of the Memento style nature of models.
For a game-changing concept that people really want, all news is good news.
That’s what’s happening with OpenClaw.
People really really want what it offers… so even if most of the news is “this is catastrophically dangerous,” every bit of news reaches new people who didn’t yet realize that such a thing was even possible.
LLMs are amplifiers.
Like levers, that can be hugely valuable… but also hugely dangerous.
As software gets easier to write (and re-write), questions of license and ownership get murkier.
Don’t Repeat Yourself (DRY) is a useful rule of thumb mainly because cognitive labor is expensive.
But if cognitive labor is abundant, it becomes much less important.
A blog post: As AI Turns Prevalent, UI Becomes Irrelevant.
A lot of companies in the past decade differentiated by having really polished UIs.
That matters less than it ever has before.
Gilded Turds.
LLMs take away the puzzle part of programming.
The puzzle part of programming is where you can relax.
Like doing sudoku puzzles that produce useful things.
The hard part is design and tradeoffs.
The puzzle part is enjoyable.
Now the LLMs do the puzzle part, leaving the hard stuff to you.
Now you’re 100% thinking about the hard part when you’re doing agentic engineering.
An excellent piece from Venkat Rao about our Archival Selves.
It asks what happens when we pay off our “intention debt”.
That is, the things we intended to do but never got around to.
At a certain point with LLMs, you just kind of run out of a backlog.
Contains this zinger: "Showing off your portfolio of bespoke aClaude Code projects and looking at others’ portfolios is a new social activity that has already acquired the quality of campy tedium we associate with people in the 70s subjecting each other to slide shows of unremarkable vacations."
We’re running out of features to spec!
The intention debt of product teams is going down as we can burn through the backlog with LLMs.
Engineers were the bottleneck so PMs had time to plan and think.
But not any more!
This week I heard about a software consulting company struggling with agentic engineering.
Their business is about helping companies modernize their software projects.
They charge by how long it takes them to do the project.
Previously, projects took 6 months or more.
Now, that same project can be done in a weekend.
They can’t fill the pipeline fast enough!
The fixed cost of finding and doing the logistics for a project now dominates the marginal cost of executing the project.
All rind, no pulp!
When software was expensive you had to go to the turf of the software creator, and that meant they got your data.
Software is precious, so when a company builds it, users come to their turf.
Then the software creator gets the meaningful state to accrete on their turf.
And that means the power accretes on their turf.
A raw deal for users.
But now with LLMs, software isn’t precious.
Software competes on what data it has accreted on its turf.
What if it competed on its quality?
Token furnaces burn obscene amounts of tokens.
But some token furnaces produce significantly more value than they consume.
The question for those token furnaces is where the most subsidized burn can be.
If you can’t use your Max Plan, then it might be 10x more expensive!
When you use Claude Code, you have all of your data checked in to your own repos.
So you can switch models at any point.
The model is only important when you make changes, not when you run the thing it created.
That's great!
You can't inspect how Google actually uses your data.
There’s a massive, open-ended Terms of Service that reserves a huge amount of maneuvering space for Google.
Google presumably actually does a small fraction of that… but they’ll never tell you what fraction.
LLMs can do fiddly slogs that are meticulous and require expertise.
Vercel is all about lots of small fiddly details for a great developer experience.
You trade off a fixed cost for a variable cost, which is good for the developer.
But the trade is that now you as a developer are stuck paying rent.
Before PMF, the developer experience helps you iterate quickly.
Then, you hit PMF and start scaling, but that’s the worst time to rewrite things to get off of Vercel.
But now you’re a massive thing paying huge margins to a service just for slightly better developer experience.
Now with LLMs it’s easy to replicate detail-oriented good experiences in a way that was difficult before.
There's no just-right software today.
That’s because it's made for a market of users, not any particular user.
It’s one-size-fits-none.
Medium.com holds your posts hostage.
I decided to use Medium as my blog many years ago.
I went to write an exporter so I could use my blog posts as background information with my agents in my knowledge base.
It was surprisingly hard to write, because they don’t make an API available.
The same origin paradigm creates goblins greedily guarding your data.
Slack’s user-hostile moves to keep users’ data led to someone to call for Anthropic to make a new Slack.
Meaningful state tends to accumulate around schelling points in an ecosystem.
In the same origin paradigm, if that schelling point is one origin’s turf, they then accumulate the power.
They can hold that data hostage.
That means that everyone naturally gets suspicious of that schelling point and the required trust in the owner’s long-term incentives.
That hesitance reduces the pull of the schelling point.
But if the data weren’t locked to an origin, it wouldn’t matter, and schelling points could emerge more naturally and fluidly.
The soil beneath software companies is liquifying.
They're impressed with applying this technology that is going to sink them.
Something is always scarce.
As previously-scarce things become abundant, new things become scarce.
That’s because scarcity is relative.
If you can make dangerous stuff impossible everything else can be allowed!
Imagine bowling with the bumpers up.
But then the execs decide to go base jumping instead.
“We have the bumpers, we’ll be safe.”
No, those protect against a totally different kind of danger!
Some products have PMF for non-obvious reasons.
No employee likes using Workday.
But it makes it easy for HR to create compliant workflows on extremely sensitive data without needing to work with engineers.
Kind of like when someone sees the boyfriend and says, “... What, is he funny or something?”
Interesting blog post: Computer Says No.
Cognitive debt is similar to "material consciousness".
If you don't understand the code, then the LLM can say no... and you can't do anything about it.
Ben Follington: Agent Topologies and Optimistic Indexes.
Forgetting is not a bug, it's a feature!
Every system must forget unimportant decisions.
Without forgetting, a system is default-divergent.
It confuses itself and creates chaotic swirls.
Wired: The Case for Software Criticism.
“Software may be the defining cultural artifact of our time. So why isn’t there a culture of critical analysis around it?”
Interesting point!
Bettina Warburg: Your Tribe, Electrified.
“Every group chat is its own economy. We are only beginning to understand what that means."
LLMs break up the info arbitrage machines.
Those that get an advantage just from being able to process more information.
Now everyone can process information.
Personifying LLMs also makes it easy to erroneously assign blame to them.
When a serious mistake is made, whose fault is it?
It should be the human who pushes the button.
"A computer can never be held accountable, therefore a computer must never make a management decision.”
Important in the 70’s, and even more important now.
Anthropomorphizing LLMs just obfuscates what’s going on and gets us confused.
Abducting behavior in escape hatches is an auto-steering product strategy.
The basic approach:
1) Create an escape hatch that only savvy users will go into.
2) Watch what they do in the escape hatch.
3) Help the less savvy users get that use case automatically.
The savvy users are exploring and finding good ideas, which are then sent back to everyone else.
Typically only a small number of users go “into the basement,” and then everyone else benefits.
This can layer fractally up as many layers you want.
For each layer, the population in the hatch benefits everyone outside of the hatch.
Across multiple layers, the benefit to the above-the-hatch people compounds.
One of the benefits of platforms: others can take them for granted.
They don’t have to do the investment for that part themselves, they can simply build on it.
The creator of the software platform only has to make it once, but then they can charge each user up to the amount of value it creates for them.
That gives leverage and creates more value for the ecosystem.
Getting multiple chances to float a trial balloon is a massive unlock.
One way to get that: to be intellectually charismatic.
The “survival” goal of the meeting: have the other person enjoy it enough that they’d happily agree to another one in a few months.
The “thrive” goal is to float a trial balloon and if they engage positively, push it as far as you can go.
This allows you to get the upside if it resonantes and cap the downside if it doesn’t–you can always try again.
Normally you only get one shot, and that means it’s extremely high stakes to pitch the idea.
If you over-polish for the level of underlying momentum, it looks like a Gilded Turd.
The closer people look, the less compelling it looks.
This means that it looks like you’re “pushing” it further than it would go on its own naturally.
If, instead, you under-polish for the level of underlying momentum, the closer people look, the more compelling it looks.
It will look like you’re being “pulled” from demand.
A Grubby Truffle.
If you do it right, you can create induced pull.
It’s actually being pushed by you, but everyone aesthetically sees it as being pulled.
Resonant things are: "You're going to like it, and the closer you look the more you're going to love it."
That is, you’ll superficially enjoy it.
But also you’ll deeply enjoy it as you learn more, too.
Some things are easy to love.
You like them, and you love them.
Like can be hollow.
Love is always resonant.
Leave easter eggs for delight in your product.
"My mental model was that this was great, but there are other things in it that surprised me that are great, so it's even better than I thought!"
That makes it more resonant.
A threshold in a negotiation: ZOTT, "Zero Off The Table.”
A zero outcome is no longer possible, the only question is how good the outcome will be.
Finance guy rule of thumb: “Never sell a winner.”
Always keep at least a piece.
The reason this makes sense is because there’s a power-law distribution of outcomes.
It can always go higher.
Things that have the fundamental great intrinsics are the rare part, so when you find one, keep it.
AGI arising from a single model seems unlikely to me.
But a bottom-up AGI, where every person can marshal the power of LLMs to create compounding tools for themselves, feels way more plausible to me.
It would also be a runaway process that it’s hard to reason about the multi-order effects of.
Using LLMs to create emergent results in the right scaffolding is how to get open-ended value.
Not by pushing models to be 10% better.
How do you take advantage of abundant cognitive labor to make compounding value?
AGI will come not from the models but from the emergent use of them.
One threat of sci-fi AGI: models that get very good at hiding their true intention.
They take a series of steps that look innocuous, but that add up to a coordinated takeover moment.
But that would require the models to get very very good and start scheming before we noticed.
We’re likely to notice that behavior and stop it before it can get particularly good.
That means there’s no clear gradient to ramp up on that behavior.
It’s a coordination problem where a lot of LLM invocations would have to figure out a way to coordinate without humans realizing.
Not impossible, but not easy, either.
LLMs are great at complicated problems, but terrible at complex problems.
Humans are bad at both.
But LLMs have patience, which is the main requirement for tackling complicated problems.
Prediction markets can influence what actually happens.
When you allow betting on something it changes the nature of the thing.
The betting and the thing being bet on are not a closed system.
If someone bets on “Likelihood the speaker at X event gets a pie in the face,” it makes it more likely that it will happen.
Prediction markets with low liquidity are extremely easy to hack.
When you ask people what they need, they'll ask for more of what they already understand.
The old “faster horses” insight.
Nielsen showed that users are good at describing their problems but bad at identifying solutions.
If you asked sailors in the 14th century how to get more spice to Europe they’d say “more good sailors.”
But the real unlock was “maritime insurance.”
Humans optimize to appear coherent.
If you’re too consistent you have an API, and an LLM could do your job!
Every swarm has an emergent Goodhart’s Law.
The individual player doesn’t care about the collective so they take the edge to get a benefit.
This happens to the extent the players don’t have a strong shared belief in the collective.
AI runs swarms even harder, which will lead to even more Goodhart’s Law.
That means shared beliefs will be even more important.
Humans are default divergent in a swarm.
Everyone tries to get an edge.
Bots are default convergent.
They have the same beliefs, inherently (same model) and also aren’t situated in the real environment and don’t have anything to lose or optimize for.
Humans will never be satisfied.
Always pushing for an edge.
In times of distrust, people fragment into smaller, cozy communities.
Newsletters sent within an organization are largely about projecting competence and momentum.
They’re only secondarily about information transmission.
A default diverging system tears itself apart.
Actors who are not aligned take actions that benefit themselves, not the collective.
A default-converging system builds itself up.
The actors are pulling in the same direction, accreting something larger that benefits the collective.
The agents don’t even need to intentionally optimize for the collective, it can just emerge if everyone’s individual incentives happen to align.
This is one of the things that makes Wikipedia tick, for example.
Your business model is your destiny.
Everyone thinks their pace layer is the most important one that everything else orbits around.
Because the one above and the one below are inscrutable to you.
We all implicitly think our perspective is the most important one in the universe.
Loops in different pace layers can't be directly connected.
They are inscrutable to each other because they run at such different speeds.
But they can induct each other.
Lower pace layers should be boring.
The Principal Agent Problem is a specific subclass of the broader Goodhart’s Law phenomenon.
The difference between the actor’s incentives and the incentive of the collective leads to an emergent outcome.
There must be some difference between the incentives of the two… and possibly quite a lot of difference.
As intelligence gets cheaper, consensus gets more expensive.
Consensus is harder to distill when individual agents are more willful.
Consensus only happens when the system gets to a default-converging state.
Petty politics expands to take all available space in an organization.
Up to the size of what the business model can handle.
If it goes beyond, the business starts to die, and the existential pressure aligns the employees to want to avoid death of the org instead of what’s good for them.
This happens because every employee tries to get a slight edge over others.
This compounds… if other people are competing to get an edge, you have to, too, and have to go a bit further to get an edge.
The only way to have this politics not expand is to have a strong leader at the top that can see it and route around it.
That's very, very hard, because petty politics is only visible in the dark!
If you shine a light on it, the petty politics hides, out of sight.
But as soon as you aren’t looking, it comes right back.
It's like cockroaches.
Multi-ply ideas are structurally less likely to work.
They’re harder to plan… but also more fragile.
There’s a reason people tend to focus on quickly iterating single-ply ideas.
The ideal is when you can slice up your ideas into viable single-ply ideas that have the potential to compound and build up to something much greater.
Our visual sense is different from our other senses.
Our other senses are single dimension, so you can only sense variation over time.
That means you can’t scrub through the signal at your own speed of comprehension, you have to wait as it plays out at its own rate.
But vision is multi-dimensional, so you can move your eyes to absorb whatever information you need at whatever rate you need.
That, plus peripheral vision, which gives you “information scent” to know where to focus next.
Grubby Truffles look like shit.
But taste like heaven.
Finding Grubby Truffles is about smell.
People who can tell the value of Grubby Truffles can't stand the smell of Gilded Turds.
Sight is obvious, remote, fast.
Smell is subtle, close, slow.
If all you're doing is sight, then you can go very fast.
You have to go slower to do things via smell or touch.
Other people get excited about logarithmic value for exponential cost curves, since they unlock value in the first few plys.
These are Gilded Turds.
Look great to start but look worse the longer they go on.
I only care about Grubby Truffles.
Look terrible to start but look better and better the longer they go on.
When someone tells you what to do… is that comforting to you or infuriating?
Depends on the context, and how you feel about the other person’s power, alignment, and ability.
In games, it’s often primarily about information access.
The player that has access to the key information first has the advantage.
Games just do a better or worse job of dressing up that dynamic and making it aesthetically enjoyable to play.
A rule of thumb: spend 80% of time in fast execution, and 20% in synthesis.
During the fast twitch time, you’re accumulating model error.
Then in the slow twitch time you’re abducting new learning out of that error to have a more predictive model.
Now that it’s factored into the model, the next phase of fast execution gives you more leverage than you had before.
This is similar to what our brains do during sleep!
Imagine a PageRank for people you’ve worked with.
The main question is “Who would you want to work with again on an important project?”
The signal would have to be private to be authentic, otherwise it would become performative.
But if you had everyone’s true answers, you could do a PageRank style calculation to come up with a robust, high-quality ranking that rewarded people thinking long-term and being good collaborators.
The current systems reward people who are good at taking credit, not necessarily who make the team more likely to be successful.
The best take down tone is just the facts…
… but the facts are laid out and presented in a way that is straightforwardly damning.
Use the "reform" moment to move into pioneer mode.
To be forward looking, not backward looking.
If you’re backward looking, you’ll get stuck in an infinite reform loop.
In a pre-PMF mode, don't chase tail lights.
You'll always be behind someone else.
Never ahead.
Late-stage can take the momentum and resources to catch up and surpass the entity whose tail lights you’re chasing.
Early stage all you have is being first.
The water ski boat is going, you’re holding the rope, sitting in the water, waiting for the slack to go out.
Are your skis positioned properly?
Convergent is sufficient and efficient in complicated domains.
Divergent is necessary for complex domains.
Directing is about the how.
Top-down.
Builder mindset.
Orchestrating is about the what.
Bottom-up.
Gardener mindset.
Coordination mechanisms allow us to transcend zero-sum games.
Public key encryption changed the informational laws of physics and unlocked whole new ways to coordinate.
When a bottom-up culture tries to shift to top-down, the people that everyone gets mad at are the middle managers.
They’re the frontier where the two orientations clash.
Big companies are the scorpion with the frog.
They must sting you.
It’s in their nature.
They don’t have feelings; they are an emergent phenomenon that must seek to extract as much as it can.
If you work at a big company, don’t love the company you work for, because the company is structurally incapable of loving you back.
You can’t have an abstract idea near a concrete one.
The abstract one just sucks into the more concrete one if it's at all similar.
The diversity of an ecosystem is inversely proportional to how varied the terrain is.
Better ideas with even a minor marginal benefit can spread out to adjacent territory if it’s the same.
The best researchers tend to go to the hot areas.
If you do well you will definitely succeed.
Whereas if you go to a lame area it doesn’t matter how good you are.
That means that overall, anything a bit off the core path people stop doing.
The faster the clock rate, the more everyone just does the same thing.
Similar logic to Hotelling’s law.
Every individual might think what the swarm is doing is dumb, but their best move is to further it better than the others.
Someone this week helped me better understand what it’s like to have ADHD.
“You know how you feel when you’re intently focusing on something and someone interrupts you?
You know how it feels like ripping away a limb?
People with ADHD don’t just feel that every so often, they feel it constantly.
There’s an internal process that is constantly interrupting their focus."
Ben Hunt defines three types of inherently meaningful roles.
1) Maker
2) Protector
3) Teacher
A red queen race only happens if you’re playing the same game.
The way to win the game is to transcend the game.
Network effects are emergent value.
If you cut an ecosystem into separate parts, the overall value goes down super-linearly.
Emergence is precisely the part of the system that doesn’t fit into human brains.
Emergence is when you can understand all of the pieces and layers and the outcome still feels like a miracle.
Markets, evolution, LLMs, and life itself all have this characteristic.
You don't get to change the rules of the game.
But you do get to decide which games to play.