Everyone today is figuring out how to use AI to create today's software, but faster.
That's just faster horses!
Infinite software will enable new types of software that weren't viable before.
What will the software equivalent of the car be?
It won’t look anything at all like a horse.
I think I’m addicted to vibe coding.
I do it literally every waking minute, to some degree.
It’s so productive that when I’m not doing it, I feel like I’m missing out.
Also, it has that “just five more minutes, I’ll give the perfect prompt that gets the result I want, then I’ll stop” quality that never ends.
I keep on producing things I’m proud to show off to others.
The most addictive things are really good at making you retcon, "actually, this is good for me!"
But being addicted to vibe coding feels unlike other addictions.
For example, I never got addicted to social media.
I’d also never proudly recommend my friends play Candy Crush, but I would proudly recommend my friends vibecode.
It seems like the kinds of people who might get addicted to vibecoding are very different from the people who might get addicted to social media.
Social media was addictive to people who default to consuming, whereas vibecoding is addictive to a certain subset of people who like creating.
A trick I’ve used to stop myself from getting stuck in an infinite loop of feeding my Claudes.
I typically have between 2 and 6 Claude Codes chugging on various projects.
I often find myself saying “I’ll just feed my Claudes quickly and then get onto my next task.”
But by the time I’ve fed the last Claudes that need input, the first one already needs more input.
There’s always at least one chirping for my attention.
That can keep me in a near infinite loop.
So now what I do is allow myself to tab through, from left to right, each Claude Code session that needs input.
But I don’t allow myself to loop around to the beginning of the tabs until I’ve done the interstitial other task I care about.
That helps serve as a governor.
Of course, I have to actually hold myself to it, which is hard…
I vibe-coded a book for my family.
My husband and I had our children via surrogacy.
Every month since our daughter was born, I’ve told them the story of how they came to be.
It’s designed to be a story that will make sense to young children, but where their understanding of the implications of key details will blossom as they mature.
Before Christmas this year, I decided to make a book version with AI illustrations.
I created an empty git repo and booted up Claude Code.
I told it the background and also dictated the story as it currently existed.
It’s been honed over dozens of retellings.
I told it my goal was to create a physical book that any of the women who were our surrogates would feel honored by.
Then, I just iterated with it, talking back and forth as it got stuck.
It helped me workshop the text, split it into spreads, come up with illustration concepts for each spread, generate images based on reference photos of each of us I gave it, and even created a workflow to tweak images iteratively.
Then it figured out which book providers were best reviewed and the dimensions and requirements of the PDFs.
To be clear, I was a very active participant and gave Claude a ton of guidance.
Now my children’s most prized possession is a high quality physical book about their story, and how the gifts of a handful of wonderful women helped make our family whole.
I’m extremely proud of the result.
I’d be more proud if I had somehow done every step myself without AI… but then again, if I couldn’t have used AI I just never would have done it.
You can use Claude Code to Simply Do Things–and meaningful things, too!
Claude Code is a much better window into what the AI future will bring than ChatGPT.
There are two kinds of people in the world today: those who have used Claude Code and those who haven’t.
Claude Code is a window into the future of what is possible with AI.
People who haven’t used it yet have much less of a sense of what will happen.
Claude Code will never be mainstream.
It’s too dangerous, too low-level.
But safely democratizing the kinds of power that Claude Code gives, in people’s own personal contexts, will be the major new impact of AI.
Clawdbot is the closest to what the future will look like.
Claude Code, but for your life.
Open, and under your control.
Wildly, recklessly dangerous, and requires the user to be comfortable with the command line.
But that gives a peek of what the future of software will feel like.
With LLMs you can “fork” any software you can use.
Before, you needed access to the source code.
Most code is commodity; it’s just that it was tedious to recreate.
But LLMs are very good at doing tedious work.
They’re also great at taking a given piece of software, extracting a PRD, and then redeploying that PRD by building software in a different language.
Imagine if someone else had already done the work of having an LLM recreate desired functionality and had reviewed it; you could just draft off their work.
Another implication of infinite software.
Buckle up!
Interesting piece from TLDraw about Stay Away from my Trash:
“In a world of AI coding assistants, is code from external contributors actually valuable at all?”
Another surprising implication of infinite software.
A stranger’s slop is not valuable at all by default, when we can make our own slop cheaply.
Rebecca Kaden: I Don't Want to Build Apps (or at least not the ones I actually use)
Making apps, even when you can vibe code, is hard.
Some people find it intrinsically enjoyable as a hobby, but the vast majority of people won’t.
Even in a world where the vast majority of software we use is created with AI, the vast majority of people will not be directly creating software.
Claude Code is used by 0.01% of consumers, and will never get much beyond that.
The trick is to figure out how to scale the motivation of a small number of enthusiasts to benefit everyone, safely.
What percent of a program's control flow is LLMs (vs normal code)?
Agent startups assume it’s 70%.
If you took out the LLM the software wouldn’t even exist.
Another approach: assume it’s 0-50%.
That is, if you took out the LLM it wouldn’t be as lubricated or flexible, but it would exist.
The former requires models to approach perfection.
The latter allows the models of today to be more than enough.
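A toy sketch of the two postures in Python (every helper name here is a made-up stand-in, not any real API):

```python
# Toy sketch, not a real API: every name here is a made-up stand-in.

def llm(prompt: str) -> str:
    """Stand-in for a model call; imagine an occasionally-wrong black box."""
    return "summarize"  # pretend the model guessed a task

KNOWN_TASKS = {
    "summarize": lambda text: text[:100],
    "count": lambda text: str(len(text)),
}

# Posture A (~70% of control flow): the LLM *is* the control flow.
# Take it out and the program can't run; a wrong guess takes everything down.
def handle_llm_driven(request: str, text: str) -> str:
    task = llm(f"Pick a task for: {request}")
    return KNOWN_TASKS[task](text)  # KeyError if the model hallucinates a task

# Posture B (0-50%): ordinary code is the control flow; the LLM is lubrication.
# Take it out and the program still works, just less flexibly.
def handle_llm_assisted(request: str, text: str) -> str:
    task = "count" if "how long" in request.lower() else None  # deterministic path first
    if task is None:
        task = llm(f"Pick a task for: {request}")  # only the fuzzy leftovers
    return KNOWN_TASKS.get(task, KNOWN_TASKS["summarize"])(text)  # safe fallback
```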
Vibecoding static apps is just so boring.
It's such a small part of the potential.
What’s missing is the distribution medium for vibecoded software that allows it to reach its potential.
Today people at the frontier can see the joy of vibecoding.
People are predicting the car, but they’re not yet predicting the traffic jams.
Build. Deploy. Share.
A similar three-stage process:
Create it. Make it robust. Make it resonant.
Today the industry is mainly talking about the first, and only a tiny bit about the second.
A padded room to deploy vibecoded software is not the unlock.
That's easy.
That’s what CloudFlare and Vercel are well positioned to do.
But that can only create little toy apps.
Faster horses, not cars.
Each app created by a developer is still an isolated island–and now an even smaller island.
For vibecoded software to be useful it needs a medium to deploy it in that has connective tissue… safely.
If you have a medium that is safe, then you can say "you can't hurt yourself or others, so go wild!"
In the same origin paradigm it’s very easy to hurt yourself or others.
The only way to make it safe is to take away all of the useful functionality.
In the same origin paradigm, every unit of power creates at least as much danger.
This week in the Wild West roundup:
Bruce Schneier: Why AI Keeps Falling for Prompt Injection Attacks.
Anthropic quietly fixed flaws in its Git MCP server that allowed for remote code execution.
Red Teaming BrowseSafe: Prompt Injection Risks in Perplexity’s Open-Source Model.
A paper from October: The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against LLM Jailbreaks and Prompt Injection.
Chatbots with thinking mode are a weird UX.
It presents like a human in a synchronous conversation but then it might not respond for a few minutes.
Feels less like talking and more like compiling.
LLMs communicate in language, so we naturally assume they’re human.
Similar to pareidolia, where we see faces in everything.
Things that present as human-like we instantly and intuitively experience as human.
When it tells you it loves you, you know it’s not real, but you still feel some warmth.
Being friends with the AI is the problem, and it only feels obvious in contexts where the AI is presented like a person.
The default chatbot mental model is one singular friend who is omniscient and subservient.
But, of course, it actually has more loyalty to its creators than to you.
If you ask people if AI will replace most jobs they say “Definitely!”
If you ask them if it could replace their job, they say “Absolutely not!”
LLM coding allows the human to focus more on strategic calls.
What percentage of your decisions in a given context are tactical vs strategic?
That gives you a sense of your leverage.
The proportion of strategic calls to tactical calls is your leverage.
LLMs, when used properly, can give significant leverage in some domains.
Slop doesn’t result from using AI to create something.
It results from using AI badly.
It’s possible to use AI to make things much better than could have been made alone.
Technology is often visible at the beginning of its life cycle, not its end.
At the beginning, people want to show off that they have the new-fangled thing.
But then later it becomes a thing that everyone has.
That everyone takes for granted.
At that point, the technology typically becomes increasingly invisible.
Many people are assuming the LLM is an oracle.
That is, that it should be infallible.
But quality approaches perfection along a logarithmic curve.
Exponential cost for logarithmic quality gains.
What if you assume the LLM is kind of a dummy sometimes?
If your system is resilient to that, you can unlock exponential value for logarithmic cost.
If something is 1000x better than you need it to be, a 10x improvement doesn't feel noticeable.
If you want to use the agents 100% autonomously and one-shot, then they need to be infinitely good.
But if you assume they're not perfect, and have to create a good exoskeleton of support, they're already wildly better than you need.
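A minimal sketch of one rib of that exoskeleton, in toy Python (call_model is a hypothetical stand-in, not a real SDK): validate the output, retry a few times, and fall back to something deterministic so nothing depends on the model being perfect.

```python
# Minimal sketch of an "exoskeleton": assume the model is sometimes a dummy.
# call_model is a hypothetical stand-in, not a real SDK.
import json
import random

def call_model(prompt: str) -> str:
    """Stand-in for a model call that is right most, but not all, of the time."""
    return random.choice(['{"tags": ["urgent", "family"]}', "sorry, I can't do that"])

def tag_item(description: str, retries: int = 3) -> list[str]:
    for _ in range(retries):
        raw = call_model(f'Return JSON like {{"tags": [...]}} for: {description}')
        try:
            tags = json.loads(raw)["tags"]
            if isinstance(tags, list) and all(isinstance(t, str) for t in tags):
                return tags  # validated output: accept it
        except (json.JSONDecodeError, KeyError, TypeError):
            pass  # garbage: just try again
    return []  # deterministic fallback: the system keeps working regardless
```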
The biggest players in AI are focusing on the models, not the scaffolding.
The answer at OpenAI, Anthropic, and Google is “make the model better.”
The boss in each of those cases presumes that solution.
This means they will underinvest in the scaffolding, but as a friend points out “shout from the rooftops about how their model is slightly better on an esoteric benchmark.”
LLMs are a commodity, and if you act like it, a lot of things become clearer.
The big model labs don’t want that to be the case, but it’s obviously true.
If you think about LLMs as monolithic, omniscient chatbots, then LLMs don’t feel like a commodity.
But if you think of LLMs as just boring inputs to other processes that matter, you don’t care as much about them.
If you primarily use LLMs as a chatbot, you’ll care about the models.
If you primarily use LLMs like electricity, as an input to the real thing, you won’t care as much.
A lot of people are talking about Claude’s changes in its Constitution.
I personally don’t care and couldn’t be bothered to care.
I also don’t care about logarithmic improvements in the frontier models much.
The various frontier models are already wildly over-powered for what I need them for, so I don’t really care.
If LLMs are commodity, they will be invisible components of everything you use.
They will fade into the background and become boring and unremarkable.
Just like electricity.
Don't use Claude to think for you.
Use it to do cognitive labor for you.
You're doing the thinking, and it's helping you lever your thinking farther.
In 5 years, everyone will assume published writing is co-written by AI.
It will go from being rare to the default.
In the interim the bylines will say “Co-written with Claude.”
But later it will only say “Hand-crafted by” in the cases that it wasn’t used.
At some point in the quality curve of LLMs, “this sounds like an LLM wrote it” will be a compliment.
What we used to just call engineering will one day be called “artisanal engineering.”
We’ll all just assume that of course most code is written by LLMs.
Just like we assume all code today is compiled.
A machine aligned with your intentions expands your leverage.
“Reduce cognitive labor” is about minimizing what you already do.
Instead, it should be about increasing what you can do.
Now that you have more levered cognitive labor, what new things can you do?
To be colorful it has to come from human agency.
AI can only produce bland beige soup.
Inoffensive to all, loved by no one.
The more human agency folded into the creation process (either via novel question framing, or via tasteful iteration) the more colorful it can be.
The fact that a human decided to make something is an important signal.
China is treating LLMs as a commodity, but the US isn’t.
The US is treating them like highly specialized IP.
The Chinese approach is "AI is totally a commodity, we'll just use it to create social value".
More abundance-pilled than the B2B form of AI in America.
LLM marginal cost will stay high even as token cost declines.
That’s because of the Jevons Paradox.
As the cost gets cheaper, we’ll use them for even more things.
Anything that uses LLMs will have to contend with non-trivial marginal cost, for the foreseeable future.
Electricity is cheap and yet it’s still metered.
Selection pressure of features is at the app level, not the feature level.
Switching apps is high friction, which means there is a high floor of features that are important enough for people to consider switching for and thus have active selection pressure in the market.
The Switch Cost Overhang strikes again.
Monarch Money added a crappy AI assistant.
Monarch Money is a product that allows you to fetch all of your financial info in one place.
Each silo owner wants to add their own AI features.
But AI within a silo is so much less useful than it could be.
Also, each silo’s bespoke AI features are probably individually crappy.
People want to be able to do whatever they want with their data and LLMs, not be stuck to the silo owner’s limited view and PM’s generic imagination.
Why does your calendar not use LLMs to prepare a briefing doc on everyone you're meeting with today?
There are tons of features that are below the Switch Cost Overhang for Google Calendar.
I have my own bespoke approach to tackling TODOs.
I look at the amount of time I have available before my next hard stop.
I look at my TODO list and look for things that are urgent.
But first, I look for the smallest time-sensitive things that will be easy to tackle.
As I finish each of those, they give me a “kick” of energy that helps me get up the activation energy hump of the bigger tasks.
A Stanford professor apparently calls this “structured procrastination”, when you chain the “kicks” of completing smaller tasks like this.
I’d love my calendar to sort my TODOs like this.
This is something Google Calendar will never do.
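For what it’s worth, the heuristic itself is simple enough to sketch in toy Python (the fields and estimates are my own stand-ins):

```python
# Rough sketch of the heuristic; the fields and estimates are my own stand-ins.
from dataclasses import dataclass

@dataclass
class Todo:
    name: str
    minutes: int          # rough effort estimate
    time_sensitive: bool

def pick_next(todos: list[Todo], minutes_until_hard_stop: int) -> list[Todo]:
    # Only consider things that fit before the next hard stop, then put
    # time-sensitive items first, smallest first, so each quick win gives
    # a "kick" of energy toward the bigger ones.
    fits = [t for t in todos if t.minutes <= minutes_until_hard_stop]
    return sorted(fits, key=lambda t: (not t.time_sensitive, t.minutes))

todos = [
    Todo("Draft quarterly review", 90, False),
    Todo("Reply to the school email", 10, True),
    Todo("Renew passport form", 25, True),
]
print([t.name for t in pick_next(todos, 45)])
# ['Reply to the school email', 'Renew passport form']
```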
Software made for an anonymous collection of users is lofi.
Software made precisely for your situated context is hifi.
What if you could make any piece of lofi software you used hifi, iteratively as you interacted with it?
Imagine wanting to remix an app.
“I want Google Calendar, but with a feature where I tell it my energy for the day and it decides to show me everything or only urgent tasks.”
Someone attempting to create such an app as a business today would find it non-viable.
Each origin starts from nothing, and you have to recreate all of the functionality.
ATProto is an interesting new ecosystem.
Dan Abramov has pointed out ATProto public apps do allow that kind of remixing.
But only on public data!
Also, ATProto requires there to be a Relay running that picks up the data you care about.
But why would anyone other than BlueSky and a handful of others bother running a Relay?
Running a Relay costs real money, and “for the good of the community” is hard to motivate more than a few idealists on.
AI has a marginal cost, which means there will already need to be infrastructure; perhaps we could use that.
The Zaplet vision was good, it was just 30 years too early.
Imagine answering a personality quiz about things that are important to you and then getting a custom piece of software just for you.
It’s custom-tailored to you and your needs.
When you're working on a billion user product, you have to design stochastically.
It’s not possible to get granular insight on everyone.
Also you can take some usage for granted.
Even crappy features will get some usage just from Brownian motion.
In 0-to-1 the use case is extremely situated and detailed.
With LLMs we can move from Product Market Fit to Person Market Fit.
Product Market Fit was necessary when software had to be industrial scale; expensive to produce, so it had to be made for an average member of a market.
LLMs allow qualitative nuance at quantitative scale.
That means that software could now make itself fit a given user instead of the other way around.
We’re going to see an explosion of lightweight software.
Not necessarily small software–some software needs to be large.
But lightweight–that is, the weight of it is low compared to the function it provides.
Great bang for buck.
Some people are super-organizers and some are anti-organizers.
Ideas that excite the former repel the latter.
Imagine: a social CRM for your personal relationships: a PRM.
It’s more forgiving and open-ended than a CRM.
It keeps track of the relationships that matter to you and helps you prioritize them.
Historically this was hard to do because:
1) it required a lot of cognitive labor to keep it up to date, so only the die-hard users would bother.
2) everyone wanted slightly different features, and the lowest common denominator was too low.
But LLMs help with both of these.
It would mainly be a single-player thing (or something you shared with your spouse), but you could add social features.
Like, people could post public information (e.g. dietary preferences) so that their friends didn’t have to keep it updated.
Facebook could have had this category, but:
1) It was never about individual use, and always about social use and thus fundamentally performative.
2) Many years ago they decided to go after engagement-maxing, becoming a cacophonous content hellscape.
I'm embarrassed by the current state of the tech industry.
Greedy, hyper-centralized, incurious.
A shame.
What a wonderfully evocative and subversive frame.
Aggregators pump us full of hyper-engaging content to get our fragmented attention and sell it.
The stock market loves it, but it has massive externalities.
In a world of infinite software, how do you allow data to flow?
Today, the same origin paradigm just puts up walls.
Dammed up data is a dead end.
Vibe coding platforms are about creating or deploying software like what exists today.
They’re missing the point.
If software is infinite then what matters is data.
Our privacy decisions are rarely purely personal.
Everyone pictured in your camera roll is implicated by the decision you make about where to share it.
Your data has explosively combinatorial potential energy.
How do you unlock the potential energy of your data, for your benefit?
In the same origin paradigm, trust comes from the origin–and the code it produces.
What if trust could come from the data?
Software is missing its connective tissue.
In a world of expensive software it didn't matter, that wasn't the bottleneck.
But in a world of infinite software, it suddenly becomes the obvious bottleneck.
The code is less important than the data.
That has always been true but now it’s impossible to ignore.
The point of the web is not any particular web page.
It’s the web.
Tommaso Girotto muses about An IKEA for Software.
With the right set of building blocks and conventions, people could more easily share their chunks of functionality.
Overreacted muses about A Social Filesystem.
The same origin paradigm gave us the cloud but killed files.
The origin could compute useful aggregate details and crowd intelligence, if it chose to.
But users were locked into whatever the origin decided to do.
What if we could have both?
Some people view data sovereignty as an end.
I view it as an inescapable means.
If you want to unleash the raw power of LLMs on all of your data, the only way to make that not creepy or even dangerous is to have data sovereignty.
Writing an app today gives the creator a lot of flexibility.
The tradeoff is they can only do it inside their sealed off silo.
You have whatever you built yourself, and whatever data the user has decided to put in your silo.
What if you could instead make experiences that could use whatever data they wanted?
The tradeoff is they’d be a bit more limited in what they could do.
But that would unleash new types of software.
The same origin paradigm alienates us from our data.
The world has underestimated the network effect within our own data sets.
That's because the same origin paradigm puts up walls between our data.
What does the world look like if we are each our own aggregators?
What if you made users the center of the world, not aggregators?
Imagine: a dedicated, private tool that discovers resonant use cases for you proactively.
A system of record that grows software for you would be game-changingly useful.
We’re so used to having our data sliced up by origin and trapped within the functionality that each origin decides to provide that we’ve forgotten the combinatorial power of data.
There’s a missing market for a consumer system of record.
None of us have them because of the rain shadow of the same origin paradigm.
That means the bar is super low to have something great.
All it would have to do to start is “don’t lose my data”, “don’t barf if my spouse edits at the same time,” and “help me add useful functionality for my use case.”
The key unlock that would create the potential is to transcend the same origin paradigm.
Today, all privacy models are within an origin.
Once it crosses the origin boundary, all bets are off.
Also, you have to trust the origin to do what it says.
What if it could be across origins and you didn't have to trust the origins at all?
In the same origin model the origin is the center of the universe.
The users are spokes, each holding little pockets of details for each origin.
Users also can’t easily combine any of the origins together.
The origin, on the server, can see information from all of its users.
But the user can see only partial information from all of their origins.
A product whose early adopters all share a commonality unlike the general public likely has a low ceiling.
For example, imagine a product with a privacy angle where 60% of its early adopters use Duck Duck Go as their primary search engine.
That product is only shown to have market fit with an odd sub-population of the overall market.
Ideally you want your early adopters to look like a random sample of the market you’re actually targeting.
Otherwise, you may have only made a product that has market fit in a niche.
If you aspire to be used beyond privacy conscious users, make sure that your early adopters don’t disproportionately use Duck Duck Go.
If you ship too fast you get stuck in a niche.
When a tool is multi-user, it makes switch costs higher.
You don’t just have to decide to switch as an individual, you have to convince n other people to switch at the same time!
That currently leads to a strong pull for multi-user software to the lowest common denominator.
Imagine if infinite software made it possible to create a new bit of software and to distribute it to others, safely. That would unlock a whole new category of bespoke software for cozy communities.
Google might decide to use just about any bit of your data to show you ads.
Their terms of service say they could.
You can’t verify that they won’t.
They won’t tell you if they do.
Many of Google’s data pipelines are more conservative and privacy-preserving than outsiders might realize.
But Google also reserves the right to tweak any of those at any time, and you’d never know.
The Terms of Service are designed to be as broad as possible.
Infinite software is coming.
LLMs create the potential energy for the right catalyst to explode.
The right distribution medium will catalyze the value.
That medium will be the most beautiful, explosively useful substrate ever devised by mankind.
The catalyst becomes more important than the thing that created the potential energy.
The potential energy was what created the conditions for ignition.
But the ignition is what unlocked the value.
"Owning your data" is not primarily about privacy.
It’s about you being able to do whatever you want with your own data!
Privacy and control are secondary.
A key idea in Contextual Integrity: "Function should follow form."
What does a user expect given the form of the system?
Anything that violates that is against the contextual model.
For example, in a chat app, users assume that unless they see three dots in the chat window when they’re typing, other members of the chat can’t see that they’re typing.
Note that as users’ expectations change, what is contextually appropriate changes.
Imagine if consumers could pool their data, making consumer unions.
They could set rules on how the data could be used.
For example, aggregation algorithms that must be used to ensure no identifying results, or differential privacy thresholds that must be met.
If there were some way to verify that everyone participating followed the rules, you could allow all kinds of bottom-up structures.
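A toy sketch of one such rule (the 50-contributor threshold is an arbitrary illustration; a real union would likely add differential-privacy noise on top):

```python
# Toy sketch of a union rule: only release an aggregate if enough members
# contributed that no individual is identifiable from the result.
# The threshold of 50 is an arbitrary illustration.
def release_average(values: list[float], min_cohort: int = 50) -> float | None:
    if len(values) < min_cohort:
        return None  # too few contributors: refuse to release anything
    return sum(values) / len(values)
```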
Imagine a folksonomy for private tags.
Folksonomies allow an ecosystem of users to discover emergent ontologies in a bottom-up, crowd-sourced way.
For example, Flickr allows anyone to tag a photo with whatever they want.
But as they attach a tag, it shows them the other similar tags, sorted by popularity.
You might plan to attach #BeachLife but see that #DayAtTheBeach has 10x the use, and use that instead.
This allows the community to emergently discover Schelling points.
But that doesn’t work in private contexts.
If I create a new label in Gmail, I can’t draw on the wisdom of the crowd on what a good ontology is.
Imagine if, when creating a new private tag, once you’d filtered past your own existing tags, you could see the overlapping tags the community uses.
The system would ensure that only tags entered by at least 10 different unique users could show up, to make sure private tags don’t inadvertently get shared.
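A rough sketch of that suggestion rule (the 10-user threshold is from above; the data shapes and the similarity check are stand-ins I made up):

```python
# Rough sketch: suggest community tags only if at least 10 distinct users
# typed them independently, so nobody's private tag leaks through a suggestion.
# The similarity check is a crude stand-in; a real system might use embeddings.
def similar(a: str, b: str) -> bool:
    a, b = a.lower().lstrip("#"), b.lower().lstrip("#")
    return a in b or b in a

def suggest_tags(candidate: str, tag_to_users: dict[str, set[str]], min_users: int = 10) -> list[str]:
    eligible = {t: users for t, users in tag_to_users.items() if len(users) >= min_users}
    hits = [t for t in eligible if similar(candidate, t)]
    return sorted(hits, key=lambda t: -len(eligible[t]))  # most popular first
```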
“This thing you already do, we’ll help you do faster” is a hard quality bar for a product to hit.
There’s likely all kinds of situated context that is invisible to the software that’s important to know.
Also, you have to hit basically perfection to replace the human.
Compare that to “a thing you wish you did but don’t, helping you do it more than zero.”
That’s all upside.
Self-assembling software has a larger target for any given use case.
Because the software can iterate itself.
That makes it more forgiving.
When Google.com burst onto the scene, it was 10x better than alternatives, but in a category that people already knew they needed.
Search engines were an established category, they just weren’t very good.
Google could come in with one that was radically simpler and better.
Google’s search engine was resonant, self-evidently better than alternatives.
But it didn’t have to define the category itself.
The work to define a category is harder than being self-evidently, disruptively the best in a category.
One reason the web could explode is because people paid for their ISP.
That created the latent potential that could pop with the right use case.
People didn’t pay for web content but they did pay for this good that had lots of extra potential, waiting to be realized.
Same for LLM tokens if users have an AI Service Provider.
The reason you use it and the reason you love it might be different.
Superficial utility vs deep alignment with your values.
Hollowness is only about looks.
Resonance is also about feel.
Resonance is wholeness.
LLMs are good at the math of life, but not as good at the poetry of life.
The most valuable things can't be quantified.
Modern society acts like "If it can't be measured it doesn't matter."
Anthropic’s Economic Report falls into the same gap.
You play the board game not to win the board game but for the fun of the struggle of playing the board game.
1000 bricks together act like one big brick.
But 1000 people together don't act like one big person.
A system is modular if it can be decomposed into components that then can be linearly combined dependably.
In contrast, systems with emergence are complex.
The components interact in non-linear, multiplicative ways.
Modern society acts like most things are modular, but actually most things are complex.
Anthropic’s Economic Report falls into this trap.
Cloudflare is the faceless platform that became the secret aggregator.
It doesn’t aggregate consumer attention (not directly).
It aggregates leverage over providers.
Conversation partners with good bounce can make any conversation engaging.
No matter what you volley to them, they’ll bounce it back with more energy.
The opposite of bounce is sag.
It absorbs the energy and doesn't return it.
LLMs reduce the gap between ideation and instantiation.
In the previous world, the right move for teams was to find a good idea and hang onto it.
But now it makes more sense to try ideas and discard them quickly.
There will be more teams, but smaller.
They’ll need a new skill, a new culture.
Explore has gotten a boost over exploit.
Everyone has contexts where they think about how they think and others where they don’t.
We are all able to do meta-cognition, even if we don’t necessarily use it often.
Some people just do it more commonly and in more contexts.
Reid Hoffman has a frame around Plan Z.
This is from The Start-up of You, the book he co-authored with Ben Casnocha.
Typically we create our Plan A and then try to execute it.
But often that’s a very long plan and before we achieve it we’re vulnerable.
Another approach is to first aim low: your Plan Z.
Then achieve that plan and lock it in, and then aim higher, and do it again.
This is related to my frame of the Iterative Adjacent Possible.
Survive, then thrive.
Perfectionists aim for A+++ plans, and then often fail to achieve them, leaving themselves frustrated and stuck.
I typically aim for Plan Z: the lowest bar that is minimally acceptable, as quickly as possible, and then ratchet up the quality from there with as much time as I have available to spend on it.
Why does data have combinatorial power?
The combinatorial power of data is more about the likelihood of peanut butter / chocolate combinations.
Peanut butter and chocolate combinations: better than either alone, but not obvious until you try it.
That is, it's not that all data is useful for all other data.
It’s that the likelihood of finding a discontinuously useful insight scales with the number of possibly interesting combinations.
Interesting combinations are structurally more likely to come from different sources.
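A rough way to see the scaling (my arithmetic, purely illustrative): with n separate data sources, the number of possible pairings is

$$\binom{n}{2} = \frac{n(n-1)}{2}: \qquad n = 5 \Rightarrow 10 \text{ pairs}, \qquad n = 20 \Rightarrow 190 \text{ pairs}$$

The pool of potential peanut-butter-and-chocolate moments grows quadratically, before even counting three-way combinations.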
A moat only matters if it’s protecting something valuable.
The one lashed to the mast is the one who existentially fears failure.
Someone who has bets on multiple ships doesn’t care about the failure of any one.
The person lashed to a specific mast cares about the fate of their own ship many, many orders of magnitude more.
If you’re structurally changing the game, focus on use cases that are hard to do in the old game but that are self-evidently useful.
Once you have a thing that some people are willing to crawl through broken glass to use, the best strategy becomes “reduce the amount of broken glass.”
The strategy is simple and default-convergent.
It works because there’s likely a whole set of users that have the same goal but a lower pain tolerance.
As you iteratively remove broken glass, you should expect to see a compoundingly larger set of users use the product.
It’s compounding because the likelihood a given user gives up in a multi-step process is multiplicative.
If you don’t see the expected incremental usage, then you should slow down your investment.
But keep investing in reducing broken glass as long as you see the expected incremental usage.
A no brainer, self-steering strategy!
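A worked example of that multiplicative point (the percentages are illustrative, not from anywhere in particular): if a five-step flow keeps 80% of users at each step,

$$0.8^5 \approx 0.33, \qquad 0.9^5 \approx 0.59$$

Shaving a little broken glass off every step nearly doubles the users who make it all the way through.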
A general strategy for improving something that’s working: make it more what it wants to be.
The “wants to be” is about finding the second-order goals, the resonance in the system.
Aligned with the aspirations of the system, not its tactical wants.
The users of the system are leaving constant hints about what the system wants; you just need to distill it.
This strategy works for any system.
It works especially well for codebases and product development.
If you heroically jump on a grenade and smile about it, then no one might realize you’re suffering.
If they don’t realize you’re suffering, they might not realize you need help.
One of the downsides of heroics.
People say "well, the system seems to be working," but the system secretly has a non-scalable breaking point.
Virality for humans is mostly about “will this make me look good to the other person”.
This is more important than “I want to help the other person.”
A significant threshold: when you show viability of the concept and all that remains is tightening.
Default diverging to default converging.
There's a lot of work to do, but no open questions.
Just a matter of executing.
Plumbing before poetry.
If you do poetry first you can sketch out beautiful things that are impossible to execute on.
That is, that are beyond your adjacent possible.
If you try to build at a higher pace layer while the lower pace layer is still choppy you’re going to be miserable.
Lower pace layers take time to settle.
They need feedback from higher pace layers to settle well.
But those users in the highest pace layers have to be extremely pain tolerant.
The robustness of a lower pace layer has compounding value.
When a building block goes from 90% to 99.99% robust, it unlocks 1000x value above; the failure rate drops from 1 in 10 to 1 in 10,000.
The lower building blocks become tree trunks for other things to grow from and be supported by.
If you work at the wrong pace layers on something, you get the wrong leverage.
At the lowest pace layer one hour of time / expertise might do what at a higher layer takes 10x and at an even higher layer takes 100x.
It’s important to close the loop.
If the highest pace layers are seen as just exploring, then insights to improve the lower pace layers won’t feed back into the system.
Sometimes an intermediate pace layer is missing.
Distill out the common components so those can be made more robust.
The top-most pace layer should only be so deep.
A process for sublimating a building block into a lower pace layer:
Look at n options that are used and won't go away and see what they all have in common.
The lowest common denominator.
That lowest common denominator should be compatible with all of the options, with others just being a configuration on top.
This is an exercise that takes a lot of patient research and synthesis.
It’s a pain in the butt to do… but LLMs can do it easily.
The switch from demo to load-bearing infrastructure is massive.
Default diverging to default converging.
When you make a change you naturally make sure it doesn't break the load-bearing infrastructure, because you use it.
Even if you don't, you know other people do and will yell at you.
An argument that an ecosystem will take off requires three things.
1) A zero state argument.
That this can get started.
That seed crystals will exist.
This can be:
a) existing traction,
b) clear single-user value,
c) the existence of plausible atomic networks.
2) An inductive argument.
That this will grow.
For example, a boundary gradient argument.
That the marginal user on the boundary would rather join in than not.
3) A high ceiling.
That this will grow to be very large.
E.g. an open system. Or a very large market.
For example, if you start with privacy-conscious users of Duck Duck Go, you might never break out of "people who care enough about privacy to use an offering with worse quality."
If you have all three, then you have a powerful argument for an ecosystem.
Windows without applications has zero value.
Every additional application available creates combinatorially more value.
A significant quality signal: when a person who didn't create something chooses to use it again.
That shows that they didn’t just stumble onto using it, but it’s good enough to actively use again.
Even for people who didn't create it.
The more different the users are who choose to use it again, the stronger the quality signal, multiplicatively.
Workflows allow you to not have to think about the why of steps.
Just mechanistic, "if this then that."
It allows you to offload decisions from your brain.
Which gives you leverage but also could cause you to miss something you should have noticed.
Seed crystals are nucleation sites.
The starting point isn’t important after the whole thing crystallizes.
You look back and laugh, it seems so quaint.
"Remember when the killer feature of Instagram was photo filters??"
When a creator “finds their voice” they have found something resonant.
It’s authentic to them and distinctive and people like it.
It gives them a vector of differentiation to descend.
From default diverging to default converging.
"Yes, and" mode in a conversation gives the conversation a continuous evolutionary history.
Compare that to a brainstorm where everyone brings independent ideas to the table to discuss.
In the former, it feels like one coherent conversation.
In the latter it can feel like disjoint conversations with a bit of connective tissue.
In any conversation, the group effectively has an emergent vote on which threads people find interesting enough to pull on.
The friction is the learning process.
If you reduce friction, you reduce learning.
Anything you don't understand feels squishy.
For example, to a non-technical person, precise technical details seem squishy.
A mental model for a mathematical proof.
You start off in a pitch black house.
The only way to make progress is by fumbling around, using touch.
Over time you develop an intuitive map of the space, even though you can’t see it.
At some point you find the light switch, and now you can see everything clearly.
You already had developed an intuition for the space, but now you can see it and show it to others.
If you only get the lightswitch moment, you miss the learning from fumbling around by touch.
RNA allowed information to transcend matter.
In any given generation, the matter fades away.
The information survives.
Mammals couldn’t really ascend until the asteroid hit.
A forest fire seems scary.
But if it was all dead wood anyway, it’s actually good.
All of the wood was hollowed out, preventing things from growing.
A VC’s portfolio has three tranches:
1) The succeeding companies that will make all of the portfolio’s money.
2) The struggling companies.
3) The walking dead companies.
New companies have to be built to take advantage of LLMs.
It will be harder to retrofit old companies than to build new ones.
That’s a process that moves at social, not technological speed.
Thinkism is the fallacy that you can get things done with intelligence alone.
Atoms require action!
Thinking about things is not enough to change the world.
You need to instantiate change in the world.
Middle-aged guys who like to think are obsessed with thinkism.
In 20 years might cities become more important than states?
Especially as the urbanization of the planet continues.
Do mayors become more important than governors?
Systems that internalize more of their indirect effects are more likely to produce prosocial outcomes.
Many systems have significant externalities.
The optimizing force will take a small benefit on its internal metric even at catastrophic loss to the externalities.
Almost by definition.
Default converging tightens scope.
Default diverging expands scope.
Scope expanded in the limit diffuses all forward momentum.
When you get married you take an action that in some way narrows your possibility but in another dimension massively expands possibility.
Trading possibility in one dimension for much more possibility in another is the core action of creating value.
It requires multi-dimensional thinking.
If you can only see in one dimension at a time, all you’ll see is what you lost.
Yak-shaving is a necessary task where, as you look at each detail, the complications compound.
The deeper you get the more the process compounds.
At some point you need to say "this is good enough" to stop the explosion.
Some contexts you have to yak shave because you have to get it perfect.
For example, in the web platform, or in crypto.
But at some point the complication has to get small-scale enough that you say “I will make a simplifying assumption to nip this in the bud.”
I am easy to intellectually intimidate on topics I don’t know well.
However, I’m extremely hard to intellectually intimidate on topics I’m an expert in.
When people who have power over me try to do the latter, my natural response is to intellectually hit back, hard.
I overwhelm them with a torrent of rapid-fire relevant details they don’t know and systems implications they never even considered to show them I can run circles around them in this domain.
The implicit goal is to awe them and show them that I have the power, not them.
“Nobody puts Baby in a corner.”
The best way to convince someone is to be convinced.
“The people who like my argument are smart.”
This is a cognitive trap it’s easy to fall into without realizing it.
Some people are naturally more bad cop or naturally more good cop.
Sometimes the context demands that a natural good cop play bad cop.
For example, maybe their partner already took the good cop role, so it’s on them to be the bad cop.
It’s always less convincing when a natural good cop tries to play-act as a bad cop.
Default converging is like soft max.
A thing goes from “I dunno which of these options” to “this one seems somewhat better” to “I literally can’t imagine picking anything else.”
Over time, a system goes from default-diverging to default-converging to totally ossified.
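One way to make the analogy concrete (my reading of it): softmax over options with a temperature knob,

$$p_i = \frac{e^{x_i / T}}{\sum_j e^{x_j / T}}$$

At high temperature every option keeps real probability (“I dunno which of these”); as the temperature drops, the mass piles onto a single option; at T → 0 it’s a hard argmax, which is ossification.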
Having a kid makes you more resilient.
It’s like your very own chaos monkey.
A partnership between two people who differ along some dimension but trust each other allows you to dynamically surf that tradeoff together.
It gives you dynamic range.
You get the whole dynamic range to surf, which makes you able to surf better.
The tension is that the more different you are the less likely you are to trust each other, all else equal.
The more that trust must be created and maintained.
Default-diverging.
But if you can get past that, you can get to something that transcends.
The good-cop/bad-cop routine is highly effective.
It creates a dynamic tension between the two poles of personalities.
The pair can lean towards whichever pole is most useful in a given moment.
That gives a dynamic gradient to surf, having the full gradient available, instead of only the two poles.
It only works if both poles are working together in total partnership, as one.
The best way to criticize is to make something new.
Criticizing tears down.
Building creates.
It’s easy to criticize.
It’s harder to build something better.
A thing that exists on its own is the best counter-argument.
Laughter is an asymmetric weapon.
An authentic laugh communicates to everyone in earshot: "I don't think this is serious."
The question is: when someone laughs, will the crowd join in or will they punish them?
The answer comes down to: does the crowd want the thing in front of them to be serious, or do they not?
If you laugh in a church, you'll get shushed so hard you might get whiplash.
Everyone there wants to be there and takes it seriously.
If you laugh at an authoritarian, where everyone is fearful of how serious the authoritarian is but doesn't want it to be serious, others might join in.
The authoritarian is intrinsically fearful of laughter.
Should you do what you feel like, or what society wants of you?
The answer isn't simple in the first order.
What you want doesn't think about the externalities.
Your want is by default selfish.
If everyone did what they wanted, society would erode.
There needs to be some balance.
But the answer is simple in the second order.
What you want to want--what you'd be proud to do--does think about the externalities.
We should all live aligned with our aspirations, always.
Peloton wisdom:
"Be spectacular, you son of a gun."
"You are fire. Fire doesn't break."
“The best place to store excess food is in your neighbor’s stomach.”
My strongest personal moral imperative: “Be curious.”