The iPhone when it first came out was messy and frustrating... but also unambiguously the future.
At the very beginning there was no developer story.
It was "this is the product and this is all it can do and it's still mind-blowingly useful and you’ll buy one."
Products have to earn their weirdness.
New products only have a small number of places where they get to be weird.
For the users, those weird spots are debits, not credits.
A product that is wildly transformative and self-evidently useful gets a much higher weird budget, but it’s still pretty small.
Anthea Roberts has another excellent piece: The Extended Mind: Why "Did You Do It or Did the AI?" Is the Wrong Question.
A small prompting trick: append “dozo”.
Dozo in Japanese means, “Please go ahead.”
It’s polite, but a bit passive aggressive.
“Yes, of course, why are you asking, just do it.”
LLMs know the connotations of that word and respond appropriately.
The AI Peter Principle is even stronger than the original.
LLMs allow you to execute beyond your ability.
The result is you’re almost certainly out over your skis.
You could stop within your ability, but humans are never satisfied.
So we press as far as we can, and LLMs allow us to go a little (or a lot!) beyond our ability to correctly judge the quality of our output.
Today in agentic systems we focus on skills and up.
That is, what can the system do, and how can we expand that capability set?
But we don’t focus on human experience down.
What is the goal the human is trying to achieve, and how can we help with it?
A skills-up approach creates a frenetic, anxiety-producing experience.
A human-experience-down approach would focus on producing calmness.
Consumer Platforms are the biggest categories.
They’re rare, and the industry has forgotten how to even do them.
But when they work, they are unlike anything else.
Consumer products can grow at compounding rates with no ceiling.
The addressable market is approximately everyone.
If you have a hit with juice, it can spread to insane heights.
This is a kind of vertical potential.
Platforms grow at compounding rates based on the power of a broad ecosystem.
The platform grows in proportion to the investment of all participants.
This is a kind of horizontal potential.
Together, the power is unstoppable.
A consumer product requires finding a hit.
A consumer platform creates the environment for hits to emerge.
Then, the platform gets catapulted every time a hit emerges.
An asymmetric and ongoing advantage.
For a new consumer platform to take off, the stars have to align just so--a generational event.
An entity has to be positioned at precisely the right place and poised to capitalize on it.
LLMs are most powerful when applied to all your data.
LLMs can unlock the value of your data in a way no mechanistic software ever could.
The power of what the AI system can do is tied to the context.
The value of context goes up with the square of the amount of data.
Use big models in proportion to surprise.
Big models are very expensive, but also very good at handling surprise resiliently.
Use the big model to start without a harness.
Then, as you get a better handle on the scope of inputs, you can make a harness to dial in the unsurprising bits.
Once the harness handles more of the common cases, you can use a smaller, cheaper model.
As you keep going down this path, ultimately the harness covers 100% of cases and you’re left with just mechanistic code.
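A minimal sketch of that progression as a routing cascade. All of the names here (`harness_rules`, `small_model`, `big_model`, the `novelty` field) are illustrative stand-ins, not any real API:

```python
def route(request, harness_rules, small_model, big_model):
    """Route a request to the cheapest handler that can absorb its surprise.

    harness_rules: dict mapping fully-understood request kinds to
        mechanistic handlers.
    small_model / big_model: callables standing in for LLM calls.
    """
    # Unsurprising, dialed-in cases: pure mechanistic code, near-free.
    if request["kind"] in harness_rules:
        return harness_rules[request["kind"]](request)
    # Mildly novel cases: a small, cheap model inside the harness.
    if request.get("novelty", 1.0) < 0.5:
        return small_model(request)
    # Genuinely surprising cases: pay for the big model's resilience.
    return big_model(request)
```

Each time a class of inputs stops surprising you, promote it into `harness_rules`; over time the share of traffic that needs any model at all shrinks toward zero.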
A tweet thread about where the value lives in the world of AI.
"For 2 years, startups and VCs subscribed to the herd delusion that if you weren't doing deep model training, you were an indefensible "GPT wrapper".
The herd ran in the exact wrong direction--raising unnecessarily large rounds to develop worthless assets."
A tweet thread about what the actual job to be done of Saas software is.
"Selling software? Of course that’s your contention. Of course it is.
And that’s when it hits you.
You’re not selling software anymore. You’re taking on liability. You’re inheriting broken workflows, fragmented data, regulatory overhead, and edge cases no model has ever seen. The clean demo you showed? That was the easy part."
Every silo is building its own half-assed version of an AI assistant.
But it can't ever actually be good, because:
1) they aren't AI experts.
2) a vertical slice of context can never be as useful as a horizontal system.
Every silo must assume that it’s the center of your world.
In a world of infinite software it won't even look like software.
It will be completely invisible.
It won’t look like anything at all.
If your step one for mass adoption is "be Simon Willison," it doesn't matter what step two is.
The intelligence in a model is inert.
The intelligence is catalyzed by an external process.
It’s all potential energy until something comes along and makes it kinetic.
The intelligence comes from the thing that puts the model on as a suit.
That external thing can be a person, or a mechanistic harness.
Overcast handles its AI needs with 50 Mac Minis.
If you don’t need the power of the frontier models, it’s possible to do inference at multiple orders of magnitude cheaper.
Not bullish for the major labs, given that the models are already ludicrously over-capable for most applications.
Distilling the perspective of a number of experts on AI into the consensus take: “AI: WTF?”
Even when we know precisely how they work, we still get vertigo every day.
“Wait, how is this insane result possible?”
Turns out to predict the next token accurately you must first build a model of the world.
Also, AI changes so many fundamental assumptions and inverts them, leading to surprising implications around every corner.
WTF indeed.
Incoherence is viral. It compounds.
Because of this property, systems are default-diverging.
Default-converging requires a process that reduces incoherence over time, that roots it out.
LLMs have infinite patience but are also prone to introduce subtle mistakes.
When you have subtly wrong things in your LLM’s knowledgebase, they compound.
The more energy you pump in, the more it tears itself apart.
A small mistake allows a few more small mistakes on top of that misunderstanding.
Each one compounds and accumulates, and before you know it the whole system has torn itself apart.
This is similar to why allowing a single compile warning in your build quickly gets to a fundamentally messy state, or the concept behind broken windows policing.
Default-diverging systems pull themselves apart; they get increasingly incoherent.
Default-converging systems make themselves more coherent over time.
Fire-and-forget AI tools have to be much more rigorous than human-in-the-loop tools.
When the human is in the loop, they can catch small mistakes before they compound out of control.
A human-on-the-loop system must be default-converging.
That requires approaches like having agents spar and critique one another’s work, GAN-style approaches, asking the five whys, etc.
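The spar-and-critique loop can be sketched as a simple convergence driver. The callables here are stand-ins for LLM calls; only the control loop is the point:

```python
def converge(draft, critics, revise, max_rounds=5):
    """Drive a draft toward coherence by having critics spar with it.

    critics: callables that return a list of objections (empty = satisfied).
    revise: callable that produces a new draft addressing the objections.
    Both are hypothetical stand-ins for agent calls.
    """
    for _ in range(max_rounds):
        objections = [o for critic in critics for o in critic(draft)]
        if not objections:
            return draft  # converged: no critic objects
        draft = revise(draft, objections)
    return draft  # out of rounds; hopefully still more coherent than we started
```

The structural property that matters is that every round either terminates or strictly addresses objections, so incoherence gets rooted out instead of compounding.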
If my mom has to think about agents, then it won’t work for the mass market.
Today, you have to be aware of your agents, and make decisions about what they can and cannot do.
This is a load-bearing part of the usage model.
That is overwhelming even for the most savvy users.
That has to go away for the power of agentic software to be unleashed for mass market.
Jesse Genet is a home-schooling mom that runs her household on 5 custom claws.
It’s amazing what she has built as someone without a traditional tech background.
She described it on a podcast, and the host asked, “cool! How can I use what you built?”
The answer was “you can’t, and shouldn’t.”
It was too bespoke, too difficult to set up.
How do you make it so the first users crawling over broken glass make it easier for the people behind them?
When you have a powerful but dangerous tool, you want your first users to be crocodiles.
Crawling through broken glass, grinding it down so everyone behind them benefits.
That requires being able to share what you’ve built.
If you want to not have to tell OpenClaw what to do, someone else has to have done the work to make your use case work.
That requires you to trust someone else–likely a stranger–with significant power.
It’s like trusting a stranger with some home-brewed explosives.
"Everyone will build their own tools" has never worked and will never work.
You’ll always need the most advanced users to grind down the broken glass for everyone else.
When you’ve used Claude to plan a project with complex logistics, you can also have other people help by distilling the project for them.
For example, my husband planned our son’s birthday party.
The day before, he could have a TODO list distilled for other people to help execute the logistics.
Without LLMs, distilling the information for other participants to understand how to help would be its own chore, and possibly so onerous as to not be worth it.
The PM trick was to always have the materials be clear, well organized, and up to date.
That was a ton of cognitive labor, and only people with a particular freakish personality could bring themselves to do it.
Now, anyone can.
The superpower of preternatural organizational skill is now a commodity.
The power of creativity wins out.
Humans, like Alexander Hamilton, are never satisfied.
Bezos described this as “divinely discontent.”
“Yesterday's 'wow' quickly becomes today's 'ordinary.'”
In the age of AI, what is the limiting factor, domain expertise or ability to adapt your workflow to new tools?
It seemed like the answer might have been ability to adapt to new tools.
After all, you can’t teach an old dog new tricks.
But it turns out that agentic engineering is relatively easy to adopt, and inescapable enough that even people with decades of experience do it.
Also, it’s inherently enjoyable, like riding an electric bike for the first time.
So that means that domain expertise is the limiting factor.
Tough spot to be in for this generation of junior developers.
When you can never finish, you're fundamentally anxious.
There’s always more to do!
A defining characteristic of humanity is we’re never satisfied.
Compare reading the news online to reading in the actual printed newspaper.
With the newspaper, you can “finish” the news.
That’s nice!
It allows you to be calm.
In an age of cacophony, the most precious thing is being calm.
LLMs have the advantage in any domain where patience is more valuable than taste.
Humans distill each other, all the time.
It’s a sign of love.
“What would [person I respect in this domain] say?”
"Trustworthy" is not just "this isn't malicious" but "will it be worth my time?"
“When all is said and done, will my value exceed my cost?”
The list of apps on your phone is like Yahoo before Google.
A manually-curated directory.
Inherently limited.
We still haven't figured out the Google equivalent for the most important computing in our lives.
If you make a decision when it's small, it never becomes big.
Little decisions you make in the day to day snowball into a big problem for you in the future.
Your agentic tooling should help you make these small decisions correctly in the first place.
Stratechery describing his position: "(1) agents were a real thing and (2) that agents would drive a massive expansion of compute demand. That’s my thesis."
Enthusiastically agreed!
AI is definitely not a flash in the pan.
In 5 years, the chance the consensus is “well, AI was a nothingburger” is effectively zero.
AI is not AR.
AI is fundamentally, transformatively useful.
This is self-evident to anyone using Claude Code.
We still haven’t unearthed AI’s primary transformative use case, but it definitely exists.
A Reddit thread: PSA: Anthropic bans organizations without warning.
I have first-hand experience with this.
It’s trust-destroying for an enterprise grade product to do this, especially sloppily enough to have false-positives.
Old paradigms accumulate epicycles until they collapse and a new paradigm makes all of the complexity evaporate.
We’ve taken the software development process that was the consensus before AI as a hallowed, principled approach to building software.
What if it was just decades of accumulated epicycles on top of fundamentally the wrong model for building software?
What if agentic engineering is much closer to the platonic ideal that was always there, lurking just outside of our reach?
Sometimes adding an extra piece of information makes everything click.
Before, everything felt increasingly uncertain as you added more information to the pile.
Then you add one specific piece and suddenly everything clicks into brilliant clarity.
Paradigm-shifting observations have this characteristic.
“Wait, what if the earth orbits the sun?”
Another example is when you discover the hidden dimension that makes all of the previous incoherence suddenly be explainable.
The world of SLAM gives a nice mental model with loop closing.
SLAM means Simultaneous Localization and Mapping.
It’s at the heart of any Augmented Reality flow.
As you get more observations from the camera, you update your estimate of both the device’s position in space, and the configuration of the environment.
Naturally, drift happens: as you go longer, your dead reckoning gets increasingly tenuous.
Then, you notice that two spots are the same location, and you “loop close.”
Now, all of the intermediate observations can be snapped into the precise place, all at once.
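The "snap" intuition can be shown in a toy 1-D version. Real SLAM systems solve a full pose-graph optimization; this sketch just distributes the observed drift linearly across the intermediate poses, which is my simplification for illustration:

```python
def close_loop(poses, loop_index, observed_error):
    """Toy 1-D loop closure.

    We revisit the location of poses[loop_index] and observe that our
    current estimate is off by observed_error (the accumulated drift).
    Distribute the correction linearly across the intermediate poses,
    snapping them all into place at once: later poses drifted more,
    so they receive more of the correction.
    """
    n = len(poses) - 1 - loop_index  # steps taken since we were last here
    corrected = list(poses)
    for i in range(loop_index, len(poses)):
        fraction = (i - loop_index) / n
        corrected[i] = poses[i] - fraction * observed_error
    return corrected
```

For example, if we know a landmark sits at 4.0 but our dead reckoning says 4.4, the final pose gets the full 0.4 correction and each intermediate pose gets its proportional share.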
What is possible and impossible is a social fact.
Carlos Eire's Yale book talk for They Flew: A History of the Impossible makes this case.
The point is not that the reports of saints literally flying were correct.
The point is that what we believe is possible and impossible is a contingent, socially constructed fact based on the dominant paradigm.
Not all impossible things are actually possible.
But some things believed to be impossible within the current paradigm actually turn out to be possible.
The potential for dynamic webapps existed before the word “Ajax.”
The actual enabling APIs had existed for years.
But once it was a word, it exploded into usage.
The word allowed coordination to happen, and expectations to change.
The possibility was there, but the Schelling point wasn’t.
Mechanistic software is extremely powerful.
Cheap to execute, giving insane amounts of leverage on your labor.
But historically it’s extremely expensive to produce.
LLMs are great at distilling mechanistic software perfectly situated to your use case.
That software is often fragile, but that’s OK: if it breaks, they can fix it toot sweet.
That’s crazy powerful.
How can you pump energy into something and have nothing bad happen and have some non-zero chance of something good?
That would be default-converging.
All you need is one hit.
If you have infinite time and resources, you will find one.
The higher the rate of something good happening, the faster it grows.
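A toy simulation of a default-converging ratchet, assuming the clipped-downside framing above (the model is my gloss, not anything standard): each unit of energy either produces a small win or nothing, never a loss, so value only accumulates.

```python
import random

def ratchet(steps, hit_rate, seed=0):
    """Default-converging toy: each step either produces a small win
    (probability hit_rate) or nothing -- the downside is clipped at
    zero by construction, so value can only ratchet upward."""
    rng = random.Random(seed)  # seeded for reproducibility
    value = 0
    for _ in range(steps):
        if rng.random() < hit_rate:
            value += 1
    return value
```

With the downside clipped, the hit rate only changes how fast you accumulate, never whether you do.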
No matter how hard your LLM pushes Rust, it can’t make a program with data races.
The borrow checker in Rust simply won’t let you compile a program with a possible data race.
No matter how many crazy workarounds the LLM comes up with to get the program to compile, you know it will be safe in that particular dimension.
This week’s Wild West Roundup:
The Mother of All AI Supply Chains: Critical, Systemic Vulnerability at the Core of Anthropic’s MCP.
The power of the Mythos model might be due to Recurrent Depth Transformers.
Now that everyone knows this level of code quality is possible in models, any well-funded-enough entity around the world will be able to achieve it with enough effort.
Modern software is highly inter-dependent, so everything is vulnerable to a supply chain attack.
This week I came across a load-bearing Lovable app in the wild.
It was a search and contract component for a small nanny placement service.
It’s an example of the power of custom long-tail software in a domain that previously would not have justified custom software.
The experience was significantly better than it would have been without it.
However, it’s prudent, when you see a Lovable link for a load-bearing thing that requires sensitive information, to assume that it’s naively or maliciously implemented and you shouldn’t trust it with your data.
At least flows cobbled together out of things like Google Forms don’t have to build their own security posture.
LLMs are fundamentally naive.
Prompt Injection is just the most extreme version of that.
But LLMs do subtly dumb things all the time.
The practical risk of agents behaving badly is not prompt injection most of the time, but just the agents being naive.
The web flipped the model from "software is distributed to users" to "users are distributed to software.”
Claws make your skin crawl.
The more you have them do, the more they get themselves into a dangerous, irrecoverable state.
The more you have them do, the more likely they are to tear themselves apart.
That can, of course, also harm you, in proportion to how much control you gave them.
The best way to make sure that they continue to work properly is to have them relatively isolated.
But that defeats the whole point!
The more power you give a claw, the more danger it creates.
Our current physics of trust requires you to trust everyone the developer trusts.
The developer decides whom else to trust based on which dependencies to take on.
You’re not just trusting the developer, you’re trusting everyone the developer trusted…
…and everyone they trusted, and on and on ad infinitum.
All it takes is a single weak link to break the chain.
What is the chance that every single link in the chain is not weak?
We rely on security through obscurity way more than we realize.
Most security issues are below the waterline.
Difficult to find without scuba diving, which requires expertise, equipment, and time.
It all comes down to how much an attacker is willing and able to invest.
The waterline has been at roughly the same level for decades.
Mythos drops the water line all at once by dozens of meters.
Maybe our old “good enough” approach to security of software was only viable in a pre-Mythos world.
When the web exploded, it invalidated Windows 95’s security model.
The security model had been “good enough” before the web.
But after the web, the only viable approach was a microkernel.
No level of patching atop a monolithic kernel could make it safe.
Perhaps we’ve crossed the same threshold for security architecture of networked software?
What will the “microkernel” of networked software look like?
All users of a messaging app have to trust that it will behave correctly.
That is, that it will only send messages to the people you intended, not leak your messages, and not allow people to impersonate other senders.
Everyone has to trust the app won’t mess it up naively or maliciously.
This is yet another factor on top of the inherent coordination problem, and one of the reasons that so few messaging apps exist.
To jump on a once-in-a-lifetime opportunity you have to recognize the opportunity first.
Sometimes the stars do align, for a brief moment.
An insightful piece about the AI Great Leap Forward.
Incentivizing token burn in companies won’t work for the same reason China’s Great Leap Forward was a disaster.
What caused the great famine in the Great Leap Forward was that the state took the optimistic forged numbers as accurate.
They thought they were taking the surplus grain for the state, but actually they were taking everything, leaving the locals with nothing.
An interesting insight from a Kurtzgesagt video:
Stories are us sharing our most interesting simulations of what people would do in a situation.
Humans love sharing stories.
The default Google platform pitch: "Developers, we are so excited for you to tell us why this thing we built is useful!"
LLMs are like oxidation is for life.
Critical and yet not sufficient.
The force needs to be harnessed in the right structure to unleash its power in coherent ways.
A friend is delighted that their skill in Bash is now relevant again.
"Imagine learning to fly as a WWII pilot, and then 60 years later, a friend says, 'We're making a spaceship; the cockpit looks like a WWII fighter plane. Suit up!'"
There’s never a good time to decentralize.
This is especially true if you’re the powerful central actor.
A decentralized and open system will typically run hotter for the same amount of functionality.
The compounding network effects get stronger because every marginal new participant now has fewer reasons to not join.
The “hmmmm if I join I’ll be beholden to this ever-more-powerful feudal lord, maybe I shouldn’t…” doesn’t exist.
The reason that there’s never a good time to decentralize is the same reason there’s never a good time to plant a tree.
The short term has a significant cost and the overwhelming value doesn’t happen until much later.
This is the reason modern society optimizes for Gilded Turds over Grubby Truffles.
Google’s PageRank paper is legendary.
People forget that it wasn’t just about an emergent and clever ranking algorithm.
Just as mindblowing (and perhaps more so) was the idea of running a search engine on a cluster of cheap, unreliable Linux machines.
The infrastructure was a core differentiator from day one.
Open ecosystems often have Optional Catalytic Complements.
Open ecosystems are amazing.
They can become ubiquitous, because no one is afraid that by joining they’ll become indebted to some powerful actor.
But open ecosystems are also hard to coordinate.
There are some complement roles in ecosystems that are technically optional, but when they work correctly can catalyze significantly more value for the ecosystem overall.
These are things like a package registry, or a payments provider everyone uses, or a search engine.
GitHub is another example.
Git allows an infinite variety of different decentralized workflows.
GitHub provides a single, mostly centralized convention.
You can still do more complex workflows if you want to, but you almost never will.
The benefit of being in the place everyone else is and using the same conventions as them is too valuable.
In these cases, the complement having solved the coordination problem allows the open components to run even hotter.
That works because the complement is optional: the ecosystem can route around it if it gets too greedy.
For these OCCs, everybody benefits if there’s an option everybody likes.
That allows you to take it for granted.
Supercharging the ecosystem for everyone.
The emergent drive towards centralization scales with the number of peers and the difficulty of forging a bilateral agreement.
If there aren’t many peers, the n-squared hasn’t gotten big enough to matter yet.
If a bilateral “agreement” is trivial (e.g. validating an SSL cert with no other pre-agreement), then there’s no need to centralize.
But the multiplication of those two factors can lead to a centralizing force that grows to significant strength.
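That multiplication can be made concrete with a toy model (the formula is my gloss on the claim, not anything standard): the pull toward a central hub is roughly the number of pairwise relationships times the cost of negotiating each one.

```python
def centralizing_pull(peers, agreement_cost):
    """Toy model: pull toward a hub grows with the number of pairwise
    relationships (n choose 2) times the cost of forging each one."""
    pairs = peers * (peers - 1) // 2
    return pairs * agreement_cost
```

A handful of peers with near-free agreements produces almost no pull; hundreds of peers each needing a bespoke contract produces an enormous one.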
This week I learned about the history of Secure Electronic Transactions (SET).
In the early days of the internet, no one thought consumers would feel comfortable putting their credit card into websites.
If you asked users why they weren’t buying things online, that’s what they reported as being the blocker.
So an industry coalition set out to do a proper, deep solution: SET.
Then Amazon came out and people went, “Oh I can buy whatever book I want on here? Here’s my credit card!”
The limiting factor was not the purported strength of the technical solution but the amount of demand.
If the demand is strong enough, then even a hacky solution is good enough.
Another example of a SOAP being beaten by a REST.
Spilling wine on a carpet is easy.
Taking spilled wine and putting it back in the cup, clean, is a magic trick.
It requires running the universe against its normal gradient and plucking something novel and new out of the opportunity space.
That magic trick is what a startup does when it gets PMF.
The best startup founders know a secret.
It's not just that they have the entrepreneurial drive.
It's that they know something, deep in their bones, that no one else knows.
A secret that changes the game that only they know, because of their experience or insight.
A secret that is burning them up inside and they just have to execute or they'll die.
Big companies have a hard time being brave.
It doesn't just take one person being brave.
Up and down the stack, it requires everyone to decide to be brave.
At every layer there will be a constant gravitational pull towards the median, safe, status quo.
Being brave at a large company can only happen with a well-respected founder.
Race-to-the-middle advice can only ever make you good, never great.
McKinsey is like LLMs, in that they both give you advice that will help you race to the middle.
Some people make progress on problems by tackling them to the ground.
That’s the brawler mode.
Other people make progress by doing judo.
Studying them, identifying their leverage point, and flipping them to the ground.
The former always works if you're strong enough.
The latter won't work if you aren't clever enough, or you don't have enough time for an opportunity to show up... or a big-enough brawler beats you to it.
If you don't have to commit in advance what you want to achieve, you get an asymmetric advantage.
Any move that works allows you to win.
This is why offense is easier than defense.
Defense requires preventing all offensive moves.
Offense only requires one move to work.
The ideal team member is autonomous and connected: engaged.
Tightly aligned, loosely coupled.
People who aren’t on the bus are “zeds.”
If zeds are on the team, they’ll be default-diverging energy.
They can’t be on the team at the early stages, or nothing will ever be produced.
Convergent curiosity is different from divergent curiosity.
One builds up what it’s applied to, and one tears down what it’s applied to.
Both can create significant new value, but they do it in different ways.
Divergent curiosity must be held at arm’s length, to not damage the central thing, unless it finds something significantly better than the current thing.
What’s worse than one-ply thinkers? One-ply thinkers who think they’re multi-ply thinkers.
They think they’re a step ahead, but they’re actually a step behind.
At least one-ply thinkers know they aren’t thinking ahead.
That means a one-ply thinker can be convinced by a coherent argument.
You must make your product useful before you make it polished.
Otherwise you’ll just gild a turd.
In software, strategic value emerges where the meaningful state accretes.
This is different in a world of hardware.
For example, the UX / app-defaults of Waymo is not the primary differentiator.
In the world of hardware, the “meaningful state” is the accumulated hardware.
A capital cost moat that makes it increasingly hard for competitors to come after.
You can only optimize a system to the extent you've formally captured all of the salient dimensions.
In the limit, there must always be a salient dimension you have not captured.
The users will teach you something.
Make sure it's what you want to learn!
Compelling demos are magic tricks.
The demoer has to really sweat the details to create a compelling illusion.
Like any good magic trick, it has the Pledge, the Turn, and the Prestige.
One weird trick for solving a coordination problem: find a bigger enemy to rally against.
Unfortunately, there are a couple of problems with this:
1) It’s fundamentally toxic, leaning into an us-vs-them trap.
2) At a certain point there isn’t a bigger enemy to fight.
At the beginning a little pressure brings a community together.
In the limit it smashes it apart.
The two most important quality dimensions of a college: graduation rate and yield.
Acceptance rate doesn’t matter: people apply to tons of colleges they don’t actually plan to go to, including lots of stretch options.
Yield is “when the time comes to pick one college, do students pick this one?”
That requires students to collapse the wave function and pick, and that moment is where insight is generated.
Lots of Swarm Sifting Sort algorithms are based on discovering that authentic moment of wavefunction collapse and harvesting the insight.
An unreasonably effective technique: plot many graphs of different dimensions.
Humans are very good at noticing patterns.
Data often contains huge numbers of insights, but they are lost amongst the noise.
Often the dimensions that are most predictive are not obvious ahead of time.
If you generate tons and tons of graphs of different dimensions plotted against each other, you can skim through quickly and find the most interesting ones.
After you find them, it often becomes easy, in retrospect, to go “of course.”
Don’t try to guess ahead of time which combinations are most powerful.
You’ll miss the most powerful ones if they happen to not come to mind.
Discover the most powerful ones and then unpack why they are most interesting.
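A tiny pure-Python sketch of the same idea, using correlation strength as a stand-in for "interesting" (in practice you'd eyeball the actual plots, e.g. a scatter-matrix; the helper names here are made up):

```python
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_pairs(data):
    """data: dict of dimension name -> list of values.
    Score every pair of dimensions and return them strongest-first,
    so you skim the top of the list instead of guessing which
    combinations matter ahead of time."""
    scored = [(abs(pearson(data[a], data[b])), a, b)
              for a, b in combinations(sorted(data), 2)]
    return sorted(scored, reverse=True)
```

Correlation only catches linear relationships, so plotting everything is still strictly better: human eyes notice shapes no single summary statistic will.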
“Brenda’s spreadsheet” has unreasonable staying power in organizations.
Imagine a load-bearing business process, currently supported by a single jury-rigged spreadsheet authored by Brenda.
Consultants are brought in to build The Sustainable Answer.
They interview everyone who relies on the process, and everyone tells them what they need.
The consultants build the process… and then everyone keeps on using Brenda’s spreadsheet.
Why?
Brenda’s spreadsheet has captured the unknown knowns… the parts of the process all of the experts can sense but no one can put into words.
Plus, a formal process doesn’t provide a single neck to choke.
One reason flim-flam works is because people want to be convinced.
Kayfabe requires the audience to willfully go along with it.
Everyone can see that suspending disbelief is short term useful to them.
This week I learned that left-handers sometimes think about where they sit in order to not conflict with other eaters.
When everyone is right-handed at a meal table, everyone has exclusive space to their right, and no conflicts happen.
When a left-hander and a right-hander sit next to each other, there is a zone where they can each crowd each other.
Right-handers only experience this when sitting to the left of a left-hander.
Left-handers experience this nearly every time.
As a result, left-handers will sometimes seek out seating positions at the left edge of a table.
This is a thing that has literally never occurred to me in my life, since it so rarely affects me.
Another example of how privilege, even small amounts, is invisible to the beneficiaries.
An insightful HackerNews comment about the extraordinary privilege to the US of owning the world reserve currency.
It’s something that we take for granted, but is an extraordinary source of power.
The things that produce that power are a collective belief across the world and a game-theoretic equilibrium.
It allows the US to do things no other country could do, and defy gravity.
Those kinds of equilibria can be in super-critical states, where they appear stable but are actually on the precipice of an explosive avalanche.
When you stop seeing yourself as an individual and instead a part of an emergent swarm, it’s like a dolly-zoom.
The iconic shot first used in Jaws.
Nothing changed, and yet something fundamental has changed.
A whooshing feeling that’s impossible to not feel.
One example: “You aren’t in traffic, you are traffic.”
It places the perspective outside of yourself, and you see yourself as part of an emergent phenomenon that is much bigger than you or any other entity.
I loved this deep dive on modern rendering culling techniques.
I’ve always wondered how immersive modern open world games can pull it off.
A strategy in biology: go dormant and wait for favorable situations.
A seed of possibility.
We are all a bundle of our own particular scar tissue that we carry.
It hurt to accumulate, and it makes us who we are.
Your own mind is the ultimate demonstration of the power of emergence.
Billions of cells in just the right configuration, and your consciousness emerges!
Why is the phrase “You’re enough” so powerful?
Everyone always wants to be better.
We’re human, we’re never satisfied.
So it’s easy for us to forget that the only bar we need to clear to deserve love is to exist.
It’s useful to remind ourselves of that.