Four tiers of privacy in cloud computing:
1) Cloud apps: couch surfing in someone else's apartment.
The host can do whatever they want.
2) Cloud VMs: renting an apartment.
The landlord can come in but only in an emergency.
3) Trusted Execution Environments: an embassy.
Snooping would be an act of war.
4) ZK Proofs: a volcano lair on a remote island.
Society would have to have broken down.
Each tier gives you an order of magnitude better protection.
The vast majority of the cloud is in tier 1 or 2 today.
Moving most computing to tier 3 would be a massive improvement.
Tier 4 is overkill for most situations.
Your family pictures will definitely be safe in the embassy.
Anthropic rolled out a new safety feature optimized for “model welfare.”
Obviously this is a reasonable feature given the topics that it cuts off.
But the framing of “we do this for model welfare” raises my eyebrow.
This is one of the ways you can tell the major model labs have lots of true believers trying to summon a new god.
Being in a position where you’re able to blackmail everyone is an easy path to becoming a central dictator.
Even if you don't intend to!
Power is corrupting.
“Can’t be evil” is better than “won’t be evil.”
Stratechery on the GPT-5 model upgrade:
"Would building on AI be like building for the PC in the late 1980s and early 1990s, where you wrote an application that barely worked, confident that you could, when available, seamlessly drop in a new model and get better performance instantly? Or would each model be so unique that every single model update would require an extensive rewrite of the product incorporating it?
The answer to this question does seem to be closer to the former: AI products can incorporate new models fairly seamlessly. The exception, at least in this particular case, appears to be ChatGPT! The problem isn’t in the actual functionality; you work with ChatGPT 5 just like you worked with ChatGPT 4o. Rather, the problem for this set of users is the personality. They aren’t bothered by the fact that 4o wasn’t nearly as good of a model as 5, or that it didn’t have the capability of using a reasoning model; they’re bothered that the personality is different."
"More generally, I feel better than ever about my position that (1) people predicting imminent super intelligence that takes over the world and renders all economic activity worthless are being ridiculous and (2) people predicting that AI is a nothing-burger that won’t amount to anything meaningful are also being ridiculous. AI continues to be a big deal, and even if all progress on the leading edge stopped today, the product overhang remains massive."
Chat will be the universal fallback UX modality.
LLMs now make it plausible in any scenario.
Every user can do it intuitively.
It's only great for a single player kicking off open-ended tasks.
But it's good enough for anything.
Vibe coding is like driving with rear-wheel steering.
It’s fine at small angles.
But impossible at large angles.
It’s auto-decohering.
LLMs feeding on their own output can become an auto-decohering rat’s nest.
Even if the LLM is right 90% of the time, if it builds on its outputs dozens of times the likelihood it’s right is tiny.
You need humans in the loop, cleaning, curating, steering.
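A back-of-envelope sketch of that compounding, assuming each step is independently right 90% of the time:

```python
# Chance a chain of LLM steps is still correct if each step is right 90%
# of the time (an illustrative assumption, not a measurement).
for steps in (1, 5, 10, 25, 50):
    print(f"{steps:>2} steps -> {0.9 ** steps:.1%} chance of still being right")
```

By 25 steps you're down around 7%; by 50, under 1%.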
How early in a compounding process do you find the error?
If it's late, it can be super-linearly disruptive.
We haven't had too many computing-driven compounding processes before.
If it's a set of humans doing it, one of them will notice at some point and go "wait, what?"
But when an LLM is “yes, anding” its own output with no curation from someone with external judgment, it can get into an auto-catalysing rat's nest of confusion.
UI is only easy to verify by sight if you actually see it.
If it's hidden (e.g. behind a looping construct) you won't notice if it's wrong, and an error there can compound.
Vibe surfing is still a useful pattern for curating an LLM’s output in a natural way.
LLM UIs have gotten much more convergent: "here is the answer, where should we go from here" vs. "here are 5 options, pick one."
Vibe surfing is a useful pattern where the LLM and the human are in a coactive loop.
But those loops don't really exist in that many systems right now, especially as the model quality has gotten better.
But the quality will never be 100%, so for long-lived tasks it will always be useful!
LLMs can absorb an implicit pattern from examples.
They can do this without you needing to systematize it.
That allows the pattern to stay fuzzy.
Systematization is about making something black and white, quantitative, unambiguous.
A lot of nuance and nebulosity and resonance is removed in that process.
It’s also an extremely expensive process to distill those rules!
LLMs can do qualitative nuance at quantitative scale.
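A tiny sketch of what that looks like in practice, with a hypothetical ask_llm standing in for whatever model call you use; the "rule" is never written down, it's implicit in the examples:

```python
# Hypothetical few-shot prompt: the pattern lives entirely in the
# examples; nobody ever distilled it into an explicit rule.
prompt = """Label the tone of each message:
"Fine. Whatever you want." -> passive-aggressive
"Thanks so much, this made my day!" -> warm
"Per my last email..." -> passive-aggressive
"Happy to help anytime." -> warm
"I guess that works." ->"""

# ask_llm(prompt) would plausibly complete this with 'passive-aggressive',
# having absorbed the pattern without anyone systematizing it.
```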
Strava obscures the precise starting and end points of your ride for privacy.
I want anyone who follows me on Strava and who I also give location sharing to in FindMy to be able to see the precise starting point.
That's structurally impossible in the same origin paradigm!
Strava and FindMy are separate universes and aren't allowed to share what they know between them by default, in order to discover that overlap.
They'd have to do a bespoke integration... likely way too low a priority for both sides to even consider.
This is one example of the daily papercuts we just take for granted in the Same Origin Paradigm.
Google has a ton of your information, but unstructured.
There’s no internal ontology, no way for you to tell it, “this is my grandmother.”
ChatGPT attempts to create an ontology, but indirectly via chats.
It does it in an ad hoc way you can’t easily inspect or correct.
LLMs could plausibly help, but it will need to be a coactive process.
LLMs get off the rails easily, and those problems compound.
Every action you take to curate / correct / confirm could give an exponential benefit down the line.
If users can feel that, then it would be a killer product.
There's a kind of personality who will value even the small wins of cleaning up context.
These are, e.g., people who already use Obsidian or Notion.
Those small 'aha' moments give them the oomph to keep using it.
Compare that to someone who expects the LLM-powered system to magically automate everything; they will be disappointed until it achieves a real-world outcome for them: a high bar!
A sweet spot for the coactive fabric at the very beginning: people who use Claude Code and also Obsidian or Notion.
The thrill of it auto-organizing some data for them is enough to keep them going.
The original ChatGPT was an upgrade in usability more than capability.
Sometimes the right UX activates a previously latent possibility.
Apps are the wrong soil for the seeds of LLMs to blossom into their full potential.
When you put data onto a shelf you expect it to stay put.
When you put a seed in a garden you expect it to bloom.
Don't think of it as automation, which is mechanistic.
Think of it as gardening suggestions.
As you garden, they get better and better.
If you’re gardening the suggestions, constantly curating, tending to what the garden is producing, giving it feedback, it has a naturally alive feeling.
I use Obsidian a ton, but the actual feature set I use is incredibly small.
1) The ability to easily create a pointer to another file by typing ‘[[‘ and then having a nice autocomplete (that also makes it easy to intentionally create a stub).
2) Being able to navigate quickly to a page by typing a key command and then typing a few letters of the name in an autocomplete.
3) Knowing that I can visualize backlinks pointing to any page.
4) Knowing that if I rename a page, all of the backlinks will update.
5) A single layer of folders (for example, for daily pages, or for a 'people' folder)
That’s it!
This week in “we’re in the wild west era”:
"Sloppy AI defenses take cybersecurity back to the 1990s, researchers say"
"GPT-4o still outperforms GPT-5 on hardened [security] benchmarks across the board."
“GitHub Copilot RCE Vulnerability via Prompt Injection Leads to Full System Compromise”
The dangers of sycosocial relationships
Rolling Stone: ChatGPT Lured Him Down a Philosophical Rabbit Hole. Then He Had to Find a Way Out
Ars Technica: "ChatGPT users hate GPT-5’s “overworked secretary” energy, miss their GPT-4o buddy"
Good Work / Dan Toomey: “Is ChatGPT Therapy a horrible idea?”
“When you’re using AI, make sure AI isn’t using you”
Sage advice from a YouTube comedian!
In the world of infinite software, users will need to tolerate a lot of false positives and not have them be too expensive or distracting.
Tools for thought can't do open-ended action, so they only work for people where organizing thoughts is an end in and of itself.
To do things for you requires Turing-completeness.
That would unlock a significantly larger market.
The thing that aligns with your aspirations has to be a bonus use case.
Otherwise it will be too annoying for a primary use case.
Three pace layers of prototyping with LLMs:
1) The LLM does everything.
Expensive, loosey-goosey, flexible.
2) The LLM behavior is sublimated into a mechanistic harness that can be run inside of other systems.
Normal code that calls out to LLMs within itself.
3) The LLM creates mechanistic behavior that runs on its own until something needs to change.
For example, the LLM writes a list of regular expressions to execute on inputs.
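A minimal sketch of that third layer, again with a hypothetical ask_llm call: the LLM is consulted once to mint mechanistic rules, which then run cheaply and deterministically on every input:

```python
import re

def compile_rules(ask_llm, task: str) -> list[re.Pattern]:
    # One expensive, loosey-goosey LLM call mints the mechanistic rules...
    raw = ask_llm(f"Emit one regex per line (no prose) matching: {task}")
    return [re.compile(line) for line in raw.splitlines() if line.strip()]

def matches(rules: list[re.Pattern], text: str) -> bool:
    # ...then every input runs through cheap, deterministic execution.
    return any(rule.search(text) for rule in rules)

# Only when the rules start missing cases do you drop back to layer 1 or 2
# and ask the LLM to revise them.
```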
The definition of what it means to “write code” has changed over the years.
Does only assembly count?
What about C?
What about JavaScript?
What about vibe prompting?
The printing press kicked off the enlightenment.
LLMs have the potential to be the printing press for Turing completeness.
The printing press led to more literacy.
There was more to read, which made reading more valuable.
Which then led to more demand for books, which compounded.
The movable-type printing press was invented in China and Korea hundreds of years before Gutenberg.
However, those scripts had way more characters, so it wasn’t plausible to scale.
It took the printing press plus the Latin alphabet to explode its potential.
Vibe coding today is like China inventing the printing press: the invention exists, but code is still too hard to write safely for it to explode.
How can you make Turing-complete code as natural to speak as any other language?
Mass scale emergent programming.
Computer literacy at a massive scale: we can't even predict how large the impact on society could be.
LLMs can pore over the many emails whose details you missed because you got bored.
A power of infinite patience!
They can assist you with things that you don’t even bother to do today.
All upside!
Someone this week told me about a medical situation he had.
He needed elective surgery and had to shop around among many different providers.
He had a body scan that clearly showed the situation and need for surgery, but he had to describe the needs and history in various forms for each provider each time.
Sending the raw scan out to all of the providers to have them bid on it would be terrifying… but also very convenient.
How can you take away the terrifying part and just leave the convenient part?
What if the providers could come to his scan, bid on it, and not be able to communicate back across the network about the scan?
Any static quality function can’t be intelligent.
Being able to innovate new tools is what allows adaptation.
The LLM is not adapted for the non-stationary environment.
It’s a snapshot of when it was trained.
The model can’t shift but the memory pace layer can.
Pictures are different from images.
They have curation.
The things that are replicated are not a random sample but a curated sample.
The subset of images that people found useful or interesting.
To kids, “gay marriage” isn’t a thing.
It’s just “marriage.”
Of course marriage can be between two men or two women.
It’s only people who were alive before the change who would even think to distinguish, or realize there even is something to distinguish.
(Of course, it’s always possible we backslide as society…)
A definition of intelligence I had never come across before: "efficient cross domain maximization"
The person who happens to be the product manager of the most explosive new product category will think they’re a genius.
When in reality the demand for the product is so great it’s hard to have messed it up.
When Google Maps launched its massive UI revamp more than a decade ago, the main user success metrics barely budged.
People used Google Maps to accomplish tasks that they needed to do, so no matter what the UI was like, they’d power through.
Anything minimally competent would have had a similarly great outcome.
The skill of the PM only matters that much in middling ideas, not in fundamentally great ones.
The coactive fabric is not about micro apps.
It's a different kind of thing.
It's a swarm of tools so small that you don't even think about them.
They just do things for you, emergently.
The tools are swarming in the fabric.
What’s the equivalent of an “app” in the fabric?
There isn’t one.
It will be instead like "the computer did what I wanted it to.”
Stowe Boyd used to write about "Me First" collaboration.
The action of doing something for yourself that can then be summarized for the entire community.
People don’t do it primarily for the community; they do it for themselves and the fact it helps the community emergently is a bonus.
These kinds of systems are significantly easier to activate and get compounding loops in.
The incentive is direct and immediate.
What would a Reverse Google look like?
Normal Google requires publishers to put content up proactively that might be indexed and served to users at some point in the future.
A pull model.
What would a push model look like?
What if the internet came to you?
This requires qualitative nuance at quantitative scale.
LLMs provide this.
It would also require a different security model, one that doesn’t leak a user’s intent to a swarm of publishers.
Yes, and: We need tech that scales and preserves nuance.
Systems that are efficient and humane.
Builders who can code and contemplate second-order effects.
The age of AI demands synthesis, not reduction.
Bright tech is fresh, compelling, optimistic.
I can’t stop thinking about this meme.
Clippy: "It looks like you're trying to solve intractable human problems with technology"
It sums up the tech industry’s approach to the nuance of human / social problems.
One ply thinking that misjudges the nuance and complexity by an order of magnitude or more.
This week I came across Sundae Labs, a project by John Vervaeke.
"Let's Make Technology Virtuous
We, the makers of the machines, are more than the sum of our parts.
Let’s domesticate tech to aid human flourishing."
Hear, hear!
My favorite word for Christopher Alexander’s “quality that cannot be named” is resonance.
Something that is aligned at every level.
You like it intuitively and the closer you consider it, the more you like it.
Resonance is fundamentally about authenticity.
I think of resonance as being aligned at multiple dimensions and also good.
But resonance technically just means aligned at multiple dimensions.
Sometimes that emergent result is bad.
For example, soldiers marching across a suspension bridge causing it to collapse.
Resonance tells you two things match, not that they're good.
Homeostasis doesn't ask if the baseline is healthy, just that it keeps it going.
Resonance locks different dimensions together.
It's important they lock into something good, like your aspirations.
That's the thing to anchor off of.
Tech addiction is a kind of limbic resonance expanded to a grotesque society scale.
Tech companies find a weird human addiction and resonate it to an inhuman scale.
"unregretted user-minutes" is actually a good metric, if it didn’t have unfortunate associations…
Does it resonate with your limbic system or your values?
If it resonates with your values, then that’s what matters.
The security model in a platform protects users, even if none of them know what it is.
A couple of months ago Anthropic announced they are doing confidential inference via confidential compute.
This is great!
This gives Anthropic two benefits:
1) Users don’t have to worry that their cloud host can see the user’s queries.
2) Anthropic doesn’t have to worry that the cloud host can see their model weights.
The latter is probably more important to Anthropic.
The former is nice, but users still have to trust Anthropic to do what they say and not peek.
If one sentence in a document is top-secret, the entire document is top-secret.
Relatedly, sanitation people have a saying:
Q: What do you get when you mix a gallon of clean water with a gallon of sewage?
A: Two gallons of sewage.
The app store is not just a distribution thing, it's a load bearing part of the security model.
It requires a single gatekeeper.
That is a powerful, and thus corrupting, position to be in.
It makes a thing that could be open-ended, close-ended.
An infinite difference.
Things that emerge are shaped by their constraints.
Ecosystems are emergent.
Software ecosystems are shaped by their constraints.
The most foundational constraints are the security model.
If you want a new kind of ecosystem to blossom you need to focus on the constraints.
If you change the physics you can change everything.
Because everything emerges out of the physics.
The ceremony of signing some contracts in person is load bearing.
The ceremony of it underlines how significant it is.
"I have to go to a specific place and use a pen?"
Takes you out of the ordinary flow to make you think "... are you really sure?"
Makes it harder for someone to say, “did you not even read the contract or realize how important it was?”
If you have a stochastic security system you need a whole bunch of layers of Swiss cheese.
In the Swiss cheese model, the chance that an exploit makes it through all the layers drops multiplicatively with each added layer.
The more holey the cheese, the more layers you need.
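A sketch of that arithmetic, assuming the layers fail independently:

```python
from math import log

# If each independent layer lets an exploit through with probability p
# (its holes), the chance of penetrating all n layers is p ** n.
for p in (0.2, 0.5):  # less holey vs. more holey cheese
    for n in (1, 3, 5):
        print(f"p={p}, {n} layers -> breach chance {p ** n:.3%}")

# Matching the protection of p=0.2 cheese with p=0.5 cheese takes
# log(0.2) / log(0.5) ~= 2.3x as many layers.
print(log(0.2) / log(0.5))
```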
Functional reactive systems can be interpreted as a blackboard system.
But only within one cooperative swarm: the routines have to come from the same entity.
The goodness of the individual actions in a system and the goodness of the emergent result are distinct.
It’s possible, and even common, to have all of the individual actions be uniformly good, but the emergent result be bad.
“The road to hell is paved with good intentions”
It’s a 2x2:
Local Decoherence / Global Decoherence: Chaos
Local Coherence / Global Decoherence: Kafka-esque systems.
Evil results, but they persist because no one can point to the evil actions that cause it.
Local Decoherence / Global Coherence: Pandemic lockdowns.
Bad for each individual; good for emergently stemming the tide of the disease.
Local Coherence / Global Coherence: Transcendence.
Resonant systems that are good at all dimensions.
Individuals feel rewarded and authentic and the emergent output is also good for society.
A system where everyone in it does the right thing but the emergent result is a bad thing is a bad system.
Saying precisely what you mean is hard no matter which language you use.
Claude and GPT5 feel good at different kinds of code.
Integration-hard is a straightforward long slog.
It can sometimes be sped up by assigning more engineers to it.
Algorithm-hard is a complex puzzle with one solution.
Often the final algorithm is just a few thousand lines of code.
Adding more people to this kind of problem never speeds it up.
Claude is great at integration-hard code; ChatGPT feels better at algorithm-hard code.
Knobs and dials are things that give customizability and wrinkle a product.
They allow it to be morphed to fit a different use case.
Wrinkled products are harder to develop, market, test, maintain, and coordinate on within a team.
So over time hyperscale products tend to get smoother and smoother.
To get scale you need to reduce its fit to any one case.
One size fits none.
Consumer products work better than B2B for the “swarm of P2s” product strategy.
If you have a successful product with people who care about it, often the things the employees work on naturally improve it and make it a more successful product.
Just making the product more what it wants to be.
This strategy works better for consumer products than B2B, because most of the employees working on it are also users, so they have a calibrated intuition of what features to add to make it better for themselves (and likely others).
The scavenger mindset strategy needs a large inventory to curate over.
The scavenger mindset is not “create new things,” it’s “take existing things that I can take for granted and combine them into novel combinations.”
The things in your inventory are:
Existing products in your portfolio
Pre-existing prototypes
Existence proofs of viable products from competitors
Team strengths
The larger the inventory, the more likely there’s a chocolate-and-peanut-butter game changing combination to discover.
Signals, to be high quality when summarized at the collective level, need to be based on individually authentic observations.
If they aren't, you get something default divergent.
Emergence in systems is driven by the population of people who care enough to show up and represent their interests on an ongoing basis.
That authentic belief is why Wikipedia works.
When Wikipedia forks, only the zealots go to the fork, and that bias makes it more and more extreme in that direction.
That's one of the reasons everyone stays on the one that everyone agrees is at least minimally reasonable.
The metagame evolves fundamentally out of the constraints, the medium, the laws of physics.
Moving from mass surveillance to mass spying changes the metagame.
Forced transparency changes the meta.
AI transcription and recording for all meetings doesn't lead to authentic transparency but something else.
People would mention the thing that they're supposed to mention, knowing that it will be summarized into notes for the CEO.
The informal spaces immediately adjacent to the formal spaces are load bearing.
Japan’s tradition of going out and getting drunk with your team: nominication.
Sauna culture.
Smoke breaks (back when that was a thing).
A long time ago, to be a painter, you also had to be a chemist.
That used to be true for photography, too.
It used to be that to be an artist you had to be a drafter and a curator.
Now you can just be a curator.
Adrian Bliss satirizes “Making art in 2026.”
Working to make AI do what humans can do shows how amazing humans are.
Moravec's paradox is under-appreciated.
Maybe humans are orders of magnitude more complex than we think we are.
When you get vertigo you realize how computationally intensive just standing is.
Meaning is never efficient.
Meaning comes from cost.
Technologies don’t disappear, we just get more of them.
The same is true of art forms.
At one point Kevin Kelly wanted to know if any technology had disappeared.
NPR ran a challenge, and even for the weirdest ones, he was able to find someone who still made them.
What technology gives us is choices.
You can still do old things if you want, but you also get Midjourney.
Our questions are often similar.
If you have the same question but go to a different source you'll get a different answer.
The monoculture of LLMs leads to everyone having the same answers to the same questions.
I heard of someone who thought someone else had plagiarized their essay.
Turns out they had both used ChatGPT, and asked it similar questions.
Would you want to be able to read your therapist’s mind?
“Alex is still hung up on X but thinks he’s over it.”
The Johari Window is a 2x2.
On one dimension: what you know about yourself vs not.
On another dimension: what others know about you vs not.
Even if you had a socially perceptive LLM helping you out, it might do the wrong thing if it doesn’t know details of your physical situation in the moment.
The right recommendation requires knowing the right context.
Storytelling is a key way to pass on information.
No one believes Peter Rabbit is real.
But it helps transmit cultural knowledge: “if you don’t follow the norms you might get injured or killed.”
It’s hard to learn norms when you don’t see them violated.
Stories are great for that.
A story connects 9 dots out of 10 for you, so it's easier to apply to yourself without it feeling forced.
Sometimes a movie can affect you in surprising ways in that moment.
You connect some of the dots yourself in a way that feels authentic to you.
It’s about your own response to this shared artifact in the world.
But what if there was a movie made specifically for you?
Would it be as easy to accept any insights from it, or feel like they were being deliberately incepted onto you?
When I first heard of “K-Pop Demon Hunters” I figured it would be total slop.
Just some random fever dream concocted by the Netflix algorithm.
But unexpectedly it’s one of the most fresh and original mainstream movies I’ve seen, the anti-slop.
Novelty you can control is more interesting than novelty you can’t control.
The fact you pull the lever on a slot machine makes it more intriguing.
Kids choose to play with the toys that have random / unpredictable behavior.
We pay attention to meaningful novelty.
Static on an old TV is totally novel but also meaningless.
Facebook’s feed, like any ranking problem, is often modeled as a ‘multi-armed bandit’ optimization.
‘Bandit’ here comes from ‘one-armed bandit,’ a nickname for a slot machine.
The slot machine is called a 'bandit' because it steals your money!
Maybe it shouldn’t be a thing we optimize for and feel good about!
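For reference, a minimal epsilon-greedy version of that bandit framing, with payout rates invented purely for illustration:

```python
import random

# Each 'arm' is an item a feed could show; rewards are clicks.
def epsilon_greedy(true_rates, rounds=10_000, epsilon=0.1):
    counts = [0] * len(true_rates)
    values = [0.0] * len(true_rates)  # running mean reward per arm
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.randrange(len(true_rates))  # explore
        else:
            arm = max(range(len(true_rates)), key=values.__getitem__)  # exploit
        reward = 1.0 if random.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # update mean
    return counts

print(epsilon_greedy([0.02, 0.05, 0.11]))  # the stickiest arm gets most pulls
```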
If you can tell a story about it, it’s more likely to happen.
If you need multiple people to work together for it to happen, it’s easier to coordinate if they can understand how it could work: a story.
LLMs started with one snapshot of society.
How will stories evolve now that we have this self-replicating technology?
We’re never getting rid of React!
We're stuck with it for all time now.
Apparently alphabets stopped morphing when dictionaries were published.
Stories are the most potent compression possible.
The protocol humans use to connect to others… and ourselves.
Stories are a very potent molecule of meaning.
They survive all kinds of retelling.
Not just noise, a narrative that resonates.
One way to think of cellular automata is millions of copies of the same routine, connected to their neighbors.
Everything emerges entirely out of local state.
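A minimal version of that idea, an elementary cellular automaton (Rule 110): one routine stamped across every cell, each reading only its immediate neighbors:

```python
# Each cell's next state is a pure function of (left, self, right).
def step(cells: list[int], rule: int = 110) -> list[int]:
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1] + [0] * 31  # a single live cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)  # rich global structure from purely local state
```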
Computers can play chess better than humans.
But we don't enjoy watching computers play chess.
In the late 18th century there was a new technology to print faster.
There was a flood of soft core porn.
Apparently “Let them eat cake” is actually a meme from a softcore lesbian porn about Marie Antoinette.
The French Revolution was catalyzed partially based on memes and slop!
The pamphlets didn’t have to be correct to cause people to do big things.
There were pamphlets like Common Sense, but most were low-quality slop.
Systems have a concave region and a convex region.
If you perturb it a small bit, it stays in the concave region and returns to the balance point.
Default cohering.
If you perturb it a lot, it gets into the convex region and starts accelerating away from the balance point.
Default decohering.
Almost all systems have both: a bowl on a pedestal.
The lip of the bowl is the critical region, the line between the two modalities.
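A toy version of that picture, with a potential invented purely to illustrate the shape; the basin's lip sits at |x| = 2:

```python
# Gradient dynamics on V(x) = x**2 - x**4 / 8: a stable basin around 0,
# runaway past the lip at |x| = 2.
def settle(x: float, dt: float = 0.01, steps: int = 2000) -> float:
    for _ in range(steps):
        x -= dt * (2 * x - 0.5 * x ** 3)  # follow -V'(x)
        if abs(x) > 10:  # past the lip: default decohering
            return float("inf") if x > 0 else float("-inf")
    return x

print(settle(1.9))  # inside the lip: returns toward the balance point
print(settle(2.1))  # past the lip: accelerates away
```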
A high trust team can achieve miracles together.
The thing that makes it work is that every team member believes in the power and legitimacy of the collective.
They are willing to subordinate their own desires to what is good for the group.
By doing that without being coerced, their authentic goals and desires are able to be transferred to the group, allowing it to emerge into something way beyond what it was before.
If a given organization fractures from one high-trust group to sub-societies that compete, the magic of that collective is lost.
Mediocrity is easy as an individual and expensive as a group.
Does the thing you're doing align with others?
An authentic belief that everyone in the organization shares allows transcendence.
The group coheres, naturally.
Default cohering.
Things become easy.
Trust is an iterative game.
It's orders of magnitude harder to earn trust than to lose it.
It takes many good faith actions to slowly build trust.
It takes a single bad faith action to erase all of it in one go.
Apparently diversity of authors in a paper predicts how often it will be cited.
Diversity here meaning “how closely related are the authors”
e.g. are they in the same departments?
The same schools?
If the authors are closely related, the paper is more likely to be cited in 10 years.
If they're not that closely related, it's more likely to be cited in 20 years.
The plumbing is never fun.
It's important, but it never does anything new.
The plumbing is the foundation.
Important but never fun.
Sometimes urgent, when it's spraying water everywhere.
The plumbing never feels like it unlocks anything great, it’s just a baseline thing that must be done.
We all have blindspots that we can't see.
So we keep forgetting they exist!
The curse of blind spots is not that we have them, it’s that we can’t see them.
It takes active effort to remember they exist, and no matter how hard you try, you will forget about them and they will bite you and others around you.
Leaders always think: "My door is always open, people feel comfortable pushing back against me, I'm not like the other leaders"
And yet those statements are never as true for a given leader as they think they are.
Even if the leader does have an open door, it’s less open than they think it is.
Gossip in the small is load bearing, the nervous system of the group.
Gossip works great in a small high trust environment.
Gossip at mass-media scale is a totally different thing.
TV is one to many.
Now we have the ephemerality of oral tradition, but at global scale.
Uh oh!
The biggest story we tell is that there’s an individual voice in our head that is totally distinct from our context.
But the self vs. other divide is not real!
We do have a natural, necessary sense of self.
One downside of giving more voting rights to those who were there the longest?
It selects for the system to stay as it’s been, to ossify, not become adaptive.
Immigration brings new ideas.
Mixing allows the better idea to outcompete.
Immigration doesn’t need to come from a different nation, just a different context.
For example, China gets a lot of cross-pollination from internal migrations.
The human evolutionary environment is the non-stationary environment.
Humans, by being nomadic, had to learn to deal with different situations.
Children are the automatic immigrants.
Someone was telling me about a story from an anthropology podcast.
A tribe hated the water because it was dangerous.
But then the environment changed and they were forced to live near the water.
The kids could adapt easily.
The kids went to the river and the parents didn’t get it.
“You’ll never learn how to walk on the hard rocks”
“Dad, there are no hard rocks here!”
A memorable idiom I learned this week: “You ever hear a peddler yell ‘stale fish?’”
This week I heard about a study with kids and fun.
The researchers taught the kids a beach bowling game.
Then they asked some of the kids how to change the game so they won more.
They made the game easier.
For other kids they asked to make the game more fun.
They made the game harder.
Fun comes to some degree from leaning into the challenge.
Wanting to win is to some degree in tension with having fun.
The Stroop test shows that reading is automatic once learned.
You can’t not do it.
When you need to spread butter, a Swiss Army knife is worse than a butter knife.
If it’s flexible in ways you don’t need, it’s not an upside, it’s worse.
For a tool to feel like an extension of you, you must have a nearly fully predictive model of what it will do.
The illusion of it being an extension of you only works if you feel you can control it with high precision.
Every time you can't take it for granted, it breaks the illusion and separates you from it.
Some new research shows that intuitive ‘fairness’ is related to power dynamics.
It’s calibrated to what the powerful entity in a system ‘can get away with.’
It’s not free floating, it’s contextual.
A paper on AI-induced dehumanization.
As we interact with chatbots more often, we also treat humans in a more transactional way.
Divergent phase: there are no bad ideas.
Convergent phase: there are bad ideas.
Value arises from scarcity.
If you apply a business model from one distribution paradigm to another you’re going to miss the point.
It took a decade or more of the web to discover the social media business model.
When you're being authentic, it's cheap and easy.
You don't have to mask anything or pretend, you simply do.
Pretending is expensive!
It requires you to go against your natural, authentic desires in that moment.
VALUES.md is like constitutional AI for you.
Pluralistic values are easier to align on than a singular set of values for everyone.
Politics emerges when two or more entities try to coordinate.
They can never be fully aligned.
Since they are two distinct entities, there’s always something that is good for one but bad for the other.
That fundamental lack of alignment is what creates all of the coordination challenges inherent to politics.
Politics is a phenomenon that emerges out of that asymmetry.
Individuals care about reality.
As soon as you get more than two people, the emergent social imaginary enters the picture.
The importance of the social imaginary grows at a compounding rate.
Even at relatively small team sizes, the importance of the social imaginary often dominates the reality.
To win, a side needs to stay coordinated, to act as one thing bigger than the sum of its parts.
That's easier if there's a Schelling point, a thing that everyone takes for granted at the center.
Either a long-term ideal that is bigger than anyone and has been true for a hundred years.
Or a leader that everyone in the group acknowledges is the legitimate leader.
Having one leader is "efficient".
Less time spent disagreeing, more time figuring out how to execute.
But it's also low resilience.
If the universities had acted like one collective instead of a collection of universities they could have pushed back against authoritarian requests.
Individually they were easy to intimidate and pick off.
“United we stand, divided we fall.”
The emergent coffee shop culture probably wouldn’t work without an addictive substance at its core.
Nearly everyone is likely to have at least some desire for the product that’s sold, making it a natural Schelling point to meet up, a focal point of a community.
A thing I learned from "When India crashed into Asia and created the Himalayas":
When two land masses combine the animals from the smaller land mass tend to lose out.
This is because they’re less able to handle change.
They had an easier competition to win that never pushed them as hard as the animals on the bigger land mass.
In an evolutionary environment, you’re only pushed as far as competition takes you.
Less competition pushes you less hard.
One of my superpowers, I’ve come to realize, is an extremely low activation energy threshold.
For a system to maintain homeostasis it must have an accurate thermometer.
The scaffold will become a cage.
It both supports and constrains you.
You can't build a foundation the same way you build a sandcastle.
You don’t lay track when you don’t think you’re going anywhere.
Which makes it hard to get anywhere when you do need to move.
Users should be able to bet on an ecosystem as a whole, not a specific provider in it.
Sarumans start from the assumption that it’s zero-sum.
Radagasts start from the assumption that it’s positive-sum.
“Hyper” often means “too much.”
Urgency comes from outside.
Importance comes from inside.