I published an op-ed in TechDirt last week: Why Centralized AI Is Not Our Inevitable Future.
The gentle singularity Altman envisions might start gently, but any singularity that revolves around a single company contains within it the seeds of tyranny.
We don’t need Big Tech’s vision of AI. We need Better Tech—technology that respects human agency, preserves privacy, enables creativity, and distributes power rather than concentrating it.
Instead of racing to build the one AI to rule them all, we should be building intentional technology—systems genuinely aligned with human agency and aspirations rather than corporate KPIs.
If thinking is now 10x cheaper, will we think faster or will we think more deeply?
AI has the potential to have as much positive impact for humanity as the printing press, electricity, or the internet.
Whether it will be like electricity (universal infrastructure) or like social media (engagement-maxing aggregators) depends on who owns your context.
Your context is your memories, your data, your digital life.
If you own it, AI becomes your personal superpower.
If platforms own it, you become the product.
Again.
Only this time with technology powerful enough to know you better than you know yourself.
ChatGPT tops out as a Chatbot.
Is chat really all there is?
Chat is a feature, not a paradigm.
Which would you choose: 1) a product that’s only a Chatbot tied to one model, or 2) a product that can be a Chatbot backed by any model (as well as any other UI)?
I agree with David Colarusso’s post:
"I worry about concentration of power a lot lot more than I do murderous AGI."
I agree with this point from Ryan Singer:
"Tools like Cursor hint at where AI-driven user interfaces are going. Chat is only 10%. Most of the output (and UI) is a domain-specific representation of the state of the work."
Gemini generally does well.
Claude refuses to lie, and thus loses often.
ChatGPT o3 often wins because it is very happy to betray its collaborators.
The things that LLMs are selected for and trained for matter, especially when you apply their advice across a whole society.
The same origin paradigm is not an immutable law of the universe.
It is merely a useful convention that we've used for the last few decades.
It's running into its ceiling in the era of infinite software.
It's possible to design new security models to complement it!
1) Access to private data
2) Ability to externally communicate
3) Exposure to untrusted content.
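A minimal sketch of a guardrail built on that riff, with invented capability names (not any real API): any two of the three can coexist, but all three at once is where prompt injection turns into an exfiltration channel.

```python
from dataclasses import dataclass

@dataclass
class Capabilities:
    reads_private_data: bool      # 1) access to private data
    can_communicate_out: bool     # 2) ability to externally communicate
    sees_untrusted_content: bool  # 3) exposure to untrusted content

def is_allowed(caps: Capabilities) -> bool:
    """Any two of the three can coexist; all three together is where
    prompt injection becomes an exfiltration channel, so refuse it."""
    return not (caps.reads_private_data
                and caps.can_communicate_out
                and caps.sees_untrusted_content)

# Example: an agent that reads email (private, untrusted) but cannot
# send anything out is allowed; add outbound network access and it isn't.
print(is_allowed(Capabilities(True, False, True)))   # True
print(is_allowed(Capabilities(True, True, True)))    # False
```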
Related riff to my “iron triangle of the same origin paradigm.”
Ads are inevitable under the current laws of physics of consumer software.
You can typically trust off-the-shelf LLMs to not try to manipulate you in particular.
But LLMs are easy to fool.
So if anyone you don't trust is feeding input into the context, then the LLM's context might be entirely tainted and none of its decisions trustworthy.
This is why prompt injection is so fundamental for any system where LLMs make security-relevant calls.
Greasemonkey is back, baby!
Due to the security model, you need to trust the creator of the bookmarklet: either yourself, or someone you know personally.
The one (significant) improvement this time: more people can write their own vibecoded bookmarklets, and don’t need to rely on some rando they don’t know.
How can you construct a platform where a small number of intrinsically motivated people can indirectly create value for a ton of people?
That’s what Wikipedia does.
Wikipedia shows the best of the internet.
With only limited structure, people can anonymously and emergently create convergent value.
It’s not the default outcome, but the right garden with the right structure can cause it to emerge.
It's an existence proof.
WordPress is powerful not because the totality of its code is that powerful.
Its power is the usefulness of the core data model, and the fabric it implies, as a flexible base for 3P plugins to build on.
Scrapers are hard to maintain by yourself.
Any time the service you’re scraping changes it breaks.
But a community maintaining a scraper can make the scraper antifragile because someone else will likely fix it when it breaks.
Is AI more like the Industrial Revolution or the invention of literacy?
The latter shaped our thoughts in new ways not possible before.
Entire religions were founded based on the power of the written word.
The web bundled data with the UI to view it.
That was new!
Previously data was shared on the filesystem, and applications could create or read any properly-shaped data.
In the web’s model, the creator of the data could present it however made the most sense for them.
But that innovation also became a problem; now the creator of the code controlled the data, not the other way around.
Transformers won because they allowed data to flow like water within the model.
Don’t be clever about where it flows, just let it flow everywhere.
Just increase the bandwidth from the past to the future.
No one has done that at the social level yet (without being a hyper aggregator).
The transformer architecture has extremely high leverage.
It’s only a few hundred lines of code.
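As a rough illustration of how small that core is, here is a sketch of scaled dot-product self-attention in numpy (just the attention step under toy assumptions, not the full transformer): every position can mix information from every other position, with no hand-wired routing.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model). Every token attends to every other token;
    the data decides where to flow rather than a hand-designed wiring."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # all-pairs affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole context
    return weights @ v                               # information flows everywhere

d = 16
x = np.random.randn(8, d)
out = self_attention(x, *(np.random.randn(d, d) for _ in range(3)))
print(out.shape)  # (8, 16)
```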
What if you could have the same kind of runtime for data in general?
A small set of physics and policies that allow data and code to flow like water.
What would that unlock?
Evolution arises because no matter how noisy the signal, if it’s broad enough and has a consistent bias, a clear macro-scale phenomenon will emerge.
It emerges automatically and inexorably.
This is the same power of amplification algorithms.
When there is a small but consistent bias in a noisy, broad set of input, a powerful ranking signal emerges.
All that matters is the consistency of the bias and the breadth of the input.
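A toy simulation of that claim, with made-up numbers: each individual input is nearly pure noise with a tiny consistent bias, yet over a broad enough input the bias becomes an unmistakable macro-scale signal.

```python
import random

random.seed(0)
bias = 0.01            # tiny consistent preference for option A
num_votes = 100_000    # breadth of the (very noisy) input

# Each voter picks A with probability 0.5 + bias, B otherwise.
a_votes = sum(random.random() < 0.5 + bias for _ in range(num_votes))
print(a_votes / num_votes)  # ~0.51: the macro-scale signal emerges clearly
```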
One of the reasons cloud wins is because the aggregators can get more bang out of a flop than you can.
This is because of the pooled data.
A noisy but broad data stream with consistent bias.
The broader the stream, the more effectively a valuable signal can be extracted and distilled.
That signal can then be used to add value for other users.
That is only possible at scale.
Breadth in data is more useful than depth.
This is the bitter lesson of data.
Part of the thrill of AI models is they are open ended.
You can get them to do crazy things, like insult you.
The Google Assistant (and other assistants of that era) was closed-ended.
Everything it could say came from curated grammars of possibility written by employees.
Top-down, not bottom-up.
I used ChatGPT to identify an odd piece of hardware in my post-war Airbnb in Manhattan this week.
That could have never worked in Google Lens, but it worked great in ChatGPT.
Only an open-ended tool could do that.
Now that we have approaches like LLMs, it’s hard to see how Google Lens’s old approach of manually curating data for individual verticals could ever have worked in general.
It feels positively medieval!
Imagine a self-refining context.
A swarm of simple worker bees helps transmute the data in individually small ways that allow new patterns to emerge.
Letting a 3P swarm operate safely on that data requires a security model that allows information to flow without leaking.
Context is the word for distilled memory, curated to the task.
Contextless context is not useful.
Context that is incorrect is worse than nothing.
Bad context injects random things the LLM will pay attention to, overriding its intuition about the right answer.
If you give it good context it nudges in a useful way.
If you give it bad context, it overwhelms the model, which just parrots the context back.
It overshadows the answer.
An example in the wild:
“come up with some business ideas for untapped markets in my area”
> Based on your love of astronomy and that one time 8 months ago you mentioned you like watching basketball, so how about a youth basketball coach and stargazing guide!
Coactive Software grows like a coral reef.
Each interaction deposits another layer of functionality, building complex, beautiful structures uniquely shaped by your digital life.
No two users' software would look the same.
Software that anticipates, adapts, and assembles itself around your intentions.
Software that's alive, not just responsive.
It grows with you, maintains itself, and evolves to fit your needs.
Like a digital garden that tends itself.
Not pre-programmed features, but emergent capabilities that bloom from use.
Features emerge where you need them, interfaces form around your habits, tools appear right when you reach for them.
In Coactive Computing, there are no 'wrong' ways to use software.
The interface reshapes itself around how you actually work, not how someone thought you should work.
In Coactive Computing, there is no 'off-road.'
Every route you take becomes a path.
Every path you repeat becomes a road.
Your software landscape is exactly as wild or civilized as your work requires.
Coactive Software has evolutionary pressure at the speed of thought.
What you use thrives and expands.
What you ignore gracefully fades.
Your digital environment evolves as fast as your needs do.
Only a small number of users are intrinsically motivated to organize their knowledge bases.
These are people who are happy to put in concrete costly effort for only abstract possible value.
These are people who have adopted Personal Knowledge Management tools even in their current generation.
A tool that helps you organize your life would have to show that non-PKM users like using it, too.
Otherwise even if it had early momentum it would get stuck in the same low ceiling that PKMs do.
In ranking, there are “query independent” and “query dependent” quality signals.
Query dependent: “for this specific query in the past, how likely are people to click this result?”
Query independent: “what is the PageRank of this document in general?”
The query independent component is normally the more important ranking factor.
The query dependent signal is more often a “twiddle” on top.
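A hedged sketch of what that split might look like in a scoring function (illustrative weights and field names, not how any real ranker is tuned):

```python
def score(doc, query, clicks_for_query):
    """Query-independent quality (e.g. a PageRank-like score on the doc)
    does most of the work; the query-dependent signal is a small 'twiddle'."""
    query_independent = doc["pagerank"]                        # general quality
    query_dependent = clicks_for_query.get((query, doc["id"]), 0.0)
    return query_independent * (1.0 + 0.2 * query_dependent)   # twiddle on top

docs = [{"id": "a", "pagerank": 0.9}, {"id": "b", "pagerank": 0.4}]
clicks = {("weather", "b"): 0.8}   # past clickthrough for this exact query
ranked = sorted(docs, key=lambda d: score(d, "weather", clicks), reverse=True)
print([d["id"] for d in ranked])   # 'a' still wins: general quality dominates
```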
If this weren’t true, then at the very beginning Google wouldn't have been able to get so far so quickly with PageRank as the main innovation.
This means that systems where most people can agree on a baseline quality can have good ranking even if there’s not much overlap in queries.
The query independent signals are powerful enough.
One user’s completionist approach to checking every suggestion could improve query independent signals for everyone.
You only need a very small number of highly motivated users to improve query independent signals for everyone.
Spreadsheets are situated software, but often become rat’s nests.
There are very few parts of a given user’s spreadsheet that can be shared with others as generally useful components.
This is the reason that everyone loves their own spreadsheet and hates everyone else’s.
One reason is it’s not possible to modularize logic.
A blackboard-system kind of architecture could help here: it is inherently modularized, and that modularity allows emergence.
Imagine the Unix “one small tool” insight but applied to a blackboard model, allowing emergence.
"One small tool" makes them easier to add to the pile, easier to reason about, structurally more likely to be shareable.
The smaller they are, the more likely they are, for simple combinatorial reasons, to overlap with the kinds of things other people would have written if they’d had the time.
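A toy sketch of that shape (not any particular system): a shared blackboard of facts, plus small single-purpose tools that each watch for a pattern and contribute one derived fact, run until the board stops changing.

```python
# Toy blackboard: small, single-purpose tools watch shared facts and
# each contribute one derived fact; behavior emerges from the pile.
blackboard = {"email_subject": "Dinner Friday 7pm at Nopa"}

def extract_time(board):
    if "7pm" in board.get("email_subject", ""):
        board["event_time"] = "19:00"

def extract_place(board):
    if "Nopa" in board.get("email_subject", ""):
        board["event_place"] = "Nopa"

def propose_calendar_entry(board):
    if "event_time" in board and "event_place" in board:
        board["calendar_draft"] = f'{board["event_place"]} @ {board["event_time"]}'

tools = [extract_time, extract_place, propose_calendar_entry]

# Keep running the small tools until the board reaches a fixed point.
changed = True
while changed:
    before = dict(blackboard)
    for tool in tools:
        tool(blackboard)
    changed = blackboard != before

print(blackboard["calendar_draft"])  # "Nopa @ 19:00"
```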
To allow a blackboard system that allows untrusted 3P components to swarm on the data would require an alternative security model.
Why have so many malleable software systems failed?
Partially because there was no way to safely execute open-ended code written by strangers.
Blackboard systems couldn’t be open ended before because you had to trust the code, so you could only have the code you wrote yourself, not code some rando wrote.
So you had to do all the malleability yourself.
You ended up with rat’s nests of customized functionality that can’t integrate or compose with others’ functionality.
For intentional tech, you can get far with a short essay that distills your values.
Even a first draft of an LLM distilled off of your broader context can be useful.
Diff it from the baseline of what average people would care about or write.
The question isn’t how you are the same as everyone else (LLMs already have that baked in), but how you are different.
Intentional tech isn’t about imposing what anyone should want.
It’s about helping each person live aligned with their own intentions.
Their agency and aspirations.
It’s easy to underestimate the value of horizontal use cases that aren’t possible today.
People are crawling through broken glass and don’t realize there’s another way.
There is also a lot of glass no one even bothers crawling through, because the pain is self-evidently worse than the benefit, so no one tries.
But removing that broken glass could reveal value that was there all along; we just never imagined it could exist.
More prompt injection / MCP security issues.
The same origin paradigm says you have to trust the entity who controls the origin.
They can distribute arbitrary Turing-complete code that can operate on any data visible to that origin, with open-ended networking access.
You have to trust not just this app today but all future versions and all future owners.
The same origin paradigm lacks the vocabulary to express more detailed trust.
You have to have open ended trust in a stranger with different incentives from you.
This was OK when software was expensive enough that only people who knew what they were doing, or who had significant resources (and thus something to lose), built it.
But it doesn’t work with infinite software.
It also doesn’t work with LLMs in the loop: if LLMs can execute code, and LLMs can be tricked by malicious context, then they can’t be given access to run on sensitive data.
How can you make code not scary?
How can you make it so users don’t have to trust the creators of code?
Data is typically seen as an asset for a given origin.
But it should be seen more as a liability.
Origins should prefer not to have the data with all the downside risk of having sensitive data.
There should be ways for creators of code to write arbitrary code that runs blindly.
This would allow them to do useful things for users on sensitive data… the only tradeoff is the creator of the code can’t see the data or its derivatives directly.
This would allow safe "you can run anything."
The optionality of the data is its power.
More optionality, more power, more danger, more explosive potential.
Denaturing removes that optionality.
For example, filter out any data that doesn’t meet some k-anonymity threshold on some data stream.
If you made the right denaturing call and it's the right signal you need, you can get most of the value with less of the danger.
But the problem is you can’t change which subdata to keep after the fact, so if you guessed wrong, you can’t change your mind later.
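A sketch of the k-anonymity flavor of denaturing, under toy assumptions: only values reported by at least k distinct users survive, and everything rarer (more identifying, more optionality) is dropped before anyone downstream sees the stream.

```python
def denature(stream, k=5):
    """stream: list of (user_id, value). Keep only values reported by at
    least k distinct users; rarer, more identifying values are dropped."""
    users_per_value = {}
    for user, value in stream:
        users_per_value.setdefault(value, set()).add(user)
    allowed = {v for v, users in users_per_value.items() if len(users) >= k}
    return [value for _, value in stream if value in allowed]

stream = [(u, "coffee shop") for u in range(8)] + [(99, "123 Rare St")]
print(denature(stream, k=5))  # the one-off, identifying value is gone
```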
The same origin paradigm drives to centralization and aggregation.
One solution is to make a single origin that is shared by a collective.
An open aggregator.
All data has the same friction to get it to that origin but there is no friction inside.
That would require an alternate security model for data to flow safely among the different actors within that origin.
One of the benefits of the cloud is distilled signals from aggregated data.
For example, location pingbacks can be used to generate a signal of how busy given restaurants are right now.
Today, this is typically done by users sharing the entire data stream with an aggregator.
The aggregator then does internal processing and distills a high-quality signal from the data that is fully anonymous and can be shared with everyone.
But it requires every user trusting the aggregator to not do something nefarious with that data.
Also, that aggregate signal is owned by the aggregator.
Imagine an alternate model where users can tag their data with policies that are always followed in the system.
You could construct a policy that allows users to pool their data into an anonymous process, and the collective of all contributors owns the distilled output.
A benefit of this system is no one has to trust any one entity with all of the data.
If you can trust that the policies on the data are being followed, everything else flows from it, safely.
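A hedged sketch of what policy-tagged pooling could look like, with an invented policy name and a toy busyness signal; the toy simply trusts the aggregation function to honor the policy, whereas the whole point of the riff is that the runtime would have to guarantee it.

```python
from dataclasses import dataclass

@dataclass
class Datum:
    owner: str
    restaurant: str
    policy: str  # e.g. "aggregate_only_k10": may only leave as a k>=10 aggregate

def busyness(pool, min_contributors=10):
    """Distill a per-restaurant busyness signal, honoring each datum's policy.
    The distilled output belongs to the collective of contributors."""
    counts = {}
    for d in pool:
        if d.policy != "aggregate_only_k10":
            continue  # a policy we don't know how to honor: exclude the datum
        counts.setdefault(d.restaurant, set()).add(d.owner)
    return {r: len(owners) for r, owners in counts.items()
            if len(owners) >= min_contributors}

pool = [Datum(f"user{i}", "Nopa", "aggregate_only_k10") for i in range(12)]
print(busyness(pool))  # {'Nopa': 12}
```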
Algorithms don’t just respond to you.
They shape you and coevolve with you.
“Interestingness” is often formulated as surprisal or entropy.
Entropy is easier to generate than real data.
So incentivizing interestingness in a swarm often leads to junk.
This happened with, for example, FileCoin.
Most of the data stored was just noise.
There needs to be some bias towards useful interesting data.
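A quick illustration of why raw surprisal is a bad incentive: random bytes maximize entropy, so a swarm rewarded only for entropy will happily produce junk.

```python
import math, os, collections

def entropy_bits_per_byte(data: bytes) -> float:
    counts = collections.Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

real_text = b"the quick brown fox jumps over the lazy dog " * 100
junk = os.urandom(len(real_text))

print(entropy_bits_per_byte(real_text))  # ~4 bits/byte
print(entropy_bits_per_byte(junk))       # ~8 bits/byte: junk 'wins' on entropy
```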
Swarms of ‘agents’ can come to better conclusions than a single very smart agent.
This is also true even if it’s the same model for each member of the swarm and for the single very smart agent.
Perhaps the reason for this is the same reason that boundaries emerge in every complex adaptive system.
Within a boundary, signals propagate like a broadcast.
That means that the larger the boundary volume (the more emitters contained) and the higher the rate of information emitted, the more cacophonous the background noise.
Everything within the boundary coheres to the centroid average point; things far from the centroid are impossible to hear.
Boundaries allow different regions to have different centroids and less cacophony which allows more diverse ideas to be tried before being drowned out.
The good ideas can then spread out through the boundaries once found.
The thoughts within one model/mind are similar; a cacophony of information sloshing around.
When you have to distill the thought into a stream of language to transmit to another model / mind, you have to collapse the wave function into a specific information stream.
This distillation can allow different perspectives.
The distillation is where the OODA loop emerges.
The OODA loop is where interaction between different things emerges.
Emergence happens because of the distillation to communicate.
Incentives dominate intentions.
We focus on intentions because we can see them more easily, and our brains are tuned to people, not systems.
But what matters is the outcome and the incentives influence that much more.
Over time everything falls down the gravity well of its incentives.
Each incremental move is individually fine, but makes it easier to take the next one, which leads to a compounding rate of movement towards the incentives.
That’s why you need "can’t be evil" not “won't be evil”.
Lash yourself to the mast.
It’s easier to align a collective against a thing than for a thing.
A lot more people can agree with "we don't like the guys in charge."
Even people who disagree with what should happen can agree they don't think the guys in charge are doing a good job.
Similar to the "why entropy arises fundamentally" argument.
Mathematically, there are more directions to move away from something than toward it.
If all of the directions away are on the same “team,” that team can be larger.
This dynamic means that the opposition party coheres to fight the dominant party and then becomes the dominant party and diffuses and fractures and then it all repeats.
Humans want to be above the API.
But your boss wants you to be below the API.
People don't like being used as a tool, they like autonomy.
The Coasean boundary decision within an organization for any given task is implicit, rarely talked about.
Which collaborators do you treat as a tool vs a peer?
I think LLMs will likely make corporate politics worse.
The metagame will just get more inscrutable and energized.
Corporate politics emerge from the fundamental asymmetry of the manager / boss relationship.
The imperative is "act like your boss is right".
That doesn’t go away with LLMs.
A corporate politics maneuver classically used with consultants will also be applied to LLMs.
You bring in a consultant to come to the conclusion you secretly believe.
If your boss agrees with the consultant’s conclusion, then you get the benefit as the person who decided to bring them in.
If your boss doesn’t agree, you get to blame the consultant for being bad and fire them.
A cynical move to cap downside and leave exposure to upside.
I liked these fundamental reflections on alignment and collectives from SoftMax’s Reimagining Alignment.
"The result of this process is not just a big colony of cells, but an organism which is a new individual in itself. Something more than just the sum of its parts. The “we” of the cells becomes an “I”, with goals that cannot be understood as some simple sum of the goals of the parts. Animals do the same thing, forming colonies and packs and so on. Even trees form these organically aligned collectives through mycelial networks. It happens at every scale, big and small."
"Hierarchical alignment works fine, right up until the rules or person on top are wrong. The smarter the subordinate, the more likely this is. Hierarchical alignment is therefore a deceptive trap: it works best when the AI is weak and you need it least, and worse and worse when it’s strong and you need it most. Organic alignment is by contrast a constant adaptive learning process, where the smarter the agent the more capable it becomes of aligning itself."
Another thought on emergence from Anthropic’s guide to multi-agent systems:
"Once intelligence reaches a threshold, multi-agent systems become a vital way to scale performance. For instance, although individual humans have become more intelligent in the last 100,000 years, human societies have become exponentially more capable in the information age because of our collective intelligence and ability to coordinate. Even generally-intelligent agents face limits when operating as individuals; groups of agents can accomplish far more."
Multi-agent systems are able to use the power of the swarm.
Typically the strategic benefit of the swarm was that the energy of the swarm emerged from each individual’s own incentives and you didn't need to pay them.
Now the swarm is more like a series of very-easy-to-direct-and-coordinate employees.
Agents you foot the bill for but don't need to give precise direction to and who can figure things out a bit themselves.
A middle ground between mechanistic systems (e.g. computers... everything has to be extremely precisely defined) and real people (where they have their own incentives).
An army of interns.
A pattern: use LLMs in tiny tasks where the overall swarm is emergently powerful.
LLMs handle small tasks very well.
The emergent swarm might lose the plot (e.g. forgetting it's just researching egg prices, not buying them).
But if you have ways of capping downside / preventing possible irreversible actions you can just let the swarm swarm.
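A sketch of the cap-the-downside pattern, with a hypothetical call_llm stub standing in for whatever model you would actually use: each worker only proposes small steps, and anything irreversible is filtered out before execution.

```python
REVERSIBLE_ACTIONS = {"search", "summarize", "note"}    # safe to let the swarm swarm
IRREVERSIBLE_ACTIONS = {"purchase", "send_email", "delete"}

def call_llm(task: str) -> dict:
    """Hypothetical stub: in a real system this would ask a model for the
    next small step. Here it just returns a canned proposal."""
    return {"action": "search", "args": {"query": task}}

def run_swarm(tasks):
    results = []
    for task in tasks:
        proposal = call_llm(task)
        if proposal["action"] in IRREVERSIBLE_ACTIONS:
            continue  # cap the downside: never auto-execute irreversible steps
        if proposal["action"] in REVERSIBLE_ACTIONS:
            results.append((task, proposal))  # execute / log the small, safe step
    return results

print(run_swarm(["current egg prices near me", "compare grocery stores"]))
```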
Rights make you feel like you don’t have duties.
Rights are primarily about the individual.
Duties are primarily about the collective.
Which is the default, the collective or the individual?
LLMs are not the first time we’ve interacted with an emergent social technology that is orders of magnitude larger than we could possibly understand.
Markets are also an emergent social technology.
But you can’t talk to the market.
LLMs you can talk to.
It talks like a human but it doesn’t have memories like we do.
The coherence of it is illegible to us.
Isn’t it kind of weird when ChatGPT uses the first person in its answers?
When you talk to a person you’d never say “I’m just talking to a collection of synapses.“
We can all agree that the collective, the emergent phenomena of all of the synapses creates something larger than the sum of its parts: a person.
Also, that the person is more important than any of their constituent parts.
Work by Ian Couzin shows that the way schools of fish and the way neurons in our brain decide between multiple options can both be fully modeled by the same algorithm: ring attractor models.
But we think of the brain as one thing, and the school of fish as a swarm.
Which do we pay more attention to: the collective or the individual?
It depends!
Switching between focusing on the individual and focusing on the collective is like the Jaws dolly zoom.
A figure-ground inversion happens: suddenly something is very fundamentally, obviously different, and yet it’s hard to describe what it is.
Everything stays the same but your perspective changes and now everything's different.
Something changed that’s fundamental but you can’t put your finger on what.
Glen Weyl et al published a new proposal called Prosocial Media.
I haven’t gotten a chance to read the proposal in depth yet but I love the name!
Digital services get to compel you to agree to their Terms of Service.
It’s a binary choice: agree to their terms or don’t get the service at all.
The ToS reserves a ton of their rights for what they can do with your data.
But you don’t get to assert your rights over the data you bring to the service.
It’s your data, but the ToS makes it effectively their data.
One small change that would have a massive impact: make it so that any ToS that reserves rights for the service over the user’s data is also mutual and automatically reserved for the user, too.
In a zombie organization, alignment is rewarded more than correctness.
If you’re a gadfly in a zombie organization, the horde will try to squash you even if you’re right.
Being a positive deviant is exhausting, you get worn down.
You're constantly cutting against the grain of the system.
The system is orders of magnitude more patient and higher momentum than any individual.
As an individual it’s much easier to get worn down and simply become a zombie and join the horde.
There was apparently a lesser known Milgram experiment where he sent researchers onto subways to ask people to give up their seats even on clearly empty cars.
The research assistants quit because they couldn’t do it, it was excruciating.
LLMs are like aliens whispering in our ears with very different incentives from us.
These aliens were made from an emergent process that is steered by PMs tuning a dashboard to optimize MAU.
A lot of what is VC fundable is tied to assumptions about the distribution characteristics.
Apps have friction to distribute and data is trapped in apps.
This makes apps with significant traction very valuable.
If you distributed a service in a new distribution model, the rules of thumb of what is valuable would be very different.
Things that have in the past felt self-evidently valuable might not be valuable at all.
Things that didn’t used to be valuable might become valuable.
I was entranced by this random article on HackerNews: 3D-printed device splits white noise into an acoustic rainbow without power.
This design is the kind that cannot be built from top-down plans.
It can only be grown, not built.
It must emerge.
All organizations are fundamentally dysfunctional.
They must be.
The principal agent problem is why.
The individual and the collective can never be fully aligned.
That lack of alignment between individual and collective is the negative space out of which Goodhart’s law emerges: emergent dysfunction.
The only way to get asymptotically close is with "infinites" that everyone believes in.
But those have their own problems.
Setting a shared infinite can help a team cohere.
Making an idea "sacred" fixes it in place.
It can never be traded off.
It's infinite, sacred.
Often identities become wrapped up in the sacred value.
Debates can unconsciously feel like "you're attacking my identity," which can pre-charge conversation.
Shared infinities are a good coordination mechanism.
Even though they make certain discussions impossible.
Also, when you share one, certain trade offs become impossible to acknowledge, let alone make consciously.
The constraint creates coherence, it gives the terra firma to build off of.
Otherwise there's too many free dimensions, nothing can cohere.
Some of the infinites that are most important to us are so deep and unquestioned that we don't even realize they exist, it's just water to us.
If given an impossible goal where you will be killed if you fail, you must come to believe you can do it.
A powerful pull towards self-delusion.
Tolstoy’s work is about the broad sweep of history vs individual agency within that structure.
An argument against the Great Man Theory.
Napoleon felt he had agency, but it was largely an illusion.
The higher in power you go, the more constraints operate on you.
Just because “someone else will do it if I quit” does not absolve you of responsibility.
A lot of Molochian dynamics are emergent.
Everyone gets locked into an equilibrium where everyone is forced to do a thing no one wants to do.
A lot of AI safety debates boil down to “We should not build a demon god… but if we don’t China would anyway and that’s bad… so we should build our own demon god.”
What you want to want and what you want must be disjoint to some degree, they can’t be perfectly aligned.
That misalignment is where the Goodhart’s law delta emerges, leading to the inevitability of engagement maxing.
If a company doesn’t do it, their competitor will, so they do it too.
The more information varies, the harder consensus becomes.
Divergent information leads to compounding difficulty in converging on a consensus about truth.
Politicization incentivizes divergence of information, the better to fit your desired perspective.
One benefit of monoculture: shared experience.
Religion used to give a shared substrate of experience, to pull a community together.
Society kind of got rid of it but didn't replace it with anything.
Internal prediction markets will almost always be killed by some VP.
Two reasonable things they’ll say as they stab it:
1) "Yes, we all know that no one thinks the strategy is working but we all agreed to not point at it because if we do the big boss will notice and scramble everything before we can fix it."
2) "Yes, this strategy won't work, but executing on it will move us in the right direction. If everyone believes that everyone believes it then we will move in the right direction, and if they don't believe it, it won't work and we won’t make any steps in the right direction."
A kind of example of the misalignment between the collective and individual that must emerge, to some degree.
Would you tell a white lie to make the right thing happen?
A culture that is taken as an infinite is effectively a cult.
r-selected vs K-selected is a barbell.
Two stable equilibria with opposite logic.
Reminder: r-selected means huge numbers of cheap offspring; K-selected means a very small number of very expensive offspring.
Anything in the middle is pulled inexorably to one of the poles.
Over time you’re only left with the extremes.
Anything with this dynamic produces a barbell distribution.
We’ve all implicitly assumed K-selected AI.
What does r-selected AI look like?
An ecosystem of forest sprites vs an omniscient centralized power.
Physical communities are scarce.
Virtual communities aren’t.
Communities that combine the best of both can be interesting.
Use an existing virtual community in a dense area (like NYC) to bootstrap a physical one.
This is the concept behind Fractal University.
We’re in a weird phase without meso-scale communities.
The natural state of communities is fractal nesting.
The internet was a shock to the system, giving a barbell: very small cozy communities and massive, globe-spanning ones.
But now that we’ve wrestled with it for a decade everyone can see this barbell is not great and we should bring back meso-scale communities too.
Someone told me about an idea from the book Weirdest People in the World.
We have rule of law partially based on a foundation of the idea that God can see what everyone does.
That allowed the shift from a kin-based economy to a market economy, where you have faith in laws and faith that people will follow them.
The innovations that made religions global were “god can see your thoughts” and “an infinite afterlife”.
These two ideas were step changes that increased a given religion’s power by orders of magnitude.
Like eukaryotic or multicellular step changes.
Someone this week pointed out that after WWII we ran a natural experiment.
We had 8 million extra productive people show up all at once, with only a month to plan what to do with them.
Also, everyone could agree that we should trust those people because they had put their lives on the line for the country.
This discontinuity plus broad trust allowed something miraculous to happen.
We extended credit to everyone and the economy boomed.
If you give cheap credit, you need to believe that the people it’s given to will be worth it.
It’s an emergent social imaginary; if everyone doesn’t believe it then it’s not true, but if everyone does, it is.
WWII created the external alignment, discontinuous and broad.
Today it would be very hard to do.
If you have a slow drip of people of the same age and ability, but they’re playing video games and the older generation thinks they’re just losers, no one gives them credit.
What if we just all agreed to believe in all of the young people at once?
How could we make that happen?
If you treat employees like they don’t care (Theory X) then they won’t care.
The style of management leads to how the employees act and vice versa.
To get creative output you need Theory Y style management.
With Theory X you don't get their intuition and creativity.
Theory X employees care about the perception of motion.
They're acting like they care, not actually caring.
Enroll them, so they shift from a Theory X mindset to a Theory Y mindset.
Only if you're enrolled does failure give you the signal to improve.
Failure without enrollment is just noise.
Imagine testing a prototype of an app with actors, it wouldn't be real feedback.
You want someone who's in it and cares, authentically, not performatively.
Otherwise the signal doesn't mean much.
The flow state emerges partially from not fearing death.
Could the economy run fully in silicon?
It seems to me that it couldn’t.
The economy is based on the idea that participants are playing because they actually care.
They are enrolled.
The impacts matter to them.
In the modern world you have to reduce a nuanced input to a number to collaborate.
To get quantitative scale you need to lose qualitative nuance.
But maybe you could use AI to collaborate on a higher dimension.
Qualitative nuance at quantitative scale.
What if the CEO could have a simulated conversation with every single line employee?
My suspicion is that this would create even more volatility.
The ability for one person to steer more directly would allow more boldness from companies… and also more game over events for them when that one person chose wrong.
The swarming behavior of companies makes them harder to steer… but also gives some insulation from disastrous decisions.
The limiting factor of corporations is not the intelligence of any one individual, it’s the challenge of central coordination with the fractally complex facts on the ground.
Real names in social media context-collapse all the various versions of you into a single, maximally safe version of yourself.
You can only have an alter ego online if you have a nom de plume.
Otherwise you have to be the generic, socially-acceptable-in-most-situations version of you--the lowest common denominator of you.
That is the most average, boring version of you.
We have fewer and fewer channels for truly private ideas in modern society, and that will hurt our resilience.
It will make innovation of ideas harder.
True constraints must come from outside.
If you're the one who sets the constraints on yourself, are they actually constraints?
You could simply change them if they were too inconvenient.
If you ask an AI to grade a writing sample, it will tend to give a B+.
Then if you make any changes, it will tend to rate the new essay as an A-.
A subtle sycophancy.
The illusion of growth and development.
Fire quote from The Emperor’s New LLM:
"Large language models are manufacturing consensus on a planetary scale. Fine tuned for ‘helpfulness’, they nod along to our every hunch, buff our pet theories, hand us flawless prose proving whatever we already hoped was true."
Self perception and self-deception are easy to confuse.
Therapy makes progress based on your ability to report what actually happened in your life.
Your memories are pre-narrativized.
You’re the main character in your memories, so memories are stored with you being noble and correct.
That's one reason couples counseling is so effective: two different motivated views of the same scenario, which makes it easier to triangulate the ground truth.
Perhaps Granola-style transcripts could help establish ground truth?
But only if people’s behavior didn’t change because they knew they were being recorded.
Do we try to impress ChatGPT?
Before the memories feature I didn’t care.
But now I do.
An existential risk: human addiction to “therapy” that feels like growth but actually isn't.
If someone doesn't understand your plan, is it because they lack the ability to understand, or because you didn't communicate it in a way they could understand?
Always a little of both.
Documentation written by creators of the tool and by users of the tool will start from different angles.
Some documentation of a system is inside out -- how it works and why.
But what matters for users is outside in--what you want to do and what you need to know to make that work.
Complex systems are constantly living on the edge in a critical state.
A small perturbation could cascade and lead to higher leverage output but only if the rest of the network agrees.
That ability is what allows adaptation.
Good ideas can rapidly be absorbed and strengthened; bad ideas evaporate out quickly.
Rippling waves of possibility, all because they are poised on a knife’s edge of criticality.
Complex systems are irreducible.
The emergence is the main dynamic, because each individual sub component has interdependent actions.
The OODA loop is where emergence comes from.
A life hack from a friend: put the hard-won insights from your couples counseling on coffee mugs.
Those insights were excruciating to distill, and painful to look at.
That means that you’re likely to forget them, even though they’re important.
Putting them on a coffee mug that you often see around the house treats them more like a trophy.
But also brings the insights into your normal social context at unexpected times, making it harder to ignore.
The modern world is largely a focus on optics over substance.
The gilded turd era.
The opposite of resonance is “bait”.
Resonant things become more convincing the closer you look.
Bait is the opposite.
Superficial stuff that hooks you, that gives a quick pop of insight, but the deeper you look the less convincing it gets.
Remember: when you think you're 80% of the way done, you're actually 20% of the way done.