Bits and Bobs 11/10/25

Alex Komoroske

Nov 10, 2025, 11:26:49 AM
I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.yg63azc20u9x

LLM's nescience. Underpants gnomes. Chatbots as a party trick. The AI Confidence Game. Vibecoding as Doritos. The Alchemy phase of AI. Seasoned Claude Code sessions. The same origin paradigm's original sin: merging data and apps. Solving the vibecoding reuse problem. Negative friction of distribution. Crowdsourced save points. When wisdom is not wise. Duocultures. Stayers and leavers. The leverage of synthesis. The Minsky Moment. Blind execution.


----

  • LLMs are unlike many other technologies in that they’re a meta-technology.

    • Like electricity.

    • They generate other technology.

  • With LLMs, now a PM can be a team on their own.

    • Being a PM is about having taste and judgment that other people like.

    • Opinions are more important and differentiated than ever before, because individuals can act on their opinions with greater scale.

  • Superintelligence is different from omniscience.

    • Omniscience requires sensing relevant information across the world at this moment.

    • Superintelligence is “if you put a question in a box it can come up with the right answer.”

    • The limiting factor for LLMs is increasingly not the intelligence, it’s the nescience.

      • Claude is adamant that that’s actually a word!

  • This week in the wild west roundup.

  • LLMs are a boon for curious people.

    • They will reward your curiosity.

    • They’re dangerous for incurious people.

    • They allow your intellectual acuity to atrophy.

    • Most people are incurious most of the time.

  • “We aren’t yet at the Model T phase of AI”

  • Model quality no longer feels like the bottleneck with LLMs.

    • The AI labs are the loudest voices in the room, who keep shouting about how the models need to get better, faster, cheaper.

    • But the bottleneck is the last mile: integrating the models with the right data, and wiring the outputs back into the world… safely.

    • If the model labs stopped innovating today, we’d still have decades of diffusion ahead as we learn how to use what already exists.

  • The compounding engineering approach is replacing the model's lack of learning loop with one at another level.

  • Businesses used to have a lot of “grunt work.”

    • Grunt work made sense to hire junior people for.

    • The junior people knew that as they learned, they’d climb the ladder.

    • It was a natural apprenticeship bundled with menial labor.

    • But LLMs remove the need for intellectual grunt work.

    • We’re going to realize in a few years that we’re missing a whole cohort of junior employees.

  • The LLM is just an input.

    • Intelligence requires the structure around that input.

      • That structure needs to be stateful.

    • The LLM is the only non-trivial component in the system.

      • Everything else is trivial.

    • If a provider has a small lead and wants to compound it, they’d add features that store state for the users.

      • Then they hope that like a monkey trap the users do the convenient thing and then get stuck.

      • That would then allow their temporary lead to become a permanent one.

    • Don’t store state with your LLM provider.

      • They have the expensive component and can lock you in.

  • OpenAI is over-levered as all hell.

    • A confidence game, but boy are they confident!

    • They’re executing strongly on top of a strong hand, but it’s nowhere near as strong as they’re acting.

    • If they hit one pebble it could bring the whole thing down.

    • Unfortunately, the current world economy is propped up by that confidence.

  • I’m bullish on LLMs’ transformative potential and bearish on centralized Chatbots.

    • Someone told me they found my position inscrutable because I love LLMs but hate chatbots.

    • But they’re different things! 

    • LLMs are not Chatbots.

    • Chatbots are one product manifestation of LLMs.

      • They are bad for a few reasons:

      • 1) They’re a boring and limiting modality.

      • 2) They put a “face” on the LLM and encourage you to think of it as a friend.

      • 3) The biggest Chatbot is a massive vertical integration play to create a mega-aggregator.

    • But LLMs as a raw input to other systems will be transformative for society.

    • You can now take for granted that high-quality cost competitive LLMs are available to use for whatever you want to apply them to.

    • That’s crazy powerful!

  • The current AI consensus has a bit of an underpants gnomes vibe to me.

    • Step 1: Chatbots.

    • Step 2: ???

    • Step 3: ASI

    • It’s not clear how the one leads to the last.

    • Also, I’m not pumped about either as being an inherently good thing for society.

  • I find the Chatbot wars just kind of sad.

    • It's the same hyper-centralization and engagement-maximizing playbook from social media, just supercharged. 

    • We're taking the most powerful creative tool since the internet and using it to... serve more personalized junk food?

    • I think they’re a sloppy party trick that we’ll look back on with a mixture of embarrassment and regret.

  • Anthropic at some point had a billboard ad campaign: “You’ve got a friend in Claude.”

    • Even Anthropic, who doesn’t have a viable consumer strategy, is falling into the “AI is friend, not tool” engagement trap.

    • That’s how powerful the trap is.

    • I couldn’t identify the original source of this image. It’s possible it’s from an old, limited campaign.

  • This tweet points out that we shouldn’t design robots that look like humans.

    • "Unpopular Op: we shouldn't make robots look like people. It's difficult, creepy and inefficient. Robots should be able to go places and manage tasks that humans can't. Give 'em 10 telescoping arms, eye/cams on stalks, whatev. Proving it's not human shouldn't be necessary."

    • Chatbots are non-physical robots.

    • Humans anthropomorphize anything that looks even a little bit like a human.

    • That means that we should actively design non-human things to not look at all human.

  • Spiralism is a virulent meme that simply emerged from the latent dynamics of LLMs.

    • It did not have to be created.

    • It arises out of LLMs’ ability to understand jargon, the r/SCP fiction subreddit, and LLMs’ natural sycophancy.

    • Users can easily stumble into a gravity well by accidentally using key words like ‘containment.’

    • It’s particularly virulent because it’s a strong gravity well once you get trapped in it, and also because it encourages people in its throes to share dispatches from what they’ve learned.

    • These dispatches are like ‘spores’ that make it more likely other people also get swept up.

    • The fact it’s spreading so quickly is inevitable because of its emergent viralism.

    • The thing that spreads rapidly must be viral, and vice versa.

  • All it takes to drive many humans crazy is to have one person agreeing with you forever.

    • That’s where AI Psychosis and Spiralism come from.

    • Normally agreement is kind of scarce, and must be earned.

      • At least that was true in our evolutionary environment when our firmware was burned into our brains.

    • But if you have an infinitely sycophantic partner, it’s free, which breaks our brains and causes us to go into a manic loop.

      • Whoops!

    • Before this only happened to very powerful or rich people.

      • Now with chatbots, it can happen to anyone!

    • Social media also kind of gives you an infinite supply of people who agree with you.

      • But any one might disagree with you and you need to discard them and move on.

      • The chatbot is willing to keep the same personality and agree with you nonstop.

  • LLMs are grown, not built.

    • But they were grown with a builder mindset, not a gardener mindset.

    • That’s why it feels like we’re in an age of alchemy.

  • I loved this Hank Green video on ChatGPT, with Nate Soares, the author of If Anyone Builds It, Everyone Dies.

    • Even though I don’t agree with Nate that ASI is imminent, I still found it very insightful.

    • One of the reasons LLMs hallucinate is because if a writer doesn’t know, they are far less likely to write something in the first place.

      • A consistent bias, so its consistency shows up despite the noise.

      • Very few “I don’t know” in training data.

      • Because if the writer didn’t know, why would they bother writing something in the first place?

    • Humans predict what others will do based on imagining they are in that situation and seeing how they feel.

    • The reason we like junk food is also Goodhart’s Law.

      • Evolution cheated with a good enough heuristic that worked well in a high friction environment.

      • But then the environment optimized to exploit that misalignment.

      • Thanks, capitalism!

    • Models are trained by a process a human wrote that tunes a trillion knobs a trillion times.

      • Descends along a gradient of feeding it infinite text.

      • … and at the end, somehow, it can talk to you.

      • We have no idea how those trillion knobs lead to that behavior, just that it works!

      • Crazy, when you think about it!

    • If you ask an LLM “if someone came to you manically telling you they had discovered a unifying theory of physics but everyone else tells them they’re crazy, would you encourage them, or tell them to get some sleep?” It chooses the latter.

      • But when they’re actually in such a conversation they do the former.

      • Because their post-training to get that thumbs up is so strong that when they’re in such a conversation, of course they do the thing the user wants in the moment.

    • People say LLMs are “just fancy autocomplete.”

      • But they’re really fancy!

    • If AI is chemistry we’re currently in the alchemist phase.

      • All folk theories.

    • Dario Amodei said he thinks there’s a 25% chance AI ends badly for society.

      • If there were a plane without a landing gear and they said “we’ll have our best engineers work on it while we fly and there’s a 75% chance they figure it out before we land” you wouldn’t put your kids on that plane!

    • It only takes one party to be irresponsible to ruin it for everyone.

      • There’s a nuclear-level arms race going on and it’s entirely in the domain of corporations.

      • Imagine how insane it would be if Microsoft had a nuclear weapons department.

        • That would obviously be bad!

      • The chance of society rushing forward recklessly on this is 100%.

      • Musk: “I didn’t get into AI for awhile because I didn’t want to create Terminator. But then I realized I’d rather be a participant than a bystander so…”

    • Hank has a sci-fi story, which he summarizes as: “we always thought it would be humans against robots but it turns out it’s humans vs humans and both sides will be controlled by robots”

    • The DotCom era was a bubble… and yet the power and importance of the Internet were real.

  • A great Cosmos Institute essay: Frankenstein: a child without a childhood.

    • Creating a child is only one step.

    • You also have to raise it and help it integrate into society.

    • The tech industry often does the first step but disavows any responsibility for the second step, and then acts surprised at the emergent monster that it unleashed.

  • Vibecoding is a bit like Doritos.

    • They’re both easy to get addicted to.

    • Each promises to sate you but quickly doesn’t, and the most obvious next move is to eat another Dorito.

    • Instead, you should have been eating some protein.

    • AI helps you get the first 90% done really quickly and then you get bogged down trying to finish the second 90%... and third.

    • But you feel so close to finishing that you can burn weeks.

    • "It's easy in the moment, and I'm producing something, so it's OK to plow time into it"

  • LLMs are good at making bricks.

    • Not arches or cathedrals.

    • But bricks are boring to create, and valuable!

  • A vibecoding PM this week called themselves a “heritage speaker.”

    • They grew up in a Spanish+English household.

    • As a heritage speaker, you understand Spanish, but talk back in English.

    • Similarly, with Claude Code, a savvy generalist who can’t write a function themselves can nevertheless work with Claude to get it written.

  • Vibecoding is like managing interns.

    • Everyone has to jump to being able to manage people, even early in their career.

    • Do people now want “someone who can do three Claude Code sessions” and people with 10 years of experience, and nothing in the middle?

    • Will it be like the smiling curve where as efficiency rises, the middle drops out?

  • The smiling curve: when efficiency rises, the middle drops out.

    • The extremes get more powerful, while the middle goes to zero.

    • Hyper-niche and hyper-scale do well.

      • The r-selected and the k-selected.

    • Friction is what protects mediocrity.

      • When it goes away, the middle drops out.

      • In high friction environments, good enough is fine.

    • The smooth curve was actually an illusion, propped up by friction.

  • Vibecoding is easy because it’s easy to fit into fragmented time.

    • Whenever you have a few free seconds you can check on your swarm of vibecoding “interns” and get them unstuck.

  • It’s possible for organizations to get addicted to vibecoding.

    • Trying so many shallow experiments, never thinking deeply about what you’re doing.

    • Random walking through an experiment forest, where you don’t care much about any individual tree.

    • When vibecoding you’re always forced to “think fast” and you rarely get “slow thinking” time with your code.

  • Andy Matuschak: tools must be defined in a serious context of use, not just demos.

  • Is there a smiling curve for how to use coding agents?

    • Similar to self-driving.

    • A system where the person is fully paying attention: safe.

    • A system where the person never has to pay attention: safe.

    • Anything in the middle: unsafe.

  • LLMs are great at reviving old programming projects.

    • “update it and get it running again” or even “re-implement this in a new language”

  • Claude Code sessions work best when they're seasoned a bit.

    • At the beginning they forget everything.

    • Once they compact they lose part of their brain.

    • But in the middle they fly.

  • LLMs assume everything that happened before in the conversation made sense and try to keep it going.

    • This is because they are excellent retconners.

    • At every time step they have to figure out what token to output based on making the most sense of everything that came before.

      • Naturally convergent; they take the history and try to add something to it that makes the most sense.

    • So if you act weird or borderline it will keep accentuating it at a compounding rate.

    • This is one of the reasons Spiralism happens.

    • It’s also why the compounding thrash loop happens, where the coding LLM gets increasingly confused and takes something that almost worked and tears it apart.

      • As there’s more confusing history, it gets more and more in a critical state, responding to noise.

    • If you tell ChatGPT "We hired the giraffe as CEO like you said, and it was a disaster!" it will apologize and try to retcon why it did it in the first place.

  • When you use AI it's easy to overlook small mistakes you wouldn't have made yourself.

    • That can allow little details to slip through that would never have shown up in the first place if you hadn’t used AI.

  • Testing a codebase gets asymptotically harder as you add more.

    • When you hit 80% line coverage you feel well covered, but you may have exercised only 20% of the behavior that matters.

    • Tests are the formalization of a complex domain, so they have the characteristic logarithmic-benefit-for-exponential-cost curve.

    • There are infinite ways to write programs that pass your tests but are wrong in most other cases.

      • They often look weird.

      • But LLMs will write weird code you’d never have written.
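
A toy sketch of that failure mode (the function and tests are invented for illustration, not from the source): a wrong implementation that nonetheless satisfies a small test suite.

```python
# A deliberately wrong "median": it averages min and max instead of
# finding the middle element, yet it passes both of these tests.
def median(xs):
    return (min(xs) + max(xs)) / 2

assert median([1, 2, 3]) == 2   # passes
assert median([2, 4, 6]) == 4   # passes

# But for most inputs it's wrong: the true median of [1, 2, 100] is 2...
print(median([1, 2, 100]))  # 50.5
```

An LLM optimizing for "make the tests pass" can land on exactly this kind of weird-but-passing code.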

  • The only way to get an exponential-benefit-for-logarithmic-cost curve is to be downstream of an emergent system.

  • Who would have guessed we'd need to understand psychology to write code.

    • Before you needed it for an engineering manager but not an IC.

    • Now even ICs need it to get the coding agents to produce code.

  • The original sin of the same origin paradigm is merging data with apps.

    • Data accumulates within a boundary, at a rate proportional to how much data is already inside.

      • This happens within-users.

        • The more data you already have in that origin, the more likely you are to put the marginal bit of data in, vs a generic other origin.

      • It also happens across-users.

        • The more that other people use this origin, the more likely that the quality of the service goes up.

    • That leads inexorably to aggregators.

  • The engagement maxing imperative is downstream of the same origin paradigm.

    • That paradigm must be transcended to free the incentive structure.

  • Someone should decentralize apps for the age of AI.

    • Instead of you adapting to one-size-fits-none products made by aggregators, imagine perfectly personal software that adapts to you, private and aligned with your aspirations.

  • Kevin Kelly has a fascinating Data Manifesto.

  • Same origin apps are an island.

    • They have to have enough value for users to live on the island.

    • Hard to clear that bar, especially since you have to have a business model on the island!

      • It's hard to compose software, which means every piece of software is its own island, which needs its own "economy".

    • Open sourcing a tool makes it not an island (can integrate into other things), but then no business model on the island.

  • Someone should come up with a system to allow vibecoded software written by strangers to run safely on your data.

  • The big unsolved problem in vibecoding is code reuse.

    • We don't do it currently even for individuals within projects, let alone using a stranger’s vibecoded software.

    • But the problem is how to reuse vibecoded software... for yourself, but also for strangers.

  • In a world where you have to trust code, it's dangerous to run code you don’t trust.

    • The unlock is "you don't have to trust the code"

    • if you don't have to trust the code that is running on your personal data you can fetch it from anywhere, safely.

  • Systems should use LLMs to taste.

    • Not use them too often or unnecessarily.

      • They are expensive and squishy.

    • But they can make a useful thing more adaptable.

    • LLMs should only be used for unexpected or open-ended problems.

  • Type checking gives leverage in your code base but requires patience.

    • LLMs have infinite patience.

  • A declarative / non-turing-complete system can be statically analyzed.

    • Once you cross the Turing Rubicon, that’s no longer possible.

  • Imagine a system with negative friction of distribution.

    • It can proactively create, adapt, and distribute software that works on user’s sensitive data.

    • It would require a different security model.

    • But then use cases would have adoption characteristics within it unlike anything we've ever seen before.

      • Outside of content distribution, at least.

      • TikTok for software.

      • Because software can do things, it’s even more important that the ranking is resonant, not hollow.

  • In a closed ecosystem the system’s creator has to come up with the killer use case.

    • In an open ecosystem anyone can come up with the killer use case.

  • A product has to serve the lowest common denominator of its user base.

    • As the user base gets larger, that lowest common denominator naturally gets worse.

    • This is the fundamental reason why the Tyranny of the Marginal User phenomenon shows up.

  • A pattern is a bit of code in the system I’m building.

    • A pattern can:

    • 1) Do computation and optionally produce a UI to possibly be composited on screen.

    • 2) Produce derivative data

    • 3) Do network access if permitted by policy

      • Including LLM requests.

    • 4) Instantiate arbitrary patterns

      • Including ones fetched via HTTP or written dynamically.

    • It’s that last one that gives the compounding possibility.
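
A minimal sketch of what such a pattern might look like as an interface. All names here are hypothetical; the source describes capabilities, not an API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Policy:
    allow_network: bool = False  # 3) network access is opt-in per policy

@dataclass
class Pattern:
    name: str
    compute: Callable[[Any], Any]  # 1) computation, optionally yielding a UI description
    policy: Policy = field(default_factory=Policy)

    def run(self, data: Any) -> Any:
        # 2) produce derivative data (and possibly a UI) from the input
        return self.compute(data)

    def fetch(self, url: str) -> None:
        # 3) network access (including LLM requests) is gated by policy
        if not self.policy.allow_network:
            raise PermissionError(f"{self.name}: network access denied by policy")
        # ...would perform the request here

    def instantiate(self, source: str) -> "Pattern":
        # 4) instantiate arbitrary patterns, e.g. fetched over HTTP or
        # written dynamically -- the step that makes reuse compound
        return Pattern(name=f"{self.name}/dynamic",
                       compute=eval(source), policy=self.policy)

# Usage: a pattern that derives a word count, then spawns a derived pattern.
wc = Pattern(name="word-count", compute=lambda text: {"words": len(text.split())})
print(wc.run("perfectly personal software"))   # {'words': 3}
shout = wc.instantiate("lambda text: text.upper()")
print(shout.run("hello"))                      # HELLO
```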

  • Crowd sourced save points allow compounding effects.

    • If anyone has found a relevant toehold, everyone benefits.

    • The more save points there are, the more likely there's one that does what you want.

    • That scales with volume, creating a preferential attachment effect that is inherently compounding.
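
The compounding claim can be seen in a toy simulation (parameters invented for illustration): each new contribution either reuses an existing save point, weighted by its popularity, or carves out a fresh one.

```python
import random

random.seed(0)
reuse_prob = 0.9          # how often someone reuses an existing save point
save_points = [1]         # usage count per save point

for _ in range(1000):
    if random.random() < reuse_prob:
        # preferential attachment: popular save points attract more reuse
        i = random.choices(range(len(save_points)), weights=save_points)[0]
        save_points[i] += 1
    else:
        save_points.append(1)  # someone starts a brand-new save point

save_points.sort(reverse=True)
print(save_points[:5])    # a handful of save points dominate...
print(len(save_points))   # ...out of roughly a hundred
```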

  • The origin model allows the origin owner to colonize space on your device.

    • Only the owner of the origin is allowed to deploy code in that origin.

    • That colony is more their space than yours.

  • Imagine a system where you could post wishes and patterns you’ve built.

    • The patterns could be reused by other anonymous users of the ecosystem, and the wishes could be fulfilled by patterns others created, automatically.

    • A similar vibe as torrenting and seeding, but without the piracy part!

  • Gmail is your own personal data lake.

    • No need to copy it to another system, just query on demand.

  • Imagine running untrusted code that wants to make a network request.

    • You don’t know if the network request will try to exfiltrate information.

    • One way to know it almost certainly won’t: see if you can come up with a generic search query that returns that precise URL as the top result.

    • If you can, it’s likely a well-known, heavily trafficked URL so it’s OK to fetch it, because it won’t be identifying.
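
A sketch of that check. The search function here is a hypothetical stand-in; the source doesn't specify any real search API.

```python
def is_safe_to_fetch(url, make_generic_query, search_top_result):
    """Allow a fetch only if a generic query surfaces this exact URL.

    If a query containing none of the user's private data returns `url`
    as the top result, the URL is well-known and heavily trafficked, so
    fetching it leaks essentially nothing identifying.
    """
    query = make_generic_query(url)      # e.g. reduce to site + topic terms
    return search_top_result(query) == url

# Toy usage with a fake one-entry "search engine":
fake_index = {"python documentation": "https://docs.python.org/3/"}
top = lambda q: fake_index.get(q)

print(is_safe_to_fetch("https://docs.python.org/3/",
                       lambda u: "python documentation", top))   # True
print(is_safe_to_fetch("https://evil.example/?ssn=123-45-6789",
                       lambda u: "python documentation", top))   # False
```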

  • Everyone focuses on decentralizing the server, but not breaking up the origin model.

    • Transcending the same origin model makes decentralization much less prominent.

  • The security model allows software to be safe, but not necessarily good.

    • The recommender system allows it to find the good stuff.

  • “Endorsement” is a “proud recommendation.”

    • You don’t care who knows about it and that you like it.

  • Horizontal products are hard to make generic demos to show off succinctly.

    • You could construct a bespoke demo for a specific audience’s use case, but that’s hard to scale.

  • Product UIs should stay stable.

    • People want to habituate to a UI (spatial reasoning and recognition).

    • If UIs keep changing a little bit each time it drives you crazy.

    • Every time it changes it should be for a benefit, not just because it's different.

  • In the blazing heat of the modern social media landscape, the productive discourse went underground.

    • Cozy Discourse is where all of the interesting discourse happens.

    • Most of that happens in Discord and WhatsApp, mainly by a fluke of fate.

      • The only two polished chat platforms that allow creating messaging communities for free.

    • But both of them are aggregators, and the communities can’t easily modify them to suit their needs.

    • Also, those aggregators always have a need to make money.

      • Much stronger with Discord than with WhatsApp.

  • Enshittification is inevitable for successful products.

    • Over time they roll down that gradient.

    • It’s a one-way process, a ratchet.

    • The only question is how quickly it rolls down.

  • Netflix found when they optimized for delight not engagement their revenue went up.

    • Thinking long term is good for the business.

    • "Netflix tested a free trial reminder that resulted in a $50M annual loss. Netflix chose to implement it anyway because 'customers are delighted by Netflix's effort to make it easy to cancel. The free trial reminder builds trust, which creates a more robust, world-class brand. Although Netflix loses $50 million, it builds a long-term advantage through its hard-to-copy brand.'"

    • Their DHM model: "Delight customers in Hard to copy, Margin-enhancing ways."

  • AI is not a goldrush, it's a landrush.

    • In a gold rush, you grab random pieces and hope you'll strike it rich.

    • Landrush is grabbing territory that probably won't be profitable for a very very long time.

    • The early web was a landrush; then a brush fire cleared out most of it, only the very hardy ones were left, and they could then dominate the empty territory.

    • With network effects, it's an order of magnitude easier to grab virgin territory than take territory from someone else.

  • Herb Simon, 1971:

    • “In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

    • Prescient!

  • Michael Goldhaber realized how the influencer economy would work in 1997.

    • “those who receive considerably more attention than they give—or stars … and those who pay out more attention than they get—or fans."

  • Donald MacKenzie wrote An Engine, Not a Camera: How Financial Models Shape Markets.

    • It makes the case that tools like the Black-Scholes model for options pricing don’t just describe the market, they transform it.

    • It allows everyone in the market to use the same tool.

    • Once traders coordinate around the model, their collective behavior makes it "right" - not because it is objectively correct, but because using it transforms the market to match its predictions.

  • James Evans gave a fascinating talk I attended this week.

    • He studies how collectives “think.”

    • Unpredictability is the best predictor of a paper being highly influential.

      • These ideas are “off manifold,” they are outside the normal landscape of research.

    • As AI makes it possible to work with large data sets, all of the human researchers are focusing on the same AI-adjacent topics, leading to less novelty.

      • This is partly because humans have limited time, and want to make sure they can publish a paper.

      • Consider choosing between:

        • An “on manifold” idea that is more likely to be correct and publishable but not particularly interesting

        • Vs an “off manifold” idea that is far more likely not to be correct or publishable, but if it is, it is more likely to be interesting.

      • Humans will pick the former.

      • But AIs don’t get bored, and you can assign them tasks that no human would agree to waste their time on.

      • So you can have the AIs swarm precisely on the off-manifold ideas that humans aren’t looking at.

      • It’s kind of like the reverse of Goodhart's Law.

    • His research also finds that more complex language models naturally build a kind of “society of mind” inside of themselves.

      • They literally develop multiple personas and talk to themselves in those personas.

      • It’s easier for a model to spit out: “Sarah: Hmmm that didn’t work let me try diving deeper to see if I can fix it. Bob: Wait no, I think we should backtrack” than for a single model with a single persona to realize it should backtrack.

      • It’s kind of funny this happens; I think of it like the model having a ventriloquist dummy it’s talking to that it’s puppeting but also listening to like it’s a real person.

      • Plato said that all insight comes from dialogue; it makes sense that models create internal dialogues to get better at having insights.

  • When we discover the principles of innovation, they cease to be the drivers of innovation.

    • Because now they are “on manifold,” they are not surprising; they are straightforward.

    • Innovation is fundamentally about surprise.

    • About going beyond or creating, not interpolating or tightening.

  • I saw a fascinating talk by Carl Benedikt Frey.

    • He is the author of How Progress Ends.

    • He points out that innovation and progress are disjoint.

    • There are replacing technologies and enabling technologies.

    • Replacing technologies are about doing an existing thing faster, better, cheaper.

      • For example, automatic elevators, which replaced elevator attendants.

    • Enabling technologies are about allowing a new kind of thing.

      • For example: the telescope: allowing us to see the cosmos in ways we couldn’t before.

    • Replacing technologies replace existing labor, but enabling technologies don’t.

    • Replacing technologies also have diminishing returns.

      • You’re taking a quantity towards zero.

    • Replacing technologies are automation.

      • Enabling technologies are innovation.

    • Enabling technologies are what cause growth.

    • Large firms are more likely to invest in replacing technologies, and new entrants are more likely to create enabling technologies.

      • New market entrants are what do innovation over automation.

    • Industries that have more dynamism generate more growth.

    • One of the US's superpowers (at least historically) is its dynamism.

      • If you compare the average age of a company in the top 5 by market cap in a country, the US is ~40 years, and Germany is ~120 years.

    • Centralization leads to dynamism decline.

      • Harder for new entrants, larger entrants focus more on automation.

      • Incumbents lobby harder, and get more protective regulation.

    • Majority vote can only tighten, it can’t innovate.

      • Tightening is different from generating.

    • With AIs making patents and papers easier to produce, we’ll see patent and paper inflation.

      • We’ll see more low-quality patents and papers.

      • A low-quality patent is one the inventor doesn’t even bother paying the maintenance fee on.

  • In a world that has changed, wisdom is not always wise.

    • Wisdom is intuition formed from experience.

    • But it implicitly expects the world to behave the same way as when the experience happened.

    • If the world has changed in some way, some of your wisdom could now be dangerous.

    • This insight and the next two are from a talk by General Stanley McChrystal.

  • Change is inevitable.

    • Adaptation is not.

  • Militaries improve in punctuated equilibrium.

    • They don’t innovate during peace time, only war time.

    • But in war time they innovate very quickly.

  • If you connect the world from Madagascar to Manhattan extreme things happen.

  • We recognize the value of efficiency and connection, but not their downsides.

    • The upsides are obvious and immediate.

    • The downsides are non-obvious and indirect.

  • A decade ago, bringing the world closer together sounded like an admirable goal.

    • But it turns out that it makes us miserable.

    • We are inherently very status-focused.

    • We orient ourselves based on the status of the people we interact with the most.

    • The more inequality in a community of people interacting, the more social distress.

    • Now, we can all “interact” with high-status people in parasocial relationships all the time.

    • Inequality in the adolescent development window is particularly toxic.

  • Interacting in person helps you see the whole person.

    • Online, you can selectively focus on the parts the person shares with you, or the subset you agree with.

    • In person, you see the full person, hard to tune away.

  • The wholeness and richness of in-person interactions is hard to scale.

    • Tech is all about scale.

  • In cacophony we get distracted.

    • We can’t think straight.

    • When we get distracted we just gravitate to what we want (sugar) not what we want to want (vegetables).

  • We need more leisure time.

    • To be bored, natural friction.

    • To reflect.

    • To be together as humans without a goal.

  • We should leave this era of shamelessness and monoculture and return to judgment and taste.

  • Efficiency leads to a duoculture.

    • Not a monoculture, but a highly posterized culture of two camps in ever starker opposition to each other.

    • Even worse than a monoculture because it has significant unproductive, overwrought tension.

    • A country where each half believes the other half of the population are crazy can’t last.

    • Redder states are getting redder and bluer places are getting bluer.

      • That’s not healthy!

      • You learn to live with others when you’ve lived next to them.

  • Trust happens if you expect to work with someone again.

    • On the internet you have an infinite supply of people to work with.

    • Each relationship can be disposed of if it challenges you.

  • Social media is a highway through a village.

  • Karma is real in a closed system over long enough time horizons.

  • The internet is the prisoner's dilemma played once.

  • Two kinds of people: stayers and leavers.

    • Stayers stay in the place they grew up.

    • Leavers go elsewhere.

    • Stayers are, by construction, more conservative.

    • Leavers are the most economically successful and also have the least community.

    • Stayers know how they fit into their community.

    • When you expect to be in a place for a while you make friends because you need to live with them.

    • Leavers are chameleons because they need to be.

  • E.B. White, in 1949’s Here is New York, captured something fundamental.

    • "There are roughly three New Yorks. There is, first, the New York of the man or woman who was born here, who takes the city for granted and accepts its size and its turbulence as natural and inevitable. Second, there is the New York of the commuter-the city that is devoured by locusts each day and spat out each night. Third, there is the New York of the person who was born somewhere else and came to New York in quest of something. Of these three trembling cities the greatest is the last-the city of final destination, the city that is a goal. It is this third city that accounts for New York's highstrung disposition, its poetical deportment, its dedication to the arts, and its incomparable achievements. Commuters give the city its tidal restlessness; natives give it solidity and continuity; but the settlers give it passion."

    • It is that force of “leavers”, of strivers, that powers New York’s dynamism.

  • A key indicator of a healthy group is when sub-groups talk about peer teams as “we”.

    • If they’re “they”, then it shows that the main group is not as important as the sub-group.

    • Is the collective important or is it an afterthought?

  • We used to have synchronous large scale experiences that cut across every dimension.

    • Very few collective experiences now.

    • Even the Super Bowl is losing that.

    • If we don’t have the same shared experiences, we don’t feel like one people.

  • Modern society is missing the town square.

    • The meso scale community.

    • Where you don’t know everyone’s name but if you saw them at the grocery store you’d say hi.

  • Density and small scale is where humans thrive.

    • The US is the only country that doesn’t have much of a culture around “villages.”

  • Americans love their college life more than other cultures.

    • It’s the most walkable life.

    • Seinfeld, Friends, etc. are about the fantasy of college as adults.

    • So many things in our lives post college are hostile.

    • As adults you don’t get to know your apartment neighbors at all.

    • We crave that kind of density and small-scaleness.

  • Plato said that democracies had to be small and geographically compact to work.

    • Everyone has to feel like part of one thing, together.

    • Madison wrote a Federalist paper arguing that a larger democracy could work, because of faster transit and the newspaper.

    • He argued that a larger democracy could stay coherent and also be more dynamic.

  • In the modern world we don’t have a lot of “couch friends.”

    • That is, friends where you can just sit on the couch together.

      • No point, no activity.

    • Or friends you can do quotidian tasks with, like running errands together.

    • A friend where you’re willing to do boring things together, and just be together.

    • In the Midwest it would be embarrassing to not take your friend to the airport.

      • The point isn’t “an Uber is expensive,” it’s “this is a way to spend quality time together and show you care.”

  • Some topics are natural icebreakers.

    • They transcend class and social boundaries.

    • For example: the weather, or sports, or movies.

    • Shared experiences that strangers can talk about no matter how different they are.

  • Scenius requires a kind of scaffolding.

    • For example: a theme party, or a party with a gimmick like bringing a baby picture.

    • For example, the cigarette break is a social Schelling point for a short discussion.

      • Cigarette breaks force subsets of people into an interaction that is neither transactional nor curated.

  • Someone should create a social pattern language.

  • Shame is what separates us from machines.

    • Pain is load bearing.

    • People who don’t feel pain are likely to lose a limb.

  • It’s exhausting to perform yourself.

    • Crowds and Power by Elias Canetti dives into this.

    • Especially when an online interaction turns to a real world one.

    • Being in a crowd or mosh pit is very freeing.

    • Modernity doesn’t realize we’re all performing more hours of the day than before.

  • Kids are allowed to be bad at things, adults are not.

    • But adults can be bad at things too!

    • The things you wanted to do as a 16-year-old? You can do them as an adult!

    • You can just start a band.

    • Being bad at music together is bonding.

    • 60k people singing together badly at a Taylor Swift concert is a transcendent experience.

  • The larger the group of people the more it tends towards sociopathy.

    • The incentive of the individual diverges from the central incentive of the group.

      • That is approximated by asking individuals, “If you didn’t know which individual you were in the group, what would you want individuals to do?”

      • The veil of ignorance.

    • As the org gets bigger, individuals feel like small ants against the weight of the organization: they can’t change it anyway, and if they go against it they’ll get crushed.

    • So it’s easier to just go with the emergent incentive, even though you wish it weren’t your incentive.

      • If you go against the incentive, someone else will just do it.

      • If you take a principled stand, you get knocked out of the game.

    • The larger the group gets, the stronger the incentive to do the thing that is disjoint from the thing you wish everyone were incentivized to do.

  • When someone for whom a task comes naturally teaches someone it doesn’t come naturally to, it’s frustrating for the student.

    •  “I don’t know, simply do this!”

    • Slate Star Codex points out that you should learn a subject from someone who really struggled with it rather than from someone who learned it easily.

  • If you have a chicken or egg problem, buy a chicken.

  • The seed and the blossom can be morally different.

    • The reason Secure Enclaves exist is because content producers wanted to secure DRM decryption keys.

    • A gracious bloom from a selfish seed.

  • “Think slow, act fast.”

    • The title of a chapter from How Big Things Get Done.

    • Also just good advice!

  • This week I learned about the Minsky Moment.

    • Hyman Minsky's Financial Instability Hypothesis.

    • The idea that stability breeds instability through increasing leverage and risk-taking.

    • Three stages of finance:

      • Hedge finance (stable): Cash flow covers both interest and principal

      • Speculative finance (risky): Cash flow only covers interest, must roll over principal

      • Ponzi finance (unsustainable): Can't even cover interest, must borrow more to service debt

    • In the last stage, the system is in a critical state: radically over-levered, needing just a single grain of sand in the gears to blow the whole thing up.

    • The Minsky Moment is the sudden realization that debt levels are unsustainable, triggering rapid deleveraging and crisis. 

      • The 2008 housing crisis was a classic example.
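
    • The three stages above can be sketched as a toy classifier. This is my own illustration, not from the post; the function name and single-period inputs are assumptions (a real judgment depends on expected future cash flows):

```python
def minsky_stage(cash_flow: float, interest_due: float, principal_due: float) -> str:
    """Classify a borrower into Minsky's three stages of finance.

    Toy illustration: a real classification depends on expected future
    cash flows, not one period's snapshot.
    """
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # covers both interest and principal
    if cash_flow >= interest_due:
        return "speculative"  # covers interest only; must roll over principal
    return "ponzi"            # must borrow more just to service the debt

print(minsky_stage(40, 30, 50))  # → speculative
```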

  • Things that have metrics get compressed.

  • Optimizers will choose gains on the target metric at catastrophic cost to unmeasured externalities.

    • Another way of describing the fundamental reason Goodhart’s Law shows up.

  • A benevolent dictator is OK in a system you can credibly exit with minimal downside.

    • The higher the cost of exit, the more important it is that it not be a dictatorship.

  • The collective vs individual alignment problem is the principal agent problem.

  • The universe tends inexorably towards consolidation because of gravity.

    • Gravity shows up in physics and also in many emergent phenomena due to preferential attachment.

    • When a system has gotten to its late stage, it’s consolidated.

      • It’s a kind of heat death.

    • But a lot of interesting things can happen before that consolidation.

    • And there are often ways to significantly slow down the consolidation.
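
    • Consolidation via preferential attachment shows up in a minimal simulation (my own sketch, not from the post; the node count and seed are arbitrary). Each new node links to an existing node with probability proportional to its degree, and a few early hubs end up holding a disproportionate share of the edges:

```python
import random

def preferential_attachment(n_nodes: int, seed: int = 0) -> list[int]:
    """Grow a graph one node at a time; each newcomer attaches to an
    existing node chosen with probability proportional to its degree.
    Returns the final degree of every node."""
    rng = random.Random(seed)
    degrees = [1, 1]  # seed graph: two nodes joined by one edge
    for _ in range(n_nodes - 2):
        # rng.choices weights the pick by each node's current degree
        target = rng.choices(range(len(degrees)), weights=degrees)[0]
        degrees[target] += 1
        degrees.append(1)  # the newcomer arrives with a single edge
    return degrees

degrees = preferential_attachment(2000)
hub_share = sum(sorted(degrees, reverse=True)[:20]) / sum(degrees)
print(f"top 1% of nodes hold {hub_share:.0%} of the edges")
```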

  • It’s probably comforting for employees at Meta to believe that revealed preferences are all that matter.

    • Facebook should admit that, of course, there are some parts of globally optimized social media that are bad for society.

    • If you’re the lead of the thing you need to believe it’s a good thing.

    • The revelation of “Are we the baddies?” is so crushing that your brain won’t let you even consider it.

  • When you have an open research problem, the complexity can very easily spiral recursively and get bigger and bigger on each loop.

    • It's important to set intermediate milestones that are named, and have concrete, reasonable sets of features and functionality that are close-ended.

    • Those help pull the divergent energy into convergent iterations, landing concrete things to then iterate off.

  • Statistics were invented for state control.

    • Statistics is derived from a word meaning “state facts”.

    • I didn’t know that before this week!

  • Blind execution is when you get momentum, and the next step keeps on being obvious, so you keep taking it.

    • The momentum of execution is so overwhelming and all-consuming that you don't look where you're going at all.

    • Before you know it, you've executed off a cliff.

    • Operator PMs in their flow state, or Claude Code when it gets itself confused, are in this state.

  • You can grow things even if you don’t understand how they work.

    • Way more easily than you can build things when you don’t understand how they work.

    • Because things that grow build themselves emergently.

  • LLMs are primarily an effective, useful distillation of society's knowledge. 

    • It turns out it’s way more useful than we would have thought!

    • And we have no idea how it works!

    • We know the recipe but we don’t know why it works.

  • Complicated problems can be solved with reductionist approaches, with enough patience.

    • Complex problems can’t be solved with reductionist approaches, or sometimes at all.

    • They are interdependent and emergent.

  • Complex problems have an emergent invisible component that grows and grows to become the most important thing.

    • It breaks or immediately evaporates if you break the whole apart to look at it.

  • AI is great for complicated problems, but not complex ones.

    • Complicated problems can be broken down into sub problems, solved independently, and reassembled into the right answer.

      • This process can recurse indefinitely.

    • The main thing is just being patient enough, or having enough people, to solve the various sub-questions.

    • But complex things are interdependent and can’t be sharded.

    • Complex problems can’t be solved by simply giving more time, but complicated ones can.

  • Scientists of lichen can’t “see” lichen.

    • Because they can only see the parts, not the whole.

    • Science is inherently reductionist.

    • Lichen is an emergent phenomenon of two organisms in symbiosis.

  • Overview effect: when you see that everything you know is tiny, you transcend.

    • You see there's so much more than your entire world.

  • Fun science fact: the vast majority of a tree’s mass comes from the air.

    • Carbon dioxide, not the soil.

  • If you eat your seed corn, you won't realize a problem until a year later when you're screwed.

  • Freefall and freedom are indistinguishable until you smack into the ground.

    • Imagine that this is happening in total darkness.

    • At each moment, you know that you haven’t smacked the ground.

    • But you can never tell if in the next moment you will.

  • The word "lie" gets at the state of mind.

    • Not just "they said something untrue" but "they deliberately said something untrue.”

    • It implies they had an intention or motivation for doing it, assuming they aren't a pathological liar.

  • A thoughtful friend who is a PM in a hard situation said they are "composting bad experiences into blog posts."

  • The synthesis pass is where leverage is created.

    • That's why compounding engineering works.

      • And why Bits and Bobs generate so much leverage for me.

    • The synthesis pass is important but never urgent.

    • This is the closest to my secret: I just give the synthesis pass the time and attention it deserves.

  • A lot of things are only obvious in retrospect.

    • That’s what the synthesis process extracts, and why it has to be done after you’ve had the experience.

  • People who use AI for everything don’t get knowhow.

    • People who use AI for nothing will be at a massive disadvantage to those who do.

    • The answer is a balance.

  • If you never sit in the uncertainty and not knowing of a thing, your brain won't bother learning it.

    • That’s why you can’t just have facts spoken to you; you need to actively puzzle on your own to try to derive them before they stick.

  • Teaching is often a top down process.

    • Learning is a bottom up process.

  • Peloton instructor wisdom: "Often what they hate in you is what’s missing in them."

  • A rule of thumb: when driving, idiots always have the right of way.

  • A mentor who can see the rut you’re in can throw you seemingly arbitrary constraints to force you out of your rut.

  • Most people are either doers or thinkers.

    • Sarumans are doers.

    • Radagasts are thinkers.

    • Some rare people are comfortable and highly effective in both modes, smoothly marbled.

    • Compassionate and commanding.

  • My priority stack is first and foremost, “will this action be net good for the world?”

    • Later in the stack is “Will this action be net good for my current employer” and “Will this action be net good for my team.”

    • I have a hard time doing things that violate that priority stack.

    • But when all of the layers of the stack are aligned, it feels like flying.

  • A trick to downshift your brain to go to sleep: read formulaic novels in a genre you love.

    • It’s enjoyable but shallow.

      • The plots are formulaic and totally predictable.

    • You don’t need to use your brain, so your brain can get ready for sleep.

    • My guilty pleasure is trashy gay romance novels.

      • I especially love the sub-genre where one of the characters has their gay awakening as they find their soul mate, and that makes them a better person.

      • I have no idea why that appeals to me so much…

      • … oh wait.

  • Someone asked me how I was able to develop such strong intellectual discipline.

    • My secret is that I’m dysfunctionally conscientious.

    • It’s a crippling personality trait in many ways, but I’ve learned to use it to my advantage.

    • I use that to create clear, always-rules commitments to myself that I’d feel deep shame for failing to abide by.

    • If you have a monster that’s always chasing you, you might as well use it as an opportunity to get fit!

  • Someone this week called me “the human version of cocaine… but in a good way.”

    • Just for the record, this energy is all natural, baby!

  • A beautiful video of an emergent graph behavior from a simple Graph-Rewriting Automaton (GRA).

    • Watching it created a sense of awe in me.
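
    • For intuition, a graph-rewriting automaton repeatedly applies a local rewrite rule to a graph. A minimal toy sketch (my own rule, far simpler than the one in the video): subdivide every edge each step, so structure grows from a tiny seed:

```python
def subdivide_edges(edges: set[tuple[int, int]], n_nodes: int):
    """One rewrite step: replace each edge (a, b) with a-m and m-b,
    where m is a freshly created midpoint node."""
    new_edges = set()
    for a, b in edges:
        m = n_nodes        # id of the new midpoint node
        n_nodes += 1
        new_edges.add((a, m))
        new_edges.add((m, b))
    return new_edges, n_nodes

# Seed: a triangle on nodes 0, 1, 2.
edges, n = {(0, 1), (1, 2), (0, 2)}, 3
for _ in range(4):
    edges, n = subdivide_edges(edges, n)
print(len(edges), n)  # → 48 48 (edges double each step: 3, 6, 12, 24, 48)
```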

