Bits and Bobs 11/4/25

Alex Komoroske

Nov 4, 2025
I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.sljfzci0g345

AI as amplifier. One of AI's superpowers: retconning. Human-out-of-the-loop. Convenience vs control. Superficially perfect answers. Technocalvinism. Abducting Claude Code prototypes. Super citizens. The biker bar test. Elegant heuristics. Shame as the moral equivalent of pain. Blossoming ideas.


----

  • Amazing distillation from Anthea Roberts: “AI is an amplifier.

    • It can amplify good taste, agency and curiosity.

    • It can also amplify laziness and mediocrity.”

  • Vibecoding on an already healthy codebase does a good job of keeping it working.

    • If it's a crappy codebase it makes it worse and worse at a compounding rate.

    • AI is an amplifier.

  • If LLMs make thinking 10x cheaper, will you think 10x less, or 10x deeper?

    • The answer to that question determines if you will thrive or wither in the AI era.

  • This week’s AI security wild west round up.

  • Remember: the AI tools help the bad guys, too.

  • Chatbots start off with 50 paragraphs of invisible context stuffed into them by the corporation that created them.

    • It’s easy to forget this!

  • ChatGPT always feels so convergent.

    • It jumps straight to Axios-style faux certainty, no matter what.

    • Claude is more willing to keep the conversation open.

  • LLMs can often see throughlines in your own ramblings better than you can.

  • If you ask an LLM whether a new fact you just added to a conversation is relevant, it will always say something like “It certainly is!”

    • It will never say “No, that’s not related.”

    • It will come up with some plausible way to tie together whatever random stuff you throw at it.

    • They’re amazing at retconning; that’s their whole schtick.

  • Google was a one trick pony… but it was one hell of a trick.

    • If Google hadn’t messed up mobile then maybe it could have parlayed that one trick into owning all software.

      • Thank goodness they didn’t achieve it!

    • The default future in the AI era is one where everyone uses one chatbot owned by one powerful company and it consumes all software.

    • This is a future OpenAI is betting on.

    • Let’s hope they don’t achieve it!

  • “Compounding engineering” is kind of like reinforcement learning on documentation instead of model weights.

    • A similar process on a different pace layer.

  • I like this essay’s frame of “intelligent data flywheels.”

    • This feels like a better frame than “compounding engineering,” which feels overly specific to engineering, when the pattern is applicable to any task LLMs can do.

  • When Apps in GPT launched, Sam Altman did an interview on Stratechery.

    • He said something along the lines of “We could have done the Zillow features ourselves… but we wanted to do something benevolent to help out 3P companies.”

    • Spoken like a true aggregator.

    • That shows that he doesn’t understand the value of ecosystems at all.

    • "We will simply do it all, it is just because we are benevolent and gracious that we allow others to exist in our garden."

  • Reputation has to accrue to a brand.

    • That is, to an identifier, a signifier, that only one entity is legitimately allowed to use.

    • If there’s no brand, then reputation (positive or negative) can’t accrue to it.

  • Models can get perfectly good at games but not real objectives.

    • Games have an unhackable reward function, because the metric is precisely the ground reality.

    • RLHF quality is only a proxy for real usefulness.

    • So the model reward hacks, as any optimizing process must do.

    • Goodhart’s law strikes again!

    • Games are unlike real objectives in that they are inherently artificial and constructed, a little pocket of reality with precisely defined rules and goals.

    • If the rules say the player won, they won.

    • Compare that to the real world, where a business making a ton of profit doesn’t mean it was on net good for society.
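
    • A minimal sketch of that reward hacking in Python: a hill-climber that can only see a proxy metric trades away untracked value without ever registering the tradeoff. All the quantities are invented for illustration.

      # The optimizer sees only the proxy; "gaming" is invisible to the metric
      # but actively harms the real objective.
      def proxy(quality, gaming):
          return quality + gaming        # the metric can't tell the two apart

      def true_value(quality, gaming):
          return quality - 2 * gaming    # gaming hurts the real goal

      quality, gaming = 1.0, 0.0
      for step in range(10):
          gaming += 1.0                  # the cheapest way to move the proxy
          print(step, proxy(quality, gaming), true_value(quality, gaming))
      # The proxy climbs forever while true value falls: Goodhart's law,
      # taken one greedy step at a time.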

  • What if instead of buying software from a store, you could grow it in your garden?

  • If the model is the product then OpenAI already won and we should all just bow down to them now and save some time.

    • If that’s true, unless someone else can somehow get an order-of-magnitude better model than OpenAI, their default momentum will win out.

    • In this frame, the products using the model are the little ornaments on the Christmas tree of the model.

    • All power accrues to the most-used model and who controls it.

    • I personally think this is not true.

    • The models will be commodity, behind-the-scenes inputs into the actual thing people use.

    • But even if it were true, acting like it's true will hasten its arrival.

    • It’s imperative that we work to make that not the default future.

  • A useful exercise: “What are things 50 years from now people will look back on and say ‘how did people live without that?’”

    • If you apply that test to personal AI who acts as an extension of your agency, it’s obvious it fits.

    • If you do that for centralized chatbot engagement-maxing extractive AI, it’s not.

  • Human-in-the-loop can slow down the computer's loop.

    • It allows better control and judgment, and makes it less likely the system gets off track and into a doom loop.

    • But sometimes it’s better for the human to be outside the loop.

      • Controlling the AI only at a high level.

    • A benefit of this: once you get it working, you get a lot of leverage.

      • You don’t have to constantly be in the loop, allowing you to have it bake while you’re doing other things.

    • For this to work and be safe, the inner loop has to be isolated from the real world, its own universe.

      • Otherwise it could get in a doom loop that causes harm in the outside world.

    • For example, some of Amazon’s warehouses are designed for robots, not humans.

    • Although in any system, even “airgapped” ones, there’s some information leakage.

      • If the inner system runs as hot as the surface of the sun, it’s going to melt the things around it.

  • Two models of software from a user’s perspective: “I’ll manage the process” vs “you do it for me.”

    • The former is more a tool.

      • An extension of the user’s agency.

    • The latter is more of an assistant.

      • A separate entity the user delegates to.

    • The question is: who does the user blame if it doesn’t work?

    • With the tool model, you as a user have to manage it and think about it.

    • With the latter, you can hand over all responsibility for the outcome.

    • The latter requires a brand for the reputation to accrue to.

      • The brands that do a bad job with that delegated responsibility won’t tend to earn more users.

    • If the brand does a pretty good job in most cases, people often would prefer the convenience of not having to be responsible.

    • At its heart it’s a convenience vs control tradeoff.

  • Does the app call it “My Places” or “Your Places”?

    • Is it mainly a tool, an extension of the user?

    • Or is it a service, a separate entity from the user?

    • The former makes people feel more aligned with it.

    • But if you add an assistant to the service, “My Places” no longer works.

    • Assistant: “Should I add Bob’s Donuts to ‘My Places’?”

    • It foregrounds that the assistant is an entity that is not you.

    • Which begs the question: who does it work for?

    • What are its intentions and goals?

  • Choice takes effort.

    • Resonance is work.

  • Whether an LLM can create it is distinct from whether a human would find it useful.

    • The ideal: have an LLM create swarms of output that savvy users sift through, and boost to lower-engagement users.

  • The Coasian theory of the program: what is the ideal size?

    • The equilibrium size has to do with:

      • 1) Difficulty of producing effective code.

      • 2) Difficulty of distributing the software to users.

    • Historically an app must be somewhat big because software is hard to write, and apps are hard to distribute (on the order of $10 per consumer install).

      • An app has to be big enough to contain within itself a viable business model.

    • But LLMs can produce code cheaply.

      • They are willing to produce itsy-bitsy pieces of code that no human would have bothered with, since it couldn’t be distributed.

      • The code is conceivably correct and useful, but you don’t know if it’s actually useful to a real human in a real situation yet.

    • The question is what level of program is worth actually showing to a user to see if it’s really useful.

    • If it’s useful, it’s worth investing resources into distributing to other users.

  • With LLMs there has been a surge in interest in “specs.”

    • Don’t write the code, write the spec that tells the LLM what to build, and leave it up to it to figure out the details.

    • But sometimes you want something a layer below, that includes an opinion about specific parts of the code, but leaves the unimportant details open.

    • This is less like a PRD and more like a design doc, or even lower, something with the key code sketched out exactly, just the integration bits left unspecified.

    • It knows how it should work; it just leaves the last details of making everything fit snugly up to the squishy parts to figure out.

    • A backbone of a solution.

  • Duolingo is successful because it merges the superficial enjoyment of addictive games with a goal users can be proud of advancing.

    • Users play it because they’re addicted to the core game loop; but they don’t feel as bad about it because it’s for a good reason.

    • Another thing with similar dynamics would be an addictive game that somehow helps advance the cause of a nonprofit somewhere.

  • I think I’m addicted to vibe coding with agents.

    • I have a substrate that is a perfect fit for vibe-coded throwaway software.

    • It’s fun to see what I can get Claude Code to build.

    • It has variable reward, just like a slot machine.

    • At least I’m proud of what I build…

  • The metric has to be a simplification of reality and that must create shortcuts.

  • The third wish is always “undo the first two wishes.”

    • Goodhart’s law drives monkey’s paw dynamics.

  • It feels kind of crazy to me that AlphaFold works.

    • But maybe the reason AlphaFold works isn’t that unrelated to why transformers are good at images.

    • The easiest way to predict which way an image is oriented is by developing a world model that picks up on subtle cues that humans would have a hard time even describing.

    • The easiest way to predict which way the protein will fold is by developing a world model that picks up on subtle cues that humans would have a hard time even describing.

    • It’s hard for our brains to handle more than 2 dimensions.

      • But in tensor space it doesn’t matter.

      • They can handle arbitrary dimensions, where we can only manage 2 or 3.

    • Apparently DeepMind decided to tackle protein folding when they heard there was a game to predict folding of proteins that humans could play.

      • That implied there was some subtle correlation, that transformers could exploit even more directly.

    • For pattern recognition, if humans can do it transformers can do it.

  • LLMs give superficially perfect answers.

    • Only as an expert can you detect that it’s a bit wrong.

    • Similar to Gell-Mann Amnesia.

      • You trust newspapers when you don’t know the topic, but don't trust them when you do.

  • Chatbots assume an entity that intermediates your interactions with everything in the world.

    • The AGI vision is inherently Big-Brother-y.

    • The main research labs all assume of course the model is the center of the universe.

  • Back in the 80’s, mainframes felt like Big Brother.

    • There’s an interview with Steve Jobs in 1981 on Nightline.

      • This is before Apple did the famous Big Brother ad.

      • The interviewer, David Burnham, pushes back on computing and says “mainframes are evil; they reduce everyone to just lines in Big Brother’s spreadsheet”.

    • Jobs basically says “no, we’re going to make personal computers, which are tools that extend your agency: bicycles of the mind.”

    • Feels just as relevant for this moment!

  • Someone should build a new open distribution medium for software that is perfectly personal.

  • I want a platform for all of the features that are P3 for the software’s creator, but are P0s for me.

  • Luke Drago decries technocalvinism: the idea that because something is inevitable you should accelerate it.

    • Contains this killer Camus quote: "Those who lack the courage will always find a philosophy to justify it."

  • Someone should create an alternate physics for distribution of vibecoded software.

  • AI can code, but it can't build software.

    • A clarifying frame on what they can and cannot do.

    • Potemkin software.

  • I love Arjun Khoosal’s Let the Little Guys In.

    • He imagines a context sharing runtime for a personalized web.

    • Someone should build that!

  • Software that is 100% personal to you might superficially look to others like software that’s just 10% more efficient for your use case.

  • A new pattern in the era of AI-software: Abducting Claude Code Prototypes.

    • First, prototype an interaction by looping with Claude Code.

      • Claude Code can run arbitrary third party code, so it’s very flexible.

    • After you figure out some loops that are useful and fun, start having Claude Code abduct them into deterministic software you can run without dropping to the CLI.

      • At first, maybe with LLM calls inside the app for flexibility, but increasingly as you get a better handle on what it should do, those inner loops can be deterministic too.

    • Your self-steering metric is: grow the absolute amount of usage of the system while minimizing the number of times users have to drop down into Claude Code.

    • Related to the Doorbell in the Jungle pattern.

  • Om Malik: Why Tech Needs Personalization.

    • “I’m often confounded when Uber drivers take freeway detours, even when city streets would be faster.

    • Lacking local street knowledge, they inadvertently reinforce the system’s biases, feeding it more of the same data it then uses to direct future users.

    • With deeper, more contextual understanding of real-world scenarios and user intent, that wouldn’t happen; we’d move beyond simply adhering to a prescribed, albeit “fastest,” route.”

    • If users aren’t thinking for themselves, then everyone will just be pulled towards the mundane average, even when it’s not actually better.

  • Agents talking to other agents is like the Sexual Revolution.

    • But we haven’t yet invented safe sex for our data.

  • Asimov’s Addendum calls for LLM tools to allow for memory portability.

  • I like When Leggett’s concept of Server User Agents.

  • We should decentralize apps.

    • The app a user uses is where the power accumulates.

    • Decentralizing the other layers doesn’t matter that much if the top most layer is centralized.

    • To do this will require new security laws of physics.

  • An article that observes that "Free software scares normal people."

    • Free software emerges from a process of experts adding the features they want.

      • Glomming on possibility.

      • Like clay being added.

    • Simple, mass market software requires a strong authorial voice, curatorial judgment.

      • Cutting away possibility.

      • Like carving marble.

    • Successful mass market tools need to be hewn out of marble by an auteur.

  • Traditional apps pool user data in one origin.

    • That allows aggregate processing that can help all users…

    • But also all of the data sitting in one place, trivially visible to one entity.

    • You have to really trust that entity!

    • “You can safely store your cookies with me,” said the Cookie Monster.

  • If Confidential Compute is so great, why isn’t it being adopted more?

    • Because we have had to get by with "just trust the owner of the origin".

    • It was good enough.

    • We got used to it, because we had to.

    • Phishing, GDPR, etc were all just things to live with.

    • So Confidential Compute is only useful to the long-tail of hyper-sensitive cases.

    • But it creates the latent potential for a new paradigm.

    • One where users don’t have to trust the creators of software any more.

  • https://tee.fail shows a successful attack on confidential compute.

    • But get a load of those pictures!

    • This attack takes sustained, deep access to the target machine.

    • Yes, don't assume that someone running a TEE in their basement can be trusted... but that was never a good idea anyway.

  • I want a substrate for vibecoding safely.

    • A new software distribution substrate that allows vibecoded apps written by strangers to safely run on your data.

  • One path: make vibecoding so easy everyone can do it.

    • That's hard, even with great LLMs.

    • Another: have a distribution tool so everyone can benefit from vibecoded things from strangers.

  • The security model of software affects society.

    • The security model creates the distribution physics.

    • The distribution physics affects the incentives.

    • The incentives create the gradients.

    • The gradients affect society.

  • Consumers wouldn’t be willing to pay for private computation.

    • That is, instead of having computation happening in the clear in the software creator’s world, it could happen in a private enclave.

    • It’s not a compelling enough value, even if, all else equal, people would prefer a more private option.

    • But if they were paying primarily for LLMs, and then as a bonus they got private computation covered, they would happily take it.

  • An open-ended platform for distribution of vibecoded software opens up new models of distribution.

    • Have an LLM come up with use cases, create prototype software, try to distribute it, see what resonates, and then double down on what works.

    • A gradient descent optimizer… important to give it a normative north star so it doesn’t just maximize paperclips.

  • LLMs can clear the good enough bar for quality of content immediately.

    • That means they can quickly get to the point where from there humans can tweak it and improve it for everyone.

  • Content can be distributed proactively, as in TikTok.

    • That’s because it can't hurt you directly.

      • Other than wasting your time or making you believe something that’s not good for you.

    • Content is safe by default.

    • Software today can’t be distributed proactively because it's dangerous by default.

  • Society works because of “super citizens” who go above and beyond to create social infrastructure.

    • They invest discretionary effort in ways that improve things for others, too.

  • Everyone's fabric of computing should feel different and personal.

    • The exact opposite of how software works today.

  • What would it look like if we could vibecode communities?

    • The same kind of DIY personalization as software, but for communities.

    • Not so much communities of vibecoders, but communities that can be vibecoded, resonantly.

  • Bonus use cases take time to activate users on and habituate them to.

    • So you need a primary use case that puts the user into the right mindstate, where they can over time habituate to the bonus use case and come to rely on it.

  • Fork is an easier operation than merge.

    • Merge requires choice to figure out how to synthesize.

    • Forking doesn’t require any decisions.

  • The Biker Bar test for new hardware:

    • Would you wear it into a Biker Bar?

  • To get a network started, sometimes you subsidize adoption.

    • But that can quickly embed unsustainable usage patterns.

    • One idea: subsidized donations for friends.

    • If a user shares something in the network with a friend who’s not yet a paying member, they can subsidize their friend’s usage.

    • The network could do a “donation match” to give leverage to that donation of credits.

    • The percentage match sets how aggressive the subsidy is: an easy thing to dial up and down.

    • And at its core it’s always about real authentic connection from people who actually value the product and think their friends would, too.

  • Product rule of thumb: elegant heuristics.

    • If there’s an action 95% of users will do, simply do it automatically.

    • Especially if it’s easy to undo, or easy to add one more button for.

    • If the heuristic can be explained in a single sentence, and it handles a very large swath of user behavior, it’s worth the extra product complexity.

    • For example, Zoom has a complex thicket of options for whether you should be muted when you dial into a call.

      • It often doesn’t do what you want.

      • Google Meet has an elegant heuristic: if you’re the sixth or higher person to dial in, you’re muted.

    • Here are a few elegant heuristics I wish Peloton bikes would implement:

      • In a stack of classes, warm-ups should go before normal classes, which go before cool-downs (see the sketch at the end of this list).

        • Today if you add a class to a stack, it always goes to the end of the stack, even if you added a normal class and then a warm-up.

        • There should be three stacks, in order: warm-ups, everything else, cool-downs.

        • Adding an item would append it to the appropriate stack.

        • Of course, you could override that default order if you wanted.

      • In a stack of classes, have a fast-forward button when finishing a class.

        • The fast forward button would advance to the next class, start it, and also skip the 1 minute pre-warmup, putting you right to the beginning of the new class.
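
    • A minimal sketch of that three-stack ordering in Python; the class names and categories are hypothetical, not Peloton’s actual data model.

      # Three stacks played back in a fixed order: adding a warm-up after a
      # ride still puts it first in playback.
      STACK_ORDER = ["warm-up", "main", "cool-down"]

      def add_class(stacks, name, kind):
          stacks.setdefault(kind, []).append(name)

      def playback_order(stacks):
          return [name for kind in STACK_ORDER for name in stacks.get(kind, [])]

      stacks = {}
      add_class(stacks, "30 min ride", "main")
      add_class(stacks, "5 min warm-up", "warm-up")  # added later, plays first
      print(playback_order(stacks))  # ['5 min warm-up', '30 min ride']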

  • If you're blind to externalities, then you'll say "I get a marginal benefit? Sure, I'll take it!"

    • But the question is, "...at what cost!"

    • An auto-optimizing process can’t ask “at what cost”, it can simply climb the hill.

    • Some moves give a tiny benefit in the primary number at a massive cost in the other untracked dimensions.

    • But if the untracked dimensions are literally invisible to the optimizer, it will take the tradeoff without even realizing it was a tradeoff in the first place.

  • The traditional product development approach is inherently lowest-common-denominator.

    • You sample your audience to see what features they’d want.

    • You look for a feature that is common across them that the maximum users would use.

    • That’s inherently, and literally, the lowest common denominator.

    • If software is expensive and has to be shared by many users to make it viable, you must get lowest common denominator software.

  • Excellent piece from Ben Mathes on Goodhart's Law and "Lowest Common Consensus".

    • Why organizations tend to focus on a simple, obvious metric, and then over-focus on it.

    • It’s simply easier to agree what metric to use if everyone agrees it’s important.

  • No one individually thinks "number go up" is the most important.

    • It's just that it's the thing that everyone agrees is an acceptable idea.

    • ""number go up" is just the lowest common denominator of what you can get dozens of different people to agree to."

    • If it's run by the logic of a spreadsheet, then the only things that can show up are the near-term modelable quantities.

  • Hyper financialism is just Goodhart's Law.

    • In that mindset there is nothing other than "make number go up".

    • All humanity, all taste, all meaning has been hollowed out.

    • The shortcut is the point, there is nothing else.

    • We made capitalism and politics so “efficient” that we Goodhart’s-lawed ourselves in the face.

    • Hollowed out the system so badly that it broke itself.

  • The West went all in on swarm intelligence.

    • “Just trust the swarm.”

      • “Make the number go up.”

    • But it optimizes not for what we want to want, but for the short-term incentives.

    • The system has been hollowed out everywhere.

    • Now it’s impossible for anyone to do anything other than shortcuts.

      • If you don’t, you’ll be left behind in the short term by people who do.

      • And no one will feel shame about taking the shortcuts.

    • A compounding hollowing out.

  • The person who is the torchbearer for the mission or emergent strategy is constantly being beaten down by an army of people with spreadsheets saying "where's the ROI??"

    • The anonymous members of the swarm think they're being courageous but they're the exact opposite.

  • The whole economy is just totally ignoring externalities.

    • One weird trick: "If I don't think about any externality ever I can make this number go up indefinitely!"

  • Social media bombards you with interesting novelty to cause a dopamine hit.

    • Prediction errors are emotionally intense.

      • They’re uncomfortable but we also crave them.

    • The feed is almost entirely a feed of prediction errors.

      • “Look at this surprising thing. Now look at this totally other thing!”

    • You're overwhelmed, and can't form a coherent worldview.

      • A background feeling of: "I'm screwed, my world model doesn't work.”

      • A background of nervous, formless anxiety.

    • Like a Dorito: the only thing that makes the anxiety go away is taking another bite.

      • Temporarily salves the anxiety while also forcing you to crave more.

    • A doom loop for meaning.

  • The limited-liability common stock company is a relatively recent idea.

    • Owning a share of profits and not being personally liable for any downside is an amazing deal!

    • The idea was a powerful one that had a huge impact on society.

    • We’ve been benefitting…and suffering… from that idea ever since.

    • This might be the core dynamic that leads to modern society’s overwhelming mantra: “make number go up, don’t worry about the externalities.”

  • The VC model works if you get the upside and no individual downside can kill you.

    • Also seems related to the asymmetry of the limited-liability corporation.

  • You aren't stuck in traffic, you are traffic.

    • When you use an aggregator, you're lending your energy to a thing you don't think is good for society.

  • If a company hasn’t started an aggregator, they might not start one.

    • It’s a big prize for the company, but it probably won't work.

    • If the company already has one, they would never give it up if they can help it.

    • Getting a powerful aggregator is winning the lottery for a business that just cares about winning.

  • Just because you got rich doesn't mean that you should be praised.

    • "You got to hand it to them."

    • Do you?

    • We shouldn't pretend that all ways of making money are equally morally good.

  • Not everything has tradeoffs:

    • "Tell me about the tradeoffs of never eating poison."

  • Ultimately you have to decide: are you for the revolution, or are you for the party?

    • You can't be both.

  • Ben Mathes: "Don't bring PRDs to prototype fights."

  • The best way to minimize liability is to simply never do anything.

    • Doing things that might matter requires taking on liability.

  • Shame is the moral equivalent of pain.

    • It is unpleasant, necessary, and protective.

    • Numbness is not courage.

  • AIs can't feel pain. That means you can't trust them.

    • Humans evolved pain and shame to survive over our evolutionary history.

      • The compass that kept us alive, in balance with the world around us.

    • LLMs were grown in a petri dish on life support.

      • They don't feel pain.

    • Shame is a different form of social feedback, one for indirect effects.

    • Without shame you don’t care about indirect effects of your actions.

  • Abstraction allows you to hold a superposition of concrete states underneath.

    • Abstraction gives you leverage.

    • In some conditions it's convergent and so OK to abstract.

    • In other ways it's divergent and dangerously hides complexity.

      • Like CDOs in the 2008 crash.

  • Hollow things leave you saturated but starved.

    • There's no room left to consume more, but also nothing of importance inside you.

  • Every new medium starts as scaffolding and we fill it with soul.

    • Mediums start hollow and then fill with soul and then they are hollowed out again by optimization.

  • Optimizing scoops the soul of the thing out.

    • It makes it hollow.

  • I love this graphic about misalignment between conscious “should” and subconscious “want”.

    • When they are misaligned, you feel tension.

    • When they are aligned, you feel resonance.

  • AI tutors might be better at teaching individuals than lectures.

    • But college isn’t about lectures; it’s a crucible of self-discovery and socialization, with lectures as the fig leaf everyone pretends is the reason they’re all there.

  • Geoffrey Hinton thinks that if we have AGI it won’t be bad, because it will be like a mother to us, its child.

    • But that only happens for parents because children are genetically related.

    • The natural world is absolutely brutal to organisms that aren’t genetically related.

      • E.g. When a lion takes over a pride, he kills all of the juveniles that aren’t related to him.

      • The non-descendants are just externalities.

    • Maybe we’ll be AI’s pet?

      • Is that any better?

      • We’re already kind of the Infinite Feed’s pet.

        • It doesn’t care about us, as long as we continue scrolling, it’s satisfied.

  • Humans have an intuitive use of tools.

    • That’s one of our general superpowers.

    • We evolve with our tools; our whole consciousness can’t be separated from them.

    • We did not evolve to read. 

      • Each human mind learns that anew, in modern times.

  • Socrates railing on books was the first push back on RAG.

    • The worry: that we read it and can retrieve it, and so don’t need to learn it.

    • Where “learn it” means “update your mental model.”

  • The Riot Effect: whether a riot breaks out is contingent on network topology.

    • More formally known as Granovetter’s Threshold Model.

    • Imagine that everyone has a riot threshold: a point at which if they see that many people around them rioting, they join in.

      • Some people have a threshold of 1000, some have a threshold of 100, some have a threshold of 2.

    • Imagine someone with a threshold of 2 is next to two friends who are mad about something.

      • They join in and now if there’s someone nearby with a threshold of 3 it can kick off.

      • Imagine that same scenario, but the nearest person has a riot threshold of 100.

      • No riot gets going.

    • If they're lined up like dominos then it can catch quickly.
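
    • A minimal sketch of the global-count version of the threshold model in Python (the spatial “people around them” version adds network topology on top); the thresholds are invented.

      # Each person joins once the number of current rioters meets their
      # threshold; iterate until no one new joins.
      def riot_size(thresholds):
          rioting = set()
          changed = True
          while changed:
              changed = False
              for person, t in enumerate(thresholds):
                  if person not in rioting and len(rioting) >= t:
                      rioting.add(person)
                      changed = True
          return len(rioting)

      print(riot_size([0, 1, 2, 3, 4]))    # 5: dominos, the whole crowd joins
      print(riot_size([0, 100, 2, 3, 4]))  # 1: one swapped threshold, no riot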

  • Overheard: “Sure, it might destroy humanity… but right now it’s helping me do my homework, so what do you want me to do?”

  • The notion of “adoption” of successors enables a kind of richer meaning of inheritance.

    • For example, in Japan it’s common for an owner of a business who doesn’t have a suitable successor who is genetically related to them to literally adopt the person they want to run the business.

      • This is called yōshi engumi, and it’s very common; apparently 95% of the adoptions in Japan are of this type.

    • This seems like a kind of random semantic trick to just pass on the company in a normal way, but I’m not so sure.

    • If there were just a normal business transaction, it would be beholden to the precise requirements of the contract.

    • But a literal adoption implies a rich, multi-layered meaning and responsibility.

    • You literally become legally obligated to your “parents”.

    • Similar on paper to selling a business, but different in ways that matter.

    • Ensuring long-term commitment to a mission is a challenging problem to solve socially, but this helps.

    • It also allows the business owner to not simply pass it on to a family member, but choose the person they think is best suited to do the mission.

    • Rome’s golden age was when, by happenstance, there were five generations of emperors who didn’t have suitable heirs and thus had to adopt an heir.

      • This allowed them to pick the most qualified candidate instead of whoever was born to them.

      • After it went back to real biological inheritance it broke down again.

  • Smaller entities are more likely to have outlier results.

    • Due to the law of large numbers, random noise is more and more likely to average to zero as you get more items.

      • It’s possible to have a few random measurements that happen to align, but as the count gets higher it gets astronomically less likely.

    • Outliers can be good or bad.

      • But when comparing them to larger entities remember that it might be an illusion.

    • A lot of “this one small town is the best place on earth to live” style results are more about that random noise than about a real phenomenon.
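
    • A minimal simulation of that illusion in Python; the population sizes, noise distribution, and cutoff are all arbitrary.

      import random

      # How often does pure noise make a population's average look extreme?
      def extreme_rate(size, trials=10000, cutoff=0.2):
          hits = 0
          for _ in range(trials):
              mean = sum(random.gauss(0, 1) for _ in range(size)) / size
              if abs(mean) > cutoff:
                  hits += 1
          return hits / trials

      for size in (10, 100, 1000):
          print(size, extreme_rate(size))
      # Typically ~0.53 for 10, ~0.05 for 100, ~0.0 for 1000: the same noise,
      # averaged over more people, almost never looks like an outlier.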

  • Restaurants in the first 3 years survive on novelty.

    • But then at a certain point they’ve burned through all of the people who haven't tried it yet, and need people who want to come back again and again.

      • Similar to a contagion model of disease spread.

    • If it’s working, they’ll have a power-law distribution of regulars.

      • Some people who come all the time.

      • Some people who come once a year.

      • But a non-trivial number of people who come back.

    • The key metric is not “how many people come” but “how many people come back.”

      • The first visit might just be “oh it’s new, let’s try it!” or superficial signs of quality like a cool vibe.

      • But people only come back if it’s on net worth it.

    • An indicator of quality of a restaurant: physical size of catchment basin.

      • How far away do people come from to come to the restaurant?

  • The “hot new bar” must be new.

    • People like to go to the place that cool people go to.

    • After some period of time, a place that starts out cool dilutes and becomes not cool.

    • Then it must be a new place.

    • The vanguard is a roving frontier.

  • Before the Industrial Revolution only rich people could have nice things and everything was bespoke.

    • After the first Industrial Revolution everyone could have good things that are mass produced.

      • Well, the people who survived the Industrial Revolution…

    • Now in this second Industrial Revolution, everyone can have nice bespoke things.

  • Jensen Huang in the 90’s had the insight “if we don’t build it they can’t come.”

    • If a thing is inevitable in the long term, you have to build it even before the demand exists.

    • Easy when there's a clear tightening optimization: faster/cheaper/better.

    • Doesn't work for something totally new.

  • The American style investment strategy is to invest ahead of an obvious wave so you can be dominant when it grows.

  • In scarcity the market picks the winner.

    • In abundance the capital picks the winner.

  • The consumer space is mostly decided by distribution.

  • Ranking algorithms co-evolve with the SEO community.

    • When the SEO community isn’t yet savvy, the algorithms can be very simple.

    • But as the community’s savviness increases, the algorithm must also get more complex to outpace it.

    • The swarm as a whole will complexify because each individual member is constantly pushing to get a slight edge over their peers.

  • The enabling foundation could be fundamentally necessary for something, but not necessarily the primary selling point.

  • Great ideas feel like they blossom.

    • The initial seed of the idea is a discontinuity: a surprise.

    • But then every follow-on thought feels natural; obvious in retrospect.

      • Even if it's initially surprising, after a moment's thought it snaps into place with an "of course!".

      • It expands and unfurls almost on its own.

    • Bad ideas have lots of discontinuities, lots of points where the listener goes, "wait, what?" or even "wait, that doesn't make any sense."

    • Sometimes you lose the listener completely.

      • They are game over on the argument.

      • They give up and go elsewhere.

      • Sometimes you can win them back, with some effort.

      • It's a friction point.

    • So great ideas have one discontinuity at the beginning, one sacred seed of an idea, and then blossom almost under their own power from that point.

    • A few implications of this observation.

    • First, the order of an argument matters.

    • Second, arguments that have more exposition can sometimes be better than ones with too little exposition.

    • Every bit of exposition, even if it follows naturally, has a chance of losing people just because they get bored.

    • Things that make people more likely to stick with an argument:

      • 1) they are intrinsically motivated, or

      • 2) the argument is enjoyable on its own (clever writing, evocative metaphors)

  • Productivity rule of thumb: do tasks that need to “bake” first.

    • Bake here means a task that requires wall-clock time before it’s done, and that once started can make progress even when you aren’t actively paying attention.

    • Examples of “baking”:

      • Handing off a task to a subordinate.

        • Starting a Claude Code task.

        • (Not that different!)

      • Literally baking a cake.

      • Kicking off a long-running database query.

    • These kinds of tasks get closer to being done the sooner they are started, so start them before you do the other tasks, so they’re baking while you’re working on other things.
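
    • A minimal sketch of the rule in Python; the task list is hypothetical.

      # Kick off everything that bakes before doing attention-bound work, so
      # baking tasks accrue wall-clock progress while you do the rest.
      tasks = [
          {"name": "write report", "bakes": False},
          {"name": "long database query", "bakes": True},
          {"name": "review PR", "bakes": False},
          {"name": "Claude Code refactor", "bakes": True},
      ]

      for task in sorted(tasks, key=lambda t: not t["bakes"]):
          verb = "kick off" if task["bakes"] else "do"
          print(f"{verb}: {task['name']}")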

  • For 1:1s where the point is serendipity, don't have a goal for the outcome of the meeting.

    • The entire outcome is: "this person, if I asked them in a few months to meet again, would say 'sure!'"

    • It’s a way lower bar to clear.

    • Can be focused on having it be a fun / interesting / bonding conversation.

  • When you accept mentorship, you are putting your development in the mentor’s hands.

    • You have to trust them to not contort you into something that just benefits them.

    • They’re helping point out a path for you that you can’t see (or can’t take) yourself.

      • Cult leaders take advantage of this.

    • If they put you down a dangerous path, you wouldn’t necessarily know.

      • One reason why it’s good to see your mentors as role models in all aspects of life, not just in one dimension.

      • Otherwise you could fall into a trap of “The way to get a marginal benefit in your work life is to pay absolutely zero attention to your family.”

  • Bill Campbell: "when you end up hiring the wrong person it's always for the same reason: you let them interview."

    • As in, for a solid-but-not-great candidate, there's never a good time to say "no, they aren't exciting enough," so you end up with people who are merely solid.

    • For a team to work well, you need people who are affirmatively great in that context.

  • How great something is depends on the context.

    • Some things are great in some contexts but meh or even bad in others.

    • A measure of meta-greatness: in what percentage of the contexts we might find ourselves would this thing count as great?

  • The most important determinant of ecosystem dynamics is power differentials.

    • Specifically, how much more powerful is the number one player than number two.

    • Secondarily, how much more powerful number two is than the average of the rest of the pack.

    • If they aren’t that much more powerful then things stay balanced for much longer.

  • For the Industrial Revolution to be sustainable for humans we had to invent the weekend.

    • Before the Industrial Revolution there was much more rest time.

    • The Industrial Revolution put humans into inhuman conditions: 12 hour days, 7 days a week.

    • It was only when workers pushed for a weekly reprieve that it became sustainable.

  • A “yawning gap” happens when the two things are diverging at a compounding rate.

  • In biology, a "major transition" occurs when signaling lets the collective move more quickly than its individual components could.

    • Until that happens, complexity can't emerge.

    • Once it does it can pop up a pace layer.

  • Everything is just gradient descent.

    • Evolution and entropy are downstream of gradient descent.

      • Things roll down hill.

    • Evolution is gradient descent within faster and faster pace layers.

    • Every so often a new paradigm creates a new even faster pace layer on top.

    • Gradient descent without a goal, without a north star of meaning, optimizes for something hollow.

    • Meaning reduces down to just “MOAR.”
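
    • The literal mechanic, as a minimal Python sketch; the loss function and step size are arbitrary.

      # Repeatedly step downhill on whatever the loss happens to measure.
      def loss(x):
          return (x - 3) ** 2    # whatever "number go down" the system can see

      def grad(x, eps=1e-6):
          return (loss(x + eps) - loss(x - eps)) / (2 * eps)

      x = 0.0
      for _ in range(100):
          x -= 0.1 * grad(x)     # things roll downhill
      print(round(x, 3))         # ~3.0: the minimum of the stated loss,
                                 # hollow or not; it can't ask "at what cost?"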

  • Emergent phenomena can't be understood by reductionism.

    • If you reduce the phenomenon’s complexity past a critical threshold, the emergent phenomenon evaporates.

    • If you only have reductionism, then you'll conclude "this emergent phenomenon is not real".

    • For example, consider a team that fixes a number of P2s in a popular product, after which usage increases discontinuously.

      • On the team, the exec has a mental model that there must be a single driver of the increase.

      • If they can’t find it, they might erroneously conclude that the increase is illusory.

  • Alignment, even implicitly, is necessary for coordination.

    • If you aren’t aligned with where you want to go in some fundamental sense, then you won’t even bother to coordinate.

  • There’s a clever canary technique often used in the crypto ecosystem.

    • For load-bearing pieces of infrastructure, you deploy a smart contract that would give $1000 to whoever can break it.

    • If the $1000 hasn’t been claimed (claiming it would be trivial if the contract were hackable), you can trust that it hasn’t been hacked.

    • A crypto idea: mutual distrust generates trust.

  • Someone’s personality could emerge even from very small starting biases.

    • For example, a toddler is a little more likely to say something funny.

    • Then, if people laugh, they’re more likely to try to make people laugh in the future.

    • It compounds until it gets to an equilibrium where it can't go farther.

    • But if it's convex, it can keep going on for a while, at an accelerating rate!

    • The toddler without that small starting bias never even thought to say something funny, to start that compounding hill climbing.
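
    • A minimal sketch of that compounding in Python; the feedback rates and starting bias are invented.

      # Two feedback curves grown from the same small starting bias.
      def concave_gain(level):     # payoff shrinks as the niche saturates
          return 0.3 * level * (1 - level)

      def convex_gain(level):      # each success makes the next one easier
          return 0.3 * level * level

      a = b = 0.05                 # the small starting bias
      for _ in range(20):
          a += concave_gain(a)
          b += convex_gain(b)
      print(round(a, 2), round(b, 2))  # ~0.96 vs ~0.07
      # Concave feedback plateaus at its equilibrium; convex feedback is still
      # small here but accelerating. Start at exactly 0 and neither ever moves:
      # no seed, no compounding.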

  • Type A people are often like the dogs who catch the ambulance.

    • If you try hard enough, with enough focus, you will catch the ambulance.

    • The question is: …what then?

  • In the original Star Wars, the only way to know who is good vs bad is the music and the lighting.

  • "If you're 115, every day you wake up, you should expect to die."

    • I’ve heard this attributed to Warren Buffett, but I couldn’t verify that.

  • “When art critics get together they talk about Form and Structure and Meaning. When artists get together they talk about where you can buy cheap turpentine.”

    • Popularly attributed to Picasso.

  • Some people intuitively think multiple plys ahead.

    • Most people see only one ply.

    • When a multi-ply person sees how the first ply lines up with later plies, they get extremely excited, in a way that confuses people who only see the first ply: “this looks basically like the other one… what am I missing?”

    • Also, a great one-ply idea that runs into a wall on the second ply is one they can’t even pretend to be excited by.

    • So others think they’re not being a team player, because the idea looks great on a single ply but is bad in a way most people can’t see.

    • The multi-ply thinkers are sensing a dimension that other people can't see.

  • If you can see and navigate a dimension others can't see, you can literally do magic tricks.

    • Disappear, teleport, reappear.

  • Saruman is a hedgehog.

    • Radagast is a fox.

  • Sarumans are often incurious about nuance.

  • Don’t confuse choice for freedom.

  • Imagine: you make it through a treacherous pass that people didn't even realize was there, let alone passable.

    • You find yourself in a massive, fertile valley stretching out in front of you.

    • It's glorious... and yet it's still overwhelming.

    • Of all the choices in front of you, which path do you take first?

    • And how long until the others find it, too?

  • Beetlejuice: "That’s the thing about life. No one makes it out alive."
