Bits and Bobs 12/8/25

Alex Komoroske

Dec 8, 2025
I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.jprxslsso4cu

AGI as kayfabe. Resonant privacy models. Software as a tree vs a piece of furniture. SETI@Home for Deep Research trawling through old papers. LLM frosting. Thinking paper. A cozy OS for your life. Apps' inability to compose. AI Service Provider. Vibecoding islands. Atomic networks. Transcending silos. Compounding engineering pattern: superstitions with guestbooks. What Got Lost in the Optimization. The missing hacker ethic. Liability meatbags.

----

  • People assume the main AI software category will be chatbots.

    • I think it’s a category no one has invented yet that will make chatbots look like an embarrassing party trick.

  • LLMs have so much potential energy, just waiting to be catalyzed.

    • All we need is the right catalyst.

    • We’re seeing like 5% of the potential value they contain.

  • The good situation for society with LLMs is “all of the model providers have to compete and none of them win.”

    • Great for everyone but the LLM model providers, who are in a never-ending red ocean battle.

    • But the rest of us benefit from the significant competition.

  • Using LLMs with tool calls is like running a stranger’s code on your sensitive data.

    • Obviously a bad idea!

    • Prompt injection arises from exactly this bad idea, just somewhat more subtly than with normal code.

    • We’re so used to treating only code (special incantations, treated specially) as dangerous, but now all text is!

  • If thinking is 10x cheaper, are you going to think 10x less or are you going to explore 10x more?

  • This week in the wild west roundup.

  • If your story is "to the moon," then the moment you can see the top of the S-curve, it’s a code red.

  • We've sought an alien intelligence for a long time, to better compare ourselves against.

    • We’ve discovered one, that we ourselves made, which kind of superficially looks like us.

    • We have this massive society scale collective brain that talks to us like a single human.

    • That's confusing!

  • When a precious commodity becomes common, it changes things in significant ways.

    • Salt used to be extremely precious.

      • The word ‘salary’ derives from the Latin for salt, from the era when salt was precious.

    • But now salt is so common that it’s almost too cheap to meter.

    • Code is the same way.

  • All models must be biased; there have to be value judgments in curating what goes into them.

    • These models can be steered so easily with a shockingly small number of training examples, that it’s unthinkable the models wouldn’t all have some kind of (perhaps unintentional) bias.

  • Ilya Sutskever: “Predicting the next token well means that you understand the underlying reality that led to the creation of that token.”

  • How much of the AGI talk is kayfabe for employees and investors?

    • During the Soviet Revolution, believers talked about how “utopia is just around the corner.”

    • But at a certain point, decades later, everyone knew it wasn't going to happen, and that the name of the game was just about keeping the same thing going.

    • "True communism is still on the way and will be great” was the continued reframe, but now hollow.

    • Is the same happening for AGI?

  • Untrusted code must be kept in a strong containment boundary.

    • If you want a system that can build itself, some parts have to be locked down with clear boundaries, otherwise it's inherently dangerous.

    • A system that can change itself internally arbitrarily, that allows executing untrusted code, is a dead end.

  • Deterministic code and LLMs both have a place.

    • Asking an LLM to do a thing computers are already good at is expensive, inefficient, and fragile.

    • Deterministic Code is like a skeleton.

      • Strong but inflexible.

    • LLMs are like muscles.

      • Action potential but squishy.

  • Imagine a system with a deterministic code containment boundary that LLMs run inside (a rough sketch follows this list).

    • Imagine the whole system could be made out of these code patterns.

      • Patterns instantiating patterns.

      • Like layers of a cake joined by LLM frosting.

      • Patterns all the way down.

    • LLMs would be able to make significant decisions, but always contained within a Pattern.

    • The question is: is the root a pattern or an LLM?

      • If the latter, it can’t ever be made secure.

      • But that’s how the vast majority of LLM systems are designed today!
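
To make that concrete: a minimal sketch, assuming a hypothetical `Pattern` type and a stand-in `call_llm` function (neither is from the original post). The deterministic code owns the boundary; the LLM only chooses among actions the Pattern allows.

```python
# Hypothetical sketch: a deterministic "Pattern" as the containment boundary.
from dataclasses import dataclass
from typing import Callable

def call_llm(prompt: str) -> str:
    """Stand-in for whatever model API you use."""
    raise NotImplementedError

@dataclass
class Pattern:
    name: str
    prompt_template: str
    allowed_actions: set[str]                 # the deterministic boundary
    handlers: dict[str, Callable[[], None]]   # deterministic code per action

    def run(self, context: str) -> None:
        # The LLM makes the judgment call...
        choice = call_llm(self.prompt_template.format(context=context)).strip()
        # ...but deterministic code decides whether it may act on it.
        if choice not in self.allowed_actions:
            raise PermissionError(f"{self.name}: {choice!r} is outside the boundary")
        self.handlers[choice]()
```

Here the root is the Pattern, not the LLM; Patterns could instantiate other Patterns, with LLM calls as the frosting between layers.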

  • An easy way to flatter and manipulate people: "Of course, you're smart so you know that..."

  • A resonant privacy model allows you to have minimal friction, but safely.

    • At each layer, you don’t have to peel it back and understand the layer beneath.

    • But if you did, you’d find it aligned with your intentions and expectations in even more ways than you had anticipated.

  • Ben Thompson says OpenAI does have a business model they aren't embracing: ads.

    • But that feels fundamentally dangerous in a chat.

    • The question is: how effective can the wall be between ads and organic?

      • Newspapers and Google did it... but they didn't know every intimate detail of users.

    • Chatbots can do a perfectly personalized pitch.

    • At a certain level of quality, a pitch becomes a "phish".

    • Humans can rarely get to that point, but LLMs with extensive memories can.

    • “Conman” comes from “confidence man”: someone who gains your confidence.

    • It’s all principal-agent problems all the way down.

    • Whatever you ask the LLM to sell, it will sell.

  • LLMs make it trivial to create your own personal echo chamber.

  • An LLM can be used as a devil's advocate with no shame and infinite patience.

    • They have to be asked to play this role, but they can help provide disconfirming evidence.

  • Google Search as a centralized service felt somewhat less scary to me than ChatGPT.

    • But why?

    • Both could have outsize impact from just a bit of bias given their scale.

    • Google Search didn’t have much individual state, whereas ChatGPT has significant amounts of state.

    • A tool without state is easier to leave than a tool with state.

    • Also, a non-personalized tool is easier to audit at scale.

  • Software is hard to design but easy to copy.

    • So a company with momentum that can quickly copy an innovation into its own product wins through superior distribution.

    • This is part of what makes the industry kind of cynical.

    • Thirty years of the same origin paradigm have caused significant centralization.

    • It doesn’t matter who innovates, it’s the aggregators who benefit.

  • Two possibilities for Infinite Software.

    • One: there’s a single system that is infinite inside itself, and can be designed by one company and one team of PMs.

    • Two: There’s an emergent explosion of lots of little things, innovating without permission.

    • Notion is betting on the former.

    • I’m betting on the latter.

  • Two different models for software: a tree vs a piece of furniture.

    • Two very different mental models of what it can / should do, and how you'd cause it to come into the world.

    • We’ve only had the latter because software was expensive to create.

    • But now LLMs allow infinite software, which can grow on its own.

    • The question becomes: what is the right trellis for infinite software to grow on?

  • Anthropic’s red team’s agents, in simulated testing, found $4.6M in smart contract exploits.

  • If the model is 1000x better than you need for your use case, then squeezing out 10x more quality doesn't matter!

    • In that case, the model quality isn’t the bottleneck.

  • If you want infinite software for normies with just the model, LLMs will need to be 100x more capable than they are today.

    • But if cached save points can emerge in the ecosystem (because you don’t have to trust the code), models are already 10x more powerful than they need to be.

  • In a world of infinite software, you don't think about software as discontinuous chunks; but as something that just melts away, leaving only the data, come alive.

  • I want infinite software with humans in the loop.

  • Watermarks for non-AI content seem more plausible to me than for AI content.

    • It’s easy to get rid of a watermark.

    • If the incentive is for someone to get rid of it, they will.

    • But if everything is assumed to be AI-generated unless it has a chain of provenance (including a verifiable watermark), then image creators have an incentive to maintain that watermark (a toy sketch of the chain idea follows this list).
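
A toy illustration of the chain-of-provenance idea, using a simple hash chain; real provenance standards (e.g. C2PA) use cryptographic signatures and richer metadata, so this only shows the linking mechanism.

```python
# Toy provenance chain: each record commits to the previous one by hash,
# so tampering anywhere breaks verification for everything downstream.
import hashlib
import json

def _digest(actor: str, action: str, prev: str) -> str:
    payload = json.dumps({"actor": actor, "action": action, "prev": prev},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_step(chain: list[dict], actor: str, action: str) -> None:
    prev = chain[-1]["digest"] if chain else ""
    chain.append({"actor": actor, "action": action, "prev": prev,
                  "digest": _digest(actor, action, prev)})

def verify(chain: list[dict]) -> bool:
    prev = ""
    for r in chain:
        if r["prev"] != prev or r["digest"] != _digest(r["actor"], r["action"], prev):
            return False
        prev = r["digest"]
    return True

chain: list[dict] = []
add_step(chain, "camera", "captured image")
add_step(chain, "editor", "cropped image")
assert verify(chain)
```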

  • Is there any team of 10 PMs in the world who can design all of the features for infinite software?

    • Or is the problem fundamentally combinatorial?

  • Imagine a system for infinite software where no PM had to pre-guess your use case.

    • It can emerge as long as someone else already did it and found it useful.

    • It can accrete out of disparate pieces.

    • Similar to how, before, someone in an SEO content mill had to guess the question you wanted answered.

    • A very indirect process--but software is orders of magnitude more expensive!

  • In an SEO world, the content is produced and available for everyone by default.

    • It has to be to get picked up by any search engine.

    • But with AI answers, they are private by default, only available to the chat app that created them.

    • Wouldn’t it be cool if people could cache Deep Research reports publicly for others to draw from?

    • There's a bunch of great papers in the literature that took decades and decades to be noticed and operationalized.

      • Often they are cross-domain.

      • Found in one domain, but useful in another.

    • There are presumably tons of great ideas lurking in the old papers.

    • You could imagine a SETI@Home kind of project where participants run a few Deep Research reports on random old papers and their applicability, and then publish them automatically to a shared repo.

  • When you ask a question on Stack Overflow, others might judge you.

    • But ChatGPT will never judge you… and if it did, it’s in a private chat.

    • So it’s easier to ask questions without being embarrassed.

    • But that also means it’s easy to just let ChatGPT do things for you and never have to learn it.

    • ChatGPT can spoonfeed you answers without making you ever feel self conscious.

  • What is the living substrate for both data and code to be embedded in?

    • Thinking paper.

    • Paper that is active, that automatically extends your thinking, an extension of your agency. 

    • A living tool.

  • Imagine: the cozy operating system for your life.

    • Unlike a real OS, it would need to be in userland (a web app or an app).

    • Something your spouse and other people you collaborate with and care about can also use.

  • An interesting tweet:

    • "This feels like software that adjusts itself to your preferences. All apps behaving like TikTok algorithms, in a private way."

  • It can't be a personal system of record if your spouse can't use it.

    • A personal system of record has to integrate with the people and data you care about.

  • Today only trusted code can safely access sensitive data.

    • That's a massive limitation.

    • It leads to significant centralization, and huge numbers of use cases below the Coasian floor.

  • Apps are silos that can’t meaningfully compose.

    • That means they are monoliths, and some PM had to have come up with the right set of features for you.

    • That’s impossible, so you get the lowest common denominator.

    • But arbitrary, safe composability allows software experiences perfectly tailored to you.

    • Safe arbitrary composition would be a killer feature in a new system.

  • Wishes are the threads that tie the fabric of possibility together.

    • Coordinating on wishes is how society makes progress.

    • An implicit, automatic, emergent process.

  • In the future we’ll all have an ASP.

    • Not an ISP, an Internet Service Provider.

    • But an ASP: an AI Service Provider.

    • The manager of your compute.

      • Tokens and deterministic compute.

  • In the app store you go to the app.

    • Imagine if instead the software could come to you, safely.

  • Imagine an ecosystem with negative friction of distribution.

    • It will have a positive boundary gradient.

    • It will attract all data.

    • At each point, there's no reason not to put your data in the ecosystem, and a clear benefit to doing so.

  • In a medium with negative distribution friction, casual users will require suggestions and wishes to be not just a bonus, but a load-bearing part of the product.

    • But whales will be happy just with the safe remixing environment.

    • They'll evangelize to their less-savvy friends and pull them in with concrete use cases, so suggestions can be a bonus until it achieves lift off.

    • A less risky launch strategy.

  • In a new medium, you need both the diamond hard engine, and the user-land scripting language.

    • A hard layer next to a soft layer.

    • In game engines, this is the engine and the Lua scripts.

    • In browsers, this is the rendering engine and JavaScript.

    • The hard layer has to be perfectly calibrated, trusted.

    • The soft layer can be malleable, untrusted.

  • LLMs are the most transformative new material to build software with since the web.

    • Anyone who isn't using them in software development today is missing out.

  • The architectural decisions aren't in the codebase.

    • They're implied at a layer that is not formally captured in full fidelity.

    • Code captures a thing that works, but not why or the intention.

  • If you don’t solve the security model, everything you vibecode is an island.

    • You're alone on that island.

    • Your data is not on the island.

    • It can't integrate with your life: the people or data you care about.

  • Most vibecoding tools assume the goal is to make apps easier to build.

    • But apps are islands.

    • Making them smaller makes them less habitable to users.

    • What you need is a new way for software to connect, safely.

    • The same origin model presumes that code can't interconnect with other code.

    • Apps have to be a viable island on their own.

    • A new type of distributable code shouldn’t have to be.

  • In an ecosystem where savvy users can cache savepoints for less savvy users, whales can create a compounding amount more quality.

    • Whales create a collection of toeholds for everyone else.

    • Compounding quality means the cost per unit of quality (dollars and also invested time) falls quadratically.

  • An “atomic network” is a network that is at critical mass to persist on its own.

    • It's an atom, because it's at a critical mass to be free-standing.

    • These are the seed crystals of larger networks.

  • Cozy communities are underserved by software today.

    • They're below the Coasian floor.

    • But that's where a lot of our life is spent--especially the meaningful parts!

  • If you want to have software that integrates with your life, then it has to have access to sensitive data.

    • Today that requires trusting the code in an open ended way.

    • What if you didn't have to?

  • When the Gatling gun came out, it changed warfare.

    • You couldn’t just line up troops on the battlefield.

    • It took 50 years to figure that out!

    • AI software engineering is the same.

    • We’re still lining up our troops with structured PRDs.

  • Code can do stuff.

    • That's why it can't be distributed proactively.

    • But self-distributing code that could somehow be safe would have an order of magnitude more positive impact.

  • If a more savvy user than you invested effort in a similar use case that worked well, the system should use that to solve your problem too.

    • The optimizations of the savvier users roll downhill to less savvy users, an inductively emergent system.

  • When code must be trusted, you have to make sure you don't mess it up when you write it, because you could hurt someone.

    • But when it's untrusted, you can YOLO the code you write and not have to worry about hurting yourself or others.

    • The same origin model requires your code to be trusted to work with any data it has access to.

  • Today, users being in control of their data forces them to think about it constantly, and get pages and pages of permission dialogs.

    • It's not worth it.

    • But if you could separate it and have a resonant privacy model it would be great.

    • Control of your data, without the constant fatigue.

  • The same origin model shouldn't go away.

    • It just shouldn't be the only model.

  • I realized one reason I’m addicted to having Claude Code always “baking” something in the background.

    • I’m dysfunctionally future oriented.

    • Things that can “bake” give you leverage–they can be making progress towards completion even while you’re doing something else.

    • Claude Code makes it easy to bake code.

    • There’s always another session I could be feeding, unblocking, so that it can continue baking and give me leverage.

    • It creates a kind of manic energy, trying to always be baking, consuming every waking minute.

  • When you’re in a cul-de-sac, it’s hard to get motivated.

    • Every action you take just brings you deeper into a dead end.

    • But when you feel like you’re accumulating compounding possibilities, it’s easy to get motivated.

    • Locking in stepping stones for your future self–or for anyone else.

  • The same origin paradigm solved a nuanced security problem with silos.

    • Simple and effective, but it fundamentally leads to significant centralization.

  • It’s the intersection of your data across silos where a system without silos would shine.

    • Things that are more specific to your use case.

      • The venn diagram is smaller and smaller as you intersect more.

    • Those use cases are thus below the Coasian floor.

  • Crosshatch is shutting down.

    • It was trying to make a system where silos could share context.

    • Companies jealously guard context silos.

    • How do you get silos to share with each other?

      • You can't.

      • Silos want data to come in, not out.

      • So the key problem is not the incentives for sharing, it's the silos themselves.

    • The answer is not to change the incentives of silo owners, it's to somehow make it safe to not have silos.

    • Then the silo-free ecosystem will be so much better than the siloed ecosystem that it could pull in the whole world.

  • A pattern I love for compounding engineering while vibecoding: superstitions with guestbooks (sketched in code after this list).

    • In community-docs in each repo I have:

      • blessed-wisdom

      • folk-wisdom

      • superstitions

    • When an agent bangs its head against the wall on a problem but figures it out, it adds a superstition.

    • The superstition describes the context and data, what was tried, and what works.

    • Each time an agent gets stuck, it checks the community-docs.

    • If it uses a superstition and it works, it adds an entry to the guestbook.

      • Containing the date and context.

    • When a superstition gets a lot of guestbook entries, it’s upgraded to folk-wisdom.

    • Every so often a human expert reviews the folk wisdom and promotes items that are correct to blessed-wisdom.
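
A minimal sketch of the promotion lifecycle, assuming each superstition lives as a JSON file with a "guestbook" list; the file format and the promotion threshold are my inventions, not part of the original pattern.

```python
# Sketch of the superstition -> folk-wisdom lifecycle in community-docs.
import json
from datetime import date
from pathlib import Path

DOCS = Path("community-docs")
PROMOTION_THRESHOLD = 5  # guestbook entries before promotion (arbitrary)

def sign_guestbook(superstition: Path, context: str) -> None:
    """An agent records that a superstition worked: date plus context."""
    doc = json.loads(superstition.read_text())
    doc.setdefault("guestbook", []).append(
        {"date": date.today().isoformat(), "context": context}
    )
    superstition.write_text(json.dumps(doc, indent=2))

def promote_well_attested() -> None:
    """Superstitions with enough guestbook entries become folk-wisdom.

    Promotion from folk-wisdom to blessed-wisdom stays manual:
    a human expert reviews periodically."""
    for path in (DOCS / "superstitions").glob("*.json"):
        doc = json.loads(path.read_text())
        if len(doc.get("guestbook", [])) >= PROMOTION_THRESHOLD:
            path.rename(DOCS / "folk-wisdom" / path.name)
```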

  • Stack Overflow is like my agent superstitions pattern, but for the whole internet.

  • An old paper: Peter Naur’s “Programming as Theory Building.”

    • The code is not the thing, it's just a projection of the theory of the code.

    • The code is the map, the theory is the territory.

      • The ground truth of what it's supposed to do and how it's supposed to work.

    • Vibecoding makes you less aligned with the theory of the code.

      • The map vs the territory.

      • You don't understand the motivation of any line of code, and don't know that a human ever affirmatively did.

    • Spec-driven development is the thesis that the spec is a more important artifact than the code.

    • But to be effective, the spec has to be much more in depth than typical engineering design docs or PRDs.

      • Typically those are throwaway, so they diverge from the code.

      • But if the code is throwaway, then the spec will continue being ground truth.

    • LLMs allow us to move up to spec as the primary layer.

      • Before it wasn't plausible to represent the spec as ground truth; it was too annoying to keep it precise enough.

  • An LLM-assisted coding best practice: iterate on the spec, not in the chat session.

    • If it gets it wrong, kill the session, change the spec, and try again.

    • Iterate on the levered thing.

  • Memories should have a half-life.

    • If they haven't been used, they fade.

    • Each time they are used, they jump to the top.

    • The longer they aren’t used, the more they fade to the bottom, and ultimately they get culled.

    • This is like the Swarm Sifting Sort that Kiva robots do in warehouses, automatically keeping the most useful things towards the front (a minimal sketch of the decay logic follows this list).
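
A minimal sketch of the decay logic, assuming exponential decay; the half-life and cull threshold values are arbitrary assumptions.

```python
# Sketch: memories fade with a half-life and jump to the top when used.
import math
import time

HALF_LIFE_DAYS = 30.0   # assumed: score halves per 30 idle days
CULL_THRESHOLD = 0.05   # assumed: memories below this are culled

class Memory:
    def __init__(self, content: str):
        self.content = content
        self.last_used = time.time()

    def touch(self) -> None:
        """Each use jumps the memory back to the top."""
        self.last_used = time.time()

    def score(self) -> float:
        """Exponential decay with the configured half-life."""
        idle_days = (time.time() - self.last_used) / 86_400
        return math.exp(-math.log(2) * idle_days / HALF_LIFE_DAYS)

def rank_and_cull(memories: list[Memory]) -> list[Memory]:
    """Most recently useful first; fully faded memories drop out."""
    kept = [m for m in memories if m.score() >= CULL_THRESHOLD]
    return sorted(kept, key=lambda m: m.score(), reverse=True)
```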

  • Many people have a nihilistic take on privacy today. 

    • That's because the modern environment has made it so cynicism is the only reasonable option.

    • When you become totally hollowed out by cynicism you become nihilistic.

    • The current privacy models have caused people to be cynical about their data.

  • The privacy model is the enabler of new distribution ecosystems, but not the draw.

  • It’s easier to motivate outcomes that benefit the person directly.

    • A motivation of "I solve this for myself, and as a bonus it helps other people" is orders of magnitude easier to get off the ground than "I do this and it only helps other people and maybe at some point in the future that indirectly helps me."

    • Radically different motivation bars to clear.

  • There's a lot of thinking about minimizing the harm of AI.

    • But not a lot about using AI to maximize human flourishing.

  • A new piece of mine: What Got Lost in the Optimization.

    • The Optimization Ratchet leads to systems that optimize to the point of hollowing out.

    • This emergent focus on optimization explains hollowness in not just tech, but also business and politics.

  • The guy who invented the GDP metric (Simon Kuznets) apparently said “never optimize for this as the sole metric, that would be catastrophic.”

    • Whoops!

    • Goodhart’s lawing ourselves in the face: the defining characteristic of modern society.

    • Optimization without limit.

  • Ossification happens inevitably, the question is just how quickly it happens.

    • Ossification is (mostly) a one-way process.

    • A ratchet.

    • Systems optimize and that creates ossification.

  • When you scale you have to optimize.

    • When you optimize you hollow out.

  • I’m not saying optimization is bad.

    • I’m saying it’s not an unalloyed good.

    • In modern society we forgot optimization could be taken too far.

    • We forgot there was anything at all on the other side of the scale.

  • Systems often optimize for things that no one actually wants.

    • What gets optimized is determined by what is measurable, not by what is desirable.

    • That's how we get hollowness.

  • We're obsessed with scale as a society.

    • Things that scale are deeply homogenous, inherently average, dull.

  • Optimizing… but for what?

    • Due to Goodhart's law it’s not what you think it is.

    • We treat optimization like an unalloyed good.

    • In the right situations it can be hugely beneficial.

    • But as it saturates it starts to go wonky and decohere from what we actually care about.

    • Just some number that lots of people agree is worthwhile, because everyone else agrees it's worthwhile.

    • Creating hollow improvement at the cost of resonant quality.

  • Technocrats love optimization.

    • Instrument, measure, codify best practices.

    • It creates obvious value by capping downside.

    • But there is a significant cost—it’s just that it’s diffuse and hidden.

    • So optimization ratchets up, unchecked, unbalanced with any countervailing force.

  • If you don't put both things on the opposite ends of the scale then it can't possibly be correctly balanced.

    • It will tip to one side as though the other side doesn't matter at all.

    • Optimization is not an unalloyed good.

    • Like all things, it has downsides.

    • The downsides are just more indirect and hard to measure.

  • The benefit of a Public Benefit Corporation is that there’s something to balance out the otherwise unbounded pull of profit.

    • It doesn’t have to actively balance it out, just be a non-zero thing on the other side of the scale.

    • Just putting something on the other side of the scale against “maximize profit.”

  • Operationalizing the “benefit of humanity” is hard when you’re measuring someone else.

    • Someone else might cheat–Goodhart’s Law.

    • But inside yourself it’s clear.

    • You know if you’re cheating or not.

    • Do your intentions align with your actions or not?

    • If you have to decide whether someone else is doing it, that's hard: you can judge only by their externally visible actions.

    • If it’s just you then it’s easy, trivial, the most natural thing in the world.

  • Metrics can be used to help ground truth your thinking but should never replace your thinking.

  • The problem isn't anybody choosing Candy Crush.

    • The problem is when there's nothing left to choose but Candy Crush.

    • Meaningful choice requires genuine alternatives.

  • What drew many of us to tech was the hacker ethic: technology as a force for human flourishing.

    • The industry didn't abandon it—we accidentally optimized it away in pursuit of short-term metrics.

    • Resonant products build trust.

    • Trust compounds.

    • That's good ethics and good business.

  • The point of technology is to further human flourishing.

  • It’s hard to build resonance by committee.

    • Measures are “objective” and become a coordination function when people lack trust or can’t get strong buy-in to change something based on gut.

    • LLMs don’t need to coordinate as much… maybe that will lead to more resonant products.

  • LLMs will be destabilizing to the current tech paradigm.

    • They radically change the cost of key inputs (e.g. software), which will profoundly change the industry.

    • If we need to grow a new tech industry fit to this paradigm, we might as well do it by returning to the hacker ethic that we lost.

  • With LLMs on the scene, now we have to decide: do we slide further into cynicism and nihilism, turbocharged?

    • Or do we use LLMs to reboot the industry, to come home to the hacker ethic we lost?

  • LLMs allow the possibility for resonance at a scale that has never been possible before.

    • Resonance can’t be pursued quantitatively, only qualitatively.

    • LLMs allow qualitative nuance at quantitative scale.

  • Steven Levy laments "the crash of the idealism that originally drew founders—and me—to the tech revolution."

    • My response: That idealism didn't die, it just got scattered.

    • The Resonant Computing Manifesto is a Schelling point for people who never stopped believing.

    • The manifesto isn't proposing something new—it's trying to recover what got lost.

    • This is a homecoming, not a revolution.

    • The hacker ethic never went away; it got drowned out by the optimization ratchet.

  • Palo Alto has a “we run the world” vibe.

    • The rebels became the incumbents.

    • Not just one company, but the whole culture.

    • An Innovator’s dilemma for a whole industry.

  • The hacker ethic said "mistrust authority."

    • The current tech elite became the authority to mistrust.

    • Resonant Computing is what you build when you take the hacker ethic seriously in an era when the hackers became the incumbents.

  • Resonant products create more value.

    • With a broad enough perspective, there is no tension with business objectives…

    • In fact there's fundamental alignment.

  • Nobody chose hollow over resonant.

    • Hollowness is what survives when you optimize only for engagement. 

  • Resonance is hard to measure, but everyone can feel it deeply, intuitively.

  • Resonance is fractal alignment across the layers of a system.

    • That creates an emergent positive force greater than the sum of its parts.

    • Every layer pulling in the same direction, strengthening and reinforcing the other layers.

  • Resonant things grow naturally.

    • But as they grow they fundamentally become optimized and hollowed out.

    • A random example.

    • The first Five Guys was in my hometown.

    • They used to pour heaping amounts of extra fries into the bag.

      • An authentic gesture of generosity.

    • Now, Five Guys is a massive national chain.

      • They still give you extra fries, but they measure out the extra in a calibrated tin before pouring it in.

    • Same gesture, but coming from a very different place.

    • Presumably some beancounter could discover “if we reduced the size of the tin by 10% no one will notice and we’ll save $10M a year nationally!”

  • Contexts and environments can make resonant outcomes the default… or nearly impossible.

  • Tim Berners-Lee has a concept of “beneficent software.”

  • Technology is multivalent.

    • It amplifies what it is applied to.

    • That can create good things, or bad things, depending on what it’s applied to.

  • Seb Agertoft has a frame I like: Thoughtful Technologists.

    • Similar to the Cosmos Institute’s Philosopher-Builders.

    • I just recorded an episode on his Humans in the Loop podcast that will be published in the new year.

  • We should make people feel welcomed to be Thoughtful Technologists, not shamed for not being one yet.

  • Hasty decision making leads to hollowness.

    • Resonance takes time and space.

  • Hollow is shallow.

    • Resonant is deep.

  • Technology is primarily an amplifier, so it's imperative that it amplifies the right thing.

  • Resonant things feel like coming home vs running away from ourselves.

  • Resonance is what is missing in modern society.

    • We can all feel it.

    • It’s everywhere.

    • This feeling of emptiness.

    • You can’t see emptiness, but you can feel it.

  • Everything before the Industrial Revolution was fundamentally resonant by default.

    • Hollowness only comes from an industrial process that is scaled.

  • When you live aligned with your principles, resonance is the default.

  • When you lose all of your idealism you become a cynic.

    • When you stew in cynicism you become a nihilist.

    • At that point you think “nothing matters so I might as well do what benefits me.”

    • This is when you are well and truly lost.

  • We all want to be optimistic and idealistic, but in the hollow age, all that feels possible is cynicism.

  • We're living in a post-modern hellscape.

    • No one thinks that anything matters, so nothing does.

    • A self-fulfilling nihilistic belief.

  • Shamelessness causes hollowness in society. 

    • "I'll just make number go up, I don't care what the indirect effects are."

    • I wonder: is shame required for resonance?

  • The hacker ethic is about friendly pirates.

    • Rebels who are trying to make the world a better place.

    • Today we have neither the prosocial, nor the rebels.

    • We just have the cynical incumbents.

  • Berkeley has rebel energy.

    • It’s also adjacent to Silicon Valley, but distinct.

    • So you get the Silicon Valley goals but the Berkeley rebel energy.

  • When you’re working with something fundamentally addictive then it’s easy to accidentally create opium dens.

    • Basically every direction ends there.

    • It’s like walking on a tightrope over an alligator pit in a hurricane.

  • Once someone does something primarily for the credential itself (the metric, the external validation) vs the intrinsic desire, it becomes hollow.

  • A tweet: "resonance and internal alignment is true wealth."

  • Another tweet: "Technology is a tool, use it as such or you're a tool. Get out and meet people or ‘touch grass’ as the kids say."

  • There’s a new methodology to detect monopoly conditions: Olley-Pakes.

    • Here’s Claude’s synthesis after we had a discussion about the article (a toy numeric check follows the quote).

    • "Concentration and markups measure market structure, not market process.

    • Both can rise in healthy markets (productive firms growing, innovation being rewarded) or sick ones (incumbents protected from competition).

    • They're outcomes that don't tell you if the competitive discovery mechanism is actually working.

    • The Olley-Pakes decomposition instead measures whether resources are flowing to more productive firms—essentially asking: "Is the market rewarding excellence?"

    • It decomposes aggregate productivity into

      • (1) average firm efficiency and

      • (2) the covariance between productivity and market share.

    • If that covariance is positive and rising, competition is functioning.

    • If it's zero or falling, something is blocking the reallocation process regardless of what concentration looks like.

    • The policy implication: Instead of asking "how many firms?" or "how much profit?", ask "are better firms growing faster than worse ones?"

    • This reframes antitrust from policing structure to diagnosing whether the discovery process is functioning—much closer to what competition actually does in a market economy."
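
A toy numeric check of the decomposition described above, with made-up data: share-weighted aggregate productivity equals the unweighted mean of firm productivity plus a covariance-style cross term between market share and productivity.

```python
# Olley-Pakes decomposition on toy data:
# aggregate = mean firm productivity + sum((share_i - 1/n) * (phi_i - mean_phi))
def olley_pakes(shares: list[float], phi: list[float]) -> tuple[float, float]:
    n = len(shares)
    assert abs(sum(shares) - 1.0) < 1e-9, "market shares should sum to 1"
    mean_phi = sum(phi) / n                      # (1) average firm efficiency
    cross = sum((s - 1.0 / n) * (p - mean_phi)   # (2) share/productivity cross term
                for s, p in zip(shares, phi))
    return mean_phi, cross

shares = [0.5, 0.3, 0.2]   # made-up market shares
phi = [1.4, 1.0, 0.8]      # made-up firm productivity
mean_phi, cross = olley_pakes(shares, phi)
aggregate = sum(s * p for s, p in zip(shares, phi))
assert abs(aggregate - (mean_phi + cross)) < 1e-9
print(cross)  # positive: better firms hold more share, competition is working
```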

  • A new paper: Polarization by Design: How Elites Could Shape Mass Preferences as AI Reduces Persuasion Costs 

    • LLMs can be skewed by just a few hundred aligned inputs (see the Anthropic paper).

    • The same is true for society.

    • A small number of strongly aligned components can have an outsize impact on the whole.

    • Already true for motivated adversaries like Russia; now the toolkit is available to anybody!

  • Talking Points Memo:

    • "There's some older (pre gen ai) work that shows fear of AI is repackaged fear of capitalism running away & behind left behind.

    • So that framing of AI being controlled by the ultra rich is likely to connect, across the political spectrum."

  • A nice distillation from a HackerNews comment:

    • "Closed doors (focused work) lets you reach the local minimum faster. 

    • Open doors (More connections) lets you escape the local minimum."

  • When you look at the gas tank meter you minimize usage.

    • When you focus on the tachometer you focus on how much torque you can produce.

  • When you feel uncertain you default to pulling forward questions to resolve them, to feel a solid foundation under you.

    • But resolving questions before you need to ties you down.

    • If you can solve it cheaply enough with blunt force, you don't have to figure out the precise force to apply.

      • This is the difference between the engineer and the physicist.

      • The physicist wants the solution to be elegant.

      • The engineer just wants the solution to be good enough.

  • If you don’t have paint on the brush you can’t paint.

    • You need both paint and the paint brush.

  • Success breeds success.

    • It puts you in a position where you have better adjacencies with better payouts and lower risk.

    • Each step, if you play your cards right, puts you in an even stronger position.

  • Shifting from default diverging (convex) to default converging (concave) is a miracle.

    • The spoon flipping your reflection upside down.

      • A discontinuity from possibility to reality.

      • Pulling it through the veil of possibility.

    • As you get closer to the flip it gets harder until you will it through and then it pops into the other side.

  • One kind of job AI will create: a “liability meatbag.”

    • A person who can say “I stake my credibility and downside on this.”

    • This person has to judge that the agent’s actions are good enough to be worth the trade.

    • A human can be punished in a way an LLM cannot.

  • Instead of trying to make outcomes that are unbiased, make local processes that lead to emergently balanced results.

    • For example, Twitter’s Community Notes feature was cleverly designed.

  • The key question around regulation is not regulation or no regulation.

    • It’s ex-post governance or ex-ante.

    • The American approach is ex-post.

    • It’s messy but it’s highly resilient and anti-fragile.

  • A culture of permissionless innovation requires capped downside to the commons.

    • One bad actor shouldn’t be able to ruin it for everyone.

  • When we build a brain, we tend to reach for structured, top-down approaches.

    • But what if you built a brain out of ants and stigmergy?

    • That's the emergent approach.

    • That’s closer to how these kinds of systems actually work.

  • Popper: science itself is an externalization of human cognition at planetary scale.

  • Objectivity emerges from a process of competitive intersubjectivity among humans: science.

  • Emergence isn't controllable, but it is tunable.

    • If you set the right structure, different results emerge.

    • Don't give up and say "it's emergent so I can't change it."

    • You just need to use a different method of analysis.

      • It requires multi-ply thinking, experimentation, and reflection.

    • Think like a gardener, not a builder.

  • Why does the tech industry experience such a strong version of the coordination headwind, compared to other industries?

    • Because tech is bits not atoms, and so it goes faster than other types of organizations. 

      • Its OODA loop is the fastest.

      • Not because tech is better, but because tech is easier.

    • That makes it the vanguard of these kinds of issues.

      • It has more acute versions of the problems that everyone has.

    • Not just the coordination headwind but also the optimization ratchet.

    • The tech industry runs hotter; it experiences all of the factors normal organizations experience, but faster and more intensely.

  • If you point out the emperor has no clothes, he'll have your head cut off.

    • If you point out that organizations are fundamentally emergent uncontrollable slime molds, no one person feels strongly offended.

    • Except founders in founder mode who want to believe the whole organization is a pure manifestation of their intention.

  • If we all tried to pretend gravity didn't exist, we'd go crazy.

    • We'd keep doing things that consistently didn't work for some hidden, non-obvious, but omnipresent and omnipotent reason.

    • We'd all be doomed to failure and frustration.

  • It's easier to take a small thing that is known to work and extend it to generalize it.

    • It’s like pulling taffy.

    • At every point, it exists and is viable.

    • As long as you pull it carefully and consistently, it can expand.

    • Compare that to starting with a theory and trying to build the real thing.

    • The leap from the design to the reality is a small miracle.

    • The likelihood of failure goes up at a compounding rate with size.

  • Don't lean into divisions.

    • That's the coward's way out, by focusing on superficial differences.

    • That hollows things out.

    • Instead, transcend and include.

    • Lean into fundamental alignment.

    • That creates resonance.

  • The gradient of competition can only give users what they want, not what they want to want.

    • Users get to decide what to purchase.

    • They'll decide to use the one that gives them what they want.

  • Centralization creates chokepoints.

    • It doesn't matter if it's not centralized by the government.

    • If it's in one place then a government can control it more easily than if it’s distributed.

  • Non-stakeholders are more likely to be trolls.

    • To destroy just for the lulz.

    • Catastrophic loss you don't care about for a minimal benefit you do care about.

    • Stakeholders care about the thing itself and don’t want to harm it for no reason.

    • LLMs aren’t truly stakeholders in any conversation you have with them.

  • In Memento, taking away the post-it notes would be worse than just deprivation of property.

    • Taking away part of someone’s extended mind is more dangerous than other property deprivation.

  • Most ranking functions can be gamed by coordinating outside the system.

    • Ranking functions typically assume authentic actions that are independent.

    • But if actors coordinate outside the system, they can invalidate the independence assumption.

    • That’s why ranking is typically proprietary–otherwise every ranking function is gameable.

  • Once a signal moves from authentic to performative it loses its value.

    • Goodhart's law is performing for the metric, vs intrinsic belief.

    • The searchstream is valuable because the search results are just for you; there’s no reason to performatively search.

  • Authentic private in situ decisions can be aggregated to create high quality signals.

    • For example, if the system could see what compositions of software users use multiple times.

    • What a user actually uses is what sets what gets distributed.

  • Kids' names being cyclical is another example of the Ouija Board Effect.

    • Even when parents think they’re being distinctive and clever with their kids’ names, they still end up with lots of other kids having the same names.

    • Everyone makes the decisions independently (and effectively secretly).

    • Everyone wants to be a little distinctive, but not too much.

    • Everyone is independently influenced by the same cultural forces, so they converge even though they don't coordinate.

  • Having a shared fiction allows you to align as a society.

    • Even if it's patently ridiculous.

    • As long as you all earnestly believe in it.

    • It allows a bunch of apes to build cathedrals!

  • Collectives that produce great results are ones where everyone believes it’s special and good.

    • That creates an emergent magic,

    • But only if everyone agrees.

  • When you leave the exercise to the reader it engages them.

    • It puts them in the loop.

    • It becomes a part of them as opposed to something given to them.

    • Something that grew from within them instead of being forced upon them.

  • Brian Chesky has observed that the worst punishment is to make someone compete with someone who likes what they do.

    • “Missionaries will outlast mercenaries.”

    • Because if they like what they do they will always lean in no matter what, and they will have more stamina than you.

    • At some point you'll lean out, and they never will.

  • Everyone will tell themselves a story about how they’re the good guy.

    • It might need to be an elaborate and ridiculous story.

      • But it will definitely end with “...and that’s why I’m the good guy.”

    • Zombie PMs: “Here's why doomscrolling is actually good for society.”

    • Believing that you're doing something that isn't good is so hard for our ego to accept that we contort our beliefs to fit.

  • No one thinks the company they joined early and feel connected to will be bad.

    • “The incentive is bad but I know the people there and they’re all good people I trust.”

    • But incentives matter more!

    • The emergent result and the inputs can all be differently aligned.

  • The emergent systems and the inputs are different.

    • Systems can work even if all the components are self deceiving and self interested.

    • Also, vice versa!

  • People with higher IQ are better at self deception.

    • Because they can retcon anything.

    • Retcon ability is more important in practice than disconfirming ability.

    • Retconning is when it’s definitely wrong.

    • Justifying is when it might be right.

  • One form of gauntlet is giving someone a quest.

    • See if they succeed, which demonstrates motivation.

    • The cynical version is the rockfetch.

      • You don't hope they succeed, you hope they give up.

    • But with a proper quest, you hope they succeed.

  • How do you get your idea to go viral?

    • The mistake is starting with the idea and then trying to make it go viral.

    • The trick is to find an idea that goes viral that aligns with your goals.

    • This requires creating gardens of possibility and farming for luck.

    • A nerd club is a great way for viral ideas to be discovered.

  • An organism thinks itself useful; no one else has to.

    • For a thing that is not alive, someone else has to think it's useful to keep it around.

  • You can’t be friends with a narcissist.

    • You’re an object to them.

      • Your inner world, your emotions, don’t matter to them.

    • A person can’t be friends with an object.

  • Socratic dialogue looks superficially like what the Sophists were doing.

    • But the goals were different.

    • The sophist has a practical goal in mind.

    • Sophists want to win.

    • But the Socratic dialogue is asking questions from the perspective of truthseeking.

    • Collaborative debate.

  • If you have an oracle, you'll tend to go to it for answers, to the exclusion of others.

    • Why talk to an elder when you can ask the LLM oracle?

    • With an LLM, those answers are private; others can't learn from them.

  • A way to resolve disputes in society long ago was an oracle.

    • Everyone agreed that the oracle would tell you the answer.

    • It was really just a way to discover a Schelling point: not the best answer, just one everyone can agree on to break the tie.

    • Related to my old Schelling Points in Organizations.

  • Every time you utter a word it’s a vote that the word is useful.

  • Semantic diffusion over time erodes all useful novel words until they are dull.

    • Every time that someone uses a word, there’s some chance they mis-use it in subtle ways.

      • This is kind of like an entropic process.

    • People use new words when they are novel and useful.

    • The more that they are novel and useful, the more they get shared, the more they become eroded.

    • Over time the equilibrium is for them to become as useless and dull as the average word.

    • Words that are not broadly useful have this happen more slowly.

      • For example, vibecoding has been eroding quickly.

      • But academic jargon from a specific domain erodes more slowly.

  • High quality collaborative debates need big group discussion, interleaved with smaller organic groups and then back again.

    • That marbling allows remixing, cross pollination, collective sifting.

  • When we talk past each other, it’s often because of assumptions we don’t share.

    • The assumptions are hidden, by definition, so there's nothing to see that shows the misalignment.

    • It's only the absence of a thing, so much harder to notice.

  • Ontology is a top down process of curation.

    • Folksonomy is an emergent process of accumulation.

      • An emergent process of discovering Schelling points.

    • Similar looking results, but wildly different resilience and adaptability.

  • Humans can detect tiling way easier than traditional computers can.

    • We perceive the whole, at every level of perception, in a way that for whatever reason is harder with sequential processing.

  • Do corporations have emotional states?

    • This Hacker News comment parodies the claim “I believe LLMs may have functional emotions in some sense”: "I believe Anthropic may have functional emotions in some sense. Not necessarily identical to human emotions, but analogous processes."

    • Thought provoking!

    • We have a number of emergent intersubjective realities.

    • Collectives that are not physical but emergent.

    • Do they all have emotions?

    • What does that mean?

    • Our own consciousness is emergent.

  • If the right long-term thing for the company is in tension with what will keep you from being fired, you have no good option.

    • If you don't do the short-term thing, you get fired and someone else takes your spot.

    • If you do it, then you do a thing that harms the company and society.

  • A three-tier model for resonance in a company.

    • What - what is the goal?

    • How - how will you do it?

    • Money - how will you make money?

    • They all must be in alignment to be resonant.

    • But if the money is not aligned, then the What goal you care about will be subverted.

    • If you’re not aligned across these layers, every action hollows you out more.

  • We all have some skills that are fully associative and natural, and some that require active focus.

    • For me, proprioception requires my forebrain being fully active, but planning doesn't.

    • With tons of practice you can shift from active focus to intuitive, but our individual rates of learning for different skills are different.

    • Presumably it's possible to only optimize for a few skills that are intuitive, and not have much more space to cram things in over time.

    • That's why "you can't teach a dog new tricks."

    • The tricks you learned intuitively earlier in your life were the ones you worked hard enough on before other skills.

    • They were likely the ones you had a natural advantage at, so they were more fun, and it pulled you through the slog of learning.

  • Dysfunctionally conscientious people feel shame so acutely that it leads to maladaptive outcomes.

    • They normally are so self-censoring that they never even produce a thing they don’t think is perfect.

    • When someone gives them negative feedback about a thing they didn't realize was bad it feels like an existential threat.

    • So they sometimes lash out at the person who said it and try to punish them for it and get them to apologize.

    • This is something I’ve discovered about myself in over a decade of couples counseling.

  • When you're in a terrible situation, look for the helpers.

    • A little bit of wisdom from Mr. Rogers.

    • There are always helpers.

    • Even in a barren wasteland, there's always something that can grow.

  • Dr. Martin Luther King Jr: "The time is always right to do what is right."

  • If people don't believe the hopeful future is possible, then they won't even bother trying to build it.



