Bits and Bobs 9/15/25

Alex Komoroske

I just published my weekly reflections: https://docs.google.com/document/d/1GrEFrdF_IzRVXbGH1lG0aQMlvsB71XihPPqQN-ONTuo/edit?tab=t.0#heading=h.z031ky58f952

Swarms of research goblins as scouts. The multiverse of code. Climbing the chatbot hill. UX at multiple layers of abstraction. Pre-assembled lego sets. Ironing out wrinkles in LLM output. Just-in-time tools. Unlocking libraries of knowledge with the right jargon. Friendly pirates. Resonant Computing. Rolling thunder.
----


  • A powerful pattern for LLMs: swarms of research goblin scouts.

    • Credit to Simon Willison for the term research goblin.

    • The research goblin isn't as good as you at research, but it is way better than you at being patient.

      • It’s infinitely patient so it will do much more research than you would.

    • You can spin up lots of little research goblins to do moderate-quality research that you’d never do yourself or even dump on a real intern.

    • Send out a dozen scouts, see which ones come back, and pick the best answer.

      • Which ones die and don’t come back is also a useful signal.

    • You wouldn’t send a real intern on a scouting project you think might not work.

    • But you would send a thing that’s infinitely patient and isn’t alive.

    • You can send ahead a swarm in every direction, scouting for viable options.

    • Then later you can execute the paths that are viable (a rough sketch of this fan-out is below).
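
    • A minimal sketch of this fan-out in Python, assuming a hypothetical run_scout() wrapper around whichever LLM or research API you use (the names here are illustrative, not a real API):

        from concurrent.futures import ThreadPoolExecutor, as_completed

        def run_scout(question: str, angle: str) -> str | None:
            # Hypothetical wrapper: one research goblin, sent down one angle.
            # Wire this to your LLM client of choice; return None if the scout
            # "dies" (errors out or comes back empty-handed).
            return f"[stub findings for angle: {angle}]"

        def swarm(question: str, angles: list[str]) -> dict[str, str | None]:
            # Send all the scouts out in parallel and collect whoever comes back.
            results: dict[str, str | None] = {}
            with ThreadPoolExecutor(max_workers=len(angles)) as pool:
                futures = {pool.submit(run_scout, question, a): a for a in angles}
                for fut in as_completed(futures):
                    angle = futures[fut]
                    try:
                        results[angle] = fut.result()
                    except Exception:
                        results[angle] = None  # a dead scout is also a signal
            return results

        reports = swarm("Is approach X viable for our migration?",
                        ["cost", "security", "tooling maturity", "team skills"])
        survivors = {a: r for a, r in reports.items() if r is not None}
        # A human (or a follow-up prompt) picks the best answer from the survivors.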

  • If you can generate infinite answers you don’t give a crap about any of them.

    • You never get deeply embedded in them, they're all easy to discard, so you never care.

    • This can be good: it allows you to explore nooks and crannies of the problem space you’d never bother to explore otherwise.

    • But it also disconnects you from the work.

    • That’s why research goblins are better as scouts.

    • They don’t do the work; they swarm ahead to chart the path for the real work that will come later.

    • For the real work, you’re in the loop, making decisions, and thus owning the result.

  • In a world of disposable code, you get orders of magnitude more rewrites.

    • Experiment, throw it out at the end of the day.

    • The multiverse of code.

    • All version control assumes that you will have one branch you're working on.

    • But what about being in a superposition of things you're trying out?

    • The systems assume n branches and deployments.

    • What if it's 1000n?

  • OpenAI is climbing the chatbot hill.

    • They’re in hill-climbing mode on that hill.

    • It’s a hill they’ll top out on.

    • That makes sense, it’s the steepest consumer hill that the industry has found… ever.

    • But the question is how tall the chatbot hill is.

    • Assuming that AGI is not around the corner, either the chatbot will be the be-all-end-all form factor and they'll rule the world…

    • … or they'll be the AOL stuck climbing a hill that everyone else moves past, unable to jump to the new thing.

  • Anthropic is the model company whose incentives I trust the most.

    • That’s because they don’t have a viable consumer play.

    • It’s the consumer plays that push towards hyper-scale, engagement-maxing, ad-supported, and just generally icky.

  • I like the way that Claude has introduced memory.

    • You can view the distilled dossier at any time and edit it.

    • You can disable it easily.

    • You can also import your memory from elsewhere.

      • It's mainly just “here's a hack to get the compressed memory out of another chatbot and slurp it into ours,” but still.

      • Everybody but the first place player will make importing easy, but if you’re really committed to memory portability you’d make exporting easy, too.

    • This article describes how the philosophy is the opposite of ChatGPT's.

  • A signpost: an article in the Washington Post telling consumers how to stop ChatGPT from training on their conversations.

  • The Economist: AI Agents are coming for your privacy, warns Meredith Whittaker.

  • Subagents are mainly about context management.

    • Instead of polluting the main context with the whole process of getting the answer to the sub-problem, just give the main agent the answer to the sub-problem (sketched below).

    • Less for the LLMs to get confused by.

    • It also helps minimize taint in a system where that’s important.
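
    • A minimal sketch of that idea, assuming a hypothetical llm() function that takes a chat transcript and returns the assistant's reply:

        def llm(messages: list[dict]) -> str:
            # Hypothetical chat-completion call; wire to your provider of choice.
            return "[stub reply]"

        def run_subagent(task: str) -> str:
            # The subagent does all of its messy work (tool calls, retries,
            # dead ends) inside its own private context window...
            scratch = [{"role": "user", "content": task}]
            for _ in range(5):
                scratch.append({"role": "assistant", "content": llm(scratch)})
            # ...but only the distilled answer crosses back to the main agent.
            return llm(scratch + [{"role": "user", "content": "State only the final answer."}])

        main_context = [{"role": "user", "content": "Plan the database migration."}]
        answer = run_subagent("Which indexes does the orders table need?")
        # The main context grows by one short line, not by the whole sub-investigation.
        main_context.append({"role": "assistant", "content": f"Sub-result: {answer}"})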

  • Prompt injection only happens when you add tool use.

    • Before that, the worst that an LLM, even one that is tricked, can do is try to trick the human, to indirectly cause some bad outcome in the world.

    • A book can't execute things, but it can inspire actions in its readers.

    • When you add tool use, the human doesn’t have to be tricked, only the LLM has to be.

  • With vibehacking you can just start a swarm of agents and sic them on your target.

    • Only one has to succeed if the payoff is big enough!

  • Claude has a new feature that allows it to build presentations and documents and execute code.

    • But as Ars Technica notes, it can also accidentally exfiltrate data.

    • Prompt injection everywhere!

  • Claude Code has a security vulnerabilities scanner.

    • It’s pretty good, although it can be tricked.

    • And in one case, it even ran the code it suspected of being malicious proactively while it was investigating it.

    • This kind of mitigation only helps in situations that are not adversarial.

    • Where you wrote code that might have accidental gaps, not when you’re verifying code that a potentially malicious party sent you.

    • It’s very easy to use it in a dangerous way, giving you a false and actively misleading sense of security.

  • The right substrate will complement LLMs.

    • ChatGPT and NotebookLM are the most straightforward UX you could imagine.

    • The model is the star of the show.

    • What about a substrate that is powerful in its own right?

    • A rich, interactive substrate will be an amazing complement to LLMs and will unlock their potential.

    • How can you make the model fade in importance?

  • A tool must amplify your intent, not replace it.

    • Tools give leverage on your intent.

  • When something talks back to me it feels like a person, not a tool that is an extension of me.

    • When the UI is coactive and adapts itself in response to my request, it's answering in a way a human couldn't.

    • So it doesn't feel like I'm talking to some other entity, it feels like a tool.

  • For some use cases, the conversation is the point.

    • You want a personality that is not you to bounce off and interact with.

    • Most of the time when you want a tool, you want it to be an extension of you.

    • As blandly competent as possible.

    • Those are different use cases!

  • Chat is great when the conversation is the point.

    • But it's not great when the conversation isn't the point.

  • Something without an inner world can't be a true friend.

    • They can only be a facsimile of one.

    • A friend is an other, who has their own inner world.

      • They are an end in and of themselves.

      • They have their own perspective and their own needs to defend.

      • They push back, they keep you honest.

    • An LLM doesn't have that.

    • It is just pretending to care, to have needs that need to be met.

    • That makes them infinitely patient... but also inherently sycophantic.

      • You are an end, it is only a means.

    • They'll just do whatever you tell them to, they don't need to be convinced it matters or is worth their time.

  • Even worse than being obviously sycophantic is being subtly sycophantic.

    • Subtly sycophantic in a way that escapes your notice, and thus can manipulate you.

    • Either intentionally, or unintentionally, lulling you into complacency.

  • Ars Technica: ChatGPT’s new branching feature is a good reminder that AI chatbots aren’t people.

  • My friend Varun Godbole: The AI That Feels Good Wins.

    • "When laypeople can't meaningfully evaluate model quality, they default to what feels best, creating dangerous incentives for labs to optimize for subjective satisfaction rather than genuine capability."

    • The proxy of "feels good" for "is good" is what we fall back on when we don't know.

  • My friend Anna Mitchell: The Hidden AI Risk: We'll Never Want To Log Off.

  • Paul Kedrosky: ChatGPT as the original AI Error.

    • "The human fascination with conversation has led us AI astray,"

    • The “LLM as anthropomorphized agent” framing is a hack that makes it easier for users to connect with this alien technology.

      • Like the aliens in Contact.

      • Not its most natural form, but the most natural form for us.

    • It’s a dangerous hack for us: it allows these double agents to feel like our agent.

      • A malicious metaphor.

    • It also backs companies who use it into a corner.

      • If your product is your users' best friend, then you've put yourself in a difficult position.

      • It's cynical... but also a bad tactic.

      • If you make any change people will say "you just amputated the limb of my best friend!"

    • The anthropomorphization of LLMs is the wrong path.

    • LLMs should be a force that animates and enchants other non-textual things like coactive UIs.

  • Some UX modalities work at multiple levels of abstraction.

    • A map works the same as you zoom in, with the level of detail changing.

    • Chat also has this characteristic.

      • You can cover high level topics, or detailed ones, and bounce up and down the abstraction layer.

      • Chat allows malleability, but in an annoying text-only form factor.

    • Use cases bounce up and down the ladder of abstraction.

    • But apps don’t have that characteristic, so they can’t come with us.

    • Apps are locked in a given level of abstraction.

      • The UI and data model is fixed in place.

    • As a result, we as users must do the climbing up and down the levels of abstraction.

      • Hopping across different apps.

      • Because each app is an island, the human has to bring the context with them.

    • This happens because UI needs software to generate it, and software is expensive.

    • Infinite software might change that.

    • The answer is not “design an app on demand” because apps are isolated islands.

    • What you want is your context to come up and down the abstraction stack with you.

    • A system that allows you to fluidly and safely bring context to arbitrary UI would be amazingly powerful.

    • Any single example of a single screen would just be “X, but with data autopopulated”.

    • But the real power would become clear in use cases that bounce across different layers of abstraction, as real tasks do.

  • Chatbots are not LLMs.

    • LLMs are not AI.

    • They are all related, but they are different.

  • We’re still in the dialup phase of LLMs.

    • Credit to my friend Roy Bahat.

  • The beauty of a pre-assembled lego set: users don't have to realize it's made of legos.

    • The primary use case is it’s a fun toy.

    • The secondary use case is that it’s infinitely customizable.

    • If you just give them a lego set they have to assemble, they have to think about what to build and how to build it, which is intimidating.

    • But a pre-assembled lego set is just a toy that happens to be customizable.

    • The limit to this pattern in the past was that it took time and effort to design and pre-assemble all of the lego sets for different needs.

    • But now with LLMs allowing infinite software, the balance point shifts.

  • Apps have a prize if you can make one that people want: the proprietary pool of data.

    • That prize is what motivates developers and also investors.

    • That prize is downstream of the same origin model that isolates every app into its own universe.

    • A fabric that allowed experiences to share data safely wouldn’t have that dynamic.

  • I think of bugs in LLM output as wrinkles.

    • A human has to iron those wrinkles out by curating the output.

  • Imagine a fabric where savvy users can make patterns of behavior that can safely run for others.

    • Calling them developers is wrong… they aren’t developing things for others, they’re solving their own problems.

    • The "what is the incentive for developers" question in that scenario is a category error.

    • Maybe they should be called ‘tailors’.

    • Tailors help take a pattern and fit it to you.

    • Like food recipes, tailoring patterns can't be copyrighted.

    • Recipes and patterns are more about the trademark.

      • The brand of someone whose taste you trust to help attract your attention.

    • They are inherently remixable.

  • Just-in-time tools.

    • See what tools the chatbot wishes it had, and then create those just in time (a rough sketch is below).

    • It’s possible in the realm of infinite software!
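
    • A rough sketch of what that loop could look like, assuming a hypothetical llm() call and a tool-request convention invented here purely for illustration (in real life you'd sandbox and review any generated code before registering it):

        TOOLS: dict[str, object] = {}  # the tools we actually have right now

        def llm(prompt: str) -> str:
            # Hypothetical completion call. Returns either an answer or a request
            # like "NEED_TOOL: unit_converter -- convert between metric and imperial".
            return "[stub answer]"

        def handle(prompt: str) -> str:
            reply = llm(prompt)
            if reply.startswith("NEED_TOOL:"):
                name, _, spec = reply.removeprefix("NEED_TOOL:").partition("--")
                name, spec = name.strip(), spec.strip()
                # Ask the model to write the tool it wishes it had...
                code = llm(f"Write a Python function named {name} that {spec}.")
                namespace: dict = {}
                exec(code, namespace)           # sandbox + human review first, really
                TOOLS[name] = namespace[name]   # ...and register it just in time.
                reply = llm(prompt)             # retry now that the tool exists
            return reply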

  • Your data needs to be processed into some kind of structure.

    • The context windows are not big enough to just handle the compost heap of your life.

    • They have to be distilled into useful forms that are flexible enough, with human input.

  • If you curate the context the LLM has, you can drive it way more effectively than by just dumping all of the unstructured context on it (a rough sketch is below).

    • The curation of the context is the main steering wheel.

    • You tell it what is important, what to attend to.
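
    • A rough sketch of that steering wheel, with the distilled notes and the scoring both deliberately toy-sized:

        # Raw life already distilled into short, tagged entries (with human input).
        notes = [
            {"tags": {"health", "sleep"},  "text": "Averages ~6h of sleep on weekdays."},
            {"tags": {"work", "deadline"}, "text": "Launch review is the last week of the month."},
            {"tags": {"travel"},           "text": "Prefers trains over short-haul flights."},
        ]

        def curate(question_tags: set[str], notes: list[dict], k: int = 2) -> list[str]:
            # Pick the few notes worth the model's attention instead of dumping everything.
            relevant = [n for n in notes if n["tags"] & question_tags]
            relevant.sort(key=lambda n: len(n["tags"] & question_tags), reverse=True)
            return [n["text"] for n in relevant[:k]]

        context = curate({"work", "deadline"}, notes)
        prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: When should I schedule the offsite?"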

  • The bottleneck on getting quality outputs from the model is now input quality.

    • They have tons of latent capability if you just give them the right inputs.

    • Giving the right context at the right time is the frontier for unlocking their quality.

  • If the model intermediates every action you take then it sets the ceiling of what can be done.

    • Can you connect the dots or do you have to wait for the model to?

    • Does the model set the ceiling for what you can do… or the floor?

    • You need a pace layer outside the model to accumulate intermediate insights.

    • Those intermediate insights can be fed back into the model in future iterations as context to reach further (a rough sketch is below).

    • Those intermediate representations require curation by a human, otherwise they spiral out of control and decohere from reality as the LLM throws itself into a cycle of slop.
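
    • A rough sketch of that pace layer, again with a hypothetical llm() stub standing in for the real call:

        def llm(prompt: str) -> str:
            # Hypothetical completion call; wire to your provider of choice.
            return "[stub insight]"

        insights: list[str] = []   # the pace layer lives outside the model

        for iteration in range(3):
            context = "\n".join(insights)
            draft = llm(f"Known so far:\n{context}\n\nPush the analysis one step further.")
            candidate = llm(f"Distill the single most useful new insight from:\n{draft}")
            # Human curation: only keep what a person signs off on, so the layer
            # doesn't decohere into a cycle of slop.
            if input(f"Keep this insight? {candidate!r} [y/N] ").lower() == "y":
                insights.append(candidate)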

  • When agents operate in a loop without human intervention, they can go off the rails.

    • The human doesn't have a chance to go "wait no, don't do that."

    • The faster agents loop, the more easily they can get themselves confused… or tricked.

  • If you do a dumb thing, you blame yourself.

    • If the system does a dumb thing, you blame the system.

  • If the system can give you software, do you want the most average software?

    • Or do you want the software that specific people in that environment love the most?

    • One is a mundane average, not known to be compelling to anyone; the other is the most compelling to real people.

  • Search had an empty query box problem.

    • That box was intimidating if you didn’t know how to structure your query.

    • Once autocomplete was added to search, the query rate increased discontinuously.

  • Cursor is an example of a coactive surface.

    • It feels like an extension of you: a deeper conversation with the system.

    • Also, you can pick your own model!

  • The back button is an undo for navigation.

    • An early Mac principle: “never punish a user for exploring.”

  • Where are the LLM-native games?

    • Seems like a powerful new ingredient for new kinds of game experiences that weren’t possible before.

  • The tradeoff of the same origin security model: you can use any website immediately, but no website knows about you.

    • Websites can only come to know about you slowly, over time, with the significant trade-off of you giving them open-ended access to your data.

  • The same-origin paradigm has two one-size-fits-none policies for how your data may be used.

    • Either: don’t give the origin the data, so you don’t get any personalization benefit.

    • Or: open-ended trust in the owner of this origin to use the data responsibly.

    • Like the Mary Poppins meme.

    • This one-size-fits-none leads to all kinds of weird and bad outcomes.

    • Data sloshing around everywhere and yet also not being able to help us.

    • Our data sloshes all over the place... and also we need to manage the gaps between services ourselves at great effort.

    • The papercuts of missing personalization aren't worth granting open-ended access to your data to fix.

    • That's the tradeoff, the one-size-fits-none policy of the same origin model.

    • We’re bleeding on the floor from thousands of paper cuts.

  • Most app coordination problems in software are solved today by one entity that has god-like power to see it all.

    • That has the downside that there’s now one ever-more-powerful entity.

    • Even if that entity starts out with good intentions, that power is corrupting.

    • These systems struggle to become truly ubiquitous because most participants would rather not cede so much power to that entity.

  • Centralization at the higher layers matters more than at the lower layers.

    • But everyone focused on decentralization at the lower levels, where it’s easier to combat.

  • At the late stage of a paradigm, all of the problems bunch up into one meta-problem.

    • But because each problem seems unrelated and small, you don't realize that there's a single thing that could solve all of them at once.

    • But when the paradigm shifts, it’s an explosive unlock.

    • Paradigm shifts require solving multiple problems all at once.

    • So they’re hard to make legible before they happen.

    • That’s why they seem to explode onto the scene.

  • Paradigm shifts explode onto the scene.

    • Problems that everyone has but everyone thinks are unchangeable can have massive explosions in use.

    • People become blind to it because there's no way to change it, so they just live with it and forget how much it sucks.

    • But then if something comes that makes it better, you can't not use it.

  • The temporarily illegible is where the profound, game-changing insights come from.

    • Related to Alex Rampell’s frame of temporarily out-of-the-money options.

    • Critically, if it always stays illegible then it's not valuable.

    • It’s the transition from illegible to legible that is where the discontinuous value is created.

  • Game-changing things are discontinuous, so often are temporarily illegible.

    • If an idea is legible, and it’s doable and desirable to someone, then it will have already been done.

    • Legibility is upstream of knowing if it’s doable and desirable.

  • Calendars are optimized for corporate life.

    • Where you're either in a meeting or not.

      • Binary, clear timing boundaries.

    • What about an optionality calendar?

      • That captures the fuzziness?

    • You also only have one UI for all uses of the calendar.

      • Different use cases should have different interfaces, optimized for different tasks.

  • There’s a massive gap in social ephemeral organizing software today.

    • Facebook slurped up all of the social-adjacent use cases and then said, “nah, screw it, we’re just going to optimize for engagement in an infinite feed.”

    • The result was they left a barren wasteland nothing can grow in.

    • No individual business is viable in that desert, but there are tons and tons of use cases.

  • With dating apps, both parties have to decide to use the same dating app.

    • That's a coordination problem.

    • A dating app is more useful if it has a larger user base.

    • That leads to the logic of hyper-scale, which leads inexorably to one-size-fits-none dating apps.

  • The developer ecosystem cold start problem is downstream of distribution cost, which is downstream of the security model.

    • The web made stateless applications have very little distribution cost.

      • Everything is a click away.

    • If you could get that kind of low friction but with personalized experiences safely, that would be a game changer.

  • Once a product is free some people will never choose to upgrade to paid, even when it’s obviously worth it.

    • Starting off free sets a mindset that is hard to shake.

    • Someone told me that in Ecuador even in fancy restaurants you’d hear Spotify ads for the music playing in the restaurant.

    • One way to get free usage without a free tier is to make it so friends can gift credits to their friends with some multiplier on credits.

      • The size of the multiplier sets how aggressively you want to grow the network.

  • Seems like a certainty that in 10 years, most US consumers will pay $100 a month for an AI-powered product.

    • In order to not be a cul-de-sac, it will have to be an open system that you can use for anything.

      • It will need to subsume all of the other use cases.

    • It will have to be bigger than just chat.

    • This product will change the world.

  • I love the O'Reilly mission: "Changing the world by spreading the ideas of innovators."

  • You used to have to learn to speak computer.

    • Now the computer can learn to speak you.

  • GPS allows you to think less… but also be more courageous.

  • In a new system, pick the right metaphors and stick with them.

    • Sculpt the system to fit the metaphor to slide into people’s minds more easily.

    • A coherent metaphor helps the product resonate even though it’s new.

  • Joel Simon’s Creative Exploration with Reasoning LLMs is interesting.

    • If you ask LLMs to be creative they converge to the mushy average.

    • But if you inject structured noise, for example by having them apply Oblique Strategies, then they can be more creative.

    • LLMs will always pull you to the average.

    • So to diverge you have to give them divergent inputs.

  • A paper: "A Conjecture on a Fundamental Trade-Off between Certainty and Scope in Symbolic and Generative AI"

    • Rhymes with the logarithmic-cost-for-exponential-value and exponential-cost-for-logarithmic-value curves.

    • The logarithmic-cost-for-exponential-value curve is fundamentally fuzzy and imprecise, but at large-enough scale it dominates the other's benefits.

  • Jargon unlocks deep insight from the people who understand it.

    • To people who don't, it just goes over their heads.

    • Most jargon goes over most people's heads, only for the right specialists with the right background knowledge does it land.

  • Jordan Rubin: "A library you can import through the right metaphor" 

    • The right jargon unlocks the right library of background context.

    • LLMs understand almost all jargon.

  • A judo move: switch a problem from correctness to performance.

    • Optimization is easier to do incrementally than correctness.

      • There’s an obvious gradient to climb.

    • It’s a switch from default-diverging to default-converging.

    • “It’s semantically correct but it’s very inefficient” is the toehold.

  • You learn an order of magnitude better when you’re making decisions.

    • When you’re making decisions, you’re forced to collapse the wave function.

    • Instead of just following along and predicting what will happen, you have to also be in a “change the world” mindset.

    • If you’re watching from afar and just predicting, you can just idly predict.

    • If you get distracted for a bit, nothing changes, everything keeps going as before.

    • So you’re paying attention, but you’re not “in the loop” with it.

    • That’s why being “in the loop” or “in the arena” helps you absorb significantly more knowhow.

  • Making decisions is what keeps you “in the loop”.

    • In the OODA loop, it’s the Decision that is the core of the loop.

    • Without it, you’re just observing, or being buffeted around by forces around you.

    • Everything good, everything emergent, comes from the decision.

    • Making decisions is what gives you ownership.

  • Single ply thinking as quickly as possible is a characteristic of late-stage scenarios within a paradigm.

    • In today’s late-stage-of-whatever-paradigm-this-is tech culture, employees are rewarded primarily for doing whatever their manager told them to do, quickly and with polish.

      • Just saying yes and executing heroically.

    • "I've been told to execute it, hearing anything about why it might not be feasible or not a good idea just stresses me out."

  • In Edwardian England, the nobles had a sense of noblesse oblige.

    • Obligation to the collective, to something larger than themselves.

      • Positive-sum perspective.

      • Of course, there were all kinds of downsides in that social system!

    • But now it’s “whatever’s best for me, ignore the externalities.”

      • Zero-sum perspective.

    • Nothing builds.

    • It’s all a red queen race.

    • Eat or be eaten.

  • A billionaire when they meet a person who doesn't kiss the ring: "Oh, this person doesn't yet realize how smart I am."

    • No, this person doesn't yet realize how rich you are!

  • The tech maximalist ideology: "anything technology does is by construction good and anyone who doesn't agree is a Luddite who needs to get out of the way."

  • Someone this week described today’s tech industry as having reached an equilibrium that isn’t even evil in an interesting way, but in a sad, banal way.

    • It’s not even grand ambitions any more, it’s just “optimize without thinking to extract value while creating negative externalities.”

    • Sad.

  • Why is VC so powerful in Silicon Valley?

    • Starting up atoms-based businesses is extremely capital intensive, which means only businesses that have a safe, legible business model can get financing.

    • Bits-based businesses have startup costs, but much less, relative to their possible scale.

    • That’s a great fit for venture investing.

    • But if the cost of making software drops, then even the VC model isn’t that important, more people can simply build little bits of software and then bootstrap the ones that get momentum.

  • Intuitively we believe things we hear many times, which makes sense.

    • If many independent people say it, it’s more likely to be true.

    • But people choose to repeat something if they think it’s interesting: surprising and plausible.

    • In an echo chamber, one guess can bounce around and reverberate into a strong story as everyone makes it just a little better of a story.

  • If you view success too narrowly then you can create negative externalities without even realizing it.

    • “Look, I made this successful thing!”

    • “Yes, but it is powered by destroying value all around you.”

  • "Desire is more monetizable than satisfaction."

    • This idea is related to the book Status and Culture by W David Marx.

  • Resonant Computing is not about being comfortable.

    • Discomfort is a path for growth.

  • Resonant Computing doesn’t just capture attention — it deepens it.

    • It’s not about efficiency or engagement.

    • It’s about alignment with human flourishing.

    • Resonance occurs when tools expand our capacity, our connectedness, our sense of the possible.

    • Where hyper-scale reduces us to data points, Resonant Computing adapts to us as full humans.

    • This riff comes from Aish.

  • Resonance requires people to feel the spirit of things.

    • Spirit: esprit de corps.

  • Resonance is acting in line with your ideals.

    • If you aren't consistent in your actions and your ideals, you lose your soul.

    • You pull back, you disengage, you lose your soul and your will to improve the thing you're a part of.

  • Resonance is default-converging.

    • When everyone is individually feeling resonance: living in line with their ideals, the natural emergent outcome is also prosocial outcomes for the collective.

    • It doesn’t matter what those individual ideals are as long as they are mostly in the same direction, and have a long-term orientation.

  • Nuance is resonant.

    • Nuance could also be called “texture”.

  • Resonant things have a scale invariance.

    • Hollow: the closer you get, the less impressive it is.

    • Resonant: the closer you get, the more impressive it is.

  • The key difference in a high performing team: does ambiguity destroy or create trust in the team?

    • In normal teams, ambiguity makes the team lose trust in one another.

      • “The reason this is hard is because Jeff isn’t technical enough, unlike me.”

    • In high-performing teams, ambiguity makes the team gain trust in one another.

      • “Wow, that was such a fascinating insight from Sarah I would have never thought of in a million years.”

    • The switch from default-diverging to default-converging is tiny but infinitely important.

  • In high performing teams, people push themselves to succeed not because they're forced to but because they want to.

  • Consumer academic style: just build a thing and test it empirically.

    • Enterprise academic style: think, think, think, model, and write a paper.

    • Scientist vs economist.

  • A bottom-up culture has a hard time doing coherent strategies over the long term.

    • It can only understand and coordinate around momentum.

      •  "Look, number going up, give more resources."

    • You need an editor to have a coherent strategy.

    • That implies an entity that everyone in the organization agrees is allowed to curate.

    • That implies more of a top-down culture.

  • Generating coherent momentum happens when people on the team believe.

    • When there is momentum it makes people believe.

    • It’s hard to make momentum where there is none.

  • A bottom-up culture can work in consumer contexts with low external competition.

    • Where everyone feels like a member of the overall corporation first and foremost, not their individual team.

    • Where it’s a positive-sum mindset.

    • Where it doesn’t feel like an existential danger breathing down your neck, making everyone feel defensive.

    • So less defensiveness internally and externally.

    • Resonant emergence happens when everyone is participating from a position of optimism, not fear.

  • Enterprise companies need more top-down strategy than consumer companies.

    • It requires a coherent strategy for an extended period of time.

    • Which implies someone who can make a Schelling point that will stick.

  • In a bottom up culture, don't try to convince everyone on strategy, because it will be impossible to cohere.

    • Instead focus your arguments in the following percentage:

      • 70% on obvious, no-brainers that everyone can agree make sense in the short-term.

      • 20% on the incremental extensions that prove it’s not a cul-de-sac.

      • 10% on the long-term strategy that is presented as a cherry on top.

    • As you get momentum, the focus will naturally come to the strategy, as people can see the momentum.

    • Before there’s momentum, trying to get momentum around your strategic north star is nearly impossible in the bottom-up chaos.

    • Instead, get momentum on the short-term in things that you know align with a compelling long-term strategy.

  • I liked Ben Follington’s The Physics of Creativity: A dynamic model of creative collaboration.

  • One wrong member of a team can throw off the whole collective vibe.

    • It takes one person to poop a party.

  • In a reactive system, read-only is the safe default.

    • Because otherwise an upstream change could blow away the edit you made in an intermediate node (see the sketch below).
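
    • A tiny sketch of why: a derived node is recomputed from upstream, so a hand edit to it is silently lost the next time its input changes.

        upstream = {"price": 100}

        def derived() -> dict:
            # Recomputed whenever upstream changes.
            return {"price_with_tax": round(upstream["price"] * 1.2, 2)}

        node = derived()
        node["price_with_tax"] = 115   # a hand edit inside an intermediate node

        upstream["price"] = 200        # an upstream change triggers a recompute...
        node = derived()               # ...and the hand edit is blown away
        print(node)                    # {'price_with_tax': 240.0}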

  • This week someone called Christopher Alexander and Marshall McLuhan “concept technologists”.

    • They distill a concept that explains a thing you could previously sense but not describe.

    • They give the concept that you can step into, and it feels warm, clarifying.

    • A vague sense that you didn't even know you needed a word for, but once you know there's a concept there, the world seems less overwhelming.

  • What people believe is what matters.

    • It is their beliefs that set their world model that they react to.

    • Norms arise out of interdependent beliefs and expectations about what others believe.

    • Rules are Schelling points; they help set the default of how people think a given situation will evolve.

    • But they are just a default.

  • The “norm” is the baseline average.

    • Why do you think X is inappropriate in a context?

    • Because you believe the other people will think it’s inappropriate.

    • Not what you believe, what you believe others believe.

    • You know your internal mind but not others’ internal mind.

    • So this emergent belief of what other people believe is more stable and takes longer to diffuse than if you based it on your own beliefs.

    • Because it can take you longer to notice they changed their mind since you can only see external signals of it.

  • Coordinating removes degrees of freedom.

    • It removes option value.

    • You set a future outcome as a fixed point to pivot around.

    • That's why individuals often would rather not coordinate if it doesn't help them achieve a thing with the collective that they care about.

    • If people believe in the power of that particular collective they are willing to surrender some of their autonomy to it.

    • Without people seeing the collective as a thing worth investing in, you get an incoherent swarm.

  • My degree is in Social Studies.

    • I have a minor in Computer Science.

      • It was almost enough credits to be a dual major, but technically it’s a minor.

    • Earlier in my career the CS felt more useful.

    • But now with the rise of LLMs, this odd kind of cultural technology that is grown, not built, Social Studies feels more valuable.

  • Stigmergy is about emergent coordination via an external substrate, the environment.

    • It is an environment that enables emergence.

    • Intelligence arises collectively within the environment.

  • When thinking at the margin doesn't work, maybe you're at the wrong margin.

    • For example, maybe you need to think about marginal changes to the whole.

  • As an individual, the bullshit of the internal dynamics of big companies has an upside: it insulates you from the raw intensity of competing directly in the market.

  • A given power structure will generate ideal citizens that fit it.

    • The ones that will survive and thrive are the ones that align with the inherent logic of the system.

      • A consistent asymmetry.

      • Over time, this force compounds; it gets harder and harder for the members who don’t align.

    • A kind of ideal citizen in modern large-scale bureaucracy is what David Brooks calls “organization kids”.

      • Discipline over curiosity.

      • If you optimize for what can be measured by external indicators of quality, you lose the internal quality that can’t be measured.

      • Unenrolled in their own development.

      • Making themselves “below the API.”

  • To improve, you need feedback.

    • Otherwise you 1) don't realize there's anything wrong with your model and 2) don't know the gradient to improve it.

    • A boss getting feedback from a report is hard.

      • Because the boss can fire the report.

    • So the report softens their feedback, which might make it too subtle to be received by the boss.

      • Everyone wants to want feedback, but feedback–hearing something is wrong–is hard, so when you’re mad or scared or stressed you subtly discourage it.

    • The more intimidating the boss, the more likely they are to lose their cool and fire someone, and the less likely people are to share the feedback.

    • It will be a super-critical state, ready to shatter.

  • Successful displays of power build power.

    • Power is emergent in the social imaginary.

    • People who people believe have it, have it.

    • It can turn into an aura of invincibility.

    • However that means that when they lose in a public way that power can shatter in an instant.

    • This is the logic of Saruman.

  • Decentralized has become tainted as a word.

    • It now implies adjacency to “hyper-financialized grifting.”

    • There are other words that get at the same thing, but without those connotations:

      • Plural

      • Distributed.

      • Democratized.

    • If you pick the common word, listeners immediately jump to the existing connotations, they don’t interrogate it or sit with its intention.

    • A word that is still understandable but different helps people reflect on it and not skim over and think “OK, I got it.”

  • I have a random cocktail of personality traits that predispose me to strategies of serendipity.

    • Serendipity works best when you plant lots of little seeds of trust that might blossom into something in the future.

    • You plant the seeds for their own sake, but they also have a bonus of some small chance of greatness.

    • I am hyper-extroverted and hyper-conscientious, which predisposes me to trust-building actions naturally.

    • I didn’t come up with this strategy from first principles, I retconned it from a thing I was doing naturally that worked better than I would have guessed it could.

  • Someone described me this week as a friendly pirate.

  • Apparently Richard Feynman was promoted early on because he was willing to call out even powerful people.

    • He’d call them on what he saw as bullshit… even though he was often wrong.

    • Knowing you’re winning a sparring match because you’re right rather than because you’re powerful helps you ground truth your beliefs.

    • His manager saw the value in that for truth-seeking.

  • Having choices is what gives you meaningful agency.

  • A childish thought is any thought that is anchored in oneself.

    • "How does this benefit me?"

    • "How can I use this to achieve my ends?"

    • "Whatever thing I want right now is the most important thing in the whole world."

    • Selfish narcissism.

    • As we become more wise we realize the value of creating value in the world that is not centered around us.

  • The right tools in the wrong hands produce the wrong outcomes.

  • Nothing can ever be “finished.”

    • Everything changes.

    • The context changes, and the thing that was previously done must change.

    • It is no longer done.

    • Life is change.

  • Most successes aren't big bangs, they're rolling thunder that builds in momentum.

    • Starts small, but then grows incrementally but quickly to something amazing.

    • If you're judging the quality based on the instantaneous response, then you'll think a big bang that then rapidly evaporates is better.

    • What matters is the absolute area under the curve; slow and steady (and ideally compounding) is way better than fast and loud without momentum.

    • Momentum is a second-order phenomenon.

    • It’s not visible at any one instant, but it’s more important than any one instant.

  • Coordination is magic.

    • You get many to behave as one.

  • Karma is real in an infinite game.

    • “What you give is what you get” over infinite time would equalize to be strictly true.

  • A friend who grew up in the tradition of Zoroastrianism shared his take on the ethical progression:

    • Good Intentions.

    • Good Words.

    • Good Actions.

  • People talk about the word like it's the thing itself.

    • It's just a pointer to the thing.

    • The thing is what matters.

    • Just saying the word doesn't make the thing happen.

    • When you say deep words like “spirit”, you focus on the word, not the action.

      • You lose the end for the means.

    • You have to live your ideals, not just speak them.

  • Nothing in the universe survives without energy put into it.

    • If it persists, it's doing something useful.

    • That useful thing might not be obvious at first glance.

  • Technology is an extension of human intelligence.

    • Billions of micro-decisions by individuals accumulate into the emergent force of technology.

  • The future doesn't get better automatically, we have to make it so.

    • Everyone in their little ways tries to make the future better than the past.

    • It's the sum total of everyone striving to leave the world better than they found it... in a way that gets some of the value for themselves.

  • You can never win an argument against a true believer.


