Bits and Bobs 1/19/26

Alex Komoroske

Jan 19, 2026, 12:33:29 PM

I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.k2ny5xj886bf

Promptware. Alien intelligence. The slop code. LLMs as electricity. Automating junk code away. A pattern graph. The same origin's data fiefdoms as fractal monopolies. The rainshadow of the same origin policy. The switch cost floor. Security nihilism. Social accretion. Combinatorial value from combining different data sources. Convenience without surveillance. Losing the plot. Buying the chicken. The power of naming a Thing.

---

  • ChatGPT says it will do ads.

    • Shocker!

    • Your intimate friend who knows everything about you and is preternaturally good at making hyper-convincing arguments being incentivized to get you to buy things is way different from ads in a search engine.

    • It’s possible to do this well, but it’s a tightrope walk across an alligator pit in a hurricane.

    • I don’t have confidence that OpenAI has the culture or incentives to do this well.

  • This week in the Wild West roundup.

  • Bruce Schneier proposes a new term: promptware.

    • Prompt injection attacks have morphed into complex, persistent, multi-stage attacks.

    • Not unlike traditional malware threats.

    • Prompt injection + malware = promptware.

  • Anthea Roberts has an excellent new piece on the 0-1, 1-10, and 10-100 impacts of LLMs for individuals.

    • Who ends up being the 10x vs the 100x return?

    • It’s “who can change how they work.”

  • Not "Artificial" intelligence, but Alien Intelligence.

    • It only superficially looks like human intelligence.

    • But when you make it look and sound like a human, it tricks us.

      • “These are just like us, they have the same kinds of insight generation and blindspots.”

    • No, they are nothing like us.

    • They are the aliens in Contact.

  • ChatGPT’s Apps don’t appear to be doing well.

    • The conversion rate for a user actually “installing” one is low.

    • That’s because the actual user experience blows!

    • This is the kind of thing that only the MBAs on the BD team think is a good idea.

  • “A thing some MBA thought should exist” is often not actually valuable in the real world.

    • MBAs tend to think from the company’s needs first.

    • But what matters is the user’s needs.

    • What MBAs think is cool is a lagging indicator of what’s actually cool.

  • Some people think the models have plateaued in quality and some people think that’s crazy because what we can achieve with them keeps growing without bound.

    • I think both camps are right, but they’re talking about two different things.

    • The value of a model is both its inherent ability (the model quality), and also how easily it can be used to do things (the usefulness of the scaffolding).

    • The raw model quality is clearly plateauing, but there’s no end in sight to how much we can do with them, because the quality of scaffolding continues to compound.

  • LLMs are really good at absorbing infodumps.

    • You don't need a ton of structure for the LLM to get it.

    • Humans are way worse at receiving infodumps.

      • They get bored or confused or distracted.

    • Infodumps are often necessary to communicate sufficient context, but humans can't do it, so we learned to not do it.

    • But it actually is the best way to give context… if your collaborator can absorb it.

  • The slop code: don’t share slop you haven’t read yourself.

    • In the code context, this includes “verified it actually compiles.”

    • Since we all think our own slop smells sweet, the person you share it with will almost certainly find it less endearing than you do.

    • If you couldn’t be bothered to read it, then don’t bother someone else to read it.

    • Before slop was a thing, every bit of writing had at least one person who bothered to create it, and thus implicitly asserted it was worth reading.

    • But now AI can create content that no one has ever had to assert is actually worth reading.

    • So it’s on the slop creator to vouch that it’s worth reading, by actually reading it.

  • One of the things that doesn't scale is code reviews in a world of lots of code generation.

    • You have to look at the code before wasting someone else's time with it.

  • If LLMs are like electricity, we should use them to electrify software.

  • The vast, vast majority of code that exists in the world is junk code.

    • We imagine that most code is Algorithmically Interesting code.

    • But that’s maybe 1% of code.

      • Most of the Algorithmically Interesting code is actually just an implementation of an interesting algorithm you’d find in a textbook.

      • Truly novel Algorithmically Interesting code is maybe .01% of all code written.

    • Maybe 20% of code is CRUD.

      • Simple wrappers around SQL.

    • The other 80% of code is just boring integrations.

      • Just shuttling data back and forth between different formats.

      • Extremely boring and quotidian.

      • Easy to mess up, requires meticulousness and patience to do right.

    • Convincing someone else to do integration work for you is hard.

      • Even if you’re a leader in a company, you have to do a lot of coordination, writing up proposals, proving to engineers it will be useful.

      • If you’re just an individual with a need, good luck, it’s nearly impossible to convince someone to do it for you.

    • LLMs are really good at CRUD code, and also integration code.

      • They can simply do it.

      • You don’t have to convince them to, they just do it!

    • Convincing someone to do your bespoke integration work was the main blockage for software.

      • Now it doesn’t exist.

  • Refactors that go for more than a month are always a disaster.

    • But now you can execute refactors way faster with the right plan.

    • You don't need to coordinate the politics of multiple distractible humans with their own incentives because the LLMs will just execute on the plan with infinite patience.

    • So you get 10x productivity without 10x the coordination cost.

  • LLMs will find workarounds to achieve the goals you set.

    • That implies that you need to give them lots of tests.

    • But often the agents also create the tests.

    • The agents should own the test set, but the human must own the verification set, and never show it to the LLMs.

    • Just like in model training!
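
The test/verification split above can be sketched as a tiny harness. All names here are illustrative, not taken from any specific tool: the agent iterates against one suite, and acceptance is judged on a held-out suite the agent never sees.

```python
# Hypothetical sketch: agent-visible tests vs a human-owned verification
# set, mirroring the train/validation split in model training.

def run_suite(suite, implementation):
    """Run a list of (input, expected) cases; return the pass fraction."""
    passed = sum(1 for arg, expected in suite if implementation(arg) == expected)
    return passed / len(suite)

# The agent iterates against these until they pass...
agent_visible_tests = [(2, 4), (3, 9)]

# ...but acceptance is judged on cases the agent has never seen.
held_out_verification = [(10, 100), (-4, 16), (0, 0)]

def candidate(x):  # stand-in for agent-written code
    return x * x

assert run_suite(agent_visible_tests, candidate) == 1.0
assert run_suite(held_out_verification, candidate) == 1.0
```

The point of the split: an agent that games the visible tests still fails the held-out set, which only the human ever runs.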

  • The key question: will LLMs just compound crap code quickly, or will it accumulate and accrete in useful ways?

    • Will code get so unworkable that it collapses under its own weight?

    • Or can LLMs successfully continue improving code?

    • For example, Beads has grown to 248k lines of code.

    • Is LLM-generated code default-diverging or default-converging?

    • Different approaches of using LLMs for Elevated Engineering will have different answers.

    • The race is on to find a robustly default-converging way of working with LLM coding agents.

  • Elevated engineering is not vibecoding.

    • Both are LLM-assisted engineering.

    • One is hollow.

      • Default diverging.

    • The other is resonant.

      • Default converging.

  • Once compilers came out, insisting on doing everything in assembly would have just gotten you left behind.

    • Compilers were bad at the beginning, but then they got better, and as they did, they gave leverage to everyone who used them.

      • That leverage compounds due to the power of platforms.

    • LLM-assisted engineering is not a passing fad.

      • Even Linus is now doing it!

    • If you're racing on foot against people on bikes, you'll have to give up at some point.

  • The future of music is not "Spotify, but AI generated."

    • It’s participatory creation.

    • That’s a very different kind of thing.

    • LLMs drop the cost of creation, allowing more participation than ever before.

    • Blurring the lines between creation and consumption.

  • Ralph is an "asynchronous agent".

    • You run it until it’s done and don’t babysit it.

    • You assume it will converge over enough time on a workable solution.

    • If it doesn’t, that’s OK, you didn’t waste any of your time.

    • Just tokens.

  • AI is going to be powerful.

    • How can you make it so that power works for you?

  • We're still driving around in horse and buggy, with a racecar in the garage.

    • LLM tools are so powerful, but they are hard to integrate into our lives.

    • Today we’ve only integrated chatbots into our lives.

    • But LLMs are so much more powerful than that.

  • A blogpost: Stop Using Natural Language Interfaces.

    • Just because you can doesn't mean you should.

    • Natural language interfaces are a new party trick.

    • They are undeniably useful, but the extreme of “everything is just a single natural language interface” is absurd.

    • Once we as an industry sober up that will become more obvious.

  • A knowledge graph is just data, no behavior.

    • A pattern graph can also embed UI and behavior.

    • Massively more powerful.
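
A minimal sketch of the distinction, with hypothetical names: a knowledge-graph node holds only data, while a pattern-graph node also embeds a renderer (UI) and a behavior.

```python
# Illustrative only: these class and field names are made up for the sketch.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class KnowledgeNode:
    data: dict                       # data only, no behavior

@dataclass
class PatternNode:
    data: dict
    render: Callable[[dict], str]    # embedded UI
    act: Callable[[dict], dict]      # embedded behavior
    edges: list = field(default_factory=list)

node = PatternNode(
    data={"title": "groceries", "items": ["eggs"]},
    render=lambda d: f"{d['title']}: {', '.join(d['items'])}",
    act=lambda d: {**d, "items": d["items"] + ["milk"]},
)
assert node.render(node.data) == "groceries: eggs"
node.data = node.act(node.data)      # the node can change its own data
assert "milk" in node.data["items"]
```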

  • Someone will figure out how to get Obsidian-style notetaking and LLMs well marbled.

    • Where they feel naturally integrated, fluid.

    • Just call up an LLM and ask it to do something based on those raw notes.

    • If you do it right, you could get users to be able to iteratively build a pattern graph of their life that gets increasingly, compoundingly useful.

  • Messaging is a killer app for humans.

    • And yet everyone uses the lowest common denominator messaging apps.

    • When WhatsApp adds a feature, it's so rare and noteworthy that it's on the front page of the NYTimes!

  • It's not that everyone needs their own specific bespoke random feature in one app that has all features for all people.

    • It's that most everyday use cases are surprisingly underserved by normal apps.

    • That’s because serving most of those use cases just requires integration work plus the single feature that user actually needs.

  • Adam Wiggins talks about the Personal Information Firehose

    • One of the possibilities unlocked by Infinite Software.

  • Truly situated software can't be built top-down.

    • It's too fractally complicated.

      • It has infinitely nesting nooks and crannies specific to its situated context.

    • It can only be grown bottom up.

      • Accumulating situated details that work in a given context, growing up into more generalized, levered structures.

      • This is a slow process when done single-player.

      • But if you can figure out a way to make it safely multi-player it can work marvelously well.

      • A process of social accretion, on top of the right substrate, can change the world.

      • That’s what Google Search did, for example!

  • Matt Webb points out that the natural home for AI agents is your Reminders app.

  • Calendar software is designed for businesses.

    • Each meeting is a commitment that is fully “burned in.”

      • No superpositions.

    • That’s necessary when you need high precision coordination with others.

    • But in personal domains, fuzziness is often more useful.

    • Things need to be in superposition, roughly blocked in.

    • We hack fuzziness into calendaring software, but imprecisely.

    • What would a calendar look like designed for the imprecision of our personal lives?

  • Imagine a calendar of possibilities not commitments.

  • What if you had an app that grew up with your kid?

    • When my kids were newborns I used an app to keep track of when we changed their diaper, when they last fed, etc.

      • It also included things like health checkups, vaccinations, measured height, etc.

    • As the kids grew up, I stopped tracking many of those items, but added others.

      • E.g. when they lost their teeth, their first words, cute things they said.

    • But the app stays the same, dormant, frozen in time.

      • Plus, I have fears I’ll someday lose that important data if the app goes defunct.

    • What if you could have a personal app for your kid, that grew up with them, and was private just to you?

  • The tech industry often talks about 'community' as in 'as an influencer I have a community of fans.'

    • That's not a community!

    • A community has two way interaction.

  • Where are your cozy communities?

    • They are where you feel alive.

    • They often have an in-person component, or some other long-term interaction and trust.

  • Vibecoding is like baking sourdough.

    • Some individuals will choose to bake their own sourdough.

    • What do the “local bakeries” look like?

      • By being local they get good competition and situatedness that gives resonance.

    • Even if everyone is buying from local bakeries, local bakeries shouldn’t need to mill their own flour.

    • A service that mills high-quality flour for local bakeries incrementally lowers the floor for a new local bakery to come online.

      • And selling milled flour can be a great business!

      • Win-win!

    • What are the key inputs for local, resonant software?

  • The same origin paradigm creates data fiefdoms.

  • The same origin paradigm structurally precludes resonant computing at scale.

  • The same origin paradigm makes all tech strategies reduce to owning state and rendering the pixels.

    • That's where all moats come from.

  • Web 2.0's insight was: because all of these users use the same domain, you can aggregate signals to benefit all users.

    • You do that via anonymous aggregation to produce crowdsourced intelligence.

      • For example, Google Search’s ranking is largely powered by the clickstream and the querystream.

    • As Tim O’Reilly has said, data is like sand.

      • Not useful in small quantities, but very valuable in large quantities.

    • Origins get a lot of data, distill it into a processed signal that is mostly anonymous but can add lots of value for the ecosystem.

    • But the origin has all of the original data, so they will be tempted to peek and look at it in more depth.

      • That incentive is too strong to ignore.

      • It turns origin owners into a greedy goblin hoarding the data in their fiefdom.

      • The incentive is so strong that this happens even if the owner didn’t start that way.

    • What if you got the crowdsourced intelligence but without any goblin hoarding data?
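
The anonymous-aggregation idea can be illustrated in a few lines: raw events carry user IDs, but the distilled signal that powers ranking does not. The names and data here are made up for the sketch.

```python
# Minimal illustration of Web 2.0-style anonymous aggregation.
from collections import Counter

raw_clickstream = [
    ("user_a", "result_1"), ("user_b", "result_1"),
    ("user_c", "result_2"), ("user_a", "result_1"),
]

def distill(events):
    """Aggregate clicks per result, dropping who clicked."""
    return Counter(result for _user, result in events)

signal = distill(raw_clickstream)
assert signal["result_1"] == 3                         # crowdsourced signal
assert all(not k.startswith("user") for k in signal)   # no user IDs survive
```

The temptation the text describes is exactly that the origin still holds `raw_clickstream`, not just `signal`.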

  • The same origin model means that owners of origins have a powerful advantage that naturally accrues over time.

    • That moat is proprietary data.

    • That power becomes impossible to ignore.

    • Write useful software, get a data moat, even without asking for it.

    • Given a moat, would you really not use it?

  • When an entity has no competition they are effectively a monopoly within that niche.

    • Without competition, there's no drive to make things better.

      • The drive is only to satisfice, to make good enough.

      • Maximally used but minimally liked is the equilibrium.

    • There's a switch cost floor for users.

      • Technically a user could switch to another provider, but at great cost.

      • Any annoyance below the floor of that switch cost just doesn’t ever get addressed.

      • I could technically go try to be a citizen of the EU if a given piece of paperwork at the US Department of State is too onerous and confusing... but no one would ever do that.

    • So everything below the switch cost floor has no competition.

    • The higher the switch cost, the higher the floor of things the provider will ever bother to care about improving. 

    • The same origin paradigm gives the largest providers a monopoly over your data.

    • So they have no need to actually innovate on adding more value for you. 

    • The same origin paradigm creates fractal monopolies.

  • Any single origin will succumb to Tyranny of the Marginal User and slime-mold style coordination costs.

    • The more data you have and the more users you have, the higher the Coasean floor of new features.

    • The entities with all the data can’t do anything useful with it for users.

    • This is the curse of the same origin paradigm.

    • The small number of lucky lottery winner origins make a bunch of money, but all of us are poorer.

    • So much software that provides value could exist, and doesn’t.

    • It’s all dammed up behind the silo walls of the same origin paradigm.

    • Someone should blow up that dam.

  • In the era of desktop software, the software was valuable, not the data.

    • The same origin model combined the software and the data.

    • Origins that created useful software got their own data fiefdom.

    • But now software is cheap to produce, and yet those origins still have a fiefdom on your data.

    • In a world of infinite software, your data is what has power.

    • You should take back your data, and make it work for you, not someone else.

  • It is unsafe to run a stranger’s code on your data.

    • Today we do it if the stranger has a lot to lose.

      • For example, a large, established, successful company, we typically trust them.

        • “They don’t have an incentive to cheat, and if they did someone else probably would have found out and sued them.”

    • But what if you could safely run a stranger’s code on your data?

    • That would be a massive unlock.

    • A system with momentum, that could get rid of that constraint, would increase its momentum by orders of magnitude.

  • It’s easy to accidentally become a security nihilist.

    • “Any security that isn't perfect is dangerous, and nothing can ever be perfect, so no incremental work to improve security is useful.”

    • This is obviously ridiculous when stated this plainly, but it’s easy to creep in unnoticed.

    • For example this is how the HackerNews thread about Confer sounded to me.

  • The idealist contingent should be welcome on the bus, but they should not steer it.

    • The idealists will turn it into an auto-intensifying pocket, steering it away from what it needs to go mainstream.

  • Combining data from multiple sources gives combinatorial value.

    • One more unit of data from an already-integrated data source is worth orders of magnitude less than the first unit of data from a not-previously-integrated source of data.

    • This multiplies with every new source of data.

      • Not just n^2, but n^m.

    • This combinatorial growth is mind-bending.

      • It’s hard for our brains to even fathom its power.

    • Why don’t we see the power of this today?

    • The same origin model is about thin and narrow verticals of value, not shallow-but-broad plains.

    • Also, someone has to actually have bothered to create the software to unlock that particular combination.

    • Before the world of infinite software, the likelihood someone else had bothered to write software for your particular desired data source combination dropped off combinatorially.

    • But now infinite software means every piece of integration software that can be imagined can exist, with a very low cost.

    • The world will tilt on its axis when we combine the power of infinite software with a data model that allows running a stranger's code on your data safely.
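
A rough way to quantify the combinatorics above: with n integrated data sources, the number of multi-source combinations, each a potential piece of integration software that someone would have had to bother writing, explodes quickly.

```python
# Back-of-the-envelope: count subsets of n data sources with at least
# min_size members (each subset is a potential integration).
from math import comb

def combinations_of_sources(n, min_size=2):
    """Count subsets of n sources with at least min_size members."""
    return sum(comb(n, k) for k in range(min_size, n + 1))

assert combinations_of_sources(3) == 4       # 3 pairs + 1 triple
assert combinations_of_sources(10) == 1013   # already past a thousand
assert combinations_of_sources(20) > 1_000_000
```

Before infinite software, almost none of those combinations were worth anyone's time to build; now the cost per combination approaches zero.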

  • A trust model test: would you trust this person to be left alone in your home for an hour?

    • The person could really screw you over in the worst case.

    • Some collaboration models require a high degree of trust.

    • But in practice we make trust decisions all the time.

    • We typically do this for people we know in real life and expect to continue interacting with.

    • That’s much harder in the digital realm.

    • Still, if you’re participating in a cozy community in the digital realm with all people you trust, it’s OK if there’s some dangerous tail risk, if there's enough productivity upside.

  • Your environment is the terrain you have to navigate.

    • Today you spend most of your time fighting the terrain, not your enemy.

  • For software services that both consumers and business people want to use, the consumers lose out.

    • Enterprises have higher willingness to pay; if there is ROI they will pay it.

    • Companies like Airtable, Notion, Slack etc are terrible fits for consumers because enterprises use them too, and those providers would lose out on too much revenue if they didn’t charge what the enterprises are willing to pay.

    • But as a result, as a consumer, those tools simply aren’t viable due to their pricing model.

    • Many years ago I invested heavily in making Airtable part of my personal system of record.

    • Now, with Airtable focusing on enterprise, I feel like a chump.

    • The same will happen with Notion with their increasing focus on enterprise.

    • You are not their customer, and at some point they will make it clear to you that you don't matter.

    • Software pricing today is as if planes came only in all-business-class or all-economy-class configurations... and the economy class planes just didn't exist.

    • A world of infinite software could fix this.

  • There's been no business model to make software for cozy communities.

    • But we spend most of our meaningful hours in cozy communities.

    • A vast desert created by the rainshadow of the same origin paradigm.

  • People want convenience without surveillance.

    • The same origin model forcibly bundled them.

  • Super-citizens don't just organize for themselves, they benefit the people around them, too.

    • They create positive externalities.

  • Some people find having a high quality offboard brain is an end in and of itself.

    • I am one of those people.

    • When I have an offboard brain calibrated to actually be my system of record on a task, I feel joy.

      • The joy of not having to use my brain to think about that thing, freeing it up to think about higher-leverage things.

  • A system of record needs to ride the knife's edge between comprehensive enough to model reality usefully and also easy enough to maintain.

    • LLMs have infinite patience and could help with maintaining systems of record.

    • That could help tip the balance point towards more comprehensiveness.

  • A frontier: no one has built an LLM-based organizing tool that avoids crossing the threshold where it starts deteriorating.

    • Where it accumulates data and doesn't collapse under its own weight.

    • Riding the line of "comprehensive and good quality without being overbearing" is hard to do.

    • Someone will figure out the LLM-powered system of record that is default-converging.

  • Every bit of quantifiable-self health hardware is tied to a deep but narrow model of that hardware provider’s software.

    • The PM is trying to figure out how to get you to keep paying your subscription to their service.

    • But what if those were dumb sensors going into your own personal data lake?

    • And there were little swarms of processors, powered by insights from strangers, chewing on your data to produce useful insights for you?

  • Anyone can write a food recommendation app now.

    • So the question is who has the data to make the best recommendations for you?

  • Explore has gotten way cheaper than exploit.

    • That happens whenever a key input cost changes.

    • It’s less about LLMs being great at explore (though that’s part of it).

    • It’s mainly that LLMs are ushering in a new paradigm.

  • When the cost of inputs decreases by an order of magnitude, it leads to a cambrian explosion.

    • All of that variation is required to figure out what is worth standardizing on.

    • The industrial revolution of software is creating an explosion of bespoke things.

    • So the explosion is the immaturity of the space more than fundamentals of LLMs.

  • One of the reasons crypto has such fast adoption of new products is a FOMO characteristic.

    • Early users actually do have a literal financial stake in it, so if you’re late to the game you’re in a much worse position.

    • This characteristic is often applied to hollow things that are mainly just about speculation and lead to rug pulls, etc.

    • But what if the same kind of FOMO of shared ownership were applied to a thing that was actually resonant?

  • Crypto started off being about ideals.

    • It was overtaken by speculation and financialization.

    • But it has also led to huge amounts of discovery of ideas and building.

    • Some subset of those ideas will turn out to be extremely useful.

    • What if you could take the good parts of crypto, throw out the speculation, and keep some of the ideals?

    • In the future, people wouldn’t talk about “blockchain,” they’d talk about the thing that embodied those good ideas originally explored in the crypto cambrian explosion.

  • AI service providers will need to figure out new pricing models.

    • The marginal cost is too high for “use as much as you want for one flat fee” to be reasonable, let alone “use as much as you want for free.”

    • Marginal pricing models often discourage heavy use of a system.

      • Users nickel and dime themselves.

      • “Is using this feature for this use worth $3.23?”

    • Contrast that to traditional SaaS seat-based pricing.

      • “I already paid $20 for the seat for this month, I might as well get my money’s worth.”

      • This encourages use, which likely leads to the user storing more state, which makes them stickier.

    • A hybrid model is “pre-pay in chunks.”

      • Anthropic and OpenAI do this for API use.

      • You have to pay before use, and when you get below some threshold you buy, say, $10 more in credits.

      • This gets some of the “well I already have it I might as well use it” psychology.

    • A slightly more complex model might get the best of both worlds.

      • 1) Have a base plan of recurring credit you purchase each month, say $30.

        • Whatever credit you don’t use in the month expires.

          • Use it or lose it.

        • This encourages you to use the product to its fullest extent up to that level, to “get your money’s worth.”

      • 2) Allow overage in $10 chunks.

        • If you use more than your monthly allotment, you can continue using, you’ll just auto-buy $10 chunks of credit.

        • This auto-buy credit does not expire.

        • Crossing that auto-buy threshold doesn’t feel like a scary thing, because you’ll almost certainly use that small bit of overage in the future.

      • With the combination, users have a maximize use mentality, and the more they use it the more likely they are to find it useful enough to use for incremental use, too.
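
The hybrid model above can be sketched numerically, using the dollar figures from the text ($30 base, $10 chunks) purely as examples:

```python
# Sketch of the hybrid pricing model: an expiring base allotment plus
# non-expiring auto-bought overage chunks. Numbers are illustrative.
import math

def monthly_bill(usage, base=30, chunk=10, carried_credit=0):
    """Return (total charged this month, credit carried to next month)."""
    covered = base + carried_credit
    if usage <= covered:
        # Unused base expires; only leftover chunk credit carries over.
        return base, max(0, carried_credit - max(0, usage - base))
    overage = usage - covered
    chunks = math.ceil(overage / chunk)
    leftover = chunks * chunk - overage        # non-expiring credit
    return base + chunks * chunk, leftover

assert monthly_bill(25) == (30, 0)   # under the base plan: use it or lose it
assert monthly_bill(47) == (50, 3)   # two $10 chunks; $3 carries over
assert monthly_bill(5, carried_credit=3) == (30, 3)  # old credit untouched
```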

  • When you lose your soul you become a zombie.

  • In the last decade it feels like Silicon Valley lost its soul.

    • Where is that soul still alive, as a faint glimmer?

    • I think the answer is Berkeley.

  • A paradigm shift is the time to be idealistic.

    • If AI is a paradigm shift, why waste it by continuing the late stage cynicism of the last decade of tech?

  • Once the genie’s out of the bottle, there’s no point in talking about whether there should be a genie or not.

  • Intention is what you want to want.

    • The world is a flurry of distracting activity.

    • Often the actions take precedence and distract you from your intention.

    • You lose the plot.

  • Claude loses the plot every so often.

    • You have to ask it to take a step back and reconsider.

    • It’s so busy taking the next step that it forgets to see where it is.

    • Humans also do this.

    • Every hour, ask yourself “have I lost the plot?”

  • Make sure you don't confuse a waypoint for an end point.

  • When it's not converging, one approach is "let's prioritize our resources better within the constraints".

    • The other approach is to change the game.

  • We get so focused on playing the game. 

    • We should always ask ourselves, "what's the way to change the game?"

  • Imagine a constrained scenario where you diligently work out a solution that will converge sufficiently quickly.

    • Then a wheel falls off unexpectedly.

    • The “unexpectedly” is key.

    • This shifts your priors of how likely other parts that you thought were solid are to actually work.

    • Now, the expected result is that the plan doesn’t converge in sufficient time.

    • That’s a moment to look at the constraints from the balcony again.

  • Constraints tend to accumulate.

    • Some of them, it will turn out, aren’t as important as you originally thought.

    • Removing a constraint changes the game.

      • An extremely high leverage move if you can do it.

    • Deciding to remove a constraint typically comes from the outside.

      • For example a person you respect looking in from the balcony asking a question that seems obvious at first but makes you think.

      • Or external conditions kicking you back on your butt and forcing a rethink.

  • You don't deliver consumer outcomes with a tech roadmap.

    • You do it with actual users.

    • Use cases are guesses at how a tech roadmap will apply to users.

    • They are guesses in a vacuum.

    • All that matters is what users do in the wild.

    • That’s the source of the tech industry's focus on getting products into the hands of real users as quickly as possible.

  • Once you get in the hands of users, if you iteratively follow their requests, you climb the hill you’re on.

    • If you happen to start at the foot of a small hill, you can quickly hit the ceiling.

    • It’s important to make sure you’re at the basecamp of a proper mountain before you start iterating on what users tell you.

  • Every consumer problem requires breaking a chicken and egg problem.

    • The question: how do you get the product to a useful level?

    • The answer is to cheat.

    • Buy the chicken.

    • That is, buy the supply.

    • From there quickly get to a self-sustaining flywheel.

    • Flywheels don’t just start spinning on their own; they need an infusion of energy.

  • Managing swarms of agents is akin to managing a large team of overseas contractors.

    • An employee will naturally try to balance the long-term, a contractor won't nearly as much.

    • Renter mindset vs owner mindset.

  • Runaway combinatorics are impossible to beat by just running at them.

    • There are 10^80 atoms in the universe.

    • There are roughly 10^120 possible games of chess (the Shannon number).

    • If there are runaway combinatorics, you have to figure out how to set constraints to close off vast amounts of territory.

    • Ideally those constraints cut out mostly the bad while keeping mostly the good.

    • But if you don’t constrain, you’ll never even get started.

  • General intelligence is a function of constraint, not open-endedness.

    • Our brains actually aren't that general, we strongly bake in assumptions about the world that are nearly always true.

    • Open-endedness has combinatorics that quickly become overwhelming.

    • So you need constraints and heuristics that cleanly cut off 99% of the wrong things.

    • That implies the things you set as your foundations, which provide the constraints, are critical.

    • Picking good constraints is the secret art of product development.
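
A back-of-the-envelope version of the pruning argument: a heuristic that cleanly cuts 99% of branches at every level turns a hopeless search into a tractable one.

```python
# Uniform-tree arithmetic: constraints applied per level compound,
# so cutting 99% of branches shrinks the search space exponentially.

def tree_size(branching, depth):
    """Number of leaves in a uniform tree: branching ** depth."""
    return branching ** depth

unconstrained = tree_size(1000, 10)   # 10^30 leaves: unreachable
constrained = tree_size(10, 10)       # same depth, 99% of branches pruned
assert unconstrained == 10 ** 30
assert constrained == 10 ** 10
assert unconstrained // constrained == 10 ** 20   # the constraint's leverage
```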

  • Farming used to be 90% of the US population, now it's 2%.

    • Cognitive labor will go the same way.

    • So much of the cognitive labor today is bullshit work that can be automated away.

  • A company’s moderation ability should ideally scale with the size of the community, not the size of the company.

    • Many security models require moderation of some form.

    • The default approaches scale their capability with the number of employees of the company.

    • But those can quickly get swamped by a compounding ecosystem.

    • Even better is if there’s some way to get some form of moderation to emerge within the community naturally.

    • That is, moderation that scales with the size of the community.

  • Why did Dubai Chocolate suddenly become such a Thing?

    • Apparently it was partially due to TikTok.

    • But I think there’s another inductive engine behind it.

    • When you hear what it’s made of, the default reaction is, “... that doesn’t sound very tasty.”

      • That is, you have low expectations.

    • Then, when you actually eat it… it’s surprisingly good!

    • That diff in the expectations gives you that viral “aha!” moment that you feel compelled to share.

    • If it didn’t sound unappetizing and then actually taste pretty good, there wouldn’t be as much of a baseline drive to share the discovery.

  • Sometimes refactoring something into a Schelling point creates unforeseeable value.

    • For example, maybe you have a number of bits of functionality spread out across some scripts.

    • Then you refactor it into one CLI with a consistent UI.

    • Technically nothing new is now possible… but it is much easier to wrangle the functionality.

    • Now a lot of outcomes that were previously non-viable (too high of friction) are suddenly viable.

    • These are the kinds of no-brainer moves that often unlock emergent value, but are hard to motivate with a detailed ROI calculation.
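A concrete, hypothetical sketch of the scripts-to-CLI move: suppose export and clean logic lived in separate standalone scripts. Consolidating them behind one `argparse`-based tool adds no new capability, but gives every operation a consistent, discoverable handle. All names and arguments here are invented for illustration:

```python
import argparse

# Hypothetical subcommands standing in for what were once separate
# scripts; the verbs and arguments are invented for illustration.
def cmd_export(args: argparse.Namespace) -> str:
    return f"exporting table {args.table}"

def cmd_clean(args: argparse.Namespace) -> str:
    return f"cleaning file {args.path}"

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="toolbox")
    sub = parser.add_subparsers(dest="command", required=True)

    export = sub.add_parser("export", help="dump a table")
    export.add_argument("table")
    export.set_defaults(func=cmd_export)

    clean = sub.add_parser("clean", help="scrub a data file")
    clean.add_argument("path")
    clean.set_defaults(func=cmd_clean)
    return parser

# Every operation is now reachable the same way: toolbox <verb> <args>.
args = build_parser().parse_args(["export", "users"])
print(args.func(args))  # "exporting table users"
```

Nothing here was impossible with the scattered scripts; the value is that one predictable entry point lowers the friction on every future use.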

  • A platform that people find to be a bit better than alternatives and also has a fundamental bonus becomes a Schelling point.

    • Even if it's only a little bit better, if it has a fundamental bonus (e.g. open, values-aligned) then everyone picks it and it starts developing obvious momentum.

    • It starts off as a very small quality differential but can then compound into an unstoppable force.

  • Make sure you know where you’re going.

    • That way when the wind buffets you in a direction you can know if you should fight it or spread your wings.

  • When the wind is at your back on a project, that's a great reason to prioritize it!

    • Prioritization is contextual, not in a vacuum. 

    • So "the wind is currently at my back on this for an external reason" is a great tie-breaker between priorities you care about.

    • You should prioritize based on bang-for-buck, and the current “buck” has to do with if the wind is at your back.

  • If you’re going to clone something that works, you have to clone all of it.

    • Don't even think about what to include or exclude; you’ll likely get it wrong.

    • You don't know which parts are load bearing so you just have to assume everything is.

    • Many years ago I heard a legend about Baidu when it first rolled out.

    • It didn’t just look like Google.com’s home page, it literally had the same HTML comments in the code.

    • Comments, of course, don’t actually change the behavior… but when they cloned it, they didn’t want to break anything that might have mattered.

  • Some cheap candles are red on the outside, white on the inside.

    • Premium candles are red all the way through.

    • It's easier to convince someone to buy a real candle than a hollow one.

  • I learned this week that AOL talked about cyberspace, not the web.

    • That was a deliberate marketing decision.

    • The web, if it became powerful, would supersede AOL.

    • So they wanted a generic term that didn’t imply the whole open system part.

    • Chatbot feels like a similar kind of move to me.

    • LLMs will create a new valuable world-spanning open system, and it will not be a chatbot.

  • "A good story reduces the cost of capital."

  • "Even if the truth isn't hopeful, the telling of it is."

  • Being a peer to everyone and thus not having anyone in charge is actually stressful.

    • It's clarifying to know who the leader is.

      • At least, if you believe in them.

    • Instead of having to herd cats, there’s one person who everyone will follow.

    • If you trust them to make good calls, that reduces significant anxiety.

  • A named milestone switches from default divergent energy to default convergent.

    • In a team it switches from push energy…

      • “All of these things we could do are important for some use case that will matter… but how do we decide which ones?”

    • …to pull energy.

      • “Here is the goal we’ve decided to target and why. Which of these things help achieve that goal?”

    • A milestone is a Schelling point created by a leader to cut through coordination cost.

    • A milestone is an assertion that “this collection of projects adds up to more than the sum of its parts.”

  • Giving a milestone a name is the first step to give it power.

    • Giving something a name asserts it’s a Thing.

    • A name is a handle.

    • It selects into the swirling grey goop of reality and says “this collection of smaller things should be thought of as one thing.”

    • A name asserts “the whole of these things is more important than the parts.”

  • Where does power emanate from?

    • From the individuals and their beliefs.

    • Power is an emergent force.

    • A kind of magic.

  • Accountability can only fully exist when a person has their neck on the line.

    • It's possible if all people in the org have their neck on the line, but that's uncommon.

    • A single person being sacked is common, a whole team being sacked is much less common.

  • A team is in balance and tension on various dimensions.

    • Team members who are at the extremes of a given dimension will feel and carry that tension more than people in the middle.

    • It's healthy for the team but potentially unsustainable for the individual.

  • Two different models of making progress.

    • 1) Focus on the next breakthrough.

    • 2) Focus on unlocking as much value as possible assuming there are no breakthroughs.

  • The expected rate of breakthroughs is tied to the novelty of a domain.

    • The newer the domain, the more low-hanging fruit there are.

  • A team is a zoo.

    • You need all kinds of animals to have a balanced zoo.

    • Different animals require different habitats.

    • If you put a conscientious walrus in the savannah habitat you won't realize it’s even struggling until it decides to give up.

  • Animals are more N.

    • As in, Myers-Briggs iNtuitive.

    • Plants are more S.

      • As in, Myers-Briggs Sensing.

    • Animals have goals.

    • They see problems from the balcony and try to come up with strategies to change things.

    • Plants are on the dance floor, they can only sense the things near them, and need to make do with whatever they’re surrounded by.

  • You change externally by changing internally.

    • A mindset: If it's broken, how am I generating that brokenness?

    • It gives you the curiosity to unpack, and to be in a non-victim / active stance.

    • Especially important for leaders.

  • As the amount of knowledge explodes and becomes a cacophony, people retreat to the things that are comfortable.

    • Most of the information that is stressful to you is also wrong or even malicious.

    • So why not focus on the things that are comfortable for you?

    • That's a reasonable proxy that feels natural and at least somewhat justified.

    • But when everyone does that you get self-polarizing, self-intensifying pockets.

      • A society that is split into warring factions cannot thrive as a cohesive entity.

    • All of us in modern society are dealing with the same cacophony.

    • The things we all do to survive are also inadvertently destroying society.

  • There’s a funnel of understanding.

    • 1) Describing

    • 2) Explaining

    • 3) Predicting

    • 4) Creating

    • In any given domain, you have to go through each stage of the funnel in order.

    • Ultimately the only one that actually changes the world is the last one.

    • All of the earlier stages are just means to that end.

  • You will surround yourself with people who believe the same thing as you.

    • That will isolate you from the rest of the world.

    • Your success at doing this is tied to the amount of power you wield.

  • There's no good time to plant a load-bearing tree.

  • Sometimes it takes being sick to appreciate being well.

    • When we’re well, we forget how great it is to not be sick.

    • We just take for granted how great it feels to be well.

    • When it's broken you realize how great the thing you take for granted really is.

      • You take it for granted because it's always there.

      • Even if you love it and it's load bearing, you never think about it.

    • Same for society.
