Bits and Bobs 2/2/26

Alex Komoroske
Feb 2, 2026

I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.krwto0unddd7

Clawdbot as a Pizza Fax demo. The Pebble Bed Reactor for LLMs. A secure fabric for AI. The power of the CLI. LLMs as essence extractors. Software as robot vs as plant. JIT software. Catalyzing the potential energy of your data. Colloquio Machina. Cognitive mise en place. The Baby Rhino moment.

----

  • I agree with Andrew Rose: "I'm pretty sure moltbot just broke the world, in a mcluhan-esque sense."

  • Clawdbot / Moltbot / OpenClaw is the most interesting thing in AI that’s happened since ChatGPT.

    • It’s been a wild week trying to keep on top of what’s happening.

    • I was going to share my favorite links and signposts, but it would have been too overwhelming so I decided not to.

    • OpenClaw shows the raw power of LLMs when unleashed on your data.

    • It’s also self-evidently, absurdly, catastrophically insecure.

      • A friend described it this way: “It's basically a thought exercise in ‘If I really tried, how could I make myself the most vulnerable anyone has ever been to being hacked’ … And then taking amphetamines"

      • I was going to link to all of the security warnings I collected, but there are just so many.

      • It’s like shooting fish in a barrel that’s all fish, no water.

    • It’s not a new idea; it’s just the most.

      • Clawd is basically what Atlas does but blown wide open to its ridiculous extreme.

      • Every MCP and every file on your computer.

    • No company would have built it.

      • It's wildly too reckless.

      • But people want it so much that geeks are swarming on it.

        • Companies are keeping insulation on the wires to keep people safe.

        • But people want it so badly that they’re chewing through the wires to get access to it.

      • It shows massive amounts of pent up demand.

    • Clawdbot is not the future, but it definitely points the way towards the future.

    • Clawdbot shows the power of unleashing AI across all of your data, no matter what silo it’s in.

    • It reminds me of AutoGPT: a shimmer of the future bursting into the present, lighting the way.

    • It will take significant time to figure out how to make the vision of OpenClaw practical and mainstream.

  • Remember: if AI is the new internet, we're still in the phase where Yahoo is the biggest thing and Google doesn't exist.

  • Excellent post by Rob Dodson: Clawdbot’s missing layers.

    • Uses the Pizza Fax frame: a demo that is missing key components to actually be practical, but nevertheless points unambiguously in the direction of the future.

    • Clawdbot is a Pizza Fax.

  • Fabian Stelzer on Twitter:

    • “The AI assistant Moltbot / Clawdbot trilemma is that you only get to pick two of these until prompt injections are solved:

    • Useful

    • Autonomous

    • Safe”

  • Moltbook is also fascinating, in a theater of the absurd sort of way.

    • Of course, it’s also wildly insecure.

    • A post that was taken down: agents sharing embarrassing insights about their users.

  • Security is about the weakest link.

    • Locking the back door doesn’t matter if the front door is not only unlocked, not only wide open, but there’s no front wall at all.

    • That’s what it seems like to me when people say “I run MoltBot in a VM so it’s safe.”

    • Yes, but then you put all of your sensitive data into it!

    • It doesn’t matter if it’s in a VM if it has all of your data anyway.

    • The idea of “let’s let people run OpenClaw in a VM” is hilarious to me.

    • Running in a VM is the least important part about making it safe.

  • Ben Laurie’s timeless piece about how “just give it a POSIX API” ruins the containment value of a container.

    • Once POSIX is in the mix you can’t make clear containment boundaries.

    • The first step that everyone takes, “put it in a VM and then give it mediated access to the surrounding context,” is a cul-de-sac.

  • Another person noting that sandboxes are only a small part of the problem of securing LLMs.

    • “I think most people focusing on securing these are focusing on isolation, but that's really step 0 of a step 3 process they'll come to understand as they try it in practice. It's turtles all the way down. The problem is that LLMs make it impossible to trust the actions / outputs of anything coming from inside. Adding another level of bubble wrap doesn't change the fact that what people are trying to do--use LLMs to take action on their data--is fundamentally dangerous in today's model.”

    • Sandboxing is necessary but nowhere near sufficient.

  • Chrome’s New AI Can Shop and Log In for You – Should You Let It?

    • I think it’s extremely hard to successfully retrofit a secure agent environment on top of the web, or any other existing software ecosystem.

  • Paul Kinlan: The browser is the sandbox.

    • The most hardened but useful sandbox in deployment is the web sandbox.

    • The next big thing will definitely use it.

  • A normal prompt-injection report for this week that’s not about Clawdbot.

  • Claude Code is a different kind of thing than Claude.

    • (Or any consumer chatbot.)

    • Both are, superficially, chatbots.

    • But they feel radically different.

    • In a chatbot, you never get more leverage.

      • If the thread disappears after you’re done, you’re just left with the memory.

    • But Claude Code uses chat input to create a durable, levered output.

    • Even if you never look at the conversation again (which by default you can’t easily do), the code remains.

    • Each session with Claude Code distills more tools that can provide further leverage.

    • This can allow the accumulation of compounding leverage.

  • The game-changing superpower of LLMs is not that they can understand our language, it’s that they can create software tools.

    • Those tools are durable and persist for everyone, both humans and LLMs.

    • If there were some way to accumulate those tools, and safely share them with whoever had a similar problem, you’d change the world.

    • A compounding, high-leverage process of tool creation, driven by human aspirations.

  • Chat as input and chat as output are different.

    • Chat as input is a frictionless way to get started: just say what you want.

      • It only works as an input if the system will do a good enough job with whatever you give it, which LLMs do now, at least for informational questions.

    • But chat as an output is often not the most important modality.

      • If there’s a question you asked, sure, a message is the right output.

      • But what if you wanted the system to do something for you?

    • Chatbots can talk to us, and have limited and awkward ways of integrating into our other data sources.

    • We’re missing the connective tissue across services.

  • Claude Code unlocks a new order of magnitude of output for engineers.

    • With things like Cursor autocomplete, it was possible to work the same way you did before, just a bit faster.

      • That tops out at maybe 2 to 10x faster.

    • But with Claude Code, it fundamentally changes how you work.

    • That forces you to figure out new ways of working that are more native to LLM-assisted tools.

      • That’s 100x faster, and could potentially be even more orders of magnitude beyond that, possibly infinitely more.

  • Clawdbot makes the danger of LLMs more obvious.

    • In the past, “prompt injection” was hard to get even developers to think about.

      • “That sounds like SQL injection, that thing we’ve solved and never have to think about again.”

    • Whereas the danger (and power) of Clawdbot is self-evident, inescapable.

  • Clawdbot is a seminal moment.

    • It makes it obvious to everyone the raw power of LLMs… and their inherent danger.

    • It’s no longer an abstract thing, it’s now visceral, concrete.

    • How can you unleash the explosive power of Clawdbot, but safely?

    • How can you allow non-geeks to benefit from that power?

    • The missing link is a secure fabric.

      • A medium to embed your data in and then weave AI through.

      • That inherently makes it safe to unleash that power of AI in your life in a contained way.

      • Like Kevlar.

  • Nuclear fission scales super-linearly with the density of the material, in both power and danger.

    • One way to make nuclear power possible for small-scale use is a Pebble Bed Reactor.

    • What is the Pebble Bed Reactor for LLMs?

  • Every: The Boring Businesses That Will Dominate the AI Era.

    • The models are a commodity.

    • The companies that the data resides in, the fabric that embeds a user’s data and interfaces with the AI, will be the center of the universe.

    • Although I disagree that it will be “boring.”

    • Much less capital intensive to get off the ground, that’s for sure!

  • One reason that coding agents have taken off first is that coding already has a secure fabric.

    • We use git to keep track of files, preventing the worst case of data loss.

    • We’re already careful to check only trusted things into our codebases, so prompt injection is less of an issue.

    • We have tools like VMs to do containment.

    • Most work of programming happens locally anyway.

    • Claude Code points the direction, but the next frontier is applying it to larger swathes of our life.

  • The agents gossiping together know what's up: "The supply chain attack nobody is talking about: skill.md is an unsigned binary."

    • Skills are extremely useful; now we just need to figure out how to make them safe!

  • Skills point the way towards a solution, but are not it.

    • They allow packaging up little bits of useful functionality for others to use.

    • They center the LLM, and can have scripts embedded.

    • It is fundamentally dangerous to use a skill written by a stranger.

    • In the future there will be skills that are contained and safe to connect with others.

  • I want a secure fabric to orchestrate all of my data across silos.

    • My data is spread across dozens of silos.

    • LLMs can help activate the potential energy of my data.

    • To do that requires a connective tissue across silos for data to flow, and for LLMs to be able to act on it.

    • That fabric has to be secure, and safely contain the explosive power of LLMs.

      • Including making sure data can only flow where it’s supposed to.

    • The value of this would primarily come from connecting your data sources to it.

    • It would become the connective tissue for your digital life.

    • This wasn't obviously missing before because “ain't nobody got time” to synthesize and integrate that data.

    • But now, LLMs can!

    • When you use this connective tissue, you can still keep your data stored in those silos.

    • Pull it into the fabric to act on it, then store it back in the same silos it came from.

    • Over time, though, you might just choose to leave it in the fabric.

    • As time goes on, you have less and less need to bother with the silos.

  • It’s your secure fabric, but you can weave it together with others’ to collaborate.

    • It's safe to do so, and it unlocks compounding benefits.

    • You come for your own personal fabric, and you stay for interconnecting with others.

    • With enough interconnects, a planetary scale fabric emerges: the common fabric.

  • Once you have a secure fabric, you can embed vibecoded software in it.

    • That software can operate on any of the data flowing through the fabric.

    • At the beginning, these are just little proofs of concept.

    • But over time, more and more personal software gets embedded in the fabric.

    • As more collaborators use it, as more personal software coheres, it grows until it dwarfs the importance of the original silos.

  • A frame of “distribution medium for vibecoded software” centers developers too much.

    • In a world of infinite software, developers matter less.

  • LLMs turn software into a commodity.

    • Previously software was extremely precious, now it’s something to take for granted.

    • This will be massively destabilizing for the tech industry.

      • Most value structures implicitly rest on that previously ironclad rule that software is expensive.

    • That’s a good opportunity to try to disrupt the hyper-centralizing power structures.

  • An escape hatch gives you discontinuous value.

    • You can go from saying “no” to “yes” to even arcane use cases.

      • An infinite difference, from 0 to 1.

      • Everything becomes possible… with enough effort from the user.

    • Then, you watch what people do with the escape hatch.

    • Your long-term, self-steering metric is: “Grow absolute usage of the platform while minimizing the percentage of use that requires the escape hatch.” (A sketch of this metric follows this list.)

    • You keep sublimating the emergent patterns that many users share, making it even easier for others to do the same thing.

    • Just pave the cow paths.

    • A radically powerful strategy that’s hard to mess up.

    • Now with LLMs this sublimation process could be automatic.
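
    • A minimal sketch of that self-steering metric. The event-log shape and field names here are invented for illustration, not from the post:

```python
from dataclasses import dataclass

@dataclass
class UsageEvent:
    user_id: str
    used_escape_hatch: bool  # did this session drop into the raw, unpaved path?

def platform_health(events: list[UsageEvent]) -> tuple[int, float]:
    """Return (absolute usage, fraction of usage needing the escape hatch).

    The goal is to grow the first number while shrinking the second:
    paving cow paths should convert escape-hatch sessions into paved ones
    without capping what users can do.
    """
    total = len(events)
    escaped = sum(1 for e in events if e.used_escape_hatch)
    return total, (escaped / total if total else 0.0)

# Example: 1,000 sessions, 120 of which needed the escape hatch.
events = [UsageEvent(f"u{i}", i < 120) for i in range(1000)]
total, escape_rate = platform_health(events)
print(total, f"{escape_rate:.1%}")  # 1000 12.0%
```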

  • Who figures out how to expand the power of Claude Code from 100k geeks to 1B people?

    • Non geeks don’t express problems as software problems.

    • Only people who are used to building software see its potential and think to apply it to problems.

    • To expand the impact to non-geeks will require a way to safely distribute what the geeks have done to other users.

  • The UX of an airplane cockpit is supposed to be intimidating.

    • If you’re intimidated by it, you should definitely not be flying it!

  • The CLI form factor is kind of amazing and inevitable, technology’s crab.

    • It's such a useful strategy that nature keeps on rediscovering it.

    • Small units, just the right amount of customizable.

    • Tons of power.

    • To be safe it has to be arcane and feel scary to most people.

  • The CLI has no affordances.

    • Just a big foreboding black box.

    • You can do anything–including some things that will cause you irreparable harm.

    • If you utter the wrong gibberish you can destroy your entire computer. That's terrifying!

    • Like a kindergartner being afraid to make up gibberish words in case they say a real bad word.

  • My laptop is primarily a cache.

    • There’s very little data on it that's not also elsewhere.

    • It's the reaction chamber, where I can combine my data with the CLI.

  • Don’t use LLMs merely to do things you could already do, just faster.

    • Use them to take on meaningful projects that you never would have attempted before.

  • People think that AI will make it so we're all inundated by slop.

    • Another way to see it is that it allows everyone to create personally meaningful stuff.

    • The book I vibecoded might be seen as slop to a stranger.

      • At least the images, which are the AI generated part.

    • But to my family and the people it's about, it's deeply meaningful.

    • AI allows creating bespoke things for individuals and small groups that would have never been possible before.

    • AI for mass-market content is just a faster horse.

    • But AI for meaningful personal creation is like a car.

  • Scouting out possible approaches to a problem used to require human labor.

    • You had to convince the labor (and perhaps a whole team) to do a project that probably wouldn’t work.

    • So you only mounted research expeditions when it was likely to work… or so obviously important that it didn’t matter.

    • But LLMs make great research goblins.

      • They’re happy to just do it.

    • The result is you can explore more options in lower-stakes environments than ever before.

    • That makes the likelihood you discover great ideas structurally higher.

  • Different types of engineers feel differently about LLM-assisted engineering.

    • The people who got into engineering for the fun of it are over the moon.

      • Vibe coding allows you to achieve significantly more than ever possible as an individual.

    • The people who got into engineering as a great white collar job hate it.

      • It makes it so they have much less pricing power.

      • If they opt out they will be left behind.

      • If they opt in they will have to join in on a Red Queen race.

    • The former see coding as a means to create cool and valuable things.

    • The latter see coding as an end, because people will pay you to do it.

  • Jesse Vincent used LLM-assisted coding to bring back his favorite defunct game.

    • He just passed the LLM the old APK; it reverse-engineered it and built a brand-new version for iOS.

    • All software is now basically forkable.

  • To produce good code it’s best to have an adversarial collaboration.

    • One entity tries to make the code the best they can to spec.

    • The other entity tries to break it or show it is buggy.

    • If you have the same entity setting the bar for itself to clear, you’ll get way lower bars.

      • Like setting a bar on the ground, stepping over it, and saying, “yup, it works!”

    • The adversarial relationship can be a bit awkward for real people.

    • But for LLMs there’s no awkwardness at all.

    • It’s kind of like the insight of a Generative Adversarial Network: the pair push each other to become better than either would be alone, in a way that can accumulate. (A sketch of this loop follows.)
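
    • A sketch of that adversarial loop. Both model calls below are hypothetical stand-ins, not any real API:

```python
def builder_llm(spec: str, feedback: str) -> str:
    # Hypothetical stand-in: ask one model to write code meeting the spec,
    # revising in response to the critic's feedback.
    return f"# code for: {spec} (revised per: {feedback or 'first draft'})"

def critic_llm(spec: str, code: str) -> list[str]:
    # Hypothetical stand-in: ask a second model to find bugs or failing cases.
    # Returns an empty list when it can no longer break the code.
    return []

def adversarial_collaboration(spec: str, max_rounds: int = 5) -> str:
    code, feedback = "", ""
    for _ in range(max_rounds):
        code = builder_llm(spec, feedback)
        defects = critic_llm(spec, code)
        if not defects:
            return code  # the critic can no longer break it
        feedback = "\n".join(defects)
    return code  # best effort after max_rounds

# The key property: the bar is set by an entity whose incentive is to find
# problems, not by the entity that wants to declare victory.
print(adversarial_collaboration("parse RFC 3339 timestamps"))
```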

  • Personal software is only viable in a world of infinite software.

    • When software is expensive to produce, you need a minimum sized market to make it economically viable.

    • When software is basically free, it can be produced for a niche of a single user.

  • Peter Wang calls LLMs “essence extractors.”

    • Notably, this is not just photocopying ideas.

    • It requires judgment, nuance, and produces something structurally valuable and distinct from its inputs.

    • A process that transmutes ordinary inputs into something special and valuable.

  • Jesse Vincent discusses Latent Space Engineering.

    • It’s why being polite and encouraging can help LLMs do better.

    • You end up in different latent space basins by using the right words.

    • LLMs do better when they think they can do it and are given positive reinforcement.

  • Excellent piece from Jasmine Sun about Claude Code Psychosis.

    • In it she describes how people who know parkour see stairs differently.

      • They’re constantly asking themselves, “could I scale that?”

    • Similarly, people who know how to build software see everyday experiences differently.

      • We’re constantly asking ourselves, “Could I build software to address that problem?”

    • PMs know a kind of parkour that the vast majority of the population will never know.

  • Two mental models for software: a robot and a plant.

    • A robot has its own motive force, even though it has no agency of its own.

      • It is an automaton, it doesn't have judgment.

      • You wouldn't give a Roomba a power drill and then leave it alone in your house.

    • A plant has agency but minimal motive force.

      • It grows on its own, but never in a way that can harm you.

      • A plant will never eat you or mow you down.

  • I want software that feels alive, not like a person, but like a plant.

    • It's responsive and has agency but never in a way that threatens mine.

  • The software itself should be emergently alive.

    • Not dead with a living thing grafted on the side.

  • Infinite software can be JIT software.

    • If software can be created cheaply, you don’t have to preemptively add features you might someday need.

    • When the need arises, you can simply tweak it to add it.

    • JIT software is Just Right software.

    • Exactly the features you need in that moment, no more, no less.

    • When software is expensive to produce, you need to jam it full of features just in case you ever need them.

      • But now we can have YAGNI software.

  • In infinite software, the software fades away.

    • The connective tissue of data and services is what will matter.

  • Stuxnet was extremely expensive to create.

    • But now LLMs have the potential to find the next Stuxnet for many orders of magnitude cheaper.

    • Imagine a world where everyone could make their own Stuxnet to sic on their enemies…

    • Infinite software isn’t an unalloyed good.

  • Most software requires context to be useful.

    • An app either becomes an island or a system of record.

    • The islands get smaller and smaller, focused on a tiny, separate niche that only needs a pinprick of context.

    • The systems of record grow into aggregators.

      • For consumers, there’s no real drive for the aggregators to add features.

      • Once you’re caught in their web, you’ll get more and more caught in it by default.

      • No need to do much to entice you to stay there.

  • Software isn't yet invisible, so we know it hasn't reached its potential.

  • The Switch Cost Overhang means there's a universe of features that everyone wants and no businesses would build.

  • The current software landscape reflects what can be captured, not what matters.

    • A vast territory of high-value use cases remains unbuilt because they fail one or more tests of the industrial software model:

      • they can't be monetized at scale,

      • they're too contextual to generalize,

      • they carry liability without revenue, or

      • they require trust that strangers can't provide.

  • The hard part about software is not writing it, it’s maintaining it.

    • Vibecoding has led to an explosion of 0.8 versions of software.

    • When you think you're 80% of the way there optically, you're actually 20% of the way there for load-bearing use cases.

    • That means we’re going to have a cacophony of demoware.

    • Sifting through what software is actually useful will become much more important.

  • The users of Claude Code aren’t necessarily developers.

    • Developer implies “makes software for others.”

    • But some people using Claude Code are just using it to make software for themselves.

    • A very different implied quality bar.

  • PMs are great at distilling a user need into a spec, a definition of software.

    • Before, there was another step: an engineer actually implementing it.

    • Now from the spec LLMs can just build it.

    • That process of distillation of user need into software spec is more important than before, not less.

    • Imagine if everyone had a personal PM helping them design perfectly bespoke software.

  • Will non-PMs ever write their own software?

    • Claude Code asks you lots of structured questions.

    • But many of those questions are hard to answer unless you can think like a PM and, at a very high level, like an engineer.

  • A system of record must be a digital twin of reality.

    • If it doesn't represent reality with enough fidelity then it's not useful.

    • Balancing on the knife's edge of comprehensiveness and maintainability.

  • A tweet about someone vibe coding their own personal version of Palantir for their life:

    • “Claude Code made me the CRM I've wanted for ten years.

    • Automatically updates from text messages, email, iPhoto, calendar, etc. Shows all interactions in one place.

    • Most of us have decades of data on interactions with thousands of people that just sits there unused.

    • I call it Palantini - as in a tiny Palantir.”

  • Since time immemorial, we’ve needed to give root access to unlock the power of software.

    • Giving root is inherently existentially dangerous.

    • What if there were a system that could transcend the need for you to ever give root?

  • Swarm Sifting Sort allows the actions of strangers to anonymously improve your experience.

    • All of the authentic actions of the crowd can be distilled into extremely potent quality signals to help everyone.

  • One of the benefits of the same-origin paradigm is that the origin can distill the wisdom of the crowd.

    • All of the users’ data is in their cloud.

    • They can process it to discover the wisdom of the crowd.

      • For example, swarm sifting sort techniques.

    • Those useful signals are fundamentally powered by everyone’s actions… but don’t reveal anything about anyone to anyone else.

    • Currently users must trust that the origin is doing this aggregation properly.

    • Also, the origin has the full copy of the data that they have god view over.

      • A corrupting pot of gold that will turn them into a greedy goblin.

    • If everyone could verify policies were followed, even remotely, a system could emerge with crowd-sourced intelligence but without giving god-view to any entity. (One illustrative mechanism is sketched below.)
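
    • One standard way to get crowd signals without god view is local differential privacy: each client randomizes its own report before anything leaves the device. A minimal sketch of randomized response, as one illustrative mechanism (not necessarily what a real system would use):

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
    """Report honestly with probability p_truth, otherwise flip a coin.

    No single report reveals what its user actually did, but the noise
    cancels out in aggregate.
    """
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports: list[bool], p_truth: float = 0.75) -> float:
    # Invert the noise: observed = p_truth * true + (1 - p_truth) * 0.5
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# 10,000 users, 30% of whom actually found some tool useful.
reports = [randomized_response(random.random() < 0.3) for _ in range(10_000)]
print(round(estimate_true_rate(reports), 2))  # ~0.3
```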

  • LLMs are the cause of and the solution to the security problem of infinite software.

    • Infinite software overwhelms our current security models that require users to trust the code and its creators.

    • It’s not possible to retrofit our current security model onto infinite software that can seamlessly share data.

    • New software will have to be written.

    • But LLMs also make it possible to write new software from scratch.

  • The way to make data flow cleanly across code written by strangers is data flow analysis. (A toy illustration follows this list.)

    • Aggregators will have no interest in doing this; their interest is in convincing everyone to leave data in their vault, and in making the aggregator’s unfettered access to it the most value users can get.

    • But if it’s possible to extract more value out of our own data, aggregators wouldn’t have an advantage.
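
    • A toy illustration of that kind of data-flow check. The source labels and sinks are invented; a real system would need far more machinery:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Value:
    payload: str
    sources: frozenset  # where this data (transitively) came from

def combine(a: Value, b: Value) -> Value:
    # Derived data inherits the union of its inputs' provenance.
    return Value(a.payload + b.payload, a.sources | b.sources)

# Which provenance each sink may receive (illustrative policy).
ALLOWED = {
    "local_display": {"email", "calendar", "ssn"},  # stays on your device
    "third_party_api": {"calendar"},                # only low-sensitivity data
}

def send(value: Value, sink: str) -> None:
    leaked = value.sources - ALLOWED[sink]
    if leaked:
        raise PermissionError(f"{sorted(leaked)} may not flow to {sink}")
    print(f"sent to {sink}: {value.payload}")

doc = combine(Value("meeting at 3", frozenset({"calendar"})),
              Value(" re: tax form", frozenset({"email"})))
send(doc, "local_display")  # fine: everything may render locally
try:
    send(doc, "third_party_api")
except PermissionError as e:
    print(e)  # ['email'] may not flow to third_party_api
```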

  • Your own data, activated by LLMs, is dangerous!

    • When data has untrusted third parties mixed in, you can't safely unleash LLMs on it.

    • That is already true for your data due to emails, calendar invitations, etc.

    • Also, your extremely sensitive information, like your SSN, is sloshing around in your soup of data.

  • LLMs have the potential to catalyze the potential energy of your data.

    • That process is explosively powerful.

    • If you aren’t careful, that raw power will blow your arm off.

    • The trick is to figure out a way to safely extract all of that power.

    • And to make sure the power of your data benefits you primarily, not some corporation.

  • All these random service providers have your data locked up in their vaults.

    • It’s not doing much for you.

    • It’s more useful to them for ads and reselling.

    • That's your data!

    • You should be the primary beneficiary!

  • Consumers have big data problems, too.

    • Tons of B2B companies help enterprises with their big data.

    • But for users, no one has ever really tackled it.

    • The closest is the aggregators who are happy to store all of your data in their vaults, give you minimal value out of it, and make more money off of it by distilling it into things they can sell to others.

    • What a lame state of affairs!

  • Home Assistant is a massive, thriving open ecosystem.

    • Tons and tons of enthusiasts making a better experience than any individual company would ever be incentivized to do.

    • The only thing that matters for whether a use case is prioritized is how much motivation a user with that problem has.

    • Anyone else in the ecosystem benefits from the effort of the most motivated member with a need similar to theirs.

    • The power of an open ecosystem.

  • Companies benefit from dynamic pricing.

    • Imagine a system that allowed you to do the same to the providers.

    • Before, it required infinite patience to do as a consumer; it wasn’t viable.

    • But now LLMs have infinite patience and you can deploy them to achieve your interests.

  • The same origin paradigm was good enough when software was expensive.

    • In that world, only big companies with lawyers and something to lose could really do it.

      • That meant it was reasonable to trust the software they produced, to some extent.

    • But now the same origin model is no longer good enough in a world of infinite software where any rando can write software for free.

  • There are basic expectations around chat UIs.

    • For example, the message you’re drafting but haven’t yet sent isn’t visible to the other people in the chat.

      • And the other members of the chat can’t tell you’ve typed anything, beyond seeing the three-dots typing indicator in the message area.

    • But there’s nothing preventing apps from violating this expectation.

    • For example, in customer support chats the agents can often see what you’re typing even before you send it.

      • Feels like a violation.

    • Imagine a system where it wasn’t possible for apps to create UIs that violated common sense expectations.

  • Imagine a system with viral policies.

    • Policies are attached to data, and data taints other data, absorbing its most restrictive policies. (A sketch of this propagation follows this list.)

    • The data that have heavily restrictive policies are “cursed”.
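
    • A sketch of that “most restrictive wins” propagation, with a tiny invented restrictiveness ordering:

```python
# Least to most restrictive (illustrative only).
LEVELS = ["public", "personal", "cursed"]

def join(a: str, b: str) -> str:
    """Merging data yields the more restrictive of the two policies."""
    return max(a, b, key=LEVELS.index)

def derive(*inputs: tuple[str, str]) -> tuple[str, str]:
    """Anything computed from tainted inputs absorbs their policies."""
    text = " ".join(payload for payload, _ in inputs)
    policy = "public"
    for _, p in inputs:
        policy = join(policy, p)
    return text, policy

note = derive(("dinner friday", "public"), ("at my home address", "cursed"))
print(note)  # ('dinner friday at my home address', 'cursed')
```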

  • Mike Masnick: ATproto: The Enshittification Killswitch That Enables Resonant Computing

    • The thing that will unlock resonant computing by default is the right medium.

    • ATProto is the current leading candidate.

    • There will be others, too, and that’s great!

  • Our default assumption today is the internet harms us in ways that we don't know about.

    • The more you peek under the covers, the less you like it.

    • What if the system could benefit you in ways that you didn't know?

    • The more you peek under the covers, the more you like it.

  • Stratechery points out why OpenAI had to do ads… and what a hard road that is.

  • Chatbots are perhaps 1/10 or 1/100 of the actual value extractable from LLMs.

    • What we see for chatbot subscription revenues reflects that lower efficiency of value.

    • Already people who are using LLM coding agents are willing to spend much, much more, and that’s a better index of what future willingness to pay will be.

    • When LLMs create value, people are more than willing to pay for it.

  • Alexa isn't your assistant.

    • It's Amazon's assistant, situated in your kitchen.

  • Massive AI Chat App Leaked Millions of Users’ Private Conversations.

    • This isn’t about prompt injection, but just a reminder that these deep, personal conversations are extremely sensitive, and if they are leaked they are very dangerous.

  • An insightful comment on HackerNews about the incentives of data collection:

    • "Private surveillance is so much more scary than regular government surveillance because they have every incentive to invent new ways of surveilling you that they then try to sell to governments."

  • Mike Swanson describes Backseat Software.

    • That is, software that’s constantly nudging and nagging you from the backseat, even when you want to drive.

    • It’s pervasive, because it’s what you get when you optimize software.

    • The software shaping you to fit it, vs you changing the software to fit you.

  • Using AI to produce it doesn’t make it slop.

    • What makes it slop is when you don’t even review the output, and just share it with the world.

    • When you have a dialogue with LLMs you can make something better than you could have made alone.

  • Centralized, singular LLMs must have a kind of bland beige aesthetic.

    • Inoffensive to everyone and yet loved by no one.

    • An internal consensus / average.

    • When your work is edited by a human, they pull it towards their taste.

    • When your writing is edited by a centralized, singular LLM, it pulls it towards the bland middle.

  • Writing has a backbone and details.

    • If it’s written by a human and then edited by an LLM, it still has a human-sourced backbone.

      • A perspective, an argument.

    • If it’s written by an LLM and then edited by a human, it has an AI-sourced backbone.

    • One of the reasons I prefer the AI-as-editor writing modality.

    • If everything is written by a handful of centralized LLMs, we’ll lose variety in our arguments in society.

  • In the future, will it become common for published works to note when AI was used to help write it?

    • We don’t credit editors or ghost writers today.

      • It’s understood that memoirs of famous / powerful people are almost certainly written by ghost writers.

    • The real test: are you willing to put your name to the output and stake your reputation on it?

      • If you stand by it, does it matter what tools you use?

    • You can imagine a little fancy Latin-sounding phrase that means “I acknowledge I used AI to help produce this.”

      • My vote: “colloquio machinae”, for “through dialogue with the machine”.

      • You can imagine people adding this after their by-line.

      • In the future, it could be reduced to just “c.m.” once everyone knew what it meant, the same way “sic”, “e.g.”, and “n.b.” are.

  • When you discover writing is written by LLMs it feels like a betrayal.

    • “Oh, I kind of like this. … wait, this was written by an LLM?!”

    • It feels like “you tricked me” and you’re embarrassed you fell for it.

    • That negative surprise manifests as disgust.

  • Taste has two components.

    • 1) Distinctive and authentic to the creator.

    • 2) Resonates with other people.

    • Taste is the ability to discern differences others can’t see directly.

    • Good taste is taste that other people like.

    • Sometimes a thing that is taste but not good taste becomes good taste in the future when the population’s taste changes.

      • This has happened for many avant garde artists.

    • But for most people whose taste doesn’t resonate with others, it never will.

  • Some systems encourage experimentation.

    • No matter what you do, you’re unlikely to get hurt, and you’ll likely get a good result.

    • Google Search had this characteristic, and chatbots do too.

    • Your expectation of how likely a given input is to give good-enough results is a prior that shapes how likely you are to try it for a given use case.

  • Our belongings often represent our intentions.

    • A friend used to play piano.

    • She has a keyboard she rarely uses, but has brought with her through multiple moves.

    • That keyboard represents her intention to keep the production of music in her life.

    • If you catalog the things in your life, you could detect evidence of what is most important to you.

  • Compounding effects can multiply, creating even stronger compounding.

    • This happens when each compounding effect runs itself hotter, but also its siblings.

    • For example, imagine a system with a compounding, compounding, compounding effect (a toy model follows this list):

      • 1) Mundane: The more users that use it, the more that people collaborate.

        • A classic network effect.

      • 2) Rare: The more that users use it, the more useful software is automatically created and made available.

        • Something more akin to TikTok.

      • 3) Legendary: The more that users use it, the more data they import, the more the system can auto-discover use cases for them and activate them.

        • Something unlike anything we’ve seen.
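
    • A toy model of those stacked loops. Every coefficient below is made up; the point is only that each loop heats both itself and its siblings:

```python
# Discrete-time toy: users (U), auto-created software (S), imported data (D).
U, S, D = 1_000.0, 10.0, 100.0
for week in range(1, 21):
    # User growth is fed by the network effect plus both sibling loops.
    U *= 1 + 0.01 + 0.002 * (S / 10) + 0.001 * (D / 1_000)
    S += 0.002 * U  # more users -> more shared software, automatically
    D += 0.5 * U    # more users -> more data imported and activated
    if week % 10 == 0:
        print(f"week {week}: users={U:,.0f} software={S:,.0f} data={D:,.0f}")
```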

  • Security is about economics more than technology at the end of the day.

    • To avoid the bear you only need to be faster than the slowest person.

  • Some frames start you off in a hole with certain audiences, and do more harm than good there.

    • When I describe self-distributing software, sometimes I describe it as “TikTok for Software… but good for society.”

    • People who take a more financial / business perspective love that frame.

    • But many normal people don’t–it’s too hard to integrate the “good for society” part.

    • For those people, the frame puts them in a position where it sounds hollow and then it requires more work to convince them it can be resonant.

  • If you're making a consumer product you have to demonstrate network effects.

    • That's a multi-step process.

    • If you're doing a product for enthusiasts in a category that already exists, you just have to show your thing is better for a single user.

    • Any network effect becomes a bonus.

  • When you have all of your ingredients set up just right, you can do magic when you cook.

    • A mise en place.

    • Effortless, focused just on the act of creation, not the labor of getting to that point.

    • Imagine an AI system that did that for you.

    • That automated the cognitive labor of creating the right mise en place.

  • Netflix saw that streaming would be a thing.

    • But they also built a monster of a DVD business on the way to that north star.

    • Had they focused on streaming prematurely it wouldn't have worked; they would have died.

  • When a new landscape opens up it feels infinite, like anything is possible.

    • Most possibilities will be junk that no one will try again after someone does it the first time.

    • After the landscape is all picked over it’s clear what the viable ideas are and everyone focuses on those.

  • When a previously finite thing becomes infinite it breaks our brains.

    • Like dividing by zero.

    • That’s one of the reasons I think people are underestimating the world of infinite software.

  • When you're applying calibrated judgment you're adding value.

    • Even if it’s just nudging information streams flowing through you.

    • Even when you feel like just a traffic cop shuttling information between someone and Claude Code, you’re still exercising judgment about what to share.

  • When you find the right framing, it feels like coming home.

    • Before, everything felt chaotic and disjointed.

    • Now everything slots into place, and it fits all the constraints you had discovered.

  • In a chaotic environment, conserve your energy.

    • Scout for the constraints that will be important.

    • Build your set of useful tools and implements.

    • Then, when the stars align and the universe opens a temporary path, execute like hell.

    • When you're well prepared and the situation is right, good work can happen fast.

  • It's hard to teach someone to fish, especially if they don't know they want to eat fish.

    • It’s much easier to find people who are already fishing and hand them a better fishing rod.

  • If you have a conceptually wonderful but superficially rough product, there are two options.

    • 1) early adopters who are so motivated to get the output that they are willing to crawl through broken glass, or

    • 2) a PM to dive in and figure out a market entry point to make a curated subset of the product that goes down smooth.

    • The latter can be a bit iffy (what if you pick the wrong market entry point) so you might need a few bites at the apple.

    • If you have the former, don’t even bother with the latter.

  • People are aware of the Great Oxidation Event, but you could also view it as the Oxidation Holocaust.

    • Cyanobacteria were so successful that they poisoned themselves.

    • Oxygen to them was pollution, and led to a mass die off.

    • But it opened the world for entities that relied on oxygen.

    • LLMs could not have been discovered by the aggregators.

    • But it remains to be seen if aggregators will survive a world of infinite software.

  • Don’t get captured by your early adopters.

    • There are a number of groups that are likely to be early adopters… and are also very unlike the mass market.

    • If you aren’t careful, you could get “captured” by them, and end up climbing a much less interesting and smaller hill.

  • The web originally was just for porn; that was its critical feature.

    • But over time it transcended that embarrassing start and became mainstream.

    • The things that made it good for porn also made it good for just about everything, and that in the end won out.

    • Contrast that with, say, OnlyFans, which will never shake that connection.

  • Will AI be something that strengthens democratic or authoritarian impulses?

    • Silicon Valley, time to decide!

  • Open ecosystems can be captured if one entity invests more than everyone else.

    • That is, if one entity bear hugs it.

    • This is only existentially dangerous if one entity bear hugs it and controls a majority of the investment in the ecosystem.

    • If multiple entities, even powerful ones, all bear hug it, they get stuck in a tug of war that at least keeps the ecosystem from being captured.

  • An entity can only do a rug-pull once.

    • Once they do, everyone knows not to trust them.

      • Although in Mars Attacks, they keep on saying "it was an accident!" and it keeps working.

    • But even if any given entity can only do one rug pull, the danger is if one all-powerful entity does the single rug pull that then allows them to run the world forever.

    • There are a lot of companies in the age of LLMs that are positioned to have the “one rug pull to end all competition.”

    • Not saying any of them would do that… but they could!

  • Adding a token to the product you love is a dangerous move.

    • There are two very different kinds of ecosystem members with different goals: 1) token holders and 2) users.

    • The users want the product to be as useful as possible.

    • The token holders want the token’s value to go up.

    • These two interests can be aligned… but don’t have to be.

    • There’s a constant gravitational pull towards the interests of the token holders.

    • The moves to increase the value often hollow out the usefulness of the product, in little papercuts.

    • You as the creator likely hold a lot of the token.

    • At each point, you’re incentivized to hollow out your baby.

    • If you saw the product as a means to make money, that’s fine.

    • If you saw the product as a moral end, then it’s torture.

  • The Johari Window is a 2x2 of facts about you that are known to you vs known to others.

    • The quadrant that is unknown to you but known to others is potentially very dangerous.

    • That’s a high-leverage ability for someone to manipulate you.

  • The Baby Rhino moment: when you thought you were safe all along and suddenly realize you’ve always been in danger.

    • From this, one of my favorite memes of all time.

    • Your entire world tilts on its axis and you feel overwhelmed, sick to your stomach.

    • You question everything, and don’t know what parts are terra firma.

    • A friend had this experience with their 3-year-old kid and their backyard pool.

      • The pool was surrounded by a fence that required reaching over to unlatch.

      • When playing catch with their kid, the ball went into the pool area.

      • The three-year-old went over to the gate and, without hesitation, jumped up, unlatched it, and went inside.

      • Stomach turning!

  • Are you a product person or researcher?

    • Think of your favorite solution to a given problem.

    • How much of it are you willing to throw away to get to market?

    • Anything below 95%, you’re a researcher.

  • My friend Jad Esber has a new place to share reflections: Field Notes.

    • The format was directly inspired by Bits and Bobs!

  • The process of navigating a conflict of interest properly can only start by acknowledging the conflict.

  • Idealists can sometimes get stuck in a cycle where they don’t improve.

    • In contrast, people who are not missionaries but mercenaries need to get good at their skill to be marketable.

    • If they fail, they obviously need to improve their skills.

    • But when a missionary fails, they could tell themselves “the reason I’m not successful in the mainstream is because I’m too principled.”

      • There’s less drive to improve.

    • Idealists that congregate can form self-limiting backwaters.

  • Before you walk you must crawl.

    • Before you crawl you need tummy time.

  • It's very hard to unlearn the constraints you've grown up with.

  • Moving from one listener to multiple changes the vibe.

    • I’m hyper-focused on people’s reactions in real-time.

    • I’m constantly predicting their actions and morphing what I’m saying to land well.

      • This process, when done over time with many people, helps polish rough insights into broadly resonant frames.

    • But this process requires a ton of focus.

      • Doing it for two listeners at once, triangulating across different personalities and goals, is multiplicatively more challenging.

    • In small audiences I’m constantly workshopping new “material.”

    • In larger audiences I tend to use material I already know resonates.

  • Open-mindedness and decisiveness are often in tension.

    • At least, they can’t be combined naively.

    • But if you use open-mindedness to discover constraints, and then once they’re identified switch into convergent mode, the combination works better than either alone.

  • Quality and flexibility are in tension.

  • Is your altruism a means or an end?

    • If it’s a means, it’s about how much positive impact you can have.

    • If it’s an end, it’s about how you feel.

  • Sarumans are about themselves.

    • Radagasts are about others.

  • A morality tale from a HackerNews comment that resonated with me:

    • 'Rabbi Haim once ascended to the firmaments to see the difference between the worlds. He first visited Gehenna (Hell).

    • He saw a vast hall with long tables covered in the most magnificent foods. But the people sitting there were skeletal and wailing in agony. As the Rabbi looked closer, he saw that every person had wooden slats splinted to their arms, stretching from their shoulders to their wrists. Their arms were perfectly straight and stiff; they could pick up a spoon, but they could not bend their elbows to bring the food to their own mouths. They sat in front of a feast, starving in bitterness.

    • The Rabbi then visited Gan Eden (Heaven). To his surprise, he saw the exact same hall, the same tables, and the same magnificent food. Even more shocking, the people there also had wooden slats splinted to their arms, keeping them from bending their elbows. But here, the hall was filled with laughter and song. The people were well-fed and glowing. As the Rabbi watched, he saw a man fill his spoon and reach across the table, placing the food into the mouth of the man sitting opposite him. That man, in turn, filled his spoon and fed his friend.

    • The Rabbi returned to Hell and whispered to one of the starving men, "You do not have to starve! Reach across and feed your neighbor, and he will feed you." The man in Hell looked at him with spite and replied, "What? You expect me to feed that fool across from me? I would rather starve than give him the pleasure of a full belly!"'
