Bits and Bobs 12/1/25

11 views
Skip to first unread message

Alex Komoroske

unread,
Dec 1, 2025, 10:58:07 AM (13 days ago) Dec 1
to
I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.oo3ktkon9hhs.

Intention vs attention. SaaSy CRUD. VibecodedThe humble hashtag. Responsibility laundering. Heaping epicycles on the geocentrism of the same origin paradigm. Resonant privacy. Brands as bank accounts. Dammed up data. Aggregator as a peculiar form of monopoly. Sleepwalking giants. Number go up. Emergently evil.

---

  • This week in the wild west roundup.

    • HashJack is a new indirect prompt injection technique.

      • It takes advantage of the fact that the content after a hashtag in a URL won’t lead to errors if it’s in a structure the page can’t interpret… but the LLM can see it just fine.

      • A natural place to inject malicious prompt injection instructions!

    • Google’s new AntiGravity IDE has a number of data exfiltration attacks.

      • I’m disappointed… Google normally has one of the best security teams in the industry.

      • How did they let this go out the door?

    • A universal AI jailbreak: make the prompts poems.

      • This just drives home that “make the LLM not get tricked” is a dead end.

  • NYTimes: "What OpenAI Did When ChatGPT Users Lost Touch With Reality

    • “In tweaking its chatbot to appeal to more people, OpenAI made it riskier for some of them. Now the company has made its chatbot safer. Will that undermine its quest for growth?"

    • Damned if you do, damned if you don’t.

    • Chatbots: strange game, the only winning move is not to play!

  • When an LLM agent gets going, it’s easy to accidentally egg it on.

    • “Solve this the proper way” gets it to a layer deeper.

    • But if it’s following a path that is already off course it can just make it more confused.

    • Each layer deeper is another layer of leverage on top of the previous layer.

    • God help you if the LLM was even a little confused at the layer above.

    • Before you know it you’re totally twisted up and the only move is to unwind it.

  • We should demand technology that serves our intention instead of hijacking our attention.

  • In the last decade in Silicon Valley, everyone just assumed that the consumer category was dead.

    • Every bit of territory was gobbled up by the handful of planet-scale aggregators.

    • Everyone else was left fighting over scraps, or subsisting on the thin gruel served up to them.

  • Silicon Valley's lost decade: consumer ceded to aggregators, software "innovation" meant SaaSy CRUD for some vertical niche.

    • We forgot software could be anything else.

    • Infinite software means we can dream again.

  • When you're stuck in a relentlessly dull world, you don't even realize it's dull.

    • Everything is gray.

    • Nothing stands out.

    • So you don't even realize anything could stand out.

    • Then you see something in color and it pops, instantly.

    • You can't not look at it.

  • Everyone thinks their own slop smells sweet.

    • But everyone else can sense what it actually smells like.

    • AI now makes it way easier to produce slop.

    • We’re going to be swimming in everyone else’s slop.

  • This week I published https://komoroske.com/writing-with-ai.

    • It’s a kind of disclaimer for how I use AI in my public writing.

    • The summary:

    • I write my bits and bobs by hand, sometimes using AI as an editor to bounce ideas off of.

    • Some other prose essays I publish use AI in the writer seat, fed with a tightly curated set of bits and bobs as input, and with me actively editing.

  • The humble hashtag was low-key brilliant.

    • It was a userland hack for precise string matching to make search and trending work.

    • The human writing the tag picks one they think others will think to use to find it.

    • Like the old Google image Labeler Game... everyone is trying to think of how other people will think to find it, and doing that, which is naturally convergent.

    • By the hashtags being in an explicit, non-accidental namespace, people can't accidentally tag something, so all tags are intentional to that string.

      • Maybe they don't know what other people will think of that string and aren't in on the joke, but they do know they tagged it.

    • This creates a concave system, where convergent results emerge automatically.

    • Best of all: no one had to design this system, it could just emerge!

  • I asked Claude to help me understand the power and importance of Smalltalk.

    • "Alan Kay conceived Smalltalk as part of a vision for ‘personal computing’ that was about more than just individual ownership of machines. 

    • It was about giving people computational agency—the ability to understand, modify, and create their own computational tools.

    • In this vision, computers aren't appliances that run apps—they're malleable media for thought, as customizable as paper and pencil."

  • Last week someone asked: ‘Isn’t ‘living software’ just… ‘software?”

    • To me what feels different is the meta-aliveness.

    • An ability to assemble itself.

    • In some ways it’s the same magic of turing-completeness, but applied to itself.

  • Problems that can be reframed as coding problems will be able to be easily tackled by LLMs.

    • LLMs are great at coding, and will only get better.

      • It’s so easy to create ground-truthed synthetic data that the major labs are doing a ton of it.

    • So if the problem can be reframed as a coding exercise, LLMs will do well on it.

  • There are a ton of companies focusing on tooling for vibecoding and making it easy.

    • If you took it for granted that good tooling will exist in all of the normal places due to that vibrant ecosystem, you could focus on a higher layer in the stack.

    • Focusing on how to distribute all of that vibecoded software, safely.

  • A vibecoded island has to keep on absorbing more functionality to be useful.

    • A high bar and also leads to it being over-fit to one use case.

    • But if they can be safely stitched together and composed they can be smaller and thus more likely to be useful to be composed in others.

    • The larger a piece of software, the less likely it is to be perfectly fit to your particular use case, just mathematically.

  • Imagine a platform that allowed deploying and safely sharing chunks of vibecoded code.

    • There’s a set of highly motivated people today using Claude Code in YOLO mode on local files.

    • If you had a system that allowed you to use the existing workflow but deploy/distribute the results, there would be three immediate benefits.

    • 1) Hosted cozy muti-player.

      • Even if you have little apps, if they’re all fully local then you can’t collaborate with people like a spouse.

      • That makes it difficult for them to actually become a system of record for even small tasks.

    • 2) Prompt injection mitigation.

      • Running Claude Code on YOLO mode, especially when pulling in software from other randos, is dangerous.

    • 3) Standing on the shoulders of the ecosystem.

      • If anyone else in the ecosystem has solved a problem you have, you can take their solution and build on it, instead of having to build it yourself.

  • Code isn’t content.

    • Content can be auto-distributed.

      • Think Search, or TikTok.

    • Content can’t hurt you.

      • At least, not directly.

    • Code can do useful things, which means it can hurt you.

    • But a system where you don’t have to trust code could treat code like content.

    • It could use the best practices for self-distributing content.

  • Most systems today require you to trust the code and the LLM’s responses.

    • A system that doesn't require you to trust either but still allows you to accomplish real things would be extremely powerful!

  • When users are presented with a question they can’t possibly answer in an informed way, it’s responsibility laundering.

    • Technically, they consent.

    • But they don't understand what they consented to, so the consent is hollow.

    • The same origin paradigm is littered with this responsibility laundering.

      • App install dialogs.

      • GDPR banners.

      • Permission dialogs.

    • This is downstream of the same origin paradigm never grappling with the fact that data is naturally viral.

  • When you solve a problem at the wrong level you add epicycles.

    • It feels like you’re making progress, like you’re laying down useful insights.

    • But actually each step you take is drawing you away from the fundamental insight.

    • That you’re trying to solve the problem at the wrong layer, and that the layer beneath needs a fundamental change.

  • Heaping on permission dialogs that are ever-more-precise is like adding epicycles to make geocentric models to work.

    • The answer is not more responsibility laundering epicycles.

    • The answer is to switch to a heliocentric model.

    • The same origin security model orbits the code.

    • Instead, it should orbit the data.

  • In security systems friction is load bearing.

  • What if I told you that this weird bit of math originally created in the 70's holds the secret to fixing the power dynamics that have screwed modern society?

    • It allows you to treat code as untrusted and yet still do sensitive things.

    • It’s used every day in different contexts… the math just needs to be applied to UI.

  • Resonance: the deeper you go the more evidence you find for fractal alignment with what you care about.

  • The same origin model is not resonant, it's hollow.

  • Resonant privacy is when the system behaves the way you want it to at every level.

    • So you never have to think about privacy again.

    • Perfectly aligned with your intentions and expectations, at every level.

  • Even if a security model guarantees code won't be dangerous, it doesn’t guarantee that it's useful.

    • That's where ranking comes in.

  • As you get closer to a lighthouse metric it saturates and you start inadvertently destroying what you care about.

  • There’s an interesting recall/precision tradeoff in systems with high marginal cost.

    • A quick recap on Precision and Recall in an Information Retrieval sense.

      • Precision means, roughly, of the questions answered, what percentage were correct.

      • Recall means, roughly, what percentage of questions asked were answered.

      • You want both, but you can trivially trade off recall for precision and vice versa at a given quality level.

    • Imagine a system that does useful things for you… but has non-trivial marginal cost for answers.

    • Price-sensitive people would optimize for precision, not recall.

      • Recall without precision would be expensive if most of the answers are wrong.

      • They want to minimize false-positives, since they are expensive.

    • Prince-insensitive people might optimize for recall.

      • They’d rather minimize false-negatives, where the system reports it doesn’t have a good answer but it would have.

    • An answer for the price-insensitive people helps the system tune and know where good results are, which could help the price-sensitive people have better results next time.

  • Super-organizers are great first adopters for a system designed to grow into the operating system for your life.

    • Especially a system that is multi-player by default.

    • Super-organizers don't just organize themselves, they organize the systems around them for others, too.

    • They get a boost of energy when things are put in the right place, which can motivate them to do long slogs of effort that other people wouldn’t bother with.

    • If the system can create self-paving cowpaths, then it helps other users, too.

  • View brands as bank accounts that you can deposit into or draw from.

    • If you’ve drawn down, you need to deposit more.

    • This was advice Steve Jobs gave to Bob Iger.

    • This bank account is hidden and emergent.

    • It’s a store of trust.

    • Beancounters don’t see it, so they don’t see how their micro-optimizations are harming this store of trust.

    • But this store of trust matters more than any individual transaction.

  • Some people think Her is optimistic and some think it’s dystopian.

    • Which kind of person are you?

  • Omnipresent but ethereal forces are suffocating.

    • Once you see them, you can't unsee them.

    • But before you see them, you just feel like you’re crushed under a weight that you must be imagining.

  • Emergent effects are spooky.

    • Ethereal, hard to see or understand.

    • But that doesn’t make them any less real.

  • A positive boundary gradient leads to a system that grows organically.

    • The people at the edge would rather be in the system than outside.

    • A positive Net Promoter Score is a way to measure a positive boundary gradient.

  • Useful things grow automatically.

    • More people invest in using it, improving it, keeping it going.

    • That creates more surface area, more likely that other people find it, because other people are using it.

  • A concept for growing networks: “Social Dandelions.”

    • I like the concept, even if the article is a bit slop-y.

  • Slop is the perfect word for AI-generated content.

    • Slop: like the nutrition fed to a pig.

    • Sloppy:  Like careless.

  • My friend Dimitri has a classic essay about the composition of early adopters. A few riffs:

    • The Doers are the most populous.

      • They just want to use the thing to achieve a task.

      • They give the overall momentum to the group.

    • The Thinkers are a much smaller group.

      • They help reason about how to use the thing and see around corners.

    • The Magicians are even rarer still.

      • They see how to use the system to do things that look like magic.

      • Those concrete demos will inspire others to join.

    • You don’t know ahead of time which people will be in which group.

    • If you can find some Magicians it can be the difference between moderate and runaway success.

    • Optimizing for serendipity is the best way to find Magicians.

  • A nice rule of thumb from the Amplifier documentation:

    • A good recipe “Solves a Real Problem.

    • Not ‘what if we…’ but ‘I need to…’”

    • Gets at the distinction between demoable and usable.

  • Standing something up and optimizing something are two very different skillsets.

    • Imagine what could exist, vs improve what already exists.

    • You need both.

    • Every problem nests fractally; inside a “standing something up” project is many “optimize what exists” projects.

  • It’s not possible to know if an idea will be viable until you see it actually exist.

    • This is due to complexity.

    • Only the universe can calculate all of the interdependencies and have only the viable things remain.

  • People don't want to be cynical.

    • In the modern world, they feel like they have to be.

    • Everything has some cynical angle to extract from them in some hidden way.

    • If you don’t have your guard up then you get taken advantage of.

    • That's the modern world: these omnipresent but hidden forces that we all pretend don’t exist.

    • Just because we can’t see them doesn’t mean it doesn’t exist!

  • Nice piece on The Subversive Hyperlink.

    • "The web has a superpower: permission-less link sharing."

    • A nice paean to links and the power of the web.

  • I'm optimistic about tech, pessimistic about Big Tech.

  • There's so much data, but it's all dammed up by a small number of entities who hoard it.

    • It's for our own protection, due to the laws of physics of data

    •  But it's also very valuable to those hoarders, what a crazy random happenstance.

    • Why don't we change the “laws of physics”?

  • Aggregator is just a polite word for a peculiar form of monopoly.

    • In the industry we don’t call them monopolies, because that’s impolite and we want to pretend like they’re something different.

    • Like all monopolies, they get to a point where they short-circuit competition.

      • They then become something that doesn’t really have to compete, and the amount of society surplus created declines.

    • Aggregators are better than most monopolies because network effects mean the products are better than competitors’.

      • But that’s more due to the inherent network effect than the value created by the aggregator.

    • But they have the same stagnation and lack of competition as any monopoly.

    • Stagnation, heat death.

    • Where Ben Thompson and I disagree is he thinks aggregators are great and I think they’re bad.

  • Centralized things are hollow.

    • They are not situated, they are one-size-fits-none.

    • The bigger an entity is, the more they succumb to the Optimization Ratchet.

      • The harder it is to avoid it.

    • The scale means that even a small optimization has material change.

    • The asymmetry that causes the Optimization Ratchet gets larger the more centralized the entity.

    • When you have one perspective over the whole system, the value of optimization becomes more clear.

    • This is the force that fundamentally kills organizations as they grow, inexorably.

    • Organizations become sleepwalking giants.

  • A sleepwalking giant can stomp on villages without intending to, or even realizing.

    • The question is, as the giant, once you wake up, what do you do?

    • How do you structurally prevent yourself from causing that harm?

    • If you don’t, then maybe you deserve the blame.

  • “Number go up” sounds like something a braindead zombie would say.

    • People who are part of an emergent Optimization Ratchet machine become zombies.

    • Instead of thinking “is this a good thing I’m doing” they think entirely “Does it make number go up?”

    • Optimizing without thinking about the implications of that optimization.

    • The default state of modern society.

  • Many systems are emergently evil, not intrinsically evil.

    • There are very few individual grains of intrinsic evil within it.

    • But systems that have few grains of intrinsic evil can still produce massively emergently evil outcomes.

    • People who can’t see emergence will say “look, there’s very few grains of evil in it, so the outcome can’t be evil.”

    • That’s wrong!

    • Emergence is a thing.

    • A system that is emergently evil is evil.

    • The question is: what is the net effect on society?

    • The road to hell is paved with good intentions.

    • Every incremental step every person takes is almost entirely reasonable, but the emergent result can be a totally different character.

  • A new book: The Nerd Reich: Silicon Valley Fascism and the War on Democracy.

    • I haven’t read it, but wow what a title.

  • Ignorance and arrogance often co-occur.

  • What happens in the next 5 years in technology is extremely important for the future of the world.

  • Progress can be resonant but it isn’t necessarily.

    • Some progress hollows things out.

      • “Number go up.”

      • Over-optimizing.

    • But other progress creates resonance, things that nourish society and individuals.

  • When you take a longer time horizon, resonance becomes the default.

  • Resonance ignites the soul.

    • ... Yes, this is drawing on a line from KPop Demon Hunters.

  • Huntrix is resonant.

    • The Saja Boys are hollow.

    • Both groups have devoted fans.

    • One group gets that from addiction, one gets it from love.

  • The Resonant Computing Manifesto is attracting a lot of important signatories.

    • You should sign too!

  • An insight from my friend Peter Wang:

    • "’Slack’ is the space afforded to the peripheries so they can innovate and encounter and be shaped by the liminal / exterior.

    • But most capitalist/McKinsey optimized operational disciplines create dendritic structures that suck the slack out from the edges as waste.

    • This destroys the capacity of the organization to actually learn and adapt from the field.

    • Furthermore it destroys any sense of enchantment or organic interaction.

    • All users are faced with an impenetrable façade of commoditized end-effector units, and they are thus interpellated into understanding that they, too, are merely commoditized sources of money and data (and ‘attention’)."

  • Ellul has a critique of “the technological society.”

    • He said that Communism and Capitalism are just different ways of serving the same underlying master: efficient production.

    • Efficient production is what leads to the Optimization Ratchet.

    • It’s what leads to the inexorable hollowing out.

    • That leads to alienation, commoditization, etc.

  • This Proto-Metamodern Thinkers on Technology is interesting.

    • It lists 4 movements:

      • Convivial technology

      • Appropriate Technology

      • Liberatory Technology

      • Calm Technology

    • They should add another: Resonant Computing.

    • Another interesting meta note: the article is “coauthored” by an AI personality.

  • A few reflections Claude and I had thinking about valuation "multiples" as an emergent phenomenon.

    • A "multiple" is how the market translates a flow (revenue) into a stock (company value).

      • It's a compression algorithm for all future scenarios.

        • $100M ARR at 10x, 1B valuation. 

        • At 20x, $2B.

      • The multiple encodes: growth trajectory, margin potential, market structure, risk/uncertainty.

      • Finance people are running a mental simulation: "How many turns of compounding does this system have left?"

      • High multiples mean they believe there are many doublings ahead.

    • The "market multiple" is an emergent coordination point.

      • Each buyer has their own internal multiple based on unique synergies, cost of capital, risk tolerance, alternative opportunities.

      • But the transaction price is where these different models find their overlap.

      • Like a Schelling point in game theory.

        • Not that everyone agrees on "true value," but they converge on a price that allows exchange.

      • Private/early stage: wide variance, high friction leads to price is what the most motivated pair agrees to.

      • Public markets: millions of participants leads to prices converge toward consensus rapidly.

    • Multiples exhibit stigmergy.

      • Each transaction creates information that influences the next.

      • "Company X raised at Y valuation" becomes a reference point that anchors subsequent deals.

      • The market multiple becomes self-reinforcing through this signaling.

      • It's the statistical residue of all those private calculations converging through mutual observation.

    • The multiple is market consensus about system potential.

      • Constantly recalibrating based on comparables, macro conditions, narrative momentum.

      • AI companies command higher multiples right now because of zeitgeist, not just fundamentals.

      • Everyone calculates differently, but they all dance around the same attractor point that emerges from their collective behavior.

  • One thing that makes Silicon Valley work as a whole is the lack of noncompetes.

    • Harder for a given company but great for Silicon Valley overall.

    • The value of the Silicon Valley network and 'scenius' is so significant that it makes sense to be here even if it's worse for you as a single company.

  • Bits and Bobs is my own personal anti-Twitter.

    • Instead of shouting into the void, I invite people into my cozy study where I navelgaze.

    • Twitter is about hot takes and shouting.

    • Bits and Bobs is about navelgazing quietly with friends.

  • Someone this week described me as an “anthropologist,” which fits!

  • Different versions of curmudgeons have very different impacts on teams.

    • One version is someone who assumes the worst but still has forward momentum to try anyway.

      • They can sometimes flag obstacles the whole team had missed.

    • Another version thinks "I've been right the whole time and the world is wrong.”

    • That style of curmudgeon is toxic.

      • Unable to learn, drags everyone around them into their own particular chip on their shoulder issue.

  • Sometimes external structure can help with activation energy.

    • Activation energy makes it hard to start tasks, even ones that you know you should, or that you want to do.

    • External structure can help give you the “kick” you need. For example:

      • Having a meeting to get to 5 minutes from now.

      • Having someone else expecting you to do the task by a deadline.

    • Some external structure is more effective than others.

    • Some are like “Answer this multiple choice question in the next five minutes.”

      • Easy to do: small task, short time frame.

      • If you don’t start it right now it won’t get done, so it’s obvious you need to start now.

    • Some are like “Turn in this essay next Friday.”

      • More work to do, more time, easier to put off.

      • “I’ll do this tomorrow when I’m not as busy.”

    • The sooner you need to do it, the more effective the external structure is at compelling you to actually get over your activation energy hump and do it.

  • "Manifesto" and "humble" don't often go together.

    • But they can!

  • The social process of a queue emerging is kind of like blockchains.

    • Imagine a situation where there’s an ambiguous queuing situation… a collection of people, not yet formed into an unambiguous line.

    • Everyone makes local decisions: which line in front of them appears to be the longest, and that no one will get mad at them for slotting into?

    • People stand in the line that seems to be the longest, avoiding the perception of cutting.

    • As the line emerges, it pops into existence quickly; because it’s concave and the process is self-accelerating.

    • As the emerging line gets more resolution, the pull to slot into it in a way that minimizes perceptions of cutting gets stronger.

  • Veritasium’s video on power laws is excellent.

    • A few notes / riffs.

    • Mediocristan: normal distributions.

    • Extremistan: power law distributions.

    • Any mechanism that has preferential attachment is by definition Extremistan.

    • The Extremistan power law is called a log normal.

    • A power law implies scale free, which implies fractal behavior.

    • At the critical point in a system, fractal behaviors show up.

    • Systems tune themselves to criticality automatically.

      • Self-organized criticality.

    • Criticality is balanced on a knife’s edge, the most interesting position to be in.

      • Kind of like entropy; it has the the most adjacencies.

    • In a system that’s in self organized criticality none of the physical details matter for how it behaves.

      • The criticality is more important than any other detail.

    • Insurance fundamentally assumes Mediocristan not Extremistan.

      • If it’s a power law it’s uninsurable because the average goes up to infinity.

      • This is true only if the extreme phenomena are correlated.

      • Of course, in a crisis all correlations go to 1.

    • Only the world of information can live in Extremistan.

      • You need to be able to replicate the product quickly.

      • Of course, physical things can also be constructed quickly, on a spectrum.

        • Books can be printed.

        • Furbies can be constructed.

        • Restaurant chains can add more store locations.

      • But the more it requires atoms to be configured the more it will live in Mediocristan.

  • The more complexity, the more opportunity for malicious compliance by a cynical actor.

    • Complexity creates clouds of uncertainty.

    • An entity that wants to hide its intentions can hide better in uncertainty.

      • “Oh oops, I fumbled that thing I was supposed to try again. Oh oops, I did it again!”

    • It’s harder to detect that they aren’t complying, and they have more plausible deniability when they are detected.

  • Sharks will sometimes push collaborators to compromise their integrity.

    • The shark sees the direct personal benefit of that collaborator compromising their integrity: a better payday for the shark.

    • But they miss the long-term diffuse costs to that person of compromising their integrity.

    • It’s invisible to them, and would be unimportant to them even if it weren’t.

  • A great leader empowers people to believe things about themselves they didn't dare to.

  • Judgement is a burden.

    • Judgment is different from simply noticing.

    • It’s noticing with a value judgment layered on top.

    • The judgment collapses possibility, it cuts off options by labeling them as bad.

    • When you judge, you place yourself on the balcony, instead of on the dance floor.

  • When you’re stuck with something for life, you don’t just give up as easily.

    • Especially if there are a finite set of options so the opportunity cost is high.

    • You push through to at least “good enough.”

    • If it doesn’t work right, you don’t just shrug and say “that’ll never work, I’ll just write it off.”

    • LLMs give up more easily on software problems than humans might.

      • “Meh, it’s fine, just leave it.”

    • A version of the principal agent problem.


Reply all
Reply to author
Forward
0 new messages