Bits and Bobs 1/6/26


Alex Komoroske

Jan 6, 2026, 1:55:05 PM
I just published my weekly reflections: https://docs.google.com/document/d/1x8z6k07JqXTVIRVNr1S_7wYVl5L7IpX14gXxU1UBrGk/edit?tab=t.0#heading=h.wwa6o7pcww0u

Not Big Tech, My Tech. The digital third place. Fractal personalization. The Software Industrial Revolution. The Same Origin creating jealous goblins. Climbing the wrong infinite software hill. Useful slogs are moats. Superficial optimization is selfish. Touch vs sight.

----

  • Lenny’s podcast with Sander Schulhoff about Prompt Injection.

    • "You can patch a bug, you can't patch a brain."

    • About 11 minutes in they reference me!

  • A tweet summary of a new paper:

    • "This paper from Stanford and Harvard explains why most “agentic AI” systems feel impressive in demos and then completely fall apart in real use.

    • The core argument is simple and uncomfortable: agents don’t fail because they lack intelligence.

    • They fail because they don’t adapt."

  • A nice insight from a YouTube video:

    • “Making ai models is less like training an animal intelligence and more like summoning ghosts.”

  • A tweet analyzing what Google is trying to do to OpenAI:

    • "google is trying to do to openai what facebook ended up doing to snap which is to first decelerate growth substantially (which kills a lot of momentum & morale) & then unleashing integrations at scale rapidly by leveraging distribution advantages.”

  • OpenAI admits that prompt injection is a fundamentally unsolvable problem

    • "Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully 'solved.'”

  • I don’t want Big Tech, I want My Tech.

    • Big Tech is owned by someone else.

    • My Tech is owned by me.

  • Mike Masnick on How we can make the internet good again.

  • Nice summary of agentic browsers from a HackerNews comment.

    • "Let's spend years plugging holes in V8, splitting browser components to separate processes and improving sandboxing and then just plug in LLM with debugging enabled into Chrome. Great idea. Last time we had such a great idea it was lead in gasoline."

  • This week’s Wild West roundup.

  • A tweet: "factorio is unironically the perfect tutorial for agentic coding systems btw."

  • I binge watched Pluribus on my flight to London.

    • Excellent show.

    • There’s an obvious metaphor for people who use AI and simply do whatever it tells them to do.

    • Compare to people who use AI to multiply their agency.

    • Looks similar, but wildly different.

  • LLMs add a token that is most coherent with what’s in the context.

    • So if it makes an error it will tend to make it again.

    • Because that error is put into the context.

    • The most coherent continuation silently assumes that the error is right.

    • The more errors it makes the more deeply ingrained the errors get.
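
The feedback loop above can be sketched as a toy simulation. Everything here is invented for illustration: a made-up base error rate and a made-up "reinforcement" factor standing in for how errors in the context raise the odds of further errors.

```python
import random

def generate(steps, base_err=0.05, reinforcement=0.5, seed=0):
    """Toy autoregressive loop: each step's error probability grows with
    the fraction of errors already sitting in the context (all parameters
    are illustrative assumptions, not measured LLM behavior)."""
    rng = random.Random(seed)
    context = []  # True = correct token, False = error
    for _ in range(steps):
        err_frac = context.count(False) / len(context) if context else 0.0
        p_err = min(1.0, base_err + reinforcement * err_frac)
        context.append(rng.random() >= p_err)
    return context

run = generate(200)
# Compare error counts in the first and second half of the run; errors
# tend to cluster later, once early mistakes pollute the context.
print(run[:100].count(False), run[100:].count(False))
```

With reinforcement set to zero the errors stay independent; with it nonzero, each mistake makes the next one more likely, which is the "deeply ingrained" effect.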

  • We’re missing the digital "third place.”

  • A new paper from MIT’s Daniel Jackson: What You See Is What It Does: A Structural Pattern for Legible Software.

  • Chris Loy argues we’re about to see the industrial revolution of software.

    • On the one hand, the idea that software to date has been hand-crafted and high-friction does resonate.

    • But unlike in the industrial revolution where everything was made more similar, the industrial revolution of software will allow software that is fractally personalized.

  • Your ASP (AI Service Provider) has to not be a chatbot (too limited) and must be dedicated to only you (no conflict of interest).

    • A chatbot owned by Sam Altman, of all people, is obviously not an ASP.

    • Corporations pretending to be your friend.

    • What could possibly go wrong?

  • Your ASP can't be just a chatbot.

    • That's like AOL saying the point of the internet is chat.

    • No, it's the web, duh!

  • Facebook is testing $14.99 monthly subscription fee to post links.

    • The aggregator endstate.

  • Imagine a service that unlocks the resonant value of AI for individuals.

  • Ivan Zhao’s Steam, Steel and Infinite Minds.

    • “If history teaches us anything, those who master the material define the era.”

    • What will the substrate that unlocks AI’s infinite patience look like?

  • What if you could vibecode your life?

  • If the living substrate is powerful enough then the substrate is the product.

    • Especially true if there are strong network effects.

    • The web was the product.

  • The right mindset to use coding agents to their fullest potential: “I don’t know if it’s going to work, but let’s try it!”

    • If you have the mindset of “That might not work, let’s not try it.” then you won’t unlock their full power.

  • Josh Marshall points out the Grand AI Disconnect.

    • AI is really, really unpopular with the American public.

      • “Fewer than 20% of Americans think AI will have a positive impact on America over the next 20 years.”

    • But everyone keeps on acting like most people love it.

  • An "inductively knowable" UX works great with reasonable defaults.

    • It just does what you expect.

    • Then as you peel back layers of understanding there's no magic.

    • But you don't need to know the lower layers; they just make sense if you ever do peel them back.

  • The appropriateness of data being used is tied to the context it was collected and used in.

    • So if it’s collected in a context where only superficial things can be extracted and then it’s put in a context where it can be deeply understood, it feels like a betrayal.

    • That’s why Google’s data is a blessing and a curse in an era of LLMs.

    • They’re sitting on a trove of data for each user… but if they preprocessed everyone’s decades of emails it would feel like a crazy betrayal, an invasion.

  • The same origin paradigm turns every app creator into a goblin jealously hoarding their data treasure.

  • The vast majority of consumer software in the past couple of decades has been algorithmically simple.

    • We’ve explored every nook and cranny of that simple software landscape.

    • If your architecture doesn’t matter, just build as quickly as possible and get as many users as possible as quickly as possible.

    • But if your architecture does matter, getting users on the wrong architecture could be deadly.

  • The entire software industry is predicated on “bear the expense of building software and in return own the user’s data.”

    • Consumer and enterprise alike all assume this model.

    • This model is downstream of the same origin paradigm.

    • It is not something that must be fundamentally true about software.

  • The incentives that emerge from the same origin paradigm are too strong to be overcome by a handful of users demanding better.

    • That’s not how we’ll transcend the same origin paradigm.

  • We’ve all developed a learned helplessness about data and its use within apps.

    • We can’t imagine it being any other way.

    • So we just stopped imagining.

    • For the world to become better, you have to first imagine that it could be better.

  • Confidential Compute bundles two very different abilities: 1) encrypted-in-memory and 2) remote attestation.

    • Encrypted in memory means that even if an adversary has physical access to the machine it is very difficult for them to extract your secrets.

      • This is only important if your threat model includes nation state actors targeting your data in particular.

    • Remote attestation allows proving to someone else that you’re running the software you claim to be running.

      • This one isn’t useful in most software contexts where users must already assume the host can do arbitrary things with their data.

      • But it enables a totally new type of software that doesn’t have to be trusted.
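
The attestation handshake can be sketched roughly as follows. This is a hypothetical simplification: a real TEE (SGX, SEV-SNP, etc.) signs with a hardware-rooted key and a vendor certificate chain; here a shared HMAC key merely stands in for that root of trust, and all names are invented.

```python
import hashlib, hmac, os

# Stand-in for the hardware root of trust a real TEE would hold.
HARDWARE_KEY = b"stand-in-for-hardware-root-of-trust"

def measure(code: bytes) -> str:
    """The 'measurement': a hash of exactly the code that was loaded."""
    return hashlib.sha256(code).hexdigest()

def quote(code: bytes, nonce: bytes):
    """Enclave side: bind the measurement to the verifier's fresh nonce."""
    m = measure(code)
    sig = hmac.new(HARDWARE_KEY, m.encode() + nonce, hashlib.sha256).digest()
    return m, sig

def verify(expected_code: bytes, nonce: bytes, m: str, sig: bytes) -> bool:
    """Verifier side: is the host running exactly the software it claims?"""
    if m != measure(expected_code):
        return False  # different code was loaded than what was published
    good = hmac.new(HARDWARE_KEY, m.encode() + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(sig, good)

nonce = os.urandom(16)  # freshness: prevents replaying an old quote
m, sig = quote(b"the published service code", nonce)
print(verify(b"the published service code", nonce, m, sig))  # True
print(verify(b"tampered code", nonce, m, sig))               # False
```

The point of the sketch: the verifier never has to trust the host's word, only the measurement, which is what makes "software that doesn't have to be trusted" possible.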

  • A demoed system doesn’t necessarily increase in performance.

    • A used system however will tend to increase in performance.

    • Users simply won’t tolerate their usage patterns being slow.

    • When it’s just a demo, users have to remember to actively demand performance.

      • If you’re distracted you won’t demand it.

    • When it’s something you’re using, every time you use it you want it to be faster.

      • It’s just the most obvious, inescapable thing.

  • All the vibecoding platforms are climbing the wrong hill.

    • They’re trying to get everyone to create their own software.

    • That will never happen… software requires thinking at least like a PM.

    • But it’s a necessity if software can’t be safely shared by strangers.

    • To climb the right hill, you’d have to make software that could be safely shared by strangers.

  • We need a solution to run a stranger's untrusted software on your sensitive data.

    • That's the unlock for infinite software.

  • If the system assumes an LLM as the main loop then there’s a floor of how cheap and performant it can be.

    • Whereas if it assumes normal compute sweetened with LLMs there’s no floor or ceiling.

    • And also if you assume LLM in the loop the only way to improve is model quality or tools.

      • Whereas normal code can accrete functionality over time.
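
The cost-floor argument above is just arithmetic; here is an illustrative version of it, where every number (price per call, volume, how often the sweetened design needs a model call) is an assumption, not a quoted rate.

```python
# Illustrative cost arithmetic; all figures are assumed for the sketch.
llm_call_cost = 0.002   # assumed $ per LLM call
interactions = 10_000   # assumed user interactions per day

# LLM as the main loop: every single interaction pays the model tax,
# so cost has a hard floor proportional to usage.
main_loop_cost = interactions * llm_call_cost

# Normal compute sweetened with LLMs: say only 1 in 50 interactions
# actually needs a model call; the rest run as ordinary cheap code.
sweetened_cost = (interactions / 50) * llm_call_cost

print(main_loop_cost, sweetened_cost)
```

Under these made-up numbers the sweetened design is 50x cheaper, and the ratio improves further as more behavior migrates from the model into accreted ordinary code.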

  • Wishes allow software that gets better than it was when you wrote it.

    • It improves continuously as the ecosystem improves.

  • The big companies are all on the chatbot train.

    • Big companies can't bet on two contradictory things.

    • What if the most powerful use of LLMs is not chatbots?

  • If it’s a hill climbing exercise, first, make sure you’re on the right hill.

    • For example, performance work should come after required architectural changes.

    • When you have users is when you’re forced to start climbing the hill.

    • If you have users prematurely, it forces you to optimize prematurely, meaning you likely climb the wrong hill.

  • A useful slog is a moat.

    • But a useless slog is a waste of time.

    • The whole question becomes: is it useful or not?

  • Some teams are great at execution.

    • But they can’t do multi-month slogs to non-obvious endpoints.

    • Other teams might go slower on simple projects, but are capable of achieving more complex projects.

    • If they achieve something useful, that gives them a moat.

    • Anyone who wants to catch them would have to go through a similar slog.

  • To resonate, stories must be both true and believable.

  • The network effects are downstream of the security model.

  • Negative distribution friction could come from wishes.

    • Little gravitational pulls to have the right pieces snap together.

    • Wishes are only safe to grant in the right security model.

    • The security model is what changes the distribution physics.

  • In traditional software the worst case is the code harms you.

    • Potentially in an unbounded way.

    • Imagine a substrate where the worst case is that the code isn't useful.

      • A radically better worst case.

    • Especially because you can use normal ranking techniques to sift through which is useful.

  • American culture is about maximizing, about never being satisfied.

    • Always striving to go above and beyond.

      • Innovative… but exhausting.

    • Other cultures are more willing to merely satisfice.

      • “It’s good enough.” 

      • “Why bother improving it?”

  • PMF always starts as a needlepoint.

  • The push for frictionless is a push for hollowness.

  • First, make the theory practical.

    • Then make the practical marketable.

  • To be useful, it has to feel more like a tool than a toy.

  • If you're staring at the ground in front of you and not at the horizon, you'll fall off a cliff.

  • Friends don’t use friends as smoke tests.

  • The system builder and product builder are different.

    • The system builder, when they try to build product, will see the problem and try to solve it properly.

      • Because it's a means to push their end, the system, to be full featured.

    • The product builder doesn't care about the system, they just want the product to work.

  • The Six Degrees of Kevin Bacon effect shows up in many networks.

    • It happens because the combinatorics makes even a small incidence rate quickly saturate the network.

    • That also means that statements like “Nearly every academic paper is less than 6 citations away from a retracted paper” are not as interesting.

      • It’s trivially true no matter the density of retracted papers.
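
The "trivially true" claim is a one-liner of combinatorics. A back-of-envelope version, where the branching factor and retraction rate are assumed for illustration rather than measured:

```python
# Why "within 6 citations of a retracted paper" is nearly guaranteed:
# the citation neighborhood grows geometrically, so even a tiny
# incidence rate saturates it. Both constants are illustrative.
branching = 20      # assumed citations per paper
incidence = 1e-4    # assumed fraction of papers retracted

# Papers reachable within 6 citation hops (ignoring overlap, which only
# strengthens the point for smaller effective neighborhoods).
reach = sum(branching ** k for k in range(1, 7))

# Probability that none of those papers is retracted.
p_clean = (1 - incidence) ** reach

print(reach, p_clean)
```

With ~6.7e7 papers in reach, the chance of avoiding every retracted paper is astronomically small, regardless of how rare retractions are, which is exactly why the statistic carries so little information.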

  • If you give someone the smoking gun they can stop looking for it.

  • I found service in restaurants in Europe to be notably slower than in the US.

    • I didn’t think it was about tipping–if most tips are in the same narrow band (15-20%), that doesn’t explain much difference in speed.

    • But the existence of tips–even with a narrow band–incentivizes turning as many tables as possible.

    • So even if the average “salary” comes out the same with the default tipping rate, the one with tipping will incentivize faster table service than the one without tips, which doesn’t reward based on how many tables were finished.

  • The defining characteristic of the Saruman is selfishness.

    •  “I got mine, screw you.”

    • Not caring about the externalities is selfishness.

  • When you’re fighting for money, you'll only optimize superficially.

    • You’ll cut corners.

    • When you’re fighting for honor, you'll optimize resonantly.

    • You won’t take shortcuts.

  • Resonance is soul.

  • When incentives align, you get resonance by default.

  • Ambition, when attached to selfishness and competency, can quickly become net bad for the surrounding system.

    • Slytherin for example is ambitious.

    • Ambition unchecked by shame will take shortcuts with significant externalities.

    • Imagine a choice in front of you.

      • One way is the right way.

        • You take a personal penalty.

      • The other way is a shortcut, with externalities.

        • But no one will ever know you did it.

      • How big must the externality be before you don’t take it?

        • Everyone has a threshold.

        • Some people’s thresholds are significantly lower than other people’s.

  • A popular authoritarian is much more dangerous than an unpopular one.

  • Curious people are much less likely to get bored.

    • A question gives you something to do.

    • Curious people always generate another question.

    • A great question is

      • 1) interesting

      • 2) you care about the answer.

  • Whether or not something is in extremistan: does it have a preferential attachment effect or not?

    • Preferential attachment is multiplicative vs additive.
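
The multiplicative-vs-additive distinction can be seen in a toy growth model; the sizes and seed here are arbitrary choices for illustration.

```python
import random

def grow(n, preferential, seed=1):
    """Grow a network one node at a time. Each newcomer links to one
    existing node, chosen either by degree (multiplicative: the rich
    get richer) or uniformly at random (additive)."""
    rng = random.Random(seed)
    degrees = [1, 1]  # two seed nodes joined by one edge
    for _ in range(n):
        if preferential:
            target = rng.choices(range(len(degrees)), weights=degrees)[0]
        else:
            target = rng.randrange(len(degrees))
        degrees[target] += 1
        degrees.append(1)  # the newcomer arrives with one link
    return degrees

rich = grow(5000, preferential=True)
flat = grow(5000, preferential=False)
# Preferential attachment produces a far larger maximum degree (a heavy
# tail); uniform attachment stays in mediocristan.
print(max(rich), max(flat))
```

Heavy tails in the first run and not the second is the signature of extremistan: when attachment is multiplicative, early advantages compound.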

  • You’re allowed to make your own choices if you can understand and be responsible for their implications.

    • That’s why we don’t let young children make substantive decisions.

  • Local optimizations come from touch.

    • Global optimizations require sight.

    • Touch is 100x higher fidelity and cheaper.

      • Touch is more resilient.

    • Touch is local.

    • Sight is global.

    • Touch doesn’t work from a global perspective, but it is cheap and reliable.

    • Sight working well for larger scales makes us crave it to solve all problems, pushing it beyond the scale it actually works in.

    • Touch can cover very large scales if you can come up with decision procedures that only require touch and can ripple through a larger substrate.

    • These decision procedures are hard to discover but are often quite simple when they exist.

  • Who gets to decide the lore of the tooth fairy?

    • No one.

    • It emerges, bottom up.

    • Based on the stories people choose to repeat.

    • The ones that loom largest.

  • It feels like most outcomes are decided top down instead of emerging bottom up.

    • But that’s an illusion because only the top down outcomes can be thought of as “decisions”.

    • We think naturally in narrative so top down decisions compress way easier and thus feel more obvious and tidy.

      • The bottom up outcomes can’t be compressed as easily with “this agent had this goal and made this decision”.

    • Also even bottom up things could look like top down.

      • An agent makes a locally optimal decision which unbeknownst to them pushes the emergent outcome past a critical point.

      • That would look like that one agent’s decision changing the whole system, but really those decisions are externalities.

  • The game of telephone shows how every act of comprehension is an act of interpretation.

    • Over time, over multiple iterations, it continuously morphs.

  • When you’re making a movie, everyone has to be making the same movie.

    • Otherwise you get a discordant mess.

    • The alignment of vision is critical.

  • You’re reading a romance book. Will it have a happy ending?

    • The best predictor: was it filed in the Romance section or the Drama section?

  • When people have many devoted friends, it’s a sign that they’re generous and open hearted.

  • Believing in something bigger than yourself is how you transcend.

