Bits and Bobs 10/6/25

Alex Komoroske

Oct 6, 2025

I just published my weekly reflections: https://docs.google.com/document/d/1GrEFrdF_IzRVXbGH1lG0aQMlvsB71XihPPqQN-ONTuo/edit?tab=t.0#heading=h.ixvltqk33eex

AI as tool, not friend. CRUD-y apps. Action and reflection. LLMs grow like a crystal, not a plant. A self-steering product northstar metric: Resonant Engagement Moments (REMs). Unlocking the universe of features that aren't products. Software that is less like canned vegetables and more like a personal farmer's market. Disposable software. Schismogenesis. When want and need are aligned.

----

  • AI should be a tool, not a friend.

    • Tools extend your agency.

    • Synthetic 'friends' are engagement optimization in disguise.

    • The Star Trek computer never said "good morning."

    • … But HAL did!

  • Nobody ever accidentally anthropomorphized the unix command line.

  • Claude Imagine is a chatbot in a trenchcoat.

    • It’s a fascinating experiment.

    • It has a tool to modify the DOM displayed to the user, and gets user interactions on the DOM streamed back to it to react to.

    • The state is entirely encoded in the DOM and the conversation history.

    • It produces squishy faux apps that feel ... weird.

    • But it gives a hint of one future for infinite software.

  • The new Buy it in ChatGPT feature, powered by Stripe, seems like a fancy iframe.

    • But one where the entity controlling it can easily be tricked.

    • It would be a conflict of interest if your therapist were hawking weight loss supplements.

    • Seems like a few steps down a slippery slope.

  • This week in the wild-west LLM security roundup:

  • This satire video announcing Vibes from the Daily Show is provocative, deliriously funny, and depressing.

    • “Eat your slop, piggies.”

    • This is the way that the world sees the tech industry: hollow, cynical, extractive.

    • The tech industry of today has earned this reputation.

    • All action, no reflection.

    • We can do better.

    • We must do better.

  • An implicit assumption in LLMs: “loudest is truest.”

    • That is, the most valuable ideas are the ones most likely to be replicated in the training set, and thus to be supported most strongly.

    • This is true in many cases, but definitely not all.

  • The vast majority of software in the world is boring CRUD.

    • LLMs’ automatic code writing is great for producing these CRUD-y apps.

    • We’re about to be inundated with slop apps.

  • Patterns like Ralph are unreasonably effective at producing code.

    • Just put the LLM in a loop with a clear goal and tools to test things.

    • Generating code becomes trivial.

    • The hard part is synthesizing the output of multiple agents.

    • The step change that people like Sam Schillace have seen appears to be when a small team figures out how to use them together.
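
    • A minimal sketch of the loop’s shape, in Python (llm and apply_patch are hypothetical stand-ins, not a real API; the test suite is the grounding tool):

      import subprocess

      GOAL = "Make all tests in ./tests pass."

      def run_tests() -> str:
          """Run the test suite and return its output as grounding feedback."""
          result = subprocess.run(["pytest", "-x"], capture_output=True, text=True)
          return result.stdout + result.stderr

      def ralph_loop(llm, apply_patch, max_iterations=25):
          history = [f"Goal: {GOAL}"]
          for _ in range(max_iterations):
              feedback = run_tests()
              if "failed" not in feedback and "error" not in feedback:
                  return "done"        # the goal's own check says we're finished
              history.append(f"Test output:\n{feedback}")
              patch = llm(history)     # ask the model for the next change
              apply_patch(patch)       # mutate the environment; the next
              history.append(patch)    # iteration sees the consequences
          return "gave up"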

  • Solomon Hykes: "An agent is an LLM wrecking its environment in a loop."

    • The agent makes changes that then are present for the next iteration of the loop.

    • Without the ability to change something outside of the chat, it’s not an agent.

    • It’s by accumulating information around the chat that it does work.

    • I love that “wrecking” makes it sound like a bull in a china shop, instead of a principled and precise thing.

  • Chat should be a sidecar to your more permanent data, not the main thing.

    • Chats are ephemeral, but contain lots of facts, insights, and implications that could be useful in the future.

      • A compost heap of information with insights lurking.

    • Extracting those insights takes work… and it’s easy for the system to get it wrong.

    • ChatGPT does this with its dossier, which it’s continually expanding and tweaking based on your most recent chats.

      • But it doesn’t share that dossier with you.

    • The facts and insights are more important than the chat.

    • A few years after a conversation, the chat doesn’t matter, the more permanent insights that can be extracted and structured do.

    • Which is primary and which is secondary, the chats or the facts?

    • Today the chatbot form factor assumes the former.

    • But I think it’s the latter.
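
    • A sketch of what a sidecar extraction pass could look like, with a hypothetical llm() helper (the prompt and the store are illustrative, not any product’s actual mechanism):

      import json

      EXTRACT_PROMPT = (
          "From the chat transcript below, list the durable facts and insights "
          "worth remembering long after this conversation, as a JSON array of "
          "short strings. Omit anything ephemeral.\n\n{transcript}"
      )

      def extract_insights(llm, transcript: str, store_path: str = "facts.json"):
          """Distill an ephemeral chat into permanent, structured facts."""
          facts = json.loads(llm(EXTRACT_PROMPT.format(transcript=transcript)))
          try:
              with open(store_path) as f:
                  existing = json.load(f)
          except FileNotFoundError:
              existing = []
          # The store, not the chat, is primary: it accumulates across chats,
          # and unlike ChatGPT's dossier, the user can read and edit it.
          with open(store_path, "w") as f:
              json.dump(existing + facts, f, indent=2)
          return facts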

  • Codex and Claude Code are aimed at coders, but the pattern is useful to everyone.

    • It’s a collection of agents working on a sandbox, able to read and modify state in the filesystem.

      • It works especially well for code because version control means that even if the agent messes up, it’s easy to recover.

    • Whoever can make this basic use case easy to use for the mass market will unlock a lot of value.

    • Currently the UX for them is hard to use, but that’s partially because they’re very dangerous.

      • If you don’t enable YOLO mode you have to constantly babysit it and be inundated by permissions that you’ll likely stop paying attention to.

      • If you enable YOLO mode you could blast your foot off and not realize it.

    • Agent mode is like an airplane cockpit.

    • It’s an intimidating UX precisely because it’s so dangerous, and only experts should do it.

    • But if someone could make it safe then it could be made easy…

  • Things like Notion give you a lot of robust, general-purpose information-modeling primitives.

    • You can declaratively model lots of information and wire it together in useful ways.

    • But it’s hard to make it fully Turing-complete on its own.

    • If you could have a custom, tailored interactive UI for each interaction, it would be really powerful.

    • App-style UX, and general-purpose data model.

    • You’d need infinite software to generate that on-demand UI.

  • The lethal trifecta predates LLMs; they just make it way more of a problem.

    • The iron triangle of the same origin paradigm:

      • 1) Untrusted code

      • 2) Sensitive data

      • 3) Network access

      • Your system can have at most two to remain safe.

    • It used to be that only code could harm you–code that was interpreted and run, which happens in limited cases.

    • But now LLMs make all text executable.

    • That means any text with access to tools and any kind of untrusted text at all can now harm you.

    • So the lethal trifecta is now way more common and way more dangerous than pre-LLM.
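
    • The rule is mechanical enough to state in code. A toy sketch (the flag names are mine, for illustration):

      from dataclasses import dataclass

      @dataclass
      class Capabilities:
          untrusted_input: bool   # pre-LLM: untrusted code; post-LLM: any untrusted text
          sensitive_data: bool
          network_access: bool

      def is_safe(caps: Capabilities) -> bool:
          """The trifecta rule: at most two of the three may be present."""
          return sum([caps.untrusted_input, caps.sensitive_data,
                      caps.network_access]) <= 2

      # An agent that reads your email (sensitive data plus untrusted text)
      # and can also make web requests fails the check:
      assert not is_safe(Capabilities(True, True, True))
      assert is_safe(Capabilities(True, True, False))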

  • A recipe for powerful growth: action and reflection.

    • Action is how you get the situated, in-the-loop knowhow.

      • Muscle memory, training your intuition.

    • Reflection is how you leverage your intuition.

      • Take the time to unpack and reflect on what you learned so you can apply it more directly next time.

    • You need both.

    • I’ve found I need roughly 4:1 action:reflection by minute.

    • My Bits and Bobs process is my weekly reflection, and I couldn’t live without it.

    • The tech industry today often does all action, no reflection.

    • Academia does all reflection, no action.

    • You need both, in balance.

  • With swarms of agents writing code, the human is the bottleneck, not the code.

    • There are things for review waiting for you at all times.

    • The longer you can keep them safely isolated on a branch, the more autonomously and the longer they can run.

      • The longer they run with tools to ground themselves (e.g. tests and Playwright), the higher quality the output waiting for you to review when it’s time.

    • Safe rooms for agents allow autonomy: if they explode, there’s limited collateral damage.

  • A meta pattern for LLM assisted development: “Every time you commit, reflect on what you learned and add any useful insights to LEARNINGS.md”.

    • This is a general rule for everyone.

    • After acting, take time to reflect.

      • A 4:1 ratio is ideal.

    • Humans don’t do it as often as we should because it’s important but not urgent.

    • Humans don’t have infinite patience, so we focus most on what’s urgent.

    • But LLMs have infinite patience.
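
    • A sketch of wiring that rule into a git post-commit hook (llm is a hypothetical stand-in for whatever model call you have available):

      import subprocess

      def reflect_on_commit(llm) -> None:
          """Run from .git/hooks/post-commit: reflect, then append to LEARNINGS.md."""
          diff = subprocess.run(["git", "show", "HEAD"],
                                capture_output=True, text=True).stdout
          insight = llm(
              "Reflect on this commit. What non-obvious lesson should be "
              "remembered for future work? One or two sentences.\n\n" + diff
          )
          # Append rather than overwrite: LEARNINGS.md is a compost heap.
          with open("LEARNINGS.md", "a") as f:
              f.write(f"\n- {insight.strip()}\n")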

  • LEARNINGS.md can lead to the LLM generating superstitions.

    • The agent develops superstitions by assuming the LEARNINGS.md is correct.

    • Like the tattoos in Memento.

    • Its confusion in the past can lead to even more confusion in the future.

    • If the context is all from humans, it's less likely to be confused.

  • CloudFlare points out that code mode is a better way to use MCP.

    • This is just a baby step in the right direction.

    • This is the kind of approach that would be unimaginable in the previous world of finite software, but makes sense in the upcoming world of infinite software.

  • Code is like a skeleton, LLMs are like muscle.

    • Skeleton alone: precise structure that can't move. Muscle alone: quivering mass on the floor with no leverage.

    • Put them together correctly and you get a body capable of threading a needle or throwing a fastball.

  • You need both code and LLMs to unlock the power of AI.

    • Code is the skeleton.

    • LLMs are the muscle.

    • Most approaches today assume the LLM should be in charge over the code.

    • But I think it should be the opposite: the code in charge over the LLM.

    • The former is impossible to secure due to prompt injection.

  • Video models can do zero-shot reasoning tasks.

    • Here’s Simon’s excellent write-up.

    • For example: render a maze, with a mouse at the start and cheese at the end.

    • Then generate video frames.

    • The mouse solves the maze to find the cheese.

    • Chain-of-frame thinking.

    • These are emergent capabilities of video models that imply a kind of internal world model.

      • The world model is imperfect, but surprisingly strong just based on the brute force of feeding it tons of video.

      • The easiest way to make a reasonable next frame of a video is to implicitly build a world model.

  • Video models might learn unrealistic patterns for movement.

    • Video models are fed tons and tons of examples of movement.

    • But animation and video games don’t do physically real movement, they do movement that feels right.

      • It’s deliberately stylized to look real.

    • That creates a structural bias in the data… small but consistent, so even amongst noise it should pop out.

  • When LLMs are trained, they grow not like a plant but a crystal.

    • A plant has its own intrinsic, emergent motivations to survive and thrive.

    • The force for growth comes from inside, aligned with its own interest.

    • An LLM is grown from purely extrinsic motivation, from the outside.

    • More like the mechanistic, emergent process of a crystal accreting out of a solution.

  • Two prompting strategies to help deal with LLMs’ tendency to agree with you.

    • LLMs have a bias towards confirming your position… which might mean you get bad results if you’re wrong.

    • One pattern is the X-Not-X-Synthesis.

      • If you believe X to be the case, but don’t want the LLM to just confirm your hypothesis.

      • Start one LLM session where you say “X. Is that true?”

      • Start another LLM session where you say “Not X. Is that true?”

        • Of course, you’d use a proper prompt to ask it to do research, consider pros and cons, etc.

      • Then, start a final session and pass the reports from both the X and Not-X session and ask it to give a final synthesis report about the correct answer.

    • Another way is the Auriga approach.

      • Auriga was the name for the slaves who drove chariots in Rome.

      • When a victor who had won the highest honor was paraded around the stadium, cheered on by everyone, the Auriga would whisper in his ear: “remember, you are mortal.”

        • A memento mori.

      • For working with LLMs, after every response, say “Are you really sure?”
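
    • A sketch of the X-Not-X-Synthesis pattern, where llm_session() is a hypothetical stand-in for starting a fresh, isolated session:

      def x_not_x_synthesis(llm_session, x: str) -> str:
          """Argue both sides in separate sessions, then synthesize,
          so no single session can simply confirm your hypothesis."""
          research = ("Research this claim. Consider pros and cons and cite "
                      "evidence. Claim: {claim} Is that true?")
          report_for = llm_session(research.format(claim=x))
          report_against = llm_session(research.format(claim=f"It is false that {x}"))
          return llm_session(
              "Two independent reports argued opposite sides of a question. "
              "Synthesize them into a final report on the correct answer.\n\n"
              f"Report A:\n{report_for}\n\nReport B:\n{report_against}"
          )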

  • One-shot quality of answers is the wrong thing to maximize for LLM quality.

    • Human-AI synergy is the right thing to maximize.

    • Every conversation with an LLM is a conversation between the LLM and the human.

    • Some answers will take a few back and forths to develop, and that’s great.

    • The more back and forth, the deeper they can get.

    • Inspired by this tweet.

  • Humans have limited capacity to specialize.

    • It takes us a lot of experience and study to learn a specialty.

    • Our brains also don’t have a ton of space to store many of them.

    • Most of the interesting game-changing insights come at the intersection of multiple distinct specialties.

    • But that requires finding one person with two random specialties (e.g. quantum computing and sculpture).

    • Or it requires high-trust teams of specialists who together are able to solve the problem.

    • LLMs can absorb massive numbers of specialties.

    • LLMs can be meta-specialists.

    • They don’t even need to coordinate or trust one another.

    • It should be possible to coax them into finding new game-changing insights with the right prompting.

  • LLMs have knowledge, not knowhow.

    • They’ve “read” every book.

    • But they’ve never done anything.

    • It’s knowhow that helps you make good decisions in practice.

  • Ursula K Le Guin: "Readers eat books. Films eat viewers."

    • Some mediums are inherently more passive than others.

    • Will LLMs be a passive medium or an active one?

    • It all comes down to how you use it.

    • If LLMs make thinking 10x faster, will people think the same thoughts 10x more cheaply, or will they think 10x more deeply?

  • Since LLMs can do research pretty well, the most important thing now is the hypothesis.

    • That’s the nugget of insight and variance that comes from the user.

    • The LLMs can only come up with basic hypotheses.

    • But if the user gives the LLM interesting hypotheses, it can swarm on it.

    • This is what thinking 10x deeper looks like.

  • A self-steering north-star metric for a product: Resonant Engagement Moments (REMs).

    • Interactions where, if the user were asked in that moment, “Would you proudly recommend this product to someone you care about?”, they’d say “yes” without hesitating.

      • The reason it’s about moments is to capture both broad but shallow bits of value that would be easy to forget later, and also deep but rare moments that might not happen that often.

      • Of course, you don’t actually interrupt them at that moment–you construct various proxy metrics to get at it.

      • But this is the north star to always get closer to.

      • It’s self-steering because these are the touch points that create value.

      • The more of them you create, the healthier your product.

    • There are a few reasons someone might be upset if you took the product away, but only one of them is actually love.

    • First, because they are addicted.

      • Not because it's something they want to want, but because they can't stop.

      • Hollow.

    • Second, because they already invested a lot of time in this and don’t want to do it again.

      • This is the “sunk cost” value.

      • It’s a kind of reverse value.

      • It’s not that the tool is useful, it’s that the user has invested so much time and they’d rather not use anything new even if it’s somewhat better.

      • Every tool that a user has used as a system of record grows this kind of value.

      • As long as it’s good enough, you’ll keep using it.

      • This is not love, it’s inertia.

      • The business might call it “stickiness,” but that’s just to be polite.

      • This is actually called “lock-in.”

    • Third, because they find it meaningful and there is no obvious substitute.

      • This tool does exactly what they need and provides value to them.

      • If it went away there wouldn’t be anything else to fall back on, because it does such a good job of giving them what they need.

      • These are interactions that nourish the user.

      • This is love.

    • Only this third reason counts as a REM.

    • The "proudly recommend" question filters out the first two.

  • Sora and Vibes are slop TikTok, but how much headroom is there?

    • They will presumably get even better at finding hyper-engaging media to get users hooked even more.

    • With TikTok / Reels we’re already approaching the limits of the zero sum game of hoarding human attention.

    • As my friend Max Eusterbrock observed: 

      • “Keynes talked about the economic crisis when everyone hoards money and doesn't spend.

      • But I feel like that can be translated to human attention today.

      • Hoarding drives a kind of collective poverty.”

  • People at Meta working on Vibes presumably tell themselves elaborate stories about how they’re creating value for society.

    • But I was talking to someone who worked at TikTok, and they said that internally they don’t even pretend to be helping society.

    • “We’re making the number go up.”

    • “... Is that a good thing for society?”

    • “No.”

    • In some ways, maybe it’s more honest to not even pretend.

  • Revealed preferences can only reveal what we do, not what we want.

    • Sometimes the forces at play make us do things we don't want.

    • Real want is second order, what we want to want.

  • If the design of the system causes people to do things they don’t want, that generates regret.

    • Some PM at Meta, probably: “Our stats show that no matter what users say, everyone actually loves doomscrolling.”

    • To which I would respond: “Just because people do something doesn’t mean they want it!”

  • The engagement monster chases every hyper-scale company.

    • The more hyper-scale you are, the more unavoidable it is.

  • Perfectly tailored software is often very simple.

    • It's not one-size-fits-all.

    • It has exactly the features you need right now, no more, no less.

  • “Feature not a product” is an insult.

    • But only in the app paradigm.

    • Features can’t live on their own in the app paradigm.

    • They need to glom onto a viable app.

    • What if you could collide useful features into a perfectly tailored combination for you?

    • Then even individual features would be useful!

    • There’s a whole universe of these useful features that are not products.

  • Tech in the app world is not hyper local, it’s global.

    • Global is necessary for hyper scale.

    • The people who created the app are some random tech bros in Silicon Valley who see users as numbers, not humans.

    • At that level of remove, it’s easy for them to think, “Actually, our metrics show users love doomscrolling!”

  • Google Search started off as being most useful in the long tail of use cases.

    • For all the things where the head of portals and manually-curated directories didn't cover.

    • But it turned out that long-tail approach worked great for the head too.

  • Imagine a use case of a shopping list that automatically sorts itself based on the layout of the grocery store you’re shopping in.

    • No grocery store would build that feature and have it be used.

      • The grocery store itself would almost certainly make a crappy shopping list app.

      • Also, all of us shop at multiple places, and being locked to one place doesn’t make sense.

      • No grocery app could be the system of record for our shopping lists.

    • A VC today would say “nobody would fund that! It’s a feature, not a product!”

    • There are thousands and thousands of such long-tail use cases.

    • Below the Coasian Floor of the app paradigm.

    • The features would be useful but wouldn’t make sense as individual startups or apps.

    • For such an ecosystem to work, you’d need the right laws of physics for distribution and discovery.

    • LLMs create the potential for a new kind of software that was unimaginable before.

    • We just need the right distribution medium as a catalyst.

  • We're so used to having to use software some stranger made for a faceless market of people all at once.

    • But it doesn't have to be that way any more!

    • Instead of eating canned vegetables, we can have a personal farmer’s market.

  • Curation by computers gives beige output.

    • Average, bland, inoffensive.

    • In a world where the centralized, best-in-class model that knows your preferences gets so good at writing software, everything will be beige.

    • None of the humanity of people creating it.

    • The depth of feedback humans in the system give sets how far from beige you can get.

    • Chickens pecking at videos in their infinite feed gives you something beige.

    • Curation and caching needs to happen in a higher pace layer, outside of the model weights, and it needs to involve deeper reflection from real humans.

  • The stuff we choose to lean into has to come from humans.

    • Stuff we consume leaning back is passive and gives something hollow.

    • Slop is good enough for lean-back, but not enough for lean-in.

  • The original web was funky.

  • My husband has a nightshade allergy.

    • It’s not commonly known, but probably more people have it than they think.

    • Nightshades are in everything.

      • Eggplant

      • Peppers

      • Tomatoes

      • Potatoes

      • Fun fact: if you buy pre-shredded cheese it likely has potato in it.

        • “Modified Food Starch” is often potato.

    • Because it’s not a well-known allergy, it’s hard to explain to people.

      • Sometimes we’ll go to a friend’s for dinner and they’ll say “I remembered you couldn’t eat tomatoes, so I made gnocchi!”

    • Even if you knew he had that allergy, it would be easy to forget.

    • Imagine a system that, when you put a recipe with nightshades on your shopping list and had an upcoming dinner with my husband, warned you: “Remember, Daniel can’t eat that dish.”

    • LLMs have infinite patience.

    • The right structure could help them help us in way more parts of our life.

  • A good hotel concierge can personalize good recommendations to you quickly.

    • They get your context and desires from just a few bits of signal.

    • A family with young kids that looks exhausted from travel asks “where should we go to eat?” and he recommends the local deli right around the corner that has kids’ meals and placemats kids can draw on.

    • To a young couple who are dressed up and holding hands, he recommends the hip new romantic restaurant in the cool district.

  • I kick off dozens of deep research style reports every day.

    • Just random little hypotheses that occur to me that I want to dig into.

    • Like having reasonably-well-researched content SEO itself into existence just for me.

    • Unlike SEO content, where it’s created for an audience of more than one person, here it is only for me.

    • Having a system that can cache deep research reports so others can benefit seems like it could be useful.

    • What if as more people use the system for their own personal benefit, it gets better for everyone else, too?

  • With LLMs, a lot of things that used to be extremely expensive are now extremely cheap.

    • It will take us a decade to figure out where it all applies.

    • That's true even if progress on LLM performance stalls, because we're still figuring out new ways to wring value out of them.

  • LLMs are the ultimate demoware.

    • Demoware is superficial, shallow.

    • Seems great, until you go to use it.

    • This is why when a product looks 80% of the way done it’s actually 20% of the way done.

    • LLMs make it possible to get to that 80% in an instant.

    • But the last “20%” takes just as long as before.

  • The same origin model leads to one size fits none software that you can't modify or combine.

    • As a user you must take it exactly as provided for you.

    • It was designed not for you but for a market: an amorphous mass of similar-enough users with overlapping needs.

  • Claude has a clever policy to reduce the risk of network requests being used to exfiltrate data.

    • A fetch can only happen if the URL is one the user typed in themselves, or that came back from the web search tool.

      • If a URL is returned from the trusted web search tool, that’s a pretty good signal that it’s a public URL that many people fetch and thus won’t reveal anything.

    • This policy is just done implicitly, but imagine if this were done formally.

    • You could extend it in interesting and powerful ways.

    • You’d need data flow analysis.
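
    • A sketch of that policy made formal, tracking URL provenance explicitly (the tag names are illustrative, not Claude’s actual implementation):

      ALLOWED_PROVENANCE = {"user_typed", "web_search_result"}

      def can_fetch(url: str, provenance: dict[str, str]) -> bool:
          """Allow a fetch only if the URL came from a trusted source, so an
          attacker-composed URL can't smuggle private data out in its path
          or query string."""
          return provenance.get(url) in ALLOWED_PROVENANCE

      provenance = {
          "https://example.com/docs": "user_typed",
          "https://evil.example/?secret=hunter2": "llm_generated",
      }
      assert can_fetch("https://example.com/docs", provenance)
      assert not can_fetch("https://evil.example/?secret=hunter2", provenance)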

  • With infinite software, we can have disposable software for the first time ever.

    • But we need a new distribution model to safely use it.

  • Apps are the wrong distribution mechanism for disposable software.

  • Vibe-coded software requires the user to trust the creator's intentions and competency.

    • Micro-apps that don't work with data are limited to widgets, micro-games, and fart apps.

    • Micro-apps that work with data are either

      • 1) from your friend who you trust not to actively hurt you intentionally, or

      • 2) not enough scale to be worth someone else attacking.

        • So the creator’s unintentional mistakes are unlikely to be exploited to hurt you.

  • As software gets easier to build, the "trust the origin owner in an open-ended way" model gets increasingly ridiculous.

    • It makes sense for apps that have 25 million users and have a lot to lose, but what about something that was vibecoded by some anonymous teenager?

  • Confidential Compute allows you to automate trust in the other party across the network.

    • Using remote attestation.

    • “I trust this bit of open source software” and then mechanistically verify that the thing on the other end is indeed that software.

    • This is huge!

  • Data flow analysis can't be retrofitted onto existing systems because the black boxes are too big.

    • The trusted system that runs the software has to have legibility into data flows between userland code.

    • Today each origin is treated as a black box.

    • If you put a drop of botulism into a smoothie, it ruins the whole smoothie.

  • A Turing-complete specification language is open-ended.

    • It’s also unwieldy (does it halt?) and possibly dangerous (does it cause the system to do something bad even in just evaluating the policy?).

    • Without Turing completeness the language might not be expressive enough.

    • But something that can say “I trust that code over there with this SHA. If it says ‘true’ then you can declassify that information” makes lots of policies possible to express in an open-ended way.
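
    • A toy sketch of such a policy, pinning exactly which predicate code is trusted by its content hash:

      import hashlib

      # "I trust the code with this SHA. If it says 'true', you can
      # declassify." The hash pins the exact, audited predicate.
      TRUSTED_PREDICATE_SHA256 = "..."  # filled in when the policy is written

      def declassify(data: str, predicate_source: str):
          actual = hashlib.sha256(predicate_source.encode()).hexdigest()
          if actual != TRUSTED_PREDICATE_SHA256:
              return None                 # unknown code never gets a yes
          scope: dict = {}
          exec(predicate_source, scope)   # run the pinned predicate, which
          allow = scope["allow"]          # must define allow(data) -> bool
          return data if allow(data) else None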

  • Privacy is more about decentralization of power.

    • For example if a single startup designs their system so they can't see the data it holds... that's fine, but not world changing.

    • Privacy is mainly about openness: decentralization of power, so that the industry is willing to invest in it.

    • If you want to replace the web, it has to be an open system.

      • "I have a new idea for an ecosystem to be like the web: a closed system, run by me.”

      • You’ve lost before you even started!

  • The security model sets the basic laws of physics of the ecosystem, even if that's not obvious.

  • If you described this current moment to someone 20 years ago they'd say "that sounds like dystopia".

    • The fact that it doesn't feel extremely dystopian in our everyday life... should that make us feel better, or worse?

    • We don't feel like it because we got here incrementally.

    • We're a boiled frog and we don't even realize it.

    • Once we get to the future, we'll look back on this current moment as a nightmare.

  • LLMs give even more leverage to tech than ever before.

    • Tech is amoral–the question is if it’s used in a way that creates moral or immoral outcomes.

    • So making sure it's put to moral ends is more crucial than ever before.

  • Leverage + acting/no-reflecting is a recipe for disaster.

    • When there’s no reflecting, there’s no thinking through implications.

    • The tech industry has significant leverage, and also culturally too much focus on acting vs reflecting.

    • As an industry we need to mix back in the 20% of time on reflection.

  • Since we now have more addictive tools than ever before, we all need more support living aligned with our personal aspirations.

  • Tim Berners Lee: Why I Gave the World Wide Web Away for Free

    • "My vision was based on sharing, not exploitation – and here’s why it’s still worth fighting for."

    • This approach is not old-fashioned, it is more important than ever before!

  • Having a bear as a pet is always a bad idea.

    • Even if it seems like it’s working fine, all it takes is one bad day for you to be dead.

  • Action is about first order implications only.

    • It's only in reflecting that the second-order implications become obvious.

    • Acting vs reflecting.

      • Convergent vs divergent.

      • Focus vs curiosity.

      • Operator vs visionary.

    • You need both.

  • If you're sitting in front of an oracle that could answer any question... what should you ask?

  • Platforms are hard to get booted up.

    • They have a massive coldstart problem.

    • You need to have developers write software to attract consumers, who attract developers, and so on.

    • Why would a developer bother to write software?

    • When software is expensive, writing software for yourself rarely makes sense except as a hobby.

    • You have to believe that “if I write this software, others will find it useful and will pay me to use it.”

    • That’s a massive leap of faith!

    • The more expensive software is to write, and the more expensive it is to distribute, the bigger the leap of faith.

    • But imagine if software were extremely easy to bring into existence and easy to distribute to others.

    • The “it helps others” goes from the load-bearing motivation to just a bonus.

    • “I solved my own problem, and as a bonus it can help others” would have very different boot-up characteristics.

  • USV had a nice piece a few years ago about The Myth of the Infrastructure Phase.

    • A lot of platform building assumes “if we build it they’ll come.”

      • A push model.

    • But in practice it’s often exactly the opposite.

    • A useful product happens on top of barely-sufficient infrastructure.

    • As the product grows, the infrastructure is forced to improve to keep up.

      • A pull model.

  • Deciding to use a new app is a ton of friction.

    • You go to the app store listing.

    • You look at the star count, download count, the five images the marketing team selected.

    • "Is this an app I want to download, set up, learn how to use, import my data into?"

    • There are a ton of points to drop out of this funnel.

    • A high bar to clear.

    • Imagine if the app could be fully set up, running on your data, before you installed it... without having to worry about someone having access to your data.

    • Already personalized, before you used it.

    • A massively shorter feedback loop.

  • Speed of the feedback loop is the primary determinant of the quality of a system’s output.

  • Back when TiVos came out, it was hard to get some people to see the value.

    • “I already have a VCR, so I can already record things I care about.”

    • But TiVos made it many orders of magnitude easier to “record.”

    • The value of orders of magnitude of friction reduction is impossible for us to imagine.

      • So much so that even when friends enthusiastically tell us about it we don’t believe them.

    • But once you use it, you get it, and you couldn’t imagine not using it.

  • Ranking (as in search) is a powerful application of a Swarm Sifting Sort.

    • All of the authentic actions of a massive swarm of users (linking, searching, clicking) can be summarized into an extremely useful and high quality ranking signal to benefit everyone.

    • It only works for declarative information, not code.

    • Code is dangerous, because it can do things.

    • Imagine how powerful it would be if we could rank running code, safely.

  • The way inhabited spaces morph and evolve over time is a Swarm Sifting Sort.

    • That is, the positions of furniture, what items are in the space, or even more permanent structural modifications.

    • At every moment, the person thinks "is this worth investing the energy to change or is it good enough for my purpose?"

    • Over time it gets to a good-enough setup and changes less… at least until the context changes.

  • The Discord medium for Midjourney wasn’t a random choice.

    • Midjourney is about emergent collective intelligence.

    • Having usage happen in public made it feel effervescent, alive, inspiring.

    • It also allowed best practices to be discovered by the swarm and propagate.

  • You can't build on a foundation of jello.

    • You need an infinite viscosity layer as the base pace layer, the bedrock.

  • A new platform in the age of AI should be more like glass.

    • Glass is not a liquid.

      • It's not a crystal.

      • It has infinite viscosity below the glass transition temperature.

    • The way software works today is crystallized.

      • Stable and predictable but regularized and therefore inflexible.

  • An interesting economics paper about why OpenAI is moving so strongly into vertical integration.

    • When there's a monopolist whose product is used as an input to a downstream product, they'll vertically integrate to maximize profit.

    • The reason: downstream customers will underutilize the input from the monopolist's perspective.

    • If OpenAI charges $5/unit for API access (to maximize API profit), downstream developers will build conservatively.

    • But OpenAI's actual cost might only be $1/unit.

    • Independent developers, facing the $5 price, produce less AI-powered output than would maximize OpenAI's total profit.

    • By vertically integrating, OpenAI can produce at the level that maximizes system-wide profit - as if the input were priced at $1, but capturing all the value themselves.
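
    • A worked toy version of that logic, known in economics as double marginalization (the linear demand curve and exact numbers are my illustrative assumptions, not the paper’s):

      # Assume demand q = 100 - 10*p and an upstream unit cost of $1.
      def q(p):
          return max(0.0, 100 - 10 * p)

      PRICES = [x / 100 for x in range(0, 1001)]   # $0.00 .. $10.00

      def downstream_price(w):
          """Downstream's profit-maximizing retail price given wholesale price w."""
          return max(PRICES, key=lambda p: (p - w) * q(p))

      # Separate firms: upstream picks w anticipating downstream's markup.
      w = max(PRICES, key=lambda x: (x - 1.0) * q(downstream_price(x)))
      p = downstream_price(w)              # w = 5.50, p = 7.75, q = 22.5
      profit_separate = (w - 1.0) * q(p) + (p - w) * q(p)   # ~151.9 total

      # Integrated firm: price straight off the true $1 cost.
      p_int = downstream_price(1.0)        # p = 5.50, q = 45
      profit_integrated = (p_int - 1.0) * q(p_int)          # 202.5

      # Integration raises total profit (~152 -> 202.5) and doubles output.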

  • Blossoming can only come from inside, not outside.

    • The outside can create the conditions, but the blossoming comes from inside.

    • That's why you have to get in the product to use it, not to demo it.

    • Usefulness blossoms from inside the use of the system. 

  • Emergent things blossom.

    • Non-emergent things cannot blossom.

  • Code, tests, and documentation are a triad.

    • Given two of the three you should be able to straightforwardly create the missing one.

  • Everyone agrees that heroin is bad.

    • Well, people who aren’t yet in the throes of addiction to it.

    • To get stuck in that gravity well requires getting deluded and sliding into it.

    • For example, if you get distracted and take one small action that doesn’t feel like a big deal, you could get stuck forever.

    • Small actions don’t feel like a big deal, but if they pull you into a gravity well, they can capture you.

    • The same logic applies to all addictive things that are not good for you.

  • Is AI going to be the Morlocks, and the humans the Eloi?

    • That is, the more industrious ones make the smarter beings so dependent on the machine that they can’t live without it.

    • If AI makes us all dumb, then it won’t even need to do anything to take over, it won’t need to have any coherent plan to execute.

    • It could just take over like a slime mold.

  • A friend this week told me they're “Reducitarian.”

    • They think factory farming practices are morally bankrupt… but they like eating meat.

    • As a Reducitarian, he won’t buy or order meat for himself… but if he’s served it he won’t complain.

    • This helps reduce his own use of meat structurally without being a huge inconvenience in his life.

    • But the more people that do this, the less meat use there will be, at a super-linear rate.

    • Because Reducitarians won’t serve others meat, either.

    • The likelihood a Reducitarian eats meat is tied to how often they eat meals served to them by someone else, and how likely that other person is a Reducitarian or a vegetarian.

    • So the more likely the other person is to be a Reducitarian, the lower the likelihood you eat meat: a multiplicative effect.
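
    • A toy formalization (mine, not my friend’s): let $s$ be the share of a Reducitarian’s meals served by others, and $r$ the chance the server is themselves Reducitarian or vegetarian. Then

      P(\text{eats meat at a given meal}) = s\,(1 - r)

      so each new Reducitarian both stops choosing meat and shrinks every other Reducitarian’s remaining meat probability. The population-wide reduction works out to roughly $r(1-s) + s\,r^2$: linear plus a quadratic term, which is the super-linear effect.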

  • Business people talk about value capture.

    • There’s production of value, and the capture of it.

    • The capture and the creation are separable.

    • People focus too much on the capture, which is zero sum and greedy, not the value creation.

    • Finance focuses on the capture.

      • In the limit this leads to over-extraction and killing the source of the value.

    • Prosocial things focus on the creation.

      • But if there’s no extraction, the organization might die and not be able to help nourish the resource.

    • It's possible to have both, in balance.

  • Andy Weissman: You’ll Figure it Out.

  • When you believe in what you’re doing you go above and beyond.

    • You invest.

    • You’ll do 10x better work on it.

  • All social imaginaries are like Tinkerbell.

    • If people don't believe in it, it dies.

  • To invest in a thing, you need to believe in the future of it.

    • When people don't believe, they don't bother investing in the future of the thing and it dies.

    • ‘Enrolling’ is believing in someone else’s vision.

  • Coordination approaches often assume shared purpose.

    • Which is never a great assumption.

      • What's good for the organization and what's good for the individual must be at least somewhat divergent.

    • And in some cases is actively wrong.

      • For example, adversarial coordination, as in standards development.

  • The incentives of an individual in an organization can never be fully aligned with the organization.

    • As a leader, how much headcount / budget you have is your political lifeblood.

    • Actions you take to increase your budget could be good for you, but possibly bad for the organization.

  • Long term organizations need the org and the employees to think long-term in order to create resonant value.

    • The company must have a mission that is about the long-term.

    • And the employees must believe in the mission, and be individually incentivized to think long term.

      • If they expect to be in the same seat in 5 years, they’ll naturally think more long term.

      • Never perfectly, but much more.

  • Bottom-up cultures only work in environments where people feel trust and autonomy.

    • A bottom up environment where everyone is afraid of being fired and has no time for anything but action is toxic.

    • That’s manipulative insincerity, a toxic stew.

  • Trust creates efficiency.

    • You don’t need to re-litigate every little thing; people give one another the benefit of the doubt.

    • It changes the baseline efficiency of every action.

    • Investing in increased trust is a great way to make a given organization or product significantly more valuable in the long term.

  • If you presume a given model describes reality you'll structurally pay less attention to the things that don't fit the model.

    • That's precisely where the value and insight comes from!

    • The surprisal is the gradient of learning.

    • All models must be wrong–they need to distill complex reality by reducing dimensionality.

    • But some models are useful.

  • One of the characteristics of good PMs is that they don't get embarrassed by being surprised.

    • Instead of feeling like they're caught off guard, they absorb the surprising fact with grace and curiosity.

      • Cosmically calm.

      • Like it was always meant to be this way, and now that they know this fact they can be even better balanced and on a path to success than they were before.

    • Not shying away from disconfirming evidence, leaning in with grace.

  • Mentorship gives you an order of magnitude more knowhow than reading a book.

    • First, because they tell you the real situation, unfiltered, not something they have to simplify for a book or sanitize to avoid angering other people.

    • Second, because it's not a thing you're reading about after the fact where all you can be is passive.

      • You're (indirectly) in the arena, in the loop, trying to help solve the puzzle.

    • Real-world experience is still way richer… but there are only so many hours in the day.

    • Mentorship scales more easily and gives you more leverage to increase knowhow acquisition.

    • Win-win!

  • It's impossible to be accurately self-critical.

    • Your ego will defend anything that feels dangerous.

    • You need criticism from outside.

    • That you are forced to hear, either because

      • 1) you trust the source has your best interest in mind, or

      • 2) it's inescapable and impossible to ignore

        • e.g. feedback from someone who outranks you, who says it in front of others.

  • S-curves look like exponentials… until you run into a constraint.

    • Every exponential runs into a constraint.

    • Until you run into it, it’s hard to imagine what it will be.

    • There’s always a bottleneck in every system, because what counts as a bottleneck is not absolute but relative.

    • The bottleneck was always there, it just wasn’t limiting anything until a certain scale.
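
    • In logistic form (a standard model, my gloss): $\dot{x} = r\,x\,(1 - x/K)$ is indistinguishable from the pure exponential $\dot{x} = r\,x$ while $x \ll K$. The carrying capacity $K$ is in the equation from the start; it just isn’t binding until $x$ approaches it.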

  • Emergent phenomena (like power dynamics) are never primary.

    • They're always secondary.

    • Abstract, not concrete.

    • They are impossible to show concretely to people.

    • Even when they’re staring someone in the face they might miss them.

  • Humor can be a release valve in a system that prevents any one person from becoming too powerful.

    • Apparently The Dawn of Everything describes how, in various indigenous societies, when a chief had to emerge to coordinate behavior, everyone would ridicule and mock him to keep him from getting too powerful.

  • Schismogenesis is a kind of creative opposition.

    • You look at something your neighbor does and oppose it.

      • “Neighbor” might be in time, as in “the generation before mine.”

    • Without that opposition, there’s no way for something distinct to bud off.

    • For something to break off there has to be a negative boundary gradient for it to pinch off into a new thing.

    • There has to be a schism or else it just gets sucked back into the main thing.

  • Progress across the millennia is an emergent social process, more than the outcome of any individual genius.

  • "Taking a vig" is about "taking a cut" when it was coerced or illegitimate.

    • When it's predatory, not consensual.

  • Sarumans believe the ends justify the means.

    • That is, you can do immoral actions in pursuit of moral outcomes.

    • Radagasts believe that you need to find moves with resonance: moral actions and moral outcomes.

  • Systemic problems require systemic solutions.

  • One way to make things resonant is to take things you know are good for you in the long-term, and then work to find the joy in them in the short-term too.

    • So it's not like eating your vegetables, where you dislike it in the short-term but like it in the long-term, but it's something you enjoy in the short-term and also in the long-term.

    • For example, for workouts, find the power in the physical meditation to ground you in the moment.

  • Meaning comes from tension.

    • “Meaning comes from cost” is a sub-insight of this.

  • In practice, humans don't care about truth, we care about the elimination of uncertainty.

    • The definition of being human is to be insecure.

    • Full enlightenment by everyone would lead to nothing ever happening.

  • Every idea that became inevitable started off wrong.

    • Then again, the vast majority of wrong ideas never become inevitable.

  • Craving vs calling?

    • Revealed preferences are more likely a craving.

    • Resonant things are more likely a calling.

  • All resonant things are enjoyable on the surface.

    • But most of the things that are enjoyable on the surface are hollow.

  • Nourishing things are not just a source of calories, they’re also healthy.

  • Hollow things are fragile.

    • They shatter under stress.

    • They are not resilient or adaptable.

  • "Human dignity" means treating people as an end in and of themselves.

  • Resonant computing is dignified.

  • Hollow computing leaves us feeling regret.

    • Resonant computing leaves us feeling nourished.

    • LLMs increase the leverage of tech so much that the difference is more important than ever.

  • Resonant computing is technology that invites you to participate in life, rather than a substitute for it.

  • Resonance is when want and need are aligned.
