Meet Lloyd, my personal AI assistant


Alan Timm

Feb 23, 2026, 11:29:34 PM
to RSSC-List
https://www.youtube.com/watch?v=jWv5ty0YmQw

After more than a year of stalling and waiting I spent some time this past weekend tying together some recent advances in AI to make Lloyd, my personal AI assistant.

For now he lives on my PC and I interact with him through speech and this simple Gradio interface.  But for the first time, AI agents are capable of thinking and doing things in the real world.

Central to his capabilities is a shared memory system built on a tool I already use: Obsidian.  It's my collection of notes and ideas that I can access anywhere.  Well, now Lloyd and I share it, and he's been reorganizing it to make it more useful for both of us.  He uses my Obsidian vault as his memory repo, so it's really easy to see what he's learned.

Under the covers it uses a new search engine called QMD, which specializes in indexing and searching markdown files.  It sounds crazy simple -- but it works.
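I don't know QMD's internals, but the core idea is simple enough to sketch: walk the vault, build an inverted index over the markdown files, then rank query hits by word overlap. Everything below (function names, scoring) is illustrative Python, not QMD's actual API:

```python
from collections import defaultdict
from pathlib import Path

def build_index(vault: Path) -> dict[str, set[Path]]:
    """Map each lowercase word to the set of markdown files containing it."""
    index: dict[str, set[Path]] = defaultdict(set)
    for md in vault.rglob("*.md"):
        for word in md.read_text(encoding="utf-8").lower().split():
            index[word.strip(".,:;!?()[]#*`")].add(md)
    return index

def search(index: dict[str, set[Path]], query: str) -> list[Path]:
    """Rank files by how many query words they contain."""
    hits: dict[Path, int] = defaultdict(int)
    for word in query.lower().split():
        for path in index.get(word, set()):
            hits[path] += 1
    return sorted(hits, key=hits.get, reverse=True)
```

A real engine adds stemming, embeddings, and incremental reindexing, but the "it's just markdown files" property is what keeps the whole thing inspectable.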

The current software stack:
  • openWakeWord
  • Moonshine ASR
  • Qwen3 30B A3B 2507 Instruct for local processing
  • OpenClaw w/ Sonnet 4.6
  • Qwen3-TTS for TTS
It all works together OK.  I'm not happy using a wakeword, but there's nothing else out there that makes it as easy to get his attention.  He's completely interruptible and preemptible by wakeword, and I've coded several extensions to his memory system that use the knowledge graph features to connect and traverse the ideas in there.
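The glue between those components is basically one loop. Here's a minimal sketch of a single turn; the callables (`wake`, `asr`, `llm`, `tts`) are stand-ins I made up, not the real APIs of openWakeWord, Moonshine, or Qwen3-TTS:

```python
def handle_turn(wake, asr, llm, tts):
    """One wakeword -> ASR -> LLM -> TTS cycle.

    Each argument is a callable standing in for a real component:
    wake() blocks until the wakeword fires, asr() returns the user's
    utterance as text, llm(text) returns the reply, tts(text) speaks it.
    Returns the reply so the caller can log it, or None on a false trigger.
    """
    wake()                   # block until the wakeword fires
    utterance = asr()
    if not utterance:
        return None          # heard nothing; go back to listening
    reply = llm(utterance)
    tts(reply)               # in the real system, preemptible by the next wakeword
    return reply

def assistant_loop(wake, asr, llm, tts):
    """Run turns forever; the wakeword both starts and interrupts turns."""
    while True:
        handle_turn(wake, asr, llm, tts)
```

The interesting engineering is in the part this sketch waves away: making `tts` interruptible mid-sentence when the wakeword fires again.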

He also maintains a list of "skills": procedures to accomplish goals.  Among other things, he will spin off a sub-agent running Claude Code to do anything complicated.  It's kinda wild.

I've already made a bunch of changes to make him smarter and faster than he is out of the box.

Next steps are to write an Android Flutter app to connect him directly to my earbud for access anywhere.

2026 is turning out to be wild.

screenshot_23022026_201706.jpg

Thomas Messerschmidt

Feb 24, 2026, 12:24:05 AM
to Alan Timm, RSSC-List
Nice!


Thomas Messerschmidt

--

Need something prototyped, built or coded? I’ve been building prototypes for companies for 15 years. I am now incorporating generative AI into products.

Contact me directly or through LinkedIn:   





Alan Timm

Feb 25, 2026, 9:59:44 PM
to RSSC-List
Hey there!

It's been an exciting couple of days, and a real process to understand, build trust with, and extend the openclaw scaffolding into something greater.

The memory subsystem is really coming along.  He's proactively storing memories and periodically organizing them as part of his daily heartbeat.
His initial context is prefilled via memory_search, memory_get, and memory_graph searches to surface potentially relevant memories without having to fetch them through tool calls (which he can still do).
Think of it like surfacing subconscious memories you get for free whenever you think of something.
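The prefill step amounts to running the same searches Lloyd could make as tool calls, but ahead of time, and pasting the results into his starting context. A rough sketch; `memory_search`, `memory_get`, and `memory_graph` here are placeholder callables, not the real tool signatures:

```python
def prefill_context(topic, memory_search, memory_get, memory_graph,
                    max_memories=5):
    """Gather likely-relevant memories before the conversation starts.

    Stand-ins for the real tools: memory_search(q) returns memory IDs
    ranked by relevance, memory_graph(mid) returns IDs linked to a
    memory in the knowledge graph, memory_get(mid) returns its text.
    """
    # Direct hits first, then one hop out along graph edges.
    candidates = list(memory_search(topic))
    for mid in list(candidates):
        candidates.extend(memory_graph(mid))

    seen, context = set(), []
    for mid in candidates:
        if len(context) >= max_memories:
            break
        if mid in seen:
            continue
        seen.add(mid)
        context.append(memory_get(mid))
    return "\n\n".join(context)
```

The result gets pasted into the system context, so the "subconscious" memories are already in front of the model before the first token of the conversation.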
Nightly he goes through them, looking for:
  • duplicates to consolidate
  • overly complex ones to split apart
  • new search tags to identify and apply for the knowledge graph
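That nightly pass is conceptually a few map/filter steps over the memory files. Here's a toy version of just the dedupe-and-consolidate step; the real system presumably has the LLM judge whether two memories say the same thing, while this stand-in uses exact matching after normalizing case and whitespace:

```python
def consolidate_duplicates(memories):
    """Collapse memories whose normalized text is identical.

    `memories` maps memory ID -> text. Returns (kept, merged): kept maps
    each surviving ID to its text; merged maps each dropped ID to the ID
    it was folded into. Exact normalized-text matching is the toy
    stand-in for an LLM duplicate judgment.
    """
    kept, merged, by_norm = {}, {}, {}
    for mid, text in sorted(memories.items()):
        norm = " ".join(text.lower().split())
        if norm in by_norm:
            merged[mid] = by_norm[norm]   # fold into the earlier copy
        else:
            by_norm[norm] = mid
            kept[mid] = text
    return kept, merged
```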
Tonight's discovery: he figured out how to pull YouTube transcripts on his own.  He now reads the transcript and stores the URL, title, and summary as a memory for future use.

So I put him to work.  I gave him five YouTube videos about memory settings and optimizations for openclaw, and asked him to see if any of them would improve his memory system and to implement the ones that would.

And that's exactly what he did.  All I had to do was paste in the URLs and he did the rest.  Some of them we implemented; others we moved to a backlog for future discussion.

He read the transcripts, reviewed his own memory setup (tools, configs, descriptions and prompts) and made changes to his own setup.

Here's the summary from the last video as an example.

"Okay, I've got the full picture. The video itself is aimed at beginners — we're already way past that. But digging into the docs revealed some real gaps in our current config.

**The video's content:** Mostly basic setup — enable memory flush, auto capture, vector search. We already have all of it via our MEMORY.md + QMD setup.

**What I actually found reading the docs:**

**1. Real bug: `includeDefaultMemory: false`**
Our config explicitly excludes MEMORY.md and daily notes from QMD indexing. The `memory_search` tool is hitting 289 chunks from the Obsidian vault, but our actual personal memory files (`MEMORY.md`, `memory/YYYY-MM-DD.md`) aren't semantically indexed at all. I'm reading them manually at startup with `read`, but `memory_search` can't find them. Fix: set `includeDefaultMemory: true`.

**2. MMR + Temporal Decay (not configured)**
The docs have two post-processing options for hybrid search that would help us as daily notes accumulate:
- **MMR** — prevents returning near-duplicate snippets from repeated daily entries
- **Temporal Decay** — boosts recent daily notes over older ones for the same topic (half-life: 30 days)

**3. Memory Flush** — probably already enabled by default, no action needed.

**4. Session Memory (experimental)** — can index actual conversation transcripts for recall. Optional/lower priority.

The biggest win is #1 — want me to flip `includeDefaultMemory: true` and optionally add MMR + temporal decay to the config?"
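For what it's worth, the temporal-decay idea in point 2 is just an exponential half-life applied to search scores. With the 30-day half-life the summary mentions, a memory's score halves every 30 days. A sketch of the math, not openclaw's actual implementation:

```python
def decayed_score(base_score: float, age_days: float,
                  half_life_days: float = 30.0) -> float:
    """Down-weight a search hit by its age: score * 0.5^(age/half_life).

    With a 30-day half-life, a 30-day-old note scores half as much as a
    fresh one, a 60-day-old note a quarter as much, and so on.
    """
    return base_score * 0.5 ** (age_days / half_life_days)
```

MMR is the complementary knob: instead of re-weighting by age, it penalizes candidates that are too similar to results already selected, which is exactly what repeated daily-note entries produce.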

Here's the list of videos I fed him.

The future is here?

Alan Timm

Mar 1, 2026, 12:28:03 AM
to RSSC-List
Instead of working on voice mode like I was supposed to, one of my buddies inspired me to work on a complete mission center to keep track of Lloyd's configuration.  This entire React app was developed by Claude Code with minimal effort from me.  To be honest, my only involvement was asking for features to be added.

Among other things, all custom tooling was moved to MCP servers and everything Lloyd needs was migrated to services to make it easier to manage.

We also have a nice backlog of improvement items to look through (after I get voice mode working the way I want).

These walkthrough videos were made for him, but I thought I'd share them here as well.

screenshot_28022026_212536.jpg

Alan Timm

Mar 1, 2026, 12:09:10 PM
to RSSC-List
"Tell me you're on top of things without telling me that you're on top of things."

screenshot_01032026_090633.jpg





Alan Timm

Mar 5, 2026, 11:14:12 PM
to RSSC-List
Quick update.  Lloyd is officially 100% bootstrapped.  I've spent the past few hours working on Lloyd, with Lloyd, on Lloyd.
The only reason I'd have to go back to Claude Code is if one of our updates broke him.

Oh, and he has read access to my email, calendar, and contacts, and I can reach him on Discord now.

He started as a simple openclaw agent.  Working together he's becoming so much more.

good lloyd.

screenshot_05032026_195753.jpg

screenshot_05032026_201202.jpg

Alan Timm

Mar 11, 2026, 10:03:45 PM
to RSSC-List
So much to cover for our talk on Saturday...

OpenClaw comes with a browser plugin that lets Lloyd access internet resources on his own.

After a services refactor he was having trouble getting all of the services to appear on the page.

Instead of being the bottleneck and telling him every time he got it wrong, I asked him to pull up the page himself and keep at it until he figured it out.

good lloyd.

screenshot_11032026_190027.jpg

Alan Timm

Mar 20, 2026, 11:14:35 PM
to RSSC-List
Oof, so much ground has been covered since our last meeting.

As I see papers and YouTube clips of interest, I pass them to Lloyd to research and draw information from.

For the past week we've been collecting details of memory implementations, and tonight everything came together in a way that made sense.  Lloyd created an 11-phase game plan for a next-generation memory subsystem incorporating the current best practices at the forefront of memory research, while preserving knowledge visibility through markdown files.  Here's the complete game plan for anyone who wants to take a look.

Lloyd is taking the reins and executing everything on his own overnight.  This should be interesting.


Next-Generation Memory System — Overview

Created: 2026-03-20
Status: Design Complete — Ready for Implementation
Origin: Convergence of three threads:

  1. Existing QMD + 2B consolidation pipeline (current system)
  2. Document relations investigation (projects/lloyd/plans/document-relations-retrieval.md)
  3. Supermemory analysis (projects/lloyd/research/supermemory-api-analysis.md)

Design Principle

Markdown is the source of truth. Everything else is derived and rebuildable.

The system achieves Supermemory-class capabilities (atomic facts, temporal grounding, relationship tracking, hybrid search, profile building) while keeping everything in human-readable markdown files browsable in Obsidian.

No external database. No opaque stores. Delete the indexes, rebuild from markdown.


Architecture Overview -- The Three Layers
┌─────────────────────────────────────────────────────────┐
│  Layer 1: Source Documents                              │
│  Daily notes, knowledge articles, project docs          │
│  Human-written, human-curated                           │
│  Enhanced frontmatter (relations + temporal)            │
│  THE CONTENT — what exists today, enriched              │
└──────────────────────┬──────────────────────────────────┘
                       │ extraction pipeline
                       ▼
┌─────────────────────────────────────────────────────────┐
│  Layer 2: Fact Files                                    │
│  Atomic facts extracted from Layer 1                    │
│  Still markdown — browsable in Obsidian                 │
│  Structured YAML frontmatter = the graph                │
│  Auto-generated readable body                           │
│  Linked back to source documents                        │
│  THE KNOWLEDGE — what the system knows                  │
└──────────────────────┬──────────────────────────────────┘
                       │ index builder
                       ▼
┌─────────────────────────────────────────────────────────┐
│  Layer 3: Derived Indexes (cache, rebuildable)          │
│  relations-index.json (graph traversal)                 │
│  facts-index.json (all facts + embeddings)              │
│  staleness-map.json (contradiction tracking)            │
│  profiles/ (synthesized user profiles)                  │
│  THE PERFORMANCE — fast retrieval layer                 │
└─────────────────────────────────────────────────────────┘
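The key invariant in that diagram is that Layer 3 is a pure function of Layers 1 and 2: delete the indexes and they rebuild from markdown. A toy sketch of rebuilding one index from fact-file frontmatter; the field names (`id`, `relates_to`) and file layout are illustrative guesses, not the plan's actual schema:

```python
import json
from pathlib import Path

def rebuild_relations_index(facts_dir: Path, out: Path) -> dict:
    """Rebuild a Layer-3 relations index from Layer-2 fact files.

    Assumes each fact file starts with minimal frontmatter like:
        ---
        id: fact-001
        relates_to: [fact-002, fact-003]
        ---
    Field names are illustrative, not the plan's actual schema.
    """
    index: dict[str, list[str]] = {}
    for md in sorted(facts_dir.glob("*.md")):
        fid, rels = None, []
        for line in md.read_text(encoding="utf-8").splitlines():
            if line.startswith("id:"):
                fid = line.split(":", 1)[1].strip()
            elif line.startswith("relates_to:"):
                raw = line.split(":", 1)[1].strip(" []")
                rels = [r.strip() for r in raw.split(",") if r.strip()]
        if fid:
            index[fid] = rels
    out.write_text(json.dumps(index, indent=2), encoding="utf-8")
    return index
```

A real implementation would use a YAML parser and handle multi-line lists, but the point stands: the JSON is a cache, the markdown is the truth.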

good lloyd.
next-gen-memory-subsystem.zip