AI Newsletter 5/12/2026

Caleb Bryant

May 12, 2026, 12:35:04 PM
to it-ho...@googlegroups.com
📬 Getting this in Promotions? Move it to Primary so you never miss an edition.
IT HootClub Owl

IT HootClub — AI Community Newsletter

Hands-on. Career-focused. Future-ready.
Issued 2026-05-12

The Schema

Building Intelligence Week 6 — six entities, five junction tables, the editorial decisions encoded in each one, and the ERD rendered as a published artifact. The schema is the foundation; pouring the foundation is the work before any code goes in. Under the Hood: the stack — Postgres in Docker, SQLAlchemy and Alembic, and the discipline of versioning the database alongside the code.

Announcements

Global AI Milwaukee — Tomorrow at WCTC

The May meeting of the Global AI Milwaukee User Group is tomorrow, Wednesday, May 13, 2026 at 5:15 PM, at WCTC (800 Main Street, Pewaukee, WI). Hybrid format — attend in person or online. This is the home meetup for the IT HootClub community, the one that meets where the newsletter is written.

This month's featured speaker is Cameron Vetter, presenting Meet Emily v4 — a personal AI assistant rebuilt on the OpenClaw platform. The talk walks through the evolution from Emily v1 through v4: what worked, what didn't, and what it took to build an AI assistant that feels genuinely personal rather than generic. If you have been following the Building Intelligence series in this newsletter, this talk is a worked example of the same questions from a different builder's perspective.

Networking and food at 5:15; the talk starts after the brief group business. Details and registration: meetup.com/global-ai_milwaukee/events/313746500. See you there.

Building Intelligence

Building Intelligence Week 6: The Schema

Last week, the questions. This week, the first answer.

NL-22 walked through the six architectural questions that have to be settled before a system like this can be built. The most editorially loaded of them — what does the system need to know in order to answer the reference question? — is the question this edition takes on.

The reference question, restated for the third time across this series:

I'm a new software engineer relocating to Milwaukee. Name five local tech events, meetups, or community organizations I should know about in 2026, and briefly describe what each one is for.

That question, taken seriously, is a structural question. It implies entities. It implies relationships. It implies discipline about what counts as a real answer and what doesn't. It implies a system that knows what kinds of things exist, how they relate, who's involved, and where the information came from.

NL-22 deliberately did not answer that structural question. The discipline of that edition was to ask the question and resist the pull to leap to a solution. The order of operations is itself a load-bearing editorial commitment: data architecture before model selection before generation layer. Schema before code before content.

This edition answers the first part. The schema. The shape of what the system has to know, encoded as a real, designed, drawn artifact that exists before any code has been written.

What follows is a walkthrough of six entities, the editorial decision each one encodes, and the diagram that makes the whole shape visible. The schema you're about to see is the planning artifact for a system being built deliberately, in public, with each decision made for a specific reason that the prose will name. That's the editorial argument: the schema is the design, the design is the differentiator, and the differentiator is what makes the build worth doing instead of pointing the inquirer at a generic frontier model.

The Schema, Walked Through

Six entities. Five junction tables. One controlling editorial argument: the system being designed here is not a substitute for a subject matter expert. It is the connection layer between an inquirer with a question and the human networks where deep knowledge actually lives. The schema is optimized for that posture — for routing, not for replacing. The closing of this edition develops that idea more fully; the walkthrough below shows how the schema embodies it.

The order of the walkthrough follows the narrative, not the database. We start with the central entity and build outward.

Organization

Organization is the spine. Every Milwaukee tech entity that the system knows about is one row in this table. Global AI Milwaukee. gener8tor. Choose MKE Tech. FOR-M. Each one, one row, with a canonical name and a stable identifier.

Three editorial decisions are encoded in this entity, and each one was learned the hard way.

The first is a separation between the canonical name and the URL slug. Data Driven MKE — an organization in the candidate set — went through a recent name change. Its Meetup URL still reads mke-big-data, the older identity. If the schema used a single field for both name and identifier, a renamed organization would either lose its identity continuity (every system referencing the old name breaks) or require a URL rewrite (every system referencing the old URL breaks). Splitting them means an organization can be renamed without losing its identity in the database, and a URL can persist even when the name evolves.

The second is a self-referencing parent organization. FOR-M, in the candidate set, has no standalone web surface — it's a program of MKE Tech Hub Coalition, and its primary documentation is a blog post on the parent organization's website. Modeling FOR-M as an independent organization would lose the relationship. Modeling it as a row on the parent organization's record would lose its identity. The right answer is that FOR-M is a real organization with its own row, and that row points to its parent organization through a foreign key that references the same table. Self-referencing relationships are common in schemas of this shape, and they're the right answer when an entity can be both a thing and a part of another thing.

The third is the description split. Two columns: description_raw and description_curated. The raw field holds whatever the source surface provides — the organization's About page, the Meetup group description, the blog post text. The curated field holds an SME-validated, one-or-two-sentence purpose statement, written or approved by an editor, that answers the question what is this organization for? The raw field is for context. The curated field is what the AI layer surfaces in answers. A scraped description is almost always wrong-shape for the question being asked. A curated description, written deliberately to answer the question, is what the system can return with confidence. The split makes the editorial-curation discipline structural rather than policied.
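
To make those three decisions concrete, here is a minimal sketch of the columns they imply, written in the SQLAlchemy style that Under the Hood introduces later in this edition. Treat it as a sketch, not the final definition: the name/slug split, the parent reference, and the description split come straight from the walkthrough above, while the types and lengths are illustrative.

from sqlalchemy import Column, ForeignKey, Integer, String, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Organization(Base):
    __tablename__ = "organizations"

    id = Column(Integer, primary_key=True)
    name = Column(String(200), nullable=False)         # canonical name; can change when an organization renames itself
    url_slug = Column(String(100), unique=True)         # stable identifier; persists through a rename
    parent_organization_id = Column(Integer, ForeignKey("organizations.id"), nullable=True)  # self-reference: FOR-M points at MKE Tech Hub Coalition
    description_raw = Column(Text)                       # whatever the source surface provides
    description_curated = Column(Text)                   # SME-validated purpose statement; what the AI layer surfaces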

TechFocus

An organization is a tech organization, but what kind of tech? AI. Data. Software development. Cybersecurity. Networking. Consulting. Startup and entrepreneurship. Seven categories, deliberately chosen, governed as a controlled vocabulary.

The editorial decision here is that the vocabulary is fixed at the schema level, not extracted from source data. A scraped or web-search-grounded system inherits whatever taxonomy the source uses — Eventbrite's tags, Meetup's categories, LinkedIn's industry codes, free-text blurbs in About pages. The result is convoluted: AI, A.I., artificial intelligence, machine learning, ML/AI, Data Science, Data/AI, all describing overlapping-but-not-identical things, none of them comparable in a query. This schema imposes a controlled vocabulary that the data must conform to at ingestion time, not at query time. Every organization in the database is classified using these seven categories, no exceptions. An organization that genuinely cannot fit into any of the seven is the trigger for considering whether the vocabulary needs to grow — not because the taxonomy needs to be exhaustive, but because the schema needs to be honest.

The relationship between Organization and TechFocus is many-to-many, encoded in a junction table. An organization can be AI and Networking. It can be Software Development and Startup and Consulting. Real organizations span multiple categories. The schema reflects that without forcing a primary-only choice.

The junction table carries one additional column worth naming explicitly: editorial_priority. This is the field that does the most work to differentiate the build from a generic model, and its placement matters. The priority is not a property of the organization globally — it is a property of the organization within a specific tech focus. Global AI Milwaukee is flagship-tier for AI inquiries; for Networking inquiries, it is notable but not flagship, because for someone primarily looking for networking opportunities, other organizations might be a better starting point. Encoding the priority at the junction means the system can return different rankings for the same organization depending on what topic is being asked about. That capability is the structural answer to why this build outperforms a generic model on questions like the reference question. The priority is editorial judgment, encoded as data, scoped to the question being asked.
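
Continuing the sketch above, here is roughly how the junction table and a focus-scoped ranking could look. The editorial_priority column and its placement on the junction come from the walkthrough; the integer type and the query shape are illustrative assumptions.

from sqlalchemy import Column, ForeignKey, Integer, String, select

class TechFocus(Base):                                   # Base and Organization as sketched under Organization above
    __tablename__ = "tech_focuses"

    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True, nullable=False)   # one of the seven controlled-vocabulary categories

class OrganizationTechFocus(Base):
    __tablename__ = "organization_tech_focus"

    organization_id = Column(Integer, ForeignKey("organizations.id"), primary_key=True)
    tech_focus_id = Column(Integer, ForeignKey("tech_focuses.id"), primary_key=True)
    editorial_priority = Column(Integer)                 # priority scoped to this focus, not to the organization globally

# Same organization, different ranking depending on the topic being asked about.
ai_ranked = (
    select(Organization)
    .join(OrganizationTechFocus, OrganizationTechFocus.organization_id == Organization.id)
    .join(TechFocus, TechFocus.id == OrganizationTechFocus.tech_focus_id)
    .where(TechFocus.name == "AI")
    .order_by(OrganizationTechFocus.editorial_priority)  # lower number = higher priority in this sketch
)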

Event

Events are first-class entities in this schema because the reference question explicitly names them: five local tech events, meetups, or community organizations. The schema cannot treat events as an afterthought when the question being answered treats them as primary.

An event belongs to one organization (the host) and optionally to one venue (the location). It has a start time and optional end time, a description, a URL, a registration link, and a status. None of those are surprising. The interesting field is recurrence_rule, which stores the schedule of recurring events as an iCalendar RRULE string.

RRULE is the standard format for expressing recurrence in calendar systems. It can encode rules like every Tuesday, the second Thursday of each month, or weekly on Monday for ten weeks, and the harder cases: the last Wednesday of each month except June, November, and December, which is the rule for MKE Tech Hub Coalition's Founders Day. Storing recurrence as a standard format rather than inventing custom fields means the schema accommodates real-world complexity without growing a custom calendaring engine. Standards exist for a reason; using them is craft.
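
If you want to see what an RRULE string actually does, standard libraries can expand one into concrete dates. A minimal sketch with python-dateutil, assuming the Founders Day rule is encoded as shown (the exact string is illustrative, not pulled from the live schema):

from datetime import datetime
from dateutil.rrule import rrulestr

# The Founders Day rule: last Wednesday of each month, except June, November, and December.
rule = rrulestr(
    "FREQ=MONTHLY;BYDAY=-1WE;BYMONTH=1,2,3,4,5,7,8,9,10",
    dtstart=datetime(2026, 1, 1),
)

# Expand the stored rule into concrete 2026 dates.
for occurrence in rule.between(datetime(2026, 1, 1), datetime(2026, 12, 31)):
    print(occurrence.date())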

Venue

Venue is its own entity because venues recur across organizations and events. Ward4, at 313 N Plankinton Avenue, hosts Founders Day every month. It also hosts other events from other organizations. If Venue were modeled as a string field on Event, the same physical address would be repeated dozens of times, with inevitable spelling and formatting drift. Modeling Venue as its own entity gives each physical location a single identity that multiple events can reference.

The venue entity has one field that earns explicit mention: online_capable. A boolean flag that captures whether the venue supports hybrid or online-only events. Some venues do. Some don't. Events that are purely online with no hosting venue have a null venue_id; events at a hybrid-capable venue may run online while still being associated with the physical location. The schema accommodates both modes.

Person

People are in the schema, and the editorial decision in this entity is about what kinds of information the system distinguishes between and how each kind is sourced.

A person publicly listed as an official contact on an organization's website, or in a directory, or on a public Meetup group page, is a public_official entry in this schema. That kind of information is implicitly consented to — the person, or the organization on their behalf, chose to publish it. Including it in the database is no different from what a directory or a search result already does. Their name, role, and publicly listed contact information populate the row.

Additional information — a private email, a phone number that isn't published, internal organizational details, a role that hasn't been publicly announced — requires a different basis. Those facts are sourced either through SME validation (a trusted source provided them in a verified context) or through explicit consent from the person themselves. The schema's contact_role field distinguishes between these categories: public_official for publicly listed contacts, private_sme for individuals who have provided explicit consent to be recorded, and deprecated for entries that should no longer surface — someone who has moved on, asked to be delisted, or whose information is no longer current.

The application layer enforces the rule: only public_official and consented private_sme rows surface in generated answers. Deprecated rows stay in the database for audit purposes but are filtered out at query time. The discipline is not "consent or nothing" — it is "the right level of verification for the kind of information." Public information has a low bar; non-public information has a higher bar. The schema makes that distinction visible and queryable, rather than leaving it to the goodwill of whoever is operating the system.

This is the difference between a system that pretends to be a comprehensive person directory and a system that is honest about what it knows, where the information came from, and what discipline was applied to recording it.
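
As a sketch of how that application-layer rule might look in code, continuing the SQLAlchemy style from earlier: contact_role and its three values come from the prose above, while the consent column name and the types are illustrative assumptions.

from sqlalchemy import Boolean, Column, Integer, String, select

class Person(Base):                                      # Base as sketched under Organization above
    __tablename__ = "people"

    id = Column(Integer, primary_key=True)
    name = Column(String(200), nullable=False)
    contact_role = Column(String(30))                    # public_official | private_sme | deprecated
    consent_recorded = Column(Boolean, default=False)    # illustrative name for the consent record described in the prose

# Only publicly listed contacts and consented SMEs surface in generated answers;
# deprecated rows stay in the database for audit but are filtered out here.
surfaceable = select(Person).where(
    (Person.contact_role == "public_official")
    | ((Person.contact_role == "private_sme") & Person.consent_recorded.is_(True))
)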

Source

Source is the entity that does the most novel editorial work in the schema, and the one most worth pausing on.

Every fact in this database has a source. Not as documentation, not as a comment in the code, not as an editor's mental note — as a real, queryable, foreign-keyed row in a real table. The Source entity records how a piece of information entered the system: whether it was seeded by an SME with no public URL behind it, manually entered by an editor based on direct knowledge, programmatically scraped from a public web surface, or pulled from an official API. It records who validated the source and when. It records a confidence level. It records free-text notes about the source's nature: phone call with Ward4 general manager, April 29, 2026.

Every other entity in the schema — Organization, Event, Person, Venue — references Source through a junction table. An organization can have one source or many. A person's consent record has its own source. An event's schedule has its own source. The schema accommodates the reality that any given fact may be supported by multiple sources, that those sources may have different reliability, and that when they conflict, the system needs a rule for which wins.

The rule is encoded structurally: SME-seeded sources with high confidence win. When two sources for the same fact conflict, the one with source_type = sme_seeded and the highest confidence value is canonical. That rule could have lived as an editorial policy — "we trust SMEs over the web" — and it would have been ignored the first time a scraper updated a record. By encoding the override as data, the schema makes the discipline structurally enforceable rather than aspirationally policied.

This is the part of the schema that NL-22 named in the abstract and that this entity makes concrete. The override mechanism is a real, designed, queryable thing. Human-in-the-loop is not a value statement; it is a foreign-key relationship.
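
A sketch of that override encoded as a query, in the same SQLAlchemy style: source_type and the sme_seeded value come from the prose, the other type values and the integer confidence column are illustrative, and the junction tables that attach sources to specific facts are omitted for brevity.

from sqlalchemy import Column, Integer, String, case, select

class Source(Base):                                      # Base as sketched under Organization above
    __tablename__ = "sources"

    id = Column(Integer, primary_key=True)
    source_type = Column(String(30))                     # sme_seeded | manual_entry | scraped | api (illustrative set)
    confidence = Column(Integer)                         # higher means more trusted; the real scale is a design choice
    notes = Column(String(500))                          # e.g. "phone call with Ward4 general manager, April 29, 2026"

# When sources for the same fact conflict: sme_seeded wins first, then highest confidence.
canonical = (
    select(Source)
    .order_by(
        case((Source.source_type == "sme_seeded", 0), else_=1),
        Source.confidence.desc(),
    )
    .limit(1)
)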

The Diagram

Milwaukee Tech Scene Inventory ERD - six entities, five junction tables, all relationships and cardinality visible

That is the schema, drawn. Six entities, five junction tables, the relationships and cardinality made visible. Every editorial decision named in the walkthrough above is somewhere in that diagram — the parent-organization self-reference looping from Organization back to itself, the editorial_priority field on Organization_TechFocus, the description split on Organization, the consent columns on Person, the Source entity sitting at the bottom-right with junction tables reaching outward to every other entity.

The diagram was rendered in dbdiagram.io from a dbml source file. dbml is an open, human-readable format for describing database schemas. The dbml file lives in the project repository, version-controlled like code; the diagram is generated from it. When the schema changes, the dbml changes first, the diagram is re-rendered, and the diff is reviewable just like a code diff. The tool is free for public diagrams and the format is open — anyone can use it for their own schema design.

For verifying that the live database matches the planned schema once it's stood up, DBeaver — a different tool, a desktop database client — will be useful. That's an NL-24 (or later) concern. For now, dbdiagram.io produces the planning artifact and the dbml file is the source it renders from.

The schema is rendered. The argument is made. What this edition does, and what it does not yet do, is the subject of the closing.

Scope, and What This Edition Argues

What this edition does: defines the schema, walks through the editorial decisions encoded in each entity, and renders the design as a published artifact. The dbml file and the diagram are real, downloadable, copy-able outputs that survive the edition.

What this edition does not do: stand up the database, write the migrations, populate any data, or select a model for the eventual generation layer. Each of those is real work that benefits from being done deliberately and separately. The schema settling first is the precondition for the work that follows, and getting the schema right is the work this edition is here to do.

The closing of the edition is reserved for three editorial threads that the schema implies but that the walkthrough did not develop. They are connected, and naming them now is part of the discipline.

What the system is for

The system being built here is not a substitute for a subject matter expert. It does not pretend to be one, and the schema is designed in a way that makes that posture explicit.

The reference question — name five tech organizations a new engineer should know about — has a deeper structure than the literal text suggests. Underneath "name five organizations" is "point me toward the people and gatherings where this community actually exists." The interesting answer to that question is not a list. The interesting answer is here is where to go, here is when, here is who to talk to. The schema is optimized for that answer. Organization, Event, Venue, Person — each one a first-class entity. The system's job is to know where the human networks are and to route the inquirer toward them, with verified data and editorial judgment about which networks are worth knowing.

That posture is more honest than the alternative. A system claiming to be the comprehensive answer would have to know everything, validate everything, stay current on everything. No such system exists, and the ones that pretend to are mostly hallucinating. A system claiming to be the connection layer between an inquirer and the people who actually have the knowledge is making a smaller, truer claim. The schema reflects the smaller claim. That is a feature, not a limitation.

The build is biased by design, and the bias is the point

An honest reading of this schema is that it encodes bias. The system is going to know about some organizations and not others, recommend some more strongly than others, surface some entities and exclude others. Every selection is a bias. Every ranking is a bias. The controlled vocabulary itself is a bias — seven categories, deliberately chosen, with everything that doesn't fit either accommodated awkwardly or excluded entirely.

Some specific kinds of organizations will not make it into this system: an organization with no public web presence and no SME advocate. An organization that no SME in the curator's network has heard of or chosen to validate. An organization whose work falls outside the seven categories the schema recognizes. A category of activity — say, hardware-focused maker spaces, or cryptocurrency communities, or organizations primarily serving a non-English-speaking population — that this build's curator has not prioritized and that no SME in the network has surfaced. Each of those exclusions is a form of bias.

The honest editorial position is that this bias is deliberate, and the bias is the point. The system is designed to make recommendations. Recommendations require selection. Selection requires judgment about what is worth surfacing and what is not. A system without bias would return every Milwaukee tech organization in alphabetical order with no ranking, no editorial discrimination, no signal about what is worth attending versus what is dormant or marginal. That is the system this build is explicitly arguing against. The build is biased on purpose because the alternative — comprehensive neutrality — is exactly the failure mode of generic models that have to claim coverage of everything and end up doing nothing well.

Naming the bias openly is part of the discipline. The schema is not neutral. It encodes the curator's judgment, the SMEs' validation, and the editorial decisions made during inventory work. A reader of any answer the system eventually produces should know that. Beneficial bias, in the sense that the bias reflects deliberate human curation rather than unexamined assumptions. But still bias. Honest about it.

Better than the alternatives, not better than the people

The question the build keeps having to answer is whether it is worth doing. Two versions of that question are worth answering directly.

Is this better than what is currently available? Almost certainly yes, for this specific kind of question, when populated. A frontier model with web search will surface organizations based on whatever is well-indexed and recently linked, with no way to verify currency, no signal about which are active versus dormant, and no editorial judgment about which are worth recommending. This build returns SME-validated, currency-verified, editorially-ranked answers with provenance attached. The structural advantage is real, and the schema is what makes it real.

Is this as good as an actual SME? No. An SME has tacit knowledge the schema cannot fully capture — which leaders are responsive, which events are worth driving to versus skipping, which orgs have politics, which gatherings welcome newcomers. Some of that is encodable in notes columns and curated descriptions. The deepest version of it is contextual and conversational in a way no structured data fully reproduces. The build does not claim to replace an SME. It claims to be the next-best thing when an SME is not available, and to make SME knowledge more durable and shareable when an SME is involved.

Most readers of this newsletter do not have a Milwaukee tech SME on speed-dial. The system's job is to be that next-best thing. The schema is the foundation that lets it be a good version of that thing rather than a hallucinating one. AI will always be only as good as the data available to it; SMEs can also be wrong. The question worth asking is not is the system perfect? The question is is the system more honest about what it knows and does not know than the alternatives? The build's answer is yes, and the schema is the structural commitment to that honesty.

What comes next

NL-24 will stand up the database. The Docker container from Under the Hood becomes a running Postgres instance. The dbml file becomes a real migration. The schema, today only an artifact, becomes a thing that exists and can hold rows.

No commitment beyond that. The inventory pass, the SME outreach, the eventual seeding of data — all of those are real work that will come in their own time, paced deliberately. The schema is the foundation. The foundation has to be poured before the building goes up. That is the work this edition was here to do.

Under the Hood

Under the Hood: The Stack

A note on audience: this series is written for a mixed audience — students, working IT professionals, and readers who don't yet have a database background. Parts of this section are foundational context for readers new to databases; other parts dig deeper into architectural decisions. If you already work with databases, the next sub-section is skimmable. If you're new to this territory, that sub-section is the one written for you. The rest of the section assumes the foundation has been laid.

The Foundation: SQL, Relational Databases, and the AI Connection

SQL — Structured Query Language, pronounced either "sequel" or as the letters S-Q-L depending on who you ask — is the language for talking to relational databases. It's decades old and still dominant for one straightforward reason: it works. Almost every business application, every analytics tool, every data warehouse, every reporting system runs on SQL somewhere underneath. You ask SQL a question — "give me every organization in Milwaukee tagged as AI-focused" — and SQL goes to the database and brings back the answer. Learning SQL is one of the most durable skills in technology; the SQL you learn today will still be useful in fifteen years, which is something almost no other technical skill can claim.

Relational is the descriptor that matters in the phrase "relational database." Data lives in tables — rows and columns, like a spreadsheet, but with a strict structure and rules about what each column can contain. Tables relate to other tables through shared keys. Our Organization table relates to our Event table through an organization_id column on each event row, pointing back to the organization that hosts it. That structure — the relationships between tables — is what makes the database powerful. A single SQL query can pull together data from multiple related tables in one operation: show me every upcoming AI-focused event hosted by an organization with a Milwaukee address. One query, one trip to the database, one answer. The schema we walked through in Building Intelligence Week 6 only works because relational databases let tables relate to each other in defined, queryable ways.
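
To make that one-query idea concrete, here is a hedged sketch of what such a query could look like against the schema from the walkthrough, run through SQLAlchemy (the library introduced later in this section) against the local development connection string from the Docker setup. Table and column names are illustrative; the join shape is the point.

from sqlalchemy import create_engine, text

engine = create_engine("postgresql://builder:localdev@localhost:5432/mil_tech_scene_db")

# One query, one trip to the database: upcoming AI-focused events
# hosted by organizations with a Milwaukee address.
query = text("""
    SELECT e.name, e.start_time, o.name AS host
    FROM events e
    JOIN organizations o             ON o.id = e.organization_id
    JOIN organization_tech_focus otf ON otf.organization_id = o.id
    JOIN tech_focuses tf             ON tf.id = otf.tech_focus_id
    WHERE tf.name = 'AI'
      AND e.start_time > now()
      AND o.city = 'Milwaukee'
""")

with engine.connect() as conn:
    for event_name, start_time, host in conn.execute(query):
        print(start_time, event_name, "hosted by", host)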

And here's where AI enters the picture. Modern AI systems — specifically, the retrieval layer that lets an AI answer questions about a specific dataset rather than just generating from training data — rely on something called vector embeddings. An embedding is a numerical representation of meaning. When an AI "understands" that "Milwaukee tech meetup" and "MKE technology gathering" mean roughly the same thing, that understanding is encoded as a long list of numbers — a vector — sitting in a high-dimensional space where similar meanings cluster near each other. To build an AI that can answer questions about a specific dataset (like the inventory of Milwaukee tech organizations we're building toward), you need a database that can store and search two different kinds of data: the structured data (the rows and columns of the tables we just discussed) and the embeddings (the vectors that represent meaning). The retrieval layer queries both.

Why Postgres specifically? Postgres has an extension called pgvector that adds native support for storing and querying vector embeddings inside a Postgres database. That means a single Postgres database can hold both the structured organization data and the vector representations that the AI layer will eventually use. No separate vector database. No two systems to keep in sync. One database, two query modes. This is a real architectural advantage and is one of the reasons this build picks Postgres over alternatives that would force a split between a relational store and a separate vector store. We're not using pgvector yet — that's editions away, after the inventory is populated and the embedding layer becomes the next thing to build — but the schema is being designed on a foundation that's ready for it. Picking the database now that the future build will need is one of those decisions that costs nothing today and saves real work later.
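
To show what "one database, two query modes" could eventually look like, here is a heavily hedged sketch. Nothing in it exists yet in this build: the org_embeddings table, the embedding column, and the enabled pgvector extension are all assumptions about a future edition, and the toy query vector stands in for output from a real embedding model.

from sqlalchemy import create_engine, text

engine = create_engine("postgresql://builder:localdev@localhost:5432/mil_tech_scene_db")

with engine.connect() as conn:
    # Query mode 1: structured data, ordinary relational SQL.
    active = conn.execute(text("SELECT name FROM organizations WHERE status = 'active'"))

    # Query mode 2: meaning. Nearest-neighbor search over stored embeddings using
    # pgvector's distance operator; assumes the extension is enabled and an
    # org_embeddings table exists, neither of which is true yet in this build.
    nearest = conn.execute(
        text("""
            SELECT name
            FROM org_embeddings
            ORDER BY embedding <-> CAST(:query_vec AS vector)
            LIMIT 5
        """),
        {"query_vec": "[0.11, -0.27, 0.64]"},  # toy 3-dimensional vector; real embeddings have hundreds or thousands of dimensions
    )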

One more piece of foundational context: there are many kinds of SQL databases. SQL is the language, but the database systems that implement SQL come in many flavors, each with its own dialect, extensions, and quirks. The major variants worth knowing about: MySQL (the longtime dominant open-source choice, widely used in web applications), PostgreSQL (often called Postgres, the open-source choice that prioritizes correctness and advanced features), SQLite (the lightweight, file-based variant — runs without a server, ships embedded in millions of applications), Microsoft SQL Server (often called MSSQL, with its own SQL dialect called T-SQL, dominant in Microsoft-centric enterprise environments), and Oracle Database (the enterprise heavyweight, expensive, still entrenched in large institutions). Each has its own pros and cons, its own performance characteristics, its own ecosystem of tools, and its own subtle differences in SQL syntax. Code written against one variant won't necessarily run unchanged against another — what works in MySQL may break in Postgres, and vice versa. Choosing a SQL database isn't a single decision; it's a choice between databases with real, consequential differences.

The choice for this build is Postgres. The reasons are in the next section.

Why Postgres

The schema we're about to build needs a place to live. The choice of database isn't a detail — it's a structural commitment that's expensive to undo. Here's how I made the call.

The honest contenders for a project this shape are SQLite, Postgres, and managed Postgres services like Supabase or Neon. MongoDB and the document-store family I'll dispatch in one sentence: the data we're modeling has foreign keys, parent-child relationships, and a controlled vocabulary that joins across tables — that's relational shape, not document shape. Wrong tool for this job.

The real question is whether to start with SQLite — the lightweight, file-based database introduced in the previous section — or go straight to Postgres, which is more work to stand up but is the database real production systems actually run on.

The conventional engineering advice is "start with SQLite, migrate to Postgres later when you need it." There's a narrow case where that advice is right: a throwaway prototype, a proof of concept you're not sure will continue, a learning project where the goal is to ship something working in a weekend. For those cases, SQLite is the right call.

This build isn't one of those cases. And here's where I want to be direct about something the series has been circling: a lightweight V1 attempt isn't going to stand up against a generic frontier model with web search. That's the bar. The whole point of building human-in-the-loop into the architecture is that the system has to be measurably better than what a generic model can produce — not roughly equivalent, not a charming local alternative. Better. That bar doesn't get cleared by a lightweight first pass that we promise to upgrade later.

So when I say "start simple, migrate to Postgres later" is bad advice for this build, I mean it specifically: the V1-with-V2-migration framing undercounts the cost of the migration itself, and it sets the bar too low for what the build is trying to achieve. I've lived the first half of that lesson.

In a prior project, I built on SQLite expecting an easy migration to Postgres when the scale demanded it. The migration was not easy. Every SQLite convention needed a wrapper. SQLite's loose type coercion versus Postgres's strict types. SQLite's INTEGER PRIMARY KEY AUTOINCREMENT versus Postgres's SERIAL and IDENTITY columns. SQLite's lax foreign key enforcement by default versus Postgres's strict referential integrity. Date handling differences. Concurrent-write limitations that only showed up under load. What was supposed to be a configuration swap turned into a translation project. Vibe-coding it through that migration was a nightmare I'm not repeating.

The lesson: SQLite was cheaper on day one and significantly more expensive on day forty. A professional building something meant to last picks the shape of the right answer at the start, even when it costs more setup, because the migration debt compounds and almost always exceeds the setup cost it was meant to avoid.

So: Postgres. From day one. No V1-with-V2-migration-later hedge.

The bonus of picking Postgres locally now is that it's the same database that runs in managed cloud services later. When this build is ready to serve real traffic — reachable from hardais.com, available to anyone who wants to query it — the path is "dump the local database, restore to a managed Postgres instance, update the connection string." That's a real, named, reproducible operation. Not a rewrite. Not a translation. Same database, different host. The schema I write in local Postgres is the same schema that runs in Supabase or Neon or AWS RDS when the time comes. Portability without migration debt.

That's the argument. The next question — and this is the one that matters most for the "how do I actually build it" instinct — is where the database physically lives during the build phase. That's the next section.

Where the Database Lives

The Postgres argument from the previous section is settled. The next question is more interesting and gets written about far less: where does the database physically live while you're building?

Tech instructional content has a pattern. It tells you what to build and why to build it, and then it skips the part where you actually sit down at a computer and make the database exist. This section is for that part.

The three honest options:

Option 1: Install Postgres directly on your machine. Download the installer from postgresql.org, run it, get pgAdmin (the official admin interface) as part of the bundle. Postgres runs as a Windows service, the data files live in C:\Program Files\PostgreSQL\, and you connect to it via localhost. This is the most familiar path for anyone who's installed software on a computer before.

I have Postgres installed locally on my development machine, and I'd recommend any working developer do the same. It's a general-purpose database server that's useful across many projects, and pgAdmin is genuinely valuable for browsing data, running ad-hoc queries, and managing databases without writing SQL by hand. The local Postgres install is your general database environment. What I'm not doing is using that local install as the home for this specific project's database. The reason is in Option 2.

Option 2: Run this project's Postgres database inside a Docker container. Postgres still runs — it's just running inside a container, isolated from the rest of the system, defined by a configuration file that lives in the project's git repository. This is the right choice for this build, and the longest part of this section is about why.

Option 3: Skip local entirely, use managed cloud Postgres from day one. Sign up for Supabase, Neon, or AWS RDS, get a connection string, point your code at it. Database is in the cloud, immediately reachable, no local infrastructure to manage. Real option, especially for teams. I'm deferring this for a specific reason I'll name at the end.

What Docker actually is, in plain language

Docker is one of those tools that gets named constantly in technical content but rarely explained well. Before going further, here's what it is and what some of the surrounding terminology means.

What is Docker? Docker is software that runs containers. A container is a self-contained, isolated environment that holds a piece of software and everything it needs to run — the application itself, the libraries it depends on, the configuration it expects. Think of it as a lightweight, disposable box. You start the box, the software inside runs, you stop the box, the software stops. The software inside the container doesn't affect the rest of your computer, and the rest of your computer doesn't affect what's inside the container.

Containers are sometimes confused with virtual machines (VMs). The difference matters: a VM runs an entire operating system inside it, which is heavy. A container shares the host operating system and just isolates the application, which is much lighter. A Postgres container takes seconds to start. A VM running Postgres would take minutes and use ten times the resources.

What is a .yml file? YAML (technically "YAML Ain't Markup Language" — yes, the acronym is recursive and ridiculous) is a human-readable configuration format. Pronounced "yamel." A .yml file is a text file with structured indentation that describes a configuration. It's widely used in DevOps and infrastructure tooling because it's easy for humans to read and edit, unlike XML or raw JSON. Docker uses YAML files (specifically docker-compose.yml) to describe what a container or set of containers should look like.

If I'm using Docker, do I still need Postgres installed? Postgres still needs to exist somewhere — Docker doesn't replace Postgres, it runs Postgres inside a container. The Postgres binary lives inside the container image, downloaded once when you first run the configuration. You don't install Postgres on your machine for this specific project; Docker handles that for you, inside the container's isolated environment. (As noted in Option 1, you might still want Postgres installed locally for other uses — that's a separate decision.)

What happens when versions change? This is where Docker earns its place. In the configuration file, the line image: postgres:16 specifies Postgres version 16. To upgrade to Postgres 17 when it comes out, change that line to image: postgres:17, run one command, and the new version runs. The change is a one-line diff in git, reviewable like any other code change. Rolling back is just as easy — change the line back. Compare that to upgrading a locally installed Postgres, which involves uninstalling the old version, installing the new one, migrating the configuration, and hoping nothing else on the machine depended on the old version. Versioning a containerized service is something I now expect from a tool; versioning a locally installed service is a project.

What are the storage repercussions? Docker uses disk space for two things: the container image itself (the Postgres software, typically a few hundred megabytes per version), and the data volume (the actual database files, which grow with your data). Multiple projects each running their own Postgres container each take their own image space — though Docker is smart enough to share layers between similar images, so it's not strictly cumulative. For a development machine, this is a non-issue. For a server running many containers, storage planning matters.

Does Docker make deployment faster or better? Yes, materially. The same docker-compose.yml file that runs the database on my development laptop can run the database on a production server with minimal changes — typically just swapping the development password for production secrets and adjusting network rules. The promise of Docker is that "it works on my machine" stops being a development-versus-production problem because the container is the machine. The development environment and the production environment run the same container, defined by the same file. Deployment becomes a matter of moving the file, not of recreating the environment.

Why Docker for this project specifically

The editorial principle: the database configuration is a property of the project, not a property of your machine.

When Postgres is the project's database via a local install, the database setup lives on your laptop. Which version of Postgres, which extensions are enabled, what port it uses, what configuration tweaks you made when you set it up six months ago — all of that lives on your specific machine, in places that aren't tracked anywhere. If you move to a new laptop, you reproduce that setup from memory, hoping you remembered every step. If a collaborator joins the project, they install Postgres themselves and try to match your version. If something breaks, you debug it by remembering what you did when you set it up.

That's the failure mode this section exists to address. The database isn't a feature of the project — it's a feature of your environment that happens to be running the project. The two are tangled.

Docker untangles them. The configuration of the database lives in a file called docker-compose.yml that sits in the project repository. Anyone who has the repository and has Docker installed can run one command — docker compose up — and get the same database. Same version, same extensions, same configuration. Reproducible by design. The database is now a property of the project, not the machine.

Here's what that file looks like for this build:

services:
  postgres:
    image: postgres:16
    container_name: mil_tech_scene
    environment:
      POSTGRES_USER: builder
      POSTGRES_PASSWORD: localdev
      POSTGRES_DB: mil_tech_scene_db
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

That's the whole file. Fourteen lines defining a complete, runnable Postgres database. It specifies Postgres version 16, names the container mil_tech_scene and the database mil_tech_scene_db, sets a development username and password, exposes the standard Postgres port to the host machine, and persists the data files in a Docker-managed volume so the database survives container restarts.

A few notes on what this file is and is not:

  • It is a development configuration. The password is "localdev" because this database is reachable only from my machine and contains no real secrets. Production hardening — real passwords managed as environment variables or secrets, network restrictions, backup configuration, encryption at rest — comes when the database is exposed beyond local. That's a later concern, handled differently.
  • It is checked into git. The configuration is project history. If I change the Postgres version in three months, that change is a diff in the repository, reviewable, reversible. Same discipline as code.
  • The data is not in git. The postgres_data volume holds the actual database files, and it's managed by Docker outside the repository. The configuration is versioned. The data is persistent but not versioned. That distinction matters when collaborators join: they get the configuration from git, they get their own empty database from running the configuration, and they populate it however they need to.
  • That separation is also a security protection. Putting database files into git would be a real security failure. Git commits are permanent — once data lands in commit history, it's there forever, even if a later commit deletes it. Database files contain every row, every value, every potentially sensitive piece of information the database holds. Real-world breaches have happened because someone committed a database file or a credentials file by accident. Docker volumes existing outside the repository isn't just a convenience; it's a structural protection against that failure mode.

One concrete consequence: when I move to a new laptop, the database setup is a three-step recovery. Install Docker. Clone the repository. Run docker compose up. The database that comes up is identical to the one I left behind, modulo the data itself (which I'd back up and restore separately). That's the reproducibility payoff. The setup cost of Docker is real, but it pays itself back the first time the database needs to exist somewhere it doesn't currently exist.

Why not managed cloud from day one

Supabase, Neon, and AWS RDS are real, defensible options. For some builds, they're the right call from day one. The reason I'm deferring is specific to this phase of the project.

During the build phase, the schema is going to change. Frequently. Migrations will run, get rolled back, get rewritten. Tables will get added, dropped, restructured as the inventory pass surfaces real edge cases. That iteration is faster, cheaper, and more forgiving against a local database I control completely than against a managed instance where every operation hits a network and every mistake is one connection-string-leak away from being public.

Once the schema stabilizes — when the inventory pass is complete, the seeded data is loaded, and the system is ready to serve real queries — managed Postgres becomes the right call. The migration path, as I named in the previous section, is genuinely simple: dump the local database with pg_dump, restore it to the managed instance, update the connection string in the application configuration. Same Postgres on both sides. No translation layer. No schema rewrite.

That's the deferred-but-named-now decision. Local Docker for the build phase, managed Postgres when the build serves real traffic. The path between them is a tool that ships with Postgres itself.

One housekeeping note before moving on: Docker's command syntax has shifted across versions. The modern form is docker compose (space-separated); older tutorials use docker-compose (hyphenated). They do the same thing. I'm using the modern form throughout this build. Either works.

The next question is how the application code actually talks to this database. That's the next section.

One more editorial note. This section ran long because the question deserved it. The level of deliberation around storage choice, container architecture, and configuration discipline is the point — not a digression from it. A serious data system is built deliberately, and the deliberation is the thing that separates a real build from a point-and-click agent deployment. A comparison edition pitting this build against one of the marketed agent-builder platforms is on the runway; that edition needs lived experience on both sides, which I don't yet have. For now, the contrast is in the form.

How the Code Talks to the Database

So far we've covered what kind of database to use (Postgres) and where it lives (a Docker container defined by a configuration file). The remaining question for this section is the one most working developers ask first: how does my code actually talk to the database?

Two honest options.

Option 1: Raw SQL. Write Structured Query Language directly in your application code. To insert a new organization, your Python code constructs a string that looks like INSERT INTO organizations (name, website) VALUES ('Global AI Milwaukee', 'https://www.meetup.com/global-ai_milwaukee/'), sends it to the database, gets back a result. To query, you write a SELECT statement. To change the schema, you write CREATE TABLE or ALTER TABLE statements and run them by hand. This is the closest path between your code and the database. Nothing is between you and the SQL the database runs.

Option 2: Object-Relational Mapping. Use a library that lets you define each database table as a class in your programming language — in this case, Python. The library translates between your Python objects and the database's tables. You create an organization by instantiating a Python Organization object and calling session.add(org). The library generates the INSERT statement behind the scenes. You query the database by writing Python code that looks more like filtering a list than writing SQL. Schema changes are managed by a separate tool that tracks each change as a numbered, reversible file in your repository.

This second approach is called Object-Relational Mapping, almost always shortened to ORM. The name describes what the library does: it maps between the world of objects (in your programming language) and the world of relational tables (in your database). It's the bridge between code-shape and database-shape. The acronym ORM gets thrown around in technical content as if everyone knows what it stands for — I assume you don't, because for a long time I didn't either, and asking what an acronym stands for is a perfectly reasonable thing to want to know.

The legitimate case for raw SQL

Before arguing for the ORM choice, the case for raw SQL deserves a fair hearing.

Raw SQL has real advantages. It's the most transparent option — the SQL you write is the SQL the database runs, with no abstraction layer translating between your intent and the database's behavior. Performance work is straightforward, because you have direct control over how queries are written. There's no library to learn beyond SQL itself, which is a language you'll have to know regardless. For projects with a small, stable schema, deep SQL expertise on the team, or performance-critical query patterns, raw SQL is often the right answer.

The downside is operational. Every query is a hand-written string. Schema changes are managed by hand — you remember which version of the schema is in your development database, which is in production, which is documented somewhere. Mistakes are easy: a typo in a query is a runtime error, a missing schema migration is a production incident. None of this is unmanageable. It's just discipline you have to apply manually, every time.

The ORM argument: schema changes deserve version control

Here's the editorial principle for this sub-section: schema changes deserve version control just like code does.

Code lives in git. Every change to a function, every renamed variable, every bug fix — all of it is tracked, reviewable, reversible. We accept this discipline for code because we've all been bitten by lost work, by changes nobody can explain, by the question "when did this break and why?" Git solved those problems for code.

Schema changes deserve the same treatment, and they don't get it by default. When the schema lives only in the database — "the table is whatever I last typed at the command line" — there's no history. No diff to review. No way to roll back without a backup. If a teammate asks "when did we add the parent_organization_id column to organizations, and why?", the honest answer is "uh, sometime in March, I think." That's not a serious engineering practice for a build meant to last.

An ORM with proper migration tooling fixes this by making the schema part of the code. The table definitions live in Python files. The history of changes lives in numbered migration files in the repository. The question "when did we add this column, and why?" has a real answer: it's in the migration file, with the commit that introduced it and the description the author wrote when they made the change.

The specific tools for this build are SQLAlchemy (the dominant Python ORM, mature, widely used, with extensive documentation) and Alembic (SQLAlchemy's migration tool, designed to work alongside it). SQLAlchemy handles the Python-to-database translation. Alembic handles the schema-change history.

Here's what a table definition looks like in SQLAlchemy, simplified to show the shape:

from sqlalchemy import Column, ForeignKey, Integer, String, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Organization(Base):
    __tablename__ = "organizations"

    id = Column(Integer, primary_key=True)
    name = Column(String(200), nullable=False)
    url_slug = Column(String(100), unique=True)
    website = Column(String(500))
    parent_organization_id = Column(Integer, ForeignKey("organizations.id"))  # self-reference to the parent organization
    status = Column(String(50))
    description = Column(Text)

That Python class is also the database table. The columns in the class are the columns in the table. The relationships (the ForeignKey reference back to organizations.id, which captures the parent-organization pattern we discussed in Building Intelligence Week 6) are encoded in the same place. The schema and the code are not two separate things; they're the same thing, in one place, version-controlled in git.
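
In practice, working with the table through the class looks roughly like this. A sketch, assuming the Organization class above and the local development connection string from the Docker section:

from sqlalchemy import create_engine, select
from sqlalchemy.orm import Session

engine = create_engine("postgresql://builder:localdev@localhost:5432/mil_tech_scene_db")

with Session(engine) as session:
    # Insert: instantiate the object, add it, commit. SQLAlchemy writes the INSERT statement.
    org = Organization(
        name="Global AI Milwaukee",
        url_slug="global-ai_milwaukee",
        website="https://www.meetup.com/global-ai_milwaukee/",
    )
    session.add(org)
    session.commit()

    # Query: reads more like filtering a list than writing SQL.
    stmt = select(Organization).where(Organization.name.ilike("%milwaukee%"))
    for match in session.scalars(stmt):
        print(match.name, match.website)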

Alembic handles what happens when this class changes. When I add a new column, Alembic generates a migration file — a small Python script that captures both the forward change (add the column) and the reverse (drop the column, in case the change needs to be rolled back). The migration file goes in a folder called migrations/ in the project repository. Every change to the schema is a file in that folder, numbered in order, reviewable in git, reversible if needed. Schema history becomes project history. The discipline of code review extends naturally to schema review.
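
Here is roughly what one of those migration files looks like. A sketch, not a generated file: real Alembic migrations carry hash-style revision identifiers, and the column being added here is only an example.

"""add status column to organizations"""
from alembic import op
import sqlalchemy as sa

revision = "0002_add_org_status"        # illustrative identifiers; Alembic generates its own
down_revision = "0001_create_tables"


def upgrade():
    # Forward change: add the column.
    op.add_column("organizations", sa.Column("status", sa.String(50), nullable=True))


def downgrade():
    # Reverse change: drop the column, so the migration can be rolled back.
    op.drop_column("organizations", "status")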

What this means in practice

A few concrete consequences of the ORM-plus-migrations choice:

  • Schema changes are diffs. When I add a column or change a constraint, the change shows up as a pull request — a code review I (or a collaborator) can read line-by-line before it lands. Same review discipline as application code.
  • Rolling back is possible. If a schema change turns out to be wrong, the migration file's reverse step can roll it back. Without migrations, "rolling back" means restoring from a backup.
  • The database state is reproducible. Anyone running the project can run all migrations in order from an empty database and end up with the current schema. This is the same discipline-of-the-project principle that drove the Docker decision in the previous section. Configuration is project history, not personal history.
  • SQL is still accessible. SQLAlchemy lets you drop down to raw SQL when you need to, for queries the ORM doesn't express well or for performance-tuning work. You don't lose the ability to write SQL; you gain the option to not write it for the routine operations.

One more loose end from the previous section: the Docker container exposes Postgres on port 5432 with username builder and password localdev. The application code reaches the database through a connection string — a single string that tells SQLAlchemy where to connect, what credentials to use, and which database to open. For this build, that string is roughly postgresql://builder:localdev@localhost:5432/mil_tech_scene_db. In production, that string changes (different host, real credentials, encrypted connection) but the application code doesn't — the connection string lives in environment configuration, not in the code itself. Same code, different deployment, different connection string. That's another piece of the portability argument that runs through this section.
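
One common way that discipline looks in code, sketched with an environment variable name (DATABASE_URL) that is an assumption rather than a project decision:

import os
from sqlalchemy import create_engine

# The code reads the connection string from the environment. Only the local
# development default is hard-coded, and it matches the Docker setup above.
DATABASE_URL = os.environ.get(
    "DATABASE_URL",
    "postgresql://builder:localdev@localhost:5432/mil_tech_scene_db",
)

engine = create_engine(DATABASE_URL)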

A worthwhile aside on whether publishing any of that is risky. It's a good question, and the answer matters. The username, password, and connection string above are safe to print in a newsletter for three specific reasons. First, port 5432 is the well-known default for Postgres — every Postgres installation in the world uses it unless deliberately changed, so naming it publicly tells an attacker nothing they couldn't find in any tutorial. Second, the Docker configuration exposes the database only on localhost, meaning the database is reachable only from my own machine. There's no network path from the public internet to this database, regardless of what credentials anyone reading this newsletter knows. Third, the credentials themselves (builder / localdev) are deliberately throwaway — they're not reused on any other system, and they protect a development database with no real data in it yet.

Where the real risk would live is different. Publishing a connection string for a production database, or one that points at a managed cloud service like Supabase or AWS RDS where the host is reachable from the public internet, would be a genuine security failure. So would committing a credentials file to a public git repository, even briefly — git history is permanent, and there are bots that continuously scan public repositories for exposed credentials. Those are the situations that demand secrets management, environment variables, and the kind of production hardening I mentioned earlier. None of that applies to a local development setup, but all of it will apply later in this build, and it'll be the subject of its own section when it does.

The general rule worth keeping: local development credentials are safe to share when the system they protect isn't reachable; production credentials are never safe to share, full stop. The first paragraph of this aside was the local case. The second was the production case. They're different categories of secret, and treating them as the same category is one of the more common ways teams get themselves into trouble.

That's the stack: Postgres in Docker for the build phase, SQLAlchemy and Alembic for the application layer, migration files versioning the schema alongside the code that uses it. Three deliberate decisions, each one made for a specific reason, each one preparing the system for what comes next.

The Learning Loop

DEFINITION Prompt Injection
A security exploit where a user inputs specifically crafted text to trick an AI into overriding its original instructions or bypassing safety guardrails.
Source: OWASP
TIP Skeleton-of-Thought
To prevent "lazy" or incomplete long-form content, ask the AI to first output a detailed outline, then expand each point in separate prompts to maintain depth.
Source: Microsoft Research
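
In practice that is a two-pass prompt. A rough sketch, with call_model standing in for whichever model client you actually use:

    # Two-pass "skeleton" prompting; call_model is a placeholder, not a real client.
    def call_model(prompt: str) -> str:
        # Swap in a real model call here; this stub just echoes the prompt so
        # the control flow runs without an API key.
        return f"[model response to: {prompt[:60]}...]"

    topic = "versioning a database schema alongside application code"

    # Pass 1: ask only for the skeleton.
    outline = call_model(f"Write a detailed outline, one section title per line, for an article on {topic}.")

    # Pass 2: expand each point in its own prompt so later sections keep their depth.
    sections = [
        call_model(f"Expand this outline point into two or three paragraphs: {line}")
        for line in outline.splitlines() if line.strip()
    ]

    article = "\n\n".join(sections)
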
TOOL Fireflies.ai
An AI-powered meeting assistant that joins video calls to transcribe conversations, track speakers, and generate searchable summaries with automated action items.

Lift-Off

“The only real stumbling block is fear of failure. In cooking you've got to have a what-the-hell attitude.”

— Julia Child, a renowned American chef and television personality who introduced French cuisine to the United States. Before her culinary career, she served as a research assistant for the Office of Strategic Services during World War II. She became a cultural icon by demonstrating that complex skills can be mastered through persistence and a sense of humor.

The Nest Jest

Why did the function cross the road? Because it was called from the other side.

Upcoming Events

Event 05/13/2026 5:15 PM
May Global AI Milwaukee User Group Meeting — Online (WCTC, 800 Main Street, Pewaukee, WI)
Agenda Includes: Networking / Food; Brief Introduction / Discuss Group Business; Featured Speaker. Featured Speaker: Cameron Vetter. Topic: Meet Emily v4. What if your AI assistant felt truly yours—private, deeply personalized, and seamlessly integrated into your real daily workflow? In this talk, I’ll introduce Emily v4, the latest version of my personal AI assistant, rebuilt on the OpenClaw platform. I’ll begin with the history of Emily v1 through v3—sharing the wins, lessons, [Group: global-ai_milwaukee]
Event 05/13/2026 12:00 PM
Entrepreneur & Innovator Resource Fair — Technology Innovation Center, Wauwatosa, WI
Are you an entrepreneur or innovator in need of resources? Are you dreaming of starting your own business?
Event 05/14/2026 9:00 AM
Machine Learning & AI Essentials 2 Days Training – Milwaukee, WI — For venue information, please contact us: in...@skelora.com, Milwaukee, WI
Learn AI & ML fundamentals with practical insights, real-world use cases, and hands-on concepts to boost your career. By Skelora Edu Tech.
Event 05/15/2026 9:00 AM
Artificial Intelligence & Automation 1 Day Workshop | Milwaukee, WI — For venue details reach us at in...@learnerring.com, Milwaukee, WI
Understand AI, Automation & Real-World Business Applications | Hands-On | Beginner to Intermediate
Event 05/16/2026 9:00 AM
PMI-CPMAI® Weekend Training – Project in AI Certification in Milwaukee, WI — 1433 N Water St, Milwaukee, WI
Join our PMI-CPMAI® weekend training and master Project in AI. Learn AI project lifecycle, data strategy, and real-world implementation.
Event 05/16/2026 9:00 AM
PMI-CPMAI® Weekend Training – Project in AI Certification in Kenosha, WI — 100 N Atkinson Rd, Grayslake, IL
Join our PMI-CPMAI® weekend training and master Project in AI. Learn AI project lifecycle, data strategy, and real-world implementation.
Event 05/19/2026 9:00 AM
AI: Toy → Tool → Teammate with Marcus Green — Zoofari Center, Milwaukee, WI
Turn AI from a toy into your business teammate. Learn practical ways to boost consistency, clarity, and leverage—without adding more hours.
Event 05/19/2026 9:00 AM
4-Day Data Science with Python Bootcamp in Milwaukee, WI — 1433 N Water St, Milwaukee, WI
Join our 4-Day Data Science with Python bootcamp! Learn data analysis, ML basics, and work on real-world projects.
Event 05/20/2026 7:00 PM
For-M May Founder Showcase — Ward4, 313 N Plankinton Avenue, Milwaukee, WI 53203
Come see Milwaukee's newest tech startup founders showcase their great ideas!
Event 05/20/2026 8:00 AM
AI Roundtable with SVA Consulting — Harley-Davidson Museum®, Milwaukee, WI
Join SVA Consulting and a cross-industry group of innovators for a morning of insight, collaboration, and connection.
Event 05/26/2026 9:00 AM
Cisco CCNA Training & Certification Program in Milwaukee, WI — 1433 N Water St, Milwaukee, WI
Get CCNA 200-301 certified with hands-on Cisco labs, expert-led training, real-world networking skills, and career guidance.
Event 05/27/2026 9:00 AM
Founders Day — Ward4, 313 N Plankinton Avenue, Milwaukee, WI 53203
Monthly networking and programming for Milwaukee's startup and technology community, hosted by MKE Tech Hub Coalition at Ward4. Recurring last Wednesday of the month.
Source: MKE Tech Hub Coalition (mketech.org)
Event 05/27/2026 6:00 PM
Reasoning for Complex Data through Self-Supervised Learning — Online (Madison Central Public Library, 201 West Mifflin St. Room 302, Madison, WI)
Self-supervised learning deals with problems that have little or no available labeled data. Recent work has shown impressive results when underlying classes have significant semantic differences. We will discuss strategies to enable learning from unlabeled data even when samples from different classes are not prominently diverse. We approach the problem by leveraging novel ensemble-based clustering strategies where clusters derived from different configurations are combined to gen [Group: madison-ai]
Event 05/27/2026 9:00 AM
PMI-CPMAI Certification Training – 3-Day Bootcamp in Milwaukee, WI — 1433 N Water St, Milwaukee, WI
Master AI in Project Management & project in AI concepts with PMI-CPMAI™. 4-day training, real use cases & exam prep.
Event 06/08/2026 6:00 PM
(Hybrid) Random Testing with ‘Fuzz’: 35 Years of Finding Bugs — Online (Madison Central Public Library, 201 West Mifflin St. Room 302, Madison, WI)
online: https://youtube.com/live/DTOTDmrAjq4?feature=share Fuzz testing has passed its 35th birthday and, in that time, has gone from a disparaged and mocked technique to one that is the foundation of many efforts in software engineering and testing. The key idea behind fuzz testing is using random input and having an extremely simple test oracle that only looks for crashes or hangs in the program. Importantly, in all our studies, all our tools, test data, and results were made public so that o [Group: madison-ai]
Event 06/12/2026 5:00 PM
AI Specialists: Supercharge Your Career with AIConnect Networking! - Milwaukee — Location TBD; Register Online, Milwaukee, WI
Experience specialists supercharge networking in Milwaukee on 12 Jun 2026, 5 PM CDT. Connect with professionals!
Event 06/15/2026 6:00 PM
Using AI Wisely: Tips for Everyday Decisions — Antioch Public Library District, Antioch, IL
Get smart about using AI in your daily life with easy tips that actually make decisions simpler and better.
Event 07/24/2026 10:00 AM
Hands-On : Copilot Studio, Microsoft Fabric, Azure AI : Better Together — Online (Online event)
Hands-On Online Workshop: Copilot Studio, Microsoft Fabric, Azure AI: Better Together. Date: 24 July 2026, 10 AM to 5 PM Eastern Time. Level: Beginners/Intermediate. Registration Link: https://www.eventbrite.com/e/hands-on-copilot-studio-microsoft-fabric-azure-ai-better-together-tickets-1983680029367?aff=oddtdtcreator Who Should Attend? This hands-on workshop is open to developers, senior software engineers, IT pros, architects, IT managers, citizen developers, technology pro [Group: artificialintelligenceandmachinelearning]
Event 07/29/2026 9:00 AM
Founders Day — Ward4, 313 N Plankinton Avenue, Milwaukee, WI 53203
Monthly networking and programming for Milwaukee's startup and technology community, hosted by MKE Tech Hub Coalition at Ward4. Recurring last Wednesday of the month.
Source: MKE Tech Hub Coalition (mketech.org)

In the News

News 2026-05-12 — TechCrunch
Vapi hits $500M valuation as Amazon Ring chose its AI platform over 40 rivals
AI voice startup Vapi reached a $500 million valuation after its platform was selected by Amazon Ring over 40 rival platforms. The company has reported tenfold growth in its enterprise sector since early 2025 as more firms transition sales and support calls to AI agents. Vapi’s technology allows businesses to automate complex, real-time vocal interactions at scale. read more
News 2026-05-12 — TechCrunch
Thinking Machines wants to build an AI that actually listens while it talks
Thinking Machines is developing a new AI model architecture that processes user input and generates responses simultaneously. This approach seeks to move beyond the traditional sequential turn-taking of current large language models to create a more natural, real-time interaction. The technology allows the AI to engage in fluid dialogue by listening while it talks, mimicking the flow of a human phone call. read more
News 2026-05-12 — VentureBeat
Railway secures $100 million to challenge AWS with AI-native cloud infrastructure
San Francisco-based cloud platform Railway has raised $100 million in a Series B round to expand its AI-native infrastructure services. The company aims to challenge legacy cloud providers by addressing the specific hardware and performance demands of modern artificial intelligence applications. The funding follows a period of rapid growth, with the platform reaching two million developers without traditional marketing spend. read more

Tools from Hard AIs

AI Airfare Research
Find cheaper flights using an AI-powered research tool built by Hard AIs.
hardais.com/airfare →
Newsletter Archive
Every past edition of the IT HootClub AI Community Newsletter, hosted and searchable.
hardais.com/newsletter →
This newsletter was assembled with an AI-assisted workflow. The editor is human.