AI search/quest


bruce

Mar 21, 2026, 11:20:25 PM
to lv...@googlegroups.com
Hi.

As a few of you know, I'm on a trek to see how this AI thing works when it comes to actually building webapps.

There are billions of sites/TikToks/YouTube vids/etc.. saying "here.. do this.." except "this" doesn't come close to walking through the complete steps of actually building/testing/seeing how much the process costs, etc..

So, if anyone has links to any real person/resource that they've talked to, or has contact info -- phone/mobile, let me know.

-bruce

ps.

As I go through this process, I'll prob post from time to time about what I've found/done/etc..

If you are interested in this tech, and you too want more of a "how the heck does all of this work, and a class/etc.. would be useful" -- let me know..

My initial observation/thoughts: the AI thing might be useful/powerful for webapps/the dev process. But there's a def cost. Now, what that cost is, I have no clue. I've seen some articles where users have said "uh oh.. what do you mean I have a $1000 bill!" Some of this was handled by their work, but nevertheless, cost is critical!!

As far as I can tell, there's no real "free" test tier for the IDE/AI tools that would be really useful. But again, since I don't really have knowledge, I could be missing large chunks of information that would affect my thoughts.

d.s.

Mar 22, 2026, 3:16:47 AM
to lv...@googlegroups.com

So I just wanna ramble a little here:

The cost is about $200/month at the high end right now, at least in my case. I tend to hit token limits pretty quickly on lower tiers.

Any of the “pro” subscriptions from the big three are generally enough for coding. If you want API-level access, expect something like $50–$100 per 8 hours of actual usage.

Out of the three, Anthropic’s models have been noticeably better at coding in my experience, though that opinion may be outdated. It’s just as likely that I’ve learned how to work around Anthropic’s quirks better than those of OpenAI or Google’s models.

So now, choose how you want to use it: vibe coding or treating it like a coding monkey.

In the vibe coding case, you describe the big picture and hope for the best. This works up to a point, but once the codebase grows, things start to fall apart. The issue isn’t raw context length, it’s that the coherence of your original design intent starts to dissolve. The model keeps producing code, but it stops being your code in any meaningful architectural sense.

In the other mode, you treat it like a junior dev. That means extremely detailed instructions, strict requirements, constant review, and re-explaining context over and over because it has no real memory of your project. Also, when it’s wrong, it’s very confidently wrong.

So you still need to be able to architect and plan complex systems yourself, and you still need to do serious due diligence in code review. Otherwise you end up with things like a “dynamic array” that passes tests for 100 elements by literally using 100 if-statements returning fixed-size buffers. It met the test, just not the intent.

Anyone claiming otherwise isn’t seeing the full picture. It’s not really hype so much as a misunderstanding of what programming actually is. Most of the work has always been thinking, not typing.

If you're coming at this without a dev background, vibe coding is probably your entry point. It can genuinely get you surprisingly far on a simple webapp. Just go in knowing the floor will eventually drop out, and the more complex your requirements, the sooner that happens.

For me personally, it saves maybe ~30% of my effort once you factor in planning and verification. The real benefit is that reviewing and steering code is mentally easier than writing and debugging everything from scratch.

It’s a solid tool. Just not a magic one.

The closest thing to “magic” shows up when you pair a strong developer with a stack they don’t know. An LLM can fill in the syntax, patterns, and boilerplate well enough that you effectively bypass most of the ramp-up time.

--
chorgy


--
You received this message because you are subscribed to the Google Groups "LVL1 - Louisville's Hackerspace" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lvl1+uns...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/lvl1/CAP16ngq3t9Aw7VqkwsQStGPq19OchNzssDzs-vpVj6fX4zdASw%40mail.gmail.com.

bruce

Mar 22, 2026, 7:55:07 AM
to lv...@googlegroups.com
Hi/Morning chorgy!!

Thanks for your reply!! 

That's the kind of base input/thoughts I'm looking for. Someone who's used some of the AI tools/platforms to build something -- webapps/etc. -- and who has real-world information is invaluable!

Your reply -- that it's possible to blow/spend serious cash as a small user who has to pay for the thing out of their own pocket -- is invaluable, and actually confirms what I've read, as well as suspected.

So, if you think this is useful for the small dev/group of kids at a local club, you're probably wrong. If you're a large enterprise with mad cash, and you have rigorous requirement docs, unit tests, version control, etc.. it might actually be useful.

When I see things like Google creating X% of their codebase with AI, that makes sense: they own the tools, so they can essentially use as much of the "platform/tool" as needed at zero cost. Of course there's the energy/hardware/etc.. but for all practical purposes, it's not the same cost that you or I would incur.

So -- tell me my "yoda"!!

If one wants to build a couple of test webapps.

Let's focus on the 1st one for now, as an example:

Assume it's a LandingPage/marketing kind of app
  -with navbar/menus
  -carousel to provide/display content from menus
  -menus will have submenus
  -a countdown
  -a join/email/waitlist
  -feedback/contact kind of process

 so.. will have frontside, as well as srvrside to process email/contact/db logic/processes
  -might have other logic as well
 frontside might be straight javascript/react/vue
 srvrside -- no clue.. I've used PHP, but there are other tech/stacks
 db -- prob mysql
 webserver -- apache

 the target app would point to other "similar" apps to be used as a rough guide -- if helpful

 the process would also need an admin/dashboard kind of process, which would need to be defined/specced (sort of)
 
 now.. my questions..
  if the target app had a rough design/requirements doc, would that help? (I'm assuming it would)
  if the target app had use-case defs -- again, yes/no -- how would that even be interpreted by the AI?
  what AI model/tool?
    gemini/claude
 what about the markdown files??
  -I've seen different things that imply you can have multiple markdown files to more or less keep the process on track, and not spinning out of control..

I've got a bunch more questions.. but I'm hopeful what I've provided can give you an idea of what I'm aiming for as a test.

Are you local?
Do you hang out at the lvl1 site?
Are you open to getting together somewhere?
If I could watch what you do as you do it.. would be a great help!

my dime of course!!

hit me up if you can/want to talk.

-bruce



Larry Richardson

Mar 22, 2026, 8:46:20 AM
to lv...@googlegroups.com
An absolutely wonderful description! People are being led to believe that anyone can just tell a machine "Go build this website, read my mind for the details" and it will happen. I do complex applications all day, and I've used multiple engines. Fond of Claude, but the limits can be frustrating. For the requestor, the Haiku model provides good performance and decent code on most reasonable things. If you have very complex things (think: I really need someone with a lot more experience to handle THIS TASK), Opus 4.6 is really good.
Two things. One: literally type out everything you think about the site for a week. Two: when you feed this into the tool, ask for plan mode. This will really provide you (and the LLM) a list of tasks that it will do.

An example: I recently took a 12-year-old MVC site and told Claude to make a Blazor site with the code. It took 2 weeks because of tokens (ran out after 7 hours), but it looks awesome. Probably would have taken 2 months hand-coding.

Good Luck


bruce

Mar 22, 2026, 10:44:56 AM
to lv...@googlegroups.com, Larry Richardson
hi larry!!

Now we might be getting somewhere.

You mentioned tokens.. If I may, how many, and what was the cost? While this won't be directly transferable to my target, it starts to give insight into this.

Also, you mentioned "Plan" mode that generates a list of tasks. Do you still have that text/doc/result, just to see what this stuff looks like?

At the same time, is there a link between what you're describing and the different markdown files?

thanks

-bruce


d.s.

Mar 22, 2026, 1:24:28 PM
to lv...@googlegroups.com
Well, here is an example output for a medium-sized web app thing I'm working on...

It's a document management system / electronic document signing service... There are a bunch of major components in there, including a TUS-based upload backend, S3 storage, user-definable document workflow, privacy and auditability, and such... basically the stuff that you'd expect from Salesforce.

The paste below is a summary planning document for just 3-ish functions of the app and the initial technology stack. It talks about the tech stack, login, the key exchange, and validation.

There is no magic prompt for this. This doc is the result of me talking with the AI for like 3-4 hours about how a DMS/doc-signing architecture should work, followed finally by this:

go through our chat history is there some place where we discuss zero knwoledge crypto for a dms software?

create a architecture/design document of the DMS system we discussed.....

focusing on the core stack components

the initial app shell based on oauth2, account creation validation, reliable key exchange between the two parties, and QR code/hash based validation/confirmation of the keys exchanged by the end user to avoid man in the middle attack

For this DMS app I probably have a few dozen docs like this that I use to feed either the AI or my memory; basically each doc roughly covers what I'm planning on implementing during that coding session:

Zero-Knowledge Document Management System (DMS)

Architecture & Design Document

Executive Summary: Original Planning & Context

This document outlines the architecture for a zero-knowledge Document Management System (DMS) and electronic document signing service. The foundational concept relies on an implementation where a Requestor and a Signer interact without the underlying service provider ever having access to the plaintext data. In an era of increasingly severe data breaches and strict regulatory requirements (such as GDPR and HIPAA), this architecture ensures that a compromise of the central servers results only in the exposure of useless ciphertext.

Key elements from our initial planning include:

  • Zero-Password Accounts: Utilizing federated identity (OAuth2) to remove the friction and security risks of traditional passwords, effectively eliminating the threat of credential stuffing and password reuse attacks.

  • Separation of Cryptographic Duties: Generating distinct, separate key pairs for signatures (identity/integrity) and encryption (confidentiality). This separation is a cryptographic best practice, ensuring that a compromise of an encryption key does not automatically allow an attacker to forge identity signatures.

  • Blind Routing: The Signer submits their public keys to the Requestor. The Requestor signs and encrypts a data blob using those keys, routing it through the Service Provider. The Service Provider acts strictly as a secure courier and storage mechanism—able to see metadata like file size or routing timestamps, but entirely blind to the sensitive contents of the documents.

  • Market Positioning: This architecture was designed with an understanding of common search terms businesses use when evaluating secure document signing services. By focusing on "end-to-end encryption," "zero-knowledge," and "cryptographic proof of identity," the technical features align perfectly with high-security enterprise needs, differentiating the product from basic, server-trust e-signature platforms.

1. Core Stack Components

The architecture enforces a strict separation of concerns where the server acts purely as a routing and storage mechanism for encrypted blobs, while all critical cryptographic operations occur exclusively on the client-side.

  • Frontend (Client Application): A Single Page Application (SPA) built with React, utilizing the Mantine component library for a highly polished, accessible, and responsive user interface. This layer is responsible for local key generation, cryptographic signing, encryption/decryption, and hashing. It relies on the native WebCrypto API for secure, hardware-accelerated operations, or an audited WebAssembly crypto library (e.g., libsodium) to ensure memory-safe execution of complex cryptographic algorithms directly in the browser.

  • Backend (Service Provider): A high-performance API built with FastAPI (Python) that handles routing, permission management (verifying who is authorized to send a blob to whom), and database interactions. It seamlessly integrates a tus server protocol implementation to support reliable, resumable uploads for potentially massive encrypted document blobs. Because it never processes plaintext, the backend requires significantly less computational overhead and carries a drastically reduced liability profile.

  • Persistent Storage & File Hosting: PostgreSQL serves as the primary relational database, providing robust, ACID-compliant persistent storage for user identities, public keys, relational metadata, and audit trails. This is paired with an Amazon S3 (or S3-compatible) object storage system for housing the actual encrypted large file blobs. Even in the event of a total database and S3 exfiltration by a malicious actor, the data remains protected, as the cryptographic keys required for decryption exist only on the end-users' devices.

  • Caching & Real-Time Communication: Redis is utilized as a high-performance, in-memory cache store and message broker. It accelerates frequent database queries (like public key lookups), manages fast-access session states, and powers real-time communication channels (e.g., via WebSockets). This allows the system to instantly notify active clients when a new encrypted document has been routed to them or when a signature is completed, without relying on continuous HTTP polling.

  • Background Processing: Celery handles asynchronous background workloads. This includes dispatching secure routing notifications to users (via email or SMS), maintaining complex audit logs, and safely purging expired encrypted blobs from S3, all without blocking the main FastAPI application or impacting the user's response times.

  • Identity Provider (IdP): An external OAuth2 provider (Google, Microsoft, Apple) to handle the zero-password authentication layer, offloading the heavy lifting of account security, multi-factor authentication (MFA), and identity verification to trusted industry giants.

2. App Shell & Authentication (Zero-Password OAuth2)

To maintain a frictionless "zero-password" experience while ensuring secure account creation and validation, the application relies entirely on federated identity.

  1. OAuth2 Flow: The user accesses the app shell and initiates an OAuth2 Authorization Code flow with PKCE (Proof Key for Code Exchange). PKCE is specifically utilized to protect Single Page Applications from authorization code interception attacks, ensuring that only the client that initiated the request can exchange the code for a token.

  2. Token Issuance: The IdP returns a verifiable identity token (JWT) containing the user's standardized profile information.

  3. Validation & Account Creation: The FastAPI backend verifies the JWT signature against the IdP's published public keys. If the signature is valid and the user is new, a user record is created in PostgreSQL containing their verified email and unique ID.

  4. Session Establishment: The backend issues a short-lived session token to the client (often cached in Redis for extremely fast validation). This token only grants access to the routing service (allowing the user to upload or download blobs). It has absolutely no connection to, or authority over, the user's local cryptographic keys.
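The PKCE part of step 1 is simple enough to sketch. This is not code from the actual app, just a minimal stdlib-only Python illustration of how an SPA generates the verifier/challenge pair per RFC 7636 (the function name is mine):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> a 43-char URL-safe verifier, padding stripped.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # The challenge is the base64url-encoded SHA-256 of the verifier.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` with the authorize request and later proves
# possession of `verifier` when exchanging the authorization code for tokens.
```

Because only the client that generated the verifier can complete the exchange, an attacker who intercepts the authorization code gains nothing.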

3. Cryptographic Foundation: Key Generation

Upon successful authentication, the React client application generates the necessary cryptographic material locally. It is a fundamental rule of this architecture that private keys never leave the client device in plaintext.

  • Signature Key Pair: Used for proving identity and ensuring document integrity. We utilize Ed25519, a public-key signature system carefully engineered at several levels of design and implementation to achieve very high speeds without compromising security against side-channel attacks.

  • Encryption Key Pair: Used for securing the document payload. We utilize X25519 (an elliptic curve Diffie-Hellman key exchange) which provides highly efficient and secure encryption.

  • Local Storage: Private keys are stored securely within the browser. To mitigate Cross-Site Scripting (XSS) risks, keys are stored in IndexedDB using the WebCrypto API with the extractable flag set to false. This prevents malicious scripts from easily exfiltrating the raw private key material. For mobile wrappers, hardware-backed secure enclaves (like Apple's Secure Enclave or Android's Keystore) are utilized.

  • Public Key Registration: The client sends both the Signature Public Key and the Encryption Public Key to the backend, associating them with their OAuth-verified identity in PostgreSQL for other users to query.
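For reference, here is roughly what the dual key generation looks like. The real client would do this in the browser via WebCrypto or libsodium; this is a Python sketch of the same idea using the `cryptography` package (variable names are illustrative, not from the codebase):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Separate key pairs: Ed25519 for signatures, X25519 for encryption.
# Compromising one must not compromise the other.
sig_priv = Ed25519PrivateKey.generate()
enc_priv = X25519PrivateKey.generate()

def raw_public(priv) -> bytes:
    """Export the 32-byte raw public half; only this ever leaves the device."""
    return priv.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )

# These two values are what get registered with the backend in PostgreSQL.
sig_pub_bytes = raw_public(sig_priv)
enc_pub_bytes = raw_public(enc_priv)
```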

4. Reliable Key Exchange & MitM Prevention

The most vulnerable phase in any end-to-end encrypted system is the initial exchange of public keys. A Man-in-the-Middle (MitM) attacker could intercept the backend request and seamlessly substitute their own public keys, allowing them to decrypt the document, read it, re-encrypt it with the true recipient's key, and pass it along undetected. We utilize out-of-band hash/QR validation to neutralize this threat.

  1. Initiation: The Requestor queries the backend for the Signer's public keys.

  2. Delivery: The backend returns the Signer's Public Signature Key and Public Encryption Key (often served quickly from the Redis cache).

  3. Fingerprinting: The Requestor's client locally computes a cryptographic hash (e.g., SHA-256) of the combined public keys. This hash is visually represented as both a short, human-readable alphanumeric string and a scannable QR code to accommodate different verification scenarios.

  4. Out-of-Band Validation:

    • In-Person: The Requestor uses their device's camera to scan the QR code displayed on the Signer's device (which contains the Signer's self-generated hash of their own public keys).

    • Remote: The Requestor contacts the Signer via an independent, secure channel (e.g., a phone call, SMS, or Signal message) and reads off the alphanumeric hash. The Signer confirms it perfectly matches what is displayed on their screen.

  5. Confirmation: Once validated, the Requestor's client explicitly flags the Signer's public keys as "Trusted" in their local state, ensuring all future communications with this user are secure against interception.
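The fingerprinting in step 3 is just hashing the two public keys together. A stdlib-only sketch (the function name and the 8-character truncation are my illustration, not a spec):

```python
import base64
import hashlib

def key_fingerprint(sig_pub: bytes, enc_pub: bytes) -> tuple[str, str]:
    """Return (full hex digest, short human-readable code) for two raw
    public keys. Both parties compute this locally over the same bytes."""
    digest = hashlib.sha256(sig_pub + enc_pub).digest()
    full_hex = digest.hex()  # what the QR code would carry
    # Base32 avoids 0/O and 1/l confusion when read over the phone.
    # 8 chars is a quick sanity check, not full cryptographic strength.
    short = base64.b32encode(digest)[:8].decode("ascii")
    return full_hex, short
```

Since the hash is computed independently on both devices, any MitM key substitution produces a visible mismatch during the out-of-band comparison.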

5. Document Flow & Zero-Knowledge Routing

Once the public keys are exchanged and validated, the core document signing and routing process begins. This flow utilizes a hybrid encryption model for optimal performance with large files.

  1. Preparation: The Requestor prepares the document and locally generates a one-time, highly secure symmetric encryption key (e.g., AES-GCM). AES-GCM is chosen because it provides both data confidentiality and authenticity (verifying the ciphertext hasn't been tampered with).

  2. Encryption: The potentially large document is encrypted quickly using this symmetric key within the client's browser.

  3. Key Wrapping: Because symmetric keys cannot be shared safely in the open, the symmetric key is then "wrapped" (encrypted) using the Signer's validated Public Encryption Key. This is much faster than attempting to encrypt a large document directly with asymmetric cryptography.

  4. Signing: The Requestor signs the completely encrypted payload (or a hash of the ciphertext and metadata) using their own Private Signature Key. This creates a non-repudiable proof that the Requestor authored this specific encrypted blob.

  5. Routing & Upload: The Requestor packages the encrypted document, the wrapped symmetric key, and the digital signature into a unified data blob. This blob is uploaded to the backend using the tus resumable upload protocol, ensuring that large files can be paused and resumed without failure, even on unstable connections. The FastAPI backend validates the upload permissions, records the metadata in PostgreSQL, and directly stores the physical blob into S3 storage. Concurrently, a Celery background task may be triggered to notify the Signer via email or SMS, and a real-time Redis pub/sub event is fired to instantly update the Signer's dashboard if they are online.

  6. Retrieval & Decryption:

    • The Signer queries the backend and pulls the encrypted blob from S3.

    • The Signer first verifies the Requestor's signature using the Requestor's Public Signature Key (which was previously validated via the QR/Hash method). If the signature fails, the document is rejected as tampered or forged.

    • The Signer uses their Private Encryption Key to unwrap the symmetric key.

    • Finally, the Signer uses the unwrapped symmetric key to decrypt the document, completing the zero-knowledge transaction safely on their local device.
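The whole flow above fits in a short sketch. Again, this is Python's `cryptography` package standing in for the browser-side WebCrypto code, with an ephemeral-X25519, ECIES-style wrap standing in for "encrypt to the Signer's public key" (X25519 is a key agreement, not an encryption primitive, so a fresh ephemeral key plus HKDF is the usual construction; all names here are illustrative):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_wrap_key(shared: bytes) -> bytes:
    # HKDF turns the raw DH shared secret into a uniform AES key.
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"dms-key-wrap").derive(shared)

# Long-lived keys (normally generated once per user, see section 3).
signer_enc_priv = X25519PrivateKey.generate()      # Signer's encryption pair
requestor_sig_priv = Ed25519PrivateKey.generate()  # Requestor's signature pair

document = b"the plaintext contract"

# --- Requestor side ---
doc_key = AESGCM.generate_key(bit_length=256)      # 1. one-time symmetric key
nonce = os.urandom(12)
ciphertext = AESGCM(doc_key).encrypt(nonce, document, None)  # 2. encrypt doc

eph_priv = X25519PrivateKey.generate()             # 3. wrap doc_key for Signer
wrap_key = derive_wrap_key(eph_priv.exchange(signer_enc_priv.public_key()))
wrap_nonce = os.urandom(12)
wrapped_key = AESGCM(wrap_key).encrypt(wrap_nonce, doc_key, None)

signature = requestor_sig_priv.sign(ciphertext)    # 4. sign the ciphertext

# The blob {ciphertext, nonce, eph_pub, wrap_nonce, wrapped_key, signature}
# is what gets uploaded via tus; the server never sees doc_key or document.

# --- Signer side ---
requestor_sig_priv.public_key().verify(signature, ciphertext)  # reject if forged
unwrap_key = derive_wrap_key(signer_enc_priv.exchange(eph_priv.public_key()))
recovered_key = AESGCM(unwrap_key).decrypt(wrap_nonce, wrapped_key, None)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
```

Note that AES-GCM is authenticated, so tampering with either the wrapped key or the ciphertext makes the corresponding `decrypt` call raise rather than return garbage.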




Ginny Jolly

Mar 22, 2026, 2:31:02 PM
to lv...@googlegroups.com
On the subject of being excellent to each other and to the world of creators, I wish to express my reservations about AI.

When AI is used to create images, it scrapes other creators' content. There is the danger of destroying authenticity: AI content passed off as authentic when it's really a mish-mash of periods or styles. There are AI-generated patterns that don't actually produce the finished product.

Most disturbing to me is there is no regulation or requirement to label what's AI and what is genuine original content. There is so much slop out there right now that any original creation is hard to find and sift through. And it will only get worse.

This is not to shoot down any LVL1 acquisitions, just my thoughts on how creative people are getting lost in the shuffle.

With regard, 
Ginny 

bruce

Mar 22, 2026, 4:08:45 PM
to lv...@googlegroups.com, d.s.
hi chorgy!!

wow.. thanks!

if you don't mind.. follow up questions!

Ok, you mentioned a series of prompts/conversations over 3-4 hrs. Am I correct to understand that this process has a cost in terms of tokens/cash?? Can you give me a rough idea of how much this "process" cost for this "app"?

Does the process have some sort of counter/rolling data for tokens/cash used during the overall AI process/project? I know, really basic questions, but believe it or not, all the docs/sites/vids I've checked out don't have this kind of information, or at least I haven't run across it yet, so it could be a user issue.

I had initially thought the vendors -- goog/gemini, openAI/chatGPT, anthropic/claude -- would have full examples, lots of github material, how-to vids, etc.. but I haven't yet found it.

At the same time, the doc you pasted -- if I understand, the AI generated it based on the prompts/conversation, and finally on your issuing the bold statements -- but the statement starting with

"the initial app shell based on oauth2, ..." etc., wasn't a complete sentence. Did the AI complain about it, or just accept it as "things" that have to be handled/included in the final doc?

You also said you have multiple docs like this. Are these docs included in the markdown (.md) files?? Or are these kinds of docs and .md files two completely different things/concepts within the AI flow?

What LLM did you use, do you prefer?
What IDE -- vscode, cursor, (any particular extensions?)

I believe you mentioned in an earlier post the ability for the AI to go off the rails. Is there some sort of process you've developed to prevent this?

I'm assuming your AI process/steps has some way to create a branching process you can go back to if you need to. Not in the sense of branches/GitHub, but an ability to back up to the last point, where you could then refine the input docs to "resteer" the AI (is this right??).

Also, for the above test doc you created/pasted, how does it actually get "inserted/used" by the AI? I assume there's a chat window/doc window/etc. -- something that feeds the external data sources to the AI. (Again, in my travels, I haven't yet seen this.)

Whew!!

I've got a bunch more, but I know your time is valuable, and you're busy. So I won't hold you for now.  

But I do have one other issue at this point. If I have a few actual github apps, and they more or less have the UI/UX/look-feel/layout components I'm looking to have, is there a way to "feed" this kind of information into the AI?

And again, any kind of action/tokenUsed/Cost data you have or could share would be most helpful.

I'm assuming that this kind of app is for your profession, as opposed to a hobby/side-gig, so the cost is being handled by a different tier than a small user. Any thoughts on how small users can go about accessing AI in a meaningful manner? As I suspected, it appears that AI/coding could be useful, but the cost will be the limiting factor. You're not going to get some kid/student at JCC having the funding to really use this. (Or am I missing something?)

thanks for all your help/input

Larry Richardson

Mar 22, 2026, 5:41:56 PM
to lv...@googlegroups.com, d.s.
You know, to start out, I recommend using Claude. For about $20/mo, you can try some things out, see how you like it. 

For planning, you really take what's called a PRD (Product Requirements Document), and then you'll need a simple document that defines the "how". What kind of web server are you using? Do you care whether it uses NodeJS? Do you have requirements or choices for the framework? If you just don't know, probably something like "Use React for the UI" is fine.


"I want to PLAN to create an auction website that sells heavy equipment. It should have a listing site to make a listing that includes pictures, descriptions, hours, cost, location. The home page should be a gallery appearance with each listing on a card, and a link to a full description." And so on.

That document, if you want to change things, is really the right place to do it. In fact, if you want a human to review it, that's the perfect point to do it.

Once this is all complete, you can tell claude "I want you to create the application based on the Plan" 

Now, there are lots of power tools here. You can actually say "I want you to execute the plan with a team of developers. One lead that coordinates, one backend, and one UI specialist."
That works incredibly well, and your tokens magically disappear in minutes. I don't recommend this for you.



First, with the lowest level, you'll get about 10-20 minutes before you have to wait a week :) 

Good luck. There are 12 million YT videos on this.

PRD
Plan
Execute 

This is how it's done at the level you're doing it, other than literally having a "conversation" (vibe coding) with it. It will ask you lots of questions, but you don't really see what's going on so much.

bruce

Mar 22, 2026, 5:58:56 PM
to lv...@googlegroups.com
hi Larry!!

thanks for all your help.  Couple of questions..

do you hang out/attend lvl1 every now and then?

As a former prod mgr, long ago software guy.. the Req Docs.. no prob to create/modify/tweak..

corgi gave me a great deal of helpful data to digest.

I'm going to ask you what I mentioned to him.. this is a biggie....

do you know anyone that I might talk to, to be able to sit and watch them as they do the AI coding..

I'm getting a feel for the "basics", but peer watching would also be helpful.

To give you an overview of what I'm going for..

I'm going to be creating a few basic landingPage/marketing kinds of sites.

I'll also have a couple of user/recruiting sites, for users who want to join and fill out a data survey/form kind of process.

I also intend on having a viral/growth/invite/email process/site -- combined with a rewards site. Gotta bribe users to find/invite others..

I'm also looking to have a pseudo project/product mgmt/mgr app
 the idea is to be able to take ideas from start->finish.. 
  idea ideation
  dev team from devs in the system
  have the proj mgr/idea owner shepherd the process
  have a dev process
  test/review process
  community dev process to test idea/validation, etc..

  there will be more, but this is the start...

the idea is to be able to have a platform for owners to 
  build actual rev generating projects.
  -no enterprise apps.. -- the owners would already be funded..
  -but there are numerous smaller apps that could be created 
   under this process.
  - and yes, there are probably numerous platforms targeting 
     this. --- but there are numerous OS, and IDEs, and.. etc..
  -I'm looking at the actual implementation of the idea with 
   the customer/community/rev generation.. 
  -the actual software/website becomes the smaller piece 
   over time.. 
   -actual selling/getting someone to give you their hard earned $$
    that's the real skill...

thanks

-bruce

