Hello all,
I am sharing an open-source project and calling on practitioners to test it against their own system architecture diagrams. Your real-world feedback is what I need most.
The Problem
Architecture diagrams are drawn to communicate functionality and data flow — not security properties. Cryptographic controls (TLS configurations, key types, cipher suites, certificate authorities, key storage, signing mechanisms) are almost never explicitly captured in diagrams. This makes cryptographic discovery heavily dependent on the knowledge and experience of whoever is doing the review.
What I Built
A Claude AI skill that guides practitioners through a structured, interactive cryptographic discovery process against an architecture diagram. The goal is to systematically surface cryptographic assets that diagrams typically omit — and flag gaps where cryptographic controls would be expected but are unconfirmed.
The skill works through seven stages.
Every discovered asset is assessed for post-quantum risk using the NIST PQC standards (ML-KEM FIPS 203, ML-DSA FIPS 204, SLH-DSA FIPS 205, and the forthcoming FN-DSA FIPS 206) as reference points for migration planning.
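As a rough illustration of that assessment step, here is a minimal sketch of how a discovered asset might be flagged and mapped to a NIST PQC reference point. All names (the function, the mapping tables, the record fields) are hypothetical, not the skill's actual schema:

```python
# Hypothetical sketch: flag quantum-vulnerable assets and suggest a NIST PQC
# reference point for migration planning. Names are illustrative only.

# Published/forthcoming NIST PQC standards by cryptographic usage.
PQC_TARGETS = {
    "key-exchange": ("ML-KEM", "FIPS 203"),
    "signature": ("ML-DSA", "FIPS 204"),
    "hash-based-signature": ("SLH-DSA", "FIPS 205"),
}

# Classical public-key algorithms broken by a large-scale quantum computer.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH", "ED25519", "X25519"}

def assess(asset: str, algorithm: str, usage: str) -> dict:
    """Return a per-asset record: is it quantum-vulnerable, and if so,
    which PQC standard is the natural migration reference point."""
    vulnerable = algorithm.upper() in QUANTUM_VULNERABLE
    target = PQC_TARGETS.get(usage) if vulnerable else None
    return {
        "asset": asset,
        "algorithm": algorithm,
        "usage": usage,
        "quantum_vulnerable": vulnerable,
        "migration_target": target,
    }

# Example: a TLS key exchange discovered on an API gateway.
print(assess("api-gateway TLS", "ECDH", "key-exchange"))
```

Symmetric primitives (e.g. AES-256) would fall through as not quantum-vulnerable in this sketch, which matches the usual guidance that Grover's algorithm is mitigated by key size rather than algorithm replacement.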
What I Need From You
I need practitioners to test this against their own architecture diagrams (real systems, sanitised if needed). The discovery methodology, that is, the questions asked at each stage, is the core of this tool, and it is what I most need validated by people testing it against real systems.
How to Try It
GitHub Repository
https://github.com/weiwenweiwenweiwen/cryptographic-discovery-skill
All diagram types welcome — cloud, on-prem, hybrid, microservices, PKI, legacy. The more varied the architectures tested, the more robust the methodology becomes.
Thank you in advance. Honest feedback — including where it fails — is exactly what I am looking for.
Regards,
Wei Wen