I'm writing to express my strong interest in the Gemini-API Powered Intelligent Care Assistant project for GSoC 2026. I have carefully studied the project description and contributor guidelines, and I find the focus on responsible, human-in-the-loop AI systems particularly compelling.
My background is in backend system design and applied Responsible AI engineering. I recently built a multi-layer Responsible GenAI pipeline that combines structured prompt governance, semantic intent filtering, and role-aware output validation. That work gave me practical experience designing AI systems around explainability, ethical guardrails, and controlled escalation, which are the same concerns at the heart of this care assistant project.
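To make the role-aware validation layer concrete, here is a minimal sketch of the idea; the role names, detail levels, and field names are illustrative assumptions of mine, not taken from Catrobat's codebase or the project description:

```python
# Hypothetical sketch of role-aware output validation: a model's draft
# response is filtered against the viewer's role before it is released.
from dataclasses import dataclass

# Roles and the detail level each may see (illustrative values only).
ROLE_DETAIL = {"family": "summary", "caregiver": "full", "supervisor": "full"}

@dataclass
class DraftResponse:
    summary: str      # plain-language, non-technical explanation
    full_detail: str  # richer detail intended for trained staff
    flagged: bool     # True if the draft tripped a safety rule

def validate_for_role(draft: DraftResponse, role: str) -> str:
    """Return only the content the given role is allowed to see."""
    if draft.flagged:
        # Flagged drafts are never shown directly; they escalate to a human.
        return "This update needs supervisor review before it can be shared."
    detail = ROLE_DETAIL.get(role, "summary")  # unknown roles get least detail
    return draft.full_detail if detail == "full" else draft.summary
```

The design choice I care about here is that the safety decision lives in deterministic backend code, not in the prompt, so it can be tested and audited independently of the model.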
To better understand the technical scope, I've started designing a small prototype that covers simulated structured IoT activity ingestion, deviation detection based on time-windowed summaries, and Gemini-based explanation generation under strict non-diagnostic prompt constraints. I'm also exploring how to implement role-based output filtering and supervisor validation workflows at the backend level.
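For context, here is a rough sketch of the deviation-detection step of that prototype; the event schema, threshold, and prompt wording are my own assumptions, intended only to show the shape of the approach:

```python
# Prototype sketch (assumed data shapes, not the project's actual schema):
# summarize activity events over a time window and flag deviations from a
# per-window baseline before any text generation is involved.
from collections import Counter
from datetime import timedelta

def window_summary(events, start, hours=24):
    """Count activity types (e.g. 'kitchen', 'bathroom') in one time window."""
    end = start + timedelta(hours=hours)
    return Counter(e["type"] for e in events if start <= e["timestamp"] < end)

def deviations(current: Counter, baseline: Counter, ratio=0.5):
    """Flag activity types that dropped below `ratio` of their baseline count."""
    return {t: (current.get(t, 0), n) for t, n in baseline.items()
            if current.get(t, 0) < ratio * n}

# Only the flagged deviations are handed to Gemini, wrapped in a strictly
# non-diagnostic prompt along these lines:
NON_DIAGNOSTIC_PROMPT = (
    "Describe the following activity changes in neutral, factual language. "
    "Do not speculate about medical causes, diagnoses, or treatments. "
    "Recommend only that a human caregiver review the data.\n\nChanges: {changes}"
)
```

Keeping detection purely statistical and using the model only to explain already-flagged changes is how I would enforce the non-diagnostic boundary, though I'd welcome feedback on whether this matches the mentors' intent.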
Since there isn’t a dedicated entry task for this idea, I would appreciate your guidance on the preferred technical direction. Specifically:
- Is there a recommended backend stack (such as Python/FastAPI or Node.js) that fits best with Catrobat’s architecture?
- Are there existing system components or repositories where contributions related to structured logging, role-based access control, or AI governance layers would be most valuable?
- Is there a specific format you would prefer for proof-of-concept demonstrations (code repo, architectural design document, demo video)?
Thank you for your time and guidance. I look forward to contributing meaningfully and preparing a strong proposal.