Dear Wazuh Team,
We are a team of pre-final year students from Thiagarajar College of Engineering, Madurai (India), currently developing an AI-driven Monitoring System aimed at ensuring fairness, transparency, and data security during digital examinations conducted in college lab environments.
Our proposed system — powered by a Cross-App Behaviour Engine (CABE) integrated with Explainable Artificial Intelligence (XAI) — aims to continuously observe and analyze system and network-level activities of student devices. The system will identify and flag suspicious behavior (e.g., accessing prohibited applications, unapproved websites, performing external data transfers) and report these in real time to an invigilator/Professor dashboard.
We have studied the Wazuh architecture and believe that its open-source SIEM and XDR capabilities provide a strong foundation for building this project. We wish to leverage Wazuh’s framework for endpoint monitoring, rule-based alerting, and dashboard extensibility.
To successfully design and implement our project, we seek technical guidance, documentation access, and possible internship/training opportunities from the Wazuh team/community.
PROJECT OVERVIEW: An AI-driven Monitoring System for digital examination security in college labs, inspired by the Wazuh architecture and enhanced with a Cross-App Behaviour Engine (CABE) and Explainable AI (XAI).
1. The Wazuh architecture is scalable and agent-based. It has central components for data collection (from endpoints), analytical processing, and dashboard visualization, which makes it flexible for the custom integrations and extensions our college lab scenario requires.
2. A cross-app engine tracking both system and network-level activities is achievable, as Wazuh already supports log aggregation and event monitoring across endpoints, applications, and network devices, with agentless monitoring where needed.
Agent layer (student machines)
→ Wazuh Agents collect: logs, process list, FIM events, DNS/HTTP logs, network
connection metadata, clipboard/file transfer hooks (where permitted)
→ Wazuh Manager / Indexer (central ingestion, normalization, indexing)
→ CABE (separate analytics service sitting adjacent to indexer; consumes
indexed events)
Feature extractor (per-device & cross-device)
Behaviour models (statistical + ML)
Correlation & rule engine (temporal correlation, SIEM rules)
XAI & decision explanation module
→ Invigilator dashboard (integrates Wazuh Dashboard + custom CABE panels)
→ Audit & secure storage (encrypted), Incident response & feedback loop
to update rules/models
COMPONENTS AND RESPONSIBILITIES
A. Collection (Wazuh agents)
B. Central ingestion & normalization
C. Cross-App Behaviour Engine (CABE)
a. Collectors/Streamers: Subscribe to event streams from indexer.
b. Feature extractor: sliding windows (e.g., last N minutes) to compute behavioral features per session and cross-device aggregates:
a. App-switching frequency, new process launches, CPU spikes, clipboard-to-network ratio, unexpected outbound connections, file transfer attempts, browser tab changes to unapproved domains, time-of-day activity patterns.
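As a sketch of the sliding-window feature extraction described above (the event schema used here is an assumption for illustration, not a Wazuh format):

```python
from collections import deque

class SlidingWindowFeatures:
    """Compute per-session behavioural features over the last `window_s`
    seconds. Event shape is illustrative:
    {"ts": float, "type": "app_switch" | "process_start" | "net_conn"}"""

    def __init__(self, window_s=300):
        self.window_s = window_s
        self.events = deque()

    def add(self, event):
        self.events.append(event)
        # Evict events that have fallen out of the sliding window.
        cutoff = event["ts"] - self.window_s
        while self.events and self.events[0]["ts"] < cutoff:
            self.events.popleft()

    def features(self):
        counts = {}
        for e in self.events:
            counts[e["type"]] = counts.get(e["type"], 0) + 1
        n = len(self.events) or 1
        return {
            "app_switches": counts.get("app_switch", 0),
            "new_processes": counts.get("process_start", 0),
            "outbound_conns": counts.get("net_conn", 0),
            "switch_ratio": counts.get("app_switch", 0) / n,
        }
```

The same window object can feed both the per-device features and, aggregated across agents, the cross-device features.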
c. Models:
a. Rule-based: immediate critical rules (e.g., “exam app terminated”, “external display connected”).
b. Anomaly detection (unsupervised): isolation forest, autoencoders, LSTM-based sequence anomaly detectors for time-series of events.
c. Supervised classifiers: if we have labelled historical exam sessions (allowed vs. cheating), use XGBoost, RandomForest, or small CNN/RNN ensembles.
d. Graph-based correlation: link suspicious events across devices (e.g., same external IP used across many students).
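Before moving to heavier anomaly models such as scikit-learn's IsolationForest, a per-feature z-score baseline trained on known-good sessions is a simple statistical stand-in; this sketch is illustrative, not our final model:

```python
import math

class BaselineAnomalyScorer:
    """Per-feature z-score anomaly scoring, fitted on feature rows
    from normal (non-cheating) exam sessions."""

    def fit(self, rows):
        # rows: list of dicts of numeric features from normal sessions.
        self.stats = {}
        for k in rows[0]:
            vals = [r[k] for r in rows]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            # Guard against zero variance with a unit std-dev fallback.
            self.stats[k] = (mean, math.sqrt(var) or 1.0)
        return self

    def score(self, row):
        # Max absolute z-score across features; higher = more anomalous.
        return max(abs(row[k] - m) / s for k, (m, s) in self.stats.items())
```

The per-feature z-scores also feed naturally into the XAI layer, since each score is directly attributable to a named behaviour.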
d. Correlation engine: temporal logic to combine signals (e.g., clipboard copy → suspicious outbound connection → file write → high confidence alert).
e. Scoring & confidence: probability or score per alert, plus contributing features.
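The temporal correlation in (d) can be sketched as an ordered-sequence matcher over the event stream; the event type names and time window are illustrative assumptions:

```python
def match_sequence(events, pattern, within_s):
    """Return True if the event types in `pattern` occur in order within
    `within_s` seconds. Greedy first-match: a simplification that can miss
    a later valid window, acceptable for a prototype.
    events: [{"ts": float, "type": str}, ...]"""
    idx, first_ts = 0, None
    for e in sorted(events, key=lambda e: e["ts"]):
        if idx < len(pattern) and e["type"] == pattern[idx]:
            if idx == 0:
                first_ts = e["ts"]
            idx += 1
            if idx == len(pattern):
                return (e["ts"] - first_ts) <= within_s
    return False
```

For example, the clipboard-copy → suspicious-outbound-connection → file-write chain maps to `pattern=["clipboard_copy", "net_conn_suspicious", "file_write"]`, and a match raises the alert's confidence score.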
D. XAI & Explanation Layer
a. Provide an explanation for each alert: the top contributing features, the triggering event sequence, and the model's confidence.
b. Explanations must be concise and human-friendly (one sentence plus an expandable evidence view).
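The explainability card might be assembled as follows; the feature names and their human-readable phrasings are illustrative:

```python
def explain_alert(feature_scores, top_k=2):
    """Build a one-sentence explanation from per-feature anomaly scores
    (e.g., z-scores), plus an expandable evidence view for the dashboard."""
    labels = {
        "app_switches": "frequent application switching",
        "outbound_conns": "unexpected outbound connections",
        "clipboard_to_net": "clipboard content followed by a network upload",
    }
    # Keep only the strongest contributing features, strongest first.
    top = sorted(feature_scores.items(), key=lambda kv: -kv[1])[:top_k]
    reasons = " and ".join(labels.get(k, k) for k, _ in top)
    return {
        "summary": f"Flagged due to {reasons}.",
        "evidence": {k: round(v, 2) for k, v in top},
    }
```

The `summary` string is the one-sentence view; `evidence` backs the expandable panel.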
E. Dashboard & Workflow
a. Integrate CABE alerts into Wazuh Dashboard or a custom web UI: timeline, per-student view, alert queue, evidence panel (logs, screenshots if permitted, process list), explainability card.
b. Support triage: acknowledge, mark false positive, append notes (used to retrain models).
IMPLEMENTATION:
A) Prototype on a small lab.
B) Expand analytics & XAI.
C) Pilot, privacy, and tuning:
   1. Implement on-device preprocessing for privacy.
   2. Run pilot exams, gather ground truth, measure metrics, and refine thresholds.
   3. Conduct fairness & privacy audits; consult the legal/ethics committee.
   4. Tune the UI and triage workflows.
D) Rollout & maintenance.
METHODOLOGY FOR PROPOSED PROJECT
How Our Project Maps to the Wazuh Architecture
THE "AI-DRIVEN" ENHANCEMENT:
Our proposed project moves beyond a standard Wazuh deployment: Wazuh provides the data and the alerting framework, while our AI/ML models provide the advanced "brain."
1. Data Feed for AI: The Wazuh agents will stream a rich dataset of system behavior to our central server. This includes:
a. Process Execution: Every application a student opens.
b. Network Activity: Every IP address and port they connect to.
c. File System Activity: Files created, modified, or deleted.
d. User Logins/Logouts.
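Wazuh writes alerts as newline-delimited JSON (the alerts.json format); a sketch of filtering that stream by rule severity, with field names typical of Wazuh alerts but to be verified against the deployed version:

```python
import json

# Sample alert lines; field layout is typical of Wazuh alerts.json but
# should be confirmed for the target Wazuh version.
SAMPLE_ALERTS = """\
{"agent": {"name": "lab-pc-01"}, "rule": {"level": 3, "description": "Process started"}, "data": {"process": "chrome.exe"}}
{"agent": {"name": "lab-pc-01"}, "rule": {"level": 10, "description": "Exam app terminated"}, "data": {"process": "examclient.exe"}}
"""

def high_severity(alert_lines, min_level=7):
    """Parse newline-delimited JSON alerts and keep those at or above
    `min_level`, returning (agent name, rule description) pairs."""
    out = []
    for line in alert_lines.splitlines():
        if not line.strip():
            continue
        a = json.loads(line)
        if a["rule"]["level"] >= min_level:
            out.append((a["agent"]["name"], a["rule"]["description"]))
    return out
```

The CABE collectors would consume the same documents from the indexer rather than from a string, but the filtering logic is identical.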
2. Integrating the AI Engine (CABE & XAI):
o Option A: Build a separate "AI Analysis Microservice" that subscribes to the Wazuh alerts and the raw event data (Wazuh can forward this via its RESTful API or to a message bus like Kafka). Our AI model consumes this data, performs its complex behavioral analysis (e.g., "Student A opened a browser, then quickly switched to a PDF reader, then started a Python script—a sequence indicative of looking up answers"), and then sends a new, enriched alert back to the Wazuh API.
o Option B: For simpler models, we could write custom Python scripts as Wazuh Active Response scripts. These are triggered by specific Wazuh rules and can perform additional checks.
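A minimal shape for such an active response script; the stdin-JSON calling convention and payload layout vary by Wazuh version and should be verified, and the deny-list is illustrative:

```python
import json
import sys

# Illustrative deny-list; a real deployment would load this from config.
PROHIBITED = {"teamviewer", "anydesk", "telegram"}

def is_prohibited(process_name):
    """Additional check run when a Wazuh rule fires: flag known
    remote-control / messaging tools by their base process name."""
    return process_name.lower().split(".")[0] in PROHIBITED

def main():
    # Recent Wazuh versions pass the triggering alert as JSON on stdin;
    # the exact payload layout must be confirmed for the target version.
    payload = json.load(sys.stdin)
    alert = payload.get("parameters", {}).get("alert", payload)
    proc = alert.get("data", {}).get("process", "")
    if is_prohibited(proc):
        print(json.dumps({"verdict": "prohibited_app", "process": proc}))

# Wazuh invokes the script directly; the entry point would be main().
```

Keeping the check in a pure function (`is_prohibited`) lets us unit-test it outside the Wazuh runtime.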
3. Explainable AI (XAI) in the Dashboard: This is a crucial part of our project. When our AI model flags a student, the invigilator dashboard shouldn't just say "CHEATING DETECTED." It must explain why: which behaviors occurred, in what sequence, and how each contributed to the decision.
SUPPORT REQUESTED FROM WAZUH
1. Data Collection and Forwarding
2. Integration for AI/Machine Learning
3. Custom Rules, Actions, and Workflows
4. Dashboard and UI Extensibility
5. Security, Privacy, and Compliance
We are confident that this collaboration can serve as a meaningful academic and practical exploration of AI-enhanced security monitoring built upon Wazuh’s powerful foundation.
We would deeply appreciate your technical mentorship, documentation references, and any possible internship or training opportunities to further our learning and implementation.
Thank you for your time and consideration.
We look forward to your valuable guidance and feedback.
Warm regards,
HARISH K
AKSHAI S
SANTHOSH S
Pre-final Year Students, Department of Information Technology
Thiagarajar College of Engineering, Madurai – India

APPENDIX: LOCAL AGENT CONFIGURATION (ossec.conf) FOR LOG DATA COLLECTION
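For log data collection on student machines, the agent-side ossec.conf can enable log forwarding and file integrity monitoring. A minimal sketch; the file paths are illustrative and depend on the lab OS image:

```xml
<ossec_config>
  <!-- Log data collection: forward an application log to the manager.
       The path is an assumption for illustration. -->
  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/exam-client.log</location>
  </localfile>

  <!-- File integrity monitoring on the exam working directory
       (real-time FIM events feed the CABE feature extractor). -->
  <syscheck>
    <directories check_all="yes" realtime="yes">/home/student/exam</directories>
  </syscheck>
</ossec_config>
```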