1. Structure is what makes AI trustworthy.
AI is fast enough to collapse an entire qualitative analysis into a single step. It can read, code, theme, and summarize in one pass. The problem is that when those steps blur together, there is no way to tell where the analysis is grounded and where it is inferred.
Breaking the work into the phases of qualitative analysis changes that. I deliberately moved from ingestion, to batch coding, to codebook development, to full coding, to quantification, and only then to interpretation. At each step, the output could be checked. That structure did more than improve rigor: it made it possible to trust the results, because every conclusion could be traced back to a visible step in the process.
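The phased structure can be sketched as a pipeline in which each stage emits an inspectable artifact before the next stage runs. This is a minimal illustration, not the actual tooling: the phase names follow the text, but every function body, data structure, and example string here is hypothetical.

```python
# A minimal sketch of a phased analysis pipeline. Each phase returns an
# artifact that can be inspected before the next phase begins; nothing
# is interpreted until quantification is done. All names are illustrative.

BATCH_SIZE = 50  # the text describes coding ~50 change units at a time

def ingest(raw_docs):
    """Split raw documents into discrete units, keeping their source."""
    return [{"source": i, "text": line.strip()}
            for i, doc in enumerate(raw_docs)
            for line in doc.splitlines() if line.strip()]

def code_units(units, codebook):
    """Assign codes batch by batch; a naive keyword matcher stands in
    for the real (human- or model-assisted) coding step."""
    coded = []
    for start in range(0, len(units), BATCH_SIZE):
        for u in units[start:start + BATCH_SIZE]:
            coded.append(dict(u, codes=[c for c in codebook
                                        if c in u["text"].lower()]))
    return coded

def quantify(coded):
    """Count code frequencies so later claims can cite concrete numbers."""
    counts = {}
    for u in coded:
        for c in u["codes"]:
            counts[c] = counts.get(c, 0) + 1
    return counts

# Each intermediate result is checkable before moving on.
docs = ["The new workflow felt faster.\nOnboarding was confusing."]
units = ingest(docs)
coded = code_units(units, codebook=["workflow", "onboarding"])
counts = quantify(coded)
print(counts)  # → {'workflow': 1, 'onboarding': 1}
```

The point of the sketch is the shape, not the matcher: because `units`, `coded`, and `counts` exist as separate objects, any one of them can be audited before interpretation starts.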
2. Slowing AI down is what prevents hallucination.
Left unconstrained, AI will fill in gaps. It will connect patterns, smooth inconsistencies, and complete partial ideas. While this can be useful in other contexts, it's risky in qualitative analysis and, perhaps more importantly, it made me as the user uncomfortable.
Knowing this, I mitigated these tendencies not with a single instruction, but by deliberately slowing the process down. Coding was done in batches of roughly 50 change units at a time. Themes were not named until all batches were complete. Interpretation was held until after quantification. At each stage, the prompt focused only on the task at hand, not the end result.
That pacing mattered. It reduced the opportunity for the model to “complete the story” before the data had been fully examined. In practice, hallucination was less about false facts and more about premature synthesis. Slowing the process is what kept that in check.
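One way to enforce that pacing is to scope each prompt to its phase, so the model is never shown the downstream goal while an earlier phase is still underway. The prompt wording below is entirely hypothetical; it only illustrates the principle that the coding prompt withholds theming, and the theming prompt is unlocked only once all batches are coded.

```python
# Hypothetical stage-scoped prompts: each asks only for the current
# phase's output and explicitly withholds the end goal.

PROMPTS = {
    "batch_coding": (
        "Assign codes from the codebook to each of the following {n} units. "
        "Do not propose themes or conclusions."
    ),
    "codebook_review": (
        "Here are the codes used so far, with example units. Merge duplicates "
        "and clarify definitions. Do not summarize the data."
    ),
    "theming": (
        "All batches are now coded. Group the codes below into themes, "
        "citing the code frequencies provided."
    ),
}

def build_prompt(stage, **kwargs):
    """Fill in the template for one stage only."""
    return PROMPTS[stage].format(**kwargs)

print(build_prompt("batch_coding", n=50))
```

Keeping the prompts separate is the mechanism: premature synthesis becomes hard when no single prompt ever asks for the whole story.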
3. Trust comes from traceability, not output quality.
A well-written summary can still be wrong. That became clear repeatedly. The analysis only became reliable once every claim could be traced back to something concrete: a coded unit, a frequency count, or a verbatim excerpt.
This is where the earlier steps paid off. Because coding, theming, and counting were done systematically, it was possible to question any conclusion and follow it back through the process. When something didn’t hold up, it could be corrected without redoing the entire analysis.
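Traceability of this kind can be made concrete by storing, alongside every claim, the code, frequency count, and verbatim excerpts that support it. This is an illustrative structure with invented example values, not the author's actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A conclusion plus the evidence it rests on, so it can be audited
    without redoing the whole analysis."""
    text: str        # the interpretive claim
    code: str        # the code it is grounded in
    frequency: int   # how often that code appeared
    excerpts: list = field(default_factory=list)  # verbatim units

    def trace(self):
        """Return the evidence chain backing this claim."""
        return {"code": self.code, "n": self.frequency,
                "excerpts": self.excerpts}

# Hypothetical example values, for illustration only.
claim = Claim(
    text="Onboarding friction was a recurring concern.",
    code="onboarding",
    frequency=14,
    excerpts=["Onboarding was confusing.", "I got lost during setup."],
)
print(claim.trace()["code"])  # → onboarding
```

If a claim fails scrutiny, the fix is local: correct the coding behind that one `Claim` rather than rerunning the entire pipeline.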
That traceability is what builds trust in AI’s role. Not that it produces polished outputs (we know we can count on it for that!), but that it participates in a process where each step can be verified.