Dr. Michael Gizzi, a professor of criminal justice sciences, has been named a 2025-26 CAST Research Fellow for his research titled “The Ethics of Artificial Intelligence and Qualitative Research.” The primary goal of his research is to understand how AI technologies are transforming qualitative data analysis and to establish the necessary ethical boundaries to preserve the integrity of academic and scholarly work.
Gizzi has spent more than two years investigating new AI tools, including ChatGPT, Copilot, NVivo, and MAXQDA’s AI Assist, which can aid researchers in organizing and interpreting qualitative data. He brings extensive experience as a certified MAXQDA trainer. According to Gizzi, these tools can now automatically code entire datasets, create subcodes, and summarize complex data, tasks that previously required human judgment. Although these features increase productivity, they also raise important ethical questions about academic accountability, authorship, and authenticity.
“At what point does a summary or theme crafted by an AI tool stop being your research?” Gizzi said. “Should we let machines handle literature reviews or data coding, and how might that affect students’ ability to think critically and solve problems?”
To explore these questions, Gizzi is collaborating with Dr. Stefan Rädiker, a qualitative methodologist and lead developer of MAXQDA in Germany. Together they compare datasets coded by humans with the same datasets analyzed using AI, evaluating the accuracy, reliability, and potential biases of machine-assisted research. By applying both conventional qualitative coding and AI tools to identical datasets, they assess consistency, data privacy concerns, and the presence of “data hallucinations,” inaccurate or misleading insights produced by the algorithms.
The second phase of Gizzi’s research expands the focus to the ethical implications of integrating AI across disciplines. By conducting a qualitative content analysis of recent literature in the social sciences and STEM fields, the project aims to identify major ethical challenges and develop a framework of best practices for the responsible use of AI in academia. Rather than relying on quantitative meta-analysis, he employs a thematic synthesis approach to highlight key methodological, ethical, and practical insights.
This initiative contributes to the national discourse on AI ethics while also directly advancing Illinois State University’s institutional goals.
“The initiative aligns with the University’s Strategic Direction III, which places a strong emphasis on promoting scholarly excellence and innovative research,” Gizzi said. “It also advances the College of Applied Science and Technology’s goal of promoting interdisciplinary cooperation and increasing awareness of the CAST scholarship.”
Moreover, the study reaffirms the Department of Criminal Justice Sciences’ commitment to advancing methodological diversity, academic rigor, and policy relevance.
The expected outcomes of this project include journal publications and a forthcoming book collaboration that will guide scholars in the responsible and ethical application of AI tools in qualitative research. Gizzi’s overarching goal is to ensure that the fundamental principles of scholarly inquiry are strengthened, rather than diminished, by the rapid advancement of technology.
According to Gizzi, “Artificial intelligence is here to stay, and finding a balance between technological innovation and human judgment is a challenge that every researcher and educator faces. Productivity is only one aspect of using AI ethically; another is upholding the fundamentals of intellectual honesty and research integrity.”