<?xml version="1.0" encoding="UTF-8"?>
<record>
  <title>Interpretable Cyber Threat Detection Using SHAP-Based Explainable AI in Classroom Data Security Systems</title>
  <journal>Information Security Education Journal</journal>
  <author>Dussadee Seewungkum</author>
  <volume>13</volume>
  <issue>1</issue>
  <year>2026</year>
  <doi>https://doi.org/10.6025/isej/2026/13/1/1-19</doi>
  <url>https://www.dline.info/isej/fulltext/v13n1/isejv13n1_1.pdf</url>
  <abstract>This study presents an interpretable cyber threat detection framework using SHAP-based Explainable AI
(XAI) for classroom data security systems. Addressing vulnerabilities in educational digital ecosystems, the
research analyzes a simulated dataset containing diverse threat categories including insider threats, phishing,
unauthorized access, data breaches, and malware to develop a transparent, predictive intrusion detection
model. The methodology employs tree-based classifiers enhanced with SHAP values to quantify feature
contributions, enabling both global and local interpretability of threat predictions.
Key findings reveal that insider threats (n=218) and phishing attacks (n=204) dominate the threat landscape,
underscoring human-centric vulnerabilities over purely technical exploits. Threat severity distribution is
relatively balanced across low, medium, and high categories, supporting robust multi-class classification.
Analysis of access levels indicates administrative and standard user accounts represent primary attack
surfaces, highlighting the need for stringent privilege management. SHAP interpretability results identify
system response time as the most influential predictive feature, demonstrating that behavioral and
performance anomalies are critical indicators of malicious activity.
The study advocates for adaptive, behavior-centric cybersecurity frameworks integrating User Behavior
Analytics, Role-Based Access Control, data-centric protection strategies, and continuous user awareness
training. By bridging predictive accuracy and model transparency, the SHAP-enhanced approach empowers
security analysts to understand the rationale behind detections, reduce false positives, and respond effectively
to evolving threats. Ultimately, this research contributes a comprehensive, interpretable methodology for
safeguarding smart educational environments, emphasizing that effective cybersecurity requires combining
technical defenses with human-factor considerations and explainable artificial intelligence.</abstract>
</record>
