To Bot or Not to Bot?
Participants
Summary
The disclosure of personal information related to well-being and mental health requires a high degree of trust between individuals and healthcare providers. However, traditional methods for conducting initial mental health assessments are often time-consuming and resource-intensive, creating challenges due to growing demand, limited funding, and workforce shortages. This research explores the potential of conversational agents (chatbots and AI-driven assistants) to collect sensitive mental health data in a way that is both efficient and trustworthy. The study examines factors influencing trust in AI-driven assessments, including privacy, empathy, transparency, and accuracy, while evaluating how these technologies can reduce the workload of trained professionals. By analyzing user experiences, ethical considerations, and data security measures, this project aims to determine whether conversational agents can serve as a reliable first point of contact in mental health care, ultimately improving accessibility, efficiency, and patient outcomes.
Publications
[1] Taylor, D., Melvin, C., Aung, M. H., & Asif, R. (2024). To Bot or Not to Bot? Analysing Mental Health Data Disclosures. Paper presented at the International Conference on Human-Computer Interaction, Washington DC, United States, pp. 97-115. https://doi.org/10.1007/978-3-031-61379-1_7
Secure and Ethical Approaches to Employee Wellness Measurement
Participants
Summary
Employee wellness programs are increasingly leveraging digital tools, wearable devices, and AI-driven analytics to assess physical, mental, and emotional well-being. However, the collection and processing of sensitive health and behavioural data raise significant concerns regarding data security, privacy, and ethical usage. This research explores secure and privacy-preserving approaches to employee wellness measurement, ensuring that workplace well-being initiatives do not compromise personal data protection. The study examines the role of blockchain for secure data storage, federated learning for decentralized AI-driven analysis, and differential privacy techniques that limit what can be inferred about any individual from aggregated results. Additionally, it evaluates compliance with GDPR, HIPAA, and other data protection regulations while proposing a governance framework for ethical data handling. By integrating cybersecurity best practices and privacy-enhancing technologies, this research aims to develop a trustworthy, data-driven approach to employee wellness that benefits both organizations and their workforce without compromising individual rights.
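To make the privacy-preserving angle concrete, the short Python sketch below shows how a differentially private summary of wellness scores might be released using a Laplace mechanism. It is a minimal illustration only; the function name dp_mean, the epsilon parameter, and the example scores are assumptions for illustration, not artefacts of the project itself.

import numpy as np

def dp_mean(values, lower, upper, epsilon):
    # Differentially private mean of bounded wellness scores (Laplace mechanism).
    values = np.clip(values, lower, upper)       # bound each individual's contribution
    sensitivity = (upper - lower) / len(values)  # max change one person can cause to the mean
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Example: weekly self-reported stress scores (1-10) across a small team
scores = np.array([3, 7, 5, 6, 4, 8, 2])
print(dp_mean(scores, lower=1, upper=10, epsilon=1.0))

A smaller epsilon adds more noise and gives stronger privacy; in practice such released statistics would sit within the kind of governance framework the project proposes rather than being published directly.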
Publications
[1] Buckley, O., Hodges, D., Windle, J., & Earl, S. (2022). CLICKA: Collecting and leveraging identity cues with keystroke dynamics. Computers & Security, 120, Article 102780. https://doi.org/10.1016/j.cose.2022.102780
[2] Earl, S., Campbell, J., & Buckley, O. (2021). Identifying Soft Biometric Features from a Combination of Keystroke and Mouse Dynamics. In M. Zallio, C. Raymundo Ibañez, & J. H. Hernandez (Eds.), Advances in Human Factors in Robots, Unmanned Systems and Cybersecurity - Proceedings of the AHFE 2021 (pp. 184-190). (Lecture Notes in Networks and Systems; Vol. 268). Springer. https://doi.org/10.1007/978-3-030-79997-7_23
Playful Training Against Social Engineering
Participants
Summary
In 2022, 39% of all UK businesses reported identifying a cyber security attack against their own organisation, and of these businesses, 83% identified phishing attempts. Traditional security training commonly consists of point-and-click exercises or video media, yet humans remain one of the most exploitable endpoints for organisations. Simulations and games are increasingly being used for training purposes in organisations (for example, see CyberCIEGE and Decisions & Disruptions). These often remain ineffective, however, as they either (a) simply raise cyber security awareness rather than delivering key security policy and content, or (b) lack accessibility, with complex game pieces and rules that are not easily understood by those unaccustomed to playing games.
We introduce The disPHISHinformation Game: a customisable serious game that delivers phishing training tailored to the threats businesses face on a day-to-day basis. Drawing on inoculation theory (a theory of resistance to persuasion), the game covers email, voice, and SMS social engineering attacks in a format that educates players on the key features of these attacks. Pilot research has seen the intervention used by a large multinational corporation in the UK, and we are actively seeking new opportunities for collaboration in the UK and beyond.
Publications
[1] Henderson, N., Pallett, H., van der Linden, S., Montanarini, J., & Buckley, O. (2024, July). The disPHISHinformation Game: Creating a Serious Game to Fight Phishing Using Blended Design Approaches. International Conference on Applied Human Factors and Ergonomics 2024 (AHFE24), 146–156. https://doi.org/10.54941/ahfe1004774
[2] Henderson, N. (2024). The disPHISHinformation Game: Creating a Serious Game to Fight Phishing. Will Video Games Make You Stupid? Invited Talk. https://www.youtube.com/watch?v=UcXHkK96CdM
[3] Henderson, N. (2024). The Phishing Game: An Analog Game to Defend UK Organisations from Phishing. Cranfield Defence and Security Doctoral Symposia 2023 (DSDS23). Poster. https://doi.org/10.17862/cranfield.rd.25039922.v1
Game-based Inoculation against Online Misinformation
Participants
Summary
Misinformation poses a serious risk to societies, and its rapid spread, particularly on online platforms, has negatively impacted democratic processes, public health campaigns, and more. Inoculation theory is a theory of resistance to influence, positing that by being pre-emptively exposed to a small, weakened ‘dose’ of an attitudinal attack (e.g., a pre-warning of future misinformation messages or misinformation techniques), people can build resistance that defends the attitude against future persuasive attacks. Game-based inoculation has been proposed as an engaging individual-level communication strategy for improving misinformation resilience in players, but much remains unknown about the approach’s effectiveness over time and about the underlying mechanisms that confer resistance in this context. These studies adopt mixed-method approaches to further investigate the role inoculation theory can play in this game-based context, and aim to help researchers and practitioners design more effective interventions.
Publications
[1] Henderson, N., & Pallett, H. (2026). Inoculation theory as a design approach to game-based misinformation interventions: a review. Popular Communication, 1–23. https://doi.org/10.1080/15405702.2026.2619473
[2] Henderson, N., Buckley, O., & Pallett, H. (2023). Investigating Longitudinal Effects of Physical Inoculation Interventions Against Disinformation. International Conference on Human-Computer Interaction 2023 (HCII23), 39–46. https://doi.org/10.1007/978-3-031-36001-5_5
[3] Henderson, N. (2022). Comparing the Decay of Physical and Digital Inoculation Against Disinformation. Cranfield Defence and Security Doctoral Symposia 2022 (DSDS22). Poster. https://doi.org/10.17862/cranfield.rd.21618618.v1