Mackenzie Jorgensen, PhD


I am a Postdoctoral Research Fellow on the PROBabLE Futures project at Northumbria University. My current research focuses on the responsible adoption and evaluation of AI systems for law enforcement in England and Wales. I hold a PhD in Computer Science from King's College London.

Email: mackenzie(dot)jorgensen(at)northumbria(dot)ac(dot)uk


Research

Research Interests


My research interests fall under the broad umbrella of responsible AI, particularly:

  • AI ethics
  • AI governance
  • evaluating large language models
  • ethics artefacts to support AI design and responsible AI adoption
  • bias, fairness, and non-discrimination
  • human rights
  • public sector algorithms

Selected Publications


To learn more about my research, see the selected publications below; the full list is on my Google Scholar profile.


Experience

Current Experience

Postdoctoral Research Fellow (FT)

Northumbria University, Newcastle, United Kingdom

I am working full-time on the PROBabLE Futures project at Northumbria University, a project funded by Responsible AI UK (RAi UK). Within my first month of joining, I submitted to the Joint Committee on Human Rights’ inquiry: PROBabLE Futures Submission in response to the Call for Evidence on ‘Human Rights and the Regulation of AI’. In November 2025, I organised and led a multi-stakeholder workshop in London on what an LLM evaluations framework for policing should contain. My current work focuses on defining robust technical and workflow-level evaluations for policing AI systems, as well as developing ethics artefacts to support their responsible adoption.

August 2025 - March 2028

Research Volunteer

The Alan Turing Institute, London, United Kingdom

I support the Centre for Emerging Technology and Security (CETaS) at the Alan Turing Institute on research projects relating to responsible AI and national security.

September 2025 -

Independent Contractor

Immanence, Italy

I work periodically with the AI governance start-up Immanence, helping to build their platform and to measure the social impact of AI projects. Research from my work at Immanence has been published in the Proceedings of the CHI 2026 Workshop ‘Ethics at the Front-End: Responsible User-Facing Design for AI Systems’: Resisting Humanization: Ethical Front-End Design Choices in AI for Sensitive Contexts.

October 2025 -

Director of Working Groups

EAAMO Bridges

I am a Director of the Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO) Bridges working groups, which bring together researchers and practitioners from around the globe to tackle real-world problems through online groups. I previously co-organised the Conversations with Practitioners (CwP) working group from September 2023 to July 2025. Through the CwP group, we published the ACM EAAMO '24 conference paper Bridging Research and Practice Through Conversation: Reflecting on Our Experience.

June 2025 -

Previous Experience

Research Associate

The Alan Turing Institute, London, United Kingdom

I worked on a project relating to AI-assisted decision-making, uncertainty, and human-AI oversight, part of the EU project ELSA (European Lighthouse on Secure and Safe AI). In spring 2025, I co-organised an in-person panel event in London, How AI is shaping journalism and journalism is shaping AI. My research there led to the AAAI/ACM AIES '25 conference paper on AI governance, Documenting Deployment with Fabric: A Repository of Real-World AI Governance (see the appendix on arXiv for the repository of use cases). In addition, my research on uncertainty and AI discrimination was featured in the KU Leuven AI Summer School's blog: Uncertain AI Systems Can Be Discriminatory: How Can We Address This? Due to visa reasons while awaiting my PhD conferral, I worked part-time for six months and full-time for three months.

November 2024 - August 2025

Bias Detection Tool Cohort Member

Algorithm Audit, Netherlands

I joined a cohort working remotely on Algorithm Audit's bias detection tool. We applied the tool to a Dutch public sector use case and wrote our findings in an IASEAI '26 conference paper: Auditing a Dutch Public Sector Risk Profiling Algorithm Using an Unsupervised Bias Detection Tool.

January 2024 - June 2024

Graduate Teaching Assistant

Informatics Department, King's College London, UK
January 2024 - March 2024

As a Graduate Teaching Assistant (GTA) for Machine Learning, I ran weekly labs and marked the coursework. I also led four labs and occasional tutorials for Data Structures every few weeks, and marked coursework for that module as well.

January 2023 - March 2023

As a GTA for Machine Learning, I ran two weekly labs and tutorials. In addition, I was the GTA for the AI Impact Accelerator module. I also led two tutorials and two labs for Data Structures every three weeks.

September 2021 - December 2021

As a GTA for Artificial Intelligence during the Autumn 2021 term, I ran two weekly online live tutorials, ensuring students were confident with the material and the tutorial sheet answers. I also ran four virtual live labs each week in which students worked on developing a successful Pac-Man agent.

January 2021 - April 2021

As a GTA for Machine Learning, I ran two small-group live sessions weekly for undergraduates, covering that week's tutorial sheet. I was also a GTA for the Big Data Technologies course, running three online live lab sessions weekly for master's students, answering their questions, and marking coursework.

January 2021 - March 2024

PhD Mentor for DAAD Rise Worldwide

Informatics Department, King's College London, London, UK

I hosted a German undergraduate student researcher for eight weeks while she worked alongside me on my PhD research project. She was funded by the DAAD RISE Worldwide programme. Our research was published at the AAAI/ACM AIES '23 conference: Not So Fair: The Impact of Presumably Fair Machine Learning Models.

August 2022 - September 2022

PhD Intern

Centre for Data Ethics and Innovation, UK Department for Science, Innovation, and Technology, London, UK

I completed a five-month (~two days a week) PhD placement at the CDEI (now the Responsible Technology Adoption Unit), which works to enable the trustworthy use of data and AI. I worked on two projects: Responsible Demographic Data and the Bias Review. For London Tech Week 2022, I co-authored a blog post about demographic data collection and bias detection. My research on the Bias Review project while at the CDEI later developed into the IEEE article Investigating the Legality of Bias Mitigation Methods in the United Kingdom.

February 2022 - July 2022

AI Robustness Researcher/Developer

Ernst & Young UK and UKRI STAI CDT, London, UK

Alongside three other PhD students, I completed a literature review of AI robustness methods, focusing on NLP, weight-poisoning attacks, and poisoning detection methods. We developed a project combining state-of-the-art AI robustness and NLP research. I was the lead contact for our stakeholder, EY, meeting with them weekly over the 10-week project. To learn more about what we developed, see our GitHub repository: RobuSTAI.

January 2021 - April 2021

Education

King's College London

PhD in Computer Science

My PhD thesis was Mitigating Negative Impacts from Socio-Technical AI. My supervisors were Prof. Elizabeth Black, Prof. Jose M. Such, and Prof. Natalia Criado Pacheco. I was funded by the UKRI Centre for Doctoral Training (CDT) in Safe and Trusted AI (STAI) and was a member of the Distributed AI Research Group. While at KCL, I held several student representative positions: EDI representative for the STAI cohort (2023-2024) and the Informatics Department (2022-2024), and PGR representative for the Faculty Research Committee (2020-2021). From 2021 to 2024, I co-led an international online AI ethics reading group for researchers. I was also a visiting postgraduate student at Imperial College London through the STAI CDT.

October 2020 - July 2025

KU Leuven, Faculty of Law and Criminology

Summer School Student on the Law, Ethics, and Policy of AI

I attended the 10-day intensive summer school at KU Leuven on the Law, Ethics, and Policy of AI, where I presented some of my interdisciplinary research relating to UK anti-discrimination law. I also worked on a group project that culminated in a blog post: Public authorities’ use of AI chatbots for citizen queries: a legal and technical perspective.

September 2023

Villanova University

Bachelor of Science, Magna Cum Laude, Phi Beta Kappa

I majored in Computer Science and minored in Philosophy as a Presidential Scholar. I spent my third year (2018-2019) abroad studying Computer Science & Philosophy at St. Anne's College, University of Oxford.

August 2016 - May 2020

Other Activities

Academic Reviewing

  • ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2025 & 2026
  • Policing: A Journal of Policy and Practice 2026
  • Big Data & Society Journal 2025
  • ACM 2025 Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO)
  • AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2021, 2022, 2025
  • SyntheticData4ML Workshop 2022 at NeurIPS

Leadership

  • Essex Data Ethics Committee Member, December 2025-Present
  • Villanova University Computing Sciences Advisory Council (CSAC) Member, April 2021-Present
  • Holy Names Academy Alumnae Mentor, 2019-Present
  • Co-Founder & Co-President of Women Advancing Tomorrow's Technologists Nonprofit, 2015-Present

Media