Using AI to Support Academic Work: A Library Guide: The University's Policies on AI

Introduction

In this section, you'll find an overview of the University's current policies and guidance on the use of artificial intelligence in teaching and assessment, as well as supplementary information on plagiarism and GenAI in research.

Please be aware that Oxford has been reviewing its policies and guidance on AI in teaching and assessment, and a policy on the use of AI in summative assessment was published in July 2025. This Bodleian Libraries guide offers principles for Oxford members on the responsible use of AI tools, but it does not endorse the unsanctioned use of AI in academic work, nor does it override University-wide or local regulations.

AI in Teaching and Assessment

For students, use of AI in assessments is only allowed when explicitly permitted and must be declared following department or faculty instructions; unauthorised use is considered academic misconduct.

The University has published a policy on the use of AI in summative assessment.

The University of Oxford recognises both the opportunities and challenges posed by AI in education. While AI can support learning when used ethically and appropriately, it is not a substitute for the individual effort required to develop academic skills.

Oxford has contributed to and adopted the Russell Group principles on the use of generative AI tools in education. These principles state that:

  1. Universities will support students and staff to become AI literate
  2. Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience
  3. Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access
  4. Universities will ensure academic rigour and integrity is upheld
  5. Universities will work collaboratively to share best practice as the technology and its application in education evolves

Consistent with the Russell Group principles, Oxford views AI as a potentially supportive tool in education when used responsibly. Staff and students should be aware that:

  • AI can be a supportive tool in learning, provided its use is ethical and appropriate
  • In some instances, academic staff, departments, and colleges may give more detailed guidance on how they expect AI tools to be used (or not used) for different tasks or on specific assignments. Students should always follow the guidance of their tutors, supervisors, and department or faculty
  • Whenever AI is used, similar safeguards to those relating to plagiarism should be adopted. Authors should never pass off ideas or text gleaned from AI as their own, and there should be a clear acknowledgement of how AI has been used in the work
  • Given that the output of LLMs can be incorrect or entirely fictitious, users of these tools must recognise that they retain responsibility for the accuracy of what they write

AI and Plagiarism

The University’s definition of plagiarism has been updated to include AI, and students must cite AI content in their work as they would a book, journal article, website, or other source.

At Oxford, artificial intelligence can only be used within assessments where specific prior authorisation has been given, or when technology that uses AI has been agreed as a reasonable adjustment for a student’s disability (such as voice recognition software for transcriptions, or spelling and grammar checkers).

To avoid plagiarism, students should follow good academic practices from the start of their studies, including proper attribution and critical engagement with sources. Please read this guide's section on referencing AI for more information on this topic.

Using GenAI in Research

Researchers at Oxford are expected to follow principles of good research conduct in all aspects of publication, including proper authorship, acknowledgement of contributions, open access requirements, and transparency around conflicts of interest. Authorship is generally reserved for individuals who make a significant intellectual contribution and can take responsibility for the work’s integrity.

The University has set clear expectations about the use of generative AI in research through the policy developed by the Research Practice Sub Committee (RPSC). Researchers are responsible for any AI-generated content, must protect intellectual property and sensitive data, and should disclose substantive use of GenAI in their work. The accompanying FAQs page supports users in meeting the policy's expectations.
