This section will help you navigate the legal, ethical, and data-related aspects of using AI in academic work, combining University guidance with practical tips.
As Dr Caroline Emmer De Albuquerque Green, Director of Research at the Oxford Institute for Ethics in AI, and Chris Morrison, Head of Copyright & Licensing at the Bodleian Libraries, noted in an April 2025 news release: "The rise of generative AI in recent years has heightened anxiety amongst creators and the public more widely: people are concerned for the future of human creativity given machines’ ever-expanding ability to generate text, images, video and code".
When using AI tools, it's essential to be mindful of data privacy and confidentiality. The University emphasises the following considerations:
Use University-Supported Tools: University-licensed versions of ChatGPT Edu and Microsoft 365 Copilot are available through the AI Competency Centre. These have undergone security assessments to ensure that the data you process is not used to train AI models and that the tools align with the University's data security standards.
Handle Data with Care: Avoid inputting confidential or sensitive information into AI tools unless they are University-approved and have appropriate data protection measures in place.
Review Your Data: Be aware that even University-licensed AI tools such as Microsoft 365 Copilot can access all the data you already hold in Microsoft 365, including Outlook emails, Teams chats, and OneDrive files. Regularly review and manage your data and sharing settings to help prevent unintended exposure.
Exercise Caution with External AI Tools: External AI services may store your data and use it to train their models. Such tools should not be used to process confidential or sensitive data unless they have undergone a Third-Party Security Assessment (TPSA) and are approved for such use.
The rise of GenAI raises new questions about the ownership and use of content, covering both the materials that users input into tools and the outputs those tools generate. While copyright law continues to evolve in this area, students and researchers should take the following potential risks into account:
Using Copyrighted Material in Prompts: If you upload or copy and paste copyrighted material (e.g. journal articles, book chapters, datasets, or images) into an AI tool, make sure you have permission or that your use is covered by an exception to copyright (such as fair dealing).
💡 Tip: Check the licence on the original material (e.g. Creative Commons). When in doubt, avoid uploading full texts to tools that use input content for training.
Who Owns AI-Generated Output?: Output generated by AI may not be eligible for copyright protection. You cannot assume you own it in the way you would own original creative work. This matters if, for example, you're submitting a thesis.
💡 Tip: If the output is significant to your academic work, keep a record of how it was generated and whether you edited it.
Attribution and Academic Integrity: Using AI-generated content without proper attribution may constitute academic misconduct.
💡 Tip: Seek out local guidance on how to acknowledge or cite AI tools. This guide's section on referencing AI can be a starting point.
Research Publication: If you plan to publish research where AI tools have been used, check publisher policies. Many journals now ask authors to disclose whether AI was used in preparing a manuscript.
💡 Tip: Be transparent. Some publishers may reject work that includes AI-generated content without clear attribution or approval.
Using AI tools responsibly means more than following academic integrity rules. Ethical use of AI involves being aware of issues like bias and fairness, respecting the rights of others, and understanding the environmental cost of these technologies [1, 2].
1. Bashir, N., Donti, P., Cuff, J., Sroka, S., Ilic, M., Sze, V., Delimitrou, C. and Olivetti, E. (2024). The Climate and Sustainability Implications of Generative AI, in An MIT Exploration of Generative AI, March 2024. Preprint at: https://doi.org/10.21428/e4baedd9.9070dfe7
2. Tomlinson, B., Black, R. W., Patterson, D. J. and Torrance, A. W. (2024). The carbon emissions of writing and illustrating are lower for AI than for humans, in Scientific Reports 14, 3732. Available at: https://doi.org/10.1038/s41598-024-54271-x
A 2024 global survey of academic researchers by Oxford University Press found that trust in AI companies is very low: only 8% of respondents trusted AI companies not to use their data without permission, and just 6% trusted them to meet data privacy and security requirements.
There were also strong concerns about the implications for intellectual property and about how AI will affect academic research more generally, with a quarter (25%) of respondents believing that AI reduces the need for critical thinking.