Using AI to Support Academic Work: A Library Guide: Privacy, Copyright, and Ethical Considerations

Introduction

This section provides support for navigating the legal, ethical, and data-related aspects of using AI in academic work, combining University guidance with practical tips.

As Dr Caroline Emmer De Albuquerque Green, Director of Research at the Oxford Institute for Ethics in AI, and Chris Morrison, Head of Copyright & Licensing at the Bodleian Libraries, noted in an April 2025 news release: "The rise of generative AI in recent years has heightened anxiety amongst creators and the public more widely: people are concerned for the future of human creativity given machines’ ever-expanding ability to generate text, images, video and code".

Privacy

When using AI tools, it's essential to be mindful of data privacy and confidentiality. The University emphasises the following considerations:

  • Use University-Supported Tools: University-licensed versions of ChatGPT Edu and Microsoft 365 Copilot are available through the AI Competency Centre. These tools have undergone security assessments to ensure that the data they process is not used to train AI models and that they align with the University's data security standards.

  • Handle Data with Care: Avoid inputting confidential or sensitive information into AI tools unless they are University-approved and have appropriate data protection measures in place.

  • Review Your Data: Be aware that even University-licensed AI tools like Microsoft 365 Copilot have access to all the data you already have inside Microsoft 365, including Outlook e-mails, Teams chats, and OneDrive files. Regularly review and manage your data and sharing settings to help prevent unintended exposure.

  • Exercise Caution with External AI Tools: External AI services may store your data and use it to train their models. Such tools should not be used for processing confidential or sensitive data unless they have undergone a Third-Party Security Assessment (TPSA) and been approved for such use.

Copyright and Intellectual Property

GenAI raises new questions about the ownership and use of content, covering both the material users enter as prompts and the outputs tools generate. While copyright law continues to evolve in this area, students and researchers should take the following potential risks into account:

  1. Using Copyrighted Material in Prompts: If you upload or copy and paste copyrighted material (e.g. journal articles, book chapters, datasets, or images) into an AI tool, make sure you have permission or that your use is covered by an exception to copyright (such as fair dealing).

    💡 Tip: Check the licence on the original material (e.g. Creative Commons). When in doubt, avoid uploading full texts to tools that may use your inputs for training purposes.

  2. Ownership of AI-Generated Output: Output generated by AI may not be eligible for copyright protection, so you cannot assume you own it in the way you would own original creative work. This matters if, for example, you're submitting a thesis.

    💡 Tip: If the output is significant to your academic work, keep a record of how it was generated and whether you edited it.

  3. Attribution and Academic Integrity: Using AI-generated content without proper attribution can constitute academic misconduct.

    💡 Tip: Seek out guidance from your department or faculty on how to acknowledge or cite AI tools. This guide's section on referencing AI can be a starting point.

  4. Research Publication: If you plan to publish research in which AI tools were used, check the publisher's policies. Many journals now require authors to disclose whether AI was used in preparing a manuscript.

    💡 Tip: Be transparent. Some publishers may reject work that includes AI-generated content without clear attribution or approval.

Ethical Considerations

Using AI tools responsibly means more than following academic integrity rules. Ethical use of AI involves being aware of issues like bias and fairness, respecting the rights of others, and understanding the environmental cost of these technologies.

  1. Bias and Fairness: AI tools reflect the data they were trained on, and that data often contains human biases. As a result, the outputs from AI tools can unintentionally reinforce harmful stereotypes related to age, culture, disability, gender or gender identity, race, religion, sexuality, socioeconomic status, and more. These biases may surface in any AI-generated response, including analyses, images, search results, and writing suggestions.

    💡 Tip: Avoid treating AI output as neutral or objective. Question how AI tools are framing certain groups or issues, and consult a diverse range of scholarly sources.
     
  2. Consent and Respect for Others: Making use of AI tools ethically includes respecting the rights of others. Refrain from entering confidential or sensitive information about people into AI tools (e.g. contact information, unpublished research).

    💡 Tip: If conducting group work where AI is authorised, always agree on how it should be used. No one should feel pressured or excluded.
     
  3. Environmental Impact: The environmental effects of AI, and of GenAI in particular, are not yet fully understood. While some research, such as Bashir et al. (2024), has urged caution, calling attention to GenAI's energy demands, which have overwhelmed existing data infrastructures and outpaced regulatory efforts,[1] other studies, such as Tomlinson et al. (2024), have suggested that GenAI may be more efficient than traditional methods for certain tasks.[2]

    What is clear is that GenAI is not "free" or low impact: every query and generation adds to its footprint, and its environmental costs include electricity and water use, raw material extraction, carbon emissions, and broader supply chain impacts.

    💡 Tip: Consider alternatives to AI. For example, when brainstorming, you can skim reading lists, review lecture notes, or construct mind maps on paper. Use locally installed software or traditional research methods where possible.

References

1. Bashir, N., Donti, P., Cuff, J., Sroka, S., Ilic, M., Sze, V., Delimitrou, C., and Olivetti, E. (2024). The Climate and Sustainability Implications of Generative AI, in An MIT Exploration of Generative AI, March 2024. Available at: https://doi.org/10.21428/e4baedd9.9070dfe7

2. Tomlinson, B., Black, R. W., Patterson, D. J., and Torrance, A. W. (2024). The carbon emissions of writing and illustrating are lower for AI than for humans, in Scientific Reports, 14, 3732. Available at: https://doi.org/10.1038/s41598-024-54271-x

Researchers' Perspectives

A 2024 global survey of academic researchers by Oxford University Press found that trust in AI companies is very low: only 8% of respondents trusted AI companies not to use their data without permission, and just 6% trusted them to meet data privacy and security requirements.

Respondents also expressed strong concerns about the implications for intellectual property and about how AI will affect academic research more generally, with a quarter (25%) believing that AI reduces the need for critical thinking.