AI in Academic Research and Writing: AI Tools for Academic Research & Writing
Tasks that involve searching
Start with library databases and Google Scholar, then try additional AI-powered search tools. These tools summarize results from a web search and link to the sources. Each of them currently has a "free" version, although it may require an account to access, and the free version may be limited in terms of features or usage.
Wordsmithing tasks (that don't involve search)
Free tools are available for these tasks; all of them also have paid versions that are more capable.
Bias
Account for the possible biases in an AI model's output. For instance, a model may return only the most recent or most peer-reviewed sources, or suggest resources that are unsuitable for your application if your prompt is not clear. Continually re-evaluate the sources it provides as they pertain to your specific question; bias can creep in if you do not also search for sources that counter your research question. Generative AI has also been known to cite sources that do not exist and to produce answers that are inaccurate or false.
Further Readings
- Bias in AI - Chapman University
- There’s More to AI Bias Than Biased Data - National Institute of Standards and Technology
- Shedding light on AI bias with real world examples - IBM
- Limitations and Risks - University of Reading
Reliability
No AI model operates at 100% accuracy. Some models produce false responses, which can lead to disinformation, skewed research results, and a loss of trust in the tool itself. It is important to read the documentation and forums published by the model's developer, which often openly acknowledge that the model can generate inaccurate or biased results and explain how to avoid these issues.
Further Readings
- How to Check the Reliability of Artificial Intelligence Solutions—Ensuring Client Expectations are Met - Dr. John Patrick
- Contextualizing End-User Needs: How to Measure the Trustworthiness of an AI System - Carnegie Mellon University
- Generative AI Reliability and Authority - University of South Florida
Prompting
When you ask an AI model research questions, the responses you get will vary with how specific your prompt is. If a response does not actually answer your question, say so and reword your input to see whether that fixes the problem; the model responds to its interpretation of what you are asking, which is why the same question phrased differently can produce different answers. If the application does not present citations on its own, ask the model to include them. You can also instruct the model to explain its reasoning, remain neutral, or stick strictly to evidence on your topic.
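If you interact with a model through an API rather than a chat interface, the same advice applies to the prompts you send programmatically. The sketch below uses the OpenAI Python SDK purely as an illustration; the model name, the prompt wording, and the comparison of a vague versus a specific prompt are assumptions for the example, not a prescription, and the same technique can be applied directly in any chatbot's text box.

```python
# A minimal sketch of the prompting advice above, using the OpenAI Python SDK.
# Assumptions: an API key is set in the OPENAI_API_KEY environment variable,
# and the model name below is one your account can access.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A vague prompt tends to produce a vague answer.
vague = "Tell me about AI bias."

# A more specific prompt states the topic, the scope, and the request for
# citations, mirroring the guidance above.
specific = (
    "In 3-5 sentences, explain how training-data bias can affect literature "
    "searches in academic research. Include citations to published sources, "
    "and say explicitly if you are unsure whether a source exists."
)

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute one you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(response.choices[0].message.content)
    print("-" * 40)
```

Comparing the two outputs side by side is a quick way to see how much prompt specificity and an explicit request for citations change what the model returns.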
Further Readings
- AI Can Help You Ask Better Questions - Harvard Business Review
- More Useful Things Prompt Library
Privacy
Creating an account to use these tools often requires sharing of personal information. The use of these tools will likely also mean sharing information under their terms and conditions of use, and this data may be used to train the models used in a particular tool. You should avoid entering or uploading data or information that you do not have rights to, that are sensitive, or that are restricted by law or license/agreement.
Most, if not all, AI tools require an account even for their free tiers. Some sites let you sign in with preexisting accounts, such as Google, Microsoft, or Apple accounts; any site you grant access to those accounts could expose your personal information if it is compromised in a cyber security attack. Using a burner account or alias does not, by itself, shield your identity when performing academic research. Using a VPN adds a layer of security while browsing the web and downloading resources and is highly recommended for all computers. You can access a free VPN download through OSU (you must be an OSU affiliate to use this).
It is always a good idea to examine the individual privacy and data use policies of each tool and, when appropriate, take measures to pause history or opt out of including your data for training models. For example, see OpenAI's Data Controls FAQ, managing & deleting your Gemini Apps activity and the Gemini Apps Privacy Hub.
For additional readings on privacy and AI, specifically large language model-powered chatbots, see the following:
- Security and Privacy Challenges of Large Language Models: A Survey (arXiv preprint)
- Large Language Models - European Data Protection Supervisor
- Scalable Extraction of Training Data from (Production) Language Models (arXiv preprint)
- How Strangers Got My Email Address From ChatGPT’s Model (NYTimes)
- OpenAI says mysterious chat histories resulted from account takeover (Ars Technica)
Attribution
This page was adapted from AI Literacy in the Age of ChatGPT: Which AI tool for your task? and Ethics of AI for Researchers by University of Arizona Libraries, © 2024 The Arizona Board of Regents on behalf of The University of Arizona, licensed under a Creative Commons Attribution 4.0 International License.