"

Chapter 1: AI Literacy for Researchers

Imagine you’ve just bought a high-end coffee machine, one of those sleek, futuristic ones with countless buttons, settings, and customizations. You wouldn’t just rip open the box, plug it in, and start brewing, would you? No, you’d take a moment to read the manual, figure out the right grind size, water temperature, and milk frothing options to get that perfect espresso shot. Without understanding how it works, you might end up with a bitter, undrinkable mess. The problem is not with the machine but with how you’re using it, or not using it to its full potential.

The same logic applies when integrating AI into academic research. Before you start relying on it for literature reviews, brainstorming, or writing assistance, you need to understand its capabilities, limitations, and potential risks. Just like a sophisticated machine, AI is only as useful as the person operating it. This is what AI literacy is all about, learning to engage with AI effectively, critically, and responsibly.

We initially considered placing this chapter at the end of the book, but then we realized that wouldn’t make much sense. We want you to develop these AI skills before you start experimenting with the different AI tools we’ll be introducing throughout the book. That’s why this chapter comes first. In a sense, it lays the foundation that equips you with essential AI tips and strategies so that you can navigate these tools with confidence and efficiency from the start.

What we’ve done in this chapter is compile a wide range of practical tips: lessons we’ve developed through our own work as AI researchers and educators. When you spend countless hours interacting with AI tools, you develop a form of ‘insider knowledge’, a tacit expertise that only comes with experience. Having access to this knowledge shortens the learning curve, enabling you to achieve more while bypassing much of the trial and error that typically accompanies mastering a new technology. The insights we share here come from extensive hands-on experience, and while they are tailored for academia, they also apply broadly, as AI is reshaping productivity, creativity, and problem-solving across many fields.

We’ve divided the tips in this chapter into two main categories:

The first focuses on practical AI strategies to help you make the most of AI in your research and writing. These include prompt engineering, iterative refinement, treating AI as a thinking partner, verifying AI-generated content, and using AI for active learning. The aim here is to help you use AI both effectively and critically, avoiding common pitfalls while enhancing the quality of your work.

The second focuses on AI privacy and data protection: a crucial, often overlooked dimension of responsible AI use. Here, we offer specific strategies to safeguard your data, adjust AI settings, manage memory preferences, and use AI chatbots without compromising sensitive information. Academic research demands a high bar for ethical use, and privacy is central to that standard.

I. Practical AI Tips: Maximizing AI’s Potential for Research and Writing

This section provides a collection of important AI strategies to help you maximize your interactions with AI tools. Think of them as guidelines, a foundational set of best practices that will improve the way you use AI in your research and writing. We view them as core principles that you should keep in mind whenever you engage with AI tools. Most of these insights are derived from the first author’s book ChatGPT for Teachers: Mastering the Skill of Effective Prompt Engineering (2024).

  1. Prompt Engineering

AI is getting smarter and smarter. Remember GPT-3.5? Now compare it to the current GPT-5. The difference is staggering, almost like watching a toddler grow into a sharp, articulate adult in just months. It’s a leap not just in power, but in understanding. Moore’s Law may be slowing for hardware, but in AI we’re seeing exponential gains in performance and comprehension, and we wouldn’t be surprised if prompt engineering becomes less crucial with future iterations. After all, it’s called intelligence for a reason. Current large language models already handle prompts riddled with typos, poor grammar, or vague phrasing and still respond with a surprising degree of understanding.

The thing with these models is that they seem to grasp the user’s intent. For instance, you can type a half-formed thought like “make lesson plan 45 min ecosystems grade 7 include quiz” and still get a coherent outline with objectives, activities, and even assessment ideas. That’s because modern models are getting better at inferring what you meant rather than what you typed. This shift suggests that the role of prompt engineering is gradually moving from crafting precise instructions to maintaining purposeful dialogue. In other words, the focus is less on perfect wording and more on having a clear sense of your goal, your audience, and the kind of help you want from the model.

That said, prompt engineering is still very much a critical skill. Knowing how to craft effective prompts is like knowing which buttons to press on that high-end coffee machine. It’s what allows you to steer the model toward more accurate, relevant, and insightful responses, turning a good output into a great one.
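To see the difference, compare a bare prompt like “Summarize this article” with a more intentional one: “Summarize this article in about 200 words for a literature review on teacher burnout, highlighting the methodology and any limitations the authors acknowledge.” (The topic here is purely illustrative.) The second prompt specifies the purpose, the scope, and the focus, and it will almost always yield a more useful response.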

Here’s an analogy that comes to mind: imagine large language models (such as ChatGPT, Claude, Gemini, etc.) as vast castles, filled with countless rooms of knowledge and capability. You and a group of people arrive, each holding a set of keys. Some carry finely crafted keys that unlock hidden chambers brimming with insight, while others have only the basics, keys that open the main halls but little more. The more refined and intentional your prompts (i.e., the keys), the deeper you can venture, revealing layers of creativity, understanding, and solutions others might never access. In this sense, prompt engineering becomes your master key. It doesn’t just open doors; it guides your journey through the castle with clarity and purpose, helping you tap into the richest knowledge hidden within.

  2. Iteration in AI Prompting

Staying on the topic of prompting, there’s another crucial principle to keep in mind: iteration. Even the most carefully crafted prompt won’t always yield the perfect answer on the first try. That’s why refining your prompts based on the AI’s responses is one of the most effective strategies for getting high-quality results.

It all begins with an initial prompt, but rarely ends there. If the AI’s response misses the mark (e.g., too vague, overly technical, or lacking key details), you don’t discard it. Instead, you revise. Rephrase the prompt, add context, or adjust the level of specificity. This trial-and-error process teaches you how to shape your inputs more precisely, which often leads to clearer and more relevant answers.
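To make this concrete (again, the topic is illustrative): suppose you start with “Explain mixed-methods research” and the answer comes back too general for your purposes. Rather than starting over, you follow up with “Narrow that to convergent parallel designs in education research, and give one example of how the qualitative and quantitative strands could be integrated.” Each refinement tells the model more about what you actually need.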

No matter how skilled you become, AI responses can still be surprising or incomplete. These models don’t “think” like us; they predict based on patterns in data. Small changes in your wording or focus can lead to vastly different outputs. Adopting iteration as a mindset helps you gain better control over how conversational AI serves you.

  3. AI as Collaborator

Continuing the conversation on how best to engage with AI tools, there’s a critical distinction that every researcher must understand: AI is not a reliable fact engine. When we refer to AI in this book, we are not advocating for its use in generating factual content. On the contrary, we strongly caution against relying on AI for retrieving facts, statistics, or citations, due to well-documented issues like hallucinations, misinformation, bias, and fabricated references. These concerns are discussed in detail in the AI Ethics chapter, but it’s important to mention them here because they directly shape how we should think about AI in the research process.

The real power of AI lies not in what it knows, but in how it helps you think. We view AI as a thinking partner, a tool that supports reflection, exploration, and refinement of ideas. It should never replace the deep cognitive labor that academic research demands. Instead, it can serve as co-intelligence (Mollick, 2024), a collaborative assistant that helps you unpack complex topics, consider different perspectives, and sharpen your reasoning.

One of the most effective and ethically sound ways to use AI is through summarization and synthesis. In this case, you’re not asking AI to fabricate or search for knowledge; you’re giving it real material (e.g., articles, notes, data) and asking it to distill or reframe that content. Tasks like summarizing arguments, brainstorming ideas, extracting insights, identifying patterns, or comparing studies are where AI can truly support your workflow, efficiently and responsibly, while you remain in full control.
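In practice, this might look like pasting in an article excerpt or your own notes and prompting: “Based only on the text I’ve provided, summarize the three main arguments and flag any points where the authors appear to disagree.” Anchoring the request to supplied material keeps the model working from your sources rather than improvising.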

Another key area where AI can be of tremendous help is editing and improving your writing. Whether or not English is your first language, AI tools can improve clarity, flow, and precision without altering your voice. We use them routinely to enhance our own drafts, and we are often surprised by how effectively they can polish a rough paragraph.

Bottom line, AI is most valuable when it is working with your ideas, not generating them independently. Use it as a tool for refining, enhancing, and exploring, not for replacing critical thinking or scholarly rigor. This point is echoed throughout the book because it’s fundamental to using AI responsibly in academic work. When you treat AI as a collaborator, you’ll unlock its real potential without compromising the integrity of your research.

  4. Always Check for Accuracy

As you continue integrating AI into your research workflow, there’s one rule that should never be compromised: always evaluate AI-generated content for accuracy. No matter how fluent or convincing a summary sounds, verification is non-negotiable, especially when it includes statistics, citations, or direct quotes. One of the most common pitfalls is hallucinated citations: references that look perfectly legitimate but simply don’t exist. A practical strategy here is to ask AI to include sources in its summaries, but then take the critical step of checking those sources yourself. Don’t assume accuracy; verify independently.
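A simple verification routine might look like this: ask the AI to list the full reference for every claim in its summary, then search each title in Google Scholar or follow its DOI to confirm that the source exists and actually supports the claim attributed to it. If a reference can’t be located, treat the associated claim as unverified and set it aside.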

  5. Reverse Prompting

While most people interact with AI by asking it questions, one of the most effective and often overlooked strategies is to reverse the roles: ask AI to quiz you. This approach, which we first encountered in Priten Shah’s (2023) book AI and the Future of Education, turns AI into a powerful tool for testing your understanding, clarifying your thinking, and deepening your learning.

Instead of just summarizing your notes or reviewing your work passively, prompt ChatGPT or Claude to generate challenging, thought-provoking questions based on your study material, research draft, or notes. For example:

“I’m studying [input topic]. Ask me challenging questions to test my understanding and push my thinking further.”

You can tailor this method to match your specific needs by asking AI to generate different types of questions:

  • “Ask me multiple-choice questions to test my recall.”
  • “Use Socratic questioning to challenge my assumptions.”
  • “Pose questions that highlight potential biases in my argument.”
  • “Take on the role of a PhD examiner and grill me on my research.”

  6. Disclosing AI in Your Research

When using AI in your research, it’s essential to be transparent about it. Clearly disclose the use of AI tools in a visible section of your work. Specify which tools you used and how they supported your process. For instance, in writing this book, we’ve used ChatGPT and Claude for editing, structural refinement, and formatting. Acknowledging AI’s role doesn’t weaken your work; it strengthens your credibility and demonstrates ethical awareness.
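A disclosure statement can be brief. As an illustration (adapt the wording to your journal’s or institution’s requirements), it might read: “The authors used ChatGPT (OpenAI) and Claude (Anthropic) to edit for clarity and suggest structural revisions. All ideas, arguments, and conclusions are the authors’ own, and all AI-assisted text was reviewed and verified by the authors.”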

As discussed in later chapters, many academic journals now require authors to state whether they’ve used AI during the research or writing process. Failure to disclose such use can raise serious ethical concerns, especially if AI-generated content is misinterpreted as original scholarly work. Additionally, universities are rapidly developing formal guidelines around AI use. It’s crucial to check your institution’s policies and have open discussions with colleagues and fellow researchers to ensure your approach aligns with expectations around academic integrity.

For added transparency, we recommend using a word processor that tracks changes, such as Microsoft Word or Google Docs, when working with AI. This allows you to document revisions and show exactly how the AI contributed to your writing. It also helps maintain a clear distinction between your ideas and AI-assisted edits, keeping you in control of the final product.

There is nothing inherently wrong with using AI to brainstorm, revise, or improve the structure of your writing. The important thing is to use it ethically, transparently, and responsibly. As long as your intellectual contribution remains central, AI can be a powerful catalyst, not a substitute, for your academic thinking.

II. AI Privacy Tips: Protecting Your Data While Maximizing AI’s Potential

As you begin to integrate AI tools into your research workflow, it’s essential to go beyond functionality and consider how these tools handle your data. Throughout this book, we introduce a variety of AI platforms, each with its own strengths, limitations, and, most importantly, privacy policies. Before you use any AI tool, you need to understand what happens behind the scenes. Start by reviewing the platform’s privacy documentation and ask yourself:

  • What data does this tool collect by default?
  • Can I opt out of contributing my data to model training?
  • How long does the platform retain my data?
  • What security measures are in place to protect it?

These questions are critical to maintaining the integrity of your research. As an academic, you may work with sensitive content: unpublished manuscripts, confidential findings, or participant data. You can’t afford to treat AI platforms as neutral spaces. It’s true that AI can be an incredible assistant, but not at the cost of privacy. As researchers, we must strike a balance: leveraging the capabilities of AI while actively safeguarding our data. That means being selective and intentional about what you input into AI systems.

Here are some practical privacy precautions:

  • Never upload confidential research materials.
  • Do not share personally identifiable information about research participants.
  • Avoid inputting unpublished manuscripts, proprietary data, or sensitive notes.

These practices are non-negotiable ethical standards in a time when digital tools are evolving faster than policy can keep up. Practicing good data hygiene helps you stay in control of your research while making responsible use of powerful AI technologies.

The Prevalence of AI Chatbots in Research

Of all the AI tools available, chatbots will likely become your primary research companion. Their real-time interaction, advanced language capabilities, and ability to support idea development and problem-solving make them indispensable in academic research.

When we talk about AI chatbots, we are primarily referring to:

  • ChatGPT
  • Claude
  • Gemini
  • Copilot
  • Perplexity AI

Each of these tools has its strengths and its own approach to privacy. As we’ve emphasized earlier, understanding how your chosen chatbot handles your data is not just helpful, it’s critical.

We’ve tested all of these AI chatbots extensively, and while we lean toward ChatGPT and Claude, this is largely a matter of preference. At the time of writing, Google has just released a powerful new model, Gemini 2.5, and given the current pace of AI development, it’s very likely that even more advanced LLMs will be available by the time you read this. You may find that Gemini, Perplexity, or Copilot better suit your research style or workflow. The key isn’t choosing the “best” model but rather understanding the privacy implications of the tool you choose and adjusting your usage accordingly.

In the next section, we walk you through a series of practical, privacy-focused AI tips designed to help you safeguard your data while using chatbots. The goal here is not to promise perfect security because in the digital world, no such thing exists. Instead, it’s about minimizing risks and exercising control over how your information is handled.

This section focuses primarily on ChatGPT, with occasional references to Claude. That said, the core privacy strategies outlined here are broadly applicable across most AI platforms, even though the settings and menus may vary.

Before we begin, we strongly recommend reviewing and adjusting the privacy settings of your preferred AI tool. If you’re using ChatGPT, follow the steps in this section closely. If you’re working with another platform, look for equivalent settings and apply the same principles.

1. Using Data for Training AI

As you’ll see in the coming chapters, we often recommend uploading your manuscript to ChatGPT or Claude to receive tailored feedback, summaries, or revisions. These features can be incredibly helpful, but they also raise an important question: what happens to your data after you upload it?

While the risk of misuse is relatively low, it’s still possible that your manuscript or research content could be stored and used as training data to improve future versions of the model. This is why it’s critical to understand how each AI platform handles user data before sharing sensitive information.

Most reputable platforms, such as OpenAI’s ChatGPT and Anthropic’s Claude, provide clear documentation outlining:

  • Whether your data is stored
  • Whether it is used for training
  • How long it is retained
  • Whether you can opt out of data usage

While some AI platforms may offer the option to turn off data sharing for training purposes, others do not. As for retention policies, both Claude and ChatGPT offer temporary chat modes that retain data for up to 30 days for safety reviews, after which it is deleted. This setting helps limit long-term storage of your data but doesn’t eliminate short-term retention or safety flagging.

2. Memory: Personalized Conversations, Privacy Implications

One of ChatGPT’s most powerful features is Memory, a setting that allows the model to remember details between chats. This can make interactions feel more seamless and personalized over time. For instance, if you tell ChatGPT, “I’m a doctoral researcher focusing on AI ethics,” it may retain that detail to tailor future responses more effectively. While convenient, this feature also raises important privacy considerations, especially when you’re dealing with sensitive or unpublished research material. Fortunately, you can manage Memory through your settings at any time. Here is how:

  • Go to Settings > Personalization > Memory
  • Toggle Memory ON or OFF. Turning it off stops new memories from being saved, but does not delete previous ones
  • Select Manage Memory to review what’s stored. From here, you can delete individual items or clear all memories permanently

3. Temporary Chat

Temporary Chat is a ChatGPT feature that allows you to hold conversations without saving any history or memories. It’s ideal for private, one-time interactions where you want maximum privacy and don’t need the AI to retain previous context. Here is how to access temporary chat:

  • Navigate to Settings > Personalization > Temporary Chat
  • Toggle it ON to begin a session where nothing is stored

Once enabled, your conversation will not appear in your history, and ChatGPT will not retain any data from the session.

For optimal privacy, you can pair Temporary Chat with other settings to minimize data exposure:

  • Disable Memory under Settings > Personalization
  • Turn off “Improve the model for everyone” under Settings > Data Controls

This combo gives you stronger control over your data, ensuring no content is stored, retained, or used for training. However, while disabling Memory and using Temporary Chat boosts privacy, there are trade-offs:

  • With Memory off, ChatGPT won’t retain your preferences or previous interactions
  • With Temporary Chat, there’s no session continuity. Every time you start over, you’ll need to restate context or research goals.

This can reduce efficiency for long-term research tasks, writing projects, or brainstorming sessions that benefit from the AI’s retained understanding. For our part, we keep Memory enabled while disabling “Improve the model for everyone.” This gives us a useful middle ground: ChatGPT remembers relevant context for our work, but our data isn’t used to improve the model more broadly. Ultimately, it’s a personal decision, one that depends on your needs. Whether your priority is data privacy or workflow continuity, adjusting these settings gives you the flexibility to work with AI on your terms.

4. Projects: A Secure Workspace for Structured, Source-Based Research

If you’re using ChatGPT or Claude to support your academic research, then Projects is a feature you absolutely need to explore. Currently available to paid users only, Projects allows you to create a dedicated, private workspace where you can upload research materials (e.g., PDFs, Word documents, slides) and interact with them. Think of it as building a custom version of the chatbot that draws exclusively from the documents you’ve provided.

Projects offers two major advantages:

  • Reduced hallucination risk: Since the model is directed to work from your uploaded documents rather than pulling from the broader web or its general training data, the result is more focused and contextually grounded responses.
  • Ideal for literature reviews and complex academic tasks: You can upload multiple sources, ask targeted questions, and get summaries, comparisons, or conceptual clarifications based only on your selected materials.

Currently, both ChatGPT and Claude allow you to upload up to 20 files per Project. In practice, we’ve found the ideal number of files to be 10–15, to avoid overloading the model and ensure clear, relevant outputs.
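As an example of how you might interact with a Project (the topic, again, is illustrative), once your sources are uploaded you could prompt: “Based only on the uploaded documents, compare how each study defines teacher self-efficacy and summarize where their findings converge or diverge.” Because the model is working from materials you selected, every claim can be checked against a source you already have.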

5. Customize ChatGPT

One of the powerful yet often overlooked features of ChatGPT is ‘Customize ChatGPT’. This feature allows you to personalize how the model responds to you. Simply provide ChatGPT with information about your work, research interests, and communication preferences, and you can turn it into a highly focused assistant tailored to your specific field.

In addition to narrowing the content focus, you can also control how ChatGPT communicates. Whether you want formal academic writing, concise bullet-point summaries, or a more conversational tone, you can set those preferences upfront. You can even specify if you want the model to be more analytical, critical, or direct in its tone. This is especially useful for writing tasks like literature reviews, research papers, or proposals where tone and precision are key.

To enable this feature, go to Settings > Personalization > Customize ChatGPT, and fill in details about your role, field of study, preferred tone, and response style. This small adjustment can dramatically improve the quality and relevance of your conversations.
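As a starting point, here is an illustrative entry you might adapt: “I am a doctoral researcher in applied linguistics. Respond in formal academic English, flag the limitations of any method you suggest, and favor concise bullet-point summaries over long paragraphs.”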

Claude also offers a similar feature. Under Settings > Profile, you can specify how Claude should respond to you. While the customization options are somewhat more limited than ChatGPT’s, they still offer meaningful control over how your chatbot behaves. In both cases, taking a few minutes to customize your AI assistant can significantly enhance your workflow and make your AI usage more intentional and productive.

Conclusion

AI is undeniably a powerful tool, but its true value lies in how thoughtfully and responsibly it is used. As emphasized throughout this chapter, making the most of AI in academic research begins with a foundation in AI literacy, not in the sense of mastering technical intricacies, but in developing a practical understanding of its strengths, weaknesses, and best practices. This starts with learning how to communicate effectively with AI through prompt engineering and iterative refinement. The more clearly and strategically you frame your inputs, the more meaningful and accurate the responses you’ll receive. But beyond effectiveness, responsible use of AI also requires a deep awareness of privacy, accuracy, and ethics.

Privacy and data protection should always remain a top priority, especially when dealing with sensitive or unpublished research. Whether you’re working with participant data, proprietary content, or early-stage manuscripts, it’s essential to treat AI platforms with caution. Avoid sharing identifiable, confidential, or sensitive material. Always review the privacy policies of the tools you use: check what data is stored, whether it is used for training, how long it is retained, and whether you can opt out. Adjust your settings accordingly. These small but crucial steps ensure that you remain in control of your data rather than unknowingly feeding it into a machine learning pipeline.

Equally important is the need to critically evaluate AI-generated content. AI models, no matter how sophisticated, are still prone to hallucinations, factual errors, and misleading citations. Accepting their output without verification can lead to serious academic missteps. Never treat AI as a definitive source of information. Instead, use it as a thinking partner, a tool that helps you brainstorm, organize ideas, summarize complex texts, refine arguments, and improve clarity in your writing. When positioned this way, AI enhances your workflow while keeping the intellectual heavy lifting firmly in your hands.

Most importantly, your research must remain your own. The insights, conclusions, and arguments you develop should reflect your critical thinking and scholarly voice. However, if AI plays any role in your process, whether in editing, structuring, summarizing, or ideation, transparency is essential. Disclosing how AI contributed to your work not only demonstrates academic integrity but also sets an example for responsible AI use. As AI tools become more embedded in the academic landscape, openness about their role will be key to maintaining trust, rigor, and ethical standards in research.

 

License


The AI Turn in Academic Research Copyright © 2025 by Johanathan Woodworth and Mohamed Kharbach is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.