"

Chapter 15: AI Ethics

The wide variety of AI tools we shared throughout this book shows the increasing integration of these technologies into research and education. However, as much as these AI tools empower research students and facilitate their research endeavours, they also raise a wide variety of concerns regarding their ethical implications. These concerns have been the topic of extensive debate among scholars and practitioners, and as of this writing, higher education institutions continue to wrestle with the far-reaching implications of generative AI for academic standards, assessment methodologies, and the integrity of research outputs. Just as new policies emerge to address pressing challenges, rapid advancements in AI introduce yet another layer of complexity, keeping institutions in a constant state of adaptation.

In fact, the rapid evolution of AI often outpaces institutional responses, creating an ongoing action-reaction dynamic that is both challenging and hard to sustain. Given the bureaucratic nature of higher education institutions, it becomes even more difficult to implement timely and effective measures. Approving policies often takes time and involves complicated procedures, and by the time they are finalized, the AI train has already left the station. This lag in response leaves institutions perpetually playing catch-up.

Let’s take the example of New York City’s public schools, the largest school district in the country. A few months after the introduction of ChatGPT, and seeing the immense potential of this AI chatbot in generating human-like responses, writing coherent essays, and assisting with various natural language processing tasks, the district initially decided to ban its use, citing concerns about privacy, plagiarism, and the potential impact on student learning. Several school districts and educational institutions followed suit. However, as ChatGPT became mainstream and as more and more educators, teachers, and administrators began to comprehend the vast capabilities of this technology, the district revoked its ban and instead introduced a set of structured guidelines for its use (Singer, 2023).

Throughout history, new technologies have always been met with skepticism and doubt. They have often been viewed as disruptive forces that could potentially challenge established systems and traditional norms (Suleyman & Bhaskar, 2023). From the invention of writing to the printing press, the telegraph, the Internet, and now AI, each of these general-purpose technologies initially faced resistance, yet each ultimately reshaped societies, revolutionized communication, and redefined how knowledge is created, shared, and preserved.

However, while past technological advancements primarily disrupted communication and knowledge dissemination, AI presents a more profound challenge, one that extends beyond efficiency and accessibility into the realm of ethics. For researchers, the integration of AI into academic work raises pressing concerns about authorship, data integrity, bias, and accountability. As AI-generated content becomes more sophisticated, distinguishing between human and machine contributions grows increasingly complex, forcing scholars and institutions to confront difficult questions about academic honesty, intellectual ownership, and the ethical boundaries of AI-assisted research.

In this chapter, we examine the ethical challenges that arise when integrating AI into academic research, focusing more specifically on issues related to authorship, originality, copyright, plagiarism, and accessibility. We acknowledge that AI presents a wide range of ethical concerns, many of which extend beyond the scope of this discussion. However, our focus remains on the challenges most relevant to conducting research as these directly impact the integrity, credibility, and ethical responsibility of scholars.

1. Authorship

Large language models (LLMs) such as ChatGPT are adept at generating human-like text that can easily pass as human-created. The implications of this are huge, especially for those of us in the academic research community. As a researcher, you will want to stand behind the originality and authenticity of your work. It goes without saying that your research paper should reflect your own insights, knowledge, and expertise. However, certain uses of AI, especially when you rely on it entirely to generate ideas and do your writing, can blur the line between your genuine contribution as a researcher and the AI’s contribution (Hadan et al., 2024; The Committee on Publication Ethics [COPE], 2019).

The issue of authorship when using AI is a significant ethical concern with multiple dimensions that you should take very seriously as you navigate the treacherous terrain of AI in research. This concern gives rise to several related problems which, taken together, render the entire ethical landscape even more complex and challenging to navigate.

Let’s start with the problem of ownership. It is extremely hard to determine exactly who owns content generated by AI (Lund et al., 2023). Keep in mind that LLMs such as ChatGPT are but predictive tools; that is, they do not really ‘think’ the way we do and do not even understand language the way we do (Mollick, 2024). For them, text is a combination of tokens arranged according to complex probabilistic patterns learned through exposure to vast datasets of human language during pretraining.

So basically, what these AI tools do is produce text that is syntactically correct but not necessarily semantically meaningful. AI is a smart tool but not a ‘cognitive thinking’ agent, and its content generation lacks intent and accountability, elements that are central to the concept of ownership in academic work. This is also the view espoused by the Committee on Publication Ethics (COPE, 2023), which states that “AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work” (n. p.).
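To make this point concrete, here is a minimal, purely illustrative Python sketch of next-token prediction. The two-word contexts and their probabilities are invented for the example; real LLMs operate over vocabularies of tens of thousands of tokens and billions of learned parameters. The basic operation, however, is the same: sample the next token from a learned probability distribution, with no underlying model of meaning, intent, or responsibility.

```python
import random

# Toy next-token distribution, invented purely for illustration.
# A real LLM derives probabilities like these from billions of learned parameters.
next_token_probs = {
    ("the", "research"): {"question": 0.45, "paper": 0.30, "process": 0.25},
    ("research", "question"): {"is": 0.50, "remains": 0.30, "addresses": 0.20},
}

def sample_next_token(context, probs):
    """Sample the next token from the learned probability distribution."""
    candidates = probs[context]
    tokens = list(candidates.keys())
    weights = list(candidates.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# The model "writes" by repeatedly predicting a plausible continuation;
# nothing in this process involves understanding, intent, or accountability.
print(sample_next_token(("the", "research"), next_token_probs))
```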

It is important to note here that when we talk about the use of AI-generated content, we mean all kinds of content, text and otherwise, whose generation depended wholly or partly on a generative AI tool. Of course, there is a huge difference between writing a prompt and copying and pasting the generated output verbatim, and using the AI’s response as a starting point for thoughtful revision or enhancement.

It’s true that the latter involves greater human input, but it still raises important ethical questions about authorship, originality, and attribution. In other words, regardless of the degree of AI involvement in the generation of content for your research, you should remember that the ethical responsibility for the final output ultimately rests with you as the researcher and not with the AI tool that assisted you.

Given the ethical conundrums caused by the use of AI in writing research papers, an increasing number of academic journals and publishers are implementing clear guidelines and policies on the appropriate use of AI tools in the research and publication process. While some ban their use completely, others require disclosure of any AI assistance used during the research, writing, or editing process (Gao et al., 2023; Hadan et al., 2024; Brainard, 2023).

For instance, the editorial policies of the Science journals state that:

    “Text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author. A violation of these policies will constitute scientific misconduct no different from altered images or plagiarism of existing works.” (Thorp, 2023, n. p.)

However, while a total ban is an option for some publishers, as we have seen with Science, it is not a popular one; most publishers adopt a more balanced approach that allows the use of AI under strict conditions. These conditions typically involve transparent disclosure of AI assistance. For example, Springer Nature, while it does not recognize LLMs (e.g., ChatGPT) as authors, nonetheless permits the use of AI tools and specifies that their contributions be fully disclosed in the methods section or another ‘relevant section’ of the paper:

        “Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria (imprint editorial policy link). Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM  should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript. The use of an LLM (or other AI-tool) for “AI assisted copy editing” purposes does not need to be declared. In this context, we define the term “AI assisted copy editing” as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work and autonomous content creation. In all cases, there must be human accountability for the final version of the text and agreement from the authors that the edits reflect their original work.” (n. p.)

Along similar lines, the APA Publications and Communications Board (2023) has developed specific guidelines on the use of generative AI in APA journals, urging writers to disclose their use of AI tools in the methods section. APA has also banned authorship attribution to AI in any of its scholarly publications. We have yet to see a journal or scholarly publication that allows listing AI as an author.

The issue of authorship, then, seems unequivocally settled: AI is not an author. This is understandable not only because of the ethical implications of assigning agency to an entity that lacks responsibility, originality, and critical thinking, but also because doing so strips scholarly research of its foundational principles. Writing is a form of thinking, an iterative process that enables you to dig deep into your long-term memory, uncover connections and insights, organize your ideas, and materialize them into coherent arguments and contributions.

When you write your research, you write with intention. You make deliberate vocabulary and stylistic choices; you highlight key insights and arrange ideas according to their contributory weight. You also take epistemological positions that align with your reasoning and with the overarching framework underlying your research. These positions reflect the lens through which you see the world and the topic you are working on. They also guide the type of questions you ask, the methods you choose, and the conclusions you draw.

Scholarly and scientific writing is, therefore, much more than simply presenting facts; it’s about crafting a narrative that integrates your knowledge, expertise, reasoning, experience, and academic perspective, among other things. Your final written piece is a reflection of the sum of these deliberate decisions and serves as a testament to your intellectual engagement and scholarly integrity, which is precisely why AI authorship in scholarly writing is so problematic.

Authorship is all about owning the intellectual journey that underpins the research process. Assigning authorship to a machine undermines the very essence of this journey. The capacities for critical reasoning, accountability, and authenticity required to navigate and contribute to scholarly discourse are purely human feats, and AI, in its current form, is far from being a sentient agent that can exercise them for you!

2. Copyright

Closely related to the problem of authorship is the issue of copyright. Since AI cannot be considered the author or owner of any work it generates, who then owns the rights to AI-generated content? Is it the person who wrote the prompt or directed the generation process, the AI developer or the organization that owns or operates the AI system, or some combination of these? Are AI-generated outputs eligible for copyright protection in the first place? These questions remain at the forefront of ongoing legal, ethical, and academic debates.

In the American context, the Copyright Act attributes ownership of the copyright to the “author or authors of the work” (U.S. Code, n.d.). However, as the Congressional Research Service (2023) indicates, “given the lack of judicial or Copyright Office decisions recognizing copyright in AI-created works to date, however, no clear rule has emerged identifying who the ‘author or authors’ of these works could be” (p. 1).

Dr. Stephen Thaler’s case is one of the best-known instances in which the question of AI-generated works and copyright ownership has been brought into sharp focus. When Thaler sought to register his artwork titled ‘A Recent Entrance to Paradise’, which was created by an AI system of his own design (the ‘Creativity Machine’), the U.S. Copyright Office rejected his application and refused to register an artwork created by AI. In its decision issued in 2022, the Copyright Office justified its refusal on the grounds that “only a human being can be considered an ‘author’ under US copyright law, this quality being a prerequisite for the protection and registration of the work” (cited in Geiger, 2024, p. 1132).

Current copyright legislation, especially in the U.S., emphasizes an ‘anthropocentric approach’ that puts the human author ‘at center of the protection’ (Geiger, 2024, p. 1134). This anthropocentric approach reflects a broader societal and legal reluctance to assign legal personhood or rights to non-human entities such as AI systems.

As we were editing this chapter, the U.S. Copyright Office issued an updated statement regarding AI and copyright, reinforcing the principle that AI-generated content alone is not copyrightable. However, human contributions to AI-assisted works, as it further explained, can be protected if they meet originality standards, with decisions made on a case-by-case basis. The statement clarifies several key points:

(1) Copyright protection applies only to works created by human authors, even if AI-assisted;

(2) AI-generated content without significant human control does not qualify for copyright;

(3) Merely providing a text prompt is insufficient to claim authorship over AI-generated outputs;

(4) Copyright determinations will be assessed individually based on human involvement; and

(5) Humans may claim copyright if they make creative modifications, selections, or arrangements of AI-generated content. (U.S. Copyright Office, 2025)

In simpler terms, the U.S. Copyright Office has made it clear that AI alone cannot be an author. If a person meaningfully contributes to AI-generated content by shaping, modifying, or creatively selecting elements, they may be eligible for copyright protection. However, just typing a prompt into an AI tool isn’t enough to claim ownership. This means that while AI can assist in the creative process, legal protection still depends on the originality and effort contributed by a human creator.

3. Plagiarism

Determining the authorship and ownership of AI-generated content is just one facet of this ethical dilemma. An equally pressing concern revolves around plagiarism, particularly how AI-generated material blurs the line between original creation and derivative work. Generative AI tools such as ChatGPT, Claude, and Gemini have reached a point of sophistication where their outputs are increasingly indistinguishable from human-generated content. This challenge was highlighted in 2021 research by Köbis and Mossink, who used an incentivized version of the Turing Test to explore whether people could identify the source of creative text. Participants, motivated by financial rewards for accurate detection, were tasked with distinguishing between human-written and AI-generated poems. The results revealed that participants were unable to reliably tell the difference.

Further evidence supports these findings. Hadan et al. (2024) found that peer reviewers struggled to distinguish between AI-processed and human-written snippets in research papers and generally assumed that generative AI was involved in both. Similarly, Gao et al. (2023) indicate that blinded human reviewers correctly identified 68% of generated abstracts as being created by ChatGPT but mistakenly identified 14% of original abstracts as AI-generated. Rae (2024) also notes that previous studies consistently show that people often fail to differentiate between content created by humans and AI.

The thing with LLMs is that they are constantly improving, both through developers’ interventions, which supply them with ever larger training datasets and state-of-the-art computing power, and through successive cycles of training and refinement. This explains why, for example, GPT-5 is better than its predecessors, and why Claude Opus 4.1 is more advanced than its earlier versions.

Unlike their predecessors, current versions of LLMs are trained to produce outputs that mimic human-written text in ways that eliminate identifiable patterns that could be used to detect their machine origins (Heikkilä, 2023). This deliberate refinement makes it increasingly challenging to differentiate between human and AI authorship. And it is not only humans who struggle to distinguish AI-generated content from human-created material; even AI detection tools, specifically designed for this purpose, often fail to provide consistent and accurate results.

These detection tools frequently misclassify content, labeling genuine human work as AI-generated (false positives) and, in other cases, failing to detect AI-generated text (false negatives). Current research clearly demonstrates that existing AI detection tools are neither accurate nor reliable. Multiple studies reveal significant limitations in these detection systems, including high rates of false positives and false negatives (Aremu, 2023; Weber-Wulff et al., 2023; Gao et al., 2023; Liang et al., 2023; Hadan et al., 2024).
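For readers who want to see what these error rates mean in practice, the following minimal Python sketch evaluates a hypothetical detector against a small, invented set of labeled samples and reports its false positive and false negative rates. The samples, labels, and resulting numbers are placeholders for illustration only; they do not correspond to any real tool or study cited above.

```python
# Illustrative only: a toy evaluation of a hypothetical AI-text detector.
# "human" = written by a person, "ai" = machine-generated (labels are invented).
samples = [
    {"true": "human", "predicted": "ai"},     # a false positive
    {"true": "human", "predicted": "human"},
    {"true": "human", "predicted": "human"},
    {"true": "ai", "predicted": "human"},     # a false negative
    {"true": "ai", "predicted": "ai"},
]

false_positives = sum(1 for s in samples if s["true"] == "human" and s["predicted"] == "ai")
false_negatives = sum(1 for s in samples if s["true"] == "ai" and s["predicted"] == "human")
human_total = sum(1 for s in samples if s["true"] == "human")
ai_total = sum(1 for s in samples if s["true"] == "ai")

# False positive rate: share of genuinely human texts flagged as AI-generated.
print(f"False positive rate: {false_positives / human_total:.0%}")
# False negative rate: share of AI-generated texts that slip through undetected.
print(f"False negative rate: {false_negatives / ai_total:.0%}")
```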

AI text detectors are also notorious for falsely flagging the work of non-native English writers as AI-generated due to reduced linguistic variability and narrower word choices (Liang et al., 2023; Hadan et al., 2024). For example, one study found that GPT detectors misclassified over half of the essays written by non-native English speakers as AI-generated, while demonstrating near-perfect accuracy for essays written by native English speakers (Liang et al., 2023).

This ongoing struggle to accurately identify AI-generated content has significant implications for plagiarism. As detection tools fail to reliably distinguish AI-authored text, it becomes easier for individuals to pass off AI-generated work as their own, further blurring the lines of academic and professional integrity. If AI content can seamlessly masquerade as human-authored material without detection, the risk of unintentional or deliberate plagiarism increases. This raises ethical concerns and highlights the need for more effective safeguards against the misuse of generative AI.

As a researcher, your credibility is your most valuable asset. Your work is built on the premise that your findings, arguments, and conclusions stem from your own intellectual engagement with a subject. If AI-generated content is used dishonestly, or without proper acknowledgment, it not only violates ethical research standards but also compromises the reliability of knowledge production. The strength of academic discourse relies on the authenticity of contributions, and failing to uphold these standards risks devaluing the entire research ecosystem. Thus, the solution to AI-generated plagiarism must go beyond detection; it must be rooted in cultivating ethical responsibility.

Researchers should be encouraged to use AI transparently, acknowledging when and how AI-assisted tools contribute to their work. Universities and research institutions should integrate discussions on AI ethics, responsible authorship, and proper attribution into academic training, ensuring that emerging scholars understand the implications of misusing AI. Indeed, fostering a culture of responsibility over regulation enables the research community to navigate the challenges of AI while preserving the core values of integrity, originality, and scholarly trust.

4. Inaccuracies and Hallucinations

Generative AI tools such as ChatGPT are known for generating ‘hallucinations’ (Weise & Metz, 2023; Babl & Babl, 2023). Hallucinations are fabrications that appear accurate but are not true or real. For instance, when asked about the first time the New York Times reported on “artificial intelligence,” ChatGPT claimed it was on July 10, 1956, in an article titled “Machines Will Be Capable of Learning, Solving Problems, Scientists Predict” about a seminal conference at Dartmouth College (Weise & Metz, 2023). While the conference was real, the article was not, as journalists Karen Weise and Cade Metz clarified. Similarly, AI models are capable of producing seemingly credible yet entirely fabricated references in academic writing. For instance, when tasked with generating a conference abstract, ChatGPT created fictitious references (Babl & Babl, 2023).

Likewise, Anderson et al. (2023) found that the bibliographies generated by AI for two essays contained inaccurate information, including non-existent authors and publication titles. The inaccuracies in AI models extend beyond simple fabrications to include incomplete or misleading information. For example, when ChatGPT was asked for instructions on building a PC, it omitted critical steps that could render the PC inoperable (Anderson et al., 2023). These inaccuracies and hallucinations also raise significant concerns about the spread of neural fake news and misinformation (Zellers et al., 2019). These can have a detrimental impact on society as a whole, as they can be used by “malicious actors to automate the creation of convincing and misleading text for propaganda and influence operations” (Goldstein et al., 2023, p. 1).

For those of us in academia, these inaccuracies and hallucinations highlight the importance of approaching AI tools with the utmost caution. We firmly believe that no academic researcher should rely on AI tools, particularly chatbots like ChatGPT, for generating factual information. The only reasonable exception is when using these tools to generate summaries or extract key insights from documents, and even then, it’s crucial to do your homework. This means thoroughly reading and understanding the original papers beforehand to ensure the accuracy and reliability of the output.

5. Inequities

The use of AI tools can also raise equity concerns. Training generative AI requires massive amounts of data and tremendous computing power. For instance, the training process for GPT-4 cost OpenAI over $100 million (Knight, 2023). These high costs drive AI companies to paywall their AI services, which can create barriers to access, especially for researchers and institutions with limited financial resources. In fact, the paywalling of AI services further exacerbates existing inequities in academia, putting researchers from underfunded or underprivileged backgrounds at a greater disadvantage. And instead of enhancing the democratization of knowledge and innovation that academia strives for, AI becomes a tool that reinforces existing disparities, creating a divide between those who can afford cutting-edge technologies and those who cannot (Anderson et al., 2023).

Researchers whose first language is not English face pronounced disadvantages in academia, and these inequities are further exacerbated by the use of AI tools. The academic world, dominated by English as the lingua franca, imposes significant barriers to publishing, collaborating, and succeeding for non-native English-speaking researchers (Lund, 2022). Language proficiency often becomes a gatekeeping factor; even when producing high-quality work, these researchers may struggle to conform to the stringent standards of scientific English, resulting in lower acceptance rates and editorial bias (Hadan et al., 2024). Limited access to resources, such as funding for proofreading and editing services, compounds these challenges, particularly for researchers in developing countries (Lund, 2022).

Furthermore, structural differences in language and difficulties with academic writing can hinder non-native English-speaking researchers’ ability to express complex ideas, negatively impacting their confidence and development as scholars. While generative AI tools like ChatGPT offer potential solutions, assisting with grammar, sentence structure, and clarity, these tools remain inaccessible to many due to financial constraints, which risks exacerbating existing disparities. Without deliberate interventions, the promise of AI in academic writing risks reinforcing the systemic disadvantages faced by non-native English speakers rather than bridging the gap.

Another critical dimension of inequities in the use of AI tools involves students with special needs, particularly those with intellectual and developmental disabilities (IDD). Research highlights significant concerns from both educators and parents regarding the biases embedded in AI systems (Shriver, 2024). A recent study revealed that 72% of teachers and 63% of parents worry that AI models have not been trained on data that adequately includes the perspectives, experiences, and abilities of individuals with IDD (Shriver, 2024). This lack of representation results in models that fail to accurately reflect the capabilities and contributions of this group, perpetuating harmful stereotypes and limiting the potential for these tools to support inclusive education.

Moreover, such biases can have far-reaching consequences. AI tools used for personalized learning or assessments may inaccurately gauge the strengths and needs of students with special needs, leading to inappropriate interventions or missed opportunities for growth (Shriver, 2024). Addressing this inequity requires deliberate efforts from AI developers to include diverse datasets that encompass individuals with IDD, ensuring that the technology is truly reflective of all learners. Educators and policymakers must also advocate for more inclusive practices in AI development and adopt tools designed with accessibility and equity at their core.

6. Algorithmic Bias

When you prompt an AI tool, its response is generated based on the data it was trained on. If the requested information falls outside its dataset, the AI may either fabricate a response, the kind of hallucination we discussed earlier, or politely acknowledge its inability to answer. This was particularly common in the early iterations of large language models. However, with the advent of web access, these models can now retrieve and synthesize real-time information from online sources.

It is important to note here that despite their increasing accuracy and efficiency, AI platforms remain algorithmic systems. They do not think as humans do, nor do they possess our capacity for critical analysis. Their performance is entirely dependent on the quality and scope of their training data (Kharbach, 2024).

This training primarily consists of open-source or publicly available data. Unfortunately, much of this data is riddled with toxicity, inconsistencies, biases, and misrepresentations (McGovern, 2023; Kharbach, 2024). As a result, the responses generated by AI tools often reflect these biases, a phenomenon known as algorithmic bias (Gerlich, 2025).
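To illustrate this mechanism in a deliberately simplified way, consider the following Python sketch. The tiny ‘training corpus’ is invented for this example, and real models are incomparably larger and more sophisticated, but the principle is the same: a purely statistical system reproduces whatever skew its training data contains, which is how biased data becomes biased output.

```python
from collections import Counter

# A tiny, invented "training corpus" with a built-in skew (illustration only).
corpus = [
    "the engineer he solved the problem",
    "the engineer he designed the system",
    "the engineer she solved the problem",
    "the nurse she helped the patient",
    "the nurse she prepared the report",
    "the nurse he helped the patient",
]

def pronoun_counts(corpus, occupation):
    """Count which pronoun follows a given occupation word in the corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i, word in enumerate(words[:-1]):
            if word == occupation:
                counts[words[i + 1]] += 1
    return counts

# A purely frequency-based model reproduces whatever skew the data contains:
print(pronoun_counts(corpus, "engineer"))  # Counter({'he': 2, 'she': 1})
print(pronoun_counts(corpus, "nurse"))     # Counter({'she': 2, 'he': 1})
```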

A growing body of reports highlights the various manifestations of AI bias. These include AI chatbots generating racially insensitive poetry (Perrigo, 2021), spreading misinformation (Solaiman et al., 2019; Ouyang et al., 2022), and exhibiting gender and ability biases (Broussard, 2023; Ciurria, 2023), among other concerns (see also Kharbach, 2024, p. 118). As Crawford (2021) notes, examples of discriminatory AI systems are widespread:

“from gender bias in Apple’s creditworthiness algorithms to racism in the COMPAS criminal risk assessment software and to age bias in Facebook’s ad targeting. Image recognition tools miscategorize Black faces, chatbots adopt racist language, voice recognition software fails to recognize female-sounding voices, and social media platforms show more highly paid job advertisements to men than to women.” (p. 128)

These examples underscore how AI systems, rather than being neutral, often reflect and amplify existing societal biases. To mitigate this issue, major companies developing large language models, including Microsoft, Google, and Anthropic, have invested heavily in refining their AI systems. They employ sophisticated algorithms to filter out toxic content and reduce bias. However, despite these advancements, biased perspectives still find their way into AI-generated content, albeit at a reduced scale compared to earlier iterations.

Understandably, algorithmic or machine bias originates from the training data used during the pre-training phase. However, algorithmic bias is merely the visible layer of a deeper issue. Beneath it lies a more insidious form of bias, one shaped by the developers and human trainers working to refine these systems. These individuals, like anyone else, carry their own biases, which inevitably influence how AI systems interpret and represent the world.

Training AI is a rigorous and complex process, involving multiple stages. One of the most critical is the reinforcement learning phase, where human intervention plays a decisive role. During this stage, humans fine-tune AI responses to align with specific goals, shaping the system’s understanding of what is acceptable and what should be avoided. As AI learns from this feedback, it raises a pressing ethical question: “If AI can create images of the world as it could be or as it is, who gets to choose?” (Bowen & Watson, 2024, p. 18).

The power to define these narratives does not rest with AI itself, but with those guiding its development, a responsibility that warrants careful scrutiny. Given the complexities of systemic bias in AI, we doubt it will be eliminated anytime soon. As the IBM team aptly puts it, “just as systemic racial and gender bias have proven difficult to eliminate in the real world, eliminating bias in AI is no easy task” (IBM Team).

7. The Ecological Footprint of AI

While much of the ethical debate surrounding AI in academia has rightly focused on issues like authorship, plagiarism, and bias, there’s another equally pressing concern that often goes unnoticed: the ecological impact of AI.

In Atlas of AI, Kate Crawford (2021) forcefully reminds us that artificial intelligence is not immaterial. Despite popular metaphors like “the cloud” that suggest lightness and abstraction, AI is grounded in physical infrastructure (e.g., data centers, cables, servers, and batteries) that comes with significant environmental costs. Every query we send to an LLM, every dataset we process, every image we generate has a carbon footprint. AI systems, as Crawford explains, are energy-intensive, and training large-scale models like GPT-4 consumes massive amounts of electricity, much of which is still sourced from fossil fuels.

But energy use is only one part of the equation. The construction and operation of AI systems depend on rare earth minerals, lithium, cobalt, and other resources extracted from the Earth’s crust (Crawford, 2021). These materials are mined in ways that often result in environmental degradation, pollution, and the exploitation of vulnerable communities, particularly in the Global South. Crawford draws attention to how lithium extraction in places like Nevada and Bolivia, or cobalt mining in the Congo, leaves behind devastating ecological and social scars (Crawford, 2021, pp. 26–34).

As Crawford aptly puts it, “the cloud is the backbone of the artificial intelligence industry, and it’s made of rocks and lithium brine and crude oil” (p. 31). In other words, AI is part of a global extractive economy; one that, unless addressed, undermines the very values of sustainability and equity that academic institutions claim to uphold.

For researchers and educators, this ecological dimension of AI ethics cannot be ignored. If we are to use AI responsibly in academia, we must not only consider how it affects knowledge production and intellectual integrity but also how it contributes to climate change, environmental degradation, and systemic injustice. Ethical use of AI, therefore, includes asking not just what it can do, but what it costs, and who bears those costs.

Conclusion

In this chapter, we talked about the ethical complexities surrounding the use of AI in academic research. Beginning with the question of authorship, we explored how generative AI tools such as ChatGPT challenge our conventional understanding of intellectual ownership and scholarly accountability. We argued that while these tools can assist in the research and writing process, they cannot, by their very nature, be granted authorship, as they lack intention, critical reasoning, and responsibility, all of which are foundational to scholarly work. We also examined the murky terrain of copyright law as it pertains to AI-generated content, showing how current legal frameworks continue to prioritize human creativity and intentionality, leaving AI-generated works in a gray area of ownership and protection.

We then turned to the issue of plagiarism, where the increasingly human-like outputs of AI tools blur the line between original authorship and algorithmically derived content. With AI detectors proving to be unreliable, often biased, and riddled with false positives and negatives, especially against non-native English speakers, it becomes clear that ethical responsibility cannot be outsourced to detection tools. Instead, it must be cultivated through a culture of integrity, transparency, and critical awareness. Researchers must acknowledge when and how they use AI and be held accountable for the ethical implications of that use.

We further discussed the challenge of hallucinations and inaccuracies in AI-generated content, a phenomenon that raises serious concerns about the reliability of information used in academic writing. These hallucinations, which often appear convincing on the surface, can lead to the propagation of misinformation, fake citations, and faulty arguments if left unchecked.

We also addressed issues of equity, focusing on access to AI tools, language barriers, and the representation of marginalized groups. We explained how the growing commercialization of AI, especially through paywalls and subscription-based models, limits access for underfunded institutions and financially constrained researchers, further widening existing gaps in academia. We highlighted how non-native English speakers, already at a disadvantage in a system that privileges fluency in scientific English, face additional challenges when they can’t access AI writing tools. We also pointed out that students with disabilities, particularly those with intellectual and developmental differences, are often excluded from the datasets used to train AI systems. This lack of representation leads to biased outputs that fail to meet their needs, reinforcing patterns of exclusion rather than supporting genuine inclusion.

Closely related to equity is the issue of algorithmic bias, perhaps one of the most insidious and persistent ethical challenges. As we explained, AI systems learn from vast datasets of human language and behavior, which means they inevitably absorb and reproduce the biases embedded in those datasets. We explored how these systems can reflect racial, gender, ability-based, and age-related prejudices, biases that mirror the inequalities already present in our societies.

Lastly, we turned our attention to the often-overlooked ecological footprint of AI. We emphasized that AI is far from an invisible or weightless technology and that it depends on vast, extractive infrastructures that consume enormous amounts of energy and deplete critical natural resources. We discussed how AI systems rely on rare earth minerals for their hardware and burn through significant fossil fuels to power massive data centers. These environmental costs don’t fall evenly; they tend to hit vulnerable communities the hardest, especially in the Global South, where much of the extraction takes place.

These issues paint a complex and evolving picture of the ethical landscape of AI in academic research. They reveal that AI is not a neutral tool, but a powerful agent whose use has far-reaching implications: epistemological, legal, social, environmental, and moral. Ethical use of AI in academia means recognizing both its potential and its pitfalls, ensuring that we safeguard the values of integrity, equity, and accountability that define meaningful scholarly work.

License


The AI Turn in Academic Research Copyright © 2025 by Johanathan Woodworth and Mohamed Kharbach is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.