Generative AI tools have quickly become popular and widely used. As the technology continues to develop, the ethical considerations surrounding its use have gained prominence. This page describes some of the ethical issues raised by generative AI tools.
Algorithmic bias is defined by Wikipedia as "systematic and repeatable errors in a computer system that create 'unfair' outcomes, such as 'privileging' one category over another in ways different from the intended function of the algorithm". Algorithmic bias occurs when an AI system, like a chatbot, produces results that are systematically unfair or prejudiced. This bias can be introduced from various sources, including the data used to train the chatbot and the algorithms that govern its behaviour. If the training data is biased, the chatbot may adopt these biases in its responses. For example, consider a chatbot designed to help users with job applications. If the training data mostly includes examples from one demographic group, the chatbot may not provide equally relevant advice to users from different backgrounds. This can lead to unfair treatment, where some users receive better assistance than others based solely on their identity or location.
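To make the mechanism concrete, here is a small, purely illustrative sketch in Python. It is not taken from any real chatbot, and all the group names and "advice" are invented: a toy advice model that simply echoes the most common advice in its training data ends up serving an under-represented group poorly, because it falls back on the majority pattern.

```python
from collections import Counter

# Toy illustration only: a "job-advice" model that echoes the most common
# advice seen in its training data. All groups and advice strings are made up.
training_data = [
    # (applicant background, advice that worked for that applicant)
    ("group_a", "highlight your internship experience"),
    ("group_a", "highlight your internship experience"),
    ("group_a", "mention your university society roles"),
    ("group_b", "emphasise your on-the-job training"),
]

def advise(background):
    """Return the most common advice seen for this background."""
    relevant = [advice for bg, advice in training_data if bg == background]
    if len(relevant) < 2:
        # Too few examples for this group, so the model falls back on the
        # majority pattern - this is how under-representation becomes bias.
        relevant = [advice for _, advice in training_data]
    return Counter(relevant).most_common(1)[0][0]

print(advise("group_a"))  # tailored: "highlight your internship experience"
print(advise("group_b"))  # falls back to the majority group's advice
```

Running the sketch shows that group_b simply receives group_a's advice, which is exactly the kind of unequal treatment the job-application example above describes.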
It's a big issue in the tech industry. Many books have been written about how algorithmic bias is baked in when minority groups are not involved in the creation and testing of algorithms. Many tech products, for example, have been observed to fail to recognise black skin. This bias does appear to be carried over into generative AI tools - an August 2025 study found that AI teacher tools display racial bias when generating student behaviour plans.
Generative AI tools can only be as good as the data they are trained on. They can't differentiate fact from fiction, and they don't recognise bias. In 2016 Microsoft released an AI Twitter chatbot named Tay and withdrew the service after a mere 16 hours because malicious users "trained" Tay to tweet racist, misogynistic and other hateful remarks. Generative AI tools have mostly been created by English speakers and are trained mostly on English-language data. The tools therefore work best in English and reflect a Western point of view. Research has found that AI detection tools are biased against non-native English speakers, often wrongly flagging their work as being generated by AI.
Generative AI can amplify bias and negative stereotyping and pass these on to its users or reinforce a user's existing strong opinions on contentious issues.
Automated decision-making refers to the process where AI systems make choices or provide recommendations without human intervention. While this technology can improve efficiency and speed, it raises important ethical questions about accountability and fairness.
A major concern is that when a chatbot makes a decision, it can be difficult to determine who is responsible for that decision. For example, if a chatbot gives incorrect advice about a medical issue or a job application, who is to blame? Is it the developers who created the chatbot, the company that deployed it, or the chatbot itself? This lack of clear accountability can lead to confusion and mistrust among users.
Another ethical issue is the potential for bias in automated decision-making. If a chatbot is trained on biased data, it may make unfair decisions that negatively impact certain groups of people. For instance, if a chatbot used in hiring processes favours candidates from a specific demographic, it can perpetuate inequality and discrimination. This raises questions about whether it is ethical to rely on AI for decisions that can significantly affect people's lives.
When you interact with a chatbot, it may ask for information like your name, email address, or even details about your preferences and habits, or you might offer up these details as part of the question you are asking. While this data can help the chatbot offer personalised responses, it also means you are sharing personal and sensitive data. An ethical concern here is whether users fully understand how their data will be used and if they can trust the chatbot to keep it safe.
Another major issue is data security. If a chatbot collects personal information, it must have strong security measures in place to protect that data from hackers or unauthorised access. A data breach could expose users' private information, leading to identity theft or other harmful consequences. Therefore, companies that develop chatbots have a responsibility to ensure that user data is secure. So far, their record on this is not stellar; 2025 alone produced some unflattering headlines.
For a while, conversations with ChatGPT were being indexed by Google and other search engines and made searchable on the web. These conversations can be pretty dull, but think of all the oversharing that people do online. Making conversations public like this wasn't intentional, and OpenAI fixed it when they found out, but it does indicate how slapdash these companies can be when it comes to privacy and security; it is much better to anticipate problems and prevent them from happening than to be embarrassed in public like this.
Another ethical consideration is informed consent. Users should be aware of what data is being collected and how it will be used. Would you use an AI chatbot if all your searches on it were made public, accidentally or not? Users should also have the option to opt out of data collection if they choose. This means that companies should provide clear and understandable privacy policies, but many people think that, too often, these privacy policies and terms of use documents are too long and too complex to read easily.
Getting generative AI tools to where they are today has involved a lot of heavy lifting by people who are never mentioned by the companies that make these tools. Workers in Kenya were paid less than $2 per hour to review and label LLM training content containing violence, hate speech and sexual abuse. Constant exposure to this kind of material can affect mental health and trigger PTSD. These workers are not provided with any social protection benefits either.
As AI chatbots become more prevalent in our lives, the need for regulation has become a hot topic. Regulation refers to the rules and guidelines that govern how these technologies operate, ensuring they are used safely and ethically.
One of the main reasons for regulating AI chatbots is to ensure user safety. Chatbots can provide information and assistance, but they can also make mistakes or give harmful advice. For example, if a chatbot gives incorrect medical information, it could lead to serious consequences for the user. Regulations can help set standards for accuracy and reliability, ensuring that chatbots provide trustworthy information.
Another important aspect of regulation is protecting user privacy. Chatbots often collect personal data to improve their services. Without proper regulation, there is a risk that this data could be misused or shared without consent. Regulations can establish clear guidelines on how user data should be collected, stored, and used, ensuring that individuals' privacy rights are respected.
Additionally, regulation can help address issues of bias and fairness in AI chatbots. If a chatbot is trained on biased data, it may produce unfair or discriminatory responses. Regulation can require developers to implement fairness measures, ensuring that chatbots treat all users equitably.
The companies that build AI chatbots are frequently at odds with policy makers, ethicists, IT researchers and other interested parties. The companies would prefer a light touch, if any regulation at all, arguing that regulation will make it harder to build better AI tools. In the US, tech companies unsuccessfully lobbied Congress to introduce a 10-year ban on regulating AI. In December 2023, EU officials agreed the content of an AI regulatory law, which was adopted in 2024 and began taking effect in 2025. Other jurisdictions (e.g. the UK and China) are looking to introduce their own regulations, but perhaps generative AI needs a global regulatory response.
Copyright is a legal concept that protects the rights of creators over their original works, such as books, music and art.
AI chatbots have been trained on vast amounts of data, including copyrighted material. This raises important ethical questions about the implications of using such content for training these systems. On one side of the debate are the companies that make AI chatbots, and on the other side, well, just about everyone else. Yes, a generative AI tool can create a funny image of SpongeBob SquarePants dressed as Captain America, but in order to do so, it has had to violate copyright.
Copyright violation is immoral and often illegal. This hasn't stopped people from setting up websites and platforms where all sorts of digital material can be shared illegally. Meta (the parent company of Facebook, WhatsApp and Instagram) downloaded the entire contents of an illegal book library to help train its generative AI. The vast majority of authors do not earn much money; the average Australian writer earns around AU$18,000 per year (roughly €10,000), while Meta's gross profit in 2024 was $134 billion. Surely they can afford to buy a book or two? The big tech companies are not short of a few euro; why should they be allowed to steal people's work to create AI tools that they intend to profit from? Sam Altman, the CEO of OpenAI (the makers of ChatGPT), has stated that it would be "impossible" to create ChatGPT without using copyrighted material. By the same token, you could argue that you will never get rich without being able to rob banks, and therefore you should be allowed to rob banks. Perhaps not. Are AI chatbots useful and valuable enough to justify this?
Obviously, this copyrighted material is being used without the permission of the original creators. ETBI certainly weren't asked whether the pages on this website could be used for training AI chatbots. What if ETBI didn't want this content used?
When chatbots are trained on content, they can generate responses that closely resemble the original works. This can lead to situations where the chatbot effectively "copies" the ideas or expressions of the creators, which raises questions about intellectual property rights. If a chatbot produces content that is too similar to a copyrighted work, it could be seen as another violation of copyright law.
There is also the potential impact on artists, writers, musicians and others. If chatbots can be used to generate content based on their training data, it may reduce the demand for original works: despite the drop in quality, using AI tools is cheaper than paying human artists. This could harm authors, artists and musicians who rely on their works for income. The ethical dilemma here is whether the benefits of using AI to generate content outweigh the potential harm to those whose work has helped create it. A lot of people think they do not.
AI chatbots rely on powerful servers and data centres to process information and generate responses. These data centres consume a large amount of electricity, often sourced from non-renewable energy. This high energy demand contributes to carbon emissions and climate change. In the US, power consumption by data centres accounts for almost half of the predicted growth in electricity demand between now and 2030, while in Ireland, data centre power use has outstripped that of all homes in towns and cities. Some people think that generative AI chatbots are not a good use of energy.
Data centres also require water for cooling systems to prevent overheating. Using large amounts of water for technology can strain local resources, especially in areas facing drought or water shortages. Having ChatGPT compose a 100-word email uses around 500 ml of water. Data centres also affect the quality of life of people who live near them, often degrading local water and air quality.
"Some Harm Considerations of Large Language Models (LLMs)" by Rebecca Sweetman is licensed under CC BY-NC-SA 4.0.