Artificial Intelligence, or AI, is a branch of computer science that focuses on creating machines that can do tasks normally done by people, such as learning and solving problems. In other words, AI researchers are trying to make machines that can think like humans.
AI research is still in its infancy. In 1950, mathematician and computer scientist Alan Turing introduced what is now known as the Turing Test. This test checks if a machine can behave in a way that makes it hard for a person to tell whether it is a machine or a human based solely on its answers. Whether any machine has yet truly passed this test is still debated.
AI research also overlaps with many other fields. While researchers are working on building intelligent machines, they are also exploring how these machines can be used in different ways. This has led to many related areas of study, all grouped under the term AI.

Generative AI uses complex algorithms (an algorithm is basically a set of steps used to solve a problem) that learn from and adjust to the text they receive - a process called machine learning. Because these programs work with language, they are known as language models. When trained on huge amounts of data - ChatGPT, for example, was trained on 570GB of data, or about 300 billion words - they are called large language models (LLMs). Some even refer to them as language learning models, although most models do not continue to learn after their initial training.
However, generative AI isn’t the advanced, thinking machines we see in movies like 2001, Terminator, or the TV series Westworld. In reality, it’s simply a program that has been trained on text from various sources such as websites, newspapers, magazines, and books. It builds a huge vocabulary and uses algorithms to predict which word is likely to come next in a sequence, similar to how predictive texting or autocomplete works on your phone. This means it doesn’t really understand the text; it just assembles words based on patterns, which can lead to its abilities and output quality being overestimated.
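The next-word-prediction idea described above can be sketched in a few lines of code. This is a toy illustration only: it counts which word follows which in a tiny made-up "training corpus" and predicts the most common follower. Real LLMs use neural networks trained on billions of words, but the core task - guess the likely next word - is the same.

```python
from collections import Counter, defaultdict

# A tiny, made-up "training corpus" (illustrative, not real training data).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" - it follows "the" most often here
```

Autocomplete on a phone works on the same principle, just with far more data and a far more sophisticated model than a simple frequency count.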
The Chinese Room is a thought experiment created by philosopher John Searle to discuss whether machines can truly understand language, or if they are just following rules without comprehension.
Imagine a person who doesn’t speak Chinese is inside a room. This person has a big book of instructions that tells them how to respond to Chinese characters that are slipped under the door. When someone outside the room sends in a question written in Chinese, the person inside looks up the symbols in the book and follows the instructions to write back a response in Chinese.
To the person outside the room, it seems like the person inside understands Chinese because they are giving correct answers. However, the person inside doesn’t actually understand the language; they are just following the rules in the book.
Searle uses this example to argue that even if a computer - a chatbot - can respond to questions in a language, it doesn’t mean the computer understands that language. It’s just processing symbols without any real comprehension, similar to the person in the room.
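Searle's rule book can be pictured as a simple lookup table: the program maps input symbols to output symbols by rule, with no notion of what either side means. The Chinese phrases and their pairings below are invented for illustration, not a real rule book.

```python
# The "room" is just a lookup table of symbol-to-symbol rules.
rule_book = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "It's nice today."
}

def person_in_room(message):
    """Follow the rule book; understand nothing."""
    # Unknown input gets a canned fallback: "Sorry, I don't understand."
    return rule_book.get(message, "对不起，我不明白。")

print(person_in_room("你好吗？"))  # a fluent-looking reply, zero comprehension
```

From outside, the replies look fluent; inside, there is only symbol matching - which is exactly Searle's point about chatbots.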
A chatbot is an application or website feature that simulates human conversation. Websites often use chatbots to help with customer service. If you visit a webpage and see a chat window pop up offering help, it’s likely a chatbot designed to recognize certain words in your questions and provide answers. While some chatbots might occasionally seem convincing enough to pass the Turing Test, most people don’t believe they are truly intelligent. Alan Turing thought the test showed that machines might one day be able to make us think they can think, even if they can’t. The newest chatbots are getting close to that level.
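A keyword-recognizing customer-service chatbot of the kind described above can be sketched as follows. The keywords and canned replies here are made up for illustration; real systems are more elaborate, but many work on this basic pattern.

```python
import re

# Hypothetical keyword sets paired with canned replies.
RESPONSES = [
    ({"hours", "open"}, "We are open 9am-5pm, Monday to Friday."),
    ({"password", "login"}, "You can reset your password on the account page."),
    ({"refund", "return"}, "Refunds are processed within 5 business days."),
]

def reply(message):
    """Return the canned answer for the first matching keyword set."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, answer in RESPONSES:
        if words & keywords:  # any keyword present in the message?
            return answer
    return "Let me connect you with a human agent."

print(reply("What are your opening hours?"))
```

Such a bot can seem helpful within its narrow script, but it falls back to a default the moment a question strays outside its keyword list - one reason few people mistake it for genuine intelligence.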
In late 2022, the company OpenAI released a chatbot named ChatGPT (Chat Generative Pre-Trained Transformer). ChatGPT is much more sophisticated than other chatbots. It can (list shortened from Wikipedia):
It doesn't necessarily do all these things well (yet!), but it can do them, and - importantly - it can do them very quickly.
Since its release, ChatGPT has become very popular because it can respond to prompts to generate new and original text. Other tech companies, such as Microsoft and Google, have accelerated their AI research, and many companies have released their own ChatGPT equivalents.
These complex programs use algorithms - a defined process or set of rules to be followed in calculations or other problem-solving operations - to generate content based on conversational prompts given by humans, which is why ChatGPT and similar programs are called generative AI. They've become popular because they work very quickly and have a huge amount of training data to draw upon.
So, generative artificial intelligence is the term used to describe computer programs, such as ChatGPT, that can be prompted to create new content, including audio, computer code, images, text, simulations, and videos. In the AI family tree above, generative AI mostly falls under Natural Language Processing (but can also be classified under speech and vision and deep learning - AI is complicated!)
The sophistication of these computer programs (some argue ChatGPT can now pass the Turing test) and their capacity to process and generate human-like text has implications for content creation, the web, and the automation of various language-related tasks.
ChatGPT is the best-known AI chatbot, but there are many others. The companies that make them continue to work to make them more sophisticated and more powerful. However, they are not intelligent.
Despite some limitations, generative AI has huge potential in many areas and already has a rapidly expanding number of uses:
Some ways you can use Generative AI chatbots:
Image sources: Amanda Wheatley and Sandy Hervieux - The AI Family Tree used under a CC-BY-NC-SA-4.0 licence
Katie Mack on BlueSky