Artificial Intelligence (AI) is a branch of computer science concerned with developing machines that can perform tasks usually associated with human intelligence, such as learning and problem-solving. Put more simply, AI researchers are trying to develop machines that can think like humans.
In late 2022, the company OpenAI released a chatbot named ChatGPT (Chat Generative Pre-trained Transformer). A chatbot is an app or a web platform that mimics human conversation. Chatbots are often used by websites to provide customer service: if you go to a webpage and see a chat pop-up offering assistance, chances are it's a chatbot programmed to recognise keywords in queries and give a set response. ChatGPT is much more sophisticated than other chatbots. It can (list shortened from Wikipedia):
- write and debug software
- compose music
- write fairy tales
- write student essays
- answer test questions
- generate business ideas
- write poetry and song lyrics
- translate and summarize text
It doesn't necessarily do these things well, but it can do them. And as the technology improves, so will the quality of the output.
Since its release, ChatGPT has become quite popular because it can respond to prompts to generate text, such as web content. Other tech companies, such as Microsoft and Google, have accelerated their AI research, and many have released their own ChatGPT equivalents.
ChatGPT is an easy way to generate material for web pages, and it is becoming more common for companies to use ChatGPT and similar programs to replace human writers. G/O Media (an online media platform), CNET (a technology and consumer electronics website), Bild (a German newspaper) and other companies have replaced human writers with AI chatbots. However, these chatbots do not write as well as humans: AI content is often poorly written, plagiarised and full of errors. The big problem is that AI isn't intelligent at all.
Large Language Models
ChatGPT and its fellow chatbots are not AI as we understand it from films and television like 2001, Terminator or Westworld. When you get down to it, ChatGPT and its ilk are really quite dull. These chatbots are just programs that have been "fed" (the technical term is "trained on") data such as webpages, newspapers, magazines, books and other text content (the datasets used are very large and consist of language, which is why these programs are often called Large Language Models). They use this data to build up a big vocabulary of words, along with the words that usually appear next to those words in a given context. Essentially, generative AI is a very sophisticated form of predictive texting. AI chatbots don't know anything other than which word usually comes next, and they can't distinguish fact from fiction.
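The "predictive texting" idea can be sketched in a few lines of Python. This is a toy model invented purely for illustration (the tiny training text and all the names are made up): it counts which word follows which, then generates text by always picking the most common follower. Real Large Language Models are vastly more sophisticated, but the principle of predicting the next word is the same.

```python
import random
from collections import defaultdict

# Toy "language model": learn which word usually follows each word,
# then generate text by always picking the most frequent follower.
# A crude sketch of the predictive-texting idea, nothing more.

def train(text):
    follows = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, length=8):
    word = start
    out = [word]
    for _ in range(length - 1):
        candidates = follows.get(word)
        if not candidates:
            break
        # pick the most common next word seen in training
        word = max(candidates, key=candidates.get)
        out.append(word)
    return " ".join(out)

corpus = ("the cat sat on the mat . "
          "the cat ate the fish . "
          "the dog sat on the rug .")
model = train(corpus)
print(generate(model, "the"))  # -> "the cat sat on the cat sat on"
```

Notice what the output is: grammatical, plausible-sounding, and meaningless. The model has no idea what a cat is; it only knows which word tends to come next.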
Hallucinations
Generative AI tools don't literally see pink elephants, but they do make things up. They don't do this deliberately; it's a by-product of how they work, stringing words together in a plausible sequence. When they confidently present "facts" that are known to be untrue, they are said to be hallucinating. Obviously, if you're looking for accurate information, this is a growing problem as AI-generated web content increases.
Hallucinations are a good reason why learners shouldn't use AI chatbots to write essays (please, don't do this); you can be caught out by:
- the inclusion of nonsense confidently presented as fact
- the inclusion of a sentence saying something like "As a large language model, I..."
- the reference list, which can include items that don't exist. These items (books, journal articles, webpages) will look very plausible, as though they could or should exist, but they don't.
AI and the Web
The image at the top of the page shows the top Google search result (September 2023) for the query "country in Africa that starts with the letter K". The response is nonsense, but you can find it on other sites presented as a silly joke. ChatGPT has probably been trained on those sites, so if you use that prompt, ChatGPT "knows" what usually follows.
AI chatbots don't write facts; they write things that sound plausible based on the data they have been trained on.
Generative AI is a cheap and easy way to write web pages, articles and even books. Everything in these sources will sound plausible but might not be true or even appropriate.
Hopefully, you can see the problem: the increasing use of AI chatbots to create web content means the web becomes less reliable as an information source as it gets cluttered up with low-quality AI-generated content. Why do this? It's cheap, easy and, thanks to search engine optimisation techniques, probably raises the site's profile in Google search results. A higher profile on Google search will likely lead to more traffic to the site and more views of adverts.
Researchers have suggested the possibility of future "model collapse": generative AI being trained on data produced by other AIs rather than by human beings, leading to deteriorating quality of AI output.
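The feedback loop behind model collapse can be illustrated with a deliberately crude sketch (everything here is invented for illustration, not a real collapse mechanism): a trivial "model" that keeps only the more common half of the words it sees, retrained each generation on its own output. Rare words vanish quickly and the vocabulary shrinks towards nothing.

```python
from collections import Counter

# Toy feedback loop: a crude "model" that favours frequent words.
# Each generation it is "retrained" on the previous generation's
# output, so rarer words are lost and diversity collapses.
# An invented illustration, not how real models actually collapse.

def regenerate(text, n_words=14):
    counts = Counter(text.split())
    # keep only the more common half of the vocabulary, a stand-in
    # for a model preferring high-probability outputs
    keep = [w for w, _ in counts.most_common(max(1, len(counts) // 2))]
    return " ".join(keep[i % len(keep)] for i in range(n_words))

text = "the quick brown fox jumps over the lazy dog near the old red barn"
for gen in range(4):
    print(f"generation {gen}: {len(set(text.split()))} distinct words")
    text = regenerate(text)
```

Run as-is, the distinct-word count drops from 12 to 6 to 3 to 1 across the four generations: each round of training on the previous round's output throws information away.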
Another important thing to remember is that AI is trained on human-generated content from a variety of sources, including the web. All sources are treated equally, so no distinction is made between fact and fiction, or between objective and biased sources. AI tools cannot reliably distinguish biased from unbiased data when constructing their responses.
Hopefully, as the technology behind AI chatbots improves, their output will improve too (so long as they're trained on the right data). For now, if you're searching the web, just be aware that AI-generated content may be cluttering up your search results.