Generative AI
Generative AI refers to forms of artificial intelligence (AI) that create or generate new content, such as texts and images that look real. In the most popular applications of generative AI, it is almost impossible to tell that the content was created by AI rather than by a human. How does this technology work?
Predicting answers
The most well-known examples of generative AI are chatbots and digital assistants. The user enters text, usually in the form of a question or instruction. For example: 'How do I bake a vegan apple pie?' Or, a little more elaborate: 'Write a happy song about baking an apple pie.'
The possibilities of generative AI are endless: from solving maths homework and writing computer code to giving advice on nutrition or suggesting new recipes. Generating videos, applications (apps) or games will also become increasingly easy.
The content created using generative AI, such as text, is also referred to as output. The AI model is trained to predict the next word or series of words in a sentence. This prediction is based on the large amount of text (data) used to train the AI models.
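The idea of next-word prediction can be illustrated with a toy sketch. The short corpus and the counting approach below are purely illustrative assumptions: real models learn from billions of words using neural networks, not simple counts, but the principle of predicting the most likely next word is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus; a real model is trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

In this toy corpus, 'the' is followed by 'cat' twice but by 'mat' and 'fish' only once each, so the model predicts 'cat'. Scaled up to enormous amounts of text and far richer context, this predict-the-next-word principle is what produces fluent answers.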
The basics of generative AI: machine learning
The underlying technology of generative AI is what we call machine learning (ML) models.
Computers 'learn' patterns or associations from large amounts of data. Take a computer system that learns to recognise a dog. Suppose the computer has been given 10,000 images to learn when an image does or does not contain a dog. The computer sees a pattern in the pixels of the images: certain combinations of pixels correspond to a dog, or part of one. Based on this, it can conclude that a new image may show a dog.
The system assigns a 'score' to this: how likely is it that the image contains a dog? If enough features are detected, and a recognisable pattern can therefore be seen in the pixels, the score will be high. The system will then make a classification: the image shows a dog.
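The score-and-classify step can be sketched as follows. The feature names and weights here are hypothetical, invented for illustration; a real system learns its features and weights from the training images rather than having them written by hand.

```python
def dog_score(features):
    """Sum the contributions of detected features into a confidence score."""
    # Hypothetical weights: how strongly each feature suggests a dog.
    weights = {"fur": 0.3, "ears": 0.2, "snout": 0.3, "tail": 0.2}
    return sum(weights[f] for f in features if f in weights)

def classify(features, threshold=0.6):
    """Classify as 'dog' when enough features are detected."""
    score = dog_score(features)
    label = "dog" if score >= threshold else "not a dog"
    return label, score

print(classify(["fur", "ears", "snout"]))  # enough features: classified as dog
print(classify(["tail"]))                  # too few features: not a dog
```

The key point is the threshold: the system does not 'know' what a dog is, it only checks whether the detected pattern scores highly enough to justify the classification.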
This works not only with images, but also with texts, for example. The more data the computer receives, the better it learns to recognise patterns and to generate similar data itself.
Large language models
Models trained with huge amounts of data, such as text samples, are referred to as large language models (LLMs). The computer system uses this information to determine how words fit into a sentence and how sentences fit into a context. This generates output: the text that the user ultimately sees on the screen, like the aforementioned song about baking an apple pie.
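How a language model turns word-by-word prediction into a full sentence can be sketched with a small generation loop. The word table and probabilities below are invented assumptions for illustration; a real LLM computes such probabilities with a neural network over a vocabulary of many thousands of words.

```python
import random

# Hypothetical next-word probabilities; a real LLM derives these
# from the patterns in its vast training data.
next_word = {
    "bake":  {"an": 0.9, "the": 0.1},
    "an":    {"apple": 1.0},
    "the":   {"pie": 1.0},
    "apple": {"pie": 1.0},
    "pie":   {"<end>": 1.0},
}

def generate(start, max_words=10):
    """Build a sentence by repeatedly sampling a likely next word."""
    words = [start]
    while len(words) < max_words:
        choices = next_word.get(words[-1])
        if not choices:
            break
        nxt = random.choices(list(choices), weights=choices.values())[0]
        if nxt == "<end>":  # the model predicts the sentence is finished
            break
        words.append(nxt)
    return " ".join(words)

print(generate("bake"))  # e.g. "bake an apple pie"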
Foundation model
AI models are sometimes adjusted with new training data. The basic model on which these adjustments are made is called a foundation model. It forms the foundation for models that perform specific tasks, such as generating text.