Explaining the Different Types of “AI”

“Artificial Intelligence” (AI) as a term can cover a range of different computing techniques that have quite different uses, benefits and risks. Since 2023, when someone refers to “AI”, they are probably referring to the much-discussed chat tools, known as “generative AI” or “AI Assistants” (such as ChatGPT, Gemini, Copilot, Llama, Claude, or DeepSeek, or related image or video generators) that are based on Large Language Models (LLMs). Riding on the back of the current hype, companies now apply the label to a variety of products and services, whether that is plausibly justifiable or not. 

As a term, “artificial intelligence” was coined in 1956 by a group of researchers looking for a name for their field of scientific and engineering study. Since then, a combination of mathematics and advances in data collection, software, and hardware engineering has created the capability to do things with computer programs that couldn’t be done before, and these things now popularly get called “AI”. These computer programs are mainly making predictions about something, given a particular input of data or text. For example, a generative AI tool predicts a probable text or image from a prompt, based on patterns in its training data.
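As a highly simplified illustration of this idea of pattern-based prediction, a program can “learn” which word tends to follow which in some text and then predict a likely next word. The tiny “corpus” below is invented for illustration; real generative tools use neural networks trained on vastly more data, not simple counts:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny invented "training" text.
corpus = "the cat sat on the mat and the cat slept".split()
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

The prediction is a probability judgement, not knowledge: the program has no idea what a cat is, only that “cat” often follows “the” in its training data.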

However, before this current, very confusing situation, there were — and remain — many other tools and methods that get called “AI”. It is therefore not a useful term to use on its own when considering whether and how to use such techniques. Specifics of the techniques and application contexts are needed for a sensible debate.

The following table splits up “AI” into specific types of tool that have very different characteristics and applications. It explains what each is, how it can be used, and some of the risks involved in using it. The choice of one for an application depends on the particular use case: it will be different between cases involving predicting or classifying people, for building a customer-service chatbot, or for production line optimisation. Some common technical methods underpinning the tools are explained in the second table below.

Just for the record, “Artificial General Intelligence” (AGI), typically taken to mean a system that can accomplish any intellectual task that human beings can perform, does not exist and remains a controversial ambition of some notable figures in the technology industry. It is not something to trouble us in the near future. Gaining in prominence, though, is the term “AI agent”, referring to a program that carries out a task on its own to achieve a goal set by its user, such as booking a hotel near Leeds station for next Tuesday. There is as yet little real-world experience to guide where and when such tools can be used reliably (i.e. with no person reviewing or committing to the outputs).

Types of “AI”

Each type below is set out under four headings: Description, Discussion, Examples and Risks.
Generative tools (text, images, voice, video)
Description: Generative “AI” tools are capable of generating text, images, or other media, using generative models – models that learn the patterns and structure of their input training data and then generate new data that has similar characteristics from input “prompts”.**
Discussion: There has been a lot of media coverage about “AI” systems that can generate text, images, videos, emails, Facebook posts and so on in response to someone feeding them an instruction to produce something: a prompt. While this can be fun to do, and sometimes useful, the results may be completely unreliable. Unscrupulous people can use these tools to produce offensive or misleading material, which is making it harder to sort out what is true and what isn’t on the internet. Check sources of information carefully before believing them.
Examples: GPT-4o, GPT-5, PaLM, DALL-E, Stable Diffusion, Midjourney. These tools are often accessed through user-friendly applications like chatbots, web sites or office programs, connected to them via application programming interfaces (APIs).
Risks: Inaccurate, offensive, illegal or meaningless output; copyright infringement.

Chatbots – Large Language Model-based
Description: A language model captures patterns and regularities present in natural language and is used for making assumptions on previously unseen language fragments. Large Language Models (LLMs) are trained on extremely large textual datasets. Generative LLMs, such as chatbots, are models that output text, such as the answer to a question or even an essay on a specific topic.*
Discussion: The LLMs on which these chatbots are based are designed to produce helpful and convincing responses, without any overriding guarantees regarding their accuracy or alignment with fact. People using an LLM chatbot often trust it as a human-like information source, so users can easily be convinced that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth. The data the technology uses to answer questions does not always come from a factually correct source: LLMs are trained on large datasets, usually from online sources, which can contain false statements, opinions, and creative writing.****
Examples: ChatGPT, Gemini, Llama, Claude, Copilot. These are either “conversational” stand-alone programs or apps, or interfaces to search engines, or built in to common programs like word processors or presentation makers.
Risks: Inaccurate, offensive, illegal or meaningless output; copyright infringement.

Chatbots – custom-built
Description: A computer program designed to simulate conversation with a human user, usually over the internet; especially one used to provide information or assistance to the user as part of an automated service.* Firms or public bodies may build one specifically to give (accurate) answers to questions they are commonly asked. NB not all chatbots are based on LLMs, and it might be a stretch to call them “AI”, but it is a fine distinction.
Discussion: Increasingly, when you go to the web site of a company (or sometimes a public body), it will offer a window into which you can type questions or comments conversationally, and you will get a response in plain language. This is likely to be a computer replying, not a person.
Examples: First point of contact offered by companies for customer enquiries on their web sites; similarly some public bodies, e.g. local authorities. A bit like a conversational FAQ – relevance and accuracy are crucial.
Risks: Inaccuracy, irrelevance, scope failure (“don’t know”).

Classification
Description: Classification models separate data into distinct classes, such as accept/reject, true/false, or spam/not-spam. These models are built by inputting a set of training data for which the classes are pre-labelled for the algorithm to learn from. The model is then used by inputting a different dataset for which the classes are withheld, allowing it to predict their class membership based on what it has learned from the training set.***
Discussion: A classification system is a set of “boxes” into which things are sorted (such as sunny, cloudy or rainy), and is called binary classification if there are only two boxes.* These systems are widely used, e.g. in recruitment, to inform decisions about people, but can be inaccurate. Anyone who is subject to a decision is entitled to an explanation of how and why it was made.
Examples: Recruitment: CV scanning for interview selection or rejection. Identifying an email as spam and blocking it. Social services teams selecting cases for intervention or not.
Risks: Inaccuracy, bias, output not explainable, false-positive problem (i.e. a large proportion, even most, of the cases flagged as positive are in fact negative).

Facial recognition
Description: A facial recognition system is a technology potentially capable of matching a human face from a digital image or a video frame against a database of faces.**
Discussion: Many mobile phones can now be set up to unlock themselves or authorise a payment just by recognising your face through their camera. Electronic passport gates do the same. These all need a digital photograph of you when they are set up before they can work. Sometimes, though, such as in bad lighting, they may not recognise you, and this can be frustrating.
Examples: Unlocking mobile phones, transaction authorisation, door access control, suspect identification by police.
Risks: Inaccuracy, bias, differential performance with respect to skin tone or race, output not explainable, false-positive problem.

Games-playing agents
Description: A computer program created to learn to play a game by seeking a goal (i.e. winning) and generating moves in play; or code built in to video games to animate characters.
Discussion: You can play games like chess or cards online or on phone apps “against the computer”, which has been programmed to respond to your moves with one of its own that gives it the best chance of beating you.
Examples: Chess, Go, etc., and non-player characters embedded in video games.
Risks: Not following the rules; odds unfairly stacked against the player in gambling games.

Natural language processing
Description: The ability of a machine to process, analyse, and mimic human language, either spoken or written.*
Discussion: You can now talk to some devices or computer programs and they will work out what you said, or type in normal text to get a translation. In the other direction, some products can speak their outputs instead of displaying text on a screen.
Examples: Amazon Echo (Alexa) and Google Nest record a voice instruction, turn it into something a computer can process, then act on it, sometimes turning the result into speech. Google Translate takes text as input and turns it into another language.
Risks: Inaccuracy, incomplete interpretation.

Prediction
Description: Prediction tools uncover relationships and patterns within large volumes of data that can be used to predict behaviour and events.**
Discussion: Systems that make predictions, i.e. make a guess about the future, occur in many places. They may be used by insurers (to assess us for risk and premiums), by public bodies in making decisions about us, and in non-human situations like flood risk and weather forecasts. The one thing they all have in common is that predictions are not certainties but probabilities, like which horse will win a race. It is not always clear when these processes are being used, but anyone who is subject to a decision is entitled to an explanation of how and why it was made, particularly from public bodies.
Examples: (a) Predictions about humans: whether they might commit fraud, be likely to reoffend when released from jail, or be likely to have a car accident. (b) Predictions about physical systems: weather, traffic congestion.
Risks: Inaccuracy, bias, output not explainable, false-positive problem, self-reinforcing (performative) prediction.

Ranking
Description: Algorithms used by social media platforms or search engines to determine the order in which posts, results or adverts are displayed to a user.
Discussion: When you use a search engine or go on to Facebook or other social media, the system works out what to show you (search results, posts, adverts, etc.) and in what order, from a vast number of possibilities, based on information it has collected about you, your interests, your previous uses, and people you are connected with or similar to. This isn’t necessarily obvious, but you are probably being presented with different things to your friend.
Examples: Google search results, social media feed posts, adverts presented on web sites.
Risks: Bias, attitude reinforcement (bubble creation).

Recommender systems
Description: Similar to ranking, recommender algorithms determine what information to present to a user; typically, the suggestions refer to various decision-making processes, such as what product to purchase, what music to listen to, or what online news to read.**
Discussion: When you use shopping, news, recipe, video or music services online, the system works out what to show you based on information it has collected about you, your interests, previous uses such as purchases, and people you are connected with or similar to. This can seem a bit spooky.
Examples: Playlist generators for video and music services (Netflix, Spotify), product recommenders for online stores, and content recommenders for social media, news, books, restaurants and online dating.
Risks: Inappropriate recommendations, attitude reinforcement (bubble creation).

Sensors
Description: Devices that measure and analyse their environment – e.g. image, sound and motion detection, temperature measurement – and take action accordingly.
Discussion: It is now possible to get all sorts of devices for the home to help keep people safe, such as fall detectors and devices that can tell if something unusual is going on and send an alert to family or a doctor.
Examples: Monitoring devices for fall detection, illness, etc., for people at risk.
Risks: Inaccuracy, false alerts.
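To make the Classification entry in the table above concrete, here is a deliberately toy sketch of learning spam/not-spam classes from pre-labelled examples. The training sentences, labels, and word-overlap scoring rule are all invented for illustration; real classifiers use statistical models trained on far larger datasets:

```python
from collections import Counter

# Pre-labelled training data: the "classes" are spam / not-spam.
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for tuesday", "not-spam"),
    ("minutes from the staff meeting", "not-spam"),
]

# "Learning" here is just counting how often each word appears per class.
word_counts = {"spam": Counter(), "not-spam": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def classify(text):
    """Pick the class whose training words best overlap the new text."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free prize money"))       # spam
print(classify("tuesday staff meeting"))  # not-spam
```

Even this toy shows where the risks in the table come from: a legitimate email that happens to mention “free” and “prize” would be flagged as spam – a false positive – and the model can only reflect whatever biases are present in its training examples.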

Some common methods used in the above processes 

Each method below is described, together with the processes above in which it is used.
Clustering
Description: Clustering is used for analyzing data which does not include pre-labeled classes, or even a class attribute at all. Data instances are grouped together using the concept of maximizing the within-group similarity and minimizing the between-groups similarity.***
Used in: Recommender systems

Expert systems
Description: Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules.
Used in: Diagnosis, design, prediction, sensors

Machine learning (ML)
Description: ML refers to a range of statistical, mathematical and optimisation techniques for building a model that fits a set of data and can be used, for example, for data analysis or prediction in circumstances similar to those from which the modelling data was derived.
Used in: A wide range of “AI” methods and processes. The transformer models used in Large Language Models use ML.

Predictive analysis
Description: The organisation of analyses of structured and unstructured data for inference and correlation that provides a useful predictive capability to new circumstances or data.*
Used in: Prediction, ranking

Profiling
Description: ‘Profiling’ means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.*
Used in: Classification, prediction, ranking, recommender systems
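The Clustering method in the table above can be illustrated with a toy version of the well-known k-means algorithm, which has no pre-labelled classes to learn from and instead repeatedly moves group “centres” towards the data. The one-dimensional data points and starting centres here are invented for illustration:

```python
# A minimal k-means clustering sketch on one-dimensional data:
# two obvious groups, one around 1.5 and one around 10.5.
points = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]

def kmeans(points, centres, iterations=10):
    for _ in range(iterations):
        # Assign each point to its nearest centre...
        groups = [[] for _ in centres]
        for p in points:
            nearest = min(range(len(centres)), key=lambda i: abs(p - centres[i]))
            groups[nearest].append(p)
        # ...then move each centre to the mean of its group.
        centres = [sum(g) / len(g) if g else c for g, c in zip(groups, centres)]
    return centres

print(kmeans(points, centres=[0.0, 5.0]))  # roughly [1.5, 10.5]
```

The algorithm discovers the two groups on its own – which is exactly what distinguishes clustering from classification, where the classes are given in advance.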

Reference Sources

* From https://digital-strategy.ec.europa.eu/en/library/eu-us-terminology-and-taxonomy-artificial-intelligence
** From Wikipedia
*** From https://www.kdnuggets.com/2016/05/machine-learning-key-terms-explained.html
**** From https://www.ox.ac.uk/news/2023-11-20-large-language-models-pose-risk-science-false-answers-says-oxford-study

Explaining the Different Types of “AI” © 2025 by Paul Waller is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/4.0/

The material presented in this article has been peer-reviewed. Any opinions are the author’s own and do not necessarily represent those of LibSTEMM. The information is given in good faith but without any liability on the part of LibSTEMM. No “AI” tools were used in the production of this work.

