AI Taxonomy

Artificial Intelligence (AI) Terminology Defined

Confusion in terminology is rampant in any field undergoing rapid transformation.  Words used to describe devices, services, approaches and strategies are rife with misunderstanding.  The following definitions are intended to reduce such confusion and standardize terms related to the use of advanced technology in the delivery of healthcare.  They have been developed by the PATH Task Force on Standards and Guidelines.  The terms and definitions were derived from numerous sources.  Definitions taken in full from another source are referenced.

Artificial Intelligence (AI) – Sometimes called machine intelligence, AI is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. 

Accuracy – This is the ratio of correct predictions to the total predictions.  Accuracy is considered an insufficient measure of the performance of a model.  See also Precision, Recall, and F1 Score.
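As an illustration, accuracy can be computed directly from predicted and true labels; the imbalanced toy data below (invented for this sketch) also shows why accuracy alone can mislead:

```python
# Illustrative sketch: accuracy as correct predictions over total predictions.
def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# A model that predicts "0" for every case still scores 80% accuracy here,
# even though it missed both positive cases in this imbalanced set.
preds  = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
labels = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
print(accuracy(preds, labels))  # 0.8
```

This is why Precision, Recall, and the F1 Score are reported alongside accuracy.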

Algorithm – Algorithms are mathematical formulas and/or programming commands that inform an ordinary, non-intelligent computer how to solve problems with artificial intelligence.  Algorithms are rules that teach artificial intelligence to perform any task that, if a human carried out the same activity, we would say required intelligence to accomplish.  AI systems typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion and manipulation and, to a lesser extent, social intelligence and creativity.  AI is an umbrella term that covers the use of various methods to perform ‘smart’ tasks we often associate with the human mind, such as learning and reasoning.[1]

Agent — Also called an assistant, broker, bot, or intelligent agent; an autonomous entity that observes its environment through sensors and acts upon it using actuators.

Assistive AI – Providing output(s) that assist physicians and other specialists in medical decisions. The ultimate medical decision is made by the physician. (see also Clinical Decision Support).

Autonomous AI –
Providing output(s) that represent a medical decision without oversight by a physician or other expert. These decisions can be diagnostic or therapeutic. Autonomous AI can perform at various levels depending on the risk category of the decision as well as the involvement of the health care system.  Typically, Autonomous AI will require some form of medical practice insurance.

Autonomic computing – A system’s capacity for adaptive self-management of its own resources for high-level computing functions without user input.

Artificial General Intelligence (AGI)— An emerging field aiming at the building of “thinking machines”; that is, general-purpose systems with intelligence comparable to that of the human mind, also called “Strong AI”, “Human-level AI”, etc.

Artificial Narrow Intelligence (ANI)— A computer’s ability to perform a single task extremely well, such as crawling a webpage or playing chess.[2]

Artificial Super Intelligence (ASI) — The point at which the capability of computers surpasses that of humans: smarter than the best human brains, with the ability to apply that intelligence to absolutely anything.[3]

Backpropagation – The way many neural nets learn. They find the difference between their output and the desired output, then adjust the calculations in reverse order of execution.
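A minimal numerical sketch of this idea, using a single weight and a squared-error loss (all values here are invented for illustration):

```python
# Minimal sketch of one backpropagation step for a single weight.
# Model: output = w * x; loss = (output - target)**2.
def backprop_step(w, x, target, lr=0.1):
    output = w * x
    error = output - target   # difference between output and desired output
    grad = 2 * error * x      # dLoss/dw via the chain rule
    return w - lr * grad      # adjust the weight against the gradient

w = 0.5
for _ in range(50):
    w = backprop_step(w, x=2.0, target=3.0)
print(round(w, 3))  # converges toward 1.5, since 1.5 * 2.0 = 3.0
```

In a real multi-layer network the same adjustment is propagated backward through every layer, not just one weight.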

Bayesian network – A type of probabilistic graphical model built from data and/or expert opinion. These graphs express how the probability of one event depends on the probabilities of others. They can be used for a wide range of tasks including prediction, anomaly detection, diagnostics, automated insight, reasoning, time series prediction and decision making under uncertainty.
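The simplest possible example is a two-node network (Disease → Test result); the probabilities below are invented for illustration only:

```python
# Hypothetical two-node Bayesian network: Disease -> TestResult.
p_disease = 0.01             # prior probability of disease
p_pos_given_disease = 0.95   # test sensitivity
p_pos_given_healthy = 0.05   # false-positive rate

# P(positive test) by summing over both parent states.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' rule: probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # 0.161
```

Even with a sensitive test, the low prior keeps the posterior probability modest, which is exactly the kind of dependency such a network encodes.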

Black Box –
Black Box, also called Blank Slate AI, is an algorithm that learns complex concepts from a blank slate and with superhuman proficiency. Some would criticize this as making it more susceptible to overtraining, catastrophic (non-intuitive) failure and unanticipated bias.

Case-based Reasoning- An artificial intelligence technique that uses former experiences to understand and solve new problems.

Chatbot — A computer program that conducts conversations with human users by simulating how humans would behave as a conversational partner.

Classification- Classification algorithms let machines assign a category to a data point based on training data.
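A toy sketch of this, using a 1-nearest-neighbour rule on invented training data:

```python
# Sketch: assign a category to a new data point based on training data,
# here by copying the label of the closest training example (1-NN).
training_data = [
    ((1.0, 1.0), "benign"),
    ((1.2, 0.9), "benign"),
    ((5.0, 5.2), "malignant"),
    ((4.8, 5.5), "malignant"),
]

def classify(point):
    def dist(a, b):  # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda item: dist(item[0], point))
    return nearest[1]

print(classify((1.1, 1.1)))  # "benign"
print(classify((5.1, 5.0)))  # "malignant"
```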

Classifiers – Algorithms used for classification tasks in machine learning.

Clinical care standard – A Clinical Care Standard is a small number of quality statements that describe the care patients should be offered by health professionals and health services for a specific clinical condition or defined clinical pathway in line with current best evidence.[4]

Clinical guideline – Clinical practice guidelines are statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options. Rather than dictating a one-size-fits-all approach to patient care, clinical practice guidelines offer an evaluation of the quality of the relevant scientific literature, and an assessment of the likely benefits and harms of a particular treatment. This information enables health care clinicians to select the best care for a unique patient based on his or her preferences. [5]

Cluster analysis – A type of unsupervised learning used for exploratory data analysis to find hidden patterns or grouping in data; clusters are modeled with a measure of similarity defined by metrics such as Euclidean or probabilistic distance.

Clustering- Clustering algorithms let machines group data points or items into groups with similar characteristics.
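A small sketch of the idea using k-means, one common clustering algorithm, on invented 1-D data:

```python
# Sketch: a few iterations of k-means clustering on 1-D data with k = 2.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centroids = [0.0, 10.0]  # arbitrary starting centroids

for _ in range(10):
    # Assign each point to its nearest centroid...
    groups = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
        groups[nearest].append(p)
    # ...then move each centroid to the mean of its group.
    centroids = [sum(g) / len(g) for g in groups]

print([round(c, 2) for c in centroids])  # [1.0, 8.07]
```

No labels are given in advance; the two groups emerge from the similarity of the points alone, which is what makes this unsupervised.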

Cognitive computing- A computerized model that mimics the way the human brain thinks. It involves self-learning through the use of data mining, natural language processing, and pattern recognition.

Convolutional neural network (CNN)- A type of neural network that includes one or more layers that connect only neighboring nodes from the previous layer.  This contrasts with fully connected layers in which every node from the previous layer is connected to every node in the current layer.  The convolutional neural network (CNN) is much more efficient in memory and processing than a fully connected network while maintaining its efficacy.  The CNN is often used for image processing.

Crowdsourcing – The practice of distributing tasks to a large audience to get things done quickly. The drawback is that it is difficult to manage the crowd and ensure quality.

Data — Any collection of information converted into a digital form.

Data labeling- Task of annotating the object(s) found in the given data. This includes images, audio, video or any file type.

Data Mining — The process by which patterns are discovered within large sets of data with the goal of extracting useful information from it.

Data science– An interdisciplinary field that combines scientific methods, systems, and processes from statistics, information science, and computer science to provide insight into phenomena via either structured or unstructured data.

Decision tree– A tree and branch-based model used to map decisions and their possible consequences, similar to a flow chart.
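A decision tree can be sketched as nested if/else branches; the thresholds below are invented for illustration and are not clinical guidance:

```python
# Sketch: a hand-written decision tree as nested if/else branches,
# mapping conditions to decisions like a flow chart.
def triage(temperature_c, heart_rate):
    if temperature_c > 38.0:          # first branch point
        if heart_rate > 120:          # second branch point
            return "urgent"
        return "see physician"
    return "routine"

print(triage(39.0, 130))  # "urgent"
print(triage(36.8, 70))   # "routine"
```

In machine learning, the branch conditions are learned from training data rather than written by hand.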

Decision Model – A model that uses prescriptive analytics to establish the best course of action for a given situation. The model assesses the relationships between the elements of a decision to recommend one or more choices. It may also predict what should happen if a certain action is taken.

Deep Learning – A machine learning approach that uses neural networks with more than a single hidden layer and backpropagation type training that operates across multiple layers.

Expert System – Expert System (ES) software implements the analytic approach and logical reasoning of a domain expert in the computer implementation of a diagnostic decision or therapeutic recommendation process.  An ES utilizes the data analysis and decision-making algorithms acquired by the domain expert through extensive training and experience.  These systems typically use the concept of production rules to encode the knowledge transferred from the domain expert through a process known as knowledge engineering (KE).

One common format for the storage of this knowledge is as a set of production rules that will rigorously evaluate input data and reach final results within their complex domain space.  These rules will typically be created utilizing a symbolic programming language containing the standard algebraic operators (addition, subtraction, multiplication, division, square root, etc.) and logical operators (Less Than, Less Than or Equal, Greater Than, Greater Than or Equal, Equal, Not Equal, etc.).  These programming languages are usually customized for the knowledge domain being automated.

An ES may be structured as a complex software package that is initially designed for use within only one problem domain.  It may alternatively be developed using a generalized ES Shell, which is essentially a flexible and sophisticated expert system that has yet to be loaded with its problem domain knowledge.  An ES shell allows rapid prototyping and development of an ES, prior to its eventual commercial release as a single-purpose complex software package.

All or some of the following software components are usually found within the typical ES:

  • Input Database for parameters related to the entity being analyzed;

  • Production Rule Data Base containing the domain knowledge;

  • Results Message Database containing the output messages;

  • Text Editor for modifying Production Rules and Results Messages;

  • Inference Engine that processes the Production Rules, evaluates them against the Input Database, and outputs the Results Message(s); and

  • Custom Compiler or Interpreter to convert the Production Rules into machine-executable code.
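The components above can be sketched as a toy inference engine; all parameter names, thresholds, and messages below are invented for illustration:

```python
# Toy sketch of an ES inference engine: production rules are evaluated
# against an input database and matching result messages are output.
input_database = {"temperature_c": 39.2, "wbc_count": 13000}

# Production Rule Database: each rule pairs a condition with a result message.
production_rules = [
    (lambda d: d["temperature_c"] > 38.0, "MSG-01: fever present"),
    (lambda d: d["wbc_count"] > 11000,    "MSG-02: elevated white cell count"),
    (lambda d: d["temperature_c"] < 35.0, "MSG-03: hypothermia"),
]

def run_inference(rules, data):
    # Inference Engine: fire every rule whose condition holds.
    return [message for condition, message in rules if condition(data)]

for message in run_inference(production_rules, input_database):
    print(message)
```

A real ES would add the text editor, rule compiler, and far richer rule language described above, but the evaluate-and-report loop is the same.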

One major advantage of the use of ES architectures in an Artificial Intelligence system is that it is easy for a domain expert to double check on the validity of a determination made by an ES by manually evaluating the case using their personal training and experience.  Not only can a domain expert independently agree with the result provided by the ES for a particular case, but also the domain expert can review the ES Production Rules themselves to verify their medical appropriateness.  This would provide very convincing verification of ES performance for presentation to approval organizations such as the FDA.

The popularity of the ES seems to have diminished over time, if measured by the lack of publicity for this AI implementation.  The ES, however, has actually become commonplace and has been deeply integrated into automobiles, appliances, and into numerous other medical, consumer, and industrial products.  These implementations seem to no longer be publicly identified as AI, possibly to minimize unnecessary product scrutiny.

Expert Systems – This describes where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing the machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems is an autopilot system flying a plane.

Explainability – Explainability is the ability to derive what the medical AI is targeting.  For example, an explainable system allows for the validation of each medical criterion important to the disease for which the AI is validated. Explainable AI also helps avoid unintended bias and validates that the AI applies to its intended use and not a proxy (contrast with Black Box).

Facial Recognition – The recognition of faces and emotional states in images or video signals. This is commonly done through point annotations called landmarks.

F1 Score – The weighted average of Precision and Recall. This score takes both false positives and false negatives into account. Intuitively it is not as easy to understand as accuracy, but F1 is usually more useful than accuracy, especially with an uneven class distribution.

Game AI– A form of AI specific to gaming that uses an algorithm to replace randomness. It is a computational behavior used in non-player characters to generate human-like intelligence and reaction-based actions taken by the player.

Genetic Algorithm — A method for solving optimization problems by mimicking the process of natural selection and biological evolution. The algorithm randomly selects pairs of individuals from the population (whereby the best performing individuals are more likely to be chosen) to be used as parents.
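A minimal sketch of the idea, evolving a population of numbers toward the maximum of an invented fitness function:

```python
import random

# Toy genetic algorithm: evolve numbers toward the maximum of
# f(x) = -(x - 7)**2, whose best possible value is at x = 7.
random.seed(0)

def fitness(x):
    return -(x - 7.0) ** 2

population = [random.uniform(0, 20) for _ in range(20)]
for _ in range(40):
    # Selection: keep the best-performing half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # Crossover (average of two random parents) plus a small mutation.
    children = [
        (random.choice(parents) + random.choice(parents)) / 2
        + random.gauss(0, 0.1)
        for _ in range(10)
    ]
    population = parents + children

best = max(population, key=fitness)
print(round(best, 1))  # close to 7.0
```

Real genetic algorithms typically encode candidates as bit strings or parameter vectors, but the select–recombine–mutate loop is the same.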

Generative adversarial networks (GAN)- A pair of jointly trained neural networks that generates realistic new data and improves through competition. One net creates new examples (fake Picassos, say) as the other tries to detect the fakes.

Ground Truth – A process, usually performed on site (or against a gold standard), of measuring the accuracy of the training dataset in order to prove or disprove a research hypothesis.

Health Automation – Automation is the use of information technology that reduces the need for human work in the creation of outcomes. Today’s automation technologies are capable of far more than human administrators.  Tasks such as reduction of administrative workloads, improvement of the consistency of patient care, elimination of waste, enhancement of information exchange, analysis of data, and monitoring of patients can all be streamlined with data automation. Automation of routine tasks can cut the amount of paperwork that healthcare organizations have to deal with. It can also reduce staffing costs and increase operational efficiency.  Health automation is used in many applications including electronic prescribing of controlled substances, automated appointment reminders and patient portals.

Human-Computer Interaction (HCI)- The interdisciplinary field of study that involves interaction between human users and computers; computer science, behavioral science, design science, cognitive psychology, and communication theory are all involved in this field.

Human-in-the-loop – This describes a process of placing humans in the middle of a process to achieve the expected output. It is used in machine learning to enhance the accuracy of results.

Image Recognition – Recognizing the specific types of objects in given image or video datasets.

Internet of Everything (IoE)- The intelligent connection of people, process, data, and things to make networked connections more relevant and valuable.

Knowledge engineering– Focuses on building knowledge-based systems, including all of the scientific, technical, and social aspects of it.

Heuristics — Knowledge based on trial and error, evaluation, and experimentation.

Heuristic search techniques – Support that narrows down the search for optimal solutions for a problem by eliminating options that are incorrect.

Knowledge Engineering – Knowledge Engineering (KE) involves the design and development of knowledge structures that are easily interrogated within an artificial intelligence software system. The KE process is the most critical step in the development of an expert system.  It involves in-depth interviewing of a domain expert to extract the subjective and objective knowledge contained within their brain.  The result of these interviews will be the creation of a large collection of domain knowledge that is represented in a compact computer-useable format.

Laboratory Data – Describes a group of data that may be retrospectively collected or manufactured without consideration to the intended condition of use and intended user (contrast with Real World Data). Classifying the type of data used is applicable to training and validation of medical AI. 

Logic programming– A type of programming paradigm in which computation is carried out based on a knowledge repository of facts and rules; Prolog is a logic programming language used for AI programming, and LISP, though not a logic language, is also historically associated with AI programming.

mHealth – The use of mobile devices to deliver healthcare.

Machine intelligence– An umbrella term that encompasses machine learning, deep learning, and classical learning algorithms.

Machine Learning – Using advanced statistical techniques to identify patterns in data and then make predictions.  A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.[6]

Machine Learning:  Machine Learning (ML) is an umbrella term describing a class of software that is used to develop systems for fully automated disease diagnoses or therapy recommendations.  ML implementations must contain an analytic decision-making construct or knowledge paradigm, which is first trained by adjusting their internal parameters, coefficients, decision boundaries, etc. to make the correct decision on a group of training cases.  These determined values are saved for use during future decision-making.

In order to begin the learning process, it is necessary to obtain a large number of well-diagnosed entities (patients, verified histologic specimens, or confirmed radiology images).  All entities are labeled in advance with their confirmed diagnoses and are then randomly divided into Training and Test Sets.  Diagnostic categories might be: normal, abnormal category 1, abnormal category 2, etc.

The Training Set is used during the learning period, and the Test Set is used to evaluate the specificity and sensitivity of the machine learning system after the learning process has been completed.
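The division into Training and Test Sets can be sketched as a simple random split (all case names and labels below are invented):

```python
import random

# Sketch: randomly dividing labeled, confirmed cases into
# Training and Test Sets, as described above.
random.seed(42)

# Hypothetical entities, each labeled in advance with a confirmed diagnosis.
entities = [(f"case-{i}", "normal" if i % 2 else "abnormal")
            for i in range(100)]

random.shuffle(entities)         # random division of the entities
training_set = entities[:80]     # used during the learning period
test_set = entities[80:]         # held out to estimate sensitivity/specificity

print(len(training_set), len(test_set))  # 80 20
```

Keeping the Test Set untouched during training is what makes the final sensitivity and specificity estimates honest.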

Two Principal Classes of ML:

  1. Supervised ML – Initially identifying the number and name of the diagnostic categories into which the entities will be sorted (typically normal, abnormal category 1, abnormal category 2, etc.), which match the labeled diagnostic categories of the Learning Set entities.

  2. Unsupervised ML – Allowing the learning process to determine the number of specific diagnostic categories into which the entities are to be sorted based on the separability determined during the learning period.

Rationale for ML Class selection:

If Supervised ML is utilized, it is possible to specify diagnostic categories into which the entities are not fully separable with the attributes specified for these entities.  As a result, entities from the Test Set can be assigned multiple diagnostic categories, with equal or different probabilities of occurrence.

Alternatively, if Unsupervised ML is utilized, it is possible to have entities with different diagnoses in the Test Set blended into a single cluster, which results in multiple diagnoses being reported for these overlapping entities without any indication of their probabilities.

At least a dozen ML knowledge paradigms are currently in use, and additional methodologies will continually be developed for future implementation.  Four commonly used ML paradigms:

  • Artificial Neural Networks – modeling based upon the interconnectivity of biologic neurons is carried out with a network of multiple inputs, representing attributes of an entity, which are connected through multiplier elements into summing junctions.  Multiple layers of these symmetrical butterfly structures ultimately generate the network outputs reflecting diagnostic categories.

  • Clustering Methods – Attributes associated with the entities are plotted in n-dimensional space (n > 3), and the clustering that results in this space identifies the allowed diagnostic categories.

  • Bayesian Statistical Methods – A Bayesian network can be drawn that relates the variables associated with the entities and their conditional probabilities for each diagnosis.

  • Rule-Based Methods – Similar to the Production Rules in Expert Systems, however, these rules are determined via machine learning algorithms, instead of the de-briefing of domain experts.

One disadvantage of ML methods is that it is extremely difficult, if not impossible, for a domain expert to verify the proper operation of an ML system using manual methods.  This is due to the fact that the decision surfaces, boundaries, and thresholds, which are developed by the ML system during its training and utilized during its operation, are not necessarily evident or understandable.  This makes it extremely difficult for human verification of ML performance to be presented in a convincing manner to approval organizations such as the FDA.

Machine Perception- The ability of a system to receive and interpret data from the outside world similarly to how humans use their senses. This is typically done with attached hardware, though software is also usable.

Medical AI Algorithm – A stand-alone AI algorithm designed for a specific intended diagnostic, therapeutic or management use.  

Medical AI System – Accounts for the usability, hardware pairing, integration into the medical guidelines, and all other impacts that the Medical AI algorithm has on the intended use.

Medical Robotics – Robots in medicine help by relieving medical personnel of routine tasks that take time away from more pressing responsibilities, and by making medical procedures safer and less costly for patients.  Robotic medical assistants monitor patient vital statistics and alert the nurses when there is a need for a human presence in the room, allowing nurses to monitor several patients at once. These robotic assistants also automatically enter information into the patient electronic health record. Robotic carts may be seen moving through hospital corridors carrying supplies. Robots are also assisting in surgery, allowing doctors to conduct surgery through a tiny incision instead of an inches-long incision. Robotics is making a big impact in other areas of medicine, as well.  They can be used to disinfect patient rooms and operating suites, reducing risks for patients and medical personnel. They work in laboratories to take samples and then to transport, analyze, and store them.[7][8]

  • Surgical robots- either allow surgical operations to be carried out with greater precision than an unaided human surgeon or allow remote surgery where a human surgeon is not physically present with the patient.
  • Rehabilitation robots- facilitate and support the lives of infirm or elderly people, or those with dysfunction of body parts affecting movement. These robots are also used for rehabilitation and related procedures, such as training and therapy.
  • Biorobots- a group of robots designed to imitate the cognition of humans and animals.
  • Telepresence robots- allow off-site medical professionals to move, look around, communicate, and participate from remote locations.
  • Pharmacy automation- robotic systems to dispense oral solids in a retail pharmacy setting or preparing sterile IV admixtures in a hospital pharmacy setting.
  • Companion robot- has the capability to engage emotionally with users keeping them company and alerting if there is a problem with their health.
  • Disinfection robot- has the capability to disinfect a whole room in mere minutes, generally using pulsed ultraviolet light. They are being used to fight Ebola virus disease.

Medical Sensors – Devices that respond to a physical stimulus (such as heat, light, sound, pressure, magnetism, or a particular motion) and transmit a resulting impulse (as for measurement or for operating a control).

Neuromorphic chip- A computer chip designed to act as a neural network. It can be analog, digital, or a combination.

Natural Language Processing – This refers to the interpretation of speech and text. A machine learning task concerned with improving the interaction between humans and computers. This field of study focuses on helping machines to better understand human language in order to improve human-computer interfaces.

Natural Language Generation — A machine learning task in which an algorithm attempts to generate language that is comprehensible and human-sounding. The end goal is to produce computer-generated language that is indiscernible from language generated by humans.

Neural Networks – Neural networks are machine learning approaches that implement neurons as mathematical mappings or transformations from multiple inputs to a single output, where the output of one neuron forms the input to another neuron, weighted by the so-called ‘weight’ on that output. By specifying which neurons input to which other neurons, one or more layers of neurons are created. Training is accomplished by changing one or more weights in the network. Backpropagation is a training or learning technique whereby the difference between desired outputs and realized outputs of the final layer is used to adjust the weights starting with the output layer and traversing each layer until the input layer is reached. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have ‘learned’ how to carry out a particular task.
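A minimal sketch of a forward pass through such a network, with two inputs, two hidden neurons, and one output (all weights are invented for illustration):

```python
import math

# Sketch: forward pass through a tiny fully connected network
# (2 inputs -> 2 hidden neurons -> 1 output neuron).
def neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed by a sigmoid activation.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def forward(x):
    h1 = neuron(x, [0.5, -0.4], 0.1)   # each hidden neuron sees all inputs
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    # The output neuron takes both hidden outputs as its inputs.
    return neuron([h1, h2], [1.2, -0.6], 0.2)

print(round(forward([1.0, 0.0]), 3))  # 0.672
```

Training (e.g. by backpropagation) would repeatedly adjust the weight lists above until the output matches the desired value; the forward pass itself stays exactly this simple.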

Optical Character Recognition (OCR) – A computer system that takes images of typed, handwritten or printed text and converts them into machine-readable text.

Perception – A process of acquiring, interpreting, selecting, and organizing sensory information. It is what you perceive, which may be true or false, as opposed to the ground truth which is always true.

Perceptron- An early type of neural network, developed in the 1950s. It received great hype but was then shown to have limitations, suppressing interest in neural nets for years.

Precision – The proportion of positive identifications that were actually correct.  Also called positive predictive value (not to be confused with specificity, the true negative rate).  This is calculated by: Precision = TP / (TP + FP), where TP is the number of true positives and FP the number of false positives.

Pruning — Overriding unnecessary and irrelevant considerations in AI systems.

Real World Data – Real World Data refers to data obtained under the intended condition of use and collected from the intended user (contrast with Laboratory Data). Classifying the type of data used is applicable to training and validation of medical AI.

Recall – The proportion of actual positives that were identified correctly. Also named sensitivity or probability of detection.  This is calculated by: Recall = TP / (TP + FN), where TP is the number of true positives and FN the number of false negatives.
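A small sketch computing precision, recall, and the F1 score together from invented counts:

```python
# Sketch: precision, recall, and F1 from prediction counts.
tp, fp, fn = 8, 2, 4   # true positives, false positives, false negatives

precision = tp / (tp + fp)   # how many flagged cases were actually positive
recall = tp / (tp + fn)      # how many actual positives were caught
f1 = 2 * precision * recall / (precision + recall)

print(precision, round(recall, 3), round(f1, 3))  # 0.8 0.667 0.727
```

Note how F1 sits between the two and penalizes whichever of precision or recall is weaker.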

Recurrent Neural Network – A type of artificial neural network in which recorded data and outcomes are fed back through the network forming a cycle.

Reference Standard – In the context of medical AI, medical data and the desired output associated with it. Reference standards are used for training and validating medical AI. Typically a reference standard is traceable, at some level, back to the patient’s medical data and the physicians or other experts involved in creating it. One consideration is how similar the methods by which the medical data was acquired are to the methods used to acquire input data for the AI. Another consideration is how representative the desired output is with respect to optimizing patient outcomes.

Reinforcement Learning — A type of machine learning in which machines are “taught” to achieve their target function through a process of experimentation and reward. In reinforcement learning, the machine receives positive reinforcement when its processes produce the desired result, and negative reinforcement when they do not.
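One common concrete form of this is the Q-learning update, sketched below with invented values; the reward term carries the positive or negative reinforcement:

```python
# Sketch: the Q-learning update rule, where reward acts as reinforcement.
alpha, gamma = 0.5, 0.9   # learning rate and discount factor

def q_update(q_current, reward, q_next_best):
    # Move the value estimate toward reward + discounted best future value.
    return q_current + alpha * (reward + gamma * q_next_best - q_current)

q = 0.0
q = q_update(q, reward=1.0, q_next_best=0.0)   # positive reinforcement
print(q)   # 0.5
q = q_update(q, reward=-1.0, q_next_best=0.0)  # negative reinforcement
print(q)   # -0.25
```

Over many such experiment-and-reward cycles, the machine's value estimates converge toward actions that produce the desired result.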

Robotic Process Automation (RPA) Bots- Intelligent software robots deployed to automate repetitive activities through the user interface of a computer system.

Robotic Surgery – the performance of operative procedures with the assistance of robotic technology. It allows great precision and is used for remote-control, minimally invasive procedures. Current systems consist of computer-controlled electromechanical devices that work in response to controls manipulated by the surgeon.[9]

Rule — A format for representing the knowledge base in an Expert System, typically in the form IF-THEN-ELSE.

Semantic Segmentation- Understanding an image at the pixel level: partitioning the image into semantically meaningful parts and classifying each part into one of a set of predetermined classes.

Speech Recognition – The recognition of words and/or emotional state in an audio signal.

Swarm behavior– From the perspective of the mathematical modeler, an emergent behavior arising from simple rules followed by individuals, without any central coordination.

Supervised Learning – The process of teaching a machine by example is called supervised learning, and the role of labeling these examples is commonly carried out by online workers employed through platforms.  Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively, although this is increasingly possible in an age of big data and widespread data mining.  In the long run, having access to huge labeled datasets may also prove less important than access to large amounts of compute power.

Strong AI — An area of AI development that is working toward the goal of making AI systems that are as useful and skilled as the human mind.

Technical Standard – A technical standard is an established norm or requirement in regard to technical systems. It is usually a formal document that establishes uniform engineering or technical criteria, methods, processes, and practices. In contrast, a custom, convention, company product, corporate standard, and so forth that becomes generally accepted and dominant is often called a de facto standard.[10]

Telehealth – A collection of means or methods for enhancing health care, public health, and health education delivery and support using telecommunications technologies.  Telehealth is considered by some as a broader term than telemedicine, encompassing non-medical services.  However, many view the two terms as interchangeable.

Telemedicine – the remote delivery of health care services and clinical information using telecommunications technology.  This includes a wide array of clinical services using internet, wireless, satellite and telephone media.

TensorFlow – A collection of software tools developed by Google for use in deep learning. It is open source, meaning anyone can use or improve it. Similar projects include Torch and Theano.

Training Data – In machine learning, the training data set is the data given to the machine during the initial “learning” or “training” phase. From this data set, the machine is meant to gain some insight into options for the efficient completion of its assigned task through identifying relationships between the data.

Traceability – In medical AI, the level to which its computations and performance can be traced back to each piece of information from each patient, as well as to each clinical decision by each expert involved in creating a reference standard used for its training or validation, down to each pixel of a patient’s image. Also called pixel-to-weight accountability (in the context of machine learning AI).

Transfer Learning– Another way machines can learn. Once an AI has successfully learned something, like how to determine whether an image is a cat, it can continue to build on its knowledge even if you are not asking it to learn anything more about cats. Hypothetically, you could take an AI that determines whether an image is a cat with 90-percent accuracy, and after a week of training on identifying shoes it could return to its work on cats with a noticeable improvement in accuracy.

Turing Test — A test developed by Alan Turing in 1950, meant as a means to identify true artificial intelligence. The test is based on a process in which a series of judges attempt to discern interactions with a control (human) from interactions with the machine (computer) being tested.

Unsupervised Learning – This describes where algorithms try to identify patterns in data, looking for similarities that can be used to categorize that data. An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.  The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, for example Google News grouping together stories on similar topics each day.

ADDITIONAL INFORMATION