What Exactly Is Artificial Intelligence (AI)?
The primary and often defining goal of Artificial Intelligence is to develop Thinking Machines, typically combinations of computers and software, that can think as well as or better than human beings. These Thinking Machines must have input to think about, the ability to process that input in a prescribed way using algorithms, and the ability to deliver useful output. We want these Thinking Machines to be intelligent, just as human beings are intelligent. And there’s the rub: what exactly is Human Intelligence?
Input, Processing, and Output
Let us examine some of the human mental functions that are universally accepted as indications of Human Intelligence and, to the extent possible, identify corresponding functions of which Thinking Machines are capable.
Both Thinking Machines and humans must have input to think about, the ability to process that input in an algorithmically prescribed way, and the ability to communicate or take action as an outcome of their information processing. Both Thinking Machines and humans can fulfill these requirements to a varying extent.
Input comes in the form of Information. To input information to an intelligent entity, be it man or machine, the entity must have the ability to perceive. There are two required components to perception. The first is the ability to sense. Man has five senses: hearing, seeing, smelling, tasting, and touching. As a result of brilliant human work, machines can now use the same five senses even though they lack the corresponding human organs: ears, eyes, nose, tongue, and skin. The second requirement is the ability to make sense of that which is being sensed. Obviously, humans have such an ability, to a certain extent. Thinking Machines, to a certain extent, also have the same capacity. Some examples of machines’ ability to make sense of what they sense include:
Image Recognition, Facial Recognition, Speech Recognition, Object Recognition, Pattern Recognition, Handwriting Recognition, Name Recognition, Optical Character Recognition, Symbol Recognition, and Abstract Concept Recognition.
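As one concrete illustration of a machine making sense of what it senses, the minimal sketch below asks a pretrained image-classification network to label a photograph. It assumes the PyTorch and torchvision libraries are available and that photo.jpg is a hypothetical color image on disk; it is only a sketch of the idea, not a production recognition system.

```python
# Minimal sketch: image recognition with a pretrained network (torchvision).
# Assumes "photo.jpg" is a hypothetical color image used for illustration.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()                                   # inference mode, no training

preprocess = weights.transforms()              # the resizing/normalization the model expects
image = preprocess(Image.open("photo.jpg")).unsqueeze(0)

with torch.no_grad():
    probabilities = torch.softmax(model(image)[0], dim=0)

top_prob, top_class = probabilities.max(dim=0)
print(weights.meta["categories"][top_class.item()], round(float(top_prob), 3))
```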
Again, it is evident that humans can, to a certain extent, process information. We do it all day long, every day. True, sometimes we do a poor job, and at other times we find it impossible to do. But it is fair to say we do it. Now, how about Thinking Machines? Well, they are not entirely unlike humans when it comes to processing information. Sometimes, Thinking Machines do it well, while at other times, they make a mess of it or find it impossible to complete. Their failures are not their fault. The fault is ours, as humans. If we provide them with inadequate or inaccurate input, it should be no surprise that their output is unsatisfactory. If we give them a task to do for which we have not prepared them, we can expect them to mess it up or just give up.
The Thinking Machines’ failures resulting from humans providing them with bad input deserve little discussion: garbage in, garbage out. Conversely, preparing our Thinking Machines properly for the tasks we give them to execute is an extraordinarily vast and complex subject. This essay will provide the reader with a rudimentary discussion of the subject.
We have a choice of whether we prepare our Thinking Machines for a single task or an array of complex tasks. The Single Task orientation is known as Weak or Narrow Artificial Intelligence. The Complex Task orientation is known as Strong or General Artificial Intelligence. The advantages and disadvantages of each orientation are:
The Narrow Intelligence orientation is less costly to program and allows the Thinking Machine to function better at a given task than the General Intelligence oriented machine. The General Intelligence orientation is more expensive to program. However, it enables the Thinking Machine to function on an array of complex tasks. If a Thinking Machine is prepared to process numerous complex aspects of a single subject such as Speech Recognition, it is a hybrid of both Narrow and General Artificial Intelligence.
Artificial Intelligence cannot be considered the equivalent of or even similar to Human Intelligence if it cannot produce the desired useful output. Output can be communicated in any one of numerous forms, including but not limited to written or spoken language, mathematics, graphs, charts, tables, or other formats. Desired useful output can alternatively be in the form of effecting actions. Examples of this include but are not limited to self-driving vehicles and activating and managing the movements of factory machines and robots.
Artificial Intelligence Tools
The following link will take you to a listing of popular AI Tools. Each Tool is rated for its utility and has a link to the provider’s website.
Artificial Intelligence Platforms
Artificial Intelligence Platforms simulate the cognitive functions that human minds perform, such as problem-solving, learning, reasoning, social intelligence, and general intelligence. Platforms are combinations of hardware and software that allow AI algorithms to run, and they can support the digitalization of data. Some popular AI Platforms include Azure, Cloud Machine Learning Engine, Watson, ML Platform Services, Leonardo Machine Learning, and Einstein Suite.
Artificial Intelligence Is Big Business
Well-respected financial analysts have prepared conservative projections of worldwide Artificial Intelligence business revenues that run into the billions of US dollars.
Almost all of the leading tech companies are deeply involved in the field of Artificial Intelligence. A few examples are Apple, Google, Facebook, IBM, Nvidia, Salesforce, Alibaba, Microsoft, and Amazon. The following link will take you to an article that lists the Top 100 AI companies worldwide. For each company, there is a brief description of its AI involvement. https://www.analyticsinsight.net/top-100-artificial-companies-in-the-world/
Machine Learning
Machine Learning is a subset of Artificial Intelligence. The basic concept is that Thinking Machines can learn, to a large extent, on their own. Input relevant data or information and, with the use of appropriate algorithms, patterns can be recognized and the desired useful output obtained; as data is input and processed, the Machine “learns.” (A minimal code sketch after the list below illustrates the idea.) The power and importance of Machine Learning, and its subset Deep Learning, are increasing exponentially due to several factors:
- The explosion of available utilizable data
- The rapidly decreasing costs of and increasing ability to store and access Big Data
- The development and use of increasingly sophisticated algorithms
- The continuous development of increasingly powerful and less costly computers
- The Cloud
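As a minimal sketch of the learning idea described above, the example below gives a machine a handful of input examples with known outcomes, lets an algorithm recognize the pattern, and then asks for a prediction. It assumes the scikit-learn library; the data is invented purely for illustration.

```python
# Minimal sketch of Machine Learning: the machine "learns" a pattern from
# example data and then predicts an outcome for a new case.
from sklearn.tree import DecisionTreeClassifier

# Invented training data: [hours_studied, hours_slept] -> passed exam (1) or not (0)
X_train = [[1, 4], [2, 8], [6, 7], [8, 5], [3, 3], [9, 8]]
y_train = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # the machine learns from the examples

print(model.predict([[7, 6]]))     # predicted outcome for an unseen case
```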
Types of Machine Learning Algorithms
Supervised Learning: The Machine is trained by providing it with both the input and the correct expected output. The Machine learns by comparing its own output, produced by its current programming, with the correct output provided, and then adjusts its processing accordingly.
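A bare-bones way to see this compare-and-adjust loop is to train a single adjustable weight by hand, as in the sketch below. The data, learning rate, and number of passes are all invented for illustration.

```python
# Minimal sketch of Supervised Learning: compare the machine's output with
# the correct output it was given, then adjust the processing accordingly.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # inputs with correct outputs (y = 2x)
weight = 0.0                                       # the machine's initial guess
learning_rate = 0.1

for _ in range(50):                                # repeat the training data many times
    for x, correct in examples:
        predicted = weight * x                     # the machine's current output
        error = predicted - correct                # comparison with the correct output
        weight -= learning_rate * error * x        # adjustment based on the error

print(round(weight, 3))                            # settles close to 2.0
```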
Unsupervised Learning: The Machine is not trained by providing it with the correct output. The Machine must undertake tasks such as pattern recognition on its own and, in effect, create its own algorithms.
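A minimal sketch of the same idea, assuming scikit-learn: the machine receives only input points, with no correct answers, and must group them on its own by clustering.

```python
# Minimal sketch of Unsupervised Learning: no correct outputs are provided;
# the machine discovers the structure (two clusters) by itself.
from sklearn.cluster import KMeans

points = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],     # one natural group
          [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]]     # another natural group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(points)               # cluster ids assigned by the machine
print(labels)                                     # e.g. [0 0 0 1 1 1]; the id numbering is arbitrary
```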
Reinforcement Learning: The Machine is provided with algorithms that ascertain what works best by trial and error.
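One simple way to picture trial-and-error learning is a two-armed bandit, sketched below: the machine tries both actions, keeps a running estimate of how well each one pays off, and gradually favors the better one. The reward probabilities are invented for illustration.

```python
# Minimal sketch of trial-and-error (Reinforcement) learning: an epsilon-greedy
# strategy on a two-armed bandit with invented payoff probabilities.
import random

random.seed(0)
reward_probability = [0.3, 0.7]    # true payoff chance of each action (unknown to the learner)
estimates = [0.0, 0.0]             # the machine's learned value of each action
counts = [0, 0]

for step in range(1000):
    if random.random() < 0.1:                              # explore: try an action at random
        action = random.randrange(2)
    else:                                                  # exploit: use what has worked best so far
        action = 0 if estimates[0] >= estimates[1] else 1
    reward = 1.0 if random.random() < reward_probability[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]   # running average

print([round(e, 2) for e in estimates])   # estimates approach the true payoff probabilities
```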
Languages for Machine Learning
Python is the most widely used language for Machine Learning; R, Java, C++, and Julia are also common choices. Most of the example applications listed below use Python.
Machine Learning Algorithms
Here, we list several of the most often used Machine Learning Algorithms: Linear Regression, Logistic Regression, SVM, Naive Bayes, K-Means, Random Forest, and Decision Tree.
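As a small taste of the first algorithm on that list, the sketch below fits a Linear Regression model to a few invented points and predicts an unseen value; it assumes scikit-learn.

```python
# Minimal sketch of Linear Regression: fit a straight line to (x, y) points
# and use the line to predict. The data is invented for illustration.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4], [5]]            # e.g. years of experience
y = [30, 35, 42, 48, 53]                 # e.g. salary in thousands

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)     # the learned slope and intercept
print(model.predict([[6]]))              # prediction for an unseen input
```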
Links to Examples of Machine Learning Applications:
- Rainfall prediction using Linear regression
- Identifying handwritten digits using Logistic Regression in PyTorch
- Kaggle Breast Cancer Wisconsin Diagnosis using Logistic Regression
- Python | Implementation of Movie Recommender System
- Support Vector Machine to recognize facial features in C++
- Decision Trees – Fake (Counterfeit) Coin Puzzle (12 Coin Puzzle)
- Credit Card Fraud Detection
- Applying Multinomial Naive Bayes to NLP Problems
- Image compression using K-means clustering
- Deep learning | Image Caption Generation using the Avengers EndGames Characters
- How Does Google Use Machine Learning?
- How Does NASA Use Machine Learning?
- 5 Mind-Blowing Ways Facebook Uses Machine Learning
- Targeted Advertising using Machine Learning
- How Machine Learning Is Used by Famous Companies?
Deep Learning Is Machine Learning on Steroids
- Deep Learning makes extensive use of Neural Networks to ascertain complicated and subtle patterns in enormous amounts of data.
- The faster the computers and the more voluminous the data, the better the Deep Learning performance.
- Deep Learning and Neural Networks can perform automatic feature extraction from raw data.
- Deep Learning and Neural Networks draw primary conclusions directly from raw data. The primary conclusions are then synthesized into secondary, tertiary, and additional levels of abstraction, as required, to address the processing of large amounts of data and increasingly complex challenges. The data processing and analysis (Deep Learning) are accomplished automatically with extensive neural networks without significant dependence on human input.
Deep Neural Networks — The Key to Deep Learning
Deep Neural Networks have multiple levels of processing nodes. As the number of levels increases, the cumulative effect is the Thinking Machine’s growing capability to formulate abstract representations. Deep Learning uses multiple levels of representation: non-linear processing at one level organizes information into a representation, which is then transformed into a more abstract representation at the next, deeper level. The deeper levels are not designed by humans; they are learned by the Thinking Machine from the data, building on the representations formed at the levels before them.
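A minimal sketch of such a stack of levels, assuming PyTorch; the layer sizes are arbitrary and chosen purely for illustration.

```python
# Minimal sketch of a Deep Neural Network: several levels of processing nodes,
# each applying a non-linear transformation to the previous level's output.
import torch
from torch import nn

deep_net = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # first level: raw input -> low-level features
    nn.Linear(256, 64),  nn.ReLU(),   # second level: a more abstract representation
    nn.Linear(64, 10),                # final level: scores for 10 possible classes
)

fake_image = torch.rand(1, 784)       # e.g. a flattened 28x28 image, random for illustration
print(deep_net(fake_image).shape)     # torch.Size([1, 10])
```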
Deep Learning vs. Machine Learning
To detect money laundering or fraud, Traditional Machine Learning might rely on a small set of factors such as the dollar amounts and frequency of a person’s transactions. Deep Learning will include more data and additional factors such as times, locations, and IP addresses processed at increasingly deeper levels. We use the term Deep Learning because Neural Networks can have numerous deep levels that enhance learning.
Examples of How Deep Learning Is Utilized
Online Virtual Assistants like Alexa, Siri, and Cortana use Deep Learning to understand human speech. Deep Learning algorithms automatically translate between languages. Deep Learning enables, among many other things, the development of driverless delivery trucks, drones, and autonomous cars. Deep Learning enables Chatbots and ServiceBots to respond intelligently to spoken and written questions. Facial Recognition by machines is impossible without Deep Learning. Pharmaceutical companies are using Deep Learning for drug discovery and development. Physicians are using Deep Learning for disease diagnosis and the development of treatment regimens.
What Are Algorithms?
An Algorithm is a process: a set of step-by-step rules to be followed in calculations or other problem-solving operations. Algorithm types include, but are by no means limited to, the following: Simple Recursive algorithms, Backtracking algorithms, Divide-and-Conquer algorithms, Dynamic Programming algorithms, Greedy algorithms, and Branch-and-Bound algorithms.
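For instance, the short sketch below shows a Simple Recursive, Divide-and-Conquer algorithm, binary search: the step-by-step rules repeatedly halve the search range until the target is found or ruled out.

```python
# Minimal sketch of a recursive Divide-and-Conquer algorithm: binary search
# over a sorted list.
def binary_search(sorted_items, target, low=0, high=None):
    if high is None:
        high = len(sorted_items) - 1
    if low > high:                      # empty range: the target is not present
        return -1
    middle = (low + high) // 2
    if sorted_items[middle] == target:
        return middle
    if sorted_items[middle] < target:   # discard the lower half
        return binary_search(sorted_items, target, middle + 1, high)
    return binary_search(sorted_items, target, low, middle - 1)   # discard the upper half

print(binary_search([2, 3, 5, 7, 11, 13, 17], 11))   # prints 4 (the index of 11)
```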
Training Neural Networks
Neural Networks must be trained using algorithms. Algorithms used to train Neural Networks include but are in no way limited to the following: Gradient descent, Newton’s method, Conjugate gradient, Quasi-Newton method, and Levenberg-Marquardt.
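As a minimal sketch of the first of these, the example below trains a tiny neural network with gradient descent using PyTorch's built-in SGD optimizer; the toy data, network size, learning rate, and number of steps are invented for illustration.

```python
# Minimal sketch of training a neural network with Gradient Descent (SGD).
import torch
from torch import nn

# Toy data: learn the relationship y = 2x + 1
X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y = torch.tensor([[3.0], [5.0], [7.0], [9.0]])

model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # how wrong is the network right now?
    loss.backward()               # compute gradients of the loss
    optimizer.step()              # gradient descent update of the weights

print(loss.item())                # the loss shrinks as training proceeds
```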
Computation Complexity of Algorithms
The computational complexity of an algorithm is a measure of the resources that running the algorithm requires. Mathematical measures of complexity can predict how fast an algorithm will run and how much computing power and memory it will need. In some cases, the complexity of the indicated algorithm is so great that it becomes impractical to employ; a heuristic algorithm, which produces approximate results, may then be used in its place.
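One way to feel the difference complexity makes is to solve the same problem with two algorithms whose resource requirements differ enormously, as in the sketch below (plain Python, invented example).

```python
# Minimal sketch of computational complexity: the naive recursive Fibonacci
# repeats work exponentially, while the cached version reuses results.
from functools import lru_cache

def fib_naive(n):                  # roughly O(2**n) recursive calls
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):                 # O(n) distinct calls thanks to caching
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib_naive(32))    # millions of redundant calls; already noticeably slow
print(fib_cached(300))  # near-instant, far beyond the naive version's practical reach
```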
This article should give you a basic understanding of what Artificial Intelligence is and provide you with context for your next steps in researching and learning about this wide-ranging topic.