What is artificial intelligence good for?

Artificial Intelligence (AI)

The scientific discipline of "Artificial Intelligence" (AI) was founded in 1956 at the Dartmouth workshop, organized by John McCarthy together with Marvin Minsky and others. Originally, the term meant the replication of human intelligence. The idea was to replace the human.
In the meantime, the networking of devices generates ever more data, which must be processed sensibly in order to derive information and generate knowledge from it. Nowadays, artificial intelligence is therefore rather the extension of human intelligence: it is about supporting and relieving people. Artificial intelligence should thus be understood as a tool or a function.

In connection with artificial intelligence, the more precise terms are "cognitive computing" and "data science", although the keywords artificial intelligence (AI), machine learning (ML) and neural networks are more common.

In the following, we try to explain the various processes related to artificial intelligence.

How does machine learning work?

Concepts like machine learning and neural networks follow the idea that programs can learn something independently, without a programmer explicitly specifying the solution. In contrast, classic programming centers on a problem that the software is supposed to solve with a known procedure.

Example: the sum is to be formed from a series of input numbers. For this there is an algorithm that gets you from the input to the desired result. The programmer knows the necessary formula from experience and can express it in any programming language. You don't need AI for that.
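The sum example can be sketched directly in a few lines of Python; the formula is known in advance and simply written down, with no learning involved:

```python
def total(numbers):
    """Classic programming: the algorithm (the formula) is known
    in advance and written down by the programmer."""
    result = 0
    for n in numbers:
        result += n
    return result

print(total([1, 2, 3, 4]))  # the known formula maps the input to the output: 10
```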

However, there are complicated tasks that cannot be handled with the classic input-processing-output principle and traditional programming. A self-learning algorithm can help here when the calculation method for the desired goal is unknown or the data is too extensive.

The idea behind machine learning is that the computer finds the optimal formula itself: the software learns from the given data which output is to be expected for which input. Via internal settings (parameters), it gets closer and closer to the desired results. After the learning process (training), the software is, in the best case, able to output a correct result for an input it has never seen before.

In order for programs to learn independently, large amounts of data are required. They serve as empirical values and form the basis on which relationships between input variables can be determined.

Machine learning example: handwriting recognition

For character recognition, the written character is placed on a matrix of, for example, 28 x 28 fields. Each field is an input variable whose color value represents the input. Every field thus influences the output.

The task is to determine which class a written character belongs to. In machine learning, this is therefore called a classification problem.

In order for the algorithm to be usable for handwriting recognition, it must be trained. It needs a large number of already categorized written characters. After a certain number of learning steps, the algorithm is able to recognize a character correctly in X out of 100 cases. X must be very high, because otherwise the system cannot be used in practice.

The output corresponds to the probability that the input is a specific digit. In the simplest case, you accept the best result as correct; alternatively, you can also check what the second- or third-best result would have been.
The result can be improved by training the neural network longer or by taking the context into account in addition to the probability.
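A small sketch of how such an output is read. The raw score values below are invented for illustration (they stand in for what a trained network would emit); the softmax step turns them into probabilities over the ten digit classes, from which the best and second-best guesses are taken:

```python
import math

# Made-up raw scores for the ten digit classes 0-9, standing in for
# the output of a trained classifier.
scores = [0.1, 0.2, 3.1, 0.3, 0.2, 2.6, 0.1, 0.2, 0.4, 0.1]

# Softmax: turn scores into probabilities that sum to 1.
exps = [math.exp(s) for s in scores]
norm = sum(exps)
probs = [e / norm for e in exps]

# Rank the digit classes by probability.
ranked = sorted(range(10), key=lambda d: probs[d], reverse=True)
best, second = ranked[0], ranked[1]
print(best, second)  # most likely digit, then the runner-up (here 2, then 5)
```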

Neural Networks

The term "neural networks" has existed since the 1950s. At that time, researchers discovered that the neurons in the brain receive input impulses with different weights and generate an output impulse from them, which in turn serves as input for other neurons. Computer scientists then tried to reproduce this process with the technology of the time, using matrix calculations.
Today, neural networks have nothing to do with brain research. All that remains are the matrix calculations.
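The weighted-sum idea that survives from the brain analogy can be sketched for a single artificial neuron. The input and weight values are illustrative, not taken from a real network:

```python
# One artificial neuron: multiply each input by its weight, add up the
# products, and pass the sum through an activation function.
inputs = [0.5, 0.8, 0.1]
weights = [0.9, -0.4, 0.3]   # illustrative values, not from a real network

weighted_sum = sum(i * w for i, w in zip(inputs, weights))
output = max(0.0, weighted_sum)   # ReLU: negative sums produce no impulse
print(output)
```

A whole layer of such neurons is exactly one matrix calculation: a matrix of weights multiplied by the vector of inputs.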

While a mathematician speaks of a matrix, for a programmer it is a two-dimensional array: in simple terms, a table with rows and columns. In AI, one speaks of a tensor of rank two. A three-dimensional array would be a tensor of rank three, and so on.
The term "tensor" refers to a mathematical object of linear algebra and differential geometry.
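The three names describe the same data structure, as a short sketch shows; the `rank` helper is a hypothetical function introduced here just to count nesting depth:

```python
# The same structure under three names: a rank-2 tensor is a matrix is
# a two-dimensional array (in Python, a list of lists).
matrix = [[1, 2, 3],
          [4, 5, 6]]          # rank-2 tensor: 2 rows x 3 columns

cube = [matrix, matrix]       # rank-3 tensor: 2 x 2 x 3

def rank(t):
    """Count nesting depth, i.e. the rank of a (well-formed) nested list."""
    return 1 + rank(t[0]) if isinstance(t, list) else 0

print(rank(matrix), rank(cube))  # 2 3
```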

Deep learning

From an IT point of view, neural networks are just a chain of functions whose parameters can be adjusted using new input data.
A distinction must be made: if more and more operations follow one another in a neural network, it is called "deep", as in "deep learning". With deep learning, neural networks can consist of more than a hundred layers. If the networks use ever larger matrices instead, they are called "wide".
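The "chain of functions" view can be sketched directly. Each layer below is a deliberately simple stand-in function with illustrative parameters; adding more links to the chain is what makes a network "deeper":

```python
# A neural network as a chain of functions: each "layer" is a function,
# and the network is their composition. More layers = "deeper".
def layer1(x):
    return 2 * x + 1      # illustrative parameters, adjusted by training

def layer2(x):
    return max(0, x)      # activation step

def layer3(x):
    return x - 3

def network(x):
    return layer3(layer2(layer1(x)))   # function composition: the chain

print(network(2))  # layer1 -> 5, layer2 -> 5, layer3 -> 2
```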

Deep learning can be understood as a set of optimization methods for neural networks. It is about improving techniques for prediction, diagnostics and recommendations. Training, inference (application) and adaptation of the model are the steps that make deep learning with neural networks so complex. The training alone requires the evaluation of large amounts of data. Only by parallelizing these work steps can deep learning be handled efficiently.

Specific applications and areas of application

  • Speech recognition and speech output
  • Image and face recognition
  • Translation
  • Text analysis for keywords
  • Weather and climate forecasts
  • Precision medicine for personalized diagnoses, therapies and drugs
  • Fraud detection and risk management
  • Realistic animations and VR applications
  • Editing and rendering films in real time
  • Finding unknown connections
  • Discovering new knowledge in existing data

Application: speech recognition

What is commonly referred to as speech recognition in AI is natural language processing (NLP). A very rough distinction is made between speech recognition and output (speech) and language understanding (language).

Speech recognition and output services include chat and voice bots or digital assistants that can be controlled with individual commands or sentences.

In language-understanding services, the AI must recognize the intention of the speaker or writer, i.e. put what has been said or written into context. This is necessary if a conversation is to be carried on over several turns.

Application: image and face recognition

Image recognition and processing is about interpreting the content of visual material.
Machine image and pattern recognition is used to recognize objects and faces, for example to block unwanted content or to monitor people.

Application: autonomous driving

Autonomous driving is mainly about recognizing the surroundings. Fortunately, dangerous traffic situations are rare, but they must be recognized reliably. Unfortunately, such situations do not follow a fixed pattern, which is not an ideal environment for AI.
Cases in which traffic rules have to be broken, for example when a solid line has to be crossed as an exception, are especially high hurdles.

The limits of artificial intelligence

The limits of artificial intelligence are reached quickly, and the methods are therefore of limited applicability.
A neural network only learns what is contained in the training data, without understanding the general patterns behind it. It behaves like a pupil or student who only learns what is strictly required.

In addition, the application of AI only covers the past. If something completely new happens or is to be created, the AI fails at this point.
It is conceivable that artificial intelligence could be used to compose a number-one hit or even to have a bestseller written. In principle, anyone who is familiar with computers and music software can also produce music with artificial intelligence.
To do this, the algorithms only have to be fed with the appropriate styles, from which they produce similar sounds. But the result is only something new derived from something familiar, which naturally resembles a certain artist or style. AI cannot invent a style of its own.

Another limitation arises when the AI makes a mistake or does not work. With traditionally programmed software, there is a source code. If the software makes a mistake, you can look up which part of the source code is responsible and fix it immediately. The error is then corrected.
This is not the case with machine learning. Here, for example, a neural network is trained. If it later does not work well enough, it cannot be repaired in the same way. You can only keep training, but this can also cause something that was already learned to be unlearned.

Another limitation occurs when a neural network is particularly good at something, but nobody can explain why.
This means we are moving towards a future in which we are surrounded by machines that we do not understand and therefore cannot trust. If software based on a neural network does not work, the software may have to be rewritten, because a direct correction of the neural network is not possible.

Conclusion on artificial intelligence

Ever since there have been technical devices, machines and computers, there has been the fear that people will first be displaced and then replaced. Especially when it comes to jobs, the introduction of AI can frighten employees in a company. Regardless of whether it really turns out that way, man and machine are unequal opponents and therefore cannot be compared. Individual activities that humans perform can be taken over by computers and thus by AI. Usually these are dull, repetitive activities that will inevitably be automated at some point.

AI works best when well-trained neural networks are applied to a task that is as narrow as possible.

Other related topics:

Everything you need to know about computer technology.

Computer technology primer

The computer technology primer is a book about the basics of computer technology, processor technology, semiconductor memory, interfaces, data storage devices, drives and important hardware components.
