Körber Prize 2019

Image: The structure of neural networks loosely resembles that of the human brain. Thanks to their networked ›neurons,‹ they are capable of learning. If a programmer successively shows the network many thousands of photos of different apples and pears, it will, once training is complete, also be able to distinguish apples from pears in unfamiliar photos.

»The ›third industrial revolution‹ replaces energy with the concept of information.«  BERNHARD SCHÖLKOPF

Far more successful since the mid-1980s have been the so-called artificial neural networks, whose structure is loosely modeled on the human brain. They consist of artificial neurons arranged in several layers. They are not fed expert rules but learn to solve tasks more like a child does: a data scientist trains a neural network step by step by showing it, for example, pictures of pears and apples. Initially, the network guesses which of the two types of fruit is shown. The data scientist checks the results and tells the system whether it was right or wrong. Over the course of training, the neural network gets better and better. The network stores the ›knowledge‹ it acquires in its artificial neurons, which are linked to one another, like synapses in the brain, via excitatory or inhibitory weights. After many thousands of training rounds, the weights are adjusted so that the neural network can distinguish apples from pears even in new images it has never seen before.
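The guess-check-adjust loop described above can be sketched with a single artificial neuron. The two input features (standing in for image data) and all numbers below are invented for illustration; a real image classifier would use many layers and learn from raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: each fruit is described by two hypothetical
# features (say, roundness and elongation) instead of real photos.
apples = rng.normal(loc=[0.9, 0.2], scale=0.1, size=(200, 2))  # label 0
pears = rng.normal(loc=[0.4, 0.8], scale=0.1, size=(200, 2))   # label 1
X = np.vstack([apples, pears])
y = np.array([0] * 200 + [1] * 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One artificial neuron: a weighted sum of the inputs, squashed to a
# value between 0 ("apple") and 1 ("pear").
w = np.zeros(2)
b = 0.0

# Training loop: the network "guesses", the error between guess and
# true label is fed back, and the weights are adjusted accordingly.
for _ in range(1000):
    p = sigmoid(X @ w + b)             # current guesses
    grad_w = X.T @ (p - y) / len(y)    # error signal for the weights
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# After training, the adjusted weights classify fruits the network
# has never seen before.
unseen = np.array([[0.85, 0.25], [0.35, 0.75]])
pred = (sigmoid(unseen @ w + b) > 0.5).astype(int)
print(pred)  # -> [0 1], i.e. apple, then pear
```

The weight adjustment here is plain gradient descent on a logistic loss; deep networks apply the same idea layer by layer via backpropagation.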


The most recent breakthrough in AI goes back to these methods of machine learning, whose performance has greatly improved thanks to ever faster computers, ever larger memory, and rapidly growing amounts of training data. Today neural networks control, for example, driverless cars, steering them autonomously with the aid of camera and sensor data and braking automatically when they recognize obstacles.

In many games, AI systems have become far superior to their human competitors. As early as 1997, Deep Blue, a chess computer built by IBM that drew on a giant database of championship games and could evaluate some 200 million positions per second, defeated the reigning world chess champion Garry Kasparov under tournament conditions. In the complex East Asian board game Go, the world champion was defeated by AlphaGo, a system developed by the Google subsidiary DeepMind and based on a neural network trained on championship matches.


The support vector machines that Schölkopf helped to develop work similarly to neural networks but provide more precise results for some tasks. Furthermore, they are based on a solid mathematical foundation, which makes their functioning more transparent.

»A simple task for a support vector machine would be, for example, to determine on the basis of entries for height and weight whether a person is a man or a woman,« explains Matthias Bauer, a doctoral candidate in Schölkopf’s MPI team in Tübingen. The system represents the data mathematically as vectors, which can be pictured as two clouds of points (one for women, one for men) in a two-dimensional coordinate system. Ideally, the two clouds can be separated by a straight line. Such a straight line is called a linear solution, and it can be computed particularly quickly. Yet since there are also short, light men and tall, heavy women, a few of the points land in the wrong cloud.
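This toy task can be run directly with a linear support vector machine. The height and weight distributions below are invented for illustration, and scikit-learn's `SVC` is used here as one common off-the-shelf implementation, not the software of Schölkopf's group.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical sample: (height in cm, weight in kg) per person.
women = np.column_stack([rng.normal(165, 6, 100), rng.normal(62, 7, 100)])
men = np.column_stack([rng.normal(180, 6, 100), rng.normal(82, 8, 100)])
X = np.vstack([women, men])
y = np.array([0] * 100 + [1] * 100)  # 0 = woman, 1 = man

# A linear SVM searches for the straight line that separates the two
# point clouds with the widest possible margin between them.
clf = SVC(kernel="linear").fit(X, y)

print(clf.predict([[162, 58], [185, 90]]))  # -> [0 1]
```

Because the two clouds overlap (the short, light men and tall, heavy women), the SVM uses a soft margin: it tolerates a few points on the wrong side of the line rather than failing outright.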

To separate the clouds anyway, a nonlinear, wavy boundary would have to be used, which, however, is much more complicated to compute.
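The difference between the two kinds of boundary is easy to demonstrate on data that no straight line can separate. The ring-shaped dataset below is synthetic, and the RBF kernel is used as one standard way of letting an SVM draw a nonlinear boundary; it is a sketch, not a description of a specific method from the article.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Synthetic data no straight line can separate: one class forms a
# ring around the other.
X, y = make_circles(n_samples=400, noise=0.08, factor=0.4, random_state=0)

# A linear boundary fails on this data ...
linear = SVC(kernel="linear").fit(X, y)

# ... while a nonlinear (RBF) kernel lets the SVM draw the required
# "wavy" boundary, at the price of a more expensive computation.
nonlinear = SVC(kernel="rbf").fit(X, y)

print(f"linear: {linear.score(X, y):.2f}, nonlinear: {nonlinear.score(X, y):.2f}")
```

The linear model ends up near chance level on this data, while the nonlinear one separates the ring from the core almost perfectly.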