News
April 6, 2016
By Nuritas
Artificial intelligence can beat humans at playing the game of Life
Author: Alessandro Adelfio
It’s a wonderful time to be alive. Especially for artificial intelligence (AI) lovers. For many decades, the idea of building an artificial intelligence capable of human-like reasoning remained confined to science fiction or to philosophical and ethical debates. A couple of weeks ago, and a few years earlier than expected, that idea took shape, and it now seems to be a stone’s throw from reality.
As in 1997, when the IBM supercomputer Deep Blue defeated world chess champion Garry Kasparov, it is once again a game that is benchmarking the revolution under way in artificial intelligence. Developed by Google DeepMind, AlphaGo is the first program to beat a 9-dan professional Go player (the highest rank) without a handicap.
But, from an AI perspective, the AlphaGo win is far more interesting than the Deep Blue win, and it will be remembered as a milestone in the field, for a number of reasons.
Leaving aside the details of Go itself, which make it a far harder game to approach computationally than chess, the most fascinating thing about AlphaGo is probably that it learned how to play “on its own”. Unlike Deep Blue, no knowledge of the game’s strategy was explicitly provided to the software by expert players; instead, it learned to play by observing thousands of games that had already been played and by practising against itself.
The underlying learning architecture is quite complex and integrates different machine learning techniques. Its core is essentially composed of three different Neural Networks (NNs), trained to capture information about the game at different levels of abstraction: “policy” networks that learn to suggest promising moves and a “value” network that learns to evaluate board positions. Their outputs are combined, through a tree search, to produce each move in a game.
AlphaGo is only the latest amazing achievement in Machine Learning (ML), the AI field concerned with building methods and techniques that allow computers to learn from data and make predictions about it without being explicitly programmed to do so.
Many different fields use machine learning. Indeed, and very close to our hearts here at Nuritas™, a significant area of bioinformatics harnesses ML for a variety of purposes: extracting high-level information from raw data, modelling biochemical interactions, and predicting biological features that are difficult to explore with laboratory methods, among many others.
Artificial neural networks are probably the most powerful and popular of ML techniques, and ML has been applied successfully not only in biology but also in many other fields, such as image recognition, natural language processing and speech recognition.
Although there is no absolute definition of a neural network, we can think of one as a mathematical model that aims to mimic the functioning of the human brain. It is generally composed of a set of nodes connected by links; using the biological brain analogy, each node corresponds to a neuron.
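To make this concrete, here is a minimal sketch in Python of a single artificial “neuron” (a toy illustration only, not the architecture of AlphaGo or any particular system): it takes a few input values, weights them according to the strength of its incoming links, and passes the result through a non-linear activation.

```python
import numpy as np

def neuron(inputs, weights, bias):
    # A single artificial "neuron": a weighted sum of its inputs,
    # passed through a non-linear activation (here, the logistic sigmoid).
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Toy example: three input signals feeding one neuron.
x = np.array([0.2, 0.8, -0.5])   # input values arriving along the links
w = np.array([0.4, -0.1, 0.6])   # link strengths (weights)
b = 0.1                          # bias term
print(neuron(x, w, b))           # the neuron's output activation
```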
During the NN’s “training period”, it uses a learning set, consisting of input/desired-output pairs, to adjust its internal mathematical model of the world. In the case of a game, a learning set will likely take the form of pairs such as sequence of moves/player wins or sequence of moves/player loses. In this scenario, the aim is to obtain a trained model that can associate a new, unseen sequence of moves with the most likely result of those moves; in other words, a model that is able to make predictions about novel inputs.
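As a rough illustration of that idea (the data and the way the “moves” are encoded below are entirely made up), a small neural network can be fitted to such input/output pairs with an off-the-shelf library and then asked to predict the outcome of a game it has never seen:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical learning set: each row is a game encoded as 10 numbers
# (a stand-in for a sequence of moves), labelled 1 if that player won.
rng = np.random.RandomState(0)
X_train = rng.rand(200, 10)
y_train = (X_train.mean(axis=1) > 0.5).astype(int)   # toy win/loss rule

# Fit a small neural network to the input/desired-output pairs.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Ask the trained model about a new, unseen "game".
new_game = rng.rand(1, 10)
print(model.predict(new_game))   # predicted outcome: 1 = win, 0 = loss
```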
A particular class of NNs, generally referred to as “Deep Neural Networks” (DNNs), has proved over the past decade to be particularly effective at modelling high-level abstractions in data. Networks of this kind generally split the learning process into multiple, hierarchical stages, stacking many layers of neurons so that each stage builds a more abstract representation on top of the one produced by the stage below it.
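Sticking with the toy NumPy style used above, a “deep” forward pass is simply several such stages applied one after another (the weights here are random and untrained, purely to show the layered structure):

```python
import numpy as np

def stage(x, W, b):
    # One hidden stage: a linear transform followed by a non-linearity (ReLU).
    return np.maximum(0.0, x.dot(W) + b)

rng = np.random.RandomState(0)
x = rng.rand(8)                           # raw, low-level input features

W1, b1 = rng.randn(8, 16), np.zeros(16)   # first stage: low-level patterns
W2, b2 = rng.randn(16, 8), np.zeros(8)    # second stage: more abstract features
W3, b3 = rng.randn(8, 1), np.zeros(1)     # final stage: a single prediction

h1 = stage(x, W1, b1)                     # each stage works on the output
h2 = stage(h1, W2, b2)                    # of the stage below it
print(h2.dot(W3) + b3)                    # e.g. a predicted score
```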
Both Neural Networks and Deep Neural Networks have been used to approach a vast range of problems in bioinformatics. Within proteomics in particular, they have been used to predict protein structural and functional information at many different levels. NN-based approaches have also been applied, with some degree of success, to the prediction of particular peptide characteristics.
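As a purely hypothetical sketch of what such a peptide-level prediction might look like (the sequences, labels and features below are invented for illustration and do not represent Nuritas™ data or methods), one could encode each peptide by its amino-acid composition and train a small network to classify it:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(peptide):
    # Encode a peptide as its amino-acid composition: the fraction of each
    # of the 20 standard residues (a simple, fixed-length representation).
    return np.array([peptide.count(aa) / len(peptide) for aa in AMINO_ACIDS])

# Invented toy data: short peptides labelled 1 ("active") or 0 ("inactive").
peptides = ["VPP", "IPP", "GAGA", "KLLLK", "AAAA", "RKWQK"]
labels   = [1,     1,     0,      1,       0,      1]

X = np.array([composition(p) for p in peptides])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, labels)

print(clf.predict([composition("LKP")]))  # predicted class for a new peptide
```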
Within Nuritas™, we are combining accurate expert systems that leverage our expertise in the study of peptides with the most up-to-date Neural Network and Deep Neural Network models for the extraction of high-level, data-driven knowledge. We combine this with other machine learning techniques for the analysis and classification of our knowledge base. Ultimately, these technologies allow us to maximize our laboratory’s throughput and efficiently access the most health-benefiting peptides within our food sources.
Much like AlphaGo has changed how games will be played going forward, we are changing what we know about machine learning as it applies to the prediction and discovery of disease-beating ingredients. Our networks are continuously learning from new data sets, ensuring that we have easier, more accurate access to the most life-changing molecules within food.