Chess and Deep Learning
Since Alan Turing wrote a program for playing chess in 1951, the game has been a benchmark for progress in machine intelligence. In 1997, Garry Kasparov, the reigning world chess champion, was beaten by IBM's Deep Blue supercomputer for the first time under standard tournament rules. Since then, chess-playing computers have become far more sophisticated, leaving even the best human players little chance against a modern chess engine running on a smartphone. More advanced systems such as Google's AlphaZero started out knowing only the rules of chess and, in a matter of hours, played more games against itself than have ever been recorded in human chess history. The ability of such learning systems makes them formidable opponents in chess and promising assistants in other fields such as business and robotics. Today's chess engines are significantly stronger than any human and have deeply influenced the development of chess theory, with deep learning methods playing a central role in the most recent advances.
What is Deep Learning?
Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from the raw input. In image analysis, for example, lower layers may detect simple features such as edges, while higher layers may recognise concepts meaningful to a human, such as digits, letters, or faces.
Deep learning drives many artificial intelligence (AI) applications and services that improve automation, performing analytical and physical tasks without human intervention. To achieve useful accuracy, deep learning programs require access to enormous amounts of training data and processing power, neither of which was easily available to programmers before the era of big data and cloud computing. Because deep learning can build complex statistical models from its own iterative output, it can produce accurate predictive models from large quantities of unlabelled, unstructured data.
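The idea of layers that build progressively higher-level features from raw input can be illustrated with a toy, hand-weighted network. The example below is a sketch, not a trained model: the hidden layer extracts two intermediate features from the raw inputs, and the output layer combines them into a higher-level concept (the XOR of the inputs, which no single layer of this kind can compute on its own).

```python
def step(z):
    """Simple threshold activation: fires (1) when the weighted sum is positive."""
    return 1 if z > 0 else 0

def neuron(inputs, weights, bias):
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor_net(x1, x2):
    # Hidden layer: extracts two intermediate "features" of the raw input.
    h_or = neuron([x1, x2], [1, 1], -0.5)      # fires if either input is on
    h_nand = neuron([x1, x2], [-1, -1], 1.5)   # fires unless both inputs are on
    # Output layer: combines the hidden features into a higher-level concept.
    return neuron([h_or, h_nand], [1, 1], -1.5)  # AND of the two features

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

In a real deep network the weights are learned from data rather than set by hand, and there are many more layers and units, but the principle of stacking feature extractors is the same.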
Chess as a Paradigm to Study Deep Learning
In any case, while computers have become faster, the way chess engines work has changed little. They rely on brute force: the process of searching through all possible future moves to find the best next move. Matthew Lai at Imperial College London built an artificial intelligence machine called Giraffe that taught itself to play chess by evaluating positions much more the way humans do, in a fundamentally different manner from conventional chess engines. The technology behind Lai's machine is a neural network, a way of processing information inspired by the human brain. It consists of several layers of nodes whose connections change as the system is trained. This training process uses large numbers of examples to fine-tune the connections so that the network produces a specific output given a particular input, to recognise the presence of a face in an image, for instance. So it is no surprise that deep neural networks can spot patterns in chess, and that is exactly the approach Lai has taken. His network consists of four layers that together examine each position on the board in three distinct ways.
How Does a Chess Algorithm Work?
To begin with, the algorithm looks at the global state of the game: the number and type of pieces on each side, which side is to move, castling rights, and so on. Second, it looks at piece-centric features such as the location of each piece on each side. The final view maps the squares that each piece attacks and defends.

Lai trained his network on a carefully constructed dataset taken from real chess games. The dataset must have the right distribution of positions. "For example, it doesn't make sense to train the system on positions with three queens per side, because those positions virtually never come up in real games," he says. It must also contain plenty of variety in unequal positions beyond those that typically occur in top-level chess games. That is because, although unequal positions rarely arise in real chess games, they crop up constantly in the searches the computer performs internally.

Lai created his dataset by randomly choosing five million positions from a database of computer chess games. He then added variety by applying a random legal move to each position before using it for training. In total he generated 175 million positions. The conventional way of training such machines is to manually evaluate each position and use this information to teach the machine to recognise which positions are strong and which are weak.

The ability of chess to generate a large dataset of possible moves has given algorithm designers a paradigm: use the data to learn and successfully execute a task, namely winning the chess game, and make the best decisions to defeat a human or machine opponent. The lessons from chess can also be applied in fields such as automation, robotics, social networks, and e-commerce, where algorithms are built to maximise output or sales.
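The three views of a position described above can be made concrete with a small sketch. This is a toy encoding on a handful of pieces, not Giraffe's actual feature representation: the board layout, the square coordinates, and the restriction of the attack map to knights are all simplifying assumptions for illustration.

```python
# Toy position: squares as (file, rank) pairs in 0-7; white pieces uppercase.
board = {(4, 0): 'K', (6, 2): 'N', (4, 7): 'k', (3, 6): 'q'}
side_to_move = 'white'

# View 1: global state -- piece counts per side and the side to move.
white = [p for p in board.values() if p.isupper()]
black = [p for p in board.values() if p.islower()]
global_features = {'white_pieces': len(white), 'black_pieces': len(black),
                   'side_to_move': side_to_move}

# View 2: piece-centric -- the location of each piece.
piece_features = {piece: sq for sq, piece in board.items()}

# View 3: square-centric -- the squares each piece attacks (knights only here).
KNIGHT_JUMPS = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_attacks(sq):
    f, r = sq
    return [(f + df, r + dr) for df, dr in KNIGHT_JUMPS
            if 0 <= f + df < 8 and 0 <= r + dr < 8]

attack_map = {sq: knight_attacks(sq) for sq, p in board.items() if p in 'Nn'}

print(global_features)
print(len(attack_map[(6, 2)]))  # squares attacked by the white knight
```

A full engine would compute the attack-and-defend maps for every piece type and feed all three views into the network as one input vector; a library such as python-chess could generate the positions and legal moves for the dataset-augmentation step Lai describes.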