It was shocking news that AlphaGo beat the human champion Lee Sedol in three consecutive games. We can safely predict that AlphaGo will show even more powerful results and abilities in the near future.
What Are The Intelligent Sources Of AlphaGo And What Are Its Limitations?
There is no doubt that thousands of computers and state-of-the-art algorithms like deep learning contributed to AlphaGo. However, beyond hardware and software, without accessible Go game data AlphaGo's learning would have been much slower. The secret intelligence source behind a machine barely two years old beating Lee Sedol, a 33-year-old human Go champion, is accessible human game data.
Another condition I'd like to point out is the structure of that accessible human game data. If we gave the Go game data to AlphaGo as speech or in natural-language form, AlphaGo could not learn from it, or at least would have great difficulty reaching its current level of intelligence. Simply put, AlphaGo learns nothing from watching a real human Go match.
Compared to AlphaGo, humans are powerful learners across various multimedia formats, including natural language. AlphaGo, by contrast, learns Go only from huge amounts of data in digitized form.
Here is a basic cooperation strategy for humans and computers: once humans generate their behavior and thoughts in digitized form rather than in multimedia or natural-language forms, computers can benefit enormously from them. We can call this human-computer interaction or human computation. Whatever we call it, I believe the future of artificial intelligence lies in human-computer cooperation.
3 Types Of Reasoning
Categorization reveals a lot about a field of study. However, artificial intelligence takes such diverse approaches to its goals that presenting one neat taxonomy of all of them is difficult. I will adapt the classical classification of reasoning here: deduction, induction, and abduction. Although these names and this taxonomy may not be precise, the typical approaches behind these three kinds of reasoning cover many AI algorithms, including machine learning and the deep learning known as a core algorithm of AlphaGo.
First, inductive reasoning is the typical approach of statistical machine learning such as KNN (k-nearest neighbors) or SVM (support vector machine). "He died and she died; everyone dies, so I will die" is inductive reasoning. Likewise, these algorithms generate statistical functions and parameters that can classify new data, based on training data (in other words, known answers).
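To make the inductive idea concrete, here is a minimal sketch of a 1-nearest-neighbor classifier in plain Python. The function name and the toy 2-D points are invented for illustration; real KNN libraries handle k neighbors, distance weighting, and much larger data.

```python
import math

def nearest_neighbor(train, query):
    """Classify `query` with the label of its closest training point (1-NN)."""
    # train is a list of ((x, y), label) pairs; all data here is made up.
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    _, label = min(train, key=lambda item: dist(item[0], query))
    return label

# Toy training data (the "known answers"): points near the origin are "a",
# points near (5, 5) are "b".
train = [((0, 0), "a"), ((1, 0), "a"), ((5, 5), "b"), ((4, 5), "b")]
print(nearest_neighbor(train, (0.5, 0.5)))  # -> a
print(nearest_neighbor(train, (4.5, 4.8)))  # -> b
```

The classifier induces a general rule ("points over here are class a") purely from labeled examples, which is exactly the inductive pattern described above.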
Second, deductive reasoning requires many rules and facts: if all humans die and I am a human, then I will die. One machine learning algorithm very similar to deductive reasoning is the decision tree. Decision tree algorithms generate rules based on known answers.
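The "rules and facts" flavor of deduction can be sketched as a tiny forward-chaining rule engine. This is a toy illustration, not a real inference library; the rule and fact names are invented to mirror the "all humans die" example above.

```python
def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Modus ponens: if every premise is known, the conclusion follows.
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# "All humans die, and I am a human, so I will die."
rules = [({"is_human"}, "will_die")]
derived = forward_chain({"is_human"}, rules)
print(derived)  # -> {'is_human', 'will_die'}
```

A trained decision tree works in the same spirit: once its rules are fixed, classifying a new example is a deductive walk from premises to a conclusion.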
Third, abductive reasoning is similar to deep learning. "He died and the cat died, so he is a cat" is abductive reasoning. Although the logical output in this example is incorrect, in the real world abductive reasoning shows great power, since we can never get complete information. Deep learning uses many examples to learn patterns automatically in its memory network (in other words, a deep neural network).
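Deep learning is far richer than any toy example, but a single perceptron already shows the core idea of learning a pattern from examples rather than from explicit rules. The following plain-Python sketch learns logical AND from labeled samples; the function name, learning rate, and data are invented for illustration.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from labeled examples alone -- no hand-written rules."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Nudge the weights toward the correct answer for this example.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy data: logical AND. The pattern is inferred from examples, not stated.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for (x1, x2), _ in samples]
print(predictions)  # -> [0, 0, 0, 1]
```

Nothing in the code states the AND rule; the network absorbs it from data, which is the sense in which these systems "jump to" a pattern the way abduction jumps to an explanation.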
We live with many kinds of artificial intelligence algorithms, spanning deductive, inductive, and abductive reasoning. No single type of reasoning can solve every problem; we have to choose algorithms to suit the problem at hand.
However, one need common to almost all of these algorithms is data. Extracting rules automatically, generating mathematical functions, and training deep neural networks all require data.
Fortunately, a single human can produce valuable and meaningful data. (Yet!)
1 Comment
Regarding the 3 types of reasoning.
Respectfully, what you are saying will only confuse the reader. Induction, or generalization, is used in all ML and deep learning training processes; it does not matter whether the learning is unsupervised, supervised, or reinforcement learning, it is about finding a set of functions/relations/mappings that are balanced in the bias-variance trade-off. Once you have a trained model, you can talk about deductive reasoning at the moment the model is making a prediction.