Can machines become smarter than humans? No, says Jean-Gabriel Ganascia: this is just a myth inspired by science fiction. In his article, he recalls the main stages in the development of this branch of science, the achievements of modern technology, and the ethical questions that increasingly demand our attention.
Artificial intelligence (AI) is a branch of science that was officially born in 1956, at a summer workshop at Dartmouth College (Hanover, USA) organized by four American scientists: John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon. Since then, the term “artificial intelligence”, most likely coined to attract attention, has become so popular that today you can hardly meet a person who has never heard it. Over the past sixty years, this branch of computer science has continued to develop, and intelligent technologies have played an important role in changing the face of the world.
However, the popularity of the term “artificial intelligence” rests largely on a misinterpretation: the notion of an artificial entity endowed with intelligence that could supposedly compete with human beings. This idea, rooted in ancient legends and traditions such as the myth of the Golem, has recently been revived by contemporaries like the British physicist Stephen Hawking (1942–2018), the American entrepreneur Elon Musk and the American engineer Ray Kurzweil, as well as by proponents of so-called strong or general AI. We will not dwell on this understanding of the term here, because it is a product of a rich imagination shaped by science fiction rather than a tangible scientific reality confirmed by experiments and empirical observations.
Many of the results achieved with AI technology surpass human abilities: in 1997, a computer defeated the then-reigning world chess champion, and more recently, in 2016, other computers beat the world’s best go and poker players. Computers prove, or help to prove, mathematical theorems; knowledge is produced automatically by machine-learning methods from huge masses of data whose volume is measured in terabytes (10¹² bytes) and even petabytes (10¹⁵ bytes).
Machine-learning techniques allow some automata to recognize and transcribe spoken language, like the shorthand typists of yesteryear, while others can accurately identify faces or fingerprints among tens of millions of others and process texts written in natural languages. Thanks to the same methods, cars drive themselves, computers diagnose melanomas from photographs of moles taken with cell phones better than dermatologists do, robots wage war in place of humans, and factory assembly lines become ever more automated.
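To make concrete what distinguishes the machine-learning methods mentioned above from earlier rule-based programs, here is a minimal illustrative sketch (not drawn from the article itself): a nearest-neighbour classifier, one of the simplest learning techniques, which generalizes from labelled examples instead of following hand-written rules. All names and data here are invented for illustration.

```python
import math

def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`.

    `train` is a list of (feature_vector, label) pairs. No explicit
    rules are programmed: the answer is generalized from examples,
    which is the core idea of machine learning.
    """
    best_label, best_dist = None, float("inf")
    for features, label in train:
        dist = math.dist(features, query)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy data: two clusters of 2-D points labelled "A" and "B".
examples = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
            ((1.0, 1.0), "B"), ((0.9, 1.2), "B")]

print(nearest_neighbour(examples, (0.1, 0.0)))  # prints "A"
print(nearest_neighbour(examples, (1.1, 0.9)))  # prints "B"
```

Real systems of the kind the article describes use far more sophisticated models trained on terabytes of data, but the principle of learning from labelled examples is the same.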
Scientists also use these methods to determine the functions of biological macromolecules, in particular proteins and genomes, from the sequences of their components: amino acids for proteins, bases for genomes. More generally, all the sciences are undergoing a major epistemological rupture, owing to the qualitative difference between in silico experiments, so called because they are performed on big data using powerful processors with silicon chips, and experiments in vivo (on living tissue) and, above all, in vitro.
Artificial intelligence techniques are now deployed in almost every field of activity.
Many routine processes can now be automated, which will transform our professions and, ultimately, eliminate some of them.