A Brief History of Artificial Intelligence


The concept of artificial intelligence (AI) covers not only the technologies that make it possible to create intelligent machines (including computer programs). AI is also one of the areas of scientific thought.

Artificial Intelligence - Definition

Intelligence is the mental faculty of a person that includes the following abilities:

  • the ability to adapt;
  • learning through the accumulation of experience and knowledge;
  • the ability to apply knowledge and skills to manage the environment.

The intellect combines all of a person's abilities for cognizing reality. With its help, a person thinks, memorizes new information, perceives the environment, and so on.

Artificial intelligence is understood as a branch of information technology concerned with the study and development of systems (machines) endowed with the capabilities of human intelligence: the ability to learn, to reason logically, and so on.

At the moment, work on artificial intelligence is carried out by creating new programs and algorithms that solve problems in the same way as a person does.

Because the definition of AI evolves as the field itself develops, it is worth mentioning the "AI effect". It refers to what happens whenever artificial intelligence makes some progress: as soon as AI learns to perform some action, critics immediately join in, arguing that these successes do not indicate the presence of thinking in the machine.

Today, the development of artificial intelligence goes in two independent directions:

  • neurocybernetics;
  • logical approach.

The first direction involves the study of neural networks and evolutionary computing from the point of view of biology. The logical approach involves the development of systems that imitate high-level intellectual processes: thinking, speech, and so on.

The first work in the field of AI began to be conducted in the middle of the last century. The pioneer of research in this direction was Alan Turing, although certain ideas began to be expressed by philosophers and mathematicians in the Middle Ages. In particular, as early as the beginning of the 20th century, a mechanical device capable of solving chess problems was introduced.

But this direction was really formed by the middle of the last century. The appearance of works on AI was preceded by research on human nature, ways of knowing the world around us, the possibilities of the thought process, and other areas. By that time, the first computers and algorithms had appeared. That is, the foundation was created on which a new direction of research was born.

In 1950, Alan Turing published an article in which he asked questions about the capabilities of future machines and whether they could rival humans in the ability to think. It was this scientist who developed the procedure that was later named after him: the Turing test.

After the publication of the English scientist's work, new research in the field of AI appeared. According to Turing, only a machine that cannot be distinguished from a person during communication can be recognized as a thinking machine. Around the same time, a concept called the "child machine" appeared in the scientist's work. It envisaged the progressive development of AI: the creation of machines whose thought processes are first formed at the level of a child's and then gradually improve.

The term "artificial intelligence" was born later. In 1956, a group of scientists, including Turing, met at the American University of Dartmund to discuss issues related to AI. After that meeting, the active development of machines with the capabilities of artificial intelligence began.

A special role in the creation of new technologies in the field of AI was played by the military departments, which actively funded this area of research. Subsequently, work in the field of artificial intelligence began to attract large companies.

Modern life poses more complex challenges for researchers. Therefore, the development of AI is carried out in fundamentally different conditions, if we compare them with what happened during the period of the emergence of artificial intelligence. The processes of globalization, the actions of malefactors in the digital sphere, the development of the Internet and other problems - all this poses complex tasks for scientists, the solution of which lies in the field of AI.

Despite the successes achieved in this area in recent years (for example, the emergence of autonomous vehicles), there are still skeptics who do not believe that a truly thinking artificial intelligence, rather than merely a very capable program, can be created. A number of critics fear that the active development of AI will soon lead to a situation in which machines completely replace people.

Research directions

Philosophers have not yet come to a consensus about what is the nature of the human intellect, and what is its status. In this regard, in scientific works devoted to AI, there are many ideas that tell what tasks artificial intelligence solves. There is also no common understanding of the question of what kind of machine can be considered intelligent.

Today, the development of artificial intelligence technologies goes in two directions:

  1. Descending (semiotic). It involves the development of new systems and knowledge bases that imitate high-level mental processes such as speech, expression of emotions and thinking.
  2. Ascending (biological). This approach involves research in the field of neural networks, through which models of intellectual behavior are created from the point of view of biological processes. Based on this direction, neurocomputers are being created.

The Turing test determines whether an artificial intelligence (a machine) can think in the same way as a person. In a general sense, this approach involves the creation of AI whose behavior does not differ from human actions in the same, ordinary situations. In fact, the Turing test assumes that a machine will be considered intelligent only if, when communicating with it, it is impossible to tell who is talking: a mechanism or a living person.

Science fiction books offer a different way of assessing the capabilities of AI. Artificial intelligence will become real if it feels and can create. However, this approach to definition does not stand up to practical application. Already, for example, machines are being created that have the ability to respond to changes in the environment (cold, heat, and so on). At the same time, they cannot feel the way a person does.

Symbolic approach

Success in solving problems is largely determined by the ability to approach a situation flexibly. Machines, unlike people, interpret the data they receive in a uniform way, so only a human brings flexibility to problem solving. A machine performs operations according to prewritten algorithms that exclude the use of several models of abstraction. Flexibility can be achieved from programs only by increasing the resources involved in solving the problem.

The above disadvantages are typical for the symbolic approach used in the development of AI. However, this direction of development of artificial intelligence allows you to create new rules in the calculation process. And the problems arising from the symbolic approach can be solved by logical methods.

Logical approach

This approach involves the creation of models that mimic the process of reasoning. It is based on the principles of logic.

This approach does not involve the use of rigid algorithms that lead to a certain result.

Agent Based Approach

It uses intelligent agents. This approach assumes the following: intelligence is the computational part of the ability to achieve goals. The machine plays the role of an intelligent agent: it perceives the environment with the help of special sensors and interacts with it through mechanical actuators.

The agent-based approach focuses on the development of algorithms and methods that allow machines to remain operational in various situations.

Hybrid approach

This approach involves the integration of neural and symbolic models, through which the solution of all problems associated with the processes of thinking and computing is achieved. For example, neural networks can generate the direction in which the machine's operation moves, while statistical learning provides the basis on which problems are solved.

According to experts at Gartner, by the beginning of the 2020s almost all released software products will use artificial intelligence technologies. The experts also suggest that about 30% of investments in the digital sphere will go to AI.

According to Gartner analysts, artificial intelligence opens up new opportunities for cooperation between people and machines. At the same time, the process of crowding out a person by AI cannot be stopped and in the future it will accelerate.

Experts at PwC believe that by 2030 the world's gross domestic product will grow by about 14% thanks to the rapid introduction of new technologies. Approximately half of the increase will come from greater efficiency of production processes; the other half will be the additional profit obtained by introducing AI into products.

Initially, the effect of using artificial intelligence will be felt in the United States, since this country has created the best conditions for AI-driven machines. Later it will be overtaken by China, which will extract the maximum profit by introducing such technologies into products and their production.

Experts at Salesforce claim that AI will increase the profitability of small businesses by about $1.1 trillion, and that this will happen by 2021. This will be achieved in part by implementing AI-based solutions in systems responsible for communication with customers. At the same time, the efficiency of production processes will improve thanks to automation.

The introduction of new technologies will also create an additional 800,000 jobs. Experts note that this figure offsets the loss of vacancies due to process automation. Analysts, based on a survey among companies, predict their spending on factory automation will rise to about $46 billion by the early 2020s.

In Russia, work is also underway in the field of AI. For 10 years, the state has financed more than 1.3 thousand projects in this area. Moreover, most of the investments went to the development of programs that are not related to the conduct of commercial activities. This shows that the Russian business community is not yet interested in introducing artificial intelligence technologies.

In total, about 23 billion rubles were invested in Russia for these purposes. The amount of government subsidies is inferior to the amount of AI funding shown by other countries. In the United States, about 200 million dollars are allocated for these purposes every year.

Basically, in Russia, funds are allocated from the state budget for the development of AI technologies, which are then used in the transport sector, the defense industry, and in projects related to security. This circumstance indicates that in our country people are more likely to invest in areas that allow you to quickly achieve a certain effect from the invested funds.

The above study also showed that Russia now has a high potential for training specialists who can be involved in the development of AI technologies. Over the past 5 years, about 200 thousand people have been trained in areas related to AI.

AI technologies are developing in the following directions:

  • solving problems that make it possible to bring the capabilities of AI closer to human ones and find ways to integrate them into everyday life;
  • development of a full-fledged mind, through which the tasks facing humanity will be solved.

At the moment, researchers are focused on developing technologies that solve practical problems. So far, scientists have not come close to creating a full-fledged artificial intelligence.

Many companies are developing technologies in the field of AI. "Yandex" has been using them in the work of the search engine for more than one year. Since 2016, the Russian IT company has been engaged in research in the field of neural networks. The latter change the nature of the work of search engines. In particular, neural networks compare the query entered by the user with a certain vector number that most fully reflects the meaning of the task. In other words, the search is conducted not by the word, but by the essence of the information requested by the person.

In 2016 "Yandex" launched the service "Zen", which analyzes user preferences.

The company ABBYY recently introduced the Compreno system, which makes it possible to understand text written in natural language. Other systems based on artificial intelligence technologies have also entered the market relatively recently:

  1. Findo. The system is capable of recognizing human speech and searches for information in various documents and files using complex queries.
  2. Gamalon. This company introduced a system with the ability to self-learn.
  3. Watson. An IBM computer that uses a large number of algorithms to search for information.
  4. ViaVoice. Human speech recognition system.

Large commercial companies are not bypassing advances in the field of artificial intelligence. Banks are actively implementing such technologies in their activities. With the help of AI-based systems, they conduct transactions on exchanges, manage property and perform other operations.

The defense industry, medicine and other areas are implementing object recognition technologies. And game development companies are using AI to create their next product.

Over the past few years, a group of American scientists has been working on a project NEIL, in which the researchers ask the computer to recognize what is shown in the photograph. Experts suggest that in this way they will be able to create a system capable of self-learning without external intervention.

The company VisionLabs introduced its own platform, LUNA, which can recognize faces in real time by picking them out of a huge array of images and videos. This technology is now used by large banks and retail chains. With LUNA, it is possible to compare people's preferences and offer them relevant products and services.

The Russian company NtechLab is working on similar technologies, and its specialists are trying to create a face recognition system based on neural networks. According to the latest data, the Russian development copes with the assigned tasks better than a person.

According to Stephen Hawking, the development of artificial intelligence technologies in the future will lead to the death of mankind. The scientist noted that people will gradually degrade due to the introduction of AI. And in the conditions of natural evolution, when a person needs to constantly fight to survive, this process will inevitably lead to his death.

Russia is positively considering the introduction of AI. Alexei Kudrin once said that the use of such technologies would reduce the cost of maintaining the state apparatus by about 0.3% of GDP. Dmitry Medvedev predicts the disappearance of a number of professions due to the introduction of AI. However, the official stressed that the use of such technologies will lead to the rapid development of other industries.

According to experts from the World Economic Forum, by the beginning of the 2020s, about 7 million people in the world will lose their jobs due to the automation of production. The introduction of AI is highly likely to cause the transformation of the economy and the disappearance of a number of professions related to data processing.

Experts McKinsey declare that the process of automation of production will be more active in Russia, China and India. In these countries, in the near future, up to 50% of workers will lose their jobs due to the introduction of AI. Their place will be taken by computerized systems and robots.

According to McKinsey, artificial intelligence will replace jobs that involve physical labor and information processing: retail, hotel staff, and so on.

By the middle of this century, according to experts from an American company, the number of jobs worldwide will be reduced by about 50%. People will be replaced by machines capable of carrying out similar operations with the same or higher efficiency. At the same time, experts do not exclude the option in which this forecast will be realized before the specified time.

Other analysts note the harm that robots can cause. For example, McKinsey experts point out that robots, unlike humans, do not pay taxes. As a result, due to a decrease in budget revenues, the state will not be able to maintain infrastructure at the same level. Therefore, Bill Gates proposed a new tax on robotic equipment.

AI technologies increase the efficiency of companies by reducing the number of mistakes made. In addition, they allow you to increase the speed of operations to a level that cannot be achieved by a person.

Previously, the concept of artificial intelligence (AI) was associated with hopes of creating a thinking machine that could compete with the human brain and possibly surpass it. These hopes, which captured the imagination of many enthusiasts for a long time, remained unfulfilled. And although the fantastic literary prototypes of "smart machines" were created hundreds of years before our day, it was only from the mid-1930s, with the publication of the works of A. Turing that substantiated the feasibility of creating such devices, that the problem of AI began to be taken seriously.

In order to answer the question of which machine can be considered "thinking", Turing suggested using the following test: the tester communicates through an intermediary with an interlocutor invisible to him, either a person or a machine. A machine can be considered "intelligent" if the tester cannot distinguish it from a person in the course of such communication.

If the tester, when testing a computer for "intelligence", adheres to fairly strict restrictions on the topic and form of the dialogue, any modern computer equipped with suitable software will pass this test. The ability to carry on a conversation might be considered a sign of intelligence, but, as has been shown, this human ability is easily modeled on a computer. The ability to learn can also serve as a sign of intelligence. In 1961, Donald Michie, one of the leading British AI experts, described a mechanism consisting of 300 matchboxes that could learn to play tic-tac-toe. Michie called this device MENACE (Matchbox Educable Noughts And Crosses Engine). In the name ("menace" means threat), there is obviously a share of irony caused by prejudice against thinking machines.

Until now, a single and universally recognized definition of AI does not exist, and this is not surprising. “Suffice it to recall that there is also no universal definition of human intelligence. The discussion about what can be considered a sign of AI and what cannot is reminiscent of the disputes of medieval scholars about how many angels could fit on the tip of a needle”. Now it is customary to refer to AI as a number of algorithms and software systems whose distinctive feature is that they can solve some problems in the same way as a person thinking about their solution would do.

Neural networks

The idea of neural networks was born in the course of research in the field of artificial intelligence, namely as a result of attempts to reproduce the ability of biological neural systems to learn and correct errors by modeling the low-level structure of the brain. The main area of research on artificial intelligence in the 1960s-80s was expert systems. Such systems were based on high-level modeling of the process of thinking (in particular, on its representation as the manipulation of symbols). It soon became clear that such systems, although they may be useful in some areas, do not capture some key aspects of how the human brain works.

According to one point of view, the reason for this is that they are unable to reproduce the structure of the brain. To create artificial intelligence, you need to build a system with a similar architecture.

The brain consists of a very large number (approximately 10^10) of neurons connected by numerous connections (on average, several thousand connections per neuron, but this number can fluctuate greatly). Neurons are special cells capable of propagating electrochemical signals. The neuron has a branched information input structure (dendrites), a nucleus and a branching output (axon). The axons of a cell are connected to the dendrites of other cells via synapses. When activated, a neuron sends an electrochemical signal down its axon. Through synapses, this signal reaches other neurons, which can in turn be activated. The neuron is activated when the total level of signals that came to its nucleus from the dendrites exceeds a certain level (the activation threshold).

The intensity of the signal received by a neuron (and, consequently, the possibility of its activation) strongly depends on the activity of the synapses. Each synapse has a length, and special chemicals transmit the signal across it. One of the most respected researchers of neural systems, Donald Hebb, postulated that learning consists primarily of changes in the strength of synaptic connections. For example, in Pavlov's classical experiment, a bell was rung each time before the dog was fed, and the dog quickly learned to associate the ringing of the bell with food.

The synaptic connections between the areas of the cerebral cortex responsible for hearing and those controlling the salivary glands strengthened, and when the cortex was excited by the sound of the bell, the dog began to salivate.

Thus, being built from a very large number of very simple elements (each of which takes a weighted sum of input signals and, if the total input exceeds a certain level, passes on a binary signal), the brain is able to solve extremely complex problems. The definition of a formal classical neuron is given as follows:

It receives input signals (input data or the output signals of other neurons in the network) through several input channels. Each input signal passes through a connection that has a certain intensity (or weight); this weight corresponds to the synaptic activity of a biological neuron. Each neuron has a specific threshold value associated with it. The weighted sum of the inputs is computed, the threshold value is subtracted from it, and the result is the neuron's activation value.

The activation signal is transformed using an activation function (or transfer function) and as a result, the output signal of the neuron is obtained.

If a step activation function is used, such a neuron works in exactly the same way as the biological neuron described above.
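
To make this description concrete, here is a minimal sketch of such a formal neuron in Python. The step activation, the example weights and the threshold value below are illustrative choices, not values taken from the text.

```python
# Minimal sketch of the formal neuron described above (illustrative values only).

def step(x):
    """Step activation: output 1 if the net input is positive, otherwise 0."""
    return 1 if x > 0 else 0

def formal_neuron(inputs, weights, threshold):
    """Weighted sum of the inputs minus the threshold, passed through the activation function."""
    activation = sum(i * w for i, w in zip(inputs, weights)) - threshold
    return step(activation)

# Example: three input signals with arbitrary weights and a threshold of 1.0.
print(formal_neuron([1.0, 0.0, 1.0], weights=[0.7, 0.2, 0.6], threshold=1.0))  # prints 1
```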

Neural networks in artificial intelligence

Work on the creation of intelligent systems is carried out in two directions. Supporters of the first direction, who today make up the absolute majority of specialists in the field of artificial intelligence, proceed from the position that artificial systems are not required to repeat, in their structure and functioning, the structures and processes inherent in biological systems. What matters is only that, by one means or another, the same behavioral results can be achieved that are characteristic of humans and other biological systems.

Supporters of the second direction believe that this cannot be done at a purely informational level. According to these experts, the phenomena of human behavior and the human ability to learn and adapt are a consequence of the biological structure and the peculiarities of its functioning.

The supporters of the first, informational direction have actual working prototypes and programs that model certain aspects of intellect. One of the most striking works representing this direction is the General Problem Solver program by A. Newell, J. Shaw and H. Simon. The development of the informational direction proceeded from the task of rationalizing reasoning by finding general methods for quickly identifying false and true statements in a given system of knowledge. The ability to reason and to find contradictions in various systems of interrelated situations, objects and concepts is an important aspect of the phenomenon of thinking, an expression of the capacity for deductive thought.

The effectiveness of the information direction is indisputable in the field of study and reproduction of deductive mental manifestations. For some practical problems, this is sufficient. The information direction is an exact, rigorous science that has incorporated the main results of cybernetics research and mathematical culture. The main problems of the information direction are to introduce internal activity into their models and to be able to present inductive procedures.

One of the central problems is "the problem of active knowledge that generates the need for the system's activities due to the knowledge that has accumulated in the system's memory".

The supporters of the second, biological direction so far have fewer results than hopes. One of the founders of the biological trend in cybernetics is W. McCulloch. In neurophysiology, it has been established that a number of functions and properties in living organisms are implemented using certain neural structures. By reproducing such structures, good models have been obtained in a number of cases, especially for certain aspects of the work of the optic tract.

The creation of neurocomputers simulating neural networks (NN) is currently considered as one of the most promising areas in solving the problems of intellectualization of newly created computers and information-analytical systems of a new generation.

In most studies on this topic, the NN is presented as a set of a large number of relatively simple elements whose connection topology depends on the type of network. Almost all known approaches to the design of neural networks are mainly associated with the selection and analysis of particular structures of homogeneous networks built from formal neurons with known properties (Hopfield, Hamming, Grossberg, Kohonen networks, etc.) and with certain mathematically described modes of their operation. In this case the term "neural network" is metaphorical, since it only reflects the fact that these networks are in some sense similar to living neural networks but do not reproduce them in all their complexity. As a result of this interpretation, neurocomputers are regarded as the next stage of highly parallel supercomputers, built around the idea of parallelizing algorithms for solving different classes of problems.

The term neurocomputer itself, as a rule, is in no way connected with any properties or characteristics of the brain of humans or animals. It is associated only with the conventional name of the threshold logic element, a formal neuron with adjustable or fixed weight coefficients, which implements the simplest transfer function of a neuron cell.

Research in the field of creating neurointelligence is carried out at various levels: theoretical tools, prototypes for applied tasks, NN software tools, and hardware structures. The main stages on the way to creating a brain-like computer are: elucidating the principles of the formation of inter-element connections in brain-like systems of adaptive networks with a large number of elements; creating a compact multi-input adaptive element analogous to a real neuron and studying its functional features; and developing and implementing a training program for a brain-like device.





JOINT INSTITUTE FOR NUCLEAR RESEARCH

EDUCATIONAL AND SCIENTIFIC CENTER

ESSAY

in History and Philosophy of Science

on the topic:

HISTORY OF DEVELOPMENT OF ARTIFICIAL INTELLIGENCE

Completed:

Pelevanyuk I.S.

Dubna

2014

Introduction

Before Science

The very first ideas

Three Laws of Robotics

First scientific steps

Turing test

Dartmouth Seminar

1956-1960: a time of great hopes

1970s: Knowledge Based Systems

Fight on a chessboard

Use of artificial intelligence for commercial purposes

Paradigm shift

Data mining

Conclusion

References

Introduction

The term intellect (lat. intellectus) means the mind, reason, the ability to think and rational knowledge. Usually, this means the ability to acquire, remember, apply and transform knowledge to solve some problems. Thanks to these qualities, the human brain is able to solve a variety of tasks. Including those for which there are no previously known solution methods.

The term artificial intelligence arose relatively recently, but even now it is almost impossible to imagine the world without it. Most often people do not notice its presence, but if it suddenly disappeared, this would radically affect our lives. The areas in which artificial intelligence technologies are used keep expanding: once it was chess-playing programs, then robot vacuum cleaners, and now algorithms are able to trade on exchanges by themselves.

This direction was formed on the basis of the assertion that human intelligence can be described in detail and subsequently successfully imitated by a machine. Artificial intelligence gave rise to great optimism, but its implementation soon proved staggeringly complex.

The main areas of development of artificial intelligence include reasoning, knowledge, planning, learning, language communication, perception, and the ability to move and manipulate objects. Generalized artificial intelligence (or "strong AI") is still on the horizon. Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are a huge number of tools that use artificial intelligence: different versions of search algorithms, mathematical optimization algorithms, logics, probability-based methods, and many others.

In this essay, I tried to collect the most important, from my point of view, events that influenced the development of technology and theory of artificial intelligence, the main achievements and prerequisites.

Before the advent of science

The very first ideas

“We are told 'madman' and 'dreamer',

But, freed from sad dependence,

Over the years the skillful brain of a thinker

Will artificially create a thinker.”

Goethe, Faust

The idea that someone other than a human could do hard work for a human originated in the Stone Age, when man domesticated the dog. The dog was ideally suited to the role of a watchman and performed this task much better than a person. Of course, this example cannot be considered a demonstration of artificial intelligence, because a dog is a living creature: it is already endowed with the ability to recognize images and orient itself in space, and it is also predisposed to some basic learning, enough to tell friend from foe. Nevertheless, it shows the direction of human thought.

Another example is the myth of Talos. Talos, according to legend, was a huge bronze knight whom Zeus gave to Europa to protect the island of Crete. His job was to keep outsiders away from the island. If they approached, Talos threw stones at them; if they managed to land, Talos set himself on fire and burned the enemies in his arms.

Why is Talos so remarkable? He was constructed from the most durable material of the time, was capable of identifying intruders, was virtually invulnerable, and needed no rest. This is how the ancient Greeks imagined a creation of the gods. What was most valuable in this creation is what we now call artificial intelligence.

Another interesting example can be taken from Jewish tradition: the legends about golems. A golem is a clay creature of human form. According to legend, golems could be created by rabbis to protect the Jewish people. In Prague, a Jewish folk legend arose about a golem created by the chief rabbi of Prague to perform various "black" jobs or simply difficult errands. Other golems are also known, created according to popular tradition by various authoritative rabbis, innovators of religious thought.

In this legend, folk fantasy justifies resistance to social evil through the violence of the golem. The idea of an intensified struggle against evil that transcends the boundaries of religious law is thereby legitimized; no wonder the golem, according to the legends, can exceed its powers and declare its own will, contrary to the will of its creator: the golem is able to do what would be criminal for a person under the law.

And finally, the novel Frankenstein or the Modern Prometheus by Mary Shelley. It can be called the ancestor of science fiction literature. It describes the life and work of Dr. Victor Frankenstein, who brought to life a being created from the body parts of dead people. However, seeing that it turned out to be ugly and monstrous, the doctor renounces his creation and leaves the city in which he lived. The nameless creature, hated by the people for its appearance, soon begins to haunt its creator.

And here again the question of the responsibility that man bears for his creatures is raised. At the beginning of the 19th century, the novel raised several questions about the pair of creator and creation. How ethical was it to create such a creation? Who is responsible for his actions? Questions closely related to ideas about artificial intelligence.

There are many similar examples that are somehow related to the creation of artificial intelligence. This seems to people like a holy grail that can solve many of their problems and free them from any manifestations of lack and inequality.

Three Laws of Robotics

Since Frankenstein, artificial intelligence has appeared in literature constantly. The idea of it has become fertile ground for writers and philosophers. One of them, Isaac Asimov, will forever be remembered by us. In 1942, in his short story Runaround, he described three laws that robots must follow:

  1. A robot cannot harm a person or by its inaction allow a person to be harmed.
  2. A robot must obey all orders given by a human, unless those orders are contrary to the First Law.
  3. The robot must take care of its safety to the extent that this does not contradict the First and Second Laws.

Before Asimov, stories about artificial intelligence and robots retained the spirit of Mary Shelley's Frankenstein. As Asimov himself said, this theme became one of the most popular in science fiction in the 1920s and 1930s, when many stories were written about robots that rebelled and destroyed people.

But not all science fiction writers followed this pattern, of course. In 1938, for example, Lester del Rey wrote the short story Helen O'Loy, a story about a robotic woman who fell in love with her creator and later became his ideal wife. Which, by the way, is very much like the story of Pygmalion. Pygmalion carved an ivory statue of a girl so beautiful that he himself fell in love with her. Touched by such love, Aphrodite revived the statue, which became the wife of Pygmalion.

In fact, the Three Laws emerged gradually. The two earliest robot stories, "Robbie" (1940) and "Reason" (1941), did not describe the laws explicitly. But they already implied that robots must have some internal limitations. In the next story, "Liar!" (1941), the First Law was stated for the first time. All three laws appeared in full only in "Runaround" (1942).

Despite the fact that today robotics is developing like never before, researchers from the field of artificial intelligence do not attach so much importance to the laws of robotics. After all, the laws, in fact, coincide with the basic principles of humanity. However, the more complex robots become, the more obvious is the need to create some basic principles and security measures for them.

There are even claims that the Laws are unlikely to be fully implemented in all robots, because there will always be those who want to use robots for destruction and murder. Science fiction writer Robert Sawyer summarized these arguments as follows:

“AI development is a business, and business, as we know, is not interested in developing fundamental safety measures, especially philosophical ones. Here are a few examples: the tobacco industry, the automotive industry, the nuclear industry. None of them was told at the outset that serious safety measures were necessary, all of them resisted externally imposed restrictions, and none of them adopted an absolute edict against harming people.”

First scientific steps

The history of the development of artificial intelligence as a science can be traced back to early philosophical works such as Discourse on the Method (René Descartes, 1637) and Human Nature (Thomas Hobbes, 1640). From the technical side, one can also mention the first mechanical digital calculating machine, which appeared even earlier, in 1623, thanks to the efforts of Wilhelm Schickard. However, I would like to turn to the 1950s, when Alan Turing, in his article "Computing Machinery and Intelligence", set out to determine whether a machine is capable of thinking.

Turing test

A person communicates with another person and with a computer program. The person can ask questions through a terminal; the task is to determine which interlocutor is the program and which is the person. The purpose of the program is to mislead the person and force them to make the wrong choice. According to the original terms, the exchange had to take place at fixed intervals, because computers at the time were slow, and otherwise this alone would have revealed which interlocutor was the program. Now this rule is needed because a computer can answer much faster than a human.

In fact, there are several versions of the formulation. In one of them, for example, the computer had to convince a panel of several people that they were communicating with another person. And in one of the very first versions, Turing suggests that the participant in the experiment play chess with a person and with a computer and then determine which of them is which. By the time the article was written, Alan Turing had been working on artificial intelligence for about ten years.

Several key events can be identified in the subsequent history of the Turing test. But the main one happened in 1966, when Joseph Weizenbaum created the Eliza program, which could parody a conversation with a psychotherapist. This allowed it to answer a question with a question in conditions of uncertainty. Here is an example of how a conversation with it might have gone:

H: Who is your favorite composer?

Eliza: Are you interested in this topic? (answers a question with a question)

H: My head hurts.

Eliza: Why do you say you have a headache? (condition of uncertainty)

H: My father hates me.

Eliza: Who else in the family hates you? (reacts to the keyword "father")

Eliza will forever remain an important milestone in the development of artificial intelligence. It was the first program that went beyond the Human/Machine communication paradigm and was able to create an imitation of Human/Human communication.

Dartmouth Seminar

Thanks to the explosive leap in the speed of computers, researchers began to believe that it would not be difficult to create artificial intelligence thanks to the computer. The fact is that at that time there were two areas of research: neurocybernetics and, a little later, “black box” cybernetics.

The basis of neurocybernetics was the principle that the only object capable of thinking is the human being, which means that a thinking device should model its structure. Scientists tried to create elements that would work like the neurons of the brain. Thanks to this, the first neural networks appeared in the late 1950s. They were created by the American scientists F. Rosenblatt and W. McCulloch, who tried to create a system that could simulate the work of the human eye. They called their device the perceptron. It could recognize handwritten letters. Today, the main area of application of neural networks is pattern recognition.

The cybernetics of the “black box” was based on the principle that it does not matter how the thinking machine is arranged inside; the main thing is that it reacts to a given set of input data in the same way as a person. Researchers working in this area began to create their own models. It turned out that none of the existing sciences (psychology, philosophy, neurophysiology, linguistics) could shed light on the algorithm of the brain.

The development of “black box” cybernetics began in 1956, when the Dartmouth Seminar was held, one of whose main organizers was John McCarthy. By that time, it had become clear that neither the theoretical knowledge nor the technical base was sufficient to implement the principles of neurocybernetics. But computer science researchers believed that through joint efforts they could develop a new approach to creating artificial intelligence. Through the efforts of some of the most prominent scientists in the field, a seminar was organized under the name Dartmouth Summer Research Project on Artificial Intelligence. It was attended by 10 people, many of whom would later be awarded the Turing Award, the most honored award in computer science. The following is the opening statement of the proposal:

“We propose a 2-month artificial intelligence study with 10 participants in the summer of 1956 at Dartmouth College, Hanover, New Hampshire.

The research is based on the assumption that any aspect of learning or any other property of intelligence can, in principle, be described so precisely that a machine can simulate it. We will try to understand how to teach machines to use natural languages, form abstractions and concepts, solve problems that are currently only possible for humans, and improve themselves.

We believe that significant progress on one or more of these problems is quite possible if a specially selected group of scientists will work on it during the summer.”

It was perhaps the most ambitious grant application in history. It was at this conference that a new field of science - “Artificial Intelligence” was officially established. And maybe nothing specific was discovered or developed, but thanks to this event, some of the most prominent researchers got to know each other and began to move in the same direction.

1956-1960: a time of great hope

In those days, it seemed that the solution was already very close and that, despite all the difficulties, humanity would soon be able to create a full-fledged artificial intelligence capable of bringing real benefit. There were already programs capable of doing something intellectual. The classic example is the Logic Theorist program.

In 1913, Whitehead and Bertrand Russell published their Principia Mathematica. Their goal was to show that with a minimal set of logical tools such as axioms and rules of inference, all mathematical truths could be recreated. This work is considered to be one of the most influential books ever written after Aristotle's Organon.

The Logic Theorist program was able to recreate most of Principia Mathematica on its own; moreover, some of its proofs were even more elegant than those of the authors.

Logic Theorist introduced several ideas that have become central to artificial intelligence research:

1. Reasoning as search. In essence, the program walked through a search tree. The root of the tree was the initial statements. Each branch arose by applying the rules of logic. At the top of the tree was the result: something that the program was able to prove. The path from the root statements to the target statement was called the proof.

2. Heuristics. The authors of the program realized that the tree would grow exponentially and would have to be pruned somehow, "by eye". The rules by which they got rid of unnecessary branches they called "heuristic", using the term introduced by George Pólya in his book How to Solve It. Heuristics became an important component of artificial intelligence research and remains an important method for solving complex combinatorial problems that suffer from the so-called "combinatorial explosion" (examples: the traveling salesman problem, the enumeration of chess moves); a toy sketch of such heuristic search is given after this list.

3. Processing of the "list" structure. To implement the program on a computer, the IPL (Information Processing Language) programming language was created, which used the same form of lists that John McCarthy later used to create the Lisp language (for which he received a Turing Award), a language still used by artificial intelligence researchers.
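
The following toy sketch only illustrates the first two ideas above: statements derived from an axiom form a search tree, and a heuristic is used to prune branches that look unpromising. It is not Logic Theorist itself; the rewrite "rules", the length-based heuristic and the pruning limit are all invented for the example.

```python
# Toy "reasoning as search" with heuristic pruning (an invented example, not Logic Theorist).
from heapq import heappush, heappop

# Invented rewrite rules standing in for rules of inference: (pattern, replacement).
RULES = [("A", "AB"), ("B", "BB"), ("AB", "C")]

def neighbours(statement):
    """Apply every rule at every position to generate derived statements."""
    for pat, rep in RULES:
        start = statement.find(pat)
        while start != -1:
            yield statement[:start] + rep + statement[start + len(pat):]
            start = statement.find(pat, start + 1)

def heuristic(statement, goal):
    """Crude estimate of remaining work: difference in length (the 'by eye' pruning rule)."""
    return abs(len(goal) - len(statement))

def prove(axiom, goal, limit=10000):
    """Best-first search from the axiom to the goal; returns the chain of statements (the 'proof')."""
    frontier = [(heuristic(axiom, goal), axiom, [axiom])]
    seen = {axiom}
    while frontier and limit > 0:
        limit -= 1
        _, statement, path = heappop(frontier)
        if statement == goal:
            return path
        for nxt in neighbours(statement):
            if nxt not in seen and len(nxt) <= len(goal) + 2:  # prune branches that have grown hopeless
                seen.add(nxt)
                heappush(frontier, (heuristic(nxt, goal), nxt, path + [nxt]))
    return None

print(prove("A", "CB"))  # ['A', 'AB', 'ABB', 'CB']
```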

1970s: Knowledge Based Systems

Knowledge-based systems are computer programs that use knowledge bases to solve complex problems. The systems themselves are further subdivided into several classes. What they have in common is that they all try to represent knowledge through tools such as ontologies and rules, rather than just program code. They always consist of at least one subsystem, and more often of two at once: a knowledge base and an inference engine. The knowledge base contains facts about the world. The inference engine contains logical rules, which are usually represented as IF-THEN rules. Knowledge-based systems were first created by artificial intelligence researchers.
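
As a rough sketch of how the two subsystems mentioned above fit together, the Python fragment below implements a minimal forward-chaining loop over IF-THEN rules. The facts and rules are invented for illustration and are not taken from any real expert system.

```python
# Minimal forward-chaining inference engine: a knowledge base of facts plus IF-THEN rules.
# The facts and rules below are invented for illustration only.

facts = {"gram_negative", "rod_shaped", "grows_aerobically"}

# Each rule: IF all conditions are already known THEN add the conclusion.
rules = [
    ({"gram_negative", "rod_shaped"}, "enteric_bacterium_suspected"),
    ({"enteric_bacterium_suspected", "grows_aerobically"}, "broad_spectrum_treatment_considered"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions hold, repeating until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
```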

The first working knowledge-based system was the Mycin program. This program was created to diagnose dangerous bacterial infections and to select the most appropriate treatment for the patient. The program operated with about 600 rules, asked the doctor a series of yes/no questions and produced a list of possible bacteria ranked by probability; it also provided a confidence interval and could recommend a course of treatment.

A Stanford study found that Mycin proposed an acceptable course of treatment in 69% of cases, which was better than the experts who were evaluated by the same criteria. This study is often cited to demonstrate how much medical experts disagree with one another when there is no accepted standard for the "correct" treatment.

Unfortunately, Mycin has never been tested in practice. Ethical and legal issues related to the use of such programs have been raised. It was not clear who should be held responsible if the program's recommendation turned out to be wrong. Another problem was the technological limitation. In those days there were no personal computers, one session took more than half an hour, and this was unacceptable for a busy doctor.

The main achievement of the program was that the world saw the power of knowledge-based systems, and the power of artificial intelligence in general. Later, in the 1980s, other programs began to appear using the same approach. To simplify their creation, the E-Mycin shell was created, which made it possible to create new expert systems with less effort. The unforeseen difficulty that the developers faced was extracting knowledge from the experience of experts, for obvious reasons.

It is important to mention that it was at this time that the Soviet scientist Dmitry Alexandrovich Pospelov began his work in the field of artificial intelligence.

Fight on the chessboard

Separately, one can consider the history of the confrontation between man and artificial intelligence on the chessboard. This story began long ago: in 1769, in Vienna, Wolfgang von Kempelen built a chess machine. It was a large wooden box with a chessboard on top, behind which sat a wax figure of a Turk in matching attire (because of this, the machine is often called simply "the Turk"). Before the start of a performance, the doors of the box were opened, and the audience could see the many details of some mechanism. Then the doors were closed, and the machine was wound up with a special key, like a clock. After that, whoever wished to play came up and made moves.

This machine was a huge success and managed to travel all over Europe, losing only a few games to strong chess players. In fact, inside the box sat a person who, with the help of a system of mirrors and mechanisms, could observe the state of the game and, with a system of levers, control the arm of the "Turk". And it was not the last machine inside which a living chess player was actually hiding. Such machines enjoyed success until the beginning of the twentieth century.

With the advent of computers, the possibility of creating an artificial chess player became tangible. Alan Turing developed the first program capable of playing chess, but due to technical limitations it took about half an hour to make one move. There is even a record of a game the program played against Alick Glennie, Turing's colleague, which the program lost.

The idea of creating such programs for computers caused a stir in the scientific world. Many questions were asked. An excellent example is the article "Digital Computers Applied to Games". It raises six questions:

1. Is it possible to create a machine that could follow the rules of chess, could give a random correct move, or check if the move is correct?

2. Is it possible to create a machine capable of solving chess problems? For example, say how to checkmate in three moves.

3. Is it possible to create a machine that would play a good game? Which, for example, faced with a certain usual arrangement of pieces, could, after two or three minutes of calculations, give a good correct move.

4. Is it possible to create a machine that, by playing chess, learns and improves its game over and over again?

This question brings up two more that are likely already on the reader's tongue:

5. Is it possible to create a machine capable of answering questions in such a way that it is impossible to distinguish its answers from those of a person?

6. Is it possible to create a machine that would feel, as you or I do?

In the article, the main emphasis was on question number 3. The answer to questions 1 and 2 is strictly positive. The answer to question 3 is related to the use of more complex algorithms. Regarding questions 4 and 5, the author says that he does not see convincing arguments refuting such a possibility. And to question 6: “I will never even know if you feel everything the same way as I do.”

Even if such studies in themselves were perhaps of little practical interest, they were very interesting theoretically, and there was hope that solving these problems would become an impetus for solving other problems of a similar nature and of greater importance.

The ability to play chess has long been counted among the standard test tasks that demonstrate the ability of artificial intelligence to cope with a task not from the standpoint of "brute force", which in this context means total enumeration of possible moves, but with the help of "something of the kind", as Mikhail Botvinnik, one of the pioneers in the development of chess programs, once put it. At one time he managed to secure official funding for work on the project of an "artificial chess master", the PIONEER software package, which was created under his leadership at the All-Union Research Institute of Electric Power Industry. Botvinnik repeatedly reported to the presidium of the USSR Academy of Sciences on the possibilities of applying the basic principles of PIONEER to problems of optimizing management in the national economy.

The basic idea on which the ex-world champion based his development he himself formulated in an interview in 1975: "For more than a dozen years I have been working on the problem of recognizing the thinking of a chess master: how does he find a move without a complete enumeration? And now it can be argued that this method has basically been discovered... There are three main stages in creating the program: the machine must be able to find the trajectory of a piece's movement, then it must 'learn' to form the playing zone, the zone of local battle on the chessboard, and be able to form a set of these zones. The first part of the work was done long ago. The zone-formation subprogram has now been completed. Debugging will begin in the coming days. If it is successful, there will be full confidence that the third stage will also succeed and the machine will start playing."

The PIONEER project remained unfinished. Botvinnik worked on it from 1958 to 1995, and during this time he managed to build an algorithmic model of a chess game based on searching a "tree of variations" and successively achieving "inexact goals", such as gaining material.

In 1974, the Soviet computer program Kaissa won the First World Computer Chess Championship, defeating other chess machines in all four games, playing, according to chess players, at the level of the third category. Soviet scientists introduced many innovations for chess machines: the use of an opening book, which avoided the calculation of moves at the very beginning of the game, as well as a special data structure: a bitboard, which is still used in chess machines.
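
For readers unfamiliar with the term, a bitboard packs the whole board into a single 64-bit integer, one bit per square, so that move generation reduces to a few bit operations. The sketch below is only a schematic illustration of the idea (white pawns pushed one square forward on an otherwise empty board); it is not code from Kaissa.

```python
# Schematic bitboard example: one 64-bit integer, one bit per square.
# Bit 0 = a1, bit 7 = h1, bit 8 = a2, ..., bit 63 = h8.

WHITE_PAWNS_START = 0x000000000000FF00  # all eight white pawns on rank 2

def single_pawn_push(pawns, occupied):
    """Shift every pawn one rank up and keep only the destination squares that are empty."""
    return (pawns << 8) & ~occupied & 0xFFFFFFFFFFFFFFFF

def print_board(bitboard):
    """Print the bitboard as an 8x8 grid, rank 8 at the top."""
    for rank in range(7, -1, -1):
        print("".join("1" if bitboard >> (rank * 8 + file) & 1 else "." for file in range(8)))

occupied = WHITE_PAWNS_START  # assume only the pawns are on the board
print_board(single_pawn_push(WHITE_PAWNS_START, occupied))
```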

The question arose whether a program could beat a human. In 1968, chess player David Levy made a £1,250 bet that no machine could beat him within the next 10 years. In 1977, he played a game against Kaissa and won, after which the match was not continued. In 1978, he won a game against Chess 4.7, the best chess program at the time, after which he admitted that not much time remained before programs would be able to defeat titled chess players.

Particular attention should be paid to games between a human and a computer. The very first was the previously mentioned game between Alick Glennie and Turing's program. The next step was the Los Alamos program, created in 1956. It played on a 6x6 board (without bishops). The test was carried out in two stages. The first stage was a game against a strong chess player, in which, after 10 hours of play, the human won. The second stage was a game against a young woman who had been taught to play chess shortly before the test. The result was a victory for the program on the 23rd move, which was an undoubted achievement at that time.

It wasn't until 1989 that Deep Thought managed to beat an international grandmaster: Bent Larsen. In the same year, a match of the same program took place with Garry Kasparov, which was easily won by Kasparov. After the match, he stated:

“If a computer can beat the best of the best in chess, this will mean that the computer is able to compose the best music and write the best books. I cannot believe it. If a computer with a rating of 2800, that is, equal to mine, is created, I myself will consider it my duty to challenge it to a match in order to protect the human race.”

In 1996, the Deep Blue computer lost a match to Kasparov but, for the first time in history, won a single game against a reigning world champion. And only in 1997, for the first time in history, did a computer win a match against a world champion, with a score of 3.5:2.5.

After the matches with Kasparov, many FIDE leaders repeatedly expressed the idea that holding mixed matches (a human against a computer program) is inappropriate for many reasons. Supporting this position, Garry Kasparov explained: “Yes, the computer does not know what winning or losing is. But what is it like for me? How will I feel about the game after a sleepless night, after blunders in the game? It is all emotions. They place a huge burden on the human player, and the most unpleasant thing is that you understand that your opponent is not subject to fatigue or any other emotions.”

And if in chess the advantage is now on the side of computers, in games such as Go the computer is suitable only for playing against beginners or intermediate-level players. The reason is that in Go it is difficult to evaluate the state of the board: a single move can turn a seemingly lost position into a winning one. In addition, complete enumeration is practically impossible: without a heuristic approach, a complete enumeration of just the first four moves (two on each side) would require evaluating almost 17 billion possible scenarios.
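A back-of-the-envelope check of that figure (a rough sketch, assuming a full 19x19 board, treating every empty intersection as a legal move and ignoring captures and ko):

```python
# Rough branching-factor estimate for the first four moves of Go on a 19x19 board.
# Assumes every empty intersection is a legal move; captures and ko are ignored.
positions = 19 * 19          # 361 intersections
scenarios = 1
for ply in range(4):         # two moves by each side
    scenarios *= (positions - ply)
print(scenarios)             # 16,702,719,120 -> roughly 17 billion scenarios
```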

The game of poker is of similar interest. The difficulty here is that the state is not fully observable, unlike in Go and chess, where both players see the entire board. In poker, an opponent may fold without showing his cards, which complicates the analysis.

In any case, mind games are as important to AI developers as fruit flies are to geneticists. This is a convenient field for testing, a field for research, both theoretical and practical. This is also an indicator of the development of the science of artificial intelligence.

Use of artificial intelligence for commercial purposes

In the 80s, inspired by the advances in artificial intelligence, many companies decided to try new technologies. However, only the largest companies could afford such experimental steps.

One of the earliest companies to adopt artificial intelligence technology was DEC (Digital Equipment Corp). It implemented the XSEL expert system, which helped configure equipment and select alternatives for clients. As a result, a three-hour task was reduced to 15 minutes, and the error rate fell from 30% to 1%. According to company representatives, the XSEL system brought in $70 million.

American Express used an expert system to decide whether or not to issue credit to a client. This system was one-third more likely to offer credit than human experts were. It is said to have earned $27 million a year.

The payoff provided by intelligent systems has often been overwhelming. It was like going from walking to driving, or from driving to flying.

However, not everything was so simple with the integration of artificial intelligence. Firstly, not every task could be formalized to the level at which artificial intelligence could handle it. Secondly, the development itself was very expensive. Thirdly, the systems were new, people were not used to using computers. Some were skeptical, and some were even hostile.

An interesting example is DuPont, which spent $10,000 and one month to build a small auxiliary system. It ran on a personal computer and brought in an additional $100,000 in profit.

Not all companies implemented artificial intelligence technologies successfully. This showed that using such technologies requires a large theoretical base and considerable resources: intellectual, material, and time. But when successful, the costs paid off handsomely.

Paradigm shift

In the mid-1980s, humanity saw that computers and artificial intelligence could cope with difficult tasks as well as humans and, in many respects, even better. At hand were examples of successful commercial use, advances in game playing, and advances in decision support systems. People came to believe that at some point computers and artificial intelligence would be able to cope with everyday problems better than humans. This belief can be traced back a long way, and more specifically to the creation of the Three Laws of Robotics, but at some point it moved to a new level. As proof, one can cite one more law of robotics, which Isaac Asimov himself preferred to call the "zeroth" law in 1986:

“0. A robot may not harm a human being, unless it can prove that this will ultimately benefit all of humanity.”

This is a huge shift in the vision of the place of artificial intelligence in human life. Initially, machines were assigned the place of a will-less servant, the beast of burden of the new age. But having seen their prospects and possibilities, people began to ask whether artificial intelligence could manage people's lives better than people themselves. Tireless, fair, unselfish, not subject to envy or desire, it could perhaps arrange people's lives in a different way. The idea itself is not new; it appeared in 1952 in Kurt Vonnegut's novel Player Piano (also published as Utopia 14). But then it was fantasy; now it has become a possible prospect.

Data mining

The history of this trend, data mining, began in 1989, after a seminar by Gregory Piatetsky-Shapiro. He asked whether useful knowledge could be extracted from a long sequence of seemingly unremarkable data, for example an archive of database queries. If, by examining it, we could identify certain patterns, this would speed up the database. Example: every morning from 7:50 to 8:10 a resource-intensive query is issued to build a report for the previous day; in that case the report can be generated in advance, in between other queries, so the database is loaded more evenly. But imagine that this query is issued by an employee only after he has entered new information. Then the rule should change: as soon as that specific employee has entered the information, preparation of the report can begin in the background. This example is extremely simple, but it shows both the benefits of data mining and the difficulties associated with it.
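A minimal sketch of this kind of pattern search over a query log (the log format, action names, and the confidence threshold are assumptions made purely for illustration):

```python
from collections import Counter

# Toy query log: (user, action) pairs in chronological order.
# We look for a simple sequential pattern: how often does a "heavy_report"
# query by a user immediately follow that user's "data_entry" action?
log = [
    ("alice", "data_entry"), ("alice", "heavy_report"),
    ("bob", "select"), ("alice", "data_entry"), ("alice", "heavy_report"),
    ("bob", "data_entry"), ("bob", "select"),
]

pairs = Counter()
last_action = {}
for user, action in log:
    if action == "heavy_report" and last_action.get(user) == "data_entry":
        pairs[user] += 1
    last_action[user] = action

entries = Counter(u for u, a in log if a == "data_entry")
for user, n in pairs.items():
    confidence = n / entries[user]
    if confidence > 0.5:  # the pattern holds often enough to act on
        print(f"pre-generate the report for {user} after each data entry "
              f"(confidence {confidence:.0%})")
```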

The term data mining has no official Russian translation. It can be rendered roughly as "extraction of data", where "mining" is akin to what is carried out in mines: given a lot of raw material, one can find a valuable object. In fact, a similar term existed back in the 1960s: data fishing, or data dredging. It was used by statisticians to denote the acknowledged bad practice of finding patterns in the absence of a priori hypotheses. Strictly speaking, database mining would have been the more accurate term, but that name turned out to be a trademark. Gregory Piatetsky-Shapiro himself proposed the term "knowledge discovery in databases", but in the business environment and in the press the name "data mining" took hold.

The idea that, given a database of known facts, one can predict the existence of new facts appeared long ago and developed continually in step with the state of the art: Bayes' theorem in the 1700s, regression analysis in the 1800s, cluster analysis in the 1930s, neural networks in the 1940s, genetic algorithms in the 1950s, decision trees in the 1960s. The term data mining united them not by how they work, but by their goal: given a certain set of known data, predict what data should come next.
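To illustrate that shared goal with the simplest of these tools, here is a least-squares regression sketch on made-up numbers: given known points, predict the next value.

```python
# Fit y = a*x + b to known points and predict the next value.
xs = [1, 2, 3, 4, 5]          # e.g. day number
ys = [12, 15, 21, 24, 30]     # e.g. items sold that day

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(f"predicted sales for day 6: {a * 6 + b:.1f}")
```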

The goal of data mining is to find "hidden knowledge". Let us take a closer look at what this means. First, it must be new knowledge, for example that on weekends the number of goods sold in a supermarket increases. Second, the knowledge must be non-trivial, not reducible to finding the mathematical expectation and variance. Third, the knowledge must be useful. Fourth, it must be easy to interpret.

For a long time, people believed that computers could predict everything: stock prices, server load, the amount of resources needed. People believed that there was a universal algorithm that, like a black box, could absorb some large amount of data and start making predictions. However, it turned out that it is often very difficult to extract information from a dump of data, and in each specific case the algorithm has to be tuned, unless the task reduces to some kind of simple regression.

Despite all the limitations, tools that facilitate data mining improve from year to year. Since 2007, Rexer Analytics has published annually the results of a survey of experts about existing tools. The 2007 survey consisted of 27 questions and involved 314 participants from 35 countries. By 2013, the survey included 68 questions, and 1,259 specialists from 75 countries took part in it.

Data mining is still considered a promising direction, and once again its use raises new ethical questions. A simple example is the use of data mining tools to analyze and predict crime. Such studies have been carried out by various universities since 2006. Human rights activists object, arguing that knowledge obtained in this way can lead to searches based not on facts but on assumptions.

Recommender systems are by far the most tangible result of the development of artificial intelligence; we encounter them in any popular online store. The task of a recommender system is to determine, from some observable features (for example, the list of products a specific user has viewed), which products will be most interesting to that user.

The task of producing recommendations, like data mining, reduces to a machine learning task. The history of recommender systems is generally considered to have begun with the Tapestry system, introduced by David Goldberg at the Xerox Palo Alto Research Center in 1992. The purpose of the system was to filter corporate mail, and it became a kind of progenitor of recommender systems.

There are currently two main types of recommender systems. David Goldberg proposed a system based on collaborative filtering: to make a recommendation, the system looks at how other users similar to the target user have rated a certain object. On this basis the system can estimate how highly the target user would rate a particular object (a product, a movie).
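A minimal sketch of user-based collaborative filtering (toy ratings and cosine similarity; all user names, items, and numbers are made up):

```python
from math import sqrt

# ratings[user][item] on a 1-5 scale (toy data)
ratings = {
    "ann":  {"film_a": 5, "film_b": 3, "film_c": 4},
    "bob":  {"film_a": 4, "film_b": 2, "film_c": 5, "film_d": 4},
    "carl": {"film_a": 1, "film_b": 5, "film_d": 2},
}

def similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    common = ratings[u].keys() & ratings[v].keys()
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    norm_u = sqrt(sum(ratings[u][i] ** 2 for i in common))
    norm_v = sqrt(sum(ratings[v][i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    scores = [(similarity(user, other), r[item])
              for other, r in ratings.items()
              if other != user and item in r]
    total = sum(s for s, _ in scores)
    return sum(s * r for s, r in scores) / total if total else None

print(predict("ann", "film_d"))  # estimate ann's rating for an unseen film
```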

Content-based filters are another kind of recommender system. A prerequisite for a content filter is a database storing feature descriptions (metrics) of all objects. After several user actions, the system can determine what kind of objects the user likes and, using the stored features, select new objects that are in some way similar to those already viewed. The disadvantage of such a system is that a large database of features must first be built, and constructing the features themselves can be a challenge.
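A correspondingly minimal content-based sketch (hand-assigned genre features; again, everything here is made up for illustration):

```python
from math import sqrt

# Hand-crafted feature vectors (genre weights) for each film: action, comedy, drama.
features = {
    "film_a": [0.9, 0.1, 0.0],
    "film_b": [0.0, 0.8, 0.2],
    "film_c": [0.7, 0.0, 0.3],
    "film_d": [0.8, 0.1, 0.1],
}

viewed = ["film_a", "film_c"]    # what the user has already watched

# The user profile is the average of the viewed items' features.
profile = [sum(features[f][k] for f in viewed) / len(viewed)
           for k in range(3)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

candidates = [f for f in features if f not in viewed]
best = max(candidates, key=lambda f: cosine(profile, features[f]))
print(best)   # the unseen film closest to the user's profile (film_d here)
```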

Again, the question arises whether the use of such systems is a violation of privacy. There are two approaches here. The first is explicit data collection, in which data is gathered exclusively within the framework in which the recommender system operates. For example, the recommendation system of an online store will offer to rate a product, sort products in order of interest, or create a list of favorite products. In this case everything is simple: the system receives no information about the user's activity outside its own boundaries; it knows only what the user himself has provided. The second type is implicit data collection. It includes techniques such as using information from other, similar resources, recording user behavior, and inspecting the contents of the user's computer. It is this type of data gathering for recommender systems that causes concern.

However, in this area the use of private information causes less and less controversy. For example, in 2013, at the YaC (Yet another Conference) event held by Yandex, the creation of the Atom system was announced. Its purpose is to provide website owners with the information they may need to create recommendations; that information is first collected by Yandex services, that is, by implicit data collection. Example: a person uses a search engine to find the most interesting places in Paris. Some time later, the person visits the site of a travel agency. Without Atom, the agency would simply show the person its most popular tours. With Atom, the site could show the user a tour to Paris first of all and offer a personal discount on that particular tour to set it apart from the others. The confidential information thus never leaves the Atom service, the site knows what to advise the client, and the client is happy to have quickly found what he was looking for.

To date, recommender systems are the clearest example of what artificial intelligence technologies can achieve: a single such system can do work that even an army of analysts could not handle.

Conclusion

Everything has a beginning, to speak in the phrase of Sancho Panza, and that beginning must be linked to something that precedes it. The Hindus gave the world an elephant to support it, but they had to place the elephant on a tortoise. It should be noted that invention consists in creating not out of emptiness but out of chaos: first of all, one must take care of the material...

Mary Shelley, Frankenstein

The development of artificial intelligence as a science and a technology for creating machines began a little more than half a century ago. The achievements made so far are stunning, and they surround people almost everywhere. Artificial intelligence technologies have a peculiarity: a person considers them something intelligent only at first; then he gets used to them and they begin to seem natural.

It is important to remember that the science of artificial intelligence is closely related to mathematics, combinatorics, statistics and other sciences. The influence runs both ways: the development of artificial intelligence also allows one to take a fresh look at what has already been created, as was the case with the Logic Theorist program.

An important role in the development of artificial intelligence technologies is played by the development of computers. It is hard to imagine a serious data mining program that could make do with 100 kilobytes of RAM. Computers allowed the technologies to develop extensively, while theoretical research served as the prerequisite for intensive development. One can say that the development of the science of artificial intelligence was a consequence of the development of computers.

The history of the development of artificial intelligence is not over, it is being written right now. Technologies are constantly being improved, new algorithms are being created, and new areas of application are opening up. Time constantly opens up new opportunities and new questions for researchers.

This abstract does not focus on the countries in which certain studies were conducted. The whole world has contributed bit by bit to the area that we now call the science of artificial intelligence.

Bibliography

Myths of the Peoples of the World. M., 1991-92. In 2 vols. Vol. 2, p. 491.

Idel, Moshe (1990). Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. Albany, New York: State University of New York Press. ISBN 0-7914-0160-X. p. 296.

Asimov, Isaac. Essay No. 6. The Laws of Robotics // Robot Dreams. M.: Eksmo, 2004. pp. 781-784. ISBN 5-699-00842-X.

See Nonnus. Dionysiaca XXXII 212; Clement. Protrepticus 57, 3 (with a reference to Philostephanus).

Robert J. Sawyer. On Asimov's Three Laws of Robotics (1991).

Turing, Alan (October 1950). "Computing Machinery and Intelligence". Mind LIX (236): 433-460.

McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

Crevier 1993, pp. 46-48.

Smith, Reid (May 8, 1985). "Knowledge-Based Systems: Concepts, Techniques, Examples".

Turing, Alan. "Digital Computers Applied to Games", in Faster than Thought, ed. B.V. Bowden. London: Pitman Publishing, 1953.

Kaissa - World Champion. Nauka i Zhizn (Science and Life), January 1975, pp. 118-124.

Gik, E. Grandmaster "Deep Thought" // Science and Life. M., 1990. No. 5. pp. 129-130.

F. Hayes-Roth, N. Jacobstein. The State of Knowledge-Based Systems. Communications of the ACM, March 1994, vol. 37, no. 3, pp. 27-39.

Karl Rexer, Paul Gearan, & Heather Allen (2007). 2007 Data Miner Survey Summary, presented at SPSS Directions Conference, Oct. 2007, and Oracle BIWA Summit, Oct. 2007.

Karl Rexer, Heather Allen, & Paul Gearan (2013). 2013 Data Miner Survey Summary, presented at Predictive Analytics World, Oct. 2013.

Shyam Varan Nath (2006). "Crime Pattern Detection Using Data Mining", WI-IATW '06: Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, pp. 41-44.

David Goldberg, David Nichols, Brian M. Oki and Douglas Terry (1992). "Using Collaborative Filtering to Weave an Information Tapestry", Communications of the ACM, Dec. 1992, vol. 35, no. 12, pp. 61-71.

Optimists believe that knowledge can be stored outside the brain. Their arguments are:
  1. cognition as a process lends itself to formalization;
  2. intelligence can be measured (the intelligence quotient IQ, a term introduced into scientific use by W. Stern in 1911 on the basis of A. Binet's 1903 calculation method; memory capacity; the reactivity of the psyche, etc.);
  3. information measures (bit, byte, etc.) are applicable to knowledge.

Pessimists, on the contrary, believe that artificial intelligence is not capable of storing knowledge, since it is merely an imitation of thinking. They hold that the human intellect is unique, that creativity cannot be formalized, that the world is whole and cannot be divided into information discretes, and that the imagery of human thinking is far richer than the logical thinking of machines.

Who is right in this dispute, time will tell. We only note that the memory of a machine stores what is written into it, and this can be not only knowledge as the highest form of information, but also simply data, which may contain knowledge, disinformation and information noise (see "The history of the development of informatics. The development of ideas about information. On the way to the information society"). In order to extract knowledge from data, a machine, like a person, must set a goal ("what do I want to know?") and, according to this goal, select valuable information (after all, one stores what is of value, not everything indiscriminately). Whether artificial intelligence can formulate acceptable goals and carry out its own selection of information valuable for those goals is another problem in the theory and practice of artificial intelligence. For now, this work is done by a person: in expert systems, in robot programming, in process control systems, and so on. Free machines (see above) will have to do this work themselves. The problem may be aggravated by the fact that the networks from which machines "download" knowledge may contain a great deal of "garbage" and destructive viruses.

4.4. The history of the development of artificial intelligence ideas and their implementation

The ideas of creating artificial intelligence first arose in the 17th century (B. Spinoza, R. Descartes, G.W. Leibniz and others), and what was meant was artificial intelligence, not the mechanical dolls already known at that time. The founders of the theory of artificial intelligence were, of course, optimists: they believed in the feasibility of their idea.

According to the psychological law of conservation (“the sum of pleasures and pains is equal to zero”), pessimists immediately appeared (F. Bacon, J. Locke, etc.), who laughed at the optimists: “Oh, stop it!”. But any idea in science, once having arisen, continues to live, despite the obstacles.

The idea of artificial intelligence began to take on real features only in the second half of the 20th century, especially with the invention of computers and "intelligent robots". Implementing the idea also required applied developments in mathematical logic, programming, cognitive psychology, mathematical linguistics, neurophysiology and other disciplines developing within the cybernetic view of organisms and machines in terms of their control and communication functions. The name "artificial intelligence" itself became widely used in the 1960s, and in 1969 the First World Conference on Artificial Intelligence was held (Washington, USA).

At first, artificial intelligence developed in the so-called analytical (functional) direction, in which the machine was assigned particular intellectual tasks of a creative nature (games, translation from one language to another, painting, and so on).

Later the synthetic (model) direction arose, in which attempts were made to model the creative activity of the brain in a general sense, without narrowing down to particular tasks. Of course, this direction proved more difficult to implement than the functional one. The object of study of the model direction was the metaprocedures of human thinking. The metaprocedures of creativity are not the procedures (functions) of intellectual activity themselves, but the ways in which such procedures are created, the ways of learning a new kind of intellectual activity. In these ways, probably, lies hidden what can be called intellect. The presence of metaprocedures of thinking distinguishes true intelligence from apparent intelligence, so the implementation of the metaprocedures of creativity by machine means became almost the main task of the model direction. Not what but how: how is something invented, how is a creative problem solved, how is something new learned (self-learned)? These are the questions inherent in implementing models of human creative thinking.

Within the framework of the model direction, mainly two models of intelligence have been developed. Chronologically the first is the labyrinth model, which implements a targeted search in a maze of alternative paths to the solution of a problem, with an assessment of success after each step or from the standpoint of solving the problem as a whole. In other words, the labyrinth model reduces to the enumeration of possible options (by analogy with enumerating the ways out of a labyrinth). Success (or failure) in choosing one or another option can be assessed at each step, that is, immediately after the choice, without foreseeing the final result of solving the problem; or, conversely, the choice at each step can be made based on the final result. Take chess, for example. One can evaluate the result of each move by the immediate gain or loss after that move (winning or losing pieces, gaining a positional advantage, etc.) without thinking about the end of the game. With this approach, it is assumed that success at each move will lead to the success of the entire game, to victory. But this is not at all necessary: one can lure the opponent's king into a mating trap by sacrificing pieces over a series of moves and giving up apparent positional advantage. In that case, partial successes on each move mean nothing compared to the last winning move, the announcement of checkmate.
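A toy sketch of those two evaluation strategies (the game tree, moves, and scores below are invented purely for illustration):

```python
# Tiny "labyrinth" of choices: at each node we either grab an immediate gain
# or move toward a larger final payoff. Greedy per-step evaluation vs.
# full enumeration to the final result.
tree = {
    "start": [("grab_pawn", 1, "a"), ("sacrifice", -3, "b")],
    "a":     [("quiet_move", 0, "end")],
    "b":     [("mate", 10, "end")],
    "end":   [],
}

def greedy(node):
    """Choose the move with the best immediate gain at every step."""
    total = 0
    while tree[node]:
        move, gain, nxt = max(tree[node], key=lambda m: m[1])
        total, node = total + gain, nxt
    return total

def best_final(node):
    """Enumerate all paths and return the best total outcome."""
    if not tree[node]:
        return 0
    return max(gain + best_final(nxt) for _, gain, nxt in tree[node])

print(greedy("start"), best_final("start"))  # 1 vs 7: the sacrifice wins overall
```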

The first approach to labyrinth modeling was developed in heuristic programming, the second in dynamic programming. The dynamic approach appears to be more effective than the heuristic one where chess is concerned. In any case, strong chess players, without knowing it, used precisely the dynamic approach against chess programs operating in heuristic mode, and with their natural intelligence they defeated the labyrinth artificial intelligence. But that was in the 1960s and 1970s. Since then, chess programs have improved so much (including through the introduction of the dynamic approach) that they now successfully confront world champions.

Labyrinth models were widely used not only in creating chess programs, but also for programming other games, for proving mathematical theorems, and in other applications.

After the labyrinth models of artificial intelligence came associative models. An association (from the Latin associatio, connection) is a link between psychological representations formed through previous experience, due to which one representation, having appeared in the mind, evokes another (by the principle of similarity, contiguity or contrast). For example, the Nobel laureate Academician I.P. Pavlov, conducting his well-known experiments with dogs, noticed that if a dog saw a lamp turned on at the same time as it was fed, then as soon as the lamp was turned on, the dog's gastric juice began to be secreted, even though no food was offered. At the heart of this conditioned reflex lies an association based on the principle of contiguity. Association by similarity is described in A.P. Chekhov's story "A Horsey Name". Association by contrast can be described by the logical scheme: if "not A", then "A". For example, having seen a white cat during the day, I immediately recalled the black cat that had crossed the road in the morning.

In associative models, it is assumed that the solution of a new, unknown problem is somehow based on already solved problems similar to the new one, so the method of solving a new problem rests on the associative principle of similarity. Its implementation uses associative search in memory, associative logical reasoning, and the transfer of problem-solving methods the machine has already mastered to a new situation. Modern computers and intelligent robots have associative memory. Associative models are used in classification, pattern recognition and learning tasks, which have already become routine tasks of information systems and technologies. However, a theory of associative models was absent until the 1990s and is only now being created.
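A minimal sketch of this associative "solve the new case like the most similar known cases" principle, in the form of a nearest-neighbour classifier (the feature vectors and class labels are invented for illustration):

```python
# Classify a new example by the label of its most similar known example
# (1-nearest-neighbour, Euclidean distance).
known = [
    ([1.0, 1.2], "class_a"),
    ([0.9, 0.8], "class_a"),
    ([3.1, 2.9], "class_b"),
    ([2.8, 3.3], "class_b"),
]

def distance(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def classify(x):
    _, label = min(((distance(x, p), label) for p, label in known),
                   key=lambda t: t[0])
    return label

print(classify([1.1, 0.9]))  # -> class_a, the association "by similarity"
```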

Let us briefly list the main creators of artificial intelligence.

N. Wiener (mathematician), W.R. Ashby (biologist) - the founders of cybernetics, who first stated that machines can be smarter than people and who gave the initial impetus to the development of the theory of artificial intelligence.

W. McCulloch, W. Pitts (physiologists) - in 1943 proposed a formal model of the neuron; founders of neurocybernetics and of the initial concept of the neural network.

A. Turing (mathematician) - in 1937 invented the universal algorithmic "Turing machine"; proposed the "Turing test" for deciding whether a machine is intelligent through a comparative dialogue with it and with a "reasonable person".

J. von Neumann (mathematician) - one of the founders of game theory, of the theory of self-reproducing automata, and of the architecture of the first generations of computers.

M. Somalvico (cyberneticist), I. Asimov (biochemist, writer) - the founders of intelligent robotics.

H. Simon, W. Reitman (psychologists) - authors and developers of the first labyrinth intellectual models built on the principles of heuristic programming.

R. Bellman (mathematician), S.Yu. Maslov (logician) - authors of the dynamic approach to labyrinth intellectual models (dynamic programming, the inverse method of proof search).

F. Rosenblatt (physiologist), M.M. Bongard (physicist) - pioneers of the pattern recognition problem; developers of recognition and classification devices and models.

L. Zadeh, A.N. Kolmogorov, A.N. Tikhonov, M.A. Girshick (mathematicians) - authors of mathematical methods for solving poorly formalized problems and for decision making under uncertainty.

N. Chomsky (mathematician, philologist) - the founder of mathematical linguistics.

A.R. Luria (psychologist) - the founder of neuropsychology, which studies the brain mechanisms underlying cognitive activity and other intellectual functions of the brain.

C.E. Shannon (communications engineer), R.Kh. Zaripov (mathematician) - authors of the theory and models of the machine synthesis of music.

The above list is far from complete. In the field of artificial intelligence, not only individual specialists have worked and are working, but also teams, laboratories, and institutes. The main problems they solve are:

  1. representation of knowledge;
  2. reasoning modeling;
  3. intelligent interface "man-machine", "machine-machine";
  4. planning expedient activities;
  5. training and self-training of intelligent systems;
  6. machine creativity;
  7. intelligent robots.

Basic concepts of artificial intelligence.

It is rather difficult to give a precise definition of human intelligence, because intelligence is a fusion of many skills in the processing and representation of information. Intelligence (from the Latin intellectus: mind, reason, understanding) is the thinking ability of a human being. With a fair degree of certainty, intelligence can be called the ability of the brain to solve (intellectual) tasks by acquiring, remembering and purposefully transforming knowledge in the process of learning from experience and adapting to a variety of circumstances.

Artificial intelligence (AI) is a set of scientific disciplines that study methods for solving problems of an intellectual (creative) nature using computers.
Artificial intelligence is one of the areas of informatics whose purpose is to develop hardware and software tools that allow a non-programmer user to set and solve tasks traditionally considered intellectual, communicating with a computer in a limited subset of natural language.

Artificial intelligence systems (AIS) are computer-based systems that simulate the solution of complex intellectual tasks by a person.
Knowledge, in general, is a practice-tested result of the cognition of reality, its correct reflection in human thinking: the possession of experience and understanding that are correct both subjectively and objectively, on the basis of which judgments and conclusions can be built that seem reliable enough to be regarded as knowledge. In the context of IT, therefore, the term knowledge means the information involved in the implementation of intellectual functions: usually deviations, trends, patterns and dependencies found in the data. In other words, intelligent systems are at the same time knowledge-processing systems.

Artificial intelligence programs include:



1. game programs (stochastic, computer games);

2. natural language programs - machine translation, text generation, speech processing;

3. recognition programs - recognition of handwriting, images, maps;

4. programs for the creation and analysis of graphics, painting, musical works.

The following areas of artificial intelligence are distinguished:

1. expert systems;

2. neural networks;

3. natural language systems;

4. evolutionary methods and genetic algorithms;

5. fuzzy sets;

6. knowledge extraction systems.

History of the development of artificial intelligence

There are three main stages in the development of AIS:

− 1960s-70s. These are the years of realizing the possibilities of artificial intelligence and of the emergence of a social demand for support of decision-making and management processes. Science responded to this demand with the appearance of the first perceptrons (neural networks) and the development of methods of heuristic programming and of situational control of large systems (developed in the USSR).

− 1970s-80s. At this stage came an awareness of the importance of knowledge for forming adequate decisions; expert systems appeared, in which the apparatus of fuzzy mathematics is actively used, and models of plausible inference and plausible reasoning were developed.

− 1980s-90s. Integrated (hybrid) models of knowledge representation appeared, combining the search, computational, logical and figurative kinds of intelligence.

The term artificial intelligence was proposed in 1956 at a summer seminar at Dartmouth College (USA).

The idea of creating an artificial likeness of the human mind to solve complex problems and simulate the thinking ability has been in the air since ancient times. It was first expressed by R. Llull (Raymond Lully), who as early as the 14th century tried to create a machine for solving various problems based on a general classification of concepts.

The development of artificial intelligence as a scientific direction became possible only after the creation of computers. This happened in the 40s of the XX century. At the same time, N. Wiener created his fundamental works on a new science - cybernetics.

In 1954, the seminar "Automata and thinking" began its work at Moscow State University under the guidance of Professor A. A. Lyapunov. Leading physiologists, linguists, psychologists and mathematicians took part in this seminar. It is generally accepted that it was at this time that artificial intelligence was born in Russia.

A significant breakthrough in the practical applications of artificial intelligence occurred in the mid-1970s, when the search for a universal thinking algorithm was replaced by the idea of modeling the specific knowledge of experts.

The first commercial knowledge-based systems, or expert systems, appeared in the United States, and with them a new approach to solving artificial intelligence problems: knowledge representation. MYCIN and DENDRAL, the classic expert systems for medicine and chemistry, were created.

In 1980-1990, active research was carried out in the field of knowledge representation, and knowledge representation languages and expert systems were developed. From the mid-1980s, artificial intelligence was commercialized: annual investments grew, and industrial expert systems were created.

Expert systems have not become widespread in practical medicine; they are mainly used as an integral part of medical instrument-computer systems. This is primarily because in real life the number of possible situations, and hence of diagnostic rules, turned out to be so large that the system either begins to require a large amount of additional information about the patient or the accuracy of diagnosis drops sharply.

Conventionally, seven stages can be distinguished in the development of artificial intelligence, each associated with a certain level of development of artificial intelligence and with the paradigm implemented in the systems of that period.

A paradigm here is the new idea underlying the mathematical description of the work of artificial intelligence systems.

Stage 1 (1950s) (Neuron and neural networks)

It is associated with the appearance of the first sequential machines, with very small resource capabilities by today's standards in terms of memory, speed and the classes of tasks to be solved. These were problems of a purely computational nature, for which solution schemes were known and which could be described in some formal language. Adaptation tasks also belong to this class.

Stage 2 (1960s) (Heuristic search)

In the "intelligence" of the machine, search mechanisms, sorting, the simplest operations for generalizing information that do not depend on the meaning of the data being processed, were added. This has become a new starting point in the development and understanding of the tasks of automating human activities.

Stage 3 (1970s) (Knowledge representation)

Scientists recognized the importance of knowledge (in volume and in content) for the synthesis of interesting problem-solving algorithms. This meant knowledge that mathematics could not work with: experiential knowledge that is not of a strictly formal nature and is usually described in a declarative form. This is the knowledge of specialists in various fields of activity: doctors, chemists, researchers and so on. Such knowledge is called expert knowledge, and, accordingly, systems that work on the basis of expert knowledge became known as consulting systems or expert systems.
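A minimal sketch of how declaratively stated expert knowledge can drive such a system (the facts and rules are invented; real expert systems such as MYCIN used far richer rule languages and certainty factors):

```python
# Toy forward-chaining inference over declarative "if premises then conclusion" rules.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:                      # keep applying rules until nothing new is derived
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
# -> includes 'flu_suspected' and 'refer_to_doctor'
```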

Stage 4 (1980s) (Learning machines)

The fourth stage of AI development was a breakthrough. With the advent of expert systems, a fundamentally new stage in the development of intelligent technologies began: the era of intelligent systems as consultants, which proposed solutions, justified them, were able to learn and develop, and communicated with a person in a familiar, albeit limited, natural language.

Stage 5 (1990s) (Automated processing centers)

The growing complexity of communication systems and of the tasks being solved required a qualitatively new level of "intelligence" in software systems: protection against unauthorized access, the information security of resources, protection against attacks, semantic analysis and search for information in networks, and so on. Intelligent systems became the new paradigm for building advanced protection systems of all kinds, allowing the creation of flexible environments within which all the necessary tasks can be solved.

Stage 6 (2000s) (Robotics)

The scope of robots is quite wide and extends from autonomous lawn mowers and vacuum cleaners to modern models of military and space technology. Models are equipped with a navigation system and all kinds of peripheral sensors.

Stage 7 (2008) (Singularity)

The creation of artificial intelligence and self-reproducing machines, the integration of humans with computers, or a significant leap in the capabilities of the human brain due to biotechnology.

According to some forecasts, the technological singularity may arrive around 2030. Proponents of the theory of technological singularity believe that if a mind fundamentally different from the human mind (a post-human) appears, the future fate of civilization cannot be predicted on the basis of human (social) behavior.
