
AI Rising - A Modern History of Artificial Intelligence



Artificial intelligence systems conventionally exhibit at least some of the behaviors associated with natural intelligence, such as planning, learning, reasoning, problem-solving, knowledge representation, perception, motion and manipulation, and, to some extent, social and communication skills.


But did you know that the ideas of robots and intelligent machines first appeared in ancient Greek mythology?

Yes, and Aristotle's development of syllogistic logic and his use of deductive reasoning contributed significantly to humanity's pursuit of understanding its own intelligence. In this way, we can trace the beginnings of modern AI to the attempts of classical philosophers to describe human thought as a symbolic system. The idea of inanimate objects coming to life as intelligent beings has been around for a long time: the ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons.


Below you will find, in chronological order, the most important milestones in the modern history of artificial intelligence:


1943

Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity." The paper proposes the first mathematical model for building a neural network.

1949

Donald Hebb publishes The Organization of Behavior: A Neuropsychological Theory, in which he proposes that neural pathways are created from experience and that connections between neurons become stronger the more frequently they are used. Hebbian learning remains an important model in AI.
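
For flavor, here is a minimal NumPy sketch of the basic Hebbian update rule; the single-neuron setup, learning rate, and input pattern are illustrative assumptions, not taken from Hebb's book:

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """Hebb's rule: strengthen each weight in proportion to the
    co-activation of its input x and the neuron's output y."""
    return w + lr * y * x

# Toy example: a single linear neuron with three inputs.
w = np.array([0.2, 0.2, 0.2])
x = np.array([1.0, 0.0, 1.0])  # input pattern; inputs 0 and 2 are active
y = np.dot(w, x)               # neuron output
w = hebbian_update(w, x, y)
print(w)                       # weights of the active inputs have grown
```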

1950

Alan Turing publishes "Computing Machinery and Intelligence," proposing what is now known as the Turing Test, a method for determining whether a machine is intelligent. Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural-network computer. Claude Shannon publishes the paper "Programming a Computer for Playing Chess." And Isaac Asimov publishes the "Three Laws of Robotics."

1952

Arthur Samuel develops a self-learning program to play checkers.

1954

The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.

1956

The phrase "artificial intelligence" is coined at the Dartmouth Summer Research Project on Artificial Intelligence. Organized by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered the birth of artificial intelligence as we understand it today. Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.

1958

John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense." The paper proposes the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.

1959

Allen Newell, J.C. Shaw, and Herbert Simon create the General Problem Solver (GPS), a program designed to imitate human problem-solving. Herbert Gelernter develops the Geometry Theorem Prover program. Arthur Samuel coins the term "machine learning" while at IBM. And John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.

1963

John McCarthy founds the AI Lab at Stanford University.

1966

The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative that had promised automatic, instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.

1969

The first successful expert systems are developed: DENDRAL, a program for chemical analysis, and MYCIN, designed to diagnose blood infections, are created at Stanford.


1972

The logic programming language Prolog is created.

1973

The "Lighthill Report," detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects.

1974 to 1980

Frustration with the slow progress of AI leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year's "Lighthill Report," artificial intelligence funding dries up and research stalls. This period is known as the "First AI Winter."

1980

Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for more than a decade, effectively ending the first "AI Winter."

1982

Japan's Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems (FGCS) project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.

1983

In response to Japan's FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and AI.

1985

Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.

1987 to 1993

As computing technology improved, cheaper alternatives emerged, and the Lisp machine market collapsed in 1987, ushering in the "Second AI Winter." During this period, expert systems proved too expensive to maintain and update, and they eventually fell out of favor. Japan terminates the FGCS project in 1992, citing failure to meet the ambitious goals outlined a decade earlier. DARPA ends the Strategic Computing Initiative in 1993 after spending nearly $1 billion.

1991

U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.


1997

IBM's Deep Blue beats world chess champion Garry Kasparov.

2005

STANLEY, a self-driving car, wins the DARPA Grand Challenge. The U.S. military begins investing in autonomous robots, such as Boston Dynamics' "Big Dog" and iRobot's "PackBot."

2008

Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.

2011

IBM's Watson beats the competition on Jeopardy!

2012

Andrew Ng, founder of the Google Brain deep learning project, feeds a neural network running deep learning algorithms 10 million YouTube videos as a training set. The neural network learns to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and for deep learning funding.

2014

Google builds the first self-driving car to pass a state driving test.

2016

Google DeepMind's AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in artificial intelligence.

2018

The development of self-driving cars is a headline case for today's AI, the technology that has captured the public's attention more than ever. Like the artificial intelligence that helps power them, they have been a long time coming, however it may seem to those who have not kept up with technology trends. General Motors predicted the eventual arrival of driverless vehicles at the 1939 World's Fair. The Stanford Cart, originally built to study how lunar rovers might operate and later reworked as an autonomous road vehicle, was begun in 1961.


Original source: https://builtin.com/artificial-intelligence



Main Terminology Used in Artificial Intelligence


Here is a glossary of the most commonly used artificial intelligence terms:


§ Artificial intelligence: a field of computer engineering dedicated to the creation of systems capable of collecting data and making decisions.


§ Autonomous: this means capable of independent action without human input.


§ Algorithm: a set of rules that helps a computer learn, built from a combination of numbers and well-defined instructions.


§ Machine learning: the process by which an artificial intelligence uses algorithms to learn from data and perform its functions.


§ Black box: a situation in which an artificial intelligence's computations are so complex that humans cannot readily interpret how it reached its result.


§ Neural network: the computer analogue of a biological nervous system, in which a network of connected artificial neurons helps break a complex problem down into successive levels of data.
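
As a concrete illustration, here is a minimal two-layer feedforward network in NumPy; the layer sizes, random weights, and ReLU activation are illustrative choices, not a prescribed architecture:

```python
import numpy as np

def relu(z):
    """Rectified linear activation, applied element-wise."""
    return np.maximum(0.0, z)

# A tiny feedforward network: 3 inputs -> 4 hidden units -> 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    h = relu(W1 @ x + b1)  # hidden layer: an intermediate level of features
    return W2 @ h + b2     # output layer combines those features

print(forward(np.array([0.5, -1.0, 2.0])))
```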


§ Deep learning: what happens when a neural network has many layers; the AI begins to learn representations on its own and can make leaps of judgment, as evidenced by applying knowledge from one task to a completely different one.


§ Natural language processing: the process by which an artificial intelligence is trained to understand human language, with applications ranging from chatbots and translation services to more advanced assistants like Siri.
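
As a small taste of what such training starts from, here is a toy sketch that turns raw sentences into fixed-length count vectors (a bag-of-words representation); the sentences are made up for illustration:

```python
from collections import Counter

sentences = ["the cat sat on the mat", "the dog sat on the log"]
vocab = sorted({word for s in sentences for word in s.split()})

for s in sentences:
    counts = Counter(s.split())
    vector = [counts[word] for word in vocab]
    print(vector)  # each sentence becomes numbers a model can learn from
```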


§ Reinforcement learning: a method of learning, applicable to both humans and machines, driven by rewards rather than explicit answers. The learner tries a range of actions, observes the outcomes, and refines its behavior so that the next, similar situation draws a more appropriate response.
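
For concreteness, here is a minimal sketch of tabular Q-learning on a made-up five-state corridor; the environment, rewards, and hyperparameters are all illustrative assumptions:

```python
import numpy as np

n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # value of each action in each state
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration
rng = np.random.default_rng(0)

for _ in range(500):                 # episodes of trial and error
    s = 0
    while s != n_states - 1:         # the rightmost state is the goal
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Nudge the estimate toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))          # learned policy: move right everywhere
```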


§ Supervised learning: the process of training an artificial intelligence by providing the answer alongside each question, so that it learns to recognize the pattern that links questions to answers.
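
A minimal sketch of that idea, assuming made-up labeled data and a simple least-squares fit:

```python
import numpy as np

# Each "question" x arrives with its "answer" y (here y is roughly 2x + 1).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

# Fit a line by least squares: the model learns the pattern from the labels.
a, b = np.polyfit(x, y, deg=1)
print(a, b)          # close to 2 and 1
print(a * 4.0 + b)   # prediction for an unseen question
```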


§ Unsupervised learning: unlike supervised learning, this is the process of teaching an artificial intelligence to find patterns on its own, without being given the answers.
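
By contrast, here is a minimal k-means sketch, assuming made-up unlabeled points, where the algorithm discovers the groups on its own:

```python
import numpy as np

# Unlabeled points drawn from two blobs; no answers are provided.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])

# Bare-bones k-means: alternately assign points to the nearest center
# and move each center to the mean of its assigned points.
centers = pts[rng.choice(len(pts), 2, replace=False)]
for _ in range(10):
    labels = np.argmin(np.linalg.norm(pts[:, None] - centers, axis=2), axis=1)
    centers = np.array([pts[labels == k].mean(axis=0) for k in range(2)])

print(centers)  # approximately the centers of the two blobs, found unaided
```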


§ Transfer learning: the idea that skill acquired on one task, for example pattern recognition in one domain, can carry over to improved performance on other, related tasks.


§ Turing Test: a test that aims to distinguish an artificial intelligence from a human; a long-standing goal of AI development is to reach the stage where the two are indistinguishable.







