6 Types of Artificial Neural Networks Currently Being Used in Machine Learning

https://analyticsindiamag.com/6-types-of-artificial-neural-networks-currently-being-used-in-todays-technology/

Artificial neural networks are computational models inspired by the human nervous system. There are several kinds of artificial neural networks, distinguished by the mathematical operations they perform and the set of parameters required to determine the output. Let’s look at some of them:

1. Feedforward Neural Network – Artificial Neuron:

This neural network is one of the simplest forms of ANN: the data, or input, travels in one direction only, entering through the input nodes and exiting through the output nodes. The network may or may not have hidden layers. In simple terms, it has a forward-propagated wave and no backpropagation, and it usually uses a classifying activation function.

Below is a single-layer feed-forward network. Here, the sum of the products of the inputs and their weights is calculated and fed to the output. If this sum is above a certain value, i.e. the threshold (usually 0), the neuron fires and emits the activated value (usually 1); otherwise the deactivated value is emitted (usually -1).
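
To make the weighted-sum-and-threshold step concrete, here is a minimal sketch in Python; the input values and weights are invented for illustration, while the threshold of 0 and the +1/-1 output convention follow the description above.

```python
def feedforward_neuron(inputs, weights, threshold=0.0):
    """Single-layer feedforward unit: weighted sum of inputs compared to a threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    # Fire with the activated value (+1) above the threshold, else emit the deactivated value (-1).
    return 1 if total > threshold else -1

# Illustrative example: two inputs with assumed weights.
print(feedforward_neuron([0.7, 0.2], [0.9, -0.4]))  # -> 1
```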

Applications of feedforward neural networks are found in computer vision and speech recognition, where classifying the target classes is complicated. These networks cope well with noisy data and are easy to maintain. This paper explains the use of a feedforward neural network for X-ray image fusion, the process of overlaying two or more images based on their edges. Here is a visual description.


2. Radial basis function Neural Network:

Radial basis functions consider the distance of a point with respect to a center. RBF networks have two layers: first, the features are combined with the radial basis function in the inner layer; then the output of these units is taken into account when computing the same output in the next time-step, which is basically a memory.

Below is a diagram that represents the distance from the center to a point in the plane, similar to the radius of a circle. Here the distance measure used is Euclidean, although other distance measures can also be used. The model depends on the maximum reach, or the radius of the circle, when classifying the points into different categories. If the point is in or around the radius, the likelihood of the new point being classified into that class is high. There can be a transition while changing from one region to another, and this can be controlled by the beta parameter.
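
As a rough sketch of this idea, the snippet below assumes a Gaussian radial basis function and classifies a point by the center that responds most strongly; the centers, the beta value, and the test point are all invented for illustration.

```python
import math

def gaussian_rbf(point, center, beta=1.0):
    """Gaussian RBF: the response decays with squared Euclidean distance from the center."""
    dist_sq = sum((p - c) ** 2 for p, c in zip(point, center))
    return math.exp(-beta * dist_sq)

# Classify a point by the center that responds most strongly (illustrative centers).
centers = {"class_A": (0.0, 0.0), "class_B": (3.0, 3.0)}
point = (0.5, 0.8)
scores = {label: gaussian_rbf(point, c, beta=0.5) for label, c in centers.items()}
print(max(scores, key=scores.get))  # -> class_A
```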

This neural network has been applied in power restoration systems. Power systems have increased in size and complexity, and both factors increase the risk of major power outages. After a blackout, power needs to be restored as quickly and reliably as possible. This paper describes how an RBF network has been implemented in this domain.

Power restoration usually proceeds in the following order:

  • The first priority is to restore power to essential customers in the communities. These customers provide health care and safety services to all and restoring power to them first enables them to help many others. Essential customers include health care facilities, school boards, critical municipal infrastructure, and police and fire services.
  • Then focus on major power lines and substations that serve larger numbers of customers
  • Give higher priority to repairs that will get the largest number of customers back in service as quickly as possible
  • Then restore power to smaller neighborhoods and individual homes and businesses

The diagram below shows the typical order of the power restoration system.

Referring to the diagram, the first priority is to fix the problem at point A, on the transmission line; with this line out, none of the houses can have power restored. Next is the problem at B on the main distribution line running out of the substation, which affects houses 2, 3, 4, and 5. Then the line at C, affecting houses 4 and 5, is repaired. Finally, the service line at D to house 1 is fixed.


3. Kohonen Self Organizing Neural Network:

The objective of a Kohonen map is to map input vectors of arbitrary dimension onto a discrete map composed of neurons. The map must be trained to create its own organization of the training data, and it comprises either one or two dimensions. When training the map, the location of each neuron remains constant but its weights change depending on the input values. This self-organization process has different phases: in the first phase, every neuron is initialized with small weights and presented with the input vector.

In the second phase, the neuron closest to the point is the ‘winning neuron’, and the neurons connected to the winning neuron also move towards the point, as in the graphic below. The distance between the point and the neurons is calculated by the Euclidean distance, and the neuron with the smallest distance wins. Through the iterations, all the points are clustered, and each neuron comes to represent a cluster. This is the gist of the organization of a Kohonen neural network.
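
A minimal sketch of the winner selection and weight update just described; the map size, learning rate, neighborhood rule (winner plus immediate grid neighbors), and data point are illustrative assumptions.

```python
import math, random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def som_step(weights, point, lr=0.3):
    """One Kohonen update: find the winning neuron and pull it (and its neighbors) toward the point."""
    winner = min(weights, key=lambda i: euclidean(weights[i], point))
    for i in weights:
        if abs(i - winner) <= 1:  # winner plus its immediate neighbors on a 1-D grid
            weights[i] = tuple(w + lr * (p - w) for w, p in zip(weights[i], point))
    return winner

# Illustrative 1-D map of 4 neurons with random initial weights.
random.seed(0)
weights = {i: (random.random(), random.random()) for i in range(4)}
print(som_step(weights, (0.9, 0.1)))  # index of the winning neuron
```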

Kohonen neural networks are used to recognize patterns in data. Applications can be found in medical analysis, where data is clustered into different categories; for example, a Kohonen map was able to classify patients with glomerular or tubular disease with high accuracy. Here is a detailed explanation of how the categorization is done mathematically using the Euclidean distance algorithm. Below is an image comparing a healthy and a diseased glomerulus.


4. Recurrent Neural Network (RNN) – Long Short Term Memory:

The Recurrent Neural Network works on the principle of saving the output of a layer and feeding this back to the input to help in predicting the outcome of the layer.
Here, the first layer is formed in the same way as in a feed forward neural network, with the sum of the products of the weights and the features. The recurrent process starts once this is computed: from one time step to the next, each neuron remembers some of the information it held in the previous time step.

This makes each neuron act like a memory cell while performing computations. In this process, the network must carry out forward propagation while remembering whatever information it will need later. Here, if the prediction is wrong, we use the learning rate (error correction) to make small changes during backpropagation so that the network gradually works towards making the right prediction. This is how a basic recurrent neural network looks:
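
A minimal sketch of the recurrence described above, in which the hidden state carries information from one time step to the next; the weights, the tanh nonlinearity, and the input sequence are illustrative choices, not details from the article.

```python
import math

def rnn_step(x_t, h_prev, w_x=0.8, w_h=0.5, b=0.0):
    """One recurrent step: the new hidden state mixes the current input with the previous state."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

# Run a short input sequence; h acts as the neuron's memory between steps.
h = 0.0
for x in [1.0, 0.5, -0.2]:
    h = rnn_step(x, h)
    print(round(h, 3))
```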

Applications of recurrent neural networks can be found in text-to-speech (TTS) conversion models. This paper describes Deep Voice, which was developed at Baidu’s artificial intelligence lab in California. It was inspired by traditional text-to-speech pipelines, replacing all the components with neural networks: first the text is converted to phonemes, and then an audio synthesis model converts them into speech. RNNs are also used in Tacotron 2, which produces human-like speech from text. An insight into it can be seen below.


5. Convolutional Neural Network:

Convolutional neural networks are similar to feed forward neural networks in that the neurons have learnable weights and biases. They are applied in signal and image processing, where they have largely displaced traditional OpenCV-style techniques in the field of computer vision.

Below is a representation of a ConvNet. In this neural network, the input features are taken in batches, as if passing through a filter, which lets the network remember an image in parts and compute operations on them. These computations involve converting the image from the RGB or HSI scale to gray-scale. Once this is done, changes in pixel values help detect edges, and images can then be classified into different categories.
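
A rough sketch of the two operations mentioned here: converting an RGB pixel grid to gray-scale and sliding a small filter over it so that changes in pixel values (edges) stand out. The tiny 3x3 image and the filter values are invented for illustration.

```python
def to_gray(rgb_image):
    """Average the R, G, B channels of each pixel (a simple grayscale conversion)."""
    return [[sum(px) / 3.0 for px in row] for row in rgb_image]

def convolve2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and sum elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Illustrative 3x3 RGB image with a bright bottom row, and a horizontal edge-detecting kernel.
rgb = [[(10, 10, 10)] * 3, [(10, 10, 10)] * 3, [(200, 200, 200)] * 3]
print(convolve2d(to_gray(rgb), [[-1, -1, -1], [1, 1, 1]]))  # large value where the edge is
```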

ConvNets are applied in signal processing and image classification. Computer vision techniques are dominated by convolutional neural networks because of their accuracy in image classification. Image analysis and recognition techniques, in which agricultural and weather features are extracted from open satellite data such as Landsat to predict the future growth and yield of a particular piece of land, are already being implemented.


6. Modular Neural Network:

Modular neural networks are a collection of different networks working independently and contributing towards a common output. Each network has its own set of inputs, distinct from those of the other networks, and performs its own sub-task. The networks do not interact with or signal each other while accomplishing their tasks.

The advantage of a modular neural network is that it breaks a large computational process down into smaller components, decreasing the complexity. This breakdown reduces the number of connections and removes the interaction of these networks with each other, which in turn increases computation speed. The processing time, however, still depends on the number of neurons and their involvement in computing the results.
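
As a toy sketch of this decomposition, two independent modules (reduced here to single threshold units with made-up weights) each process only their own inputs, and a simple integrator combines their outputs; none of the values come from the article.

```python
def sub_network(inputs, weights):
    """An independent module: a single weighted-sum unit with a step output."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) > 0 else 0

def modular_network(inputs_a, inputs_b):
    # Each module sees only its own inputs and never signals the other module.
    out_a = sub_network(inputs_a, [0.6, -0.2])
    out_b = sub_network(inputs_b, [0.3, 0.9])
    # A simple integrator combines the modules' outputs into the final decision.
    return 1 if out_a + out_b >= 1 else 0

print(modular_network([1.0, 0.5], [0.2, 0.4]))  # -> 1
```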

Below is a visual representation:

Modular neural networks (MNNs) are a rapidly growing field in artificial neural network research. This paper surveys the different motivations for creating MNNs: biological, psychological, hardware, and computational. The general stages of MNN design are then outlined and surveyed as well, viz., task-decomposition techniques, learning schemes, and multi-module decision-making strategies.

Planning in Artificial Intelligence

Artificial intelligence is an important technology of the future. Whether it is intelligent robots, self-driving cars, or smart cities, they will all use different aspects of artificial intelligence, and planning is essential to any such AI project.

Planning is an important part of artificial intelligence that deals with the tasks and domains of a particular problem. Planning is considered the logical side of acting.

Everything we humans do is with a definite goal in mind, and all our actions are oriented towards achieving our goal. Similarly, Planning is also done for Artificial Intelligence.

For example, planning is required to reach a particular destination: it is necessary to find the best route, but also to decide which tasks have to be done at a particular time and why they are done.

That is why planning is considered the logical side of acting. In other words, planning is about deciding the tasks to be performed by the artificial intelligence system and how the system should function under domain-independent conditions.

What is a Plan?

We require a domain description, a task specification, and a goal description for any planning system. A plan is a sequence of actions, and each action has preconditions that must be satisfied before it can be applied, as well as effects that can be positive or negative.

So, we have Forward State Space Planning (FSSP) and Backward State Space Planning (BSSP) at the basic level.
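
A minimal sketch of these notions, assuming a STRIPS-like representation in which a state is a set of facts and an action carries preconditions plus add and delete effects; the facts and the pickup action are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    preconditions: frozenset
    add_effects: frozenset                # positive effects
    del_effects: frozenset = frozenset()  # negative effects

def applicable(action, state):
    """An action can be applied only if all its preconditions hold in the state."""
    return action.preconditions <= state

def apply(action, state):
    """Forward progression (an FSSP step): produce the successor state S'."""
    return (state - action.del_effects) | action.add_effects

# Illustrative domain: picking up a block.
state = frozenset({"ontable(A)", "clear(A)", "handempty"})
pickup_A = Action("pickup(A)",
                  preconditions=frozenset({"ontable(A)", "clear(A)", "handempty"}),
                  add_effects=frozenset({"holding(A)"}),
                  del_effects=frozenset({"ontable(A)", "clear(A)", "handempty"}))
if applicable(pickup_A, state):
    print(apply(pickup_A, state))  # -> frozenset({'holding(A)'})
```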

What is the Role of Planning in Artificial Intelligence

1. Forward State Space Planning (FSSP)

FSSP behaves in the same way as forward state-space search. Given an initial state S in any domain, we perform the necessary applicable actions and obtain a new state S’ (which also contains some new terms); this step is called progression. The process continues until we reach the target state. Each action taken must be applicable in the current state.

  • Disadvantage: Large branching factor
  • Advantage: The algorithm is Sound

2. Backward State Space Planning (BSSP)

BSSP behaves similarly to backward state-space search. Here we move from the target state g back to a sub-goal g’, tracing the previous action needed to achieve that goal. This process is called regression (going back to the previous goal or sub-goal). These sub-goals must also be checked for consistency, and each action must be relevant to the goal.

  • Disadvantage: The algorithm is not sound (inconsistent sub-goals can sometimes be produced)
  • Advantage: Small branching factor (much smaller than FSSP)

So, for an efficient planning system, we need to combine the features of FSSP and BSSP, which gives rise to target stack planning, discussed below.

What is planning in AI?

Planning in artificial intelligence is about decision-making actions performed by robots or computer programs to achieve a specific goal.

Execution of the plan is about choosing a sequence of actions with a high probability of accomplishing the specific goal.

Block-world planning problem

  • A classic block-world problem is known as the Sussman anomaly.
  • The non-interleaved planners of the early 1970s were unable to solve this problem, which is why it is considered an anomaly.
  • When two sub-goals, G1 and G2, are given, a non-interleaved planner produces either a plan for G1 followed by a plan for G2, or vice versa.
  • In the block-world problem, three blocks labeled ‘A’, ‘B’, and ‘C’ rest on a flat surface. The given condition is that only one block can be moved at a time to achieve the target.

The start position and target position are shown in the following diagram.
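
Since the diagram is not reproduced here, the sketch below encodes the classic Sussman-anomaly configuration (start: C on A, with A and B on the table; goal: A on B and B on C) using the same set-of-facts style as the earlier sketch.

```python
# Start state: C sits on A; A and B rest on the table (Sussman anomaly layout).
start = frozenset({"on(C,A)", "ontable(A)", "ontable(B)",
                   "clear(C)", "clear(B)", "handempty"})

# Goal: a single tower with A on B and B on C.
goal = frozenset({"on(A,B)", "on(B,C)"})

def satisfied(goal, state):
    """The goal is reached when every goal fact holds in the current state."""
    return goal <= state

print(satisfied(goal, start))  # -> False: planning is needed
```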


Components of the planning system

The planning process includes the following important steps:

  • Choose the best rule to apply next, based on the best available guess.
  • Apply the chosen rule to calculate the new problem condition.
  • Find out when a solution has been found.
  • Detect dead ends so they can be discarded and direct system effort in more useful directions.
  • Find out when a near-perfect solution is found.

Target stack plan

  • It is one of the most important planning algorithms used by STRIPS.
  • Stacks are used in the algorithm to hold the actions and goals still to be achieved, while a knowledge base holds the current situation and the available actions.
  • A target stack is similar to a node in a search tree, where branches are created with a choice of action.

The important steps of the algorithm are mentioned below:

  1. Start by pushing the original goal onto the stack, and repeat the following until the stack is empty. If the stack top is a compound goal, push its unsatisfied sub-goals onto the stack.
  2. If the stack top is a single unsatisfied goal, replace it with an action that achieves it and push the action’s preconditions onto the stack.
  3. If the stack top is an action, pop it off the stack, execute it, and update the knowledge base with the action’s effects.
  4. If the stack top is a satisfied goal, pop it off the stack.
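
A compact, self-contained sketch of the loop described above; the operator and goal are invented, and compound goals and the re-checking of clobbered sub-goals are omitted for brevity.

```python
# Operators are (name, preconditions, add effects, delete effects) tuples.
PICKUP_A = ("pickup(A)",
            {"ontable(A)", "clear(A)", "handempty"},
            {"holding(A)"},
            {"ontable(A)", "clear(A)", "handempty"})

def goal_stack_plan(state, goals, operators):
    """Simplified goal stack planning: satisfy goals by pushing actions and their preconditions."""
    stack, plan = list(goals), []
    while stack:
        top = stack.pop()
        if isinstance(top, tuple):                 # an action: execute it and record it
            name, _, add, delete = top
            state = (state - delete) | add
            plan.append(name)
        elif top in state:                         # a satisfied goal: discard it
            continue
        else:                                      # an unsatisfied goal: push an achieving action
            op = next(o for o in operators if top in o[2])
            stack.append(op)                       # the action itself
            stack.extend(op[1] - state)            # then its unmet preconditions
    return plan, state

print(goal_stack_plan(frozenset({"ontable(A)", "clear(A)", "handempty"}),
                      ["holding(A)"], [PICKUP_A]))
```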

Non-linear Planning

Non-linear planning uses a goal set instead of a goal stack and includes all possible sub-goal orderings in its search space. It handles goal interactions by interleaving.

Advantages of non-Linear Planning

Non-linear planning may produce an optimal solution with respect to plan length (depending on the search strategy used).

Disadvantages of Nonlinear Planning

It takes a larger search space, since all possible goal orderings have to be considered.

It is a complex algorithm to understand.

Algorithm

  1. Choose a goal ‘g’ from the goal set
  2. If ‘g’ does not match the state, then
    • Choose an operator ‘o’ whose add-list matches goal g
    • Push ‘o’ on the OpStack
    • Add the preconditions of ‘o’ to the goal set
  3. While all the preconditions of the operator on top of OpStack are met in the state
    • Pop operator ‘o’ from the top of OpStack
    • state = apply(o, state)
    • plan = [plan; o]
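
A runnable sketch of the algorithm above under simplifying assumptions: facts are plain strings, operators have only add-lists (no delete effects), and the domain (boiling water, brewing tea) is invented for illustration.

```python
# Operators: (name, preconditions, add list). All values are illustrative.
OPERATORS = [
    ("boil_water", {"have_kettle"}, {"hot_water"}),
    ("brew_tea",   {"hot_water", "have_teabag"}, {"tea_ready"}),
]

def plan_nonlinear(state, goals):
    """Sketch of the listed algorithm: pick goals, push achieving operators, apply them when ready."""
    goal_set, op_stack, plan = set(goals), [], []
    while goal_set or op_stack:
        # Steps 1-2: choose an unsatisfied goal and an operator whose add-list achieves it.
        unsatisfied = goal_set - state
        if unsatisfied:
            g = unsatisfied.pop()
            op = next(o for o in OPERATORS if g in o[2])
            op_stack.append(op)
            goal_set |= op[1]          # add the operator's preconditions as new goals
            goal_set.discard(g)
        # Step 3: while the top operator's preconditions hold, pop and apply it.
        while op_stack and op_stack[-1][1] <= state:
            name, _, add = op_stack.pop()
            state = state | add        # state = apply(o, state)
            plan.append(name)          # plan = [plan; o]
        if not unsatisfied and not op_stack:
            break
    return plan

print(plan_nonlinear({"have_kettle", "have_teabag"}, {"tea_ready"}))
# -> ['boil_water', 'brew_tea']
```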

 

Source: https://www.javatpoint.com/what-is-the-role-of-planning-in-artificial-intelligence

Assessing Indonesia Amid the Artificial Intelligence War

What is AI?
He explained that AI technology is essentially a program that is given examples. Simply put, the program learns to distinguish things from data that has been collected into the system, known as a data set.
“Say we want an AI to tell cats apart from dogs. To make it smart at identifying them, it is given pictures of cats and dogs. The more pictures there are (the bigger the data set), the better the AI gets at telling them apart,” he explained.

According to Budi, collecting a data set is sometimes hampered by concerns about personal privacy, as in the US, where some citizens regard data-set collection as a violation of the public’s privacy. The Chinese government, however, can do this without fear of violating privacy.

“Now say this AI is meant to recognize human faces, for instance telling women and men apart. Well, in America that can’t be done because of privacy. So they have plenty of data sets, but unfortunately in the US it violates privacy, whereas in China it doesn’t,” he said.

Budi said he has been building a company in the AI field since around 2016 and has carried out several projects, such as building technology that can recognize human faces.

He explained that recognizing human faces requires a fair amount of money: a single training run can cost his company 90 to 100 million rupiah.

“At my company, for face recognition like that, a single run can cost me 90 to 100 million for the training alone,” he said.

Competition between countries
Budi also pointed to the biggest rival in the AI technology war. In his view, China is the biggest rival in the development of this technology.

This is the result of that country making massive, serious investments in developing technology at home.

“Like it or not, that is the reality. It is the same as the market for Chinese mobile phones in Indonesia; either way, we benefit from the technology,” said the man, who is also a lecturer at the Institut Teknologi Bandung.

Besides China and the United States, several Middle Eastern countries are reported to be focusing on developing AI technology, among them the United Arab Emirates (UAE). According to Wisnu, the UAE’s focus on AI is marked by the creation of an AI ministry there, an example of a country seriously committed to developing the technology.

“Even the Emirates dares to invest there because they know AI is going to be a flagship. This is an example of a country that is serious about advancing in AI,” Wisnu told CNNIndonesia.com by telephone (9/3).

In 2017 the UAE government appointed a 27-year-old, Omar Sultan Al-Olama, to the post of Minister of Artificial Intelligence, a position described as a response to the coming wave of artificial intelligence.

Wisnu further observed that AI technology has become a trend in several developed countries, and that investment in AI development is now being taken seriously by several technology companies, Google among them.

He believes Google is now focused on investing in AI development, with spending that exceeds the budget of DARPA, the US agency for defense research and military technology development.

“AI has become a trend; people are already investing massively. Google Mind is going all in on this investment; its budget may even be bigger than DARPA’s,” he said.

Beyond that, according to Wisnu, China is also expanding its AI investment in every direction. As a result, the country has produced many AI products, among them the security system in Beijing.

He explained that the Chinese government has installed several million cameras across the country, one function of which is AI security surveillance.

“The biggest AI surveillance system is in Beijing. There are some millions of cameras already installed for AI security surveillance and so on,” said Wisnu.

Wisnu believes China’s zeal in developing its technology deserves a thumbs-up. In his view, the country spends heavily on research and development in AI and technology, to the point of rivaling the United States and the countries of Europe.

“We respect China because of its massive research and development (R&D) in AI and technology. It can already rival America, the European countries, and so on,” he said.

Source: https://www.cnnindonesia.com/teknologi/20210312143859-185-616720/menakar-indonesia-di-tengah-perang-kecerdasan-buatan/2.

Artificial intelligence (AI)

artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks—as, for example, discovering proofs for mathematical theorems or playing chess—with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

What is intelligence?

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence—conspicuously absent in the case of Sphex—must include the ability to adapt to new circumstances.

Psychologists generally do not characterize human intelligence by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

Learning

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures—known as rote learning—is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless it previously had been presented with jumped, whereas a program that is able to generalize can learn the “add ed” rule and so form the past tense of jump based on experience with similar verbs.
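
As a toy illustration of this contrast, the snippet below memorizes a few verb pairs by rote and compares that with the generalized "add ed" rule; the verb list and function names are invented, and only the rule itself comes from the text.

```python
# Rote learning: only items seen before can be recalled.
rote_memory = {"walk": "walked", "talk": "talked"}

def past_tense_rote(verb):
    return rote_memory.get(verb)          # unknown verbs yield nothing

def past_tense_generalized(verb):
    return verb + "ed"                    # the learned "add ed" rule covers new regular verbs

print(past_tense_rote("jump"))            # -> None: never seen, so no answer
print(past_tense_generalized("jump"))     # -> "jumped"
```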

Reasoning

To reason is to draw inferences appropriate to the situation. Inferences are classified as either deductive or inductive. An example of the former is, “Fred must be in either the museum or the café. He is not in the café; therefore he is in the museum,” and of the latter, “Previous accidents of this sort were caused by instrument failure; therefore this accident was caused by instrument failure.” The most significant difference between these forms of reasoning is that in the deductive case the truth of the premises guarantees the truth of the conclusion, whereas in the inductive case the truth of the premise lends support to the conclusion without giving absolute assurance. Inductive reasoning is common in science, where data are collected and tentative models are developed to describe and predict future behaviour—until the appearance of anomalous data forces the model to be revised. Deductive reasoning is common in mathematics and logic, where elaborate structures of irrefutable theorems are built up from a small set of basic axioms and rules.

There has been considerable success in programming computers to draw inferences, especially deductive inferences. However, true reasoning involves more than just drawing inferences; it involves drawing inferences relevant to the solution of the particular task or situation. This is one of the hardest problems confronting AI.

Problem solving

Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special purpose and general purpose. A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. In contrast, a general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is means-end analysis—a step-by-step, or incremental, reduction of the difference between the current state and the final goal. The program selects actions from a list of means—in the case of a simple robot this might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT—until the goal is reached.
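
A much-reduced sketch of means-end analysis on a one-dimensional position, using only MOVEFORWARD and MOVEBACK from the list above; the positions, step sizes, and greedy difference-reduction loop are illustrative assumptions.

```python
def means_end_analysis(current, goal):
    """Repeatedly pick the move that most reduces the difference between current state and goal."""
    moves = {"MOVEFORWARD": +1, "MOVEBACK": -1}   # two of the means listed above
    plan = []
    while current != goal:
        # Choose the action whose result is closest to the goal (smallest remaining difference).
        name, step = min(moves.items(), key=lambda m: abs(current + m[1] - goal))
        current += step
        plan.append(name)
    return plan

print(means_end_analysis(0, 3))   # -> ['MOVEFORWARD', 'MOVEFORWARD', 'MOVEFORWARD']
```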

Many diverse problems have been solved by artificial intelligence programs. Some examples are finding the winning move (or sequence of moves) in a board game, devising mathematical proofs, and manipulating “virtual objects” in a computer-generated world.

Perception

In perception the environment is scanned by means of various sensory organs, real or artificial, and the scene is decomposed into separate objects in various spatial relationships. Analysis is complicated by the fact that an object may appear different depending on the angle from which it is viewed, the direction and intensity of illumination in the scene, and how much the object contrasts with the surrounding field.

At present, artificial perception is sufficiently well advanced to enable optical sensors to identify individuals, autonomous vehicles to drive at moderate speeds on the open road, and robots to roam through buildings collecting empty soda cans. One of the earliest systems to integrate perception and action was FREDDY, a stationary robot with a moving television eye and a pincer hand, constructed at the University of Edinburgh, Scotland, during the period 1966–73 under the direction of Donald Michie. FREDDY was able to recognize a variety of objects and could be instructed to assemble simple artifacts, such as a toy car, from a random heap of components.

Language

language is a system of signs having meaning by convention. In this sense, language need not be confined to the spoken word. Traffic signs, for example, form a minilanguage, it being a matter of convention that ⚠ means “hazard ahead” in some countries. It is distinctive of languages that linguistic units possess meaning by convention, and linguistic meaning is very different from what is called natural meaning, exemplified in statements such as “Those clouds mean rain” and “The fall in pressure means the valve is malfunctioning.”

An important characteristic of full-fledged human languages—in contrast to birdcalls and traffic signs—is their productivity. A productive language can formulate an unlimited variety of sentences.

It is relatively easy to write computer programs that seem able, in severely restricted contexts, to respond fluently in a human language to questions and statements. Although none of these programs actually understands language, they may, in principle, reach the point where their command of a language is indistinguishable from that of a normal human. What, then, is involved in genuine understanding, if even a computer that uses language like a native human speaker is not acknowledged to understand? There is no universally agreed upon answer to this difficult question. According to one theory, whether or not one understands depends not only on one’s behaviour but also on one’s history: in order to be said to understand, one must have learned the language and have been trained to take one’s place in the linguistic community by means of interaction with other language users.

Methods and goals in AI

Symbolic vs. connectionist approaches

AI research follows two distinct, and to some extent competing, methods, the symbolic (or “top-down”) approach, and the connectionist (or “bottom-up”) approach. The top-down approach seeks to replicate intelligence by analyzing cognition independent of the biological structure of the brain, in terms of the processing of symbols—whence the symbolic label. The bottom-up approach, on the other hand, involves creating artificial neural networks in imitation of the brain’s structure—whence the connectionist label.

To illustrate the difference between these approaches, consider the task of building a system, equipped with an optical scanner, that recognizes the letters of the alphabet. A bottom-up approach typically involves training an artificial neural network by presenting letters to it one by one, gradually improving performance by “tuning” the network. (Tuning adjusts the responsiveness of different neural pathways to different stimuli.) In contrast, a top-down approach typically involves writing a computer program that compares each letter with geometric descriptions. Simply put, neural activities are the basis of the bottom-up approach, while symbolic descriptions are the basis of the top-down approach.


In The Fundamentals of Learning (1932), Edward Thorndike, a psychologist at Columbia University, New York City, first suggested that human learning consists of some unknown property of connections between neurons in the brain. In The Organization of Behavior (1949), Donald Hebb, a psychologist at McGill University, Montreal, Canada, suggested that learning specifically involves strengthening certain patterns of neural activity by increasing the probability (weight) of induced neuron firing between the associated connections. The notion of weighted connections is described in a later section, Connectionism.

In 1957 two vigorous advocates of symbolic AI—Allen Newell, a researcher at the RAND Corporation, Santa Monica, California, and Herbert Simon, a psychologist and computer scientist at Carnegie Mellon University, Pittsburgh, Pennsylvania—summed up the top-down approach in what they called the physical symbol system hypothesis. This hypothesis states that processing structures of symbols is sufficient, in principle, to produce artificial intelligence in a digital computer and that, moreover, human intelligence is the result of the same type of symbolic manipulations.

During the 1950s and ’60s the top-down and bottom-up approaches were pursued simultaneously, and both achieved noteworthy, if limited, results. During the 1970s, however, bottom-up AI was neglected, and it was not until the 1980s that this approach again became prominent. Nowadays both approaches are followed, and both are acknowledged as facing difficulties. Symbolic techniques work in simplified realms but typically break down when confronted with the real world; meanwhile, bottom-up researchers have been unable to replicate the nervous systems of even the simplest living things. Caenorhabditis elegans, a much-studied worm, has approximately 300 neurons whose pattern of interconnections is perfectly known. Yet connectionist models have failed to mimic even this worm. Evidently, the neurons of connectionist theory are gross oversimplifications of the real thing.

Strong AI, applied AI, and cognitive simulation

Employing the methods outlined above, AI research attempts to reach one of three goals: strong AI, applied AI, or cognitive simulation. Strong AI aims to build machines that think. (The term strong AI was introduced for this category of research in 1980 by the philosopher John Searle of the University of California at Berkeley.) The ultimate ambition of strong AI is to produce a machine whose overall intellectual ability is indistinguishable from that of a human being. As is described in the section Early milestones in AI, this goal generated great interest in the 1950s and ’60s, but such optimism has given way to an appreciation of the extreme difficulties involved. To date, progress has been meagre. Some critics doubt whether research will produce even a system with the overall intellectual ability of an ant in the foreseeable future. Indeed, some researchers working in AI’s other two branches view strong AI as not worth pursuing.

Applied AI, also known as advanced information processing, aims to produce commercially viable “smart” systems—for example, “expert” medical diagnosis systems and stock-trading systems. Applied AI has enjoyed considerable success, as described in the section Expert systems.

In cognitive simulation, computers are used to test theories about how the human mind works—for example, theories about how people recognize faces or recall memories. Cognitive simulation is already a powerful tool in both neuroscience and cognitive psychology.

Alan Turing and the beginning of AI

Theoretical work

The earliest substantial work in the field of artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing. In 1935 Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The actions of the scanner are dictated by a program of instructions that also is stored in the memory in the form of symbols. This is Turing’s stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. Turing’s conception is now known simply as the universal Turing machine. All modern computers are in essence universal Turing machines.

During World War II, Turing was a leading cryptanalyst at the Government Code and Cypher School in Bletchley Park, Buckinghamshire, England. Turing could not turn to the project of building a stored-program electronic computing machine until the cessation of hostilities in Europe in 1945. Nevertheless, during the war he gave considerable thought to the issue of machine intelligence. One of Turing’s colleagues at Bletchley Park, Donald Michie (who later founded the Department of Machine Intelligence and Perception at the University of Edinburgh), later recalled that Turing often discussed how computers could learn from experience as well as solve new problems through the use of guiding principles—a process now known as heuristic problem solving.


Turing gave quite possibly the earliest public lecture (London, 1947) to mention computer intelligence, saying, “What we want is a machine that can learn from experience,” and that the “possibility of letting the machine alter its own instructions provides the mechanism for this.” In 1948 he introduced many of the central concepts of AI in a report entitled “Intelligent Machinery.” However, Turing did not publish this paper, and many of his ideas were later reinvented by others. For instance, one of Turing’s original ideas was to train a network of artificial neurons to perform specific tasks, an approach described in the section Connectionism.

Chess

At Bletchley Park, Turing illustrated his ideas on machine intelligence by reference to chess—a useful source of challenging and clearly defined problems against which proposed methods for problem solving could be tested. In principle, a chess-playing computer could play by searching exhaustively through all the available moves, but in practice this is impossible because it would involve examining an astronomically large number of moves. Heuristics are necessary to guide a narrower, more discriminative search. Although Turing experimented with designing chess programs, he had to content himself with theory in the absence of a computer to run his chess program. The first true AI programs had to await the arrival of stored-program electronic digital computers.

In 1945 Turing predicted that computers would one day play very good chess, and just over 50 years later, in 1997, Deep Blue, a chess computer built by the International Business Machines Corporation (IBM), beat the reigning world champion, Garry Kasparov, in a six-game match. While Turing’s prediction came true, his expectation that chess programming would contribute to the understanding of how human beings think did not. The huge improvement in computer chess since Turing’s day is attributable to advances in computer engineering rather than advances in AI—Deep Blue’s 256 parallel processors enabled it to examine 200 million possible moves per second and to look ahead as many as 14 turns of play. Many agree with Noam Chomsky, a linguist at the Massachusetts Institute of Technology (MIT), who opined that a computer beating a grandmaster at chess is about as interesting as a bulldozer winning an Olympic weightlifting competition.

The Turing test

In 1950 Turing sidestepped the traditional debate concerning the definition of intelligence, introducing a practical test for computer intelligence that is now known simply as the Turing test. The Turing test involves three participants: a computer, a human interrogator, and a human foil. The interrogator attempts to determine, by asking questions of the other two participants, which is the computer. All communication is via keyboard and display screen. The interrogator may ask questions as penetrating and wide-ranging as he or she likes, and the computer is permitted to do everything possible to force a wrong identification. (For instance, the computer might answer, “No,” in response to, “Are you a computer?” and might follow a request to multiply one large number by another with a long pause and an incorrect answer.) The foil must help the interrogator to make a correct identification. A number of different people play the roles of interrogator and foil, and, if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then (according to proponents of Turing’s test) the computer is considered an intelligent, thinking entity.

In 1991 the American philanthropist Hugh Loebner started the annual Loebner Prize competition, promising a $100,000 payout to the first computer to pass the Turing test and awarding $2,000 each year to the best effort. However, no AI program has come close to passing an undiluted Turing test.

Early milestones in AI

The first AI programs

The earliest successful AI program was written in 1951 by Christopher Strachey, later director of the Programming Research Group at the University of Oxford. Strachey’s checkers (draughts) program ran on the Ferranti Mark I computer at the University of Manchester, England. By the summer of 1952 this program could play a complete game of checkers at a reasonable speed.

Information about the earliest successful demonstration of machine learning was published in 1952. Shopper, written by Anthony Oettinger at the University of Cambridge, ran on the EDSAC computer. Shopper’s simulated world was a mall of eight shops. When instructed to purchase an item, Shopper would search for it, visiting shops at random until the item was found. While searching, Shopper would memorize a few of the items stocked in each shop visited (just as a human shopper might). The next time Shopper was sent out for the same item, or for some other item that it had already located, it would go to the right shop straight away. This simple form of learning, as is pointed out in the introductory section What is intelligence?, is called rote learning.

The first AI program to run in the United States also was a checkers program, written in 1952 by Arthur Samuel for the prototype of the IBM 701. Samuel took over the essentials of Strachey’s checkers program and over a period of years considerably extended it. In 1955 he added features that enabled the program to learn from experience. Samuel included mechanisms for both rote learning and generalization, enhancements that eventually led to his program’s winning one game against a former Connecticut checkers champion in 1962.

Evolutionary computing

Samuel’s checkers program was also notable for being one of the first efforts at evolutionary computing. (His program “evolved” by pitting a modified copy against the current best version of his program, with the winner becoming the new standard.) Evolutionary computing typically involves the use of some automatic method of generating and evaluating successive “generations” of a program, until a highly proficient solution evolves.

A leading proponent of evolutionary computing, John Holland, also wrote test software for the prototype of the IBM 701 computer. In particular, he helped design a neural-network “virtual” rat that could be trained to navigate through a maze. This work convinced Holland of the efficacy of the bottom-up approach. While continuing to consult for IBM, Holland moved to the University of Michigan in 1952 to pursue a doctorate in mathematics. He soon switched, however, to a new interdisciplinary program in computers and information processing (later known as communications science) created by Arthur Burks, one of the builders of ENIAC and its successor EDVAC. In his 1959 dissertation, for most likely the world’s first computer science Ph.D., Holland proposed a new type of computer—a multiprocessor computer—that would assign each artificial neuron in a network to a separate processor. (In 1985 Daniel Hillis solved the engineering difficulties to build the first such computer, the 65,536-processor Thinking Machines Corporation supercomputer.)

Holland joined the faculty at Michigan after graduation and over the next four decades directed much of the research into methods of automating evolutionary computing, a process now known by the term genetic algorithms. Systems implemented in Holland’s laboratory included a chess program, models of single-cell biological organisms, and a classifier system for controlling a simulated gas-pipeline network. Genetic algorithms are no longer restricted to “academic” demonstrations, however; in one important practical application, a genetic algorithm cooperates with a witness to a crime in order to generate a portrait of the criminal.

Logical reasoning and problem solving

The ability to reason logically is an important aspect of intelligence and has always been a major focus of AI research. An important landmark in this area was a theorem-proving program written in 1955–56 by Allen Newell and J. Clifford Shaw of the RAND Corporation and Herbert Simon of Carnegie Mellon University. The Logic Theorist, as the program became known, was designed to prove theorems from Principia Mathematica (1910–13), a three-volume work by the British philosopher-mathematicians Alfred North Whitehead and Bertrand Russell. In one instance, a proof devised by the program was more elegant than the proof given in the books.

Newell, Simon, and Shaw went on to write a more powerful program, the General Problem Solver, or GPS. The first version of GPS ran in 1957, and work continued on the project for about a decade. GPS could solve an impressive variety of puzzles using a trial and error approach. However, one criticism of GPS, and similar programs that lack any learning capability, is that the program’s intelligence is entirely secondhand, coming from whatever information the programmer explicitly includes.

English dialogue

Two of the best-known early AI programs, Eliza and Parry, gave an eerie semblance of intelligent conversation. (Details of both were first published in 1966.) Eliza, written by Joseph Weizenbaum of MIT’s AI Laboratory, simulated a human therapist. Parry, written by Stanford University psychiatrist Kenneth Colby, simulated a human paranoiac. Psychiatrists who were asked to decide whether they were communicating with Parry or a human paranoiac were often unable to tell. Nevertheless, neither Parry nor Eliza could reasonably be described as intelligent. Parry’s contributions to the conversation were canned—constructed in advance by the programmer and stored away in the computer’s memory. Eliza, too, relied on canned sentences and simple programming tricks.

AI programming languages

In the course of their work on the Logic Theorist and GPS, Newell, Simon, and Shaw developed their Information Processing Language (IPL), a computer language tailored for AI programming. At the heart of IPL was a highly flexible data structure that they called a list. A list is simply an ordered sequence of items of data. Some or all of the items in a list may themselves be lists. This scheme leads to richly branching structures.

In 1960 John McCarthy combined elements of IPL with the lambda calculus (a formal mathematical-logical system) to produce the programming language LISP (List Processor), which remains the principal language for AI work in the United States. (The lambda calculus itself was invented in 1936 by the Princeton logician Alonzo Church while he was investigating the abstract Entscheidungsproblem, or “decision problem,” for predicate logic—the same problem that Turing had been attacking when he invented the universal Turing machine.)

The logic programming language PROLOG (Programmation en Logique) was conceived by Alain Colmerauer at the University of Aix-Marseille, France, where the language was first implemented in 1973. PROLOG was further developed by the logician Robert Kowalski, a member of the AI group at the University of Edinburgh. This language makes use of a powerful theorem-proving technique known as resolution, invented in 1963 at the U.S. Atomic Energy Commission’s Argonne National Laboratory in Illinois by the British logician Alan Robinson. PROLOG can determine whether or not a given statement follows logically from other given statements. For example, given the statements “All logicians are rational” and “Robinson is a logician,” a PROLOG program responds in the affirmative to the query “Robinson is rational?” PROLOG is widely used for AI work, especially in Europe and Japan.

Researchers at the Institute for New Generation Computer Technology in Tokyo have used PROLOG as the basis for sophisticated logic programming languages. Known as fifth-generation languages, these are in use on nonnumerical parallel computers developed at the Institute.

Other recent work includes the development of languages for reasoning about time-dependent data such as “the account was paid yesterday.” These languages are based on tense logic, which permits statements to be located in the flow of time. (Tense logic was invented in 1953 by the philosopher Arthur Prior at the University of Canterbury, Christchurch, New Zealand.)

Microworld programs

To cope with the bewildering complexity of the real world, scientists often ignore less relevant details; for instance, physicists often ignore friction and elasticity in their models. In 1970 Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that likewise AI research should focus on developing programs capable of intelligent behaviour in simpler artificial environments known as microworlds. Much research has focused on the so-called blocks world, which consists of coloured blocks of various shapes and sizes arrayed on a flat surface.

An early success of the microworld approach was SHRDLU, written by Terry Winograd of MIT. (Details of the program were published in 1972.) SHRDLU controlled a robot arm that operated above a flat surface strewn with play blocks. Both the arm and the blocks were virtual. SHRDLU would respond to commands typed in natural English, such as “Will you please stack up both of the red blocks and either a green cube or a pyramid.” The program could also answer questions about its own actions. Although SHRDLU was initially hailed as a major breakthrough, Winograd soon announced that the program was, in fact, a dead end. The techniques pioneered in the program proved unsuitable for application in wider, more interesting worlds. Moreover, the appearance that SHRDLU gave of understanding the blocks microworld, and English statements concerning it, was in fact an illusion. SHRDLU had no idea what a green block was.

Another product of the microworld approach was Shakey, a mobile robot developed at the Stanford Research Institute by Bertram Raphael, Nils Nilsson, and others during the period 1968–72. The robot occupied a specially built microworld consisting of walls, doorways, and a few simply shaped wooden blocks. Each wall had a carefully painted baseboard to enable the robot to “see” where the wall met the floor (a simplification of reality that is typical of the microworld approach). Shakey had about a dozen basic abilities, such as TURN, PUSH, and CLIMB-RAMP.

Critics pointed out the highly simplified nature of Shakey’s environment and emphasized that, despite these simplifications, Shakey operated excruciatingly slowly; a series of actions that a human could plan out and execute in minutes took Shakey days.

The greatest success of the microworld approach is a type of program known as an expert system, described in the next section.

Expert systems

Expert systems occupy a type of microworld—for example, a model of a ship’s hold and its cargo—that is self-contained and relatively uncomplicated. For such AI systems every effort is made to incorporate all the information about some narrow field that an expert (or group of experts) would know, so that a good expert system can often outperform any single human expert. There are many commercial expert systems, including programs for medical diagnosis, chemical analysis, credit authorization, financial management, corporate planning, financial document routing, oil and mineral prospecting, genetic engineering, automobile design and manufacture, camera lens design, computer installation design, airline scheduling, cargo placement, and automatic help services for home computer owners.

Knowledge and inference

The basic components of an expert system are a knowledge base, or KB, and an inference engine. The information to be stored in the KB is obtained by interviewing people who are expert in the area in question. The interviewer, or knowledge engineer, organizes the information elicited from the experts into a collection of rules, typically of an “if-then” structure. Rules of this type are called production rules. The inference engine enables the expert system to draw deductions from the rules in the KB. For example, if the KB contains the production rules “if x, then y” and “if y, then z,” the inference engine is able to deduce “if x, then z.” The expert system might then query its user, “Is x true in the situation that we are considering?” If the answer is affirmative, the system will proceed to infer z.
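
A minimal sketch of this inference step, using forward chaining over if-then production rules; the rule contents mirror the "if x, then y" and "if y, then z" example, while the code structure itself is an illustration rather than any particular expert-system shell.

```python
# Production rules of the form "if antecedent, then consequent" (illustrative contents).
RULES = [("x", "y"), ("y", "z")]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose antecedent is known until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

# If the user confirms that x is true, the engine infers y and then z.
print(forward_chain({"x"}, RULES))   # -> {'x', 'y', 'z'}
```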

Some expert systems use fuzzy logic. In standard logic there are only two truth values, true and false. This absolute precision makes vague attributes or situations difficult to characterize. (When, precisely, does a thinning head of hair become a bald head?) Often the rules that human experts use contain vague expressions, and so it is useful for an expert system’s inference engine to employ fuzzy logic.

DENDRAL

In 1965 the AI researcher Edward Feigenbaum and the geneticist Joshua Lederberg, both of Stanford University, began work on Heuristic DENDRAL (later shortened to DENDRAL), a chemical-analysis expert system. The substance to be analyzed might, for example, be a complicated compound of carbon, hydrogen, and nitrogen. Starting from spectrographic data obtained from the substance, DENDRAL would hypothesize the substance’s molecular structure. DENDRAL’s performance rivaled that of chemists expert at this task, and the program was used in industry and in academia.

MYCIN

Work on MYCIN, an expert system for treating blood infections, began at Stanford University in 1972. MYCIN would attempt to diagnose patients based on reported symptoms and medical test results. The program could request further information concerning the patient, as well as suggest additional laboratory tests, to arrive at a probable diagnosis, after which it would recommend a course of treatment. If requested, MYCIN would explain the reasoning that led to its diagnosis and recommendation. Using about 500 production rules, MYCIN operated at roughly the same level of competence as human specialists in blood infections and rather better than general practitioners.

Nevertheless, expert systems have no common sense or understanding of the limits of their expertise. For instance, if MYCIN were told that a patient who had received a gunshot wound was bleeding to death, the program would attempt to diagnose a bacterial cause for the patient’s symptoms. Expert systems can also act on absurd clerical errors, such as prescribing an obviously incorrect dosage of a drug for a patient whose weight and age data were accidentally transposed.

The CYC project

CYC is a large experiment in symbolic AI. The project began in 1984 under the auspices of the Microelectronics and Computer Technology Corporation, a consortium of computer, semiconductor, and electronics manufacturers. In 1995 Douglas Lenat, the CYC project director, spun off the project as Cycorp, Inc., based in Austin, Texas. The most ambitious goal of Cycorp was to build a KB containing a significant percentage of the commonsense knowledge of a human being. Millions of commonsense assertions, or rules, were coded into CYC. The expectation was that this “critical mass” would allow the system itself to extract further rules directly from ordinary prose and eventually serve as the foundation for future generations of expert systems.

With only a fraction of its commonsense KB compiled, CYC could draw inferences that would defeat simpler systems. For example, CYC could infer, “Garcia is wet,” from the statement, “Garcia is finishing a marathon run,” by employing its rules that running a marathon entails high exertion, that people sweat at high levels of exertion, and that when something sweats it is wet. Among the outstanding remaining problems are issues in searching and problem solving—for example, how to search the KB automatically for information that is relevant to a given problem. AI researchers call the problem of updating, searching, and otherwise manipulating a large structure of symbols in realistic amounts of time the frame problem. Some critics of symbolic AI believe that the frame problem is largely unsolvable and so maintain that the symbolic approach will never yield genuinely intelligent systems. It is possible that CYC, for example, will succumb to the frame problem long before the system achieves human levels of knowledge.

Connectionism

Connectionism, or neuronlike computing, developed out of attempts to understand how the human brain works at the neural level and, in particular, how people learn and remember. In 1943 the neurophysiologist Warren McCulloch of the University of Illinois and the mathematician Walter Pitts of the University of Chicago published an influential treatise on neural nets and automatons, according to which each neuron in the brain is a simple digital processor and the brain as a whole is a form of computing machine. As McCulloch put it subsequently, “What we thought we were doing (and I think we succeeded fairly well) was treating the brain as a Turing machine.”

Creating an artificial neural network

It was not until 1954, however, that Belmont Farley and Wesley Clark of MIT succeeded in running the first artificial neural network—albeit limited by computer memory to no more than 128 neurons. They were able to train their networks to recognize simple patterns. In addition, they discovered that the random destruction of up to 10 percent of the neurons in a trained network did not affect the network’s performance—a feature that is reminiscent of the brain’s ability to tolerate limited damage inflicted by surgery, accident, or disease.

The simple neural network depicted in the figure illustrates the central ideas of connectionism. Four of the network’s five neurons are for input, and the fifth—to which each of the others is connected—is for output. Each of the neurons is either firing (1) or not firing (0). Each connection leading to N, the output neuron, has a “weight.” What is called the total weighted input into N is calculated by adding up the weights of all the connections leading to N from neurons that are firing. For example, suppose that only two of the input neurons, X and Y, are firing. Since the weight of the connection from X to N is 1.5 and the weight of the connection from Y to N is 2, it follows that the total weighted input to N is 3.5. As shown in the figure, N has a firing threshold of 4. That is to say, if N’s total weighted input equals or exceeds 4, then N fires; otherwise, N does not fire. So, for example, N does not fire if the only input neurons to fire are X and Y, but N does fire if X, Y, and Z all fire.
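
The arithmetic of this example takes only a few lines of Python. Only the three named input neurons are modelled here, and the weight on the connection from Z is not given in the text, so a value of 1.0 is assumed purely so that X, Y, and Z firing together clears the threshold.

```python
# Minimal sketch of the threshold neuron described above. Weights for X and Y
# come from the text (1.5 and 2.0); the weight for Z is an assumption.
weights = {"X": 1.5, "Y": 2.0, "Z": 1.0}   # connection weights into N
threshold = 4.0                            # N fires when the weighted input reaches this

def output_neuron_fires(firing_inputs):
    """Return True if N fires, given the set of input neurons that are firing."""
    total_weighted_input = sum(weights[name] for name in firing_inputs)
    return total_weighted_input >= threshold

print(output_neuron_fires({"X", "Y"}))       # False: 1.5 + 2.0 = 3.5 < 4
print(output_neuron_fires({"X", "Y", "Z"}))  # True: 1.5 + 2.0 + 1.0 >= 4 (with the assumed Z weight)
```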

Training the network involves two steps. First, the external agent inputs a pattern and observes the behaviour of N. Second, the agent adjusts the connection weights in accordance with the rules:

  1. If the actual output is 0 and the desired output is 1, increase by a small fixed amount the weight of each connection leading to N from neurons that are firing (thus making it more likely that N will fire the next time the network is given the same pattern);
  2. If the actual output is 1 and the desired output is 0, decrease by that same small amount the weight of each connection leading to the output neuron from neurons that are firing (thus making it less likely that the output neuron will fire the next time the network is given that pattern as input).

The external agent—actually a computer program—goes through this two-step procedure with each pattern in a training sample, which is then repeated a number of times. During these many repetitions, a pattern of connection weights is forged that enables the network to respond correctly to each pattern. The striking thing is that the learning process is entirely mechanical and requires no human intervention or adjustment. The connection weights are increased or decreased automatically by a constant amount, and exactly the same learning procedure applies to different tasks.
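
A minimal sketch of this training loop is given below. The step size of 0.1, the particular training patterns, and the number of repetitions are assumptions made for illustration; only the adjustment rule itself comes from the description above.

```python
# Sketch of the two-step training rule: weights on connections from firing
# inputs are nudged up when N should have fired but did not, and nudged down
# when it fired but should not have. The step size (0.1) is an assumption.
STEP = 0.1
THRESHOLD = 4.0

def fires(weights, pattern):
    return sum(w for name, w in weights.items() if pattern[name]) >= THRESHOLD

def train(weights, training_sample, repetitions=50):
    for _ in range(repetitions):                 # repeat the whole sample many times
        for pattern, desired in training_sample:
            actual = fires(weights, pattern)
            if actual == desired:
                continue                         # correct response: leave weights alone
            delta = STEP if desired else -STEP   # rule 1 raises, rule 2 lowers
            for name, is_firing in pattern.items():
                if is_firing:
                    weights[name] += delta       # only connections from firing neurons change
    return weights

weights = {"X": 1.5, "Y": 2.0, "Z": 1.0}
sample = [({"X": True, "Y": True, "Z": False}, True),    # N should fire for X and Y alone
          ({"X": False, "Y": False, "Z": True}, False)]  # ...but not for Z alone
print(train(weights, sample))
```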

Perceptrons

In 1957 Frank Rosenblatt of the Cornell Aeronautical Laboratory at Cornell University in Ithaca, New York, began investigating artificial neural networks that he called perceptrons. He made major contributions to the field of AI, both through experimental investigations of the properties of neural networks (using computer simulations) and through detailed mathematical analysis. Rosenblatt was a charismatic communicator, and there were soon many research groups in the United States studying perceptrons. Rosenblatt and his followers called their approach connectionist to emphasize the importance in learning of the creation and modification of connections between neurons. Modern researchers have adopted this term.

One of Rosenblatt’s contributions was to generalize the training procedure that Farley and Clark had applied to only two-layer networks so that the procedure could be applied to multilayer networks. Rosenblatt used the phrase “back-propagating error correction” to describe his method. The method, with substantial improvements and extensions by numerous scientists, and the term back-propagation are now in everyday use in connectionism.
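
In its modern form, back-propagation computes how much each weight, including those in hidden layers, contributed to the output error and adjusts every weight accordingly. The sketch below is a present-day NumPy formulation trained on the XOR task, chosen because a single-layer network cannot solve it; the network size, learning rate, and task are illustrative choices, not details from Rosenblatt's work.

```python
import numpy as np

# A modern back-propagation sketch: a network with one hidden layer learns XOR,
# a task no single-layer perceptron can represent. Layer sizes, learning rate,
# and iteration count are arbitrary choices for illustration.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights and biases
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights and biases
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)                   # forward pass
    output = sigmoid(hidden @ W2 + b2)

    d_out = output - y                              # cross-entropy gradient at the sigmoid output
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)  # error propagated back to the hidden layer

    W2 -= lr * hidden.T @ d_out                     # gradient-descent weight updates
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(np.round(output.ravel(), 2))                  # approaches [0, 1, 1, 0]
```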

Conjugating verbs

In one famous connectionist experiment conducted at the University of California at San Diego (published in 1986), David Rumelhart and James McClelland trained a network of 920 artificial neurons, arranged in two layers of 460 neurons, to form the past tenses of English verbs. Root forms of verbs—such as come, look, and sleep—were presented to one layer of neurons, the input layer. A supervisory computer program observed the difference between the actual response at the layer of output neurons and the desired response—came, say—and then mechanically adjusted the connections throughout the network in accordance with the procedure described above to give the network a slight push in the direction of the correct response. About 400 different verbs were presented one by one to the network, and the connections were adjusted after each presentation. This whole procedure was repeated about 200 times using the same verbs, after which the network could correctly form the past tense of many unfamiliar verbs as well as of the original verbs. For example, when presented for the first time with guard, the network responded guarded; with weep, wept; with cling, clung; and with drip, dripped (complete with double p). This is a striking example of learning involving generalization. (Sometimes, though, the peculiarities of English were too much for the network, and it formed squawked from squat, shipped from shape, and membled from mail.)

Another name for connectionism is parallel distributed processing, which emphasizes two important features. First, a large number of relatively simple processors—the neurons—operate in parallel. Second, neural networks store information in a distributed fashion, with each individual connection participating in the storage of many different items of information. The know-how that enabled the past-tense network to form wept from weep, for example, was not stored in one specific location in the network but was spread throughout the entire pattern of connection weights that was forged during training. The human brain also appears to store information in a distributed fashion, and connectionist research is contributing to attempts to understand how it does so.

Other neural networks

Other work on neuronlike computing includes the following:

  • Visual perception. Networks can recognize faces and other objects from visual data. A neural network designed by John Hummel and Irving Biederman at the University of Minnesota can identify about 10 objects from simple line drawings. The network is able to recognize the objects—which include a mug and a frying pan—even when they are drawn from different angles. Networks investigated by Tomaso Poggio of MIT are able to recognize bent-wire shapes drawn from different angles, faces photographed from different angles and showing different expressions, and objects from cartoon drawings with gray-scale shading indicating depth and orientation.
  • Language processing. Neural networks are able to convert handwritten and typewritten material to electronic text. The U.S. Internal Revenue Service has commissioned a neuronlike system that will automatically read tax returns and correspondence. Neural networks also convert speech to printed text and printed text to speech.
  • Financial analysis. Neural networks are being used increasingly for loan risk assessment, real estate valuation, bankruptcy prediction, share price prediction, and other business applications.
  • Medicine. Medical applications include detecting lung nodules and heart arrhythmias and predicting adverse drug reactions.
  • Telecommunications. Telecommunications applications of neural networks include control of telephone switching networks and echo cancellation in modems and on satellite links.

Nouvelle AI

New foundations

The approach now known as nouvelle AI was pioneered at the MIT AI Laboratory by the Australian Rodney Brooks during the latter half of the 1980s. Nouvelle AI distances itself from strong AI, with its emphasis on human-level performance, in favour of the relatively modest aim of insect-level performance. At a very fundamental level, nouvelle AI rejects symbolic AI’s reliance upon constructing internal models of reality, such as those described in the section Microworld programs. Practitioners of nouvelle AI assert that true intelligence involves the ability to function in a real-world environment.

A central idea of nouvelle AI is that intelligence, as expressed by complex behaviour, “emerges” from the interaction of a few simple behaviours. For example, a robot whose simple behaviours include collision avoidance and motion toward a moving object will appear to stalk the object, pausing whenever it gets too close.
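
A toy control loop makes the point concrete: neither of the two rules below mentions stalking, yet their interaction produces it. The one-dimensional world, thresholds, and speeds are invented for illustration and are not Brooks's actual architecture.

```python
# Toy sketch of behaviour-based control in one dimension: neither rule knows
# about "stalking", yet their interaction keeps the robot trailing its target,
# pausing whenever it gets too close. All numbers are arbitrary choices.
def step(robot_pos, target_pos, too_close=2.0, speed=1.0):
    distance = target_pos - robot_pos
    if abs(distance) < too_close:      # behaviour 1: collision avoidance has priority
        return robot_pos               # pause when too close
    return robot_pos + speed * (1 if distance > 0 else -1)  # behaviour 2: move toward target

robot, target = 0.0, 5.0
for t in range(15):
    robot = step(robot, target)
    target += 0.5                      # the target keeps wandering away
    print(f"t={t:2d}  robot={robot:5.1f}  target={target:5.1f}")
```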

One famous example of nouvelle AI is Brooks’s robot Herbert (named after Herbert Simon), whose environment is the busy offices of the MIT AI Laboratory. Herbert searches desks and tables for empty soda cans, which it picks up and carries away. The robot’s seemingly goal-directed behaviour emerges from the interaction of about 15 simple behaviours. More recently, Brooks has constructed prototypes of mobile robots for exploring the surface of Mars.


Nouvelle AI sidesteps the frame problem discussed in the section The CYC project. Nouvelle systems do not contain a complicated symbolic model of their environment. Instead, information is left “out in the world” until such time as the system needs it. A nouvelle system refers continuously to its sensors rather than to an internal model of the world: it “reads off” the external world whatever information it needs at precisely the time it needs it. (As Brooks insisted, the world is its own best model—always exactly up-to-date and complete in every detail.)

The situated approach

Traditional AI has by and large attempted to build disembodied intelligences whose only interaction with the world has been indirect (CYC, for example). Nouvelle AI, on the other hand, attempts to build embodied intelligences situated in the real world—a method that has come to be known as the situated approach. Brooks quoted approvingly from the brief sketches that Turing gave in 1948 and 1950 of the situated approach. By equipping a machine “with the best sense organs that money can buy,” Turing wrote, the machine might be taught “to understand and speak English” by a process that would “follow the normal teaching of a child.” Turing contrasted this with the approach to AI that focuses on abstract activities, such as the playing of chess. He advocated that both approaches be pursued, but until recently little attention has been paid to the situated approach.

The situated approach was also anticipated in the writings of the philosopher Bert Dreyfus of the University of California at Berkeley. Beginning in the early 1960s, Dreyfus opposed the physical symbol system hypothesis, arguing that intelligent behaviour cannot be completely captured by symbolic descriptions. As an alternative, Dreyfus advocated a view of intelligence that stressed the need for a body that could move about, interacting directly with tangible physical objects. Once reviled by advocates of AI, Dreyfus is now regarded as a prophet of the situated approach.

Critics of nouvelle AI point out the failure to produce a system exhibiting anything like the complexity of behaviour found in real insects. Suggestions by researchers that their nouvelle systems may soon be conscious and possess language seem entirely premature.

Is strong AI possible?

The ongoing success of applied AI and of cognitive simulation, as described in the preceding sections of this article, seems assured. However, strong AI—that is, artificial intelligence that aims to duplicate human intellectual abilities—remains controversial. Exaggerated claims of success, in professional journals as well as the popular press, have damaged its reputation. At the present time even an embodied system displaying the overall intelligence of a cockroach is proving elusive, let alone a system that can rival a human being. The difficulty of scaling up AI’s modest achievements cannot be overstated. Five decades of research in symbolic AI have failed to produce any firm evidence that a symbol system can manifest human levels of general intelligence; connectionists are unable to model the nervous systems of even the simplest invertebrates; and critics of nouvelle AI regard as simply mystical the view that high-level behaviours involving language understanding, planning, and reasoning will somehow emerge from the interaction of basic behaviours such as obstacle avoidance, gaze control, and object manipulation.

However, this lack of substantial progress may simply be testimony to the difficulty of strong AI, not to its impossibility. Let us turn to the very idea of strong artificial intelligence. Can a computer possibly think? Noam Chomsky suggests that debating this question is pointless, for it is an essentially arbitrary decision whether to extend common usage of the word think to include machines. There is, Chomsky claims, no factual question as to whether any such decision is right or wrong—just as there is no question as to whether our decision to say that airplanes fly is right, or our decision not to say that ships swim is wrong. However, this seems to oversimplify matters. The important question is, Could it ever be appropriate to say that computers think, and, if so, what conditions must a computer satisfy in order to be so described?

Some authors offer the Turing test as a definition of intelligence. However, Turing himself pointed out that a computer that ought to be described as intelligent might nevertheless fail his test if it were incapable of successfully imitating a human being. For example, why should an intelligent robot designed to oversee mining on the Moon necessarily be able to pass itself off in conversation as a human being? If an intelligent entity can fail the test, then the test cannot function as a definition of intelligence. It is even questionable whether passing the test would actually show that a computer is intelligent, as the information theorist Claude Shannon and the AI pioneer John McCarthy pointed out in 1956. Shannon and McCarthy argued that it is possible, in principle, to design a machine containing a complete set of canned responses to all the questions that an interrogator could possibly ask during the fixed time span of the test. Like Parry, this machine would produce answers to the interviewer’s questions by looking up appropriate responses in a giant table. This objection seems to show that in principle a system with no intelligence at all could pass the Turing test.
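
Shannon and McCarthy's objection is easy to make concrete: a program whose every reply is a canned entry keyed on the conversation so far contains no reasoning at all, yet in principle a big enough table could cover every interrogation a fixed-length test allows. The entries below are invented placeholders, not anything from Parry or any real system.

```python
# Sketch of the Shannon-McCarthy objection: every reply is looked up in a
# table keyed on the conversation so far; no reasoning happens anywhere.
canned_responses = {
    (): "Hello! Lovely weather, isn't it?",
    ("What is your favourite poem?",): "I have always been fond of the sonnets.",
    ("What is your favourite poem?", "Why that one?"): "The imagery, mostly.",
    # ...in principle, one entry for every interrogation the test's time limit allows
}

def reply(conversation_so_far):
    """Return a canned response for the conversation so far."""
    return canned_responses.get(tuple(conversation_so_far),
                                "How interesting. Do go on.")

print(reply([]))
print(reply(["What is your favourite poem?"]))
```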

In fact, AI has no real definition of intelligence to offer, not even in the subhuman case. Rats are intelligent, but what exactly must an artificial intelligence achieve before researchers can claim this level of success? In the absence of a reasonably precise criterion for when an artificial system counts as intelligent, there is no objective way of telling whether an AI research program has succeeded or failed. One result of AI’s failure to produce a satisfactory criterion of intelligence is that, whenever researchers achieve one of AI’s goals—for example, a program that can summarize newspaper articles or beat the world chess champion—critics are able to say “That’s not intelligence!” Marvin Minsky’s response to the problem of defining intelligence is to maintain—like Turing before him—that intelligence is simply our name for any problem-solving mental process that we do not yet understand. Minsky likens intelligence to the concept “unexplored regions of Africa”: it disappears as soon as we discover it.

https://www.britannica.com/technology/artificial-intelligence B.J. Copeland 

Robotics and artificial intelligence

Robotics and artificial intelligence: The role of AI in robots (by Alan Martin)


It may be obvious, but robotics and artificial intelligence are two very different disciplines.

While you can have robots with artificial intelligence, it’s equally possible for robotics to flourish without AI, and indeed most systems currently do.

On this page, we’ll explain the differences between robotics and artificial intelligence, as well as exploring areas where AI is used in cutting-edge robotic technology.

What is robotics?

Robotics is the branch of engineering and computer sciences where machines are built to perform programmed tasks without further human intervention.

That’s a pretty broad definition and covers everything from a simple, mechanical arm that assembles cars, all the way to something out of science fiction like Wall-E or Amazon’s upcoming Astro ‘Alexa on wheels’ home robot.

Traditionally, robots are used when tasks are either too difficult for humans to perform well (e.g. moving extremely heavy parts on an assembly line), extremely repetitive, or both.

A robot will happily do the same exhausting task over and over again each day. A human will get bored, fatigued, or both, and that’s when mistakes slip in.

Are robotics and artificial intelligence the same thing?

Though sometimes (incorrectly) used interchangeably, robotics and artificial intelligence are very different things.

Artificial intelligence is where systems emulate the human mind to learn, solve problems and make decisions on the fly, without every instruction needing to be specifically programmed.

Robotics is where robots are built and programmed to perform very specific duties.

In most cases, this simply doesn’t require artificial intelligence, as the tasks performed are predictable, repetitive and don’t need additional ‘thought’.

What is the role of artificial intelligence in robots?

Despite this, robotics and artificial intelligence can coexist. Projects using AI in robotics are in the minority, but such designs are likely to become more common in future as our AI systems become more sophisticated. Here are some examples of existing robots that use AI.

Examples of artificial intelligence applied to robotics


Examples of robotics for households

The most obvious example of robots for households using AI is Amazon’s upcoming Astro bot.

Essentially an Echo Show on wheels, the robot uses artificial intelligence to navigate autonomously around the home, acting as eyes and ears when you’re not around thanks to a periscope camera.

This isn’t entirely new, as robot vacuums can also navigate around furniture. But here, too, AI is playing a greater role.

Most recently iRobot, the company behind Roomba, announced that new models will use machine learning and AI to spot and avoid pet excrement.

Examples of robotics in manufacturing


The scope for robotic AI in manufacturing, also known as Industry 4.0, is potentially more transformational.

This could be as simple as a robot algorithmically navigating its way around a busy warehouse, but companies like Vicarious are using AI on turnkey robotics where the task is too complex for programmed automation.

It’s not alone in this. Another example of robots used in manufacturing is the Shadow Dexterous Hand, which is agile enough to pick soft fruit without crushing it and can also learn by demonstration, potentially making it a game changer in pharma.

Scaled Robotics’ Site Monitoring Robot, meanwhile, can patrol a construction site, scan the project and analyse data for possible quality issues.

Examples of robotics for business


For any business that needs to send things within a four-mile radius, Starship Technologies’ delivery robots are a clever innovation.

Equipped with mapping systems, sensors and AI, the little robot on wheels can figure out the best route to take on the fly, all the while avoiding the dangers of the outside world.

In the catering space, things are getting even more impressive. Miso Robotics’ Flippy uses 3D and thermal vision to learn from the kitchen it’s in, and can acquire new skills over time, even though it’s named after the simple art of flipping burgers. Moley’s Robotic Kitchen is also a possible insight into the future of catering.

Examples of robotics in healthcare


Medical professionals are often tired and overworked, and in the world of healthcare, fatigue can have fatal consequences.

Robots don’t get tired, which potentially makes them a perfect substitute, and so-called “Waldo Surgeons” are capable of performing operations with incredible accuracy and a steady ‘hand.’

But robots don’t have to be able to perform the duties of highly trained surgeons to be useful.

More basic examples of robots in healthcare can free up medics’ time by taking care of lower-skilled work. Moxi, for example, can do everything from distributing PPE to running patient samples, giving human doctors more time to care for patients.

And during the coronavirus pandemic, Cobionix came up with a robot that can administer needleless vaccinations, without needing any kind of human supervision.

Examples of robotics in agriculture


As in healthcare, the future of robotics in agriculture could make labour shortages and worker fatigue feel less acute, but there’s another big potential advantage: sustainability.

Iron Ox, for example, uses AI and robotics to try to ensure that each plant gets the optimal level of sunshine, water and nutrients it needs to grow to its full potential.

With each plant analysed with robotics and AI, less water is wasted and farms produce less waste. The idea is that the AI will continue to learn from the data, improving yield for future harvests, too.

Another example of robots used in agriculture is the Agrobot E-Series, which can not only harvest strawberries with its 24 robotic arms but also uses artificial intelligence to assess the ripeness of each fruit it harvests.

Examples of robotics in aerospace


While NASA is currently looking to improve its Mars rovers’ artificial intelligence and working on an automated satellite repair robot, other companies are also keen to improve space exploration via robotics and AI.

Airbus’ CIMON, for example, is a kind of Alexa in space, designed to assist astronauts with their day-to-day tasks and reduce stress via speech recognition, while also operating as an early-warning system to detect problems.

And NASA isn’t the only team working on autonomous rovers, either. iSpace’s own rover could, with the help of onboard tools, be responsible for laying the foundations of a ‘Moon Valley’ colony away from Earth in the not-so-distant future.

Examples of robotics for military


For obvious reasons, the military is less keen to shout about its achievements than others using robotic AI for less controversial purposes, but the future of AI weapons is very real and autonomous military drones have seen actual combat.

What about software robots and artificial intelligence?

To make things a bit more confusing, the term “bot” — an abbreviation of “robot” — can also be used to describe software programs which autonomously complete tasks. And these sometimes also use artificial intelligence.

Software bots aren’t a part of robotics, as they have no physical presence, and the term can describe anything from web crawlers to chatbots.

The latter of these embraces artificial intelligence to respond appropriately to messages sent by humans.

Why wouldn’t you want to use artificial intelligence in robotics?

The main argument against using artificial intelligence in robots is that, in many cases, it simply isn’t necessary. The tasks currently outsourced to robots are predictable and repetitive, so adding any form of AI would simply be overkill when the work doesn’t require additional ‘thought’.

But there’s a flip side to this, and it’s that, to date, most robotics systems have been designed with the limits of artificial intelligence firmly in mind.

In other words, most robots have been created to perform simple, programmable tasks, because there wasn’t much scope for them to be able to do anything more complex.

With advances in artificial intelligence coming on in leaps and bounds every year, it’s certainly possible that the line between robotics and artificial intelligence will become more blurred over the coming decades.

Robotics and artificial intelligence: a bright future 

Robotics and artificial intelligence are two related but entirely different fields.

Robotics involves the creation of robots to perform tasks without further intervention, while AI is how systems emulate the human mind to make decisions and ‘learn.’

While you can have robotics with an AI element (and vice versa), both can, and usually do, exist independently of each other.

For most robots, designed to perform simple, repetitive tasks, there’s no need for advanced AI as the duties are simple, predictable and pre-programmed.

But many such AI-free robotics systems were created with past limitations of artificial intelligence in mind, and as the technology continues to advance in leaps and bounds each year, robotics manufacturers may feel increasingly confident in pushing the limits of what can be achieved by marrying the two disciplines.

The examples of AI in manufacturing, aerospace, healthcare and agriculture highlighted above give us good reason to feel confident that the future is bright for robotics and artificial intelligence.

The next big innovation may feel like science fiction today, but it could be eminently possible tomorrow.

Source : https://aibusiness.com/author.asp?section_id=789&doc_id=773741

Artificial Intelligence-Based Management Software Will Become a Business Trend in 2022

Management software based on artificial intelligence (AI) is predicted to become a business trend in 2022, as it is expected to simplify business decision-making with tiered risk assessment, automatic product upselling, workflow automation, and much more.

According to Antaranews.com, this innovation is a breakthrough because, to date, no business automation system provider integrated with AI has been available in Indonesia.

The presence of this feature within an Enterprise Resource Planning (ERP) system is expected to give industry a more sophisticated set of digitalization tools for generating sustainable profitability.

HashMicro Business Development Director Lusiana Lu said in a press release on Monday that, according to research by CTI Group, this AI-ERP software also has the potential to increase company productivity by 40 percent by 2035 and to deliver US$14 trillion in Gross Value Added (GVA) across 16 industries.

“With such fantastic revenue potential, we have designated 2022 as the year in which we intensify AI-based digitalization,” she said.

Lusiana continued that AI is revolutionizing the role of existing ERP systems. While an ERP system focuses on business process automation, data analysis, and business centralization, AI features can complement all of that with business optimization suggestions drawn from various kinds of business information: forecasting, historical data, auto-actions, and potential efficiency gains.

“The main problem we encounter at companies is the lack of resources for long-term business analysis. That is why we are offering this AI solution. First, we focus on eliminating obstacles in the form of administrative processes and complex data calculations. Then, the AI acts as a virtual assistant that supports decision-making and flags potential risks,” she said.

The integration of these two technologies will affect not only productivity but also have a significant impact on a business’s ability to adapt to fluctuating markets, while freeing up more time and material resources to focus on business growth.

Responding to the recent rise in technology adoption across industry, Lusiana is also confident that these business opportunities and enticing forecast figures can be realized in the future.

“Contrary to popular belief, companies in Indonesia are actually very open to investing in technology. This is especially true of family businesses run by their second and third generations. Besides being more familiar with technology, today’s generation of business operators is keenly aware of the importance of applying technology,” said Lusiana.

In the short term, HashMicro is focused on ensuring that this AI-based ERP system is easily accessible to businesspeople. For the long term, Lusiana and her team are in the research and development stage to launch more smart features to support smart business.

“Indonesia is indeed focused on maturing the Industry 4.0 revolution. We are working there to support the government, while also preparing for Industry 5.0,” Lusiana concluded. (Source: https://www.cloudcomputing.id/berita/software-manajemen-berbasis-ai-diprediksi-tren-bisnis)