Sunday, August 12, 2007

Artificial Intelligence




The term Artificial Intelligence (AI) was first used by John McCarthy, who used it to mean "the science and engineering of making intelligent machines".[1] It can also refer to intelligence as exhibited by an artificial (man-made, non-natural, manufactured) entity. While AI is the generally accepted term, others, including Computational intelligence and Synthetic intelligence, have been proposed as potentially "more accurate".[2] The terms strong and weak AI can be used to narrow the definition for classifying such systems. AI is studied in the overlapping fields of computer science, psychology, philosophy, neuroscience, and engineering; it deals with intelligent behavior, learning, and adaptation, and is usually developed using customized machines or computers.

Research in AI is concerned with producing machines to automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, and handwriting, natural language, speech, and facial recognition. As such, the study of AI has also become an engineering discipline, focused on providing solutions to real-life problems, knowledge mining, software applications, and strategy games such as computer chess and other video games. One of the biggest difficulties with AI is that of "comprehension". Many devices have been created that can do amazing things, but critics of AI claim that no actual comprehension by the AI machine has taken place.

List of applications

Typical problems to which AI methods are applied
Pattern recognition
Optical character recognition
Handwriting recognition
Speech recognition
Face recognition
Artificial Creativity
Computer vision, Virtual reality and Image processing
Diagnosis (artificial intelligence)
Game theory and Strategic planning
Game artificial intelligence and Computer game bot
Natural language processing, Translation and Chatterbots
Non-linear control and Robotics
Other fields in which AI methods are implemented
Artificial life
Automated reasoning
Automation
Biologically-inspired computing
Colloquis
Concept mining
Data mining
Knowledge representation
Semantic Web
E-mail spam filtering
Robotics
Behavior-based robotics
Cognitive robotics
Cybernetics
Developmental robotics
Epigenetic robotics
Evolutionary robotics
Hybrid intelligent system
Intelligent agent
Intelligent control
Litigation
Lists of researchers, projects & publications
List of AI researchers
List of AI projects
List of important AI publications

AI in other disciplines

Philosophy

Main article: Philosophy of artificial intelligence
The strong AI vs. weak AI debate ("can a man-made artifact be conscious?") is still a hot topic amongst AI philosophers. This involves philosophy of mind and the mind-body problem. Most notably Roger Penrose in his book The Emperor's New Mind and John Searle with his "Chinese room" thought experiment argue that true consciousness cannot be achieved by formal logic systems, while Douglas Hofstadter in Gödel, Escher, Bach and Daniel Dennett in Consciousness Explained argue in favour of functionalism. In many strong AI supporters' opinions, artificial consciousness is considered the holy grail of artificial intelligence. Edsger Dijkstra famously opined that the debate had little importance: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."
Epistemology, the study of knowledge, also makes contact with AI, as engineers find themselves debating similar questions to philosophers about how best to represent and use knowledge and information (e.g., semantic networks).

Neuro-psychology
Main article: Cognitive science
Techniques and technologies in AI that have been directly derived from neuroscience include neural networks, Hebbian learning, and the relatively new field of Hierarchical Temporal Memory, which simulates the architecture of the neocortex.
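To make the connection concrete, here is a minimal sketch of the plain Hebbian rule ("cells that fire together wire together") in Python. The rate-based formulation, variable names, and toy data are illustrative assumptions, not taken from any particular library or from the text above.

```python
import numpy as np

def hebbian_update(weights, pre, post, learning_rate=0.01):
    """One step of the plain Hebbian rule: dW = eta * outer(post, pre)."""
    # weights: (n_post, n_pre) matrix; pre/post: activity vectors.
    return weights + learning_rate * np.outer(post, pre)

# Toy demonstration: two presynaptic inputs that always fire together.
w = np.zeros((1, 2))
for _ in range(100):
    x = np.array([1.0, 1.0])   # correlated presynaptic activity
    y = w @ x + 0.1            # postsynaptic response plus a small drive
    w = hebbian_update(w, x, y)

print(w)  # both weights have grown together, reflecting the correlation
```

Note that the plain rule is unstable if run indefinitely; practical variants (for example, Oja's rule) add a normalization term.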

Computer Science
Notable examples include the languages LISP and Prolog, which were invented for AI research but are now used for non-AI tasks. Hacker culture first sprang from AI laboratories, in particular the MIT AI Lab, home at various times to such luminaries as John McCarthy, Marvin Minsky, Seymour Papert (who developed Logo there) and Terry Winograd (who abandoned AI after developing SHRDLU).

Business
Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition (BBC News, 2001).[8] A medical clinic can use artificial intelligence systems to organize bed schedules, plan staff rotations, and provide medical information. Many practical applications depend on artificial neural networks, networks whose organization mimics the brain's neurons and which have been found to excel at pattern recognition. Financial institutions have long used such systems to detect charges or claims outside the norm, flagging these for human investigation. Neural networks are also being widely deployed in homeland security, speech and text recognition, medical diagnosis (such as the Concept Processing technology in EMR software), data mining, and e-mail spam filtering.
Robots have become common in many industries. They are often given jobs that are considered dangerous to humans. Robots have proven effective in jobs that are very repetitive, where a lapse in concentration may lead to mistakes or accidents, and in jobs that humans may find degrading. General Motors uses around 16,000 robots for tasks such as painting, welding, and assembly. Japan is the world leader in using and producing robots. In 1995, 700,000 robots were in use worldwide, over 500,000 of which were from Japan.[9]

Fiction
Main article: Artificial intelligence in fiction
In science fiction, AI is most commonly portrayed as an emerging power trying to overthrow human authority (a society controlled by a supercomputer) or as futuristic humanoid service robots. Alternative plots depict civilizations that chose to be managed by AI or to ban AI completely. The best-known examples include films such as The Matrix and A.I. Artificial Intelligence.
The inevitability of world domination by AI is also argued by some science and futurist writers such as Kevin Warwick, Hans Moravec and Isaac Asimov. The concept is also explored in the Uncanny Valley hypothesis.

Toys and games
The 1990s saw some of the first attempts to mass-produce domestically oriented forms of basic artificial intelligence for education or leisure. This prospered greatly with the Digital Revolution and helped introduce people, especially children, to a life of dealing with various types of A.I., specifically in the form of Tamagotchis and Giga Pets, the Internet (e.g., basic search-engine interfaces are one simple form), and the first widely released robot, Furby. A mere year later an improved type of domestic robot was released in the form of AIBO, a robotic dog with intelligent features and autonomy.

Research challenges



[Image: Stanley, the winner of the 2005 DARPA Grand Challenge]

[Image: A legged league game from RoboCup 2004 in Lisbon, Portugal]
The €800 million EUREKA Prometheus Project on driverless cars (1987-1995) showed that fast autonomous vehicles, notably those of Ernst Dickmanns and his team, could drive long distances (over 100 miles) in traffic, automatically recognizing and tracking other cars through computer vision and passing slower cars in the left lane. But the challenge of safe door-to-door autonomous driving in arbitrary environments will require additional research.
The DARPA Grand Challenge was a race for a $2 million prize in which cars had to drive themselves over a hundred miles of challenging desert terrain without any communication with humans, using GPS, computers, and a sophisticated array of sensors. In 2005, the winning vehicles completed all 132 miles of the course in just under seven hours. This was the first in a series of challenges aimed at a congressional mandate stating that by 2015 one-third of the operational ground combat vehicles of the US Armed Forces should be unmanned.[6] For November 2007, DARPA has introduced the DARPA Urban Challenge, run on a sixty-mile urban course. DARPA has set the prize money at $2 million for first place, $1 million for second, and $500,000 for third.
A popular challenge amongst AI research groups is the RoboCup and FIRA annual international robot soccer competitions. Hiroaki Kitano has formulated the International RoboCup Federation challenge: "In 2050 a team of fully autonomous humanoid robot soccer players shall win the soccer game, comply [sic] with the official rule [sic] of the FIFA, against the winner of the most recent World Cup."[7]
In the post-dot-com-boom era, some search engine websites use a simple form of AI to provide answers to questions entered by the visitor. A question such as "What is the tallest building?" can be entered into the search engine's input form, and a list of answers will be returned.

Mechanisms

Generally speaking, AI systems are built around automated inference engines, including forward reasoning and backward reasoning. Based on certain conditions ("if"), the system infers certain consequences ("then"). AI applications are generally divided into two types in terms of consequences: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of most AI systems.
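As a rough illustration of this if-then mechanism and of the classifier/controller distinction, here is a minimal forward-reasoning sketch in Python. The rules reuse the toy examples from the paragraph above; the engine itself is only an assumed organization, not a reference implementation of any particular system.

```python
# Minimal forward-chaining sketch: each rule maps a condition to a consequence.
# "Classifier" rules add a new fact; "controller" rules queue an action.

classifier_rules = [
    ("shiny", "diamond"),       # if shiny then diamond
]
controller_rules = [
    ("shiny", "pick up"),       # if shiny then pick up
]

def forward_chain(facts):
    """Repeatedly apply classifier rules until no new facts can be inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, consequence in classifier_rules:
            if condition in facts and consequence not in facts:
                facts.add(consequence)
                changed = True
    return facts

def choose_actions(facts):
    """Controller rules classify the situation first, then infer actions."""
    facts = forward_chain(facts)
    return [action for condition, action in controller_rules if condition in facts]

print(forward_chain({"shiny"}))   # {'shiny', 'diamond'}
print(choose_actions({"shiny"}))  # ['pick up']
```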
Classifiers make use of pattern recognition for condition matching. In many cases this does not mean an exact match, but rather the closest match. Techniques to achieve this divide roughly into two schools of thought: Conventional AI and Computational intelligence (CI).
Conventional AI research focuses on attempts to mimic human intelligence through symbol manipulation and symbolically structured knowledge bases. This approach limits the situations to which conventional AI can be applied. Lotfi Zadeh stated that "we are also in possession of computational tools which are far more effective in the conception and design of intelligent systems than the predicate-logic-based methods which form the core of traditional AI." These techniques, which include fuzzy logic, have become known as soft computing. These often biologically inspired methods stand in contrast to conventional AI and compensate for the shortcomings of symbolic AI.[3] The two methodologies have also been labeled neats vs. scruffies, with neats emphasizing the use of logic and formal representation of knowledge, while scruffies take an application-oriented, heuristic, bottom-up approach.[4]

Classifiers
Classifiers are functions that can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set.
When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are mainly statistical and machine learning approaches.
A wide range of classifiers is available, each with its strengths and weaknesses. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all problems; this is sometimes referred to as the "no free lunch" theorem. Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine it. Determining a suitable classifier for a given problem is, however, still more an art than a science.
The most widely used classifiers are the neural network, support vector machine, k-nearest neighbor algorithm, Gaussian mixture model, naive Bayes classifier, and decision tree.
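As a concrete example of one of the classifiers listed above, here is a minimal k-nearest neighbor sketch in plain Python. The two-dimensional data set and class labels are invented purely for illustration.

```python
import math
from collections import Counter

def knn_classify(observation, data_set, k=3):
    """Classify an observation by majority vote among its k nearest neighbors.

    data_set is a list of (pattern, class_label) pairs, i.e. observations
    combined with their class labels, as described above.
    """
    neighbors = sorted(
        data_set,
        key=lambda item: math.dist(observation, item[0]),
    )[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy training data: two-dimensional patterns with predefined classes.
training = [
    ((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
    ((5.0, 5.0), "B"), ((5.2, 4.9), "B"), ((4.8, 5.1), "B"),
]
print(knn_classify((1.1, 0.9), training))  # expected: "A"
print(knn_classify((5.1, 5.0), training))  # expected: "B"
```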

Conventional AI
Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as symbolic AI, logical AI, neat AI and Good Old Fashioned Artificial Intelligence (GOFAI). (Also see semantics.) Methods include:
Expert systems: apply reasoning capabilities to reach a conclusion. An expert system can process large amounts of known information and provide conclusions based on them.
Case-based reasoning: stores a set of problems and answers in organized data structures called cases. When presented with a new problem, a case-based reasoning system finds the case in its knowledge base most closely related to the new problem and presents its solution as output, with suitable modifications (see the sketch after this list).[5]
Bayesian networks
Behavior-based AI: a modular method of building AI systems by hand.
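The sketch referenced in the case-based reasoning entry above follows. It shows only the retrieve-and-adapt cycle; the case base, similarity measure, and adaptation rule are invented for illustration and are not from any real system.

```python
# Minimal case-based reasoning sketch: retrieve the most similar stored case,
# then adapt its solution to the new problem.

case_base = [
    # (problem features, stored solution)
    ({"symptom": "fever", "severity": 2}, "rest and fluids"),
    ({"symptom": "fever", "severity": 8}, "see a doctor"),
    ({"symptom": "cough", "severity": 3}, "cough syrup"),
]

def similarity(problem, case_problem):
    """Crude similarity: a matching symptom counts most, severity breaks ties."""
    score = 10.0 if problem["symptom"] == case_problem["symptom"] else 0.0
    return score - abs(problem["severity"] - case_problem["severity"])

def solve(problem):
    # Retrieve: find the most closely related case in the knowledge base.
    best_problem, best_solution = max(
        case_base, key=lambda case: similarity(problem, case[0])
    )
    # Adapt: make a suitable modification before presenting the solution.
    if problem["severity"] > best_problem["severity"] + 3:
        return best_solution + " (and monitor closely)"
    return best_solution

print(solve({"symptom": "fever", "severity": 7}))  # closest case -> "see a doctor"
```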

Computational intelligence
Computational intelligence involves iterative development or learning (e.g., parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing. Subjects in computational intelligence as defined by IEEE Computational Intelligence Society mainly include:
Neural networks: trainable systems with very strong pattern recognition capabilities.
Fuzzy systems: techniques for reasoning under uncertainty that have been widely used in modern industrial and consumer product control systems; capable of working with concepts such as 'hot', 'cold', 'warm' and 'boiling'.
Evolutionary computation: applies biologically inspired concepts such as populations, mutation, and survival of the fittest to generate increasingly better solutions to a problem. These methods most notably divide into evolutionary algorithms (e.g., genetic algorithms) and swarm intelligence (e.g., ant algorithms); a minimal sketch follows this list.
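The sketch below illustrates the population, mutation, and survival-of-the-fittest loop with a tiny genetic-style algorithm. The target-string objective, mutation scheme, and parameters are assumptions chosen for brevity (there is no crossover step), not a standard implementation.

```python
import random

random.seed(0)
TARGET = "HELLO"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    """Number of characters matching the target (higher is better)."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Randomly replace characters, mimicking biological mutation."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

def evolve(population_size=50, generations=300):
    population = [
        "".join(random.choice(ALPHABET) for _ in TARGET)
        for _ in range(population_size)
    ]
    for _ in range(generations):
        # Survival of the fittest: keep the better half of the population.
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        survivors = population[: population_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

print(evolve())  # converges toward "HELLO" over successive generations
```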
With hybrid intelligent systems, attempts are made to combine these two groups. Expert inference rules can be generated through neural networks, or production rules can be derived from statistical learning, as in ACT-R or CLARION. It is thought that the human brain uses multiple techniques to both formulate and cross-check results. Thus, systems integration is seen as promising and perhaps necessary for true AI, especially the integration of symbolic and connectionist models (e.g., as advocated by Ron Sun).

History



Main articles: History of artificial intelligence and Timeline of artificial intelligence
The field of artificial intelligence dawned in the 1950s. Since then, there have been many achievements in the history of artificial intelligence; some of the more notable moments include:
1950: Alan Turing introduces the Turing test, intended to test a machine's capability to participate in human-like conversation.
1951: The first working AI programs are written to run on the Ferranti Mark I machine at the University of Manchester: a checkers-playing program by Christopher Strachey and a chess-playing program by Dietrich Prinz.
1956: John McCarthy coins the term "artificial intelligence" as the topic of the Dartmouth Conference.
1958: John McCarthy invents the Lisp programming language.
1965: Edward Feigenbaum initiates Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds from scientific instrument data. It was the first expert system.
1966: The Machine Intelligence workshop is held at Edinburgh, the first of an influential annual series organized by Donald Michie and others.
1972: The Prolog programming language is developed by Alain Colmerauer.
1973: The Edinburgh Freddy assembly robot: a versatile computer-controlled assembly system.
1974: Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrates a very practical rule-based approach to medical diagnosis, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems.
1991: AI logistics systems deployed in the first Gulf War save the US more money than had been spent on all AI research since 1950[citation needed].
1994: With passengers on board, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars.
1997: The Deep Blue chess machine (IBM) beats the world chess champion, Garry Kasparov.
1998: Tiger Electronics' Furby is released, becoming the first successful attempt to bring a basic form of A.I. into a domestic environment.
1999: Sony introduces AIBO; it becomes one of the first A.I. "pets" that is also autonomous.
2004: DARPA introduces the DARPA Grand Challenge, requiring competitors to produce autonomous vehicles in competition for prize money.
2005: Honda's ASIMO robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in restaurant settings.
During the 1970s and 1980s, AI development experienced an "AI winter" due to failure to meet expectations and a lack of government funding.
During the 1990s and 2000s, AI became strongly influenced by probability theory and statistics. Bayesian networks are the focus of this movement, providing links to more rigorous topics in statistics and engineering such as Markov models and Kalman filters, and bridging the divide between "neat" and "scruffy" approaches. This new school of AI is sometimes called machine learning. The last few years have also seen a surge of interest in applying game theory to AI decision making. After the September 11, 2001 attacks, there was renewed interest in and funding for threat-detection AI systems, including machine vision research and data mining.
