Creating computers that exhibit human-level intelligence, or Artificial Intelligence as most know it, has been on the computer science agenda for decades. In fact, it has been a kind of vision quest, full of trials and tribulations, a journey that has, metaphorically speaking, tried the souls of humans and machines alike. The full simulation of human intelligence by digital computing has not been achieved. Not even close. That does not mean the quest is without value; in fact, as is often the case, what has been learned along the way is more valuable than the final goal itself.
In this case, over the history of AI, research broke away from attempts at general intelligence into separate channels (roughly described):
Reasoning (logic, problem solving)
Knowledge representation (as a concept and also in terms of memory and communication)
Planning (the ability to form goals and measure achievement)
Learning (a core characteristic of AI, strongly correlated with memory)
Natural language (the ability to communicate through language)
Motion and physical manipulation (e.g. robotics)
Perception (having senses, especially vision, with awareness of environment)
Social intelligence (mostly in the human context, meaning emotions and social skills)
Creativity (the ability to think and act beyond programmed parameters)
Lined up like this, it’s easy to see what a tall order AI research set for itself. Here be huge unknowns, even for basic research on human cognition, let alone attempts to recreate these abilities in digital processes. Put another way: if you don’t know what reasoning is, how can you imitate it? Nevertheless, research into AI, quixotic though it may have been at the level of ‘general intelligence’, has revealed and continues to reveal important information about computing (software and hardware) and occasionally about the original human processes.
Almost every time somebody writes a novel or makes a movie about computers or robots in the future, artificial intelligence is assumed. AI helps create powerful images. The baleful red eye of the HAL 9000 computer in the movie 2001: A Space Odyssey is a global icon for the potential malevolence of artificially intelligent computers. Ditto for the robots of the Terminator series. Often, but by no means always, AI is associated with faulty if not downright evil mentality. This probably reflects an understandable discomfort with the idea of sharing a similar intelligence with something that isn’t human, even if humans created it in the first place. A strong theme in many fictional works is that AI entails learning, and when it learns, it may (a) become smarter than humans, (b) know more than humans, or (c) not reach the same conclusions about life (etc.) as humans, and is therefore, ipso facto, (d) dangerous.
Functional robots are in use the world over. Software applications contain elements of knowledge manipulation and deductive reasoning. Sensors are employed with a high degree of coordination by computers. Computers can now recognize human speech. For these and many other areas, AI methods and technology are vital. The bits and pieces of research, while not adding up to full artificial intelligence, power the application of computers and robotics in manifold ways. This will continue and become more important in the future.
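To make the idea of deductive reasoning in everyday software concrete, here is a minimal sketch of one classic technique, forward chaining over if-then rules. The rule names and facts below are purely illustrative assumptions, not drawn from any particular system mentioned in this article.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules (premises -> conclusion) until no new
    facts can be derived; return the closed set of facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only if all its premises are already known
            # and its conclusion is new.
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rule base: each rule is (set of premises, conclusion).
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

derived = forward_chain({"has_fever", "has_cough"}, rules)
print(sorted(derived))
```

The engine keeps looping until a full pass adds nothing new, so chained rules (here, symptoms leading to a diagnosis leading to a recommendation) resolve automatically; real expert-system shells add conflict resolution and efficiency tricks on top of this basic loop.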
SciTechStory Impact Areas
The study of AI is obviously affected by the increase in Computer Power. In almost any form, AI programming is ‘compute intensive’, so the bigger, faster, and better the computing environment, the better for AI. AI is (or at least probably should be) influenced by research in almost all areas of neuroscience, but most especially those involving the impact areas of Neuro-intelligence and Neuro-memory. While not many AI researchers are still attempting exact mimicry of human thought processes, new information in those areas could be helpful. In its applications, AI and its sub-forms touch many other impact areas, notably Communications, VR/AR (Virtual or Augmented Reality), and, of course, General Robotics.