In the 1950s, the first artificial intelligence laboratories were established at Carnegie Mellon University and MIT. Early successes created a sense of optimism and raised false hopes that some kind of grand unified theory of mind would soon emerge and make general AI possible.
The promise of artificial intelligence was summed up in the classic 1968 film 2001: A Space Odyssey, featuring the artificially intelligent computer HAL.
In 1982, following the recommendations of technology foresight exercises, Japan's Ministry of International Trade and Industry initiated the Fifth Generation Computer Systems project to develop massively parallel computers that would take computing and AI to a new level.
The United States responded with a DARPA-led project that involved large corporations, such as Kodak and Motorola.
But despite some significant results, the grand promises failed to materialise, and the public began to see AI as failing to live up to its potential. This culminated in the "AI winter" of the 1990s, when the term AI itself fell out of favour, funding decreased, and interest in the field temporarily declined.
Researchers concentrated on more focused goals, such as machine learning, robotics, and computer vision, though research in pure AI continued at reduced levels.