2. Around this time, many mathematicians and scientists were talking
about the concept of artificial intelligence. It started taking shape when
Alan Turing, in his 1950 paper “Computing Machinery and Intelligence”,
gave us a framework to build intelligent machines and test their
intelligence.
Alan Turing suggested that since humans use available information as
well as reason in order to solve problems and make decisions, computers
can be taught to behave the same way.
1950
3. But we couldn’t progress much around that time for one fundamental
reason: computers at that time lacked the ability to store commands; they
could only execute them. In other words, computers could follow
instructions and do what we wanted them to do, but they could not
remember what they had done.
And this ability to remember previous actions and their outcomes is a
basic requirement for developing machine intelligence. Computers at
that time were also very expensive.
1950
4. In 1955-56, Allen Newell, Cliff Shaw and Herbert Simon came up with
Logic Theorist, a program designed to mimic the problem-solving skills of
a human. It is considered to be the first ever artificial intelligence
program.
In 1956, John McCarthy and Marvin Minsky organized the Dartmouth Summer
Research Project on Artificial Intelligence (DSRPAI) for collaborative
work on artificial intelligence. Logic Theorist was presented at DSRPAI.
1955-56
5. From 1957 to 1974, computers could store more information and became
faster, cheaper, and more accessible, which boosted research in AI.
Around this time machine learning algorithms improved considerably. Most
importantly, people gained a clearer understanding of which algorithms to
apply to their problems.
1957-74
6. In 1970, Marvin Minsky predicted, “from three to eight years we will have
a machine with the general intelligence of an average human being.”
However, this turned out to take far longer than predicted.
The main reason was that the computers of the time were far too weak to
exhibit such intelligence.
1957-74
7. Around this time John Hopfield and David Rumelhart popularized deep
learning techniques, which allowed computers to learn from experience.
Around the same time, Edward Feigenbaum introduced expert systems, which
mimicked the decision-making process of a human expert.
The Japanese government also started supporting expert systems and other
AI-related research as part of its Fifth Generation Computer Project
(FGCP).
1980s
8. In 1997, reigning world chess champion and grandmaster Garry Kasparov
was defeated by IBM’s Deep Blue, a chess-playing computer program.
In the same year, speech recognition software developed by Dragon
Systems was implemented on Windows. This gave a new direction to
applying AI to problems that humans routinely solve.
1990-2000
9. Neural networks have been theorized since the mid-1900s, but we could
only implement them successfully in the 2000s, once compute and data
constraints were lifted.
The concept behind a neural network is more or less a set of stacked
and connected linear regressions, each followed by an activation function
(a minimal sketch follows below). As more of these neurons are added, the
network has more parameters to train and can therefore model more
complex patterns.
2000s
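To make the idea concrete, here is a minimal sketch in plain NumPy of a two-layer network built from stacked linear maps and an activation function. The layer sizes are illustrative and the weights are random, not trained.

```python
import numpy as np

def relu(x):
    # Activation function: keep positive values, zero out the rest.
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# A tiny two-layer network: each layer is a linear map (a bank of
# linear regressions) followed by a nonlinearity.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # layer 1: 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # layer 2: 4 hidden units -> 1 output

def forward(x):
    h = relu(W1 @ x + b1)    # first stack of linear regressions + activation
    return W2 @ h + b2       # second linear layer produces the prediction

print(forward(np.array([0.5, -1.0, 2.0])))
```

Training would adjust W1, b1, W2 and b2 so that the output matches known targets; here they are random purely for illustration.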
10. Graphics Processing Units (GPUs) allowed for deeper networks. Deep
learning took over the machine learning space and in effect started a
much more widespread machine learning revolution than ever before.
2000s
11. As deep learning was taking shape, computer science problems in the
language and image domains were being solved much more effectively with
this new form of AI.
Language models began to leverage and adapt the concept of Recurrent
Neural Networks (RNNs), while computer vision adopted the
Convolutional Neural Network (CNN).
2000-2020
12. When we speak, the words in a sentence are not chosen at random;
rather, each next word depends on the context of the sentence. RNNs, and
more specifically Long Short-Term Memory (LSTM) networks, are built to
model exactly this dependence (see the sketch below).
This method gave a powerful boost to Natural Language Processing (NLP).
Over the last 10 years, CNNs have likewise led to huge breakthroughs in
computer vision.
2000-2020
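As an illustration, here is a minimal, hypothetical next-word model in PyTorch. The vocabulary size and layer dimensions are made-up; the point is that the LSTM carries the context of earlier words forward and then scores which word is likely to come next.

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 32, 64   # illustrative sizes

class NextWordLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)              # hidden state at each step carries the context so far
        return self.out(h[:, -1, :])     # scores for the word that comes next

model = NextWordLSTM()
dummy_sentence = torch.randint(0, vocab_size, (1, 5))   # five token ids
print(model(dummy_sentence).shape)                      # torch.Size([1, 1000])
```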
13. Eventually, RNNs and CNNs were merged to develop new methods in
language processing, and this is rapidly transforming the field.
In image processing, the Residual Neural Network (ResNet) is
revolutionizing the space and helping us build deeper, more effective
models (a minimal residual block is sketched below).
2000-2020
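Below is a minimal sketch of the core residual idea, assuming PyTorch and illustrative channel sizes: the block adds its input back to what its convolutional layers compute, which is what lets very deep stacks remain trainable.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # The block learns a residual F(x) and adds the input x back,
    # so the gradient has a shortcut path through very deep networks.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        residual = self.conv2(self.relu(self.conv1(x)))
        return self.relu(x + residual)   # skip connection: input added back

block = ResidualBlock(channels=8)
print(block(torch.randn(1, 8, 16, 16)).shape)   # torch.Size([1, 8, 16, 16])
```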
14. On the other side, more advanced models could produce better results
because of the availability of huge and more accurate datasets.
Neural networks are typically trained using the backpropagation method,
which requires substantial compute and careful engineering, especially
for the more complex architectures (a one-step sketch follows below). The
availability of state-of-the-art hardware paved the way for AI to take
today’s shape.
2000-2020
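The following is a minimal sketch of a single backpropagation training step using PyTorch’s automatic differentiation; the model, data, and learning rate are illustrative. The loss is propagated backward through the layers to obtain gradients, and the optimizer then updates every weight.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x, y = torch.randn(8, 3), torch.randn(8, 1)   # dummy batch of inputs and targets

prediction = model(x)
loss = loss_fn(prediction, y)
loss.backward()        # backpropagation: compute gradients layer by layer
optimizer.step()       # nudge every weight along its gradient
optimizer.zero_grad()  # clear gradients before the next step
print(loss.item())
```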
15. Finally, none of this progress would have been possible without the
explosion of language and library support that came with the rise of
Python and libraries like TensorFlow, Keras, and PyTorch. Python has
grown to be the most prominent programming language for machine learning
thanks to its ease of use and library support.
2000-2020
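As a rough illustration of that ease of use, a small model can be defined and compiled in a handful of lines with Keras; the layer sizes here are arbitrary and assume TensorFlow is installed.

```python
from tensorflow import keras

# A tiny fully connected model, defined and compiled in a few lines.
model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```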
16. This is just what we have done so far. The seeds have been planted for AI
to grow much bigger in the near future and to help humanity in fields from
healthcare to finance and from education to space exploration. Some of the
future leaders in this field may be reading this already. That may be you.
We are excited to see where AI takes us next.
The Future