Let me explain with some examples. Traditionally we used the Yellow Pages to search for businesses around us. In the programming world, we replicated this by creating an online database of businesses and searching through that database using indexes. When we search Google for "coffee near me", it has to recognize where we are: our location is determined or approximated. Then it searches not only for coffee shops but also for restaurants and other businesses that serve coffee. It also approximates nearness, whether that is 1 mile or 100 yards. Behind the scenes, AI is at play in Google Search.
Another example is IVR. When we call a call center, it greets us with a hello but then gives us options like "press 1 to do this, press 9 to repeat the menu". Does that feel intelligent? No. But when we ask Siri or Alexa a question, it responds with intelligence; it feels as if we are talking to a real person. The model behind Alexa or Siri keeps improving itself with machine learning to become more intelligent.
Another good example is searching through the aisles of a store. When eCommerce started, the same store model was simply replicated online. But Amazon and other pioneers brought AI into online shopping. Now when you search for a product on Amazon, it can suggest items that go with the main product and create bundles for you. It also tries to find the product most relevant to your need, and even shows you which products can be delivered to you earliest.
In a regular program, we have some data X and we want the output Y. For this we write a function or algorithm that computes Y. For example, to calculate income tax we code all the income tax rules into our software. Having done that, we can calculate the income tax for one person or for thousands of people very quickly. But if the income tax rules change in the future, we have to code them into our program again. Luckily the rules change only about once a year, so keeping the code up to date is not that difficult.
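A rule-based program of this kind can be sketched as follows. The tax brackets and rates here are purely hypothetical, chosen only to illustrate hand-coded rules:

```python
def income_tax(income):
    """Compute tax from hand-coded slab rules (hypothetical rates)."""
    # Each entry is (upper limit of slab, tax rate in that slab).
    brackets = [(10_000, 0.00), (40_000, 0.10), (float("inf"), 0.20)]
    tax = 0.0
    lower = 0
    for upper, rate in brackets:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

# 50,000: first 10k tax-free, next 30k at 10%, last 10k at 20%
print(income_tax(50_000))
```

If the rules change, we edit the `brackets` table and redeploy; the logic itself never "learns" anything.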
In an AI program we also have X as input data, and we have Y as output data for some values of X, but we do not know how to get Y from X. So our AI program tries different functions that map X to Y. This approximate function is the output of the AI program; the process is called learning from data. A good example is image recognition. Compared with the income tax calculation, there are so many variations in images that it is practically impossible to write if-else statements that recognize the elements in an image. That is why we use AI for image recognition.
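A tiny sketch of "learning from data": the program below is given (X, Y) pairs but not the rule linking them, and recovers the function with a least-squares fit. The data values are invented for illustration:

```python
# The program sees pairs (x, y) but is never told the rule connecting them.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]   # secretly y = 2x, but the program does not know this

# Least-squares estimate of the slope w in y ≈ w * x: the "learned" function.
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(w)  # the program recovers f(x) = 2x from the data alone
```

Real machine learning searches much larger function spaces, but the idea is the same: the function is the output, not the input, of the program.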
Let us understand this with a simple example. We have data about fruits: the weight, color, height, shape, etc. of each fruit.
This is a classification problem, and we have ML algorithms that can help us classify data. When we give this data to our AI program, it can determine that between an apple and a banana, shape is the better classifier. We could use color as a classifier, but it may not give good results, because a green apple can have a color similar to a green banana. The AI program tells us which classifier gives the better results.
Similarly, between an apple and a pineapple, size is the better classifier.
When it comes to apples and oranges, the sizes can be quite similar; in this case color is a very good classifier. The AI program keeps suggesting the best features to use for classification, along with its confidence in them.
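The idea of "which feature classifies best" can be sketched with a depth-1 decision rule (a decision stump). The fruit records below are made up for illustration; each feature value simply votes for the label it appears with most often, and we score each feature by how many rows that rule gets right:

```python
# Toy labeled fruit data: (weight in g, color, shape, label). Values are illustrative.
fruits = [
    (150, "red",    "round", "apple"),
    (150, "green",  "round", "apple"),
    (120, "yellow", "long",  "banana"),
    (120, "yellow", "long",  "banana"),
    (150, "orange", "round", "orange"),
    (120, "orange", "round", "orange"),
]

def feature_accuracy(index):
    """Accuracy of a one-feature classifier: each feature value predicts
    the label it co-occurs with most often (a depth-1 decision stump)."""
    votes = {}
    for row in fruits:
        votes.setdefault(row[index], {}).setdefault(row[-1], 0)
        votes[row[index]][row[-1]] += 1
    correct = sum(max(counts.values()) for counts in votes.values())
    return correct / len(fruits)

for name, idx in [("weight", 0), ("color", 1), ("shape", 2)]:
    print(name, feature_accuracy(idx))
```

On this toy data color separates all three fruits perfectly, while weight and shape each confuse two of them; this is the kind of ranking a real classifier-selection step produces, just with proper statistics behind it.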
Now comes the question: what is the difference between AI and ML? Are artificial intelligence and machine learning the same thing, or is one a subset of the other?
In my view, artificial intelligence is the broader subject: it uses technologies from machine learning, data engineering, computer science, etc. to create AI programs. ML is a subset of AI; it is the set of tools and techniques that we use for artificial intelligence. Compare robotics, which is a field that draws on the laws of physics, electronics, and mechanics.
An AI model is a program, or set of programs, that we use for a problem. A model can be a single piece of code, or many models put together to form a higher-level model.
An example is the Google Translate app. It can translate the text in images from one language to another, which is very useful when you are travelling in foreign countries. Here our AI model consists of multiple models. One model identifies signs containing text: you may point the camera at many things, like a flower, a tree, or a road, but it ignores those objects and focuses on the signs with text on them. The next model takes the output of the first and identifies the characters and the language of the text, much like an OCR.
Supervised learning is where you have input variables (X) and an output variable (Y), and you use an algorithm to learn the mapping function from input to output: Y = f(X). The input data comes with labels, and the output can be continuous (as in regression) or categorical (as in classification).
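A minimal supervised-learning sketch: a 1-nearest-neighbour classifier learns the mapping from labeled examples and predicts the label of the closest training point. The numbers and labels are invented for illustration:

```python
# Labeled training pairs (x, y): the "supervision" is the label attached to each x.
train = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]

def predict(x):
    """Predict the label of the training point nearest to x (1-NN)."""
    nearest = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

print(predict(1.5))  # "small"
print(predict(8.5))  # "large"
```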
Unsupervised learning is where you only have input data (X) and no corresponding output variables. The goal of unsupervised learning is to model the underlying structure or distribution in the data in order to learn more about the data.
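For contrast, here is an unsupervised sketch: a tiny 1-D k-means with k = 2 that finds the two groups in unlabeled data by itself. The data points and starting centres are arbitrary illustrations:

```python
# Only X, no labels: the program must discover the structure on its own.
xs = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]

c1, c2 = xs[0], xs[3]   # crude initial guesses for the two cluster centres
for _ in range(10):
    # Assign each point to its nearest centre, then move each centre
    # to the mean of its assigned points (the k-means update).
    g1 = [x for x in xs if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in xs if abs(x - c1) > abs(x - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

print(sorted(g1), round(c1, 2))
print(sorted(g2), round(c2, 2))
```

No labels were ever provided, yet the two groups around 1 and 8 fall out of the data.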
Getting started with Jupyter:
- Start Jupyter
- Create a notebook
- Rename the notebook
- Create a heading
- Write markdown
- Import libraries: import pandas as pd
- Check library versions: pd.__version__
- Write code
- Run the code
- Remove the cells
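The import and version-check steps above look like this in a notebook cell (assuming pandas is installed; the sample DataFrame is just an illustration):

```python
import pandas as pd

print(pd.__version__)  # confirm which pandas version the notebook is using

# A small sample DataFrame to try the notebook workflow on.
df = pd.DataFrame({"fruit": ["apple", "banana"], "weight": [150, 120]})
print(df.head())
```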
Neural networks consist of the following components:
- an input layer, x
- an arbitrary number of hidden layers
- an output layer, ŷ
- a set of weights and biases between each layer, W and b
- a choice of activation function for each hidden layer, σ
In this tutorial, we'll use a sigmoid activation function.
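A minimal forward pass through these components can be sketched in NumPy. The layer sizes and random weights here are arbitrary choices for illustration, not values from the tutorial:

```python
import numpy as np

def sigmoid(z):
    """The sigmoid activation σ, squashing values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))                       # input layer x: 3 features
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)     # weights/biases into hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # weights/biases into output layer

h = sigmoid(x @ W1 + b1)        # one hidden layer with sigmoid activation
y_hat = sigmoid(h @ W2 + b2)    # output layer ŷ
print(y_hat.shape)
```

Training would then adjust W and b to push ŷ toward the known outputs; only the forward pass is shown here.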
A tensor is a generalized matrix. It could be a 1-D matrix (a vector), a 3-D matrix (like a cube of numbers), even a 0-D matrix (a single number), or a higher-dimensional structure that is harder to visualize. The number of dimensions of a tensor is called its rank.
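In NumPy terms, where `ndim` plays the role of rank:

```python
import numpy as np

scalar = np.array(5.0)              # 0-D tensor: a single number, rank 0
vector = np.array([1.0, 2.0, 3.0])  # 1-D tensor: a vector, rank 1
cube   = np.zeros((2, 2, 2))        # 3-D tensor: a cube of numbers, rank 3

print(scalar.ndim, vector.ndim, cube.ndim)  # 0 1 3
```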
TensorFlow has several advantages:
- Written in Python (even though some parts crucial for performance are implemented in C++), which is a very attractive language to read and develop in.
- Developed and maintained by Google, so continued support and development are ensured.
- Very large and active community.
- Faster model compilation than Theano-based options.
- Support for multiple GPUs, so you can run the code on different machines without having to stop or restart the program.
- TensorBoard, a powerful visualization suite developed to track both network topology and performance, making debugging simpler.
- It is about more than deep learning: TensorFlow also has tools to support reinforcement learning and other algorithms.
Learn the Basics of Artificial Intelligence with TensorFlow
Training a Model
We start with the initial data.
Divide the data into training and test data.
Create the model on the training data.
Test the model on the test data.
Let's say we have the sale price, square footage, year built, number of rooms, and tax information of 100 houses. Of these 100 houses in the initial data, 75 houses become the training data and 25 houses become the test data. We create the model on the training data, test it on the test data, and then run the model on new data.
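The 75/25 split above can be sketched like this. The house records are stand-ins here, and shuffling before splitting is a common (assumed) practice so the two sets are representative:

```python
import random

random.seed(42)                 # fixed seed so the split is reproducible
houses = list(range(100))       # stand-ins for the 100 house records
random.shuffle(houses)          # shuffle so train/test are representative

train = houses[:75]             # 75 houses: create the model on these
test = houses[75:]              # 25 houses: test the model on these
print(len(train), len(test))    # 75 25
```

Libraries such as scikit-learn provide the same idea ready-made (e.g. its `train_test_split` helper), but the principle is just this partition.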
What are the tools?
Language: Python, R, Java
Data tools: SQL, Pandas etc.
Data storage: BigQuery, DynamoDB etc.
Data Stream: Kinesis, Kafka etc.
ML libraries: scikit-learn, TensorFlow, Theano etc.
Environment: Jupyter Notebook
Production: Google Cloud, AWS Cloud, Azure