Deep Learning's practical applicability was constrained by two key factors. The first was the availability of Big Data; the explosion of data that came with the growth of the Internet solved that problem. The second was that, even with Big Data available, it was hard to get the compute power required to harvest valuable knowledge from it.
Here is my perspective:
1. ELECTROCON-2018:
National Conference on Emerging Trends in
Electronics, IT and Communication
15 March 2018 at IILM-CET, Greater Noida
PROF. (DR.) NEETA AWASTHY
DIRECTOR, SCHOOL OF ENGINEERING AND TECHNOLOGY
NOIDA INTERNATIONAL UNIVERSITY,
GREATER NOIDA
11. The soul of it is Parallel Computing
with Neural Networks Playground
http://playground.tensorflow.org/
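The Playground lets you train tiny fully connected networks interactively in the browser. A minimal NumPy sketch of the same idea follows; the network size, seed, learning rate, and iteration count are illustrative assumptions, not values from the talk:

```python
# A tiny fully connected network trained with backpropagation,
# similar in spirit to the Playground's demos. All hyperparameters
# here are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic problem a single linear unit cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 tanh units, one sigmoid output unit.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)   # hidden activations, shape (4, 8)
    p = sigmoid(h @ W2 + b2)   # predictions, shape (4, 1)

    # Backward pass (binary cross-entropy with a sigmoid output)
    dz2 = p - y                # gradient at the output pre-activation
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)    # tanh derivative
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Full-batch gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(p.round().ravel())       # learned XOR truth table
```

This is "creating and training neural networks" rather than "coding instructions": the behavior is learned from the four examples, not programmed.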
16. Solving Computational Needs: GPUs
GPUs are used in High Performance Computing applications of Deep
Learning.
The speed is 10x to 75x that of a CPU.
Due to parallelism, the number of floating point operations per second
(FLOPS) is much higher than that of traditional CPUs.
17. Two new Job Roles
Data Scientist
Deep Learning Specialist
18. AI Research Problems & Solutions
AI research problems include matching edges, selecting hierarchical traits,
organizing networks of inquiry, and establishing output parameters.
The success of a solution depends on the time taken to (re)train, develop,
and scale accordingly.
19. For queries please contact:
drneetaa@gmail.com or dir.et@niu.edu.in
Phone: 9873062986
Editor's Notes
Analysis vis-à-vis Analytics
Programming is about coding instructions. Deep Learning is about creating and training neural networks.
TECHNICAL INTELLIGENCE: technology and experts at the state level to intercept terrorist e-mails and satellite phone calls, and to monitor banking channels.
This is how Analytics Works
The global economic impact associated with the use, development, and adoption of AI from 2016 through 2025 is between $1.49 trillion and $2.95 trillion.
Play Neural Network Playground
Watson is a question-answering computer system capable of answering questions posed in natural language. Watson bought in capabilities developed by Idibon, which offers natural language processing solutions enabling organizations to understand unstructured data.
Watson also bought Corlicol, an NLP platform that translates from any language to any language, and AlchemyAPI. Alchemy was acquired by IBM in 2015 and its technology is now a core component of the cognitive APIs offered on IBM's Watson Developer Cloud. AlchemyLanguage is now Watson Natural Language Understanding, and AlchemyData News is now Watson Discovery News.
Input streaming speech and text from anywhere: Gridspace interfaces with all your communications from phones, mobiles, network components, and IoT. HEAR WHAT WE HEAR: it decodes live with high accuracy at scale. Gridspace turns these conversation feeds into text, entities, labels, ...
Clarifai is Artificial Intelligence with a vision. It has image and video recognition APIs to build smarter apps to see the world like you do.
Vicarious is developing artificial general intelligence for robots. By combining insights from generative probabilistic models and systems neuroscience, the architecture trains faster, adapts more readily, and generalizes more broadly than AI approaches commonly used today.
RapidMiner is an enterprise-scale data science platform that unites data prep, machine learning, and predictive model deployment.
A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles.
GPUs use Single Instruction Multiple Data (SIMD) parallelism, which is highly suited to Deep Learning; the speedup is to the tune of 10x to 75x compared with normal CPUs.
GPUs are housed in ordinary machines, such as your laptop.
Through SIMD, the same instruction (a matrix multiply-and-add, for instance) can be executed concurrently on a very large collection of data rather than serially, which results in a much higher number of floating point operations per second (FLOPS) than on traditional CPUs. Couple this with the very high memory bandwidth of GPUs to achieve unsurpassed computing performance.
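The data-parallelism argument above can be sketched even on a CPU with NumPy, whose vectorized matrix multiply (backed by BLAS) stands in for the GPU's parallel hardware, against a one-scalar-at-a-time Python loop. The matrix size and the measured speedup are illustrative only:

```python
# Illustrative sketch: the same multiply-adds, issued serially vs.
# over whole blocks of data at once. Sizes are arbitrary.
import time
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((100, 100))
B = rng.random((100, 100))

def matmul_serial(A, B):
    """Serial version: one scalar multiply-add at a time."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for t in range(k):
                s += A[i, t] * B[t, j]
            C[i, j] = s
    return C

# Data-parallel version: whole matrix multiplied in one call.
t0 = time.perf_counter()
C_fast = A @ B
t_fast = time.perf_counter() - t0

t0 = time.perf_counter()
C_slow = matmul_serial(A, B)
t_slow = time.perf_counter() - t0

print(f"speedup: {t_slow / t_fast:.0f}x")  # both compute the same result
```

The same principle, applied by a GPU's thousands of cores to the matrix multiplies inside a neural network, is what yields the 10x to 75x figures quoted on the slide.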
A Data Scientist attempts to navigate, interrogate, dissect, and derive results from any stream of data using a set of discrete and autonomous algorithms derived from neural networking.
Deep learning specialists, on the other hand, need to focus on deep learning, its advances, and general issues in application of machine learning and neural nets.
AI problems are not the same as the statistical problems raised by big data; they are problems of matching edges, selecting hierarchical traits, organizing networks of inquiry, and establishing output parameters.
The success of an AI solution depends not only on the accuracy of its response but also on the time taken to (re)train, develop, and scale accordingly.