Deep learning refers to artificial neural networks with many layers. This document provides an introduction to deep learning and neural networks, including their strengths and weaknesses. It discusses popular deep learning libraries for R like H2O and MXNet. H2O allows users to perform distributed deep learning on large datasets using R. MXNet provides state-of-the-art deep learning models and efficient GPU computing capabilities for R. The document demonstrates how to customize neural networks and run deep learning models with H2O and MXNet in R.
Mastering Computer Vision Problems with State-of-the-art Deep Learning - Miguel González-Fierro
Deep learning has been especially successful in computer vision tasks such as image classification because convolutional neural nets (CNNs) can create hierarchical
representations in an image. One of the most remarkable advances is ResNet, the CNN that surpassed human-level accuracy for the first time in history.
ImageNet competition has become the de facto benchmark for image classification in the research community. The “small” ImageNet data contains more than 1.2 million images distributed in 1,000 classes.
Miguel González-Fierro explains how to train a state-of-the-art deep neural network, ResNet, using Microsoft R Server and MXNet with the ImageNet dataset. (While most of the deep learning libraries are programmed in C++ and Python, only MXNet offers an API for R programmers.) Miguel then demonstrates how to operationalize this training for real-world business problems related to image classification.
This talk was presented at Strata London 2017: https://conferences.oreilly.com/strata/strata-eu/public/schedule/detail/57428
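The identity shortcut that defines ResNet's residual blocks can be sketched in a few lines of NumPy (a toy stand-in: the real blocks use convolutions and batch normalization, not a single dense layer):

```python
import numpy as np

def residual_block(x, weight):
    """Toy residual block: output = ReLU(x @ weight) + x.

    The skip connection lets gradients flow directly through the
    identity path, which is what makes very deep nets trainable.
    """
    transformed = np.maximum(0.0, x @ weight)  # ReLU of the learned transform
    return transformed + x                     # identity shortcut

x = np.ones((1, 4))
w = np.zeros((4, 4))      # with zero weights, the block reduces to the identity
y = residual_block(x, w)
print(np.allclose(y, x))  # True: the shortcut passes the input straight through
```

Because the block only has to learn a residual on top of the identity, a zero-initialized layer starts out harmless rather than destructive, which is one intuition for why hundred-layer ResNets train at all.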
Faster deep learning solutions from training to inference - Michele Tameni - Codemotion
The Intel Deep Learning SDK enables the use of optimized open-source deep learning frameworks, including Caffe and TensorFlow, through a step-by-step wizard or IPython interactive notebooks. It includes easy and fast installation of all dependent libraries, plus advanced tools for data pre-processing and for model training, optimization, and deployment, providing an end-to-end solution. In addition, it supports scale-out across multiple computers for training, as well as compression methods for deploying models on various platforms under memory and speed constraints.
Presentation of a few recent papers on Deep Learning, in particular "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding" by Song Han, Huizi Mao, and William J. Dally, International Conference on Learning Representations (ICLR) 2016.
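The paper's first two stages, pruning and weight sharing, can be sketched in NumPy (the sparsity level, level count, and toy weights are illustrative; the actual paper uses iterative pruning with retraining and a learned k-means codebook):

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Pruning stage: zero out the smallest-magnitude weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def share_weights(weights, n_levels):
    """Quantization stage, crudely: snap each surviving weight to the
    nearest of n_levels shared values (the paper learns these centroids
    with k-means instead of spacing them evenly)."""
    levels = np.linspace(weights.min(), weights.max(), n_levels)
    snapped = levels[np.abs(weights[:, None] - levels).argmin(axis=1)]
    return np.where(weights == 0.0, 0.0, snapped)  # keep pruned weights at zero

w = np.array([0.01, -0.8, 0.05, 0.9, -0.02, 0.4])
pruned = prune_by_magnitude(w, sparsity=0.5)     # the three smallest weights become 0
shared = share_weights(pruned, n_levels=4)       # at most 4 distinct nonzero values remain
print(pruned)
print(shared)
```

After these two stages, only the codebook indices and the sparse structure need to be stored, which is where the paper's Huffman-coding stage then squeezes out further bits.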
Squeezing Deep Learning Into Mobile Phones - Anirudh Koul
A practical talk by Anirudh Koul on how to run deep neural networks on memory- and energy-constrained devices like smartphones. It highlights some frameworks and best practices.
This presentation focuses on Deep Learning (DL) concepts, such as neural networks, backprop, activation functions, and Convolutional Neural Networks. You'll also learn how to incorporate Deep Learning in Android applications. Basic knowledge of matrices is helpful for this session, which is targeted primarily to beginners.
Improving Hardware Efficiency for DNN Applications - Chester Chen
Speaker: Dr. Hai (Helen) Li is the Clare Boothe Luce Associate Professor of Electrical and Computer Engineering and Co-director of the Duke Center for Evolutionary Intelligence at Duke University
In this talk, I will introduce a few recent research spotlights from the Duke Center for Evolutionary Intelligence. The talk will start with the structured sparsity learning (SSL) method, which attempts to learn a compact structure from a bigger DNN to reduce computation cost. It generates a regularized structure with high execution efficiency. Our experiments on CPU, GPU, and FPGA platforms show on average a 3-5x speedup in the convolutional-layer computation of AlexNet. Then, the implementation and acceleration of DNN applications on mobile computing systems will be introduced: MoDNN is a local distributed system which partitions DNN models onto several mobile devices to accelerate computation, and ApesNet is an efficient pixel-wise segmentation network which understands road scenes in real time and has achieved promising accuracy. Our outlook on the adoption of emerging technologies will be given at the end of the talk, offering the audience an alternative way of thinking about the future evolution and revolution of modern computing systems.
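SSL itself learns which structures to remove during training via a group-Lasso penalty; as a rough illustration of the end result, here is a post-hoc structured-pruning sketch in NumPy (the tensor shapes and keep ratio are made up for the example) that drops whole convolution filters so the remaining layer stays dense and fast:

```python
import numpy as np

def prune_filters(conv_weights, keep_ratio):
    """Structured pruning sketch: remove whole filters (output channels)
    with the smallest L2 norms. Unlike element-wise pruning, the result
    is a genuinely smaller dense layer, which is what gives hardware a
    real speedup.

    conv_weights: array of shape (out_channels, in_channels, kh, kw)
    """
    norms = np.sqrt((conv_weights ** 2).sum(axis=(1, 2, 3)))  # one norm per filter
    n_keep = max(1, int(len(norms) * keep_ratio))
    keep = np.argsort(norms)[::-1][:n_keep]                   # strongest filters
    return conv_weights[np.sort(keep)]                        # preserve filter order

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))          # 8 filters over 3 input channels
compact = prune_filters(w, keep_ratio=0.5)
print(compact.shape)                        # (4, 3, 3, 3): half the filters remain
```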
A simplified way of approaching machine learning and deep learning from the ground up. The case for deep learning and an attempt to develop intuition for how/why it works. Advantages, state-of-the-art, and trends.
Presented at NYU Center for Genomics for NY Deep Learning Meetup
Keras Tutorial For Beginners | Creating Deep Learning Models Using Keras In P... - Edureka!
** AI & Deep Learning Training: https://www.edureka.co/ai-deep-learning-with-tensorflow **
This Edureka Tutorial on "Keras Tutorial" (Deep Learning Blog Series: https://goo.gl/4zxMfU) provides you a quick and insightful tutorial on the working of Keras along with an interesting use-case! We will be checking out the following topics:
Agenda:
What is Keras?
Who makes Keras?
Who uses Keras?
What Makes Keras special?
Working principle of Keras
Keras Models
Understanding Execution
Implementing a Neural Network
Use-Case with Keras
Coding in Colaboratory
Session in a minute
Check out our Deep Learning blog series: https://bit.ly/2xVIMe1
Check out our complete Youtube playlist here: https://bit.ly/2OhZEpz
Follow us to never miss an update in the future.
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
This is a two-hour overview of the state of deep learning as of Q1 2017.
Starting with some basic concepts, it continues to basic network topologies, tools, and HW/accelerators, and finally Intel's take on the different fronts.
An introduction to Keras, a high-level neural networks library written in Python. Keras makes deep learning more accessible, is fantastic for rapid prototyping, and can run on top of TensorFlow, Theano, or CNTK. These slides focus on examples, starting with logistic regression and building towards a convolutional neural network.
The presentation was given at the Austin Deep Learning meetup: https://www.meetup.com/Austin-Deep-Learning/events/237661902/
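The logistic-regression starting point of those slides needs no framework at all; a minimal from-scratch sketch (the toy data and hyperparameters are made up for the example):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=200):
    """Plain gradient-descent logistic regression: the single 'neuron'
    that deep networks generalize by stacking many of."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad = p - y                       # gradient of cross-entropy wrt logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# A linearly separable toy problem: the label is 1 when x0 > x1.
X = np.array([[0.0, 1.0], [1.0, 0.0], [0.2, 0.9], [0.9, 0.1]])
y = np.array([0.0, 1.0, 0.0, 1.0])
w, b = train_logistic(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(float)
print(preds)   # → [0. 1. 0. 1.]
```

A convolutional network replaces the single linear map `X @ w` with stacked convolutions and nonlinearities, but the training loop keeps exactly this shape: forward pass, loss gradient, parameter update.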
On-device machine learning: TensorFlow on Android - Yufeng Guo
Machine learning has traditionally been performed solely on servers and high-performance machines. But there is great value in having on-device machine learning for mobile devices. Doing ML inference on mobile devices has huge potential and is still in its early stages; however, it's already more powerful than most realize.
In this demo-oriented talk, you will see some examples of deep learning models used for local prediction on mobile devices. Learn how to use TensorFlow to implement a machine learning model that is tailored to a custom dataset, and start making delightful experiences today!
Keras is a high-level neural networks API, written in Python and capable of running on top of either TensorFlow, CNTK or Theano.
We can easily build and train a model using Keras with a few lines of code. The steps to train the model are described in the presentation.
Use Keras if you need a deep learning library that:
-Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility).
-Supports both convolutional networks and recurrent networks, as well as combinations of the two.
-Runs seamlessly on CPU and GPU.
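That few-lines-of-code workflow can be sketched with a minimal Sequential model (the toy data, layer sizes, and hyperparameters here are illustrative, not from the presentation):

```python
import numpy as np
from tensorflow import keras

# Toy binary-classification data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8)).astype("float32")
y = (X[:, 0] > 0).astype("float32")

# Define: stack layers in a Sequential container.
model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Compile: choose a loss and an optimizer.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Fit: train for a few epochs.
model.fit(X, y, epochs=5, verbose=0)

# Predict: one probability per input sample.
probs = model.predict(X, verbose=0)
print(probs.shape)   # (64, 1)
```

Swapping the `Dense` layers for `Conv2D`/`LSTM` layers is all it takes to move to the convolutional or recurrent networks mentioned above, which is the modularity the list is describing.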
Deep learning on mobile - 2019 Practitioner's Guide - Anirudh Koul
The 2019 Guide to Deep Learning on Mobile, from Inference to Training on iOS and Android smartphones. Featuring CoreML, Tensorflow Lite, MLKit, Fritz, AutoML Approaches (Hardware Aware Neural Architecture Search) to make models more efficient, and lots of videos. Presented by Anirudh Koul, Siddha Ganju and Meher Anand Kasam. More details at PracticalDL.ai in the upcoming O'Reilly Book 'Practical Deep Learning on Cloud & Mobile'
by Vikram Madan, Sr. Product Manager, AWS Deep Learning
In this workshop, we will cover deep learning fundamentals and focus on the powerful and scalable Apache MXNet open-source deep learning framework. At the end of this tutorial you’ll be able to train your own deep neural network and fine-tune existing state-of-the-art models for image and object recognition. We’ll also take a deep dive into setting up your deep learning infrastructure on AWS and model deployment on AWS Lambda.
Yinyin Liu presents at the SD Robotics Meetup on November 8th, 2016. Deep learning has achieved great success in image understanding, speech and text recognition, and natural language processing. Deep learning also has tremendous potential to tackle the challenges of robotic vision and sensorimotor learning in a robotic learning environment. In this talk, we will discuss how current and future deep learning technologies can be applied to robotic applications.
Hadoop Summit 2014 - San Jose - Introduction to Deep Learning on Hadoop - Josh Patterson
As the data world undergoes its Cambrian-explosion phase, our data tools need to become more advanced to keep pace. Deep learning has emerged as a key tool in the non-linear arms race of machine learning. In this session we will take a look at how we parallelize Deep Belief Networks in deep learning on Hadoop’s next-generation YARN framework with Iterative Reduce. We’ll also look at some real-world examples of processing data with deep learning, such as image classification and natural language processing.
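The Iterative Reduce pattern the session describes boils down to repeated local training followed by parameter averaging. A hedged NumPy sketch on a toy least-squares model (the worker count, learning rate, and data are invented for the example; the real system runs full Deep Belief Network updates per worker on YARN):

```python
import numpy as np

def local_sgd_step(w, X, y, lr=0.1):
    """One least-squares gradient step on a single worker's data shard."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def iterative_reduce(shards, w, rounds=50):
    """Each round: every worker trains from the current global parameters
    on its own shard, then the master averages the results and broadcasts
    them back (the 'reduce' step), simplified to one step per round."""
    for _ in range(rounds):
        local_ws = [local_sgd_step(w, X, y) for X, y in shards]
        w = np.mean(local_ws, axis=0)      # parameter averaging
    return w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(200, 2))
y = X @ true_w
shards = [(X[i::4], y[i::4]) for i in range(4)]   # partition across 4 workers
w = iterative_reduce(shards, np.zeros(2))
print(np.round(w, 2))   # ≈ [ 2. -3.]
```

Because each round ships only the parameter vectors rather than the data, the pattern maps naturally onto YARN containers holding large local shards.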
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/wavecomp/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit-nicol
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Chris Nicol, CTO at Wave Computing, presents the "New Dataflow Architecture for Machine Learning" tutorial at the May 2017 Embedded Vision Summit.
Data scientists have made tremendous advances in the use of deep neural networks (DNNs) to enhance business models and service offerings. But training DNNs can take a week or more using traditional hardware solutions that rely on legacy architectures that are limited in performance and scalability. New innovations that can reduce training time for both image-centric and text-centric deep neural networks will lead to an explosion of new applications. Dr. Chris Nicol, Wave Computing’s Chief Technology Officer, examines the performance challenge faced by data scientists today. Nicol outlines the technical factors underlying this bottleneck for systems relying on CPUs, GPUs, FPGAs and ASICs, and introduces a new dataflow-centric approach to DNN training.
This is a one-hour presentation on neural networks, deep learning, computer vision, recurrent neural networks, and reinforcement learning. The later slides have links on how to run neural networks on
Handwritten Recognition using Deep Learning with R - Poo Kuan Hoong
R User Group Malaysia Meet Up - Handwritten Recognition using Deep Learning with R
Source code available at: https://github.com/kuanhoong/myRUG_DeepLearning
Synthetic dialogue generation with Deep Learning - S N
A walkthrough of a deep learning technique that generates TV scripts using a recurrent neural network. After being trained on a dataset, the model generates a completely new TV script for a scene. You will learn the concepts around RNNs, NLP, and various deep learning techniques.
Technologies to be used:
Python 3, Jupyter, TensorFlow
Source code: https://github.com/syednasar/talks/tree/master/synthetic-dialog
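The recurrence such a script generator is built on can be sketched in a few lines of NumPy (the vocabulary size, hidden size, and random weights are illustrative; a real model also needs an output layer, a sampling step, and training):

```python
import numpy as np

def rnn_step(x, h, Wxh, Whh, bh):
    """One vanilla-RNN step: the new hidden state mixes the current
    input (e.g. a one-hot character) with the previous hidden state,
    which is how the network carries context across a script."""
    return np.tanh(x @ Wxh + h @ Whh + bh)

rng = np.random.default_rng(0)
vocab, hidden = 5, 8
Wxh = rng.normal(scale=0.1, size=(vocab, hidden))   # input-to-hidden weights
Whh = rng.normal(scale=0.1, size=(hidden, hidden))  # hidden-to-hidden weights
bh = np.zeros(hidden)

h = np.zeros(hidden)
for char_id in [0, 3, 1]:          # feed a toy 3-character prompt
    x = np.eye(vocab)[char_id]     # one-hot encoding of the character
    h = rnn_step(x, h, Wxh, Whh, bh)
print(h.shape)                     # (8,): the accumulated context vector
```

To generate text, a trained model projects `h` back to vocabulary logits, samples the next character, and feeds it in as the next `x`, repeating until the scene is written.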
Deep Learning libraries and first experiments with Theano - Vincenzo Lomonaco
In recent years, neural networks and deep learning techniques have been shown to perform well on many problems in image recognition, speech recognition, natural language processing, and many other tasks. As a result, a large number of libraries, toolkits, and frameworks have come out in different languages and with different purposes. In this report, we first take a look at these projects, and second we choose the framework that best suits our needs: Theano. Finally, we implement a simple convolutional neural net using this framework to test both its ease of use and efficiency.
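The convolution operation at the core of the report's test network is easy to write out without any framework (a naive 'valid'-mode loop, shown only to make the operation concrete; Theano compiles an optimized equivalent):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (strictly, cross-correlation, as in
    most deep learning libraries): slide the kernel over the image and
    take a dot product at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.arange(16.0).reshape(4, 4)
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])       # a simple diagonal-difference filter
print(conv2d_valid(image, kernel))     # every entry is -5 on this ramp image
```

A convolutional layer applies many such kernels at once and learns their values by gradient descent, which is exactly what the report's Theano implementation does at scale.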
Big Data Analytics (ML, DL, AI) hands-on - Dony Riyanto
This is a supplementary slide deck to the introductory Big Data Analytics material (in the next file), which gets us started hands-on with several topics related to machine/deep learning, big data (batch/streaming), and AI using TensorFlow.
NVIDIA DEEP LEARNING INFERENCE PLATFORM PERFORMANCE STUDY
TECHNICAL OVERVIEW
Introduction
Artificial intelligence (AI), the dream of computer scientists for over half a century, is no longer science fiction: it is already transforming every industry. AI is the use of computers to simulate human intelligence. AI amplifies our cognitive abilities, letting us solve problems where the complexity is too great, the information is incomplete, or the details are too subtle and require expert training.
While the machine learning field has been active for decades, deep learning (DL) has boomed over the last five years. In 2012, Alex Krizhevsky of the University of Toronto won the ImageNet image recognition competition using a deep neural network trained on NVIDIA GPUs, beating all the human expert algorithms that had been honed for decades. That same year, recognizing that larger networks can learn more, Stanford’s Andrew Ng and NVIDIA Research teamed up to develop a method for training networks using large-scale GPU computing systems. These seminal papers sparked the “big bang” of modern AI, setting off a string of “superhuman” achievements. In 2015, Google and Microsoft both beat the best human score in the ImageNet challenge. In 2016, DeepMind’s AlphaGo recorded its historic win over Go champion Lee Sedol, and Microsoft achieved human parity in speech recognition.
GPUs have proven to be incredibly effective at solving some of the most complex problems in deep learning, and while the NVIDIA deep learning platform is the standard industry solution for training, its inferencing capability is not as widely understood. Some of the world’s leading enterprises, from the data center to the edge, have built their inferencing solutions on NVIDIA GPUs. Some examples include:
Covers the basics of artificial neural networks and the motivation for deep learning, and explains certain deep learning networks, including deep belief networks and autoencoders. It also details the challenges of implementing a deep learning network at scale and explains how we have implemented a distributed deep learning network over Spark.
Comparing TensorFlow and MXNet from a cloud engineering and production app-building perspective. Looks at adoption, support, deployment, and cloud support across vendors.
Georgia Tech cse6242 - Intro to Deep Learning and DL4J - Josh Patterson
Introduction to deep learning and DL4J - http://deeplearning4j.org/ - a guest lecture by Josh Patterson at Georgia Tech for the cse6242 graduate class.
In today’s world, many organizations are moving to machine learning and artificial intelligence to improve their business processes and stay ahead of the competition. But some organizations are unable to implement machine learning or AI for their processes for various reasons; that is where deep learning frameworks come in. They are open-source interfaces, libraries, or tools that people with little or no knowledge of machine learning or AI can easily use.
https://www.ducatindia.com/artificial-intelligence-training-in-delhi
Deep learning is one of the most exciting areas of machine learning and AI. This presentation covers all the very basics of deep neural networks, from the concept down to applications and why this technology is so popular in today's business landscape.
This presentation is provided by the Tesseract Academy, which provides executive education for deep technical subjects such as data science and blockchain. For a video of the presentation please visit https://www.youtube.com/watch?v=RiYGluH_cx0&t=0s&list=PLVce3C5Hi9BBfabvhEzYQTQDYEg2vtuxH&index=2
For an associated blog post about deep learning also visit http://thedatascientist.com/what-deep-learning-is-and-isnt/
Worried about the learning curve to introduce Deep Learning in your organization? Don’t be. The DEEP-HybridDataCloud project offers a framework for all users, including non-experts, enabling the transparent training, sharing, and serving of Deep Learning models both locally and on hybrid cloud systems. In this webinar we will be showing a set of use cases, from different research areas, integrated within the DEEP infrastructure.
The DEEP solution is based on Docker containers that already package all the tools needed to deploy and run the Deep Learning models in the most transparent way. No need to worry about compatibility problems. Everything has already been tested and encapsulated so that the user has a fully working model in just a few minutes. To make things even easier, we have developed an API allowing the user to interact with the model directly from the web browser.
Malaysia R User Group Meetup at Microsoft Malaysia, 13th July 2017. Facebook Page https://www.facebook.com/rusergroupmalaysia/
Video of the talk can be viewed here https://www.youtube.com/watch?v=lN057ua0dKU
Explore and have fun with TensorFlow: An introduction to TensorFlow - Poo Kuan Hoong
TensorFlow and Deep Learning Malaysia Meetup 6th July 2017 https://www.facebook.com/groups/TensorFlowMY and recorded video of the talk can be viewed here https://www.youtube.com/watch?v=EdMz3fwRFBc
Context Aware Road Traffic Speech Information System from Social Media - Poo Kuan Hoong
This project focuses on developing a mobile application that transmits real-time traffic state to motorcyclists. The traffic data is collected from Twitter. The data collected is subjected to various processes like Named Entity Recognition, Sentiment Analysis and Statistical Analysis and the derived traffic state will be transmitted to the user’s mobile application. A Bluetooth enabled helmet of a motorcyclist will then playback the traffic state to the user according to their location.
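As a toy illustration of the sentiment/classification stage of such a pipeline (the keyword lists here are invented for the sketch; the project's real system combines Named Entity Recognition and statistical analysis with this kind of signal):

```python
def traffic_state(tweet):
    """Map a traffic tweet to a coarse state by keyword matching:
    a deliberately simple stand-in for the pipeline's sentiment stage."""
    words = tweet.lower().split()
    congested = {"jam", "crawl", "stuck", "congested", "accident"}
    clear = {"clear", "smooth", "moving"}
    score = sum(w in congested for w in words) - sum(w in clear for w in words)
    if score > 0:
        return "congested"
    if score < 0:
        return "clear"
    return "unknown"

print(traffic_state("Massive jam near the toll, stuck for 30 mins"))  # congested
print(traffic_state("Highway looks clear and smooth this morning"))   # clear
```

In the full system, the NER stage would first extract the road or location entity so the derived state can be matched against the motorcyclist's position before playback over the Bluetooth helmet.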
Virtual Interaction Using Myo And Google Cardboard (slides) - Poo Kuan Hoong
This project focuses on developing a mobile application that integrates Google Cardboard and the Myo armband. The application developed is an educational one that teaches users to write Japanese characters by getting them to trace the characters displayed on the phone screen. The users air-draw the letters while the Myo armband captures the gestures, sends the data to the smartphone, and the drawn character is displayed on the screen.
A Comparative Study of HITS vs PageRank Algorithms for Twitter Users Analysis - Poo Kuan Hoong
Social networks such as Facebook, Twitter, Google+ and LinkedIn have millions of users. These networks are constantly evolving, and they are a good source of information, both explicit and implicit. The analysis of social networks mainly focuses on the social-networking aspect, with an emphasis on mapping relationships and patterns of interaction between users and content information. One common research topic focuses on centrality measures, where useful information about the connected people in the social network is represented as a graph. In this paper, we employed two link-based ranking algorithms to analyze the ranking of the users: HITS (Hyperlink-Induced Topic Search) and PageRank. We constructed a Twitter user retweet-relationship graph using 21 days' worth of data. Lastly, we compared the ranking sequence of the users, in addition to their follower counts against the average, and also whether they are verified Twitter accounts. From the results obtained, both HITS and PageRank showed a similar trend and, more importantly, highlighted the importance of the direction of the edges in this work.
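Both algorithms reduce to a few lines of power iteration. A NumPy sketch on a toy retweet graph (the graph, damping factor, and iteration counts are illustrative, not the paper's 21-day dataset):

```python
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank on a dense adjacency matrix
    (adj[i, j] = 1 for an edge i -> j, e.g. 'user i retweeted user j')."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Row-normalize; dangling nodes (no out-edges) spread rank uniformly.
    M = np.divide(adj, out_deg, out=np.full_like(adj, 1.0 / n), where=out_deg > 0)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * M.T @ r      # teleport + follow edges
    return r

def hits(adj, iters=100):
    """HITS: good hubs point to good authorities, and good authorities
    are pointed to by good hubs; iterate the two scores to convergence."""
    n = adj.shape[0]
    hubs = np.ones(n)
    for _ in range(iters):
        auths = adj.T @ hubs
        auths /= auths.sum()
        hubs = adj @ auths
        hubs /= hubs.sum()
    return hubs, auths

# Toy retweet graph: users 0 and 1 point at user 2; user 2 points at user 1.
adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 1, 0]], dtype=float)
print(pagerank(adj).argmax())   # 2: the most-retweeted user
print(hits(adj)[1].argmax())    # 2: also the top authority
```

The sketch also makes the paper's point about edge direction concrete: transposing `adj` (reversing every retweet) swaps which users the algorithms reward.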
"Impact of front-end architecture on development cost", Viktor Turskyi - Fwdays
I have heard many times that architecture is not important for the front-end. I have also seen many times how developers implement features on the front-end just by following the standard rules for a framework, thinking that this is enough to successfully launch the project, and then the project fails. How can this be prevented, and what approach should be chosen? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
4. Introduction
In the past 10 years, machine learning and Artificial Intelligence (AI) have shown tremendous progress.
The recent success can be attributed to:
Explosion of data
Cheap computing cost - CPUs and GPUs
Improvement of machine learning models
Much of the current excitement concerns a subfield of machine learning called “deep learning”.
6. Neural Networks
Deep Learning is primarily about neural networks, where a network is an interconnected web of nodes and edges.
Neural nets were designed to perform complex tasks, such as placing objects into categories based on a few attributes.
Neural nets are highly structured networks with three kinds of layers - an input layer, an output layer, and so-called hidden layers, which are any layers between the input and the output.
Each node (also called a neuron) in the hidden and output layers acts as a small classifier: it computes a weighted sum of its inputs and passes the result through an activation function.
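The layered structure described above can be sketched with a tiny network in R. This is a minimal example using the standard nnet package (a single hidden layer), not one of the deep learning libraries discussed later; the layer sizes are arbitrary choices for illustration.

```r
# A small neural net on the built-in iris dataset:
# 4 input attributes, one hidden layer of 3 neurons, 3 output classes.
library(nnet)

set.seed(42)
fit <- nnet(Species ~ ., data = iris, size = 3, maxit = 200, trace = FALSE)

# Predicted class labels for the first few flowers
head(predict(fit, iris, type = "class"))
```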
8. Deep Learning
Deep learning refers to artificial neural networks that are composed of many layers.
It’s a growing trend in Machine Learning due to favorable results in applications where the target function is very complex and the datasets are large.
9. Deep Learning: Strengths
Robust
No need to design the features ahead of time - features are automatically learned to be optimal for the task at hand
Robustness to natural variations in the data is automatically learned
Generalizable
The same neural net approach can be used for many different applications and data types
Scalable
Performance improves with more data; the method is massively parallelizable
10. Deep Learning: Weaknesses
Deep Learning requires a large dataset, and hence a long training period.
In terms of cost, Machine Learning methods like SVMs and tree ensembles are easily deployed even by relative machine learning novices and can usually give reasonably good results.
Deep learning methods tend to learn everything. It’s better to encode prior knowledge about the structure of images (or audio or text).
The learned features are often difficult to understand. Many vision features are not really human-understandable (e.g., concatenations/combinations of different features).
Requires a good understanding of how to model multiple modalities with traditional tools.
12. Deep Learning R Libraries
MXNet: The R interface to the MXNet deep learning library.
darch: An R package for deep architectures and restricted Boltzmann machines.
deepnet: An R package implementing feed-forward neural networks, restricted Boltzmann machines, deep belief networks, and stacked autoencoders.
Tensorflow for R: The tensorflow package provides access to the complete TensorFlow API from within R.
h2o: The R interface to the H2O deep learning framework.
Deepwater: GPU-enabled deep learning on the H2O platform.
13. H2O
Founded in 2012 by Stanford & Purdue math and systems engineers
Headquarters: Mountain View, California, USA
h2o is an open-source, distributed, Java machine learning library
Ease of use via a web interface
R, Python, Scala, REST/JSON, Spark & Hadoop interfaces
Distributed algorithms scale to big data
Package can be downloaded from http://www.h2o.ai/download/h2o/r
16. H2O + R
Design
All computations are performed in highly optimized Java code in the H2O cluster, initiated by REST calls from R.
Installation
Java 7 or later; R 3.1 and above; Linux, Mac, Windows.
The easiest way to install the h2o R package is from CRAN.
Latest version: http://www.h2o.ai/download/h2o/r
> install.packages("h2o")
> library(h2o)
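Once the package is installed, a typical session starts a local cluster and trains a model. The following is a minimal sketch, assuming the h2o package and a Java runtime are available; the layer sizes and epoch count are illustrative choices, not recommendations.

```r
library(h2o)
h2o.init(nthreads = -1)          # start (or connect to) a local cluster, using all cores

# Upload the built-in iris dataset to the H2O cluster
iris_hex <- as.h2o(iris)

# Train a small feed-forward deep learning model
model <- h2o.deeplearning(
  x = 1:4,                       # predictor columns
  y = "Species",                 # response column
  training_frame = iris_hex,
  hidden = c(32, 32),            # two hidden layers of 32 neurons each
  epochs = 10
)

h2o.confusionMatrix(model)       # training-data confusion matrix
h2o.shutdown(prompt = FALSE)
```

Note that the R session only issues REST calls; the actual training runs in the Java cluster started by h2o.init().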
18. H2O Deep Water
Deep Water is the next-generation distributed deep learning with H2O.
Deep Water brings GPU-enabled deep learning on all data types to H2O, integrating with TensorFlow, MXNet, and Caffe.
It inherits all H2O properties in scalability, ease of use, and deployment.
20. H2O: Deep Water
Available networks in Deep Water:
LeNet
AlexNet
VGGNet
Inception (GoogLeNet)
ResNet (Deep Residual Learning)
Or build your own customized network
22. MXNET
Founded by: University of Washington & Carnegie Mellon University (~1.5 years old)
Supports most OSes: runs on Amazon Linux, Ubuntu/Debian, OS X, and Windows
State-of-the-art model support: flexible and efficient GPU computing and state-of-the-art deep learning models, e.g., CNNs and LSTMs, in R
Ultra scalable: seamless tensor/matrix computation with multiple GPUs in R
Ease of use: construct and customize state-of-the-art deep learning models in R, and apply them to tasks such as image classification and data science challenges
Multi-language: supports the Python, R, Julia, and Scala languages
Ecosystem: vibrant community from academia and industry
24. MXNET Architecture
You can specify the context in which a function is executed. This usually means whether the function should run on a CPU or a GPU, and, if on a GPU, which GPU to use.
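In the mxnet R package this is done with device-context objects. A minimal sketch, assuming the package is installed (installation commands follow in the next slide):

```r
library(mxnet)

ctx <- mx.cpu()      # run on the CPU
# ctx <- mx.gpu(0)   # or on the first GPU, if a GPU build is available

# Allocate an NDArray on the chosen device; the same ctx argument is
# passed to training functions to control where computation happens.
a <- mx.nd.ones(c(2, 3), ctx = ctx)
as.array(a)          # copy back to an ordinary R array
```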
25. MXNET R Package
The R package can be installed using the following commands:
install.packages("drat", repos="https://cran.rstudio.com")
drat:::addRepo("dmlc")
install.packages("mxnet")
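With the package installed, a quick way to try it is the mx.mlp convenience function, which builds and trains a multilayer perceptron in one call. A minimal sketch on a synthetic two-class problem; the hidden-layer size, learning rate, and round count are illustrative choices.

```r
library(mxnet)

set.seed(0)
x <- matrix(rnorm(200), ncol = 2)      # 100 samples, 2 features
y <- as.numeric(x[, 1] + x[, 2] > 0)   # binary labels 0/1

model <- mx.mlp(
  data = x, label = y,
  hidden_node = 10,                    # one hidden layer of 10 units
  out_node = 2, out_activation = "softmax",
  num.round = 20, learning.rate = 0.1,
  array.batch.size = 20,
  ctx = mx.cpu()                       # execution context, as described above
)

preds <- predict(model, x)             # class-probability matrix (2 x 100)
pred_label <- max.col(t(preds)) - 1    # most probable class per sample
mean(pred_label == y)                  # training accuracy
```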