In this talk we explore how to build Machine Learning Systems that can learn "continuously" from their mistakes (feedback loop) and adapt to an evolving data distribution.
The YouTube video of the talk is here:
https://www.youtube.com/watch?v=VtBvmrmMJaI
Presented at the Vietnam Japan AI Community on 2019-05-26.
The presentation summarizes what I've learned about Regularization in Deep Learning.
Disclaimer: the presentation was given at a community event, so it wasn't thoroughly reviewed or revised.
17. Playground - Spiral
(Screenshots: first trial vs. second trial)
You can still only tune the parameters to reach good performance, rather than doing feature engineering.
18. Playground - Spiral
(Screenshots: first trial vs. second trial)
It is a choice between more features and more computing power ($$$).
26. Example
(Diagram: Input Layer → Hidden Layer → Output Layer, with Bias)
yh1 = f(0.3825) = 0.5944
To compute sigmoid, you can use this reference: https://goo.gl/Jiuw2p
Try to compute yh2, o1, o2 by yourself.
27. Example
(Diagram: Input Layer → Hidden Layer → Output Layer, with Bias)
So we get:
yh1= 0.5944
yh2 = 0.5968
o1 = f(1.106) = 0.7513
o2 = f(1.225) = 0.7729
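The sigmoid values on these slides can be checked with a short sketch. The net inputs (0.3825, 1.106, 1.225) are taken from the slides; the raw inputs and weights behind them are not shown here.

```python
import math

def sigmoid(x):
    """Logistic activation f(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# Net inputs taken from the slides.
yh1 = sigmoid(0.3825)   # hidden unit 1, ≈ 0.594
o1  = sigmoid(1.106)    # output unit 1, ≈ 0.751
o2  = sigmoid(1.225)    # output unit 2, ≈ 0.773
```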
28. Example
(Diagram: Input Layer → Hidden Layer → Output Layer, with Bias)
Then we can update weights:
OutputO1 = 0.75
OutputO2 = 0.773
Etotal = EO1 + EO2
= 1/2*(0.01 - 0.75)^2 +
1/2*(0.99 - 0.773)^2
≈ 0.2973
w5new = w5old - a*dEtotal/dw5
29. Example
Then we can get:
w5new = w5old - a*dEtotal/dw5
so, w5new = 0.4 - 0.5 * 0.082
= 0.359
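The 0.082 gradient can be reproduced with the chain rule. A minimal sketch, assuming squared-error loss and a target of 0.01 for o1 (read off the Etotal formula on slide 28):

```python
# Values from the slides; target1 = 0.01 comes from the Etotal formula.
o1, target1, yh1 = 0.7513, 0.01, 0.5944
lr = 0.5  # the learning rate "a"

dE_do1   = o1 - target1      # d(1/2*(target - o1)^2) / d(o1)
do1_dnet = o1 * (1 - o1)     # sigmoid derivative
dnet_dw5 = yh1               # net_o1 = w5*yh1 + ..., so d(net)/d(w5) = yh1

grad_w5 = dE_do1 * do1_dnet * dnet_dw5   # ≈ 0.082
w5_new  = 0.4 - lr * grad_w5             # ≈ 0.359
```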
30. Example
Then we can get:
w5new = 0.359
w6new = 0.4
w7new = 0.51
w8new = 0.56
Next, we need to update the first
layer, i.e. w1~w4
(Diagram: Input Layer → Hidden Layer → Output Layer, with Bias)
31. Example
(Diagram: Input Layer → Hidden Layer → Output Layer, with Bias)
(Equations shown as images: reusing the terms we've already computed in the previous layer, we can also get the hidden-layer gradients.)
33. Example
(Diagram: Input Layer → Hidden Layer → Output Layer, with Bias)
Then we can get:
w1new = 0.1497
w2new = 0.1995
w3new = 0.2497
w4new = 0.2995
34. Brief Summary
• You can do the update for all weights; just remember to update all the weights together instead of one by one.
• That means you should always compute the gradients from the old weights; do not mix old values with new ones.
• But in practice, just calling nn.train() is the best way to do it!
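The "update together" rule above can be sketched like this (w5's 0.082 gradient is from the slides; the other gradient values are illustrative stand-ins):

```python
lr = 0.5
weights = {"w5": 0.40, "w6": 0.45, "w7": 0.50, "w8": 0.55}

# Step 1: compute ALL gradients while the weights still hold their old values.
grads = {"w5": 0.082, "w6": 0.083, "w7": -0.02, "w8": -0.02}

# Step 2: only now overwrite the weights, all at once.
weights = {k: w - lr * grads[k] for k, w in weights.items()}
```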
35. Training Neural Nets
• Things to note:
• The model must be differentiable so gradients exist and we can learn from them.
• Gradients can vanish and explode: watch additional layers, ReLUs / the learning rate, and batch normalization.
• Lower-layer gradients may shrink toward zero, which makes training slow; using ReLU can prevent it.
• If weights are too large, they may make lower-layer gradients explode; use batch normalization to avoid it.
• ReLU layers can die: lower the learning rate.
36. Dropout Regularization
• Randomly dropping out units in a network for a single gradient step.
• The dropout rate is controlled from 0.0 to 1.0; 1.0 means dropping out all nodes and then learning nothing!
• This mechanism has helped make deep learning useful in recent years.
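A minimal sketch of (inverted) dropout in numpy, to make the 0.0–1.0 rate concrete:

```python
import numpy as np

def dropout(activations, rate, rng):
    """Zero each unit with probability `rate` (0.0 keeps everything;
    1.0 would drop every node, so nothing is learned) and rescale the
    survivors by 1/(1-rate) so the expected activation is unchanged."""
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)

rng = np.random.default_rng(0)
h = np.ones(1000)
out = dropout(h, 0.5, rng)   # about half zeros, survivors scaled to 2.0
```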
38. Programming Exercises: Optimizer
AdagradOptimizer:
Automatically reduces the learning rate per parameter.
RMSE = 122.29 / 124.10
AdamOptimizer:
Adaptive Moment Estimation; computes adaptive learning rates for each parameter.
RMSE = 67.67 / 67.48
Reference:
http://ruder.io/optimizing-gradient-descent/
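The two update rules can be sketched in plain numpy, following the ruder.io reference above; the hyperparameter defaults here are the common ones, not necessarily what the exercise used. A toy run minimizes f(w) = w² with each rule.

```python
import numpy as np

def adagrad_step(w, g, cache, lr=0.1, eps=1e-8):
    """Adagrad: accumulate squared gradients, so the effective
    learning rate automatically shrinks over time."""
    cache = cache + g * g
    return w - lr * g / (np.sqrt(cache) + eps), cache

def adam_step(w, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: adaptive learning rates from bias-corrected estimates
    of the gradient's first and second moments."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)   # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize f(w) = w^2 (gradient 2w) starting from w = 5.0.
w_a, cache = 5.0, 0.0
for _ in range(200):
    w_a, cache = adagrad_step(w_a, 2 * w_a, cache, lr=1.0)

w_m, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    w_m, m, v = adam_step(w_m, 2 * w_m, m, v, t)
```

Both runs end close to the minimum at w = 0, but Adagrad's step sizes only ever shrink, while Adam keeps adapting per step.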
39. Programming Exercises: Normalization+
You can pass the normalization into the function options, which makes it simpler.
z_score, RMSE: 71.54 / 70.39
binary_threshold(0.5), RMSE: 115.78 / 116.41
clip(0.1, 0.8), RMSE: 115.77 / 116.33
log_normalize??? (math error)
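What those options do can be sketched in numpy; the names mirror the exercise's options, but the thresholds and data below are arbitrary illustrations, not the exercise's values.

```python
import numpy as np

x = np.array([1.0, 5.0, 10.0, 50.0, 200.0])   # an arbitrary skewed feature

z_score = (x - x.mean()) / x.std()   # zero mean, unit variance
clipped = np.clip(x, 10.0, 100.0)    # clip(min, max): cap the outliers
binary  = (x > 10.0).astype(float)   # binary_threshold: 0/1 indicator
log_norm = np.log1p(x)               # log scaling; a plain log of zero or a
                                     # negative value is the "math error"
```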
42. See Food
The ‘See Food’ app from Silicon Valley really happened, and it was also a lie
“Meal Snap”
43. See Food
• Multi-class, single-label: this is a hotdog, an octopus, or a banana
• => softmax (candidate sampling)
• Multi-class, multi-label: this picture contains hotdog, cucumber, tomato, and onion
• => regression for all
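The difference between the two output heads can be sketched in numpy (the logits are made-up scores):

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])   # scores for hotdog / octopus / banana

# Single-label: softmax makes the classes compete; probabilities sum to 1.
probs_single = np.exp(logits) / np.exp(logits).sum()

# Multi-label: one independent sigmoid ("regression for all") per class,
# so several labels can be on at once.
probs_multi = 1.0 / (1.0 + np.exp(-logits))
```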
49. Embeddings
• Embed the data into a d-dimensional space, which maps items to low-dimensional real vectors
• The number of dimensions can be determined empirically
(Diagram: embeddings learned as hidden layers)
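An embedding lookup is just row selection in a table; a toy sketch with random stand-in values (in a real model the table is learned as hidden-layer weights):

```python
import numpy as np

# Toy embedding table: 5 items mapped into a d = 3 dimensional space.
rng = np.random.default_rng(42)
embedding_table = rng.normal(size=(5, 3))   # (num_items, d)

item_ids = np.array([0, 3, 3])
vectors = embedding_table[item_ids]   # lookup = selecting rows; shape (3, 3)
```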
55. Static vs Dynamic Training
• Static - Trained offline. For data that do not change much over time.
• Pros: easy to build and test; batch train, test, and iterate until good
• Cons: requires monitoring inputs; easy to let it grow stale
• Dynamic - Trained online.
• Pros: continually feed in data, regularly sync out an updated version. Uses progressive validation rather than batch training & test. Adapts to changes.
• Cons: needs monitoring, model rollback & data quarantine capabilities
56. Static vs Dynamic Inference
• Static - Inference offline. For data that do not change much over time.
• Pros: much lower computational cost
• Cons: needs all the data at hand; update latency can be very long
• Dynamic - Inference online.
• Pros: can predict on the newest data
• Cons: latency is higher, and you need budget to solve that
57. Data Dependencies
• Feature and data changes have a huge impact on the model
• Unit tests for data?
• Reliability: what if the input data disappears?
• Versioning: does the feature change over time?
• Necessity: how useful is the feature relative to its computational cost?
• Correlations: tied together or teased apart?
• Feedback loops: could my input be impacted by my output?
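The "unit test for data" idea above can be sketched as assertions run on rows before training, so upstream changes are caught early; the feature names and schema version here are hypothetical.

```python
# Hypothetical data checks covering the reliability / versioning /
# range concerns above. Feature names are made up for illustration.
def validate_row(row):
    assert set(row) >= {"age", "country"}, "a feature disappeared"    # Reliability
    assert row.get("schema_version") == 2, "feature schema changed"   # Versioning
    assert 0 <= row["age"] <= 130, "out-of-range value"

validate_row({"age": 42, "country": "VN", "schema_version": 2})  # passes
```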
59. Cancer Prediction
• Hospitals specializing in cancer treatment make the model overfit
• => label leakage, just like cheating
60. Real World Guidelines
• Keep the very first model extremely simple
• Focus on data pipeline correctness
• Use a simple, observable metric for training & evaluation
• Own and monitor your input features
• Treat your model configuration as code: review it, check it in
• Write down the results of all experiments, especially “failures”
61. Good Bye!
Machine Learning Practica
Check out these real-world case studies of how Google uses machine learning in its products,
with video and hands-on coding exercises:
• Image Classification: See how Google developed the image classification model powering
search in Google Photos, and then build your own image classifier.
• More Machine Learning Practica coming soon!
Other Machine Learning Resources
• Deep Learning: Advanced machine learning course on neural networks, with extensive
coverage of image and text models
• Rules of ML: Best practices for machine learning engineering
• TensorFlow.js: WebGL-accelerated, browser-based JavaScript library for training and
deploying ML models