A 45-minute talk about collecting home network performance measurements, analyzing and forecasting time series data, and building an anomaly detection system.
In this talk, we will go through the whole process of data mining and knowledge discovery. First, we write a script that runs a speed test periodically and logs the metrics. Then we parse the log data, convert it into a time series, and visualize the data for a given period.
Next, we conduct some data analysis: finding trends, forecasting, and detecting anomalous data. Several statistical and deep learning techniques are used for the analysis, including ARIMA (Autoregressive Integrated Moving Average) and LSTM (Long Short-Term Memory).
2. Who am I?
● Machine Learning Engineer
○ Fraud Detection System
○ Software Defect Prediction
● Software Engineer
○ Email Services (40+ mil. users)
○ High traffic server (IPC, network, concurrent programming)
● MPhil, HKUST
○ Major: Software Engineering based on ML tech
○ Research interests: ML, NLP, IR
3. Outline
● Data Collection: Logging SpeedTest, Data preparation
● Time series Analysis: Handling time series, Seasonal Trend Decomposition, Stationarity, Autocorrelation
● Forecast Modeling: Rolling Forecast, Autoregression, Moving Average, ARIMA, LSTM
● Anomaly Detection: Naive approach, Basic approaches, Multivariate Gaussian
8. Problem definition
● Detect abnormal states of Home Network
● Anomaly detection for time series
○ Finding outlier data points relative to some usual signal
12. Outline
● Data Collection: Logging SpeedTest, Data preparation
● Time series Analysis: Handling time series, Seasonal Trend Decomposition, Stationarity, Autocorrelation
● Forecast Modeling: Rolling Forecast, Autoregression, Moving Average, ARIMA, LSTM
● Anomaly Detection: Naive approach, Basic approaches, Multivariate Gaussian
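The logging step itself does not appear in this transcript. As a rough sketch of what periodic SpeedTest logging could look like (assuming the speedtest-cli command-line tool and a simple timestamped text log; the talk's actual script may differ):

# speedtest_logger.py - hypothetical periodic logger, not the deck's original script
import subprocess
import time
from datetime import datetime

LOG_PATH = 'speedtest.log'   # assumed log location
INTERVAL = 5 * 60            # run every 5 minutes, matching freq='5min' used later

while True:
    # speedtest-cli --simple prints "Ping: .. ms", "Download: .. Mbit/s", "Upload: .. Mbit/s"
    result = subprocess.run(['speedtest-cli', '--simple'],
                            capture_output=True, text=True)
    with open(LOG_PATH, 'a') as log:
        log.write(datetime.now().isoformat() + '\n')
        log.write(result.stdout + '\n')
    time.sleep(INTERVAL)

Run via cron or as a long-lived process; each record then carries a timestamp plus the three metrics parsed on the next slides.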
15. Data preparation
● Parse data
class SpeedTests(object):          # iterable over records parsed from the raw log string
    def __init__(self, string):
        self.__string = string
        self.__pos = 0
        self.datetime = None       # for DatetimeIndex
        self.ping = None           # ping test in ms
        self.download = None       # down speed in Mbit/sec
        self.upload = None         # up speed in Mbit/sec
    def __iter__(self):
        return self
    def next(self):                # __next__ in Python 3
        …
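The body of next() is elided in the deck. Purely as a hypothetical illustration (matching the log format assumed in the logging sketch above, not the talk's actual parser), one record could be pulled out of four consecutive log lines like this:

# Hypothetical helper, not from the original deck
from datetime import datetime

def parse_record(lines):
    # lines: timestamp line plus the three "speedtest-cli --simple" lines
    ts = datetime.fromisoformat(lines[0].strip())
    ping = float(lines[1].split()[1])       # "Ping: 20.3 ms"
    download = float(lines[2].split()[1])   # "Download: 95.1 Mbit/s"
    upload = float(lines[3].split()[1])     # "Upload: 38.4 Mbit/s"
    return ts, ping, download, upload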
16. Data preparation
● Build pandas DataFrame
import pandas as pd

speedtests = [st for st in SpeedTests(logstring)]
dt_index = pd.date_range(
    speedtests[0].datetime.replace(second=0, microsecond=0),
    periods=len(speedtests), freq='5min')   # one record every 5 minutes
df = pd.DataFrame(index=dt_index,
                  data=([st.ping, st.download, st.upload] for st in speedtests),
                  columns=['ping', 'down', 'up'])
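Later slides decompose a one-week slice of the download series, referred to as week_dn_ts. A minimal sketch of how such a slice might be taken (the dates below are placeholders, not from the talk):

# Hypothetical one-week slice of the download speed series
week_dn_ts = df.loc['2017-06-05':'2017-06-11', 'down']
week_dn_ts.plot()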
23. Components of Time series data
● Trend: the increasing or decreasing direction in the series.
● Seasonality: the repeating pattern over a fixed period in the series.
● Noise: the random variation in the series.
24. Components of Time series data
● A time series is a combination of these components.
○ y_t = T_t + S_t + N_t (additive model)
○ y_t = T_t × S_t × N_t (multiplicative model)
25. Seasonal Trend Decomposition
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

decomposition = seasonal_decompose(week_dn_ts)
plt.plot(week_dn_ts)                # original series
plt.plot(decomposition.seasonal)    # seasonal component
plt.plot(decomposition.trend)       # trend component
27. Rolling Forecast
from statsmodels.tsa.arima_model import ARIMA

forecasts = list()
history = [x for x in train_X]
for t in range(len(test_X)):                  # for each new observation
    model = ARIMA(history, order=order)       # re-fit the model on all data seen so far
    y_hat = model.fit(disp=0).forecast()[0]   # forecast one step ahead
    forecasts.append(y_hat)                   # store prediction
    history.append(test_X[t])                 # keep history updated
28. Residuals ~ N(μ, σ²)
residuals = [test_X[t] - forecasts[t] for t in range(len(test_X))]
residuals = pd.DataFrame(residuals)
residuals.plot(kind='kde')   # density plot of the forecast errors
33. Anomaly Detection (Naive approach)
● 2-5 Standard Deviation
○ NumPy
○ Pandas
# Pandas
std = df['col'].std()
med = df['col'].median()
outliers = df.loc[~df['col'].between(med - 3*std, med + 3*std)]

# NumPy
std = np.std(col)
med = np.median(col)
outliers = np.where((col < med - 3*std) | (col > med + 3*std))
34. Anomaly Detection (Naive approach)
● MAD (Median Absolute Deviation)
○ MAD = median(|X_i - median(X)|)
○ "Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median" - Christopher Leys (2013)
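The deck shows no code for MAD; a minimal sketch of MAD-based outlier flagging, assuming the conventional 1.4826 consistency constant and a cutoff of 3 (both standard choices, not values from the talk):

import numpy as np

def mad_outliers(x, cutoff=3.0):
    med = np.median(x)
    mad = np.median(np.abs(x - med))        # median absolute deviation
    robust_z = (x - med) / (1.4826 * mad)   # comparable to a z-score for normal data
    return np.where(np.abs(robust_z) > cutoff)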
35. Outline
● Data Collection: Logging SpeedTest, Data preparation
● Time series Analysis: Handling time series, Seasonal Trend Decomposition, Stationarity, Autocorrelation
● Forecast Modeling: Rolling Forecast, Autoregression, Moving Average, ARIMA, LSTM
● Anomaly Detection: Naive approach, Basic approaches, Multivariate Gaussian
36. Stationary Series Criterion
● The mean, variance and covariance of the series are time invariant.
(plots: stationary vs. non-stationary series)
37. Stationary Series Criterion
● The mean, variance and covariance of the series are time invariant.
(plots: stationary vs. non-stationary series)
38. Stationary Series Criterion
● The mean, variance and covariance of the series are time invariant.
(plots: stationary vs. non-stationary series)
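The deck judges stationarity visually; a common programmatic check (not shown in the talk) is the augmented Dickey-Fuller test from statsmodels:

from statsmodels.tsa.stattools import adfuller

adf_stat, p_value = adfuller(week_dn_ts)[:2]
print('ADF statistic: %.3f, p-value: %.3f' % (adf_stat, p_value))
# a small p-value (e.g. < 0.05) is evidence that the series is stationary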
40. Differencing
● A non-stationary series can be made stationary after differencing.
● Instead of modelling the level, we model the change
● Instead of forecasting the level, we forecast the change
● I(d) = y_t - y_{t-d}
● AR + I + MA
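In pandas, differencing is a one-liner; a small sketch using the download series from earlier (the ADF check above can then be re-run on the result):

diffed = week_dn_ts.diff(1).dropna()   # change between consecutive observations
diffed.plot()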
41. Autoregression (AR)
● Autoregression means developing a linear model that uses observations at previous time steps to predict the observation at a future time step.
● Because the regression model uses data from the same input variable at previous time steps, it is referred to as an autoregression.
42. Moving Average (MA)
● MA models look similar to the AR component, but they deal with different values.
● The model accounts for the possibility of a relationship between a variable and the residuals from previous periods.
43. ARIMA(p, d, q)
● Autoregressive Integrated Moving Average
○ AR: a model that uses the dependent relationship between an observation and some number of lagged observations.
○ I: the use of differencing of raw observations in order to make the time series stationary.
○ MA: a model that uses the dependency between an observation and the residual errors from a moving average model applied to lagged observations.
● Parameters of the ARIMA model
○ p: the number of lag observations included in the model
○ d: the degree of differencing, the number of times that raw observations are differenced
○ q: the size of the moving average window
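As a tiny illustration of these parameters (the order used here is a placeholder, not the one chosen in the talk):

from statsmodels.tsa.arima_model import ARIMA

model = ARIMA(week_dn_ts, order=(2, 1, 1))   # p=2 lags, d=1 difference, q=1 MA term
fit = model.fit(disp=0)
print(fit.summary())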
44. Identification of ARIMA
● Autocorrelation Function (ACF): measured by a simple correlation between the current observation Y_t and the observation p lags from the current one, Y_{t-p}.
● Partial Autocorrelation Function (PACF): measured by the degree of association between Y_t and Y_{t-p} when the effects of the intermediate time lags between them are removed.
● Inference from ACF and PACF: theoretical ACFs and PACFs are available for various lags of AR and MA components. Plotting the sample ACF and PACF versus the lags and comparing them leads to the selection of the appropriate parameters p and q for the ARIMA model.
45. Identification of ARIMA (easy case)
● General characteristics of theoretical ACFs and PACFs
● Reference :
○ http://people.duke.edu/~rnau/411arim3.htm
○ Prof. Robert Nau
Model     | ACF                                  | PACF
AR(p)     | Tails off; spikes decay towards zero | Spikes cut off to zero after lag p
MA(q)     | Spikes cut off to zero after lag q   | Tails off; spikes decay towards zero
ARMA(p,q) | Tails off; spikes decay towards zero | Tails off; spikes decay towards zero
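A small sketch of how these plots are typically produced with statsmodels (not shown in the deck), so the sample ACF/PACF can be compared against the table above:

import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

fig, axes = plt.subplots(2, 1)
plot_acf(week_dn_ts, lags=40, ax=axes[0])    # compare against the ACF column
plot_pacf(week_dn_ts, lags=40, ax=axes[1])   # compare against the PACF column
plt.show()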
50. Anomaly Detection (Multivariate Gaussian)
import numpy as np
from scipy.stats import multivariate_normal
def estimate_gaussian(dataset):
mu = np.mean(dataset, axis=0)
sigma = np.cov(dataset.T)
return mu, sigma
def multivariate_gaussian(dataset, mu, sigma):
p = multivariate_normal(mean=mu, cov=sigma)
return p.pdf(dataset)
mu, sigma = estimate_gaussian(train_X)
p = multivariate_gaussian(train_X, mu, sigma)
anomalies = np.where(p < ep) # ep : threshold
51. Anomaly Detection (Multivariate Gaussian)
import numpy as np
from scipy.stats import multivariate_normal
def estimate_gaussian(dataset):
mu = np.mean(dataset, axis=0)
sigma = np.cov(dataset.T)
return mu, sigma
def multivariate_gaussian(dataset, mu, sigma):
p = multivariate_normal(mean=mu, cov=sigma)
return p.pdf(dataset)
mu, sigma = estimate_gaussian(train_X)
p = multivariate_gaussian(train_X, mu, sigma)
anomalies = np.where(p < ep) # ep : threshold
52. Anomaly Detection (Multivariate Gaussian)
import numpy as np
from scipy.stats import multivariate_normal
def estimate_gaussian(dataset):
mu = np.mean(dataset, axis=0)
sigma = np.cov(dataset.T)
return mu, sigma
def multivariate_gaussian(dataset, mu, sigma):
p = multivariate_normal(mean=mu, cov=sigma)
return p.pdf(dataset)
mu, sigma = estimate_gaussian(train_X)
p = multivariate_gaussian(train_X, mu, sigma)
anomalies = np.where(p < ep) # ep : threshold
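The threshold ep is left open in the deck. One common way to pick it (an assumption here, not taken from the talk) is to scan candidate values against a small labelled validation set and keep the one with the best F1 score:

import numpy as np
from sklearn.metrics import f1_score

def select_threshold(p_val, y_val):
    # p_val: densities on a validation set; y_val: 1 for known anomalies, 0 otherwise
    best_ep, best_f1 = 0.0, 0.0
    for ep in np.linspace(p_val.min(), p_val.max(), 1000):
        f1 = f1_score(y_val, (p_val < ep).astype(int))
        if f1 > best_f1:
            best_ep, best_f1 = ep, f1
    return best_ep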
53. Outline
● Data Collection: Logging SpeedTest, Data preparation
● Time series Analysis: Handling time series, Seasonal Trend Decomposition, Stationarity, Autocorrelation
● Forecast Modeling: Rolling Forecast, Autoregression, Moving Average, ARIMA, LSTM
● Anomaly Detection: Naive approach, Basic approaches, Multivariate Gaussian
56. Long Short-Term Memory
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.metrics import mean_squared_error

model = Sequential()
model.add(LSTM(num_neurons, stateful=True, return_sequences=True,
               batch_input_shape=(batch_size, timesteps, input_dimension)))
model.add(LSTM(num_neurons, stateful=True,
               batch_input_shape=(batch_size, timesteps, input_dimension)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
for i in range(num_epoch):
    model.fit(train_X, y, epochs=1, batch_size=batch_size, shuffle=False)
    model.reset_states()   # reset LSTM state between epochs (stateful training)
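fit() above expects train_X with shape (samples, timesteps, features). A minimal sketch of one way the speed-test series might be windowed into that shape (window length and variable names are assumptions, not from the talk):

import numpy as np

def make_windows(series, timesteps):
    # X[i] = a window of `timesteps` past values, y[i] = the value right after it
    X, y = [], []
    for i in range(len(series) - timesteps):
        X.append(series[i:i + timesteps])
        y.append(series[i + timesteps])
    return np.array(X).reshape(-1, timesteps, 1), np.array(y)

train_X, y = make_windows(df['down'].values, timesteps=12)   # e.g. one hour of 5-minute samples

With stateful=True, the number of samples also needs to be a multiple of batch_size.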
57. Long Short-Term Memory
● Allows modelling sophisticated and seasonal dependencies in time series
● Very helpful with multiple time series
● Ongoing research; it requires a lot of work to build a model for time series
58. Summary
● Be prepared before calling engineers for service failures
● A Pythonista has all the powerful tools
○ pandas is great for handling time series
○ statsmodels for analyzing and modeling time series
○ sklearn is such a multi-tool in data science
○ keras is a good way to start deep learning
● A Pythonista needs to understand a few concepts before using the tools
○ Stationarity in time series
○ Autoregressive and Moving Average
○ Means of forecasting, anomaly detection
● Deep Learning for forecasting time series
○ still ongoing research
● Do try this at home