This chapter deals with power system stability analysis. Transient stability of a power system is discussed, and the equal-area criterion is used as a solution method to check the stability of a system.
This chapter deals with the operation of the different parts of a power system, including the generation, transmission, and distribution systems. This slide is specifically prepared for ASTU 5th-year power and control engineering students.
Transactive Energy (TE) can play a defining role in adapting and stabilizing today's grid for tomorrow. A follow-up to the Cross-DEWG Discussion on Transactive Energy session held in May at the SGIP Spring 2014 Members Meeting, this webinar continues the dialogue regarding this important game changer. SGIP is making this webinar event open and free to the public.
Deep Reinforcement Learning: Q-Learning (Kai-Wen Zhao)
This slide reviews deep reinforcement learning, especially Q-Learning and its variants. We introduce the Bellman operator and approximate it with a deep neural network. Last but not least, we review the classic paper in which DeepMind's Atari agent beats human performance. Some tips for stabilizing DQN are also included.
The ability of the power system to maintain synchronous operation when subjected to a severe transient disturbance
faults on transmission circuits, transformers, buses
loss of generation
loss of loads
Response involves large excursions of generator rotor angles: influenced by nonlinear power-angle relationship
Stability depends on both the initial operating state of the system and the severity of the disturbance
Post-disturbance steady-state operating conditions usually differ from pre-disturbance conditions
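The dynamics summarized above (large rotor-angle excursions governed by the nonlinear power-angle relationship, with stability depending on disturbance severity) can be illustrated with a minimal numerical sketch of the classical single-machine swing equation. All per-unit constants below (mechanical power, power-angle amplitudes, inertia, clearing time) are illustrative assumptions, not values from this chapter.

```python
import math

# Minimal sketch: Euler integration of the swing equation
#   M * d(omega)/dt = Pm - Pmax * sin(delta),   d(delta)/dt = omega
# through a fault-and-clearing sequence. All constants are illustrative.
def swing_response(Pm=0.8, Pmax_pre=1.8, Pmax_fault=0.4, M=0.1,
                   t_clear=0.1, t_end=2.0, dt=1e-3):
    delta = math.asin(Pm / Pmax_pre)  # pre-disturbance equilibrium angle
    omega = 0.0                       # rotor speed deviation (rad/s)
    trajectory = []
    for step in range(int(t_end / dt)):
        t = step * dt
        # reduced transfer capability during the fault, restored after clearing
        Pmax = Pmax_fault if t < t_clear else Pmax_pre
        omega += (Pm - Pmax * math.sin(delta)) / M * dt
        delta += omega * dt
        trajectory.append(delta)
    return trajectory

# A quickly cleared fault leaves the rotor angle bounded below pi (stable);
# a slowly cleared one lets the angle pass pi (loss of synchronism).
stable = swing_response(t_clear=0.1)
unstable = swing_response(t_clear=0.8, t_end=3.0)
```

This mirrors the equal-area picture: the accelerating area gained during the fault must be absorbed by the decelerating area available after clearing, which fails for the longer clearing time.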
Summary of Modern power system planning part one
"The Forecasting of Growth of Demand for Electrical Energy"
The main topic of this chapter is the analysis of the various techniques required for utility planning engineers to optimally plan the expansion of the electrical power system.
This slide presents an introduction to microgrids. This is the second class for the subject 'Distribution Generation and Smart Grid'. I will provide all the discussions and analysis class by class.
Performance prediction of PV & PV/T systems using Artificial Neural Networks... (Ali Al-Waeli)
This presentation offers insight into use of ANN and machine learning for various applications in solar energy. Prepared and presented by Dr. Ali H. A. Alwaeli.
- POSTECH EECE695J, "Fundamentals of Deep Learning and Applications to Steel Manufacturing Processes", 2017-11-10
- Contents: introduction to recurrent neural networks, LSTM, variants of RNN, implementation of RNN, case studies
- Video: https://youtu.be/pgqiEPb4pV8
IEEE International Conference Presentation (Anmol Dwivedi)
IEEE International Conference
Paper title: "Real-Time Implementation of Phasor Measurement Unit Using NI CompactRIO".
Code Available on: https://github.com/anmold-07/Synchrophasor-Estimation
Low Energy Task Scheduling based on Work Stealing (LEGaTO project)
Abstract: Optimizing the energy efficiency of parallel execution on computing systems, ranging from server farms and mobile devices to embedded systems, is increasingly a first-order concern. A common way to express a parallel application is as a directed acyclic graph (DAG) in which each node represents a task. The task scheduling problem on multiprocessor systems is to find the proper processors on which to execute each task. Modern asymmetric multiprocessor systems in particular feature different types of cores with different performance and power consumption, e.g. Arm big.LITTLE and Intel Lakefield. Naive task assignment that ignores core types and task features can result in inefficient resource utilization and detrimentally impact overall energy consumption. Dynamic task scheduling is a widely used scheduling strategy that requires no prior knowledge (e.g. architecture heterogeneity or task DAG structure) before execution but makes decisions at runtime. Work stealing has been proven to be an effective dynamic task scheduling method with good scalability on larger systems. DVFS is a common technique for achieving better energy efficiency; however, exploiting it incurs reconfiguration overhead ranging from tens of microseconds to one millisecond. With fine-grained tasks as small as milliseconds, as required to expose large parallelism, it is not realistic to use DVFS on a per-task level. Measurements also show that the energy consumed during cores' under-utilized periods is significant.
Based on these problem statements, we propose a low-energy task-scheduling work-stealing runtime based on XiTAO, in which the system environment configuration is either fixed or managed by the OS power governors or system administrators. The runtime contains a dynamic performance tracing module, an idleness tracing module, a power profiling module, and a task mapping algorithm. The dynamic performance model gives accurate predictions for future tasks given a set of resources; it is independent of platforms and frequencies and achieves scalability and portability. Power profiling helps the runtime system understand CPU power consumption trends with respect to the number and type of cores and the frequencies. Idleness tracing presents the real-time status of cores and contributes to energy conservation during under-utilized periods. It also provides the real-time parallel slackness of active cores, which allows the task mapping algorithm to attribute the corresponding power consumption to each concurrently running task. The task mapping algorithm integrates the information from the above three modules and outputs the predicted best resource placements for ready tasks.
Poster presented by Jing Chen at the LEGaTO Final Event: 'Low-Energy Heterogeneous Computing Workshop'.
Brains rely on spiking neural networks for ultra-low-power information processing. Building artificial intelligence with similar efficiency requires learning algorithms to instantiate complex spiking neural networks and brain-inspired neuromorphic hardware to emulate them efficiently. Toward this end, I will briefly introduce surrogate gradients as a general framework for training spiking neural networks and showcase their robustness and self-calibration capabilities on analog neuromorphic hardware. Drawing further inspiration from biology, I will discuss the impact of homeostatic plasticity and network initialization in the excitatory-inhibitory balanced regime on deep spiking neural network training. Finally, I will show how approximations relate surrogate gradients to biologically plausible online learning rules with a minor impact on their effectiveness.
Prediction of stream-flow using an Artificial Neural Network model, with special reference to pre-processing of raw data.
The model is based on daily stream-flow records of many years.
Deep learning (also known as deep structured learning or hierarchical learning) is the application of artificial neural networks (ANNs) with more than one hidden layer to learning tasks. Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, partially supervised, or unsupervised.
Test different neural network models for forecasting of wind, solar and energ... (Tonmoy Ibne Arif)
In this project work, a multi-step deep neural network is used to forecast power generation and load demand over a short-term time frame. The feature vectors used to predict the target form a sequential time series. A Recurrent Neural Network is used in combination with a convolutional neural network to obtain a better forecasting model for the Windpark, Solarpark and Loadpark datasets. The forecasting performance of a feedforward neural network and Long Short-Term Memory is also compared. The project work is divided into two parts: in the first approach, the raw dataset is divided into a train/test split and no previous-step data are used; in the second, the raw dataset is divided into train, validation and test splits, and the current plus seven previous time steps are fed into the model.
Stochastic Computing Correlation Utilization in Convolutional Neural Network... (TELKOMNIKA JOURNAL)
In recent years, many applications have been implemented in embedded systems and mobile Internet of Things (IoT) devices that typically have constrained resources and smaller power budgets, yet exhibit "smartness" or intelligence. To implement computation-intensive and resource-hungry Convolutional Neural Networks (CNN) in this class of devices, many research groups have developed specialized parallel accelerators using Graphical Processing Units (GPU), Field-Programmable Gate Arrays (FPGA), or Application-Specific Integrated Circuits (ASIC). An alternative computing paradigm called Stochastic Computing (SC) can implement CNNs with a low hardware footprint and power consumption. To enable building more efficient SC CNNs, this work incorporates CNN basic functions in SC that exploit correlation, share Random Number Generators (RNG), and are more robust to rounding error. Experimental results show our proposed solution provides significant savings in hardware footprint and increased accuracy for the SC CNN basic function circuits compared to previous work.
One-day-ahead power forecasting is more and more required on the energy markets, and its accuracy is increasingly crucial since it affects the net income of operators. 1. Numerical Weather Prediction, including a meso-scale downscaling, provides a global prediction. A RANS CFD tool is used for the micro-scale downscaling, providing a precise wind forecast at each wind generator hub. 2. To improve the reliability of this forecast, especially in the short-term range, "fresh" SCADA data are used. Attention is focused on the active power, but other signals such as temperature and local wind characteristics can be taken into account. 3. In order to erase systematic errors and bias from the downscaled NWP-based forecast (1.), as well as to mix it with the persistence model (2.), an Artificial Neural Network is trained using long-term history. This paper first explains the method used and the choices made, especially concerning the machine learning parameters. A second part presents results on some real cases, with different time horizons.
The MSc defense ceremony was held on 6-7-2017 in Mansoura University, Faculty of Engineering. This presentation is shared to help MSc students in Faculty of Engineering prepare their thesis presentation and ease their tension before their presentation time
Implementation of recurrent neural network for the forecasting of USD buy ra... (IJECEIAES)
This study implements a recurrent neural network (RNN) by comparing two RNN network structures, namely Elman and Jordan, using the backpropagation through time (BPTT) algorithm in the training and forecasting process for foreign exchange forecasting cases. The activation functions used are the linear transfer function, the tan-sigmoid transfer function (Tansig), and the log-sigmoid transfer function (Logsig), which are applied to the hidden and output layers. The application of the activation functions shows that the log-sigmoid transfer function is the most appropriate activation function for the hidden layer, while the linear transfer function is the most appropriate activation function for the output layer. Based on the results of training and forecasting the USD against the IDR, the Elman BPTT method is better than the Jordan BPTT method, with the best iteration being the 4000th for both. The lowest root mean square error (RMSE) values for training and next-day forecasting produced by Elman BPTT were 0.073477 and 122.15, while the Jordan backpropagation RNN method yielded 0.130317 and 222.96, also for the following day.
[PR12] PR-050: Convolutional LSTM Network: A Machine Learning Approach for Pr... (Taegyun Jeon)
PR-050: Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting
Original Slide from http://home.cse.ust.hk/~xshiab/data/valse-20160323.pptx
Youtube: https://youtu.be/3cFfCM4CXws
[PR12] PR-026: Notes for CVPR Machine Learning Sessions (Taegyun Jeon)
PR-026: Notes for CVPR Machine Learning Session
Paper 1: Borrowing Treasures From the Wealthy: Deep Transfer Learning Through Selective Joint Fine-Tuning, https://arxiv.org/abs/1702.08690
Paper 2: The More You Know: Using Knowledge Graphs for Image Classification, https://arxiv.org/abs/1612.04844
Paper 3: On Compressing Deep Models by Low Rank and Sparse Decomposition, http://openaccess.thecvf.com/content_cvpr_2017/papers/Yu_On_Compressing_Deep_CVPR_2017_paper.pdf
Techniques to optimize the PageRank algorithm usually fall into two categories. One tries to reduce the work per iteration, and the other tries to reduce the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, which share the same in-links, helps avoid duplicate computations and thus could reduce iteration time. Road networks often have chains which can be short-circuited before the PageRank computation to improve performance; the final ranks of chain nodes can be easily calculated. This could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in the PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
As Europe's leading economic powerhouse and the fourth-largest economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like Russia and China, Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in cyberattack sophistication aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to Advanced Persistent Threats (APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Opendatabay - Open Data Marketplace.pptx (Opendatabay)
Opendatabay.com unlocks the power of data for everyone. Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
First ever open hub for data enthusiasts to collaborate and innovate. A platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. Leverage cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex. Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they're working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay. Marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... (John Andrews)
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Electricity price forecasting with Recurrent Neural Networks
1. Taegyun Jeon
TensorFlow-KR / 2016.06.18
Gwangju Institute of Science and Technology
Electricity Price Forecasting
with Recurrent Neural Networks
RNN을 이용한 전력 가격 예측
TensorFlow-KR Advanced Track
2. Who is the speaker?
Taegyun Jeon (GIST)
▫ Research Scientist in Machine Learning and Biomedical Engineering
tgjeon@gist.ac.kr
linkedin.com/in/tgjeon
tgjeon.github.io
Page 2[TensorFlow-KR Advanced Track] Electricity Price Forecasting with Recurrent Neural Networks
Github for this tutorial:
https://github.com/tgjeon/TensorFlow-Tutorials-for-Time-Series
3. What you will learn about RNN
How to:
Build a prediction model
▫ Easy case study: sine function
▫ Practical case study: electricity price forecasting
Manipulate time series data
▫ For RNN models
Run and evaluate graph
Predict using RNN as regressor
4. Contents
Overview of TensorFlow
Recurrent Neural Networks (RNN)
RNN Implementation
Case studies
▫ Case study #1: sine function
▫ Case study #2: electricity price forecasting
Conclusions
Q & A
5. Contents
Overview of TensorFlow
Recurrent Neural Networks (RNN)
RNN Implementation
Case studies
▫ Case study #1: sine function
▫ Case study #2: electricity price forecasting
Conclusions
Q & A
6. TensorFlow
Open Source Software Library for Machine Intelligence
7. Prerequisite
Software
▫ TensorFlow (r0.9)
▫ Python (3.4.4)
▫ Numpy (1.11.0)
▫ Pandas (0.16.2)
Tutorials
▫ “Recurrent Neural Networks”, TensorFlow Tutorials
▫ “Sequence-to-Sequence Models”, TensorFlow Tutorials
Blog Posts
▫ Understanding LSTM Networks (Chris Olah @ colah.github.io)
▫ Introduction to Recurrent Networks in TensorFlow (Danijar Hafner @ danijar.com)
Book
▫ “Deep Learning”, I. Goodfellow, Y. Bengio, and A. Courville, MIT Press, 2016
8. Contents
Overview of TensorFlow
Recurrent Neural Networks (RNN)
RNN Implementation
Case studies
▫ Case study #1: sine function
▫ Case study #2: electricity price forecasting
Conclusions
Q & A
10. Recurrent Neural Networks (RNN)
x_t: the input at time step t
s_t: the hidden state at time t
o_t: the output state at time t
Image from WILDML.com: “RECURRENT NEURAL NETWORKS TUTORIAL, PART 1 – INTRODUCTION TO RNNS”
11. Overall procedure: RNN
Initialization
▫ All zeros
▫ Random values (dependent on activation function)
▫ Xavier initialization [1]:
Random values in the interval [−1/√n, 1/√n], where n is the number of incoming connections from the previous layer
[1] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks” (2010)
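The Xavier rule on this slide can be sketched directly in NumPy; the layer sizes below are illustrative, and the uniform interval [−1/√n, 1/√n] follows the heuristic cited from Glorot and Bengio.

```python
import numpy as np

# Xavier-style initialization as described above: uniform values in
# [-1/sqrt(n), 1/sqrt(n)], where n is the number of incoming connections.
def xavier_init(n_in, n_out, seed=0):
    rng = np.random.default_rng(seed)
    bound = 1.0 / np.sqrt(n_in)
    return rng.uniform(-bound, bound, size=(n_in, n_out))

# Illustrative layer: 100 incoming connections, 32 units.
W = xavier_init(100, 32)
```

With n_in = 100 every entry lies within ±0.1, keeping early activations in the responsive region of tanh-like nonlinearities.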
12. Overall procedure: RNN
Initialization
Forward Propagation
▫ s_t = f(U x_t + W s_{t−1})
• The function f is usually a nonlinearity such as tanh or ReLU
▫ o_t = softmax(V s_t)
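The forward propagation step above can be written out in NumPy; the dimensions and random weights here are illustrative only.

```python
import numpy as np

# NumPy sketch of one RNN forward step:
#   s_t = tanh(U x_t + W s_{t-1}),  o_t = softmax(V s_t)
def rnn_step(x_t, s_prev, U, W, V):
    s_t = np.tanh(U @ x_t + W @ s_prev)
    logits = V @ s_t
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    o_t = exp / exp.sum()
    return s_t, o_t

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 3        # illustrative sizes
U = rng.normal(size=(n_hidden, n_in))
W = rng.normal(size=(n_hidden, n_hidden))
V = rng.normal(size=(n_out, n_hidden))

s = np.zeros(n_hidden)                  # initial hidden state
for t in range(5):                      # unroll over a short input sequence
    x = rng.normal(size=n_in)
    s, o = rnn_step(x, s, U, W, V)
```

Note how the same U, W, V are reused at every time step: the recurrence, not separate weights, carries information forward.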
13. Overall procedure: RNN
Initialization
Forward Propagation
Calculating the loss
▫ y: the labeled data
▫ o: the output data
▫ Cross-entropy loss: L(y, o) = −(1/N) Σ_{n∈N} y_n log(o_n)
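The cross-entropy loss above computes, per example, minus the log-probability assigned to the correct class, averaged over the batch. A small sketch with a made-up batch of one-hot labels y and predictions o:

```python
import numpy as np

# Cross-entropy loss  L(y, o) = -(1/N) sum_n y_n log(o_n)
# for a batch of one-hot labels y and predicted distributions o.
def cross_entropy(y, o, eps=1e-12):
    # eps guards against log(0) for confident wrong predictions
    return -np.mean(np.sum(y * np.log(o + eps), axis=1))

y = np.array([[1.0, 0.0], [0.0, 1.0]])   # illustrative one-hot labels
o = np.array([[0.9, 0.1], [0.2, 0.8]])   # illustrative softmax outputs
loss = cross_entropy(y, o)               # -(log 0.9 + log 0.8) / 2
```

The loss shrinks toward zero as o approaches y, which is exactly the signal the SGD step on the next slide pushes the parameters along.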
14. Overall procedure: RNN
Initialization
Forward Propagation
Calculating the loss
Stochastic Gradient Descent (SGD)
▫ Push the parameters in a direction that reduces the error
▫ The directions: the gradients of the loss, ∂L/∂U, ∂L/∂V, ∂L/∂W
15. Overall procedure: RNN
Initialization
Forward Propagation
Calculating the loss
Stochastic Gradient Descent (SGD)
Backpropagation Through Time (BPTT)
▫ Long-term dependencies
→ vanishing/exploding gradient problem
16. Vanishing gradient over time
Conventional RNN with sigmoid
▫ The sensitivity of the input values decays over time
▫ The network forgets the previous input
Long Short-Term Memory (LSTM) [2]
▫ The cell remembers the input as long as it wants
▫ The output can be used anytime it wants
[2] A. Graves. “Supervised Sequence Labelling with Recurrent Neural Networks” (2012)
17. Design Patterns for RNN
RNN Sequences
Blog post by A. Karpathy. “The Unreasonable Effectiveness of Recurrent Neural Networks” (2015)
Task | Input | Output
Image classification | fixed-sized image | fixed-sized class
Image captioning | image | sentence of words
Sentiment analysis | sentence | positive or negative sentiment
Machine translation | sentence in English | sentence in French
Video classification | video sequence | label for each frame
18. Design Pattern for Time Series Prediction
RNN
DNN
Linear Regression
19. Contents
Overview of TensorFlow
Recurrent Neural Networks (RNN)
RNN Implementation
Case studies
▫ Case study #1: sine function
▫ Case study #2: electricity price forecasting
Conclusions
Q & A
20. RNN Implementation using TensorFlow
How do we design an RNN model for time series prediction?
How do we manipulate our time series data as input to the RNN?
21. Regression models in Scikit-Learn
import numpy as np
from sklearn.gaussian_process import GaussianProcess

X = np.atleast_2d([0., 1., 2., 3., 5., 6., 7., 8., 9.5]).T
y = (X * np.sin(X)).ravel()
x = np.atleast_2d(np.linspace(0, 10, 1000)).T
gp = GaussianProcess(corr='cubic', theta0=1e-2, thetaL=1e-4,
                     thetaU=1e-1, random_start=100)
gp.fit(X, y)
y_pred, MSE = gp.predict(x, eval_MSE=True)
22. RNN Implementation
Recurrent States
▫ Choose RNN cell type
▫ Use multiple RNN cells
Input layer
▫ Prepare time series data as RNN input
▫ Data splitting
▫ Connect input and recurrent layers
Output layer
▫ Add DNN layer
▫ Add regression model
Create RNN model for regression
▫ Train & Prediction
23. 1) Choose the RNN cell type
Neural Network RNN Cells (tf.nn.rnn_cell)
▫ BasicRNNCell (tf.nn.rnn_cell.BasicRNNCell)
• activation : tanh()
• num_units : The number of units in the RNN cell
▫ BasicLSTMCell (tf.nn.rnn_cell.BasicLSTMCell)
• The implementation is based on RNN Regularization[3]
• activation : tanh()
• state_is_tuple : 2-tuples of the accepted and returned states
▫ GRUCell (tf.nn.rnn_cell.GRUCell)
• Gated Recurrent Unit cell[4]
• activation : tanh()
▫ LSTMCell (tf.nn.rnn_cell.LSTMCell)
• use_peepholes (bool) : diagonal/peephole connections[5].
• cell_clip (float) : the cell state is clipped by this value prior to the cell output activation.
• num_proj (int): The output dimensionality for the projection matrices
[3] W. Zaremba, I. Sutskever, and O. Vinyals, "Recurrent Neural Network Regularization" (2014)
[4] K. Cho et al., “Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation” (2014)
[5] H. Sak et al., “Long short-term memory recurrent neural network architectures for large scale acoustic modeling” (2014)
24. LAB-1) Choose the RNN Cell type
import tensorflow as tf
rnn_cell = tf.nn.rnn_cell.BasicRNNCell(num_units)
rnn_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
rnn_cell = tf.nn.rnn_cell.GRUCell(num_units)
rnn_cell = tf.nn.rnn_cell.LSTMCell(num_units)
(Figure: cell diagrams for BasicRNNCell, BasicLSTMCell, GRUCell, and LSTMCell)
25. 2) Use the multiple RNN cells
▫ RNN Cell wrapper (tf.nn.rnn_cell.MultiRNNCell)
• Create a RNN cell composed sequentially of a number of RNN Cells.
▫ RNN Dropout (tf.nn.rnn_cell.DropoutWrapper)
• Add dropout to inputs and outputs of the given cell.
▫ RNN Embedding wrapper (tf.nn.rnn_cell.EmbeddingWrapper)
• Add input embedding to the given cell.
• Ex) word2vec, GloVe
▫ RNN Input Projection wrapper (tf.nn.rnn_cell.InputProjectionWrapper)
• Add input projection to the given cell.
▫ RNN Output Projection wrapper (tf.nn.rnn_cell.OutputProjectionWrapper)
• Add output projection to the given cell.
27. 3) Prepare the time series data
Split raw data into train, validation, and test dataset
▫ split_data [6]
• data : raw data
• val_size : the ratio of validation set (ex. val_size=0.2)
• test_size : the ratio of test set (ex. test_size=0.2)
[6] M. Mourafiq, “tensorflow-lstm-regression” (code: https://github.com/mouradmourafiq/tensorflow-lstm-regression)
def split_data(data, val_size=0.2, test_size=0.2):
    ntest = int(round(len(data) * (1 - test_size)))
    nval = int(round(len(data.iloc[:ntest]) * (1 - val_size)))
    df_train, df_val, df_test = data.iloc[:nval], data.iloc[nval:ntest], data.iloc[ntest:]
    return df_train, df_val, df_test
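The boundary arithmetic behind split_data can be checked with a quick framework-free sketch (split_sizes is a hypothetical helper that mirrors the same index computation on a length alone):

```python
def split_sizes(n, val_size=0.2, test_size=0.2):
    # Mirror split_data's boundary arithmetic without pandas,
    # returning (train, validation, test) lengths.
    ntest = int(round(n * (1 - test_size)))
    nval = int(round(ntest * (1 - val_size)))
    return nval, ntest - nval, n - ntest

print(split_sizes(10000))  # (6400, 1600, 2000): 64% train, 16% val, 20% test
```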
28. LAB-3) Prepare the time series data
train, val, test = split_data(raw_data, val_size=0.2, test_size=0.2)
(Figure: the raw data (100%) is first split into Train (80%) and Test (20%); the train portion is split again into Train (80% of 80% = 64%) and Validation (20% of 80% = 16%), giving a 64% / 16% / 20% split overall.)
29. 3) Prepare the time series data
Generate sequence pair (x, y)
▫ rnn_data [6]
• data : our time series data
• time_steps : length of each input window
• labels : False for input data (x) / True for target data (y)
def rnn_data(data, time_steps, labels=False):
    """
    creates new data frame based on previous observation
    * example:
        l = [1, 2, 3, 4, 5]
        time_steps = 2
        -> labels == False [[1, 2], [2, 3], [3, 4]]
        -> labels == True [3, 4, 5]
    """
    rnn_df = []
    for i in range(len(data) - time_steps):
        if labels:
            try:
                rnn_df.append(data.iloc[i + time_steps].as_matrix())
            except AttributeError:
                rnn_df.append(data.iloc[i + time_steps])
        else:
            data_ = data.iloc[i: i + time_steps].as_matrix()
            rnn_df.append(data_ if len(data_.shape) > 1 else [[i] for i in data_])
    return np.array(rnn_df)
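The sliding-window logic can be verified with plain lists (a minimal sketch of the same idea, not the cited code; make_windows is a hypothetical stand-in for rnn_data):

```python
def make_windows(seq, time_steps, labels=False):
    # labels=False: overlapping input windows of length time_steps.
    # labels=True: the single value that follows each window (the target).
    if labels:
        return [seq[i + time_steps] for i in range(len(seq) - time_steps)]
    return [seq[i:i + time_steps] for i in range(len(seq) - time_steps)]

l = [1, 2, 3, 4, 5]
print(make_windows(l, 2))        # [[1, 2], [2, 3], [3, 4]]
print(make_windows(l, 2, True))  # [3, 4, 5]
```

This reproduces exactly the docstring example of rnn_data above.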
30. LAB-3) Prepare the time series data
time_steps = 10
train_x = rnn_data(df_train, time_steps, labels=False)
train_y = rnn_data(df_train, time_steps, labels=True)
(Figure: with df_train = [1 … 10000] and time_steps = 10, train_x holds the windows x#01 = [1, 2, …, 10], x#02 = [2, 3, …, 11], …, x#9990 = [9990, 9991, …, 9999], and train_y holds the matching targets y#01 = 11, y#02 = 12, …, y#9990 = 10000.)
31. 4) Split our data
Split time series data into smaller tensors
▫ split (tf.split)
• split_dim : the dimension to split along (the time axis for [batch_size, time_steps, …] data)
• num_split : time_steps
• value : our data
▫ split_squeeze (tf.contrib.learn.ops.split_squeeze)
• Splits input on given dimension and then squeezes that dimension.
• dim
• num_split
• tensor_in
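The shape manipulation that split_squeeze performs can be illustrated with a NumPy analogue (an illustration only, not the TensorFlow op itself; the shapes are hypothetical):

```python
import numpy as np

x = np.arange(8, dtype=float).reshape(2, 4, 1)   # (batch=2, time_steps=4, input_size=1)
slices = np.split(x, 4, axis=1)                  # 4 arrays of shape (2, 1, 1)
squeezed = [s.squeeze(axis=1) for s in slices]   # 4 arrays of shape (2, 1)
print(len(squeezed), squeezed[0].shape)
```

After splitting along the time axis and squeezing that axis, each element has shape [batch_size, input_size], which is the per-step input the recurrent layer expects.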
33. 5) Connect input and recurrent layers
Create a recurrent neural network specified by RNNCell
▫ rnn (tf.nn.rnn)
• Args:
◦ cell : an instance of RNNCell
◦ inputs : list of inputs, tensor shape = [batch_size, input_size]
• Returns:
◦ (outputs, state)
◦ outputs : list of outputs
◦ state : the final state
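What rnn() does for a BasicRNNCell-style cell can be sketched as an unrolled loop in NumPy (a conceptual sketch under the assumption of a plain tanh cell; the weights and shapes are made up for illustration):

```python
import numpy as np

def unrolled_rnn(inputs, W_x, W_h, b):
    # inputs: list of arrays, each of shape [batch_size, input_size]
    # Per step: h_t = tanh(x_t @ W_x + h_{t-1} @ W_h + b)
    h = np.zeros((inputs[0].shape[0], W_h.shape[0]))
    outputs = []
    for x in inputs:
        h = np.tanh(x @ W_x + h @ W_h + b)
        outputs.append(h)
    return outputs, h  # (list of per-step outputs, final state)

rng = np.random.default_rng(0)
inputs = [rng.normal(size=(2, 1)) for _ in range(4)]  # 4 steps, batch=2, input_size=1
W_x, W_h, b = rng.normal(size=(1, 3)), rng.normal(size=(3, 3)), np.zeros(3)
outputs, state = unrolled_rnn(inputs, W_x, W_h, b)
print(len(outputs), outputs[0].shape, state.shape)
```

This mirrors the (outputs, state) return contract described above: one output per time step, plus the final recurrent state.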
38. Contents
Overview of TensorFlow
Recurrent Neural Networks (RNN)
RNN Implementation
Case studies
▫ Case study #1: sine function
▫ Case study #2: electricity price forecasting
Conclusions
Q & A
39. Case study #1: sine function
Libraries
▫ numpy: package for scientific computing
▫ matplotlib: 2D plotting library
▫ tensorflow: open source software library for machine intelligence
▫ learn: Simplified interface for TensorFlow (mimicking Scikit Learn) for Deep Learning
▫ mse: "mean squared error" as evaluation metric
▫ lstm_predictor: our lstm class
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from tensorflow.contrib import learn
from sklearn.metrics import mean_squared_error, mean_absolute_error
from lstm_predictor import generate_data, lstm_model
40. Case study #1: sine function
Parameter definitions
▫ LOG_DIR: directory for log files
▫ TIMESTEPS: RNN time steps
▫ RNN_LAYERS: RNN layer information
▫ DENSE_LAYERS: sizes of the DNN layers; [10, 10] means two dense layers with 10 hidden units each
▫ TRAINING_STEPS
▫ BATCH_SIZE
▫ PRINT_STEPS
LOG_DIR = './ops_logs'
TIMESTEPS = 5
RNN_LAYERS = [{'steps': TIMESTEPS}]
DENSE_LAYERS = [10, 10]
TRAINING_STEPS = 100000
BATCH_SIZE = 100
PRINT_STEPS = TRAINING_STEPS / 100
41. Case study #1: sine function
Generate waveform
▫ fct: function
▫ x: observation
▫ time_steps: timesteps
▫ seperate: check multimodality
X, y = generate_data(np.sin, np.linspace(0, 100, 10000), TIMESTEPS,
seperate=False)
42. Case study #1: sine function
Create a regressor with TF Learn
▫ model_fn: regression model
▫ n_classes: 0 for regression
▫ verbose: verbosity level
▫ steps: training steps
▫ optimizer: ("SGD", "Adam", "Adagrad")
▫ learning_rate
▫ batch_size
regressor = learn.TensorFlowEstimator(
    model_fn=lstm_model(TIMESTEPS, RNN_LAYERS, DENSE_LAYERS),
    n_classes=0, verbose=1, steps=TRAINING_STEPS,
    optimizer='Adagrad', learning_rate=0.03, batch_size=BATCH_SIZE)
43. Case study #1: sine function
validation_monitor = learn.monitors.ValidationMonitor(
    X['val'], y['val'], every_n_steps=PRINT_STEPS,
    early_stopping_rounds=1000)
regressor.fit(X['train'], y['train'],
              monitors=[validation_monitor], logdir=LOG_DIR)
predicted = regressor.predict(X['test'])
mse = mean_squared_error(y['test'], predicted)
print("Error: %f" % mse)
Error: 0.000294
44. Case study #1: sine function
plot_predicted, = plt.plot(predicted, label='predicted')
plot_test, = plt.plot(y['test'], label='test')
plt.legend(handles=[plot_predicted, plot_test])
46. Energy forecasting problems
(Figure: an energy signal such as load, price, or generation is observed up to the current time; together with external signals (e.g. weather) and external forecasts (e.g. weather forecasts), it is used to produce the signal forecast.)
47. Electricity Price Forecasting (EPF)
(Figure: in EPF, the energy signal is the price itself, observed up to the current time; external signals include weather, load, and generation.)
65. Competition Ranking (Official)
Check the website of EPF2016 competition
▫ http://complatt.smartwatt.net/
67. Implementation issues
Issues for Future Works
▫ About mathematical models
• Were wind or solar generation forecast models used?
• Were load forecast models used?
• Was an ensemble of mathematical models, or an ensemble average of multiple runs, used?
▫ About information used
• Is there cascading usage of forecasts in your price model? For instance, do you use your forecast for day D+1 as input to the model for day D+2?
• Did you adjust the models based on previous forecasts from other forecasters? If yes, which forecast do you usually follow?
▫ About training period
• What time period was used to train your model?
• Was the model updated with recent data?
• On which days do you update the models?
69. Q & A
Any Questions?