A majority of the electricity in the U.S. is traded in independent system operator (ISO) based wholesale markets. ISO-based markets typically function in a two-step settlement process: day-ahead (DA) financial settlements followed by physical real-time (spot) market settlements for electricity. In this work, we focus on obtaining equilibrium bidding strategies for electricity generators in DA markets. Electricity prices in DA markets are determined by the ISO, which matches competing supply offers from power generators with demand bids from load-serving entities. Since multiple generators compete with one another to supply power, the problem can be modeled as a competitive Markov decision process, which we solve using a reinforcement learning (RL) approach. For power networks of realistic size, the state-action space can explode, making the RL procedure computationally intensive. This motivated us to solve the problem over Spark. The talk provides the following takeaways:

1. Modeling the day-ahead market as a Markov decision process

2. Code sketches to show the Markov decision process solution over Spark and Mahout over Apache Tez

3. Performance results comparing Mahout over Apache Tez and Spark.


I will begin by talking about what electricity markets are and then discuss the need for them to be modeled. I will briefly highlight other markets that behave in a similar fashion, and discuss why reinforcement learning (unsupervised machine learning) is a viable approach for this challenging problem. Then Vijay will discuss how we intend to exploit the inherent parallelisms within the algorithm and share a potential solution strategy over Spark. We will also briefly highlight the current state of the work and discuss future work.

So we have multiple generators competing to sell power and multiple large distribution companies, or load-serving entities, competing to buy power.

This market is what we are looking to model and understand. And since this decision making happens each day, there is a very real need for it to be modeled and understood, especially since there have been some market design flaws. We all remember the California market crisis, where prices went through the roof.

- 1. © 2014 Impetus Technologies1 Impetus Technologies Inc. Distributed Reinforcement Learning for Electricity Market Bidding with Spark Vishnu Nanduri, Ph.D. Vijay Srinivas Agneeswaran, Ph.D. 2014 Spark Summit (June 30th-July 2nd), San Francisco, CA
- 2. © 2014 Impetus Technologies2 Speaker Bios Vishnu is a Senior Data Scientist with experience in Reinforcement Learning, Supervised Machine Learning, Stochastic Optimization, and Statistical Analysis. He was a faculty member at the University of Wisconsin-Milwaukee before joining Impetus. Vijay is the Director of Big Data Labs at Impetus, with experience in cloud, grid, peer-to-peer computing, and machine learning for Big Data. He is the author of the recently published book “Big Data Analytics Beyond Hadoop.”
- 3. © 2014 Impetus Technologies3 Agenda 1. What is an electricity market? 2. Why does it need to be modeled? 3. Are there other markets that need or could use such modeling? 4. Why reinforcement learning? Background and basics 5. Why Spark? What other options exist? 6. Solution strategy over Spark 7. Ongoing and future work
- 4. © 2014 Impetus Technologies4 Electricity Markets in North America Source: http://www.caiso.com/about/Pages/OurBusiness/UnderstandingtheISO/Opening-access.aspx
- 5. © 2014 Impetus Technologies5 Prior to Market Restructuring Generators Transmission Distribution Consumers Vertically Integrated Old Structure
- 6. © 2014 Impetus Technologies6 Field Force Use Case – The Challenges ISO balances supply and demand and decides the price- quantity allocations Independent System Operator (ISO) Multiple Competing Suppliers submit Price-Quantity bids Load Serving Entities (LSEs) submit demand bids
- 7. © 2014 Impetus Technologies7 Why does it need to be modeled? To understand long-term behavior of market Equilibrium behavior Assessment of market power Investigate policy prescriptions
- 8. © 2014 Impetus Technologies8 Modeling Approach • Bid each day (each hour) for supplying power the following day • Stochastic processes underlying the day ahead energy market operations are modeled as a Markov chain • Decisions embedded on a Markov chain are modeled as Markov decision processes • Multiple generators competing in a market is modeled as a competitive Markov decision process (a.k.a., stochastic game)
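The slide above models the stochastic processes underlying the day-ahead market as a Markov chain. A minimal sketch of that idea, using a hypothetical discretized load state and made-up transition probabilities (the talk's actual model uses a richer load/price state over the network buses):

```python
import random

# Toy Markov chain over discretized daily load levels. The states and
# transition probabilities below are illustrative assumptions, not the
# talk's calibrated model.
STATES = ["low", "medium", "high"]
TRANSITIONS = {                       # P(next day's load | today's load)
    "low":    {"low": 0.6, "medium": 0.3, "high": 0.1},
    "medium": {"low": 0.2, "medium": 0.5, "high": 0.3},
    "high":   {"low": 0.1, "medium": 0.3, "high": 0.6},
}

def next_state(state, rng):
    """Sample the next day's load level from the current row of P."""
    levels = list(TRANSITIONS[state])
    weights = [TRANSITIONS[state][s] for s in levels]
    return rng.choices(levels, weights=weights)[0]

def simulate(days, start="medium", seed=42):
    """Simulate a path of daily load states of the given length."""
    rng = random.Random(seed)
    path, state = [start], start
    for _ in range(days - 1):
        state = next_state(state, rng)
        path.append(state)
    return path
```

Decisions (bids) embedded on such a chain are what turn it into a Markov decision process, as the slide states.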
- 9. © 2014 Impetus Technologies9 CMDP Model • Let B = set of supply/load buses in the network, M = # of loads, N = # of suppliers/generators • Xt = (qt, pt) is the system state on any day t, where • qt = (q1t, q2t, …, q|B|t) is the load vector • pt = (p1t, p2t, …, p|B|t) is the price vector • Dt = (Dlt : l ∈ {1, 2, …, N}), where • Dlt is the lth generator's bid
- 10. © 2014 Impetus Technologies10 CMDP Model • X = {Xt : t ∈ N} is the system state process • D = {Dt : t ∈ N} is the decision process • The joint process (X, D) is a competitive Markov decision process • The average-reward criterion and the non-zero-sum nature of the game make the solution more challenging • Hence, we use a simulation-based optimization approach called reinforcement learning (RL)
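The state and decision definitions on the two CMDP slides can be transcribed almost directly into code. A minimal sketch; the class and field names are my own, with vectors indexed by bus and generator exactly as on the slides:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class MarketState:
    """X_t = (q_t, p_t): the system state on day t."""
    loads: Tuple[float, ...]    # q_t, one load entry per bus in B
    prices: Tuple[float, ...]   # p_t, one price entry per bus in B

@dataclass(frozen=True)
class JointDecision:
    """D_t = (D_lt : l in {1..N}): one bid per generator."""
    bids: Tuple[float, ...]
```

The joint process of states and decisions over days t = 1, 2, … is the competitive Markov decision process the talk solves.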
- 11. © 2014 Impetus Technologies11 RL Basics • The theory of RL is founded on two important principles: • Bellman’s equation and the theory of stochastic approximation. • Any reinforcement learning model contains four basic elements: System environment (simulation model) Learning agents (market participants) Set of actions for each agent (action spaces) System response (participant rewards)
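As an illustration of the four elements named above and the Bellman-equation/stochastic-approximation foundation, here is a minimal single-agent Q-learning sketch. It uses a discounted criterion for simplicity, whereas the talk's model uses the harder average-reward criterion; all names and parameter values are illustrative, not the talk's implementation:

```python
import random
from collections import defaultdict

class QLearningAgent:
    """Tabular Q-learning: epsilon-greedy actions plus one-step backups."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
        self.q = defaultdict(float)   # Q(state, action), defaults to 0.0
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = random.Random(seed)

    def choose(self, state):
        """Explore with probability epsilon, otherwise act greedily."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """One-step Bellman backup via stochastic approximation."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

Here the environment supplies `reward` and `next_state`; in the market model those come from the ISO's clearing of all agents' bids.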
- 12. © 2014 Impetus Technologies12 Are there other applications? Generation capacity expansion planning Carbon allowance markets
- 13. © 2014 Impetus Technologies13 Reinforcement Learning in Electricity Markets: A Schematic Agent 1 Agent 2 Agent ‘n’ Independent System Operator System State System State System State action action action reward reward reward Update State Update State Update State Learn & Update Knowledge Base Learn & Update Knowledge Base Learn & Update Knowledge Base
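The ISO box in the schematic above matches supply offers against demand and returns price-quantity allocations to the agents. A toy stand-in for that clearing step, assuming single-unit offers and a uniform clearing price, which is a deliberate simplification of real ISO dispatch:

```python
def iso_clear(bids, demand):
    """Accept the cheapest offers until demand is met.

    bids: dict mapping generator id -> offered price for one unit.
    Returns (accepted generator ids, uniform clearing price). The last
    accepted offer sets the price, a simplification of real ISO rules.
    """
    accepted, supplied, price = [], 0, 0.0
    for agent_id, bid_price in sorted(bids.items(), key=lambda kv: kv[1]):
        if supplied >= demand:
            break
        accepted.append(agent_id)
        supplied += 1                 # each accepted generator supplies one unit
        price = bid_price
    return accepted, price
```

Each agent's reward for the round would then be derived from whether it was accepted and at what price, closing the learn-and-update loop in the schematic.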
- 14. © 2014 Impetus Technologies14 Why Spark? • Hadoop MapReduce is not well suited for iterative machine learning algorithms • Twister/HaLoop/Apache Hama: fault tolerance is questionable, enterprise readiness is not clear • GraphLab: can only take distributed snapshots – no automatic recovery • Possible application to the spot market – real-time electricity bidding
- 15. © 2014 Impetus Technologies15 RL Implementation over Spark
- 16. © 2014 Impetus Technologies16 RL Spark Code
- 17. © 2014 Impetus Technologies17 RL Code
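The code screenshots on slides 15-17 did not survive extraction. As a hedged stand-in for the parallel structure they presumably showed, this sketch treats each agent's per-round Q-update as an independent task (the natural fit for a Spark map over a collection of agents), approximated here with a standard-library thread pool; all names and numbers are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def q_update(q_value, reward, alpha=0.1):
    """Pure per-agent update toward the observed reward (no shared state)."""
    return q_value + alpha * (reward - q_value)

def parallel_round(q_values, rewards):
    """Apply every agent's update in parallel and collect the results.

    q_values, rewards: dicts keyed by agent id. Because each update reads
    only its own agent's entries, the map is embarrassingly parallel.
    """
    with ThreadPoolExecutor() as pool:
        updated = pool.map(
            lambda agent: (agent, q_update(q_values[agent], rewards[agent])),
            q_values,
        )
        return dict(updated)
```

In Spark the same shape would be a map over an RDD of (agent, Q-table) pairs with results collected back to the driver each iteration, which is exactly the iterative workload the "Why Spark?" slide argues MapReduce handles poorly.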
- 18. © 2014 Impetus Technologies18 Work in progress • Giraph-like APIs for GraphX: allow the user to specify master code vs. slave code, with explicit barrier synchronization • Reinforcement learning over Spark: can be used by the independent system operator or by individual generators; can also be used in other competitive bidding-based markets such as carbon allowance markets, a multi-million-dollar industry in the U.S. and Europe • RL over Hadoop/Stratosphere: comparison with Spark/GraphX
- 19. © 2014 Impetus Technologies19 Thank you. Questions??
