Akka: London Scala User Group

  • Scalable: Scala stands for "scalable language", meaning the language is designed to scale with its usage; its concepts and abstractions carry over from small scripts to large systems.
    Component-oriented: Scala was initially designed as a language for component-oriented programming, with its requirements of reusability, maintainability, orthogonality and extensibility.
    Coherent: Scala manages to build a coherent whole while merging the FP and OO paradigms.
    High level: Scala lets you program at a very high level, using its powerful abstractions from both OO and FP.
    Extensible: Scala is great for building Domain Specific Languages (DSLs); good type inference and a very flexible syntax give it a dynamic feel similar to Ruby and Python (see the sketch after this list).
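
    A minimal, self-contained sketch of the type inference and flexible syntax mentioned above; the Duration/TimeOps names and the whole example are made up for illustration, not taken from the talk.

      object SyntaxDemo {
        // a tiny value type with a symbolic method name
        case class Duration(millis: Long) {
          def +(that: Duration) = Duration(millis + that.millis)   // result type inferred
        }

        // "pimp my library": lets plain Ints grow a 'seconds' method
        class TimeOps(n: Int) {
          def seconds = Duration(n * 1000L)
        }
        implicit def toTimeOps(n: Int): TimeOps = new TimeOps(n)

        def main(args: Array[String]): Unit = {
          val timeout = 5.seconds + 30.seconds   // infix calls read like a small DSL, no type annotations
          println(timeout)                       // Duration(35000)
        }
      }
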
  • Transcript

    • 1. Akka: Simpler Concurrency, Scalability & Fault-tolerance through Actors Jonas Bonér Scalable Solutions jonas@jonasboner.com twitter: @jboner
    • 2. The problem It is way too hard to build: 1. correct highly concurrent systems 2. truly scalable systems 3. fault-tolerant systems that self-heal ...using “state-of-the-art” tools
    • 3. Why so difficult ? • We have been doing things the same way for so long, threads, locks... • Enterprise solutions are inflexible and unwieldy • Too much code to write
    • 4. Vision Simpler Concurrency Scalability Fault-tolerance Through one single unified Programming model Runtime service
    • 5. Manage system overload
    • 6. Scale up & Scale out
    • 7. Replicate and distribute for fault-tolerance
    • 8. Automatic & adaptive load balancing
    • 9. Introducing Concurrency Scalability Fault-tolerance Actors STM Agents Dataflow Distributed Secure Persistent Open Source Clustered
    • 10. Architecture Core Modules
    • 11. Architecture Add-on Modules Add-on Modules
    • 12. Architecture Enterprise Modules
    • 13. Actors one tool in the toolbox
    • 14. Akka Actors
    • 15. Actor Model of Concurrency • Implements Message-Passing Concurrency • Share NOTHING • Isolated lightweight processes • Communicates through messages • Asynchronous and non-blocking • Each actor has a mailbox (message queue)
    • 16. Actor Model of Concurrency • Easier to reason about • Raised abstraction level • Easier to avoid –Race conditions –Deadlocks –Starvation –Live locks
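
      A minimal sketch (not one of the slides) of the "share nothing" point above: the counter below is private to the actor and is only touched while the actor processes one message at a time from its mailbox, so no locks are needed. The SafeCounter/Increment names are made up, and the imports assume the 0.10-era package layout (se.scalablesolutions.akka.actor).

          import se.scalablesolutions.akka.actor.Actor
          import se.scalablesolutions.akka.actor.Actor._

          case object Increment   // hypothetical message

          class SafeCounter extends Actor {
            private var count = 0            // never visible outside the actor

            def receive = {
              case Increment =>
                count += 1                   // messages are processed one at a time
                println("count is now " + count)
            }
          }

          object ShareNothingDemo {
            def main(args: Array[String]): Unit = {
              val counter = actorOf[SafeCounter].start
              // many callers may send concurrently; sends just enqueue in the mailbox
              for (i <- 1 to 100) counter ! Increment
              counter.stop   // note: messages still queued at this point may be dropped
            }
          }
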
    • 17. Two different models • Thread-based • Event-based – Very lightweight (600 bytes per actor) – Can easily create millions on a single workstation (13 million on 8 GB of RAM) – Does not consume a thread
    • 18. Actors

          case object Tick

          class Counter extends Actor {
            private var counter = 0

            def receive = {
              case Tick =>
                counter += 1
                println(counter)
            }
          }
    • 19. Create Actors

          import Actor._
          val counter = actorOf[Counter]

      counter is an ActorRef

    • 20. Create Actors

          val actor = actorOf(new MyActor(..))

      create actor with constructor arguments

    • 21. Start actors

          val counter = actorOf[Counter]
          counter.start

    • 22. Start actors

          val counter = actorOf[Counter].start

    • 23. Stop actors

          val counter = actorOf[Counter].start
          counter.stop
    • 24. preStart & postStop callbacks

          class MyActor extends Actor {
            override def preStart = {
              ... // called before 'start'
            }

            override def postStop = {
              ... // called after 'stop'
            }
          }
    • 25. the self reference

          class RecursiveActor extends Actor {
            private var counter = 0
            self.id = "service:recursive"

            def receive = {
              case Tick =>
                counter += 1
                self ! Tick
            }
          }
    • 26. Send: !

          counter ! Tick

      fire-forget

    • 27. Send: !

          counter.sendOneWay(Tick)

      fire-forget

    • 28. Send: !!

          val result = (actor !! Message).as[String]

      uses Future under the hood (with time-out)

    • 29. Send: !!

          val result = counter.sendRequestReply(Tick)

      uses Future under the hood (with time-out)

    • 30. Send: !!!

          // returns a future
          val future = actor !!! Message
          future.await
          val result = future.get
          ...
          Futures.awaitOne(List(fut1, fut2, ...))
          Futures.awaitAll(List(fut1, fut2, ...))

      returns the Future directly

    • 31. Send: !!!

          val result = counter.sendRequestReplyFuture(Tick)

      returns the Future directly
    • 32. Reply

          class SomeActor extends Actor {
            def receive = {
              case User(name) =>
                // use reply
                self.reply("Hi " + name)
            }
          }

    • 33. Reply

          class SomeActor extends Actor {
            def receive = {
              case User(name) =>
                // store away the sender
                // to use later or
                // somewhere else
                ... = self.sender
            }
          }

    • 34. Reply

          class SomeActor extends Actor {
            def receive = {
              case User(name) =>
                // store away the sender future
                // to resolve later or
                // somewhere else
                ... = self.senderFuture
            }
          }
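
      A small consolidated sketch (not one of the slides) tying the send and reply styles above together: ! for fire-forget, !! for a blocking request-reply, !!! for the raw Future, and self.reply on the receiving side. Echo and Ping are made-up names, and the imports assume the 0.10-era package layout.

          import se.scalablesolutions.akka.actor.Actor
          import se.scalablesolutions.akka.actor.Actor._

          case class Ping(text: String)

          class Echo extends Actor {
            def receive = {
              case Ping(text) =>
                self.reply("echo: " + text)   // answers the sender (or its Future)
            }
          }

          object SendStylesDemo {
            def main(args: Array[String]): Unit = {
              val echo = actorOf[Echo].start

              echo ! Ping("fire-forget")                          // no reply expected

              val answer = (echo !! Ping("blocking")).as[String]  // Future under the hood, with time-out
              println(answer)                                     // e.g. Some(echo: blocking)

              val future = echo !!! Ping("async")                 // returns the Future directly
              future.await
              println(future.get)

              echo.stop
            }
          }
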
    • 35. Immutable messages

          // define the case class
          case class Register(user: User)

          // create and send a new case class message
          actor ! Register(user)

          // tuples
          actor ! (username, password)

          // lists
          actor ! List("bill", "bob", "alice")

    • 36. ActorRegistry

          val actors = ActorRegistry.actors
          val actors = ActorRegistry.actorsFor[TYPE]
          val actors = ActorRegistry.actorsFor(id)
          val actor  = ActorRegistry.actorFor(uuid)

          ActorRegistry.foreach(fn)
          ActorRegistry.shutdownAll
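
      A small usage sketch (not a slide) for the registry calls above; it assumes the Counter actor and Tick message from slide 18 have been defined and started somewhere:

          // broadcast to every live Counter instance
          ActorRegistry.actorsFor[Counter].foreach(_ ! Tick)

          // or address actors registered under a given id (see slide 25)
          ActorRegistry.actorsFor("service:recursive").foreach(_ ! Tick)
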
    • 37. A Case Study Thatcham Integrated Methods The problem: • provide realtime repair information for over 21,000 vehicles • provide accurate, safe repair details on more than 7,000 methods • each document is tailored specifically for an individual repair / replacement job • too many permutations for caching (at least for phase 1) • a relatively long running process involving IO • a global solution (UK, Europe, USA, Asia, Australasia) • zero downtime allowed
    • 38. A Case Study The current simple solution: • Akka actors to the rescue • in this case delegation is literally the art of good workload management • two actors sharing the work for each document type • two servers cover a large percentage of the UK penetration • simple load balancing
    • 39. A Case Study Simple Architecture
    • 40. A Case Study Conclusion: We originally had a homegrown solution extending Scala actors. What do Akka actors give us ? • the fastest actors implementation • where is the code ? • great stability • massive integration potential • zero downtime • 50-60 documents per minute with our largest document size
    • 41. A Case Study What next ? • intelligent adaptive load balancing with Akka • enterprise monitoring • Akka in EC2 ? • caching with intelligent swarms of actors in a cluster
    • 42. TypedActor
    • 43. Typed Actors

          class CounterImpl extends TypedActor with Counter {
            private var counter = 0

            def count = {
              counter += 1
              println(counter)
            }
          }
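
      The slides show CounterImpl but not the Counter interface it mixes in; a plausible minimal sketch of that trait (an assumption, not from the deck) would be:

          trait Counter {
            def count: Unit        // fire-forget style: no return value (slide 45)
            def getNrOfHits: Int   // request-reply style: returns a value (slide 46); Int is assumed
          }
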
    • 44. Create Typed Actor

          val counter = TypedActor.newInstance(
            classOf[Counter], classOf[CounterImpl])

    • 45. Send message

          counter.count

      fire-forget

    • 46. Request Reply

          val hits = counter.getNrOfHits

      uses Future under the hood (with time-out)

    • 47. the context reference

          class PingImpl extends TypedActor with Ping {
            def hit(count: Int) = {
              val pong = getContext.getSender.asInstanceOf[Pong]
              pong.hit(count + 1)
            }
          }
    • 48. Actors: config

          akka {
            version = "0.10"
            time-unit = "seconds"

            actor {
              timeout = 5
              throughput = 5
            }
          }
    • 49. Scalability Benchmark Simple Trading system • Synchronous Scala version • Scala Library Actors 2.8.0 •Fire-forget •Request-reply (Futures) • Akka • Fire-forget (Hawt dispatcher) • Fire-forget (default dispatcher) • Request-reply Run it yourself: http://github.com/patriknw/akka-sample-trading
    • 50. Agents yet another tool in the toolbox
    • 51. Agents

          val agent = Agent(5)

          // send function asynchronously
          agent send (_ + 1)

          val result = agent()   // deref
          ...                    // use result

          agent.close

      Cooperates with STM
    • 52. Akka Dispatchers
    • 53. Dispatchers • Executor-based Dispatcher • Executor-based Work-stealing Dispatcher • Hawt Dispatcher • Thread-based Dispatcher
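
      A short sketch (not a slide) of giving a pool of actors one shared work-stealing dispatcher, using the factory method listed on the next slide and the self.dispatcher assignment from slide 55; the Worker class is made up:

          val workStealer =
            Dispatchers.newExecutorBasedEventDrivenWorkStealingDispatcher("worker-pool")

          class Worker extends Actor {
            self.dispatcher = workStealer     // must be set before the actor is started
            def receive = {
              case work => println("handling " + work)
            }
          }

          val workers = List.fill(4)(actorOf(new Worker).start)
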
    • 54. Dispatchers

          object Dispatchers {
            object globalHawtDispatcher extends HawtDispatcher
            ...
            def newExecutorBasedEventDrivenDispatcher(name: String)
            def newExecutorBasedEventDrivenWorkStealingDispatcher(name: String)
            def newHawtDispatcher(aggregate: Boolean)
            ...
          }

    • 55. Set dispatcher

          class MyActor extends Actor {
            self.dispatcher = Dispatchers
              .newThreadBasedDispatcher(self)
            ...
          }

          actor.dispatcher = dispatcher // before started
    • 56. Let it crash fault-tolerance
    • 57. Influenced by Erlang
    • 58. 9 nines
    • 59. OneForOne fault handling strategy
    • 60. OneForOne fault handling strategy
    • 61. OneForOne fault handling strategy
    • 62. OneForOne fault handling strategy
    • 63. OneForOne fault handling strategy
    • 64. OneForOne fault handling strategy
    • 65. OneForOne fault handling strategy
    • 66. OneForOne fault handling strategy
    • 67. OneForOne fault handling strategy
    • 68. AllForOne fault handling strategy
    • 69. AllForOne fault handling strategy
    • 70. AllForOne fault handling strategy
    • 71. AllForOne fault handling strategy
    • 72. AllForOne fault handling strategy
    • 73. Supervisor hierarchies
    • 74. Supervisor hierarchies
    • 75. Supervisor hierarchies
    • 76. Supervisor hierarchies
    • 77. Supervisor hierarchies
    • 78. Supervisor hierarchies
    • 79. Fault handlers

          AllForOneStrategy(
            List(classOf[Throwable]),
            maxNrOfRetries,
            withinTimeRange)

          OneForOneStrategy(
            List(classOf[Throwable]),
            maxNrOfRetries,
            withinTimeRange)

    • 80. Fault handlers

          AllForOneStrategy(
            List(classOf[ServiceException],
                 classOf[PersistenceException]),
            5, 5000)

    • 81. Linking

          link(actor)
          unlink(actor)
          startLink(actor)
          spawnLink[MyActor]
    • 82. Supervision

          class Supervisor extends Actor {
            faultHandler = OneForOneStrategy(
              List(classOf[Throwable]),
              5, 5000)

            def receive = {
              case Register(actor) =>
                link(actor)
            }
          }
    • 83. Manage failure

          class FaultTolerantService extends Actor {
            ...
            override def preRestart(reason: Throwable) = {
              ... // clean up before restart
            }

            override def postRestart(reason: Throwable) = {
              ... // init after restart
            }
          }
    • 84. Remote Actors
    • 85. Remote Server

          // use host & port in config
          RemoteNode.start

          RemoteNode.start("localhost", 9999)

      Scalable implementation based on NIO (Netty) & Protobuf
    • 86. Two types of remote actors Client managed Server managed
    • 87. Client-managed

      supervision works across nodes

          // methods in Actor class
          spawnRemote[MyActor](host, port)
          spawnLinkRemote[MyActor](host, port)
          startLinkRemote(actor, host, port)

    • 88. Client-managed

      moves the actor to the server; the client manages it through a proxy

          val actorProxy = spawnLinkRemote[MyActor](
            "darkstar", 9999)

          actorProxy ! message

    • 89. Server-managed

      register and manage the actor on the server; the client gets a "dumb" proxy handle

          RemoteNode.register("service:id", actorOf[MyService])

      server part

    • 90. Server-managed

          val handle = RemoteClient.actorFor(
            "service:id", "darkstar", 9999)

          handle ! message

      client part

    • 91. Cluster Membership

          Cluster.relayMessage(
            classOf[TypeOfActor], message)

          for (endpoint <- Cluster)
            spawnRemote[TypeOfActor](
              endpoint.host, endpoint.port)
    • 93. A Case Study The current simple solution: • Akka remote (server-managed) actors to the rescue • in this case delegation across the network is literally the art of good workload management • ‘free your domain’ by defining your domain ‘verbs’ using simple case classes • further ‘free your domain’ with a clear divide between a simple service, and asynchronous pluggable repositories
    • 94. A Case Study Minimally intrusive No lethal injections of persistence concerns Pluggable repositories
    • 95. A Case Study Conclusion: What do Akka remote actors give us ? • simple distribution of operations over the network • decouple concerns to improve our focus on the service/API • great stability • no boilerplate
    • 96. STM yet another tool in the toolbox
    • 97. What is STM? 80
    • 98. STM: overview • See the memory (heap and stack) as a transactional dataset • Similar to a database • begin • commit • abort/rollback • Transactions are retried automatically upon collision • Rolls back the memory on abort
    • 99. Managed References • Separates Identity from Value - Values are immutable - Identity (Ref) holds Values • Change is a function • Compare-and-swap (CAS) • Abstraction of time • Must be used within a transaction
    • 100. atomic import
se.scalablesolutions.akka.stm.local._ 
 atomic
{ 

... 

atomic
{ 



...
//
transactions
compose!!! 

} }
    • 101. Managed References Typical OO - Direct Typical OO: direct Mutable Objects objects references to access to mutable foo :a ? :b ? :c 42 :d ? :e 6 Clojure - and value • Unifies identity Indirect references Managed Reference: separates Identity & Value • Anything can change at any time • Consistency is a user problem Objects to Immutable • Encapsulation doesn’t solve concurrency:a foo "fred" problems :b "ethel" @foo :c 42 :d 17 :e 6 Copyright Rich Hickey 2009
    • 102. Managed References • Separates Identity from Value - Values are immutable - Identity (Ref) holds Values • Change is a function • Compare-and-swap (CAS) • Abstraction of time • Must be used within a transaction
    • 103. Managed References import
se.scalablesolutions.akka.stm.local._ 
 //
giving
an
initial
value val
ref
=
Ref(0) 
 //
specifying
a
type
but
no
initial
value val
ref
=
Ref[Int]
    • 104. Managed References val
ref
=
Ref(0) 
 atomic
{ 

ref.set(5) } //
‐>
0 
 atomic
{ 

ref.get } //
‐>
5
    • 105. Managed References val
ref
=
Ref(0) 
 atomic
{ 

ref
alter
(_
+
5) } //
‐>
5 
 val
inc
=
(i:
Int)
=>
i
+
1 
 atomic
{ 

ref
alter
inc } //
‐>
6
    • 106. Managed References val
ref
=
Ref(1)
 val
anotherRef
=
Ref(3) 
 atomic
{ 

for
{ 



value1
<‐
ref 



value2
<‐
anotherRef 

}
yield
(value1
+
value2) } //
‐>
Ref(4) 
 val
emptyRef
=
Ref[Int] 
 atomic
{ 

for
{ 



value1
<‐
ref 



value2
<‐
emptyRef 

}
yield
(value1
+
value2) } //
‐>
Ref[Int]
    • 107. Transactional datastructures //
using
initial
values val
map



=
TransactionalMap("bill"
‐>
User("bill")) val
vector
=
TransactionalVector(Address("somewhere")) 
 //
specifying
types val
map



=
TransactionalMap[String,
User] val
vector
=
TransactionalVector[Address]
    • 108. life-cycle listeners atomic
{ 

deferred
{ 



//
executes
when
transaction
commits 

} 

compensating
{ 



//
executes
when
transaction
aborts 

} }
    • 109. Actors + STM = Transactors
    • 110. Transactors

          class UserRegistry extends Transactor {

            private lazy val storage =
              TransactionalMap[String, User]()

            def receive = {
              case NewUser(user) =>
                storage + (user.name -> user)
                ...
            }
          }
    • 111. Transactors
    • 112. Transactors Start transaction
    • 113. Transactors Start transaction Send message
    • 114. Transactors Start transaction Send message
    • 115. Transactors Start transaction Send message Update state within transaction
    • 116. Transactors Start transaction Send message Update state within transaction
    • 117. Transactors Start transaction Send message Update state within transaction
    • 118. Transactors Start transaction Send message Update state within transaction
    • 119. Transactors Start transaction Send message Update state within transaction
    • 120. Transactors Start transaction Send message Update state within transaction
    • 121. Transactors Start transaction Send message Update state within transaction Transaction fails
    • 122. Transactors Start transaction Send message Update state within transaction Transaction fails
    • 123. Transactors
    • 124. Transactors
    • 125. Transactors
    • 126. Transactors
    • 127. Transactors
    • 128. Transactors
    • 129. Transactors Transaction automatically retried
    • 130. blocking transactions

          class Transferer extends Actor {
            implicit val txFactory = TransactionFactory(
              blockingAllowed = true, trackReads = true, timeout = 60 seconds)

            def receive = {
              case Transfer(from, to, amount) =>
                atomic {
                  if (from.get < amount) {
                    log.info("not enough money - retrying")
                    retry
                  }
                  log.info("transferring")
                  from alter (_ - amount)
                  to alter (_ + amount)
                }
            }
          }

    • 131. either-orElse

          atomic {
            either {
              if (left.get < amount) {
                log.info("not enough on left - retrying")
                retry
              }
              log.info("going left")
            } orElse {
              if (right.get < amount) {
                log.info("not enough on right - retrying")
                retry
              }
              log.info("going right")
            }
          }

    • 132. STM: config

          akka {
            stm {
              max-retries      = 1000
              timeout          = 10
              write-skew       = true
              blocking-allowed = false
              interruptible    = false
              speculative      = true
              quick-release    = true
              propagation      = requires
              trace-level      = none
              hooks            = true
              jta-aware        = off
            }
          }
    • 133. Modules
    • 134. Akka Persistence
    • 135. STM gives us Atomic Consistent Isolated
    • 136. Persistence module turns STM into Atomic Consistent Isolated Durable
    • 137. Akka Persistence API • Cassandra • HBase • Voldemort • Redis • MongoDB • CouchDB • Any JTA-compliant JDBC driver or ORM
    • 138. Akka Persistence API

          // transactional Cassandra-backed Map
          val map = CassandraStorage.newMap

          // transactional Redis-backed Vector
          val vector = RedisStorage.newVector

          // transactional Mongo-backed Ref
          val ref = MongoStorage.newRef

    • 139. For Redis only (so far)

          val queue: PersistentQueue[ElementType] =
            RedisStorage.newQueue

          val set: PersistentSortedSet[ElementType] =
            RedisStorage.newSortedSet
    • 141. Spring integration <beans> 

<akka:typed‐actor
 



id="myActiveObject"
 



interface="com.biz.MyPOJO"
 



implementation="com.biz.MyPOJO"
 



transactional="true"
 



timeout="1000"
/> 

... </beans>
    • 142. Spring integration <akka:supervision
id="my‐supervisor"> 

<akka:restart‐strategy
failover="AllForOne"
 
























retries="3"
 
























timerange="1000"> 



<akka:trap‐exits> 
 

<akka:trap‐exit>java.io.IOException</akka:trap‐exit> 



</akka:trap‐exits> 

</akka:restart‐strategy> 

<akka:typed‐actors> 



<akka:typed‐actor
interface="com.biz.MyPOJO"
 





















implementation="com.biz.MyPOJOImpl"
 





















lifecycle="permanent"
 





















timeout="1000"> 



</akka:typed‐actor> 

</akka:typed‐actors> </akka:supervision>
    • 143. Akka Camel
    • 144. Camel: consumer

          class MyConsumer extends Actor with Consumer {
            def endpointUri = "file:data/input"

            def receive = {
              case msg: Message =>
                log.info("received %s" format
                  msg.bodyAs(classOf[String]))
            }
          }

    • 145. Camel: consumer

          class MyConsumer extends Actor with Consumer {
            def endpointUri =
              "jetty:http://0.0.0.0:8877/camel/test"

            def receive = {
              case msg: Message =>
                reply("Hello %s" format
                  msg.bodyAs(classOf[String]))
            }
          }

    • 146. Camel: producer

          class CometProducer extends Actor with Producer {
            def endpointUri =
              "cometd://localhost:8111/test"
          }

    • 147. Camel: producer

          val producer = actorOf[CometProducer].start
          val time = "Current time: " + new Date
          producer ! time
    • 149. HotSwap actor
!
HotSwap({ 

//
new
body 

case
Ping
=>
 



...
 

case
Pong
=>
 



...

 })
    • 150. HotSwap self.become({ 

//
new
body 

case
Ping
=>
 



...
 

case
Pong
=>
 



...

 })
    • 151. Akka Concurrent Problem Solving
    • 152. Simple Use Cases • Use different actors to compose computations and have them calculated in parallel • Analyse sequential versus non sequential operations • For simple isolated tasks start up more actors to reduce your message box size • Wherever possible share nothing, then you have no coupling or dependencies you can just scale out to infinity
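
      A small sketch (not from the deck) of the first bullet above: fan work out to several actors and collect the results with the Futures helpers from slide 30. The Square message and SquareWorker actor are made up, and the Futures import assumes the 0.10-era dispatch package.

          import se.scalablesolutions.akka.actor.Actor
          import se.scalablesolutions.akka.actor.Actor._
          import se.scalablesolutions.akka.dispatch.Futures

          case class Square(n: Int)

          class SquareWorker extends Actor {
            def receive = {
              case Square(n) => self.reply(n * n)   // each worker computes independently
            }
          }

          object ParallelDemo {
            def main(args: Array[String]): Unit = {
              val workers = List.fill(4)(actorOf(new SquareWorker).start)

              // fan out: one Future per piece of work, round-robin over the workers
              val futures =
                for (n <- (1 to 8).toList)
                  yield workers(n % workers.size) !!! Square(n)

              Futures.awaitAll(futures)          // wait for every result (slide 30)
              println(futures.map(_.get))        // the squares, computed in parallel

              workers.foreach(_.stop)
            }
          }
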
    • 153. Dining Philosophers • Demonstrates many of the problems with concurrency • Deadlock, livelock, and ironically starvation • Akka to the rescue... use ‘become’ • A simple implementation of an FSM
    • 154. Dining Hakkers

          sealed trait DiningHakkerMessage

          case class Busy(chopstick: ActorRef) extends DiningHakkerMessage
          case class Put(hakker: ActorRef) extends DiningHakkerMessage
          case class Take(hakker: ActorRef) extends DiningHakkerMessage
          case class Taken(chopstick: ActorRef) extends DiningHakkerMessage

          object Eat extends DiningHakkerMessage
          object Think extends DiningHakkerMessage

    • 155. Dining Hakkers

          class Hakker(name: String, left: ActorRef, right: ActorRef)
              extends Actor {
            self.id = name

            def thinking: Receive = {
              case Eat =>
                become(hungry)
                left ! Take(self)
                right ! Take(self)
            }

            def hungry: Receive = {
              case Taken(`left`) =>
                become(waiting_for(right, left))
              case Taken(`right`) =>
                become(waiting_for(left, right))
              case Busy(chopstick) =>
                become(denied_a_chopstick)
            }
            ...
    • 156. Top Secret Project • Whisperings of a top secret mission critical project • What if we could figure out how to brew the perfect beer ? • Use a genetic algorithm to evolve the perfect recipe ? • use Akka for parallel computation ?
    • 157. Akka Kernel
    • 158. Start Kernel

          java -jar akka-1.0-SNAPSHOT.jar
            -Dakka.config=<path>/akka.conf
    • 159. How to run it? Deploy as dependency JAR in WEB-INF/lib etc. Run as stand-alone microkernel OSGi-enabled; drop in any OSGi container (Spring DM server, Karaf etc.)
    • 160. ...and much much more PubSub REST Security FSM Comet Web OSGi Guice
    • 161. Learn more http://akkasource.org
    • 162. EOF
