Akka in Production
Evan Chan
Scala Days 2015
March 17, 2015
Who is this guy?
•Principal Engineer, Socrata, Inc.
•http://github.com/velvia
•Author of multiple open source Akka/Scala
projects - Spark Job Server, ScalaStorm, etc.
•@evanfchan
A plug for a few projects…
•http://github.com/velvia/links - my stash of
interesting Scala & big data projects
•http://github.com/velvia/filo - a new, extreme
vector serialization library for fast analytics
•Talk to me later if you are interested in fast
serialization or columnar/analytics databases
Who is Socrata?
We are a Seattle-based software startup. 
We make data useful to everyone.
Open, Public Data
Consumers
Apps
Socrata is…
The most widely adopted Open Data platform
Scala at Socrata
•Started with old monolithic Java app
•Started writing new features in Scala - 2.8
•Today - 100% backend development in Scala,
2.10 / 2.11, many micro services
•custom SBT plugins, macros, more
•socrata-http
•rojoma-json
Want Reactive?
event-driven, scalable, resilient and responsive
Agenda
• How does one get started with Akka?
• To be honest, Akka is what drew me into Scala
• Examples of Akka use cases
• Compared with other technologies
• Tips on using Akka in production
• Including back pressure, monitoring, VisualVM usage,
etc.
Ingestion Architectures
with Akka
Akka Stack
• Spray - high performance HTTP

• SLF4J / Logback

• Yammer Metrics

• spray-json

• Akka 2.x

• Scala 2.10
Ingesting 2 Billion Events / Day
Nginx
Raw Log
Feeder
Kafka
Storm
New Stuff
Consumer watches
video
Livelogsd - Akka/Kafka file tailer
Current
File
Rotated
File
Rotated
File 2
File
Reader
Actor
File
Reader
Actor
Kafka Feeder
Coordinator
Kafka
Storm - with or without Akka?
Kafka
Spout
Bolt
Actor
Actor
• Actors talking to each other within a
bolt for locality

• Don’t really need Actors in Storm

• In production, found Storm too
complex to troubleshoot

• It’s 2am - what should I restart?
Supervisor? Nimbus? ZK?
Akka Cluster-based Pipeline
Kafka
Consumer
Spray
endpoint
Cluster
Router
Processing
Actors
Kafka
Consumer
Spray
endpoint
Cluster
Router
Processing
Actors
Kafka
Consumer
Spray
endpoint
Cluster
Router
Processing
Actors
Kafka
Consumer
Spray
endpoint
Cluster
Router
Processing
Actors
Kafka
Consumer
Spray
endpoint
Cluster
Router
Processing
Actors
Lessons Learned
• Still too complex -- would we want to get paged for this
system?

• Akka cluster in 2.1 was not ready for production (newer
2.2.x version is stable)

• Mixture of actors and futures for HTTP requests
became hard to grok

• Actors were much easier for most developers to
understand
Simplified Ingestion Pipeline
Kafka
Partition
1
Kafka
SimpleConsumer
Converter Actor
Cassandra Writer
Actor
Kafka
Partition
2
Kafka
SimpleConsumer
Converter Actor
Cassandra Writer
Actor
• Kafka used to partition
messages

• Single process - super
simple!

• No distribution of data

• Linear actor pipeline -
very easy to understand
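The linear pipeline above can be sketched, minus Akka and Kafka, as plain function stages. This is only an illustration of the shape of the pipeline — KafkaRecord, Row, convert, and write are hypothetical stand-ins for the real consumer, converter actor, and Cassandra writer actor:

```scala
// One Kafka partition's worth of the pipeline, modeled as functions.
case class KafkaRecord(offset: Long, payload: String)
case class Row(key: Long, value: String)

object PipelineSketch {
  // "Converter Actor" stage: parse/normalize the raw payload into a row
  def convert(rec: KafkaRecord): Row = Row(rec.offset, rec.payload.trim.toUpperCase)

  // "Cassandra Writer Actor" stage: here we just collect rows in memory
  val written = scala.collection.mutable.ArrayBuffer[Row]()
  def write(row: Row): Unit = written += row

  // Linear pipeline: each partition's consumer feeds its own converter/writer
  def ingest(records: Seq[KafkaRecord]): Unit = records.map(convert).foreach(write)
}
```

Because each partition gets its own independent chain, there is no cross-partition coordination — which is exactly what makes the design easy to reason about at 2am.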
Stackable Actor Traits
Why Stackable Traits?
• Adding monitoring, logging, metrics, and tracing code by hand
gets pretty ugly and repetitive

• We want some standard behavior around actors -- but
we need to wrap the actor Receive block:

class SomeActor extends Actor {
  def wrappedReceive: Receive = {
    case x => blah
  }
  def receive = {
    case x =>
      println("Do something before...")
      wrappedReceive(x)
      println("Do something after...")
  }
}
Start with a base trait...
trait ActorStack extends Actor {
  /** Actor classes should implement this partial function for standard
    * actor message handling
    */
  def wrappedReceive: Receive

  /** Stackable traits should override and call super.receive(x) for
    * stacking functionality
    */
  def receive: Receive = {
    case x => if (wrappedReceive.isDefinedAt(x)) wrappedReceive(x) else unhandled(x)
  }
}
Instrumenting Traits...
trait Instrument1 extends ActorStack {
  override def receive: Receive = {
    case x =>
      println("Do something before...")
      super.receive(x)
      println("Do something after...")
  }
}

trait Instrument2 extends ActorStack {
  override def receive: Receive = {
    case x =>
      println("Antes...")
      super.receive(x)
      println("Despues...")
  }
}
Now just mix the Traits in....
class DummyActor extends Actor with Instrument1 with Instrument2 {
  def wrappedReceive = {
    case "something" => println("Got something")
    case x => println("Got something else: " + x)
  }
}
• Traits add instrumentation; Actors stay clean!

• The order of mixing in traits matters

Antes...
Do something before...
Got something
Do something after...
Despues...
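The ordering shown above is Scala's trait linearization at work: the last trait mixed in wraps outermost. A minimal, Akka-free sketch of the same stacking idea makes the order concrete (Handler, Stacked, and the before/after labels are illustrative only):

```scala
// Base behavior: produce the "core" result for a message
trait Handler {
  def handle(x: String): Vector[String] = Vector(s"core($x)")
}
// Each instrument wraps super's behavior, just like the receive traits above
trait Instrument1 extends Handler {
  override def handle(x: String) = "before1" +: super.handle(x) :+ "after1"
}
trait Instrument2 extends Handler {
  override def handle(x: String) = "before2" +: super.handle(x) :+ "after2"
}
// Linearization: Stacked -> Instrument2 -> Instrument1 -> Handler,
// so Instrument2 (mixed in last) runs first, as in the output above
class Stacked extends Handler with Instrument1 with Instrument2
```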
Productionizing Akka
On distributed systems:
“The only thing that
matters is visibility”
Akka Performance Metrics
• We define a trait that adds two metrics for every actor:

• frequency of messages handled (1min, 5min, 15min
moving averages)

• time spent in receive block

• All metrics exposed via a Spray route /metricz

• Daemon polls /metricz and sends to metrics service

• Would like: mailbox size, but this is hard
Akka Performance Metrics
trait ActorMetrics extends ActorStack {
  // Timer includes a histogram of wrappedReceive() duration as well as
  // a moving average of the rate of invocation
  val metricReceiveTimer = Metrics.newTimer(getClass, "message-handler",
    TimeUnit.MILLISECONDS, TimeUnit.SECONDS)

  override def receive: Receive = {
    case x =>
      val context = metricReceiveTimer.time()
      try {
        super.receive(x)
      } finally {
        context.stop()
      }
  }
}
Performance Metrics (cont’d)
Performance Metrics (cont’d)
VisualVM and Akka
• Bounded mailboxes = time spent enqueueing msgs
VisualVM and Akka
• My dream: a VisualVM plugin to visualize Actor
utilization across threads
Tracing Akka Message Flows
• Stack trace is very useful for traditional apps, but for
Akka apps, you get this:
at akka.dispatch.Future$$anon$3.liftedTree1$1(Future.scala:195) ~[akka-actor-2.0.5.jar:2.0.5]
at akka.dispatch.Future$$anon$3.run(Future.scala:194) ~[akka-actor-2.0.5.jar:2.0.5]
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:94) [akka-actor-2.0.5.jar:2.0.5]
at akka.jsr166y.ForkJoinTask$AdaptedRunnableAction.exec(ForkJoinTask.java:1381) [akka-actor-2.0.5.jar:2.0.5]
at akka.jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:259) [akka-actor-2.0.5.jar:2.0.5]
at akka.jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975) [akka-actor-2.0.5.jar:2.0.5]
at akka.jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1479) [akka-actor-2.0.5.jar:2.0.5]
at akka.jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104) [akka-actor-2.0.5.jar:2.0.5]
--> trAKKAr message trace <--
akka://Ingest/user/Super --> akka://Ingest/user/K1: Initialize
akka://Ingest/user/K1 --> akka://Ingest/user/Converter: Data
• What if you could get an Akka message trace?
Tracing Akka Message Flows
Tracing Akka Message Flows
• Trait sends an Edge(source, dest, messageInfo) to a
local Collector actor

• Aggregate edges across nodes, graph and profit!
trait TrakkarExtractor extends TrakkarBase with ActorStack {
  import TrakkarUtils._

  val messageIdExtractor: MessageIdExtractor = randomExtractor

  override def receive: Receive = {
    case x =>
      lastMsgId = (messageIdExtractor orElse randomExtractor)(x)
      Collector.sendEdge(sender, self, lastMsgId, x)
      super.receive(x)
  }
}
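To show what "aggregate edges and graph" might look like on the collector side, here is a hedged sketch — Edge and TrakkarSketch are hypothetical names, not from the trAKKAr code — that rolls Edge(source, dest) pairs up into per-edge message counts, which is all a flow graph needs:

```scala
// One observed message hop between two actor paths
case class Edge(source: String, dest: String)

object TrakkarSketch {
  // Count how many messages flowed along each edge
  def aggregate(edges: Seq[Edge]): Map[Edge, Int] =
    edges.groupBy(identity).map { case (e, group) => e -> group.size }
}
```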
Akka Service Discovery
• Akka remote - need to know remote nodes

• Akka cluster - need to know seed nodes

• Use Zookeeper or /etcd

• http://blog.eigengo.com/2014/12/13/akka-cluster-inventory/ -
Akka cluster inventory extension

• Be careful - Akka is very picky about IP addresses.
Beware of AWS, Docker, etc. etc. Test, test, test.
Akka Instrumentation Libraries
• http://kamon.io

• Uses AspectJ to “weave” in instrumentation.
Metrics, logging, tracing.

• Instruments Akka, Spray, Play

• Provides statsD / graphite and other backends

• https://github.com/levkhomich/akka-tracing

• Zipkin distributed tracing for Akka
Backpressure and
Reliability
Intro to Backpressure
• Backpressure - ability to tell senders to slow down/stop

• Must look at entire system.

• Individual components (e.g. TCP) having flow control
does not mean the entire system behaves well
Why not bounded mailboxes?
• By default, actor mailboxes are unbounded

• Using bounded mailboxes

• When mailbox is full, messages go to DeadLetters
• mailbox-push-timeout-time: how long to wait
when mailbox is full

• Doesn’t work for distributed Akka systems!

• Real flow control: pull, push with acks, etc.

• Works anywhere, but more work
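For reference, a bounded mailbox in Akka 2.x is wired up in application.conf roughly like this (the mailbox name, capacity, timeout, and actor path below are illustrative, not from the talk):

```hocon
# Define a bounded mailbox type
bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 1000
  # How long a local sender waits when the mailbox is full before the
  # message is sent to DeadLetters
  mailbox-push-timeout-time = 10ms
}

# Attach it to a specific actor via deployment config
akka.actor.deployment {
  /my-throttled-actor {
    mailbox = bounded-mailbox
  }
}
```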
Backpressure in Action
• A working back pressure system causes the rate of all
actor components to be in sync.

• Witness this message flow rate graph of the start of
event processing:
Akka Streams
• Very conservative (“pull based”)

• Consumer must first give permission to Publisher to
send data

• How does it work for fan-in scenarios?
Backpressure for fan-in
• Multiple input streams go to a single resource (DB?)

• May come and go

• Pressure comes from each stream and from # streams
Stream 1
Stream 2
Stream 3
Stream 4
Writer
Actor
DB
Backpressure for fan-in
• Same simple model, can control number of clients

• High overhead: lots of streams to notify “Ready”
Stream 1
Stream 2
Writer
Actor
Register
Ready for data
Data
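The Register / "Ready for data" / Data exchange above can be sketched without Akka as a tiny state machine. WriterSketch is a hypothetical, single-threaded stand-in for the Writer Actor: it only accepts data from streams it has signalled Ready, and re-issuing Ready per message is where the notification overhead comes from:

```scala
object WriterSketch {
  private var ready = Set.empty[String]   // streams currently allowed to send
  val accepted = scala.collection.mutable.ArrayBuffer[(String, String)]()

  // "Register": the writer replies by marking the stream Ready
  def register(stream: String): Unit = ready += stream

  // "Data": accepted only if this stream holds a Ready permit
  def data(stream: String, payload: String): Boolean =
    if (ready(stream)) {
      ready -= stream                     // permit consumed
      accepted += (stream -> payload)
      ready += stream                     // re-issue Ready (costly with many streams)
      true
    } else false                          // unsolicited data is refused
}
```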
At Least Once Delivery
What if you can’t drop messages on the floor?
At Least Once Delivery
• Let every message have a unique ID.

• Ack returns with unique ID to confirm message send.

• What happens if you don’t get an ack?
Actor A
Actor B
Msg 100 Msg 101 Msg 102
Ack 100 Ack 101?
At Least Once Delivery
• Resend unacked messages until confirmed == “at least
once”
Actor A
Actor B
Msg 100 Msg 101 Msg 102
Ack 100 Ack 101?
Resend 101
Ack timeout
At Least Once Delivery & Akka
• Resending messages requires keeping message history
around

• Unless your source of messages is Kafka - then just
replay from the last successful offset + 1

• Use Akka Persistence - has at-least-once semantics +
persistence of messages for better durability

• Exactly Once = at least once + deduplication

• Akka Persistence has this too!
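The "exactly once = at least once + deduplication" equation can be sketched in a few lines: the receiver remembers which message IDs it has already applied, so a redelivered message is processed only once. This is an illustration of the idea, not Akka Persistence's actual API — DedupReceiver and its members are hypothetical names:

```scala
object DedupReceiver {
  private var seen = Set.empty[Long]    // IDs already applied
  val processed = scala.collection.mutable.ArrayBuffer[String]()

  // Returns true if the message was applied, false if it was a duplicate
  def deliver(id: Long, payload: String): Boolean =
    if (seen(id)) false
    else {
      seen += id
      processed += payload
      true
    }
}
```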
Backpressure and at-least-once
• How about a system that works for fan-in, and handles back
pressure and at-least-once too?

• Let the client have an upper limit of unacked messages

• Server can reject new messages
Stream 1
Stream 2
Writer
Actor
Msg 100
Ack 100
Msg 101
Msg 200
Reject!
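The client side of this scheme amounts to a sliding window of unacked message IDs: once the window is full the client stops sending until an ack frees a slot, which is how the server's "Reject!" turns into real backpressure. A minimal sketch (WindowedSender is a hypothetical name, not from the talk):

```scala
class WindowedSender(maxUnacked: Int) {
  private var unacked = Set.empty[Long]

  // Returns false when the window is full -- caller must back off
  def trySend(id: Long): Boolean =
    if (unacked.size >= maxUnacked) false
    else { unacked += id; true }

  // An ack (or a successful replay from Kafka) frees a window slot
  def ack(id: Long): Unit = unacked -= id
}
```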
Backpressure and Futures
• Use an actor to limit # of outstanding futures
class CommandThrottlingActor(mapper: CommandThrottlingActor.Mapper,
                             maxFutures: Int) extends BaseActor {
  import CommandThrottlingActor._
  import context.dispatcher // for future callbacks

  val mapperWithDefault = mapper orElse ({
    case x: Any => Future { NoSuchCommand }
  }: Mapper)
  var outstandingFutures = 0

  def receive: Receive = {
    case FutureCompleted => if (outstandingFutures > 0) outstandingFutures -= 1
    case c: Command =>
      if (outstandingFutures >= maxFutures) {
        sender ! TooManyOutstandingFutures
      } else {
        outstandingFutures += 1
        val originator = sender // sender is a function, don't call in the callback
        mapperWithDefault(c).onSuccess { case response: Response =>
          self ! FutureCompleted
          originator ! response
        }
      }
  }
}
Good Akka development practices
• Don't put things that can fail into the Actor constructor

• The default supervision strategy stops an Actor that
cannot initialize itself

• Instead use an Initialize message

• Put your messages in the Actor’s companion object

• Namespacing is nice
Couple more random hints
• Learn Akka Testkit.

• Master it! The most useful tool for testing Akka
actors.

• Many examples in spark-jobserver repo

• gracefulStop()

• TestKit.shutdownActorSystem(system)
Thank you!!
• Queues don’t fix overload

• Stackable actor traits - see ActorStack in the
spark-jobserver repo
Extra slides
Putting it all together
Akka Visibility, Minimal Footprint
trait InstrumentedActor extends Slf4jLogging with ActorMetrics with TrakkarExtractor

object MyWorkerActor {
  case object Initialize
  case class DoSomeWork(desc: String)
}

class MyWorkerActor extends InstrumentedActor {
  def wrappedReceive = {
    case Initialize =>
    case DoSomeWork(desc) =>
  }
}
Using Logback with Akka
• Pretty easy setup

• Include the Logback jar

• In your application.conf:

event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]

• Use a custom logging trait, not ActorLogging

• ActorLogging does not allow adjustable logging levels

• Want the Actor path in your messages?

• org.slf4j.MDC.put("actorPath", self.path.toString)
Using Logback with Akka
trait Slf4jLogging extends Actor with ActorStack {
  val logger = LoggerFactory.getLogger(getClass)
  private[this] val myPath = self.path.toString

  logger.info("Starting actor " + getClass.getName)

  override def receive: Receive = {
    case x =>
      org.slf4j.MDC.put("akkaSource", myPath)
      super.receive(x)
  }
}