Concurrency Oriented Programming in Erlang


Joe Armstrong
Distributed Systems Laboratory
Swedish Institute of Computer Science
http://www.sics.se/~joe/talks/ll2_2002.pdf
November 9, 2002



  1. Concurrency Oriented Programming in Erlang
     Joe Armstrong
     Distributed Systems Laboratory
     Swedish Institute of Computer Science
     http://www.sics.se/~joe/talks/ll2_2002.pdf
     joe@sics.se
     November 9, 2002
     Agner Krarup Erlang (1878 - 1929)
  2. Plan
     - Philosophy
     - Language
     - Wars
     - Dollars
     Not necessarily in that order.
  3. Challenge 1
     Put N processes in a ring. Send a simple message round the ring M
     times. Increase N until the system crashes.
     - How long did it take to start the ring?
     - How long did it take to send a message?
     - When did it crash?
     Can you create more processes in your language than the OS allows?
     Is process creation in your language faster than process creation in
     the OS?
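The challenge above can be sketched in a few lines of Erlang. This is my own minimal version (module and function names are mine, not from the talk): the calling process is node 1 of the ring, the other N-1 processes just forward, and a token makes M full laps before a stop message tears the ring down. Wrap the `start/2` call in `timer:tc/3` to get the timings the slide asks for.

```erlang
-module(ring).
-export([start/2]).

%% start(N, M): the caller plus N-1 spawned processes form a ring;
%% a token is passed round the ring M times, then the ring shuts down.
start(N, M) when N > 1, M >= 1 ->
    %% Each spawned node forwards to the previously spawned one, so the
    %% last node spawned sends back to the caller, closing the ring.
    Next = lists:foldl(
             fun(_, Nxt) -> spawn(fun() -> forward(Nxt) end) end,
             self(),
             lists:seq(1, N - 1)),
    laps(Next, M).

%% Drive the token round M times, then send stop round the ring.
laps(Next, 0) ->
    Next ! stop,
    receive stop -> ok end;          %% stop came back: all nodes exited
laps(Next, M) ->
    Next ! {token, M},
    receive {token, M} -> laps(Next, M - 1) end.

%% A ring node: forward everything until told to stop.
forward(Next) ->
    receive
        stop -> Next ! stop;
        Msg  -> Next ! Msg, forward(Next)
    end.
```

On a typical machine this comfortably runs rings of hundreds of thousands of processes, which is the point of the challenge.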
  4. Process creation times
     [Plot: process creation time in microseconds vs. number of processes
     (10 to 100,000), log/log scale, for Erlang and other languages.]
  5. Message passing times
     [Plot: message sending time in microseconds vs. number of processes
     (10 to 100,000), log/log scale, for Erlang and other languages.]
  6. They forgot concurrency
     In languages like <noname> they forgot about concurrency. Either it
     wasn't designed in from the beginning, or it was added as an
     afterthought. This doesn't matter for sequential programs. But if
     your problem is essentially concurrent, then this is a fatal
     mistake. And it matters when you build something like a ...
  7. Web Server
     [Plot: throughput vs. number of connections for three servers.]
     Red = yaws (Yet another web server, in Erlang, on NFS)
     Green = apache (local disk)
     Blue = apache (NFS)
     Yaws throughput = 800 KBytes/sec up to 80,000 disturbing processes.
     Apache misbehaves and crashes at about 4000 processes.
     Details:
       http://yaws.hyber.org
       http://www.sics.se/~joe/apachevsyaws.html
  8. Philosophy
     Concurrency Oriented Programming
     - Processes are totally independent - imagine they run on different
       machines.
     - Process semantics: no sharing of data; copy-everything message
       passing. Sharing = inefficient (can't go parallel) + complicated
       (mutexes, locks, ...).
     - Each process has an unforgeable name.
     - If you know the name of a process you can send it a message.
     - Message passing is "send and pray" - you send the message and pray
       it gets there.
     - You can monitor a remote process.
  9. Pragmatics
     A language is a COPL if:
     - Processes are truly independent
     - No penalty for massive parallelism
     - No unavoidable penalty for distribution
     - Concurrent behaviour of the program is the same on all OSs
     - Can deal with failure
  10. Why is COP Nice?
      - The world is parallel.
      - The world is distributed.
      - Things fail.
      - Our brains intuitively understand parallelism (think of driving a
        car).
      - To program a real-world application we observe the concurrency
        patterns = no guesswork (only observation, and getting the
        granularity right).
      - Our programs are automatically scalable and have automatic fault
        tolerance (if the program works at all on a uni-processor it will
        work in a distributed network).
      - Make it more powerful by adding more processors.
  11. What is Erlang/OTP?
      - Erlang = a new(ish) programming language
      - OTP = a set of libraries (making an Erlang application OS)
      - A new way of developing distributed fault-tolerant applications
        (OTP)
      - Battle tested in Ericsson and Nortel products: AXD301, GPRS, SSL
        accelerator, ...
      - The biggest COPL used to earn money in the world
      - Developed and maintained by a very small team (Björn, Klacke,
        Robert, Mike, ...)
      - A tool for writing reliable applications
  12. Erlang
      - Functional / single assignment
      - Lightweight processes
      - Asynchronous message passing (send and pray)
      - OS independent (true)
      - Special error handling primitives
      - Lists, tuples, binaries
      - Dynamic typing
      - Soft real-time GC
      - Transparent distribution
  13. Erlang in 11 minutes
      One minute per example.
      - Sequential Erlang in 5 examples
      - Concurrent Erlang in 2 examples
      - Distributed Erlang in 1 example
      - Fault-tolerant Erlang in 2 examples
      - Bit syntax in 1 example
  14. Sequential Erlang in 5 examples

      1 - Factorial

          -module(math).
          -export([fac/1]).

          fac(N) when N > 0 -> N * fac(N-1);
          fac(0)            -> 1.

          > math:fac(25).
          15511210043330985984000000

      2 - Binary Tree

          lookup(Key, {Key, Val, _, _}) ->
              {ok, Val};
          lookup(Key, {Key1, Val, S, B}) when Key < Key1 ->
              lookup(Key, S);
          lookup(Key, {Key1, Val, S, B}) ->
              lookup(Key, B);
          lookup(Key, nil) ->
              not_found.
  15. 3 - Append

          append([H|T], L) -> [H|append(T, L)];
          append([], L)    -> L.

      4 - Sort

          sort([Pivot|T]) ->
              sort([X || X <- T, X < Pivot]) ++
              [Pivot] ++
              sort([X || X <- T, X >= Pivot]);
          sort([]) -> [].

      5 - Adder

          > Adder = fun(N) -> fun(X) -> X + N end end.
          #Fun
          > G = Adder(10).
          #Fun
          > G(5).
          15
  16. Concurrent Erlang in 2 examples

      6 - Spawn

          Pid = spawn(fun() -> loop(0) end)

      7 - Send and receive

          Pid ! Message,
          .....
          receive
              Message1 ->
                  Actions1;
              Message2 ->
                  Actions2;
              ...
          after Time ->
              TimeOutActions
          end
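Examples 6 and 7 compose into a complete program. A minimal sketch of my own (module and message names are assumptions, not from the talk): a spawned counter process holding state in its loop argument, queried with send/receive and a timeout.

```erlang
-module(counter).
-export([start/0]).

%% Spawn a counter process, then read its state via send + receive,
%% with a timeout as in example 7.
start() ->
    Pid = spawn(fun() -> loop(0) end),
    Pid ! {self(), read},
    receive
        {Pid, N} -> N
    after 1000 ->
        timeout
    end.

%% The process state is just the argument of the tail-recursive loop.
loop(N) ->
    receive
        {From, read} -> From ! {self(), N}, loop(N);
        bump         -> loop(N + 1)
    end.
```

Note that there is no shared mutable state anywhere: the only way to observe or change N is by sending the process a message.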
  17. Distributed Erlang in 1 example

      8 - Distribution

          Pid = spawn(Fun@Node)
          alive(Node)
          not_alive(Node)
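The slide's `spawn(Fun@Node)` is shorthand: the actual BIF is `spawn(Node, Fun)`, and liveness is usually checked with `net_adm:ping/1`, which returns `pong` or `pang`. A minimal local sketch of my own (the module name is mine; it spawns on `node()`, the local node, so it runs without distribution started):

```erlang
-module(dist_demo).
-export([demo/0]).

%% Real distributed-Erlang syntax behind the slide's shorthand.
demo() ->
    %% spawn/2 takes an explicit node name; node() is the local node.
    Pid = spawn(node(), fun() -> receive stop -> ok end end),
    true = is_pid(Pid),
    %% Against a remote node you would check liveness with
    %% net_adm:ping(Node), which returns pong (alive) or pang (not).
    Pid ! stop,
    ok.
```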
  18. Fault tolerant Erlang in 2 examples

      9 - Catch/throw

          ...
          case (catch foo(A, B)) of
              {abnormal_case1, Y} ->
                  ...
              {'EXIT', Opps} ->
                  ...
              Val ->
                  ...
          end,
          ...

          foo(A, B) ->
              ...
              throw({abnormal_case1, ...})
  19. 10 - Monitor a process

          ...
          process_flag(trap_exit, true),
          Pid = spawn_link(fun() -> ... end),
          ...
          receive
              {'EXIT', Pid, Why} ->
                  ...
          end
  20. Bit syntax in 1 example

      11 - Parse an IP Datagram

      Dgram is bound to the consecutive bytes of an IP datagram of IP
      protocol version 4. We extract the header and the data of the
      datagram:

          -define(IP_VERSION, 4).
          -define(IP_MIN_HDR_LEN, 5).

          DgramSize = size(Dgram),
          case Dgram of
              <<?IP_VERSION:4, HLen:4, SrvcType:8, TotLen:16,
                ID:16, Flgs:3, FragOff:13,
                TTL:8, Proto:8, HdrChkSum:16,
                SrcIP:32,
                DestIP:32, Body/binary>> when HLen >= 5,
                                              4*HLen =< DgramSize ->
                  OptsLen = 4*(HLen - ?IP_MIN_HDR_LEN),
                  <<Opts:OptsLen/binary, Data/binary>> = Body,
                  ...
          end.
  21. Behaviours

      A universal client-server with hot code swapping :-)

          server(Fun, Data) ->
              receive
                  {new_fun, Fun1} ->
                      server(Fun1, Data);
                  {rpc, From, ReplyAs, Q} ->
                      {Reply, Data1} = Fun(Q, Data),
                      From ! {ReplyAs, Reply},
                      server(Fun, Data1)
              end.

          rpc(A, B) ->
              Tag = new_ref(),
              A ! {rpc, self(), Tag, B},
              receive
                  {Tag, Val} -> Val
              end
  22. Programming Patterns

      Common concurrency patterns:

          Cast         A ! Msg

          Event        receive
                           A -> A
                       end

          Call (RPC)   A ! {self(), B},
                       receive
                           {A, Reply} -> Reply
                       end

          Callback     receive
                           {From, A} -> From ! F(A)
                       end
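The Call and Callback fragments above fit together into one round trip. A runnable sketch of my own (module, function, and message shapes are assumptions): a callback-style server applies a function F to each request and sends back a reply tagged with its own pid, and the client does a call-style send-then-receive.

```erlang
-module(pattern_demo).
-export([demo/0]).

%% Client side: the Call (RPC) pattern - send a request tagged with
%% our pid, then selectively receive the reply tagged with A's pid.
demo() ->
    F = fun(X) -> 2 * X end,
    A = spawn(fun() -> callback_loop(F) end),
    A ! {self(), 21},
    receive
        {A, Reply} -> Reply
    end.

%% Server side: the Callback pattern - apply F and reply to the sender.
callback_loop(F) ->
    receive
        {From, X} ->
            From ! {self(), F(X)},
            callback_loop(F)
    end.
```

Tagging the reply with the server's pid is what lets the client's selective receive pick out the right message even if other messages are queued.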
  23. Challenge 2

      Can we easily program tricky concurrency patterns?

      Callback within RPC

          A ! {Tag, X},
          q(A, Tag).

          q(A, Tag) ->
              receive
                  {Tag, R} -> ...
              end.

      Parallel RPC

          par_rpc([A, B, C], M)
  24. False encapsulation

      Q: Should we hide the message passing of an RPC inside a function
         stub?
      A: No - then we can't do parallel RPCs.

      How do we implement parallel RPCs?

          par_rpc(Pids, M) ->
              S = self(),
              Pids1 = map(fun(Pid) ->
                              spawn(fun() -> do_rpc(S, Pid, M) end)
                          end, Pids),
              yield(Pids1).

          do_rpc(Parent, Pid, M) ->
              Val = rpc(Pid, M),
              Parent ! {self(), Val}.

          yield([H|T]) -> receive {H, Val} -> [Val|yield(T)] end;
          yield([])    -> [].
  25. Is Erlang a COPL?

      Yes - ish.
      Fine for a LAN behind a firewall. Various security policies need
      implementing for use in a WAN.
  26. Wars ...

      - Erlang is secret (sssh!)
      - Erlang is slow
      - Erlang banned in Ericsson (for new products) - it wasn't C++ :-)
      - Erlang escapes (open source)
      - Erlang infects Nortel
      - Erlang still used in Ericsson (despite ban)
      - Erlang controls the world (or at least the BT networks)
  27. Dollars

      - AXD301
      - GPRS
      - Nortel SSL accelerator
      - Bluetail - a very successful Swedish IT startup
  28. AXD301

      - ATM switch (Media gateway = AXD, Soft switch (ENGINE) = AXD+AXE)
      - 11% of world market = market leader
      - 99.9999999% reliability (9 nines) (31 ms/year!)
      - 30-40 million calls per week
      - World's largest telephony-over-ATM network
      - 1.7 million lines of Erlang
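The "31 ms/year" figure follows directly from nine nines of availability: the allowed downtime is the unavailable fraction of a year. A quick sanity check (module name is mine; using 365.25 days per year):

```erlang
-module(nines).
-export([downtime_ms/0]).

%% Nine nines (99.9999999%) leaves a 1.0e-9 fraction of the year as
%% permitted downtime. Returns that downtime in milliseconds.
downtime_ms() ->
    SecondsPerYear = 365.25 * 24 * 3600,   %% 31,557,600 s
    Unavailability = 1.0e-9,               %% = 1 - 0.999999999
    Unavailability * SecondsPerYear * 1000.
```

This evaluates to about 31.6 ms of downtime per year, matching the slide's rounded "31 ms".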
  29. AXD Market share

      [Chart: "Ericsson is the world market leader in Softswitching" -
      market share by vendor, with Ericsson at 11%.]
  30. 9 nines reliability

      BT, UK (ENGINE usage in hybrid configuration):

      "During the very popular 'Pop Idol' TV show in February 2002, the
      Ericsson server was on two occasions exposed to 34 and 28 ... The
      dynamic congestion control kicked in and limited the throughput to
      the configured ... The situation was handled smoothly and did not
      cause any problems with the server. In September 2002, N nodes had
      been deployed out of a planned 23 nodes by the end of 2002. BT are
      handling about 30-40 million calls per node and week. In total BT
      has switched circa 7-8 billion calls to date using VoATM technology
      and ATM transport." (September 2002)
  31. High performance

      Why BT modernised (ENGINE case study):
      - Existing transit circuit-switched network needed modernisation
      - Rapid traffic growth from new and existing services
      - Increase capacity and reduce cost through evolution to a new
        multi-service communication system capable of carrying all
        telephony, data and multimedia services

      [Remaining panel largely unreadable in this transcript: ENGINE
      integral in a hybrid architecture; transit network nodes across the
      UK carrying live traffic (September 2002); 30-40 million calls per
      week; world's largest telephony-over-ATM network.]
  32. Alteon SSL Accelerator

      Nortel Networks press release, 28 August 2002:

      "In the security arena, Infonetics Research credited Nortel
      Networks with the #1 position in the dedicated Secure Sockets Layer
      (SSL) appliance market with 48 percent market share in the first
      half of 2002, improving on the 41 percent market share reported for
      first half of 2001."

      Total market c. $42M (2002) growing to $60M in 2005 (Infonetics
      Research).
  33. GPRS

      - 45% of world market
      - 79 commercial contracts
      - 76% of code is in Erlang
  34. Recap

      - The world is concurrent.
      - Things in the world don't share data.
      - Things communicate with messages.
      - Things fail.

      Model this in a language.
  35. Now you must ...

      Make it easy for your users to program concurrent, distributed,
      fault-tolerant applications.