Learning Erlang (from a Prolog dropout's perspective)

Transcript

  • 1. Learning Erlang (from a Prolog dropout's perspective)
    Kenji Rikitake, JJ1BDX
    26-APR-2008, for the 1000speakers:4 conference
    Licensed under Creative Commons CC-BY 3.0
  • 2. Disclaimers
    * Strictly NO WARRANTY: (ab)use this presentation at your own risk.
    * JJ1BDX is the author and the only person responsible for this presentation; my (former) employers and anyone else have nothing to do with this work or its contents.
    * This is a work in progress; you may find a bunch of errors and glitches.
  • 3. Who am I?
    * I've been JJ1BDX since 1976; JJ1BDX is my amateur radio callsign in Japan.
    * I've been an Internet activist since 1986.
    * In 1986 I was partying around with fellow radio and computer hackers, much like what you are doing now at the 1000speakers conferences and other events. So I came here to enjoy. :-)
  • 4. Why I didn't like Prolog (in the 1980s)
    * Programming only by matching?
    * Programming without assignment?
    * You can't really compute numbers?
    * Parallelism on the desktop?
    * Runtime on (slow) virtual machines?
    ... NO WAY! (and I spent too much time on NetNews)
  • 5. ... then why Erlang NOW??
    * It works fast enough on modern PCs; it works OK even on Windoze! (UNIX rules)
    * It has a bunch of practical applications: yaws, ejabberd, ATM packet exchanges.
    * It can get the most out of parallelism; threads and shared memory are headaches.
    * It is new to me, and I'm sure I can learn something new. That's for sure.
  • 6. Why is Erlang hard to learn?
    * Extraordinary syntax with embedded Prolog-isms: Horn clauses, pattern-matching branches, message-driven parallelism.
    * I'm preoccupied by C and FORTRAN: learning tail recursion is not a trivial task, and local variable declaration is not explicit.
    * Lots of new things: modules, data structures, Mnesia, OTP...
  • 7. (slide image only; no transcript text)
  • 8. What I'm going to show you
    * IPv6-related string manipulation (see the sketches below):
      - generation of 10,000 random IPv6 addresses
      - parsing IPv6 colon-notation addresses into Erlang-native tuple forms
      - generating reverse-lookup names from the Erlang tuples (colon-form addresses -> ip6.arpa names)
    * Some crude profiling results with parallelism, using the SMP Erlang VM.
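
As a rough illustration of the first step, here is a minimal sketch for generating the test input. The module name ip6gen is an assumption (not the original demo code), and random:uniform/1 is assumed as the PRNG:

    -module(ip6gen).
    -export([random_ipv6/0, random_ipv6_list/1]).

    %% One random IPv6 address as an 8-element tuple of 16-bit words.
    %% random:uniform/1 was the stock OTP PRNG of that era; modern
    %% releases would use rand:uniform/1 instead.
    random_ipv6() ->
        list_to_tuple([random:uniform(65536) - 1 || _ <- lists:seq(1, 8)]).

    %% N random addresses, e.g. random_ipv6_list(10000).
    random_ipv6_list(N) ->
        [random_ipv6() || _ <- lists:seq(1, N)].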
  • 9. (slide image only; no transcript text)
  • 10. Erlang IP address tuples
    * IPv4: {127,0,0,1} -> localhost (4 x 8-bit elements)
    * IPv6: {0,0,0,0,0,0,0,1} -> ::1 (... well, localhost) (8 x 16-bit big-endian elements)
    * An address conversion function is ready-made: inet_parse:address() handles both IPv4 and IPv6.
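
A quick shell check of these tuple forms, assuming an OTP release that exports inet_parse:address/1 (newer releases also offer inet:parse_address/1); note that the 16-bit words print in decimal, e.g. 8193 = 16#2001:

    1> inet_parse:address("127.0.0.1").
    {ok,{127,0,0,1}}
    2> inet_parse:address("::1").
    {ok,{0,0,0,0,0,0,0,1}}
    3> inet_parse:address("2001:1:3fff:dbde::1").
    {ok,{8193,1,16383,56286,0,0,0,1}}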
  • 11. (slide image only; no transcript text)
  • 12. Reverse lookup is harder in IPv6 than in IPv4
    * IPv4: just reversing the tuple elements is enough; 127.0.0.1 -> 1.0.0.127.in-addr.arpa (lists:reverse() on the tuple elements)
    * IPv6: parsing and string manipulation are needed; 2001:1:3fff:dbde::1 -> 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.e.d.b.d.f.f.f.3.1.0.0.0.1.0.0.2.ip6.arpa
    * ... binary bit pack/unpack operations are effective here.
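
A minimal sketch of the tuple-to-ip6.arpa conversion using binary pack/unpack; the module and function names (ip6rev, to_ip6_arpa/1) are assumptions, not necessarily those of the demo code:

    -module(ip6rev).
    -export([to_ip6_arpa/1]).

    %% Build the ip6.arpa reverse-lookup name from an 8-element IPv6
    %% address tuple of 16-bit words, e.g.
    %% to_ip6_arpa({16#2001,16#1,16#3fff,16#dbde,0,0,0,1})
    %% yields the ip6.arpa name shown on the slide above.
    to_ip6_arpa(Addr) when tuple_size(Addr) =:= 8 ->
        %% pack the eight 16-bit words into one 128-bit binary
        Bin = << <<W:16>> || W <- tuple_to_list(Addr) >>,
        %% unpack into 4-bit nibbles with a binary comprehension
        Nibbles = [N || <<N:4>> <= Bin],
        %% hex-encode each nibble, reverse the order, join with dots
        Digits = [integer_to_list(N, 16) || N <- lists:reverse(Nibbles)],
        string:to_lower(lists:flatten(string:join(Digits, ".") ++ ".ip6.arpa")).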
  • 13. (slide image only; no transcript text)
  • 14. (slide image only; no transcript text)
  • 15. Concurrency and map()ping
    * Simple list iteration: map()ping, i.e. applying a function to all the list members: lists:map(fun(X) -> function(X) end, Arglist)
    * Parallelism: parallel map(), Joe Armstrong's pmap() (in Programming Erlang: Software for a Concurrent World)
      - spawn()ing a lightweight process per element
      - preserving the result list sequence
      - caution: the execution sequence is implementation dependent; no side effects allowed in the function
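
For reference, a pmap() along the lines of the one in Programming Erlang (a sketch, not a verbatim copy; the module name plists is mine): spawn one lightweight process per element and collect the replies in the original list order.

    -module(plists).
    -export([pmap/2]).

    %% Parallel map: one spawned process per list element. The result
    %% list keeps the input order because the receive expressions are
    %% matched against the Pids in spawn order.
    pmap(F, L) ->
        Parent = self(),
        Pids = [spawn(fun() -> Parent ! {self(), F(X)} end) || X <- L],
        [receive {Pid, Result} -> Result end || Pid <- Pids].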
  • 16. map() vs. pmap() results
    * 10,000 addresses / Core2Duo 2.33GHz (mean values of 5 measurements)

    Completion time for test functions   lists:map()     Armstrong's pmap()
    non-SMP (single scheduler)           1.309 seconds   2.029 seconds (+55.0%)
    SMP, 2 cores (2 schedulers)          1.343 seconds   1.202 seconds (-10.5%)
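
Measurements like these can be taken with timer:tc/3, which returns elapsed microseconds. A hypothetical sketch reusing the ip6gen, ip6rev, and plists modules assumed earlier (not the original benchmark code):

    %% time both versions over the same 10,000-address input
    Addrs = ip6gen:random_ipv6_list(10000),
    {UsMap,  Rev1} = timer:tc(lists,  map,  [fun ip6rev:to_ip6_arpa/1, Addrs]),
    {UsPmap, Rev2} = timer:tc(plists, pmap, [fun ip6rev:to_ip6_arpa/1, Addrs]),
    Rev1 =:= Rev2.    %% both versions return the same result list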
  • 17. Spawning overhead of lightweight processes
    * Cost of creating lightweight processes: 3 to 12 microseconds per process
    * Messaging overhead between processes
    * More efficient utilization of CPUs is needed:
      - granularity: per-function computation
      - number of simultaneous processes
      - efficient process spawning, e.g., when the result list sequence need not be preserved
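
A crude way to estimate the per-process spawn cost quoted above (a sketch; timer:tc/1, available in current OTP releases, times a zero-argument fun in microseconds):

    %% spawn N trivial processes and average the elapsed time
    1> SpawnCost = fun(N) ->
           {Us, _Pids} = timer:tc(fun() ->
               [spawn(fun() -> ok end) || _ <- lists:seq(1, N)]
           end),
           Us / N      % microseconds per spawn
       end.
    2> SpawnCost(100000).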
  • 18. Conclusions and lessons
    * Erlang is weird, but worth learning; getting the most out of the modules is essential.
    * Parallelism works (even with just 2 CPUs!); write parallelism-aware functions: Erlang's idioms help parallel programming.
    * Programmers need to learn parallelism; Erlang's ideas are applicable to other languages.
    * Effective serialization is also critical; some algorithms have to run fast (e.g., crypto).
  • 19. Appendix: a lighter function means less parallelism gain
    * 10,000 addresses / Core2Duo 2.33GHz (mean values of 5 measurements)
    * Faster implementation: io_lib:format() replaced by primitive hex conversion

    Completion time for test functions   lists:map()     Armstrong's pmap()
    SMP, 2 cores (2 schedulers)          0.129 seconds   0.198 seconds (+53%)
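
The "primitive hex conversion" mentioned here could be as simple as per-nibble character arithmetic, avoiding the cost of io_lib:format/2. A sketch, not the actual demo code (a clause that could replace integer_to_list/2 in the ip6rev sketch above):

    %% map a 4-bit nibble (0..15) directly to its lowercase hex character
    hex_digit(N) when N >= 0,  N =< 9  -> $0 + N;
    hex_digit(N) when N >= 10, N =< 15 -> $a + N - 10.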
  • 20. (slide image only; no transcript text)
  • 21. (slide image only; no transcript text)