Introduction To Distributed Erlang

Slides from my talk at the Vancouver Erlang Meetup Group on November 9th 2009.

See: http://www.meetup.com/erlang-vancouver/events/11730648/

Transcript

  • 1. Introduction to Distributed Erlang
    Basic principles and a little more
  • 2. About us
    Developing a new breed of platform for social networking games
    Scalability is a must
    Back-end developed with Erlang/OTP
      And lots of other good stuff
    Front-end developed with Flex 4
  • 3. About me
    More stickers than a 40-year-old RV!
  • 4. Agenda and non-agenda
    Quick recap of message passing
    Core principles of Erlang remoting
    Global registry and process groups
    Nothing on Erlang syntax {<<"sorry">>}
    Nothing on custom networking nor OTP
      OTP is what you'll use in reality
  • 5. Why Erlang for concurrency?
    Immutability painted all over it
    Designed to handle thousands of processes
      Spawned (not started nor forked)
    Processes communicate asynchronously
      Passing messages by value
  • 6. Message sending
    As easy as: Pid ! Message
    Sends a message to a process inbox
    Like with mail delivery:
      Not sure if the letter reached its destination
      Needs a letter back if a response is needed
  • 7. RSVP
    Add a return address on the envelope for a response
  • 8. RSVP (client)
    Client is also Server's public API

    send(Sid, Message) ->
        Sid ! {self(), Message},
        receive
            {From, Response} ->
                io:format("Client ~p from ~p~n", [Response, From])
        after 1000 ->
            bail
        end.
  • 9. RSVP (server)

    start() ->
        spawn(fun() -> serve() end).

    serve() ->
        receive
            {From, Message} ->
                io:format("Server ~p from ~p~n", [Message, From]),
                From ! {self(), {ack, Message}},
                serve()
        end.
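    A minimal usage sketch tying the two slides together, assuming send/2, start/0 and serve/0 are compiled into a module called rsvp (the slides do not name one):

        Sid = rsvp:start(),      % spawn the server and keep its pid
        rsvp:send(Sid, hello).   % prints "Server hello from <...>" then "Client {ack,hello} from <...>"
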
  • 10. Process registration
    Allows process naming: register(my_pid, Pid).
    Frees the application from passing Pids around: my_pid ! Message.
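    A minimal sketch: give the server a name so callers no longer need its pid (the rsvp module is assumed from the previous slides):

        Pid = rsvp:start(),
        register(my_pid, Pid),
        my_pid ! {self(), hello},   % send by name instead of by pid
        whereis(my_pid).            % -> Pid, or undefined if nothing is registered under that name
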
  • 11. RPC strikes back
    Call an MFA on a single node or on multiple nodes
    Comes in zillions of variations:
      Blocking or not
      Parallelized, including pmap
    Makes code location aware
    Heterogeneous styles: Pid ! Message vs. rpc:call(N,M,F,A)
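    A hedged sketch of a few of those variations (the remote node name is made up for illustration):

        Node = 'n2@myhost',
        rpc:call(Node, erlang, node, []),                        % blocking call on one node
        rpc:cast(Node, io, format, ["hello from afar~n"]),       % fire-and-forget
        rpc:multicall([node(), Node], erlang, memory, [total]),  % same MFA on several nodes
        rpc:pmap({lists, reverse}, [], [[1,2,3], [4,5,6]]).      % parallelized map
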
  • 12. Node connectivity
    Triggered by: rpc:call ... net_adm:ping(node@host)
    Nodes shake hands and share information
      Processes, registrations...
    Transitive mechanism
      Node 1 ➟ Node 2 and Node 2 ➟ Node 3, then: Node 1 ➟ Node 3
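    A minimal sketch with a made-up node name: an explicit ping is enough to join the cluster, and connections then spread transitively:

        pong = net_adm:ping('n2@hostb'),   % pang would mean the handshake failed
        nodes().                           % lists n2, plus any node n2 was already connected to
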
  • 13. Erlang Port Mapper Daemon
  • 14. Erlang's magic cookie
    Passed on startup: erl -sname n1 -setcookie secret
    Proper node and host naming required
    Coarse-grained security
    Party time or bust!
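    A minimal sketch: cookies can also be inspected and changed at runtime rather than only via the -setcookie flag (node name made up for illustration):

        erlang:get_cookie(),                         % the local node's current cookie
        erlang:set_cookie(node(), secret),           % change it for the whole local node
        erlang:set_cookie('n2@hostb', othersecret).  % use a per-node cookie towards n2
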
  • 15. Global process registry
    Location transparency:
      global:register_name(gbs, Sid).
      global:whereis_name(gbs) ! Message.
    Wired-in name conflict resolution
    Still need to ping nodes
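    A minimal sketch: register the server cluster-wide, then message it from any connected node without knowing where it runs (rsvp module assumed from earlier slides):

        Sid = rsvp:start(),
        yes = global:register_name(gbs, Sid),          % 'no' if the name is already taken
        global:whereis_name(gbs) ! {self(), hello},
        global:send(gbs, {self(), hello}).             % same thing, but exits if gbs is unknown
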
  • 16. Process group (1/3)
    Distributed named process group
    Processes join and leave:
      pg2:create(mypg2).
      pg2:join(mypg2, Sid).
      pg2:leave(mypg2, Sid).
  • 17. Process group (2/3)
    pg2 can be used to send messages to:
      all processes: pg2:get_members(mypg2)
      local processes: pg2:get_local_members(mypg2)
      closest / random process: pg2:get_closest_pid(mypg2)
  • 18. Process group (3/3)
    Also: pg (experimental)
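    A minimal sketch: fan a message out to every member of a pg2 group, or pick a single (preferably local) member (group name and message are made up for illustration; Sid is assumed to be a running server pid):

        pg2:create(mypg2),
        pg2:join(mypg2, Sid),
        [Pid ! {self(), tick} || Pid <- pg2:get_members(mypg2)],   % broadcast to all members
        pg2:get_closest_pid(mypg2) ! {self(), tick}.               % or just one member
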
  • 19. Ping pong pang relief
    net_adm:world and net_adm:world_list
      Ease node discovery on hosts
      Requires hosts list
    Node discovery with nodefinder
      UDP multicast
      S3 list for AWS
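    A minimal sketch: net_adm:world/0 pings every node on the hosts listed in the .hosts.erlang file, while world_list/1 takes the host list directly (host names made up):

        net_adm:world(),                          % -> list of nodes that answered pong
        net_adm:world_list(['hosta', 'hostb']).   % same, for an explicit host list
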
  • 20. Probing further
  • 21. Thank you!