Zing Me Real Time Web Chat Architecture

  1. Zing Me Web Chat Architecture
     By Chau Nguyen Nhat Thanh, ZingMe Technical Manager, Web Technical, VNG
  2. Agenda
     ● About Zing Me
     ● Zing Me platform strategy
     ● Zing Me web chat architecture
       ● The problem with real-time messages in HTTP
       ● Connection server
       ● Message server
       ● Online server
     ● Some statistics
  3. About Zing Me
  4. SNS Platform
  5. Updated stats
     ● 45M registered accounts
     ● 2.2M daily active users
     ● 7M monthly active users
     ● > 500 servers (2K cores)
       ● 70 DB servers, > 60 Memcached servers
       ● ~150 web servers
       ● 40 servers for the Hadoop farm
       ● Others (storage, ...)
  6. Zing Me Technology
     ● Load balancing: HAProxy, LVS
     ● Web servers: Nginx, Lighttpd
     ● Web caching: Squid, Varnish
     ● Caching: Redis, Memcached, Membase
     ● CDN: in progress
     ● Programming languages: PHP, C++, Java, Python, Bash script, Erlang
     ● Search: Solr, Lucene
     ● DB: MySQL, Cassandra, HBase
     ● Log system: Scribe + Hadoop
  7. Zing Me platform strategy
     ● Open platform
       ● OpenSocial API
       ● OAuth
       ● Zing Connect
       ● http://open.me.zing.vn/
     ● Open services
       ● Cloud Memcache
       ● Cloud Key-Value Storage
       ● Virtualization for the hosting service
  8. Zing Me platform strategy
     ● Focus on communication tools:
       ● Email: Zing Mail
       ● Private messages
       ● IM: ZingMe Web Chat
       ● Notifications: Zing Notification System
  9. Zing Me Web Chat Architecture
  10. Frontend UI
  11. The problem with real-time messages
      ● Real-time messages over HTTP
        ● The browser actively connects to the web server
        ● How can the web server push a new message to the browser?
        ● COMET is the answer
      ● COMET overview
        ● Comet is a web application model in which a long-held HTTP request allows a web server to push data to a browser, without the browser explicitly requesting it (Wikipedia)
        ● Implementations: hidden iframe, Ajax long polling, script-tag long polling
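The long-polling flavor of COMET can be illustrated with a tiny mailbox: the server holds the request until a message arrives or a timeout fires, and on timeout the browser simply re-polls. This is a minimal C++ sketch of that hold-until-data-or-timeout semantics, not the Zing Me code; the `Mailbox` class and its method names are invented for the example.

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <string>

// Hypothetical per-user mailbox illustrating long-polling semantics:
// poll() blocks like a held HTTP request until a message arrives or
// the timeout expires, in which case the client is expected to re-poll.
class Mailbox {
public:
    void push(std::string msg) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
        cv_.notify_one();
    }
    // Returns the next message, or nullopt on timeout.
    std::optional<std::string> poll(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lk(m_);
        if (!cv_.wait_for(lk, timeout, [this] { return !q_.empty(); }))
            return std::nullopt;  // timeout: browser re-issues the poll
        std::string msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> q_;
};
```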
  12. Frontend JS
      ● 2 developers, 4 months
      ● Built from scratch on zmCore.js, Zing Me's homemade JS framework
      ● Uses long polling to push new messages
      ● Gets the online friend list
      ● Gets the last message
      ● Verifies the session
  13. Backend
      ● Must be scalable
      ● Handles a large number of connections
      ● The Connection Server handles the long-polling connections
      ● The Friend Online Server keeps track of online friends
      ● The Message Server stores the messages
      ● The Message Delivery Worker transfers messages from the Message Server to the Connection Server
      ● The Channel Server maps users to their connections
  14. Web Chat Architecture Overview
  15. Connection Server
      ● Must handle a large number of concurrent connections (> 100K)
      ● Implemented in C++
      ● Uses the native epoll system call
      ● Non-blocking mode for async I/O
      ● Developed in 8 months
  16. Connection Server Implementation
      ● Long polling
  17. Connection Server Implementation
      ● Keep connections open until a message arrives or a timeout fires
      ● The server must hold a large number of connections (C100K)
      ● Use the same event-driven approach as memcached and HAProxy
  18. Connection Server Implementation
      ● Why epoll?
        ● The usual way to implement TCP servers is "one thread/process per connection", but under high load this approach can be inefficient, and another connection-handling pattern is needed (kovyri.net)
        ● Need an asynchronous approach
        ● epoll is the solution
  19. Connection Server Implementation
      ● Coding with epoll
        ● Create a file descriptor for epoll calls:

              epfd = epoll_create(EPOLL_QUEUE_LEN);

        ● Then add your descriptors to epoll:

              static struct epoll_event ev;
              int client_sock;
              ...
              ev.events = EPOLLIN | EPOLLPRI | EPOLLERR | EPOLLHUP;
              ev.data.fd = client_sock;
              int res = epoll_ctl(epfd, EPOLL_CTL_ADD, client_sock, &ev);
  20. Connection Server Implementation
      ● Once all descriptors are added to epoll, the process can idle and wait for something to do on the epoll'ed sockets:

              while (1) {
                  // wait for something to do...
                  int nfds = epoll_wait(epfd, events,
                                        MAX_EPOLL_EVENTS_PER_RUN,
                                        EPOLL_RUN_TIMEOUT);
                  if (nfds < 0)
                      die("Error in epoll_wait!");
                  // for each ready socket
                  for (int i = 0; i < nfds; i++) {
                      int fd = events[i].data.fd;
                      handle_io_on_socket(fd);
                  }
              }
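The fragments on the last two slides can be tied together into one compilable unit. This is only a minimal, Linux-specific illustration of the same register-then-wait epoll pattern; `make_epoll_with` and `wait_one` are names invented for this sketch, not the production code.

```cpp
#include <cstdio>
#include <cstdlib>
#include <sys/epoll.h>
#include <unistd.h>

// Create an epoll instance and register one descriptor for the same
// event mask the slides use.
int make_epoll_with(int fd) {
    int epfd = epoll_create(1);  // size hint, ignored since Linux 2.6.8
    if (epfd == -1) { perror("epoll_create"); exit(1); }
    struct epoll_event ev = {};
    ev.events = EPOLLIN | EPOLLPRI | EPOLLERR | EPOLLHUP;
    ev.data.fd = fd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) == -1) {
        perror("epoll_ctl"); exit(1);
    }
    return epfd;
}

// One pass of the wait loop: block up to timeout_ms and return the
// first ready descriptor, or -1 if nothing became ready.
int wait_one(int epfd, int timeout_ms) {
    struct epoll_event events[16];
    int nfds = epoll_wait(epfd, events, 16, timeout_ms);
    if (nfds <= 0) return -1;
    return events[0].data.fd;
}
```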
  21. Friend Online Server
      ● Hard to scale
      ● Implemented in C++
      ● Uses some tricks to decide whether a user is online
      ● Caching is hard
      ● Developed in 1 month
  22. Message Server
      ● Stores the messages
      ● Implemented in Java
      ● Problem with memory usage
        ● C++ version in progress
      ● Notifies of new messages via a queue
      ● A worker delivers each message to the user through the Connection Server
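The queue-plus-worker delivery path described above might look like the following sketch. Everything here is hypothetical (the actual servers are Java and C++): the class names are invented, and the `deliver` callback stands in for the call to the Connection Server.

```cpp
#include <functional>
#include <queue>
#include <string>
#include <utility>

// Illustrative Message Delivery Worker: drain a notification queue and
// hand each message to the connection layer via a caller-supplied hook.
struct QueuedMessage {
    std::string user;
    std::string body;
};

class DeliveryWorker {
public:
    explicit DeliveryWorker(std::function<void(const QueuedMessage&)> deliver)
        : deliver_(std::move(deliver)) {}

    // The Message Server would enqueue a notification per stored message.
    void enqueue(QueuedMessage m) { queue_.push(std::move(m)); }

    // One drain pass; a real worker would loop and block on the queue.
    int drain() {
        int delivered = 0;
        while (!queue_.empty()) {
            deliver_(queue_.front());   // push to the user's connection
            queue_.pop();
            ++delivered;
        }
        return delivered;
    }

private:
    std::function<void(const QueuedMessage&)> deliver_;
    std::queue<QueuedMessage> queue_;
};
```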
  23. Web Chat statistics
  24. Total messages
  25. Hourly stats
  26. Q&A
      Contact info:
      Chau Nguyen Nhat Thanh
      thanhcnn@vng.com.vn
      me.zing.vn/thanhcnn2000
