Project Earl Grey

A brief introduction to the concepts and architecture of Earlgrey, an open-source online game server library.


    Presentation Transcript

    • Earl Grey
      Concepts & Architecture for Dummies
    • What is it?
      Online game server framework
      Open source project with MIT license
      Still under development
    • Keywords
      Intel x86/x64 (Itanium)
      Multi-processor machines
      Plenty of main memory
      Windows Vista/2003 and newer (official)
      Windows XP/2000 (unofficial)
      Standalone (no third-party libraries required)
      Lock-free algorithms (threading)
      I/O completion ports (networking)
    • Vision
      Provide the basic functionality and extensible architecture needed to build high-performing yet robust online game servers
    • Threading – Task analysis
      Two different kinds of tasks:
      CPU-bound tasks
      use a lot of CPU time
      hold computer resources only for a short time
      Example: HP calculation
      IO-bound tasks
      use relatively little CPU time
      hold computer resources for a long time
      Example: database operations
    • Threading – Thread groups
      CPU-bound thread group
      Client requests
      IOCP thread group (not yet implemented)
      Database operations
      Logging operations (separating these out of the IO-bound thread group is being considered)
      Main thread
      Starts and ends the application
    • Threading - Performance
      Race conditions
      One resource, multiple threads
      The best solution is not to share the resource at all!
      Cache invalidation
      One task, multiple processors
      The best solution is to pin a thread to a specific processor
      A task should be processed entirely within one thread
      We cannot rely on the OS scheduler's optimizations
    • Threading - Performance
      CPU-bound thread group
      No waiting!
      A waiting thread is an unavailable thread
      Posts IO-bound tasks to the IOCP thread group
      A message-posting mechanism is required
      Each request should be processed quickly
      One thread per processor
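The "post, don't wait" rule above can be sketched with a portable task queue. Earlgrey hands such tasks to an IO completion port; the mutex-guarded queue below is a hypothetical, portable stand-in that only illustrates the hand-off, not the library's actual mechanism:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

// Sketch: a CPU-bound worker posts a slow task here and moves on;
// an IO-bound worker picks the task up and does the waiting.
class TaskQueue {
public:
    void Post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

    // Blocks until a task arrives, runs it, returns true.
    // Returns false once Stop() was called and the queue drained.
    bool RunOne() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return stopped_ || !tasks_.empty(); });
        if (tasks_.empty()) return false;
        auto task = std::move(tasks_.front());
        tasks_.pop();
        lock.unlock();
        task();  // the IO-bound thread does the waiting, not the poster
        return true;
    }

    void Stop() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopped_ = true;
        }
        cv_.notify_all();
    }

private:
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool stopped_ = false;
};
```

In the real design the queue would be lock-free (or the IOCP itself), so the posting thread never blocks on a mutex.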
    • Threading - Performance
      IO-bound thread group
      Waiting is inevitable for some kinds of operations
      Assign multiple threads to each processor
      The best thread-to-processor ratio must be chosen by hand, or by a tuning mechanism that has not been developed yet
    • Threading - Performance
      Each thread holds its own copy of resources that are read-only or do not need to be shared
      Ex) the internal buffers of the FromUnicode function
      Race conditions are resolved with lock-free containers
      Traditional locking is still used for one-time initialization of singleton instances
      Message-posting mechanism
      Each thread/thread group has its own role
      Copying data usually performs better than waiting on shared resources
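The per-thread buffer idea behind the FromUnicode example can be sketched as below. The helper's name, signature, and naive narrowing are assumptions for illustration only, not Earlgrey's actual API; the point is the `thread_local` buffer that removes both locking and per-call allocation:

```cpp
#include <string>

// Each thread reuses its own conversion buffer: no lock, no sharing,
// no allocation on the hot path once the buffer has grown.
const std::string& FromUnicodeNarrowDemo(const std::wstring& wide) {
    thread_local std::string buffer;  // one private buffer per thread
    buffer.clear();
    buffer.reserve(wide.size());
    for (wchar_t ch : wide)
        buffer.push_back(ch < 128 ? static_cast<char>(ch) : '?');
    return buffer;  // safe to return: the buffer outlives the call
}
```

The returned reference stays valid only until the same thread calls the function again, which is exactly the lifetime discipline the slide's "no-need-to-be-shared resources" wording implies.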
    • Memory – GreedyAllocator
      Global heap allocator
      Never returns memory to the OS
      Relatively simple structure → high performance
      Designed on the assumption that each application has a dedicated machine
      Not yet optimized for:
      Cache line size
      Large pages (where the processor supports them)
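A minimal sketch of the greedy strategy described above: grab large blocks, hand out bump-pointer allocations, and never give memory back to the OS. This is an illustration of the concept under stated assumptions, not Earlgrey's code:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>
#include <vector>

// "Greedy": memory flows from the OS into the allocator and never back.
// This is reasonable only when the application owns the whole machine,
// which is exactly the slide's stated assumption.
class GreedyAllocatorDemo {
public:
    explicit GreedyAllocatorDemo(std::size_t blockSize = 1 << 20)
        : blockSize_(blockSize) {}

    ~GreedyAllocatorDemo() {
        for (char* b : blocks_) std::free(b);  // released only at shutdown
    }

    void* Allocate(std::size_t bytes) {
        bytes = (bytes + 7) & ~std::size_t(7);  // keep 8-byte alignment
        if (blocks_.empty() || used_ + bytes > blockSize_) {
            char* block = static_cast<char*>(std::malloc(blockSize_));
            if (!block) throw std::bad_alloc();
            blocks_.push_back(block);
            used_ = 0;
        }
        void* p = blocks_.back() + used_;
        used_ += bytes;
        return p;
    }
    // Note: no Free(). The simple structure is what makes it fast.

private:
    std::size_t blockSize_;
    std::size_t used_ = 0;
    std::vector<char*> blocks_;
};
```

The "not yet optimized" items on the slide would slot in here: rounding `blockSize_` to the large-page size and aligning allocations to cache-line boundaries.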
    • Memory – ThreadLocalAllocator
      Minimizes race conditions and waiting time
      Each thread has its own memory pool
      When a thread runs short of memory, it requests more from the global heap allocator
      When thread A has plenty of memory and thread B runs short, the memory manager moves a memory chunk from A to B
      About 10 times faster than Windows' low-fragmentation heap
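The thread-local pool idea can be sketched like this. All names are illustrative, and the chunk migration between threads that the slide describes is omitted; the sketch only shows why the fast path is contention-free:

```cpp
#include <cstddef>
#include <new>
#include <vector>

// Each thread keeps its own free list of fixed-size chunks. Allocation
// and deallocation touch only thread-private state; the shared heap is
// reached only on shortage, which is the sole contention point.
struct ChunkPool {
    static constexpr std::size_t kChunkSize = 256;
    std::vector<void*> freeList;

    void* Allocate() {
        if (freeList.empty())
            // Shortage: fall back to the global allocator
            // (GreedyAllocator in Earlgrey's design).
            return ::operator new(kChunkSize);
        void* p = freeList.back();
        freeList.pop_back();
        return p;
    }

    void Free(void* p) { freeList.push_back(p); }  // no lock needed
};

// One pool per thread.
inline ChunkPool& LocalPool() {
    thread_local ChunkPool pool;
    return pool;
}
```

A real implementation also needs the rebalancing step from the slide: a manager that steals chunks from a thread with a long free list and donates them to a starving one.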
    • Memory – StackAllocator
      Allocates memory on the stack (_malloca)
      Allocated memory is freed automatically
      Allocation is extremely fast
      A simple run-time check is implemented
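A portable sketch of the _malloca pattern: small requests come from a buffer inside the object (which lives on the caller's stack), oversized requests fall back to the heap, and the destructor releases everything automatically, like `_freea`. This is an illustration under those assumptions, not the library's implementation:

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

template <std::size_t Capacity = 1024>
class StackAllocatorDemo {
public:
    void* Allocate(std::size_t bytes) {
        if (used_ + bytes <= Capacity) {      // fast path: stack buffer
            void* p = buffer_ + used_;
            used_ += bytes;
            return p;
        }
        heap_.push_back(std::malloc(bytes));  // slow path: heap fallback
        return heap_.back();
    }

    ~StackAllocatorDemo() {                   // automatic cleanup
        for (void* p : heap_) std::free(p);
    }

    // The "simple run-time check": did any request spill to the heap?
    bool UsedHeap() const { return !heap_.empty(); }

private:
    alignas(std::max_align_t) char buffer_[Capacity];
    std::size_t used_ = 0;
    std::vector<void*> heap_;
};
```

On MSVC the same fast-path/fallback decision is what `_malloca` performs internally, which is why its result must always be released with `_freea`.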
    • Memory – Third-parties
      Intel Threading Building Blocks (TBB)
      Windows low-fragmentation heap (LFH)
    • Memory – STL support
      Containers using the global heap allocator (GreedyAllocator):
      xwstring, xwstringstream, xvector, and so on
      Containers using the stack allocator (StackAllocator):
      instance lifetimes must be considered carefully
      auto_wstring, auto_wstringstream, and so on
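One way typedefs like xvector and xwstring can be built is by binding standard containers to a custom STL allocator. The TrackingAllocator below is a hypothetical stand-in for GreedyAllocator (it only counts bytes); the typedef pattern at the bottom is the part that mirrors the slide:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Minimal C++17 STL allocator; the real library would route
// allocate()/deallocate() to its own heap instead of operator new.
template <class T>
struct TrackingAllocator {
    using value_type = T;
    static inline std::size_t allocated = 0;  // bytes handed out

    TrackingAllocator() = default;
    template <class U>
    TrackingAllocator(const TrackingAllocator<U>&) {}

    T* allocate(std::size_t n) {
        allocated += n * sizeof(T);
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }

    template <class U>
    bool operator==(const TrackingAllocator<U>&) const { return true; }
    template <class U>
    bool operator!=(const TrackingAllocator<U>&) const { return false; }
};

// The xvector/xwstring pattern: ordinary STL types, custom allocator.
template <class T>
using xvector = std::vector<T, TrackingAllocator<T>>;

using xwstring = std::basic_string<wchar_t, std::char_traits<wchar_t>,
                                   TrackingAllocator<wchar_t>>;
```

An auto_* (StackAllocator-backed) variant would look the same, except the allocator holds a pointer to a stack arena, which is why the slide warns that instance lifetimes must be considered carefully.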
    • Future tasks
      Complete the asynchronous networking feature set
      A rich set of diagnostics
      Rolling log file / DebugOutput logging, and so on
      Integration with third-party libraries such as log4cxx
      Performance tuning (e.g. detecting a heavy request)
      IO-bound thread group
      Administration tool
      Telnet-based tool for Win32 services
    • Credit
      Cover photo: http://www.flickr.com/photos/kankan/41403840