Project Earl Grey
A brief introduction to the concepts and architecture of Earlgrey, an open-source online game server library.

    Project Earl Grey Presentation Transcript

    • Earl Grey
      Concepts & Architecture for Dummies
    • What is it?
      Online game server framework
      Open source project with MIT license
      http://code.google.com/p/earlgrey/
      Still under development
    • Keywords
      Intel x86/x64 (Itanium)
      Multi processor machine
      Plenty of main memory
      Windows Vista/2003 and newer (Official)
      Windows XP/2000 (Non-official)
      Standalone (Without third-party libraries)
      Lock-free algorithm (Threading)
      IO Completion Port (Network)
    • Vision
      Provides the basic functionality and an extensible architecture needed to build high-performing yet solid online game servers
    • Threading – Task analysis
      Two different kinds of tasks
      CPU-bound
      uses a lot of CPU
      holds resources only for a short time
      Example: HP calculation
      IO-bound
      uses relatively little CPU
      holds resources for a long time
      Example: database operations
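      A minimal sketch of the two kinds above, in C++; the function names and the HP formula are invented for this illustration and are not Earlgrey's actual task API:

          #include <string>

          struct Character { int id; int hp; std::string name; };

          // CPU-bound: pure computation, finishes quickly, never blocks.
          int CalculateHp(int baseHp, int level, int bonus)
          {
              return baseHp + level * 10 + bonus;   // made-up formula, for illustration only
          }

          // IO-bound: little CPU work, but the calling thread may block for a long time.
          // A real implementation would issue a database query here; this stub only
          // marks where the blocking call would sit.
          bool LoadCharacter(int characterId, Character& out)
          {
              // e.g. a synchronous SQL query would block the thread at this point
              out.id   = characterId;
              out.hp   = CalculateHp(100, 1, 0);
              out.name = "placeholder";
              return true;
          }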
    • Threading – Thread groups
      CPU-bound → IOCP thread group
      Client requests
      IO-bound
      Not yet implemented
      Database operations
      Logging operations (splitting these off into their own thread group is being considered)
      Main thread
      Starts and ends the application
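      A rough sketch of how such an IOCP worker group is set up with the Win32 API; the shutdown convention and the one-worker-per-processor count are assumptions of this example, not Earlgrey's exact code:

          #include <windows.h>
          #include <process.h>

          // Worker of the CPU-bound (IOCP) thread group: pulls completed client
          // requests off the completion port and processes them without blocking.
          unsigned __stdcall IocpWorker(void* arg)
          {
              HANDLE iocp = static_cast<HANDLE>(arg);
              for (;;)
              {
                  DWORD bytes = 0;
                  ULONG_PTR key = 0;
                  OVERLAPPED* overlapped = NULL;
                  if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &overlapped, INFINITE))
                      continue;                 // error handling omitted in this sketch
                  if (key == 0 && overlapped == NULL)
                      break;                    // shutdown signal (a convention of this sketch)
                  // ... dispatch the client request associated with 'overlapped' ...
              }
              return 0;
          }

          int main()
          {
              SYSTEM_INFO si;
              GetSystemInfo(&si);
              DWORD workerCount = si.dwNumberOfProcessors < 64 ? si.dwNumberOfProcessors : 64;

              HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, workerCount);
              HANDLE workers[64] = { 0 };
              for (DWORD i = 0; i < workerCount; ++i)
                  workers[i] = (HANDLE)_beginthreadex(NULL, 0, IocpWorker, iocp, 0, NULL);

              // The main thread only starts and, eventually, ends the application.
              for (DWORD i = 0; i < workerCount; ++i)
                  PostQueuedCompletionStatus(iocp, 0, 0, NULL);
              WaitForMultipleObjects(workerCount, workers, TRUE, INFINITE);
              CloseHandle(iocp);
              return 0;
          }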
    • Threading - Performance
      Race condition
      One resource / Multiple threads
      Best solution is not to share it!
      Cache invalidation
      One task / Multiple processors
      Best solution is to attach a thread to a specific processor
      A task should be fully processed in a thread
      Can’t depend on the OS’s optimization
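      A minimal sketch of the attach-a-thread-to-a-processor idea above, using the Win32 affinity API; whether Earlgrey pins threads exactly this way is an assumption of this example:

          #include <windows.h>

          // Pin the calling thread to one processor so its tasks stay on one cache
          // and the OS scheduler cannot migrate it.
          bool PinCurrentThreadToProcessor(DWORD processorIndex)
          {
              DWORD_PTR mask = (DWORD_PTR)1 << processorIndex;
              return SetThreadAffinityMask(GetCurrentThread(), mask) != 0;
          }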
    • Threading - Performance
      CPU-bound thread group
      No waiting!
      Waiting means a thread is unavailable
      Posts IO-bound tasks to the IOCP thread group.
      Message posting mechanism is required.
      Requests should be processed in a short time
      Each processor runs only one thread.
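      One plausible form of the required message posting mechanism, built on PostQueuedCompletionStatus; the message type and the ownership rule below are assumptions of this sketch:

          #include <windows.h>

          // Hypothetical message that carries an IO-bound task; the embedded
          // OVERLAPPED lets it travel through a completion port.
          struct IoTaskMessage
          {
              OVERLAPPED overlapped;   // must stay valid until the receiver is done with it
              int        characterId;  // example payload: "save this character"
          };

          // Called from a CPU-bound worker: hand the slow work to another thread
          // group's completion port instead of waiting on it here.
          bool PostIoBoundTask(HANDLE targetPort, int characterId)
          {
              IoTaskMessage* msg = new IoTaskMessage();
              ZeroMemory(&msg->overlapped, sizeof(msg->overlapped));
              msg->characterId = characterId;
              if (!PostQueuedCompletionStatus(targetPort, 0, /* key */ 1, &msg->overlapped))
              {
                  delete msg;
                  return false;
              }
              return true;   // by convention of this sketch, the receiving worker deletes 'msg'
          }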
    • Threading - Performance
      IO-bound thread group
      Waiting is inevitable for some kinds of operations.
      Assign multiple threads to a processor
      The best recipe must be decided by hand or by a mechanism that has not yet been developed.
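      Until such a mechanism exists, the thread count is hand-tuned; the ratio below is only an example value, not one taken from Earlgrey:

          #include <windows.h>

          // IO-bound workers spend most of their time waiting, so several of them
          // can share one processor. The right ratio currently has to be found by hand.
          DWORD IoBoundThreadCount(DWORD threadsPerProcessor)
          {
              SYSTEM_INFO si;
              GetSystemInfo(&si);
              return si.dwNumberOfProcessors * threadsPerProcessor;   // e.g. 2-4 per processor
          }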
    • Threading - Performance
      Each thread holds its own copies of read-only resources and resources that do not need to be shared.
      Ex) The internal buffers of the FromUnicode function.
      Race condition is resolved by lock-free containers.
      Traditional locking mechanism is still being used for one-time initialization of singleton instances.
      Message posting mechanism
      Each thread/thread group has its own roles.
      Copying data usually results in better performance than just waiting for shared resources.
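      A sketch of the per-thread copy idea in the spirit of the FromUnicode example above; the buffer size, the TLS mechanism, and the function name are assumptions of this example:

          #include <windows.h>

          // Each thread owns its own conversion buffer, so no lock is needed and
          // no other thread can invalidate it.
          static __declspec(thread) char t_conversionBuffer[4096];

          const char* FromUnicodeSketch(const wchar_t* text)
          {
              int written = WideCharToMultiByte(CP_ACP, 0, text, -1,
                                                t_conversionBuffer, sizeof(t_conversionBuffer),
                                                NULL, NULL);
              return written > 0 ? t_conversionBuffer : "";
          }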
    • Memory – GreedyAllocator
      Global heap allocator.
      Greedy?
      Never returns memory to the OS.
      Relatively simple structure → high performance.
      Designed on the assumption that each application has a dedicated machine.
      Not yet optimized
      Cache line size
      Large page size (if the processor supports it)
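      A very small sketch of the "greedy" policy only; Earlgrey's real GreedyAllocator adds free lists, alignment, thread safety, and error handling that are left out here:

          #include <windows.h>
          #include <cstddef>

          class GreedySketch
          {
          public:
              GreedySketch() : m_chunk(NULL), m_used(0), m_size(0) {}

              // Carve allocations out of chunks obtained from the OS.
              // Requests larger than one chunk are not handled in this sketch.
              void* Allocate(std::size_t bytes)
              {
                  if (m_chunk == NULL || m_used + bytes > m_size)
                  {
                      m_size  = 1024 * 1024;   // grab 1 MB at a time
                      m_chunk = (char*)VirtualAlloc(NULL, m_size,
                                                    MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
                      m_used  = 0;
                  }
                  void* p = m_chunk + m_used;
                  m_used += bytes;
                  return p;
              }
              // No Free(): memory is never returned to the OS.

          private:
              char*       m_chunk;
              std::size_t m_used;
              std::size_t m_size;
          };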
    • Memory – ThreadLocalAllocator
      Minimize race conditions and waiting time.
      Each thread has its own memory pool.
      If a thread runs short of memory, it requests more from the global heap allocator.
      If thread A has plenty of memory and thread B runs short, the memory manager moves a memory chunk from A to B.
      About 10 times faster than Windows’ low-fragmentation heap.
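      A sketch of the per-thread pool policy described above; the names are invented for this example, and the rebalancing of chunks between threads is not shown:

          #include <cstddef>
          #include <cstdlib>
          #include <vector>

          // Stand-in for the global heap allocator (GreedyAllocator) in this sketch.
          static void* GlobalHeapAllocate(std::size_t bytes)
          {
              return std::malloc(bytes);
          }

          static __declspec(thread) std::vector<void*>* t_freeChunks = NULL;

          void* ThreadLocalAllocateSketch(std::size_t chunkBytes)
          {
              if (t_freeChunks != NULL && !t_freeChunks->empty())
              {
                  void* chunk = t_freeChunks->back();   // no lock: this list belongs to one thread
                  t_freeChunks->pop_back();
                  return chunk;
              }
              return GlobalHeapAllocate(chunkBytes);    // shortage: fall back to the global heap
          }

          void ThreadLocalFreeSketch(void* chunk)
          {
              if (t_freeChunks == NULL)
                  t_freeChunks = new std::vector<void*>();
              t_freeChunks->push_back(chunk);           // keep the chunk for reuse by this thread
          }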
    • Memory – StackAllocator
      Allocates memory on the stack (_malloca).
      Frees allocated memory automatically.
      Allocation is super-fast.
      Simple real-time check is implemented.
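      What the _malloca mechanism looks like in use; the helper below is only an illustration of the pattern StackAllocator builds on, not its actual interface:

          #include <malloc.h>
          #include <string.h>

          // _malloca allocates on the stack for small sizes (falling back to the heap
          // for large ones). Allocation is nearly free, but the memory only lives
          // until the enclosing function returns, so its life cycle needs care.
          void BuildGreetingSketch(const char* name)
          {
              size_t len = strlen(name) + 16;
              char* buffer = (char*)_malloca(len);
              if (buffer != NULL)
              {
                  strcpy_s(buffer, len, "Hello, ");
                  strcat_s(buffer, len, name);
                  // ... use buffer ...
                  _freea(buffer);   // required counterpart of _malloca
              }
          }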
    • Memory – Third-parties
      TBBAllocator
      Intel Threading Building Blocks (TBB) library
      LFHAllocator
      Low-fragmentation Heap Allocator
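      Conceptually, LFHAllocator wraps the Win32 call that switches a heap to the low-fragmentation heap (on Windows Vista and later the LFH is usually on by default); this shows only the underlying mechanism, not Earlgrey's wrapper class:

          #include <windows.h>

          // Switch a Win32 heap to the low-fragmentation heap.
          bool EnableLowFragmentationHeapSketch(HANDLE heap)
          {
              ULONG mode = 2;   // 2 = low-fragmentation heap
              return HeapSetInformation(heap, HeapCompatibilityInformation,
                                        &mode, sizeof(mode)) != FALSE;
          }

          // Example: EnableLowFragmentationHeapSketch(GetProcessHeap());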
    • Memory – STL support
      x-containers/x-streams
      Using a global heap allocator (GreedyAllocator)
      Fast
      xwstring, xwstringstream, xvector, and so on
      auto-containers/auto-streams
      Using a stack allocator (StackAllocator)
      Super-fast
      Life cycle of instances should be carefully considered
      auto_wstring, auto_wstringstream, and so on
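      The x- and auto- families are formed by plugging a custom allocator into the standard containers. The allocator below only forwards to std::allocator so the sketch compiles; Earlgrey's real classes route to GreedyAllocator or StackAllocator, and the exact type names may differ:

          #include <string>
          #include <vector>

          // Placeholder pool allocator: a real implementation overrides
          // allocate()/deallocate() to use the pool instead of the default heap.
          template <typename T>
          class PoolAllocator : public std::allocator<T>
          {
          public:
              template <typename U> struct rebind { typedef PoolAllocator<U> other; };
              PoolAllocator() {}
              template <typename U> PoolAllocator(const PoolAllocator<U>&) {}
          };

          typedef std::basic_string<wchar_t, std::char_traits<wchar_t>,
                                    PoolAllocator<wchar_t> > xwstring_sketch;
          typedef std::vector<int, PoolAllocator<int> >      xvector_sketch;

          int main()
          {
              xwstring_sketch name(L"Earl Grey");   // behaves like std::wstring,
              xvector_sketch  counts(3, 0);         // but with the custom allocator plugged in
              return (int)(name.size() + counts.size());
          }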
    • Future tasks
      Complete feature set of asynchronous networking.
      Rich set of diagnostics
      Rolling log file/DebugOutput logging, and so on
      Integration with third-party libraries like log4cxx
      Performance tuning (Ex. detecting a heavy request)
      IO-bound thread group
      Administration tool
      Telnet-based tool for Win32 services
    • Credit
      Cover photo: http://www.flickr.com/photos/kankan/41403840