Rails services in the walled garden

The slide deck for our RubyConf 2011 (New Orleans) talk.

Follow us on Lanyrd to get the video and other material: http://lanyrd.com/profile/ponnappa and http://lanyrd.com/profile/niranjan_p

Speaker notes
  • This talk is based primarily on our experience building a suite of 9 different RESTful web services, and multiple clients, to manage a data center.
  • These services were built to replace a monolithic app which had become a maintenance nightmare; adding any new feature to it was painful and time-consuming.
  • We’ve subsequently worked on other projects that involved building APIs, but the first remains the biggest.
  • Let us quickly tour the two parts of our talk: SOA, and Rails.
  • Service Oriented Architectures allow applications to be split into several self-contained services
  • along lines that match the different business verticals involved. Each such service should be usable by itself by the people of that vertical, while providing APIs for other services to integrate with in order to create organization-wide workflows.
  • Being focused on a business vertical helps limit the ripple effects caused by changes in a business requirement for a particular app, as long as the API remains stable. This makes a significant difference when building complex workflows specific to that vertical, because you no longer worry about other business verticals that you may not understand or care about. So long as your API is stable, you’re fine.
  • This allows independent evolution of each service based on the needs of the corresponding vertical.
  • Services can be deployed independently: so long as APIs are respected, teams no longer need to wait on other teams to release.
  • Only those services which see high traffic need to be scaled out.
  • Having multiple small teams working independently on separate codebases is much better than having one big team where everyone modifies the same codebase. It smooths out both development and deployment, since you no longer have to keep a significant portion of the entire app (if not the whole thing) in your head while incorporating change requests.
  • While the list continues, there are a few nuances you should pay attention to.
  • Services like to talk to other services, and the graph of HTTP requests can grow very quickly. This can potentially lead to...
  • ...performance bottlenecks. Every call to a service comes with all the overhead introduced by both HTTP and the framework.
  • Managing the user base across all services and granting users appropriate privileges.
  • Managing ACID (Atomic, Consistent, Isolated, Durable) transactions across distributed databases is complex, and even more so with distributed services.
  • This comes up in almost every discussion about building APIs. While it is important, in a walled garden the impact of API versioning can be curtailed, as you control both producer and consumer.
  • Continuous integration of APIs is a difficult business at best, with no existing open-source infrastructure to solve it for us.
  • These problems are generic in nature and common to any RESTful web services, not just Rails. Hence most of the gotchas we’ll discuss are generic, while the solutions may be more specific to Rails. But even before we go there, why should we develop these APIs in Rails?
  • Rails lends itself well to creating synchronous APIs.
  • Rails supports transparent format negotiation using the URL or the Accept header, and provides a mechanism to register custom formats.
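Rails does this negotiation for you via respond_to and Mime::Type.register; purely as an illustration (plain Ruby, not the actual Rails internals), the core of content negotiation boils down to picking the first supported media type from the client's Accept header:

```ruby
# Illustrative sketch of Accept-header negotiation, NOT Rails' real
# implementation. Picks the first media type the service supports,
# honouring the order of the client's Accept header.
SUPPORTED = {
  "application/xml"  => :xml,
  "application/json" => :json
}.freeze

def negotiate_format(accept_header, default = :xml)
  return default if accept_header.nil? || accept_header.strip.empty?

  accept_header.split(",").each do |entry|
    media_type = entry.split(";").first.strip.downcase  # drop q-values etc.
    return SUPPORTED[media_type] if SUPPORTED.key?(media_type)
  end
  default
end
```

For example, negotiate_format("text/html, application/xml;q=0.9") resolves to :xml, falling back to the default when nothing matches.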
  • It is a pleasure to write well engineered backend code in Ruby.
  • It doesn’t matter whether we use Rails or Sinatra or any other web framework; beautiful APIs can be built using Sinatra too.
  • Walled garden signifies that we control both producers and consumers, which allows us to establish conventions and potentially loosen the constraints.
  • Based on what’s demanded of the APIs and how much time/money is available, you can decide where to stop; RMM 2.5 is fairly easily done with Rails.
  • As you might have noticed, we have been talking about SOA and not REST all this while, because creating standard Rails web services does not necessarily mean creating RESTful web services. We’ll try to be careful about this during the course of this talk, but it’s worth remembering that much of what we are talking about involves building APIs with Rails *as it is today*. This means that RMM 3 cannot be achieved without significant effort, effort that is often unnecessary inside an enterprise.
  • Often you want to restrict access to the various APIs you are building to a limited set of users, even if they are internal services.
  • This authentication needs to span multiple services in order to support SSO.
  • It needs to allow a user to access multiple services behind the scenes to manage a workflow which spans multiple services.
  • The simplest way to achieve this is by restricting access to these services to a known range of IPs and allowing all internal communication.
  • This will work as long as it doesn’t matter who within the organization is accessing the services, which is rarely the case. The next logical step is to create a centralized auth server, which can be backed by any of the existing data sources such as LDAP or Active Directory.
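The IP-range approach can be sketched as a Rack-style middleware using Ruby's stdlib IPAddr (the class name and CIDR range below are illustrative assumptions, not from any gem):

```ruby
require "ipaddr"

# Sketch of a Rack-style middleware that only admits requests coming
# from an internal CIDR range. The name InternalOnly and the 10.0.0.0/8
# range are illustrative assumptions.
class InternalOnly
  INTERNAL = IPAddr.new("10.0.0.0/8")

  def initialize(app)
    @app = app
  end

  def call(env)
    ip = env["REMOTE_ADDR"].to_s
    if !ip.empty? && INTERNAL.include?(IPAddr.new(ip))
      @app.call(env)                                        # internal caller, pass through
    else
      [403, { "Content-Type" => "text/plain" }, ["Forbidden"]]
    end
  end
end
```

Wrapping any Rack app, e.g. `InternalOnly.new(->(env) { [200, {}, ["ok"]] })`, then rejects every request whose REMOTE_ADDR falls outside the range.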
  • OAuth2 provides a simple way to authenticate users against a centralized authentication system. There are multiple open-source implementations of an OAuth2 provider; you can choose any of them and tweak it to suit the specific requirements you might have.
  • With a decent HTTP library, an OAuth2 client can be hand-rolled in 30 minutes.
  • There’s an interesting catch though: out of the box, ActiveResource doesn’t allow custom headers on individual requests to a server, and OAuth2 operates on information passed through headers.
  • Which means, unless you are willing to open up ActiveResource and monkey-patch it to set authentication information on every outbound request, you might want to reconsider OAuth2, or (better yet) skip ActiveResource in favour of a friendlier library.
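The header-setting part of that hand-rolled client is tiny with plain Net::HTTP. A sketch (the helper name is ours; the bearer token would come from your OAuth2 provider's token endpoint):

```ruby
require "net/http"
require "uri"

# Sketch: attaching an OAuth2 bearer token to an outbound request with
# plain Net::HTTP -- exactly the per-request header ActiveResource won't
# let you set out of the box. authorized_get is a hypothetical helper.
def authorized_get(url, access_token)
  uri = URI.parse(url)
  request = Net::HTTP::Get.new(uri.request_uri)
  request["Authorization"] = "Bearer #{access_token}"
  request["Accept"] = "application/xml"
  # hand this to Net::HTTP.start(uri.host, uri.port) { |h| h.request(request) }
  request
end
```

The request object is built without being sent, so the same helper slots into whatever connection-pooling or caching layer your client uses.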
  • If you become an OAuth2 provider, it will be easier to provide authentication using any of the external OAuth2 providers such as Google, Twitter, or Facebook.
  • Now that we are restricting access to our services, the next step is to determine who can do what with these services. There are two broad levels at which we might want to control access.
  • Both these areas can easily be tackled with a role-based system. There are plenty of gems which allow you to specify access rules.
  • If we have pulled out a central authentication server, it is better to manage user roles centrally as well. We can query the central server to figure out what roles a user has in the context of a service.
  • But let every service manage its own access rules based on the roles returned by the user service. Managing authorization for all services in a central server can become messy, as every service can have a custom set of rules tied to its data.
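"Centralized roles, federated rules" can be boiled down to a few lines; the class below is a hypothetical sketch, not any particular authorization gem:

```ruby
# Sketch of "centralized roles, federated rules": the central user
# service tells us which roles a user has; each service keeps its own
# rule table mapping roles to permitted actions. All names hypothetical.
class AccessRules
  def initialize(rules)
    @rules = rules  # e.g. { "admin" => [:read, :write], "member" => [:read] }
  end

  # `roles` would come back from the central user service over HTTP.
  def permit?(roles, action)
    roles.any? { |role| @rules.fetch(role, []).include?(action) }
  end
end
```

Each service instantiates its own AccessRules with rules that make sense for its data, while the role list itself stays centrally managed.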
  • Services like to talk a lot, specifically among themselves. This can become a significant overhead. Consider the following scenario with two services.
  • The request graph can go wild with more services thrown into the mix. While it is difficult to reduce the chattiness between services, its impact can be reduced.
  • It is essential to set up a performance build early in the project and track the graph as services grow in number and complexity.
  • Set a performance target, say: the average GET request should take no more than 40ms.
  • There are going to be a lot of HTTP calls with small response payloads, so you might want to optimize for that.
  • To reduce the time taken to serve frequently queried and time-consuming requests, we can introduce various kinds of server-side caching.
  • Fragment or action caching can be used to optimize response times for resources which need authorized access.
  • For publicly available resources, page caching can be used to avoid the Rails stack entirely.
  • ETags can be used effectively to check whether a requested resource has been modified or not. The catch here is that the client has to implement caching and respect ETags.
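In Rails you would reach for fresh_when/stale? for this; as a framework-free illustration of the handshake itself, the server derives a validator from the body and answers 304 when the client's If-None-Match already carries it:

```ruby
require "digest/md5"

# Illustration of the ETag handshake (Rails provides fresh_when/stale?
# for the real thing). The helper name and Rack-style response triple
# are sketch conventions, not a library API.
def respond_with_etag(body, if_none_match)
  etag = %("#{Digest::MD5.hexdigest(body)}")
  if if_none_match == etag
    [304, { "ETag" => etag }, []]       # client's cached copy is still fresh
  else
    [200, { "ETag" => etag }, [body]]   # full response plus new validator
  end
end
```

A well-behaved client stores the ETag from the 200 response and replays it as If-None-Match, turning repeat fetches into cheap 304s.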
  • If possible you should use a client which supports client-side caching. So far we haven’t come across any client which does, so we started an open-source project to build one. It ended up as a Ruby Net::HTTP wrapper which implements RFC 2616.
  • If you are using ActiveResource as a client, this will be difficult to achieve without monkey-patching, as ActiveResource neither supports caching nor exposes request/response objects.
  • With any such library and ActiveModel it is possible to quickly hand-roll a simple client. It might not have a lot of the things ActiveResource supports out of the box, but those features can be introduced as and when needed.
  • Check out Varnish or Squid to introduce caching between services.
  • Introducing caching adds the overhead of figuring out when and how to expire the caches.
  • Expiring caches under the application’s control is far easier than expiring caches maintained by Squid or Varnish.
  • Paginating resources requires some metadata along with the array of resources, such as the total number of available resources and the number of resources in each page. This can be achieved either by exposing a collection resource or by adding additional attributes to the root node of the array XML. The latter is a better fit for ActiveResource.
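The root-node-attributes option can be sketched with stdlib REXML; the attribute names (total-entries, per-page, current-page) are illustrative, not a spec:

```ruby
require "rexml/document"

# Sketch of pagination metadata carried as attributes on the root node
# of the array XML. Attribute names are our assumed convention.
def paginated_xml(resource_name, records, total:, per_page:, page:)
  doc = REXML::Document.new
  root = doc.add_element(resource_name, "type" => "array",
                         "total-entries" => total.to_s,
                         "per-page" => per_page.to_s,
                         "current-page" => page.to_s)
  records.each do |attrs|
    el = root.add_element(resource_name.sub(/s\z/, ""))  # "projects" -> "project"
    attrs.each { |k, v| el.add_element(k.to_s).text = v.to_s }
  end
  doc.to_s
end
```

A client that understands the convention reads the metadata off the root element and still deserializes the child elements as a plain array, which is what makes this shape ActiveResource-friendly.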
  • If you’re using ActiveResource, what you really need is something like WillPaginate. Luckily, it so happens that we have just the thing...
  • So far we have been talking about caching HTTP responses between services. In certain scenarios it becomes essential to have a local cache of resources. Let’s consider the following scenario.
  • We have a user management service which has users across multiple companies,
  • and a project management service which has projects for individual users.
  • A new user story comes in: “As an admin user I want a paginated view of all projects for a given company.”
  • In a typical monolithic app this can easily be solved by firing a database join query.
  • As we don’t have the required information locally, we’ll be forced to access it over HTTP. It will involve multiple calls if the service returns paginated results. It adds the overhead of serialization and deserialization. And if there are a lot of users for a given company, the ‘in’ clause won’t work due to the limit on query string size. So on and so forth.
  • One way to solve this problem is by allowing services to share their data with other services.
  • If one service starts writing to the database of another service, it defeats the purpose of building separate services in the first place.
  • We can expose a read-only copy. We don’t recommend it, although it is an option to keep in mind; it works in certain scenarios as long as everyone on the team understands that it should not be abused.
  • For this we can either use the same database for all our services, or create a master-slave setup and read from the slave.
  • As we said, sharing a database connection is equivalent to integrating services at the database level, and it comes with a lot of problems.
  • Suddenly, services which share databases start relying on the internal representation of a resource instead of what is exposed at the service level.
  • Behind the API, resources might have computed fields which are not stored in the database, or might be split across multiple tables, and so on.
  • We also tie ourselves to the internal stack of the service, as different services might use different kinds of data stores depending on their needs.
  • Before we discuss how to tackle this, let’s talk about another problem.
  • Imagine these multiple services need to be notified when a particular user logs out of the system, so as to do a local cleanup.
  • One way to approach this problem is by allowing services to register callback URIs with the user management service, either through configuration or programmatically at bootstrap time.
  • This will work as long as we have only a handful of events to register against. But as we keep adding more events, and more services interested in listening to them...
  • a. response time for a simple action like logout increases due to the growing number of callbacks it has to invoke, and
  • b. the overall complexity of managing the callback configurations grows massively.
  • Obviously, by creating a background job for invoking callbacks we can guarantee a quick response.
  • We can use an MQ server for this, which provides an internal centralized bus for all services that want to broadcast messages.
  • It is a nice way of decoupling the producers and consumers of events. A producer can essentially fire an event and forget about it.
  • Any consumer interested in such events registers with the central bus for notifications, and is solely responsible for acting upon those events as it sees fit.
  • One thing to remember, though, is that these calls are asynchronous. We should not make two consecutive calls to a service expecting that a consumer of the first event has already received and processed it.
  • Establish a few conventions for the exchange name and topic name a service uses to propagate a particular type of event, so that consumers can easily register for such events without massive configuration.
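One such convention, sketched in plain Ruby (the naming scheme itself is our assumption; with an AMQP client you would publish to and bind against these names):

```ruby
# Sketch of a naming convention for events on the central bus: one topic
# exchange per producing service, routing keys of the form
# "<resource>.<event>". The scheme is an assumed convention, not a
# standard; an AMQP library would use these strings for exchanges and
# queue bindings.
module EventConvention
  def self.exchange_for(service)
    "#{service}.events"
  end

  def self.routing_key(resource, event)
    "#{resource}.#{event}"
  end

  # A consumer interested in every event on a resource binds with a
  # wildcard, e.g. "user.*".
  def self.binding_for(resource)
    "#{resource}.*"
  end
end
```

With this in place, the logout event lands on the "user_management.events" exchange with key "user.logout", and any interested service binds "user.*" without per-event configuration.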
  • A few slides back we spoke about having a local cache of resources, for which a shared database was one solution. If we implement an event system, we can use it to maintain a local cache of resources and use local database joins.
  • It’s a local cache and should be treated like one.
  • Due to the asynchronous nature of the system, this cache will not reflect the latest data; it might hold a slightly older version. It can safely be used for resources which don’t change frequently and for which eventual consistency is not a problem.
  • With caching comes the problem of cache expiry, for which a consumer can listen to update and delete events. Not to mention, the producer has to trigger these events with an appropriate payload for the consumer to modify its local cache.
  • Services evolve over a period of time and APIs change, but we still have to maintain backward compatibility, as clients depend on the contract of the API.
  • This is more or less a solved problem; all major APIs, such as GitHub, Twitter and Facebook, do it.
  • The only thing to keep in mind is that if we are developing both producers and consumers in a walled garden, we can deterministically predict the number of revisions any API needs to support. Going a step further, we can weigh the cost of upgrading all clients along with the changing API against the cost of introducing API versioning, and take a call on whether we want to support multiple versions at all.
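The two common places to carry the version are a URL prefix and a vendor media type in the Accept header. A sketch of resolving the requested version (the vendor media-type format shown is an assumption, not a standard Rails mechanism):

```ruby
# Sketch: resolving the requested API version from either a URL prefix
# (/v2/users) or a vendor media type in the Accept header. The
# "application/vnd.mycompany.vN" format is a hypothetical convention.
def api_version(path, accept_header, default: 1)
  if path =~ %r{\A/v(\d+)/}
    Regexp.last_match(1).to_i
  elsif accept_header.to_s =~ /application\/vnd\.mycompany\.v(\d+)/
    Regexp.last_match(1).to_i
  else
    default
  end
end
```

Routing then dispatches to the matching controller namespace; unversioned requests fall through to the default, which in a walled garden you can keep equal to the latest version.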
  • Implementing ACID transactions across multiple databases is a complex problem, and having to do so across services adds to the complexity.
  • There’s no framework in place which does this transparently. Databases solve this problem by introducing two-phase commit.
  • It’s a hard problem, and there is one solution we have seen working, though we haven’t been involved in its development. We’ll be happy to discuss it offline.
  • Run a local gem server and treat common code like any other library.
  • APIs are to be consumed by machines, web pages by humans; they have different requirements.
  • Rails services in the walled garden

    1. rails services in the walled garden
    2. niranjan paranjape / achamian / @niranjan_p
    3. sidu ponnappa / kaiwren / @ponnappa
    4. Engineering http://github.com/c42
    5. background
    6. suite of 9+ services
    7. replacing a monolithic legacy app
    8. moar APIs
    9. rails + SOA + walled garden
    10. assumptions
    11. structure
    12. why SOA?
    13. advantages!
    14. map to biz verticals
    15. self contained
    16. independent evolution
    17. independent deployment
    18. scale out only what is in demand
    19. easy to maintain
    20. smaller, independent codebases
    21. small teams
    22. disadvantages?
    23. chattiness
    24. performance!
    25. transparent authentication and/or authorization
    26. ACID
    27. API versioning
    28. continuous integration
    29. why rails?
    30. easy to create APIs
    31. powerful routing
    32. mime-type negotiation
    33. less boilerplate code
    34. we love Ruby
    35. what about sinatra & co.?
    36. walled garden?
    37. inside the garden is easy(er)
    38. full HATEOAS is expensive
    39. rails does RMM 2(.5)
    40. rails != REST
    41. areas of interest
    42. authentication
    43. across services
    44. transparent service orchestration
    45. stateless
    46. no cookies
    47. firewall
    48. central auth service
    49. OAuth 2
    50. OAuth 2?
    51. standard
    52. easy to implement
    53. sample client https://github.com/c42/wrest/tree/master/examples/facebook_auth
    54. ActiveResource
    55. ActiveResource + monkey patching
    56. all gardens have a gate
    57. external auth providers
    58. authorization
    59. user roles
    60. roles over HTTP
    61. authorization at service level
    62. centralized rules
    63. messy, fragmented
    64. each service is independent
    65. solution?
    66. centralized roles, federated rules
    67. chattiness
    68. Client / Service 1 / Service 2 / Auth Server (diagram)
    69. Client / Service 1 / Service 2 / Auth Server (diagram)
    70. Client / Service 1 / Service 2 / Auth Server (diagram)
    71. Client / Service 1 / Service 2 / Auth Server (diagram)
    72. Client / Service 1 / Service 2 / Auth Server (diagram)
    73. no silver bullet
    74. performance build
    75. GET request ≈ 40ms
    76. monitor trends
    77. smaller payloads
    78. common-sense
    79. caching
    80. HTTP 304 is your friend
    81. fragment/action caching
    82. page caching
    83. etags
    84. client side caching RFC 2616
    85. ActiveResource
    86. Wrest http://github.com/c42/wrest
    87. covering the middle
    88. cache expiry
    89. expiring Squid caches
    90. pagination
    91. default index action: ActiveRecord::Base.all
    92. 50k records?
    93. pagination is important
    94. pagination meta-data locations:
    95. HTTP headers
    96. XML tag attributes
    97. collection resources
    98. ActiveResource
    99. ActiveResource + WillPaginate
    100. PoxPaginate http://github.com/c42/pox_paginate
    101. local resources
    102. user management service (diagram: companies, each with users)
    103. project management service (diagram: projects and their users)
    104. all projects for a company
    105. database join
    106. http + database “in”
    107. shared database
    108. read only connection to db
    109. shared database connection, master slave
    110. immediate consistency
    111. easy to implement
    112. problems
    113. integrating services at database level
    114. broken encapsulation
    115. computed fields
    116. assumes 1:1 mapping between db records and resources
    117. exposes internal representations
    118. datastore
    119. observer pattern
    120. post logout event
    121. User Management Service / Time and Expenses Tracking Service / Project Management Service / Internal Communication Service (diagram)
    122. callback URIs
    123. callback hell
    124. complex configuration
    125. response time
    126. async callbacks
    127. MQ
    128. centralized bus
    129. register listeners
    130. async out of the box
    131. convention over configuration
    132. local resource cache
    133. it’s in the db
    134. it makes joins cheap
    135. but...
    136. it’s a cache
    137. not realtime
    138. listen to update/delete
    139. API versioning
    140. partially solved
    141. in a walled garden
    142. atomic transactions
    143. two phase commit
    144. offline
    145. engineering
    146. standardization
    147. treat common code like any other library
    148. api vs html
    149. configuration management
    150. DON’T commit configuration
    151. let Chef/Puppet manage it
    152. that’s it!
    153. Q & 42: Niranjan Paranjape (@niranjan_p, github: achamian), Sidu Ponnappa (@ponnappa, github: kaiwren), C42 Engineering
