
The Architect's Two Hats



This presentation, given on the Software Architecture course at the University of Brunel, discusses the interplay between architecture and design: how the designer and architect are really different roles, and ones that often have competing goals.



  1. The Architect's Two Hats. Ben Stopford, Architect, High Performance Computing, RBS
  2. Once upon a time
  3. In a land far far away
  4. There was a designer
  5. And an architect
  6. And they didn't really get on
  7. OK, that's not entirely true, but they have competing objectives
  8. This is the story of the architect's two hats
  9. So what are the two hats all about?
  10. Design
     - The internal structure of software at a code level.
     - Controlling the complexity of the software as it grows.
     - Keeping it comprehensible and fungible.
     - Basically everything that Steve has been telling you so far.
  11. Architecture
     - The non-functional stuff that fights against design.
     - The stuff we do because we have to.
     - Because we need to build systems that are fast and resilient.
     - This includes everything from machine specifications to grid infrastructures.
  12. So let's look at how design works in industry
  13. We'll look at architecture for performance a bit later; right now let's look at design
  14. Software Design
  15. Why do we need to worry about Design?
     - Software evolves over time.
     - Unmanaged change leads to spaghetti code bases where classes are highly coupled to one another.
     - This makes them brittle and difficult to understand.
  16. So how do we avoid this?
  17. We design our solution well so that it is easy to understand and change
  18. And we all know how to do that, right? I'm going to play with you a bit now...
  19. We Design our System Up Front
  20. ...with UML
  21. AND... we get to spot problems early on in the project lifecycle. Why is that advantageous?
  22. Why? Because it costs less to get the design right early on: low cost of change early in the lifecycle
  23. So we have a plan for how to build our application before we start coding. All we need to do is follow the plan!
  24. Well... that's what we used to do... but problems kept cropping up...
  25. It was really hard to get the design right up front.
     - The problems we face are hard.
     - The human brain is bad at predicting all the implications of a complex solution up front.
     - When we came to implementing solutions, our perspective would inevitably change, and so would our design.
  26. And then... when we did get the design right... the users would go and change the requirements, and we'd have to redesign our model.
  27. Ahhh, those pesky users, why can't they make up their minds?
  28. So... in summary...
  29. We find designing up front hard...
  30. ...and a bit bureaucratic
  31. ...and when we do get it right the users generally go and change the requirements...
  32. ...and it all changes in the next release anyway.
  33. So are we taking the right approach?
  34. Software is supposed to be soft. That means it is supposed to be easy to change.
  35. So is fixing the design up front the right way to do it?
  36. But we've seen that changing the design later in the project lifecycle is expensive!
  37. So does the cost curve have to look like this?
  38. Can we design our systems so that we CAN change them later in the lifecycle?
  39. The Cost of Change in an Agile application
  40. How?
  41. By designing for change (but not designing for any specific changes) and writing lots and lots of tests
  42. Agile development facilitates this: a code base surrounded by tests, tests, tests
  43. Dynamic Design
     - Similar to up-front design, except that it is done little and often.
     - Design just enough to solve the problem we are facing now, and NO MORE.
     - Refactor it later when a more complex solution is needed.
  44. This means that your system's design will constantly evolve.
  45. So what does this imply for the Designer?
  46. It implies that the designer must constantly steer the application so that it remains easy to understand and easy to change.
  47. With everyone being responsible for the design.
  48. So the designer's role becomes about steering the application's design through others.
  49. Shepherding the team!
  50. Shepherding the team
     - People will always develop software in their own way. You can't change this.
     - You use the techniques you know to keep the team moving in the right direction.
     - Occasionally one will run off in some tangential direction, and when you see this you move them back.
  51. How?
     - Encourage preferred patterns.
     - Encourage reuse.
     - Software process
     - Communication
     - Frameworks
  52. In industry we see the design of a system evolve; it does not all happen up front.
     - We use agile methods to let us get away with changing things later.
     - We design for change using the patterns Steve has discussed: Separation of Concerns, interfaces, etc.
  53. Now we can switch to the architect's hat.
  54. Architecting for performance, and why it can fight against good design
  55. If speed is going to be a problem... then evolutionary design alone may not work
  56. We need to seed the design with clever architectural decisions to ensure we get the speed we need
  57. We use patterns and frameworks that increase speed, but often at the expense of a clear programming model; i.e. these patterns can obfuscate the implementation.
  58. How it fits together, a project timeline: set up the architecture; design (push patterns, reuse, etc.); amend the architecture
  59. So let's look at how we architect for speed
  60. Load Balancing
     - Ideal for applications that cleanly segregate into discrete pieces of work, e.g. commercial websites.
  61. Load Balancing: Horizontal Scalability
  62. Load balancing is a great way to get horizontal scalability without affecting the programming model much!
  63. But it is limited by the time taken for a single request; i.e. it allows us to handle more load, but does not allow us to reduce the time taken to execute a single process
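The routing idea behind these slides can be sketched in a few lines. This is a minimal round-robin balancer, not anything from the deck: the class name and server names are illustrative. It shows why load balancing scales throughput but not single-request latency, since each request still lands on exactly one machine.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin load balancer sketch (illustrative, not from the slides).
class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Each request goes to the next server in turn. Adding servers raises
    // total throughput, but any individual request is still handled by one
    // machine, so its latency is unchanged.
    String route() {
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }
}
```

Note the programming model is untouched: callers see one endpoint, and the balancer decides where the request lands.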
  64. What if we want to make a single (possibly long-running) process faster? Let's look at an example
  65. Sum the numbers between 1 and 10,000,000:

         int total
         for (n = 1 to 10,000,000) {
             total += n
         }

      A service has a hard limit on how fast it can execute this, defined by its clock speed. Load balancing won't speed this up.
  66. How do we speed up processing in process-intensive cases?
     - Exploit the parallelism of multiple cores or machines
  67. Make it faster, using threads and multiple cores:

         new Thread() {
             for (n = 1 to 100) {
                 total += n
             }
         }
         new Thread() {
             for (n = 101 to 200) {
                 total += n
             }
         }
         ...
  68. Larger Machines: Azul
     - Up to 768 processors in a single machine.
     - Each processor has up to 1GB local memory.
     - RAID 10 based disk access (redundancy for speed).
     - Customised JVM distributes processing over the available processors and handles garbage collection in hardware (why is this a problem?).
     - Expensive (£300-400K).
     - Still has a hard limit of 768 processors.
  69. This is scaling up
  70. But it costs
  71. But we can also scale out, i.e. lots of normal hardware linked together
  72. Scaling Out is Cost Effective: scaling out is much more cost effective per teraflop than scaling up.
  73. But it adds complexity to the programming model
  74. Key point: the role of an architect is to construct an application that performs, and that often means a distributed system. Distributed systems can be much harder to manage than those that run on a single machine. Why?
  75. Because standard computing models don't work in a distributed world.
  76. The Trials of Distributed Computing
  77. As we distribute our application across multiple machines, complexity is moved from hardware to software.
  78. The major problem is the lack of fast shared memory
  79. Why?
     - Read from memory: ~1-10 μs
     - Read across the wire: ~1-10 ms
     You cannot abstract such latencies away (why not?). You must architect around them.
  80. We can't ignore 'wire' time. We need to make sure programmers think about it.
  81. The complexity of shared memory has been moved from the hardware domain to the software domain. Simple problems like accessing other objects are now more time consuming.
  82. But let's step back a little and look at why we need to distribute processing
  83. Sum the numbers between 1 and 10,000,000:

         int total
         for (n = 1 to 10,000,000) {
             total += n
         }

      A service has a hard limit on how fast it can execute this
  84. To execute faster, batch for parallel execution:

      On machine (1):
         int total
         for (n = 1 to 100) {
             total += n
         }
      On machine (2):
         int total
         for (n = 101 to 200) {
             total += n
         }
  85. This leads to the concept of Grid Computing
     - Split the work into jobs with the data they need.
     - Send these jobs to n machines for processing.
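The split-into-jobs idea above can be sketched with a thread pool standing in for the grid: each submitted job carries its own range (its "data"), the pool plays the role of the n machines, and the partial results are gathered and combined. The class name, batch arithmetic, and the pool-as-grid substitution are all illustrative assumptions, not the deck's actual grid API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Grid-style batching sketch: split work into independent jobs, dispatch
// them to workers (a stand-in for grid nodes), and combine the results.
class GridSum {
    static long sum(long from, long to, int jobs) {
        ExecutorService grid = Executors.newFixedThreadPool(jobs);
        try {
            long chunk = (to - from + 1) / jobs;
            List<Future<Long>> results = new ArrayList<>();
            for (int j = 0; j < jobs; j++) {
                final long lo = from + j * chunk;
                final long hi = (j == jobs - 1) ? to : lo + chunk - 1;
                // each job is self-contained: it carries the range it needs
                results.add(grid.submit(() -> {
                    long s = 0;
                    for (long n = lo; n <= hi; n++) s += n;
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> r : results) total += r.get(); // gather partials
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            grid.shutdown();
        }
    }
}
```

The `Future` per job mirrors the send-code-receive-result round trip in the grid diagram that follows.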
  86. Parallel execution on a grid: the server sends code plus data to each of four grid nodes (code + data (1) through (4)) and receives back the results. Processing time is ¼ of the synchronous case, a 4x speedup (assuming processing time >> wire time)
  87. Grid computing solves the processing problem; i.e. we can do complex computations very quickly by doing them in parallel.
  88. But it complicates the programming model
  89. Example: Report Generation

         for (Report report : reports) {
             Data data = getDataFromDB(report);
             format(data);
             present(data);
         }
  90. ...so visually: a loop (for n) that gets data from the DB, formats it, and presents it
  91. ...but on the grid: a Grid Runner loops n times, submitting work to the grid asynchronously, and a grid callback later delivers each result for formatting and presenting; only part of the flow remains synchronous
  92. Note how this muddles the design: a simple loop has become a set of distributed invocations and asynchronous callbacks.
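To make the "loop becomes callbacks" point concrete, here is a sketch of the report pipeline in callback style, using `CompletableFuture` as a stand-in for a grid submission API. The `fetchData`/`format`/`present` methods are hypothetical placeholders (and return strings purely so the flow is observable); the deck does not prescribe this API.

```java
import java.util.concurrent.CompletableFuture;

// The simple report loop, rewritten as asynchronous stages. supplyAsync
// plays the role of dispatching work to the grid; each thenApply is a
// callback that fires when the previous stage's result arrives.
class AsyncReports {
    static CompletableFuture<String> run(String report) {
        return CompletableFuture
            .supplyAsync(() -> fetchData(report))   // "on the grid", off-thread
            .thenApply(AsyncReports::format)        // callback: data has arrived
            .thenApply(AsyncReports::present);      // callback: formatted, now present
    }

    // Hypothetical stages; each just tags the value so the pipeline is visible.
    static String fetchData(String report) { return report + ":data"; }
    static String format(String data)      { return data + ":formatted"; }
    static String present(String data)     { return data + ":presented"; }
}
```

Compare this with the original three-line loop body: the control flow is now spread across callbacks, which is exactly the obfuscation cost the slide is pointing at.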
  93. Grids allow us to process computations quickly through parallelism. But this leads to the problem of getting fast enough access to the data we want to operate on
  94. Problems With Data Bottlenecks
     - Data tends to exist in only one place (the DB).
     - This is a bottleneck in the architecture.
     - In a distributed architecture we must avoid bottlenecks; databases often represent one of these.
  95. Why is the database a bottleneck? The server sends code to each grid node and receives results, but every node must get its data from the single database
  96. Databases are slow(ish)
     - ACID (Atomic, Consistent, Isolated, Durable).
     - Must write to disk; this is slow.
     - What other options do you have?
  97. We can scale up by using memory, not disk: memory access is much faster than disk access.
  98. In-Memory Databases
     - In-memory databases do not have the overhead of writing to disk.
     - This makes them FAST.
     - However, they have drawbacks:
        - Durability: what happens if the power is cut or the application crashes? Is the data consistent?
        - Scalability of memory: you are limited to the memory on a single machine; the maximum size of a JVM, for example, is a little over 3GB (why?)
  99. But single-machine data stores don't scale, even if they are in memory: when every grid node gets its data from one in-memory database, that database is still a BOTTLENECK
  100. What we need is a distributed data source. Welcome to the world of distributed caching
  101. Distributed caching solves this problem by splitting the data over all servers: with five servers, each holds 1/5 of the data. This is parallel processing for data access; data requests are split across multiple machines.
  102. Now we have removed the data bottleneck: each grid node reads from a data fabric in which the data is spread evenly (1/6 on each of six nodes)
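The splitting step needs a rule for deciding which server owns which piece of data. A common approach (illustrative here, not from the deck, which names no scheme) is to hash the key and take it modulo the node count, so both the data and the requests for it spread across all machines:

```java
// Hash-partitioning sketch: derive a key's owning cache node from its
// hash. The node count and string keys are illustrative assumptions.
class CachePartitioner {
    private final int nodes;

    CachePartitioner(int nodes) {
        this.nodes = nodes;
    }

    // Stable key -> node mapping: the same key always routes to the same
    // node, and distinct keys spread roughly evenly over all nodes.
    int nodeFor(String key) {
        return Math.floorMod(key.hashCode(), nodes);
    }
}
```

Real distributed caches typically refine this with consistent hashing so that adding or removing a node relocates only a fraction of the keys, but the routing idea is the same.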
  103. This gives us access to a fast shared memory across multiple machines
  104. We are now massively parallel, with lightning-fast data access.
  105. But we can get faster than this. How?
  106. Superimpose the compute and data fabrics into one entity: the server sends code straight to the nodes of the data fabric (1/6 of the data on each) and receives the results, so computation runs where the data lives
  107. So, to speed up a system:
     - Load balance services.
     - Run compute-intensive tasks in parallel on a grid.
     - Avoid data bottlenecks with distributed caching.
  108. So remember the two hats
  109. The design hat:
     - In industry we use Agile methods to evolve the system's design; it does not all happen up front.
     - We accept change as part of our world and use the patterns Steve has discussed to control it: Separation of Concerns, interfaces, etc.
  110. The architecture hat:
     - But architecting for speed means distributed computing.
     - Distributed/parallel computing adds significant complexity.
     - It is hard to factor in later (although not impossible).
  111. The trick is balancing them! How much architecture is enough?
  112. But fundamentally, you need them both!
  113. Thanks. Slides: Vacation Placements: [email_address]