Architecting Solutions for the Manycore Future


This talk will focus solution architects on thinking about parallelism when designing applications and solutions, specifically Threads vs. Tasks using the TPL, LINQ vs. PLINQ, and object-oriented versus functional programming techniques. This talk will also compare programming languages, how languages differ when dealing with manycore programming, and the different advantages of these languages. Demonstrations include C#, VB, and F# features for functional programming, LINQ, and the TPL. A demonstration of the Concurrency Visualizer in Visual Studio 2010 will also be included.

Published in: Technology
  • ENUF = Elements Needed Up Front

    1. Architecting Solutions for the Manycore Future
       Talbott Crowell
       ThirdM
    2. Abstract
       This talk will focus solution architects on thinking about parallelism when designing applications and solutions:
       Threads vs. Tasks using the TPL
       LINQ vs. PLINQ
       Object Oriented vs. Functional Programming
       This talk will also compare programming languages, how languages differ when dealing with manycore programming, and the different advantages of these languages.
    3. Patrick Gelsinger, Intel VP
       February 2001, San Francisco, CA
       2001 IEEE International Solid-State Circuits Conference (ISSCC)
       If scaling continued at its then-present pace, by 2005 high-speed processors would have the power density of a nuclear reactor; by 2010, a rocket nozzle; and by 2015, the surface of the sun.
       Intel stock dropped 8% the next day.
       "Business as usual will not work in the future."
    4. The Power Wall: CPU Clock Speed
       (chart: Single core -> Multicore -> Manycore)
       From Katherine Yelick's "Multicore: Fallout of a Hardware Revolution"
    5. The Power Wall
       In 1965, Gordon Moore predicted exponential growth in the number of transistors per chip, based on the trend from 1959 to 1965. This trend continues even today.
       Effect of the Power Wall: clock frequencies continued to increase exponentially until they hit the power wall in 2004 at around 3 to 4 GHz:
       1971, Intel 4004 (first single-chip CPU) – 740 kHz
       1978, Intel 8086 (origin of x86) – 4.77 MHz
       1985, Intel 80386DX – 16 MHz
       1993, Pentium P5 – 66 MHz
       1998, Pentium II – 450 MHz
       2001, Pentium III (Tualatin) – 1.4 GHz
       2004, Pentium 4F – 3.6 GHz
       2008, Core i7 (Extreme) – 3.3 GHz
       Enter Manycore: Intel is now doubling cores, along with other improvements, to continue to scale.
    6. Agenda: Manycore Future
       Manycore: What is it? Why should I care? What do we do about it?
       Frameworks: Task Parallel Library (Reactive Extensions and .NET 4)
       Languages, paradigms, and language extensions: F#, functional programming, LINQ, PLINQ
       Tools: Visual Studio 2010 Tools for Concurrency
    7. What is Manycore?
    8. Manycore, What is it?
       Single core: 1 processor per chip die (1 socket). Many past consumer and server CPUs, and some current CPUs for lightweight, low-power devices. This includes CPUs that support hyperthreading, though that is a grey area.
       Multicore: 2 to 8 cores per chip/socket.
       AMD Athlon 64 X2 (first dual-core desktop CPU, released in 2005)
       Intel Core Duo, 2006 (32-bit, dual core, for laptops only); the Core Solo was a dual-core chip with one core disabled
       Intel Core 2 (not a core count, but a brand for the 64-bit architecture): Core 2 Solo (1 core), Core 2 Duo (2 cores), Core 2 Quad (4 cores)
       Manycore: more than 8 cores per chip. Currently prototypes and R&D.
    9. Multicore trends from servers to gaming consoles
       High-end servers 2001-2004:
       IBM servers, 2001 – IBM POWER4 PowerPC for AS/400 and RS/6000, the "world's first non-embedded dual-core processor"
       Sun servers, 2004 – UltraSPARC IV, the "first multicore SPARC processor"
       Desktops/laptops 2005-2007:
       AMD Athlon 64 X2 (Manchester), May 2005, the "first dual-core desktop CPU"
       Intel Core Duo, Jan 2006
       Intel Pentium (Allendale) dual core, Jan 2007
       Windows servers 2005-2006:
       Intel Xeon (Paxville) dual core, Dec 2005
       AMD Opteron (Denmark) dual core, March 2006
       Intel Itanium 2 (Montecito) dual core, July 2006
       Sony PlayStation 3, 2006: 9-core Cell processor (only 8 operational); the Cell architecture was jointly developed by Sony, Toshiba, and IBM
    10. Macintosh multicore trend
        Power Mac G5, mid 2003: 2 x 1 core (single core) IBM PowerPC 970
        Mac Pro, mid 2006: 2 x 2 core (dual core) Intel Xeon (Woodcrest)
        Mac Pro, early 2008: 2 x 4 core (quad core) Intel Xeon (Harpertown)
        In 5 years the number of cores doubled twice on Apple's high-end graphics workstation: from 2 to 4 to 8.
    11. 48-Core Single-chip Cloud Computer (SCC)
        The chip is designed purely for research efforts at the moment, according to an Intel spokesperson.
        "There are no product plans for this chip. We will never sell it so there won't be a price for it," the Intel spokesperson noted in an e-mail. "We will give about a hundred or more to industry partners like Microsoft and academia to help us research software development and learn on a real piece of hardware, [of] which nothing of its kind exists today."
        Microsoft said it had already put the SCC into its development pipeline so it could exploit it in the future.
    12. Why should I care? (about Manycore)
    13. Manycore, Why should I care?
        Hardware is changing, and programming needs to change to take advantage of the new hardware.
        Concurrent programming is a paradigm shift in both designing and developing applications.
    14. Concurrent Programming
        "The computer industry is once again at a crossroads. Hardware concurrency, in the form of new manycore processors, together with growing software complexity, will require that the technology industry fundamentally rethink both the architecture of modern computers and the resulting software development paradigms."
        Craig Mundie, Chief Research and Strategy Officer, Microsoft Corporation, June 2008
        First paragraph of the foreword of Joe Duffy's preeminent tome "Concurrent Programming on Windows"
    15. "There's not a moment to lose!"
        Excerpt from Mark Reinhold's blog post, November 24, 2009:
        The free lunch is over. Multicore processors are not just coming—they're here.
        Leveraging multiple cores requires writing scalable parallel programs, which is incredibly hard.
        Tools such as fork/join frameworks based on work-stealing algorithms make the task easier, but it still takes a fair bit of expertise and tuning.
        Bulk-data APIs such as parallel arrays allow computations to be expressed in terms of higher-level, SQL-like operations (e.g., filter, map, and reduce) which can be mapped automatically onto the fork-join paradigm.
        Working with parallel arrays in Java, unfortunately, requires lots of boilerplate code to solve even simple problems.
        Closures can eliminate that boilerplate. "It's time to add them to Java."
    16. "The Free Lunch Is Over"
        Herb Sutter, 2005
        Programs are no longer doubling in speed every couple of years for free.
        We need to start writing code that takes advantage of many cores.
        Doing so is currently painful and problematic because of shared memory, locking, and other imperative programming techniques.
    17. Should you be concerned?
        Is this just hype? Another Y2K scare?
        Fact: CPUs are changing, and programmers will learn to exploit the new architectures. Will you be one of them?
        Wait and see? You could just wait and let the tools catch up so you don't have to think about it. Will that strategy work?
    18. The Core Problem
        Tools or frameworks alone will not solve the manycore problem.
        Imperative programming (C, C++, VB, Java, C#) has inherent limitations when scaling in parallel: it requires locks and synchronization code to handle shared-memory reads and writes, and that code is not trivial and is difficult to debug.
        Tools and frameworks may help, but really taking advantage of them requires a different approach to the problem (a different paradigm).
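To make the shared-memory problem concrete, here is a minimal C# sketch (a hypothetical counter example, not from the talk): without the lock, the read-modify-write inside `counter++` can interleave across cores and silently lose increments.

```csharp
using System;
using System.Threading.Tasks;

class SharedStateDemo
{
    static int counter = 0;
    static readonly object gate = new object();

    static void Main()
    {
        // counter++ is three steps (read, add, write); without the lock,
        // parallel iterations can interleave those steps and lose updates.
        Parallel.For(0, 100000, i =>
        {
            lock (gate)
            {
                counter++;
            }
        });
        Console.WriteLine(counter); // 100000 with the lock; often less without it
    }
}
```

Removing the `lock` makes the program nondeterministic, which is exactly the class of bug the slide calls difficult to debug.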
    19. Will it affect all programmers?
        Some frameworks are designed to be single threaded, such as ASP.NET.
        Best practices for ASP.NET applications recommend avoiding spawning new threads: ASP.NET and IIS handle the multithreading and multiprocessing needed to take advantage of the many processors (and now many cores) on web servers and application servers.
        Will this best practice remain true, even when server CPUs have hundreds or thousands of cores?
    20. What do we do about it? (How do we prepare for Manycore)
    21. Understand the Problem Domain
        Identify where the dependencies are and where you can parallelize.
        Understand the tools, techniques, and approaches for solving the pieces, then put them together to understand overall performance.
        Build a POC (proof of concept); test, test, test.
        Set performance goals up front.
    22. Manycore, What do we do about it?
        Frameworks: Task Parallel Library (TPL); Reactive Extensions for .NET 3.5 (Rx), previously called Parallel Extensions or PFx; baked into .NET 4
        Programming paradigms, languages, and language extensions: functional programming, F#, LINQ and PLINQ
        Tools: Visual Studio 2010 Tools for Concurrency
    23. Parallel Programming Concepts
        Parallelism vs. Concurrency
        Task vs. Data Parallelism
    24. Parallelism vs. Concurrency
        Concurrency (concurrent computing): many independent requests, as separate processes that may be executed in parallel. A web server exhibits concurrency even on a multi-threaded single-core CPU. More general than parallelism.
        Parallelism (parallel computing): processes are executed simultaneously, which is only possible with multiple processors or multiple cores.
        Yuan Lin compares them to black-and-white vs. color photography: one is not a superset of the other.
    25. Task vs. Data Parallelism
        Task parallelism (aka function parallelism or control parallelism): distributing execution processes (threads/functions/tasks) across different parallel computing nodes (cores).
        Data parallelism (aka loop-level parallelism): distributing data across different parallel computing nodes (cores), executing the same operation over every element of a data structure.
        See MSDN for .NET 4, Parallel Programming, Data/Task Parallelism.
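A rough C# illustration of the two kinds, sketched with the TPL's `Parallel` class (the operations themselves are hypothetical placeholders):

```csharp
using System;
using System.Threading.Tasks;

class ParallelismKinds
{
    static void Main()
    {
        // Task parallelism: different operations running side by side.
        Parallel.Invoke(
            () => Console.WriteLine("loading data"),
            () => Console.WriteLine("rendering ui"));

        // Data parallelism: the same operation applied to every element.
        int[] values = { 1, 2, 3, 4 };
        int[] squares = new int[values.Length];
        Parallel.For(0, values.Length, i => squares[i] = values[i] * values[i]);
        Console.WriteLine(string.Join(",", squares)); // 1,4,9,16
    }
}
```

Note that the two `Parallel.Invoke` messages may print in either order; the data-parallel result is deterministic because each iteration writes its own slot.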
    26. Task Parallel Library
    27. Parallel Programming in the .NET Framework 4 Beta 2 – TPL
    28. How to use the TPL
        Reference System.Threading.
        Use Visual Studio 2010 or .NET 4. For Visual Studio 2008, download the unsupported version for .NET 3.5 SP1 from the Reactive Extensions for .NET (Rx).
        Create a Task:

        FileStream fs = new FileStream(fileName, FileMode.CreateNew);
        var task = Task.Factory.FromAsync(
            fs.BeginWrite, fs.EndWrite, bytes, 0, bytes.Length, null);
    29. Task Parallelism with the TPL
        Use the Task class:

        // Create a task and supply a user delegate
        // by using a lambda expression.
        var taskA = new Task(() => Console.WriteLine("Hello from taskA."));
        // Start the task.
        taskA.Start();
        // Output a message from the calling thread.
        Console.WriteLine("Hello from the calling thread.");
    30. Getting a return value from a Task
        Use Task<TResult>:

        Task<double>[] taskArray = new Task<double>[]
        {
            Task<double>.Factory.StartNew(() => DoComputation1()),
            // May be written more conveniently like this:
            Task.Factory.StartNew(() => DoComputation2()),
            Task.Factory.StartNew(() => DoComputation3())
        };
        double[] results = new double[taskArray.Length];
        for (int i = 0; i < taskArray.Length; i++)
            results[i] = taskArray[i].Result;
    31. Tasks vs. Threads
        A Task resembles a new thread or a ThreadPool work item, but at a higher level of abstraction.
        Tasks provide two primary benefits over threads:
        More efficient and scalable use of system resources.
        More programmatic control than is possible with a thread or work item.
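One example of that extra programmatic control is chaining a continuation onto a task's result, something raw threads have no direct equivalent for. A minimal sketch (not from the slides):

```csharp
using System;
using System.Threading.Tasks;

class ContinuationDemo
{
    static void Main()
    {
        // Start a task that produces a value.
        Task<int> compute = Task<int>.Factory.StartNew(() => 6 * 7);

        // Chain follow-up work to run when the antecedent finishes,
        // without blocking a thread while waiting for it.
        Task print = compute.ContinueWith(t => Console.WriteLine(t.Result));
        print.Wait(); // prints 42
    }
}
```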
    32. Tasks
        Behind the scenes, tasks are queued to the ThreadPool, which is now enhanced with algorithms (like hill climbing) that determine and adjust the number of threads to maximize throughput.
        Tasks are relatively lightweight: you can create many of them to enable fine-grained parallelism. To complement this, widely known work-stealing algorithms are employed to provide load balancing.
        Tasks and the framework built around them provide a rich set of APIs that support waiting, cancellation, continuations, robust exception handling, detailed status, custom scheduling, and more.
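Cancellation, for instance, is cooperative: the task observes a token and stops itself. A small sketch (the worker loop is hypothetical) of the pattern:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CancellationDemo
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        CancellationToken token = cts.Token;

        Task worker = Task.Factory.StartNew(() =>
        {
            while (true)
            {
                // Cooperative cancellation: the task checks the token and
                // throws OperationCanceledException when asked to stop.
                token.ThrowIfCancellationRequested();
                Thread.Sleep(10);
            }
        }, token);

        cts.Cancel();
        try { worker.Wait(); }
        catch (AggregateException) { /* contains TaskCanceledException */ }
        Console.WriteLine(worker.IsCanceled || worker.IsFaulted);
    }
}
```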
    33. Data Parallelism with the TPL
        Instead of:

        for (int i = 0; i < matARows; i++) {
            for (int j = 0; j < matBCols; j++) {
                ...
            }
        }

        Use:

        Parallel.For(0, matARows, i => {
            for (int j = 0; j < matBCols; j++) {
                ...
            }
        }); // Parallel.For
    34. TPL Summary
        Use Tasks, not Threads.
        Use Parallel.For in data parallelism scenarios.
        Or use Async Workflows from F#, or PLINQ, both covered later.
    35. Functional Programming
    36. Functional programming has been around a long time (over 50 years)
        1930s: lambda calculus (the roots)
        1956: IPL (Information Processing Language), "the first functional language"
        1958: LISP, "a functional flavored language"
        1962: APL (A Programming Language)
        1973: ML (Meta Language)
        1983: SML (Standard ML)
        1987: Caml (Categorical Abstract Machine Language) and Haskell
        1996: OCaml (Objective Caml)
        2005: F# introduced to the public by Microsoft Research
        2010: F# is "productized" in the form of Visual Studio 2010
    37. Functional programming is safe
        Most functional languages encourage programmers to avoid side effects; Haskell (a "pure" functional language) restricts side effects with its static type system.
        A side effect modifies some state, or has an observable interaction with calling functions or with the outside world.
        Example: a function or method with no return value exists only for its side effects.
    38. Language Evolution (Simon Peyton Jones)
        C#, VB, Java, and C are imperative programming languages: very useful, but they can change the state of the world at any time, creating side effects.
        Haskell is very safe but, in this framing, not very useful: it is used heavily in research and academia, but rarely in business.
        Nirvana is useful and safe; F# aims for that middle ground.
    39. Side Effect
        A side effect is when a function changes the state of the program:
        Writing to a file (that may be read later)
        Writing to the screen
        Changing the values of variables in memory (global variables or object state)
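A small C# illustration of the distinction (the functions are hypothetical, not from the talk): the impure function's effect depends on call history, while the pure one depends only on its argument, which is what makes it safe to reorder or parallelize.

```csharp
using System;

class SideEffectDemo
{
    static int total = 0;

    // Impure: mutates shared state, so the program's behavior
    // depends on call order and is unsafe to parallelize naively.
    static void AddTo(int x) { total += x; }

    // Pure: the result depends only on the argument, so calls can be
    // reordered or run in parallel without changing the outcome.
    static int Square(int x) { return x * x; }

    static void Main()
    {
        AddTo(1);
        AddTo(2);
        Console.WriteLine(total);     // 3: observable change of program state
        Console.WriteLine(Square(5)); // 25: no state touched
    }
}
```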
    40. Functional Programming
        Compare SQL to your favorite imperative programming language: when you write a statement to store and query your data, you don't need to specify how the system stores the data at a low level (example: table partitioning).
        LINQ is an example of bringing functional programming to C# and VB through language extensions.
    41. Efficient Multicore Programming
        Use lots of processes.
        Avoid side effects.
        Avoid sequential bottlenecks.
        Write "small messages, big computations" code.
        Source: Joe Armstrong's "Programming Erlang: Software for a Concurrent World", Section 20.1, "How to Make Programs Run Efficiently on a Multicore CPU"
    42. F#
    43. What is F#
        A functional language developed at Microsoft Research by Don Syme and his team, who productized generics.
        Based on OCaml (influenced by C# and Haskell).
        History:
        2002: F# language design started
        2005 January: F# 1.0.1 released to the public; not a product; integration with VS2003; works in .NET 1.0 through .NET 2.0 beta, and Mono
        2005 November: F# 1.1.5 with VS 2005 RTM support
        2009 October: VS2010 Beta 2; CTP for VS2008 and non-Windows users
        2010: F# is "productized" and baked into VS 2010
    44. F# is not just Functional
        F# is multi-paradigm:
        Functional programming
        Imperative programming
        Object-oriented programming
        Language-oriented programming
    45. Parallel Computing at PDC09 (landscape diagram)
        Tools: Visual Studio 2010 Parallel Debugger Windows; Profiler Concurrency Analysis
        Managed languages: Axum, Visual F#
        Managed libraries: Rx, Task Parallel Library, Parallel LINQ, DryadLINQ, data structures
        Native libraries: Async Agents Library, Parallel Pattern Library, data structures
        Runtimes: Managed Concurrency Runtime (ThreadPool); Native Concurrency Runtime (Task Scheduler, Resource Manager)
        Microsoft Research: race detection, fuzzing
        Operating system: Threads, UMS Threads, HPC Server, Windows 7 / Server 2008 R2
        Key: Research/Incubation vs. Visual Studio 2010 / .NET 4
    46. Why another language?
        Functional programming has been around a long time: it is not new, and it has a long history.
        Functional programming is safe: a real concern as we head toward manycore and cloud computing.
        Functional programming is on the rise.
    47. Parallel Programming in the .NET Framework 4 Beta 2 – PLINQ
    48. F# and Multi-Core Programming
        From the F# FAQ: "F# is, technically speaking, neutral with respect to concurrency - it allows the programmer to exploit the many different techniques for concurrency and distribution supported by the .NET platform."
        Functional programming is a primary technique for minimizing and isolating mutable state.
        Asynchronous workflows make it possible to write parallel programs in a "natural and compositional style".
    49. Key Characteristics of F#
        Interactive scripting: good for prototyping
        Succinct: less code
        Type inference: strongly typed and strict (no dynamic typing), automatic generalization (generics for free), few type annotations
        First-class functions (currying, lazy evaluation)
        Pattern matching
    50. Concurrent Programming with F#
    51. Luke Hoban at PDC 2009, F# Program Manager
    52. Demo – Imperative sumOfSquares
    53. Two Problems Parallelizing Imperative Code
        It is difficult to turn existing sequential code into parallel code: you must modify large portions of the code to use threads explicitly.
        Using shared state and locks is difficult: you must be careful to avoid race conditions and deadlocks.
    54. Demo – Recursive sumOfSquares
    55. Functional Programming
        Declarative programming style: easier to introduce parallelism into existing code.
        Immutability by default: can't introduce race conditions, and it's easier to write lock-free code.
    56. Demo – Functional sumOfSquares
    57. Parallel Extensions to F#
        "From Seq to PSeq" – Matthew Podwysocki's blog
        "Adding Parallel Extensions to F# for VS2010 Beta 2" – Talbott Crowell's developer blog
    58. Demo – Parallel sumOfSquares
    59. F# Parallel Programming Options
        Asynchronous Workflows
        Control.MailboxProcessor
        Task-based programming using the TPL
        Reactive Extensions (Rx)
        "The Reactive Extensions can be used from any .NET language. In F#, .NET events are first-class values that implement the IObservable<out T> interface. In addition, F# provides a basic set of functions for composing observable collections, and F# developers can leverage Rx to get a richer set of operators for composing events and other observable collections." – S. Somasegar, Senior Vice President, Developer Division
    60. Demo of Image Processor
        Problem: resize a ton of images.

        let files = Directory.GetFiles(@"C:\images\original")
        for file in files do
            use image = Image.FromFile(file)
            use smallImage = ResizeImage(image)
            let destFileName = DestFileName("s1", file)
            smallImage.Save(destFileName)
    61. Asynchronous Workflows

        let FetchAsync(file:string) =
            async {
                use stream = File.OpenRead(file)
                let! bytes = stream.AsyncRead(int stream.Length)
                use memstream = new MemoryStream(bytes.Length)
                memstream.Write(bytes, 0, bytes.Length)
                use image = Image.FromStream(memstream)
                use smallImage = ResizeImage(image)
                let destFileName = DestFileName("s2", file)
                smallImage.Save(destFileName)
            }

        let tasks = [for file in files -> FetchAsync(file)]
        let parallelTasks = Async.Parallel tasks
        Async.RunSynchronously parallelTasks
    62. Tomas Petricek – Using Asynchronous Workflows
    63. LINQ
        Language-Integrated Query
    64. LINQ
        With LINQ, you declaratively specify what you want done, not how you want it done:

        var source = Enumerable.Range(1, 10000);
        var evenNums = from num in source
                       where Compute(num) > 0
                       select num;

        Versus:

        var source = Enumerable.Range(1, 10000);
        var evenNums = new List<int>();
        foreach (var num in source)
            if (Compute(num) > 0)
                evenNums.Add(num);
    65. What will happen if I put a counter in Compute(num)?

        var source = Enumerable.Range(1, 10000);
        var evenNums = from num in source
                       where Compute(num) > 0
                       select num;

        private static int Compute(int num) {
            counter++;
            if (num % 2 == 0) return 1;
            return 0;
        }
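The answer hinges on deferred execution: the query is only a description of the work, and Compute does not run until something enumerates the results. A runnable sketch of the slide's code with the counter made explicit:

```csharp
using System;
using System.Linq;

class DeferredExecution
{
    static int counter = 0;

    static int Compute(int num)
    {
        counter++;
        return num % 2 == 0 ? 1 : 0;
    }

    static void Main()
    {
        var source = Enumerable.Range(1, 10000);
        var evenNums = from num in source
                       where Compute(num) > 0
                       select num;

        // Nothing has executed yet: the query is just a description.
        Console.WriteLine(counter);          // 0

        // Enumerating triggers the work; Compute runs once per element.
        Console.WriteLine(evenNums.Count()); // 5000
        Console.WriteLine(counter);          // 10000
    }
}
```

Note that enumerating the query a second time would run Compute another 10000 times, which matters once PLINQ distributes that work across cores.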
    66. PLINQ (Parallel LINQ)
    67. Parallel Programming in the .NET Framework 4 Beta 2 – PLINQ
    68. PLINQ = Parallel LINQ
        With LINQ, you declaratively specify what you want done, not how. With PLINQ, you declaratively specify "as parallel", and under the hood the framework implements "the how" using the TPL and threads.

        var source = Enumerable.Range(1, 10000);
        var evenNums = from num in source
                       where Compute(num) > 0
                       select num;

        Becomes:

        var source = Enumerable.Range(1, 10000);
        var evenNums = from num in source.AsParallel()
                       where Compute(num) > 0
                       select num;
    69. System.Linq.ParallelEnumerable
        AsParallel(): the entry point for PLINQ. Specifies that the rest of the query should be parallelized, if that is possible.
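ParallelEnumerable exposes further knobs alongside AsParallel; a small sketch (hypothetical data, not from the slides) using AsOrdered to preserve source order and WithDegreeOfParallelism to cap the cores used:

```csharp
using System;
using System.Linq;

class PlinqDemo
{
    static void Main()
    {
        var source = Enumerable.Range(1, 10000);

        // AsParallel opts the query in; AsOrdered keeps results in source
        // order; WithDegreeOfParallelism limits how many cores are used.
        var evens = source.AsParallel()
                          .AsOrdered()
                          .WithDegreeOfParallelism(4)
                          .Where(n => n % 2 == 0)
                          .ToArray();

        Console.WriteLine(evens.Length); // 5000
        Console.WriteLine(evens[0]);     // 2 (order preserved)
    }
}
```

Without AsOrdered, the first element of the result is not guaranteed, since partitions may complete in any order.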
    70. Visual Studio 2010 Tools for Concurrency
    71. Stephen Toub at PDC 2009, Senior Program Manager on the Parallel Computing Platform
    72. Concurrency Visualizer in Visual Studio 2010
        Its views enable you to see how your multi-threaded application interacts with itself, the hardware, the operating system, and other processes on the host computer.
        It provides graphical, tabular, and textual data, and shows the temporal relationships between the threads in your program and the system as a whole.
    73. Use the Concurrency Visualizer to Locate
        Performance bottlenecks
        CPU underutilization
        Thread contention
        Thread migration
        Synchronization delays
        Areas of overlapped I/O
        and other info
    74. Concurrency Visualizer
        (screenshot: high level of contention during the Async run)
    75. Views
        CPU View
        Threads View (Parallel Performance)
        Cores View
    76. CPU View
        Async uses more of the CPUs/cores
        Sync uses 1 CPU/core
    77. Threads View
    78. Cores View
        Full test
        Close-up of Sync
        Close-up of Async
    79. More info on Asynchronous Workflows
        Tomas Petricek – F# Webcast (III.) – Using Asynchronous Workflows
        Luke Hoban – F# for Parallel and Asynchronous Programming
    80. More Research
        "The Landscape of Parallel Computing Research: A View from Berkeley 2.0" by David Patterson
        Parallel Dwarfs
    81. The Architect
        "The architect as we know him today is a product of the Renaissance." (1)
        "But the medieval architect was a master craftsman (usually a mason or a carpenter by trade), one who could build as well as design, or at least 'one trained in that craft even if he had ceased to ply his axe and chisel' (2)." (1)
        "Not only is he hands on, like the agile architect, but we also learn from Arnold that the great Gothic cathedrals of Europe were built, not with BDUF, but with ENUF." (3)
        (1) Dana Arnold, Reading Architectural History, 2002
        (2) D. Knoop & G. P. Jones, The Medieval Mason, 1933
        (3) Ian Cooper, "Architects: Back to the future?", 2008
    82. Thank you. Questions?
        Architecting Solutions for the Manycore Future
        Talbott Crowell
        Twitter: @Talbott and @fsug
        visit us at