Formal methods guy. MSR: AsmL, Spec#, Spec Explorer. Windows: improving Microsoft documentation (open documents) resulting from regulatory scrutiny of Microsoft; using model-based testing on a large scale in a project with over 300 person years of test effort. Since April: engineer @ Google, currently working on the G+ platform and tools. Just graduated from what they call a ‘Noogler’ (new Googler) ramping up on Google technology. To be clear: what I’m saying here is my personal viewpoint, not necessarily in line with Google’s official opinion or direction. My intent is not to show you research results or technologies as they come out of Google. Rather, I’m trying to extract some challenges and opportunities for runtime verification research from my personal viewpoint on how things work @ Google. Other people @ Google or in the industry might have totally different viewpoints. Also note that some of the technology used in the Google stack is confidential, while other parts are well-known or even open source. When it comes to the confidential parts, I will need to abstract and extract the underlying concepts. So don’t take me literally when talking about the Google stack.
I will generally talk about what cloud computing is and what the market looks like. There is a lot of confusion about cloud computing, and I’ll try to set some scope. This may be old news for many, but perhaps not for all. The largest part of this talk will be about monitoring (or RV, or RA, or however you want to call it) the cloud. This will present how a data center works and how Google uses monitoring techniques to run it. It will pinpoint where there are, IMO, no issues, as the existing technologies work surprisingly well, and where I see gaps and challenges for research. I will briefly talk about testing the cloud. The biggest challenge here is integration testing, for which I suggest MBT. In this context I will also briefly talk about my experience with MBT when working for MS. I will dig into how the cloud can actually be exploited for software development. There are exciting opportunities here waiting to be applied. I will insert a small plug about a personal vision or dream around languages and specification. Conclusion.
Computation as a service. It’s like a utility, based on shared resources.
This is the picture you find on Wikipedia. It gives an idea of the components: applications/software, platform, infrastructure (hardware). We dig into some of this in detail later on.
Some properties of cloud computing as a user experiences it.
Let me talk a little bit about the cloud stack from the market perspective. Most of the information I present here is extracted from an excellent article in the Economist (including this nice picture).
The three segments of the cloud stack: SAAS, PAAS (‘parse’), IAAS (‘eye-ass’).
[Pronounced ‘eye-ass’.] Provides the hardware layers (data centers). For Google, one large one is for example in The Dalles, Oregon: the size of two football fields, cooling towers four stories high, etc. A very important property is homogeneity of the hardware, often also achieved by using VMs. That makes it possible to migrate jobs within and between DCs. There are a number of big players. The actual size of the DCs (in number of machines) is highly confidential for each player. Guess why? … A big issue in IAAS is the allocation problem, i.e. how many machines are required to provide certain services. The various players have their own ‘black magic’ to compute this. The revenue numbers are taken from the Economist article. The actual revenue is not disclosed directly by the players, so this is an estimate by the article’s author, and I have no idea about its accuracy (in particular, this is not a number from Google!). The number actually looks relatively small. This may be related to the fact that IAAS is not usually sold ‘as is’; rather, the actual end products, PAAS or SAAS, are sold.
[Pronounced ‘parse’.] This is basically the operating system plus frameworks and development tools. There are not too many players in this space. For Google, it’s the App Engine framework, which allows you to create and place applications in the Google cloud. For Microsoft, it’s Azure and Visual Studio. Estimated revenue of the whole market: again relatively small. For companies like MS, the platform is more a strategic investment which pays off indirectly.
[Pronounced ‘SARS’.] Now this is how a user actually experiences the cloud. There are many players here. This business segment is the largest.
After this introduction, let us get a bit more technical and talk about monitoring of the cloud
The notions monitoring, RV, RA, and testing are often used to name similar or related things. What are the differences? Here is an attempt at a definition. However, in practice the boundaries are not so clear, and in particular at Google, ‘monitoring’ is often used where other people would say RV, so I will identify RV with monitoring. Testing is still a subject by itself, though there is a lot of overlap.
Let’s take a closer look at how a DC actually works. If someone sends a request to a domain like google.com, the first thing that happens is that a regional DNS resolves this to the particular DC closest to the location. There it reaches some controller, which forwards the request through a kind of hierarchy until a particular server (VM) is reached which handles the request. Note that high-performing NFS is very important in a DC, so machines share common storage. Machines may be further organized in racks which may have certain replicated resources for shared storage.
A server (or VM) usually runs a number of jobs (processes). Certain jobs which interact heavily with each other may be arranged to run on the same server. When it comes to monitoring, each job (or group of jobs) usually has an associated dedicated process which monitors the health of this job. The monitors collect data and can send alerts to user instances in the system, the alert managers.
A service (in contrast to a server) is about actually serving the initial user request. It usually splits the task into sub-requests which are served by other jobs, often called backends. This is the major source of complexity in managing this kind of software. For certain activities, there may be hundreds of jobs involved to get one request finally served.
Now let’s take a closer look at how monitoring works @ Google. Black box: checks the health and basic functionality. White box: provides access to the internal state of a job, collecting time series of data. Log analysis: processes logged data after the fact.
Let’s look at what the issues with BB monitoring are (if any). Where does it work, and where not?
This is one problem. The worlds of testing and of BB monitoring are largely disjoint. This may be partly because of the originally different engineering disciplines of operations engineers (called SREs) and software engineers. It would be nice to run some of the monitoring rules already at test time, and it would be nice to run some of the test cases at monitoring time. It’s a matter of setting up a framework like JUnit to decorate test cases for monitoring and provide other required metadata. Not really rocket science, though.
Monitor what actually goes wrong when a request is serviced, following its whole way through the topology. It is actually not that important to catch failures; this works already. It may be more important to analyze the causes of failures and potentially prevent them before things crash.
Runtime Analysis in the Cloud:Challenges and Opportunities<br />Wolfgang Grieskamp<br />Software Engineer, Google Corp<br />
About Me <br />< 2000: Formal Methods Research Europe (TU Berlin)<br />2000-2006: Microsoft Research: Languages and Model-Based Testing Tools<br />2007-2011: Microsoft Windows Interoperability Program: protocol testing and tools <br />Since 4/2011: Google: Google+ platform and tools<br />DISCLAIMER: This talk does not necessarily represent Google’s opinion or direction.<br />
Content of this talk<br />General blah blah about cloud computing<br />Monitoring the Cloud<br />Testing the Cloud<br />Using the Cloud for Development<br />A formal method guy’s dream…<br />Conclusion<br />
What is Cloud Computing?<br />From Wikipedia, the free encyclopedia<br />Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet).<br />
What is Cloud Computing?<br />From Wikipedia<br />
Some Properties of Cloud Computing<br /><ul><li>It is location independent</li></ul><br />
IAAS (Infrastructure As A Service)<br />Basic building blocks<br />Storage<br />Networking<br />Computation<br />Homogeneous, easy migration (based on VMs)<br />Distributed Data Centers over Geographical Zones<br />Players: Amazon, GoGrid, Rackspace, Microsoft, Google<br />Estimated revenue 2010: $1b (Source: Economist)<br />
Platform As A Service (PAAS)<br />Basic Building Blocks<br />Operating System <br />Frameworks and Development Tools<br />Deployment and Monitoring Tools<br />Players: Microsoft, Google, IBM, SAP, …<br />Estimated Revenue 2010: $311m (Source: Economist)<br />
Software As A Service (SAAS)<br />Device and location independent applications, typically running in a browser (Email, Social, Retail, Enterprise Apps, etc.)<br />Many different players<br />Estimated revenue 2010: $11.7b (Source: Economist)<br />
Monitoring vs RV vs Testing<br />What’s the difference?<br />A (strict) take:<br />Monitoring collects and presents information for human analysis<br />Runtime verification collects and transforms information for automated analysis which ultimately leads to a verdict<br />Testing does the above things in an isolated, staged or mocked, environment. In particular, stimuli from the environment are simulated.<br /> In practice, boundaries are not so clear. For this talk RV = Monitoring (adapting to Google conventions)<br />
Anatomy of a Data Center<br />[Diagram: Data Centers A and B; in each, a Controller forwards requests to multiple Servers, which share common Storage]<br />Note: abstracted and simplified<br />
Anatomy of a Server<br />[Diagram: a Server (VM) runs several Jobs; each Job has an associated Monitor, which writes Logs and raises Alerts]<br />Note: abstracted and simplified<br />
Anatomy of a Service<br />[Diagram: a Service spans Jobs across several Servers and Data Centers, backed by shared Storage]<br />Note: abstracted and simplified<br />
Black Box Monitoring<br />Job<br />Monitor<br />Frequently send requests and analyze the response <br />Possible because server jobs are ‘stateless’ and always input enabled<br />If failure rate over a certain time interval exceeds a given ratio, raise an alert and page an engineer<br />Engineers aim for minimizing paging and avoiding false positives<br />
Black Box Monitoring: How it’s done @ Google<br />There are rule-based languages for defining request/responses. Each rule:<br />Synthesizes an HTTP request<br />Analyzes the response using a regular expression<br />Specifies frequency and allowed failure ratio<br />Rules are like tests: a simple trigger and a simple response analysis <br />Monitors can also be custom code<br />Job<br />Monitor<br />
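The rule shape described above (synthesize a request, match the response against a regular expression, track an allowed failure ratio over a window) can be sketched as follows. This is a minimal illustration, not Google's actual rule language; all names and parameters are invented:

```python
import re

class ProbeRule:
    """A black-box monitoring rule: send a request, match the response
    against a regex, and track the failure ratio over a rolling window.
    Names are illustrative, not any real monitoring system's API."""

    def __init__(self, pattern, max_failure_ratio, window=10):
        self.regex = re.compile(pattern)
        self.max_failure_ratio = max_failure_ratio
        self.window = window
        self.results = []  # rolling record of probe outcomes

    def probe(self, send_request):
        """Run one probe; send_request() stands in for an HTTP round trip."""
        response = send_request()
        ok = self.regex.search(response) is not None
        self.results.append(ok)
        self.results = self.results[-self.window:]  # keep only the window
        return ok

    def should_alert(self):
        """Alert (page an engineer) if failures exceed the allowed ratio."""
        if not self.results:
            return False
        failures = self.results.count(False)
        return failures / len(self.results) > self.max_failure_ratio

# Usage: a healthy backend answering "OK", then a failing one.
rule = ProbeRule(pattern=r"\bOK\b", max_failure_ratio=0.2, window=5)
for _ in range(5):
    rule.probe(lambda: "HTTP/1.1 200 OK")
assert not rule.should_alert()
for _ in range(3):
    rule.probe(lambda: "HTTP/1.1 500 Internal Server Error")
assert rule.should_alert()
```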
Black Box Monitoring: Issues?<br />Is the ‘stateless’ hypothesis feasible?<br />Nothing is really stateless -- state is passed as parameters in cookies, continuation tokens, etc.<br />However, as these are health tests, state can be ignored<br />What is the relation to testing?<br />In theory very similar, only that the environment is not mocked. <br />In practice, quite different frameworks/languages are used <br />What about service/system level monitoring?<br />It’s only about one job. <br />Doesn’t give the failure root cause (it only measures a symptom)<br />Job<br />Monitor<br />
Challenge: Integrate Black-Box Monitoring and Testing<br />Job<br />Monitor<br />Black-box monitoring can be seen as a particular way of executing tests end-to-end on the live product such that the impact on performance can be neglected.<br />Frameworks which integrate the design and execution of monitoring rules and test cases are promising<br />Mainly an engineering challenge <br />
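One way such a framework could look: a JUnit-style decorator attaches monitoring metadata (probe frequency, allowed failure ratio) to an ordinary test case, so the same function can be executed both by the test runner and by the live monitor. A hypothetical sketch; all names and parameters are invented:

```python
MONITOR_REGISTRY = []  # collects test cases also usable as live probes

def monitorable(frequency_s=60, max_failure_ratio=0.1):
    """Decorator attaching monitoring metadata to an ordinary test case,
    so one function serves both the test runner and the live monitor.
    The parameter names are illustrative assumptions."""
    def wrap(test_fn):
        test_fn.monitor_meta = {
            "frequency_s": frequency_s,
            "max_failure_ratio": max_failure_ratio,
        }
        MONITOR_REGISTRY.append(test_fn)
        return test_fn
    return wrap

@monitorable(frequency_s=30, max_failure_ratio=0.05)
def test_frontend_health():
    # In a real setup this would issue a request against the live job;
    # mocked here so the sketch is self-contained.
    response = "200 OK"
    assert "OK" in response

# The test runner calls it directly; the monitor reads the metadata.
test_frontend_health()
assert test_frontend_health.monitor_meta["frequency_s"] == 30
assert MONITOR_REGISTRY == [test_frontend_health]
```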
Challenge: System/Service-Level Black-Box Monitoring<br />Job<br />Job<br />Monitor<br />Monitor<br />Monitor<br />Not commonly done <br />Main purpose would be failure cause analysis and failure prevention<br />Simple local monitoring already discovers failures <br />Is there a strong point in doing it at runtime (vs. log analysis)?<br />Only if real-time prevention and potentially repair is important <br />
Challenge: Protocol Contract Verification<br />Job<br />Monitor<br />At Google, all communication between jobs happens via a single homogeneous RPC mechanism based on a message format definition language (called protocol buffers)<br />Also, all data (terabytes of it) is stored in formats specified by protocol buffers<br />One could formulate data invariants and protocol sequencing contracts over protocol buffers and enforce them at runtime <br />
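A sketch of what such contracts could look like, with decoded messages represented as plain dictionaries standing in for protocol buffers. The field names, the invariant, and the sequencing rules are purely illustrative:

```python
def check_invariant(msg):
    """Data invariant over a decoded message (a plain dict standing in
    for a protocol buffer); the fields and rule are invented examples."""
    return msg.get("user_id", 0) > 0 and msg.get("size", -1) >= 0

# Sequencing contract: which message type may follow which (illustrative).
ALLOWED_NEXT = {
    "Open": {"Read", "Write", "Close"},
    "Read": {"Read", "Write", "Close"},
    "Write": {"Read", "Write", "Close"},
    "Close": set(),
}

def check_sequence(types):
    """Enforce the sequencing contract over a trace of message types."""
    if types and types[0] != "Open":
        return False  # a session must start with Open
    prev = None
    for t in types:
        if prev is not None and t not in ALLOWED_NEXT[prev]:
            return False
        prev = t
    return True

assert check_invariant({"user_id": 7, "size": 128})
assert not check_invariant({"user_id": 0, "size": 128})
assert check_sequence(["Open", "Read", "Write", "Close"])
assert not check_sequence(["Open", "Close", "Read"])
```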
White-Box Monitoring<br />Server exports collection of probe points (variables)<br />Memory, # RPCs, # Failures, etc.<br />Monitor collects time series of those values and computes functions over them<br />Dashboards prepare information graphically<br />Mostly used for diagnosis by humans<br />Job<br />Monitor<br />
White-Box Monitoring: How it’s done @ Google<br />Job<br />Monitor<br />Declarative language for time series computations<br />Collects samples from the server by memory scraping<br />Merging of similar data from multiple servers running the same job<br />Rich support for diagram rendering in the browser<br />
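A minimal sketch of the white-box scheme: jobs export probe-point variables, the monitor collects time series of samples and computes functions such as rates over them, and the same variable is merged across servers running the same job. The class, variable names, and merge function are assumptions for illustration:

```python
from collections import defaultdict

class TimeSeriesMonitor:
    """Collects sampled values of exported variables ('probe points')
    and computes simple functions over them; names are illustrative."""

    def __init__(self):
        self.series = defaultdict(list)  # variable name -> [(t, value)]

    def sample(self, name, t, value):
        self.series[name].append((t, value))

    def rate(self, name):
        """Average rate of change over the sampled window, e.g. RPCs/sec."""
        pts = self.series[name]
        if len(pts) < 2:
            return 0.0
        (t0, v0), (t1, v1) = pts[0], pts[-1]
        return (v1 - v0) / (t1 - t0)

def merge(monitors, name):
    """Merge the same variable from several servers running the same job."""
    return sum(m.rate(name) for m in monitors)

# Two servers of the same job, sampled once per second for 5 seconds.
m1, m2 = TimeSeriesMonitor(), TimeSeriesMonitor()
for t in range(5):
    m1.sample("rpc_count", t, 100 * t)  # 100 RPCs/sec
    m2.sample("rpc_count", t, 50 * t)   # 50 RPCs/sec
assert m1.rate("rpc_count") == 100.0
assert merge([m1, m2], "rpc_count") == 150.0
```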
White-Box Monitoring: Issues?<br />Design for monitorability/testability?<br />It’s already ubiquitous, since software engineers are themselves on-call…<br />Distributed collection/network load?<br />Not really an issue because it’s sample based<br />Relation to testing?<br />Same as with black-box – should be a common framework.<br />Automatic root cause analysis and self-repair?<br />Current systems are mostly built for human analysis and repair. <br />Self-repair would be a big thing.<br />Job<br />Monitor<br />
Challenge: Self-Repair<br />Job<br />Monitor<br />Cloud systems are homogeneous and operate with redundancy<br />Many VMs with exactly the same properties<br />Self-repair could identify and ‘drain’ faulty parts of the system, apply fallbacks, roll back software updates, etc.<br />One major cause of cloud failures is outages<br />Another major cause is software updates <br />A semi-automated approach suggesting actions to a human would already be very useful<br />Ever got paged at 2am?<br />
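A toy sketch of the semi-automated variant: given per-replica error rates across identical VMs, suggest draining the outliers when enough healthy redundancy remains, and fall back to paging otherwise. The threshold, field names, and action names are all invented:

```python
def suggest_actions(replicas, error_threshold=0.5):
    """Semi-automated self-repair sketch: classify replicas of the same
    job by error rate and suggest actions to a human operator.
    Threshold and field names are illustrative assumptions."""
    healthy = [r for r in replicas if r["error_rate"] <= error_threshold]
    faulty = [r for r in replicas if r["error_rate"] > error_threshold]
    actions = []
    for r in faulty:
        # Only suggest draining if enough healthy redundancy remains
        # to absorb the load; otherwise escalate to a human.
        if len(healthy) >= 2:
            actions.append(("drain", r["name"]))
        else:
            actions.append(("page_engineer", r["name"]))
    return actions

replicas = [
    {"name": "vm-1", "error_rate": 0.01},
    {"name": "vm-2", "error_rate": 0.02},
    {"name": "vm-3", "error_rate": 0.90},  # faulty, e.g. after a bad update
]
assert suggest_actions(replicas) == [("drain", "vm-3")]
```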
Challenge: Hybrid White-Box Monitoring / RV Foundations<br />Job<br />Monitor<br />The data collected in white-box monitoring represents continuous and often stochastic functions over time. <br />The triggers for discrete actions (like alerts) are thresholds over integrated values of those functions.<br />Sounds like hybrid systems/automata. Has anybody in the RV community looked at it like this?<br />
Log Analysis<br />Job<br />Logs<br />Collect data from each server’s run containing information like operation flow, exceptions, etc.<br />Store data over a window of time (say for last 24h)<br />Access data from various sources programmatically to analyze issues (post-mortem, performance, etc.)<br />Allows for correlation of system/service wide information<br />
Log Analysis: How it’s done @ Google<br />Job<br />Logs<br />Very fine-grained logging on the job side; huge amounts of data collected<br />Logs are stored in Bigtable (Google’s large-scale storage solution)<br />Logs are analyzed using parallel (cloud) computing, e.g. with Sawzall, a declarative language based on map-reduce<br />Logs are most often used for failure cause analysis/debugging <br />
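The map-reduce style of log analysis can be illustrated in miniature: a map phase emits counts from structured log records, and an associative reduce phase merges them, which is what makes the computation parallelizable. The record schema is invented, and this is not Sawzall:

```python
from collections import Counter
from functools import reduce

# Structured log records as they might come out of storage; the field
# names are illustrative assumptions, not a real schema.
logs = [
    {"job": "frontend", "status": 500},
    {"job": "frontend", "status": 200},
    {"job": "backend",  "status": 500},
    {"job": "frontend", "status": 500},
]

def map_phase(record):
    """Emit (job, 1) for every failed request (5xx status)."""
    return Counter({record["job"]: 1}) if record["status"] >= 500 else Counter()

def reduce_phase(a, b):
    """Merge partial counts; associative, so it parallelizes across shards."""
    return a + b

failures_per_job = reduce(reduce_phase, map(map_phase, logs), Counter())
assert failures_per_job == Counter({"frontend": 2, "backend": 1})
```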
Log Analysis:Issues?<br />Job<br />Logs<br />Amount of data and accessibility?<br />Not really an issue because of highly performing distributed file systems<br />Format of the data?<br />Logs are structured data (at Google, protocol buffers)<br />Encryption?<br />A big issue: if it can’t be decrypted, not much may be diagnosable. If it can be decrypted, the access to this now decrypted data needs to be restricted. <br />
Challenge: Privacy and Encryption<br />Job<br />Monitor<br />Data logged (or otherwise analyzed) during monitoring may contain encrypted proprietary information<br />A problem may not be diagnosable without decryption<br />Decrypted clear-text data (in particular if logged) needs to be highly protected<br />Automatic obfuscation and/or anonymization would be highly desirable<br />A protocol may need to be designed for this in the first place<br />
Challenge: Integration Testing<br />Job<br />Job<br />Job<br />Storage<br />Two or more components are plugged together with a partially mocked environment<br />These tests are usually very ‘flaky’ (unreliable) because of:<br />The difficulty of constructing the mocked component’s precise behavior (it’s more than a simple mock in a unit test)<br />The difficulty of synthesizing the mocked component’s initial state (it may have a complex state)<br />Potential solution: model-based testing<br />
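A tiny illustration of the MBT idea: describe the mocked component's behavior as a state machine, and generate integration test sequences by traversing it. The model, states, and action names are made up:

```python
# A small behavioral model of a mocked backend: (state, action) -> next
# state. The states and actions are invented for illustration.
MODEL = {
    ("idle", "connect"): "connected",
    ("connected", "query"): "connected",
    ("connected", "disconnect"): "idle",
}

def generate_sequences(start, length):
    """Enumerate all action sequences of the given length that the model
    allows; these become the integration test cases."""
    seqs = []
    def walk(state, prefix):
        if len(prefix) == length:
            seqs.append(prefix)
            return
        for (s, action), nxt in MODEL.items():
            if s == state:
                walk(nxt, prefix + [action])
    walk(start, [])
    return seqs

tests = generate_sequences("idle", 3)
assert ["connect", "query", "disconnect"] in tests
assert ["connect", "query", "query"] in tests
# every generated test starts with 'connect', the only action from 'idle'
assert all(t[0] == "connect" for t in tests)
```

In a real setting, each generated sequence would drive both the system under test and the model, comparing observed and predicted behavior at every step.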
Technical Document Testing Program of Windows: A Success Story for MBT<br />222 protocols/technical documents tested<br />22,847 pages studied and converted into requirements<br />36,875 testable requirements identified and converted into test assertions<br />69% tested using MBT<br />31% tested using traditional test automation<br />66,962 person days (250+ years)<br />Hyderabad: 250 test engineers<br />Beijing: 100 test engineers<br />
Comparison MBT vs Traditional<br /><ul><li>In % of total effort per requirement, normalizing individual vendor performance</li>
<li>Vendor 2 modeled 85% of all test suites, performing relatively much better than Vendor 1</li></ul>Grieskamp et al.: Model-based quality assurance of protocol documentation: tools and methodology. Softw. Test., Verif. Reliab. 21(1): 55-71 (2011)<br />
Idle Resources <br />Peak demand problem: as with other utilities, the cloud must have the capacity to deal with peak times: 7am, 7pm, etc.<br />Huge amounts of idle computing resources are available in the DCs outside of those peak times<br />Literally hundreds of VMs may be available to a single engineer on a low-priority job basis<br /><ul><li>A game changer for software development tools</li></ul>Using the Cloud for Dev @ Google<br />Distributed/parallel build<br />Every engineer can build all of Google’s code + third-party open source code in a matter of minutes (a sequential build would take days)<br />Works by constructing the dependency graph, then using map/reduce technology<br />Distributed/parallel test<br />Changes to the code base are continuously tested against all dependent targets once submitted<br />Failures can be tracked down very precisely to the change which introduced them<br />Check out http://google-engtools.blogspot.com/ for details<br />
Consequences for Testing, Program Analysis, etc. <br />Need to rethink the base assumptions of some of the existing approaches for testing and program analysis for massive coarse-grained parallelism:<br />Early divide-and-conquer is ideal, e.g. start from an initial random seed, then run till the end, collect, and compare<br />Try different heuristics on the same problem; see which one wins<br />Techniques like SMT and concolic execution can largely benefit from this<br />
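As a toy illustration of "try different heuristics/seeds and see which one wins": race several searches with different starting seeds across a thread pool and take the first successful result. The search itself is a trivial stand-in for an SMT or concolic run; all names and bounds are invented:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def search(seed, target):
    """A toy bounded search standing in for an SMT/concolic run:
    scan candidates upward from a seed-dependent starting point."""
    x = seed
    for _ in range(10_000):
        if x * x == target:
            return (seed, x)
        x += 1
    return None  # this seed did not find a solution within the budget

def race(seeds, target):
    """Run the same problem from different seeds in parallel and take
    whichever run finds a solution - cheap when idle VMs are plentiful."""
    with ThreadPoolExecutor(max_workers=len(seeds)) as pool:
        futures = [pool.submit(search, s, target) for s in seeds]
        for f in as_completed(futures):
            if f.result() is not None:
                return f.result()
    return None

# Only the seed starting at 0 can reach x = 6 within its budget.
seed, solution = race([0, 1000, 5000], target=36)
assert solution == 6
```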
Conclusions<br />The cloud brings nothing really new, but it changes priorities. <br />Some of the problems traditionally researched in RV and testing are no-brainers (at least at Google). Others do wait for a solution.<br />Have ideas? Apply for a Google Research Award (Google it). Deadline is February 1, 2012.<br />