Domain-Specific Profiling - TOOLS 2011

  • Code profilers commonly employ execution sampling to obtain dynamic run-time information. (A minimal sampling sketch in Pharo follows these notes.)
  • What is the relationship between this and the domain? (Show the picture again.)
  • PetitParser is a parsing framework combining ideas from scannerless parsing, parser combinators, parsing expression grammars and packrat parsers to model grammars and parsers as objects that can be reconfigured dynamically.
  • Rigorous test suites try to ensure that each part of the grammar is covered by tests and is well-specified according to the respective language standards. Validating that each production of the grammar is covered by the tests is a difficult activity. As mentioned previously, the traditional tools of the host language work at the method and statement level and thus cannot produce meaningful results in the context of PetitParser, where the grammar is modeled as a graph of objects.
  • This is a picture of the XML grammar.
  • A new dimension: the problem domain.
  • Specify the domain interests. Capture the runtime information. Present the results.
  • Clicking and dragging nodes refreshes the visualization, thus increasing the progress bar of the corresponding nodes. This profile helps identify unnecessary rendering. We identified a situation in which nodes were refreshing without receiving user actions.
  • What does observeParser do?
  • Software instrumentation monitors the run-time behavior of a system to support a particular kind of analysis.
  • We see that there are many different ways of doing reflection, adaptation, and instrumentation; many are low level, and the ones that are highly flexible cannot break free from the limitations of the language.
  • An adaptation is a semantic abstraction for composing meta-level structure and behavior.
  • We need to reflect on the dynamic representation of a program, that is, its operational execution. This is called behavioral reflection, pioneered by Smith in the context of Lisp.
  • Behavioral reflection provides a way to intercept and change the execution of a program. It is concerned with execution events, i.e., method execution, message sends, or variable assignments.
  • 1. Identified the need for domain-specific profiling; 2. empirically validated domain-specific profiling; 3. presented an infrastructure for domain-specific profiling.
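
The sampling sketch referenced above: MessageTally is the standard sampling-based profiler in Pharo named in these notes and in the transcript, and MessageTally spyOn: is its usual entry point. The profiled block below is an arbitrary stand-in for rendering a Mondrian view; it only illustrates how a CPU-time tree like the one on the Mondrian slide is obtained.

    "Execution sampling: periodically interrupt the block, record the methods
     currently on the stack, then print a CPU-consumption tree per method."
    MessageTally spyOn: [
        1000 timesRepeat: [ 100 factorial ] ].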

Domain-Specific Profiling - TOOLS 2011: Presentation Transcript

  • Domain-Specific Profiling. Alexandre Bergel, Oscar Nierstrasz, Lukas Renggli and Jorge Ressia
  • Profiling: the activity of analyzing a program execution.
  • Traditional Profilers
  • Profile, Source, Report
  • Examples
  • CPU time
  • Mondrian
  • System Complexity (Lanza and Ducasse, 2003)
  • CPU time profiling: which is the relationship? Mondrian [9] is an open and agile visualization engine. Mondrian describes a visualization using a graph of (possibly nested) nodes and edges. In June 2010 a serious performance issue was raised; tracking down the cause of the poor performance was not trivial. We first used a standard sample-based profiler. Execution sampling approximates the time spent in an application's methods by periodically stopping a program and recording the current set of methods under execution. Such a profiling technique is relatively accurate since it has little impact on the overall execution. This sampling technique is used by almost all mainstream profilers, such as JProfiler, YourKit, xprof [10], and hprof. MessageTally, the standard sampling-based profiler in Pharo Smalltalk, textually describes the execution in terms of CPU consumption and invocation for each method of Mondrian:

        54.8% {11501ms} MOCanvas>>drawOn:
          54.8% {11501ms} MORoot(MONode)>>displayOn:
            30.9% {6485ms} MONode>>displayOn:
            |  18.1% {3799ms} MOEdge>>displayOn:
            ...
            |  8.4% {1763ms} MOEdge>>displayOn:
            |  |  8.0% {1679ms} MOStraightLineShape>>display:on:
            |  |  2.6% {546ms} FormCanvas>>line:to:width:color:
            ...
            23.4% {4911ms} MOEdge>>displayOn:
            ...

    We can observe that the virtual machine spent about 54% of its time in the method displayOn: defined in the class MORoot. A root is the unique non-nested node that contains all the nodes and edges of the visualization. This general profiling information says that rendering nodes and edges consumes a great share of the CPU time, but it does not help in pinpointing which nodes and edges are responsible for the time spent. Not all graphical elements equally consume resources. Traditional execution sampling profilers center their result on the frames of the execution stack and completely ignore the identity of the object that received the method call and its arguments. As a consequence, it is hard to track down which objects cause the slowdown. For the example above, the traditional profiler says that we spent 30.9% in MONode>>displayOn: without saying which nodes were actually refreshed too often.
  • Coverage
  • scg.unibe.ch/research/helvetia/petitparser
  • Java grammar: 210 methods. 100% coverage to build the grammar.
  • Java grammar: 210 methods. 100% coverage to build the grammar. Is each production of the grammar covered? (See the PetitParser sketch after the transcript.)
  • Profile, Source, Report, Domain
  • Domain, Source, Traditional Code Profilers
  • Domain-Specific Profilers, Domain, Source, Traditional Code Profilers
  • What does it mean?
  • Specify
  • Capture
  • Present
  • MetaSpy
  • Instrumenter Profiler
  • Domain: Domain Object, Domain Object, Domain Object
  • MetaSpy class diagram: instrumentation strategies and profilers. MetaInstrumenter (install, setUp, tearDown, uninstall) with subclasses AnnouncementInstrumenter, MethodInstrumenter and ParserInstrumenter (attributes such as announcer, theClass, selector, method, replacement, parser, grammar; hooks such as doesNotUnderstand: and run:with:in:); Profiler (handler, model, strategies; observeClass:do:, observeClass:selector:do:, observePackage:do:, observePackagesMatching:do:, observeParser:in:do:, install, uninstall, setUp, tearDown); user-provided classes MondrianProfiler, OmniBrowserProfiler and PetitParserProfiler (setUp, visualize). (A minimal instrumentation sketch using the run:with:in: hook follows the transcript.)
  • Specify the domain interests. Capture the runtime information. Present the results. (A usage sketch follows the transcript.)
  • Mondrian Profiler
  • System Complexity (Lanza and Ducasse, 2003)
  • MondrianProfiler>>setUp
        self model root allNodes do: [ :node |
            self
                observeObject: node
                selector: #displayOn:
                do: [ ... counting ... ] ]
  • Profiler
  • PetitParserProfiler>>setUp
        self model allParsers do: [ :parser |
            self
                observeParser: parser
                in: self grammar
                do: [ ... counting ... ] ]
  • Implementation
  • Instrumentation
  • Two options: hooking to the domain, or reflection
  • Reflection
  • scg.unibe.ch/research/bifrost
  • Organize the Meta-level
  • Explicit Meta-objects
  • Class, Instrumentation Meta-object, Object
  • Class, Instrumentation Meta-object, Instrumented Object
  • Partial Reflection, Selective Reifications, Unanticipated Adaptation, Runtime Integration, Composition
  • Profile, Source
  • Domain-specific Information
  • MetaSpy, Domain-Specific Profilers, Domain, Source, Traditional Code Profilers
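
Grammar sketch referenced from the coverage slides. This is a toy, invented PetitParser grammar, not the Java or XML grammar from the talk; it only illustrates why coverage has to be measured over the graph of parser objects rather than over methods. The selectors used (asParser, plus, flatten, the ',' sequencing operator, parse:, allParsers) are PetitParser protocol; allParsers is the same enumeration used in PetitParserProfiler>>setUp above.

    "A toy grammar: each production is a parser object wired into a graph."
    | number addition |
    number := #digit asParser plus flatten.
    addition := number , $+ asParser , number.

    "Parsing happens at the object level, not the method level."
    addition parse: '3+4'.

    "Because the grammar is an object graph, a domain-specific profiler can
     enumerate every production and report which ones the tests exercised."
    addition allParsers size.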
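Usage sketch referenced from the "Specify, Capture, Present" slide. The transcript shows MondrianProfiler>>setUp but not how such a profiler is driven, so the driver below is an assumption built from the Profiler protocol in the class diagram (model, install, uninstall, visualize); the model: setter and the way the view is obtained are hypothetical.

    "Hypothetical driver for the MondrianProfiler shown above."
    | view profiler |
    view := ...                "a Mondrian view whose rendering we want to profile"
    profiler := MondrianProfiler new.
    profiler model: view.      "specify: the domain objects of interest"
    profiler install.          "capture: setUp installs the instrumentation"
    view open.                 "exercise the domain: render, click, drag nodes"
    profiler uninstall.
    profiler visualize.        "present: show the per-node rendering counts"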
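Instrumentation sketch referenced from the class-diagram slide. The diagram lists a run:with:in: hook; in Pharo/Squeak the virtual machine sends run:with:in: to any non-method object found in a method dictionary, which is one way a MethodInstrumenter-like class can intercept executions. The CountingSpy class, its counting logic and the installation snippet are assumptions for illustration, not MetaSpy's actual implementation.

    "A minimal counting spy based on the objects-as-methods hook."
    Object subclass: #CountingSpy
        instanceVariableNames: 'method count'
        classVariableNames: ''
        package: 'MetaSpy-Sketch'.

    CountingSpy >> wrap: aCompiledMethod
        method := aCompiledMethod.
        count := 0

    CountingSpy >> run: aSelector with: anArray in: aReceiver
        "Sent by the VM in place of executing a CompiledMethod:
         count the execution, then run the original method."
        count := count + 1.
        ^ aReceiver withArgs: anArray executeMethod: method

    "Install the spy in place of MONode>>displayOn:
     (the method cache needs flushing afterwards)."
    MONode methodDict
        at: #displayOn:
        put: (CountingSpy new wrap: (MONode compiledMethodAt: #displayOn:)).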