Object-Centric Reflection - ESUG 2012
Object-Centric Reflection: presentation at the main track of ESUG 2012, Ghent, Belgium.


Speaker Notes
  • I do not work with Smalltalk in industry; I do research. However, I believe that the two domains should interact and learn from each other. A research product that has no application is basically useless. What I will show you today will have an impact on your research as well as on your industrial applications.
  • Code profilers commonly employ execution sampling to obtain dynamic run-time information.
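    As a concrete illustration, Pharo's standard sampling profiler can be invoked on any block. A minimal sketch using the real MessageTally API:

        "Profile a block by periodically sampling the call stack."
        MessageTally spyOn: [ 1000 timesRepeat: [ 100 factorial ] ].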
  • Mondrian is a framework for drawing graphs.
  • What is the relationship between this and the domain? (Show the picture again.)
  • Mondrian is a framework for drawing graphs.
  • Width = the number of attributes of the class; height = the number of methods of the class; color = the number of lines of code of the class.
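    A sketch of a Mondrian script for such a System Complexity view. The selectors below approximate Mondrian's published scripting API and should be read as assumptions rather than the exact interface:

        | view classes |
        classes := Collection withAllSubclasses.
        view := MOViewRenderer new.
        view shape rectangle
            width: [ :each | each instVarNames size ];      "attributes"
            height: [ :each | each selectors size ];        "methods"
            linearFillColor: [ :each | each linesOfCode ] within: classes.
        view nodes: classes.
        view edgesFrom: #superclass.
        view treeLayout.
        view open.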
  • Double dispatch.
  • One of the nodes was not rendered correctly.
  • Why do I need to write code? Why do I need to know about the object's id? I want to grab the object; it is right there.
  • We have to go back to the code.
  • Which questions do these debuggers try to answer?
  • Sillito's developer questions.
  • We see that there are many different ways of doing reflection, adaptation, and instrumentation; many are low level, and the ones that are highly flexible cannot break free from the limitations of the language.
  • Adaptation: a semantic abstraction for composing meta-level structure and behavior.
  • I am going to talk about four applications that were conceived by breaking loose from the object model.
  • We have to go back to the code.
  • Questions 15, 19, 20, 28 and 33 all have to do with tracking state at runtime. Consider in particular question 15: Where is this variable or data structure being accessed? Let us assume that we want to know where an instance variable of an object is being modified. This is known as keeping track of side effects [3]. One approach is to use step-wise operations until we reach the modification. However, this can be time-consuming and unreliable. Another approach is to place breakpoints in all assignments related to the instance variable in question. Finding all these assignments might be troublesome depending on the size of the use case, as witnessed by our own experience.
    Tracking down the source of this side effect is highly challenging: 31 of the 38 methods defined on InstructionStream access the variable, comprising 12 assignments; the instance variable is written 9 times in InstructionStream's subclasses. In addition, the variable pc has an accessor that is referenced by 5 intensively-used classes.
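    A minimal manual sketch of the breakpoint approach in Pharo, instrumenting one write site by hand; this is exactly the per-object bookkeeping that an object-centric "halt on state change" automates:

        InstructionStream >> pc: anInteger
            "Instrumented accessor: break whenever pc is written through here.
             The 12 direct assignments in other methods are still not covered."
            self halt.
            pc := anInteger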
  • Question 22 poses further difficulties for the debugging approach: How are these types or objects related? In statically typed languages this question can be partially answered by finding all the references to a particular type in another type. Due to polymorphism, however, this may still yield many false positives. (An instance variable of type Object could potentially be bound to instances of any type we are interested in.) Only by examining the runtime behavior can we learn precisely which types are instantiated and bound to which variables. The debugging approach would, however, require heavy use of conditional breakpoints (to filter out types that are not of interest), and might again entail the setting of breakpoints in a large number of call sites.
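    In Pharo a conditional breakpoint can be expressed with the real haltIf: message; a small self-contained sketch that halts only for the type of interest:

        | items |
        items := OrderedCollection new.
        items add: 42; add: 'a string'.
        items do: [ :each |
            "break only when the element is bound to the type we care about"
            self haltIf: [ each isKindOf: String ] ].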
  • Back-in-time debugging [4], [5] can potentially be used to answer many of these questions, since it works by maintaining a complete execution history of a program run. There are two critical drawbacks, however, which limit the practical application of back-in-time debugging. First, the approach is inherently post mortem: one cannot debug a running system, but only the history of a completed run. Interaction is therefore strictly limited, and extensive exploration may require many runs to be performed. Second, the approach entails considerable overhead, both in terms of runtime performance and in terms of the memory required to build and explore the history.
  • Double dispatch.
  • One of the nodes was not rendered correctly.
  • We have more commands than the ones in the debugger, but we did not know how to put them there.
  • Halt on next specific messages.
  • Clicking and dragging nodes refreshes the visualization, which increases the progress bar of the corresponding nodes. This profile helps identify unnecessary rendering: we identified a situation in which nodes were refreshing without receiving any user actions.
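    A rough sketch of gathering such a refresh profile by hand; CountingNode is a hypothetical subclass, while MONode and displayOn: are the actual Mondrian names:

        MONode subclass: #CountingNode
            instanceVariableNames: 'refreshCount'
            classVariableNames: ''
            category: 'Profiling-Sketch'

        CountingNode >> displayOn: aCanvas
            "Count every actual rendering of this node, then draw as usual."
            refreshCount := (refreshCount ifNil: [ 0 ]) + 1.
            ^ super displayOn: aCanvas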
  • We do not know what to evolve or who should evolve it. We need the evolution to spread, like a disease or a cure. The adaptation cannot be seen by other objects which are in a different execution; the adaptation is set dynamically.
  • The impact on the system is bearable.
  • We have not forgotten about structural changes.
  • Thanks!

Object-Centric Reflection - ESUG 2012: Presentation Transcript

  • Object-Centric Reflection. Jorge Ressia, Software Composition Group.
  • Profiling
  • Profiling: the activity of analyzing a program execution.
  • [Slide: two-column excerpt of the Domain-Specific Profiling paper showing the Mondrian CPU-time case study; a readable version of the same excerpt appears a few slides below.]
  • Mondrian
  • System Complexity, Lanza and Ducasse 2003.
  • [Slide: zoom on the MessageTally output of the paper excerpt; the full listing appears in the next slide.]
  • Domain-Specific Profiling: CPU time profiling. (Annotated: "Which is the relationship?") Mondrian [9] is an open and agile visualization engine. Mondrian describes a visualization using a graph of (possibly nested) nodes and edges. In June 2010 a serious performance issue was raised. Tracking down the cause of the poor performance was not trivial. We first used a standard sample-based profiler.
    Execution sampling approximates the time spent in an application's methods by periodically stopping a program and recording the current set of methods under execution. Such a profiling technique is relatively accurate since it has little impact on the overall execution. This sampling technique is used by almost all mainstream profilers, such as JProfiler, YourKit, xprof [10], and hprof.
    MessageTally, the standard sampling-based profiler in Pharo Smalltalk, textually describes the execution in terms of CPU consumption and invocation for each method of Mondrian:

        54.8% {11501ms} MOCanvas>>drawOn:
          54.8% {11501ms} MORoot(MONode)>>displayOn:
            30.9% {6485ms} MONode>>displayOn:
              | 18.1% {3799ms} MOEdge>>displayOn:
              ...
              | 8.4% {1763ms} MOEdge>>displayOn:
              | | 8.0% {1679ms} MOStraightLineShape>>display:on:
              | | 2.6% {546ms} FormCanvas>>line:to:width:color:
            ...
            23.4% {4911ms} MOEdge>>displayOn:
            ...

    We can observe that the virtual machine spent about 54% of its time in the method displayOn: defined in the class MORoot. A root is the unique non-nested node that contains all the nodes and edges of the visualization. This general profiling information says that rendering nodes and edges consumes a great share of the CPU time, but it does not help in pinpointing which nodes and edges are responsible for the time spent. Not all graphical elements equally consume resources.
    Traditional execution sampling profilers center their result on the frames of the execution stack and completely ignore the identity of the object that received the method call and its arguments. As a consequence, it is hard to track down which objects cause the slowdown. For the example above, the traditional profiler says that we spent 30.9% in MONode>>displayOn: without saying which nodes were actually refreshed too often.
  • Debugging
  • Debugging: the process of interacting with a running software system to test and understand its current behavior.
  • Mondrian
  • System Complexity, Lanza and Ducasse 2003.
  • Rendering
  • Shape and Nodes
  • How do we debug this?
  • Breakpoints
  • Conditional Breakpoints
  • [Slide: diagram of nested code blocks, shown twice.]
  • Developer Questions
  • When during the execution is this method called? (Q.13)
    Where are instances of this class created? (Q.14)
    Where is this variable or data structure being accessed? (Q.15)
    What are the values of the argument at runtime? (Q.19)
    What data is being modified in this code? (Q.20)
    How are these types or objects related? (Q.22)
    How can data be passed to (or accessed at) this point in the code? (Q.28)
    What parts of this data structure are accessed in this code? (Q.33)
  • [Slide: the same questions, attributed: Sillito et al., Questions programmers ask during software evolution tasks, 2008.]
  • [Slide: the same questions, annotated "Which is the relationship?"]
  • What is the problem?
  • Traditional Reflection
  • [Slide: the paper excerpt again, annotated "Profiling".]
  • [Slide: the developer questions again, annotated "Debugging".]
  • Object Paradox
  • Object-Centric Reflection
  • Organize the Meta-level
  • Explicit Meta-objects
  • [Slide: diagram: Class, Meta-object, Object.]
  • [Slide: diagram repeated: Class, Meta-object, Object.]
  • [Slide: diagram: Class, Meta-object, evolved Object.]
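    A heavily hedged sketch of what binding an explicit meta-object to a single object might look like. Every name below (BehavioralMetaObject, on:doBefore:, bindTo:) is an illustrative assumption, not the actual Bifrost API:

        | meta node |
        node := MONode new.
        meta := BehavioralMetaObject new.
        meta on: #displayOn:
             doBefore: [ :receiver | Transcript show: receiver printString; cr ].
        meta bindTo: node.   "only this one node is adapted; MONode itself is untouched"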
  • Debugging, Profiling, Execution Reification, Structure Evolution
  • Object-Centric Debugging
  • Object-Centric Debugging: J. Ressia, A. Bergel and O. Nierstrasz, ICSE 2012.
  • [Slide: the nested code blocks diagram again, stepped three times.]
  • Stack-centric debugging, centered on the InstructionStream class:
        step into, step over, resume
        InstructionStream class>>on:
        InstructionStream class>>new
        InstructionStream>>initialize
        CompiledMethod>>initialPC
        InstructionStream>>method:pc:
        InstructionStream>>nextInstruction
        MessageCatcher class>>new
        InstructionStream>>interpretNextInstructionFor:
        ...
    Object-centric debugging, centered on the InstructionStream object:
        next message, next message named:, next change
        initialize, on:, new, method:pc:
        nextInstruction
        interpretNextInstructionFor:
        ...
  • Mondrian
  • Shape and Nodes
  • halt on object in call
  • Halt on next message
    Halt on next message/s named
    Halt on state change
    Halt on state change named
    Halt on next inherited message
    Halt on next overloaded message
    Halt on object/s in call
    Halt on next message from package
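    A sketch of how these commands could read as messages sent directly to one object (aNode stands for any object grabbed at runtime); the selectors are illustrative assumptions in the spirit of the paper, not necessarily the shipped API:

        aNode haltOnNextMessage.                     "Halt on next message"
        aNode haltOnNextMessageNamed: #displayOn:.   "Halt on next message/s named"
        aNode haltOnStateChange.                     "Halt on state change"
        aNode haltOnStateChangeNamed: #shape.        "Halt on state change named"
        aNode haltOnObjectInCall.                    "Halt on object/s in call"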
  • Debugging, Profiling, Execution Reification, Structure Evolution
  • MetaSpy
  • MetaSpy: Bergel et al., TOOLS 2011.
  • Mondrian Profiler
  • System Complexity, Lanza and Ducasse 2003.
  • Debugging, Profiling, Execution Reification, Structure Evolution
  • What if we do not know what to evolve?
  • ?
  • Prisma
  • Back-in-time Debugger
  • Back-in-time Debugger: Object Flow Debugger, Lienhard et al., ECOOP 2008.
  • [Slide: object history diagram from the back-in-time debugger:]
        person := Person new.   "t1"
        ...
        name := Doe.            "t2"
        ...
        name := Smith.          "t3"
    Recorded history of the name field of the :Person instance:
        init@t1: value = null
        field-write@t2: value = Doe, predecessor = init@t1
        field-write@t3: value = Smith, predecessor = field-write@t2
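    A minimal sketch of that recorded history as plain objects; FieldWrite and its setter are hypothetical names introduced only for illustration:

        Object subclass: #FieldWrite
            instanceVariableNames: 'value timestamp predecessor'
            classVariableNames: ''
            category: 'BackInTime-Sketch'

        FieldWrite >> value: aValue timestamp: aTime predecessor: aWrite
            "Each write remembers the write it replaced, forming the history chain."
            value := aValue.
            timestamp := aTime.
            predecessor := aWrite

        "Building the chain for the example above:"
        | w1 w2 w3 |
        w1 := FieldWrite new value: nil timestamp: 1 predecessor: nil.
        w2 := FieldWrite new value: 'Doe' timestamp: 2 predecessor: w1.
        w3 := FieldWrite new value: 'Smith' timestamp: 3 predecessor: w2.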
  • Debugging, Profiling, Execution Reification, Structure Evolution
  • Talents (scg.unibe.ch/research/talents)
  • Talents: J. Ressia, T. Gîrba, O. Nierstrasz, F. Perin and L. Renggli, IWST 2011 (scg.unibe.ch/research/talents).
  • Dynamically composable units of reuse
  • Streams
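    A hedged sketch of acquiring a talent on a single stream. Talent, defineMethod:do: and acquireTalent: are illustrative assumptions, while ReadStream, upToEnd and asUppercase are real Pharo:

        | talent stream |
        talent := Talent new.
        talent defineMethod: #restAsUppercase
               do: [ :receiver | receiver upToEnd asUppercase ].
        stream := ReadStream on: 'hello'.
        stream acquireTalent: talent.   "only this one stream gains the method"
        stream restAsUppercase.         "answers 'HELLO'"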
  • scg.unibe.ch/research/bifrost
  • Object-Centric Debugger, Prisma, Subjectopia, MetaSpy, Talents, Chameleon
  • scg.unibe.ch/jenkins/
  • Alexandre Bergel, Marcus Denker, Stéphane Ducasse, Oscar Nierstrasz, Lukas Renggli, Tudor Gîrba, Fabrizio Perin
  • scg.unibe.ch/research/bifrost