Using requirements to retrace software evolution history
Presented at the Software Evolvability workshop at ICSM 2007

Speaker notes
  • (17 mins + 5 for questions/setup) Thanks for coming. This is a practice talk for a workshop on software evolvability at ICSM in two weeks. The workshop explores how to design software systems, and their requirements, to support evolving contexts. Please hold questions until after the presentation, as you would at a real conference. I'd also appreciate notes on typos, odd colours, inaccuracies, and other mistakes. Keep in mind this is a talk on history given by someone who wasn't there for much of it; I welcome clarifications from more experienced colleagues.
  • The nounal view looks at the 'what' of evolution, rather than the verbal view, which looks at the 'how'. We want to focus on how a system can be made more 'evolvable'. But evolvability doesn't arise like Athena, fully formed.
  • For example, the history of spreadsheets often starts with VisiCalc, then Lotus 1-2-3, then Excel, and so on. But this is just a factual recounting of events, like "Napoleon retreated from Moscow and lost most of his army". We want to understand, for example, why Napoleon invaded Russia in the first place.
  • As an example of what software histories that focus on the why can teach us, I want to discuss distributed computing. It is controversial territory because Microsoft did not dominate this area as much as, say, word processing. I focus on intentions because I am interested in the problem domain rather than the specifics of machine domains. I acknowledge that many issues only surface when implementation happens, but I suggest those issues are then addressed in later releases.
  • What is distributed computing? The term can also suggest parallel computing, so perhaps a better term is remote computing.
  • Here is an overview of the protocols and technologies I cover. Any taxonomy is necessarily a simplification, but this diagram tries to show a rough phylogeny, i.e., connected nodes are more closely related. There is nonetheless a lot of gene transfer.
  • These requirements were a response to earlier protocols, such as FTP and Telnet, each of which required reinventing the same machinery.
  • The rise of Unix for servers scared smaller vendors (Sun/AT&T).
  • The HTTP model had demonstrated scalability and support for heterogeneity: everyone had a web server on port 80, but not everyone had a CORBA/DCOM server. It also offered a high enough level of abstraction, dealing with call/response, redirects, and so on. XML is self-describing and simple to start with.
  • A service is a possibly remote procedure whose invocation is described in a standard (preferably XML-based) machine-readable syntax and is reachable via standard Internet protocols, including at a minimum a description of the allowed input and output messages, as well as possible semantic annotation of the service function and data meaning (Dagstuhl session on SOC). A minimal sketch of such an invocation follows.
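For concreteness, here is a minimal sketch (not from the talk) of what that definition looks like in practice, using Python's standard xmlrpc.client module. The endpoint URL and method name are hypothetical.

```python
# Hypothetical illustration of the service definition above: the invocation is
# expressed in a standard XML syntax (XML-RPC) and carried over a standard
# Internet protocol (HTTP).
import xmlrpc.client

# ServerProxy turns attribute access into remote calls; arguments and the
# result are serialized as XML-RPC <methodCall>/<methodResponse> messages.
service = xmlrpc.client.ServerProxy("http://example.org/stockquote")  # hypothetical endpoint
print(service.getQuote("IBM"))  # hypothetical method: a remote procedure reached via HTTP
```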

Transcript

  • 1. Using requirements to retrace software evolution history Neil Ernst and John Mylopoulos University of Toronto
  • 2. Motivation
    • Lehman and Ramil describe the nounal view of software evolution as concerned with “the nature of evolution, its causes . . . [and] its impacts”
    • Design for evolvability should learn from past decisions and past requirements
      • Learn ‘large-scale’ context for problems
      • Appreciate previous solutions (avoid NIH syndrome)
  • 3. Software 'history'
    • Many histories of ‘software’, the industry, etc. focus on what happened – not why
    • A ‘history’ is a systematic and plausible explanation of past events through disciplined and judicious use of primary sources, where possible
  • 4. Distributed computing
    • Why a history of distributed computing?
      • Seek to understand the evolution of various tools and standards
      • Well-documented and controversial
    • Artifacts:
      • Published specifications and proposals
      • Contemporary commentary, mailing list posts
    • Omit implementation in favour of intention
  • 5. Distributed computing
    • The ability to manipulate resources on new, heterogeneous systems
    • Protocols describe standard ways of dealing with problems: they constrain possible operations for some purpose, e.g., interoperability
  • 6. [Diagram: rough phylogeny of the protocols and technologies covered]
  • 7. Early days
    • The introduction of the ‘network’ spurred White to develop remote procedure call
    • Eight requirements, including:
      • Report on the outcome of invocation
      • Represent several types
      • Arbitrary, named commands
  • 8. Remote procedure call
    • Xerox
      • Stubs and skeletons
      • Remote invocation should mimic local invocation (a sketch follows this slide)
    • Sun's ONC
      • Handle authentication formally
      • Discusses call semantics
    • DCE
      • Reaction to market dominance
      • Handle multiple OS
      • Separate invoker from consumer
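To make the "stubs and skeletons" idea concrete, here is a minimal sketch; the class name and wire format are invented for illustration and are not taken from the Xerox, Sun, or DCE systems. The client-side stub exposes an ordinary-looking method and hides marshalling and transport behind it.

```python
# Hypothetical client-side stub: the caller sees a plain method call, while
# marshalling and the network round trip are hidden inside the stub.
import json
import socket

class CalculatorStub:
    """Mimics a local object; forwards each call to a remote skeleton."""

    def __init__(self, host: str, port: int):
        self.host, self.port = host, port

    def add(self, a: int, b: int) -> int:
        request = json.dumps({"method": "add", "args": [a, b]}).encode()
        with socket.create_connection((self.host, self.port)) as conn:
            conn.sendall(request)
            conn.shutdown(socket.SHUT_WR)        # tell the skeleton the request is complete
            reply = conn.makefile("rb").read()   # skeleton replies with a JSON-encoded result
        return json.loads(reply)["result"]

# Usage (assuming a matching skeleton listens on localhost:9000):
#   stub = CalculatorStub("localhost", 9000)
#   print(stub.add(2, 3))   # looks local, but latency and failures are real
```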
  • 9. CORBA
    • Introduction of object-based DC
    • First standard quite buggy, prompting 5 years of work on version 2
    • Numerous, often incomplete vendor implementations led to market fragmentation
      • True language-independence with IDLs
      • Handle multiple systems at once with ORBs
  • 10. Java: RMI and EJBs
    • Java 1.0 introduced in 1995
    • RMI bundled, implementing RPC object protocol
    • In 1998, Sun introduced EJBs and the notion of separating business logic from application code
    • A large, dominant player meant comfort for decision-makers (cf. CORBA)
  • 11. Microsoft: DCOM
    • Extend DCE with a distributed object framework
    • A Microsoft competitor for CORBA-based solutions
      • Garbage collection
      • Access control lists
      • Remote object evolution
  • 12. The web years
    • Triumph of HTTP as transport protocol
    • Advent of XML as data exchange format
    • 'Web services' via XML-RPC and SOAP
    • Constrained architectures: REST (2000)
      • Constrain the interface to gain properties such as scalability and loose coupling (a sketch follows this slide)
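As a rough illustration (the host and resource path are hypothetical) of what those constraints look like from the client side: every resource is named by a URI and manipulated through the same small set of HTTP methods, which is what lets generic clients, caches, and intermediaries participate.

```python
# Minimal sketch of REST's uniform interface using only the standard library;
# the host and resource path are hypothetical.
import http.client

conn = http.client.HTTPConnection("example.org")
conn.request("GET", "/orders/42")                  # resource identified by a URI, fetched with a standard verb
resp = conn.getresponse()
print(resp.status, resp.getheader("Content-Type")) # self-describing response metadata
print(resp.read().decode())                        # a representation of the resource
conn.close()
```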
  • 13. Analysis
    • Change is incremental from year to year, addressing specific challenges: scalability as the web takes off; true heterogeneity…
    • In that light, what does our study suggest about designing an evolvable DC spec?
    • Vendor lock-in is a blessing and a curse. In the case of DCOM, tight integration with the dominant platform of the day made development easier. At the same time, extending applications beyond that platform was essentially impossible. Open standards processes can lead to ‘design-by-committee’ syndrome, but can also greatly increase adoption.
  • 14.
    • Successful protocols work best by allowing the developer to focus on what matters to their application. Such generality and encapsulation become more and more successful as the underlying technologies, such as HTTP, are themselves standardized and propagated.
    • Service orientation, the separation of business logic and application code, and the ability to separate invoker from consumer are important mechanisms for handling application complexity and promoting reusability.
    • Treating remote resources as though they were local is misleading. There are certain properties of remote access, such as latency, that are very difficult to ignore. One of CORBA’s problems was its insistence on this principle. Quality-of-service properties are important to consider; Web services specifications such as WS-ReliableMessaging are attempts to address this.
  • 15. Conclusions
    • Designing for evolvability should consider past decisions. Would a new distributed computing paradigm arise if it did not provide improvements on the past?
    • Abstraction is only possible after sufficient pain has been suffered. E.g., REST: SOA is a style without constraints (anything can be sent), but REST's constraints are there for a reason (e.g., loose coupling and uniform identifiers help systems scale)
  • 16. Thank you [email_address]
  • 17. Extra
    • Towards a ‘science of software systematics’*
    • *Scacchi, Walter, “Understanding Open-Source Software Evolution”, in Madhavji, Perry and Ramil, eds., Software Evolution and Feedback: Theory and Practice, Wiley, 2006.
    • It is increasingly difficult to point to any one piece of software or code as a ‘standalone’ system. Even embedded systems interact with servers, other devices, etc. Powerpoint has online help, embedded drawings, etc.
    • Cf. Jackson’s problem frames
    • Antón and Potts described ‘functional paleontology’, a history of user-facing features at a telephony company