SE and AI: a two-way street
 

John Clark, RAISE'13

Presentation Transcript

    • SE and AI: A Two-way Street
      26th May 2013
      John A Clark, Dept. of Computer Science, University of York, York, UK
    • Why am I here? Why are you here?
      Aim to inspire progress in software engineering, in artificial intelligence, and at the interface between the two disciplines.
      Some aspirations/stretch goals for the next 20 years. Some are clear. Some are rather loose. Some may be met fully. Others are harder, but in trying to meet them… Something Good Will Happen.
      Target the SE and AI communities, both individually and to inspire collaboration.
      Many have roots in practical applications, but satisfying them may need theoretical advances in both disciplines.
    • Deviation!
      Inputs from Simon M Poulding, Mark Harman, Edmund Burke and Xin Yao (my DAASE colleagues).
      It is better to travel than to arrive. The map is not the terrain. … The talk is not the paper!
      I’ve kept many of the challenges but have thrown in more observational material and some questions.
      KEYNOTE NIRVANA: Get to say things that would never get past referees! This is a health warning.
    • The challenge of proof automation: striking at the heart of rigorous software engineering
    • Moore’s Law for Formal Reasoning for Software
      Rigorous software development has a long history in CS. Turing’s 1949 paper gives a proof of correctness of a program with two nested loops and outlines a more general proof approach.
      The 1960s saw classic works by Floyd and Hoare: {P} Q {R}. The 1970s and 1980s: VDM, Z and B, CSP and CCS.
      Tool support has matured, including automated reasoning. Tools generate “verification conditions” that must be proved for the software to be demonstrably correct.
      Most are dispatched automatically, but the remaining cases require manual and expensive proof effort. Technological improvements here would greatly facilitate take-up. This brings us to our first challenge.
      Challenge. Demonstrate for 20 years a 20% year-on-year reduction in the residual manual proof effort that must be expended to produce formally verified systems.
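
To make “verification conditions” concrete, here is a minimal worked example (an editorial illustration, not taken from the slides): a Hoare-style annotated summation loop and the proof obligations a VC generator would typically emit for it.

```latex
% Annotated program: sum 1..n, with loop invariant  I :  s = i(i+1)/2  \wedge  i <= n
% Triple to prove:
\{\, n \ge 0 \,\}\;\; s := 0;\ i := 0;\ \mathbf{while}\ i < n\ \mathbf{do}\ i := i + 1;\ s := s + i\ \mathbf{od}\;\;\{\, s = \tfrac{n(n+1)}{2} \,\}

% Typical verification conditions:
% VC1 (invariant holds on entry):
n \ge 0 \;\Rightarrow\; 0 = \tfrac{0(0+1)}{2} \,\wedge\, 0 \le n
% VC2 (invariant preserved by the loop body):
s = \tfrac{i(i+1)}{2} \,\wedge\, i \le n \,\wedge\, i < n \;\Rightarrow\; s + (i+1) = \tfrac{(i+1)(i+2)}{2} \,\wedge\, i+1 \le n
% VC3 (invariant and negated guard establish the postcondition):
s = \tfrac{i(i+1)}{2} \,\wedge\, i \le n \,\wedge\, \neg(i < n) \;\Rightarrow\; s = \tfrac{n(n+1)}{2}
```

VC1 and VC3 are the sort of obligations current provers dispatch automatically; it is the residue of harder VC2-like goals that the challenge targets.
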
    • Moore’s Law for Formal Reasoning for Software
      We seek a software engineering proof equivalent of Moore’s Law. This is exponentially (geometrically) ambitious and persistently feasible.
      “Classic AI” and undoubtedly “Classic CS” in the service of software engineering.
      NOTA BENE: In many successful toolsets developers build powerful domain-specific theories for their provers. An obvious example here is the ASTREE toolset (supporting AIRBUS C program analysis).
      This has an important place in any way forward, but we observe also that current theorem-proving technology seems to draw little on the technologies and ideas that AI has to offer (e.g. a raft of adaptive learning techniques).
      Provision for better tools:
      Automated proof tactic development via large-scale HPC experimentation?
      Computer crowd sourcing?
      Data mining of proof strategies and understanding their applicability?
      What do people find difficult?
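
As one concrete reading of “data mining of proof strategies”, here is a minimal editorial sketch (not from the talk) of a bandit-style learner that decides which tactic to try first on a new verification condition. The tactic names and the `try_tactic` stub are hypothetical placeholders, not a real prover API.

```python
import math
import random
from collections import defaultdict

TACTICS = ["simplify", "induction", "case_split", "smt_solver"]  # illustrative only

attempts = defaultdict(int)   # how often each tactic has been tried
successes = defaultdict(int)  # how often it discharged the goal

def try_tactic(tactic, goal):
    # Placeholder for invoking a real prover; here we just simulate an outcome.
    return random.random() < 0.3

def choose_tactic(total_trials):
    # UCB1: balance exploiting tactics that worked before with exploring others.
    def score(t):
        if attempts[t] == 0:
            return float("inf")
        mean = successes[t] / attempts[t]
        return mean + math.sqrt(2 * math.log(total_trials + 1) / attempts[t])
    return max(TACTICS, key=score)

def discharge(goal, budget=10):
    for trial in range(budget):
        tactic = choose_tactic(trial)
        attempts[tactic] += 1
        if try_tactic(tactic, goal):
            successes[tactic] += 1
            return tactic        # record which tactic worked for this goal
    return None                  # residual VC: falls back to manual proof effort

if __name__ == "__main__":
    for vc in ["vc_%d" % i for i in range(20)]:
        discharge(vc)
    print({t: (successes[t], attempts[t]) for t in TACTICS})
```
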
    • Impact Pragmatics
      Take a REAL SYSTEM that has been subject to formal development and use that as a case study.
      Tokeneer: a security system, developed using Z and SparkAda.
      Part of the motivation is that EVERY TOOL has its own idiosyncrasies/weaknesses and patterns of them. It is perfectly reasonable to expect AI to detect them and for tool environment improvements to be gained.
      There are real opportunities to do some work that will grab attention.
    • Learning from Software Engineering
      Software engineering knows the power of domain specificity, or focus more generally. There is an age-old tension between generality/expressiveness and tractability.
      Witness tools that resolve issues of:
      Freedom from deadlock (e.g. via model checking)
      Exception-freeness (e.g. via abstract interpretation); the latter may still suffer false-positive issues.
      THEY HAVE WELL-DEFINED NARROW TARGETS AND THEY AIM TO DELIVER ON THEM.
      YOU DON’T HAVE TO DO FULL FORMAL DEVELOPMENT. SOLVING SMALL PROBLEMS WELL IS GOOD.
    • Software Engineering Helping AI to Help Software Engineering: Ratcliff, White and Clark
      Here we use evolutionary computation to evolve candidate invariants from trace data – i.e. what Daikon does.
      This generates thousands! Not all are interesting.
      But see which invariants are broken by mutant programs. Some are very special to the original program. Those are INTERESTING.
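
A toy sketch of the workflow described on this slide (not the Ratcliff, White and Clark implementation): keep candidate invariants that hold on traces of the original program, then flag as interesting those that mutants break. The candidate set here is hard-coded rather than evolved.

```python
# Sketch: filter candidate invariants by traces, then rank them by mutant-kill.

def original(x):
    return abs(x) + 1            # toy program under analysis

def mutant_a(x):
    return abs(x) - 1            # seeded fault: wrong offset

def mutant_b(x):
    return x + 1                 # seeded fault: dropped abs()

# Candidate invariants over (input, output) pairs; on the slide these are evolved.
CANDIDATES = {
    "out >= 1":           lambda x, y: y >= 1,
    "out > in":           lambda x, y: y > x,
    "out == abs(in) + 1": lambda x, y: y == abs(x) + 1,
}

def holds_on(program, invariant, inputs):
    return all(invariant(x, program(x)) for x in inputs)

if __name__ == "__main__":
    inputs = list(range(-100, 101))
    # Step 1: keep only invariants consistent with the original program's traces.
    survivors = {name: inv for name, inv in CANDIDATES.items()
                 if holds_on(original, inv, inputs)}
    # Step 2: an invariant is "interesting" if some mutant violates it.
    for name, inv in survivors.items():
        killed = [m.__name__ for m in (mutant_a, mutant_b)
                  if not holds_on(m, inv, inputs)]
        print(f"{name:20s} broken by: {killed if killed else 'no mutants'}")
```
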
    • The challenge of trustable real-time AI
    • trustable real-time ai Modern critical systems (e.g. those with safety implications) will increasinglyuse AI to deliver the required services. Prototype driverless cars using imageprocessing techniques to detect humans avoid collision. Common “the non-determinism of many AIalgorithms makes their application in critical systemsdifficult”. It is the inability to provide guaranteed envelopes of system behavior that isproblematic; stochastic algorithms, for example, would be fine, provided you couldrigorously argue that their behavior is satisfactorily bounded. Functional and non-functional correctness are both relevant here. (Dealing withnon-functional performance and resource trade-offs forms an important componentDAASE project, most typically as part of the collaborative work on adaptiveautomated software engineering. Challenge. Demonstrate across a range of fundamental AI algorithms formalmachine assisted proofs of correctness and scientifically justifiedpredictive models of functionality-timing-memory-power-other tradeoffs.
    • Principle: people and computers are not the same
      Understand it, live with it, and then embrace it.
      Or… Vive la (les) Difference(s)!
    • Hard for humans
      Humans do some things well, some things badly, and some things not at all.
      Forget the doom and gloom. A lot of software works, and works acceptably well. This is largely due to human efforts. We humans actually can do a fair job… But…
      Ask a human to write an image classifier that distinguishes pictures containing cats from pictures containing dogs. Rather hard, and standard specification and refinement techniques are not much use.
      Furthermore, ask them to write such a binary classifier that works only 80% of the time. Che? It’s just not the sort of thing we do well at all.
      But why should we care?
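
For contrast with specify-and-refine development, here is a minimal sketch of how such a classifier is obtained in practice: learned from labelled examples against a statistical accuracy target. It uses scikit-learn, and the image data is a random stand-in for real cat/dog pictures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for flattened 32x32 greyscale images: label 0 = "cat", 1 = "dog".
X = rng.normal(size=(400, 32 * 32))
y = rng.integers(0, 2, size=400)
X += y[:, None] * 0.2            # inject a weak class signal so learning is possible

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)   # no specification, just examples
clf.fit(X_train, y_train)

# The "requirement" is statistical (e.g. at least 80% accuracy), not functional.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```
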
    • Vive la (les) Difference(s): Making n-version programming work!
      Critical systems hardware: majority voting in a 2-out-of-3 architecture allows continued operation in the presence of a single hardware fault.
      The software variant of this idea is more controversial. It is questionable whether independence holds between developing teams, and teams work from the same, possibly flawed, specification.
      [Diagram: input goes to all processors; Prog 1, Prog 2 and Prog 3 feed a majority vote on the output.]
    • Vive la (les) Difference(s): Making n-version programming work!
      Automated programming may actually make the idea more palatable. Programming teams may make the same assumptions and the same mistakes, but automated program discovery techniques can give sets of programs with complementary weaknesses.
      We only need the majority to be correct on any training and subsequent example. Ensemble-based approaches…
      Challenge. Demonstrate a credible approach to N-version automated programming that is scientifically grounded and capable of satisfying the assurance requirements of appropriate regulatory authorities.
      Suitable applications will have to be identified. This challenge has roots in both SE and AI.
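
A minimal sketch of the voting idea, with automatically generated variants stood in by hand-written toys: each variant is wrong on a different input, so the 2-out-of-3 majority is correct everywhere.

```python
from collections import Counter

def variant_a(x):
    return x * x if x != 3 else 0       # weakness: wrong at x == 3

def variant_b(x):
    return x * x if x != 7 else 0       # weakness: wrong at x == 7

def variant_c(x):
    return x * x                         # happens to be correct everywhere

def majority_vote(variants, x):
    # Run every variant on the same input and return the most common output.
    outputs = [v(x) for v in variants]
    return Counter(outputs).most_common(1)[0][0]

if __name__ == "__main__":
    variants = [variant_a, variant_b, variant_c]
    assert all(majority_vote(variants, x) == x * x for x in range(10))
    print("majority vote masks each variant's individual weakness")
```
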
    • Vive la (les) similitude(s): Making computers like humans!
      Humans are expensive and get bored very quickly. There is great need for human testers in many systems.
      A long-standing quest: to create a system that cannot be distinguished from humans. For most people the challenge was for humans to create such a system.
      Challenge. Bring AI to bear to create high-performing proxies for humans for specific purposes.
      What we really need is automated human compilation. (Humans are programs too, by the way.)
      But this should also allow for the “dumb user” – the one who does something that screws the system up in an unanticipated way.
      Abstractly, this reduces to modeling of traces/sequences – what is the range of approaches for this?
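
One possible answer to the closing question, sketched for illustration only: learn a first-order Markov model of user actions from logged traces, then sample likely sequences for typical users, or unlikely ones as a crude proxy for the “dumb user”. The traces below are invented.

```python
import random
from collections import defaultdict

TRACES = [
    ["login", "browse", "add_to_cart", "checkout", "logout"],
    ["login", "browse", "browse", "logout"],
    ["login", "search", "add_to_cart", "checkout", "logout"],
]

# Estimate P(next action | current action) from the traces.
counts = defaultdict(lambda: defaultdict(int))
for trace in TRACES:
    for current, nxt in zip(trace, trace[1:]):
        counts[current][nxt] += 1

def sample_sequence(start="login", steps=6, invert=False):
    """invert=False follows likely transitions; invert=True prefers rare ones,
    a crude stand-in for unanticipated 'dumb user' behaviour."""
    seq, state = [start], start
    for _ in range(steps):
        options = counts.get(state)
        if not options:
            break
        weights = [1.0 / c if invert else float(c) for c in options.values()]
        state = random.choices(list(options), weights=weights)[0]
        seq.append(state)
    return seq

if __name__ == "__main__":
    print("typical :", sample_sequence())
    print("unusual :", sample_sequence(invert=True))
```
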
    • trustable real-time ai Finally, as part of our drive for engineering trusted systems, we envisagesignificant exchange of ideas on software testing. Challenge. Draw on the testing expertise of SE and AI to develop acredible and scientific basis for the testing of complex systems. We envisage further uses of AI of for stress testing of systems (but thescientific justification for such testing may be some way off). In addition, enhanced fault based (e.g. mutation) testing will likely play animportant part in testing systems of systems based on AI technology (e.g.agent based systems).
    • The challenge of coming to terms with resource abundance and resource constraints
      or: The sky’s (cloud’s) the limit
      and: Just how low can you go?
    • Resource availability: Aunt Ada’s Dividend Challenge
      Extraordinary computational power. The ‘cloud’ (however constituted) is very much the topic of the day.
      One of our most computationally-minded august relatives, Aunt Ada, has now retired from programming and has invested deeply and successfully in cloud technology.
      Each year she receives a dividend of 1 billion processor-hours (with each processor clocking at approximately 1 GHz). She is free to use them as she sees fit, but they have to be spent within one year. She wishes to support speculative research in SE and AI.
      Challenge. Identify the problem from SE, from AI, or their fusion that provides the best use of such resources.
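
Back-of-the-envelope arithmetic for the scale of the dividend, assuming (as the slide does) roughly one cycle per clock tick at 1 GHz; real per-processor throughput will differ.

```python
processor_hours = 1_000_000_000          # 1 billion processor-hours per year
clock_hz = 1_000_000_000                 # ~1 GHz per processor

cycles = processor_hours * 3600 * clock_hz
print(f"total cycles available  : {cycles:.2e}")              # ~3.6e21 cycles

hours_per_year = 365 * 24
machines_needed = processor_hours / hours_per_year
print(f"machines running all year: {machines_needed:,.0f}")   # ~114,000 processors
```
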
    • The guilt trip power diet challenge
      Ever-increasing power consumption is a serious concern: some highly developed societies now live in fear of power outages.
      GUILT TRIPPING: We aim to encourage and celebrate power-frugality with a series of challenges.
      Challenge. For your favourite application or algorithm from SE, from AI, or their fusion, demonstrate a year-on-year power consumption reduction of 20% for the next 20 years.
      Challenge. (The 1 J Diet Challenge.) You are given 1 Joule of energy. Identify the most ambitious task that can be completed using no more than 1 J.
      For the benefit of the less militantly frugal, we offer the corresponding 10 J and 100 J Challenges. The above is intended as a playful attempt to spark efforts in the area of low-power functionality.
      The challenge may morph from real power to a more abstract model of power, e.g. to a virtual mapping from instructions to power consumed. Yes, you’ll have to make hardware assumptions/have some common computational model.
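
To get a feel for the 1 J Diet Challenge under the abstract instruction-to-power mapping the slide mentions, here is a trivial calculation; the energy-per-instruction figures are placeholder assumptions, not measurements.

```python
BUDGET_J = 1.0

# Hypothetical cost models: joules consumed per executed instruction.
cost_models = {
    "pessimistic (1 nJ/instr)": 1e-9,
    "optimistic (10 pJ/instr)": 10e-12,
}

for name, joules_per_instr in cost_models.items():
    instructions = BUDGET_J / joules_per_instr
    print(f"{name}: ~{instructions:.0e} instructions within 1 J")
```
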
    • The dark side
      Principle: embracing the dark side of AI can be more fun and highly productive.
      Challenge: to fully realise the destructive capabilities of AI.
    • Thinking within the box Habitual to recommend THINK OUTSIDE THE BOX . In asense this is nonsense. Aim is not to impose unnecessaryconstraints due to the mental baggage (assumptions,favourite techniques etc.). THINK WITHIN THE BOX butdo so in a way that gives a result we actually findappealing.Self-constructed box The real box
    • Thinking within the box: stressing systems
      Best is best, but very personal. Lots of AI (and indeed OR) related research in the optimisation arena.
      Provided we can engineer feedback, we can optimise for what are usually regarded as NEGATIVE performance or other criteria – STRESS TEST THE SYSTEM.
      We already see examples – e.g. growing worst-case execution times (various).
      But “stress” and “systems” are flexible beasts:
      Search-based software testing – grow test data that breaks predicates (module pre-conditions, post-conditions)
      Push non-functional properties to their limits
      Push predicates (a la Daikon etc.) to their limits by aggressive test data generation
      Attacking products of AI with AI in order to strengthen them!
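
A minimal sketch of the search-based testing point above: a branch-distance fitness gives the feedback that steers a hill climber towards a hard-to-reach branch, where a postcondition is then found to fail. The function under test, its branch constant and its postcondition are invented for illustration.

```python
import random

TARGET = 7_331                     # hypothetical hard-to-reach internal constant

def under_test(x):
    if abs(x - TARGET) < 3:        # narrow branch hiding a defect
        return x                   # defect: forgets to add 1 here
    return x + 1

def postcondition(x, result):
    return result > x              # claimed property of under_test

def branch_distance(x):
    # SBST-style fitness: how far is x from taking the target branch?
    return max(0, abs(x - TARGET) - 2)

def hill_climb(low=0, high=20_000, iterations=5_000):
    x = random.randint(low, high)
    for _ in range(iterations):
        candidate = min(high, max(low, x + random.randint(-100, 100)))
        if branch_distance(candidate) <= branch_distance(x):
            x = candidate          # accept moves that get no further away
        if branch_distance(x) == 0:
            return x               # branch reached
    return None

if __name__ == "__main__":
    x = hill_climb()
    if x is not None:
        print(f"input {x} breaks the postcondition:",
              not postcondition(x, under_test(x)))
```
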
    • The Challenge of the New Computing
      New Will Be Big
    • Teleportation
      Quantum Information Processing (QIP) and Quantum Computing (QC) offer us a radically different computational model and capability.
      Star Trek is with us now (sort of)!
      Here’s Brassard’s TELEPORTATION circuit. Here’s an evolved one from Yabuki & Iba.
    • Grover’s Algorithm
      Quantum Information Processing (QIP) and Quantum Computing (QC) offer us a radically different computational model and capability.
      Here is Grover’s Search (a fundamental building block of QC) – it allows you to search a database of size 2^N in order of 2^(N/2) queries.
      Spector et al. evolved a GS (Grover Search) circuit for two qubits.
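
For a feel of the quadratic speed-up quoted here (constants ignored): classical search over 2^N items needs on the order of 2^N lookups, versus on the order of 2^(N/2) Grover oracle queries.

```python
for n in (10, 20, 40):
    classical = 2 ** n                   # brute-force lookups
    grover = round(2 ** (n / 2))         # Grover oracle queries, up to constants
    print(f"N={n:2d}: ~{classical:,d} classical lookups vs ~{grover:,d} Grover queries")
```
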
    • Shor’s Quantum Fourier Transform
      Here is a Quantum Fourier Transform (Shor’s fundamental building block) – this is what enables factorization in polynomial time, breaking factorization-based cryptography.
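
For reference, the transform named on this slide is standard: on n qubits the QFT acts on computational basis states as

```latex
\mathrm{QFT}_N \,\lvert x \rangle \;=\; \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2\pi i x y / N} \,\lvert y \rangle, \qquad N = 2^{n}.
```
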
    • Massey et al.: Quantum Fourier Transform Generation
    • New Computing
      If we were being cruel we could say that the AI community (including me!) is capable of re-discovering what the physicists have already discovered.
      But there are actually very few genuinely different quantum algorithms around. A real opportunity…
      Challenge. Using AI techniques, generate new quantum algorithms to solve problems of acknowledged importance.
      Question: what does the new computing offer current software engineering in terms of verification capability?
    • Dynamic Adaptive Automated Software Engineering
      A major EPSRC “programme grant” (around £6.7m) over four sites: UCL (Mark Harman, PI), Birmingham (Xin Yao), Stirling (Edmund Burke) and York (John Clark – or me).
      Major focus on automation and adaptivity (off-line and on-line).
      It will have to face many of the problems discussed at this workshop:
      Squishiness (in many forms)!
      Uncertainty and its resolution.
      Pareto and other multi-objective (MO) approaches and their applicability.
      Ascertaining the limits of machine learning and what can be justified/reasoned about.
    • Acknowledgements
      Sponsors:
      DAASE: Dynamic Adaptive Automated Software Engineering. EPSRC grant EP/J017515.
      The Birth, Life and Death of Semantic Mutants. EPSRC grant EP/G043604/1.
      Many thanks to: Simon M Poulding, Mark Harman, Edmund Burke and Xin Yao.