Where Did All The Errors Go?
European Dependable Computing Conference
Prof. Ian Phillips
Principal Staff Engineer
Visiting Prof. at ...
Industry Award 2008
Opinions expressed are my own ...
Links to Pdf and SlideCast @ http://ianp24.blogspot.com
When we think of Computing we think of ...
HPC and Mainframe
... maybe Desktop
... but not really Laptop or (Heaven forbid) Pocketable?
The Visible Face of Computing Today
Essential but not Vital ... All want Reliable
The Invisible Face of Computing Today
Unrecognised but Vital ... All need Dependable
... State (s) and Time (t) are usually factors in this.
It spans phenomena ranging from human thinking to calculation in the narrower sense.
Usually we use it to animate analogies (models) of real-world situations
... frequently fast enough to be used as a stabilising factor in a loop (Real-time).
... Not prescriptive about the choice of Implementation Technology!
... Nor prescriptive about Programmability!
So What is Computing ...
A mechanism for the algebraic manipulation of Data ...
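To make that definition concrete, here is a minimal sketch in C (my own illustration, not from the talk; the integrator and its constants are assumptions) of a computation where State (s) and Time (t) are explicit factors, of the kind used as a stabilising element in a real-time loop:

/* Minimal sketch: computation as the algebraic manipulation of Data,
 * with State (s) and Time (t) as explicit factors. A discrete-time
 * integrator of the kind used to stabilise a real-time control loop. */
#include <stdio.h>

int main(void)
{
    double s  = 0.0;   /* State: the accumulated value */
    double dt = 0.01;  /* Time step (seconds)          */
    double u  = 2.0;   /* Input data: a constant rate  */

    for (int k = 0; k < 100; k++)   /* 100 steps of dt = one second */
        s = s + u * dt;             /* next state = f(state, input, time) */

    printf("s after 1 second = %.2f\n", s);   /* ~2.00 */
    return 0;
}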
Hipparchos’s Antikythera - c87BC
c.190 BC – c.120 BC.
A Machine for Calculating Planetary Positions
Technology: Metal, Hand-Cut Gears, Analogue
Found in the Mediterranean in 1900 (Believed there might have been 10’s)
Orrery c1700 ... Planet Motion Computer
Inventor: George Graham (1674-1751). English Clock-Maker.
Single-Task, Continuous Time, Analogue Mechanical Computing (With backlash!)
A Machine for Computing Polynomial Tables
Technology: Metal, Precision Gears, Digital (base 10)
Beyond the gear-cutting technology of the day
Babbage's Difference Engine - 1837
Amsler’s Planimeter - c1856
Planimeter 2014 !
A Machine for Calculating Area of an arbitrary 2D shape
Technology: Precision Mechanics, Analogue
Available today ... Electronically enhanced
They sell things that Customers want to buy
Supporting the End-Customer’s needs ... Who may be several ‘layers’ above their business.
Focus on their Core Competencies in a Globally Competitive Market
Avoid Commoditisation by Differentiation
Cost and Quality (by improving Process) ..and..
Improved Business-Models (which make the Money) ..and..
New/Improved Technology (which are Expensive and/or Risky)
Product Development is a Cost (Risk) to be Minimised
Technology (HW, SW, Mechanics, Optics, Graphene, etc) just enables Options!
New-Technology may cost more (including risk) than it delivers in Product Value!
Over-Design costs ... Cannot afford the Precautionary Principle!
... Because successful End-Products fund their entire (RD&I) Value-Chains
... Their Technologies will be an economic necessity in (all) lower volume markets!
Computing Technologies in Business Context
Businesses have to be Competitive, Money Making Machines today ...
... Old Compute Markets remain; but are no longer the Technology Drivers!
Business Opportunities Drive Technology Developments
...And 21c Products are increasingly ‘Intelligent’
[Timeline 1970-2030: from ‘Computing for specific tasks’ to ‘Computing as part of our lives’]
How often can ...
An Anti-lock Braking system be unavailable?
Your Mobile Phone crash/restart?
An Autopilot be unavailable?
... As often as it likes: As long as it is available when you need it!
The Power Grid crash/restart?
An Engine Management unit get stuck at Full Throttle?
A spurious Cash Transaction in your Bank Account?
A PC crash before it is unusable?
Weather forecast be incorrect before it matters?
... Surprisingly often: Humans are inclined to blame themselves.
... Dependability is Subjective; Application, User and Context dependent (Quality)
What Dependable Computing do we Expect?
“To be trusted to do or provide what is needed” (Merriam-Webster)
End-Products are about Function, not about Technology
You can’t tell which bits are done in Hardware and which in Software?
Hardware Module? Software Module?
Hardware + Software Module ?
... So where are the Dependability Vulnerabilities located?
Boolean Mathematics is Dependable; but implementation depends on reliably mapping
its equations to the physical world through Logic-Gates
(For HW and SW!)
CMOS has been a reliable Boolean mapping for 30 years, but ...
Today’s 20nm transistors have larger variability, and there are more of
them on a chip (Typically 500M in 2012)
At 70degC, Vtn = 130mV (sigma ~25mV); around 1 in 5 million
transistors have Vt < 0 (Can’t be turned off; see the sketch below)
That’s ~100 transistors/chip that don’t switch off
And another hundred that only turn-on weakly (low drive/slow)
And they will always be randomly placed!
... So today’s chips shouldn’t work?
Is Hardware (Logic) Dependable? 1/3
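As a sanity-check on the statistics above, a minimal sketch (my own, assuming a normal Vt distribution; the mean, sigma and transistor count are the slide’s figures). The one-tailed Gaussian estimate below gives ~1 in 10 million and ~50 always-on transistors per chip; the slide’s rounded ‘1 in 5 million’ and ‘~100/chip’ are the same order of magnitude.

/* Sketch: expected number of 'can't be turned off' transistors per chip,
 * assuming Vt is normally distributed (mean 130mV, sigma 25mV at 70degC). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double mean_vt = 0.130;   /* mean Vtn at 70degC (V)      */
    double sigma   = 0.025;   /* standard deviation (V)      */
    double n_trans = 500e6;   /* transistors per chip (2012) */

    double z     = mean_vt / sigma;            /* Vt < 0 is a 5.2-sigma tail */
    double p_neg = 0.5 * erfc(z / sqrt(2.0));  /* one-tailed Gaussian: ~1e-7 */

    printf("P(Vt < 0) ~ %.1e  (about 1 in %.0f million)\n",
           p_neg, 1.0 / (p_neg * 1e6));
    printf("Expected always-on transistors per chip: ~%.0f\n",
           n_trans * p_neg);
    return 0;
}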
Mitigating this we have ...
Transistors: Not all ...
Are at 70 degC even if the die is (local variation)
Are Minimum Size ... Increasing ‘area’ reduces variability
Are on Critical Paths ... And ‘chains’ of gates perform closer to average!
Have (easily) Observable Non-Functionality ... The effects can be very subtle.
CMOS Logic: Is very robust and will continue to work with extreme transistors
Leaky Gates and Faster Transitions are not usually failure criteria
The chance of a second extreme transistor on a single Critical Path is of the order of <1:1,000,000
Memory: Circuits are much more sensitive to Vt/gm variation ...
But spare rows/columns are part of SRAM designs and allow lots of defects to be ‘repaired’ (see the sketch below)
AND >75% of typical SoC die area is memory, so ...
Most of the sensitive area has a repair strategy! ..and...
The rest is inherently more robust!
Is Hardware (Logic) Dependable? 2/3
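A minimal sketch of the ‘spare rows’ repair idea mentioned above (purely illustrative; in a real SRAM the remapping is done by fuse-programmed address comparators in hardware, not software):

/* Sketch: spare-row repair. Rows found faulty at production test are
 * remapped to spare rows, so the array still presents a fully working
 * address space to the rest of the chip. */
#include <stdint.h>

#define ROWS       1024   /* rows in the main array         */
#define SPARE_ROWS 8      /* extra rows provided for repair */

static uint16_t bad_row[SPARE_ROWS]; /* faulty rows found at test */
static int      n_repaired;          /* spares currently in use   */

/* Record a faulty row at production test; fails if out of spares. */
int repair_row(uint16_t faulty)
{
    if (n_repaired >= SPARE_ROWS)
        return 0;                    /* defects exceed the repair budget */
    bad_row[n_repaired++] = faulty;
    return 1;
}

/* Translate a logical row address to a physical one, diverting any
 * repaired row into the spare region above the main array. */
uint16_t map_row(uint16_t logical)
{
    for (int i = 0; i < n_repaired; i++)
        if (bad_row[i] == logical)
            return (uint16_t)(ROWS + i);  /* use spare row i instead */
    return logical;                       /* normal, healthy row */
}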
But we haven't included ...
Internally and Externally generated synchronous supply noise? (Greater susceptibility at lower voltages)
High-energy particles? (Greater susceptibility at smaller geometries)
Wear-out (Vt/Gain drift)? (Greater susceptibility at smaller geometries)
Temperatures greater than 70degC (140C is not uncommon)
Limitations of Verification and Test (Limited exploration of state-space ... a design with just
a few hundred state-bits already has more reachable states than could ever be simulated)
We are repeatedly multiplying tiny improbabilities by large numbers ...
And many of the values are only guesses!
We have no real idea about the reliability/dependability of modern Systems or Components
We only know that as process geometries shrink, Susceptibility will get worse ...
Chips will get ever more complex (and more chips will be used in more complex Systems)
Transistors will get smaller and Designers will erode safety margins to get performance
... Despite this Chips and Systems do Yield today more than we would rightly expect ...
... So we must be utilising Unknown Safety Factors!
Is Hardware (Logic) Dependable? 3/3
All Software Crashes!
Software providers seldom guarantee the functionality of their product
Quality is tested-in; and improved by bug-fixes/patches in the field (To what level?)
So software Reuse offers improved Quality and Productivity (But over what?)
Residual Errors ...
No code has zero residual errors!!
Well structured and tested Source-Code has ~5 errors per 1,000 lines of code (E-KLOC)
Commercial code is typically ~5x worse than this (see the sketch below)
No Useful Correlation between residual-errors and their system-impact severity
Only the Heuristic, that ‘most of them are harmless’.
Formal-Methods are better; but cost is high if you need a clean-sheet design.
Even Perfect-Software would have to work with an Imperfect-Platform
Don’t underestimate the Commercial Importance of TTM and Cost !!!
Is Software Dependable? 1/3
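To put those E-KLOC rates in perspective, a minimal sketch (the 10 MLOC codebase size is a hypothetical assumption, not a figure from the talk):

/* Sketch: residual-error arithmetic at the rates quoted above. */
#include <stdio.h>

int main(void)
{
    double kloc        = 10000.0;  /* hypothetical 10 MLOC software stack */
    double e_kloc_good = 5.0;      /* well-structured, tested code        */
    double e_kloc_comm = 25.0;     /* commercial code: ~5x worse          */

    printf("Residual errors, well-structured: ~%.0f\n", kloc * e_kloc_good);
    printf("Residual errors, commercial:      ~%.0f\n", kloc * e_kloc_comm);
    return 0;   /* ~50,000 and ~250,000 latent errors respectively */
}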
Demonstrating the limitations of achieving Quality through Test ...
Is Software Dependable? 2/3
Hardware and Software Design are indistinguishable ...
// A master-slave type D-Flip Flop
module flop (data, clock, clear, q, qb);
input data, clock, clear;
output q, qb;
// primitive #delay instance-name
// (output, input1, input2, .....),
nand #10 nd1 (a, data, clock, clear),
nd2 (b, ndata, clock),
nd4 (d, c, b, clear),
nd5 (e, c, nclock),
nd6 (f, d, nclock),
nd8 (qb, q, f, clear);
nand #9 nd3 (c, a, d),
nd7 (q, e, qb);
not #10 inv1 (ndata, data),
inv2 (nclock, clock);
Hardware (Verilog Language)? Software (C Language)?
/* Use the PC's timer to check */
/* processing time */
printf("input loop count: ");
time = clock();
/* ... timed loop elided on the slide ... */
deltime = clock() - time;
secs = (float) deltime / CLOCKS_PER_SEC;
printf("for %ld loops, #tics = %ld\n", loops, (long) deltime);
[Diagram: the same function partitioned three ways between HW and SW; Target Architecture Info guides where the split is drawn]
Is Software Dependable? 3/3
Somebody will see the bugs! (The Open Source Delusion)
“It is now very clear that
OpenSSL development could
benefit from dedicated full-time,
properly funded developers”
“OSF typically receives only
$2,000 a year in donations”
OpenSSL HeartBleed bug
Update was received just before a Public Holiday
Editor was a known and high-quality source
Code was reviewed informally and released
Editor was conflicted with day-job, family and holiday pressure
Too few resources to do a proper job.
This was an E-KLOC error ...
Not a Formatting error, nor a Functional error
It was a System error (an omission in a non-functional aspect of the code).
... Was the ‘fault’ with the software Source (OpenSSL Software Foundation (OSF)) ?
... Or a User Community too-ready to believe in the Quality of Open Source software?
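For illustration, a generic sketch of the class of omission involved (hypothetical names and shape; this is not the actual OpenSSL code): at root, HeartBleed was a missing check that the peer-supplied payload length matched the bytes actually received, so the reply echoed back adjacent memory.

/* Sketch: the missing 'non-functional' check. A length field supplied by
 * the peer must be validated against the data actually received before
 * any of the payload is echoed back. */
#include <string.h>
#include <stddef.h>

int echo_heartbeat(unsigned char *out,           /* reply buffer (>= claimed_len) */
                   const unsigned char *payload,
                   size_t claimed_len,           /* length the peer *claims* */
                   size_t received_len)          /* bytes actually received  */
{
    if (claimed_len > received_len)     /* the omitted bounds check */
        return -1;                      /* discard the malformed request */
    memcpy(out, payload, claimed_len);  /* now bounded: cannot over-read */
    return 0;
}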
Create Functional-Model1 on a ‘Generic’ Platform
Designing the Computing System ...
... is about creating a Model of Behaviour to meet Non-Functional Constraints
Translate to Functional-Model on an ‘Optimal’ Platform
1: This includes a Model of Execution such as a Java VM.
Typical 2014 Computing Platform ...
... is just 137.2 x 70.5 x 5.9 mm
Typical 2014 Computing Platform
Eight 32 bit CPUs (big.LITTLE):
• Four big (2.1GHz ARM A15) for Performance
• Four small (1.5GHz ARM A7) for Efficiency
+ Nine Mali GPU cores ...
... A ~30 Core Heterogeneous Multi-Processor ... In your Shirt Pocket!
... 21 significant ‘Chips’
2010: Apple’s A4 SiP Package (Cross-section)
IC Packaging Technology
The processor is the centre rectangle. The silver circles beneath it are solder balls.
Two rectangles above are RAM die, offset to make room for the wirebonds.
Putting the RAM close to the processor reduces latency, making RAM
access faster, and reduces power consumption ... But increases cost.
Processor: Samsung/Apple (ARM Processor)
Packaging: Unknown (SiP Technology)
Source ... http://www.ifixit.com
Processor SOC Die
2 Memory Dies
Steve Jobs WWDC 2010
2013: Samsung Solid-State Memory
Smart Memory Interface (eMMC)
16-128Gb in a single package
8Gb/die. Stacked 2-16 die/package
Handles errors in the bulk-data store
Package just 1.4mm thick! (11.5x13x1.4mm)
... Smaller than a postage stamp
2012: Nvidia’s Tegra 3 Processor Unit (Around 1B transistors)
NB: The Tegra 3 is similar to the Apple A4
Component and Sub-Systems from Global Enterprise ...
... Global Teams contributing Specialist Knowledge & Knowhow
Apple ID’d 159 Tier-1 Suppliers ...
Thousands of Engineers Globally
Est. 10x Tier-2 Suppliers ...
Including Virtual Components1 and
Sub-Systems (ARM and other IP Providers)
Multiple Technologies ...
Hardware, Software, Optics,
Mechanics, Acoustics, RF, Plastics, etc
Manufacturing, Test, Qualification, etc.
Methods, Tools, Training, etc
Tens of thousands of Engineers Globally
... More than 90% of Technology and
Methods are Reused (productivity)!
1: Virtual Components do not appear on BOM
Designer Productivity has become the Technology Driver
The Product Possibilities offered by utilising the Billions of Affordable and Aesthetically
Encapsulate-able Transistors are Commercially Beguiling!
But the only way to utilise these possibilities in a reasonable time, with a reasonable
team and at a reasonable cost; is huge amounts of Reuse of Design and Technology ...
Hardware, Software and other Technologies; Methods and Tools
In-Company: Sourced and Evolved from Predecessor Products
Ex-Companies: Sourced from businesses with lesser-known(?) Histories, but Specialist Knowledge
Reuse Improves Quality; as objects are designed more carefully, and bug-fixes are incremental
But this is a ‘trend towards zero-defects’, not a ‘zero-defects’ approach.
... Reuse Methods do seem to be good-enough for Commercial Applications!
... ‘Rigorous clean-sheet approaches’ will be orders of magnitude higher cost, so the use of
Commercial Techniques for Dependable Systems is inevitable!
... The Available Components and Sub-Systems are unreliable; “get over it!”
ARM: brings the Right Horse to the Right Course ...
... Delivering ~5x speed (Architecture + Process + Clock)
...Which means: 24 Processors in 6 Families ...
... CoreLink for Heterogeneous Multi-Processing ...
[Diagram: CoreLink™ CCN-504 Cache Coherent Network ... NIC-400 Network Interconnect, 8-16MB L3 cache, IO Virtualisation with System MMU, up to 4 cores, up to 18 AMBA interfaces, peripheral address space, and Heterogeneous processors (CPU, GPU, DSP, ...)]
… Tools, Libraries and Partners to Realize the Opportunity
Technology to build Electronic System solutions:
Software, Drivers, OS-Ports, Tools and Utilities to create
efficient systems with optimised software solutions
Diverse Physical Components, including CPU and GPU
processors designed for specific tasks
Interconnect System IP delivering coherency and the
quality of service required for lowest memory bandwidth
Optimised Cell-Libraries for a highly optimised SoC
Well Connected to Partners in the Life-Cycle:
For complementary tools and methods
Global Technology, Global Partners:
>900 Licences; Millions of Developers
We Can’t Design it Right
HW is SW; and Coding errors remain. State-space too big for simulation
exploration. Can’t model or explore whole Systems and they are too
complex for Formal methods
We Can’t Make it Right
Chips are subject to Process Imperfections and Variability. Chips and
Systems are subject to Verification and Test Escapes. Boolean math is
absolute; logic cells are not
We Can’t Keep it Right
Chips are susceptible to Supply Transients, Wear-Out and High-Energy
... And it all gets worse as processes shrink and complexity grows
... Yet we DO make Complex Electronic Systems that work!
... What is the explanation? (can we quantify it and use it?)
... Or are we just being Harbingers of an Ever-Threatening Doom?
Where Do All The Errors Go?
System-Level Dependability is what matters ...
Dependable Systems need to Reuse Components and Sub-Systems (Physical and Virtual)
for Productivity; and the only affordable ones are of Commercial quality!
Clean-Sheet design is off-the-table for almost all complex products!
... the possible exception being the (diminishing) cost-no-object market!
The Only Place to implement System-Level Dependability is in the System ‘Layer’!
Dependability of Component and Sub-Systems may be enhanced, which will help with the
System-Level task; but they cannot achieve System-Level Dependability by themselves!
... I believe this is the only viable Strategy for creation of Dependable Systems
Facing the Unavoidable Truth
Dependable on Undependable ...
Toolbox to help us “Get over It”...
The only universal interpretation of Fail-Safe is Fail-Functional!
Probably impossible for the General Case; but may be for Specific Critical Cases.
So the identification of Failure and the initiation of the appropriate Response must be at the
highest System-Layer; above the Functional-Integration-Layer.
This can include the ‘zero-case’ (In the event that it is all non-critical)
Recognising the differing requirements for Failure Survival (All cases are not equal)
Components and Sub-Systems may have protection built in, to increase their Reliability
(How probable are they to fail? How many/What type of defects can be tolerated?)
We need a Toolbox (equivalent of ‘Spare Rows and Columns’) for the System-Level
Memory Chip providers build in Repair mechanisms to overcome process limitations
Memory-System providers overcome memory limitations by handling Files not Addresses.
Redundancy (Double/Triple) is a black-box implementation strategy for logic blocks
Defensive Programming is a technique for building checking into software (see the sketches below)
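Two of those toolbox items, sketched minimally in C (illustrative only; the names are my own): a Double/Triple-Redundancy majority voter for a logic block’s results, and a defensively-programmed function that checks its inputs and outputs rather than trusting them.

/* Sketch 1: Triple Modular Redundancy. Three independent copies compute
 * the same result; a bitwise 2-of-3 vote masks any single faulty copy. */
#include <assert.h>
#include <stdint.h>

uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);   /* per-bit majority */
}

/* Sketch 2: Defensive Programming. Anticipate bad inputs and degrade to
 * a safe value instead of crashing; sanity-check the result produced. */
int32_t checked_divide(int32_t num, int32_t den, int32_t safe_value)
{
    if (den == 0)                 /* anticipated fault: stay functional */
        return safe_value;
    int32_t q = num / den;
    assert(q * den + num % den == num);   /* result self-check */
    return q;
}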
Systems are what End-Customers buy; they expect them to be Dependable Enough.
A subjective level which is Application, State and Context dependent.
Commercial Components and Sub-Systems (HW/SW) are the building blocks
Commercial use has given us the Technologies which we are economically bound to use
They work better than we would rightly expect, but we cannot quantify their quality
We can improve their Quality/Reliability/Dependability; but 100% is an asymptotic goal!
Dependable Systems must be based on Less-Dependable Components
So: System Dependability must be handled by the System-Level Software (Top-Level); only it can
determine the expected behaviour, and the appropriate corrective action, for everything in its domain.
And: Because Dependability is Application and State Dependent, then it can only be handled by a
Methodology ... Not every System state needs the same Dependability.
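What such a Methodology might look like, as an entirely hypothetical sketch: the top System-Layer maps a detected failure, in the current Application state, to the appropriate corrective action, including the ‘zero-case’ where nothing needs doing.

/* Sketch: a top-layer dependability policy. The same component failure
 * demands different responses in different system states, because
 * Dependability is Application and State dependent. */
#include <stdio.h>

typedef enum { STATE_IDLE, STATE_ACTIVE, STATE_CRITICAL } sys_state_t;
typedef enum { ACT_IGNORE, ACT_RESTART, ACT_FAILOVER } action_t;

static const action_t policy[] = {
    [STATE_IDLE]     = ACT_IGNORE,    /* 'zero-case': nothing depends on it */
    [STATE_ACTIVE]   = ACT_RESTART,   /* transparent recovery is enough     */
    [STATE_CRITICAL] = ACT_FAILOVER,  /* must stay Fail-Functional          */
};

/* Only this layer knows the expected behaviour in the current state. */
action_t on_component_failure(sys_state_t state)
{
    return policy[state];
}

int main(void)
{
    printf("failure while CRITICAL -> action %d (failover)\n",
           (int)on_component_failure(STATE_CRITICAL));
    return 0;
}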
... The Commercial Imperative won’t wait for the ‘right way’
... before it produces systems that People Depend on!