Abstract:
Debugging real time issues presents a unique set of challenges and requirements to the developer. Normal debugging techniques such as breakpoints, printf() statements, and logging frequently fail to locate the problem and can actually make the issue worse. This presentation examines why common debugging techniques fail when applied to real time issues, and then presents tools and techniques which can successfully address the unique challenges of real time debugging.
Bio:
Lloyd Moore is the founder and owner of CyberData Corporation, which provides consulting services in the robotics, machine vision, and industrial automation fields. Lloyd has worked in the software industry for 25 years, with formal training in biologically based artificial intelligence, electronics, and psychology. Lloyd is also currently the president of NWCPP and an organizer of the Seattle Robotics Society Robothon event.
2. Overview
Definition of Real Time
Problems with Breakpoints
printf() – the old standby
Memory Buffers
I.C.E. and Trace Buffers
On Chip Debugging
Protocol Sniffers
Logic Analyzers and Scopes
Techniques & Best Practices
Summary
3. Definition of Real Time
MANY definitions depending on the context of the problem and who you talk to!
All generally refer to a condition where the answer is wrong if it is “late”.
Soft Real Time: Refers to a condition where you have a guarantee that you will get into some portion of the code in a bounded time (latency/delay).
• Typically the ISR routine based on a hardware interrupt
• Generally what is intended if an operating system is in use
Hard Real Time: Refers to a condition where you have a very tight latency requirement, often on the same order as the machine cycle of the processor. Jitter may also be a requirement – cannot trigger too early either.
• OS CANNOT be involved as it is just too complex
• Typically used with smaller microcontrollers or programmable logic
Real Time Problems are fundamentally different from other problems because the answer was typically correct at some point in time, just not this point in time.
4. Problems with Breakpoints
• Breakpoints stop the thread of the process in which they are contained
• Only the thread containing the breakpoint is guaranteed to be stopped, though typically the containing process is stopped as well.
• Not all threads will stop at the same time in any case, particularly on multiple processors!
• Other aspects of the system being debugged are typically NOT stopped:
• Hardware generating interrupts
• Timers and real time clocks
• Other processes which may be communicating over some port/pipe
• Other processors in the system
• The root of the problem is that part of the system you are debugging is stopped and part of the system continues to run.
• ALL Real Time guarantees just went out the window! Any answers are now likely late and therefore wrong!
• This leads to an inconsistent state of the overall system as some things continue running and others are stopped.
5. Some Real World Implications
In-machine issues:
• Buffer overflows / overwrites for shared memory buffers
• Interrupts get missed and DMA status becomes corrupted
• Timeout values expire without ever being considered
Between-machine issues:
• Communication port overflows – data being sent is not removed/received
• Have actually had this crash commercial USB drivers, forcing a PC reboot!
• Upstream data sources time out waiting for responses
• Downstream data sinks may enter unexpected conditions
• These are generally good test cases anyway, as they occur in the real world – find ’em & fix ’em!
Hardware / physical issues:
• Motors may continue to run!! (REALLY BAD!)
• Your robot runs off the table and/or into a wall! (Bigger bots may go through a wall!!!)
• Heater may continue to heat once the desired temperature is reached
• Processes that are being controlled may enter unsafe operating conditions.
• Also really good to simulate, as they could happen in production – fix any safety issues!!
6. Well then just use printf() or similar!
• This is what most developers do when faced with a breakpoint issue
• Advantages:
• Quick, simple, common practice
• Keeps the process running
• Works in many cases
• Disadvantages:
• Still alters program timing, sometimes considerably – consider serial port debugging: even at 115200 baud it takes 868+ µs to send 10 characters
• Needs some other resource to be activated, which can radically alter system state
• I/O stream – generally the fastest, but will trigger other processes
• Disk access – may or may not block based on buffering; another process is activated
• COM port – physical ports are slow; a virtual COM port radically alters the USB environment
• Even non-blocking calls will alter thread behavior and timing
• Simply not fast enough for many real time applications!
• Small embedded applications may not have the needed resources available
• May alter hardware interrupt patterns and possibly DMA usage
• Tracepoints are basically breakpoints tied to printf()
7. Memory Buffer Debugging
The practice of reserving a section of memory to write debugging information into. May be a linear array or a circular buffer.
• Advantages:
• Much faster than printf(), though it will still take some time
• Much less likely to alter gross thread behavior or other system state – no new threads are spun up
• Disadvantages:
• Usually have to write your own, or at least mess with much more code than printf()
• Data is hard to get at – you still need to stop the system at some point or transmit the memory contents
• At least you have the option of doing this outside a critical time window
• Limited data storage, especially on a microcontroller (maybe a couple of K)
• May have to alter your memory map on a small device to allocate the memory
8. The Real Issue
Fundamentally, EVERYTHING you do to observe the program will somehow alter its execution environment!
• Normally not a big deal for general application development, as resources are plentiful and processors are fast! Also works for some soft real time issues.
• For hard real time applications you are typically near the limits of the processor to start with
• For embedded applications, resources are much more scarce
• Race conditions are extremely sensitive to timing changes
• Examples:
• Program timing altered, as program instructions and flow are also altered
• Resources that are not normally used being called into service
• Program alignment in memory can be altered (PIC processors use paged memory)
I call this the Heisenberg Uncertainty Principle for Software.
9. Getting around HUPfS
What needs to be done is to add resources that ARE NOT part of the system we are debugging – at least as we intend to use it in production!
Yes, the added resources will each have their own impact on the system – current draw, line inductance, line capacitance, etc. These impacts will be small compared to the methods we have talked about so far.
Furthermore, if you are seeing problems introduced by these debugging methods, it is an indication of something that may be marginal in the design anyway, and it should at least be investigated if not fixed outright.
10. I.C.E. and Trace Buffers
I.C.E. stands for In-Circuit Emulator. This is a piece of hardware and software that is typically inserted between the processor and the rest of the board to accomplish debugging.
It consists of a “pod” that holds or replaces the processor and a “chassis” that contains specialized debugging hardware, including a trace buffer.
The trace buffer is a big chunk of memory that records some or all of the processor signals in real time. These can then be analyzed to determine program flow and variable values.
11. More on the I.C.E.
Advantages:
• TRUE real time debugging! (At least for microcontrollers)
• In most cases has NO impact on executing program
• Can read out data in real time while program is running
Disadvantages:
• Can be VERY expensive – start at about $1K and go up to over $20K!
• Sometimes require a special “bondout chip” to access internal processor
signals
• Generally not usable for high speed processors as line inductance off the
main board alters signal timing significantly
• Oftentimes making the physical connection is problematic or impossible
Generally not used all that often today, but still available.
12. On Chip Debugging
On Chip Debugging was developed to address the issues with ICE, in particular
connectivity, bondout chips and cost.
Places many of the functions of the ICE onto the die of the processor and
connects to these resources with a special serial connection.
Advantages:
• Typically has no impact on running program
• Comes integrated with the chip and the tool chain
• Solves about 80% of the debugging issues
• More complex implementations can also have a trace buffer and a serial wire
debugging output
Disadvantages:
• Most have little support for real time debugging issues
• Trace buffer, if present, is small
• Serial wire debugging, if present, is typically limited to 1Mbps
• Does raise the cost of the chip in production
13. Protocol Analyzers / Sniffers
A protocol sniffer is a piece of hardware that sits between two
communicating devices, or on a common bus shared with the devices being
debugged, and captures the communication data.
• Wireshark (free!!) is likely the best known of these, allowing you to
monitor and record network communications in real time! (The hardware is
an additional PC and a switch with port mirroring.)
• Other products available for USB, RS-232, wireless, SPI, I2C, I2S, etc.
14. Protocol Analyzers / Sniffers
Advantages:
• Free to low cost (<$1000) for many applications
• Near ideal solution if you have a communication problem
• Can capture in real, or near real, time with NO impact on running programs
Disadvantages:
• Can sometimes be temperamental to use; each one seems to have its
own quirks
• Does impact the electrical signals – if these are marginal to start with it
could create additional issues
• In this case you need to solve these issues anyway, and it is better to detect them
now, before the product goes into production!
• Generally dedicated to one particular protocol – may need a collection of
these depending on what you are doing
15. Logic Analyzers and Oscilloscopes
• Logic analyzers record groups of digital signals at a fixed rate, and then
make these signals available for viewing (8-64 channels typical)
• Oscilloscopes plot voltage vs. time curves instead of just binary
values, typically on fewer channels (2-4 channels typical)
• Can also purchase “mixed signal” models that have both in one unit
• Two “flavors” are typically available – standalone or USB
• Newer models also have multiple protocol analyzers built in, and with
some you can write your own.
16. Logic Analyzers and Oscilloscopes
Advantages:
• Usually pretty cost effective, especially for USB models ($200 - $2500)
• Zero to low impact on running program
• May need to insert code to output debug values or states, or toggle a bit to watch
• Very flexible tool for debugging real time and hardware
issues, particularly if there are protocol analyzers available
• Most have very complex triggering modes to help manage the deluge of
data you can get with them
Disadvantages:
• Can be a real pain to hook up!
• Can be complicated to use, and trigger properly
• High level information can be hard to interpret, particularly on an
oscilloscope
17. Logic Analyzer w. I2C Protocol
Sample screen capture of a USB based logic analyzer showing two I2C
captures and the resulting byte values from the I2C interpreter.
Notice the “N” at the end of the interpreted traces – it indicates a NAK
from the device, so something is clearly not correct here!
The Wt: indicates a byte being written, which should be 60 (decimal) – it is
clearly being interpreted as zero and NAK’d by the I2C slave as well.
18. I2C Capture on an Oscilloscope
This is a capture of the same four waveforms on an oscilloscope.
Notice that the leading edge of each trace is curved instead of squared off.
While the logic analyzer only captures digital levels, the oscilloscope
captures analog levels allowing you to debug finer details of the
communication.
19. Techniques
1. Communications Issues
a) Simply use one of the protocol monitors to capture the communications directly
b) Timing information is also typically obtained here
c) If you allow for “debug frames/packets” then you can send additional information
2. Use simple “bit twiddling” to output state information and watch on
scope or LA – use this much like you would a printf() statement
a) Very common tactic is to toggle an I/O pin in software and record on a scope
b) Make sure this code is in the form of a macro or in-lined, do NOT use a function call
here!
c) Can also toggle two pins at different points to measure order or timing
d) Can output a stream of state information to a port if available (8 bit port is great for
this)
e) Program flow can be traced by putting out numbers to a port and then capturing on
a logic analyzer
3. By knowing where your code is NOT sensitive to interruption you can
often place debugging actions (breakpoints, debug code) in those areas
20. More Techniques
• Most real time issues on a PC or “large” hardware involve
communications with real world or another computer
• Tap into and monitor this communication link
• Check to see if the processor you are using has a debugging module built
in with a trace buffer.
• May need to use a special debugging package to access this
• Place debugging instrumentation AFTER the “critical window”
• May be able to stop the program and then look at historical data, such as data buffers or
other program state, to determine what happened during the “critical window”
• Usually you will have to restart the debugging session after one attempt, but it can move
you forward
• May be able to instrument or measure a “surrogate” to help determine
timing
• Power consumption is common here, as peripherals spin up and down
• RF emissions may change as radios go on and off (an AM radio was used in the olden
days – it won’t likely help today!)
21. Intermittent Problems and Statistics
• Intermittent problems require special attention to ensure that the changes
you are seeing are not due simply to chance
• Need to know your failure rate and track this as you make changes
• Consider a case where you see a failure on 3 out of 10 trials:
• Failure rate is 30% (3/10 or 0.30)
• Success rate is 70% (1.00 – 0.30)
• Now you make a fix, test again for 10 trials, and see zero failures! Did this happen by
chance?
• Compute the chance (probability) that you got 10 successes by chance:
• The chance of success on one trial is 0.70 (70%); raise it to the power of the number of
trials
• 0.70 ^ 10 = 0.028 – only a 2.8% chance of seeing 10 clean runs if you didn’t really fix
the bug!!!
• Can extend this method to measure impact of instrumentation and
multiple fixes
22. Intermittent Problems and Statistics
General process:
1. Determine the specifics needed to repeat the problem
2. Determine the percentage of the time the problem occurs, and possibly a
confidence level – yes you have to do math here!
3. Add instrumentation (code and/or hardware) to begin investigating the
problem
4. Ensure that you HAVE NOT changed the result from step 2! If you have,
determine whether you are still looking at the same issue and can continue. If there
are any doubts, find a different way to instrument!
5. Apply a proposed fix for the problem and re-run step 2.
6. Interpret the results from step 5.
1. If you think you have solved the problem, remove the instrumentation and test AGAIN!
Then use the math from the previous slide to see if you fixed something by chance.
2. If you have not solved the problem, remove the fix attempt and try again!
3. You may end up accumulating several fixes that need to be considered together – make
these a SEPARATE trial; do NOT just start adding stuff until it works!
23. Best Practices
1. Use the right tool for the job! None of these solutions are one size fits all!
2. If you have “hard real time” requirements, partition the problem at design time to
isolate those requirements accordingly. (No OS, or even no processor at all)
3. Design for test
a) Leave extra resources available in the development design for debugging that you may
not have in production (memory, I/O, connectors, prototype area, an extra UART)
b) Have a modular design that can be instrumented between major components
c) If possible provide a standard connector for hooking up a protocol analyzer, logic analyzer or
scope
4. Purchase instruments that are flexible and integrate with the PC
5. Build the system up in layers and fix problems as soon as they are found to
prevent interactions or having to “peel the onion”
6. Use statistical methods to measure impact of instrumentation and fixes
7. Be prepared to give up the “divide and conquer” debugging methodology, some
real time problems by their nature require a holistic approach
24. Summary
1. Real time debugging requires a variety of techniques to address different
types of issues
a) Hard real time issues will require additional hardware to debug properly, due to the
Heisenberg Uncertainty Principle for Software
2. Design your system to allow for test and debug
3. The most complex issues will require both hardware and software
approaches to debug effectively
4. Some problems will require statistics to estimate actual results
5. Every time you modify software, be aware of the resources you are using
– even if you don’t think it matters! (Yes, it usually does!)