In today's engineering environments, organizations aspire to innovate and are building ever more complex products and systems to differentiate themselves. The associated engineering effort is generating more and more data of ever increasing complexity. Trends include:
- More software and systems-of-systems designs, where a larger portion of the product value and innovation is delivered in software
- More design and development artifacts
- More product variants
- More collaboration and co-design between HW and SW engineers
- More regulations, standards, and other aspects of compliance
- More participants in the supply chain (including the SW supply chain)
- More pressure to do more with less, improving quality and productivity
- More pressure to get to market faster
In this environment of increasing complexity, engineers' day-to-day tasks can end up consuming an inordinate amount of time that could be better spent being creative and productive. Questions like those shown on this chart are often difficult to answer, especially in a timely manner. Teams need information in context at their fingertips to be productive and to make good engineering and business decisions. This data typically resides in many locations, and pulling it together in context takes a lot of time and effort and is prone to errors. We often hear engineering organizations tell us that they face challenges like:
- It takes far too long for us to find the information we need, especially when it is stored in multiple places. Oftentimes we'll miss some information, and that creates rework down the line.
- It's really difficult to understand the impact of change across multiple engineering disciplines, products, versions, and variants.
- We don't have any effective way of rapidly defining new products, versions, or variants based on previous development.
- It's impossible to get a quick at-a-glance view of the real-time status of the lifecycle metrics that are important to us.
- We lack concise information from multiple disciplines across the engineering lifecycle to support effective decision checkpoints such as ship-readiness reviews.
Research shows that knowledge workers spend a lot of time simply searching for information, and, worse, that in many cases the information isn't found, leading to rework and recreation. The same dynamic afflicts engineers and engineering data. The net effect is that engineering teams are spending a lot of time on unproductive tasks, and a growing amount of their time. Sources: studies by IDC, as well as organizations such as the Working Council of CIOs, AIIM, the Ford Motor Company, and Reuters (http://www.kmworld.com/Articles/Editorial/Features/The-high-cost-of-not-finding-information-9534.aspx); Information Gathering in the Electronic Age: The Hidden Cost of the Hunt, The Ridge Group.
This need for smarter products means that complexity will continue to rise, continuing to hinder engineering productivity and restrict innovative capacity. Unfortunately, the effort required to manage complexity increases faster than the complexity itself. New levels of collaboration and co-design are required across multiple engineering disciplines and ever growing data sets. Engineers are spending more and more time simply coping with the overhead of complexity and the resulting rework, and less time being creative doing what they are good at (and enjoy).
Taking data from the research mentioned earlier, we can work out hypothetical costs to a product development organization of 100 engineers, assuming an annual cost of $150k per engineer. More details on the tasks:
- Search: covers both simple text searches across distributed data sources (e.g. find all the requirements, test cases, work items, and design elements that include the term 'handset') and more complex queries (e.g. how many requirements for the heart monitor are related to tests that failed on their last execution run?).
- Creating documents: pulling together integrated reports and documents from multiple sources, i.e. requirements, test cases, work items, and design elements (the data and their relationships).
- Impact and coverage analysis: picking a focus point (a particular artifact) and seeing all its relationships to other artifacts, either to determine the impact of a change to the artifact in focus or to check that relationships to other artifacts exist to verify coverage (e.g. does a requirement have a related test case?). Typically performed across multiple data sources.
It's very important to stress that the impact can be far greater than the costs quantified on this slide. Missed information in searching, for example: we need to change some properties of a number of requirements for the response time of various system functions. We search, but don't find them all; we miss one. Hardware and software are designed and implemented, the system is delivered, and that function responds outside the parameters we really wanted, resulting in a high cost of rework or even product recalls. Similarly, if an area of the product is missed when performing impact analysis, then the cost and effort of implementing a change request may be incorrectly assessed and, worse still, the change request isn't fully implemented, again leading to rework costs, possibly late delivery, and, worst of all, product recalls. The resulting costs can be huge.
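The back-of-envelope cost model described above can be sketched as follows. The share-of-time percentages are illustrative assumptions for this sketch, not figures taken from the cited research:

```python
# Hypothetical cost model for a 100-engineer organization at $150k per
# engineer per year, as in the slide. The time-share fractions below are
# invented for illustration only.
ENGINEERS = 100
COST_PER_ENGINEER = 150_000  # fully loaded annual cost, USD

# Assumed fraction of each engineer's time spent on each overhead task.
time_share = {
    "searching for information": 0.15,
    "recreating information not found": 0.05,
    "creating documents from multiple sources": 0.05,
    "impact and coverage analysis": 0.05,
}

total_budget = ENGINEERS * COST_PER_ENGINEER
for task, share in time_share.items():
    print(f"{task}: ${total_budget * share:,.0f} per year")

overhead = total_budget * sum(time_share.values())
print(f"total overhead: ${overhead:,.0f} per year")
```

Even under modest assumptions like these, the overhead runs into millions of dollars per year, before counting the larger downstream costs of missed information.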
What’s really needed is a way to enable engineering teams to focus more on core engineering work and spend less time and effort in dealing with complexity and rework. We need to restore the balance and help engineering teams to work faster with higher quality by improving development processes and asking tools to do more. We need a next generation approach to both processes and the applications that support those processes.
Traditional product development was linear and led by the physical and mechanical engineering disciplines, with teams in one place and limited up-front planning. In this sea of complexity, development of the next generation of products and systems demands a top-down, holistic approach, where software is driving innovation and products must be developed with an up-front view of how software and systems interact with all other areas, before development begins. Software is no longer secondary; it is up front, so a systems engineering approach is paramount, one in which we can see all data together across engineering disciplines. Organizations must move beyond a traditional, physical BOM-based approach and view product development from a "systems" perspective: a top-down, holistic, domain-integrated view rather than a bottom-up assembly of parts. A next-gen approach for next-gen products.
Traditionally, much of the data that we create and manage during the lifecycle of systems and software projects is not open. Historically this data has been available only to users of the tool that manages it. In such closed environments, we're prevented from answering complex questions that rely on an understanding of the relationships between data managed by different tools. Furthermore, the tools in such closed environments have no common understanding of the types of data and relationships that they are managing. This problem is not new, and several attempts have been made to address it in the past:
- Using a single repository to manage all the data is unrealistic: it implies single-vendor lock-in and prevents both the adoption of best-of-breed solutions and the integration of data you may already have in existing repositories. It's simply not practical to migrate all of your lifecycle data into a single monolithic repository, and such repositories have huge operational challenges with performance, size, upgrades, backups, and so on.
- Point-to-point integrations between multiple tools are brittle and prone to breaking when tool versions are updated.
- Adoption of universal metadata standards has been shown to be unsuccessful due to the conflicting priorities and motivations of multiple vendors, coupled with the fact that it provides no flexibility for individual teams to customize.
A Linked Data architecture is at the heart of this next generation approach. Linked Data refers to a set of best practices for establishing a web of data. Linked Data can be read by machines, has explicitly defined meaning, can be linked to other data, and can in turn be linked to from other data. Linked Data has four main supporting principles:
1. Use URIs to identify things.
2. Use HTTP URIs so that you can dereference (look up) those things.
3. When someone dereferences a URI, provide some useful information about that thing.
4. As part of that useful information, include links to other URIs, so that additional, related things can be discovered.
Just as the Web relies on Uniform Resource Identifiers (URIs) and the Hypertext Transfer Protocol (HTTP) to provide a hugely scalable architecture for linking documents (HTML pages) regardless of where those documents are physically located, Linked Data uses the same underpinning technology stack to provide the same scalability for linking structured data, regardless of where that data is located. Adopting a loosely coupled approach to integration based on Linked Data principles in systems and software development allows us to truly make the most of the data that's being created, without the downsides of the other approaches that have inevitably failed in the past. We're able to derive huge benefits from applying Linked Data principles to this engineering data. By 'opening' the data managed by all of these tools, we can create a well-defined system of tools that allows users working within any tool in that system to answer complex questions about their projects that rely on the union of data held in disparate repositories.
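The four principles can be illustrated with a minimal sketch. Here an in-memory dictionary stands in for HTTP dereferencing (a real tool would issue HTTP GETs and parse RDF); the URIs, artifact types, and properties are invented for the example:

```python
# Toy "web of data": each HTTP URI dereferences to useful information about
# the thing it identifies, plus links to related URIs, mirroring the four
# Linked Data principles. All URIs and properties are invented.
DATA = {
    "http://example.org/req/42": {
        "type": "Requirement",
        "title": "Fuel pump shall shut off within 50 ms",
        "links": ["http://example.org/test/7"],
    },
    "http://example.org/test/7": {
        "type": "TestCase",
        "title": "Verify fuel pump shutoff timing",
        "links": ["http://example.org/req/42"],
    },
}

def dereference(uri):
    """Principle 3: looking up a URI yields useful information about it."""
    return DATA[uri]

def discover(start_uri):
    """Principle 4: follow links to find additional, related things."""
    seen, frontier = set(), [start_uri]
    while frontier:
        uri = frontier.pop()
        if uri in seen:
            continue
        seen.add(uri)
        frontier.extend(dereference(uri).get("links", []))
    return seen

print(discover("http://example.org/req/42"))
```

Because each repository only has to publish and link its own resources this way, tools can discover related data across repositories without any central database.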
RELM is built on top of this Lifecycle Query Engine and extends the Rational solution for systems and software engineering, making available multi-dimensional views of your engineering data and thereby enabling your multi-disciplinary engineering teams to be more productive and make better decisions. RELM's key innovation is its use of the Lifecycle Query Engine to index Linked Data from many engineering tools, which makes it possible to mine that data in new ways. The fact that the data is federated, rather than synchronized or stored in one monolithic database, means that it's possible to implement this solution in finite time and with finite ongoing maintenance effort, which is of course not true of the alternative approaches. RELM 1.0 indexes data from the tools of the Rational solution for SSE, and it has the potential to index third-party and home-grown repositories, which requires development of a lightweight connector for each repository. RELM includes an application running on the Jazz Team Server, which provides access to this data through your web browser. This gives your team new ways to visualize, analyze, and organize your engineering data so that you can get new value from it. Visualization capabilities allow you to set up custom views that are populated with near real-time data to provide new perspectives that help your team make decisions and perform tasks. For example, an automotive safety engineer may want a view of cross-lifecycle data in the context of the structure of the ISO 26262 standard, or an aerospace engineer may want to see cross-lifecycle data overlaid on a picture of the aircraft, with data shown over the parts it relates to. Team-oriented analysis capabilities across disciplines help teams quickly find relevant information and identify dependencies they might otherwise have missed, which could have downstream impacts.
For example, with RELM, searching for all engineering artifacts across multiple repositories containing the phrase 'fuel pump' becomes a simple, quick, and error-free task.

Or perhaps a supplier can no longer supply a particular electronic component, and you need to determine the impact of switching that component for a similar component from another supplier. You need to understand the potential impact on the components it interfaces with and the software that controls it. This would typically mean asking multiple teams to perform impact analysis in different tools and then assembling the data. With RELM you can find the component in your system design and generate an interactive impact analysis view that shows you all of the other systems and software design elements, and the requirements, tests, and work items associated with the component. You can work with the view to filter out any artifacts that are not relevant, and parcel out the relevant branches to the relevant engineering disciplines to analyze.

RELM also offers the potential for better reporting from this wealth of data. Answers to complex engineering questions can be visualized and displayed in the most appropriate way for the task at hand, and documents can be generated containing data from across the lifecycle to support audits, prove compliance, and deliver specifications that form the basis of contractual agreements across the supply chain.

The visualization and analysis capabilities of RELM are made all the more powerful by the ability to organize all of the indexed data to add essential context to it. You can organize all of your engineering artifacts according to the products, systems, subsystems, components, and capabilities that you are building, and use this context in your visualization and analysis. For example, it becomes easy to find all of the requirements, tests, and design artifacts related to the head-up display unit of the Japanese variant of the 2014 model of a vehicle.
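Conceptually, the interactive impact-analysis view described above is a traversal of the link graph outward from a focus artifact, with filtering by artifact type. A simplified sketch of that idea, with invented artifacts and links (not RELM's actual implementation):

```python
from collections import deque

# Simplified impact analysis: breadth-first traversal of artifact links
# outward from a focus artifact, with optional filtering by artifact type.
# The artifacts, links, and types below are invented for illustration.
LINKS = {
    "fuel_pump_component": ["pump_controller_sw", "fuel_system_design"],
    "pump_controller_sw": ["req_pump_pressure", "test_pump_pressure"],
    "fuel_system_design": ["req_fuel_delivery"],
}
TYPES = {
    "fuel_pump_component": "design",
    "pump_controller_sw": "design",
    "fuel_system_design": "design",
    "req_pump_pressure": "requirement",
    "test_pump_pressure": "test",
    "req_fuel_delivery": "requirement",
}

def impacted(focus, keep_types=None):
    """Return all artifacts reachable from `focus`, optionally filtered by type."""
    seen, queue = {focus}, deque([focus])
    while queue:
        current = queue.popleft()
        for neighbor in LINKS.get(current, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    seen.discard(focus)  # report only the artifacts affected by the focus
    if keep_types:
        seen = {a for a in seen if TYPES[a] in keep_types}
    return seen

# e.g. which requirements could be affected by swapping the fuel pump?
print(impacted("fuel_pump_component", keep_types={"requirement"}))
```

Filtering by type corresponds to removing out-of-scope artifact or relationship types from the view before handing branches to the relevant engineering disciplines.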
(A common question will be: But I already have a product/system structure in DOORS/Rhapsody/RTC/etc., so why would I want to reflect it again in RELM? Answer: Each tool has its own way of expressing this structure, and to take advantage of it you need to know the details of each tool. RELM provides a way for one knowledgeable person to extract it and make it accessible and useful to the whole team.)
- RM – Requirements Management*
- EDA – Electronic Design Automation***
- QM – Quality Management*
- CM – Change Management*
- AM – Architecture Management*
- SCM – Software Configuration Management**
- PDM – Product Data Management***
* = Covered by RELM 1.0 (DOORS, RQM, Rhapsody with DM (and Mathworks Simulink), RTC)
** = Planned for a future version of RELM (RTC SCM, ClearCase)
*** = Through custom or third-party adapters
Perceived benefits of RELM:
- Visibility of artifacts produced in the engineering lifecycle
- Reduced time for impact analysis (which can otherwise typically take weeks)
- Faster generation of documents consolidating data from multiple tools
Vision of the value of RELM: RELM should increase productivity of work and quality of products by reducing costs and time during the engineering and management of product development.
Beta environment:
- IBM® Rational® Engineering Lifecycle Manager Beta 2
- IBM Rational DOORS®
- IBM Rational Team Concert
- IBM Rational Rhapsody Design Manager Beta
- IBM Rational Quality Manager
Remember the costs discussed earlier of searching, rework/recreation, creating documents, and impact analysis. RELM can help in all of these areas. This slide shows potential benefits from RELM over a 3-year period. In year 1, RELM is used with the currently out-of-the-box supported Rational data sources and Mathworks Simulink (via Rhapsody Design Manager). In year 2, we assume expansion via partner integrations and custom adapters to include additional tools for electrical and electronic design and product data management (PDM), and in year 3 we assume further expansion to include an enterprise resource planning (ERP) tool. You can see from the graph (additional details on how it was calculated are provided in the backup section) that there's potential for considerable benefits to be realized. In addition, consider the wider impact: risks that can be mitigated, such as the impact of bad or missing data on quality and schedules, and the additional opportunities that can be realized by addressing the data issues; making the engineering team more productive could lead to faster product launches and the potential to release more products.

NOTE: Not all data in the tools is indexed into RELM. Of interest to RELM are the relationships between system elements and enough properties of the elements to gain insight into status and produce reports. From the PDM tool, for example, the mechanical design details would not be extracted, only the mechanical part representations with their associated properties. In year 3 the coverage is not 100% because not all the data from the tools is indexed by RELM, and there are other sources of product development information not covered, such as that in documents, on the intranet, in standards volumes, and in project management data.
RELM is the latest addition to the Rational SSE offering, with the potential to make your teams more productive and effective. What is your next step? Automate another discipline supported by Rational SSE? Consider RELM? Consider adding another discipline at the same time you add RELM? Let your business priorities and potential ROI drive your decision. When you are ready to take the next step with RELM, engage IBM and our Business Partners to explore a solution and deployment plan for your teams, and consider which engineering data sources your teams can better exploit. These could involve Rational tools, home-grown databases, and mechanical and electrical design tools.
Author Note: Optional Rational slide. Graphic is available in English only.
- Structured and traceable views of engineering data across the development lifecycle
- Role- and task-relevant views
- Product, system, subsystem, capability, and component centric views
- Process, standards, and framework centric views with access to supporting guidance
- Ability to create new views or customize predefined views
- Table/grid, tree, and freeform layouts
(A common question will be: But I already have a product/system structure defined in my PDM system, and to some degree in DOORS/Rhapsody/RTC/etc., so why would I want to reflect it again in RELM? Answer: Each tool has its own way of expressing this structure, and to take advantage of it you need to know the details of each tool. RELM provides a way for one knowledgeable person to extract it and make it accessible and useful to the whole team across all engineering disciplines.)
321 Gang participated in the Beta for RELM 1.0, assisted a large aerospace and defense organization with their beta participation, and shared these perceptions.
Perceived benefits of RELM:
- Increased cross-team, cross-domain collaboration
- More informed engineering decisions through visibility across engineering tools
- Quicker and more accurate assessment of the impact of changes
- Efficient generation of cross-domain views and documents for meeting contractual and compliance obligations
Beta environment:
- IBM® Rational® Engineering Lifecycle Manager Beta 2
- IBM Rational DOORS®
- IBM Rational Team Concert
- IBM Rational Rhapsody Design Manager Beta
- IBM Rational Quality Manager
Manageware participated in the Beta for RELM 1.0, assisted a large aerospace and defense organization with their beta participation, and shared these perceptions.
Perceived benefits of RELM:
- Completes the systems engineering and embedded software development environment
- Provides an overview of status across the lifecycle for product/project managers
- Delivers 'in-context' engineering views, making it easier to understand relationships across multiple disciplines
- Adds complementary value to the Rational solution for systems and software engineering
Vision of the value of RELM: a gateway to manage the development lifecycle.
Beta environment:
- IBM® Rational® Engineering Lifecycle Manager Beta 2
- IBM Rational DOORS®
- IBM Rational Team Concert
- IBM Rational Rhapsody Design Manager Beta
- IBM Rational Quality Manager
Every organization is unique in some ways, and ROI needs to be considered in each context. That said, there are some common drivers of value:
- Some are based on direct labor savings (the ones quantified on this chart): reducing unproductive, or relatively unproductive, manual effort.
- Others involve savings from reducing errors and rework. Manual efforts often introduce inconsistencies and errors that may not be discovered until much later in the development process, when fixing them is much more expensive.
- Still others are high-level benefits that drive the most ROI but are often difficult to tie directly to a tool deployment: getting to market faster, achieving more predictable project outcomes (reducing project risk), and better customer satisfaction resulting from meeting the requirements and fewer problems found during customer acceptance.
We can work with you to illustrate the potential ROI in your organization.
This shows the breakdown used to build the graph in the main section of the presentation. How does the RELM/SSE solution address these tasks?
- Search: RELM provides both simple text searching and more complex query capabilities. It's at least 50% faster, and possibly up to 10 times faster, to use RELM to search across its indexed data sources than it is to go to each individual source, search, and then compile the information. And that's assuming the data is even in an easily searchable source and isn't buried away in a closed database, document, or spreadsheet. The complex queries would be extremely difficult to do otherwise, involving lots of manipulation of the data after it's extracted from each of the data sources.
- Rework/recreation: because RELM is used to do the searching and querying, the chances of information not being found, or of the wrong information being compiled, are drastically reduced.
- Creating documents: RELM can be used together with Rational Publishing Engine (RPE) to automatically generate documents containing related information from multiple sources.
- Impact analysis: given a focus artifact, RELM automatically generates a view of all of its related artifacts. That impact analysis view can then be filtered, if there are artifact or relationship types that are out of scope for the potential change being analyzed, and individual elements can be manually removed from the view if the change wouldn't affect them. This means you can present a very clear view to the engineering team of what would potentially be impacted by a change request, enabling the team to more quickly assess the time and cost of the change.
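The complex query mentioned earlier ("how many requirements for the heart monitor are related to tests that failed on their last execution run?") is essentially a join across repositories: trivial over indexed, linked data, but laborious when done by exporting from each tool and matching by hand. A sketch over invented in-memory data (not RELM's actual query interface):

```python
# Requirements (as a requirements tool might hold them) linked to test cases
# (as a quality-management tool might hold them). All data here is invented
# to illustrate the cross-repository join; the index makes it a single query.
requirements = {
    "REQ-1": {"product": "heart monitor", "tests": ["TC-10", "TC-11"]},
    "REQ-2": {"product": "heart monitor", "tests": ["TC-12"]},
    "REQ-3": {"product": "handset", "tests": ["TC-20"]},
}
last_run_result = {"TC-10": "pass", "TC-11": "fail", "TC-12": "pass", "TC-20": "fail"}

failing_reqs = [
    req_id
    for req_id, req in requirements.items()
    if req["product"] == "heart monitor"
    and any(last_run_result[tc] == "fail" for tc in req["tests"])
]
print(failing_reqs)  # heart monitor requirements with a failing linked test
```

Without an index, each of the two lookups above lives in a different tool, and the join is done manually in a spreadsheet, which is exactly where information gets missed.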
It's also important to stress that there are larger, unquantified benefits, such as faster time to market through increased productivity (and the ability to deliver more products and releases), improved quality through more accurate and timely data, and better change control (a major source of unexpected costs and schedule delays) through automated impact analysis views.
Possible speaker text: There has been significant focus, investment, and spending on Product Lifecycle Management systems over the years, but has the investment paid off? Are companies that have made these investments thriving in today's environment? If not, why not? In the eighties, companies started using CAD/CAM tools to improve their drawing capabilities, but often they did not make corresponding changes in their business structure to optimize these investments. In the nineties, companies invested in Product Data Management systems and integrated them with their ERP systems. Now, and looking forward, companies are looking to integrate various PLM applications across disciplines. But even this intent may not take sufficient account of the software, nor of the necessity of an integrated design view through optimized systems engineering practices and tooling. The challenge today is to get the requirements right for the whole system, with the same focus they've applied to BOM management, and to develop software with the same discipline, focus, and governance that they have applied to CAD/CAM. And we need all of this with a new focus on interdisciplinary collaboration and design, that is, an integrated product change management / systems engineering focus.
Business benefits of implementing our services:
- The customer understands quickly how the Rational toolset applies to their environment, processes, and industry
- IBM (Lab Services) has become a trusted advisor to the customer
- IBM (Lab Services) reduced the number of internal IBM contacts
- An agile business, able to develop new services quickly to adjust to their changing marketplace
- 75% time reduction in the testing cycle
- Aligned the organization to use consistent system delivery processes
- Reduced the time to define system interfaces across multiple organizations in the business
- Reduced by 10 times the effort to determine the source of a defect, and saved over 7 person-years of effort in test case creation, executing 400 test cases in a few hours compared to the customer's rate of 2 test cases per week
Service offerings:
- FastTracks (approx. 5 days): used to accelerate installations and migrations; typically used for product upgrades or when starting a PoC.
- QuickStarts (approx. 5-15 days): rapidly implement Rational tools using pre-defined activities and deployment templates for customer projects.
- Assessments (approx. >15 days): employ proven methodologies to gather customer requirements and environment context when the customer may not be aware of in-house capabilities or deployments; deliver guidance on best practices for the customer's context.
- Deployment Packages (approx. >15 days): deployment options based on proven practices, tailored to support the customer's context and usage model.
- Commercial Assets (approx. 5 days): packaged tools and services providing solutions for specific customer environments; these can be tool integrations, extensions, and other automation delivered as an out-of-the-box asset.