The document summarizes a case study using systems engineering models to plan the Exploration Flight Test-1 (EFT-1) mission for NASA's Orion spacecraft. Key points:
- EFT-1 will test Orion capabilities before crewed flights, including separations, parachutes, attitude control during reentry, and water recovery.
- Systems engineering models were used to understand data and resource needs, flows, and access across distributed NASA/Lockheed Martin teams.
- Custom viewpoints were defined in SysML to address stakeholder questions and visualize mission elements like components, data exchanges, and interface requirements.
1. National Aeronautics and Space Administration
Systems Engineering Using Models
EFT-1 Case Study
Presentation to PM Challenge 2012
Thomas McVittie
Oleg Sindiy
Kim Simpson
Jet Propulsion Laboratory
California Institute of Technology
Copyright 2011 California Institute of Technology. Government sponsorship acknowledged.
2. EFT-1 Mission Overview
Mission Overview:
• Reduced Orion Block 0 configuration
• Launch FY14 from CCAFS aboard Delta 4H
• 2 orbits to high apogee
• High-energy re-entry
• Water landing & recovery
Purpose: test & observe key characteristics of the Orion spacecraft
• Key Functions:
- Nominal jettison/separations
- Parachute performance
- Attitude control/guided entry
- Water up-righting & recovery systems
• Environments
- Aerodynamic, aerothermal, acoustic, vibration, loads, communications, etc.
Team
• Lockheed Martin (Mission Lead)
• NASA (supporting – KSC, JSC, MSFC, SN)
3. EFT-1 Mission Overview (continued)
Need to ENSURE that we are able to control the spacecraft and gather all of the data needed to meet the flight test objectives.
Particularly confusing given the large number of different organizations and required vs. desired data flows.
Raised by MPCV as a significant risk.
Spawned a joint NASA/LM effort to understand (for all phases):
• What resources do we need to record and observe the flight?
• What data needs to be collected?
• What needs to be live vs. recorded?
• How will data be distributed/archived/shared?
• What data is proprietary or sensitive and needs to be protected?
The Systems Engineering team chose to use a model-based approach to augment regular systems engineering approaches.
4. Systems Engineering using Models
Formalizes the practice of systems engineering through the use of models
Broad in scope
• Integrates with multiple modeling domains across the life cycle (life-cycle support, vertical integration)
Results in quality/productivity improvements & lower risk
• Rigor and precision
• Communications among system/project stakeholders
• Management of complexity
5. Current State vs. Future State
Current State:
• Spreadsheets, viewgraphs, text documents, e-mails, meeting notes, etc.
• Individual discipline models: structural model, operations plan, power model, mass roll-up, radiation model, thermal model
Future State:
• “System Model”: coupled models of structure, behavior, requirements, and equations
• Single source of truth. Checkable. Queryable.
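The deck contains no code, but the "checkable, queryable" idea can be illustrated with a small sketch. The element names and structure below are invented for illustration only; the actual EFT-1 model was built in SysML using MagicDraw, not in code.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Component:
    name: str
    owner: str            # organization responsible for the element (illustrative)

@dataclass(frozen=True)
class DataFlow:
    source: str           # producing component name
    destination: str      # consuming component name
    data_type: str
    phase: str

@dataclass
class SystemModel:
    components: dict = field(default_factory=dict)   # name -> Component
    flows: list = field(default_factory=list)        # list[DataFlow]

    def add(self, c: Component) -> None:
        self.components[c.name] = c

    def connect(self, flow: DataFlow) -> None:
        self.flows.append(flow)

    # "Queryable": answer stakeholder questions directly from the model.
    def flows_in_phase(self, phase: str) -> list:
        return [f for f in self.flows if f.phase == phase]

    # "Checkable": every flow must reference components that exist in the model.
    def dangling_flows(self) -> list:
        return [f for f in self.flows
                if f.source not in self.components
                or f.destination not in self.components]

# Hypothetical example elements (names are placeholders, not from the mission model)
m = SystemModel()
m.add(Component("Orion CM", "LM"))
m.add(Component("Ground Station", "NASA"))
m.connect(DataFlow("Orion CM", "Ground Station", "telemetry", "entry"))
print(m.flows_in_phase("entry"))
print(m.dangling_flows())   # an empty list means the model is internally consistent
```

The point of the contrast: questions ("what is exchanged in this phase?") and consistency checks ("does every flow reference a defined component?") run against one coupled model instead of being reconciled across spreadsheets and documents.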
6. Technical Approach
Work with the distributed NASA/LM team to understand mission needs, trade options, and establish a baseline architecture.
Use the Systems Modeling Language (SysML) to define the models needed to capture the relevant information
• Intent is to have a single authoritative source of information
• Suitable for driving analysis/simulations (often via external tools) and capturing results
Understand key stakeholder concerns/questions and address them by defining custom viewpoints (which also drive the definition of model parameters)
• What imagery will be available and when?
• How will data be retrieved from Orion?
• Who will have access to mission voice loops?
• When will a command path to Orion be available?
Tools
• MagicDraw/Teamwork for the complex modeling and visualization
- Viewpoints, diagrams, defining the models & relationships
• Web-based forms for less technical interaction & wider access for engineers & managers
7. Key Questions and Viewpoints
Diagram: key stakeholder questions mapped to the custom viewpoints, all built on the core SysML models.
• OV-1: What is the high-level mission?
• Mission Configurations & Phases: When do configuration changes occur?
• Composition: What are the pieces & how can they be combined?
• Needlines: What data needs to be exchanged? Does it match the IRDs?
• IRDs: What interfaces do we need?
• Connectivity: How do the systems need to be connected?
• Hybrid Comms: What data goes across which connections? Is there enough bandwidth?
• Protocol: How is data packaged/routed?
• Data Exchange Functions & Communications Functions: Who is performing which functions & when?
• Core SysML models also include: Stakeholders & Concerns, Principles, Constraints, Trades, Deployment
8. Viewpoint: Mission Overview
Key Questions:
• What's the big picture?
Viewpoint Features:
• Major phases, assets, communications links, data flows, and deployments.
• Object attributes drive color coding.
Uses:
• Single view that can be used as a high-level description of the key aspects of the mission
• Helps management understand the major players, how they interact, and the issues that need to be worked (color coded).
9. Viewpoint: Mission Configuration and Phases
Key Questions:
• What are the major testing, integration, and mission phases?
• How does the configuration change between each of these?
Viewpoint Features:
• Key phases and transition events
Uses:
• Provides a consistent vocabulary for talking about the mission phases and configurations
• Basis for defining all other phase-specific configurations
- E.g., connections during test vs. on-pad operations vs. flight
10. Viewpoint: Composition
Key Questions:
• What are the major components?
• How can they be combined?
• What are the configurations during each mission phase?
Viewpoint Features:
• Hierarchical list of components (systems, hardware, software, people, etc.)
• Definition of how they are combined during each phase.
Uses:
• Clear definition of the components and how they are organized.
• Understanding of the components available/active in each mission configuration
• Understanding of how components are composed into systems
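As a hypothetical illustration of what the composition viewpoint captures (a component hierarchy plus per-phase configurations), the sketch below uses invented component and phase names, not the EFT-1 breakdown:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    children: list = field(default_factory=list)   # sub-components

    def all_names(self) -> set:
        """Collect this component's name and the names of everything beneath it."""
        names = {self.name}
        for child in self.children:
            names |= child.all_names()
        return names

# Hypothetical hierarchy (illustrative only)
crew_module = Component("Crew Module", [Component("Avionics"), Component("Parachutes")])
launch_vehicle = Component("Launch Vehicle", [Component("Upper Stage")])
system = Component("Flight System", [crew_module, launch_vehicle])

# Which components are active in each mission configuration (invented)
configurations = {
    "on-pad": {"Crew Module", "Avionics", "Launch Vehicle", "Upper Stage"},
    "entry": {"Crew Module", "Avionics", "Parachutes"},
}

# Check that every configuration references only components defined in the hierarchy
defined = system.all_names()
for phase, active in configurations.items():
    undefined = active - defined
    print(phase, "-> undefined components:", undefined or "none")
```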
11. Viewpoint: Needlines
(Diagram callouts: phase, data types, flow direction)
Key Questions:
• What data needs to be exchanged between systems during each mission phase to meet the mission objectives?
• What are its parameters (quantity, quality, security, etc.)?
Viewpoint Features:
• Key data exchange types and sources/destinations
Uses:
• Definition of data types
• Basis of Interface Requirements Documents (IRDs)
• Understanding of what data will be exchanged during each mission phase
Needlines are also linked to:
• Parameters about a particular flow during a specific phase.
• More precise definition of the data types.
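A needline can be thought of as a record with a source, a destination, a data type, a phase, and parameters such as rate and sensitivity. The following sketch is a hypothetical encoding with invented systems and values, not the SysML representation used on the project:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Needline:
    source: str          # producing system
    destination: str     # consuming system
    data_type: str       # e.g. "telemetry", "imagery"
    phase: str           # mission phase in which the exchange occurs
    rate_kbps: float     # required data rate
    sensitive: bool      # proprietary/sensitive data that needs protection

# Invented example needlines (illustrative values only)
needlines = [
    Needline("Orion CM", "Mission Control", "telemetry", "orbit", 192.0, False),
    Needline("Orion CM", "Recovery Ship", "beacon", "recovery", 1.0, False),
    Needline("Orion CM", "LM Data Archive", "flight instrumentation",
             "post-flight", 5000.0, True),
]

# "What data will be exchanged during each mission phase?"
by_phase = {}
for n in needlines:
    by_phase.setdefault(n.phase, []).append(n.data_type)
for phase, types in by_phase.items():
    print(phase, "->", types)

# "What data is sensitive and needs to be protected?"
print([n.data_type for n in needlines if n.sensitive])
```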
12. Viewpoint: Requirements - IRDs
Key Questions:
• Are there gaps between the IRDs & the needlines?
• Which IRDs are on contract, funded, or TBD?
Viewpoint Features:
• Augments needlines with specific IRDs (ports are linked to IRDs)
• Attributes of the IRDs (e.g., contract status) are used to color code the needlines/ports
Uses:
• Helps management understand the status of the IRDs versus operational data exchange needs
• Provides an initial gap analysis
- Needlines that don't have IRDs
• Identifies exchanges we may not need to fund
- IRDs with no corresponding needline
Note: IRDs are managed in a data table which was imported from the IRD documents; ideally the IRDs would be automatically generated from the needline models.
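The gap analysis described above reduces to comparing two sets: the interfaces implied by needlines and the interfaces covered by IRDs. A minimal sketch, with invented interface names and contract statuses:

```python
# Hypothetical (source, destination) pairs extracted from the needline model
needline_interfaces = {
    ("Orion CM", "Mission Control"),
    ("Orion CM", "Recovery Ship"),
    ("Ground Station", "LM Data Archive"),
}

# Hypothetical interfaces that have an IRD, with a contract-status attribute
ird_interfaces = {
    ("Orion CM", "Mission Control"): "on contract",
    ("Orion CM", "Recovery Ship"): "tbd",
    ("Orion CM", "Legacy Test Rig"): "funded",
}

# Needlines with no IRD: interface requirements that still need to be written
missing_irds = needline_interfaces - ird_interfaces.keys()

# IRDs with no needline: candidate interfaces we may not need to fund
unused_irds = ird_interfaces.keys() - needline_interfaces

print("Needlines without IRDs:", missing_irds)
print("IRDs without needlines:", unused_irds)
```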
13. Viewpoint: Connectivity
Key Questions:
• How do the systems need to be connected?
• What types of connections do we need (umbilical, RF, discrete, WAN, etc.)?
• What are the parameters of each connection (data rate, security, completeness, reliability, etc.)?
Viewpoint Features:
• Identifies communications providers
• Major communications links and configurations between systems.
Uses:
• Defines the high-level communications architecture (can be decomposed into lower-level models)
Each connection is linked to a configuration table that defines its unique parameters:
• RF: frequency, modulation, framing protocol, SNR, BER, security, etc.
• WANs: bandwidth, reliability, security, routing, address allocations, etc.
14. Viewpoint: Hybrid Needline-Connectivity
Key Questions:
• Which connections are used by which needlines?
• Do the connections provide the needed capabilities?
- Bandwidth, data reliability, latency, etc.
- Security
Viewpoint Features:
• Overlays the needline flows on top of the connectivity views
- Still linked to all the same information
Uses:
• Reasoning about the impact of the communications architecture on data exchange.
• Allows high-level modeling of the communications/data exchange architecture
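"Is there enough bandwidth?" can be answered by routing each needline over a connection and comparing the aggregate demand against the connection's capacity. The connections, rates, and routing below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical connection capacities in kbps
connections = {"S-band RF": 256.0, "Ground WAN": 10_000.0}

# Hypothetical needlines: (name, required rate in kbps, connection it is routed over)
needlines = [
    ("CM telemetry", 192.0, "S-band RF"),
    ("Voice loop", 64.0, "S-band RF"),
    ("Imagery transfer", 4_000.0, "Ground WAN"),
]

# Sum the demand placed on each connection
demand = defaultdict(float)
for name, rate, conn in needlines:
    demand[conn] += rate

# Flag connections whose aggregate demand exceeds capacity
for conn, capacity in connections.items():
    used = demand[conn]
    status = "OK" if used <= capacity else "OVERSUBSCRIBED"
    print(f"{conn}: {used:.0f}/{capacity:.0f} kbps {status}")
```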
15. Viewpoint: Protocol Stacks
Key Questions:
• What communications protocols are used by the data exchanges and connections?
• How are the protocols related?
• What headers and overhead do they add?
Viewpoint Features:
• Relationship between data exchanges and protocols.
• Relationship between protocols
- Buffering, encapsulation, routing, etc.
Uses:
• Reasoning about the impact of the communications architecture on data exchange.
• More precise modeling of the communications/data exchange architecture
• Impact of integrating with other protocols
Note: Currently the model only has high-level parameters of the protocols (e.g., header size). We're working on modeling the behavior of the protocols (say, retransmit on loss of a packet).
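Even with only high-level parameters such as header sizes, the model supports a useful calculation: how much of a link's raw rate remains for payload after each protocol layer adds its headers. The layer names and sizes below are assumed values for illustration, not figures from the EFT-1 model:

```python
# Hypothetical protocol stack: (layer name, header bytes added per frame)
stack = [
    ("application packet", 12),
    ("transport", 8),
    ("network", 20),
    ("link framing", 10),
]

PAYLOAD_BYTES = 1024          # application payload carried per frame (assumed)
LINK_RATE_KBPS = 256.0        # raw link rate (assumed)

# Total bytes on the wire = payload plus every layer's header
overhead = sum(h for _, h in stack)
frame_bytes = PAYLOAD_BYTES + overhead

# Fraction of the raw link rate actually available to payload
efficiency = PAYLOAD_BYTES / frame_bytes
print(f"Overhead per frame: {overhead} bytes")
print(f"Payload efficiency: {efficiency:.1%}")
print(f"Effective payload rate: {LINK_RATE_KBPS * efficiency:.1f} kbps")
```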
16. Viewpoint: Data Exchange Functions
Key Questions:
• What are the key functions for each type of data exchange and who is responsible for providing them?
• Where are the authoritative & secondary data stores?
Viewpoint Features:
• Key functions and allocations to systems
• Identifies the functions that are producing and consuming the needlines.
Uses:
• Common understanding of who is doing what and when.
• Identifies all data exchanges and where data is stored (useful in planning for contingencies and showing that the architecture supports the analysis plan)
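One check this viewpoint enables is that every needline has at least one producing function and one consuming function allocated to a system. A hypothetical sketch with invented function, system, and needline names:

```python
# Hypothetical allocation: function -> (system it is allocated to,
#                                       needlines produced, needlines consumed)
functions = {
    "Record DFI data":     ("Orion CM",        {"DFI recording"}, set()),
    "Downlink telemetry":  ("Orion CM",        {"CM telemetry"},  set()),
    "Archive flight data": ("LM Data Archive", set(),             {"DFI recording", "CM telemetry"}),
}

needlines = {"DFI recording", "CM telemetry", "recovery beacon"}

# Collect everything that some function produces or consumes
produced = set().union(*(p for _, p, _ in functions.values()))
consumed = set().union(*(c for _, _, c in functions.values()))

# Needlines nobody produces or consumes indicate an incomplete allocation
print("No producer:", needlines - produced)
print("No consumer:", needlines - consumed)
```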
17. Viewpoint: Communications Functions
Key Questions:
• What are the communications functions for each connection?
Viewpoint Features:
• Key functions and allocations to systems
Uses:
• Common understanding of how the communication system functions.
• During integration and test, also helped us understand which portions of the communications system were being emulated, simulated, or provided by actual systems.
18. Other Model Components: Comments
Features
• Comment objects are managed in a table and can be associated with any part of the model and displayed on any of the views
Used to:
• Help clarify portions of the model or views
• Identify areas where work needs to be done (questions, issues, discrepancies)
19. Other Model Components & Viewpoints
Other Viewpoints (in the model, but not used in EFT-1)
• Stakeholders & Concerns
• Constraints
• Architecture Principles
• Trades
• Deployments
20. Advantages
Single Source of Truth (authoritative)
• Information is entered ONCE.
• Consistent terminology, configurations, representations, etc. across all products.
It’s Alive!
• Model always reflects the latest/best information and decisions.
• Updates are automatically applied across all products.
Shared Understanding
• Allows stakeholders to understand the system from “their perspective”.
- Addresses their concerns & helps them understand how their pieces fit in.
• Helps new users understand the system more quickly, and allows all users to explore/drill-down as needed.
Expressive but easy to understand
• Can model complex relationships between systems (versus boxes and lines)
• Produces products that are usable by both humans and machines
- Gate products for reviews
- Data and configurations to drive analysis and simulation
- Promise of being able to support “what-if” trade analysis
21. Lessons Learned
Understanding stakeholders' concerns is essential to designing the right viewpoints and capturing the right information
The model-based approach must provide concrete value to the project, not just a different way of generating the same material.
Teams more readily adopt the approach if (and only if):
• It directly helps them do their job – i.e., generates products they need in a format they can use, performs useful analysis, etc.
• They have appropriate training & access to the tools.
• They have support (a community of peers!) to resolve problems/questions.
Cross-pollination with other teams can be a big efficiency multiplier
Modeling tools are still catching up to the needs.