This document summarizes a presentation about project management challenges at NASA Goddard Space Flight Center. The presentation outlines a vision for anomaly management, including establishing consistent problem reporting and analysis processes across all missions. It describes the current problem management approach, which lacks centralized information sharing. The presentation aims to close this gap by implementing online problem reporting and trend analysis tools to extract lessons learned across missions over time. This will help improve spacecraft design and operations based on ongoing anomaly experiences.
1. Project Management Challenge 2007
Track: Case Study Insights at NASA
Case Study:
Extracting Trends and Lessons Learned
From Project Problems and Solutions
NASA Goddard Space Flight Center
February 6, 2007
Mike Rackley, NASA GSFC Code 170, 301-614-7058, Michael.W.Rackley@nasa.gov
Catherine Traffanstedt, NASA GSFC – SGT, Inc, 301-925-1125, ctraffanstedt@sgt-mets.com
2. Presentation Outline
The Vision: What are we trying to achieve?
Implementation: How do we achieve the Vision?
Current Status: Where are we today?
Gap Analysis: How do we close the gap?
3. The Anomaly Management Vision
4. Anomaly Management Vision
• GSFC has well-documented requirements, procedures, and best practices for mission-related problem management, reporting, analysis, and trending
• GSFC efficiently verifies adherence to the established problem management requirements and established best practices
• All GSFC development and on-orbit missions use a consistent problem management process and philosophy
  – Flexible enough to handle the diversity of spacecraft missions in the organization, enforcing certain fundamental “rules” for all missions, but allowing some differences where appropriate in other areas
    • Example: HST vs. Small Explorer Program
    • Recognizes that risks and budgets can be very different among programs
  – Applies to all missions that the organization is responsible for, regardless of whether the missions are built and/or flown “in-house”
    • NASA is responsible for mission success regardless of the approach to development or operations (i.e., in or out of house)
    • Out-of-house missions do pose unique challenges
5. Anomaly Management Vision (cont)
• GSFC provides online access to accurate mission configuration information for all Goddard-managed missions in development and operations
  – Examples: subsystems, components, orbit configuration, duty cycles
• GSFC analyzes and trends problems across development and on-orbit missions
  – Individual missions, across mission “families”, across all missions
  – Consistent use of a variety of categorizations (e.g., failure type, root cause, impact)
• GSFC extracts appropriate “Lessons Learned” information from the problem and solution space and applies it back into the development and operations processes
  – Examples: improvements in rules/best practices for development, testing, and operations; planned vs. actual hardware & software performance
6. Problem Management Information Flow
[Diagram] Missions 1..N (in-house development and out-of-house procurement) feed two kinds of input into a Centralized Problem Information System:
• Mission anomaly information – recorded directly in the system, or provided indirectly (via export) from mission tools
• Mission Configuration definitions – examples: HW & SW descriptions, design-life information, failure-handling approach (e.g., single vs. dual string)
The centralized system provides search/query capability, report generation, trending, analysis, data mining, and individual and multi-mission “views”.
Output products: multi-mission failure summary reports, ops/engineering research, cross-mission trending & analysis, and Lessons Learned extraction.
Impediments: project focus; heterogeneous missions (design, timeframe, orbit, ops/science profile); contracts (end-item deliverables, proprietary data); culture; sustaining the system over time.
7. Lessons Learned Feedback Process (Knowledge Management)
[Diagram] A feedback cycle:
• An anomaly occurs and is resolved with a defined, documented corrective action
• Ops experiences the actual behavior of the spacecraft; an ops process or procedure proves to be successful over time
• Lessons Learned are established, and best practices are established
• Feedback/recommendations are provided to spacecraft flight projects
Influencing variables: spacecraft design/technology, ground system design/technology, risk tolerance, organizational culture, personnel changeover, mission models/profiles
8. Implementing The Vision
9. Implementation Approach
• Ensure appropriate requirements are officially documented so that they apply to all missions in development and operations
  – Goddard Procedures and Requirements documents (GPRs)
  – Mission Assurance Requirements (MAR) document (per mission), with the Mission Assurance Guideline (MAG) document as a template
• Establish and document rules/best practices for how to document anomalies
  – Main goal is to achieve the needed consistency and robustness across missions
  – Anomaly Management Best Practices Handbook (or comparable)
• Utilize a centralized, independent operations mission assurance team to verify adherence across missions to requirements and best practices
  – Provides the ability to recognize and correct problems early
10. Implementation Approach (cont)
• Utilize a centralized anomaly trending and analysis team to perform various trending and analysis functions within and across missions
  – Assist in development of Mission Configurations
  – Analyze trends
  – Extract “lessons learned” and work to apply them across the relevant organizations, documents, etc.
    • Implement a “Lessons Learned CCB” with broad participation to disposition potential/proposed lessons learned and best practices
• Utilize a centralized database to capture problem report data that provides the ability:
  – For individual missions to document, track, and close out their problems
  – For the Center to perform various analysis and trend functions across missions
  – To handle development/test (pre-launch) and operations phases
  – To handle flight and ground problems
  – To bring in data from outside users
  – To provide broad access, but with appropriate access protections (for ITAR-sensitive and proprietary data)
11. Implementation Approach (cont)
• The centralized database is the Goddard Problem Reporting System (GPRS), which contains two modules for documenting and managing ground and flight segment problems
  – Both provide multi-mission, web-based, secure user access
  – Other features include Mission Configuration definition, search/query capabilities, user notification (via email), user privilege control, user problem reporting, and attachments
• SOARS -- Spacecraft Orbital Anomaly Reporting System
  – Used to document flight and ground problems during operations
  – Problems are categorized via a common approach, using various Categorization Fields (Anomaly Classification, Impacts, Root Cause)
• PR/PFR -- Problem Reporting/Problem Failure Reporting System
  – Used to capture problems encountered during development, integration, and test
  – Similar to SOARS, but tailored to the I&T environment (e.g., the signature process)
12. SOARS Model
[Diagram] An anomaly event occurs and flows into SOARS, which:
• Supports mission anomaly management (document, track, report, notify)
• Documents Mission Configurations (mission-unique and common subsystems & components)
• Performs cross-mission trending & analysis
• Supports independent mission assurance activities (analysis, audits)
• Supports cross-mission analysis (have others had a similar problem?)
• Provides a common approach to maintenance, security, etc.
Overall goal: maximize consistency & commonality in how anomalies are documented & categorized
13. Current Status: Where Are We Today?
14. Current Status
• Requirements & Best Practices Development/Compliance
  – High-level requirements associated with using GPRS and performing quality assurance, trending, and analysis are in place
    • GPR 1710.H - Corrective and Preventative Action
    • GPR 5340.2 - Control of Nonconformances
  – Mission Assurance Requirements documents, with the Mission Assurance Guideline (MAG) document as a template, also generally cover the requirements
    • But capturing problem information in the central PR/PFR System is limited to in-house missions for the pre-launch phase
  – Operations rules/best practices are not yet formally documented in a central location that would apply across all missions
    • They exist in pockets within missions, families of missions, and Programs
    • Working with missions and Programs to begin development
15. Current Status (cont)
• Operational Improvements
  – Major GPRS enhancements made to the SOARS and PR/PFR software over the last year to improve operational usability, etc.
    • SOARS is now considered an operationally useful tool, with ongoing activities to continue incremental improvements
    • PR/PFR has more recently undergone improvement, largely driven by two Goddard in-house missions (SDO and LRO)
      – PR/PFR needs much more improvement to “catch up” to SOARS
  – SOARS use by the operational missions has increased dramatically over the last year
    • One year ago very few missions were using SOARS at Goddard
    • Almost all in-house missions (~20) are now using SOARS effectively, including RXTE, Terra, Aqua, EO-1, and TDRSS
    • Only one out-of-house mission is currently populating SOARS with anomaly report data (Swift at Penn State University, as a direct user)
      – Other out-of-house “operations vendors” include APL, Orbital, LASP, and the University of California at Berkeley (UCB)
      – Currently working with these organizations to develop a plan for populating SOARS (e.g., make use of data import)
16. Current Status (cont)
• Anomaly Analysis and Trending
  – Centralized teams are in place within the Office of Mission Assurance (Code 300) for performing the roles of operations mission assurance and cross-mission anomaly analysis and trending
    • The development phase is handled by individual Code 300 System Assurance Managers (SAMs) assigned to missions
  – GSFC (Code 300) is using SOARS data extraction to:
    • Provide a monthly Center-wide snapshot of mission anomalies, characterized by Mission Impact, Operations Impact, and Root Cause
    • Develop an annual “year-in-review” anomaly summary report (OAGS)
    • Trend and analyze problems across missions, subsystems, failure types, etc.
    • Support long-term reliability studies on flight hardware
    • Identify missions that are not adequately documenting anomalies and work with them to improve
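The cross-mission snapshots described above are, at bottom, simple aggregations over exported anomaly records. As a rough sketch (the record layout and sample data here are illustrative, not the actual SOARS export format), the root-cause breakdowns could be derived like this:

```python
from collections import Counter

# Hypothetical simplified SOARS export: (mission, month, root_cause, ops_impact).
# The category values follow the vocabulary shown on these slides.
records = [
    ("Msn-X", "2006-11", "Hardware Failure", "Ops Workaround"),
    ("Msn-Y", "2006-11", "Unknown",          "Data Loss"),
    ("Msn-Z", "2006-12", "Design",           "Ops Workaround"),
    ("Msn-X", "2006-12", "Unknown",          "Ops Workaround"),
]

def root_cause_snapshot(records, ops_impact=None):
    """Count anomalies per root cause, optionally filtered by operations impact."""
    return Counter(
        rc for _, _, rc, impact in records
        if ops_impact is None or impact == ops_impact
    )

# Cumulative breakdown restricted to "Ops Workaround" SOARs, as in Example 1.
snapshot = root_cause_snapshot(records, ops_impact="Ops Workaround")
```

The same pattern, keyed on month or mission instead of root cause, would produce the monthly Center-wide snapshot described on this slide.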
17. SOARS Data Extraction – Example 1
[Chart] SOAR Records – All Active Goddard Missions (12-month summary, Jan–Dec 2006): monthly counts of Ground Segment vs. Flight Segment SOARs
[Chart] Root Cause Categorization for cumulative CY06 “Ops Workaround” SOARs: Unknown 28, Hardware Failure 18, Design 8, Workmanship 4, Process 4, Degradation 3, Human Error 3, Environmental 2, Training 1
[Chart] Root Cause Categorization for cumulative CY06 “Data Loss” SOARs: Unknown 26, Hardware Failure 7, Design 3, Workmanship 2, Environmental 2, Degradation 2, Human Error 2, Process 0, Training 0
Significant Nov ’06 on-orbit anomalies:
• Msn-X power load shed anomaly
  – All science instrument main voltages shed in accordance with the under-voltage protection circuitry
  – Under-voltage condition, possibly indicating a failure (short) somewhere in the affected loads
• Msn-Y Star Tracker and Sun Sensor boresight anomaly
  – Star Tracker alignment calibration analysis indicates over a 2-arc-minute shift in the alignment between the Star Tracker and FSS
• Msn-Z ACS quaternion spike error anomaly
  – A Star Tracker quaternion element was corrupted
  – The spacecraft was commanded to Sun/Mag pointing and the Star Tracker stabilized
18. SOARS Data Extraction – Example 2
[Chart] Root Cause Categorization (monthly, CY06) with cumulative CY06 breakdown: Unknown 50%, Hardware Degradation/Failure 20%, Design Related 16%, Environmentally Related 10%, Operations Related 4%
[Chart] Anomaly Classification Categorization (monthly, CY06) with cumulative CY06 breakdown: Hardware 41%, Software 41%, Operations 13%, Other 5%
[Chart] Operations Impact Categorization (monthly, CY06) with cumulative CY06 Mission Impact breakdown: No Impact 42%, Operational Workaround 30%, Mission Data Loss 28%, Permanent Loss of Capacity 0%
19. Current Status (cont)
• Inserting “Lessons Learned” into the Process
  – Implementing an “Operations Lessons Learned” process, but still in the very early stages
  – The plan is to implement a process where a Lessons Learned Committee/Board is presented with potential lessons
    • The Committee determines if a lesson is valid and, if so, the actions that are appropriate to take it forward
    • Working to engage broad participation (mission assurance, missions, and engineering)
[Diagram] Lesson extracted from an operations event, activity, etc. → lesson vetted by the LL Committee → action taken on the lesson. Example actions:
• Ops best practice update
• Improvement in testing
• Feedback on hardware performance issues
• Recommendation to the flight software group
• Recommendation for “Gold Rules” update
20. Sample Lessons Learned Summary
Mission: Msn_X | Anomaly Date: 10/3/06
  Anomaly Description: Spacecraft clock rollover to an illegal 1998 date; on-board clock register capacity was exceeded.
  Proposed Lesson Learned: Establish a best practice of evaluating registers prior to launch and as part of the on-orbit ops procedures. Include FSW capabilities to issue warnings when appropriate.
  Action: Work directly with Ops Projects to document as an Ops Best Practice. Work with the Flight Software Branch to determine if a design best practice is needed.
Mission: Msn_Y | Anomaly Date: 10/19/06
  Anomaly Description: UV instrument experiencing channel noise, apparently associated with heater activation and environmental conditions. Causing failure of the instrument to meet signal-to-noise ratio requirements for the data channel.
  Proposed Lesson Learned: Lesson TBD, but would apply to improving the test approach. Seems this should have been caught in instrument or observatory I&T. Why was this not seen during thermal vac? Does this have implications for other missions? Could or should this instrument sensitivity have been better tested during instrument I&T?
  Action: Investigate further. Discuss with the Msn_Y SAM and I&T Manager. Generate an independent analysis report if warranted.
21. Implementation Challenges
• Achieving consistency across missions in how anomalies are documented (e.g., terminology, level of detail) and categorized is proving to be very difficult
  – Many missions with different ops teams have different taxonomies, operations cultures, etc.
  – The missions themselves are also very different in terms of bus design, payload suite, operations profile, age, and risk tolerance
• Mission Configuration definitions in SOARS are not well-defined across all missions, making it difficult to perform cross-mission analyses
  – Not all information is readily available to use in SOARS definitions, especially for older missions (e.g., component manufacturers and design life)
  – Missions have struggled to put in the extra time to populate SOARS with their Mission Configurations
  – Working with the existing missions to improve the definitions, but the effort is time-consuming and challenging
  – Working with upcoming missions to get the definitions in place prior to launch (e.g., GLAST, SDO, and LRO)
22. Implementation Challenges (cont)
• Proving very difficult to get out-of-house operations centers/teams to document problems in SOARS
  – Only one direct user so far (Swift/PSU)
  – Working with others to take advantage of the enhanced SOARS data import capability
  – Finding that the critical factor is getting the agreement more explicitly planned well before launch
• Only getting pre-launch problem report data into PR/PFR for in-house missions
  – This leaves a big “information gap” given the high number of out-of-house missions
  – Vendors cannot use PR/PFR directly, but working to determine if certain types of problem report data can be imported into PR/PFR
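The data-import path mentioned above is essentially a field-mapping exercise: translate a vendor's problem-report schema into the centralized system's schema and flag what the vendor cannot supply. A minimal sketch, with hypothetical field names on both sides (neither schema appears in the presentation):

```python
# Assumed vendor-to-SOARS field mapping; all names here are illustrative.
VENDOR_TO_SOARS = {
    "event_time": "anomaly_occurrence_time",
    "summary": "anomaly_title",
    "details": "anomaly_description",
}

def import_vendor_report(vendor_record, mission):
    """Translate one vendor problem report into a SOARS-style record."""
    soars = {soars_key: vendor_record[vendor_key]
             for vendor_key, soars_key in VENDOR_TO_SOARS.items()
             if vendor_key in vendor_record}
    soars["mission"] = mission
    # Categorizations the vendor cannot supply are left for later review.
    soars.setdefault("anomaly_root_cause", "Unknown")
    return soars

rec = import_vendor_report(
    {"event_time": "2006-10-03T12:00Z", "summary": "Clock rollover"}, "Msn_X")
```

The same mapping table, maintained per vendor, is one way to cope with the heterogeneous schemas these slides identify as an impediment.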
23. Gap Analysis
24. Gap Analysis: How Would You Complete “The Job”?
1. How would you approach achieving and maintaining more consistency across missions in how anomalies are documented?
  – Getting them to do it consistently over time
  – Working to measure/enforce consistency over time
2. Do you think it is possible to develop a set of Operations Anomaly Management Best Practices that would reasonably apply across a set of heterogeneous missions that are a mix of in-house and out-of-house operations approaches?
  – If so, how would you approach getting this defined and keeping it updated?
  – If not, what can be done? Do you just give up?
3. What type of information, besides the basic anomaly information, would you want to capture for your missions to support long-term trending, reliability/failure analysis, etc.?
25. Gap Analysis: How Would You Complete “The Job”? (cont)
4. How would you ensure that the relevant Mission Configuration information for upcoming missions is documented so that it is usable by the problem management system?
  – Address the in-house vs. out-of-house challenges
5. How would you approach getting more out-of-house operations centers/teams to document their anomalies in the centralized problem information system (SOARS)?
6. How do you extract problem information from out-of-house mission vendors for use in the centralized problem information system (PR/PFR) during the pre-launch development phase?
  – There will likely be proprietary data issues
7. How do you integrate lessons learned into projects/programs so that they will actually affect mission development and operations, and maintain benefit over time?
29. SOARS Anomaly Report Form
• Observation area includes all problem descriptions, including orbit and attitude data and other user-defined fields
• Investigation area provides a place for the operations team to document the problem investigation and status; includes an investigative log, subsystem and component identification, anomaly criticality, status, and responsibility assignment
• Resolution area provides the operations team a text area to capture a synopsis of the problem cause and resolution
• Assessment area is used to identify follow-on activities or recommendations and to categorize the SOAR for generic use
• Notification area is used to send email notifications to personnel (either other SOARS users or other contacts identified by missions)
30. SOARS Data Summary
• All SOARS anomaly reports will include the following information:
– Anomaly occurrence time
– Anomaly title
– Anomaly category (flight or ground segment specification)
– Anomaly Subsystem and Component identification
– Anomaly description
– Closure information
– Follow-on recommendations
– Anomaly categorization
• SOARS may include the following types of information:
– Anomaly assignment and responsible party
– Investigation information
– Orbital information
– Mission unique phase and mode
– Additional file attachments as appropriate
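The required/optional split above maps naturally onto a record type with mandatory and defaulted fields. A minimal sketch (field names are paraphrased from the slide, not the real SOARS schema):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SoarsReport:
    # Always present on a SOARS anomaly report (per the slide)
    occurrence_time: str
    title: str
    category: str            # flight or ground segment
    subsystem: str
    component: str
    description: str
    closure_info: str = ""
    recommendations: str = ""
    # Optional information
    assignee: Optional[str] = None
    orbital_info: Optional[str] = None
    mission_phase: Optional[str] = None
    attachments: list = field(default_factory=list)

r = SoarsReport("2006-11-05T04:12Z", "Power load shed", "flight",
                "EPS", "UV protection circuitry",
                "Science instrument main voltages shed")
```

Making the required fields positional means an incomplete report fails at construction time rather than surfacing later during cross-mission analysis.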
31. SOAR Data Categorization
• SOAR Categorization Fields are the key fields used for trending and analysis across missions for the Center. The fields include:
  – Mission Impact: describes the actual anomaly impact on the ability to meet science and/or mission goals or requirements
    • No Effect, Minor, Substantial, Major, Catastrophic, Undetermined
  – Impact Severity on the Anomalous Item: describes the actual impact to the subsystem or component where the anomaly occurred
    • No Effect, Minor, Major, Catastrophic, Undetermined, Not Applicable
  – Operations Impact: describes the effect of the anomaly on mission and/or science operations
    • Data Loss, Service Loss, Mission Degradation/Loss, Undetermined, Ops Workaround, None
  – Anomaly Classification: describes where within the mission systems the failure occurred
    • Hardware, Software, Operations, Other
  – Anomaly Root Cause: describes the root cause of the anomaly
    • Design, Workmanship/Implementation, Process, Environmental, Degradation, Hardware Failure, Training, Human Error, Unknown
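Categorization fields like these are controlled vocabularies, which a reporting system would typically enforce at data entry. A sketch using three of the five value sets from this slide (the validation code itself is an assumption, not part of SOARS):

```python
# Controlled vocabularies copied from the slide's categorization fields.
CATEGORIZATIONS = {
    "mission_impact": {"No Effect", "Minor", "Substantial", "Major",
                       "Catastrophic", "Undetermined"},
    "anomaly_classification": {"Hardware", "Software", "Operations", "Other"},
    "root_cause": {"Design", "Workmanship/Implementation", "Process",
                   "Environmental", "Degradation", "Hardware Failure",
                   "Training", "Human Error", "Unknown"},
}

def validate(field_name, value):
    """Reject values outside the controlled vocabulary for a field."""
    if value not in CATEGORIZATIONS[field_name]:
        raise ValueError(f"{value!r} is not a valid {field_name}")
    return value

ok = validate("root_cause", "Design")
```

Enforcing the vocabulary at entry time is what makes the cross-mission trending on the earlier slides possible; free-text categories would defeat it.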
32. One Size Fits All Anomaly Management (AM) System: Does It Exist?
Mission phases: S/C I&T, Observatory I&T, L&EO, Normal Ops, Disposal
• The AM System must meet the needs of individual missions and of the broader multiple-mission perspective (e.g., looking for cross-mission trends)
• A mission may want one discrepancy/anomaly management system through all mission phases, but the phases are very different environments (pre- and post-launch)
• The AM System may have to handle spacecraft and ground anomalies
• The AM System inevitably must be able to interface with other AM Systems used by individual missions