Kickoff deck for a large, enterprise-wide security vulnerability assessment. The presentation has been sanitized and is intended for demonstration purposes only. From August 2008.
This document introduces the return on investment (ROI) methodology for measuring the value of project management. It discusses why measuring value is important, as most projects are over budget and behind schedule. The ROI methodology provides a 10-step process for conducting an evaluation, including planning objectives, collecting data during and after implementation, analyzing data, calculating costs and benefits, and reporting results. Implementing ROI can help justify budgets, improve processes, and show how project management contributes to business goals.
Lean Project Management is a proven method for improving project performance. It focuses on managing variability through planning, execution, and monitoring approaches like identifying essential inputs, aggressive task estimates, critical chain protection, and buffer management. Team support is critical for implementing Lean Project Management successfully.
EVM is a project management process that integrates scope, schedule, and cost to assess project performance and forecast outcomes. It allows objective measurement of progress against a performance measurement baseline (PMB) which includes a work breakdown structure, schedule, and time-phased budget. EVM helps identify variances so management can take corrective actions to mitigate risks and keep the project on track. NASA requires EVM for projects over $20 million to enhance the likelihood of success through active measurement and management.
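As a minimal sketch of the objective measurement EVM provides, the standard variances, performance indices, and a CPI-based estimate at completion can be computed from three inputs: planned value (BCWS), earned value (BCWP), and actual cost (ACWP). The dollar figures below are hypothetical, chosen only to illustrate the formulas.

```python
# Illustrative EVM calculation (hypothetical values, not from the document).
# PV (BCWS): budgeted cost of work scheduled to date
# EV (BCWP): budgeted cost of work actually performed
# AC (ACWP): actual cost of the work performed

def evm_metrics(pv, ev, ac, bac):
    """Return the standard EVM variances, indices, and an EAC forecast."""
    cv = ev - ac        # cost variance (negative = over cost)
    sv = ev - pv        # schedule variance (negative = behind schedule)
    cpi = ev / ac       # cost performance index
    spi = ev / pv       # schedule performance index
    eac = bac / cpi     # estimate at completion (CPI-based forecast)
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}

m = evm_metrics(pv=500_000, ev=450_000, ac=600_000, bac=2_000_000)
print(m["CPI"])  # 0.75: earning $0.75 of value per dollar spent
```

A CPI below 1.0 is the kind of variance signal that prompts the corrective actions described above.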
The document discusses integrating technical performance measures with earned value management. It argues that EVM data is only reliable if technical performance is objectively assessed using the right measures of progress. Standards like CMMI and IEEE 1220 provide guidance on using requirements, product metrics, and success criteria to evaluate technical progress. The document provides examples of how to calculate earned value by linking it to completion of drawings and meeting technical performance targets for weight and diameter. It recommends acquisition best practices like requiring technical performance measurement in proposals and verifying integration at contract award and reviews.
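One simple way to make the linkage the document argues for is to gate earned-value credit on whether a work package's technical performance measures (TPMs) are within their not-to-exceed limits. The sketch below uses binary gating and hypothetical budgets, drawing counts, and weight limits; the source's worked examples may apportion credit differently.

```python
# Sketch: take earned-value credit only for work whose technical
# performance measures (TPMs) are met. All values are hypothetical.

def earned_value(work_packages):
    """EV = budget * percent complete, gated on the package's TPM limits."""
    ev = 0.0
    for wp in work_packages:
        tpms_met = all(wp["tpm_actuals"][k] <= limit
                       for k, limit in wp["tpm_limits"].items())
        if tpms_met:
            ev += wp["budget"] * wp["pct_complete"]
    return ev

packages = [
    {"budget": 100_000, "pct_complete": 0.8,   # 40 of 50 drawings released
     "tpm_limits": {"weight_kg": 120.0}, "tpm_actuals": {"weight_kg": 118.0}},
    {"budget": 60_000, "pct_complete": 1.0,    # "done", but overweight
     "tpm_limits": {"weight_kg": 45.0}, "tpm_actuals": {"weight_kg": 47.5}},
]
print(earned_value(packages))  # 80000.0: the overweight package earns nothing
```

The second package illustrates the document's central point: without the TPM gate, it would claim full earned value while technically failing.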
This document provides an overview of project scheduling from NASA's perspective. It discusses NASA's large, complex projects and the requirements for project scheduling. The presentation covers key project scheduling processes including activity definition, sequencing, duration estimating, schedule development, status accounting, and performance reporting. It provides examples and definitions for these processes. The goal is to give attendees a basic understanding of project scheduling as it relates to NASA projects.
The document discusses the Business Operating Success Strategies (BOSS), a new initiative at Kennedy Space Center's Launch Services Program to standardize and improve consistency in mission management. It provides an overview of BOSS, including its purpose to align activities with requirements and increase accountability. It outlines how compliance will be achieved through checklists and schedules. Responsibility for implementation and updates is assigned, and next steps are to obtain feedback and measure BOSS's effectiveness.
This document discusses the development and validation of an Earned Value Management (EVM) system at the Jet Propulsion Laboratory (JPL). It outlines the key components of developing the EVM system, including establishing the architecture, implementing the necessary tools and processes, and providing education and training. It also describes validating the system through progress assistance visits and a formal validation review. The document shares lessons learned around implementing an effective EVM system.
The NASA Ames Research Center has developed a scaled project management framework for IT projects under $500k based on NASA's NPR 7120.7. The framework includes Lite and Medium classifications to provide flexibility and structure for smaller projects. It establishes common project reviews, entrance and success criteria, and decision points for projects below the NPR 7120.7 threshold. The framework is designed to standardize project management practices while allowing tailoring to individual project needs.
The document introduces the Project Management Toolkit (PPME Toolkit) developed by NASA's Glenn Research Center (GRC) to provide a standardized set of project planning and execution tools. The PPME Toolkit aims to facilitate life cycle project management from proposal development through project control and reporting. It was developed using a rapid prototyping approach and has been piloted with five GRC space flight projects. Version 1 of the Toolkit will be deployed across GRC's space flight portfolio in 2011, and Version 2 will include additional capabilities and an enterprise server solution to enable true portfolio management.
KDP C is an important decision point for NASA projects where the agency decides whether to proceed to implementation and commits to a project's cost and schedule estimates. This panel discusses updated NASA processes to help ensure projects are on track for technical success within budget and schedule by KDP C. These include developing an integrated baseline, independent reviews, and documenting approvals and commitments in a decision memorandum to formalize support and establish external commitments. The integration of baseline development, independent checks, approval to proceed, and commitments is meant to help projects successfully complete implementation.
This document discusses leading indicators for systems engineering. It begins by outlining the concepts and motivation behind measuring leading indicators. It then describes a project to develop a set of 13 leading indicators to assess how effectively a program is performing systems engineering. These indicators are defined to provide predictive insights before impacts are realized. The document discusses challenges in implementing and interpreting leading indicators and mapping them to different life cycle phases. It notes that validating leading indicators is difficult as companies are reluctant to share information, and that leading indicators can be dismissed as similar to existing metrics.
The document discusses establishing a performance measurement baseline (PMB) in a cost-effective manner. It defines key Earned Value Management (EVM) concepts like the PMB, which is a time-phased budget plan used to measure contract performance. It emphasizes the importance of thorough upfront planning, including developing a work breakdown structure (WBS) and schedule to fully capture the work scope. Establishing the PMB is a three-step process of defining the work, scheduling it, and allocating budgets to control accounts and work packages.
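The third step, allocating budgets, produces the time-phased curve itself. A minimal sketch: spread each work package's budget across its scheduled months (a simple level-of-effort spread, assumed for illustration) and accumulate the monthly totals into the cumulative BCWS curve that is the PMB. The packages and figures below are hypothetical.

```python
# Minimal sketch of budget allocation: spread each work package's
# budget over its scheduled months, then accumulate into the
# time-phased PMB (cumulative BCWS). Numbers are hypothetical.
from itertools import accumulate

work_packages = [
    {"name": "WP-1 Design", "budget": 90_000,  "start": 0, "months": 3},
    {"name": "WP-2 Build",  "budget": 120_000, "start": 2, "months": 4},
]

horizon = 6
monthly = [0.0] * horizon
for wp in work_packages:
    spread = wp["budget"] / wp["months"]       # even level-of-effort spread
    for mth in range(wp["start"], wp["start"] + wp["months"]):
        monthly[mth] += spread

pmb = list(accumulate(monthly))                # cumulative BCWS curve
print(monthly)  # [30000.0, 30000.0, 60000.0, 30000.0, 30000.0, 30000.0]
print(pmb[-1])  # 210000.0, equal to the total allocated budget
```

Real allocations follow the schedule's resource loading rather than an even spread, but the roll-up from work packages to the PMB is the same.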
This document discusses increasing the robustness of flight project concepts. It proposes several improvements and innovations, including establishing new concept maturity levels (CML) to better communicate a concept's readiness. A new P4 document is suggested to provide requirements and guidelines for incorporating and evaluating a concept's robustness. Additional proposed enhancements involve new tools and templates, increased project team support, organizational changes, and training for the pre-Phase A community. The overall goal is to address current challenges around assessing risks, communicating maturity, and guidelines for robustness evaluations in NASA's competitive funding environment.
This document outlines improvements made to NASA's independent program review process in fiscal year 2010. It discusses the reviews completed, including 8 program and 20 project reviews. Process improvements included quick look reports, increased coordination, readiness assessments, and streamlining of review documentation. The roles and responsibilities of review board members are covered, including ensuring member competency, currency, and independence. Coordination between review boards and NASA mission directorates and centers is also summarized.
The document discusses a project management approach to source evaluation boards (SEBs) being implemented at NASA's Johnson Space Center. It aims to align SEB processes with project management principles by treating each SEB like a project, focusing on requirements, scheduling, teamwork, and control. Feedback from industry and assessments identified issues like unclear processes and schedules. The new approach establishes common vocabulary, templates, and training to bring more consistency to SEBs handled as projects.
This document describes Ball Aerospace's implementation of a Life Cycle and Gated Milestone (LCGM) process to improve program planning, execution, and control across its diverse portfolio. The LCGM provides a standardized yet flexible framework that maps out program activities and products across phases. It was developed through cross-functional collaboration and introduced gradually across programs while allowing flexibility. Initial results showed the LCGM supported improved planning and management while aligning with Ball Aerospace's entrepreneurial culture.
The document discusses root cause analysis conducted by the PARCA office in the Department of Defense. It provides an overview of PARCA's functions, analytical framework, and operations. Recent root cause analyses have identified unrealistic cost/schedule estimates and quantity changes as common problems. Areas of ongoing development include analytical methodologies, relationships with subject matter experts, and policies/procedures to guide root cause analyses. The goal is to independently identify the predominant causes of problems in a transparent, fact-based manner to prevent repeating mistakes.
NG BB 53 Process Control (Leanleaders.org)
This document provides an overview of process control concepts and tools. It discusses an 8-step process for process improvement that includes control. Control plans are important to ensure improved processes remain stable. Measurement systems should be analyzed and process capability recalculated during control. Cultural issues can impact control and force field analysis can identify drivers and restraints. Standard operating procedures, control charts, and mistake proofing are discussed as control mechanisms.
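Control charts, one of the control mechanisms mentioned above, can be sketched in a few lines. The individuals (X) chart below estimates sigma from the average moving range (MRbar divided by the d2 constant 1.128) and flags points beyond the 3-sigma limits; the measurements are made up for illustration.

```python
# Sketch of an individuals (X) control chart: estimate sigma from the
# average moving range (MRbar / 1.128) and flag points outside the
# 3-sigma control limits. The sample data are hypothetical.

samples = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 9.7, 10.1, 13.0, 10.0]

mean = sum(samples) / len(samples)
moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128  # d2 for n=2

ucl = mean + 3 * sigma   # upper control limit
lcl = mean - 3 * sigma   # lower control limit

out_of_control = [x for x in samples if not lcl <= x <= ucl]
print(out_of_control)    # [13.0]: this point signals process instability
```

An out-of-control point like this is the trigger for the corrective action a control plan specifies.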
The document provides an implementation strategy for Integrated Baseline Reviews (IBRs) according to NASA requirements. It outlines IBR goals, assumptions, and a strategy that stages reviews throughout project phases from pre-Phase A to Phase D. The strategy emphasizes evolving review focus from process to content as the project matures. Reviews are scaled based on risk level and include roles and requirements for each project phase.
This document discusses NASA's Earned Value Management (EVM) capability project. It outlines the overarching EVM requirements for NASA, including compliance with ANSI/EIA-748 guidelines. It describes NASA's development of a common EVM process and its testing on two pilot projects. The rollout plan and available EVM resources are also summarized. Maintaining the integrity of the EVM process through surveillance is highlighted as a key ongoing activity.
This document discusses the challenges of partnering on major research platforms and facilities. It notes that the high costs and complexity of such projects have driven increased partnering between U.S. agencies and with international entities. However, ensuring alignment between partner processes and practices can be challenging. The document analyzes the practices of three science agencies - DOE, NASA, and NSF - to identify similarities and differences in their approaches to developing and managing large science projects. Understanding these comparative practices is important for facilitating effective interagency and international cooperation on major research infrastructure initiatives.
This document discusses managing integrated project work across geographically dispersed NASA teams. It provides a case study of the Orion project, which involved collaboration between 10 NASA centers. Key challenges of geographic dispersion include different organizational cultures, time zones, and the need to be part of a larger distributed team. Suggested paths for success include frequent communication, building trust, establishing common goals and processes, and travel to facilitate in-person interactions. Geographic dispersion will continue as NASA relies more on distributed teams, but success requires focus on open communication and shared objectives.
This document introduces the Schedule Test and Assessment Tool (STAT) developed by NASA to assess schedule credibility. It provides an overview of STAT's capabilities, benefits of assessing schedules, and background on why schedule assessment is important. The document demonstrates STAT's schedule health check, trend analysis, and summary reporting features using example output. It summarizes that STAT enables efficient schedule assessment, quality improvement, and timely analysis through an easy-to-use automated tool.
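STAT itself is a NASA tool, but the flavor of its health checks can be sketched with two common examples: tasks with no logic links (dangling activities) and durations exceeding a threshold. The task network and threshold below are hypothetical; STAT automates many more checks than this.

```python
# Toy version of two common schedule health checks: dangling tasks
# (no predecessors or successors) and overly long durations.
# The task data and 60-day threshold are hypothetical.

tasks = {
    "A": {"duration": 10, "preds": [],    "succs": ["B"]},
    "B": {"duration": 95, "preds": ["A"], "succs": []},
    "C": {"duration": 15, "preds": [],    "succs": []},   # dangling
}

def health_check(tasks, max_duration=60):
    findings = []
    for name, t in tasks.items():
        if not t["preds"] and not t["succs"]:
            findings.append((name, "no logic links (dangling task)"))
        if t["duration"] > max_duration:
            findings.append((name, "duration exceeds threshold"))
    return findings

for name, issue in health_check(tasks):
    print(f"{name}: {issue}")
```

Checks like these are what make an automated health check fast: they are mechanical to compute yet catch the defects that undermine a schedule's credibility.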
The document summarizes an NSC Audits and Assessments Workshop from September 2009-2010. It discusses the background and purpose of different types of NASA safety audits conducted by the NSC Audits and Assessments Office. The document analyzes audit findings from 2007-2010 and identifies potential systemic safety issues across multiple NASA centers, particularly in electrical safety, inspection records, and probabilistic risk assessment. Action plans were developed to address these issues and improve safety audit processes.
This document summarizes the role and responsibilities of the Systems and Software Engineering Directorate within the Office of the Deputy Under Secretary of Defense for Acquisition and Technology. The Directorate provides independent technical advice and oversight to programs, establishes acquisition policy and guidance, and works to advance systems engineering practices. It sees opportunities to improve how programs apply systems engineering early in the acquisition lifecycle to better define requirements and manage risks.
The document outlines NASA's plan to improve its implementation of internal controls as required by OMB Circular A-123. It proposes changes to NASA's governance structure including making the Director of the Office of Program and Institutional Integration (OPII) the chair of the Senior Assessment Team to better integrate controls across the agency. The plan is to assess internal controls at more discrete "assessable units" and have program managers work closely with institutional counterparts to ensure requirements are met.
This document summarizes Ken Jenks' presentation on NASA's product peer review process. It discusses that product peer reviews are used to discover defects, validate products, and prepare for formal reviews. The presentation provides an overview of NASA's requirements for peer reviews according to various directives and standards. It also describes the different types of peer reviews NASA uses and demonstrates the flow and expectations of a product peer review through a live example, with introductions of team members and a background of the product being reviewed.
This document discusses computer forensics as it relates to investigating a Windows system for a pharmaceutical company. It covers gathering volatile system data through the use of tools run from a trusted CD, acquiring memory and filesystem images over the network, and analyzing these images to identify files, registry entries, and other artifacts that can provide a timeline of system activity and detect any unauthorized use. The goal is to preserve forensic evidence in a way that is admissible in court.
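The timeline-building step can be sketched simply: normalize timestamped events from the different artifact sources (filesystem, registry, network) and sort them into a single sequence. The events below are hypothetical, meant only to show the mechanics.

```python
# Minimal sketch of building an activity timeline from collected
# artifacts: normalize timestamped events from different sources
# and sort them into one sequence. All entries are hypothetical.
from datetime import datetime

events = [
    ("filesystem", "report.xls modified", "2010-03-02T14:05:00"),
    ("registry",   "Run key added",       "2010-03-02T13:58:00"),
    ("network",    "outbound connection", "2010-03-02T14:07:00"),
]

timeline = sorted(events, key=lambda e: datetime.fromisoformat(e[2]))
for source, desc, ts in timeline:
    print(ts, source, desc)
```

In practice an examiner must also reconcile time zones and clock skew across sources before the merged ordering can be trusted in court.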
This document discusses memory forensics and incident response. It notes that 46-58% of large organizational losses are due to insider threats, even though identifying offenders and recovering assets should be easier for insider incidents. However, in 40% of insider incidents those responsible are never identified due to insufficient evidence, often because 61% of businesses lack access to forensic technology or procedures. The document then outlines best practices for incident response, including collecting volatile memory data and using tools like Volatility to analyze RAM and identify intrusions. It also discusses challenges such as anti-forensics programs, and the use of direct memory access over FireWire to bypass authentication and recover passwords from memory.
This document discusses analyzing the contents of RAM on a Windows machine for forensic purposes. It explains that RAM contains valuable evidence about running processes, open files and registry handles, network information, passwords, and hidden data. The document outlines techniques for acquiring memory dumps, enumerating processes, investigating suspicious files and registry keys, and analyzing network connections and encryption keys from volatile memory. It also mentions tools that can be used for memory analysis, such as Memdump, Procenum, Volatility Framework, and commercial tools like Memoryze.
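As a concrete illustration of the string- and artifact-oriented analysis described above, here is a minimal sketch in Python that carves printable strings from a raw image and flags common lead types. The embedded "image" bytes are fabricated for the example; real analysis would run this over a full memory dump, or better, use a framework such as Volatility.

```python
import re

def carve_strings(image: bytes, min_len: int = 6) -> list[str]:
    """Extract runs of printable ASCII from a raw memory image (like `strings`)."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in pattern.finditer(image)]

def find_artifacts(strings: list[str]) -> dict:
    """Flag strings that look like forensic leads: URLs, registry paths, IPs."""
    return {
        "urls": [s for s in strings if "http://" in s or "https://" in s],
        "registry": [s for s in strings if s.startswith("HKEY_") or "\\Run" in s],
        "ips": [s for s in strings
                if re.search(r"\b\d{1,3}(\.\d{1,3}){3}\b", s)],
    }

# Fabricated "memory image": a few artifacts embedded in non-printable junk.
image = (b"\x00\x01junk" + b"https://evil.example/payload" + b"\xff\xfe"
         + b"HKEY_LOCAL_MACHINE\\Software\\Microsoft\\Windows\\CurrentVersion\\Run"
         + b"\x00" + b"conn to 10.0.0.17:4444" + b"\x90" * 8)

strings = carve_strings(image)
leads = find_artifacts(strings)
print(leads["urls"])      # URL artifact
print(leads["registry"])  # Run-key persistence artifact
print(leads["ips"])       # network connection artifact
```

In practice this is only a triage step: strings recovered this way must be tied back to a process, handle, or network structure before they carry evidential weight.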
This document discusses techniques for exploiting DLL hijacking vulnerabilities remotely through user interaction. It argues that DLL hijacking is still a viable attack vector despite protections like DEP and ASLR. It proposes manipulating the current directory to execute exploits and hiding DLLs in archives, email attachments, and browser redressing to trigger exploits without appearing suspicious. While not suitable for mass attacks, it concludes DLL hijacking enables rapid targeted attacks by abusing existing vulnerabilities.
This document provides instructions and code examples for using offensive programming techniques, including bash scripting, Perl programming, buffer overflow exploits, and JavaScript exploits. It includes a keylogger bash script, a Perl script to access an exploits archive, a C code example for a buffer overflow, and a JavaScript code example to steal login credentials. The goal is to educate on how hackers use these programming languages and exploits to carry out offensive security attacks.
Unmasking Careto through Memory Forensics (video in description) – Andrew Case
My presentation from SecTor 2014 on analyzing the sophisticated Careto malware with memory forensics & Volatility
Video here: http://2014.video.sector.ca/video/110388398
Volatility is an open source memory forensics tool that analyzes RAM dumps to detect malware. It supports Windows, Linux, Mac, and Android, and uses plugins to parse memory images and extract useful information. Key things Volatility can detect include injected code, unpacked files, hooks, and kernel driver modifications left by malware in memory. It allows analyzing ransomware-locked systems, unpacking encrypted files, and dumping the executable content of running processes for further reverse engineering.
Secugenius provides malware protection through email security, web security, and malware analysis. They analyze malware to determine the impact on infected systems and generate detection signatures. Secugenius analysts can also reverse engineer malware to understand the author's capabilities and intentions at a deeper level.
Windows FE (Forensic Environment) allows forensic examiners to boot an evidence machine to Windows instead of Linux or other operating systems. This allows examiners to use their familiar Windows-based forensic tools rather than needing to learn Linux applications. Windows FE is based on Windows PE (Preinstallation Environment) but is designed for forensic analysis, where Windows PE is for system preparation and installation. Booting to Windows FE preserves evidence better than hardware write blocking and allows examiners to efficiently image, triage, and examine evidence machines using their preferred Windows software tools.
This document provides an overview of the Volatility memory forensics framework. It discusses the Volatility Foundation, which supports development of the open-source Volatility tool. It highlights new features in Volatility 2.4, including improved support for Windows, Linux, MacOS, and analyzing application artifacts from programs like Chrome, Firefox, and Notepad. It outlines the Volatility roadmap and concludes by discussing the 2014 Volatility Plugin Contest winners, whose new plugins will enhance Volatility's capabilities.
This document discusses various topics in digital and computer forensics. It introduces computer forensics and key concepts such as "every action leaves evidence" and "absence of evidence can itself be evidence." It then discusses extracting information from Windows memory, such as login credentials, processes, and registry keys. Specific tools used for memory and registry analysis are also mentioned. Finally, it discusses network forensic processes such as capturing traffic, analyzing logs and devices, and tools used for network traffic analysis.
Digital forensics research: The next 10 years – Mehedi Hasan
Today’s Golden Age of computer forensics is quickly coming to an end. Without a clear strategy for enabling research efforts that build upon one another, forensic research will fall behind the market, tools will become increasingly obsolete, and law enforcement, military and other users of computer forensics products will be unable to rely on the results of forensic analysis. This article summarizes current forensic research directions and argues that to move forward the community needs to adopt standardized, modular approaches for data representation and forensic processing.
© 2010 Digital Forensic Research Workshop. Published by Elsevier Ltd. All rights reserved.
The Practice of Cyber Crime Investigations – Albert Hui
Albert Hui is an experienced cybersecurity expert who has worked with law enforcement and in the private sector. He discusses the practice of cyber crime investigations and common techniques used. These include gathering digital evidence, forensic analysis, incident response, and using methodologies like Locard's Exchange Principle and the case theory approach to form hypotheses and test them. Red flags and artifacts are investigated using these methods to find potential intrusions, malware, or fraudulent activities. The goal is to identify bad actors and gather evidence for legal actions or risk mitigation. In-depth domain knowledge is crucial for effective cyber investigations.
Cyber Threat Intelligence: What do we Want? The Incident Response and Technol... – Albert Hui
Introduces "Hui's Hierarchy of CTIs", a reference model for classifying cyber threat intelligence (CTI), along with a 5W1H model for CTI contexts, and illustrates through examples which CTIs incident response (IR) and technology risk management (TRM) teams will find useful.
Key Considerations for a Successful Hyperion Planning Implementation – Alithya
The document provides an overview and recommendations for a successful Hyperion Planning implementation. It discusses key project phases, recommended build techniques including application definition, dimensionality, master data integration, building the planning model, and form and calculation development. It also covers tips for planning design including delineating plan types, defining dimensionality, integrating master data from various sources, and best practices for building forms to ensure performance.
The document summarizes a presentation on software process methodology and findings from related research. It discusses common misconceptions about software processes, questions about process and practice, and the research methodology used. The methodology included questionnaires, user stories, and a Plus/Minus/Interesting analysis of project management practices. Key findings included a need for improved estimation, scheduling, and risk tracking. Salient suggestions for success focused on life cycle selection, estimation, monitoring, measurement, knowledge sharing, and ensuring domain expertise. The overall goal was to identify basic process improvements that could help small to medium organizations.
IDC & Gomez Webinar -- Best Practices: Protect Your Online Revenue Through Web... – Compuware APM
Did you know that 85% of users complain about slow response time? Poor web application performance can directly impact your bottom line
The success of your critical eBusiness initiatives depends on your ability to deliver quality web experiences. Unfortunately, 65% of applications are not properly load tested prior to launch, resulting in lost revenue, increased support costs and brand damage. So how can you ensure success when launching new applications, adding features, deploying new infrastructure, rolling out marketing campaigns, or preparing for seasonal spikes like the holiday shopping season?
Join us as our guest speaker, Melinda Ballou, IDC’s Program Director for Application Life-Cycle Management research discusses challenges, drivers and best practices for effective web performance testing and quality life-cycle management for today’s rich and complex applications. Additional topics that Imad Mouline, Gomez’s CTO will cover in this session are:
Best practices for ensuring the success of critical eBusiness initiatives
The end-user experience and business impact of emerging web technologies like Rich Internet Applications, virtualization, cloud computing and Web 2.0
A new approach for web performance and load testing that’s easy to use, delivered on-demand, and enables you to find and fix problems before they impact customers
Who Should Watch: Line of Business and eCommerce Managers, Interactive Marketing, Brand Managers, Project Managers and IT Operations Executives.
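The load-testing gap described above can be made concrete with a minimal sketch using only Python's standard library. The throwaway local server stands in for a real application under test, and the request count, concurrency, and percentile are illustrative choices, not recommendations.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

# Stand-in for the application under test; a real run targets staging/production.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def timed_get(_):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# Simulate 50 requests from 10 concurrent users; report 95th-percentile latency.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = sorted(pool.map(timed_get, range(50)))

p95 = latencies[min(int(len(latencies) * 0.95), len(latencies) - 1)]
print(f"p95 latency: {p95 * 1000:.2f} ms")
server.shutdown()
```

A real campaign would ramp load over time, vary geography and browsers (as on-demand services like Gomez did), and compare the measured p95 against an agreed SLA threshold.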
This document discusses sustaining process improvements through project closeout and transitioning to process owners. It outlines the timeline for project closeout, including transitioning to the final process owner at a commissioning meeting and subsequent review meetings. Maintaining improvements requires executing process management, with elements like process maps, monitoring, and response plans. Process owners must institutionalize changes through cultural shifts and updated systems to drive permanent behavior changes.
The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) is an appraisal method used to drive improvements in systems and software engineering processes. SCAMPI appraisals involve trained teams examining processes, documents, and interviews to evaluate strengths, weaknesses, and determine a maturity rating. The appraisal follows a defined process including planning, conducting interviews and documentation review, validating findings, and reporting results to identify improvement opportunities.
Fuse Customer Perspectives: Oil & Gas / Energy – Acumen
This document outlines an upcoming webinar hosted by Acumen, a project management software company, that will discuss scheduling best practices and demonstrate their analytics platform Fuse. The webinar will feature three user stories from Marathon, Florida Power & Light, and an unnamed EPC discussing how they use Fuse metrics and analytics to improve project planning and monitoring. It will also announce Acumen's upcoming annual user summit in September focused on hands-on Fuse training and presentations on scheduling and risk management.
The document discusses NASA's independent review process for programs and projects. It aims to ensure the highest probability of mission success. Key points:
1. Independent reviews are conducted by Standing Review Boards at each project life-cycle milestone to objectively assess technical approach, schedule, resources, risk, and management approach.
2. Reviews provide independent validation of projects' readiness to proceed and reassure stakeholders that commitments can be delivered. Preparing for reviews allows holistic project examination.
3. Reviews follow NASA governance involving senior management, technical authorities, and decision authorities. Standing Review Boards comprised of independent experts conduct the actual reviews.
4. The process helps ensure projects receive independent assurance that they are on track to deliver on their commitments.
Project management in pharmaceutical generic industry basics and standards – Jayesh Khatri
Project management involves coordinating activities to meet objectives within constraints like time, cost and quality. It involves planning, tracking progress, and controlling a project. Key aspects include defining requirements, creating a schedule and assigning resources. Tools like Gantt charts help plan and monitor the project. Benefits-centered project management focuses on achieving business benefits in addition to project execution. Prioritization techniques like MoSCoW help determine requirement importance. Effective strategies include empowered decision making, clear roles, and collaboration across functions.
Corporater at BSC and Strategy Forum - March 2013 – Pedro S. Pereira
Corporater is a specialised vendor of Balanced Scorecard and Performance Management software solutions that are flexible, ready to run, and easily managed and configured by business users. Founded in 2000, Corporater has over 1,000 customers across all key domains and an international presence in over 29 countries through its offices and strategic partnerships.
The Corporater EPM Suite assists businesses in managing performance more effectively through dashboards, enterprise reports, and balanced scorecards. It also provides functionality for managing initiatives and visualizing strategy, along with tools for budgeting, management meetings, risk management, manual and automated data collection, and analysis.
The document discusses the importance of a structured cost estimating process. It outlines key aspects of an effective process including producing quality estimates, consistency, accuracy, competency development, and risk reduction. It then describes the key stages of the capital cost estimating process including inputs, estimating procedures and tools, quality procedures, types of estimates, estimating presentation standards, and example estimating software.
Considerations in Selecting and Protecting Your IT Investment – Helene Heller, PMP
The document discusses considerations for selecting and protecting IT investments, including the importance of aligning technology choices with business needs. It recommends using an enterprise IT structure and governance model to select projects that address end-to-end business processes and have a clear business case. A portfolio management approach is suggested to prioritize projects, optimize costs and benefits, and adapt to changing business needs. Key steps include understanding stakeholder goals, brainstorming potential enterprise solutions, and developing a business case for each project.
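The prioritization step of such a portfolio approach often reduces to a weighted scoring model. A minimal sketch in Python follows; the criteria, weights, and projects are invented for illustration, not taken from the document.

```python
def score(project: dict, weights: dict) -> float:
    """Weighted sum of criterion scores (each criterion rated on a 1-5 scale)."""
    return sum(project[criterion] * w for criterion, w in weights.items())

# Hypothetical scoring criteria; risk is inverted so higher is always better.
weights = {"strategic_fit": 0.4, "benefit": 0.35, "risk_inverse": 0.25}

portfolio = {
    "ERP upgrade":    {"strategic_fit": 5, "benefit": 4, "risk_inverse": 2},
    "CRM rollout":    {"strategic_fit": 3, "benefit": 5, "risk_inverse": 4},
    "Legacy rewrite": {"strategic_fit": 2, "benefit": 3, "risk_inverse": 1},
}

# Rank projects by weighted score, highest first.
ranked = sorted(portfolio, key=lambda p: score(portfolio[p], weights), reverse=True)
print(ranked)
```

The value of the model is less the arithmetic than the forcing function: stakeholders must agree on the criteria and weights before the ranking is computed, which surfaces the business-case discussion the document recommends.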
Project Management Overview for PM Leaders – Jeff Thaler
An overview of Project Management's core areas of focus: Planning >> Managing >> Tracking >> Reporting >> Collaborating.
Web App Testing - A Practical Approach – Walter Mamed
Testing Web Applications: A Practical Approach
Walter Mamed, JWT.com
Track 3: 11:00 – 12:00
Web-based applications have become the most widely used form of software, not only for e-commerce, but in our personal lives as well. Whether your spouse is booking your next vacation, or you are scheduling an appointment in an acute care facility, responsiveness and reliability are key to your satisfaction and desire to return. The quality assurance group testing these applications faces many challenges, with shorter test cycle times, fewer resources, constantly evolving technology, and instant worldwide exposure. Explore how to plan, test, and deploy new or updated websites with confidence using practical, no-nonsense methods. Functional and non-functional testing including configuration, usability, performance, and security will be covered. Learn how to use software tools to improve your testing techniques. Automated testing, mobile browsing, and the future of Rich Internet Applications will also be discussed. Take home a new perspective on testing web applications; implement these solutions and reduce your testing anxiety.
About the Speaker…
Walter Mamed is Director of Quality Assurance at JWT (Digital Technology) in Irving, Texas. He has over 30 years of experience in a variety of quality assurance and software test engineering positions, focusing on software and hardware test automation. Walt has been building test automation frameworks for GUI testing and web-based applications for over 15 years. His web testing experience includes secure email, on-boarding, e-commerce, and lead generation, as well as large-scale automated regression test suites. Walt is very active in the professional community, having served as Director of the Board and Secretary for the Dallas/Ft. Worth (HP) Mercury User Group (DFWMUG.com) for the last 7 years. He is an ASQ Certified Software Quality Engineer.
The document describes Dual-track Scrum, a two-phased agile process consisting of discovery and delivery phases. In the discovery phase, problems are defined and assumptions validated through customer interviews, prototypes, and testing. This phase addresses issues like lack of innovation and surprises in production. The delivery phase then releases software, conducting A/B tests, measuring KPIs, and deciding to pivot or persevere. Stakeholders from various functions are involved at different stages, and the process uses techniques like story mapping, iteration planning, and usability testing to balance short and long-term goals.
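The pivot-or-persevere decision in the delivery phase typically rests on an A/B test of a KPI such as conversion rate. A minimal two-proportion z-test sketch in Python follows; the conversion counts and the 1.96 threshold (95% confidence) are illustrative assumptions.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic for the difference between two conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative KPI: checkout conversions in control (A) vs variant (B).
z = two_proportion_z(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
decision = "persevere" if z > 1.96 else "pivot or keep testing"
print(f"z = {z:.2f} -> {decision}")
```

The statistic only formalizes one input to the decision; sample size planning and guardrail metrics (e.g. latency, support tickets) belong in the discovery phase before the test is launched.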
This document provides an overview of multi-generation project planning (MGPP). MGPPs allow organizations to plan related improvement projects over multiple generations or releases. They help manage scope, capture additional ideas, identify replication opportunities, and communicate how individual projects fit into the overall strategy. The benefits, elements, and an example of an MGPP to reduce Army medical mobilization lead times are described.
DYNDEC develops customized talent management systems using over 50 years of experience in human resources and business consulting. They build centralized data systems for comprehensive interactive assessment and development using an assessment portal. This portal allows for mobile access, data analysis, training, assessment, feedback and reporting in a customizable system. DYNDEC is committed to meeting business, legal and ethical concerns for its clients.
DYNDEC develops customized talent management systems using over 50 years of experience in human resources and organizational psychology. Their interactive assessment portal allows companies to comprehensively assess and develop candidates and employees across various devices. It features international language translation, customized branding, and reduces costs by saving travel expenses and increasing productivity compared to traditional talent programs. DYNDEC can implement a solution by analyzing jobs, using existing job data and assessments, or developing a new web-based program.
This document provides an overview of project chartering for continuous process improvement (CPI) projects. It discusses selecting CPI projects, developing a project charter, and who is responsible for chartering a project. The project charter defines the team's mission and includes the opportunity/problem statement, business case, goal statement, project scope, timeline, and team selection. It is a living document that may change over time. Developing an effective charter involves scoping the project based on the identified problem and determining proportional benefits, measurements, and boundaries.
Dnv Improving Your Process Performances With Agile – George Ang
This document discusses a presentation given by Yann Hamon of DNV IT Global Services on improving process performances with agile methods. It provides background on DNV, describes agile software development practices like scrum and lean, and how mixing agile and CMMI can provide repeatable and controlled agile processes. The presentation explains how agile benefits productivity, reduces time-to-market and defects, and improves maintainability through practices like iterative development, continuous integration and automated testing.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
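One practical guardrail regardless of the model used: AI-generated markup should be machine-checked before it enters a pipeline. Below is a minimal well-formedness and required-element check using Python's standard library; the sample document and required tags are hypothetical.

```python
import xml.etree.ElementTree as ET

def check_generated_xml(xml_text: str, required_children: set[str]) -> list[str]:
    """Return a list of problems found in AI-generated markup."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return [f"not well-formed: {exc}"]
    problems = []
    present = {child.tag for child in root}
    for tag in sorted(required_children - present):
        problems.append(f"missing required element: <{tag}>")
    return problems

# A hypothetical AI response that dropped a required element.
generated = "<article><title>AI and XML</title><body>...</body></article>"
print(check_generated_xml(generated, {"title", "body", "author"}))
```

For production use, a full XSD or Schematron validation (which ElementTree does not provide) is the appropriate gate; the sketch above only catches well-formedness failures and missing top-level elements.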
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Removing Uninteresting Bytes in Software Fuzzing – Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
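The core idea of DIAR as summarized above can be sketched on a toy target: probe each seed byte by flipping it, observe whether the program's execution path changes, and drop bytes that never influence the path. This is an illustrative reconstruction under simplifying assumptions, not the authors' implementation.

```python
def toy_target(data: bytes) -> tuple:
    """Stand-in parser whose 'path' depends only on a 4-byte magic header
    and one mode byte; everything after that is dead weight in a seed."""
    path = ["start"]
    if data[:4] == b"MAGC":
        path.append("valid-header")
        if len(data) > 4 and data[4] == 0xFF:
            path.append("extended-mode")
    return tuple(path)

def interesting_bytes(seed: bytes) -> list[int]:
    """Indices whose mutation changes the observed execution path."""
    baseline = toy_target(seed)
    keep = []
    for i in range(len(seed)):
        mutated = bytearray(seed)
        mutated[i] ^= 0xFF  # single-byte flip as a cheap sensitivity probe
        if toy_target(bytes(mutated)) != baseline:
            keep.append(i)
    return keep

seed = b"MAGC\xff" + b"\x00" * 20  # bloated seed: 25 bytes, few matter
keep = interesting_bytes(seed)
trimmed = bytes(seed[i] for i in keep)
print(f"kept {len(keep)} of {len(seed)} bytes: {trimmed!r}")
```

A real implementation has to cope with offset-dependent formats (removing bytes shifts later fields) and would use the fuzzer's coverage feedback rather than one full re-execution per byte, but the payoff is the same: mutations land only on bytes that can change behavior.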
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Climate Impact of Software Testing at Nordic Testing Days – Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Communications Mining Series - Zero to Hero - Session 1 – DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! – SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Full-RAG: A modern architecture for hyper-personalization – Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
A tale of scale & speed: How the US Navy is enabling software delivery from l... – sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
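An automated policy gate of the kind listed above can be sketched as a check of each SBOM component against deny rules. The SBOM fragment and policy here are illustrative (loosely CycloneDX-shaped) and are not a real Navy or Anchore policy.

```python
# Minimal SBOM policy gate: fail the pipeline if any component violates policy.
sbom = {  # illustrative fragment, loosely CycloneDX-shaped
    "components": [
        {"name": "log4j-core", "version": "2.14.1", "license": "Apache-2.0"},
        {"name": "left-pad", "version": "1.3.0", "license": "WTFPL"},
    ]
}

policy = {
    "denied_licenses": {"WTFPL", "AGPL-3.0"},
    "denied_packages": {("log4j-core", "2.14.1")},  # e.g. a known-vulnerable pin
}

def evaluate(sbom: dict, policy: dict) -> list[str]:
    """Collect human-readable policy violations for every component."""
    violations = []
    for c in sbom["components"]:
        if c["license"] in policy["denied_licenses"]:
            violations.append(f"{c['name']}: license {c['license']} not allowed")
        if (c["name"], c["version"]) in policy["denied_packages"]:
            violations.append(f"{c['name']}@{c['version']}: denied package pin")
    return violations

violations = evaluate(sbom, policy)
for v in violations:
    print("POLICY FAIL:", v)
```

In a pipeline, a non-empty violation list would block promotion, and the same evaluation output doubles as the policy evidence attached to the ATO package.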
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf – Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
How to Get CNIC Information System with Paksim Ga.pptx – danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
UiPath Test Automation using UiPath Test Suite series, part 6 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 6
1. 2008 Security Vulnerability Assessment (SVA)
Project Kick-Off Meeting
August 1, 2008
Presentation has been sanitized – Intended for demonstration purposes only
2. Meeting Agenda
Project Team Introduction – Dan Wallace
Project Approach – Mark O’Brien
Project Execution – White Hats
Project Phases
Phase Timeline
Target Selection Process
Effort Allocation by Business Unit
Key Project Milestones
Findings Release Process
Key Project Assumptions
Documentation Administration
Page 2 15-Jul-09
3. Project Team
ITG
Mark Gibaldi Executive Champion
Dan Wallace Project Manager
Mark O'Brien Lead Project Owner
Dell Hartmann Project Owner
Core White Hats Team
Tony Buffomante Lead Engagement Partner
Kyle Kappel Lead Engagement Manager
Charlie Hosner QA Manager
Daimon Geopfert Lead Testing Manager
Hany Wassef Primary Staff Resource
Adam Keagle Primary Staff Resource
4. Project Approach
White Hats is the new SVA service supplier this year
• The core White Hats team has extensive SVA experience
• We are not auditors. The White Hats SVA team will not discuss any part of this project with Internal or External Audit.
We bring a different outlook and approach
• The focus is on improving critical systems and processes rather than on individual technical issues
Focus will be based on business risks and on activities that add value
• Vulnerability ratings will be based on all conditions, not just the rating a tool assigns
5. Project Approach
You will notice a significant difference from last year
The scope and target selection process will be driven by high-risk applications & systems
Executive interviews will also help define the scope & targets
Menu-driven approach for assessment activities
• Sector Security Leads will help decide which security assessment activities White Hats will perform (web app pen testing, wireless, social engineering, etc.)
• Final decisions made by the Project Owners (i.e., Global Security)
6. SVA Project Phases
PHASE 1: Planning/Scoping
PHASE 2: Assessment/Execution
PHASE 3: Data Analysis/Reporting
7. Project Phase Timeline
Activity                                    Start Date   End Date
PHASE 1: Planning/Scoping                   Aug 4        Aug 29
  Includes: Sector security lead interviews & scoping, executive
  interviews, testing plan creation, CAB approvals, etc.
PHASE 2: Assessment/Execution (Fieldwork)   Sept 2       Sept 26
  Includes: All fieldwork activities, passive reviews, network
  vulnerability scanning, web application scanning, validation
  procedures, etc.
PHASE 3: Data Analysis/Reporting            Sept 29      Oct 24
  Includes: Testing results analysis, acceptance of findings, report
  creation & socialization.
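For planning purposes, the phase windows above translate directly into available working days. A short sketch (dates hard-coded from the timeline table; holidays are not modeled, since none fall inside the 2008 windows):

```python
from datetime import date, timedelta

def weekdays(start: date, end: date) -> int:
    """Count Mon-Fri days in the inclusive range [start, end]."""
    days = 0
    d = start
    while d <= end:
        if d.weekday() < 5:  # 0=Mon .. 4=Fri
            days += 1
        d += timedelta(days=1)
    return days

# Phase windows taken from the timeline table (2008 dates)
phases = {
    "Planning/Scoping":        (date(2008, 8, 4),  date(2008, 8, 29)),
    "Assessment/Execution":    (date(2008, 9, 2),  date(2008, 9, 26)),
    "Data Analysis/Reporting": (date(2008, 9, 29), date(2008, 10, 24)),
}

for name, (start, end) in phases.items():
    print(f"{name}: {weekdays(start, end)} working days")
```

Each phase spans roughly four weeks; Phase 2 loses one day because fieldwork starts on a Tuesday.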
8. Target Selection Process
Prioritized, risk-based selection of locations, systems, and devices to assess. Conducted to determine the target population.
Example targets:
• Critical ITG Application: ABC App
• Infrastructure Computing: ID Servers and Platforms
• Network Environment: Router, switch, firewall
• Location: Red Hills Data Center
Result: identified targets and testing steps
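A risk-based selection like the one above is, at its core, a scoring-and-ranking exercise. The sketch below is illustrative only: the scoring fields, weights, and scores are hypothetical and are not part of the actual SVA target-selection methodology.

```python
# Illustrative sketch only: the fields and scores below are hypothetical,
# not part of the actual SVA target-selection methodology.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    criticality: int   # business impact, 1 (low) to 5 (high)
    exposure: int      # reachability, 1 (isolated) to 5 (internet-facing)

    @property
    def risk_score(self) -> int:
        return self.criticality * self.exposure

candidates = [
    Candidate("ABC App (critical ITG application)", 5, 4),
    Candidate("ID Servers and Platforms", 4, 2),
    Candidate("Router/switch/firewall environment", 3, 5),
    Candidate("Red Hills Data Center hosts", 4, 1),
]

# Prioritized, risk-based selection: highest scores are assessed first
for c in sorted(candidates, key=lambda c: c.risk_score, reverse=True):
    print(f"{c.risk_score:>2}  {c.name}")
```

The real process replaces these toy scores with input from the sector security lead and executive interviews described in Phase 1.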
9. Effort Allocation by BU
Estimated Task Hours per Business Unit
Task                        ResCap    PL   NAO    IO    CF   MIC
Global Project Management       32    32    32    32    16    16
Scoping Phase                   48    48    48    48    24    24
Assessment/Execution           260   260   260   260   130   130
Analysis & Reporting            60    60    60    60    30    30
Totals:                        400   400   400   400   200   200
If activities are requested which exceed the hours allocated above, a project change control process will be coordinated through Mark Gibaldi’s office.
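The allocation above can be sanity-checked, and the change-control trigger expressed, in a few lines (hours copied from the table; the overage check is an illustration of the rule, not the actual change-control process):

```python
# Hours per business unit, copied from the allocation table.
# Column order: ResCap, PL, NAO, IO, CF, MIC
tasks = {
    "Global Project Management": [32, 32, 32, 32, 16, 16],
    "Scoping Phase":             [48, 48, 48, 48, 24, 24],
    "Assessment/Execution":      [260, 260, 260, 260, 130, 130],
    "Analysis & Reporting":      [60, 60, 60, 60, 30, 30],
}
units = ["ResCap", "PL", "NAO", "IO", "CF", "MIC"]

# Sum each column to recover the per-unit totals row
budget = dict(zip(units, (sum(col) for col in zip(*tasks.values()))))
print(budget)

def needs_change_control(unit: str, requested_hours: int) -> bool:
    """True when requested work exceeds the unit's allocated hours."""
    return requested_hours > budget[unit]
```

The computed totals match the Totals row (400 hours for ResCap, PL, NAO, and IO; 200 for CF and MIC), and any request past those figures routes through the change-control process.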
10. Key Milestones
#  | Milestone                                                         | Responsibility          | Completion Date
1  | Provide Preliminary Scope, Targets, Executive Interview List      | BU Leads                | Monday, Aug 11
2  | Schedule BU Executive Interviews                                  | Dan Wallace             | Tuesday, Aug 12
3  | Complete BU Executive Interviews                                  | Execs, White Hats       | Thursday, Aug 21
4  | Final Scope & Target Approval for All BUs                         | D. Hartmann, M. O’Brien | Friday, Aug 22
5  | Obtain CAB Approval/Authorization for Sept 2 Start Date           | BU Leads                | Wednesday, Aug 27
6  | Testing Begins                                                    | White Hats              | Tuesday, Sept 2
7  | Testing Complete                                                  | White Hats              | Friday, Sept 26
8  | Preliminary Findings Released                                     | White Hats              | Friday, Oct 3
9  | Findings Reviewed & Accepted by BU Leads and Performing Suppliers | BU Leads & PS’ers       | Wednesday, Oct 8
10 | Draft Reports Released to ITG                                     | White Hats              | Thursday, Oct 16
11 | Report Changes/Comments Provided to White Hats                    | ITG                     | Monday, Oct 20
12 | Final Reports Complete                                            | White Hats              | Friday, Oct 24
11. Overview - Findings Release Process
DIAGRAM REMOVED
The full diagram outlining the findings release process will be provided. Understanding this process is a key component of the project.
White Hats will not be responsible for coordinating or tracking remediation activities, including validating closed issues. (IBM IIM/OM)
12. Key Assumptions
1. Sector Security Leads and Performing Suppliers will identify a backup resource who can make decisions on their behalf in the event of unavailability, such as personal time off.
2. Sector Security Leads will be responsible for obtaining CAB approval for all in-scope testing activities. White Hats will not be involved in this process.
3. White Hats will leverage existing ITG Qualys installations for network vulnerability scanning.
4. Vulnerability scanning will be network-based, not host-based (i.e., no admin credentials).
5. Once findings are accepted by ITG and the PS’ers, White Hats’ involvement is complete. White Hats will not be coordinating or tracking remediation efforts.
6. Global Security Operations and IBM IIM/OM will be responsible for tracking remediation outside the scope of this project.
13. Documentation Administration
XClient eRoom is the system of record for this project:
https://xx13x.xcl1ent.XXXX.com/eRoom/ITG-SVA/2008
• Secure online collaboration tool based on Documentum eRoom technology
• Hosted by White Hats in a secure datacenter
• All that is needed is a web browser, Internet access, and a username/password
• Installing a browser plug-in is necessary for advanced features
• All project team members will be given individual access
• Must be used to exchange sensitive documents and information (no email)
• eRoom will also hold other relevant project information (project plans, calendar of events, status reports, contact lists, findings database, etc.)