The document describes the internal innovation pipeline process at the HHS IDEA Lab, which includes discovery, testing, and implementation phases. It notes that government traditionally struggles with testing innovations in a customer-driven way before large-scale implementation. The HHS IDEA Lab launched an Ignite Accelerator program to help employees test ideas, and key learnings included the need for more training on identifying user needs and conducting sophisticated testing before rollout. The program was adjusted to focus on earlier discovery and testing phases where support is most needed.
To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both the development and testing processes. Collecting, analyzing, and using metrics are complicated by the fact that many developers and testers are concerned that the metrics will be used against them. Join Rick Craig as he addresses common metrics: measures of product quality, defect removal efficiency, defect density, defect arrival rate, and testing status. Learn the guidelines for developing a test measurement program, rules of thumb for collecting data, and ways to avoid “metrics dysfunction.” Rick identifies several metrics paradigms, including Goal-Question-Metric, and discusses the pros and cons of each. Delegates are urged to bring their metrics problems and issues for use as discussion points.
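The metrics named above have simple defining formulas; as a rough sketch (function names and figures are illustrative, not taken from the talk):

```python
# Illustrative calculations for two common test metrics (all figures hypothetical).

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
    """Share of total known defects caught before release, as a percentage."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total

print(defect_density(45, 30.0))           # 45 defects in 30 KLOC -> 1.5
print(defect_removal_efficiency(90, 10))  # 90 caught pre-release, 10 after -> 90.0
```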
Product and Service Development Council - Prosci Presentation - Tim Creasey
April 30, 2015 presentation on change management to The Conference Board Product and Service Development Council.
Tim Creasey @timcreasey tcreasey@prosci.com
All knowledge work requires a delicate and continuously shifting balance between delivery – exploiting existing knowledge – and discovery – exploring new knowledge. This need to balance discovery and delivery can be found across the entire innovation cycle: from technology innovation through performance and sustaining innovation to disruptive innovation. It has been a driving concern for specific approaches such as Lean Product and Process Development as well as The Kanban Method, as illustrated by examples such as: developing a new product that requires novel features (discovery) while managing the overall risk involved in developing those features (delivery); improving the agility and predictability of an organization, which may require substantial change (discovery), while keeping resistance to change under control (delivery); and a startup that initially needs to find problem/solution fit or product/market fit (discovery) but must then develop the organization to deliver at scale (delivery).
In each of the examples above, too much emphasis on discovery may result in a disconnection from the past, leading to resistance to change, increased delivery risk, and non-adoption of the innovation. Too little emphasis on discovery (and consequently too much emphasis on delivery) may leave an organization unprepared for the future, resulting in stagnation and the risk of being disrupted. Discovery Kanban systems are Kanban systems that help balance discovery and delivery while moving from a mindset of episodic (one-off) innovation and change toward a culture of continuous innovation and change. They work across the entire discovery cycle, starting from pre-hypothesis, moving into hypothesis validation, and ending in post-hypothesis. In this presentation, we will discuss the different elements of Discovery Kanban, along with examples and underlying principles.
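One concrete mechanism Kanban systems use to balance the two kinds of work is per-stage work-in-progress (WIP) limits; a toy sketch under assumed stage names and limits (not taken from the presentation):

```python
# Toy sketch of a kanban board with separate WIP limits for discovery and
# delivery stages. Stage names and limits are hypothetical.

class KanbanBoard:
    def __init__(self, wip_limits):
        self.wip_limits = wip_limits                # stage -> max cards allowed
        self.stages = {s: [] for s in wip_limits}   # stage -> cards currently in stage

    def pull(self, card, stage):
        """Pull a card into a stage only if its WIP limit allows it."""
        if len(self.stages[stage]) >= self.wip_limits[stage]:
            return False  # limit reached: finish existing work before starting more
        self.stages[stage].append(card)
        return True

board = KanbanBoard({"hypothesis": 2, "validation": 1, "delivery": 3})
assert board.pull("idea-1", "hypothesis")
assert board.pull("idea-2", "hypothesis")
assert not board.pull("idea-3", "hypothesis")  # discovery WIP limit hit
```

The point of the limit is the refusal in `pull`: capping discovery work forces ideas to be validated or discarded before new ones start, rather than piling up.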
A Rapid Introduction to Rapid Software Testing - TechWell
You're under tight time pressure and have barely enough information to proceed with testing. How do you test quickly and inexpensively, yet still produce informative, credible, and accountable results? Rapid Software Testing, adopted by context-driven testers worldwide, offers a field-proven answer to this all-too-common dilemma. In this one-day sampler of the approach, Paul Holland introduces you to the skills and practice of Rapid Software Testing through stories, discussions, and "minds-on" exercises that simulate important aspects of real testing problems. The rapid approach isn't just testing with speed or a sense of urgency; it's mission-focused testing that eliminates unnecessary work, assures that the most important things get done, and constantly asks how testers can help speed up the successful completion of the project. Join Paul to learn how rapid testing focuses on both the mind set and skill set of the individual tester who uses tight loops of exploration and critical thinking skills to help continuously re-optimize testing to match clients' needs and expectations.
This presentation highlights some of the work of the Seattle Children's Imagination Lab between October 2016 and December 2018. It covers the development pathway for Seattle-PAP, a portfolio of some of the 130+ projects completed using our Innovation Pipeline, and a few thoughts on innovation that informed the development of our pipeline.
For a company like Blue Apron that is radically transforming the way we buy, prepare and eat meals, experimentation is mission critical for delivering a great customer experience. Blue Apron doesn’t just think about experimenting to improve short term conversion, they focus on ways to impact longer term metrics like retention, referrals, and lifetime value.
Join John Cline, engineering manager at Blue Apron, to learn how his team has built their experimentation program on Optimizely’s platform.
Attend this webinar to learn:
-How Blue Apron built their experimentation program on top of Optimizely Full Stack
-How developers play a critical role in experimentation
-The key considerations for developers when thinking about experimentation
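A common building block of server-side experimentation platforms such as Optimizely Full Stack is deterministic user bucketing, so the same visitor always sees the same variation; a generic sketch of the idea (not Optimizely’s actual algorithm or API):

```python
import hashlib

# Generic sketch of deterministic experiment bucketing (hypothetical names):
# hashing the user and experiment keys together gives a stable assignment,
# so no per-user state needs to be stored.

def assign_variation(user_id: str, experiment_key: str, variations):
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variations)
    return variations[bucket]

v1 = assign_variation("user-42", "new_checkout", ["control", "treatment"])
v2 = assign_variation("user-42", "new_checkout", ["control", "treatment"])
assert v1 == v2  # stable assignment across sessions and servers
```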
Julian Harty - Alternatives To Testing - EuroSTAR 2010 - TEST Huddle
EuroSTAR Software Testing Conference 2010 presentation on "Alternatives To Testing" by Julian Harty. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
This talk sets out the thinking behind the Gerrard Consulting Business Story Method. The slides present background and an overview of the method, and introduce the Testela Business tool that supports Gerrard Consulting services.
Testaus 2014 Seminar: Paul Gerrard, "The Changing Role of Testers" - Tieturi Oy
The testing field is in the midst of a major transformation. Rather than developers and testers applying one particular production method, such as Agile, in every situation, professionals must themselves demonstrate agility by creatively selecting and combining different production methods to suit the situation.
Bundledarrows160 - bit.ly/teamcaptainsguild
Only 5 percent of entrepreneurship is the big idea, the business model, the whiteboard strategizing, and the splitting up of the spoils.
The other 95 percent is the gritty work that is measured by innovation accounting: product prioritization decisions, deciding which customers to target or listen to, and having the courage to subject a grand vision to constant testing and feedback.
Conducting the Experimentation Orchestra - Optimizely
Join us to hear from Jill Brown Thomas, the head of the Optimization program at Wolverine Worldwide, as she discusses her own experience changing culture and building a testing program across 12 very different brands. From alignment on people, to building process, to buying technology, to changing and establishing new culture, Jill will discuss what works and what doesn’t, and what the future holds for Wolverine’s Optimization Journey.
Attend this webinar to learn:
-How to unify best practices across a portfolio of brands.
-A new model for uncovering what’s important to test from Optimizely’s solution partner Clearhead.
-New use cases for running experiments that move the needle.
Innovation and Opportunity Identification - Peachy Essay
Circumstances can be turned into opportunities, but only under certain conditions. There are many ways to prepare for circumstances that would otherwise wilt opportunities, depending on the field of interest. Innovation is key, of course, but one must also continually seek out and utilize opportunities.
Identifying an opportunity, especially in business, relies heavily on four areas. These areas have been associated with many business opportunities across the world, and many people continue to benefit from them.
Gerlof Hoekstra - OMG What Have We Done - EuroSTAR 2013 - TEST Huddle
EuroSTAR Software Testing Conference 2013 presentation on "OMG What Have We Done" by Gerlof Hoekstra.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
7 Features of Highly Effective Outcomes Improvement Projects - Health Catalyst
There’s a formula for success when putting together outcomes improvement projects and organizing the teams that make them prosper. Too often, critically strategic projects launch without the proper planning, structure, and people in place to ensure viability and long-term sustainability. They never achieve the critical mass required to realize substantial improvements, or they do, but then the project fades away and the former state returns. The formula for enduring success follows seven simple steps:
Take an Accountability Versus Outcomes Focus
Define Your Goal and Aim Statements Early and Stick to Them
Assign an Owner of the Analytics (Report or Application) Up Front
Get End Users Involved In the Process
Design to Make Doing the Right Thing Easy
Don’t Underestimate the Power of 1:1 Training
Get the Champion Involved
Testing at Startup Companies: What, When, Where, and How - Josiah Renaudin
Startups are becoming increasingly prolific—technology startups even more so. CEOs are recognizing the need for quality. Their users are their growth, and if they can't retain users, their growth slows or stops. So quality matters. How do you convince the rest of the company that test brings value? How do you convince developers and product owners that spending time on quality is important, particularly if they have never worked with testers before? Should startups even have testers? Alice Till-Carty shares her experience finding a role for testing and QA within the ever changing and fast growing landscape of a fashion startup. Join Alice to explore the major challenges and hurdles that testers can face in startups—how to improve relations with developers, how to introduce process (even when “process” is a dirty word in your company), how to become more involved with the development process, and ways to improve communication as teams start to grow quickly.
How Can User Research Help PMs by Yelp Group Product Manager - Product School
Main Takeaways:
- Product Managers are the voice of the user
- User research can help Product Managers build the right features
- User research can also help Product Managers ensure great designs
How To Build a Winning Experimentation Program & Team | Optimizely ANZ Webinar 8 - Optimizely
Watch Dan Ross, Managing Director for Optimizely ANZ, in our latest webinar from the Experimentation Insights Tour: "How To Build a Winning Experimentation Program & Team."
View the presentation here: https://optimizely.wistia.com/medias/1o6xy4j0xm
Take Optimizely's Maturity Assessment here: https://www.optimizely.com/maturity-model/
DESCRIPTION: The world’s leading companies utilise experimentation to build a culture that fosters innovation and agility. The key to experimentation is to combine the right tools (software) with the right people and processes.
In this webinar, you will learn:
* Why experimentation is central to competing and innovating
* Areas to assess when building your experimentation capability
* How organisational culture helps scale an experimentation program
About Optimizely:
Optimizely is the world's leading experimentation platform, enabling businesses to deliver continuous experimentation and personalisation across websites, mobile apps and connected devices. Optimizely enables businesses to experiment deeply into their technology stack and broadly across the entire customer experience.
The platform’s ease of use and speed of deployment empower organisations to create and run bold experiments that help them make data-driven decisions and grow faster.
To date, marketers, developers and product managers have delivered over 700 billion experiences tailored to the needs of their customers. Optimizely’s global client base includes Atlassian, eBay, Fox, IBM, The New York Times, LendingClub, Hotwire, Microsoft and many more leading businesses.
To learn more about customer experience optimisation, visit optimizely.com
Trends in Software Testing: There has been a slow realization among top executives that simply outsourcing testing to the lowest bidder does not result in a sufficient level of quality in their software products. In this session, Paul Holland will discuss how American companies are starting to reconsider “factory school” testing and are no longer satisfied with simply outsourcing their “checking.” As the development side of software continues its dramatic shift toward Agile development, what role can testers have, and how can testers still add value?
Putting Customers First: How To Build Data-Driven Strategies To Ensure Custom... - VWO
When opinions take a backseat and data takes control, you can be sure of incremental revenue by building data-informed digital experiences that your customers will love. In this webinar, learn how to develop an evidence-based approach instead of an opinion-based approach.
Watch webinar recording here - https://vwo.com/resources/webinars/how-to-build-data-driven-strategies-ensuring-customer-loyalty/
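In practice, an evidence-based approach means checking whether an observed lift is statistically significant rather than trusting opinion; a standard two-proportion z-test sketch (all figures hypothetical):

```python
import math

# Sketch of a two-proportion z-test for an A/B test result.
# conv_* are conversion counts, n_* are visitor counts per variation.

def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)                    # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))      # standard error under H0
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. 10% vs 13% conversion on 2,000 visitors each
z, p = z_test(200, 2000, 260, 2000)
```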
Intro to Data Analytics with Oscar's Director of Product - Product School
The Director of Product at Oscar, Vasudev Vadlamudi, went over the key types of quantitative analysis that B2C product managers use on the job, including funnels, cohorts, and A/B testing. For each, he looked into when and why it is used, illustrated with examples.
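The funnel analysis mentioned above boils down to step-to-step conversion rates; a minimal sketch with hypothetical event counts:

```python
# Minimal funnel analysis sketch (step names and counts are hypothetical).

def funnel_conversion(step_counts):
    """Conversion rate from each funnel step to the next."""
    return [cur / prev for prev, cur in zip(step_counts, step_counts[1:])]

# e.g. visits -> signups -> purchases
print(funnel_conversion([1000, 200, 50]))  # [0.2, 0.25]
```

The biggest drop between adjacent steps is usually where the product team looks first.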
[work in progress]
*The Lead (Pb) Data Initiative* is an effort being developed around exactly those two things: Lead + Data.
This slide deck presents background on the issue of lead, a short list of some exciting efforts already underway, and a framework for a series of projects to accelerate those efforts and address other problem areas.
This is all a work in progress. Everything is draft and up for your feedback! Please reach out to read@publiclab.org if you have any interest in this conversation.
Join us in-person in the Humphrey Building Auditorium or tune-in online for rapid-fire, five-minute presentations from this year’s HHS Ignite teams as they present the results of their efforts and pitch their projects to panels of senior Department “investors” for continued funding and support.
HHS Ignite: Incubating New Ideas. By exposing teams to a network of innovators and equipping them with the methodologies and tools used by successful startup companies, HHS Ignite provides a startup environment in which small teams can try something new.
Monitoring Health for the SDGs - Global Health Statistics 2024 - WHO - Christina Parmionova
The 2024 World Health Statistics edition reviews more than 50 health-related indicators from the Sustainable Development Goals and WHO’s Thirteenth General Programme of Work. It also highlights the findings from the Global health estimates 2021, notably the impact of the COVID-19 pandemic on life expectancy and healthy life expectancy.
Jennifer Schaus and Associates hosts a complimentary webinar series on The FAR in 2024. Join the webinars on Wednesdays and Fridays at noon, eastern.
Recordings are on YouTube and the company website.
https://www.youtube.com/@jenniferschaus/videos
Presentation by Jared Jageler, David Adler, Noelia Duchovny, and Evan Herrnstadt, analysts in CBO’s Microeconomic Studies and Health Analysis Divisions, at the Association of Environmental and Resource Economists Summer Conference.
ZGB - The Role of Generative AI in Government transformation.pdfSaeed Al Dhaheri
This keynote was presented during the the 7th edition of the UAE Hackathon 2024. It highlights the role of AI and Generative AI in addressing government transformation to achieve zero government bureaucracy
Jennifer Schaus and Associates hosts a complimentary webinar series on The FAR in 2024. Join the webinars on Wednesdays and Fridays at noon, eastern.
Recordings are on YouTube and the company website.
https://www.youtube.com/@jenniferschaus/videos
This session provides a comprehensive overview of the latest updates to the Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (commonly known as the Uniform Guidance) outlined in the 2 CFR 200.
With a focus on the 2024 revisions issued by the Office of Management and Budget (OMB), participants will gain insight into the key changes affecting federal grant recipients. The session will delve into critical regulatory updates, providing attendees with the knowledge and tools necessary to navigate and comply with the evolving landscape of federal grant management.
Learning Objectives:
- Understand the rationale behind the 2024 updates to the Uniform Guidance outlined in 2 CFR 200, and their implications for federal grant recipients.
- Identify the key changes and revisions introduced by the Office of Management and Budget (OMB) in the 2024 edition of 2 CFR 200.
- Gain proficiency in applying the updated regulations to ensure compliance with federal grant requirements and avoid potential audit findings.
- Develop strategies for effectively implementing the new guidelines within the grant management processes of their respective organizations, fostering efficiency and accountability in federal grant administration.
Jennifer Schaus and Associates hosts a complimentary webinar series on The FAR in 2024. Join the webinars on Wednesdays and Fridays at noon, eastern.
Recordings are on YouTube and the company website.
https://www.youtube.com/@jenniferschaus/videos
Preliminary findings _OECD field visits to ten regions in the TSI EU mining r...OECDregions
Preliminary findings from OECD field visits for the project: Enhancing EU Mining Regional Ecosystems to Support the Green Transition and Secure Mineral Raw Materials Supply.
2017 Omnibus Rules on Appointments and Other Human Resource Actions, As Amended
Overview: The Internal Innovation Pipeline @ The HHS Idea Lab
1. The Internal Innovation Pipeline of the HHS IDEA Lab
Read Holman, HHS IDEA Lab
U.S. Department of Health and Human Services
January 2016
2. The innovation pipeline:
Discovery: User needs are researched and pain points are identified.
Testing: An innovation is prototyped and tested in increasingly sophisticated ways.
Live: New products, services, and processes that have been validated get implemented.
4. Government doesn't do this well.
5. Government doesn't do this well. If testing is done, it's approached as an itemized list of technically driven tests instead of as a spectrum of increasingly sophisticated customer-driven tests.
6. If testing is done, it's approached as an itemized list of technically driven tests:
"Is it secure?"
"Is it 508 compliant?"
"Did it go through clearance?"
"Does it meet the requirements?"
...instead of as a spectrum of increasingly sophisticated customer-driven tests:
"Does anyone actually want this thing?"
"Even if people say they want it, will they actually use it when given the tool?"
"If they do use it, will the tool have the intended effect on the user?"
7. Questions like "Is it secure?", "Is it 508 compliant?", "Did it go through clearance?", and "Does it meet the requirements?" mitigate technical risk: "Does it work?" is interpreted as "Is it up and running?"
Questions like "Does anyone actually want this thing?", "Even if people say they want it, will they actually use it when given the tool?", and "If they do use it, will the tool have the intended effect on the user?" mitigate outcome risk: "Does it work?" is interpreted as "Is it having an impact?"
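The distinction between technical risk ("is it up and running?") and outcome risk ("is it having an impact?") can be sketched as two different kinds of checks. This is purely illustrative; the function names, metrics, and the 30% target rate are hypothetical, not part of any HHS system:

```python
# Illustrative only: a technical check verifies deployment; an outcome
# check verifies real customer behavior. All names and thresholds are
# hypothetical examples, not a real testing framework.

def technical_check(http_status: int) -> bool:
    """Technical risk: the tool is deployed and responding."""
    return http_status == 200

def outcome_check(users_offered: int, users_who_completed_task: int,
                  target_rate: float = 0.30) -> bool:
    """Outcome risk: users actually adopt the tool and get value from it."""
    if users_offered == 0:
        return False
    return users_who_completed_task / users_offered >= target_rate

# A prototype can pass the technical check while failing the outcome check:
print(technical_check(200))   # True: it is "up and running"
print(outcome_check(50, 4))   # False: only 8% of users completed the task
```

The point of the sketch is that the two checks can disagree: a system can return HTTP 200 all day while almost nobody uses it, which is exactly the gap the deck describes.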
8. The incentives for government are simply not aligned with the goals our mission statements demand. Our internal processes have thus evolved not just to hinder but to resist and reject doing things in a customer-centric, outcomes-oriented way.
9. For example:
Do prototypes have to be 508 and FISMA compliant?
Do all items of feedback have to go through Regs.gov?
Does talking to more than 9 people trigger the PRA?
10. These and other questions are answerable. BUT their very existence scares employees away from operating in this "testing" space and scares management away from allowing experimentation.
11. So we roll out large-scale programs without proper testing. And we're surprised when they fail.
12. Or even worse: they putter along with questionable impact, but with so much up-front money invested that they are (politically) hard to kill or change.
13. Or even worse: they become idea-driven (rather than data-driven) programs.
14. To address these issues, in 2013 we launched Ignite (Round 1) to support employees who wanted to test new ideas.
HHS Ignite Accelerator (6 months + $10k @ 20% time)
15. HHS Ignite Accelerator (6 months + $10k @ 20% time)
HHS employees with innovative ideas pitched to get 6 months of funding and coaching to test out their idea with customers before pitching to Sr. Leadership during our own Shark Tank.
70 teams applied. 13 were selected. 5 got full follow-up support to continue.
16. We had these key findings from Round 1:
More training was needed to help teams identify the right problem to address.
Some teams went into the "Rollout" stages without sufficient testing.
18. Round 2 (Summer 2014) --> Round 4 (Summer 2015)
HHS Ignite Accelerator (3 months + $5k @ 25-50% time)
Teams started in the early phase. Partnered with UMD for training and coaching.
19. We had these key findings from Rounds 2-4:
Teams don't need funding for Discovery or early-stage tests.
Not many teams are conducting sophisticated tests or getting that far down the pipeline.
22. Round 5 (Spring 2016) = the current round
Ignite Finalist Stage (2 months @ 10% time), then HHS Ignite Accelerator (3 months + $3k @ 50% time). Teams started in the early phase. Partnered with UMD for training and coaching.
Hypotheses:
The Finalist Stage allows us to scale by training more employees (there are 47 Finalist teams).
It allows the teams that get into Ignite to travel further down the pipeline.
23. The HHS Ventures Fund (15 months @ 50% time + $100k) takes innovative projects with early-but-demonstrated impact to the next level.
24. The HHS Ventures Fund (15 months @ 50% time + $100k):
● Also competitive: teams pitch to a panel of Sr. Leadership who select those to fund.
● Requires an Executive Sponsor to signal internal Agency support.
● This stage involves re-integrating with established government processes and policies.
● We help teams navigate the bureaucracy AND work to ensure the core principles of customer-centered iteration are maintained (this is hard).
25. Our Internal Innovation Pipeline:
The HHS Ignite Accelerator
Ignite Finalists Round
The HHS Ventures Fund
Discovery: User needs are researched and pain points are identified.
Testing: An innovation is prototyped and tested in increasingly sophisticated ways.
Live: New products, services, and processes that have been validated get implemented.