Narrative Mind Lessons Learned H4D Stanford 2016

Speaker notes:
  • We continued learning…
  • By mapping out these mission achievement goals for the various tiers of ARCYBER, from the analyst who generates actionable insights, to the major making the plans, to the general who makes the final call, we were able to better conceptualize what a solution to this problem might look like.
  • Shifting our focus back to the analyst level…

    1. Original Problem: “We seek to develop tools that will optimize discovery and investigation of adversary communication trends on social media, allowing ARCYBER and others to more efficiently respond and mitigate threats posed by enemy messaging.” Team Narrative Mind. Sponsor: US Army Cyber Command (ARCYBER). Ten weeks and 100 interviews later... Current Status: We learned a lot about this space and the acquisition process.
    2. The Journey: team emotional state by week (weeks 0 through 9). Annotations on the chart: showed similar commercial tools to hear feedback; suggestions for the MVP switched each week; hashtag co-occurrence framework; ARCYBER visit and spec of the final MVP; unclear response from stakeholders.
    3. How do we begin? Our sponsor gave us a lot of freedom to explore.
    4. Initial Mission Model Canvas (Week 1):
       - Gnip/Twitter; CrowdFlower, Samasource, or Mechanical Turk; pre-existing social media service and micro-labor aggregators.
       - Optimize workflow for social media analysts; expedite categorization of social media content.
       - Use Mechanical Turk to crowdsource categorization of content (see the sketch below).
       - Algorithmic virality predictor; ARCYBER wants to derive “meaning”.
       - Primary: intelligence analysts receive a better platform.
       - Help intelligence analysts receive cleaner, pre-categorized data.
       - Architecture that can support massive concurrent data aggregation and analysis, e.g. Storm/Hadoop.
       - Testing with analysts.
       - Mechanical Turk or crowdsourcing labor (microtasks); UI development/testing with CYBERCOM/ARCYBER analysts; software development.
       - Access to Twitter firehose (Stanford academic license).
       - Individual analysts; ARCYBER.
       - Continued partnership with crowdsourcing firms (CrowdFlower, Samasource, etc.).
       - Categorize SM posts by content.
       (Canvas sections: Key Partners, Key Activities, Key Resources, Value Proposition, Buy-In/Support, Deployment, Beneficiaries, Mission Budget/Costs, Mission Achievement.)
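The Week 1 canvas above centers on expediting content categorization by farming posts out as crowd microtasks and aggregating the labels. Below is a minimal, platform-agnostic sketch of that loop; `crowd_labels` is a hypothetical stand-in for a real crowdsourcing call (Mechanical Turk, CrowdFlower, etc.), and the category names are illustrative, not the team's.

```python
import random
from collections import Counter

CATEGORIES = ["recruitment", "propaganda", "logistics", "other"]  # illustrative labels only


def crowd_labels(post_text, n_workers=5):
    """Hypothetical stand-in for a crowdsourcing call.

    A real version would create a microtask on Mechanical Turk / CrowdFlower
    and poll for worker responses; here the workers are simply simulated.
    """
    return [random.choice(CATEGORIES) for _ in range(n_workers)]


def categorize_posts(posts, n_workers=5):
    """Send each post to several workers and keep the majority-vote category."""
    results = {}
    for post in posts:
        votes = Counter(crowd_labels(post["text"], n_workers))
        label, count = votes.most_common(1)[0]
        results[post["id"]] = {"label": label, "agreement": count / n_workers}
    return results


if __name__ == "__main__":
    posts = [{"id": 1, "text": "example social media post"},
             {"id": 2, "text": "another post to pre-categorize for an analyst"}]
    print(categorize_posts(posts))
```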
    5. How do we begin? We learned about a wide variety of problems. Weeks 1-3
    6. Problems (Weeks 1-3):
       - DoD/Gvt. social media presence is weak.
       - Account bans and multiple aliases across different networks make IDs hard to track.
       - Can’t monitor who views dark-web content.
       - Fail to understand which narratives are the most salient.
       - No baseline for monitoring/aggregating use of tech; language/culture experts aren’t able to work at scale.
       - Scale of social media makes manual efforts painful.
       - Significant data management overhead.
       - Can’t determine actual scale.
    7. How did we respond? We brainstormed a lot of MVP’s. Weeks 1-3
    8. Proposed MVPs Group A Group B Weeks 1-3
    9. Information Overload Information overload! Week 4
    10. Information Overload How do we conceptualize and organize the challenges our users face? Week 4
    11-16. Mapping the Problem Space (Week 4). What kinds of problems are there? Built up across these slides: a grid with Global vs. Tweet-level on one axis and Awareness vs. Response on the other.
    17. Mapping the Problem Space (Global vs. Tweet-level, Awareness vs. Response). Proposed tools mapped to problems:
       - IO Org Chart: no baseline for monitoring/aggregating use of tech.
       - Automatic Narrative Detection: language/culture experts aren’t able to work at scale.
       - Important Event Predictor: preempt real-world events.
       - Persistent ID-Alias Tracker: account bans and multiple aliases across different networks make IDs hard to track.
       - Site Scraper: need more cached information access.
       - Expedited Content Categorization: scale of social media makes manual efforts painful.
       - Bot Detector: can’t determine actual scale of info.
       - Virality Predictor: understand which narratives are the most salient.
    18. Existing Products (same problem space):
       - Company A: Important Event Predictor (preempt real-world events).
       - Company B: Bot Detector.
       - Company C: Virality Predictor (understand which narratives are the most salient).
       - Company D: Expedited Content Categorization (scale of social media makes manual efforts painful).
       - Company E: Site Scraper (need more cached information access).
       - Company F: Persistent ID-Alias Tracker.
       - Also on the map: Counter-Narrative Generator (DoD/Gvt. social media presence is weak); IO Org Chart (no baseline for monitoring/aggregating use of tech); Automatic Narrative Detection (language/culture experts aren’t able to work at scale).
    19. Open Opportunities (Global vs. Tweet-level, Awareness vs. Response):
       - Adversary IO Org Chart: no baseline for monitoring/aggregating adversary use of tech; cyber targeting.
       - Automated Narrative Detection: language/culture experts aren’t able to work at scale.
    20. Our Beneficiaries How can we understand our users and the environment in which they operate? Week 5-6
    21. Mission Model Canvas (Week 6):
       - Track how groups use technology over time.
       - Gnip/Twitter/Facebook; CrowdFlower, Samasource, or Mechanical Turk; third-party access platforms for social media; data visualization; content analysis platforms.
       - Primary: ARCYBER (Bg. General, decision maker; MAJ/LTC/COL, operational plan; analysts/operators, actionable insights) and COCOMs (General, decision maker; MAJ/LTC/COL; analyst/operator). Secondary: political campaigns (campaign managers, supporters) and consumer brands (CMO, public relations team).
       - Optimize workflow for social media analysts; deliver insights to commanders about the online environment; insights into responses against narratives; detect narratives emerging in real time; early warning on emerging brand issues ???????????; enable faster problem-awareness-to-problem-response times for decision-makers across organizations.
       - UI development/testing with ARCYBER analysts; software development; research aggregation.
       - Access to Twitter firehose or API; local-language-speaking crowdsourcing staff; accurate testing for intercoder reliability (see the sketch below).
       - ARCYBER: Bg. General, LTC, Strategic Initiatives Group, OTA, purchasing PMs, end-user operator. COCOMs: OTA, operators, purchasing PMs. Political campaign: opposition research team, ???. Private sector: CMO, ???.
       (Canvas sections: Key Partners, Key Activities, Key Resources, Value Proposition, Buy-In/Support, Deployment, Beneficiaries, Mission Budget/Costs, Mission Achievement.)
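The Week 6 canvas above lists accurate intercoder reliability testing as a resource for the manual coding work. A minimal sketch of one standard check, Cohen's kappa between two coders, follows; the labels are toy data, and this is not necessarily the statistic the team actually used.

```python
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two coders, corrected for chance agreement."""
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1.0:  # both coders used a single identical label everywhere
        return 1.0
    return (observed - expected) / (1 - expected)


# Two coders assigning the same hashtag sets to narrative categories (toy data):
coder_1 = ["recruitment", "propaganda", "logistics", "propaganda", "other"]
coder_2 = ["recruitment", "propaganda", "propaganda", "propaganda", "other"]
print(round(cohens_kappa(coder_1, coder_2), 3))
```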
    22. Value Proposition Canvas: ARCYBER BG General (decision maker), Awareness quadrant.
       - Products & services: web/desktop application.
       - Customer jobs: act on reports and plans generated.
       - Pains: little understanding of ground-level nuances; little understanding of proliferating platforms.
       - Gain creators / pain relievers: narrative detection and topic categorization; global awareness; reduce uncertainty of decision making; add methodological rigor to ARCYBER’s operations.
    23. Mission Achievement: ARCYBER.
       - Bg. General (decision maker): quickly understand the key thematic points of an organization’s use of social media, as put out by intelligence briefs.
       - MAJ/LTC/COL (operational planner): determine what types of themes are rising in popularity and better identify the type of response.
       - Analysts/Operators (actionable insights): new movements can be understood and tracked with less direct cooperation of experts.
       Big-picture success analogy: “Most COCOMs and IO shops spend their whole day looking for a needle in a haystack: a user, a post, an IP address. For narrative-level awareness, we need a strategy that helps us divide the haystack into a bunch of smaller haystacks that don’t all look like same damn pile of hay.”
    24. Low-Fidelity MVP Week 7 How can we incrementally improve ARCYBER’s existing workflow?
    25. Customer Quote “Things are changing everyday. We need something that can help us with our long-term strategy, regardless of how many times we have to adjust our execution.” Week 7
    26. Example Criteria Co-Occurring Hashtags Week 7
    27. Research Dataset (Week 7).
       Dataset features: 600k unique tweets spanning October 2015 to May 2016; 200k unique hashtag combinations.
       Procedure (see the sketch below):
       1. Merged records into a frequency table of hashtag co-occurrences.
       2. Manually coded the 1,300 most frequent hashtag sets.
       3. Visualized the volume of these hashtag sets over time as related to the big-picture “narrative”.
       Key findings: the process is reasonably scalable; it could be implemented quickly by ARCYBER to supplement the workflow; deciding on narrative categories is difficult; input tweets need further conditioning to specific groups; open question: how does hashtagged traffic compare to total traffic?
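A rough sketch of the co-occurrence table and over-time volume steps described above, assuming tweets are available as dicts with `hashtags` and `created_at` fields (placeholder field names, not the team's actual schema or dataset).

```python
from collections import Counter
from datetime import datetime


def cooccurrence_table(tweets):
    """Frequency table of the exact hashtag sets that appear together in a tweet."""
    counts = Counter()
    for tweet in tweets:
        tags = tuple(sorted({t.lower() for t in tweet["hashtags"]}))
        if len(tags) >= 2:  # a single hashtag has nothing to co-occur with
            counts[tags] += 1
    return counts


def weekly_volume(tweets, hashtag_set):
    """Volume of one (manually coded) hashtag set per ISO week, for the trend view."""
    target = tuple(sorted(h.lower() for h in hashtag_set))
    volume = Counter()
    for tweet in tweets:
        tags = tuple(sorted({t.lower() for t in tweet["hashtags"]}))
        if tags == target:
            year, week, _ = datetime.fromisoformat(tweet["created_at"]).isocalendar()
            volume[(year, week)] += 1
    return dict(sorted(volume.items()))


# Toy stand-ins for the 600k-tweet research dataset:
tweets = [
    {"hashtags": ["OpExample", "News"], "created_at": "2016-03-01T12:00:00"},
    {"hashtags": ["news", "opexample"], "created_at": "2016-03-08T09:30:00"},
]
table = cooccurrence_table(tweets)
print(table.most_common(5))                      # the most frequent sets get hand-coded
print(weekly_volume(tweets, ["OpExample", "News"]))
```

Step 2's manual coding would then operate on `table.most_common(1300)`, matching the 1,300 most frequent hashtag sets noted above.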
    28. Low-Fidelity MVP Weeks 6-8 How would any of these MVPs ever make it into the hands of our sponsors?
    29. First Steps: OTA (Weeks 6-8). Proving a prototype and informing a requirement. Stages shown: Innovation Challenge, Initial Award, Testing/Evaluation, Budget, Adoption; ~$5M. Traditional and non-traditional contractors must partner.
       - Industry day; requirements synopsis; submit white papers; evaluate papers; proposals selected; technical discussion; original requirements synopsis modified.
       - The ACT office has personnel working with testers of all ranks; the Army Cyber Battle Lab is involved for integration/concepts.
       - Requirement written; companies apply; if the OTA company is selected, it may skip as far as MS-B; follow FAR/JCIDS.
       - Problem requirements are changed iteratively; phase objectives and timeline are flexible.
    30. Mission Model Canvas (Week 7):
       - Track how groups propagate narratives with co-occurring hashtags.
       - Gnip/Twitter/Facebook; CrowdFlower, Samasource, or Mechanical Turk; pre-existing social media service and micro-labor aggregators; third-party access platforms for social media; data visualization; content analysis platforms (Sens.ai, Leidos).
       - Primary: ARCYBER (Bg. General, decision maker; MAJ/LTC/COL, operational plan; analysts/operators, actionable insights) and COCOMs (General, decision maker; MAJ/LTC/COL; analyst/operator). Secondary: political campaigns (campaign managers, supporters) and consumer brands (CMO, public relations team).
       - OTA in parallel for product development; create dual-use demand with PR and political campaigns.
       - Optimize workflow for social media analysts; deliver insights to commanders about the online environment; insights into responses against broadcasting narratives; enable faster problem-awareness-to-problem-response times for decision-makers across organizations.
       - Mechanical Turk or crowdsourcing labor (microtasks); UI development/testing with ARCYBER analysts; software development; research aggregation?
       - Access to Twitter firehose or API; local-language-speaking crowdsourcing staff; accurate testing for intercoder reliability.
       - ARCYBER: Bg. General, LTC, Strategic Initiatives Group, OTA, purchasing PMs, end-user operator. COCOMs: OTA, operators, purchasing PMs. Political campaign: opposition research team, ???. Private sector: CMO, ???.
       (Canvas sections: Key Partners, Key Activities, Key Resources, Value Proposition, Buy-In/Support, Deployment, Beneficiaries, Mission Budget/Costs, Mission Achievement.)
    31. Development Timeline / Next Steps (2016-2021, by quarter).
       Activities: R&D/dual-use research; join C5/apply for OTA; OTA iterative testing; research systems integration burden; hire initial data scientists/engineers; requirement development; user onboarding; submit proposals; demonstrate support capability; system threat assessment; hire onboarding firm; verify requirement compliance; Joint Staff approval.
       Funding: Seed $1.2M; Series A $4.75M; Series B $20M.
       Costs: R&D engineering + design $332,000; R&D data science $300,000; Sales $160,000; Support $70,000; QA $80,000; Office, travel, admin, HR $250,000; Source data $250k/month (est.); Crowd labor $90k/month (est.). (Rough annualization in the sketch below.)
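A rough annualization of the cost figures above, assuming the one-time line items are per year and the two monthly estimates recur for twelve months (an assumption, not stated on the slide):

```python
annual_items = {
    "R&D: engineering + design": 332_000,
    "R&D: data science": 300_000,
    "Sales": 160_000,
    "Support": 70_000,
    "QA": 80_000,
    "Office, travel, admin, HR": 250_000,
}
monthly_items = {"Source data (est.)": 250_000, "Crowd labor (est.)": 90_000}

annual_fixed = sum(annual_items.values())            # 1,192,000
annual_recurring = 12 * sum(monthly_items.values())  # 12 * 340,000 = 4,080,000
print(f"Estimated annual burn: ${annual_fixed + annual_recurring:,}")  # ~$5.3M
```

Under that reading, the recurring source-data and crowd-labor estimates dominate the burn, and the total sits above the $1.2M seed and the $4.75M Series A sketched on the timeline.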
    32. Investment Readiness Level. Team assessment: IRL 4.
       IRL 1: First pass on MMC with problem sponsor.
       IRL 2: Complete ecosystem analysis petal diagram.
       IRL 3: Problem validated through initial interviews.
       IRL 4: Prototype low-fidelity Minimum Viable Product.
       IRL 5: Value proposition/mission fit (Value Proposition Canvas).
       IRL 6: Validate mission achievement (right side of canvas).
       IRL 7: Prototype high-fidelity Minimum Viable Product.
       IRL 8: Validate resource strategy (left side of canvas).
       IRL 9: Establish mission achievement metrics that matter.
