
Transparent AI


A brief overview of the principles, practices and risks of AI governance and ethics transparency, from a reputational and communications perspective



  1. TRANSPARENT AI · Charlie Pownall · OpenEthics.ai · charliepownall.com · November 2020
  2. About Charlie Pownall
     • Independent reputation and communications advisor, trainer, speaker focused on AI, cyber, and social media
     • Technology & telecoms; healthcare, pharma & life sciences; business & professional services; start-up & early stage
     • UK-based, with Europe, Middle East and Asia-Pacific experience and footprint
     • Author, Managing Online Reputation (Palgrave Macmillan, 2015)
     • Fellow, Royal Society of Arts
     • Faculty, Center for Leadership & Learning, Johnson & Johnson
     • Chairman, Communications & Marketing Committee, American Chamber of Commerce Hong Kong, 2012-2015
     • Formerly: European Commission, Reuters, SYZYGY AG, WPP plc, Burson-Marsteller
  3. Perceptions of AI
     • Widely divergent views on AI between different stakeholders, notably industry and general public/consumers
     • High general public/consumer awareness, low understanding
     • Widespread concerns about privacy, cyber attacks, manipulation, dis/misinformation, equality and human rights, (un)employment
  4. General public AI concerns (USA)
  5. Why trust in AI is low
     • ‘Black box’ AI/algorithmic systems and opaque research
     • Inadequate understanding of AI functionality, competence, risks, limitations
     • Sensationalist/alarmist media coverage on (un)employment, surveillance, killer robots, geo-politics, etc.
     • Many myths and misconceptions
  6. (image-only slide, no text)
  7. Changing AI landscape
     • AI use is fully embedded in everyday life
     • AI/algorithms are regular headline news
     • More general public/consumer/end user backlashes
     • Plethora of active NGOs/civil society organisations
     • Pressure on AI developer/manager and academic/researcher responsibility and accountability
     • Prospect of government intervention
     • Broader, deeper understanding of AI limitations
  8. Components of trustworthy AI are emerging
     • Traceability/verifiability and explainability/interpretability help experts make AI systems safer and fairer, and understand AI decision-making
     • Strong governance and ethics help organisations develop and manage more appropriate AI systems and be more accountable
     • AI/algorithms remain fundamentally opaque and confusing to the general public/consumers, and perceived accountability remains low
  9. Principles of AI transparency
     • Put values, ethics, and transparency at the centre of AI governance
     • Involve directly impacted stakeholders in AI design and testing
     • Monitor and strengthen AI governance and technology continuously
     • Communicate clearly, honestly, consistently, regularly, from the start
     • Think laterally about unexpected consequences
  10. AI transparency in practice – communications
      • Governance communication
        – People, policies, protocols, process, strategy, etc.
        – Stakeholder involvement, incl. suppliers
        – Datasets, toolkits, tools
      • Product/service communication
        – Purpose, objectives, context, technology, data processing, outcomes
        – Risks/limitations, privacy, human oversight, ownership
      • Incident and crisis management
        – Preparation, response, recovery
        – Learning, and acting upon lessons learned
  11. The dangers of AI transparency
      • Loss of IP
      • Loss of competitive advantage
      • Failure to meet higher expectations
      • Reputational risk
  12. Useful resources
      • AI and robotics perception research: recent research studies on perceptions of and trust in AI and robotics amongst the general public, consumers, patients, politicians, business, employees and other stakeholders
      • AI and algorithmic incident and controversy repository: an open registry of AI-driven incidents and controversies used by journalists, researchers, NGOs, businesses and others for reference, research and product/service development
  13. THANK YOU. cp@charliepownall.com · linkedin.com/in/charliepownall · charliepownall.com
