So far we have experienced only the initial effects of technologies (big data, artificial intelligence, etc.) that will transform social, economic, political, and geopolitical relations on a global scale over the coming decades. We stand at the threshold of a new world that is difficult to foresee.
What opportunities and risks do these new technologies pose for democracy, security, peace, and development?
To identify and understand the challenges of this epochal change, Fundação FHC will host Lindsay Gorman, an emerging-technologies specialist at the German Marshall Fund of the United States, one of the world's leading think tanks, with a presence in the United States and Europe.
LINDSAY GORMAN
She holds a bachelor's degree in Physics (Princeton University) and a master's in Applied Physics (Stanford University). She is the Emerging Technologies Fellow at the Alliance for Securing Democracy and part of the German Marshall Fund (GMF) team of experts. She ran Politech Advisory, a technology consultancy (Artificial Intelligence and FinTech), and was an adjunct fellow with the Technology Policy Program at CSIS, a Washington-based think tank. She has worked in the U.S. Senate, at the White House Office of Science and Technology Policy, and at the National Academy of Sciences. Her areas of study and practice include Artificial Intelligence, machine learning statistics, quantum materials, and cybersecurity, among others.
Threats and opportunities of new technologies for development and democracy - Lindsay Gorman
1. The Future of the Internet
Applications: connected devices, smart cities, autonomous vehicles, big data
Lindsay Gorman
The Alliance for Securing Democracy, The German Marshall Fund of the United States
4. We are seeing the same thing with data
175 zettabytes of connected data by 2025
5. The new oil?
• Advances in AI make data increasingly valuable
Source: The Economist (May 06, 2017). “Regulating the internet giants: the world’s most valuable resource is no longer oil but data,”
https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data.
9. AI for good…but also for ill
“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
- Vladimir Putin
“AI is a vital driving force for a new round of technological revolution and industrial transformation, and accelerating AI development is a strategic issue to decide whether we can grasp opportunities.”
- Xi Jinping
10. Human rights tragedy in Xinjiang
Source: The New York Times (April 04, 2019). “How China Turned a City into a Prison,”
https://www.nytimes.com/interactive/2019/04/04/world/asia/xinjiang-china-surveillance-prison.html.
11. Red flags for detention in Xinjiang
Source: Foreign Policy (September 13, 2018). “48 Ways to Get Sent to a Chinese Concentration Camp,” https://foreignpolicy.com/2018/09/13/48-ways-to-get-sent-to-a-chinese-concentration-camp/.
12. Completely enabled by AI and citizen data
Source: The Washington Post (January 07, 2018). “China’s Watchful Eye,” https://www.washingtonpost.com/news/world/wp/2018/01/07/feature/in-china-facial-recognition-is-sharp-end-of-a-drive-for-total-surveillance/?noredirect=on.
14. …And more
• DNA surveillance
• Online activity monitoring
• Speech recognition and speech monitoring
• Human Rights Watch: a police app collects information on citizens to determine “threats”
Source: Human Rights Watch (May 02, 2019). “China: How Mass Surveillance Works in Xinjiang,” https://www.hrw.org/video-photos/interactive/2019/05/02/china-how-mass-surveillance-works-xinjiang.
15. Aided by Western tech firms and investments
Source: Hong Kong Free Press (February 27, 2019). “‘Social credit’ scoring: How China’s Communist Party is incentivising repression,” https://www.hongkongfp.com/2019/02/27/social-credit-scoring-chinas-communist-party-incentivising-repression/.
Source: Bloomberg (May 30, 2018). “Biggest AI Startup Boosts Fundraising to $1.2 Billion,” https://www.bloomberg.com/news/articles/2018-05-31/world-s-biggest-ai-startup-raises-1-2-billion-in-mere-months.
16. Surveillance State Outside Xinjiang
~200 million surveillance cameras in China
• Located a BBC reporter in just 7 minutes (2017)
• “Breed-ready” status recorded in an exposed database
Source: Victor Gevers (March 09, 2019). Twitter, https://twitter.com/0xDUDE/status/1104528181846032385.
17. Made in China, Exported to the World
• 18 countries are using Chinese-made intelligent monitoring systems
• 36 (including Brazil) have received training in topics like “public opinion guidance”
Source: The New York Times (April 24, 2019). “Made in China, Exported to the World: The Surveillance State,” https://www.nytimes.com/2019/04/24/technology/ecuador-surveillance-cameras-police-government.html.
20. AI’s Governance and Ethical Challenges
• Bias and fairness
• Transparency and accountability (the black box problem)
• Privacy
• Machine decision-making and accountability in critical areas
21. AI’s Challenges: Re-entrenching Bias
• Systems are only as good as their input data: facial recognition performs with 99% accuracy on lighter-skinned males, but misidentifies women of color 30-35% of the time.
• Amazon recruiting algorithm biased towards male candidates
• Gendering of virtual personal assistants
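The disparity described on this slide is typically surfaced by a per-group accuracy audit. The sketch below shows the idea with invented records whose proportions mirror the percentages cited above; the group names and data are illustrative, not the actual benchmark.

```python
# Minimal per-group accuracy audit for a classifier (illustrative data only).
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy records mirroring the disparity cited on the slide (assumed numbers):
records = (
    [("lighter_male", "M", "M")] * 99 + [("lighter_male", "M", "F")] * 1 +
    [("darker_female", "F", "F")] * 65 + [("darker_female", "F", "M")] * 35
)
acc = per_group_accuracy(records)  # 0.99 vs 0.65 for the two groups
```

An aggregate accuracy figure would hide this gap entirely, which is why audits of this kind break results out by subgroup.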
23. AI’s Governance Challenges: The Black Box
Source: DARPA XAI, cited in Forbes: https://www.forbes.com/sites/jasonbloomberg/2018/09/16/dont-trust-artificial-intelligence-time-to-open-the-ai-black-box/#71e9f2c13b4a.
24. AI’s Governance Challenges: Privacy
• Data exploitation
• Identification and tracking
• Anonymity in the public square
• Prediction
• Profiling
Source: Adapted from AI And the Future of Privacy: https://towardsdatascience.com/ai-and-the-future-of-privacy-3d5f6552a7c4.
26. Democracies also rely on objective reality
Source: Business Insider (April 17, 2018). “A viral video that appeared to show Obama calling Trump a 'dips---' shows a disturbing new trend called 'deepfakes',” https://www.businessinsider.com/obama-deepfake-video-insulting-trump-2018-4.
27. Information operations weaponize media
In 2016, 32 of 33 major American news outlets cited Russian troll accounts at least once in their coverage.
28. Synthetic text + data for microtargeting =
human-out-of-the-loop influence operations
29. What can we do?
1) Invest in the solution
2) Strong moral and ethical frameworks around emerging technologies
3) Condemn tech-enabled human rights abuses (not enable them)
4) Be a bit more conscious about partnerships and investments
31. The General Data Protection Regulation
• Consent
• Right not to be subject to profiling
• Right to be forgotten
• Right to know when data has been hacked (breach notification within 72 hours of businesses becoming aware)
• Data Portability
• And more…
32. Regulating AI? Under consideration
• Opt-in consent to data use
• Data broker registration (Vermont, DMVs)
• Ability to opt-out of targeted ads
• Limits to autoplay and auto-recommenders
• Algorithmic transparency to counter bias & censorship
• Breaking up big tech
33. Source: Getty Images. “Who Brought Down the Berlin Wall?,” https://foreignpolicy.com/2009/11/05/who-brought-down-the-berlin-wall/.
Editor's Notes
A zettabyte is 10 to the 21st power bytes. A data-producing interaction once every 18 seconds.
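The figures in this note can be sanity-checked with back-of-envelope arithmetic; the 18-second interaction rate is taken from the note above and treated as an assumption.

```python
# Back-of-envelope check of the slide's data figures.
ZETTABYTE = 10 ** 21                       # bytes in a zettabyte
global_datasphere_2025 = 175 * ZETTABYTE   # 175 ZB projected by 2025

# "A data-producing interaction once every 18 seconds" per person:
seconds_per_day = 24 * 60 * 60             # 86,400 seconds
interactions_per_day = seconds_per_day / 18  # 4,800 interactions per day
```

Put differently, one interaction every 18 seconds works out to 4,800 data-producing interactions per person per day.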
Data is not finite and it doesn't disappear; not a perfect analogy, but it speaks to geopolitical value. And it's not just democracies that are realizing it.
Technologies are fundamentally neutral. Myriad AI-for-good applications. Unfortunately, authoritarian regimes that would suppress freedom and undermine openness and human rights have taken an interest as well.
Why do we talk about this at a tech conference in the context of authoritarian uses of tech?
Huawei and city surveillance in Ecuador, the Philippines, and elsewhere. Huawei is leading the charge here with smart and “safe” city surveillance technology. The problem is that with it comes the authoritarian worldview. I'd be remiss if I didn't mention here in Berlin that the government will take up whether to allow Huawei in its 5G networks. Because this really isn't about what's cheaper. It's about the future of our values.
Today, 18 countries — including Zimbabwe, Uzbekistan, Pakistan, Kenya, the United Arab Emirates and Germany — are using Chinese-made intelligent monitoring systems, and 36 have received training in topics like “public opinion guidance,” which is typically a euphemism for censorship, according to an October report from Freedom House, a pro-democracy research group.
One 2018 study conducted by Joy Buolamwini of the M.I.T. Media Lab found that the technology is correct 99 percent of the time with photos of white men. But the software misidentified the gender as often as 35 percent of the time when viewing an image of a darker-skinned woman.
In January, researchers with M.I.T. Media Lab reported that facial-recognition software developed by Amazon and marketed to local and federal law enforcement also fell short on basic accuracy tests, including correctly identifying a person’s gender. Specifically, Amazon’s Rekognition system was perfect in predicting the gender of lighter-skinned men, the researchers said, but misidentified the gender of darker-skinned women in roughly 30 percent of their tests.
Identification and Tracking
AI can be utilized to identify, track and monitor individuals across multiple devices, whether they are at work, at home, or at a public location. This means that even if your personal data is anonymized once it becomes a part of a large data set, an AI can de-anonymize this data based on inferences from other devices. This blurs the distinction between personal and non-personal data, which has to be maintained under present legislation.
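The de-anonymization risk described in this note can be illustrated with a minimal linkage-attack sketch: "anonymized" records are re-identified by joining on quasi-identifiers known from another dataset. Every record, name, and field below is invented for illustration.

```python
# Illustrative linkage attack: joining an "anonymized" dataset to a known
# dataset on quasi-identifiers (zip code + age) re-identifies individuals.
anonymized = [
    {"zip": "10115", "age": 34, "visited": "clinic"},
    {"zip": "10117", "age": 51, "visited": "bank"},
]
public = [
    {"name": "Alice", "zip": "10115", "age": 34},
    {"name": "Bob", "zip": "10117", "age": 51},
]

def link(anon, known):
    """Match anonymized records to named records via shared quasi-identifiers."""
    matches = []
    for a in anon:
        for k in known:
            if a["zip"] == k["zip"] and a["age"] == k["age"]:
                matches.append((k["name"], a["visited"]))
    return matches
```

Even this crude join recovers who visited what, which is why removing names alone does not keep data anonymous once auxiliary datasets exist.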
GDPR entry into force: 25 May 2018.
Last year, Brazil passed the LGPD (Lei Geral de Proteção de Dados), which comes into force in February 2020.