I want to begin the presentation by showing you this slide, which illustrates the pace of innovation throughout history.
If we look back in time, there are many examples of historical milestones marked by the arrival of new technologies. These milestones have driven humanity's progress.
Then came the invention of the steam engine in 1698 and the First Industrial Revolution: humans delegated their industrial capacity to a machine, and that allowed them to specialize.
If you look at the advances that followed, you will notice that the time between new innovations keeps shrinking, and humanity begins to live with a dichotomy between the potential of technological development and the human capacity to adapt to that technology.
The pace of technological advances, and the advances themselves, should not frighten us. Whenever a new technology arrives, challenges and concerns about the unknown arise, and humanity learns to turn those challenges into opportunities.
Take the example of the automobile. When the first cars were invented, it meant a great change for society. People had to get used to new ways of getting around, and jobs were lost in industries such as the building of horse-drawn carriages. But the automobile also contributed to the creation of many new jobs and industries, and ultimately helped spur changes that improved the standard of living for most people.
Our Responsible AI Journey
Eight years ago, Satya Nadella’s article in Slate magazine started our journey. We are very proud of our milestones and anticipate many more in the future as we learn from each other on our responsible AI journey.
That’s why we began our Responsible AI journey back in 2017.
We believe that the development and deployment of AI must be guided by the creation of an ethical framework. In 2018, we set out our view that there are six core principles that should guide the work around AI.
But it’s not enough to define these principles – we need to operationalize them at scale as well.
We’re focused on four key areas to help put these principles into action:
First, you need governance.
Second, you need the rules to standardize AI requirements.
Third, you need training and best practices.
And fourth, you need the tools for implementation.
[To link in chat….]
Here are a few examples of those milestones:
The building blocks of Microsoft's responsible AI program https://aka.ms/RAIbuildblockblog
Microsoft’s framework for building AI systems responsibly https://aka.ms/RAIFrameworkblog
Meeting the AI Moment https://aka.ms/meetingAImomentblog
Microsoft’s AI Principles
Guidance: The purpose of this slide is to showcase the six core ethical recommendations in The Future Computed that represent Microsoft's view on AI. NOTE there is often a huge amount of interest from customers in these principles.
For a full description of each principle see: https://microsoft.sharepoint.com/sites/ResponsibleAI/SitePages/Microsoft's-AI-Principles.aspx
Microsoft believes that the development and deployment of AI must be guided by the creation of an ethical framework. We set out our view in The Future Computed that there are six core principles that should guide the work around AI: four core principles of fairness, reliability & safety, privacy & security, and inclusiveness, underpinned by two foundational principles of transparency and accountability.
At Microsoft, we began our AI governance work by adopting a set of principles in January 2018. And we were in good company: at last count, there are close to 200 sets of principles that organizations, governments, NGOs and others have adopted, and there is a high degree of convergence on the topics those principles address.
The first principle is fairness. For AI, this means that AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. For example, when AI systems provide guidance on medical treatment, loan applications, or employment, they should make the same recommendations to everyone with similar symptoms, financial circumstances, or professional qualifications.
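As a minimal sketch only (this is one illustrative fairness metric, not Microsoft's prescribed method, and the function names and data here are hypothetical), the idea that similarly situated groups should receive similar outcomes can be expressed as a simple comparison of selection rates across groups:

```python
# Illustrative demographic-parity check: compare the rate of positive
# model outcomes across demographic groups. A large gap can flag that
# similarly situated groups may be treated differently.

def selection_rates(decisions):
    """decisions maps group name -> list of 0/1 model outcomes."""
    return {group: sum(outs) / len(outs) for group, outs in decisions.items()}

def parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions).values()
    return max(rates) - min(rates)

# Hypothetical loan-approval outcomes for two applicant groups.
decisions = {
    "group_a": [1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 1],  # 50% approved
}
print(parity_gap(decisions))  # 0.25
```

In practice, fairness assessment involves many metrics and contextual judgment; a single gap number is only a starting signal.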
The second principle is reliability and safety. To build trust, it’s also important that AI systems operate reliably, safely, and consistently under normal circumstances and in unexpected conditions. How they behave and the variety of conditions they can handle reliably and safely largely reflects the range of situations and circumstances that developers anticipate during design and testing. We have been engineering software systems to be reliable and safe for years, but we need to understand that engineering probabilistic systems is different: AI systems, by their very nature, make errors.
It’s also crucial to develop AI systems that can protect private information and resist attacks. As AI becomes more prevalent, protecting privacy and securing important personal and business information is becoming more critical and complex. Privacy and data security issues require especially close attention for AI because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. As an industry we have made significant advances in privacy and security, fueled in large part by regulations like the GDPR. Yet with AI systems we must acknowledge the tension between privacy and the need for more personal data to make systems more personal and effective. Just as when computers were first connected by the internet, we are seeing a sharp uptick in the number of security issues related to AI. At the same time, we have seen AI being used to improve security: as an example, most modern anti-virus scanners are driven by AI heuristics today. We need to ensure that our data science processes blend harmoniously with the latest privacy and security practices.
For the 1 billion people with disabilities around the world, AI technologies can be a game-changer. AI can improve access to education, government services, employment, information, and a wide range of other opportunities. Inclusive design practices can help system developers understand and address potential barriers in a product environment that could unintentionally exclude people. By addressing these barriers, we create opportunities to innovate and design better experiences that benefit everyone. AI has the potential to make computer systems more accessible to those who cannot access computers today. AI must be developed and deployed in a way that can benefit all and is accessible by all. Microsoft’s vision is about democratizing technology and this applies to AI. NOTE this is an opportunity to talk about some of the company’s work in relation to AI for accessibility https://www.microsoft.com/en-us/ai-for-accessibility
When AI systems are used to help inform decisions that have tremendous impacts on people’s lives, it’s critical that people understand how those decisions were made. A crucial part of transparency is what we refer to as intelligibility, or the useful explanation of the behavior of AI systems and their components. Improving intelligibility requires that stakeholders comprehend how and why these systems function so that they can identify potential performance issues, safety and privacy concerns, biases, exclusionary practices, or unintended outcomes. We also believe that those who use AI systems should be honest and forthcoming about when, why, and how they choose to deploy them. Transparency is an important principle because people can’t identify whether progress is being made on the first four principles unless there is enough transparency around how systems have been built and how they function. It is also paramount to the way these systems are managed, operationalized, and documented.
Our final principle is accountability. We believe the people who design and deploy AI systems must be accountable for how their systems operate. This is perhaps the most important of all the principles. Ultimately one of the biggest questions for our generation, as the first generation that is bringing AI to society, is how to ensure that AI will remain accountable to people and how to ensure that the people that design, build, and deploy AI remain accountable to everyone else.
Key learnings from the first version of the Standard shaped the second version: engineering teams told us they appreciated examples and struggled with open-ended considerations. They asked for more concrete requirements along with closer integration with engineering practices. With those considerations, and many more, the second version of the Standard was created. Let me take you through the different levels that exist in the Responsible AI Standard.
Principles: State values that we must uphold when developing or deploying AI systems – these are our enduring values that guide our responsible AI work
Goals: Define what it means to uphold our AI Principles. Adding goals was an innovation for us, as it activated problem-solving instincts and helped frame a better understanding of the requirements being asked of teams.
Requirements: Set out how we must uphold our AI principles. Requirements are the concrete steps that teams need to take in order to secure the goals.
Tools and Practices: Detail aids that help us satisfy the Requirements. In the Standard, we’ve mapped the tools and practices we have available to help our teams meet each of the requirements.
Ex. Accountability:
AI systems should have algorithmic accountability
People must be accountable for how their systems operate
Norms should be observed during system design and in an ongoing manner
Role for internal review boards
Across accountability, transparency, fairness, and reliability & safety, there are very specific new requirements that address the unique risks for AI, mapped to the tools and practices that we have available.
Ex. Privacy & Security:
AI systems should be secure and respect privacy
Existing privacy laws (e.g., the General Data Protection Regulation) apply
Provide transparency about data collection and use, and good controls so people can make choices about their data
Design systems to protect against bad actors
Use de-identification techniques to promote both privacy and security
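To make the de-identification bullet concrete, here is a minimal sketch of one common technique, keyed pseudonymization; the names (`pseudonymize`, the key handling) are illustrative assumptions, not a Microsoft standard or API:

```python
import hashlib
import hmac

# Assumption for illustration: in a real system the key would live in a
# managed secret store and be rotated, never hard-coded.
SECRET_KEY = b"example-key-kept-in-a-secret-store"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed token.

    The same input always yields the same token (so records can still be
    joined), but the token cannot be reversed without the key.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"user": "alice@example.com", "age_band": "30-39"}
record["user"] = pseudonymize(record["user"])  # direct identifier replaced
```

Pseudonymization is only one tool; full de-identification programs also consider aggregation, generalization (like the age band above), and re-identification risk.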
For Privacy & Security and Inclusiveness, we cross-reference other standards that we have at Microsoft. The intent is to build upon those other standards rather than duplicate efforts in the responsible AI space.
More broadly, government, civil society and industry must come together to ensure that, as AI becomes a bigger part of our lives, laws, norms and standards are in place to guide responsible use.
This multistakeholder approach is essential. Microsoft is a founding member of the Partnership on AI, the Rome Call on AI Ethics and the IDB’s fAIrLAC initiative and has engaged deeply with the AI work of international organizations like OECD and UNESCO.
Informed by our internal work to identify and address AI risk, we believe regulation should be:
Risk based: focusing resources and safeguards on the highest risk applications.
Outcomes focused: setting out what regulated actors must achieve rather than how they achieve it. Requirements for an application to deliver a similar quality of service to different demographic groups will be more effective and durable than highly prescriptive requirements that datasets be “error-free”.
Adaptable and aligned to international norms and standards: Process-related requirements, e.g., requiring teams building a high-risk application to identify and mitigate its potential risks, will help frameworks remain relevant and effective in the face of rapid developments in AI technology and responsible AI practice. Empowering and upskilling existing regulators to identify how to use AI, and where to update regulation in response to AI’s impact on their sector, will likely further advance adaptability. Alignment to international norms and standards, including the important work of the OECD and best practices like the new NIST AI Risk Management Framework, will be important so that organizations can collaborate across borders and access state-of-the-art technology.