Canada is going to be a global leader in Artificial Intelligence
Deloitte's analysis prepares the next generation for global leadership in AI
Governments around the world are developing national AI strategies to encourage innovation, protect citizens, and compete globally in artificial intelligence. These strategies aim to boost economic growth while addressing concerns about privacy, bias, jobs, and other issues. The document urges businesses to engage with governments on developing policies to help manage various tradeoffs around AI, such as innovation vs regulation and transparency vs vulnerability. National strategies and international cooperation will be important to balance opportunities and risks as AI increasingly transforms society and business.
This document provides an overview of the future of entrepreneurship and highlights 10 successful entrepreneurs revamping the future. It discusses how the future of entrepreneurship is bright but also extremely competitive as businesses reshape themselves to compete in cutthroat markets. Educational institutions now recognize entrepreneurship as a discipline, and community members understand its importance to economic growth. The document then profiles 10 entrepreneurs who are taking on future challenges, including Debra Griffin and Dean Harrison, a healthcare business leader duo; Susanne Skov Diemer, who provides security, risk and crisis solutions; and Jillian Hamilton, an expert in risk management.
THE SOCIAL IMPACTS OF AI AND HOW TO MITIGATE ITS HARMS – TekRevol LLC
In the wake of mass automation, universal basic income (UBI) may be the answer low-income families and citizens look towards. As automation spreads across industries, citizens' fear of its impact is severe. From privacy concerns and rogue-AI doomsday scenarios to the more realistic concerns of misused AI and job losses, pop-culture-led paranoia has shaken the world. These concerns have to be dealt with, and tech companies and businesses need a robust moral framework for decision-making to ensure any negative externalities of implementing AI are mitigated to the maximum degree. Artificial Intelligence is a great tool for optimizing businesses and making our world more efficient, but the moral imperative on all of us is to ensure it advances side by side with human sustainability, not at its expense.
Blockchain technology has the potential to help address bottlenecks facing China's insurance industry by improving security, reducing information asymmetry, and increasing efficiency. The document discusses problems like security issues, fraud due to lack of information sharing, and low efficiency due to separated information systems. It then explains how blockchain characteristics like distributed storage, encryption, and an immutable record could help solve these problems by better protecting data, facilitating information verification to reduce fraud, and integrating separated records. Major insurance companies in China are beginning to explore applications of blockchain technology to address these issues and promote the development of the insurance sector.
Insights Success has shortlisted “The 20 Most Innovative FinTech Solution Providers 2018.” Our Cover Story features Advisor Software, Inc., which simplifies the process by bringing tried and tested functionality to the systems you already use. ASI focuses on advancing the science of wealth planning so financial advisors can focus on the art of advice delivery.
The document summarizes an interview with Joyce Brocaglia, founder and CEO of Alta Associates, a boutique executive search firm specializing in information security, IT risk management, and privacy. Brocaglia discusses how the role of Chief Information Security Officer (CISO) has evolved over the past 20 years from a highly technical role focused on securing mainframe systems to a more holistic risk management role. She also outlines the industries most actively recruiting for security positions and the types of roles in highest demand, such as CISO, Chief Data Officer, and security architects.
Blockchain the inception of a new database of everything by dinis guarda bloc... – Dinis Guarda
Blockchain the inception of a new database of everything by Dinis Guarda blockchain age
Trends and questions?
1. Redefinition of banking and its relation with Blockchain
Mobile app banking and finance – mobile ledgers – blockchain identity
New products and the emergence of DAO products.
2. System legacies in parallel with advanced tech – Ethereum.
3. Distribution strategy in a newly digitalised world: who owns what.
4. Supercomputer cloud-based blockchain solutions / infrastructure.
5. Emergence of AI and IoE in relation with blockchain, all connected.
6. User experience, UI, UE, Big Data and the IoE touching blockchain.
7. Blockchain cyber security and value reinvention.
IS AI IN JEOPARDY? THE NEED TO UNDER PROMISE AND OVER DELIVER – THE CASE FOR ... – csandit
This document provides a review of media coverage on artificial intelligence (AI) and discusses the need to set precise and realistic goals for AI research. It summarizes both the positive perspectives on recent successes in AI as well as potential pitfalls, such as limitations of current deep learning techniques. The document recommends naming AI projects with specific, non-conflated terms to develop "really useful machines" rather than perpetuating hype around goals like artificial general intelligence.
Innovation in the Legal Services Industry - "The Future is Already Here, It i..." – Daniel Katz
This document discusses innovation in the legal services industry and outlines several chapters of a story about the current changes and challenges facing law firms. It describes the problem of underinvestment in law firms due to partnership structures. It discusses how sophisticated clients are demanding more efficiency and driving the creation of new legal service providers. It also discusses how legal tech companies are growing rapidly and how traditional law firms need to embrace new technologies to compete. Finally, it outlines how some law firms are restructuring to allow for outside investment in legal technologies and services while maintaining the partnership model for legal work.
Artificial Intelligence has become a new way for businesses to strategize, connect, shape business models and rethink their operations from scratch. Artificial Intelligence is a continuously growing phenomenon, with more possibilities each day. Today, societies at large are realizing that AI can sow endless benefits in collaboration with human intelligence.
Legal Analytics, Machine Learning and Some Comments on the Status of Innovat... – Daniel Katz
This document summarizes a presentation on legal analytics and machine learning. It discusses three faces of innovation in law: (1) lawyers for innovators/entrepreneurs, (2) lawyers innovating substantive legal areas, and (3) lawyers innovating business processes. It provides examples of predictive analytics using machine learning methods like classification, regression, and clustering. Specifically, it discusses using regression to predict legal costs and how machine learning could transform e-discovery through techniques like predictive coding (classification of documents as relevant/not relevant). Overall, the presentation outlines the growing role of data and machine learning in innovating and transforming the legal industry.
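The predictive-coding idea described above can be illustrated with a minimal, dependency-free sketch: rank unreviewed documents by similarity to a lawyer-reviewed seed set, so the most likely relevant documents reach human review first. All document text, names, and thresholds here are illustrative assumptions, not from any real e-discovery product.

```python
# Minimal sketch of predictive coding for e-discovery: rank unreviewed
# documents by cosine similarity to a centroid of reviewed-relevant documents.
# All data below is illustrative.
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words term frequencies for a document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Seed set: documents a lawyer has already reviewed and marked relevant.
relevant_seed = [
    "merger agreement signed by both parties",
    "draft indemnification clause for the merger agreement",
]
centroid = Counter()
for doc in relevant_seed:
    centroid.update(vectorize(doc))

# Score the unreviewed corpus; highest-scoring documents go to review first.
corpus = ["revised merger agreement exhibit", "lunch menu for friday"]
ranked = sorted(corpus, key=lambda d: cosine(vectorize(d), centroid), reverse=True)
print(ranked[0])  # the document sharing "merger agreement" ranks first
```

A production system would use richer features (TF-IDF, n-grams) and a trained classifier rather than a single centroid, but the ranking-for-review workflow is the same.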
How EY and Credit Suisse teams brought growth opportunities to future leaders... – Varun Mittal
EY and Credit Suisse teams co-hosted the CS-EY FinTech Forum. The forum attracted over 90 participants from top financial institutions in Bangkok, Jakarta, Kuala Lumpur, Manila, Melbourne, Mumbai and Singapore, as well as 25 FinTech firms headquartered in Singapore, Indonesia and India. EY teams are committed to working with industry participants to help bring about greater FinTech integration in the ASEAN-6 countries and India.
Internal or insider threats are far more dangerous than the external - bala g... – Bala Guntipalli ♦ MBA
- Internal threats are more dangerous than external ones, as 60% of attacks in 2016 were by insiders with malicious or negligent intent. Healthcare, manufacturing, and financial services are most at risk due to valuable personal data.
- Electronic medical records can be worth over $1300 each to hackers, who can use stolen health information to commit lifetime blackmail or fraud. Insider threats are the largest risk.
- There are many approaches to minimize potential insider threats, including strict access controls, monitoring for anomalies, social engineering tests, awareness training, and separating duties. Prioritizing security is crucial to protect valuable data and systems from internal and external threats.
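One of the mitigations listed above, monitoring for anomalies, can be sketched as a few simple heuristics over an access log. The field names, policy window, and record cap below are assumptions for illustration only; real deployments tune these against baseline behavior.

```python
# Illustrative insider-threat heuristics: flag after-hours logins and
# bulk record access. Thresholds and event fields are assumed, not standard.
from datetime import datetime

BUSINESS_HOURS = range(8, 19)  # 08:00-18:59, an assumed policy window
RECORD_LIMIT = 100             # assumed per-user cap on distinct records

def flag_anomalies(events):
    """Return (user, reason) pairs for events violating simple heuristics."""
    flagged = []
    per_user_records = {}
    for e in events:
        hour = datetime.fromisoformat(e["time"]).hour
        per_user_records.setdefault(e["user"], set()).update(e["records"])
        if hour not in BUSINESS_HOURS:
            flagged.append((e["user"], "after-hours access"))
        if len(per_user_records[e["user"]]) > RECORD_LIMIT:
            flagged.append((e["user"], "bulk record access"))
    return flagged

events = [
    {"user": "alice", "time": "2016-03-01T10:15:00", "records": {"r1", "r2"}},
    {"user": "bob",   "time": "2016-03-01T02:40:00", "records": {"r9"}},
]
print(flag_anomalies(events))  # [('bob', 'after-hours access')]
```

Rule-based flags like these are only a starting point; the other controls in the list (access restriction, separation of duties, awareness training) address the threats that logs alone cannot catch.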
Unlock power of LinkedIn data to understand the world's professionals – Mukkul Dasgupta
Three case studies showing how the LinkedIn Insights team uses data to inform business, product, and marketing strategy by understanding the world's professionals and their activities on LinkedIn.
THE DEMOCRATIZATION OF JUDGMENT IN THE AGE OF ARTIFICIAL INTELLIGENCE – Simon Schneider
With companies investing heavily in machine learning and predictive modeling, Artificial Intelligence is now a reality. But in contrast to common belief, this is an opportunity rather than a threat. With advanced analytics, people – not data – are key to unlocking the opportunity.
Organizations can adopt the most sophisticated technologies and hire an army of data scientists, but their efforts will fail if the human aspect is understated. As many business processes – from strategy to innovation and market – are still governed and carried out by people, companies will require more human judgment in the future.
Coupling sound qualitative judgment – the ability to make a decision based on a personal interpretation of the context and available facts – with Big Data and AI can make a company unbeatable.
Deloitte - Blockchain and the future of financial infrastructure - fintech cl... – Ian Beckett
The document discusses how distributed ledger technology (DLT) could transform the infrastructure of financial services by making processes simpler, more efficient, and sometimes entirely new. It identifies nine potential use cases of DLT across different sectors of financial services like payments, insurance claims processing, and capital markets. DLT may streamline reconciliation, dispute resolution, and transaction clearing while increasing transparency and automation. However, DLT is just one of several emerging technologies that could reshape financial services infrastructure and its benefits will depend on the specific business problems applications address.
A Financial Tech Tsunami Driven by Blockchain AI Crypto Economics – Dinis Guarda
How to Cope in / with a Financial and Tech Tsunami driven by Blockchain, AI and Crypto Economics?
The world economy and the financial industry are only in the early days of digitalisation and disruption.
We are going through a wave, or tsunami, of emergent disruptive fintech systems and decentralised blockchain models that will change things forever.
At the moment there is a process of digitalisation and tokenisation of the economy and the financial industry.
A data-driven look at financing trends, investors, and hot markets within the... – Netreba
This document provides an overview of investment trends in the Internet of Things (IoT) ecosystem based on data from CB Insights. It discusses funding amounts and deals for various sectors within IoT like drones, wearable tech, connected home/car, and notable investors. For drones, it notes that funding topped $108M in 2014 across 29 deals. For wearable tech, the 10 largest deals of 2014 combined for nearly $997M. It also includes "Periodic Tables" mapping the key players in IoT like companies, investors, acquirers.
15 Years of Web Security: The Rebellious Teenage Years – Jeremiah Grossman
Jeremiah Grossman is the founder of WhiteHat Security, a company that helps secure websites by finding vulnerabilities in source code and production and helping companies fix them. Organized crime has become the most frequent threat actor for web app attacks according to Verizon. Many websites remain vulnerable for long periods, with 60% of retail sites always vulnerable. Compliance is the top priority for resolving vulnerabilities according to 15% of respondents, while risk reduction is the top priority for 35% of respondents.
MEDICI's new India InsurTech Report 2020 explores the InsurTech sector in India. The report delves into what drives transformation in the sector, regulatory initiatives, funding & investment activity, prominent players, and business models.
This document discusses how women are poised to succeed in leadership roles in the growing Internet of Things (IoT) industry. It argues that traditionally feminine leadership skills like collaboration, communication, and relationship building are increasingly important for leading in the IoT era. As technology becomes more connected, visionary and inspirational leadership will be more valued over command-and-control styles. The document suggests that promoting women and others with strong soft skills can help companies build innovative ecosystems and cultures for the future of connected devices and systems.
Colombia InsurTech conference keynote, delivered on 26 May 2019 (final) – Alchemy Crew
In May 2019, I delivered a keynote on the future of Insurance, mostly focused on the InsurTech market trends, volumes, and profiles. These slides share the point of view I had at the time. Some things have changed. Still, I feel that many of the concepts covered are relevant. You will find drivers of growth, as well as information on a few business models I reviewed to validate my thesis.
The document presents a comparative study of the legal aspects of e-commerce in developing countries. It discusses the research objectives, which include understanding the evolution and conceptual framework of e-commerce in India and analyzing the legal aspects, insecurity, and barriers of e-commerce in developing countries through a comparative study. The methodology used secondary research sources to study these topics. The document then analyzes the legal status of e-contracts and digital signatures in several developed countries and India. It identifies challenges in implementing e-commerce in developing countries and concludes by discussing the future scope of e-commerce.
This document provides an overview of trends, opportunities, and startups in the cybersecurity industry in 2020. It discusses rising cyber threats across different industries, increasing cybersecurity funding and M&A activity. The document then profiles several emerging cybersecurity startups working in areas like disinformation detection, decentralized digital identities, password-less verification, and privacy by design. These startups are developing new technologies to address modern cybersecurity challenges.
111 of the biggest, costliest startup failures of all time – cbinsights 062017 – Ian Beckett
The document summarizes several startup failures that raised over $100 million in funding. It provides details on the company, funding amount, investors, and reasons for failure based on press reports and statements. Some common reasons for failure included mismanaging finances, raising funds too aggressively, high cash burn, lack of demand for products/services, and inability to raise additional funding to stay afloat.
Artificial intelligence (AI) is proving to be a double-edged sword. While this can be said of most new technologies, both edges of the AI blade are far sharper, and neither is well understood.
This article seeks to help by first illustrating a range of easy-to-overlook pitfalls. It then presents frameworks that will help leaders identify their greatest risks and implement the breadth and depth of nuanced controls needed to avoid them. Finally, it offers an early view of some real-world efforts currently underway to address AI risks by applying these approaches.
India Top5 Information Security Concerns 2013 – Dinesh O Bareja
The Indian information security scenario, and the global one too, leaves much to be desired; this report covers the year's InfoSec concerns. It is a straightforward document with plenty of practical insights into what ails information and data security across government, business, and users.
- Around 23 million people in the UK have experienced a life shock such as illness, job loss, or relationship breakdown in the past two years, and those who experienced a life shock were three times as likely to be in problem debt.
- Current support mechanisms are insufficient, as many people, even those still employed, cannot build financial protections against common life shocks.
- There is a need for policymakers to prioritize this issue and work with organizations to identify how to improve support and break the link between life shocks and problem debt.
- Current support mechanisms are insufficient, as many people, even those still employed, cannot build financial protections against common life shocks.
- There is a need for policymakers to prioritize this issue and work with organizations to identify how to improve support and break the link between life shocks and problem debt.
This document provides an introduction to scenario planning as a foresight methodology to help make better decisions. Scenario planning involves listing driving forces, ranking their importance, creating a scenario grid with different outcomes, and imagining possible future worlds. It is used to identify risks and opportunities and stimulate innovation. The goal is not to predict the future but provide structure to think about possible long-term implications. The document outlines the steps to scenario planning and provides an example scenario grid around issues of personal data and privacy in the year 2020. It concludes by encouraging the reader to try scenario planning.
The document discusses the biochemical approach to treating alcoholism. It views alcoholism as caused by imbalances in neurotransmitters in the brain due to factors like poor nutrition, stress, genetics, and trauma. The goal of the biochemical approach is to restore balance to the neurotransmitters and repair brain chemistry using natural cures and lifestyle changes, rather than just traditional treatment methods. It aims to achieve long-term sobriety through healing at the biochemical level.
Ethical Dimensions of Artificial Intelligence (AI) by Rinshad ChoorapparaRinshad Choorappara
This document discusses the ethical dimensions of artificial intelligence. It begins with definitions of AI and ethics. It then discusses how AI is revolutionizing industries like healthcare, finance, transportation, and more. However, it also notes challenges of AI like bias, lack of transparency, job displacement, privacy and security issues. It provides examples of authorities like the European Union and United Nations taking action to address these issues and ensure ethical governance of AI through frameworks like the EU Artificial Intelligence Act. The document emphasizes the importance of balancing AI innovation with ethical considerations to build trust and align AI with human values.
The document announces the 3rd International Conference on Artificial Intelligence and International Law being organized by the Campus Law Centre at University of Delhi from June 24-25, 2022. It discusses the objectives of exploring how AI is developing and interacting with international law. Some key themes that will be covered are the international regulation of AI in finance and governance/defense sectors, privacy regulation of AI, and the ecological and intellectual property aspects of AI. The conference aims to further the discussion on challenges around ensuring accountability and fairness of AI systems.
Artificial intelligence (AI) currently being used by insurance companies has failed to remove gender bias from the profession’s claims, underwriting and marketing processes.
A Chartered Insurance Institute (CII) report tells insurers they must tackle these gender biases. The report found that the datasets used to train the algorithms which support AI systems are rooted in outdated gender concepts. Algorithms learn by being trained on historic data but the report notes more and more of that data is now unstructured, coming from text, audio, video and sensors.
Yet the report warns embedded in that historic data are decisions based upon historic biases, particularly around gender. The report concluded insurance firms need to prepare a structured response to this issue, starting with visible leadership on tackling gender bias in AI.
This presentation discusses the impact of artificial intelligence on technology entrepreneurship. It begins with an introduction to AI and its ability to process and analyze vast amounts of data. The presentation then covers topics like AI applications in different industries, the role of AI in product development and innovation, and the impact on jobs. It also discusses challenges for AI startups like data collection and model interpretability, as well as opportunities in industries like healthcare and finance. The conclusion emphasizes how AI is transforming businesses by automating tasks and providing insights, while also creating new opportunities for entrepreneurs.
The-Promise-and-Peril-of-Artificial-General-Intelligence.pdfChris H. Leeb
The document discusses the promise and risks of artificial general intelligence (AGI). It outlines several pros like new discoveries, increased efficiency, and improved decision making. However, it also discusses cons such as dystopian futures and system crashes. It examines impacts on the workforce, ethical considerations, the potential for creativity, and risks of superintelligent AI surpassing human control. The conclusion calls for developing ethical AI and creating policies to ensure its safe and responsible development and deployment.
Article 1 currently, smartphone, web, and social networking technohoney690131
The document discusses several articles and papers related to ethics and privacy issues with new technologies. It covers concerns around obtaining private patient data online and vulnerabilities of electronic health records. It also discusses increased government surveillance impacting human rights and national security. Additional topics covered include privacy of patient information shared with healthcare providers and ethical challenges of data security, anonymity, and intellectual property with information technology.
Internet 2.0 Conference Explores The Future Of Work With AI Startups Reshapin...Internet 2Conf
In this presentation we will be looking at the role of AI startups in reshaping the job landscape with insights from the Internet 2.0 Conference, one of the leading 2025 tech events.
Cyber Security, User Interface, and You - Deloitte CIO - WSJSherry Jones
This document summarizes an article from Deloitte CIO discussing how cybersecurity professionals can improve the user experience of security tools and features. The article argues that cybersecurity is becoming everyone's concern, not just professionals. It suggests learning from other industries like meteorology, pharmaceuticals, entertainment, and more to develop security tools that are simple, reliable, pleasurable to use, and provide users visibility into threats and control over their data. The goal is to "sweeten the security pill" and make security less of a negative experience for users.
Cyber Security, User Interface, and You - Deloitte CIO - WSJSherry Jones
This document is an article from Deloitte CIO Journal that discusses how cyber security interfaces can be improved to provide a better user experience. It suggests taking lessons from other industries, such as using maps to visualize security threats like meteorology, adding tracking tags to data like pharmaceuticals use on drugs, and making security features more automatic and enjoyable to use like entertainment venues that use wristbands. The goal is to address users' desire to avoid thinking about risks while still protecting their information by improving the user interface of cyber security systems.
1) The document is a research report that summarizes a survey of over 2,000 American consumers on their perceptions of privacy regarding connected devices and the Internet of Things.
2) The survey found that consumers have significant concerns about how companies use and share their data from connected devices, and are anxious about third parties accessing their information without consent.
3) While exposure and adoption of connected technologies varies, the survey showed strong discomfort across all age groups regarding companies' collection and sale of personal data without transparency or consent.
This document presents a maturity model for artificial intelligence adoption in enterprises. It outlines four stages of maturity: exploring, experimenting, formalizing, and integrating. It also discusses four macro trends affecting AI success: the shift from screen-based to sensory interactions; from rules-based to probabilistic decision making; from data analytics to data engineering; and from expertise-driven to data-driven leadership. Key aspects of maturity include having a data strategy, using AI in product development, establishing ethics principles, and integrating AI throughout the organization.
4 Steps To Using AI Ethically In Your Organization Bernard Marr
AI can do incredible things, but just because something is possible doesn't mean it's right. There’s enormous potential for backlash against the misuse of AI, and policymakers and regulators will no doubt take an increasing interest in AI. This means it’s vital organizations pursue an ethical use of AI.
Global Cyber Market Overview June 2017Graeme Cross
The document provides an overview of the global cyber insurance market, including:
- The cyber insurance market is still in its infancy globally but has grown significantly in recent years, especially in the US where it is estimated to be worth $1.5 billion in 2015.
- The largest market is the US, driven by data breach legislation, high-profile cyber attacks increasing awareness, and demand from companies storing personal data.
- The upcoming European GDPR regulation coming into effect in 2018 is expected to be a major driver for the growing but still relatively nascent European cyber insurance market.
- Various industries like retail, healthcare, and financial institutions are among the largest buyers of cyber insurance.
Spring Splash 3.4.2019: When AI Meets Ethics by Meeri Haataja Saidot
Meeri Haataja's keyote 'When AI Meets Ethics' at Keväthumaus 2019 / Spring Splash 2019 (organised by Väestörekisterikeskus / Population Register Centre).
Similar to Ca overcoming-risks-building-trust-aoda-en (20)
The document provides specifications and operating limits for Segway PT models. It lists the maximum payload, minimum rider weight, maximum handlebar cargo weight, maximum speed, battery type, range, dimensions, and other details. It explains that exceeding weight limits can increase risks and that the Segway monitors power usage and will activate safety alerts if limits are exceeded.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Former Governor General of Canada David Johnston wrote in his book Trust that, "Adhering to the moral imperative ahead of the operational imperative builds and maintains trust." At Deloitte, we firmly believe Canada can and should lead the way in artificial intelligence. To do so, we need leaders who will address AI's risks and ethical implications in order to foster greater trust in it and capitalize on its full potential. We need leaders like Sir Frederick Banting, the Canadian medical practitioner who won the 1923 Nobel Prize in Medicine for co-discovering insulin, who is depicted on the cover of this report.
Table of contents

• Introduction
• AI is different and so are the risks
• What we heard from Canadians
• AI needs more than a technical fix—it requires building trust
• Canada's imperative to lead
• Definitions and endnotes
Canada's AI imperative: Overcoming risks, building trust

Artificial intelligence (AI) is expected to be one of the leading economic drivers of our time, and Canada has the opportunity—and responsibility—to be a global leader. As a country, we have the research strength, talent pool, and startups to capitalize, but that's not enough if we want to lead in an AI-driven world and shape what it might look like. True leadership is required—that means taking steps now to establish a world-class AI ecosystem in Canada.
Introduction
AI has the potential to be the catalyst for an era of unprecedented innovation, progress, and prosperity. Yet many worry that the rapid pace of AI development and adoption is outstripping our ability to understand and manage AI's impact on society and the economy.

The increasing use of AI has brought a host of new risks and challenges into the spotlight. Around the world, citizens are raising questions about AI's impact on privacy, security, bias, consumer protection, and more. And they're looking to business and government leaders to provide substantial and satisfactory answers and solutions to those questions.

We believe the questions and concerns raised by Canadians and others around the world are rooted in two key areas: understanding and trust. People struggle to understand what AI is—and isn't—and what it can offer. It doesn't help that the hype surrounding AI on all sides can make it difficult to separate science from science fiction. This blurry understanding is in turn contributing to a rising distrust of AI among businesses, governments, and citizens alike.

Left unaddressed, this lack of trust could have a serious impact on Canada's future prosperity. Businesses, governments, and other organizations could be persuaded to slow or even abandon AI investments. Governments could implement regulations that severely limit the use of AI technologies. Such steps might assuage the worries of the public, but at what cost to our competitiveness in the years to come?

At Deloitte, we believe that Canada is uniquely positioned to respond to the challenges of AI. It's time for our nation to step to the forefront of this global conversation and take the lead. It's time for us to work together to build AI standards and practices that mirror our distinctly Canadian values, and develop AI that is open, safe, equitable, and used in ways that create prosperity for all.

We must begin this effort by first understanding the perceptions Canadians have about AI, so that business and government can address those attitudes most effectively. This report, the second installment in a multi-part Deloitte series on Canada's AI imperative, sheds light on Canadians' key perceptions about AI—and offers our recommendations for moving forward.
AI is different and so are the risks

It can be tempting to dismiss the concerns about the risks of AI as similar to those raised by any other new technology. Certainly, common questions related to cost, technical feasibility, and change management all hold true. But a growing body of research shows that AI is in fact different from other technologies, and this gives greater importance to AI's risks and ethical dimensions.1

• The speed and scale at which AI is expected to proliferate and combine with existing technologies shortens the amount of time available to develop responses to new risks as they emerge.
• The unknown reach and impact of AI—given its potential effect on almost all industries and sectors simultaneously—makes planning for every possible risk seem daunting.

Our research shows that the ethical risks of AI are top of mind for both businesses and consumers. Between July and September 2018, Deloitte surveyed over 1,000 Canadian citizens and 2,500 businesses from around the world to understand the state of AI in Canada.2

• Canadian companies' reservations about AI often involved AI's "black box" problem. The lack of explanation for and unintended consequences of AI decisions, and the use of AI to manipulate information and create falsehoods, emerged as top concerns.
• Among Canadian consumers, 65 percent reported privacy concerns over how companies were using their data today. Understanding is a major issue as well: only 4 percent of Canadians felt confident about explaining what AI is and how it works.3
Given the rapid pace of change with respect to AI, it's not surprising that businesses and consumers are wary and confused. The potential pervasiveness of AI and its applications does open the door to myriad risks (see What are the risks of AI?). From a technical perspective, concerns about algorithmic bias, data privacy, and cybersecurity are real challenges to overcome. As well, large companies' heavily reported missteps have only contributed to a public perception that AI can be highly risky and harmful—a perception that matters, since a lack of understanding of AI will have a real effect on organizational and public policy decision-making.4

To ensure Canada reaps the benefits of AI in the years to come, it's vital that Canada's businesses and governments work together to help Canadians better understand AI—and develop the controls, rules, and practices necessary to address legitimate concerns about the impact of AI on our society. To help shed some light on the issue, in the pages that follow we explore what our research tells us about Canadians' current perceptions about AI.
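The algorithmic-bias concern mentioned above can be made concrete with a simple measurement. As a minimal sketch (using entirely hypothetical decision data, not figures from this report), the snippet below compares a model's approval rates across two demographic groups and computes a disparate impact ratio, a common first check for biased outcomes:

```python
# Hypothetical example: measuring outcome disparity between groups.
# The data and the "credit decision" scenario are illustrative only.

def approval_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

# Hypothetical automated loan decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3 of 8 approved

rate_a = approval_rate(group_a)
rate_b = approval_rate(group_b)

# Disparate impact ratio: the lower group's rate relative to the
# higher group's. A ratio well below 1.0 flags a potential bias
# problem worth investigating.
disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Group A approval rate: {rate_a:.2f}")
print(f"Group B approval rate: {rate_b:.2f}")
print(f"Disparate impact ratio: {disparate_impact:.2f}")
```

A check like this does not explain *why* a model's outcomes differ between groups, but it gives organizations a quantitative starting point for the kind of oversight discussed here.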
What are the risks of AI?

The speed and scale at which AI is evolving makes it harder to identify risks in an effective and timely manner. They could also be manifesting themselves in unfamiliar ways. We broadly classify these risks into three categories (societal, economic, and enterprise) based on the varying impacts each one will have on society, markets, and enterprise. These risks are not mutually exclusive—they are all interconnected in some way.
Societal risks

The way that AI interacts with citizens can pose risks from a societal perspective. Consider, for instance, the use of automated surveillance, which could infringe on individual and cultural views of privacy and autonomy, or the use of automated drones for military warfare.5

Economic risks

The transformational nature of AI can cause significant changes to current economic and market structures. For instance, the changing nature of jobs and the workforce can potentially alter employment opportunities for individuals and organizations.

Enterprise risks

Enterprise risks arise as an organization looks to incorporate AI into its strategic and technical frameworks.6 These risks might include:

• Privacy: AI creates the potential for personal data to be misused or compromised.
• Transparency: The use of black-box models can limit the ability to explain decisions or audit outcomes.
• Control: AI systems or models may produce unintended consequences.
• Cybersecurity: AI systems, models, and data may be vulnerable to cyber threats and adversarial attacks.
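One common mitigation for the transparency and control risks listed above is to keep an audit trail of every automated decision, so outcomes can be explained and reviewed after the fact. The sketch below shows one possible shape for such a record; the field names and the "credit-v1.2" scenario are hypothetical, not a prescribed standard:

```python
# Minimal sketch of a decision audit record (hypothetical schema).
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str  # which model produced the decision
    inputs: dict        # the features the model saw
    output: str         # the decision itself
    timestamp: str      # when the decision was made (UTC)

def log_decision(model_version, inputs, output):
    """Serialize one automated decision as a JSON audit-log line."""
    record = DecisionRecord(
        model_version=model_version,
        inputs=inputs,
        output=output,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

# Hypothetical usage: record a single automated credit decision.
line = log_decision("credit-v1.2", {"income": 52000, "tenure": 3}, "approve")
print(line)
```

Persisting records like this does not open the black box by itself, but it makes decisions attributable and auditable, which is a precondition for the governance practices this report advocates.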
What we heard
from Canadians
While discussions about AI risks and
ethics are increasingly common in
business and policy circles, you need to
speak directly with Canadians to truly
understand the sources of distrust and
confusion surrounding AI.
Our in-depth interviews uncovered five themes that
illuminate Canadians’ experiences with AI and can help
guide business and government as they move forward.
The fears and concerns that were raised don’t have
easy answers. There are numerous contradictions and
tensions in what we heard from our interview subjects
(see Exploring the contradictions). As expected, we heard
about long-term fears surrounding the potential impact
of artificial general intelligence (AGI) alongside concerns
about how narrow applications of AI, like computer
vision or machine learning, are being used today and
could be used tomorrow.
At Deloitte, we generally distinguish between two
different types of ethical challenges when it comes to
AI governance: ethical breaches and ethical dilemmas.
In our interviews, concerns over high-profile ethical
breaches (such as the Facebook-Cambridge Analytica
scandal) quickly segued into concerns that, as a society,
we’re ill-prepared to address the long-term ethical
dilemmas raised by AI.7
Ethical breaches are violations of trust that occur when
there is an established and accepted code by which to
determine if wrongdoing has occurred (e.g., codes of
ethics, laws, regulations, clear norms). Many of the issues
raised by AI about privacy, security, and transparency
fall into this category. Pre-existing norms, business
practices, and laws are a starting point, but leaders need
to figure out how to apply and update them with regard
to AI. Ethical breaches will require much discussion and
iteration, and can likely be addressed with technical,
legal, or strategic fixes.
Ethical dilemmas occur when the question of which
moral code should be applied is contested (e.g., if it
must hit a person, should the autonomous car hit the
child or the elderly person?). In these cases, the way
forward is unclear and not everyone agrees as many
moral frameworks could apply. Hence, the dilemma.
Questions related to AI’s purpose and values—where
and how it should be used—fall into this category, as
do questions about the future nature of collaboration
between AI and humans in the workplace. These are
longer-term challenges that will likely require new forms
of collaboration and discussion to address them. At
the same time, this isn’t the first time that technology
has hit up against profound questions. Consider the
profession of bioethics as a practical, applied model.
What we don’t yet have is as rich a body of AI ethics as
we have with bioethics.
The narratives that follow do not explain all concerns
about AI or speak for all Canadians. Instead, they’re
meant to complement existing research and highlight
some of the nuanced barriers that must be addressed
to promote greater trust and adoption.
Our methodology
We went coast to coast across Canada conducting interviews with
Canadians to understand how they think and feel about AI with the goal
of translating these concerns into recommendations for policymakers and
businesses.
This research was conducted using Doblin’s method of design research,
which is an applied research approach that borrows from ethnographic
methods. Doblin is Deloitte’s innovation and human-centered design
consultancy. Human-centered design (HCD) helps organizations identify
opportunities for change by first understanding people’s behaviours and
experiences. HCD leans on established methods from the social sciences
that systematically study and develop insight into how people, cultures, and
society function.
In total, we conducted 13 two-hour, in-home interviews and observational
research to hear how people reflected on and rationalized their experiences
and to see their lives in context. Such qualitative research allows us to
understand both the facts and the meaning behind the facts (for instance, if
only 4 percent of Canadians understand AI, why is that?), bringing depth and
nuance to our analysis of people’s perceptions of and experiences with AI.
Our sample size was chosen to prioritize depth of data over
sheer quantity.
Interviewees were intentionally selected to represent a range of
geographies, ages, income levels, educational attainment, and gender
identities: variables from the Organisation for Economic Co-operation and
Development (OECD) data that correlate with technological disruption. We
also selected participants working in industries likely to be disrupted by
technological change and those working in industries that were least likely
to be disrupted, which were drawn from Statistics Canada NAICS codes and
OECD data. The final 13 interviewees were selected through an extensive
screening process.
“AI is everywhere…
I think”
Our interviewees told us that while AI feels
ubiquitous, they aren’t clear about when
and where they are using it—fuelling a
mythology around AI that’s more science
fiction than science.
Our interviewees assumed they were constantly interacting
with AI. Many expressed they couldn’t imagine an AI-free
life, as technology seems so pervasive. Despite this, they
reported feeling deeply uninformed about how and when
they were using AI. They often drew upon examples from
popular culture, science fiction, or high-profile news stories.
Some of our interviewees defined AI as algorithms while
others spoke about robots.
The intangibility of AI may be contributing to a sense
of confusion and distrust. This concern should not be
dismissed. Knowledge of an innovation cannot be held in the
hands of a select few. Research shows that knowledge and
understanding are critical factors in successful technology
diffusion across organizations and societies.8
And our
ongoing research into barriers to greater AI adoption
found that a lack of understanding among executives
and employees, particularly the non-technical expertise
required to apply AI to business challenges, is a challenge
for Canadian companies in scaling their AI efforts beyond
a proof of concept. Businesses and governments will need
to work together to demystify AI and build AI literacy, which
means explaining the basics of how AI works, the ways in
which it uses data, and what it can and cannot do.
Exploring the contradictions:
What is AI truly
capable of?
Most of our interviewees told us that AI would not be capable
of doing their jobs. They were optimistic that AI would augment
human intelligence, not replace it. On the other hand, some of
our interviewees spoke of a world where AI surpassed human
intelligence and was able to communicate in languages and make
decisions beyond our comprehension.
“I'm not a techy person, so I would say no, I don’t
use AI in my day-to-day life. I know people who
do, but I personally don't. Unless I do and I don't
know, maybe?”
“In terms of my role, I would say that there's no
AI that could take it on; my job is a very human
connection-type job.”
“I think for more rote or automated tasks, that's
where AI adds a lot of value in terms of efficiency
and reduction of human error.”
"AI will treat us like ants."
“There will be areas where these algorithms,
simply through learning over and over again,
are gonna surpass our abilities.”
“I want to feel in
control and know
who’s accountable”
Our interviewees expressed concerns
that increased AI will lead to a loss of
control and decision-making over their
own lives. They emphasized the idea that,
unlike decisions made by humans, there
is no clear accountability when AI fails.
Worries over data privacy and security were common.
Interviewees believed that opaque privacy policies
often resulted in their data being collected or shared
without permission, possibly being used to make
decisions that could harm them or society at large. Other
major concerns related to accountability. Interviewees
expressed worry that AI would be able to make and
impose unfair decisions in important situations, like job
hiring or criminal convictions, without clear accountability
or oversight. Much of the concern seemed focused on
the idea that algorithms might make decisions on their
own. This speaks to the idea of algorithmic aversion, a
preference for human decision-makers even when an
algorithm is proven to outperform humans.9
Left unaddressed, there’s a risk that consumers and
businesses may come to see AI as more of an autocrat
than an advisor, and would avoid it. Businesses and
governments will need to work together to combat this
by considering use cases where humans should always
be kept in the loop and work to increase transparency
and predictability in terms of how AI is applied. Our
interviewees also called for more clarity in how their
data is used and pointed to governments to lead in
creating accountability and safeguards for AI use.
Exploring the contradictions:
Do we value privacy
or convenience?
Privacy concerns were a recurring theme in our interviews.
Interviewees expressed distrust toward businesses and
had low expectations of businesses keeping their data safe.
Many expressed fatigue from worrying about data privacy.
Nevertheless, they readily shared data with private companies
in order to have services tailored to their individual needs
and interests.
“Privacy, not to say I don't value it, but I think
it's already out there anyway so I can't be as
concerned anymore.”
“Businesses adopting stricter data privacy
regulations. I think that's really important.”
“I know some people feel very uncomfortable
with how their data might be used unbeknownst
to them. Obviously, if it were used for nefarious
reasons, such as a political agenda, as it's been
in the news, that's not good if it's used as a
device to create division. On the other hand, I
kind of like it when I'm suggested something that
I might not have thought of.”
"I don't feel like I'm a conspiracy theory person or
anything like that, but I feel that you really just have
to be maybe a bit wary of technology."
“So who's responsible at a point when a machine
makes a decision and something happens? Is it...the
technology? The company that made the technology?
Or is it the person who chose to buy that technology
or the person who's in charge of that project?”
“AI makes me
question what
it means to
be human”
Our interviewees were not only
concerned about how AI could affect
their lives now; they were worried about
humanity’s place in an AI-driven world.
Speculation over what AI might be capable of made
it easy for our interviewees to switch from their
current worries to long-term fears about the relevance
of human existence. They cited attributes such as
empathy and emotional intelligence as uniquely human
characteristics, but were uncertain if AI could surpass
these human qualities in the future. More so than other
technologies, artificial intelligence seems to touch a
nerve when it comes to difficult questions about what
constitutes our own humanity.
Fearing the loss of control and human agency will
only fuel resistance and distrust of AI. Businesses and
governments will need to work together to assuage
these fears and create a strong shared vision of
the ways in which AI will lead to prosperity for all.
This includes giving the workforce of today—and
tomorrow—the skills needed to thrive in a new,
AI-driven world of work.
Exploring the contradictions:
Is AI human?
Our interviewees believed that AI must reflect
humanity, both the good and the bad, as we are the
ones who created it. At the same time, they were
cognizant of the fact that AI is heavily data-dependent
and therefore does not display emotions or qualities
that are human-like.
“So, human beings don't have the edge on
making the world a better place. So, maybe
robots could do a better job. But then why
do we even exist at all?”
“Empathy is a very human trait.
You cannot build that into AI.”
“I wonder about art. I wonder if it's
worth defending as strictly human.”
“I think one of the weaknesses of collecting human-
type data is that we can't be figured out with a
computer, and so huge mistakes are going to be
made.”
“Writing and literature and journalism are such
human pleasures. Without them, what is there?”
"Knowledge is power. If you consider the self-checkout,
I can make an informed decision not to use it because
it’s taking a job away from a person. I would rather
have a person than a machine do that job."
“I want AI to
have the ‘good’
kind of bias”
Our interviewees told us that it’s not
about removing all bias from AI. They’re
worried that AI will replicate the worst of
our biases and remove the good, like the
human ability to exercise judgment, faith,
and hope.
Our interviews demonstrated that recent news stories
about AI making biased decisions, from gendered
hiring to racialized pre-trial detention decisions, are
dominating public sentiment. In fact, some interviewees
felt strongly that AI development should stop until these
errors are corrected. Yet others balked at what they
saw as the opposite: AI making what they saw as purely
rational decisions instead. They felt that a data-driven
machine could never capture the best of the human
spirit, for example, the ability to see promise in a young
offender or to understand suffering when delivering a
terminal medical diagnosis. Instead, they wanted to keep
the best of humanity, but hoped technology might help
us remove some of the worst.
Businesses and governments will need to work together
to assure consumers and citizens of the fairness of
AI decision-making or risk losing the social licence
to operate. This means creating AI that purposefully
amplifies the values we wish to see. The ethical
development of AI will need to be front and centre from
the beginning of development, and proper redress
mechanisms should be put in place in the event that AI
goes wrong after the fact.
Exploring the contradictions:
Is bias only an AI
problem?
Interviewees expressed strong concerns about the inability of AI
to think like a human when making decisions. The human capacity
to judge situations with empathy and contextual awareness was
mentioned often as key to making the right, or best, decisions.
On the other hand, interviewees acknowledged that humans also
have implicit biases that can lead to bad decision-making, biases
that could possibly be avoided by using AI.
“Humans definitely
have bias, but I
think bias is not
necessarily a
bad thing.”
“AI is only as good
as the creator.”
"What if the data
informing the AI
is wrong?"
“Let’s talk about the real
issue here with AI, with the
information and the input.
Algorithmic bias is a thing.”
"In an ideal world, I would like human
oversight, i.e., someone with human
qualities that I believe AI can never
have, to make the final decisions."
“Well, humans make mistakes. I don't
know if AI necessarily makes mistakes
if it's programmed a certain way.”
“We aren’t prepared
and I don’t know
who I trust to lead”
Our interviewees told us they worry
about how ready we are as a society
for the pace of change that AI will bring.
They reported feeling overwhelmed by the pace of
technological change and felt unsure whom they could
trust to lead in the face of widespread change caused
by AI. We heard that people felt governments could be
“too slow” or “ill-equipped” to react given the current
pace of change. While businesses might be better
suited to acting quickly, prominent cases of ethical
breaches related to data privacy, bias, or malicious
use of information were cited by our interviewees as
examples of businesses not taking their obligations to
lead seriously.
There will always be an inherent tension between
pushing for technological progress and safeguarding
those who are affected by it. By working together,
businesses and governments can demonstrate to
consumers and citizens that they take their concerns
seriously and are working toward a future with AI
prosperity for all.
Exploring the contradictions:
Who should take
the lead on AI?
Some of our interviewees expressed a desire for strong AI
leadership from government but worried that government
might move too slowly to be effective. Others felt that the ones
designing the technology should be leading, but they were
concerned those businesses would be driven by self-interest.
“I think that, ideally, businesses as the designers
and developers of these technologies have
obligations and self-interested obligations
to be moral stewards of their tech.”
"Right now, businesses aren't the right
spokespeople for AI; they won't be believed."
"People think AI tech is benefiting them, but it’s
basically the companies that are winning."
"With AI, you're jumping from a horse-drawn
carriage into a Ferrari overnight. I hope
society knows how to build traffic lights."
“When new things happen, can government keep up
with all the changes? They can't for health care, so I
can’t imagine they could for AI.”
“Businesses don’t speak for me. I don’t pay my taxes to
a corporation. For me, the only publicly accountable
institution is government. That's why it exists.”
AI needs more
than a technical
fix—it requires
building trust
To take full advantage of AI’s
potential benefits, we need to
build public trust—something
sorely missing today. Canadians’
feelings of distrust in AI go deep;
they are rooted in emotion and
can't be overcome by mere
technical fixes.
Unfortunately, business and government
may not have much time to act to
address the perceived risks of AI before
Canadians definitively turn against the
new technology. Our research found that
respondents struggled to distinguish
between short- and long-term AI risks
(see Are short- and long-term risks a false
distinction?). No matter the technical issue
at the heart of an AI-related breach—
poor data collection, a lack of oversight,
etc.—respondents were quick to look at
it in terms of big-picture ethical dilemmas
surrounding AI’s use.
If this distrust and misunderstanding of
AI continues to grow, we risk seeing its
adoption slow and backlash increase. A
fatal self-driving car accident in Arizona,
for instance, has resulted in more than
20 incidents of individuals attacking self-
driving cars being tested.10
Left unchecked, this lack of trust will
contribute to a societal divide between
those who trust and use AI and those
who don’t.11
Should this happen, it could
hinder efforts to develop an AI prosperity
strategy that benefits everyone, because
the tools and solutions created won’t
address all the important issues or meet
all the important needs.
For Canada to lead in addressing these
AI challenges, we’ll need collaboration
between our policymakers, businesses,
and academia as well as with international
bodies. The time to do so is now, at the
current stage of AI development, while the
stakes are still relatively low. This is key to
ensuring that Canada’s governments and
businesses become trusted leaders in
shaping the future of AI.
In the pages that follow, Deloitte offers
our recommendations for what actions
Canada’s business and government
leaders should take to foster a greater
understanding of and trust in AI. This
is not meant to be an exhaustive list,
but simply a starting point and a call to
action for leaders to meaningfully address
Canadians’ real concerns about AI.
Are short- and long-
term risks a false
distinction?
There’s research to show that, in fact, the tendency
to classify AI risks as either short- or long-term
issues might not be the right approach at all.12
By classifying issues into two categories largely
disconnected from one another, there’s an impulse
to continuously prioritize the short-term risks
over the long-term ones. This focus may come at
a price: the decisions we make in the future will be
constrained by the ones we make today.13
Consider the development of the internet, which
is probably the closest example we have to the
proliferation of AI. It was built without a view toward
longer-term security issues and we’re still paying the
price for that today.14
AI is similar in the sense that
if we only focus on providing technical fixes for the
problems we see now, we’re unlikely to be prepared
for the issues that arise in the future.
Bring Canadian
values to the
forefront
Amid the confusion and uncertainty surrounding AI, there is a
clear opportunity for Canada to step forward and provide global
leadership. AI needs to be inclusive, fair, open, and safe, values
that Canadians hold dear.
Key actions for business
Start with values and ethics. Before
embarking on any AI project, consider
the ethical implications of the tool
you are building and codify how your
organization will ensure you maintain
fairness, privacy, safety, and openness.
This is not an exercise that can be done
after the fact; it must be the starting point.
For instance, think about the project’s fit
with company values, its ability to deliver
a safe and reliable solution, the societal
impact of the technology, and potential
reputational implications. Any application
should comply with existing legislation and
codes of practice, such as compliance with
Canadian and organization-specific data
security and privacy laws.
Create inclusive teams—and use
inclusive data. Canadian businesses
are uniquely placed to lead by drawing on
the diverse, skilled talent pool available
within our borders. Diverse and inclusive
development teams are critical for building
solutions that incorporate and meet the
needs of diverse audiences. Recent news of
racialized facial-recognition systems serves
as a sobering example of why it’s important
to get inclusion right.
Data is another critical source in preventing
and addressing bias. Think about how
data provenance affects the accuracy,
reliability, and representativeness of the
solution. Moreover, to prevent unintentional
bias, consider whether there is accurate
representation of all the groups you are
trying to serve in the data you use. Bias
should be viewed broadly, including social,
gender, and financial bias; indeed, any
bias that might adversely affect disparate
groups. Data audits and
algorithmic assessments are important for
flagging and mitigating discrimination and
bias in machine-learning models. For the
longer term, Canadian businesses can lead
the way in developing robust data practices
and standards that allow data to be shared
freely and openly while ensuring it isn’t used
in ways that could be harmful.
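The data audits and algorithmic assessments recommended above can start very simply. The example below is an illustrative sketch, not Deloitte's methodology: it computes per-group selection rates for a hypothetical set of model decisions and flags any group whose rate falls below the commonly cited "four-fifths" disparate-impact threshold. The group labels, sample data, and threshold are all assumptions.

```python
# Illustrative sketch of a simple algorithmic bias audit (not Deloitte's
# methodology): compare per-group selection rates from a model's decisions
# against the widely cited "four-fifths" disparate-impact rule of thumb.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected_bool) pairs -> {group: rate}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the four-fifths rule). Returns {group: ratio}."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical audit data: (group label, did the model select the candidate?)
audit = [("A", True)] * 60 + [("A", False)] * 40 \
      + [("B", True)] * 30 + [("B", False)] * 70
print(disparate_impact_flags(audit))  # group B selected at half group A's rate
```

A real audit would, of course, use statistically meaningful sample sizes and richer fairness metrics, but even a check this simple can flag a model for closer human review before deployment.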
Key actions for government
Act as a vocal leader and convener
on global standards. Influencing AI
standards globally is a clear opportunity for
government to both assuage the fears of
Canadians at home and establish Canada
as a leader on the global stage. The federal
government has taken early steps to
collaborate with industry groups like the CIO
Strategy Council on this important work, but
there’s a need to move quickly.15
The race
to set global standards is already heating
up and having a voice at the table is critical
for retaining a leadership role. Canada was
the first country to announce a national AI
strategy and now we need to maintain this
momentum.
Ensure AI
literacy is on
the agenda
If AI continues to conjure more images of dystopian futures than
ground-breaking innovations, then Canadians will continue to be
distrustful. Business and government leaders need to work together to
combat the hype around AI and build literacy instead. Canada should
lead the way in ensuring its population is AI-fluent and ready to put
this understanding to use. This means helping Canadians develop an
understanding of how AI works and what it can and cannot do.
Key actions for business
Build non-technical fluency among
employees and executives. Businesses
can lead by investing in training employees
to pair technical fluency with business
understanding. This doesn’t mean
that employees and executives at your
organization need to learn how to code
algorithms. Quite the opposite. It means
pairing an applied understanding of AI and
its capabilities with existing non-technical
skillsets so that the value of applying AI to
solve business problems is clear.
This also means embedding ethics
at every stage of the AI process. All
employees and executives should take a
holistic view in identifying and addressing
ethical considerations throughout the
AI development life cycle, not just at the
beginning. For example, establishing a
problem scope should also mean being
mindful of the potential implications of
solving it with AI. And, when deploying,
monitoring, and improving AI solutions,
employees and executives should ensure
they are scaled in a way that respects
the organization’s internal values and
operations.
Key actions for government
Make AI literacy a priority. Canada
should make AI literacy a pillar of its AI
strategy. Our interviewees echoed this
as well—they felt information about AI
should be disseminated through a public
information campaign, similar to the
ones for cannabis and mental health.
Governments around the world are starting
to tackle this issue and Canada should, too.
For example, Finland has launched an open
online course aimed at getting 1 percent of
the population trained in AI basics.16
These
emerging practices could provide a starting
point for Canadian governments at all levels
to assess the best ways to build knowledge
about AI. This could be by offering public
education platforms similar to the Finnish
model or by finding made-in-Canada ways to
work AI knowledge into school curricula.
Collaborate to create
the right controls
and incentives
Businesses and governments need to work in tandem to build
the necessary guardrails to assure Canadians they’re working
to prevent, detect, and repair AI breaches. We need a clear
regulatory environment and avenues for redress, alongside sound
organizational governance, to enable innovation and growth that
don’t compromise on trust.
Key actions for business
Proactively build oversight and
accountability controls. Procedures
for the development, maintenance, and
oversight of AI systems need to be in place
from the very beginning, across the entire
AI life cycle, to flag errors so that potential
breaches can be caught before they occur
and addressed promptly
if they do. Managing AI risk and ethical
concerns requires a focused approach,
starting with understanding the role of AI
in corporate strategy and the resultant
AI risks, and taking action to drive value-
aligned AI innovation. Furthermore, building
AI innovation will mean embedding values
into the development life cycle, from use
case definition, modelling, and training to
deployment and monitoring.
Policies addressing matters such as when to
put a human in the loop and when and how
to conduct AI impact assessments will help
prevent and mitigate any AI errors.
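A human-in-the-loop policy of the kind described above can be sketched in a few lines of code. This is an illustrative sketch under assumed names, categories, and thresholds, not a prescribed control: decisions in designated high-stakes categories, or below a confidence threshold, are routed to a human reviewer, and every routing decision is logged to support later audit.

```python
# Illustrative human-in-the-loop gate (assumed names and thresholds, not a
# prescribed control): low-confidence or high-stakes AI decisions are routed
# to a human reviewer; all routing decisions are logged for later audit.
from dataclasses import dataclass, field

HIGH_STAKES = {"hiring", "credit", "criminal_justice"}  # assumed categories

@dataclass
class Decision:
    category: str      # what kind of decision the model is making
    confidence: float  # model's confidence in its own output, 0..1
    outcome: str       # the model's proposed action

@dataclass
class HumanInTheLoopGate:
    min_confidence: float = 0.9          # assumed threshold
    audit_log: list = field(default_factory=list)

    def route(self, d: Decision) -> str:
        """Return 'auto' or 'human_review', recording the decision."""
        needs_human = (d.category in HIGH_STAKES
                       or d.confidence < self.min_confidence)
        route = "human_review" if needs_human else "auto"
        self.audit_log.append((d.category, d.confidence, route))
        return route

gate = HumanInTheLoopGate()
print(gate.route(Decision("marketing", 0.97, "send offer")))  # prints "auto"
print(gate.route(Decision("hiring", 0.99, "shortlist")))      # prints "human_review"
print(gate.route(Decision("marketing", 0.55, "send offer")))  # prints "human_review"
```

Note the design choice: high-stakes categories always go to a person regardless of model confidence, which mirrors the report's point that some use cases should keep humans in the loop unconditionally.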
Key actions for government
Ensure the regulatory environment
promotes trust and innovation. Clear
and consistent regulation is essential for
creating a stable business environment
for innovators and building public trust.
Regulations must balance protection and
privacy with the need for exploration
and advancement. Tools like regulatory
sandboxes can help governments keep pace
with innovation and produce better long-
term outcomes by providing a controlled
environment to test products and ideas.17
This allows the government to understand
the myriad regulatory changes required
alongside industry before codifying them.
Create new legal frameworks for
algorithmic accountability. In situations
where people believe AI has wrongly or
unfairly affected them, new legal frameworks
that lay out clear redress and accountability
for the consequences of decisions made by
AI algorithms will need to be created.
Inform, engage,
and communicate
—continuously
Transparency can’t be a one-off activity. It means finding the right
balance between information overload and keeping people in the
dark. Think of the exchange you have when you go to see your
doctor about a procedure or diagnosis: you don’t need to know
every medical detail but you want to be adequately informed about
the impact, the risks, and any preparations you need to make.
Key actions for business
Clearly articulate why AI is necessary.
Businesses can combat low trust in and
understanding of AI by actively sharing the
positive value that AI brings to their work.
Sharing stories of successful AI projects with
employees and, in some cases, the broader
public is critical for getting employees and
consumers on the same page. You must
be able to articulate the value AI will bring,
especially since the fear of shifting roles or
job losses is high. Being clear in messaging
and having strong champions articulate
the value and expectations will be key
for gaining acceptance. As an example,
Scotiabank’s stated objectives for interactive
AI systems begin with a clear directive: those
systems must be truly useful. They need
to improve outcomes for customers and
society as well as the bank.18
Make privacy an ongoing conversation.
Shift the burden away from the user
by making privacy policies and user
agreements easy to understand. Keep these
agreements top of mind by offering frequent
reminders or pop-ups that make clear to
consumers how their data is being used and
allow them to change their permissions.
Apple, for instance, has set up a data and
privacy portal for users who want to see
what kind of personal data about them is on
the company’s servers, which can then be
downloaded and examined.19
Key actions for government
Engage the public and guide the
conversation. Making individual Canadians
feel heard is critical for building trust. But
the ability to engage citizens effectively
hinges on having a well-informed public,
which can be challenging when it comes
to complex topics like AI. To give citizens
a voice in the development of AI policy
decisions, the government should consider
experimenting with innovative consultation
and engagement approaches such as
citizen reference panels or the one being
developed in Estonia (see International
spotlight).20
International spotlight:
Estonia
Estonia recognizes the importance of
public consultations on AI—and the
difficulty of consulting a public that
feels uninformed. To create a shared
understanding of AI, the country’s
leadership has drawn on its rich cultural
mythology to provide an impactful
metaphor: the Kratt. The Kratt is a magical
creature in Estonian lore made of hay
or household items that can be brought
to life. Creating this parallel has enabled
the national AI taskforce to further public
dialogue about the responsibilities and
accountabilities of “the Kratt” and draft
an algorithmic liability law—known as the
Kratt law.21
Prepare for machine-human collaboration
As the use of AI in society increases, we will need to start reskilling
and preparing the workforce to take up higher-value tasks and jobs
for the long term. This includes thinking about how people and AI will
work together rather than focusing on a people-versus-AI outcome.
Key actions for business
Retrain employees if AI is going to
change their jobs. Businesses should
begin investing in employees to prepare
them for the changing nature of work.
Over the long term, employees will
require ongoing skills training and career
counselling as new skills become in
demand. One example is RBC’s digital
navigation program, which offers training
to over 20,000 branch managers so they
can transition into positions that
require similar skillsets.22
Explore opportunities to redesign work
to take advantage of AI and learning.
Businesses will need to focus beyond
automation and identify the most promising
areas in which AI can augment workers’
performance as they shift into more creative
and value-added work. There needs to
be a plan for redesigning and reinventing
work that will combine the capabilities of
machines and people, create meaningful
jobs and careers, and help employees with
the learning and support they will need to
navigate rapidly evolving circumstances.23
Key actions for government
Provide support for lifelong learning. As
the half-life of skills rapidly shrinks, constant
reskilling and retraining will need to become
the norm. However, interviewees also told
us that programs are needed to develop
the right kind of multi-purpose skills, such
as empathy and critical thinking, to
support lifelong learning.
Governments should consider developing
shorter and more targeted certifications
that are created in conjunction with
businesses and incorporate co-op or
apprenticeship terms to meet these needs.
Consider, for instance, the way Singapore's
government works with industry to assess
the current state of workplace learning and
offers guidance on effectively structuring
workplace learning activities.24
Canada's imperative to lead
AI has the potential to fundamentally
change the way we live, work, do
business, and make decisions in
every aspect of our lives. It could
unlock a new and exciting era
of great shared prosperity for
Canadians and people around the
world. But we must first address the
legitimate concerns about how AI
will be developed and used before
we can truly realize its immense
rewards; at this point in time, the
broader public is growing ever more
skeptical—and ever more
distrustful—of AI.
Trust in AI is falling because we are becoming
increasingly aware of the risks, challenges, and
ethical implications of using it, even as we grow
more astonished at the opportunities it could
unlock. Addressing these risks, overcoming these
challenges, and tackling these ethical implications
is essential if we are to foster greater trust in AI
and capitalize on its full potential.
Canada’s global reputation and commitment
to transparency, inclusion, and collaboration
present our country—and our people—with a
unique opportunity to lead the world in developing
sound, moral, and ethical AI practices that put
people first and deliver prosperity to the many,
not just the few. As former Governor General of
Canada David Johnston wrote in his book Trust,
“Adhering to the moral imperative ahead of the
operational imperative builds and maintains trust.”
Working together, Canadian governments,
businesses, and academia can address citizens’
AI worries, helping them understand what AI is,
what it isn’t, and how it could benefit all of us.
Collaboration will also be essential for ensuring
we address AI’s ethical dilemmas in a way that
protects society yet allows for innovation, growth,
and shared prosperity. “What is most important,”
writes Johnston, “and what builds trust, is making
sure we get to our destination together.”25
If we truly join forces, that destination will be
Canada’s leadership position in shaping how AI
is used in the future, both at home and around
the world.
“Trust is earned incrementally and granted with greater acquaintance. The same cannot be said for the loss of trust. It often speeds off at the first sign of doubt and usually at the first suggestion of deceit.”
David Johnston, Former Governor General of Canada
Definitions
Artificial intelligence: Deloitte defines AI as systems and applications that perform tasks that mimic or augment human intelligence, ranging from simple gaming to sophisticated decision-making agents that adapt to their environment.
People often use the term
AI interchangeably with the
technologies that enable it—
including machine learning, deep
learning, neural networks, natural
language processing, rule engines,
robotic process automation, and
combinations of these capabilities
for higher-level applications.
Narrow vs. general AI

Narrow AI: Most current applications are what we call narrow AI, meaning a system can perform only the task it was designed to do; for every new problem, a specific algorithm needs to be designed to solve it.4
General AI: The holy grail of AI is general AI, a single system equal to or superior to human intelligence. Today’s AI applications don’t exhibit this ‘human-like’ intelligence—an algorithm designed to drive a car, for example, wouldn’t be able to detect a face in a crowd or guide a domestic robot assistant.5
AI technologies

Machine learning: The ability of statistical models to improve their performance over time by creating feedback loops for relevant data, without the need to follow explicitly programmed instructions.
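The feedback-loop idea in this definition can be illustrated with a minimal sketch (our illustration, not a Deloitte example): a one-parameter model learns the relationship y = 3x from observations alone, with no explicitly programmed rule for the answer.

```python
# Minimal machine-learning sketch: the "model" is a single parameter w,
# and a feedback loop nudges w to shrink its prediction error on data.
def train(samples, steps=200, lr=0.01):
    w = 0.0  # initial guess for the slope
    for _ in range(steps):
        for x, y in samples:
            error = w * x - y      # feedback: how wrong was the prediction?
            w -= lr * error * x    # adjust the parameter to reduce the error
    return w

data = [(1, 3), (2, 6), (3, 9)]    # observations of the hidden rule y = 3x
w = train(data)
print(round(w, 2))                  # converges toward 3.0
```

No rule for "multiply by three" appears anywhere in the code; the model infers it from the data, which is the essential contrast with explicitly programmed software.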
Deep learning: A relatively complex form of machine learning involving neural networks with many layers of abstract variables. Deep learning models are excellent for image and speech recognition, but are difficult or impossible for humans to interpret.
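The "many layers of abstract variables" can also be sketched in a few lines (again our illustration, with made-up weights): each layer computes new variables from the previous layer's outputs, and stacking such layers is what makes a model "deep".

```python
def relu(values):
    # Nonlinearity between layers; without it, stacked layers would
    # collapse into a single linear transformation.
    return [max(0.0, v) for v in values]

def layer(inputs, weights):
    # Each output is a weighted combination of all inputs: one new
    # "layer of abstract variables" built on top of the previous one.
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

# Two stacked layers: raw inputs -> hidden features -> a single score.
hidden = relu(layer([1.0, 2.0], [[0.5, -0.2], [0.3, 0.8]]))
score = layer(hidden, [[1.0, -1.0]])[0]
```

Real deep learning models stack dozens of such layers with millions of learned weights, which is why their internal variables resist human interpretation.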
Robotic process automation (RPA): Software that automates repetitive, rules-based processes usually performed by people sitting in front of computers. By interacting with applications just as humans would, software robots can open email attachments, complete e-forms, record and re-key data, and perform other tasks that mimic human action.
Physical robots: Cognitive technologies are being used in the broader robotics field to create robots that can work alongside, interact with, assist, or entertain people. Such robots can perform many different tasks in unpredictable environments, often in collaboration with human workers.
Natural language processing/generation (NLP/G): The ability to extract or generate meaning and intent from text in a readable, stylistically natural, and grammatically correct form.
Computer vision: The ability to extract meaning and intent from visual elements, whether characters (in the case of document digitization) or the categorization of content in images such as faces, objects, scenes, and activities.
Endnotes
1 Iain Cockburn, Rebecca Henderson, and Scott Stern, “The Impact of Artificial Intelligence on Innovation” (Cambridge, MA: National Bureau of Economic Research, March 2018), https://doi.org/10.3386/w24449; Ajay Agrawal, John McHale, and Alex Oettl, “Finding Needles in Haystacks: Artificial Intelligence and Recombinant Growth,” Working Paper (National Bureau of Economic Research, April 2018), https://doi.org/10.3386/w24541.
2 In order to understand the state of AI demand and adoption in Canada, Deloitte surveyed over 1,000 Canadian citizens and 2,500 businesses from around the world between July and September 2018. To understand the practices and attitudes of early adopters, we surveyed leaders of 300 Canadian and 1,600 international large companies that are aggressively adopting AI. We asked them about their goals, spending, and results from adopting AI technologies, as well as the risks and challenges they perceive in implementing them.
All respondents were required to be knowledgeable about their company’s use of artificial intelligence and have direct involvement with their company’s AI strategy, spending, implementation, and/or decision-making. Companies were required to report global annual revenues of at least US$50 million and have at least 500 employees worldwide. The margin of error on these results is +/- 5.7 percentage points, 19 times out of 20.
To understand Canadian consumers’ understanding, expectations, and beliefs about AI, we surveyed 1,019 Canadians from across the country. All respondents were aged 18+ and meant to be representative of the general adult population. The margin of error on these results is +/- 3.1 percentage points, 19 times out of 20.
3 Deloitte Canada, Citizen Survey 2018. See note above.
4 Valentina A. Assenova, “Modeling the Diffusion of Complex Innovations as a Process of Opinion Formation through Social Networks,” PLoS ONE 13, no. 5 (May 2, 2018), https://doi.org/10.1371/journal.pone.0196699.
5 “Artificial Intelligence: A Policy-Oriented Introduction,” Wilson Center, accessed March 1, 2019, https://www.wilsoncenter.org/publication/artificial-intelligence-policy-oriented-introduction.
6 “AI and Risk Management,” Deloitte United States, accessed March 1, 2019, https://www2.deloitte.com/global/en/pages/financial-services/articles/gx-ai-and-risk-management.html.
7 Ivan Semeniuk, “What Is the Science behind the Cambridge Analytica Scandal?,” The Globe and Mail, March 28, 2018, accessed February 21, 2019, https://www.theglobeandmail.com/world/article-what-is-the-science-behind-cambridge-analytica-scandal/.
8 Bo Zou, Feng Guo, and Jinyu Guo, “Absorptive Capacity, Technological Innovation, and Product Life Cycle: A System Dynamics Model,” SpringerPlus 5, no. 1 (September 26, 2016), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5037108/#CR46.
9 Dietvorst, Simmons, and Massey, “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err.”
10 “Wielding Rocks and Knives, Arizonans Attack Self-Driving Cars,” The New York Times, accessed February 20, 2019, https://www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html.
11 “AI Trust and AI Fears: A Media Debate That Could Divide Society,” Oxford Internet Institute, accessed February 20, 2019, https://www.oii.ox.ac.uk/blog/ai-trust-and-ai-fears-a-media-debate-that-could-divide-society/.
12 Stephen Cave and Seán S. ÓhÉigeartaigh, “Bridging Near- and Long-Term Concerns about AI,” Nature Machine Intelligence 1, no. 1 (January 2019): 5, https://doi.org/10.1038/s42256-018-0003-2.
13 Cave and ÓhÉigeartaigh.
14 “The Real Story of How the Internet Became So Vulnerable,” The Washington Post, accessed February 20, 2019, https://www.washingtonpost.com/sf/business/2015/05/30/net-of-insecurity-part-1/.
15 “Draft Standards,” CIO Strategy Council, accessed February 21, 2019, https://ciostrategycouncil.com/standards/draft-standards/.
16 Janosch Delcker, “Finland’s Grand AI Experiment,” POLITICO, January 2, 2019, https://www.politico.eu/article/finland-one-percent-ai-artificial-intelligence-courses-learning-training/.
17 “Regulating Emerging Technology,” Deloitte Insights, accessed February 21, 2019, https://www2.deloitte.com/insights/us/en/industry/public-sector/future-of-regulation/regulating-emerging-technology.html.
18 “The AI Revolution Needs a Rulebook. Here’s a Beginning,” The Globe and Mail, accessed December 13, 2018, https://www.theglobeandmail.com/business/commentary/article-the-ai-revolution-needs-a-rulebook-heres-a-beginning/.
19 “Understand and Control the Personal Information That You Store with Apple,” Apple Support, accessed February 21, 2019, https://support.apple.com/en-ca/HT208501.
20 “Frequently Asked Questions,” accessed February 21, 2019, https://www.crppc-gccamp.ca/faq-eng.
21 Marten Kaevats, “Estonia Considers a ‘Kratt Law’ to Legalise Artifical Intelligence (AI),” Medium (blog), September 25, 2017, https://medium.com/e-residency-blog/estonia-starts-public-discussion-legalising-ai-166cb8e34596.
22 School of Policy Studies, Queen’s University, “The Future of Work: What Do We Do?,” Session 6, Armchair Discussion, accessed February 21, 2019, https://www.youtube.com/watch?v=j8qZ21Rdmc8&t=0s&index=18&list=PL57JJtmhwgXnx2dFhCHmAsGfTxl7efYwS.
23 “Navigating through New Forms of Work,” Deloitte Insights, accessed February 21, 2019, https://www2.deloitte.com/insights/us/en/deloitte-review/issue-21/navigating-new-forms-of-work.html.
24 “Lifelong Learning Helps People, Governments and Business. Why Don’t We Do More of It?,” World Economic Forum, accessed February 21, 2019, https://www.weforum.org/agenda/2017/07/lifelong-learning-helps-governments-business/.
25 David Johnston, Trust: Twenty Reliable Ways to Build a Better Country (McClelland & Stewart, 2018).