SAFETY AND SECURITY track - Tuesday 28th
"While facial recognition technology is increasingly used across the globe, there is a growing debate on its ethical aspects and acceptability. Such issues include, for example, that facial recognition is not an accurate technology, that its step-by-step spread everywhere is approaching a “surveillance state”, that it poses challenges to individual privacy and data security, and that it may have distorting effects on democratic processes. It is suggested, among other things, that facial recognition technology needs to be well regulated, that systems need to be transparent and include “bias checks”, and that there needs to be an administrative procedure for correcting technological and social biases and faults in the system."
MIKA NIEMINEN, Principal Scientist, VTT, Technical Research Centre of Finland
Smart City Mindtrek 2020 – conference
28th-29th January
Tampere, Finland
www.mindtrek.org/2020/
Ethical Questions of Facial Recognition Technologies by Mika Nieminen
1. Ethical Questions of Facial Recognition Technology
Mika Nieminen, Adjunct Professor, Principal Scientist, VTT
mika.nieminen@vtt.fi
11.2.2020 VTT – beyond the obvious
2.
There are a number of identified positive uses for facial recognition technology, including e.g. preventing crime, finding missing persons, aiding forensic investigations, diagnosing diseases, and validating identity.
BUT, at the same time, it has been stated that:
”Facial recognition, simply by being designed and built, is intrinsically socially toxic, regardless of the intentions of its makers” (Stark 2019, 52)
3.
The recent AI Global Surveillance (AIGS) Index by the Carnegie Endowment for International Peace compiles empirical data on AI surveillance use in 176 countries (Feldstein (2019): The Global Expansion of AI Surveillance) and indicates a constant expansion of the use of the technology.
Source: Feldstein (2019)
4.
While strong legal traditions and independent institutions of accountability may protect us against unlawful exploitation of AI technology, there are a number of open questions relating to the potential abuse and unintended consequences of its use.
Such challenges include, among others:
- Facial recognition is not an accurate technology and discriminates against non-whites, women, and children (e.g. it has been estimated to produce error rates of up to 35% for non-white women) (e.g. Kuflinski 2019)
- Facial recognition can in any case be claimed to be a “racializing technology” per se, through “the classification and schematization of human facial features” and the imposition of racial categories onto human bodies even though this is scientifically unsound, thus reproducing and reinforcing systemic inequality (Stark 2019)
- If the technology is possible and easy to use, it is used “just in case”, expanding the extent of surveillance and creating, step by step, an all-reaching “surveillance state” even though no single actor intends this (Kuflinski 2019)
Balancing pros and cons?
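The accuracy disparities mentioned above can be illustrated with a minimal sketch of a per-group error-rate comparison — one simple form a “bias check” could take. The group names, counts, and threshold below are hypothetical illustrations, not figures from Kuflinski (2019) or any real evaluation:

```python
# Illustrative sketch: comparing face-recognition error rates across
# demographic groups. All groups, counts, and the threshold are
# hypothetical examples, not data from the cited studies.

def error_rate(errors: int, trials: int) -> float:
    """Fraction of misidentifications over all trials for one group."""
    return errors / trials

# Hypothetical evaluation results per demographic group.
results = {
    "group_a": {"errors": 8, "trials": 1000},
    "group_b": {"errors": 310, "trials": 1000},
}

rates = {g: error_rate(r["errors"], r["trials"]) for g, r in results.items()}
worst, best = max(rates.values()), min(rates.values())

# A simple "bias check": flag the system if the worst-served group's
# error rate exceeds the best-served group's by more than a threshold.
DISPARITY_THRESHOLD = 0.05
biased = (worst - best) > DISPARITY_THRESHOLD

print(rates)
print("flagged as biased:", biased)
```

A real audit would of course involve far more (intersectional groups, confidence intervals, operational thresholds), but even this toy gate shows why slide-level claims like “up to 35% errors for non-white women” translate directly into testable system requirements.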
5.
Balancing pros and cons?
- Surveillance technology produces a lot of data, which becomes useful only when connected to other personal data: how much data is there on innocent people, and do the storage and combination of these data breach the right to individual privacy and data security?
- There is little publicly available information on the use of surveillance technology (Bode 2019); this lack of transparency may erode societal trust in, and acceptance of, even the legitimate use of surveillance technology
- Surveillance technology may have a distorting effect on public events and, through that, on democratic processes. For instance, a “London policing ethics panel found that 38% of 16 to 24-year-olds would stay away from events using live facial recognition, with black and Asian people roughly twice as likely to do so than white people” (The Guardian, 7.6.2019)
7.
The ethical use of AI has also been considered highly important in Europe, e.g. in the EU High-Level Expert Group’s recommendations for “Trustworthy AI” (April 2019):
“Individuals should not be subject to unjustified personal, physical or mental tracking or identification, profiling and nudging through AI powered methods of biometric recognition such as: emotional tracking, empathic media, DNA, iris, and behavioural identification, affect recognition, voice, and facial recognition and the recognition of micro-expressions”
Currently the European Commission is exploring stricter rules for facial recognition technology (EU Observer, 22.8.2019)
- The ongoing debate has addressed, among other things, whether new rules are necessary given the existing General Data Protection Regulation (GDPR)
- It has been stated that the GDPR should cover the essential questions, and its impacts are now being monitored and evaluated (18.9.2019, Berlin; Joanna Goodey, Head of Freedoms and Justice Department, European Union Agency for Fundamental Rights)
- The GDPR stipulates that any processing of “biometric data” (such as facial recognition data) requires explicit consent from the person concerned
- However, data used for national security purposes is excluded from this
- Stricter regulation is under preparation, according to the latest news
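The GDPR rule summarized above — explicit consent before processing biometric data, with a national-security exclusion — can be sketched as a simple decision gate. This is an illustrative simplification, not legal advice; all names and the reduced logic are hypothetical, and real GDPR compliance involves far more than one boolean check:

```python
# Minimal sketch of a consent gate for biometric data processing,
# loosely modelled on the GDPR rule described in the slide above.
# The class, field names, and simplified logic are all illustrative.
from dataclasses import dataclass


@dataclass
class ProcessingRequest:
    subject_id: str
    explicit_consent: bool           # has the data subject explicitly consented?
    national_security_purpose: bool  # excluded from the consent requirement


def may_process_biometric_data(req: ProcessingRequest) -> bool:
    """Allow processing only with explicit consent, except for the
    national-security exclusion mentioned in the slide."""
    if req.national_security_purpose:
        return True
    return req.explicit_consent


print(may_process_biometric_data(
    ProcessingRequest("p1", explicit_consent=True, national_security_purpose=False)))
print(may_process_biometric_data(
    ProcessingRequest("p2", explicit_consent=False, national_security_purpose=False)))
```

Even in this toy form, the gate makes the slide’s policy tension visible in code: the exclusion branch bypasses the consent requirement entirely.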
8.
Besides a number of general ethical guidelines for the use of AI, there are also attempts to define specific guidelines for the non-biased and ethical use of facial recognition technology.
One recent attempt was put forward by the U.K. Biometrics and Forensics Ethics Group (February 2019) in response to field trials by South Wales Police and the Metropolitan Police Service (“Ethical issues arising from the police use of live facial recognition technology”).
Ethical principles include:
- Public Interest: The use of this technology is permissible only when it is being employed in the public interest
- Effectiveness: The use of this technology can be justified only if it is an effective tool for identifying people
9.
- The Avoidance of Bias and Algorithmic Injustice: For the use of the technology to be legitimate, it should not involve or exhibit undue bias
- Impartiality and Deployment: If the technology is deployed for policing purposes, it must be used in an even-handed way
- Necessity: Individuals normally have the right to conduct their lives without being monitored and scrutinized
- Proportionality: Its use can be permissible only if the benefits are proportionate to any loss of liberty and privacy
- Impartiality, Accountability, Oversight and the Construction of Watch Lists: (a) If humans (or algorithms) are involved in the construction of watch lists for use with the technology, it is essential that they be impartial and free from bias. (b) The construction of “watch lists” needs to be subject to oversight by an independent body
- Public Trust: It is important that those using the technology (either in operational deployments or trials) engage in public consultation and provide the rationale for its use
- Cost-effectiveness: Any evaluation of the use of this technology needs to take into account whether the resources it requires could be better used elsewhere
10. 11/02/2020 VTT – beyond the obvious 10
In order to be widely accepted and utilized facial recognition technology needs to be
well regulated (international and national standards & agreements)
The ethical governance needs to take place on institutional (formal regulation, “hard
governance”), organizational and individual level (self-regulation, ethical principles
e.g. organizational and professional codes)
The system needs to be transparent and include “bias checks” and a possibility for
citizens and organizations appeal against faults and participate in the development
of the system
There needs to be an administrational procedure for correcting technological and
social biases and faults in the system
We need to be aware of unintended negative consequences (like those for the
democratic processes and social activities) and avoid unnecessary use of
surveillance tech
Some preliminary concluding remarks