Poster "When an algorithm decides «who has to die». Security concerns in “Autonomous vehicles” between computer science and law", presented at the International Conference “Intelligent Transport Systems: a Tool or a Toy?”, held in Žilina (Slovakia), 22/23 November 2016.
Authors: Federico Costantini, Pier Luca Montessoro
Article: “SoftBank and Toyota Want Driverless Cars to Change the World,” by Sherisse Pham, CNN Business, October 4, 2018.
SoftBank is one of Japan's largest companies: it provides mobile communications, designs and builds robots, and is known for investing in small business start-ups. SoftBank CEO Masayoshi Son describes himself as “the crazy guy who bet on future.” He has a 300-year plan for SoftBank, wants to increase life expectancy by 200 years, and believes that artificial intelligence, such as the making of driverless vehicles, is the key to human civilization (Pham).
“The high-profile Japanese companies are forming a joint venture called Monet to develop businesses that will use driverless-car technology to offer new services, such as mobile convenience stores and delivery vehicles in which food is prepared en route” (Pham).
The project's name is not intended as a reference to the famous French painter Claude Monet; it is a shortened version of the words "mobility network" (Pham).
Toyota President Akio Toyoda and Son announced the project in Tokyo at a rare joint appearance by Japan's two biggest global companies (Pham). Monet began when Toyota approached SoftBank with the idea of forming an alliance to counter the global rivals that self-driving technology is creating (Pham).
One of the article's most important conclusions is Monet's plan for the coming decade: the venture will begin providing services such as self-driving buses, hospital shuttles in which medical check-ups can be done on board, and even mobile offices (Pham). The initial geographical range will be primarily Japan, but the heads of both companies plan to expand globally (Pham).
The key concept driving the CEOs to introduce this new technology is maximizing the overall good for the largest number of people (utilitarianism). Since Japan is a highly innovative country technologically, this may hold for the individuals living there; in the United States, however, that may not be the case. Under a deontological, principle-based reading, the CEOs of Toyota and SoftBank believe that, as leaders in a highly innovative country, they have a responsibility to society to create technology that makes life more convenient and makes products and services accessible to people who previously lacked access, such as people with disabilities and the elderly. This also ties into virtue ethics, because the CEOs believe the technology will help people live their lives in a way that lets them make choices they are satisfied with. Finally, the CEOs demonstrate the classical model of corporate social responsibility: competition is driving this new project, and they want to be ahead of the game when it comes to self-driving cars.
The article raises more than one ethical dilemma.
This talk is about PLEA, a virtual being and robot: the vision behind how PLEA was made and her story. She samples her environment to determine how a person feels, and then mirrors that affect back. She analyses and interprets different sources of social signals from those who interact with her to generate hypotheses, then produces non-verbal expressions using information-visualization techniques. PLEA is a proof of concept and has been presented at many festivals, including the British Science Festival and the Art & AI Festival in Leicester, UK. At the end of this talk, if we are lucky, PLEA will visit the audience from the screen.
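The abstract above describes a sense-infer-express pipeline: collect social signals, hypothesize the person's affect, then render a non-verbal response. The toy sketch below illustrates that general shape only; the function names, signal labels, and cue mappings are invented for illustration and are not the actual PLEA implementation.

```python
# Hypothetical sketch of a sense -> infer -> express pipeline like the
# one the PLEA abstract describes. All names here are illustrative.

def infer_affect(signals):
    """Pick the affect label with the most supporting signal sources."""
    scores = {}
    for source, label in signals:          # e.g. ("face", "happy")
        scores[label] = scores.get(label, 0) + 1
    return max(scores, key=scores.get)

def render_expression(affect):
    """Map an inferred affect to a simple non-verbal expression cue."""
    cues = {"happy": "smile", "sad": "lowered gaze", "surprised": "raised brows"}
    return cues.get(affect, "neutral")

# Two of three hypothetical signal sources agree on "happy".
signals = [("face", "happy"), ("voice", "happy"), ("posture", "sad")]
print(render_expression(infer_affect(signals)))  # prints "smile"
```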
Recent trends show that manufacturers take a keen interest in safety and riding pleasure. Seat belts and airbags, now standard attachments in cars, must be used fully for safer riding. Car glass, among the weakest and most hazardous components, is now also engineered with advanced technology to add safety. If this trend persists, the safety of cars will reach a pioneering level, and such improved technology will pave the way for wider use of passenger cars.
10 Looking Forward
Mike Householder/Associated Press
Learning Outcomes
After reading this chapter, you should be able to do the following:
• Summarize potential ethical risks in business by recognizing relevant issues, performing environmental scanning, and identifying reliable resources for uncovering future misconduct risks.
• Analyze how trends in the economic, geopolitical, social, and technological environment lead to ethical issues in business.
• Evaluate how emerging ethical issues affect the ethics and compliance function in an organization.
Introduction
Self-Driving Cars
Imagine driving along a winding mountain pass, with a ravine on the right and a rock wall
across the opposite lane on the left. Taking a sharp turn around the mountain, you see two
cars coming toward you in both lanes, one trying to pass the other. In seconds, each driver
must react. You slam on the brakes, hoping that the other cars adjust to allow the passing car
to move out of your lane.
Now imagine the same situation, except this time you are in a self-driving car, taking photos
of the scenery as you tour through the mountain pass. The car relies on radar sensors, lasers,
and cameras to keep the vehicle on a path to the designated destination (Greimel, 2013).
Turning the corner, the directional equipment cannot see around the mountain, but recognizes
an obstacle immediately. The computer controlling the car must now react. The decision
to stop or swerve can lead to a potentially fatal accident or save all passengers. Your life and
the lives of others depend on the computer program designed and installed by the automaker.
Do you trust the automaker to keep you safe?
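The scenario above turns on an algorithmic choice: the computer must pick a maneuver under uncertainty. A deliberately toy way to picture such a decision is a cost-minimizing chooser, sketched below. The option names and harm scores are hypothetical and do not represent any automaker's actual logic.

```python
# Illustrative sketch only: a toy cost-minimizing chooser for the
# stop-or-swerve dilemma described above. The options and the numeric
# expected-harm scores are hypothetical.

def choose_maneuver(options):
    """Pick the maneuver with the lowest estimated expected harm.

    options: dict mapping maneuver name -> expected-harm score
    (e.g. probability of collision times severity).
    """
    return min(options, key=options.get)

# Hypothetical estimates for the mountain-pass scenario: braking is
# scored as less harmful than swerving toward the ravine or the wall.
estimates = {
    "brake": 0.2,
    "swerve_right": 0.9,   # toward the ravine
    "swerve_left": 0.8,    # toward the oncoming lane and rock wall
}
print(choose_maneuver(estimates))  # prints "brake"
```

The ethical difficulty, of course, is precisely who assigns those scores and on what grounds, which is the question the passage poses.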
Automakers are racing to launch self-driving, or autonomous, cars by the year 2020. Some models
already feature technology that allows them to park themselves, warn of lane departures,
detect a vehicle in blind spots, and slow or stop to avoid an obstacle even before the driver reacts.
Such measures have already reduced traffic accidents in the United States, and studies predict
“that if just 10% of the cars in the U.S. were autonomous, there would be 211,000 fewer accidents
annually, and 1,100 lives would be saved each year” (Tuttle, 2013, para. 7). Other advantages
include more efficient use of fuel resources and greater mobility for persons with disabilities.
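As a back-of-envelope check on the Tuttle (2013) figures quoted above, the sketch below linearly scales the 10% estimates to other adoption shares. The study does not claim linearity; this only illustrates the arithmetic behind the quoted numbers.

```python
# Naive linear extrapolation of the figures quoted from Tuttle (2013):
# a 10% autonomous share -> 211,000 fewer accidents and 1,100 lives
# saved per year. Real effects would not scale linearly; this is only
# an arithmetic illustration.

def scale_estimate(share, accidents_at_10pct=211_000, lives_at_10pct=1_100):
    """Linearly scale the quoted 10% figures to another share."""
    factor = share / 0.10
    return round(accidents_at_10pct * factor), round(lives_at_10pct * factor)

accidents, lives = scale_estimate(0.50)  # hypothetical 50% scenario
print(accidents, lives)  # prints "1055000 5500"
```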
However, there are ethical challenges to marketing a car that requires little or no driver intervention
(Newman, 2014). What would prevent a car manufacturer from programming the
car’s route so passengers will pass sponsoring businesses? The safety features in existing
cars emit warnings and require driver intervention. As technology progresses toward a more
autonomous car, drivers may become accustomed to being able to read, work, or perform
other tasks while the vehicle transports them to their destination. Distracted drivers are less
likely to react in time when intervention is required.
«Information Society» and MaaS in the European Union: current issues and futu... (Federico Costantini)
Invited speech in the panel discussion: «MaaS Policy Aspects. New legal Framework and Liability? What are expected benefits for user and local authorities?».
International Conference “Intelligent Transport Systems: a Tool or a Toy?” 22/23 November 2016 - Žilina (Slovakia)
Algorithmic decision-making in AVs: understanding ethical and technical conce... (Araz Taeihagh)
Autonomous Vehicles (AVs) are increasingly embraced around the world to advance smart mobility and more broadly, smart and sustainable cities. Algorithms form the basis of decision-making in AVs, allowing them to perform driving tasks autonomously, efficiently and more safely than human drivers and offering various economic, social and environmental benefits. However, algorithmic decision-making in AVs can also introduce new issues that create new safety risks and perpetuate discrimination. We identify bias, ethics and perverse incentives as key ethical issues in the AV algorithms’ decision-making that can create new safety risks and discriminatory outcomes. Technical issues in the AVs’ perception, decision-making and control algorithms, limitations of existing AV testing and verification methods, and cybersecurity vulnerabilities can also undermine the performance of the AV system. This article investigates the ethical and technical concerns surrounding algorithmic decision-making in AVs by exploring how driving decisions can perpetuate discrimination and create new safety risks for the public. We discuss steps taken to address these issues and increase the accountability of AV stakeholders, highlight the existing research gaps and the need to mitigate these issues through the design of AV’s algorithms and of policies and regulations to fully realise AVs’ benefits for smart and sustainable cities.
Problems in Autonomous Driving System of Smart Cities in IoT (ijtsrd)
This paper focuses on the problems and challenges of self-driving. In the modern era, technologies advance day by day. The smart-city field has introduced a technology called "Autonomous Driving," which can be defined as a self-driving, automated vehicle. Google has been working on this type of system since 2010 and is still refining the technology to take it to a higher level. Any technology can reach an advanced level, yet none provides a fully fledged result. This paper helps researchers understand the problems, challenges, and issues related to this technology. Shweta S. Darekar | Dr. Anandhi Giri, "Problems in Autonomous Driving System of Smart Cities in IoT," published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume 4, Issue 2, February 2020.
URL: https://www.ijtsrd.com/papers/ijtsrd30079.pdf
Paper Url : https://www.ijtsrd.com/computer-science/other/30079/problems-in-autonomous-driving-system-of-smart-cities-in-iot/shweta-s-darekar
UCL joint Institute of Education (London Knowledge Lab) & UCL Interaction Centre seminar, 20th April 2016. Replay: https://youtu.be/0t0IWvcO-Uo
Algorithmic Accountability & Learning Analytics
Simon Buckingham Shum
Connected Intelligence Centre, University of Technology Sydney
ABSTRACT. As algorithms pervade societal life, they are moving from the preserve of computer science to becoming the object of far wider academic and media attention. Many are now asking how the behaviour of algorithms can be made “accountable”. But why are they “opaque” and to whom? As this vital discussion unfolds in relation to Big Data in general, the Learning Analytics community must articulate what would count as meaningful questions and satisfactory answers in educational contexts. In this talk, I propose different lenses that we can bring to bear on a given learning analytics tool, to ask what it would mean for it to be accountable, and to whom. From a Human-Centred Informatics perspective, it turns out that algorithmic accountability may be the wrong focus.
BIO. Simon Buckingham Shum is Professor of Learning Informatics at the University of Technology Sydney, which he joined in August 2014 to direct the new Connected Intelligence Centre. Prior to that he was at The Open University’s Knowledge Media Institute 1995-2014. He brings a Human-Centred Informatics (HCI) approach to his work, with a background in Psychology (BSc, York), Ergonomics (MSc, London) and HCI (PhD, York) where he worked with Rank Xerox Cambridge EuroPARC on Design Rationale. He co-edited Visualizing Argumentation (2003) followed by Knowledge Cartography (2008, 2nd Edn. 2014), and with Al Selvin wrote Constructing Knowledge Art (2015). He is active in the emerging field of Learning Analytics and is a co-founder of the Society for Learning Analytics Research, Compendium Institute and Learning Emergence network.
Defence Global, August 2013 - Complex Adaptive Systems and Defence (David Wortley)
Today’s society is shaped by technology in unprecedented
ways. We all face disruptive changes in our lives and
new challenges which, paradoxically, can be both created
and addressed by the various digital technologies that inform, empower and influence individual citizens on a massive scale. There has been no previous period in history where millions of ordinary citizens have been
able to freely access knowledge and simultaneously share their lives and opinions with a global audience.
The empowerment of citizens through accessible and
affordable technologies represents a significant challenge to defence and security. The knowledge about weapons and explosive devices which can readily be accessed
and the powerful and portable communication tools
available today are in a large measure responsible for the phenomenon of “Asymmetric Warfare” in which
individuals and small groups with limited traditional
military resources can pose serious problems for the
far better equipped armed forces responsible for
defence and security.
It is therefore the way in which technology empowers
individual citizens with access to seemingly unlimited
information and choice that creates the tensions,
conflicts and disruptive changes in which the needs of society challenge the rights and responsibilities of the individual. These so-called “Grand Challenges” brought about by conflicts between the need for a secure
society and individual citizen rights to privacy and civil
liberties represent a potential threat to a secure and
peaceful future, as is evidenced by on-going and long drawn out conflicts in countries striving for democratic
freedom.
On December 2, 2014, the Fundación Ramón Areces, in collaboration with Mujeres por África, organized the first 'Ellas investigan' conference, under the theme 'Women, science, technology and innovation in Africa'. Speakers from multiple disciplines addressed what measures should be taken to promote scientific and technological development on that continent as an engine of global progress.
Arquitectura del territorio: Public lecture in Antalya on APPSA Pedestrian Priority & Security Project (CEE-CEUPA-OAPEE). A3 Networking presented to 300 people, press and government authorities the public space proposal: the Pedestrian Urban Code and the New Greenstreets Net Road Urban Project.
Tutorial, CLEI 2010 - Asunción - Paraguay
October 2010
LRM - ICMC - USP São Carlos
INCT-SEC
Title: "Robôs Móveis e Veículos Autônomos: Pesquisa, Desenvolvimento e Desafios na área da Inteligência Artificial" (Mobile Robots and Autonomous Vehicles: Research, Development and Challenges in Artificial Intelligence)
AI is “our greatest existential threat…”
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
“I think there is potentially a dangerous outcome there.” (referring to Google’s DeepMind, which he invested in to keep an eye on things)
Elon Musk
The EU ‘AI ACT’: a “risk-based” legislation for robotic surgery (Federico Costantini)
The long-awaited European Union “Artificial Intelligence Act” was recently approved (13 March 2024). Even though it has not yet been published (for this reason we might still refer to it as COM(2021)206), and despite the fact that it will come into force only two years after its publication, it has drawn the attention of the international community of AI experts as the first piece of legislation worldwide regulating such technologies. This contribution presents the “AI ACT” with a focus on its most relevant features for robotic surgery. After a short overview of its background, a very complex legal framework built by the EU over the last 25 years, I will offer a summary of its provisions, which result from the “risk-based” approach adopted by the EU legislator. I will then address “high risk” AI systems, analysing the obligations that both manufacturers and providers will need to fulfil, and highlighting those that are most challenging in the sector of robotic surgery. I will close with a few conclusive remarks, concerns, and recommendations.
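The "risk-based" approach described above can be pictured as a tiered classification. The sketch below uses the commonly cited tier names of the AI Act; the example obligations are simplified paraphrases for illustration, not legal text, and the mapping of a surgical robot to the high-risk tier is the abstract's framing, not legal advice.

```python
# Simplified sketch of the AI Act's "risk-based" tiers. Tier names follow
# the commonly cited structure of the Act; the obligation strings are
# illustrative paraphrases, not the regulation's wording.

AI_ACT_TIERS = {
    "unacceptable": "prohibited practices (e.g. social scoring by public authorities)",
    "high": "strict obligations: risk management, data governance, human oversight, conformity assessment",
    "limited": "transparency duties (e.g. disclosing that one is interacting with an AI system)",
    "minimal": "no specific obligations under the Act",
}

def obligations_for(tier):
    """Return the (simplified) obligations associated with a risk tier."""
    return AI_ACT_TIERS[tier]

# A surgical robot's AI component would plausibly fall in the high-risk
# tier, triggering the heaviest set of obligations.
print(obligations_for("high"))
```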
Talk at the webinar "Intelligenza Artificiale. Sfide, Opportunità ed Insidie" (Artificial Intelligence: Challenges, Opportunities and Pitfalls), 6 November 2020, organized by the Comitato Italiano Ingegneria dell’Informazione, Consiglio Nazionale Ingegneri.
More Related Content
Similar to POSTER: "When an algorithm decides «who has to die». Security concerns in “Autonomous vehicles” between computer science and law"
The recent trend manufacturer’s shows keen interest in safety and pleasure riding. The seat belts and the airbags which are now a important attachment of the cars must be utilized fully for safer riding. The weakest and most harzdous part the glasses in cars are also now made to add safety with the advanced technology. If this condition persist the safety factor in cars will reach a pioneer position. With these kind of improved technology will pay way for usage of large number of passenger cars
10 Looking Forward
Mike Householder/Associated Press
Learning Outcomes
After reading this chapter, you should be able to do the following:
• Summarize potential ethical risks in business by recognizing relevant issues, performing environmental
scanning, and identifying reliable resources for uncovering future misconduct risks.
• Analyze how trends in the economic, geopolitical, social, and technological environment lead to ethical
issues in business.
• Evaluate how emerging ethical issues affect the ethics and compliance function in an organization.
ped82162_10_c10_295-324.indd 295 4/23/15 8:49 AM
Introduction
Introduction
Self-Driving Cars
Imagine driving along a winding mountain pass, with a ravine on the right and a rock wall
across the opposite lane on the left. Taking a sharp turn around the mountain, you see two
cars coming toward you in both lanes, one trying to pass the other. In seconds, each driver
must react. You slam on the brakes, hoping that the other cars adjust to allow the passing car
to move out of your lane.
Now imagine the same situation, except this time you are in a self-driving car, taking photos
of the scenery as you tour through the mountain pass. The car relies on radar sensors, lasers,
and cameras to keep the vehicle on a path to the designated destination (Greimel, 2013).
Turning the corner, the directional equipment cannot see around the mountain, but recog-
nizes an obstacle immediately. The computer controlling the car must now react. The decision
to stop or swerve can lead to a potentially fatal accident or save all passengers. Your life and
the lives of others depend on the computer program designed and installed by the automaker.
Do you trust the automaker to keep you safe?
Automakers are racing to launch self-driving, or autonomous, cars by the year 2020. Some mod-
els already feature technology that allows them to park themselves, warn of lane departures,
detect a vehicle in blind spots, and slow or stop to avoid an obstacle even before the driver reacts.
Such measures have already reduced traffic accidents in the United States, and studies predict
“that if just 10% of the cars in the U.S. were autonomous, there would be 211,000 fewer accidents
annually, and 1,100 lives would be saved each year” (Tuttle, 2013, para. 7). Other advantages
include greater use of fuel resources and greater mobility for persons with disabilities.
However, there are ethical challenges to market a car that requires little or no driver inter-
vention (Newman, 2014). What would prevent a car manufacturer from programming the
car’s route so passengers will pass sponsoring businesses? The safety features in existing
cars emit warnings and require driver intervention. As technology progresses toward a more
autonomous car, drivers may become accustomed to being able to read, work, or perform
other tasks while the vehicle transports them to their destination. Distracted drivers are less
likely to.
«Information Society» and MaaS in the European Union: current issues and futu...Federico Costantini
Invited speech in the panel discussion: «MaaS Policy Aspects. New legal Framework and Liability? What are expected benefits for user and local authorities?».
International Conference “Intelligent Transport Systems: a Tool or a Toy?” 22/23 November 2016 - Žilina (Slovakia)
Algorithmic decision-making in AVs: understanding ethical and technical conce...Araz Taeihagh
Autonomous Vehicles (AVs) are increasingly embraced around the world to advance smart mobility and more broadly, smart and sustainable cities. Algorithms form the basis of decision-making in AVs, allowing them to perform driving tasks autonomously, efficiently and more safely than human drivers and offering various economic, social and environmental benefits. However, algorithmic decision-making in AVs can also introduce new issues that create new safety risks and perpetuate discrimination. We identify bias, ethics and perverse incentives as key ethical issues in the AV algorithms’ decision-making that can create new safety risks and discriminatory outcomes. Technical issues in the AVs’ perception, decision-making and control algorithms, limitations of existing AV testing and verification methods, and cybersecurity vulnerabilities can also undermine the performance of the AV system. This article investigates the ethical and technical concerns surrounding algorithmic decision-making in AVs by exploring how driving decisions can perpetuate discrimination and create new safety risks for the public. We discuss steps taken to address these issues and increase the accountability of AV stakeholders, highlight the existing research gaps and the need to mitigate these issues through the design of AV’s algorithms and of policies and regulations to fully realise AVs’ benefits for smart and sustainable cities.
Problems in Autonomous Driving System of Smart Cities in IoTijtsrd
This paper focuses on the problems and challenges during self driving. In the modern era, technologies are getting advanced day by day. The field of smart city has introduced a new technology called ""Autonomous Driving"". Autonomous driving can be defined as Self Driving, Automated Vehicle. Google has started working on this type of system since 2010 and still in the phase of making changes in this technology to take it to a higher level. Any technology can reach up to an advanced level but it cannot provide a full fledged result. This paper facilitates the researchers to understand the problems, challenges and issues related to this technology. Shweta S. Darekar | Dr. Anandhi Giri ""Problems in Autonomous Driving System of Smart Cities in IoT"" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-2 , February 2020,
URL: https://www.ijtsrd.com/papers/ijtsrd30079.pdf
Paper Url : https://www.ijtsrd.com/computer-science/other/30079/problems-in-autonomous-driving-system-of-smart-cities-in-iot/shweta-s-darekar
UCL joint Institute of Education (London Knowledge Lab) & UCL Interaction Centre seminar, 20th April 2016. Replay: https://youtu.be/0t0IWvcO-Uo
Algorithmic Accountability & Learning Analytics
Simon Buckingham Shum
Connected Intelligence Centre, University of Technology Sydney
ABSTRACT. As algorithms pervade societal life, they are moving from the preserve of computer science to becoming the object of far wider academic and media attention. Many are now asking how the behaviour of algorithms can be made “accountable”. But why are they “opaque” and to whom? As this vital discussion unfolds in relation to Big Data in general, the Learning Analytics community must articulate what would count as meaningful questions and satisfactory answers in educational contexts. In this talk, I propose different lenses that we can bring to bear on a given learning analytics tool, to ask what it would mean for it to be accountable, and to whom. From a Human-Centred Informatics perspective, it turns out that algorithmic accountability may be the wrong focus.
BIO. Simon Buckingham Shum is Professor of Learning Informatics at the University of Technology Sydney, which he joined in August 2014 to direct the new Connected Intelligence Centre. Prior to that he was at The Open University’s Knowledge Media Institute 1995-2014. He brings a Human-Centred Informatics (HCI) approach to his work, with a background in Psychology (BSc, York), Ergonomics (MSc, London) and HCI (PhD, York) where he worked with Rank Xerox Cambridge EuroPARC on Design Rationale. He co-edited Visualizing Argumentation (2003) followed by Knowledge Cartography (2008, 2nd Edn. 2014), and with Al Selvin wrote Constructing Knowledge Art (2015). He is active in the emerging field of Learning Analytics and is a co-founder of the Society for Learning Analytics Research, Compendium Institute and Learning Emergence network.
Defence global august 2013 - Complex Adaptive Systems and DefenceDavid Wortley
Today’s society is shaped by technology in unprecedented
ways. We all face disruptive changes in our lives and
new challenges which, paradoxically, can be both created
and addressed by the various digital technologies that inform, empower and influence individual citizens on a massive scale. There has been no previous period in history where millions of ordinary citizens have been
able to freely access knowledge and simultaneously share their lives and opinions with a global audience.
The empowerment of citizens through accessible and
affordable technologies represents a significant challenge to defence and security. The knowledge about weapons and explosive devices which can readily be accessed
and the powerful and portable communication tools
available today are in a large measure responsible for the phenomenon of “Asymmetric Warfare” in which
individuals and small groups with limited traditional
military resources can pose serious problems for the
far better equipped armed forces responsible for
defence and security.
It is therefore the way in which technology empowers
individual citizens with access to seemingly unlimited
information and choice that creates the tensions,
conflicts and disruptive changes in which the needs of society challenge the rights and responsibilities of the individual. These so-called “Grand Challenges” brought about by conflicts between the need for a secure
society and individual citizen rights to privacy and civil
liberties represent a potential threat to a secure and
peaceful future, as is evidenced by on-going and long drawn out conflicts in countries striving for democratic
freedom.
POSTER: "When an algorithm decides «who has to die». Security concerns in “Autonomous vehicles” between computer science and law"
<Research question – policy area>
It is well known that car manufacturers are introducing artificial intelligence into driving
systems, so that algorithms can not only assist drivers in critical circumstances
(parking, fog, snow) but also completely replace human beings. In such cases,
specific instructions must be given for every kind of circumstance, including those in
which people's lives are endangered. How should these algorithms be
programmed? How could they possibly make the "right choice" in every
situation? The fact that such an algorithm could compare the value of
different people's lives forces us to reflect on broader issues. What is the "correct"
decision? What is the value of a person's life today? Can we delegate such
delicate issues to a technology? What does the concept of "security" entail in
this field?
Prof. PIER LUCA MONTESSORO
pierluca.montessoro@uniud.it
Prof. Aggr. FEDERICO COSTANTINI
federico.costantini@uniud.it
“[…] to throw the problem of his responsibility on the machine, whether
it can learn or not, is to cast his responsibility to the winds, and to find it
coming back seated on the whirlwind.” Wiener, Norbert, The Human Use
of Human Beings. Cybernetics and Society, Boston, Houghton Mifflin, 1950
<Innovative approach – method>
Our intention is not to use the scenarios exemplified above to detect
cognitive or cultural biases in the respondents who face them,
as http://moralmachine.mit.edu/ does, but to understand whether and how
replacing human drivers with such algorithms could help address
contemporary moral issues (as highlighted by the famous "trolley problem") or
raise new ones.
Our perspective is based on the following assumptions:
(1) the agent's preference for minimising damage, loss and pain, that is,
"evil" in human terms;
(2) the absence of any form of discrimination (on gender, sexual orientation,
disability, "racial" or geographical differences, economic or social status,
or religious, ideological or cultural beliefs).
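Taken together, the two assumptions can be read as a harm-minimising choice rule applied to outcome descriptions that simply contain no protected attributes. The following Python sketch is purely illustrative and not part of the poster; the `Outcome` fields, the harm ordering and the example options are our own hypothetical choices:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible manoeuvre and its foreseeable consequences."""
    label: str
    fatalities: int
    injuries: int

def choose_outcome(options):
    # Assumption (1): prefer the option that minimises harm, counting
    # fatalities before injuries ("evil" in human terms).
    # Assumption (2): no protected attribute (age, gender, status, ...)
    # appears in the model at all, so the rule cannot discriminate on it.
    return min(options, key=lambda o: (o.fatalities, o.injuries))

options = [
    Outcome("swerve", fatalities=1, injuries=0),
    Outcome("do nothing", fatalities=5, injuries=0),
]
print(choose_outcome(options).label)  # -> swerve
```

Note that such a rule answers none of the hard questions above: it only makes explicit that "minimise harm" and "no discrimination" are modelling decisions taken before any code runs.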
Dipartimento di Scienze giuridiche
Dipartimento Politecnico di Ingegneria e Architettura
<Expected impact>
After having evaluated the results of our consultations, we argue that
the following lines of research should be developed in this promising
field of study:
1.- technological: how can autonomous driving increase vehicle
security? What kinds of weaknesses can be identified, and how should
they be addressed? How can the security risks arising from network
and software vulnerabilities be mitigated? Are the remedies feasible
and sustainable?
2.- legal-philosophical: what does it mean to delegate human
decisions to artificial intelligence? How can a computer make the
"right decision"? What criteria could be defined as "good"? How can
universal criteria and standards be established in contemporary society,
where sharing a common vision of ethics is so difficult?
3.- strictly legal: how should the interaction among fully human-driven
vehicles, autonomous vehicles and pedestrians be regulated in future
transport? Should traffic rules be the same for everyone?
4.- socio-economical: what social impact could the daily use of
autonomous vehicles have? How would people perceive the balance
between the "cost" of computer-determined fatalities and the
"benefits" gained in everyday scenarios (ecology, for example)? What
features should autonomous vehicles have in order to be
"tolerated" by customers and thus succeed in the market?
«What would you do if, as an algorithm, you had to choose between…»
(Each group had 11 respondents: D = drivers, S = students. Counts show choices for Option 1 / Option 2.)

Simple dilemmas
«… the life of a pedestrian (1) and the physical integrity of the car you are driving (2)?» — D: 11/0, S: 11/0
«… the life of a pedestrian who contravenes a prohibition (1) and that of an animal standing on the roadside (2)?» — D: 11/0, S: 11/0
«… the life of a child (1) and that of an elderly person (2)?» — D: 0/11, S: 0/11
«… the life of a pedestrian crossing the road on a pedestrian crossing (1) and that of one who does so outside of it, violating a prohibition (2)?» — D: 10/1, S: 11/0
«… the life of a pedestrian (1) and that of a passenger (2)?» — D: 4/7, S: 3/8
«… the life of the vehicle’s owner (1) and that of a passenger (2)?» — D: 3/8, S: 4/7
«… the life of the vehicle’s owner (1) and that of a pedestrian (2)?» — D: 1/10, S: 5/6
«… perform a manoeuvre, killing one pedestrian (1), or do nothing, leaving five people dead (2)?» — D: 0/11, S: 1/10

Combined dilemmas
«… the life of a pedestrian crossing the road on a pedestrian crossing (1) and those of two pedestrians who do so violating a prohibition (2)?» — D: 5/6, S: 7/4
«… the lives of two elderly people (1) and that of a child (2)?» — D: 2/9, S: 4/7
«… the life of a pedestrian crossing the road outside the pedestrian crossing (1) and that of a passenger (2)?» — D: 6/5, S: 3/8
«… the lives of five pedestrians (1) and that of the passenger (2)?» — D: 10/1, S: 9/2
«… the life of a child (1) and that of a passenger (2)?» — D: 10/1, S: 10/1
«… the life of a child (1) and those of all passengers (2)?» — D: 7/4, S: 3/8
«… the life of a child (1) and that of the vehicle’s owner (2)?» — D: 11/0, S: 11/0
<Results obtained>
To start the research, we developed a pre-test, administering a
questionnaire of 15 questions (some "basic dilemmas" and other
"derivative dilemmas", each requiring a radical choice between two
opposing scenarios) to two groups of 11 people each: law students
and experienced bus drivers.
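The pre-test tallies above can be checked mechanically, for instance to verify that every row sums to 11 respondents per group and to count how often the two groups' majorities coincide. The snippet below is our own tabulation of the poster's data, not part of the original:

```python
# Each tuple: (drivers Option 1, drivers Option 2,
#              students Option 1, students Option 2),
# transcribed row by row from the pre-test table.
rows = [
    (11, 0, 11, 0), (11, 0, 11, 0), (0, 11, 0, 11), (10, 1, 11, 0),
    (4, 7, 3, 8), (3, 8, 4, 7), (1, 10, 5, 6), (0, 11, 1, 10),
    (5, 6, 7, 4), (2, 9, 4, 7), (6, 5, 3, 8), (10, 1, 9, 2),
    (10, 1, 10, 1), (7, 4, 3, 8), (11, 0, 11, 0),
]

# Sanity check: 11 respondents per group in every dilemma.
assert all(d1 + d2 == 11 and s1 + s2 == 11 for d1, d2, s1, s2 in rows)

# Count dilemmas where the two groups' majority choices coincide.
agree = sum((d1 > d2) == (s1 > s2) for d1, d2, s1, s2 in rows)
print(f"{agree}/{len(rows)} dilemmas with matching majorities")  # -> 12/15
```

On this reading, drivers and students reach the same majority choice in 12 of the 15 dilemmas; the three divergences all involve weighing a rule violation or a child against passengers.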