The document discusses various issues related to increasing automation and artificial intelligence technologies, including their impacts on jobs, privacy, security, and ethics. It notes that automation has already eliminated many agricultural jobs and is projected to replace humans in other occupations like driving. Some experts argue that increased automation will lead to most jobs being replaced, while others believe certain jobs will remain safe from innovation. The document also examines privacy and surveillance concerns related to technologies, as well as security issues like malware infections. It discusses concepts like Asimov's Three Laws of Robotics and challenges around regulating advanced computer systems like robots and artificial intelligence.
We forget that when technology destroys jobs, it helps us create new ones, as long as we remember that the point isn't just cost reduction, but doing things that were previously impossible. That means both solving hard problems and pairing technology with people in ways that play to the strengths of each. My keynote at Strata+Hadoop World London, May 2017.
The AIs Are Not Taking Our Jobs...They Are Changing Them - Tim O'Reilly
My talk at the Web Summit in Dublin on November 6, 2014. Reflections on the notion that AI will take away jobs, and our need to recognize and redefine the human role in the applications we build. Covers many of the same ideas as my "Internet of Things and Humans" talk, but from a slightly different angle.
My keynote at Velocity New York (#VelocityConf) on September 17, 2014. The failure of healthcare.gov was a textbook DevOps (or rather, lack of DevOps) case study. But it’s part of a wider pattern that reminds us that people should be at the heart of everything we build. In fact, getting the “people” part right is the key both to DevOps and great user experience design. It runs from the Internet of Things right through building government services that really work for citizens.
My talk to the joint OECD/G20 German Presidency conference on digitalization in Berlin on January 12, 2017. Fitness landscapes as applied to technology, business, and the economy. Note that the fitness landscape slides will not be animated in this PDF, which I shared this way so that you could see my narrative in the speaker notes. While it has some slides in common with my White House Frontiers conference talk, it includes a bunch of other material.
There is very little written on how the law should deal with real-life robots, and much of it is highly aspirational, based on ideas of human-intelligent robots that may never happen. This presentation looks at legal issues for robots from now to 5-10 years on, and focuses on liability for harms caused by robots in domestic/consumer settings.
Lexpo - The Seven Deadly Sins of Legal Tech Predictions - Brian Inkster
There is much hype about robots taking over the work of lawyers. In this talk I will guide you through the Seven Deadly Sins of Legal Tech Predictions to debunk the hype and allow you to see the wood for the trees.
Expect to hear tales of sensationalism by legal technology journalists, fake and failed robots, unimpressive legal chatbots, AI washing, Blockwashing and the reality of Moore’s law today. Blade Runner, which of course was set in 2019, will also feature.
With great power comes great (development) responsibility - Sally Lait
Developers are often seen as mere implementers, when the reality is that their choices can have a huge impact on the overall success of projects - for good or for bad. A user-centric design process is common in most projects, but in this talk we'll cover how viewing usability and responsibility as part of development decisions is equally important. We'll travel through time from the beginning of the digital age, observing how a focus on users (or lack thereof) has helped to make or break the success of ideas. We'll also consider how other industries apply similar principles and how we can learn from them, finishing with some tips to apply to our builds.
Full Course - Law and Regulation of Machine Intelligence - Bar Ilan Universit... - Nicolas Petit
Discussions over the regulation of machine intelligence (“MI”) are all the rage as artificial intelligence (“AI”) and robotic technologies are introduced in society. Computer engineers’ fears that overly rigid regulations might stifle innovation have fueled proposals to create regimes of selective immunity for research on certain types of robotic applications. At the same time, ethical concerns have prompted calls for an all-out ban on research in relation to automated weapons. Some scholars even claim that robots will become so important to mankind that “a new branch of the law” is needed, “to grant their race and its individual members the benefits of legal protection”, much like society did with the environment.
In the legal scholarship, several approaches are emerging. First, in virtually every specialist field of the law, experts in the trenches ponder how the rise of MI necessitates upgrades, revisions or adjustments to their legal discipline. Second, an alternative approach uses a functional methodology which identifies outstanding legal issues by class of technological application (for instance, driverless vehicles, robotic prostheses (and exoskeletons), surgical robots, and robot companions). Third, an often-used dichotomy is that between roboethics and robolaw, which distinguishes between the instruments of regulation, i.e. the ex ante incorporation of norms in intelligent machines (for instance, Asimov's three laws) versus the ex post setting of rules to regulate the execution of robotic technology in society.
With this background, the overall ambition of this course is to map the potential regulatory needs created by MIs. More specifically, the goals of the course are to: (i) provide an overview of the state of play in relation to the introduction of MI in society; (ii) set out the main regulatory options discussed in the scholarship in relation to MI (disciplinary, functional and instrumental); (iii) envision the issue in terms of the consequences of the introduction of MI technology in society, and proceed on this basis to explore alternative consequentialist regulatory responses; (iv) understand the implications of those distinct regulatory approaches in dedicated fields of the law, i.e. liability law and the law of warfare.
Students who follow this course will gain a good understanding of the prospective regulatory issues related to MI as well as of the theories of regulation.
Trends in AI:
- 67% of executives say AI will help humans and machines work together, combining artificial and human intelligence to be stronger than either alone.
- 65% think that AI would free employees from menial tasks.
- 27% of executives say their organization plans to invest within a year in cybersecurity safeguards that use AI and machine learning.
So is Artificial Intelligence going to provide safety for us?
Richard Freeman: Work and Income in the Age of AI Robots - HKUST IEMS
This talk is part of the HKUST IEMS & IPP – EY Hong Kong Emerging Market Insights Series, presented by HKUST IEMS with support from the Institute for Public Policy and EY.
Will the next AlphaGo beat you at your job?
Will artificial intelligence overwhelm companies that rely on human decision-makers?
Or is the concern over robots and automation largely media hype?
This talk will offer evidence-driven insights about the ongoing and likely future effects of the “robo-lution” on the global economy.
Find out more at Iems.ust.hk/insights
Discusses how technology has been integrated into our lives in the recent past, and the idea that machines might one day think like humans and how this would affect our society.
Open Source Insight: IoT, Medical Devices, Connected Cars All Vulnerable to ... - Black Duck by Synopsys
Key cybersecurity and open source insight this week: The Internet of Things (IoT), pacemakers, and driverless/semi-autonomous vehicles (aka connected cars).
Digital Transformation of the legal sector: #legaltech and more.
A case study of theJurists Europe (deJuristen/lesJuristes/theJurists) on digital transformation and Artificial Intelligence.
An introductory talk at an embedded ethics workshop for the Faculty of Informatics at TU Wien, illustrating the scope of ethical issues with technology using the example of ChatGPT.
My opening lecture at the »trust in robots« summer school at TU Wien, september 2019 - http://www.tuaustria.ac.at/fileadmin/shares/tuaustria/veranstaltungen/2019-05/Summerschool_Trust_Robots_Programme_final_V1.pdf
Keynote at the 18th E-Learning Day at FH JOANNEUM - peterpur
For almost ten years, introductory courses at TU Wien have used double-blind peer reviewing with 1,000 students. In this keynote, Peter Purgathofer (TU Wien) discusses the particular challenges of this setting and presents the approach, and the solutions, used to address the many problems such a setup entails.
Details on the articles mentioned in the talk can be found at https://www.researchgate.net/profile/Peter_Purgathofer.
Talk given at the presentation of the »users in focus« funding programme of the Wirtschaftsagentur [http://lisavienna.at/de/events/wirtschaftsagentur-call-users-focus-2016]
Talk for the legal department of the University of Economics Vienna at http://extrajournal.net/2014/01/31/infolaw-3d-drucker-aus-sicht-des-immaterialgueterrechts-am-11-februar-2014-in-der-wu-wien/
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... - University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024.
3. thesis: whenever machines become more efficient at doing something than people, people will be replaced.
efficiency
privacy
stupidity
security
vulnerability
robot ethics
malice
4. »Two hundred years ago, 70% of American workers lived on the farm. Today automation has eliminated all but 1% of their jobs, replacing them (and their work animals) with machines.«
Will a robot take your job? [New Yorker]
5. among others:
› soldier
› journalist
› farmer
› pharmacist
10 jobs robots already do better than you [MarketWatch]
6. Robots replacing factory workers at faster pace [LA Times]
Cheaper, better robots will replace human workers in the world's factories at a faster pace over the next decade, pushing labor costs down 16%, a report Tuesday said. […] Robots will cut labor costs by 33% in South Korea, 25% in Japan, 24% in Canada and 22% in the United States and Taiwan. The cost of owning and operating a robotic spot welder, for instance, has tumbled from $182,000 in 2005 to $133,000 last year, and will drop to $103,000 by 2025.
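The spot-welder figures quoted above imply steady double-digit cost declines; a quick back-of-the-envelope check, as a sketch that reads the article's "$133,000 last year" as 2014:

```python
# Cost of owning and operating a robotic spot welder, per the figures
# quoted above (the article's "last year" is taken to be 2014).
costs = {2005: 182_000, 2014: 133_000, 2025: 103_000}

def decline_pct(earlier, later):
    """Percentage decline in cost between two of the quoted years."""
    return round((costs[earlier] - costs[later]) / costs[earlier] * 100, 1)

print(decline_pct(2005, 2014))  # 26.9 - roughly a quarter off in nine years
print(decline_pct(2005, 2025))  # 43.4 - projected drop over two decades
```

So the projection amounts to the machine costing a bit more than half its 2005 price by 2025, which is what drives the labor-cost reductions the report forecasts.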
7. thesis: whenever machines become more efficient at doing something than people, people will be replaced.
observation: automation is a self-enhancing process
8. conclusion: people will be replaced almost everywhere.
9. Man vs. Machine: Are Any Jobs Safe from Innovation? [Spiegel]
»›There are approximately 4 million truck, taxi, limousine and bus drivers in the United States, not to mention gas station attendants and traffic policemen,‹ writes Posner, the University of Chicago scholar, in his essay on automation and employment. ›Not all these jobs will be eliminated overnight,‹ he says, ›but they could go quite fast.‹«
10.
»›I don't think we have to worry about autonomous cars, because that's sort of like a narrow form of AI. It would be like an elevator. They used to have elevator operators, and then we developed some simple circuitry to have elevators just automatically come to the floor that you're at ... the car is going to be just like that.‹
So what happens when we get there? Musk said that the obvious move is to outlaw driving cars. ›It's too dangerous. You can't have a person driving a two-ton death machine.‹«
Elon Musk: cars you can drive will eventually be outlawed [Verge]
13.
»Surveillance as a Business Model« [Bruce Schneier]
»It shouldn't come as a surprise that big technology companies are tracking us on the Internet even more aggressively than before. If [that doesn't] sound particularly beneficial to you, it's because you're not the customer of any of these companies. You're the product, and you're being improved for their actual customers: their advertisers. Surveillance is the business model of the Internet.«
15. »Robots and Privacy« [Ryan Calo]
An extensive literature in communications and psychology demonstrates that humans are hardwired to react to social machines as though a person were really present. […] People cooperate with sufficiently human-like machines, are polite to them, decline to sustain eye-contact, decline to mistreat or roughhouse with them, and respond positively to their flattery. There is even a neurological correlation to the reaction; the same ›mirror‹ neurons fire in the presence of real and virtual social agents.
17. malware in personal computers
Report: 48% of 22 million scanned computers infected with malware [ZDNet]
32% of computers around the world are infected with viruses and malware [dotTech]
Malware infects 30% of computers in the U.S. [InfoWorld]
20.
»In the world of malware threats, only a few rare examples can truly be considered groundbreaking and almost peerless. What we have seen in Regin is just such a class of malware. […] it is one of the main cyberespionage tools used by a nation state […] many components of Regin remain undiscovered and additional functionality and versions may exist.« [symantec.com]
23. Three Laws of Robotics [Isaac Asimov]
1 A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2 A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
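Read as a specification, the laws form a strict priority ordering: each rule yields to the ones above it. A toy sketch of that precedence logic (all names are illustrative, not any real robotics API):

```python
# Toy model of the laws as priority-ordered rules: a lower law applies
# only when no higher law has already decided the question.

def permitted(action, harms_human, ordered_by_human, threatens_self):
    # First Law: an action that harms a human is always forbidden.
    if harms_human(action):
        return False
    # Second Law: obey human orders; the "conflict with the First Law"
    # clause is already satisfied because harmful actions were rejected above.
    if ordered_by_human(action):
        return True
    # Third Law: otherwise the robot must protect its own existence.
    return not threatens_self(action)

# An ordered action that would harm a human is still forbidden:
print(permitted("push bystander",
                harms_human=lambda a: True,
                ordered_by_human=lambda a: True,
                threatens_self=lambda a: False))  # False
```

The point of the sketch is only that the ordering does real work: move the order-obeying check above the harm check and the system becomes a very different (and much more dangerous) machine.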
24.
Why it is not possible to regulate robots [Cory Doctorow]
A robot is basically a computer that causes some physical change in the world. We can and do regulate machines, from cars to drills to implanted defibrillators. But the thing that distinguishes a power-drill from a robot-drill is that the robot-drill has a driver: a computer that operates it. Regulating that computer in the way that we regulate other machines – by mandating the characteristics of their manufacture – will be no more effective at preventing undesirable robotic outcomes than the copyright mandates of the past 20 years have been effective at preventing copyright infringement (that is, not at all).
26. »The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI.«