We are at the dawn of a major shift in the evolution of technology. The next two decades will transform the way people live and work just as the computing revolution has transformed the human landscape over the past half century. The host of opportunities and challenges that come with this new era will require a new generation of technologies and a rewriting of the rules of computing.
“Artificial Intelligence, Cognitive Computing and Innovating in Practice” (diannepatricia)
Cristina Mele, Full Professor of Management at the University of Napoli “Federico II”, presentation as part of Cognitive Systems Institute Speaker Series
What are Cognitive Applications? What is exciting about them? They represent a whole new way of human computer interaction and acting on data insights. Introducing IBM Watson and how to develop Cognitive applications. AI, Machine Learning compared and contrasted.
Introduction to Cognitive Computing: the science behind and use of IBM Watson (Subhendu Dey)
The lecture was given in a Cognitive and Analytics workshop at the Indian Institute of Management. Topics covered were:
1) Understanding Natural Language Processing, Classification, Watson & its modules
2) Industry applications of Cognitive Computing
3) Understanding Cognitive Architecture
4) Understanding the disciplines / tools being used in Cognitive Science
Cognitive Computing by Professor Gordon Pipa (diannepatricia)
Professor Dr. Gordon Pipa of the University of Osnabrueck, Germany, gave this presentation for the Cognitive Systems Institute Speaker Series on May 26, 2016.
Deloitte's report and point of view on IBM's Watson. IBM Watson, AI, and Cognitive Computing are rapidly evolving technologies that can support and enhance enterprise solutions. Learn about IBM Watson: the why and the how.
Thank you for your interest in the recent NY Outthink breakfast on July 19th at the Rainbow Room. Presentations shared highlighted how cognitive computing is being applied today in a variety of business situations, in many industries, and across multiple business functions. See the presentation by Beth Smith for steps to becoming a cognitive business!
Knowledge Will Propel Machine Understanding of Big Data (Amit Sheth)
Preview video: https://youtu.be/4e0dtV7CTWM
CCKS Keynote, August 2017: http://www.ccks2017.com/?page_id=358
SEAS Summer School, July 2017
https://sites.google.com/view/seasschool2017/talks
Related paper: http://knoesis.org/node/2835
The CCKS conference had over 500 attendees; some photos: https://photos.app.goo.gl/5CdlfAX1uYwvgqsQ2
Semantic, Cognitive, and Perceptual Computing – three intertwined strands of ... (Amit Sheth)
Keynote at Web Intelligence 2017: http://webintelligence2017.com/program/keynotes/
Video: https://youtu.be/EIbhcqakgvA Paper: http://knoesis.org/node/2698
Abstract: While Bill Gates, Stephen Hawking, Elon Musk, Peter Thiel, and others engage in OpenAI discussions of whether or not AI, robots, and machines will replace humans, proponents of human-centric computing continue to extend work in which humans and machines partner in contextualized and personalized processing of multimodal data to derive actionable information.
In this talk, we discuss how maturing towards the emerging paradigms of semantic computing (SC), cognitive computing (CC), and perceptual computing (PC) provides a continuum through which to exploit the ever-increasing volume and diversity of data that could enhance people’s daily lives. SC and CC sift through raw data to personalize it according to context and individual users, creating abstractions that move the data closer to what humans can readily understand and apply in decision-making. PC, which interacts with the surrounding environment to collect data that is relevant and useful in understanding the outside world, is characterized by interpretative and exploratory activities that are supported by the use of prior/background knowledge. Using the examples of personalized digital health and a smart city, we will demonstrate how the trio of these computing paradigms forms complementary capabilities that will enable the development of the next generation of intelligent systems. For background: http://bit.ly/PCSComputing
IBM Watson Question-Answering System and Cognitive Computing (Rakuten Group, Inc.)
IBM's vision of cognitive computing has been steadily embraced across industries since IBM's Watson question-answering system made a sensational debut on the US Jeopardy! television quiz show in 2011. As a core member of the Watson project, I would like to share the excitement of the project and the last five and a half years of its progress into the cognitive business. In this talk, I will also give a technical overview of Watson, major use cases, and perspectives on the future of cognitive computing.
https://tech.rakuten.co.jp/
IBM Watson Ecosystem roadshow - Chicago 4-2-14 (cheribergeron)
IBM Watson is powering a new generation of cognitive applications. Learn how IBM is partnering with visionaries and entrepreneurs to bring innovative cognitive applications to market through the IBM Watson Ecosystem.
Innovation and economic growth depend on a company's ability to gain insight into data. Data is growing exponentially, but our ability to make use of it is not. Untapped economic value resides in this unutilized data, called "dark data." This presentation looks at some of the causes for the explosion of data, some of the impediments to exploring and creating business value from dark data, and some ideas for ways around those impediments.
Smart Data for you and me: Personalized and Actionable Physical Cyber Social ... (Amit Sheth)
Featured Keynote at Worldcomp'14, July 2014: http://www.world-academy-of-science.org/worldcomp14/ws/keynotes/keynote_sheth
Video of the talk at: http://youtu.be/2991W7OBLqU
Big Data has captured a lot of interest in industry, with the emphasis on the challenges of the four Vs of Big Data: Volume, Variety, Velocity, and Veracity, and their applications to drive value for businesses. Recently, there has been rapid growth in situations where a big data challenge relates to making individually relevant decisions. A key example is human health, fitness, and well-being. Consider, for instance, understanding the reasons for and avoiding an asthma attack based on Big Data in the form of personal health signals (e.g., physiological data measured by devices/sensors or Internet of Things around humans, on humans, and inside/within humans), public health signals (information coming from the healthcare system, such as hospital admissions), and population health signals (such as Tweets by people related to asthma occurrences and allergens, Web services providing pollen and smog information, etc.). However, no individual has the ability to process all these data without the help of appropriate technology, and each human has a different set of relevant data!
In this talk, I will put forward the concept of Smart Data that is realized by extracting value from Big Data, to benefit not just large companies but each individual. If I am an asthma patient, for all the data relevant to me with the four V-challenges, what I care about is simply, “How is my current health, and what is the risk of having an asthma attack in my personal situation, especially if that risk has changed?” As I will show, Smart Data that gives such personalized and actionable information will need to utilize metadata, use domain-specific knowledge, employ semantics and intelligent processing, and go beyond traditional reliance on ML and NLP.
For harnessing volume, I will discuss the concept of Semantic Perception, that is, how to convert massive amounts of data into information, meaning, and insight useful for human decision-making. For dealing with Variety, I will discuss experience in using agreement represented in the form of ontologies, domain models, or vocabularies, to support semantic interoperability and integration. For Velocity, I will discuss somewhat more recent work on Continuous Semantics, which seeks to use dynamically created models of new objects, concepts, and relationships, using them to better understand new cues in the data that capture rapidly evolving events and situations.
Smart Data applications in development at Kno.e.sis come from the domains of personalized health, energy, disaster response, and smart city. I will present examples from a couple of these.
Presented at the panel on Sensor, Data, Analytics and Integration in Advanced Manufacturing, at the Connected Manufacturing track of the Bosch-USA-organized "Leveraging Public-Private Partnerships for Regional Growth Summit". Panel statement: Sensors, data, and analytics are at the core of any smart manufacturing system. What are the main challenges in creating actionable outputs, replicating systems, and scaling efficiency gains across industries?
Moderator: Thomas Stiedl, Bosch
Panelists:
1. Amit Sheth, Wright State University
2. Howie Choset, Carnegie Mellon University
3. Nagi Gebraeel, Georgia Institute of Technology
4. Brian Anthony, Massachusetts Institute of Technology
5. Yarom Polosky, Oak Ridge National Laboratory
For an in-depth look:
Smart IoT: IoT as a human agent, human extension, and human complement
http://amitsheth.blogspot.com/2015/03/smart-iot-iot-as-human-agent-human.html
Semantic Gateway: http://knoesis.org/library/resource.php?id=2154
SSN Ontology: http://knoesis.org/library/resource.php?id=1659
Applications of Multimodal Physical (IoT), Cyber and Social Data for Reliable and Actionable Insights: http://knoesis.org/library/resource.php?id=2018
Smart Data: Transforming Big Data into Smart Data...: http://wiki.knoesis.org/index.php/Smart_Data
Historic use of the term Smart Data (2004): http://www.scribd.com/doc/186588820
How is Watson Changing the Future of the Automotive Industry? (IBM Watson)
“How is Watson Changing the Future of the Automotive Industry?” presented in Livonia, MI. Event participants were introduced to the age of cognitive computing, where cognitive analytics evaluate complex data in new ways to help solve the industry's most challenging problems. Cognitive computing has arrived, and its potential to transform the industry is momentous. Learn how cognitive solutions are being applied in the automotive industry and how industry leaders are embracing this groundbreaking technology to spark the digital future.
Smart Data - How you and I will exploit Big Data for personalized digital hea... (Amit Sheth)
Amit Sheth's keynote at IEEE BigData 2014, Oct 29, 2014.
Abstract from:
http://cci.drexel.edu/bigdata/bigdata2014/keynotespeech.htm
Big Data has captured a lot of interest in industry, with the emphasis on the challenges of the four Vs of Big Data: Volume, Variety, Velocity, and Veracity, and their applications to drive value for businesses. Recently, there has been rapid growth in situations where a big data challenge relates to making individually relevant decisions. A key example is personalized digital health, which relates to making better decisions about our health, fitness, and well-being. Consider, for instance, understanding the reasons for and avoiding an asthma attack based on Big Data in the form of personal health signals (e.g., physiological data measured by devices/sensors or Internet of Things around humans, on humans, and inside/within humans), public health signals (e.g., information coming from the healthcare system, such as hospital admissions), and population health signals (such as Tweets by people related to asthma occurrences and allergens, Web services providing pollen and smog information). However, no individual has the ability to process all these data without the help of appropriate technology, and each human has a different set of relevant data!
In this talk, I will describe Smart Data that is realized by extracting value from Big Data, to benefit not just large companies but each individual. If my child is an asthma patient, for all the data relevant to my child with the four V-challenges, what I care about is simply, “How is her current health, and what is the risk of her having an asthma attack in her current situation (now and today), especially if that risk has changed?” As I will show, Smart Data that gives such personalized and actionable information will need to utilize metadata, use domain-specific knowledge, employ semantics and intelligent processing, and go beyond traditional reliance on ML and NLP. I will motivate the need for a synergistic combination of techniques, similar to the close interworking of the top brain and the bottom brain in cognitive models.
For harnessing volume, I will discuss the concept of Semantic Perception, that is, how to convert massive amounts of data into information, meaning, and insight useful for human decision-making. For dealing with Variety, I will discuss experience in using agreement represented in the form of ontologies, domain models, or vocabularies, to support semantic interoperability and integration. For Velocity, I will discuss somewhat more recent work on Continuous Semantics, which seeks to use dynamically created models of new objects, concepts, and relationships, using them to better understand new cues in the data that capture rapidly evolving events and situations.
Smart Data applications in development at Kno.e.sis come from the domains of personalized health, energy, disaster response, and smart city.
Computing, cognition and the future of knowing, by IBM (Virginia Fernandez)
How humans and machines are forging a new age of understanding.
-The history of computing and the rise of cognitive
-The world’s first cognitive system
-The technical path forward and the science of what’s possible
-Implications and obligations for the advance of cognitive science
-Paving the way for the next generation of human cognition
Learning to trust artificial intelligence systems: accountability, compliance ... (Diego Alberto Tamayo)
It’s not surprising that the public’s imagination has been ignited by Artificial Intelligence since the term was first coined in 1955. In the ensuing 60 years, we have been alternately captivated by its promise, wary of its potential for abuse and frustrated by its slow development.
I made this presentation in my 7th semester of B.Tech as part of the academic curriculum.
I took help from several YouTube videos and studied some IBM publications.
The Cognitive Era is at its dawn. It does not make machines intelligent in the traditional sense; instead, it allows them to develop cognisance and learn by themselves, as we humans do.
I am fascinated and look forward to contributing to this great idea.
You can get a nice introduction from this presentation, explain it to others, and even use it for your academic homework.
Good luck! GODSPEED!
“Rise of the Machines” Is Not a Likely Future (malbert5)
“Rise of the Machines” Is Not a Likely Future
Every new technology brings its own nightmare scenarios. Artificial intelligence (AI) and robotics are no exceptions. Indeed, the word “robot” was coined for a 1920 play that dramatized just such a doomsday for humanity.
Recently, an open letter about the future of AI, signed by a number of high-profile scientists and entrepreneurs, spurred a new round of harrowing headlines like “Top Scientists Have an Ominous Warning about Artificial Intelligence,” and “Artificial Intelligence Experts Sign Open Letter to Protect Mankind from Machines.” The implication is that the machines will one day displace humanity.
Let’s get one thing straight: a world in which humans are enslaved or destroyed by superintelligent machines of our own creation is purely science fiction. Like every other technology, AI has risks and benefits, but we cannot let fear dominate the conversation or guide AI research. Nevertheless, the idea of dramatically changing the AI research agenda to focus on AI “safety” is the primary message of a group calling itself the Future of Life Institute (FLI). FLI includes a handful of deep thinkers and public figures such as Elon Musk and Stephen Hawking and worries about the day in which humanity is steamrolled by powerful programs run amok.
As eloquently described in the book Superintelligence: Paths, Dangers, Strategies by FLI advisory board member and Oxford-based philosopher Nick Bostrom, the plot unfolds in three parts. In the first part—roughly where we are now—computational power and intelligent software develops at an increasing pace through the toil of scientists and engineers. Next, a breakthrough is made: programs are created that possess intelligence on par with humans. These programs, running on increasingly fast computers, improve themselves extremely rapidly, resulting in a runaway “intelligence explosion.” In the third and final act, a singular super-intelligence takes hold—outsmarting, outmaneuvering, and ultimately outcompeting the entirety of humanity and perhaps life itself. End scene.
Let’s take a closer look at this apocalyptic storyline. Of the three parts, the first is indeed happening now and Bostrom provides cogent and illuminating glimpses into current and near-future technology. The third part is a philosophical romp exploring the consequences of supersmart machines. It’s that second part—the intelligence explosion—that demonstrably violates what we know of computer science and natural intelligence.
Runaway Intelligence?
The notion of the intelligence explosion arises from Moore’s Law, the observation that the speed of computers has been increasing exponentially since the 1950s. Project this trend forward and we’ll see computers with the computational power of the entire human race within the next few decades. It’s a leap to go from this idea to unchecked growth of machine intelligence, however.
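The projection in the paragraph above can be sketched numerically. A toy model, assuming compute doubles roughly every two years (the starting figure and doubling period below are illustrative assumptions, not numbers from the article):

```python
# Toy Moore's-Law-style projection: exponential doubling of compute.
# Assumptions (hypothetical, for illustration): a starting machine at
# 1e15 ops/s and a doubling period of 2 years.

def project_compute(start_ops_per_sec: float, years: float,
                    doubling_period: float = 2.0) -> float:
    """Projected operations/second after `years` of exponential doubling."""
    return start_ops_per_sec * 2 ** (years / doubling_period)

if __name__ == "__main__":
    start = 1e15  # hypothetical starting point: 1 petaFLOP/s
    for yrs in (10, 20, 30):
        print(f"after {yrs:2d} years: {project_compute(start, yrs):.2e} ops/s")
```

The point of the sketch is only that naive extrapolation grows without bound; as the article argues, nothing in it says machine *intelligence* follows the same curve.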
First, ingenuity is not the sole bottleneck to developing faster com.
Level 1: Individual Ecology (smile790243)
Level 1: Individual Ecology
We will measure 3 characteristics of individuals in 3 locations along the Upper Winter Creek trail. We will measure DBH (Diameter at Breast Height), tree height, and leaf size. Each team will have to choose its own methods for each measurement and be sure to verify the precision, accuracy, and bias. A freeware ImageJ program developed by the NIH, described in a file attached to this module, can be used for leaf-area measurement, but you are welcome to try any app or other method you prefer.
Level 2: Population Ecology
We will document age structure using the DBH data and we will measure dispersion of the population. Once again each team will choose a method for each. 2 methods for calculating dispersion are described in file attached to this module.
Level 3: Community Ecology
We will measure species richness and species diversity using a species count and a calculation each of which, once again, will determined by each team.
The final product will be a scientific poster with all of your data and and explanation of the synthesis of all 3 levels of ecology we sampled. This will be communicated as a concept map with graphs of your data verifying the relationships among the components. This is the first step in making a predictive systems model, like a climate model.
Small tree height: 3.5814 m medium tree height:7.875m tall tree height : 18.02m
Small tree leaves length 3.81 cm mid tree leaves length: 5.08 width 2.54cm
mid tree perimeter80
Width 2.54 cm tall tree leaves length 10.16cm width 6.36 cm
Small Tree perimeter 50cm tall tree perimeter 290cm
Small Shurb Community
Butterfly
50
Black Bee
27
Yellow Bee
4
Lizard
5
Fly
25
Gnat
40
Beetle
4
snake
1
Medium Tree Community
Birds
5
Catepillar
3
Gnats
20
Flys
15
Mouse
1
Snake
1
Mosquito
3
Spider
1
Tall Tree Community
Woodpecker
2
Bluejay
3
Lizard
5
Beetle
3
Butterfly
34
Ladybug
300
Squirrel
4
Gecko
2
Waterbugs
27
Birds
7
We are at the dawn of a major shift in the evolution of technology.
The next two decades will transform the way people live and
work just as the computing revolution has transformed the human
landscape over the past half century. The host of opportunities
and challenges that come with this new era will require a new
generation of technologies and a rewriting of the rules of
computing.
The availability of huge amounts of data should help people
better understand complex situations. In reality, though, more data
often lead to more confusion. We make too many decisions with
irrelevant or incorrect information or with data that represent
only part of the picture. So we need a new generation of
tools—cognitive technologies—that help us penetrate complexity
and better understand the world around us so we can make better
decisions and live more successfully and sustainably. Yet some of
the techniques of computer science and engineering are reaching
their limits. The technology industry must change the way it
designs and uses computers and software if it is to continue to
make progress in how we work and live.
This perspective on the future of information technology is
the result of a large and continuing group effort at IBM. A couple
of years ago, a group of IBM Research scientists engaged in an
intriguing project. They looked decades into the future and
sketched out a picture of how computing will change. Their work
sparked discussion and debate among a wide range of IBMers.
We want to expose some of these early thoughts to others and
start a broader conversation, so we’re laying out our vision in a
short book, Smart Machines: IBM’s Watson and the Era of Cognitive
Computing, which will be published later in 2013. This first chapter
is a teaser. We want people to know what’s coming.
Laying the foundations for a new era of computing is a
monumental endeavor, and no company can take on this sort of
challenge alone. We look to leading corporate users of information
technology, university researchers, government policy makers,
industry partners, and tech entrepreneurs—indeed, the entire tech
industry—to take this journey with us. We also want to inspire
young people to pursue studies and careers in science and
technology. With this book, we hope to provoke new thinking that
will drive exploration and invention for the next fifty years.
1. A New Era of Computing
IBM’s Watson computer created a sensation when it bested two
past grand champions on the TV quiz show Jeopardy!. Tens of
millions of people suddenly understood how “smart” a computer
could be. This was no mere parlor trick; the scientists who
designed Watson built upon decades of research in the fields of
artificial intelligence and natural-language processing and
produced a series of breakthroughs. Their ingenuity made it
possible for a system to excel at a game that requires both
encyclopedic knowledge and lightning-quick recall. In preparation
for the match, the machine ingested more than one million pages
of information. On the TV show, first broadcast in February 2011,
the system was able to search that vast database in response
to questions, size up its confidence level, and, when sufficiently
confident, beat the humans to the buzzer. After more than five
years of intense research and development, a core team of about
twenty scientists had made a very public breakthrough. They
demonstrated that a computing system—using traditional
strengths and overcoming assumed limitations—could beat expert
humans in a complex question-and-answer competition using
natural language.
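The answer strategy described above—rank candidate answers by confidence and buzz in only when confidence clears a threshold—can be sketched in a few lines of Python. This is an illustrative toy, not IBM’s implementation; the candidate answers, scores, and threshold value are invented for the example.

```python
# Toy sketch of confidence-thresholded answering, in the spirit of
# Watson's Jeopardy! strategy. All names and scores are hypothetical.

def best_answer(candidates, buzz_threshold=0.5):
    """Pick the highest-confidence candidate and decide whether to 'buzz'.

    candidates: list of (answer, confidence) pairs, confidence in [0, 1].
    Returns (answer, confidence, should_buzz).
    """
    answer, confidence = max(candidates, key=lambda c: c[1])
    should_buzz = confidence >= buzz_threshold
    return answer, confidence, should_buzz

# Example: one candidate clearly dominates, so the system buzzes in.
candidates = [("Toronto", 0.14), ("Chicago", 0.83), ("Boston", 0.03)]
answer, confidence, should_buzz = best_answer(candidates)
```

The key design point is the threshold: answering with low confidence risks losing points, so staying silent can be the better move—exactly the “size up its confidence level” behavior the text describes.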
Now IBM scientists and software engineers are busy
improving the Watson technology so it can take on much bigger
and more useful tasks. The Jeopardy! challenge was relatively
limited in scope. It was bound by the rules of the game and the
fact that all the information Watson required could be expressed
in words on a page. In the future, Watson will take on open-ended
problems. It will be able to interpret images, numbers, voices, and
sensory information. It will participate in dialogue with human
beings aimed at navigating vast quantities of information to solve
extremely complicated yet common problems. The goal is to
transform the way humans get things done, from health care and
education to financial services and government.
One of the next challenges for Watson is to help doctors
diagnose diseases and assess the best treatments for individual
patients. IBM is working with physicians at the Cleveland Clinic
and Memorial Sloan-Kettering Cancer Center in New York to train
Watson for this new role. The idea is not to prove that Watson
could do the work of a doctor but to make Watson a useful aid to
a physician. The Jeopardy! challenge pitted man against machine;
with Watson and medicine, man and machine are taking on a
challenge together—and going beyond what either could do on
its own. It’s impossible for even the most accomplished doctors
to keep up with the explosion of new knowledge in their fields.
Watson can keep up to date, though, and provide doctors with
the information they need. Diseases can be freakishly complicated,
and they express themselves differently in each individual. Within
the human genome, there are billions of combinations of variables
that can figure in the course of a disease. So it’s no wonder that an
estimated 15 to 20 percent of medical diagnoses are inaccurate or
incomplete.1 Doctors know how to deal with general categories of
diseases and patients. What they need help with is diagnosing and
treating individuals.
Dr. Larry Norton, a world-renowned oncologist at Memorial
Sloan-Kettering Cancer Center who is helping to train Watson,
believes the computer will provide both encyclopedic medical and
patient information and the kind of insights that normally come
only from deeply experienced specialists. But in addition to
knowledge, he believes, Watson will offer wisdom. “This is more
than a machine,” Larry says. “Computer science is going to evolve
rapidly and medicine will evolve with it. This is coevolution. We’ll
help each other.”2
The Coming Era of Cognitive Computing
Watson’s potential to help with health care is just one of
the possibilities opening up for next-generation technologies.
Scientists at IBM and elsewhere are pushing the boundaries of
science and technology fields ranging from nanotechnology to
artificial intelligence with the goal of creating machines that do
much more than calculate and organize and find patterns in
data—they sense, learn, reason, and interact with people in new
ways. Watson’s exploits on TV were one of the first steps into a
new phase in the evolution of information technology—the era of
cognitive computing. Thomas Malone, director of the MIT Center
for Collective Intelligence, says the big question for researchers as
this era unfolds is: How can people and computers be connected
so that collectively they act more intelligently than any person,
1. Dr. Herb Chase, Columbia University School of Medicine, IBM IBV report, “The
Future of Connected Healthcare Devices,” March 2011.
2. Dr. Larry Norton, Memorial Sloan-Kettering Cancer Center, interview, June 12,
2012.
7. 6 Smart Machines
group, or computer has ever done before?3 This avenue of thought
stretches back to the computing pioneer J. C. R. Licklider, who
led the U.S. government project that evolved into the Internet.
In 1960 he authored a paper, “Man-Computer Symbiosis,” where
he predicted that “in not too many years, human brains and
computing machines will be coupled together very tightly and
the resulting partnership will think as no human brain has ever
thought and process data in a way not approached by the
information-handling machines we know today.”4 That time is fast
approaching.
The new era of computing is not just an opportunity for
society; it’s also a necessity. Only with the help of thinking
machines will we be able to deal adequately with the exploding
complexity of today’s world and successfully address interlocking
problems like disease and poverty and stress on our natural
systems. Computers today are brilliant idiots. They have
tremendous capacities for storing information and performing
numerical calculations—far superior to those of any human. Yet
when it comes to another class of skills, the capacities for
understanding, learning, adapting, and interacting, computers are
woefully inferior to humans; there are many situations where
computers can’t do a lot to help us.
Up until now, that hasn’t mattered much. Over the past sixty-plus years, computers have transformed the world by automating
defined tasks and processes that can be codified in software
programs in a series of procedural “if A, then B”
statements—expressing logic or mathematical equations. Faced
with more complex tasks or changes in tasks, software
programmers add to or modify the steps in the operations they
3. Thomas Malone, Massachusetts Institute of Technology, interview, May 3, 2013.
4. J. C. R. Licklider, “Man-Computer Symbiosis,” IRE Transactions on Human Factors in
Electronics 1 (March 1960): 4–11.
want the machine to perform. This model of computing—in which
every step and scenario is determined in advance by a
person—can’t keep up with the world’s evolving social and
business dynamics or deliver on its potential. The emergence of
social networking, sensor networks, and huge storehouses of
business, scientific, and government records creates an abundance
of information that technology leaders call “big data.” Think of it as
a parallel universe to the world of people, places, things, and their
interrelationships, but the digital universe is growing at about 60
percent each year.5
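The “if A, then B” model of programming described above can be sketched in a few lines. The routing scenario, rule set, and thresholds below are hypothetical illustrations, not any real system’s logic; the point is that a person enumerated every case in advance.

```python
# A traditional program: every scenario is anticipated in advance
# as explicit "if A, then B" rules (the rules here are made up).
def route_support_ticket(category: str, severity: int) -> str:
    """Hand-coded routing logic: a programmer foresaw every case."""
    if category == "billing":
        return "finance-desk"
    if category == "outage" and severity >= 8:
        return "on-call-engineer"
    if category == "outage":
        return "operations"
    # Any situation the programmer did not foresee falls through
    # to a catch-all -- the program cannot adapt on its own.
    return "general-queue"

print(route_support_ticket("outage", 9))   # on-call-engineer
print(route_support_ticket("feedback", 2)) # general-queue
```

When the task changes, a programmer must return and add or modify rules by hand, which is exactly the limitation the chapter describes.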
All of these data create the potential for people to understand
the environment around us with a depth and clarity that was
simply not possible before. Governments and businesses struggle
to understand complex situations, such as the inner workings of
a city or the behavior of global financial markets. In the cognitive
era, using the new tools of decision science, we will be able to
apply new kinds of computing power to huge volumes of data and
achieve deeper insight into how things really work. Armed with
those insights, we can develop strategies and design systems for
achieving the best outcomes—taking into account the effects of
the variable and the unknowable. Think of big data as a natural
resource waiting to be mined. And in order to tap this vast
resource, we need computers that “think” and interact more like
we do.
The human brain evolved over millions of years to become
a remarkable instrument of cognition. We are capable of sorting
through multitudes of sensory impressions in the blink of an eye.
For instance, faced with the chaotic scene of a busy intersection,
we’re able to instantly identify people, vehicles, buildings, streets,
and sidewalks and understand how they relate to one another. We
5. CenturyLink Business Inc., infographic, 2011, http://www.centurylink.com/
business/artifacts/pdf/resources/big-data-defining-the-digital-deluge.pdf.
can recognize and greet a friend we haven’t seen for ten years
even while sensing and prioritizing the need to avoid stepping in
front of a moving bus. Today’s computers can’t do that.
With the exception of robots, tomorrow’s computers won’t
need to navigate in the world the way humans do. But to help
us think better they will need the underlying humanlike
characteristics—learning, adapting, interacting, and some form of
understanding—that make human navigation possible. New
cognitive systems will extract insights from data sources that are
almost totally opaque today, such as population-wide health-care
records, or from new sources of information, such as sensors
monitoring pollution in delicate marine environments. Such
systems will still sometimes be programmed by people using “if
A, then B” logic, but programmers won’t have to anticipate every
procedure and every rule. Instead, computers will be equipped
with interpretive capabilities that will let them learn from the
data and adapt over time as they gain new knowledge or as the
demands on them change.
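The “learn from the data and adapt over time” idea can be given a minimal sketch. The anomaly-detection scenario and readings below are invented for illustration: instead of a fixed, hand-coded threshold, the system re-estimates its notion of “normal” from every observation it sees.

```python
# A minimal adaptive rule: the decision boundary is learned from
# the data stream rather than fixed in advance by a programmer.
class AdaptiveAnomalyDetector:
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def observe(self, value: float) -> None:
        """Update the running mean with each new reading."""
        self.count += 1
        self.mean += (value - self.mean) / self.count

    def is_anomalous(self, value: float) -> bool:
        """Flag readings far above what the system has learned is normal."""
        return self.count > 0 and value > 2 * self.mean

det = AdaptiveAnomalyDetector()
for reading in [10.0, 11.0, 9.0, 10.5]:   # ordinary sensor readings
    det.observe(reading)
print(det.is_anomalous(30.0))  # True
print(det.is_anomalous(10.2))  # False
```

As the demands on the system change, its threshold shifts with the data it ingests, with no new rules written by hand.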
But the goal is not to replicate human brains or replace
human thinking with machine thinking. Rather, in the era of
cognitive systems, humans and machines will collaborate to
produce better results, each bringing its own skills to the
partnership. The machines will be more rational and analytic—and,
of course, possess encyclopedic memories and tremendous
computational abilities. People will provide judgment, intuition,
empathy, a moral compass, and human creativity.
To understand what’s different about this new era, it helps to
compare it to the two previous eras in the evolution of information
technology. The tabulating era began in the nineteenth century
and continued into the 1940s. Mechanical tabulating machines
automated the process of recording numbers and making
calculations. They were essentially elaborate mechanical abacuses.
People used them to organize data and make calculations that
were helpful in everything from conducting a national population
census to tracking the performance of a company’s sales force. The
programmable computing era—today’s technologies—emerged in
the 1940s. Programmable machines are still based on a design laid
out by Hungarian American mathematician John von Neumann.
Electronic devices governed by software programs perform
calculations, execute logical sequences of steps, and store
information using millions of zeros and ones. Scientists built the
first such computers for use in decrypting encoded messages in
wartime. Successive generations of computing technology have
enabled everything from space exploration to global
manufacturing-supply chains to the Internet.
Tomorrow’s cognitive systems will be fundamentally
different from the machines that preceded them. While traditional
computers must be programmed by humans to perform specific
tasks, cognitive systems will learn from their interactions with
data and humans and be able to, in a sense, program themselves
to perform new tasks. Traditional computers are designed to
calculate rapidly; cognitive systems will be designed to draw
inferences from data and pursue the objectives they were given.
Traditional computers have only rudimentary sensing capabilities,
such as license-plate-reading systems on toll roads. Cognitive
systems will be able to sense more like humans do. They’ll
augment our hearing, sight, taste, smell, and touch. In the
programmable-computing era, people have to adapt to the way
computers work. In the cognitive era, computers will adapt to
people. They’ll interact with us in ways that are natural to us.
Von Neumann’s architecture has persisted for such a long
time because it provides a powerful means of performing many
computing tasks. His scheme called for the processing of data via
calculations and the application of logic in a central processing
unit. Today, the CPU is a microprocessor, a stamp-sized sliver of
silicon and metal that’s the brains of everything from smartphones
and laptops to the largest mainframe computers. Other major
components of the von Neumann design are the memory, where
data are stored in the computer while waiting to be processed, and
the technologies that bring data into the system or push it out.
These components are connected to the central processing unit
via a “bus”—essentially a highway for data. Most of the software
programs written for today’s computers are based on this
architecture.
But the design has a flaw that makes it inefficient: the von
Neumann bottleneck. Each element of the process requires
multiple steps where data and instructions are moved back and
forth between memory and the CPU. That requires a tremendous
amount of data movement and processing. It also means that
discrete processing tasks have to be completed linearly, one at a
time. For decades, computer scientists have been able to rapidly
increase the capabilities of central processing units by making
them smaller and faster. But we’re reaching the limits of our ability
to make those gains at a time when we need even more computing
power to deal with complexity and big data. And that’s putting
unbearable demands on today’s computing technologies—mainly
because today’s computers require so much energy to perform
their work.
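One way to picture the bottleneck is to count bus traffic. The toy model below is an illustrative sketch, not a hardware simulation: even a trivial element-wise operation forces every operand and every result across the bus between memory and the CPU, one transfer at a time, serially.

```python
# Toy model of the von Neumann bottleneck: every value must travel
# over the bus between memory and the CPU, one transfer at a time.
bus_transfers = 0

def load(memory, addr):
    global bus_transfers
    bus_transfers += 1          # memory -> CPU
    return memory[addr]

def store(memory, addr, value):
    global bus_transfers
    bus_transfers += 1          # CPU -> memory
    memory[addr] = value

memory = [3, 1, 4, 1, 5, 0, 0, 0, 0, 0]
# Double the first five numbers, one at a time, serially:
for i in range(5):
    a = load(memory, i)
    store(memory, i + 5, a + a)

print(bus_transfers)  # 10 transfers for just 5 additions
```

Scaled up to billions of operations, this constant shuttling is where much of a conventional computer’s time and energy goes.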
What’s needed is a new architecture for computing, one that
takes more inspiration from the human brain. Data processing
should be distributed throughout the computing system rather
than concentrated in a CPU. The processing and the memory
should be closely integrated to reduce the shuttling of data and
instructions back and forth. And discrete processing tasks should
be executed simultaneously rather than serially. A cognitive
computer employing these principles will respond to inquiries more
quickly than today’s computers; less data movement will be
required and less energy will be used. Today’s von Neumann–style
computing won’t go away when cognitive systems come online.
New chip and computing technologies will extend its life far into
the future. In many cases, the cognitive architecture and the von
Neumann architecture will be employed side by side in hybrid
systems. Traditional computing will become ever more capable
while cognitive technologies will do things that were not possible
before.
Today, a handful of technologies are getting a tremendous
amount of buzz, including the cloud, social networking, mobile,
and new ways to interact with computing from tablets to glasses.
These new technologies will fuel the requirement and desire for
cognitive systems that will, for example, both harvest insights
from social networks and enhance our experiences within them.
“This will affect everything. It will be like the discovery of DNA,”
predicts Ralph Gomory, a pioneer of applied mathematics who was
director of IBM Research in the 1970s and 1980s and later head of
the Alfred P. Sloan Foundation.6
How Cognitive Systems Will Help Us Think
As smart as human beings are, there are many things that we
can’t do, or could do better. In many cases, cognitive systems
will help us overcome our limitations.
Complexity
We have difficulty processing large amounts of information
that come at us rapidly. We also have problems understanding
the interactions among elements of large systems, such as all of
6. Ralph Gomory, former IBM Research director, interview, March 19, 2012.
the moving parts in a city or the global economy. With cognitive
computing, we will be able to harvest insights from huge
quantities of data to understand complex situations, make accurate
predictions about the future, and anticipate the unintended
consequences of actions.
City mayors, for instance, will be able to understand the
interrelationships among the subsystems within their
cities—everything from electrical grids to weather to subways
to demographic trends to emergent cultural shifts expressed in
text, video, music, and visual arts. One example is monitoring
social media during a major storm to spot patterns of words and
images that indicate critical problems in particular neighborhoods.
Much of this information will come from sensors—video cameras,
instruments that detect motion or consumption, and devices that
spot anomalies. Mobile phones will also be used as sensors that
help us understand the movements of people. But mayors will
also be able to measure the financial, material, and knowledge
resources they put into a system and the results they get from
those investments. And they’ll be able to accurately predict the
effects of policies and actions they’re considering.
Expertise
With the help of cognitive systems, we will be able to see the
big picture and make better decisions. This is especially important
when we’re trying to address problems that cut across intellectual
and industrial domains. For instance, police will be able to gather
crime statistics and combine them with information about
demographics, events, blueprints, economic activity, and weather
to produce better analysis and safer cities. Armed with abundant
data, police chiefs will be able to set strategies and deploy
resources more effectively—even predicting where and when
crimes are likely to happen. Patrol officers will gain a wealth
of information about locations they’re approaching. Situational
intelligence will be extremely useful when they’re about to knock
on the door of an apartment. The ability to achieve such
comprehensive understanding of situations at every level will be
an essential tool and will become one of the most important
factors in the economic growth and competitiveness of cities.
Objectivity
We all possess biases based on our personal experiences,
egos, and intuition about what works and what doesn’t, as well
as the influence of group dynamics. Cognitive systems can make
it possible for us to be more objective in our decision making.
Corporations may evolve into “conscious organizations” made up
of humans and systems in collaboration. Sophisticated analytic
engines will understand how an organization works, the dynamics
of its competitive environment, and the capabilities within the
organization and ecosystem of partners. Computers might take
notes at meetings, convert data to graphic images, spot hard-to-see connections, and help guide individuals in achieving business
goals.
Imagination
Because of our prejudices, we have difficulty envisioning
things that are dramatically different than what we’re familiar
with. Cognitive systems will help us discover and explore new
and contrarian ideas. A chemical or pharmaceutical company’s
research-and-development team might use a cognitive system to
explore combinations of molecules or even individual atoms that
have not been contemplated before. Programs run on high-
performance computers will simulate the effects of the
combinations, spotting potentially valuable new materials or
drugs and also anticipating negative side effects. With the aid
of cognitive machines, researchers and engineers will be able to
explore millions of combinations in ways that are economical with
both time and money.
Senses
We can only take in and make sense of so much raw, physical
information. With cognitive systems, computer sensors teamed
with analytics software will vastly extend our ability to gather
and process such information. Imagine a world where individuals
carry their own personal Watson in the form of a handheld device.
These personal cognitive assistants will carry on conversations
with us that will make the speech technology in today’s
smartphones seem like baby talk. They will acquire knowledge
about us, in part, from observing what we see, say, touch, and
type on our electronic devices—so they can better anticipate our
wishes. In addition, the assistant will be able to use sophisticated
sensing to monitor a person’s health and threats to her well-being.
If there’s carbon monoxide or the influenza virus in a room, for
example, the device will alert its user. Over time, humans have
evolved to be more successful as a species. We continually adapt
to overcome our limitations. This partnership with computers is
simply the latest step in a long process of adaptation.
The uses for cognitive computing will be nearly limitless—a
veritable playground for the human imagination. Think of any
activity that involves a lot of complexity, many variables, a great
deal of uncertainty, and incomplete information and that requires
a high level of expertise and rapid decision making. That activity
is going to be a fat target for cognitive technologies. Just as the
personal computer, the Internet, mobile communications, and
social networking have given rise to tens of thousands of software
applications, Web services, and smartphone apps, the cognitive
era will produce a similar explosion of creativity. Think of the
coming technologies as cognitive apps. For enterprises, you can
easily envision apps for handling mergers and acquisitions, crisis
management, competitive analysis, and product design. Picture a
team within a company that’s in charge of sizing up acquisition
candidates using a cognitive M&A app. In order to augment its
understanding of potential targets, many of which are private
companies, the team will set up a special social network of
employees, customers, and business partners who have had direct
experiences with other companies in the same industry. The links
and the nature of the interactions will all be mapped out. A
cognitive system will find information stored there, gathering
insights about companies and suggesting acquisition targets. The
M&A team will also track the performance of previously acquired
companies, finding what worked and what didn’t. Those insights,
constantly updated in the learning system, will help the team
identify risks and synergies, helping it decide which acquisitions
to pursue. Moreover, in the everyday lives of individuals, cognitive
apps will help in selecting a college, making investment decisions,
choosing among insurance options, and purchasing a car or home.
Technology Breakthroughs: Necessities and Opportunities
Much of the progress in science and technology comes in
small increments. Scientists and engineers build on top of the
innovations that came before. Consider the tablet computer. The
first such devices appeared on the scene back in the 1980s. They
had large, touch-sensitive screens but weighed nearly five pounds
and were an inch and a half thick. They were more like bricks
than books, and about all you could do with them was scrawl
brief memos and fill out forms. After thirty years of gradual
improvements, we have slim, light, powerful tablets that combine
the features of a telephone, a personal computer, a television, and
more.
There’s nothing wrong with incremental innovation. It’s
absolutely necessary, and, sometimes, its results are both
delightful and transformational. A prime example is the iPhone.
With its superior navigation and abundance of easy-to-use
applications, this breakthrough product spawned a burst of
smartphone innovation, which combined with the social-networking phenomenon to produce a revolutionary shift in
global human behavior. Yet, technologically, the iPhone was built on
top of many smartphone advances that preceded it. In fact, IBM
introduced the first data-accessing phone, called Simon, in 1994,
long before the term “smartphone” had been coined. New waves of
progress, however, require truly disruptive innovations—things
like the transistor, the microchip, and the first programmable
computers. These are the advances that fundamentally change our
world.
Today, many of the core technologies that provide the basic
functions for traditional computers are mature; they have been in
use for decades. In some cases, each wave of improvements is less
profound than the wave that preceded it. We’re reaching the point
of diminishing returns. Yet, at the same time, the demands on
computing technology are growing exponentially. One example
of a maturing technology is microchips. These slivers of silicon
densely packed with tiny transistors replaced the vacuum tube.
Early on, they brought consumers digital watches and pocket-sized radios. Today, a handful of chips provide all the data
processing required in a tablet or a data-center computer that
serves up tens of thousands of Facebook pages per day. Yet the
basic concept of a microchip is essentially the same as it was
forty years ago. They’re made of tiny transistors—switches that
turn on and off to create the zeros and ones required to perform
calculations and process information. The more transistors on a
chip, the more data can be processed and stored there. The
problem is that with each new generation of chips, it becomes
more difficult to shrink the wires and transistors and pack more
of them onto chips. So what’s needed is a disruptive innovation
or, more likely, several of them that will change the game in the
microchip realm and launch another period of rapid innovation.
Soon incremental innovation will no longer be sufficient.
People who demand the most from computers are already running
into the limits of today’s circuitry. Michel McCoy, director of
the Simulation and Computing Program at the U.S. Lawrence
Livermore National Laboratory, is among those calling for a
nationwide initiative involving national laboratories and
businesses aimed at coming up with radical new approaches to
microprocessor and computer-system design and software
programming. “In a sense, everything we’ve done up until this
point has been easy,” he says. “Now we have reached a physics-dominated threshold in the design of microprocessors and
computing systems which, unless we do something about it, is
essentially going to stagnate progress.”7 We need more radical
innovations. In the years ahead, a number of fundamental
advances in science and technology will be required to make
progress. Think of those colorful Russian wooden dolls where
progressively smaller dolls nest inside slightly larger ones. We
need to achieve technology advances in layers.
The top layer is the way we interact with computers and get
them to do what we want. The big innovation at this outer layer
7. Michel McCoy, Lawrence Livermore National Laboratory, interview, June 14, 2012.
is “learning systems,” which we will explore deeply in chapter
2. The goal is to create machines that do not require as much
programming by humans. Instead they’ll be “taught” by people,
who will set objectives for them. As they learn, the machines will
work out the details of how to meet those goals.
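The idea of setting an objective and letting the machine work out the details can be sketched as a simple search that is handed only a goal function. The goal and the random-search strategy below are toy illustrations, not how any production learning system works.

```python
import random

# "Taught" with an objective rather than step-by-step instructions:
# the machine searches for a value that drives the goal function down.
def objective(x: float) -> float:
    return (x - 7.0) ** 2   # the teacher only says "make this small"

def learn(steps: int = 2000, seed: int = 0) -> float:
    rng = random.Random(seed)
    best = rng.uniform(-100, 100)        # start from an arbitrary guess
    for _ in range(steps):
        candidate = best + rng.uniform(-1, 1)
        if objective(candidate) < objective(best):
            best = candidate             # keep any change that improves the goal
    return best

x = learn()
print(round(x, 1))  # close to 7.0 -- the machine found "how" on its own
```

No one told the program where the answer lies; it was given an objective and worked out the details, which is the essence of the learning-systems layer described above.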
The next layer represents how we organize and interpret data,
which we’ll discuss in chapter 3. Today’s databases do an excellent
job of organizing information in columns and rows; tomorrow’s
are being designed to manage huge volumes of different kinds of
data, understand information in context, and crunch data in real
time.
Another major dimension, which we’ll go into in chapter 4,
is how to make use of data gathered through sensor technology.
Today, we use rudimentary sensor technologies to perform useful
tasks such as locating leaks in water systems. In the cognitive era,
sensors and pattern-recognition software will augment our senses,
making us hyper-aware of the world around us.
The next layer represents the design of systems—how we fit
together all the physical components that make up a computer.
The challenge here, which we address in chapter 5, is creating
data-centric computers. The designers of computing systems have
long treated logic and memory as separate elements. Now, they
will meld the components together, first, on circuit boards and,
later, on single microchips. Also, they’ll move the processing to the
data, rather than vice versa.
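Moving the processing to the data can be sketched in a map-reduce style: a small summarizing function is shipped to each data partition, and only compact results travel back, never the raw records. The partitions and the statistic below are purely illustrative.

```python
# Data-centric sketch: send a small function to each partition of
# the data and move only the tiny summaries, not the raw records.
partitions = [
    [2, 4, 6],        # imagine each list living on a different node
    [10, 20],
    [1, 3, 5, 7],
]

def local_summary(partition):
    """Runs 'next to' the data; returns a compact (sum, count) pair."""
    return sum(partition), len(partition)

# Only the summaries cross the network; the raw data stays put.
summaries = [local_summary(p) for p in partitions]
total = sum(s for s, _ in summaries)
count = sum(n for _, n in summaries)
print(total / count)  # global mean, computed without centralizing the data
```

The design choice is the one the chapter describes: shrinking data movement by colocating computation with storage instead of hauling everything to a central processor.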
Finally, in the innermost layer is nanotechnology, where we
manipulate matter at the molecular and atomic scale. In chapter
6, we’ll explore what it will take to invent a new physics of
computing. To overcome the limits of today’s microchip
technology, scientists must shift to new nanomaterials and new
approaches to switching from one digital state to another.
Possibilities include harnessing quantum mechanics or chips
driven by “synapses and neurons” for data processing.
A New Culture of Innovation
We’re still in the early stages of the emergence of this new era
of computing. Progress will require a willingness to make big bets,
take a long-term view, and engage in open collaboration. We’ll
explore the elements of the culture of innovation in each of the
subsequent chapters in what we call the journeys of discovery. An
absolutely critical aspect of the culture of innovation will be the
ambition and capabilities of the inventors themselves. For rapid
progress to be made in the new era of computing, young people
must be inspired to become scientists, and they must be educated
by teachers using superior tools and techniques. They have to
be rewarded and given opportunities to challenge everything we
think we know about how the world works. It requires dedication
and investment by all of society’s institutions, including families,
local communities, governments, universities, and businesses.
When we ask scientists at IBM Research what motivates
them, the answer is often that they want to change the world—not
in minuscule increments but in great leaps forward. Dr. Mark
Ritter, a senior manager in IBM Research’s Physical Sciences
Department, leads an effort to rethink the entire architecture of
computing for the era of cognitive systems inspired by the human
brain. As a child, Mark, whose father was a plumber, had an
intense curiosity about how things work on a fundamental level.
It was his good fortune that his grandparents, who lived near his
family in Grinnell, Iowa, had two neighbors who were physics
professors at Grinnell College. One of the physicists, whom Mark
pestered with science questions while the neighbor repaired his
VW in the driveway, lent Mark a book on particle physics when
he was about twelve years old. As a teenager, Mark bicycled over
to the campus to attend physics lectures. He built a simple gas
laser in the basement of his home. It was the beginning of decades
of inquiry into how things work and how they can work better.
A few years ago, after more than twenty years in IBM Research,
Mark and his colleagues recognized that the computing model
designed for mid-twentieth-century demands was running out of
gas. So they set about inventing a new one. “This is the most
exciting time in my career,” Mark says. “The old ways of doing
things aren’t going to solve efficiently the big, real-world problems
we face.”
For his part, Dr. Larry Norton of Memorial Sloan-Kettering
Cancer Center is driven to transform the way medicine is
practiced. Ever since he can remember, he was motivated by the
desire to do something with his life that would improve the world.
Born in 1947, he grew up at a time when people saw science as
a powerful means of solving humanity’s problems. He recalls a
thought-crystallizing experience when he was an undergraduate
at the University of Rochester. He lived in a dorm where students
often gathered in the mornings for freewheeling discussions of
politics, values, and ethics. He was already contemplating a career
in medicine, and the topic that day was, if you were a doctor and
had done everything medical science could offer to save a patient
but she died anyway, how would you feel? The students were
split. “I realized I would feel terrible about it,” he says. “Offering
everything available isn’t enough. I should have done better. And
since, because of limitations in the world’s knowledge, I couldn’t
do better, I should be involved in moving things forward.”
During his forty-year career, Larry has been an innovator
in cancer treatment. Among his contributions is the central role
he played in developing the Norton-Simon hypothesis, a once-revolutionary but now widely used approach to chemotherapy.