IBM is at the dawn of a new era of cognitive computing that will transform how people live and work, just as previous computing revolutions have. Technologies like IBM's Watson computer demonstrate that machines can now understand natural language, learn from experience, and provide insights by accessing huge amounts of data. IBM aims to develop cognitive systems that can help humans better understand complex problems and make better decisions across many fields like healthcare, education, and government. This new generation of intelligent machines will require collaboration between technology companies and other organizations to fully realize the potential of cognitive computing.
We are at the dawn of a major shift in the evolution of technology. The next two decades will transform the way people live and work just as the computing revolution has transformed the human landscape over the past half century. The host of opportunities and challenges that come with this new era will require a new generation of technologies and a rewriting of the rules of computing.
The availability of huge amounts of data should help people better understand complex situations. In reality, though, more data often lead to more confusion. We make too many decisions with irrelevant or incorrect information or with data that represent only part of the picture. So we need a new generation of tools—cognitive technologies—that help us penetrate complexity and better understand the world around us so we can make better decisions and live more successfully and sustainably. Yet some of the techniques of computer science and engineering are reaching their limits. The technology industry must change the way it designs and uses computers and software if it is to continue to make progress in how we work and live.
This perspective on the future of information technology is the result of a large and continuing group effort at IBM. A couple of years ago, a group of IBM Research scientists engaged in an intriguing project. They looked decades into the future and sketched out a picture of how computing will change. Their work sparked discussion and debate among a wide range of IBMers. We want to expose some of these early thoughts to others and start a broader conversation, so we’re laying out our vision in a short book, Smart Machines: IBM’s Watson and the Era of Cognitive Computing, which will be published later in 2013. This first chapter is a teaser. We want people to know what’s coming.
Laying the foundations for a new era of computing is a monumental endeavor, and no company can take on this sort of challenge alone. We look to leading corporate users of information technology, university researchers, government policy makers, industry partners, and tech entrepreneurs—indeed, the entire tech industry—to take this journey with us. We also want to inspire young people to pursue studies and careers in science and technology. With this book, we hope to provoke new thinking that will drive exploration and invention for the next fifty years.
1. A New Era of Computing
IBM’s Watson computer created a sensation when it bested two past grand champions on the TV quiz show Jeopardy!. Tens of millions of people suddenly understood how “smart” a computer could be. This was no mere parlor trick; the scientists who designed Watson built upon decades of research in the fields of artificial intelligence and natural-language processing and produced a series of breakthroughs. Their ingenuity made it possible for a system to excel at a game that requires both encyclopedic knowledge and lightning-quick recall. In preparation for the match, the machine ingested more than one million pages of information. On the TV show, first broadcast in February 2011, the system was able to search that vast database in response to questions, size up its confidence level, and, when sufficiently confident, beat the humans to the buzzer. After more than five years of intense research and development, a core team of about twenty scientists had made a very public breakthrough. They demonstrated that a computing system—using traditional strengths and overcoming assumed limitations—could beat expert humans in a complex question-and-answer competition using natural language.
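To make that buzz-in mechanic concrete, here is a minimal sketch of confidence-gated question answering, written in Python. It is not IBM’s Watson pipeline; the tiny corpus, the word-overlap scoring, and the threshold value are all illustrative assumptions.

    def candidate_answers(question, corpus):
        # Hypothetical retrieval step: score each candidate answer by how many
        # question words appear in its supporting text.
        q_words = set(question.lower().split())
        ranked = []
        for answer, evidence in corpus.items():
            overlap = len(q_words & set(evidence.lower().split()))
            ranked.append((overlap / max(len(q_words), 1), answer))
        return sorted(ranked, reverse=True)

    def buzz_or_pass(question, corpus, threshold=0.6):
        # Answer only when the top candidate's confidence clears the threshold;
        # otherwise stay silent (return None), as the text says Watson did when unsure.
        confidence, answer = candidate_answers(question, corpus)[0]
        return answer if confidence >= threshold else None

    corpus = {
        "Jupiter": "Jupiter is the largest planet in the solar system",
        "Mars": "Mars is known as the red planet",
    }
    print(buzz_or_pass("Which planet is the largest in the solar system", corpus))  # -> Jupiter

The point of the sketch is the gating step: a low-confidence guess is withheld rather than risked, which is what separated Watson’s behavior from a system that always answers.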
Now IBM scientists and software engineers are busy improving the Watson technology so it can take on much bigger and more useful tasks. The Jeopardy! challenge was relatively limited in scope. It was bound by the rules of the game and the fact that all the information Watson required could be expressed in words on a page. In the future, Watson will take on open-ended problems. It will be able to interpret images, numbers, voices, and sensory information. It will participate in dialogue with human beings aimed at navigating vast quantities of information to solve extremely complicated yet common problems. The goal is to transform the way humans get things done, from health care and education to financial services and government.
One of the next challenges for Watson is to help doctors diagnose diseases and assess the best treatments for individual patients. IBM is working with physicians at the Cleveland Clinic and Memorial Sloan-Kettering Cancer Center in New York to train Watson for this new role. The idea is not to prove that Watson could do the work of a doctor but to make Watson a useful aid to a physician. The Jeopardy! challenge pitted man against machine; with Watson and medicine, man and machine are taking on a challenge together—and going beyond what either could do on its own. It’s impossible for even the most accomplished doctors to keep up with the explosion of new knowledge in their fields. Watson can keep up to date, though, and provide doctors with the information they need. Diseases can be freakishly complicated, and they express themselves differently in each individual. Within the human genome, there are billions of combinations of variables that can figure in the course of a disease. So it’s no wonder that an estimated 15 to 20 percent of medical diagnoses are inaccurate or incomplete.[1] Doctors know how to deal with general categories of diseases and patients. What they need help with is diagnosing and treating individuals.
Dr. Larry Norton, a world-renowned oncologist at Memorial Sloan-Kettering Cancer Center who is helping to train Watson, believes the computer will provide both encyclopedic medical and patient information and the kind of insights that normally come only from deeply experienced specialists. But in addition to knowledge, he believes, Watson will offer wisdom. “This is more than a machine,” Larry says. “Computer science is going to evolve rapidly and medicine will evolve with it. This is coevolution. We’ll help each other.”[2]

[1] Dr. Herb Chase, Columbia University School of Medicine, IBM IBV report, “The Future of Connected Healthcare Devices,” March 2011.
[2] Dr. Larry Norton, Memorial Sloan-Kettering Cancer Center, interview, June 12, 2012.
The Coming Era of Cognitive Computing
Watson’s potential to help with health care is just one of the possibilities opening up for next-generation technologies. Scientists at IBM and elsewhere are pushing the boundaries of science and technology fields ranging from nanotechnology to artificial intelligence with the goal of creating machines that do much more than calculate and organize and find patterns in data—they sense, learn, reason, and interact with people in new ways. Watson’s exploits on TV were one of the first steps into a new phase in the evolution of information technology—the era of cognitive computing. Thomas Malone, director of the MIT Center for Collective Intelligence, says the big question for researchers as this era unfolds is: How can people and computers be connected so that collectively they act more intelligently than any person, group, or computer has ever done before?[3] This avenue of thought stretches back to the computing pioneer J. C. R. Licklider, who led the U.S. government project that evolved into the Internet. In 1960 he authored a paper, “Man-Computer Symbiosis,” where he predicted that “in not too many years, human brains and computing machines will be coupled together very tightly and the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”[4] That time is fast approaching.

[3] Thomas Malone, Massachusetts Institute of Technology, interview, May 3, 2013.
[4] J. C. R. Licklider, “Man-Computer Symbiosis,” IRE Transactions on Human Factors in Electronics 1 (March 1960): 4–11.
The new era of computing is not just an opportunity for society; it’s also a necessity. Only with the help of thinking machines will we be able to deal adequately with the exploding complexity of today’s world and successfully address interlocking problems like disease and poverty and stress on our natural systems. Computers today are brilliant idiots. They have tremendous capacities for storing information and performing numerical calculations—far superior to those of any human. Yet when it comes to another class of skills, the capacities for understanding, learning, adapting, and interacting, computers are woefully inferior to humans; there are many situations where computers can’t do a lot to help us.
Up until now, that hasn’t mattered much. Over the past sixty-plus years, computers have transformed the world by automating defined tasks and processes that can be codified in software programs in series of procedural “if A, then B” statements—expressing logic or mathematical equations. Faced with more complex tasks or changes in tasks, software programmers add to or modify the steps in the operations they want the machine to perform. This model of computing—in which every step and scenario is determined in advance by a person—can’t keep up with the world’s evolving social and business dynamics or deliver on its potential. The emergence of social networking, sensor networks, and huge storehouses of business, scientific, and government records creates an abundance of information that technology leaders call “big data.” Think of it as a parallel universe to the world of people, places, things, and their interrelationships, but the digital universe is growing at about 60 percent each year.[5]

[5] CenturyLink Business Inc., infographic, 2011, http://www.centurylink.com/business/artifacts/pdf/resources/big-data-defining-the-digital-deluge.pdf.
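As a concrete illustration of that “if A, then B” model, here is a small hand-written rule set in Python. It is a hypothetical example, not any particular system’s code: every branch has to be anticipated in advance by a programmer, and anything unforeseen falls through to a default.

    def route_support_ticket(ticket):
        # Every case is spelled out ahead of time ("if A, then B").
        text = ticket.lower()
        if "refund" in text:
            return "billing"
        if "password" in text:
            return "account security"
        if "crash" in text or "error" in text:
            return "engineering"
        # The program cannot learn a better answer; unanticipated tickets land here.
        return "manual review"

    print(route_support_ticket("The app shows an error after the latest update"))  # -> engineering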
All of these data create the potential for people to understand the environment around us with a depth and clarity that was simply not possible before. Governments and businesses struggle to understand complex situations, such as the inner workings of a city or the behavior of global financial markets. In the cognitive era, using the new tools of decision science, we will be able to apply new kinds of computing power to huge volumes of data and achieve deeper insight into how things really work. Armed with those insights, we can develop strategies and design systems for achieving the best outcomes—taking into account the effects of the variable and the unknowable. Think of big data as a natural resource waiting to be mined. And in order to tap this vast resource, we need computers that “think” and interact more like we do.
The human brain evolved over millions of years to become a remarkable instrument of cognition. We are capable of sorting through multitudes of sensory impressions in the blink of an eye. For instance, faced with the chaotic scene of a busy intersection, we’re able to instantly identify people, vehicles, buildings, streets, and sidewalks and understand how they relate to one another. We can recognize and greet a friend we haven’t seen for ten years even while sensing and prioritizing the need to avoid stepping in front of a moving bus. Today’s computers can’t do that.
With the exception of robots, tomorrow’s computers won’t
need to navigate in the world the way humans do. But to help
us think better they will need the underlying humanlike
characteristics—learning, adapting, interacting, and some form of
understanding—that make human navigation possible. New
cognitive systems will extract insights from data sources that are
almost totally opaque today, such as population-wide health-care
records, or from new sources of information, such as sensors
monitoring pollution in delicate marine environments. Such
systems will still sometimes be programmed by people using “if
A, then B” logic, but programmers won’t have to anticipate every
procedure and every rule. Instead, computers will be equipped
with interpretive capabilities that will let them learn from the
data and adapt over time as they gain new knowledge or as the
demands on them change.
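To make the contrast concrete, here is a minimal sketch in Python, using invented loan data and a deliberately crude learning step: the first function encodes a hand-written “if A, then B” rule, while the second works a decision threshold out from past examples and can be re-run whenever new data arrive.

# Programmable era: every rule is written in advance by a person.
def approve_loan_rule(income, debt):
    # "if A, then B" logic, fixed at programming time
    if income > 50_000 and debt < 10_000:
        return "approve"
    return "deny"

# Cognitive-style sketch: the decision threshold is learned from past
# examples instead of being hand-coded, and can be relearned later.
def learn_income_threshold(examples):
    """examples: list of (income, approved_bool) pairs."""
    best_threshold, best_correct = 0, -1
    for t in sorted({income for income, _ in examples}):
        correct = sum((income > t) == approved for income, approved in examples)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

history = [(30_000, False), (45_000, False), (60_000, True), (80_000, True)]
print(learn_income_threshold(history))  # -> 45000, worked out from the data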
But the goal is not to replicate human brains or replace
human thinking with machine thinking. Rather, in the era of
cognitive systems, humans and machines will collaborate to
produce better results, each bringing its own skills to the
partnership. The machines will be more rational and analytic—and,
of course, possess encyclopedic memories and tremendous
computational abilities. People will provide judgment, intuition,
empathy, a moral compass, and human creativity.
To understand what’s different about this new era, it helps to
compare it to the two previous eras in the evolution of information
technology. The tabulating era began in the nineteenth century
and continued into the 1940s. Mechanical tabulating machines
automated the process of recording numbers and making
calculations. They were essentially elaborate mechanical abacuses.
People used them to organize data and make calculations that
were helpful in everything from conducting a national population
census to tracking the performance of a company’s sales force. The
programmable computing era—today’s technologies—emerged in
the 1940s. Programmable machines are still based on a design laid
out by Hungarian American mathematician John von Neumann.
Electronic devices governed by software programs perform
calculations, execute logical sequences of steps, and store
information using millions of zeros and ones. Scientists built the
first such computers for use in decrypting encoded messages in
wartime. Successive generations of computing technology have
enabled everything from space exploration to global
manufacturing-supply chains to the Internet.
Tomorrow’s cognitive systems will be fundamentally
different from the machines that preceded them. While traditional
computers must be programmed by humans to perform specific
tasks, cognitive systems will learn from their interactions with
data and humans and be able to, in a sense, program themselves
to perform new tasks. Traditional computers are designed to
calculate rapidly; cognitive systems will be designed to draw
inferences from data and pursue the objectives they were given.
Traditional computers have only rudimentary sensing capabilities,
such as license-plate-reading systems on toll roads. Cognitive
systems will be able to sense more like humans do. They’ll
augment our hearing, sight, taste, smell, and touch. In the
programmable-computing era, people have to adapt to the way
computers work. In the cognitive era, computers will adapt to
people. They’ll interact with us in ways that are natural to us.
Von Neumann’s architecture has persisted for such a long
time because it provides a powerful means of performing many
computing tasks. His scheme called for the processing of data via
calculations and the application of logic in a central processing
unit. Today, the CPU is a microprocessor, a stamp-sized sliver of
silicon and metal that’s the brains of everything from smartphones
and laptops to the largest mainframe computers. Other major
components of the von Neumann design are the memory, where
data are stored in the computer while waiting to be processed, and
the technologies that bring data into the system or push it out.
These components are connected to the central processing unit
via a “bus”—essentially a highway for data. Most of the software
programs written for today’s computers are based on this
architecture.
But the design has a flaw that makes it inefficient: the von
Neumann bottleneck. Each element of the process requires
multiple steps where data and instructions are moved back and
forth between memory and the CPU. That requires a tremendous
amount of data movement and processing. It also means that
discrete processing tasks have to be completed linearly, one at a
time. For decades, computer scientists have been able to rapidly
increase the capabilities of central processing units by making
them smaller and faster. But we’re reaching the limits of our ability
to make those gains at a time when we need even more computing
power to deal with complexity and big data. And that’s putting
unbearable demands on today’s computing technologies—mainly
because today’s computers require so much energy to perform
their work.
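The bottleneck is easier to see in a toy model. The short Python sketch below is not any real instruction set; it simply counts how many times instructions and data must cross the "bus" between memory and a simulated CPU just to add two small numbers, one step at a time.

# Toy von Neumann cycle: instructions and data share one memory and must
# travel over the bus to the CPU one at a time.
memory = {
    0: ("LOAD", 100),   # copy memory[100] into the accumulator
    1: ("ADD", 101),    # add memory[101] to the accumulator
    2: ("STORE", 102),  # write the accumulator back to memory[102]
    3: ("HALT", None),
    100: 2, 101: 3, 102: 0,
}
accumulator, pc, bus_transfers = 0, 0, 0

while True:
    opcode, addr = memory[pc]       # fetch: the instruction crosses the bus
    bus_transfers += 1
    pc += 1
    if opcode == "HALT":
        break
    if opcode == "LOAD":
        accumulator = memory[addr]  # the operand crosses the bus
    elif opcode == "ADD":
        accumulator += memory[addr]
    elif opcode == "STORE":
        memory[addr] = accumulator  # the result crosses the bus again
    bus_transfers += 1

print(memory[102], bus_transfers)   # -> 5 7: seven crossings to add 2 and 3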
What’s needed is a new architecture for computing, one that
takes more inspiration from the human brain. Data processing
should be distributed throughout the computing system rather
than concentrated in a CPU. The processing and the memory
should be closely integrated to reduce the shuttling of data and
instructions back and forth. And discrete processing tasks should
be executed simultaneously rather than serially. A cognitive
computer employing these principles will respond to inquiries more
quickly than today’s computers; less data movement will be
required and less energy will be used. Today’s von Neumann–style
computing won’t go away when cognitive systems come online.
New chip and computing technologies will extend its life far into
the future. In many cases, the cognitive architecture and the von
Neumann architecture will be employed side by side in hybrid
systems. Traditional computing will become ever more capable
while cognitive technologies will do things that were not possible
before.
Today, a handful of technologies are getting a tremendous
amount of buzz, including the cloud, social networking, mobile,
and new ways to interact with computing, from tablets to glasses.
These new technologies will fuel the requirement and desire for
cognitive systems that will, for example, both harvest insights
from social networks and enhance our experiences within them.
“This will affect everything. It will be like the discovery of DNA,”
predicts Ralph Gomory, a pioneer of applied mathematics who was
director of IBM Research in the 1970s and 1980s and later head of
the Alfred P. Sloan Foundation.6
How Cognitive Systems Will Help Us Think
As smart as human beings are, there are many things that we
can’t do or that we could do better. In many cases, cognitive
systems will help us overcome these limitations.
Complexity
We have difficulty processing large amounts of information
that come at us rapidly. We also have problems understanding
the interactions among elements of large systems, such as all of
the moving parts in a city or the global economy.
6. Ralph Gomory, former IBM Research director, interview, March 19, 2012.
With cognitive
computing, we will be able to harvest insights from huge
quantities of data to understand complex situations, make accurate
predictions about the future, and anticipate the unintended
consequences of actions.
City mayors, for instance, will be able to understand the
interrelationships among the subsystems within their
cities—everything from electrical grids to weather to subways
to demographic trends to emergent cultural shifts expressed in
text, video, music, and visual arts. One example is monitoring
social media during a major storm to spot patterns of words and
images that indicate critical problems in particular neighborhoods.
Much of this information will come from sensors—video cameras,
instruments that detect motion or consumption, and devices that
spot anomalies. Mobile phones will also be used as sensors that
help us understand the movements of people. But mayors will
also be able to measure the financial, material, and knowledge
resources they put into a system and the results they get from
those investments. And they’ll be able to accurately predict the
effects of policies and actions they’re considering.
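As a rough illustration of the storm-monitoring example above, the following Python sketch, with made-up messages, keywords, and an arbitrary alert threshold, flags neighborhoods where distress terms cluster in a social feed.

from collections import Counter

# Hypothetical distress vocabulary; a real system would learn and rank terms.
DISTRESS_TERMS = {"flood", "flooded", "power out", "trapped", "help"}

def flag_neighborhoods(messages, alert_threshold=2):
    """messages: iterable of (neighborhood, text) pairs from a social feed."""
    counts = Counter()
    for neighborhood, text in messages:
        lowered = text.lower()
        if any(term in lowered for term in DISTRESS_TERMS):
            counts[neighborhood] += 1
    return [n for n, c in counts.items() if c >= alert_threshold]

feed = [
    ("Riverside", "Basement flooded, water still rising"),
    ("Riverside", "Power out on Elm St, anyone else?"),
    ("Hilltop",   "Windy but we are fine up here"),
]
print(flag_neighborhoods(feed))  # -> ['Riverside']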
Expertise
With the help of cognitive systems, we will be able to see the
big picture and make better decisions. This is especially important
when we’re trying to address problems that cut across intellectual
and industrial domains. For instance, police will be able to gather
crime statistics and combine them with information about
demographics, events, blueprints, economic activity, and weather
to produce better analysis and safer cities. Armed with abundant
data, police chiefs will be able to set strategies and deploy
resources more effectively—even predicting where and when
crimes are likely to happen. Patrol officers will gain a wealth
of information about locations they’re approaching. Situational
intelligence will be extremely useful when they’re about to knock
on the door of an apartment. The ability to achieve such
comprehensive understanding of situations at every level will be
an essential tool and will become one of the most important
factors in the economic growth and competitiveness of cities.
Objectivity
We all possess biases based on our personal experiences,
egos, and intuition about what works and what doesn’t, as well
as the influence of group dynamics. Cognitive systems can make
it possible for us to be more objective in our decision making.
Corporations may evolve into “conscious organizations” made up
of humans and systems in collaboration. Sophisticated analytic
engines will understand how an organization works, the dynamics
of its competitive environment, and the capabilities within the
organization and ecosystem of partners. Computers might take
notes at meetings, convert data to graphic images, spot hard-to-
see connections, and help guide individuals in achieving business
goals.
Imagination
Because of our prejudices, we have difficulty envisioning
things that are dramatically different from what we’re familiar
with. Cognitive systems will help us discover and explore new
and contrarian ideas. A chemical or pharmaceutical company’s
research-and-development team might use a cognitive system to
explore combinations of molecules or even individual atoms that
have not been contemplated before. Programs run on high-
performance computers will simulate the effects of the
combinations, spotting potentially valuable new materials or
drugs and also anticipating negative side effects. With the aid
of cognitive machines, researchers and engineers will be able to
explore millions of combinations in ways that are economical with
both time and money.
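A toy version of that combinatorial exploration might look like the Python sketch below; the fragment names and the scoring function are invented stand-ins for a real chemistry simulation.

from itertools import combinations

FRAGMENTS = ["A", "B", "C", "D", "E"]  # hypothetical molecular building blocks

def simulate(candidate):
    """Stand-in for an expensive simulation that scores a combination."""
    return len(set(candidate)) - 0.5 * candidate.count("E")  # made-up score

# Exhaustively score every three-fragment combination and keep the best one.
best = max(combinations(FRAGMENTS, 3), key=simulate)
print(best)  # -> ('A', 'B', 'C') under this toy scoring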
Senses
We can only take in and make sense of so much raw, physical
information. With cognitive systems, computer sensors teamed
with analytics software will vastly extend our ability to gather
and process such information. Imagine a world where individuals
carry their own personal Watson in the form of a handheld device.
These personal cognitive assistants will carry on conversations
with us that will make the speech technology in today’s
smartphones seem like baby talk. They will acquire knowledge
about us, in part, from observing what we see, say, touch, and
type on our electronic devices—so they can better anticipate our
wishes. In addition, the assistant will be able to use sophisticated
sensing to monitor a person’s health and threats to her well-being.
If there’s carbon monoxide or the influenza virus in a room, for
example, the device will alert its user. Over time, humans have
evolved to be more successful as a species. We continually adapt
to overcome our limitations. This partnership with computers is
simply the latest step in a long process of adaptation.
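The sensing-and-alerting idea can be sketched very simply; in the Python fragment below the sensor names, readings, and thresholds are illustrative only, not a safety specification.

CO_ALERT_PPM = 35       # illustrative alert level, not a real safety standard

def check_environment(readings):
    """readings: dict of sensor name to latest value from the device's sensors."""
    alerts = []
    if readings.get("carbon_monoxide_ppm", 0) >= CO_ALERT_PPM:
        alerts.append("Carbon monoxide detected: ventilate and leave the room.")
    if readings.get("ambient_temp_c", 20) >= 45:
        alerts.append("Dangerously high temperature nearby.")
    return alerts

print(check_environment({"carbon_monoxide_ppm": 52, "ambient_temp_c": 22}))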
The uses for cognitive computing will be nearly limitless—a
veritable playground for the human imagination. Think of any
activity that involves a lot of complexity, many variables, a great
deal of uncertainty, and incomplete information and that requires
a high level of expertise and rapid decision making. That activity
is going to be a fat target for cognitive technologies. Just as the
personal computer, the Internet, mobile communications, and
social networking have given rise to tens of thousands of software
applications, Web services, and smartphone apps, the cognitive
era will produce a similar explosion of creativity. Think of the
coming technologies as cognitive apps. For enterprises, you can
easily envision apps for handling mergers and acquisitions, crisis
management, competitive analysis, and product design. Picture a
team within a company that’s in charge of sizing up acquisition
candidates using a cognitive M&A app. In order to augment its
understanding of potential targets, many of which are private
companies, the team will set up a special social network of
employees, customers, and business partners who have had direct
experiences with other companies in the same industry. The links
and the nature of the interactions will all be mapped out. A
cognitive system will find information stored there, gathering
insights about companies and suggesting acquisition targets. The
M&A team will also track the performance of previously acquired
companies, finding what worked and what didn’t. Those insights,
constantly updated in the learning system, will help the team
identify risks and synergies, helping it decide which acquisitions
to pursue. Moreover, in the everyday lives of individuals, cognitive
apps will help in selecting a college, making investment decisions,
choosing among insurance options, and purchasing a car or home.
Technology Breakthroughs: Opportunities and
Necessities
Much of the progress in science and technology comes in
small increments. Scientists and engineers build on top of the
innovations that came before. Consider the tablet computer. The
first such devices appeared on the scene back in the 1980s. They
had large, touch-sensitive screens but weighed nearly five pounds
and were an inch and a half thick. They were more like bricks
than books, and about all you could do with them was scrawl
brief memos and fill out forms. After thirty years of gradual
improvements, we have slim, light, powerful tablets that combine
the features of a telephone, a personal computer, a television, and
more.
There’s nothing wrong with incremental innovation. It’s
absolutely necessary, and, sometimes, its results are both
delightful and transformational. A prime example is the iPhone.
With its superior navigation and abundance of easy-to-use
applications, this breakthrough product spawned a burst of
smartphone innovation, which combined with the social-
networking phenomenon to produce a revolutionary shift in
global human behavior. Yet, technologically, the iPhone was built on
top of many smartphone advances that preceded it. In fact, IBM
introduced the first data-accessing phone, called Simon, in 1994,
long before the term “smartphone” had been coined. New waves of
progress, however, require truly disruptive innovations—things
like the transistor, the microchip, and the first programmable
computers. These are the advances that fundamentally change our
world.
Today, many of the core technologies that provide the basic
functions for traditional computers are mature; they have been in
use for decades. In some cases, each wave of improvements is less
profound than the wave that preceded it. We’re reaching the point
of diminishing returns. Yet, at the same time, the demands on
computing technology are growing exponentially. One example
of a maturing technology is the microchip. These slivers of silicon,
densely packed with tiny transistors, replaced the vacuum tube.
Early on, they brought consumers digital watches and pocket-
sized radios. Today, a handful of chips provide all the data
processing required in a tablet or a data-center computer that
serves up tens of thousands of Facebook pages per day. Yet the
basic concept of a microchip is essentially the same as it was
forty years ago. Chips are still made of tiny transistors—switches that
turn on and off to create the zeros and ones required to perform
calculations and process information. The more transistors on a
chip, the more data can be processed and stored there. The
problem is that with each new generation of chips, it becomes
more difficult to shrink the wires and transistors and pack more
of them onto chips. So what’s needed is a disruptive innovation
or, more likely, several of them that will change the game in the
microchip realm and launch another period of rapid innovation.
Soon incremental innovation will no longer be sufficient.
People who demand the most from computers are already running
into the limits of today’s circuitry. Michel McCoy, director of
the Simulation and Computing Program at the U.S. Lawrence
Livermore National Laboratory, is among those calling for a
nationwide initiative involving national laboratories and
businesses aimed at coming up with radical new approaches to
microprocessor and computer-system design and software
programming. “In a sense, everything we’ve done up until this
point has been easy,” he says. “Now we have reached a physics-
dominated threshold in the design of microprocessors and
computing systems which, unless we do something about it, is
essentially going to stagnate progress.”7
We need more radical
innovations. In the years ahead, a number of fundamental
advances in science and technology will be required to make
progress. Think of those colorful Russian wooden dolls where
progressively smaller dolls nest inside slightly larger ones. We
need to achieve technology advances in layers.
The top layer is the way we interact with computers and get
them to do what we want. The big innovation at this outer layer
is “learning systems,” which we will explore deeply in chapter 2.
7. Michel McCoy, Lawrence Livermore National Laboratory, interview, June 14, 2012.
The goal is to create machines that do not require as much
programming by humans. Instead they’ll be “taught” by people,
who will set objectives for them. As they learn, the machines will
work out the details of how to meet those goals.
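A minimal illustration of that division of labor, in Python: the person supplies only an objective function (invented here), and a crude random search works out for itself how to push that objective higher.

import random

def objective(x):
    """The human-supplied goal: make this value as high as possible."""
    return -(x - 7.0) ** 2

def learn(objective, steps=1000, step_size=0.5):
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate   # keep any change that moves closer to the goal
    return x

random.seed(0)
print(round(learn(objective), 2))  # settles near 7.0 without being told how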
The next layer represents how we organize and interpret data,
which we’ll discuss in chapter 3. Today’s databases do an excellent
job of organizing information in columns and rows; tomorrow’s
are being designed to manage huge volumes of different kinds of
data, understand information in context, and crunch data in real
time.
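The shift can be suggested with a small, entirely invented Python example: a fixed-column row on one side, and a stream of mixed, self-describing records filtered as they arrive on the other.

import json

# Rows-and-columns model: every record has the same fixed fields.
table_row = ("2024-06-01", "station-12", 21.4)   # date, station, temperature

# Mixed, self-describing records of different kinds in one stream.
mixed_records = [
    {"kind": "tweet",  "text": "Smoke near the depot", "geo": [40.7, -73.9]},
    {"kind": "sensor", "pollutant": "PM2.5", "value": 88, "unit": "ug/m3"},
]

def alerts(stream):
    # "Real time" here just means filtering records as they arrive.
    for record in stream:
        if record["kind"] == "sensor" and record.get("value", 0) > 50:
            yield json.dumps(record)

print(list(alerts(mixed_records)))  # -> the PM2.5 reading, serialized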
Another major dimension, which we’ll go into in chapter 4,
is how to make use of data gathered through sensor technology.
Today, we use rudimentary sensor technologies to perform useful
tasks such as locating leaks in water systems. In the cognitive era,
sensors and pattern-recognition software will augment our senses,
making us hyper-aware of the world around us.
The next layer represents the design of systems—how we fit
together all the physical components that make up a computer.
The challenge here, which we address in chapter 5, is creating
data-centric computers. The designers of computing systems have
long treated logic and memory as separate elements. Now, they
will meld the components together, first, on circuit boards and,
later, on single microchips. Also, they’ll move the processing to the
data, rather than vice versa.
Finally, the innermost layer is nanotechnology, where we
manipulate matter at the molecular and atomic scale. In chapter
6, we’ll explore what it will take to invent a new physics of
computing. To overcome the limits of today’s microchip
technology, scientists must shift to new nanomaterials and new
approaches to switching from one digital state to another.
Possibilities include harnessing quantum mechanics or building chips
driven by “synapses and neurons” for data processing.
A New Culture of Innovation
We’re still in the early stages of the emergence of this new era
of computing. Progress will require a willingness to make big bets,
take a long-term view, and engage in open collaboration. We’ll
explore the elements of the culture of innovation in each of the
subsequent chapters in what we call the journeys of discovery. An
absolutely critical aspect of the culture of innovation will be the
ambition and capabilities of the inventors themselves. For rapid
progress to be made in the new era of computing, young people
must be inspired to become scientists, and they must be educated
by teachers using superior tools and techniques. They have to
be rewarded and given opportunities to challenge everything we
think we know about how the world works. All of this requires dedication
and investment by all of society’s institutions, including families,
local communities, governments, universities, and businesses.
When we ask scientists at IBM Research what motivates
them, the answer is often that they want to change the world—not
in minuscule increments but in great leaps forward. Dr. Mark
Ritter, a senior manager in IBM Research’s Physical Sciences
Department, leads an effort to rethink the entire architecture of
computing for the era of cognitive systems inspired by the human
brain. As a child, Mark, whose father was a plumber, had an
intense curiosity about how things work on a fundamental level.
It was his good fortune that his grandparents, who lived near his
family in Grinnell, Iowa, had two neighbors who were physics
professors at Grinnell College. One of the physicists, whom Mark
pestered with science questions while the professor repaired his
VW in the driveway, lent Mark a book on particle physics when
he was about twelve years old. As a teenager, Mark bicycled over
to the campus to attend physics lectures. He built a simple gas
laser in the basement of his home. It was the beginning of decades
of inquiry into how things work and how they can work better.
A few years ago, after more than twenty years in IBM Research,
Mark and his colleagues recognized that the computing model
designed for mid-twentieth-century demands was running out of
gas. So they set about inventing a new one. “This is the most
exciting time in my career,” Mark says. “The old ways of doing
things aren’t going to solve efficiently the big, real-world problems
we face.”
For his part, Dr. Larry Norton of Memorial Sloan-Kettering
Cancer Center is driven to transform the way medicine is
practiced. Ever since he can remember, he was motivated by the
desire to do something with his life that would improve the world.
Born in 1947, he grew up at a time when people saw science as
a powerful means of solving humanity’s problems. He recalls a
thought-crystallizing experience when he was an undergraduate
at the University of Rochester. He lived in a dorm where students
often gathered in the mornings for freewheeling discussions of
politics, values, and ethics. He was already contemplating a career
in medicine, and the topic that day was, if you were a doctor and
had done everything medical science could offer to save a patient
but she died anyway, how would you feel? The students were
split. “I realized I would feel terrible about it,” he says. “Offering
everything available isn’t enough. I should have done better. And
since, because of limitations in the world’s knowledge, I couldn’t
do better, I should be involved in moving things forward.”
During his forty-year career, Larry has been an innovator
in cancer treatment. Among his contributions is the central role
he played in developing the Norton-Simon hypothesis, a once-
revolutionary but now widely used approach to chemotherapy.