This document discusses cognitive technologies and their potential application to analyzing and mapping the complex debate around internet governance. It provides an overview of cognitive science and how developments in engineering and research have led to cognitive technologies that can mimic some human cognitive functions. As an example, it describes how text mining as an applied cognitive science can be used to discover meaningful patterns in large amounts of structured and unstructured data related to the internet governance debate. The document argues that cognitive technologies may help address the limits of human cognition when dealing with vast information from global governance processes and social issues involving thousands of actors.
Predmet "Kognitivna psihologija", predavač: Goran S. Milovanović, jesenji semestar 2012, Fakultet za medije i komunikacije, Univerzitet Singidunum, Beograd, Srbija.
X predavanje: Jezik 2: Mentalni leksikon
KogPsi2012, Fmk, Singidunum. 8. Analogne predstave i epizodička memorijaGoran S. Milovanovic
Predmet "Kognitivna psihologija", predavač: Goran S. Milovanović, jesenji semestar 2012, Fakultet za medije i komunikacije, Univerzitet Singidunum, Beograd, Srbija.
VIII predavanje: Analogne predstave i epizodička memorija
KogPsi2012, Fmk, Singidunum. 9. Jezik 1: Uvod u psiholingvistiku i percepciju...Goran S. Milovanovic
Predmet "Kognitivna psihologija", predavač: Goran S. Milovanović, jesenji semestar 2012, Fakultet za medije i komunikacije, Univerzitet Singidunum, Beograd, Srbija.
IX predavanje: Jezik 1: Uvod u psiholingvistiku i percepciju govora
Učenje i viši kognitivni procesi 8. Simboličke funkcije, IV Deo: Analogija i ...Goran S. Milovanovic
Učenje i viši kognitivni procesi 8. Simboličke funkcije, IV Deo: Analogija i strukturalno mapiranje, konceptualne kombinacije i interpretacija karakteristika u kategorizaciji
Computing, cognition and the future of knowing,. by IBMVirginia Fernandez
How humans and machines are forging a new age of understanding.
-The history of computing and the rise of cognitive
-The world’s first cognitive system.
-The technical path forward and the science of what’s possible
-Implications and obligations for the advance of cognitive science.
-Paving the way for the next generation of human cognition.
Learning to trust artificial intelligence systems accountability, compliance ...Diego Alberto Tamayo
It’s not surprising that the
public’s imagination has
been ignited by Artificial
Intelligence since the term
was first coined in 1955.
In the ensuing 60 years,
we have been alternately
captivated by its promise,
wary of its potential for
abuse and frustrated by
its slow development.
In the last decade, workplaces have started to evolve towards digitalisation. In the future people will work in digitally connected environments where personalisation is enabled, collaboration is improved and data sharing and information management are automated. Ultimately, these future workplaces will provide context-aware artificial intelligence (AI) and decision support that leverage both localised information and broader community knowledge whenever needed.
Here are 10 Important Points About Computer Science: 1. A Prelude to Discovery . The Digital Revolution 3. Building Blocks of Complexity 4. The Renaissance of Artificial Intelligence and Machine Learning
Smart cities: how computers are changing our world for the betterRoberto Siagri
Introduction
The world is flat, hot and crowded, as Thomas Friedman says in his last book. Luckily, we can also say that it is getting more and more intelligent. Our world is increasingly interconnected and increasingly able to talk to us: people, systems and objects can communicate and interact with one another in completely new ways. Now we have the means to measure, hear and see instantaneously the state of all things. When all things, including processes and working methods, are intelligent, we will be able to respond to changing conditions with more speed and more focus, and make more precise forecasting which in turn will lead to optimization of future events. This ongoing transformation has given birth to the concept of Smart Cities, cities that are able to take action and improve the quality of life of their inhabitants, reconciling it with the needs of trades, factories, service industries and institutions by means of an innovative and pervasive use of digital technologies.
Cognitive technologies: mapping the Internet governance debate
by Goran S. Milovanović
This paper
• provides a simple explanation of what cognitive technologies are.
• gives an overview of the main idea of cognitive science (why human minds and computers could
be thought of as being essentially similar kinds of systems).
• discusses in brief how developments in engineering and fundamental research interact to result
in cognitive technologies.
• presents an example of applied cognitive science (text‑mining) in the mapping of the Internet
governance debate.
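As a concrete illustration of the kind of text-mining pipeline the paper goes on to describe, the sketch below builds term-frequency profiles for a few hypothetical IG-debate position statements and measures their similarity. The documents, the actor names, and the tokenisation are invented for illustration only; a real analysis would work over a full corpus with proper NLP tooling.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and keep alphabetic word tokens only.
    return re.findall(r"[a-z]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical position statements from an IG debate.
docs = {
    "state_actor": "internet governance requires state oversight of critical infrastructure",
    "civil_society": "internet governance must protect users and human rights online",
    "technical": "infrastructure standards keep the internet open and interoperable",
}

profiles = {name: Counter(tokenize(text)) for name, text in docs.items()}
sim = cosine(profiles["state_actor"], profiles["technical"])
```

Even this toy version shows the point of the method: positions are mapped into a common vector space where their proximity can be computed rather than judged by reading alone.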
Introduction
Among the words that first come to mind
when Internet governance (IG) is mentioned,
complexity surely ranks among the first.
But do we ever grasp the full complexity of such
issues? Is it possible for an individual human
mind ever to claim a full understanding of a
process that encompasses thousands of actors,
a plenitude of different positions, articulates an
agenda of almost non‑stop ongoing meetings,
conferences, forums, and negotiations, while
addressing the interests of billions of Internet
users? With the development of the Internet,
the Information Society, and the Internet
governance processes, the amount of information
that demands effective processing in order for
us to act rationally and in real time increases
tremendously. Paradoxically, the Information
Age, marked by the discovery of the possibility of
digital computers in the first half of the twentieth
century, demonstrated the shortcomings
in processing capacities very quickly as it
progressed. The availability of home computers
and the Internet have been contributing to this
paradox since the early 1990s: as the number of
networked social actors grew, the governance
processes naturally faced increased demand for
information processing and management. But
this is not simply a question of how much raw
processing power or memory storage
we have at our disposal. The complexity of social
processes that call for good governance, as well
as the amount of communication that mediates
the actions of the actors involved, increases to
a level where qualitatively different forms of
management must come into play. One cannot
understand them by simply looking at them, or
listening to what everyone has to say: there are
so many voices, and among billions of thoughts,
ideas, concepts, and words, the known limits of
human cognition must be recognised.
The good news is, as the Information Age
progresses, new technologies, founded upon the
scientific attempts to mimic the cognitive functions
of the human mind, are becoming increasingly
available. Many of the computational tools that
were previously available only to well‑funded
research initiatives in cognitive science and
artificial intelligence can nowadays run on
average desktop computers and laptops. With
the rise of cloud computing and the
parallel execution of thousands of lines of
computationally demanding code, the application
of cognitive technologies in attempts to discover
meaningful regularities in vast amounts of
structured and unstructured data is now within
reach. If the known advantages of computers
over human minds – namely, the speed of
processing that they exhibit in repetitive,
well‑structured, daunting tasks performed
over huge sets of data – can combine with at
least some of the advantages of our natural
minds over computers, what new frontiers
are touched upon? Can computers do more
than beat the best of our chess players? Can
they help us to better manage the complexity
of societal consequences that have resulted
from our own discovery and the introduction
of digital technologies to human societies? How
can cognitive technologies help us analyse and
manage global governance processes such
as IG? What are their limits and how will they
contribute to societal changes themselves? These
are the questions that we address in this short
paper, tackling the idea of cognitive technology
and providing an illustrative example of their
application in the mapping of the IG debate.
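The point made above about raw processing power can be made concrete: the repetitive, well‑structured tasks at which computers excel parallelise naturally over large document collections. The sketch below (with an invented mini-corpus standing in for a large body of IG documents) counts term occurrences in parallel workers and merges the independent partial results; the same divide-and-merge pattern scales from threads to processes to cluster nodes.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_terms(doc: str) -> Counter:
    # The repetitive, well-structured subtask: count the words of one document.
    return Counter(doc.lower().split())

# Invented mini-corpus standing in for a large collection of IG texts.
corpus = [
    "internet governance forum",
    "governance of the internet",
    "internet users and governance processes",
]

# Each document is processed independently, so the work parallelises trivially.
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_counts = list(pool.map(count_terms, corpus))

# Merge the independent partial results into one global term count.
totals = sum(partial_counts, Counter())
```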
Box 1: Cognitive technologies
• The Internet links people; networked
computers are merely mediators.
• By linking people globally, the Internet
has created a network of human minds –
systems that are a priori more complex
than digital computers themselves.
• The networked society exchanges a vast
amount of information that could not have
been transmitted before the inception of
the Internet: management and governance
issues become critical.
• New forms of governance introduced:
global IG.
• New forms of information processing
introduced: cognitive technologies. They
result from the application of cognitive
science that studies both natural and
artificial minds.
• Contemporary cognitive technologies
present an attempt to mimic some of the
cognitive functions of the human mind.
• Increasing raw processing power (cloud
computing, parallelisation, massive
memory storage) nowadays enables
the widespread application of cognitive
technologies.
• How do they help and what are their limits?
The main idea: mind as a machine
For obvious reasons, many theoretical
discussions and introductions to IG begin with
an overview of the history of the Internet. For
reasons less obvious, many discussions about
the Internet and the Information Society tend to
suppress the historical presentation of an idea
that is clearly more important than the very idea
of the Internet. The idea is characteristic of the
cognitive psychology and cognitive science of
the second half of the twentieth century, and
it states – to put it in a nutshell – that human
minds and digital computers possibly share many
important, even essential properties, and that
this similarity in their design – which, as many
believe, goes beyond pure analogy – opens a
set of prospects towards the development of
artifi cial intelligence, which might prove to be
the most important technological development
in the future history of human kind if achieved.
From a practical point of view, and given the current state of technological development,
the most important consequence is that at least
some of the cognitive functions of the human
mind can be mimicked by digital computers.
The field of computational cognitive psychology,
where behavioural data collected from
human participants in experimental settings
are modelled mathematically, increasingly
contributes to our understanding that the
human mind acts in perception, judgment,
decision‑making, problem‑solving, language
comprehension, and other activities as if it is
governed by a set of natural principles that can
be effectively simulated on digital computers. Again, even if the human mind is essentially different from a modern digital computer, these findings open a way towards the simulation of human cognitive functions and their enhancement (given that digital computers are able to perform many simple computational tasks with an efficiency that is orders of magnitude above that of natural minds).
An overview of cornerstones in the historical
development of cognitive science is given
in Appendix I. The prelude to the history of
cognitive science belongs to the pre-World War II epoch, when a generation of brilliant mathematicians and philosophers, certainly best represented by the ingenious British mathematician Alan Mathison Turing (1912–1954), paved the way towards the discovery of the limits of formalisation in logic and mathematics in general. By formalisation we mean the expression of any idea in a strictly defined, unambiguous language, precise enough that no two interpreters could possibly argue over its meaning. The concept of formalisation is
important: any problem that is encoded by a set
of transformations over sequences of symbols –
in other words, by a set of sentences in a precise,
exact, and unambiguous language – is said to
be formalised. The question of whether there is meaning to human life, thus, can probably never be formalised. The question of whether there is a certain way for White to win a chess game, given the initial advantage of having the first move, can be formalised, since chess is a game that receives a straightforward formal description through its well-defined, exact rules.
Turing was among those to discover a way of
expressing any problem that can be formalised
at all in the form of a computer program for
abstract computational machinery known as the Universal Turing Machine (UTM). By providing the definition for his abstract computer, he
was able to show how any mathematical
reasoning – and all mathematical reasoning
takes place in strictly formalised languages
– can be essentially understood as a form of
computation. Unlike computation in a narrow
sense, where its meaning usually refers to basic
arithmetic operations with numbers only, this
broad sense of computation encompasses all precisely defined operations over symbols and sets of symbols in some predefined alphabet. The alphabet is used to describe the problem, while the instructions to the Turing Machine control its behaviour, which essentially presents no more than the translation of sets of symbols from their initial form to some other form, with one of the possible forms of transformation being discovered and recognised as a solution
to the given problem – the moment when
the machine stops working. More importantly, from Turing's discovery it followed that formal reasoning in logic and mathematics can be performed mechanically, i.e., an automated device could be constructed that computes any computable function at all. The road towards the development of digital computers was thus open.
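The machinery Turing described can be sketched in a few lines of code. The simulator below is a toy illustration, not Turing's own formulation: the state names, the tape encoding, and the binary-increment transition table are our own assumptions. It shows the essential ingredients: a finite transition table, an unbounded tape of symbols, and a halting state reached when the transformation is complete.

```python
# A minimal sketch of a Turing machine: a finite transition table operating
# on an unbounded tape of symbols. This toy machine increments a binary
# number written on the tape (least significant bit on the right).

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run until the machine enters the 'halt' state; return the tape contents."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = max(tape) if tape else 0       # start at the rightmost symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Transition table for binary increment: carry over 1s until a 0 or a blank.
INC = {
    ("start", "1"): ("start", "0", "L"),  # 1 + carry -> 0, keep carrying
    ("start", "0"): ("halt",  "1", "L"),  # 0 + carry -> 1, done
    ("start", "_"): ("halt",  "1", "L"),  # ran off the left edge: new digit
}

print(run_turing_machine(INC, "1011"))  # binary 11 + 1 -> "1100" (binary 12)
```

The machine does nothing but rewrite symbols and move its head, yet, as the text notes, any formalisable problem can in principle be encoded for such a device.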
But even more important, following Turing’s
analyses of mechanical reasoning, the question
of whether the human mind is simply a biological
incarnation of universal computation – a complex
universal digital computer, instantiated by
biological evolution instead being a product
of design processes, and implemented in
carbon‑based organic matter instead of silicon
– was posed. The idea that human intelligence
shares the same essential properties as Turing’s
mechanised system of universal computation
proved to be the major driving force in the
development of post World War II cognitive
psychology. For the fi rst time in history, mankind
not only developed the means of advancing
artifi cial forms of thinking, but instantiated the
fi rst theoretical idea that saw the human mind
as a natural, mechanical system whose abstract
structure is at least, in a sense, analogous to
some well‑studied mathematical description.
A way for the naturalisation of psychology was
fi nally opened, and cognitive science, as the
study of natural and artifi cial minds, was born.
Roughly speaking, three important phases in the development of its mainstream can be recognised during the course of the twentieth century. The first important phase in the development of cognitive science was marked by a clear recognition that, at least in principle, the human mind could operate on principles that are exactly the same as those that govern universal computation. Newell and Simon's Physical Symbol System Hypothesis [1] provides probably the most important theoretical contribution to this first, pioneering phase. Attempts to design universal problem solvers and computers that successfully play chess were characteristic of the first phase. The ability to produce and understand natural language was recognised as a major characteristic of an artificially intelligent system. An essential critique of this first phase in the historical development of cognitive science was provided by the philosopher Hubert Dreyfus in his classic What Computers Can't Do in 1972.
[2] The second phase, starting approximately in the 1970s and gaining momentum during the 1980s and 1990s, was characterised by an emphasis on the problems of learning, the restoration of importance of some of the pre-World War II principles of behaviouristic psychology, the realisation that well-defined formal problems such as chess are not really representative of the problems that human minds are really good at solving, and the exploitation of a class of computational models of cognitive functions known as neural networks. The results of this second phase, marked mainly by the theoretical movement of connectionism, showed how sets of strictly defined, explicit rules almost certainly fail to describe adequately the highly flexible, adaptive nature of the human mind. [3a,3b] The third phase is rooted in the 1990s, when many cognitive scientists began to understand that human minds essentially operate on variables of uncertain
value, with incomplete information, and in
uncertain environments. Sometimes referred to as the probabilistic turn in cognitive science, [4] this latest phase in the development of cognitive science led to the important conclusion that the language of probability theory, used instead of (or in conjunction with) the language of formal logic, provides the most natural way to describe the operation of the human cognitive system.
The widespread application of decision theory,
describing the human mind as a biological organ
that essentially evolved in order to perform the
function of choice under risk and uncertainty, is
characteristic of the most recent developments
in this third, contemporary phase in the history
of cognitive science. [5]
Box 2. The rise of cognitive science
In summary:
• Fundamental insights in twentieth-century logic and mathematics enabled a first attempt at a naturalistic theory of human intelligence.
• Alan Turing’s seminal contribution to the
theory of computation enabled a direct
parallel between the design of artifi cially
and naturally intelligent systems.
• This theory, in its mainstream form, sees
no essential diff erences between the
structure of the human mind and the
structure of digital computers, both viewed
at the most abstract level of their design.
• Different theoretical ideas and mathematical theories were used to formalise the functioning of the mind during the second half of the twentieth century. The ideas of physical symbol systems, neural networks, and probability and decision theory played the most prominent roles in the development of cognitive science.
The machine as a mind: applied
cognition
As widely acknowledged, humanity has still not achieved the goal of developing true artificial intelligence. What, then, is applied cognition? At the current stage of development, applied cognitive science encompasses the application of mostly partial solutions to partial cognitive problems. For example, we cannot build software that reads Jorge Luis Borges' collected short stories and then produces a critical analysis from the viewpoint of some specific school of literary criticism. One could say that not many human beings can actually do that either. But we cannot accomplish even simpler tasks, the general rule being that the more general a cognitive task, the harder it is to simulate. But what we can do,
for example, is to feed the software with a large
collection of texts from different authors, let it search through them, recognise the most characteristic words and patterns of word usage, and then successfully predict the authorship of a previously unseen text. We can teach computers to recognise some visual objects by learning, with feedback, from their descriptions in terms of simpler visual features, and we are getting good at making them recognise faces in photographs.
We cannot ask a computer to act creatively in the
way that humans do, but we can make them prove
complicated mathematical theorems that would
call for years of mathematical work by hand,
and even produce aesthetically pleasing visual
patterns and music by sampling, resampling, and
adding random but not completely irregular noise
to initial sound patterns.
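The authorship-prediction task mentioned above can be sketched in a heavily simplified form. Real systems use much larger corpora and far richer features; in this toy version, each author's known texts are collapsed into a word-frequency profile, and an unknown text is attributed to the author whose profile it most resembles. The miniature corpora and author names are invented for illustration.

```python
# A toy sketch of authorship attribution from patterns of word usage.
import math
from collections import Counter

def profile(texts):
    """Word-frequency profile of a collection of texts."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

def cosine(a, b):
    """Cosine similarity between two word-frequency profiles."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def attribute(unknown_text, author_texts):
    """Return the author whose profile is most similar to the unknown text."""
    target = profile([unknown_text])
    return max(author_texts, key=lambda a: cosine(target, profile(author_texts[a])))

# Hypothetical miniature corpora standing in for two authors' collected texts.
corpus = {
    "author_a": ["the library of infinite books", "the garden of forking paths"],
    "author_b": ["stock prices rose sharply today", "markets fell on trade news"],
}
print(attribute("a labyrinth of books and paths", corpus))  # -> author_a
```

On realistic data, the same idea is applied to much richer features (function words, character n-grams), but the principle – comparing patterns of word usage – is the one described in the text.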
In cognitive science, engineers learn from psychologists and vice versa: mathematical models, developed initially to solve purely practical problems, are imported into psychological theories of cognitive functions. The goals of study that cognitive engineers and psychologists pursue are only somewhat different. While the latter address mainly the functioning of natural minds, the former do not have to constrain a solution to some cognitive problem by imposing on it the limits of the human mind and the realistic neurophysiology of the brain. Essentially, the direction of the arrow usually goes from mathematicians and engineers towards psychologists: the ideas proposed in the field of artificial intelligence (AI) are tested only after having been dressed in a suit of empirical psychological theory. However, engineers and mathematicians in AI discover their ideas by observing and reflecting on the only known truly intelligent system, namely, the real, natural, human mind.
Many computational methods were thus first discovered in the field of AI before they were tried out as explanations of the functioning of the human mind. To begin with, the idea of physical symbol systems, provided by Newell and Simon in the early formulation of cognitive science, presents a direct interpretation of the symbolic theory of computation initially proposed by Turing and the mathematicians of the first half of the twentieth century. Neural networks, which present a class of computational models that can learn to respond to complex external stimuli in a flexible and adaptive way, were clearly motivated by the empirical study of learning in humans and animals. However, they were first proposed as an idea in the field of artificial intelligence, and only later applied in human cognitive psychology. Bayesian networks, known also as causal (graphical) models, [6] represent structured probabilistic machinery that deals efficiently with learning, prediction, and inference tasks, and were again first proposed in AI before heavily influencing the most recent developments in psychology. Decision and game theory, to provide an exception, were initially developed and reflected on in pure mathematics and mathematical economics before being imported into the arena of empirical psychology, where they still represent both a focal subject of experimental research and a mathematical modelling toolkit.
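The probabilistic machinery underlying Bayesian networks can be illustrated in its smallest possible form: a two-node network (Cause → Effect), where inference reduces to a direct application of Bayes' theorem. The rain/wet-grass framing and all probability values below are invented for illustration.

```python
# Inference in a minimal two-node Bayesian network (Cause -> Effect).
P_cause = 0.2                        # prior: P(rain)
P_effect_given = {True: 0.9,         # P(wet grass | rain)
                  False: 0.1}        # P(wet grass | no rain)

def posterior_cause_given_effect():
    """P(rain | wet grass), by Bayes' theorem."""
    # marginal probability of the effect, summing over both causes
    p_effect = (P_effect_given[True] * P_cause
                + P_effect_given[False] * (1 - P_cause))
    return P_effect_given[True] * P_cause / p_effect

print(round(posterior_cause_given_effect(), 3))  # 0.18 / 0.26 ≈ 0.692
```

In a full Bayesian network, the same computation is organised over a graph of many such conditional dependencies, which is what makes learning, prediction, and inference tractable at scale.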
The current situation in applying the known principles and methods of cognitive science can be described as eclectic. In applications to real-world problems, and not necessarily to describe truthfully the functioning of the human mind, algorithms developed by cognitive scientists do not need to obey any 'theoretical purity'. Many principles discovered in empirical psychology, for example reinforcement learning, are applied without necessarily being applied in exactly the same way as they are thought to operate in natural learning systems.
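As a sketch of such an 'eclectic' application, the following toy example uses tabular Q-learning: a reinforcement-learning rule inspired by psychological theories of learning, but parameterised freely for the engineering task rather than for psychological fidelity. The corridor environment and all parameter values are our own illustrative assumptions.

```python
# Tabular Q-learning on a toy 5-cell corridor; only the rightmost cell rewards.
import random
random.seed(0)

N_STATES = 5                              # corridor cells 0..4; goal is cell 4
ACTIONS = (-1, +1)                        # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.5     # learning rate, discount, exploration

for _ in range(300):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), N_STATES - 1)    # clamped transition
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only at the goal
        # the Q-learning update, used here as pure engineering
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy moves right in every non-goal cell.
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)]
print(policy)
```

The update rule echoes reward-driven learning in animals, but nothing in the code attempts to match how natural learning systems actually implement it; this is exactly the eclecticism described above.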
As already noted, it’s uncertain whether applied
cognition will ever produce any AI that will fully
resemble the natural mind. A powerful analogy
is proposed: for example, people rarely admit
that the human kind has never understood
natural fl ying in birds or insects, in spite of the
fact that we have and use artifi cial fl ying of
airplanes and helicopters. The equations that
would correctly describe the natural, dynamic,
biomechanical systems that fl y are simply too
complicated and, in general, they cannot be
analytically solved even if they can be described.
But we have invented artifi cial fl ying by refl ecting
on the principles of the fl ight of birds, without
ever having a complete scientifi c understanding
it. Maybe AI will follow the same path: we may
have useful, practical, and powerful cognitive
applications, even without ever understanding
the functioning of the human mind in totality.
The main goal of current cognitive technologies,
the products of applied cognitive science, is to
help natural human minds to better understand
very complex cognitive problems – those that would be hard to comprehend by our mental functions alone – and to increase the speed and amount of processing that some cognitive tasks require. For example, studying thousands of text documents in order to describe, at least roughly, the main themes discussed in them can be automated to a degree that helps human beings get the big picture without actually reading through all of them.
Box 3. Applied cognition
• Cognitive engineers and cognitive psychologists learn from each other. The former reflect on natural minds and build algorithms that solve certain classes of cognitive problems, which leads directly to applications, while the latter test the proposed models experimentally to determine whether they describe the workings of the human mind adequately.
• Many principles of cognitive psychology are applied to real-world problems without necessarily mimicking the corresponding faculties of the human mind exactly. We discover something, then change it to suit our present purpose.
• We provide partial solutions only, since global human cognitive functioning is still too difficult to describe. However, even the partial solutions that are nowadays available go far beyond what computers could do only decades ago.
• Contemporary cognitive technologies
focus mainly on reducing the complexity of
some cognitive tasks that would be hard to
perform by relying on our natural cognitive
functions only.
Example: applying text-mining to map
the IG debate
The NETmundial Multistakeholder Statement of São Paulo1 – the final outcome document of NETmundial (22–23 April 2014), the Global Multistakeholder Meeting on the Future of IG – resulted from a political process of immense complexity. Numerous forms of input, various
1 http://netmundial.br/netmundial‑multistakeholder‑statement/
expertise, several preformed bodies, a mass
of individuals and organisations representing different stakeholders, all interfaced, both online and in situ, through the complex timeline of the NETmundial process, to result in this document. On 3 April, the NETmundial Secretariat prepared the first draft, having previously processed more than 180 content contributions. The final document resulted from the negotiations in São Paulo, based on the second draft, which itself incorporated numerous suggestions made in comments on the first draft. The multistakeholder process of document drafting introduced in its production is already seen by many as a future common ingredient of global governance processes in general. From the complexity of the IG debate alone, one could have anticipated that more complex forms of negotiation, decision-shaping, and crowdsourced document production
will naturally emerge. As the complexity
of the processes under analysis increases, the complexity of the tools used to conduct the analyses must also increase. At the present point of its development, DiploFoundation's Text-Analytics Framework (DTAF) operates on the Internet Governance Forum (IGF) Text Corpus, a collection of all available session, workshop, and panel transcripts from the IGF 2006–2014, encompassing more than 600 documents and utterances contributed by hundreds of speakers. By any standards in the field of text-mining – an area of applied cognitive science which focuses on statistical analyses of the patterns of words that occur in natural language – both the NETmundial collection of content contributions and the IGF Text Corpus present rather small datasets. Analyses of text corpora that encompass tens of thousands of documents are rather common. Imagine incorporating all websites, social media, and newspaper and journal articles on IG, in order to perform full-scale monitoring of the discourse of the IG debate, and you arrive at a corpus of that scale.
Obviously, the cognitive task of mapping the IG debate, represented even by only the two text corpora that we discuss here, is highly demanding. It is questionable whether a single policy analyst or social scientist would manage to comprehend the full complexity of the IG discourse even in several years of dedicated work.
Here we illustrate the application of text‑mining,
which is a typical cognitive technology used
nowadays, to the discovery of useful, structured
information in large collections of texts. We will
focus our attention on the NETmundial corpus of content contributions and ask the following
question: What are the most important themes,
or topics, that have appeared in this set of more
than 180 contributions, including the NETmundial
Multistakeholder Statement of São Paulo? In
order to answer this question, we first need to hypothesise a model of how the NETmundial discourse was produced. We rely on a fairly well-studied and frequently applied model in text-mining, known by the rather technical name of Latent Dirichlet Allocation (LDA; see the Methodology section in Appendix II [7,8,9]). In
LDA, it is assumed that each word (or phrase) in some particular discourse is produced from a set of underlying topics with some initially unknown probability. Thus, each topic is defined as a probability distribution across the words and phrases that appear in the documents. It is also assumed that each document in the text corpus is produced from a mixture of topics, each of them weighted differently in proportion to their contribution to the generation of the words that comprise the document. Additional assumptions must be made about the initial distribution of topics across documents. All these assumptions are assembled in a graphical model that describes the relationships between the words, documents, and latent topics. One normally runs a number of LDA models that encompass different numbers of topics and relies on the statistical properties of the obtained solutions to recognise which one provides the best explanation for the structure of the text corpus under analysis. In the case of the NETmundial corpus of content contributions, an LDA model with seven topics was selected. Appendix II presents the most probable words generated by each of the seven underlying topics. By inspecting which words are most characteristic of each of the topics discovered in this collection of texts, we were able to provide meaningful interpretations2 of the topics. We
find that the NETmundial content contributions were mainly focused on questions of (1) human rights, (2) multistakeholderism, (3) a global governance mechanism for ICANN, (4) information security, (5) IANA oversight, (6) capacity building, and (7) development (see Table A-2.1 in Appendix II).
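The generative assumptions of LDA described above can be sketched directly in code. The topics, vocabulary, and probabilities below are invented stand-ins for the IG terminology, not the fitted model from the analysis; the point is only to show how a document is assumed to be produced word by word from a mixture of topics.

```python
# LDA's generative story (not its fitting procedure): each topic is a
# distribution over words, each document a mixture of topics; every word is
# drawn by sampling a topic from the mixture, then a word from that topic.
import random
random.seed(42)

topics = {  # topic -> probability distribution over the vocabulary
    "human_rights":   {"rights": 0.5, "privacy": 0.3, "access": 0.2},
    "iana_oversight": {"IANA": 0.4, "ICANN": 0.4, "oversight": 0.2},
}

def sample(dist):
    """Draw one item from a {item: probability} distribution."""
    r, acc = random.random(), 0.0
    for item, p in dist.items():
        acc += p
        if r < acc:
            return item
    return item  # guard against floating-point rounding

def generate_document(topic_mixture, length):
    """Generate a document word by word under the LDA assumptions."""
    return [sample(topics[sample(topic_mixture)]) for _ in range(length)]

# A civil-society-like document: heavily weighted towards human rights.
doc = generate_document({"human_rights": 0.8, "iana_oversight": 0.2}, 20)
print(doc)
```

Fitting LDA runs this story in reverse: given only the documents, it infers the topic distributions and per-document mixtures that most plausibly generated them.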
In order to help a human policy analyst in their
research on the NETmundial, for example, we
could determine the contribution of each of
these seven topics to each document from the
2 I wish to thank Mr Vladimir Radunović of DiploFoundation
for his help in the interpretation of the topics obtained
from the LDA model of the NETmundial content
contributions.
collection of content contributions, so that the
analyst interested in just some aspects of this
complex process could select only the most
relevant documents. As an illustration, Figure
A‑2.1 in Appendix II presents the distributions
of topics found in the content contributions of
two important stakeholders in the IG arena,
civil society and government. It is easily read
from the displays that the representatives of the
organisations of civil society strongly emphasised
human rights (Topic 1 in our model) in their
contributions, while representatives of national
governments focused more on IANA oversight
(Topic 5) and development issues (Topic 7).
Figure A-2.2 in Appendix II presents the structure of similarities between the most important words in the human rights topic (Topic 1, Table A-2.1 in Appendix II). We first selected only
the content contributions made on behalf of
civil society organisations. Then we used the
probability distributions of words across topics
and the distribution of topic weights across the
documents to compute the similarities between
all relevant words. Since similarity computed in this way is represented in a high-dimensional space, and is thus not suitable for visualisation, we decided to use the graph representation shown in Figure A-2.2. Each node in Figure A-2.2 represents a word, and each word receives exactly three arrows. These arrows originate at the nodes representing the three words found to be most similar to the target word. Each word is the origin of as many links as there are words in whose set of the three most similar words it is found. Thus we can use the graph representation to assess the similarities in the patterns of word usage across different collections of documents. The lower
display in Figure A‑2.2 presents the similarity
structure in the human rights topic extracted
from governmental content contributions to
NETmundial only. By comparing the two graphs, we can see that only slight differences appear, in spite of the fact that the importance of the human rights topic differs in the content contributions of these two stakeholders. Thus, they seem to understand the conceptual realm of human rights in a similar way, but tend to accentuate it differently in the statements of their respective positions.
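The construction of this kind of similarity graph can be sketched as follows. The word vectors here are invented low-dimensional stand-ins for the topic-derived representations used in the analysis; what the sketch preserves is the rule that each word receives exactly three incoming arrows, one from each of its three most similar words.

```python
# Building a directed word-similarity graph: each word (node) receives
# exactly k incoming arrows, one from each of its k most similar words.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Hypothetical low-dimensional word representations.
vectors = {
    "rights":  (0.9, 0.1, 0.0),
    "privacy": (0.8, 0.2, 0.1),
    "law":     (0.7, 0.1, 0.3),
    "ICANN":   (0.1, 0.9, 0.2),
    "IANA":    (0.0, 0.8, 0.3),
    "DNS":     (0.1, 0.7, 0.4),
}

def similarity_graph(vectors, k=3):
    """Directed edges (neighbour -> word) from each word's k nearest neighbours."""
    edges = []
    for word, vec in vectors.items():
        others = [w for w in vectors if w != word]
        nearest = sorted(others, key=lambda w: cosine(vectors[w], vec), reverse=True)[:k]
        edges += [(n, word) for n in nearest]
    return edges

graph = similarity_graph(vectors)
# Every word receives exactly three incoming arrows:
print(all(sum(1 for _, t in graph if t == w) == 3 for w in vectors))  # True
```

Comparing two such graphs, built from two different document collections, is what allows the patterns of word usage of different stakeholders to be contrasted.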
The conclusions that stem from our cognitive analysis of the NETmundial content contributions could have been reached by a person who did not actually read any of these documents at all. The analysis does rely on some built-in human expert knowledge, but once set up, it can produce this and similar results in a fully automated manner. While it is not advisable to use this and similar methods instead of a real, careful study of the relevant documents, their power to improve on the work of skilled, thoroughly educated scholars and professionals should be emphasised.
Concluding remarks
However far we are from the ideal of true artificial intelligence, and given that the definition of what true artificial intelligence might be is not very clear in itself, the cognitive technologies that have emerged after more than 60 years of study of the human mind as a natural system are nowadays powerful enough to provide meaningful applications and valuable insight. With the rise of big data, and with numerous scientists involved in the development of more powerful algorithms, ever faster computers, cloud computing, and means for massive data storage, even very hard cognitive problems will become addressable in the near future. The planet, our ecosystem, now almost completely covered by the Internet, will introduce an additional layer of cognitive computation, making information search, retrieval, data mining, and visualisation omnipresent in our media environments.
A prophecy to end this paper with: not only will this layer of cognitive computation bring about more efficient methods of information management and extend our personal cognitive capacities, it will itself introduce additional questions and complications to the existing IG debate. Networks intermixed with human minds and narrowly defined artificial intelligences will soon begin to present the major units representing interests and ideas, and their future political significance should not be underestimated now, when their development is still in its infancy. They will grow fast, as fast as the field of cognitive science did.
Geneva Internet Conference

Bibliography
[1] Newell A and Simon HA (1976) Computer Science as Empirical Inquiry: Symbols and Search. Communications of the ACM 19(3), 113–126. doi:10.1145/360018.360022
[2] Dreyfus H (1972) What computers can’t do. New York: MIT Press, ISBN 0‑06‑090613‑8
[3a] Rumelhart DE, McClelland JL and the PDP Research Group (1986) Parallel Distributed Processing:
Explorations in the Microstructure of Cognition. Volume 1: Foundations. Cambridge, MA: MIT Press.
[3b] McClelland JL, Rumelhart DE and the PDP Research Group (1986) Parallel Distributed Processing:
Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models.
Cambridge, MA: MIT Press.
[4] Oaksford M and Chater N (2009) Précis of Bayesian rationality: The probabilistic approach to human reasoning. Behavioral and Brain Sciences 32(1), 69–84. doi:10.1017/S0140525X09000284
[5] Glimcher P (2003) Decisions, Uncertainty, and the Brain. The Science of Neuroeconomics. Cambridge,
MA: MIT Press.
[6] Pearl J (2000) Causality. Models, Reasoning and Inference. Cambridge: Cambridge University Press.
[7] Blei DM, Ng AY and Jordan MI (2003) Latent Dirichlet Allocation. Lafferty J (ed.). Journal of Machine Learning Research 3(4–5), 993–1022. doi:10.1162/jmlr.2003.3.4-5.993
[8] Griffiths TL, Steyvers M and Tenenbaum JB (2007) Topics in semantic representation. Psychological Review 114, 211–244. http://dx.doi.org/10.1037/0033-295X.114.2.211
[9] Grün B and Hornik K (2011) topicmodels: An R Package for Fitting Topic Models. Journal of Statistical Software 40(13). Available at http://www.jstatsoft.org/v40/i13
Appendix I
Timeline of cognitive science
Year Selected developments
1936 Turing publishes On Computable Numbers, with an Application to the
Entscheidungsproblem. Emil Post achieves similar results independently of Turing.
The idea that (almost) all formal reasoning in mathematics can be understood as a
form of computation becomes clear.
1945 The Von Neumann Architecture, employed in virtually all computer systems in use
nowadays, is presented.
1950 Turing publishes Computing machinery and intelligence, introducing what is nowadays known as the Turing Test for artificial intelligence.
1956 • George Miller discusses the constraints on human short-term memory in computational terms.
• Noam Chomsky introduces the Chomsky Hierarchy of formal grammars, enabling the computer modeling of linguistic problems.
• Allen Newell and Herbert Simon publish a work on the Logic Theorist, mimicking the problem-solving skills of human beings; the first AI program.
1957 Frank Rosenblatt invents the Perceptron, an early neural network algorithm for supervised classification. The critique of the Perceptron published by Marvin Minsky and Seymour Papert in 1969 is frequently thought of as responsible for delaying the connectionist revolution in cognitive science.
1972 Stephen Grossberg starts publishing results on neural networks capable of
modeling various important cognitive functions.
1979 James J. Gibson publishes The Ecological Approach to Visual Perception.
1982 David Marr, Vision: A Computational Investigation into the Human Representation and
Processing of Visual Information makes a strong case for computational models of
biological vision and introduces the commonly used levels of cognitive analysis
(computational, algorithmic/representational, and physical).
1986 Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols 1 and 2, are published, edited by David Rumelhart, Jay McClelland, and the PDP Research Group. The onset of connectionism (the term was first used by Donald Hebb in the 1940s). Neural networks come to be considered powerful models for capturing the flexible, adaptive nature of human cognitive functions.
1990s • Probabilistic turn: the understanding slowly develops, in many scientific centres
and the work of many cognitive scientists, that the language of probability
theory provides the most suitable means of describing cognitive phenomena.
Cognitive systems control the behaviour of organisms that have only
incomplete information about uncertain environments to which they need to
adapt.
• The Bayesian revolution: most probabilistic models of cognition are expressed in mathematical models relying on the application of Bayes' theorem and Bayesian analysis. Latent Dirichlet Allocation (used in the example in this paper) is a typical example of Bayesian analysis.
• A methodological revolution is introduced by Pearl’s study of causal (graphical)
models (also known as Bayesian networks).
• John Anderson’s methodology of rational analysis.
1992 Francisco J. Varela, Evan T. Thompson, and Eleanor Rosch publish The Embodied
Mind: Cognitive Science and Human Experience, formulating another theoretical
alternative to classical symbolic cognitive science.
2000s • Decision-theoretic models of cognition. Neuroeconomics: the human brain as a decision-making organ. The understanding of the importance of risk and value in describing cognitive phenomena begins to develop.
• Geoffrey Hinton and others introduce deep learning: a powerful learning method for neural networks, partially based on ideas that were already under discussion in the 1980s and early 1990s.
Appendix II
Topic model of the content contributions to the NETmundial
Methodology. A terminological model of the IG discourse was first developed by DiploFoundation’s IG
experts. This terminological model encompasses almost 5000 IG‑specific words and phrases. The text
corpus of NETmundial content contributions in this analysis encompasses 182 documents. The corpus
was pre‑processed and automatically tagged for the presence of the IG‑specific words and phrases.
The resulting document‑term matrix, describing the frequencies of use of IG‑specific terms across the
182 available documents, was modelled by Latent Dirichlet Allocation (LDA), a statistical model that
enables the recognition of semantic topics (i.e. thematic units) that account for the frequency
distribution in the given document‑term matrix. Every topic comprises all IG‑specific terms; the topics
differ in the probability they assign to each term. The model selection procedure was as follows.
We split the text corpus into two halves by randomly assigning documents to a training set and a test
set. We fitted LDA models ranging from two to twenty topics to the training set, computed the
perplexity (an information‑theoretic, statistical measure of badness‑of‑fit) of each fitted model on the
test set, and selected the best model as the one with the lowest perplexity. Since the text corpus is
rather small, we repeated this procedure 400 times and inspected the distribution of the number of
topics in the best‑fitting LDA models across all iterations. This procedure pointed towards a model
encompassing seven topics. We then fitted a seven‑topic LDA to the whole NETmundial corpus of content
contributions. Table A‑2.1 presents the most probable words per topic. The original VEM algorithm was
used to estimate the LDA model.
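The split-half model-selection loop described above can be sketched as follows. The original analysis used the VEM algorithm (e.g. as implemented in R's topicmodels package) and scanned 2 to 20 topics over 400 iterations; this scikit-learn version shrinks those numbers and substitutes a random stand-in for the real 182 × ~5000 document-term matrix, purely for illustration.

```python
import numpy as np
from collections import Counter
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
dtm = rng.poisson(0.2, size=(60, 120))      # stand-in document-term matrix

best_counts = []
for _ in range(5):                          # the paper repeats this 400 times
    train, test = train_test_split(dtm, test_size=0.5)
    perplexity = {}
    for k in range(2, 8):                   # the paper scans 2 to 20 topics
        lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(train)
        perplexity[k] = lda.perplexity(test)   # lower = better fit
    best_counts.append(min(perplexity, key=perplexity.get))

# The modal number of topics across iterations selects the final model,
# which is then refitted to the whole corpus.
k_best = Counter(best_counts).most_common(1)[0][0]
final_lda = LatentDirichletAllocation(n_components=k_best, random_state=0).fit(dtm)
```

On the random stand-in data the selected number of topics is of course arbitrary; the point is the procedure, not the result.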
Table A-2.1. Topics in the NETmundial Text Corpus. Each topic was recovered by the application of LDA
to the NETmundial content contributions. The words are listed in order of their probability of being
generated by each topic.
Topic 1. Human Rights: right, human rights, principle, cyberspace, state, information, internet, protection, access, communication, surveillance, law, respect, international, charter
Topic 2. Multi‑stakeholderism: IG, stakeholder, internet, principle, process, discuss, issue, participation, ecosystem, need, role, multistakeholder, governance, NETmundial, address
Topic 3. Global governance mechanism for ICANN: internet, global, governance, ICANN, need, technical, role, system, issue, IG, local, principle, level, country, state
Topic 4. Information security: internet, security, service, data, cyber, network, country, need, control, information, nation, policy, effective, trade, user
Topic 5. IANA oversight: ICANN, IANA, organisation, function, operation, account, process, review, policy, DNS, board, GAC, multistakeholder, model, government
Topic 6. Capacity building: curriculum, technology, analysis, research, education, blog, online, association, similarity, term, product, content, integration, innovative, public
Topic 7. Development: internet, IG, global, development, principle, open, governance, participation, continue, stakeholder, access, model, organisation, innovative, economic
Geneva Internet Conference
Figure A-2.1. The comparison of civil society and government content contributions to NETmundial.
We assessed the probability with which each of the seven topics from the LDA model of the
NETmundial content contributions determines the content of each document, averaged these
probabilities across all documents per stakeholder group, normalised them, and expressed the
contribution of each topic in %.
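The averaging behind Figure A-2.1 can be sketched in a few lines. The per-document topic probabilities below are invented for illustration; in the actual analysis they come from the fitted LDA model's posterior over the 182 NETmundial documents.

```python
import numpy as np

# doc_topics[i] = P(topic | document i): one row per document, seven topics.
doc_topics = np.array([
    [.40, .10, .10, .10, .10, .10, .10],   # a civil-society document
    [.30, .20, .10, .10, .10, .10, .10],   # a civil-society document
    [.05, .15, .20, .30, .10, .10, .10],   # a government document
])
stakeholder = np.array(["civil society", "civil society", "government"])

topic_shares = {}
for group in ["civil society", "government"]:
    mean_topics = doc_topics[stakeholder == group].mean(axis=0)
    # normalise and express each topic's contribution in %
    topic_shares[group] = 100 * mean_topics / mean_topics.sum()

print(topic_shares["civil society"].round(1))  # → [35. 15. 10. 10. 10. 10. 10.]
```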
Figure A-2.2. The conceptual structures of the topic of human rights (Topic 1 in the LDA model of
NETmundial content contributions) for civil society and government contributions. The graphs
represent the 3‑neighbourhoods of the 15 most important words in the topic. Each node represents a
word and has exactly three arrows pointed at it; the nodes from which these arrows originate
represent the three words whose usage is most similar to that of the word receiving the links.
(Figure panels: Civil Society; Government.)
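The 3-neighbourhood construction in Figure A-2.2 can be sketched as follows: for each word, find the three words whose usage is most similar, and draw an arrow from each of those neighbours to the word. The tiny word-usage matrix and cosine-similarity measure here are illustrative assumptions, not the paper's exact similarity computation.

```python
import numpy as np

words = ["right", "human", "law", "privacy", "state"]
usage = np.array([      # hypothetical word-usage vectors, one row per word
    [3, 1, 0, 2],
    [2, 1, 0, 2],
    [0, 2, 1, 1],
    [1, 0, 2, 2],
    [0, 3, 1, 0],
], dtype=float)

unit = usage / np.linalg.norm(usage, axis=1, keepdims=True)
sim = unit @ unit.T                 # cosine similarity between word pairs
np.fill_diagonal(sim, -np.inf)      # a word is not its own neighbour

edges = []                          # (neighbour -> word) arrows
for i, w in enumerate(words):
    for j in np.argsort(sim[i])[-3:]:      # three most similar words
        edges.append((words[j], w))
```

By construction every word receives exactly three incoming arrows, matching the figure's description.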
About the author
Goran S. Milovanović is a cognitive scientist who studies behavioural decision theory, perception of risk
and probability, statistical learning theory, and psychological semantics. He studied mathematics,
philosophy, and psychology at the University of Belgrade, graduating from the Department of
Psychology. He began his PhD studies in the Doctoral Program in Cognition and Perception, Department
of Psychology, New York University, USA, and defended a doctoral thesis entitled Rationality of
Cognition: A Meta-Theoretical and Methodological Analysis of Formal Cognitive Theories at the Faculty of
Philosophy, University of Belgrade, in 2013. Goran has classic academic training in experimental
psychology, but his current work focuses mainly on the development of mathematical models of
cognition and on the theory and methodology of the behavioural sciences.
He organised and managed the first research on Internet usage and attitudes towards information
technologies in Serbia and the region of SE Europe, while managing the research programme of the
Center for Research on Information Technologies (CePIT) of the Belgrade Open School (2002–2005),
whose foundation he initiated and supported. He edited and co‑authored several books on Internet
behaviour, attitudes towards the Internet, and the development of the Information Society. He managed
several research projects on Internet Governance in cooperation with DiploFoundation (2002–2014) and
also works as an independent consultant in applied cognitive science and da