The feasibility of artificial intelligence
performing as CEO: the vizier-shah theory
Aslıhan Ünal and İzzet Kılınç
Abstract
Purpose – This paper aims to examine the feasibility of artificial intelligence (AI) performing as chief
executive officer (CEO) in organizations.
Design/methodology/approach – The authors followed an explorative research design – the classic grounded theory methodology. The authors conducted face-to-face interviews with 27 participants who were selected according to theoretical sampling. The sample consisted of academics from the fields of AI, philosophy and management; experts and artists performing in the field of AI; and professionals from the business world.
Findings – As a result of the grounded theory process, “The Vizier-Shah Theory” emerged. The theory consisted of five theoretical categories: narrow AI, hard problems, debates, solutions and AI-CEO. The category “AI as a CEO” introduces four futuristic AI-CEO models.
Originality/value – This study introduces an original theory that explains the evolution process from narrow AI to AI-CEO. The theory handles the issue from an interdisciplinary perspective, following an exploratory research design – classic grounded theory – and provides insights for future research.
Keywords Chief executive officer, Grounded theory, Artificial intelligence, Futurism
Paper type Research paper
1. Introduction
Consciousness is the distinctive feature of humankind, and the effort to transfer this feature to an artifact is called artificial intelligence (AI). Reflecting the human mind in a human-made structure is an old dream. However, creating a mind has not been an easy undertaking, as there is still no consensus on where “consciousness,” “mind” or “intelligence” derive from. Science and philosophy have been struggling to answer how an abstract entity can emerge in a physical world. What consciousness is, and whether transferring this “human-specific” feature to an artifact is even possible, also remain long-standing debates.
In the middle of the twentieth century, the dream of creating a machine humanlike in its cognitive abilities began to become reality. In 1950, Alan Turing made a significant breakthrough in the history of AI with his paper “Computing Machinery and Intelligence,” the first article to address the mechanization of human-like intelligence comprehensively (Nilsson, 2010). Turing (1950) also proposed a test – the Turing Test – to evaluate the intelligence of a machine. According to Turing, if a machine passes the Turing Test, it can be considered “intelligent.” Soon after Turing’s groundbreaking paper, in 1956, the term “artificial intelligence” was coined at the Dartmouth Conference, and scientific studies commenced on generating a machine able to simulate all aspects of human intelligence (McCarthy et al., 1955). Since then, AI research has progressed with ups and downs (generally called the “seasons of AI”) and remarkable results have been achieved.
Aslıhan Ünal is based at the
Department of
Management Information
Systems, Cappadocia
University, Ürgüp, Turkey.
İzzet Kılınç is based at the
Department of
Management Information
Systems, Düzce University,
Düzce, Turkey.
Received 19 February 2020
Revised 17 July 2021
Accepted 19 July 2021
This research received no
specific grant from any funding
agency in the public,
commercial, or not-for-profit
sectors.
DOI 10.1108/FS-02-2021-0048 © Emerald Publishing Limited, ISSN 1463-6689 | FORESIGHT |
In recent times, the most striking AI event has been the victory of AlphaGo, the first Go-playing program, developed by DeepMind, to beat a human World Go Champion, in 2016 (DeepMind, 2021). What makes this victory striking is that Go is a complicated ancient Chinese game that requires wisdom and insight. Moreover, AlphaGo’s algorithm operates in a much more “humanistic” way than earlier Go programs or Deep Blue, the chess-playing program developed by IBM. The victory of AI over humankind in a specific area also raises a question: might AI take over the task of developing strategy in organizations in the future?
Recent research suggests that AI applications will take on routine tasks such as planning, programming and optimization, giving executives an opportunity to deal more effectively with “judgment work” (Shanks et al., 2015; Kolbjørnsrud et al., 2016). According to Thomas et al. (2016), AI can take part in the C-suite as an assistant (by “creating scorecards,” “maintaining reports” and “monitoring the environment”), a consultant (by “answering questions,” “building scenarios” and “generating options”) and even an actor (by “evaluating options,” “making decisions” and “budgeting and planning”). The findings of this recent research also raised a question in our minds: might AI perform top management tasks in the future and, moreover, be a chief executive officer (CEO)? Hence, this research is based on two research questions:
RQ1. Might AI take over the task of developing strategy in organizations?
RQ2. Might AI perform top management tasks in the future and, moreover, be a CEO?
The main purpose of this research is to examine the feasibility of AI performing as CEO in organizations in the future. For this purpose, we gathered data from 27 face-to-face interviews and analyzed them according to the classic grounded theory methodology. As a result, “The Vizier-Shah Theory,” which explains the evolution process from narrow AI to AI-CEO, emerged. It is an original and comprehensive theory that handles the AI-CEO phenomenon from an interdisciplinary perspective and introduces four possible futuristic AI-CEO types.
2. Conceptual background and literature review
2.1 Chief executive officer
A CEO is a top manager who is responsible for managing the company in a complex environment and the final authority for defining the strategic path of the organization (Thomas and Simerly, 1994, p. 960). CEOs are generally the most powerful figures in organizations (Hambrick and Mason, 1984, p. 196; Daily and Johnson, 1997). Executives in critical positions are expected to adopt a long-term viewpoint, develop short-term aims and strategies in accordance with this viewpoint and balance generally conflicting factors such as constituencies, demands, aims and requirements.
Top management studies are carried out within the scope of the strategic management discipline, focusing especially on the issues of “features of CEOs,” “strategic leadership” and “top management teams.” A considerable part of strategic management research examines who manages the organization, how, and what kind of processes are followed. The Upper Echelons Theory developed by Hambrick and Mason (1984) is considered a milestone in strategic management research. The theory provides a model through which the roles of top executives can be interpreted. Drawing on behavioral theory, Hambrick and Mason (1984) asserted that executives pass through a perceptual process of sequential steps while making significant decisions. In this model, the choices of executives reflect their personalities to some extent. Therefore, executives in the same objective environment are likely to make different decisions according to their personal prejudices, experiences and value judgments. Hence, the distinctive personal features of executives play a significant role in the strategic stance of organizations.
Zaccaro (2004) observed that after the Upper Echelons Theory was developed, a great number of studies examined the effects of top management on organizations; however, no significant model gathering the improvements in the area and the new ideas had been developed since then. For this reason, Zaccaro examined the conceptual models focusing on the nature of executive leadership and its requirements and constructed an integrated executive leadership model. In this model, Zaccaro (2004) defined requisite executive leader characteristics under five categories: “cognitive capacities,” “social capacities,” “personality,” “motivation” and “knowledge and expertise” (p. 291). Most of the characteristics proposed by Zaccaro are still distinctly human properties, such as creativity, need for achievement, behavioral flexibility and curiosity.
According to Bagozzi and Lee (2017), organization research is closely related to mental states and human phenomenology. Without knowledge of the functioning of the brain and the nature of mental states, it is hard to interpret ongoing conditions in organizations. The arguable reality of mental states may lead us to treat concepts such as “satisfaction,” “charisma,” “leadership,” “intention” and “emotion” as metaphors (p. 3). According to the authors, the mind-body problem should not be neglected in organizational research.
At present, AI systems are expert only in narrow areas and have not achieved the level of artificial general intelligence (AGI). Therefore, in today’s human-intensive workplace conditions, it is not possible for an AI to perform the role of an executive or a CEO. Owing to this lack of empirical data, a theory explaining the key features of an AI executive, or the effects of an AI-based top management board on organizational performance, has not been developed yet; but it is an expected phenomenon in the future. According to the findings of the Global Agenda Council’s Future of Software and Society Survey Report, the expected date for AI to take part as a decision-maker in top management is 2026. In total, 45% of the 816 participants – senior managers and experts from the information and communication technology sector – anticipate that this will happen by 2025.
2.2 Consciousness
Consciousness is a hard problem for both sciences and philosophy. The problem arises from
the fact that qualitative feelings emerge in a physical structure. Chalmers (1995a) explained
this paradox as “the really hard part of the mind-body problem” (p. 4). According to Chalmers
(1995b) “The really hard problem of consciousness is the problem of experience” (p. 2). We
see an object, this occurs as a result of information processing but also, we feel something
when we see an object or hear a voice, this feeling is subjective. The experience of listening to
music belongs to us and we do not know how it is experienced by another person. Namely,
experience (or in other terms “qualia” and “phenomenology”) is strictly related to our identity
but cannot be explained how it emerges in a physical body.
Fjelland (2020) articulated this issue by referring to Dreyfus and Polanyi. The author mentioned Polanyi’s examples of experience related to tacit knowledge – things we know but cannot articulate, such as swimming and riding a bicycle. For example, we know how to ride a bike but cannot explain exactly the dynamics of our riding experience; we just ride and know that we know how to ride. As we cannot articulate our tacit knowledge, we cannot transfer it to a computer. According to Fjelland (2020), Dreyfus considers AI from the perspective of Plato’s idealism. In Plato’s theory of knowledge there are two kinds of knowledge (a knowledge hierarchy): doxa and episteme. Episteme is the “real knowledge” that is reached by reasoning (propositional knowledge) and can be articulated explicitly. Doxa, in contrast, is the kind of knowledge that can be identified with “skills” based on tacit knowledge; it cannot be articulated and, for that reason, is placed at the bottom of the knowledge hierarchy. Dreyfus’s (1972) counterargument is that the way humans think cannot be programmed, because humans do not follow certain rules when playing chess, solving complex problems or acting in everyday life. They seem to “use global perceptual organization, making pragmatic distinctions between essential and inessential operations, appealing to paradigm cases and using a shared sense of the situation to get their meanings across” (p. 198).
This tacit knowledge of human experience is a multidisciplinary hard problem. In neuroscience research, consciousness is still a vague concept. However, the discipline has been improving, and scientists are lifting the lid day by day. For example, according to Zhao et al. (2019), although explaining the concept of consciousness is a hard issue, an “intrinsic neurobiological mechanism” has been explored: “the cortex of each part of the brain plays an important role in the production of consciousness, especially the prefrontal and posterior occipital cortices and the claustrum” (p. 6).
In the philosophy of mind, debates on consciousness continue across several philosophical approaches. Dualism considers mind and body as separate and different substances (Robinson, 2020). Materialism considers humans a single substance (material) and denies the view that the mind is a divine or nonmaterial substance; hence, consciousness is considered a function of the brain (Armstrong, 1968). Although the two appeared at different times in history and diverge in their theoretical foundations, materialism is generally used interchangeably with the term “physicalism” in contemporary usage (Stoljar, 2021). Idealism, in opposition to materialism, denies the existence of matter: according to this view, “all that exists are ideas and the minds,” for which Berkeley used the term “immaterialism” (Guyer and Horstmann, 2021). Panpsychism is “the doctrine that everything has a mind.” Finally, functionalism explains consciousness through its functions, apart from the biological system in which mental states emerge (Levin, 2018); hence, from the functionalist perspective, consciousness is not specific to the human body.
Descartes’ dualism (interactionism, Cartesian dualism) is the most criticized approach among these views. In the 17th century, Descartes (2003a) considered mind and body as two separate substances. According to Descartes’ (2003b) argument, the body is related to “space and time” whereas the mind is related only to time; mental substance cannot take part in the material body but can only interact with it through the pineal gland. Hence, with the philosophical proposition “Cogito, ergo sum” (I think, therefore I am), Descartes identified the existence of a human being with the ability to think, proposed that this ability is specific to humankind and concluded that animals are unconscious automata. Descartes’ pineal gland argument was eventually discredited. After Descartes, the dualist point of view was defended with different arguments. Leibniz’s parallelism denied Descartes’ interactionist approach and proposed the doctrine of “pre-established harmony”: the body and mind were created by God and their actions were programmed at the time of creation (Kulstad and Laurence, 2020).
Bagozzi and Lee (2017) stated that, apart from the classic dualist approach, property dualism and naturalist dualism consider mind and body separate substances but propose that both substances are natural, not metaphysical. Property dualism proposes that physical reality can be observed objectively from outside, whereas mental reality can be observed only subjectively from inside. Similarly, naturalist dualism proposes that both an objective physical substance and a subjective nonphysical substance are needed for understanding the mind, and that both substances are natural (Bagozzi and Lee, 2017).
Alongside these debates on consciousness, AI researchers in general are not interested in the “strong AI” assumption. According to McDermott (2007), most AI researchers – whether or not they believe humans and AI think in different ways – are computationalists to some extent, holding the theory that the human brain is a computer. The problem arises with phenomenal consciousness: only a minority of researchers care about this issue and believe it will one day be solved by AI.
2.3 Artificial intelligence
Alan Turing is widely appreciated as “the father of computer science and AI” due to his
seminal works on computational theory (1937), the Turing machine (1937) and the Turing
jFORESIGHT j
Test (1950) (Beavers, 2013) In his 1950 paper, Alan Turing brought forward a groundbreaking
approach to the question “Can machines think?” And instead of considering the concept of
“thinking” from an anthropomorphic perspective, he proposed “the imitation game.” Turing
(1950) organized the imitation game as follows: An interrogator and two respondents (a man
and a woman) take part in the game in such a way that the interrogator stays in a separate room
and can only see the typewritten form of the answers. The mission of the interrogator is to identify
who is the man and who is the woman. Turing, reorganized this game as he put “a machine”
instead of a human respondent. Hence, the interrogator is to decide which one is a machine and
which one is a human. According to Turing, if the machine achieves to pass the imitation game,
it can be considered as “intelligent.” In other words, it is not required for machines to “think
exactly the same as humans,” if its answers cannot be distinguished from a human’s, then, it is
intelligent. Then, this fact is actually different from how humanistic its algorithm operates.
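The imitation game described above is essentially a protocol, and it can be sketched in a few lines of code. The sketch below is only a toy illustration of that protocol, not an implementation from Turing's paper; the `Respondent` class, the canned answers and the random interrogator are all invented for the example.

```python
import random

class Respondent:
    """A hidden participant who answers typewritten questions."""
    def __init__(self, answer_fn, is_machine):
        self.answer_fn = answer_fn
        self.is_machine = is_machine

    def answer(self, question):
        return self.answer_fn(question)

def imitation_game(interrogator, respondent_a, respondent_b, questions):
    """Run one round: the interrogator sees only the text of the answers
    and must say which respondent ('A' or 'B') is the machine.
    Returns True if the machine was correctly identified."""
    transcript = [(q, respondent_a.answer(q), respondent_b.answer(q))
                  for q in questions]
    guess = interrogator(transcript)  # 'A' or 'B'
    truth = 'A' if respondent_a.is_machine else 'B'
    return guess == truth

# Toy usage: both respondents give indistinguishable answers, so an
# interrogator can only guess at chance level -- Turing's criterion
# for calling the machine "intelligent."
human = Respondent(lambda q: "I would say it depends.", is_machine=False)
machine = Respondent(lambda q: "I would say it depends.", is_machine=True)
random.seed(0)
detected = imitation_game(lambda t: random.choice(['A', 'B']),
                          machine, human, ["Can you write a sonnet?"])
```

The point of the sketch is Turing's behavioral criterion: the interrogator interacts only with the transcript, never with the mechanism producing it.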
Soon after Turing’s paper was published, a group of computer scientists organized a
summer research project at Dartmouth College in 1956. This project was a cornerstone for
AI research. The pioneers of the project were John McCarthy, Marvin L. Minsky, Nathaniel
Rochester and Claude E. Shannon. At Dartmouth, the term “artificial intelligence” was
coined and AI was founded as an academic discipline. McCarthy et al. (1955) defined the
purpose of the Dartmouth project as follows (p. 2):
The study is to proceed on the basis of the conjecture that every aspect of learning or any other
feature of intelligence can in principle be so precisely described that a machine can be made to
simulate it.
McCarthy and colleagues aimed to produce an AI that could simulate human intelligence in all its aspects. Although this aim has not been achieved yet, AI research has progressed in various dimensions and has even won victories over human intelligence in specific areas. The evolution of AI throughout history is generally described in terms of “seasons” or “booms” of AI (Miyazaki and Sato, 2018; Haenlein and Kaplan, 2019; Shin, 2019). AI has followed a cyclic process throughout history; as Yasnitsky (2020) stated, “winter gives way to spring and summer. Summer gives way to autumn and winter” (p. 16). Yasnitsky also raised an important question: whether we can consider the exponential developments in AI technology a “revolution,” or whether AI may be at the edge of a new winter season. Yasnitsky addressed reasonable counterarguments to the field and warned that unfounded enthusiasm and popularity boosted by PR would likely lead to a new winter, as AI history is full of great AI projects that were never achieved.
The AI discipline thus set sail in 1956 toward a brilliant purpose that has not been achieved yet. During the past decade, we have been experiencing a spring season in AI due to improvements in machine learning. However, the superiority of AI over humankind is still limited to narrow areas: AI systems are able to do what humans program them to do, but not at a general intelligence level.
AI is generally classified under three titles with regard to its evolutionary progress: artificial narrow intelligence (ANI), AGI and artificial super intelligence (ASI) (Kaplan and Haenlein, 2019). ANI exhibits intelligence superior to a human’s in limited areas. For example, AlphaGo is superior to the human world Go champion, but cannot exhibit the general mental states of the human it defeated. A general level of intelligence would be simulated by an AGI – the original purpose of the Dartmouth Summer Project. ASI, if it were ever invented, would exhibit intelligence beyond humans in every aspect. Besides this evolutionary classification, Kaplan and Haenlein (2019) classified current AI systems under three titles (p. 4):
■ Analytical AI “generates a cognitive representation of the world and uses learning based on past experience to inform future decisions.”
■ Human-inspired AI “can, in addition to cognitive elements, understand human emotions and consider them in their decision-making.”
■ Humanized AI “shows characteristics of all types of competencies (i.e. cognitive, emotional and social intelligence).”
From a philosophical perspective, AI is generally classified under two titles: weak AI and strong AI. The weak AI hypothesis asserts that machines “could act as if they were intelligent,” and the strong AI hypothesis asserts that machines that exhibit intelligence “are actually thinking (not just simulating thinking).”
In the literature review process, we found that the terms “weak AI” and “narrow AI” are used synonymously in some articles (Siau and Yang, 2017; Lu et al., 2018). Current AI systems are in the ANI category from the evolutionary perspective and are weak AIs from the philosophical perspective; but an AGI could also be a weak AI, because defining whether an AI is self-aware is a controversial issue. Indeed, the AI discipline is based on the weak AI hypothesis (Russell and Norvig, 2010): what is expected from an AGI is to “simulate” human-level intelligence. Therefore, in this research, we used the term “narrow AI” rather than “weak AI” to define current AI systems. We used the term AGI to represent an autonomous software program that is able to solve complex problems in various areas and has its own emotions, concerns, feelings, tendencies, etc., as humans do (Pennachin and Goertzel, 2007, p. 1).
2.4 Research on artificial intelligence and strategic management
In strategic management research, Holloway’s (1983) article strategic management and AI
has an important place, as it examined potential impacts of AI on management and
addressed the problems that may occur when AI takes place in management centers. The
major question Holloway addressed was “How is the Artificial Intelligence to be
administered?” And he addressed disturbing questions about the social and organizational
repercussions of inhibition of executive function by AI (see Holloway, 1983, p. 92).
Holloway’s (1983) ideas and the problems he foresighted were ahead of his time and the
questions he addresses have not been handled in detail and resolved yet.
Dewhurst and Willmott (2014) drew attention to self-managed organizations of the future. According to the authors, as AI becomes stronger in the organization, information will be democratized rather than bureaucratized. Business units and functions will not only continue to report to top management and the CEO but will also make better decisions by virtue of the precise insights and pattern-recognition capabilities of computers. Therefore, organizations will make better decisions on their own, and a self-managed organization may make top executives uncomfortable. Dewhurst and Willmott’s (2014) foresight is significant for providing a glimpse of the future of organizations.
Thomas, Fuchs and Silverstone (2016) proposed that AI has the potential to perform on a management board as an “assistant,” an “advisor” and an “actor,” and could enhance the performance of management boards in three ways: “change the mindset from incrementalism to experimentation,” “help shape strategy” and “challenge the status quo, including sacred cows” (p. 2). As intelligent machines take over these tasks, human executives will be able to focus on the task they are better at: “judgment-work.”
Parry, Cohen and Bhattacharya (2016) discussed an AI-based decision system in an organizational context. In this scenario, AI is not just a decision support system; rather, it is an actor in the decision-making process in collaboration with a human leader. Parry and colleagues named this system “automated leadership decision-making” performing in a social setting. The authors considered two conditions: the human leader holds veto power over the decisions of the AI system, or the human leader has no veto power. They also identified several advantages and disadvantages of this leadership style. According to the authors, an AI-based decision system would be superior to a human in forming vision, as humankind has inherent predispositions such as cognitive biases, beliefs and emotions. AI systems are free from these constraints (although bias in AI systems remains a controversial issue, because they process human data) and are highly capable of identifying latent patterns in complexity, but this advantage of AI applies to structured data inputs. De-individualized leadership would also mitigate agency problems in large organizations. However, where the human leader has no veto power, ethical problems such as accountability may arise. Parry and colleagues proposed granting the human leader a “critical event logged veto” right to overcome this challenge.
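As a rough software sketch only – Parry and colleagues describe a governance idea, not an implementation – the “critical event logged veto” could be modeled as follows. The `Decision` and `LoggedVetoSystem` names and the binary criticality flag are invented for this illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    description: str
    critical: bool  # only critical events expose a veto opportunity

@dataclass
class LoggedVetoSystem:
    """Toy model: the AI decides autonomously; the human leader may veto
    only critical decisions, and every veto is timestamped and recorded,
    preserving accountability."""
    veto_log: list = field(default_factory=list)

    def execute(self, decision, human_veto=False):
        if decision.critical and human_veto:
            self.veto_log.append((datetime.now(timezone.utc),
                                  decision.description))
            return "vetoed"
        return "executed"

# Toy usage: a routine decision runs regardless of the human's objection,
# while a critical one can be vetoed and the veto is logged.
system = LoggedVetoSystem()
r1 = system.execute(Decision("routine budget reallocation", critical=False),
                    human_veto=True)
r2 = system.execute(Decision("divest core business unit", critical=True),
                    human_veto=True)
```

The design choice mirrors the authors' compromise: autonomy for the AI in ordinary matters, with a narrow, auditable human override for critical events.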
von Krogh (2018) also examined the issue of delegating decision-making authority to AI. According to von Krogh, delegating decision-making will change organizations unprecedentedly. Data flow may centralize around data-processing algorithms rather than following the information structure spread among business units and human experts. Besides, there is a possible and serious threat that AI may stay programmed toward one or more aims and may not need a particular incentive for processing information. For this reason, von Krogh emphasized that “how the phenomenon of AI relates to organization design” needs to be a fundamental research topic for management scholars, and he raised significant questions that require examination (p. 405). According to the author, a research program grounded on abductive reasoning is required, through which both qualitative and quantitative data are gathered and analyzed.
According to Barnea (2020), AI is superior to humans in processing big data, and humans can make wrong strategic decisions even when they have considerable information. This superiority of AI will likely lead to a groundbreaking change in the concepts of management and decision-making. If organizations can analyze the “cognitive algebra” of competitors’ decisions, AI will be more effective in predicting their next moves, providing a great competitive advantage. Such AI systems would also prevent senior managers from making biased decisions. Barnea foresees human-machine collaboration in the C-suite in the future.
Farrow (2020) conducted a workshop on the future of AI, and the findings show that “AI has the jobs humans don’t want to do” is the best future case. Participants of the workshop foresee that AI will augment human decision-making as an advisor or an assurance service by 2038. Farrow’s scenario makes an optimistic impression in which AI and humans are colleagues, not enemies. Binary language may take over from human language in the future; humans and AI would produce services and solutions together. Humans would no longer be at the center of work, and the concepts of employee and work are expected to change or be redefined. AI or human leaders may guide hybrid human/AI teams.
Ferràs-Hernández’s (2018) future expectations are bolder and “scary.” According to the author, a “future digital CEO” and even “self-driven companies” are possible. This may even lead to the end of management science, but he adds that the most powerful weapon of humans in strategic management is intuition, which is related to “creative thinking and art.” At present, an intelligent machine can find patterns and answer questions better than humans, but it cannot ask questions. Hence, humans still lead the way in management in terms of intuition and social interaction, but AI is likely to close this gap as it gains more strategic thinking capabilities. Spitz (2020) also supports the idea that “as AI continues to develop, machines could become increasingly legitimate in autonomously making strategic decisions, where today humans have the edge” (p. 5). According to Spitz, a general level of intelligence is not necessary for AI to become dominant in human-specific areas of the strategic management process. AI evolves exponentially, and its improvement includes the field of artificial emotional intelligence. In that case, humans have one choice if they are to sustain their superiority in decision-making: to become antifragile, anticipatory and agile (AAA). Otherwise, the C-suite would turn into an A-suite.
Our literature review on strategic management and AI showed that there are both optimistic and pessimistic expectations about the role of AI in management. Delegation of decision-making to AI, ethical concerns about AI, human-AI collaboration in strategic management and the future of management science are the main topics researchers have handled, but we could not find comprehensive research that examines the topic “AI as a CEO” in its various dimensions. This research addresses that gap, and we decided to follow an explorative research design to examine the feasibility of AI performing as CEO.
3. Methodology
In this research, we followed the classic grounded theory (CGT) design. Grounded theory (GT) was discovered by two sociologists, Glaser and Strauss (1965, 2006), because the existing sociological theories did not meet the scope of their research at the time. Glaser and Strauss (2006) defined GT as the “discovery of theory from data” (p. 1).
In the following years, Glaser and Strauss parted ways and remodeled the GT methodology from diverse epistemological and ontological perspectives. Glaser’s (1978) CGT design is generally associated in the literature with objectivist epistemology and critical realist ontology (Annells, 1997). However, Glaser (2007) emphasizes the transcendent nature of CGT and defines it as a general methodology that does not adhere to a specific paradigm. According to Glaser, CGT is a “highly structured but eminently flexible methodology” (p. 48).
In this research, we preferred Glaser’s design because CGT assumes that the pattern is hidden in the data and the mission of the researcher is simply to discover it, and because CGT provides a flexible research process that does not adhere to a specific paradigm. CGT has specific research methods, and the researcher should follow these procedures and let the theory emerge: the mission of a classic grounded theorist is to discover a theory, not to invent it.
Glaser defined GT as “simply the discovery of emerging patterns in data” (Glaser, 2015, p. 13) and as the integration of “simultaneous,” “sequential,” “subsequent,” “scheduled” and “serendipitous” procedures (Glaser, 2007). The procedures of GT are listed below:
■ Theoretical sampling.
■ Theoretical coding: open coding and selective coding.
■ Constant comparative method.
■ Memoing.
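Purely as an illustration of the constant comparative logic – comparing each newly coded incident against emerging categories and either merging it in or letting it seed a new category – the procedure can be caricatured in code. The codes, the Jaccard similarity rule and the threshold below are invented for this sketch and are not drawn from the study; real GT comparison is an interpretive, not mechanical, act.

```python
def compare_and_group(incidents, similarity_threshold=0.3):
    """Toy constant-comparative pass: each incident is a list of open codes.
    A new incident joins the first category whose code set overlaps enough
    (Jaccard similarity); otherwise it seeds a new category."""
    categories = []  # each: {"codes": set, "incidents": list}
    for incident in incidents:
        codes = set(incident)
        placed = False
        for category in categories:
            overlap = len(codes & category["codes"]) / len(codes | category["codes"])
            if overlap >= similarity_threshold:
                category["codes"] |= codes          # the category concept broadens
                category["incidents"].append(incident)
                placed = True
                break
        if not placed:
            categories.append({"codes": set(codes), "incidents": [incident]})
    return categories

# Toy usage with invented codes: the two incidents about AI's limits merge
# into one category, while the management incident starts its own.
cats = compare_and_group([
    ["narrow AI", "no general intelligence"],
    ["narrow AI", "task-specific"],
    ["CEO role", "strategic decisions"],
])
```

The sketch captures only the mechanical skeleton of the method: incidents are compared with incidents, then with the properties of emerging categories, until categories saturate.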
3.1 Theoretical sampling
Definition. Theoretical sampling is a data collection method specific to GT. In this process, the researcher collects and analyzes the data jointly (Glaser and Strauss, 2006). The aim of the researcher is to generate a theory, and the next sample and research area are defined to serve this aim (Glaser, 2007).
Application. At first, we decided on a sample consisting of strategic management professors; but while asking for interviews, some academics stated that they did not have comprehensive knowledge of AI and declined our interview requests. We therefore realized that this sample was insufficient and extended its scope to academics from the management, management information systems and computer science disciplines, as well as executives, entrepreneurs and experts working in the AI field. Through the data collection process we further expanded the sample with academics with expertise in the philosophy of mind and the philosophy of AI, and with artists who combine art and AI. (One of these participants performs generative art and conducts studies on integrating technology, algorithms and art; he also developed a poet robot. The other is an inventor, poet, author and computer scientist who designed a poet robot and works on an AI project.) As a result, we conducted 27 interviews and stopped collecting data when we decided that the categories were saturated. Information about the data is presented in Table 1.
jFORESIGHT j
We did not restrict participant selection to a particular city, but we were limited to Turkey, where we endeavored to reach every participant related to the subject of the research. The data collection period took approximately seven months, from July 2018 to February 2019. Interviews were conducted in participants’ workplaces, and requests were sent via email with an attached ethical report approved by the researchers’ institutions. We asked participants for permission to use an audio recorder in order to prevent data loss. Interviews were performed by the first author, and semi-structured and unstructured interview methods were followed.
Table 1 Information about participants and interviews

No. | Participant | Area | Length | Date | City
1 | Professor | Management information systems (MIS) | 26 min | July 27, 2018 | Düzce
2 | Bureaucrat, MSc | Public | 40 min | October 4, 2018 | Ankara
3 | CEO, PhD | Software and robotics | 37 min | October 22, 2018 | İstanbul
4 | Professor | Management | 32 min | October 25, 2018 | Düzce
5 | Associate professor, entrepreneur | MIS and software | 50 min | November 13, 2018 | Ankara
6 | Professor, entrepreneur | Computer engineering | 25 min | November 17, 2018 | Düzce
7 | Professor, author | Computer engineering | 35 min | November 22, 2018 | İstanbul
8 | Associate professor, entrepreneur | Computer engineering | 30 min | November 26, 2018 | Antalya
9 | Assistant professor | Computer engineering | 1 h | December 3, 2018 | Isparta
10 | Professor, bureaucrat | Management | 21 min | December 7, 2018 | İstanbul
11 | Associate professor | Philosophy of AI | 1 h 20 min | December 10, 2018 | Ankara
12 | Professor, entrepreneur | Software and electronic engineering | 1 h 15 min | December 11, 2018 | Ankara
13 | MSc, lecturer | Electrical-electronics engineering and deep learning | 45 min | December 14, 2018 | İstanbul
14 | AI manager | AI and software | 1 h | December 17, 2018 | İstanbul
15 | Associate professor | Management | 1 h 4 min | December 24, 2018 | Eskişehir
16 | Assistant professor, author, TV commentator and presenter | Theology and philosophy | 40 min | January 3, 2019 | İstanbul
17 | Director of architecture and quality assurance | IT | 1 h 20 min | January 4, 2019 | Ankara
18 | Software director | Software | 32 min | January 4, 2019 | Ankara
19 | Professor | Management | 1 h 30 min | January 8, 2019 | Ankara
20 | Assistant professor | Management | 1 h 30 min | January 17, 2019 | İstanbul
21 | Associate professor | Philosophy | 1 h 8 min | January 18, 2019 | İstanbul
22 | Artist, instructor, lecturer | Generative art | 1 h 4 min | February 7, 2019 | İstanbul
23 | Associate professor | Sociology | 24 min | February 12, 2019 | Bolu
24 | Professor | Philosophy | 2 h 45 min | February 13, 2019 | İstanbul
25 | Professor | Philosophy | 53 min | February 14, 2019 | İstanbul
26 | Professor | Urban and regional planning | 50 min | February 18, 2019 | Ankara
27 | Entrepreneur, MSc, author | Cybersecurity | 1 h 25 min | February 2, 2019 | Ankara
As a first step, we prepared an interview form consisting of nine open-ended questions with probes. We revised this form throughout the data collection process in line with the emerging concepts. Before we started to collect data, we submitted our research project and interview form to X University Scientific Research and Publication Ethics Committee. The committee approved our research project as ethical with decision no. 2018/2021 on May 24, 2018.
The interviewer adopted a “communicative validity” approach (Kvale, 1996, p. 246) and performed a participative and questioning role in the interviews. According to Kvale (1996),
“Communicative validation approximates an educational endeavor where truth is developed
through a communicative process, with both researcher and subjects learning and
changing through the dialogue” (p. 247). Our sample consists of participants from different
scientific fields, and participants’ ideas sometimes contradicted each other. The interviewer
presented her interpretations to the participants and asked for their opinions; she sometimes
questioned their views, shared our arguments about the topic and the views of other
participants, and discussed these issues and the initial findings with them.
Communicative validation enabled us to compare interdisciplinary views during the
interview process. Apart from participants’ views on interdisciplinary issues, we also
drew on their expert knowledge. All of the participants are experts, experienced in
a particular area. Some of them referred to books and articles and read passages from
them during the interviews to support their views. While writing the research report, we treated
this kind of knowledge as expert knowledge and interpreted it against the interview data,
especially when participants’ expert views confirmed each other and the literature.
Hence, we did not cite references when using participants’ expert knowledge;
we cited references when referring to the literature as a supportive data source.
3.2 Constant comparative method
Definition. The grounded theorist starts the coding process as soon as the first data are collected and constantly compares the new data with the former data (Glaser and Strauss, 2006). Incidents, concepts and hypotheses are constantly compared to provide theoretical elaboration, saturation and verification of emerging concepts. This method also serves as an auto-control mechanism for the emerging theory (Glaser, 2007).
Application. As we collected new data, we compared the new content with our previous findings. When we realized that a new concept was emerging, we headed toward the participants and research areas related to it. We revised our interview form, adding new questions and eliminating some others. Thus, we focused our energy on the emerging concepts and their relationships.
3.3 Theoretical coding
Theoretical coding is applied in two stages: open coding and selective coding. In the open coding process, the analyst codes the data line by line and searches for initial codes (Glaser, 2007). The selective coding process starts when the researcher discovers the core category. The core category is the variable that explains how the main problem is solved. As Glaser (2002) stated, the “core category organizes other categories by continually resolving the main concern” (p. 30). After the core category has emerged, the researcher continues coding by focusing on the core category and its relationships with the other categories. Saturating the categories and testing hypotheses are at the center of this process; the researcher defines the sample and collects new data to fulfill this aim. This coding process is called “selective coding” (Glaser, 2007).
Application. The data from the first 10 interviews showed that AI has superiority over humans in terms of rationality, objectivity, speed, etc., while humans have superiorities over AI such as emotion, experience, emotional intelligence, consciousness, etc. These features can be called deficiencies of AI. Participants remarked that “AI cannot perform the role of a human CEO” because of these deficiencies. During the 11th interview, a participant mentioned for the first time the differences among artificial general intelligence (AGI), strong AI, narrow AI and weak AI, and these differences became more apparent in the following interviews. As a result, we realized that some basic problems must be solved before “AI in the CEO position” can be considered. These problems are really “hard” and relate to the computer sciences, philosophy and neurosciences. Hence, in the middle of the research process, “solving hard problems” emerged as the core category.
We titled the core category “hard problems.” After we discovered the core category, we no longer used an interview form for collecting data and followed an unstructured interview method. The emergence of the core category shifted the coding process from “open coding” to “selective coding” and the data-gathering method from “semi-structured” to “unstructured” interviews.
3.4 Memoing
Memos are theoretical notes that the researcher records throughout the GT process. Researchers should take notes whenever an idea about the emerging theory comes to mind; Glaser termed these moments “eureka moments.” Hence, memos are theoretical discoveries and help the researcher recognize the correlations between concepts (Holton, 2017).
Application. The research process was full of eureka moments. We recorded these ideas in a Word file, and they directed us toward the emerging concepts and the relationships among categories.
Consequently, we strictly adhered to these basic procedures of CGT throughout the research. We conducted the research in line with the following words of Glaser (2015, p. 13):
“Everybody engages in GT every day because it’s a very simple human process to figure out
patterns and to act in response to those patterns. GT is the generation of theories from data. GT
goes on every day in everybody’s lives: conceptualizing patterns and acting in terms of them.”
In brief, we searched our data for patterns and discovered concepts, a core category and four theoretical categories related to it.
4. Findings
As a result of the CGT process, five theoretical categories emerged; these categories were linked to each other and to the core category via five hypotheses. We titled the emergent theory The Vizier-Shah Theory by referring to the historical functions of the Vizier and the Shah. The term Vizier dates to Ancient Egypt and “is conventionally used translation of Egyptian term tjaty who was responsible for overseeing the running of all state departments, excluding the religious affairs.” The Vizier was the most powerful figure under the Pharaoh and “was not simply a counselor or advisor to the king but was the administrative head of the government.” The vizier was also a position in the Ottoman Empire’s bureaucracy: the Grand Vizier (Vezir-i Azam) was the most prominent member of the dîvân, was expected to meet the requirement of collegiality and had a “decisive role in the daily running of the Ottoman state administration.” Namely, the Grand Vizier was “analogous to prime minister” in the Ottoman bureaucracy (Weiker, 1968, p. 457). “Shah,” in turn, is “the title of the king of Persia,” shortened from Old Persian xšayathiya, “king.” We preferred the Vizier-Shah analogy because of the terms’ historical positions. Also, in the Turkish language, Vizier (Vezir) and Shah (Şah) are the names used for the Queen and King in chess. Thus, the Shah represents the CEO and the Vizier represents the “right hand” of the CEO in this research. Consequently, The Vizier-Shah Theory includes five main theoretical categories:
■ Narrow AI.
■ Hard problems (the core category).
■ Debates.
■ Solutions.
■ AI as a CEO.
These five categories are grounded on 13 theoretical codes and 45 empirical codes. The
categories and codes are listed in Table 2.
Table 2 Categories and codes

Data codes | Theoretical codes | Theoretical categories
Rationality; Objectivity; Speed; Durability; Computing ability; Big data processing | Superiority | Narrow AI
Consciousness; Emotional intelligence; Incentive; Will; Judgment; Motivating; Leadership | Deficiency | Narrow AI
Dualism; Materialism; Idealism; Panpsychism | Philosophy: the mind-body problem | Hard problems (the core category)
NP-complete problems; Toolbox | Computer sciences’ problems | Hard problems (the core category)
Brain; Consciousness | Neuroscience: the functioning of the brain | Hard problems (the core category)
AI and dualism?; Semantics; Paradigms | Debates among disciplines | Current debates
Cacophony; Fashion effect; Lack of theoretical knowledge; Scenarios; Ethics and legal issues | Mess | Current debates
Flexible view; Merging disciplines; Theoretical view | Holistic view | Solutions
Educational revolution; Legal arrangements; AI strategy; AI divisions | AI investments | Solutions
Tool; Extension | Vizier | AI CEO
Transhumanism; Cyborg CEO | Vizier-Shah | AI CEO
Strong AI CEO; Weak AI CEO; The ego of human | Shah | AI CEO
Swarm intelligence; Network | Swarm-Shah | AI CEO
Total: 45 data codes | 13 theoretical codes | 5 theoretical categories
In Table 2, the first column presents the data codes, found in the data as a result of the open coding process. The second column presents the theoretical codes, which are conceptualized forms of the data codes. The third column presents the theoretical categories, which are more conceptualized forms of the theoretical codes. This conceptualization process shows that the theory is grounded in the data yet also generalizable, as the data codes were conceptualized and ascended to theoretical categories.
The relationships between the five categories are expressed by five hypotheses. The illustration of The Vizier-Shah Theory is presented in Figure 1.
Five hypotheses of the theory are listed below:
H1. Narrow AI should enhance its cognitive capabilities to the general intelligence level to perform the role of a CEO.
H2. “Hard problems” prevent narrow AI from performing CEO roles.
H3. Recent significant improvements in narrow AI give rise to debates.
H4. Hard problems give rise to debates.
H5. If and when hard problems and current debates are solved, it may be possible for AI to become a CEO.
Before explaining the theory, we want to explain how The Vizier-Shah Theory emerged. H1 was the first hypothesis to emerge during the grounded theory process. During our interviews, we realized that narrow AI should gain general-level intelligence capabilities to perform the tasks of a CEO. All of our participants except one agreed on this view. Hence, we defined H1 as our first hypothesis: narrow AI should enhance the scope of its intelligence to be a CEO. However, we found a strong barrier that prevents narrow AI from enhancing its capabilities to general-level intelligence. We titled these problems “hard problems,” the core category. Then, our second hypothesis emerged: hard problems prevent narrow AI from performing CEO roles. The emergence of the category “hard problems” was a turning point in our research process. For that reason, we defined it as the core category. It appeared in the midst of our data-gathering process and strongly influenced the aim of the research, the participant selection and the interview method we used. We drew the arrow representing H1 directly from Narrow AI to AI CEO to emphasize the first crucial finding of the research. Of course, narrow AI would have to follow the path linked by hypotheses H2-H5 to ascend to the CEO level. Consequently, the arrows in Figure 1 represent the hypotheses that link the categories and are organized according to their sequence of emergence. For example, the categories “hard problems” and “debates” emerged
Figure 1
almost simultaneously, but hard problems were such a crucial factor that they changed the flow of the research process. To emphasize the power of “hard problems,” we illustrated the category as a relatively big square and placed “debates” just beneath it. By doing so, we intended to show that these two categories emerged in almost the same time interval, while “hard problems” plays the central role in The Vizier-Shah Theory, as we organized the other categories around it. In the following section, we explain the theory in detail.
5. The Vizier-Shah theory
In this section, we explain The Vizier-Shah Theory under five main titles: Narrow AI, hard
problems, debates, solutions and AI-CEO.
5.1 Narrow artificial intelligence
This category represents the superior and deficient aspects of current AI technologies, which are generally defined as “narrow AI.” The superiorities and deficiencies are considered relative to human capabilities. Narrow AI is superior to humans in specific areas but not at a general intelligence level, and this deficiency is a great obstacle that prevents AI from occupying a CEO position in an organization. Hence:
H1. Narrow AI should enhance its cognitive capabilities to the general intelligence level to perform as a CEO.
Narrow AI technologies continue to improve exponentially, especially in the machine learning area. The speed of improvement also gives rise to several concerns and debates. Hence:
H3. Recent significant improvements in narrow AI give rise to debates.
5.2 Hard problems
There are important obstacles that prevent AI from becoming strong AI or AGI. These obstacles were titled the “hard problems” of computer sciences, philosophy of mind and neuroscience. They are considered unsolvable in the short term or, according to some, will never be solved.
The hard problem of computer sciences. The hard problems of the field are the “non-deterministic polynomial time” (NP) and “NP-complete” problems. These are complex problems for which no efficient algorithm is known: solving them on a Turing machine (i.e. a computer) can require an impractically long running time. Researchers should overcome these problems; otherwise, there is no point in talking about or predicting a future AI that can exhibit fully human intelligent behavior. At present, the Turing machine is incapable of solving such complex problems efficiently. The Turing machine is still not comparable to the human brain, and the “toolbox” of a computer scientist is not capable of an AGI revolution. Thus, participants think that it is too early to expect AI to perform human-specific actions and take over all human-specific tasks. The predictions of feasibility range between 50 and 150 years.
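To make the intractability of NP-complete problems concrete, consider a minimal sketch (our own illustration, not from the study; the function name and numbers are invented) of the NP-complete subset-sum problem solved by brute force in Python:

```python
from itertools import combinations

def subset_sum(numbers, target):
    """Brute-force check whether any subset of `numbers` sums to `target`.

    Tries all 2**n subsets, so the running time doubles with each added
    element -- the exponential blow-up typical of NP-complete problems
    when no polynomial-time shortcut is known.
    """
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return True
    return False

# A 5-element instance is trivial (only 32 subsets to try):
print(subset_sum([3, 9, 8, 4, 5], 13))  # True (9 + 4, or 8 + 5)
# But each extra element doubles the work: 60 elements would already
# mean roughly 10**18 subsets, far beyond practical running times.
```

Verifying a proposed answer (summing one subset) is fast; finding one among exponentially many candidates is what makes the problem “hard” in the sense the participants describe.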
The hard problem of neurosciences. The functioning of the human brain has not yet been completely explained, and this is another obstacle to generating an artificial brain that operates like the human brain. Is the human brain’s structure and function the only way for an AI to think, make decisions and judgments, and be self-aware? Neuroscience examines the functioning of the brain and the neural system and adopts a deductive materialist (in other terms, physicalist) approach. Deductive materialism asserts that the essence of everything in the universe is material and denies other forms of substance. According to this approach, consciousness is not a substance apart from the brain; namely, consciousness is a function of the brain. Therefore, it is not possible to generate an artificially conscious brain without entirely solving how the brain works. Even if this happens in the future, it is an enigma whether an artificial brain would exhibit human intelligence, because consciousness is commonly considered a human-specific phenomenon and has been at the center of ongoing debate in philosophy.
The hard problem of philosophy. Human phenomenology, generally called “subjective experience” or “qualia,” has not yet been definitively explained. Subjective experience also underlies the problem of the first-person versus third-person view of consciousness (Chalmers, 1999, 2002). This is another conflict between science and philosophy (Dennett, 2001; Ross, 2002). Human experiences are subjective, and a first-person narrative is used to express them. However, the sciences examine facts and use third-person language. Consciousness states of humans cannot be observed objectively. Can human phenomenology, then, be the subject of science? There are opinions that it can and that it cannot. This is one of the problems that should be solved.
A CEO is required to make judgments, draw inferences through logical reasoning and take rational action, but should an AI-CEO also have emotional states such as hate, love, anger, ambition, hope or desire? And if not, how effective would the actions of an entity isolated from all these emotional states be in a population of humans? Emotional states have an “asymmetric” impact even in today’s human-intensive business world: it cannot be asserted that they have an entirely positive or an entirely negative effect on organizational performance. For example, in the decision-making process, some emotions improve performance, but under some conditions the outcome is worse. Participants from the management discipline drew attention to this issue. As emotional states have an asymmetric impact, other problems arise: Will we be able to transfer the emotions we mark as “beneficial” to AI and eliminate the “harmful” ones? Would it be better if AIs made purely rational decisions isolated from emotions? Or should AI think and act “human-like”? And how can we be sure that an AI is conscious, even though it exhibits intelligent behaviors? To answer that, we should know exactly how conscious states emerge. The state of consciousness is a multidisciplinary hard problem and the last enigma of humankind: What is consciousness? This question is strongly related to the hard problem of philosophy: the mind-body problem.
There have been several approaches to the mind-body problem in philosophy: deductive materialism, idealism, dualism, functionalism and panpsychism approach it from different perspectives. Participants from the philosophy discipline drew particular attention to this point. The phenomenon “AI may one day have the same consciousness states as a human” is possible through the lens of functionalism, but not from the classic dualist and deductive materialist perspectives. It is obvious that classic dualism cannot explain a “conscious AI” phenomenon, because Descartes referred to the “pineal gland” as the interaction area of body and soul, and this argument has collapsed. Moreover, other dualist approaches defend the idea that consciousness is specific to humankind. Thus, AGI research is based on a materialist point of view. Deductive materialism and functionalism are the materialist approaches mentioned repeatedly by participants. Neuroscience adopts a deductive materialist approach, explains consciousness states through the interaction of neurons and denies the existence of a separate substance; indeed, radical deductive materialists argue that the terms “mind,” “soul” and “spirit” should be eliminated from the language. Functionalism is also a materialist approach but explains consciousness through its functions: if these functions can be exhibited by another creature, that creature can be accepted as conscious, for it is able to perform the functions of consciousness. A participant (a professor of philosophy) referred to Putnam’s criticism of deductive materialism: if consciousness were completely tied to brain functions, then consciousness states would be specific to that brain. However, according to Putnam, some living things that do not have the same brain structure as humans are nevertheless able to exhibit similar consciousness states. The participant illustrated this as follows:
For example, Hilary Putnam states that octopuses also have feelings such as pain and hunger. These are his original examples. He states that octopuses’ brains are different from humans’. Therefore, similar consciousness states can emerge in different organisms with different brain structures, different anatomies and different physiologies. This is a quite different point of view; this is not deductive materialism.
According to the findings, AI research is much closer to the functionalist point of view, and according to functionalism, AI may exhibit conscious states in the future. One of the remaining challenges is that how conscious states emerge has not been solved yet. At this point, a multidisciplinary approach is required: the neuroscience, AI and philosophy disciplines intersect in “consciousness” research. Hence:
H2. “Hard problems” prevent narrow AI from performing CEO roles.
H4. Hard problems give rise to debates.
5.3 Current debates
Debates among disciplines. The computer sciences, philosophy and neuroscience disciplines follow different research paradigms, adopt different research methods and consider AI issues from different perspectives. Therefore, conflicts between the disciplines arose inevitably. We discussed previously that the idea of “conscious AI” conflicts with classic dualism and mentioned that AI research progresses in line with materialism, specifically functionalism. Participants also mentioned that AI research is in accordance with the functionalist approach, but we found that there are still conflicts between neuroscience and philosophy, especially about the dualist approach to AI. The argument that “AI research adopts a dualist approach” concerns the structure of an AI system: AI is composed of two separate parts, hardware and software. The standpoint of the neuroscientist António Damásio was given as an example of this point of view. The participant (a professor of philosophy) referred to Damásio’s (1995) book Descartes’ Error: Emotion, Reason and the Human Brain and read the following section (pp. 247-248):
My concern, as you see, is for both the dualist notion with which Descartes split the mind from
brain and body (in its extreme version, it holds less sway) and for the modern variants of this
notion: the idea, for instance, that mind and brain are related, but only in the sense that the mind
is the software program run in a piece of computer hardware called brain; or that brain and body
are related, but only in the sense that the former cannot survive without the life support of the
latter.
Consequently, a substance is a thing that needs nothing, except itself, to exist. According to the dualist view, there are two different substances, and this duality of substances is identified with the hardware and software components of AI. However, there is a nuance in this comparison: the body and the hardware are both material, but what about the software? Can it be considered a material substance or a spiritual substance, as in dualism? It is obvious that software is not a spiritual substance like the mind in dualism. If it is a material substance, then the dualist view is denied; if it is considered an abstract substance, an “idea,” then the materialist view is denied.
Another important point in understanding the philosophy of AI through the lens of the philosophy of mind is that whereas software can operate on different hardware, consciousness emerges only in the brain to which it belongs. Transferring consciousness from the body in which it emerged to a different body has not been achieved. Neuroscience research generally adopts a deductive materialist view, but this does not mean that transferring consciousness will never come true, though it has not been achieved yet. A participant likened the case of software operating on different hardware to the reincarnation phenomenon in Plato’s idealism. Another participant (a computer scientist and expert) mentioned that he is a Platonist:
A line is the union of dots. A triangle is a shape constituted of three dots connected with lines. If you manage to describe the triangle, you crack the secret of AI. Actually, that is what I want to say: describing the triangle conceptually will solve the problem. It always gets stuck in mathematics and cannot be conceptualized. I am a Platonist in this sense.
Materialism and idealism are two completely opposed philosophical perspectives: idealism denies the material, and materialism denies the idea. Dualism accepts both body and mind, and in this way it distinguishes itself from the other two philosophical views. The philosophy of AI, however, does not fit completely with any of these three approaches. We found that although, according to some, the AI discipline is based on the dualist view, AI is a project based on and fitting the functionalist approach; yet, as the quote above shows, a computer scientist may adopt a Platonist approach to solving the hard problem of human phenomenology.
Semantics. The founding purpose of AI research was to generate a machine that simulates the whole mental state of humans. The purpose was not to generate an AI with a totally humanlike phenomenology, but one able to exhibit consciousness states that cannot be distinguished from humans’. AI research is in accordance with functionalism, but can AI be conscious, or does it just simulate consciousness states? According to computer scientists, “simulating” human mental states in all aspects is a valid criterion; the Dartmouth Summer Research Project proposal and the Turing Test were based on this idea. Some participants mentioned Searle’s (1997) famous Chinese Room thought experiment as a counterargument to the claim that a machine “actually” thinks. In his argument, Searle (1997) drew attention to “semantics”: although machines can give appropriate answers that cannot be distinguished from humans’, they are not aware of what they are doing; they just follow the instructions, namely, carry out “syntax.” Therefore, such a machine exhibits only syntactic features, not semantics. Computer scientists, in contrast, handle the issue through a technical lens: they do not care about semantics, as exhibiting intelligent behavior is practical and sufficient. A machine may not be conscious but can exhibit consciousness states. This view confirms the weak AI hypothesis in philosophy. Actually, as we still do not know what consciousness is and how it emerges, we do not know to what kinds of things it can be attributed. We found a contradiction among our participants’ views in this sense, especially between the computer scientists and the philosophers.
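The syntax-versus-semantics distinction can be made concrete with a toy sketch (our own illustration, not from the study; the rule book and function name are invented): a responder that, like the rule book in Searle’s Chinese Room, matches input symbols to output symbols without any model of what they mean.

```python
def chinese_room(reply_rules, message):
    """Return a canned reply by pure symbol lookup.

    The program maps input symbols to output symbols (syntax) with no
    representation of what the symbols mean (semantics) -- a toy version
    of the rule book the operator follows in Searle's Chinese Room.
    """
    return reply_rules.get(message, "?")

# Hypothetical rule book: the operator needs no understanding of
# Chinese, only the ability to match and copy symbols.
rules = {"你好": "你好!", "你好吗?": "我很好。"}
print(chinese_room(rules, "你好吗?"))  # 我很好。
```

To an outside observer the replies may look fluent, yet nothing in the program is "aware" of the conversation, which is exactly the gap between exhibiting consciousness states and having them.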
Mess. Another finding on the current AI debates, stated by almost all participants, is “cacophony.” AI has recently been a popular issue in academic environments, business, social life, social media, etc. Therefore, various programs, conferences and seminars are being organized, and articles and conversations are issued in printed, visual and social media. The AI area is “under attack,” and many people strive to benefit from this popularity. However, a lack of theoretical background causes misunderstandings and chaos. Marketing tools and the media also boost information pollution, and this situation gives rise to the dystopic and utopic scenarios that have been circulating about AI and the future of humankind. We found several concerns and expectations about the future of AI in the short and long term. The dystopic and utopic scenarios are presented in Table 3.
We grouped the future scenarios under two titles according to participants’ views. There are also several classifications in the literature that parallel our findings on the possible impact of AI on the business world, society, our everyday lives, etc. For example, Makridakis (2017) defined four scenarios about the future impact of AI on society and organizations: the optimists, the pessimists, the pragmatists and the doubters. According to the author, the optimist view is a utopian scenario based on developments in genetics, nanotechnology and robotics: AI will take over work, and humans will have the opportunity to engage in various activities instead of working. Also, human deficiencies deriving from biological limitations will disappear as the technology reaches revolutionary levels. The pessimist view fictionalizes a dystopian scenario in which machines take possession of decision-making authority and humans become dependent on them. The pragmatists think that regulations and transparent AI initiatives such as OpenAI will prevent the dark side of AI from doing harm and that a controlled AI would be beneficial to humankind.
Moreover, debates based on insufficient theoretical and technical grounds are also inadequate and misleading for producing solutions. For example, at present there is no AI at the level of AGI or strong AI, but some chatbots and IVRs, such as the robot Sophia, are being perceived as “conscious” robots by some segments of society, although they are not.
An example of a misunderstood term is “robot.” The term can sometimes be used instead of, or as a synonym for, AI; however, not all robots are loaded with AI programs, and not all AI systems have a human-like body shape. Another example is the term “singularity,” whose misusage a participant (a professor of the philosophy of AI) explained as follows:
Singularity is generally misunderstood. Singularity does not mean that a machine is superior to a human. For example, Deep Blue is superior to all of us in chess, isn’t it? Humans created that program and cannot beat it. Is Deep Blue an example of singularity? No [...] Do you know what singularity is? You create a system superior to you, and that system creates a system that is superior to it, and that system creates another system and surpasses all the sub-systems. Think about Deep Blue. It plays chess and beats humans; to be singular, it should teach chess to another machine that beats its creator and humankind, and then that machine is beaten by another machine [...] A perpetual process. What is the best example of singularity? It is you! Think about evolution theory.
AI’s victories over humankind in specific areas do not mean that it is completely superior to humans. These programs are at the narrow AI level and do not exhibit the general capabilities that humans have. Such victorious narrow AIs cannot cause a revolution in social systems or a paradigm shift, but an AGI could.
5.4 Solutions
Multidisciplinary research and flexible, holistic and theoretical approaches emerged as the codes that play a significant role in solving hard problems and current debates.
Holistic view. A flexible, interdisciplinary approach and new skills are required in the new era, and integrating AI technology across disciplines is a beneficial practice. At present,
Table 3 Dystopic and utopic scenarios

Dystopic scenarios:
- Caste system: Class discrimination between robots and humans and/or between countries is expected in the long term. In this scenario, robots would be enslaved as a working class, or humans would become slaves of AGIs. Discrimination between countries is also expected: countries that produce AI technology would monopolize it, and poverty would reign elsewhere. The idea of a war between robots and humankind, or between countries, derives from this possible class discrimination. Participants generally referred to Elon Musk’s and Stephen Hawking’s views on this issue
- Violent competition: Companies may take the place of countries in the future, and violent competition would reign. A monopoly of AI companies and regional competition are expected
- Unemployment: A decrease in the human workforce is a common view, but the emergence of new job titles and concepts is also expected. The dystopic side of this view is that AI would rule the business world entirely, leading to two problems: “unnecessary humankind” and “mass unemployment.” In this scenario, humans may also lose their competencies by growing accustomed to AI applications; combined with unemployment, this may produce feelings of purposelessness, laziness and inadequacy

Utopic scenarios:
- Union of countries: A new world order without frontiers is expected in the long term. A global union would reduce conflicts and differences between nations and gradually decrease, or even eliminate, competition between organizations. Peace reigns around the world
- Renaissance era: Unemployed life would lead to a new Renaissance. Humans may have the opportunity to spend time on their environment and to pursue arts and philosophy
multidisciplinary research is being conducted to develop a theory of consciousness, for instance through the Tucson conference series and the conferences of the Association for the Scientific Study of Consciousness, and these initiatives play a significant role in solving the enigma of consciousness; participants likewise mentioned interdisciplinary research on the “science of consciousness.” PwC’s (2017) knowledge paper on AI and robotics also emphasized that multidisciplinary collaboration among experts from computer science, the social and behavioral sciences, law and policy, ethics, psychology and economics is required to improve AI research.
Consequently, the code “holistic view” indicates that awareness of the interdisciplinary approach is emerging. It is expected that strict boundaries between disciplines will disappear and that a collaborative, holistic approach will be adopted in the short term.
However, considering the issue through the lens of today’s paradigm is insufficient for predicting future outcomes. What we can do is observe trends and anticipate milestones: we can then see the alternative paths, discuss their outcomes and develop strategies, but we cannot make definite judgments, which would only cause speculation. The history of humanity is full of surprises, but also of realized predictions. This is exactly the aim of this research: handling the developments in AI technology from an interdisciplinary perspective, free of information pollution, and providing alternative paths.
Investments in AI. Current education systems are generally grounded in the first Industrial Revolution and are insufficient for the Fourth Industrial Revolution; education needs to build the flexible, interdisciplinary skills the new era requires, and integrating AI technology across disciplines is a beneficial implementation.
Developments in AI technology also give rise to legal and ethical problems. The “civil rights,” “taxation” and “real person” issues of AI need solutions. In particular, adverse outcomes of autonomous cars give rise to debates about ethics, conscience and criminal liability. The exponential growth of AI technology requires regulation, which should be arranged by governments and by regional or worldwide unions. The One Hundred Year Study on Artificial Intelligence (AI100) (2016) report supports this finding: according to the report, more legal and ethical issues emerge as the level of AI-human interaction increases. Torresen (2018) also emphasized the importance of legal arrangements for the accountability of AI and for ethical issues. Jiang et al. (2017) drew attention to the lack of standards and the safety deficiencies of current AI regulations, defining this as an important obstacle to implementing AI systems in healthcare.
Developing an AI strategy is another important finding. National and international AI strategies are crucial for organizations when developing business strategies: governments and unions base their AI investments on these strategies, which in turn guide organizations in deciding in which countries to invest. As countries’ AI strategies vary with economic, sociocultural, technological and other dynamics, organizations’ AI strategies likewise vary with their internal and external dynamics; there is no single, common AI strategy for all countries and organizations. Participants mentioned Japan, China, the USA, South Korea and France as prominent countries investing in AI. As of 2018, 26 countries and regions had defined an AI strategy or an AI-supporting strategy (Dutton, 2018; Heumann and Zahn, 2018).
Hence:
H5. If and when hard problems and current debates are solved, it may be possible for AI
to become a CEO.
5.5 Artificial intelligence as a chief executive officer
The Vizier CEO Type. The majority of the participants claimed that narrow AI cannot perform the role of a CEO in the future but expected that AI will take part in the management board as a decision support system. In that case, AI’s ultimate position is that of the Vizier: AI serves as an advisor, the “right hand” of the CEO. The human CEO has the last
word and AI performs as an extension. Therefore, the Vizier CEO Type consists of two actors, the human CEO and an ultimate AI that is still not a full AGI; these two actors depend on each other and collaborate.
Barnea’s (2020) anticipation and Farrow’s (2020) finding support our Vizier-type CEO:
“It appears as if CEOs will need to combine strong strategic thinking skills with increasingly
sophisticated analytic tools to help them run the organization [. . .] Senior executives who use
instinctive leadership skills or past successes to make decisions will have to become evidence.”
(Barnea, 2020, p. 77).
“Participants felt that AI as an advisory or assurance service provided to augment leader
decision-making would be a standard corporate governance best practice by 2038” (Farrow,
2020, p. 6).
Parry et al.’s (2016) “automated leadership decision-making” scenario, based on the collaboration of AI-based decision-making systems and human leaders, is an example of the Vizier. Similarly, Spitz’s (2020) future scenario of “hyper-augmentation,” a symbiotic partnership between “smart algorithm-augmented predictive decision-making” and humans with AAA capabilities (anticipatory, antifragile and agile decision-making), can be considered a Vizier-type CEO.
The Vizier-Shah CEO Type (Cyborg CEO). When collaboration turns into integration, the leader is a “Cyborg CEO” and the type is the Vizier-Shah. This type of AI-CEO is based on transhumanism, in which human features are enhanced through scientific and technological developments. Hence, the Vizier-Shah CEO Type includes one actor, a cyborg: an integration of an enhanced human and AI. Recent developments in technology support the cyborgization of humankind. Today, smartphones, computers and applications have become a significant complement to human life, and their absence for even a short while causes a feeling of deprivation. This can be considered a sign of the cyborgization of humans; however, the integration has not happened yet. In the Cyborg-CEO type, the Vizier and the Shah are integrated into one body, the body of an enhanced human. Humankind is expected to evolve into a biologically and technologically enhanced form, and that will certainly affect the business world as well.
Dong et al. (2020) foresee a coevolution of humans and AI that supports our Vizier-Shah (Cyborg) type:
“With a brain-computer interface, human beings can communicate with each other without using
language and only rely on neural signals in the brain, thus realizing ‘lossless’ brain information
transmission [. . .] The future development of AI is to enhance, not to replace, the overall
intelligence of human beings and promote the complementation of AI and human intelligence,
giving play to their respective advantages to realize the ‘coevolution’ of human and AI machines”
(p. 6).
Kurzweil’s (2005) ideas on cyborgization also support the Vizier-Shah type. According to Kurzweil, humans will integrate technology into their bodies in the near future: by 2030 our brains will be largely non-biological, and in the 2040s the non-biological intelligence of humans will achieve tremendous capabilities. Tegmark (2017), however, emphasizes that although cyborgs and uploads are feasible, and although we are already attached to technology and use technological tools as extensions of our cognitive capabilities, humans will find easier ways to achieve advanced machine intelligence, and that road will be faster.
The Shah CEO type. In the Vizier and Vizier-Shah CEO types, humans predominate over AI until an AGI that entirely simulates human mental states is invented. We coined the AGI-CEO type the Shah. This type of CEO requires a general level of intelligence; in addition, the legal and ethical issues should be solved. An AGI that can simulate humans’
intelligent behaviors, with superior features such as objectivity, durability and rationality, moves ahead of a human CEO. If the mental states of humans were transferred to AI, the superiority of humans over other species would disappear. Even if the mental states of humans and AI were equalized, AI would still be superior to humans, as it has no biological deficiencies; but if humans are enhanced as transhumanism proposes, cyborg-humans will appear and may confront AGI.
Humans have always been eager to assign routine tasks to AI. Such collaboration works well, but will humans be eager, or need, to transfer human-specific tasks to AI? As most of the participants stated, “it is a dream,” because it is not needed and humans are “selfish.” The nuance here is that AGI will likely be invented one day, as scientists are enthusiastic about achieving it; but when it happens, will humans want to assign human-specific tasks, such as the CEO position, to AI? Most of the participants think that the last word in organization management will still belong to a human. Another view is that duplicating humankind entirely is unnecessary: humans already exist, so there is no need for a duplicate. Technology is about facilitating human work; therefore, AI is expected to support humans, not to manage them. The position of CEO seems to be one of the marks of superiority that humankind would likely be reluctant to assign to a machine.
A human-dependent AI is programmed by humans and processes human data, so its outcomes are likely to reflect default human features such as “bias” and “discrimination.” Hence, an AI-CEO should be independent of human control. Even if this happens in the future, it does not mean that society will accept it. Besides acceptance, the “cost of an AI-CEO” is another challenge: an AI-CEO will not be preferred if it costs too much. Moreover, the invention of AGI will be a revolution that leads to a “paradigm shift,” which may happen gradually or may bring about chaos, polarization and conflict.
The Swarm-Shah CEO type. The Swarm-Shah represents a system related to “distributed architecture,” “swarm intelligence” and “collective consciousness,” meaning that this type of AI-CEO generates common decisions. The Swarm-Shah has the potential to disrupt all prevalent organizational structures, organizational cultures and functioning models. Developments in the internet of things (IoT), Industry 4.0 and distributed architecture can be considered the initial stage of this revolution. IoT “refers to the networked interconnection of everyday objects, which are often equipped with ubiquitous intelligence” (Xia et al., 2012, p. 1101), and this interconnection will improve further, even to a level that does not require human intervention (Khan et al., 2012; Gubbi et al., 2013). Spitz’s (2020) future scenarios of “decentralized autonomous organizations” and “swarm AI,” in which large groups augment their intelligence by “forming swarms in real-time” (p. 8), can be considered examples of the Swarm-Shah CEO type. The type also accords with Bostrom’s (2014) concept of “collective superintelligence,” which he defined as “a system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system” (p. 65).
Network systems entered social life as the internet spread worldwide through personal computers, and social media and content-sharing platforms later became popular. Today, people can respond to social events collectively and can even change the course of events. We can predict that this network structure will improve and be enriched with new applications, and individual decisions may then be replaced by collective ones; with further scientific advances, human consciousness may even be included in distributed systems. The progress of network structures will likely cause radical changes in organizations, as the central authority of the CEO would be distributed. Most of the participants agreed that the network structure will be practiced in the future; even in today’s organizations, CEOs make decisions not individually but with the support of their “extensions.” A participant described this issue as follows:
Participant: Think of it this way [. . .] Don’t consider computers just as statistical report providers. Think of them as collective intelligence. For example, you have a back team of 10 staff. It is not important whether the team consists of robots or humans; the thing that matters is the report.
Suppose we take away all the resources and rights of the CEO, even the right to enter the company, and then tell him, “Manage the company.” He cannot. He can only manage with the support of all his extensions: software, hardware or human.
Interviewer: Actually, the whole system is the CEO.
Participant: Of course, it is. It is a cyborg, isn’t it? The representative of an organism. If that system proceeds to a level where it can perform self-management, then the CEO position becomes irrelevant. It is too early for that because, even if only a little, humans are still effective in organization management. At least a human has a face, speaks, compliments, persuades, supports, etc. He still does something [. . .]
As the participant stated, if humans become irrelevant and unnecessary to the system, humans will be checkmated. In this example, AI is still in the Vizier position and the human CEO is at an early stage of the Cyborg-CEO. When humans become irrelevant, the Swarm-Shah type comes into play: a system that produces collective decisions, in which the voice of the system surpasses the voice of individuals. This can be interpreted as the rise of collectivism and the decline of individualism. Humans may continue to be part of this system or may be eliminated by it as unnecessary.
6. Conclusion
In this research, we examined the feasibility of AI performing as CEO by following the classic grounded theory methodology. As a result, the Vizier-Shah theory emerged, grounded in data from 27 interviews. The theory consists of five categories linked to each other by seven hypotheses. Through the theory, we answered two research questions:
RQ1. May AI take over the task of developing strategy in organizations?
RQ2. May AI perform top management tasks in the future and, moreover, be a CEO?
The answer to the research questions is: “Yes, it is possible. AI can take over the CEO position, but first the challenges and problems should be solved.” The Vizier-Shah theory explains the issues that should be considered, provides recommendations and, moreover, introduces four futuristic AI-CEO types.
Holloway (1983) stated in his article: “The possibility is clear that within a decade a computer may share or usurp functions of a corporate chief executive – functions that up to now have been thought unsharable. Now is the time to begin planning for such a development.” Although his prediction has not yet been realized, expectations have risen (World Economic Forum, 2015). The major question he addressed was: “How and when do we expect a supercomputer to share or usurp functions of a corporate chief executive?” (Holloway, 1983, p. 83). This problem is handled in detail and from a broader perspective in this research. The other crucial questions he addressed were more specific and still require examination.
As Von Krogh (2018) also stated, more abductive research following explorative designs is required. Each theoretical component of the Vizier-Shah theory can be the subject of future research, and the four CEO types and their impact on future organizations can also be considered research topics. This research is an early-stage study conducted to contribute to these efforts, draw attention to the issue and shed light on future research.
References
AI100 (2016). “Artificial intelligence and life in 2030. Report of the 2015 study panel”, Stanford University,
available at: https://ai100.stanford.edu/
Annells, M. (1997), “Grounded theory method, part II – options for users of the method”, Nursing Inquiry,
Vol. 4, pp. 176-180.
Armstrong, D.M. (1968), A Materialist Theory of the Mind, Routledge &amp; Kegan Paul, London.
Bagozzi, R. and Lee, N. (2017), “Philosophical foundations of neuroscience in organizational
research: functional and nonfunctional approaches”, Organizational Research Methods, Vol. 22
No. 1, pp. 1-33.
Barnea, A. (2020), “How will AI change intelligence and decision-making?”, Journal of Intelligence Studies in Business, Vol. 10 No. 1, pp. 75-80.
Beavers, A. (2013), “Alan Turing: mathematical mechanist”, in Cooper, S. and van Leeuwen, J. (Eds), Alan Turing: His Work and Impact, Elsevier, Waltham, pp. 481-485.
Bostrom, N. (2014), Superintelligence: Paths, Dangers, Strategies, Oxford University Press.
Chalmers, D.J. (1995a), “The conscious mind: in search of a theory of conscious experience”, Doctoral
dissertation, University of California.
Chalmers, D.J. (1995b), “Facing up to the problem of consciousness”, Journal of Consciousness Studies, Vol. 2 No. 3, pp. 200-219.
Chalmers, D.J. (1999), “First person methods in the science of consciousness”, available at: http://consc.
net/papers/firstperson.html
Chalmers, D.J. (2002), “The first person and third person views (part I)”, available at: http://consc.net/
notes/first-third.html
Daily, C.M. and Johnson, J.L. (1997), “Sources of CEO power and firm financial performance: a
longitudinal assessment”, Journal of Management, Vol. 23 No. 2, pp. 97-117.
Damasio, A.R. (1995), Descartes’ Error: Emotion, Reason, and the Human Brain, Avon Books, USA.
DeepMind (2021), “AlphaGo”, available at: https://deepmind.com/research/case-studies/alphago-the-
story-so-far
Dennett, D. (2001), “The fantasy of first-person science”, available at: https://ase.tufts.edu/cogstud/
dennett/papers/chalmersdeb3dft.htm
Descartes, R. (2003a), Discourse on Method and Meditations, in Ross, E.S. (Ed.), Dover Publications,
Inc, Mineola, New York, NY.
Descartes, R. (2003b), Meditations on First Philosophy: With Selections from the Objections and Replies,
in Moriarty, M. (Ed.), Oxford University Press, New York, NY.
Dewhurst, M. and Willmott, P. (2014), “Manager and machine: the new leadership equation”, available at:
www.mckinsey.com/featured-insights/leadership/manager-and-machine
Dreyfus, H.L. (1972), What Computers Can’t Do: A Critique of Artificial Reason, Harper &amp; Row, Publishers, Inc., USA.
Dutton, T. (2018), “An overview of national AI strategies”, available at: https://medium.com/politics-ai/an-
overview-of-national-ai-strategies-2a70ec6edfd
Farrow, E. (2020), “Organisational artificial intelligence future scenarios: futurists insights and
implications for the organisational adaptation approach, leader and team”, Journal of Futures Studies,
Vol. 24 No. 3, pp. 1-15.
Ferràs-Hernández, X. (2018), “The future of management in a world of electronic brains”, Journal of Management Inquiry, Vol. 27 No. 2, pp. 260-263.
Fjelland, R. (2020), “Why general artificial intelligence will not be realized”, Humanities and Social
Sciences Communications, Vol. 7 No. 1, pp. 1-9.
Glaser, B.G. (1978), Advances in the Methodology of Grounded Theory: Theoretical Sensitivity, Sociology Press, Mill Valley, CA.
Glaser, B.G. (2002), “Conceptualization: on theory and theorizing using grounded theory”, International
Journal of Qualitative Methods, Vol. 1 No. 2, pp. 23-38.
Glaser, B. (2007), “Remodeling grounded theory”, Forum: Qualitative Social Research, Vol. 5 No. 2, doi: 10.17169/fqs-5.2.607.
Glaser, B. (2015), “GT as the discovery of patterns”, in Walsh, I., Holton, J.A., Bailyn, L., Fernandez, W., Levina, N. and Glaser, B. (Eds), Organizational Research Methods, Vol. 18 No. 4, pp. 1-19, doi: 10.1177/1094428114565028.
Glaser, B. and Strauss, A. (1965), Awareness of Dying, Aldine Transaction.
Glaser, B. and Strauss, A. (2006), The Discovery of Grounded Theory: Strategies for Qualitative
Research, Aldine Transaction.
Gubbi, J., Buyya, R., Marusic, S. and Palaniswami, M. (2013), “Internet of things (IoT): a vision,
architectural elements, and future directions”, Future Generation Computer Systems, Vol. 29 No. 7,
pp. 1645-1660.
Guyer, P. and Horstmann, R.P. (2021), “Idealism”, In Zalta, E.N. (Ed.), The Stanford Encyclopedia of
Philosophy, available at: https://plato.stanford.edu/archives/spr2021/entries/idealism/
Hambrick, D. and Mason, P. (1984), “Upper echelons: the organization as a reflection of its top
managers”, Academy of Management Review, Vol. 9 No. 2, pp. 193-206.
Haenlein, M. and Kaplan, A. (2019), “A brief history of artificial intelligence: on the past, present, and
future of artificial intelligence”, California Management Review, Vol. 61 No. 4, pp. 5-14.
Heumann, S. and Zahn, N. (2018), “Benchmarking national AI strategies: why and how indicators and
monitoring can support agile implementation. Research report. Stiftung-nv.de. SSRN”, available at: www.
stiftung-nv.de/sites/default/files/benchmarking_ai_strategies.pdf
Holloway, C. (1983), “Strategic management and artificial intelligence”, Long Range Planning, Vol. 16 No.
5, pp. 89-93.
Holton, J. (2017), “The discovery power of staying open”, The Grounded Theory Review, Vol. 16 No. 1,
pp. 46-49, available at: http://groundedtheoryreview.com/2017/06/23/the-discovery-power-of-staying-
open/
Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S. and Wang, Y. (2017), “Artificial intelligence in
healthcare: past, present and future”, Stroke and Vascular Neurology, Vol. 2 No. 4, pp. 230-243.
Kaplan, A. and Haenlein, M. (2019), “Siri, Siri, in my hand: who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence”, Business Horizons, Vol. 62 No. 1, pp. 15-25.
Khan, R., Khan, S., Zaheer, R. and Khan, S. (2012), “Future internet: the internet of things architecture,
possible applications and key challenges”, 2012 10th International Conference on Frontiers of
Information Technology (FIT) Proceedings, IEEE, pp. 257-260, available at: https://ieeexplore.ieee.org/
abstract/document/6424332/
Kolbjørnsrud, V., Thomas, R. and Amico, R. (2016), “The promise of artificial intelligence: redefining
management in the workforce of the future”, Accenture, available at: www.accenture.com/_acnmedia/
PDF-19/AI_in_Management_Report.pdf
Kulstad, M. and Laurence, C. (2020), “Leibniz’s philosophy of mind”, in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), available at: https://plato.stanford.edu/archives/win2020/entries/leibniz-mind/
Kurzweil, R. (2005), The Singularity is near: When Humans Transcend Biology, Duckworth Overlook.
Kvale, S. (1996), InterViews: An Introduction to Qualitative Research Interviewing, SAGE Publications,
Inc.
Levin, J. (2018), “Functionalism”, in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), available at: https://plato.stanford.edu/archives/fall2018/entries/functionalism/
Lu, H., Li, Y., Chen, M., Kim, H. and Serikawa, S. (2018), “Brain intelligence: go beyond artificial
intelligence”, Mobile Networks and Applications, Vol. 23 No. 2, pp. 368-375.
McCarthy, J., Minsky, M.L., Rochester, N. and Shannon, C.E. (1955), “A proposal for the dartmouth
summer research project on artificial intelligence”, available at: http://jmc.stanford.edu/articles/
dartmouth/dartmouth.pdf
McDermott, D. (2007), “Artificial intelligence and consciousness”, In Zelazo, P.D., Moscovitch, M. and
Thompson, E. (Eds) The Cambridge Handbook of Consciousness, Cambridge University Press,
pp. 117-150.
Makridakis, S. (2017), “The forthcoming artificial intelligence (AI) revolution: its impact on society and
firms”, Futures, Vol. 90, pp. 46-60.
Miyazaki, K. and Sato, R. (2018), “Analyses of the technological accumulation over the 2nd and the 3rd AI
boom and the issues related to AI adoption by firms”, 2018 Portland International Conference on
Management of Engineering and Technology (PICMET), Honolulu, HI, pp. 1-7.
Nilsson, N.J. (2010), The Quest for Artificial Intelligence a History of Ideas and Achievements, Cambridge
University Press, UK.
Parry, K., Cohen, M. and Bhattacharya, S. (2016), “Rise of the machines: a critical consideration of automated leadership decision making in organizations”, Group &amp; Organization Management, Vol. 41 No. 5, pp. 571-594, available at: https://doi.org/10.1177/1059601116643442
Pennachin, C. and Goertzel, B. (2007), “Contemporary approaches to artificial general intelligence”, in
Goertzel, B. and Pennachin, C. (Eds), Artificial General Intelligence, Springer, Berlin, Heidelberg,
pp. 1-30.
PwC (2017), “Artificial intelligence and robotics: leveraging artificial intelligence and robotics for
sustainable growth. PwC knowledge paper”, available at: www.pwc.in/assets/pdfs/publications/2017/
artificial-intelligence-and-robotics-2017.pdf
Robinson, H. (2020), “Dualism”, in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020 Edition), available at: https://plato.stanford.edu/archives/fall2020/entries/dualism/
Ross, J. (2002), “First-person consciousness”, Journal of Consciousness Studies, Vol. 9 No. 7, pp. 1-28, available at: www.ucl.ac.uk/uctytho/RossOnHonderichMcGinn.pdf
Russell, S. and Norvig, P. (2010), Artificial Intelligence: A Modern Approach, 3rd ed., Pearson Education,
Inc, Upper Saddle River, NJ.
Searle, J.R. (1997), The Mystery of Consciousness (7th printing), The New York Review of Books.
Shanks, R., Sinha, S. and Thomas, R.J. (2015), “Manager and machines, unite!”, Accenture, available at:
www.accenture.com/_acnmedia/pdf-19/accenture-strategy-manager-machine-unite-v2.pdf
Shanks, R., Sinha, S. and Thomas, R. (2016), “Judgment calls: preparing leaders to thrive in the age of
intelligent machines”, Accenture, available at: www.accenture.com/t20170411T174032Z__w__/us-en/
_acnmedia/PDF-19/Accenture-Strategy-Workforce-
Shin, Y. (2019), “The spring of artificial intelligence in its global winter”, IEEE Annals of the History of
Computing, Vol. 41 No. 4, pp. 71-82.
Siau, K.L. and Yang, Y. (2017), “Impact of artificial intelligence, robotics, and machine learning on sales and marketing”, Twelfth Annual Midwest Association for Information Systems Conference (MWAIS 2017) Proceedings, pp. 18-19, available at: http://aisel.aisnet.org/mwais2017/48
Spitz, R. (2020), “The future of strategic decision-making”, available at: https://jfsdigital.org/2020/07/26/the-future-of-strategic-decision-making/
Stoljar, D. (2021), “Physicalism”, in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2021 Edition), available at: https://plato.stanford.edu/archives/sum2021/entries/physicalism/
Tegmark, M. (2017), Life 3.0: Being Human in the Age of Artificial Intelligence, Alfred A. Knopf, New York, NY.
Thomas, A. and Simerly, R. (1994), “The chief executive officer and corporate social performance: an
interdisciplinary examination”, Journal of Business Ethics, Vol. 13 No. 12, pp. 959-968.
Thomas, R., Fuchs, R. and Silverstone, Y. (2016), “A machine in the C-suite. Research report”, Accenture,
available at: www.accenture.com/t00010101T000000Z__w__/br-pt/_acnmedia/PDF-13/Accenture-
Strategy-WotF-Machine-CSuite.pdf
Turing, A. (1950), “Computing machinery and intelligence”, Mind, Vol. 59 No. 236, pp. 433-460, available at: www.jstor.org/stable/2251299?origin=JSTOR-pdf&seq=1#page_scan_tab_contents
Von Krogh, G. (2018), “Artificial intelligence in organizations: new opportunities for phenomenon-based theorizing”, Academy of Management Discoveries, Vol. 4 No. 4, pp. 404-409.
Weiker, W. (1968), “The Ottoman bureaucracy: modernization and reform”, Administrative Science Quarterly, Vol. 13 No. 3, pp. 451-470.
World Economic Forum (2015), “Deep shift: technology tipping points and societal impact”, Global
Agenda Council on the Future of Software  Society. Weforum.org, available at: www3.weforum.org/
docs/WEF_GAC15_Technological_Tipping_Poi
Xia, F., Yang, L., Wang, L. and Vinel, A. (2012), “Internet of things”, International Journal of Communication Systems, Vol. 25 No. 9, pp. 1101-1102.
Torresen, J. (2018), “A review of future and ethical perspectives of robotics and AI”, Frontiers in Robotics and AI, Vol. 4, Article 75.
Yasnitsky, L.N. (2020), “Whether be new ‘winter’ of artificial intelligence?”, in Antipova, T. (Ed.), Integrated Science in Digital Age. ICIS 2019. Lecture Notes in Networks and Systems, Vol. 78, Springer, Cham.
Zaccaro, S. (2004), The Nature of Executive Leadership: A Conceptual and Empirical Analysis of
Success, American Psychological Association, Washington, DC.
Zhao, T., Zhu, Y., Tang, H., Xie, R., Zhu, J. and Zhang, J.H. (2019), “Consciousness: new concepts and
neural networks”, Frontiers in Cellular Neuroscience, Vol. 13, p. 302.
Further reading
Annells, M. (1997), “Grounded theory method, part II: options for users of the method”, Nursing Inquiry,
Vol. 4 No. 3, pp. 176-180.
Chalmers, D.J. (2015), “Panpsychism and panprotopsychism”, in Alter, T. and Nagasawa, Y. (Eds), Consciousness
in the Physical World: Perspectives on Russellian Monism, Oxford University Press, pp. 246-276.
Corbin, J. and Strauss, A. (1990), “Grounded theory research: procedures, canons, and evaluative
criteria”, Qualitative Sociology, Vol. 13 No. 1, pp. 3-21.
Damasio, A.R. (1995), Descartes’ Error: Emotion, Reason, and the Human Brain, Avon Books.
Dong, Y., Hou, J., Zhang, N. and Zhang, M. (2020), “Research on how human intelligence, consciousness,
and cognitive computing affect the development of artificial intelligence”, Complexity, pp. 1-10.
Fodor, P. (1994), “Sultan, imperial council, grand vizier: changes in the ottoman ruling elite and the formation
of the grand vizieral telḫīṣ”, Acta Orientalia Academiae Scientiarum Hungaricae, Vol. 47 Nos 1/2, pp. 67-85,
available at: www.jstor.org/stable/23658130?seq=1
Mark, J. (2017), “Ancient Egyptian vizier”, Ancient History Encyclopedia, available at: www.ancient.eu/
Egyptian_Vizier/
Overgaard, M. (2017), “The status and future of consciousness research”, Frontiers in Psychology, Vol. 8,
pp. 1-4.
Shah (2021), in Online Etymology Dictionary, available at: www.etymonline.com/search?q=Shah
Shaw, I. (2000), The Oxford History of Ancient Egypt, Oxford University Press Inc, New York, NY.
Silver, D. and Hassabis, D. (2016), “Mastering the ancient game of Go”, available at:
https://research.googleblog.com/2016/01/alphago-mastering-ancient-game-of-go.html
Turing, A. (1937), “On computable numbers, with an application to the Entscheidungsproblem”,
Proceedings of the London Mathematical Society, Vol. s2-42 No. 1, pp. 230-265.
About the authors
Aslıhan Ünal is an Assistant Professor in the Department of Management Information
Systems at Cappadocia University. She completed her PhD and MSc at Düzce University
on business administration and her undergraduate studies at Istanbul University on
econometrics. Her main research interests are strategic management, competitive
strategies, management information systems, artificial intelligence and grounded theory.
Aslıhan Ünal can be contacted at: aslihan.unal@kapadokya.edu.tr
İzzet Kılınç is a Professor of Strategic Management in the Department of Management
Information Systems at Düzce University. Kılınç completed his PhD at Dokuz Eylül
University on tourism and hospitality, his MA at Sheffield Hallam University on tourism and
hospitality and his undergraduate studies at Dokuz Eylül University on tourism and
hospitality. His main research interests are strategic management, competitive strategies,
management information systems, artificial intelligence and qualitative research.
For instructions on how to order reprints of this article, please visit our website:
www.emeraldgrouppublishing.com/licensing/reprints.htm
Or contact us for further details: permissions@emeraldinsight.com
It is also an old debate what consciousness is and whether this “human-specific” feature can be transferred to an artifact at all. In the middle of the twentieth century, the dream of creating a machine with humanlike cognitive abilities began to become reality. In 1950, Alan Turing made a significant breakthrough in the history of AI with his paper “Computing Machinery and Intelligence.” This was the first article to deal entirely with mechanizing human-like intelligence (Nilsson, 2010). Turing (1950) also proposed a test – the Turing Test – to evaluate the intelligence of a machine. According to Turing, if a machine passes the Turing test, it can be considered “intelligent.” Soon after Turing’s groundbreaking paper, in 1956, the term “artificial intelligence” was coined at the Dartmouth Conference, and scientific studies began on generating a machine able to simulate all aspects of human intelligence (McCarthy et al., 1955). Since then, AI research has progressed with ups and downs (generally called the “seasons of AI”), and remarkable results have been achieved.

Aslıhan Ünal is based at the Department of Management Information Systems, Cappadocia University, Ürgüp, Turkey. İzzet Kılınç is based at the Department of Management Information Systems, Düzce University, Düzce, Turkey.

Received 19 February 2020; revised 17 July 2021; accepted 19 July 2021. This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. DOI 10.1108/FS-02-2021-0048 © Emerald Publishing Limited, ISSN 1463-6689
In recent times, the most striking event in AI has been the victory of AlphaGo. AlphaGo, developed by DeepMind, is the first Go-playing program to beat a human World Go Champion, in 2016 (DeepMind, 2021). The striking aspect of this victory is that Go is a complicated ancient Chinese game that requires wisdom and insight; the algorithm of AlphaGo therefore operates in a much more “humanistic” way than earlier Go programs and than Deep Blue, the chess-playing program developed by IBM. The victory of AI over humankind in a specific area also raises a question: might AI take over the task of developing strategy in organizations in the future? Recent research suggests that AI applications will take on routine tasks such as planning, programming and optimization, which will be an opportunity for executives to deal more effectively with “judgment work” (Shanks et al., 2015; Kolbjørnsrud et al., 2016). According to Thomas et al. (2016), AI can take part in the C-suite as an assistant (by “creating scorecards,” “maintaining reports” and “monitoring the environment”), a consultant (by “answering questions,” “building scenarios” and “generating options”) and even an actor (by “evaluating options,” “making decisions” and “budgeting and planning”). The findings of this recent research also raised a question in our minds: might AI perform top management tasks in the future and, moreover, be a chief executive officer (CEO)? Hence, this research is based on two research questions:

RQ1. May AI take over the task of developing strategy in organizations?

RQ2. May AI perform top management tasks in the future and moreover be a CEO?

The main purpose of this research is to examine the feasibility of AI performing as CEO in organizations in the future. For this purpose, we gathered face-to-face interview data from 27 participants and analyzed them according to classic grounded theory methodology. As a result, “The Vizier-Shah Model,” which explains the evolution process of narrow AI to AI-CEO, emerged.
It is an original and comprehensive model that handles the AI-CEO phenomenon from an interdisciplinary perspective and introduces four possible futuristic AI-CEO types.

2. Conceptual background and literature review

2.1 Chief executive officer

A CEO is a top manager who is responsible for managing the company in a complex environment and is the final authority in defining the strategic path of the organization (Thomas and Simerly, 1994, p. 960). CEOs are generally the most powerful figures in organizations (Hambrick and Mason, 1984, p. 196; Daily and Johnson, 1997). Executives in critical positions are expected to adopt a long-term viewpoint, to develop short-term aims and strategies in accordance with this viewpoint and to balance often conflicting factors such as constituencies, demands, aims and requirements.

Top management studies are carried out within the scope of the strategic management discipline and focus especially on the issues of “features of the CEO,” “strategic leadership” and “the top management team.” A considerable part of strategic management research examines who manages the organization, how, and through what kind of processes. The Upper Echelons Theory developed by Hambrick and Mason (1984) is considered a milestone in strategic management research. The theory provides a model through which the roles of top executives can be interpreted. Drawing on behavioral theory, Hambrick and Mason (1984) asserted that executives pass through a perceptual process of sequential steps while making significant decisions. In this model, the choices of executives reflect their personalities to some extent. Therefore, executives in the same objective environment are likely to make different decisions according to their personal prejudices, experiences and value judgments. Hence, the distinctive personal features of executives play a significant role in the strategic stance of organizations.
Zaccaro (2004) observed that after The Upper Echelons Theory was developed, a great number of studies examined the effects of top management on organizations; however, no model had since emerged that gathered together the improvements in the area and the new ideas. For this reason, Zaccaro examined the conceptual models focusing on the nature of executive leadership and its requirements and constructed an integrated executive leadership model. In this model, Zaccaro (2004) defined requisite executive leader characteristics under five categories: “cognitive capacities,” “social capacities,” “personality,” “motivation” and “knowledge and expertise” (p. 291). Most of the characteristics proposed by Zaccaro are still humanistic properties, such as creativity, need for achievement, behavioral flexibility and curiosity.

According to Bagozzi and Lee (2017), organization research is closely related to mental states and human phenomenology. Without knowledge of the functioning of the brain and the nature of mental states, it is hard to interpret ongoing conditions in organizations. The arguable reality of mental states may cause us to consider concepts such as “satisfaction,” “charisma,” “leadership,” “intention” and “emotion” as metaphors (p. 3). According to the authors, the body-mind problem should not be neglected in organizational research.

At present, AI systems are expert only in narrow areas and have not achieved the level of artificial general intelligence (AGI). Therefore, in today’s human-intensive workplace conditions, it is not possible for an AI to perform the role of an executive or a CEO. Owing to this lack of empirical data, a theory explaining the key features of an AI executive, or the effects of an AI-based top management board on organizational performance, has not yet been developed, but it is an expected phenomenon in the future.
According to the findings of the Global Agenda Council on the Future of Software and Society survey report, the expected date for AI to take part as a decision-maker in top management is 2026. In total, 45% of participants – 816 senior managers and experts from the information and communication technology sector – anticipate that this will happen by 2025.

2.2 Consciousness

Consciousness is a hard problem for both science and philosophy. The problem arises from the fact that qualitative feelings emerge in a physical structure. Chalmers (1995a) explained this paradox as "the really hard part of the mind-body problem" (p. 4). According to Chalmers (1995b), "The really hard problem of consciousness is the problem of experience" (p. 2). When we see an object, this occurs as a result of information processing; but we also feel something when we see an object or hear a voice, and this feeling is subjective. The experience of listening to music belongs to us, and we do not know how it is experienced by another person. Namely, experience (or, in other terms, "qualia" and "phenomenology") is strictly related to our identity, but it cannot be explained how it emerges in a physical body. Fjelland (2020) articulated this issue by referring to Dreyfus and Polanyi. The author mentioned Polanyi's examples of experience related to tacit knowledge that we possess but cannot articulate, such as swimming and riding a bicycle. For example, we know how to ride a bike but cannot explain exactly the dynamics of our riding experience; we just ride and know that we know how to ride. As we cannot articulate our tacit knowledge, we cannot transfer it to a computer. According to Fjelland (2020), Dreyfus considers AI from the perspective of Plato's idealism. In Plato's theory of knowledge there are two kinds of knowledge (a knowledge hierarchy): doxa and episteme. Episteme is the "real knowledge" that is reached by reasoning (propositional knowledge) and can be articulated explicitly.
Doxa, in turn, is the kind of knowledge that can be identified with "skills" that are based on tacit knowledge and cannot be articulated; for that reason, it is placed at the bottom of the knowledge hierarchy. Dreyfus's (1972) counterargument is that the way humans think cannot be programmed, because humans do not follow certain rules when playing chess, solving complex problems or acting in everyday life. They seem to "use global perceptual organization, making pragmatic distinctions between essential and inessential
operations, appealing to paradigm cases and using a shared sense of the situation to get their meanings across" (p. 198). This tacit knowledge of human experience is a multidisciplinary hard problem. In neuroscience research, consciousness is still a vague concept. However, the neuroscience discipline has been improving and scientists are taking the lid off day by day. For example, according to Zhao et al. (2019), although explaining the concept of consciousness is a hard issue, an "intrinsic neurobiological mechanism" has been explored: "the cortex of each part of the brain plays an important role in the production of consciousness, especially the prefrontal and posterior occipital cortices and the claustrum" (p. 6). In the philosophy of mind, debates on consciousness have been continuing through several philosophical approaches. Dualism considers mind and body as separate and different substances (Robinson, 2020). Materialism considers humans as a single substance (material) and denies the view that the mind is a divine or nonmaterial substance; hence, consciousness is considered a function of the brain (Armstrong, 1968). Although they appeared at different times in history and diverge in terms of theoretical foundations, "materialism" is generally used interchangeably with the term "physicalism" in contemporary usage (Stoljar, 2021). In opposition to materialism, idealism denies the existence of matter. According to this view, "all that exists are ideas and the minds," for which Berkeley used the term "immaterialism" (Guyer and Horstmann, 2021). Panpsychism is "the doctrine that everything has a mind." Functionalism, in turn, explains consciousness through its functions, apart from the biological system in which mental states emerge (Levin, 2018). Hence, from the functionalist perspective, consciousness is not specific to the human body. Descartes' dualism (interactionism, Cartesian dualism) is the most criticized approach among these views.
In the 17th century, Descartes (2003a) considered mind and body as two separate substances. According to Descartes' (2003b) argument, the body is related to both space and time, whereas the mind is related only to time; the mental substance cannot take part in the material body but can only interact with it through the pineal gland. Hence, with the philosophical proposition "Cogito, ergo sum" (I think, therefore I am), Descartes identified the existence of a human being with the ability to think and proposed that this ability is specific to humankind; therefore, animals are unconscious automata. Descartes' pineal gland argument has since been discredited. After Descartes, the dualist point of view was defended with different arguments. Leibniz's parallelism denied the interactionist approach of Descartes and proposed the doctrine of "pre-established harmony": the body and mind were created by God, and their actions were programmed at the time of creation (Kulstad and Laurence, 2020). Bagozzi and Lee (2017) stated that, apart from the classic dualist approach, property dualism and naturalist dualism consider mind and body as separate substances but propose that the two substances are natural, not metaphysical. Property dualism proposes that physical reality can be observed objectively from outside, whereas mental reality can be observed subjectively from inside. Similarly, naturalist dualism proposes that both an objective physical substance and a subjective nonphysical substance are needed for understanding the mind, and that both substances are natural (Bagozzi and Lee, 2017). Alongside these debates on consciousness, AI researchers in general are not interested in the "strong AI" assumption. According to McDermott (2007), most AI researchers, whether or not they believe humans and AI think in different ways, are computationalists to some extent – computationalism being the theory that the human brain is a computer.
The problem arises when it comes to phenomenal consciousness; only a minority of researchers care about this issue and believe it will be solved by AI one day.

2.3 Artificial intelligence

Alan Turing is widely regarded as "the father of computer science and AI" due to his seminal works on computational theory (1937), the Turing machine (1937) and the Turing
Test (1950) (Beavers, 2013). In his 1950 paper, Alan Turing brought forward a groundbreaking approach to the question "Can machines think?": instead of considering the concept of "thinking" from an anthropomorphic perspective, he proposed "the imitation game." Turing (1950) organized the imitation game as follows: an interrogator and two respondents (a man and a woman) take part in the game in such a way that the interrogator stays in a separate room and can only see the typewritten form of the answers. The mission of the interrogator is to identify who is the man and who is the woman. Turing then reorganized this game by putting "a machine" in the place of one of the human respondents. Hence, the interrogator must decide which respondent is a machine and which is a human. According to Turing, if the machine succeeds in passing the imitation game, it can be considered "intelligent." In other words, machines are not required to "think exactly the same as humans"; if a machine's answers cannot be distinguished from a human's, then it is intelligent – regardless of how humanlike its algorithm's operation actually is. Soon after Turing's paper was published, a group of computer scientists organized a summer research project at Dartmouth College in 1956. This project was a cornerstone for AI research. The pioneers of the project were John McCarthy, Marvin L. Minsky, Nathaniel Rochester and Claude E. Shannon. At Dartmouth, the term "artificial intelligence" was coined and AI was founded as an academic discipline. McCarthy et al. (1955) defined the purpose of the Dartmouth project as follows (p. 2): "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." McCarthy and colleagues aimed to produce an AI that can simulate human intelligence in all aspects.
Although this aim has not been achieved yet, AI research has progressed in various dimensions and has even won victories over human intelligence in specific areas. The evolution of AI throughout history is generally described in terms of "seasons" or "booms" of AI (Miyazaki and Sato, 2018; Haenlein and Kaplan, 2019; Shin, 2019). AI has followed a cyclical course throughout history; as Yasnitsky (2020) stated, "winter gives way to spring and summer. Summer gives way to autumn and winter" (p. 16). Yasnitsky also highlighted the question of whether we can consider the exponential developments in AI technology a "revolution," or whether AI may be at the edge of a new winter season. Yasnitsky addressed reasonable counterarguments to the field and warned that unfounded enthusiasm and popularity boosted by PR would likely lead to a new winter, as AI history is full of unachieved great AI projects. The AI discipline weighed anchor in 1956 toward a brilliant purpose that has not been achieved yet. During the past decade, we have been experiencing a spring season in AI due to improvements in the machine learning area. However, the superiority of AI over humankind is still limited to particular areas. AI systems are able to do what humans programmed them to do, but not at a general intelligence level. AI is generally classified under three titles with regard to its evolutionary progress: artificial narrow intelligence (ANI), AGI and artificial super intelligence (ASI) (Kaplan and Haenlein, 2019). ANI exhibits intelligence superior to a human's in limited areas. For example, AlphaGo is superior to the human world Go champion but cannot exhibit the general mental states of the human it defeated. The general level of intelligence would be simulated by an AGI, which is the purpose of the Dartmouth Summer Project. ASI, in turn, would exhibit intelligence beyond humans in every aspect if it were to be invented. Besides the evolutionary classification, Kaplan and Haenlein (2019) classified current AI systems under three titles (p.
4):

■ Analytical AI "generates a cognitive representation of the world and uses learning based on past experience to inform future decisions."

■ Human-inspired AI "can, in addition to cognitive elements, understand human emotions and consider them in their decision-making."
■ Humanized AI "shows characteristics of all types of competencies (i.e. cognitive, emotional and social intelligence)."

From a philosophical perspective, AI is generally classified under two titles: weak AI and strong AI. The weak AI hypothesis asserts that AI systems "could act as if they were intelligent," whereas the strong AI hypothesis asserts that machines that exhibit intelligence "are actually thinking (not just simulating thinking)." We found in the literature review process that the terms "weak AI" and "narrow AI" are used synonymously in some articles (Siau and Yang, 2017; Lu et al., 2018). Current AI systems are in the ANI category from the evolutionary perspective and they are weak AIs from the philosophical perspective, but an AGI can also be a weak AI, because defining whether an AI is self-aware or not is a controversial issue. Thus, the AI discipline is based on the weak AI hypothesis (Russell and Norvig, 2010). What is expected from an AGI is to "simulate" human-level intelligence. Therefore, in this research, we used the term "narrow AI" to define current AI systems instead of the term "weak AI." We used the term AGI to represent an autonomous software program that is able to solve complex problems in various areas and has its own emotions, concerns, feelings, tendencies, etc., as humans do (Pennachin and Goertzel, 2007, p. 1).

2.4 Research on artificial intelligence and strategic management

In strategic management research, Holloway's (1983) article on strategic management and AI has an important place, as it examined the potential impacts of AI on management and addressed the problems that may occur when AI takes its place in management centers. The major question Holloway addressed was "How is the Artificial Intelligence to be administered?" He also raised disturbing questions about the social and organizational repercussions of the assumption of executive functions by AI (see Holloway, 1983, p. 92).
Holloway's (1983) ideas and the problems he foresaw were ahead of his time, and the questions he addressed have not yet been handled in detail or resolved. Dewhurst and Willmott (2014) drew attention to self-managed organizations of the future. According to the authors, as AI becomes stronger in the organization, information will be democratized rather than bureaucratized. Business units and functions will not only continue to report to top management and the CEO but will also make better decisions by virtue of the precise insights and pattern recognition features of computers. Therefore, organizations will make better decisions on their own, and a self-managed organization may discomfort top executives. Dewhurst and Willmott's (2014) foresight is significant for providing a glimpse of the future of organizations. Thomas, Fuchs and Silverstone (2016) proposed that AI has the potential of performing on the management board as an "assistant," an "advisor" and an "actor," and could enhance the performance of management boards in three ways: "change the mindset from incrementalism to experimentation," "help shape strategy" and "challenge the status quo, including sacred cows" (p. 2). As intelligent machines take over tasks, human executives will be able to focus on the task they are better at: "judgment-work." Parry, Cohen and Bhattacharya (2016) discussed an AI-based decision system in an organizational context. In this scenario, AI is not just a decision support system; rather, it is an actor in the decision-making process in collaboration with a human leader. Parry and colleagues named this system "automated leadership decision-making" performing in a social setting. The authors considered two conditions: the human leader holds veto power over the decisions of the AI system, or the human leader has no such veto power. They also defined several advantages and disadvantages of this leadership style.
According to the authors, an AI-based decision system would be superior to a human in forming vision, as humankind has inherent predispositions such as cognitive biases, beliefs, emotions, etc. AI systems are free from these constraints (although bias in AI systems is still a controversial issue, for the reason that they
process human data) and are highly capable of defining latent patterns in complexity, but this advantage of AI applies to structured data inputs. De-individualized leadership would also mitigate agency problems in large organizations. However, in the instance that the human leader has no veto power, ethical problems such as accountability may arise. Parry and colleagues proposed a "critical event logged veto" right for the human leader to overcome this challenge. von Krogh (2018) also examined the issue of delegating decision-making authority to AI. According to von Krogh, delegating decision-making will change organizations unprecedentedly. Data flow may centralize around data-processing algorithms and may not follow the information structure spread among business units and human experts. Besides, there is a possible and serious threat that AI may stay programmed toward one or more aims and may not need a particular incentive for processing information. For this reason, von Krogh emphasized that "how the phenomenon of AI relates to organization design" needs to be a fundamental research topic for management scholars and posed significant questions that need to be examined (p. 405). According to the author, a research program grounded in abductive reasoning is required, through which both qualitative and quantitative data are gathered and analyzed. According to Barnea (2020), AI is superior to humans in processing big data, and humans can make wrong strategic decisions even when they have considerable information. This superiority of AI will likely lead to a groundbreaking change in the concepts of management and decision-making. If organizations can analyze the "cognitive algebra" of competitors' decisions, AI will be more effective in predicting their next move, and this will provide a great competitive advantage. Such AI systems would also prevent senior managers from making biased decisions.
Barnea foresees human-machine collaboration in the C-suite in the future. Farrow (2020) conducted a workshop on the future of AI, and the findings show that "AI has the jobs humans don't want to do" is the best future case. Participants of the workshop foresee that AI will augment human decision-making as an advisor or an assurance service by 2038. Farrow's scenario makes an optimistic impression that AI and humans are colleagues, not enemies. Binary language may take over from human language in the future; humans and AI would produce services and solutions together. The human is no longer at the center of work, and the concepts of employee and work are expected to be changed or regulated. AI or human leaders may guide human, AI or hybrid teams. Ferràs-Hernández's (2018) future expectations are bolder and "scary." According to the author, a "future digital CEO" and even "self-driven companies" are possible. This may also lead to the end of management science, but he adds that the most powerful weapon of humans in strategic management is intuition, which is related to "creative thinking and art." At present, an intelligent machine can find patterns and answer questions better than humans, but it cannot ask questions. Hence, humans still lead the way in management in terms of intuition and social interaction, but AI is likely to close this gap as it gains more strategic thinking capabilities. Spitz (2020) also supports the idea that "as AI continues to develop, machines could become increasingly legitimate in autonomously making strategic decisions, where today humans have the edge" (p. 5). According to Spitz, a general level of intelligence is not necessary for AI to become dominant in human-specific areas of the strategic management process. AI evolves exponentially, and its improvement includes the field of artificial emotional intelligence. In that case, humans have one choice to sustain their superiority in decision-making: to become antifragile, anticipatory and agile (AAA).
Otherwise, the C-suite would turn into the A-suite. As a result of the literature review on strategic management and AI, we found that there are both optimistic and pessimistic expectations about the role of AI in management. Delegation of decision-making to AI, ethical concerns about AI, human-AI collaboration in strategic management and the future of management science are the main topics researchers have handled, but we could not find comprehensive research that examines the topic "AI as a
CEO" in various dimensions. This research is based on this gap, and we decided to follow an explorative research design to examine the feasibility of AI performing as CEO.

3. Methodology

In this research, we followed the classic grounded theory (CGT) design. Grounded theory (GT) was discovered by two sociologists, Glaser and Strauss (1965, 2006), because the existing sociological theories did not meet the scope of their research at the time. Glaser and Strauss (2006) defined GT as the "discovery of theory from data" (p. 1). In the following years, Glaser and Strauss went their separate ways and remodeled the GT methodology from diverse epistemological and ontological perspectives. Glaser's (1978) CGT design is generally related to objectivist epistemology and critical realist ontology in the literature (Annells, 1997). However, Glaser (2007) emphasizes the transcendent nature of CGT and defines it as a general methodology that does not adhere to a specific paradigm; according to Glaser, CGT is a "highly structured but eminently flexible methodology" (p. 48). In this research, we preferred to follow Glaser's design because CGT assumes that the pattern is hidden in the data and the mission of the researcher is simply to discover it; moreover, CGT provides a flexible research process that does not adhere to a specific paradigm. CGT has specific research methods, and the researcher should follow these procedures and let the theory emerge. The mission of a classic grounded theorist is to discover a theory, not to invent it. Glaser defined GT as "simply the discovery of emerging patterns in data" (Glaser, 2015, p. 13) and as an integration of "simultaneous," "sequential," "subsequent," "scheduled" and "serendipitous" procedures (Glaser, 2007). The procedures of GT are listed below:

■ Theoretical sampling.

■ Theoretical coding: open coding and selective coding.

■ Constant comparative method.

■ Memoing.
3.1 Theoretical sampling

Definition. Theoretical sampling is a data collection method specific to GT. In this process, the researcher collects and analyzes the data jointly (Glaser and Strauss, 2006). The aim of the researcher is to generate a theory and to define the next sample and research area that will serve this aim (Glaser, 2007).

Application. At first, we decided on a sample consisting of strategic management professors, but in the process of asking for interviews, some academics stated that they did not have comprehensive knowledge of AI and declined our interview requests. We therefore realized that this sample was insufficient and extended its scope with academics from the management, management information systems and computer science disciplines, as well as executives, entrepreneurs and experts working in the AI field. Through the data collection process, we further expanded our sample with academics who have expertise in the philosophy of mind and the philosophy of AI, and with artists who combine art and AI. (One of these participants performs generative art and conducts studies on integrating technology, algorithms and art; he also developed a poet robot. The other is an inventor, poet, author and computer scientist who designed a poet robot and works on an AI project.) As a result, we conducted 27 interviews and stopped the data collection process when we decided that the categories were saturated. Information about the data is presented in Table 1.
We did not restrict participant selection to a particular city, but the sample was limited to Turkey; we strove to reach every participant in Turkey related to the subject of the research. The data collection period lasted approximately seven months, from July 2018 to February 2019. Interviews were conducted in participants' workplaces, and requests were sent via email with an attached ethical report approved by the researchers' institutions. We asked participants for permission to use an audio recorder in order to prevent data loss. Interviews were performed by the first author, and semi-structured and unstructured interview methods were followed.

Table 1 Information about participants and interviews

Participant | Affiliation | Area | Length | Date | City
1 | Professor | Management information systems (MIS) | 26 min | July 27, 2018 | Düzce
2 | Bureaucrat, MSc | Public | 40 min | October 4, 2018 | Ankara
3 | CEO, PhD | Software and robotics | 37 min | October 22, 2018 | İstanbul
4 | Professor | Management | 32 min | October 25, 2018 | Düzce
5 | Associate professor, entrepreneur | MIS and software | 50 min | November 13, 2018 | Ankara
6 | Professor, entrepreneur | Computer engineering | 25 min | November 17, 2018 | Düzce
7 | Professor, author | Computer engineering | 35 min | November 22, 2018 | İstanbul
8 | Associate professor, entrepreneur | Computer engineering | 30 min | November 26, 2018 | Antalya
9 | Assistant professor | Computer engineering | 1 h | December 3, 2018 | Isparta
10 | Professor, bureaucrat | Management | 21 min | December 7, 2018 | İstanbul
11 | Associate professor | Philosophy of AI | 1 h 20 min | December 10, 2018 | Ankara
12 | Professor, entrepreneur | Software and electronic engineering | 1 h 15 min | December 11, 2018 | Ankara
13 | MSc, lecturer | Electrical-electronics engineering and deep learning | 45 min | December 14, 2018 | İstanbul
14 | AI manager | AI and software | 1 h | December 17, 2018 | İstanbul
15 | Associate professor | Management | 1 h 4 min | December 24, 2018 | Eskişehir
16 | Assistant professor, author, TV commentator and presenter | Theology and philosophy | 40 min | January 3, 2019 | İstanbul
17 | Director of architecture and quality assurance | IT | 1 h 20 min | January 4, 2019 | Ankara
18 | Software director | Software | 32 min | January 4, 2019 | Ankara
19 | Professor | Management | 1 h 30 min | January 8, 2019 | Ankara
20 | Assistant professor | Management | 1 h 30 min | January 17, 2019 | İstanbul
21 | Associate professor | Philosophy | 1 h 8 min | January 18, 2019 | İstanbul
22 | Artist, instructor, lecturer | Generative art | 1 h 4 min | February 7, 2019 | İstanbul
23 | Associate professor | Sociology | 24 min | February 12, 2019 | Bolu
24 | Professor | Philosophy | 2 h 45 min | February 13, 2019 | İstanbul
25 | Professor | Philosophy | 53 min | February 14, 2019 | İstanbul
26 | Professor | Urban and regional planning | 50 min | February 18, 2019 | Ankara
27 | Entrepreneur, MSc, author | Cybersecurity | 1 h 25 min | February 2, 2019 | Ankara
As a first step, we prepared an interview form consisting of nine open-ended questions with probes. We revised this form throughout the data collection process in line with the emerging concepts. Before we started to collect data, we submitted our research project and interview form to the X University Scientific Research and Publication Ethics Committee. The committee approved our research project as ethical with decision no. 2018/2021 on May 24, 2018. The interviewer adopted a "communicative validity" approach (Kvale, 1996, p. 246) and performed a participative and questioning role in the interviews. According to Kvale (1996), "Communicative validation approximates an educational endeavor where truth is developed through a communicative process, with both researcher and subjects learning and changing through the dialogue" (p. 247). Our sample consists of participants from different scientific fields, and sometimes participants' ideas contradicted each other. The interviewer presented her interpretations to the participants and asked their opinions; she sometimes questioned their views, shared our arguments about the topic and the views of the other participants and discussed these issues and initial findings with them. Communicative validation enabled us to compare interdisciplinary views during the interview process. Apart from participants' views on interdisciplinary issues, we also drew on their expert knowledge. All of the participants are expert and experienced in a particular area. Some of them referred to books and articles, and read passages from them during the interviews, to support their views. While writing the research report, we treated this kind of knowledge as expert knowledge and interpreted it in light of the interview data, especially when participants' expert views confirmed each other and the literature.
Hence, we did not give a reference when we used participants' expert knowledge; we gave references when we referred to the literature as a supportive data source.

3.2 Constant comparative method

Definition. The grounded theorist starts the coding process as soon as the first data are collected and constantly compares the new data with the former data (Glaser and Strauss, 2006). Incidents, concepts and hypotheses are constantly compared to provide theoretical elaboration, saturation and verification of emerging concepts. This method also serves as an auto-control mechanism for the emerging theory (Glaser, 2007).

Application. As we collected new data, we compared the new content with our previous findings. When we realized that a new concept was emerging, we headed toward the participants and research areas related to that concept. We revised our interview form, adding new questions and eliminating some others. Thus, we focused our energy on emerging concepts and their relationships.

3.3 Theoretical coding

Definition. Theoretical coding is applied in two stages: open coding and selective coding. In the open coding process, the analyst codes the data line by line and searches for initial codes (Glaser, 2007). The selective coding process starts when the researcher discovers the core category. The core category is the variable that explains how the main problem is solved; as Glaser (2002) stated, the "core category organizes other categories by continually resolving the main concern" (p. 30). After the core category has emerged, the researcher continues coding by focusing on the core category and its relationships with the other categories. Saturating the categories and testing hypotheses are at the center of this process, and the researcher defines the sample and collects new data to fulfill this aim. This coding process is called "selective coding" (Glaser, 2007).

Application. The data of the first 10 interviews showed that AI has superiority over humans in terms of rationality, objectivity, speed, etc.
Humans, in turn, had superiorities over AI, such as emotion, experience, emotional intelligence, consciousness, etc. These features can be
called deficiencies of AI. Participants remarked that "AI cannot perform the role of a human CEO" because of these deficiencies. During the 11th interview, a participant mentioned for the first time the differences among artificial general intelligence (AGI), strong AI, narrow AI and weak AI, and these differences became more apparent in the following interviews. As a result, we realized that some basic problems must be solved before "AI in the CEO position" can be considered. These problems are really "hard" and relate to computer science, philosophy and neuroscience. Hence, in the middle of the research process, "solving hard problems" emerged as the core category, which we titled "hard problems." After we discovered the core category, we no longer used an interview form for collecting data and followed an unstructured interview method. The emergence of the core category shifted the coding process from "open coding" to "selective coding" and the data-gathering method from "semi-structured" to "unstructured" interviews.

3.4 Memoing

Definition. Memos are theoretical notes that the researcher records throughout the GT process. Researchers should take notes whenever an idea about the emerging theory comes to mind; Glaser termed these moments "eureka moments." Hence, memos are theoretical discoveries and help the researcher realize the correlations between concepts (Holton, 2017).

Application. The research process was full of eureka moments. We recorded these ideas in a Word file, and they directed us to discover emerging concepts and relationships among categories. Consequently, we strictly adhered to these basic procedures of CGT throughout the research. We conducted the research in line with the following words of Glaser (2015, p. 13): "Everybody engages in GT every day because it's a very simple human process to figure out patterns and to act in response to those patterns. GT is the generation of theories from data.
GT goes on every day in everybody's lives: conceptualizing patterns and acting in terms of them." In brief, we searched our data for patterns and discovered concepts, a core category and four theoretical categories related to it.

4. Findings

As a result of the CGT process, five theoretical categories emerged, and these categories were linked to each other and to the core category via five hypotheses. We titled the emergent theory The Vizier-Shah Theory, referring to the historical functions of the Vizier and the Shah. The term Vizier dates to Ancient Egypt and "is the conventionally used translation of the Egyptian term tjaty," who was responsible for overseeing the running of all state departments, excluding religious affairs. The Vizier was the most powerful figure under the Pharaoh and "was not simply a counselor or advisor to the king but was the administrative head of the government." The vizier was also an office in the Ottoman Empire's bureaucracy. The Grand Vizier (Vezir-i Azam) was the most prominent member of the dîvân, was expected to meet the requirement of collegiality and had a "decisive role in the daily running of the Ottoman state administration." Namely, the Grand Vizier was "analogous to prime minister" in the Ottoman bureaucracy (Weiker, 1968, p. 457). "Shah," in turn, is "the title of the king of Persia," "shortened from Old Persian xšayathiya 'king.'" We preferred to use the Vizier-Shah analogy because of the terms' historical positions. Also, in the Turkish language, the terms Vizier (Vezir) and Shah (Şah) are used for the Queen and King in chess. Thus, in this research, the Shah represents the CEO and the Vizier represents the "right hand" of the CEO. Consequently, The Vizier-Shah Theory includes five main theoretical categories:
- Narrow AI.
- Hard problems (the core category).
- Debates.
- Solutions.
- AI as a CEO.

These five categories are grounded on 13 theoretical codes and 45 empirical codes. The categories and codes are listed in Table 2.

Table 2 Categories and codes

Data codes | Theoretical codes | Theoretical categories
Rationality; objectivity; speed; durability; computing ability; big data processing | Superiority | Narrow AI
Consciousness; emotional intelligence; incentive; will; judgment; motivating; leadership | Deficiency | Narrow AI
Dualism; materialism; idealism; panpsychism | Philosophy: the mind-body problem | Hard problems (the core category)
NP-complete problems; toolbox | Computer sciences' problems | Hard problems (the core category)
Brain; consciousness | Neuroscience: the functioning of the brain | Hard problems (the core category)
AI and dualism?; semantics; paradigms | Debates among disciplines | Current debates
Cacophony; fashion effect; lack of theoretical knowledge; scenarios; ethics and legal issues | Mess | Current debates
Flexible view; merging disciplines; theoretical view | Holistic view | Solutions
Educational revolution; legal arrangements; AI strategy; AI divisions | AI investments | Solutions
Tool; extension | Vizier | AI CEO
Transhumanism; cyborg CEO | Vizier-Shah | AI CEO
Strong AI CEO; weak AI CEO; the ego of human | Shah | AI CEO
Swarm intelligence; network | Swarm-Shah | AI CEO

In total: 45 data codes, 13 theoretical codes and 5 theoretical categories.
In Table 2, the first column presents the data codes. These codes were found in the data as a result of the open coding process. The second column presents the theoretical codes, which are conceptualized forms of the data codes. The third column presents the theoretical categories, more conceptualized forms of the theoretical codes. This conceptualization process shows that the theory is grounded in the data but also generalizable, as data codes were conceptualized and ascended to theoretical categories. The relationships between the five categories are provided by five hypotheses. The illustration of The Vizier-Shah Theory is presented in Figure 1. The five hypotheses of the theory are listed below:

H1. Narrow AI should enhance its cognitive capabilities to the general intelligence level to perform the role of a CEO.

H2. "Hard problems" prevent narrow AI from performing CEO roles.

H3. Recent significant improvements in narrow AI give rise to debates.

H4. Hard problems give rise to debates.

H5. If and when hard problems and current debates are solved, it may be possible for AI to become a CEO.

Before explaining the theory, we want to explain how The Vizier-Shah Theory emerged. H1 is the first hypothesis that emerged during the grounded theory process. During our interviews, we realized that narrow AI should gain general-level intelligence capabilities to perform the tasks of a CEO. All but one of our participants agreed on this view. Hence, we defined H1 as our first hypothesis: narrow AI should enhance the scope of its intelligence to be a CEO. However, we found a strong barrier that prevents narrow AI from enhancing its capabilities to general-level intelligence. We titled these problems "hard problems" - the core category. Then, our second hypothesis emerged: hard problems prevent narrow AI from performing CEO roles. The emergence of the category "hard problems" was a turning point in our research process.
For that reason, we defined this category as "the core category." It appeared in the midst of our data-gathering process and strongly influenced the aim of the research, the selection of participants and the interview method we used. We drew the arrow representing H1 directly from narrow AI to AI-CEO to emphasize the first crucial finding of the research. Of course, narrow AI must follow the path linked by H2-H5 to ascend to the CEO level. Consequently, the arrows in Figure 1 represent the hypotheses that link the categories and are ordered according to the sequence in which they emerged.

Figure 1: The Vizier-Shah Theory

For example, the categories "hard problems" and "debates" emerged
almost simultaneously, but "hard problems" was such a crucial factor that it changed the flow of the research process. To emphasize the power of "hard problems," we illustrated it as a relatively large square and placed "debates" just beneath it. By doing so, we intended to show that these two categories emerged in almost the same time interval, but that "hard problems" has the central role in The Vizier-Shah Theory, as we organized the other categories around it. In the following section, we explain the theory in detail.

5. The Vizier-Shah theory

In this section, we explain The Vizier-Shah Theory under five main titles: narrow AI, hard problems, debates, solutions and AI-CEO.

5.1 Narrow artificial intelligence

This category represents the superior and deficient aspects of current AI technologies, which are generally defined as "narrow AI." The superiorities and deficiencies are considered relative to human capabilities. Narrow AI is superior to a human in specific areas but not at a general intelligence level, and this deficiency is a great obstacle that prevents AI from performing the CEO role in an organization. Hence:

H1. Narrow AI should enhance its cognitive capabilities to the general intelligence level to perform as a CEO.

Narrow AI technologies continue to improve exponentially, especially in the machine learning area. The speed of improvement also gives rise to several concerns and debates. Hence:

H3. Recent significant improvements in narrow AI give rise to debates.

5.2 Hard problems

There are important obstacles that prevent AI from becoming strong AI or AGI. These obstacles were titled the "hard problems" of computer sciences, philosophy of mind and neuroscience. These problems are considered "unsolvable" in the short term or, according to some, will never be solved.

The hard problem of computer sciences. The hard problems of the field are the "non-deterministic polynomial time" (NP) problems and, in particular, the "NP-complete" problems.
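To make the scale of NP-complete problems concrete, consider Boolean satisfiability (SAT), the canonical NP-complete problem: the only generally known exact approach is, in essence, to search the space of 2^n truth assignments, which grows exponentially with the number of variables. The sketch below is our own illustration, not part of the study; the function name and the DIMACS-style clause encoding are choices made for this example:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustively try all 2^n_vars truth assignments for a CNF formula.

    Each clause is a list of non-zero integers: k means "variable k is
    true", -k means "variable k is false" (DIMACS-style convention).
    Returns the first satisfying assignment found, or None.
    """
    for bits in product([False, True], repeat=n_vars):
        # A clause holds if at least one of its literals is satisfied;
        # the formula holds if every clause does.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
formula = [[1, 2], [-1, 3], [-2, -3]]
print(brute_force_sat(formula, 3))  # → (False, True, False)
```

Adding one variable doubles the search space, which is why such problems are considered infeasible for large instances unless a fundamentally better algorithm is found.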
These are complex problems that cannot be solved in a feasible amount of time, even if the Turing machine (i.e. the computer) runs for an exceptionally long time. Researchers should overcome these problems; otherwise, there is no point in talking about or predicting a future AI that can exhibit fully human intelligent behavior. At present, the Turing machine is incapable of solving such complex problems efficiently. The Turing machine is still not comparable to the human brain, and the "toolbox" of a computer scientist is not capable of an AGI revolution. Thus, participants think that it is too early to expect AI to perform human-specific actions and take over all human-specific tasks. Predictions of feasibility range between 50 and 150 years.

The hard problem of neurosciences. The functioning of the human brain has not yet been completely solved, and this is another obstacle to generating an artificial brain that operates like the human brain. Are the human brain's structure and function the only way for an AI to think, make decisions and judgments and be self-aware? Neuroscience examines the functioning of the brain and the neural system and adopts a deductive materialist (in other terms, physicalist) approach. Deductive materialism asserts that the essence of everything in the universe is material and denies other forms of substance. According to this approach, consciousness is not a substance apart from the brain; namely, consciousness is a function of the brain. Therefore, it is not possible to generate an artificial conscious brain without entirely solving how the brain works. Even if it
happens in the future, it is an enigma whether an artificial brain would exhibit human intelligence, because consciousness is commonly considered a human-specific phenomenon and has been at the center of ongoing debate in philosophy.

The hard problem of philosophy. Human phenomenology, generally called "subjective experience" or "qualia," has not been definitively explained yet. Subjective experience also underlies the problem of the first-person versus third-person view of consciousness (Chalmers, 1999, 2002). This is another conflict between science and philosophy (Dennett, 2001; Ross, 2002). Human experiences are subjective, and a first-person narrative is used to express them. However, the sciences examine facts and use third-person language. The consciousness states of humans cannot be observed objectively. Can human phenomenology, then, be the subject of science? There are opinions that it can and that it cannot. This is one of the problems that should be solved. A CEO is required to make judgments, make inferences through logical reasoning and take rational action, but should an AI-CEO also have emotional states such as hate, love, anger, ambition, hope or desire? And if not, how effective will the actions of a creature isolated from all these emotional states be in a population of humans? These emotional states exhibit an "asymmetric" impact even in today's human-intensive business world. It cannot be asserted that emotional states have either an entirely positive or an entirely negative effect on organizational performance. For example, in the decision-making process some emotions improve performance, but in some conditions the outcome is worse. Participants from the management discipline drew attention to this issue. As emotional states make an asymmetric impression, further problems arise: Will we be able to transfer the emotions we mark as "beneficial" to AI and eliminate the "harmful" ones?
Would it be better if AIs made purely rational decisions isolated from emotions? Or should AI think and act just "human-like"? Then, how can we be sure that an AI is conscious even though it exhibits intelligent behaviors? Therefore, we should know exactly how conscious states emerge. The state of consciousness is a multidisciplinary hard problem and the last enigma of humankind: What is consciousness? This question is strongly related to the hard problem of philosophy: the mind-body problem. There have been several approaches to the mind-body problem in philosophy: deductive materialism, idealism, dualism, functionalism and panpsychism approach it from different perspectives. Participants from the philosophy discipline especially drew attention to this point. The phenomenon "AI may have the same consciousness states as humans one day" is possible through the lens of functionalism, but not from the classic dualist and deductive materialist perspectives. It is obvious that classic dualism cannot explain a "conscious AI" phenomenon, because Descartes referred to the "pineal gland" as the interaction area of body and soul, and this argument has run aground. Besides, other dualist approaches defend that consciousness is specific to humankind. Thus, AGI research is based on a materialist point of view. Deductive materialism and functionalism are the materialist approaches mentioned repeatedly by participants. Neuroscience adopts a deductive materialist approach, explains consciousness states by the interaction of neurons and denies the existence of a separate substance. Radical deductive materialists even defend that the terms mind, soul and spirit should be eliminated from language. Functionalism, by contrast, is a materialist approach that explains consciousness through its functions: if these functions can be exhibited by another creature, that creature can be accepted as conscious, for it is able to perform the functions of consciousness.
A participant (a professor of philosophy) referred to Putnam's criticism of deductive materialism: if consciousness were completely tied to brain functions, then consciousness states would be specific to that brain. However, according to Putnam, some living things that do not have the same brain structure as humans are nevertheless able to exhibit similar consciousness states. A participant exemplified this circumstance as follows: For example, Hilary Putnam states that octopuses also have feelings such as pain and hunger. These are his original examples. He states octopuses' brains are different from humans'. Therefore,
similar consciousness states can emerge in different organisms with different brain structures, different anatomies and different physiologies. This is a quite different point of view; this is not deductive materialism.

According to the findings, AI research is much closer to the functionalist point of view, and according to functionalism, AI may exhibit conscious states in the future. One of the challenges here is that how conscious states emerge has not been solved yet. At this point, a multidisciplinary approach is required: the neuroscience, AI and philosophy disciplines intersect in "consciousness" research. Hence:

H2. "Hard problems" prevent narrow AI from performing CEO roles.

H4. Hard problems give rise to debates.

5.3 Current debates

Debates among disciplines. The computer sciences, philosophy and neuroscience disciplines follow different research paradigms, adopt different research methods and consider AI issues from different perspectives. Therefore, conflicts among the disciplines arose inevitably. We discussed previously that the idea of "conscious AI" conflicts with classic dualism and mentioned that AI research progresses in line with materialism, specifically functionalism. Participants also mentioned that AI research is in accordance with the functionalist approach, but we found that there are still conflicts between neuroscience and philosophy, especially about the dualist approach to AI. The argument that "AI research adopts a dualist approach" concerns the structure of an AI system: AI is composed of two separate parts, hardware and software. The standpoint of the neuroscientist António Damásio was given as an example of this point of view. The participant (a professor of philosophy) referred to Damásio's (1995) book Descartes' Error: Emotion, Reason and the Human Brain and read the following section (pp.
247–248): My concern, as you see, is for both the dualist notion with which Descartes split the mind from brain and body (in its extreme version, it holds less sway) and for the modern variants of this notion: the idea, for instance, that mind and brain are related, but only in the sense that the mind is the software program run in a piece of computer hardware called brain; or that brain and body are related, but only in the sense that the former cannot survive without the life support of the latter.

A substance is a thing that needs nothing except itself to exist. According to the dualist view, there are two different substances, and this duality is identified with the hardware and software components of AI. However, there is a nuance in this comparison: body and hardware are both material, but what about software? Can it be considered a material substance or a spiritual substance, as in dualism? It is obvious that software is not a spiritual substance like the mind in dualism. If it is a material substance, then the dualist view is denied; if it is considered an abstract substance, an "idea," then the materialist view is denied. Another important point in understanding the philosophy of AI through the lens of the philosophy of mind is that whereas software can operate on different hardware, consciousness emerges only in the brain it belongs to. Transferring consciousness from the body in which it emerged to a different body has not been achieved. Neuroscience research generally adopts a deductive materialist view, but this does not mean that transferring consciousness will never come true, though it has not been achieved yet. A participant likened the case of software operating on different hardware to the reincarnation phenomenon in Plato's idealism. Another participant (a computer scientist and expert) mentioned that he is a Platonist:
A line is the union of dots. A triangle is a shape constituted of three dots connected with lines. If you manage to describe the triangle, you crack the secret of AI. Actually, that is what I want to say: describing the triangle conceptually will solve the problem. It always gets stuck in mathematics and cannot be conceptualized. I am a Platonist in this sense.

Materialism and idealism are two completely opposed philosophical perspectives: idealism denies the material and materialism denies the idea. Dualism accepts both body and mind, and in this way it is distinguished from the other two philosophical views. Besides, the philosophy of AI does not fit completely with any of these three approaches. We found that, although according to some the AI discipline is based on the dualist view, AI is a project based on and fitted to the functionalist approach; yet, as seen in the quote above, one computer scientist adopts a Platonist approach to solving the hard problem of human phenomenology.

Semantics. The founding purpose of AI research was to generate a machine that simulates the whole mental state of humans. The purpose was not to generate an AI that has totally humanlike phenomenology, but one able to exhibit consciousness states that cannot be distinguished from humans'. AI research is in accordance with functionalism, but can AI be conscious, or does it just simulate consciousness states? According to computer scientists, "simulating" human mental states in all aspects is a valid criterion; the Dartmouth Summer Research Project proposal and the Turing Test were based on this idea. Some participants mentioned Searle's (1997) famous Chinese Room thought experiment as a counterargument on the issue of whether a machine "actually" thinks.
In his argument, Searle (1997) drew attention to "semantics." Although machines can give appropriate answers that cannot be distinguished from humans', they are not aware of what they are doing; they just follow the instructions, namely, carry out "syntax." Therefore, such a machine exhibits syntactic features but not semantics. Besides, computer scientists handle the issue through a technical lens; they do not care about semantics, as exhibiting intelligent behavior is practical and enough. A machine may not be conscious but can exhibit consciousness states. This view confirms the weak AI hypothesis in philosophy. Actually, as we still do not know what consciousness is and how it emerges, we do not know what kinds of things it can be attributed to. We found a contradiction among our participants' views in this sense, especially between computer scientists and philosophers.

Mess. Another finding on the current debates on AI, stated by almost all participants, is "cacophony." AI has recently been a popular issue in academic environments, business, social life, social media, etc. Therefore, various programs, conferences and seminars are being organized, and articles and conversations are issued in printed, visual and social media. Actually, the AI area is "under attack," and many people struggle to benefit from this popularity. However, a lack of theoretical background causes misunderstandings and chaos. Marketing tools and the media also boost information pollution, and this situation gives rise to dystopic and utopic scenarios. Various dystopic and utopic scenarios have been circulating about AI and the future of humankind. We found several concerns and expectations about the future of AI in the short and long term. The dystopic and utopic scenarios are shown in Table 3. We grouped the future scenarios under two titles according to participants' views.
There are also several classifications in parallel with our findings on the possible impact of AI on the business world, society, our everyday lives, etc. For example, Makridakis (2017) defined four scenarios about the impact of AI on society and organizations in the future: the optimists, the pessimists, the pragmatists and the doubters. According to the author, the optimist view is a utopian scenario based on developments in genetics, nanotechnology and robotics: AI will take over work, and humans will have the opportunity to pursue various activities or to work. Also, humans' deficiencies deriving from biological limitations will disappear as technology reaches revolutionary levels. The pessimist view fictionalizes a dystopian scenario in which machines take possession of decision-making authority and humans become dependent on them. The pragmatists think that regulations and
transparent AI initiatives such as OpenAI will prevent the dark side of AI from doing harm, and that a controlled AI would be beneficial to humankind.

Moreover, debates based on an insufficient theoretical and technical ground are themselves insufficient and misleading for producing solutions. For example, at present there is no AI at the level of AGI or strong AI, but some chatbots and IVRs, such as the robot Sophia, are perceived as "conscious" robots by some segments of society, although they are not. An example of a misunderstood term is "robot." The term "robot" is sometimes used instead of AI or as a synonym of AI, but not all robots are loaded with AI programs, and not all AI systems have a human-like body shape. Another example is the term "singularity." A participant (a professor of AI philosophy) explained the misusage of the concept as follows:

Singularity is generally misunderstood. Singularity does not mean that a machine is superior to a human. For example, Deep Blue is superior to all of us in chess play, isn't it? Humans created that program and cannot beat it. Is Deep Blue an example of singularity? No [...] Do you know what singularity is? You create a system superior to you, and that system creates a system that is superior to it, and that system creates another system and destroys all the sub-systems. Think about Deep Blue. It plays chess and beats humans; to be singular, it should teach chess to another machine, and that machine should beat its creator and humankind, and then another machine should beat it [...] A perpetual process. What is the best example of singularity? It is you! Think about evolution theory.

The victories of AI over humankind in specific areas do not mean that AI is completely superior to humans. These programs are at the narrow AI level and do not exhibit general capabilities as humans do. These victorious narrow AIs do not have the capability of causing a revolution in social systems or a paradigm shift, but an AGI would.
5.4 Solutions

Multidisciplinary research and flexible, holistic and theoretical approaches emerged as the codes that play a significant role in solving the hard problems and current debates.

Table 3 Dystopic and utopic scenarios

Dystopic scenarios:
- Caste system: Class discrimination between robots and humans and/or between countries is expected in the long term. According to this scenario, robots would be slaves as a working class, or humans would be slaves of AGIs. Discrimination between countries is also expected: countries that produce AI technology will monopolize it, and poverty would reign in other countries. The idea of a war between robots and humankind, or between countries, derives from this possible class discrimination. Participants generally referred to Elon Musk's and Stephen Hawking's views on this issue.
- Violent competition: Companies may take the place of countries in the future, and violent competition would reign. A monopoly of AI companies and regional competition are expected.
- Unemployment: A decrease in the human workforce is a common view, but the emergence of new job titles and concepts is also expected. The dystopic part of this view is that AI would reign over the business world entirely, leading to two problems: "unnecessary humankind" and "mass unemployment." According to this scenario, humans may also lose their competencies by becoming accustomed to using AI applications; when this merges with unemployment, it may lead to feelings of purposelessness, laziness and inadequacy.

Utopic scenarios:
- Union of countries: A new world order without frontiers is expected in the long term. A global union would decrease conflicts and differences between nations and gradually decrease, or even eliminate, competition between organizations. Peace would reign around the world.
- Renascence Era: Unemployed life would lead to a new Renascence. Humans may have the opportunity to spare time for their environment and to perform arts and philosophy.

Holistic view. A flexible, interdisciplinary approach and new skills are required in the new era. Integrating AI technology into disciplines is a beneficial practice. At present,
multidisciplinary research is being conducted to develop a theory of consciousness, such as the Tucson conference series and the Association for the Scientific Study of Consciousness conferences; these initiatives play a significant role in solving the enigma of consciousness, and participants also mentioned interdisciplinary research on the "science of consciousness." PwC's (2017) knowledge paper on AI and robotics also emphasized that multidisciplinary expert collaboration from the fields of computer sciences, social and behavioral sciences, law and policy, ethics, psychology and economics is required to improve AI research. Consequently, the code "holistic view" indicates that an awareness of the interdisciplinary approach is emerging. It is expected that strict boundaries between disciplines will disappear and that a collaborative, holistic approach will be adopted in the short term. However, considering the issue through the lens of today's paradigm is insufficient for predicting future outcomes. All we can do is observe trends and predict the milestones. Thus, we can see the alternative ways, discuss their outcomes and develop strategies, but we cannot make a definite judgment, which would invite speculation. The history of humanity is full of surprises but also of realized predictions. This is exactly the aim of this research: handling the developments in AI technology from an interdisciplinary perspective, clear of information pollution, and providing alternative ways.

Investments in AI. Current education systems are generally based on the Industrial Revolution and are insufficient for the Fourth Industrial Revolution. A flexible, interdisciplinary approach and new skills are required in the new era, and integrating AI technology into disciplines is a beneficial implementation. Developments in AI technology also give rise to legal and ethical problems. The "civil rights," "taxation" and "real person" issues of AI need solutions.
In particular, the adverse outcomes of autonomous cars give rise to debates about ethics, conscience and criminal liability. The exponential growth of AI technology requires regulation, which should be arranged by governments and regional or worldwide unions. The One Hundred Year Study on Artificial Intelligence (AI100, 2016) report supports this finding: more legal and ethical issues emerge as the level of AI-human interaction increases. Torresen (2018) also emphasized the importance of legal arrangements for the accountability of AI and for ethical issues. Jiang et al. (2017) drew attention to the lack of standards and the safety deficiencies of current AI regulations; the authors defined this fact as an important obstacle to implementing AI systems in healthcare. Developing an AI strategy is another important finding. National and international AI strategies are crucial factors for organizations in developing business strategies. Governments and unions define their AI investments on the basis of these strategies, and the strategies guide organizations in deciding in which countries to invest. As countries' AI strategies vary with economic, sociocultural, technological and various other dynamics, organizations' AI strategies also vary with their internal and external dynamics. There is no definite and common AI strategy for all countries and organizations. Participants mentioned Japan, China, the USA, South Korea and France as prominent countries investing in AI. As of 2018, 26 countries and regions had defined an AI strategy or an AI-supporting strategy (Dutton, 2018; Heumann and Zahn, 2018). Hence:

H5. If and when hard problems and current debates are solved, it may be possible for AI to become a CEO.

5.5 Artificial intelligence as a chief executive officer

The Vizier CEO type. The majority of the participants claimed that narrow AI cannot perform the role of a CEO in the future, but it is expected that AI will take part on the management board as a decision support system.
In that case, AI's ultimate position is the Vizier status: AI serves as an advisor, the "right hand" of the CEO. The human CEO has the last
word, and AI performs as an extension. Therefore, the Vizier CEO type consists of two actors: the human CEO and an ultimate AI that is still not a full AGI. These two actors depend on each other and collaborate. The anticipation of Barnea (2020) and the finding of Farrow (2020) support our Vizier-type CEO:

"It appears as if CEOs will need to combine strong strategic thinking skills with increasingly sophisticated analytic tools to help them run the organization [...] Senior executives who use instinctive leadership skills or past successes to make decisions will have to become evidence." (Barnea, 2020, p. 77).

"Participants felt that AI as an advisory or assurance service provided to augment leader decision-making would be a standard corporate governance best practice by 2038" (Farrow, 2020, p. 6).

Parry et al.'s (2016) "automated leadership decision-making" scenario, based on the collaboration of AI-based decision-making systems and human leaders, is an example of the Vizier. Likewise, Spitz's (2020) future scenario of "hyper-augmentation," a symbiotic partnership between "smart algorithm-augmented predictive decision-making" and humans with AAA capabilities (anticipatory, antifragile and agile decision-making), can be considered a Vizier-type CEO.

The Vizier-Shah CEO type (Cyborg CEO). When collaboration turns into integration, the leader is a "Cyborg CEO" and the type is the Vizier-Shah. This type of AI-CEO is based on transhumanism, in which human features are enhanced through scientific and technological developments. Hence, the Vizier-Shah CEO type includes one actor: a cyborg, an integration of an enhanced human and AI. Recent developments in technology support the cyborgization of humankind. Today, smartphones, computers and applications have become significant complements of human life, and their absence for a while causes a feeling of deprivation.
This fact can be considered a determinant of the cyborgization of humans; however, the integration has not happened yet. In the Cyborg CEO type, the Vizier and the Shah are integrated into one body: the body of an enhanced human. The evolution of humankind into a biologically and technologically enhanced form is expected to happen, and that will certainly also affect the business world. Dong et al. (2020) foresee a coevolution of humans and AI that supports our Vizier-Shah cyborg type: "With a brain-computer interface, human beings can communicate with each other without using language and only rely on neural signals in the brain, thus realizing 'lossless' brain information transmission [...] The future development of AI is to enhance, not to replace, the overall intelligence of human beings and promote the complementation of AI and human intelligence, giving play to their respective advantages to realize the 'coevolution' of human and AI machines" (p. 6).

Kurzweil's (2005) ideas on cyborgization also support the Vizier-Shah type. According to Kurzweil, humans will integrate technology into their bodies in the near future: by 2030 our brains will be more non-biological, and in the 2040s the non-biological intelligence of humans will achieve tremendous capabilities. However, Tegmark (2017) emphasizes that although cyborgs and uploads are feasible, and although we are already addicted to technology and use technological tools as extensions of our cognitive capabilities, humans will find easier ways of achieving advanced machine intelligence, so that road will be faster.

The Shah CEO type. In the Vizier and Vizier-Shah CEO types, humans predominate over AI until an AGI that entirely simulates human mental states is invented. We coined the AGI-CEO type the Shah. This type of CEO requires a general level of intelligence; besides, legal and ethical issues should be solved. An AGI that can simulate humans'
intelligent behaviors, with superior features such as objectivity, durability and rationality, moves ahead of a human CEO. If the mental states of humans are transferred to AI, the superiority of humans over other species would disappear. Even if the mental states of humans and AI were equalized, AI would still be superior to humans, as it has no biological deficiencies; but if humans are enhanced as transhumanism proposes, cyborg-humans will appear and may confront AGI. Humans have always been eager to assign routine tasks to AI. Such collaboration goes well, but will humans be eager, or need, to transfer human-specific tasks to AI? As most of the participants stated, "it is a dream," for the reason that it is not needed and humans are "selfish." The nuance here is that AGI will likely be invented one day, as scientists are enthusiastic to achieve it; but when that happens, will humans want to assign a human-specific task such as the CEO position to AI? Most of the participants think that the last word in organization management will still be said by a human. Another point of view is that duplicating humankind entirely is unnecessary, because humans already exist and there is no need for a duplicate. Technology is about facilitating human work; therefore, AI is expected to support humans, not to manage them. It seems that the position of CEO is one of the superiorities that humankind would likely be reluctant to assign to a machine. A human-dependent AI is programmed by humans and processes human data, so its outcomes are likely to reflect default human features such as "bias" and "discrimination." Hence, an AI-CEO should be independent of human control. Even if this happens in the future, it does not mean that society would accept it. Besides acceptance, the "cost of an AI-CEO" is another challenge: an AI-CEO will not be preferred if it is over-costly.
Besides, the invention of AGI will be a revolution that leads to a "paradigm shift." This shift may happen gradually or may bring chaos, polarization and conflict.

The Swarm-Shah CEO type. Swarm-Shah represents a system related to "distributed architecture," "swarm intelligence" and "collective consciousness," meaning that this type of AI-CEO generates collective decisions. Swarm-Shah has the potential to disrupt all prevalent organizational structures, organizational cultures and functioning models. Developments in the internet of things (IoT), Industry 4.0 and distributed architecture can be considered the initial stage of this revolution. IoT "refers to the networked interconnection of everyday objects, which are often equipped with ubiquitous intelligence" (Xia et al., 2012, p. 1101), and this interconnection will improve further and may even reach a level that requires no human intervention (Khan et al., 2012; Gubbi et al., 2013). Spitz's (2020) future scenarios of "Decentralized Autonomous Organizations" and "swarm AI," in which groups augment their intelligence by "forming swarms in real-time" (p. 8), can be considered examples of The Swarm-Shah CEO type. The type also accords with Bostrom's (2014) concept of "collective superintelligence," which he defined as "a system composed of a large number of smaller intellects such that the system's overall performance across many very general domains vastly outstrips that of any current cognitive system" (p. 65). Network systems entered social life as the internet spread worldwide through personal computers; later, social media and content-sharing platforms became popular. Today, people can collectively respond to social events and can even change the course of events. We can predict that this network structure will improve and be enriched with new applications; individual decisions may then be replaced by collective decisions.
With further scientific progress, human consciousness may even be included in distributed systems. The progress of network structures will likely cause radical changes in organizations, as the central authority of the CEO would be distributed. Most of the participants agreed that network structures will be practiced in the future; even in today's organizations, CEOs make decisions not individually but with the support of their "extensions." A participant described this issue as follows:

Participant: Think of it this way [. . .] Don't consider computers just as statistical report providers. Think of them as collective intelligence. For example, you have a back-office team of 10 staff. It is not important whether the team consists of robots or humans; the thing that matters is the report.
Think about taking away all of the CEO's resources and rights, even the right to enter the company, and then telling him, "Manage the company." He cannot. He can only manage with the support of all his extensions, whether software, hardware or human.

Interviewer: Actually, the whole system is the CEO.

Participant: Of course, it is. It is a cyborg, isn't it? The representative of an organism. If that system proceeds to a level where it can perform self-management, then the CEO position becomes irrelevant. It is too early for that because humans are still effective in organization management, even if only a little. At least a human has a face, speaks, compliments, persuades, supports, etc. He still does something [. . .]

As the participant stated, if humans become irrelevant and unnecessary for the system, then humans will be checkmated. In this example, AI is still in the Vizier position and the human CEO is at an early stage of Cyborg-CEO. When humans become irrelevant, the Swarm-Shah type comes into play: a system that produces collective decisions. The voice of the system would surpass the voice of individuals. This can be interpreted as the rise of collectivism and the decline of individualism. Humans may continue to be a part of this system or may be eliminated by it as unnecessary.

6. Conclusion

In this research, we examined the feasibility of AI performing as CEO by following classic grounded theory methodology. As a result, The Vizier-Shah theory emerged, grounded on data from 27 interviews. The theory consists of five categories that are linked to each other through seven hypotheses. As a result, we answered two research questions:

RQ1. May AI take over the task of developing strategy in organizations?

RQ2. May AI perform top management tasks in the future and, moreover, be a CEO?

The answer to both research questions is "Yes, it is possible.
AI can take over the CEO position, but first, challenges and problems should be solved." The Vizier-Shah theory explains the issues that should be considered, provides recommendations and, moreover, introduces four futuristic AI-CEO types. Holloway (1983) stated in his article: "The possibility is clear that within a decade a computer may share or usurp functions of a corporate chief executive – functions that up to now have been thought unsharable. Now is the time to begin planning for such a development." Although his prediction has not been achieved yet, expectations have risen (World Economic Forum, 2015). The major question he addressed, "How and when do we expect a supercomputer to share or usurp functions of a corporate chief executive?" (Holloway, 1983, p. 83), is handled in detail and from a broader perspective in this research. The other questions he addressed were more specific and still require examination. As Von Krogh (2018) also stated, more abductive research following explorative designs is required. Each theoretical component of The Vizier-Shah theory can be the subject of future research. The four CEO types and their impact on future organizations can also be considered research topics. This research is an early-stage study conducted to contribute to these efforts, draw attention to the issue and shed light on future research.

References

AI100 (2016), "Artificial intelligence and life in 2030. Report of the 2015 study panel", Stanford University, available at: https://ai100.stanford.edu/

Annells, M. (1997), "Grounded theory method, part II – options for users of the method", Nursing Inquiry, Vol. 4 No. 3, pp. 176-180.

Armstrong, D.M. (1968), A Materialist Theory of the Mind, Routledge & Kegan Paul, London.
Bagozzi, R. and Lee, N. (2017), "Philosophical foundations of neuroscience in organizational research: functional and nonfunctional approaches", Organizational Research Methods, Vol. 22 No. 1, pp. 1-33.

Barnea, A. (2020), "How will AI change intelligence and decision-making?", Journal of Intelligence Studies in Business, Vol. 10 No. 1, pp. 75-80.

Beavers, A. (2013), "Alan Turing: mathematical mechanist", in Cooper, S. and van Leeuwen, J. (Eds), Alan Turing: His Work and Impact, Elsevier, Waltham, pp. 481-485.

Bostrom, N. (2014), Superintelligence: Paths, Dangers, Strategies, Oxford University Press.

Chalmers, D.J. (1995a), "The conscious mind: in search of a theory of conscious experience", doctoral dissertation, University of California.

Chalmers, D.J. (1995b), "Facing up to the problem of consciousness", Journal of Consciousness Studies, Vol. 4 No. 1, pp. 3-46.

Chalmers, D.J. (1999), "First person methods in the science of consciousness", available at: http://consc.net/papers/firstperson.html

Chalmers, D.J. (2002), "The first person and third person views (part I)", available at: http://consc.net/notes/first-third.html

Daily, C.M. and Johnson, J.L. (1997), "Sources of CEO power and firm financial performance: a longitudinal assessment", Journal of Management, Vol. 23 No. 2, pp. 97-117.

Damasio, A.R. (1995), Descartes' Error: Emotion, Reason, and the Human Brain, Avon Books, USA.

DeepMind (2021), "AlphaGo", available at: https://deepmind.com/research/case-studies/alphago-the-story-so-far

Dennett, D. (2001), "The fantasy of first-person science", available at: https://ase.tufts.edu/cogstud/dennett/papers/chalmersdeb3dft.htm

Descartes, R. (2003a), Discourse on Method and Meditations, Ross, E.S. (Ed.), Dover Publications, Mineola, New York, NY.

Descartes, R. (2003b), Meditations on First Philosophy: With Selections from the Objections and Replies, Moriarty, M. (Ed.), Oxford University Press, New York, NY.

Dewhurst, M. and Willmott, P.
(2014), "Manager and machine: the new leadership equation", available at: www.mckinsey.com/featured-insights/leadership/manager-and-machine

Dreyfus, H.L. (1972), What Computers Can't Do: A Critique of Artificial Reason, Harper & Row, New York, NY.

Dutton, T. (2018), "An overview of national AI strategies", available at: https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd

Farrow, E. (2020), "Organisational artificial intelligence future scenarios: futurists insights and implications for the organisational adaptation approach, leader and team", Journal of Futures Studies, Vol. 24 No. 3, pp. 1-15.

Ferràs-Hernández, X. (2018), "The future of management in a world of electronic brains", Journal of Management Inquiry, Vol. 27 No. 2, pp. 260-263.

Fjelland, R. (2020), "Why general artificial intelligence will not be realized", Humanities and Social Sciences Communications, Vol. 7 No. 1, pp. 1-9.

Glaser, B.G. (1978), Theoretical Sensitivity: Advances in the Methodology of Grounded Theory, Sociology Press, Mill Valley, CA.

Glaser, B.G. (2002), "Conceptualization: on theory and theorizing using grounded theory", International Journal of Qualitative Methods, Vol. 1 No. 2, pp. 23-38.

Glaser, B. (2007), "Remodeling grounded theory", Forum: Qualitative Social Research, Vol. 5 No. 2, doi: 10.17169/fqs-5.2.607.

Glaser, B. (2015), "GT as the discovery of patterns", in Walsh, I., Holton, J.A., Bailyn, L., Fernandez, W., Levina, N. and Glaser, B., Organizational Research Methods, Vol. 18 No. 4, pp. 1-19, doi: 10.1177/1094428114565028.

Glaser, B. and Strauss, A. (1965), Awareness of Dying, Aldine Transaction.
Glaser, B. and Strauss, A. (2006), The Discovery of Grounded Theory: Strategies for Qualitative Research, Aldine Transaction.

Gubbi, J., Buyya, R., Marusic, S. and Palaniswami, M. (2013), "Internet of things (IoT): a vision, architectural elements, and future directions", Future Generation Computer Systems, Vol. 29 No. 7, pp. 1645-1660.

Guyer, P. and Horstmann, R.P. (2021), "Idealism", in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy, available at: https://plato.stanford.edu/archives/spr2021/entries/idealism/

Haenlein, M. and Kaplan, A. (2019), "A brief history of artificial intelligence: on the past, present, and future of artificial intelligence", California Management Review, Vol. 61 No. 4, pp. 5-14.

Hambrick, D. and Mason, P. (1984), "Upper echelons: the organization as a reflection of its top managers", Academy of Management Review, Vol. 9 No. 2, pp. 193-206.

Heumann, S. and Zahn, N. (2018), "Benchmarking national AI strategies: why and how indicators and monitoring can support agile implementation", research report, available at: www.stiftung-nv.de/sites/default/files/benchmarking_ai_strategies.pdf

Holloway, C. (1983), "Strategic management and artificial intelligence", Long Range Planning, Vol. 16 No. 5, pp. 89-93.

Holton, J. (2017), "The discovery power of staying open", The Grounded Theory Review, Vol. 16 No. 1, pp. 46-49, available at: http://groundedtheoryreview.com/2017/06/23/the-discovery-power-of-staying-open/

Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S. and Wang, Y. (2017), "Artificial intelligence in healthcare: past, present and future", Stroke and Vascular Neurology, Vol. 2 No. 4, pp. 230-243.

Kaplan, A. and Haenlein, M. (2019), "Siri, Siri, in my hand: who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence", Business Horizons, Vol. 62 No. 1, pp. 15-25.

Khan, R., Khan, S., Zaheer, R. and Khan, S.
(2012), "Future internet: the internet of things architecture, possible applications and key challenges", 2012 10th International Conference on Frontiers of Information Technology (FIT) Proceedings, IEEE, pp. 257-260, available at: https://ieeexplore.ieee.org/abstract/document/6424332/

Kolbjørnsrud, V., Thomas, R. and Amico, R. (2016), "The promise of artificial intelligence: redefining management in the workforce of the future", Accenture, available at: www.accenture.com/_acnmedia/PDF-19/AI_in_Management_Report.pdf

Kulstad, M. and Carlin, L. (2020), "Leibniz's philosophy of mind", in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), available at: https://plato.stanford.edu/archives/win2020/entries/leibniz-mind/

Kurzweil, R. (2005), The Singularity is Near: When Humans Transcend Biology, Duckworth Overlook.

Kvale, S. (1996), InterViews: An Introduction to Qualitative Research Interviewing, SAGE Publications.

Levin, J. (2018), "Functionalism", in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), available at: https://plato.stanford.edu/archives/fall2018/entries/functionalism/

Lu, H., Li, Y., Chen, M., Kim, H. and Serikawa, S. (2018), "Brain intelligence: go beyond artificial intelligence", Mobile Networks and Applications, Vol. 23 No. 2, pp. 368-375.

McCarthy, J., Minsky, M.L., Rochester, N. and Shannon, C.E. (1955), "A proposal for the Dartmouth summer research project on artificial intelligence", available at: http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf

McDermott, D. (2007), "Artificial intelligence and consciousness", in Zelazo, P.D., Moscovitch, M. and Thompson, E. (Eds), The Cambridge Handbook of Consciousness, Cambridge University Press, pp. 117-150.

Makridakis, S. (2017), "The forthcoming artificial intelligence (AI) revolution: its impact on society and firms", Futures, Vol. 90, pp. 46-60.

Miyazaki, K. and Sato, R.
(2018), "Analyses of the technological accumulation over the 2nd and the 3rd AI boom and the issues related to AI adoption by firms", 2018 Portland International Conference on Management of Engineering and Technology (PICMET), Honolulu, HI, pp. 1-7.
Nilsson, N.J. (2010), The Quest for Artificial Intelligence: A History of Ideas and Achievements, Cambridge University Press, Cambridge, UK.

Parry, K., Cohen, M. and Bhattacharya, S. (2016), "Rise of the machines: a critical consideration of automated leadership decision making in organizations", Group & Organization Management, Vol. 41 No. 5, pp. 571-594, available at: https://doi.org/10.1177/1059601116643442

Pennachin, C. and Goertzel, B. (2007), "Contemporary approaches to artificial general intelligence", in Goertzel, B. and Pennachin, C. (Eds), Artificial General Intelligence, Springer, Berlin, Heidelberg, pp. 1-30.

PwC (2017), "Artificial intelligence and robotics: leveraging artificial intelligence and robotics for sustainable growth", PwC knowledge paper, available at: www.pwc.in/assets/pdfs/publications/2017/artificial-intelligence-and-robotics-2017.pdf

Robinson, H. (2020), "Dualism", in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020 Edition), available at: https://plato.stanford.edu/archives/fall2020/entries/dualism/

Ross, J. (2002), "First-person consciousness", Journal of Consciousness Studies, Vol. 9 No. 7, pp. 1-28, available at: www.ucl.ac.uk/uctytho/RossOnHonderichMcGinn.pdf

Russell, S. and Norvig, P. (2010), Artificial Intelligence: A Modern Approach, 3rd ed., Pearson Education, Upper Saddle River, NJ.

Searle, J.R. (1997), The Mystery of Consciousness, The New York Review of Books.

Shanks, R., Sinha, S. and Thomas, R.J. (2015), "Manager and machines, unite!", Accenture, available at: www.accenture.com/_acnmedia/pdf-19/accenture-strategy-manager-machine-unite-v2.pdf

Shanks, R., Sinha, S. and Thomas, R. (2016), "Judgment calls: preparing leaders to thrive in the age of intelligent machines", Accenture, available at: www.accenture.com/t20170411T174032Z__w__/us-en/_acnmedia/PDF-19/Accenture-Strategy-Workforce-

Shin, Y.
(2019), "The spring of artificial intelligence in its global winter", IEEE Annals of the History of Computing, Vol. 41 No. 4, pp. 71-82.

Siau, K.L. and Yang, Y. (2017), "Impact of artificial intelligence, robotics, and machine learning on sales and marketing", Twelfth Annual Midwest Association for Information Systems Conference (MWAIS 2017) Proceedings, pp. 18-19, available at: http://aisel.aisnet.org/mwais2017/48

Spitz, R. (2020), "The future of strategic decision-making", blog post, available at: https://jfsdigital.org/2020/07/26/the-future-of-strategic-decision-making/

Stoljar, D. (2021), "Physicalism", in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2021 Edition), available at: https://plato.stanford.edu/archives/sum2021/entries/physicalism/

Tegmark, M. (2017), Life 3.0: Being Human in the Age of Artificial Intelligence, Alfred A. Knopf, New York, NY.

Thomas, A. and Simerly, R. (1994), "The chief executive officer and corporate social performance: an interdisciplinary examination", Journal of Business Ethics, Vol. 13 No. 12, pp. 959-968.

Thomas, R., Fuchs, R. and Silverstone, Y. (2016), "A machine in the C-suite", research report, Accenture, available at: www.accenture.com/t00010101T000000Z__w__/br-pt/_acnmedia/PDF-13/Accenture-Strategy-WotF-Machine-CSuite.pdf

Turing, A. (1950), "Computing machinery and intelligence", Mind, Vol. 59 No. 236, pp. 433-460, available at: www.jstor.org/stable/2251299

Von Krogh, G. (2018), "Artificial intelligence in organizations: new opportunities for phenomenon-based theorizing", Academy of Management Discoveries, Vol. 4 No. 4, pp. 404-409.

Weiker, W. (1968), "The Ottoman bureaucracy: modernization and reform", Administrative Science Quarterly, Vol. 13 No. 3, pp. 451-470.

World Economic Forum (2015), "Deep shift: technology tipping points and societal impact", Global Agenda Council on the Future of Software & Society,
available at: www3.weforum.org/docs/WEF_GAC15_Technological_Tipping_Poi

Xia, F., Yang, L., Wang, L. and Vinel, A. (2012), "Internet of things", International Journal of Communication Systems, Vol. 25 No. 9, pp. 1101-1102.

Torresen, J. (2018), "A review of future and ethical perspectives of robotics and AI", Frontiers in Robotics and AI, Vol. 4, Article 75.
Yasnitsky, L.N. (2020), "Whether be new 'winter' of artificial intelligence?", in Antipova, T. (Ed.), Integrated Science in Digital Age. ICIS 2019, Lecture Notes in Networks and Systems, Vol. 78, Springer, Cham.

Zaccaro, S. (2004), The Nature of Executive Leadership: A Conceptual and Empirical Analysis of Success, American Psychological Association, Washington, DC.

Zhao, T., Zhu, Y., Tang, H., Xie, R., Zhu, J. and Zhang, J.H. (2019), "Consciousness: new concepts and neural networks", Frontiers in Cellular Neuroscience, Vol. 13, p. 302.

Further reading

Chalmers, D.J. (2015), "Panpsychism and panprotopsychism", in Alter, T. and Nagasawa, Y. (Eds), Consciousness in the Physical World: Perspectives on Russellian Monism, Oxford University Press, pp. 246-276.

Corbin, J. and Strauss, A. (1990), "Grounded theory research: procedures, canons, and evaluative criteria", Qualitative Sociology, Vol. 13 No. 1, pp. 3-21.

Dong, Y., Hou, J., Zhang, N. and Zhang, M. (2020), "Research on how human intelligence, consciousness, and cognitive computing affect the development of artificial intelligence", Complexity, pp. 1-10.

Fodor, P. (1994), "Sultan, imperial council, grand vizier: changes in the Ottoman ruling elite and the formation of the grand vizieral telḫīṣ", Acta Orientalia Academiae Scientiarum Hungaricae, Vol. 47 Nos 1/2, pp. 67-85, available at: www.jstor.org/stable/23658130

Mark, J. (2017), "Ancient Egyptian vizier", Ancient History Encyclopedia, available at: www.ancient.eu/Egyptian_Vizier/

Overgaard, M. (2017), "The status and future of consciousness research", Frontiers in Psychology, Vol. 8, pp. 1-4.

Shah (2021), in Online Etymology Dictionary, available at: www.etymonline.com/search?q=Shah

Shaw, I.
(2000), The Oxford History of Ancient Egypt, Oxford University Press, New York, NY.

Silver, D. and Hassabis, D. (2016), "Mastering the ancient game of Go", available at: https://research.googleblog.com/2016/01/alphago-mastering-ancient-game-of-go.html

Turing, A. (1937), "On computable numbers, with an application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, Vol. s2-42 No. 1, pp. 230-265.

About the authors

Aslıhan Ünal is an Assistant Professor in the Department of Management Information Systems at Cappadocia University. She completed her PhD and MSc at Düzce University in business administration and her undergraduate studies at Istanbul University in econometrics. Her main research interests are strategic management, competitive strategies, management information systems, artificial intelligence and grounded theory. Aslıhan Ünal can be contacted at: aslihan.unal@kapadokya.edu.tr

İzzet Kılınç is a Professor of Strategic Management in the Department of Management Information Systems at Düzce University. Kılınç completed his PhD at Dokuz Eylül University in tourism and hospitality, his MA at Sheffield Hallam University in tourism and hospitality and his undergraduate studies at Dokuz Eylül University in tourism and hospitality. His main research interests are strategic management, competitive strategies, management information systems, artificial intelligence and qualitative research.

For instructions on how to order reprints of this article, please visit our website: www.emeraldgrouppublishing.com/licensing/reprints.htm Or contact us for further details: permissions@emeraldinsight.com