Industrial Marketing Management 101 (2022) 45–56
Available online 6 December 2021
0019-8501/© 2021 Elsevier Inc. All rights reserved.
Employees' perceptions of chatbots in B2B marketing: Affordances vs. disaffordances

Xiaolin Lin a, Bin Shao b, Xuequn Wang c,*

a Assistant Professor of Computer Information Systems, Department of Computer Information and Decision Management, Paul and Virginia Engler College of Business, West Texas A&M University, 2501 4th Ave, Canyon, TX 79016, United States of America
b Decision Management & Terry Professor of Business, Department of Computer Information and Decision Management, Paul and Virginia Engler College of Business, West Texas A&M University, 2501 4th Ave, Canyon, TX 79016, United States of America
c School of Business and Law, Edith Cowan University, Joondalup, WA 6027, Australia
ARTICLE INFO

Keywords: Chatbot; B2B marketing; Affordance; Disaffordance; Effectiveness; Discomfort

ABSTRACT
We investigate the impacts of chatbots' technical features on employees' perceptions (namely, chatbot effectiveness and discomfort with using chatbots) in the context of B2B marketing. In particular, to capture the technical features of chatbots, we identify three types of chatbot affordances (i.e., automatability, personalization, and availability) and three types of chatbot disaffordances (i.e., limited understanding, lack of emotion, and null decision-making). We show that these types of chatbot affordances and disaffordances are related to chatbot effectiveness and discomfort with using chatbots, which in turn can affect employees' attitudes toward chatbots. We conducted an empirical study via an online survey and collected data from 228 B2B marketing employees. While automatability and personalization enhance chatbot effectiveness, null decision-making increases discomfort with using chatbots. Further, chatbot effectiveness and discomfort with using chatbots both influence employees' attitudes toward chatbots. Theoretical and managerial implications are also discussed.
1. Introduction
Advancements in the science of artificial intelligence (AI) have rapidly transformed chatbot technology into an innovative interface through which companies can more efficiently interact with their customers in both B2B and B2C processes (e.g., Borges, Laurindo, Spínola, Gonçalves, & Mattos, 2020; Cao, Duan, Edwards, & Dwivedi, 2021; Collins, Dennehy, Conboy, & Mikalef, 2021; Dwivedi et al., 2021; Han et al., 2021; Hu, Lu, Pan, Gong, & Yang, 2021; Lalicic & Weismayer, 2021; Murtarelli, Gregory, & Romenti, 2021; Pillai, Sivathanu, & Dwivedi, 2020). Observable advantages such as lower operating costs, improved response times during customer service communication, and consistent 24/7 customer assistance have stimulated enthusiasm and increased the pressure to develop chatbots and create a role for them in business. For example, chatbots can enhance business performance by improving the quality and efficiency of customer services, automating online purchases, facilitating and engaging in communication with customers, and greatly improving response rates to inquiring customers (De, 2018). Chatbots can also strengthen the impression of fair treatment (Wang, Teo, & Janssen, 2021). For these reasons, many companies believe in the importance of adopting and using chatbots in business practices to upgrade performance.
For example, Juniper Research predicts that chatbot usage could save companies $7.3 billion by 2023 (Juniper Research, 2021). Others foresee the value of e-commerce transactions supported by chatbots reaching $112 billion by 2023, and the global market size of chatbots reaching $1.3 billion by 2025 (Dilmegani, 2021). According to a recent report, 58% of companies that use chatbots are B2B, and 22% are B2C (Boomtown, 2019). Chatbots have been increasingly applied in the business operations of B2B marketing. Chatbots can enhance B2B marketing in various ways, such as facilitating the sales process, automating traffic from emails, and providing efficient customer services (Johnston, 2020). For example, chatbots can provide answers to the frequently asked questions of customers, who desire quick and useful responses. Therefore, chatbots are now playing an increasingly important role in
improving customer services in B2B marketing. To better leverage chatbots in customer services, it is essential for practitioners in B2B companies to understand employees' psychosocial perceptions of the use of chatbots within organizations (e.g., Araujo, 2018; Castelo, Bos, & Lehmann, 2019).

* Corresponding author.
E-mail addresses: xlin@wtamu.edu (X. Lin), bshao@wtamu.edu (B. Shao), xuequnwang1600@gmail.com (X. Wang).
https://doi.org/10.1016/j.indmarman.2021.11.016
Received 11 January 2021; Received in revised form 25 November 2021; Accepted 30 November 2021
This study is an attempt to provide further insights into how to heighten employees' positive psychosocial perceptions (i.e., attitudes) about chatbots in B2B marketing by studying the impacts of technological features on their beliefs regarding chatbots. It is important to do so because marketing employees may be reluctant to use chatbots if they do not have positive attitudes. For example, a recent study indicated that when users feel positive about chatbots, they are more likely to believe that using chatbots produces good value (Lalicic & Weismayer, 2021). By understanding which chatbot features help improve employee perceptions, companies can identify and focus on specific key features when selecting and implementing chatbots. These enhanced features can then facilitate employees' acceptance and the integration of chatbots into customer services. In accordance with Castelo et al. (2019), we captured people's beliefs by using two terms: the perceived effectiveness of chatbots and discomfort with using chatbots (please refer to the section on "Employees' Psychological Perceptions of Chatbots" for an explanation). To capture technology features, our study draws upon the concepts of affordance and disaffordance (Gibson, 1977; Strong et al., 2014; Wittkower, 2016) to develop the relevant chatbot features (please refer to the section on "Technology Affordance" for an explanation). Therefore, our research questions are: 1) What chatbot features influence perceptions of chatbot effectiveness? and 2) What chatbot features influence discomfort with using chatbots?
Our study makes two main contributions. First, we apply the concept of technology affordance and propose three types of chatbot affordance: automatability, personalization, and availability. Our study thus provides a deeper understanding of chatbot features. Second, our study contextualizes the concept of disaffordance and proposes three types of chatbot disaffordance: limited understanding, lack of emotion, and null decision-making. Our study can thus enhance our understanding of the limitations of chatbots. In doing so, we uncover new knowledge by demonstrating the relationships between the features of chatbot technology and people's psychological perceptions from a technology affordance perspective. The results provide useful guidelines for companies seeking to encourage marketing employees to use chatbots and offer valuable suggestions for the future development of chatbots.
The rest of the paper is organized as follows. We first review previous studies on chatbots. We then identify the features of chatbots following the concepts of affordance and disaffordance upon which the research model is based. We then describe our methodology, including data collection, measures, data analysis, and results. Finally, implications for theory and practice, limitations, and suggestions for future research are discussed.
2. Literature review and theoretical background
2.1. Related studies on chatbots
There are two main streams in the existing literature on chatbots. One stream of chatbot research is predominantly focused on chatbot design and feature optimization (Dale, 2016; Gnewuch, Morana, & Maedche, 2017; Landis, 2014; Shah, Warwick, Vallverdú, & Wu, 2016; Shawar & Atwell, 2007; Xu, Liu, Guo, Sinha, & Akkiraju, 2017). However, most of the discussions concerning chatbot design and optimization revolve around psychology, linguistics, computer science, and engineering; there is an evident lack of academic research on chatbots from a business perspective. Therefore, it is meaningful to consider business practices to provide insight into which chatbot design features can help companies improve performance.
In another stream of work, scholars examine user behavior and experiences with chatbots (Brandtzaeg & Følstad, 2017; Hill, Ford, & Farreras, 2015; Jain, Kumar, Kota, & Patel, 2018; Mimoun, Poncin, & Garnier, 2012; Schuetzler, Grimes, Giboney, & Buckman, 2014). When human–chatbot communication was compared to human–human communication, there were notable differences in the content and quality of the interactions. In human–chatbot communication, people used a greater number of messages, but the average message length was shorter than in human–human communication. The vocabulary employed was more restricted overall, yet there was an increase in the amount of profanity used (Hill et al., 2015).
In addition, the existing literature has studied how customers perceive the use of chatbots. Those studies are conducted and analyzed using a technological or anthropomorphic lens (Murtarelli et al., 2021). In particular, the technological lens deals with the functionalities of effective chatbots. For example, Brandtzaeg and Følstad (2017) concluded that productivity is the top motivation for customers to use chatbots. Chung, Ko, Joung, and Kim (2020) found that chatbots' marketing efforts (e.g., customization and problem solving) can improve communication accuracy and credibility, which in turn increases customer satisfaction. Balakrishnan and Dwivedi (2021b) also demonstrated how human-to-machine interaction could increase cognitive absorption. More recently, Lalicic and Weismayer (2021) identified personalization, convenience, ubiquity, and super-functionality as the four main reasons for chatbot adoption, and usage barrier, technology anxiety, privacy concern, and need for personal interaction as the four main reasons against it. These studies can help identify effective design principles and appropriate functionalities for the development of chatbots, thus leading to the adoption and use of chatbots.
Alternatively, the anthropomorphic lens emphasizes the social-actor role that chatbots play when interacting with customers (Balakrishnan & Dwivedi, 2021a; Murtarelli et al., 2021). Users (e.g., customers) may favor social features (e.g., human likeness) that enable chatbots to provide humanized conversations and help improve social relationships between customers and firms (Sowa, Przegalinska, & Ciechanowski, 2021). For example, Brandtzaeg and Følstad (2017) reported that entertainment was the second most frequent motivation for using chatbots, a response given by 20% of participants, and that it is necessary for chatbots to have a sense of humor to provide a pleasant user experience. They argued it is crucial for chatbots to be "fun" even if the primary purpose of chatbot technology is to enhance productivity. This is consistent with other findings about users' first experiences conversing with chatbots (Jain et al., 2018). Additionally, participants reported heightened enthusiasm pertaining to the novelty of chatbots and the use of chatbots for social and relational purposes. More recently, Pizzi, Scarpi, and Pantano (2021) found that customers show a lower level of psychological reactance when using anthropomorphic chatbots, especially when the customers themselves can choose to initiate conversation. Along this stream, Balakrishnan, Dwivedi, Hughes, and Boy (2021) found that psychological commitment (e.g., regret avoidance) can increase resistance toward the adoption of AI voice assistants. Furthermore, Roy and Naidoo (2021) determined that the effectiveness of human-like conversation styles depends on consumers' time orientation: while present-oriented consumers prefer warm styles, future-oriented consumers prefer competent styles.
Recent literature has also begun to examine chatbot adoption among employees. For example, Brachten, Kissmer, and Stieglitz (2021) indicated that both intrinsic and extrinsic motivations increase employees' intention to use Enterprise Bots. Regardless, the most prevalent and familiar use of chatbots is in the context of customer service. Although consumers appreciate the consistent availability and "responsiveness" of technological auto-service (Meuter, Bitner, Ostrom, & Brown, 2005), they also desire the personalized attention provided by traditional human–human communication (Scherer, Wünderlich, & Wangenheim, 2015). Chatbots have the potential to combine the best of both. For example, Chung et al. (2020) demonstrated that luxury brands can and do deliver personalized care to their customers by using chatbots instead of traditional human–human interactions. Shumanov and Johnson (2021) additionally found that matching the consumer's personality with a congruent chatbot personality can encourage the consumer to interact more with the chatbot and bolster purchasing outcomes. Additionally, Sheehan, Jin, and Gottlieb (2020) observed that when a chatbot is able to effectively resolve miscommunication by seeking clarification, it is perceived as qualitatively identical to a chatbot that avoids errors altogether, and that an anthropomorphic chatbot may satisfy customers who demand more humanized interaction.
2.2. Technology affordance
The literature has applied the concept of affordance to improve un­
derstanding of the role of technology (in particular, modern technolo­
gies) in various contexts (Grgecic, Holten, & Rosenkranz, 2015; Markus
& Silver, 2008). This leads to the term technology affordance, which re­
fers to “the mutuality of actor intentions and technology capabilities that
provide the potential for a particular action” (Majchrzak, Faraj, Kane, &
Azad, 2013, p. 39). Thus, technology affordance arises as individuals
perceive how features of certain technologies can be used to support
their goals (Grgecic et al., 2015). Because individuals can have different
goals, they can interpret the features of a technology in different ways
and perceive that the technology can afford their corresponding goal-
oriented activities (Leonardi, 2013; Treem & Leonardi, 2013).
Technology affordance is a useful theoretical lens through which to investigate how advanced technologies are used by individuals in various phenomena such as cyberbullying and big data marketing. For example, Chan, Cheung, and Wong (2019) proposed four types of social media affordances relevant to cyberbullying. They showed that information retrieval affordance enhances the presence of suitable targets, whereas editability and association affordances increase the absence of capable guardians. De Luca, Herhausen, Troilo, and Rossi (2021) proposed three types of big data marketing affordances: customer behavior pattern spotting, real-time market responsiveness, and data-driven market ambidexterity. They found that real-time market responsiveness and data-driven market ambidexterity affordances are positively related to service innovation. Thus, technology affordance can facilitate the investigation of how chatbots can be used to improve customer services. In other words, goal-oriented actors (e.g., marketing employees) interpret and learn about certain artifacts (e.g., chatbots) regarding the possibilities for activities provided by these artifacts (i.e., affordances) to support their goals (e.g., providing customer services).
Although other theoretical lenses have been used to understand the use of technology, technology affordance was chosen as the most appropriate perspective as it is consistent with the objective of this study. As a contrasting example, task-technology fit theory (TTF) was developed to understand how task and technology characteristics jointly influence individual performance (Goodhue & Thompson, 1995). The basic argument is that technology has a positive effect on performance when it is a good fit for the task. Since we are centering this study on understanding how chatbots' technical features influence employees' general perceptions, TTF is less appropriate for our study. Nevertheless, TTF can be a useful theoretical foundation for future investigations into how certain technical features of chatbots can support employees in completing various tasks.
2.2.1. Affordances of chatbots
In the context of B2B marketing, with the materiality of chatbots,
affordances can be jointly facilitated by social and technical aspects
(Volkoff & Strong, 2013, 2017) and can be interpreted based on the
goals of marketing employees. In this way, the affordances of chatbots
emerge from the interactions between employees and chatbots such that
employees use certain features of chatbots for achieving marketing goals
via such activities as interacting with customers and providing services.
The rationale is that employees may use chatbots to maximize their
opportunities to enhance customer interactions for supporting customer
service. Accordingly, based upon the literature on technology afford­
ance (Chan et al., 2019; De Luca et al., 2021) and chatbots (Schuetzler,
Grimes, & Scott Giboney, 2020), we propose three types of chatbot
affordance including automatability, personalization, and availability
(see Table 1). First, chatbots can manage customer service without the
intervention of marketing employees. Specifically, chatbots can auto­
mate routine tasks during customer services such as answering common
questions (Paschen, Kietzmann, & Kietzmann, 2019), because historical
structured and unstructured data allows the integrated AI capabilities to
produce logical responses (Paschen et al., 2019), therefore reinforcing
automatability as the first affordance. Second, chatbots can adapt
products/services to meet a customer's specific needs and preferences,
thus improving their relations with customers (Chung et al., 2020).
Chatbots can recommend different products/services based upon each
customer's background and interests instead of suggesting the same
products/services to every customer. In fact, Chung et al. (2020) pro­
posed personalization as one of the main marketing efforts of chatbots.
Therefore, we select personalization as the second type of chatbot
affordance. Finally, chatbots can be available 24/7, and availability (i.e.,
ubiquity) is one of the main reasons that businesses adopt chatbots
(Lalicic & Weismayer, 2021). Therefore, we propose availability as the
third type of chatbot affordance.
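To make the three affordances concrete, the toy sketch below shows how they might surface in a minimal rule-based service bot. It is an illustration only, not part of the study's method, and all names (the FAQ entries, the answer function, the canned replies) are hypothetical.

```python
# Illustrative sketch (not from the study): a toy rule-based chatbot
# demonstrating the three proposed affordances. All names/replies are invented.

FAQ = {
    # Automatability: routine questions answered without an employee.
    "shipping": "Orders ship within 2 business days.",
    "pricing": "Volume pricing is available for orders over 100 units.",
}

def answer(question: str, history: list) -> str:
    """Return an automatic reply, adapted to the customer's past inquiries."""
    for keyword, reply in FAQ.items():
        if keyword in question.lower():
            # Personalization: tailor the canned reply using interaction history.
            if history:
                reply += f" (Based on your last inquiry about {history[-1]}.)"
            return reply
    # Availability: the bot always responds, even outside business hours,
    # by at least acknowledging the request.
    return "Thanks for reaching out! A representative will follow up."

print(answer("What is your pricing?", ["shipping"]))
```

The fallback branch also hints at the hand-off to marketing employees discussed later: anything the bot cannot match is acknowledged and deferred to a human.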
2.2.2. Disaffordance
Notwithstanding that technology affordances have gained much
attention (Leonardi, 2013; Strong et al., 2014; Treem & Leonardi, 2013),
researchers have also begun to pay attention to disaffordance. In one of
the first studies, Wittkower (2016) provided an initial understanding of
disaffordance in a systematic and theorized manner. In general, dis­
affordance generally means the lack of affordance in design, which
suggests that the artifacts cannot provide the potential for behaviors
associated with achieving specific outcomes. In other words, a tech­
nology cannot empower the users through their capacities for acting and
achieving goals because of unsuccessful or nonexistent design features.
Thus, disaffordance refers to artifacts failing to facilitate certain ob­
jectives of goal-oriented actors (Wittkower, 2016). Disaffordance can be
caused by non-affordance or poor affordance. Non-affordance results in
a scenario in which a certain affordance does not appear within the
actors' experiences. It is possible that the artifacts lack a set of functions
to support the actors' goals. On the other hand, poor affordance refers to
Table 1
Chatbot affordances.

Automatability
Definition: The extent to which marketing employees believe that chatbots offer the opportunity to respond to customers' questions automatically.
How it relates to marketing: This affordance allows chatbots to deal with simple questions automatically, so that marketing employees can focus on handling complex questions from customers.
Reference: Paschen et al. (2019)

Personalization
Definition: The extent to which marketing employees believe that chatbots offer the opportunity to provide personalized responses to customers.
How it relates to marketing: This affordance allows chatbots to support personalized responses and customized services, so that marketing employees can reduce their involvement during interactions with customers.
References: Chung et al. (2020); Lalicic and Weismayer (2021)

Availability
Definition: The extent to which marketing employees believe that chatbots offer the opportunity to provide customer services at any time of the day.
How it relates to marketing: This affordance allows chatbots to support customer services at any time, even when marketing employees are not available.
Reference: Lalicic and Weismayer (2021)
the scenario where a certain intended affordance does not provide the actual affordance: the artifacts may not have clear and unobstructed interfaces that allow actors to achieve their goals, often as a consequence of poor design. Accordingly, we focus on the non-affordance aspects of disaffordance, and in this study, we propose three types of chatbot disaffordances: limited understanding, lack of emotion, and null decision-making (see Table 2). Foremost, because chatbots conduct predictive analytics based upon data collected from customers (Murtarelli et al., 2021), they cannot understand a new scenario that they have not previously encountered, creating the disaffordance of limited understanding. Further, chatbots lack important human qualities, such as the ability to express emotions (e.g., empathy) and to make judgements (Murtarelli et al., 2021). Therefore, lack of emotion and null decision-making are proposed as additional types of disaffordances.
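As a hypothetical illustration (not drawn from the study), the routing sketch below shows how each of the three disaffordances forces a hand-off to a marketing employee. All keyword sets and labels are invented for the example.

```python
# Illustrative sketch (not from the study): escalation logic showing how the
# three disaffordances require human involvement. All names are hypothetical.

KNOWN_TOPICS = {"shipping", "pricing", "returns"}     # what the bot was trained on
EMOTION_WORDS = {"frustrated", "angry", "upset"}      # signals needing empathy
DECISION_WORDS = {"refund", "discount", "exception"}  # requests needing judgement

def route(message: str) -> str:
    """Decide whether the bot can handle a message or must escalate."""
    words = set(message.lower().split())
    if words & EMOTION_WORDS:
        return "human: emotional support needed"   # lack of emotion
    if words & DECISION_WORDS:
        return "human: decision required"          # null decision-making
    if words & KNOWN_TOPICS:
        return "bot: handle automatically"
    return "human: unrecognized topic"             # limited understanding

print(route("i am upset about my order"))
```

In this sketch, each escalation branch mirrors one proposed disaffordance: messages the bot has never seen, messages requiring empathy, and requests requiring a marketing decision are all routed to an employee.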
2.3. Employees' psychological perceptions of chatbots
Castelo et al. (2019) suggested that people's psychological percep­
tions of a technology (i.e., algorithms) include two primary variables:
the effectiveness of a technology and discomfort with using the tech­
nology. More specifically, technology effectiveness is a cognitive factor
that captures people's cognitive beliefs about the technology's compe­
tency, and discomfort with use is an affective factor that captures peo­
ple's subjective feeling of discomfort with using the technology.
Discomfort represents an unhealthy state of psychological well-being
(Albarracin & Shavitt, 2018), so individuals try to avoid objects or ac­
tivities resulting in discomfort.
The existing literature has applied several theoretical lenses to
investigate both positive and negative aspects of technology. For
example, Lalicic and Weismayer (2021) used the behavioral reasoning
theory to identify reasons for/against adopting chatbots. Here, “reasons
for” and “reasons against” are used to justify users' future behaviors and
represent their distinct cognitive routes leading to intention and support
of use (Westaby, 2005). In the case of smart home devices, Wang,
McGill, and Klobas (2018) adopted the net valence model to examine
how various benefits and risks influence the adoption intention of
different devices. Perceived benefits and risks each represented the
users' overall perceptions of benefits and risks. These approaches are
similar to Castelo et al. (2019), who captured individuals' positive and
negative feelings toward technology. While behavioral reasoning theory
and the net valence model only deal with individuals' cognitive beliefs,
effectiveness and discomfort cover both cognitive beliefs and affective
feelings. Therefore, we chose to use the approach from Castelo et al.
(2019).
Following Castelo et al. (2019), we propose that employees' attitudes
toward chatbots are influenced by their beliefs about the technology,
including effectiveness and discomfort with use. Effectiveness represents
the employees' positive cognitive perceptions toward chatbots and refers
to the degree to which the latter are competent to provide customer
services. Discomfort with using chatbots reflects employees' negative
affective perceptions when they are uneasy about using chatbots.
3. Research model and hypotheses development
Following Castelo et al. (2019), we argue that chatbot effectiveness
and discomfort with using chatbots can represent marketing employees'
perceptions of the technology, which influence their attitudes. In addi­
tion, we illustrate how chatbot affordances and disaffordances are
related to employees' perceptions. Specifically, we propose that three
types of chatbot affordances (i.e., automatability, personalization, and
availability) are positively related to chatbot effectiveness and that three
types of chatbot disaffordance (i.e., limited understanding, lack of
emotion, and null decision-making) are positively related to discomfort
with use. In the following we discuss each hypothesis in more detail.
3.1. Impact of effectiveness and discomfort on attitudes
Attitude refers to an individual's general summary affective feeling of
(un)favorableness toward a behavior (Ajzen & Fishbein, 1980). An
employee's attitude toward chatbots thus represents his or her overall
subjective feeling about using chatbots within organizations. The liter­
ature suggests that attitudes can be affected by both technological fea­
tures and social aspects—more specifically, people's beliefs (e.g., Kwok
& Gao, 2005; Lin, Featherman, Brooks, & Hajli, 2019). In particular, it is
well understood that people's beliefs can form their attitudes (Ajzen &
Fishbein, 1980). For example, employees' beliefs about information
sharing have been proven to play a significant role in forming their at­
titudes toward sharing information within organizations (Kolekofski Jr
& Heminger, 2003). Similarly, it is suggested that people's beliefs about
information and interpersonal relationships are related to their attitudes
about online behaviors such as online shopping (Lin, Wang, & Hajli,
2019). Likewise, people's beliefs in the effectiveness of an algorithm
have been found to be positively associated with their psychological
states regarding their use of the algorithm, that is, their reliance on the
algorithm (Castelo et al., 2019). In our study, chatbot effectiveness
captures employees' beliefs about the performance of chatbots based on
their own experiences. When one believes that the performance of
chatbot is high, one is likely to favor using chatbots. Therefore, it is safe
to expect that a higher level of effectiveness leads to a more positive
attitude toward using chatbots within organizations.
H1. Chatbot effectiveness is positively related to attitudes toward chatbots.
3.2. The impact of discomfort with using chatbots on attitudes
On the other hand, we argue that marketing employees' discomfort with using chatbots can have a negative effect on their attitudes. Whereas chatbot effectiveness represents employees' positive cognitive perceptions of chatbots, discomfort with using chatbots reflects their negative affective perceptions. Cognitive dissonance theory identifies discomfort as a component of dissonance (Festinger, 1957; Hinojosa, Gardner, Walker, Cogliser, & Gullifor, 2017). Specifically, discomfort can motivate or drive the individual's attitude change process to reduce dissonance (Fazio & Cooper, 1983). In our study, discomfort with using
Table 2
Chatbot disaffordances.

Limited understanding
Definition: The extent to which marketing employees believe that chatbots do NOT offer the opportunity to understand new questions from customers.
How it relates to marketing: This disaffordance does not allow chatbots to deal with new contexts, so marketing employees need to be involved in these new contexts.
References: Murtarelli et al. (2021); Paschen et al. (2019)

Lack of emotion
Definition: The extent to which marketing employees believe that chatbots do NOT offer the opportunity to express emotions during interactions with customers.
How it relates to marketing: This disaffordance does not allow chatbots to express emotions, so marketing employees may need to deal with customers' emotions (e.g., to provide emotional support).
Reference: Murtarelli et al. (2021)

Null decision-making
Definition: The extent to which marketing employees believe that chatbots do NOT offer the opportunity to make decisions.
How it relates to marketing: This disaffordance does not allow chatbots to make decisions, so marketing employees need to be involved when certain marketing decisions need to be made.
Reference: Murtarelli et al. (2021)
chatbots represents employees' mental uneasiness about using chatbots. Employees may hold neutral attitudes toward chatbots before adopting them. However, if and when employees do not feel comfortable with chatbots after using them, their negative affective feelings become inconsistent with their previous attitudes toward chatbots. Such dissonance then changes their attitudes (Hinojosa et al., 2017), and employees probably develop a negative attitude toward them. As a result, the employees' discomfort may make them reluctant to use the technology. Castelo et al. (2019) also argued that discomfort is negatively related to reliance on algorithms. Therefore, we hypothesize that:
H2. Discomfort with using chatbots is negatively related to attitudes toward chatbots.
3.3. The impact of technology affordances on effectiveness
The literature suggests that affordances can affect people's beliefs
and psychological states in a variety of contexts (e.g, Karahanna, Xu, Xu,
& Zhang, 2018; Leidner, Gonzalez, & Koch, 2018). More specifically, in
the context of social media, Karahanna et al. (2018) summarized a va­
riety of social media affordances and proposed a link between social
media affordances and people's psychological needs. In the context of
enterprise social media, Leidner et al. (2018) provided an enhanced
understanding of how the affordances of enterprise social media can
result in various outcomes such as stress and a sense of social support.
Thus, it is clear that affordances can affect people's beliefs.
In the context of chatbots, automatability offers the possibility of answering customers and interacting with them automatically. This affordance enables companies to serve customers via automatically generated responses and to provide quick responses to customer requests (Xu et al., 2017). Taking advantage of this affordance, employees may reduce their workloads because they do not have to answer as many phone calls from customers. In addition, with this affordance, employees have more opportunities to focus on other tasks and can thus maximize the possibility of improving their performance. In this way, employees may believe that chatbots can better satisfy their needs for performing the job and providing customer services with less effort, thus leading to a sense of effectiveness.
H3a. Automatability is positively related to chatbot effectiveness.
Personalization offers the opportunity to recall the history of conversations with customers, which can enable chatbots to produce personalized responses (e.g., Gomez, 2018; Ritter, Cherry, & Dolan, 2011). In this way, companies can provide dedicated customer services because personalization offers the ability to respond to individual customers' requests and needs (Xu et al., 2017). With this affordance, chatbots enable companies to facilitate favorable interactions between customers and businesses through the possibility of building conversation skills (Schuetzler et al., 2020). As a result, employees have confidence in providing customer services through the use of chatbots because of the affordance of personalization. Therefore, a higher level of perceived personalization affordance is likely to be related to a higher level of chatbot effectiveness.
H3b. Personalization is positively related to chatbot effectiveness.
The affordance of availability enables companies to be accessible to
customers and respond to their requests at any time. This affordance
offers customers the possibility of interacting with businesses and get­
ting some help anytime and anywhere. With this affordance, chatbots
offer the opportunity for businesses and customers to communicate
conveniently (Putri, Meidia, & Gunawan, 2019), thus supporting
customer needs and improving customer service (Pizzi et al., 2021). In
addition, the affordance of availability can build customers' satisfaction
because it can maximize the service quality provided by companies
(Ashfaq, Yun, Yu, & Loureiro, 2020). All these factors can increase the
likelihood that employees will believe that chatbots can be effective for
improving customer services. Therefore, employees' perceptions of the
effectiveness of chatbots can increase as the affordance of availability
increases.
H3c. Availability is positively related to chatbot effectiveness.
3.4. Impact of technology disaffordances on discomfort with using
chatbots
In our study we propose three types of chatbot disaffordance leading
to discomfort. Limited understanding is defined as the extent to which
marketing employees believe that chatbots do not offer the opportunity
to understand new questions from customers. Because the AI of chatbots
is developed based upon historical conversations with customers, chat­
bots cannot handle completely new queries from customers (Gomez,
2018). In these situations, customers can feel frustrated because they do
not get the responses they need, and sales may be lost. As a result,
marketing employees may not feel comfortable letting chatbots
completely handle interactions with customers. Therefore, we hypoth­
esize that:
H4a. Limited understanding is positively related to discomfort with using
chatbots.
In this context, the lack of emotion refers to the extent to which
marketing employees believe that chatbots do not offer the opportunity
to express emotions during interactions with customers. Unlike people,
chatbots have no emotions. Although chatbots can be taught to simulate
certain emotions (e.g., empathy) based upon certain messages (Aivo,
2019), marketing employees can understand customer emotions better
and respond more effectively. Customers can easily identify messages
created by chatbots and often appreciate revisions by marketing em­
ployees that respond to their emotions (CoSource, 2018). Often this is
essential to keeping interactions going in the right direction and to
providing good service. Therefore, marketing employees may perceive
that they need to maintain a certain level of involvement to effectively
deal with customers' emotions. Therefore, we posit:
H4b. The lack of emotion is positively related to discomfort with using
chatbots.
Last, null decision-making refers to the extent to which marketing
employees believe that chatbots do not offer the opportunity to make
decisions. Chatbots cannot make decisions, which causes problems. For
example, Microsoft initiated a chatbot for Twitter. However, it was
turned into a racist account by content from users in less than 24 h
(Gomez, 2018). The chatbot did not have the decision-making capacity
to prevent this outcome. Therefore, marketing employees may not feel
comfortable letting chatbots run by themselves and may feel they must
be involved when decisions need to be made. Therefore, we hypothesize
that:
H4c. Null decision-making is positively related to discomfort with using
chatbots.
To conclude, our research model is shown in Fig. 1.
4. Research methodology
4.1. Data collection and samples
We hired a survey company to recruit American participants by
systematic sampling. The survey company maintains national panels
and can access people with different backgrounds. Our study is focused
on marketing employees who use chatbots to interact with their cus­
tomers in B2B companies. The data were collected in January and
February of 2021 through a survey link sent to participants. During the
data collection process, we recruited participants from different back­
grounds (e.g., age, gender, education, firm sizes, industry) to increase
the generalizability. Participants were qualified for our study if they 1)
worked in the marketing department and used chatbots to provide
customer services, and 2) worked in B2B companies. Qualified partici­
pants then proceeded to the main survey and completed the
questionnaire.
In total, we received 228 valid responses, and their demographic
background is shown in Table 3. In our sample, 218 participants were
full-time employees. On average, they had worked for their current
companies for 8.08 years (SD: 5.14). Participants used a variety of
chatbots such as Twilio, Aivo, and Bold360.
4.2. Measures
Measures are listed in the Appendix. All items were measured with a
7-point Likert scale. Items pertaining to chatbot effectiveness and
discomfort with using chatbots were adapted from Castelo et al. (2019);
items pertaining to attitudes toward chatbots were adapted from Klobas,
McGill, and Wang (2019). Items referring to chatbot affordances
(namely, automatability, personalization, and availability) and chatbot
disaffordances (namely, limited understanding, lack of emotion, and
null decision-making) were newly developed in accordance with the
work of Moore and Benbasat (1991). First, we defined each construct by
reviewing the relevant literature (Tables 1 and 2). Second, we developed
measures based on the definitions, and they were also assessed by two
experienced researchers to ensure the face and content validity of all
items. Lastly, we conducted a card-sorting exercise with a group of
students. The overall placement ratio of items within the target di­
mensions was acceptable, indicating that the items were sorted into the
intended variables.
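The overall placement ratio from such a card-sorting exercise is simply the fraction of judges' item placements that land in the intended construct (Moore & Benbasat, 1991). A minimal Python sketch of this computation follows; the items and placements shown are hypothetical, not the study's actual sorting data:

```python
def placement_ratio(placements, intended):
    """Overall placement ratio: the fraction of judge placements that
    match each item's intended construct (Moore & Benbasat, 1991)."""
    hits = sum(1 for item, construct in placements if intended[item] == construct)
    return hits / len(placements)

# Hypothetical sorting results: (item, construct chosen by a judge)
intended = {"AUT1": "Automatability", "PER1": "Personalization"}
placements = [("AUT1", "Automatability"), ("AUT1", "Automatability"),
              ("PER1", "Personalization"), ("PER1", "Availability")]
print(placement_ratio(placements, intended))  # 3 of 4 placements correct: 0.75
```

A ratio near 1.0 indicates that judges consistently sort items into their target constructs.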
4.3. Data analysis and results
We designed our survey to exclude personally-identifying information and to reduce evaluation apprehension. All the variables were
collected in one survey, so both procedural and statistical approaches
were adopted to alleviate the impact of common method bias (CMB)
(Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). Then two statistical
analyses were conducted. First, Harman's single factor analysis was
conducted. The results revealed six factors, with the greatest factor
explaining 25.24% of the total variance. Second, a common method
factor that included all items was created (Podsakoff et al., 2003). We
then calculated the variance explained by the focal factor and by the
method for each item. The average variance explained by the focal factor
was 0.77, while the average variance explained by the method factor
was 0.005. The ratio was approximately 145:1, and all method factor
loadings were nonsignificant. Therefore, CMB was unlikely to be a
serious issue.
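Harman's single-factor check described above amounts to examining the share of total variance captured by the first unrotated factor of the item correlation matrix. A Python sketch of the computation is below; the item matrix is simulated for illustration, not the study's data:

```python
import numpy as np

def first_factor_variance(X):
    """Harman's single-factor check: share of total variance captured
    by the largest eigenvalue of the item correlation matrix."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return eigvals.max() / eigvals.sum()

# Hypothetical item responses (rows = respondents, columns = items)
rng = np.random.default_rng(0)
X = rng.normal(size=(228, 10))
share = first_factor_variance(X)
print(f"First factor explains {share:.1%} of total variance")
```

A first factor well below 50% of the total variance (25.24% in this study) suggests that no single general factor dominates the responses.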
Mplus (Muthén & Muthen, 2017), a covariance-based structural
equation modeling technique, was then used to validate our items.
Confirmatory factor analysis was conducted, and the model had a good
fit (χ²(341) = 576.81, CFI = 0.94, SRMR = 0.05). Furthermore, all items
loaded significantly on their focal constructs, and all loadings were
above 0.60 (Table 4). Cronbach's alpha and composite reliabilities (CRs)
were more than 0.70, and the average variance extracted (AVE) was
more than 0.50 (Table 4). These results support the convergent validity
of our items. Finally, as shown in Table 5, the correlations between the
constructs were less than 0.85 (Brown, 2015), and the square root of
each factor's AVE exceeded all correlations between that factor and any
other construct, supporting the discriminant validity. Overall, our items
showed good psychometric properties.
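The reliability statistics reported above follow standard formulas: composite reliability is (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE is the mean squared standardized loading. A minimal Python sketch, applied to the Personalization loadings reported in Table 4, reproduces the corresponding CR and AVE values:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    for standardized loadings."""
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return num / (num + (1 - lam**2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam**2).mean()

# Standardized loadings for Personalization (Table 4)
per = [0.81, 0.78, 0.84, 0.83]
print(round(composite_reliability(per), 2))       # 0.89
print(round(average_variance_extracted(per), 2))  # 0.66
```

The square root of the AVE (here ≈ 0.81) is then compared against the inter-construct correlations in Table 5 to assess discriminant validity.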
Fig. 1. Research model. Chatbot affordances (automatability, personalization, availability) relate to chatbot effectiveness (H3a–H3c); chatbot disaffordances (limited understanding, lack of emotion, null decision-making) relate to discomfort with using chatbots (H4a–H4c); chatbot effectiveness (H1) and discomfort with using chatbots (H2) relate to attitudes toward chatbots.

We then tested our hypotheses (Fig. 2). H1, stating that chatbot
effectiveness is positively related to attitudes toward chatbots, is sup­
ported (β = 0.65, p < .001). H2 states that discomfort with using chat­
bots is negatively related to attitudes toward chatbots. This hypothesis is
supported (β = − 0.16, p < .01). H3a argues that automatability is
positively related to chatbot effectiveness. This hypothesis is also sup­
ported (β = 0.29, p < .001). H3b proposes that personalization is posi­
tively related to chatbot effectiveness. This hypothesis is also supported
(β = 0.49, p < .001). H3c, arguing that availability is positively related
to chatbot effectiveness, is not supported (β = 0.10, p = .21). Limited
understanding is not significantly related to discomfort with using
chatbots and does not support H4a (β = 0.07, p = .23). Lack of emotion is
not significantly related to discomfort with using chatbots either and
does not support H4b (β = − 0.05, p = .33). H4c proposes that null
decision-making is positively related to discomfort with using chatbots.
This hypothesis is supported (β = 0.56, p < .001). Three dimensions of
chatbot affordance explain 49.07% of the variance from chatbot effec­
tiveness, whereas three dimensions of chatbot disaffordance explain
32.74% of the variance from discomfort with using chatbots. Chatbot
effectiveness and discomfort with using chatbots together explain
47.66% of the variance in attitudes toward chatbots. The effects of
several other control variables were also tested, including age (β = 0.03,
p = .65), gender (β = 0.03, p = .60), education (β = 0.09, p = .14), firm
size (β = 0.05, p = .37) and tenure (β = 0.04, p = .53), though none were
significant. Overall, these results provide strong support for our model.
4.4. Post-hoc analysis
The literature has shown that differently sized companies typically
follow different managerial approaches that lead to different outcomes
(Qiu & Yang, 2018; Visentin & Scarpi, 2012). For example, it is possible
that smaller sellers may develop closer and more personal relationships
with buyers (Visentin & Scarpi, 2012). Therefore, it is possible that
marketing employees from firms of various sizes follow different ap­
proaches to the use of chatbots for supporting customer services. In this
post-hoc analysis, we examine the moderating effect of firm size by comparing the path coefficients between small firms and medium-to-large firms following Keil et al. (2000). According to the U.S. Small Business Administration, small companies usually have no more than 500 employees.1 Therefore, we classified firms with 500 employees or
fewer as small firms, and those with more than 500 employees as
medium-to-large firms. We then divided our sample into these two
subsamples (i.e., small firms and medium-to-large firms) accordingly.
The results show (Table 6) that the effects of automatability and null
decision-making are stronger for small firms, while personalization has a
larger effect for medium-to-large firms.
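The subsample comparison follows Keil et al.'s (2000) pooled-error t-test for the difference between two path coefficients estimated on independent subsamples. A Python sketch of the test statistic is below; the standard errors in the example are illustrative only, since the text reports coefficients and significance levels but not standard errors:

```python
import math

def path_difference_t(b1, se1, n1, b2, se2, n2):
    """t-statistic for the difference between two structural path
    coefficients across independent subsamples (Keil et al., 2000)."""
    # Pooled standard error, weighting each subsample's error variance
    s_pooled = math.sqrt(((n1 - 1) / (n1 + n2 - 2)) * se1**2
                         + ((n2 - 1) / (n1 + n2 - 2)) * se2**2)
    t = (b1 - b2) / (s_pooled * math.sqrt(1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    return t, df

# H4c: null decision-making -> discomfort; standard errors are hypothetical
t, df = path_difference_t(0.63, 0.08, 90, 0.51, 0.07, 138)
print(f"t({df}) = {t:.2f}")
```

A significant t-value indicates that the path coefficient differs between the small-firm and medium-to-large-firm subsamples.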
5. Discussion
In this study we aimed to examine how different types of chatbot
affordance and disaffordance influence employees' psychological per­
ceptions of chatbot effectiveness and discomfort with using chatbots,
which in turn affect their attitudes toward chatbots in the B2B context.
Our results show that both chatbot effectiveness (H1) and discomfort
with using chatbots (H2) significantly affect employees' attitudes toward
chatbots. These results are consistent with Castelo et al. (2019), who
showed that both effectiveness and discomfort significantly influence
users' reliance on algorithms.
Our results further reveal that automatability (H3a) and personali­
zation (H3b) can enhance chatbot effectiveness. These results highlight
the importance of automatability in driving chatbot effectiveness, as
suggested by Paschen et al. (2019). The results are also consistent with
Chung et al. (2020), who pushed personalization as a marketing point.
The effect of automatability is stronger for small firms, as demon­
strated by post-hoc analysis. It is possible that small firms have fewer
marketing employees, who need chatbots' help to answer customers'
standard questions. On the other hand, post-hoc analysis shows
personalization has a stronger effect for medium-to-large firms. Since
Table 3
Participants' demographic background.
Category Sample (N = 228)
Gender
Female 36.4%
Male 63.6%
Age
18–24 6.6%
25–34 39.0%
35–44 39.9%
45–54 12.3%
55 or older 2.2%
Education
High school or below 11.8%
Some college education or bachelor's degree 58.3%
Graduate degree 29.8%
Company Size
1–100 11.4%
101–200 9.6%
201–500 18.4%
501–1000 22.4%
1001–3000 18.4%
> 3000 19.7%
Industry
Health care 5.7%
Manufacturing 15.8%
Education 4.8%
Higher education 1.3%
Banking/Finance 16.2%
Insurance 3.1%
Wholesale and Distribution 3.9%
Transportation 5.7%
Government 2.2%
Retail 11.4%
Hospitality 1.3%
Other 28.5%
Table 4
Item descriptive statistics.
Construct Item Mean SD Loading Alpha CR AVE
Automatability AUT1 5.72 1.21 0.78 0.77 0.77 0.53
 AUT2 5.76 1.19 0.72
 AUT3 5.63 1.25 0.69
Personalization PER1 5.00 1.57 0.81 0.89 0.89 0.66
 PER2 5.10 1.45 0.78
 PER3 4.76 1.78 0.84
 PER4 5.18 1.61 0.83
Availability AVA1 5.89 1.31 0.84 0.85 0.85 0.66
 AVA2 5.90 1.23 0.81
 AVA3 5.96 1.19 0.79
Limited Understanding LU1 4.76 1.76 0.80 0.88 0.88 0.70
 LU2 4.73 1.76 0.87
 LU3 4.65 1.83 0.85
Lack of Emotion LE1 4.90 1.75 0.81 0.87 0.87 0.69
 LE2 5.00 1.78 0.83
 LE3 4.91 1.91 0.84
Null Decision-Making ND1 4.20 1.85 0.72 0.88 0.88 0.72
 ND2 3.97 1.87 0.90
 ND3 4.09 1.89 0.90
Chatbot Effectiveness EFF1 5.14 1.52 0.72 0.76 0.77 0.53
 EFF2 5.56 1.22 0.65
 EFF3 5.46 1.26 0.81
Discomfort With Using Chatbots DC1 3.58 1.90 0.83 0.90 0.90 0.75
 DC2 3.33 1.93 0.86
 DC3 3.16 1.91 0.91
Attitudes Toward Chatbots ATT1 5.58 1.19 0.85 0.87 0.87 0.63
 ATT2 5.47 1.30 0.79
 ATT3 5.43 1.33 0.73
 ATT4 5.64 1.23 0.80
1 https://www.sba.gov/document/support–table-size-standards
medium-to-large firms deal with more clients and interactions can be
less personal (Visentin & Scarpi, 2012), they probably prefer chatbots
that support personalized interactions with customers.
We also find that null decision-making (H4c) can result in discomfort
with using chatbots. This finding further emphasizes that employees feel
concerned that chatbots lack judgement-making, an important human
quality (Murtarelli et al., 2021). This effect is stronger for small firms as
seen in post-hoc analysis. Since smaller firms deal with fewer clients and
manage less customer data, they may lack deeper knowledge regarding
customer profiles. Therefore, marketing employees may desire tools
which can help them make decisions, and thus feel more concerned
about null decision-making.
It is worth noting that availability is not significantly related to
chatbot effectiveness, which is not consistent with previous arguments
from the literature (Lalicic & Weismayer, 2021). One possibility is,
because chatbots lack decision-making capabilities (i.e., null decision-
making), marketing employees still need to maintain some level of
involvement and supervision when dealing with customer inquiries and
fulfilling customer needs. Therefore, even if a chatbot is available 24/7,
employees may feel that this availability is ineffective during the hours
they themselves are not also available. An alternative possibility is that
Table 5
Correlation between constructs and square root of AVEs (on diagonal).
Construct 1 2 3 4 5 6 7 8 9
1 Automatability 0.73
2 Personalization 0.41 0.81
3 Availability 0.58 0.12 0.81
4 Limited Understanding 0.15 0.04 0.13 0.84
5 Lack of Emotion 0.13 −0.08 0.16 0.40 0.83
6 Null Decision-Making −0.05 −0.07 0.01 0.41 0.39 0.85
7 Chatbot Effectiveness 0.54 0.62 0.33 0.03 0.03 −0.06 0.73
8 Discomfort Using Chatbots −0.10 −0.04 −0.02 0.28 0.20 0.57 −0.13 0.86
9 Attitudes Toward Chatbots 0.55 0.56 0.23 0.05 −0.11 −0.16 0.67 −0.24 0.79
Fig. 2. Results of model testing. H1: .65***; H2: −.16**; H3a: .29***; H3b: .49***; H3c: n.s.; H4a: n.s.; H4b: n.s.; H4c: .56***; R² = .49 (chatbot effectiveness), .33 (discomfort with using chatbots), .48 (attitudes toward chatbots). ** p < .01; *** p < .001.
Table 6
The moderating effect of firm size.
Path | Small firms (N = 90) | Medium-to-large firms (N = 138) | Sig. Diff.?
H1: Effectiveness → Attitudes toward Chatbots | 0.59*** | 0.70*** | <***
H2: Discomfort with Using Chatbots → Attitudes toward Chatbots | −0.22* | −0.13* | <***
H3a: Automatability → Chatbot Effectiveness | 0.33* | 0.28** | >**
H3b: Personalization → Chatbot Effectiveness | 0.45*** | 0.54*** | <***
H3c: Availability → Chatbot Effectiveness | 0.08 | 0.07 | bns
H4a: Limited Understanding → Discomfort with Using Chatbots | 0.14 | 0.05 | bns
H4b: Lack of Emotion → Discomfort with Using Chatbots | −0.13 | 0.01 | bns
H4c: Null Decision-Making → Discomfort with Using Chatbots | 0.63*** | 0.51*** | >***
* p < .05; ** p < .01; *** p < .001; Sig. Diff. = Significant difference; bns = both are non-significant.
employees expect chatbots to function 24/7 anyway and therefore
consider constant availability as the minimum requirement.
It is also worth noting that limited understanding and lack of
emotion do not have significant impacts on discomfort with using
chatbots. Marketing employees have access to information about the
current progress of chatbot development and its limitations. Thus, they
do not feel too concerned about these two disaffordances since they can
always intervene when chatbots are interacting with customers. In the
following, we summarize our implications for both theory and
practitioners.
5.1. Implications for theory
Our study makes two main theoretical contributions. First, we extend
the technology affordance theory to the context of chatbots and identify
three relevant types of chatbot affordances: automatability, personali­
zation, and availability. Chatbots have been actively used to support
customer services in many companies. As customers move from human-
to-human interactions to human-to-technology interactions, it is
important to understand which aspects of chatbots can facilitate effec­
tive interactions with customers in this new setting. Our conceptuali­
zation can thus shed light on the question of how technology affordances
can be facilitated based on the unique features of a technology. This can
also help researchers understand how users may use a technology to
achieve their goals. Our results show that availability does not have any
impact on chatbot effectiveness. On the other hand, automatability and
personalization can enhance marketing employees' perceptions of
chatbot effectiveness. Furthermore, the effect of personalization is
greater than that of automatability, indicating that marketing em­
ployees focus more on how to use chatbots to provide customized ser­
vices to their customers. These findings provide further evidence that
the affordance theory can play a key role in explaining how a technology
can build its effectiveness and generate outcomes (e.g., Leidner et al.,
2018; Lin & Kishore, 2021; Sæbø, Federici, & Braccini, 2020). By
adopting the technology affordance theory, our study provides empirical
evidence to develop and validate the scales for chatbot affordances. It
further enriches the technology affordance theory and extends it to the
most recently developed technologies. Thus, we contribute to the liter­
ature by identifying the technology affordances of the emerging chatbot
technology and examining their impacts.
Although the concepts of affordance in general and technology
affordances in particular have gained increasing attention in the litera­
ture (e.g., Leonardi, 2013; Strong et al., 2014; Treem & Leonardi, 2013),
few scholars have attempted to explore disaffordance, which is consid­
ered an underexplored concept (Wittkower, 2016). Thus, our second
contribution is to provide some understanding of disaffordance theory
by contextualizing technology disaffordance and empirically testing its
impacts in the context of chatbots. More specifically, we identify three
types of chatbot disaffordances—limited understanding, lack of
emotion, and null decision-making—and then investigate their effects
on discomfort with using chatbots. Our analysis shows that limited un­
derstanding and lack of emotion do not significantly influence discom­
fort. On the other hand, null decision-making can increase discomfort.
These findings indicate that technology disaffordance can serve as a
theoretical lens for revealing what causes users' negative perceptions of
a technology. Thus, our study contributes to the literature by offering a
novel perspective on the underexamined effects of technology dis­
affordance as the potential disabler of chatbot adoption.
Additionally, we confirm the roles of chatbot effectiveness and
discomfort with chatbots in developing employees' psychological per­
ceptions of chatbots (e.g., Castelo et al., 2019). These are valuable
findings that can offer empirical evidence to support how companies
may effectively implement chatbots based on a growing understanding of employees' psychological states. Our study also provides evidence
from post-hoc analysis that the firm size moderates the respective effects
of chatbot affordances and chatbot disaffordances on chatbot
effectiveness and discomfort with using chatbots. Specifically, auto­
matability more strongly affects chatbot effectiveness in small firms,
while personalization more strongly affects chatbot effectiveness in
medium-to-large firms. In addition, null decision-making more strongly
affects discomfort with using chatbots in small firms. These findings
offer additional insights into understanding how affordances and dis­
affordances influence employees' perceptions of chatbots across firms of
different sizes. This study thereby contributes to the literature of psy­
chological perceptions of technology through extending it to chatbots
and empirically testing the moderating effects of firm size.
5.2. Implications for practice
Our study also has important practical implications. Our results
signify that chatbot effectiveness is essential for marketing employees to
form more positive attitudes about the use of chatbots in their organi­
zation. Companies using chatbots to support interactions with their
customers need to enhance chatbot effectiveness, so that their em­
ployees can perceive chatbots more favorably and become willing to
adopt them. In addition, our post-hoc analysis suggests that effectiveness matters more for medium-to-large firms than for small firms. Medium-to-large firms typically have more resources to adopt advanced information technology tools to support their customer services, which mitigates cost concerns and leaves the effectiveness of the tools as the key factor in building employees' attitudes. As such, medium-to-large firms need to pay additional attention to the effectiveness of
chatbots when adopting chatbots.
Our study indicates that automatability and personalization are
important types of chatbot affordances that enhance chatbot effective­
ness. Therefore, companies need to collaborate with chatbot vendors to
ensure that these two types of chatbot affordances are supported.
Moreover, our post-hoc analysis outlines which affordance should be
emphasized for small versus medium-to-large firms.
Regarding automatability, companies could provide frequently
asked questions and answers to vendors so that they can configure
chatbots to answer these questions automatically. Further, our post-hoc
analysis shows a greater effect of automatability for small firms. Since
small firms likely have fewer marketing employees to provide customer
services, automatability is an important affordance so chatbots can
resolve customers' routine questions and concerns.
Regarding personalization, data from historical transactions and
customer profiles are also needed to let chatbots run analyses and
provide customized responses to certain customers. Further, our post-
hoc analysis reveals that the effect of personalization is larger for
medium-to-large firms. Personalization is a useful tool in providing
tailored services to the larger customer base of these firms.
The effect of availability is not significant, suggesting that marketing
employees do not perceive availability as an important affordance to
improve chatbot effectiveness. Therefore, availability may be less
emphasized during chatbot adoption, especially when a certain level of
involvement from marketing employees is needed for customized
interactions.
On the other hand, firms need to reduce employee discomfort with
using chatbots. Our results show that discomfort with using chatbots is negatively related to employees' attitudes toward chatbots. Further, our results
highlight null decision-making as a vital type of disaffordance leading to
discomfort. In other words, employees do feel uncomfortable with the
chatbots' inability to make decisions. This can be an important direction
for the future development of chatbots. Although important decisions
are still made by marketing employees and managers, chatbots may be
developed to make simple decisions related to low-value and low-risk
purchases (e.g., office supplies). Additionally, our post-hoc analysis reveals a stronger impact of null decision-making for small firms.
Smaller firms may not have enough customers to develop detailed
customer profiles. Therefore, marketing employees need to rely more on
other tools, such as chatbots, to help them make decisions, and managers
from small firms need to pay more attention to this feature during
chatbot adoption. Employees seem to feel less concerned about a chat­
bot's limited understanding and lack of emotion. Nevertheless, because
these two types of chatbot disaffordance are still positively related to
discomfort, vendors also need to make an effort to resolve them in the
long term.
6. Conclusion
Chatbots have been increasingly used in B2B marketing. In our study
we examined how marketing employees develop their attitudes toward
the use of chatbots within organizations. We argued that chatbot
affordances and disaffordances influence employees' perceptions (i.e.,
effectiveness and discomfort with using chatbots), which in turn affect
their attitudes. Survey data were collected from 228 B2B marketing
employees, and the results provided strong support for our model.
Our study has a few limitations. First, we recruited our sample using
a survey company. Although our participants are B2B marketing em­
ployees from a variety of backgrounds, our sample may still be biased.
Second, we focused on the context of B2B, and the results may not hold
in the context of B2C. For example, B2C deals with individual con­
sumers, and it is possible that certain types of affordances (e.g., avail­
ability) and disaffordances (e.g., the lack of emotion) are more
important in the context of B2C. Third, we focused on American em­
ployees. It is possible that employees from other cultures may emphasize
different types of affordance and disaffordance.
Future studies can extend our research in several ways. First, addi­
tional types of chatbot affordance and disaffordance can be proposed in
B2B as well as in other contexts. Our study did not differentiate chatbots
by type, so while we attempt to increase the generalizability of this
study, it is still possible that certain types (e.g., human-like versus not
human-like) of chatbots may have specific affordances or disaffordances
that are not captured in this study. The stated effects of chatbot affor­
dances and disaffordances may also vary across different types of chat­
bots. Second, future researchers can examine how different types of
chatbot affordances and disaffordances influence marketing employees'
work performance and well-being. Third, our study examined the
moderating effect of firm size. Future studies can examine how other
relevant moderators, such as individual characteristics and industry
types (e.g., relationship-intensive industries versus non-intensive ones),
moderate the relationships between chatbot affordances and dis­
affordances and employees' perceptions and adoptions.
While we focus on chatbots to conceptualize different types of
affordances and disaffordances, certain affordances and/or dis­
affordances proposed in this study can be relevant to other types of AI.
Virtual reality (VR), for example, can simulate products (Courtney &
Van Doren, 1996) and deliver utilitarian and hedonic value (Pizzi,
Scarpi, Pichierri, & Vannucci, 2019; Sung, Bae, Han, & Kwon, 2021). It
is possible that the affordance of availability may play a more important
role in the context of VR, as consumers will no longer need to go to
physical stores to view products (Pizzi et al., 2019). The disaffordance of
null decision-making can be relevant for VR as well. Although VR allows
users to view 3-D virtual data, it cannot make decisions for users
(Courtney & Van Doren, 1996). Moreover, VR has its own distinct ca­
pabilities, such as eliciting certain emotions from users (Shin, 2018).
Future studies are needed to determine how the affordances and/or
disaffordances proposed in this study may influence employees' per­
ceptions, and develop affordances and disaffordances in other contexts
of AI such as VR.
Acknowledgements
This study is supported by the internal grant from West Texas A & M
University.
Appendix: Measures
Automatability (self-development)
AUT1 Chatbots can automate answers to customers' most-asked questions.
AUT2 Chatbots can automate repeated tasks when interacting with customers.
AUT3 Chatbots can handle more questions from customers automatically.
Personalization (self-development)
PER1 Chatbots can provide personalized experiences to customers.
PER2 Chatbots can deal with customers' specific needs.
PER3 Chatbots can take into account customers' unique circumstances.
PER4 Chatbots can deliver immediate one-on-one responses based upon exactly what customers demand.
Availability (self-development)
AVA1 Chatbots can interact with customers 24 h a day and 7 days a week.
AVA2 Chatbots can interact with customers whenever they want to.
AVA3 Chatbots can interact with customers at any time of the day.
Limited understanding (self-development)
LU1 Chatbots cannot understand a new context that was not previously taught during customer interactions.
LU2 Chatbots cannot understand new queries of customers not previously taught.
LU3 Chatbots cannot understand new questions of customers that were not asked previously.
Lack of emotion (self-development)
LE1 Chatbots cannot express emotions during customer interactions.
LE2 Chatbots cannot create emotional connections with customers.
LE3 Chatbots cannot feel customers' emotions during customer interactions.
Null decision-making (self-development)
ND1 Chatbots cannot make decisions during customer interactions.
ND2 Chatbots have poor decision-making capability during customer interactions.
ND3 Chatbots lack decision-making ability during customer interactions.
Chatbot Effectiveness (Castelo et al., 2019)
EFF1 I can see the benefits in chatbots that can deal with customers' requests better than humans.
EFF2 Chatbots that can deal with customers' requests could be useful.
EFF3 I believe chatbots can perform well during customer interactions.
Discomfort with using Chatbots (Castelo et al., 2019)
DC1 Using chatbots to interact with customers makes me uncomfortable.
DC2 Using chatbots to interact with customers goes against how I believe computers should be used.
DC3 Using chatbots to interact with customers is unsettling.
Attitude toward Chatbots (Klobas et al., 2019)
ATT1 Using chatbots is a (good/bad) idea.
ATT2 Using chatbots is a (wise/foolish) idea.
ATT3 Using chatbots is a (pleasant/unpleasant) idea.
ATT4 Using chatbots is a (positive/negative) idea.
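The reflective constructs above can be scored from their Likert items. As a hedged illustration only (the paper estimates latent variables with Mplus; simple item averaging is a common alternative, not the authors' procedure, and the response values below are hypothetical):

```python
# Sketch: averaging each construct's Likert items into a composite score.
# Item codes follow the Appendix; the 7-point responses are invented.
responses = {
    "AUT1": 6, "AUT2": 7, "AUT3": 5,   # Automatability
    "ND1": 4, "ND2": 5, "ND3": 3,      # Null decision-making
    "EFF1": 6, "EFF2": 6, "EFF3": 5,   # Chatbot effectiveness
}

constructs = {
    "automatability": ["AUT1", "AUT2", "AUT3"],
    "null_decision_making": ["ND1", "ND2", "ND3"],
    "effectiveness": ["EFF1", "EFF2", "EFF3"],
}

def composite_scores(resp, spec):
    """Return the mean of each construct's items."""
    return {
        name: sum(resp[item] for item in items) / len(items)
        for name, items in spec.items()
    }

print(composite_scores(responses, constructs))
```

Unlike latent-variable estimation, this unweighted mean treats all items as equally reliable indicators of their construct.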
References
Aivo. (2019). Advantages and disadvantages of chatbots you need to know. https://www.aivo.co/blog/advantages-and-disadvantages-of-chatbots. Accessed December 10, 2020.
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behaviour.
Englewood Cliffs, NJ: Prentice-Hall.
Albarracin, D., & Shavitt, S. (2018). Attitudes and attitude change. Annual Review of
Psychology, 69, 299–327.
Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic
design cues and communicative agency framing on conversational agent and
company perceptions. Computers in Human Behavior, 85, 183–189.
Ashfaq, M., Yun, J., Yu, S., & Loureiro, S. M. C. (2020). I, Chatbot: Modeling the
determinants of users’ satisfaction and continuance intention of AI-powered service
agents. Telematics and Informatics, 54, Article 101473.
Balakrishnan, J., & Dwivedi, Y. K. (2021a). Conversational commerce: Entering the next
stage of AI-powered digital assistants. Annals of Operations Research, 1-35. https://
doi.org/10.1007/s10479-021-04049-5
Balakrishnan, J., & Dwivedi, Y. K. (2021b). Role of cognitive absorption in building user
trust and experience. Psychology & Marketing, 38(4), 643–668.
Balakrishnan, J., Dwivedi, Y. K., Hughes, L., & Boy, F. (2021). Enablers and inhibitors of
AI-powered voice assistants: A dual-factor approach by integrating the status quo
bias and technology acceptance model. Information Systems Frontiers, 1-22. https://
doi.org/10.1007/s10796-021-10203-y
Boomtown. (2019). Chatbot statistics: The 2018 state of chatbots. https://www.goboomtown.com/blog/chatbot-statistics-study.
Borges, A. F., Laurindo, F. J., Spínola, M. M., Gonçalves, R. F., & Mattos, C. A. (2020).
The strategic use of artificial intelligence in the digital era: Systematic literature
review and future research directions. International Journal of Information
Management, 57, Article 102225. https://doi.org/10.1016/j.ijinfomgt.2020.102225
Brachten, F., Kissmer, T., & Stieglitz, S. (2021). The acceptance of chatbots in an
enterprise context–a survey study. International Journal of Information Management,
60, Article 102375. https://doi.org/10.1016/j.ijinfomgt.2021.102375
Brandtzaeg, P. B., & Følstad, A. (2017). Why people use chatbots. In Paper presented at the
international conference on internet science.
Brown, T. A. (2015). Confirmatory factor analysis for applied research. New York, NY: Guilford Publications.
Cao, G., Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2021). Understanding managers’
attitudes and behavioral intentions towards using artificial intelligence for
organizational decision-making. Technovation, 106, Article 102312. https://doi.org/
10.1016/j.technovation.2021.102312
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion.
Journal of Marketing Research, 56(5), 809–825.
Chan, T. K., Cheung, C. M., & Wong, R. Y. (2019). Cyberbullying on social networking
sites: The crime opportunity and affordance perspectives. Journal of Management
Information Systems, 36(2), 574–609.
Chung, M., Ko, E., Joung, H., & Kim, S. J. (2020). Chatbot e-service and customer
satisfaction regarding luxury brands. Journal of Business Research, 117, 587–595.
Collins, C., Dennehy, D., Conboy, K., & Mikalef, P. (2021). Artificial intelligence in
information systems research: A systematic literature review and research agenda.
International Journal of Information Management, 60, Article 102383. https://doi.org/
10.1016/j.ijinfomgt.2021.102383
CoSource. (2018). All about chatbots: Pros, cons, and how they can help your business. https://cosource.us/2018/02/13/chatbots-pros-cons-can-help-business/. Accessed December 10, 2020.
Courtney, J. A., & Van Doren, D. C. (1996). Succeeding in the communiputer age:
Technology and the marketing mix. Industrial Marketing Management, 25(1), 1–10.
Dale, R. (2016). The return of the chatbots. Natural Language Engineering, 22(5),
811–817.
De, A. (2018). A look at the future of chatbots in customer service. https://readwrite.com/2018/12/04/a-look-at-the-future-of-chatbots-in-customer-service/.
De Luca, L. M., Herhausen, D., Troilo, G., & Rossi, A. (2021). How and when do big data
investments pay off? The role of marketing affordances and service innovation.
Journal of the Academy of Marketing Science, 49, 790–810.
Dilmegani, C. (2021). 84 chatbot/conversational statistics: Market size, adoption. AIMultiple. https://research.aimultiple.com/chatbot-stats.
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., &
Williams, M. D. (2021). Artificial intelligence (AI): Multidisciplinary perspectives on
emerging challenges, opportunities, and agenda for research, practice and policy.
International Journal of Information Management, 57, Article 101994. https://doi.org/
10.1016/j.ijinfomgt.2019.08.002
Dwivedi, Y. K., Ismagilova, E., Hughes, D. L., Carlson, J., Filieri, R., Jacobson, J., …
Wang, Y. (2021). Setting the future of digital and social media marketing research:
Perspectives and research propositions. International Journal of Information
Management, 59, Article 102168. https://doi.org/10.1016/j.ijinfomgt.2020.102168
Fazio, R. H., & Cooper, J. (1983). Arousal in the dissonance process. Social
psychophysiology: A sourcebook (pp. 122–152).
Festinger, L. (1957). A theory of cognitive dissonance (Vol. 2). Stanford University Press.
Gibson, J. J. (1977). A theory of affordances. In R. Shaw, & J. Bransford (Eds.), Perceiving, acting and knowing: Toward an ecological psychology (pp. 67–82). Hillsdale, NJ: Lawrence Erlbaum Associates.
Gnewuch, U., Morana, S., & Maedche, A. (2017). Towards designing cooperative and
social conversational agents for customer service. In Paper presented at thirty eighth
international conference on information systems, South Korea.
Gomez, A. (2018). Chatbots: Advantages and disadvantages of these tools. https://www.ecommerce-nation.com/chatbots-advantages-and-disadvantages-of-these-tools/.
Goodhue, D. L., & Thompson, R. L. (1995). Task-technology fit and individual
performance. MIS Quarterly, 19(2), 213–236.
Grgecic, D., Holten, R., & Rosenkranz, C. (2015). The impact of functional affordances
and symbolic expressions on the formation of beliefs. Journal of the Association for
Information Systems, 16(7), 500–607.
Han, R., Lam, H. K., Zhan, Y., Wang, Y., Dwivedi, Y. K., & Tan, K. H. (2021). Artificial
intelligence in business-to-business marketing: A bibliometric analysis of current
research status, development and future directions. Industrial Management & Data
Systems, 121(12), 2467–2497.
Hill, J., Ford, W. R., & Farreras, I. G. (2015). Real conversations with artificial
intelligence: A comparison between human–human online conversations and
human–chatbot conversations. Computers in Human Behavior, 49, 245–250.
Hinojosa, A. S., Gardner, W. L., Walker, H. J., Cogliser, C., & Gullifor, D. (2017). A review
of cognitive dissonance theory in management research: Opportunities for further
development. Journal of Management, 43(1), 170–199.
Hu, Q., Lu, Y., Pan, Z., Gong, Y., & Yang, Z. (2021). Can AI artifacts influence human
cognition? The effects of artificial autonomy in intelligent personal assistants.
International Journal of Information Management, 56, Article 102250.
Jain, M., Kumar, P., Kota, R., & Patel, S. N. (2018). Evaluating and informing the design
of chatbots. In Paper presented at the Proceedings of the 2018 designing interactive
systems conference.
Johnston, M. (2020). 3 powerful ways chatbots are impacting B2B marketing. https://www.digital22.com/insights/powerful-ways-chatbots-are-impacting-b2b-marketing. Accessed December 10, 2020.
Juniper Research. (2021). Bank cost savings via chatbots to reach $7.3 billion by 2023, as automated customer experience evolves. https://www.juniperresearch.com/press/bank-cost-savings-via-chatbots-reach-7-3bn-2023. Accessed June 1, 2021.
Karahanna, E., Xu, S. X., Xu, Y., & Zhang, N. A. (2018). The needs–affordances–features
perspective for the use of social media. MIS Quarterly, 42(3), 737–756.
Keil, M., Tan, B. C., Wei, K.-K., Saarinen, T., Tuunainen, V., & Wassenaar, A. (2000).
A cross-cultural study on escalation of commitment behavior in software projects.
MIS Quarterly, 24(2), 299–325.
Klobas, J. E., McGill, T., & Wang, X. (2019). How perceived security risk affects intention
to use smart home devices: A reasoned action explanation. Computers & Security, 87,
Article 101571.
Kolekofski, K. E., Jr., & Heminger, A. R. (2003). Beliefs and attitudes affecting intentions
to share information in an organizational setting. Information & Management, 40(6),
521–532.
Kwok, S. H., & Gao, S. (2005). Attitude towards knowledge sharing behavior. Journal of
Computer Information Systems, 46(2), 45–51.
Lalicic, L., & Weismayer, C. (2021). Consumers’ reasons and perceived value co-creation
of using artificial intelligence-enabled travel service agents. Journal of Business
Research, 129, 891–901.
Landis, G. A. (2014). Future tense: The chatbot and the drone. Communications of the
ACM, 57(7), 112–ff.
Leidner, D. E., Gonzalez, E., & Koch, H. (2018). An affordance perspective of enterprise
social media and organizational socialization. The Journal of Strategic Information
Systems, 27(2), 117–138.
Leonardi, P. M. (2013). When does technology use enable network change in
organizations? A comparative study of feature use and shared affordances. MIS
Quarterly, 37(3), 749–775.
Lin, X., Featherman, M., Brooks, S. L., & Hajli, N. (2019). Exploring gender differences in
online consumer purchase decision making: An online product presentation
perspective. Information Systems Frontiers, 21(5), 1187–1201.
Lin, X., & Kishore, R. (2021). Social media-enabled healthcare: A conceptual model of
social media affordances, online social support, and health behaviors and outcomes.
Technological Forecasting and Social Change, 166, Article 120574. https://doi.org/
10.1016/j.techfore.2021.120574
Lin, X., Wang, X., & Hajli, N. (2019). Building E-commerce satisfaction and boosting
sales: The role of social commerce trust and its antecedents. International Journal of
Electronic Commerce, 23(3), 328–363.
Majchrzak, A., Faraj, S., Kane, G. C., & Azad, B. (2013). The contradictory influence of
social media affordances on online communal knowledge sharing. Journal of
Computer-Mediated Communication, 19(1), 38–55.
Markus, M. L., & Silver, M. S. (2008). A foundation for the study of IT effects: A new look
at DeSanctis and Poole’s concepts of structural features and spirit. Journal of the
Association for Information Systems, 9(3/4), 609–632.
Meuter, M. L., Bitner, M. J., Ostrom, A. L., & Brown, S. W. (2005). Choosing among
alternative service delivery modes: An investigation of customer trial of self-service
technologies. Journal of Marketing, 69(2), 61–83.
Mimoun, M. S. B., Poncin, I., & Garnier, M. (2012). Case study—Embodied virtual agents:
An analysis on reasons for failure. Journal of Retailing and Consumer Services, 19(6),
605–612.
Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the
perceptions of adopting an information technology innovation. Information Systems
Research, 2(3), 192–222.
Murtarelli, G., Gregory, A., & Romenti, S. (2021). A conversation-based perspective for
shaping ethical human–machine interactions: The particular challenge of chatbots.
Journal of Business Research, 129, 927–935.
Muthén, L. K., & Muthen, B. (2017). Mplus user’s guide: Statistical analysis with latent
variables, user’s guide. Muthén & Muthén.
Paschen, J., Kietzmann, J., & Kietzmann, T. C. (2019). Artificial intelligence (AI) and its
implications for market knowledge in B2B marketing. Journal of Business & Industrial
Marketing, 33(3), 543–556.
Pillai, R., Sivathanu, B., & Dwivedi, Y. K. (2020). Shopping intention at AI-powered
automated retail stores (AIPARS). Journal of Retailing and Consumer Services, 57,
Article 102207. https://doi.org/10.1016/j.jretconser.2020.102207
Pizzi, G., Scarpi, D., & Pantano, E. (2021). Artificial intelligence and the new forms of
interaction: Who has the control when interacting with a chatbot? Journal of Business
Research, 129, 878–890.
Pizzi, G., Scarpi, D., Pichierri, M., & Vannucci, V. (2019). Virtual reality, real reactions?:
Comparing consumers' perceptions and shopping orientation across physical and
virtual-reality retail stores. Computers in Human Behavior, 96, 1–12.
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method
biases in behavioral research: A critical review of the literature and recommended
remedies. Journal of Applied Psychology, 88(5), 879–903.
Putri, F. P., Meidia, H., & Gunawan, D. (2019). Designing intelligent personalized
chatbot for hotel services. In Paper presented at the Proceedings of the 2019 2nd
international conference on algorithms, computing and artificial intelligence.
Qiu, T., & Yang, Y. (2018). Knowledge spillovers through quality control requirements on
innovation development of global suppliers: The firm size effects. Industrial
Marketing Management, 73, 171–180.
Ritter, A., Cherry, C., & Dolan, W. B. (2011). Data-driven response generation in social
media. In Paper presented at the Proceedings of the 2011 conference on empirical
methods in natural language processing.
Roy, R., & Naidoo, V. (2021). Enhancing chatbot effectiveness: The role of
anthropomorphic conversational styles and time orientation. Journal of Business
Research, 126, 23–34.
Sæbø, Ø., Federici, T., & Braccini, A. M. (2020). Combining social media affordances for
organising collective action. Information Systems Journal, 30(4), 699–732.
Scherer, A., Wünderlich, N. V., & Wangenheim, F. V. (2015). The value of self-service:
Long-term effects of technology-based self-service usage on customer retention. MIS
Quarterly, 39(1), 177–200.
Schuetzler, R. M., Grimes, G. M., & Scott Giboney, J. (2020). The impact of chatbot
conversational skill on engagement and perceived humanness. Journal of
Management Information Systems, 37(3), 875–900.
Schuetzler, R. M., Grimes, M., Giboney, J. S., & Buckman, J. (2014). Facilitating natural
conversational agent interactions: Lessons from a deception experiment. In
Information systems and quantitative analysis faculty proceedings & presentations.
Shah, H., Warwick, K., Vallverdú, J., & Wu, D. (2016). Can machines talk? Comparison of
Eliza with modern dialogue systems. Computers in Human Behavior, 58, 278–295.
Shawar, B. A., & Atwell, E. (2007). Chatbots: Are they really useful?. In Paper presented at
the Ldv forum.
Sheehan, B., Jin, H. S., & Gottlieb, U. (2020). Customer service chatbots:
Anthropomorphism and adoption. Journal of Business Research, 115, 14–24.
Shin, D. (2018). Empathy and embodied experience in virtual environment: To what
extent can virtual reality stimulate empathy and embodied experience? Computers in
Human Behavior, 78, 64–73.
Shumanov, M., & Johnson, L. (2021). Making conversations with chatbots more
personalized. Computers in Human Behavior, 117, Article 106627. https://doi.org/
10.1016/j.chb.2020.106627
Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work:
Human–AI collaboration in managerial professions. Journal of Business Research, 125,
135–142.
Strong, D. M., Volkoff, O., Johnson, S. A., Pelletier, L. R., Tulu, B., Bar-On, I., …
Garber, L. (2014). A theory of organization-EHR affordance actualization. Journal of
the Association for Information Systems, 15(2), 53–85.
Sung, E. C., Bae, S., Han, D.-I. D., & Kwon, O. (2021). Consumer engagement via
interactive artificial intelligence and mixed reality. International Journal of
Information Management, 60, Article 102382. https://doi.org/10.1016/j.
ijinfomgt.2021.102382
Treem, J. W., & Leonardi, P. M. (2013). Social media use in organizations: Exploring the
affordances of visibility, editability, persistence, and association. Annals of the
International Communication Association, 36(1), 143–189.
Visentin, M., & Scarpi, D. (2012). Determinants and mediators of the intention to
upgrade the contract in buyer–seller relationships. Industrial Marketing Management,
41(7), 1133–1141.
Volkoff, O., & Strong, D. M. (2013). Critical realism and affordances: Theorizing IT-
associated organizational change processes. MIS Quarterly, 37(3), 819–834.
Volkoff, O., & Strong, D. M. (2017). Affordance theory and how to use it in IS research. In The Routledge companion to management information systems (pp. 232–245).
Wang, C., Teo, T. S., & Janssen, M. (2021). Public and private value creation using
artificial intelligence: An empirical study of AI voice robot users in Chinese public
sector. International Journal of Information Management, 61, Article 102401.
Wang, X., McGill, T. J., & Klobas, J. E. (2018). I want it anyway: Consumer perceptions of
smart home devices. Journal of Computer Information Systems, 60(5), 437–447.
Westaby, J. D. (2005). Behavioral reasoning theory: Identifying new linkages underlying
intentions and behavior. Organizational Behavior and Human Decision Processes, 98
(2), 97–120.
Wittkower, D. (2016). Principles of anti-discriminatory design. In Paper presented at the
2016 IEEE international symposium on ethics in engineering, science and technology.
Xu, A., Liu, Z., Guo, Y., Sinha, V., & Akkiraju, R. (2017). A new chatbot for customer
service on social media. In Paper presented at the Proceedings of the 2017 CHI
conference on human factors in computing systems.
Xiaolin Lin is an assistant professor of computer information systems in the Department of
Computer Information and Decision Management, Paul and Virginia Engler College of
Business, West Texas A&M University. He received his Ph.D. in information systems from
Washington State University. His interests are the impacts of IT on e-commerce and
healthcare, cyber security, and gender differences in IT behavioral research. Dr. Lin has
published or has forthcoming papers in premier journals including Journal of Business
Ethics, Decision Sciences, Information & Management, International Journal of Electronic
Commerce, Industrial Marketing Management, and International Journal of Information
Management, among others. He has also presented numerous papers at international and
national conferences.
Bin Shao is a professor of decision management & Terry Professor of Business in the Department of Computer Information and Decision Management, Paul and Virginia Engler College of Business, West Texas A&M University. She earned her Ph.D. from the University of Illinois at Urbana-Champaign. Her current research interests lie mainly in the interdisciplinary areas of information systems, decision management, and marketing. Her work has been published in International Advances in Economic Research, International Journal of Applied Management Science, Journal of Supply Chain and Operations Management, Journal of Management Information and Decision Sciences, and many others.
Xuequn (Alex) Wang is a Senior Lecturer at Edith Cowan University. He received his Ph.D. in Information Systems from Washington State University. His research interests
include social media, privacy, e-commerce, and human-computer interaction. His research
has appeared in MIS Quarterly, Information Systems Journal, Information & Management,
Communications of the ACM, ACM Transactions, and Communications of the Association for
Information Systems, among others.
 
Beyond Numbers A Holistic Approach to Forensic Accounting
Beyond Numbers A Holistic Approach to Forensic AccountingBeyond Numbers A Holistic Approach to Forensic Accounting
Beyond Numbers A Holistic Approach to Forensic AccountingYourLegal Accounting
 
A BUSINESS PROPOSAL FOR SLAUGHTER HOUSE WASTE MANAGEMENT IN MYSORE MUNICIPAL ...
A BUSINESS PROPOSAL FOR SLAUGHTER HOUSE WASTE MANAGEMENT IN MYSORE MUNICIPAL ...A BUSINESS PROPOSAL FOR SLAUGHTER HOUSE WASTE MANAGEMENT IN MYSORE MUNICIPAL ...
A BUSINESS PROPOSAL FOR SLAUGHTER HOUSE WASTE MANAGEMENT IN MYSORE MUNICIPAL ...prakheeshc
 
Jual Obat Aborsi Di Sibolga wa 0851/7541/5434 Cytotec Misoprostol 200mcg Pfizer
Jual Obat Aborsi Di Sibolga wa 0851/7541/5434 Cytotec Misoprostol 200mcg PfizerJual Obat Aborsi Di Sibolga wa 0851/7541/5434 Cytotec Misoprostol 200mcg Pfizer
Jual Obat Aborsi Di Sibolga wa 0851/7541/5434 Cytotec Misoprostol 200mcg PfizerPusat Herbal Resmi BPOM
 

Recently uploaded (20)

Innomantra Viewpoint - Building Moonshots : May-Jun 2024.pdf
Innomantra Viewpoint - Building Moonshots : May-Jun 2024.pdfInnomantra Viewpoint - Building Moonshots : May-Jun 2024.pdf
Innomantra Viewpoint - Building Moonshots : May-Jun 2024.pdf
 
00971508021841 حبوب الإجهاض في دبي | أبوظبي | الشارقة | السطوة |❇ ❈ ((![© ر
00971508021841 حبوب الإجهاض في دبي | أبوظبي | الشارقة | السطوة |❇ ❈ ((![©  ر00971508021841 حبوب الإجهاض في دبي | أبوظبي | الشارقة | السطوة |❇ ❈ ((![©  ر
00971508021841 حبوب الإجهاض في دبي | أبوظبي | الشارقة | السطوة |❇ ❈ ((![© ر
 
Obat Aborsi Malang 0851\7696\3835 Jual Obat Cytotec Di Malang
Obat Aborsi Malang 0851\7696\3835 Jual Obat Cytotec Di MalangObat Aborsi Malang 0851\7696\3835 Jual Obat Cytotec Di Malang
Obat Aborsi Malang 0851\7696\3835 Jual Obat Cytotec Di Malang
 
Sex service available my WhatsApp number 7374088497
Sex service available my WhatsApp number 7374088497Sex service available my WhatsApp number 7374088497
Sex service available my WhatsApp number 7374088497
 
Home Furnishings Ecommerce Platform Short Pitch 2024
Home Furnishings Ecommerce Platform Short Pitch 2024Home Furnishings Ecommerce Platform Short Pitch 2024
Home Furnishings Ecommerce Platform Short Pitch 2024
 
Mastering The Art Of 'Closing The Sale'.
Mastering The Art Of 'Closing The Sale'.Mastering The Art Of 'Closing The Sale'.
Mastering The Art Of 'Closing The Sale'.
 
Pitch Deck Teardown: Goodcarbon's $5.5m Seed deck
Pitch Deck Teardown: Goodcarbon's $5.5m Seed deckPitch Deck Teardown: Goodcarbon's $5.5m Seed deck
Pitch Deck Teardown: Goodcarbon's $5.5m Seed deck
 
Progress Report - UKG Analyst Summit 2024 - A lot to do - Good Progress1-1.pdf
Progress Report - UKG Analyst Summit 2024 - A lot to do - Good Progress1-1.pdfProgress Report - UKG Analyst Summit 2024 - A lot to do - Good Progress1-1.pdf
Progress Report - UKG Analyst Summit 2024 - A lot to do - Good Progress1-1.pdf
 
Obat Aborsi Surabaya 0851\7696\3835 Jual Obat Cytotec Di Surabaya
Obat Aborsi Surabaya 0851\7696\3835 Jual Obat Cytotec Di SurabayaObat Aborsi Surabaya 0851\7696\3835 Jual Obat Cytotec Di Surabaya
Obat Aborsi Surabaya 0851\7696\3835 Jual Obat Cytotec Di Surabaya
 
Most Visionary Leaders in Cloud Revolution, Shaping Tech’s Next Era - 2024 (2...
Most Visionary Leaders in Cloud Revolution, Shaping Tech’s Next Era - 2024 (2...Most Visionary Leaders in Cloud Revolution, Shaping Tech’s Next Era - 2024 (2...
Most Visionary Leaders in Cloud Revolution, Shaping Tech’s Next Era - 2024 (2...
 
Pay after result spell caster (,$+27834335081)@ bring back lost lover same da...
Pay after result spell caster (,$+27834335081)@ bring back lost lover same da...Pay after result spell caster (,$+27834335081)@ bring back lost lover same da...
Pay after result spell caster (,$+27834335081)@ bring back lost lover same da...
 
SCI9-Q4-MOD8.1.pdfjttstwjwetw55k5wwtwrjw
SCI9-Q4-MOD8.1.pdfjttstwjwetw55k5wwtwrjwSCI9-Q4-MOD8.1.pdfjttstwjwetw55k5wwtwrjw
SCI9-Q4-MOD8.1.pdfjttstwjwetw55k5wwtwrjw
 
How to refresh to be fit for the future world
How to refresh to be fit for the future worldHow to refresh to be fit for the future world
How to refresh to be fit for the future world
 
Elevate Your Online Presence with SEO Services
Elevate Your Online Presence with SEO ServicesElevate Your Online Presence with SEO Services
Elevate Your Online Presence with SEO Services
 
如何办理(SUT毕业证书)斯威本科技大学毕业证成绩单本科硕士学位证留信学历认证
如何办理(SUT毕业证书)斯威本科技大学毕业证成绩单本科硕士学位证留信学历认证如何办理(SUT毕业证书)斯威本科技大学毕业证成绩单本科硕士学位证留信学历认证
如何办理(SUT毕业证书)斯威本科技大学毕业证成绩单本科硕士学位证留信学历认证
 
Beyond Numbers A Holistic Approach to Forensic Accounting
Beyond Numbers A Holistic Approach to Forensic AccountingBeyond Numbers A Holistic Approach to Forensic Accounting
Beyond Numbers A Holistic Approach to Forensic Accounting
 
Obat Aborsi Depok 0851\7696\3835 Jual Obat Cytotec Di Depok
Obat Aborsi Depok 0851\7696\3835 Jual Obat Cytotec Di DepokObat Aborsi Depok 0851\7696\3835 Jual Obat Cytotec Di Depok
Obat Aborsi Depok 0851\7696\3835 Jual Obat Cytotec Di Depok
 
Contact +971581248768 for 100% original and safe abortion pills available for...
Contact +971581248768 for 100% original and safe abortion pills available for...Contact +971581248768 for 100% original and safe abortion pills available for...
Contact +971581248768 for 100% original and safe abortion pills available for...
 
A BUSINESS PROPOSAL FOR SLAUGHTER HOUSE WASTE MANAGEMENT IN MYSORE MUNICIPAL ...
A BUSINESS PROPOSAL FOR SLAUGHTER HOUSE WASTE MANAGEMENT IN MYSORE MUNICIPAL ...A BUSINESS PROPOSAL FOR SLAUGHTER HOUSE WASTE MANAGEMENT IN MYSORE MUNICIPAL ...
A BUSINESS PROPOSAL FOR SLAUGHTER HOUSE WASTE MANAGEMENT IN MYSORE MUNICIPAL ...
 
Jual Obat Aborsi Di Sibolga wa 0851/7541/5434 Cytotec Misoprostol 200mcg Pfizer
Jual Obat Aborsi Di Sibolga wa 0851/7541/5434 Cytotec Misoprostol 200mcg PfizerJual Obat Aborsi Di Sibolga wa 0851/7541/5434 Cytotec Misoprostol 200mcg Pfizer
Jual Obat Aborsi Di Sibolga wa 0851/7541/5434 Cytotec Misoprostol 200mcg Pfizer
 

1-s2.0-S001985012100242X-main (4).pdf

Industrial Marketing Management 101 (2022) 45–56
Available online 6 December 2021
0019-8501/© 2021 Elsevier Inc. All rights reserved.

Employees' perceptions of chatbots in B2B marketing: Affordances vs. disaffordances

Xiaolin Lin a, Bin Shao b, Xuequn Wang c,*

a Assistant Professor of Computer Information Systems, Department of Computer Information and Decision Management, Paul and Virginia Engler College of Business, West Texas A&M University, 2501 4th Ave, Canyon, TX 79016, United States of America
b Decision Management & Terry Professor of Business, Department of Computer Information and Decision Management, Paul and Virginia Engler College of Business, West Texas A&M University, 2501 4th Ave, Canyon, TX 79016, United States of America
c School of Business and Law, Edith Cowan University, Joondalup, WA 6027, Australia

Keywords: Chatbot; B2B marketing; Affordance; Disaffordance; Effectiveness; Discomfort

Abstract

We investigate the impacts of chatbots' technical features on employees' perceptions (namely, chatbot effectiveness and discomfort with using chatbots) in the context of B2B marketing. In particular, to capture the technical features of chatbots, we identify three types of chatbot affordances (i.e., automatability, personalization, and availability) and three types of chatbot disaffordances (i.e., limited understanding, lack of emotion, and null decision-making). We show that these types of chatbot affordances and disaffordances are related to chatbot effectiveness and discomfort with using chatbots, which in turn can affect employees' attitudes toward chatbots. We conducted an empirical study via an online survey and collected data from 228 B2B marketing employees. While automatability and personalization enhance chatbot effectiveness, null decision-making increases discomfort with using chatbots. Further, chatbot effectiveness and discomfort with using chatbots both influence employees' attitudes toward chatbots.
Theoretical and managerial implications are also discussed.

1. Introduction

Advancements in the science of artificial intelligence (AI) have rapidly transformed chatbot technology into an innovative interface, through which companies can more efficiently interact with their customers in both B2B and B2C processes (e.g., Borges, Laurindo, Spínola, Gonçalves, & Mattos, 2020; Cao, Duan, Edwards, & Dwivedi, 2021; Collins, Dennehy, Conboy, & Mikalef, 2021; Dwivedi et al., 2021; Dwivedi et al., 2021; Han et al., 2021; Hu, Lu, Pan, Gong, & Yang, 2021; Lalicic & Weismayer, 2021; Murtarelli, Gregory, & Romenti, 2021; Pillai, Sivathanu, & Dwivedi, 2020). Observable advantages such as lower operating costs, improved response times during customer service communication, and consistent 24/7 customer assistance have stimulated enthusiasm and increased the pressure to develop chatbots and create a role for them in business. For example, chatbots can serve to enhance business performance by improving the quality and efficiency of customer services, automating online purchases, facilitating and engaging in communication with customers, and greatly improving response rates to inquiring customers (De, 2018). Chatbots can also strengthen the impression of fair treatment (Wang, Teo, & Janssen, 2021). For these reasons, many companies believe in the importance of adopting and using chatbots for business practices to upgrade performance. For example, Juniper Research predicts that chatbot usage could save companies $7.3 billion by 2023 (Juniper Research, 2021). Others foresee the value of e-commerce transactions supported by chatbots reaching $112 billion by 2023, and the global market size of chatbots reaching $1.3 billion by 2025 (Dilmegani, 2021). According to a recent report, 58% of companies that use chatbots are B2B, and 22% are B2C (Boomtown, 2019). Chatbots have been increasingly applied in the business operations of B2B marketing.
Chatbots can enhance B2B marketing in various ways, such as facilitating the sales process, automating traffic from emails, and providing efficient customer services (Johnston, 2020). For example, chatbots can provide answers to the frequently asked questions of customers, who desire quick and useful responses. Therefore, chatbots are now playing an increasingly important role in improving customer services in B2B marketing. To better leverage chatbots into customer services in B2B companies, it is essential for practitioners to understand employees' psychosocial perceptions of the use of chatbots within organizations (e.g., Araujo, 2018; Castelo, Bos, & Lehmann, 2019).

* Corresponding author. E-mail addresses: xlin@wtamu.edu (X. Lin), bshao@wtamu.edu (B. Shao), xuequnwang1600@gmail.com (X. Wang).
https://doi.org/10.1016/j.indmarman.2021.11.016
Received 11 January 2021; Received in revised form 25 November 2021; Accepted 30 November 2021

This study is an attempt to provide further insights into how to heighten employees' positive psychosocial perceptions (i.e., attitudes) about chatbots in B2B marketing by studying the impacts of technological features on their beliefs regarding chatbots. It is important to do so because marketing employees may be reluctant to use chatbots if they do not have positive attitudes. For example, a recent study indicated that when users feel positive about chatbots, they are more likely to believe that using chatbots produces good value (Lalicic & Weismayer, 2021). By understanding which chatbot features help improve employee perceptions, companies can identify and focus on specific key features when selecting and implementing chatbots. These enhanced features can then facilitate employees' acceptance and the integration of chatbots into customer services. In accordance with Castelo et al. (2019), we captured people's beliefs by using two terms: the perceived effectiveness of chatbots and discomfort with using chatbots (please refer to the section on "Employees' Psychological Perceptions of Chatbots" for an explanation). To capture technology features, our study draws upon the concepts of affordance and disaffordance (Gibson, 1977; Strong et al., 2014; Wittkower, 2016) to develop the relevant chatbot features (please refer to the section on "Technology Affordance" for an explanation). Therefore, our research questions are 1) what chatbot features influence perceptions of chatbot effectiveness? and 2) what chatbot features influence discomfort with using chatbots?

Our study makes two main contributions. First, we apply the concept of technology affordance and propose three types of chatbot affordance: automatability, personalization, and availability. Our study thus provides a deeper understanding of chatbot features.
Second, our study contextualizes the concept of disaffordance and proposes three types of chatbot disaffordance: limited understanding, lack of emotion, and null decision-making. Our study can thus enhance our understanding regarding the limitations of chatbots. Moreover, we uncover new knowledge by demonstrating the relationships between the features of chatbot technology and people's psychological perceptions from a technology affordance perspective. The results provide useful guidelines for companies to encourage marketing employees to use chatbots and offer valuable suggestions for the future development of chatbots.

The rest of the paper is organized as follows. We first review previous studies on chatbots. We then identify the features of chatbots following the concepts of affordance and disaffordance upon which the research model is based. We then describe our methodology, including data collection, measures, data analysis, and results. Finally, implications for theory and practice, limitations, and suggestions for future research are discussed.

2. Literature review and theoretical background

2.1. Related studies on chatbots

There are two main streams in the existing literature on chatbots. One stream of chatbot research is predominantly focused on chatbot design and feature optimization (Dale, 2016; Gnewuch, Morana, & Maedche, 2017; Landis, 2014; Shah, Warwick, Vallverdú, & Wu, 2016; Shawar & Atwell, 2007; Xu, Liu, Guo, Sinha, & Akkiraju, 2017). However, most of the discussions concerning chatbot design and optimization revolve around psychology, linguistics, computer science, and engineering; there is an evident lack of academic research on chatbots from a business perspective. Therefore, it is meaningful to consider business practices to provide insight into which chatbot design features can help companies improve performance.
In another stream of present work, scholars examine user behavior and experiences with chatbots (Brandtzaeg & Følstad, 2017; Hill, Ford, & Farreras, 2015; Jain, Kumar, Kota, & Patel, 2018; Mimoun, Poncin, & Garnier, 2012; Schuetzler, Grimes, Giboney, & Buckman, 2014). When human–chatbot communication was compared to human–human communication, there were notable differences in the content and level of quality of such interactions. In human–chatbot communication, people used a greater number of messages, but the average length of the messages was shorter than in human–human communications. The employed vocabulary was more restricted overall, yet there was an increase in the amount of profanity used (Hill et al., 2015).

In addition, the existing literature has also studied how customers perceive the use of chatbots. Those studies are conducted and analyzed using a technological or anthropomorphic lens (Murtarelli et al., 2021). In particular, the technological lens deals with functionalities for effective chatbots. For example, Brandtzaeg and Følstad (2017) concluded that productivity is the top motivation for customers to use chatbots. Chung, Ko, Joung, and Kim (2020) found that chatbots' marketing efforts (e.g., customization and problem solving) can improve communication accuracy and credibility, which in turn increase customer satisfaction. Balakrishnan and Dwivedi (2021b) also demonstrated how human-to-machine interaction could increase cognitive absorption. More recently, Lalicic and Weismayer (2021) identified personalization, convenience, ubiquity, and super-functionality as the four main reasons for chatbot adoption, and usage barrier, technology anxiety, privacy concern, and need for personal interaction as the four main reasons against it. These studies can help identify effective design principles and appropriate functionalities for the development of chatbots, thus leading to the adoption and use of chatbots.
Alternatively, the anthropomorphic lens emphasizes the social-actor role that chatbots play when interacting with customers (Balakrishnan & Dwivedi, 2021a; Murtarelli et al., 2021). Users (e.g., customers) may favor social features (e.g., human likeness) that enable chatbots to provide humanized conversations and help improve social relationships between customers and firms (Sowa, Przegalinska, & Ciechanowski, 2021). For example, Brandtzaeg and Følstad (2017) reported that entertainment was the second most frequent motivation for using chatbots, a response given by 20% of participants, and that it is necessary that chatbots have a sense of humor to provide a pleasant user experience. They argued it is crucial for chatbots to be "fun" even if the primary purpose of chatbot technology is to enhance productivity. This is consistent with other findings about users' first experiences conversing with chatbots (Jain et al., 2018). Additionally, participants reported heightened enthusiasm pertaining to the novelty of chatbots and the use of chatbots for social and relational purposes. More recently, Pizzi, Scarpi, and Pantano (2021) found that customers show a lower level of psychological reactance when using anthropomorphic chatbots, especially when the customers themselves can choose to initiate conversation. Along this stream, Balakrishnan, Dwivedi, Hughes, and Boy (2021) found that psychological commitment (e.g., regret avoidance) can increase resistance toward the adoption of AI voice assistants. Furthermore, Roy and Naidoo (2021) determined that the effectiveness of human-like conversation styles depends on consumers' time orientation: while present-oriented consumers prefer warm styles, future-oriented consumers prefer competent styles.

Recent literature has also begun to examine chatbot adoption among employees.
For example, Brachten, Kissmer, and Stieglitz (2021) indicated both intrinsic and extrinsic motivations increase employees' intention to use Enterprise Bots. Regardless, the most prevalent and familiar use of chatbots is in the context of customer service. Although consumers appreciate the consistent availability and "responsiveness" of technological auto-service (Meuter, Bitner, Ostrom, & Brown, 2005), they also desire the personalized attention provided by traditional human–human communication (Scherer, Wünderlich, & Wangenheim, 2015). Chatbots have the potential to combine the best of both. For example, Chung et al. (2020) demonstrated that luxury brands can and do deliver personalized care to their customers by using chatbots instead of traditional human–human interactions. Shumanov and Johnson (2021) additionally found that matching the consumer's personality with a congruent chatbot personality can encourage the consumer to interact more with the chatbot and bolster purchasing outcomes. Additionally, Sheehan, Jin, and Gottlieb (2020) observed that when a chatbot is able to effectively resolve miscommunication by seeking clarification, it is perceived as being qualitatively identical to a chatbot that avoids errors altogether, and that an anthropomorphic chatbot may satisfy customers who demand more humanized interaction.

2.2. Technology affordance

The literature has applied the concept of affordance to improve understanding of the role of technology (in particular, modern technologies) in various contexts (Grgecic, Holten, & Rosenkranz, 2015; Markus & Silver, 2008). This leads to the term technology affordance, which refers to "the mutuality of actor intentions and technology capabilities that provide the potential for a particular action" (Majchrzak, Faraj, Kane, & Azad, 2013, p. 39). Thus, technology affordance arises as individuals perceive how features of certain technologies can be used to support their goals (Grgecic et al., 2015). Because individuals can have different goals, they can interpret the features of a technology in different ways and perceive that the technology can afford their corresponding goal-oriented activities (Leonardi, 2013; Treem & Leonardi, 2013).

Technology affordance is a useful theoretical lens through which to investigate how advanced technologies are used by individuals in various phenomena such as cyberbullying and big data marketing. For example, Chan, Cheung, and Wong (2019) proposed four types of social media affordances relevant to cyberbullying. They showed that information retrieval affordance enhances the presence of suitable targets, whereas editability and association affordance increase the absence of capable guardians. De Luca, Herhausen, Troilo, and Rossi (2021) proposed three types of big data marketing affordances: customer behavior pattern spotting, real-time market responsiveness, and data-driven market ambidexterity.
They found that real-time market responsiveness and data-driven market ambidexterity affordances are positively related to service innovation. Thus, technology affordance can facilitate the investigation of how chatbots can be used for improving customer services. In other words, goal-oriented actors (e.g., marketing employees) interpret and learn about certain artifacts (e.g., chatbots) regarding the possibilities for activities provided by these artifacts (i.e., affordances) to support their goals (e.g., provide customer services).

Although other theoretical lenses have been used to understand the use of technology, technology affordance was chosen as the most appropriate perspective as it is consistent with the objective of this study. As a contrasting example, task-technology fit theory (TTF) has been developed to understand how task and technology characteristics jointly influence individual performance (Goodhue & Thompson, 1995). The basic argument is that technology has a positive effect on performance when it is a good fit for the task. Since we are centering this study on understanding how chatbots' technical features influence employees' general perceptions, TTF is less appropriate for our study. Nevertheless, TTF can be a useful theoretical foundation for future investigations into how certain technical features of chatbots can support employees in completing various tasks.

2.2.1. Affordances of chatbots

In the context of B2B marketing, with the materiality of chatbots, affordances can be jointly facilitated by social and technical aspects (Volkoff & Strong, 2013, 2017) and can be interpreted based on the goals of marketing employees. In this way, the affordances of chatbots emerge from the interactions between employees and chatbots such that employees use certain features of chatbots for achieving marketing goals via such activities as interacting with customers and providing services.
The rationale is that employees may use chatbots to maximize their opportunities to enhance customer interactions for supporting customer service. Accordingly, based upon the literature on technology affordance (Chan et al., 2019; De Luca et al., 2021) and chatbots (Schuetzler, Grimes, & Scott Giboney, 2020), we propose three types of chatbot affordance: automatability, personalization, and availability (see Table 1). First, chatbots can manage customer service without the intervention of marketing employees. Specifically, chatbots can automate routine tasks during customer services, such as answering common questions, because historical structured and unstructured data allows the integrated AI capabilities to produce logical responses (Paschen, Kietzmann, & Kietzmann, 2019); this reinforces automatability as the first affordance. Second, chatbots can adapt products/services to meet a customer's specific needs and preferences, thus improving their relations with customers (Chung et al., 2020). Chatbots can recommend different products/services based upon each customer's background and interests instead of suggesting the same products/services to every customer. In fact, Chung et al. (2020) proposed personalization as one of the main marketing efforts of chatbots. Therefore, we select personalization as the second type of chatbot affordance. Finally, chatbots can be available 24/7, and availability (i.e., ubiquity) is one of the main reasons that businesses adopt chatbots (Lalicic & Weismayer, 2021). Therefore, we propose availability as the third type of chatbot affordance.

2.2.2. Disaffordance

Notwithstanding that technology affordances have gained much attention (Leonardi, 2013; Strong et al., 2014; Treem & Leonardi, 2013), researchers have also begun to pay attention to disaffordance. In one of the first studies, Wittkower (2016) provided an initial understanding of disaffordance in a systematic and theorized manner.
In general, disaffordance means the lack of affordance in design, which suggests that the artifacts cannot provide the potential for behaviors associated with achieving specific outcomes. In other words, a technology cannot empower the users through their capacities for acting and achieving goals because of unsuccessful or nonexistent design features. Thus, disaffordance refers to artifacts failing to facilitate certain objectives of goal-oriented actors (Wittkower, 2016). Disaffordance can be caused by non-affordance or poor affordance. Non-affordance results in a scenario in which a certain affordance does not appear within the actors' experiences. It is possible that the artifacts lack a set of functions to support the actors' goals. On the other hand, poor affordance refers to the scenario where a certain intended affordance does not provide the actual affordance. The artifacts may not have clear and unobstructed interfaces to allow actors to achieve their goals, often as a consequence of poor design. Accordingly, we focus on the non-affordance aspects of disaffordance, and in this study, we propose three types of chatbot disaffordances: limited understanding, lack of emotion, and null decision-making (see Table 2). First, because chatbots conduct predictive analytics based upon data collected from customers (Murtarelli et al., 2021), they cannot understand a new scenario that they had not previously encountered, creating the disaffordance of limited understanding. Further, chatbots lack important human qualities, such as the ability to express emotions (e.g., empathy) and judgement making (Murtarelli et al., 2021). Therefore, lack of emotion and null decision-making are proposed as additional types of disaffordances.

Table 1. Chatbot affordances.

Automatability
  Definition: The extent to which marketing employees believe that chatbots offer the opportunity to respond to customers' questions automatically.
  How the affordance relates to marketing: This affordance allows chatbots to deal with simple questions automatically, so that marketing employees can focus on handling complex questions from customers.
  References: Paschen et al. (2019)

Personalization
  Definition: The extent to which marketing employees believe that chatbots offer the opportunity to provide personalized responses to customers.
  How the affordance relates to marketing: This affordance allows chatbots to support personalized responses and customized services, so that marketing employees can reduce their involvement during interactions with customers.
  References: Chung et al. (2020); Lalicic and Weismayer (2021)

Availability
  Definition: The extent to which marketing employees believe that chatbots offer the opportunity to provide customer services at any time of the day.
  How the affordance relates to marketing: This affordance allows chatbots to support customer services at any time, even when marketing employees are not available.
  References: Lalicic and Weismayer (2021)

2.3. Employees' psychological perceptions of chatbots

Castelo et al. (2019) suggested that people's psychological perceptions of a technology (i.e., algorithms) include two primary variables: the effectiveness of a technology and discomfort with using the technology. More specifically, technology effectiveness is a cognitive factor that captures people's cognitive beliefs about the technology's competency, and discomfort with use is an affective factor that captures people's subjective feeling of discomfort with using the technology. Discomfort represents an unhealthy state of psychological well-being (Albarracin & Shavitt, 2018), so individuals try to avoid objects or activities resulting in discomfort. The existing literature has applied several theoretical lenses to investigate both positive and negative aspects of technology.
For example, Lalicic and Weismayer (2021) used the behavioral reasoning theory to identify reasons for/against adopting chatbots. Here, "reasons for" and "reasons against" are used to justify users' future behaviors and represent their distinct cognitive routes leading to intention and support of use (Westaby, 2005). In the case of smart home devices, Wang, McGill, and Klobas (2018) adopted the net valence model to examine how various benefits and risks influence the adoption intention of different devices. Perceived benefits and risks each represented the users' overall perceptions of benefits and risks. These approaches are similar to Castelo et al. (2019), who captured individuals' positive and negative feelings toward technology. While behavioral reasoning theory and the net valence model only deal with individuals' cognitive beliefs, effectiveness and discomfort cover both cognitive beliefs and affective feelings. Therefore, we chose to use the approach from Castelo et al. (2019).

Following Castelo et al. (2019), we propose that employees' attitudes toward chatbots are influenced by their beliefs about the technology, including effectiveness and discomfort with use. Effectiveness represents the employees' positive cognitive perceptions toward chatbots and refers to the degree to which the latter are competent to provide customer services. Discomfort with using chatbots reflects employees' negative affective perceptions when they are uneasy about using chatbots.

3. Research model and hypotheses development

Following Castelo et al. (2019), we argue that chatbot effectiveness and discomfort with using chatbots can represent marketing employees' perceptions of the technology, which influence their attitudes. In addition, we illustrate how chatbot affordances and disaffordances are related to employees' perceptions.
Specifically, we propose that three types of chatbot affordances (i.e., automatability, personalization, and availability) are positively related to chatbot effectiveness and that three types of chatbot disaffordances (i.e., limited understanding, lack of emotion, and null decision-making) are positively related to discomfort with use. In the following, we discuss each hypothesis in more detail.

3.1. The impact of effectiveness on attitudes

Attitude refers to an individual's general summary affective feeling of (un)favorableness toward a behavior (Ajzen & Fishbein, 1980). An employee's attitude toward chatbots thus represents his or her overall subjective feeling about using chatbots within organizations. The literature suggests that attitudes can be affected by both technological features and social aspects, more specifically people's beliefs (e.g., Kwok & Gao, 2005; Lin, Featherman, Brooks, & Hajli, 2019). In particular, it is well understood that people's beliefs can form their attitudes (Ajzen & Fishbein, 1980). For example, employees' beliefs about information sharing have been shown to play a significant role in forming their attitudes toward sharing information within organizations (Kolekofski Jr & Heminger, 2003). Similarly, it is suggested that people's beliefs about information and interpersonal relationships are related to their attitudes about online behaviors such as online shopping (Lin, Wang, & Hajli, 2019). Likewise, people's beliefs in the effectiveness of an algorithm have been found to be positively associated with their psychological states regarding their use of the algorithm, that is, their reliance on the algorithm (Castelo et al., 2019). In our study, chatbot effectiveness captures employees' beliefs about the performance of chatbots based on their own experiences. When employees believe that chatbots perform well, they are likely to favor using them.
Therefore, it is safe to expect that a higher level of effectiveness leads to a more positive attitude toward using chatbots within organizations.

H1. Chatbot effectiveness is positively related to attitudes toward chatbots.

3.2. The impact of discomfort with using chatbots on attitudes

On the other hand, we argue that marketing employees' discomfort with using chatbots can have a negative effect on their attitudes. Whereas chatbot effectiveness represents employees' positive cognitive perceptions of chatbots, discomfort with using chatbots reflects their negative affective perceptions. Cognitive dissonance theory identifies discomfort as a component of dissonance (Festinger, 1957; Hinojosa, Gardner, Walker, Cogliser, & Gullifor, 2017). Specifically, discomfort can motivate or drive the individual's attitude change process to reduce dissonance (Fazio & Cooper, 1983).

Table 2
Chatbot disaffordances.

Disaffordance | Definition | How the disaffordance relates to marketing | Reference
Limited understanding | The extent to which marketing employees believe that chatbots do NOT offer the opportunity to understand new questions from customers. | This disaffordance does not allow chatbots to deal with new contexts, so marketing employees need to be involved in these new contexts. | Murtarelli et al. (2021); Paschen et al. (2019)
Lack of emotion | The extent to which marketing employees believe that chatbots do NOT offer the opportunity to express emotions during interactions with customers. | This disaffordance does not allow chatbots to express emotions, so marketing employees may need to deal with customers' emotions (e.g., to provide emotional support). | Murtarelli et al. (2021)
Null decision-making | The extent to which marketing employees believe that chatbots do NOT offer the opportunity to make decisions. | This disaffordance does not allow chatbots to make decisions, so marketing employees need to be involved when certain marketing decisions need to be made. | Murtarelli et al. (2021)

In our study, discomfort with using
chatbots represents employees' mental uneasiness about using chatbots. Employees may hold neutral attitudes toward chatbots before adopting them. However, if and when employees do not feel comfortable with chatbots after using them, their negative affective feelings become inconsistent with their previous attitudes toward chatbots. Such dissonance then changes their attitudes (Hinojosa et al., 2017), and employees probably develop a negative attitude toward them. As a result, the employees' discomfort may make them reluctant to use the technology. Castelo et al. (2019) also argued that discomfort is negatively related to reliance on algorithms. Therefore, we hypothesize that:

H2. Discomfort with using chatbots is negatively related to attitudes toward chatbots.

3.3. The impact of technology affordances on effectiveness

The literature suggests that affordances can affect people's beliefs and psychological states in a variety of contexts (e.g., Karahanna, Xu, Xu, & Zhang, 2018; Leidner, Gonzalez, & Koch, 2018). More specifically, in the context of social media, Karahanna et al. (2018) summarized a variety of social media affordances and proposed a link between social media affordances and people's psychological needs. In the context of enterprise social media, Leidner et al. (2018) provided an enhanced understanding of how the affordances of enterprise social media can result in various outcomes such as stress and a sense of social support. Thus, it is clear that affordances can affect people's beliefs. In the context of chatbots, automatability offers the possibility of answering customers and interacting with them automatically. This affordance enables companies to serve customers via automatically generated responses and to provide quick responses based on customer requests (Xu et al., 2017).
Taking advantage of this affordance, employees may reduce their workloads because they do not have to answer as many phone calls from customers. In addition, with this affordance, employees have more opportunities to focus on other tasks and can thus improve their performance. In this way, employees may believe that chatbots can better satisfy their needs for performing the job and providing customer services with less effort, thus leading to a sense of effectiveness.

H3a. Automatability is positively related to chatbot effectiveness.

Personalization offers the opportunity to recall the history of conversations with customers, which can enable chatbots to produce personalized responses (e.g., Gomez, 2018; Ritter, Cherry, & Dolan, 2011). In this way, companies can provide dedicated customer services because personalization offers the ability to respond to individual customers' requests and needs (Xu et al., 2017). With this affordance, chatbots enable companies to facilitate favorable interactions between customers and businesses through the possibility of building conversation skills (Schuetzler et al., 2020). As a result, employees have confidence in providing customer services through the use of chatbots because of the affordance of personalization. Therefore, a higher level of perceived personalization is likely to be related to a higher level of chatbot effectiveness.

H3b. Personalization is positively related to chatbot effectiveness.

The affordance of availability enables companies to be accessible to customers and respond to their requests at any time. This affordance offers customers the possibility of interacting with businesses and getting help anytime and anywhere.
With this affordance, chatbots offer the opportunity for businesses and customers to communicate conveniently (Putri, Meidia, & Gunawan, 2019), thus supporting customer needs and improving customer service (Pizzi et al., 2021). In addition, the affordance of availability can build customers' satisfaction because it can maximize the service quality provided by companies (Ashfaq, Yun, Yu, & Loureiro, 2020). All these factors can increase the likelihood that employees will believe that chatbots can be effective for improving customer services. Therefore, employees' perceptions of the effectiveness of chatbots can increase as the affordance of availability increases.

H3c. Availability is positively related to chatbot effectiveness.

3.4. The impact of technology disaffordances on discomfort with using chatbots

In our study we propose three types of chatbot disaffordances leading to discomfort. Limited understanding is defined as the extent to which marketing employees believe that chatbots do not offer the opportunity to understand new questions from customers. Because the AI of chatbots is developed based upon historical conversations with customers, chatbots cannot handle completely new queries from customers (Gomez, 2018). In these situations, customers can feel frustrated because they do not get the responses they need, and sales may be lost. As a result, marketing employees may not feel comfortable letting chatbots completely handle interactions with customers. Therefore, we hypothesize that:

H4a. Limited understanding is positively related to discomfort with using chatbots.

In this context, the lack of emotion refers to the extent to which marketing employees believe that chatbots do not offer the opportunity to express emotions during interactions with customers. Unlike people, chatbots have no emotions.
Although chatbots can be taught to simulate certain emotions (e.g., empathy) based upon certain messages (Aivo, 2019), marketing employees can understand customer emotions better and respond more effectively. Customers can easily identify messages created by chatbots and often appreciate revisions by marketing employees that respond to their emotions (CoSource, 2018). Often this is essential to keeping interactions going in the right direction and to providing good service. Therefore, marketing employees may perceive that they need to maintain a certain level of involvement to effectively deal with customers' emotions. Thus, we posit:

H4b. The lack of emotion is positively related to discomfort with using chatbots.

Last, null decision-making refers to the extent to which marketing employees believe that chatbots do not offer the opportunity to make decisions. Chatbots cannot make decisions, which can cause problems. For example, Microsoft launched a chatbot on Twitter; however, user-generated content turned it into a racist account in less than 24 h (Gomez, 2018). The chatbot did not have the decision-making capacity to prevent this outcome. Therefore, marketing employees may not feel comfortable letting chatbots run by themselves and may feel they must be involved when decisions need to be made. Therefore, we hypothesize that:

H4c. Null decision-making is positively related to discomfort with using chatbots.

To conclude, our research model is shown in Fig. 1.

4. Research methodology

4.1. Data collection and samples

We hired a survey company to recruit American participants by systematic sampling. The survey company maintains national panels and can access people with different backgrounds. Our study focuses on marketing employees who use chatbots to interact with their customers in B2B companies. The data were collected in January and February of 2021 through a survey link sent to participants.
During the data collection process, we recruited participants from different backgrounds (e.g., age, gender, education, firm size, industry) to increase
the generalizability of our findings. Participants qualified for our study if they 1) worked in the marketing department and used chatbots to provide customer services, and 2) worked in B2B companies. Qualified participants then proceeded to the main survey and completed the questionnaire. In total, we received 228 valid responses; the participants' demographic background is shown in Table 3. In our sample, 218 participants were full-time employees. On average, they had worked for their current companies for 8.08 years (SD: 5.14). Participants used a variety of chatbots such as Twilio, Aivo, and Bold360.

4.2. Measures

Measures are listed in the Appendix. All items were measured on a 7-point Likert scale. Items pertaining to chatbot effectiveness and discomfort with using chatbots were adapted from Castelo et al. (2019); items pertaining to attitudes toward chatbots were adapted from Klobas, McGill, and Wang (2019). Items referring to chatbot affordances (namely, automatability, personalization, and availability) and chatbot disaffordances (namely, limited understanding, lack of emotion, and null decision-making) were newly developed in accordance with the work of Moore and Benbasat (1991). First, we defined each construct by reviewing the relevant literature (Tables 1 and 2). Second, we developed measures based on the definitions, and two experienced researchers assessed them to ensure the face and content validity of all items. Lastly, we conducted a card-sorting exercise with a group of students. The overall placement ratio of items within the target dimensions was acceptable, indicating that the items were sorted into the intended variables.

4.3. Data analysis and results

We designed our survey to exclude personally identifying information and to reduce evaluation apprehension.
All the variables were collected in one survey, so both procedural and statistical approaches were adopted to alleviate the impact of common method bias (CMB) (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). Two statistical analyses were conducted. First, Harman's single-factor test was conducted. The results revealed six factors, with the largest factor explaining 25.24% of the total variance. Second, a common method factor that included all items was created (Podsakoff et al., 2003). We then calculated the variance explained by the focal factor and by the method factor for each item. The average variance explained by the focal factors was 0.77, while the average variance explained by the method factor was 0.005. The ratio was approximately 145:1, and all method factor loadings were nonsignificant. Therefore, CMB was unlikely to be a serious issue. Mplus (Muthén & Muthén, 2017), a covariance-based structural equation modeling program, was then used to validate our items. Confirmatory factor analysis was conducted, and the model had a good fit (χ2 (341) = 576.81, CFI = 0.94, SRMR = 0.05). Furthermore, all items loaded significantly on their focal constructs, and all loadings were above 0.60 (Table 4). Cronbach's alpha and composite reliabilities (CRs) were above 0.70, and the average variance extracted (AVE) was above 0.50 (Table 4). These results support the convergent validity of our items. Finally, as shown in Table 5, the correlations between the constructs were less than 0.85 (Brown, 2015), and the square root of each factor's AVE exceeded all correlations between that factor and any other construct, supporting discriminant validity. Overall, our items showed good psychometric properties. We then tested our hypotheses (Fig. 2).
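For readers checking the convergent-validity figures, CR and AVE can be computed directly from standardized loadings using the standard Fornell–Larcker formulas. A minimal sketch in Python, using the automatability loadings reported in Table 4 (the function names are our own, not from the paper):

```python
def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each item's error variance is 1 - loading^2 for standardized loadings
    s = sum(loadings) ** 2
    err = sum(1 - l ** 2 for l in loadings)
    return s / (s + err)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

aut = [0.78, 0.72, 0.69]  # Automatability loadings from Table 4
print(round(composite_reliability(aut), 2))       # 0.77, matching Table 4
print(round(average_variance_extracted(aut), 2))  # 0.53, matching Table 4
```

Applying the same two functions to the other constructs' loadings reproduces the remaining CR and AVE columns of Table 4.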
[Fig. 1. Research model. The figure depicts chatbot affordances (automatability, personalization, availability) linked to chatbot effectiveness via H3a–H3c; chatbot disaffordances (limited understanding, lack of emotion, null decision-making) linked to discomfort with using chatbots via H4a–H4c; and chatbot effectiveness (H1) and discomfort with using chatbots (H2) linked to attitudes toward chatbots.]

H1, stating that chatbot
effectiveness is positively related to attitudes toward chatbots, is supported (β = 0.65, p < .001). H2 states that discomfort with using chatbots is negatively related to attitudes toward chatbots. This hypothesis is supported (β = −0.16, p < .01). H3a argues that automatability is positively related to chatbot effectiveness. This hypothesis is also supported (β = 0.29, p < .001). H3b proposes that personalization is positively related to chatbot effectiveness. This hypothesis is also supported (β = 0.49, p < .001). H3c, arguing that availability is positively related to chatbot effectiveness, is not supported (β = 0.10, p = .21). Limited understanding is not significantly related to discomfort with using chatbots, so H4a is not supported (β = 0.07, p = .23). Lack of emotion is not significantly related to discomfort with using chatbots either, so H4b is not supported (β = −0.05, p = .33). H4c proposes that null decision-making is positively related to discomfort with using chatbots. This hypothesis is supported (β = 0.56, p < .001). The three dimensions of chatbot affordance explain 49.07% of the variance in chatbot effectiveness, whereas the three dimensions of chatbot disaffordance explain 32.74% of the variance in discomfort with using chatbots. Chatbot effectiveness and discomfort with using chatbots together explain 47.66% of the variance in attitudes toward chatbots. The effects of several control variables were also tested, including age (β = 0.03, p = .65), gender (β = 0.03, p = .60), education (β = 0.09, p = .14), firm size (β = 0.05, p = .37), and tenure (β = 0.04, p = .53); none were significant. Overall, these results provide strong support for our model.

4.4. Post-hoc analysis

The literature has shown that differently sized companies typically follow different managerial approaches that lead to different outcomes (Qiu & Yang, 2018; Visentin & Scarpi, 2012).
For example, it is possible that smaller sellers may develop closer and more personal relationships with buyers (Visentin & Scarpi, 2012). Therefore, it is possible that marketing employees from firms of various sizes follow different approaches to the use of chatbots for supporting customer services. In this post-hoc analysis, we examine the moderating effect of firm size by comparing the path coefficients between small firms and medium-to-large firms following Keil et al. (2000). According to the U.S. Small Business Administration, small companies usually have no more than 500 employees.1 Therefore, we classified firms with 500 employees or fewer as small firms, and those with more than 500 employees as medium-to-large firms. We then divided our sample into these two subsamples (i.e., small firms and medium-to-large firms) accordingly. The results (Table 6) show that the effects of automatability and null decision-making are stronger for small firms, while personalization has a larger effect for medium-to-large firms.

5. Discussion

In this study we aimed to examine how different types of chatbot affordances and disaffordances influence employees' psychological perceptions of chatbot effectiveness and discomfort with using chatbots, which in turn affect their attitudes toward chatbots in the B2B context. Our results show that both chatbot effectiveness (H1) and discomfort with using chatbots (H2) significantly affect employees' attitudes toward chatbots. These results are consistent with Castelo et al. (2019), who showed that both effectiveness and discomfort significantly influence users' reliance on algorithms. Our results further reveal that automatability (H3a) and personalization (H3b) can enhance chatbot effectiveness. These results highlight the importance of automatability in driving chatbot effectiveness, as suggested by Paschen et al. (2019). The results are also consistent with Chung et al. (2020), who highlighted personalization as a key marketing feature of chatbots.
Table 3
Participants' demographic background (N = 228).

Gender: Female 36.4%; Male 63.6%
Age: 18–24 6.6%; 25–34 39.0%; 35–44 39.9%; 45–54 12.3%; 55 or older 2.2%
Education: High school or below 11.8%; Some college education or bachelor's degree 58.3%; Graduate degree 29.8%
Company size: 1–100 employees 11.4%; 101–200 9.6%; 201–500 18.4%; 501–1000 22.4%; 1001–3000 18.4%; more than 3000 19.7%
Industry: Health care 5.7%; Manufacturing 15.8%; Education 4.8%; Higher education 1.3%; Banking/Finance 16.2%; Insurance 3.1%; Wholesale and Distribution 3.9%; Transportation 5.7%; Government 2.2%; Retail 11.4%; Hospitality 1.3%; Other 28.5%

Table 4
Item descriptive statistics. For each item, values are given as (mean, SD, loading); Alpha, CR, and AVE are reported per construct.

Automatability (Alpha = 0.77, CR = 0.77, AVE = 0.53): AUT1 (5.72, 1.21, 0.78); AUT2 (5.76, 1.19, 0.72); AUT3 (5.63, 1.25, 0.69)
Personalization (Alpha = 0.89, CR = 0.89, AVE = 0.66): PER1 (5.00, 1.57, 0.81); PER2 (5.10, 1.45, 0.78); PER3 (4.76, 1.78, 0.84); PER4 (5.18, 1.61, 0.83)
Availability (Alpha = 0.85, CR = 0.85, AVE = 0.66): AVA1 (5.89, 1.31, 0.84); AVA2 (5.90, 1.23, 0.81); AVA3 (5.96, 1.19, 0.79)
Limited Understanding (Alpha = 0.88, CR = 0.88, AVE = 0.70): LU1 (4.76, 1.76, 0.80); LU2 (4.73, 1.76, 0.87); LU3 (4.65, 1.83, 0.85)
Lack of Emotion (Alpha = 0.87, CR = 0.87, AVE = 0.69): LE1 (4.90, 1.75, 0.81); LE2 (5.00, 1.78, 0.83); LE3 (4.91, 1.91, 0.84)
Null Decision-Making (Alpha = 0.88, CR = 0.88, AVE = 0.72): ND1 (4.20, 1.85, 0.72); ND2 (3.97, 1.87, 0.90); ND3 (4.09, 1.89, 0.90)
Chatbot Effectiveness (Alpha = 0.76, CR = 0.77, AVE = 0.53): EFF1 (5.14, 1.52, 0.72); EFF2 (5.56, 1.22, 0.65); EFF3 (5.46, 1.26, 0.81)
Discomfort with Using Chatbots (Alpha = 0.90, CR = 0.90, AVE = 0.75): DC1 (3.58, 1.90, 0.83); DC2 (3.33, 1.93, 0.86); DC3 (3.16, 1.91, 0.91)
Attitudes Toward Chatbots (Alpha = 0.87, CR = 0.87, AVE = 0.63): ATT1 (5.58, 1.19, 0.85); ATT2 (5.47, 1.30, 0.79); ATT3 (5.43, 1.33, 0.73); ATT4 (5.64, 1.23, 0.80)

1 https://www.sba.gov/document/support–table-size-standards

The effect of automatability is stronger for small firms, as demonstrated by the post-hoc analysis. It is possible that small firms have fewer marketing employees, who need chatbots' help to answer customers' standard questions. On the other hand, the post-hoc analysis shows that personalization has a stronger effect for medium-to-large firms. Since
medium-to-large firms deal with more clients and interactions can be less personal (Visentin & Scarpi, 2012), they probably prefer chatbots that support personalized interactions with customers. We also find that null decision-making (H4c) can result in discomfort with using chatbots. This finding further emphasizes that employees feel concerned that chatbots lack judgement making, an important human quality (Murtarelli et al., 2021). This effect is stronger for small firms, as seen in the post-hoc analysis. Since smaller firms deal with fewer clients and manage less customer data, they may lack deeper knowledge regarding customer profiles. Therefore, marketing employees may desire tools which can help them make decisions, and thus feel more concerned about null decision-making. It is worth noting that availability is not significantly related to chatbot effectiveness, which is not consistent with previous arguments in the literature (Lalicic & Weismayer, 2021). One possibility is that, because chatbots lack decision-making capabilities (i.e., null decision-making), marketing employees still need to maintain some level of involvement and supervision when dealing with customer inquiries and fulfilling customer needs. Therefore, even if a chatbot is available 24/7, employees may feel that this availability is ineffective during the hours they themselves are not also available.

Table 5
Correlations between constructs, with the square root of each construct's AVE on the diagonal.

Construct | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
1 Automatability | 0.73
2 Personalization | 0.41 | 0.81
3 Availability | 0.58 | 0.12 | 0.81
4 Limited Understanding | 0.15 | 0.04 | 0.13 | 0.84
5 Lack of Emotion | 0.13 | −0.08 | 0.16 | 0.40 | 0.83
6 Null Decision-Making | −0.05 | −0.07 | 0.01 | 0.41 | 0.39 | 0.85
7 Chatbot Effectiveness | 0.54 | 0.62 | 0.33 | 0.03 | 0.03 | −0.06 | 0.73
8 Discomfort with Using Chatbots | −0.10 | −0.04 | −0.02 | 0.28 | 0.20 | 0.57 | −0.13 | 0.86
9 Attitudes Toward Chatbots | 0.55 | 0.56 | 0.23 | 0.05 | −0.11 | −0.16 | 0.67 | −0.24 | 0.79

[Fig. 2. Results of model testing: H1 = .65***, H2 = −.16**, H3a = .29***, H3b = .49***, H3c n.s., H4a n.s., H4b n.s., H4c = .56***; R² = .49 for chatbot effectiveness, .33 for discomfort with using chatbots, and .48 for attitudes toward chatbots. ** p < .01; *** p < .001.]

Table 6
The moderating effect of firm size.

Path | Small firms (N = 90) | Medium-to-large firms (N = 138) | Sig. diff.?
H1: Effectiveness → Attitudes toward Chatbots | 0.59*** | 0.70*** | <***
H2: Discomfort with Using Chatbots → Attitudes toward Chatbots | −0.22* | −0.13* | <***
H3a: Automatability → Chatbot Effectiveness | 0.33* | 0.28** | >**
H3b: Personalization → Chatbot Effectiveness | 0.45*** | 0.54*** | <***
H3c: Availability → Chatbot Effectiveness | 0.08 | 0.07 | bns
H4a: Limited Understanding → Discomfort with Using Chatbots | 0.14 | 0.05 | bns
H4b: Lack of Emotion → Discomfort with Using Chatbots | −0.13 | 0.01 | bns
H4c: Null Decision-Making → Discomfort with Using Chatbots | 0.63*** | 0.51*** | >***
* p < .05; ** p < .01; *** p < .001; Sig. diff. = significant difference; bns = both are non-significant.

An alternative possibility is that
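The subgroup comparisons in Table 6 rely on the Keil et al. (2000) procedure for testing whether a path coefficient differs across two subsamples. A minimal sketch of the pooled-standard-error t-test; the standard errors in the example call are illustrative assumptions, not figures reported in the paper:

```python
import math

def compare_paths(b1, se1, n1, b2, se2, n2):
    """Keil et al. (2000) t-test for comparing a path coefficient
    estimated separately in two subsamples.

    b1, b2   path coefficients in subsamples 1 and 2
    se1, se2 their standard errors (assumed values below)
    n1, n2   subsample sizes
    """
    # Pool the two standard errors, weighting by subsample size
    pooled = math.sqrt(((n1 - 1) * se1 ** 2 + (n2 - 1) * se2 ** 2)
                       / (n1 + n2 - 2))
    # t-statistic for the difference between the two coefficients
    t = (b1 - b2) / (pooled * math.sqrt(1 / n1 + 1 / n2))
    df = n1 + n2 - 2  # degrees of freedom
    return t, df

# H4c coefficients from Table 6 (0.63 vs. 0.51); the SEs are assumed.
t, df = compare_paths(0.63, 0.09, 90, 0.51, 0.08, 138)
```

With the study's subsample sizes (90 and 138), the test has 226 degrees of freedom; the sign of t indicates which group's coefficient is larger.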
employees expect chatbots to function 24/7 anyway and therefore consider constant availability the minimum requirement. It is also worth noting that limited understanding and lack of emotion do not have significant impacts on discomfort with using chatbots. Marketing employees have access to information about the current progress of chatbot development and its limitations. Thus, they do not feel too concerned about these two disaffordances, since they can always intervene when chatbots are interacting with customers. In the following, we summarize our implications for theory and practice.

5.1. Implications for theory

Our study makes two main theoretical contributions. First, we extend technology affordance theory to the context of chatbots and identify three relevant types of chatbot affordances: automatability, personalization, and availability. Chatbots have been actively used to support customer services in many companies. As customers move from human-to-human interactions to human-to-technology interactions, it is important to understand which aspects of chatbots can facilitate effective interactions with customers in this new setting. Our conceptualization can thus shed light on the question of how technology affordances can be facilitated based on the unique features of a technology. This can also help researchers understand how users may use a technology to achieve their goals. Our results show that availability does not have any impact on chatbot effectiveness. On the other hand, automatability and personalization can enhance marketing employees' perceptions of chatbot effectiveness. Furthermore, the effect of personalization is greater than that of automatability, indicating that marketing employees focus more on how to use chatbots to provide customized services to their customers.
These findings provide further evidence that the affordance theory can play a key role in explaining how a technology can build its effectiveness and generate outcomes (e.g., Leidner et al., 2018; Lin & Kishore, 2021; Sæbø, Federici, & Braccini, 2020). By adopting the technology affordance theory, our study provides empirical evidence to develop and validate the scales for chatbot affordances. It further enriches the technology affordance theory and extends it to the most recently developed technologies. Thus, we contribute to the literature by identifying the technology affordances of the emerging chatbot technology and examining their impacts. Although the concepts of affordance in general and technology affordances in particular have gained increasing attention in the literature (e.g., Leonardi, 2013; Strong et al., 2014; Treem & Leonardi, 2013), few scholars have attempted to explore disaffordance, which is considered an underexplored concept (Wittkower, 2016). Thus, our second contribution is to provide some understanding of disaffordance theory by contextualizing technology disaffordance and empirically testing its impacts in the context of chatbots. More specifically, we identify three types of chatbot disaffordances (limited understanding, lack of emotion, and null decision-making) and then investigate their effects on discomfort with using chatbots. Our analysis shows that limited understanding and lack of emotion do not significantly influence discomfort. On the other hand, null decision-making can increase discomfort. These findings indicate that technology disaffordance can serve as a theoretical lens for revealing what causes users' negative perceptions of a technology. Thus, our study contributes to the literature by offering a novel perspective on the underexamined effects of technology disaffordance as a potential disabler of chatbot adoption.
Additionally, we confirm the roles of chatbot effectiveness and discomfort with using chatbots in shaping employees' psychological perceptions of chatbots (e.g., Castelo et al., 2019). These findings offer empirical evidence on how companies may effectively implement chatbots based on a growing understanding of employees' psychological states. Our post-hoc analysis also provides evidence that firm size moderates the respective effects of chatbot affordances and disaffordances on chatbot effectiveness and discomfort with using chatbots. Specifically, automatability more strongly affects chatbot effectiveness in small firms, while personalization more strongly affects chatbot effectiveness in medium-to-large firms. In addition, null decision-making more strongly affects discomfort with using chatbots in small firms. These findings offer additional insights into understanding how affordances and disaffordances influence employees' perceptions of chatbots across firms of different sizes. This study thereby contributes to the literature on psychological perceptions of technology by extending it to chatbots and empirically testing the moderating effects of firm size.

5.2. Implications for practice

Our study also has important practical implications. Our results indicate that chatbot effectiveness is essential for marketing employees to form more positive attitudes toward the use of chatbots in their organization. Companies using chatbots to support interactions with their customers need to enhance chatbot effectiveness, so that their employees can perceive chatbots more favorably and become willing to adopt them. In addition, our post-hoc analysis suggests that effectiveness matters more for medium-to-large firms than for small firms.
Medium-to-large firms typically have more resources to adopt advanced information technology tools to support their customer services, which mitigates cost concerns and makes the effectiveness of the tools the key factor in building employees' attitudes. As such, medium-to-large firms need to pay additional attention to the effectiveness of chatbots when adopting them. Our study indicates that automatability and personalization are important types of chatbot affordances that enhance chatbot effectiveness. Therefore, companies need to collaborate with chatbot vendors to ensure that these two types of chatbot affordances are supported. Moreover, our post-hoc analysis indicates which affordance should be emphasized for small versus medium-to-large firms. Regarding automatability, companies could provide frequently asked questions and answers to vendors so that they can configure chatbots to answer these questions automatically. Further, our post-hoc analysis shows a greater effect of automatability for small firms. Since small firms likely have fewer marketing employees to provide customer services, automatability is an important affordance because it lets chatbots resolve customers' routine questions and concerns. Regarding personalization, data from historical transactions and customer profiles are also needed to let chatbots run analyses and provide customized responses to individual customers. Further, our post-hoc analysis reveals that the effect of personalization is larger for medium-to-large firms. Personalization is a useful tool for providing tailored services to the larger customer base of these firms. The effect of availability is not significant, suggesting that marketing employees do not perceive availability as an important affordance for improving chatbot effectiveness.
Therefore, availability may be less emphasized during chatbot adoption, especially when a certain level of involvement from marketing employees is needed for customized interactions.

On the other hand, firms need to reduce employee discomfort with using chatbots. Our results show that discomfort with using chatbots is negatively related to employees' attitudes toward chatbots. Further, our results highlight null decision-making as a vital type of disaffordance leading to discomfort. In other words, employees do feel uncomfortable with chatbots' inability to make decisions. This can be an important direction for the future development of chatbots. Although important decisions are still made by marketing employees and managers, chatbots may be developed to make simple decisions related to low-value and low-risk purchases (e.g., office supplies). Additionally, our post-hoc analysis shows higher impacts of null decision-making for small firms. Smaller firms may not have enough customers to develop detailed customer profiles. Therefore, their marketing employees need to rely more on other tools, such as chatbots, to help them make decisions, and managers
from small firms need to pay more attention to this feature during chatbot adoption. Employees seem to feel less concerned about a chatbot's limited understanding and lack of emotion. Nevertheless, because these two types of chatbot disaffordance are still positively related to discomfort, vendors also need to make an effort to resolve them in the long term.

6. Conclusion

Chatbots have been increasingly used in B2B marketing. In our study we examined how marketing employees develop their attitudes toward the use of chatbots within organizations. We argued that chatbot affordances and disaffordances influence employees' perceptions (i.e., effectiveness of and discomfort with using chatbots), which in turn affect their attitudes. Survey data were collected from 228 B2B marketing employees, and the results provided strong support for our model.

Our study has a few limitations. First, we recruited our sample through a survey company. Although our participants are B2B marketing employees from a variety of backgrounds, our sample may still be biased. Second, we focused on the context of B2B, and the results may not hold in the context of B2C. For example, B2C deals with individual consumers, and it is possible that certain types of affordances (e.g., availability) and disaffordances (e.g., lack of emotion) are more important in the context of B2C. Third, we focused on American employees. It is possible that employees from other cultures may emphasize different types of affordance and disaffordance.

Future studies can extend our research in several ways. First, additional types of chatbot affordance and disaffordance can be proposed in B2B as well as in other contexts.
Our study did not differentiate chatbots by type; while this choice increases the generalizability of our findings, it is still possible that certain types of chatbots (e.g., human-like versus not human-like) have specific affordances or disaffordances that are not captured in this study. The effects of chatbot affordances and disaffordances reported here may also vary across different types of chatbots. Second, future researchers can examine how different types of chatbot affordances and disaffordances influence marketing employees' work performance and well-being. Third, our study examined the moderating effect of firm size. Future studies can examine how other relevant moderators, such as individual characteristics and industry types (e.g., relationship-intensive industries versus non-intensive ones), moderate the relationships between chatbot affordances and disaffordances and employees' perceptions and adoption.

While we focus on chatbots to conceptualize different types of affordances and disaffordances, certain affordances and/or disaffordances proposed in this study can be relevant to other types of AI. Virtual reality (VR), for example, can simulate products (Courtney & Van Doren, 1996) and generate utilitarian and hedonic value (Pizzi, Scarpi, Pichierri, & Vannucci, 2019; Sung, Bae, Han, & Kwon, 2021). It is possible that the affordance of availability plays a more important role in the context of VR, as consumers no longer need to go to physical stores to view products (Pizzi et al., 2019). The disaffordance of null decision-making can be relevant for VR as well: although VR allows users to view 3-D virtual data, it cannot make decisions for users (Courtney & Van Doren, 1996). Moreover, VR has its own distinct capabilities, such as eliciting certain emotions from users (Shin, 2018).
Future studies are needed to determine how the affordances and/or disaffordances proposed in this study may influence employees' perceptions, and to develop affordances and disaffordances in other contexts of AI such as VR.

Acknowledgements

This study is supported by an internal grant from West Texas A&M University.

Appendix: Measures

Automatability (self-developed)
AUT1 Chatbots can automate answers to customers' most-asked questions.
AUT2 Chatbots can automate repeated tasks when interacting with customers.
AUT3 Chatbots can handle more questions from customers automatically.

Personalization (self-developed)
PER1 Chatbots can provide personalized experiences to customers.
PER2 Chatbots can deal with customers' specific needs.
PER3 Chatbots can take into account customers' unique circumstances.
PER4 Chatbots can deliver immediate one-on-one responses based upon exactly what customers demand.

Availability (self-developed)
AVA1 Chatbots can interact with customers 24 h a day and 7 days a week.
AVA2 Chatbots can interact with customers whenever they want to.
AVA3 Chatbots can interact with customers at any time of the day.

Limited understanding (self-developed)
LU1 Chatbots cannot understand a new context that was not previously taught during customer interactions.
LU2 Chatbots cannot understand new queries of customers not previously taught.
LU3 Chatbots cannot understand new questions of customers that were not asked previously.

Lack of emotion (self-developed)
LE1 Chatbots cannot express emotions during customer interactions.
LE2 Chatbots cannot create emotional connections with customers.
LE3 Chatbots cannot feel customers' emotions during customer interactions.

Null decision-making (self-developed)
ND1 Chatbots cannot make decisions during customer interactions.
ND2 Chatbots have poor decision-making capability during customer interactions.
ND3 Chatbots lack in decision-making during customer interactions.
Chatbot effectiveness (Castelo et al., 2019)
EFF1 I can see the benefits in chatbots that can deal with customers' requests better than humans.
EFF2 Chatbots that can deal with customers' requests could be useful.
EFF3 I believe chatbots can perform well during customer interactions.

Discomfort with using chatbots (Castelo et al., 2019)
DC1 Using chatbots to interact with customers makes me uncomfortable.
DC2 Using chatbots to interact with customers goes against how I believe computers should be used.
DC3 Using chatbots to interact with customers is unsettling.

Attitude toward chatbots (Klobas et al., 2019)
ATT1 Using chatbots is a (good/bad) idea.
ATT2 Using chatbots is a (wise/foolish) idea.
ATT3 Using chatbots is a (pleasant/unpleasant) idea.
ATT4 Using chatbots is a (positive/negative) idea.

References

Aivo. (2019). Advantages and disadvantages of chatbots you need to know. https://www.aivo.co/blog/advantages-and-disadvantages-of-chatbots. Accessed December 10, 2020.
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behaviour. Englewood Cliffs, NJ: Prentice-Hall.
Albarracin, D., & Shavitt, S. (2018). Attitudes and attitude change. Annual Review of Psychology, 69, 299–327.
Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189.
Ashfaq, M., Yun, J., Yu, S., & Loureiro, S. M. C. (2020). I, Chatbot: Modeling the determinants of users' satisfaction and continuance intention of AI-powered service agents. Telematics and Informatics, 54, Article 101473.
Balakrishnan, J., & Dwivedi, Y. K. (2021a). Conversational commerce: Entering the next stage of AI-powered digital assistants. Annals of Operations Research, 1–35. https://doi.org/10.1007/s10479-021-04049-5
Balakrishnan, J., & Dwivedi, Y. K. (2021b). Role of cognitive absorption in building user trust and experience. Psychology & Marketing, 38(4), 643–668.
Balakrishnan, J., Dwivedi, Y. K., Hughes, L., & Boy, F. (2021). Enablers and inhibitors of AI-powered voice assistants: A dual-factor approach by integrating the status quo bias and technology acceptance model. Information Systems Frontiers, 1–22. https://doi.org/10.1007/s10796-021-10203-y
Boomtown. (2019). Chatbot statistics: The 2018 state of chatbots. https://www.goboomtown.com/blog/chatbot-statistics-study.
Borges, A. F., Laurindo, F. J., Spínola, M. M., Gonçalves, R. F., & Mattos, C. A. (2020). The strategic use of artificial intelligence in the digital era: Systematic literature review and future research directions. International Journal of Information Management, 57, Article 102225. https://doi.org/10.1016/j.ijinfomgt.2020.102225
Brachten, F., Kissmer, T., & Stieglitz, S. (2021). The acceptance of chatbots in an enterprise context: A survey study. International Journal of Information Management, 60, Article 102375. https://doi.org/10.1016/j.ijinfomgt.2021.102375
Brandtzaeg, P. B., & Følstad, A. (2017). Why people use chatbots. Paper presented at the International Conference on Internet Science.
Brown, T. A. (2015). Confirmatory factor analysis for applied research. New York, NY: Guilford Publications.
Cao, G., Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2021). Understanding managers' attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation, 106, Article 102312. https://doi.org/10.1016/j.technovation.2021.102312
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825.
Chan, T. K., Cheung, C. M., & Wong, R. Y. (2019). Cyberbullying on social networking sites: The crime opportunity and affordance perspectives. Journal of Management Information Systems, 36(2), 574–609.
Chung, M., Ko, E., Joung, H., & Kim, S. J. (2020). Chatbot e-service and customer satisfaction regarding luxury brands. Journal of Business Research, 117, 587–595.
Collins, C., Dennehy, D., Conboy, K., & Mikalef, P. (2021). Artificial intelligence in information systems research: A systematic literature review and research agenda. International Journal of Information Management, 60, Article 102383. https://doi.org/10.1016/j.ijinfomgt.2021.102383
CoSource. (2018). All about chatbots: Pros, cons, and how they can help your business. https://cosource.us/2018/02/13/chatbots-pros-cons-can-help-business/. Accessed December 10, 2020.
Courtney, J. A., & Van Doren, D. C. (1996). Succeeding in the communiputer age: Technology and the marketing mix. Industrial Marketing Management, 25(1), 1–10.
Dale, R. (2016). The return of the chatbots. Natural Language Engineering, 22(5), 811–817.
De, A. (2018). A look at the future of chatbots in customer service. https://readwrite.com/2018/12/04/a-look-at-the-future-of-chatbots-in-customer-service/.
De Luca, L. M., Herhausen, D., Troilo, G., & Rossi, A. (2021). How and when do big data investments pay off? The role of marketing affordances and service innovation. Journal of the Academy of Marketing Science, 49, 790–810.
Dilmegani, C. (2021). 84 chatbot/conversational statistics: Market size, adoption. AI Multiple. https://research.aimultiple.com/chatbot-stats.
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., & Williams, M. D. (2021). Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, Article 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
Dwivedi, Y. K., Ismagilova, E., Hughes, D. L., Carlson, J., Filieri, R., Jacobson, J., … Wang, Y. (2021). Setting the future of digital and social media marketing research: Perspectives and research propositions. International Journal of Information Management, 59, Article 102168. https://doi.org/10.1016/j.ijinfomgt.2020.102168
Fazio, R. H., & Cooper, J. (1983). Arousal in the dissonance process. In Social psychophysiology: A sourcebook (pp. 122–152).
Festinger, L. (1957). A theory of cognitive dissonance (Vol. 2). Stanford University Press.
Gibson, J. J. (1977). A theory of affordances. In R. Shaw, & J. Bransford (Eds.), Perceiving, acting and knowing: Toward an ecological psychology (pp. 67–82). Hillsdale, NJ: Lawrence Erlbaum Associates.
Gnewuch, U., Morana, S., & Maedche, A. (2017). Towards designing cooperative and social conversational agents for customer service. Paper presented at the Thirty Eighth International Conference on Information Systems, South Korea.
Gomez, A. (2018). Chatbots: Advantages and disadvantages of these tools. https://www.ecommerce-nation.com/chatbots-advantages-and-disadvantages-of-these-tools/.
Goodhue, D. L., & Thompson, R. L. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213–236.
Grgecic, D., Holten, R., & Rosenkranz, C. (2015). The impact of functional affordances and symbolic expressions on the formation of beliefs. Journal of the Association for Information Systems, 16(7), 500–607.
Han, R., Lam, H. K., Zhan, Y., Wang, Y., Dwivedi, Y. K., & Tan, K. H. (2021). Artificial intelligence in business-to-business marketing: A bibliometric analysis of current research status, development and future directions. Industrial Management & Data Systems, 121(12), 2467–2497.
Hill, J., Ford, W. R., & Farreras, I. G. (2015). Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in Human Behavior, 49, 245–250.
Hinojosa, A. S., Gardner, W. L., Walker, H. J., Cogliser, C., & Gullifor, D. (2017). A review of cognitive dissonance theory in management research: Opportunities for further development. Journal of Management, 43(1), 170–199.
Hu, Q., Lu, Y., Pan, Z., Gong, Y., & Yang, Z. (2021). Can AI artifacts influence human cognition? The effects of artificial autonomy in intelligent personal assistants. International Journal of Information Management, 56, Article 102250.
Jain, M., Kumar, P., Kota, R., & Patel, S. N. (2018). Evaluating and informing the design of chatbots. Paper presented at the Proceedings of the 2018 Designing Interactive Systems Conference.
Johnston, M. (2020). 3 powerful ways chatbots are impacting B2B marketing. https://www.digital22.com/insights/powerful-ways-chatbots-are-impacting-b2b-marketing. Accessed December 10, 2020.
Juniper Research. (2021). Bank cost savings via chatbots to reach $7.3 billion by 2023, as automated customer experience evolves. https://www.juniperresearch.com/press/bank-cost-savings-via-chatbots-reach-7-3bn-2023. Accessed June 1, 2021.
Karahanna, E., Xu, S. X., Xu, Y., & Zhang, N. A. (2018). The needs–affordances–features perspective for the use of social media. MIS Quarterly, 42(3), 737–756.
Keil, M., Tan, B. C., Wei, K.-K., Saarinen, T., Tuunainen, V., & Wassenaar, A. (2000). A cross-cultural study on escalation of commitment behavior in software projects. MIS Quarterly, 24(2), 299–325.
Klobas, J. E., McGill, T., & Wang, X. (2019). How perceived security risk affects intention to use smart home devices: A reasoned action explanation. Computers & Security, 87, Article 101571.
Kolekofski, K. E., Jr., & Heminger, A. R. (2003). Beliefs and attitudes affecting intentions to share information in an organizational setting. Information & Management, 40(6), 521–532.
Kwok, S. H., & Gao, S. (2005). Attitude towards knowledge sharing behavior. Journal of Computer Information Systems, 46(2), 45–51.
Lalicic, L., & Weismayer, C. (2021). Consumers' reasons and perceived value co-creation of using artificial intelligence-enabled travel service agents. Journal of Business Research, 129, 891–901.
Landis, G. A. (2014). Future tense: The chatbot and the drone. Communications of the ACM, 57(7), 112 ff.
Leidner, D. E., Gonzalez, E., & Koch, H. (2018). An affordance perspective of enterprise social media and organizational socialization. The Journal of Strategic Information Systems, 27(2), 117–138.
Leonardi, P. M. (2013). When does technology use enable network change in organizations? A comparative study of feature use and shared affordances. MIS Quarterly, 37(3), 749–775.
Lin, X., Featherman, M., Brooks, S. L., & Hajli, N. (2019). Exploring gender differences in online consumer purchase decision making: An online product presentation perspective. Information Systems Frontiers, 21(5), 1187–1201.
Lin, X., & Kishore, R. (2021). Social media-enabled healthcare: A conceptual model of social media affordances, online social support, and health behaviors and outcomes. Technological Forecasting and Social Change, 166, Article 120574. https://doi.org/10.1016/j.techfore.2021.120574
Lin, X., Wang, X., & Hajli, N. (2019). Building e-commerce satisfaction and boosting sales: The role of social commerce trust and its antecedents. International Journal of Electronic Commerce, 23(3), 328–363.
Majchrzak, A., Faraj, S., Kane, G. C., & Azad, B. (2013). The contradictory influence of social media affordances on online communal knowledge sharing. Journal of Computer-Mediated Communication, 19(1), 38–55.
Markus, M. L., & Silver, M. S. (2008). A foundation for the study of IT effects: A new look at DeSanctis and Poole's concepts of structural features and spirit. Journal of the Association for Information Systems, 9(3/4), 609–632.
Meuter, M. L., Bitner, M. J., Ostrom, A. L., & Brown, S. W. (2005). Choosing among alternative service delivery modes: An investigation of customer trial of self-service technologies. Journal of Marketing, 69(2), 61–83.
Mimoun, M. S. B., Poncin, I., & Garnier, M. (2012). Case study—Embodied virtual agents: An analysis on reasons for failure. Journal of Retailing and Consumer Services, 19(6), 605–612.
Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192–222.
Murtarelli, G., Gregory, A., & Romenti, S. (2021). A conversation-based perspective for shaping ethical human–machine interactions: The particular challenge of chatbots. Journal of Business Research, 129, 927–935.
Muthén, L. K., & Muthén, B. (2017). Mplus user's guide: Statistical analysis with latent variables. Muthén & Muthén.
Paschen, J., Kietzmann, J., & Kietzmann, T. C. (2019). Artificial intelligence (AI) and its implications for market knowledge in B2B marketing. Journal of Business & Industrial Marketing, 33(3), 543–556.
Pillai, R., Sivathanu, B., & Dwivedi, Y. K. (2020). Shopping intention at AI-powered automated retail stores (AIPARS). Journal of Retailing and Consumer Services, 57, Article 102207. https://doi.org/10.1016/j.jretconser.2020.102207
Pizzi, G., Scarpi, D., & Pantano, E. (2021). Artificial intelligence and the new forms of interaction: Who has the control when interacting with a chatbot? Journal of Business Research, 129, 878–890.
Pizzi, G., Scarpi, D., Pichierri, M., & Vannucci, V. (2019). Virtual reality, real reactions?: Comparing consumers' perceptions and shopping orientation across physical and virtual-reality retail stores. Computers in Human Behavior, 96, 1–12.
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903.
Putri, F. P., Meidia, H., & Gunawan, D. (2019). Designing intelligent personalized chatbot for hotel services. Paper presented at the Proceedings of the 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence.
Qiu, T., & Yang, Y. (2018). Knowledge spillovers through quality control requirements on innovation development of global suppliers: The firm size effects. Industrial Marketing Management, 73, 171–180.
Ritter, A., Cherry, C., & Dolan, W. B. (2011). Data-driven response generation in social media. Paper presented at the Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing.
Roy, R., & Naidoo, V. (2021). Enhancing chatbot effectiveness: The role of anthropomorphic conversational styles and time orientation. Journal of Business Research, 126, 23–34.
Sæbø, Ø., Federici, T., & Braccini, A. M. (2020). Combining social media affordances for organising collective action. Information Systems Journal, 30(4), 699–732.
Scherer, A., Wünderlich, N. V., & Wangenheim, F. V. (2015). The value of self-service: Long-term effects of technology-based self-service usage on customer retention. MIS Quarterly, 39(1), 177–200.
Schuetzler, R. M., Grimes, G. M., & Scott Giboney, J. (2020). The impact of chatbot conversational skill on engagement and perceived humanness. Journal of Management Information Systems, 37(3), 875–900.
Schuetzler, R. M., Grimes, M., Giboney, J. S., & Buckman, J. (2014). Facilitating natural conversational agent interactions: Lessons from a deception experiment. In Information Systems and Quantitative Analysis Faculty Proceedings & Presentations.
Shah, H., Warwick, K., Vallverdú, J., & Wu, D. (2016). Can machines talk? Comparison of Eliza with modern dialogue systems. Computers in Human Behavior, 58, 278–295.
Shawar, B. A., & Atwell, E. (2007). Chatbots: Are they really useful? Paper presented at the LDV Forum.
Sheehan, B., Jin, H. S., & Gottlieb, U. (2020). Customer service chatbots: Anthropomorphism and adoption. Journal of Business Research, 115, 14–24.
Shin, D. (2018). Empathy and embodied experience in virtual environment: To what extent can virtual reality stimulate empathy and embodied experience? Computers in Human Behavior, 78, 64–73.
Shumanov, M., & Johnson, L. (2021). Making conversations with chatbots more personalized. Computers in Human Behavior, 117, Article 106627. https://doi.org/10.1016/j.chb.2020.106627
Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work: Human–AI collaboration in managerial professions. Journal of Business Research, 125, 135–142.
Strong, D. M., Volkoff, O., Johnson, S. A., Pelletier, L. R., Tulu, B., Bar-On, I., … Garber, L. (2014). A theory of organization-EHR affordance actualization. Journal of the Association for Information Systems, 15(2), 53–85.
Sung, E. C., Bae, S., Han, D.-I. D., & Kwon, O. (2021). Consumer engagement via interactive artificial intelligence and mixed reality. International Journal of Information Management, 60, Article 102382. https://doi.org/10.1016/j.ijinfomgt.2021.102382
Treem, J. W., & Leonardi, P. M. (2013). Social media use in organizations: Exploring the affordances of visibility, editability, persistence, and association. Annals of the International Communication Association, 36(1), 143–189.
Visentin, M., & Scarpi, D. (2012). Determinants and mediators of the intention to upgrade the contract in buyer–seller relationships. Industrial Marketing Management, 41(7), 1133–1141.
Volkoff, O., & Strong, D. M. (2013). Critical realism and affordances: Theorizing IT-associated organizational change processes. MIS Quarterly, 37(3), 819–834.
Volkoff, O., & Strong, D. M. (2017). Affordance theory and how to use it in IS research. In The Routledge companion to management information systems (pp. 232–245).
Wang, C., Teo, T. S., & Janssen, M. (2021). Public and private value creation using artificial intelligence: An empirical study of AI voice robot users in Chinese public sector. International Journal of Information Management, 61, Article 102401.
Wang, X., McGill, T. J., & Klobas, J. E. (2018). I want it anyway: Consumer perceptions of smart home devices. Journal of Computer Information Systems, 60(5), 437–447.
Westaby, J. D. (2005). Behavioral reasoning theory: Identifying new linkages underlying intentions and behavior. Organizational Behavior and Human Decision Processes, 98(2), 97–120.
Wittkower, D. (2016). Principles of anti-discriminatory design. Paper presented at the 2016 IEEE International Symposium on Ethics in Engineering, Science and Technology.
Xu, A., Liu, Z., Guo, Y., Sinha, V., & Akkiraju, R. (2017). A new chatbot for customer service on social media. Paper presented at the Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems.

Xiaolin Lin is an assistant professor of computer information systems in the Department of Computer Information and Decision Management, Paul and Virginia Engler College of Business, West Texas A&M University. He received his Ph.D. in information systems from Washington State University. His research interests include the impacts of IT on e-commerce and healthcare, cybersecurity, and gender differences in IT behavioral research. Dr. Lin has published or has forthcoming papers in premier journals including Journal of Business Ethics, Decision Sciences, Information & Management, International Journal of Electronic Commerce, Industrial Marketing Management, and International Journal of Information Management, among others. He has also presented numerous papers at international and national conferences.
Bin Shao is a professor of decision management and Terry Professor of Business in the Department of Computer Information and Decision Management, Paul and Virginia Engler College of Business, West Texas A&M University. She earned her Ph.D. from the University of Illinois at Urbana-Champaign. Her current research interests lie mainly in the interdisciplinary areas of information systems, decision management, and marketing. Her work has been published in International Advances in Economic Research, International Journal of Applied Management Science, Journal of Supply Chain and Operations Management, Journal of Management Information and Decision Sciences, and many others.

Xuequn (Alex) Wang is a Senior Lecturer at Edith Cowan University. He received his Ph.D. in Information Systems from Washington State University. His research interests include social media, privacy, e-commerce, and human-computer interaction. His research has appeared in MIS Quarterly, Information Systems Journal, Information & Management, Communications of the ACM, ACM Transactions, and Communications of the Association for Information Systems, among others.