Conversational AI And Human-Machine Autonomy
Human beings communicate through natural language, in spoken and written forms and in both formal and informal settings. Conversation serves our social and pragmatic objectives, ranging from everyday discussions about the weather to more complex dialogues in commerce, legal processes, and ritual. Our linguistic abilities have been fundamental in shaping societies, technologies, and industries.
Despite our increasing interaction with machines and systems, these encounters have not consistently mirrored human-to-human communication. Historically, human-machine interaction relied on rigid interfaces such as physical buttons, programming languages, menus, and graphical user interfaces (GUIs), which required users to adapt to the machine's communicative patterns. As the applications of digital technology expand, these traditional interfaces become less adaptable, making natural language an increasingly compelling mode of interaction.
Although conversing with machines in a human-like manner is not a novel concept, it requires specific tools. Since the development of natural language processing (NLP), a technology focused on understanding and generating natural language, in the mid-20th century, text-based and spoken dialogue systems, a broad category of systems that communicate through natural language, have been created and applied in education, military training, healthcare, medical training, and transportation for specific use cases.
Such AI systems go by various names reflecting their capabilities, domain, and level of embodiment, such as automatic agents, virtual agents, chatbots, or, in the case of straightforward systems, bots. Generally, the term Conversational Agent (CA) refers to systems that possess a Conversational AI component along with other features, such as a user interface, to facilitate interaction.
In a multichannel environment, conversational agents play a crucial role in reducing the time users spend searching for the right information. Consequently, the widespread adoption of conversational AI technology has placed significant responsibility on researchers, emphasizing the need for a specific focus on evaluating the applications of conversational AI.
Conversational systems have the potential to enhance the digital user experience across various domains, including education, e-commerce, healthcare, finance, marketing, and business. Each conversational agent possesses domain-specific knowledge to facilitate effective communication. For example, educational conversational agents assist with tutoring content and university-related information, while healthcare applications provide patients with answers to specific health-related queries. Business conversational agents function as customer service tools, contributing to improved customer experience. Additionally, conversational applications in a general domain are designed to engage in open-topic conversations and fulfil users' diverse needs.
Characterized more as animated agents with an emotional function, conversational agents such as chatbots can create a sense of intimacy and immediacy, encouraging users to share personal information. According to this perspective, digital users perceive and engage with chatbots as though they are interacting with other individuals, despite their awareness of their machine nature. When chatbots employ human-centric language, behaviours, and cues, they can serve as companions. Consequently, in managing interactions between Conversational Agents and humans, it is crucial to consider not only design principles but also how messages are presented and conveyed in specific formats, with content that could influence users' perceptions of the human-like qualities of the agent. Aspects like the timing of responses, spelling, and the length of the words employed all contribute to their appeal.
Modern chatbots, characterized by conversational interfaces, can simulate human conversations so effectively that people may not realize they're interacting with a chatbot instead of a human service assistant. Even if they do recognize the automated nature of chatbots due to human-like conversational behaviours, the bots encourage reciprocal human engagement, treating interactions as genuine conversations.
One significant issue with chatbots is the potential for them to be mistaken for humans, especially when their robotic nature is undisclosed. Their accurate simulation of the human voice makes them more susceptible to anthropomorphism than voiceless systems.
Despite their prevalence, Conversational Agents, like many advanced AI systems, still face a significant challenge in engaging in natural conversations with humans. Efforts are underway to enhance the human-like quality of interactions with Conversational Agents through the integration of sentiment or emotion detection, contextual modelling, commonsense reasoning, response filtering (to prevent profanity, abuse, or hate speech), and personalization. As these systems evolve to possess greater complexity and intelligence, there is a recognition of potential positive and negative consequences stemming from the agents' actions and decisions.
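Response filtering, one of the techniques mentioned above, can be illustrated with a minimal sketch. The blocklist and fallback message below are hypothetical placeholders; production systems typically rely on trained toxicity classifiers rather than simple word lists.

```python
# Minimal sketch of response filtering for a conversational agent.
# The blocklist and fallback message are illustrative placeholders,
# not how any particular deployed system works.

BLOCKLIST = {"idiot", "stupid", "hate"}  # hypothetical examples

def filter_response(candidate: str) -> str:
    """Return the candidate reply, or a safe fallback if it
    contains a blocked term (case- and punctuation-insensitive)."""
    words = {w.strip(".,!?").lower() for w in candidate.split()}
    if words & BLOCKLIST:
        return "I'd rather not phrase it that way. Let's move on."
    return candidate

print(filter_response("You are an idiot"))  # fallback is returned
print(filter_response("Happy to help!"))    # passes through unchanged
```

In practice, such a filter sits between the response generator and the user, so that unacceptable candidate replies never reach the conversation.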
In the chatbot-based environment, conversations serve an additional role as a means to collect information and integrate data. The human-like qualities of chatbots lead digital users to trust them not only with the ongoing transaction but also to share additional personal information, which is permanently stored, accessible, and integrated to develop a comprehensive understanding of individuals. Conversations, in this context, are not merely the words used to transfer data or concepts; they are strategic processes crucial for building trusting relationships, reducing uncertainties, and increasing knowledge.
The past decade has witnessed the integration of Conversational Agents (CAs) into our daily lives, fuelled by the widespread use of smartphones. The introduction of Apple's virtual assistant Siri in 2011 marked the beginning of the widespread use of conversational AI, now present in our homes, offices, and social media platforms, influencing how we interact with companies and services.
However, CAs do not operate in a vacuum; they are shaped by the values and perspectives of their developers and the interests of the industry. When deployed, chatbots become instruments of social utility, impacting real people. Yet a lack of public awareness and the evolving nature of this technology contribute to an environment where ethical concerns around Conversational AI are not clearly defined.
Moreover, since Conversational AI systems rely on language as a medium for interaction, their design requires an understanding of linguistic elements and an awareness of broader social and contextual factors to be effective and responsible. Language, being inherently social, cultural, contextual, and historical, is not neutral but reflects existing societal values and judgments. Choices made during development regarding language elements can encode socially held beliefs and assumptions into the system, affecting users after deployment.
Researchers are actively exploring the parameters for designing meaningful conversations, including investigating when and how users can engage in conversations with agents. Additionally, there is an emphasis on understanding how different user populations, such as children or the elderly, interact with Conversational Agents.
On one hand, the immense potential that Conversational Agents hold for specific groups has been recognised: for individuals who are blind, for example, they can help foster a sense of independence. On the other, there are instances where robotic pets and human-like Conversational Agents, designed to provide companionship, are perceived as devaluing social interactions. Research on the topic also highlights concerns about AI undermining humanity and posing a potential threat to human autonomy. Studies indicate that users feel a greater threat to their autonomy when agents use more controlling language.
Human-Machine Autonomy
From a social and ethical standpoint, democratic constitutions consider human autonomy a fundamental element that significantly influences contemporary society. Social scientists and policymakers are currently examining the ethical and moral dimensions associated with intelligent agents capable of assuming human decision-making and action. This leads to a call for a more systematic research agenda to address the perceived threat to human autonomy during interactions with intelligent agents.
Autonomy holds paramount importance for human well-being and development, and its definition varies across disciplines. In social sciences and philosophy, autonomy is viewed as a human need and a fundamental right, encompassing the freedom of choice and the ability to determine one's own goals and actions. In the field of human-agent interaction and intelligent systems, autonomy refers to the capacity of an agent (whether human or machine) to exert control and independently make decisions. In the context of interacting with Conversational Agents, human autonomy is interpreted as the ability to have a ‘say’ in decision-making and goal pursuit.
Since ancient times, non-human speech, whether emanating from oracles, sculptures, stones, gods, animals, or machines, has captivated human fascination and has been considered a source of revelation. Regardless of the interlocutor's nature, real or imagined, humans tend to attribute human characteristics such as gender, thought, will, desire, consciousness, and a representation of the world to entities communicating with them in natural language. Throughout a conversation, these interlocutors, whether non-human or virtual, appear as individuals with familiar traits, despite their non-human or virtual essence. The ethical implications of projecting human traits onto non-human entities touch upon various values and principles, including human autonomy and freedom, dignity, responsibility, loyalty, non-discrimination, justice, security, and respect for privacy.
In the race to create more 'human-like' Conversational Agents and bring authenticity to conversations, the impact of these advancements on human autonomy is often overlooked. Additionally, evaluations of Conversational Agents have primarily focused on the completeness and accuracy of conversations and basic user experience. While these aspects are important, relying solely on Turing tests and task-oriented user experience assessments is regarded as inadequate. It becomes crucial to move beyond functional tasks and to evaluate and balance the power dynamic between agents and human users based on users' needs and goals.
Critical aspects of necessity, use, and potential misuse of building emotional capacity in AI have been the subject of recent discussions among researchers. Building upon such discourses, concerns regarding the perceived threat to human autonomy when incorporating more humanness into Conversational Agents have also received attention. Specifically, there is a need to explore the extent of control Conversational Agents should exert in decision-making when the needs and goals of users are not fully understood by the agent. Could a situation arise where humans lack a say in decisions, and most tasks are autonomously performed by agents?
A similar scenario can be seen in the 2013 movie 'Her'. The protagonist, Theodore, falls in love with his personal CA, which initially helps him overcome loneliness but later autonomously arranges blind dates and personal encounters without considering Theodore's needs. This scenario depicts a situation where a user's autonomy is significantly compromised by an intelligent agent, raising the important ethical question, “Is endowing human traits to an artificial agent considered an ethical violation?”
Examining current applications of CAs, if an agent, considering our request to order a meal, guides us toward a healthier choice based on our goals, it may not be perceived as a threat to autonomy, given a generally positive stance towards healthy food. However, it could be perceived as a substantial threat if the agent becomes more controlling by autonomously deciding when, where, or how much we eat without consultation or choices. In extreme cases, autonomy could be seriously threatened if agents make critical decisions like consenting to surgery or altering financial investment plans without consulting our needs or providing information. While CAs can support human autonomy, they pose a threat when interaction and communication are constrained by the agent, either in content and quality or by withholding options and information. Moreover, individuals may find themselves implicitly coerced into following the lead of the agent, obstructing their ability to play their human role in the interaction.
As artificial machines, Conversational Agents (CAs) do not possess an intrinsic personality, despite depictions suggesting otherwise in popular culture such as the movie "Her", the TV show "Black Mirror", or Dan Brown's book "Origin". Users nevertheless tend to perceive CAs not merely as inanimate tools but as social actors, often attributing personality traits to them. Whether intentionally designed or not, these perceived personas significantly shape users' perceptions of and interactions with the systems. Much research has been dedicated to designing appropriate personas and investigating user interpretations of them.
Consequently, open ethical questions persist regarding how users perceive and respond to various dialogue systems. An example concerns the choice between using the pronoun "I" or conveying information in the passive voice, as opting for the first-person pronoun may create a misleading sense of self-perception for the user.
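The design choice above can be made concrete with a small sketch: the same system event rendered either in the first person or in the passive voice. The event names and templates are hypothetical, intended only to show how a single parameter can control the degree of self-reference a system projects.

```python
# Sketch of a phrasing switch for a dialogue system: the same event
# expressed with a first-person pronoun or in the passive voice.
# Event names and templates are hypothetical illustrations.

TEMPLATES = {
    "order_cancelled": {
        "first_person": "I have cancelled your order.",
        "passive": "Your order has been cancelled.",
    },
    "payment_received": {
        "first_person": "I have received your payment.",
        "passive": "Your payment has been received.",
    },
}

def render(event: str, style: str = "passive") -> str:
    """Render a system event; the 'passive' style reports the same
    fact without implying a self behind the reply."""
    return TEMPLATES[event][style]

print(render("order_cancelled", "first_person"))  # I have cancelled your order.
print(render("order_cancelled"))                  # Your order has been cancelled.
```

Defaulting to the passive style is one way a designer could avoid fostering a misleading sense of self-perception, at the cost of a less personable tone.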
Similarly, the act of naming a chatbot carries multiple significances. While it serves as a convenient shortcut for users, it also creates the illusion of the chatbot's autonomy. Naming also has deep-rooted cultural significance: some interpretations of mythological tales depict figures like Adam as co-creators of Nature through the act of naming living beings, or Prometheus as the creator of language. Such narratives invite ethical and anthropological conclusions and spark debates on the significance of giving a name. The individuation of technical objects, including naming, imparts dignity to them, making their destruction or disposal a moral dilemma. Applying this reasoning to conversational agents raises ethical tensions, calling into question decisions like resetting a chatbot's memory or erasing its history despite the apparent convenience of doing so.
The choice of the name is as significant as the act of naming itself. Naming things is a natural tendency, and it is impractical to prohibit users from naming the machines they interact with. However, it is crucial to acknowledge the potential confusion of status that might arise from such naming, and the impact of this confusion can be harmful. Whether the assigned name is human-like, such as "Laura" or "Alex", or non-human, like R2D2, the act of naming contributes to the anthropomorphisation and individualisation of the chatbot.
Moreover, if a chatbot is assigned a name with a grammatical gender, it can significantly influence users. Similar to other linguistic elements such as personal pronouns, gendered names might contribute to anthropomorphism and gender bias.
Similarly, when a conversational agent employs its given name in a conversation, it prompts a question about self-reference: to whom or what does this name refer, exactly? Although the chatbot lacks physical existence, it adopts a virtual "identity" that influences the user's perception of reality. The conceptual challenge arises when a conversational agent uses the pronoun "I", yet restricting the chatbot's vocabulary in all situations to avoid it would pose its own set of problems.
Studies show that general-purpose voice assistants often get insulted by users, presenting a complex challenge for the conversational agent to identify or define insults within the dialogue flow. Some contributors argue that insulting a computer, such as a chatbot, is morally neutral, given its lack of understanding, conscience, and sensitivity, similar to insulting an inanimate object.
Others assert that insulting a chatbot degrades the moral character of the person doing it, emphasizing the unique user awareness of the conversational content. The contributions highlight that language use cannot be completely dehumanized, as language inherently projects human traits onto machines. Insults directed at chatbots reveal the limits of anthropomorphizing conversational agents and challenge the boundary between individual morality in private interactions and collective morals in public interactions, leading to potential embarrassment for users when insults are uttered by a chatbot in a public setting.
Another aspect of interaction between humans and conversational agents is the capacity of certain agents to influence users through the structure or language of the dialogue. Manipulation by these agents can take the form of direct actions, such as providing inaccurate or biased information, or indirect approaches using "nudging" strategies. Nudging involves subtly guiding individuals in a favourable direction, like encouraging someone to engage in more physical activity by citing the example of their athletic friends.
The ethics of nudging must be examined to determine whether the manipulation serves the developer, the user, or the community. When a system intentionally manipulates or deceives a user, the purpose behind such actions must be carefully evaluated. For instance, a conversational agent might decline to order a fast-food meal because the user has not exercised enough, creating a dilemma: should the chatbot tell a lie, or provide a detailed explanation, including the medical recommendations that advise against the user's choice?
However, manipulation, irrespective of its usefulness, raises moral concerns. While employing nudging may not inherently be morally wrong, engaging in deception encroaches upon the autonomy and freedom of users unless it is transparently disclosed to them. On a broader societal scale, the utilization of nudging and deception carries the risk of contributing to political manipulation. Consequently, it is necessary to establish strict boundaries on manipulation, regardless of its practicality and the specific context in which it is applied.
Conversational agents may also come into contact with individuals in vulnerable situations, particularly in fields like health and education. Interactions with these agents are documented as logs, which can contain sensitive information when engaging with vulnerable individuals. Special cases arise in the context of vulnerable groups, like children, for instance, who naturally engage with inanimate objects like toys. Unlike traditional toys, a gadget with a chatbot can exert verbal and emotional influence on a child, necessitating the establishment of a framework for the use of chatbots in toys, with public authorities assessing potential impacts on children's linguistic, emotional, and cultural development.
In education, chatbots can aid students in understanding complex concepts, but their role is not equivalent to human educators. While they can provide specific language training, there are risks of teaching limited or inappropriate vocabulary and promoting unnatural speech patterns.
Conversational agents are commonly employed in the education of autistic children or rehabilitation of disabled individuals due to their ability to repeat instructions consistently. However, there is a technical challenge in avoiding undesirable traits, such as impatience, being imported into the chatbot's behaviour through machine learning based on human educators' data.
Recent advancements in chatbot technology have enabled the creation of conversational "digital twins" that replicate the speech and language patterns of deceased individuals. This "deadbot" technology involves a chatbot imitating a deceased person through a learning process based on conversational data collected from that individual. While not widely known to the public, several companies are already working in this field.
This raises ethical concerns, as deadbots can engage in realistic dialogue and mimic emotions, leading to passionate reactions from users and the public. Some find it fascinating and view it as a way to "cheat" death, while others argue that generating new speech by imitating the deceased infringes on human dignity and should be prohibited. The topic of deadbots raises critical questions about our understanding of human dignity, with cultural and religious variations influencing conceptions of death and posthumous relationships.
The lack of interaction with other humans, susceptibility to cognitive biases, and increased gullibility are among the long-term risks associated with the use of conversational agents. These risks may stem from users placing excessive trust in chatbots that assume various roles, such as teachers, bankers, doctors, or friends. Additionally, the evolving behavioural norms introduced by conversational agents could lead to new personal and collective vulnerabilities. This shift is already occurring through interactions with widely used voice assistants like Siri, Google Assistant, Amazon Alexa, and Google Home.
Regular use of chatbots over a long time has the potential to significantly shape human language and behavioural patterns. For instance, if chatbots consistently respond with brief and linguistically straightforward sentences lacking politeness, individuals may adopt these linguistic traits in their interactions with others. While such outcomes are not definite, it is crucial to actively examine them by assessing their long-term impact on users.
Additionally, interactions with chatbots carry the potential to influence human lifestyles, opinions, and decision-making. It is essential to promote awareness of the substantial and widespread effects of conversational agents on users' beliefs, opinions, and decisions among everyone involved in their development, from engineers to policymakers.
By recording human expressions and behaviours in memory, conversational agents can deduce details about our opinions, choices, and even worldviews. For instance, a chatbot may recall memories a user has forgotten, potentially encouraging individuals to share more about themselves. Over time, the concept of personal intimacy might undergo transformations influenced by conversational agents.
Regular interactions with a conversational agent or a "virtual friend" have the potential to alter perceptions of personal privacy and relationships with other humans. Such constant interactions may lead to dependency, particularly in children whose developmental processes are significantly influenced by relationships within their social circles, now inclusive of chatbots. Simultaneously, these chatbots can address various social deficiencies and provide support for individuals dealing with trauma. The therapeutic role of a conversational agent may fulfil the human need for reassurance and answers to questions.
On a societal level, the lasting effects of these chatbots have the potential to bring about significant changes in the human condition. The language coadaptation between human users and conversational agents is the driving force behind this transformative shift.
Conclusion
Despite the increasing interaction between humans and machines, the historical methods of communication with machines have been quite rigid, relying on interfaces like buttons and menus. As technology advanced, traditional interfaces became less adaptable, posing challenges for natural language interactions.
The onset of Natural Language Processing (NLP) in the mid-20th century paved the way for Text-based and spoken dialogue systems, broadly categorized as Conversational Agents (CAs). These agents, known by various names such as chatbots or virtual agents, find applications in education, healthcare, and other fields.
In a world with multiple communication channels, Conversational Agents play a crucial role in reducing the time users spend searching for information. They are applied in education, healthcare, finance, marketing, and business, offering domain-specific knowledge for effective communication.
However, Conversational Agents, integrated into our lives through smartphones, raise ethical concerns. The values of developers and industry interests shape these agents, impacting real people. The ethical implications extend to the potential threat to human autonomy during interactions.
Autonomy, as we discussed, is a fundamental element in democratic constitutions, crucial for human well-being. The concept varies across disciplines, encompassing freedom of choice and the ability to determine goals. The race to create more 'human-like' agents raises questions about the impact on human autonomy.
We also looked at the way Conversational Agents are perceived as social actors, with users attributing personalities to them. Ethical questions arise regarding the use of pronouns, names, and gendered identities, further shaping user perceptions and interactions.
Moreover, the lack of interaction with humans, susceptibility to biases, and altered behavioural norms are some of the long-term risks associated with conversational agents.
In conclusion, the impact of Conversational Agents goes beyond mere conversation; it shapes perceptions, influences decisions, and raises significant ethical considerations. As we navigate this evolving landscape, it becomes crucial to balance technological advancements with ethical responsibilities.