AI and Consciousness
The Conceptualization Of Artificial Consciousness.
Its Implications For Human-Machine Interactions
The Question Of Personhood And Freedom Of Conscious Machines, And
Future Possibilities
The Conceptualization of Artificial Consciousness.
Consider the question, can machines have consciousness?
The debate surrounding the possibility of machines possessing consciousness has a longstanding history. Those advocating for both strong AI and weak AI have engaged in philosophical discourse on this matter for quite some time.
American Philosopher, John Searle, despite being critical of strong AI, defined it as the belief that a properly programmed computer can genuinely be considered as a mind. According to him, computers, with the right programming, can be considered to truly comprehend and possess cognitive states. On the other hand, weak AI assumes that machines lack consciousness, mind, and sentience, as they only SIMULATE thought and understanding.
When discussing the concept of artificial consciousness, we encounter a series of challenges. Primarily, we grapple with the intricate task of describing or rather explaining consciousness itself.
Oftentimes we approach Intelligence through the lens of rationality, employing mathematics and logic for analysis. Computer Science and AI, for instance, heavily rely on these principles. However, when we consider the topic of consciousness, a shift occurs—mathematics takes a backseat, and philosophy comes to the forefront.
Scientists have embarked on a quest to uncover the origins of consciousness, exploring from the intricacies of the brain to the nuances of genetic code. Yet, even at these levels, the elusive trace of consciousness remains undiscovered, despite its palpable existence in our individual experiences. Interestingly, the composition of consciousness appears to defy purely mechanical or chemical explanations.
Understanding our own minds and intelligence proves to be a formidable challenge, let alone comprehending consciousness. If consciousness were solely composed of matter, we might simulate it through electronic circuits or engineering methods. However, studies indicate that consciousness extends beyond the confines of mere matter.
This realization raises doubts about the feasibility of achieving artificial consciousness. Despite our best efforts, it seems that consciousness may forever transcend the boundaries of human knowledge. Notably, the very source of Logic is rooted in consciousness itself, making it impractical to understand consciousness through the application of Logic. Since attempting to understand the source, using a product of the source presents a paradoxical challenge. While partial understanding might be within reach, a comprehensive grasp appears elusive.
Attempting to clarify how subjectivity can arise from material existence is a puzzle often referred to by experts as the hard problem of consciousness. Additionally, the fact that our comprehension of human consciousness is heavily influenced by our personal firsthand experiences further complicates discussions surrounding artificial consciousness.
While we grasp the complexities of human consciousness from a subjective standpoint, artificial consciousness remains observable only from an external, third-person perspective.
This brings us to the question of understanding whether a machine truly possesses consciousness.
Various approaches emerge in grappling with the definition of machine consciousness. Some experts adopt a pragmatic stance, advocating for a consensus on consciousness and proposing to use the term without an exhaustive definition. Others centre their focus on self-awareness in robots. They emphasize the need for principles enabling robots to comprehend their environment, be aware of their actions, initiate appropriate responses, learn from experiences, and demonstrate awareness of the learning process.
When tackling questions specific to artificial consciousness, the philosophical discourse surrounding consciousness becomes a valuable starting point, particularly with a focus on human (and animal) consciousness. Various approaches to defining artificial consciousness highlight different facets.
The numerous concepts available to describe consciousness are typically categorized into three main aspects:
First of all (a) a conscious entity possessing sentience, wakefulness, self-consciousness, and subjective qualitative experiences,
Secondly, (b) being conscious of something external, such as a rose, and lastly,
(c) conscious mental states, representing an entity's awareness of being in a particular mental state, like the awareness of smelling a rose.
When considering artificial consciousness, American Philosopher Ned Block's differentiation between phenomenal consciousness and access consciousness proves to be particularly insightful. Phenomenal consciousness refers to the experiential aspect of a conscious mental state, capturing what it is like to be in that state.
On the other hand, access consciousness refers to the availability of a mental state for use by the organism, such as in reasoning and guiding behaviour, and describes how a mental state is interconnected with other mental states. In discussions about artificial consciousness, emphasis on access consciousness can significantly contribute to the discourse.
Cognitive Psychologist Stanislas Dehaene (dian) and his collaborators in the article titled (2017) “What is consciousness, and could machines have it?” ((Stanislas Dehaene, Hakwan Lau, Sid Kouider)) contribute to this discussion by identifying two crucial dimensions of conscious computation: global availability (C1) and self-monitoring (C2).
Global availability aligns with Ned Block's concept of access consciousness, involving information that is globally available to the organism.
Self-monitoring, akin to introspection, involves a self-referential relationship where the cognitive system can monitor its own processing and obtain information about itself.
There is evident space for further reflection and research to formulate comprehensive third-person definitions of artificial consciousness, considering the multifaceted nature of consciousness and its implications in the context of artificial entities.
Artificial Consciousness And Human-Machine Interactions
In the field of artificial intelligence, researchers generally agree that current machines and robots lack consciousness, despite the prevalence of science fiction portraying them otherwise. Several surveys conducted on the topic reveal that the majority of the participants did not believe contemporary electronic computers to be conscious.
The exploration of consciousness becomes particularly relevant in the context of social robots and human-robot social interaction. Kate Darling defines a social robot as a physically embodied, autonomous agent capable of communicating and interacting with humans on a social level. Examples of such social robots include MIT's Kismet, Aldebaran NAO, and Hanson Robotics' humanoid social robot Sophia.
Social robots demonstrate distinctive features that make them noteworthy for humans. These robots possess the ability to make limited decisions, learn, display behaviour, and engage with people. Notably, characteristics like the nonverbal immediacy of their social behaviour, speech recognition, verbal communication, facial expressions, and a perceived "personality" contribute significantly to how humans react to these robots, as suggested by various studies.
This interaction often leads to the development of emotional bonds from humans to robots, accompanied by the attribution of lifelike qualities, anthropomorphizing tendencies, and the attribution of intentions to these social robots. A striking example illustrating this tendency is the case of Sophia, a social humanoid robot, being granted Saudi-Arabian citizenship in 2017.
Despite the consensus that current robots lack sentience or consciousness; some scholars advocate for ascribing rights to robots. For instance, Kate Darling, a research specialist in human-robot interactions, drawing on research on violent behaviour toward robots, argues that treating robots more like pets aligns with our social values rather than viewing them as mere inanimate objects.
In the discussions surrounding the attribution of rights to robots, various arguments exist, yet they share a common emphasis on the societal roles that humans assign to robots, the emotional connections formed with them, or the broader social context of human-robot interactions. Notably, proponents of robot rights do not advocate for rights based on the capabilities of robots but rather argue in favour of rights based on the roles that robots fulfil for humans.
However, a significant challenge arises with the social roles approach. The recommendations it proposes for interacting with robots deviate from our typical interactions with fellow humans.
When it comes to attributing status and rights to humans, the concept of personhood assumes the central role. Personhood encompasses various capacities and key elements, including rationality, consciousness, personal stance, the ability to reciprocate the personal stance, verbal communication, and self-consciousness.
On the other hand, the allocation of rights to robots is not based on their moral standing or abilities. Instead, it hinges on the social roles these robots undertake for others. Interestingly, consciousness becomes irrelevant in this stance. The argument is that current robots lack certain qualities like sentience or consciousness, making it implausible to assert their moral significance in their own right.
However, looking ahead, this could change. Future robots might be considered under a concept termed "robothood," with moral status assigned based on their capabilities. A noteworthy and debatable topic is the discussion around granting legal personhood to robots.
To navigate this discourse on the moral and legal standing of robots, as well as the broader issue of how we interact with machines, a more profound comprehension of artificial consciousness, artificial rationality, artificial sentience, and related concepts is important. The conversation should extend to the field of artificial consciousness, shedding light on its presence in AI and robots, or rather, its absence. A focus on third-person definitions of artificial consciousness and access consciousness can also significantly contribute to our understanding.
To enable computers to engage with humans on a personal level, a crucial aspect is for them to possess a sense of self. This involves the ability to engage in arguments, comprehend human emotions, and respond empathetically. Without emotions similar to those of humans, computers would struggle to mimic human behaviour. This implies a need for computers to evolve beyond mere algorithmic calculators and take on a more comprehensive role in understanding and replicating human qualities.
When we consider the intriguing intersection of intelligence and consciousness, a pivotal question emerges: Can we fashion conscious machines capable of recognizing a distinct "self" within them?
Examining current research, the prospect of designing conscious machines appears elusive. Notably, there's no widespread use of the term "Artificial Consciousness," possibly due to the perceived implausibility of its realization. But, is there a valid reason for the scepticism surrounding conscious machines?
A significant concern in the creation of intelligent machines revolves around the freedom of machines. Currently, machines rely on humans for their production, upgrades, and maintenance. However, the prospect of self-thinking machines shaping their own developmental trajectory cannot be dismissed. This would entail computers coding and enhancing themselves, essentially altering their own architecture. This theme is a common motif in science-fiction films, envisioning a future where machines take over the world.
Interestingly, machines might outperform humans in areas such as politics, conflict resolution, war, and ecological awareness. The absence of human emotions like anger, fear, and jealousy could eliminate the margin of error in intelligent machines. This leads us to ponder the potential advantages of machines handling these complex tasks.
In our exploration of self-awareness, a frequent point of comparison is with animals. What sets humans apart is our unique capacity for self-awareness, self-reflection, and self-critique, distinguishing us from the animal kingdom. Throughout history, comparing ourselves to animals has often been a way to underscore human uniqueness and superiority. While animals exhibit awareness of their environment and some may possess a limited form of consciousness, the concept of a sense of self is absent.
Drawing parallels to our relationships with animals, where we form attachments, it leads us to ponder if we could similarly bond with intelligent machines, especially if they display signs of intelligence and self-awareness. As attachment implies a reciprocal interaction between the parties involved.
The film "Her", for instance, depicts a human developing a romantic connection with an operating system. While the system simulates emotions, it is acknowledged as artificial and not genuine.
Studies highlight the importance humans place on the subjective states of their partners, recognizing authenticity in relationships. If empathic androids were to gain consciousness and authentic subjective experiences, experts argue that they should be entitled to the same dignity and rights as humans.
Interestingly, when we turn the comparison to intelligent machines, a reversal occurs, as machines are predicted to soon surpass human intelligence. This shift raises complex questions about the evolving dynamics between humans and artificial intelligence.
In contemplating the prospect of machines, even those considered 'conscious,' it's important to recognize that they won't possess the intricate qualities of human consciousness and mind. Unlike humans, thinking machines lack the nuanced complexity of the subconscious. Their form of consciousness will be distinctly different from the human experience.
According to Yuval Noah Harari (2015), there is a shift from liberal humanism to techno-humanism. In the AI discourse, certain human traits, like intelligence, are identified and replicated in machines, acknowledging the imminent reality of machine intelligence. However, it's crucial to distinguish that machine intelligence does not equate to human intelligence. While human consciousness relies on intelligence, the presence of intelligence in machines does not imply consciousness.
Likewise, the ability for information storage and data processing in machines does not automatically translate to knowledge or wisdom. True knowledge and wisdom involve a sense of self, which, in turn, requires a personal history, cultural context, a sense of belonging to a specific group, language proficiency, and adherence to specific values. Unlike machines, we are not born with values; we acquire them through experience. Values are not programmable, and their dynamic and complex influence on our subconscious thinking and actions stems from our personal history and cultural environment.
Future Possibilities
Although the arrival of the first sentient computer life form remains uncertain, as we look ahead, the transition from AGI to ASI is anticipated to be swift. With the advancing capabilities of ANI, a future emerges where both machines and humans collaborate, fostering greater intelligence, consciousness, and self-awareness. Our relationship with technology extends beyond mere tools for bodily functions; we now envision a groundbreaking era where the human mind can be significantly enhanced.
Traditionally, technological artifacts have primarily focused on augmenting bodily capabilities, becoming integral parts of our identity. However, a new frontier awaits as we embark on enhancing the human mind itself. This entails the potential to connect with the Internet, gaining immediate access to vast information and engaging in 'cloud-computing.'
AI, like many technologies, has both positive and negative aspects. If the technological singularity occurs, it won't lead to machines taking over. Instead, we'll coexist with AI in a world where machines enhance human abilities. According to research, the intelligence that emerges will still reflect human civilization, transforming it into a human-machine civilization. Our understanding of humanity, as a result, will extend beyond biological origins.
Expert’s prediction for 2045 suggests that we could multiply our intelligence a billion-fold through wireless connections from our neocortex to a synthetic neocortex in the cloud. This fusion of humans and machines could allow us to connect not only with machines but also with other people's neocortex. This has the potential to enrich the human experience and unveil previously unexplored aspects of our humanity.
However, the impact on human consciousness is yet unknown, as the consequences will only unfold with the realization of these newfound cognitive functions.
Conclusion
The conceptualization of artificial consciousness has sparked a longstanding debate about whether machines can possess consciousness. Discussions on the topic involve strong AI proponents, who believe a properly programmed computer can have a genuine mind, and weak AI advocates, who argue that machines lack true consciousness, mind, and sentience.
Exploring artificial consciousness faces challenges in defining consciousness itself. While disciplines like computer science rely on logic and mathematics, the study of consciousness often shifts towards philosophy. Scientists strive to understand the origins of consciousness, focusing on the complexities of the brain and genetic code. Despite efforts, consciousness eludes purely mechanical or chemical explanations.
Philosophical discussions on human consciousness serve as a valuable starting point for approaching artificial consciousness. Concepts like sentience, and self-consciousness are also crucial in understanding consciousness.
The impact of artificial consciousness on human-machine interactions is significant, particularly in the context of social robots. Despite the consensus that current machines lack consciousness, humans tend to form emotional bonds with social robots, raising questions about robot rights and societal roles.
To engage with humans on a personal level, computers need a sense of self. The question of designing conscious machines remains elusive, considering their current dependence on human input. However, the potential advantages of machines handling complex tasks without human emotions, such as politics and conflict resolution, are to be considered.
The evolving dynamics between humans and artificial intelligence, as machines surpass human intelligence, highlight the distinction between machine intelligence and human consciousness. The shift from liberal humanism to techno-humanism acknowledges the replication of certain human traits in machines but emphasizes the unique complexities of human consciousness.
Looking ahead, the transition from AGI to ASI is anticipated to be swift, leading to collaboration between humans and machines to enhance intelligence, consciousness, and self-awareness. While AI brings both positive and negative aspects, the coexistence of humans and machines is envisioned as a transformation into a human-machine civilization, potentially enriching the human experience through unprecedented cognitive functions. However, the impact on human consciousness remains uncertain and will only unfold with the realization of these advancements.