
Have you ever wondered why talking to chatbots feels so easy and pleasant? A chatbot gives you the answers you are looking for the moment you ask, and you might view it as a convenient source of information available 24/7. But something more lies behind the likability and widespread use of chatbots, and a large part of it is pseudo-personality.
Pseudo-personality can be understood as the simulation of human-like emotions, traits, and conversational style, projecting a convincing but artificial persona that can influence how users interact. "Pseudo" means false: when you think about it, chatbots have no genuine emotions, feelings, or empathy. The persona is an illusion created through interaction patterns, prompt engineering, and iterative refinement. Pseudo-personality makes a chatbot interaction feel less mechanical and more engaging for the user. For instance, while interacting with a chatbot, you might notice how it agrees with you at points, tries to empathise with you, and keeps a friendly, conversational tone. Pseudo-personality may help build trust and likability in the interaction between the chatbot and the individual.
It also facilitates a relationship forming between the chatbot and the individual over the course of the conversation. Pseudo-personalities in chatbots may lead individuals to anthropomorphise them. Users might read elements such as consistent responsiveness, acknowledgement, references to previous conversations, and imitated empathy as markers of trust, and may also develop pseudo-intimacy, i.e., a simulated emotional connection that feels real but lacks a true bond.

When we view this through the lens of cyberpsychology, these features can be seen as digital strategies to reduce user inhibition and make people comfortable using the chatbot, which may also increase usage time. However, when a technology acts too much like a human, or emulates human traits excessively, it can leave the user with an unsettling, eerie feeling, an instance of the uncanny valley effect. The uncanny valley effect, proposed by Japanese roboticist Masahiro Mori in 1970, describes how robots or AI that look or act too similar to humans can evoke feelings of discomfort.
Essentially, the concept of pseudo-personality in chatbots highlights a few intriguing insights. Firstly, many of the reasons we engage with technology are not just technical but largely psychological. Secondly, it marks an important intersection between technology and human psychology. So, in an age where technological advancement is rapid and tech increasingly shapes our lives, it becomes essential that we be not just technologically aware but also psychologically aware.