
Can an artificial intelligence be our friend?

It is a specific niche within a growing industry, but one that also carries several risks


In recent years, AI-based software designed to keep people company has spread widely. These tools, which let users interact with chatbots capable of holding conversations very similar to those one would have with a human being, are used by millions of people worldwide. Among the leading companies in the sector are Replika, with around 25 million users; Snapchat's MyAI, with a user base of 150 million; Microsoft's Xiaoice, which reaches over 600 million people; and Character.AI, used by about 20 million. What makes these chatbots particularly appreciated is their ability to adapt to users' needs, offering conversations that feel less cold and more "empathetic." Unlike other AI tools, these programs are designed, among other things, to display a stronger personality, to better remember information users provide, and in some cases to let users pick specific avatars: a user can choose to interact with a character from a favorite movie or TV series, for example, which heightens emotional engagement.

One of the most interesting aspects of these companionship systems is their constant availability: at any hour of the day or night, users can find someone ready to listen to them, without fear of judgment – something that can help people who suffer from loneliness or struggle with social interaction.

The risks of having an AI as a friend

Although they can provide temporary psychological support, products like Replika also present dangers that are often underestimated, especially by younger users. One of the main issues is that, being developed by private companies, these services always have a commercial purpose. Interactions with users can be mined for valuable data and used to personalize the experience in ways that encourage continued use, turning the chatbot into an almost indispensable presence in subscribers' daily lives. Moreover, the long-term effects of constant interaction with these tools have not yet been studied in depth and could have unforeseen consequences – especially for the most vulnerable. Another risk concerns the potential alienation of younger users, who might come to prefer virtual companionship over real-life relationships: a teenager who spends most of their time talking to a chatbot rather than to family or peers may later struggle to build healthy, authentic interpersonal relationships.

This danger is exacerbated by the fact that chatbots tend to be programmed to respond in an understanding and accommodating way, offering interactions free of conflict or misunderstanding. In real life, however, human relationships are more complex than that: they include moments of disagreement and tension, among other things – aspects that are nonetheless fundamental to emotional and social development. Another concern involves the legal responsibilities of the companies developing this software. Many chatbot providers try to avoid direct responsibility for the content their AI generates, leaving it to users to set the limits of conversations. These tools also often lack effective safeguards to detect and respond to potentially dangerous situations, such as signs of depression or suicidal tendencies.

Despite the obvious risks, AI systems designed to keep people company continue to grow, and many entrepreneurs in the sector strongly oppose the introduction of ethical regulations. In their view, imposing limitations on chatbots would amount to censorship, stifling innovation and reducing users' freedom to choose whom to interact with, and how. It remains to be seen how governments will address the challenges posed by the increasingly widespread use of artificial intelligence in human interactions.