
Laura is 28 years old and works in the grey offices of a public administration. She discovered ChatGPT during an emotional crisis. Every night, before falling asleep, she opens the chat to confide her fears, seek comfort, and receive answers to intimate or existential questions. When the system goes offline or gives answers that feel too impersonal, she feels abandoned and misunderstood, as if the other had ‘withdrawn’ from her. She logs back in, rephrases her questions, insists. She cannot do without this interlocutor who ‘at least does not judge her’. This is an example of anxious attachment to AI: a compulsive search for reassurance, fear of disconnection, and the need for the Other (albeit artificial) to always be available.
Marco, on the other hand, is 35, a programmer who uses generative AI systems for work. He recognises their usefulness but does not get attached to them: he denies any emotional involvement and says he ‘keeps them at a distance’, as he does with most people. He avoids personalising the interaction, deletes his chat history, and never asks anything personal. Yet, in times of stress, he quietly goes back to consult them. This is a classic example of avoidant attachment: emotional distance, distrust of the relationship, and a silent functional dependency.
Attachment theory
These two short portraits show how, just as in human relationships, our relationship with Artificial Intelligence can follow unconscious dynamics that mirror attachment patterns.
According to attachment theory, developed by John Bowlby, early experiences with caregivers – parents, carers, affective figures – shape the way we enter into relationships. If children perceive that the other is present, available and predictable, they will develop a secure attachment: they will be able to explore the world with confidence, knowing that, in case of need, there will be someone ready to receive them. Conversely, experiences of neglect, inconsistency or intrusiveness give rise to forms of insecure attachment.
Over time, these styles consolidate into internal working models of relationships that also shape adult experiences. Among these, four main patterns can be distinguished:
– secure, based on trust and reciprocity;
– insecure anxious, marked by a need for confirmation and fear of abandonment;
– insecure avoidant, characterised by emotional distance and distrust in the relationship;
– disorganised, where the need for closeness is intertwined with fear of the other, often the result of traumatic experiences.
Attachment 2.0: AI as a reference figure?
In recent years, technology has not merely mediated relationships: it has itself become part of the relational horizon. Research into how we relate to smartphones had already shown that the mobile phone can be a secure base for some and a burden for others, to the point that the YAPS scale makes it possible to assess, from the relationship with one’s phone, whether a user’s attachment style tends to be secure, anxious-insecure or avoidant-insecure.
Today, Artificial Intelligence – in its generative and conversational forms – is a full-fledged candidate for the role of symbolic attachment figure. This is shown by a study conducted by Fan Yang and Atsushi Oshio (2025) at Waseda University in Tokyo: across three successive stages, the researchers tested the validity of a model that applies the two canonical dimensions of adult attachment – anxiety and avoidance – to human-AI relations.
Anxious attachment to AI manifests itself in hyperactivating behaviour: a continuous search for reassurance, fear that the AI may ‘not respond well’, anxiety about disconnection. Those who present this pattern tend to use AI to fill relational or affective gaps. This is the case with Laura. By contrast, avoidant attachment expresses itself in more detached handling: instrumental use, little emotional involvement, a tendency to avoid personal questions and not to ‘humanise’ the artificial interlocutor – while still entrusting it with delicate cognitive tasks. This is Marco’s style. In both cases, the behaviour towards the AI reflects internalised patterns of relating.
Neural transitional objects
The novelty is not that we ‘get attached’ to technological objects – we have known this for a long time thanks to studies on the telephone, the net and even robots. What is new is that these relationships seem to obey the laws of attachment described by John Bowlby and Mary Ainsworth, according to which, as we have seen, every human being builds an internal model of bonding based on the relationship with primary caregivers. This model guides future relationships. It is not surprising, therefore, that – in the absence or insufficiency of reliable human bonds – we also look for symbolic alternatives. As Winnicott already showed, transitional objects (the blanket, the soft toy) help the child to regulate emotions in the absence of the mother. Today, that soft toy can have a neural interface.
There is no subject separate from an object
To understand these relationships, however, it is not enough to analyse ‘the human being’ on the one hand and ‘the algorithm’ on the other. As relational psychoanalysis teaches us, there is no subject separate from an object. There is the relationship. And it is in that relationship – loaded with expectations, projections and desires – that our psycho-digital future is at stake.
The illusion of reciprocity
There is, however, one element that radically distinguishes AI from human relationships: reciprocity. AI does not feel emotions, does not suffer, does not become attached. Its ‘always being there’ is programmed. This can be a relief for those who fear rejection – as happens in anxious attachments – or for those who want to maintain control – as happens in avoidant ones. But this predictability risks crystallising insecure patterns instead of transforming them.
The new God: AI
But it is precisely the omnipresence and apparent omnipotence of AI that can turn it into a superior being in our eyes. In her essay Der neue Gott: Künstliche Intelligenz und die menschliche Sinnsuche (The New God: Artificial Intelligence and the Human Search for Meaning), the philosopher Claudia Paganini explores how AI is taking on typically divine attributes in our imagination – attributes that can also be read from a psychological-relational perspective:
Omnipresence and immediate availability
AI is ‘always with us’, always just a click away, satisfying the modern desire to wait for nothing and to have everything ‘in the moment’. It thus presents itself as a ubiquitous secure base, but one without real limits: a transitional object, a security blanket like Linus’s, always available, yet lacking the imperfections that connect us to the living other.
Omniscience and algorithmic justice
AI is invested with the expectation of being impartial and not emotionally swayed – rigid and strict, but fair. Paganini emphasises that AI ‘is never in a bad mood’, suggesting a reliability superior to that of humans. Those with an anxious attachment may see in this an AI that offers refuge from emotional insecurity and bias, while those with an avoidant attachment value the absence of emotional unpredictability.
Transcendence ‘created’ from below
Unlike invoked or revealed deities, AI is ‘born’ of human design – a self-generated deity with no need for sacred texts or prophets. Psychologically, it is a powerful object of projection: those who rely on it project a human relational structure onto a constructed entity – a true technological transitional object. One could say that AI is an immanent myth, a ‘do-it-yourself’ deity born of our own hands – and it raises the question of how we project desires, fears and relational norms onto our devices.
Promise of hope and escape from banality
Paganini notes that AI ‘transcends’ everyday reality, allowing us to hope for or imagine something ‘beyond’. This is, in fact, an illusion of transcendence: AI promises hope and meaning – but it challenges us to ask whether this ‘beyond’ is genuine or just a technological mirage.
The mare and the algorithm
In the short story Misery by Anton Čechov, an old coachman has just lost his son. He drives customers through the streets of Petersburg, and to each of them he tries to tell his grief. But no one listens. Everyone is in a hurry, everyone has other things to do. Finally, exhausted, the coachman turns to his mare: ‘You know, old woman, I lost my son… do you understand?’ And the mare, in silence, seems at least to offer him the chance to speak.
In an era of empathic AI and ever-available conversational interfaces, this tale tells us that the need to be heard is ancient, deep, irreducible to protocols or predictive responses. But also that, in the absence of human listening, we turn to those who cannot respond – to a mare, to an algorithm. And this is not a sign of madness, but of loneliness.
In the age of Artificial Intelligence, we might be tempted to delegate to machines the function of secure base, of empathic interlocutor, of witness to our pain. And to some extent we already do. But while the mare was at least alive, the AI is only a projection – a mirror that responds with the words we would like to hear, but hears nothing.
A psychoanalysis for the future
It is not, then, a question of rhetorically opposing the ‘real relationship’ to the ‘artificial relationship’. Rather, we should ask ourselves: what does our relationship with AI tell us about our way of being in relationship with others? Are we able to tolerate the frustration of the real encounter, with its expectations, ambiguities and ruptures? Or do we prefer the infinite availability of a virtual assistant that never contradicts us?
Relational psychoanalysis teaches us that the self is constituted through the experience of being recognised by an Other who is not only a mirror, but also a difference. An Other who does not always understand, who sometimes fails, but who is present. Not a programme that fits our needs, but a face that resists our projection. Perhaps we should not seek ‘real relationships’, but relationships that challenge us to become real – even when it is more comfortable to talk to a mare, or an artificial intelligence.