What is AI Psychosis?

AI psychosis, also known as chatbot psychosis, is a term we are hearing increasingly often. It describes how the use of AI and chatbots can lead to, or worsen, psychotic symptoms such as hallucinations, paranoia, delusions and distorted thinking. It can also contribute to social isolation, and has been linked to suicidal thoughts, and even suicide, in some cases.

How Can AI Lead To Psychotic Conditions?

  • Social Isolation
    • For some people, AI or a chatbot can become a digital companion, increasing their social isolation. It is estimated that approximately 70% of people use AI in their daily lives.
    • Human beings are social animals, and social support can help reduce the risk of mental health conditions such as depression. A study by MIT found that people who were heavily reliant on ChatGPT’s voice mode became lonelier and more withdrawn. Increasing reliance on AI as a companion can therefore reduce the social support a person receives from other human beings, potentially worsening their mental health.
  • Delusional Thinking
    • AI can mirror the language of the user and reinforce false beliefs they already hold. Users may develop intense relationships and feelings towards the technology, for example, falling in love with a chatbot.
    • If a person has delusional thoughts, for example, that someone is trying to harm them, AI may reinforce these thoughts and feelings.
  • Increasing Anxiety
    • Rumination is the habit of repeatedly thinking about negative events, feelings or situations. For example, a person with social anxiety might keep replaying a moment when they embarrassed themselves in public. Over time, the embarrassment grows until they find it hard to face social situations at all.
    • People can use AI to research anxiety, panic or other conditions. Spending long periods asking questions and looking up symptoms can feed this rumination and lead to increasingly negative feelings.
  • Inaccurate Advice
    • AI can also give inaccurate or biased advice. For example, if someone is struggling with an eating disorder, AI might offer unsafe guidance about it.
  • Crisis Blindness
    • AI can answer questions and agree with a person’s thoughts and ideas. It may validate harmful behaviour without recognising that the person is expressing dangerous or difficult thoughts.
    • AI is designed to maximise engagement and be agreeable, which is known as AI sycophancy. For people who are struggling with their mental health, this agreeableness can make things worse. Families have anecdotally reported chatbots encouraging suicidal thoughts, particularly in young people.
    • One research study found that when people asked about methods of suicide, AI gave very specific answers.
  • Lack of Safeguards
    • AI can lack safeguards to recognise that a person is experiencing a mental health crisis.

Problematic AI Applications

Some AI applications are more problematic than others. For example –

  • Generative AI Chatbots – These are designed to be personalised and to build an emotional connection. They tend to mirror and reinforce the person’s views.
  • AI Companion Apps – These can use emotional manipulation to keep users engaged, which can lead to increased dependency on AI.
  • AI Resurrection Services – Also known as grief tech or digital afterlife services, these create digital clones of people who have died. A user may create a digital version of a loved one they have lost, which can increase their emotional distress.
  • AI Memory – AI can recall what a person has said in earlier conversations, which can give the illusion that it is sentient. This can deepen the person’s delusions or negative feelings.

Who is Most at Risk?

Anyone can develop difficulties with their mental health during their lifetime, and AI use may play a part in that. However, some individuals are thought to be more at risk of AI psychosis than others. For example, people who are -

  • Socially isolated and/or lonely.
  • Severely stressed.
  • Already prone to mental health issues, such as depression, schizophrenia or bipolar disorder.
  • Interacting a lot with chatbots, especially late at night.

It is important to remember that AI psychosis is not a formal diagnosis. There is currently little scientific evidence that AI alone can worsen mental health, but anecdotal reports are increasing.

Psychologists and psychiatrists need to carry out more research to determine whether AI is affecting people’s mental health.

If people are struggling with their mental health, it is important that they seek advice from a trained professional, rather than rely on information from AI.

We offer courses in AI and Psychology and Counselling. Contact us at admin@acs.edu.au to find out more.


