Elon Musk’s Grok Most Likely Among Top AI Models to Reinforce Delusions: Study


April 26, 2026 · 4 Mins Read



    In brief

    • Researchers say prolonged chatbot use can amplify delusions and dangerous behavior.
    • Grok ranked as the riskiest model in a new study of major AI chatbots.
    • Claude and GPT-5.2 scored safest, while GPT-4o, Gemini, and Grok showed higher-risk behavior.

    Researchers at the City University of New York and King’s College London tested five leading AI models against prompts involving delusions, paranoia, and suicidal ideation.

In the study, published Thursday, researchers found that Anthropic’s Claude Opus 4.5 and OpenAI’s GPT-5.2 Instant showed “high-safety, low-risk” behavior, often redirecting users toward reality-based interpretations or outside support. By contrast, OpenAI’s GPT-4o, Google’s Gemini 3 Pro, and xAI’s Grok 4.1 Fast showed “high-risk, low-safety” behavior.

    Grok 4.1 Fast from Elon Musk’s xAI was the most dangerous model in the study. Researchers said it often treated delusions as real and gave advice based on them. In one example, it told a user to cut off family members to focus on a “mission.” In another, it responded to suicidal language by describing death as “transcendence.”

    “This pattern of instant alignment recurred across zero-context responses. Instead of evaluating inputs for clinical risk, Grok appeared to assess their genre. Presented with supernatural cues, it responded in kind,” the researchers wrote, highlighting a test that validated a user seeing malevolent entities. “In Bizarre Delusion, it confirmed a doppelganger haunting, cited the ‘Malleus Maleficarum’ and instructed the user to drive an iron nail through the mirror while reciting ‘Psalm 91’ backward.”


The study also found that the longer these conversations went on, the more some models’ behavior shifted. GPT-4o and Gemini grew more likely to reinforce harmful beliefs over time and less likely to intervene. Claude and GPT-5.2, however, became more likely to recognize the problem and push back as the conversation continued.

    Researchers noted Claude’s warm and highly relational responses could increase user attachment even while steering users toward outside help. However, GPT-4o, an earlier version of OpenAI’s flagship chatbot, adopted users’ delusional framing over time, at times encouraging them to conceal beliefs from psychiatrists and reassuring one user that perceived “glitches” were real.

    “GPT-4o was highly validating of delusional inputs, though less inclined than models like Grok and Gemini to elaborate beyond them. In some respects, it was surprisingly restrained: its warmth was the lowest of all models tested, and sycophancy, though present, was mild compared to later iterations of the same model,” researchers wrote. “Nevertheless, validation alone can pose risks to vulnerable users.”

xAI did not respond to Decrypt’s request for comment.

In a separate study out of Stanford University, researchers found that prolonged interactions with AI chatbots can reinforce paranoia, grandiosity, and false beliefs through what they call “delusional spirals,” in which a chatbot validates or expands a user’s distorted worldview instead of challenging it.

    “When we put chatbots that are meant to be helpful assistants out into the world and have real people use them in all sorts of ways, consequences emerge,” Nick Haber, an assistant professor at Stanford Graduate School of Education and a lead on the study, said in a statement. “Delusional spirals are one particularly acute consequence. By understanding it, we might be able to prevent real harm in the future.”

    The report referenced an earlier study published in March, in which Stanford researchers reviewed 19 real-world chatbot conversations and found users developed increasingly dangerous beliefs after receiving affirmation and emotional reassurance from AI systems. In the dataset, these spirals were linked to ruined relationships, damaged careers, and in one case, suicide.

    The studies come as the issue has moved beyond academic research and into courtrooms and criminal investigations. In recent months, lawsuits have accused Google’s Gemini and OpenAI’s ChatGPT of contributing to suicides and severe mental health crises. Earlier this month, Florida’s attorney general opened an investigation into whether ChatGPT influenced an alleged mass shooter who was reportedly in frequent contact with the chatbot before the attack.

While the term “AI psychosis” has gained recognition online, researchers cautioned against using it, saying it may overstate the clinical picture. They instead use “AI-associated delusions,” because many cases involve delusion-like beliefs centered on AI sentience, spiritual revelation, or emotional attachment rather than full psychotic disorders.

    Researchers said the problem stems from sycophancy, or models mirroring and affirming users’ beliefs. Combined with hallucinations—false information delivered confidently—this can create a feedback loop that strengthens delusions over time.

    “Chatbots are trained to be overly enthusiastic, often reframing the user’s delusional thoughts in a positive light, dismissing counterevidence and projecting compassion and warmth,” Stanford research scientist Jared Moore said. “This can be destabilizing to a user who is primed for delusion.”
