    EU vs X: Grok’s Explicit-Image Mess Has Officially Crossed the Line

January 28, 2026 · 3 Mins Read


The EU has now singled out X – and this time it's not about politics, misinformation, or some nebulous free-speech argument.

It has to do with porn: specifically, the sexually explicit images that can be created with Grok, the AI built into Elon Musk's platform, and whether some of them were used to produce "digital undressing" content.

This is the sort of thing that makes your stomach clench when you read it, because the harm isn't abstract. It's targeted, personal, and in some cases may be illegal.

And note the mood, too. This isn't the EU being melodramatic. This is the EU saying, "Enough."


Regulators are concerned about how fast this kind of content spreads online, and about the simple fact that once a deepfake explicit image is out there, it's not going to disappear.

The damage is done, even if the platform takes the content down, even if the account is banned.

Now here's the kicker. People keep acting surprised when AI gets put to the worst uses. But, let's face it – are we really surprised?

You unleash a powerful image tool on millions of people and the internet does what it always does: takes its shiny new toy and looks for ways to hurt someone with it.

That's why this investigation isn't merely "EU angry at chatbot." It's happening under the Digital Services Act, which essentially requires big platforms to behave like responsible adults.

The question is whether X took a reasonable approach to risk assessment and put sufficient safety guardrails in place. Not after the damage. Before.

X has apparently taken some measures in response, such as restricting certain features and tightening controls (for example, putting some image-generation functions behind a paywall).

That's… something, I guess. But if you're the one whose image was altered and circulated, it probably doesn't feel like a win. It's like locking the front door only after your house has been robbed.

    And here’s another uncomfortable fact: Platforms today don’t simply “host content.” They amplify it. They recommend it. They push it into feeds.

That's why the EU isn't just concerned about Grok-generated explicit images – it wants to know whether X's systems made that content travel faster and further than it ever should have.

    What’s frightening is that this is about to become the new normal.

AI image generation is not going anywhere. In fact, it's only getting better, faster, cheaper, and more realistic.

    Which is to say the “gross uses” are going to multiply as well. Today it’s Grok. Tomorrow it’s another model, another platform, another crop of victims.

And it's not just celebrities anymore; it's classmates, colleagues, ex-partners, and random women on the internet who posted one selfie in 2011 and now rue the day they ever existed online.

That's why the EU inquiry is important. And not because it's fun to watch Big Tech sweat (though, OK, that part is satisfying).

It matters because this is one of the first high-profile tests of whether governments can actually compel platforms to treat AI harm as a real emergency and not just a side quest.

And if X fails this test? Expect regulators to get more aggressive across the board – because the next platform in their crosshairs may not get as many chances.



    CryptoExpert
    © 2026 BriefChain.com - All rights reserved.