Grok AI News
10 articles on Grok AI

Life after Molly: Ian Russell on big tech, his daughter’s death – and why a social media ban won’t work
Molly Russell was just 14 when she took her own life in 2017, and an inquest later found negative online content was a significant factor. With many people now pushing for teenagers to be kept off tech platforms, her father explains why he backs a different approach
Ian Russell describes his life as being split into two parts: before and after 20 November 2017, the day his youngest daughter, Molly, took her own life as a result of depression and negative social media content. “Our life before Molly’s death was very ordinary. Unremarkable,” he says. He was a television producer and director, married with three daughters. “We lived in an ordinary London suburb, in an ordinary semi-detached house, the children went to ordinary schools.” The weekend before Molly’s death, they had a celebration for all three girls’ birthdays, which are in November. One was turning 21, another 18 and Molly was soon to be 15. “And I remember being in the kitchen of a house full of friends and family and thinking, ‘This is so good. I’ve never been so happy,’” he says. “That was on a Saturday night and the following Tuesday morning, everything was different.” The second part of Russell’s life has been not only grief and trauma, but also a commitment to discovering and exposing the truth about the online content that contributed to Molly’s death, and campaigning to prevent others falling prey to the same harms. Both elements lasted far longer than he anticipated. It took nearly five years to get enough information out of social media companies for an inquest to conclude that Molly died “from an act of self-harm while suffering from depression and the negative effects of online content”. As for the campaigning, the Molly Rose Foundation provides support, conducts research and raises awareness of online harms, and Russell has been an omnipresent spokesperson on these issues. Continue reading...

Grok AI generated about 3m sexualised images in 11 days, study finds
Estimate made by Center for Countering Digital Hate after Elon Musk’s AI image generation tool sparked outrage
Grok AI generated about 3m sexualised images in less than two weeks, including 23,000 that appear to depict children, according to researchers who said it “became an industrial-scale machine for the production of sexual abuse material”. The estimate has been made by the Center for Countering Digital Hate (CCDH) after Elon Musk’s AI image generation tool sparked international outrage when it allowed users to upload photographs of strangers and celebrities, digitally strip them to their underwear or into bikinis, put them in provocative poses and post the images on X. Continue reading...

Musk’s X to block Grok AI tool from creating sexualised images of real people
UK government claims vindication after Keir Starmer criticised earlier decision to keep functionality as ‘horrific’
The UK government has claimed “vindication” after Elon Musk’s X announced it had stopped its AI-powered Grok feature from editing pictures of real people to show them in revealing clothes such as bikinis, including for premium subscribers. After a fortnight of public outcry at the tool embedded into X being used to create sexualised images of women and children, the company said it would “geoblock” the ability of users “to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X”, in countries where it was illegal. Continue reading...

UK politics: West Midlands crime commissioner resists calls for immediate sacking of chief constable – as it happened
Simon Foster says he will give report into force’s handling of Maccabi Tel Aviv fan ban ‘careful consideration’ in deciding Craig Guildford’s fate
Here are extracts from three interesting comment articles about the digital ID U-turn.
Ailbhe Rea in the New Statesman says there were high hopes for the policy when it was first announced. I remember a leisurely lunch over the summer when a supporter of digital IDs told me how they thought Keir Starmer would reset his premiership. Alongside a reorganisation of his team in Number 10, and maybe a junior ministerial reshuffle, they predicted he would announce in his speech at party conference that his government would be embracing digital IDs. “It will allow him to show he’s willing to do whatever it takes to tackle illegal immigration,” was their rationale. Sure enough, Starmer announced “phase two” of his government, reshuffled his top team and, on the Friday before Labour party conference, he duly announced his government would make digital IDs mandatory for workers. “We need to know who is in our country,” he said, arguing that the IDs would prevent migrants who “come here, slip into the shadow economy and remain here illegally”.
In policy terms, I don’t think you particularly gain anything by making the government’s planned new digital ID compulsory. One example of that: Kemi Badenoch has both criticised the government’s plans to introduce compulsory ID, while at the same time committing to creating a “British ICE” that would go around deporting large numbers of people living in the UK. In a country with that kind of target and approach, people would be forced to carry their IDs around with them in any case! The Online Safety Act, passed into law by the last Conservative government with cross-party support and implemented by Labour, presupposes some form of ID to work properly.
Here is the political challenge for Downing Street: the climbdowns, dilutions, U-turns, about-turns, call them what you will, are mounting up. In just the last couple of weeks, there has been the issue of business rates on pubs in England and inheritance tax on farmers.
We welcome Starmer’s reported U-turn on making intrusive, expensive and unnecessary digital IDs mandatory. This is a huge success for Big Brother Watch and the millions of Brits who signed petitions to make this happen. The case for the government now dropping digital IDs entirely is overwhelming. Taxpayers should not be footing a £1.8bn bill for a digital ID scheme that is frankly pointless. Continue reading...

Can X be banned under UK law and what are the other options?
UK media regulator is investigating whether X has breached the Online Safety Act – what could happen next?
The UK government is threatening Elon Musk’s X with the nuclear option under the country’s online safety laws: a ban. The social media platform is under pressure from ministers after it allowed the Grok AI tool, which is integrated within the app, to generate indecent images of unsuspecting women and children. The government has said it will support the media regulator Ofcom, which has launched an investigation into X, if it decides to push ahead with a ban. But is such a move likely? Continue reading...

UK media regulator investigating Elon Musk’s X after outcry over sexualised AI images
Liz Kendall describes content as vile and illegal and says Ofcom has the government’s backing to use its full powers
The UK media watchdog has opened a formal investigation into Elon Musk’s X over the use of the Grok AI tool to manipulate images of women and children by removing their clothes. Ofcom has acted after a public and political outcry over a deluge of sexual images appearing on the platform, created by Musk’s Grok, which is integrated with X. The areas under investigation include:
- Failing to assess the risk of people seeing illegal content on the platform.
- Not taking appropriate steps to prevent users from viewing illegal content such as intimate image abuse and CSAM.
- Not taking down illegal material quickly.
- Not protecting users from breaches of privacy law.
- Failing to assess the risk X may pose to children.
- Not using effective age checking for pornography.
Continue reading...

UK threatens action against X over sexualised AI images of women and children
Government signals support for possible Ofcom intervention on Grok as scrutiny of X’s AI tool intensifies
Elon Musk’s X “is not doing enough to keep its customers safe online”, a minister has said, as the UK government prepares to outline possible action against the platform over the mass production of sexualised images of women and children. Peter Kyle, the business secretary, said the government would fully support any action taken by Ofcom, the media regulator, against X – including the possibility that the platform could be blocked in the UK. Continue reading...

Elon Musk’s X threatened with UK ban over wave of indecent AI images
Platform has restricted image creation on the Grok AI tool to paying subscribers, but victims and experts say this does not go far enough
Elon Musk’s X has been ordered by the UK government to tackle a wave of indecent AI images or face a de facto ban, as an expert said the platform was no longer a “safe space” for women. The media watchdog, Ofcom, confirmed it would accelerate an investigation into X as a backlash grew against the site, which has hosted a deluge of images depicting partially stripped women and children. Continue reading...

No 10 condemns ‘insulting’ move by X to restrict Grok AI image tool
Spokesperson says limiting access to paying subscribers just makes ability to generate unlawful images a premium service
Downing Street has condemned the move by X to restrict its AI image creation tool to paying subscribers as “insulting”, saying it simply made the ability to generate explicit and unlawful images a premium service. There has been widespread anger after the image tool for Grok, the AI element of X, was used to manipulate thousands of images of women and sometimes children to remove their clothing or put them in sexual positions. Continue reading...

Hundreds of nonconsensual AI images being created by Grok on X, data shows
Sample of roughly 500 posts shows how frequently people are creating sexualized images with Elon Musk’s AI chatbot
New research that samples X users prompting Elon Musk’s AI chatbot Grok demonstrates how frequently people are creating sexualized images with it. Nearly three-quarters of posts collected and analyzed by a PhD researcher at Dublin’s Trinity College were requests for nonconsensual images of real women or minors with items of clothing removed or added. The posts offer a new level of detail on how the images are generated and shared on X, with users coaching one another on prompts; suggesting iterations on Grok’s presentations of women in lingerie or swimsuits, or with areas of their body covered in semen; and asking Grok to remove outer clothing in replies to posts containing self-portraits by female users. Continue reading...