- Author, Zoe Kleinman
- Role, BBC technology editor
Imagine this scene: you’re at home with your family when suddenly your phone starts ringing. People you know are warning you about something they have seen about you on social media.
It is a sinking feeling.
In my case, it was a screenshot, apparently taken from Elon Musk’s chatbot Grok – I couldn’t verify it – which put me on a list of the worst misinformation spreaders on X (formerly Twitter), alongside some of the biggest conspiracy theorists in the United States.
I had nothing in common with them, and as a journalist, it was not a top-10 list I wanted to be on.
Grok is not available in the UK, so I asked OpenAI’s ChatGPT and Google’s Bard to generate the same list, using the same prompt. Both refused, with Bard responding that it would be “irresponsible” to do so.
I’ve written a lot about artificial intelligence and the laws governing it, and one of people’s biggest concerns is how our laws will keep up with this rapidly changing and highly disruptive technology.
Experts in many countries agree that humans should always be able to challenge the actions of AI, and AI tools are increasingly being used both to generate content about us and to make decisions that affect our lives.
There is still no dedicated law in the UK regulating artificial intelligence; the government says any issues arising from its use should be handled by existing regulators.
I decided to try to put things right.
My first port of call was X itself, which ignored me, as it does with most media inquiries.
Then I tried two UK regulators. The first was the Information Commissioner’s Office, the government body responsible for data protection, but it suggested I go to Ofcom, which enforces the Online Safety Act.
Ofcom told me the list was not covered by the act because the activity was not criminal.
“Unlawful content… means the content must amount to a criminal offence, and therefore does not cover civil wrongs such as defamation,” it said. “The person would have to follow civil procedures to seek a remedy.”
This basically means I need a lawyer.
A number of legal cases are currently before courts around the world, but no precedent has yet been set.
In the United States, a radio host named Mark Walters sued OpenAI, the creator of ChatGPT, after the chatbot falsely stated that he had defrauded a charity.
An Australian mayor threatened to do the same after the same chatbot wrongly said he had been found guilty in a bribery case. He was in fact the whistleblower; the AI appears to have connected the wrong dots in the information about him. The matter ended in a settlement between the two parties.
I contacted two lawyers with experience in artificial intelligence. The first declined to take the matter on.
The second told me I was in legally untested territory in England and Wales, noting that what happened to me could amount to defamation, because I had been identified and the list had been published.
But she also said the onus would be on me to prove the content was harmful. Having to argue in court that being labelled a spreader of misinformation damages me as a journalist was not a prospect I relished.
I don’t know how I ended up on that list, or exactly who saw it. It was deeply frustrating that I couldn’t question Grok myself.
AI chatbots are known for “hallucinating” – the tech industry’s term for when they make things up. Even their creators do not fully understand why it happens. Chatbots carry disclaimers saying their output may not be reliable, and you may not get the same answer twice.
The final twist in the plot
I spoke to colleagues in the BBC’s fact-checking unit, a team of journalists who verify information and sources.
They did some digging, and they believe the screenshot that accused me of spreading misinformation – and started this whole story – was probably fake in the first place.
The irony of that is not lost on me.
But my experience has opened my eyes to just one of the challenges that lie ahead as AI plays an increasingly powerful role in our lives.
The task facing AI regulators is to ensure there is always a straightforward way for humans to challenge the machine. If an AI is lying about you, where do you start? I thought I knew the answer, but the road proved difficult nonetheless.