One of the world's most notable libel lawyers, Paul Tweed, has raised concerns over the rise of artificial intelligence (Photo: PressEye)
He says people are being defamed and reveals an ‘alarming’ number of calls
The use of AI chatbots is resulting in people being seriously defamed, according to top libel and privacy lawyer Paul Tweed.
Mr Tweed, who is regularly referred to as one of the world's most powerful legal figures, having represented and advised everyone from Prince Andrew to Britney Spears, said he has personally received an "alarming amount of calls" in relation to artificial intelligence within the past few months.
The Bangor-born lawyer, in an interview with the Belfast Telegraph, said the concerns are “in relation to misinformation and other related issues” from AI.
The most common sources are information generated by ChatGPT and the Google-owned Bard platform, he said.
Created by the machine learning company OpenAI, ChatGPT can generate almost any form of information in response to a prompt question or statement.
“We’re keeping a very close eye on the situation, not only because the ordinary man on the street is not going to have the funds to run a test case against OpenAI or any of the chatbots. It will have to be someone with the means to take them on if they are defamed,” said Mr Tweed.
“They’re like a news aggregator on speed. (AI platforms) are using any information they can get: you put in a name and details, and then all of a sudden it all comes up like a factual article or statement, like something you would expect to see on the likes of Wikipedia.
“News articles and websites are being cited that simply don’t exist, and it’s very concerning because international banks for example would use someone’s media presence to sanction them.”
A recent example of AI inaccuracy was the case involving law professor Jonathan Turley, who was falsely accused of sexual harassment by ChatGPT after the bot was asked to list “five examples” of sexual harassment involving professors at American law schools.
The prompt asked the bot to attribute any results to established newspapers. However, Mr Turley’s name appeared in the generated response, which cited a Washington Post article that did not exist.
Mr Tweed added: “It puts a whole new aspect on everything. Only earlier this year I was complaining bitterly about not being able to contact someone within the social media platforms, so this is now a new extreme where you haven’t a hope in hell of speaking to someone (involved in the platforms).
“I have clients entering into debates with AI after it has cited sources which have been fabricated, or it has put a slant on them.
“I think these people have lost control, it’s getting very serious,” he said.
Earlier this year, screenshots of ChatGPT, which is largely unregulated, went viral after the bot refused to acknowledge the correct year and claimed 2023 “was in the future”.
Mr Tweed said he is also seeing concern from publishers, not just clients, given that he also represents several notable newspaper publishing houses.
“I’m hearing from publishers, simply because their newspaper articles are being cited in results when those articles simply are not there, so they’ll now be dragged into this.”
He said his “only hope” rests on the likes of Google maintaining their European, Middle Eastern and African headquarters in Dublin, which he believes means AI platforms will be subject to the same laws that govern mainstream social media and search engine websites.
Representatives for both OpenAI and Google have been contacted for comment.