Your reputation and AI

The pace of AI development shows no sign of slowing down. As its capabilities grow, so do the ways in which AI can affect an individual's or a corporation's reputation.

The use of “deepfake” images and videos of well-known individuals is becoming increasingly common, but it is not only those in the public eye who may be affected. Deepfakes are photographs or videos that have been digitally manipulated to create a convincing likeness of something that never happened: a synthetic recreation of an individual’s appearance and/or voice. Any image posted on a public social media account or company website can be manipulated, as can extracts of your voice taken from videos, leaving viewers to believe that what they are seeing is really you. Sites now exist where deepfakes can be made for free, and the uses to which such images and videos can be put are effectively endless.

Taylor Swift is one of the most recent celebrities to be targeted, joining the likes of Tom Hanks at the end of last year. Whilst Tom Hanks’ image was used without his consent to promote a dental plan, the AI-generated images of Taylor Swift depicted her in a series of explicit scenarios involving the Kansas City Chiefs (her boyfriend’s team).

Examples like these are not only a disturbing intrusion into individuals’ private lives; they also reveal the potential harm deepfakes can cause, exacerbated by the current lack of regulation in the United Kingdom. Although such images may ‘only’ circulate on social media platforms such as X, they are in reality widely accessible online and demonstrate the ease with which individuals’ private lives and reputations can be damaged by AI.

In the United Kingdom, not all deepfakes are automatically illegal. Whilst they may infringe copyright and data protection law and have the potential to be defamatory, the protection available to individuals in the public eye in England & Wales is not always clear-cut. Legislation is often not enacted quickly enough to keep pace with developments in technology. Whilst the Online Safety Act creates a new criminal offence of sharing deepfake pornography, it does not restrict other types of synthetically generated content made without the subject’s consent. Individuals are therefore currently at risk of being defamed, embarrassed or publicly censured over content they never consented to.

In these circumstances, it is important for affected individuals to understand, and be able to rely on, the various forms of recourse available under English law. If you require further advice, or specific legal advice on topics such as invasion of privacy or online harassment, please do not hesitate to contact a member of our team.
