Will deepfake faces soon be more trustworthy than real ones?

People are finding it increasingly difficult to tell AI-generated faces from real ones. That is the finding of a study by two American researchers.


If people increasingly perceive computer-generated faces as real, that is cause for concern. AI creations can be used as so-called "deepfakes" for criminal purposes such as revenge porn and fraud. With the help of an AI, for example, the face of an ex-partner can be inserted into arbitrary images or videos and animated with synthetic facial expressions.

"We found that synthetic faces are not only very realistic, they are rated as more trustworthy than real faces," Hany Farid, a computer science professor at the University of California, Berkeley, told Scientific American.

As part of the study, the research team compared over 400 AI-generated faces with 400 real photos. The test subjects (315 without training and 219 with training) each judged 128 images and had to decide which were genuine. The sobering result: the untrained group's hit rate was only just under 50 percent, no better than chance.

According to the researchers, one reason so many synthetic faces went unrecognized could be that people tend to trust "average" faces in everyday life, and AI-generated faces often resemble such averages. Detection rates hardly differed across ethnicities, for example.

Source: mimikama.at


