Detecting material generated by advanced AI algorithms is becoming increasingly difficult. The fakes are getting ever closer to perfect, while verification methods have struggled to keep pace. But scientists have just presented a new method that promises high effectiveness!

We live in an age of ubiquitous disinformation. Putting it into circulation and fueling it has never been easier. And if someone won’t take a claim at face value, it can always be backed up with an image or even a video. Fortunately or not, generating those doesn’t take long either and is within everyone’s reach. No secret knowledge is needed anymore; all you need are the right tools. This is a big problem, and the discussions around it never cease. Hijacking the image of celebrities to advertise services and products they would never endorse has become standard practice. So have attempts to sow political confusion.

As such material gets ever better in quality, recognizing it is a huge challenge. Scientists from the University of Hull in the UK have presented a new method for recognizing deepfakes. Importantly, this time it is said to be genuinely effective.

How to recognize a manipulated image? Look the subjects straight in the eye

The university’s representatives reported their discovery at the Royal Astronomical Society’s National Astronomy Meeting in Hull. Their findings suggest that images generated with the help of artificial intelligence can be detected by taking a closer look at… human eyes. Interestingly, the scientists note that the method is analogous to how astronomers study images of galaxies. The key element to focus on is the reflections visible in the eyes of the person whose photo we have doubts about. Both eyes look out at the same scene, so they should mirror the same light sources. If the reflections are consistent, the image probably shows a real person. If they don’t match, there is a high probability that it is a deepfake.

The team’s experiments showed that the technique works. They analyzed the reflections in the eyes in both real photographs and AI-generated images of faces, then used methods borrowed from astronomy to quantify the reflections and check the correspondence between the left and right eyes. Fake images often lack uniformity here, whereas in genuine photographs the reflections in both eyes usually show the same thing.

“To measure the shapes of galaxies, we analyze whether they are centrally compact, whether they are symmetric and how smooth they are. We analyze the light distribution. We detect reflections in an automated way and check their morphological features using the CAS (Concentration, Asymmetry, Smoothness) and Gini indices to compare the similarity between the left and right eyeballs. The results show that deepfakes have some differences between the pair,” said Professor Kevin Pimblet.
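To make the idea concrete, here is a minimal sketch of what such a comparison could look like. It is not the Hull team’s actual pipeline: it computes the Gini index (in the form popularized by galaxy-morphology studies) and a simple 180-degree rotational asymmetry for a crop around each eye, then compares the left and right values. The eye crops, the threshold, and the random stand-in data are all assumptions for illustration.

```python
import numpy as np

def gini(pixels):
    """Gini index of a pixel-flux distribution, in the form used for
    galaxy morphology: 0 means the light is spread perfectly evenly,
    1 means all of it sits in a single pixel."""
    x = np.sort(np.abs(pixels).ravel())  # flux values, ascending
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))

def asymmetry(img):
    """Rotational asymmetry (the 'A' of CAS, in one common form):
    difference between the image and itself rotated 180 degrees,
    normalized by total flux. 0 means perfectly symmetric."""
    rotated = np.rot90(img, 2)
    return np.sum(np.abs(img - rotated)) / np.sum(np.abs(img))

# Stand-ins for grayscale crops around each detected eye; a real
# pipeline would locate the eyes and extract these from the photo.
rng = np.random.default_rng(0)
left_eye = rng.random((32, 32))
right_eye = rng.random((32, 32))

# In a genuine photo both eyes see the same light sources, so the
# metrics should roughly agree; a large gap is the deepfake signal.
gini_gap = abs(gini(left_eye) - gini(right_eye))
asym_gap = abs(asymmetry(left_eye) - asymmetry(right_eye))
print(f"|dGini| = {gini_gap:.3f}, |dA| = {asym_gap:.3f}")

THRESHOLD = 0.1  # hypothetical cutoff; the study publishes no value
if gini_gap > THRESHOLD or asym_gap > THRESHOLD:
    print("Eyes disagree: possible deepfake")
else:
    print("Eyes agree: consistent with a real photo")
```

A real detector would segment the actual reflections before measuring anything and calibrate its cutoff on labeled data; the point here is only that a one-number morphological index makes the question “do the two eyes agree?” quantifiable.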

It’s not perfect yet, but it’s a step in the right direction.

The method developed by the scientists is probably not 100% effective yet, unfortunately. But we need tools that give us at least some control over this, and even imperfect ones will help. We regularly hear about new software that can create such media ever more convincingly, yet there is little talk about the other side of the coin: tools that would let us verify them. In an era when the older generation is still trying to understand how the internet works and is highly susceptible to such manipulation, and when (terrifyingly) the young keep falling for all kinds of scams, such tools should be installed by default on every device. And leading platforms should label suspicious material, much like the “made with AI” labels that recently appeared on Instagram.
