Press "Enter" to skip to content

What is a deepfake and how does it work?

A video of Mark Zuckerberg in which he says that one person holds all of his secrets has raised concern among industry experts. Although the message is false, built from phrases the Facebook CEO never actually uttered, the resulting video, which is part of an art exhibition, is so realistic that it gives you chills.

The technology behind the manipulation is known as the ‘deepfake’. It uses artificial intelligence to emulate facial gestures in a video and, from thousands of images of a person, produces quite convincing results. The technique first became widely known through a scandal involving fake pornographic videos created with images of famous actresses.

The main concern is that the authenticity of content is increasingly difficult to verify. These new capabilities deceive the untrained eye, and amid misinformation, fake news and coordinated campaigns to manipulate opinion on social media, it is hard to be prepared to check the veracity of whatever arrives on your WhatsApp.

In the opinion of Alexánder Caicedo, a postdoctoral researcher and professor in the Applied Mathematics and Computer Science Program at the Universidad del Rosario, the results of this system are surprising.


Mark Zuckerberg’s ‘Deepfake’ was posted on Instagram as part of an art exhibition that plays with the concept of technology.

(This technology could) facilitate the creation of ‘fake news’ and make a person say things they never said

“One thinks it is real, because the quality of the video is very good. Then, knowing that it was created artificially, it is a little scary, given the use that can be made of these technologies and the responsibility of those who create tools that can do a lot of damage,” he says.

The machine learning expert explains that the fundamental principle behind deepfakes is the generative adversarial network (GAN), an approach in which two algorithms compete with each other to create increasingly realistic pieces.

These networks resemble the figure of a policeman and a thief. A discriminator algorithm is trained to recognize something within a set of data, such as an apple. To achieve this, it needs a sufficient quantity and variety of apples, so that if it sees a green one, a red one or a water apple, it will be able to recognize and differentiate them.

The other algorithm, the generator, starts from prior criteria about what the discriminator would consider false or real. Its job is to generate content, such as artificially created photos of apples, to fool the discriminator. If the apple it produces is blue, square and full of holes, it will fail in the attempt, but as it adjusts, the algorithm becomes more and more effective.
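To make the policeman-and-thief dynamic concrete, here is a minimal sketch of that adversarial training loop in Python with PyTorch (the framework is an assumption; the article does not name one). The "real" data are points from a simple two-dimensional distribution, standing in for the photos of apples in the example, and the network sizes and learning rates are purely illustrative.

```python
# A minimal sketch of the adversarial game described above, using PyTorch
# (an assumption; the article does not name any specific framework).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Discriminator: the "policeman" that scores how real a sample looks.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
# Generator: the "thief" that turns random noise into fake samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))

opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # "real" samples
    fake = G(torch.randn(64, 8))                                  # generated samples

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to make the discriminator call its fakes "real".
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```

As the two networks alternate updates, the generator's samples drift toward the real distribution, which is the mechanism by which such generated imagery becomes more convincing over time.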

The ‘Portrait of Edmond de Belamy’ was sold in the United States for more than US$400,000.

Photo: taken from http://www.obvious-art.com

Uses beyond theory

These inverse problems generate and evaluate patterns and then rate the results probabilistically. According to Caicedo, such networks are used for many purposes, from producing images to automatically generating secure keys.

For example, months ago a painting generated by artificial intelligence was auctioned in the United States for more than US$400,000. The creation, the ‘Portrait of Edmond de Belamy’, inspired by the works of Rembrandt, was the product of a GAN algorithm built by the French collective Obvious, which was fed more than 15,000 existing portraits and paintings.

There are also applications in science. Caicedo notes that these networks are used to map dark matter concentrations and are an attractive technology for industry. Another use is in video game remasters, which take low-resolution titles and algorithmically upscale their graphics to 4K quality.

Can’t we believe anything anymore?

The mass spread of realistic videos in which the president of a nation could be made to say things that are not true could unleash a crisis of public opinion never seen before. Journalists, researchers and civil society organizations already have many concerns about ‘fake news’, which until now has relied on biased texts and edited images to take facts out of context.

Famous actresses such as Emma Watson, Gal Gadot and Scarlett Johansson have already been affected by videos that simulated explicit sexual encounters.


The realistic animation of famous faces in pornographic videos came to light in February 2018. The content was created with an artificial intelligence tool.

Deepfakes are more than GANs. To achieve their chillingly realistic results, they use other artificial intelligence tools to blend a face, or an artificially generated part of a face, into real video. Things like the shadows in the environment, the gestures and the camera angles are adjusted by the systems.
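As a rough illustration of that compositing step only (not of the actual deepfake software, which the article does not describe in detail), the sketch below blends a synthetic "face" patch into a synthetic frame using OpenCV's Poisson blending; the arrays stand in for a GAN output and a real video frame.

```python
# A minimal sketch of the compositing step: pasting a generated patch into a
# real frame so that lighting and borders blend smoothly. OpenCV's Poisson
# blending (cv2.seamlessClone) is used here as a stand-in for whatever tools
# actual deepfake pipelines employ.
import numpy as np
import cv2

# A "real" frame: a gradient image standing in for a video frame.
frame = np.tile(np.linspace(60, 200, 640, dtype=np.uint8), (480, 1))
frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

# A "generated face": a bright patch standing in for GAN output.
face = np.full((120, 100, 3), 230, dtype=np.uint8)

# Mask marking which pixels of the patch should be transferred.
mask = np.full(face.shape[:2], 255, dtype=np.uint8)

# Blend the patch into the frame around a chosen point; Poisson blending
# adjusts the patch's colors so shadows and edges match the background.
center = (320, 240)
composite = cv2.seamlessClone(face, frame, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("composite.png", composite)
```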

As Caicedo puts it, this technology could “facilitate the creation of ‘fake news’ and make a person say things they never said”. In his opinion, the greatest risk is that an artificial video is used as proof of a message on an emotionally charged issue that seeks to polarize public opinion.

However, he maintains that the problem is the use, not the tool itself. In fact, he argues that the same technology could be the solution to an eventual wave of disinformation, if adversarial networks were trained to recognize the traces that similar systems leave in content.

“This same method can allow us to develop algorithms that help us detect what is real and what is ‘fake’. The cure for those risks could be an adversarial network that hunts down the other one.”
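A minimal sketch of the detector Caicedo envisions, again assuming PyTorch and with random tensors standing in for labeled real and generated frames: the same kind of network that plays discriminator in a GAN is trained on its own as a real-versus-fake classifier.

```python
# A minimal sketch of a real-vs-fake detector, following the idea that the
# discriminator side of the adversarial game can be reused to flag generated
# content. The image batches here are placeholders, not real data.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # one logit: how likely the image is authentic
)
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    # Placeholder batches: in practice these would be authentic frames and
    # frames produced by a face-swapping system.
    real_images = torch.rand(16, 3, 64, 64)
    fake_images = torch.rand(16, 3, 64, 64)

    images = torch.cat([real_images, fake_images])
    labels = torch.cat([torch.ones(16, 1), torch.zeros(16, 1)])

    opt.zero_grad()
    loss = loss_fn(detector(images), labels)
    loss.backward()
    opt.step()
```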

What ultimately determines which algorithm prevails is the data used to train them. The quality of the data changes the likelihood of the results. This, along with the skill and cunning of researchers, will be key in the battle against the next generation of fake news.

TECNÓSFERA EDITORIAL STAFF
@TecnosferaET
