Artificial intelligence has been applied to video games in many different ways. Its use dates back to the earliest Atari titles, which featured very basic examples of game AI. Today, neural networks allow AI systems to behave far more intelligently. Video games are not just entertainment: they provide a platform where neural networks can learn to interact with dynamic environments and solve complex problems, much as they would in real life. Indeed, games have been used for decades to evaluate the performance of artificial intelligence.

Game studios spend millions of dollars and thousands of hours developing game graphics to make them look as close to reality as possible. Although the graphics of the past few years look surprisingly realistic, they are still easy to distinguish from the real world. Given the vast advances made in image processing with deep neural networks, isn't it time to use this technology to improve graphics while reducing the human effort needed to create them?

Answering this question with FIFA 18

To find out whether recent developments in deep learning can answer this question, we focus on enhancing player faces in FIFA 18 using the well-known deepfakes algorithm. Deepfakes is a deep-neural-network technique that can be trained to recognize and generate highly realistic human faces. Our goal in this project is to recreate player faces from within the game and improve them so that they look exactly like the real players. Using autoencoders and convolutional neural networks, the algorithm can replace one person's face in a video with another's.

Data collection

Let's start with one of the best-designed faces in FIFA 18, Cristiano Ronaldo's, and see whether we can improve it. To collect the data needed by the deepfakes algorithm, we simply recorded the player's face using the in-game replay option. We now want to replace this face with Ronaldo's real one, so we downloaded images of Ronaldo from different angles via Google. Unlike the methods used by game developers, all the required data can be collected from a Google search, without Ronaldo having to wear any special gear for an image-capture session.
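To turn a recorded replay clip into a training set, you would sample frames from the video at regular intervals. A minimal sketch of that step is shown below; `sample_frames` is a hypothetical helper name, and in practice the frames would come from the recorded video (e.g. via OpenCV's `cv2.VideoCapture`), whereas here random stand-in arrays are used:

```python
import numpy as np

# Hypothetical helper: keep every n-th frame from a recorded replay clip.
# Frames are HxWx3 image arrays; a real pipeline would also detect and
# crop the face region in each kept frame before training.
def sample_frames(frames, every_n=10):
    """Return every n-th frame from the sequence."""
    return [f for i, f in enumerate(frames) if i % every_n == 0]

# Stand-in for a ~100-frame replay clip (real frames would be decoded
# from video with a library such as OpenCV).
clip = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(100)]
kept = sample_frames(clip, every_n=10)
```

Sampling sparsely like this keeps the dataset small while still covering a variety of head poses from the replay.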

Architecture and model training

The algorithm involves training deep neural networks called autoencoders. These networks are used for unsupervised learning: an encoder compresses the input into a compact representation, and a decoder then reconstructs the original input from it. For images like ours, we use a convolutional network as the encoder and a deconvolutional network as the decoder. The architecture is trained to minimize the reconstruction error.
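The encoder/decoder pair described above can be sketched in a few lines of PyTorch. This is a minimal illustration of the idea, not the exact deepfakes network; the layer sizes and 64x64 input resolution are assumptions chosen to keep the example small:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder sketch (illustrative only)."""
    def __init__(self):
        super().__init__()
        # Encoder: convolutions downsample 3x64x64 -> compact feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
        # Decoder: deconvolutions upsample back to the input size
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.rand(2, 3, 64, 64)          # a batch of stand-in face crops
recon = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction error to minimize
```

Training repeatedly minimizes this reconstruction loss until the decoder can rebuild the input faces from the encoder's compressed representation.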

In our case, we train two networks simultaneously. One learns to recreate Ronaldo's face from FIFA 18 game graphics, and the other learns to recreate it from real photos of Ronaldo. In deepfakes, the two networks share a single encoder but use two separate decoders. We thus end up with networks that have learned what Ronaldo looks like both in the game and in real life.
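The shared-encoder, two-decoder setup can be sketched as a joint training loop. Again this is a simplified assumption-laden sketch (tiny layers, random tensors standing in for the two datasets), not the production deepfakes code:

```python
import torch
import torch.nn as nn

# One shared encoder, two decoders: one for game faces, one for photos.
def make_encoder():
    return nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                         nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU())

def make_decoder():
    return nn.Sequential(nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                         nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid())

encoder = make_encoder()
decoder_game, decoder_real = make_decoder(), make_decoder()
opt = torch.optim.Adam(list(encoder.parameters())
                       + list(decoder_game.parameters())
                       + list(decoder_real.parameters()), lr=1e-3)

game_faces = torch.rand(4, 3, 64, 64)  # stand-in for FIFA screenshots
real_faces = torch.rand(4, 3, 64, 64)  # stand-in for downloaded photos

for step in range(2):  # a real run would take many thousands of steps
    # Both reconstruction losses flow through the SAME encoder, so it
    # learns a face representation common to both domains.
    loss = (nn.functional.mse_loss(decoder_game(encoder(game_faces)), game_faces)
            + nn.functional.mse_loss(decoder_real(encoder(real_faces)), real_faces))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the encoder is shared, it is pushed toward a domain-independent representation of the face, which is exactly what makes the swap in the next step possible.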

Using trained models for face-swapping

Now for the interesting part. The algorithm swaps faces with a clever trick: the second autoencoder is fed the input of the first. The shared encoder computes the encoding of the FIFA image, but the real-photo decoder reconstructs the output, so the game face is rendered with real-world detail.
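The swap itself is a single cross-wired forward pass. In this sketch the networks are tiny and untrained (so the output is meaningless), but it shows the mechanism: encode a game frame, then decode with the decoder that was trained on real photos:

```python
import torch
import torch.nn as nn

# Untrained stand-ins for the shared encoder and the real-photo decoder.
encoder = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU())
decoder_real = nn.Sequential(nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid())

game_face = torch.rand(1, 3, 64, 64)  # stand-in for a FIFA frame
with torch.no_grad():
    # The trick: game-face encoding, decoded by the real-photo decoder.
    swapped = decoder_real(encoder(game_face))
```

After training, `swapped` would be the in-game face re-rendered in the style of the real photographs.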

Can we use this algorithm to put our own face in the game?

How would it feel to play as Alex Hunter with your own face? All you would have to do is upload a video of yourself and, within a few hours, the trained model would be ready.

The biggest advantage of this method is that it produces stunning faces that are difficult to distinguish from reality, and it does so with just a few hours of training, whereas game designers have spent years trying to reach this level. Game publishers could therefore release new titles much faster, and studios could save millions of dollars and put the savings toward hiring skilled storytellers.

The obvious limitation is that these faces are generated offline, like computer-generated imagery (CGI) in movies, whereas games must render images in real time. Producing each output image with this method is still too time-consuming for that.

However, one big advantage of using deep learning in computer games is that once a model is trained, no human intervention is needed to generate results.

In conclusion, if a person with no graphics expertise can implement such a process with just a few hours of training, then game developers could surely revolutionize the industry by investing in this direction and employing experts in the field.
