Intel researchers use machine learning to make GTA ‘photorealistic’
The technique uses a neural network to re-render each frame in the style of a database of real-life photos
Researchers at Intel Labs have created a way to make video game footage look “photorealistic” using machine learning.
The experiment used footage of Grand Theft Auto V, and when applied the technique re-renders the game’s polygonal environments in the style of real-life photographs.
The researchers describe the setup as “a convolutional network which produces images frame by frame and can be run at interactive rates”, which suggests that in time it could be optimised to run in real time during play.
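To give a sense of what “frame by frame” processing means in practice, here is a toy NumPy sketch of the idea: a small convolution pass with a residual connection, applied independently to each channel of a rendered frame. The kernel, image sizes and residual formulation are illustrative assumptions for this sketch, not the Intel Labs architecture, which is a learned multi-layer network.

```python
import numpy as np

def conv3x3(channel, kernel):
    """Apply a single 3x3 convolution to one 2D channel with zero padding."""
    h, w = channel.shape
    padded = np.pad(channel, 1)
    out = np.zeros_like(channel)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def enhance_frame(frame, kernel):
    # Residual formulation: the filter predicts a correction that is
    # added back onto the rendered frame, rather than a whole new image.
    return np.stack([ch + conv3x3(ch, kernel) for ch in frame])

# A hypothetical smoothing kernel, loosely analogous to the "smoother
# asphalt" effect; a trained network would learn its filters instead.
kernel = np.full((3, 3), 1.0 / 9.0)

frame = np.random.rand(3, 64, 64)   # one RGB game frame, channels first
enhanced = enhance_frame(frame, kernel)
assert enhanced.shape == frame.shape
```

In the real system this pass would run once per frame of footage, which is why the authors emphasise that the network “can be run at interactive rates”.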
A video demonstration shows how the process analyses a frame of Grand Theft Auto V footage and matches it against the Cityscapes dataset, a public collection of photographs of German city streets recorded with an automotive-grade camera.
The process then restyles the game’s rendering to match the photographs, which in turn makes the asphalt smoother, the car paint glossier and the hills greener.
The new assets are “geometrically and semantically consistent with the input images and temporally stable”, meaning as the game continues to play, the photorealistic objects and textures follow along.
The video shows how Intel Labs’ new approach is more stable than other attempts to make game footage photorealistic, which have suffered from extreme shimmering, artifacts and colour flickering, problems that don’t appear to be present in this new method.
The Intel Labs researchers have made their paper available online, along with a number of image sliders that can be used to better compare how GTA V looks with and without the process applied.