GPU Rendering

GPU rendering makes it possible to use your graphics card for rendering, instead of the CPU. This can speed up rendering because modern GPUs are designed to do quite a lot of number crunching. On the other hand, they also have some limitations in rendering complex scenes, due to more limited memory, and issues with interactivity when using the same graphics card for display and rendering.

To enable GPU rendering, go to Preferences ‣ System ‣ Cycles Render Devices, and select either CUDA, OptiX, or HIP. Next, configure each scene to use GPU rendering in Properties ‣ Render ‣ Device.
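The same settings can be changed from a script. The sketch below assumes Blender's bundled Python API (bpy) and must be run inside Blender; the choice of "OPTIX" is just an example.

```python
# Sketch: enabling GPU rendering from a script run inside Blender.
# Assumes the bundled bpy module; "OPTIX" is an example backend choice.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"  # or "CUDA" / "HIP", depending on your GPU

# Enable every detected render device of that type.
for device in prefs.get_devices_for_type(prefs.compute_device_type):
    device.use = True

# Each scene must also be switched from CPU to GPU rendering.
bpy.context.scene.cycles.device = "GPU"
```

This mirrors the two manual steps above: the preferences set the compute backend and devices, while the per-scene Device property selects GPU rendering.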

Note

GPU rendering is only supported on Windows and Linux; macOS is currently not supported.

Supported Hardware

Blender supports different technologies to render on the GPU depending on the particular GPU manufacturer.

Nvidia

CUDA and OptiX are supported for GPU rendering with Nvidia graphics cards.

Note

Open Shading Language is not supported.

CUDA

CUDA requires graphics cards with compute capability 3.0 and higher. To make sure your GPU is supported, see Nvidia's list of graphics cards and their compute capabilities.

OptiX

OptiX requires graphics cards with compute capability 5.0 and higher and a driver version of at least 470. To make sure your GPU is supported, see the list of Nvidia graphics cards. OptiX works best on RTX graphics cards with hardware ray tracing support (e.g. Turing and above).
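The compute capability thresholds above can be checked programmatically. This sketch only covers the compute capability requirement (not the driver version); the "major.minor" string format, as printed by `nvidia-smi --query-gpu=compute_cap --format=csv,noheader`, is an assumption.

```python
# Sketch: which Cycles backends a given Nvidia compute capability allows,
# using the minimum versions stated above (3.0 for CUDA, 5.0 for OptiX).
# Note: OptiX additionally requires driver version 470+, not checked here.

def supported_backends(compute_cap: str) -> list[str]:
    major, minor = (int(part) for part in compute_cap.split("."))
    backends = []
    if (major, minor) >= (3, 0):
        backends.append("CUDA")
    if (major, minor) >= (5, 0):
        backends.append("OptiX")
    return backends

print(supported_backends("8.6"))  # ['CUDA', 'OptiX']
print(supported_backends("3.5"))  # ['CUDA']
```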

AMD

HIP is supported for GPU rendering with AMD graphics cards on Windows. Blender supports GPU rendering on discrete graphics cards with the RDNA architecture or newer and GPU driver version 21.Q4 or newer. To make sure your GPU is supported, see the list of AMD graphics cards and their architectures.

Note

Unsupported Features:

Frequently Asked Questions

Why is Blender unresponsive during rendering?

While a graphics card is rendering, it cannot redraw the user interface, which makes Blender unresponsive. We attempt to mitigate this problem by giving back control over the GPU as often as possible, but a completely smooth interaction cannot be guaranteed, especially on heavy scenes. This is a limitation of graphics cards for which no true solution exists, although we might be able to improve this somewhat in the future.

If possible, it is best to install more than one GPU, using one for display and the other(s) for rendering.

Why does a scene that renders on the CPU not render on the GPU?

There may be multiple causes, but the most common one is that there is not enough memory on your graphics card. Typically, the GPU can only use the amount of memory that is on the GPU (see Would multiple GPUs increase available memory? for more information). This is usually much smaller than the amount of system memory the CPU can access. With CUDA, OptiX and HIP devices, if the GPU memory is full Blender will automatically try to use system memory. This has a performance impact, but will usually still result in a faster render than using CPU rendering.
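The fallback behavior described above can be illustrated with a toy calculation. The function and the memory figures below are purely hypothetical, not anything Cycles actually computes.

```python
# Illustrative sketch (hypothetical numbers): when a scene's footprint exceeds
# the GPU's own memory, CUDA/OptiX/HIP devices spill the remainder to system
# memory, which still renders but at a performance cost.

def plan_allocation(scene_mb: float, vram_mb: float) -> dict:
    """Split a scene's memory footprint between GPU memory and system memory."""
    on_gpu = min(scene_mb, vram_mb)
    spilled = max(0.0, scene_mb - vram_mb)
    return {"gpu_mb": on_gpu, "system_mb": spilled, "slower": spilled > 0}

# A 10 GB scene on an 8 GB card: 2 GB spills to system memory.
print(plan_allocation(10240, 8192))
```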

Can multiple GPUs be used for rendering?

Yes, go to Preferences ‣ System ‣ Cycles Render Devices, and configure it as desired.

Does using multiple GPUs increase the available memory?

Typically no: each GPU can only access its own memory. However, some GPUs can share their memory, which can be enabled with Distributed Memory Across Devices.

What renders faster?

This varies depending on the hardware used. Different technologies also have different compute times depending on the scene tested. For the most up-to-date information on the performance of different devices, browse the Blender Open Data resource.

Error Messages

In case of problems, be sure to install the official graphics drivers from the GPU manufacturer's website, or through the package manager on Linux.

Unsupported GNU version

On Linux, depending on your GCC version you might get this error. See the Nvidia CUDA Installation Guide for Linux for a list of supported GCC versions. There are two possible solutions to this error:

Use an alternate compiler

If you have an older GCC installed that is compatible with the installed CUDA toolkit version, then you can use it instead of the default compiler. This is done by setting the CYCLES_CUDA_EXTRA_CFLAGS environment variable when starting Blender.

Launch Blender from the command line as follows:

CYCLES_CUDA_EXTRA_CFLAGS="-ccbin gcc-x.x" blender

(Substitute the name or path of the compatible GCC compiler).

Remove compatibility checks

If the above is unsuccessful, delete the following line in /usr/local/cuda/include/host_config.h:

#error -- unsupported GNU version! gcc x.x and up are not supported!

This will allow Cycles to successfully compile the CUDA rendering kernel the first time it attempts to use your GPU for rendering. Once the kernel is built successfully, you can launch Blender as you normally would and the CUDA kernel will still be used for rendering.

CUDA Error: Kernel compilation failed

This error may happen if you have a new Nvidia graphics card that is not yet supported by the Blender version and CUDA toolkit you have installed. In this case Blender may try to dynamically build a kernel for your graphics card and fail.

In this case you can:

  1. Check if the latest Blender version (official or experimental builds) supports your graphics card.

  2. If you build Blender yourself, try to download and install a newer developer version of the CUDA toolkit.

Normally users do not need to install the CUDA toolkit, as Blender comes with precompiled kernels.

CUDA Error: Out of memory

This usually means there is not enough memory to store the scene for use by the GPU.

Note

One way to reduce memory usage is by using smaller resolution textures. For example, 8k, 4k, 2k, and 1k image textures take up 256 MB, 64 MB, 16 MB, and 4 MB of memory, respectively.
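The figures in the note follow from simple arithmetic, assuming an uncompressed 8-bit RGBA texture (4 bytes per pixel):

```python
# Checking the note's arithmetic: an uncompressed 8-bit RGBA texture uses
# width * height * 4 bytes (the storage format here is an assumption).

def texture_mb(side_px: int, bytes_per_pixel: int = 4) -> float:
    return side_px * side_px * bytes_per_pixel / 2**20

for label, side in [("8k", 8192), ("4k", 4096), ("2k", 2048), ("1k", 1024)]:
    print(f"{label}: {texture_mb(side):.0f} MB")  # 256, 64, 16, 4 MB
```

Note that each halving of the resolution cuts the memory use by a factor of four, which is why downscaling textures is such an effective way to fit a scene into GPU memory.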

The Nvidia OpenGL driver lost connection with the display driver

If a GPU is used for both display and rendering, Windows has a limit on the time the GPU can do render computations. If you have a particularly heavy scene, Cycles can take up too much GPU time. Reducing Tile Size in the Performance panel may alleviate the issue, but the only real solution is to use separate graphics cards for display and rendering.

Another solution can be to increase the time-out, although this will make the user interface less responsive when rendering heavy scenes.

CUDA Error: Unknown error in cuCtxSynchronize()

An unknown error can have many causes, but one possibility is that the driver timed out. See the answers above for solutions.