After looking at shaders, we will now take some time to set things up on the CPU side. We need a way to tell the GPU what to render, and also somewhere for the GPU to put the finished image. This will be a bit of a short one, as it is mostly boilerplate code that we simply need to get written.

wgpu setup

We have seen that shaders are little programs on the GPU which process a piece of input data (vertices/pixels) in a given environment (uniforms/textures/...) and finally produce an output, ultimately a pixel in an output texture. For this to work, a few things need to happen:

  • The shaders need to be read from disk, compiled and transferred to the GPU
  • The uniforms for the shader need to be set up
  • The input data (the vertices) needs to be set up.

All of this needs a connection to the GPU, which is the focus of this part. To get it we are going to use the Rust library wgpu, which under the hood uses Vulkan/Metal/DirectX to talk to the GPU driver. We also need a place to display everything: winit will provide us with a connection to a display, while wgpu takes care of setting up the GPU driver. Before we can start drawing, a few wgpu objects have to be created. In the end we want a queue that we can submit work to, and to get there we need the following things (a short sketch of how we will hold on to them follows the list):

  • Instance: the representation of the entire GPU driver in our application and our entry point into the API.
  • Surface: the GPU side of the screen. It tells the GPU where to put the output image.
  • Adapter: the object that represents a physical GPU in the system (the card with its memory and so on).
  • Device: a logical GPU. This is the thing we create our resources on and submit our work through. The same adapter can be used to create multiple devices, although that is seemingly not done often.
  • Queue: the handle we hand our recorded work to. Everything we want the GPU to do is recorded into command buffers, which are then submitted to this queue and sent to the GPU to process.
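
To give an idea of where we are heading, here is a sketch of a struct that could hold these handles once they exist. The name GpuState and the exact fields are just for illustration, not something wgpu prescribes.

struct GpuState {
    surface: wgpu::Surface,
    device: wgpu::Device,
    queue: wgpu::Queue,
    config: wgpu::SurfaceConfiguration,
    size: winit::dpi::PhysicalSize<u32>,
}

The rest of this part fills in these handles one by one.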

First things first, however: we still need a window to draw into, for which we use winit.

let event_loop = winit::event_loop::EventLoop::new();
let window = WindowBuilder::new().build(&event_loop).unwrap();

Now that we have the window, we can initialize the GPU environment.

let instance_descriptor = wgpu::InstanceDescriptor {
    // allow any backend (Vulkan, Metal, DX12, ...) that is available
    backends: wgpu::Backends::all(),
    // use the default FXC compiler for DX12 shaders
    dx12_shader_compiler: wgpu::Dx12Compiler::Fxc,
};
let instance = wgpu::Instance::new(instance_descriptor);

This produces the instance, which is our general connection to the GPU. wgpu inherits the pattern of 'build a descriptor struct, then pass it to a function to create the thing' from Vulkan, which is why it permeates nearly the entire API.

The Instance represents a connection to the GPU driver which in turn can be connected to possibly multiple GPUs.
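
On native targets the instance can enumerate every adapter it sees, which is a quick way to check which GPUs and backends are available on a machine. A small sketch (note that enumerating adapters like this is a native-only convenience):

// list every adapter the instance can see and where it comes from
for adapter in instance.enumerate_adapters(wgpu::Backends::all()) {
    let info = adapter.get_info();
    println!("{} ({:?})", info.name, info.backend);
}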

let surface = unsafe { instance.create_surface(&window).unwrap() };

The Surface is the thing that we are going to draw our final pixels to. It is the connection between what winit did to get access to the screen from our operating system and our GPU output. The operation is unsafe in the sense that we need to guarantee that the window lives at least as long as the surface.
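
One common way to uphold that guarantee is to keep the window in the same struct as the surface and to declare the surface field first: Rust drops struct fields in declaration order, so the surface is destroyed before the window it points at. Extending the sketch from above:

struct GpuState {
    surface: wgpu::Surface, // declared first, so it is dropped before `window`
    window: winit::window::Window,
    // ...device, queue, config and size as in the earlier sketch
}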

let adapter_descriptor = wgpu::RequestAdapterOptions {
    power_preference: wgpu::PowerPreference::HighPerformance,
    // here we pass the surface to the adapter so it can render to it
    compatible_surface: Some(&surface),
    force_fallback_adapter: false,
};
// ask the driver for an adapter that matches the options above
let adapter = instance
    .request_adapter(&adapter_descriptor)
    .await
    .unwrap();

The Adapter is the thing that represents the actual graphics card in our system. A single adapter can be split into different logical devices (even though this is rather uncommon). An adapter exposes a certain set of features, depending on what GPU is physically installed in the system. The devices are the containers that store the context for the different operations (like the buffers, bind groups and so on). So the next thing that is instantiated is the device.
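
Before we do that, it can be useful to peek at what the chosen adapter reports. A small sketch using the adapter's query methods (the output formatting is just illustrative):

// inspect the selected adapter before requesting a device from it
let info = adapter.get_info();
println!("Running on {} ({:?})", info.name, info.device_type);
// optional features and limits this adapter could provide
println!("Optional features: {:?}", adapter.features());
println!("Limits: {:?}", adapter.limits());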

let device_descriptor = wgpu::DeviceDescriptor {
    label: Some("Main Device"), // a debug label for this logical device
    features: wgpu::Features::empty(), // we don't need any optional features
    limits: wgpu::Limits::default(),
};
let (device, queue) = adapter.request_device(&device_descriptor, None).await.unwrap();

As can be seen, the device also comes with a queue. The queue is where the CPU hands off the operations it wants the GPU to perform: we record all the work we want done and then submit it to the queue, which sends it to the GPU for execution. The GPU driver might reorder the operations, but it must guarantee that operations with interdependencies are processed in the order in which they appeared in the queue.
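
In practice that recording happens through a command encoder: we record passes and copies into it, finish it into a command buffer, and hand that buffer to the queue. A minimal sketch of the flow (the actual passes come in a later part; the label is arbitrary):

// record work into an encoder, then submit the finished command buffer
let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
    label: Some("Example Encoder"),
});
// ... render passes, compute passes or copies would be recorded here ...
queue.submit(std::iter::once(encoder.finish()));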

The last thing on our list is to tell the GPU in what format the data written to the surface (which is the link between the GPU and the image on the screen) should be, so that it can be properly shown. For that we first query the supported modes and then specify what we want (in this case we are fine with the defaults).

let surface_capabilities = surface.get_capabilities(&adapter);
// we want a surface with an sRGB format, otherwise we panic
let surface_format = surface_capabilities
    .formats
    .iter()
    .copied()
    .find(|f| f.is_srgb())
    .unwrap();
// we now set up the surface configuration that we want and then configure
// the surface
// the surface is sized to the current inner size of the window
let size = window.inner_size();
let config = wgpu::SurfaceConfiguration {
    usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
    format: surface_format,
    width: size.width,
    height: size.height,
    present_mode: surface_capabilities.present_modes[0],
    alpha_mode: surface_capabilities.alpha_modes[0],
    view_formats: vec![],
};
surface.configure(&device, &config);
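
One thing to keep in mind for later: whenever winit reports a new window size, the surface has to be reconfigured with the new extent, otherwise the presented image no longer matches the window. A sketch of a typical resize handler (the function name and signature are just one way to organize it):

fn resize(
    surface: &wgpu::Surface,
    device: &wgpu::Device,
    config: &mut wgpu::SurfaceConfiguration,
    new_size: winit::dpi::PhysicalSize<u32>,
) {
    // a zero-sized surface is invalid, so skip e.g. minimized windows
    if new_size.width > 0 && new_size.height > 0 {
        config.width = new_size.width;
        config.height = new_size.height;
        surface.configure(device, config);
    }
}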

And with that we have a working environment for the next steps. This is all a bit of 'boilerplate', but it also reveals how the driver (and ultimately the GPU manufacturer) thinks about GPUs and how they behave.