After working on the physics engine and using OpenGL at a high level, I wanted to get my hands dirty with some deeper graphics programming. I foolishly thought it would be helpful to be ahead of the curve, so I chose DirectX 12. This ended up hurting me: there were few tutorials available online (I got halfway through one of them before realizing it was incomplete), and most of Microsoft’s documentation seemed geared towards people switching from DX11. (To be fair, I’m pretty sure it said somewhere in there that DX12 was for experienced graphics programmers, but I stubbornly pressed onward.)
I ended up restarting a couple of times: once due to the aforementioned incomplete tutorial, and again because I wanted to start from a working view and projection matrix.
Here was one of the failed first starting points, Hello Triangle:
And this was the Windows Universal App starting point that I eventually went with (due to the inclusion of a view and projection matrix):
The first thing I set out to do was to texture that cube instead of giving the vertices raw colors. Implementing textures gave me more trouble than I expected because I lacked an understanding of the fundamentals of DX12’s data model. I had rushed gung-ho into it, thinking that reading through and modifying the tutorial code while learning the concepts would be more effective than thoroughly studying them beforehand. I started out optimistically by pasting the Texture Upload Heap and Shader Resource View definitions from Hello Texture into my code.
The Hello Texture tutorial:
After messing around with different combinations of root signature descriptions, descriptor heap declarations, and shader registers without any luck, I stepped back to learn a bit more about DX12’s memory management. I eventually figured out that I should bind my texture into the same descriptor heap as my model/view/projection matrices and use separate descriptor tables for the two different types of data. That let me switch where my model/view/projection descriptor table pointed for each rendered object while keeping my second descriptor table pointed at the same bit of texture data. I also ended up going with a static sampler defined in my root signature, simply because I had no use for different or variable samplers.
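That layout can be sketched as a root signature description. This is a minimal sketch using the stock d3dx12.h helpers, not my engine’s actual code; since it is just pipeline configuration, the variable names and register assignments (b0 for the matrices, t0 for the texture) are illustrative assumptions:

```cpp
#include <d3d12.h>
#include "d3dx12.h"

// Range 0: one CBV (model/view/projection constants) at register b0.
// Range 1: one SRV (the texture) at register t0.
CD3DX12_DESCRIPTOR_RANGE1 ranges[2];
ranges[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0);
ranges[1].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, 0);

// Two separate tables, so the CBV table can be re-pointed per object
// while the SRV table keeps pointing at the same texture descriptor.
CD3DX12_ROOT_PARAMETER1 params[2];
params[0].InitAsDescriptorTable(1, &ranges[0], D3D12_SHADER_VISIBILITY_VERTEX);
params[1].InitAsDescriptorTable(1, &ranges[1], D3D12_SHADER_VISIBILITY_PIXEL);

// A static sampler baked into the root signature: no sampler heap needed.
CD3DX12_STATIC_SAMPLER_DESC sampler(0, D3D12_FILTER_MIN_MAG_MIP_LINEAR);

CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC desc;
desc.Init_1_1(_countof(params), params, 1, &sampler,
              D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);
```

At draw time, `SetGraphicsRootDescriptorTable(0, ...)` then gets a per-object handle each frame, while slot 1 keeps receiving the same texture handle.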
So after figuring out a bit more about binding resources (and adding a two triangle floor) I had this:
I started out with my own naive approach to camera control, just as a sanity check: I took in raw input and transformed the camera’s matrix whenever that input was received. There was no velocity and no concept of a forward or right vector; you had to mash keys to move along a particular axis or rotate. I already knew that several of Microsoft’s demos (just not the one I had started with) had a working SimpleCamera class that you can see in action here:
After modifying the class a little to take in Windows Universal inputs instead of looking at Windows messages, and to transform my camera’s matrix, it was ready to go:
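The gist of what a class like SimpleCamera adds over my key-mashing version is a yaw angle that derives forward and right vectors, plus velocity-scaled movement. Here is a minimal sketch of that idea; `FlyCamera` and its members are illustrative names I made up, not the demo’s actual API:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

struct FlyCamera {
    Vec3  position{0.0f, 0.0f, 0.0f};
    float yaw   = 0.0f;   // radians; 0 looks down -Z
    float speed = 5.0f;   // units per second

    // Forward/right vectors derived from yaw (pitch omitted for brevity).
    Vec3 Forward() const { return { std::sin(yaw), 0.0f, -std::cos(yaw) }; }
    Vec3 Right()   const { return { std::cos(yaw), 0.0f,  std::sin(yaw) }; }

    // moveZ: +1 forward / -1 back; moveX: +1 right / -1 left (from input state).
    // Movement scales with frame time, so no more key mashing.
    void Update(float dt, float moveX, float moveZ) {
        Vec3 f = Forward(), r = Right();
        position.x += (f.x * moveZ + r.x * moveX) * speed * dt;
        position.z += (f.z * moveZ + r.z * moveX) * speed * dt;
    }
};
```

The camera’s view matrix would then be rebuilt each frame from `position` and `yaw`.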
As nice as boxes are, I was looking to bring some cooler objects into the engine. I arbitrarily decided to go with the .obj format and found a nice website with some sample files and pictures of how they should look. Reading in the files ended up being relatively straightforward (though I did choose to ignore any that included non-triangular polygons): a ‘v’ preceded a vertex, an ‘f’ preceded a face, and I just counted the two up as I added them to my buffers. Here are some of the .objs I imported.
A gourd (I guess?):
And the classic teapot:
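The reading loop described above can be sketched like this. It assumes faces are written as plain vertex indices (`f 1 2 3`) rather than `v/vt/vn` triples, and it skips every other line type; the type and function names are my own, not the engine’s:

```cpp
#include <sstream>
#include <string>
#include <vector>

struct ObjVertex { float x, y, z; };
struct ObjFace   { int a, b, c; };   // 1-based indices into the vertex list

struct ObjModel {
    std::vector<ObjVertex> vertices;
    std::vector<ObjFace>   faces;
};

ObjModel LoadObj(std::istream& in) {
    ObjModel model;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v") {                 // 'v' precedes a vertex position
            ObjVertex v{};
            ls >> v.x >> v.y >> v.z;
            model.vertices.push_back(v);
        } else if (tag == "f") {          // 'f' precedes a (triangular) face
            ObjFace f{};
            ls >> f.a >> f.b >> f.c;
            model.faces.push_back(f);
        }                                 // normals, texcoords, etc. ignored
    }
    return model;
}
```

The vertex and face counts fall out of the two vectors’ sizes, which is all that’s needed to size the vertex and index buffers.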
And finally, by modifying the imported object’s model matrix, I can get movement and rotation (it shouldn’t be too tough to integrate my physics engine into this later):
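The per-frame model-matrix update amounts to composing a rotation with a translation. Here is a minimal row-major sketch with hand-rolled helpers; in the engine itself the DirectXMath `XMMatrix*` functions would do this, so these names are purely illustrative:

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<float, 16>; // row-major 4x4

Mat4 RotationY(float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return { c, 0, s, 0,
             0, 1, 0, 0,
            -s, 0, c, 0,
             0, 0, 0, 1 };
}

Mat4 Translation(float x, float y, float z) {
    return { 1, 0, 0, x,
             0, 1, 0, y,
             0, 0, 1, z,
             0, 0, 0, 1 };
}

Mat4 Mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
}

// Rebuilt each frame from elapsed time t: spin the object while
// sliding it back and forth along X.
Mat4 ModelMatrix(float t) {
    return Mul(Translation(std::sin(t), 0.0f, 0.0f), RotationY(t));
}
```

Uploading the result into the object’s constant buffer each frame is what produces the motion in the clip.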
And that’s where the engine is now; you can check out the source code here!