Wanted to try implementing deferred shading. I hit a wall: to do it efficiently, I need to be able to render to several render targets ("surfaces") at once. It turns out GM:S can do this with an undocumented function, surface_set_target_ext(int index, int id), where index is the "stage" (as Robert calls them) to bind and id is the surface itself. Sadly we create surfaces as individual framebuffer objects (FBOs), and OpenGL allows only one FBO to be bound at any one time. This means I cannot bind several of them at once like GM does. GM can do it because it uses DX underneath (on Windows only, I presume, which is the only place surface_set_target_ext seems to work, and only HLSL shaders can render to MRT in GM:S as far as I can see), and DX allows binding multiple render targets directly (http://msdn.microsoft.com/en-us/library/windows/desktop/bb147221%28v=vs.85%29.aspx). In OGL you do it differently: you attach all the required textures to the one FBO (http://ogldev.atspace.co.uk/www/tutorial35/tutorial35.html), which can then be bound and all the textures accessed.
So since I couldn't add surface_set_target_ext(), I planned to add surface_add_colorbuffer() instead, which would add a texture with a specific format and type to the surface's FBO. Something like this:
surf = surface_create(640,480); //This creates a 640x480 RGBA texture with unsigned int type and BGRA format (this is the default right now)
surface_add_colorbuffer(surf, 3, tx_rgb, tx_bgr, tx_float); //This adds a 640x480 RGB texture with float type and BGR format (and binds it to GL_COLOR_ATTACHMENT0 + 3)
surface_add_depthbuffer(surf, tx_depth_component, tx_depth_component32f, tx_float); //This adds a 640x480 depth texture with float type and 32f internal format (and binds it to GL_DEPTH_ATTACHMENT)
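For reference, here is a rough sketch of what surface_add_colorbuffer() would have to do on the GL side. This is hypothetical, not the actual implementation — the surface struct with fbo/width/height fields is assumed, and it needs a live GL context, so it cannot run standalone:

```c
/* Hypothetical sketch of surface_add_colorbuffer() internals.
   Assumes a surface struct holding the FBO id and dimensions. */
void surface_add_colorbuffer(surface* surf, int index,
                             GLint internal_format, GLenum format, GLenum type)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, internal_format, surf->width, surf->height,
                 0, format, type, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* Attach the new texture to the surface's existing FBO at the given slot. */
    glBindFramebuffer(GL_FRAMEBUFFER, surf->fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + index,
                           GL_TEXTURE_2D, tex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```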
I intentionally bound it to color attachment 3 and skipped 1 and 2, so you can see later where the number comes in. Now we can do this in the pixel shader:
#version 330
layout(location = 0) out vec4 surfaceBufferOne;
layout(location = 3) out vec3 surfaceBufferThree;
void main() {
  surfaceBufferOne = vec4(1.0,0.5,0.0,1.0); //This buffer actually holds unsigned integers, so this becomes 255, 127, 0, 255
  surfaceBufferThree = vec3(3.1415,2.4891,1.2345); //This holds floats
}
Depth is rendered automatically.
The problem with all of this is that I cannot make it fit together with the other systems. I need a new graphics_create_texture() function (I called it graphics_create_texture_custom), which I have no place to put. I also need these constants:
//Formats and internal formats
tx_rgba = GL_RGBA,
tx_rgb = GL_RGB,
tx_rg = GL_RG,
tx_red = GL_RED,
tx_bgra = GL_BGRA,
tx_bgr = GL_BGR,
tx_depth_component = GL_DEPTH_COMPONENT
//Internal formats only
tx_rgb32f = GL_RGB32F,
tx_depth_component32f = GL_DEPTH_COMPONENT32F,
tx_depth_component24 = GL_DEPTH_COMPONENT24,
tx_depth_component16 = GL_DEPTH_COMPONENT16,
//Types
tx_unsigned_byte = GL_UNSIGNED_BYTE,
tx_byte = GL_BYTE,
tx_unsigned_short = GL_UNSIGNED_SHORT,
tx_short = GL_SHORT,
tx_unsigned_int = GL_UNSIGNED_INT,
tx_int = GL_INT,
tx_float = GL_FLOAT;
which I cannot define in General, because I use GL_ enums. If I didn't, then I would still need to define them in General and then access them through lookup arrays, which is what the GL3d3d file does, and that is garbage. And then I need to put surface_add_colorbuffer and surface_add_depthbuffer somewhere, but I cannot do it in General either, because GL1 will never have them (and DX will probably not have them either). So I end up making a stupid header where all of this junk goes.
I am seriously considering forking ENIGMA to have only one graphics system, because GL1 is obsolete and I haven't really touched it in forever, and DX9/11 are not being worked on and are not required as far as I can see. If we somehow manage to get GLES working then we would still have problems like these, but at least GLES is like 95% compatible, so the problems would be a lot smaller.
I guess this is why most engines have only one graphics system. Or they at least abstract everything even further, so the engine becomes agnostic to it. We cannot easily do that, because we are making a tool that lets people write their own code, which is already a layer on top of the graphics system.