ENIGMA Forums

Development => Works in Progress => Topic started by: orange451 on December 27, 2014, 02:54:55 PM

Title: [OGL3] Shader Test
Post by: orange451 on December 27, 2014, 02:54:55 PM
I've known about ENIGMA for a few years now; this is the first thing I've made in it :)

It's a simple little shader test containing an 8-bit depth buffer shader and a normal buffer shader.

Enjoy ^^

Source:
https://www.dropbox.com/s/k4rx5vk40jsrs3v/Shader%20Test.egm?dl=0

Exe:
https://www.dropbox.com/s/4y9amzsubshnk52/shader.exe?dl=0

I don't think that the normal buffer is entirely correct; if ENIGMA gave me access to the normal matrix, then I could guarantee its correctness :)

(http://i.imgur.com/g1Uhq5r.png)
Title: Re: [OGL3] Shader Test
Post by: TheExDeus on December 27, 2014, 04:17:49 PM
This is cool. These two are actually very useful for things like deferred shading, which I want to write for ENIGMA as an example. I also want to finish my island demo, which uses a specific depth buffer (basically the distance from the water plane to the vertex, not from the camera to the vertex) for shore lines.

ENIGMA does provide you with the normal matrix: "uniform mat3 normalMatrix;" in GLSL.

Also, you can see the two surfaces are upside down. I don't really have an idea of what to do in this case. I guess we can just keep it like this and allow people to rotate the projection how they choose. In 2D they will be right side up, because in recent changes I made the default projection in surface_set_target() flip the projection. We cannot do that when we want surfaces to be used in 3D.

The example is very well written. It has comments for every line and divides everything into chunks so people can learn more easily. I'm also impressed that you managed to do this even though we haven't documented some of the stuff, like the fact that surface_create() has a third optional parameter enabling a depth buffer for that FBO. In this case it's not that useful, but in a more complex scene you would probably have to set it to true or you would get visual artifacts. We could also add a way to assign buffers to surfaces like GL does, so you could attach a depth texture and use that instead, without writing a specific shader for it. I'm not sure how to do that efficiently right now, though.
Title: Re: [OGL3] Shader Test
Post by: orange451 on December 27, 2014, 04:48:13 PM
Quote
ENIGMA does provide you with normal matrix. "uniform mat3 normalMatrix;" in GLSL.
It does? Well, I spent the past 10 minutes arguing with Josh for no reason, then.

Quote
Also, you can see the two surfaces are upside down. I don't really have an idea what to do in this case. I guess we just can keep it like this and allow people to rotate the projection how they choose. In 2D they will be correct side up, because in recent changes I made the default projection in surface_set_target() to flip the projection. We cannot do that when we want surfaces to be used with 3D.
That's because OpenGL flips the y axis, so the surfaces are flipped when drawn. An easy fix is to modify draw_surface(...). In my own Java OpenGL engine, when I draw a surface like I do in this example, I just draw it with a quad model that has flipped y texture coordinates. I think that is the most practical fix, as it doesn't disrupt any math down the line; it is solely for viewing purposes.
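That quad-model fix can be sketched as baking the flip into the V coordinates, so the screen's top-left corner samples the top of the FBO texture (v = 1 in GL conventions). The (x, y, u, v) vertex layout here is purely illustrative, not ENIGMA's actual vertex format:

```cpp
#include <array>
#include <cassert>

// One vertex of a textured screen quad: position (x, y) plus texcoords (u, v).
struct Vertex { float x, y, u, v; };

// Returns the corners of a w-by-h screen quad with the V coordinate inverted,
// so a GL FBO texture (data origin bottom-left) displays right side up in a
// y-down 2D projection.
inline std::array<Vertex, 4> flipped_quad(float w, float h) {
    return {{
        {0.0f, 0.0f, 0.0f, 1.0f},  // screen top-left samples texture top (v = 1)
        {w,    0.0f, 1.0f, 1.0f},  // screen top-right
        {w,    h,    1.0f, 0.0f},  // screen bottom-right samples texture bottom (v = 0)
        {0.0f, h,    0.0f, 0.0f},  // screen bottom-left
    }};
}
```

Since the flip lives in the quad's vertex data, no math elsewhere has to change.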

Quote
The example is very well written. Has comments for every line, divides everything in chunks so people can learn more easily. I'm also impressed that you managed to do this even though we haven't documented some of the stuff, like the fact that surface_create() has a third optional parameter enabling depth buffer for that FBO.
Thanks ^^
Robert helped me a bit in the IRC last night.

Quote
We could also make a way to assign buffers to surfaces like GL, so you could add a depth texture and use that instead, without making a specific shader for that. I'm not sure how to do it efficiently now though.
To be completely honest, I don't know if there would be a major performance gain in doing this. Regardless, I couldn't figure out how to do it in my own Java engine :c
Title: Re: [OGL3] Shader Test
Post by: orange451 on December 27, 2014, 04:57:08 PM
Okay, I changed the normal shader:
Code: [Select]
// Vertex
// normalMatrix and modelViewProjectionMatrix are uniforms ENIGMA supplies automatically
in vec3 in_Position;
in vec3 in_Normal;

out vec3 v_Normal;

void main() {
    v_Normal = normalize(normalMatrix * in_Normal);
    gl_Position = modelViewProjectionMatrix * vec4( in_Position.xyz, 1.0);
}

Code: [Select]
// Fragment
in vec3 v_Normal;

out vec4 out_FragColor;

vec3 encodeNormal(vec3 normal) {
    normal = normalize( normal );
    vec3 shifted = (normal+vec3(1.0, 1.0, 1.0))/2.0;
    return shifted;
}

void main() {
    out_FragColor = vec4( encodeNormal(v_Normal), 1.0 );
}

I noticed something odd. I think that the normal matrix isn't calculated properly. What should happen is that every polygon face you stare at directly should be blue, faces that point straight up (floors) should be green, and faces facing right (walls on the left) should be red. However, this isn't the case.
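That color mapping follows directly from encodeNormal() above, which just remaps each component from [-1, 1] to [0, 1]: a view-space normal pointing straight at the camera, (0, 0, 1), comes out as (0.5, 0.5, 1.0), i.e. mostly blue. A CPU-side sketch of the same mapping and its (hypothetical) inverse for reading the buffer back:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Packs a unit normal from [-1, 1] into the [0, 1] color range,
// mirroring the fragment shader's encodeNormal().
inline Vec3 encodeNormal(Vec3 n) {
    return { (n.x + 1.0f) / 2.0f, (n.y + 1.0f) / 2.0f, (n.z + 1.0f) / 2.0f };
}

// Hypothetical inverse: recovers the normal from a stored color.
inline Vec3 decodeNormal(Vec3 c) {
    return { c.x * 2.0f - 1.0f, c.y * 2.0f - 1.0f, c.z * 2.0f - 1.0f };
}
```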

I also tried this to calculate the normal matrix:
Code: [Select]
in vec3 in_Position;
in vec3 in_Normal;

out vec3 v_Normal;

void main() {
    mat3 extractedViewMatrix = mat3(viewMatrix);
    mat3 normalMatrix = transpose(inverse(extractedViewMatrix));
    v_Normal = normalize(normalMatrix * in_Normal);
    gl_Position = modelViewProjectionMatrix * vec4( in_Position.xyz, 1.0);
}

I believe that the view matrix isn't properly calculated. Either that, or in_Normal isn't correctly calculated when using d3d_draw_ commands.




[EDIT]
I believe I have confirmed that in_Normal is incorrect when using the d3d_draw commands!
Code: [Select]
// Vertex
in vec3 in_Position;
in vec3 in_Normal;

out vec3 v_Normal;

void main() {
    v_Normal = normalize(in_Normal);
    gl_Position = modelViewProjectionMatrix * vec4( in_Position.xyz, 1.0);
}
Code: [Select]
// Fragment
in vec3 v_Normal;

out vec4 out_FragColor;

void main() {
    out_FragColor = vec4( v_Normal, 1.0 );
}

In my example, every face is black except for one wall. This should not be the case: only two of the walls should be black (negative normals), two walls should have color (one green, one red), and the floor should be blue.
Title: Re: [OGL3] Shader Test
Post by: TheExDeus on December 27, 2014, 05:39:17 PM
Quote
I think that the normal matrix isn't calculated properly.
...
I believe that the view matrix isn't properly calculated.
If that were the case, then the model_view matrix would be wrong, as well as the model_view_projection matrix. Essentially everything would be wrong. I did have problems with the normal matrix calculation previously, though. Right now it's used in the lighting engine, and it took me some time to make it look correct. This is the part that calculates the matrices:
Code: (EDL) [Select]
void transformation_update(){
        if (enigma::transform_needs_update == true){
            //Recalculate matrices
            enigma::mv_matrix = enigma::view_matrix * enigma::model_matrix;
            enigma::mvp_matrix = enigma::projection_matrix * enigma::mv_matrix;

            //normal_matrix = invert(transpose(mv_submatrix)), where mv_submatrix is modelview top-left 3x3 matrix
            enigma::Matrix4 tmpNorm = enigma::mv_matrix.Transpose().Inverse();
            enigma::normal_matrix = enigma::Matrix3(tmpNorm(0,0),tmpNorm(0,1),tmpNorm(0,2),
                                                    tmpNorm(1,0),tmpNorm(1,1),tmpNorm(1,2),
                                                    tmpNorm(2,0),tmpNorm(2,1),tmpNorm(2,2));
            enigma::transform_needs_update = false;
        }
    }
It seems you invert and then transpose, while I think it should be the other way around. To be honest, I have seen both being used, so I never figured out which is correct. For example, this shows the way *you* did it: http://stackoverflow.com/questions/21079623/how-to-calculate-the-normal-matrix
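For what it's worth, for any invertible matrix the two orders agree: (A⁻¹)ᵀ = (Aᵀ)⁻¹, so the debated order alone cannot change the resulting normals. A minimal standalone sketch demonstrating the identity (this is not ENIGMA's Matrix3 class; nearlyEqual is just a test helper):

```cpp
#include <cassert>
#include <cmath>

struct Mat3 { float m[3][3]; };

inline Mat3 transpose(const Mat3& a) {
    Mat3 r;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) r.m[i][j] = a.m[j][i];
    return r;
}

inline float det(const Mat3& a) {
    return a.m[0][0]*(a.m[1][1]*a.m[2][2] - a.m[1][2]*a.m[2][1])
         - a.m[0][1]*(a.m[1][0]*a.m[2][2] - a.m[1][2]*a.m[2][0])
         + a.m[0][2]*(a.m[1][0]*a.m[2][1] - a.m[1][1]*a.m[2][0]);
}

// Inverse via the adjugate (transposed cofactor matrix) over the determinant.
inline Mat3 inverse(const Mat3& a) {
    float d = det(a);
    Mat3 r;
    r.m[0][0] =  (a.m[1][1]*a.m[2][2] - a.m[1][2]*a.m[2][1]) / d;
    r.m[0][1] = -(a.m[0][1]*a.m[2][2] - a.m[0][2]*a.m[2][1]) / d;
    r.m[0][2] =  (a.m[0][1]*a.m[1][2] - a.m[0][2]*a.m[1][1]) / d;
    r.m[1][0] = -(a.m[1][0]*a.m[2][2] - a.m[1][2]*a.m[2][0]) / d;
    r.m[1][1] =  (a.m[0][0]*a.m[2][2] - a.m[0][2]*a.m[2][0]) / d;
    r.m[1][2] = -(a.m[0][0]*a.m[1][2] - a.m[0][2]*a.m[1][0]) / d;
    r.m[2][0] =  (a.m[1][0]*a.m[2][1] - a.m[1][1]*a.m[2][0]) / d;
    r.m[2][1] = -(a.m[0][0]*a.m[2][1] - a.m[0][1]*a.m[2][0]) / d;
    r.m[2][2] =  (a.m[0][0]*a.m[1][1] - a.m[0][1]*a.m[1][0]) / d;
    return r;
}

// Elementwise comparison with a small tolerance for floating-point error.
inline bool nearlyEqual(const Mat3& a, const Mat3& b) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (std::fabs(a.m[i][j] - b.m[i][j]) > 1e-5f) return false;
    return true;
}
```

With the order ruled out, the remaining difference between the two approaches is whether the translation part of the 4x4 modelview gets cut off before the operations.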

I will test some.

edit: It seems that normals are in world space when using this code. Also, I think that when you use d3d_draw_wall() to create the object, the normals point in the same direction for opposing walls. That could be a bug in that function.
Title: Re: [OGL3] Shader Test
Post by: orange451 on December 27, 2014, 05:45:33 PM
Quote
I think that the normal matrix isn't calculated properly.
...
I believe that the view matrix isn't properly calculated.
...
It seems you try inverting and then transposing, while I think it should be the other way around. To be honest, I have seen both being used, so I never figured out which is correct. Like here it shows the way I did it: http://stackoverflow.com/questions/21079623/how-to-calculate-the-normal-matrix

I will test some.

That external link says:
transpose(invert(viewMatrix));
which is
viewMatrix.invert().transpose();

You're doing:
viewMatrix.transpose().invert();

Also, you're supposed to extract it to a mat3 before applying the operations. Though I am not sure if this has an effect.


In Java I would do it like this:
Code: [Select]
public static Matrix3f createNormalMatrix(Matrix4f matrix, Matrix3f normalMatrix) {
    // Extract the top-left 3x3 of the 4x4 matrix
    extract(matrix, normalMatrix);

    // Invert the matrix
    normalMatrix.invert();

    // Transpose the matrix
    normalMatrix.transpose();

    return normalMatrix;
}

Regardless, look at my edited post about in_Normal.
Title: Re: [OGL3] Shader Test
Post by: TheExDeus on December 27, 2014, 06:26:38 PM
Quote
that external link says:
Yes, you are right. I was referring to something else, then. The problem was that I made the lighting shaders and then implemented a normal matrix that worked for them. So I ditched the 3x3 matrix thing, because transpose-then-inverse gave me correct normals. If you do inverse and then transpose, the normals change in relation to the camera position. For that not to happen I had to take the top-left 3x3 of the modelview (basically to cut off the translation). I made the changes now and it seems that none of my previous examples have broken.
Also, yeah, I also think the normals are wrong for d3d_draw_wall(). For floors and cubes they seem correct. Check this: https://www.dropbox.com/s/r17cwch4962voi8/3D_Lights_test.egm?dl=0
Press Insert to switch to per-pixel lights and Delete to switch back to regular per-vertex. The cube and floor are okay, but the two side walls are totally wrong. I also seem to have trouble changing the orientation of the normals by changing the coordinates. It doesn't matter how the x,y are changed, the floor still points downwards (so if I enable backface culling, the floor is not visible from the top). All of these things must be checked. I don't think I will be the one fixing this, though.

Please test this branch: https://github.com/enigma-dev/enigma-dev/commits/GL3.3NormalMatrix
I did the matrix change there. See if it helps. Normals for d3d_walls still need to be fixed.

This should also be slightly faster, because a 3x3 inverse is a lot cheaper to calculate than a 4x4 inverse.
Title: Re: [OGL3] Shader Test
Post by: orange451 on December 27, 2014, 11:47:59 PM
I really don't know if d3d_draw_floor() is correct.

Backface culling is enabled, and I am drawing a floor that correctly shows its face. Since the face is pointing directly upward, its normal would be (0, 0, 1).

So, the in_Normal in the shader should then also be (0, 0, 1).
In my shader, I am outputting in_Normal as the outColor. It's black, so the normal must be negative.



[EDIT]
I turned off culling, and changed the floor drawing to:
Code: [Select]
d3d_draw_floor( 0, 0, 0, 512, 512, 0, background_get_texture(bkg_0), 16, 16 );
Now, the floor is technically a ceiling, but since culling is disabled I can see it as a floor. In the shader, it is solid blue (0, 0, 1). So d3d_draw_floor() is flipped!
Perhaps in your lighting shader you were negating the view position when it was already negated, which is why it appeared to work.
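A quick way to see why swapping the coordinates flips the face: the geometric normal of a triangle comes from the cross product of its edge vectors, so emitting the corners in the opposite winding order negates it. A minimal sketch (hypothetical helpers, not the actual GL3Model.cpp code):

```cpp
#include <cassert>

struct V3 { float x, y, z; };

// Standard 3-D cross product.
inline V3 cross(V3 a, V3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Unnormalized face normal of triangle (a, b, c); its sign depends on
// the winding order of the vertices.
inline V3 faceNormal(V3 a, V3 b, V3 c) {
    V3 e1{ b.x - a.x, b.y - a.y, b.z - a.z };
    V3 e2{ c.x - a.x, c.y - a.y, c.z - a.z };
    return cross(e1, e2);
}
```

A counter-clockwise floor triangle in the x/y plane yields +z; listing the same corners clockwise yields -z, which is exactly the "floor is really a ceiling" symptom.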





[EDIT 2]
I made a replica of the map using Model Creator. Here is what the normal buffer looks like:
(http://i.imgur.com/h2RpnV2.png)

Here is what it looks like using d3d_draw functions:
(http://i.imgur.com/cgPkGMO.png)
Title: Re: [OGL3] Shader Test
Post by: orange451 on December 28, 2014, 01:56:58 AM
I fixed the wall and floor code in GL3Model.cpp:
http://pastebin.com/9McPLkSU

I also flipped the surface drawing in GSsurface.cpp:
http://pastebin.com/aBgq9hH3
Title: Re: [OGL3] Shader Test
Post by: Josh @ Dreamland on December 28, 2014, 12:14:28 PM
I see. I thought you were asking me to add a function to fetch "the normal matrix," which just isn't a common entity (it only exists in our default shader—there's no guarantee that other shaders calculate one).

Anyway, thanks for the patch! If you like, you can file it through GitHub, here (https://github.com/enigma-dev/enigma-dev/).
Title: Re: [OGL3] Shader Test
Post by: TheExDeus on December 28, 2014, 01:44:17 PM
Quote
Perhaps in your lighting shader, you were negating the viewposition when it was already negated, which is why it appeared to work.
Both sides of the floor had the same lighting, which makes me believe it wouldn't matter if the normal were inverted; the falling angle would be the same. But after looking at the shader, it seems it shouldn't actually render anything on the other side, as the color comes out black (0) when the dot product is negative (float LdotN = max( dot(norm, L), 0.0 ); ).
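That clamp is the whole story: a sketch of the diffuse term with the normal and light direction as plain floats (a hypothetical helper, not the actual lighting shader):

```cpp
#include <algorithm>
#include <cassert>

// Lambertian diffuse factor: dot(N, L) clamped to zero, as in the shader's
// max(dot(norm, L), 0.0). A normal pointing away from the light yields 0,
// i.e. the face renders black.
inline float lambert(float nx, float ny, float nz,
                     float lx, float ly, float lz) {
    return std::max(nx*lx + ny*ly + nz*lz, 0.0f);
}
```

With the light straight overhead, an upward normal gets full intensity while the inverted normal gets exactly zero, so a floor with flipped normals would render black rather than equally lit.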

I don't get a blue color with that d3d_draw_floor code either. In the lights demo the floor is black with that code too. Same with the wall code. I guess more investigation is required.

Quote
I also flipped the surface drawing in GSsurface.cpp:
We had numerous discussions about this and never came to a real conclusion (it seemed I was the only one interested in fixing it). Basically we had (and still have) three options:
1) Flip the coordinates in the draw_surface functions, like you did. This still breaks drawing to surfaces (like when using views) and all _texture_ functions (as the texture coordinates have to be inverted compared to any other texture in ENIGMA). It could work in shaders, because this is how GL likes it.
2) Flip the projection (via d3d_set_projection_ortho(0, surf->height, surf->width, -surf->height, 0); ) when binding the surface (surface_set_target()). This fixes 2D rendering, but any projection function will override it, so if you render anything in 3D you have to flip it manually in your own projection function. This allows _texture_ functions to be used like any other texture in 2D, and it doesn't break surface usage in shaders, as everything is rendered how GL likes it in 3D. In 2D it could break shaders, but I haven't tested that. This is how it is done now.
3) texture_matrix: basically flip everything in the shader and keep everything flipped elsewhere. This would mean people writing shaders have to keep it in mind and manually add the flip every time.
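Option 2 works because an orthographic projection maps y linearly between the top and bottom edges and NDC, so passing a negative height simply swaps which screen edge lands at the top of clip space. A 1-D sketch of just the y mapping (glOrtho-style bottom/top parameters; an illustrative helper, not ENIGMA's actual projection code):

```cpp
#include <cassert>

// Maps a screen-space y to normalized device coordinates in [-1, 1],
// the way an orthographic projection's y row does: 'bottom' lands at
// NDC -1 and 'top' at NDC +1. Swapping the two arguments flips the image.
inline float orthoY(float y, float bottom, float top) {
    return 2.0f * (y - bottom) / (top - bottom) - 1.0f;
}
```

With the usual y-down 2D projection (top = 0, bottom = h), screen y = 0 maps to NDC +1 (the top of the viewport); with the arguments flipped, the same y = 0 maps to NDC -1, which is exactly the flip the negative-height call performs.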

Quote
I see. I thought you were asking me to add a function to fetch "the normal matrix," which just isn't a common entity (it only exists in our default shader—there's no guarantee that other shaders calculate one).
It actually exists for all shaders. Like all matrices it's calculated on the CPU and then passed as a uniform if the shader uses it.
Title: Re: [OGL3] Shader Test
Post by: Goombert on December 28, 2014, 02:33:18 PM
Quote from: TheExDeus
We had numerous discussions about this and we didn't come to a real conclusion (as it seemed I was the only one interested on fixing it).
I am very interested in this as well; I just hate the problem because it shouldn't exist. You can choose the order the pixels are read in when you blit a framebuffer, but not when you use it as a texture, and on top of that the default main framebuffer isn't set up like this. It's such an annoying bug: you should be able to flip the thing left to right or vertically without any performance cost.

1) Gross.
2) We could just keep track of whether a surface is bound and check that in the projection functions, but this method is still inefficient.
3) texture_matrix is deprecated GL; we'd have to do it manually ourselves, and the default GL3 shaders would need to know whether a surface is bound so they can invert the texture matrix. That is ostensibly only a little more efficient than doing the same for the projection matrix.

It would be nice to know how ANGLE handles this.
Title: Re: [OGL3] Shader Test
Post by: orange451 on December 28, 2014, 04:20:14 PM
I do not think my fix for d3d_draw_floor() is 100% correct. It has correct results for floors, but I think ceilings are off.
[edit]
No, it seems correct. I compared it to Model Creator and got the same color.
Title: Re: [OGL3] Shader Test
Post by: TheExDeus on December 28, 2014, 04:57:24 PM
Quote
No it seems correct. Compared it to Model Creator and had same color.
Weird. What shader and what normal matrix did you use? I tried both your code, which creates the normal matrix in the shader, and my modified code, which does it on the CPU. In both cases I cannot get the floor to be blue. This is what I get:
(http://i.imgur.com/tFD5Dip.png)
I also tested the surface fix; that is why the surface draws right side up. But the floor and walls are still wrong. The floor is in the x/y plane, so the normal should be in z (blue).

Quote
1) gross
I don't like it either, but as time goes on I keep coming back to it. Pros:
1) It will draw correctly.
2) It should work in shaders.
3) No math is involved, so it's the fastest option.
Cons:
1) People will have to flip texture coordinates when using the surface manually for rendering, so it won't be consistent. But the two most popular use cases for surfaces are drawing them with the draw_surface functions, which will work, and full-screen effects (i.e. shaders), which should also work.

I will have to do additional tests later. There is sadly no magic fix for this; we will have to sacrifice something to make it work.
Title: Re: [OGL3] Shader Test
Post by: orange451 on December 28, 2014, 05:30:01 PM
Quote
No it seems correct. Compared it to Model Creator and had same color.
Weird. What shader and what normal matrix did you use? I tried both your code which creates the normal matrix in shader, as well as the modified my code which does it on the CPU. In both cases I cannot get the floor to be blue. This is what I get:
(http://i.imgur.com/tFD5Dip.png)
I also tested the surface fix, that is why the surface draws correct side up. But the floor and walls are still wrong. The floor is in x/y plane, so the normal should be in z (blue).
No, your picture looks correct. When I was talking about the floor being blue, I meant outputting JUST the vertex normals, not combined with the normal matrix. When you combine them with the normal matrix:
-triangles whose normal faces toward your camera are blue
-triangles that face upwards are green

Since you're multiplying in_Normal by the normal matrix, it should be consistent no matter what angle your camera is facing; and it is in your picture :)

If I output JUST the normals the scene looks like:
(http://i.imgur.com/Svs8MuN.png)
Title: Re: [OGL3] Shader Test
Post by: Goombert on December 29, 2014, 12:33:23 AM
Harri, I still want to dig for lower-level solutions to this. Mainly, it seems that a lot of applications on the internet just don't care, including some graphics abstraction APIs. I am going to dig into what ANGLE does about this, and see if I can figure out whether Studio has the issue when the user draws the surface themselves.

The first thing I'd like to rule out is messing with the projection or matrices; I think you and I both hate that option, as it involves way too much math and has too many caveats to work in all cases. I am open to the shader solution, if we provide a way to disable it. Another thing we could do is flip the texture data the first time surface_get_texture() is called on a surface after it was rendered to with surface_set_target(), though I don't know what that costs compared to doing it in the shader after upload. It is obviously more efficient when drawing the surface without changing it multiple times in a row (the same concept applied to vertex buffers), and a pixel buffer could be used to do it. This is actually close to how OpenGL should have handled it: there should have been an extension to change the pixel upload origin when framebuffers were introduced. Take the following extension, which lets you specify this origin for blitting to the main window.

https://www.opengl.org/registry/specs/ARB/clip_control.txt

Quote from: OpenGL Spec
    When rendering Direct3D content into a framebuffer object in OpenGL, there
    is one complication -- how to get a correct image *out* of the related
    textures.  Direct3D applications would expect a texture coordinate of
    (0,0) to correspond to the upper-left corner of a rendered image, while
    OpenGL FBO conventions would map (0,0) to the lower-left corner of the
    rendered image.  For applications wishing to use Direct3D content with
    unmodified texture coordinates, the command

        glClipControl(GL_UPPER_LEFT, GL_ZERO_TO_ONE);

    configures the OpenGL to invert geometry vertically inside the viewport.
    Content at the top of the viewport for Direct3D will be rendered to the
    bottom of the viewport from the point of view of OpenGL, but will have a
    <t> texture coordinate of zero in both cases.  When operating in this
    mode, applications need not invert the programmed viewport rectangle as
    recommended for windowed rendering above.

We would just call that function to flip the viewport when surface_set_target() is called, then call it again to undo the flip in surface_reset_target(); it's only a two-line fix. It looks like it requires OpenGL 4.5, though.
https://www.opengl.org/sdk/docs/man/html/glClipControl.xhtml

Sadly, I don't even have this extension.
Quote from: GLEW Info
GL_ARB_clip_control:                                           MISSING
--------------------
  glClipControl:                                               MISSING

ANGLE does it in the shader, so this is also how it is done in GM: Studio. Search the PDF for "Direct3D inverts the":
http://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-ANGLE.pdf

Here's the specific commit where they fixed FBO flipping in ANGLE
https://code.google.com/p/angleproject/source/detail?r=b31f532d7137039e73d5bbdcc0b54a9883718c58&path=/src/libGLESv2/mathutil.h

Whatever solution we settle on, we should (if we can) offer a way to disable it, in which case all of our surface drawing functions would draw with upside-down coordinates and the user would have to do the same when using the surface as a texture. But let me keep doing some additional research.
Title: Re: [OGL3] Shader Test
Post by: TheExDeus on December 29, 2014, 08:41:59 AM
Quote
I am open to the shader solution, if we provide a way to disable it.
The problem is that user shaders would have to take this into account every time. This would also break all compatibility with GM shaders (which right now are about 99% compatible).

Quote
Another thing we could do is just flip the texture data the first time surface_get_texture is called on the surface after it was rendered to with surface_set_target, though I don't know what the cost of this is compared to doing it in the shader after upload, obviously more efficient for drawing the surface without changing it multiple times in a row which is the same concept applied to vertex buffers, a pixel buffer could be used to do this.
For most cases you don't need to do any flips in the pixel shader, as you can flip the texture coordinates in the vertex shader. That is extremely fast. Of course, as texture lookups can be independent of texture coordinates, you must also flip the GLSL texture lookup functions (as done in ANGLE). Flipping the texture on the CPU will be A LOT slower. And doing it on some surface_get_texture() call would basically ruin the surface for any later drawing on it, which would make surfaces quite unusable. Surfaces are not just draw-once, use-once, then cleared; you often draw on them multiple times over many frames, like when drawing blood or bodies in a Crimsonland clone or something like that.
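To put a number on "A LOT slower": flipping on the CPU touches every pixel on every flip, versus one subtraction per vertex in the shader. A hypothetical row-swap of a tightly packed RGBA8 buffer (32-bit pixels assumed; not ENIGMA's surface code):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Flips an RGBA8 image vertically in place by swapping whole rows,
// O(width * height) work per flip.
inline void flipRows(std::vector<std::uint32_t>& pixels, int width, int height) {
    for (int top = 0, bottom = height - 1; top < bottom; ++top, --bottom)
        std::swap_ranges(pixels.begin() + top * width,
                         pixels.begin() + (top + 1) * width,
                         pixels.begin() + bottom * width);
}
```

And since the flip mutates the pixel data, any subsequent draw onto the surface would land on the flipped image, which is exactly why this approach ruins surfaces that are drawn to repeatedly.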

Quote
Here's the specific commit where they fixed FBO flipping in ANGLE
ANGLE translates GLSL to HLSL. In this translation they can do a lot of marvelous things. We, on the other hand, don't do any GLSL parsing, and I'm not nominating myself to write a GLSL parser. So that solution doesn't work for us, because any user shaders would have to do the inversion manually. That means no compatibility with GM, and no compatibility even with shaders found online. ANGLE actually had it quite easy, as they just flip all the regular textures in memory to be upside down just like FBOs, and then do the other flip in shaders. So in the end their fix is a lot simpler: they don't differentiate between a regular texture and an FBO texture like we are trying to do.
Title: Re: [OGL3] Shader Test
Post by: Goombert on December 29, 2014, 08:47:46 AM
Yeah, I hadn't considered that you may want to draw to the same surface again, so flipping on the CPU is out of the question.

What about doing what ANGLE does, then? Would that be compatible with GM: Studio shaders? If not, our only real option is to just use ANGLE. It would be nice if we all had GL4.5 cards and could just use the proper fix.
Title: Re: [OGL3] Shader Test
Post by: TheExDeus on December 29, 2014, 09:25:23 AM
As I said, to replicate what ANGLE does we would need a GLSL parser that can replace all texture lookup functions with custom ones. In the simplest case it could be a find/replace kind of fix, but I don't think it would be that easy. And it must be done at run time, probably in glsl_shader_compile().
I don't think using ANGLE is an option, as it just adds another layer of abstraction on top of the one we already have. The idea of ANGLE is to be able to run GLES programs on Windows; it's not meant for running GL3 or GL4 on Windows, as those can technically already do that. They have a shader validator which we could maybe use, but that's about it. Using ANGLE just to flip a freaking texture is overkill.