time-killer-games
Reply #15 Posted on: March 18, 2014, 06:29:30 pm |
"Guest"
Hey HaRRi if you get any one of these shaders working in ENIGMA I will be your bestest friend for life!! I'm so glad to hear you guys are addressing this not even that long after my request, I love this community and how the developers listen to me. It's a beautiful thing!
Josh @ Dreamland
Reply #16 Posted on: March 19, 2014, 01:22:35 pm |
Prince of all Goldfish
Location: Pittsburgh, PA, USA Joined: Feb 2008
Posts: 2950
Are textures still upside-down when used in shader scripts? If they are, then someone is doing more work here. Hardware is hardware; the hardware only does one thing, so if our behavior differs from DirectX's, we need to find the reason, not have everyone hack around it. Having users write 1-y in a script is a hack. Using 1-y in our functions is a hack. Flipping the textures in memory is a hack. One of the following is true:
1. GLSL does not exhibit this problem.
2. The sampler always multiplies by a matrix.
3. G/HLSL is tweaking the sampler calls at compile time.
Whichever it is, we need to find it and either fix it or replicate it.
« Last Edit: March 19, 2014, 01:25:48 pm by Josh @ Dreamland »
"That is the single most cryptic piece of code I have ever seen." -Master PobbleWobble "I disapprove of what you say, but I will defend to the death your right to say it." -Evelyn Beatrice Hall, Friends of Voltaire
TheExDeus
Reply #17 Posted on: March 20, 2014, 09:07:32 am |
Joined: Apr 2008
Posts: 1860
> Are textures still upside-down when used in shader scripts?
If you use the texture coordinates provided, then of course they are. Why shouldn't they be?

> so if our behavior differs from DirectX's
OpenGL's behavior differs from DX's, not ours. That is actually the problem: texture coordinates in DX and OGL are different, but we treat OGL as DX. So it's a problem on our part.

> 1. GLSL does not exhibit this problem.
It does what we tell it to do. So yes, it will exhibit the problem unless we change the V (ty) coordinate we pass in.

> 2. The sampler always multiplies by a matrix.
Same as 1: it does what we tell it to do. In GL3 we don't use a texture matrix. It is almost never going to be used, so enabling it for GL3 just to fix our foobar doesn't seem worth it. The bigger problem is that changing something in the shader will break custom user-made shaders. We don't have a shader editor that modifies shaders; all we do is append a prefix with vertex coordinates, texture coordinates and so on, so people don't have to do it themselves (this is how GM:S does it, and other engines, like ThreeJS).

> 3. G/HLSL is tweaking the sampler calls at compile time.
It's not doing anything to change the outcome. Compilers only optimize, and those optimizations don't change the outcome.

> Whichever it is, we need to find it and either fix it or replicate.
I will just explain the problem again:
1) LGM sends textures upside down to ENIGMA.
2) ENIGMA renders all textures using a top-left UV coordinate system, while in reality it is bottom-left. We don't notice, because LGM sends textures upside down. This only becomes a problem with surfaces (FBOs).
3) If we make LGM send textures right side up, then all textures (including surfaces) will currently render upside down.
4) To fix this we need to change the texture coordinates to compensate (because OGL, as mentioned, has a bottom-left coordinate system).
We have these options.

For GL1:
a) As the texture matrix is always used in the FFP, we can just do glMatrixMode(GL_TEXTURE); glScalef(1,-1,1); in InitGraphicsSystems and forget about it (because we never change that matrix afterwards). This essentially fixes the problem for GL1.

For GL3:
a) Create a texture matrix ourselves and do as in GL1. The problem is that we would need to manually multiply by it in GL3 shaders, and that would break custom user shaders (as they would have to do the same themselves). It also seems wasteful, as the texture matrix would probably not be used for anything else.
b) Do the flip in the sampler calculations. Same problems as a).
c) Change all drawing functions to compensate. This means changing all the drawing functions in General to take bottom-left style UV coordinates. The problem is that all graphics systems use these functions, so this could maybe break DX. But it shouldn't. In this case GL1 would be fixed together with GL3 and no texture matrix would be needed.
d) Do a combination of a) for GL1 and c) for GL3, but instead of changing the functions in General, change only the Add_Texture() function in modelstruct.h in GL3. This will be slightly slower for GL3, because (1-ty) will be calculated for every textured vertex, but it shouldn't make a big performance dent in the long run. It is also a very easy solution and would work at least temporarily, until a better one is found.

I am interested in doing d), because it will fix both and make surfaces useful again. As I use surfaces EVERYWHERE, this setback with broken surfaces is a real pain for me, so I want to fix it as fast as possible. I asked Robert to make the .jar changes for me and send me the .jars so I can test the fixes.
Josh @ Dreamland
Reply #18 Posted on: March 20, 2014, 07:34:56 pm |
Prince of all Goldfish
Location: Pittsburgh, PA, USA Joined: Feb 2008
Posts: 2950
You keep speaking of GL and DX as though their state of being separate entities implies the existence of two parallel universes. Let me be more clear.
Look slightly to your left. You should notice that outside of this window, there is a small plastic frame. Follow that frame downward until you see a thick, dark wire. Follow the wire downward. You should see it join to a somewhat small, metal box with lights on it. If you were to open the box, you would find several thin boards with small conductive engravings in them. The board closest to the wire you were just following is special. It contains two even smaller boards which contain hundreds of microscopic boards. Every few nanoseconds, these boards perform a mathematical operation. The mathematical operation that is performed is governed by a piece of software. THIS IS WHAT WE'RE TRYING TO FIND.
I don't know if that software is a driver. I don't know if that software is owned by Microsoft, or if it's part of GL, or what. SOMEONE IS TRANSFORMING THE COORDINATES, because the little box inside your computer ONLY HAS ONE BEHAVIOR. Maybe DirectX is doing it because they always felt the top-left should be the logical (0,0). Maybe GL is doing it because they want the API to be consistent across all GL devices. I DON'T CARE WHO IS DOING IT. THERE IS MATH BEING DONE, ALREADY. We need to FIND that math. If the math is being done by GL, we need to REPLACE or REMOVE that math, if possible. If that math is being done by DirectX, we need to REPLICATE it.
Once again: the chip in the box connected to the device displaying this text is performing PRECISELY the operations requested of it by the software. THESE TWO PIECES OF SOFTWARE, DIRECTX AND OPENGL, ARE SOMEHOW EFFECTING DIFFERENT BEHAVIORS FOR THE SAME INPUT COORDINATES. Our task is to find out who it is and do the most efficient thing possible to make ours look like DirectX.
The solutions already discussed are not adequate because they require intervention by either the ENIGMA maintainers or the user. The former is only a problem because we allow the user to write shader scripts just as we do, so anything we do in the engine, we have to force the user to do as well. In the case that DirectX or GL is modifying the shader scripts at compile time to use its preferred coordinate system, we can and should do the same, for us and for the user alike. In the case that there is always a matrix multiply, WHICH WOULD BE CONDUCTED in the aforementioned chip, we need to modify that matrix.
Edit: The research I have done has been largely inconclusive, but it is leading me to the conclusion that all of this is handled in the driver, in a way we cannot touch. It seems that each vendor is free to choose their implementation and support the two sampling methods as they see fit, with neither API offering a way to switch between them. I'm going to continue asking around to see if I can get a definitive "no" on the issue. If it is a "no", we run the risk of not only uglying up the code, but of having generated shader code actually perform operations that cancel each other out. I'm hoping cards optimize such tripe out, but I can't be certain at this point.
« Last Edit: March 21, 2014, 12:04:36 am by Josh @ Dreamland »
TheExDeus
Reply #20 Posted on: March 21, 2014, 03:09:38 pm |
Joined: Apr 2008
Posts: 1860
Josh, I love sarcasm-filled posts and I am usually the one writing them, but what I grasped from yours is that you are asking this: if the problem is what I say it is, then why are DX9 and GL drawing the same way? And that is actually a good question, which I hadn't looked into previously. Because if LGM sent data upside down, then GL would render correctly, but DX wouldn't (it should render everything upside down). I did some time-consuming testing (mostly because I couldn't find a debugger that took DX9 games), but when I finally found one, I noticed that in DX the textures are right side up. So it turns out the problem is slightly different. LGM sends textures correctly, top-left being the origin. The problem is that DX9 reads this data the same way, top-left being the origin, while GL reads it in its own texture coordinate system (from the docs of glTexImage2D):

> The first element corresponds to the lower left corner of the texture image. Subsequent elements progress left-to-right through the remaining texels in the lowest row of the texture image, and then in successively higher rows of the texture image. The final element corresponds to the upper right corner of the texture image.

So it does the flipping during load. Sadly I didn't know that (or at least forgot). The issue is slightly different, but the solutions are basically the same. I am open to other solution suggestions though.

> people don't follow standards
They do. It's just a different one. GL is a standard, as is DirectX.

> math classes usually start graphs in the first quadrant.
That, as far as I know, is the reasoning in GL. That math "makes more sense" in this case, but sadly it creates these compatibility problems.

edit:
> always a matrix multiply
The FFP is the only place with mandatory multiplications; in GL3 shaders there is no secret unseen math. Nothing will be transformed unless you explicitly tell it to be.
You can see our current GL3 shader here: https://github.com/enigma-dev/enigma-dev/blob/master/ENIGMAsystem/SHELL/Graphics_Systems/OpenGL3/GL3shader.cpp

> we have to also force the user to do
Technically we don't have to force them to do anything. The whole idea of the shader system in ENIGMA, GM:S and other places is that it just gives you all the data you need (like matrices, input vertex data, the currently bound texture and so on) and the user can then write whatever shader he wants. I was just saying that if we fix this problem by changing the sampler in the shader, then all user shaders will have to do the same, or they will have upside-down textures. Right now we are actually very compatible with GM:S shaders; all the ones here work: http://gmc.yoyogames.com/index.php?showtopic=586380 Some need changes because of syntax errors and some are broken because of the flipped-texture issue we are now trying to fix, but otherwise they work fine. I was hoping to make the examples in that topic (the shadow and animation ones) work in ENIGMA as well. Even shaders from other sources (like ThreeJS, which I mention often) work in ENIGMA. I was actually making a cool water-reflection example (with the shader taken from an example I did in ThreeJS) when I was struck by the surface flip bug.

edit2: Taking all of this into account, it almost seems that any solution should only be applied to surfaces. Because DX loads and draws using a top-left origin and GL loads and draws using bottom-left, both actually draw the same way with the same code and no problems arise. But GL renders to surfaces right side up, and that is the only thing that breaks. The weird thing is that the problem was previously somehow fixed and it worked fine. But now I have tried many things with projections and scaling and none of them work. If I flip the "up" vector, for example, then a 3D camera breaks. Same with scaling: then even views break. I need a way to flip only the texture.
« Last Edit: March 21, 2014, 03:55:17 pm by TheExDeus »
Josh @ Dreamland
Reply #21 Posted on: March 21, 2014, 11:54:29 pm |
Prince of all Goldfish
Location: Pittsburgh, PA, USA Joined: Feb 2008
Posts: 2950
Damn it. That still isn't what I'm trying to convey. I know that the two systems use a different point as (0,0). But the hardware doesn't. The hardware has some representation of (0,0) that is independent of GL or DirectX. And yet both APIs' conventions are supported by the driver. How is it doing this? How does a GL application say, "I am GL! Treat (0,0) as bottom-left!"? How does a DirectX application say, "I'm Direct3D! Treat (0,0) as top-left!"? And more importantly, can we do the same? Can we say, "I'm ENIGMA! (0,0) is top-left!"? Can we lie and pretend to be DirectX?
I don't know who owns the code that forms the difference between these two. That's what I am trying to figure out. The physical sampler probably isn't doing this translation live. It's probably done by the shader compiler or by the driver, and unfortunately, I'm leaning toward the latter. Either way, we need to figure it out before we go hacking in a fix. I'm curious as to how ANGLE deals with this.
« Last Edit: March 21, 2014, 11:56:03 pm by Josh @ Dreamland »
TheExDeus
Reply #22 Posted on: March 22, 2014, 06:43:39 am |
Joined: Apr 2008
Posts: 1860
> How is it doing this?
I don't think it is. The math makes sense either way, even if the GPU/driver/software doesn't do anything. Check this image I made: when passing the same UV coordinates with the given texture to the GPU, it will sample the same point in both GL3 and DX9. So for the GPU it doesn't matter, and it doesn't do any conversion whatsoever. That is because we not only flip the texture coordinates, we also flip the textures in memory. That is why we can provide the same texture coordinates for GL and DX in General/GSsprites.cpp and they both render sprites identically. But for surfaces it's different, because nothing flips the surface's texture data. If we didn't flip the textures in GL (or rather, if glTexImage2D didn't do it while loading), then we would have to give texture coordinates different from DX's. So if we did it "properly" and had all our textures right side up, we would have to provide coordinates like this: http://uploads.gamedev.net/monthly_06_2011/ccs-8549-0-98845200-1307472511.jpg
As we flip the textures, we provide the DX variant: http://i.msdn.microsoft.com/dynimg/IC282223.jpg
« Last Edit: March 22, 2014, 06:51:16 am by TheExDeus »
TheExDeus
Reply #25 Posted on: March 22, 2014, 12:12:06 pm |
Joined: Apr 2008
Posts: 1860
> It's actually a good possibility the texture data could be flipped.
That was my original thought, but now I am almost certain it's not. DX loads the data as you give it, which means DX loads it right side up. GL loads it by considering the first element to be at the bottom-left corner, and so it flips the image while loading. Here is an image I made to illustrate: the first one is the original 75x75 sprite. We resize it to the nearest power of two for compatibility and then load it into memory. When I took the DX image from GPU memory, we can see it is right side up. When I took the image from GPU memory in GL, we can see the image is flipped. That is because glTexImage2D loads it considering the first element to be at the bottom-left corner; as we provide a top-left origin image, it loads it flipped.

> One salient point is that GL expects texture data to start in the lower-left corner (like bitmap files), so keep that in mind while dealing with texture loading code.
That is exactly what the problem is and what this discussion is about.
Josh @ Dreamland
Reply #27 Posted on: March 23, 2014, 03:34:30 pm |
Prince of all Goldfish
Location: Pittsburgh, PA, USA Joined: Feb 2008
Posts: 2950
Rendering is easily fixed with a projection, which makes sense; it isn't as though it screws with projection matrices you set yourself.
But I'm so ashamed right now I'm just going to stop talking.
[snip]enigma::Matrix4 orhto; orhto.InitOtrhoProjTransform(x-0.5,x + width,y-0.5,y + height,32000,-32000);[/snip]
That is the single saddest thing I've ever seen.
For future reference, O-R-T-H-O. And 32000 isn't a very special magic number.
I can't even find something to look at and say, "yes! everything about this is right!"
« Last Edit: March 23, 2014, 03:37:55 pm by Josh @ Dreamland »
TheExDeus
Reply #28 Posted on: March 23, 2014, 03:58:41 pm |
Joined: Apr 2008
Posts: 1860
The problem is that GL has the opposite-handed coordinate system for both texture sampling and rendering. If it were just the texture loading, you could just flip the texture and everything would work. The problem is both: if GL only had the bottom-left-origin coordinates, it would render everything upside down and we would have noticed from day 1. As it loads textures flipped as well, the two compensate and we don't notice. Until we use surfaces, which are right side up.

> Rendering is easily fixed with a projection, which makes sense; it isn't as though it screws with projection matrices you set yourself.
It does. d3d_set_projection_ortho() and the other "set" functions overwrite previously set projection matrices, so in ENIGMA it will work differently from GM. But all my attempts to figure out how to flip the projection have failed, because I don't want to flip it in a way that reverses the camera direction. For example, when you look straight up and render to a surface, you should still look up. If we just flip the projection or view matrix, the camera will actually face down, because the matrix will have inverted values. So basically I just need to change the "up" vector, I guess (in d3d_set_projection), but that ortho is used on surfaces, which don't have an "up".

> For future reference, O-R-T-H-O. And 32000 isn't a very special magic number.
It seems there are typos there. They don't change anything, but I will fix them. Also, 32000 is there just because it was there previously (since day 1, I suppose). I'm not sure who came up with it or why. Mathematically it doesn't change much, because if one is -x and the other +x, then the sum is 0 and the matrix just has zeros in it. In other examples I have seen, it is actually 0f and 1f, which makes more sense. So other than a typo and a magic value, is there anything else in particular that should be improved? And any real ideas with solutions for how to fix surfaces in a way that is compatible with GM?
edit: Lol, I just went back, looked at the math and figured out how to flip the ortho. It wasn't hard, of course, but the fact that we use GM functions (d3d_set_projection_ortho()) in the surface functions made it less apparent. There are still problems with views, though. But I guess we keep the coordinate system? Technically it's not pretty that we abuse GL like that, but it does help with DX compatibility and allows us to write more General functions.

edit2: The 32000 thing is the brainchild of Robert: https://github.com/enigma-dev/enigma-dev/commit/ba85fa04e7285af5238a937df68828411cafb354
Dunno why.

edit3: Yup, we still basically have the same problem. Right now it stems from the fact that all the GM d3d_set functions overwrite the previously set matrices. This means that all the d3d_set_projection_ortho() calls made during screen_redraw() in G3screen.cpp are overwriting the projection set in surface_set_target(). I guess what we want, again, is to check whether an FBO is bound and only then use d3d_set_projection_ortho() and screen_set_viewport(), just like it was done previously with glScale(1,-1,1);

edit4: Also, for those who aren't aware, the bug I am fixing now occurs when screen_draw() is called after surface_set_target(), i.e. when trying to draw the screen to a surface.
« Last Edit: March 23, 2014, 05:56:38 pm by TheExDeus »
Josh @ Dreamland
Reply #29 Posted on: March 23, 2014, 06:22:40 pm |
Prince of all Goldfish
Location: Pittsburgh, PA, USA Joined: Feb 2008
Posts: 2950
One of my biggest problems with this "let's just flip all the projections" shit is that the code had crept its way into the view code. The screen_redraw code had its fingers in every system. It wasn't just checking room variables; it was checking horrifying shit like "are we in an FBO?" or "is the moon currently waning or waxing?", and contained a copy of the entire function for each of these cases. It was then that I deliberately broke surfaces by removing the over-involved logic, in an attempt to get the original author of that segment to find a better way of dealing with the problem. Little did I know that this problem is a very artificial yet extremely well-defined difference between the two APIs, one that is probably impossible to change directly, though changing it would be preferable to working around it. I'd much rather find a way to use Direct3D's sampler instead of OpenGL's, so that no additional math has to be done, but unfortunately it seems that this is impossible without improper intimacy with individual graphics drivers.
As such, it is my belief that our best bet is to maintain this probably-leaky abstraction in the projection functions (asking for an orthographic projection gives you two fucking different matrices depending on whether GL or DirectX is the system, which will probably confuse some poor bastard to tears later on), and to do the following for the sampler:
In GL1, use [snip]glScalef(1,-1,1);[/snip] on the texture matrix at the beginning of the game, as I pointed out long ago and you suggested earlier.
In GL3, create a macro for the sampler call; i.e., [snip]#define texture(sampler, coord) texture((sampler), invY(coord))[/snip], where invY takes a vec2 and computes y = 1-y in it, returning the result. We then pray that the optimizer gets it, because that almost certainly does more math to undo math already added by the driver.
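Such a macro would live in the prefix ENIGMA already prepends to user shaders. A sketch of what that prefix addition might look like (hypothetical; the actual prefix is assembled elsewhere in the engine, and samplers taking vec3 coordinates would need their own overload):

```glsl
// Prepended to every user fragment shader by the engine (sketch).
vec2 invY(vec2 coord) {
    return vec2(coord.x, 1.0 - coord.y);
}
// Shadow the builtin: function-like macros do not expand recursively, so
// the inner texture() still refers to the builtin sampler call. Existing
// user code picks up the flip without any changes.
#define texture(sampler, coord) texture((sampler), invY(coord))
```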