Author Topic: Realtime 3D Shadows, Animations, & Outlined Cel Shading  (Read 7585 times)
Offline (Male) Goombert
Reply #30 Posted on: March 23, 2014, 07:27:00 PM

Developer
Location: Cappuccino, CA
Joined: Jan 2013
Posts: 3108

Ok, I honestly can't believe what I am reading, but it appears Josh has gone over to the dark side. I am not going to stand here and let our OpenGL system be dumbed down because of DirectX.
I think it was Leonardo da Vinci who once said something along the lines of "If you build the robots, they will make games." or something to that effect.

Offline (Unknown gender) TheExDeus
Reply #31 Posted on: March 24, 2014, 08:18:49 AM

Developer
Joined: Apr 2008
Posts: 1872

Quote
the code had creeped its way into the view code.
You mean the projection code? Because I don't think we can get rid of that. It is actually one of the reasons fixing surfaces is a pain. The d3d_set_projection_ortho() and screen_set_viewport() calls in screen_redraw() overwrite the same function calls made for surfaces, and there isn't much you can do about that. You need those two functions in screen_redraw() because we can have many views, and every view needs a different ortho position and viewport position. Surfaces don't need those to change, so we have to check. Another, equally bad, solution is to put the FBO checking and ortho flipping code inside the projection functions themselves; that way we could get rid of the checks in screen_redraw().

Quote
it was checking horrifying shit like "are we in an FBO?
And why wouldn't it check that? How exactly is that slow/bad? Because the truth is that an FBO and the main framebuffer are two different beasts in this regard. That is probably why GM:S now renders everything to a surface by default and only then draws the surface to the screen. It also allows rendering views to surfaces. We may need to consider the same, but compatibility with ten-year-old PCs would be a problem.

Quote
asking for an orthographic projection gives you two fucking different matrices when GL is the system vs DirectX, which will probably confuse some poor bastard to tears later on
The only difference between them is that one is the transpose of the other. And DX uses its own matrices with its own functions, so they don't interfere with one another. The custom matrix classes and functions were made just for GL, because GL >3.0 no longer has the FFP or any matrices whatsoever.
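For illustration, here is a minimal sketch of the transpose relationship described above — not ENIGMA's actual matrix class, just a toy row-major ortho matrix (roughly the DirectX layout, with translation terms in the bottom row) and a transpose that turns it into the column-major layout classic GL expects:

```cpp
#include <array>
#include <cassert>

// Minimal 4x4 matrix as 16 floats; index = row * 4 + col (row-major view).
using Mat4 = std::array<float, 16>;

// A right-handed off-center ortho projection laid out row-major, roughly as
// D3D conventions store it (translation in the bottom row). Sketch only.
Mat4 ortho_row_major(float l, float r, float b, float t, float n, float f) {
    Mat4 m{};
    m[0 * 4 + 0] = 2 / (r - l);
    m[1 * 4 + 1] = 2 / (t - b);
    m[2 * 4 + 2] = 1 / (n - f);
    m[3 * 4 + 0] = (l + r) / (l - r);
    m[3 * 4 + 1] = (t + b) / (b - t);
    m[3 * 4 + 2] = n / (n - f);
    m[3 * 4 + 3] = 1;
    return m;
}

Mat4 transpose(const Mat4& m) {
    Mat4 t{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            t[col * 4 + row] = m[row * 4 + col];
    return t;
}
```

Transposing moves the translation terms from the bottom row into the right-hand column — that is the entire difference between the two layouts, which is also why handing a user the "other" convention looks to them like a rotated matrix.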

Quote
undo math already added by the driver.
The driver DOESN'T do any math. The samplers at the driver level ARE IDENTICAL. At least they should be, because the math works the same either way. Look at my previous post with the sampler picture. While I cannot be certain they work the same, I see no reason why they wouldn't.

Quote
In GL1, use glScalef(1,-1);- at the beginning of the game, as I had pointed out long ago and you had suggested earlier.
I no longer believe this is a solution, because then we would need to flip the textures given by LGM. That will break DX unless we make an exception for it, so we end up having custom LGM code for each graphics system.

Quote
In GL3, create a macro for the sampler call; ie, #define texture(sampler, coord) texture((sampler), invY(coord)), where invY takes vec2 in and computes y = 1-y in it, returning the result.
It would be a lot faster to do this in the vertex shader (then the inversion is only done once per vertex and interpolated when passed to the pixel shader). But still, the same problem as before: we need to flip the textures in memory, and that means LGM needs to write those textures flipped.
Offline (Male) Josh @ Dreamland
Reply #32 Posted on: March 25, 2014, 10:03:26 AM

Prince of all Goldfish
Developer
Location: Pittsburgh, PA, USA
Joined: Feb 2008
Posts: 2956

Quote from: Harri
You mean projection code?
No, I mean the code in screen_redraw was literally doing checks to see if we're inside a framebuffer. Doing a check at any point to tell if we're in a framebuffer is disgusting.

If you want my honest opinion, we ought to draw *everything* to a framebuffer and only at paint time render the buffer to the screen. That will make draw_line scale with the window like it's supposed to and make all our checks for "LOL ARE WE IN A FRAMEBUFFER?" true, meaning we can remove that nonsense from the projection code entirely.

Related: I was wanting to move screen_redraw into universal. You may have noticed that I separated individual pieces of that function out into their own methods. This was for clarity, yes, but also because only those methods contain any intimacy with the rest of the graphics layer. The rest can be moved to universal, provided those functions are added to namespace ENIGMA in the mandatory header.

Quote from: Harri
And why wouldn't it check that? How exactly is that slow/bad? Because the truth is that FBO and main framebuffer are two different beasts in this regard.
In general, it is bad for a system to have logical dependencies on other systems. I know that we have isolated the GL graphics system as its own module, and so it seems that extreme intimacy inside that module is acceptable, but to a prospective contributor, this will appear extremely tacky, and if at any point a person forgets that a check for that nonsense needs to be done, we have an easter egg of a problem. And I don't mean to pull the slippery slope card, but from my experience, these hacks tend to compound. Again, screen_redraw was pretty bad.

Quote from: Harri
That is probably why GM:S now renders everything on a surface by default and only then draws the surface to screen.
As mentioned, proper primitive scaling is another reason.

Quote from: Harri
We maybe would need to consider the same, but compatibility with 10 years old PC's would be a problem.
Ten-year-old PCs will use GL1 binaries, which are designed for people who don't take gaming seriously. Their cards either always multiply by a matrix or just have shitty GL support. Essentially, the problem doesn't apply either way, for better or for worse.

Quote from: Harri
The custom matrix classes and functions were made just for GL, because >GL3.0 no longer has FFP and any matrices whatsoever.
So you are trying to maintain forward compatibility. So that GL1 games can run on modern machines and old machines, while GL3 games only work on modern machines. I suppose that's acceptable, and probably beneficial as (1) it gives us more control over multiplication order (which was a problem for compatibility in the past) and (2) it gives users access to those matrices for math, in addition to a possible (3) that when lots of matrix math is done, communication with the graphics card can be reduced substantially.

Quote from: Harri
The only difference between them is that one is transpose of the other. And DX uses it's own matrices with its own functions, so they don't interfere with one another.
Giving the users access to those matrices is to our advantage. It lets them query their own vector transformations to determine physical points on screen. That is an insanely powerful feature that was completely missing from GML. The matrices being a transpose of one another will just further confuse the user. "WHY IS THIS SYSTEM GIVING ME THE MATRIX ROTATED?" Bug report inbound.

Quote from: Harri
Driver DOESN'T do any math. The samplers in driver level ARE IDENTICAL. At least they should be, because the math works the same either way. Look my previous post with the sampler picture. So of course while I cannot be certain they work the same, I see no reason why they wouldn't.
If there is no math being done at the driver level, then it is being done at the software level. We are seeing different behaviors; the GL and DX samplers are "upside-down." So if it's not being done at the driver level, that's fantastic! That means we can probably tell it we would like the DirectX sampler instead of the GL sampler, and then our problems are solved.

But outside of fairytale land, I believe it is up to the driver to adapt the software as required so that GL applications have an inverted sampler. How this is done would be a black box to us. It's possible the hardware could offer two samplers, but I am doubtful. It's possible the hardware has a "take 1-y" line. But I am doubtful. I'd guess that on old hardware, there was always a matrix multiply for the sampler, which is why it's part of the GL API. On new hardware, I believe that nine times in ten, the hardware sampler has only one function, and the driver augments GLSL shader scripts to take 1-y. If you can prove me wrong, that would be great. Especially if you do so by finding a way to make the sampler behave properly.

Quote from: Harri
I no longer believe this is a solution, because then we need to flip textures given by LGM. This means it will break DX unless we make an exception for it. So we end up having custom LGM code just for each graphics system.
LGM should be giving texture data to us in the format most immediate to itself, which is probably right-side up. If GL or DX has to flip this texture to load it correctly, so be it. But what you might be missing is that it is not an option to only flip the projection: as soon as you start texturing in a surface, you'll find those textures are upside-down in the end. Using the GL1 API, we can invert the y-values for both the projection and the sampler matrices, thus mimicking DirectX's behavior. I see no reason to not do this.
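If a loader-side flip is ever the route taken, the operation itself is cheap. A sketch of flipping raw RGBA rows in memory before upload — an illustrative helper under assumed conventions (top row first, 4 bytes per pixel), not existing ENIGMA code:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Flip an RGBA image buffer vertically in place. `pixels` holds `height`
// rows of `width` pixels, 4 bytes per pixel, top row stored first.
// Illustrative helper only, not part of ENIGMA.
void flip_vertically(std::vector<std::uint8_t>& pixels, int width, int height) {
    const int stride = width * 4;  // bytes per row
    for (int row = 0; row < height / 2; ++row)
        std::swap_ranges(pixels.begin() + row * stride,
                         pixels.begin() + (row + 1) * stride,
                         pixels.begin() + (height - 1 - row) * stride);
}
```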

If it's of any consolation, I am planning on this new compiler supporting scripting. So graphics systems will, ideally, be able to supply code to invert texture data at compile time, if need be. The new JDI I am currently hooking up has very nice AST evaluation support, so I believe EDL scripting in the compiler is going to be an option.

Quote from: Harri
It would be a lot faster to do this in vertex shader (then the inversion is only done once per vertex and interpolated when passed to pixel shader).
Blazingly so. Too bad we don't know which values to invert.

Quote from: Harri
We need to flip textures in memory and that means LGM needs to write those textures flipped.
That never followed, period. LGM conveys the data. What we do with it does not concern LGM in the slightest, regardless of the availability of compiler scripts.
"That is the single most cryptic piece of code I have ever seen." -Master PobbleWobble
"I disapprove of what you say, but I will defend to the death your right to say it." -Evelyn Beatrice Hall, Friends of Voltaire
Offline (Unknown gender) TheExDeus
Reply #33 Posted on: March 25, 2014, 02:39:36 PM

Developer
Joined: Apr 2008
Posts: 1872

Quote
If you want my honest opinion, we ought to draw *everything* to a framebuffer and only at paint time render the buffer to the screen.
Agreed.

Quote
So you are trying to maintain forward compatibility. So that GL1 games can run on modern machines and old machines, while GL3 games only work on modern machines. I suppose that's acceptable, and probably beneficial as (1) it gives us more control over multiplication order (which was a problem for compatibility in the past) and (2) it gives users access to those matrices for math, in addition to a possible (3) that when lots of matrix math is done, communication with the graphics card can be reduced substantially.
GL1 also uses the matrix class. There is actually very little difference between GLmatrix.cpp and GL3matrix.cpp: in GL1 the matrices are instantly uploaded to GL, while in GL3 we wait until we need to draw something. But that matrix class won't be a compatibility problem, as it uses pure C/C++ (circa '97). Right now the class is not in enigma_user, but there are some functions planned (like GM's http://help.yoyogames.com/entries/28707818-Matrix-Functions), though those are very limited and specific (basically only 4x4 matrices are supported). For general-purpose badassness the user should use the Matrix extension (right now in git: https://github.com/enigma-dev/enigma-dev/tree/master/ENIGMAsystem/SHELL/Universal_System/Extensions/Matrix). Those are templated, however, which means we cannot currently use them in EDL. I am waiting for the new parser for that.

Quote
If there is no math being done at the driver level, then it is being done at the software level.
My point is that there is no math done AT ALL.
Quote
We are seeing different behaviors; the GL and DX samplers are "upside-down."
We are not seeing that. The samplers are identical - the textures aren't. Surfaces are not flipped, while all the other textures are. That is the difference we see. Check these two images again:

This one shows that the samplers are identical (we give the sampler the same coordinates and it samples the same point in both GL and DX), so there is no math difference. If it did 1-y at this point, it would break.

This image shows how the textures are represented in memory. If I had added a surface there as well, it would look just like the DX9 one, while we need it to be like the GL one. That is the only difference I can see.

Quote
Blazingly so. Too bad we don't know which values to invert.
In GL3 shaders we input texture coordinates with an attribute named in_TextureCoord. All we need to do to flip is vec2(in_TextureCoord.s, 1.0 - in_TextureCoord.t). I just tested that and it works fine, so it's not complicated.
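The equivalence being relied on here can be checked outside of GL entirely. In this sketch — a toy nearest-neighbour sampler modelling the behavior, not real GL code — sampling a vertically flipped image at (s, 1-t) lands on the same texel as sampling the original at (s, t):

```cpp
#include <cassert>
#include <vector>

// Toy nearest-neighbour sampler for a w x h single-channel image stored
// top row first; s and t run from 0 to 1, with t increasing downward.
// A model of sampler behavior for demonstration, not real GL code.
int sample(const std::vector<int>& img, int w, int h, float s, float t) {
    int x = static_cast<int>(s * w); if (x >= w) x = w - 1;
    int y = static_cast<int>(t * h); if (y >= h) y = h - 1;
    return img[y * w + x];
}
```

With a 2x2 image {1, 2, 3, 4} and its row-flipped copy {3, 4, 1, 2}, sampling the original at (0.25, 0.25) and the flipped copy at (0.25, 0.75) returns the same texel — which is exactly why the 1-t trick works when the texture data is flipped, and breaks when it isn't.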

Think of the whole problem this way: if we didn't use FBOs, we would NEVER see the problem. GL loads the textures flipped and flips the coordinate origin as well. This means the samplers are identical, the shaders are identical, the drivers are identical, the software is identical. The problem is that FBOs are not loaded the way regular textures are: GL flips textures while loading them, but it doesn't flip the FBO (because it doesn't load it). That is the problem.
« Last Edit: March 25, 2014, 02:54:40 PM by TheExDeus »
Offline (Male) Goombert
Reply #34 Posted on: March 25, 2014, 02:45:45 PM

Developer
Location: Cappuccino, CA
Joined: Jan 2013
Posts: 3108

Quote from: JoshDreamland
The rest can be moved to universal,
Toss it in General, because Universal is getting to be a mess and needs to be cleaned up and organized into folders or something.

Quote from: JoshDreamland
So you are trying to maintain forward compatibility. So that GL1 games can run on modern machines and old machines, while GL3 games only work on modern machines. I suppose that's acceptable, and probably beneficial as (1) it gives us more control over multiplication order (which was a problem for compatibility in the past) and (2) it gives users access to those matrices for math, in addition to a possible (3) that when lots of matrix math is done, communication with the graphics card can be reduced substantially.
Actually, that's been my doing, and I asked that he do the same.

Quote from: JoshDreamland
LGM should be giving texture data to us in the format most immediate to itself, which is probably right-side up. If GL or DX has to flip this texture to load it correctly, so be it. But what you might be missing is that it is not an option to only flip the projection: as soon as you start texturing in a surface, you'll find those textures are upside-down in the end. Using the GL1 API, we can invert the y-values for both the projection and the sampler matrices, thus mimicking DirectX's behavior. I see no reason to not do this.

If it's of any consolation, I am planning on this new compiler supporting scripting. So graphics systems will, ideally, be able to supply code to invert texture data at compile time, if need be. The new JDI I am currently hooking up has very nice AST evaluation support, so I believe EDL scripting in the compiler is going to be an option.
What's all this driver nonsense? As I already said, OpenGL reads textures into memory upside down. Direct3D can do either, since you perform the memory allocation yourself. For the same reason, OpenGL and Direct3D both use BGRA internally, and that is why I switched ENIGMA to do the same.

Quote
LGM should be giving texture data to us in the format most immediate to itself, which is probably right-side up. If GL or DX has to flip this texture to load it correctly, so be it.
I agree with the former, but not the latter. Direct3D is completely unaffected by this issue and by the BGRA byte ordering, for the reason stated above: you perform the memory allocation yourself and can easily swap the order in which bytes are transferred to the GPU with no performance difference.
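The byte-order swap being described is a single exchange per pixel at upload time. A minimal sketch of the idea — an illustrative helper, not ENIGMA's actual loader code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Convert a pixel buffer from RGBA byte order to BGRA in place by swapping
// the red and blue bytes of every 4-byte pixel. Illustrative helper only.
void rgba_to_bgra(std::vector<std::uint8_t>& px) {
    for (std::size_t i = 0; i + 3 < px.size(); i += 4)
        std::swap(px[i], px[i + 2]);
}
```

Because the same loop converts in either direction (the swap is its own inverse), the cost of matching whatever ordering the GPU prefers really is negligible, as stated above.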

imho, I am more comfortable with OpenGL's origin than Direct3D's
« Last Edit: March 25, 2014, 02:55:49 PM by Robert B Colton »

Offline (Unknown gender) TheExDeus
Reply #35 Posted on: March 25, 2014, 02:57:03 PM

Developer
Joined: Apr 2008
Posts: 1872

Quote
Actually, that's been my doings, and I asked that he do the same.
I don't see what the matrix class or matrix functions have to do with compatibility. Originally I only planned to replace the GL3 matrices, but then I figured they probably work faster than the GL1 implementation, so I added them there too. In the end it's possible we could have all of GLmatrix.cpp in General and even make DX use it (though in DX the matrices would need to be transposed whenever used). I do have a feeling DX matrices are probably super optimized, however.
Offline (Male) Goombert
Reply #36 Posted on: March 25, 2014, 03:02:29 PM

Developer
Location: Cappuccino, CA
Joined: Jan 2013
Posts: 3108

No Harri, I meant I've been the one advocating that OpenGL 1 not focus on dumping the FFP or anything like that. As Josh stated, you and I are treating OpenGL 1 so that it works on both old and new computers, while OpenGL 3 targets users with higher-end graphics cards. The same is not being done with Direct3D; the Direct3D 9 and 11 systems are meant to be the same as OpenGL 3.

Edit: Also, fuck the idea of rendering to a framebuffer and waiting to copy to the main buffer. Just render to the main buffer but don't swap.

Edit 2: Also, there are other ways of using textures themselves as render targets that we could look into, which might possibly fix the flipping shit.

Edit 3: Also, none of you guys' solutions is going to work anyway. Why? Because of surface_get_texture() and games like Project Mario that use surfaces as textures :P
« Last Edit: March 25, 2014, 06:02:00 PM by Robert B Colton »

Offline (Unknown gender) TheExDeus
Reply #37 Posted on: March 25, 2014, 03:27:33 PM

Developer
Joined: Apr 2008
Posts: 1872

Quote
one advocating OpenGL1 not focus on dumping the FFP or anything like that
Dumping the FFP would be dumping GL1. You cannot have GL1 without the FFP, so we couldn't change that even if we tried.

Quote
Edit 3: Also, none of you guys' solutions is going to work anyway. Why? Because of surface_get_texture() and games like Project Mario that use surfaces as textures :P
But they are already broken! That is why we are trying to fix this. Surfaces in ENIGMA DON'T WORK right now. In Mario you use surfaces for freaking water - a symmetrical, tileable texture. Of course you don't notice and/or care that it is actually upside down right now. If you compare screenshots of the Mario example in ENIGMA and GM:S, you will see the difference.
Offline (Male) Goombert
Reply #38 Posted on: March 25, 2014, 03:35:12 PM

Developer
Location: Cappuccino, CA
Joined: Jan 2013
Posts: 3108

Project Mayo doesn't work in Studio :P

Anyway, that doesn't matter; even if you did fix it one of those ways, it would still be technically broken. For my specific game it may not matter, but for someone else's game it would. Possibly 3D shadow examples, etc.

Offline (Male) Josh @ Dreamland
Reply #39 Posted on: March 25, 2014, 05:41:00 PM

Prince of all Goldfish
Developer
Location: Pittsburgh, PA, USA
Joined: Feb 2008
Posts: 2956

Harri, you keep showing me that image, and it shows me two different sampler behaviors. Yes, the coordinate (0.5, 0.25) gives the same pixel in both systems, so long as the image is loaded upside-down. And I believe you're saying GL is taking care of that for us. That's good and well, until FBOs happen. Then it becomes SURPASSINGLY APPARENT that the two samplers are behaving differently. One of them expects the texture to be upside-down, and in the case of the textures we have given it the data to load, it is correct. Then the user starts rendering to a framebuffer, and suddenly, it's wrong, and the fact that the samplers ARE behaving differently begins to show. You have to recognize this.

And you can name your texture anything you like. There is no limit to how many texture coordinates you can have; you can sample any number. We can't know them all.
« Last Edit: March 25, 2014, 05:42:38 PM by Josh @ Dreamland »
Offline (Unknown gender) TheExDeus
Reply #40 Posted on: March 25, 2014, 09:11:59 PM

Developer
Joined: Apr 2008
Posts: 1872

Quote
Anyway, that doesn't matter, even if you did fix it one of those ways, it would still be technically broken. My specific game it may not matter, but on someone elses game it would. Possibly 3D shadow examples, etc.
It's broken NOW. That means when we fix it, no matter how, it will FIX the games and examples you speak of. None of them actually works now. The 3D shadows example mentioned in this topic doesn't work in ENIGMA now for other reasons, but if everything else worked, it would still break while rendering to a surface. That is what we want to fix.

Quote
the two samplers are behaving differently.
The sampler doesn't care what it samples, be it an FBO or a texture. It doesn't handle them differently. The data is different.

Quote
And you can name your texture anything you like. There is no limit to how many texture coordinates you can have; you can sample any number. We can't know them all.
Texture names are not significant. If you were thinking of the names of the attributes holding texture coordinates, then yes, you can name them whatever you want. GM and ENIGMA give the user In_Texture as a predefined attribute holding the texture coordinates of the rendered scene. But the fact that a user may later have a custom attribute (right now impossible in ENIGMA) is the reason why I didn't want this fix to be in shaders. I don't want to force them to change anything from GM:S just to make it work here.
Offline (Male) Josh @ Dreamland
Reply #41 Posted on: March 26, 2014, 08:59:42 AM

Prince of all Goldfish
Developer
Location: Pittsburgh, PA, USA
Joined: Feb 2008
Posts: 2956

...

Harri, I am talking about the DirectX sampler vs the OpenGL sampler.

And yes, I meant naming texture coordinates. And the point of the fix is to make sure that the user *never* notices that GL's coordinate system is flipped. WE would be modifying the shader, which, as I've said eleven times, is probably what the graphics driver is doing for GL programs (as opposed to DirectX programs!). Users can name the texture coordinates they send to the GPU anything they like, so we have to modify them only once the sampler is invoked. The only alternative is to track every vector that goes to the sampler and somehow modify it at its first assignment. Since users can create vec2s in their code, that's not practical.
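One way to "modify them only once the sampler is invoked" without tracking user vectors is to rewrite the shader source before compiling it — essentially the macro trick suggested earlier in the thread. A sketch, using a hypothetical preprocessing helper that does not exist in ENIGMA:

```cpp
#include <cassert>
#include <string>

// Prepend a macro that intercepts every texture2D() call and flips the
// t coordinate at the moment of sampling. The inner texture2D is not
// re-expanded (the C-preprocessor rule against self-reference, which GLSL's
// preprocessor shares), so user shaders need no changes at all.
// Hypothetical helper for illustration only.
std::string inject_y_flip(const std::string& user_glsl) {
    const std::string prelude =
        "#define texture2D(sampler, coord) "
        "texture2D((sampler), vec2((coord).s, 1.0 - (coord).t))\n";
    return prelude + user_glsl;
}
```

Because the flip happens inside the macro at the call site, it applies no matter what the user named their coordinate attributes or intermediate vec2s.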
Offline (Male) Goombert
Reply #42 Posted on: March 26, 2014, 11:31:19 AM

Developer
Location: Cappuccino, CA
Joined: Jan 2013
Posts: 3108

We could also write the sampler ourselves :P

Offline (Male) time-killer-games
Reply #43 Posted on: March 26, 2014, 01:10:20 PM

Contributor
Location: Virginia Beach
Joined: Jan 2013
Posts: 1164

Why not just stop arguing books and books, fix the surfaces the first way that comes to mind, and call it a day? :P Flip a coin or something, at least.
Offline (Unknown gender) Darkstar2
Reply #44 Posted on: March 26, 2014, 01:25:19 PM
Member
Joined: Jan 2014
Posts: 1244

Quote from: time-killer-games
Why not just stop arguing by the books and and books and fix the surfaces in the first way of going about it that comes to mind and call it a day? :P Flip a coin or something, at least.

lol! Flip a coin? You mean like YoYo does:

Heads = We break something new today
Tails = We deprecate
