This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.
286
Developing ENIGMA / Re: [GL3.3] Multiple render targets (MRT)
« on: January 13, 2015, 06:42:58 pm »
Quote
1) ENIGMA would finally have its shit together in the graphics department and little duplicate graphics code, which despite my efforts we still have a lot of.
2) We can use HLSL and GLSL at the same time.
3) We would actually have working graphics for embedded systems and mobile.
4) Everybody could use surfaces, and there wouldn't be inconsistencies in the behavior of our graphics systems.
1 is only because of the number of systems we try to support. 2 I don't care about. 3 is a potential plus. 4 is a myth. Besides polygone and his 1990s PC, nobody has ever had any problems with it. I have tested it on so many PCs (new, old, laptops, desktops, etc.) and had zero problems.
Quote
Anyway, did you even bother to Google search, Harri? Why can't you swap the color attachments?
That doesn't do what I need. I need to use 3 attachments AT THE SAME TIME, so I can render to them all in one pass. The forum post talks about changing one attachment to another just so you wouldn't need to bind a framebuffer, which is something totally different.
You are free to try ANGLE and see how hard it would be to add to ENIGMA as another graphics engine (or to replace all of them). I personally don't want it and have no reason to want it. For all desktop platforms >=GL3.3 is just fine. For embedded stuff there would have to be differences (when targeting GLES 2.0, not GLES 3.0), but in most cases those are just resource limitations (for example, in GLES 2.0 you can have only one framebuffer object, so FBOs themselves are quite useless). I just think we should stop supporting 3 graphics systems people don't use. I personally haven't done anything for DX at all, so I'm sure it's already broken because of changes I have made elsewhere.
287
Developing ENIGMA / [GL3.3] Multiple render targets (MRT)
« on: January 13, 2015, 03:38:48 pm »
I wanted to try implementing deferred shading. I hit the wall that if I want to do it efficiently, I should be able to render to several render targets ("surfaces") at once. I found out that GM:S can do it with an undocumented function, surface_set_target_ext(int index, int id), which takes the index of the "stage" (as Robert calls them) to bind, while id is the surface itself. Sadly we create surfaces as individual framebuffer objects (FBOs), and OpenGL allows only one FBO to be bound at any one time. This means I cannot bind several of them at once like GM does. GM can do it because it uses DX underneath (on Windows only, I presume, which is the only place surface_set_target_ext seems to work, and only HLSL shaders can render to MRT in GM:S as far as I can see), and DX allows that (http://msdn.microsoft.com/en-us/library/windows/desktop/bb147221%28v=vs.85%29.aspx). In OGL you do it differently: you add all the required textures to one FBO (http://ogldev.atspace.co.uk/www/tutorial35/tutorial35.html), which can then be bound and all the textures accessed.
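For reference, the single-FBO approach from the ogldev tutorial boils down to roughly this (a hedged sketch, not ENIGMA's actual code; assumes a current GL >= 3.0 context, error checking omitted):

```cpp
// One FBO with three color attachments — the GL way to get MRT.
GLuint fbo, tex[3];
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(3, tex);
for (int i = 0; i < 3; ++i) {
    glBindTexture(GL_TEXTURE_2D, tex[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 640, 480, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    // Attach texture i as color attachment i of the one FBO
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                           GL_TEXTURE_2D, tex[i], 0);
}

// Map fragment shader outputs 0..2 to attachments 0..2
GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                   GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, bufs);
// A single draw call now writes to all three textures at once.
```

This is exactly why surface_set_target_ext can't be mapped onto our one-FBO-per-surface design: the attachments all have to live in the same FBO.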
So as I couldn't add surface_set_target_ext(), I planned to add surface_add_colorbuffer(), which would add a texture with specific formats to the FBO. Something like this:
Code: [Select]
surf = surface_create(640,480); //This creates a 640x480 RGBA texture with unsigned int type and BGRA format (this is the current default)
surface_add_colorbuffer(surf, 3, tx_rgb, tx_bgr, tx_float); //This adds a 640x480 RGB texture with float type and BGR format (and binds it to GL_COLOR_ATTACHMENT0 + 3)
surface_add_depthbuffer(surf, tx_depth_component, tx_depth_component32f, tx_float); //This adds a 640x480 depth texture with float type and 32f format (and binds it to GL_DEPTH_ATTACHMENT)
I intentionally bound it to color attachment 3 and skipped 2, so you would see where the number comes in later. Now we can do this in the pixel shader:
Code: [Select]
layout(location = 0) out vec4 surfaceBufferOne;
layout(location = 3) out vec3 surfaceBufferThree;
void main()
{
surfaceBufferOne = vec4(1.0,0.5,0.0,1.0); //This buffer actually holds unsigned integers, so this becomes 255, 127, 0, 255
surfaceBufferThree = vec3(3.1415,2.4891,1.2345); //This holds floats
}
Depth is rendered automatically.
The problem with all of this is that I cannot make this work together with the other systems. I need a new graphics_create_texture() function (I called it graphics_create_texture_custom) which I have no place to put. I need:
Code: [Select]
enum {
//Formats and internal formats
tx_rgba = GL_RGBA,
tx_rgb = GL_RGB,
tx_rg = GL_RG,
tx_red = GL_RED,
tx_bgra = GL_BGRA,
tx_bgr = GL_BGR,
tx_depth_component = GL_DEPTH_COMPONENT
};
enum {
//Internal formats only
tx_rgb32f = GL_RGB32F,
tx_depth_component32f = GL_DEPTH_COMPONENT32F,
tx_depth_component24 = GL_DEPTH_COMPONENT24,
tx_depth_component16 = GL_DEPTH_COMPONENT16,
};
enum {
//Types
tx_unsigned_byte = GL_UNSIGNED_BYTE,
tx_byte = GL_BYTE,
tx_unsigned_short = GL_UNSIGNED_SHORT,
tx_short = GL_SHORT,
tx_unsigned_int = GL_UNSIGNED_INT,
tx_int = GL_INT,
tx_float = GL_FLOAT
};
which I cannot define in General, because I use GL_ enums. If I didn't, then I would still need to define them in General and then access them through arrays, which is what the GL3d3d file does, and which is garbage. And then I need to add surface_add_colorbuffer and surface_add_depthbuffer somewhere, but I cannot do it in General, because GL1 will never have them (and DX will probably not have them either). So I end up making a stupid header where all of this junk goes.
I am seriously considering forking ENIGMA to have only one graphics system, because GL1 is obsolete and I haven't really touched it in forever, and DX9/11 are not worked on and are not required as far as I can see. If we somehow manage to get GLES working then we would still have problems like these, but at least GLES is like 95% compatible, so the problems would be a lot smaller.
I guess this is why most engines have only one graphics system, or at least abstract everything even more, so the engine becomes agnostic to it. We cannot easily do that, because we make a tool which allows people to write their own code, and that code is already a layer on top of the graphics system.
288
Developing ENIGMA / Re: LateralGM 1.8.6.844
« on: January 12, 2015, 04:20:59 pm »
How about adding default options? Right now, by default, when you create a Shader you get it as GLSLES with precompile set to false. How about a way for me to set the default to GLSL with precompile true? This is less of a problem when I actually create a shader, but it matters when I load non-egm files. When I open a GM example, all the shaders default to GLSLES and false, so I have to open 10 shaders and reset them all.
They probably shouldn't be saved in configurations as they are not project specific, but user preference specific.
289
Off-Topic / Re: How amateurs can improve a game's graphics !
« on: January 04, 2015, 05:36:24 am »
I can still run every game I've tried on Ultra with my 660ti. I have it in SLI now, so I get even higher fps. But even when I run on a single card, I haven't seen a game I couldn't run on ultra. I do usually run them without MSAA and at 1080p, though. If you want 4K, then no card will be able to run some of these games.
The guy said he only had about 30 mods running at one time. But as he mostly makes screenshots instead of videos, he constantly keeps switching them and tuning them to get one great screenshot. That is sadly the problem with those mods: you will get a few beautiful scenes, but many will look even worse.
290
Off-Topic / Re: How amateurs can improve a game's graphics !
« on: January 03, 2015, 07:54:14 pm »
The biggest impact was from all that grass, and the guy could run it at about 40fps with a GeForce 670. So it's clearly not that bad. I think you could run the game just fine on a top-of-the-line 970. If anyone made 1 mod to create all that, instead of it requiring about 30, I would try it on my 660ti. I'm sure it would run just fine.
Also, he has a YouTube channel with many videos on modded Skyrim. Right now he uses a 970, but previously he had a 670 and it still ran just fine (that is where the 40fps I mentioned comes from).
291
Off-Topic / Re: why I hate websites owned by tiny people
« on: January 03, 2015, 07:51:29 pm »
It's called the bus factor: how many people on the team can be hit by a bus before the project dies with them?
http://en.wikipedia.org/wiki/Bus_factor
For us, it really could be about 3.
292
Programming Help / Re: Matrix Math extension?
« on: December 31, 2014, 06:39:28 am »
Is it working for you too?
293
Proposals / Re: Disabling Automatic Semicolons
« on: December 30, 2014, 08:30:56 am »
Yes, I do put semicolons everywhere myself. And MinGW would return the error, but the parser will not (so the syntax check button won't catch it). This will hurt novice users who come from GM, but an option for stricter syntax would be useful anyway, so people would learn how other languages do it, not only GM, which allows you to code in 6 different styles.
294
Programming Help / Re: Matrix Math extension?
« on: December 29, 2014, 09:02:26 pm »
I was not able to get variable arrays working right now (I still don't know what the definition must look like; I will probably take a peek at string(), which should have it), but thanks to recent fixes to typed arrays, I was able to test it like this:
Code: (edl) [Select]
//Create event
//Lets make a 4x4 identity matrix
local float mat[16];
mat[0] = 1;
mat[1] = 0;
mat[2] = 0;
mat[3] = 0;
mat[4] = 0;
mat[5] = 1;
mat[6] = 0;
mat[7] = 0;
mat[8] = 0;
mat[9] = 0;
mat[10] = 1;
mat[11] = 0;
mat[12] = 0;
mat[13] = 0;
mat[14] = 0;
mat[15] = 1;
//Draw event
mat[3] = sin(get_timer()/1000000);
mat[7] = cos(get_timer()/1000000);
shader_set(shr_simple);
glsl_uniform_matrix4fv(glsl_get_uniform_location(shr_simple, "myMatrix"), 1, mat);
draw_rectangle(250,250,350,350,0);
shader_reset();
//shr_simple vertex shader
in vec3 in_Position; // (x,y,z)
uniform mat4 myMatrix = mat4(1.0); //Default set to identity matrix
void main()
{
gl_Position = viewMatrix * myMatrix * projectionMatrix * vec4( in_Position.xyz, 1.0);
}
//shr_simple pixel shader
out vec4 out_FragColor;
void main()
{
out_FragColor = vec4(1,0,0,1); //Output red rectangle
}
And it worked. I got a moving rectangle on the screen by manually replacing the modelMatrix with myMatrix. This should also allow writing over built-in matrices, like viewMatrix, projectionMatrix, etc. There could be problems with that though, as the matrices are calculated only when needed and when drawing. This means your custom changes could be overridden in code like this:
Code: (edl) [Select]
shader_set(shr_simple);
d3d_transform_set_translation(10,10,10); //We change the enigma::Matrix4 model_matrix
glsl_uniform_matrix4fv(glsl_get_uniform_location(shr_simple, "modelMatrix "), 1, mat); //We write over shader's built-in modelMatrix
draw_rectangle(250,250,350,350,0); //We draw something, so all the matrices are calculated (view_matrix * model_matrix etc.) and sent to shader via glsl_uniform_matrix4fv, thus overriding the previous line
shader_reset();
This is where the matrix_ functions come in. Still, I somehow don't feel like implementing them right now.
I added the egm to this post. Please test.
295
Works in Progress / Re: [OGL3] Shader Test
« on: December 29, 2014, 09:25:23 am »
As I said, to replicate what ANGLE does we would need a GLSL parser that can replace all texture lookup functions with custom ones. In the simplest case it could be a find/replace kind of fix, but I don't think it would be that easy. And it must be done at run-time, probably in glsl_shader_compile().
I don't think using ANGLE is an option, as it just adds another layer of abstraction on top of one we already have. The idea of ANGLE is to be able to run GLES programs on Windows. It's not meant for GL3 or GL4 to run in Windows, as they technically can already do it. They have a shader validator which we could maybe use, but that's about it. Using ANGLE just to flip a freaking texture is an overkill.
296
Works in Progress / Re: [OGL3] Shader Test
« on: December 29, 2014, 08:41:59 am »
Quote
I am open to the shader solution, if we provide a way to disable it.
The problem is that user shaders will have to take this into account every time. This will also break all compatibility with GM shaders (which right now are about 99% compatible).
Quote
Another thing we could do is just flip the texture data the first time surface_get_texture is called on the surface after it was rendered to with surface_set_target, though I don't know what the cost of this is compared to doing it in the shader after upload, obviously more efficient for drawing the surface without changing it multiple times in a row which is the same concept applied to vertex buffers, a pixel buffer could be used to do this.
For most cases you don't need to do any flips in the pixel shader, as you can flip the texture coordinates in the vertex shader. That is extremely fast. Of course, as a texture lookup can be independent of the texture coordinates, you must then also flip the GLSL texture lookup functions (as done in ANGLE). Flipping the texture on the CPU will be A LOT slower. And doing it on some surface_get_texture() call would basically ruin the surface for any later drawing on it, which would make surfaces quite unusable. Surfaces are not just for use once, draw once, clear. You often draw on them multiple times over many frames, like when drawing blood or bodies in a Crimsonland clone or something like that.
Quote
Here's the specific commit where they fixed FBO flipping in ANGLE
ANGLE translates GLSL to HLSL. During this translation they can do a lot of marvelous things. We, on the other hand, don't do any GLSL parsing. And I am not nominating myself to write a GLSL parser. So that solution doesn't work for us, because any user's shaders would have to do this inversion manually. This means no compatibility with GM, and no compatibility even with shaders found online. ANGLE actually had it quite easy, as they just flip all the textures in memory to be upside down, just like FBOs, and then do the other flip in shaders. So in the end the fix is a lot easier for them: they don't differentiate between a regular texture and an FBO texture like we are trying to do.
297
Programming Help / Re: Matrix Math extension?
« on: December 29, 2014, 08:21:00 am »
And how exactly can I pass a var array to a function? What does the C++ declaration look like?
Because it seems the function parser has ignored my functions which require a pointer to an array, so these functions are in the header, but not visible in LGM:
Code: (C++) [Select]
void glsl_uniform1fv(int location, int size, const float *value);
void glsl_uniform2fv(int location, int size, const float *value);
void glsl_uniform3fv(int location, int size, const float *value);
void glsl_uniform4fv(int location, int size, const float *value);
void glsl_uniform1iv(int location, int size, const int *value);
void glsl_uniform2iv(int location, int size, const int *value);
void glsl_uniform3iv(int location, int size, const int *value);
void glsl_uniform4iv(int location, int size, const int *value);
void glsl_uniform1uiv(int location, int size, const unsigned int *value);
void glsl_uniform2uiv(int location, int size, const unsigned int *value);
void glsl_uniform3uiv(int location, int size, const unsigned int *value);
void glsl_uniform4uiv(int location, int size, const unsigned int *value);
void glsl_uniform_matrix2fv(int location, int size, const float *matrix);
void glsl_uniform_matrix3fv(int location, int size, const float *matrix);
void glsl_uniform_matrix4fv(int location, int size, const float *matrix);
I still need to cast the var array to floats, and I don't know how efficient that will be. Right now I just assume they are floats and do a memcpy:
Code: (C++) [Select]
void glsl_uniform_matrix4fv(int location, int size, const float *matrix){
get_uniform(it,location,16);
if (std::equal(it->second.data.begin(), it->second.data.end(), matrix, enigma::UATypeFComp) == false){
glUniformMatrix4fv(location, size, true, matrix);
memcpy(&it->second.data[0], &matrix[0], it->second.data.size() * sizeof(enigma::UAType));
}
}
298
Programming Help / Re: Matrix Math extension?
« on: December 29, 2014, 08:05:05 am »
That extension doesn't work in ENIGMA yet. It was written as a pure C++ thing, and the idea was that EDL would support classes and templates at some point; then you could use it. It wasn't meant for graphics either, as it's not optimized for that. That matrix class allows matrices of arbitrary size and type, which is useful when making things like filters (in my case, a scientific direction), but probably won't be that useful for the generic GM/ENIGMA user.
The graphics system uses specific matrix classes that are optimized for 3x3 and 4x4 matrices. They are not available in EDL though, and are used only internally. GM:S has functions for getting matrices into arrays (matrix_get, matrix_set, matrix_build, matrix_multiply), but they haven't been implemented in ENIGMA. I haven't got around to that, as we lack a way to return EDL/GML arrays or pointers from a function. The only way to change the internal matrices is to use the transformation and projection functions.
edit: I also implemented functions for sending matrices to shaders - glsl_uniform_matrix2fv, glsl_uniform_matrix3fv, glsl_uniform_matrix4fv - which are the GL equivalents. Sadly, I don't think they work in EDL either, as I don't think we can pass arrays to functions. I haven't tested that though. They are used internally to pass matrices to shaders, so they do work in pure C++. I will test and see if I can use them in EDL right now.
299
Works in Progress / Re: [OGL3] Shader Test
« on: December 28, 2014, 04:57:24 pm »
Quote
No it seems correct. Compared it to Model Creator and had same color.
Weird. What shader and what normal matrix did you use? I tried both your code, which creates the normal matrix in the shader, and my modified code, which does it on the CPU. In both cases I cannot get the floor to be blue. This is what I get:
I also tested the surface fix, which is why the surface draws correct side up. But the floor and walls are still wrong. The floor is in the x/y plane, so the normal should be in z (blue).
Quote
1) gross
I don't like it either, but as time goes on I keep coming back to this. Pros:
1) It will draw correctly.
2) It should work in shaders.
3) No math is involved, so it's the fastest option.
Cons:
1) People will have to flip texture coordinates when using surfaces manually for rendering. This means it won't be consistent. That said, if we look at the two most popular use cases for surfaces, they are either drawn with the draw_surface functions, which will work, or used for full-screen effects (i.e. with shaders), which should also work.
I will have to do additional tests later. There is sadly no magic fix for this. We will have to sacrifice something to make this work.
300
Proposals / Re: Improving object selection
« on: December 28, 2014, 03:02:57 pm »
I agree. It's just regular drag&drop mechanics.