
Messages - TheExDeus

1201
Proposals / Re: Vertex color
« on: July 31, 2013, 04:49:37 am »
I was talking about the VBO implementation in GL3. Either way, it changes nothing, as I mentioned.

1202
Quote
Guys, hey I got the #1 reason we are going to use DX now, look at the updated screen shot, that is 50,000 draw_sprite calls at 30fps, OpenGL can't handle 15,000 on my pc at more than 12fps. That is utilizing DX's internal sprite batching class, please proceed with the DirectX vs. OpenGL debates. :P
That is because we don't use any batching at all. But I do have a proposal for it here: http://enigma-dev.org/forums/index.php?topic=1380.0 (which no one seems to read). I managed to get 30FPS with 100k sprites. It will probably be slower (or the same, maybe even faster, who knows) in the final implementation. So I doubt that, properly implemented, one is any faster than the other.

Quote
Also to prove my point, this is the d3d blend modes and their OpenGL and Direct3D equivalents.
No one ever doubted that GM was based on D3D.... because everyone knows it's based on D3D.... even the D3D functions have a prefix.. wait for it... D3D.

Quote
We already have GLES for other embedded systems
Sadly we don't. I would really want an Android port, but the framework it requires (together with a shit ton of configuration for the SDK/NDK) makes it hard to do. Maybe I'll just try making GLES run on Windows (if that is even possible) and then try to make it work on another device.

1203
Proposals / Re: Vertex color
« on: July 31, 2013, 04:10:26 am »
And that was in no way connected to anything that was said here.

1204
Proposals / Re: Vertex color
« on: July 30, 2013, 03:45:23 pm »
Quote
GM's behavior could be simulated by pushing and popping the color in the list where needed.
We use VBOs instead of display lists, as they are usually just better. But even display lists cannot be changed after creation. So if you turn this into a list:
Code: [Select]
glBegin(pr_trianglelist);
glColor4f(enigma::current_color[0],enigma::current_color[1],enigma::current_color[2],enigma::current_color[3]);
glVertex2f(10,10);
glColor4f(1.0,0.0,0.0,1.0);
glVertex2f(100,10);
glColor4f(enigma::current_color[0],enigma::current_color[1],enigma::current_color[2],enigma::current_color[3]);
glVertex2f(100,100);
glEnd();
The current_color[] values would still be pushed in there at the time the list was created, not when it is drawn. At least with VBOs you could loop through the values and manually change them; with display lists you cannot. The boolean is there because if I don't bind the color buffer, drawing defaults to the bound color. That is, if I don't do this:
Code: [Select]
glEnableClientState(GL_COLOR_ARRAY);
glBindBuffer( GL_ARRAY_BUFFER, colorsVBO );
glColorPointer( 4, GL_FLOAT, 0, (char *) NULL );
Then glDrawArrays() will draw the VBO with the currently bound color (the last glColor4f()). If I do bind it, then it ignores that and looks for the color in the color buffer. And as I cannot have a "don't care" value in the color buffer, I must fill it either during creation or just before drawing. During creation I cannot know which bound color will be used while drawing, but if I do it while drawing, then I must loop through all the vertices. I actually don't think that would be much of a pain if the vertices are not in the thousands.
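For reference, here is a minimal sketch of what that bool-guarded draw path could look like (useColors, colorsVBO and vertexCount are placeholder names, not the actual ENIGMA members):
Code: [Select]
// Only bind the color VBO if any _color function was actually called.
if (useColors)
{
    glEnableClientState(GL_COLOR_ARRAY);
    glBindBuffer( GL_ARRAY_BUFFER, colorsVBO );
    glColorPointer( 4, GL_FLOAT, 0, (char *) NULL );
}
else
{
    // No per-vertex colors: fall back to the currently bound immediate-mode color.
    glColor4fv(enigma::current_color);
}
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
if (useColors) glDisableClientState(GL_COLOR_ARRAY);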

Anyway, I will commit the fix with the bool later. After that I will look into sprite batching with a VBO (check the other topic) and see whether we can switch to shaders in general.

1205
General ENIGMA / Re: DirectX Image formats and Fonts
« on: July 30, 2013, 12:13:53 pm »
Quote
I still am uncertain what to do with formats. I think the best course of action is to assume that there are differences in the formats that will be offered by each system's base installation, and that the user will only choose extensions for codecs/formats that are missing. So basically, the LoadPNG code would go in an extension, and would register an image format reader. Before loading any image, the loader function cycles through those loaders and asks each one, "can you load this?". If the extension returns true, it is asked for the data, and the search is over. Otherwise, the next extension is asked. When all extensions have been asked, the data is handed to the base system. In GL's case, that means it's a bitmap, or you're SOL. In DirectX's case, I guess that means it's .bmp, .dds, .dib, .hdr, .jpg, .pfm, .png, .ppm, or .tga, or you're SOL.
I guess that could work. Of course the loaders should be prioritized and included at compile time. For example, if DirectX is used, then it will have the highest priority (it will be the first one asked) and LodePNG won't even be included, because DX already said it could load .png's. If OGL is used and no native loader is available, then LodePNG will be included, as it will be the first to affirm it can load .png's. Of course we'd also want a way to select the needed file extensions (select .png and it will include either DX or LodePNG) or the file loader extensions themselves (if you just want .png, then maybe using just LodePNG instead of DX could be better). Of course, everything on Windows is a shared library, so there would probably be no reason (size or otherwise) to deselect DX and use LodePNG when using D3D to draw.
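Roughly, the loader registry could look something like this (the names here, including load_bmp as the base-system fallback, are made up for illustration, not an actual ENIGMA API):
Code: [Select]
#include <string>
#include <vector>

// Hypothetical image loader interface: each extension registers one of these.
struct ImageLoader {
    virtual bool can_load(const std::string& filename) = 0;
    virtual unsigned char* load(const std::string& filename, unsigned& w, unsigned& h) = 0;
    virtual ~ImageLoader() {}
};

// Kept sorted by priority, e.g. the D3DX loader before LodePNG when D3D is the graphics system.
static std::vector<ImageLoader*> image_loaders;

unsigned char* load_image(const std::string& filename, unsigned& w, unsigned& h)
{
    for (size_t i = 0; i < image_loaders.size(); ++i)
        if (image_loaders[i]->can_load(filename))
            return image_loaders[i]->load(filename, w, h);
    return load_bmp(filename, w, h); // base system fallback: plain .bmp or you're SOL
}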

Quote
As for fonts, GL can do that, too; we don't want mesh fonts. We want nice, anti-aliased sprite fonts. When ENIGMA used meshes for its fonts, they were infinitely ugly. Moreover, you're also relying on an assumption you should certainly not be making: All computers have the font the user requested available. Turns out, not everyone has the super cool font the user picked out in the IDE that looks kind of alien-y. Guess Arial will work in its place, right? In embracing this, you also damn the font_add_sprite() function. The answer is no, no, no, no, and a thousand times, no.
I actually like font_add_sprite() and I have used it. Of course it won't give you much for a game, but for an editor (text, image or otherwise) it is very useful. So I hate that we don't have it. Also, vector fonts can be very useful, especially when drawn on geometry and in many sizes. Even simple things like rotations make sprite fonts look like shit. Sadly I cannot fathom how many hoops we need to jump through to support both... maybe not that many.

1206
Proposals / Re: Vertex color
« on: July 30, 2013, 12:04:47 pm »
Quote
"Destroyed" and "moved to the GPU" are two very different concepts. They exist *somewhere*.
No, they really are destroyed. My idea was that I keep a bool which tells me if any _color functions were used. If by _end() the bool is still false, then I don't even bind the color buffer (so it is not sent to the GPU) and it's just deleted like the rest. Of course the GPU could have some internal color buffer it uses for all vertices if none is provided (filled with the bound color from immediate mode), but I don't think so. I think it just has one color value (bound by immediate mode) which is then reused.
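As a sketch of the creation side (the struct and member names are placeholders, not the real ENIGMA model code):
Code: [Select]
#include <vector>

// Hypothetical model type.
struct Model {
    std::vector<float> vertices, colors; // CPU-side copies filled by the _vertex/_color calls
    GLuint vertexVBO, colorVBO;
    bool useColors; // flipped to true by the first _color call
};

void model_primitive_end(Model &m)
{
    glGenBuffers(1, &m.vertexVBO);
    glBindBuffer(GL_ARRAY_BUFFER, m.vertexVBO);
    glBufferData(GL_ARRAY_BUFFER, m.vertices.size()*sizeof(float), &m.vertices[0], GL_STATIC_DRAW);
    if (m.useColors) { // if no _color call happened, the color data never touches the GPU
        glGenBuffers(1, &m.colorVBO);
        glBindBuffer(GL_ARRAY_BUFFER, m.colorVBO);
        glBufferData(GL_ARRAY_BUFFER, m.colors.size()*sizeof(float), &m.colors[0], GL_STATIC_DRAW);
    }
    m.vertices.clear(); m.colors.clear(); // the CPU-side copies really are destroyed either way
}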

Quote
As an alternative, we could allow a secondary alpha parameter, which specifies the amount to use of the model color vs the draw color.
But if we don't use shaders, then that is also not possible. We cannot blend anything into the vertices already in the VBO without manually looping over them.

So I think the bool should suffice until we move the whole thing to shaders.

1207
Proposals / GL3 changes from immediate to retained mode
« on: July 30, 2013, 11:44:34 am »
Hi! I think I made a topic about this a long time ago, but let's do it again. We have had GL3 for some time now, but most things are still rendered in immediate mode. So I propose changing all the drawing functions to stop using it. The problem is with caching, if we even decide to use it. Right now we render a sprite like so:
Code: [Select]
void draw_sprite(int spr, int subimg, gs_scalar x, gs_scalar y)
{
    get_spritev(spr2d,spr);
    const int usi = subimg >= 0 ? (subimg % spr2d->subcount) : int(((enigma::object_graphics*)enigma::instance_event_iterator->inst)->image_index) % spr2d->subcount;
    texture_use(GmTextures[spr2d->texturearray[usi]]->gltex);

    glPushAttrib(GL_CURRENT_BIT);
    glColor4f(1,1,1,1);

    const float tbx = spr2d->texbordxarray[usi], tby = spr2d->texbordyarray[usi],
                xvert1 = x-spr2d->xoffset, xvert2 = xvert1 + spr2d->width,
                yvert1 = y-spr2d->yoffset, yvert2 = yvert1 + spr2d->height;

    glBegin(GL_QUADS);
    glTexCoord2f(0,0);
    glVertex2f(xvert1,yvert1);
    glTexCoord2f(tbx,0);
    glVertex2f(xvert2,yvert1);
    glTexCoord2f(tbx,tby);
    glVertex2f(xvert2,yvert2);
    glTexCoord2f(0,tby);
    glVertex2f(xvert1,yvert2);
    glEnd();

    glPopAttrib();
}
This means immediate mode sends vertices one by one, which is bad, slow and deprecated. The change would be to use VAOs or VBOs, which are sadly meant for more static geometry; here the buffer has to be rebuilt before every draw. So if we just do this:
Code: [Select]
void draw_sprite(int spr, int subimg, gs_scalar x, gs_scalar y)
{
    get_spritev(spr2d,spr);
    const int usi = subimg >= 0 ? (subimg % spr2d->subcount) : int(((enigma::object_graphics*)enigma::instance_event_iterator->inst)->image_index) % spr2d->subcount;
    texture_use(GmTextures[spr2d->texturearray[usi]]->gltex);

    const float tbx = spr2d->texbordxarray[usi], tby = spr2d->texbordyarray[usi],
        xvert1 = x-spr2d->xoffset, xvert2 = xvert1 + spr2d->width,
        yvert1 = y-spr2d->yoffset, yvert2 = yvert1 + spr2d->height;

    float data[][7] = {
       {  xvert1, yvert1, 0.0, 0.0, 1.0, 1.0, 1.0  },
       {  xvert2, yvert1, tbx, 0.0, 1.0, 1.0, 1.0  },
       {  xvert2, yvert2, tbx, tby, 1.0, 1.0, 1.0  },

       {  xvert2, yvert2, tbx, tby, 1.0, 1.0, 1.0  },
       {  xvert1, yvert2, 0.0, tby, 1.0, 1.0, 1.0  },
       {  xvert1, yvert1, 0.0, 0.0, 1.0, 1.0, 1.0  }
    };

    GLuint spriteVBO;
    glGenBuffers(1, &spriteVBO);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

    glBindBuffer(GL_ARRAY_BUFFER, spriteVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_DYNAMIC_DRAW);
    glVertexPointer( 2, GL_FLOAT, sizeof(float) * 7, NULL );
    glTexCoordPointer( 2, GL_FLOAT, sizeof(float) * 7, (void*)(sizeof(float) * 2) );
    glColorPointer( 3, GL_FLOAT, sizeof(float) * 7, (void*)(sizeof(float) * 4) );

    glDrawArrays( GL_TRIANGLES, 0, 6);

    glDisableClientState( GL_COLOR_ARRAY );
    glDisableClientState( GL_TEXTURE_COORD_ARRAY );
    glDisableClientState( GL_VERTEX_ARRAY );

    glDeleteBuffers(1, &spriteVBO);
}
Then we will be using VBOs, but because we rebuild both the VBO and the data every call, it ends up A LOT slower. So does anyone have any ideas on how we could buffer this? Originally I thought it might be possible to assign some ID to each draw call which could be reused across frames, so the same sprite could be drawn again if nothing like the sprite index or position has changed. Or even if we created 1 VBO per sprite and just rebuilt the data per render, it would already be a lot faster. But I just did some tests and even if I only call:
Code: [Select]
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_DYNAMIC_DRAW);
glDrawArrays( GL_TRIANGLES, 0, 6);
Then it is still a lot slower than immediate mode. The problem, I guess, is that we need to batch sprites into one VBO. But that cannot be done painlessly with all the dynamic things we have. I tried a quick hack by using a global VBO and populating it in draw_sprite() just before drawing, and I got a 3x boost over immediate mode (though I seemed to get capped at exactly 100-101FPS, which means it may have been more, but for some reason I got vsynced). That means I drew 25k objects together with their logic (simple "bounce against walls" logic) and got 30FPS with immediate mode and 100FPS with the global VBO. With 100k objects it was 9FPS for immediate mode and a stable 25FPS with the VBO. But I tested without drawing and found that I was actually capped at 33FPS by my use of a vector (I pushed 6*7 values for each sprite and there were 100k of them). I don't know how to improve that much, though. After reserving and using a manual counter (so no need to clear, just overwrite) I got to 44FPS (which was 30FPS with drawing).

So basically what I propose is this:
1) Have 1 global VBO.
2) In all drawing functions we populate this VBO with x,y,tx,ty,r,g,b,a and keep doing that until texture_use() fails (i.e., when the currently used texture is not the same as the requested one), at which point we draw the VBO and clear it (see the sketch below).
3) Bind the new texture and repeat.
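A rough sketch of that flush-on-texture-change batcher (all names are illustrative, not the real implementation; it assumes the three client states were enabled once at startup):
Code: [Select]
#include <vector>

static std::vector<float> batch;   // interleaved x,y,tx,ty,r,g,b,a
static GLuint batchVBO = 0;
static GLuint current_texture = 0;

// Draw everything queued so far with the currently bound texture, then start over.
static void batch_flush()
{
    if (batch.empty()) return;
    glBindBuffer(GL_ARRAY_BUFFER, batchVBO);
    glBufferData(GL_ARRAY_BUFFER, batch.size()*sizeof(float), &batch[0], GL_DYNAMIC_DRAW);
    glVertexPointer( 2, GL_FLOAT, sizeof(float)*8, (void*)0 );
    glTexCoordPointer( 2, GL_FLOAT, sizeof(float)*8, (void*)(sizeof(float)*2) );
    glColorPointer( 4, GL_FLOAT, sizeof(float)*8, (void*)(sizeof(float)*4) );
    glDrawArrays( GL_TRIANGLES, 0, batch.size()/8 );
    batch.clear();
}

// Called by every drawing function before it pushes its vertices.
void batch_use_texture(GLuint tex)
{
    if (tex != current_texture) { // texture switch: draw what we have, then rebind
        batch_flush();
        glBindTexture(GL_TEXTURE_2D, tex);
        current_texture = tex;
    }
}
// draw_sprite() and friends would then just push their 6 vertices (6*8 floats) into 'batch'.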

Advantages:
1) This way we batch as much as we can before drawing and still have the possibility to use different drawing functions (even sprites and backgrounds) interchangeably.
2) When we add sprite packing (or more precisely texture packing), we will get a massive speed boost without changing any drawing functions. This is because we push the texture coordinates and render only when the texture changes, so fewer texture changes means more batching.
3) Tiles would automatically be batched (usually), because calling draw_background_ext_transformed like before would automatically add them to the same VBO (if the same tile strip is used, which it often is). Right now it seems some GL lists are made and populated, but I think that is slower (especially when many glBegin and glEnd calls are used per tile). Of course remaking the tile system with 1 VBO per layer could be even better and speed the whole thing up (but would take more memory).
4) A port to GLES (Android and such) would be a lot simpler, as it doesn't support immediate mode and requires the app to basically be GL3 (so no GL transformation functions either). So we must push towards that for easier maintenance and compatibility.

Disadvantages:
1) If a lot of texture switching happens (like having two objects at the same depth created interleaved with one another, so their draw events are called interleaved as well), then there will be a performance impact. In a game with a few hundred sprites it will probably not be noticeable, but with thousands of sprites the impact could be. The good thing is that things like depth changes would reduce the impact, as would texture packing.

Note: Functions like glEnableClientState and such are actually also deprecated. Now all of that has to happen through a vertex shader. I plan to test that too and maybe implement it that way. But this global VBO thing is a lot simpler and could potentially give a lot of speed.

So, any ideas?

edit: By replacing glBufferData with glBufferSubData I got to 36FPS with 100k objects, but this won't be possible in the implementation mentioned here (as the size will change all the time depending on how many sprites are drawn and how many texture swaps happen). But with a much smaller VBO the impact of that function will not be as great. It is even recommended to use several smaller VBOs rather than one big one anyway.

1208
DX10 isn't really used in games either. It's always DX9 or DX11, never DX10. Probably because it didn't give enough useful features to justify the changes necessary in any big engine.

1209
Proposals / Re: Vertex color
« on: July 30, 2013, 09:26:13 am »
Quote
Notice the UK spelling of color, inconsistent with the other spellings in GM. Can't tell if that's deliberate.
I never knew of such functions. They must be new to GM:S. I will check their docs later.

Quote
EDIT: You seem to be under the impression that the currently bound color affects model vertices. That shouldn't be the case if they're in a VBO. In this case, all points will be blended by the current draw color, regardless of their own color or texture.
In my tests with GM8 it did. And it doesn't really blend the colors, it overwrites them. For example, if you do this:
Code: [Select]
//Create
model = d3d_model_create();
d3d_model_primitive_begin(model,pr_trianglelist);
d3d_model_vertex(model,10,10,0);
d3d_model_vertex_color(model,100,10,0,c_green,1);
d3d_model_vertex(model,100,100,0);
d3d_model_primitive_end(model);

//Draw
draw_set_color(c_red);
draw_set_alpha(0.5);
d3d_model_draw(model,0,0,0,-1);
Then you will see a red transparent triangle (from bound color and alpha) with one green opaque corner (from model_vertex_color). If you do this:
Code: [Select]
draw_set_color(c_blue);
draw_set_alpha(0.25);
//Create
model = d3d_model_create();
d3d_model_primitive_begin(model,pr_trianglelist);
d3d_model_vertex(model,10,10,0);
d3d_model_vertex_color(model,100,10,0,c_green,1);
d3d_model_vertex(model,100,100,0);
d3d_model_primitive_end(model);

//Draw
draw_set_color(c_red);
draw_set_alpha(0.5);
d3d_model_draw(model,0,0,0,-1);
You will see the same thing, even though the bound color (blue) and alpha (0.25) were set before creating the model. So it ignored the blue and the 0.25. ENIGMA, on the other hand, would draw a blue transparent triangle with one solid green corner and ignore the red and the 0.5. It also never blends the colors if a color is specified. So if you do:
Code: [Select]
//Create
model = d3d_model_create();
d3d_model_primitive_begin(model,pr_trianglelist);
d3d_model_vertex_color(model,10,10,0,c_white,1.0);
d3d_model_vertex_color(model,100,10,0,c_white,1.0);
d3d_model_vertex_color(model,100,100,0,c_white,1.0);
d3d_model_primitive_end(model);

//Draw
draw_set_color(c_red);
draw_set_alpha(0.5);
d3d_model_draw(model,0,0,0,-1);
It will draw a solid white triangle and ignore the red and 0.5. This is the same in ENIGMA and GM.

Quote
The options are to waste space for each vertex format or to allow specifying vertex formats manually (as Yoyo has already done; didn't think they had it in them).
The space is only wasted until d3d_model_primitive_end(), as then all the vectors are destroyed. And I don't know how specifying vertex formats would help here.

1210
Well, one reason Robert chose DX9 is that MinGW already ships it. I personally also don't want a requirement to install a 10GB SDK or something. Of course, if the installation were painless, then it wouldn't be that big of a problem.
Also, at least for now, we don't support anything that could use the new features. I personally don't have any use for DX anyway. OGL support is clearly good enough to even support FBOs on Intel cards, so I don't have any problems with it. I would more gladly create an Android port.

1211
Proposals / Re: Vertex color
« on: July 30, 2013, 06:36:29 am »
Quote
We need to do something about that. The options are to waste space for each vertex format or to allow specifying vertex formats manually (as Yoyo has already done; didn't think they had it in them).
How does GM allow specifying vertex formats manually?

Quote
If the vertex format specifies draw color, and no draw color is supplied to create it, the current draw color should be used. It's that simple.
This is what I am going to make it do now. The problem is that it is not how GM does it. GM uses the color bound right before the model is drawn, not when it is created. That is a problem for us, as we use buffered geometry. If we don't bind a color buffer before drawing the model, then OGL uses the immediate-mode color (which is set via draw_set_color), but we cannot mix and match them: we cannot bind a color buffer and still have some vertices use the bound color from outside the buffer. That could be possible with shaders by adding a bool to each vertex (which I think is possible) that specifies whether to use the bound color or the set color. We can still get 95% compatibility, which would cover practically all cases except the ones where the functions are mixed.
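For what it's worth, the per-vertex flag version could look roughly like this in a shader (sketch only; the attribute and uniform names are invented):
Code: [Select]
// Hypothetical vertex shader: a per-vertex flag picks between the vertex's own
// color and the globally bound draw color.
const char* vertex_shader =
    "uniform vec4 bound_color;                                      \n"
    "attribute vec2 position;                                       \n"
    "attribute vec4 vertex_color;                                   \n"
    "attribute float use_bound_color; // 1.0 = use draw_set_color   \n"
    "varying vec4 frag_color;                                       \n"
    "void main() {                                                  \n"
    "    frag_color = mix(vertex_color, bound_color, use_bound_color);\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * vec4(position, 0.0, 1.0);\n"
    "}                                                              \n";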

1212
General ENIGMA / Re: DirectX Image formats and Fonts
« on: July 30, 2013, 04:23:36 am »
Quote
http://msdn.microsoft.com/en-us/library/windows/desktop/bb172801%28v=vs.85%29.aspx
".. the following file formats: .bmp, .dds, .dib, .hdr, .jpg, .pfm, .png, .ppm, and .tga"
Josh was thinking about some way to specify different formats for different systems. I don't think we should get rid of one system for the other, though. We could use .bmp and .png just like we do now, to be consistent, and in D3D implement the other formats (although the implementation would probably be ugly). So unless Josh explains how the format feature would work, I don't think we should touch this for now. It's not that essential anyway.

Quote
DirectX also has internal classes for dealing with fonts, you can simply request them by name and tell it to render, it takes like 5 lines of code...
http://www.two-kings.de/tutorials/dxgraphics/dxgraphics09.html
You can also ask it to build a mesh from font numerics and it will do so.
Same as with the formats. I think we can use the ones we have now until we either split it or think of augmenting it. The font functions Microsoft provides can be useful for loading fonts at runtime, so they allow creating those functions, but I don't think we should get rid of the current texture fonts, as those are rendered and packed into the .exe, so if you use some font which the user doesn't have, it won't break for them. I don't think we should pack MS font files into the .exe, though.

Quote
Now this also leads to another thing as to whether or not we want to use those sprite functions for backgrounds as well. Not to mention in GameMaker sometimes people will apply transformations to text and sprite drawing functions to use them as 3D billboards. I would like some input from you guys Harri, forthevin, Josh.
I think we can use batching only for backgrounds. For sprites it would not really be possible, taking into account the depth changes and their dynamic nature. And as sprites are still defined as vertices, I don't see how billboards are a problem.

Quote
As you can see there is a tad bit of a problem, DX uses ARGB, ENIGMA loads textures into RGBA :( You can tell by the sprites pixel color being off.
You just specify the format when loading. I don't know which function you use, but the function which loads from files takes a D3DFORMAT, which allows specifying a shit-ton of formats.
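Just to show what the channel order difference amounts to, the conversion is also trivial to do on the CPU if specifying the format at load time turns out not to be enough (plain sketch, not the actual loader code):
Code: [Select]
#include <stdint.h>

// Packs 8-bit R,G,B,A bytes into the 32-bit values a D3DFMT_A8R8G8B8 texture expects.
void rgba_to_argb(const unsigned char* rgba, uint32_t* argb, unsigned pixel_count)
{
    for (unsigned i = 0; i < pixel_count; ++i) {
        const unsigned char* p = rgba + i*4;
        argb[i] = (uint32_t(p[3]) << 24)  // A
                | (uint32_t(p[0]) << 16)  // R
                | (uint32_t(p[1]) << 8)   // G
                |  uint32_t(p[2]);        // B
    }
}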

1213
Proposals / Re: Vertex color
« on: July 30, 2013, 04:08:14 am »
Ok, then I will just implement it to work when the functions are not mixed. Because I really cannot fathom how GM does it, as it's not really possible with buffered geometry. Maybe it changed in GM:S, though, and it no longer works like in the GM8 I tested. It is possible with shaders, though, by adding a shader parameter to all vertices which basically tells it whether to use the bound color or the set color. I do like this GM feature, because even without using shaders or changing textures you could color units in an RTS by just using:
Code: [Select]
//Draw red tank, but it could have other colored parts and even textures as well
draw_set_color(c_red);
d3d_model_draw(tank,0,0,0,tank_texture);

//Now draw blue tank, but some parts could be the same as the red one
draw_set_color(c_blue);
d3d_model_draw(tank,0,0,0,tank_texture);

1214
Issues Help Desk / Re: Can't save to egm without errors.
« on: July 28, 2013, 03:41:26 pm »
Sadly GM8.1 isn't really supported by LGM (nor is GM:S, I think). At least, it corrupts all the time. I suggest you save to gm6.1 or 7. Maybe even 8, but not 8.1 (if that is possible from GM). If it loads fine in LGM but cannot export to EGM for some reason, then it must be another issue. Some time ago EGM had some problems, and so I still don't save into that format. The only real advantage from a functionality standpoint is that it saves the ENIGMA settings and the used extensions, so you don't have to change those every time a project is opened.

1215
General ENIGMA / Re: Scalar Types
« on: July 28, 2013, 03:04:37 pm »
Quote
Immediate mode functions only take float because they are obsolete, VBO's do take GLdouble, and it is less optimal because your model becomes 2x as much being sent to the GPU.
Your first sentence seemed like a statement about something I said, but I didn't say that they only take float. I even gave you the name of the function that needs to be used to pass doubles in immediate mode (glVertex2d). I just said that we didn't (and shouldn't) use it. So just changing the scalar type will do nothing, because you basically do:
Code: [Select]
double x, y;
glVertex2f((float)x,(float)y);
Where the only speed impact is the cast to float. We would also need to abstract the glVertex functions and then redeclare them in the same place we declare the scalar. But I don't think that is really needed.
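If we did want it, a minimal sketch of that abstraction might look like this (gs_vertex2 is an invented name):
Code: [Select]
// Declared next to wherever gs_scalar itself is typedef'd.
typedef float gs_scalar; // or double

inline void gs_vertex2(float x,  float y)  { glVertex2f(x, y); }
inline void gs_vertex2(double x, double y) { glVertex2d(x, y); }

// Callers write gs_vertex2(x, y); overload resolution picks the version matching
// gs_scalar, so no cast happens when the scalar is switched to double.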