ENIGMA Forums

Contributing to ENIGMA => Developing ENIGMA => Topic started by: Goombert on July 07, 2014, 02:23:06 am

Title: Texture Handling
Post by: Goombert on July 07, 2014, 02:23:06 am
Well, this is something that needed to be discussed a long time ago. OpenGL stores sampling information per texture unless you use the sampler object extension, which the ARB only approved for the OpenGL 3.3 core version. Direct3D has used sampler objects and properly abstracted the concept since the beginning of time.

Now, with YoYoGames essentially building a cross-platform DirectX emulator ("ANGEL dust"), they are basically sticking to sampler states. In fact, I don't really know who I hate more, them or the ARB. Don't get me wrong, I love a lot of things about Direct3D, but Microsoft is purposefully responsible for fucking up a lot of OGL; this particular fuck-up, though, rests on the ARB's shoulders.

Anyway, because of this, OpenGL1 needs a sampler state emulator object, which I've already written and which works fine, except that Project Mario takes a 10fps hit. OpenGL3 needs further integration with its shaders and the sampler object extension, which I've already programmed, but I can't find my way through Harri's shaders to integrate it. Direct3D9 is fine; in fact, it and OpenGL1 both have working texture filtering.
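To make the distinction concrete, here is a toy model (not ENIGMA's actual code; every name here is invented for illustration) of the semantic difference under discussion: plain OpenGL keeps filtering state per texture, while a GL 3.3 sampler object bound to a texture unit overrides the texture's own state.

```cpp
#include <cassert>

// Hypothetical sketch, not real GL: models how a bound sampler object
// overrides a texture's own filtering state on a texture unit.
enum Filter { NEAREST, LINEAR };

struct Texture { Filter filter = NEAREST; };
struct Sampler { Filter filter = LINEAR; };

struct Unit {
    const Texture* texture = nullptr;
    const Sampler* sampler = nullptr;  // no sampler bound == nullptr
};

// Which filter the hardware would actually sample with for this unit.
Filter effective_filter(const Unit& u) {
    if (u.sampler) return u.sampler->filter;  // sampler object wins
    return u.texture->filter;                 // classic per-texture state
}
```

The GL1 emulator mentioned above has to fake the "sampler wins" rule by rewriting per-texture state, which is where the overhead comes from.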

Based on these two functions, my assumption is that YYG is going to base all of the texture stuff on samplers, unlike Unity3D, which stores texture repetition, filtering, and whatnot per texture.
http://docs.yoyogames.com/source/dadiospice/002_reference/drawing/texture_set_interpolation_ext.html
http://docs.yoyogames.com/source/dadiospice/002_reference/drawing/texture_set_repeat_ext.html

They may also change their minds once they begin implementing proper mipmapping and other features they have indicated interest in.
http://gmc.yoyogames.com/index.php?showtopic=608392

At any rate, if they don't change course, which I don't believe they will, I have begun coding everything this way. These are the proposed texture functions; they are not all documented yet, as I haven't had time and have been focusing on the coding.
http://enigma-dev.org/docs/Wiki/Texture_Functions

My suggestion is that my current pull request be merged into the OpenGL3 branch until Harri or myself has time to complete the OpenGL3 sampler object integration.
https://github.com/enigma-dev/enigma-dev/pull/770

A tutorial for the OpenGL3 sampler objects is available below.
http://www.sinanc.org/blog/?p=215
Title: Re: Texture Handling
Post by: Darkstar2 on July 07, 2014, 02:41:36 am
A 10fps hit? Will this affect every game or just the 3D stuff? Why would you want to do something that slows things down? Whenever I see the word "emulation" in a sentence I immediately think: performance hit. Personally, I couldn't care less what that other company does at this point; do we need to "emulate" the slowness of that other company's product? :P

Title: Re: Texture Handling
Post by: Goombert on July 07, 2014, 02:57:26 am
Yes if you want OpenGL graphics, especially if you want them compatible with GM. As I said you can blame both YYG for making a cross-platform DirectX emulator, a.k.a. GM: Studio, and you can thank the ARB for allowing OpenGL to clusterfuck sampler properties per-texture and not properly abstracting sampler objects.

The point here is this:
* OpenGL is a "low level" graphics API; it should have had sampler objects since the beginning, just like Direct3D. This is the one rare case where Microsoft had nothing to do with OpenGL's stupidity.
* When it comes to graphics, low-level is important.
* A game engine like GM or Unity3D should store repetition, filtering, and other options in materials, and per-texture would be fine; a game engine is supposed to be the opposite of low-level. But they are unlikely to do that because they're on "ANGEL dust".

So basically, YYG and the ARB are both equally stupid.
Title: Re: Texture Handling
Post by: Darkstar2 on July 07, 2014, 03:26:39 am
Well fuck Y** (!=you :P) and fuck A*B and fuck M******t, MFN....  lol :D :P

One thing that stood out about ENIGMA is that it's faster than said other product; do we need to be just as slow? Will this change slow down ALL games made in ENIGMA (OpenGL)? Do we have to emulate their shite too? They are multi-platform, and because of their decision, Windows suffers! :P People try to squeeze out the last fps; for some, 10fps is a big hit, and in theory the hit can be bigger for more complex games.


Title: Re: Texture Handling
Post by: Goombert on July 07, 2014, 03:34:59 am
Uhm, no, OpenGL1 games suffer, on Linux it's safe to just use OpenGL3. Windows has Direct3D9 which works great, Direct3D11 too when we convert the GL3 shaders to HLSL.
Title: Re: Texture Handling
Post by: TheExDeus on July 07, 2014, 05:52:21 am
1) So now we are raising GL3.2 to GL3.3? I don't have a problem with that, but it does seem we might end up with GL4 in a year. Maybe just start now.
2) I also don't understand why exactly we need to "emulate" everything YYG does, especially when it's not that great a decision. I have never seen problems with storing sampler information per texture. I can see maybe one scenario where sampler objects would be useful (like when using font textures both for rendering text and for rendering to texture), but still, not much point in my mind. I guess sampler objects are supposed to be faster, but they actually require more binding? Though that is required only when changing the sampler state, no?
3) Taking into account that it should only be used when changing sampler state or binding a different sampler to a texture, then I don't see why your GL1 should be slower. I haven't looked over the code, but I think you shouldn't be constantly binding samplers per frame. It's something you do once for a texture. At least cache the thing, so calling it repeatedly doesn't actually call the gl function.
4) There shouldn't be any reason to change the shader here. Texture sampler uniform is sent in G3ModelStruct.h line 599, where it just sets it to 0, because the currently used texture is bound at GL_TEXTURE0. You can just put the function in the next line if you really want to.
5) The ARB didn't need to do anything. They didn't add sampler objects because no one needed them. You yourself even say that sampler data should be per-texture. It's stranger that they actually added them in GL3 than that they didn't add them in GL1. They probably added them because there could be some fringe cases where they might be useful, but I don't see myself using them often. Maybe I will learn to understand what they are used for.
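The caching proposed in point 3 can be sketched in a few lines (hypothetical names; the counter stands in for real glTexParameteri traffic): remember the last value applied and skip the GL call when nothing changed.

```cpp
#include <cassert>

// Sketch of caching a sampler setting so redundant calls never reach the
// driver. gl_calls is a stand-in for actual glTexParameteri invocations;
// all names here are invented for illustration.
struct CachedSampler {
    bool interpolate = false;
    int  gl_calls    = 0;   // how many times we "touched the driver"

    void set_interpolation(bool enable) {
        if (enable == interpolate) return;  // cached: no change, no GL call
        interpolate = enable;
        ++gl_calls;  // the real code would call glTexParameteri(...) here
    }
};
```

With this in place, calling the texture function every step costs nothing unless the value actually flips.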

Also, things like "// Honestly not a big deal, Unity3D doesn't allow you to specify either." or "YYG did it" is not really a reason to implement or not implement something. This is a separate project and we shouldn't be doing all the stuff others are doing. If there is no logical reason to implement something other than "they did it", then we shouldn't implement it.

You can use GL3 on Windows just fine as well. If you have a PC with a card newer than 2008, then you should be fine, especially if you have an Nvidia card, because Nvidia supports everything possible in OGL. So if you have a 400-generation card (released in early 2010, which is 4 years ago), then you already support GL4.4, which is the newest version.
Title: Re: Texture Handling
Post by: Darkstar2 on July 07, 2014, 12:10:18 pm
Quote
Uhm, no, OpenGL1 games suffer, on Linux it's safe to just use OpenGL3. Windows has Direct3D9 which works great, Direct3D11 too when we convert the GL3 shaders to HLSL.

Remember when we were testing particles? OpenGL3 was already slower than GL1 in our tests. As for shaders, they can only be used in GL, right?

@Harri: I fully agree with you. There are certain things worth emulating at the feature level, but some things are not worth emulating if it means slowing things down. I canceled my recent pull request which added alpha clipping across all functions to emulate how GMS worked, because it could have had implications (untested) on complex, graphically intense games. I too don't like the idea of changes that might impact performance, or potentially could.
There would really have to be a significant benefit for everyone; otherwise this would slow down ANY type of game, even games that don't use the feature it was intended for.

The odds are better using OGL1 for most projects if you don't require GL3 features or greater: it is faster, suitable for most projects, and you will reach more configurations.

Then there is DX9. Of course nobody used it before because it was broken (now fixed :D), but then your games would require DX9 minimum. So people with DX6/7/8 cards could not run your game, but would be able to run your OGL1 game. :D
Title: Re: Texture Handling
Post by: TheExDeus on July 08, 2014, 11:56:24 am
Quote
OpenGL3 already was slower in our tests than GL1, as far as shaders, they can only be used in GL right ?
Right now it should be about the same on both GL1 and GL3 (on non-toaster hardware, at least). In theory GL3 could be faster (even A LOT), but if you want to render something so basic that GL1 can render it, then it will be faster in GL1, mostly because you cannot beat the perfect optimization engineers did for GL1. If you want something more complex, then GL3 might not just be faster but the only option. Shaders also allow making particle systems on GPUs, which can make things run many times faster than GL1. Right now I think only GL has shaders implemented; in the end you will be able to use HLSL in DX though.

Quote
So people with DX6/7/8 cards could not run your game
Which century's PCs are you actually targeting? I only use GL3 now, because I see no reason to target anything lower. If your PC doesn't support GL3, that means your PC hasn't been upgraded in more than 6 years (the first card that could run GL3 was the GF8000, launched in late 2006), and it also means that you don't want to spend even 50$ on a card that supports GL4, because graphics cards really are dirt cheap now. So anyone who doesn't have a PC that can run GL3 is dead in my eyes, and I don't need them to use my software. That includes Mac users, which have very bad GL support (or did a few years ago).
Title: Re: Texture Handling
Post by: Darkstar2 on July 08, 2014, 12:26:33 pm
I like to target as many people as I can.
As far as my PC, it's up to date alright. :P I posted my specs before. However, given the type of games I plan to do, my tests show that OGL1 is quite sufficient: particles, blending, lighting. I ran tests and GL1/GL3 look identical, with GL1 running faster, and keep in mind my GPU runs all the latest games at max settings; both my GPU and CPU are up to speed. So I will use GL1 whenever I can, but I'd hate to see a frame drop because of some mangling that I or most other ENIGMA users will never need.

Title: Re: Texture Handling
Post by: Goombert on July 08, 2014, 11:40:06 pm
Quote
1) So now we are raising GL3.2 to GL3.3? I don't have a problem with that, but it does seem we might end up with GL4 in a year. Maybe just start now.
Not necessarily; most OGL2 hardware also supports sampler objects, it just wasn't officially approved by the ARB until then. Just search the hardware reports.
http://enigma-dev.org/forums/index.php?topic=1131.0

Quote
I also don't understand why exactly we need to "emulate" everything YYG does?
They added texture_set_interpolation_ext; do you want to support it or not? This is not one of those deals where you can have your cake and eat it too. If you want ENIGMA to do per-texture properties like Unity3D and every other game engine does, that's fine, but then ENIGMA won't support GM's functions, because one would override the other.

Quote
Especially if it's not that great of a decision?
I attempted to address that, basically here is how I feel.
* A graphics API should be low level and not have sampler properties per texture; samplers should be abstract objects that can be applied to individual textures if needed. This is why I prefer Direct3D over OpenGL: it had the best of both worlds from the beginning, both low level and high level depending on how you wanted to use it.
* A game engine should generally be high level and store the texture information per texture; not every texture is going to use interpolation, and changing sampler states frequently is redundant and inefficient.

Quote
I guess sampler objects are supposed to be faster, but they actually require more binding? But that is required only when changing the sampler state no?
I am not sure, as I have not got it fully working in OGL3, and I am not sure why texture binding is not working correctly; Project Mario is looking like Project Mario LSD again. But it may be possible to just bind all 8 samplers to their stages when they are created and leave them, and this would be a lot faster, since GM's texture sampling functions are designed like Direct3D's, not per-texture like Unity3D's.

Quote
Taking into account that it should only be used when changing sampler state or binding a different sampler to a texture, then I don't see why your GL1 should be slower.
OpenGL1 is slower because all I can do is create the sampler object and apply its state whenever a new texture is bound; this means changing texture properties that haven't actually changed. As I said, there is probably a way to make this more efficient.

Here is what it does basically. When you change the interpolation for sampler states 1-8 it updates the sampler and immediately applies the new sampler settings to the currently bound texture at that stage.
https://github.com/RobertBColton/enigma-dev/blob/master/ENIGMAsystem/SHELL/Graphics_Systems/OpenGL1/GLtextures.cpp#L287
Then if you bind a new texture the entire current sampler state for that stage is applied to it as well.
https://github.com/RobertBColton/enigma-dev/blob/master/ENIGMAsystem/SHELL/Graphics_Systems/OpenGL1/GLtextures.cpp#L273

Previously, when you toggled interpolation, we were just cycling through every texture and toggling it on each one. Now, although OGL1 has slowed down, OGL3 has sped up because of fewer iterations: we simply change the setting only on the sampler, which means less hardware interference, and the sampler state then overrides everything for the texture at that slot.
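The two operations described above can be modeled in a few lines (a sketch with invented names, not the code in GLtextures.cpp): changing interpolation for a stage updates that stage's software sampler and immediately re-applies it to the bound texture, and binding a new texture re-applies the whole sampler state to it.

```cpp
#include <cassert>
#include <map>

// Toy model of the GL1 emulation flow: the stage's sampler state shadows
// and overwrites the per-texture GL state. All names invented.
struct Stage {
    int  bound_texture = -1;        // -1 == nothing bound
    bool sampler_interpolate = false;
};

std::map<int, bool> texture_interpolate;  // per-texture state we shadow

void apply_sampler(Stage& st) {
    if (st.bound_texture >= 0)      // real code: glTexParameteri on bound tex
        texture_interpolate[st.bound_texture] = st.sampler_interpolate;
}

void set_stage_interpolation(Stage& st, bool v) {
    st.sampler_interpolate = v;
    apply_sampler(st);              // applied to currently bound texture
}

void bind_texture(Stage& st, int tex) {
    st.bound_texture = tex;
    apply_sampler(st);              // sampler state overrides the new texture
}
```

This is why the cost moved from "iterate every texture" to "touch only the bound texture per stage change or bind".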

Now, this is all to emulate Direct3D's behavior. If we switched to doing it Unity3D's way, Direct3D9 would slow down and OpenGL would speed up, because I don't think Direct3D lets you set these properties per texture; Direct3D11 might, since it lets you control the creation of samplers.

Quote
4) There shouldn't be any reason to change the shader here. Texture sampler uniform is sent in G3ModelStruct.h line 599, where it just sets it to 0, because the currently used texture is bound at GL_TEXTURE0. You can just put the function in the next line if you really want to.
Are you saying our default shader does not support multi-texturing? Because YoYoGames' does, at least I think. This is strictly theoretical, because they have vertex buffers which are meant for shaders, so I do not know if the multi-texturing or vertex buffers even work without shaders.

Quote
ARB didn't need to do anything. They didn't add sampler objects because no one needed them. You yourself even say that sampler data should be per-texture.
It's more efficient if you intend to use the same settings for all textures; and sampler data should be per-texture if we're talking about game engines. If we're talking about acclaimed low-level graphics APIs, it should be optional, and should have been from the very beginning.

Quote
Also, things like "// Honestly not a big deal, Unity3D doesn't allow you to specify either." or "YYG did it" is not really a reason to implement or not implement something. This is a separate project and we shouldn't be doing all the stuff others are doing. If there is no logical reason to implement something other than "they did it", then we shouldn't implement it.
Uhm, yes, because the thing we are talking about is how many mipmap levels are automatically generated. Unity3D does not let you specify; Direct3D9 apparently doesn't let you specify either, it just selects the best number of mipmap levels given the texture. OpenGL does, if you set the max level before calling the generate-mipmaps function; I attempted what I thought was the equivalent in D3D but it didn't yield results. I do not think Direct3D11 lets you choose the number of mipmaps either. So basically, if we want to not do what everyone else is doing, we need to write our own mipmap generator, but then it won't be hardware accelerated or have hardware filtering.

Quote
You can use GL3 on Windows just fine as well. If you have PC with card newer than 2008, then you should be fine. Especially if you have an Nvidia card, because Nvidia supports everything in OGL possible. So if you have a 400 generation card (released in early 2010, which is 4 years ago), then you already support GL4.4 which is the newest one.
I am praying for the OpenGL/Linux/open source gaming revolution with the Steambox and all, but I don't get ahead of myself like you seem to be. We are, after all, developing a GM clone; one of our greatest features is that our IDE and engine don't just build anywhere but also run anywhere.

It basically boils down to this: do you want to remain compatible with GM or not? There is really no in-between anymore; there are so many differences and ways of doing things a million miles better than they did. If you don't want to support these functions, then there's no need to support any of the functions, and we should do all of this shit properly. Many places where they've fucked up make GM harder to learn. Learning to write sockets with regular scripting is easier than some obtuse mechanism that fires events and fills data structures; that mechanism is ridiculous and in no way makes it easier for a noob to program, it in fact makes it harder!

If we want to take ENIGMA in our own direction, then we need to start doing things the correct way, take only the important games and projects with us, revising and improving them as we go, and eventually fully drop GM compatibility. I for one believe this project would be a lot farther ahead if we weren't even compatible with their engine; compatibility at this point is only inhibiting the project. But I will restate my position once again: ENIGMA, being a GM augmentation, should be as compatible as it possibly can, period. And ENIGMA should facilitate and encourage the development of spin-off projects and game engines.
Title: Re: Texture Handling
Post by: TheExDeus on July 09, 2014, 12:58:42 pm
Quote
Not necessarily, most OGL2 hardware also support the sampler objects, it just wasn't officially approved by the ARB until then. Just search the hardware reports.
I'm not talking about the hardware, but the GL version. The GF400 supports GL4, even though it was released 3 years before GL4 was actually done. That's possible because GPUs are no longer fixed-function, so features can be added via firmware and drivers. But it does mean the hardware/software needs to support GL3.3. Some older hardware (especially Intel) had crappy drivers, which meant that even if it supported the EXT version, it didn't end up supporting the ARB one.

Quote
I said there is probably a way to make this more efficient.
Cache it.

Quote
They added texture_set_interpolation_ext, do you want to support it or not?
But can't it work with both? The _ext functions change parameters of the sampler, which override the parameters set by the texture stage, so you should be able to keep both. They shouldn't even be referring to texture stages as far as I understand, but to uniform samplers. Look at this function: http://docs.yoyogames.com/source/dadiospice/002_reference/drawing/texture_set_repeat_ext.html
The example they show is this:
Code: (edl)
shader_sample = shader_get_sampler_index(shader_glass, "s_NoiseSampler");
texture_set_repeat_ext(shader_sample, true);
shader_sample could in theory be anything. It doesn't need to be 0-8; it can be 240 if it's the 240th uniform defined. So in this case you can override the sampler information in the shader.
Maybe I am not understanding something correctly, or their docs are wrong. Previously I thought that they were tied to texture stages like TEXTURE0, TEXTURE1, etc., but that is something different.

Quote
Are you saying our default shader does not support multi-texturing, because YoYoGames does, at least I think. This is strictly theoretical because they have vertex buffers which are meant for shaders, so I do not know if the multi-texturing or vertex buffers even work without shaders.
How does it support multi-texturing exactly? By default you only draw one texture at a time, the one in use. draw_sprite(spr_guy,0,0,0) will bind spr_guy to texture unit 0 and draw with it; there are no functions in GM or ENIGMA that allow you to do anything differently. You can of course render with as many textures as you want when writing your own shader (as demonstrated in many examples).

Quote
Uhm yes, because this thing we are talking about is how many mipmap levels are automatically generated. Unity3D does not let you specify, Direct3D9 apparently doesn't let you specify they just select the best number of mipmap levels given the texture. OpenGL does if you set the max level before calling the generate mipmaps function, I attempted what I thought was the equivalent in D3D but it didn't yield results. I do not think Direct3D11 lets you chose the number of mipmaps either. So basically, if you want to not do what everyone else is doing, we need to write our own mipmap generator, but then it won't be hardware accelerated or have hardware filtering.
I actually looked in GL's spec and docs to see how the thing works. Apparently you don't need to set MAX either; by default it's 1000, but mipmaps are only generated until the smallest level is 1x1 (so it never really reaches 1000 mipmaps unless you have a texture of size 2^1000 ≈ 10^300). So there really isn't much use in manually specifying that number unless you want finer control. It might be useful for text, when you want it crisp even at a distance instead of totally smoothed out, while still having mipmapping enabled for the performance boost. After finding this out, I don't have a problem with not specifying the mipmap level count, because there is a logical and valid reason for it. Until I looked at the docs there wasn't any reason, only "they did it like that".
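The "down to 1x1" rule above implies a simple count: each level halves the larger dimension, so a WxH texture yields floor(log2(max(W,H))) + 1 levels, which is why the default MAX of 1000 is never actually reached. A quick sketch:

```cpp
#include <algorithm>
#include <cassert>

// Number of mip levels generated down to 1x1 for a WxH texture:
// each level halves the larger dimension (integer division).
int mip_levels(int w, int h) {
    int levels = 1;
    for (int m = std::max(w, h); m > 1; m /= 2)
        ++levels;
    return levels;
}
```

So even a 1024x1024 texture only ever produces 11 levels, orders of magnitude short of the 1000 default.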

Quote
It basically boils down to this, do you want to remain compatible with GM or not? There is really no in between anymore, there are so many differences and ways of doing things a million miles better than they did. If you don't want to support these functions, then there's no need to support any of the functions, and we should do all of the shit properly. Many places where they've fucked up makes GM harder to learn, learning to write sockets with regular scripting is easier than having some obtuse mechanism that fires events and fills data structures is the most ridiculous thing and in no way makes it easier for a noob to program, it in fact makes it harder!
I have mentioned this several times before - I don't really care about GM compatibility. But that isn't up to me or you alone - many here feel totally different. They feel ENIGMA should strive for total GM compatibility. Also, I don't believe that everything GM done is bad and that we need to go in a TOTALLY different direction. I like GM's event system, I like it's instance system, it's scripting and so on. But doesn't mean we should do everything they do. Like I would want to be able to properly reference objects instead of everything being an int. While in theory it's cool and easy to use/learn, it does make a lot of the code messier, as I cannot do anything with classes. But class support for EDL should change that (still waiting for that..).

Quote
If we want to take ENIGMA in our direction then we need to start doing things the correct way and take only the important games and projects with us revising and improving them as we go and eventually fully drop GM compatibility. I for one believe this project would be a lot farther ahead if we weren't even compatible with their engine, compatibility at this point is only inhibiting the project. But I will restart my position once again, ENIGMA being a GM augmentation should be as compatible as it possible can, period. And that ENIGMA should facilitate and encourage the development of spawnoff projects and game engines.
I also agree it should be as compatible as possible, but within sanity. I don't believe a user should expect to run their GM game in ENIGMA right away, just like they wouldn't expect their Unreal Engine game to automatically work in CryEngine. Some work is always required, and we can minimize it. So I think the goals of ENIGMA should be these:
1) Easy porting of GM games to ENIGMA. That means it usually takes a day or two to get a game running, especially considering that GM games are not that complex in terms of code (as the GM engine handles most of the stuff). So no total rewrite required, just removing some functions, replacing some others, etc.
2) People experienced with GM should feel at home with ENIGMA. That means they don't really have to learn anything: just open LGM and start making games. When they hit an incompatibility, they should be able to figure out the ENIGMA way to do it (through intuition, Help, the Wiki, or the Forums).
Title: Re: Texture Handling
Post by: Goombert on July 09, 2014, 07:35:03 pm
Quote from: TheExDeus
I'm not talking about the hardware, but the GL version. Like GF400 supports GL4, even though it was released 3 years before GL4 actually was done. It's possible because GPU's are no longer FFP and so it's possible to add features via firmware and drivers. But it does mean the hardware/software needs to support GL3.3. Some older hardware (especially Intel) had some crappy drivers, which meant that even if it supported EXT, it didn't end up supporting ARB.
Ok, I get that, I did not know the specifics however, but thank you for clarifying.

Quote from: TheExDeus
Cache it.
I am already caching the objects; how else would I cache this? We would just need to know what has changed and what hasn't, but we'd need to memorize it for each texture.

Quote from: TheExDeus
Because the _ext functions change parameters of the sampler, which override the parameters set by texture stage.
Yeah, exactly, that's why it won't work. By default interpolation is turned off, so whatever min/mag filter you store in the sampler object will override your per-texture information; calling another texture function that sets it per-texture will still be overridden by the sampler state, so that makes no sense.

Quote from: TheExDeus
shader_sample could in theory be anything. It doesn't need to be 0-8. It can be 240 if it's 240th uniform defined. So in this case you can override the sampler information in the shader.
Maybe I am not understanding something correctly or their doc's are wrong. Previously I though that they are tied to texture stages like TEXTURE0, TEXTURE1 etc. But that is something different.
No, it does have to be 1-8. That specific doc doesn't mention it, but it does mention the term "slot", which is semantically defined in texture_set_stage:
Quote from: Studio Manual texture_set_repeat_ext
This function can be used to set whether a single sampler "slot" repeats the given texture when using Shaders in GameMaker: Studio.
Quote from: Stupido Manual texture_set_stage
This function will set the given stage "slot" a texture to be used. The number of stage "slots" available will depend on the platform you are compiling to, with a maximum of 8 being available for Windows, Mac and Linux, but on lower end Android devices (for example) this number can be as low as 2.
http://docs.yoyogames.com/source/dadiospice/002_reference/drawing/texture_set_stage.html

Their functions work, Harri, just like the Direct3D functions do, and do you know why? Because they never wrote OGL code, except for GM4Mac; they wrote their graphics system in Direct3D, then ported it using ANGLE. That's also why they added D3D9 FFP constants for flexible vertex formats to GML while being a cross-platform game engine! This is why I keep saying they are doing everything wrong.

Quote from: TheExDeus
How does it support multi-texturing exactly? By default you only draw 1 texture at a time - the used one. Like draw_sprite(spr_guy,0,0,0) will bind spr_guy to texture unit 0 and draw with it. There are no functions in GM or ENIGMA that allows you to do anything differently. You can of course render with as many textures as you want when writing your own shader (as demonstrated in many examples).
Through the function texture_set_stage and the vertex buffer functions. I am not sure if the vertex buffers can be rendered without a shader, but judging by the fact that some of their constants are FFP, I am pretty certain they can.
http://docs.yoyogames.com/source/dadiospice/002_reference/shaders/primitive%20building/index.html
http://docs.yoyogames.com/source/dadiospice/002_reference/shaders/vertex%20formats/index.html
http://docs.yoyogames.com/source/dadiospice/002_reference/shaders/vertex%20formats/vertex_format_add_custom.html

Quote
I actually looked in GL's spec and doc's on how the thing works. Apparently you don't need to set MAX either, by default it's 1000. But it only generates the mipmaps until smallest texture is 1x1 (so it never really goes to 1000 mipmaps unless you have a texture size 2^1000 = 10^300). So there really isn't much use in manually specifying that number unless you want to have a finer control. Like it might be useful for text, when you want it crisp even at a distance, instead of totally smoothed out, while at the same time still having mipmapping enabled for performance boost. So after finding this out I don't have a problem with not specifying the mipmapping layer number, because there is a logical and valid reason for that. Until I looked at the docs there wasn't any, but only "they did it like that".
Ok, good, glad you understand now. I was simply stating that for an explanation as to why I didn't offer the ability to control the mipmap levels, it's not really needed either and can't even be done in Direct3D as far as I am aware.

Quote
I have mentioned this several times before - I don't really care about GM compatibility. But that isn't up to me or you alone - many here feel totally different. They feel ENIGMA should strive for total GM compatibility. Also, I don't believe that everything GM done is bad and that we need to go in a TOTALLY different direction. I like GM's event system, I like it's instance system, it's scripting and so on. But doesn't mean we should do everything they do. Like I would want to be able to properly reference objects instead of everything being an int. While in theory it's cool and easy to use/learn, it does make a lot of the code messier, as I cannot do anything with classes. But class support for EDL should change that (still waiting for that..).
Yeah, I've never complained about any of that. You are confused from reading my other posts about a spin-off project with no objects; for the purpose of learning, yes, I see the practicality of the graphical objects and event system. I am talking more about functionality, Harri. For instance, the inconsistency in RGBA color values: sometimes an int, sometimes alpha is a double, sometimes all components need to be specified 0-255. That is really difficult for new users to grasp as a concept and makes it harder for them to learn, does that make sense? I am not saying GM is bad; I am saying many parts of it are counter-intuitive and inconsistent, which is a separate argument from saying it is bad for advanced programmers. I believe with the right design GM could be much better for novice and advanced programmers alike, and accommodate both in the same program.
Title: Re: Texture Handling
Post by: TheExDeus on July 10, 2014, 09:08:33 am
Quote
Yeah, exactly, that's why it won't work. By default interpolation is turned off, so whatever min/mag filter you are storing in the sampler object will override your per-texture information. Calling another texture function that sets it per-texture is still going to be overridden by the sampler state, so that makes no sense.
But we could give samplers disable/enable functions, or disable them for the unit when the texture object's parameters change. In GL, sampler objects need to be manually created for a reason. It would make A LOT more sense to have texture_sample_create() or something similar. Also, GM:S already allows using both samplers (the _ext functions) and the older regular way (without _ext).

Quote
No, it does have to be 1-8; that specific doc doesn't mention it, but it does mention the term "slot", which is semantically defined in texture_set_stage.
I was specifically referring to "shader_get_sampler". I guess their semantics come from DX or something, because in GL a sampler is just a uniform, like any other. And uniforms can have indices in any range (from 0 up to MAX_UNIFORMS). So in ENIGMA, for example, one can be returned as 126 or anything else.
Also, in the two doc excerpts you quoted I feel "stage slots" and "sampler slots" are meant to be two different things. Basically, I ask you to make 20 uniforms and then 1 sampler. Check the ID the sampler returns. In ENIGMA it might be 21 (or 20), while in GM:S I guess it returns 1 (or 0)? Maybe their function just does some things differently than the one we have in ENIGMA.

Quote
Through the function texture_set_stage and the vertex buffer functions,
Multi-texturing usually doesn't require multiple UV coordinates. I plan to make attribute functions anyway, which would basically allow passing your own per-vertex data to shaders. This could then be wrapped in vertex buffer functions if needed, but I don't see much of a point. I'll look into supporting those GM:S functions anyway. Other than that, multitexturing is as possible in ENIGMA as it is in GM:S: you use texture_set_stage and sampler uniforms. That way you can use several textures, e.g. to render specular highlights only on some parts of the object, or to make only part of the texture glow. All of that can be done with the functions already in ENIGMA, as you usually use the same UVs for all the textures (that's even required for those effects). The terrain example was also multitexturing, by your own description.
Quote
but I am not sure whether vertex buffers can be rendered without a shader; judging by the fact that some of their constants are FFP, I am pretty certain they can.
By their own description you cannot.
Quote
NOTE: You can build the primitive and store it in the vertex buffer in any step of your game, however to draw it, you must submit it during the Draw Event and only when calling a shader.
And it's only logical. Making a default shader that can interpret arbitrary user data is basically impossible; you have to write a custom shader to use this data. I don't 100% understand why that vertex format stuff is necessary, though. I believe I can do everything it is used for without it.

Quote
I was simply stating that for an explanation as to why I didn't offer the ability to control the mipmap levels
Yes, and your statement was that D3D doesn't allow it and Unity doesn't either. You didn't specify WHY.

Quote
the inconsistency in RGBA color values: sometimes an int, sometimes alpha is a double, sometimes all components need to be specified 0-255.
I agree with these kinds of things. Consistency is very important and GM often lacks it. The problem is that you cannot easily fix this without breaking a lot of compatibility, even with already-made ENIGMA projects. So these kinds of things need to be thought out before they get out of hand and we're left having to just "deal with it" (like YYG does now).
Title: Re: Texture Handling
Post by: Goombert on July 10, 2014, 08:25:37 pm
Quote
But we could make samplers to have disable/enable functions or make them disabled for the unit when the texture objects parameters change. In GL sampler objects need to be manually created for a reason. It would make A LOT more sense to have texture_sample_create() or something similar. Also, GM:S already allows using both samples (_ext functions) and the older regular way (without _ext).
I see. I would support that, but they would still need to be enabled by default.

Quote
I guess their semantics is gotten from DX or something, because in GL sampler is just a uniform.
Yes it is; as I've said many times, their graphics are mainly DX9, ported cross-platform using ANGLE. And we already know it uses ANGLE because you can see the binaries in Studio's installation path.

Quote
And uniforms have indices in any range (starting from 0 to MAX_UNIFORMS). So in ENIGMA, for example, they can be returned as 126 or anything else.
Theoretically OpenGL supports several hundred sampler objects; most cards should support a minimum of 80, I believe it was.

Quote
Also, in the two doc excerpts you quoted I feel "stage slots" and "sampler slots" are meant to be two different things. Basically, I ask you to make 20 uniforms and then 1 sampler. Check the ID the sampler returns. In ENIGMA it might be 21 (or 20), while in GM:S I guess it returns 1 (or 0)? Maybe their function just does somethings differently then the one we have in ENIGMA.
Good catch; I may have misunderstood. I see that now from the sample code for texture_set_stage, and yes, I will test it when I have time.

Quote
Multi-texturing usually doesn't require multiple UV coordinates.
Quote
Terrain example also (by your own description) was multitexturing.
Ahh yes, good point then; so yeah, our default shader only needs to handle 1 texture.

Quote
And it's only logical. Making a shader that can interpret your data is basically impossible. You have to write a custom shader to use this data. I am not understanding 100% why that vertex format stuff is necessary though. I believe I can do everything it is used for without it.
Yes, I see, and I do not understand the purpose of the vertex formats either; most of it maps to FVF, implying that the formats exist so the FFP knows how to render the data.

Quote
Yes, and your statement was that D3D doesn't allow and Unity doesn't either. You didn't specify WHY.
Ahh, I didn't think I needed to; I thought it was evident enough that it did not need explaining. I'll try to be more descriptive in the future.

Quote
I agree with these kinds of things. Consistency is very important and GM often lacks it. Problem is that you cannot easily fix this without breaking a lot of compatibility even with already made ENIGMA projects. So these kinds of things need to be though out before it gets out of hand and we will just have "to deal with" (like YYG does now).
Personally I support a unified color type that can construct from doubles or floats but maintains the same internal data representation. But sadly, as you pointed out, that would break compatibility. So I don't bother; that's why I just try to focus on compatibility with ENIGMA and hope the project gets popular enough that I can develop a spinoff that fixes all this. But there are more things than just this; there are plenty of places where the inconsistency is counter-intuitive. I also don't feel Studio is at all revolutionary - I think GM needs a fundamental redesign.