Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Goombert

Thanks Jimn, I can't really comment too much on it right now without investigating deeper. But it is probably something to do with the cast of inst_t; these are compiler issues that need to be worked out by Dreamland. I do know some locals work under certain scopes, such as the child; the child should definitely be able to access the local. It would also be useful to test without the parent.

Developing ENIGMA / Re: EnigmaFileFunctions repo
« on: July 10, 2014, 10:53:52 pm »
I gave this an honest attempt, I really did, but Windows is fucking hopeless. You'll have to package it for us or something.

Developing ENIGMA / Re: Texture Handling
« on: July 10, 2014, 08:25:37 pm »
Quote from: TheExDeus
But we could make samplers have disable/enable functions, or make them disabled for the unit when the texture object's parameters change. In GL, sampler objects need to be manually created for a reason. It would make A LOT more sense to have texture_sample_create() or something similar. Also, GM:S already allows using both samplers (_ext functions) and the older regular way (without _ext).
I see, I would support that, but they would still need to be enabled by default.

Quote from: TheExDeus
I guess their semantics are taken from DX or something, because in GL a sampler is just a uniform.
Yes it is. As I've said many times, their graphics system is mainly DX9 and is ported cross-platform using ANGLE. And we already know it uses ANGLE because you can see the binaries in Studio's installation path.

Quote from: TheExDeus
And uniforms have indices in any range (starting from 0 to MAX_UNIFORMS). So in ENIGMA, for example, they can be returned as 126 or anything else.
Theoretically OpenGL supports several hundred sampler objects; most cards should support a minimum of 80, I believe it was.

Quote from: TheExDeus
Also, in the two doc excerpts you quoted I feel "stage slots" and "sampler slots" are meant to be two different things. Basically, I ask you to make 20 uniforms and then 1 sampler. Check the ID the sampler returns. In ENIGMA it might be 21 (or 20), while in GM:S I guess it returns 1 (or 0)? Maybe their function just does something differently than the one we have in ENIGMA.
Good catch, I may have misunderstood; I see now with the sample code for texture_set_stage, and yes, I will test it when I have time.

Quote from: TheExDeus
Multi-texturing usually doesn't require multiple UV coordinates.
The terrain example also (by your own description) was multitexturing.
Ahh yes, good point then; so yeah, our default shader only needs to handle 1 texture.

Quote from: TheExDeus
And it's only logical. Making a shader that can interpret your data is basically impossible. You have to write a custom shader to use this data. I am not understanding 100% why that vertex format stuff is necessary though. I believe I can do everything it is used for without it.
Yes, I see, and I do not understand the purpose of the vertex formats either; most of it maps to FVF, implying that they are for the FFP to know how to render it.

Quote from: TheExDeus
Yes, and your statement was that D3D doesn't allow it and Unity doesn't either. You didn't specify WHY.
Ahh, I didn't think I needed to; I thought it was evident enough that it did not need to be explained. I'll try to be more descriptive in the future.

Quote from: TheExDeus
I agree with these kinds of things. Consistency is very important and GM often lacks it. The problem is that you cannot easily fix this without breaking a lot of compatibility, even with already-made ENIGMA projects. So these kinds of things need to be thought out before it gets out of hand and we just have "to deal with" it (like YYG does now).
Personally I support a unified color type that can be constructed from doubles or floats but maintains the same internal data representation. But sadly, as you pointed out, that can't easily be fixed without breaking compatibility. So I don't bother; this is why I just try to focus on compatibility with ENIGMA and hope that the project gets popular enough that I can develop a spinoff project that fixes all this. There are more things than just this too; there are plenty of places where the inconsistency is counter-intuitive. I also don't feel Studio is at all revolutionary; I think GM needs a fundamental redesign.
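A minimal sketch of what such a unified color type could look like. The type name, packing order, and constructor names here are hypothetical illustrations, not anything that exists in ENIGMA; the point is only that both the 0-255 integer style and the 0.0-1.0 double style funnel into one internal representation.

```cpp
#include <cstdint>

// Hypothetical unified color: no matter how it is constructed (per-channel
// bytes or normalized doubles), the internal representation is always one
// packed 32-bit value, so every color function can agree on it.
struct Color {
    std::uint32_t rgba; // R in the low byte, then G, B, A

    // Construct from integer channels in the range 0-255.
    static Color from_bytes(std::uint8_t r, std::uint8_t g,
                            std::uint8_t b, std::uint8_t a = 255) {
        Color c;
        c.rgba = std::uint32_t(r) | (std::uint32_t(g) << 8) |
                 (std::uint32_t(b) << 16) | (std::uint32_t(a) << 24);
        return c;
    }

    // Construct from normalized doubles in the range 0.0-1.0.
    static Color from_doubles(double r, double g, double b, double a = 1.0) {
        return from_bytes(std::uint8_t(r * 255.0 + 0.5),
                          std::uint8_t(g * 255.0 + 0.5),
                          std::uint8_t(b * 255.0 + 0.5),
                          std::uint8_t(a * 255.0 + 0.5));
    }

    std::uint8_t red() const   { return rgba & 0xFF; }
    std::uint8_t alpha() const { return (rgba >> 24) & 0xFF; }
};
```

Either constructor style could then back the existing GML-flavored color functions without them disagreeing about the data underneath.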

Tips, Tutorials, Examples / Re: Multitextured Terrain Example
« on: July 09, 2014, 08:00:39 pm »
Now that's looking fantastic, did you pull the mipmapping code from my commit? If so, did you enable anisotropic filtering as well?

Developing ENIGMA / Re: Texture Handling
« on: July 09, 2014, 07:35:03 pm »
Quote from: TheExDeus
I'm not talking about the hardware, but the GL version. Like the GF400 supports GL4, even though it was released 3 years before GL4 was actually done. It's possible because GPUs are no longer FFP, so it's possible to add features via firmware and drivers. But it does mean the hardware/software needs to support GL3.3. Some older hardware (especially Intel) had some crappy drivers, which meant that even if it supported EXT, it didn't end up supporting ARB.
Ok, I get that, I did not know the specifics however, but thank you for clarifying.

Quote from: TheExDeus
Cache it.
I am already caching the objects; how else would I cache this? We would just need to know what has changed and what hasn't, but we'd need to remember it for each texture.
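A minimal sketch of the kind of per-texture parameter cache being discussed, with hypothetical names: remember the last filter values applied to each texture and only touch state when something actually changed. The `gl_calls` counter stands in for real glTexParameteri traffic.

```cpp
#include <unordered_map>

// Last-known filter state per texture; -1 means "never set".
struct FilterState { int min_filter = -1; int mag_filter = -1; };

struct TextureParamCache {
    std::unordered_map<unsigned, FilterState> cache; // keyed by texture id
    int gl_calls = 0; // stand-in for actual glTexParameteri calls

    // Apply the requested filters to a texture, skipping redundant updates.
    void apply(unsigned tex, int min_filter, int mag_filter) {
        FilterState &s = cache[tex];
        if (s.min_filter != min_filter) { s.min_filter = min_filter; ++gl_calls; }
        if (s.mag_filter != mag_filter) { s.mag_filter = mag_filter; ++gl_calls; }
    }
};
```

With something like this, re-binding a texture whose parameters already match the sampler state would cost nothing, which is exactly the redundancy being worried about above.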

Quote from: TheExDeus
Because the _ext functions change parameters of the sampler, which override the parameters set by texture stage.
Yeah, exactly, that's why it won't work. By default interpolation is turned off, so whatever min/mag filter you are storing in the sampler object will override your per-texture information. Calling another texture function which sets it per-texture is still going to be overridden by the sampler state, so that makes no sense.

Quote from: TheExDeus
shader_sample could in theory be anything. It doesn't need to be 0-8. It can be 240 if it's 240th uniform defined. So in this case you can override the sampler information in the shader.
Maybe I am not understanding something correctly, or their docs are wrong. Previously I thought that they are tied to texture stages like TEXTURE0, TEXTURE1 etc. But that is something different.
No, it does have to be 1-8. That specific doc doesn't mention it, but it does mention the term "slot", which is semantically defined in texture_set_stage:
Quote from: Studio Manual texture_set_repeat_ext
This function can be used to set whether a single sampler "slot" repeats the given texture when using Shaders in GameMaker: Studio.
Quote from: Stupido Manual texture_set_stage
This function will set the given stage "slot" a texture to be used. The number of stage "slots" available will depend on the platform you are compiling to, with a maximum of 8 being available for Windows, Mac and Linux, but on lower end Android devices (for example) this number can be as low as 2.

Their functions work, Harri, just like the Direct3D functions do, and do you know why? Because they never wrote OpenGL code, except for GM4Mac; they wrote their graphics system in Direct3D and then ported it using ANGLE. That's also why they added D3D9 FFP constants for flexible vertex formats to GML while being a cross-platform game engine! This is why I keep saying they are doing everything wrong.
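A minimal sketch of the stage-slot model the manual excerpts describe, using hypothetical names. Real code would route through glActiveTexture/glBindTexture; here the slot table alone shows the fixed-size, index-addressed nature of the model.

```cpp
#include <array>
#include <stdexcept>

// The manual caps Windows/Mac/Linux at 8 stage slots.
constexpr int kMaxStages = 8;

// Table of which texture id is bound at each stage; 0 = nothing bound.
std::array<int, kMaxStages> stage_slots{};

// Hypothetical stand-in for texture_set_stage: assign a texture to a slot.
void texture_set_stage_emulated(int stage, int texid) {
    if (stage < 0 || stage >= kMaxStages)
        throw std::out_of_range("stage slot out of range");
    stage_slots[stage] = texid; // real code: glActiveTexture + glBindTexture
}
```

The point of the fixed table is that a "slot" is a position in hardware state, not an arbitrary uniform index, which is the distinction being argued over above.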

Quote from: TheExDeus
How does it support multi-texturing exactly? By default you only draw 1 texture at a time - the used one. Like draw_sprite(spr_guy,0,0,0) will bind spr_guy to texture unit 0 and draw with it. There are no functions in GM or ENIGMA that allows you to do anything differently. You can of course render with as many textures as you want when writing your own shader (as demonstrated in many examples).
Through the function texture_set_stage and the vertex buffer functions. I am not sure if the vertex buffers can be rendered without a shader, but judging by the fact that some of their constants are FFP, I am pretty certain they can.

Quote from: TheExDeus
I actually looked in GL's spec and docs on how the thing works. Apparently you don't need to set MAX either; by default it's 1000. But it only generates mipmaps until the smallest level is 1x1 (so it never really goes to 1000 mipmaps unless you have a texture of size 2^1000, around 10^301). So there really isn't much use in manually specifying that number unless you want finer control. It might be useful for text, when you want it crisp even at a distance instead of totally smoothed out, while still having mipmapping enabled for a performance boost. So after finding this out I don't have a problem with not specifying the mipmap layer number, because there is a logical and valid reason for it. Until I looked at the docs there wasn't any, only "they did it like that".
Ok, good, glad you understand now. I was simply stating that as an explanation for why I didn't offer the ability to control the mipmap levels; it's not really needed, and it can't even be done in Direct3D as far as I am aware.
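The level count being discussed follows directly from the halving chain: each mip level halves the larger dimension until it reaches 1x1, so a full chain has floor(log2(max(w, h))) + 1 levels. A small illustration:

```cpp
#include <algorithm>

// Number of mipmap levels in a full chain down to 1x1 for a w x h texture.
// Each level halves the larger dimension, so this is floor(log2(max)) + 1,
// which is why GL's default cap of 1000 levels is never actually reached.
int mipmap_levels(int w, int h) {
    int levels = 1;
    int size = std::max(w, h);
    while (size > 1) {
        size /= 2; // next mip level halves the larger dimension
        ++levels;
    }
    return levels;
}
```

A 1024x512 texture, for instance, yields an 11-level chain (1024, 512, ..., 2, 1), nowhere near the default maximum.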

Quote from: TheExDeus
I have mentioned this several times before - I don't really care about GM compatibility. But that isn't up to me or you alone; many here feel totally different. They feel ENIGMA should strive for total GM compatibility. Also, I don't believe that everything GM has done is bad and that we need to go in a TOTALLY different direction. I like GM's event system, its instance system, its scripting and so on. But that doesn't mean we should do everything they do. Like I would want to be able to properly reference objects instead of everything being an int. While in theory it's cool and easy to use/learn, it does make a lot of the code messier, as I cannot do anything with classes. But class support for EDL should change that (still waiting for that..).
Yeah, I've never complained about any of that. You are confused by reading my other posts about a spinoff project with no objects; for the purpose of learning, yes, I see the practicality of the graphical objects and event system. I am talking more about functionality, Harri. For instance, the inconsistency in RGBA color values: sometimes an int, sometimes alpha is a double, sometimes all components need to be specified 0-255. That is really difficult for new users to grasp as a concept and makes it harder for them to learn, does that make sense? I am not saying GM is bad; I am saying many parts of it are counter-intuitive and inconsistent, which is a separate argument from saying it is bad for advanced programmers. I believe with the right design GM could be much better for novice and advanced programmers alike, and accommodate all of them in the same program.

Developing ENIGMA / Re: Texture Handling
« on: July 08, 2014, 11:40:06 pm »
Quote from: TheExDeus
1) So now we are raising GL3.2 to GL3.3? I don't have a problem with that, but it does seem we might end up with GL4 in a year. Maybe just start now.
Not necessarily; most OGL2 hardware also supports sampler objects, it just wasn't officially approved by the ARB until then. Just search the hardware reports.

Quote from: TheExDeus
I also don't understand why exactly we need to "emulate" everything YYG does?
They added texture_set_interpolation_ext; do you want to support it or not? This is not one of those deals where you can have your cake and eat it too. If you want ENIGMA to do per-texture properties like Unity3D and every other game engine, that's fine, but then ENIGMA won't support GM's functions, because one would override the other.

Quote from: TheExDeus
Especially if it's not that great of a decision?
I attempted to address that; basically, here is how I feel:
* A graphics API should be low level and not have sampler properties per-texture; samplers should be abstract objects that can be applied to individual textures if needed. This is why I support Direct3D over OpenGL: it had the best of both worlds from the beginning, both low level and high level depending on how you wanted to use it.
* A game engine should generally be high level and store the texture information per-texture; not every texture is going to use interpolation, and changing sampler states frequently is redundant and inefficient.

Quote from: TheExDeus
I guess sampler objects are supposed to be faster, but they actually require more binding? But that is required only when changing the sampler state, no?
I am not sure, as I have not got it fully working in OGL3, and I am not sure why texture binding is not working correctly; Project Mario is looking like Project Mario LSD again. But it may be possible to just bind all 8 to the sampler stages when they are created and leave them, and this would be a lot faster, since GM's texture sampling functions are designed like Direct3D's, not per-texture like Unity3D's.

Quote from: TheExDeus
Taking into account that it should only be used when changing sampler state or binding a different sampler to a texture, I don't see why your GL1 should be slower.
OpenGL1 is slower because all I can do is create the sampler object and apply its state when a new texture is bound; this requires changing texture properties that haven't actually changed. As I said, there is probably a way to make this more efficient.

Here is basically what it does: when you change the interpolation for sampler stages 1-8, it updates the sampler and immediately applies the new sampler settings to the currently bound texture at that stage.
Then if you bind a new texture, the entire current sampler state for that stage is applied to it as well.

Previously, when you toggled interpolation, we were just cycling through every texture and toggling it on each one. Now, despite OGL1 having slowed down, OGL3 has sped up because of fewer iterations: we change it only for the sampler, which means less hardware interference, and the sampler state then overrides everything for the texture at that slot.
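A minimal sketch of the emulation behavior just described, with hypothetical names. The `applied` map stands in for actually pushing parameters to the texture; the two rules are exactly the ones above: changing a stage's sampler re-applies it to the texture bound there, and binding a texture stamps the stage's whole sampler state onto it.

```cpp
#include <map>

// One emulated sampler per stage; only interpolation is modeled here.
struct EmulatedSampler { int interpolate = 0; };

struct SamplerEmulator {
    EmulatedSampler samplers[8];
    int bound[8] = {0};         // texture id bound at each stage, 0 = none
    std::map<int, int> applied; // texture id -> interpolation last applied

    // Rule 1: changing the sampler immediately re-applies it to the
    // texture currently bound at that stage.
    void set_interpolation(int stage, int on) {
        samplers[stage].interpolate = on;
        if (bound[stage]) applied[bound[stage]] = on;
    }

    // Rule 2: binding a new texture applies the stage's entire sampler
    // state to it, overriding whatever the texture had before.
    void bind_texture(int stage, int tex) {
        bound[stage] = tex;
        applied[tex] = samplers[stage].interpolate;
    }
};
```

This is also where the GL1 cost comes from: every bind stamps state onto the texture whether or not it changed, which the caching idea above would avoid.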

Now, this is all to emulate Direct3D's behavior. If we switched to doing it Unity3D's way, Direct3D9 would slow down and OpenGL would be faster, because I don't think Direct3D9 lets you set these properties per-texture, though Direct3D11 might, since it lets you control the creation of samplers.

Quote from: TheExDeus
4) There shouldn't be any reason to change the shader here. The texture sampler uniform is sent in G3ModelStruct.h line 599, where it is just set to 0, because the currently used texture is bound at GL_TEXTURE0. You can just put the function in the next line if you really want to.
Are you saying our default shader does not support multi-texturing? Because YoYoGames does, at least I think. This is strictly theoretical, because they have vertex buffers which are meant for shaders, so I do not know if multi-texturing or vertex buffers even work without shaders.

Quote from: TheExDeus
The ARB didn't need to do anything. They didn't add sampler objects because no one needed them. You yourself even say that sampler data should be per-texture.
It's more efficient if you intend to use the same settings for all textures, and sampler data should be per-texture if we're talking about game engines. If we're talking about acclaimed low-level graphics APIs, it should be optional, and it should have been from the very beginning.

Quote from: TheExDeus
Also, things like "// Honestly not a big deal, Unity3D doesn't allow you to specify either." or "YYG did it" are not really a reason to implement or not implement something. This is a separate project and we shouldn't be doing all the stuff others are doing. If there is no logical reason to implement something other than "they did it", then we shouldn't implement it.
Uhm, yes, because the thing we are talking about is how many mipmap levels are automatically generated. Unity3D does not let you specify; Direct3D9 apparently doesn't let you specify either, it just selects the best number of mipmap levels for the texture. OpenGL does, if you set the max level before calling the generate-mipmaps function; I attempted what I thought was the equivalent in D3D, but it didn't yield results. I do not think Direct3D11 lets you choose the number of mipmaps either. So basically, if you want to not do what everyone else is doing, we would need to write our own mipmap generator, but then it won't be hardware accelerated or have hardware filtering.

Quote from: TheExDeus
You can use GL3 on Windows just fine as well. If you have a PC with a card newer than 2008, then you should be fine. Especially if you have an Nvidia card, because Nvidia supports everything possible in OGL. So if you have a 400 generation card (released in early 2010, which is 4 years ago), then you already support GL4.4, which is the newest one.
I am praying for the OpenGL/Linux/open-source gaming revolution with the Steambox and all, but I don't get ahead of myself like you seem to be. We are, after all, developing a GM clone; one of our greatest features is that our IDE and engine don't just build anywhere but also run anywhere.

It basically boils down to this: do you want to remain compatible with GM or not? There is really no in-between anymore; there are so many differences and ways of doing things a million miles better than they did. If you don't want to support these functions, then there's no need to support any of the functions, and we should do all of this shit properly. Many of the places where they've fucked up make GM harder to learn. Learning to write sockets with regular scripting is easier than having some obtuse mechanism that fires events and fills data structures; that is the most ridiculous thing and in no way makes it easier for a noob to program, it in fact makes it harder!

If we want to take ENIGMA in our own direction, then we need to start doing things the correct way: take only the important games and projects with us, revising and improving them as we go, and eventually fully drop GM compatibility. I for one believe this project would be a lot farther ahead if we weren't even compatible with their engine; compatibility at this point is only inhibiting the project. But I will restate my position once again: ENIGMA, being a GM augmentation, should be as compatible as it possibly can, period. And ENIGMA should facilitate and encourage the development of spinoff projects and game engines.

Off-Topic / Re: GM:Studio Standard Now Free
« on: July 08, 2014, 10:04:51 pm »
TKG, Monkey is also open source, you only have to pay for the binaries. I really like what Mark Sibly is doing, his IDE is also Qt. This is of course if we are talking about the same primate....

Tips, Tutorials, Examples / Re: Multitextured Terrain Example
« on: July 08, 2014, 10:01:11 pm »
Harri, this is why LGM has the precompile option; you'll notice that option does not exist in Studio. So it is entirely up to us how we define its behavior. In general one would assume that the precompile option builds and links your shader only when in the compliant graphics system.

Developing ENIGMA / Re: Iji on ENIGMA
« on: July 08, 2014, 09:57:22 pm »
This is very nice sorlok, you are actually inspiring me, keep up the good work!

Sorry if I generally appear ignorant on compiler-related issues; it's because I usually am. I will speak up if I feel the need to, but for the most part that stuff is best directed at Josh.

Tips, Tutorials, Examples / Re: Multitextured Terrain Example
« on: July 07, 2014, 04:01:00 am »
Hey ego, Direct3D9 can't run it because my shaders are GLSL. I never even added HLSL support for Direct3D9 yet. OpenGL3 can't run it because the shaders are incompatible. And Harri used a better random generation than I did, so yes mine appears blocky.

On an important side note, I am getting massive framerate increases, from 100 to 500, with mipmapping enabled. We need the sampler objects worked out for OGL3 however before it can be utilized in all graphics systems.

Developing ENIGMA / Re: Texture Handling
« on: July 07, 2014, 03:34:59 am »
Uhm, no, OpenGL1 games suffer; on Linux it's safe to just use OpenGL3. Windows has Direct3D9, which works great, and Direct3D11 will too once we convert the GL3 shaders to HLSL.

Developing ENIGMA / Re: Texture Handling
« on: July 07, 2014, 02:57:26 am »
Yes if you want OpenGL graphics, especially if you want them compatible with GM. As I said you can blame both YYG for making a cross-platform DirectX emulator, a.k.a. GM: Studio, and you can thank the ARB for allowing OpenGL to clusterfuck sampler properties per-texture and not properly abstracting sampler objects.

The point here is this:
* OpenGL is a "low level" graphics API; it should have had sampler objects since the beginning, just like Direct3D. This is also partially Microsoft's fault, but not so much; this is the one rare case where they had nothing to do with OpenGL's stupidity.
* When it comes to graphics, low-level is important.
* A game engine like GM or Unity3D should be storing repetition, filtering and other options in materials, and per-texture would be fine. A game engine is supposed to be the opposite of low-level. But they are unlikely to do that because they're on "ANGEL dust".

So basically, YYG and the ARB are both equally stupid.

Developing ENIGMA / Texture Handling
« on: July 07, 2014, 02:23:06 am »
Well, this is something that needed to be discussed a long time ago. OpenGL stores sampling information per texture without the use of the sampler object extension approved by the ARB for core OGL3. Direct3D has done it with sampler objects, and has properly abstracted the concept, since the beginning of time.

Now, with YoYoGames essentially building a cross-platform DirectX emulator ("ANGEL dust"), they are basically sticking to sampler states. In fact, I don't really know who I hate more, them or the ARB. Don't get me wrong, I love a lot of things about Direct3D, and Microsoft is purposefully responsible for fucking up a lot of OGL, but this fuck-up rests on the ARB's shoulders.

Anyway, because of this, OpenGL1 needs a sampler state emulator object, which I've already written and which works fine, though Project Mario takes a 10fps hit. OpenGL3 needs further integration with its shaders and the sampler objects extension, which I've already programmed, but I cannot find my way through Harri's shaders to integrate it. Direct3D9 is fine; in fact it and OpenGL1 have working texture filtering.

Based on these two functions, my assumption is that YYG is going to base all of the texture stuff on samplers, unlike Unity3D, which stores texture repetition information, filtering, and whatnot per-texture.

They may also change their minds once they begin implementing proper mipmapping and other things they have indicated interest in doing.

At any rate, if they don't change, which I don't believe they will, I have begun coding everything this way. These are the proposed texture functions; they are not all documented yet, as I haven't had time and have been focusing on the coding.

My suggestion is that my current pull request be merged into the OpenGL3 branch until Harri or myself has time to complete the OpenGL3 sampler object integration.

A tutorial for the OpenGL3 sampler objects is available below.

Tips, Tutorials, Examples / Re: Multitextured Terrain Example
« on: July 06, 2014, 04:33:25 pm »
Harri, people were complaining that the normal calculation method was not working. I had also made changes to the way models were being built at that time, so I simply removed the function.

I suggest following whatever Unity3D offers in their model class.

I am also working on cleaning up our texture shit; some things were not done right in OGL. YYG has their functions mimicking Direct3D sampler states, which are global, so I need to write a sampler cache for OGL. On top of that, I figured out how to do automatic mipmap generation for both systems.

Tips, Tutorials, Examples / Re: Multitextured Terrain Example
« on: July 05, 2014, 06:41:59 pm »
God damn Harri, I actually started the original so that I could maybe do a SimCity clone, but you've far outdone me so far.

But anyway, that shit needs some mipmapping and anisotropic filtering; I am going to go clean that up and get it working between OGL and Direct3D.