ENIGMA Development Environment


Messages - Goombert

Programming Help / Help with Maths
« on: February 07, 2014, 12:44:17 PM »
Can someone help me with something?

I need to generate the following numbers in a combobox, so I don't have to type them out statically.
Code: (C++) [Select]
8, 16, 24, 32
40, 48, 56, 64
80, 96, 112, 128
144, 160, 192, 224
256, 320, 512

If you look closely there is a pattern: each time the values reach a square such as 64 (8 squared), the increment grows by another 8. For instance, the numbers increase by 8 up until they hit 64, and then the step becomes 16; it continues this pattern.

For context, this is for the audio bit depths in the new sound editor for LGM 1.8.4.

Anyway, I wrote the following for loop.
Code: (Java) [Select]
for (int i = 0; i <= 512; i += 8 * Math.sqrt(i)) {

However, I only need to increment i by a multiple of 8 derived from the square root: when i is 0 or 8 the multiplier should be 1, but when it is 64 the increment should be 2 * 8. Got any ideas?
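For reference, here is one way the list could be generated rather than typed out statically. This is only a sketch of my reading of the pattern (the step doubles when the value reaches the square of the current step, i.e. at 64 and 256); the irregular entries near the end of the list above (320 jumping straight to 512, for example) don't follow it exactly, so a hand-written table may still be the simplest option.

```cpp
#include <cassert>
#include <vector>

// Generate bit-depth choices: start stepping by 8, and double the
// step each time the value reaches the square of the current step
// (64 = 8*8 switches to step 16, 256 = 16*16 switches to step 32).
std::vector<int> bitDepths(int limit) {
    std::vector<int> out;
    int step = 8;
    for (int i = 8; i <= limit; i += step) {
        out.push_back(i);
        if (i == step * step) step *= 2;
    }
    return out;
}
```

This yields 8..64 by 8, then 80..256 by 16, then 288..512 by 32, which covers most of the listed values.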

General ENIGMA / Re: The new GL3
« on: February 07, 2014, 12:30:18 PM »
I use CodeXL by AMD. It's basically the gDEbugger I used previously, but this one is a lot newer and is still maintained. And it works on all cards, not just AMD. OpenCL debugging works only with AMD in that tool. And that tool showed that many one vertex buffers have been created.
Ok, then you can't rule out that I made a mistake in the vertex data upload, where the BufferData() stuff happens. I remember not getting any performance difference when I added flags for static and dynamic buffer usage, which was rather odd in and of itself.

General ENIGMA / Re: The new GL3
« on: February 07, 2014, 12:03:47 PM »
Cg allows some "profile" thing, but usually it's still done on startup. I don't think it meant recompiling during rendering.
I don't really give a shit about speed anymore; your changes added to my code make things a million times faster than Studio, probably because they use that stupid ANGLE shit I wouldn't shut up about. Nevertheless, it wouldn't hurt to run some benchmarks just to investigate. But don't listen to me, because it's not important in the slightest, and I don't want to distract you with this over real problems like the 1 vertex bug you discovered.

Even the "if" branching I already do,
The correct term is short-circuit evaluation.
I don't care for your usage of Nvidia proprietary vocabulary.

But I want to know why your game tries to render batches of 1 vertex. What drawing function creates a batch of 1 vertex? The only thing coming to my mind is "draw_primitive_begin(pr_points); draw_vertex(x,y); draw_primitive_end();".
I also want to know what you meant by debugging it; I thought you meant an OpenGL profiler, and if that is the case, there may be an error in my data buffering which also causes the performance decrease, which is why I mentioned it.

General ENIGMA / Re: The new GL3
« on: February 07, 2014, 10:49:12 AM »
Yeah, I get that, Harri; I might be talking out my ass. But even if it is expensive to do for lighting too, I know plenty of games that toggle lighting on and off quite a bit, and I've done so in games before as well.

Developing ENIGMA / Re: Window Alpha and Message Box
« on: February 07, 2014, 10:09:36 AM »
Quote from: TheExDeus
I meant combining them on CPU side before sending just like you do with colors. I just chose my wording incorrectly as of course it would loose precision data type wise, but I wanted to say that it probably won't loose precision because the data isn't that precise. Like "float color = 255.0f; unsigned char color2 = (unsigned char)color;" will not make color2 data loose precision, while of course char is a lot less precise than a float.
Ok, I am fine with that if we just do some math and come up with a proof to show that it doesn't.

Quote from: TheExDeus
That is because you packed and made it 4x smaller and reduced memory bandwidth. That I don't oppose - I even suggest trying the same with texture coordinates. That is why I said it does the conversion on GPU, so the data on the bus is still unsigned chars, but when it gets to the GPU it gets converted to float and normalized to 0-1.
Yes, but here is the weird part: it was tested on a static model. The model wasn't being uploaded every frame; it was uploaded once the very first time it was drawn, and then never uploaded again. So that doesn't make any sense; under your argument the performance boost should have been nothing. It couldn't have affected the dynamic rendering that much, since that only drew about 3 lines of text with the framerate. So as I said, the interpolation on the GPU with floats you speak of, and I know exactly what you mean, should not have given a 30fps boost. The 30fps boost indicates it was expanding the vertex data into floats every time it rendered, instead of only once at upload. That to me seems horribly suboptimal of OpenGL; it should do that expansion on the first upload and leave the data that way.
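As an illustration of what the packing change saves on the CPU side, here's a minimal sketch (not the actual ENIGMA code) of collapsing a 4-float color into the single 4-byte RGBA value that gets uploaded; the GPU then normalizes the bytes back to 0-1 floats, as Harri describes.

```cpp
#include <cstdint>

// Pack four 0.0-1.0 color components into one 32-bit RGBA value.
// This is the 4x memory/bandwidth reduction: 16 bytes of floats
// become 4 bytes on the bus; the GPU re-normalizes them to 0-1 floats.
uint32_t packColor(float r, float g, float b, float a) {
    auto toByte = [](float c) -> uint32_t {
        if (c < 0.0f) c = 0.0f;       // clamp out-of-range components
        if (c > 1.0f) c = 1.0f;
        return (uint32_t)(c * 255.0f + 0.5f); // round to nearest byte
    };
    return (toByte(a) << 24) | (toByte(b) << 16) | (toByte(g) << 8) | toByte(r);
}
```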

Quote from: TheExDeus
You should probably cast that to unsigned char.
Yeah I see that.

Quote from: TheExDeus
And while I don't care one bit about it (because I don't use GM at all), others do.
Don't worry, I know; I care about it too. We get too many people coming here complaining about stuff being different, which is why I put a lot of effort into compatibility, effort that you would not likely get from Josh (not saying that in a bad way). For instance, take GM's new stupid fuck ds accessors: DaSpirit thought they were retarded and so did I, and when I told Josh not to ever bother adding compatibility for that, he said, "Don't worry, I'm not."

General ENIGMA / Re: The new GL3
« on: February 07, 2014, 09:58:13 AM »
Quote from: TheExDeus
Can you give source on that I could read?
Quote from: Wikipedia
In addition to being able to compile Cg source to assembly code, the Cg runtime also has the ability to compile shaders during execution of the supporting program. This allows the runtime to compile the shader using the latest optimizations available for hardware that the program is currently executing on. However, this technique requires that the source code for the shader be available in plain text to the compiler, allowing the user of the program to access the source-code for the shader. Some developers view this as a major drawback of this technique.
It's on Nvidia's site somewhere else too; I just assume the same holds true for GLSL.

Quote from: TheExDeus
What we should technically do though is not keep shader compiled shaders around. Like we do this "glCompileShader(vshader->shader); glCompileShader(fshader->shader);" but not release them. We need to keep the source's around (the shaderstruct), but after linking the shaders should usually be destroyed.
For reasons above, I decided not to destroy the shader code, but we should go ahead and do so if we add a generate_general_shader(lighting, color); call.

Quote from: TheExDeus
What? You created the new GLSL standart? :D Also, the separable programs are only in GL4.
Come on, you know I meant an expansion of GM's shader functions, so tessellation shaders could be made.

Quote from: TheExDeus
It does batch normally. The problem is that it gives texture coordinates, but GM (and ENIGMA) allows passing -1 as texture to draw using currently bound color and alpha. So that is what I did. It works the same in GL1.1 and I suppose it worked the same in GM.
Good, I didn't realize I copied that code over already.

NOTE: I want to reiterate this point to you.
Quote from: TheExDeus
Project Mario on the other hand has a lot more (my debugger even shows Mario tries to render vertex buffers with only 1 vertex, which Robert and I should investigate), so there the FPS was massively decreased.
There is also a significant difference between OpenGL3 and Direct3D9 with the Mario game; D3D runs it 50fps faster. Remember when I was poking around with setting the _DYNAMIC flags in BufferData()? Yes, that might be where the issue is occurring. OpenGL3 should be faster than Direct3D9 even without your performance changes, so if you find that bug and fix it you should get at minimum a 50fps boost in OpenGL3.

Works in Progress / Re: Dungeon Blabber
« on: February 07, 2014, 02:01:34 AM »
I've seen his hardware specs; they aren't glamorous but they are decent, about the same as mine. It really shouldn't be taking that long, TKG.

Quote from: TKG
and perhaps some videos (In Bryce videos are exported as AVI and I'll have to convert it all to multi frame sprites).
If you use DirectShow I can write you an example that will thread video playback.

Quote from: TKG
When I render I can't do anything else on my PC but wait for the renders to finnish because Bryce hogs up all my resources and anything running other than Bryce could slow the render down and/or cause an aborting error message which are really annoying because I have to start over.
What is Bryce? :\

Quote from: TKG
The screenshot you have looks neat, Robert, are those gravestones/fences 2D sprites or 3D models? I can't tell because they look like sprites a glance but they seem to have correct 3D perspective so IDK.
Actually, sort of: they are 3D sprites. By that I mean they are a 3D bitmap, like you see in MRI scanners; the technique is called voxel rasterization. It died out when Direct3D came out and vector-based 3D graphics became more popular.

Developing ENIGMA / Re: Window Alpha and Message Box
« on: February 07, 2014, 01:26:29 AM »
Quote from: TheExDeus
Well I already said that it should hold up to 8k fine. Of course all of that needs to be tested though. You were then referring to arithmetic, which of course is done in shaders, and so I said that in shader part of the things it's no different. If you meant arithmetic as in changing 0-1 to 16bit short, then I also don't think precision would be lost. It just wouldn't be a float though. That is actually mentioned in that Apple link you gave:
I am not understanding: where and when do you plan to combine tx and ty? If you do it before sending the vertex data to the GPU, it will lose precision at that point; but if you wait until it's in the shader to combine them, then no, it won't lose precision at that stage. If you do plan to do it CPU side, then combine tx and ty using half floats in the addTexture(tx,ty) call before pushing them into the vector, and use a union too like color does, because only GLES 3.0 offers half floats.
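To make the precision question concrete, here is a rough float-to-half sketch (truncating, ignoring rounding and NaN edge cases; not ENIGMA code), just enough to see what packing coordinates into 16 bits costs.

```cpp
#include <cstdint>
#include <cstring>

// Naive float -> IEEE 754 half-float conversion (truncates the mantissa,
// flushes denormals to zero, collapses NaN into infinity). Demonstration
// only: shows the precision lost when coordinates are packed into 16 bits.
uint16_t floatToHalf(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    uint32_t sign = (bits >> 16) & 0x8000;
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFF) - 127 + 15;
    uint32_t mant = (bits >> 13) & 0x3FF;            // keep top 10 mantissa bits
    if (exp <= 0)  return (uint16_t)sign;            // underflow -> signed zero
    if (exp >= 31) return (uint16_t)(sign | 0x7C00); // overflow -> infinity
    return (uint16_t)(sign | ((uint32_t)exp << 10) | mant);
}

float halfToFloat(uint16_t h) {
    uint32_t sign = (uint32_t)(h & 0x8000) << 16;
    uint32_t exp  = (h >> 10) & 0x1F;
    uint32_t mant = (uint32_t)(h & 0x3FF) << 13;
    uint32_t bits;
    if (exp == 0)       bits = sign;                 // zero
    else if (exp == 31) bits = sign | 0x7F800000u;   // infinity
    else                bits = sign | ((exp - 15 + 127) << 23) | mant;
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}
```

Round-tripping 2048.0f comes back exact, but 4097.0f comes back as 4096.0f. With only about 11 bits of mantissa, a half also can't distinguish all 8192 steps of a normalized coordinate near 1.0, which is the accuracy worry with huge texture pages.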

Quote from: TheExDeus
In FFP there are many color setting functions. In the deprecated OpenGL FFP colors are floats (https://www.opengl.org/sdk/docs/man2/xhtml/glColor.xml)
That's OpenGL's dumbassery, and it explains why I didn't get a performance boost in GL1 when I replaced it with the byte functions, but did get a boost in OpenGL3 and Direct3D9.

Direct3D9's deprecated FFP and FVF (flexible vertex format), as well as vertex declarations (which are not deprecated), use macros to convert the color to a single DWORD CPU side.
Quote from: Microsoft Developers Network
D3DFVF_DIFFUSE   Vertex format includes a diffuse color component.   DWORD in ARGB order. See D3DCOLOR_ARGB.

Quote from: TheExDeus
The conversion is done in GPU, so it doesn't care one bit what format you actually use for uploading. I just keep saying that float's are what GPU's use inside - for everything - including colors. That is the only point I am trying to make.
That's fine, we both agree on that, but you can't ignore the fact that removing the 4-float color from the CPU side did give me a noticeable frame rate difference: 220->250FPS in the Studio cubes demo, and it gave the same performance results in Direct3D. It also went from 10fps->30fps when I made the change to GL1 vertex arrays.

Quote from: TheExDeus
And "consistent" also means that if 90% of the functions take 0-1 as alpha, then the rest should too.
I actually agree with you on this, but disagree at the same time, because I like how YYG is using only 0-255 for new functions. But you won't stop me from macro'ing every color function.

Quote from: TKG
Oh I didn't realize. That's actually a pretty good idea.
Let me show you through code why it is such a good idea, then you'll clearly see.

This is window_set_alpha(0-255)
Code: (C++) [Select]
void window_set_alpha(char alpha) {
  // Set WS_EX_LAYERED on this window
  SetWindowLong(enigma::hWndParent, GWL_EXSTYLE,
  GetWindowLong(enigma::hWndParent, GWL_EXSTYLE) | WS_EX_LAYERED);

  // Make this window transparent
  SetLayeredWindowAttributes(enigma::hWndParent, 0, alpha, LWA_ALPHA);
}
This is window_set_alpha(0-1)
Code: (C++) [Select]
void window_set_alpha(float alpha) {
  // Set WS_EX_LAYERED on this window
  SetWindowLong(enigma::hWndParent, GWL_EXSTYLE,
  GetWindowLong(enigma::hWndParent, GWL_EXSTYLE) | WS_EX_LAYERED);

  // Make this window transparent
  SetLayeredWindowAttributes(enigma::hWndParent, 0, (unsigned char)(alpha*255), LWA_ALPHA);
}
Actually, you could implement both functions at the same time since the data type is different, which would make window_set_alpha(0-1) look even more ridiculous.
Code: (C++) [Select]
void window_set_alpha(float alpha) {
  window_set_alpha((unsigned char)(alpha*255));
}

Notice the 0-1 version requires an extra multiplication and a cast? Yeah, that makes it less optimal.
Especially when you consider that char holds only 0 to 255, while float holds ±3.4E38 with only about 7 significant digits.

Quote from: TKG
But it must be done in a way that could still give the option to be GM compatible.
Don't worry if I do implement options, the GM behavior will be the default. I just want this new function to do it the proper way, like how YYG is using only 0 to 255 for new functions they add.

General ENIGMA / Re: The new GL3
« on: February 07, 2014, 12:39:47 AM »
Quote from: TheExDeus
So the worst case is no difference, best case is improvement. And as this doesn't create any compatibility issues, then I don't see why free performance gain should be discarded.
Right, good job Harri.

Quote from: TheExDeus
Note: In this topic I will say GL3 everywhere, when in fact it's GL3.1 full context or even GL3.2 core (but no GL3.2 functions are used).
Don't go any higher than 3.1; my driver on Windows supports 4.1, but only 3.1 on Linux.

Rename it in the about info too, like I did with OpenGL1.1, the about info should specify the exact OpenGL core.

Quote from: TheExDeus
Project Mario on the other hand has a lot more (my debugger even shows Mario tries to render vertex buffers with only 1 vertex, which Robert and I should investigate), so there the FPS was massively decreased.
There is also a significant difference between OpenGL3 and Direct3D9 with the Mario game; D3D runs it 50fps faster. Remember when I was poking around with setting the _DYNAMIC flags in BufferData()? Yes, that might be where the issue is occurring. OpenGL3 should be faster than Direct3D9 even without your performance changes, so if you find that bug and fix it you should get at minimum a 50fps boost in OpenGL3.

Quote from: TheExDeus
(like when drawing d3d_model_block with texture == -1).
I've got some bad news for you Harri, we actually have to undo all the shit I did with textures again. Studio now returns a pointer with _get_texture() functions, see my latest comments on this GitHub issue.

Quote from: TheExDeus
Nvidia, for example, uses something called Warps - they are basically batches that work on several pixels at once - like 32. If all of them have the same instructions (so branching doesn't change per pixel),
Nvidia also suggests and encourages dynamically recompiling shaders at runtime. We wouldn't necessarily need an uber shader; we'd need an uber shader that is broken down into different functions. You'd only need to rebuild the shader when a state change occurs, like d3d_set_lighting(). I actually suggest we do this too; if you don't want to, that's fine, your code is good enough, and I guarantee it still kicks the shit out of Studio, because we already do so using the plain old FFP. But anyway, it's just another optimization we can look at down the road, and I might even do it myself if I feel it becomes worth it.

You'd basically just structure the program like this, and only copy into the shader code the functions that are actually used. For instance, in the following code, when the shader is rebuilt in d3d_set_lighting(true) you would copy the apply_lighting call and its definition into the string and rebuild the shader; when the user turns lighting off, you'd rebuild the shader without it. All you'd need is a basic generate_default_shader(lighting, color);
Code: (glSlang) [Select]
void apply_environment();
void apply_lighting();

void main() {
  apply_environment();
  apply_lighting(); // only pasted in when the shader is rebuilt with lighting on
}
But anyway, in the future I can refactor your code if you don't want to do it. Also, there may be a way to do them as separate shaders and recompile only the main one to utilize the others; see the following. This is why I created the advanced GLSL functions.

Quote from: TheExDeus
There are still some function here and there (like d3d_begin() and d3d_end()), but I will remove them soon enough.
Harri, I hope you don't mean you are officially removing d3d_start(), Studio has not deprecated that, and the behavior is defined as follows.

Quote from: TheExDeus
So "draw_set_color(c_red); draw_set_alpha(.5); d3d_draw_block(..., texture = -1, ..);" will draw a transparent red block.
I have to change that, d3d_model_block should batch like it does in Direct3D9.

Quote from: TheExDeus
like 50 commits behind Master, but conflicts should be minimal.
I'll be sure not to touch it in the mean time. To be honest, I wouldn't really care if you went ahead and merged it right now.

Quote from: TheExDeus
These changes will probably break GL3 to some people here (like Poly), but that is because their hardware just don't support GL3 in the first place. They currently can run it, because the implementation in Master is more like GL2.
He couldn't run the games anyway because of surfaces, so don't worry about his shitty Intel drivers; he still has OpenGL1 and Direct3D9.

Quote from: TKG
If you guys are now calling d3d_start/end() deprecated even though in GM it isn't I don't like where this is going at all. Whatever happened to this being a GM compatible, GM clone? We might as well rename ENIGMA because the the path it's going is to no longer have the "GM" in "ENIGMA".
TKG, relax, he can't remove it, and I don't think he meant what you think he means anyway. Technically in Studio they are deprecated though, because all 2D rendering is now done with Direct3D (OpenGL on Mac). That means there is never a need to initialize Direct3D; d3d_start() was from back in the day when GM did its 2D rendering in software mode and only 3D drawing used Direct3D.

We still have the functions in ENIGMA, but they really don't do anything except set the perspective projection, they don't really do anything in Studio either.

At any rate, everyone give Harri a round of applause, he deserves it.

Works in Progress / Re: Dungeon Blabber
« on: February 06, 2014, 06:51:31 AM »
Why no playable demo? Your water looks nice, btw, all of your renders remind me of this old school voxel engine.

I don't mean that in a bad way, I am a huge fan of voxel graphics!

Developing ENIGMA / Re: Optimized GMX Loading
« on: February 05, 2014, 03:33:37 PM »
c h o s w x y
Oh wow, you're clever; I did not notice that. I wonder if that is what is behind the ISO file_text_read() thing, perhaps it behaves the same way. GM traditionally did guarantee the order, however, which we established with Project Mario, hence me re-releasing the game ISO/C compliant.

So basically, I just want to clarify, you and Josh are not against ENIGMA being able to load EGM's without the plugin? If so that is awesome, I would have done much more work on the Command Line Interface a long time ago.

And to further clarify, IsmAvatar, what do you specifically think of delaying the loading of each resource to the first time it is accessed?

Developing ENIGMA / Re: Window Alpha and Message Box
« on: February 05, 2014, 11:06:21 AM »
Quote from: TheExDeus
I already said this - in the shader all of it is floats. It doesn't matter what format you pass it as. They always are converted as floats in the shader. Like look at the normalization argument in glVertexAttribPointer:
They are expanded in the vertex stage after upload.

They also get byte aligned in this stage.

Quote from: TheExDeus
So arithmetic in the shaders won't loose precision, because the 16bit value will be normalized and converted to a 32bit float.
Yeah, but you said combining tx and ty into a single float before uploading over the bus; that would be done on the CPU, so wouldn't it lose precision?

Quote from: TheExDeus
What? Those are not even the same things. We were talking about data types
I thought we were talking about systems of pixel measurement. Because when it comes to 3D raster graphics, you definitely would not want to use 4 float colors. Specifically read the citation about John Carmack, you should know who he is.

Quote from: TheExDeus
Windowing API? Yes. Graphics API? No.
CPU? Yes. GPU? No. A graphics API obviously performs uploading on the former not the latter, otherwise DirectX and OpenGL wouldn't both internally upload color as only 4 bytes using their fixed-function pipeline. Also, ironically, OpenGL ES 3.0 adds a wider array of support for smaller data types.

Quote from: Apple, Inc.
OpenGL ES 3.0 contexts support a wider range of small data types, such as GL_HALF_FLOAT and GL_INT_2_10_10_10_REV. These often provide sufficient precision for attributes such as normals, with a smaller footprint than GL_FLOAT.

Quote from: TheExDeus
Yes, RGB functions. But that wasn't an RGB function. It made the window transparent, if anything it could of been considered a drawing function. Drawing functions take 0-1. And they always will be taking 0-1 unless you want to break compatibility with all versions of GM.
Harri, there is a difference between what GM does, what we do, and what we should do, and what GM should have done, and what we should have done.

RGBA parameters should all have been 0 to 1, or all 0 to 255, at the same time, so that people could choose and casting could be eliminated; not a mixture of both. Now, we obviously shouldn't break GM compatibility (well, heh, actually we really should, but I am not going to; it's too big a difference), so I am going to hack around it with macros.

Quote from: TheExDeus
He did that way because alpha wasn't even a thing for GM at first. Later when he added it, he did it to be consistent with drawing API he used. It apparently set alpha as floats and that is what he did.
Evidence for this? GM6 or 7 (I forget which) became Direct3D hardware accelerated, and I know 0 to 1 alpha was in use before that. But if it was Direct3D he would have definitely used 0 to 255, unless he referred to the Microsoft documents which say that 0 to 1 is only useful for noobs. Whatever was in use for graphics before D3D, I wouldn't believe for a second that it didn't offer 0 to 255 for alpha.

One possibility is that he was using Graphics Device Interface or just native Win32.
Quote from: Microsoft Developers Network
In general, the number of possible colors is equal to 2 raised to the power of the pixel depth. Windows Embedded CE supports pixel depths of 1, 2, 4, 8, 16, 24, and 32 bpp.
Quote from: Microsoft Developers Network
Microsoft Windows considers that a color is a 32-bit numeric value. Therefore, a color is actually a combination of 32 bits:

Quote from: TheExDeus
and 0-255 color (at least in color functions, because in reality color is an int holding all 3 components).
If red, green and blue were all 0 to 1 as well, then it would at least be fucking consistent, and I'd not be complaining about GM.

To correct my earlier numbers, it would actually be 56/64-bit color: a 24/32-bit RGB integer + 32-bit floating point alpha = 56/64-bit color. Nobody has ever heard of 56-bit color before, and 64-bit "high color" is defined as 16 bits per channel, with only a 16-bit alpha.

Developing ENIGMA / Re: Optimized GMX Loading
« on: February 04, 2014, 09:36:38 PM »

The first part I decided to go ahead and do is moving all the glyph and font metric shit out of the plugin and into LateralGM. This is for several reasons, but the base reason is GMX requires a texture stored with every font as well as the glyph metrics in the properties file. This will also make it easier to simply pass an EGM to the compiler by dumping the glyph metric information into the EGM. I guess this is why GM also dumps it to the GMX.

Now we can add proper support for multiple character ranged fonts. This however leaves me with some questions for Josh.

1) The current approach I took was to store the ranges and glyphs in an Eef data file alongside the properties file. They look like this.
Code: (YAML) [Select]
- [32, 127]
- [32, 127]
- [32, 127]
- [132, 0, 0, 12, 12, 0]
- [133, 12, 0, 12, 12, 0]
Now, currently I do not statically write out the parameter order for each property array, so given that ISO doesn't guarantee parameter order, would this be an issue with YAML and Java? Also, note that the following occurs with all GMX projects exported by LGM.
Code: (XML) [Select]
<glyph character="113" h="18" offset="1" shift="9" w="7" x="123" y="22"/>
<glyph character="87" h="15" offset="0" shift="15" w="15" x="19" y="2"/>
Notice h is before offset and shift and not after w?

2) To make it possible for an EGM to just be passed to the compiler, I will also need to store the texture in the EGM. I can do one of two things. I can tack it onto the end of the glyph/ranges data file as pure binary preceded by a 4-byte size header, and just accept that part of the data file being illegible when opened in, say, Notepad. The other option is to store a second data file, e.g. "font_0.tex", and add a property to the range/glyph data file that tells us what format it is in, kind of like I did with shaders where I had two data files. Which do you like better?

3) So you like the simple way of sub-classing ResNode in order to accomplish responsive file editing? And also, do you think it is worth it to try and implement this behavior for GMK as well, or only EGM and GMX?
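For the first option in question 2, the 4-byte size header scheme could look like this sketch (hypothetical helper names, not actual ENIGMA/LGM code): append the size, then the raw texture bytes, so the reader knows exactly how much binary to skip past.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Append a length-prefixed binary blob (e.g. a font texture) to a buffer:
// 4-byte little-endian size header, then the raw bytes.
void appendBlob(std::vector<uint8_t>& out, const std::vector<uint8_t>& blob) {
    uint32_t n = (uint32_t)blob.size();
    for (int i = 0; i < 4; ++i)
        out.push_back((uint8_t)(n >> (8 * i)));   // little-endian size header
    out.insert(out.end(), blob.begin(), blob.end());
}

// Read the blob back starting at 'pos'; advances pos past the blob.
std::vector<uint8_t> readBlob(const std::vector<uint8_t>& in, size_t& pos) {
    uint32_t n = 0;
    for (int i = 0; i < 4; ++i)
        n |= (uint32_t)in[pos++] << (8 * i);
    std::vector<uint8_t> blob(in.begin() + pos, in.begin() + pos + n);
    pos += n;
    return blob;
}
```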

Developing ENIGMA / Re: Window Alpha and Message Box
« on: February 04, 2014, 09:29:52 PM »
That should mean it can represent everything from 0 to 8192 fine.
That sounds ok when you first think about it, but you have to leave a margin of error resulting from division and/or other operations.
It looks like Wikipedia has some info on it.

Yes, with the floats being the metric system. Seeing how it's the primary data type in graphics, I don't see why fight with that.
Actually no it wouldn't be floats or pixels, but DPCI.

Floating point precision is only necessary for vector graphics, not raster graphics. And since both Direct3D and OpenGL are 3D rendering APIs, that explains why, and also why graphics card manufacturers decided to optimize for floats and vector-based graphics.

0-255 is a weird approach if you ask me.
Yes, but 0 to 255 is faster and more optimal, since that is what 100% of APIs use internally; not to mention 0 to 255 for only alpha is inconsistent with all the other RGB functions already taking 0 to 255. The problem is that Mark Overmars decided to go halfway in between and make alpha a float, which is inconsistent with true 48-bit color. I'd be fine with it if it was all 0 to 255, or all 0 to 1 with the option to switch to 0 to 255 to avoid casting. DirectX provides macros taking 0 to 1 for each RGBA component, but those are also turned into 0 to 255 internally.
Here are all their definitions in d3d9types.h
Code: (C++) [Select]
#define D3DCOLOR_ARGB(a,r,g,b)       ((D3DCOLOR)((((a)&0xff)<<24)|(((r)&0xff)<<16)|(((g)&0xff)<<8)|((b)&0xff)))
#define D3DCOLOR_COLORVALUE(r,g,b,a) D3DCOLOR_RGBA((DWORD)((r)*255.f),(DWORD)((g)*255.f),(DWORD)((b)*255.f),(DWORD)((a)*255.f))
#define D3DCOLOR_RGBA(r,g,b,a)       D3DCOLOR_ARGB(a,r,g,b)
#define D3DCOLOR_XRGB(r,g,b)         D3DCOLOR_ARGB(0xff,r,g,b)
#define D3DCOLOR_XYUV(y,u,v)         D3DCOLOR_ARGB(0xFF,y,u,v)
#define D3DCOLOR_AYUV(a,y,u,v)       D3DCOLOR_ARGB(a,y,u,v)
Overmars going halfway in between also makes it difficult to write a single macro to do this.
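To see the point about the float macro collapsing to bytes, here's a tiny standalone check using the same macro definitions from d3d9types.h above (with D3DCOLOR and DWORD typedef'd locally so it compiles outside the DirectX headers):

```cpp
#include <cstdint>

// Stand-ins for the DirectX typedefs so this compiles anywhere.
typedef uint32_t DWORD;
typedef DWORD D3DCOLOR;

// Copied from d3d9types.h: the 0-1 COLORVALUE macro just scales to
// bytes and feeds the packed-byte ARGB macro.
#define D3DCOLOR_ARGB(a,r,g,b)       ((D3DCOLOR)((((a)&0xff)<<24)|(((r)&0xff)<<16)|(((g)&0xff)<<8)|((b)&0xff)))
#define D3DCOLOR_RGBA(r,g,b,a)       D3DCOLOR_ARGB(a,r,g,b)
#define D3DCOLOR_COLORVALUE(r,g,b,a) D3DCOLOR_RGBA((DWORD)((r)*255.f),(DWORD)((g)*255.f),(DWORD)((b)*255.f),(DWORD)((a)*255.f))
```

D3DCOLOR_COLORVALUE(1.f, 0.f, 0.f, 1.f) and D3DCOLOR_ARGB(255, 255, 0, 0) produce the same DWORD, so the 0-1 interface is purely a CPU-side convenience over the packed byte format.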

Direct3D's float spec is here.

it always has, and always will, likewise for the alpha arguments in draw_sprite_ext(), draw_sprite_general(), draw_sprite_stretched_ext() and many other functions they all use 0-1 alpha arguments. So there's really no reason why a window alpha function should be dealt with any different IMHO.
You are missing the point, I can macro them to take whatever parameters you want. You could basically change 1 line and have all functions do 0-255, or switch them all to 0-1. Then people can choose which versions they want.

Developing ENIGMA / Re: Window Alpha and Message Box
« on: February 04, 2014, 11:29:39 AM »
For example, texture coordinates don't need to be 32bit floats either, 16bit should be fine. So both x and y could be packed into one value and halve the size.
Texture paging is going to give us massive textures, potentially 8192x8192; I am afraid that would drastically reduce the accuracy. I hate floats; it's like an English vs. metric system kind of thing.

Well, make it take alpha as 0-1 and I will merge it. As the only consensus was that I and Josh think it should be 0-1 and that you think it should be 255. Later it can be changed. But now for convention it should be 0-1. I personally didn't ever argue that internal representation needs to be changed somewhere. I just said that almost every function we have takes alpha as 0-1. So it only makes sense to stick with that.
I am not going to address just the one function; I am going to make a separate commit where I address all the color functions with new macros and types. It is going to be extensive work and I'd appreciate it being kept separate from my current commit.

It will probably be forgotten later.
This is where the general headers come into play: if a system does not implement all the window functions in the general header, it is incomplete. The same concept applies to graphics and the other subsystems. I have coded and committed them regardless of testing; I have done a lot of XLIB changes from Windows, and sometimes they're winners, sometimes they aren't, but I mostly do it negligently to get back at cheeseboy, since he's on that operating system and he never complies with my requests.

I did read. And the fact that is was in the same pull request is the reason why I didn't merge it. I cannot merge only half of the pull request as far as I know. So the reason bug fixes aren't pulled is because they also include functions we didn't want to pull in the current state.
I've completely removed the function, so you can go ahead and merge now.