Goombert
Reply #15 Posted on: February 02, 2014, 10:52:04 am
Location: Cappuccino, CA | Joined: Jan 2013 | Posts: 2993
> (1) ISO RGB does not concern alpha. Maybe you meant to quote something from ISO that includes the letter "A" or specifically mentions phrases such as "alpha" or "opacity."

Actually, the point was the lack of alpha. http://en.wikipedia.org/wiki/48-bit

Note: 48-bit color is also known as deep color and is implemented as 16 bits per channel. http://en.wikipedia.org/wiki/Deep_color#Deep_color_.2830.2F36.2F48-bit.29 One such program that can work with it is 3DS Max. http://docs.autodesk.com/3DSMAX/15/ENU/3ds-Max-Help/index.html?url=files/GUID-DEDEE7A3-8B3F-4A00-B156-E6771EE3FFEC.htm,topicNumber=d30e524001

This is because GM stores all colors in ints (and so do ENIGMA and Java); GM used to waste the remaining 8 bits. Moreover, Game Maker never used ARGB. OpenGL prefers RGBA, Direct3D prefers ARGB. I am not arguing with you, it is just important to note.

> Colors in OpenGL are stored in RGBA format. That is, each color has a Red, Green, Blue, and Alpha component. http://www.opengl.org/wiki/Image_Format

> If we keep adding their shit ass implementations of functions, then ENIGMA will become as chaotic as GM. We either have to start ditching that useless compatibility with GM:S, or stick with their bull only in their functions and use a correct (and consistent) convention in ENIGMA functions. I wouldn't have problems with ENIGMA taking 0-1 in all functions, but that clearly isn't possible, unless we start putting colors in structs (or classes) and then overloading the drawing functions.

Three chars followed by a double is more consistent than 4 bytes? I don't think so. YYG is at least getting closer to following standards, however poorly. I am not arguing in favor of their shit, I am simply arguing in favor of color being 4 bytes internally. Whatever wrappers or macros people want to add for color are perfectly fine.

In fact, we can just go ahead and drop this debate and macro all the functions that accept color. People can toggle between 4 floats, 3 bytes and a float, or 4 bytes, whatever the hell they choose, and we convert it internally to 4-byte RGBA, which is what our engine should always use, everywhere. I'll write it up later.

> They are not merged because we are clearly still talking about them. We should do this more often, rather than just randomly merging all the stuff you somehow deem worthy. That is why I personally only merge bug fixes instead of new functionality. For new functions I actually try to read the code.

Then you didn't bother to read the pull request; it contains a number of fixes to outstanding bug reports, including ENIGMA finally being able to syntax highlight its own functions, but that topic was buried by this one. Both were part of the same pull request.

> As to whether we embrace a byte for alpha or a float, I am unbiased. Robert prefers bytes for the sake of cards with terrible memory bandwidths, which ENIGMA should ideally cater to as well.

Macro expansion, Joshi, read above. On a side note, I just realized your name is Yoshi with a 'J'. But also, I actually want ENIGMA's defaults to be RGBA, as in make_color_* would expand to the following...

    unsigned make_color_rgba(byte red, byte green, byte blue, byte alpha);
    unsigned make_color_argb(byte alpha, byte red, byte green, byte blue);
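For illustration, here is a minimal sketch of what those two declarations could pack down to. The channel layout chosen here (0xRRGGBBAA for RGBA, 0xAARRGGBB for ARGB) is an assumption for the example, not ENIGMA's actual internal byte order:

```cpp
#include <cstdint>

using byte = std::uint8_t;

// Hedged sketch: one possible packing for the proposed make_color_* helpers.
// Each 8-bit channel is shifted into its slot of a 32-bit unsigned value.
inline unsigned make_color_rgba(byte red, byte green, byte blue, byte alpha) {
    return (unsigned(red) << 24) | (unsigned(green) << 16) |
           (unsigned(blue) << 8)  |  unsigned(alpha);
}

inline unsigned make_color_argb(byte alpha, byte red, byte green, byte blue) {
    return (unsigned(alpha) << 24) | (unsigned(red) << 16) |
           (unsigned(green) << 8)  |  unsigned(blue);
}
```

Because everything lands in one 32-bit value either way, converting between the two orderings is a byte rotation, which is why a single macro toggle over all color functions is plausible.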
« Last Edit: February 02, 2014, 11:01:53 am by Robert B Colton »

I think it was Leonardo da Vinci who once said something along the lines of "If you build the robots, they will make games." or something to that effect.

Josh @ Dreamland
Reply #16 Posted on: February 02, 2014, 11:21:44 am
Prince of all Goldfish
Location: Pittsburgh, PA, USA | Joined: Feb 2008 | Posts: 2950
Three chars and a float is more consistent with the rest of the ENIGMA API. Even if four chars is more homogeneous. Ages ago, I added a draw_set_color_rgba(), whose purpose was to accept all the color components at once. Since the pipeline works in floats, I sent the data over as floats in one call. Alpha was the only parameter I didn't have to modify, because like every other function in GML (at the time, apparently), it accepted alpha as a float.
ENIGMA cannot use RGBA as a default; it would only confuse people. Existing functions treat color and opacity as distinct concepts. Keeping those functions while adding new alpha-sensitive ones would be just as confusing, and removing them is not an option.
« Last Edit: February 02, 2014, 11:27:12 am by Josh @ Dreamland »

"That is the single most cryptic piece of code I have ever seen." -Master PobbleWobble
"I disapprove of what you say, but I will defend to the death your right to say it." -Evelyn Beatrice Hall, Friends of Voltaire

time-killer-games
Reply #17 Posted on: February 03, 2014, 01:51:04 pm
"Guest"
I actually like your controller icon the most, Josh. Well done. =)

Robert: are you saying that you added window alpha and that's it? What's all this other stuff about windows you're talking about? Are these things you have or haven't added yet? Are you actually planning on adding these functions, or are you just playing around with random ideas that might not ever happen?

Two things I'd like to suggest:

1) A way to make a specific color completely transparent. Paired with a borderless window, this could emulate the look of a non-rectangular window, or perhaps a rectangular one with rounded edges. And maybe a way to make the window click-and-draggable somewhere in the client area? This way we could make custom title bars. And a function to minimize the window, so a custom minimize button could be done.

2) A way to draw specific objects, particles, rooms, and other elements to one or more window handles, assuming multiple windows can be created by the game. This would be perfect for multiplayer, non-networking, single-keyboard 3D games, or a 3D car rear-view camera, etc.
« Last Edit: February 03, 2014, 01:52:58 pm by time-killer-games »

TheExDeus
Reply #18 Posted on: February 03, 2014, 03:45:44 pm
Joined: Apr 2008 | Posts: 1860
> 1) A way to make a specific color completely transparent. Paired with a borderless window, this could emulate the look of a non-rectangular window, or perhaps a rectangular one with rounded edges.

I think Robert can add this. I think the same Windows API call he added for transparency also does this.

> And maybe a way to make the window click-and-draggable somewhere in the client area?

This should already be possible with the display_mouse_get_* functions and window_set_position.

> And a function to minimize the window, so a custom minimize button could be done.

This is still missing. Btw, I noticed that these new functions are Windows-only. Shouldn't we be making them cross-platform for at least Linux and Mac as well?
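The click-and-drag idea reduces to a little bookkeeping: record the offset between the mouse and the window origin when the button is pressed, then reposition the window each step while it is held. A sketch of just that arithmetic (Vec2, drag_offset, and drag_window are made-up names for illustration; in ENIGMA the inputs would come from display_mouse_get_x/y and the result would feed window_set_position):

```cpp
// Hypothetical helper types/functions; not real ENIGMA API.
struct Vec2 { int x, y; };

// Captured once, when the mouse button is first pressed over the title area.
Vec2 drag_offset(Vec2 mouse_at_press, Vec2 window_at_press) {
    return { mouse_at_press.x - window_at_press.x,
             mouse_at_press.y - window_at_press.y };
}

// Each step while the button is held: the new window origin.
Vec2 drag_window(Vec2 mouse_now, Vec2 offset) {
    return { mouse_now.x - offset.x, mouse_now.y - offset.y };
}
```

The subtraction keeps the grab point fixed under the cursor, so the window doesn't jump when dragging starts.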
Goombert
Reply #19 Posted on: February 04, 2014, 08:19:12 am
Location: Cappuccino, CA | Joined: Jan 2013 | Posts: 2993
> Robert: are you saying that you added window alpha and that's it?

It was two functions I added to my commit containing other bug fixes; I implemented them because they are useful, and I like to add things that Studio thinks aren't cross-platform. But the topic turned into me suggesting that all windows be managed and all window functions accept a window id, which is basically what you want to do: create and manage multiple windows. You could call the old version of the functions, which modify the default window, or pass a specific window id returned by window_create or window_get_default. It would require a hefty amount of work, but I could get it done in about a day, along with macro'ing the color parameters. Just tell Josh or Harri to merge the current pull request I have so Linux can at least build again.

> Btw, I noticed that these new functions are Windows-only. Shouldn't we be making them cross-platform for at least Linux and Mac as well?

Right, you think Cocoa and XLIB don't support transparent windows? I wouldn't add the function if they didn't. Which gives me another idea: I don't know if it's possible, but I wonder if Josh could make JDI parse out code that is checked against the os_type constant and make it work like a preprocessor directive.

> I think Robert can add this. I think the same Windows API call he added for transparency also does this.

I don't understand, window_set_color() you mean? It is possible to make a combined function that accepts RGBA, but I don't think that is a good idea, since transparency is a separate flag.
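One shape the "all windows are managed" idea could take, sketched with an id-keyed registry. window_create and window_get_default are the names from the post; WindowState, the map, and the overload pair are invented here for illustration, not ENIGMA's real implementation:

```cpp
#include <map>

// Hypothetical registry of managed windows, keyed by id.
struct WindowState { int x = 0, y = 0; bool visible = true; };

static std::map<int, WindowState> windows;
static int next_id = 0;

int window_create()      { windows[next_id] = WindowState(); return next_id++; }
int window_get_default() { return 0; }  // assumes the first window is the default

// The id-taking overload does the real work...
void window_set_position(int id, int x, int y) {
    windows[id].x = x;
    windows[id].y = y;
}

// ...and the legacy GM-style signature forwards to the default window,
// so old code keeps working unchanged.
void window_set_position(int x, int y) {
    window_set_position(window_get_default(), x, y);
}
```

The overload pattern is the point: every existing window function keeps its signature, and multi-window support is an extra leading parameter.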
« Last Edit: February 04, 2014, 08:24:37 am by Robert B Colton »

TheExDeus
Reply #20 Posted on: February 04, 2014, 11:09:19 am
Joined: Apr 2008 | Posts: 1860
> I am not arguing in favor of their shit, I am simply arguing in favor of color being 4 bytes internally.

That is okay. I also don't have a problem with the internal representation using 4 bytes. In the new GL3 shaders I still use the packed format and just do this:

    glVertexAttribPointer(colorLoc, 4, GL_UNSIGNED_BYTE, GL_TRUE, STRIDE, OFFSET(offset)); // Normalization needs to be true, because we pack them as unsigned bytes

I set normalization to true, so that 0-255 is mapped to 0-1. If that were done in the shader it would slow the whole thing down (and shaders ONLY use the 0-1 representation), but apparently normalization while calling glVertexAttribPointer doesn't carry any performance penalty ( http://gamedev.stackexchange.com/questions/9792/opengl-vertex-attributes-normalisation ). We could apparently pack even more things. For example, texture coordinates don't need to be 32-bit floats either; 16-bit should be fine. So both x and y could be packed into one value and halve the size.

> Just tell Josh or Harri to merge the current pull request I have so Linux can at least build again.

Well, make it take alpha as 0-1 and I will merge it. The only consensus was that Josh and I think it should be 0-1 and you think it should be 0-255. Later it can be changed, but for now, for convention, it should be 0-1. I personally never argued that the internal representation needs to be changed anywhere. I just said that almost every function we have takes alpha as 0-1, so it only makes sense to stick with that.

> Right, you think Cocoa and XLIB don't support transparent windows?

No, I meant that you didn't do that in the commit. It will probably be forgotten later. But anyway, I often implement Windows-only stuff as well, as I don't have a Linux machine to test on.

> I don't understand, window_set_color() you mean? It is possible to make a combined function that accepts RGBA, but I don't think that is a good idea since it is a separate flag for transparency.

I meant that SetLayeredWindowAttributes with the LWA_COLORKEY parameter allows setting a transparency color. I think that is what he asked for.

> Then you didn't bother to read the pull request; it contains a number of fixes to outstanding bug reports, including ENIGMA finally being able to syntax highlight its own functions, but that topic was buried by this one. Both were part of the same pull request.

I did read it. And the fact that it was in the same pull request is the reason why I didn't merge it. I cannot merge only half of a pull request, as far as I know. So the reason bug fixes aren't pulled is that they also include functions we didn't want to pull in their current state.
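What GL_TRUE normalization does to a GL_UNSIGNED_BYTE attribute can be modeled on the CPU in one line: each component is divided by 255 when it reaches the shader. A sketch of that mapping (illustrative only; on real hardware the conversion happens during attribute fetch, not in C++ code):

```cpp
#include <cstdint>

// CPU-side model of glVertexAttribPointer's GL_TRUE normalization for
// unsigned bytes: byte value c maps to c / 255.0f in the shader.
float normalize_ubyte(std::uint8_t c) {
    return c / 255.0f;
}
```

This is why the packed byte format costs nothing shader-side: 0 maps to 0.0 and 255 maps to exactly 1.0, matching the 0-1 convention the shaders already use.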
Goombert
Reply #21 Posted on: February 04, 2014, 11:29:39 am
Location: Cappuccino, CA | Joined: Jan 2013 | Posts: 2993
> For example, texture coordinates don't need to be 32bit floats either, 16bit should be fine. So both x and y could be packed into one value and halve the size.

Texture paging is going to give us massive textures, potentially 8,192x8,192; I am afraid that would drastically reduce the accuracy. I hate floats; it's like an English system vs. metric system kind of thing.

> Well, make it take alpha as 0-1 and I will merge it. The only consensus was that Josh and I think it should be 0-1 and you think it should be 0-255. Later it can be changed, but for now, for convention, it should be 0-1.

I am not going to address just the one function; I am going to make a separate commit where I address all color functions with new macros and types. It is going to be extensive work, and I'd appreciate it if it could be kept separate from my current commit.

> It will probably be forgotten later.

This is where general headers come into play: if a system does not implement all window functions in the general header, it is incomplete. The same concept applies to graphics and other subsystems. I have coded and committed them regardless of testing; I have done a lot of XLIB changes from Windows. Sometimes they're winners, sometimes they aren't, but I mostly do it negligently to get back at cheeseboy, since he's on that operating system and he never complies with my requests.

> I did read it. And the fact that it was in the same pull request is the reason why I didn't merge it. I cannot merge only half of a pull request, as far as I know. So the reason bug fixes aren't pulled is that they also include functions we didn't want to pull in their current state.

I've completely removed the function, so you can go ahead and merge now. https://github.com/enigma-dev/enigma-dev/pull/635
TheExDeus
Reply #22 Posted on: February 04, 2014, 02:21:00 pm
Joined: Apr 2008 | Posts: 1860
> I am afraid that would drastically reduce the accuracy.

I cannot find a definitive answer, so I am not sure whether that causes problems at 8k x 8k. From what I read, it at least shouldn't have problems at 4k x 4k. The thing is that it's still going to be a float in the shader; it's just that it will be smaller while being sent to the GPU. 16 bits hold more than 65k values, which should mean it can represent everything from 0 to 8192 fine. But that is something that could be tested later; no reason to do it now. I was just saying that it is possible.

> I hate floats, it's like an English system vs Metric system kind of thing.

Yes, with floats being the metric system. Seeing how it's the primary data type in graphics, I don't see why fight it.

> I am not going to address just the one function; I am going to make a separate commit where I address all color functions with new macros and types.

Alright. You could have removed _get as well, as it is returning a char. But I will merge anyway. Someone else can remove it if need be.
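The back-of-the-envelope check behind "65k values should cover 0 to 8192" is simple: a normalized unsigned 16-bit coordinate has 65536 distinct steps over [0,1], so across an 8192-texel axis each texel gets 65536/8192 = 8 distinct values, i.e. 1/8-texel resolution. A sketch of that arithmetic (assumes normalized unsigned shorts; whether 1/8 texel suffices in practice is exactly the untested question from the post):

```cpp
// Distinct normalized-ushort texcoord steps available per texel,
// for a square texture axis of the given size in texels.
double steps_per_texel(int texture_size) {
    return 65536.0 / texture_size;  // 65536 values span [0, 1]
}
```

By this measure an 8k texture still has sub-texel addressing headroom, and a 4k texture twice as much.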
time-killer-games
Reply #23 Posted on: February 04, 2014, 05:25:38 pm
"Guest"
I think it should be 0-1 and stay that way; 0-255 is a weird approach if you ask me. draw_set_alpha() uses 0-1, always has, and always will. Likewise for the alpha arguments in draw_sprite_ext(), draw_sprite_general(), draw_sprite_stretched_ext(), and many other functions: they all use 0-1 alpha arguments. So there's really no reason a window alpha function should be treated any differently, IMHO.
« Last Edit: February 04, 2014, 05:29:53 pm by time-killer-games »

Goombert
Reply #24 Posted on: February 04, 2014, 09:29:52 pm
Location: Cappuccino, CA | Joined: Jan 2013 | Posts: 2993
> That should mean it can represent everything from 0 to 8192 fine.

That sounds OK when you first think about it, but you have to leave a margin of error as a result of division and other operators. Wikipedia has some info on it: http://en.wikipedia.org/wiki/Floating_point#Minimizing_the_effect_of_accuracy_problems

> Yes, with floats being the metric system. Seeing how it's the primary data type in graphics, I don't see why fight it.

Actually, no, it wouldn't be floats or pixels, but dots per centimetre. http://en.wikipedia.org/wiki/Dots_per_centimetre

Floating-point precision is only necessary in vector graphics, not raster graphics. And given that both Direct3D and OpenGL are 3D rendering APIs, that explains why, and also why graphics card manufacturers decided to optimize for floats and vector-based graphics.

> 0-255 is a weird approach if you ask me.

Yes, but 0 to 255 is faster and more optimal, since that is what 100% of APIs use internally, not to mention that 0 to 255 for only alpha is inconsistent with all the other RGB functions already taking 0-255. The problem is that Mark Overmars decided to go halfway in between and do alpha as a float, which is inconsistent with true 48-bit color. I'd be fine with it if it were all 0 to 255, or all 0-1 with the option to switch to 0 to 255 to prevent casting. DirectX provides macros for 0-1 for each RGBA component, but those are also turned into 0 to 255 internally.

http://msdn.microsoft.com/en-us/library/windows/desktop/bb172518%28v=vs.85%29.aspx

Here are all their definitions in d3d9types.h:

    #define D3DCOLOR_ARGB(a,r,g,b) ((D3DCOLOR)((((a)&0xff)<<24)|(((r)&0xff)<<16)|(((g)&0xff)<<8)|((b)&0xff)))
    #define D3DCOLOR_COLORVALUE(r,g,b,a) D3DCOLOR_RGBA((DWORD)((r)*255.f),(DWORD)((g)*255.f),(DWORD)((b)*255.f),(DWORD)((a)*255.f))
    #define D3DCOLOR_RGBA(r,g,b,a) D3DCOLOR_ARGB(a,r,g,b)
    #define D3DCOLOR_XRGB(r,g,b) D3DCOLOR_ARGB(0xff,r,g,b)
    #define D3DCOLOR_XYUV(y,u,v) D3DCOLOR_ARGB(0xFF,y,u,v)
    #define D3DCOLOR_AYUV(a,y,u,v) D3DCOLOR_ARGB(a,y,u,v)

Overmars going halfway in between also makes it difficult to write a single macro to do this. Direct3D's float spec is here: http://msdn.microsoft.com/en-us/library/windows/desktop/cc308050%28v=vs.85%29.aspx

> it always has, and always will, likewise for the alpha arguments in draw_sprite_ext(), draw_sprite_general(), draw_sprite_stretched_ext() and many other functions they all use 0-1 alpha arguments. So there's really no reason why a window alpha function should be dealt with any different IMHO.

You are missing the point; I can macro them to take whatever parameters you want. You could basically change one line and have all functions take 0-255, or switch them all to 0-1. Then people can choose which versions they want.
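Those d3d9types.h macros can be exercised on their own. The macro definitions below are the ones quoted from the header; the D3DCOLOR/DWORD typedefs are local stand-ins added so the snippet compiles outside the DirectX headers:

```cpp
#include <cstdint>

// Local stand-ins for the DirectX typedefs, so the real macros compile here.
typedef std::uint32_t D3DCOLOR;
typedef std::uint32_t DWORD;

#define D3DCOLOR_ARGB(a,r,g,b) \
    ((D3DCOLOR)((((a)&0xff)<<24)|(((r)&0xff)<<16)|(((g)&0xff)<<8)|((b)&0xff)))
#define D3DCOLOR_RGBA(r,g,b,a) D3DCOLOR_ARGB(a,r,g,b)
// The "noob-friendly" float version: each 0-1 component is scaled to 0-255
// on the CPU before being packed into the same DWORD.
#define D3DCOLOR_COLORVALUE(r,g,b,a) \
    D3DCOLOR_RGBA((DWORD)((r)*255.f),(DWORD)((g)*255.f),(DWORD)((b)*255.f),(DWORD)((a)*255.f))
```

This is the point being argued: even the float-taking macro immediately collapses to four packed bytes, so the 0-1 form is a convenience wrapper, not a different internal representation.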
« Last Edit: February 04, 2014, 09:34:30 pm by Robert B Colton »

TheExDeus
Reply #25 Posted on: February 05, 2014, 08:46:50 am
Joined: Apr 2008 | Posts: 1860
> That sounds ok when you first think about it, but you have to leave a margin of error as a result of division and other operators.

I already said this: in the shader all of it is floats. It doesn't matter what format you pass it as; values are always converted to floats in the shader. Look at the normalization argument of glVertexAttribPointer:

> For glVertexAttribPointer, if normalized is set to GL_TRUE, it indicates that values stored in an integer format are to be mapped to the range [-1,1] (for signed values) or [0,1] (for unsigned values) when they are accessed and converted to floating point. Otherwise, values will be converted to floats directly without normalization.

This means that integer formats will either be normalized (like I do now for colors) or, if GL_FALSE, converted to floats unnormalized (so 255 will be 255.0f in the shader). I think there are some macros in the shader that switch between types or whatever, but normally they are floats by default. And that isn't slow either, because normalization and conversion to float are done in hardware, so there is apparently zero performance penalty. So arithmetic in the shaders won't lose precision, because the 16-bit value will be normalized and converted to a 32-bit float.

> Actually, no, it wouldn't be floats or pixels, but dots per centimetre.

What? Those are not even the same things. We were talking about data types. I said that floats are the best, make the most sense, and are the most used in GPUs, so in essence float would be the "metric system", as opposed to an unsigned char.

> Yes, but 0 to 255 is faster and more optimal, since that is what 100% of APIs use internally

Windowing API? Yes. Graphics API? No. And I already said I don't care what they are represented as internally. Yes, as we are dealing with 8-bit color and alpha channels, it makes sense to pack them like that. But it doesn't mean it makes sense to ask the user for alpha in 0-255 instead of 0-1.

> all the other RGB functions already taking 0-255

Yes, RGB functions. But that wasn't an RGB function. It made the window transparent; if anything, it could have been considered a drawing function. Drawing functions take 0-1, and they always will, unless you want to break compatibility with all versions of GM.

> The problem is that Mark Overmars decided to go halfway in between and do alpha as a float, which is inconsistent with true 48-bit color.

He did it that way because alpha wasn't even a thing for GM at first. Later, when he added it, he did it to be consistent with the drawing API he used. It apparently set alpha as floats, and that is what he did.

> You are missing the point; I can macro them to take whatever parameters you want. You could basically change one line and have all functions take 0-255, or switch them all to 0-1. Then people can choose which versions they want.

You are the one missing the point. You can make all the macros you want; the default is still going to be 0-1 alpha and 0-255 color (at least in color functions, because in reality color is an int holding all 3 components). And that is what the discussion was about. So you can make macros or whatever, but the default should stay the way we have it now: when you add a function taking alpha, by default it should take 0-1. That's it.
Goombert
Reply #26 Posted on: February 05, 2014, 11:06:21 am
Location: Cappuccino, CA | Joined: Jan 2013 | Posts: 2993
> I already said this: in the shader all of it is floats. It doesn't matter what format you pass it as; values are always converted to floats in the shader. Look at the normalization argument of glVertexAttribPointer.

They are expanded in the vertex stage after upload. They also get byte-aligned in this stage. https://developer.apple.com/library/ios/documentation/3ddrawing/conceptual/opengles_programmingguide/Art/interleaved_vertex_data_1_2x.png

> So arithmetic in the shaders won't lose precision, because the 16-bit value will be normalized and converted to a 32-bit float.

Yeah, but you said combining tx and ty into a single float before uploading to the bus; that would be done on the CPU, losing precision?

> What? Those are not even the same things. We were talking about data types.

I thought we were talking about systems of pixel measurement. Because when it comes to 3D raster graphics, you definitely would not want to use 4-float colors. Specifically, read the citation about John Carmack; you should know who he is. http://en.wikipedia.org/wiki/Voxel#Rasterization

> Windowing API? Yes. Graphics API? No.

CPU? Yes. GPU? No. A graphics API obviously performs uploading on the former, not the latter; otherwise DirectX and OpenGL wouldn't both internally upload color as only 4 bytes through their fixed-function pipelines. Also, ironically, OpenGL ES 3.0 adds wider support for smaller data types. https://developer.apple.com/library/ios/documentation/3ddrawing/conceptual/opengles_programmingguide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html

> OpenGL ES 3.0 contexts support a wider range of small data types, such as GL_HALF_FLOAT and GL_INT_2_10_10_10_REV. These often provide sufficient precision for attributes such as normals, with a smaller footprint than GL_FLOAT.

> Yes, RGB functions. But that wasn't an RGB function. It made the window transparent; if anything, it could have been considered a drawing function. Drawing functions take 0-1, and they always will, unless you want to break compatibility with all versions of GM.

Harri, there is a difference between what GM does, what we do, what we should do, what GM should have done, and what we should have done. RGBA parameters should all have been 0 to 1, or all 0 to 255, at the same time, so that people could choose and casting could be eliminated, not a mixture of both. Now, we obviously shouldn't break GM compatibility (well, heh, actually we really should, but I am not going to; it's too big of a difference), so I am going to hack around it with macros.

> He did it that way because alpha wasn't even a thing for GM at first. Later, when he added it, he did it to be consistent with the drawing API he used. It apparently set alpha as floats, and that is what he did.

Evidence for this? GM6 or 7, I forget which, became Direct3D hardware accelerated, and I know 0 to 1 alpha was in use before that. But if it was Direct3D, he would definitely have used 0 to 255, unless he referred to the Microsoft documents, which say that 0 to 1 is only useful for noobs. Whatever was in use before D3D for graphics, I wouldn't believe for a second that it didn't offer 0 to 255 for alpha. One possibility is that he was using the Graphics Device Interface or just native Win32. http://msdn.microsoft.com/en-us/library/aa932955.aspx

> In general, the number of possible colors is equal to 2 raised to the power of the pixel depth. Windows Embedded CE supports pixel depths of 1, 2, 4, 8, 16, 24, and 32 bpp.

http://www.functionx.com/win32/Lesson13.htm

> Microsoft Windows considers that a color is a 32-bit numeric value. Therefore, a color is actually a combination of 32 bits.

> and 0-255 color (at least in color functions, because in reality color is an int holding all 3 components)

If red, green, and blue were all 0 to 1 as well, then it would at least be fucking consistent, and I'd not be complaining about GM.

To correct myself from earlier: it would actually be 56/64-bit color, a 24/32-bit RGB integer + 32-bit floating-point alpha = 56/64-bit color. No application or anybody has ever even heard of a 56-bit color before, and 64-bit "high color" is defined as 16 bits per channel, with only 16-bit alpha.
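The complaint about the "56-bit" split can be made concrete: GM hands the engine a 24-bit color int plus a separate 0-1 double for alpha, and the engine has to fuse them into 4 bytes. A hedged sketch of that fuse step (the input channel order, value = red + green*256 + blue*65536, follows GM's make_color convention; the packed output layout with alpha in the top byte is an assumption for illustration):

```cpp
#include <cstdint>

// Sketch: fusing GM's split representation (24-bit color int + 0-1 double
// alpha) into one packed 32-bit value. The float-to-byte cast on alpha is
// exactly the conversion the mixed convention forces on every call.
std::uint32_t fuse_color_alpha(std::uint32_t gm_color, double alpha) {
    std::uint32_t a = static_cast<std::uint32_t>(alpha * 255.0 + 0.5);  // round 0-1 to 0-255
    return (a << 24) | (gm_color & 0x00FFFFFF);
}
```

With an all-byte convention, the multiply-and-round line disappears entirely; with an all-float convention, the whole function would instead be four casts. The mixed convention needs both.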
« Last Edit: February 05, 2014, 11:23:25 am by Robert B Colton »

TheExDeus
Reply #27 Posted on: February 05, 2014, 04:29:36 pm
Joined: Apr 2008 | Posts: 1860
> They are expanded in the vertex stage after upload.

And? Technically it would probably be somewhere between "application" and "vertex" in that simplified graphic. The same hardware pipeline that deals with data upload would do it.

> Yeah, but you said combining tx and ty into a single float before uploading to the bus; that would be done on the CPU, losing precision?

Well, I already said that it should hold up to 8k fine. Of course, all of that needs to be tested. You were then referring to arithmetic, which of course is done in shaders, and so I said that in the shader part of things it's no different. If you meant arithmetic as in changing 0-1 to a 16-bit short, then I also don't think precision would be lost; it just wouldn't be a float. That is actually mentioned in that Apple link you gave:

> Specify texture coordinates using 2 or 4 unsigned bytes (GL_UNSIGNED_BYTE) or unsigned short (GL_UNSIGNED_SHORT). Do not pack multiple sets of texture coordinates into a single attribute.

> I thought we were talking about systems of pixel measurement.

We were talking about whether floats are more used in graphics for data (and specifically in shaders) than other data types.

> internally upload color as only 4 bytes through their fixed-function pipelines

In the FFP there are many color-setting functions. In the deprecated OpenGL FFP, colors are floats ( https://www.opengl.org/sdk/docs/man2/xhtml/glColor.xml ):

> Current color values are stored in floating-point format, with unspecified mantissa and exponent sizes. Unsigned integer color components, when specified, are linearly mapped to floating-point values such that the largest representable value maps to 1.0 (full intensity), and 0 maps to 0.0 (zero intensity). Signed integer color components, when specified, are linearly mapped to floating-point values such that the most positive representable value maps to 1.0, and the most negative representable value maps to -1.0. (Note that this mapping does not convert 0 precisely to 0.0.) Floating-point values are mapped directly.

The conversion is done on the GPU, so it doesn't care one bit what format you actually use for uploading. I just keep saying that floats are what GPUs use inside, for everything, including colors. That is the only point I am trying to make.

> One possibility is that he was using the Graphics Device Interface or just native Win32.

I started using GM around GM5.3 or GM4.3 (can't even remember now) and it didn't have hardware acceleration. 5.3 (draw_sprite_transparent) has software transparency, and I don't know what graphics framework he used (as it was written in Delphi), but I guess 0-1 was a necessity for him. It's possible that he did transparency himself (as it is software based, after all).

> If red, green, and blue were all 0 to 1 as well, then it would at least be fucking consistent, and I'd not be complaining about GM.

Yes, we would all like it to be different. But it doesn't matter now, because for compatibility we cannot change it. And "consistent" also means that if 90% of the functions take 0-1 as alpha, then the rest should too. That is what this whole discussion is about.
« Last Edit: February 06, 2014, 12:35:43 pm by TheExDeus »

time-killer-games
Reply #28 Posted on: February 05, 2014, 06:45:24 pm
"Guest"
> it always has, and always will, likewise for the alpha arguments in draw_sprite_ext(), draw_sprite_general(), draw_sprite_stretched_ext() and many other functions they all use 0-1 alpha arguments. So there's really no reason why a window alpha function should be dealt with any different IMHO.

> You are missing the point; I can macro them to take whatever parameters you want. You could basically change one line and have all functions take 0-255, or switch them all to 0-1. Then people can choose which versions they want.

Oh, I didn't realize. That's actually a pretty good idea. But it must be done in a way that still gives the option to be GM compatible, which means we need a way to set whether to use 0-255 or 0-1 for each function individually. Because, like you said, some of the functions in GM are 0-1 and others are 0-255, so this could be a problem if this engine has any further plans on being GM compatible.
Goombert
Reply #29 Posted on: February 07, 2014, 01:26:29 am
Location: Cappuccino, CA | Joined: Jan 2013 | Posts: 2993
> Well, I already said that it should hold up to 8k fine. Of course, all of that needs to be tested. You were then referring to arithmetic, which of course is done in shaders, and so I said that in the shader part of things it's no different. If you meant arithmetic as in changing 0-1 to a 16-bit short, then I also don't think precision would be lost; it just wouldn't be a float. That is actually mentioned in that Apple link you gave.

I am not understanding: where and when do you plan to combine tx and ty? Because if you do it before sending the vertex data to the GPU, it will lose precision during that, but if you wait until it's in the shader to combine them, then no, it won't lose precision at that stage. If you do plan to do it CPU-side, then combine tx and ty using half floats in the addTexture(tx,ty) call before pushing it into the vector, and use a union too, like color does, because only GLES 3.0 offers half floats.

> In the FFP there are many color-setting functions. In the deprecated OpenGL FFP, colors are floats ( https://www.opengl.org/sdk/docs/man2/xhtml/glColor.xml ).

That's OpenGL's dumbassery, and it explains why I didn't get a performance boost in GL1 when I replaced it with the byte functions, but did get a boost in OpenGL3 and Direct3D9. Direct3D9's deprecated FFP and FVF (flexible vertex format), as well as vertex declarations (which are not deprecated), use the macros to convert to only a single DWORD, CPU-side.

> D3DFVF_DIFFUSE: Vertex format includes a diffuse color component. DWORD in ARGB order. See D3DCOLOR_ARGB.

http://msdn.microsoft.com/en-us/library/windows/desktop/bb172559%28v=vs.85%29.aspx

> The conversion is done on the GPU, so it doesn't care one bit what format you actually use for uploading. I just keep saying that floats are what GPUs use inside, for everything, including colors. That is the only point I am trying to make.

That's fine, we both agree on that, but you can't ignore the fact that removing 4-float color from the CPU side gave me a noticeable frame rate difference, 220->250 FPS in the Studio cubes demo, and it gave the same performance results in Direct3D. It also went from 10 FPS to 30 FPS when I made the change to GL1 vertex arrays.

> And "consistent" also means that if 90% of the functions take 0-1 as alpha, then the rest should too.

I actually agree with you on this, but disagree at the same time, because I like how YYG is using only 0-255 for new functions. But you won't stop me from macro'ing every color function.

> Oh, I didn't realize. That's actually a pretty good idea.

Let me show you through code why it is such a good idea; then you'll clearly see. This is window_set_alpha(0-255):

    void window_set_alpha(unsigned char alpha)
    {
      // Set WS_EX_LAYERED on this window
      SetWindowLong(enigma::hWndParent, GWL_EXSTYLE,
                    GetWindowLong(enigma::hWndParent, GWL_EXSTYLE) | WS_EX_LAYERED);
      // Make this window transparent
      SetLayeredWindowAttributes(enigma::hWndParent, 0, alpha, LWA_ALPHA);
    }

This is window_set_alpha(0-1):

    void window_set_alpha(float alpha)
    {
      // Set WS_EX_LAYERED on this window
      SetWindowLong(enigma::hWndParent, GWL_EXSTYLE,
                    GetWindowLong(enigma::hWndParent, GWL_EXSTYLE) | WS_EX_LAYERED);
      // Make this window transparent
      SetLayeredWindowAttributes(enigma::hWndParent, 0, (unsigned char)(alpha*255), LWA_ALPHA);
    }

Actually, you could implement both functions at the same time, since the data types differ, which would make window_set_alpha(0-1) look even more ridiculous:

    void window_set_alpha(float alpha)
    {
      window_set_alpha((unsigned char)(alpha*255));
    }

Notice the 0-1 version requires an extra multiplication and then a cast? That makes it less optimal, especially when you consider that an unsigned char holds only 0 to 255, while a float holds ±3.4E38 (7 digits).

> But it must be done in a way that still gives the option to be GM compatible.

Don't worry; if I do implement options, the GM behavior will be the default. I just want this new function to do it the proper way, like how YYG is using only 0 to 255 for new functions they add.
« Last Edit: February 07, 2014, 12:19:45 pm by Robert B Colton »