1
General ENIGMA / Pure ENIGMA
« on: March 27, 2016, 07:44:01 am »
Pure ENIGMA
This is a project I am slowly working on. It is a version of the ENIGMA engine that is usable without LGM or the parser, essentially making ENIGMA a pure C++11 engine. It will still use instances and events, which will be added through code. At the beginning I will try using LGM as well, but eventually I will either start creating my own IDE in ENIGMA, or ditch it and just use a C++ code editor.
My reasons for this change:
- LGM is not really maintained, and while Robert is back to fix some bugs, I don't know if there are any plans for the future.
- The ENIGMA parser is not maintained and hasn't been fixed or changed in years. Josh had a new one coming, but the idea died. EDL is only as powerful as the parser, and sadly the parser is not powerful enough.
- The ENIGMA engine itself is quite powerful and easy to use, but right now it is only usable through LGM and the parser - both problems I already described.
In the Pure ENIGMA branch the parser will only be used in a limited way. The new EDL will also be stricter as a result (so the parser has to do a lot less). The new language will be valid C++11 in almost every respect: mandatory semicolons, all variables will have to be declared, and so on. It will also support classes and structs, which are extremely important for making the existing code faster and easier to read. For example, the ds_ functions are very verbose and a pain in the ass to use, and in most cases could be replaced by std::vector if desired.
From the parser side, only these things are currently planned:
- Variables won't be local to an instance by default (like in GM); instead they will always require "local type v;" to be defined locally. This disallows the use of uninitialized variables, which is the current bane of ENIGMA and can hardly be debugged. If a variable is prefixed with "local", it will be added to the object class.
- Variable types will be mandatory. By default ENIGMA uses the type "var" (or "variant") whenever a variable is defined without a type, but here it will have to be specified. So "local var a = 0;" is the same as "a = 0;" right now, and plain "var a = 0;" keeps its current meaning of a script-local var. This is required for better error detection, as well as to allow a greater range of types, like classes. You will also be able to use C++ "auto".
- The IDs of instances (and other resources like sprites or backgrounds) will be separate classes. So (983019709).x won't work, but "auto inst = instance_create(10,10,obj_bullet); inst.x = 10;" will. They will still be part of a larger global structure, so it will be possible to iterate over them or access them through an ID (via instance_get_by_id() or something similar). This is again required for error detection. Currently every function takes an integer ID, so you cannot show an error at compile time if a user uses draw_sprite to draw a background resource. Often the error will not even appear at run time. This greatly slows down development and allows a large number of errors.
The whole ENIGMA engine will also be rewritten (iteratively) to use C++11 whenever possible. Right now it is used in some places (new functions), but not in previously written systems or the parser. So we have large amounts of code that does something C++11 already does by itself (the to_string functions, for example). This should also make the code faster. For example, we could make the whole math library constexpr. What that means is that the compiler will execute these functions and replace the call with its result whenever the arguments are known at compile time. So "cos(0)" in a step event is currently recalculated every step, while with constexpr it would be replaced with (1) instead.
Another example is in the parser. C++ doesn't allow switch(x) statements to have anything other than an integer as x; EDL allows strings as well. It does that by rewriting the switch() in the parser: it hashes the argument, compares hashes, then compares the string itself and then jumps. So this is EDL:
Code: (edl)
var x = "Hello";
switch (x){
    case "Hell": b = 0; break;
    case "Again": b = 1; break;
    case "Hello": b = 2; break;
}
And this is the generated C++:
Code: (c++)
var x = "Hello";
const variant $s0value =((x));
switch(enigma::switch_hash($s0value))
{
case 2245469:
if($s0value =="Hell")goto $s0c0;
goto $s0nohash;
case 63193920:
if($s0value =="Again")goto $s0c1;
goto $s0nohash;
case 69609650:
if($s0value =="Hello")goto $s0c2;
goto $s0nohash;
default:
$s0nohash:
break;
$s0c0:
b = 0;
break;
$s0c1:
b = 1;
break;
$s0c2:
b = 2;
break;
}
I guess it does the second if() check because the hashes could collide. In C++11 this could be done like so (if enigma::switch_hash were rewritten to be constexpr):
Code: (edl)
var x = "Hello";
switch (enigma::switch_hash(x)){
case enigma::switch_hash("Hell"): b = 0; break;
case enigma::switch_hash("Again"): b = 1; break;
case enigma::switch_hash("Hello"): b = 2; break;
}
For collision checking, additional if checks could be added. We could also use 64-bit ints for the hashes. The result is code that is just as fast, with greatly reduced parser complexity (even though the switch() code is only about 200 lines).
The cleaned-up ENIGMA engine will have sized types everywhere. This is actually quite essential for a cross-platform game engine, but we have slacked at this. We can use float and double as they have standard sizes, but we shouldn't use "int, unsigned int, short, long" and so on. And we use them a lot. Instead we should use "int32_t" or "uint32_t".
The ENIGMA engine also consists of A LOT of unmaintained systems. For Pure ENIGMA I will focus my efforts on a limited number of systems which I can maintain and check. If others want to maintain something more, they are welcome. So in Pure ENIGMA there will only be a GL3.3 graphics system (no D3D or GL1). D3D is only useful if we want to port ENIGMA to Xbox (which nobody plans to do), and GL1 is not useful for anything anymore - it is only required by PCs more than 15 years old. Anything newer can run GL3, and any newer small device (phones, RPi, Nvidia Jetson etc.) supports GLES, which is a lot closer to GL3. GLES is also something I plan to support, or would gladly let someone else make.
As you can see, this idea is in no way compatible with GM. ENIGMA hasn't really been ideologically compatible in years, and in reality it never was. So this is meant to make ENIGMA a good game engine in its own right. I use ENIGMA in my everyday life, and even commercial projects have been made in it. Through these I have seen the potential of ENIGMA. It is the easiest engine out there, while also being the fastest and smallest. Recently I made a quite complex model editor for 3D printing in ENIGMA. The executable, including images and even a scripting engine, was only 3.8 MB, while also running at >1000 FPS on my home PC (up to 2.5k FPS). And the GUI is only about 6k-7k LOC. No programmer I have worked with has ever seen anything like that. I think we have struck a raw diamond with ENIGMA, but sadly nobody here wants to polish it into a precious gem.
I will add new thoughts to this topic as I move along. Discussion and ideas are welcome.
2
General ENIGMA / ENIGMA progress 28.01.2016
« on: January 28, 2016, 09:56:05 am »
Introduction
This is a continuation of this topic (this was meant to be a post there, but I now understand that would be confusing). I wanted to make a new one several months ago, but didn't manage to do so. So this will show progress between July 26, 2015 and today. Fixed at least 36 bugs (probably more, depending on how you count), added 53 functions to main ENIGMA and more than 60 in the BGUI extension.
Custom vertex attributes
One thing we lacked in GL3 was custom per-vertex attribute data for shaders. This meant we couldn't do a lot of custom calculations and effects. GM has vertex_buffer and vertex_format for that. I decided to avoid code duplication, so in ENIGMA it is done through the model interface. An example of adding a group attribute to each vertex:
Code: (edl)
format = vertex_format_create();
vertex_format_add(format, vertex_type_float3, glsl_get_attribute_location(shr_group_modify_per_pixel, "in_Position"));
vertex_format_add(format, vertex_type_float3, glsl_get_attribute_location(shr_group_modify_per_pixel, "in_Normal"));
vertex_format_add(format, vertex_type_float, glsl_get_attribute_location(shr_group_modify_per_pixel, "in_Group"));
model = d3d_model_create();
d3d_model_format(model , format);
d3d_model_primitive_begin( model, pr_trianglelist );
for (int i=0; i<100; i++){
    //Fill with vertices like so
    d3d_model_add_float3( model, i*5, 0, 0);
    d3d_model_add_float3( model, 0, 0, 1);
    d3d_model_add_float( model, i ); //Group
}
d3d_model_primitive_end(model);
And the GLSL vertex shader:
Code: (glsl)
in vec3 in_Position; // (x,y,z)
in vec3 in_Normal; // (x,y,z)
in float in_Group;
out vec4 v_Color;
void main()
{
v_Color = vec4(in_Group,0.0,0.0,1.0); //Set red color based on group
}
Now when rendered, the shader will have access to the in_Group attribute and will be able to use it. This is useful in animation (both morph targets and skeletal animation), dynamic terrain, particle systems etc.
For example, drawing groups on a model and then using custom attributes to display them can be seen here:
If we also send a weight (an additional float) and another transformation matrix, then we can do this:
I will also try making examples to post here. The ones I have planned are morph target animation and shadow mapping.
More low level stencil buffer control
In the previous update I showed the stencil functions together with an example. Sadly it was a bad example and didn't show the more "powerful" things you can do with them. Here is a CSG example where we do subtraction of two meshes using stencil buffers, depth buffer control and a custom shader to merge depth maps from a surface to the screen buffer.
Code example:
Code: (edl)
//CSG test
d3d_transform_stack_push();
d3d_projection_stack_push();
surface_set_target(csg_buffer);
draw_clear_alpha(0,0);
d3d_transform_set_identity();
d3d_set_projection_ext(x,y,z,x+vector_x,y+vector_y,z+vector_z,0,0,1,60,view_wview[0]/view_hview[0],1,2000);
d3d_set_zwriteenable(true);
d3d_depth_clear_value(1.0);
d3d_clear_depth();
d3d_stencil_enable(true);
d3d_stencil_mask(~0);
d3d_stencil_clear_value(0);
// Draw furthest front face
d3d_set_color_mask(false,false,false,false);
d3d_stencil_enable(false);
//Draw the main object normally
d3d_set_depth_operator(rs_less);
d3d_set_culling(rs_cw);
d3d_draw_block(-1,-1,-1,1,1,1,-1,0,0);
// Count back-facing surfaces behind
d3d_stencil_enable(true);
d3d_stencil_function(rs_always, 0, ~0);
d3d_stencil_operator(rs_keep, rs_keep, rs_incr);
d3d_set_zwriteenable(false);
d3d_set_culling(rs_cw);
d3d_draw_ellipsoid(-0.5,-2,-2,0.5,2,2,-1,0,0,128);
d3d_set_zwriteenable(true);
d3d_set_depth_operator(rs_greater);
d3d_stencil_function(rs_equal, 1, ~0);
d3d_stencil_operator(rs_keep, rs_keep, rs_keep);
d3d_set_culling(rs_ccw);
d3d_draw_ellipsoid(-0.5,-2,-2,0.5,2,2,-1,0,0,128);
// Reset pixels where n != stencil
d3d_set_zwriteenable(false);
d3d_set_depth_operator(rs_less);
d3d_stencil_clear_value(0);
d3d_stencil_function(rs_always, 1, ~0);
d3d_stencil_operator(rs_keep, rs_keep, rs_replace);
d3d_draw_block(-1,-1,-1,1,1,1,-1,0,0);
d3d_stencil_function(rs_equal, 1, ~0);
d3d_stencil_operator(rs_zero, rs_zero, rs_zero);
d3d_set_depth_operator(rs_always);
d3d_set_zwriteenable(true);
d3d_set_culling(rs_none);
d3d_draw_ellipsoid(x+500,y+500,z+500,x-500,y-500,z-500,-1,0,0,32);
// Draw RGB image
d3d_set_color_mask(true,true,true,true);
d3d_stencil_enable(false);
d3d_set_depth_operator(rs_equal);
d3d_set_zwriteenable(false);
d3d_set_culling(rs_ccw);
draw_set_color(c_blue);
d3d_draw_ellipsoid(-0.5,-2,-2,0.5,2,2,-1,0,0,128);
d3d_set_culling(rs_cw);
//d3d_set_culling(rs_cw);
draw_set_color(c_red);
d3d_draw_block(-1,-1,-1,1,1,1,-1,0,0);
//Disable everything
d3d_set_color_mask(true,true,true,true);
d3d_stencil_enable(false);
d3d_set_zwriteenable(true);
d3d_set_depth_operator(rs_less);
d3d_set_culling(true);
d3d_depth_clear_value(1.0);
surface_reset_target();
d3d_transform_stack_pop();
d3d_projection_stack_pop();
And the output when using two d3d_models:
Textboxes
Added a textbox widget to the BGUI extension. Supports unlimited and limited length (both line count and line length), non-monospaced fonts (so you can use any font you like), mouse selection, as well as most keyboard functions (copy ctrl+c, paste ctrl+v, move cursor with cursor keys, select with cursor keys while holding shift etc.). It also has styles for the marker, so when you select text you can make it look how you want.
Things to do:
1) Add Unicode support (limited to the font selection you make in LGM, of course).
2) More usability and bug fixes for keyboard handling (some keys like alt break it right now).
3) Pasting a multiline string into a line-limited textbox only partly works.
4) Make it independent of FPS using delta time. It uses internal counters for key repeat and cursor blink, which need to be changeable.
Fixes
-Added optimizations in BGUI - if the text of a GUI element is empty, then a lot of code is now skipped. Extra useful with parenting, like adding icons to buttons, where the button has no title text and the icon (gui_label) has no text either. commit
-For some reason the texture atlas code threw a compile-time error on GCC 4.8. Fixed that. commit
-Removed all warnings from the ENIGMA.exe source. commit
-When a glyph outside the font range was used, some functions would segfault. Now these characters are substituted by a blank space. Note that _ext functions will not break lines on this blank space. The fixed functions are: string_width_ext, string_width_ext_line, string_width_ext_line_count, draw_text_ext, draw_text_ext_transformed, draw_text_ext_transformed_color and draw_text_ext_color. commit
-string_height_ext, string_width_ext_line and string_width_ext_line_count now properly use get_space_width() to determine the width of the space character instead of the height/3 fallback. This matters when the font actually defines a space character. commit Fixes 945
-Fixes a few warnings in lang_CPP.cpp. commit
-Added exists() to eYaml parser, so we can check if value exists.
-64bit compile mode on Windows now uses a unique directory name ("Target-platform" in .ey). This means that you can have compiled 32bit and 64bit ENIGMA in parallel. Uses the exists() to implement build-dir. If it doesn't exist the "Target-platform" is used like before, but if it is defined then it is used instead. commit
-The parser uses a lot of unsafe pointers everywhere. And it doesn't check for NULL anywhere, so there are many cases where a NULL pointer is cast to a string and a segfault occurs. Here are some basic fixes for one case, but I'm sure this is where most of the problems lie. This fix made crashes somewhat rarer. commit
-Casting is also a lot better now (using C++11 functions), so there is no need for unsafe functions like sprintf. commit
-Changed compiler makefile so it runs from Win cmd. commit
-font_get_ functions now return int instead of unsigned int. This was especially bad because of this humorous line: "unsigned(-1)". commit
-string_ ... _line functions now return 0 if the line selected doesn't actually exist. Previously it returned the length of the last valid line. commit
-Now all returns in the string_width functions are "ceil()'ed", not only the last return. I'm not sure this is correct, but Robert back in June 2014 believed they should be (a81fea0). commit
-clipboard_set_text now replaces all "\n" with "\r\n" which is required for Windows clipboard to pass newlines. commit
-If model_begin(format) is called with format = -2 (which is the default), the format will not be changed. This is useful when using several _begin and _end calls on one model. commit
-Parser now uses std::to_string() to actually implement toString. Should be a lot safer. commit
-Fixed d3d_transform_stack_disGard to be d3d_transform_stack_disCard. Same with d3d_projection_stack_discard. This seems to be a typo done many years ago, but nobody has used that function. commit
-Better errors for shader uniforms (will print its name). commit
-Speed optimizations in gsmath by loop unrolling and other things. This should speed up rendering. commit
-d3d_projection_stack functions now push and pop view_matrix as well. Without it the projection stack didn't actually work. commit
-Fixed a potential SEGFAULT with surface_getpixel_ext. commit
-gui_slider_set_value now properly checks bounds. commit
-d3d_model_get_stride returns pre-calculated stride instead of recalculation. This should be a lot faster in many cases, as this is called every time a texture batch check happens. Basically every time a draw function is used. commit
-Fixed surface error in debug mode. commit
-get_texture in GL3textures is now a macro and shows an error in debug mode. commit
-Fixed a tile bug in GL3 that meant the wrong texture is used. Screwed up rendering with tiles. Thanks to rcobra's example for finding this bug. commit
-Enabled C++11 when compiling on Linux. commit
-Matrix4::init_camera_transform (used in d3d_set_projection_ext) is now slightly faster, because a matrix multiply was replaced with 3 dot products. commit
-The math function lerp() now uses the C++11 std::fma function. It is as fast (or faster) than what was done previously, but it returns correct results when t == 1 or when x == y. commit
-Added a GM compatible overload for draw_button that lacks the border size argument and is hardcoded to size 2. commit Fixes 951
-screen_init, screen_save and screen_save_part now properly end the shape batching, so you can actually see in the output image the same things you saw on the screen. commit
-texture_set_repeat_ and texture_set_wrap_ now uses GL_CLAMP_TO_EDGE instead of GL_CLAMP in GL3, as GL_CLAMP is deprecated. commit
-Dr.Memory reports some uninitialized reads and accesses in GLSLshader structs. Fixed by initializing them. The rest of the warnings seem to be in the driver. commit
-Removed d3d_depth_clear as we have d3d_clear_depth and it actually didn't clear depth. Yeah, confusing. commit
-GL3ModelStruct Clear() now check if we have anything to clear (aka, stride is not 0). Potentially an optimization, as clear() is called on every batch flush. commit
-Shader functions which change state now flush the batch. This means that if something is drawn, then a uniform is changed, then more is drawn, the two drawings will be separate as intended. commit
-texture_reset() now checks whether the texture is already reset. Doesn't seem to have broken anything and is a potential optimization (as texture_reset is also called on every batch end). commit
Added or implemented
-Added window_set_maximized() and window_get_maximized() on Windows. Not sure why they weren't implemented. Also added them for xlib, but THEY ARE NOT TESTED!!! commit
-Added gui_continue_propagation() to BGUI. This allows event propagation to continue even when it would otherwise be stopped. Say you have a button over a window: normally the window would not get an event, because the button is pressed. If you want the window to get the event anyway, you put "gui_continue_propagation" in the button callback. I use this for things like double clicks on window titles. A single click works like a regular window click, but a double click makes the button do something different. commit
-Added gui_windows_group_update() which allows updating windows in groups. It also has a second "continueProp" argument, which allows the groups to be treated individually or together like gui_windows_update() would. These functions also return whether event propagation was stopped, which basically means the mouse was over a window or a widget bound to one. Useful if you want to combine GUI with mouse events that are not tied to the GUI - e.g. RTS units shouldn't move to a waypoint if the mouse click actually landed on a window. Previously there wasn't an easy way to check this. commit commit
-Added window_update_mouse() which allows calling an update for mouse_x and mouse_y variables. Useful if you depend on them (like the BGUI extension does), and you need to update them in the middle of a step or draw event. For example, you have a view in which you use mouse and GUI (which isn't a view) also uses mouse. Here you must update the mouse two times instead of once like ENIGMA does by default. commit
-Added window_update() which just calls ALL the events for all instances. Note that this can cause an infinite loop if called somewhere like the step event. But it is useful in alarms, one-shot events or places like the resize() event. commit
-Added draw_roundrect, draw_roundrect_color, draw_roundrect_ext and draw_roundrect_ext_color. Thanks Garo for reporting. commit
-Added font_height() that returns height of the maximum character. Previously string_width("M") was required or something similar, but this is O(1) as we already have that number. commit
-Added d3d_model_add_float, d3d_model_add_float2, d3d_model_add_float3, d3d_model_add_float4 and d3d_model_add_ubyte4 for custom attributes. They are like vertex_buffer in GM, but here it is implemented as a part of model class, so there wouldn't be any code duplication. commit
-Added d3d_model_add_color, d3d_model_add_texcoord and d3d_model_add_normal. The only difference from _float2 or _float3 is that they can be used without formats. That means the default shader supports them. commit
-Added vertex_format_ functions, but they are totally different from GM:S. Added vertex_format_create, vertex_format_destroy, vertex_format_exists, vertex_format_add. Removed the weird "_end returns the ID" thing because it is totally different from how the other systems work, so now we use IDs instead (with creation and destruction, which GM:S doesn't allow). I didn't add vertex_format_add_color, vertex_format_add_position and so on because they don't make sense with a custom shader. vertex_format_add() is different from GM:S because the last argument is the attribute to add. This can be found using glsl_get_attribute_location, allowing you to use custom attributes (which GM:S didn't actually allow, as far as I know). commit
-Added glsl_program_get() which returns the currently bound shader. Most useful to actually get the default shader. commit
-Added glsl_attribute_enable_all() which is used in the model struct and is not of much use anywhere else. commit
-Added equal() function for floating point math comparisons. commit
-Added ray_sphere_intersect() function which allows raytracing with rays and spheres. Useful for collision, vertex painting and more. commit
-Added d3d_transform_get_array, d3d_transform_set_array, d3d_transform_add_array, d3d_projection_set_array, d3d_projection_add_array and d3d_projection_get_array. Much easier to use from external C++ than EDL because of broken pointers in the parser. But still useful. commit
-Added d3d_transform_add_rotation/d3d_transform_set_rotation which just takes the three angles. commit
-Added execute_shell and execute_program to Linux (xlib). commit
-Surfaces now have an option for a stencil buffer, as well as a readable depth buffer. For a depth buffer to be readable it has to be a texture rather than a renderbuffer, as renderbuffers cannot be sampled. But then the buffer is slower, so I ended up implementing it as a choice. commit
-New function surface_get_depth_texture() to get a surface's depth texture, if the surface has a depth buffer and it's not write-only. This can be used for things like shadow mapping. commit
-Added d3d_set_color_mask(r,g,b,a) function to enable/disable writing to a color channel. commit
-Added d3d_stencil_enable, d3d_stencil_clear_value, d3d_stencil_mask, d3d_stencil_clear, d3d_stencil_function and d3d_stencil_operator for more fine-grained control of stencil buffers. commit
-A d3d_model's format can now be changed with d3d_model_format. This allows the same model to be drawn with a different shader which has a different format. commit
-Added d3d_transform_set_look_at and d3d_transform_add_look_at in GL1 and GL3. This rotates an object to face a point. commit
-Added d3d_transformation_get_mv and d3d_transformation_get_mvp, which return the model_view_matrix and model_view_projection_matrix respectively. Useful in cases like shadow mapping, where we use a projection function to set the camera and then need the MVP in a shader later. commit
-Added _duplicate_ functions to BGUI widgets. This returns an exact duplicate of the widget that can then be modified. Very useful for styles and skins. commit
-Added the BGUI textbox widget, consisting of 57 functions. It is a textbox with unlimited chars and lines (but both can be limited with functions). It supports mouse selection, non-uniform fonts, copy and paste (ctrl+c and ctrl+v) and much more.
The End
This is just an update so people know that ENIGMA hasn't died. Sadly I haven't been able to compile some of the 3rd-party stuff we use, so I cannot make an installer right now. I use ENIGMA in several of my projects (also at work) so it gets updated quite frequently, but sadly I don't have much time for doing community things and uploading new installers.
Some of the still long-standing problems:
1) LGM is unstable on Windows. Crashes 24/7.
2) We need an extracted EGM format. I use Git as version control for my projects, and a zipped EGM is detected as binary (even though I have tried all the diff tricks with extracted zips).
3) EDL needs to be either fixed or replaced. I have wanted to use pure C++11 for a long time now, and I believe it would be a perfect language, as you can do a lot with "auto" and ranged loops that would make C++11 easier than EDL.
3
Developing ENIGMA / Debugging the .dll
« on: August 07, 2015, 04:48:46 pm »
The crashes are killing me. Seeing as the problem is in the plugin (the .dll or enigma.jar), I started trying to debug it. As the .dll is accessed from Java, debugging it is non-trivial. What works for me is running LGM and then attaching gdb to the Java process. If it crashes some time after startup, then it can be as easy as looking up the PID in Task Manager and then typing:
Code:
gdb -p PID
Sadly, for me LGM started crashing on startup. This means I have very little time to attach the debugger before the exception. Thankfully PowerShell can return the PID from the process name, and that can then be piped as the argument. So it looks like this:
Code:
gdb -p (get-process java |select -expand id)
This finds the PID for java and attaches to it. So it is possible for me to do it in the 2 seconds while LGM loads.
After that, the exceptions thrown by the .dll are shown there as well, complete with a backtrace if there are debugging symbols. To have them you must compile the .dll with the -g flag, but that is done by default in master anyway.
Some things I noticed:
The JDI code is EXTREMELY unsafe. I understand it was written up to 5 years ago, so Josh wasn't experienced (heck, I didn't even write C++ back then) and there were no C++11 features, which would have been useful - but still, the code is a minefield. A lot of unsafe pointers, memory management done through new/delete, a lot of unbounded sprintf's which are an overflow waiting to happen, and much more.
For example, one of the reasons the plugin crashed was that when the value of a template is invalid, the char array is null. Then it calls std::string(null) and explodes, as that is a segfault. This apparently happens in numerous places. I'm trying to fix it, but as I don't know the code that well, I'm not sure what I can and cannot touch. Pointers are passed around without knowing whether they have been freed or not. Right now a segfault happens in the destructor ~definition_template, which has THREE for loops with "delete" inside. This is a great way to segfault. Trying to sanitize all that is going to be a pain. I hope someone can help.
4
General ENIGMA / ENIGMA progress
« on: July 25, 2015, 09:20:55 pm »
Introduction
I wanted to make a topic about changes in ENIGMA. The forum has been quiet for a while and nothing much has been going on. But I'm still here and still working on ENIGMA. So some of the stuff done in the past few months is described here.
TL;DR
At least 40 bug fixes. At least 297 new functions (about 262 for the BasicGUI extension). A texture atlas which can increase FPS up to 24x. A much more mature GUI extension. 64-bit compilation for Windows. ENIGMA is not dead.
Texture atlas
It took years, but ENIGMA is now officially as powerful as any decent engine from the beginning of the last decade. We have a texture atlas (or, as GM calls them, texture pages). It packs several sprites into one texture, so there aren't any expensive texture changes. This is extremely useful for 2D games and GUI/UI/HUD, as they usually involve a lot of 2D textures. It is less important for 3D games. Previously, code like this was the worst-case scenario for ENIGMA:
Code: (edl)
int i = 0;
repeat (10000){
    int spr = spr_0;
    switch (i){
        case 0: spr = spr_0; break;
        case 1: spr = spr_1; break;
    }
    draw_sprite(spr,-1,random(room_width),20+random(room_height)-20);
    ++i;
    if (i>1) i = 0;
}
draw_set_font(font_0);
draw_text(10,10,string(fps));
draw_text(10,30,"The quick brown fox jumps over the lazy dog");
Here 10k sprites are drawn, but they change image one after the other. So in reality there are only two images - one drawn 5k times and the other drawn 5k times. Here one sprite is a green pentagon and the other is a red one. Here is a screenshot:
You can see I only get 23 FPS here. The reason for that can be seen here:
We can see that in one frame we call 70k OpenGL functions. We actually issue 10k draw calls (one per sprite) as well as 1 for the text, so it's 10001 draw calls to draw the frame. You can also see there are 4 textures in memory, one of which is visible in the image (the red pentagon).
To use the texture atlas I added a few new functions. Right now it is usable at runtime (unlike GM, where it can only be used from the IDE), so this is the code required:
Code: (edl)
texture_page = texture_atlas_create();
texture_atlas_pack_begin(texture_page);
texture_atlas_pack_sprite(texture_page, spr_0);
texture_atlas_pack_sprite(texture_page, spr_1);
texture_atlas_pack_font(texture_page, font_0);
texture_atlas_pack_font(texture_page, -1);
texture_atlas_pack_end(texture_page);
That is easy, right? We create a texture atlas page, then add two sprites and two fonts (including the default -1 one) and then call _end(), which actually does the packing. It is very efficient and uses Josh's rectpack, which we already used for fonts. Specifying a size for the atlas texture is optional; it is calculated automatically to be the smallest power-of-two texture that fits. After calling this code the texture looks like this:
You can see we only need 12 OpenGL function calls per frame, and there is actually only one draw call. There is only one texture, as the rest were merged and destroyed. The packed texture is on the right. The fonts and sprites can now be used as normal and nothing changes, so existing code works fine. You can also see that font characters are packed per character, not per font texture, so the spaces between sprites are filled with glyphs - unlike GM, which is quite wasteful (as can be seen here).
This is the output for the example after running the texture_atlas code:
We get 560FPS instead of 23FPS. That is a 24.3x speedup (2430%). This works in the DX9, GL3 and GL1 graphics systems, and you can pack sprites, fonts and backgrounds (so all texture-backed resources in ENIGMA).
TODO:
1) This only works in code and is not implemented in LGM. I like it that way, but it would be useful for LGM to pack textures too, so I wouldn't have to do it at runtime (which is extremely fast, but could still be a slowdown with thousands of sprites). Allowing it at runtime does seem important, though, as you can now even pack sprites you loaded externally. GM doesn't allow that.
2) There could be a few more options added, like padding. The system also doesn't check whether a texture is already packed (if you try to pack the same sprite twice, the result is currently undefined). And lastly, we could allow the same texture to be packed multiple times, so you could optimize the atlas at runtime. For example, a desert world could be packed together with the UI, and the next ice world could also be packed with the UI, so you can draw as much as possible in one draw call.
BasicGUI improvements
For those who don't know, BasicGUI is an extension I'm making for ENIGMA. It adds GUI elements like windows and widgets. They are not meant to exist outside the main window, so they are all drawn inside it. Useful for game UI or editor UI. The extension includes windows, buttons, labels, toggles, scrollbars and sliders, as well as skins, groups and parenting. It is inspired by the Unity system, but it is a little verbose because of EDL limitations. Right now the extension consists of 262 functions.

As a simple example:
This shows windows, group toggles (basically radio buttons), sliders, buttons with child labels (the button with the "Lena" picture), scrollbars and labels (the larger "Lena" picture). I get 6600FPS here because I also packed everything into a single texture. This would draw in one draw call if it weren't for a stencil buffer I used, which will be described later.
Another example is the node editor I'm working on.
Everything you see there is drawn using the BasicGUI extension (excluding the connecting curves, which are drawn using the draw_bezier_cubic_color() function). I get 2430FPS, and I also use an atlas here, so here is the texture:
It takes a lot more OpenGL function calls here because I use surfaces and stencil buffers to cut off content outside a BGUI window. If I hid those windows and didn't draw surfaces, I would be able to draw the whole thing in one draw call. It is a lot cooler in action, so if I make a video of it I will post it here. The BasicGUI extension is graphics-system agnostic: it uses only generic drawing functions and should work in all graphics systems that implement them, which right now are GL1, GL3 and DX9.
TODO:
1) Add textbox widget.
64bit for Windows
I say "for Windows" because I think Linux and MacOS have had this working for some time now, but on Windows a few fixes were needed. There are actually performance reasons to compile in 64 bits, because it can increase FPS. Like here (press to enlarge):

The only difference is that one is compiled in 32 bits and the other in 64. The difference is not large (about 4%), but it is still 100fps. 64-bit of course uses a little more memory: the 32-bit build uses 29.8MB of RAM while the 64-bit one uses 32.1MB.
Here is the atlas test:
Here we also gain almost 100fps, or about 14%. The 32-bit build uses 50.9MB while the 64-bit one uses 57MB.
All in all this is great. 64-bit of course also means we can use more than 2GB of RAM. Most 2D games don't care about the 2GB limit and most 3D games rarely hit it either (AAA games of course do). I'm dealing with a lot of data not connected with games, so for me the possibility to use more than 2GB is very useful.
TODO:
1) Compile the rest of the libraries for 64-bit and create a new Windows installer which includes them. Right now I have only compiled libffi, so I can compile a game. I still need to compile OpenAL, ALURE, Box2D and some others.
Stencil buffer
I added some simple stencil buffer functions. They are primarily used in the GUI system so that windows cut off content that is outside of them. It's like using surfaces for that, but without the additional VRAM. A simple example:
Code: (edl) [Select]
repeat (5000){
    int spr = spr_0;
    draw_sprite(spr,-1,random(room_width),20+random(room_height)-20);
}
d3d_stencil_start_mask();
draw_circle(room_width/2,room_height/2,room_height/2,false);
d3d_stencil_use_mask();
repeat (5000){
    int spr = spr_1;
    draw_sprite(spr,-1,random(room_width),20+random(room_height)-20);
}
draw_set_font(font_0);
draw_text(10,10,string(fps));
draw_text(10,30,"The quick brown fox jumps over the lazy dog");
d3d_stencil_end_mask();
And the output:

What happens here is that I draw 5000 red sprites. Then I start the stencil mask and draw a circle on it. Then I use the mask to draw the remaining 5000 green sprites and the text. The green sprites and the text are limited to the circle I drew, so the mask controls which pixels can be written to. This works in GL1 and GL3.
TODO:
1) The functions need to be changed so we can use several values in the stencil mask.
Fixes
-Fixed normal matrix. commit
-Model_floor and model_wall fixes (changes necessary because of the normal matrix change). commit
-Fixed sprite_create_from_screen and background_create_from_screen. commit
-Direction is now rounded. Fixes a problem where vspeed = 5 made direction 269 instead of 270. commit issue
-Added the maximize button if window resizing is enabled. commit
-Fixed string_width(" "). Previously, if the string consisted only of spaces, string_width() returned 0. commit
-Fixed definition of draw_set_line_pattern. commit
-Fixed double define for draw_spline_part with wrong arguments. commit
-Removed glsl_program_bind_frag_data from header. It was never implemented and I cannot even find out what it is (it's not a GM function either). commit
-Added definitions for font_get_glyph_texture_left/top/right/bottom. They were implemented, but not defined. commit
-Remove matrix_ functions and d3d_transform_vertex which were not implemented. commit
-Remove export_include_file, discard_include_file and include_file_location as they were not implemented. commit
-Added empty functions for d3d_set_software_vertex_processing in GL. Software processing is idiotic anyway, but D3D supports it, and I need to make a stub until we make platform specific functions easier to implement. commit
-Fixed d3d_model_part_draw() definitions - they missed vertex_start argument. commit
-Removed display_get_orientation as it hasn't been implemented (we don't even support devices with orientation right now). commit
-Removed joystick_map_button and joystick_map_axis from PFjoystick.h, as they are not implemented in Windows, but they are added in Linux. As they are defined in LINUXjoystick.h, then I guess they still should work on Linux. commit
-Fixed sound_get_pan and sound_get_volume return values. commit
-d3d_draw_torus is now defined properly. commit
-Removed duplicate draw_mandelbrot define. commit
-Fixed room_get_name. For all resource get_name functions the default return value was "<undefined>", but room_get_name is implemented differently: it isn't declared in IDE_EDIT like the rest, and when given an incorrect room index it would just crash in the non-debug version, so I made it return "<undefined>" in this case instead. commit
-Surfaces now use an unordered_map instead of regular arrays of pointers. The old way was causing a memory issue (Dr. Memory crashed on "new surface"), and this is the more C++ way anyway. commit
-Fixed a bug in the new surface creation, where I incremented the surface_max count even though the id itself was reused. commit
-Fixed memory leak in graphics_copy_texture. commit
-graphics_copy_texture now correctly crops the image. commit
-Fixed the font packer so it would return power-of-two textures. commit
-Some small optimizations in GSbackground. It's very possible the compiler on -O3 did that anyway. commit
-Removed some warnings from GL3textures.cpp. commit
-Removed GSEnable.h and corresponding .cpp files. They are not used anywhere and they implement functions that are already in d3d_ category. commit
-Fixed .obj loading. There was an error that when you load an .obj with normal values, but without texture coordinate values, then the normals were all messed up. This is now fixed in GL1 and GL3. commit
-Fixed d3d_set_fill_mode not drawing in GL3. commit
-Added the NOCHANGEDIR flag to dialogs. Previously the get_open_filename and get_save_filename dialogs changed the working directory. This was messing with some other stuff, and as we cannot change the working directory with any built-in function right now, I don't think it was intended. So I added a flag that forbids changing the directory. commit
-Disabling zwrite now works correctly in GL3. commit
-Widgets now compile for 64bit. Some code in win32 widgets needed to be changed so it would compile for 64bit. commit
-Windows widget rc files don't show warnings anymore. They were caused by the manifest.xml include, but I added include guards in the files themselves as an additional measure. commit
-Fixed the for loop in the makefile that dealt with Windows resources (rc files), as it didn't run on cmd. It was made for sh.exe or something like that, which we can't use. commit
-It is now possible to pass flags to make. This is required to set SHELL=cmd on Windows. It fixes problems described here: http://enigma-dev.org/forums/index.php?topic=2488.0 commit
-Rectpack had a limit of 255 rectangles it could pack. That is way too low for a texture atlas, which can have thousands, so I fixed that. The texture atlas also had to be changed for this to work. Now the limit is the maximum unsigned int. commit
-For 64-bit we need to pass a flag to windres. I added this to the e-yamls and the compiler, so it is possible to pass it. commit
-The compiler is now compiled with -O3 which does make the parsing faster. commit
-Built-in shader is now a C++11 raw literal. This makes it easier to maintain and copy, as we don't need those damn quotes and newlines. commit
Added or implemented
-Added room_first and room_last. commit
-Quadratic bezier curves now use the width given by draw_set_curve_width(). commit
-Implemented font_get_glyph_left/top/right/bottom. They were defined but not implemented. commit
-Implemented triangle_area, which was defined, but not implemented. commit
-Implemented sound_get_pan and sound_get_volume in OpenAL, both of which were defined. commit
-Implemented display_get_gui_height and display_get_gui_width. commit
-Implemented date_get_week and date_inc_week. Apparently I missed it when I wrote the thing in 2011. 4 years later, it's in (though not ISO). commit
-Implemented mp_grid_clear_cell which was clearly missing. commit
-Implemented sprite_get/set_bbox_mode. This was also done in the .dll. commit
-Implemented d3d_light_set_ambient and d3d_light_set_specularity in GL1. commit
-Added graphics_copy_texture() which is required for texture atlases. commit
-Added graphics_copy_texture_part. This is also needed for texture atlas, as I need a way to copy only part of the source texture in fonts. commit
-Added functions d3d_stencil_start_mask, d3d_stencil_use_mask and d3d_stencil_end_mask which allow easy use of stencil masking. commit
-Added d3d_transform_set_array and d3d_projection_set_array which take pointer to an array to set the 16 values. commit
-Added d3d_transform_add_array, so you can add an array as well as set it. Same with d3d_projection_add_array. commit
-Added at least 262 BasicGUI extension functions.
-Added 7 texture atlas functions.
And many more smaller fixes here and there. All of this is in this branch: https://github.com/enigma-dev/enigma-dev/tree/GL3.3NormalMatrix
Next for ENIGMA
I plan to work more on the stuff here as well as on other interesting and useful features. But sadly I don't know how much I can do alone; there are no active developers right now besides me.

One sore point is the parser. It was mostly written and rewritten by Josh, but he is not that interested in ENIGMA anymore, so there is no one who can actually fix the many bugs and issues we have with it. I propose changes to EDL to make the parser a lot simpler, like getting rid of dynamically added variables: all variables local to an instance would have to be declared as "local". This is actually a small change that would break little, but could fix a lot of the problems we have now with std::maps crashing in the parser. I don't need, or even want, GML compatibility. Or GM compatibility in general. We don't need people porting stuff from GM to ENIGMA; we need people making stuff in ENIGMA. Seeing as GM is slowly dying anyway, I don't think we need to follow them. I would like EDL to be much closer to C++, and it might as well be a little stricter.

Another problem is the IDE. LGM on Windows is extremely unstable. I have to restart it 5 times before the Run button actually runs, otherwise it just freezes. egofree mentioned he might be free at the end of this month and look into it. I heard others have continued to work on NaturalGM, but it hasn't had a commit since last year, so it does seem dead, together with RadialGM and the other IDEs.
So I might end up making a branch of ENIGMA. In it I would try replacing the parser with something much simpler (I would basically need the instance system to work and that is it) and replacing the IDE. That of course is still a lot of work, which I don't have the resources to do alone. I will probably make a separate topic about it.
5
Issues Help Desk / Linker crash
« on: July 02, 2015, 10:30:46 am »
I have had a few linker crashes recently and I really cannot figure out why they happen. They only happen when I compile with Build>Compile; it works fine in Debug and Run modes. The error also occurs only when I enable one of my extensions. It sometimes works, but often it doesn't. I haven't yet checked what has changed on the extension side, but I'm not sure that is the problem. The LGM log is like this: http://pastebin.com/xFngiAMS
The error itself is very non-descriptive:
Quote
0 [main] sh 7632 handle_exceptions: Exception: STATUS_ACCESS_VIOLATION
2288 [main] sh 7632 open_stackdumpfile: Dumping stack trace to sh.exe.stackdump
The crash dump is actually even less descriptive:
Quote
MSYS-1.0.12 Build:2012-07-05 14:56
Exception: STATUS_ACCESS_VIOLATION at eip=6E69572F
eax=00000000 ebx=6A626F65 ecx=FFFFFFFF edx=680A4C5C esi=69572F73 edi=776F646E
ebp=736A626F esp=0026B708 program=C:\ENIGMA\git\bin\sh.exe
cs=0023 ds=002B es=002B fs=0053 gs=002B ss=002B
Stack trace:
Frame Function Args
4604 [main] sh 7632 handle_exceptions: Exception: STATUS_ACCESS_VIOLATION
4903 [main] sh 7632 handle_exceptions: Error while dumping state (probably corrupted stack)
One thing I noticed is that the linker command is actually very long (more than 16k characters) while the Windows 8 limit is 8k, so I cannot even run the linker part from my cmd. But when I call the linker like "g++ @commands.txt", where commands.txt contains the whole command line, I can get it to compile; no errors are thrown. Then I noticed ENIGMA uses sh.exe as the shell, probably just because of this limit. But when I call the same thing from sh.exe I still don't get the error. It is only when I use make that it crashes: http://pastebin.com/ZUzmTcLp
So make.exe (or mingw32-make.exe if used instead) is the one that crashes. I looked on the net and a few suggestions were given:
1) Disable AntiVirus or Windows Defender - did that, didn't work. This would also not explain why I can compile by just disabling an extension.
2) Run as Administrator - did that, didn't work. This too wouldn't make sense.
Later I will try installing new MinGW together with new Git and GCC and see if that helps. That will happen a week after the next one though.
Has anyone ever had this problem?
6
Developing ENIGMA / Everything in a seperate thread?
« on: January 24, 2015, 06:34:28 pm »
For my tools I love to make a resizable window. This is usually not that trivial, but we have everything in ENIGMA to make it work like so:
Code: [Select]
if (global.window_width != window_get_width() || global.window_height != window_get_height()){
    global.window_width = window_get_width();
    global.window_height = window_get_height();
    view_wview[0] = global.window_width;
    view_hview[0] = global.window_height;
    view_wport[0] = global.window_width;
    view_hport[0] = global.window_height;
    window_default(true);
    screen_init();
}
So I check if the size has changed and, if so, change the view size and reset the window and screen. Those last two are needed to fix some visual bugs, as ENIGMA internally also needs its sizes updated. This makes it work. I then noticed that the maximize button was not clickable even with resizing enabled; I fixed that here (https://github.com/enigma-dev/enigma-dev/commit/a9f9238a2f50e423c567c50059b8e6765717d214).

And then I noticed something that has troubled me for a long time: when you resize or move the window, everything inside it freezes. This is because of the way Windows is made and how it uses modal loops, which essentially freeze the main thread while you move or resize. There are two ways to fix this as far as I know:

1) Break the modal loop and do everything yourself: http://sourceforge.net/p/win32loopl/code/ci/default/tree/ , but it seems like a lot of work, as you basically replicate everything Windows was doing.
2) Do everything in a separate thread and allow Windows to freeze the main thread. I like this option more, and it can technically boost speed as well. The question: has anyone done this? My naive idea was that we could just create a thread in winmain(){} and do everything inside it from there; then there wouldn't be any problems with memory sharing and so on. The problem is callback functions (among other Windows-specific functions which use the Windows API): will they work inside a child thread? I mean something like this function:
Code: [Select]
LRESULT CALLBACK WndProc (HWND hWndParameter, UINT message,WPARAM wParam, LPARAM lParam)
I do see that we have the Resize event working, which means I can do some of the stuff inside the thread. The event is called whenever the window actually executes the modal loop. For example, I tried screen_redraw() inside the event and it works: it redraws the screen, so the window doesn't look totally frozen. But the game itself is still frozen, because I would need to update all events for this to work. I guess we don't have a function for performing a full game loop; that would actually be useful.
7
Developing ENIGMA / [GL3.3] Multiple render targets (MRT)
« on: January 13, 2015, 03:38:48 pm »
I wanted to try implementing deferred shading, and hit the wall that if I want to do it efficiently, I should be able to render to several render targets ("surfaces") at once. I found out that GM:S can do it with an undocumented function called surface_set_target_ext(int index, int id), which takes the index of the "stage" (as Robert calls them) to bind, while id is the surface itself. Sadly we create surfaces as individual framebuffer objects (FBOs), and OpenGL allows only one FBO to be bound at any one time. This means I cannot bind several of them at once like GM does. GM can do it because it uses DX underneath (on Windows only, I presume, where surface_set_target_ext seems to work, and only HLSL shaders can render to MRT in GM:S as far as I can see) and DX allows that (http://msdn.microsoft.com/en-us/library/windows/desktop/bb147221%28v=vs.85%29.aspx). In OGL you do it differently: you add all the required textures to one FBO (http://ogldev.atspace.co.uk/www/tutorial35/tutorial35.html), which can then be bound and all the textures accessed.
So as I couldn't add surface_set_target_ext(), I planned to add surface_add_colorbuffer(), which would add a texture with specific formats to the FBO. Something like this:
Code: [Select]
surf = surface_create(640,480); //This creates a 640x480 RGBA texture with unsigned int type and BGRA format (this is how it's made by default right now)
surface_add_colorbuffer(surf, 3, tx_rgb, tx_bgr, tx_float); //This adds a 640x480 RGB texture with float type and BGR format (and binds it to GL_COLOR_ATTACHMENT0 + 3)
surface_add_depthbuffer(surf, tx_depth_component, tx_depth_component32f, tx_float); //This adds a 640x480 depth texture with float type and 32f format (and binds it to GL_DEPTH_ATTACHMENT)
I intentionally bound it to color attachment 3 and skipped 2, so you can see where the number comes in later. Now we can do this in the pixel shader:

Code: [Select]
layout(location = 0) out vec4 surfaceBufferOne;
layout(location = 3) out vec3 surfaceBufferThree;
void main()
{
    surfaceBufferOne = vec4(1.0,0.5,0.0,1.0); //This buffer actually holds unsigned integers, so this becomes 255, 127, 0, 255
    surfaceBufferThree = vec3(3.1415,2.4891,1.2345); //This holds floats
}
Depth is rendered automatically.

The problem with all of this is that I cannot make this work together with other systems. I need a new graphics_create_texture() function (I called it graphics_create_texture_custom) which I have no place to put. I need:
Code: [Select]
enum {
    //Formats and internal formats
    tx_rgba = GL_RGBA,
    tx_rgb = GL_RGB,
    tx_rg = GL_RG,
    tx_red = GL_RED,
    tx_bgra = GL_BGRA,
    tx_bgr = GL_BGR,
    tx_depth_component = GL_DEPTH_COMPONENT
};
enum {
    //Internal formats only
    tx_rgb32f = GL_RGB32F,
    tx_depth_component32f = GL_DEPTH_COMPONENT32F,
    tx_depth_component24 = GL_DEPTH_COMPONENT24,
    tx_depth_component16 = GL_DEPTH_COMPONENT16
};
enum {
    //Types
    tx_unsigned_byte = GL_UNSIGNED_BYTE,
    tx_byte = GL_BYTE,
    tx_unsigned_short = GL_UNSIGNED_SHORT,
    tx_short = GL_SHORT,
    tx_unsigned_int = GL_UNSIGNED_INT,
    tx_int = GL_INT,
    tx_float = GL_FLOAT
};
which I cannot define in General, because I use GL_ enums. If I didn't, I would still need to define them in General and then access them through arrays, which is what the GL3d3d file does, and which is garbage. And then I need to put surface_add_colorbuffer and surface_add_depthbuffer somewhere, but I cannot do it in General, because GL1 will never have them (and DX will probably not have them either). So I end up making a stupid header where all of this junk goes.

I seriously consider forking ENIGMA to have only one graphics system, because GL1 is obsolete and I haven't really touched it in forever, while DX9/11 are not worked on and are not required as far as I can see. If we somehow managed to get GLES working we would still have problems like these, but at least GLES is like 95% compatible, so the problems would be a lot smaller.
I guess this is why most engines have only one graphics system. Or at least abstracts everything even more, so it becomes agnostic to it. We cannot easily do it, because we make a tool, which allows people writing their own code, which is already a layer on top of the graphics system.
8
Proposals / Error reporting
« on: December 25, 2014, 07:09:12 pm »
I want to start a discussion on how to improve ENIGMA's error reporting. The last changes in this respect were by Robert, who added scope tracking, so errors tell in which event they occurred. What we need now is to actually show the offending line number, because usually it's not enough to see the event (events call scripts, which can be massively big). Bug fixing right now means using GDB with "break dialogs.cpp:56", which adds a breakpoint at "show_error". Then I can backtrace to see where the error originated from, but even then the information refers to the _IDE_EDIT files. So my ideas are these:
1) Creating a separate debugger will probably be infeasible, so we will likely have to use GDB. This means we need to integrate it into LGM, to allow breakpoints to be set and hit properly. This is done via the GDB interpreter mode, which allows it to be used via the GDB/MI interface (https://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI.html#GDB_002fMI). As far as I understand, it's like using the regular console, but the output is easier to parse. So you run GDB as a separate process and then communicate with it like a cmd program.
2) We need to map the _IDE_EDIT files to the original source, so we can track where in the original source something goes wrong, not in the parsed _IDE_EDIT. This could be done via some macros that use parser information to generate something like GCC's __LINE__ and __FILE__ tags. This is actually needed for GDB as well (but in reverse), because we need to be able to set breakpoints in scripts, and GDB needs to translate them to _IDE_EDIT locations.
Any ideas on how to do this? This seems to be purely LGM-side work (only the mapping has to be done somewhere in the parser), so I'm not sure how to approach it.
I know Robert did try adding the graphical part to LGM (the model dialog at the bottom) that will be useful here.
edit: Also, the current debugging described here doesn't actually work either. The line locations shown by GDB differ from the line locations in the files, and I don't know why.
9
Proposals / Allow me to use C++ in EDL
« on: November 23, 2014, 06:06:05 am »
I kind of hate GM data structures. They are slower, harder to use and harder for the compiler to optimize, because they are a wrapper. I want to use STL containers, like vectors and lists. Can we finally make it happen? It's probably a parser thing again, but could it be possible in the short term to just add a language syntax that allows C++, like "{C++} { }"? I know Josh has already thought about something like this, but I want to know how complicated it would be. Basically I want everything inside {C++} to NOT be parsed and to be taken as valid C++. Then every variable inside that scope would be local to the script (it should happen by default), so I can access it. For example:
Code: (c++) [Select]
{C++}{
    #include <vector>
    using std::vector;

    class myClass{
        double value;
    };

    vector<myClass> myVector;
    for (auto &cl : myVector){
        cl.value = 3.1415;
    }
}
//This is parsed EDL
for (unsigned int i=0; i<{C++}{myVector.size();} ++i){
    ds_grid_set(grid, i, 0, {C++}{myVector[i].value});
}
This is very ugly, but the point is that I plan to make a very high performing part in pure C++ and then only copy the results back to EDL for drawing.
This way we don't need to wait for a new parser for 10 years, but get a "patch" that could potentially be easy to implement.
Any ideas?
10
Developing ENIGMA / Switch to C++11?
« on: October 26, 2014, 03:46:50 pm »
I am making some fixes here and there, and I'm itching to use std::unordered_map among other C++11 stuff. Most of it, like std::unordered_map, has been in GCC since at least 4.5. Also, since we bundle MinGW with ENIGMA on Windows, we can actually ensure the user has a new enough compiler, and on Linux users usually already have one. The only downside, as far as I know, is that we wouldn't be able to support Visual Studio as easily (as MS adopts new standards about as fast as a stone statue chases birds). So should we enable -std=c++11? It could also bring other performance benefits down the road; the whole resource system could be written better in general.
11
Programming Help / Packing bits
« on: October 05, 2014, 11:35:21 am »
I am looking into packing some stuff more compactly for OpenGL, but I have some problems coding that.
For normals, it's recommended to use the GL_INT_2_10_10_10_REV format, which is basically 10 bits per normal component (x, y, z) with 2 bits left over. Right now all normals are floats, 32 bits per component. This packing would reduce the size by 2/3. This is how I tried it:
Code: (c++)
normal_t val = 0;
val = val | (0 << 30);
val = val | ((unsigned int)((nz+1.0f)*0.5f*1023) << 20);
val = val | ((unsigned int)((ny+1.0f)*0.5f*1023) << 10);
val = val | ((unsigned int)((nx+1.0f)*0.5f*1023) << 0);
Where normal_t is just a 32-bit type, like color_t is right now:
Code: (c++)
template<int x> struct intmatch { };
template<int x> struct uintmatch { };
template<> struct intmatch<1> { typedef int8_t type; };
template<> struct intmatch<2> { typedef int16_t type; };
template<> struct intmatch<4> { typedef int32_t type; };
template<> struct intmatch<8> { typedef int64_t type; };
template<> struct uintmatch<1> { typedef uint8_t type; };
template<> struct uintmatch<2> { typedef uint16_t type; };
template<> struct uintmatch<4> { typedef uint32_t type; };
template<> struct uintmatch<8> { typedef uint64_t type; };
typedef uintmatch<sizeof(gs_scalar)>::type color_t;
typedef intmatch<sizeof(gs_scalar)>::type uv_t;
typedef intmatch<sizeof(gs_scalar)>::type normal_t;
I use 1023 because that is the biggest number you can hold in 10 bits. All normals are from -1 to +1, so what I do is offset (add +1.0) and scale (multiply by 0.5), then multiply by 1023 and shift the bits into place. I don't know how to unpack it to check that it's correct, though. I tried this:
Code: (c++)
printf("Normal before packing %f, %f, %f after unpacking %f, %f, %f\n", nx,ny,nz,(double)(val & 1023)/1023.0-1.0,(double)((val & 1023)>>10)/1023.0-1.0,(double)((val & 1023)>>20)/1023.0-1.0);
But it shows 1.0 for ny and nz when it shouldn't. nx is also incorrect. I know there will be a slight loss of precision, but that shouldn't matter for normals.
The second thing that is encouraged to pack is UV coordinates. I see recommendations for them being SHORT. I can't seem to pack them either. Then there is the problem that UVs are not limited to +-1.0. Normally they are, but if you want the texture to repeat, you give values outside this range. If we pack SHORT as an integer, then we cannot have that (as it will be normalized to +-1.0 when sent to the GPU). A half-float could work, though. But I don't know how to pack half-floats either.
tl;dr - How do I pack 3 floats into one 32-bit value, as a 10-bit integer each?
How do I pack 2 floats into one 32-bit value, as a 16-bit half-float each?
12
Developing ENIGMA / Massive GL3.3 changes.... again
« on: October 03, 2014, 05:21:22 pm »
Some might remember a merge I did mid-August. It involved massive GL3.3 changes. It stood as a merge request for a week for anyone to test. Nobody did. So I merged it and everything went up in flames. Now I will post a topic so people actually know about these changes, previously maybe only Robert was aware. I will also post how you would test it if using git.
These are some massive changes to the GL3.3 graphics system (it also touches other places). In short:
1) Better errors for GLSL together with more caching.
2) Surfaces now have optional depth buffers. This allows using them for rendering 3D scenes, which is the basis of many graphics effects, like reflections, refractions and post-processing.
3) Added functions to use attributes and matrices, both of which are now cached.
4) Added a proper GL3.3 debug context together with an error function. This means that when you run GL3.3 in debug mode, it will error (segfault) whenever you use a deprecated function or a wrong enum. It then prints the function to the console and shows an error window. This is very useful when we are trying to get rid of deprecated functions or when we have a hard-to-find bug (like a wrong enum in a function argument). By doing this I removed many functions and fixed many others. In the end it fixed the AMD problems we were having, and I removed the "hack" that was used previously. That also means that normally ENIGMA users shouldn't see those errors (as they won't use GL directly), so this could be an additional debug mode (a graphics debug mode), so that we don't drop frames without reason (this GL debug mode really does drop FPS).
5) Fixed view_angle in GL1 and GL3.
6) Adds a global VAO, which is necessary for GL3.3 core. Making one VAO per mesh might be better, but I had some problems with rendering when I tried that. Worth investigating later.
7) Fixes GL1 models not updating. This is because GL lists are now used for additional batching, but they were not reset when model_clear() was called.
8) The GL1 GL list was never destroyed, thus creating a memory leak. Fixed that by destroying the list in the mesh destructor. The list is also now created once, in the mesh constructor.
9) Fixes surfaces, which were broken in the recent viewport code changes.
10) Started on the GPU profiler. It would basically keep track of vertices, triangles, texture swaps, draw calls and so on per frame. This is of course done in debug mode for now. Many changes to come in this regard, as well as a forum post to explain this in more detail.
11) Updated GLEW. This wasn't strictly needed, but it's always good to have newer stuff. The real reason I did it is that I needed to get rid of glGetString(GL_EXTENSIONS), which was in glew.c. This function with this argument is deprecated, and so it always crashed at startup in a debug context. The newest version (1.10) still doesn't remove that function call, but I found many code snippets on the net that replace it.
12) The color choice is more consistent with GM in GL3. It's hard to explain, but basically the bound color (draw_set_color) will be the default one and it won't blend when using vertex_color. This is basically the fix for the purple floor in Minecraft example. In GL1 the floor was purple, in GL3 it was white. Now in GL3 it is also purple.
13) Fixed shadows in Project Mario (can't remember what did it though).
14) Added alpha test functions for GL3.3. This can also improve performance.
15) Added draw_sprite_padded() which is useful for drawing menus, buttons and other things like that. Will be instrumental in GUI extension I'm working on.
16) Added a basic ring buffer. If the buffer type is STREAM (like the default render buffer), then it uses a ring buffer. It basically means that if you render stuff with the same stride (like 6 bytes for example), it will repeatedly call glBufferSubData on different parts of the buffer and not cause GPU/CPU synchronization. This is useful for things like particle systems. For now it will only work when you render something in one batch with the same stride (like particles). In my test I draw 10k sprites - I get 315FPS with current master, and 370FPS with this change. But the gain will not be noticeable in more regular cases; the Minecraft and Mario examples see zero gain from this change. I think in the short term the biggest gain can only come from a texture atlas or texture arrays. Another thing would be to use GL4 features, like persistent memory mapping. Learn more here: http://gdcvault.com/play/1020791/ and about ring buffers here: https://developer.nvidia.com/sites/default/files/akamai/gamedev/files/gdc12/Efficient_Buffer_Management_McDonald.pdf.
17) C++11 is now enabled. This means from now on we will start using C++11 features, including unordered_map which is already used in shader system.
18) Some OpenAL changes so calling sound_play(-1) doesn't crash Linux.
There were many other changes as well, but I have forgotten most of it, as this was originally a mid-August merge.
I would like it if some other people tested it. I have tried it on an AMD laptop and an NVIDIA PC. Will do some additional tests later.
Also, there are performance improvements for GL3 stemming from these changes. Project Mario now runs at 1620FPS (vs 1430FPS in master). But there can also be a decrease in some cases, because the caching can actually take more time than calling the GL function. For example, uniforms are very optimized and are meant to be changed frequently (like 10 million times a second), so adding a caching layer can actually slow things down. It is still useful for debugging purposes, as we then know what types the uniforms are and what data they hold (so people can query this data back without touching GPU memory), and I'm still investigating whether leaving the cache in but disabling the cache checks is more useful and faster.
I recommend testing on:
1) Project Mario - http://enigma-dev.org/forums/index.php?topic=1161.0 (GL1 and GL3).
2) Minecraft example - http://enigma-dev.org/edc/games.php?game=65 (GL1 and GL3).
3) Simple shader example - https://www.dropbox.com/s/6fx3r0bg5puyo28/shader_example.egm (GL3).
I will fix up the water example and post a link as well.
This is how they should look after running:
You can find the branch here: https://github.com/enigma-dev/enigma-dev/commits/GL3.3RealCleanUp
To test it you can do this via git:
1) Open console, and cd to enigma directory
2) Write "git checkout GL3.3RealCleanUp"
3) Then open LGM and test
Another way is to download this: https://github.com/enigma-dev/enigma-dev/archive/GL3.3RealCleanUp.zip
Then you must extract it. Copy LGM, the plugin directory, ENIGMA.exe, and ENIGMAsystem/Additional from your working version of ENIGMA into the extracted directory.
Please test, give feedback and bug reports. I would want this merged as soon as possible.
Known bugs:
Text in Project Mario is messed up. Can't remember if this was fixed or not. It looks fine in Minecraft, so not sure what is going on. Maybe Robert knows.
13
Developing ENIGMA / Unstable master?
« on: October 03, 2014, 10:44:50 am »
Wanted to ask if Project Mario works in master now? Or am I just mad? I was making my GL3.3 fixes and couldn't figure out why the water wasn't drawing properly. Then I tried master and noticed that GL1.1 and GL3 don't draw the water there either. Is this true? It's weird, as I thought it ran just fine. What are you guys running ENIGMA on when testing? I run about three examples (Project Mario, Minecraft and now my shader example). We really need automated testing, because right now it's very hard to catch and fix bugs. Now I need to backtrace to figure out when the bug was introduced. Also, is text still messed up? At least in Project Mario it is.
Also, I see a large FPS improvement in GL1 which is nice. I get up to 2300FPS now, instead of previous 1200FPS. I guess that is because of the glList optimization. Having 2.3k FPS isn't really useful for a game, but still. Also Minecraft example ups from 700 to 900. There is a bug though, that doesn't allow GL1 models to be updated (mining in the minecraft didn't work), but I fixed that.
In my GL3.3Fixes branch I get 1600FPS in GL3, up from 1400FPS in master. So not only does it have more features and no compatibility functions, it is also slightly faster. So after I fix the water and do more testing, it should be ready for wider testing.
edit: The bug is in the screen_set_viewport() function. Was introduced when Robert fixed window scaling issues. For some reason it breaks the water, which could also mean it breaks surfaces in general, because surface_set_target() uses screen_set_viewport(). Investigating the problem.
edit2: Well, long story short, surfaces don't need window functions in them. So I removed screen_set_viewport() and replaced it with glViewport and glScissor, which are the only two calls surfaces need.
14
Developing ENIGMA / git madness
« on: September 27, 2014, 08:49:27 am »
Maybe someone can help. In the middle of August I made massive changes to GL3, so we could move forward with many features. Sadly, no one had time to test it during the week it sat as a pull request. So after merging, the whole thing broke. I reverted master to the previous commit so master would work again. I would still really like it to be merged, though. To do this I need to update the GL3.3Fixes branch (https://github.com/enigma-dev/enigma-dev/commits/GL3.3Fixes) with the current master changes, and then test/fix everything that is still broken. Sadly, I don't know how to do that in git. I just tried this:
Quote
git checkout GL3.3Fixes
git merge master
But this just returns "everything is up to date" when it clearly isn't. I can try rebasing, but that can cause problems later. I think the issue here is that the revert commit in master (https://github.com/enigma-dev/enigma-dev/commit/07ac18577f5f8007ac6d5e3b5282edafdfeeed02) needs to be behind my branch (so that when I merge with master it doesn't revert my changes), but at the same time any commit AFTER that revert needs to be ON TOP of my branch. So how can I do it?
Basically I need the GL3.3Fixes branch to be updated to the newest master, while still remaining the GL3.3Fixes branch. My fixes need to be on top after the revert, but behind the other commits.
15
Developing ENIGMA / New parser please
« on: August 26, 2014, 02:45:11 pm »
I don't want to sound like a broken record, but I want that parser now just as much as I wanted it 2 years ago. Right now I want to use things like classes and structures in my code to make it cleaner, and I cannot, because the parser doesn't support them. They need to be added in definitions to compile, and even then it won't work, because "struct point3D { double x; }; point3D myPoint; myPoint.x;" doesn't work. The "myPoint.x" part is parsed as if the structure were an object instance, so it tries to convert "point3D to int".
Right now I do stupid stuff like, have a grid full of lists (so I have regions and every region has a list of points in that region), and those lists full of grids (where every grid has width of 3 which holds x,y,z).