This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.
31
Proposals / We need our own flag for ignoring return values
« on: April 18, 2013, 03:00:51 pm »
With some functions, it's less obvious that you really need to use the return value; string_replace_all() is a good example. When I was young, I spent ten minutes figuring out why string_replace_all wasn't doing anything. I was livid. It's a good case of RTFM, yes, but it's also a great case where we can save users a lot of frustration by using flags like [snip]need_result[/snip], or something.
This is basically a note for me to add the function into JDI, possibly with more compiler hints. If you have ideas for more compiler hints, post them here, so I can best think up a system for handling them.
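For comparison, GCC already offers this kind of hint for C and C++. A minimal sketch of the analogy follows; the attribute is real, but lowering a [snip]need_result[/snip]-style flag to it is only an assumption here:
Code: (cpp)
#include <string>

// GCC warns whenever a caller discards the result of a function declared with
// this attribute; a need_result-style flag could map to exactly this.
__attribute__((warn_unused_result))
std::string string_replace_all(std::string str, std::string substr, std::string newstr);

// string_replace_all(s, "a", "b");   // warning: ignoring return value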
32
Proposals / Optimizations
« on: April 02, 2013, 10:33:21 pm »
There is a lot of room in a typical piece of GML for optimizations: the kind GCC is not capable of making in arbitrary code. I want ENIGMA's optimizer to be extensible so that we aren't hard-coding a lot of passes into the mechanism. That said, some passes will need to be hard-coded, and hard-coding them does not seem like a hack.
To best avoid hacks, we need to lay out how we want the optimizer to promote extensible lists of things to optimize. To best lay out that framework, we need to know the kinds of optimizations that need to be made. I will name as many optimizations and classes of optimizations as I can. I've added emphasis to phrases I'm trying to pay attention to, such as what needs to be hard-coded, what optimizations we can automatically enumerate, and what optimizations we can kickstart through pattern matching.
ENIGMA, as you all know, is typed; I wouldn't call it strongly typed, due to var, but you can specify strong types explicitly. When you do not specify a type, presently, var is used as the default. This is *terrible*, as var is roughly half the speed of a regular primitive in terms of raw calculations per second, even when optimized. Variant, on the other hand, is as fast as a double.
ENIGMA should, at the very least, determine whether the variable is used as an array, and if it is not, use variant. Then GCC will do some inlining magic and, in general, no speed will be lost.
ENIGMA should, preferably, go one step further and determine whether strings are ever assigned to the variable; if not, it should use a double instead. Or, if only strings are assigned and no reals, it should use a string.
ENIGMA should, ideally, then go one step further and determine the slowest data type assigned (in terms of double, fastint_t, and long) and narrow the variable type down to it.
This process will simply be hard-coded, but will possibly employ other aspects of the system.
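A minimal sketch of what that hard-coded pass could boil down to, assuming per-variable usage facts are collected first; all names below are placeholders, not actual ENIGMA code:
Code: (cpp)
#include <string>

// Facts the parser/optimizer has gathered about one undeclared variable.
struct usage_info {
  bool used_as_array;
  bool assigned_string;
  bool assigned_double, assigned_long;  // widest real types seen in assignments
};

// Pick the cheapest type that still covers every observed use.
std::string narrow_type(const usage_info &u) {
  if (u.used_as_array)   return "var";      // arrays still need full var
  if (u.assigned_string) return (u.assigned_double || u.assigned_long) ? "variant" : "string";
  if (u.assigned_double) return "double";   // slowest real type assigned wins
  if (u.assigned_long)   return "long";
  return "fastint_t";
}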
Let's go one step further. The above seem easy to implement, yes? But consider this case:
Code: (edl)
var a;
a = choose(1, 2, 3, 4, 5);
We're going to look at two optimizations that can be made to that snippet. An intelligent human would reduce that code to the single line [snip=cpp]int a = random_integer(1, 5);[/snip], but an EDL optimizer could only reasonably be expected to produce one of these outputs:
Code: (cpp)
int a;
a = choose(1, 2, 3, 4, 5);
Code: (cpp)
int a;
a = random_integer(1, 5);
In the first sample output, [snip]a[/snip] is reduced to [snip]int[/snip], even though [snip]choose[/snip] returns [snip]variant[/snip]. I believe this can be done in an enumerable fashion by having either a macro or an entry in an optimization file denote which parameter(s) share a potential return type. For choose(), any parameter can define the return type. I can't think of a function that has one specific parameter which can be either real or string and which defines the return type, but I'm sure there is one.
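As a sketch of that "entry in an optimization file" idea, the annotation could be as small as a mask of which parameters feed the return type; everything here is hypothetical and only meant to show the shape of the data:
Code: (cpp)
// One entry per annotated function, parsed from the optimization file.
struct return_type_hint {
  const char *function;  // engine function the hint applies to
  unsigned param_mask;   // bit i set => parameter i's type can define the return type
};

// For choose(), every argument contributes, so every bit is set.
const return_type_hint return_type_hints[] = {
  { "choose", ~0u },
};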
In the second sample output, [snip]choose(sequence)[/snip] is replaced with [snip]random_integer(min(sequence), max(sequence))[/snip]. This is a non-trivial replacement which would need to be hard-coded, though it is possible we could enumerate functions which might need to be replaced with other functions based on their parameters. This way, implementers would need only supply the name of the function and a method/virtual class which does the checking and replacement.
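A rough sketch of that enumerable hook, with invented names; the implementer registers the function name plus a virtual class that does the checking and rewriting:
Code: (cpp)
struct AST_call;  // whatever node type the new parser ends up producing for a call

// Interface an implementer subclasses to register a call-replacement rule.
struct call_replacement_rule {
  virtual const char *function_name() const = 0;         // e.g. "choose"
  virtual bool applies(const AST_call &call) const = 0;   // e.g. arguments form a consecutive integer range
  virtual void replace(AST_call &call) const = 0;         // rewrite to random_integer(min, max)
  virtual ~call_replacement_rule() {}
};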
Moving the assignment into the initializer is another potential optimization, though in most languages it would not make a difference, so the attempt could only serve to cause harm if initialization were not valid for whatever reason.
Simpler optimizations exist in other code. Consider this snippet:
Code: (edl)
draw_set_color(c_red);
b = 10;
draw_set_color(c_blue);
It's clear to a human that the first call to [snip]draw_set_color[/snip] does nothing. However, if the assignment [snip]b = 10[/snip] were replaced with [snip]draw_circle(mouse_x,mouse_y,10,0)[/snip], removing line 1 would cause misbehavior. It would therefore be necessary to have the implementer either specify a list of functions to reference in determining whether two successive function calls of the same type undo each other, or else specify a class/method for examining an AST between two given nodes to make that call for the optimizer.
We then run into a separate, but related, case:
Code: (edl)
draw_set_color_rgba(0, 0, 0, 0.5);
draw_set_color(c_blue);
draw_circle(0,0,10,0);
draw_set_color(c_red);
draw_set_alpha(1);
The most efficient code output for that is as follows:
Code: (edl)
draw_set_color_rgba(0, 0, 255, 0.5);
draw_circle(0,0,10,0);
draw_set_color_rgba(c_red, 1);
But how does the optimizer know to do that? My only thought is that a good pattern to look for would be consecutive calls to related functions in a given set. A class would be provided to give that set of functions, along with hard-coded logic to do the merging—how else do we get 0,0,255 out of c_blue? The best we seem to be able to automate here is matching consecutive calls to functions in set A which are not separated by calls to any functions in set B, and then invoking the merge method on the functions from the first set.
I will post more optimizations when I have some more time and am not so tired. I'll also do some proofreading, because I'm sure this reads like ass.
From what I have written here, it looks like the best approach is to have a base class defining a kind of optimization to perform, and then have child classes to carry out a specific operation, which can then have child classes for very similar optimizations. So call-consolidating (as in [snip]draw_set_color[/snip] optimizations above) would be one child of the optimization class, which would employ its own virtual class for specifying sets of functions to consolidate, as described above.
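In other words, something along these lines; this is only a sketch of the hierarchy being described, and every name in it is a placeholder:
Code: (cpp)
#include <set>
#include <string>

struct AST_node;  // produced by the new parser

// Base class: one kind of optimization pass.
struct optimization {
  virtual void run(AST_node &root) = 0;
  virtual ~optimization() {}
};

// Child: consolidate consecutive calls to related state-setting functions.
struct call_consolidation : optimization {
  virtual const std::set<std::string> &mergeable()  const = 0;  // set A: calls we may merge
  virtual const std::set<std::string> &separators() const = 0;  // set B: calls that block merging
  virtual void merge(AST_node &first, AST_node &second) = 0;    // hard-coded folding, e.g. c_blue -> 0,0,255
  void run(AST_node &root);  // generic scan shared by every consolidation pass
};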
Please do submit feedback; this process is going to need a lot more thought.
33
Off-Topic / Thank you, Seagate!
« on: March 15, 2013, 01:15:03 am »
Everyone remember to give a huge "Thank you!" to Seagate, incorporated.
The hard drive I bought from them in December—the one with all my development tools and other goodies on it—died on me last Thursday night, so I placed an order for a new drive on Friday morning at 9:00, and paid extra to have it shipped to me via 2-day air.
I should be receiving it tomorrow, a week after ordering it, just in time for my spring break to end. I'll then be spending the next day or two migrating files, and should be all done and ready for action by the time school resumes.
34
Proposals / Overloads in other languages
« on: February 22, 2013, 12:17:22 pm »
This is mostly for TGMG/whoever furthers EGMJS.
Code: (JavaScript)
function my_overloaded_function(x,y) {
  // code
}
my_overloaded_function.argc_min = 2;
my_overloaded_function.argc_max = 2;
my_overloaded_function.overloads = [];
my_overloaded_function.overloads[0] = function(x,y,z) {
  // code
};
my_overloaded_function.overloads[0].argc_min = 3;
my_overloaded_function.overloads[0].argc_max = 3;
That will be able to translate directly to JDI's storage classes. Essentially, some JS interpreter will use reflection to read through and copy those values from each function into the appropriate storage classes in JDI. Not a huge deal.
This is an optional feature of the engine, but the new EDL specification does support user-overloaded functions.
The pretty printer must select the correct overload when outputting code. JDI will provide helper functions to do this.
35
This is a reminder to myself to do two things when I finally get around to writing that C++ pretty-printer:
1) Place [snip]#line[/snip] directives before each piece of code.
2) Have the pretty-printer ensure that the lines of C++ match up with the original lines of EDL.
This will ensure that error reporting by GCC can be indicated in the correct piece of code.
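For example, the generated C++ for an event could lead with a directive like this (the file name is only illustrative); GCC then reports diagnostics against the EDL line numbers rather than the generated ones:
Code: (cpp)
#line 1 "object0_event_step.edl"
x += hspeed;   // an error here is reported as object0_event_step.edl:1
y += vspeed;   // ...and one here as object0_event_step.edl:2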
If anyone is interested in a homework assignment, I am interested to know if the table used by GDB is generated according to the [snip]#line[/snip] directive as well. If it is, this means that breakpoints can be set as easily as passing "break object0_event_<whatever>:<linenum>" to GDB. Not certain that is the case, but a little testing should confirm it.
36
Announcements / Iji
« on: February 03, 2013, 01:17:18 pm »
I was approached today by someone on the IRC regarding an apparently quite famous game made in Game Maker 5 and re-released for Game Maker 7. By the sounds of it, a lot of you should have heard of it. The game is Iji, by Daniel Remar. You can view it and download it from his home page here:
http://www.remar.se/daniel/iji.php
Apparently, the race is on to get the project ported to other operating systems, and GM is (for whatever reason) out of the question. It seems to me like this would be a good place, strategically, for ENIGMA to make an entrance.
There is a discussion open on the Cave Story forums, for those who are interested in a little related literature:
http://www.cavestory.org/forums/index.php?/topic/4628-iji-ports-permission-granted-by-remar/
It seems that the biggest obstacles standing between the game and working on ENIGMA are timelines and name conflicts, which I guess are both a bit overdue. Unfortunately, it'd be unwise to add them to the old compiler with the new one so close to being finished, so I guess the work is going to largely involve me.
I will try to commit some time to getting that pretty printer written and plugged in to the current system. I'd appreciate it if someone else could go through and see what other functions are missing from the game. That probably means you, polygone.
Just posting this here as an FYI. Thoughts on the endeavor are welcome.
37
Announcements / Trello
« on: January 09, 2013, 11:14:27 am »
We have a Trello, now.
ENIGMA's development interest has been booming lately—despite Ism and me not having very much time to devote to the project, in general. As such, I figured now's a good time to acquaint everyone with Trello. Dazappa has been putting his to-do points up, and I've been putting up the simpler points of mine. If anyone is interested, you are welcome to take on any of them. Just let us know or move the card to "doing" yourself, so two people don't end up doing the same thing.
So. Developers, go ahead and have a look. If you're interested in participating, create a Trello account real quick and we'll add you. There is no qualification for "developer" other than "is interested in developing." Don't be shy.
The alternative is to live vicariously by telling us which card you want and which you have finished. Your choice.
38
Proposals / Image Speed
« on: December 27, 2012, 02:17:12 pm »
The LGM sprite editor, like the GM equivalent, has an entry for image speed which is used for previewing. It's a little confusing to newcomers who would expect that the speed is used in-game, even though it's just a demonstration. It'd be the simplest thing to add that value to the EGM format, and support it in the exporter. The alternative, of course, is to just clarify in the GUI that the speed is for previewing purposes, and add a tooltip explaining how to set it in the object.
When you think about it, though, how often do you really want two objects with the same sprite playing at a different speed? In that case, you need to use image_speed anyway, so why bother forcing it to start as 1?
39
Announcements / Christmas Plans
« on: December 24, 2012, 08:27:54 am »
First off, Merry Christmas to all of you. Hope you are enjoying some time off for the holidays.
The parser is behaving as expected, which is great, considering I told you all that it would do everything short of walking on water. However, there are still two problems I see in ENIGMA that I fear will go uncorrected until they bite someone in the ass; I am going to address both of them, plus one personal problem, here.
If you are a developer, try to pay a little attention. At least to the first two.
Problem 1: The extension system is subtly broken.
I don't know if anyone noticed this (I think HaRRi has stumbled upon it?), but the extension system is not as modular as it was designed to be, due to issues with uninformed sideways casting. The compiler handles all the casting; the linker is not involved. Thus, each extension assumes it is the only class that enigma::object_locals inherits virtually. Issue is, it is not. As a result, since alarms come first alphabetically, the alarm extension is the only one that will work.
Contrary to popular belief, extensions were designed to facilitate adding "heavy" functions, as opposed to "groups of two or more" functions. By "heavy," I mean functions with weighty dependencies that might drive someone who cares about disk and memory efficiency to drop us a complaint. Maybe someone doesn't need a 16-integer array in each object (talking of alarms). Maybe it really bothers someone that their objects all need a path_index variable, as well as position and time variables for their path. The point is, extensions make it so they can remove the entire system from their game, including the local variables that weigh down their objects.
To clarify, the issue is simply that when a game needs both alarms and paths, each extension assumes its bytes come first in the structure, and one of them must logically be wrong. The fix is simply to do the casting from within the engine file (which is informed as to what each extension looks like and which extensions are being included). The detriment of leaving it as-is? Data gets misread and life is hard.
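A minimal sketch of that fix, with stand-in extension classes: the casts are compiled inside the engine, which sees the complete definition of enigma::object_locals (and therefore the real virtual-base layout), so no extension has to guess that its bytes come first.
Code: (cpp)
namespace enigma {
  // Stand-ins; the real locals are generated per enabled extension.
  struct extension_alarms { int alarm[16]; virtual ~extension_alarms() {} };
  struct extension_paths  { int path_index; double path_position; virtual ~extension_paths() {} };

  // The complete type is only known once the set of enabled extensions is decided.
  struct object_locals : virtual extension_alarms, virtual extension_paths { /* ... */ };

  // Informed sideways casts: because the full definition above is visible here,
  // the compiler resolves the virtual-base offsets correctly for every extension.
  inline extension_alarms *get_alarms(object_locals *o) { return o; }
  inline extension_paths  *get_paths (object_locals *o) { return o; }
}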
Presently, you access the current instance (ENIGMA's equivalent of this) using a global. There are actually a few problems with doing this:
- Casting that global to a virtual ancestor doesn't always work (as discussed above)
- Only one object can be this per running game per femtosecond
- All code has to be aware of any access boundaries associated with having only one object being this at a time
The solution: I am turning the instance addressing system on its ear.
The solution is actually pretty simple. We dump the globals that represent the current instances/iterators (namely, [snip]ENIGMA_global_instance[/snip] and [snip]instance_event_iterator[/snip]) and we replace them with a parameter given to ENIGMA functions which modify the current instance.
So basically, instead of this function:
Code: (cpp)
void motion_set(int dir, double newspeed)
{
  enigma::object_graphics* const inst = ((enigma::object_graphics*)enigma::instance_event_iterator->inst);
  inst->direction=dir;
  inst->speed=newspeed;
}
We have this function:
Code: (cpp)
void motion_set(enigma::object_graphics* enigma_this, int dir, double newspeed)
{
  enigma_this->direction=dir;
  enigma_this->speed=newspeed;
}
So, not a huge difference, but enough that it will mean some chaos.
If you are worried about the parameter, don't be. That's where the new parser comes in. I have not added this yet, as the pretty printer is not written. I want everyone on board with this idea before I ship it.
This idea has been up on the proposals board for some time; we're just finally to a point where I think I'll have the free time to deal with it. I will put everything up in the ENIGMA-JDI branch before I begin the migration. I'll make a new newspost to let you know the time has come to tackle it and do regression testing, when I'm ready for the change myself.
Again, the benefits are a fixed extension system, the option of threading, and a more stable instance addressing system.
As for the central iterator list (the list of active iterators to be conscious of when deleting shit), I am still happy with it, though I may refactor it slightly to maintain links in the list correctly instead of moving deleted iterators back (in case an iterator is reused which expects one type of object and instead finds another type by mistake). For now, it should be fine.
Problem 2: The Platform-Graphics bridge is ill-conceived and horrible.
The engine directory structure gives the impression that you can use OpenGL or DirectX as your library on Windows. Presently, this isn't the case, as both systems require the window to be initialized in a special way. You may be familiar with WGL and GLX; they are, respectively, the Windows GL interface and the GL-X11 interface. Code for these monsters is found right in the Platforms folder—so far we've avoided potential issues using preprocessors, but those hurt compile time and are generally unattractive.
It's been a clusterfuck so far, because we're dealing with three entities in the Platforms folder instead of one:
- Platform-dependent code for all manner of things, including grabbing executable name and working with directories
- Window system code; the code that creates and manipulates windows
- Platform-specific Graphics code, eg, WGL/GLX
It's messy to separate those three items, which is why we have issues.
Solution: Create a new folder of bridge systems.
I don't care where in SHELL this folder is created, but it needs to contain folders such as Win32-OpenGL, Win32-DirectX, X11-OpenGL, etc, for each valid pairing of Window System - Graphics System.
We may also want to separate out the window system code and put it with the widget system code. This is leading to minor technicalities on Linux: The window is governed by raw X11, while the widgets are governed by GTK+. It hasn't led to any problems, but it has the potential to do so—especially on pairings in the future (eg, if some poor bastard tries to write QT widgets).
Since much of the code in the Win32/ folder was written by me when I was 16, it may be a good idea to comb over it again, anyway. If this can serve as an excuse to do so, go ahead and let it.
Personal Problem: I return to school after two weeks.
I have 14 days of freedom before I return to school, and I already need to start making preparations for that. Depending on how I tackle this semester, it may be even more work than the last. The good news? Two things: (1) the course which will generate the most work? It's on game design. That's right, the thing we've all been doing since we were 12. (2) I am taking five courses instead of six, and one of them is philosophy. For those of you who have not taken a college philosophy course, they are an easy, effortless, and even fun A if you have an open mind and don't mind some discussion.
The bad news: The game I design has to use Ogre (or Unity or XNA—Not even considering the latter, not wanting the former since I use Linux), which I have never used before.
The other good news: I intend to deal with this gracefully by writing an Ogre extension for ENIGMA. If I have to deal with Ogre, this project may as well benefit from it.
Now you are all up to speed. In summary, brace for impact.
40
General ENIGMA / EGMJS
« on: December 14, 2012, 01:57:26 pm »
TGMG's interested in maintaining the EGMJS port I started a while back, and doing that under the new parser will be pretty easy, imo.
This topic is to try to make it even easier for him.
I will record general notes to all implementers on the Wiki. Notes specifically concerning my thoughts on the implementation of EGMJS will go in this topic.
My first concern is how TGMG will load definitions. Presently, ENIGMA uses a central JDI context to store its definitions. Since JDI is inherently a C++ parser, this is done by invoking it on the engine file directly. JDI is not a JavaScript parser. However, JDI's structure is easy to figure out, and JavaScript is capable of reflection. The way I see it, there are three ways you can go about this:
1) Choose the language that is going to host the crawler.
This can be Java using javax.script.ScriptEngine (javax.script.ScriptEngineManager.getEngineByName("JavaScript")), or in C++ using Google V8. Both methods have their advantages:
- If you use Java's ScriptEngine class, no additional libraries need to be included or set up. Java's also pretty good about doing the integration for you, and building V8 for Windows is an impossible task (it requires MSVC++). The difficulty is that you have to get this information back to ENIGMA, and it adds ENIGMA.jar as a dependency to the process (meaning a CLI build without Java will be completely impossible).
- If you use Google V8, everything can be done from within C++; you can use JavaScript reflection to call native methods directly. The C++ methods can populate JDI structures in memory while the JavaScript engine is doing the iteration. This is bound to be more efficient, as Java does not guarantee its scripting engines are even compiled, to my knowledge.
The bottom line is, by this method, you need to use JavaScript reflection to communicate a list of available functions to ENIGMA so the parser can do syntax checking.
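Whichever host does the crawling, the payload it hands back to ENIGMA can stay tiny; a sketch of what the syntax checker minimally needs (all names here are hypothetical):
Code: (cpp)
#include <string>
#include <vector>

// One entry per function the reflection pass finds on the engine's global object.
struct js_definition {
  std::string name;  // e.g. "draw_circle"
  int argc_min;      // taken from the function's declared/annotated argument counts
  int argc_max;      // could use -1 to mean "variadic"
};

// Produced by the V8/ScriptEngine crawler, consumed by the EDL syntax checker.
std::vector<js_definition> crawl_engine_definitions();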
The other method that I can see you using is having emscripten parse the JavaScript engine, and then polling it for definition names to pack into JDI classes. This method has similar advantages. On the downside, it means that EGMJS is dependent on LLVM—that's a heavy dependency that I'm in general not fond of. On the other hand, it means that you'll be asking LLVM for the definitions and (probably) using LLVM to store the code so emscripten can compile the code, which would open doors for ENIGMA to compile to other languages for which LLVM has pretty-printers. It might also introduce some issues in the translation, but from what I can tell, as long as you keep within a relatively decent-sized subset of LLVM instructions, you should avoid such issues.
I see a great amount of merit in each option, so I do not care which method you choose. If you go with the V8/ScriptEngine method, I will be happy to have a two-megabyte JavaScript export extension. If you go with emscripten, I will be happy to have LLVM as an abstraction layer. Let me know what you're thinking, though.
41
Announcements / JDI ↔ Parser: Code formatting, completion
« on: December 11, 2012, 11:39:46 am »
Tomorrow I take the last of my finals, and so by tomorrow evening I should, finally, be free to work on ENIGMA again. Forthevin has been doing a great job of adding things left and right, but I don't expect him (much less anyone else) to be able to help much with the parser. Especially if I haven't laid out any plans for it publicly.
I was looking at what I have done and what I need to do with it when I noticed that one of the smaller bullet points for the parser—the ability to automatically format code neatly—is not presently possible for two simple reasons: comments and preprocessor directives.
Until those can be resolved, the parser will be incapable of correctly formatting code without dropping comments and potentially preprocessing away some code. The solution is simple, but it involves me editing JDI some more, which probably isn't what anyone wants to hear.
So let me present another benefit that can come from having JDI sew comments and preprocessor blocks into returned tokens: Javadoc-esque code completion.
If JDI reads comments in, ENIGMA or LGM can parse out formal comments like Javadoc and Doxygen do. Basically, we could use Doxygen to describe the purpose of GM functions in-line. When the user selects the function in the code completion menu, we could display information about each parameter and what exactly the function does, instead of just the names of the function and its parameters.
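A sketch of what that could look like on an engine function; the comment text is made up, but the Doxygen syntax is standard:
Code: (cpp)
#include <string>
using std::string;

/** Replaces every occurrence of a substring within a string.
    @param str     The string to operate on.
    @param substr  The substring to search for.
    @param newstr  The text substituted for each occurrence.
    @return A copy of str with every occurrence of substr replaced by newstr. */
string string_replace_all(string str, string substr, string newstr);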
Another option is to have some duplicate code and let ENIGMA parse its own expressions independent of JDI. There is no other benefit to doing this, as JDI can handle any unary prefix, unary postfix, binary, or ternary operator already, including GML's ^^ and <> operators.
Or, I can just belay the code formatting idea altogether and get the parser working as it does now.
It's up to you people, but try to decide before tomorrow when I actually have time to do some real coding.
42
Announcements / Commit Privileges
« on: October 22, 2012, 09:03:39 pm »
I'm swamped with college. TGMG's swamped with college, or "uni" (as in "university") as they call it in whatever ghetto he's from (and, well, everywhere else but America). Ism's swamped with "irl things," i.e., her job. That's it for the "primary" developers. HaRRi and polygone have also not committed much recently.
In fact, you may have noticed that the lion's share of recent commits belongs to forthevin. To date, I haven't received any notices of him fucking anything up, and I've just merged another pull request of his which I haven't the time to test, so I decided that the best solution is just to instate him as a contributor with commit access.
So, everyone welcome forthevin to the development team.
Also, go ahead and direct all bitching at the missing primary developers here as well.
As for you, forthevin, don't worry; no additional responsibility seems to come with the title. Apparently. Except maybe fixing things if you fuck them up.
43
Announcements / Break In
« on: October 02, 2012, 11:07:10 am »
I believe it is my legal obligation to inform everyone we've had a break-in. Presumably by a bot.
At 2PM yesterday I received a report that malware was being hosted on our server and that it was likely we had been compromised. In fact, it appears that some entity had gained root access to our server and loaded a phishing page up on it. The files all belonged to the root account, which means that the entity had full access to our system; this includes databases.
I don't think anyone should be overly concerned, as all passwords are handled by SMF and are therefore salted and hashed.
We are unsure how the break-in occurred, but we believe it may have been related to an old wordpress install hosted elsewhere on this server. From this point forward, no one say "Wordpress" to me.
So, in an effort to uphold due diligence, etc., this is your warning that it is possible (but unlikely) that someone has a copy of all salted password hashes. It is also possible they have a large list of email addresses. It is also possible (if extremely unlikely) that they could retrieve your password by allocating their presumably large network of bots to brute-forcing the hashes. I wouldn't worry about that happening.
Most people don't use very powerful passwords over http, anyway.
So, this is your heads up. Sorry about the shitty news. We're wiping old shit we don't maintain and putting more security in place to prevent this from happening again.
44
Proposals / Reintroduction of build mode
« on: September 13, 2012, 02:33:54 pm »
I need reports on a successful widget system from each platform. Widgets have not stopped working on Linux, to my knowledge, provided the correct GTK packages are installed. However, they don't work on Windows due to problems with the outdated windres.exe, of which I have still not heard the end, and they also do not work on Mac without serious poking, I imagine.
So I need TGMG or another similarly capable Mac developer to write Cocoa equivalents for the Win32 widget functions (if the Cocoa API is free-form like Windows) or to the GTK widget functions (if Cocoa is more like GTK). I stress this difference because the Windows widget functions have a function which behaves like the layout managers in wx, GTK, and Java Swing: It is capable of ordering items into a table for the existing layout option.
So get widgets working, people. I've coded what I can; we just need windres and a cocoa port.
After that, I'll need collaboration from IsmAvatar to actually set it back up as it once was.
If you're wondering what build mode is, it's a secret.
45
Proposals / Static Sprites
« on: September 13, 2012, 02:22:40 pm »
An idea you'll find in non-GM game development suites is static sprites. Mechanistically, they're sprites that you place at a fixed (static) position in the room. Essentially, they are animated tiles. This is 90% UI related, or I'd probably just throw it in without ever writing up a proposal. LGM's tile editor is sad as it stands, and so is ENIGMA's tile implementation (no offense to TGMG, who just wanted something in that worked).
Ideally, they'll be placed at a certain depth using the tile editor. We'd want a way to set their coordinates, animation speed, and maybe scale/rotation.
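A rough sketch of the per-placement data this implies; every field here is a guess at the proposal, not a spec:
Code: (cpp)
// One static sprite placed in a room, stored alongside the tile data.
struct static_sprite {
  int    sprite_index;   // sprite resource to draw
  double x, y;           // fixed position in the room
  int    depth;          // drawn at this depth, like a tile layer
  double image_speed;    // animation speed
  double xscale, yscale; // optional scale...
  double angle;          // ...and rotation
};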
Ideally, they'll be placed at a certain depth using the tile editor. We'd want a way to set their coordinates, animation speed, and maybe scale/rotation.