This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.
811
Issues Help Desk / Re: Compiling Option
« on: April 12, 2013, 11:24:25 am »
If that's all you need, go for it.
812
Issues Help Desk / Re: Compiling Option
« on: April 10, 2013, 02:50:57 pm »
Unfortunately, that's all I can recommend for you. I've talked to cheeseboy, who originally mucked around with the cross-compiler descriptor until he got it sort of working. According to him, ALURE has never built correctly in the cross-compiler toolchain. So even if I did fix whatever it is make is bitching about, it wouldn't do you much good.
I intend to replace ALURE with a custom set of codec managers, eventually. ALURE is a convenience library, so it really doesn't have a place in ENIGMA, as easy as it made life for me. Moreover, KCat changes its interface every five minutes, and Ubuntu is always 12 years behind, so it's really just a messy prospect, maintenance-wise. And to top it all off, since ALURE handles all the codec importing, ENIGMA doesn't have any control over it.
Come to think of it, I may yet be able to use some pieces of ALURE in ENIGMA...
But anyway, the point is that cross-compiling is a mess right now for a number of reasons. It'll be in sooner or later.
813
Issues Help Desk / Re: Compiling Option
« on: April 10, 2013, 10:05:39 am »
Something is wrong with the newest MinGW, and I've yet to investigate. I'd say there was something wrong with ENIGMA, but it's worked intermittently in previous releases of MinGW, and it has always worked in GCC on Linux. So basically, I wholly blame MinGW, and it'll be broken like that until I have time to find what's changed, and then it'll be broken again next time the MinGW team updates.
What's most disconcerting is that I originally blamed MinGW-make, but the cross-compiler descriptor instructs ENIGMA to use the regular make, which has proved capable of reading our makefile. So I'm really not sure what's up.
814
Announcements / Re: NaturalGM Website
« on: April 09, 2013, 10:09:25 pm »
The site has far too many references to ENIGMA. NGM is its own product, as is LGM. At very least, the IRC should point to #naturalgm, not to #enigmaide. It is probably best that the project have its own community and possibly even Wiki, though I can see the benefit in linking to the Wiki on this site as it is, largely thanks to you, relatively comprehensive in its present condition.
It does not bode well that the NaturalGM website is presently a thin layer over this website. That may ward people off.
815
Off-Topic / Re: choose your channel operator!!!
« on: April 08, 2013, 05:07:08 pm »
> I say we keep him and hope he finishes puberty soon.
But then we have menopause to worry about.
816
Off-Topic / Re: choose your channel operator!!!
« on: April 08, 2013, 02:48:48 pm »
Trolled, cheeseboy.
817
Off-Topic / Re: choose your channel operator!!!
« on: April 08, 2013, 02:40:10 pm »
Quote
If you want to revoke my admin privileges, you can also stop using my intellectual property
I would just like to point out, again,
Quote from: The Wiki
Please note that all contributions to ENIGMA are considered to be released under the GNU Free Documentation License 1.3 (see ENIGMA:Copyrights for details). If you do not want your writing to be edited mercilessly and redistributed at will, then do not submit it here.
This is a warning at the bottom of every page you edit. There's no such thing as intellectual property on a Wiki.
Anyway, carry on.
818
Issues Help Desk / Re: cross compilation errors
« on: April 05, 2013, 08:36:55 am »
He's on Linux, polygone.
Anyway, gra, it seems that i486-mingw32-g++ is not the correct name of the cross-compiler. It also appears that MinGW-Make sucks just as badly on Linux as it does on Windows. In Compilers/Linux/mingw.ey, make sure that the binary names are correct, and just use regular [snip]make[/snip] in place of the MinGW version.
820
Works in Progress / Re: [WIP] snake revenge :D
« on: April 04, 2013, 12:17:08 am »
Italian is fine. The EDC is a place to share games; they don't need to be open-source. Source is just the easiest way to distribute for all platforms.
821
Proposals / Re: Optimizations
« on: April 03, 2013, 04:36:00 pm »
The hardest part about type optimization is doing the optimization for literally every variable. Basically, I'll be generating dependency graphs between variables, and then using the smallest value in the largest cycle. It still won't be as efficient as if the user optimized the code himself, but it's a start.
The optimizations I gave are specific cases; the point was that there are lots of functions other than draw_set_color which might need reduced by the optimizer.
Something we might want to consider is having some optimization passes warn the user instead of enacting the optimization itself, that way the user can choose to ignore the optimization and no code is accidentally broken.
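To make the dependency-graph idea above concrete, here's a toy sketch (not ENIGMA code; the rank ordering and names are my assumptions for illustration): propagate each variable's type rank along assignment edges until nothing changes, so every variable in a cycle converges on the widest type appearing anywhere in that cycle.

```cpp
#include <vector>
#include <algorithm>
#include <utility>

// Hypothetical type ranks, narrowest to widest.
enum Type { INT = 0, DOUBLE = 1, STRING = 2, VAR = 3 };

// An edge (a, b) means "b = a": b must be able to hold anything a holds.
// Iterate to a fixpoint; variables in a cycle all end up with the
// widest rank occurring in that cycle.
std::vector<int> propagate(std::vector<int> rank,
                           const std::vector<std::pair<int,int>>& edges) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (const auto& e : edges) {
            if (rank[e.second] < rank[e.first]) {
                rank[e.second] = rank[e.first];
                changed = true;
            }
        }
    }
    return rank;
}
```

With a cycle a → b → c → a where b is assigned a double somewhere, all three become double; nothing is narrowed below what the cycle requires.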
822
Announcements / Re: NEW! Windows Zip Installer
« on: April 03, 2013, 08:20:18 am »
I will test this from school computers on Friday. I'll just go from lab to lab downloading and installing this.
823
Proposals / Optimizations
« on: April 02, 2013, 10:33:21 pm »
There is a lot of room in a typical piece of GML for optimizations; the kind GCC is not capable of making in arbitrary code. I want ENIGMA's optimizer to be extensible so that we aren't hard-coding a lot of passes into the mechanism. That said, some passes will need to be hard-coded, and hard-coding them does not seem like a hack.
To best avoid hacks, we need to lay out how we want the optimizer to promote extensible lists of things to optimize. To best lay out that framework, we need to know the kinds of optimizations that need made. I will name as many optimizations and classes of optimizations as I can. I've added emphasis to phrases I'm trying to pay attention to, such as what needs to have hard code, what optimizations we can automatically enumerate, and what optimizations we can kickstart through pattern matching.
ENIGMA, as you all know, is typed; I wouldn't call it strongly typed, due to var, but you can specify strong types explicitly. When you do not specify a type, presently, var is used as the default. This is *terrible*, as var is about half the speed of a regular primitive in terms of raw calculations per second, even when optimized. Variant, on the other hand, is just as fast as a double.
ENIGMA should, at very least, determine whether the variable is used as an array, and if it is not, use variant. Then GCC will do some inlining magic and, in general, no speed will be lost.
ENIGMA should, preferably, go one step further than that, and determine whether strings are ever assigned to it, and if not, use a double instead. Or, if no reals are assigned, only strings, it should use a string.
ENIGMA should, ideally, then go one step further and determine the slowest data type assigned (in terms of double, fastint_t, and long) and narrow the variable type down to it.
This process will simply be hard coded, but will possibly employ other aspects.
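The three-step narrowing described above might look roughly like this (a toy sketch under my own assumptions about what usage data the parser can collect; none of these names exist in ENIGMA):

```cpp
#include <string>

// What we observed about one variable across the whole project
// (hypothetical summary produced by an earlier analysis pass).
struct Usage {
    bool used_as_array;
    bool assigned_string;
    bool assigned_real;   // any numeric assignment
};

// Mirrors the narrowing ladder in the post: var only if the variable
// is used as an array; variant if reals and strings are mixed;
// otherwise the single concrete type that was actually used.
std::string pick_type(const Usage& u) {
    if (u.used_as_array) return "var";
    if (u.assigned_string && u.assigned_real) return "variant";
    if (u.assigned_string) return "string";
    return "double";  // a further pass could narrow this to fastint_t, etc.
}
```

The real pass would continue past "double" into the slowest-type analysis (double vs. fastint_t vs. long), but the shape is the same: start wide, narrow only when every observed assignment permits it.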
Let's go one step further. The above seem easy to implement, yes? But consider this case:
Code: (edl) [Select]
var a;
a = choose(1, 2, 3, 4, 5);
We're going to look at two optimizations that can be made to that snippet. An intelligent human would reduce that code to the single line [snip=cpp]int a = random_integer(1, 5);[/snip], but an EDL optimizer could only reasonably be expected to produce one of these outputs:
Code: (cpp) [Select]
int a;
a = choose(1, 2, 3, 4, 5);
Code: (cpp) [Select]
int a;
a = random_integer(1, 5);
In the first sample output, [snip]a[/snip] is reduced to [snip]int[/snip], even though [snip]choose[/snip] returns [snip]variant[/snip]. I believe this can be done in an enumerable fashion by having either a macro or an entry in an optimization file to denote which parameter(s) share a potential return type. For choose(), any parameter can define the return type. I can't think of a function that has a specific parameter which can be either real or string, and defines a return type, but I'm sure there is one.
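The "which parameters share a potential return type" entry could be as simple as a join over the argument types (sketch only; the enum and function names are mine, not ENIGMA's):

```cpp
#include <vector>
#include <algorithm>

enum RType { T_INT, T_DOUBLE, T_STRING, T_VARIANT };

// Widest common type of two arguments. A real/string mix forces
// variant; int joined with double is double.
RType join(RType a, RType b) {
    if (a == b) return a;
    if (a == T_VARIANT || b == T_VARIANT) return T_VARIANT;
    if ((a == T_STRING) != (b == T_STRING)) return T_VARIANT;
    return std::max(a, b);
}

// For choose(), every parameter can define the return type, so the
// inferred return type is the join of all argument types.
RType choose_return_type(const std::vector<RType>& args) {
    RType t = args.front();
    for (RType a : args) t = join(t, a);
    return t;
}
```

So choose(1, 2, 3, 4, 5) infers int, letting the declaration of [snip]a[/snip] be reduced even though choose() is declared to return variant.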
In the second sample output, [snip]choose(sequence)[/snip] is replaced with [snip]random_integer(min(sequence), max(sequence))[/snip]. This is a non-trivial replacement which would need hard coded, though it is possible we could enumerate functions which might need replaced with other functions based on their parameters. This way, implementers would need only supply the name of the function and a method/virtual class which does the checking and replacement.
Moving the assignment into the initializer is a potential optimization, though in most languages, it would not make a difference, and the attempt could therefore only serve to cause harm if initialization was not valid for whatever reason.
More simple optimizations exist in other codes. Consider this code:
Code: (edl) [Select]
draw_set_color(c_red);
b = 10;
draw_set_color(c_blue);
It's clear to a human that the first call to [snip]draw_set_color[/snip] does nothing. However, if the assignment [snip]b = 10[/snip] were replaced with [snip]draw_circle(mouse_x,mouse_y,10,0)[/snip], removing line 1 would cause misbehavior. It would therefore be necessary to have the implementer either specify a list of functions to reference in determining whether two successive function calls of the same type undo each other, or else specify a class/method for examining an AST between two given nodes to make that call for the optimizer.
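Here is one possible shape for that check (a sketch, with made-up names; it also assumes that statements not in the reader set are side-effect free with respect to the state): a setter call is dead if the same setter is called again before anything reads the state.

```cpp
#include <string>
#include <vector>
#include <set>
#include <map>

struct Call { std::string name; };

// `setters` is the set of state-writing functions (set A in the post);
// `readers` are functions that consume that state (set B). A setter is
// removed when the same setter recurs with no reader in between.
std::vector<Call> drop_dead_setters(const std::vector<Call>& code,
                                    const std::set<std::string>& setters,
                                    const std::set<std::string>& readers) {
    std::vector<Call> out;
    std::map<std::string, int> pending;  // setter name -> index in `out` not yet read
    for (const Call& c : code) {
        if (setters.count(c.name)) {
            auto it = pending.find(c.name);
            if (it != pending.end()) {   // earlier call was never observed: dead
                out.erase(out.begin() + it->second);
                for (auto& p : pending)  // re-index surviving pending entries
                    if (p.second > it->second) --p.second;
            }
            pending[c.name] = (int)out.size();
        } else if (readers.count(c.name)) {
            pending.clear();             // state observed; all pending setters live
        }
        out.push_back(c);
    }
    return out;
}
```

With this, the [snip]b = 10[/snip] statement does not pin the first [snip]draw_set_color[/snip], but a [snip]draw_circle[/snip] between the two calls does.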
We then run into a separate, but related, case:
Code: (edl) [Select]
draw_set_color_rgba(0, 0, 0, 0.5);
draw_set_color(c_blue);
draw_circle(0,0,10,0);
draw_set_color(c_red);
draw_set_alpha(1);
The most efficient code output for that is as follows:
Code: (edl) [Select]
draw_set_color_rgba(0, 0, 255, 0.5);
draw_circle(0,0,10,0);
draw_set_color_rgba(255, 0, 0, 1);
But how does the optimizer know to do that? My only thought is that a good pattern to look for would be consecutive calls to related functions in a given set. A class would be provided to give that set of functions, along with hard code to do the merging—how else do we get 0,0,255 out of c_blue? The best we seem to be able to automate here is auto-matching consecutive calls to functions in set A which are not separated by calls to any functions in set B, and then invoking the merge method on the functions from the first set.
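The hard-coded merge piece might look like this (sketch only; the struct and function names are mine). It relies on one fact the optimizer genuinely has to know: GM color constants are BGR-packed integers, so c_blue is 16711680 (0xFF0000) and decomposes to (0, 0, 255).

```cpp
// Hard-coded knowledge: split a BGR-packed GM color into channels.
struct RGB  { int r, g, b; };
struct RGBA { int r, g, b; double a; };

RGB decompose(int gm_color) {
    return { gm_color & 0xFF,          // red is the low byte
             (gm_color >> 8) & 0xFF,   // green
             (gm_color >> 16) & 0xFF };// blue is the high byte
}

// Merge draw_set_color(col) + draw_set_alpha(a) into the arguments of
// a single draw_set_color_rgba(r, g, b, a) call.
RGBA merge_color_alpha(int gm_color, double alpha) {
    RGB c = decompose(gm_color);
    return { c.r, c.g, c.b, alpha };
}
```

Given c_blue and the surviving alpha of 0.5, this yields exactly the (0, 0, 255, 0.5) call in the hand-optimized output above.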
I will post more optimizations when I have some more time and am not so tired. I'll also do some proofreading, because I'm sure this reads like ass.
From what I have written here, it looks like the best approach is to have a base class defining a kind of optimization to perform, and then have child classes to carry out a specific operation, which can then have child classes for very similar optimizations. So call-consolidating (as in [snip]draw_set_color[/snip] optimizations above) would be one child of the optimization class, which would employ its own virtual class for specifying sets of functions to consolidate, as described above.
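The class layout I'm describing would be something like this skeleton (names are placeholders, and the AST type is stubbed out; this is the shape, not an implementation):

```cpp
#include <string>
#include <vector>
#include <memory>

struct AST {};  // placeholder for the parsed code being optimized

// Base class: one kind of optimization to perform.
class OptimizationPass {
public:
    virtual ~OptimizationPass() {}
    virtual std::string name() const = 0;
    virtual bool apply(AST& tree) = 0;  // returns true if anything changed
};

// Child: consolidates consecutive calls to related state setters
// (the draw_set_color case above); it would own a virtual provider
// class supplying the sets of functions to consolidate.
class CallConsolidation : public OptimizationPass {
public:
    std::string name() const override { return "call-consolidation"; }
    bool apply(AST&) override { return false; }  // matching logic omitted
};

// The optimizer itself just runs registered passes to a fixpoint.
bool run_passes(std::vector<std::unique_ptr<OptimizationPass>>& passes, AST& t) {
    bool any = false, changed = true;
    while (changed) {
        changed = false;
        for (auto& p : passes)
            if (p->apply(t)) changed = true;
        if (changed) any = true;
    }
    return any;
}
```

Very similar optimizations then become further subclasses of CallConsolidation rather than new top-level passes, which keeps the extensible part extensible and confines the hard code to the merge methods.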
Please do submit feedback; this process is going to need a lot more thought.
824
Third Party / Re: Natural GM (Alternate Cross-Platform IDE made in C++ WIP)
« on: March 31, 2013, 10:27:22 pm »
I've moved this to announcements to draw some official attention to it. As you are no doubt all aware, Robert has begun an IDE using the wxWidgets library for use specifically in ENIGMA. The two IDEs have similar goals, and seem to be at a similar point in their development stages.
If you are interested in contributing to an IDE for this project, either option seems like a good candidate. My only personal aversion to Qt is in its dependency sizes. While LateralGM depends on the Java runtime, which is itself hundreds of megabytes, Qt's own collection of libraries runs a hefty 100MB. By contrast, wxENIGMA is roughly 10MB on a bare-bones Windows system, and roughly 200KB on a bare-bones Ubuntu system (Ubuntu ships with the wxGTK headers, as a lot of its software uses them). I do not know how much of Qt ships with Ubuntu.
I shouldn't need to remind anyone, either, that while ENIGMA itself is only a few megabytes, the MinGW distribution on which it already depends is still in the 150 MB range, so while the Qt binaries are not very small, it's not like they are increasing the total size by orders of magnitude. I'll also point out that MSVS is 2.5GB on disc, and nearly eight gigabytes on disk after install.
That said, both projects still look quite promising.
Also related: Spirit has stated on the IRC that his IDE compiles in GCC, as does the rest of ENIGMA, so there should be relatively little hassle involved in setting up a build for it, assuming you can already build the rest of ENIGMA.
Both projects also share the goal of modularizing resource types. This thread seems as good a place as any for discussion of what all needs modularized and how to go about doing so. For example, it is desirable for the IDE to know of few or no resources without plugins attached, to ensure flexibility and modularity, but at the same time, resources are largely interdependent. Objects depend on sprites for default sprite and mask settings; rooms depend on backgrounds, sprites, and objects for placing tiles and setting up the scene; paths depend on rooms for displaying a room in the background of the editor; and finally, overworlds depend thoroughly on rooms. How do we best resolve these dependencies? Here is a good place for discussion on the matter. Even I'm torn between the option of plugins for plugins (eg, a path background rendering plugin) and lists of acceptable resource UUIDs (eg, objects allow any of ["res_sprite", "res_polygon_mesh", "res_3d_model"] for their collision mask).