Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Josh @ Dreamland

General ENIGMA / Re: Lateral GM question
« on: October 08, 2014, 06:54:38 pm »
That's the plan, Ism. But make no mistake—it's best for ENIGMA to remain a DLL.

ENIGMA being compilable as a DLL is not a reason not to have a CLI. In general, command-line interfaces are not nearly as responsive, and we're dealing with software in which this latency counts. Originally, you pressed a syntax check button when you wanted to know if your GML was valid. Would you want disk I/O over an entire directory while we read in script and function names, just so we can report that the one script you've changed two lines of is valid? Maybe you do, I don't know. But do you want us implementing continual syntax checking that way? You'd have to be crazy. Even if we cached all the engine data to a file, you wouldn't want us reading that behemoth every time you stopped typing for a second.

Moreover, what if LateralGM wants to invoke the compiler for more accurate code completion? For formatting? For automatic indentation? If ENIGMA were confined to a separate executable, even with pipes, I/O overhead would be insufferable.

Anyway. That said, yes; EGM and GMX eliminate the need to ever have all resource data loaded in memory. This means that the reasons for having LGM pass ENIGMA resource data are dying or dead. THAT SAID, the ability for the engine to work out of EGM files needs to come first. I will not myself implement nor encourage any other individual to implement logic for reading an EGM and appending it to an executable at this phase. My recommendation is for the ENIGMA plugin to keep the game saved in /tmp/lateralgm/<datetime>/game.egm/* as an EGM directory (presently not supported for some reason?) and for the game to load resources out of that directly. Writing resources to the executable is an archaic practice that should only be offered as a final redistributable build option.

Programming Help / Re: Why do we use placement new?
« on: October 08, 2014, 06:27:15 pm »
When I wrote var, I had the sense to want to keep the matrix kernel modular, but lacked the sense to do it right. It uses placement new so that the main var class doesn't have to know anything about the type it's storing, which is itself just a pointer. You could easily refactor it so that the lua_table just works with the values data pointer directly, instead of pretending to own it. At this point, I don't even really care if you want to just move the lua_table code into var. If you want to do that refactoring, feel free—it might even speed up var arithmetic because the compiler will be more confident in its optimizations.

Didn't notice any replies. I must have loaded the page before you posted it, then forgotten to refresh. I could move it over, but you've already posted this, now. :P

Split this to a new topic. Seemed to warrant it.

The pastebin link basically tells you exactly how to implement this for your own functions. :P

You don't have to add the cast to variant, but it shows you how to do that, too.

Using typeid is dangerous as its behavior is implementation-defined. A compiler could easily tell you that [snip=edl]typeid(T).name()[/snip] is "T". Were that not the case, you could implement the above directly without use of the ENIGMA compiler—you would accept derived_reference_index<T> and then compare typeid(T).name() against typeid(enigma::RTTI::background_t).name(). Nothing you do will make this operation not require surgery on how variant stores its internals, and major surgery on how the engine passes around types.

I'm trying to be defensive on one front and am over-defending another. What you are asking me for is type safety, and yes, we can get that. What you're also asking me for in the process is to not use an integer, and that would be a fallacy. I stand by my original statement—integers as our reference are opaque. The fact that you have reference collisions does not make the data any less opaque. The point of integers is that they're dense and easy to track—it's trivial to check if a sprite is loaded given its index, and it's harmless to be wrong about whether it's still loaded.

Yes, collisions happen all the fucking time, and when concepts are genuinely confusing—e.g., I asked you for a texture index, you passed me a sprite index or a background index—head-scratching behavior can occur.

We don't have the framing right now to really lick this. How about this: in the interim, why don't you replace int in these functions with sprite_t, background_t, path_t, etc.? For now, please just use [snip=edl]typedef int sprite_t;[/snip], etc.

Here's where we run into trouble: you are also asking me for overloads. There's a lot more involved in overloading [snip=edl]widget_set_parent(button_t button, window_t window)[/snip] with [snip=edl]widget_set_parent(window_t window1, window_t window2)[/snip] than you are giving credit for. The compiler can generate an array for the cross-product of all these overloads, if it also generates RTTI metadata for them, but then you still have new problems.

Your first problem is going to be that C++ won't let you overload on these integer types. Since they're all integers, you can't create this overload. You can get around this by using a struct instead of a typedef. In doing this, you have created the problem that you can't store these in integers at all anymore, which may be what you want. If you do this, I recommend having these all inherit a class called reference_index which just stores an integer. This will allow you to roughly implement this alongside the current variant implementation, with a little tweaking.

The second problem is arguably worse. All this sounds great until you realize that you have just moved the problem of overload resolution to runtime.

So let's assume you've overloaded widget_set_parent for a number of different types. You'll have something like this:

Code: (cpp) [Select]
struct window_t: reference_index { /* ... */ };
struct button_t: reference_index { /* ... */ };

void widget_set_parent(window_t window, window_t parent);
void widget_set_parent(button_t button, window_t parent);

Naively, you can give the user methods to "cast" a variant between those types. Involving the compiler means that what would otherwise be compile errors now surface at run time. We tell the compiler that it's okay for a user to pass variant to these functions. To enable this, the compiler will generate a run-time type enumeration for variant, like so:
Code: (cpp) [Select]
namespace enigma {
  enum variant_runtime_types {
    // ...
    // ...
  };
}
It can do this, for example, by querying for member structs of enigma_user which extend reference_index. No problem. It will also generate special methods to fetch these, as needed, and we'll assume
  • that this is done in a way that does not involve modifying var.h when this list changes, so
  • nothing in the engine code relies on that constructor, and so the logic does not need to be known at compile time; and that
  • the actual constructor logic (or most of it) is implemented in the engine's main source where all other user code is generated

I have pasted a sample execution of the idea to Pastebin. It shows the basic stages of this, but does not show a complete variant, nor the cast method. But the cast method looks very similar to the construct method, only it actually checks the value of variant.rtti before returning. In practice, we'll probably replace the template function I showed there with a structure containing those so that we don't generate linker errors in problem scenarios (the missing information will be caught at compile time).

This is fine and dandy, but the C++ compiler can't tell which type a variant should use, because a variant can cast to any of those. Thus, all of those methods are weighted equally to the overload resolving compiler. We have lost compile-time overload checking, which is pretty much nothing new, I suppose. But now the compiler has to deal with this to allow it to happen at all.

To do this, the compiler must first identify functions whose overloads are ambiguous to variant in ISO C++. This is a painful check, but it's doable. The compiler must then generate overloads taking const variant&/const var& for these types. This requires two pieces.

First, we need to declare a place for runtime overload disambiguators:
Code: (cpp) [Select]
// Runtime overload disambiguation
map<tuple<int>, void(*)(variant, variant)> widget_set_parent$_overloads;
static void widget_set_parent$button$window(variant arg0, variant arg1) {
  widget_set_parent((button_t)arg0, (window_t)arg1);
}
static void widget_set_parent$window$window(variant arg0, variant arg1) {
  widget_set_parent((window_t)arg0, (window_t)arg1);
}
static inline void widget_set_parent$_disambiguate(const variant& arg0, const variant& arg1) {
  map<tuple<int>, void(*)(variant, variant)>::iterator it =
      widget_set_parent$_overloads.find(tuple<int>(arg0.rtti, arg1.rtti));
  if (it != widget_set_parent$_overloads.end())
    return it->second(arg0, arg1);
  show_error("No overload for widget_set_parent(" + rtti_names[arg0.rtti] + ", " + rtti_names[arg1.rtti] + ")");
}

The above code assumes we also export the names of these types into an array somewhere, which is also dastardly ugly. It also assumes the existence of a tuple class which is basically a vector that I can construct really easily to save code. :P

Then, we need to populate that map at load time:
Code: (cpp) [Select]
void load_overload_disambiguators() {
  widget_set_parent$_overloads[tuple<int>(VRT_BUTTON, VRT_WINDOW)] = widget_set_parent$button$window;
  widget_set_parent$_overloads[tuple<int>(VRT_WINDOW, VRT_WINDOW)] = widget_set_parent$window$window;
}

And if you want implicit casting, well, you're looking at even more logic generated for the overload disambiguation routine. Coupled with even more metaprogramming-fueled metadata.

But anyway, now you have a very thorough synopsis of how I'd handle it. I imagine if I don't get around to it now, I'll get around to it after you all sit on it for three years and I've long forgotten it's a problem. :P

If we really wanted to fool the system, we'd just password-protect the zip's file list. It's inconvenient for users either way. I figured that as a society, we were past "OMG EXE VIRUS VIRUS VIRUS VIRUS VIRUS," but apparently, we never will be.

Off-Topic / Re: What we could have if Enigma would be closed source :(
« on: October 01, 2014, 06:40:00 pm »
What Ism said (as always).

Announcements / Re: Licensing, the ultimatum
« on: September 29, 2014, 08:54:56 pm »
I can't find any evidence of that from GMail, and I received no response when I asked on their IRC. I'm afraid I'll have to give up on them and ask someone else. So they basically have as long as it takes me to do that to zip us a reply.

Developing ENIGMA / Re: git madness
« on: September 28, 2014, 04:57:50 pm »
Harri, I think it'd be easier to help you if you stopped in on the IRC. Is your branch up on Git? I can merge it for you if it is.

Off-Topic / Re: Restructuring the Community
« on: September 28, 2014, 09:18:22 am »
Maybe as a Christmas present. I do hate web development.

Off-Topic / Re: Creating an ENIGMA fork?
« on: September 28, 2014, 09:09:18 am »
Closed-source forking is allowed, but only until a binary release is made public. At that point, you must provide an identically-licensed source bundle to build any binary distributions. This bundle does not have to include your entire development history, but it does have to be human-readable (not minimized, not obfuscated).

By the sounds of it, a fork is not what you want, anyway; it sounds as though what you want is to run your own ENIGMA community (or rather, a community for a fork of ENIGMA). You can try to do that, but I suspect that you will have at most half as many active users as we do—sort of like the Ubuntu and Mint communities. Meanwhile, if your code actually contains modifications that give you an edge, we'll be bombarded with bug reports and general commentary about it, which will be annoying.

I also suspect you'll find that a community whose only facet is stricter moderation run by people less qualified to provide support for the actual software behind the community is, in fact, less likely to draw a crowd. But I could be wrong.

That said, if nothing else, as a public demonstration of the principles we have already discussed, I'd encourage you to go ahead and fork the project.

Off-Topic / Re: Restructuring the Community
« on: September 27, 2014, 07:43:26 pm »
The reason for ENIGMA's existence is precisely the reason the community is in such a state. Yoyo Games is a corporation. The function of a corporation is to maximize the profit of its shareholders. A burger shop that hurls slurs at its customers is failing on that front: customers are required to draw a profit, and rudeness deters customers. ENIGMA is instead an open-source project; it exists according to the desires of its contributors. Because it is open-source, it receives contributions from people with a whole spectrum of motives.

Harri develops for himself. As a hobby, he said, in this thread I believe. He develops ENIGMA because he wants to use it.

When I was young, I developed ENIGMA because I wanted to use it—I wanted a Game Maker that Yoyo wasn't in charge of. Now, when I develop, I do so just to finish what I started.

The person who wrote this site, a2h (currently notachair), contributed the layout for three reasons: he liked where the project was going, he and I were friends, and he wanted something to add to his professional profile.

Ironically, Robert is the only active contributor right now who does so with the purpose of drawing a community.

If it were up to me, our site would look more like other open-source compiler sites, which you might find equally lackluster. ENIGMA is a compiler. LateralGM is an IDE. These forums are a place where we have users discuss both, and we each keep them running for our own reasons: I because I encourage open-source development, Harri because he likes not being the only person working on ENIGMA, and Robert because... well, I'm honestly not sure why Robert does the things he does.

I can't speak to other big contributors' motives. Maybe sorlok has a comment on why he puts up with us? As far as I can tell, Egofree sees that the project is just under the threshold of "good enough to compile this game," and so he pours time into pushing it over. If you asked any one of them if they'd like a bigger community, they'd all say yes. But it isn't the end of the world to any of us (except Robert? Maybe?) if no one else ever posts here again. If we can make this community more accommodating, I'm happy to do so—provided it doesn't infringe on the less ephemeral users' rights to be here.

That said, the burger stand analogy further breaks down at what happens when the employee mouths off to customers. I can't fire Robert and hire a replacement. If I could find people who wanted to work for Robert's wage, I'd be out doing so. Since I can't, yes, this community functions as our only net. I'm not sure what it says that such a substantial fraction of our community are contributors; I see it as a positive. I won't speak for the other contributors further, but I'll point out that they're still around.

In essence, a burger stand has easily replaceable employees and serves to maximize shareholder value, which involves maximizing consumer goodwill. ENIGMA has non-reimbursed, at-will contributors, and for its health needs to maximize contributor value first, which is largely independent of community politics.

Now, all that said, I probably come across as a little negative. Let me share the good news: we are a rarity among open-source projects in that we don't kick people out for having an opinion. I am frequently stunned by how amazingly, cruelly rude developers are on open-source projects on GitHub. Nothing paints a better picture of what I am trying to convey than any arbitrary post on a GitHub repo I used to watch, whose name I'll withhold because I'm not here to point fingers.

ENIGMA contributors, including Robert, are exceptionally kind; not a whole lot of projects have developers who will drop what they are doing to investigate an issue posted over IRC. The developers on the GitHub repository I have in mind will barely investigate actual bug reports before closing the issue with a snide remark about how it's likely the user's fault. I've been impressed by Robert's ability to respond calmly and helpfully to some pretty dumb posts on our tracker, over and over. Many developers forget that the body of users is not one entity which can remember being informed why an issue is not a bug.

And let's face it: have you two never gotten into a scuffle with the moderation team of products you do pay for? And how quick were they to remind you that you can be banned at any moment for any reason?

There's a certain merit to having people feel that they can speak freely without being banned, ridiculed, or otherwise cast out. If nothing else, we have that here.

Developing ENIGMA / Re: git madness
« on: September 27, 2014, 01:55:37 pm »
I assume you committed an unfinished (or crappy) merge? Check out the commit from before the merge (use git log to find its hash), run git checkout -b SomeNewMergeBranch, then try git merge master again.
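Those steps, played out in a throwaway repo. Branch names mirror this thread; the "bad" merge below actually succeeds cleanly, but the recovery commands are identical either way:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# Build a tiny history: master and GL3.3Fix diverge, then GL3.3Fix merges master.
git init -q -b master                   # -b requires git >= 2.28
echo base > a.txt;  git add a.txt;  git commit -q -m base
git checkout -q -b GL3.3Fix
echo fix > fix.txt; git add fix.txt; git commit -q -m fix
git checkout -q master
echo work > b.txt;  git add b.txt;  git commit -q -m master-work
git checkout -q GL3.3Fix

good=$(git rev-parse HEAD)              # the commit before the merge (find yours via git log)
git merge -q --no-edit master           # suppose this merge came out wrong

git checkout -q "$good"                 # 1. go back to before the bad merge
git checkout -q -b SomeNewMergeBranch   # 2. start a fresh branch there
git merge -q --no-edit master           # 3. redo the merge
git rev-parse --abbrev-ref HEAD         # prints: SomeNewMergeBranch
```

The bad merge commit stays reachable on GL3.3Fix, so nothing is lost if the retry goes wrong too.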

Otherwise, I misunderstand your problem. What state is master in, and what state is GL3.3Fix in?