I took this simple example from http://enigma-dev.org/docs/Wiki/Global
That example is wrong; it was probably meant to read "global var" instead of just "global". This is how it works right now:
1) Use "global.variable;" or "global var variable;" to create a global variable (if you use the dotted version, you will have to write the "global." prefix everywhere).
2) Use "global int variable;" to define a global integer, or any other type (so the template is "global type variable;").
3) Use "int variable;" to declare a TEMPORARY integer variable (just like "var") limited to the current script (so the template is "type variable;").
4) Use "local int variable;" to declare an integer variable in the local object scope (so the template is "local type variable;").
So basically "var" is a type, not a keyword, and wherever you use "var" you can substitute another type: "global var" becomes "global double", for example, or "global string" or "global unsigned". And just like "var", it will create a temporary variable (limited to the scope of the script) if the "local" keyword is not used.
Very strange. I tried modifying my variable declaration to do some tests. In the end, I tested my code again:
That is one of a million parser bugs. It happens because you have declared the same global twice, so in the object it tries to use the local one. Just comment out or delete the duplicate declaration, either in the script or in the object.
Does that mean if I decrease a variable and it goes below 0, it will not be negative?
No, it wouldn't be. In ENIGMA, for some reason, this returns 0:
unsigned int myvar = -100;
But that is a bug in assignment. If you do any arithmetic on it, then it works fine. In most programming languages variables can overflow (http://en.wikipedia.org/wiki/Integer_overflow). So if an unsigned char can hold values up to 255, then 256 wraps back to 0, and so on. So in ENIGMA this prints 255:
unsigned char myvar = 255;
But this prints 4:
unsigned char myvar = 255;
myvar += 5;
Even though I just added 5 to 255. That is because (255 + 5) mod 256 = 4, so it "wraps around" past the maximum value. The same goes for integers: an unsigned integer cannot be negative, so this results in a very large number:
unsigned int myvar = 10;
myvar -= 20;
This will print UINT_MAX - 9, i.e. 4294967286 with a 32-bit unsigned int (which is more than 4.2 billion).
int unsigned myvar;
Usually the "unsigned" is before the data type. So it's "unsigned int".
Would it be signed by default?
All the regular C/C++ rules apply to ENIGMA and its variables, so I suggest googling a beginner's tutorial in C++ and learning about variables. ENIGMA is only different because we have scopes like "global" and "local", but those don't change the variables, only where they can be accessed from.
OK, but I'm still pretty sure a 4.5 integer would not make sense in any BASIC compiler that I know of; I think you're just remembering wrong.
Casting a float to an integer is usually valid in any language. It is done constantly (and we do it all the time in ENIGMA).