This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.
31
General ENIGMA / Re: Proposal for guidelines on changes to the master branch of enigma-dev/enigma-dev
« on: June 06, 2013, 12:31:58 pm »
polygone: Thanks for the info, that is very useful.
Josh: So basically, bug fixes flow from the master branch to testing to development, while new features and additions flow the other way. That should make things more stable and let us spend less time on regressions, but it would also make development and the addition of new features slower and more cumbersome. I still think it is too early to adopt such a system, though I agree that adopting a process that increases stability is a good idea at this point in time.
Also, I added a link to your post regarding using git in the guidelines.
IsmAvatar: I added a link to your post regarding using git as well.
After giving it some thought, I think it may make sense to make a simple to follow and understand policy regarding committing to the master branch, and then referring to these guidelines as to the overall thoughts and purpose behind it, as well as for recommended ways to follow the policy. I think the policy would go something like:
- Commits to the master branch should strive towards not breaking already working parts.
- Changes that are known to break working parts should not be committed to the master branch, unless the breakages are known and accepted by the other developers and contributors.
- If you cannot fix or test the issues yourself, use forks or branches and inform others of the issues, such that others can look at, test, or fix them before they are merged into the master branch.
By having a policy we can avoid discussing a specific action and instead just refer to the policy, and then discuss the policy separately if there is disagreement about it. The policy doesn't cover everything, but it does cover the minimum, and I think that allows a fair amount of flexibility while still being somewhat effective. What do you think?
32
General ENIGMA / Re: Proposal for guidelines on changes to the master branch of enigma-dev/enigma-dev
« on: June 05, 2013, 05:43:37 pm »
Gahhhh. Things are getting too heated over mouse_x/mouse_y. polygone, it would be nice if you were a little more careful with your commits in the future. Everyone else: mouse_x/mouse_y was likely a mistake on polygone's side, but hardly the end of the world.
I would also appreciate any comments on the guidelines I proposed.
polygone: I am currently looking into set_synchronization, and I noticed that it was commented out earlier in the Windows platform system. Do you remember why we did that? The code looks like it could work, though I haven't tried it yet. I also wonder if it is related to dependencies, including glew, which may have been solved by the new bridges that have been made.
Robert: I think unit testing would be a very nice thing to have. I have thought about it myself, and I think it would also be nice to have different groups of tests, so to speak, such that if you made changes to graphics, you could run just the tests related to graphics without running tests related to other things like sound. That said, I haven't looked much into it yet. I also think it would be very nice if the tests we make can at some point be easily integrated into the automated regression testing on the site that Josh and I briefly talked about in the thread about the site.
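As a rough illustration of the grouping idea (the directory layout and runner loop below are purely hypothetical, not ENIGMA's actual test setup), one could organize tests by subsystem and run a single group at a time:

```shell
# Hypothetical layout: one directory per test group (graphics, audio, ...).
# All names here are made up for illustration.
set -e
tests=$(mktemp -d)
mkdir -p "$tests/graphics" "$tests/audio"

printf '#!/bin/sh\necho "graphics: draw test passed"\n' > "$tests/graphics/test_draw.sh"
printf '#!/bin/sh\necho "audio: play test passed"\n'    > "$tests/audio/test_play.sh"
chmod +x "$tests"/graphics/*.sh "$tests"/audio/*.sh

# Run only the graphics group, leaving the audio tests untouched:
group=graphics
for t in "$tests/$group"/*.sh; do "$t"; done
```

Running with `group=audio` instead would execute only the audio tests, which is the point: a change to one subsystem only pays the cost of that subsystem's tests.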
33
Off-Topic / Interview with Bjarne Stroustrup on C++11 and C++14
« on: June 05, 2013, 09:39:03 am »
I read an interview with Bjarne Stroustrup the other day: http://www.informit.com/articles/article.aspx?p=2080042. In the interview, they talk about a number of different topics, including C++11 and C++14. If you are interested in the new features of C++11 and some of the upcoming features of C++14, I think you will find the interview interesting.
34
General ENIGMA / Re: Proposal for guidelines on changes to the master branch of enigma-dev/enigma-dev
« on: June 04, 2013, 12:21:57 pm »
I generally look at the commit messages, while I usually skim through the commits themselves. Properly looking through a big commit can take quite some time.
It is true that not many users are affected by regressions yet. However, I believe regressions can hinder development considerably as well, given that it takes time to replicate a bug, isolate and find it, figure it out, fix it, and test that the fix works. That said, I agree with you that we shouldn't be too careful with everything either, since that takes time as well.
I personally think that automated regression testing will help these issues a lot. It will decrease the burden of testing considerably, and by making things more stable that way, will also lessen the amount of time spent on debugging. I personally plan to look into it in the future, after I have looked at some other issues such as the synchronization issue.
35
Off-Topic / Re: Dear Polyfuck: Stop breaking the fucking repo
« on: June 04, 2013, 12:01:49 pm »
Well, I don't know if I really deserve such high praise, but I do admit that I try to be a good contributor.
36
General ENIGMA / Proposal for guidelines on changes to the master branch of enigma-dev/enigma-dev
« on: June 04, 2013, 11:34:10 am »
Due to various issues regarding changes to the master branch of enigma-dev/enigma-dev, I believe it would be useful to have some guidelines on how such changes should be handled.
The following guidelines are not complete, but should be a good starting point. If the guidelines at some point are accepted, I will post them on the wiki.
Guidelines for changes to the master branch.
The idea behind these guidelines is to help developers and contributors, old and new, make good decisions regarding the master branch of the repository. They are based on shared thinking and experience among the developers, and are not meant as hard and fast rules, but to inform and guide developers. It should be noted that "developers" and "contributors" are used interchangeably in these guidelines.
Change notes:
2013-06-06: Added information on how to use git and GitHub to handle changes.
2013-06-04: Creation of this document.
Purpose of these guidelines:
- Create common perspective amongst developers on the purpose of the master branch and how to deal with changes to it.
- Help make developers aware of some of the possible issues that changes can create.
- Inform and guide developers in regards to how to deal with changes.
The purpose of the master branch of enigma-dev/enigma-dev:
The master branch is the branch generally used by both developers and users. Having a common branch enables quick fixing and development, and means users will not have to wait long for the newest additions or fixes. Given that ENIGMA is not yet fully mature or feature complete, it makes sense to have a semi-stable main branch, which rarely breaks games relying on core and established functionality, while still allowing quick development and experimentation on unstable and non-core parts. However, if the master branch breaks core or established functionality all the time, it becomes very unstable for users, and developers will waste time finding and fixing issues that used to be non-issues. For this reason, it is important to keep the master branch at least semi-stable, rarely breaking core or established functionality.
Possible issues with changes:
- Buggy changes that cause regressions, either silently or loudly, resulting in runtime crashes, very poor performance, compile-time errors, etc. These are the most common types of issues, and can often be prevented by testing it with the relevant configurations.
- Changes that handle an issue in one subsystem but do not handle the same issue in other subsystems of its kind. These changes can be bothersome, because they may cause confusion about whether an issue has really been handled, and it takes time to determine where the issue is handled. A lot of time can frequently be saved later by ensuring that the issue is handled in all relevant subsystems, or at least by creating an issue on the tracker that describes where the issue has and has not been handled.
- Changes that depend on specific tools and technologies, which can take considerable work to undo, especially as time passes and more and more of the code comes to depend on them. Some of these changes are necessary and acceptable, for instance when making platform-specific changes in an isolated platform subsystem. The main ways to avoid or handle such issues are to follow standards when available, to isolate technology dependencies in specific systems (OpenGL{1,3}, DirectX, OpenAL, SFML Audio, etc.) and use them through generic interfaces, and to code things in a way that is fully or mostly independent of the technology or tool used (for example, using a common subset of the features available among the main Make build tools). Another option is to put off using bleeding-edge features until support is mature and widely available.
- Architecture and interface changes affect how the different systems interact. Bad interfaces decrease modularity and increase coupling and dependencies among systems, and generally make it more difficult to debug, test, and develop systems. It can be difficult to determine or predict whether an interface will be good or bad, so thinking about and discussing important systems, as well as designing for change, is generally advisable. Isolating dependent code between systems in "bridges" is one way to decrease coupling in certain cases.
Dealing with changes:
When you are in doubt about a change, and whether it should be committed to the master branch, there are several options for dealing with that:
- The first is to test the change in the relevant scenarios. This is often quite effective, so it is recommended that you always do at least light testing of your change, taking care to test all the relevant configurations. For instance, if you change the interface of a subsystem like Collisions, it is a good idea to test that each collision subsystem still compiles and works with the new interface. That said, testing does not cover everything, and it can sometimes be difficult to test things, but testing for simple compilation and running is generally useful and easy. The main exception is testing on different platforms; in those cases, it is a very good idea to get others to test the changes.
- Taking time to look at the changes, investigating them, looking through them, etc., can be laborious but quite effective. It can also help to improve the changes themselves. This should always be done to some degree when working on established and core features, to avoid easily avoidable issues.
- Experiment! Make the change in a fork or in a branch other than master, and experiment and work with it there. If the system is not yet established and people know it is not ready for use yet, you can also experiment directly in master. Just be careful to keep the experimentation inside that specific system.
- Implement the change in a fork or branch, and get others to look at it. This takes the time of others, but may well save time down the road. This is very appropriate for changes that could cause considerable issues if faulty, especially if you have already tested, experimented and/or investigated the changes thoroughly.
- Low-severity changes are much less risky to commit than high-severity changes.
When determining whether a change should be committed to the main branch, it is useful to determine what issues it may have. However, it can be difficult to determine all the issues of a given change. To handle this, here are some general guidelines for estimating the potential severity of a change's possible issues.
- If a change is fully isolated to a subsystem, getting the change wrong will only affect that subsystem and other systems that depend on that subsystem (directly or indirectly). The more systems that depend upon a given subsystem, the more things it can break.
- Systems and features that are used by most users and developers directly or indirectly, such as most of Universal_System, the OpenGL1 graphics system, or the Precise collision system, affect many. Conversely, systems that are only occasionally used (like the audio system, the None collision system, the datetime extension, or the particles extension), and which no other systems depend upon, affect far fewer. As long as they still compile, or are not on by default, breaking them is not as severe. That said, breaking such systems and letting them stay broken can cause problems down the line, and since they are used less, issues in them will generally be found and reported later.
- Unstable parts that aren't used and aren't meant to be used will not affect users or developers other than those working on them.
- Issues that prevent compiling or running games with ENIGMA at all are quite severe. They are generally easy to test for: if it compiles and runs, it shouldn't be a problem. The main complication is when the changes have platform-specific parts, since testing on several different platforms can be bothersome or impossible, depending on hardware. Use a fork or branch to share the changes and make testing easier, and to get others to verify, on the platforms you do not have access to, that the changes really do not break things.
- Platform-specific issues can be bothersome to find, debug and fix. Therefore, a platform-specific issue is generally more severe than a non-platform-specific change when all else is equal.
- Changes that deal with interfaces and architecture can have long-term consequences, and they can be difficult to test. Thinking about them and discussing them with others is generally recommended to help ensure that the changes are good, as is seeking a flexible design that allows correcting some mistakes in the interface later. A good rule of thumb is that the fewer dependencies and the less coupling the changes introduce into the interface, the less can go wrong.
- If there are aspects of the potential issues that you are not confident you know enough about, determining the severity can be difficult. Learning more about those aspects and related topics is recommended, as is asking others for advice on the topics as well as on the changes in question.
- One other factor to consider is the severity of the alternatives. If the change is definitely better than the status quo, it may be better to commit it and describe the remaining uncertainties, such that they can be looked at later by yourself and others.
Using git to handle changes:
git offers several features which can be used to handle changes. One of these features is branches. See http://gitref.org/branching/ for information about what a branch is. The master branch is the main branch used by users and developers alike. By creating and working on a separate branch off the master branch, you can make any changes you want without bothering anyone else, let others look at them, possibly fix any issues, and then merge them into master.
For examples on how to use branches, see post 1 and post 2.
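To make this concrete, here is a minimal sketch of such a branch workflow. It runs in a throwaway repository so it can be tried anywhere; the branch name, file, and committer identity are placeholders, and in practice you would work in your own enigma-dev clone.

```shell
# Minimal branch workflow sketch, using a temporary repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master   # make sure the branch is called master
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Example Dev"
echo "base" > engine.txt
git add engine.txt
git commit -q -m "Initial state of master"

# Do the risky work on a separate branch instead of on master:
git checkout -q -b fix-synchronization
echo "experimental change" >> engine.txt
git commit -q -am "Experiment with a fix"

# After others have looked at and tested the branch, merge it back:
git checkout -q master
git merge -q fix-synchronization
```

Until the final merge, master is untouched, so nothing you do on the branch can destabilize anyone else's work.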
Using GitHub to handle changes:
Another feature is GitHub's fork, which you can find more information about here: https://help.github.com/articles/fork-a-repo. Basically, a fork creates a whole new repository based on the original one, where you can make any changes you want. Once you would like others to look at the changes or to get them merged into the main repository, you can make a pull request in which you describe the changes and other relevant details. If more work is needed, you can make more commits, and they will be reflected in the pull request.
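The fork flow can be sketched locally with two repositories standing in for GitHub's server-side copies. On GitHub, "upstream" would be enigma-dev/enigma-dev and "fork" your own copy made with the Fork button; the directory names, file, and identity below are placeholders.

```shell
# Simulate the fork workflow with two local repositories.
set -e
work=$(mktemp -d)

# Stand-in for the upstream repository (enigma-dev/enigma-dev on GitHub):
git init -q "$work/upstream"
cd "$work/upstream"
git symbolic-ref HEAD refs/heads/master
git config user.email "dev@example.com"
git config user.name "Example Dev"
echo "core" > core.txt
git add core.txt
git commit -q -m "Upstream state"

# "Fork" it; GitHub does this server-side, a clone is the local analogue:
git clone -q "$work/upstream" "$work/fork"
cd "$work/fork"
git config user.email "dev@example.com"
git config user.name "Example Dev"

# Develop the change on its own branch in the fork:
git checkout -q -b my-feature
echo "new feature" > feature.txt
git add feature.txt
git commit -q -m "Add a feature for review"

# You would now push my-feature to your fork and open a pull request;
# any further commits on the branch update the same pull request.
```

The pull request then simply proposes the commits that exist on my-feature but not on master.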
37
Off-Topic / Re: Dear Polyfuck: Stop breaking the fucking repo
« on: June 03, 2013, 04:09:22 pm »
I personally try to be careful with changes. I do have commit access, but I still use my fork now and then if I am uncertain about changes, the changes could be critical, and/or I cannot test them properly at the given time.
That said, I think this is mainly an issue of different expectations regarding the master branch of the enigma-dev/enigma-dev repository. Given the events discussed here and the discussion itself, and that the master branch is rather important, I think it would make sense to set up some guidelines for making changes to the repository, which contributors, old and new, can use to figure out how to make changes to the main branch. I am going to write out a proposal for these guidelines in a new thread so that we can discuss them there.
38
General ENIGMA / Re: The particle systems extension is complete
« on: May 31, 2013, 07:57:19 am »
Sounds good. I will be careful when doing any movements or changes in the engine.
39
General ENIGMA / Re: The particle systems extension is complete
« on: May 31, 2013, 03:23:27 am »
The bridges are the only parts that rely on OpenGL specifically. The OpenGL1 bridge relies on the fixed pipeline and direct rendering, while the OpenGL3 bridge relies on shaders (GLSL 1.3, corresponding to OpenGL 3.0), VBOs and vertex arrays.
In regards to reducing reliance on any given API, I think you have a good point. I still think it is good to be able to specialize the particles drawing for each graphics system, since there ought to be a potential gain in performance by doing so. I think I have a solution for the issue: Use the engine functions in a fall-back "bridge", which is used when there isn't a bridge available for the currently used graphics system. That way, we get the potentially increased performance when available, and we still don't rely on any specific graphics system.
40
Issues Help Desk / Re: Out of memory error while compiling
« on: May 30, 2013, 05:51:18 pm »
Thank you very much for your help, SuperRiderTH, it has been very useful.
I don't have any more time at this moment to look at it, but I will take a look at it later.
41
Issues Help Desk / Re: Out of memory error while compiling
« on: May 30, 2013, 05:10:51 pm »
polygone, I don't have access to the game source, so I cannot actually do that .
SuperRiderTH, can you try commenting out lines 125 and 126:
Code:
if (dait->second.type == "var")
wto << " case " << it->second->name << ": return map_var(&((OBJ_" << it->second->name << "*)inst)->vmap, \"" << pmember << "\");" << endl;
in the file CompilerSource/compiler/components/write_object_access.cpp, restart LateralGM, and then try to run your game again and report whether it works? And if it doesn't, tell us the size of IDE_EDIT_objectaccess.h in Preprocessor_Environment_Editable. The size of that file may be considerably reduced by the above change.
42
Issues Help Desk / Re: Out of memory error while compiling
« on: May 30, 2013, 04:54:48 pm »
No, I think your game is simply too big for ENIGMA to handle at the moment. Once ENIGMA handles big games properly, you should try again and see if the issue is gone. I (unless someone beats me to it) will make an announcement once ENIGMA can handle big games properly.
43
Issues Help Desk / Re: Out of memory error while compiling
« on: May 30, 2013, 04:27:41 pm »
IDE_EDIT_objectaccess.h is ~40 MB, which is definitely the cause of the issue. This means the problem is, as far as I can tell, the same as for Iji: the game in question is too large, measured in lines of code and number of actions, for ENIGMA to handle properly at the moment. The reason is that the current game compilation process does not scale well. SuperRiderTH, can you verify that the game is indeed very large in terms of lines of code and number of actions?
We do plan to fix the issue in the future, but it may be a while before the issue has been fixed.
44
General ENIGMA / Re: The particle systems extension is complete
« on: May 30, 2013, 03:35:20 pm »
The particles extension is on by default now.
45
General ENIGMA / Re: The particle systems extension is complete
« on: May 30, 2013, 02:51:25 pm »
I think that is the main reason for it, but there are also other systems there that do not have local variables (such as the DateTime extension), so I think it can also make sense to turn isolated systems into extensions even when they do not have local variables. The advantage seems to be that there is less to compile and include in the resulting game binary. I also think it will be easier to turn it into an API selection if it is an extension.
That said, I don't think there is much difference compared to having it in the graphics system. I am mostly against it because it is more work for little or no gain as far as I can see, and the current system works.
In regards to the original question, I think it could make sense to turn the particles extension on by default. Should we do that?