edsquare
|
|
Reply #15 Posted on: July 24, 2014, 01:26:15 am |
|
|
Location: The throne of ringworld Joined: Apr 2014
Posts: 402
|
worked correctly (at least in Pascal they do), but even in C++ you can have more than one return if you need it; to check that something did x or y you can then pass an integer for error management, I think.
Gotcha. BTW, I learned Pascal in school, but I have not actually used Pascal much, so I forgot lots. I still remember readln and writeln lol.
Right now I'm sticking to C++, EDL, and GML for a while; that should keep me busy and out of trouble lol!
Didn't mean to say you should switch to Pascal; I reference it because it's fresh in my memory and it's compiled like C++. Dynamic allocation in C++? Is that possible?
Let's say you want to declare an array but you don't know in advance the size, you can't do this
int x = 100;
int array[x]; // ...etc. It needs a compile-time constant, so that won't work.
The new operator allows this.
Here is an example,
int i = 200;
int* p = new int[i]; // note: p must be an int*, not a char*, to match new int[i]
Another example is you want to read a file and store it in memory, you would get the file size first, store it in a variable and then declare the pointer and allocate RAM based on the file size (dynamic allocation).
There are many uses for that, such as reading a database into memory, reading files, using arrays of variable or unknown sizes, etc.
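The read-a-file-into-heap-memory idea described above could be sketched like this (the load_file function and its error handling are illustrative, not from the thread):

```cpp
#include <cstdio>

// Sketch: read a whole file into a heap buffer sized at runtime.
// The caller owns the buffer and must delete[] it when done.
char* load_file(const char* path, long* out_size) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return nullptr;

    std::fseek(f, 0, SEEK_END);    // get the file size first...
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);

    char* buffer = new char[size]; // ...then allocate exactly that much on the heap
    std::fread(buffer, 1, size, f);
    std::fclose(f);

    *out_size = size;
    return buffer;
}
```

The size is only known at runtime, which is exactly the case a fixed-size array can't handle.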
Hope I have this right
I'm going to be using this in my new dynamic resource engine
Gotcha, I thought you meant dynamic memory allocation, which I believe is not possible unless you use an interpreted language (the garbage collector).
|
|
|
Logged
|
"A child of five would understand this. Send someone to fetch a child of five." - Groucho Marx
|
|
|
Darkstar2
|
|
Reply #16 Posted on: July 24, 2014, 01:32:52 am |
|
|
Joined: Jan 2014
Posts: 1238
|
Didn't mean to say you should switch to Pascal; I reference it because it's fresh in my memory and it's compiled like C++.
I know, I was just mentioning it in passing. I remember acing the Pascal exam and making a big project; I think it was one of the highest marked (bragging). But oddly enough I know far more about C++ than Pascal, as I have not touched Pascal since! Gotcha, I thought you meant dynamic memory allocation, which I believe is not possible unless you use an interpreted language (the garbage collector).
I meant a dynamic way of allocating memory, as shown above. I'm not an "EXPERT" in C++ yet; I know enough about it to make full applications (console, some GUI), but not to the level where I can contribute BIG stuff to ENIGMA - and one thing I am a complete newbie with is graphics, so I won't be making an ENIGMA-like engine in C++ anytime soon lol! Unless some E.T. puts probes in my brain and feeds me the knowledge, that's the only way I will ever reach that point given my luck
|
|
« Last Edit: July 24, 2014, 01:37:50 am by Darkstar2 »
|
Logged
|
|
|
|
Goombert
|
|
Reply #17 Posted on: July 24, 2014, 03:33:13 am |
|
|
Location: Cappuccino, CA Joined: Jan 2013
Posts: 2993
|
It does take significantly longer to print to a Windows command prompt than to a Linux terminal - we're talking 10 times slower if LGM output to a CMD window when compiling instead of running all the JNA callbacks it currently does.
|
|
|
Logged
|
I think it was Leonardo da Vinci who once said something along the lines of "If you build the robots, they will make games." or something to that effect.
|
|
|
TheExDeus
|
|
Reply #18 Posted on: July 24, 2014, 04:28:47 am |
|
|
Joined: Apr 2008
Posts: 1860
|
1) Yes, memory is freed at the end of the program, whether it crashes or you didn't free the resources. It's done by the C++ runtime (in MinGW's case, libstdc++.dll). It sometimes won't release resource handles though: if you open a file but don't close it, it's possible that over time you won't be able to open new files, as there is a per-process limit on the maximum number of files open at once. At least it did have a limit in the past. This freeing of memory is true for most devices and OS's; only some embedded stuff still doesn't do it explicitly.

2) To get the size you can use sizeof(). So sizeof(resMem) will return the number of bytes used; it's the same as doing sizeof(char)*5000. Note that you won't be able to use sizeof(array) this way inside functions, because if you pass the array as a function parameter it is treated as a pointer, so sizeof will return the size of the pointer, not the array.

3) For dynamic arrays (or actually arrays in general) you should use STL containers. There is very little use for the old C arrays anymore. The containers are fast, very optimized, and relieve you of many stupid tasks, like freeing the memory, since it's done automatically. In your array case I would suggest using a std::vector. So you could write this instead:
#include <vector>
//in main(){}
std::vector<char> resMem(5000,'a'); //This will create a 5000 element vector of chars (filled with the char 'a' specifically)
//Use regular semantics to modify and access them, like this:
resMem[0] = 'b';
resMem[4999] = 'c';
//You can get the size of the array (printf needs <cstdio>)
printf("resMem size is %i and in bytes = %i\n", (int)resMem.size(), (int)sizeof(resMem));
//When calculating size in bytes you would have to take vector.capacity() into account as well, but I won't go into that here
//You can add elements
resMem.push_back('d'); //Now resMem.size() is 5001
So pointers and "new"+"delete" are the bane of C. In C++ there are very few cases where you actually have to use them. The biggest thing I have written for myself in pure C++ is a 10k-line program for research purposes, and it only used pointers in about 3 places, because of Abstract Base Class (ABC) usage.
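A minimal sketch of the sizeof pitfall from point 2 - the function and variable names are made up for illustration:

```cpp
#include <cstddef>

// In the defining scope, sizeof sees the whole array.
std::size_t whole_array_size() {
    char resMem[5000];
    return sizeof(resMem);   // 5000 bytes
}

// As a function parameter, the array has decayed to a pointer,
// so sizeof sees only the pointer itself.
std::size_t decayed_size(char arr[]) {
    return sizeof(arr);      // sizeof(char*), e.g. 8 on a 64-bit build
}
```

This is why passing raw arrays around usually means also passing the size as a separate argument.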
|
|
« Last Edit: July 24, 2014, 04:32:42 am by TheExDeus »
|
Logged
|
|
|
|
|
Darkstar2
|
|
Reply #20 Posted on: July 24, 2014, 12:45:21 pm |
|
|
Joined: Jan 2014
Posts: 1238
|
1) Yes, memory is freed at the end of the program, whether it crashes or you didn't free the resources. It's done by the C++ runtime (in MinGW's case, libstdc++.dll). It sometimes won't release resource handles though: if you open a file but don't close it, it's possible that over time you won't be able to open new files, as there is a per-process limit on the maximum number of files open at once.
Interesting. OK, so what I knew about memory was correct, but in regards to file handles I was under the impression that those got closed automatically as well. As for the maximum per process, it's 32, right? So what happens if file handles don't get closed in such a case - does it require a Windows restart? This freeing of memory is true for most devices and OS's. Only some embedded stuff still doesn't do it explicitly.
Well, I'm using Windows, and my target will be Windows, and maybe soon both Windows and Linux. 2) To get size you can use sizeof(). So sizeof(resMem) will return the number of bytes used. It's the same as doing sizeof(char)*5000.
Ah yes, sizeof - forgot about that one; I didn't think of using it that way. 3) For dynamic arrays (or actually arrays in general) you should use STL containers. There is very little use in using the old C arrays anymore. The containers are fast, very optimized and relieve you of many stupid tasks. Like freeing the memory,
So for storing, reading and writing files, and in general, you mean it's FASTER that way? If I were to bench both methods, would one be significantly faster? Remember I am not entering any coding competition, so I would use what I know about and what works, but if one method is significantly faster than the other I would definitely use that. as it's done automatically.
How would it know WHEN to free the memory? Only at exit? What if I wanted it to free earlier? In your array case I would suggest using a std::vector. So you could write this instead:
Do I have to use std::? I see that a lot in code; can't I just use "using namespace std;"? So pointers and "new"+"delete" are the bane of C. In C++ there are very few
I thought it was the opposite: that malloc and free are used in C, and that new and delete are C++. The biggest thing I have written for myself in pure C++ is a 10k-line program for research purposes and it only used pointers in about 3 places, because of Abstract Base Class (ABC) usage.
I agree, not much use for pointers, but I intend to use them mostly for file I/O in my project, since I will be handling larger data than the stack can hold, and for allocating a memory size I don't know in advance I would have to use pointers in conjunction with new. But I see pointers, new and delete used in many places in ENIGMA's source.
|
|
|
Logged
|
|
|
|
TheExDeus
|
|
Reply #21 Posted on: July 24, 2014, 01:56:31 pm |
|
|
Joined: Apr 2008
Posts: 1860
|
Interesting, ok so about memory what I knew was correct, but in regards to file handles, I was under the impression that those got automatically closed as well. As far as the maximum per process, it's 32 right So what happens if file handles don't get closed in such case, does it require a window restart ?
I think it's safe to say that it closes file handles too; it was a very long time ago when I heard that it didn't. The maximum number of files open at once is limited though if you're using POSIX C: it's 512 by default, but it can be changed, so that is not a big issue ( http://msdn.microsoft.com/en-us/library/kdfaxaay%28vs.71%29.aspx ).
So for storing reading/writing files, and in general you mean it's FASTER that way ? If I were to bench both methods one would be significantly faster ? Remember I am not entering any coding competition, so I would use what I know about and what works, but if one method is significantly faster than the other I would definitely use that.
The speed difference, if used properly, is about the same. Arrays are as low-level as you can get. A vector, on the other hand, is a class, which has some memory overhead, but it is usually as fast as an array. And vectors are dynamic, so if you use one for a dynamic array instead of coding special routines for your own custom array, it's probably going to be faster. Those containers are also not really used for reading or writing files - was your char array used for that? You will have to use other specific functions to actually write to files; those are not tied to the containers the data is stored in.
How would it know WHEN to free the memory ? Only at exit ? What if I wanted it to free earlier ?
It's freed at the end of scope. If you create a vector inside a function, it is freed when the function ends. If it's inside main(){}, it's freed when the program ends. You can always delete the contents of a vector via vector.clear().
Do I have to use std::, I see that a lot in code, can't I just use using namespace std; ?
You can use the latter, but it's usually good coding convention not to pollute the namespace with "using namespace", because if it's used in a header you effectively dump everything in the std namespace into every file that includes it. That can cause conflicts and hard-to-find bugs, like the one we had in ENIGMA a few months ago: ENIGMA would not compile a game because LGM showed an error in valid code, and it took like 3 weeks to fix. It turned out Robert had put a "using namespace" in a header, and it broke LGM. If you want to use only something specific, like vectors, then you can write "using std::vector;". This will allow you to not put std:: in front of vector, but nothing else will be changed.
I thought it was the opposite, that malloc and free are used in C, and that new and delete are C++.
Correct, my mistake. But what I wanted to say is that you would almost never need to use them. Even in the ABC situation I had, I could have used smart pointers (which basically wrap a pointer in a class so it can automatically be destroyed when no longer in use). Using raw pointers in modern code is not that great a practice, and while many would disagree, I personally don't like it: I had so many problems with them that it has scarred me for life.
But I see pointers, new and delete used in many places in ENIGMA's source
Those are in underlying resource structures and in some hack'd classes, like "variant". In most decent code (like the graphics systems) they are not much used.
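The scope-based freeing described above can be sketched like this (the function name is illustrative):

```cpp
#include <cstddef>
#include <vector>

// The vector frees its heap storage automatically at the end of scope;
// clear() empties it earlier if you want the elements gone sooner.
std::size_t scope_demo() {
    std::vector<char> resMem(5000, 'a');
    resMem.clear();          // elements destroyed now, size drops to 0
    return resMem.size();
}                            // remaining storage released here - no delete needed
```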
|
|
|
Logged
|
|
|
|
Darkstar2
|
|
Reply #22 Posted on: July 24, 2014, 03:27:43 pm |
|
|
Joined: Jan 2014
Posts: 1238
|
I think it's safe to say that it closes file handles too. It was a very long time ago when I heard that it didn't. The maximum number of open files though is limited if using POSIX C. It's 512 by default, but can be changed. So that is not a big issue
512, OK - quite far from 32; I guess the 32 I was thinking of was from GML lol! But anyhow, I probably won't be needing that many file handles! And yes, better safe than sorry: always free memory and close files in practice, something I would always do anyway. Your char array was used for that? Because you will have to use other specific functions to actually write to files, that are not tied to containers the data is stored in.
I know how to read and write blocks or bytes of data, but how else am I supposed to store files I've read in memory? Let's say I want to read 100 bytes from a binary file: I would use char and store it as an array, for easier individual access. For bigger files I can't use strings, so I'd have to allocate memory on the heap. What I want to do is not only read and store a file in memory, but have access to the individual bytes and manipulate them (encryption, etc.).
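The per-byte manipulation described here might look roughly like this - a toy XOR scramble, illustration only, NOT real encryption:

```cpp
#include <cstddef>
#include <vector>

// Toy XOR scramble over bytes held in a vector.
// Applying it twice restores the original data.
void xor_bytes(std::vector<unsigned char>& data, unsigned char key) {
    for (std::size_t i = 0; i < data.size(); ++i)
        data[i] ^= key;      // individual byte access, just like a char array
}
```

A vector<unsigned char> gives the same indexed byte access as a char array, with the heap allocation handled automatically.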
|
|
|
Logged
|
|
|
|
TheExDeus
|
|
Reply #23 Posted on: July 24, 2014, 05:28:21 pm |
|
|
Joined: Apr 2008
Posts: 1860
|
All of those containers will be allocated on the heap, not that it really matters. But you can just use a char vector, like this little example I found:
#include <iostream>
#include <iterator>
#include <fstream>
#include <vector>
int main() {
    // open the file (filename is a placeholder for your path):
    std::ifstream file(filename, std::ios::binary);

    // read the data (the iterators must use char to match std::ifstream):
    std::vector<unsigned char> myData((std::istreambuf_iterator<char>(file)),
                                       std::istreambuf_iterator<char>());

    //Now myData holds the file, so you can access the bytes however you like
    if (myData[0] == 'a') return 2;
    return 0;
}
There are MANY other ways to load a file into some kind of array. Some are slower (usually the C++ variants) and some are faster (for me the C variants are usually faster). Check this: http://stackoverflow.com/questions/15138353/reading-the-binary-file-into-the-vector-of-unsigned-chars
|
|
|
Logged
|
|
|
|
Goombert
|
|
Reply #24 Posted on: July 24, 2014, 06:10:51 pm |
|
|
Location: Cappuccino, CA Joined: Jan 2013
Posts: 2993
|
What container is allocated on the heap? Those sure look like they are being allocated on the stack - doesn't automatic storage duration apply to most of the STL?
|
|
|
Logged
|
I think it was Leonardo da Vinci who once said something along the lines of "If you build the robots, they will make games." or something to that effect.
|
|
|
Darkstar2
|
|
Reply #25 Posted on: July 24, 2014, 10:42:55 pm |
|
|
Joined: Jan 2014
Posts: 1238
|
All of those containers will be allocated on the heap, not that it really matters.
Actually it does matter: there is a limit on the size of an array you can use on the stack before you crash your program - I found out the hard way when trying to use big arrays. If I will be manipulating large files (50MB, 100MB, +++) the stack won't cut it. But you can just use a char vector. Like this little example I found:
#include <iostream>
#include <iterator>
#include <fstream>
#include <vector>

int main() {
    // open the file (filename is a placeholder for your path):
    std::ifstream file(filename, std::ios::binary);

    // read the data (the iterators must use char to match std::ifstream):
    std::vector<unsigned char> myData((std::istreambuf_iterator<char>(file)),
                                       std::istreambuf_iterator<char>());

    //Now myData holds the file, so you can access the bytes however you like
    if (myData[0] == 'a') return 2;
    return 0;
}
There are MANY other ways to load a file into some kind of array. Some are slower (usually the C++ variants) and some are faster (for me the C variants are usually faster). Like check this:
You don't use stdio for files? Doesn't ENIGMA use stdio? Isn't it faster? I would use that instead; I'm not familiar with the above, and it sounds more complicated for nothing. I've heard that with stdio and memory allocation someone managed to load a file at FULL speed (100MB/s) instead of the regular slower method of 10MB/s.
|
|
|
Logged
|
|
|
|
TheExDeus
|
|
Reply #26 Posted on: July 25, 2014, 04:40:44 am |
|
|
Joined: Apr 2008
Posts: 1860
|
What container is allocated on the heap? Those sure look like they are being allocated on the stack, doesn't automatic storage duration apply to most of STL?
As far as I know, the difference between the stack and the heap is that the stack has a limited size, it's not dynamic (so sizes must be known at compile time), and it has limited scope (so things get deleted when they go out of scope). STL containers are dynamic: their size is usually not known at compile time, as you can always resize them and add/delete as much as you want. This means their data is not allocated on the stack. What can be allocated on the stack is the class instance itself, which allows it to exist only in scope; but the memory for the data internally probably uses new/delete to allocate on the heap. That is why you can make a vector as big as you want, and if it fits in RAM, you are okay.
Actually it does matter, there is a limit on the size of an array you can use on the stack before you crash your program - I've found out the hard way when trying to use big arrays. If I will be manipulating large files 50MB, 100MB, +++ the stack won't cut it
What I meant is that if you use STL containers and don't use new/delete or regular static arrays, then you don't even have to know what the heap and the stack are. The whole distinction disappears, like in C# and Java, whose virtual machines have no such distinction.
You don't use stdio for files ? Doesn't ENIGMA use stdio ? Isn't it faster ? I would use that instead, not familiar with the above sounds more complicated for nothing. I've heard that with stdio and memory allocation someone managed to load a file at FULL speed (100MB/s) instead of the regular slower method of 10MB/s
As I said, there are many ways to do it. You could try the one I posted; I will try it too and see how fast it is.
I remember I tried many ways to make the fastest file loading possible, and the fastest I was able to get was this:
int load_csv(string fname, int kind, double radius)
{
    model_primitive_begin(kind);
    ifstream data(fname.c_str());
    string str_line;
    int line_num = 0;
    float v[3];
    int col;
    //vector<vector<unsigned int> > benchmark_frames;
    //benchmark_frames.reserve(1000);
    while (getline(data,str_line)){
        stringstream lineStream(str_line);
        string cell;
        int cell_num = 0;
        while (getline(lineStream,cell,' ')){
            if (cell_num<3){
                v[cell_num] = atof(cell.c_str());
            }else{
                col = atoi(cell.c_str());
            }
            cell_num += 1;
        }
        if (v[2]*v[2]<radius*radius){
            model_vertex_color(v, col + (col << 8) + (col << 16), 1.0);
        }
        line_num += 1;
    }
    model_primitive_end();
    return line_num;
}
If regular ENIGMA takes like 20 seconds to load the file, then this does it in less than 1. But I didn't try loading using the vector copy. That might be faster.
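The point about the vector object versus its element storage can be illustrated like this (assuming a typical implementation; the function name is made up):

```cpp
#include <cstddef>
#include <vector>

// The vector object itself is small (a few pointers on the stack);
// the element storage lives on the heap, so sizeof() does not grow
// with the number of elements.
std::size_t vector_handle_size() {
    std::vector<char> v(1000000, 'x');  // ~1 MB of element data on the heap
    return sizeof(v);                   // just the small stack-side handle
}
```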
|
|
|
Logged
|
|
|
|
Darkstar2
|
|
Reply #27 Posted on: July 25, 2014, 01:53:02 pm |
|
|
Joined: Jan 2014
Posts: 1238
|
As far as I know the difference between stack and heap is that stack has limited size, it's not dynamic
The stack uses contiguous memory and by default is limited; in theory I could have the linker allocate more space, though I never touched that and never will. When using multiple threads you get a stack for each one, potentially wasting space that way - even as a non-expert I know better. Perhaps for most people and their applications the stack will be enough, but for bigger games and apps that process large files and lots of memory you will use the heap, as its size is limited only by available RAM (including virtual) and it is dynamic. For what I am going to achieve I will use the heap, or rather your vector suggestion; I guess it takes care of more for me. (so the sizes must be known at compile time) and has limited scope (so it gets deleted when it goes out of scope). stl::containers are dynamic. Their size is usually not known at compile time, as you can always resize them and add/delete as much as you want.
STL containers are definitely something I need then, since I would need to pass the objects between functions. I like to keep it clean and not do everything in main - say, for example, a file reading function, then passing the large block that was read to an encryption function, and so forth. I know that with the non-dynamic, regular ways of doing things, data gets lost outside the function (scope) - something I definitely do not want at this point for my specific needs. As I said, there are many ways to do it. You could try the one I posted, I will try it too, and see how fast it is. I remember I tried many ways to make the fastest file loading possible and the fastest
I discovered the hard way back when I was using GM that its file functions were HORRIBLY, RIDICULOUSLY slow, but YYG must have assumed most people would use small files and never notice... Try reading a larger binary and it's a PAIN... They read 1 byte at a time, and I believe ENIGMA, since it is compatible with GML, also uses the same method... I believe with the vector / container / new methods file access should be near native speed, though I have yet to try this; I'm not at expert level yet and learning new things as I go along. In ENIGMA, using the current functions, the most I've gotten was near 10MB/s binary. LOL! The slowest P.O.S. drive I used back in the day did no less than 30MB/s... My native sustained read speed is 106-110MB/s, if not more, so with the right C++ code I should be able to achieve this. Now imagine making a MYST-like game or some big adventure game using large assets, and someone has an SSD (they are becoming cheaper) - I'm not sure it would be too convenient to load stuff at 10MB/s... it's a disgrace even for old-generation PATA drives LOL. I was able to get was this:
int load_csv(string fname, int kind, double radius)
{
    model_primitive_begin(kind);
    ifstream data(fname.c_str());
    string str_line;
    int line_num = 0;
    float v[3];
    int col;
    //vector<vector<unsigned int> > benchmark_frames;
In your example you are reading lines of text from a text file - model information, I'm guessing. If regular ENIGMA takes like 20 seconds to load the file, then this does it in less than 1. But I didn't try loading using the vector copy. That might be faster.
Imagine how long it takes in GM - probably close too. When I get some time I will try experimenting a little and see where it brings me. One thing I am never using is the built-in file functions in ENIGMA; I will code my own. I already know how to add new functions and extensions to ENIGMA. BTW, speaking of that: if I were to code a function that uses containers, buckets, vectors, or whatever they are called, and stuff dynamically allocated (heap), can I use it in my ENIGMA projects? For example, if I use STL containers and vectors and store a large 100 MB file into, say, ResRead, and do something like return ResRead, can I pass that 100MB back to the function that was called in my ENIGMA project? If that is the case, then I could easily revamp the file functions and add faster, more efficient ones for advanced game developers.
|
|
« Last Edit: July 25, 2014, 01:54:42 pm by Darkstar2 »
|
Logged
|
|
|
|
TheExDeus
|
|
Reply #28 Posted on: July 25, 2014, 02:00:58 pm |
|
|
Joined: Apr 2008
Posts: 1860
|
stl::containers is definitely something I need then, since I would need to pass the objects between functions, as I like to keep it clean and not do everything in main say for example a file reading function, then passing the read large block to an encryption function and so forth, I know that the non dynamic, regular ways of doing things, it gets lost outside the function (scope). Something I definitely do not want at this point for my specific needs
Yeah, it's useful when passing to functions, because regular arrays don't carry their size. So if you pass an array to a function, you cannot know how large it was (so you cannot iterate over it). That requires ugly fixes like passing the size as a separate argument. With containers you always have that information. Just remember to pass stuff by reference, so you don't create a copy. I had a project to do with computer vision, where I passed images to functions to extract data from them:
double awesomeFunction(std::vector<std::vector<unsigned char> > img){
    //Calculate something awesome and return
}
I had like 20FPS, because I was passing data by value. Then I just had to pass it by reference (add &) and it jumped to 60FPS:
double awesomeFunction(std::vector<std::vector<unsigned char> > &img){
    //Calculate something awesome and return
}
It's something people learn early, but still worth reminding, especially when working with large amounts of data.
I discovered the hard way back when I was using GM that its file functions were HORRIBLY, RIDICULOUSLY slow, but YYG must have assumed most people would use small files and never notice... But try reading a larger binary and it's a PAIN... They read 1 byte at a time, and I believe ENIGMA also uses the same method...
Yeah, we need to address that.
One suggestion I have is to add an option to load the whole file (a file_open function) and then abstract that fact away, so you can still use file_text_readln() and so on, but instead of doing it on the HDD, you do it in RAM. In essence you would get a much higher speed, but still be compatible.
IN your example you are reading lines of text from a text file. I'm guessing, model information
As the name implies, I loaded data from a CSV file. I saved point clouds in that format (as it's very easy) and used this function to load them in ENIGMA.
BTW speaking of that, if I were to code a function that uses containers, buckets, vectors, or whatever they are called, and stuff dynamically allocated (heap), in functions, can I use this in my ENIGMA projects? Example, if I use stl containers and vector and store a large 100 MB file into for example ResRead, and do something like return ResRead, can I pass back that 100MB back the function that was called in my enigma project? If that is the case, then I could easily revamp the file functions and add faster more efficient ones for advanced game developers.
Well, the idea is that you can use C++ in ENIGMA. So you should be able to just call "vector<double> myVector;" straight in ENIGMA. I make calls like "glEnable(...)" in my ENIGMA projects, so I know it's possible to use stuff that is not just ENIGMA functions. The problem is that they break GML/EDL compatibility, as you are then using classes instead of IDs. But I personally think we should stop using them anyway.
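The suggested "load the whole file, keep the file_* API" approach might be sketched roughly like this - the class and method names here are hypothetical, not ENIGMA's actual API:

```cpp
#include <cstddef>
#include <string>
#include <utility>

// Hypothetical in-memory text file: the contents are loaded once,
// then readln() walks the buffer in RAM instead of touching the disk.
class MemTextFile {
public:
    explicit MemTextFile(std::string contents)
        : buf_(std::move(contents)), pos_(0) {}

    bool eof() const { return pos_ >= buf_.size(); }

    // Modeled loosely on file_text_readln: returns the next line, from RAM.
    std::string readln() {
        std::size_t end = buf_.find('\n', pos_);
        if (end == std::string::npos) end = buf_.size();
        std::string line = buf_.substr(pos_, end - pos_);
        pos_ = end + 1;
        return line;
    }

private:
    std::string buf_;
    std::size_t pos_;
};
```

Existing game code that reads line by line would keep working, while the disk is only touched once at open time.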
|
|
« Last Edit: July 25, 2014, 02:05:58 pm by TheExDeus »
|
Logged
|
|
|
|
Goombert
|
|
Reply #29 Posted on: July 25, 2014, 03:22:38 pm |
|
|
Location: Cappuccino, CA Joined: Jan 2013
Posts: 2993
|
As far as I know the difference between stack and heap is that stack has limited size, it's not dynamic (so the sizes must be known at compile time) and has limited scope (so it gets deleted when it goes out of scope). stl::containers are dynamic. Their size is usually not known at compile time, as you can always resize them and add/delete as much as you want. This means they are not allocated on the stack. What could be allocated on the stack is the class instance itself, which allows it to exist only in scope. But the memory for data internally probably uses new/delete, to allocate on the heap. That is why you can make a vector as big as you want, and if it fits in RAM, then you are okay.
Ah, so we are both somewhat right. According to the link below you can control that. http://stackoverflow.com/questions/783944/how-do-i-allocate-a-stdstring-on-the-stack-using-glibcs-string-implementation
|
|
|
Logged
|
I think it was Leonardo da Vinci who once said something along the lines of "If you build the robots, they will make games." or something to that effect.
|
|
|
|