Pages: « 1 2 3
Author Topic: C++ short delay when using CIN + file access discussion  (Read 5943 times)
Offline (Unknown gender) Darkstar2
Reply #30 Posted on: July 25, 2014, 03:52:09 PM
Member
Joined: Jan 2014
Posts: 1244

Quote
STL containers are definitely something I need then, since I would need to pass objects between functions; I like to keep things clean and not do everything in main :P  Say, for example, a file-reading function that passes the large block it read on to an encryption function, and so forth. I know that with the regular, non-dynamic ways of doing things, the data gets lost outside the function (scope). Something I definitely do not want at this point for my specific needs :D
Yeah, it's useful when passing to functions, because regular arrays don't carry their size. So if you pass an array to a function, you cannot know how large it is (and so you cannot iterate over it). That requires ugly fixes like passing the size as a separate argument. With containers, you always have that information. Just remember to pass things by reference, so you don't create a copy. I had a project to do with computer vision - I passed images to functions to extract data from them.
Code: (C++)
double awesomeFunction(std::vector<std::vector<unsigned char> > img){
    // Calculate something awesome and return it
    return 0.0;
}
I had like 20 FPS, because I was passing the data by value. Then I just had to pass it by reference (add &) and it jumped to 60 FPS:
Code: (C++)
double awesomeFunction(std::vector<std::vector<unsigned char> > &img){
    // Calculate something awesome and return it
    return 0.0;
}
It's something people learn early, but it's still worth a reminder, especially when working with large amounts of data.
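A further refinement worth mentioning (standard practice, not from the posts above): when the function only reads the image, take it by const reference. That still avoids the copy and also documents that the data will not be modified. A minimal sketch with an illustrative function name:

```cpp
#include <vector>

// Passing by const reference avoids copying the (potentially large) image
// while making it clear the function will not modify it.
double averagePixel(const std::vector<std::vector<unsigned char> > &img) {
    long long sum = 0, count = 0;
    for (const auto &row : img) {
        for (unsigned char px : row) {
            sum += px;
            ++count;
        }
    }
    return count ? static_cast<double>(sum) / count : 0.0;
}
```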

Quote
I discovered the hard way back when I was using GM that file functions were HORRIBLY, RIDICULOUSLY slow, but YYG must have assumed most people would use small files and never notice. Try reading a larger binary file and it's a PAIN: they read 1 byte at a time, and I believe ENIGMA, since it is compatible with GML, also uses the same method.
Yeah, we need to address that. One suggestion I have is to add an option to load the whole file (in the file_open function) and then abstract that fact away, so you can still use file_text_readln() and so on, but instead of hitting the HDD, you read from RAM. So in essence you would get much higher speed, but still be compatible.
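That buffering idea could be sketched roughly like this (the class and names here are illustrative, not ENIGMA's actual implementation): pull the whole file into memory with one bulk read, then serve readln-style calls from the in-memory buffer.

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Illustrative sketch: read the whole file into RAM once, then serve
// file_text_readln()-style calls from the in-memory buffer instead of disk.
class BufferedTextFile {
public:
    explicit BufferedTextFile(const std::string &path) {
        std::ifstream in(path, std::ios::binary);
        std::ostringstream contents;
        contents << in.rdbuf();          // single bulk read from the HDD
        buffer_.str(contents.str());
    }
    // Equivalent of file_text_readln(): reads from RAM, not the disk.
    std::string readln() {
        std::string line;
        std::getline(buffer_, line);
        return line;
    }
    bool eof() { return buffer_.eof(); }
private:
    std::istringstream buffer_;
};
```

The existing file_text_* functions could then dispatch to such a buffer when the file was opened with preloading enabled, keeping the GML-style API unchanged.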

Quote
In your example you are reading lines of text from a text file. I'm guessing model information :D :P
As the name implies, I loaded data from a CSV file. I saved point clouds in that format (as it's very easy) and used this function to load them in ENIGMA.
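A CSV point-cloud loader along those lines might look like this (a sketch under the assumption of simple "x,y,z" lines, not TheExDeus's actual code):

```cpp
#include <sstream>
#include <string>
#include <vector>

struct Point { double x, y, z; };

// Parse "x,y,z" lines from CSV text into a point cloud.
// Malformed or empty lines are skipped.
std::vector<Point> parsePointCloud(const std::string &csv) {
    std::vector<Point> cloud;
    std::istringstream lines(csv);
    std::string line;
    while (std::getline(lines, line)) {
        if (line.empty()) continue;
        std::istringstream fields(line);
        Point p{};
        char comma;
        if (fields >> p.x >> comma >> p.y >> comma >> p.z)
            cloud.push_back(p);
    }
    return cloud;
}
```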

Quote
BTW, speaking of that: if I were to code a function that uses containers, vectors, or whatever they are called, with data dynamically allocated on the heap, can I use it in my ENIGMA projects? For example, if I use an STL container such as a vector, store a large 100 MB file into, say, ResRead, and do something like return ResRead, can I pass that 100 MB back to the function that was called in my ENIGMA project? If that is the case, then I could easily revamp the file functions and add faster, more efficient ones for advanced game developers. :)
Well, the idea is that you can use C++ in ENIGMA. So you should be able to just call "vector<double> myVector;" straight in ENIGMA. I make calls like "glEnable(...)" in my ENIGMA projects, so I know it's possible to use things that are not ENIGMA functions. The problem is that they break GML/EDL compatibility, as you are then using classes instead of IDs. But I personally think we should end up using them.
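On returning a 100 MB container from a function: with C++11 move semantics (and return-value optimization), returning a std::vector by value does not copy the data, so a loader can safely hand a large buffer back to the caller. A minimal sketch, with an illustrative name echoing the ResRead example above:

```cpp
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Illustrative loader: reads an entire binary file into a vector.
// Returning by value is cheap here: the buffer is moved, not copied.
std::vector<unsigned char> resRead(const std::string &path) {
    std::ifstream in(path, std::ios::binary);
    return std::vector<unsigned char>(std::istreambuf_iterator<char>(in),
                                      std::istreambuf_iterator<char>());
}
```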

So you are telling me I could use those in ENIGMA? I thought we could not do that inside ENIGMA, meaning we can't use STL containers with this parser... I would much rather make it a C++ function and call it from my ENIGMA project :)
More and more I realise how LAZILY and POORLY many parts of GM were implemented. But that idea of yours is a great one: keeping compatibility with the file functions in G**Studio but reading the whole file into memory and reading from the buffer instead of the HDD. However, keep in mind this has its pros and cons:

#1) More memory consumption.
#2) It can be counterproductive, especially if you need to open really small files, which most people do; you won't gain any noticeable speed, and in some cases the opposite.

So in my opinion adding NEW functions would be ideal. People can't have their cake and eat it too; there is a hefty price to pay for being compatible with a highly flawed product. :)

There are times when you DON'T want to load the entire file into memory and would rather read straight from the HDD, particularly when you only need a small chunk of info. There are times when you want to read in blocks as opposed to the entire file, and there are times when you want the entire file in RAM.
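The middle case, reading in blocks, is essentially a loop over std::ifstream::read with a fixed-size buffer; a sketch (function name and the processing step are placeholders):

```cpp
#include <fstream>
#include <string>
#include <vector>

// Process a file in fixed-size chunks instead of loading it all into RAM
// or reading one byte at a time. Returns the total bytes processed.
std::size_t processInBlocks(const std::string &path, std::size_t blockSize) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> block(blockSize);
    std::size_t total = 0;
    while (in.read(block.data(), static_cast<std::streamsize>(block.size())) ||
           in.gcount() > 0) {
        std::size_t got = static_cast<std::size_t>(in.gcount());
        // ...hand block[0..got) to the encryption/parsing stage here...
        total += got;
    }
    return total;
}
```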

So it's nice of YoYoGames to have assumed and DECIDED that 1-byte binary reads were ideal LOL.
:P



« Last Edit: July 25, 2014, 04:01:36 PM by Darkstar2 » Logged
Offline (Unknown gender) TheExDeus
Reply #31 Posted on: July 25, 2014, 04:19:01 PM

Developer
Joined: Apr 2008
Posts: 1872

Quote
#1) More memory consumption.
#2) It can be counterproductive, especially if you need to open really small files, which most people do; you won't gain any noticeable speed, and in some cases the opposite.
That is what I proposed to be optional. Like an additional argument to the file_open functions, a "bool preload = false" flag, so by default nothing changes, but if you want to load the file into RAM, you can. This way you won't have to make a duplicate set of functions just for this. I also don't think it will be slower for small files: you have to load the data into RAM anyway. You would still be able to load only parts of the file from the HDD just like before, but I don't know how reading part of a file into RAM would work here; maybe another two arguments would be required (read start and size to read).
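That optional-flag idea could look something like this (all names and the struct are hypothetical, not ENIGMA's real API): a single open function where preload defaults to false, plus the two optional arguments for partial preloads mentioned above.

```cpp
#include <fstream>
#include <string>
#include <vector>

// Hypothetical sketch of the proposed API: by default nothing changes,
// but preload = true pulls the requested range into RAM up front.
struct OpenedFile {
    std::vector<char> ram;   // filled only when preloaded
    bool preloaded = false;
};

OpenedFile file_open(const std::string &path, bool preload = false,
                     std::size_t start = 0, std::size_t size = 0) {
    OpenedFile f;
    if (preload) {
        std::ifstream in(path, std::ios::binary);
        in.seekg(0, std::ios::end);
        std::size_t fileSize = static_cast<std::size_t>(in.tellg());
        // size == 0 means "read from start to the end of the file".
        std::size_t count = size ? size : fileSize - start;
        f.ram.resize(count);
        in.seekg(static_cast<std::streamoff>(start));
        in.read(f.ram.data(), static_cast<std::streamsize>(count));
        f.preloaded = true;
    }
    return f;
}
```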

I just don't want 100 file reading functions in ENIGMA - one set reading from RAM, one reading from the HDD, one reading ASCII text, one reading Unicode text, one reading little-endian binary, one big-endian binary, etc. We should be able to do it with what we have, but expand to allow more. For example, when you load into RAM, you should have the option to access the buffer and then use the buffer_ functions on it, so you can easily send the file over the network. We basically need to tie all this together and not have a million different functions. Although shipping the additional set as an extension would simplify management.
« Last Edit: July 25, 2014, 04:23:53 PM by TheExDeus » Logged