How do I know what dimensions, and the resulting memory load, won't cause overflow?
Trial and error is probably the best way. Keep increasing the memory used from a "this works" amount to one the OS can't allocate.
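To make that trial-and-error less crashy, you can let the allocator answer for you instead of waiting for the OS to kill the program. A minimal sketch (my own helper, not code from this thread) that first guards the index math against size_t overflow, then probes whether one contiguous block of that size can actually be allocated:

```cpp
#include <cstddef>
#include <exception>
#include <limits>
#include <vector>

// Hedged sketch: can an nx*ny*nz array of double be allocated right now?
// Step 1: make sure nx*ny*nz itself doesn't overflow std::size_t.
// Step 2: ask for the block and let the allocator say yes or no.
bool can_allocate_3d(std::size_t nx, std::size_t ny, std::size_t nz) {
    const std::size_t max = std::numeric_limits<std::size_t>::max();
    if (ny != 0 && nx > max / ny) return false;       // nx*ny would overflow
    if (nz != 0 && nx * ny > max / nz) return false;  // nx*ny*nz would overflow
    try {
        std::vector<double> probe(nx * ny * nz);      // one contiguous block
        return true;                                  // freed when probe dies
    } catch (const std::exception&) {                 // bad_alloc or length_error
        return false;                                 // the OS/allocator said no
    }
}
```

Keep calling it with increasing sizes to find where "this works" stops working, without the program dying mid-experiment.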
Personally I'd use a C++ container so all the messy memory management is done for me. All I have to deal with then is whether my app successfully gets the memory allocation I asked for. Properly allocating a 3D array on the heap is very easy to screw up IMO.
The syntax for creating, say, a 3D vector is a bit more to type than a regular 3D array, but the ease of it being run-time dynamic without manual memory management overcomes that issue IMO:
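A sketch of what that "more to type" declaration looks like (the sizes and helper name here are mine, for illustration): one statement builds and zero-initializes the whole structure, all sizes decided at run time, nothing to delete afterwards.

```cpp
#include <cstddef>
#include <vector>

using Grid3D = std::vector<std::vector<std::vector<int>>>;

// Hedged sketch: build a zero-filled nx x ny x nz vector-of-vectors in one
// declaration. No new/delete anywhere; it cleans up after itself.
Grid3D make_grid(std::size_t nx, std::size_t ny, std::size_t nz) {
    return Grid3D(nx, std::vector<std::vector<int>>(
                          ny, std::vector<int>(nz, 0)));
}
```

After that you index it exactly like a built-in 3D array: `auto v = make_grid(3, 4, 5); v[2][3][4] = 42;`.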
One nitpick: a multi-dimensional vector is probably a whole lot slower and more cumbersome because it's not a single contiguous array; for an n x n x n 3D vector you get on the order of n^2 separate allocations instead of just one. I've done some work with vectors of vectors in the past for image processing, and it can be an order of magnitude slower depending on how it's used (probably because of poor cache locality). It would be nice if C++ automatically did the 1D -> nD offset math for you, but a single flat vector is still probably your best bet.
If you can put the multidimensional array into a single contiguous 1D array, I'd like to know how to do that. It makes more sense and reduces the lookup overhead. But doesn't that then mean the data never belonged in a multi-dimensional array in the first place, if it fits in a single one?
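The flat-array idea is just the offset math the compiler already does for C arrays, written out by hand. A sketch (my own wrapper, illustrative names): element (x, y, z) of an nx x ny x nz grid lives at offset (x*ny + y)*nz + z in one contiguous vector.

```cpp
#include <cstddef>
#include <vector>

// Hedged sketch: a 3D array stored in one contiguous vector. The offset
// formula (x*ny + y)*nz + z is the same math the compiler generates for
// a C array declared arr[nx][ny][nz].
struct Flat3D {
    std::size_t nx, ny, nz;
    std::vector<double> data;

    Flat3D(std::size_t x, std::size_t y, std::size_t z)
        : nx(x), ny(y), nz(z), data(x * y * z, 0.0) {}

    double& at(std::size_t x, std::size_t y, std::size_t z) {
        return data[(x * ny + y) * nz + z];   // row-major offset
    }
};
```

To your philosophical point: the data is still logically 3D -- the dimensions live in the index math instead of in the type, which is exactly why you get one allocation and cache-friendly traversal.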
Hi George P. I'm going to try the 3-dimensional vector and compare it with the one lastchance posted, just for giggles. I'll try to find a decent algorithm to see which one runs quickest.
With multi-dimensional C-arrays, this math is done for you, but not for containers like std::array/vector.
Actually -- there might be something you could do with a clever pointer cast to emulate the same behavior, but I'm hesitant to go down that path.
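For the curious, here's a sketch of what that clever cast might look like (my own demo, and I'd echo the hesitation: the inner extents must be compile-time constants, and the C++ object model technically frowns on it, so treat it as a curiosity rather than a recommendation):

```cpp
#include <cstddef>
#include <vector>

// Hedged sketch: view a flat vector's buffer through a pointer to a
// fixed-size 2D C-array, so the compiler does the v[x][y][z] offset math.
int demo_pointer_cast() {
    constexpr std::size_t NY = 4, NZ = 5;    // inner extents: compile time
    std::size_t nx = 3;                      // outer extent: run time is fine

    std::vector<int> flat(nx * NY * NZ, 0);  // one contiguous allocation
    auto view = reinterpret_cast<int(*)[NY][NZ]>(flat.data());

    view[2][3][4] = 42;                      // compiler computes (2*NY+3)*NZ+4
    return flat[(2 * NY + 3) * NZ + 4];      // same element via manual math
}
```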
Personally I'd never create a 3D (or higher) vector*; the most I'd do is 2D, or simulate xD in 1D as you point out.
I normally don't work with gobs and gobs of data, on the order of multiple GBs, that has to be in memory at any one time, so the performance hits wouldn't be noticeable for the most part.
YMMV.
*Displaying a 1D/2D vector/array as if it were a native C++ type is IMO easy to achieve:
Displaying a sized 1D vector:
5
0 0 0 0 0
Creating a 2-dimensional vector, enter row size: 4
Enter column size: 5
Displaying the filled 2D vector:
101 102 103 104 105
201 202 203 204 205
301 302 303 304 305
401 402 403 404 405
Ramping up to 3D requires manual looping -- I haven't found a way to automate it -- and the looping has to be done "out of order" to get what is IMO a proper 3D display.
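A sketch of one way that manual loop could go (my own guess at the "out of order" part: iterate the last index outermost, so each layer prints as its own 2D grid, assuming a rectangular vector):

```cpp
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Hedged sketch: format a rectangular 3D vector as a series of 2D slices,
// with the last index (z) in the OUTER loop and a blank line between layers.
std::string format_3d(const std::vector<std::vector<std::vector<int>>>& v) {
    std::ostringstream os;
    if (v.empty() || v[0].empty()) return os.str();
    for (std::size_t z = 0; z < v[0][0].size(); ++z) {      // layer
        for (std::size_t x = 0; x < v.size(); ++x) {        // row
            for (std::size_t y = 0; y < v[x].size(); ++y)   // column
                os << v[x][y][z] << ' ';
            os << '\n';
        }
        os << '\n';                                         // layer separator
    }
    return os.str();
}
```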