How to manipulate *huge* amounts of data

I'm having the following problem: I need to store a huge amount of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what's the best way to do it (combination of programming language + OS + whatever else you think is important).

The structure of the information I'm using is a 4D array (NxNxNxN) of double-precision floats (8 bytes). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on my computer's HDD. This is really slow and the manipulation of the data is unbearable, so it's no solution at all!

I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application that takes advantage of it (I'm not a professional programmer, so any book/reference will help me a lot).

An alternative solution I'm considering is to buy a dedicated server with lots of RAM, but I don't know for sure whether that will solve the problem. So right now my ignorance doesn't let me choose the best way to proceed.

What would you do if you were in this situation? I'm open to any idea.

Thanks in advance!


EDIT: Sorry for not providing enough information; I'll try to be more specific.

I'm storing a discretized 4D mathematical function. The operations I would like to perform include transposition of the array (e.g. b[i,j,k,l] = a[j,i,k,l] and the like), array multiplication, etc.
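
To be concrete, a minimal sketch of the kind of operation I mean, assuming the array is stored as one flat row-major block of doubles (the layout is just for illustration):

```cpp
#include <cstddef>
#include <vector>

// Linear index of element (i, j, k, l) in a flat row-major N*N*N*N array.
inline std::size_t idx(std::size_t N, std::size_t i, std::size_t j,
                       std::size_t k, std::size_t l) {
    return ((i * N + j) * N + k) * N + l;
}

// b[i,j,k,l] = a[j,i,k,l]: the transposition mentioned above.
void transpose01(std::size_t N, const std::vector<double>& a,
                 std::vector<double>& b) {
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t j = 0; j < N; ++j)
            for (std::size_t k = 0; k < N; ++k)
                for (std::size_t l = 0; l < N; ++l)
                    b[idx(N, i, j, k, l)] = a[idx(N, j, i, k, l)];
}
```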

As this is a simulation of a proposed experiment, the operations will be applied only once. Once the result is obtained, it won't be necessary to perform more operations on the data.


EDIT (2):

I would also like to be able to store more information in the future, so the solution should be somewhat scalable. The current 32 GB goal is because I want to have the array with N=256 points, but it would be better if I could use N=512 (which means 512 GB to store it!!).
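
(For reference, the arithmetic: 256^4 × 8 bytes = 2^35 bytes = 32 GiB, and 512^4 × 8 bytes = 2^39 bytes = 512 GiB, so each doubling of N multiplies the storage by 16.)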

Answers

Amazon's "High Memory Extra Large Instance" is only $1.20/hr and has 34 GB of memory. You might find it useful, assuming you're not running this program constantly.

Any decent answer will depend on how you need to access the data. Random access? Sequential access?

32GB is not really that huge.

How often do you need to process your data? Once per (lifetime | year | day | hour | nanosecond)? Often, stuff only needs to be done once. This has a profound effect on how much you need to optimize your solution.

What kind of operations will you be performing (you mention multiplication)? Can the data be split up into chunks, such that all necessary data for a set of operations is contained in a chunk? This will make splitting it up for parallel execution easier.

Most computers you buy these days have enough RAM to hold your 32 GB in memory. You won't need a supercomputer just for that.

As Chris pointed out, it depends on what you are going to do with the data.

Besides, I think storing it in a (relational) database will be faster than reading it from the hard drive, since the RDBMS will perform some optimizations for you, like caching.

If you can represent your problem as MapReduce, consider a clustering system optimized for disk access, such as Hadoop.

Your description sounds more math-intensive, in which case you probably want to have all your data in memory at once. 32 GB of RAM in a single machine is not unreasonable; Amazon EC2 offers virtual servers with up to 68 GB.

Without more information: if you need the quickest possible access to all the data, I would go with C for the programming language, some flavor of *nix as the OS, and buying RAM; it's relatively cheap now. It also depends on what you are familiar with; you can go the Windows route as well. But as others have mentioned, it will depend on how you are using this data.

So far, there are a lot of very different answers. There are two good starting points mentioned above. David suggests some hardware and someone mentioned learning C. Both of these are good points.

C is going to get you what you need in terms of speed and direct memory paging. The last thing you want to do is perform linear searches on the data. That would be slow - slow - slow.

Determine your workflow: if your workflow is linear, that is one thing. If the workflow is not linear, I would design a binary tree referencing pages in memory. There is plenty of information on B-trees on the Internet. In addition, these B-trees will be much easier to work with in C, since you will also be able to set up and manipulate your memory paging.
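
A minimal sketch of the page-lookup idea, using C++'s std::map (itself a balanced tree) instead of a hand-rolled B-tree in C; the page size and the load_page() helper are assumptions for illustration:

```cpp
#include <cstddef>
#include <map>
#include <vector>

constexpr std::size_t PAGE_DOUBLES = 1u << 20;   // 8 MiB of doubles per page (assumption)

// Hypothetical loader: reads one page worth of doubles from backing storage.
std::vector<double> load_page(std::size_t page_no) {
    std::vector<double> page(PAGE_DOUBLES, 0.0);
    // ... fread() page_no * PAGE_DOUBLES * sizeof(double) bytes into page here ...
    return page;
}

// Keep only the "hot" pages in memory, found via a balanced-tree lookup.
double read_element(std::map<std::size_t, std::vector<double>>& cache,
                    std::size_t flat_index) {
    const std::size_t page_no = flat_index / PAGE_DOUBLES;
    auto it = cache.find(page_no);               // O(log #cached pages)
    if (it == cache.end())
        it = cache.emplace(page_no, load_page(page_no)).first;
    return it->second[flat_index % PAGE_DOUBLES];
}
```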

Depending on your use, some mathematical and physical problems tend to be mostly zeros (for example, Finite Element models). If you expect that to be true for your data, you can get serious space savings by using a sparse matrix instead of actually storing all those zeros in memory or on disk.

Check out Wikipedia for a description and to decide whether this might meet your needs: http://en.wikipedia.org/wiki/Sparse_matrix
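
If the data really is mostly zeros, a minimal sketch of the idea: keep only the non-zero entries in a hash map keyed on the flattened index (a generic coordinate-style scheme, not tied to any particular sparse-matrix library):

```cpp
#include <cstddef>
#include <unordered_map>

// Sparse 4D array sketch: only non-zero entries are stored; missing keys read as 0.
struct Sparse4D {
    std::size_t N;
    std::unordered_map<std::size_t, double> data;

    std::size_t key(std::size_t i, std::size_t j,
                    std::size_t k, std::size_t l) const {
        return ((i * N + j) * N + k) * N + l;    // flattened index
    }
    double get(std::size_t i, std::size_t j,
               std::size_t k, std::size_t l) const {
        auto it = data.find(key(i, j, k, l));
        return it == data.end() ? 0.0 : it->second;
    }
    void set(std::size_t i, std::size_t j,
             std::size_t k, std::size_t l, double v) {
        if (v != 0.0) data[key(i, j, k, l)] = v;
        else          data.erase(key(i, j, k, l));
    }
};
```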

Here's another idea:

Try using an SSD to store your data. Since you're grabbing very small amounts of random data, an SSD would probably be much, much faster.

You may want to try using mmap instead of reading the data into memory, but I'm not sure it'll work with 32 GB files.
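
For what it's worth, on a 64-bit *nix system the address space is plenty for a 32 GB mapping. A minimal sketch of the mmap approach (the file name "data.bin" and N are assumptions):

```cpp
#include <cstddef>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const std::size_t N = 256;
    const std::size_t bytes = N * N * N * N * sizeof(double);   // 32 GiB

    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    // Map the whole file; pages are faulted in from disk only when touched.
    void* p = mmap(nullptr, bytes, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    const double* a = static_cast<const double*>(p);
    const std::size_t i = 1, j = 2, k = 3, l = 4;                // example element
    std::printf("a[1,2,3,4] = %g\n", a[((i * N + j) * N + k) * N + l]);

    munmap(p, bytes);
    close(fd);
    return 0;
}
```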

The whole of database technology is about manipulating huge amounts of data that can't fit in RAM, so that might be your starting point (i.e. get a good book on DBMS principles and read about indexing, query execution, etc.).
A lot depends on how you need to access the data. If you absolutely need to jump around and access random bits of information, you're in trouble, but perhaps you can structure your processing of the data so that you scan it along one axis (dimension). Then you can use a smaller buffer, continuously dumping already-processed data and reading new data.
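
A minimal sketch of that scanning idea: keep only one i-slab (N^3 doubles, 128 MiB for N = 256) in memory at a time; the file names and the per-slab operation are placeholders:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t N = 256;
    const std::size_t slab = N * N * N;              // doubles in one i-slab (128 MiB)

    std::FILE* in  = std::fopen("a.bin", "rb");      // placeholder input file
    std::FILE* out = std::fopen("result.bin", "wb"); // placeholder output file
    if (!in || !out) { std::perror("fopen"); return 1; }

    std::vector<double> buf(slab);
    for (std::size_t i = 0; i < N; ++i) {            // scan along the first axis
        if (std::fread(buf.data(), sizeof(double), slab, in) != slab) {
            std::perror("fread"); return 1;
        }
        for (double& x : buf) x *= 2.0;              // placeholder per-element work
        std::fwrite(buf.data(), sizeof(double), slab, out);
    }
    std::fclose(in);
    std::fclose(out);
    return 0;
}
```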

For transpositions, it's faster to just change your understanding of which index is which. By that, I mean you leave the data where it is and instead wrap it in an accessor delegate that turns a request for b[i][j][k][l] into a fetch (or update) of a[j][i][k][l].
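
A minimal sketch of such an accessor, assuming the flat row-major layout used elsewhere in this thread: a "transposed view" that swaps the indices on each access, so no element is actually moved.

```cpp
#include <cstddef>

// View of a flat row-major N*N*N*N array with the first two indices swapped:
// view(i, j, k, l) reads a[j][i][k][l] without copying anything.
struct TransposedView01 {
    const double* a;
    std::size_t   N;

    double operator()(std::size_t i, std::size_t j,
                      std::size_t k, std::size_t l) const {
        return a[((j * N + i) * N + k) * N + l];
    }
};
```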

Would it be possible to solve it with the following procedure?

First, create M child processes and execute them in parallel. Each process will run on a dedicated core of a cluster and will load some of the array into the RAM available to that core.

A parent process will act as the manager of the array, calling (or connecting to) the appropriate child process to obtain certain chunks of the data.

Would this be faster than the HDD storage approach? Or am I cracking a nut with a sledgehammer?
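
A minimal sketch of that manager/worker idea, using MPI (which is what cluster codes typically use for this); N, the slab distribution, and the little request protocol are all assumptions for illustration, and it needs at least two ranks:

```cpp
#include <mpi.h>
#include <cstddef>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    if (nprocs < 2) { MPI_Finalize(); return 1; }    // need a manager + at least 1 worker

    const int N = 8;                                 // tiny N just for illustration
    const int workers = nprocs - 1;                  // rank 0 is the manager (parent)
    const int rows_per = N / workers;                // assume N divisible by #workers
    const int plane = N * N;                         // doubles in one a[i][j][*][*] plane

    if (rank != 0) {
        // Worker: keep my slab of rows i in [lo, lo + rows_per) entirely in RAM.
        const int lo = (rank - 1) * rows_per;
        std::vector<double> slab((std::size_t)rows_per * N * plane, 1.0);
        // ... fill the slab from the simulation / input files here ...

        int req[2];                                  // req = {i, j}; i < 0 means stop
        MPI_Status st;
        while (true) {
            MPI_Recv(req, 2, MPI_INT, 0, 0, MPI_COMM_WORLD, &st);
            if (req[0] < 0) break;
            const double* p = &slab[((std::size_t)(req[0] - lo) * N + req[1]) * plane];
            MPI_Send(p, plane, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
        }
    } else {
        // Manager: request the chunk a[i][j][*][*] from whichever worker owns row i.
        std::vector<double> buf(plane);
        int req[2] = {3, 5};
        const int owner = 1 + req[0] / rows_per;
        MPI_Status st;
        MPI_Send(req, 2, MPI_INT, owner, 0, MPI_COMM_WORLD);
        MPI_Recv(buf.data(), plane, MPI_DOUBLE, owner, 1, MPI_COMM_WORLD, &st);
        std::printf("a[%d][%d][0][0] = %g\n", req[0], req[1], buf[0]);

        int stop[2] = {-1, -1};                      // shut every worker down
        for (int w = 1; w < nprocs; ++w)
            MPI_Send(stop, 2, MPI_INT, w, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```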

The first thing I'd recommend is picking an object-oriented language, and developing or finding a class that lets you manipulate a 4-D array without concern for how it's actually implemented.

The actual implementation of this class would probably use memory-mapped files, simply because that can scale from low-powered development machines up to the actual machine where you want to run production code (I'm assuming that you'll want to run this many times, so performance is important; if you can let it run overnight, then a consumer PC may be sufficient).
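
A minimal sketch of what such an interface might look like (the names are illustrative; the memory-mapped implementation would live behind it):

```cpp
#include <cstddef>
#include <memory>

// Client code manipulates the 4-D array only through this interface and
// never sees whether the storage is plain RAM, a memory-mapped file, etc.
class Array4D {
public:
    virtual ~Array4D() = default;
    virtual std::size_t size() const = 0;                        // N
    virtual double get(std::size_t i, std::size_t j,
                       std::size_t k, std::size_t l) const = 0;
    virtual void   set(std::size_t i, std::size_t j,
                       std::size_t k, std::size_t l, double v) = 0;
};

// Hypothetical factory for one possible implementation, backed by a
// memory-mapped file (declaration only in this sketch).
std::unique_ptr<Array4D> openMappedArray(const char* path, std::size_t N);
```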

Finally, once I had my algorithms and data debugged, I would look into buying time on a machine that could hold all the data in memory. Amazon EC2, for instance, will provide you with a machine that has 68 GB of memory for US$2.40 an hour (less if you play with spot instances).

How to handle processing large amounts of data typically revolves around the following factors:

  • Data access order / locality of reference: Can the data be separated into independent chunks that are then processed either independently or in a serial/sequential fashion, as opposed to random access to the data with little or no order?

  • CPU vs I/O bound: Is the processing time spent more on computation with the data or reading/writing it from/to storage?

  • Processing frequency: Will the data be processed only once, every few weeks, daily, etc.?

If the data access order is essentially random, you will need to get access to as much RAM as possible and/or find a way to at least partially organize the order so that less of the data needs to be in memory at the same time. Virtual memory systems slow down very quickly once physical RAM limits are exceeded and significant swapping occurs. Resolving this aspect of your problem is probably the most critical issue.

Other than the data access order issue above, I don't think your problem has significant I/O concerns. Reading/writing 32 GB is usually measured in minutes on current computer systems, and even data sizes up to a terabyte should not take more than a few hours.

Programming language choice is actually not critical, so long as it is a compiled language with a good optimizing compiler and decent native libraries: C++, C, C#, or Java are all reasonable choices. The most computationally and I/O-intensive software I've worked on has actually been in Java and deployed on high-performance supercomputing clusters with a few thousand CPU cores.




