Sorting a huge file of integers

Hello people.
I have to sort a huge (about 30MB) file of integers.
Please send me an efficient algorithm or program in C++.
Thanks to ozo I've decided to use qsort. However, I still have a problem: in order to use qsort I have to put my data into some kind of array, but it seems impossible to create an array of that size (it should hold at least 800000 integers). What can I do?

Please answer quickly.
Thanks.
nir_sh Asked:
 
msa092298 Commented:
You don't have to use huge pointers.  You can divide the data into chunks of at most 64 K, sort each chunk on its own, and write it to a temp file.  Then read the first 4 K of every temp file (for performance) and insert the first integer from every file into a heap array (if you know Heap Sort), where the smallest value is always kept at the top.  Each value carries an identifier of the temp file it came from.  Now move the top of the heap to the final sorted file and replace it with its successor from the same temp file (when you exhaust the 4 K buffer belonging to that file, read another 4 K buffer).  If the successor is still less than its two children (which are the least values remaining in the buffers of the other chunks), add it to the final file and repeat the process.  If not, remove it from the top of the heap, replace it with the lesser of its two children, and reinsert it into the heap.  You can greatly reduce the time and space required if you know you have a lot of repeated values: when writing the values to the temp files in the first phase, write each value together with a count of its repetitions.
Conclusion: divide the data into chunks, sort each chunk on its own using quicksort, and write it to a temp file, preferably with a repeat count for each repeated value.  Then merge the temp files into a single file in one of two ways (see the sketch after this list):
1. The easy way: merge the files two at a time, replacing each pair with a single merged file.  Repeat until only one file remains.  Delete the already-merged files to save space.
2. The harder way (but it should be faster): merge all n temp files together, using a heap to know which file currently holds the smallest value.  Keep reading from that file into the final file until the least value of another file becomes smaller.  This way you don't have to qsort all the values while merging.
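
A minimal sketch of the heap-based merge (option 2) in C++, assuming each temp file is already sorted and holds whitespace-separated integers. The file names below are made up for illustration, not part of the question:

// K-way merge of pre-sorted temp files using a min-heap, as described above.
// Assumes the chunking/qsort phase already produced chunk0.txt, chunk1.txt, ...
#include <fstream>
#include <functional>
#include <queue>
#include <string>
#include <utility>
#include <vector>

int main() {
    std::vector<std::string> names = { "chunk0.txt", "chunk1.txt", "chunk2.txt" };
    std::vector<std::ifstream> chunks;
    for (const auto& n : names)
        chunks.emplace_back(n);

    // Min-heap of (value, index of the temp file it came from).
    using Item = std::pair<int, std::size_t>;
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> heap;

    // Prime the heap with the first value from every chunk.
    for (std::size_t i = 0; i < chunks.size(); ++i) {
        int v;
        if (chunks[i] >> v) heap.push({v, i});
    }

    std::ofstream out("sorted.txt");
    while (!heap.empty()) {
        auto [value, src] = heap.top();
        heap.pop();
        out << value << '\n';
        int next;
        if (chunks[src] >> next)      // refill from the same temp file
            heap.push({next, src});
    }
    return 0;
}

The standard stream buffering here plays roughly the role of the 4 K read-ahead buffers described above.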
 
snoegler Commented:
Behind this link:

  http://now.cs.berkeley.edu/NowSort

you can find what is (as far as I know) the fastest algorithm for disk-based sorting.
I'm not sure whether you can get the algorithm itself there, but I think so. I would try it, because the
algorithm scales even to 6.4 GB (in the benchmark you can see there, sorted in less than an hour on
32 workstations).

Another conventional sorting algorithm for disk files is the bucket sort, but I don't have a good
link; try searching for it.
Hope this helped...
 
ozo Commented:
30MB isn't that large.
If it fits in memory you might just
#include <stdlib.h>
int compar(const void *a, const void *b){
      return *((int *)a)-*((int *)b);
}

qsort(base, nel, sizeof(int), &compar);
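
For reference, a complete in-memory version in C++, assuming the file holds whitespace-separated integers and really does fit in memory (the file names are only illustrative); std::sort on a vector avoids writing a C-style comparator at all:

// Whole-file in-memory sort, assuming everything fits in RAM.
#include <algorithm>
#include <fstream>
#include <vector>

int main() {
    std::ifstream in("numbers.txt");
    std::vector<int> data;
    int v;
    while (in >> v) data.push_back(v);

    std::sort(data.begin(), data.end());

    std::ofstream out("numbers_sorted.txt");
    for (int x : data) out << x << '\n';
    return 0;
}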


 
elfie Commented:
What about calling an already existing sort program? On Unix, just call "sort -n < in > out".
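
If you want to launch the external sort from inside a C++ program, a minimal sketch using std::system, assuming a Unix-like system with sort on the PATH (file names are illustrative):

// Shelling out to the existing sort utility, as suggested above.
#include <cstdlib>

int main() {
    int rc = std::system("sort -n < numbers.txt > numbers_sorted.txt");
    return rc == 0 ? 0 : 1;
}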
 
thresher_shark Commented:
I would think that 30MB would fit in memory, so if I were you I'd use ozo's method.
 
nir_sh Author Commented:
Edited text of question
 
nir_sh Author Commented:
Edited text of question
 
nir_sh Author Commented:
Adjusted points to 400
 
ozo Commented:
C++ can easily declare an array containing 800000 integers,
but perhaps your system doesn't have the memory to hold it after all.
So I would either pipe the data to sort, if you have it,
or split it into pieces that you can sort in memory, and then merge the pieces.
 
danny_pav Commented:
What do you know about the data?
If these are integers, aren't there going to be a lot of duplicates?

Can you then use a map like map<val, count>? Read everything in, inserting into / incrementing the map as you go, then traverse the map on output, writing count copies of each val.
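
A minimal sketch of that idea, assuming the integers are whitespace-separated in the input file (file names are illustrative):

// Read every integer, count duplicates in a std::map (kept sorted by key),
// then write each value back out "count" times.
#include <fstream>
#include <map>

int main() {
    std::ifstream in("numbers.txt");
    std::map<int, long> counts;        // value -> number of occurrences
    int v;
    while (in >> v) ++counts[v];

    std::ofstream out("numbers_sorted.txt");
    for (const auto& [value, count] : counts)
        for (long i = 0; i < count; ++i)
            out << value << '\n';
    return 0;
}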

 
AlexVirochovsky Commented:
Hi, some time ago I had the same problem (with the data in a file).
My way is very close to ozo's solution:
1. Work out the number of parts N such that each part fits in memory.
2. Loop (for i = 0; i < N; i++)
{
   read the i-th part of the file, sort it with qsort, and save it in a temp file
}
3. Merge the temp files. (There is a problem with the maximum number of open files, so I merge 10 files at a time.)
   For the merge I open up to 10 files, read the first value from each file,
   qsort them, write out the minimum, read the next value from the file the minimum came from,
   and so on, until the end of all files.
Hope this helps, Alex
 
demoneye Commented:
Get a bat and beat it with a stick.
That should make it work.

 
thresher_shark Commented:
demoneye - If you have nothing of any value to add to this site, please refrain from commenting or answering.  Experts Exchange is a professional site.  Wasting space and people's time is usually not accepted very well by the general populace.  If you have other questions, please refer to the FAQ page and HELP page.
 
nir_sh Author Commented:
I've managed to create a very big array using the farmalloc function. However, the array I'm using needs to be bigger: my pointer should be huge, not just far. This is my last problem; if you solve it for me, everything is solved.
 
elfie Commented:
When you talk about "huge" pointers, did you check which memory model you are compiling with?