Even for a few thousand or a few million entries, your approach is just fine - it has a linear, O(n) average run time - so it is not that bad.
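For reference, a minimal sketch of that linear-time approach in Python (the domain-extraction helper is an assumption - it treats each log entry as a full URL with a scheme - so adapt it to your actual format):

    from collections import Counter
    from urllib.parse import urlparse

    def get_domain(entry: str) -> str:
        # Assumed helper: extract the domain, treating each entry as a URL.
        return urlparse(entry.strip()).netloc

    def count_domains(path: str) -> Counter:
        counts = Counter()
        with open(path) as f:
            for entry in f:  # single pass over the file: O(n) on average
                counts[get_domain(entry)] += 1
        return counts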
However, you can use a map-reduce approach if you want more scalability.
    map(file):
        for each entry in file:
            emitIntermediate(getDomainName(entry), "1")

    reduce(domain, list):
        emit(domain, size(list))
The above will efficiently give you the list of (domain, count) tuples, and all you have to do is select the top 10.
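If you want to prototype that flow without a cluster, a rough local simulation of the same map/shuffle/reduce steps could look like this (the function names and the "access.log" path are placeholders of mine, not from any framework; get_domain is the helper sketched above):

    from collections import defaultdict

    def map_phase(entries):
        # emitIntermediate(getDomainName(entry), "1")
        for entry in entries:
            yield get_domain(entry), 1

    def shuffle(pairs):
        # Group the intermediate pairs by key, as the framework would.
        groups = defaultdict(list)
        for domain, value in pairs:
            groups[domain].append(value)
        return groups

    def reduce_phase(groups):
        # emit(domain, size(list))
        return {domain: len(values) for domain, values in groups.items()}

    # domain_counts = reduce_phase(shuffle(map_phase(open("access.log"))))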
Selecting the top 10 can be done with a map-reduce (distributed) sort for scalability, or with a min heap (iterate over the tuples while maintaining the 10 largest elements seen so far in the heap). The second approach is explained in more detail in this thread.
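A sketch of the heap-based selection, using Python's heapq on the (domain, count) map from above - the root of the min heap is always the smallest of the current top k, so anything smaller can be discarded immediately:

    import heapq

    def top_k(domain_counts, k=10):
        heap = []  # min heap holding at most k (count, domain) pairs
        for domain, count in domain_counts.items():
            if len(heap) < k:
                heapq.heappush(heap, (count, domain))
            elif count > heap[0][0]:
                # New count beats the smallest of the current top k.
                heapq.heapreplace(heap, (count, domain))
        return sorted(heap, reverse=True)  # largest count first

The one-liner `heapq.nlargest(10, domain_counts.items(), key=lambda kv: kv[1])` does the same job if you prefer the library call.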
About space complexity: if you are using a 64-bit system, you can treat the virtual address space as if it were RAM and let the OS do what it can (swapping pages to disk when needed); it is very unlikely that you will need more than the amount of virtual memory available on a 64-bit machine. An alternative is to use a hash table (or a B+ tree) optimized for file systems and run the same algorithm on it.
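If you go the file-system-backed route, a rough sketch using Python's shelve module as the disk-backed, dict-like store (any on-disk key-value store or B+ tree library would do; get_domain is the helper sketched earlier and the db_path is arbitrary):

    import shelve

    def count_domains_on_disk(log_path, db_path="domain_counts"):
        # shelve gives a dict-like object backed by a dbm file on disk,
        # so the working set is no longer limited by RAM.
        with shelve.open(db_path) as db, open(log_path) as f:
            for entry in f:
                domain = get_domain(entry)
                db[domain] = db.get(domain, 0) + 1

Keep in mind that each update here touches disk, which is exactly the cost the next point is about.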
However, if this is indeed the case - the data does not fit in RAM and you cannot use map-reduce - I suspect that sorting and then iterating, though O(n log n), will be more efficient (using an external sort), because the number of DISK ACCESSES will be minimized, and disk access is much slower than RAM access.
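To make that concrete, here is a rough sketch of the external-sort variant: sort the domains in RAM-sized chunks, spill each sorted run to a temporary file, k-way merge the runs (sequential reads only), and count adjacent equal domains in one pass (chunk_size is arbitrary; get_domain is the helper sketched earlier):

    import heapq
    import itertools
    import tempfile

    def sorted_domain_runs(log_path, chunk_size=1_000_000):
        # Phase 1: read RAM-sized chunks, sort each one, spill it to a temp file.
        runs = []
        with open(log_path) as f:
            while True:
                chunk = [get_domain(line) + "\n" for line in itertools.islice(f, chunk_size)]
                if not chunk:
                    break
                chunk.sort()
                run = tempfile.TemporaryFile("w+")
                run.writelines(chunk)
                run.seek(0)
                runs.append(run)
        return runs

    def top_domains(log_path, k=10):
        # Phase 2: k-way merge of the sorted runs, then count adjacent equal
        # domains and keep the k largest counts.
        merged = heapq.merge(*sorted_domain_runs(log_path))
        counts = ((d.rstrip("\n"), sum(1 for _ in g))
                  for d, g in itertools.groupby(merged))
        return heapq.nlargest(k, counts, key=lambda dc: dc[1])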