I am evaluating Terracotta for my current problem. The process is CPU intensive and needs about 5-10 GB of working memory (RAM). Each object in memory is roughly 1 kilobyte in size and consists of a handful of primitive data types. The entire in-memory data set goes through thousands of iterations, and each iteration modifies every object completely (every field is rewritten). The process takes days to finish.
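For context, here is a minimal Java sketch of what the workload looks like; the class, field, and method names (WorkUnit, recompute, and so on) are hypothetical stand-ins for my real model, and the arithmetic is placeholder work:

```java
// Minimal sketch of the workload shape described above; names and the
// per-field arithmetic are hypothetical stand-ins, not the real model.
public class WorkloadSketch {

    static final class WorkUnit {
        // A handful of primitive fields; the real object is roughly 1 KB.
        double state;
        long   counter;
        int    flags;

        void recompute(int iteration) {
            // Every field is rewritten on every iteration.
            state   = state * 0.99 + iteration;
            counter = counter + 1;
            flags   = iteration & 0xFF;
        }
    }

    // Thousands of iterations, each one touching every object held in RAM.
    static void run(WorkUnit[] units, int iterations) {
        for (int i = 0; i < iterations; i++) {
            for (WorkUnit u : units) {
                u.recompute(i);
            }
        }
    }
}
```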
The million+ objects are partitioned and currently run on multi-core machines, but I need more compute power and much more RAM (for bigger problems). The data/objects processed by one thread are not shared with any other thread.
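This is roughly how the partitioning works today, again as a hypothetical sketch (it reuses the WorkUnit class from the sketch above; the thread count, object count, and iteration count are placeholders):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the current partitioned setup: each worker thread
// owns a disjoint slice of the array and never touches another thread's slice.
public class PartitionedRun {
    public static void main(String[] args) throws InterruptedException {
        int nThreads = Runtime.getRuntime().availableProcessors();
        WorkloadSketch.WorkUnit[] all = new WorkloadSketch.WorkUnit[1_000_000];
        for (int i = 0; i < all.length; i++) {
            all[i] = new WorkloadSketch.WorkUnit();
        }

        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        int chunk = (all.length + nThreads - 1) / nThreads;
        for (int t = 0; t < nThreads; t++) {
            final int from = t * chunk;
            final int to   = Math.min(all.length, from + chunk);
            pool.submit(() -> {
                // Placeholder iteration count; the real run is in the thousands.
                for (int iter = 0; iter < 1000; iter++) {
                    // Each thread mutates only its own [from, to) slice.
                    for (int i = from; i < to; i++) {
                        all[i].recompute(iter);
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(365, TimeUnit.DAYS);
    }
}
```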
Would Terracotta be a good solution here? Would syncing the millions of objects to the clustering server become such a bottleneck that it renders the whole approach ineffective?