I am looking for a way to store a large amount of data in a file or files. The additional requirement is that it should be indexed: two indexes on integer fields should allow selecting a specific set of records very fast.
Details: each data record is a fixed-length set of 3 integers:
A (int) | B (int) | N (int)
A and B are indexable columns while N is just a data value.
This data set may contain a huge number of records (say, 30 million or more), and there should be a way to select all records with a given value of A, or all records with a given value of B, as fast as possible.
I cannot use any technologies other than MySQL and PHP. You may say: "Well, then just use MySQL!" Sure, I am already doing that, but because of MySQL's per-row overhead my database takes about 10 times more space than the raw data, plus the index data on top of that.
So I am looking for a file-based solution.
Are there any ready algorithms to implement this? Or source code solution?
Thank you!
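For comparison, here is a minimal sketch (in Python; the same `pack`/`unpack` calls exist in PHP) of the kind of file-based approach I have in mind: fixed 12-byte records (3 × 4-byte ints) kept sorted on A, with a binary search over the file to find all rows with a given A value. For fast B lookups one would keep a second copy of the file (or an offset index) sorted on B. All names here are hypothetical, not from any existing library:

```python
import struct
import os

RECORD = struct.Struct('<iii')  # A, B, N -> exactly 12 bytes per record

def write_records(path, records):
    # Records are sorted by A so that find_by_a() can binary-search the file.
    with open(path, 'wb') as f:
        for a, b, n in sorted(records):
            f.write(RECORD.pack(a, b, n))

def find_by_a(path, a):
    """Return all (A, B, N) records with A == a via binary search on the file."""
    count = os.path.getsize(path) // RECORD.size
    with open(path, 'rb') as f:
        def field_a(i):
            # Read only the A field of record i (seek + one 12-byte read).
            f.seek(i * RECORD.size)
            return RECORD.unpack(f.read(RECORD.size))[0]
        # Find the leftmost record whose A >= a.
        lo, hi = 0, count
        while lo < hi:
            mid = (lo + hi) // 2
            if field_a(mid) < a:
                lo = mid + 1
            else:
                hi = mid
        # Scan forward while A still matches.
        out = []
        f.seek(lo * RECORD.size)
        while lo < count:
            rec = RECORD.unpack(f.read(RECORD.size))
            if rec[0] != a:
                break
            out.append(rec)
            lo += 1
        return out
```

At 12 bytes per record this is close to the theoretical minimum for three 32-bit ints, and each lookup costs O(log n) seeks, which is essentially what a B-tree index gives you without MySQL's row overhead.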
Update 1:
CREATE TABLE `w_vectors` (
`wid` int(11) NOT NULL,
`did` int(11) NOT NULL,
`wn` int(11) NOT NULL DEFAULT '0',
UNIQUE KEY `did_wn` (`did`,`wn`),
KEY `wid` (`wid`),
KEY `did` (`did`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_520_ci
Update 2:
The goal of this table is to store document-vs-word vectors for a word-based search application. It stores all the words from all the documents in compact form (wid is the word ID from the word vocabulary, did is the document ID, and wn is the position of the word in the document). This works pretty well; however, with, say, 1,000,000 documents averaging 10k words each, the table grows to roughly 10 billion rows. At about 34 bytes per row that is around 340 GB for just 1 million documents... not good, right?
I am looking for a way to optimize this.
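One direction I am considering (sketched below in Python; PHP's `chr`/`ord` would work the same way): since the same wid repeats across millions of rows, group the table by wid into per-word posting lists, delta-encode the sorted document IDs, and pack the gaps as variable-length integers so small gaps take a single byte. The helper names are made up for illustration, and for brevity this ignores the wn column (positions could be appended per did the same way):

```python
def encode_varint(n):
    """LEB128-style variable-length encoding: values below 128 take one byte."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # more bytes follow
        else:
            out.append(b)
            return bytes(out)

def encode_postings(dids):
    """Delta-encode a sorted list of document IDs, then varint-pack the gaps."""
    blob = bytearray()
    prev = 0
    for d in dids:
        blob += encode_varint(d - prev)
        prev = d
    return bytes(blob)

def decode_postings(blob):
    """Inverse of encode_postings: rebuild document IDs from varint gaps."""
    dids, cur, n, shift = [], 0, 0, 0
    for b in blob:
        n |= (b & 0x7F) << shift
        if b & 0x80:
            shift += 7
        else:
            cur += n           # undo the delta encoding
            dids.append(cur)
            n, shift = 0, 0
    return dids
```

For dense vocabularies the gaps between consecutive dids are small, so most entries cost 1 to 2 bytes instead of the 4-byte did column plus per-row index overhead, which is exactly where the 340 GB figure comes from.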