I have no experience with MySQL or database performance tuning in general.

I am wondering whether it is possible to perform a large number of SELECTs by primary key against roughly 100 million records, averaging under 30 ms per request.

Some details:

  • The PK is 16 bytes.
  • Each record is approximately 100 bytes and can be shrunk to 24 bytes if normalized (though the cardinality is approximately 10 to 1, and we would need joins in that case).
  • The table must be writable. There may be roughly 1 update per select, though updates can be deferred and their performance is less critical.
  • I can allocate up to 8 GB for MySQL's needs.
  • The disk is a regular HDD (this concerns me the most, because its seek time is about 9 ms).
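
For concreteness, here is a minimal sketch of the kind of table I have in mind (the name `records` and the exact column split are made up for illustration):

    CREATE TABLE records (
        id      BINARY(16) NOT NULL,  -- the 16-byte primary key
        payload VARBINARY(84),        -- the remaining ~84 bytes of each ~100-byte record
        PRIMARY KEY (id)
    );  -- storage engine deliberately left unspecified; choosing one is part of the question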

Please suggest which storage engine is preferable and which MySQL parameters should be tuned. Also, please suggest some techniques to improve performance, for example selecting groups of records with WHERE IN() instead of record by record (see the sketch below).
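
To make the IN() idea concrete, here is a sketch of both access patterns against the hypothetical `records` table above (the hex key values are made up):

    -- Record by record: one round trip and one PK lookup per key
    SELECT payload FROM records
    WHERE id = UNHEX('00112233445566778899AABBCCDDEEFF');

    -- Grouped: one round trip fetches a whole batch of keys
    SELECT id, payload FROM records
    WHERE id IN (
        UNHEX('00112233445566778899AABBCCDDEEFF'),
        UNHEX('102132435465768798A9BACBDCEDFE0F'),
        UNHEX('FFEEDDCCBBAA99887766554433221100')
    );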

Thanks in advance.
