Loading the wiki-fasttext model with the gensim library takes six minutes.
I'm aware of ways to cache the model, but I'm looking for ways to speed up the initial model load. The specific API call is below:
import os
from gensim.models import KeyedVectors

en_model = KeyedVectors.load_word2vec_format(os.path.join(root_dir, model_file))
Granted, wiki-fasttext is a very large model; however, I have to load the same model in many languages, so the load time adds up.