Modules pre-trained to embed words, phrases, and sentences as high-dimensional vectors.
Click on a module to view its documentation, or load it by referencing its TensorFlow Hub URL like so:

```python
import tensorflow_hub as hub

m = hub.Module("https://tfhub.dev/...")
```
Encoders for greater-than-word-length text (sentences, phrases, and short paragraphs), trained on a variety of data sources.
- universal-sentence-encoder
- universal-sentence-encoder-large
- universal-sentence-encoder-lite (*text preprocessing required)
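A sentence embedding on its own is just a vector; downstream tasks typically compare two embeddings with cosine similarity to score semantic relatedness. A minimal sketch, using made-up 4-dimensional vectors as stand-ins for the high-dimensional vectors these modules actually return:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the norms; 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical module outputs for three sentences (real embeddings are much larger).
emb_cat = [0.9, 0.1, 0.0, 0.2]
emb_kitten = [0.8, 0.2, 0.1, 0.2]
emb_car = [0.0, 0.1, 0.9, 0.1]

print(cosine_similarity(emb_cat, emb_kitten))  # close to 1.0
print(cosine_similarity(emb_cat, emb_car))     # much lower
```

Semantically similar sentences should score near 1.0, unrelated ones near 0.0; this is the usual way the encoders above are evaluated on semantic-textual-similarity tasks.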
Deep contextualized word representations (ELMo), trained on the One Billion Word Benchmark.
Embeddings from a neural network language model trained on the Google News dataset.
Embeddings trained with word2vec on Wikipedia.
| 250 dimensions | 500 dimensions |
| --- | --- |
| Wiki-words-250 | Wiki-words-500 |
| Wiki-words-250-with-normalization | Wiki-words-500-with-normalization |
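Token-based modules like the Wiki-words family map each token to its own vector; a common way to turn those into a single phrase or sentence vector is to combine the token vectors, e.g. by element-wise averaging. A minimal sketch with a toy vocabulary and 3-dimensional vectors (the lookup table and combiner here are hypothetical stand-ins, not the module's actual weights):

```python
# Toy embedding table: token -> 3-dimensional vector (made-up values).
EMBEDDINGS = {
    "deep": [0.25, 0.5, 1.0],
    "learning": [0.75, 0.5, 0.0],
    "rocks": [1.0, 0.0, 0.25],
}
OOV = [0.0, 0.0, 0.0]  # out-of-vocabulary tokens map to a zero vector

def embed_sentence(text):
    # Look up each token (case-folded) and average the vectors element-wise.
    vectors = [EMBEDDINGS.get(tok.lower(), OOV) for tok in text.split()]
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(3)]

print(embed_sentence("Deep learning"))  # → [0.5, 0.5, 0.5]
```

The result has the same dimensionality regardless of sentence length, which is what lets these fixed-size embeddings feed directly into a downstream classifier.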