# Text Modules

Modules pre-trained to embed words, phrases, and sentences as high-dimensional vectors.

Click on a module to view its documentation, or pass its URL to the TensorFlow Hub library like so:

```python
import tensorflow_hub as hub

m = hub.Module("https://tfhub.dev/...")
```
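These modules behave like callables mapping a batch of strings to a batch of embedding vectors, built as TF1 graph ops that are evaluated in a session. A minimal end-to-end sketch, assuming the TF1-style `hub.Module` API above and a `google/nnlm-en-dim50/1` handle (verify the exact path and version on tfhub.dev):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Handle is assumed; check tfhub.dev for the exact path and version.
embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim50/1")
embeddings = embed(["The quick brown fox"])  # one 50-dim vector per string

# hub.Module adds graph ops, so values come from a session run.
with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)  # (1, 50)
```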

## Universal Sentence Encoder

An encoder for greater-than-word-length text (sentences, phrases, and short paragraphs), trained on a variety of data sources.
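The encoder returns a fixed-length 512-dimensional vector per input, so sentence similarity can be scored directly. A sketch, assuming a `google/universal-sentence-encoder/2` handle (check tfhub.dev for the current version):

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Handle/version assumed; confirm on tfhub.dev.
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
embeddings = embed(["How old are you?", "What is your age?"])

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    vecs = sess.run(embeddings)  # shape (2, 512)

# Inner product as a similarity score for the two sentences.
print(np.inner(vecs[0], vecs[1]))
```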

## ELMo

Deep Contextualized Word Representations trained on the 1 Billion Word Benchmark.
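Unlike the fixed word embeddings below, ELMo produces per-token vectors conditioned on the whole sentence. A sketch, assuming a `google/elmo/2` handle and its `default` signature (both should be verified on tfhub.dev):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Handle/version assumed; confirm on tfhub.dev.
elmo = hub.Module("https://tfhub.dev/google/elmo/2")
# The "default" signature takes untokenized sentences; "elmo" is the
# per-token contextual embedding output.
embeddings = elmo(["the cat is on the mat"],
                  signature="default", as_dict=True)["elmo"]

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)  # (1, num_tokens, 1024)
```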

## NNLM embedding trained on Google News

Embeddings from a neural network language model (NNLM) trained on the Google News dataset. A usage sketch follows the table below.

| Language | 50 dimensions | 128 dimensions |
| --- | --- | --- |
| Chinese | nnlm-zh-dim50<br>nnlm-zh-dim50-with-normalization | nnlm-zh-dim128<br>nnlm-zh-dim128-with-normalization |
| English | nnlm-en-dim50<br>nnlm-en-dim50-with-normalization | nnlm-en-dim128<br>nnlm-en-dim128-with-normalization |
| German | nnlm-de-dim50<br>nnlm-de-dim50-with-normalization | nnlm-de-dim128<br>nnlm-de-dim128-with-normalization |
| Indonesian | nnlm-id-dim50<br>nnlm-id-dim50-with-normalization | nnlm-id-dim128<br>nnlm-id-dim128-with-normalization |
| Japanese | nnlm-ja-dim50<br>nnlm-ja-dim50-with-normalization | nnlm-ja-dim128<br>nnlm-ja-dim128-with-normalization |
| Korean | nnlm-ko-dim50<br>nnlm-ko-dim50-with-normalization | nnlm-ko-dim128<br>nnlm-ko-dim128-with-normalization |
| Spanish | nnlm-es-dim50<br>nnlm-es-dim50-with-normalization | nnlm-es-dim128<br>nnlm-es-dim128-with-normalization |
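All variants share the same call pattern; only the handle changes. A sketch with the English 128-dimensional variant from the table (full path and version assumed; verify on tfhub.dev):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Assumed full handle for nnlm-en-dim128 from the table above.
embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim128/1")
embeddings = embed(["cat is on the mat", "dog is in the fog"])

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)  # (2, 128)
```

Swapping in, say, `nnlm-de-dim50` only changes the handle string and the output width.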

## Word2vec trained on Wikipedia

Embeddings trained by word2vec on Wikipedia. A usage sketch follows the table.

| Language | 250 dimensions | 500 dimensions |
| --- | --- | --- |
| English | Wiki-words-250<br>Wiki-words-250-with-normalization | Wiki-words-500<br>Wiki-words-500-with-normalization |
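A word-level sketch, assuming a `google/Wiki-words-250/1` handle (verify the exact path and version on tfhub.dev):

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Assumed full handle for Wiki-words-250 from the table above.
embed = hub.Module("https://tfhub.dev/google/Wiki-words-250/1")
embeddings = embed(["apple", "orange", "carburetor"])

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    vecs = sess.run(embeddings)

def cosine(a, b):
    # Cosine similarity of two word vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Related words should score higher than unrelated ones.
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```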