Universal Sentence Encoder for English

Abstract: We present easy-to-use TensorFlow Hub sentence embedding models having good task transfer performance. Model variants allow for trade-offs between accuracy and compute resources. We report the relationship between model complexity, resources, and transfer performance. Comparisons are made with baselines without transfer learning and to baselines that incorporate word-level transfer. Transfer learning using sentence-level embeddings is shown to outperform models without transfer learning and often those that use only word-level transfer. We show good transfer task performance with minimal training data and obtain encouraging results on word embedding association tests (WEAT) of model bias.

Anthology ID: D18-2029
Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Month: November
Year: 2018
Address: Brussels, Belgium
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 169–174
DOI: 10.18653/v1/D18-2029
Bibkey: cer-etal-2018-universal
Data: MPQA Opinion Corpus, SNLI

Cite (ACL): Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal Sentence Encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174, Brussels, Belgium. Association for Computational Linguistics.

Cite (Informal): Universal Sentence Encoder for English (Cer et al., EMNLP 2018)

BibTeX:

@inproceedings{cer-etal-2018-universal,
    title = "Universal Sentence Encoder for {E}nglish",
    author = "Cer, Daniel and Yang, Yinfei and Kong, Sheng-yi and Hua, Nan and Limtiaco, Nicole and St. John, Rhomni and Constant, Noah and Guajardo-Cespedes, Mario and Yuan, Steve and Tar, Chris and Strope, Brian and Kurzweil, Ray",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    doi = "10.18653/v1/D18-2029",
    pages = "169--174",
}
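A minimal usage sketch of loading one of these TensorFlow Hub sentence embedding modules in Python follows. The module handle and output dimensionality are assumptions for illustration rather than details stated on this page; substitute whichever encoder variant matches your accuracy/compute trade-off.

```python
# Minimal sketch (assumed module handle, not taken from the paper):
# embed a few sentences with a TensorFlow Hub sentence encoder and
# compare two of them with cosine similarity.
import numpy as np
import tensorflow_hub as hub

# Assumed handle for illustration; the Universal Sentence Encoder
# variants (Transformer- or DAN-based) expose the same call pattern.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = [
    "The quick brown fox jumps over the lazy dog.",
    "A sentence encoder maps text to a fixed-length vector.",
]

# The module maps each sentence to a fixed-length embedding
# (512 dimensions for this particular module).
embeddings = np.asarray(embed(sentences))

# Cosine similarity between the two sentence embeddings.
sim = float(
    np.inner(embeddings[0], embeddings[1])
    / (np.linalg.norm(embeddings[0]) * np.linalg.norm(embeddings[1]))
)
print(f"cosine similarity: {sim:.3f}")
```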