Research context
• Language model pre-training has been used to improve many NLP tasks
  • ELMo (Peters et al., 2018)
  • OpenAI GPT (Radford et al., 2018)
  • ULMFiT (Howard and Ruder, 2018)
  • BERT
● Unidirectional
  ○ Feature-based (ELMo)
  ○ Fine-tuning (OpenAI GPT)
● Bidirectional
  ○ BERT (attention-mask contrast sketched below)
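The slide's key split is between unidirectional conditioning (a left-to-right language model, as in OpenAI GPT) and bidirectional conditioning (BERT's masked-token objective, which lets every position see both left and right context). The following is a minimal illustrative sketch, not taken from the slides: the helper names `causal_mask` and `bidirectional_mask` are hypothetical, and only NumPy is assumed. It simply prints the two attention-mask patterns so the difference in visible context is concrete.

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """Unidirectional (GPT-style): token i may attend only to positions <= i."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def bidirectional_mask(seq_len: int) -> np.ndarray:
    """Bidirectional (BERT-style encoder): every token attends to every position."""
    return np.ones((seq_len, seq_len), dtype=bool)

# Toy sequence with one masked token, as in BERT's masked-LM pre-training.
tokens = ["the", "cat", "[MASK]", "on", "the", "mat"]
n = len(tokens)

print("Causal mask (unidirectional LM):")
print(causal_mask(n).astype(int))

print("Bidirectional mask (the [MASK] at position 2 can use both left and right context):")
print(bidirectional_mask(n).astype(int))
```

In a causal model the prediction for `[MASK]` could only condition on "the cat", whereas a bidirectional encoder also sees "on the mat"; BERT makes the bidirectional setup trainable by predicting the masked tokens rather than the next token.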