Research context

• Language model pre-training has been used to improve many NLP tasks
• ELMo (Peters et al., 2018)
• OpenAI GPT (Radford et al., 2018)
• ULMFiT (Howard and Ruder, 2018)
• BERT

● Unidirectional
  ○ Feature-based (ELMo)
  ○ Fine-tuning (OpenAI GPT)
● Bidirectional (contrasted with the unidirectional case in the sketch below)
  ○ BERT
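As a minimal sketch (not from the slides), the unidirectional vs. bidirectional distinction can be expressed as attention masks: a causal (lower-triangular) mask lets each token see only earlier positions, as in OpenAI GPT, while a full mask lets every token see both left and right context, as in BERT. The 5-token sequence length is an arbitrary illustrative choice.

```python
import numpy as np

seq_len = 5  # arbitrary example length

# Unidirectional (causal) context, GPT-style:
# token i may attend only to positions <= i, i.e. a lower-triangular mask.
causal_mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))

# Bidirectional context, BERT-style:
# every token may attend to every position, left and right.
bidirectional_mask = np.ones((seq_len, seq_len), dtype=bool)

print("Causal (unidirectional) mask:")
print(causal_mask.astype(int))
print("Bidirectional mask:")
print(bidirectional_mask.astype(int))
```

Feature-based use (ELMo) would keep such a pre-trained encoder frozen and feed its representations to a task model, whereas fine-tuning (OpenAI GPT, BERT) updates all encoder parameters on the downstream task.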