Research context

• Two existing strategies for applying pre-trained language representations to downstream tasks
• Feature-based: include the pre-trained representations as additional features (e.g., ELMo)
• Fine-tuning: introduce task-specific parameters and fine-tune the pre-trained parameters (e.g., OpenAI GPT, ULMFiT)
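The contrast between the two strategies can be sketched in code. The following is a minimal, framework-free illustration (all names are hypothetical, not from any real library): in the feature-based approach only the new task-specific parameters are trained, while in fine-tuning the pre-trained parameters are updated as well.

```python
def build_model(strategy):
    """Toy illustration: mark which parameter groups are trainable
    under each transfer strategy. Parameter names are hypothetical."""
    params = {
        "encoder.layer0.weight": {"pretrained": True},   # pre-trained LM weights
        "encoder.layer1.weight": {"pretrained": True},   # pre-trained LM weights
        "task_head.weight":      {"pretrained": False},  # new task-specific head
    }
    for p in params.values():
        if strategy == "feature-based":
            # ELMo-style: pre-trained representations serve as fixed
            # features; only the task-specific parameters are trained.
            p["trainable"] = not p["pretrained"]
        elif strategy == "fine-tuning":
            # GPT/ULMFiT-style: all parameters, including the
            # pre-trained ones, are updated on the downstream task.
            p["trainable"] = True
    return params

feature_based = build_model("feature-based")
fine_tuned = build_model("fine-tuning")
```

Under the feature-based strategy only `task_head.weight` ends up trainable; under fine-tuning every parameter does.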