TY - GEN
T1 - In-BoXBART: Get Instructions into Biomedical Multi-Task Learning
T2 - 2022 Findings of the Association for Computational Linguistics: NAACL 2022
AU - Parmar, Mihir
AU - Mishra, Swaroop
AU - Purohit, Mirali
AU - Luo, Man
AU - Murad, M. Hassan
AU - Baral, Chitta
N1 - Publisher Copyright:
© Findings of the Association for Computational Linguistics: NAACL 2022.
PY - 2022
Y1 - 2022
AB - Single-task models have proven pivotal in solving specific tasks; however, they have limitations in real-world applications where multitasking is necessary and domain shifts occur. Recently, instructional prompts have shown significant improvement towards multitask generalization; however, the effect of instructional prompts and Multi-Task Learning (MTL) has not been systematically studied in the biomedical domain. Motivated by this, this paper explores the impact of instructional prompts for biomedical MTL. We introduce the BoX, a collection of 32 instruction tasks for Biomedical NLP across (X) various categories. Using this meta-dataset, we propose a unified model termed In-BoXBART that can jointly learn all tasks of the BoX without any task-specific modules. To the best of our knowledge, this is the first attempt to propose a unified model in the biomedical domain and to use instructions to achieve generalization across several biomedical tasks. Experimental results indicate that the proposed model 1) outperforms the single-task baseline by ~3% and the multitask (without instruction) baseline by ~18% on average, and 2) shows a ~23% improvement over the single-task baseline in few-shot learning (i.e., 32 instances per task) on average. Our analysis indicates that there is significant room for improvement across tasks in the BoX, suggesting directions for future research.
UR - http://www.scopus.com/inward/record.url?scp=85131555703&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85131555703&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85131555703
T3 - Findings of the Association for Computational Linguistics: NAACL 2022 - Findings
SP - 112
EP - 128
BT - Findings of the Association for Computational Linguistics: NAACL 2022
PB - Association for Computational Linguistics (ACL)
Y2 - 10 July 2022 through 15 July 2022
ER -