WING-NUS NLP Seminar by Xiang Lisa Li, PhD student (Stanford) Thu, 14 Apr 11:30-12:30 / Prefix-Tuning: Optimizing Continuous Prompts for Generation


Min Yen KAN

Apr 9, 2022, 1:01:41 AM
to Singapore NLP Group, xli...@stanford.edu, Taha Aksu <tahaaksu01@gmail.com>
Dear all:

Just spreading the news of our next local WING-NUS NLP seminar.  It's an online-only event, so please join us if you can.

WING-NUS NLP Seminar 2022 - Talk 3

Attendance is online only and there is no need to register; please connect via:
https://nus-sg.zoom.us/j/7704478736?pwd=QU8ybHd5NThxR1hENUo5WXVyc0d5UT09
(Meeting ID: 770 447 8736 / Passcode: 3244)

Speaker: Xiang Lisa Li
Title: Prefix-Tuning: Optimizing Continuous Prompts for Generation

Fine-tuning is the de facto way of leveraging large pretrained language models for downstream tasks. However, fine-tuning modifies all the language model parameters and therefore necessitates storing a full copy of the model for each task.

I will introduce prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a small continuous task-specific vector, which we call the prefix.  Prefix-tuning draws inspiration from prompting for language models, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens".  We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization.
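For a concrete picture of the idea, here is a minimal PyTorch sketch (not the speaker's code), assuming Hugging Face transformers and GPT-2: the pretrained weights are frozen, and only a small tensor of per-layer prefix keys and values is trained, which real tokens attend to via past_key_values as if it were virtual tokens. The paper's MLP reparametrization of the prefix and all training-loop details are omitted, and the hyperparameters are illustrative only.

```python
# Minimal prefix-tuning sketch, assuming the `torch` and `transformers` packages.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Freeze every pretrained parameter; only the prefix below is optimized.
for p in model.parameters():
    p.requires_grad = False

cfg = model.config
prefix_len = 10  # number of "virtual tokens" (illustrative)
head_dim = cfg.n_embd // cfg.n_head

# One trainable key and value per layer, head, and prefix position:
# shape (n_layer, 2, n_head, prefix_len, head_dim).
prefix = torch.nn.Parameter(
    torch.randn(cfg.n_layer, 2, cfg.n_head, prefix_len, head_dim) * 0.02
)

def forward_with_prefix(input_ids, labels):
    batch = input_ids.size(0)
    # Package the prefix as past_key_values so that every real token
    # attends to the prefix at every layer of the frozen model.
    past = tuple(
        (
            prefix[i, 0].unsqueeze(0).expand(batch, -1, -1, -1),
            prefix[i, 1].unsqueeze(0).expand(batch, -1, -1, -1),
        )
        for i in range(cfg.n_layer)
    )
    # The attention mask must cover prefix positions plus real tokens.
    attention_mask = torch.ones(batch, prefix_len + input_ids.size(1))
    return model(
        input_ids=input_ids,
        attention_mask=attention_mask,
        past_key_values=past,
        labels=labels,
    )

# Only the prefix tensor receives gradient updates.
optimizer = torch.optim.AdamW([prefix], lr=5e-4)
batch = tokenizer(["name: Starbucks | type: coffee shop"], return_tensors="pt")
out = forward_with_prefix(batch["input_ids"], labels=batch["input_ids"])
out.loss.backward()
optimizer.step()
```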

We find that by learning only 0.1% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics that are unseen during training.
Then I will discuss some downsides of lightweight fine-tuning (e.g., prefix-tuning, adapters): they sometimes underperform full fine-tuning in-distribution (ID) on harder tasks. I will present methods to combine the benefits of full and lightweight fine-tuning, achieving strong performance both ID and OOD (out-of-distribution).

Bio: Xiang Lisa Li is a second-year PhD student in computer science at Stanford University, advised by Percy Liang and Tatsunori Hashimoto. She works on controllable text generation/decoding and efficient adaptation of pre-trained language models. Lisa is supported by a Stanford Graduate Fellowship and is the recipient of an EMNLP Best Paper award.  https://xiangli1999.github.io/

Past seminars' slides and recordings are available at our seminar home page: https://wing-nus.github.io/nlp-seminar/