WING-NUS NLP Seminar by Xiang Lisa Li, PhD student (Stanford) Thu, 14 Apr 11:30-12:30 / Prefix-Tuning: Optimizing Continuous Prompts for Generation


Min Yen KAN

Apr 9, 2022, 1:01:41 AM
to Singapore NLP Group, Taha Aksu
Dear all:

Just spreading the news of our next local WING-NUS NLP seminar.  It's an online-only event, so if you can make it, please join us.

WING-NUS NLP Seminar 2022 - Talk 3

For online-only attendance there is no need to register; please connect via:
(Meeting ID: 770 447 8736 / Passcode: 3244)

Speaker: Xiang Lisa Li
Title: Prefix-Tuning: Optimizing Continuous Prompts for Generation

Fine-tuning is the de facto way of leveraging large pretrained language models for downstream tasks. However, fine-tuning modifies all the language model parameters and therefore necessitates storing a full copy for each task.

I will introduce prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a small continuous task-specific vector, which we call the prefix.  Prefix-tuning draws inspiration from prompting for language models, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens".  We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization.

We find that by learning only 0.1% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics that are unseen during training.
Then I will discuss some downsides of lightweight fine-tuning methods (e.g., prefix-tuning, adapters): they sometimes underperform full fine-tuning in-distribution (ID) on harder tasks. I will present methods that combine the benefits of full and lightweight fine-tuning, achieving strong performance both ID and out-of-distribution (OOD).
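The core mechanism from the abstract can be illustrated with a toy sketch (all names and dimensions here are hypothetical, not the authors' code): the pretrained attention projections stay frozen, while a small task-specific prefix supplies extra key/value "virtual tokens" that every real token attends to.

```python
# Toy sketch of the prefix-tuning idea for one attention head.
# Hypothetical, illustrative code -- not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
d, prefix_len, seq_len = 8, 2, 4

# Frozen pretrained projections (stand-ins for the language model weights).
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

# Task-specific trainable prefix: prefix_len "virtual token" activations.
# Only these small matrices would be optimized per task.
prefix_k = rng.standard_normal((prefix_len, d))
prefix_v = rng.standard_normal((prefix_len, d))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_with_prefix(x):
    """Each real token attends over the prefix plus the real tokens."""
    q = x @ W_q
    k = np.concatenate([prefix_k, x @ W_k], axis=0)  # (prefix_len+seq_len, d)
    v = np.concatenate([prefix_v, x @ W_v], axis=0)
    scores = softmax(q @ k.T / np.sqrt(d))           # (seq_len, prefix_len+seq_len)
    return scores @ v                                # (seq_len, d)

x = rng.standard_normal((seq_len, d))
out = attend_with_prefix(x)
print(out.shape)  # one output vector per real token, conditioned on the prefix
```

Note that the trainable prefix (2 small matrices) is far smaller than the frozen projections, which is what makes storing a separate prefix per task cheap compared to a full model copy.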

Bio: Xiang Lisa Li is a second-year PhD student in computer science at Stanford University, advised by Percy Liang and Tatsunori Hashimoto. She works on controllable text generation/decoding and efficient adaptation of pre-trained language models. Lisa is supported by a Stanford Graduate Fellowship and is the recipient of an EMNLP Best Paper award.

Past seminars' slides and recordings are available at our seminar home page: