by J. Hübotter, B. Sukhija, L. Treven, Y. As, and A. Krause
Abstract:
We study the question: How can we select the right data for fine-tuning to a specific task? We call this data selection problem active fine-tuning and show that it is an instance of transductive active learning, a novel generalization of classical active learning. We propose ITL, short for information-based transductive learning, an approach which samples adaptively to maximize information gained about the specified task. We are the first to show, under general regularity assumptions, that such decision rules converge uniformly to the smallest possible uncertainty obtainable from the accessible data. We apply ITL to the few-shot fine-tuning of large neural networks and show that fine-tuning with ITL learns the task with significantly fewer examples than the state-of-the-art.
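To make the idea concrete, here is a minimal sketch of information-based selection in the spirit of ITL, not the paper's implementation: under a Gaussian-process surrogate with an RBF kernel, each round greedily picks the candidate point whose observation most reduces the posterior log-det uncertainty over a given set of target (task) points. The function names, kernel choice, and noise level are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def itl_select(candidates, targets, num_select, noise=1e-2):
    """Greedy ITL-style selection (illustrative sketch).

    At each step, pick the candidate whose inclusion most reduces the
    entropy (log-det of the posterior covariance) of a GP posterior
    evaluated at the target points -- i.e., maximizes information
    gained about the task.
    """
    selected = []
    remaining = list(range(len(candidates)))
    K_tt = rbf_kernel(targets, targets)
    for _ in range(num_select):
        best_i, best_gain = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            X = candidates[idx]
            K_xx = rbf_kernel(X, X) + noise * np.eye(len(idx))
            K_tx = rbf_kernel(targets, X)
            # Posterior covariance over targets after observing X.
            post = K_tt - K_tx @ np.linalg.solve(K_xx, K_tx.T)
            # Lower posterior log-det entropy = higher information gain.
            gain = -np.linalg.slogdet(post + noise * np.eye(len(targets)))[1]
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        remaining.remove(best_i)
    return selected
```

For example, with targets clustered near the origin, the greedy rule prefers nearby candidates (which are informative about the task) over distant ones, illustrating how transductive selection focuses on the specified task rather than the whole input space.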
Reference:
Active Few-Shot Fine-Tuning. J. Hübotter, B. Sukhija, L. Treven, Y. As, and A. Krause. In ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning, 2024.
Bibtex Entry:
@inproceedings{hubotter2024active,
  title={Active Few-Shot Fine-Tuning},
  author={H{\"u}botter, Jonas and Sukhija, Bhavya and Treven, Lenart and As, Yarden and Krause, Andreas},
  booktitle={ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning},
  year={2024},
  pdf={https://arxiv.org/pdf/2402.15441.pdf}
}