Teaching Big Models With Less Data: How Adapters + Active Learning Win

This paper studies active learning (AL) with parameter-efficient fine-tuning (PEFT) via adapters, showing that combining the two improves PLM performance in low-resource text classification.


This content originally appeared on HackerNoon and was authored by Model Tuning

:::info Authors:

(1) Josip Jukic, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia (josip.jukic@fer.hr);

(2) Jan Šnajder, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia (jan.snajder@fer.hr).

:::

Abstract and 1. Introduction

2. Related Work
3. Preliminaries
4. Experiments
5. Analysis
6. Conclusion, Limitations, and References

A. Reproducibility

Abstract

Pre-trained language models (PLMs) have ignited a surge in demand for effective fine-tuning techniques, particularly in low-resource domains and languages. Active learning (AL), a set of algorithms designed to decrease labeling costs by minimizing label complexity, has shown promise in confronting the labeling bottleneck. In parallel, adapter modules designed for parameter-efficient fine-tuning (PEFT) have demonstrated notable potential in low-resource settings. However, the interplay between AL and adapter-based PEFT remains unexplored. We present an empirical study of PEFT behavior with AL in low-resource settings for text classification tasks. Our findings affirm the superiority of PEFT over full fine-tuning (FFT) in low-resource settings and demonstrate that this advantage persists in AL setups. We further examine the properties of PEFT and FFT through the lens of forgetting dynamics and instance-level representations, where we find that PEFT yields more stable representations of early and middle layers compared to FFT. Our research underscores the synergistic potential of AL and PEFT in low-resource settings, paving the way for advancements in efficient and effective fine-tuning.[1]


1 Introduction

Pre-trained language models (PLMs) have quickly become a staple in the field of natural language processing. With the growing demand for data for training these models, developing efficient fine-tuning methods has become critical. This is particularly relevant for many domains and languages where obtaining large amounts of labeled training data is difficult or downright impossible. In such low-resource settings, it becomes essential to effectively leverage and adapt PLMs while minimizing the need for extensive labeled data.

Data labeling is notoriously time-consuming and expensive, often hindering the development of sizable labeled datasets required for training high-performance models. Active learning (AL) (Cohn et al., 1996; Settles, 2009) has emerged as a potential solution to this challenge. In contrast to passive learning, in which the training set is sampled at random, AL encompasses a unique family of machine learning algorithms specifically designed to reduce labeling costs by reducing label complexity, i.e., the number of labels required by an acquisition model to achieve a certain level of performance (Dasgupta, 2011). With the advent of PLMs, AL research has pivoted towards investigating training regimes for PLMs, such as task-adaptive pre-training (TAPT; Gururangan et al., 2020), that could be combined with AL to further reduce the label complexity.
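
To make the acquisition loop concrete, below is a minimal sketch of pool-based AL with entropy-based uncertainty sampling on synthetic data. It illustrates the general procedure only and is not the paper's implementation; the tiny linear model, the synthetic pool, and all names here are placeholders chosen for this example.

```python
import numpy as np
import torch
import torch.nn.functional as F

# Hypothetical setup: an unlabeled pool of feature vectors and an oracle that
# returns gold labels on request (in practice, a human annotator).
rng = np.random.default_rng(0)
X_pool = torch.tensor(rng.normal(size=(1000, 16)), dtype=torch.float32)
y_oracle = torch.tensor(rng.integers(0, 2, size=1000))

labeled = [int(i) for i in rng.choice(len(X_pool), size=20, replace=False)]
query_size, num_rounds = 20, 5

for al_round in range(num_rounds):
    # (Re)train the acquisition model from scratch on the labeled set only.
    model = torch.nn.Linear(16, 2)
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    for _ in range(200):
        opt.zero_grad()
        loss = F.cross_entropy(model(X_pool[labeled]), y_oracle[labeled])
        loss.backward()
        opt.step()

    # Score the remaining pool by predictive entropy (uncertainty sampling).
    unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]
    with torch.no_grad():
        probs = F.softmax(model(X_pool[unlabeled]), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

    # Query labels for the most uncertain instances and grow the labeled set.
    top = entropy.argsort(descending=True)[:query_size]
    labeled.extend(unlabeled[i] for i in top.tolist())
    print(f"round {al_round}: |labeled| = {len(labeled)}, train loss = {loss.item():.3f}")
```

Label complexity is then the size of the labeled set needed to reach a target performance; AL aims to make this smaller than random (passive) sampling would require.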

While AL aims at directly minimizing the label complexity of learning, training efficiency can also be improved by reducing the parameter complexity of the model. This becomes more important as PLMs grow larger, and fine-tuning becomes increasingly challenging due to the sheer number of parameters involved. To address this issue, adapters (Houlsby et al., 2019) have been introduced as compact modules that can be incorporated between the layers of PLMs. Adapters enable considerable parameter-sharing, facilitating parameter-efficient fine-tuning (PEFT) through modular learning (Pfeiffer et al., 2023). In this process, only the parameters of the adapters are updated during the tuning for a specific downstream task. Recent research (He et al., 2021; Li and Liang, 2021; Karimi Mahabadi et al., 2021) has revealed that some PEFT methods outperform full fine-tuning (FFT) in low-resource settings, potentially due to better stability and a decreased risk of overfitting. In contrast, FFT has been shown to exhibit instability in scenarios with limited data.
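
As a rough picture of what an adapter adds to a transformer block, here is a minimal sketch of a Houlsby-style bottleneck adapter in plain PyTorch, together with the freezing step that makes tuning parameter-efficient. The class and function names, the hidden size, and the bottleneck dimension are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project -> nonlinearity -> up-project, with a residual connection."""

    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()
        # Near-identity initialization: the up-projection starts at zero, so the
        # adapter initially passes the PLM's hidden states through unchanged.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

def train_adapters_only(model: nn.Module) -> None:
    """Freeze all PLM parameters except those belonging to adapter modules."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name

# Adapters of this shape would be inserted after the attention and feed-forward
# sub-layers of each transformer block; only a few percent of parameters train.
adapter = BottleneckAdapter()
x = torch.randn(2, 10, 768)  # (batch, sequence length, hidden size)
assert adapter(x).shape == x.shape
```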

Despite the promising results demonstrated by PEFT methods in low-resource settings, there is a striking gap in research on parameter-efficient training with respect to how PEFT interacts with AL. Given that the majority of real-world AL scenarios involve a restricted amount of data, PEFT methods emerge as strong candidates for AL acquisition models. However, there has been no exploration of AL in conjunction with adapters. Investigating this uncharted territory can further advance our understanding of AL and reveal novel strategies for optimizing performance in low-resource settings.

In this paper, we present an empirical study on the behavior of PEFT in low-resource settings for text classification tasks. We analyze PEFT with and without AL and compare it against FFT. Our results confirm that PEFT exhibits superior performance in low-resource setups compared to FFT, and we show that this improved performance extends to AL scenarios in terms of performance gains over passive learning. Furthermore, we analyze the efficacy of TAPT in conjunction with AL and PEFT. We find that TAPT is beneficial in AL scenarios for both PEFT and fully fine-tuned models, thus representing a viable technique for improving performance in low-resource settings. Finally, aiming to illuminate why PEFT and TAPT improve AL performance in low-resource settings, we analyze the properties of PEFT and FFT via forgetting dynamics (Toneva et al., 2019) and PLMs’ instance-level representations. We find that AL methods choose fewer unforgettable and more moderately forgettable examples when combined with PEFT and TAPT, where forgetfulness refers to the model’s tendency to learn and then forget the gold label of a particular instance. Compared to FFT, we observe that PEFT yields representations in the early and middle layers of a model that are more similar to the representations of the base PLM. We hypothesize that this property mitigates the issue of forgetting the knowledge obtained during pre-training when fine-tuning for downstream tasks.
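
Forgetting dynamics in the sense of Toneva et al. (2019) can be summarized by counting, per training example, how often the model goes from predicting the gold label correctly to predicting it incorrectly across epochs. The sketch below assumes such a per-epoch correctness record is already available; the array shapes and names are illustrative, not taken from the paper's code.

```python
import numpy as np

def forgetting_events(correct_per_epoch: np.ndarray) -> np.ndarray:
    """Count forgetting events per example.

    correct_per_epoch has shape (num_epochs, num_examples); entry (t, i) is True
    if example i matched its gold label after epoch t. A forgetting event is a
    transition from correct at epoch t to incorrect at epoch t + 1.
    """
    prev, curr = correct_per_epoch[:-1], correct_per_epoch[1:]
    return np.logical_and(prev, ~curr).sum(axis=0)

# Toy record: 4 epochs, 3 examples.
record = np.array([
    [True,  True,  False],
    [True,  False, False],   # example 1 is forgotten here
    [False, True,  True],    # example 0 is forgotten here
    [True,  True,  True],
], dtype=bool)

events = forgetting_events(record)       # -> [1, 1, 0]
learned = record.any(axis=0)
unforgettable = (events == 0) & learned  # learned and never forgotten
print(events, unforgettable)
```

In this terminology, unforgettable examples are learned and never forgotten, while moderately forgettable ones are forgotten a small number of times during training.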

In summary, we show that in AL low-resource settings for text classification, (1) PEFT yields greater performance improvements compared to FFT and (2) TAPT enhances the overall classification performance of adapters and is well-suited for AL scenarios. We also show that (3) AL methods choose fewer unforgettable and more moderately forgettable examples with PEFT and that (4) PEFT produces instance-level representations of early and middle layers that are more similar to the base PLM than FFT. Our results uncover the intricacies of positive interactions between AL, PEFT, and TAPT, providing empirical justification for their combined use in low-resource settings.
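
Point (4) can be probed by comparing layer-wise hidden states of a tuned model with those of the frozen base PLM on the same inputs. The sketch below uses Hugging Face transformers and cosine similarity of mean-pooled token representations; the checkpoint names are placeholders (a real comparison would load the actual fine-tuned or adapter-tuned model), and the similarity measure is an illustrative choice rather than necessarily the metric used in the paper's analysis.

```python
import torch
from transformers import AutoModel, AutoTokenizer

base = AutoModel.from_pretrained("bert-base-uncased")   # frozen base PLM
tuned = AutoModel.from_pretrained("bert-base-uncased")  # stand-in for a tuned model

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["a small batch of task instances"], return_tensors="pt")

with torch.no_grad():
    h_base = base(**batch, output_hidden_states=True).hidden_states
    h_tuned = tuned(**batch, output_hidden_states=True).hidden_states

# hidden_states is a tuple: index 0 is the embedding layer, 1..12 are the
# transformer layers for BERT-base. Compare mean-pooled representations per layer.
for layer, (hb, ht) in enumerate(zip(h_base, h_tuned)):
    sim = torch.nn.functional.cosine_similarity(hb.mean(dim=1), ht.mean(dim=1), dim=-1)
    print(f"layer {layer:2d}: similarity to base PLM = {sim.item():.4f}")
```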


2 Related Work

Our research involves combining AL with PLMs and investigating the use of PEFT techniques within the confines of low-resource settings.

AL with PLMs. Until recently, the conventional approach for integrating PLMs with AL involved performing full fine-tuning with a fixed number of training epochs and training the model from scratch in each AL step (Ein-Dor et al., 2020; Margatina et al., 2021; Shelmanov et al., 2021; Karamcheti et al., 2021; Schröder et al., 2022). However, studies by Mosbach et al. (2021) and Zhang et al. (2021) revealed that fine-tuning in low-resource setups is prone to instability, particularly when training for only a few epochs. This instability, often sensitive to weight initialization and data ordering (Dodge et al., 2020), presents a significant challenge for AL, which frequently operates in low-resource settings. Recent research has looked into the impact of PLM training regimes on AL performance (Grießhaber et al., 2020; Yuan et al., 2020; Yu et al., 2022), suggesting that the choice of training regime is more critical than the choice of the AL method. Notably, TAPT has proven particularly effective in enhancing AL performance (Margatina et al., 2022; Jukić and Šnajder, 2023).

Adapters in low-resource settings. Research on adapters in low-resource settings has primarily focused on areas such as cross-lingual transfer for low-resource languages (Ansell et al., 2021; Lee et al., 2022; Parović et al., 2022), where the emphasis lies on exploring diverse methods of fusing adapters. In monolingual settings with scarce data, adapters have been found to outperform full fine-tuning (Li and Liang, 2021; Mao et al., 2022). A study by He et al. (2021) demonstrated that adapter-based tuning exhibits enhanced stability and generalization capabilities by virtue of being less sensitive to learning rates than traditional fine-tuning methods. While incorporating task adaptation techniques, such as TAPT, has been shown to match or even improve performance over FFT in low-resource setups, Kim et al. (2021) noted an interesting caveat: the benefits of integrating TAPT with adapters tend to taper off as the amount of data increases.

Despite the established effectiveness of adapters in setups with limited resources, their integration into AL frameworks, which frequently face analogous resource constraints, remains an untapped area of research. This gap is particularly notable given that AL’s iterative learning process could significantly benefit from adapters’ parameter efficiency and transferability, especially in scenarios where data scarcity or labeling costs are primary concerns.


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

[1] Our code is available at https://github.com/josipjukic/adapter-al

