More Than a Feeling: Visualizing Why Filter Atoms Outsmart LoRA in Fine-Tuning
Post date: July 1, 2025 | Post author: Model Tuning | Post categories: channel-mixing-weights, convolutional-neural-networks, efficient-tuning, filter-atom-decomposition, filter-subspace, latent-basis-tasks, low-dimensional-subspace, pre-trained-large-models

Tuning the Pixels, Not the Soul: How Filter Atoms Remake ConvNets
Post date: July 1, 2025 | Post author: Model Tuning | Post categories: channel-mixing-weights, convolutional-neural-networks, efficient-tuning, filter-atom-decomposition, filter-subspace, latent-basis-tasks, low-dimensional-subspace, pre-trained-large-models

Keep the Channel, Change the Filter: A Smarter Way to Fine-Tune AI Models
Post date: July 1, 2025 | Post author: Model Tuning | Post categories: channel-mixing-weights, convolutional-neural-networks, efficient-tuning, filter-atom-decomposition, filter-subspace, latent-basis-tasks, low-dimensional-subspace, pre-trained-large-models