Towards Mitigating Architecture Overfitting in Dataset Distillation

Xuyang Zhong, Chen Liu


Dataset distillation methods have demonstrated remarkable performance for neural networks trained with very limited data. However, a significant challenge arises in the form of architecture overfitting: distilled training data synthesized by one network architecture (i.e., the training network) yields poor performance when used to train other network architectures (i.e., test networks). This paper addresses this issue and proposes a series of approaches spanning both architecture design and training schemes, which can be adopted together to boost generalization performance across different network architectures trained on the distilled data. We conduct extensive experiments to demonstrate the effectiveness and generality of our methods. In particular, across scenarios with different sizes of distilled data, our approaches achieve performance comparable or superior to existing methods when the distilled data is used to train networks with larger capacities.
