Neural Architecture Search (NAS) benchmarks (NAS-Bench-101, NAS-Bench-201) provide pre-evaluated architectures within constrained search spaces, enabling reproducible research. However, this creates a fundamental limitation: the benchmark defines the boundary of “valid” architectures.
In our pipeline, a graph diffusion model first learns the distribution of benchmark architectures and is then fine-tuned to shift generation toward high-performing designs. During fine-tuning, the model may generate architectures that are structurally valid but do not exist in the benchmark lookup table; we refer to these as Out-of-Distribution (OOD) architectures.
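As a sketch of how such OOD generations might be flagged, the snippet below checks a sampled cell against the NAS-Bench-101 lookup table via the reference nasbench Python API; the tfrecord path, the sample adjacency matrix, and the op list are illustrative placeholders rather than actual outputs of our diffusion model.

```python
# Sketch: flag generated cells that fall outside the NAS-Bench-101 lookup table.
# Assumes the reference `nasbench` package and a local copy of the benchmark
# tfrecord; matrix/ops below are a hand-written example, not a model sample.
import numpy as np
from nasbench import api

NASBENCH_PATH = 'nasbench_only108.tfrecord'  # placeholder path
nasbench = api.NASBench(NASBENCH_PATH)

def query_or_flag_ood(matrix, ops):
    """Return the benchmark record for an in-distribution cell, or None if OOD."""
    spec = api.ModelSpec(matrix=matrix, ops=ops)
    # is_valid() enforces the benchmark's own constraints (<= 7 nodes, <= 9 edges,
    # allowed op set); a structurally sound DAG that violates them has no
    # lookup-table entry and is treated as OOD here.
    if not nasbench.is_valid(spec):
        return None
    return nasbench.query(spec)  # dict with train/validation/test accuracy, etc.

# Example usage with a hypothetical 5-node cell.
matrix = np.array([[0, 1, 1, 0, 1],
                   [0, 0, 0, 1, 0],
                   [0, 0, 0, 1, 0],
                   [0, 0, 0, 0, 1],
                   [0, 0, 0, 0, 0]])
ops = ['input', 'conv3x3-bn-relu', 'maxpool3x3', 'conv1x1-bn-relu', 'output']

record = query_or_flag_ood(matrix, ops)
print('OOD' if record is None else record['test_accuracy'])
```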
Research Questions:
The research aims to characterize OOD architectures, evaluate their actual performance by training them, and develop OOD-aware reward mechanisms, potentially enabling genuine architecture discovery beyond benchmark constraints.
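One possible shape of such an OOD-aware reward is sketched below: in-benchmark architectures receive their looked-up accuracy, while OOD architectures fall back to a cheap proxy estimate minus an uncertainty penalty. The function names (lookup_accuracy, proxy_score) and the penalty weight are hypothetical placeholders, not the finalized mechanism.

```python
# Sketch of an OOD-aware reward for diffusion fine-tuning (e.g., policy-gradient
# or reward-guided sampling). All names and constants here are hypothetical.
from typing import Callable, Optional

def ood_aware_reward(
    arch,
    lookup_accuracy: Callable[[object], Optional[float]],  # benchmark table; None if arch is OOD
    proxy_score: Callable[[object], float],                 # e.g., zero-cost proxy or surrogate model
    ood_penalty: float = 0.1,                               # discourages drifting too far off-benchmark
) -> float:
    """Reward = benchmark accuracy when available, else a penalized proxy estimate."""
    acc = lookup_accuracy(arch)
    if acc is not None:
        return acc
    # OOD branch: no ground-truth table entry, so fall back to a cheap estimate
    # and subtract a penalty reflecting the estimate's uncertainty.
    return proxy_score(arch) - ood_penalty
```

The penalty term trades off exploration of off-benchmark designs against reward hacking on an unreliable proxy; tuning it (or replacing it with a learned uncertainty estimate) is part of the research question rather than a settled choice.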
Primary Datasets:
NAS-Bench-101, NAS-Bench-201 (CIFAR-10, CIFAR-100, ImageNet16-120)
Technical Prerequisites
Literature