Universität Bielefeld

[MA/Project]

Exploring Novel Architectures Beyond NAS Benchmark Boundaries: Evaluating OOD Graphs Generated by Graph-Diffusion-based NAS

Contact: Aleksei Liuliakov

Neural Architecture Search (NAS) benchmarks (NAS-Bench-101, NAS-Bench-201) provide pre-evaluated architectures within constrained search spaces, enabling reproducible research. However, this creates a fundamental limitation: the benchmark defines the boundary of “valid” architectures.

In our pipeline, a graph diffusion model learns the distribution of benchmark architectures and is then fine-tuned to shift toward high-performing designs. During fine-tuning, the model may generate architectures that are structurally valid but do not exist in the benchmark lookup table; these are Out-of-Distribution (OOD) architectures.
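The lookup step above can be sketched as a membership test: hash the generated cell (adjacency matrix plus operation labels) and check whether the hash appears in the benchmark's table. This is a minimal illustrative sketch; note that NAS-Bench-101 itself uses a graph-isomorphism-invariant hash, whereas the simpler digest below treats isomorphic but differently ordered graphs as distinct.

```python
import hashlib
import numpy as np

def cell_hash(adjacency: np.ndarray, ops: list) -> str:
    """Digest of a cell's adjacency matrix and operation list.

    Simplified stand-in for the benchmark's isomorphism-invariant hash:
    it only illustrates the lookup idea, not the exact NAS-Bench scheme.
    """
    payload = adjacency.astype(np.uint8).tobytes() + b"|" + "|".join(ops).encode()
    return hashlib.sha256(payload).hexdigest()

def is_ood(adjacency: np.ndarray, ops: list, benchmark_hashes: set) -> bool:
    """A generated cell is OOD w.r.t. the benchmark if its hash is absent."""
    return cell_hash(adjacency, ops) not in benchmark_hashes
```

A generated cell whose hash misses the table cannot be scored by lookup and must be handled separately, which is exactly where the fine-tuned diffusion model leaves the benchmark's boundary.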

Research Questions:

The research aims to characterize OOD architectures, evaluate their actual performance through training, and develop OOD-aware reward mechanisms, potentially enabling genuine architecture discovery beyond benchmark constraints.
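One possible shape for an OOD-aware reward, sketched under assumptions not fixed by the project description: use the exact benchmark score when the architecture is in the lookup table, and otherwise fall back to a cheap proxy estimate discounted by an uncertainty penalty. The names `proxy_score` and `ood_penalty` are hypothetical design parameters.

```python
def ood_aware_reward(arch, benchmark, proxy_score, ood_penalty=0.1):
    """Hypothetical OOD-aware reward for diffusion fine-tuning.

    arch:        hashable architecture identifier
    benchmark:   dict mapping known architectures to evaluated accuracy
    proxy_score: cheap performance estimator for unseen architectures
    ood_penalty: discount reflecting the proxy's uncertainty
    """
    if arch in benchmark:
        return benchmark[arch]            # in-distribution: exact lookup
    return proxy_score(arch) - ood_penalty  # OOD: discounted proxy estimate
```

The penalty term keeps the fine-tuned model from exploiting an over-optimistic proxy, while still letting genuinely promising OOD designs earn reward; calibrating it against architectures evaluated by actual training is one of the questions this project would study.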

Primary Datasets:

NAS-Bench-101, NAS-Bench-201 (CIFAR-10, CIFAR-100, ImageNet16-120)

Technical Prerequisites

Literature

  1. Lukasik, J., Jung, S., & Keuper, M. (2022). Learning Where To Look — Generative NAS is Surprisingly Efficient. arXiv:2203.08734.
  2. Vignac, C., Krawczuk, I., Siraudin, A., Wang, B., Cevher, V., & Frossard, P. (2022). DiGress: Discrete Denoising Diffusion for Graph Generation. arXiv:2209.14734.
  3. Liu, Y., Du, C., Pang, T., Li, C., Lin, M., & Chen, W. (2024). Graph Diffusion Policy Optimization. Advances in Neural Information Processing Systems, 37, 9585–9611.
  4. Black, K., Janner, M., Du, Y., Kostrikov, I., & Levine, S. (2024). Training Diffusion Models with Reinforcement Learning. arXiv:2305.13301.