
ICLR 2021 NAS-Related Papers (Including Workshop Papers)

Accepted at ICLR 2021 Workshops

  • Measuring Uncertainty through Bayesian Learning of Deep Neural Network Structure

Zhijie Deng, Yucen Luo and Jun Zhu PDF

  • AutoHAS: Efficient Hyperparameter and Architecture Search

Xuanyi Dong, Mingxing Tan, Adams Yu, Daiyi Peng, Bogdan Gabrys and Quoc Le PDF

  • Tensorizing Neural Architecture Search in the Supernet

Hansi Yang, Quanming Yao and James T. Kwok PDF

  • Simulation-based Scoring for Model-based Asynchronous Hyperparameter and Neural Architecture Search

Matthias Seeger, Aaron Klein, Thibaut Lienart and Louis Tiao PDF

  • Making Differentiable Architecture Search less local

Erik Bodin, Federico Tomasi and Zhenwen Dai PDF

  • Width transfer: on the (in)variance of width optimization

Ting-Wu Chin, Diana Marculescu and Ari Morcos PDF

  • How Powerful are Performance Predictors in Neural Architecture Search?

Colin White, Arber Zela, Binxin Ru, Yang Liu and Frank Hutter PDF

  • On Adversarial Robustness: A Neural Architecture Search perspective

Chaitanya Devaguptapu, Gaurav Mittal, Devansh Agarwal and Vineeth N Balasubramanian PDF

  • A multi-objective perspective on jointly tuning hardware and hyperparameters

David Salinas, Valerio Perrone, Cedric Archambeau and Olivier Cruchant PDF

  • HardCoRe-NAS: Hard Constrained diffeRentiable Neural Architecture Search

Niv Nayman, Yonathan Aflalo, Asaf Noy and Lihi Zelnik PDF

  • Cost-aware Adversarial Best Arm Identification

Nikita Ivkin, Zohar Karnin, Valerio Perrone and Giovanni Zappella PDF

  • MONCAE: Multi-Objective Neuroevolution of Convolutional Autoencoders

Daniel Dimanov, Emili Balaguer-Ballester, Shahin Rostami and Colin Singleton PDF

  • Overfitting in Bayesian Optimization: an empirical study and early-stopping solution

Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger and Cedric Archambeau PDF

  • How does Weight Sharing Help in Neural Architecture Search?

Yuge Zhang, Quanlu Zhang and Yaming Yang PDF

  • AlphaNet: Improved Training of Supernet with Alpha-Divergence

Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu and Vikas Chandra PDF

  • One-Shot Neural Architecture Search Via Compressive Sensing

Minsu Cho, Mohammadreza Soltani and Chinmay Hegde PDF

  • Rethinking NAS Operations for Diverse Tasks

Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Re and Ameet Talwalkar PDF

  • Recovering Quantitative Models of Human Information Processing with Differentiable Architecture Search

Sebastian Musslick PDF

  • Flexible Multi-task Networks by Learning Parameter Allocation

Krzysztof Maziarz, Efi Kokiopoulou, Andrea Gesmundo, Luciano Sbaiz, Gabor Bartok and Jesse Berent PDF

Accepted at ICLR 2021 (Main Conference)

1. How to Train Your Super-Net: An Analysis of Training Heuristics in Weight-Sharing NAS

2. DARTS-: Robustly Stepping out of Performance Collapse Without Indicators

3. Noisy Differentiable Architecture Search

4. FTSO: Effective NAS via First Topology Second Operator

Our method, named FTSO, reduces NAS's search time from days to 0.68 seconds while achieving 76.42% testing accuracy on ImageNet and 97.77% testing accuracy on CIFAR10 by searching for the network topology and operators separately.

5. DOTS: Decoupling Operation and Topology in Differentiable Architecture Search

We improve DARTS by decoupling the topology representation from the operation weights and making the topology search explicit.

6. Geometry-Aware Gradient Algorithms for Neural Architecture Search

Studying the right single-level optimization geometry yields state-of-the-art methods for NAS.

7. GOLD-NAS: Gradual, One-Level, Differentiable

A new differentiable NAS framework incorporating one-level optimization and gradual pruning, working on large search spaces.

8. Weak NAS Predictor Is All You Need

We present a novel method to progressively estimate weak predictors in predictor-based neural architecture search. Through coarse-to-fine iteration, the ranking of the sampling space is gradually refined, which eventually helps find the optimal architectures.

9. Differentiable Graph Optimization for Neural Architecture Search

We learn a differentiable graph neural network as a surrogate model to rank candidate architectures.

10. DrNAS: Dirichlet Neural Architecture Search

We propose a simple yet effective progressive learning scheme that enables searching directly on large-scale tasks, eliminating the gap between the search and evaluation phases. Extensive experiments demonstrate the effectiveness of our method.

11. Neural Architecture Search of SPD Manifold Networks

We first introduce a geometrically rich and diverse SPD neural architecture search space for efficient SPD cell design. Further, we model our new NAS problem with a supernet strategy that treats the architecture search as a one-shot training process of a single supernet.

12. Neighborhood-Aware Neural Architecture Search

We propose a neighborhood-aware formulation for neural architecture search to find flat minima in the search space that can generalize better to new settings.

13. A Surgery of the Neural Architecture Evaluators

This paper assesses current fast neural architecture evaluators with multiple direct criteria, under controlled settings.

14. Exploring single-path Architecture Search ranking correlations

An empirical study of how several method variations affect the quality of the architecture ranking prediction.

15. Neural Architecture Search without Training

16. Zero-Cost Proxies for Lightweight NAS

A single minibatch of data is used to score neural networks for NAS instead of performing full training.
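
As a rough illustration of this single-minibatch idea (a generic gradient-norm proxy, not the specific proxies studied in the paper; the toy architectures and random batch below are made up for the example), scoring untrained candidates might look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradnorm_score(model: nn.Module, inputs: torch.Tensor, targets: torch.Tensor) -> float:
    """Score an untrained network from one minibatch: sum of parameter-gradient norms."""
    model.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()
    return sum(p.grad.norm().item() for p in model.parameters() if p.grad is not None)

# Hypothetical usage: rank two toy candidate architectures on a single random "minibatch".
if __name__ == "__main__":
    x, y = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
    candidates = {
        "narrow": nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10)),
        "wide": nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10)),
    }
    for name, net in candidates.items():
        print(name, gradnorm_score(net, x, y))
```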

17. Improving Zero-Shot Neural Architecture Search with Parameters Scoring

A score can be designed that takes into account the Jacobian in parameter space and is highly predictive of final performance on a task.

18. Multi-scale Network Architecture Search for Object Detection

19. Triple-Search: Differentiable Joint-Search of Networks, Precision, and Accelerators

We propose the Triple-Search framework to jointly search network structure, precision and hardware architecture in a differentiable manner.

20. TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search

21. Searching for Convolutions and a More Ambitious NAS

A general-purpose search space for neural architecture search that enables discovering operations that beat convolutions on image data.

22. EnTranNAS: Towards Closing the Gap between the Architectures in Search and Evaluation

23. Efficient Graph Neural Architecture Search

By designing a novel and expressive search space, an efficient one-shot NAS method based on stochastic relaxation and natural gradient is proposed.

24. Network Architecture Search for Domain Adaptation

25. HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark

26. Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective

Our TE-NAS framework analyzes the spectrum of the neural tangent kernel (NTK) and the number of linear regions in the input space, achieving high-quality architecture search while dramatically reducing the search cost to four hours on ImageNet.
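
For intuition only, a minimal sketch of the NTK-spectrum part (an empirical NTK on a small batch via per-sample gradients; an assumption-laden illustration, not the TE-NAS implementation, which also counts linear regions) might look like:

```python
import torch
import torch.nn as nn

def ntk_condition_number(model: nn.Module, inputs: torch.Tensor) -> float:
    """Condition number of an empirical NTK Gram matrix computed on a small batch."""
    grads = []
    for i in range(inputs.size(0)):
        out = model(inputs[i:i + 1]).sum()            # scalarize the per-sample output for simplicity
        g = torch.autograd.grad(out, list(model.parameters()))
        grads.append(torch.cat([v.reshape(-1) for v in g]))
    J = torch.stack(grads)                            # (N, num_params) per-sample gradients
    ntk = J @ J.t()                                   # (N, N) empirical NTK Gram matrix
    eig = torch.linalg.eigvalsh(ntk).clamp_min(1e-12) # eigenvalues in ascending order
    return (eig[-1] / eig[0]).item()                  # lambda_max / lambda_min

# Hypothetical usage: a smaller condition number is read as a sign of better trainability.
if __name__ == "__main__":
    net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
    print(ntk_condition_number(net, torch.randn(8, 3, 32, 32)))
```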

27. Stabilizing DARTS with Amended Gradient Estimation on Architectural Parameters

Fixing errors in gradient estimation of architectural parameters for stabilizing the DARTS algorithm.

28. NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search

29. NASOA: Towards Faster Task-oriented Online Fine-tuning

We propose a Neural Architecture Search and Online Adaptation framework named NASOA for faster task-oriented fine-tuning upon user request.

30. Model-based Asynchronous Hyperparameter and Neural Architecture Search

We present a new asynchronous multi-fidelity Bayesian optimization method to efficiently search for hyperparameters and architectures of neural networks.

31. A Gradient-based Kernel Approach for Efficient Network Architecture Search

We first formulate these two terms into a unified gradient-based kernel and then select architectures with the largest kernels at initialization as the final networks. The new approach replaces the expensive "train-then-test" evaluation paradigm.

32. Fast MNAS: Uncertainty-aware Neural Architecture Search with Lifelong Learning

We propose FNAS, which accelerates the standard RL-based NAS process by 10x and guarantees better performance on various vision tasks.

33. Explicit Learning Topology for Differentiable Neural Architecture Search

34. NASLib: A Modular and Flexible Neural Architecture Search Library

35. Rethinking Architecture Selection in Differentiable NAS

36. Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets

We propose an efficient NAS framework that is trained once on a database consisting of datasets and pretrained networks and can rapidly generate a neural architecture for a novel dataset.

37. Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels

We propose a NAS method that is sample-efficient, highly performant and interpretable.

38. AutoHAS: Efficient Hyperparameter and Architecture Search

39. Width transfer: on the (in)variance of width optimization

We control the training configurations, i.e., network architectures and training data, for three existing width optimization algorithms and find that the optimized widths are largely transferable across settings.

40. NAHAS: Neural Architecture and Hardware Accelerator Search

We propose NAHAS, a latency-driven software/hardware co-optimizer that jointly optimizes the design of neural architectures and a mobile edge processor.

41. Neural Network Surgery: Combining Training with Topology Optimization

We demonstrate a hybrid approach for combining neural network training with a genetic-algorithm based architecture optimization.

42. Efficient Architecture Search for Continual Learning

Our proposed CLEAS works closely with neural architecture search (NAS), which leverages reinforcement learning techniques to search for the best neural architecture that fits a new task.

43. Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation

Auto Seg-Loss is the first general framework for searching surrogate losses for mainstream semantic segmentation metrics.

44. Improving Random-Sampling Neural Architecture Search by Evolving the Proxy Search Space

45. SEDONA: Search for Decoupled Neural Networks toward Greedy Block-wise Learning

Our approach is the first attempt to automate decoupling neural networks for greedy block-wise learning and outperforms both end-to-end backprop and state-of-the-art greedy-learning methods on CIFAR-10, Tiny-ImageNet and ImageNet classification.

46. Intra-layer Neural Architecture Search

Neural architecture search at the level of individual weight parameters.