
Research

Showing results 31–36 of 904

The large learning rate phase of deep learning


Authors: Anonymous
Published: 01/01/2021

Abstract: The choice of initial learning rate can have a profound effect on the performance of deep networks. We present empirical evidence that networks exhibit sharply distinct behaviors at small and …
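The sharp small/large learning-rate distinction this abstract points to can be illustrated with a toy example (not from the paper): plain gradient descent on a one-dimensional quadratic loss converges below a critical learning rate of 2/curvature and diverges above it.

```python
# Toy sketch: gradient descent on f(x) = 0.5 * curvature * x**2.
# Behavior flips sharply at the critical learning rate 2 / curvature.
def gradient_descent(lr, curvature=10.0, x0=1.0, steps=50):
    x = x0
    for _ in range(steps):
        x -= lr * curvature * x  # gradient of f at x is curvature * x
    return abs(x)

small = gradient_descent(lr=0.05)  # below 2/10 = 0.2: |x| shrinks toward 0
large = gradient_descent(lr=0.25)  # above 0.2: |x| blows up
```

Deep networks are of course not quadratics, but this one-dimensional picture is the standard intuition for why small and large learning rates can land in qualitatively different regimes.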

Hierarchical Meta Reinforcement Learning for Multi-Task Environments


Authors: Anonymous
Published: 01/01/2021
Tasks: Hierarchical Reinforcement Learning, Meta Reinforcement Learning

Abstract: Deep reinforcement learning algorithms aim to achieve human-level intelligence by solving practical decision-making problems, which are often composed of multiple sub-tasks. Complex and subtle relationships between sub-tasks make traditional methods …

Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis


Authors: Anonymous
Published: 01/01/2021
Tasks: Image Generation

Abstract: Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires large-scale GPU clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for …

On the Effectiveness of Weight-Encoded Neural Implicit 3D Shapes


Authors: Anonymous
Published: 01/01/2021
Tasks: 3D Shape Representation

Abstract: A neural implicit outputs a number indicating whether the given query point in space is outside, inside, or on a surface. Many prior works have focused on _latent-encoded_ neural implicits, …
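The interface this abstract describes can be sketched minimally (the names here are illustrative, and a trained weight-encoded network is replaced by the analytic signed distance of a unit sphere, which exposes the same query interface: negative inside, positive outside, zero on the surface).

```python
import numpy as np

# Stand-in for a weight-encoded neural implicit: the analytic signed
# distance to the unit sphere. A trained network f(p) would be queried
# the same way; only the sign of the output matters for classification.
def implicit_sphere(p):
    return float(np.linalg.norm(p) - 1.0)

def classify(f, p, eps=1e-6):
    """Classify a 3-D query point against the implicit surface f."""
    d = f(np.asarray(p, dtype=float))
    if d < -eps:
        return "inside"
    if d > eps:
        return "outside"
    return "on surface"
```

For example, `classify(implicit_sphere, [0, 0, 0])` reports the origin as inside the sphere, while a point at distance 2 from the origin is outside.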

Improving Random-Sampling Neural Architecture Search by Evolving the Proxy Search Space


Authors: Anonymous
Published: 01/01/2021
Tasks: Image Classification, Neural Architecture Search

Abstract: Random-sampling Neural Architecture Search (RandomNAS) has recently become a prevailing NAS approach because of its search efficiency and simplicity. There are two main steps in RandomNAS: the training step that …

Model-Free Energy Distance for Pruning DNNs


Authors: Anonymous
Published: 01/01/2021

Abstract: We propose a novel method for compressing Deep Neural Networks (DNNs) with competitive performance to state-of-the-art methods. We measure a new model-free information between the feature maps and the output …
