Columns: conference (string, 3 distinct values), year (int32), paper_id (int32), title (string), abstract (string), topics (list of strings), image_url (string). Each record below lists these fields in that order.
ICML
2023
26539
Model-based Policy Optimization under Approximate Bayesian Inference
Model-based reinforcement learning (MBRL) algorithms present an exceptional potential to enhance sample efficiency within the realm of online reinforcement learning (RL). Nevertheless, a substantial proportion of prevalent MBRL algorithms fail to adequately address the dichotomy of exploration and exploitation. Posterior sampling reinforcement learning (PSRL) emerges as an innovative strategy adept at balancing exploration and exploitation, albeit its theoretical assurances are contingent upon exact inference. In this paper, we show that adopting the same methodology as in exact PSRL can be suboptimal under approximate inference. Motivated by the analysis, we propose an improved factorization for the posterior distribution of policies by removing the conditional independence between the policy and data given the model. By adopting such a posterior factorization, we further propose a general algorithmic framework for PSRL under approximate inference and a practical instantiation of it. Empirically, our algorithm can surpass baseline methods by a significant margin on both dense-reward and sparse-reward tasks from the DeepMind Control Suite, OpenAI Gym and Meta-World benchmarks.
[ "Reinforcement Learning", "Model-based Reinforcement Learning", "Bayesian Inference", "Machine Learning Algorithms", "Exploration-Exploitation Trade-off" ]
https://icml.cc/media/Po…202023/26539.png
NeurIPS
2023
70413
Leveraging Vision-Centric Multi-Modal Expertise for 3D Object Detection
Current research is primarily dedicated to advancing the accuracy of camera-only 3D object detectors (apprentice) through the knowledge transferred from LiDAR- or multi-modal-based counterparts (expert). However, the presence of the domain gap between LiDAR and camera features, coupled with the inherent incompatibility in temporal fusion, significantly hinders the effectiveness of distillation-based enhancements for apprentices. Motivated by the success of uni-modal distillation, an apprentice-friendly expert model should predominantly rely on camera features, while still achieving performance comparable to multi-modal models. To this end, we introduce VCD, a framework to improve the camera-only apprentice model, including an apprentice-friendly multi-modal expert and temporal-fusion-friendly distillation supervision. The multi-modal expert VCD-E adopts a structure identical to that of the camera-only apprentice in order to alleviate the feature disparity, and leverages LiDAR input as a depth prior to reconstruct the 3D scene, achieving performance on par with other heterogeneous multi-modal experts. Additionally, a fine-grained trajectory-based distillation module is introduced with the purpose of individually rectifying the motion misalignment for each object in the scene. With those improvements, our camera-only apprentice VCD-A sets a new state of the art on nuScenes with a score of 63.1% NDS. The code will be released at https://github.com/OpenDriveLab/Birds-eye-view-Perception.
[ "Computer Vision", "3D Object Detection", "Multi-Modal Learning", "Autonomous Vehicles" ]
https://neurips.cc/media…202023/70413.png
NeurIPS
2022
56954
SE(3)-equivariant self-attention via invariant features
In this work, we use classical invariant theory to construct a self-attention module equivariant to 3D rotations and translations. The parameterization is based on the characterization of SE(3)-equivariant functions via invariants: scalar products of vectors and certain subdeterminants. This parameterization can be seen as a natural extension of a (more straightforward) E(3)-equivariant attention based on invariants such as scalar products or pairwise distances of vectors. We evaluate our model using a toy N-body particle simulation dataset and a real-world dataset of molecular properties. Our model is easy to implement, and it exhibits performance and running time comparable to state-of-the-art methods.
[ "Computer Vision", "Computational Chemistry", "Geometric Deep Learning", "Invariant Theory" ]
https://neurips.cc/media…202022/56954.png
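To make the invariant-feature idea in the record above concrete, here is a minimal sketch (not the paper's parameterization) of an attention step whose weights are computed only from E(3)-invariant pairwise quantities; using negative squared distances as attention logits is a hypothetical simplification.

```python
# Minimal sketch: attention logits built only from E(3)-invariant pairwise features,
# so the output is unchanged by global rotations and translations of the point cloud.
import numpy as np

def invariant_attention(x, feats):
    """x: (N, 3) point coordinates, feats: (N, d) per-point features."""
    rel = x[:, None, :] - x[None, :, :]          # relative vectors (translation-invariant)
    dist2 = (rel ** 2).sum(-1)                   # pairwise squared distances (rotation-invariant)
    logits = -dist2                              # hypothetical choice: closer points attend more
    attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)     # row-wise softmax
    return attn @ feats                          # invariant mixing of the features

rng = np.random.default_rng(0)
x, feats = rng.normal(size=(5, 3)), rng.normal(size=(5, 8))
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # random orthogonal transform
out1 = invariant_attention(x, feats)
out2 = invariant_attention(x @ R.T + 1.0, feats)
print(np.allclose(out1, out2))                  # True: output unchanged under rotation + translation
```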
ICML
2024
34723
Mitigating Label Noise on Graphs via Topological Sample Selection
Despite the success on carefully-annotated benchmarks, the effectiveness of existing graph neural networks (GNNs) can be considerably impaired in practice when the real-world graph data is noisily labeled. Previous explorations have demonstrated sample selection to be an effective way to achieve robust learning with noisy labels; however, conventional studies focus on i.i.d. data, and when moving to non-i.i.d. graph data and GNNs, two notable challenges remain: (1) nodes located near topological class boundaries are very informative for classification but cannot be successfully distinguished by heuristic sample selection; (2) there is no available measure that considers the graph topological information to promote sample selection in a graph. To address this dilemma, we propose a $\textit{Topological Sample Selection}$ (TSS) method that boosts the informative sample selection process in a graph by utilising topological information. We theoretically prove that our procedure minimizes an upper bound of the expected risk under the target clean distribution, and experimentally show the superiority of our method compared with state-of-the-art baselines.
[ "Graph Neural Networks", "Data Mining", "Topological Data Analysis", "Robust Learning", "Noisy Labels" ]
https://icml.cc/media/Po…202024/34723.png
ICML
2024
33985
MLI Formula: A Nearly Scale-Invariant Solution with Noise Perturbation
Monotonic Linear Interpolation (MLI) refers to the peculiar phenomenon that the error between the initial and converged model monotonically decreases along the linear interpolation, i.e., $(1-\alpha)\boldsymbol{\theta}_0 + \alpha \boldsymbol{\theta}_F$. Previous works focus on paired initial and converged points, relating MLI to the smoothness of the optimization trajectory. In this paper, we find a shocking fact that the error curves still exhibit a monotonic decrease when $\boldsymbol{\theta}_0$ is replaced with noise or even zero values, implying that the decreasing curve may be primarily related to the property of the converged model rather than the optimization trajectory. We further explore the relationship between $\alpha\boldsymbol{\theta}_F$ and $\boldsymbol{\theta}_F$ and propose scale invariance properties in various cases, including Generalized Scale Invariance (GSI), Rectified Scale Invariance (RSI), and Normalized Scale Invariance (NSI). From an inverse perspective, the MLI formula is essentially an equation that adds varying levels of noise (i.e., $(1-\alpha)\boldsymbol{\epsilon}$) to a nearly scale-invariant network (i.e., $\alpha \boldsymbol{\theta}_F$), resulting in a monotonically increasing error as the noise level rises. MLI is a special case where $\boldsymbol{\epsilon}$ is equal to $\boldsymbol{\theta}_0$.
[ "Optimization", "Neural Networks", "Scale Invariance", "Noise Perturbation" ]
https://icml.cc/media/Po…202024/33985.png
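A toy sketch of the interpolation experiment described in the abstract above: tracing the error along $(1-\alpha)\boldsymbol{\theta}_0 + \alpha\boldsymbol{\theta}_F$ when $\boldsymbol{\theta}_0$ is an initialization, pure noise, or zeros. The linear least-squares setup is an illustrative assumption, not the paper's networks.

```python
# Toy sketch: trace the error of a linear model along the MLI path
# (1 - alpha) * theta_0 + alpha * theta_F for several choices of theta_0.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=200)

theta_F = np.linalg.lstsq(X, y, rcond=None)[0]            # "converged" model
for name, theta_0 in [("init", rng.normal(size=10)),
                      ("noise", 3.0 * rng.normal(size=10)),
                      ("zeros", np.zeros(10))]:
    errs = []
    for alpha in np.linspace(0.0, 1.0, 11):
        theta = (1 - alpha) * theta_0 + alpha * theta_F    # the MLI formula
        errs.append(np.mean((X @ theta - y) ** 2))
    print(name, [round(e, 3) for e in errs])               # non-increasing for this convex toy problem
```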
NeurIPS
2023
79181
Continual Driving Policy Optimization with Closed-Loop Individualized Curricula
The safety of autonomous vehicles (AV) has been a long-standing top concern, stemming from the absence of rare and safety-critical scenarios in the long-tail naturalistic driving distribution. To tackle this challenge, a surge of research in scenario-based autonomous driving has emerged, with a focus on generating high-risk driving scenarios and applying them to conduct safety-critical testing of AV models. However, limited work has explored the reuse of these extensive scenarios to iteratively improve AV models. Moreover, it remains intractable and challenging to filter through gigantic scenario libraries collected from other AV models with distinct behaviors in an attempt to extract transferable information for current AV improvement. Therefore, we develop a continual driving policy optimization framework featuring Closed-Loop Individualized Curricula (CLIC), which we factorize into a set of standardized sub-modules for flexible implementation choices: AV Evaluation, Scenario Selection, and AV Training. CLIC frames AV Evaluation as a collision prediction task, where it estimates the chance of AV failures in these scenarios at each iteration. Subsequently, by re-sampling from historical scenarios based on these failure probabilities, CLIC tailors individualized curricula for downstream training, aligning them with the evaluated capability of the AV. Accordingly, CLIC not only maximizes the utilization of the vast pre-collected scenario library for closed-loop driving policy optimization but also facilitates AV improvement by individualizing its training with more challenging cases drawn from those poorly organized scenarios. Experimental results clearly indicate that CLIC surpasses other curriculum-based training strategies, showing substantial improvement in managing risky scenarios, while still maintaining proficiency in handling simpler cases.
[ "Autonomous Vehicles", "Reinforcement Learning", "Safety-Critical Systems", "Curriculum Learning", "Scenario-Based Testing" ]
https://neurips.cc/media…202023/79181.png
NeurIPS
2022
60672
Offline Policy Evaluation for Reinforcement Learning with Adaptively Collected Data
Offline RL is an important step towards making data-hungry RL algorithms more widely usable in the real world, but conventional assumptions on the distribution of logging data do not apply in some key real-world scenarios. In particular, it is unrealistic to assume that RL practitioners will have access to sets of trajectories that simultaneously are mutually independent and explore well. We propose two natural ways to relax these assumptions: by allowing the data to be distributed according to different logging policies independently, and by allowing logging policies to depend on past trajectories. We discuss Offline Policy Evaluation (OPE) in these settings, analyzing the performance of a model-based OPE estimator when the MDP is tabular.
[ "Reinforcement Learning", "Offline Learning", "Policy Evaluation", "Data-Driven Decision Making" ]
https://neurips.cc/media…202022/60672.png
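The tabular model-based OPE estimator mentioned in the abstract above can be sketched as follows; the uniform fallback for unvisited state-action pairs and the fixed number of evaluation sweeps are illustrative assumptions, not the paper's exact estimator.

```python
# Hedged sketch of tabular model-based OPE: fit empirical transition and reward tables
# from logged transitions, then run policy evaluation for the target policy on the
# estimated MDP.
import numpy as np

def model_based_ope(transitions, n_states, n_actions, target_policy, gamma=0.9, sweeps=200):
    """transitions: iterable of (s, a, r, s_next); target_policy: (S, A) row-stochastic array."""
    counts = np.zeros((n_states, n_actions, n_states))
    rew_sum = np.zeros((n_states, n_actions))
    for s, a, r, s_next in transitions:
        counts[s, a, s_next] += 1
        rew_sum[s, a] += r
    n_sa = counts.sum(axis=-1)
    P_hat = np.where(n_sa[..., None] > 0, counts / np.maximum(n_sa, 1)[..., None], 1.0 / n_states)
    R_hat = rew_sum / np.maximum(n_sa, 1)
    V = np.zeros(n_states)
    for _ in range(sweeps):                               # policy evaluation on the estimated MDP
        Q = R_hat + gamma * P_hat @ V
        V = (target_policy * Q).sum(axis=1)
    return V

logged = [(0, 0, 1.0, 1), (1, 1, 0.0, 0), (0, 1, 0.5, 0), (1, 0, 1.0, 1)]
uniform_pi = np.full((2, 2), 0.5)
print(model_based_ope(logged, n_states=2, n_actions=2, target_policy=uniform_pi))
```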
NeurIPS
2022
56865
Generating astronomical spectra from photometry with conditional diffusion models
A trade-off between speed and information controls our understanding of astronomical objects. Fast-to-acquire photometric observations provide global properties, while costly and time-consuming spectroscopic measurements enable a better understanding of the physics governing their evolution. Here, we tackle this problem by generating spectra directly from photometry, through which we obtain an estimate of an object's intricacies from easily acquired images. This is achieved by using multimodal conditional diffusion models, where the best out of the generated spectra is selected with a contrastive network. Initial experiments on minimally processed SDSS galaxy data show promising results.
[ "Astrophysics", "Computational Astronomy" ]
https://neurips.cc/media…202022/56865.png
ICML
2024
33632
On the Recoverability of Causal Relations from Temporally Aggregated I.I.D. Data
We consider the effect of temporal aggregation on instantaneous (non-temporal) causal discovery in a general setting. This is motivated by the observation that the true causal time lag is often considerably shorter than the observational interval. This discrepancy leads to high aggregation, causing time-delay causality to vanish and instantaneous dependence to manifest. Although we expect such instantaneous dependence to be consistent with the true causal relations in some sense, so that the discovery results remain meaningful, it is unclear what type of consistency we need and when such consistency will be satisfied. We formally propose functional consistency and conditional-independence consistency, corresponding to functional-causal-model-based methods and conditional-independence-based methods respectively, and provide the conditions under which these consistencies hold. We show theoretically and experimentally that causal discovery results may be seriously distorted by aggregation, especially in the completely nonlinear case, and we also find that causal relations are still recoverable from aggregated data given partial linearity or an appropriate prior. Our findings suggest that the community should take a cautious and meticulous approach when interpreting causal discovery results from such data, and they show why and when aggregation will distort the performance of causal discovery methods.
[ "Causal Inference", "Temporal Data Analysis", "Data Aggregation", "Statistical Methods" ]
https://icml.cc/media/Po…202024/33632.png
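A toy simulation of the effect described in the abstract above, assuming a lag-1 linear cause and an observation interval of 50 steps: after aggregation, the time-delayed effect appears almost entirely as instantaneous dependence.

```python
# Toy illustration (assumption-laden sketch, not the paper's method): when the true causal
# lag is much shorter than the sampling interval, temporal aggregation makes the delayed
# effect X -> Y show up as instantaneous dependence between aggregated values.
import numpy as np

rng = np.random.default_rng(0)
T, k = 100_000, 50                         # k: aggregation window, much larger than the lag of 1 step
x = rng.normal(size=T)
y = np.zeros(T)
y[1:] = 0.8 * x[:-1] + 0.6 * rng.normal(size=T - 1)        # Y depends on X one step earlier

X_agg = x.reshape(-1, k).mean(axis=1)      # temporally aggregated observations
Y_agg = y.reshape(-1, k).mean(axis=1)
print("lag-1 corr (aggregated):        ", round(np.corrcoef(X_agg[:-1], Y_agg[1:])[0, 1], 3))  # ~0
print("instantaneous corr (aggregated):", round(np.corrcoef(X_agg, Y_agg)[0, 1], 3))           # clearly > 0
```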
NeurIPS
2023
73424
T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation
Despite the stunning ability to generate high-quality images by recent text-to-image models, current approaches often struggle to effectively compose objects with different attributes and relationships into a complex and coherent scene. We propose T2I-CompBench, a comprehensive benchmark for open-world compositional text-to-image generation, consisting of 6,000 compositional text prompts from 3 categories (attribute binding, object relationships, and complex compositions) and 6 sub-categories (color binding, shape binding, texture binding, spatial relationships, non-spatial relationships, and complex compositions). We further propose several evaluation metrics specifically designed to evaluate compositional text-to-image generation and explore the potential and limitations of multimodal LLMs for evaluation. We introduce a new approach, Generative mOdel finetuning with Reward-driven Sample selection (GORS), to boost the compositional text-to-image generation abilities of pretrained text-to-image models. Extensive experiments and evaluations are conducted to benchmark previous methods on T2I-CompBench, and to validate the effectiveness of our proposed evaluation metrics and GORS approach. Project page is available at https://karine-h.github.io/T2I-CompBench/.
[ "Computer Vision", "Natural Language Processing", "Multimodal Learning", "Machine Learning Benchmarks", "Generative Models" ]
https://neurips.cc/media…202023/73424.png
NeurIPS
2023
71908
Tanimoto Random Features for Scalable Molecular Machine Learning
The Tanimoto coefficient is commonly used to measure the similarity between molecules represented as discrete fingerprints, either as a distance metric or a positive definite kernel. While many kernel methods can be accelerated using random feature approximations, at present there is a lack of such approximations for the Tanimoto kernel. In this paper we propose two kinds of novel random features to allow this kernel to scale to large datasets, and in the process discover a novel extension of the kernel to real-valued vectors. We theoretically characterize these random features, and provide error bounds on the spectral norm of the Gram matrix. Experimentally, we show that these random features are effective at approximating the Tanimoto coefficient of real-world datasets and are useful for molecular property prediction and optimization tasks. Future updates to this work will be available at http://arxiv.org/abs/2306.14809.
[ "Computational Chemistry", "Molecular Informatics", "Kernel Methods", "Random Feature Approximations" ]
https://neurips.cc/media…202023/71908.png
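For reference, a small sketch of the standard Tanimoto coefficient on binary fingerprints, i.e., the quantity the proposed random features are designed to approximate; this is the textbook definition, not the paper's random-feature construction.

```python
# Tanimoto similarity between binary molecular fingerprints:
# T(x, y) = <x, y> / (<x, x> + <y, y> - <x, y>).
import numpy as np

def tanimoto(x, y):
    """Tanimoto similarity between binary fingerprint vectors x and y."""
    inner = np.dot(x, y)
    return inner / (np.dot(x, x) + np.dot(y, y) - inner)

rng = np.random.default_rng(0)
fp_a = (rng.random(2048) < 0.05).astype(float)     # hypothetical 2048-bit fingerprints
fp_b = (rng.random(2048) < 0.05).astype(float)
print(tanimoto(fp_a, fp_a), tanimoto(fp_a, fp_b))  # 1.0 for identical inputs, small for random pairs
```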
ICLR
2024
18989
FairerCLIP: Debiasing CLIP's Zero-Shot Predictions using Functions in RKHSs
Large pre-trained vision-language models such as CLIP provide compact and general-purpose representations of text and images that are demonstrably effective across multiple downstream zero-shot prediction tasks. However, owing to the nature of their training process, these models have the potential to 1) propagate or amplify societal biases in the training data and 2) learn to rely on spurious features. This paper proposes FairerCLIP, a general approach for making zero-shot predictions of CLIP more fair and robust to spurious correlations. We formulate the problem of jointly debiasing CLIP’s image and text representations in reproducing kernel Hilbert spaces (RKHSs), which affords multiple benefits: 1) Flexibility: Unlike existing approaches, which are specialized to either learn with or without ground-truth labels, FairerCLIP is adaptable to learning in both scenarios. 2) Ease of Optimization: FairerCLIP lends itself to an iterative optimization involving closed-form solvers, which leads to 4×-10× faster training than the existing methods. 3) Sample Efficiency: Under sample-limited conditions, FairerCLIP significantly outperforms baselines when they fail entirely. And, 4) Performance: Empirically, FairerCLIP achieves appreciable accuracy gains on benchmark fairness and spurious correlation datasets over their respective baselines.
[ "Computer Vision", "Natural Language Processing", "Fairness in AI", "Representation Learning", "Zero-Shot Learning" ]
https://iclr.cc/media/Po…202024/18989.png
ICML
2023
24931
Transformers Meet Directed Graphs
Transformers were originally proposed as a sequence-to-sequence model for text but have become vital for a wide range of modalities, including images, audio, video, and undirected graphs. However, transformers for directed graphs are a surprisingly underexplored topic, despite their applicability to ubiquitous domains, including source code and logic circuits. In this work, we propose two direction- and structure-aware positional encodings for directed graphs: (1) the eigenvectors of the Magnetic Laplacian — a direction-aware generalization of the combinatorial Laplacian; (2) directional random walk encodings. Empirically, we show that the extra directionality information is useful in various downstream tasks, including correctness testing of sorting networks and source code understanding. Together with a data-flow-centric graph construction, our model outperforms the prior state of the art on the Open Graph Benchmark Code2 relatively by 14.7%.
[ "Graph Neural Networks", "Natural Language Processing", "Deep Learning", "Data Science" ]
https://icml.cc/media/Po…202023/24931.png
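A hedged sketch of the first positional encoding mentioned in the abstract above: one common construction of the Magnetic Laplacian of a directed graph and its eigenvectors. The potential q and the dense eigendecomposition are illustrative simplifications rather than the paper's pipeline.

```python
# One common construction of the Magnetic Laplacian; its eigenvectors serve as
# direction-aware positional encodings for directed graphs.
import numpy as np

def magnetic_laplacian_eigvecs(A, q=0.25, k=4):
    """A: dense directed adjacency matrix (n, n); returns the k smallest eigenpairs."""
    A_sym = 0.5 * (A + A.T)                              # symmetrized edge weights
    D = np.diag(A_sym.sum(axis=1))                       # degree matrix of the symmetrized graph
    theta = 2.0 * np.pi * q * (A - A.T)                  # phase term encoding edge direction
    L = D - A_sym * np.exp(1j * theta)                   # complex Hermitian, positive semi-definite
    eigvals, eigvecs = np.linalg.eigh(L)                 # eigh returns ascending real eigenvalues
    return eigvals[:k], eigvecs[:, :k]

A = np.zeros((4, 4))                                     # toy directed 4-cycle: 0 -> 1 -> 2 -> 3 -> 0
A[0, 1] = A[1, 2] = A[2, 3] = A[3, 0] = 1.0
vals, vecs = magnetic_laplacian_eigvecs(A, q=0.25, k=4)
print(np.round(vals, 3))                                 # real spectrum despite the complex entries
```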
ICLR
2022
6100
Adversarial Support Alignment
We study the problem of aligning the supports of distributions. Compared to the existing work on distribution alignment, support alignment does not require the densities to be matched. We propose symmetric support difference as a divergence measure to quantify the mismatch between supports. We show that select discriminators (e.g. discriminator trained for Jensen-Shannon divergence) are able to map support differences as support differences in their one-dimensional output space. Following this result, our method aligns supports by minimizing a symmetrized relaxed optimal transport cost in the discriminator 1D space via an adversarial process. Furthermore, we show that our approach can be viewed as a limit of existing notions of alignment by increasing transportation assignment tolerance. We quantitatively evaluate the method across domain adaptation tasks with shifts in label distributions. Our experiments show that the proposed method is more robust against these shifts than other alignment-based baselines.
[ "Domain Adaptation", "Adversarial Learning", "Optimal Transport", "Distribution Alignment" ]
https://iclr.cc/media/Po…439a28a2b9a3.png
NeurIPS
2023
71996
Demystifying Oversmoothing in Attention-Based Graph Neural Networks
Oversmoothing in Graph Neural Networks (GNNs) refers to the phenomenon where increasing network depth leads to homogeneous node representations. While previous work has established that Graph Convolutional Networks (GCNs) exponentially lose expressive power, it remains controversial whether the graph attention mechanism can mitigate oversmoothing. In this work, we provide a definitive answer to this question through a rigorous mathematical analysis, by viewing attention-based GNNs as nonlinear time-varying dynamical systems and incorporating tools and techniques from the theory of products of inhomogeneous matrices and the joint spectral radius. We establish that, contrary to popular belief, the graph attention mechanism cannot prevent oversmoothing and loses expressive power exponentially. The proposed framework extends the existing results on oversmoothing for symmetric GCNs to a significantly broader class of GNN models, including random walk GCNs, Graph Attention Networks (GATs) and (graph) transformers. In particular, our analysis accounts for asymmetric, state-dependent and time-varying aggregation operators and a wide range of common nonlinear activation functions, such as ReLU, LeakyReLU, GELU and SiLU.
[ "Graph Neural Networks ", "Deep Learning", "Network Theory", "Mathematical Analysis in Machine Learning", "Dynamical Systems", "Spectral Graph Theory" ]
https://neurips.cc/media…202023/71996.png
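A toy numeric illustration of the oversmoothing phenomenon analyzed in the paper above: repeatedly aggregating node features with a row-stochastic, attention-like operator drives all node representations toward a common value. The fully connected graph and plain dot-product scores are illustrative assumptions.

```python
# Repeated attention-style aggregation shrinks the spread of node features toward zero.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
X = rng.normal(size=(n, d))                      # initial node features
A = np.ones((n, n))                              # fully connected toy graph

for layer in range(30):
    scores = X @ X.T                             # dot-product attention scores
    scores = np.where(A > 0, scores, -np.inf)    # mask non-edges (none here)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)      # row-stochastic aggregation operator
    X = attn @ X
    if layer % 10 == 9:
        spread = np.linalg.norm(X - X.mean(axis=0), axis=1).max()
        print(f"layer {layer + 1}: max distance to mean = {spread:.2e}")   # shrinks toward 0
```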
ICLR
2022
6913
Learning Altruistic Behaviours in Reinforcement Learning without External Rewards
Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Such an approach assumes that other agents' goals are known so that the altruistic agent can cooperate in achieving those goals. However, explicit knowledge of other agents' goals is often difficult to acquire. In the case of human agents, their goals and preferences may be difficult to express fully; they might be ambiguous or even contradictory. Thus, it is beneficial to develop agents that do not depend on external supervision and learn altruistic behaviour in a task-agnostic manner. We propose to act altruistically towards other agents by giving them more choice and allowing them to achieve their goals better. Some concrete examples include opening a door for others or safeguarding them to pursue their objectives without interference. We formalize this concept and propose an altruistic agent that learns to increase the choices another agent has by preferring to maximize the number of states that the other agent can reach in its future. We evaluate our approach in three different multi-agent environments where another agent's success depends on altruistic behaviour. Finally, we show that our unsupervised agents can perform comparably to agents explicitly trained to work cooperatively, in some cases even outperforming them.
[ "Reinforcement Learning", "Multi-Agent Systems", "Artificial Intelligence Ethics", "Autonomous Agents" ]
https://iclr.cc/media/Po…e7d41c4a2065.png
ICLR
2024
18672
Efficient Heterogeneous Meta-Learning via Channel Shuffling Modulation
We tackle the problem of meta-learning across heterogeneous tasks. This problem seeks to extract and generalize transferable meta-knowledge through streaming task sets from a multi-modal task distribution. The extracted meta-knowledge can be used to create predictors for new tasks using a small number of labeled samples. Most meta-learning methods assume a homogeneous task distribution, thus limiting their generalization capacity when handling multi-modal task distributions. Recent work has shown that the generalization of meta-learning depends on the similarity of tasks in the training distribution, and this has led to many clustering approaches that aim to detect homogeneous clusters of tasks. However, these methods suffer from a significant increase in parameter complexity. To overcome this weakness, we propose a new heterogeneous meta-learning strategy that efficiently captures the multi-modality of the task distribution via modulating the routing between convolution channels in the network, instead of directly modulating the network weights. This new mechanism can be cast as a permutation learning problem. We further introduce a novel neural permutation layer based on the classical Benes routing network, which has sub-quadratic parameter complexity in the total number of channels, as compared to the quadratic complexity of the state-of-the-art Gumbel-Sinkhorn layer. We demonstrate our approach on various multi-modal meta-learning benchmarks, showing that our framework outperforms previous methods in both generalization accuracy and convergence speed.
[ "Meta-Learning", "Neural Networks", "Multi-Task Learning", "Transfer Learning" ]
https://iclr.cc/media/Po…202024/18672.png
ICML
2024
33368
Rapid Learning without Catastrophic Forgetting in the Morris Water Maze
Animals can swiftly adapt to novel tasks, while maintaining proficiency on previously trained tasks. This contrasts starkly with machine learning models, which struggle with these capabilities. We first propose a new task, the sequential Morris Water Maze (sWM), which extends a widely used task in the psychology and neuroscience fields and requires both rapid and continual learning. It has frequently been hypothesized that inductive biases from brains could help build better ML systems, but the addition of constraints typically hurts rather than helps ML performance. We draw inspiration from biology to show that combining 1) a content-addressable heteroassociative memory based on the entorhinal-hippocampal circuit, with grid cells that retain shared across-environment structural representations and hippocampal cells that acquire environment-specific information; 2) a spatially invariant convolutional network architecture for rapid adaptation across unfamiliar environments; and 3) the ability to perform remapping, which orthogonalizes internal representations; leads to good generalization, rapid learning, and continual learning without forgetting, respectively. Our model outperforms ANN baselines from continual learning contexts applied to the task. It retains knowledge of past environments while rapidly acquiring the skills to navigate new ones, thereby addressing the seemingly opposing challenges of quick knowledge transfer and sustaining proficiency in previously learned tasks. These biologically motivated results may point the way toward ML algorithms with similar properties.
[ "Neuroscience", "Cognitive Science", "Continual Learning", "Computational Neuroscience" ]
https://icml.cc/media/Po…202024/33368.png
NeurIPS
2023
76253
Operator SVD with Neural Networks via Nested Low-Rank Approximation
This paper proposes an optimization-based method to learn the singular value decomposition (SVD) of a compact operator with ordered singular functions. The proposed objective function is based on Schmidt's low-rank approximation theorem (1907), which characterizes a truncated SVD as a solution minimizing the mean squared error, accompanied by a technique called \emph{nesting} to learn the ordered structure. When the optimization space is parameterized by neural networks, we refer to the proposed method as \emph{NeuralSVD}. Unlike existing approaches, the implementation does not require sophisticated optimization tricks.
[ "Numerical Linear Algebra", "Optimization", "Neural Networks" ]
https://neurips.cc/media…202023/76253.png
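A small numeric check of the classical fact the method above builds on (Schmidt's theorem): the rank-k truncation of the SVD minimizes the mean squared reconstruction error among rank-k approximations. The neural parameterization and the nesting technique themselves are not reproduced here.

```python
# The truncated SVD is the best rank-k approximation in mean squared (Frobenius) error.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 30))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 5
A_svd = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]              # optimal rank-k approximation
B = rng.normal(size=(50, k)) @ rng.normal(size=(k, 30))    # an arbitrary rank-k matrix
print(np.mean((A - A_svd) ** 2), np.mean((A - B) ** 2))    # the SVD error is the smaller one
```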
NeurIPS
2022
60715
Benchmarking Offline Reinforcement Learning Algorithms for E-Commerce Order Fraud Evaluation
Amazon and other e-commerce sites must employ mechanisms to protect their millions of customers from fraud, such as unauthorized use of credit cards. One such mechanism is order fraud evaluation, where systems evaluate orders for fraud risk, and either “pass” the order, or take an action to mitigate high risk. Order fraud evaluation systems typically use binary classification models that distinguish fraudulent and legitimate orders, to assess risk and take action. We seek to devise a system that considers both financial losses of fraud and long-term customer satisfaction, which may be impaired when incorrect actions are applied to legitimate customers. We propose that taking actions to optimize long-term impact can be formulated as a Reinforcement Learning (RL) problem. Standard RL methods require online interaction with an environment to learn, but this is not desirable in high-stakes applications like order fraud evaluation. Offline RL algorithms learn from logged data collected from the environment, without the need for online interaction, making them suitable for our use case. We show that offline RL methods outperform traditional binary classification solutions in SimStore, a simplified e-commerce simulation that incorporates order fraud risk. We also propose a novel approach to training offline RL policies that adds a new loss term during training, to better align policy exploration with taking correct actions.
[ "Reinforcement Learning", "E-Commerce", "Fraud Detection", "Data Science" ]
https://neurips.cc/media…202022/60715.png
ICML
2024
33328
MILP-FBGen: LP/MILP Instance Generation with Feasibility/Boundedness
Machine learning (ML) has been actively adopted in Linear Programming (LP) and Mixed-Integer Linear Programming (MILP), whose potential is hindered by instance scarcity. Current synthetic instance generation methods often fall short in closely mirroring the distribution of original datasets or ensuring the feasibility and boundedness of the generated data, a critical requirement for obtaining reliable supervised labels in model training. In this paper, we present a diffusion-based LP/MILP instance generative framework called MILP-FBGen. It strikes a balance between structural similarity and novelty while maintaining feasibility/boundedness via a meticulously designed structure-preserving generation module and a feasibility/boundedness-constrained sampling module. Our method shows superiority on two fronts: 1) preservation of key properties (hardness, feasibility, and boundedness) of LP/MILP instances, and 2) enhanced performance on downstream tasks. Extensive studies confirm this two-fold superiority, showing that our method ensures higher distributional similarity and 100% feasibility on both easy and hard datasets, surpassing current state-of-the-art techniques.
[ "Operations Research", "Optimization", "Machine Learning in Optimization", "Computational Mathematics", "Artificial Intelligence in Operations Research" ]
https://icml.cc/media/Po…202024/33328.png
NeurIPS
2023
72380
DreamSim: Learning New Dimensions of Human Visual Similarity using Synthetic Data
Current perceptual similarity metrics operate at the level of pixels and patches. These metrics compare images in terms of their low-level colors and textures, but fail to capture mid-level similarities and differences in image layout, object pose, and semantic content. In this paper, we develop a perceptual metric that assesses images holistically. Our first step is to collect a new dataset of human similarity judgments over image pairs that are alike in diverse ways. Critical to this dataset is that judgments are nearly automatic and shared by all observers. To achieve this we use recent text-to-image models to create synthetic pairs that are perturbed along various dimensions. We observe that popular perceptual metrics fall short of explaining our new data, and we introduce a new metric, DreamSim, tuned to better align with human perception. We analyze how our metric is affected by different visual attributes, and find that it focuses heavily on foreground objects and semantic content while also being sensitive to color and layout. Notably, despite being trained on synthetic data, our metric generalizes to real images, giving strong results on retrieval and reconstruction tasks. Furthermore, our metric outperforms both prior learned metrics and recent large vision models on these tasks. Our project page: https://dreamsim-nights.github.io/
[ "Computer Vision", "Perceptual Metrics", "Image Processing", "Synthetic Data", "Human Visual Perception" ]
https://neurips.cc/media…202023/72380.png
ICML
2023
27771
SepVAE: a contrastive VAE to separate pathological patterns from healthy ones.
Contrastive Analysis VAEs (CA-VAEs) are a family of Variational auto-encoders (VAEs) that aim at separating the common factors of variation between a background dataset (BG) (i.e., healthy subjects) and a target dataset (TG) (i.e., patients) from the ones that only exist in the target dataset. To do so, these methods separate the latent space into a set of salient features (i.e., proper to the target dataset) and a set of common features (i.e., existing in both datasets). Currently, all models fail to effectively prevent the sharing of information between latent spaces and to capture all salient factors of variation. To this end, we introduce two crucial regularization losses: a disentangling term between common and salient representations and a classification term between background and target samples in the salient space. We show better performance than previous CA-VAE methods on three medical applications and a natural images dataset (CelebA). Code and datasets are available at https://anonymous.4open.science/r/sep_vae-0D94/.
[ "Variational Autoencoders ", "Contrastive Learning", "Medical Imaging", "Computational Pathology", "Representation Learning" ]
https://icml.cc/media/Po…202023/27771.png
NeurIPS
2023
72682
DP-Mix: Mixup-based Data Augmentation for Differentially Private Learning
Data augmentation techniques, such as image transformations and combinations, are highly effective at improving the generalization of computer vision models, especially when training data is limited. However, such techniques are fundamentally incompatible with differentially private learning approaches, due to the latter’s built-in assumption that each training image’s contribution to the learned model is bounded. In this paper, we investigate why naive applications of multi-sample data augmentation techniques, such as mixup, fail to achieve good performance and propose two novel data augmentation techniques specifically designed for the constraints of differentially private learning. Our first technique, DP-MixSelf, achieves SoTA classification performance across a range of datasets and settings by performing mixup on self-augmented data. Our second technique, DP-MixDiff, further improves performance by incorporating synthetic data from a pre-trained diffusion model into the mixup process. We open-source the code at https://github.com/wenxuan-Bao/DP-Mix.
[ "Differential Privacy", "Data Augmentation", "Computer Vision", "Privacy-Preserving Machine Learning" ]
https://neurips.cc/media…202023/72682.png
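A hedged sketch of the mixup operation underlying the approach above, applied to self-augmented views of a single example; the actual DP-MixSelf/DP-MixDiff procedures and the per-sample clipping and noise steps of differentially private training are not shown.

```python
# Standard mixup: convex combinations of two samples and their labels. Mixing
# self-augmented views of the same image keeps the label intact.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """lam ~ Beta(alpha, alpha); returns the blended sample and label."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))                            # one training image
views = [np.clip(img + 0.05 * rng.normal(size=img.shape), 0, 1) for _ in range(2)]  # self-augmented views
label = np.array([0.0, 1.0])                             # one-hot label shared by both views
x_mix, y_mix = mixup(views[0], label, views[1], label, rng=rng)
print(x_mix.shape, y_mix)                                # label unchanged when mixing views of one image
```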
NeurIPS
2022
64443
Learning Topological Representation of Sensor Network with Persistent Homology in HCI Systems
Hand gesture and movement analysis is a crucial learning task in human-computer interaction (HCI) applications. Sensor-based HCI systems capture information simultaneously at multiple locations to track the coordination of different muscle regions. Based on the fact that there exists a temporal correlation between the regions, connectivity analysis of the sensor signals builds a network. The graph-based approach for analyzing the sensor network has provided novel insight into learning in HCI, but it has not been broadly investigated in hand gesture recognition tasks. This work proposes a topological representation learning scheme as a graph-based approach for sensor network analysis. By investigating the topological properties with persistent homology, the spatial-temporal characteristics are well described to build recognition models. Experiments on the NinaPro DB-2, DB-4, DB-5, and DB-7 datasets, with sensor networks built from sEMG and IMU signals, demonstrate the exceptional performance of the proposed topological approach. The topological features are effective in graph representation learning with sensor networks used in hand gesture recognition tasks. The proposed work provides a novel learning scheme for HCI systems and human-in-the-loop studies.
[ "Human-Computer Interaction ", "Sensor Networks", "Topological Data Analysis", "Graph Representation Learning", "Hand Gesture Recognition", "Persistent Homology" ]
https://neurips.cc/media…202022/64443.png
ICLR
2023
12038
Fair Attribute Completion on Graph with Missing Attributes
Tackling unfairness in graph learning models is a challenging task, as the unfairness issues on graphs involve both attributes and topological structures. Existing work on fair graph learning simply assumes that attributes of all nodes are available for model training and then makes fair predictions. In practice, however, the attributes of some nodes might not be accessible due to missing data or privacy concerns, which makes fair graph learning even more challenging. In this paper, we propose FairAC, a fair attribute completion method, to complement missing information and learn fair node embeddings for graphs with missing attributes. FairAC adopts an attention mechanism to deal with the attribute missing problem and, meanwhile, it mitigates two types of unfairness, i.e., feature unfairness from attributes and topological unfairness due to attribute completion. FairAC can work on various types of homogeneous graphs and generate fair embeddings for them, and thus can be applied to most downstream tasks to improve their fairness performance. To the best of our knowledge, FairAC is the first method that jointly addresses the graph attribute completion and graph unfairness problems. Experimental results on benchmark datasets show that our method achieves better fairness performance with less sacrifice in accuracy, compared with the state-of-the-art methods of fair graph learning. Code is available at: https://github.com/donglgcn/FairAC.
[ "Graph Learning", "Fairness in AI", "Data Imputation", "Network Analysis", "Ethical AI" ]
https://iclr.cc/media/Po…202023/12038.png
NeurIPS
2023
72970
ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns
Many approaches to draping individual garments on human body models are realistic, fast, and yield outputs that are differentiable with respect to the body shape on which they are draped. However, they are either unable to handle multi-layered clothing, which is prevalent in everyday dress, or restricted to bodies in T-pose. In this paper, we introduce a parametric garment representation model that addresses these limitations. As in models used by clothing designers, each garment consists of individual 2D panels. Their 2D shape is defined by a Signed Distance Function and 3D shape by a 2D to 3D mapping. The 2D parameterization enables easy detection of potential collisions and the 3D parameterization handles complex shapes effectively. We show that this combination is faster and yields higher quality reconstructions than purely implicit surface representations, and makes the recovery of layered garments from images possible thanks to its differentiability. Furthermore, it supports rapid editing of garment shapes and texture by modifying individual 2D panels.
[ "Computer Graphics", "Computational Fabrication", "Computer Vision", "Digital Fashion Design", "3D Modeling and Simulation" ]
https://neurips.cc/media…202023/72970.png
ICLR
2024
20662
Be More Active! Understanding the Differences Between Mean and Sampled Representations of Variational Autoencoders
The ability of Variational Autoencoders to learn disentangled representations has made them appealing for practical applications. However, their mean representations, which are generally used for downstream tasks, have recently been shown to be more correlated than their sampled counterpart, on which disentanglement is usually measured. In this paper, we refine this observation through the lens of selective posterior collapse, which states that only a subset of the learned representations, the active variables, is encoding useful information while the rest (the passive variables) is discarded. We first extend the existing definition to multiple data examples and show that active variables are equally disentangled in mean and sampled representations. Based on this extension and the pre-trained models from disentanglement_lib, we then isolate the passive variables and show that they are responsible for the discrepancies between mean and sampled representations. Specifically, passive variables exhibit high correlation scores with other variables in mean representations while being fully uncorrelated in sampled ones. We thus conclude that despite what their higher correlation might suggest, mean representations are still good candidates for downstream task applications. However, it may be beneficial to remove their passive variables, especially when used with models sensitive to correlated features.
[ "Variational Autoencoders", "Representation Learning", "Disentangled Representations", "Deep Learning" ]
https://iclr.cc/media/Po…202024/20662.png
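One common way to separate the active and passive variables discussed above is the per-dimension KL divergence between the approximate posterior and the standard normal prior; the synthetic encoder outputs and the threshold below are illustrative assumptions, not the paper's procedure.

```python
# Passive dimensions have near-zero KL because their posterior collapses to the prior,
# while active dimensions carry informative means and small posterior variances.
import numpy as np

def per_dim_kl(mu, log_var):
    """mu, log_var: (batch, latent_dim) encoder outputs; returns the mean KL per dimension."""
    kl = 0.5 * (np.exp(log_var) + mu ** 2 - 1.0 - log_var)   # KL(N(mu, sigma^2) || N(0, 1))
    return kl.mean(axis=0)

rng = np.random.default_rng(0)
mu = np.concatenate([rng.normal(size=(512, 4)),               # 4 active dims: informative means
                     np.zeros((512, 4))], axis=1)             # 4 passive dims: mu ~ 0
log_var = np.concatenate([np.full((512, 4), -2.0),            # active: small posterior variance
                          np.zeros((512, 4))], axis=1)        # passive: variance ~ 1 (the prior)
kl = per_dim_kl(mu, log_var)
active = kl > 0.01                                            # hypothetical threshold
print(np.round(kl, 3), active)
```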
NeurIPS
2022
55645
mRI: Multi-modal 3D Human Pose Estimation Dataset using mmWave, RGB-D, and Inertial Sensors
The ability to estimate 3D human body pose and movement, also known as human pose estimation (HPE), enables many applications for home-based health monitoring, such as remote rehabilitation training. Several possible solutions have emerged using sensors ranging from RGB cameras, depth sensors, millimeter-Wave (mmWave) radars, and wearable inertial sensors. Despite previous efforts on datasets and benchmarks for HPE, few datasets exploit multiple modalities and focus on home-based health monitoring. To bridge the gap, we present mRI, a multi-modal 3D human pose estimation dataset with mmWave, RGB-D, and Inertial Sensors. Our dataset consists of over 160k synchronized frames from 20 subjects performing rehabilitation exercises and supports the benchmarks of HPE and action detection. We perform extensive experiments using our dataset and delineate the strength of each modality. We hope that the release of mRI can catalyze the research in pose estimation, multi-modal learning, and action understanding, and more importantly facilitate the applications of home-based health monitoring.
[ "Human Pose Estimation", "Multi-modal Learning", "Action Recognition", "Health Monitoring Technology", "Sensor Fusion", "Computer Vision", "Machine Learning for Healthcare" ]
https://neurips.cc/media…202022/55645.png
ICLR
2023
14174
SimuStruct: Simulated Structural Plate with Holes Dataset with Machine Learning Applications
This paper introduces SimuStruct, the Simulated Structural Parts Dataset: a dataset that contains 2D structural parts, their respective meshes, and the outputs of numerical simulations for different linear elastic material properties, boundary and loading conditions, and varying levels of mesh refinement. SimuStruct covers the classic case of plates with holes, a simple 2D case with an analytical solution that appears in many mechanical design applications. The dataset comprises many different cases, each solved using standard Finite Element Methods (FEMs) with the open-source package FEniCS. Compared to other datasets similar in purpose, SimuStruct is more diversified and realistic because it comprises diverse realistic cases with different loading and boundary conditions, different linear elastic material properties, and different levels of refinement. In addition, SimuStruct is more flexible, versatile, and scalable because all algorithms and code are implemented using open-source libraries. The main goal of the SimuStruct dataset is to serve both as training and evaluation data for Machine Learning (ML)-based methods in structural analysis and optimal mesh generation, and therefore to support the development of ML-based optimal mechanical design solutions. An application of SimuStruct is presented in which an ANN model is trained and tested to predict stress-strain fields. SimuStruct will help connect the Mechanical Engineering and ML communities and thus accelerate research in the computational design field.
[ "Mechanical Engineering", "Structural Analysis", "Finite Element Methods ", "Machine Learning Applications", "Computational Design", "Numerical Simulation", "Data Science in Engineering" ]
https://iclr.cc/media/Po…202023/14174.png
ICML
2023
24935
Stabilizing Transformer Training by Preventing Attention Entropy Collapse
Training stability is of great importance to Transformers. In this work, we investigate the training dynamics of Transformers by examining the evolution of the attention layers. In particular, we track the attention entropy for each attention head during the course of training, which is a proxy for model sharpness. We identify a common pattern across different architectures and tasks, where low attention entropy is accompanied by high training instability, which can take the form of oscillating loss or divergence. We denote the pathologically low attention entropy, corresponding to highly concentrated attention scores, as $\textit{entropy collapse}$. As a remedy, we propose $\sigma$Reparam, a simple and efficient solution where we reparametrize all linear layers with spectral normalization and an additional learned scalar. We demonstrate that $\sigma$Reparam successfully prevents entropy collapse in the attention layers, promoting more stable training. Additionally, we prove a tight lower bound of the attention entropy, which decreases exponentially fast with the spectral norm of the attention logits, providing additional motivation for our approach. We conduct experiments with $\sigma$Reparam on image classification, image self-supervised learning, machine translation, speech recognition, and language modeling tasks. We show that $\sigma$Reparam provides stability and robustness with respect to the choice of hyperparameters, going so far as enabling training (a) a Vision Transformer to competitive performance without warmup, weight decay, layer normalization or adaptive optimizers; (b) deep architectures in machine translation and (c) speech recognition to competitive performance without warmup and adaptive optimizers. Code is available at https://github.com/apple/ml-sigma-reparam.
[ "Deep Learning", "Natural Language Processing", "Computer Vision", "Speech Recognition", "Neural Network Optimization" ]
https://icml.cc/media/Po…202023/24935.png
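A minimal, hedged sketch of the $\sigma$Reparam idea described above: each linear layer's weight is divided by its spectral norm and multiplied by a single learned scalar. The exact spectral norm is computed here for clarity (the paper uses power iteration); the released implementation at the linked repository should be treated as authoritative.

```python
# sigma-Reparam-style linear layer: w_hat = gamma / sigma(W) * W with a learned scalar gamma.
import torch
import torch.nn as nn

class SigmaReparamLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.gamma = nn.Parameter(torch.ones(()))                      # learned scalar

    def forward(self, x):
        spectral_norm = torch.linalg.matrix_norm(self.weight, ord=2)   # largest singular value
        w_hat = self.gamma / spectral_norm * self.weight
        return x @ w_hat.T + self.bias

layer = SigmaReparamLinear(16, 8)
print(layer(torch.randn(4, 16)).shape)   # torch.Size([4, 8])
```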
NeurIPS
2022
57528
Physics-Constrained Deep Learning for Climate Downscaling
The availability of reliable, high-resolution climate and weather data is important to inform long-term decisions on climate adaptation and mitigation and to guide rapid responses to extreme events. Forecasting models are limited by computational costs and therefore often predict quantities at a coarse spatial resolution. Statistical downscaling can provide an efficient method of upsampling low-resolution data. In this field, deep learning has been applied successfully, often using methods from the super-resolution domain in computer vision. Despite often achieving visually compelling results, such models often violate conservation laws when predicting physical variables. In order to conserve important physical quantities, we develop methods that guarantee physical constraints are satisfied by a deep downscaling model while also improving its performance according to traditional metrics. We introduce two ways of constraining the network: a renormalization layer added to the end of the neural network and a successive approach that scales with increasing upsampling factors. We show the applicability of our methods across different popular architectures and upsampling factors using ERA5 reanalysis data.
[ "Climate Science", "Computational Physics", "Environmental Science", "Meteorology", "Data Science" ]
https://neurips.cc/media…202022/57528.png
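One simple way to impose the kind of constraint described above is a correction layer that forces each predicted high-resolution patch to match the corresponding low-resolution cell on average, which conserves the downscaled quantity; the additive form below is an illustrative choice, not necessarily the paper's exact layer.

```python
# Additive constraint layer: shift each high-resolution patch so that its mean equals
# the corresponding low-resolution cell.
import numpy as np

def enforce_conservation(y_hr, x_lr, factor):
    """y_hr: (H*factor, W*factor) prediction, x_lr: (H, W) input; returns the corrected prediction."""
    H, W = x_lr.shape
    patches = y_hr.reshape(H, factor, W, factor)
    patch_means = patches.mean(axis=(1, 3))                   # current mean of each patch
    correction = (x_lr - patch_means)[:, None, :, None]       # shift each patch by its residual
    return (patches + correction).reshape(H * factor, W * factor)

rng = np.random.default_rng(0)
x_lr = rng.random((4, 4))
y_hr = rng.random((16, 16))                                   # stand-in for a network output
y_fixed = enforce_conservation(y_hr, x_lr, factor=4)
print(np.allclose(y_fixed.reshape(4, 4, 4, 4).mean(axis=(1, 3)), x_lr))   # True: constraint holds
```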
ICML
2024
33518
Position: A Call for Embodied AI
We propose Embodied AI (E-AI) as the next fundamental step in the pursuit of Artificial General Intelligence (AGI), juxtaposing it against current AI advancements, particularly Large Language Models (LLMs). We traverse the evolution of the embodiment concept across diverse fields (philosophy, psychology, neuroscience, and robotics) to highlight how E-AI distinguishes itself from the classical paradigm of static learning. By broadening the scope of E-AI, we introduce a theoretical framework based on cognitive architectures, emphasizing perception, action, memory, and learning as essential components of an embodied agent. This framework is aligned with Friston’s active inference principle, offering a comprehensive approach to E-AI development. Despite the progress made in the field of AI, substantial challenges, such as the formulation of a novel AI learning theory and the innovation of advanced hardware, persist. Our discussion lays down a foundational guideline for future E-AI research. Highlighting the importance of creating E-AI agents capable of seamless communication, collaboration, and coexistence with humans and other intelligent entities within real-world environments, we aim to steer the AI community towards addressing the multifaceted challenges and seizing the opportunities that lie ahead in the quest for AGI.
[ "Embodied AI", "Cognitive Science", "Robotics", "Neuroscience", "Philosophy of Mind", "Cognitive Architectures" ]
https://icml.cc/media/Po…202024/33518.png
ICLR
2024
21354
MultiSTOP: Solving Functional Equations with Reinforcement Learning
We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics. This new methodology is also able to find actual numerical solutions instead of bounds. We extend the original BootSTOP algorithm by adding multiple constraints derived from domain-specific knowledge, even in integral form, to improve the accuracy of the solution. We investigate a particular equation in a one-dimensional Conformal Field Theory.
[ "Reinforcement Learning", "Functional Equations", "Physics", "Numerical Methods", "Conformal Field Theory" ]
https://iclr.cc/media/Po…202024/21354.png
NeurIPS
2022
61167
On RKHS Choices for Assessing Graph Generators via Kernel Stein Statistics
Score-based kernelised Stein discrepancy (KSD) tests have emerged as a powerful tool for goodness-of-fit testing, especially in high dimensions; however, the test performance may depend on the choice of kernel in the underlying reproducing kernel Hilbert space (RKHS). Here we assess the effect of the RKHS choice for KSD tests of random network models, developed for exponential random graph models (ERGMs) in Xu and Reinert (2021) and for synthetic graph generators in Xu and Reinert (2022). We investigate the power performance and the computational runtime of the test in different scenarios, including both dense and sparse graph regimes. Experimental results on kernel performance for model assessment tasks are shown and discussed on synthetic and real-world network applications.
[ "Statistical Methods", "Graph Theory", "Network Analysis" ]
https://neurips.cc/media…202022/61167.png
NeurIPS
2022
58380
tinyMAN: Lightweight Energy Manager using Reinforcement Learning for Energy Harvesting Wearable IoT Devices
Advances in low-power electronics and machine learning techniques have led to many novel wearable IoT devices. These devices have limited battery capacity and computational power. Thus, energy harvesting from ambient sources is a promising solution to power these low-energy wearable devices. They need to manage the harvested energy optimally to achieve energy-neutral operation, which eliminates recharging requirements. Optimal energy management is a challenging task due to the dynamic nature of the harvested energy and the battery energy constraints of the target device. To address this challenge, we present a reinforcement learning based energy management framework, tinyMAN, for resource-constrained wearable IoT devices. The framework maximizes the utilization of the target device under dynamic energy harvesting patterns and battery constraints. Moreover, tinyMAN does not rely on forecasts of the harvested energy, which makes it a prediction-free approach. We deployed tinyMAN on a wearable device prototype using TensorFlow Lite for Micro thanks to its small memory footprint of less than 100 KB. Our evaluations show that tinyMAN achieves less than 2.36 ms and 27.75 uJ while maintaining up to 45% higher utility compared to prior approaches.
[ "Internet of Things ", "Wearable Technology", "Energy Harvesting", "Reinforcement Learning", "Low-Power Electronics", "Energy Management Systems" ]
https://neurips.cc/media…202022/58380.png
NeurIPS
2022
53409
Not too little, not too much: a theoretical analysis of graph (over)smoothing
We analyze graph smoothing with mean aggregation, where each node successively receives the average of the features of its neighbors. Indeed, it has quickly been observed that Graph Neural Networks (GNNs), which generally follow some variant of Message-Passing (MP) with repeated aggregation, may be subject to the oversmoothing phenomenon: by performing too many rounds of MP, the node features tend to converge to a non-informative limit. In the case of mean aggregation, for connected graphs, the node features become constant across the whole graph. At the other end of the spectrum, it is intuitively obvious that some MP rounds are necessary, but existing analyses do not exhibit both phenomena at once: beneficial "finite" smoothing and oversmoothing in the limit. In this paper, we consider simplified linear GNNs, and rigorously analyze two examples for which a finite number of mean aggregation steps provably improves the learning performance, before oversmoothing kicks in. We consider a latent space random graph model, where node features are partial observations of the latent variables and the graph contains pairwise relationships between them. We show that graph smoothing restores some of the lost information, up to a certain point, by two phenomena: graph smoothing shrinks non-principal directions in the data faster than principal ones, which is useful for regression, and shrinks nodes within communities faster than they collapse together, which improves classification.
[ "Graph Neural Networks ", "Machine Learning Theory", "Graph Theory", "Network Analysis", "Data Smoothing Techniques" ]
https://neurips.cc/media…202022/53409.png
ICML
2022
18407
The Primacy Bias in Deep Reinforcement Learning
This work identifies a common flaw of deep reinforcement learning (RL) algorithms: a tendency to rely on early interactions and ignore useful evidence encountered later. Because of training on progressively growing datasets, deep RL agents incur a risk of overfitting to earlier experiences, negatively affecting the rest of the learning process. Inspired by cognitive science, we refer to this effect as the primacy bias. Through a series of experiments, we dissect the algorithmic aspects of deep RL that exacerbate this bias. We then propose a simple yet generally-applicable mechanism that tackles the primacy bias by periodically resetting a part of the agent. We apply this mechanism to algorithms in both discrete (Atari 100k) and continuous action (DeepMind Control Suite) domains, consistently improving their performance.
[ "Deep Reinforcement Learning", "Machine Learning Algorithms", "Cognitive Science in AI", "Overfitting in Machine Learning", "Algorithmic Bias and Fairness" ]
https://icml.cc/media/Po…759ae1324e96.png
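A toy sketch of the reset mechanism described in the abstract above: periodically re-initializing part of the agent (here, the last layers of a Q-network) while the replay buffer and the rest of training are kept, so that later data can override early overfitting. The network shape, which layers are reset, and the reset period are illustrative assumptions.

```python
# Periodic partial resets of a Q-network as a remedy for the primacy bias.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(8, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 4))                 # 4 discrete actions

def reset_last_layers(net, n_layers=2):
    """Re-initialize the parameters of the last n_layers Linear modules."""
    linears = [m for m in net if isinstance(m, nn.Linear)]
    for layer in linears[-n_layers:]:
        layer.reset_parameters()

for step in range(1, 150_001):
    # ... the usual off-policy update from the replay buffer would go here ...
    if step % 50_000 == 0:                               # hypothetical reset period
        reset_last_layers(q_net)
```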
NeurIPS
2022
60172
Language-guided Task Adaptation for Imitation Learning
We introduce a novel setting, wherein an agent needs to learn a task from a demonstration of a related task, with the difference between the tasks communicated in natural language. The proposed setting allows reusing demonstrations from other tasks by providing low-effort language descriptions, and can also be used to provide feedback to correct agent errors, both of which are important desiderata for building intelligent agents that assist humans in daily tasks. To enable progress in this proposed setting, we create two benchmarks, Room Rearrangement and Room Navigation, that cover a diverse set of task adaptations. Further, we propose a framework that uses a transformer-based model to reason about the entities in the tasks and their relationships, and thereby learn a policy for the target task.
[ "Imitation Learning", "Natural Language Processing", "Robotics", "Human-Computer Interaction" ]
https://neurips.cc/media…202022/60172.png
NeurIPS
2023
70175
GEX: A flexible method for approximating influence via Geometric Ensemble
Through a deeper understanding of predictions of neural networks, Influence Function (IF) has been applied to various tasks such as detecting and relabeling mislabeled samples, dataset pruning, and separation of data sources in practice. However, we found standard approximations of IF suffer from performance degradation due to oversimplified influence distributions caused by their bilinear approximation, suppressing the expressive power of samples with a relatively strong influence. To address this issue, we propose a new interpretation of existing IF approximations as an average relationship between two linearized losses over parameters sampled from the Laplace approximation (LA). In doing so, we highlight two significant limitations of current IF approximations: the linearity of gradients and the singularity of Hessian. Accordingly, by improving each point, we introduce a new IF approximation method with the following features: i) the removal of linearization to alleviate the bilinear constraint and ii) the utilization of Geometric Ensemble (GE) tailored for non-linear losses. Empirically, our approach outperforms existing IF approximations for downstream tasks with lighter computation, thereby providing new feasibility of low-complexity/nonlinear-based IF design.
[ "Neural Networks", "Model Interpretability", "Algorithm Development", "Data Analysis" ]
https://neurips.cc/media…202023/70175.png
ICML
2,023
23,476
Hyperbolic Representation Learning: Revisiting and Advancing
The non-Euclidean geometry of hyperbolic spaces has recently garnered considerable attention in the realm of representation learning. Current endeavors in hyperbolic representation largely presuppose that the underlying hierarchies can be automatically inferred and preserved through the adaptive optimization process. This assumption, however, is questionable and requires further validation. In this work, we first introduce a position-tracking mechanism to scrutinize existing prevalent hyperbolic models, revealing that the learned representations are sub-optimal and unsatisfactory. To address this, we propose a simple yet effective method, hyperbolic informed embedding (HIE), by incorporating cost-free hierarchical information deduced from the hyperbolic distance of the node to the origin (i.e., induced hyperbolic norm) to advance existing hyperbolic models. The proposed method HIE is both task-agnostic and model-agnostic, enabling its seamless integration with a broad spectrum of models and tasks. Extensive experiments across various models and different tasks demonstrate the versatility and adaptability of the proposed method. Remarkably, our method achieves an improvement of up to 21.4% compared to the competing baselines.
[ "Representation Learning", "Non-Euclidean Geometry", "Hyperbolic Geometry", "Embedding Methods", "Hierarchical Data Analysis" ]
https://icml.cc/media/Po…202023/23476.png
NeurIPS
2,022
53,089
Meta-Reward-Net: Implicitly Differentiable Reward Learning for Preference-based Reinforcement Learning
Setting up a well-designed reward function has been challenging for many reinforcement learning applications. Preference-based reinforcement learning (PbRL) provides a new framework that avoids reward engineering by leveraging human preferences (i.e., preferring apples over oranges) as the reward signal. Therefore, improving the efficacy of data usage for preference data becomes critical. In this work, we propose Meta-Reward-Net (MRN), a data-efficient PbRL framework that incorporates bi-level optimization for both reward and policy learning. The key idea of MRN is to adopt the performance of the Q-function as the learning target. Based on this, MRN learns the Q-function and the policy in the inner level while updating the reward function adaptively according to the performance of the Q-function on the preference data in the outer level. Our experiments on robotic simulated manipulation tasks and locomotion tasks demonstrate that MRN outperforms prior methods in the case of few preference labels and significantly improves data efficiency, achieving state-of-the-art in preference-based RL. Ablation studies further demonstrate that MRN learns a more accurate Q-function compared to prior work and shows obvious advantages when only a small amount of human feedback is available. The source code and videos of this project are released at https://sites.google.com/view/meta-reward-net.
[ "Reinforcement Learning", "Preference-based Learning", "Reward Learning", "Robotics", "Bi-level Optimization" ]
https://neurips.cc/media…202022/53089.png
NeurIPS
2,023
72,350
Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning
We present a novel multi-agent RL approach, Selective Multi-Agent Prioritized Experience Relay, in which agents share with other agents a limited number of transitions they observe during training. The intuition behind this is that even a small number of relevant experiences from other agents could help each agent learn. Unlike many other multi-agent RL algorithms, this approach allows for largely decentralized training, requiring only a limited communication channel between agents. We show that our approach outperforms baseline no-sharing decentralized training and state-of-the-art multi-agent RL algorithms. Further, sharing only a small number of highly relevant experiences outperforms sharing all experiences between agents, and the performance uplift from selective experience sharing is robust across a range of hyperparameters and DQN variants.
[ "Multi-Agent Reinforcement Learning", "Decentralized Training", "Experience Sharing in Reinforcement Learning", "Machine Learning Algorithms" ]
https://neurips.cc/media…202023/72350.png
ICLR
2,024
19,393
Graphical Multioutput Gaussian Process with Attention
Integrating information while recognizing dependence from multiple data sources and enhancing the predictive performance of multi-output regression are challenging tasks. Multioutput Gaussian Process (MOGP) methods offer outstanding solutions with tractable predictions and uncertainty quantification. However, their practical applications are hindered by high computational complexity and storage demand. Additionally, there exist model mismatches in existing MOGP models when dealing with non-Gaussian data. To improve the model representation ability in terms of flexibility, optimality, and scalability, this paper introduces a novel multi-output regression framework, termed Graphical MOGP (GMOGP), which is empowered by: (i) generating flexible Gaussian process priors consolidated from identified parents, (ii) providing dependent processes with attention-based graphical representations, and (iii) achieving Pareto optimal solutions of kernel hyperparameters via a distributed learning framework. Numerical results confirm that the proposed GMOGP significantly outperforms state-of-the-art MOGP alternatives in predictive performance, as well as in time and memory efficiency, across various synthetic and real datasets.
[ "Gaussian Processes", "Multi-output Regression", "Computational Statistics", "Data Integration", "Uncertainty Quantification" ]
https://iclr.cc/media/Po…202024/19393.png
NeurIPS
2,023
72,192
$\varepsilon$-fractional core stability in Hedonic Games.
Hedonic Games (HGs) are a classical framework modeling coalition formation of strategic agents guided by their individual preferences. According to these preferences, it is desirable that a coalition structure (i.e. a partition of agents into coalitions) satisfies some form of stability. The most well-known and natural of such notions is arguably core-stability. Informally, a partition is core-stable if no subset of agents would like to deviate by regrouping in a so-called core-blocking coalition. Unfortunately, core-stable partitions seldom exist and even when they do, it is often computationally intractable to find one. To circumvent these problems, we propose the notion of $\varepsilon$-fractional core-stability, where at most an $\varepsilon$-fraction of all possible coalitions is allowed to core-block. It turns out that such a relaxation may guarantee both existence and polynomial-time computation. Specifically, we design efficient algorithms returning an $\varepsilon$-fractional core-stable partition, with $\varepsilon$ exponentially decreasing in the number of agents, for two fundamental classes of HGs: Simple Fractional and Anonymous. From a probabilistic point of view, since the definition of $\varepsilon$-fractional core is equivalent to requiring that uniformly sampled coalitions core-block with probability lower than $\varepsilon$, we further extend the definition to handle more complex sampling distributions. Along this line, when valuations have to be learned from samples in a PAC-learning fashion, we give positive and negative results on which distributions allow the efficient computation of outcomes that are $\varepsilon$-fractional core-stable with arbitrarily high confidence.
[ "Game Theory", "Algorithmic Game Theory", "Cooperative Games", "Computational Social Choice" ]
https://neurips.cc/media…202023/72192.png
ICML
2,024
34,562
Binning as a Pretext Task: Improving Self-Supervised Learning in Tabular Domains
The ability of deep networks to learn superior representations hinges on leveraging the proper inductive biases, considering the inherent properties of datasets. In tabular domains, it is critical to effectively handle heterogeneous features (both categorical and numerical) in a unified manner and to grasp irregular functions like piecewise constant functions. To address the challenges in the self-supervised learning framework, we propose a novel pretext task based on the classical binning method. The idea is straightforward: reconstructing the bin indices (either orders or classes) rather than the original values. This pretext task provides the encoder with an inductive bias to capture the irregular dependencies, mapping from continuous inputs to discretized bins, and mitigates the feature heterogeneity by setting all features to have category-type targets. Our empirical investigations ascertain several advantages of binning: capturing the irregular function, compatibility with encoder architecture and additional modifications, standardizing all features into equal sets, grouping similar values within a feature, and providing ordering information. Comprehensive evaluations across diverse tabular datasets corroborate that our method consistently improves tabular representation learning performance for a wide range of downstream tasks. The codes are available in https://github.com/kyungeun-lee/tabularbinning.
[ "Self-Supervised Learning", "Tabular Data Analysis", "Representation Learning", "Data Preprocessing" ]
https://icml.cc/media/Po…202024/34562.png
ICML
2,023
24,101
Conformal Prediction for Federated Uncertainty Quantification Under Label Shift
Federated Learning (FL) is a machine learning framework where many clients collaboratively train models while keeping the training data decentralized. Despite recent advances in FL, uncertainty quantification (UQ) remains only partially addressed. Among UQ methods, conformal prediction (CP) approaches provide distribution-free guarantees under minimal assumptions. We develop a new federated conformal prediction method based on quantile regression and take into account privacy constraints. This method takes advantage of importance weighting to effectively address the label shift between agents and provides theoretical guarantees for both valid coverage of the prediction sets and differential privacy. Extensive experimental studies demonstrate that this method outperforms current competitors.
[ "Federated Learning", "Uncertainty Quantification", "Conformal Prediction", "Privacy and Security in Machine Learning" ]
https://icml.cc/media/Po…202023/24101.png
ICLR
2,024
17,953
TRAM: Bridging Trust Regions and Sharpness Aware Minimization
Sharpness-aware minimization (SAM) reports improving domain generalization by reducing the loss surface curvature in the parameter space. However, generalization during fine-tuning is often more dependent on the transferability of representations in the function space. Trust-region methods (TR) target this goal by regularizing representation curvature to reduce catastrophic forgetting of pre-trained task-agnostic information while adopting task-specific skills. We consider unifying these strategies for low curvature in both parameter space and function space to improve out-of-domain (OOD) generalization. We propose Trust Region Aware Minimization (TRAM), a SAM algorithm fine-tuning for low parameter sharpness and smooth, informative representations preserving pre-trained structure. TRAM uses a trust region bound to inform the SAM adversarial neighborhood, introducing an awareness of function curvature within optimization for flatter minima. We empirically validate TRAM in vision (cross-dataset adaptation) and text (OOD language modeling, zero-shot cross-lingual transfer) tasks where robust domain transfer and representation generality are critical. TRAM outperforms SAM- and TR-based optimization across all tasks, notably surpassing competing methods for hard transfer between anticorrelated domains. TRAM establishes a novel standard in fine-tuning for domain-generalizable models with minimal additional computation over previous sharpness-aware methods.
[ "Optimization", "Domain Generalization", "Transfer Learning", "Fine-tuning", "Trust Region Methods", "Sharpness-aware Minimization", "Out-of-Domain Generalization" ]
https://iclr.cc/media/Po…202024/17953.png
ICML
2,022
18,275
PMIC: Improving Multi-Agent Reinforcement Learning with Progressive Mutual Information Collaboration
Learning to collaborate is critical in Multi-Agent Reinforcement Learning (MARL). Previous works promote collaboration by maximizing the correlation of agents’ behaviors, which is typically characterized by Mutual Information (MI) in different forms. However, we reveal sub-optimal collaborative behaviors also emerge with strong correlations, and simply maximizing the MI can, surprisingly, hinder the learning towards better collaboration. To address this issue, we propose a novel MARL framework, called Progressive Mutual Information Collaboration (PMIC), for more effective MI-driven collaboration. PMIC uses a new collaboration criterion measured by the MI between global states and joint actions. Based on this criterion, the key idea of PMIC is maximizing the MI associated with superior collaborative behaviors and minimizing the MI associated with inferior ones. The two MI objectives play complementary roles by facilitating better collaborations while avoiding falling into sub-optimal ones. Experiments on a wide range of MARL benchmarks show the superior performance of PMIC compared with other algorithms.
[ "Multi-Agent Reinforcement Learning ", "Collaborative Systems", "Information Theory" ]
https://icml.cc/media/Po…8093c652fd79.png
ICLR
2,024
18,168
Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game
While Large Language Models (LLMs) are increasingly being used in real-world applications, they remain vulnerable to prompt injection attacks: malicious third party prompts that subvert the intent of the system designer. To help researchers study this problem, we present a dataset of over 563,000 prompt injection attacks and 118,000 prompt-based "defenses" against prompt injection, all created by players of an online game called Tensor Trust. To the best of our knowledge, this is the first dataset that includes both human-generated attacks and defenses for instruction-following LLMs. The attacks in our dataset have easily interpretable structure, and shed light on the weaknesses of LLMs. We also use the dataset to create a benchmark for resistance to two types of prompt injection, which we refer to as prompt extraction and prompt hijacking. Our benchmark results show that many models are vulnerable to the attack strategies in the Tensor Trust dataset. Furthermore, we show that some attack strategies from the dataset generalize to deployed LLM-based applications, even though they have a very different set of constraints to the game. We release data and code at tensortrust.ai/paper
[ "Cybersecurity", "Natural Language Processing", "Machine Learning Security", "Artificial Intelligence Safety", "Data Science", "Game-based Learning" ]
https://iclr.cc/media/Po…202024/18168.png
NeurIPS
2,023
73,927
MMD Aggregated Two-Sample Test
We propose two novel nonparametric two-sample kernel tests based on the Maximum Mean Discrepancy (MMD). First, for a fixed kernel, we construct an MMD test using either permutations or a wild bootstrap, two popular numerical procedures to determine the test threshold. We prove that this test controls the probability of type I error non-asymptotically. Hence, it can be used reliably even in settings with small sample sizes as it remains well-calibrated, which differs from previous MMD tests which only guarantee correct test level asymptotically. When the difference in densities lies in a Sobolev ball, we prove minimax optimality of our MMD test with a specific kernel depending on the smoothness parameter of the Sobolev ball. In practice, this parameter is unknown and, hence, the optimal MMD test with this particular kernel cannot be used. To overcome this issue, we construct an aggregated test, called MMDAgg, which is adaptive to the smoothness parameter. The test power is maximised over the collection of kernels used, without requiring held-out data for kernel selection (which results in a loss of test power), or arbitrary kernel choices such as the median heuristic. We prove that MMDAgg still controls the level non-asymptotically, and achieves the minimax rate over Sobolev balls, up to an iterated logarithmic term. Our guarantees are not restricted to a specific type of kernel, but hold for any product of one-dimensional translation invariant characteristic kernels. We provide a user-friendly parameter-free implementation of MMDAgg using an adaptive collection of bandwidths. We demonstrate that MMDAgg significantly outperforms alternative state-of-the-art MMD-based two-sample tests on synthetic data satisfying the Sobolev smoothness assumption, and that, on real-world image data, MMDAgg closely matches the power of tests leveraging the use of models such as neural networks.
[ "Statistics", "Nonparametric Methods", "Hypothesis Testing", "Kernel Methods" ]
https://neurips.cc/media…202023/73927.png
NeurIPS
2,022
65,592
Red-Teaming the Stable Diffusion Safety Filter
Stable Diffusion is a recent open-source image generation model comparable to proprietary models such as DALL·E, Imagen, or Parti. Stable Diffusion comes with a safety filter that aims to prevent generating explicit images. Unfortunately, the filter is obfuscated and poorly documented. This makes it hard for users to prevent misuse in their applications, and to understand the filter's limitations and improve it. We first show that it is easy to generate disturbing content that bypasses the safety filter. We then reverse-engineer the filter and find that while it aims to prevent sexual content, it ignores violence, gore, and other similarly disturbing content. Based on our analysis, we argue safety measures in future model releases should strive to be fully open and properly documented to stimulate security contributions from the community.
[ "Artificial Intelligence Safety", "Machine Learning Security", "Image Generation Models", "Content Moderation", "Open Source Software", "Computer Vision" ]
https://neurips.cc/media…202022/65592.png
NeurIPS
2,022
54,733
Local Spatiotemporal Representation Learning for Longitudinally-consistent Neuroimage Analysis
Recent self-supervised advances in medical computer vision exploit the global and local anatomical self-similarity for pretraining prior to downstream tasks such as segmentation. However, current methods assume i.i.d. image acquisition, which is invalid in clinical study designs where follow-up longitudinal scans track subject-specific temporal changes. Further, existing self-supervised methods for medically-relevant image-to-image architectures exploit only spatial or temporal self-similarity and do so via a loss applied only at a single image-scale, with naive multi-scale spatiotemporal extensions collapsing to degenerate solutions. To these ends, this paper makes two contributions: (1) It presents a local and multi-scale spatiotemporal representation learning method for image-to-image architectures trained on longitudinal images. It exploits the spatiotemporal self-similarity of learned multi-scale intra-subject image features for pretraining and develops several feature-wise regularizations that avoid degenerate representations; (2) During finetuning, it proposes a surprisingly simple self-supervised segmentation consistency regularization to exploit intra-subject correlation. Benchmarked across various segmentation tasks, the proposed framework outperforms both well-tuned randomly-initialized baselines and current self-supervised techniques designed for both i.i.d. and longitudinal datasets. These improvements are demonstrated across both longitudinal neurodegenerative adult MRI and developing infant brain MRI and yield both higher performance and longitudinal consistency.
[ "Medical Image Analysis", "Neuroimaging", "Machine Learning in Healthcare", "Computer Vision", "Self-supervised Learning", "Longitudinal Data Analysis" ]
https://neurips.cc/media…202022/54733.png
ICLR
2,023
12,069
Denoising Diffusion Error Correction Codes
Error correction code (ECC) is an integral part of the physical communication layer, ensuring reliable data transfer over noisy channels. Recently, neural decoders have demonstrated their advantage over classical decoding techniques. However, recent state-of-the-art neural decoders suffer from high complexity and lack the important iterative scheme characteristic of many legacy decoders. In this work, we propose to employ denoising diffusion models for the soft decoding of linear codes at arbitrary block lengths. Our framework models the forward channel corruption as a series of diffusion steps that can be reversed iteratively. Three contributions are made: (i) a diffusion process suitable for the decoding setting is introduced, (ii) the neural diffusion decoder is conditioned on the number of parity errors, which indicates the level of corruption at a given step, (iii) a line search procedure based on the code's syndrome obtains the optimal reverse diffusion step size. The proposed approach demonstrates the power of diffusion models for ECC and is able to achieve state-of-the-art accuracy, outperforming the other neural decoders by sizable margins, even for a single reverse diffusion step.
[ "Error Correction Codes", "Neural Decoding", "Communication Systems", "Machine Learning for Communications", "Information Theory" ]
https://iclr.cc/media/Po…202023/12069.png
NeurIPS
2,022
63,644
An instance segmentation approach for automatic insulator defect detection
Research regarding the problem of defective insulator recognition on power distribution networks retains an open interest, due to the significant role insulators play to maintain quality service delivery. Most existing methods detect insulators by rectangular bounding box but do not perform segmentation down to instance pixel-level. In this paper, we propose an automated end-to-end framework enabled by an attention mechanism to enhance recognition of defective insulators. Using a natural industry dataset of images acquired by unmanned aerial vehicle (UAV), pixel-level recognition is formulated into two computer vision tasks: object detection and instance segmentation. We increase the capabilities of our chosen model by leveraging a lightweight but effective three-branch attention structure integrated into the backbone network as an add-on module. Specifically, we exploit cross-dimensional interactions to build an efficient computation of attention weights across channels of the backbone network to achieve gains in detection performance for defective insulators of up to about +2.0 points compared to our base model, at negligible overhead cost. Moreover, we implement a training scheme to improve segmentation performance while demonstrating segmentation superiority over traditional segmentation approaches.
[ "Computer Vision", "Instance Segmentation", "Object Detection", "Power Distribution Networks", "UAV-based Inspection", "Defect Detection" ]
https://neurips.cc/media…202022/63644.png
ICLR
2,022
6,526
Towards Understanding the Data Dependency of Mixup-style Training
In the Mixup training paradigm, a model is trained using convex combinations of data points and their associated labels. Despite seeing very few true data points during training, models trained using Mixup seem to still minimize the original empirical risk and exhibit better generalization and robustness on various tasks when compared to standard training. In this paper, we investigate how these benefits of Mixup training rely on properties of the data in the context of classification. For minimizing the original empirical risk, we compute a closed form for the Mixup-optimal classification, which allows us to construct a simple dataset on which minimizing the Mixup loss leads to learning a classifier that does not minimize the empirical loss on the data. On the other hand, we also give sufficient conditions for Mixup training to also minimize the original empirical risk. For generalization, we characterize the margin of a Mixup classifier, and use this to understand why the decision boundary of a Mixup classifier can adapt better to the full structure of the training data when compared to standard training. In contrast, we also show that, for a large class of linear models and linearly separable datasets, Mixup training leads to learning the same classifier as standard training.
[ "Data Augmentation", "Model Generalization", "Classification Algorithms", "Empirical Risk Minimization" ]
https://iclr.cc/media/Po…667959c12239.png
ICLR
2,022
6,637
Exposing the Implicit Energy Networks behind Masked Language Models via Metropolis--Hastings
While recent work has shown that scores from models trained by the ubiquitous masked language modeling (MLM) objective effectively discriminate probable from improbable sequences, it is still an open question if these MLMs specify a principled probability distribution over the space of possible sequences. In this paper, we interpret MLMs as energy-based sequence models and propose two energy parametrizations derivable from the trained MLMs. In order to draw samples correctly from these models, we develop a tractable sampling scheme based on the Metropolis--Hastings Monte Carlo algorithm. In our approach, samples are proposed from the same masked conditionals used for training the masked language models, and they are accepted or rejected based on their energy values according to the target distribution. We validate the effectiveness of the proposed parametrizations by exploring the quality of samples drawn from these energy-based models for both open-ended unconditional generation and a conditional generation task of machine translation. We theoretically and empirically justify our sampling algorithm by showing that the masked conditionals on their own do not yield a Markov chain whose stationary distribution is that of our target distribution, and our approach generates higher quality samples than other recently proposed undirected generation approaches (Wang et al., 2019, Ghazvininejad et al., 2019).
[ "Natural Language Processing", "Energy-Based Models", "Probabilistic Models", "Monte Carlo Methods", "Language Modeling", "Sequence Modeling" ]
https://iclr.cc/media/Po…d32bce4e733a.png
NeurIPS
2,022
62,783
Transfer Learning Lithium and Electrolyte Potential Energy Surfaces from Pure and Hybrid DFT
One of the most important problems in rational design of batteries is predicting the properties of the Solid Electrolyte Interphase, which (for a metallic anode) is the part of the battery where metallic and non-metallic components come into contact. However, there is a fundamental problem with predicting the properties of such a mixed material: the two components are best simulated with incompatible levels of density functional theory. Pure functionals perform well for metallic properties, while hybrid or long-range-corrected density functionals perform better for molecular properties and reaction barriers. We demonstrate a simple method to obviate this conflict by training a machine learning potential energy surface using both levels of theory via transfer learning. We further show that the resulting model is more accurate than models trained individually to these levels of theory, allowing more accurate property prediction and potentially faster materials discovery.
[ "Computational Chemistry", "Materials Science", "Machine Learning in Chemistry", "Battery Research", "Density Functional Theory " ]
https://neurips.cc/media…202022/62783.png
ICLR
2,023
10,920
First Steps Toward Understanding the Extrapolation of Nonlinear Models to Unseen Domains
Real-world machine learning applications often involve deploying neural networks to domains that are not seen at training time. Hence, we need to understand the extrapolation of \textit{nonlinear} models---under what conditions on the distributions and function class, models can be guaranteed to extrapolate to new test distributions. The question is very challenging because even two-layer neural networks cannot be guaranteed to extrapolate outside the support of the training distribution without further assumptions on the domain shift. This paper makes some initial steps towards analyzing the extrapolation of nonlinear models for structured domain shift. We primarily consider settings where the \textit{marginal} distribution of each coordinate of the data (or subset of coordinates) does not shift significantly across the training and test distributions, but the joint distribution may have a much bigger shift. We prove that the family of nonlinear models of the form $f(x)=\sum f_i(x_i)$, where $f_i$ is an \emph{arbitrary} function on the subset of features $x_i$, can extrapolate to unseen distributions, if the covariance of the features is well-conditioned. To the best of our knowledge, this is the first result that goes beyond linear models and the bounded density ratio assumption, even though the assumptions on the distribution shift and function class are stylized.
[ "Neural Networks", "Domain Adaptation", "Extrapolation", "Nonlinear Models" ]
https://iclr.cc/media/Po…202023/10920.png
ICML
2,024
32,994
Cluster-Aware Similarity Diffusion for Instance Retrieval
Diffusion-based re-ranking is a common method used for retrieving instances by performing similarity propagation in a nearest neighbor graph. However, existing techniques that construct the affinity graph based on pairwise instances can lead to the propagation of misinformation from outliers and other manifolds, resulting in inaccurate results. To overcome this issue, we propose a novel Cluster-Aware Similarity (CAS) diffusion for instance retrieval. The primary concept of CAS is to conduct similarity diffusion within local clusters, which can reduce the influence from other manifolds explicitly. To obtain a symmetrical and smooth similarity matrix, our Bidirectional Similarity Diffusion strategy introduces an inverse constraint term to the optimization objective of local cluster diffusion. Additionally, we have optimized a Neighbor-guided Similarity Smoothing approach to ensure similarity consistency among the local neighbors of each instance. Evaluations in instance retrieval and object re-identification validate the effectiveness of the proposed CAS; our code is publicly available.
[ "Computer Vision", "Information Retrieval", "Pattern Recognition", "Data Mining" ]
https://icml.cc/media/Po…202024/32994.png
ICML
2,024
34,573
Learning from Memory: Non-Parametric Memory Augmented Self-Supervised Learning of Visual Features
This paper introduces a novel approach to improving the training stability of self-supervised learning (SSL) methods by leveraging a non-parametric memory of seen concepts. The proposed method involves augmenting a neural network with a memory component to stochastically compare current image views with previously encountered concepts. Additionally, we introduce stochastic memory blocks to regularize training and enforce consistency between image views. We extensively benchmark our method on many vision tasks, such as linear probing, transfer learning, few-shot classification, and image retrieval on many datasets. The experimental results consolidate the effectiveness of the proposed approach in achieving stable SSL training without additional regularizers while learning highly transferable representations and requiring less computing time and resources.
[ "Self-Supervised Learning", "Computer Vision", "Neural Networks", "Transfer Learning", "Few-Shot Learning" ]
https://icml.cc/media/Po…202024/34573.png
ICML
2,024
33,384
Position: Machine Learning-powered Assessments of the EU Digital Services Act Aid Quantify Policy Impacts on Online Harms
While machine learning shows promise in automated knowledge generation, current techniques such as large language models and micro-targeted influence operations can be exploited for harmful purposes like the proliferation of disinformation. The European Union's Digital Services Act (DSA) is an exemplary policy response addressing these harms generated by online platforms. In this regard, it necessitates a comprehensive evaluation of its impact on curbing the harmful downstream effects of these opaque practices. Despite their harmful applications, we argue that machine learning techniques offer immense, yet under-exploited, potential for unraveling the impacts of regulations like the DSA. Following an analysis that reveals possible limitations in the DSA's provisions, we call for resolute efforts to address methodological barriers around appropriate data access, isolating marginal regulatory effects, and facilitating generalization across different contexts. Given the identified advantages of data-driven approaches to regulatory delivery, we advocate for machine learning research to help quantify the policy impacts on online harms.
[ "Policy Impact Assessment", "Digital Regulation", "Online Safety and Harms", "European Union Policy", "Data-Driven Policy Analysis" ]
https://icml.cc/media/Po…202024/33384.png
NeurIPS
2,022
64,223
Dynamic Collaborative Multi-Agent Reinforcement Learning Communication for Autonomous Drone Reforestation
We approach autonomous drone-based reforestation with a collaborative multi-agent reinforcement learning (MARL) setup. Agents can communicate as part of a dynamically changing network. We explore collaboration and communication on the back of a high-impact problem. Forests are the main resource to control rising CO2 conditions. Unfortunately, the global forest volume is decreasing at an unprecedented rate. Many areas are too large and hard to traverse to plant new trees. To efficiently cover as much area as possible, here we propose a Graph Neural Network (GNN) based communication mechanism that enables collaboration. Agents can share location information on areas needing reforestation, which increases viewed area and planted tree count. We compare our proposed communication mechanism with a multi-agent baseline without the ability to communicate. Results show how communication enables collaboration and increases collective performance, planting precision and the risk-taking propensity of individual agents.
[ "Multi-Agent Systems", "Reinforcement Learning", "Autonomous Systems", "Drone Technology", "Environmental Applications", "Graph Neural Networks", "Communication Networks" ]
https://neurips.cc/media…202022/64223.png
NeurIPS
2,023
74,847
Learning from Invalid Data: On Constraint Satisfaction in Generative Models
Generative models have demonstrated impressive results in vision, language, and speech. However, even with massive datasets, they struggle with precision, generating physically invalid or factually incorrect data. To improve precision while preserving diversity and fidelity, we propose a novel training mechanism that leverages datasets of constraint-violating data points, which we consider invalid. Our approach minimizes the divergence between the generative distribution and the valid prior while maximizing the divergence with the invalid distribution. We demonstrate how generative models like Diffusion Models and GANs that we augment to train with invalid data improve their standard counterparts which solely train on valid data points. We also explore connections between density ratio and guidance in diffusion models. Our proposed mechanism offers a promising solution for improving precision in generative models while preserving diversity and fidelity, particularly in domains where constraint satisfaction is critical and data is limited, such as engineering design, robotics, and medicine.
[ "Generative Models", "Constraint Satisfaction", "Data Science", "Computer Vision", "Natural Language Processing", "Speech Processing", "Engineering Design", "Robotics", "Medicine" ]
https://neurips.cc/media…202023/74847.png
ICLR
2,024
17,370
Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models
The success of recent text-to-image diffusion models is largely due to their capacity to be guided by a complex text prompt, which enables users to precisely describe the desired content. However, these models struggle to effectively suppress the generation of undesired content, which is explicitly requested to be omitted from the generated image in the prompt. In this paper, we analyze how to manipulate the text embeddings and remove unwanted content from them. We introduce two contributions, which we refer to as soft-weighted regularization and inference-time text embedding optimization. The first regularizes the text embedding matrix and effectively suppresses the undesired content. The second method aims to further suppress the unwanted content generation of the prompt, and encourages the generation of desired content. We evaluate our method quantitatively and qualitatively on extensive experiments, validating its effectiveness. Furthermore, our method generalizes to both pixel-space diffusion models (i.e. DeepFloyd-IF) and latent-space diffusion models (i.e. Stable Diffusion).
[ "Computer Vision", "Image Processing", "Natural Language Processing", "Generative Models", "Text-to-Image Synthesis" ]
https://iclr.cc/media/Po…202024/17370.png
NeurIPS
2,023
71,472
Temporally Disentangled Representation Learning under Unknown Nonstationarity
In unsupervised causal representation learning for sequential data with time-delayed latent causal influences, strong identifiability results for the disentanglement of causally-related latent variables have been established in stationary settings by leveraging temporal structure. However, in the nonstationary setting, existing work has only partially addressed the problem by either utilizing observed auxiliary variables (e.g., class labels and/or domain indexes) as side information or assuming simplified latent causal dynamics. Both constrain the method to a limited range of scenarios. In this study, we further explore the Markov assumption for time-delayed causally related processes in the nonstationary setting and show that, under mild conditions, the independent latent components can be recovered from their nonlinear mixture up to a permutation and a component-wise transformation, without the observation of auxiliary variables. We then introduce NCTRL, a principled estimation framework, to reconstruct time-delayed latent causal variables and identify their relations from measured sequential data only. Empirical evaluations demonstrated the reliable identification of time-delayed latent causal influences, with our methodology substantially outperforming existing baselines that fail to exploit the nonstationarity adequately and, consequently, cannot distinguish distribution shifts.
[ "Causal Inference", "Representation Learning", "Time Series Analysis", "Nonstationary Processes" ]
https://neurips.cc/media…202023/71472.png
ICML
2,023
24,490
Global Selection of Contrastive Batches via Optimization on Sample Permutations
Contrastive Learning has recently achieved state-of-the-art performance in a wide range of unimodal and multimodal tasks. Many contrastive learning approaches use mined hard negatives to make batches more informative during training but these approaches are inefficient as they increase epoch length proportional to the number of mined negatives and require frequent updates of nearest neighbor indices or mining from recent batches. In this work, we provide an alternative to hard negative mining, Global Contrastive Batch Sampling (GCBS), an efficient approximation to the batch assignment problem that upper bounds the gap between the global and training losses, $\mathcal{L}^{Global} - \mathcal{L}^{Train}$, in contrastive learning settings. Through experimentation we find GCBS improves state-of-the-art performance in sentence embedding and code-search tasks. Additionally, GCBS is easy to implement as it requires only a few additional lines of code, does not maintain external data structures such as nearest neighbor indices, is more computationally efficient than the most minimal hard negative mining approaches, and makes no changes to the model being trained. Code is available at https://github.com/vinayak1/GCBS.
[ "Contrastive Learning", "Unsupervised Learning", "Representation Learning", "Optimization", "Natural Language Processing", "Information Retrieval" ]
https://icml.cc/media/Po…202023/24490.png
ICLR
2,024
17,685
On the Provable Advantage of Unsupervised Pretraining
Unsupervised pretraining, which learns a useful representation using a large amount of unlabeled data to facilitate the learning of downstream tasks, is a critical component of modern large-scale machine learning systems. Despite its tremendous empirical success, the rigorous theoretical understanding of why unsupervised pretraining generally helps remains rather limited---most existing results are restricted to particular methods or approaches for unsupervised pretraining with specialized structural assumptions. This paper studies a generic framework, where the unsupervised representation learning task is specified by an abstract class of latent variable models $\Phi$ and the downstream task is specified by a class of prediction functions $\Psi$. We consider a natural approach of using Maximum Likelihood Estimation (MLE) for unsupervised pretraining and Empirical Risk Minimization (ERM) for learning downstream tasks. We prove that, under a mild ``informative'' condition, our algorithm achieves an excess risk of $\tilde{\mathcal{O}}(\sqrt{\mathcal{C}_\Phi/m} + \sqrt{\mathcal{C}_\Psi/n})$ for downstream tasks, where $\mathcal{C}_\Phi, \mathcal{C}_\Psi$ are complexity measures of function classes $\Phi, \Psi$, and $m, n$ are the number of unlabeled and labeled data respectively. Compared to the baseline of $\tilde{\mathcal{O}}(\sqrt{\mathcal{C}_{\Phi \circ \Psi}/n})$ achieved by performing supervised learning using only the labeled data, our result rigorously shows the benefit of unsupervised pretraining when $m \gg n$ and $\mathcal{C}_{\Phi\circ \Psi} > \mathcal{C}_\Psi$. This paper further shows that our generic framework covers a wide range of approaches for unsupervised pretraining, including factor models, Gaussian mixture models, and contrastive learning.
[ "Machine Learning Theory", "Unsupervised Learning", "Representation Learning", "Theoretical Computer Science" ]
https://iclr.cc/media/Po…202024/17685.png
NeurIPS
2,023
71,911
On the Exploration of Local Significant Differences For Two-Sample Test
Recent years have witnessed increasing attention on two-sample testing with diverse real applications, and this work takes one more step in the exploration of local significant differences for the two-sample test. We propose the ME$_\text{MaBiD}$, an effective test for two-sample testing, and the basic idea is to exploit local information via multiple Mahalanobis kernels and to introduce a bi-directional hypothesis for testing. On the exploration of local significant differences, we first partition the embedding space into several rectangle regions via a new splitting criterion, which is relevant to test power and data correlation. We then explore local significant differences based on our bi-directional masked $p$-value together with the ME$_\text{MaBiD}$ test. Theoretically, we present the asymptotic distribution and lower bounds of test power for our ME$_\text{MaBiD}$ test, and control the familywise error rate on the exploration of local significant differences. We finally conduct extensive experiments to validate the effectiveness of our proposed methods on two-sample testing and the exploration of local significant differences.
[ "Statistics", "Hypothesis Testing", "Data Analysis" ]
https://neurips.cc/media…202023/71911.png
ICLR
2,024
17,563
Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks
Graph Neural Networks (GNNs) are neural models that leverage the dependency structure in graphical data via message passing among the graph nodes. GNNs have emerged as pivotal architectures in analyzing graph-structured data, and their expansive application in sensitive domains requires a comprehensive understanding of their decision-making processes --- necessitating a framework for GNN explainability. An explanation function for GNNs takes a pre-trained GNN along with a graph as input, to produce a `sufficient statistic' subgraph with respect to the graph label. A main challenge in studying GNN explainability is to provide fidelity measures that evaluate the performance of these explanation functions. This paper studies this foundational challenge, spotlighting the inherent limitations of prevailing fidelity metrics, including $Fid_+$, $Fid_-$, and $Fid_\Delta$. Specifically, a formal, information-theoretic definition of explainability is introduced and it is shown that existing metrics often fail to align with this definition across various statistical scenarios. The reason is due to potential distribution shifts when subgraphs are removed in computing these fidelity measures. Subsequently, a robust class of fidelity measures are introduced, and it is shown analytically that they are resilient to distribution shift issues and are applicable in a wide range of scenarios. Extensive empirical analysis on both synthetic and real datasets are provided to illustrate that the proposed metrics are more coherent with gold standard metrics.
[ "Graph Neural Networks", "Explainable AI", "Information Theory", "Data Science" ]
https://iclr.cc/media/Po…202024/17563.png
ICLR
2,024
18,371
InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules
Generalizing Neural Radiance Fields (NeRF) to new scenes is a significant challenge that existing approaches struggle to address without extensive modifications to the vanilla NeRF framework. We introduce InsertNeRF, a method for INStilling gEneRalizabiliTy into NeRF. By utilizing multiple plug-and-play HyperNet modules, InsertNeRF dynamically tailors NeRF's weights to specific reference scenes, transforming multi-scale sampling-aware features into scene-specific representations. This novel design allows for more accurate and efficient representations of complex appearances and geometries. Experiments show that this method not only achieves superior generalization performance but also provides a flexible pathway for integration with other NeRF-like systems, even in sparse input settings. Code will be available at: https://github.com/bbbbby-99/InsertNeRF.
[ "Computer Vision", "Neural Networks", "3D Reconstruction" ]
https://iclr.cc/media/Po…202024/18371.png
NeurIPS
2,023
69,959
Convergence of Adam Under Relaxed Assumptions
In this paper, we provide a rigorous proof of convergence of the Adaptive Moment Estimate (Adam) algorithm for a wide class of optimization objectives. Despite the popularity and efficiency of the Adam algorithm in training deep neural networks, its theoretical properties are not yet fully understood, and existing convergence proofs require unrealistically strong assumptions, such as globally bounded gradients, to show the convergence to stationary points. In this paper, we show that Adam provably converges to $\epsilon$-stationary points with $\mathcal{O}(\epsilon^{-4})$ gradient complexity under far more realistic conditions. The key to our analysis is a new proof of boundedness of gradients along the optimization trajectory of Adam, under a generalized smoothness assumption according to which the local smoothness (i.e., Hessian norm when it exists) is bounded by a sub-quadratic function of the gradient norm. Moreover, we propose a variance-reduced version of Adam with an accelerated gradient complexity of $\mathcal{O}(\epsilon^{-3})$.
[ "Optimization Algorithms", "Machine Learning Theory", "Deep Learning", "Mathematical Analysis of Algorithms" ]
https://neurips.cc/media…202023/69959.png
NeurIPS
2,023
77,816
Unbiased Sequential Prediction for Fairness in Predictions-to-Decisions Pipelines
We develop an efficient sequential (online adversarial) algorithm for making high-dimensional vector predictions --- to be fed into a downstream decision maker --- such that these predictions are fair in various senses with respect to the predictions-to-decisions pipeline, and in particular fair conditional on downstream actions taken by the decision maker as a function of these predictions. (1) As a major example, in the online problem where at each round, a subset of k-many candidates is selected by a downstream selection method as a function of our predictions about each of these candidates' success rates, our algorithm can give predictions that are fair to each candidate, in the sense that the candidate's average predicted probability is unbiased relative to the true empirical average, both \emph{over those days when the candidate was selected} and also \emph{over days when the candidate wasn't selected}. Thus, e.g., each candidate can be assured that their probability of success is not systematically underestimated whenever they aren't picked, and conversely, that no other candidate's chances of success are systematically overestimated when they are picked. (2) Via another instantiation of our prediction algorithm, we improve on the online multigroup fairness result of Blum and Lykouris (2020), who produce predictions with no \emph{external} regret conditional on all demographic groups of interest. We directly improve on this by providing the much stronger no \emph{swap} regret guarantees for each group. (3) Our efficient algorithm, in its general form, can make unbiased predictions conditional on any polynomially many subsequences of rounds, which can be functions of our predictions themselves. This is classically achievable via (full) online calibration (Foster and Vohra, 1998), which is however inefficient and statistically suboptimal in high dimensions. Our algorithm is thus a novel computationally and statistically efficient relaxation of calibration.
[ "Fairness in Machine Learning", "Online Learning", "Algorithmic Fairness", "Sequential Prediction", "Decision-Making Systems" ]
https://neurips.cc/media…202023/77816.png
ICLR
2,023
12,207
IS SYNTHETIC DATA FROM GENERATIVE MODELS READY FOR IMAGE RECOGNITION?
Recent text-to-image generation models have shown promising results in generating high-fidelity photo-realistic images. Though the results are astonishing to human eyes, how applicable these generated images are for recognition tasks remains under-explored. In this work, we extensively study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks, and focus on two perspectives: synthetic data for improving classification models in data-scarce settings (i.e. zero-shot and few-shot), and synthetic data for large-scale model pre-training for transfer learning. We showcase the strengths and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data for recognition tasks. Code: https://github.com/CVMI-Lab/SyntheticData.
[ "Computer Vision", "Image Recognition", "Generative Models", "Synthetic Data", "Transfer Learning" ]
https://iclr.cc/media/Po…202023/12207.png
ICLR
2,023
13,459
Long-lead forecasts of wintertime air stagnation index in southern China using oceanic memory effects
Stagnant weather conditions are among the major contributors to air pollution, as they are favorable for the formation and accumulation of pollutants. To measure the atmosphere’s ability to dilute air pollutants, the Air Stagnation Index (ASI) has been introduced as an important meteorological index. Therefore, making long-lead ASI forecasts is vital for planning air quality management in advance. In this study, we found that autumn Niño indices derived from sea surface temperature (SST) anomalies show a negative correlation with wintertime ASI in southern China, offering prospects for a prewinter forecast. We developed an LSTM-based model to predict the future wintertime ASI. Results demonstrated that multivariate inputs (past ASI and Niño indices) achieve better forecast performance than univariate input (only past ASI). The model achieves a correlation coefficient of 0.778 between the actual and predicted ASI, exhibiting a high degree of consistency.
[ "Meteorology", "Climate Science", "Environmental Science", "Air Quality Management", "Oceanography", "Machine Learning in Environmental Science" ]
https://iclr.cc/media/Po…202023/13459.png
ICML
2,024
33,886
Time Series Diffusion in the Frequency Domain
Fourier analysis has been an instrumental tool in the development of signal processing. This leads us to wonder whether this framework could similarly benefit generative modelling. In this paper, we explore this question through the scope of time series diffusion models. More specifically, we analyze whether representing time series in the frequency domain is a useful inductive bias for score-based diffusion models. By starting from the canonical SDE formulation of diffusion in the time domain, we show that a dual diffusion process occurs in the frequency domain with an important nuance: Brownian motions are replaced by what we call mirrored Brownian motions, characterized by mirror symmetries among their components. Building on this insight, we show how to adapt the denoising score matching approach to implement diffusion models in the frequency domain. This results in frequency diffusion models, which we compare to canonical time diffusion models. Our empirical evaluation on real-world datasets, covering various domains like healthcare and finance, shows that frequency diffusion models better capture the training distribution than time diffusion models. We explain this observation by showing that time series from these datasets tend to be more localized in the frequency domain than in the time domain, which makes them easier to model in the former case. All our observations point towards impactful synergies between Fourier analysis and diffusion models.
[ "Signal Processing", "Time Series Analysis", "Generative Modeling", "Frequency Domain Analysis", "Stochastic Processes" ]
https://icml.cc/media/Po…202024/33886.png
NeurIPS
2,022
54,397
The Sample Complexity of One-Hidden-Layer Neural Networks
We study norm-based uniform convergence bounds for neural networks, aiming at a tight understanding of how these are affected by the architecture and type of norm constraint, for the simple class of scalar-valued one-hidden-layer networks, and inputs bounded in Euclidean norm. We begin by proving that in general, controlling the spectral norm of the hidden layer weight matrix is insufficient to get uniform convergence guarantees (independent of the network width), while a stronger Frobenius norm control is sufficient, extending and improving on previous work. Motivated by the proof constructions, we identify and analyze two important settings where (perhaps surprisingly) a mere spectral norm control turns out to be sufficient: First, when the network's activation functions are sufficiently smooth (with the result extending to deeper networks); and second, for certain types of convolutional networks. In the latter setting, we study how the sample complexity is additionally affected by parameters such as the amount of overlap between patches and the overall number of patches.
[ "Neural Networks", "Computational Learning Theory", "Sample Complexity", "Deep Learning Theory" ]
https://neurips.cc/media…202022/54397.png
ICML
2,024
35,631
Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics
Learning interpretable representations of neural dynamics at a population level is a crucial first step to understanding how observed neural activity relates to perception and behavior. Models of neural dynamics often focus on either low-dimensional projections of neural activity or on learning dynamical systems that explicitly relate to the neural state over time. We discuss how these two approaches are interrelated by considering dynamical systems as representative of flows on a low-dimensional manifold. Building on this concept, we propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data as a sparse combination of simpler, more interpretable components. Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time. The decomposed nature of the dynamics is more expressive than previous switched approaches for a given number of parameters and enables modeling of overlapping and non-stationary dynamics. In both continuous-time and discrete-time instructional examples, we demonstrate that our model effectively approximates the original system, learns efficient representations, and captures smooth transitions between dynamical modes. Furthermore, we highlight our model’s ability to efficiently capture and demix population dynamics generated from multiple independent subnetworks, a task that is computationally impractical for switched models. Finally, we apply our model to neural ``full brain'' recordings of C. elegans data, illustrating a diversity of dynamics that is obscured when classified into discrete states.
[ "Computational Neuroscience", "Neural Dynamics", "Dynamical Systems", "Time Series Analysis" ]
https://icml.cc/media/Po…202024/35631.png
NeurIPS
2,023
70,817
Emergent Communication for Rules Reasoning
Research on emergent communication between deep-learning-based agents has received extensive attention due to its inspiration for linguistics and artificial intelligence. However, previous attempts have centered on emerging communication in perception-oriented environmental settings, which force agents to describe low-level perceptual features within image or symbol contexts. In this work, inspired by the classic human reasoning test (namely Raven's Progressive Matrices), we propose the Reasoning Game, a cognition-oriented environment that encourages agents to reason about and communicate high-level rules, rather than perceived low-level contexts. Moreover, we propose 1) an unbiased dataset (namely rule-RAVEN) as a benchmark to avoid overfitting, and 2) a two-stage curriculum agent training method as a baseline for more stable convergence in the Reasoning Game, where contexts and semantics are bilaterally drifting. Experimental results show that, in the Reasoning Game, a semantically stable and compositional language emerges to solve reasoning problems. The emergent language helps agents apply the extracted rules to generalize to unseen context attributes, and to transfer between different context attributes or even tasks.
[ "Natural Language Processing", "Cognitive Science", "Computational Linguistics" ]
https://neurips.cc/media…202023/70817.png
NeurIPS
2,022
54,747
A Near-Optimal Best-of-Both-Worlds Algorithm for Online Learning with Feedback Graphs
We consider online learning with feedback graphs, a sequential decision-making framework where the learner's feedback is determined by a directed graph over the action set. We present a computationally-efficient algorithm for learning in this framework that simultaneously achieves near-optimal regret bounds in both stochastic and adversarial environments. The bound against oblivious adversaries is $\tilde{O} (\sqrt{\alpha T})$, where $T$ is the time horizon and $\alpha$ is the independence number of the feedback graph. The bound against stochastic environments is $O\big((\ln T)^2 \max_{S\in \mathcal I(G)} \sum_{i \in S} \Delta_i^{-1}\big)$ where $\mathcal I(G)$ is the family of all independent sets in a suitably defined undirected version of the graph and $\Delta_i$ are the suboptimality gaps. The algorithm combines ideas from the EXP3++ algorithm for stochastic and adversarial bandits and the EXP3.G algorithm for feedback graphs with a novel exploration scheme. The scheme, which exploits the structure of the graph to reduce exploration, is key to obtain best-of-both-worlds guarantees with feedback graphs. We also extend our algorithm and results to a setting where the feedback graphs are allowed to change over time.
[ "Online Learning", "Algorithmic Game Theory", "Graph Theory", "Sequential Decision Making" ]
https://neurips.cc/media…202022/54747.png
ICLR
2,023
11,651
Versatile Neural Processes for Learning Implicit Neural Representations
Representing a signal as a continuous function parameterized by a neural network (a.k.a. Implicit Neural Representations, INRs) has attracted increasing attention in recent years. Neural Processes (NPs), which model the distributions over functions conditioned on partial observations (context set), provide a practical solution for fast inference of continuous functions. However, existing NP architectures suffer from inferior modeling capability for complex signals. In this paper, we propose an efficient NP framework dubbed Versatile Neural Processes (VNP), which largely increases the capability of approximating functions. Specifically, we introduce a bottleneck encoder that produces fewer yet informative context tokens, relieving the high computational cost while providing high modeling capability. At the decoder side, we hierarchically learn multiple global latent variables that jointly model the global structure and the uncertainty of a function, enabling our model to capture the distribution of complex signals. We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals. Particularly, our method shows promise in learning accurate INRs w.r.t. a 3D scene without further finetuning.
[ "Neural Networks", "Implicit Neural Representations", "Signal Processing", "Computational Modeling" ]
https://iclr.cc/media/Po…202023/11651.png
ICML
2,024
33,866
Data-Efficient Learning via Clustering-Based Sensitivity Sampling: Foundation Models and Beyond
We study the data selection problem, whose aim is to select a small representative subset of data that can be used to efficiently train a machine learning model. We present a new data selection approach based on $k$-means clustering and sensitivity sampling. Assuming access to an embedding representation of the data with respect to which the model loss is Hölder continuous, our approach provably allows selecting a set of ``typical'' $k + 1/\varepsilon^2$ elements whose average loss corresponds to the average loss of the whole dataset, up to a multiplicative $(1\pm\varepsilon)$ factor and an additive $\varepsilon \lambda \Phi_k$, where $\Phi_k$ represents the $k$-means cost for the input embeddings and $\lambda$ is the Hölder constant. We furthermore demonstrate the performance and scalability of our approach on fine-tuning foundation models and show that it outperforms state-of-the-art methods. We also show how it can be applied to linear regression, leading to a new sampling strategy that surprisingly matches the performance of leverage score sampling, while being conceptually simpler and more scalable.
[ "Data Selection", "Clustering", "Sensitivity Sampling", "Foundation Models", "Model Training Efficiency", "Linear Regression", "Data Efficiency" ]
https://icml.cc/media/Po…202024/33866.png
ICLR
2,024
20,666
Outlier-Robust Subsampling Techniques for Persistent Homology
In recent years, persistent homology has been successfully applied to real-world data in many different settings. Despite significant computational advances, persistent homology algorithms do not yet scale to large datasets, preventing interesting applications. One approach to address computational issues posed by persistent homology is to select a set of landmarks by subsampling from the data. Currently, these landmark points are chosen either at random or using the maxmin algorithm. Neither is ideal: random selection tends to favour dense areas of the data, while the maxmin algorithm is very sensitive to noise. Here, we propose a novel approach to select landmarks specifically for persistent homology that preserves coarse topological information of the original dataset. Our method is motivated by the Mayer-Vietoris sequence and requires only local persistent homology calculations, thus enabling efficient computation. We test our landmarks on artificial data sets which contain different levels of noise and compare them to standard landmark selection techniques. We demonstrate that, for low sampling densities in noisy data, our landmark selection is more robust to outliers than both standard methods and a subsampling technique based on an outlier-robust version of the k-means algorithm.
[ "Computational Topology", "Data Analysis", "Computational Geometry", "Algorithm Design", "Data Science" ]
https://iclr.cc/media/Po…202024/20666.png
ICML
2,024
32,986
Pre-Training Protein Bi-level Representation Through Span Mask Strategy On 3D Protein Chains
In recent years, there has been a surge in the development of 3D structure-based pre-trained protein models, representing a significant advancement over pre-trained protein language models in various downstream tasks. However, most existing structure-based pre-trained models primarily focus on the residue level, i.e., alpha carbon atoms, while ignoring other atoms like side chain atoms. We argue that modeling proteins at both residue and atom levels is important since the side chain atoms can also be crucial for numerous downstream tasks, for example, molecular docking. Nevertheless, we find that naively combining residue and atom information during pre-training typically fails. We identify that a key reason is the information leakage caused by the inclusion of atom structure in the input, which renders residue-level pre-training tasks trivial and results in insufficiently expressive residue representations. To address this issue, we introduce a span mask pre-training strategy on 3D protein chains to learn meaningful representations of both residues and atoms. This leads to a simple yet effective approach to learning protein representations suitable for diverse downstream tasks. Extensive experimental results on binding site prediction and function prediction tasks demonstrate that our proposed pre-training approach significantly outperforms other methods. Our code will be made public.
[ "Computational Biology", "Bioinformatics", "Structural Biology", "Machine Learning in Biology", "Protein Modeling" ]
https://icml.cc/media/Po…202024/32986.png
NeurIPS
2,023
76,938
Prototype-oriented Unsupervised Change Detection for Disaster Management
Climate change has led to an increased frequency of natural disasters such as floods and cyclones. This emphasizes the importance of effective disaster monitoring. In response, the remote sensing community has explored change detection methods. These methods are primarily categorized into supervised techniques, which yield precise results but come with high labeling costs, and unsupervised techniques, which eliminate the need for labeling but involve intricate hyperparameter tuning. To address these challenges, we propose a novel unsupervised change detection method named Prototype-oriented Unsupervised Change Detection for Disaster Management (PUCD). PUCD captures changes by comparing features from pre-event, post-event, and prototype-oriented change synthesis images via a foundational model, and refines results using the Segment Anything Model (SAM). Although PUCD is an unsupervised change detection method, it does not require complex hyperparameter tuning. We evaluate the PUCD framework on the LEVIR-Extension dataset and the disaster dataset, and it achieves state-of-the-art performance compared to other methods on the LEVIR-Extension dataset.
[ "Remote Sensing", "Disaster Management", "Unsupervised Learning", "Change Detection", "Climate Change Adaptation" ]
https://neurips.cc/media…202023/76938.png
ICLR
2,024
18,848
Bilevel Optimization under Unbounded Smoothness: A New Algorithm and Convergence Analysis
Bilevel optimization is an important formulation for many machine learning problems, such as meta-learning and hyperparameter optimization. Current bilevel optimization algorithms assume that the gradient of the upper-level function is Lipschitz (i.e., the upper-level function has a bounded smoothness parameter). However, recent studies reveal that certain neural networks such as recurrent neural networks (RNNs) and long-short-term memory networks (LSTMs) exhibit potential unbounded smoothness, rendering conventional bilevel optimization algorithms unsuitable for these neural networks. In this paper, we design a new bilevel optimization algorithm, namely BO-REP, to address this challenge. This algorithm updates the upper-level variable using normalized momentum and incorporates two novel techniques for updating the lower-level variable: \textit{initialization refinement} and \textit{periodic updates}. Specifically, once the upper-level variable is initialized, a subroutine is invoked to obtain a refined estimate of the corresponding optimal lower-level variable, and the lower-level variable is updated only after every specific period instead of each iteration. When the upper-level problem is nonconvex and unbounded smooth, and the lower-level problem is strongly convex, we prove that our algorithm requires $\widetilde{O}(1/\epsilon^4)$ \footnote{Here $\widetilde{O}(\cdot)$ compresses logarithmic factors of $1/\epsilon$ and $1/\delta$, where $\delta\in(0,1)$ denotes the failure probability.} iterations to find an $\epsilon$-stationary point in the stochastic setting, where each iteration involves calling a stochastic gradient or Hessian-vector product oracle. Notably, this result matches the state-of-the-art complexity results under the bounded smoothness setting and without mean-squared smoothness of the stochastic gradient, up to logarithmic factors. Our proof relies on novel technical lemmas for the periodically updated lower-level variable, which are of independent interest. Our experiments on hyper-representation learning, hyperparameter optimization, and data hyper-cleaning for text classification tasks demonstrate the effectiveness of our proposed algorithm. The code is available at [https://github.com/MingruiLiu-ML-Lab/Bilevel-Optimization-under-Unbounded-Smoothness](https://github.com/MingruiLiu-ML-Lab/Bilevel-Optimization-under-Unbounded-Smoothness).
[ "Bilevel Optimization", "Meta-Learning", "Hyperparameter Optimization", "Neural Networks", "Algorithm Design", "Convergence Analysis" ]
https://iclr.cc/media/Po…202024/18848.png
ICML
2,022
17,231
Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization
Black-box model-based optimization (MBO) problems, where the goal is to find a design input that maximizes an unknown objective function, are ubiquitous in a wide range of domains, such as the design of proteins, DNA sequences, aircraft, and robots. Solving model-based optimization problems typically requires actively querying the unknown objective function on design proposals, which means physically building the candidate molecule, aircraft, or robot, testing it, and storing the result. This process can be expensive and time consuming, and one might instead prefer to optimize for the best design using only the data one already has. This setting---called offline MBO---poses substantial and different algorithmic challenges than more commonly studied online techniques. A number of recent works have demonstrated success with offline MBO for high-dimensional optimization problems using high-capacity deep neural networks. However, the lack of standardized benchmarks in this emerging field is making progress difficult to track. To address this, we present Design-Bench, a benchmark for offline MBO with a unified evaluation protocol and reference implementations of recent methods. Our benchmark includes a suite of diverse and realistic tasks derived from real-world optimization problems in biology, materials science, and robotics that present distinct challenges for offline MBO. Our benchmark and reference implementations are released at github.com/rail-berkeley/design-bench and github.com/rail-berkeley/design-baselines.
[ "Model-Based Optimization", "Machine Learning Benchmarks", "Offline Learning", "High-Dimensional Optimization", "Deep Learning", "Robotics", "Computational Biology", "Materials Science" ]
https://icml.cc/media/Po…2ee9e3f05174.png
ICLR
2,022
7,124
The Hidden Convex Optimization Landscape of Regularized Two-Layer ReLU Networks: an Exact Characterization of Optimal Solutions
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints. Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into convex spaces. Given the set of solutions of our convex optimization program, we show how to construct exactly the entire set of optimal neural networks. We provide a detailed characterization of this optimal set and its invariant transformations. As additional consequences of our convex perspective, (i) we establish that Clarke stationary points found by stochastic gradient descent correspond to the global optimum of a subsampled convex problem, (ii) we provide a polynomial-time algorithm for checking if a neural network is a global minimum of the training loss, (iii) we provide an explicit construction of a continuous path between any neural network and the global minimum of its sublevel set, and (iv) we characterize the minimal size of the hidden layer so that the neural network optimization landscape has no spurious valleys. Overall, we provide a rich framework for studying the landscape of neural network training loss through convexity.
[ "Neural Networks", "Convex Optimization", "Optimization Theory" ]
https://iclr.cc/media/Po…989e7b9907fc.png
ICLR
2,024
18,878
Accelerating Sinkhorn algorithm with sparse Newton iterations
Computing the optimal transport distance between statistical distributions is a fundamental task in machine learning. One remarkable recent advancement is entropic regularization and the Sinkhorn algorithm, which utilizes only matrix scaling and guarantees an approximated solution with near-linear runtime. Despite the success of the Sinkhorn algorithm, its runtime may still be slow due to the potentially large number of iterations needed for convergence. To achieve possibly super-exponential convergence, we introduce Sinkhorn-Newton-Sparse (SNS), an extension to the Sinkhorn algorithm, by introducing early stopping for the matrix scaling steps and a second stage featuring a Newton-type subroutine. Adopting the variational viewpoint that the Sinkhorn algorithm maximizes a concave Lyapunov potential, we offer the insight that the Hessian matrix of the potential function is approximately sparse. Sparsification of the Hessian results in a fast $O(n^2)$ per-iteration complexity, the same as the Sinkhorn algorithm. In terms of total iteration count, we observe that the SNS algorithm converges orders of magnitude faster across a wide range of practical cases, including optimal transportation between empirical distributions and calculating the Wasserstein $W_1, W_2$ distance of discretized continuous densities. The empirical performance is corroborated by a rigorous bound on the approximate sparsity of the Hessian matrix.
[ "Optimization", "Numerical Analysis", "Computational Mathematics", "Algorithms" ]
https://iclr.cc/media/Po…202024/18878.png
ICML
2,024
34,249
Two Fists, One Heart: Multi-Objective Optimization Based Strategy Fusion for Long-tailed Learning
Real-world data generally follows a long-tailed distribution, which prevents traditional high-performance training strategies from delivering their usual effectiveness. Various insights have been proposed to alleviate the challenges posed by this distribution. However, some observations indicate that models trained on long-tailed distributions always show a trade-off between the performance of head and tail classes. For a profound understanding of the trade-off, we first theoretically analyze the trade-off problem in long-tailed learning and creatively transform it into a multi-objective optimization (MOO) problem. Motivated by these analyses, we propose the idea of strategy fusion for MOO long-tailed learning and point out the potential conflict problem. We further design a Multi-Objective Optimization based Strategy Fusion (MOOSF), which effectively resolves conflicts, and achieves an efficient fusion of heterogeneous strategies. Comprehensive experiments on mainstream datasets show that even the simplest strategy fusion can outperform complex long-tailed strategies. More importantly, it provides a new perspective for generalized long-tailed learning. The code is available in the accompanying supplementary materials.
[ "Long-tailed Learning", "Multi-Objective Optimization", "Data Distribution Analysis" ]
https://icml.cc/media/Po…202024/34249.png
NeurIPS
2,023
72,239
Exact Bayesian Inference on Discrete Models via Probability Generating Functions: A Probabilistic Programming Approach
We present an exact Bayesian inference method for discrete statistical models, which can find exact solutions to a large class of discrete inference problems, even with infinite support and continuous priors. To express such models, we introduce a probabilistic programming language that supports discrete and continuous sampling, discrete observations, affine functions, (stochastic) branching, and conditioning on discrete events. Our key tool is probability generating functions: they provide a compact closed-form representation of distributions that are definable by programs, thus enabling the exact computation of posterior probabilities, expectation, variance, and higher moments. Our inference method is provably correct and fully automated in a tool called Genfer, which uses automatic differentiation (specifically, Taylor polynomials), but does not require computer algebra. Our experiments show that Genfer is often faster than the existing exact inference tools PSI, Dice, and Prodigy. On a range of real-world inference problems that none of these exact tools can solve, Genfer's performance is competitive with approximate Monte Carlo methods, while avoiding approximation errors.
[ "Bayesian Inference", "Probabilistic Programming", "Discrete Statistical Models", "Probability Generating Functions", "Computational Statistics", "Statistical Computing", "Automatic Differentiation" ]
https://neurips.cc/media…202023/72239.png
ICML
2,024
33,261
Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
We introduce an Outlier-Efficient Modern Hopfield Model (termed OutEffHop) and use it to address the outlier inefficiency problem of training gigantic transformer-based models. Our main contribution is a novel associative memory model facilitating outlier-efficient associative memory retrievals. Interestingly, this memory model manifests a model-based interpretation of an outlier-efficient attention mechanism (Softmax_1): it is an approximation of the memory retrieval process of OutEffHop. Methodologically, this allows us to introduce novel outlier-efficient Hopfield layers as powerful alternatives to traditional attention mechanisms, with superior post-quantization performance. Theoretically, the Outlier-Efficient Modern Hopfield Model retains and improves the desirable properties of standard modern Hopfield models, including fixed point convergence and exponential storage capacity. Empirically, we demonstrate the efficacy of the proposed model across large-scale transformer-based and Hopfield-based models (including BERT, OPT, ViT, and STanHop-Net), benchmarking against state-of-the-art methods like Clipped_Softmax and Gated_Attention. Notably, OutEffHop achieves an average reduction of 22+% in average kurtosis and 26+% in the maximum infinity norm of model outputs across four models. Code is available at GitHub; future updates are on arXiv.
[ "Neural Networks", "Transformer Models", "Attention Mechanisms", "Associative Memory Models", "Model Optimization", "Quantization Techniques" ]
https://icml.cc/media/Po…202024/33261.png
NeurIPS
2,022
54,798
Bessel Equivariant Networks for Inversion of Transmission Effects in Multi-Mode Optical Fibres
We develop a new type of model for solving the task of inverting the transmission effects of multi-mode optical fibres through the construction of an $\mathrm{SO}^{+}(2,1)$-equivariant neural network. This model takes advantage of the azimuthal correlations known to exist in fibre speckle patterns and naturally accounts for the difference in spatial arrangement between input and speckle patterns. In addition, we use a second post-processing network to remove circular artifacts, fill gaps, and sharpen the images, which is required due to the nature of optical fibre transmission. This two-stage approach allows for the inspection of the predicted images produced by the more robust physically motivated equivariant model, which could be useful in a safety-critical application, or by the output of both models, which produces high-quality images. Further, this model can scale to previously unachievable resolutions of imaging with multi-mode optical fibres and is demonstrated on $256 \times 256$ pixel images. This is a result of improving the trainable parameter requirement from $\mathcal{O}(N^4)$ to $\mathcal{O}(m)$, where $N$ is the pixel size and $m$ is the number of fibre modes. Finally, this model generalises to new images, outside of the set of training data classes, better than previous models.
[ "Optical Fiber Communications", "Neural Networks", "Computational Imaging", "Signal Processing" ]
https://neurips.cc/media…202022/54798.png
ICML
2,024
37,149
Resource-constrained Neural Architecture Search on Language Models: A Case Study
Transformer-based language models have achieved milestones in natural language processing, but they come with challenges, mainly due to their computational footprint. Applying automated machine learning to these models can democratize their use and foster further research and development. We present a case study using neural architecture search (NAS) to optimize DistilBERT in a resource-constrained environment with a $4\,000$ GPU-hour budget. We employ an evolutionary algorithm that uses a two-level hierarchical search space and a segmented pipeline for component enhancement. While a larger compute budget would be required to obtain state-of-the-art results, our results show efficient exploration and a strong correlation between pre-training and downstream performance. This suggests that pre-training validation could be applicable as a cutoff criterion during model training. Finally, our learning curve analysis emphasizes the potential for efficient resource allocation through the adoption of an epoch-level stopping strategy, thus directing resources towards more promising candidate models. Future work should focus on scaling these insights to larger language models and more diverse tasks.
[ "Natural Language Processing", "Neural Architecture Search ", "Automated Machine Learning ", "Resource-Constrained Computing", "Machine Learning Optimization", "Evolutionary Algorithms" ]
https://icml.cc/media/Po…202024/37149.png
ICLR
2,024
18,344
Pareto Deep Long-Tailed Recognition: A Conflict-Averse Solution
Deep long-tailed recognition (DLTR) has attracted much attention due to its close connection to realistic scenarios. Recent advances have focused on re-balancing across various aspects, e.g., sampling strategy, loss re-weighting, logit adjustment, and input/parameter perturbation, to name a few. However, few studies have considered dynamic re-balancing to address intrinsic optimization conflicts. In this paper, we first empirically argue that the optimizations of mainstream DLTR methods are still dominated by some categories (e.g., major ones) due to a fixed re-balancing strategy. Thus, they fail to deal with gradient conflicts among categories, which naturally motivates seeking Pareto optimal solutions. Unfortunately, a naive integration of multi-objective optimization (MOO) with DLTR methods is not applicable due to the gap between multi-task learning (MTL) and DLTR, and can in turn lead to class-specific feature degradation. Thus, we provide effective alternatives by decoupling MOO-based MTL from a temporal rather than structural perspective, and enhancing it by optimizing a variability collapse loss motivated by the derived MOO-based DLTR generalization bound. Moreover, we resort to anticipating worst-case optimization with theoretical insights to further ensure convergence. We build a Pareto deep long-tailed recognition method termed PLOT upon the proposed MOO framework. Extensive evaluations demonstrate that our method not only generally improves mainstream pipelines, but also achieves an augmented version to realize state-of-the-art performance across multiple benchmarks.
[ "Deep Learning", "Computer Vision", "Multi-Objective Optimization", "Imbalanced Data Handling" ]
https://iclr.cc/media/Po…202024/18344.png
ICML
2,023
27,881
Define, Evaluate, and Improve Task-Oriented Cognitive Capabilities for Instruction Generation Models
Recent work studies the cognitive capabilities of language models through psychological tests designed for humans. While these studies are helpful for understanding the general capabilities of these models, there is no guarantee that a model possessing sufficient capabilities to pass those tests would actually use those capabilities in performing real-life tasks. In this work, we formulate task-oriented cognitive capabilities, which are human-like cognitive capabilities that language models leverage to perform tasks. These capabilities are (i) the ability to quickly generate good candidate utterances (the search capability), and (ii) the ability to predict how a listener interprets those utterances and choose the most appropriate one (the pragmatic capability). We design an evaluation scheme for comparing these capabilities of a language model with those of a human. Applying this scheme to examine various models in a navigation instruction generation problem, we find that their pragmatic capability is severely lacking. This insight leads us to augment them with better models of the listener and obtain a significant boost of 11% in success rate in guiding real humans. Our work advocates for having a principled procedure for aligning language models with humans that involves (i) formulating task-oriented capabilities, (ii) devising a method to quantify their deficiency, and (iii) iteratively improving them.
[ "Natural Language Processing", "Cognitive Computing", "Human-Computer Interaction", "Artificial Intelligence ", "Language Models", "Task-Oriented Systems" ]
https://icml.cc/media/Po…202023/27881.png
ICLR
2,024
21,543
EU Climate Change News Index: Forecasting EU ETS prices with online news
Carbon emission allowance prices have been rapidly increasing in the EU since 2018 and accurate forecasting of EU Emissions Trading System (ETS) prices has become essential. This paper proposes a novel method to generate alternative predictors for daily ETS price returns using relevant online news information. We devise the EU Climate Change News Index by calculating the term frequency–inverse document frequency (TF–IDF) feature for climate change-related keywords. The index is capable of tracking the ongoing debate about climate change in the EU. Finally, we show that incorporating the index in a simple predictive model significantly improves forecasts of ETS price returns.
[ "Environmental Economics", "Climate Change Policy", "Financial Forecasting", "Machine Learning in Finance", "Text Mining and Natural Language Processing" ]
https://iclr.cc/media/Po…202024/21543.png
ICML
2,023
25,899
On the Effectiveness of Sharpness-Aware Minimization with Large Mini-batches
Training with large mini-batches can increase hardware utilization and reduce training time. However, recent studies suggest that using large mini-batches often yields convergence to sharp minima, leading to poor generalization. In this work, we investigate the effectiveness of sharpness minimization for large-batch training. Specifically, we evaluate the sharpness-aware minimization (SAM) algorithm and compare it to the standard stochastic gradient descent (SGD) under fixed step size settings. We perform exhaustive grid search to set optimal hyperparameters in this process. As a result, we find that SAM consistently outperforms SGD, but undergoes critical performance degradation in the large-batch training regime.
[ "Optimization", "Deep Learning", "Neural Networks" ]
https://icml.cc/media/Po…202023/25899.png
NeurIPS
2,022
57,618
Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement
Object rearrangement is a challenge for embodied agents because solving these tasks requires generalizing across a combinatorially large set of underlying entities that take the value of object states. Worse, these entities are often unknown and must be inferred from sensory percepts. We present a hierarchical abstraction approach to uncover these underlying entities and achieve combinatorial generalization from unstructured inputs. By constructing a factorized transition graph over clusters of object representations inferred from pixels, we show how to learn a correspondence between intervening on states of entities in the agent's model and acting on objects in the environment. We use this correspondence to develop a method for control that generalizes to different numbers and configurations of objects, which outperforms current offline deep RL methods when evaluated on a set of simulated rearrangement and stacking tasks.
[ "Robotics", "Computer Vision", "Reinforcement Learning" ]
https://neurips.cc/media…202022/57618.png
ICML
2,024
33,572
Multiplicative Weights Update, Area Convexity and Random Coordinate Descent for Densest Subgraph Problems
We study the densest subgraph problem and give algorithms via multiplicative weights update and area convexity that converge in $O\left(\frac{\log m}{\epsilon^{2}}\right)$ and $O\left(\frac{\log m}{\epsilon}\right)$ iterations, respectively, both with nearly-linear time per iteration. Compared with the work by Bahmani et al. (2014), our MWU algorithm uses a very different and much simpler procedure for recovering the dense subgraph from the fractional solution and does not employ a binary search. Compared with the work by Boob et al. (2019), our algorithm via area convexity improves the iteration complexity by a factor $\Delta$---the maximum degree in the graph, and matches the fastest theoretical runtime currently known via flows (Chekuri et al., 2022) in total time. Next, we study the dense subgraph decomposition problem and give the first practical iterative algorithm with linear convergence rate $O\left(mn\log\frac{1}{\epsilon}\right)$ via accelerated random coordinate descent. This significantly improves over $O\left(\frac{m\sqrt{mn\Delta}}{\epsilon}\right)$ time of the FISTA-based algorithm by Harb et al. (2022). In the high precision regime $\epsilon\ll\frac{1}{n}$ where we can even recover the exact solution, our algorithm has a total runtime of $O\left(mn\log n\right)$, matching the state of the art exact algorithm via parametric flows (Gallo et al., 1989). Empirically, we show that this algorithm is very practical and scales to very large graphs, and its performance is competitive with widely used methods that have significantly weaker theoretical guarantees.
[ "Algorithms", "Graph Theory", "Optimization", "Computational Complexity", "Data Structures and Algorithms" ]
https://icml.cc/media/Po…202024/33572.png