| conference (string, 3 classes) | year (int32, 2.02k–2.02k) | paper_id (int32, 5.89k–80k) | title (string, 12–188 chars) | abstract (string, 1–4.65k chars) | topics (list, 1–20 items) | image_url (string, 54–89 chars) |
|---|---|---|---|---|---|---|
| ICML | 2024 | 33337 | Residual-Conditioned Optimal Transport: Towards Structure-Preserving Unpaired and Paired Image Restoration | Deep learning-based image restoration methods generally struggle with faithfully preserving the structures of the original image. In this work, we propose a novel Residual-Conditioned Optimal Transport (RCOT) approach, which models image restoration as an optimal transport (OT) problem for both unpaired and paired settings, introducing the transport residual as a unique degradation-specific cue for both the transport cost and the transport map. Specifically, we first formalize a Fourier residual-guided OT objective by incorporating the degradation-specific information of the residual into the transport cost. We further design the transport map as a two-pass RCOT map that comprises a base model and a refinement process, in which the transport residual is computed by the base model in the first pass and then encoded as a degradation-specific embedding to condition the second-pass restoration. By duality, the RCOT problem is transformed into a minimax optimization problem, which can be solved by adversarially training neural networks. Extensive experiments on multiple restoration tasks show that RCOT achieves competitive performance in terms of both distortion measures and perceptual quality, restoring images with more faithful structures as compared with state-of-the-art methods. | ["Image Restoration", "Deep Learning", "Optimal Transport", "Computer Vision", "Adversarial Training"] | |
| ICLR | 2023 | 10879 | Transferable Unlearnable Examples | With more people publishing their personal data online, unauthorized data usage has become a serious concern. Unlearnable examples strategies have been introduced to prevent third parties from training on the data without permission: they add perturbations to users’ data before publishing, so that models trained on the perturbed published dataset are invalidated. These perturbations are generated for a specific training setting and a target dataset; however, their unlearnable effects significantly decrease when used in other training settings or on other datasets. To tackle this issue, we propose a novel unlearnable strategy based on Class-wise Separability Discriminant (CSD), which boosts the transferability of the unlearnable perturbations by enhancing linear separability. Extensive experiments demonstrate the transferability of the unlearnable examples crafted by our proposed method across training settings and datasets. | ["Data Privacy", "Adversarial Machine Learning", "Data Security", "Transfer Learning"] | |
| NeurIPS | 2023 | 72788 | LinGCN: Structural Linearized Graph Convolutional Network for Homomorphically Encrypted Inference | The growth of Graph Convolution Network (GCN) model sizes has revolutionized numerous applications, surpassing human performance in areas such as personal healthcare and financial systems. The deployment of GCNs in the cloud raises privacy concerns due to potential adversarial attacks on client data. To address security concerns, Privacy-Preserving Machine Learning (PPML) using Homomorphic Encryption (HE) secures sensitive client data. However, it introduces substantial computational overhead in practical applications. To tackle these challenges, we present LinGCN, a framework designed to reduce multiplication depth and optimize the performance of HE-based GCN inference. LinGCN is structured around three key elements: (1) A differentiable structural linearization algorithm, complemented by a parameterized discrete indicator function, co-trained with model weights to meet the optimization goal. This strategy promotes fine-grained node-level non-linear location selection, resulting in a model with minimized multiplication depth. (2) A compact node-wise polynomial replacement policy with a second-order trainable activation function, steered towards superior convergence by a two-level distillation approach from an all-ReLU based teacher model. (3) An enhanced HE solution that enables finer-grained operator fusion for node-wise activation functions, further reducing multiplication level consumption in HE-based inference. Our experiments on the NTU-XVIEW skeleton joint dataset reveal that LinGCN excels in latency, accuracy, and scalability for homomorphically encrypted inference, outperforming solutions such as CryptoGCN. Remarkably, LinGCN achieves a 14.2× latency speedup relative to CryptoGCN, while preserving an inference accuracy of ~75% and notably reducing multiplication depth. Additionally, LinGCN proves scalable for larger models, delivering a substantial 85.78% accuracy with 6371s latency, a 10.47% accuracy improvement over CryptoGCN. | ["Privacy-Preserving Machine Learning", "Homomorphic Encryption", "Graph Convolutional Networks", "Secure Inference", "Cryptography in Machine Learning", "Cloud Computing Security"] | |
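The node-wise polynomial replacement in element (2) of the LinGCN row above is concrete enough to sketch. Below is a minimal PyTorch module for a trainable second-order activation of the kind HE pipelines substitute for ReLU; the per-node parameterization follows the abstract, but the module name and coefficient initialization are illustrative assumptions, not LinGCN's actual design.

```python
import torch
import torch.nn as nn

class QuadraticActivation(nn.Module):
    """Trainable second-order polynomial activation, y = a*x^2 + b*x + c.

    HE-friendly stand-in for ReLU: evaluating it homomorphically costs one
    ciphertext-ciphertext multiplication (depth 1), unlike the comparison
    a ReLU needs. Coefficients are per node ("node-wise"), as in the
    abstract; the initialization below is an illustrative guess.
    """

    def __init__(self, num_nodes: int):
        super().__init__()
        self.a = nn.Parameter(torch.full((num_nodes, 1), 0.01))  # quadratic term
        self.b = nn.Parameter(torch.ones(num_nodes, 1))          # ~identity at init
        self.c = nn.Parameter(torch.zeros(num_nodes, 1))         # bias term

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, feature_dim); per-node coefficients broadcast over features
        return self.a * x * x + self.b * x + self.c

act = QuadraticActivation(num_nodes=25)   # e.g., 25 skeleton joints
h = torch.randn(25, 64)                   # node features
print(act(h).shape)                       # torch.Size([25, 64])
```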
| ICML | 2023 | 25266 | Probabilistic Contrastive Learning Recovers the Correct Aleatoric Uncertainty of Ambiguous Inputs | Contrastively trained encoders have recently been proven to invert the data-generating process: they encode each input, e.g., an image, into the true latent vector that generated the image (Zimmermann et al., 2021). However, real-world observations often have inherent ambiguities. For instance, images may be blurred or only show a 2D view of a 3D object, so multiple latents could have generated them. This makes the true posterior for the latent vector probabilistic with heteroscedastic uncertainty. In this setup, we extend the common InfoNCE objective and encoders to predict latent distributions instead of points. We prove that these distributions recover the correct posteriors of the data-generating process, including its level of aleatoric uncertainty, up to a rotation of the latent space. In addition to providing calibrated uncertainty estimates, these posteriors allow the computation of credible intervals in image retrieval. They comprise images with the same latent as a given query, subject to its uncertainty. Code is at https://github.com/mkirchhof/ProbabilisticContrastiveLearning. | ["Uncertainty Quantification", "Probabilistic Modeling", "Computer Vision", "Representation Learning"] | |
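A minimal sketch of the idea in the row above, predicting latent distributions instead of points: the encoder outputs a mean and a per-input scale, and an InfoNCE-style objective scores positives against negatives with an uncertainty-aware similarity. The negative-scaled-distance similarity below is an illustrative assumption; the paper's exact probabilistic objective differs.

```python
import torch
import torch.nn.functional as F

def prob_infonce(mu, log_sigma, mu_pos, log_sigma_pos, temperature=0.1):
    """InfoNCE over predicted latent *distributions* (sketch).

    mu, mu_pos:               (B, D) predicted means for two views of B inputs
    log_sigma, log_sigma_pos: (B, 1) predicted log-scales (aleatoric uncertainty)

    Similarity is a negative squared distance scaled by the summed predicted
    variances, so ambiguous inputs (large sigma) are penalized less sharply
    for being far from their positive. Illustrative stand-in only.
    """
    sigma2 = (2.0 * log_sigma).exp() + (2.0 * log_sigma_pos).exp().T  # (B, B)
    dist2 = torch.cdist(mu, mu_pos).pow(2)                            # (B, B)
    logits = -(dist2 / sigma2) / temperature
    labels = torch.arange(mu.size(0))            # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

B, D = 8, 16
loss = prob_infonce(torch.randn(B, D), torch.zeros(B, 1),
                    torch.randn(B, D), torch.zeros(B, 1))
print(float(loss))
```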
| NeurIPS | 2022 | 58451 | Towards a genealogical approach to explaining algorithmic bias | Specifically in the FAccT literature, algorithmic bias tends to be characterized as a problem in its consequences rather than as evidence of the underlying societal and technical conditions that (re)produce it. In this context, explainability (XAI) tools are proposed as a solution to gauge these conditions (e.g. SHAP and LIME as well as libraries such as What If or IBM360). While relevant, these tools tend to approach these conditions unrealistically: as static, cumulative, and in terms of their causal import. Differently, I here propose that these tools be informed by a genealogical approach to bias. Following the tradition of Nietzsche and Foucault, a genealogy is “a form of historical critique, designed to overturn our norms by revealing their origins” (Hill, 2016, p.1). In this case, I understand genealogy as a form of epistemic critique, designed to understand algorithmic bias in its consequences by focusing on the conditions for its possibility. In this respect, I propose to question XAI tools as much as to use them as questions, rather than as replies to the problem of bias as skewed performance. This work puts forward two proposals. First, we propose a framework to index XAI tools according to their relevance for bias as evidence. We identify feature importance methods (e.g. SHAP) and rule-list methods as relevant for procedural fairness, while we identify counterfactual methods as relevant to a) agency, in terms of suggesting what can be changed to affect an outcome and b) building a prima facie case for discrimination. Second, we propose a rubric of questions to test these tools in their abilities to detect so-called “bias-shifts”. Overall, the aim is to think about XAI approaches not as mere technical tools but as questions on skewed performance for evidence gathering with fairness implications. | ["Algorithmic Bias", "Explainable Artificial Intelligence", "Fairness, Accountability, and Transparency in Technology", "Epistemic Critique", "Procedural Fairness", "Discrimination Detection", "Feature Importance Methods", "Counterfactual Methods"] | |
| NeurIPS | 2023 | 75703 | Rapid Prediction of Two-dimensional Airflow in an Operating Room using Scientific Machine Learning | We consider the problem of using scientific machine learning (SciML) to rapidly predict solutions to systems of nonlinear partial differential equations (PDEs) defined over complex geometries. In particular, we focus on modeling how airflow in operating rooms (ORs) is affected as the position of an object within the OR varies. We develop data-driven and physics-informed operator-learning models based on the deep operator network (DeepONet) architecture. The DeepONet models are able to accurately and rapidly predict airflow solutions to novel parameter configurations, and they surpass the accuracy of a random forest (RF) baseline. Interestingly, we find that physics-informed regularization (PIR) does not enhance model accuracy, partially because of misspecification of the physical prior compared to the data’s governing equations. Existing SciML models struggle in predicting flow when complex geometries determine localized behavior. | ["Scientific Machine Learning", "Computational Fluid Dynamics", "Partial Differential Equations", "Deep Learning", "Operator Learning", "Airflow Modeling", "Biomedical Engineering", "Applied Mathematics"] | |
| NeurIPS | 2022 | 60356 | VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training | We introduce Value-Implicit Pre-training (VIP), a self-supervised pre-trained visual representation capable of generating dense and smooth reward functions for unseen robotic tasks. VIP casts representation learning from human videos as an offline goal-conditioned reinforcement learning problem and derives a self-supervised dual goal-conditioned value-function objective that does not depend on actions, enabling pre-training on unlabeled human videos. Theoretically, VIP can be understood as a novel implicit time contrastive learning that makes for a temporally smooth embedding, which enables the value function to be implicitly defined via the embedding distance, which can be used as the reward function for any downstream task specified through goal images. Trained on large-scale Ego4D human videos and without any fine-tuning on task-specific robot data, VIP's frozen representation can provide dense visual reward for an extensive set of simulated and real-robot tasks, enabling diverse reward-based policy learning methods, including visual trajectory optimization and online/offline RL, and significantly outperform all prior pre-trained representations. Notably, VIP can enable few-shot offline RL on a suite of real-world robot tasks with as few as 20 trajectories. | ["Computer Vision", "Reinforcement Learning", "Robotics", "Self-Supervised Learning", "Representation Learning"] | |
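The "value function implicitly defined via the embedding distance" in the VIP row above has a direct reading: with a frozen encoder φ and a goal image g, reward an observation by how much a step reduces embedding distance to the goal. A sketch under that reading; the toy encoder and the potential-difference shaping are illustrative assumptions, not VIP's released reward.

```python
import torch
import torch.nn as nn

def embedding_distance_reward(phi: nn.Module, obs, next_obs, goal):
    """Dense reward from a frozen visual encoder (sketch).

    Treats V(s; g) = -||phi(s) - phi(g)|| as an implicit goal-conditioned
    value and uses the potential difference V(s') - V(s) as the reward —
    a common reading of embedding-distance rewards; VIP's exact shaping
    may differ.
    """
    with torch.no_grad():
        e_s, e_s2, e_g = phi(obs), phi(next_obs), phi(goal)
    v_s = -torch.norm(e_s - e_g, dim=-1)
    v_s2 = -torch.norm(e_s2 - e_g, dim=-1)
    return v_s2 - v_s  # positive when the step moves closer to the goal

phi = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # toy stand-in encoder
obs, nxt, goal = (torch.randn(1, 3, 32, 32) for _ in range(3))
print(float(embedding_distance_reward(phi, obs, nxt, goal)))
```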
| ICLR | 2023 | 12197 | Fairness-aware Contrastive Learning with Partially Annotated Sensitive Attributes | Learning high-quality representation is important and essential for visual recognition. Unfortunately, traditional representation learning suffers from fairness issues since the model may learn information of sensitive attributes. Recently, a series of studies have been proposed to improve fairness by explicitly decorrelating target labels and sensitive attributes. Most of these methods, however, rely on the assumption that fully annotated labels on the target variable and sensitive attributes are available, which is unrealistic due to the expensive annotation cost. In this paper, we investigate a novel and practical problem of Fair Unsupervised Representation Learning with Partially annotated Sensitive labels (FURL-PS). FURL-PS has two key challenges: 1) how to make full use of the samples that are not annotated with sensitive attributes; 2) how to eliminate bias in the dataset without target labels. To address these challenges, we propose a general Fairness-aware Contrastive Learning (FairCL) framework consisting of two stages. Firstly, we generate contrastive sample pairs, which share the same visual information apart from sensitive attributes, for each instance in the original dataset. In this way, we construct a balanced and unbiased dataset. Then, we execute fair contrastive learning by closing the distance between representations of contrastive sample pairs. Besides, we also propose an unsupervised way to balance the utility and fairness of learned representations by feature reweighting. Extensive experimental results illustrate the effectiveness of our method in terms of fairness and utility, even with very limited sensitive attributes and serious data bias. | ["Fairness in AI", "Representation Learning", "Contrastive Learning", "Unsupervised Learning", "Computer Vision"] | |
| NeurIPS | 2022 | 53013 | Deep Compression of Pre-trained Transformer Models | Pre-trained transformer models have achieved remarkable success in natural language processing (NLP) and have recently become competitive alternatives to Convolution Neural Networks (CNN) and Recurrent Neural Networks (RNN) in vision and speech tasks, respectively. Due to excellent computational efficiency and scalability, transformer models can be trained on exceedingly large amounts of data; however, model sizes can grow tremendously. As high performance, large-scale, and pre-trained transformer models become available for users to download and fine-tune for customized downstream tasks, the deployment of these models becomes challenging due to the vast amount of operations and large memory footprint. To address this challenge, we introduce methods to deeply compress pre-trained transformer models across three major application domains: NLP, speech, and vision. Specifically, we quantize transformer backbones down to 4-bit and further achieve 50% fine-grained structural sparsity on pre-trained BERT, Wav2vec2.0 and Vision Transformer (ViT) models to achieve 16x compression while maintaining model accuracy. This is achieved by identifying the critical initialization for quantization/sparsity aware fine-tuning, as well as novel techniques including quantizers with zero-preserving format and scheduled dropout. These hardware-friendly techniques need only to be applied in the fine-tuning phase for downstream tasks; hence, they are especially suitable for acceleration and deployment of pre-trained transformer models. | ["Natural Language Processing", "Model Compression", "Deep Learning", "Computer Vision", "Speech Processing", "Transformer Models"] | |
| NeurIPS | 2023 | 76215 | RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality | This paper introduces RACER, the Rational Artificial Intelligence Car-following model Enhanced by Reality, a cutting-edge deep learning car-following model designed to predict Adaptive Cruise Control (ACC) driving behavior while satisfying the partial derivative constraints necessary to maintain physical feasibility. Unlike conventional car-following models, RACER effectively integrates Rational Driving Constraints (RDC), crucial tenets of actual driving, resulting in strikingly accurate and realistic predictions. Notably, it adheres to the RDC, registering zero violations, in stark contrast to other models. This study incorporates physical constraints within AI models, especially for enforcing rational driving behaviors in transportation. The versatility of the proposed model, including its potential to incorporate additional derivative constraints and broader architectural applications, enhances its appeal and broadens its impact within the scientific community. | ["Transportation Engineering", "Autonomous Vehicles", "Intelligent Transportation Systems"] | |
| NeurIPS | 2023 | 73489 | A Toolkit for Reliable Benchmarking and Research in Multi-Objective Reinforcement Learning | Multi-objective reinforcement learning algorithms (MORL) extend standard reinforcement learning (RL) to scenarios where agents must optimize multiple---potentially conflicting---objectives, each represented by a distinct reward function. To facilitate and accelerate research and benchmarking in multi-objective RL problems, we introduce a comprehensive collection of software libraries that includes: (i) MO-Gymnasium, an easy-to-use and flexible API enabling the rapid construction of novel MORL environments. It also includes more than 20 environments under this API. This allows researchers to effortlessly evaluate any algorithm on any existing domain; (ii) MORL-Baselines, a collection of reliable and efficient implementations of state-of-the-art MORL algorithms, designed to provide a solid foundation for advancing research. Notably, all algorithms are inherently compatible with MO-Gymnasium; and (iii) a thorough and robust set of benchmark results and comparisons of MORL-Baselines algorithms, tested across various challenging MO-Gymnasium environments. These benchmarks were constructed to serve as guidelines for the research community, underscoring the properties, advantages, and limitations of each particular state-of-the-art method. | ["Multi-Objective Reinforcement Learning", "Reinforcement Learning", "Benchmarking Tools", "Software Libraries"] | |
| ICML | 2024 | 32677 | Domain Generalisation via Imprecise Learning | Out-of-distribution (OOD) generalisation is challenging because it involves not only learning from empirical data, but also deciding among various notions of generalisation, e.g. optimise based on the average-case risk, worst-case risk, or interpolations thereof. While this decision should in principle be made by the model operator, such as a medical doctor in practice, this information might not always be available at training time. This situation leads to arbitrary commitments to specific generalisation strategies by machine learners due to these deployment uncertainties. We introduce the Imprecise Domain Generalisation framework to mitigate this, featuring an imprecise risk optimisation that allows learners to stay imprecise by optimising against a continuous spectrum of generalisation strategies during training, and a model framework that allows operators to specify their generalisation preference at deployment. Our work, supported by theoretical and empirical evidence, showcases the benefits of integrating imprecision into domain generalisation. | ["Domain Generalisation", "Out-of-Distribution Generalisation", "Risk Optimization", "Theoretical Computer Science"] | |
| NeurIPS | 2022 | 54604 | Museformer: Transformer with Fine- and Coarse-Grained Attention for Music Generation | Symbolic music generation aims to generate music scores automatically. A recent trend is to use Transformer or its variants in music generation, which is, however, suboptimal, because the full attention cannot efficiently model the typically long music sequences (e.g., over 10,000 tokens), and the existing models have shortcomings in generating musical repetition structures. In this paper, we propose Museformer, a Transformer with a novel fine- and coarse-grained attention for music generation. Specifically, with the fine-grained attention, a token of a specific bar directly attends to all the tokens of the bars that are most relevant to music structures (e.g., the previous 1st, 2nd, 4th and 8th bars, selected via similarity statistics); with the coarse-grained attention, a token only attends to the summarization of the other bars rather than each token of them so as to reduce the computational cost. The advantages are two-fold. First, it can capture both music structure-related correlations via the fine-grained attention, and other contextual information via the coarse-grained attention. Second, it is efficient and can model over 3X longer music sequences compared to its full-attention counterpart. Both objective and subjective experimental results demonstrate its ability to generate long music sequences with high quality and better structures. | ["Music Generation", "Symbolic Music Processing", "Machine Learning for Music", "Transformer Models", "Attention Mechanisms in Neural Networks"] | |
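The fine-grained attention pattern in the Museformer row above is mechanical enough to sketch: given each token's bar index, a token may attend to tokens in its own bar and in the structure-related bars 1, 2, 4, and 8 bars back (the paper selects these offsets via similarity statistics). The coarse-grained summary tokens are omitted here, so this is a partial illustration only.

```python
import torch

def fine_grained_mask(bar_ids: torch.Tensor, offsets=(0, 1, 2, 4, 8)) -> torch.Tensor:
    """Boolean attention mask for bar-structured music tokens (sketch).

    bar_ids: (T,) bar index of each token. Token i may attend to token j
    iff bar(i) - bar(j) is one of `offsets` (0 = same bar) and j <= i
    (causal). Coarse-grained attention to per-bar summaries is omitted.
    """
    diff = bar_ids[:, None] - bar_ids[None, :]       # (T, T) bar distance
    allowed = torch.zeros_like(diff, dtype=torch.bool)
    for d in offsets:
        allowed |= diff == d
    idx = torch.arange(len(bar_ids))
    causal = idx[:, None] >= idx[None, :]
    return allowed & causal

bar_ids = torch.tensor([0, 0, 1, 1, 2, 3, 4, 8])     # toy 8-token sequence
print(fine_grained_mask(bar_ids).int())
```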
| NeurIPS | 2022 | 54606 | Trading Off Resource Budgets For Improved Regret Bounds | In this work we consider a variant of adversarial online learning where in each round one picks $B$ out of $N$ arms and incurs cost equal to the $\textit{minimum}$ of the costs of each arm chosen. We propose an algorithm called Follow the Perturbed Multiple Leaders (FPML) for this problem, which we show (by adapting the techniques of Kalai and Vempala [2005]) achieves expected regret $\mathcal{O}(T^{\frac{1}{B+1}}\ln(N)^{\frac{B}{B+1}})$ over time horizon $T$ relative to the $\textit{single}$ best arm in hindsight. This introduces a trade-off between the budget $B$ and the single-best-arm regret, and we proceed to investigate several applications of this trade-off. First, we observe that algorithms which use standard regret minimizers as subroutines can sometimes be adapted by replacing these subroutines with FPML, and we use this to generalize existing algorithms for Online Submodular Function Maximization [Streeter and Golovin, 2008] in both the full feedback and semi-bandit feedback settings. Next, we empirically evaluate our new algorithms on an online black-box hyperparameter optimization problem. Finally, we show how FPML can lead to new algorithms for Linear Programming which require stronger oracles at the benefit of fewer oracle calls. | ["Adversarial Online Learning", "Regret Minimization", "Algorithm Design", "Online Submodular Function Maximization", "Hyperparameter Optimization", "Linear Programming"] | |
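FPML, as described in the row above, is a small algorithm: perturb each arm's cumulative cost and play the $B$ arms whose perturbed totals are lowest, incurring the minimum cost among them. A sketch with full feedback; exponential perturbations follow the Kalai-Vempala style of analysis, but the scale `eta` here is an untuned, illustrative choice.

```python
import numpy as np

def fpml(costs: np.ndarray, B: int, eta: float = 1.0, seed: int = 0) -> float:
    """Follow the Perturbed Multiple Leaders (sketch).

    costs: (T, N) adversarial cost of each of N arms at each of T rounds.
    Each round, pick the B arms with the smallest perturbed cumulative
    cost and pay the *minimum* cost among those picked.
    """
    rng = np.random.default_rng(seed)
    T, N = costs.shape
    cum = np.zeros(N)
    total = 0.0
    for t in range(T):
        perturbed = cum - rng.exponential(scale=eta, size=N)
        chosen = np.argsort(perturbed)[:B]   # the B "perturbed leaders"
        total += costs[t, chosen].min()      # incur min cost among chosen arms
        cum += costs[t]                      # full-feedback update
    return total

costs = np.random.default_rng(1).uniform(size=(1000, 10))
print(fpml(costs, B=3))
```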
| ICML | 2022 | 16473 | Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization | Neural Tangent Kernel (NTK) is widely used to analyze overparametrized neural networks due to the famous result by Jacot et al. (2018): in the infinite-width limit, the NTK is deterministic and constant during training. However, this result cannot explain the behavior of deep networks, since it generally does not hold if depth and width tend to infinity simultaneously. In this paper, we study the NTK of fully-connected ReLU networks with depth comparable to width. We prove that the NTK properties depend significantly on the depth-to-width ratio and the distribution of parameters at initialization. In fact, our results indicate the importance of the three phases in the hyperparameter space identified in Poole et al. (2016): ordered, chaotic and the edge of chaos (EOC). We derive exact expressions for the NTK dispersion in the infinite-depth-and-width limit in all three phases and conclude that the NTK variability grows exponentially with depth at the EOC and in the chaotic phase but not in the ordered phase. We also show that the NTK of deep networks may stay constant during training only in the ordered phase and discuss how the structure of the NTK matrix changes during training. | ["Machine Learning Theory", "Neural Networks", "Deep Learning", "Theoretical Computer Science"] | |
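The object studied in the row above, the empirical NTK, is easy to compute for small networks: $\Theta(x, x') = \langle \nabla_\theta f(x), \nabla_\theta f(x') \rangle$. A sketch for a scalar-output fully-connected ReLU net; the sizes are toy values chosen to mimic the depth-comparable-to-width regime.

```python
import torch
import torch.nn as nn

def empirical_ntk(f: nn.Module, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
    """Empirical NTK entry: inner product of parameter gradients (sketch)."""
    def flat_grad(x):
        out = f(x.unsqueeze(0)).squeeze()                  # scalar output
        grads = torch.autograd.grad(out, list(f.parameters()))
        return torch.cat([g.reshape(-1) for g in grads])
    return flat_grad(x1) @ flat_grad(x2)

# Depth comparable to width, as in the regime the paper studies (toy sizes).
width, depth = 32, 32
layers = [nn.Linear(8, width), nn.ReLU()]
for _ in range(depth - 2):
    layers += [nn.Linear(width, width), nn.ReLU()]
layers += [nn.Linear(width, 1)]
net = nn.Sequential(*layers)

x1, x2 = torch.randn(8), torch.randn(8)
print(float(empirical_ntk(net, x1, x2)))
```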
| NeurIPS | 2022 | 63505 | COVIDx CXR-3: A Large-Scale, Open-Source Benchmark Dataset of Chest X-ray Images for Computer-Aided COVID-19 Diagnostics | After more than two years since the beginning of the COVID-19 pandemic, the pressure of this crisis continues to devastate globally. The use of chest X-ray (CXR) imaging as a complementary screening strategy to RT-PCR testing is not only prevailing but has greatly increased due to its routine clinical use for respiratory complaints. Thus far, many visual perception models have been proposed for COVID-19 screening based on CXR imaging. Nevertheless, the accuracy and the generalization capacity of these models are very much dependent on the diversity and the size of the dataset they were trained on. Motivated by this, we introduce COVIDx CXR-3, a large-scale benchmark dataset of CXR images for supporting COVID-19 computer vision research. COVIDx CXR-3 is composed of 30,386 CXR images from a multinational cohort of 17,026 patients from at least 51 countries, making it, to the best of our knowledge, the most extensive, most diverse COVID-19 CXR dataset in open access form. Here, we provide comprehensive details on the various aspects of the proposed dataset including patient demographics, imaging views, and infection types. The hope is that COVIDx CXR-3 can assist scientists in advancing machine learning research against both the COVID-19 pandemic and related diseases. | ["Medical Imaging", "Computer Vision", "COVID-19 Research", "Radiology", "Data Science"] | |
| NeurIPS | 2023 | 76188 | Equivariant Networks for Robust Galaxy Morphology Classification | We propose the use of group convolutional neural network architectures (GCNNs) equivariant to the 2D Euclidean group, $E(2)$, for the task of galaxy morphology classification by utilizing symmetries of the data present in galaxy images as an inductive bias in the architecture. We conduct robustness studies by introducing artificial perturbations via Poisson noise insertion and one-pixel adversarial attacks to simulate the effects of limited observational capabilities. We train, validate, and test GCNNs on the Galaxy10 DECals dataset and find that GCNNs achieve higher classification accuracy and are consistently more robust than their non-equivariant counterparts, with an architecture equivariant to the group $D_{16}$ achieving a $95.52 \pm 0.18\%$ test-set accuracy and losing $<6\%$ accuracy on a 50%-noise dataset. | ["Astrophysics", "Computer Vision", "Image Processing", "Astronomy Data Analysis"] | |
| ICML | 2023 | 24306 | Controlling Type Confounding in Ad Hoc Teamwork with Instance-wise Teammate Feedback Rectification | Ad hoc teamwork requires an agent to cooperate with unknown teammates without prior coordination. Many works propose to abstract teammate instances into high-level representation of types and then pre-train the best response for each type. However, most of them do not consider the distribution of teammate instances within a type. This could expose the agent to the hidden risk of type confounding. In the worst case, the best response for an abstract teammate type could be the worst response for all specific instances of that type. This work addresses the issue from the lens of causal inference. We first theoretically demonstrate that this phenomenon is due to the spurious correlation brought by uncontrolled teammate distribution. Then, we propose our solution, CTCAT, which disentangles such correlation through an instance-wise teammate feedback rectification. This operation reweights the interaction of teammate instances within a shared type to reduce the influence of type confounding. The effect of CTCAT is evaluated in multiple domains, including classic ad hoc teamwork tasks and real-world scenarios. Results show that CTCAT is robust to the influence of type confounding, a practical issue that directly harms the robustness of trained agents but went unnoticed in previous works. | ["Multi-Agent Systems", "Causal Inference", "Robotics and Autonomous Systems"] | |
| ICML | 2024 | 34289 | Predictive Dynamic Fusion | Multimodal fusion is crucial in joint decision-making systems for rendering holistic judgments. Since multimodal data changes in open environments, dynamic fusion has emerged and achieved remarkable progress in numerous applications. However, most existing dynamic multimodal fusion methods lack theoretical guarantees and easily fall into suboptimal solutions, yielding unreliability and instability. To address this issue, we propose a Predictive Dynamic Fusion (PDF) framework for multimodal learning. We proceed to reveal the multimodal fusion from a generalization perspective and theoretically derive the predictable Collaborative Belief (Co-Belief) with Mono- and Holo-Confidence, which provably reduces the upper bound of generalization error. Accordingly, we further propose a relative calibration strategy to calibrate the predicted Co-Belief for potential uncertainty. Extensive experiments on multiple benchmarks confirm the superiority of our method. Our code is available at https://github.com/Yinan-Xia/PDF. | ["Multimodal Learning", "Data Fusion", "Theoretical Computer Science"] | |
| NeurIPS | 2022 | 53649 | M2N: Mesh Movement Networks for PDE Solvers | Numerical Partial Differential Equation (PDE) solvers often require discretizing the physical domain by using a mesh. Mesh movement methods provide the capability to improve the accuracy of the numerical solution without introducing extra computational burden to the PDE solver, by increasing mesh resolution where the solution is not well-resolved, whilst reducing unnecessary resolution elsewhere. However, sophisticated mesh movement methods, such as the Monge-Ampère method, generally require the solution of auxiliary equations. These solutions can be extremely expensive to compute when the mesh needs to be adapted frequently. In this paper, we propose, to the best of our knowledge, the first learning-based end-to-end mesh movement framework for PDE solvers. Key requirements of learning-based mesh movement methods are: alleviating mesh tangling, boundary consistency, and generalization to meshes with different resolutions. To achieve these goals, we introduce the neural spline model and the graph attention network (GAT) into our models respectively. While the Neural-Spline based model provides more flexibility for large mesh deformation, the GAT based model can handle domains with more complicated shapes and is better at performing delicate local deformation. We validate our methods on stationary and time-dependent, linear and non-linear equations, as well as regularly and irregularly shaped domains. Compared to the traditional Monge-Ampère method, our approach can greatly accelerate the mesh adaptation process by three to four orders of magnitude, whilst achieving comparable numerical error reduction. | ["Computational Mathematics", "Numerical Analysis", "Machine Learning for Scientific Computing", "Partial Differential Equations", "Computational Fluid Dynamics", "Scientific Computing"] | |
| NeurIPS | 2023 | 72421 | Spuriosity Didn’t Kill the Classifier: Using Invariant Predictions to Harness Spurious Features | To avoid failures on out-of-distribution data, recent works have sought to extract features that have an invariant or stable relationship with the label across domains, discarding "spurious" or unstable features whose relationship with the label changes across domains. However, unstable features often carry complementary information that could boost performance if used correctly in the test domain. In this work, we show how this can be done without test-domain labels. In particular, we prove that pseudo-labels based on stable features provide sufficient guidance for doing so, provided that stable and unstable features are conditionally independent given the label. Based on this theoretical insight, we propose Stable Feature Boosting (SFB), an algorithm for: (i) learning a predictor that separates stable and conditionally-independent unstable features; and (ii) using the stable-feature predictions to adapt the unstable-feature predictions in the test domain. Theoretically, we prove that SFB can learn an asymptotically-optimal predictor without test-domain labels. Empirically, we demonstrate the effectiveness of SFB on real and synthetic data. | ["Domain Adaptation", "Invariant Learning", "Out-of-Distribution Generalization", "Feature Engineering"] | |
| ICLR | 2024 | 18343 | Supervised Knowledge Makes Large Language Models Better In-context Learners | Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering. The recent progress in large-scale generative models has further expanded their use in real-world language applications. However, the critical challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored. While previous in-context learning research has focused on enhancing models to adhere to users' specific instructions and quality expectations, and to avoid undesired outputs, little to no work has explored the use of task-specific fine-tuned Language Models (SLMs) to improve LLMs' in-context learning during the inference stage. Our primary contribution is the establishment of a simple yet effective framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks. Using our proposed plug-in method, enhanced versions of Llama 2 and ChatGPT surpass their original versions regarding generalizability and factuality. We offer a comprehensive suite of resources, including 16 curated datasets, prompts, model checkpoints, and LLM outputs across 9 distinct tasks. Our empirical analysis sheds light on the advantages of incorporating discriminative models into LLMs and highlights the potential of our methodology in fostering more reliable LLMs. | ["Natural Language Processing", "Language Models", "In-context Learning", "Generative Models", "Fine-tuning", "Model Generalization", "Question Answering"] | |
| ICLR | 2022 | 7173 | High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad Stepsize | In this paper, we propose a new, simplified high probability analysis of AdaGrad for smooth, non-convex problems. More specifically, we focus on a particular accelerated gradient (AGD) template (Lan, 2020), through which we recover the original AdaGrad and its variant with averaging, and prove a convergence rate of $\mathcal O (1/ \sqrt{T})$ with high probability without the knowledge of smoothness and variance. We use a particular version of Freedman's concentration bound for martingale difference sequences (Kakade & Tewari, 2008) which enables us to achieve the best-known dependence of $\log (1 / \delta )$ on the probability margin $\delta$. We present our analysis in a modular way and obtain a complementary $\mathcal O (1 / T)$ convergence rate in the deterministic setting. To the best of our knowledge, this is the first high probability result for AdaGrad with a truly adaptive scheme, i.e., completely oblivious to the knowledge of smoothness and uniform variance bound, which simultaneously has best-known dependence of $\log( 1/ \delta)$. We further prove noise adaptation property of AdaGrad under additional noise assumptions. | ["Optimization", "Algorithms", "Nonconvex Optimization", "Stochastic Optimization"] | |
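For reference, the scalar ("norm") version of the AdaGrad stepsize that such analyses cover: the stepsize adapts to the accumulated squared gradient norms, with no knowledge of smoothness or variance. A minimal sketch; the initialization `b0` and the toy objective are illustrative assumptions.

```python
import numpy as np

def adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-8, T=1000):
    """AdaGrad with a single adaptive scalar stepsize (sketch).

    x_{t+1} = x_t - eta / sqrt(b0 + sum_{s<=t} ||g_s||^2) * g_t.
    Requires no knowledge of smoothness or noise variance, which is what
    makes the high-probability analysis above "truly adaptive".
    """
    x = np.asarray(x0, dtype=float)
    acc = b0
    for _ in range(T):
        g = grad_fn(x)
        acc += float(g @ g)
        x = x - eta / np.sqrt(acc) * g
    return x

# Smooth non-convex toy objective f(x) = sin(||x||^2), with noisy gradients.
rng = np.random.default_rng(0)
grad = lambda x: 2 * x * np.cos(x @ x) + 0.1 * rng.normal(size=x.shape)
print(adagrad_norm(grad, np.ones(5)))
```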
| NeurIPS | 2022 | 53565 | Improving Diffusion Models for Inverse Problems using Manifold Constraints | Recently, diffusion models have been used to solve various inverse problems in an unsupervised manner with appropriate modifications to the sampling process. However, the current solvers, which recursively apply a reverse diffusion step followed by a projection-based measurement consistency step, often produce sub-optimal results. By studying the generative sampling path, here we show that current solvers throw the sample path off the data manifold, and hence the error accumulates. To address this, we propose an additional correction term inspired by the manifold constraint, which can be used synergistically with the previous solvers to make the iterations close to the manifold. The proposed manifold constraint is straightforward to implement within a few lines of code, yet boosts the performance by a surprisingly large margin. With extensive experiments, we show that our method is superior to the previous methods both theoretically and empirically, producing promising results in many applications such as image inpainting, colorization, and sparse-view computed tomography. Code is available at https://github.com/HJ-harry/MCG_diffusion. | ["Computer Vision", "Image Processing", "Inverse Problems", "Generative Models", "Computational Imaging"] | |
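The abstract above notes the correction is "a few lines of code": after each reverse step, take a gradient step on the measurement residual computed through the denoised estimate. Below is a toy linear-inverse-problem sketch of that pattern; the stand-in denoiser, reverse step, and step size are illustrative assumptions, not the released implementation.

```python
import torch

def corrected_step(x_t, y, A, denoise, reverse_step, alpha=0.01):
    """One reverse-diffusion step with a manifold-constraint-style correction (sketch).

    x_t:          current sample
    y:            measurements, y = A x + noise
    denoise:      maps x_t to a clean estimate x0_hat (stands in for the score model)
    reverse_step: one ordinary reverse-diffusion update
    The correction descends ||y - A x0_hat(x_t)||^2 *through the denoiser*,
    which is what keeps iterates near the data manifold.
    """
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoise(x_t)
    residual = (y - x0_hat @ A.T).pow(2).sum()
    grad = torch.autograd.grad(residual, x_t)[0]
    x_prev = reverse_step(x_t.detach())       # ordinary reverse diffusion
    return x_prev - alpha * grad              # manifold-constraint correction

# Toy stand-ins: a linear "denoiser" and a damping reverse step.
d, m = 16, 4
A = torch.randn(m, d)
y = torch.randn(m)
denoise = lambda x: 0.9 * x
reverse_step = lambda x: x - 0.01 * x
x = torch.randn(d)
for _ in range(100):
    x = corrected_step(x, y, A, denoise, reverse_step)
print(float((y - x @ A.T).norm()))            # residual shrinks over iterations
```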
| ICLR | 2024 | 18352 | Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts | By providing external information to large language models (LLMs), tool augmentation (including retrieval augmentation) has emerged as a promising solution for addressing the limitations of LLMs' static parametric memory. However, how receptive are LLMs to such external evidence, especially when the evidence conflicts with their parametric memory? We present the first comprehensive and controlled investigation into the behavior of LLMs when encountering knowledge conflicts. We propose a systematic framework to elicit high-quality parametric memory from LLMs and construct the corresponding counter-memory, which enables us to conduct a series of controlled experiments. Our investigation reveals seemingly contradicting behaviors of LLMs. On the one hand, different from prior wisdom, we find that LLMs can be highly receptive to external evidence even when that conflicts with their parametric memory, given that the external evidence is coherent and convincing. On the other hand, LLMs also demonstrate a strong confirmation bias when the external evidence contains some information that is consistent with their parametric memory, despite being presented with conflicting evidence at the same time. These results pose important implications that are worth careful consideration for the further development and deployment of tool- and retrieval-augmented LLMs. Resources are available at https://github.com/OSU-NLP-Group/LLM-Knowledge-Conflict. | ["Natural Language Processing", "Knowledge Representation", "Cognitive Computing"] | |
| NeurIPS | 2022 | 63631 | Imitation from Observation With Bootstrapped Contrastive Learning | Imitation from observation is a paradigm that consists of training agents using visual observations of expert demonstrations without direct access to the actions. One of the most common procedures adopted to solve this problem is to train a reward function from the demonstrations, but this task still remains a significant challenge. We approach this problem with a method of agent behavior representation in a latent space using demonstration videos. Our approach exploits recent contrastive learning algorithms for images and videos and uses a bootstrapping method to progressively train a trajectory encoding function with respect to the variation of the agent policy. This function is then used to compute the rewards provided to a standard Reinforcement Learning (RL) algorithm. Our method uses only a limited number of videos produced by an expert and we do not have access to the expert policy function. Our experiments show promising results on a set of continuous control tasks and demonstrate that learning a behavior encoder from videos allows for building an efficient reward function for the agent. | ["Reinforcement Learning", "Imitation Learning", "Computer Vision", "Contrastive Learning"] | |
| ICML | 2023 | 25106 | From Hypergraph Energy Functions to Hypergraph Neural Networks | Hypergraphs are a powerful abstraction for representing higher-order interactions between entities of interest. To exploit these relationships in making downstream predictions, a variety of hypergraph neural network architectures have recently been proposed, in large part building upon precursors from the more traditional graph neural network (GNN) literature. Somewhat differently, in this paper we begin by presenting an expressive family of parameterized, hypergraph-regularized energy functions. We then demonstrate how minimizers of these energies effectively serve as node embeddings that, when paired with a parameterized classifier, can be trained end-to-end via a supervised bilevel optimization process. Later, we draw parallels between the implicit architecture of the predictive models emerging from the proposed bilevel hypergraph optimization, and existing GNN architectures in common use. Empirically, we demonstrate state-of-the-art results on various hypergraph node classification benchmarks. Code is available at https://github.com/yxzwang/PhenomNN. | ["Neural Networks", "Graph Neural Networks", "Hypergraph Theory", "Computational Graph Theory", "Optimization Methods"] | |
| ICML | 2023 | 23816 | Generalized-Smooth Nonconvex Optimization is As Efficient As Smooth Nonconvex Optimization | Various optimal gradient-based algorithms have been developed for smooth nonconvex optimization. However, many nonconvex machine learning problems do not belong to the class of smooth functions and therefore the existing algorithms are sub-optimal. Instead, these problems have been shown to satisfy certain generalized-smooth conditions, which have not been well understood in the existing literature. In this paper, we propose a notion of $\alpha$-symmetric generalized-smoothness that substantially extends the existing notions and covers many important functions such as high-order polynomials and exponential functions. We study the fundamental properties and establish descent lemmas for the functions in this class. Then, to solve such a large class of nonconvex problems, we design a special deterministic normalized gradient descent algorithm that achieves the optimal iteration complexity $\mathcal{O}(\epsilon^{-2})$, and also prove that the popular SPIDER variance reduction algorithm achieves the optimal sample complexity $\mathcal{O}(\epsilon^{-3})$. Our results show that solving generalized-smooth nonconvex problems is as efficient as solving smooth nonconvex problems. | ["Nonconvex Optimization", "Gradient-Based Algorithms", "Mathematical Optimization", "Algorithmic Complexity"] | |
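The "special deterministic normalized gradient descent" named above is a one-line update. A sketch on the kind of fast-growing objective the generalized-smooth class is meant to cover; the step size is an illustrative choice, not the paper's schedule.

```python
import numpy as np

def normalized_gd(grad_fn, x0, gamma=0.01, T=5000, eps=1e-12):
    """Deterministic normalized gradient descent: x <- x - gamma * g / ||g||.

    Normalization caps the per-step movement, which is what lets the
    method handle generalized-smooth objectives whose curvature grows
    with the gradient (e.g., high-order polynomials, exponentials).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(T):
        g = grad_fn(x)
        x = x - gamma * g / (np.linalg.norm(g) + eps)
    return x

# Quartic objective f(x) = sum(x^4): steep far from 0, not globally smooth.
grad = lambda x: 4 * x ** 3
print(normalized_gd(grad, np.full(3, 5.0)))   # iterates move toward 0
```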
| NeurIPS | 2023 | 71032 | On the Convergence of Black-Box Variational Inference | We provide the first convergence guarantee for black-box variational inference (BBVI) with the reparameterization gradient. While preliminary investigations worked on simplified versions of BBVI (e.g., bounded domain, bounded support, only optimizing for the scale, and such), our setup does not need any such algorithmic modifications. Our results hold for log-smooth posterior densities with and without strong log-concavity and the location-scale variational family. Notably, our analysis reveals that certain algorithm design choices commonly employed in practice, such as nonlinear parameterizations of the scale matrix, can result in suboptimal convergence rates. Fortunately, running BBVI with proximal stochastic gradient descent fixes these limitations and thus achieves the strongest known convergence guarantees. We evaluate this theoretical insight by comparing proximal SGD against other standard implementations of BBVI on large-scale Bayesian inference problems. | ["Bayesian Inference", "Variational Inference", "Optimization", "Stochastic Gradient Descent"] | |
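The ingredients named in the row above — a location-scale family plus the reparameterization gradient — fit in a few lines. The sketch below uses a diagonal scale and plain SGD; the proximal variant the paper advocates would additionally apply a proximal map to the scale parameters, which is omitted here.

```python
import torch

def bbvi_step(log_post, mu, log_s, lr=5e-3, n_mc=8):
    """One reparameterized BBVI step for a diagonal location-scale family (sketch).

    z = mu + exp(log_s) * eps with eps ~ N(0, I). We ascend the ELBO
    E_q[log_post(z)] + entropy(q); for a diagonal Gaussian the entropy is
    sum(log_s) + const. Plain SGD here; the paper's proximal SGD differs.
    """
    mu = mu.detach().requires_grad_(True)
    log_s = log_s.detach().requires_grad_(True)
    eps = torch.randn(n_mc, mu.numel())
    z = mu + log_s.exp() * eps                 # reparameterization trick
    elbo = log_post(z).mean() + log_s.sum()
    (-elbo).backward()
    with torch.no_grad():
        mu -= lr * mu.grad
        log_s -= lr * log_s.grad
    return mu.detach(), log_s.detach()

# Toy log-smooth posterior: a standard Gaussian in 5 dimensions.
log_post = lambda z: -0.5 * z.pow(2).sum(dim=-1)
mu, log_s = torch.ones(5), torch.zeros(5)
for _ in range(2000):
    mu, log_s = bbvi_step(log_post, mu, log_s)
print(mu.norm().item(), log_s.exp().mean().item())  # -> near 0 and near 1
```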
| ICLR | 2023 | 13617 | Instruction-Finetuned Foundation Models for Multimodal Web Navigation | We propose an instruction-aligned multimodal agent for autonomous web navigation -- i.e., sequential decision making tasks employing a computer interface. Our approach is based on supervised finetuning of vision and language foundation models on a large corpus of web data consisting of webpage screenshots and HTML. Specifically, we use vision transformers on sequences of web page screenshots to extract patch-level image features. These features are concatenated with embeddings of tokens in HTML documents. Using an instruction-finetuned large language model, we jointly encode both vision and HTML modalities and decode web actions such as click and type. We show that our method outperforms previous approaches by a significant margin, even in handling out-of-distribution HTML and compositional tasks. On the MiniWoB benchmark, we improve over previous approaches that use only HTML input by more than 17.7%, even surpassing the performance of RL-finetuned models. On the recent WebShop benchmark, our 3-billion-parameter model achieves superior performance to the existing state-of-the-art PaLM-540B. We also collect 347K gold demonstrations using our trained models, 29 times larger than prior work, and make them available to promote future research in this area. We believe that our work is a step towards building capable and generalist decision making agents for computer interface. | ["Multimodal Learning", "Natural Language Processing", "Computer Vision", "Human-Computer Interaction", "Web Navigation", "Autonomous Agents"] | |
| NeurIPS | 2023 | 72461 | Augmenting Language Models with Long-Term Memory | Existing large language models (LLMs) can only afford fixed-size inputs due to the input length limit, preventing them from utilizing rich long-context information from past inputs. To address this, we propose a framework, Language Models Augmented with Long-Term Memory (LongMem), which enables LLMs to memorize long history. We design a novel decoupled network architecture with the original backbone LLM frozen as a memory encoder and an adaptive residual side-network as a memory retriever and reader. Such a decoupled memory design can easily cache and update long-term past contexts for memory retrieval without suffering from memory staleness. Enhanced with memory-augmented adaptation training, LongMem can thus memorize long past context and use long-term memory for language modeling. The proposed memory retrieval module can handle unlimited-length context in its memory bank to benefit various downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k tokens and thus cache many-shot extra demonstration examples as long-form memory for in-context learning. Experiments show that our method outperforms strong long-context models on ChapterBreak, a challenging long-context modeling benchmark, and achieves remarkable improvements on memory-augmented in-context learning over LLMs. The results demonstrate that the proposed method is effective in helping language models to memorize and utilize long-form contents. | ["Natural Language Processing", "Language Models", "Memory-Augmented Neural Networks", "Long-Context Processing", "In-Context Learning"] | |
| NeurIPS | 2022 | 56448 | Provably Reliable Large-Scale Sampling from Gaussian Processes | When comparing approximate Gaussian process (GP) models, it can be helpful to be able to generate data from any GP. If we are interested in how approximate methods perform at scale, we may wish to generate very large synthetic datasets to evaluate them. Naïvely doing so would cost $\mathcal{O}(n^3)$ flops and $\mathcal{O}(n^2)$ memory to generate a size-$n$ sample. We demonstrate how to scale such data generation to large $n$ whilst still providing guarantees that, with high probability, the sample is indistinguishable from a sample from the desired GP. | ["Gaussian Processes", "Probabilistic Modeling", "Computational Statistics", "Data Science"] | |
| ICLR | 2024 | 22197 | Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization | Large Language Models (LLMs) exhibit robust problem-solving capabilities for diverse tasks. However, most LLM-based agents are designed as specific task solvers with sophisticated prompt engineering, rather than agents capable of learning and evolving through interactions. These task solvers necessitate manually crafted prompts to inform task rules and regulate LLM behaviors, rendering them inherently incapable of addressing complex dynamic scenarios, e.g., large interactive games. In light of this, we propose Agent-Pro: an LLM-based Agent with Policy-level Reflection and Optimization that can learn a wealth of expertise from interactive experiences and progressively elevate its behavioral policy. Specifically, it involves a dynamic belief generation and reflection process for policy evolution. Rather than action-level reflection, Agent-Pro iteratively reflects on past trajectories and beliefs, "fine-tuning" its irrational beliefs for a better policy. Moreover, a depth-first search is employed for policy optimization, ensuring continual enhancement in policy payoffs. Agent-Pro is evaluated across two games: Blackjack and Texas Hold’em, outperforming vanilla LLMs and specialized models. Our results show Agent-Pro can learn and evolve in complex and dynamic scenes, which also benefits numerous LLM-based applications. | ["Reinforcement Learning", "Natural Language Processing", "Game Theory"] | |
| ICLR | 2022 | 5999 | Sparse Attention with Learning to Hash | Transformer has become ubiquitous in sequence modeling tasks. As a key component of Transformer, self-attention does not scale to long sequences due to its quadratic time and space complexity with respect to the sequence length. To tackle this problem, recent work developed dynamic attention sparsification techniques based on Approximate Nearest Neighbor (ANN) methods, where similar queries and keys are allocated to the same hash bucket with high probability. However, the effectiveness of those ANN methods relies on the assumption that queries and keys should lie in the same space, which is not well justified. Besides, some of the ANN methods such as Locality-Sensitive Hashing (LSH) are randomized and cannot fully utilize the available real data distributions. To overcome these issues, this paper proposes a new strategy for sparse attention, namely LHA (Learning-to-Hash Attention), which directly learns separate parameterized hash functions for queries and keys, respectively. Another advantage of LHA is that it does not impose extra constraints for queries and keys, which makes it applicable to the wide range of pre-trained Transformer models. Our experiments on the WikiText-103 dataset for language modeling, the GLUE benchmark for natural language understanding, and the Long-Range Arena benchmark for multiple tasks (text/image classification, retrieval, etc.) show the superior performance of LHA over other strong Transformer variants. | ["Natural Language Processing", "Deep Learning", "Sequence Modeling", "Attention Mechanisms", "Transformer Models", "Sparse Representations", "Hashing Techniques"] | |
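The core of the learning-to-hash idea above is that bucket assignment is a learned classification of queries and keys by *separate* hash functions, and attention is then restricted to matching buckets. A minimal mask-construction sketch with a single head and hard argmax buckets; LHA's trainable relaxation and multi-round hashing are omitted, so treat this as illustrative only.

```python
import torch
import torch.nn as nn

class LearnedHashMask(nn.Module):
    """Separate learned hash functions for queries and keys (sketch).

    Each of `n_buckets` rows of the two linear maps scores a bucket;
    tokens are assigned by argmax, and query i may attend to key j only
    if their buckets match. Training end-to-end requires a soft
    relaxation, which this hard-assignment sketch omits.
    """

    def __init__(self, dim: int, n_buckets: int):
        super().__init__()
        self.hash_q = nn.Linear(dim, n_buckets, bias=False)  # query hash
        self.hash_k = nn.Linear(dim, n_buckets, bias=False)  # separate key hash

    def forward(self, q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
        bq = self.hash_q(q).argmax(-1)        # (T,) query buckets
        bk = self.hash_k(k).argmax(-1)        # (T,) key buckets
        return bq[:, None] == bk[None, :]     # (T, T) sparse attention mask

T, dim = 128, 64
q, k = torch.randn(T, dim), torch.randn(T, dim)
mask = LearnedHashMask(dim, n_buckets=8)(q, k)
print(mask.float().mean())                    # fraction of pairs attended, ~1/8
```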
| NeurIPS | 2023 | 70406 | Enhancing Robot Program Synthesis Through Environmental Context | Program synthesis aims to automatically generate an executable program that conforms to the given specification. Recent advancements have demonstrated that deep neural methodologies and large-scale pretrained language models are highly proficient in capturing program semantics. For robot programming, prior works have facilitated program synthesis by incorporating global environments. However, the assumption of acquiring a comprehensive understanding of the entire environment is often excessively challenging to achieve. In this work, we present a framework that learns to synthesize a program by rectifying potentially erroneous code segments, with the aid of partially observed environments. To tackle the issue of inadequate attention to partial observations, we propose to first learn an environment embedding space that can implicitly evaluate the impacts of each program token based on the precondition. Furthermore, by employing a graph structure, the model can aggregate both environmental and syntactic information flow and furnish smooth program rectification guidance. Extensive experimental evaluations and ablation studies on the partially observed VizDoom domain confirm that our method offers superior generalization capability across various tasks and greater robustness when encountering noise. | ["Robotics", "Program Synthesis", "Deep Learning", "Natural Language Processing", "Computer Vision", "Graph Neural Networks"] | |
| NeurIPS | 2023 | 70725 | Semi-Supervised Contrastive Learning for Deep Regression with Ordinal Rankings from Spectral Seriation | Contrastive learning methods can be applied to deep regression by enforcing label distance relationships in feature space. However, these methods are limited to labeled data only unlike for classification, where unlabeled data can be used for contrastive pretraining. In this work, we extend contrastive regression methods to allow unlabeled data to be used in a semi-supervised setting, thereby reducing the reliance on manual annotations. We observe that the feature similarity matrix between unlabeled samples still reflects inter-sample relationships, and that an accurate ordinal relationship can be recovered through spectral seriation algorithms if the level of error is within certain bounds. By using the recovered ordinal relationship for contrastive learning on unlabeled samples, we can allow more data to be used for feature representation learning, thereby achieving more robust results. The ordinal rankings can also be used to supervise predictions on unlabeled samples, which can serve as an additional training signal. We provide theoretical guarantees and empirical support through experiments on different datasets, demonstrating that our method can surpass existing state-of-the-art semi-supervised deep regression methods. To the best of our knowledge, this work is the first to explore using unlabeled data to perform contrastive learning for regression. | ["Semi-Supervised Learning", "Contrastive Learning", "Deep Learning", "Regression Analysis", "Ordinal Data Analysis", "Spectral Methods"] | |
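Spectral seriation itself, the recovery step named in the row above, is a standard construction: order items by the Fiedler vector of the graph Laplacian built from the similarity matrix. A sketch of recovering an ordinal ranking from noisy feature similarities; the similarity model is a toy assumption.

```python
import numpy as np

def seriate(similarity: np.ndarray) -> np.ndarray:
    """Recover an ordering from a similarity matrix via spectral seriation.

    Sorts items by the Fiedler vector (eigenvector of the second-smallest
    Laplacian eigenvalue), which for band-structured similarities recovers
    the latent linear order up to reversal.
    """
    L = np.diag(similarity.sum(axis=1)) - similarity   # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)               # ascending eigenvalues
    fiedler = eigvecs[:, 1]
    return np.argsort(fiedler)

# Items with hidden ordinal labels; similarity decays with label distance.
rng = np.random.default_rng(0)
labels = rng.permutation(20)
sim = np.exp(-np.abs(labels[:, None] - labels[None, :]) / 5.0)
sim += 0.01 * rng.normal(size=sim.shape)               # bounded noise
sim = (sim + sim.T) / 2
order = seriate(sim)
print(labels[order])                                   # ~ sorted (or reverse-sorted)
```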
| NeurIPS | 2022 | 58443 | Minimax Optimal Fair Regression under Linear Model | We investigate the minimax optimal error of a fair regression problem under a linear model employing demographic parity as a fairness constraint. As a tractable demographic parity constraint, we introduce $(\alpha,\delta)$-fairness consistency, meaning that the quantified unfairness is decreased at most $n^{-\alpha}$ rate with at least probability $1-\delta$, where $n$ is the sample size. In other words, the consistently fair algorithm eventually outputs a regressor satisfying the demographic parity constraint with high probability as $n$ tends to infinity. As a result of our analyses, we found that the minimax optimal error under the $(\alpha,\delta)$-fairness consistency constraint is $\Theta(\frac{dM}{n})$ provided that $\alpha \le \frac{1}{2}$, where $d$ is the dimensionality, and $M$ is the number of groups induced from the sensitive attributes. | ["Fairness in Machine Learning", "Regression Analysis", "Statistical Learning Theory", "Algorithmic Fairness", "Linear Models"] | |
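The constraint analyzed above is directly measurable: demographic parity for regression asks predictions to be distributed similarly across the $M$ groups. Below is a sketch that estimates unfairness as the largest gap between group-wise mean predictions, one common quantification; the paper's exact unfairness measure may differ.

```python
import numpy as np

def demographic_parity_gap(preds: np.ndarray, groups: np.ndarray) -> float:
    """Max gap between group-wise mean predictions (sketch).

    A simple quantification of demographic-parity violation for regression;
    (alpha, delta)-fairness consistency asks such a quantity to decay at
    rate n^{-alpha} with probability at least 1 - delta.
    """
    means = [preds[groups == g].mean() for g in np.unique(groups)]
    return max(means) - min(means)

rng = np.random.default_rng(0)
n, d, M = 2000, 5, 3
X = rng.normal(size=(n, d))
groups = rng.integers(0, M, size=n)
X[:, 0] += 0.5 * groups                                # a group-correlated feature
beta = np.ones(d)
y = X @ beta + rng.normal(size=n)
preds = X @ np.linalg.lstsq(X, y, rcond=None)[0]       # unconstrained linear fit
print(demographic_parity_gap(preds, groups))           # visible parity violation
```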
| ICLR | 2024 | 18279 | Procedural Fairness Through Decoupling Objectionable Data Generating Components | We reveal and address the frequently overlooked yet important issue of disguised procedural unfairness, namely, the potentially inadvertent alterations on the behavior of neutral (i.e., not problematic) aspects of the data generating process, and/or the lack of procedural assurance of the greatest benefit of the least advantaged individuals. Inspired by John Rawls's advocacy for pure procedural justice (Rawls, 1971; 2001), we view automated decision-making as a microcosm of social institutions, and consider how the data generating process itself can satisfy the requirements of procedural fairness. We propose a framework that decouples the objectionable data generating components from the neutral ones by utilizing reference points and the associated value instantiation rule. Our findings highlight the necessity of preventing disguised procedural unfairness, drawing attention not only to the objectionable data generating components that we aim to mitigate, but also more importantly, to the neutral components that we intend to keep unaffected. | ["Procedural Fairness", "Data Ethics", "Automated Decision-Making", "Fairness in Machine Learning", "Social Justice in AI", "Data Generation Processes"] | |
ICML
| 2,024
| 33,240
|
High-Probability Bound for Non-Smooth Non-Convex Stochastic Optimization with Heavy Tails
|
Recently, Cutkosky et al. introduce the online-to-non-convex framework, which utilizes online learning methods to solve non-smooth non-convex optimization problems, and achieves an $\mathcal{O}(\epsilon^{-3}\delta^{-1})$ gradient complexity for finding $(\delta,\epsilon)$-stationary points. However, their results rely on the bounded variance assumption of stochastic gradients and only hold in expectation. To address these limitations, we investigate the case that stochastic gradients obey heavy-tailed distributions with finite $\mathfrak{p}$-th moments for some $\mathfrak{p}\in(1,2]$, and propose a novel algorithm which is able to identify a $(\delta,\epsilon)$-stationary point with high probability, after consuming $\tilde{\mathcal{O}}(\epsilon^{-\frac{2\mathfrak{p}-1}{\mathfrak{p}-1}}\delta^{-1})$ stochastic gradients. The key idea is first incorporating the gradient clipping technique into the online-to-non-convex framework to produce a sequence of points, the averaged gradient norms of which is no greater than $\epsilon$. Then, we propose a validation method to select one $(\delta,\epsilon)$-stationary point among the candidates. When gradient distributions have bounded variance, i.e., $\mathfrak{p}=2$, our result turns into $\tilde{\mathcal{O}}(\epsilon^{-3}\delta^{-1})$, which improves the existing $\tilde{\mathcal{O}}(\epsilon^{-4}\delta^{-1})$ high-probability bound. When the objective is smooth, our algorithm can also find an $\epsilon$-stationary point with $\tilde{\mathcal{O}}(\epsilon^{-\frac{3\mathfrak{p}-2}{\mathfrak{p}-1}})$ gradient queries.
|
[
"Stochastic Optimization",
"Non-Convex Optimization",
"Machine Learning Theory",
"Online Learning",
"Probability and Statistics"
] | |
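A minimal sketch of the gradient-clipping step at the heart of the entry above (an illustration, not the authors' algorithm; the Pareto noise model and the threshold value are assumptions chosen for demonstration):

```python
import numpy as np

def clip_gradient(g, tau):
    """Clip a stochastic gradient to norm at most tau.

    Under heavy-tailed noise (finite p-th moment, p in (1, 2]), clipping
    trades a small bias for much lighter tails, which is what enables
    high-probability rather than in-expectation guarantees.
    """
    norm = np.linalg.norm(g)
    return g if norm <= tau else g * (tau / norm)

# Toy usage: heavy-tailed (Pareto, infinite-variance) noise around a true gradient.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0])
noisy = true_grad + rng.pareto(1.5, size=(1000, 2))
clipped = np.array([clip_gradient(g, tau=10.0) for g in noisy])
print(noisy.std(axis=0), clipped.std(axis=0))  # clipping tames the dispersion
```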
ICLR
| 2,024
| 17,392
|
Byzantine Robust Cooperative Multi-Agent Reinforcement Learning as a Bayesian Game
|
In this study, we explore the robustness of cooperative multi-agent reinforcement learning (c-MARL) against Byzantine failures, where any agent can enact arbitrary, worst-case actions due to malfunction or adversarial attack. To address the uncertainty that any agent can be adversarial, we propose a Bayesian Adversarial Robust Dec-POMDP (BARDec-POMDP) framework, which views Byzantine adversaries as nature-dictated types, represented by a separate transition. This allows agents to learn policies grounded in their posterior beliefs about the types of other agents, fostering collaboration with identified allies and minimizing vulnerability to adversarial manipulation. We define the optimal solution to the BARDec-POMDP as an ex interim robust Markov perfect Bayesian equilibrium, which we prove to exist and whose corresponding policy weakly dominates previous approaches as time goes to infinity. To realize this equilibrium, we put forward a two-timescale actor-critic algorithm with almost sure convergence under specific conditions. Experiments on matrix games, Level-based Foraging and StarCraft II indicate that our method successfully acquires intricate micromanagement skills and adaptively aligns with allies under worst-case perturbations, showing resilience against non-oblivious adversaries, random allies, observation-based attacks, and transfer-based attacks.
|
[
"Multi-Agent Systems",
"Reinforcement Learning",
"Game Theory",
"Robustness in Machine Learning",
"Adversarial Machine Learning"
] | |
ICML
| 2,024
| 33,876
|
Tackling Non-Stationarity in Reinforcement Learning via Causal-Origin Representation
|
In real-world scenarios, the application of reinforcement learning is significantly challenged by complex non-stationarity. Most existing methods attempt to model changes in the environment explicitly, often requiring impractical prior knowledge of environments. In this paper, we propose a new perspective, positing that non-stationarity can propagate and accumulate through complex causal relationships during state transitions, thereby compounding its sophistication and affecting policy learning. We believe that this challenge can be more effectively addressed by implicitly tracing the causal origin of non-stationarity. To this end, we introduce the Causal-Origin REPresentation (COREP) algorithm. COREP primarily employs a guided updating mechanism to learn a stable graph representation for the state, termed as causal-origin representation. By leveraging this representation, the learned policy exhibits impressive resilience to non-stationarity. We supplement our approach with a theoretical analysis grounded in the causal interpretation for non-stationary reinforcement learning, advocating for the validity of the causal-origin representation. Experimental results further demonstrate the superior performance of COREP over existing methods in tackling non-stationarity problems. The code is available at https://github.com/PKU-RL/COREP.
|
[
"Reinforcement Learning",
"Non-Stationarity",
"Causal Inference"
] | |
ICML
| 2,022
| 17,529
|
The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning
|
We consider the problem of training a $d$ dimensional model with distributed differential privacy (DP) where secure aggregation (SecAgg) is used to ensure that the server only sees the noisy sum of $n$ model updates in every training round. Taking into account the constraints imposed by SecAgg, we characterize the fundamental communication cost required to obtain the best accuracy achievable under $\varepsilon$ central DP (i.e. under a fully trusted server and no communication constraints). Our results show that $\tilde{O}\left(\min(n^2\varepsilon^2, d)\right)$ bits per client are both sufficient and necessary, and this fundamental limit can be achieved by a linear scheme based on sparse random projections. This provides a significant improvement relative to state-of-the-art SecAgg distributed DP schemes which use $\tilde{O}(d\log(d/\varepsilon^2))$ bits per client. Empirically, we evaluate our proposed scheme on real-world federated learning tasks. We find that our theoretical analysis is well matched in practice. In particular, we show that we can reduce the communication cost to under $1.78$ bits per parameter in realistic privacy settings without decreasing test-time performance. Our work hence theoretically and empirically specifies the fundamental price of using SecAgg.
|
[
"Differential Privacy",
"Federated Learning",
"Secure Aggregation",
"Communication Efficiency in Distributed Systems"
] | |
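The sketch below illustrates the flavor of a linear sparse-random-projection compressor of the kind the entry above analyzes; the matrix construction, the dimensions, the shared-seed convention, and the transpose decoding are all illustrative assumptions, and the DP noise addition is omitted:

```python
import numpy as np

def make_sparse_projection(d, k, density=0.1, seed=0):
    """Random sparse k x d matrix; clients share it via a common seed."""
    rng = np.random.default_rng(seed)
    mask = rng.random((k, d)) < density
    signs = rng.choice([-1.0, 1.0], size=(k, d))
    return mask * signs / np.sqrt(k * density)

d, k, n = 1000, 64, 10          # model dim, compressed dim, number of clients
P = make_sparse_projection(d, k)
updates = np.random.default_rng(1).normal(size=(n, d))

# Each client sends only its k-dim projection (plus DP noise, omitted here);
# SecAgg reveals just the sum of the projected, noised updates.
compressed_sum = sum(P @ u for u in updates)

# The server decodes a rough estimate of the summed update from k << d bits.
estimate = P.T @ compressed_sum
print(np.corrcoef(estimate, updates.sum(axis=0))[0, 1])
```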
NeurIPS
| 2,022
| 61,564
|
Private Data Leakage via Exploiting Access Patterns of Sparse Features in Deep Learning-based Recommendation Systems
|
Deep Learning-based Recommendation models use sparse and dense features of a user to predict an item that the user may like. These features carry the users' private information, so service providers often protect these values by memory encryption (e.g., with hardware such as Intel's SGX). However, even with such protection, an attacker may still learn which entries of the sparse feature are nonzero through the embedding table access pattern. In this work, we show that leaking only the nonzero entry positions of the sparse features can be a serious threat to privacy. Using the embedding table access pattern, we show that it is possible to identify or re-identify a user, or extract sensitive attributes from a user. We subsequently show that applying a hash function to anonymize the access pattern is not a solution, as it can be reverse-engineered in many cases.
|
[
"Data Privacy",
"Deep Learning",
"Recommendation Systems",
"Information Security",
"Machine Learning Security"
] | |
NeurIPS
| 2,023
| 72,806
|
Debias Coarsely, Sample Conditionally: Statistical Downscaling through Optimal Transport and Probabilistic Diffusion Models
|
We introduce a two-stage probabilistic framework for statistical downscaling using unpaired data. Statistical downscaling seeks a probabilistic map to transform low-resolution data from a biased coarse-grained numerical scheme to high-resolution data that is consistent with a high-fidelity scheme. Our framework tackles the problem by composing two transformations: (i) a debiasing step via an optimal transport map, and (ii) an upsampling step achieved by a probabilistic diffusion model with a posteriori conditional sampling. This approach characterizes a conditional distribution without needing paired data, and faithfully recovers relevant physical statistics from biased samples. We demonstrate the utility of the proposed approach on one- and two-dimensional fluid flow problems, which are representative of the core difficulties present in numerical simulations of weather and climate. Our method produces realistic high-resolution outputs from low-resolution inputs, by upsampling resolutions of $8\times$ and $16\times$. Moreover, our procedure correctly matches the statistics of physical quantities, even when the low-frequency content of the inputs and outputs do not match, a crucial but difficult-to-satisfy assumption needed by current state-of-the-art alternatives. Code for this work is available at: https://github.com/google-research/swirl-dynamics/tree/main/swirl_dynamics/projects/probabilistic_diffusion.
|
[
"Statistical Downscaling",
"Optimal Transport",
"Probabilistic Diffusion Models",
"Numerical Simulations",
"Weather and Climate Modeling",
"Machine Learning in Climate Science",
"Data Upsampling",
"Computational Fluid Dynamics"
] | |
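As a rough illustration of stage (i) above, the sketch below pushes biased samples onto reference statistics with an empirical OT map using the POT library; the Gaussian stand-in data are an assumption, and the paper's stage (ii) diffusion upsampler is not shown:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
# Stand-ins for coarse, biased low-res samples and unbiased reference samples.
biased = rng.normal(loc=1.5, scale=0.7, size=(500, 2))
reference = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Stage (i): debias by transporting biased samples onto the reference via OT.
transport = ot.da.EMDTransport()
transport.fit(Xs=biased, Xt=reference)
debiased = transport.transform(Xs=biased)
print(debiased.mean(axis=0), debiased.std(axis=0))  # ~ reference statistics
# Stage (ii) in the paper then upsamples with a conditional diffusion model.
```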
ICML
| 2,022
| 16,237
|
Failure and success of the spectral bias prediction for Laplace Kernel Ridge Regression: the case of low-dimensional data
|
Recently, several theories including the replica method made predictions for the generalization error of Kernel Ridge Regression. In some regimes, they predict that the method has a 'spectral bias': decomposing the true function $f^*$ on the eigenbasis of the kernel, it fits the coefficients associated with the $O(P)$ largest eigenvalues well, where $P$ is the size of the training set. This prediction works very well on benchmark data sets such as images, yet the assumptions these approaches make on the data are never satisfied in practice. To clarify when the spectral bias prediction holds, we first focus on a one-dimensional model where rigorous results are obtained and then use scaling arguments to generalize and test our findings in higher dimensions. Our predictions include the classification case $f(x)=\mathrm{sign}(x_1)$ with a data distribution that vanishes at the decision boundary, $p(x)\sim x_1^{\chi}$. For $\chi>0$ and a Laplace kernel, we find that (i) there exists a cross-over ridge $\lambda^*_{d,\chi}(P)\sim P^{-\frac{1}{d+\chi}}$ such that for $\lambda\gg \lambda^*_{d,\chi}(P)$, the replica method applies, but not for $\lambda\ll\lambda^*_{d,\chi}(P)$; (ii) in the ridge-less case, spectral bias predicts the correct training curve exponent only in the limit $d\rightarrow\infty$.
|
[
"Machine Learning Theory",
"Kernel Methods",
"Statistical Learning Theory",
"Generalization Error Analysis",
"High-Dimensional Statistics"
] | |
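A toy reproduction of the one-dimensional setting above with scikit-learn's KernelRidge and a Laplacian kernel; the inverse-transform sampling trick and the two ridge values (chosen to straddle the predicted cross-over scale) are illustrative assumptions:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
chi = 1.0                        # density vanishing exponent at the boundary
# 1-D data with p(x) ~ |x|^chi near x = 0 and target f*(x) = sign(x):
# if u ~ Uniform(-1, 1), then x = sign(u)|u|^{1/(chi+1)} has this density.
u = rng.uniform(-1, 1, size=2000)
x = np.sign(u) * np.abs(u) ** (1.0 / (chi + 1.0))
y = np.sign(x)

for lam in [1e-1, 1e-4]:         # roughly above vs. below the cross-over ridge
    model = KernelRidge(alpha=lam, kernel="laplacian", gamma=1.0)
    model.fit(x[:, None], y)
    xs = np.linspace(-1, 1, 200)[:, None]
    err = np.mean((model.predict(xs) - np.sign(xs.ravel())) ** 2)
    print(f"lambda={lam:g}  test MSE={err:.3f}")
```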
NeurIPS
| 2,023
| 76,087
|
Pay Attention to Mean Fields for Point Cloud Generation
|
Collider data generation via machine learning is gaining traction in particle physics due to the computational cost of traditional Monte Carlo simulations, especially for future high-luminosity colliders. This study presents a model using linearly scaling attention-based aggregation. The model is trained in an adversarial setup, ensuring input permutation equivariance for the generator and permutation invariance for the critic. A feature matching loss is introduced to stabilise the notoriously unstable adversarial training. Results are presented for two different datasets. On the \textsc{JetNet150} dataset, the model is competitive with, but more parameter-efficient than, the current state-of-the-art GAN-based model. The model has been extended to handle the CaloChallenge Dataset 2, where each point cloud contains up to $30\times$ more points than in the previous dataset. The model and its corresponding code will be made available upon publication.
|
[
"Particle Physics",
"Computational Physics",
"Generative Adversarial Networks ",
"Point Cloud Processing"
] | |
ICML
| 2,023
| 23,665
|
The Value of Out-of-Distribution Data
|
Generalization error always improves with more in-distribution data. However, it is an open question what happens as we add out-of-distribution (OOD) data. Intuitively, if the OOD data is quite different, it seems more data would harm generalization error, though if the OOD data are sufficiently similar, much empirical evidence suggests that OOD data can actually improve generalization error. We show a counter-intuitive phenomenon: the generalization error of a task can be a non-monotonic function of the amount of OOD data. Specifically, we prove that generalization error can improve with small amounts of OOD data, and then get worse than no OOD data with larger amounts. In other words, there is value in training on small amounts of OOD data. We analytically demonstrate these results via Fisher's Linear Discriminant on synthetic datasets, and empirically demonstrate them via deep networks on computer vision benchmarks such as MNIST, CIFAR-10, CINIC-10, PACS and DomainNet. In the idealistic setting where we know which samples are OOD, we show that these non-monotonic trends can be exploited using an appropriately weighted objective of the target and OOD empirical risk. While its practical utility is limited, this does suggest that if we can detect OOD samples, then there may be ways to benefit from them. When we do not know which samples are OOD, we show how a number of go-to strategies such as data-augmentation, hyper-parameter optimization and pre-training are not enough to ensure that the target generalization error does not deteriorate with the number of OOD samples in the dataset.
|
[
"Generalization and Overfitting",
"Out-of-Distribution Data",
"Computer Vision",
"Deep Learning",
"Data Augmentation",
"Hyperparameter Optimization"
] | |
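The entry above demonstrates its non-monotonicity result analytically with Fisher's Linear Discriminant; the sketch below is a toy probe of that setting on synthetic Gaussians, not the authors' experiment. The class means, OOD shift, and sample sizes are assumptions, and the exact trend depends on them:

```python
import numpy as np

rng = np.random.default_rng(0)

def lda_test_error(n_ood, n_target=50, trials=200):
    """Fisher LDA trained on target data plus n_ood shifted OOD samples,
    evaluated on in-distribution test data."""
    errs = []
    for _ in range(trials):
        X0 = rng.normal(size=(n_target, 2)) + [-1.0, 0.0]
        X1 = rng.normal(size=(n_target, 2)) + [+1.0, 0.0]
        X1 = np.vstack([X1, rng.normal(size=(n_ood, 2)) + [+1.0, 2.5]])
        m0, m1 = X0.mean(0), X1.mean(0)
        Sw = np.cov(np.vstack([X0 - m0, X1 - m1]).T)   # within-class scatter
        w = np.linalg.solve(Sw, m1 - m0)               # Fisher direction
        thr = w @ (m0 + m1) / 2
        T0 = rng.normal(size=(500, 2)) + [-1.0, 0.0]   # in-distribution test
        T1 = rng.normal(size=(500, 2)) + [+1.0, 0.0]
        errs.append(((T0 @ w > thr).mean() + (T1 @ w <= thr).mean()) / 2)
    return float(np.mean(errs))

# Sweep the amount of OOD data to probe the (possibly non-monotonic) trend.
for n in [0, 5, 20, 100]:
    print(n, round(lda_test_error(n), 4))
```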
ICML
| 2,024
| 36,710
|
Boost your crystal model with denoising pre-training
|
Crystals play a vital role in a wide range of materials, influencing both cutting-edge technologies and everyday applications. Recently, deep learning approaches for crystal property prediction have shown exceptional performance, driving significant progress in materials discovery. However, supervised approaches can only be trained on labeled data, and the number of data points varies across properties. Making full use of unlabeled data remains an ongoing challenge. To address this issue, we propose an unsupervised Denoising Pre-training Framework (DPF) for crystal structures. DPF trains a model to reconstruct the original crystal structure by recovering the masked atom types, perturbed atom positions, and perturbed crystal lattices. Through the pre-training, models learn the intrinsic features of crystal structures and capture the key features influencing crystal properties. We pre-train models on 380,743 unlabeled crystal structures and fine-tune them on downstream property prediction benchmarks. Extensive experiments demonstrate the effectiveness of our denoising pre-training framework.
|
[
"Materials Science",
"Computational Materials Science",
"Machine Learning in Materials Science",
"Deep Learning",
"Unsupervised Learning",
"Crystal Structure Prediction",
"Data-Driven Materials Discovery"
] | |
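An illustrative corruption step for the denoising pre-training described above, masking atom types and jittering positions and lattice; the noise scales, mask rate, mask token, and the SiO2 toy cell are assumptions, and the paper's exact corruption scheme may differ:

```python
import numpy as np

def corrupt_crystal(types, frac_coords, lattice, rng,
                    mask_rate=0.15, pos_sigma=0.05, lat_sigma=0.02,
                    mask_token=-1):
    """Mask atom types, perturb fractional coordinates, perturb the lattice.
    A model would then be trained to recover the clean structure (a sketch,
    not the paper's exact noise schedule)."""
    types = types.copy()
    mask = rng.random(len(types)) < mask_rate
    types[mask] = mask_token
    noisy_coords = (frac_coords + rng.normal(0, pos_sigma, frac_coords.shape)) % 1.0
    noisy_lattice = lattice * (1 + rng.normal(0, lat_sigma, lattice.shape))
    return types, noisy_coords, noisy_lattice, mask

rng = np.random.default_rng(0)
types = np.array([14, 14, 8, 8])        # hypothetical SiO2-like atomic numbers
coords = rng.random((4, 3))             # fractional coordinates in the unit cell
lattice = np.diag([4.9, 4.9, 5.4])      # toy tetragonal cell (angstroms)
print(corrupt_crystal(types, coords, lattice, rng))
```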
NeurIPS
| 2,023
| 70,888
|
Data-Informed Geometric Space Selection
|
Geometric representation learning (e.g., hyperbolic and spherical geometry) has proven to be efficacious in solving many intricate machine learning tasks. The fundamental challenge of geometric representation learning lies in aligning the inherent geometric bias with the underlying structure of the data, which is a rarely explored topic in the literature. Existing methods heavily rely on heuristic assumptions on the data structure to decide the type of geometry to be adopted, which often leads to suboptimal performance. This work aims to automate the alignment process via a data-informed strategy such that we optimize model performance with minimal overhead. Specifically, a sparse gating mechanism is employed to enable each input data point $\mathit{p}$ to select $K$ geometric spaces from a given candidate geometric space pool of $N$ ($K \le N$) spaces.
|
[
"Geometric Representation Learning",
"Data-Driven Methods",
"Automated Model Selection"
] | |
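A minimal sketch of a top-K sparse gating step of the kind described in the entry above; the logits, renormalization rule, and candidate pool are illustrative assumptions rather than the paper's mechanism:

```python
import numpy as np

def topk_gate(logits, k):
    """Sparse gating: pick K geometric spaces out of N, with softmax weights
    renormalized over the selected ones."""
    top = np.argsort(logits)[-k:]            # indices of the K chosen spaces
    w = np.exp(logits[top] - logits[top].max())
    return top, w / w.sum()

# Hypothetical per-point scores for N=5 candidate spaces
# (e.g. Euclidean, hyperbolic, spherical, and two product spaces).
logits = np.array([0.2, 1.7, -0.3, 0.9, 0.1])
print(topk_gate(logits, k=2))
```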
ICML
| 2,023
| 24,994
|
Off-Policy Evaluation for Large Action Spaces via Conjunct Effect Modeling
|
We study off-policy evaluation (OPE) of contextual bandit policies for large discrete action spaces where conventional importance-weighting approaches suffer from excessive variance. To circumvent this variance issue, we propose a new estimator, called OffCEM, that is based on the conjunct effect model (CEM), a novel decomposition of the causal effect into a cluster effect and a residual effect. OffCEM applies importance weighting only to action clusters and addresses the residual causal effect through model-based reward estimation. We show that the proposed estimator is unbiased under a new assumption, called local correctness, which only requires that the residual-effect model preserves the relative expected reward differences of the actions within each cluster. To best leverage the CEM and local correctness, we also propose a new two-step procedure for performing model-based estimation that minimizes bias in the first step and variance in the second step. We find that the resulting OffCEM estimator substantially improves bias and variance compared to a range of conventional estimators. Experiments demonstrate that OffCEM provides substantial improvements in OPE especially in the presence of many actions.
|
[
"Reinforcement Learning",
"Causal Inference",
"Statistical Methods",
"Contextual Bandits"
] | |
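A sketch of the OffCEM-style decomposition described above (cluster-level importance weighting plus a model-based direct term); this is an illustration under assumed inputs, not the authors' implementation:

```python
import numpy as np

def offcem(rewards, actions, clusters, pi_e, pi_b, phi, q_hat):
    """rewards, actions, clusters: logged data arrays of length n
    pi_e, pi_b: (n, n_actions) action probabilities under the evaluation
                and behavior policies for each logged context
    phi:        (n_actions,) action -> cluster map
    q_hat:      (n, n_actions) model-based reward estimates
    """
    n = len(rewards)
    idx = np.arange(n)
    n_clusters = phi.max() + 1
    pe_c = np.zeros((n, n_clusters))
    pb_c = np.zeros((n, n_clusters))
    for c in range(n_clusters):
        # Cluster probabilities: sum the action probabilities in each cluster.
        pe_c[:, c] = pi_e[:, phi == c].sum(axis=1)
        pb_c[:, c] = pi_b[:, phi == c].sum(axis=1)
    w = pe_c[idx, clusters] / pb_c[idx, clusters]   # cluster importance weights
    residual = rewards - q_hat[idx, actions]        # residual effect, weighted
    direct = (pi_e * q_hat).sum(axis=1)             # model-based cluster effect
    return np.mean(w * residual + direct)

# Toy synthetic logged-bandit data (all quantities hypothetical).
rng = np.random.default_rng(0)
n, n_act = 1000, 8
phi = np.array([0, 0, 0, 0, 1, 1, 1, 1])            # 8 actions -> 2 clusters
pi_b = rng.dirichlet(np.ones(n_act), size=n)
pi_e = rng.dirichlet(np.ones(n_act), size=n)
actions = np.array([rng.choice(n_act, p=p) for p in pi_b])
q_hat = rng.random((n, n_act))
rewards = q_hat[np.arange(n), actions] + rng.normal(0, 0.1, n)
print(offcem(rewards, actions, phi[actions], pi_e, pi_b, phi, q_hat))
```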
NeurIPS
| 2,022
| 53,690
|
On the Frequency-bias of Coordinate-MLPs
|
We show that typical implicit regularization assumptions for deep neural networks (for regression) do not hold for coordinate-MLPs, a family of MLPs that are now ubiquitous in computer vision for representing high-frequency signals. Lack of such implicit bias disrupts smooth interpolations between training samples, and hampers generalizing across signal regions with different spectra. We investigate this behavior through a Fourier lens and uncover that as the bandwidth of a coordinate-MLP is enhanced, lower frequencies tend to get suppressed unless a suitable prior is provided explicitly. Based on these insights, we propose a simple regularization technique that can mitigate the above problem, which can be incorporated into existing networks without any architectural modifications.
|
[
"Deep Learning",
"Computer Vision",
"Neural Networks",
"Signal Processing"
] | |
ICML
| 2,023
| 26,946
|
Ensemble Fractional Imputation for Incomplete Categorical Data with a Graphical Model
|
Missing data is common in practice, and standard statistical inference can be biased when missingness is related to the outcome of interest. We present a frequentist approach using a graphical model and fractional imputation, which can handle missing data for multivariate categorical variables under the missing at random assumption. To avoid the problems due to the curse of dimensionality in multivariate data, we adopt the idea of a random forest to fit multiple reduced models and then combine them using model weights. The model weights are computed with a novel method, double projection, in which the observed likelihood is projected onto the class of graphical mixture models. The performance of the proposed method is investigated through an extensive simulation study.
|
[
"Statistical Methods",
"Missing Data Analysis",
"Categorical Data Analysis",
"Graphical Models",
"Machine Learning Techniques"
] | |
NeurIPS
| 2,022
| 60,058
|
Identifying Structure in the MIMIC ICU Dataset
|
The MIMIC-III dataset, containing trajectories of 40,000 ICU patients, is one of the most popular datasets in the machine learning for health space. However, there has been very little systematic exploration to understand the natural structure of these data---most analyses enforce some type of top-down clustering or embedding. We take a bottom-up approach, identifying consistent structures that are robust across a range of embedding choices. We identified two dominant structures, sorted by either fraction of inspired oxygen or creatinine---both of which were validated as key features by our clinical co-author. Our bottom-up approach to studying the macro-structure of a dataset can also be adapted for other datasets.
|
[
"Machine Learning in Healthcare",
"Data Analysis",
"Intensive Care Medicine",
"Clinical Data Mining"
] | |
ICLR
| 2,024
| 21,199
|
LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression
|
In long context scenarios, large language models (LLMs) face three main challenges: higher computational cost, performance reduction, and position bias. Research indicates that LLM performance hinges on the density and position of key information in the input prompt. Addressing this, we introduce LongLLMLingua, a method for prompt compression that improves LLMs’ key information perception, effectively tackling these challenges. Our extensive evaluation across various long context scenarios demonstrates that LongLLMLingua not only enhances performance but also significantly reduces costs and latency. For instance, in the NaturalQuestions benchmark, LongLLMLingua boosts performance by up to 21.4% with around 4x fewer tokens in GPT-3.5-Turbo, leading to substantial cost savings. It achieves a 94.2% cost reduction in the LooGLE benchmark. Moreover, when compressing prompts of about 10k tokens at rates of 2x-10x, LongLLMLingua can accelerate end-to-end latency by 1.4x-3.8x.
|
[
"Natural Language Processing",
"Computational Linguistics",
"Artificial Intelligence ",
"Language Models",
"Data Compression"
] | |
ICLR
| 2,023
| 11,826
|
GOOD: Exploring geometric cues for detecting objects in an open world
|
We address the task of open-world class-agnostic object detection, i.e., detecting every object in an image by learning from a limited number of base object classes. State-of-the-art RGB-based models suffer from overfitting the training classes and often fail at detecting novel-looking objects. This is because RGB-based models primarily rely on appearance similarity to detect novel objects and are also prone to overfitting short-cut cues such as textures and discriminative parts. To address these shortcomings of RGB-based object detectors, we propose incorporating geometric cues such as depth and normals, predicted by general-purpose monocular estimators. Specifically, we use the geometric cues to train an object proposal network for pseudo-labeling unannotated novel objects in the training set. Our resulting Geometry-guided Open-world Object Detector (GOOD) significantly improves detection recall for novel object categories and already performs well with only a few training classes. Using a single ``person'' class for training on the COCO dataset, GOOD surpasses SOTA methods by 5.0% AR@100, a relative improvement of 24%. The code has been made available at https://github.com/autonomousvision/good.
|
[
"Computer Vision",
"Object Detection",
"Open-world Learning",
"Geometric Deep Learning"
] | |
NeurIPS
| 2,022
| 57,202
|
Accelerated Single-Call Methods for Constrained Min-Max Optimization
|
We study first-order methods for constrained min-max optimization. Existing methods either require two gradient calls or two projections in each iteration, which may be costly in applications. In this paper, we first show that the \emph{Optimistic Gradient (OG)} method, a \emph{single-call single-projection} algorithm, has $O(\frac{1}{\sqrt{T}})$ convergence rate for inclusion problems with operators that satisfy the weak Minty variational inequality (MVI). Our second result is the first single-call single-projection algorithm -- the \emph{Accelerated Reflected Gradient (ARG)} method -- that achieves the \emph{optimal $O(\frac{1}{T})$} convergence rate for inclusion problems that satisfy negative comonotonicity. Both the weak MVI and negative comonotonicity are well-studied assumptions and capture a rich set of non-convex non-concave min-max optimization problems. Finally, we show that the \emph{Reflected Gradient (RG)} method, another \emph{single-call single-projection} algorithm, has $O(\frac{1}{\sqrt{T}})$ last-iterate convergence rate for constrained convex-concave min-max optimization, answering an open problem of (Hsieh et al., 2019).
|
[
"Optimization",
"Min-Max Optimization",
"Constrained Optimization",
"First-Order Methods",
"Non-Convex Optimization",
"Algorithm Design and Analysis"
] | |
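One common form of the projected Optimistic Gradient update, run on a toy constrained bilinear saddle problem; the step size, constraint set, and problem instance are illustrative assumptions, and this is a sketch of the single-call single-projection idea rather than the paper's analyzed setting:

```python
import numpy as np

def project_ball(z, radius=1.0):
    """Euclidean projection onto a centered ball: the single projection per step."""
    nrm = np.linalg.norm(z)
    return z if nrm <= radius else z * (radius / nrm)

# Toy constrained bilinear min-max: min_x max_y x^T A y  s.t.  ||(x, y)|| <= 1.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))

def F(z):
    # Monotone operator of the saddle problem: (grad_x, -grad_y).
    x, y = z[:3], z[3:]
    return np.concatenate([A @ y, -A.T @ x])

eta = 0.05
z = project_ball(rng.normal(size=6))
F_prev = F(z)
for _ in range(5000):
    F_curr = F(z)                                    # the single new operator call
    z = project_ball(z - eta * (2 * F_curr - F_prev))  # optimistic correction
    F_prev = F_curr
print(np.linalg.norm(F(z)))                          # small near the saddle (0, 0)
```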
NeurIPS
| 2,022
| 60,314
|
Utilizing Mutations to Evaluate Interpretability of Neural Networks on Genomic Data
|
Even though deep neural networks (DNNs) achieve state-of-the-art results for a large number of problems involving genomic data, getting DNNs to explain their decision-making process has been a major challenge due to their black-box nature. One way to get DNNs to explain their reasoning for prediction is via attribution methods which are assumed to highlight the parts of the input that contribute to the prediction the most. Given the existence of numerous attribution methods and a lack of quantitative results on the fidelity of those methods, selection of an attribution method for sequence-based tasks has been mostly done qualitatively. In this work, we take a step towards identifying the most faithful attribution method by proposing a computational approach that utilizes point mutations. Providing quantitative results on seven popular attribution methods, we find Layerwise Relevance Propagation (LRP) to be the most appropriate attribution method with LRP identifying two important biological features for translation: the integrity of Kozak sequence as well as the detrimental effects of premature stop codons.
|
[
"Genomics",
"Bioinformatics",
"Neural Networks",
"Machine Learning Interpretability",
"Computational Biology"
] | |
ICML
| 2,022
| 16,043
|
Bounding Training Data Reconstruction in Private (Deep) Learning
|
Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary's capabilities and is not applicable when membership status itself is non-sensitive. In this paper, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods---Renyi differential privacy and Fisher information leakage---both offer strong semantic protection against data reconstruction attacks.
|
[
"Differential Privacy",
"Machine Learning Security",
"Data Privacy",
"Privacy-Preserving Machine Learning",
"Information Security"
] | |
ICML
| 2,023
| 26,497
|
Latent Space Editing in Transformer-Based Flow Matching
|
This paper strives for image editing via generative models. Flow Matching is an emerging generative modeling technique that offers the advantage of simple and efficient training. Simultaneously, a new transformer-based U-ViT has recently been proposed to replace the commonly used UNet for better scalability and performance in generative modeling. Hence, Flow Matching with a transformer backbone offers the potential for scalable and high-quality generative modeling, but its latent structure and editing ability are as yet unknown. We therefore adopt this setting and explore how to edit images through latent space manipulation. We introduce an editing space, which we call $u$-space, that can be manipulated in a controllable, accumulative, and composable manner. Additionally, we propose a tailored sampling solution to enable sampling with the more efficient adaptive step-size ODE solvers. Lastly, we put forth a straightforward yet powerful method for achieving fine-grained and nuanced editing using text prompts. Our framework is simple and efficient, all while being highly effective at editing images while preserving the essence of the original content. We will provide our source code and include it in the appendix.
|
[
"Generative Models",
"Image Editing",
"Computer Vision",
"Deep Learning",
"Transformer Models"
] | |
ICML
| 2,023
| 23,557
|
Bootstrapped Representations in Reinforcement Learning
|
In reinforcement learning (RL), state representations are key to dealing with large or continuous state spaces. While one of the promises of deep learning algorithms is to automatically construct features well-tuned for the task they try to solve, such a representation might not emerge from end-to-end training of deep RL agents. To mitigate this issue, auxiliary objectives are often incorporated into the learning process and help shape the learnt state representation. Bootstrapping methods are today's method of choice to make these additional predictions. Yet, it is unclear which features these algorithms capture and how they relate to those from other auxiliary-task-based approaches. In this paper, we address this gap and provide a theoretical characterization of the state representation learnt by temporal difference learning (Sutton, 1988). Surprisingly, we find that this representation differs from the features learned by Monte Carlo and residual gradient algorithms for most transition structures of the environment in the policy evaluation setting. We describe the efficacy of these representations for policy evaluation, and use our theoretical analysis to design new auxiliary learning rules. We complement our theoretical results with an empirical comparison of these learning rules for different cumulant functions on classic domains such as the four-room domain (Sutton et al, 1999) and Mountain Car (Moore, 1990).
|
[
"Reinforcement Learning",
"Deep Learning",
"Representation Learning"
] | |
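A minimal sketch of the bootstrapped (semi-gradient) TD(0) update whose induced linear representations the entry above characterizes; the chain, features, and hyperparameters are illustrative assumptions:

```python
import numpy as np

def td0_features(P, r, Phi, gamma=0.9, lr=0.05, steps=20000, seed=0):
    """TD(0) policy evaluation with linear features Phi on a Markov chain
    with transition matrix P and per-state rewards r."""
    rng = np.random.default_rng(seed)
    n, d = Phi.shape
    theta = np.zeros(d)
    s = 0
    for _ in range(steps):
        s_next = rng.choice(n, p=P[s])
        # Bootstrapped target: reward plus the *current* value estimate.
        delta = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
        theta += lr * delta * Phi[s]
        s = s_next
    return theta

# 4-state random-walk chain with a reward on the last state.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
r = np.array([0.0, 0.0, 0.0, 1.0])
Phi = np.eye(4)                          # tabular features for clarity
print(td0_features(P, r, Phi))
```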
NeurIPS
| 2,023
| 70,197
|
Tracr: Compiled Transformers as a Laboratory for Interpretability
|
We show how to "compile" human-readable programs into standard decoder-only transformer models. Our compiler, Tracr, generates models with known structure. This structure can be used to design experiments. For example, we use it to study "superposition" in transformers that execute multi-step algorithms. Additionally, the known structure of Tracr-compiled models can serve as ground truth for evaluating interpretability methods. Commonly, because the "programs" learned by transformers are unknown, it is unclear whether an interpretation succeeded. We demonstrate our approach by implementing and examining programs including computing token frequencies, sorting, and parenthesis checking. We provide an open-source implementation of Tracr at https://github.com/google-deepmind/tracr.
|
[
"Natural Language Processing",
"Model Interpretability",
"Transformer Models",
"Computational Linguistics"
] | |
ICML
| 2,024
| 33,600
|
Position: Rethinking Post-Hoc Search-Based Neural Approaches for Solving Large-Scale Traveling Salesman Problems
|
Recent advancements in solving large-scale traveling salesman problems (TSP) utilize the heatmap-guided Monte Carlo tree search (MCTS) paradigm, where machine learning (ML) models generate heatmaps, indicating the probability distribution of each edge being part of the optimal solution, to guide MCTS in solution finding. However, our theoretical and experimental analysis raises doubts about the effectiveness of ML-based heatmap generation. In support of this, we demonstrate that a simple baseline method can outperform complex ML approaches in heatmap generation. Furthermore, we question the practical value of the heatmap-guided MCTS paradigm. To substantiate this, our findings show its inferiority to the LKH-3 heuristic despite the paradigm's reliance on problem-specific, hand-crafted strategies. For the future, we suggest research directions focused on developing more theoretically sound heatmap generation methods and exploring autonomous, generalizable ML approaches for combinatorial problems. The code is available for review: https://github.com/xyfffff/rethinkmctsfor_tsp.
|
[
"Combinatorial Optimization",
"Operations Research",
"Neural Networks",
"Algorithm Design"
] | |
NeurIPS
| 2,023
| 75,069
|
DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models
|
This paper presents DiffuseBot, a first step toward efficient automatic robotic and virtual creature content creation. We propose using physical simulation to guide the generative process of pretrained large-scale 3D diffusion models. Diffusion models pretrained for 3D shapes provide an expressive base distribution that can effectively propose reasonable candidate geometries for soft robots. In order to sample robots in a physics-aware and performance-driven manner, we first optimize the embeddings that condition the diffusion model, skewing the sampling distribution toward better-performing robots as evaluated by our simulator. Then, we reformulate the sampling process to incorporate co-optimization over structure and control.
|
[
"Robotics",
"Soft Robotics",
"Generative Models",
"Computational Design",
"Simulation and Modeling"
] | |
ICML
| 2,023
| 28,098
|
PRODIGY: Enabling In-context Learning Over Graphs
|
In-context learning is the ability of a pretrained model to adapt to novel and diverse downstream tasks by conditioning on prompt examples, without optimizing any parameters. While large language models have demonstrated this ability, how in-context learning could be performed over graphs is unexplored. In this paper, we develop Pretraining Over Diverse In-Context Graph Systems (PRODIGY), the first pretraining framework that enables in-context learning over graphs. The key idea of our framework is to formulate in-context learning over graphs with a novel \emph{prompt graph} representation, which connects prompt examples and queries. We then propose a graph neural network architecture over the prompt graph and a corresponding family of in-context pretraining objectives. With PRODIGY, the pretrained model can directly perform novel downstream classification tasks on unseen graphs via in-context learning. We provide empirical evidence of the effectiveness of our framework by showcasing its strong in-context learning performance on tasks involving citation networks and knowledge graphs. Our approach outperforms the in-context learning accuracy of contrastive pretraining baselines with hard-coded adaptation by 18\% on average across all setups. Moreover, it also outperforms standard finetuning with limited data by 33\% on average with in-context learning.
|
[
"Graph Neural Networks",
"Natural Language Processing",
"In-Context Learning",
"Pretraining Methods",
"Graph Representation Learning"
] | |
ICML
| 2,023
| 24,284
|
Lower Bounds for Learning in Revealing POMDPs
|
This paper studies the fundamental limits of reinforcement learning (RL) in the challenging *partially observable* setting. While it is well-established that learning in Partially Observable Markov Decision Processes (POMDPs) requires exponentially many samples in the worst case, a surge of recent work shows that polynomial sample complexities are achievable under the *revealing condition*---A natural condition that requires the observables to reveal some information about the unobserved latent states. However, the fundamental limits for learning in revealing POMDPs are much less understood, with existing lower bounds being rather preliminary and having substantial gaps from the current best upper bounds. We establish strong PAC and regret lower bounds for learning in revealing POMDPs. Our lower bounds scale polynomially in all relevant problem parameters in a multiplicative fashion, and achieve significantly smaller gaps against the current best upper bounds, providing a solid starting point for future studies. In particular, for *multi-step* revealing POMDPs, we show that (1) the latent state-space dependence is at least $\Omega(S^{1.5})$ in the PAC sample complexity, which is notably harder than the $\widetilde{\Theta}(S)$ scaling for fully-observable MDPs; (2) Any polynomial sublinear regret is at least $\Omega(T^{2/3})$, suggesting its fundamental difference from the *single-step* case where $\widetilde{\mathcal{O}}(\sqrt{T})$ regret is achievable. Technically, our hard instance construction adapts techniques in *distribution testing*, which is new to the RL literature and may be of independent interest. We also complement our results with new sharp regret upper bounds for *strongly B-stable PSRs*, which include single-step revealing POMDPs as a special case.
|
[
"Reinforcement Learning",
"Partially Observable Markov Decision Processes ",
"Theoretical Computer Science",
"Machine Learning Theory"
] | |
ICML
| 2,024
| 34,839
|
Image Hijacks: Adversarial Images can Control Generative Models at Runtime
|
Are foundation models secure against malicious actors? In this work, we focus on the image input to a vision-language model (VLM). We discover image hijacks, adversarial images that control the behaviour of VLMs at inference time, and introduce the general Behaviour Matching algorithm for training image hijacks. From this, we derive the Prompt Matching method, allowing us to train hijacks matching the behaviour of an arbitrary user-defined text prompt (e.g. 'the Eiffel Tower is now located in Rome') using a generic, off-the-shelf dataset unrelated to our choice of prompt. We use Behaviour matching to craft hijacks for four types of attack: forcing VLMs to generate outputs of the adversary’s choice, leak information from their context window, override their safety training, and believe false statements. We study these attacks against LLaVA, a state-of-the-art VLM based on CLIP and LLaMA-2, and find that all attack types achieve a success rate of over 80%. Moreover, our attacks are automated and require only small image perturbations.
|
[
"Adversarial Machine Learning",
"Computer Vision",
"Generative Models",
"Security in AI",
"Vision-Language Models"
] | |
ICML
| 2,023
| 23,472
|
Simple Disentanglement of Style and Content in Visual Representations
|
Learning visual representations with interpretable features, i.e., disentangled representations, remains a challenging problem. Existing methods demonstrate some success but are hard to apply to large-scale vision datasets like ImageNet. In this work, we propose a simple post-processing framework to disentangle content and style in learned representations from pre-trained vision models. We model the pre-trained features probabilistically as linearly entangled combinations of the latent content and style factors and develop a simple disentanglement algorithm based on the probabilistic model. We show that the method provably disentangles content and style features and verify its efficacy empirically. Our post-processed features yield significant domain generalization performance improvements when the distribution shift occurs due to style changes or style-related spurious correlations.
|
[
"Computer Vision",
"Representation Learning",
"Domain Generalization"
] | |
NeurIPS
| 2,023
| 69,916
|
Online List Labeling with Predictions
|
A growing line of work shows how learned predictions can be used to break through worst-case barriers to improve the running time of an algorithm. However, incorporating predictions into data structures with strong theoretical guarantees remains underdeveloped. This paper takes a step in this direction by showing that predictions can be leveraged in the fundamental online list labeling problem. In the problem, $n$ items arrive over time and must be stored in sorted order in an array of size $\Theta(n)$. The array slot of an element is its label and the goal is to maintain sorted order while minimizing the total number of elements moved (i.e., relabeled). We design a new list labeling data structure and bound its performance in two models. In the worst-case learning-augmented model, we give guarantees in terms of the error in the predictions. Our data structure provides strong guarantees: it is optimal for any prediction error and guarantees the best-known worst-case bound even when the predictions are entirely erroneous. We also consider a stochastic error model and bound the performance in terms of the expectation and variance of the error. Finally, the theoretical results are demonstrated empirically. In particular, we show that our data structure has strong performance on real temporal data sets where predictions are constructed from elements that arrived in the past, as is typically done in a practical use case.
|
[
"Algorithms",
"Data Structures",
"Online Algorithms",
"Learning-Augmented Algorithms"
] | |
NeurIPS
| 2,023
| 70,499
|
FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning
|
Federated learning (FL) provides a distributed training paradigm where multiple clients can jointly train a global model without sharing their local data. However, recent studies have shown that FL offers an additional surface for backdoor attacks. For instance, an attacker can compromise a subset of clients and thus corrupt the global model to misclassify an input with a backdoor trigger as the adversarial target. Existing defenses for FL against backdoor attacks usually detect and exclude the corrupted information from the compromised clients based on a static attacker model. However, such defenses are inadequate against dynamic attackers who strategically adapt their attack strategies. To bridge this gap, we model the strategic interactions between the defender and dynamic attackers as a minimax game. Based on the analysis of the game, we design an interactive defense mechanism FedGame. We prove that under mild assumptions, the global model trained with FedGame under backdoor attacks is close to that trained without attacks. Empirically, we compare FedGame with multiple state-of-the-art baselines on several benchmark datasets under various attacks. We show that FedGame can effectively defend against strategic attackers and achieves significantly higher robustness than baselines. Our code is available at: https://github.com/AI-secure/FedGame.
|
[
"Federated Learning",
"Game Theory",
"Cybersecurity",
"Machine Learning Security",
"Adversarial Machine Learning"
] | |
ICML
| 2,023
| 26,527
|
Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning
|
We propose A-Crab (Actor-Critic Regularized by Average Bellman error), a new algorithm for offline reinforcement learning (RL) in complex environments with insufficient data coverage. Our algorithm combines the marginalized importance sampling framework with the actor-critic paradigm, where the critic returns evaluations of the actor (policy) that are pessimistic relative to the offline data and have a small average (importance-weighted) Bellman error. Compared to existing methods, our algorithm simultaneously offers a number of advantages: (1) It achieves the optimal statistical rate of $1/\sqrt{N}$---where $N$ is the size of offline dataset---in converging to the best policy covered in the offline dataset, even when combined with general function approximators. (2) It relies on a weaker *average* notion of policy coverage (compared to the $\ell_\infty$ single-policy concentrability) that exploits the structure of policy visitations. (3) It outperforms the data-collection behavior policy over a wide range of specific hyperparameters.
|
[
"Offline Reinforcement Learning",
"Actor-Critic Methods",
"Importance Sampling",
"Statistical Learning Theory"
] | |
NeurIPS
| 2,023
| 76,153
|
AstroYOLO: Learning Astronomy Multi-Tasks in a Single Unified Real-Time Framework
|
In this paper, we propose a single unified real-time pipeline that jointly performs two tasks: star vs. galaxy detection and smooth type vs. disk type galaxy classification. To achieve this goal, we introduce a model with two different classification heads sharing the same backbone: the first classification head detects useful objects in universe images and classifies them as star vs. galaxy, while the second classification head further classifies whether a galaxy is smooth or disk type. As the backbone, we use the YOLOX architecture, add the two classification heads on top of it, and train them using two heterogeneous datasets: a star vs. galaxy detection dataset, whose images include star and galaxy objects with corresponding bounding boxes and class labels, and a smooth vs. disk type classification dataset of galaxy images and their corresponding labels. To prevent catastrophic forgetting when learning two heads and a backbone, we alternate training between the two tasks and also apply data augmentation such as mosaic and mix-up. The final model achieves 73.4\% accuracy on the smooth vs. disk type classification task, and a 65.6 mAP score on the star vs. galaxy detection task.
|
[
"Astronomy",
"Computer Vision",
"Image Classification",
"Object Detection"
] | |
NeurIPS
| 2,023
| 72,125
|
Task-aware world model learning with meta weighting via bi-level optimization
|
Aligning the world model with the environment for the agent’s specific task is crucial in model-based reinforcement learning. While value-equivalent models may achieve better task awareness than maximum-likelihood models, they sacrifice a large amount of semantic information and face implementation issues. To combine the benefits of both types of models, we propose Task-aware Environment Modeling Pipeline with bi-level Optimization (TEMPO), a bi-level model learning framework that introduces an additional level of optimization on top of a maximum-likelihood model by incorporating a meta weighter network that weights each training sample. The meta weighter in the upper level learns to generate novel sample weights by minimizing a proposed task-aware model loss. The model in the lower level focuses on important samples while maintaining rich semantic information in state representations. We evaluate TEMPO on a variety of continuous and discrete control tasks from the DeepMind Control Suite and Atari video games. Our results demonstrate that TEMPO achieves state-of-the-art performance regarding asymptotic performance, training stability, and convergence speed.
|
[
"Model-based Reinforcement Learning",
"Bi-level Optimization",
"Task-aware Learning",
"Meta Learning",
"Control Systems",
"Machine Learning Algorithms"
] | |
ICML
| 2,023
| 28,306
|
Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection
|
This work advances the understandings of the remarkable \emph{in-context learning} (ICL) abilities of transformers---the ability of performing new tasks when prompted with training and test examples, without any parameter update to the model. We begin by showing that transformers can implement a broad class of standard machine learning algorithms in context, such as least squares, ridge regression, Lasso, convex risk minimization for generalized linear models, and gradient descent on two-layer neural networks, with near-optimal predictive power on various in-context data distributions. Our transformer constructions admit mild bounds on the number of layers and heads, and can be learned with polynomially many pretraining sequences. Building on these ``base'' ICL algorithms, intriguingly, we show that transformers can implement more complex ICL procedures involving \emph{in-context algorithm selection}, akin to what a statistician can do in real life---A \emph{single} transformer can adaptively select different base ICL algorithms---or even perform qualitatively different tasks---on different input sequences, without any explicit prompting of the right algorithm or task. In theory, we construct two general mechanisms for algorithm selection with concrete examples: (1) Pre-ICL testing, where the transformer determines the right task for the given sequenceby examining certain summary statistics of the input sequence; (2) Post-ICL validation, where the transformer selects---among multiple base ICL algorithms---a near-optimal one for the given sequence using a train-validation split. Experimentally, we demonstrate the strong in-context algorithm selection capabilities of standard transformer architectures.
|
[
"Natural Language Processing",
"Statistical Learning",
"Algorithm Selection",
"Neural Networks"
] | |
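The sketch below spells out, in plain NumPy rather than inside a transformer, two ingredients of the entry above: ridge regression as a "base" ICL algorithm, and a train-validation split over the prompt mirroring the post-ICL validation mechanism; the penalty grid and data are illustrative assumptions:

```python
import numpy as np

def ridge_icl(X_ctx, y_ctx, x_query, lam=0.1):
    """Ridge regression on in-context examples: one of the base algorithms
    the paper shows transformers can implement in context."""
    d = X_ctx.shape[1]
    theta = np.linalg.solve(X_ctx.T @ X_ctx + lam * np.eye(d), X_ctx.T @ y_ctx)
    return x_query @ theta

def post_icl_validation(X_ctx, y_ctx, x_query, lams=(1e-3, 1e-1, 1e1)):
    """Post-ICL validation: pick the base algorithm (here, the ridge penalty)
    that does best on a held-out split of the prompt."""
    n = len(y_ctx)
    tr, va = slice(0, n // 2), slice(n // 2, n)
    errs = [np.mean((ridge_icl(X_ctx[tr], y_ctx[tr], X_ctx[va], lam)
                     - y_ctx[va]) ** 2) for lam in lams]
    best = lams[int(np.argmin(errs))]
    return ridge_icl(X_ctx, y_ctx, x_query, best)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                      # in-context examples
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=40)
print(post_icl_validation(X, y, rng.normal(size=5)))
```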
NeurIPS
| 2,022
| 61,577
|
Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning
|
Deep reinforcement learning policies have been shown to be vulnerable to adversarial attacks due to the inherent fragility of neural networks. Current attack methods mainly focus on adversarial state or action perturbations, but such direct manipulations of a reinforcement learning system may not always be feasible or realizable in the real world. In this paper, we consider the more practical adversarial attacks realized through the actions of an adversarial agent in the same environment. Prior work has shown that a victim agent is vulnerable to the behaviors of an adversarial agent that targets the victim, at the cost of introducing perceivable abnormal behaviors for the adversarial agent itself. To address this, we propose to constrain the state distribution shift caused by the adversarial policy and offer a more controllable attack scheme by building connections among policy space variations, state distribution shift, and the value function difference. To provide provable defense, we revisit the cycling behavior of common adversarial training methods in Markov games, which has been a well-known issue in general differential games, including Generative Adversarial Networks (GANs) and adversarial training in supervised learning. We propose to fix the non-converging behavior through a simple timescale separation mechanism. In sharp contrast to general differential games, where timescale separation may only converge to stationary points, two-timescale training in Markov games can converge to the Nash Equilibrium (NE). Using the Robosumo competition experiments, we demonstrate that the controllable attack is much more efficient in the sense that it introduces much less state distribution shift while achieving the same winning rate as the unconstrained attack. Furthermore, in both Kuhn Poker and the Robosumo competition, we verify that timescale separation leads to stable learning dynamics and less exploitable victim policies.
|
[
"Multi-Agent Reinforcement Learning",
"Adversarial Machine Learning",
"Game Theory",
"Deep Reinforcement Learning",
"Security in Artificial Intelligence"
] | |
NeurIPS
| 2,022
| 61,595
|
Evaluating the Practicality of Counterfactual Explanation
|
Machine learning models are increasingly used for decisions that directly affect people’s lives. These models are often opaque, meaning that the people affected cannot understand how or why the decision was made. However, according to the General Data Protection Regulation, decision subjects have the right to an explanation. Counterfactual explanations are a way to make machine learning models more transparent by showing how attributes need to be changed to get a different outcome. This type of explanation is considered easy to understand and human-friendly. To be used in real life, explanations must be practical, which means they must go beyond a purely theoretical framework. Research has focused on defining several objective functions to compute practical counterfactuals. However, it has not yet been tested whether people perceive the explanations as such in practice. To address this, we contribute by identifying properties that explanations must satisfy to be practical for human subjects. The properties are then used to evaluate the practicality of two counterfactual explanation methods (CARE and WachterCF) by conducting a user study. The results show that human subjects consider the explanations by CARE (a multi-objective approach) to be more practical than the WachterCF (baseline) explanations. We also show that the perception of explanations differs depending on the classification task by exploring multiple datasets.
|
[
"Explainable AI ",
"Human-Computer Interaction ",
"Data Privacy and Ethics"
] | |
NeurIPS
| 2,022
| 64,172
|
Time-Myopic Go-Explore: Learning A State Representation for the Go-Explore Paradigm
|
Very large state spaces with a sparse reward signal are difficult to explore. The lack of sophisticated guidance results in poor performance for numerous reinforcement learning algorithms. In these cases, the commonly used random exploration is often not helpful. The literature shows that such environments require enormous efforts to systematically explore large chunks of the state space. Learned state representations can help here to improve the search by providing semantic context and building structure on top of the raw observations. In this work we introduce a novel time-myopic state representation that clusters temporally close states together while providing a time prediction capability between them. By adapting this model to the Go-Explore paradigm (Ecoffet et al., 2021b), we demonstrate the first learned state representation that reliably estimates novelty instead of using a hand-crafted representation heuristic. Our method shows an improved solution for the detachment problem which still remains an issue at the Go-Explore Exploration Phase. We provide evidence that our proposed method covers the entire state space with respect to all possible time trajectories, without causing disadvantageous conflict-overlaps in the cell archive. Analogous to native Go-Explore, our approach is evaluated on the hard exploration environments MontezumaRevenge, Gravitar and Frostbite (Atari) in order to validate its capabilities on difficult tasks. Our experiments show that time-myopic Go-Explore is an effective alternative for the domain-engineered heuristic while also being more general. The source code of the method is available on GitHub.
|
[
"Reinforcement Learning",
"State Representation Learning",
"Exploration Strategies"
] | |
ICLR
| 2,024
| 17,369
|
Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting
|
Large language models (LLMs) demonstrate remarkable medical expertise, but data privacy concerns impede their direct use in healthcare environments. Although offering improved data privacy protection, domain-specific small language models (SLMs) often underperform LLMs, emphasizing the need for methods that reduce this performance gap while alleviating privacy concerns. In this paper, we present a simple yet effective method that harnesses LLMs' medical proficiency to boost SLM performance in medical tasks under privacy-restricted scenarios. Specifically, we mitigate patient privacy issues by extracting keywords from medical data and prompting the LLM to generate a medical knowledge-intensive context by simulating clinicians' thought processes. This context serves as additional input for SLMs, augmenting their decision-making capabilities. Our method significantly enhances performance in both few-shot and full training settings across three medical knowledge-intensive tasks, achieving up to a 22.57% increase in absolute accuracy compared to SLM fine-tuning without context, and sets new state-of-the-art results in two medical tasks within privacy-restricted scenarios. Further out-of-domain testing and experiments in two general domain datasets showcase its generalizability and broad applicability.
|
[
"Medical Informatics",
"Privacy-preserving Machine Learning",
"Natural Language Processing in Healthcare",
"Artificial Intelligence in Medicine",
"Data Privacy in Healthcare"
] | |
NeurIPS
| 2,023
| 76,936
|
Generative Nowcasting of Marine Fog Visibility in the Grand Banks area and Sable Island in Canada
|
This study presents the application of generative deep learning techniques to evaluate marine fog visibility conditions by nowcasting visibility using the FATIMA (Fog and turbulence interactions in the marine atmosphere) campaign observations collected during July 2022 in the North Atlantic in the Grand Banks area and vicinity of Sable Island (SI), northeast of Canada. The measurements were collected using the Vaisala Forward Scatter Sensor model FD70 and Vaisala Weather Transmitter model WXT50, mounted on the Research Vessel (R/V) Atlantic Condor. To perform nowcasting, the time series of fog visibility (Vis), wind speed (Uh), dew point depression (Ta-Td), and relative humidity (RHw) with respect to water were preprocessed to have lagged time step features. Generative nowcasting of Vis time series for lead times of 30 and 60 minutes were performed using conditional generative adversarial networks (cGAN) regression at visibility thresholds of Vis < 1 km and <10 km. Extreme gradient boosting (XGBoost) was used as a baseline method for comparison against cGAN. At the 30 min lead time, Vis was best predicted with cGAN at Vis < 1 km (RMSE = 0.151 km) and with XGBoost at Vis < 10 km (RMSE = 2.821 km). At the 60 min lead time, Vis was best predicted with XGBoost at Vis < 1 km (RMSE = 0.167 km) and Vis < 10 km (RMSE = 3.508 km), but cGAN error was not too far off. At both lead times for Vis < 1 km, the cGAN model tracked the variation in Vis very well, which suggests that there is potential for future generative analysis of marine fog formation using observational meteorological parameters.
|
[
"Meteorology",
"Environmental Science",
"Atmospheric Science",
"Oceanography"
] | |
NeurIPS
| 2,022
| 54,275
|
Learning to Constrain Policy Optimization with Virtual Trust Region
|
We introduce a constrained optimization method for policy gradient reinforcement learning, which uses two trust regions to regulate each policy update. In addition to using the proximity of one single old policy as the first trust region as done by prior works, we propose forming a second trust region by constructing another virtual policy that represents a wide range of past policies. We then enforce the new policy to stay closer to the virtual policy, which is beneficial if the old policy performs poorly. We propose a mechanism to automatically build the virtual policy from a memory buffer of past policies, providing a new capability for dynamically selecting appropriate trust regions during the optimization process. Our proposed method, dubbed Memory-Constrained Policy Optimization (MCPO), is examined in diverse environments, including robotic locomotion control, navigation with sparse rewards and Atari games, consistently demonstrating competitive performance against recent on-policy constrained policy gradient methods.
|
[
"Reinforcement Learning",
"Constrained Optimization",
"Policy Gradient Methods",
"Trust Region Methods",
"Robotics Control",
"Machine Learning Algorithms"
] | |
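A minimal numpy sketch of the two-trust-region idea: alongside the usual proximity to the previous policy, the update is also penalized for drifting from a virtual policy built from a memory buffer of past policies. The discrete action distributions, simple averaging rule, and penalty weights are illustrative assumptions, not MCPO's exact construction.

```python
# Two-trust-region penalty loss over discrete action distributions.
import numpy as np

def kl(p, q, eps=1e-12):
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

buffer = [np.array([0.70, 0.20, 0.10]),   # past policies (action probs)
          np.array([0.55, 0.30, 0.15]),
          np.array([0.40, 0.35, 0.25])]
virtual = np.mean(buffer, axis=0)          # one simple way to build it
virtual /= virtual.sum()

pi_old = buffer[-1]
pi_new = np.array([0.30, 0.40, 0.30])      # candidate updated policy
surrogate_term = -0.8                      # stand-in policy-gradient objective

beta1, beta2 = 1.0, 0.5                    # penalty weights (illustrative)
loss = (surrogate_term
        + beta1 * kl(pi_new, pi_old)       # first trust region: old policy
        + beta2 * kl(pi_new, virtual))     # second trust region: virtual policy
print(f"KL(new||old)={kl(pi_new, pi_old):.4f}  "
      f"KL(new||virtual)={kl(pi_new, virtual):.4f}  loss={loss:.4f}")
```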
NeurIPS
| 2,022
| 53,051
|
Reinforcement Learning with Non-Exponential Discounting
|
Commonly in reinforcement learning (RL), rewards are discounted over time using an exponential function to model time preference, thereby bounding the expected long-term reward. In contrast, in economics and psychology, it has been shown that humans often adopt a hyperbolic discounting scheme, which is optimal when a specific task termination time distribution is assumed. In this work, we propose a theory for continuous-time model-based reinforcement learning generalized to arbitrary discount functions. This formulation covers the case in which there is a non-exponential random termination time. We derive a Hamilton–Jacobi–Bellman (HJB) equation characterizing the optimal policy and describe how it can be solved using a collocation method, which uses deep learning for function approximation. Further, we show how the inverse RL problem can be approached, in which one tries to recover properties of the discount function given decision data. We validate the applicability of our proposed approach on two simulated problems. Our approach opens the way for the analysis of human discounting in sequential decision-making tasks.
|
[
"Reinforcement Learning",
"Computational Economics",
"Decision Theory",
"Applied Mathematics"
] | |
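A small worked example contrasting the exponential discounting standard in RL with the hyperbolic scheme discussed above; the reward stream and discount parameters are arbitrary.

```python
# Compare exponential and hyperbolic discounting on a constant reward stream.
import numpy as np

rewards = np.ones(50)              # constant reward of 1 per step
t = np.arange(len(rewards))

gamma = 0.95
d_exp = gamma ** t                 # exponential: d(t) = gamma^t
k = 0.1
d_hyp = 1.0 / (1.0 + k * t)        # hyperbolic: d(t) = 1 / (1 + k t)

print("exponential return:", float(np.sum(d_exp * rewards)))
print("hyperbolic return: ", float(np.sum(d_hyp * rewards)))
# Hyperbolic weights decay more slowly at long horizons, so distant rewards
# contribute relatively more than under exponential discounting.
```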
ICML
| 2,023
| 25,800
|
Preventing Dimensional Collapse in Contrastive Local Learning with Subsampling
|
This paper presents an investigation of the challenges of training Deep Neural Networks (DNNs) via self-supervised objectives, using local learning as a parallelizable alternative to traditional backpropagation. In our approach, DNNs are segmented into distinct blocks, each updated independently via gradients provided by small local auxiliary Neural Networks (NNs). Despite the evident computational benefits, extensive splits often result in performance degradation. Through analysis of a synthetic example, we identify a layer-wise dimensional collapse as a major factor behind such performance losses. To counter this, we propose a novel and straightforward sampling strategy based on blockwise feature-similarity, explicitly designed to evade such dimensional collapse.
|
[
"Deep Learning",
"Neural Networks",
"Self-Supervised Learning",
"Optimization Techniques in Neural Networks"
] | |
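A minimal sketch of a blockwise feature-similarity subsampling rule in the spirit described above: batch samples whose block-level features are nearly duplicate directions are dropped before the local contrastive loss, countering dimensional collapse. The greedy rule and threshold are illustrative assumptions.

```python
# Greedy feature-similarity subsampling over a batch of block features.
import numpy as np

def subsample_by_similarity(features, threshold=0.95):
    """Keep samples whose cosine similarity to every already-kept sample
    stays below `threshold`."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    kept = [0]
    for i in range(1, len(f)):
        if np.max(f[i] @ f[kept].T) < threshold:
            kept.append(i)
    return np.array(kept)

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
# Batch with near-duplicate copies, mimicking collapsed block features:
batch = np.vstack([base, base + 0.01 * rng.normal(size=base.shape)])
idx = subsample_by_similarity(batch)
print(f"kept {len(idx)} of {len(batch)} samples:", idx)
```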
ICLR
| 2,023
| 11,399
|
Scalable and Equivariant Spherical CNNs by Discrete-Continuous (DISCO) Convolutions
|
No existing spherical convolutional neural network (CNN) framework is both computationally scalable and rotationally equivariant. Continuous approaches capture rotational equivariance but are often prohibitively computationally demanding. Discrete approaches offer more favorable computational performance but at the cost of equivariance. We develop a hybrid discrete-continuous (DISCO) group convolution that is simultaneously equivariant and computationally scalable to high resolution. While our framework can be applied to any compact group, we specialize to the sphere. Our DISCO spherical convolutions exhibit $\text{SO}(3)$ rotational equivariance, where $\text{SO}(n)$ is the special orthogonal group representing rotations in $n$-dimensions. When restricting rotations of the convolution to the quotient space $\text{SO}(3)/\text{SO}(2)$ for further computational enhancements, we recover a form of asymptotic $\text{SO}(3)$ rotational equivariance. Through a sparse tensor implementation we achieve linear scaling in the number of pixels on the sphere for both computational cost and memory usage. For 4k spherical images we realize a saving of $10^9$ in computational cost and $10^4$ in memory usage when compared to the most efficient alternative equivariant spherical convolution. We apply the DISCO spherical CNN framework to a number of benchmark dense-prediction problems on the sphere, such as semantic segmentation and depth estimation, on all of which we achieve state-of-the-art performance.
|
[
"Computer Vision",
"Neural Networks",
"Geometric Deep Learning",
"Computational Efficiency",
"Rotational Equivariance"
] | |
ICLR
| 2,022
| 7,110
|
A Generalized Weighted Optimization Method for Computational Learning and Inversion
|
The generalization capacity of various machine learning models exhibits different phenomena in the under- and over-parameterized regimes. In this paper, we focus on regression models such as feature regression and kernel regression and analyze a generalized weighted least-squares optimization method for computational learning and inversion with noisy data. The highlight of the proposed framework is that we allow weighting in both the parameter space and the data space. The weighting scheme encodes both a priori knowledge on the object to be learned and a strategy to weight the contribution of different data points in the loss function. Here, we characterize the impact of the weighting scheme on the generalization error of the learning method, where we derive explicit generalization errors for the random Fourier feature model in both the under- and over-parameterized regimes. For more general feature maps, error bounds are provided based on the singular values of the feature matrix. We demonstrate that appropriate weighting from prior knowledge can improve the generalization capability of the learned model.
|
[
"Computational Learning Theory",
"Optimization Methods",
"Regression Analysis",
"Inverse Problems"
] | |
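A numpy sketch of the generalized weighted least-squares estimator with weights in both the data space ($W_d$) and the parameter space ($W_p$); the feature matrix, weights, and noise level are illustrative.

```python
# Generalized weighted least squares:
#   x* = argmin_x ||W_d (A x - b)||^2 + lam * ||W_p x||^2
#      = (A^T W_d^2 A + lam * W_p^2)^{-1} A^T W_d^2 b
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
A = rng.normal(size=(n, p))
x_true = rng.normal(size=p)
b = A @ x_true + 0.1 * rng.normal(size=n)

W_d = np.diag(rng.uniform(0.5, 2.0, size=n))     # per-sample data weights
W_p = np.diag(np.arange(1, p + 1, dtype=float))  # a priori parameter weights
lam = 0.1

lhs = A.T @ W_d @ W_d @ A + lam * W_p @ W_p
rhs = A.T @ W_d @ W_d @ b
x_hat = np.linalg.solve(lhs, rhs)
print("relative error:",
      float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))
```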
NeurIPS
| 2,022
| 56,639
|
Faster and Cheaper Energy Demand Forecasting at Scale
|
Energy demand forecasting is one of the most challenging tasks for grid operators. Many approaches have been suggested over the years to tackle it. Yet these still remain too expensive to train in terms of both time and computational resources, hindering their adoption as customers' behaviors continuously evolve. We introduce Transplit, a new lightweight transformer-based model, which significantly decreases this cost by exploiting the seasonality property and learning typical days of power demand. We show that Transplit can be run efficiently on CPU and is several hundred times faster than state-of-the-art predictive models, while performing equally well.
|
[
"Energy Demand Forecasting",
"Transformer Models",
"Computational Efficiency",
"Power Systems",
"Time Series Analysis"
] | |
ICML
| 2,022
| 17,795
|
How Powerful are Spectral Graph Neural Networks
|
Spectral graph neural networks are a class of Graph Neural Networks (GNNs) based on graph signal filters. Models able to learn arbitrary spectral filters have emerged recently. However, few works analyze the expressive power of spectral GNNs. This paper studies spectral GNNs' expressive power theoretically. We first prove that even spectral GNNs without nonlinearity can produce arbitrary graph signals, and we give two conditions for reaching universality: 1) no multiple eigenvalues of the graph Laplacian, and 2) no missing frequency components in the node features. We also establish a connection between the expressive power of spectral GNNs and Graph Isomorphism (GI) testing, the latter of which is often used to characterize spatial GNNs' expressive power. Moreover, we study the difference in empirical performance among different spectral GNNs with the same expressive power from an optimization perspective, and motivate the use of an orthogonal basis whose weight function corresponds to the graph signal density in the spectrum. Inspired by this analysis, we propose JacobiConv, which uses the Jacobi basis due to its orthogonality and its flexibility to adapt to a wide range of weight functions. JacobiConv forgoes nonlinearity while outperforming all baselines on both synthetic and real-world datasets.
|
[
"Graph Neural Networks",
"Spectral Methods",
"Machine Learning Theory",
"Graph Theory",
"Neural Network Architecture"
] | |
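To illustrate the kind of learnable polynomial spectral filter the paper analyzes, here is a numpy sketch that applies $\sum_k w_k P_k(\hat{L})\,x$ via a three-term recurrence without any eigendecomposition. For brevity it uses the Chebyshev basis; JacobiConv itself uses Jacobi polynomials, whose recurrence carries extra $(\alpha,\beta)$-dependent coefficients.

```python
# Polynomial spectral filtering of a graph signal via Chebyshev recurrence.
import numpy as np

def normalized_laplacian(adj):
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def poly_filter(adj, x, weights):
    """Apply sum_k weights[k] * T_k(L - I) @ x. The normalized Laplacian's
    eigenvalues lie in [0, 2], so L - I maps them into [-1, 1]."""
    L_hat = normalized_laplacian(adj) - np.eye(len(adj))
    t_prev, t_curr = x, L_hat @ x              # T_0 x, T_1 x
    out = weights[0] * t_prev + weights[1] * t_curr
    for k in range(2, len(weights)):
        t_prev, t_curr = t_curr, 2 * L_hat @ t_curr - t_prev  # T_k recurrence
        out += weights[k] * t_curr
    return out

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 2.0, 0.5])
print(poly_filter(adj, x, weights=[0.5, 0.3, 0.2]))
```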
ICML
| 2,022
| 17,479
|
Improving Language Models by Retrieving from Trillions of Tokens
|
We enhance auto-regressive language models by conditioning on document chunks retrieved from a large corpus, based on local similarity with preceding tokens. With a 2 trillion token database, our Retrieval-Enhanced Transformer (RETRO) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 25× fewer parameters. After fine-tuning, RETRO performance translates to downstream knowledge-intensive tasks such as question answering. RETRO combines a frozen Bert retriever, a differentiable encoder and a chunked cross-attention mechanism to predict tokens based on an order of magnitude more data than what is typically consumed during training. We typically train RETRO from scratch, yet can also rapidly RETROfit pre-trained transformers with retrieval and still achieve good performance. Our work opens up new avenues for improving language models through explicit memory at unprecedented scale.
|
[
"Natural Language Processing",
"Language Models",
"Information Retrieval",
"Deep Learning"
] | |
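A toy sketch of the retrieval step behind this kind of retrieval-enhanced modeling: the corpus is split into chunks, chunks and query are embedded, and the nearest chunks by cosine similarity become extra conditioning context. The hash-based bag-of-words embedding is a deliberately crude stand-in for RETRO's frozen BERT retriever.

```python
# Nearest-neighbor chunk retrieval with a crude hashing embedding.
import hashlib
import numpy as np

def embed(text, dim=64):
    v = np.zeros(dim)
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        v[h % dim] += 1.0                      # hashed bag-of-words
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def retrieve(query, chunks, k=2):
    q = embed(query)
    sims = np.array([q @ embed(c) for c in chunks])
    return [chunks[i] for i in np.argsort(-sims)[:k]]

corpus = ("the cat sat on the mat . dogs chase cats in the park . "
          "transformers model long sequences . retrieval adds explicit memory")
chunks = [c.strip() for c in corpus.split(".")]
print(retrieve("how do cats behave", chunks))
```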
NeurIPS
| 2,022
| 53,999
|
DeepFoids: Adaptive Bio-Inspired Fish Simulation with Deep Reinforcement Learning
|
Our goal is to synthesize realistic underwater scenes with various fish species in different fish cages, which can be utilized to train computer vision models to automate fish counting and sizing tasks. It is a challenging problem to prepare a sufficiently diverse labeled dataset of images from aquatic environments. We solve this challenge by introducing an adaptive bio-inspired fish simulation. The behavior of caged fish changes based on the species, size and number of fish, and the size and shape of the cage, among other variables. However, a method to autonomously achieve schooling behavior for caged fish did not exist. In this paper, we propose a method for achieving schooling behavior for any given combination of variables, using multi-agent deep reinforcement learning (DRL) in various fish cages in arbitrary environments. Furthermore, to visually reproduce the underwater scene in different locations and seasons, we incorporate a physically-based underwater simulation.
|
[
"Computer Vision",
"Deep Reinforcement Learning",
"Bio-Inspired Computing",
"Simulation and Modeling",
"Multi-Agent Systems",
"Artificial Intelligence in Ecology and Environment"
] | |
NeurIPS
| 2,022
| 54,741
|
Washing The Unwashable : On The (Im)possibility of Fairwashing Detection
|
The use of black-box models (e.g., deep neural networks) in high-stakes decision-making systems, whose internal logic is complex, raises the need for providing explanations about their decisions. Model explanation techniques mitigate this problem by generating an interpretable and high-fidelity surrogate model (e.g., a logistic regressor or decision tree) to explain the logic of black-box models. In this work, we investigate the issue of fairwashing, in which model explanation techniques are manipulated to rationalize decisions taken by an unfair black-box model using deceptive surrogate models. More precisely, we theoretically characterize and analyze fairwashing, proving that this phenomenon is difficult to avoid due to an irreducible factor---the unfairness of the black-box model. Based on the theory developed, we propose a novel technique, called FRAUD-Detect (FaiRness AUDit Detection), to detect fairwashed models by measuring a divergence over subpopulation-wise fidelity measures of the interpretable model. We empirically demonstrate that this divergence is significantly larger in purposefully fairwashed interpretable models than in honest ones. Furthermore, we show that our detector is robust to an informed adversary trying to bypass our detector. The code implementing FRAUD-Detect is available at https://github.com/cleverhans-lab/FRAUD-Detect.
|
[
"Explainable AI ",
"Fairness in AI",
"Model Interpretability",
"Algorithmic Transparency",
"Ethical AI"
] | |
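A numpy sketch of the detection signal described above: measure how faithfully a surrogate reproduces the black box's decisions within each subpopulation, then compare the per-group fidelity profiles; a fairwashed surrogate tends to agree very unevenly across groups. The synthetic labels and the simple gap statistic are illustrative stand-ins for FRAUD-Detect's divergence.

```python
# Subpopulation-wise fidelity gap between a surrogate and a black-box model.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)            # protected attribute
blackbox = rng.integers(0, 2, size=n)         # black-box decisions
# Honest surrogate: uniform ~90% agreement with the black box.
honest = np.where(rng.random(n) < 0.9, blackbox, 1 - blackbox)
# Fairwashed surrogate: very high fidelity on one group, low on the other.
washed = np.where(group == 0,
                  np.where(rng.random(n) < 0.98, blackbox, 1 - blackbox),
                  np.where(rng.random(n) < 0.70, blackbox, 1 - blackbox))

def fidelity_gap(surrogate):
    fids = [np.mean(surrogate[group == g] == blackbox[group == g])
            for g in (0, 1)]
    return abs(fids[0] - fids[1]), fids

for name, s in [("honest", honest), ("fairwashed", washed)]:
    gap, fids = fidelity_gap(s)
    print(f"{name:>10}: per-group fidelity={np.round(fids, 3)}  gap={gap:.3f}")
```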
NeurIPS
| 2,023
| 74,422
|
Enhancing the Misreport Network for Optimal Auction Design
|
Optimal auction mechanism design has long been a focus in computer science and economics. While substantial progress has been made in single-item auctions, optimal design for multi-item auctions has yet to be derived. Recent years have seen a surge in deriving near-optimal auctions through deep learning. As one of the approaches, ALGNet models the bidding process as a two-player game. The ALGNet model, however, adopted a rather simple design for generating optimal misreports to derive the regret of the trained auction mechanisms. We show that this design can be improved in both network structure and testing method. Specifically, we train a misreport network tailored to each individual bidder, which leads to better misreports. This approach is especially effective when the auctions are asymmetric. By studying misreports, we can obtain a more accurate estimate of the regret of the auction mechanism, thus enhancing its robustness. Experimental results demonstrate that our approach can detect misreports more effectively than previous methods, resulting in an increase in measured regret values of as much as 70%. The new misreport network can also be applied to train auction mechanisms, allowing for a better description of the auction process.
|
[
"Auction Theory",
"Mechanism Design",
"Game Theory",
"Computational Economics",
"Machine Learning in Economics",
"Deep Learning Applications"
] | |
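A small sketch of ex-post regret estimation via misreports, the quantity the misreport network is trained to maximize. The mechanism here is a plain first-price single-item auction stub with a grid search over misreports; in the learned-auction setting the mechanism would be a neural network and the grid search would be replaced by a per-bidder misreport network.

```python
# Regret of a bidder = best misreport utility - truthful utility.
import numpy as np

def first_price(bids):
    """Return (allocation one-hot, payments) for a first-price auction."""
    winner = int(np.argmax(bids))
    alloc = np.zeros(len(bids)); alloc[winner] = 1.0
    pay = np.zeros(len(bids)); pay[winner] = bids[winner]
    return alloc, pay

def utility(mechanism, values, bids, i):
    alloc, pay = mechanism(bids)
    return values[i] * alloc[i] - pay[i]

values = np.array([0.8, 0.5, 0.3])           # private valuations
truthful = utility(first_price, values, values, i=0)
grid = np.linspace(0.0, 1.0, 101)            # candidate misreports, bidder 0
best = max(utility(first_price, values,
                   np.array([m, values[1], values[2]]), i=0) for m in grid)
print(f"truthful utility={truthful:.3f}  best misreport utility={best:.3f}  "
      f"regret={best - truthful:.3f}")
# First-price auctions are not truthful, so the detected regret is positive.
```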
ICML
| 2,023
| 24,891
|
On Uni-Modal Feature Learning in Supervised Multi-Modal Learning
|
We abstract the features (i.e. learned representations) of multi-modal data into 1) uni-modal features, which can be learned from uni-modal training, and 2) paired features, which can only be learned from cross-modal interactions. Multi-modal models are expected to benefit from cross-modal interactions on the basis of ensuring uni-modal feature learning. However, recent supervised multi-modal late-fusion training approaches still suffer from insufficient learning of uni-modal features on each modality. We prove that this phenomenon does hurt the model's generalization ability. To this end, we propose to choose a targeted late-fusion learning method for the given supervised multi-modal task from Uni-Modal Ensemble (UME) and the proposed Uni-Modal Teacher (UMT), according to the distribution of uni-modal and paired features. We demonstrate that, under a simple guiding strategy, we can achieve comparable results to other complex late-fusion or intermediate-fusion methods on various multi-modal datasets, including VGG-Sound, Kinetics-400, UCF101, and ModelNet40.
|
[
"Multi-Modal Learning",
"Feature Learning",
"Supervised Learning",
"Data Fusion"
] | |
ICLR
| 2,023
| 10,991
|
Fake It Until You Make It : Towards Accurate Near-Distribution Novelty Detection
|
We aim for image-based novelty detection. Despite considerable progress, existing models either fail or suffer a dramatic drop under the so-called ``near-distribution" setup, where the differences between normal and anomalous samples are subtle. We first demonstrate that existing methods can experience up to a 20\% decrease in their AUCs in the near-distribution setting. Next, we propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data. Our model is then fine-tuned to distinguish such data from normal samples. We make quantitative as well as qualitative evaluations of this strategy and compare the results with a variety of GAN-based models. The effectiveness of our method for both near-distribution and standard novelty detection is assessed through extensive experiments on datasets in diverse applications such as medical images, object classification, and quality control. This reveals that our method significantly improves upon existing models, and consistently narrows the gap between the near-distribution and standard novelty detection AUCs by a considerable amount.
|
[
"Computer Vision",
"Anomaly Detection",
"Generative Models",
"Image Processing"
] | |
NeurIPS
| 2,022
| 55,772
|
EgoTaskQA: Understanding Human Tasks in Egocentric Videos
|
Understanding human tasks through video observations is an essential capability of intelligent agents. The challenges of such capability lie in the difficulty of generating a detailed understanding of situated actions, their effects on object states (i.e., state changes), and their causal dependencies. These challenges are further aggravated by the natural parallelism from multi-tasking and partial observations in multi-agent collaboration. Most prior works leverage action localization or future prediction as an \textit{indirect} metric for evaluating such task understanding from videos. To make a \textit{direct} evaluation, we introduce the EgoTaskQA benchmark that provides a single home for the crucial dimensions of task understanding through question answering on real-world egocentric videos. We meticulously design questions that target the understanding of (1) action dependencies and effects, (2) intents and goals, and (3) agents' beliefs about others. These questions are divided into four types, including descriptive (what status?), predictive (what will?), explanatory (what caused?), and counterfactual (what if?) to provide diagnostic analyses on \textit{spatial, temporal, and causal} understandings of goal-oriented tasks. We evaluate state-of-the-art video reasoning models on our benchmark and show the significant gaps between them and humans in understanding complex goal-oriented egocentric videos. We hope this effort will drive the vision community forward in goal-oriented video understanding and reasoning.
|
[
"Computer Vision",
"Egocentric Vision",
"Video Understanding",
"Task Understanding",
"Question Answering",
"Human-Computer Interaction"
] | |
ICML
| 2,023
| 24,963
|
Efficient Personalized Federated Learning via Sparse Model-Adaptation
|
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data. Due to the heterogeneity of clients' local data distribution, recent studies explore the personalized FL that learns and deploys distinct local models with the help of auxiliary global models. However, the clients can be heterogeneous in terms of not only local data distribution, but also their computation and communication resources. The capacity and efficiency of personalized models are restricted by the lowest-resource clients, leading to sub-optimal performance and limited practicality of personalized FL. To overcome these challenges, we propose a novel approach named pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models. With a lightweight trainable gating layer, pFedGate enables clients to reach their full potential in model capacity by generating different sparse models accounting for both the heterogeneous data distributions and resource constraints. Meanwhile, the computation and communication efficiency are both improved thanks to the adaptability between the model sparsity and clients' resources. Further, we theoretically show that the proposed pFedGate has superior complexity with guaranteed convergence and generalization error. Extensive experiments show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods. We also demonstrate that pFedGate performs better than competitors in scenarios with novel-client and partial-client participation, and can learn meaningful sparse local models adapted to different data distributions.
|
[
"Federated Learning",
"Personalized Learning",
"Model Adaptation",
"Sparse Models",
"Computational Efficiency",
"Communication Efficiency"
] | |
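A minimal sketch of resource-adaptive sparse gating in the spirit of pFedGate: gate scores rank parameter blocks and each client keeps only the top fraction its own sparsity budget allows. The random scores and block granularity are illustrative stand-ins for the trainable gating layer.

```python
# Budget-constrained top-k gating over parameter blocks.
import numpy as np

def sparse_mask(scores, budget):
    """Keep the `budget` fraction of blocks with the highest gate scores."""
    k = max(1, int(round(budget * len(scores))))
    mask = np.zeros_like(scores)
    mask[np.argsort(-scores)[:k]] = 1.0
    return mask

rng = np.random.default_rng(0)
n_blocks = 10
weights = [rng.normal(size=(4, 4)) for _ in range(n_blocks)]
scores = rng.random(n_blocks)                 # gate outputs (illustrative)

for client, budget in [("low-resource", 0.3), ("high-resource", 0.8)]:
    mask = sparse_mask(scores, budget)
    n_params = sum(w.size for w, m in zip(weights, mask) if m > 0)
    print(f"{client}: keeps {int(mask.sum())}/{n_blocks} blocks "
          f"({n_params} parameters)")
```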
NeurIPS
| 2,022
| 57,139
|
Optimization for Robustness Evaluation beyond ℓp Metrics
|
Empirical evaluations of neural network models against adversarial attacks entail solving nontrivial constrained optimization problems. Popular algorithms for solving these constrained problems rely on projected gradient descent (PGD) and require careful tuning of multiple hyperparameters. Moreover, PGD can only handle $\ell_1$, $\ell_2$, and $\ell_\infty$ attacks due to the use of analytical projectors. In this paper, we introduce an alternative algorithmic framework that blends the general-purpose constrained-optimization solver \textbf{P}yGRANSO \textbf{W}ith \textbf{C}onstraint-\textbf{F}olding (PWCF) to add reliability and generality to existing adversarial evaluations. PWCF 1) finds good-quality solutions without delicate tuning of multiple hyperparameters; and 2) can handle general attack models that are inaccessible to existing algorithms, e.g., $\ell_{p > 0}$ and perceptual attacks.
|
[
"Adversarial Machine Learning",
"Neural Networks",
"Optimization",
"Robustness Evaluation"
] | |
NeurIPS
| 2,023
| 72,740
|
AI for Interpretable Chemistry: Predicting Radical Mechanistic Pathways via Contrastive Learning
|
Deep learning-based reaction predictors have undergone significant architectural evolution. However, their reliance on reactions from the US Patent Office results in a lack of interpretable predictions and limited generalizability to other chemistry domains, such as radical and atmospheric chemistry. To address these challenges, we introduce a new reaction predictor system, RMechRP, that leverages contrastive learning in conjunction with mechanistic pathways, the most interpretable representation of chemical reactions. Specifically designed for radical reactions, RMechRP provides different levels of interpretation of chemical reactions. We develop and train multiple deep-learning models using RMechDB, a public database of radical reactions, to establish the first benchmark for predicting radical reactions. Our results demonstrate the effectiveness of RMechRP in providing accurate and interpretable predictions of radical reactions, and its potential for various applications in atmospheric chemistry.
|
[
"Artificial Intelligence in Chemistry",
"Interpretable Machine Learning",
"Reaction Prediction",
"Radical Chemistry",
"Atmospheric Chemistry",
"Contrastive Learning"
] | |
NeurIPS
| 2,023
| 72,734
|
Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?
|
We present the largest and most comprehensive empirical study of pre-trained visual representations (PVRs) or visual ‘foundation models’ for Embodied AI. First, we curate CortexBench, consisting of 17 different tasks spanning locomotion, navigation, dexterous, and mobile manipulation. Next, we systematically evaluate existing PVRs and find that none are universally dominant. To study the effect of pre-training data size and diversity, we combine over 4,000 hours of egocentric videos from 7 different sources (over 4.3M images) and ImageNet to train different-sized vision transformers using Masked Auto-Encoding (MAE) on slices of this data. Contrary to inferences from prior work, we find that scaling dataset size and diversity does not improve performance universally (but does so on average). Our largest model, named VC-1, outperforms all prior PVRs on average but does not universally dominate either. Next, we show that task- or domain-specific adaptation of VC-1 leads to substantial gains, with VC-1 (adapted) achieving performance competitive with or superior to the best known results on all of the benchmarks in CortexBench. Finally, we present real-world hardware experiments, in which VC-1 and VC-1 (adapted) outperform the strongest pre-existing PVR. Overall, this paper presents no new techniques but a rigorous systematic evaluation, a broad set of findings about PVRs (that in some cases, refute those made in narrow domains in prior work), and open-sourced code and models (that required over 10,000 GPU-hours to train) for the benefit of the research community.
|
[
"Computer Vision",
"Embodied AI",
"Robotics"
] | |
NeurIPS
| 2,023
| 70,217
|
PPi: Pretraining Brain Signal Model for Patient-independent Seizure Detection
|
Automated seizure detection is of great importance to epilepsy diagnosis and treatment. An emerging method used in seizure detection, stereoelectroencephalography (SEEG), can provide detailed and stereoscopic brainwave information. However, modeling SEEG in clinical scenarios faces challenges such as the huge domain shift between different patients and the dramatic pattern evolution across different brain areas. In this study, we propose a Pretraining-based model for Patient-independent seizure detection (PPi) to address these challenges. Firstly, we design two novel self-supervised tasks that extract rich information from abundant SEEG data while preserving the unique characteristics of brain signals recorded from different brain areas. Then, two techniques, channel background subtraction and brain region enhancement, are proposed to effectively tackle the domain shift problem. Extensive experiments show that PPi outperforms the SOTA baselines on two public datasets and a real-world clinical dataset collected by us, which demonstrates the effectiveness and practicability of PPi. Finally, visualization analysis illustrates the rationality of the two domain generalization techniques.
|
[
"Medical Imaging and Signal Processing",
"Machine Learning in Healthcare",
"Neurology and Neuroscience",
"Epilepsy Research",
"Clinical Data Analysis"
] | |
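A numpy sketch of one plausible reading of channel background subtraction: estimate each SEEG channel's slowly varying baseline (here, a moving average) and remove it so channel-specific offsets do not dominate the representation. The window length and synthetic signals are illustrative.

```python
# Per-channel moving-average background subtraction for multichannel signals.
import numpy as np

def subtract_background(signals, window=50):
    """signals: (channels, time). Subtract a per-channel moving average."""
    kernel = np.ones(window) / window
    baseline = np.array([np.convolve(ch, kernel, mode="same")
                         for ch in signals])
    return signals - baseline

rng = np.random.default_rng(0)
t = np.arange(1000)
drift = np.vstack([0.01 * t, -0.005 * t])       # channel-specific slow drift
spikes = rng.normal(scale=0.5, size=(2, 1000))  # fast activity of interest
cleaned = subtract_background(drift + spikes)
print("per-channel mean before:", np.round((drift + spikes).mean(axis=1), 2))
print("per-channel mean after: ", np.round(cleaned.mean(axis=1), 2))
```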
NeurIPS
| 2,022
| 59,302
|
Analyzing Micro-Level Rebound Effects of Energy Efficient Technologies
|
Energy preservation is central to preventing resource depletion, climate change and environmental degradation. Investment in raising the efficiency of appliances is among the most significant attempts to save energy. Ironically, the introduction of many such energy-saving appliances increased total energy consumption instead of reducing it. In the literature, this effect is attributed to the inherent Jevons paradox (JP) and optimism bias (OB) in consumer behavior. However, the magnitude of these instincts varies among different people. Identifying this magnitude for each household can enable the development of appropriate policies that induce the desired energy-saving behaviour. Using the RECS 2015 dataset, the paper applies machine learning to each electrical appliance to determine the dependence of its total energy consumption on its energy star rating. This shows that only substitutable appliances register an increase in energy demand upon boosted efficiency. Lastly, an index is derived to indicate the varying influence of JP and OB on different households.
|
[
"Energy Efficiency",
"Consumer Behavior",
"Environmental Economics",
"Machine Learning in Energy Studies",
"Sustainable Technology",
"Behavioral Economics"
] | |
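A small sketch of the per-appliance analysis: for each appliance category, regress consumption on the efficiency rating and inspect the fitted slope's sign, with a positive slope signalling a rebound (Jevons-paradox-like) effect. The synthetic data and linear model are illustrative stand-ins for the RECS 2015 features and the paper's learners.

```python
# Per-appliance regression of consumption on efficiency rating.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
appliances = {
    "refrigerator": -5.0,    # assumed true slope: efficiency saves energy
    "air_conditioner": 3.0,  # assumed true slope: rebound effect
}
for name, slope in appliances.items():
    rating = rng.integers(1, 6, size=n).astype(float)   # star rating 1-5
    usage = 100 + slope * rating + rng.normal(scale=5.0, size=n)
    model = LinearRegression().fit(rating.reshape(-1, 1), usage)
    direction = ("rebound (demand rises)" if model.coef_[0] > 0
                 else "net saving")
    print(f"{name}: fitted slope={model.coef_[0]:.2f} -> {direction}")
```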
ICML
| 2,024
| 36,144
|
Recurrent Action Transformer with Memory
|
Recently, the use of transformers in offline reinforcement learning has become a rapidly developing area. This is due to their ability to treat the agent's trajectory in the environment as a sequence, thereby reducing the policy learning problem to sequence modeling. In environments where the agent's decisions depend on past events (POMDPs), capturing both the event itself and the decision point in the context of the model is essential. However, the quadratic complexity of the attention mechanism limits the potential for context expansion. One solution to this problem is to enhance transformers with memory mechanisms. This paper proposes a Recurrent Action Transformer with Memory (RATE), a novel model architecture incorporating a recurrent memory mechanism designed to regulate information retention. To evaluate our model, we conducted extensive experiments on memory-intensive environments (ViZDoom-Two-Colors, T-Maze, Memory Maze, Minigrid.Memory), classic Atari games and MuJoCo control environments. The results show that using memory can significantly improve performance in memory-intensive environments while maintaining or improving results in classic environments. We hope our findings will stimulate research on memory mechanisms for transformers applicable to offline reinforcement learning.
|
[
"Reinforcement Learning",
"Sequence Modeling",
"Deep Learning",
"Memory Mechanisms",
"Transformer Models"
] | |
ICLR
| 2,023
| 10,777
|
Neural Systematic Binder
|
The key to high-level cognition is believed to be the ability to systematically manipulate and compose knowledge pieces. While token-like structured knowledge representations are naturally provided in text, it remains unclear how to obtain them for unstructured modalities such as scene images. In this paper, we propose a neural mechanism called Neural Systematic Binder or SysBinder for constructing a novel structured representation called Block-Slot Representation. In Block-Slot Representation, object-centric representations known as slots are constructed by composing a set of independent factor representations called blocks, to facilitate systematic generalization. SysBinder obtains this structure in an unsupervised way by alternately applying two different binding principles: spatial binding for spatial modularity across the full scene and factor binding for factor modularity within an object. SysBinder is a simple, deterministic, and general-purpose layer that can be applied as a drop-in module in any arbitrary neural network and on any modality. In experiments, we find that SysBinder provides significantly better factor disentanglement within the slots than the conventional object-centric methods, including, for the first time, in visually complex scene images such as CLEVR-Tex. Furthermore, we demonstrate factor-level systematicity in controlled scene generation by decoding unseen factor combinations.
|
[
"Computer Vision",
"Cognitive Science",
"Neural Networks",
"Representation Learning"
] |