Publications

Fishr: Invariant Gradient Variances for Out-of-distribution Generalization

We introduce and motivate a new regularization that enforces invariance of the gradient variances across training domains in order to improve out-of-distribution generalization.
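
Below is a minimal PyTorch sketch of the idea, assuming a linear classifier head trained with cross-entropy: per-sample gradients of that head are computed in closed form, their variance is taken per domain, and the penalty pushes these variances to match across domains. Function names are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def per_sample_head_grads(features, logits, labels):
    # For cross-entropy, d loss / d logits = softmax(logits) - one_hot(labels),
    # so per-sample gradients of a linear head are outer products with the features.
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(labels, num_classes=logits.size(1)).float()
    dl_dlogits = probs - one_hot                              # (n, c)
    grads = dl_dlogits.unsqueeze(2) * features.unsqueeze(1)   # (n, c, d)
    return grads.flatten(start_dim=1)                         # (n, c*d)

def gradient_variance_penalty(per_domain_grads):
    # per_domain_grads: list of (n_e, p) per-sample gradient matrices, one per domain.
    # Penalize each domain's gradient variance for deviating from the mean variance.
    variances = [g.var(dim=0) for g in per_domain_grads]      # one (p,) vector per domain
    mean_variance = torch.stack(variances).mean(dim=0)
    return torch.stack([((v - mean_variance) ** 2).mean() for v in variances]).mean()
```

The training objective would then combine the average empirical risk over domains with this penalty scaled by a regularization coefficient.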

Beyond question-based biases: Assessing multimodal shortcut learning in visual question answering

We propose an experimental protocol to assess a model's reliance on multimodal shortcuts in visual question answering.

MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks

We introduce a new generalized framework for learning multi-input multi-output subnetworks and study how best to mix the inputs. By better leveraging the expressiveness of large networks, we obtain state-of-the-art results on CIFAR and Tiny ImageNet.
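
As a rough illustration of the multi-input multi-output setup (not the paper's exact mixing scheme), the sketch below embeds two inputs separately, mixes the embeddings with a simple convex combination standing in for the studied mixing blocks, runs a shared backbone, and predicts one label per input. All module names are placeholders.

```python
import torch.nn as nn

class TwoInputTwoOutputNet(nn.Module):
    """Illustrative 2-input / 2-output subnetwork model; modules are placeholders."""
    def __init__(self, encoder_a, encoder_b, core, feat_dim, num_classes):
        super().__init__()
        self.encoder_a, self.encoder_b = encoder_a, encoder_b  # one shallow encoder per input
        self.core = core                                       # shared backbone returning flat features
        self.head_a = nn.Linear(feat_dim, num_classes)         # one classification head per input
        self.head_b = nn.Linear(feat_dim, num_classes)

    def forward(self, x_a, x_b, lam):
        # Mix the two input embeddings; a linear interpolation with weight `lam`
        # stands in for the mixing strategies studied in the paper.
        mixed = lam * self.encoder_a(x_a) + (1 - lam) * self.encoder_b(x_b)
        feats = self.core(mixed)
        return self.head_a(feats), self.head_b(feats)  # one prediction per input
```

During training each head would receive a classification loss against its own input's label; at inference the same image can be fed to both inputs and the two predictions averaged.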

DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation

Driven by arguments from information theory, we introduce a new learning strategy for deep ensembles that increases diversity among members: we adversarially prevent features from being conditionally redundant given the class label.
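
A hedged sketch of one way to instantiate this: a small critic scores pairs of member features together with the label, a Donsker-Varadhan-style bound estimates their conditional redundancy, the critic is trained to maximize the estimate and the ensemble members to minimize it. Class and function names are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class ConditionalCritic(nn.Module):
    """Illustrative critic scoring pairs of member features conditioned on the label."""
    def __init__(self, feat_dim, num_classes, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim + num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z1, z2, y_onehot):
        return self.net(torch.cat([z1, z2, y_onehot], dim=1)).squeeze(1)

def conditional_redundancy_estimate(critic, z1, z2, y_onehot):
    # Donsker-Varadhan-style bound: compare scores on aligned pairs (z1, z2) with
    # scores on misaligned pairs. Shuffling z2 within the batch is a simplification;
    # an exact conditional estimate would shuffle within each class.
    joint = critic(z1, z2, y_onehot).mean()
    perm = torch.randperm(z2.size(0))
    n = torch.tensor(float(z2.size(0)))
    marginal = torch.logsumexp(critic(z1, z2[perm], y_onehot), dim=0) - torch.log(n)
    return joint - marginal  # maximized by the critic, minimized by the members
```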

RUBi: Reducing unimodal biases for Visual Question Answering

We introduce a learning strategy that reduces unimodal (question-only) biases in models for Visual Question Answering.
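
A minimal sketch of the kind of training-time bias reduction involved, with illustrative names and a simplified loss: a question-only branch produces logits whose sigmoid masks the multimodal logits, so examples answerable from the question alone contribute less to the main model's gradient.

```python
import torch
import torch.nn.functional as F

def question_masked_logits(multimodal_logits, question_only_logits):
    # Modulate the multimodal prediction with a mask derived from a question-only branch.
    mask = torch.sigmoid(question_only_logits)
    return multimodal_logits * mask

def debiasing_loss(multimodal_logits, question_only_logits, labels):
    # The masked multimodal branch and the question-only branch each receive a
    # classification loss; at test time only the raw multimodal logits are used.
    masked = question_masked_logits(multimodal_logits, question_only_logits)
    return F.cross_entropy(masked, labels) + F.cross_entropy(question_only_logits, labels)
```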