Jean Kossaifi
FourCastNet 3: A geometric approach to probabilistic machine-learning weather forecasting at scale
FourCastNet 3 advances global weather modeling by implementing a scalable, geometric machine learning (ML) approach to probabilistic ensemble forecasting. The approach is designed to respect spherical geometry and to accurately model the s…
Principled Approaches for Extending Neural Architectures to Function Spaces for Operator Learning
A wide range of scientific problems, such as those described by continuous-time dynamical systems and partial differential equations (PDEs), are naturally formulated on function spaces. While function spaces are typically infinite-dimensio…
Enabling Automatic Differentiation with Mollified Graph Neural Operators
Physics-informed neural operators offer a powerful framework for learning solution operators of partial differential equations (PDEs) by combining data and physics losses. However, these physics losses rely on derivatives. Computing these …
Factorized Implicit Global Convolution for Automotive Computational Fluid Dynamics Prediction
Computational Fluid Dynamics (CFD) is crucial for automotive design, requiring the analysis of large 3D point clouds to study how vehicle geometry affects pressure fields and drag forces. However, existing deep learning approaches for CFD …
TensorGRaD: Tensor Gradient Robust Decomposition for Memory-Efficient Neural Operator Training
Scientific problems require resolving multi-scale phenomena across different resolutions and learning solution operators in infinite-dimensional function spaces. Neural operators provide a powerful framework for this, using tensor-paramete…
A Library for Learning Neural Operators
We present NeuralOperator, an open-source Python library for operator learning. Neural operators generalize neural networks to maps between function spaces instead of finite-dimensional Euclidean spaces. They can be trained and inferenced …
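The core idea behind the operators this library implements can be sketched outside of it. The following is a minimal, illustrative example of a Fourier-style spectral convolution, the building block of Fourier Neural Operators: it is written in plain NumPy and does not use the NeuralOperator API; the function name and weight shapes are assumptions for illustration. Because the learned weights act on Fourier modes rather than grid points, the same layer applies to inputs sampled at any resolution.

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """Illustrative 1D spectral convolution: move to Fourier space,
    apply (learned) weights to the lowest n_modes, move back."""
    u_hat = np.fft.rfft(u)                          # Fourier coefficients of the input
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]   # act only on retained modes
    return np.fft.irfft(out_hat, n=len(u))          # back to physical space

# The weights live on Fourier modes, so one layer handles any grid size.
rng = np.random.default_rng(0)
weights = rng.standard_normal(8) + 1j * rng.standard_normal(8)
coarse = spectral_conv_1d(np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False)), weights, 8)
fine = spectral_conv_1d(np.sin(np.linspace(0, 2 * np.pi, 256, endpoint=False)), weights, 8)
print(coarse.shape, fine.shape)  # (64,) (256,)
```

This resolution-independence is what lets neural operators be trained on one discretization and evaluated on another.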
Exploring the design space of deep-learning-based weather forecasting systems
Despite tremendous progress in developing deep-learning-based weather forecasting systems, their design space, including the impact of different design choices, is yet to be well understood. This paper aims to fill this knowledge gap by sy…
Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems
Fourier Neural Operators (FNOs) excel on tasks using functional data, such as those originating from partial differential equations. Such characteristics render them an effective approach for simulating the time evolution of quantum wavefu…
Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs
Existing neural operator architectures face challenges when solving multiphysics problems with coupled partial differential equations (PDEs) due to complex geometries, interactions between physical variables, and the limited amounts of hig…
Equivariant Graph Neural Operator for Modeling 3D Dynamics
Modeling the complex three-dimensional (3D) dynamics of relational systems is an important problem in the natural sciences, with applications ranging from molecular simulations to particle mechanics. Machine learning methods have achieved …
Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs
Memory complexity and data scarcity have so far prohibited learning solution operators of partial differential equations (PDEs) at high resolutions. We address these limitations by introducing a new data efficient and highly parallelizable…
Neural Operators for Accelerating Scientific Simulations and Design
Scientific discovery and engineering design are currently limited by the time and cost of physical experiments, selected mostly through trial-and-error and intuition that require deep domain expertise. Numerical simulations present an alte…
Geometry-Informed Neural Operator for Large-Scale 3D PDEs
We propose the geometry-informed neural operator (GINO), a highly efficient approach to learning the solution operator of large-scale partial differential equations with varying geometries. GINO uses a signed distance function and point-cl…
Guaranteed Approximation Bounds for Mixed-Precision Neural Operators
Neural operators, such as Fourier Neural Operators (FNO), form a principled approach for learning solution operators for PDEs and other mappings between function spaces. However, many real-world problems require high-resolution training da…
Quantum Goemans-Williamson Algorithm with the Hadamard Test and Approximate Amplitude Constraints
Semidefinite programs are optimization methods with a wide array of applications, such as approximating difficult combinatorial problems. One such semidefinite program is the Goemans-Williamson algorithm, a popular integer relaxation techn…
Towards a scalable discrete quantum generative adversarial neural network
Quantum generative adversarial networks (QGANs) have been studied in the context of quantum machine learning for several years, but a fully quantum GAN with both a quantum generator and a quantum discriminator has not yet been proposed. We int…
Score-based Diffusion Models in Function Space
Diffusion models have recently emerged as a powerful framework for generative modeling. They consist of a forward process that perturbs input data with Gaussian white noise and a reverse process that learns a score function to generate sam…
HEAT: Hardware-Efficient Automatic Tensor Decomposition for Transformer Compression
Transformers have attained superior performance in natural language processing and computer vision. Their self-attention and feedforward layers are overparameterized, limiting inference speed and energy efficiency. Tensor decomposition is …
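The parameter savings that motivate this line of work are easy to see with the simplest possible decomposition. The sketch below compresses a single feedforward weight matrix with a truncated SVD; this is a deliberately simplified stand-in (HEAT itself automates the choice of decomposition and rank for hardware efficiency), and the dimensions and rank are arbitrary illustrative values.

```python
import numpy as np

# Low-rank compression of one feedforward weight matrix via truncated SVD.
# The dimensions and rank below are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 2048, 64
W = rng.standard_normal((d_out, d_in))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]   # (d_out, rank) factor, singular values folded in
B = Vt[:rank]                # (rank, d_in) factor

dense_params = d_out * d_in              # 1,048,576 parameters
factored_params = rank * (d_out + d_in)  # 163,840 parameters
print(f"compression: {dense_params / factored_params:.1f}x")  # compression: 6.4x

# At inference, y = W @ x is replaced by two thin matmuls, y = A @ (B @ x),
# which also reduces multiply-accumulate operations by the same factor.
x = rng.standard_normal(d_in)
y_approx = A @ (B @ x)
```

The same arithmetic extends to higher-order tensor decompositions of attention and feedforward blocks, where the rank trades accuracy against speed and energy.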
Towards a scalable discrete quantum generative adversarial neural network
We introduce a fully quantum generative adversarial network intended for use with binary data. The architecture incorporates several features found in other classical and quantum machine learning models, which up to this point had not been…
Variational quantum optimization with multibasis encodings
Despite extensive research efforts, few quantum algorithms for classical optimization demonstrate a realizable quantum advantage. The utility of many quantum algorithms is limited by high requisite circuit depth and nonconvex optimization …
TensorLy-Quantum: Quantum Machine Learning with Tensor Methods
Simulation is essential for developing quantum hardware and algorithms. However, simulating quantum circuits on classical hardware is challenging due to the exponential scaling of quantum state space. While factorized tensors can greatly r…
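The exponential-scaling point can be made concrete with a tiny example. The following sketch does not use the TensorLy-Quantum API; it is a plain NumPy illustration of why a factorized representation helps, using the simplest case of a rank-1 product state: n qubits need only n length-2 factors, while the dense statevector needs 2**n amplitudes.

```python
import numpy as np
from functools import reduce

# Factorized vs dense representation of an n-qubit product state.
n = 20
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # single-qubit |+> state
factors = [plus] * n                        # factorized form: 2*n numbers total

# Materializing the dense statevector costs 2**n amplitudes:
dense = reduce(np.kron, factors)
print(len(dense), sum(f.size for f in factors))  # 1048576 40

# Local expectations factorize across qubits, so they can be computed from
# the factors alone, without ever forming the dense vector:
Z = np.diag([1.0, -1.0])
exp_z0 = factors[0] @ Z @ factors[0]        # <+|Z|+> = 0
```

Entangled states need higher-rank tensor factorizations rather than a single product, but the memory argument is the same: structured factors in place of an exponentially large dense array.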
Reinforcement Learning in Factored Action Spaces using Tensor Decompositions
We present an extended abstract for the previously published work TESSERACT [Mahajan et al., 2021], which proposes a novel solution for Reinforcement Learning (RL) in large, factored action spaces using tensor decompositions. The goal of t…
AugMax: Adversarial Composition of Random Augmentations for Robust Training
Data augmentation is a simple yet effective way to improve the robustness of deep neural networks (DNNs). Diversity and hardness are two complementary dimensions of data augmentation to achieve robustness. For example, AugMix explores rand…
Defensive Tensorization
We propose defensive tensorization, an adversarial defence technique that leverages a latent high-order factorization of the network. The layers of a network are first expressed as factorized tensor layers. Tensor dropout is then applied i…