I am a final-year PhD student in the Department of Computer Science at the University of Texas at Austin, advised by Alex Dimakis. These days I mostly work on building better datasets for pretraining large language models. Throughout most of my PhD, I was interested in provable robustness verification for neural networks and assorted other challenges in security and privacy for AI/ML. I've also dabbled in contrastive learning, generative modeling, and causality. Prior to graduate school, I received a bachelor's degree in math and computer science from MIT and co-founded a tech startup.

I am currently on the job market and actively seeking employment!


Publications/Projects

DataComp-LM: In search of the next generation of training sets for language models

Li et al.

Preprint

OpenLM: a minimal but performative language modeling (LM) repository, 2023

Gururangan et al.

Open Source Project

Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand

Md Musfiqur Rahman*, Matt Jordan*, Murat Kocaoglu

ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling.

Lovasz Theta Contrastive Learning

Georgios Smyrnis, Matt Jordan, Ananya Uppal, Giannis Daras, Alexandros G. Dimakis

NeurIPS 2022 Workshop: Self-Supervised Learning - Theory and Practice.

Zonotope Domains for Lagrangian Neural Network Verification

Matt Jordan*, Jonathan Hayase*, Alexandros G. Dimakis, Sewoong Oh

Advances in Neural Information Processing Systems (NeurIPS) 2022.

Inverse Problems Leveraging Pre-trained Contrastive Representations

Sriram Ravula, Georgios Smyrnis, Matt Jordan, Alexandros G. Dimakis

Advances in Neural Information Processing Systems (NeurIPS) 2021.

Provable Lipschitz Certification for Generative Models

Matt Jordan, Alexandros G. Dimakis

International Conference on Machine Learning (ICML) 2021.

Quarantines as a Targeted Immunization Strategy

Jessica Hoffmann*, Matt Jordan*, Constantine Caramanis

Preprint, arXiv:2008.08262.

Exactly Computing the Local Lipschitz Constant of ReLU Networks

Matt Jordan, Alexandros G. Dimakis

Advances in Neural Information Processing Systems (NeurIPS) 2020.

Provable Certificates for Adversarial Examples: Fitting a Ball in a Union of Polytopes

Matt Jordan, Justin Lewis, Alexandros G. Dimakis

Advances in Neural Information Processing Systems (NeurIPS) 2019.

Quantifying Perceptual Distortion of Adversarial Examples

Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis

Preprint, arXiv:1902.08265.


Last Update: Aug 2024
HTML Template stolen from Chen Liu