Sequential Learning

Recommender systems are known to overfit to users’ revealed preferences, which differ from their true preferences; this often leads to systems showing users addictive or harmful content. This failure can be understood as the system’s inability to correctly balance the depth and breadth of the content chosen for each user. These competing objectives are well modeled by the sum of a submodular and a supermodular function. Such functions are also useful for modeling active learning, summarization, and other “item selection” applications in ML. We study the black-box optimization of these functions in a low-information setting, where decisions are made sequentially.
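
To make the modeling concrete, here is a minimal sketch of such an objective, with invented items, topics, and synergy weights: topic coverage supplies the submodular “breadth” term, and nonnegative pairwise synergies between same-topic items supply the supermodular “depth” term. The greedy loop at the end is only a full-information baseline, not the bandit-feedback algorithm studied in the paper.

```python
import itertools

# Toy "BP" objective f = g + h on a small ground set of items: g is submodular
# (topic coverage, i.e. breadth, with diminishing returns) and h is
# supermodular (pairwise synergy between same-topic items, i.e. depth).
# The items, topics, and weights below are invented for illustration.
items = {
    "a": {"cooking"}, "b": {"cooking", "travel"},
    "c": {"travel"},  "d": {"chess"},  "e": {"chess"},
}
synergy = {frozenset(p): 0.3 for p in itertools.combinations(items, 2)
           if items[p[0]] & items[p[1]]}   # bonus for sharing a topic

def g(S):   # submodular: number of distinct topics covered by S
    return float(len(set().union(*(items[i] for i in S)))) if S else 0.0

def h(S):   # supermodular: total synergy among pairs inside S
    return sum(synergy.get(frozenset(p), 0.0)
               for p in itertools.combinations(S, 2))

def f(S):
    return g(S) + h(S)

# Full-information greedy baseline (NOT the bandit algorithm from the paper):
# repeatedly add the item with the largest marginal gain.
S = set()
for _ in range(3):
    best = max(set(items) - S, key=lambda i: f(S | {i}) - f(S))
    S.add(best)
print(sorted(S), f(S))
```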

Online SuBmodular + SuPermodular (BP) Maximization with Bandit Feedback. Adhyyan Narang, Omid Sadeghi, Lillian J. Ratliff, Maryam Fazel, Jeff Bilmes. In Submission to ICML 2022 (arxiv)

Strategic Learning

Traditional ML usually treats the learner as an independent agent in an isolated world, and chooses training objectives and algorithms accordingly. In practice, however, most models are deployed in complex ecosystems alongside many other models, each of which influences the data the others see. How should we perform learning in these settings?
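
As a minimal illustration of the setting (not the games or algorithms analyzed in the papers), the sketch below has two learners repeatedly retraining on data whose mean shifts with both deployed decisions; all constants are made up. The iterates settle at a joint decision where each player is optimal for the distribution the pair currently induces.

```python
import numpy as np

# Two-player decision-dependent learning, sketched via repeated retraining:
# each learner fits a scalar decision theta_i to data whose mean depends on
# BOTH deployed decisions. All constants below are invented.
rng = np.random.default_rng(0)

base = np.array([1.0, -1.0])            # base mean of each player's data
shift = np.array([[0.3, 0.2],           # shift[i, j]: how player j's decision
                  [0.1, 0.4]])          # moves the mean of player i's data

def sample(theta, m=2000):
    """Draw m points per player from the decision-dependent distributions."""
    means = base + shift @ theta
    return means[:, None] + rng.normal(size=(2, m))

theta = np.zeros(2)
for _ in range(50):
    data = sample(theta)
    # Each player best-responds to the distribution induced by the current
    # joint decision; for squared loss on scalar data this is the sample mean.
    theta = data.mean(axis=1)

print("approximately stable joint decision:", theta)
```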

Decision Dependent Learning in the Presence of Competition. Adhyyan Narang, Evan Faulkner, Dmitriy Drusvyatskiy, Maryam Fazel, Lillian J. Ratliff. AISTATS 2022 (arxiv)

Global Convergence to Local Minmax Equilibrium in Classes of Nonconvex Zero-Sum Games. Tanner Fiez, Lillian J. Ratliff, Eric Mazumdar, Evan Faulkner, Adhyyan Narang. NeurIPS 2021 (NeurIPS)

Overparameterized Learning

In machine learning, the loss on training data is often used as a proxy for the loss on an unseen test point. Traditional statistical theory justifies this approach in underparameterized settings, where the number of training points greatly exceeds the number of model parameters. However, modern neural networks are almost always overparameterized. Is the above proxy still a reasonable choice?
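
A minimal sketch (with arbitrarily chosen dimensions) shows why the proxy becomes suspect: once the number of parameters d exceeds the number of samples n, the minimum-norm least-squares solution drives the training loss to zero whether the labels carry signal or are pure noise, so the training loss by itself says nothing about test loss.

```python
import numpy as np

# With d > n, the minimum-norm least-squares solution interpolates ANY labels.
# (Dimensions below are arbitrary choices for illustration.)
rng = np.random.default_rng(0)
n, d = 20, 200
X = rng.normal(size=(n, d))

for kind in ("signal", "pure noise"):
    y = X[:, 0] if kind == "signal" else rng.normal(size=n)
    w = np.linalg.pinv(X) @ y                  # minimum-norm interpolator
    print(kind, "train MSE:", np.mean((X @ w - y) ** 2))   # ~0 in both cases
```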

We explore this question for classification problems. We show that classification is ‘easier’ than regression, and that overparameterization can make models brittle to adversarial perturbations even when standard test performance is good. Additionally, we study meta-learning in the overparameterized regime; this also gives insight into where the prior for single-task overparameterized regression comes from.
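
The toy experiment below is only loosely inspired by the spiked/bi-level covariance models analyzed in these papers, and all constants (n, d, the spike strength) are invented. It illustrates the flavor of the ‘classification is easier’ phenomenon: the minimum-norm interpolator attenuates the signal direction enough that its square-loss (regression) test error stays substantial, yet the sign of its predictions still recovers the test labels far more reliably than chance.

```python
import numpy as np

# Spiked-covariance toy: one "favored" feature carries the signal; the
# min-norm interpolator attenuates it and picks up contamination from the
# other directions. Constants are invented, not taken from the papers.
rng = np.random.default_rng(0)
n, d, lam = 50, 5000, 43.0           # n samples, d >> n features, spike lam

def features(m):
    z = rng.normal(size=m)                       # signal coordinate
    X = rng.normal(size=(m, d))
    X[:, 0] = np.sqrt(lam) * z                   # spiked first feature
    return X, z

X, z = features(n)
y = z                                            # noiseless regression labels
w = X.T @ np.linalg.solve(X @ X.T, y)            # minimum-norm interpolator

Xt, zt = features(2000)
pred = Xt @ w
print("train MSE:", np.mean((X @ w - y) ** 2))        # ~0 (interpolation)
print("test  MSE:", np.mean((pred - zt) ** 2))        # stays well above zero
print("test 0-1 error:", np.mean(np.sign(pred) != np.sign(zt)))
# the 0-1 error is small compared to the 0.5 of random guessing
```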

Classification and Adversarial Examples in an Overparameterized Linear Model: A Signal Processing Perspective. Adhyyan Narang, Vidya Muthukumar, Anant Sahai. Short version in ICML OPPO Workshop 2021 (arxiv)

Towards Sample-Efficient Overparameterized Meta-Learning. Yue Sun, Adhyyan Narang, Ibrahim Gulluk, Samet Oymak, Maryam Fazel. NeurIPS 2021 (arxiv)

Classification vs regression in overparameterized regimes: Does the loss function matter? Vidya Muthukumar*, Adhyyan Narang*, Vignesh Subramanian*, Mikhail Belkin, Daniel Hsu, Anant Sahai. JMLR 2021 (arxiv)

(*) Equal contribution