Reza Esfandiarpoor


I am a Ph.D. candidate in Computer Science at Brown University, working with Stephen Bach. Previously, I interned at Google on the AdsAI team. Currently, I am interning at Microsoft on the Office AI team, where I work on LLM agents.

My research focuses on optimizing the interaction between different foundation models to better understand their behavior and improve their performance.
Here is a summary of my research:

  • Vision-Language Models. I introduced Extract and Explore, a novel analysis method that uses reinforcement learning to align an LLM with VLM preferences. Analyzing the LLM's generations, I found that non-visual and even spurious information significantly affects VLM concept representations. I also introduced Follow-up Differential Descriptions, a new method that adapts LLM-generated concept descriptions to each individual VLM to improve performance.

  • Information Retrieval. I introduced SyCL, a new synthetic data generation method that creates fine-grained training data with multi-level relevance labels for training dense retrievers. I also created Trove, an open-source toolkit for dense retrieval that simplifies IR experiments without sacrificing flexibility.

  • Learning with Limited Labeled Data. I introduced Extended Few-shot Learning, a novel approach that uses structured knowledge sources and auxiliary data to improve visual classification with small models.

Before my Ph.D., I completed my undergraduate studies with Shadrokh Samavi, where I worked on pruning and quantization methods to create efficient medical image analysis models for edge devices.

Open Source Software

  1. Trove: A Flexible Toolkit for Dense Retrieval
    Reza Esfandiarpoor and Stephen Bach
    2025

Selected Publications

  1. Beyond Contrastive Learning: Synthetic Data Enables List-wise Training with Multiple Levels of Relevance
    Reza Esfandiarpoor*, George Zerveas*, Ruochen Zhang, Macton Mgonzo, Carsten Eickhoff, and Stephen H. Bach
    arXiv preprint, 2025
  2. An Adaptive Method for Weak Supervision with Drifting Data
    Alessio Mazzetto, Reza Esfandiarpoor, Akash Singirikonda, Eli Upfal, and Stephen H. Bach
    AISTATS, 2025
  3. If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions
    Reza Esfandiarpoor, Cristina Menghini, and Stephen H. Bach
    EMNLP, 2024
  4. Follow-Up Differential Descriptions: Language Models Resolve Ambiguities for Image Classification
    Reza Esfandiarpoor and Stephen H. Bach
    ICLR, 2024
  5. Extended Few-Shot Learning: Exploiting Existing Resources for Novel Tasks
    Reza Esfandiarpoor, Amy Pu, Mohsen Hajabdollahi, and Stephen H. Bach
    arXiv, 2021