Raj S. Shah

Ph.D. student in the School of Interactive Computing at Georgia Tech.


My research focuses on the practical use of AI to support human well-being. I approach this through three complementary directions: (1) Technical methods for LLM safety, robustness, and control: I develop techniques for effective unlearning to protect users, watermarking for authorship verification, and continual learning in real-world environments. (2) Pre-trained language models (PLMs) as computational models of human cognition: I use PLMs to better understand human cognition by developing theoretically grounded linking hypotheses, modeling reasoning patterns, and evaluating when and why model behavior aligns with or diverges from human judgments and developmental trajectories; recently, I have been exploring the developmental alignment of models. (3) Benchmarks and evaluation protocols for practical scenarios and contexts: I design domain-specific evaluations for mental health, social-media visualizations, global representations, clinical documentation, and financial language modeling, exposing where AI failures have real human consequences. Overall, my research works toward the value-aligned use of AI by combining technical methods with effective evaluations.

news

Feb 15, 2026 Shoutout to Harsh Nishant Lalai, who is exploring Ph.D. opportunities for Fall 2026. He has strong theoretical grounding, careful experimental design, and the engineering maturity to turn concepts into working systems. If your lab values both depth and execution, I highly recommend him (Harsh is awesome). Website: https://sites.google.com/view/harsh-nishant-lalai/

selected publications

  1. Helping the helper: Supporting peer counselors via AI-empowered practice and feedback
    Shang-Ling Hsu, Raj Sanjay Shah, Prathik Senthil, and 4 more authors
    Proceedings of the ACM on Human-Computer Interaction, 2025
  2. The unlearning mirage: A dynamic framework for evaluating LLM unlearning
    Raj Sanjay Shah, Jing Huang, Keerthiram Murugesan, and 2 more authors
    In Second Conference on Language Modeling, 2025