Raj S. Shah

Ph.D. student at Interactive Computing, Georgia Tech.


I’m Raj, a PhD student in Interactive Computing at Georgia Tech (advised by Sashank Varma) and a visiting researcher in Diyi Yang’s SALT Lab at Stanford. My job-talk story is about building AI systems that stay trustworthy when the stakes are human well-being.

I pull on three research threads:

  • Reliable model operations. Dynamic unlearning evaluations, watermarking, and continual learning recipes that keep LLMs accountable once they leave the lab.
  • LLMs as cognitive probes. Developmentally staged corpora (BabyLM) and psych-aligned batteries that reveal when models follow, or ignore, human reasoning trajectories.
  • AI for mental health. Counselor copilots, MI-aware feedback loops, and evaluation protocols that center safety for both helpers and seekers.

Across these threads I ship practical tooling: pip packages for unlearning stress tests, open MI datasets + coaching sandboxes, and benchmark suites for finance, healthcare, and visualization literacy. If you’re evaluating LLMs in the wild (or just want to see the job-talk deck), reach out.

news

Feb 03, 2026 Announced my job talk tour (Georgia Tech → Stanford → CMU HCI), demoing our Naïve Scientific Misconceptions probes that uncover where GPT-4o slips back into child theories.
Dec 15, 2025 Started the Stanford SALT Lab residency with Diyi Yang, splitting time between counselor copilots and clarification-driven summarization for Amazon Rufus.
Aug 20, 2025 Received the Georgia Tech President’s Fellowship: three years of support to push on unlearning, mental-health evaluation, and BabyLM.

selected publications

  1. Helping the Helper: Supporting Peer Counselors via AI-Empowered Practice and Feedback
    Shang-Ling Hsu, Raj Sanjay Shah, Prathik Senthil, and 4 more authors
    Proceedings of the ACM on Human-Computer Interaction, 2023
  2. The Unlearning Mirage: A Dynamic Framework for Evaluating LLM Unlearning
    Raj Sanjay Shah, Sashank Varma, and others
    Conference on Language Modeling (COLM) Workshop, 2025