Haris Riaz
710 Gould Simpson
1040 E 4th Street
Tucson, AZ 85721
I am a PhD student at the University of Arizona, advised by Professor Mihai Surdeanu at the CLU Lab. My research focuses on enabling Large Language Models (LLMs) to effectively leverage structured expert knowledge.
I have developed neuro-symbolic techniques that parametrically integrate knowledge from symbolic reasoners into LLMs for constraint satisfaction problems, and created methods for generating synthetic reward feedback data through weak supervision. My work also includes designing meta-algorithms to enhance diversity in LLM-generated synthetic data, integrating causal reasoning and pragmatics into retrieval-augmented generation (RAG) frameworks, and leveraging structured linguistic knowledge for information extraction tasks.
Before joining the UofA, I completed my undergraduate studies in Computer Science at the School of Electrical Engineering and Computer Science at the National University of Sciences and Technology in 2021.
Besides work, I am learning to play the guitar. I enjoy rock climbing and hiking along the various trails surrounding Tucson. A long time ago, I memorized the spelling of every word in the English dictionary and was a finalist in the 4th Dawn in Education National Spelling Bee.
news
| Date | News |
|---|---|
| May 27, 2025 | Returned to Amazon as an Applied Scientist Intern. |
| May 17, 2025 | Our paper “MetaSynth: Meta-Prompting-Driven Agentic Scaffolds for Diverse Synthetic Data Generation” has been accepted to Findings of ACL 2025. |
| Dec 31, 2024 | New paper alert: “MetaSynth: Meta-Prompting-Driven Agentic Scaffolds for Diverse Synthetic Data Generation”. Under submission. |
| Dec 10, 2024 | Our work Say Less Mean More: Leveraging Pragmatics in Retrieval-Augmented Generation is presented as a contributed lightning talk at the MusIML workshop co-located with NeurIPS 2024. |
| Nov 03, 2024 | Serving as reviewer for ICLR 2025. |
| Oct 15, 2024 | New paper: Say Less Mean More: Leveraging Pragmatics in Retrieval-Augmented Generation. Under submission. |
| May 28, 2024 | Started my internship as an Applied Scientist Intern at Amazon. |
| Mar 13, 2024 | Our paper Best of Both Worlds: A Pliable and Generalizable Neuro-Symbolic Approach for Relation Classification is accepted at NAACL 2024 Findings! |
| Feb 20, 2024 | Our paper ELLEN: Extremely Lightly Supervised Learning For Efficient Named Entity Recognition is accepted at LREC-COLING 2024! |
| Feb 16, 2024 | Attending Stanford Treehacks 2024. Our hackathon project eventually morphed into StoryEngine ($750k pre-seed backed by A16z). Note: I am not affiliated with StoryEngine (all credit goes to my hackathon teammate Wanrong He). |
| Jan 15, 2024 | Serving as reviewer for NAACL 2024. |
| Apr 04, 2022 | Serving as reviewer for the Second Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning (Pan-DL), co-located with EMNLP 2023. |
| Apr 04, 2022 | Awarded an AI Talent Bursary to attend AI Week, organized by the Alberta Machine Intelligence Institute (AMII). |
| Jan 12, 2022 | Started my PhD at the University of Arizona, working with Professor Mihai Surdeanu. |
| Jun 01, 2021 | Graduated with a Bachelor's in CS from NUST-SEECS. My Final Year Project on Handwritten Sequence Recognition with Time Series Transformers was one of three projects selected for the Rector's Gold Medal for the best final-year CS project. |
| May 07, 2021 | Serving as a volunteer for ICLR 2021. |