Caiwen Ding

Associate Professor, CS&E
University of Minnesota, Twin Cities
ML/AI Systems & Computer Architecture
Privacy-Preserving ML & ML for EDA


About me

I am Caiwen Ding, an Associate Professor in the Department of Computer Science & Engineering at the University of Minnesota, Twin Cities (since Aug 2024). Previously, I was an assistant professor at the University of Connecticut. I received my Ph.D. from Northeastern University (NEU), Boston in 2019, supervised by Prof. Yanzhi Wang.

I am looking for highly motivated Ph.D. students and postdocs with strong interests in FPGA/GPU programming, EDA/computer architecture, or machine learning/deep learning. Full financial support is provided. Please email cwen1988@gmail.com with your CV and transcripts.

My research interests include algorithm-system co-design of ML/AI; computer architecture and heterogeneous computing (FPGAs/GPUs); privacy-preserving machine learning; machine learning for electronic design automation (EDA); neuromorphic computing; computer vision and natural language processing.

My work has been published in top-tier venues including DAC, ICCAD, ASPLOS, ISCA, MICRO, HPCA, CCS, Oakland, SC, FPGA, MLSys, NeurIPS, ICML, ICLR, CVPR, AAAI, ACL, EMNLP, IJCAI, ICRA, and DATE. I am a recipient of the NSF CAREER Award, the Amazon Research Award, and the Cisco Research Award. I received the Best Paper Award at ICLAD 2025, the Outstanding Student Paper Award at HPEC 2023, the Best Paper Award at the AAAI 2023 DCAA Workshop, and Best Paper Award nominations at DATE 2018 and DATE 2021.

Selected Achievements

Best Paper Award — ICLAD 2025
Best Paper Award — AAAI DCAA Workshop 2023
First Place in Accuracy — 2022 ACM/IEEE TinyML Design Contest at ICCAD
Fourth Place Overall — 2022 ACM/IEEE TinyML Design Contest at ICCAD

Research Areas

Efficient ML Systems: Algorithm-hardware co-design for efficient ML systems on FPGAs, GPUs, and emerging computing platforms; model compression, acceleration, and efficient training/inference.
Privacy-Preserving ML: Secure computation for deep learning, homomorphic encryption-based inference, federated learning, and differential privacy techniques.
ML for EDA: Machine learning for electronic design automation, LLM-based Verilog generation, and multi-agent systems for chip design and mathematical problem solving.

Research Sponsors