I am a software development engineer at Amazon Science, working on deep learning compilers and frameworks. I received my Ph.D. from the Georgia Institute of Technology, where I was advised by Santosh Pande and Greg Eisenhauer. Before that, I received my M.S. and B.S. degrees from Hunan University (Changsha, China) under the supervision of Dr. Cheng Xu (徐成).

In general, my research interests lie at the intersection of compilers and systems. I am particularly interested in applying compiler techniques to problems such as application performance and resilience.

[CV], [Research Statement], [Teaching Statement]


  • Dec 2021: CASE is accepted to PPoPP’22. It provides a scheduling framework for improving GPU utilization.
  • May 2021: IterPro is accepted by IEEE TPDS. It exploits features of modern compiler optimizations to achieve almost-free resilience.
  • April 2021: I joined Amazon Science, working on deep learning compilers and systems.
  • Dec 2020: I defended my dissertation.
  • Nov 2020: I joined Amazon SCOT, working on distributed storage systems.
  • May 2019: Our paper CARE is accepted to SC’19 and nominated as a Best Student Paper Finalist. CARE is an instant failure recovery framework that incurs zero runtime overhead during the normal execution of applications.
  • Jan 2018: Our paper LADR is accepted to HPDC’18.
  • Sep 2016: I interned at the VMware CTO Office in Boston, MA.
  • Aug 2016: Two papers related to active storage received Best Paper awards.


  1. Chao Chen, Chris Porter and Santosh Pande. CASE: A Compiler-Assisted SchEduling Framework for Multi-GPU Systems. To appear in 27th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP), 2022. [PDF]

  2. Chao Chen, Greg Eisenhauer and Santosh Pande. Near-zero Downtime Recovery from Transient-Error-Induced Crashes. IEEE Transactions on Parallel and Distributed Systems (TPDS), 2021. [DOI][PDF]

  3. Chao Chen, Greg Eisenhauer, Santosh Pande and Qiang Guan. CARE: Compiler-Assisted Recovery for Soft Failures. International Conference for High Performance Computing, Networking, Storage, and Analysis (SC), 2019. (Best Student Paper Finalist) [DOI][PDF]

  4. Chao Chen, Greg Eisenhauer, Matthew Wolf and Santosh Pande. LADR: Low-cost Application-level Detector for Reducing Silent Output Corruptions. ACM International Symposium on High-Performance Parallel and Distributed Computing (HPDC), 2018. [DOI][PDF]

  5. Chao Chen, Michael Lang, Latchesar Ionkov and Yong Chen. Active Burst-Buffer: In-Transit Processing Integrated into Hierarchical Storage. 11th IEEE International Conference on Networking, Architecture, and Storage (NAS), 2016. (Best Paper) [DOI]

  6. Yong Chen, Chao Chen, Yanlong Yin, Xian-He Sun, Rajeev Thakur and William Gropp. Rethinking High Performance Computing System Architecture for Scientific Big Data Applications. 14th IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA), 2016. (Best Paper) [DOI]

  7. Yong Chen, Chao Chen, Xian-He Sun, William D. Gropp, and Rajeev Thakur. A Decoupled Execution Paradigm for Data-Intensive High-End Computing. International Conference on Cluster Computing (Cluster), 2012. [DOI]

  8. Chao Chen, Yong Chen and Philip C. Roth. DOSAS: Mitigating the Resource Contention in Active Storage Systems. International Conference on Cluster Computing (Cluster), 2012. [DOI]

  9. Chao Chen and Yong Chen. Dynamic Active Storage for High Performance I/O. 41st International Conference on Parallel Processing (ICPP), 2012. [DOI]


  • Teaching Assistant. Design of Operating Systems (CS3210). Georgia Institute of Technology.
  • Teaching Assistant. Compilers: Theory and Practice (CS8803-O08). Georgia Institute of Technology.
  • Teaching Assistant. Compiler Design (CS6241). Georgia Institute of Technology.
  • Teaching Assistant. Compilers and Interpreters (CS4240). Georgia Institute of Technology.
  • Teaching Assistant. High Performance Computer Architecture (CS6290). Georgia Institute of Technology.


  • Program Committee: IPDPS’22, HPDC’22
  • Student Volunteer: SC’13
  • Sub-reviewer: Cluster’12, CCGrid’12, Cluster’13, CCGrid’13, PACT’19, PLDI’19