Jeffrey J. Ma

Computer Science PhD Student at Harvard SEAS and the Edge Computing Lab.


Science and Engineering Complex (SEC) 5.101

150 Western Ave

Boston, MA 02134

About Me

Welcome to my site! I'm Jeffrey Ma, a Harvard CS PhD student advised by Prof. Vijay Janapa Reddi. My research sits at the intersection of machine learning, systems, and multi-agent interaction, and I'm especially interested in the following:

  • Efficient models, data representations, and learning methods.
  • Systems of learning agents and improving multi-step reasoning.
  • Parallelism, asynchronicity, and resiliency in ML systems.
  • Continual learning and scalable methods of skill acquisition in large foundation models.

I’m currently working as a Student Researcher at Google, focusing on LLMs for code: improving multi-step reasoning, identifying code-edit opportunities with LLMs, and optimizing code at scale with Milad Hashemi (Google Cloud AI) and Amir Yazdanbakhsh (Google DeepMind). I previously interned at AWS AI Labs, where I worked on understanding silent data corruption in large-scale LLM training with Leonard Lausen and Hengzhi Pei.

Before Harvard, I did my undergrad at the California Institute of Technology, studying computer science and finance. There I was advised by Prof. Adam Wierman and worked with Prof. Animashree Anandkumar, Prof. Yuanyuan Shi, and Prof. Florian Schäfer on competitive optimization methods in multi-agent reinforcement learning settings.

I’ve also worked in industry, both as an intern and full-time prior to my PhD:

  • At Google Brain, on the TensorFlow Extended team, working on MLOps to continuously train models on newly arriving data.

  • At a self-driving startup, Nuro, on the ML Infrastructure team, on post-training model optimization and deployment.

  • At a quantitative finance firm, Citadel, on the E-Trading and Order Management System (OMS) teams, working on algorithmic and automated methods of trading and booking fixed income instruments.

news

Sep 16, 2024 I’ve started as a Student Researcher on the Learning2Perf Team at Google, working on improving code understanding in LLMs and automating code optimization at scale! I’ll be interning remotely from the Cambridge area and around the Google Cambridge office: let me know if you’d ever like to chat!
May 13, 2024 I’ve started as an Applied Scientist intern in the AWS AI Research and Education (AIRE) Lab at Amazon NYC this summer, working on fault resiliency in LLM training! Definitely reach out if you’re in the area and want to chat!
Aug 28, 2023 Started as a PhD student at Harvard, working on ML + systems, large language models, and code generation!

latest posts

selected publications

  1. Polymatrix Competitive Gradient Descent
    Jeffrey Ma, Alistair Letcher, Florian Schäfer, and 2 more authors
    Nov 2021
  2. FedStaleWeight: Buffered Asynchronous Federated Learning with Fair Aggregation via Staleness Reweighting
    Jeffrey Ma, Alan Tu, Yiling Chen, and 1 more author
    Jun 2024