Optimizing ML Efficiency Through Algorithmic Innovation

PhD candidate specializing in making machine learning models more efficient without requiring larger architectures or more data.

ML Research · Hugging Face · Research Award 2023

Research Philosophy

Efficiency First

Developing techniques that improve model performance without increasing computational requirements or data needs.

Algorithmic Innovation

Creating novel approaches like GRPO that fundamentally change how models learn from data.

Real-World Impact

Applying research to domains like molecular science and computer vision with measurable improvements.

Research Highlights

Group Relative Policy Optimization (GRPO)

A novel reinforcement learning framework that achieves superior performance with reduced computational overhead compared to traditional PPO methods. GRPO introduces group-based relative rewards that stabilize training while maintaining sample efficiency; a minimal sketch of the core idea follows the results below.

20% Faster Convergence

Compared to standard PPO on Atari benchmarks

Reduced Variance

More stable training dynamics across environments

GRPO Visualization
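How the group-based relative rewards work, in a minimal PyTorch sketch: each group holds trajectories sampled from the same starting state, and each trajectory's advantage is its reward normalized against the group's mean and standard deviation, removing the need for a learned value baseline. The function names, tensor shapes, and the clipped surrogate loss are illustrative assumptions, not the exact formulation from the paper.

import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # rewards: (num_groups, group_size) -- returns of trajectories sampled
    # from the same starting state, one group per row.
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)  # group-relative advantage

def grpo_policy_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    # PPO-style clipped surrogate objective, but driven by group-relative
    # advantages instead of a critic's value estimates.
    ratio = torch.exp(log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

Because the baseline comes from the group itself, no value network has to be trained, which is one plausible source of the reduced overhead and variance noted above.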

Molecular Science Applications

Developed specialized architectures for molecular property prediction that outperform traditional methods while using significantly fewer parameters. Our approach combines geometric deep learning with efficient attention mechanisms; a generic sketch of the idea appears after the results below.

15% Accuracy Improvement

On the QM9 benchmark dataset

3× Faster Inference

Compared to baseline models
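One generic way to combine geometric structure with attention, sketched below in PyTorch: pairwise 3D distances between atoms become an additive attention bias, so nearby atoms exchange information more strongly. The class name, the form of the distance bias, and the toy shapes are assumptions for illustration, not the architecture from the paper.

import torch
import torch.nn as nn

class DistanceBiasedAttention(nn.Module):
    # Self-attention over atoms whose scores are biased by pairwise 3D
    # distances, injecting geometry without per-pair parameters.
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.dist_scale = nn.Parameter(torch.tensor(1.0))  # learned decay rate

    def forward(self, atom_feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # atom_feats: (batch, atoms, dim); coords: (batch, atoms, 3)
        dist = torch.cdist(coords, coords)  # (batch, atoms, atoms) pairwise distances
        bias = (-self.dist_scale * dist).repeat_interleave(self.num_heads, dim=0)
        out, _ = self.attn(atom_feats, atom_feats, atom_feats, attn_mask=bias)
        return out

# Toy usage: 8 atoms with 64-dim features and random 3D coordinates.
layer = DistanceBiasedAttention(dim=64)
feats, coords = torch.randn(1, 8, 64), torch.randn(1, 8, 3)
print(layer(feats, coords).shape)  # torch.Size([1, 8, 64])

A molecular property readout would then pool the atom features (for example, a mean over atoms) and pass them through a small prediction head.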

Publications

NeurIPS 2023

Group Relative Policy Optimization for Sample-Efficient RL

Introducing GRPO, a novel RL algorithm that achieves superior performance with reduced computational overhead through group-based relative rewards.

Reinforcement Learning · Efficiency · PPO

ICML 2023

Efficient Geometric Deep Learning for Molecular Property Prediction

A parameter-efficient architecture combining geometric deep learning with attention mechanisms for molecular science applications.

Molecular Science · GNNs · Efficiency

AAAI 2023

Data-Efficient Training of Vision Transformers

Novel techniques to train vision transformers with limited data while maintaining competitive performance on benchmark datasets.