Akarsh Kumar

[Twitter] [Google Scholar] [GitHub]

Contact akarshkumar0101(at)gmail(dot)com

Bio
I'm a Ph.D. student at MIT CSAIL working with Phillip Isola.
I'm also a research intern at Sakana AI working with Yujin Tang and David Ha. I also collaborate with Ken Stanley.
My research is supported by the NSF GRFP!
Previously, I graduated from UT Austin where I worked with Peter Stone, Risto Miikkulainen, and Alexander Huth.

Research
I want to understand intelligence and all the different processes from which it emerges.
More specifically, I'm interested in:

  • Applying principles from natural evolution and artificial life to create better AI systems.
  • Open-ended processes which keep creating "interesting" artifacts indefinitely.
  • Evolving intelligence from scratch, without the internet, like how natural evolution created us.
In practice, I'm especially interested in meta-learning, RL, automatic environment generation, and multi-agent self-play.

Publications

Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis

Akarsh Kumar, Jeff Clune, Joel Lehman, Kenneth O. Stanley

arXiv 2025

TLDR: We hypothesize that the standard paradigm of training networks today produces spaghetti-like representations of their (good) output behavior, leading to poor OOD generalization, creativity, and continual learning.

Automating the Search for Artificial Life with Foundation Models

Akarsh Kumar, Chris Lu, Louis Kirsch, Yujin Tang, Kenneth O. Stanley, Phillip Isola, David Ha

ALife Journal 2024

TLDR: ASAL is a new ALife paradigm using CLIP to search for (1) target, (2) open-ended, and (3) diverse simulations. ASAL helps us understand "life as it could be" in arbitrary artificial worlds.

Learning In-Context Decision Making with Synthetic MDPs

Akarsh Kumar, Chris Lu, Louis Kirsch, Phillip Isola

AutoRL @ ICML 2024

TLDR: Generalist in-context RL agents trained on only synthetic MDPs generalize to real MDPs.

Effective Mutation Rate Adaptation through Group Elite Selection

Akarsh Kumar, Bo Liu, Risto Miikkulainen, Peter Stone

GECCO 2022

TLDR: Explains why genetic algorithms fail to self-adapt their mutation rate and proposes group selection to fix it.

Physically Plausible Pose Refinement using Fully Differentiable Forces

Akarsh Kumar, Aditya R. Vaidya, Alexander G. Huth

EPIC @ CVPR 2021

TLDR: Accurately refining pose estimations by differentiably modeling the physics of the scene.

Talks

Speaker at Detection and Emergence of Complexity Conference
May 2025
[Website] [Video (coming soon)]

MIT Embodied Intelligence Seminar
March 2025
[Video]

Kaiming He's Lab
March 2025

Michael Levin's Lab
April 2025
[Video]

Jeff Gore's Lab
March 2025

AutoRL @ ICML 2024
May 2024
[Video]