Year 4 CS undergraduate at the National University of Singapore.
Currently exploring Deep Reinforcement Learning, Open-Endedness, Unsupervised Environment Design, and Meta-learning.
Some questions on my mind:
Which is the better way to understand intelligence: engineering it from the ground up, or reverse-engineering the brain?
What are the limits to meta-learning? What’s stopping us from meta-learning every part of an ML algorithm?
What should we vary in the environments we use to train generally capable agents? Transition dynamics? Observations? Reward functions? Intuitively, varying the reward function seems most likely to produce agents that can succeed at arbitrary tasks.
Is there any point to computational neuroscience if we can't even properly understand how deep NNs do what they do?
I like to write about ideas and things I find interesting. I also share dumb technical mistakes I make so others don’t make them.
Intelligence, ML, and Jax
| Date | Title | Reading Time |
|---|---|---|
| Aug 2, 2025 | Leaked BatchTracer Error in Jax | 2 min |
| Jul 5, 2024 | Neuroscience and AI | 5 min |
Misc
| Date | Title | Reading Time |
|---|---|---|
| Jun 3, 2024 | Notes from “The Work of His Hands” by Sy Garte | 5 min |
| Jun 3, 2024 | Quick thoughts on ‘Mere Christianity’ by C.S. Lewis | 8 min |
| May 19, 2024 | What to do about the future | 2 min |
| May 19, 2024 | Trying to get good at math and science | 2 min |
| May 17, 2024 | Ambitious but at peace | 2 min |