What I'm Doing Now
Last updated: Dec 2025
🔬 Working On
Research: I am currently exploring new ideas; the primary focus is likely to be sparse matrix computation (possibly combined with approximate computing) and training sparse models.
Coding: Currently organizing the code from my previous paper.
Life: Currently applying for a PhD program while enjoying my final days of master's studies in Santa Barbara.
📚 Recent Readings
Currently reading.
💡 Potential Ideas / Brainstorming
Some raw thoughts, "what-ifs", and research directions I'm currently pondering but haven't started yet. Open to discussion!
Hardware Efficiency of Dynamic Sparse Training
Existing GPUs (CUDA cores) are designed for dense matrix multiplication. While unstructured sparsity reduces computational load (FLOPs), it introduces discontinuous memory access patterns, often resulting in slower training speeds on standard GPUs rather than faster ones.
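The discontinuous access pattern is easiest to see in the CSR (compressed sparse row) format that most sparse kernels use. The sketch below is a minimal pure-Python CSR matrix-vector product, written for illustration only (not a tuned kernel): the `x[col_idx[k]]` lookup is an irregular gather, exactly the kind of memory access that defeats the coalesced loads dense GPU kernels rely on, even though far fewer multiply-adds are performed.

```python
def csr_spmv(data, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x with A stored in CSR form.

    data    -- nonzero values, row by row
    col_idx -- column index of each nonzero
    row_ptr -- row_ptr[i]:row_ptr[i+1] slices out row i's nonzeros
    """
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(y)):
        for k in range(row_ptr[row], row_ptr[row + 1]):
            # Irregular gather: col_idx[k] jumps around memory, so x is
            # read at unpredictable addresses -- cheap in FLOPs, expensive
            # in memory traffic on hardware built for dense access.
            y[row] += data[k] * x[col_idx[k]]
    return y

# The 3x3 matrix [[1,0,2],[0,0,3],[4,5,0]] in CSR form:
data    = [1.0, 2.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 2, 0, 1]
row_ptr = [0, 2, 3, 5]
print(csr_spmv(data, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

Structured sparsity formats (e.g. NVIDIA's 2:4 pattern) exist precisely to restore regular access while keeping some of the FLOP savings.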
Are LLMs the Future of Humanity?
I used to think LLMs were merely "clever" AI agents, incapable of doing many tasks well or at all. But LLM development seems to be accelerating beyond anyone's control. Could we see the emergence of more brain-like models, ones that may not be as "clever" as current LLMs but can learn the way humans do? That likely requires simultaneous advances in both software and hardware.