Beledarian/wgpu-llm
A from-scratch LLM inference engine that uses wgpu (the cross-platform WebGPU implementation) to dispatch WGSL compute shaders for every math operation a Transformer needs. No CUDA. No Python. No massive framework dependencies. Just Rust, raw shaders, and your GPU.
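To make the approach concrete: every Transformer operation (matmul, softmax, layernorm, and so on) becomes a GPU compute kernel instead of a CUDA call. The sketch below is not code from the repo; it is a plain-Rust CPU reference for the kind of matmul kernel such an engine would dispatch as a WGSL compute shader, where each GPU invocation would compute one output element.

```rust
// Hypothetical CPU reference (NOT from wgpu-llm) for the kind of
// kernel an engine like this dispatches as a WGSL compute shader:
// C = A (m x k) * B (k x n), row-major f32 buffers.
// On the GPU, each shader invocation would compute one element of C.
fn matmul(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    let mut c = vec![0.0f32; m * n];
    for row in 0..m {
        for col in 0..n {
            let mut acc = 0.0f32;
            for i in 0..k {
                acc += a[row * k + i] * b[i * n + col];
            }
            c[row * n + col] = acc;
        }
    }
    c
}

fn main() {
    // 2x2 example: [[1,2],[3,4]] * [[5,6],[7,8]]
    let a = [1.0, 2.0, 3.0, 4.0];
    let b = [5.0, 6.0, 7.0, 8.0];
    let c = matmul(&a, &b, 2, 2, 2);
    println!("{:?}", c); // [19.0, 22.0, 43.0, 50.0]
}
```

In a wgpu pipeline, the same loop body would live in a WGSL `@compute` entry point, with `A`, `B`, and `C` bound as storage buffers and the two outer loops replaced by the dispatch grid.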
Score: 0 (Active)
On the radar — signal detected
Stars: 6 · Forks: 0 · Contributors: 1 · Language: Rust
Score updated Apr 12, 2026