Before SparseTech ever existed, I spent years working as a structural engineer. On the surface, it feels like a completely different life. But the longer I've been here, the more I realize how much that world shaped how I think.
When you design structures, you develop an instinct for what actually matters. You learn where you can simplify, where you can't, and where cutting corners will come back to haunt you. You also learn—sometimes the hard way—that reality doesn't care how elegant your math looked on paper.
When I started digging into AI and signal processing (originally as a side project that got… a little out of control), I kept seeing the same pattern over and over again: systems treating every piece of data as if it deserved equal attention. As if all information carried the same weight.
It doesn't.
I've lost count of how many times I've seen something behave perfectly in simulation, only to fall apart the moment it touched real hardware. Tiny assumptions nobody talks about. Unit conversions that quietly sabotage you months later. Test signals that look "representative" but never actually occur in the wild. I once lost an entire week to one of those—and managed to make the same mistake again two years later. That one stung.
That gap between "works in theory" and "works on my desk" is where most of the real learning lives. It's messy, humbling, and honestly… that's the part I enjoy most.
So what does SparseTech actually do?
At a high level, we work on two problems that sound unrelated—but aren't.
One is making wireless signals more efficient—the kind of foundational work that shows up in next-generation connectivity. The other is shrinking AI models so they can run where they actually need to run: on constrained devices, not just massive cloud clusters.
Underneath both is the same question I used to ask about buildings:
Where is the load really going—and why are we spending resources on parts that aren't carrying it?
Once you start asking that question seriously, a lot of complexity starts to feel optional.
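To make that question concrete on the model-compression side: one deliberately simple baseline is magnitude pruning, where you keep only the largest weights and zero the rest, on the theory that the small ones aren't carrying much load. This is an illustrative sketch, not SparseTech's actual method; the layer size and `keep_fraction` here are made-up numbers.

```python
import random

random.seed(0)
# Stand-in for a trained layer's weights (4096 values from a normal distribution).
weights = [random.gauss(0.0, 1.0) for _ in range(4096)]

def prune_by_magnitude(w, keep_fraction=0.1):
    """Zero out all but the largest-magnitude keep_fraction of weights."""
    k = max(1, int(len(w) * keep_fraction))
    # k-th largest absolute value becomes the cutoff.
    threshold = sorted(abs(x) for x in w)[-k]
    return [x if abs(x) >= threshold else 0.0 for x in w]

pruned = prune_by_magnitude(weights)
kept = sum(1 for x in pruned if x != 0.0)
print(f"kept {kept} of {len(weights)} weights")
```

In practice you'd prune a real network and fine-tune afterward to recover accuracy, but even this toy version makes the point: most of the structure survives on a fraction of the parameters.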
What this blog is (and isn't)
Think of this blog as my notebook, just written out loud.
Some posts will get technical. Some will be reflections on patterns I'm seeing across AI, hardware, and infrastructure. Occasionally, I'll share company updates—usually only when we ship something I'm genuinely proud of.
What you won't find here: buzzwords for their own sake, hype we can't defend, or confident answers to questions I'm still actively wrestling with.
I've been meaning to start this for a long time, but there's always another problem to solve, another test to run, another edge case to chase down. Meanwhile, a lot of hard-earned lessons stay trapped inside the building—the failures that taught us the most, the shortcuts that actually work, the things everyone seems to rediscover independently.
Some of that has to stay private. But a lot of it doesn't.
The other, more selfish reason I'm doing this is simple: I learn best by talking things through with people who've been in the trenches. Internal conversations only get you so far.
If any of this sounds familiar—if you've hit the same walls or asked the same questions—you can reach me through the contact page. I'm not especially interested in internet debates. But I am interested in the kind of conversations that start with, "Oh man, I ran into that too—here's what I tried," and go from there.
Looking forward to what comes next.