
Introducing Starlight
We've been quiet for a while.
Mostly because we've spent the last year building instead of talking.
Why we built it
Starlight started from a pretty simple frustration: modern infrastructure assumes everything lives in a datacenter with a perfect internet connection and an unlimited cloud budget. That works until you actually have to operate systems in the real world.
Real infrastructure is messy. Machines sit in warehouses, offices, labs, retail stores, garages, factories, and half-connected remote sites. GPUs end up stranded in desktops. Teams glue together virtualization stacks, Kubernetes clusters, VPNs, storage systems, and automation scripts just to keep things alive.
Meanwhile, the hardware is already there.
What's missing is software that treats distributed infrastructure like a first-class environment instead of an edge case.
That's why we built Starlight.
What it is
Starlight is an infrastructure platform for running VMs, containers, and AI workloads on hardware you already own. It gives you a single control plane for managing distributed systems without forcing everything through a centralized cloud architecture.
At a technical level, yes, there's orchestration, networking, scheduling, observability, automation, and all the things you would expect. But the important part is simpler than that: the system keeps working even when connectivity doesn't.
Built for the real world
Keeping the system working when connectivity doesn't became a core design principle very early on.
We weren't interested in building another platform that falls apart the second a site loses internet access. A surprising amount of modern infrastructure quietly depends on "someone else's computer" staying reachable at all times. Once you start operating outside pristine cloud environments, that assumption breaks fast.
Starlight was designed for unreliable networks, remote environments, and operators who still want control over their own systems.
On AI infrastructure
We also think AI infrastructure is headed in a weird direction.
Right now the industry answer to every AI problem seems to be: rent more GPUs somewhere else. But most companies already have underutilized compute scattered across their environments. Workstations. Small clusters. Edge servers. Existing virtualization hosts.
There's an enormous amount of dormant capacity already sitting inside organizations.
We think the next generation of infrastructure software should make that usable.
An alternative
Not everybody wants to ship data halfway across the country just to run inference. Not everybody wants their operational costs tied directly to hyperscaler pricing. And not everybody wants their infrastructure strategy to depend on three vendors continuing to behave reasonably forever.
Starlight is our attempt at building an alternative.
Not a "cloud replacement." Not an anti-cloud manifesto. Just infrastructure software designed around the idea that owning hardware should still matter.
We're still early. There's a lot more to share over the coming months — architecture, deployment models, AI workloads, networking, automation, and the control plane itself.
But this is the first time we're publicly saying what we've been building.
Welcome to Starlight.