Qbaylogic’s Bittide platform eliminates jitter entirely.
For all their expensive hardware, modern data centers and large-scale computing networks are essentially small home labs when it comes to networking. Each computer operates independently from its peers, relying on complex flow control, retransmissions and dynamic backpressure to compensate for tiny differences in clock frequencies. Over the years, resources have been poured into making the bounds for this clock drift smaller and smaller, with synchronization schemes such as Google’s Firefly achieving just tens of nanoseconds of absolute drift over thousands of machines. Can we get to zero?
The dream would be a perfectly scheduled, predictable network. Eliminating jitter entirely would move the network from a reactive, best-effort system to a proactive, perfectly scheduled one. Imagine an air traffic control system for collectives, where every transfer is choreographed and scheduled to avoid congestion before it ever happens. This would make end-to-end latency not just low, but predictable by design, transforming the network from a source of random delays into a deterministic, reliable fabric. This is a word-for-word quote from Amin Vahdat, director of Google’s AI and infrastructure department, during his keynote at Hot Interconnects 2025, a conference on the state of the art in networking.
Google isn’t the only one struggling with this; everyone who tries to orchestrate asynchronous systems runs into the same problems. Still, the AI fever spreading across the industry has highlighted the need to reason about a system’s synchronicity like never before. Three years ago, Qbaylogic started the hardware development of Bittide, a platform that eliminates jitter entirely. Vahdat would likely call it a “dream” and “utopia.”
Last year, we implemented a model data center showcasing that this is indeed possible. In a paper published jointly with Google DeepMind, called “Bittide: control time, not flows,” we show that systems, even when they’re multiple kilometers apart, can synchronize to such a degree that data arrival and computations become perfectly predictable down to the clock cycle. While the theory already predicted it, later experiments have shown that this property holds even for clock cycles scheduled weeks apart.
Control time, not flows
Typical computer networks operate by employing some form of backpressure: whenever a node can’t keep up with the data sent to it by another node, it returns a message saying so. The sender can then decide to pause for a while or retry its transmission. This type of communication, typically called “flow control,” makes systems unpredictable, because the exact clock frequency differences that drive this behavior are unknown in practice. In large systems, it leads to a phenomenon called “tail latency”: a computation spread over many nodes has to wait for its slowest participant, so slowdowns that are rare on any individual node add up to long, hard-to-predict worst-case delays. Many resources have been poured into minimizing these effects.
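To see why rare stalls dominate at scale, consider a toy model (all parameters here are illustrative, not measured): each node in a fan-out usually responds promptly, but with a small probability it hits a retransmission or backpressure stall. The step completes only when the slowest node responds.

```python
import random

def fanout_latency(n_nodes, base=1.0, jitter=0.05, p_stall=0.01,
                   stall=10.0, rng=random):
    """Latency of one fan-out step: we wait for the slowest of n_nodes.

    Each node usually answers in ~base time units (plus small jitter),
    but with probability p_stall it hits a stall. Toy model only.
    """
    def node():
        t = base * (1.0 + rng.uniform(-jitter, jitter))
        if rng.random() < p_stall:
            t += stall
        return t
    return max(node() for _ in range(n_nodes))

rng = random.Random(0)
samples = sorted(fanout_latency(1000, rng=rng) for _ in range(500))
median = samples[len(samples) // 2]
# A 1% stall chance is rare for any single node, but across 1000 nodes
# almost every fan-out step ends up waiting on at least one stalled node,
# so even the *median* step latency is dominated by the stall time.
```

A stall probability of 1% per node means a 1000-node fan-out has essentially zero chance of avoiding all stalls, which is exactly the accumulation the text describes.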
Bittide comes at this from a different angle. Instead of reacting to data overflows with backpressure packets, the heartbeat of the hardware itself is adjusted. In a Bittide network, every node is equipped with an elastic buffer that sits between the incoming data link and the internal logic. This buffer acts as a physical tension gauge for the clock: if it starts to fill up, the sender is slightly faster than the receiver; if it empties, the sender is slower. Instead of sending a stop signal across the network, which creates a cascade of delays, Bittide uses the buffer level as a feedback signal to a local, adjustable high-precision oscillator. By subtly speeding up or slowing down the local clock to match the incoming data rate, the system achieves syntony: a state in which every node in a global network is perfectly predictable in relation to all other nodes. The result is a network where data never queues up and never drops.
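The feedback loop can be sketched for two nodes connected by links in both directions. This is a toy continuous-time model, not the real controller: each node sets its oscillator to its nominal frequency plus a correction proportional to its buffer’s deviation from the midpoint, and the gain and time step are chosen for stability rather than taken from hardware.

```python
def simulate_syntony(base0, base1, steps=5000, dt=1e-6, gain=1e3):
    """Two nodes linked both ways; each elastic buffer fills at the
    peer's effective clock rate and drains at the local one.

    Each node's effective frequency is its nominal frequency plus a
    proportional correction on the buffer level (midpoint taken as 0).
    Illustrative sketch, not the actual Bittide control law.
    """
    buf = [0.0, 0.0]
    f = [base0, base1]
    for _ in range(steps):
        f[0] = base0 + gain * buf[0]
        f[1] = base1 + gain * buf[1]
        buf[0] += (f[1] - f[0]) * dt   # filled by peer, drained locally
        buf[1] += (f[0] - f[1]) * dt
    return f, buf

# Two oscillators that start 100 ppm apart settle to the same effective
# frequency; the buffers hold a small, constant offset that encodes the
# original mismatch instead of growing without bound.
(f0, f1), buf = simulate_syntony(100e6, 100e6 + 1e4)
```

Note the key design point this illustrates: no message ever says “slow down.” The buffer level itself is the control signal, and once the loop settles, both effective rates agree even though the underlying oscillators remain physically different.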
It’s tempting to think this would require very specialized hardware, but that’s not the case. The components needed to build such a system are present on most mid-tier FPGA development boards, and it works with links anywhere from 100 Mbit/s to 400 Gbit/s.
While the immediate killer app for Bittide is the massive AI training clusters at various companies, the implications for embedded systems are equally interesting. For example, modern cars are essentially mobile data centers, and future self-driving ones even more so. Through both legislative and consumer pressure, they’re increasingly incorporating high-speed, high-bandwidth sensors and decision centers. With Bittide, the car’s central computer knows exactly when a lidar or camera frame arrives and when actuators are ready to receive data.
Halfway around the world
Today, Qbaylogic is building vendor-agnostic libraries that can be integrated into any FPGA or ASIC design, most of which are open source. As it happens, the properties guaranteed by Bittide are so strong that the business logic in a chip doesn’t need to know a synchronization layer sits between it and the outside world. In fact, the asynchronous nature of the different circuits is unobservable by the business logic itself. Instead, it can operate directly on metadataless links with perfect promises on when data put on those links will arrive at neighboring nodes. This property is something we’re already familiar with in single-chip design, but Bittide lifts it to multiple chips, even when there are tens of thousands of them and they’re halfway around the world.
Up until now, we’ve only focused on data center applications in the AI era. We hope, however, that this is the beginning of something greater. The industry has spent decades building layers of complexity to manage the ‘noise’ of asynchronous clocks. Bittide offers a way to turn that noise into a silent, perfectly timed foundation. We look forward to applying it in other domains too.


