Coordinating Computers In a Relativistic Universe

A Dartmouth professor ponders how algorithms might function across space.

Siddhartha Jayanti, assistant professor of computer science, is asking, and beginning to answer, questions that are not as futuristic as they may sound. (Photo by Katie Lenhart, graphic by Spencer Fennell)

Will algorithms designed for interconnected computers hold up if some of the machines are not here on Earth but flying about in space, onboard satellites or spacecraft?

Siddhartha Jayanti, assistant professor of computer science, is asking, and beginning to answer, questions that computer scientists must tackle as humans increase their footprint in space, bringing their machines with them.

Jayanti studies the design and behavior of distributed computer systems in which many interconnected computers work together as one to handle tasks that would overwhelm a single machine. Think streaming services or online banking, where millions of people visit a website at the same time or companies need to process massive amounts of data in an instant.

The key to distributed computing is communication, says Jayanti. To solve a problem together, independent computers that are far apart must engage in digital dialogues, passing data back and forth efficiently and reliably.

To mathematically verify that an algorithm designed for such a distributed system achieves the tasks it was built for, computer scientists use a technique that essentially pauses the system at different moments, examining its state and behavior at each moment to understand how the system evolves over time.
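As a rough illustration of this style of reasoning, consider the toy sketch below: machines pass a single token around a ring, and the system is "paused" after each step to check that exactly one machine holds the token. The scenario and names are hypothetical, not taken from Jayanti's work.

```python
# Hypothetical illustration: verifying a toy distributed algorithm by
# "pausing" it and checking an invariant in each global state. Machines
# pass a single token around a ring; the invariant is that exactly one
# machine holds the token at any moment.

def run_token_ring(num_machines=4, steps=10):
    # Global state: which machine currently holds the token.
    holds_token = [False] * num_machines
    holds_token[0] = True

    for step in range(steps):
        # "Pause" the system: take a snapshot of the global state and
        # check that the invariant holds at this moment in time.
        snapshot = list(holds_token)
        assert sum(snapshot) == 1, f"invariant violated at step {step}"

        # Resume: the token holder passes the token to its neighbor.
        owner = snapshot.index(True)
        holds_token[owner] = False
        holds_token[(owner + 1) % num_machines] = True

    print("Invariant held in every snapshot.")

run_token_ring()
```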

“But what if you now consider a scenario where these machines are deployed across the solar system in spacecraft that travel at high speeds and are subject to unusual gravitational effects?” asks Jayanti. “And what if the different machines are subject to different gravitational fields?”

These questions are not as futuristic as they may sound, he says. Scientists are already brainstorming ideas to build an Interplanetary Internet, an extraterrestrial network that could move data more efficiently in space, much like the Internet we use.

What’s different in this new paradigm, says Jayanti, is that the physics of relativity, first proposed by Albert Einstein, comes into play. Its unusual and often counterintuitive effects on time and space must factor into how computer scientists design, verify, and understand algorithms distributed across space.

The astronomical distances involved pose an additional problem. Depending on where Earth and Mars are in their orbits, light can take anywhere from 3 to 22 minutes to travel between the two planets. This makes the system asynchronous: interplanetary computers that must collaborate cannot rely on clocks alone to coordinate their messages and events.
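For a rough sense of those numbers, here is a back-of-the-envelope calculation of the one-way light travel time, using approximate closest and farthest Earth-Mars separations (illustrative figures, not from the article):

```python
# Back-of-the-envelope check of the 3-to-22-minute figure: one-way light
# travel time between Earth and Mars at roughly their closest and farthest
# separations (approximate distances, for illustration only).

SPEED_OF_LIGHT_KM_S = 299_792.458
CLOSEST_KM = 54.6e6      # ~54.6 million km at closest approach
FARTHEST_KM = 401e6      # ~401 million km at greatest separation

for label, distance_km in [("closest", CLOSEST_KM), ("farthest", FARTHEST_KM)]:
    minutes = distance_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: about {minutes:.1f} minutes one way")
# closest: about 3.0 minutes one way
# farthest: about 22.3 minutes one way
```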

Of particular relevance for distributed systems, Jayanti says, is the “relativity of simultaneity”: the idea that two observers watching events at two different locations may disagree about whether those events happen at the same time, depending on how fast each observer is moving relative to the events. The effect only becomes perceptible when the speeds involved are a significant fraction of the speed of light.

What this means is that observers—and computers—on board spacecraft whizzing about at different speeds will disagree on the order of events.
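A small numerical sketch makes this concrete (the numbers are illustrative, not drawn from the article): applying the Lorentz transformation to two events that are simultaneous for one observer shows a second, fast-moving observer seeing them in a different order.

```python
# Illustrative only: two events that one observer sees as simultaneous are
# not simultaneous for an observer moving at 60% of the speed of light.
# Units: distances in light-seconds, times in seconds, so c = 1.

import math

def lorentz_time(t, x, v):
    """Time coordinate of event (t, x) in a frame moving at speed v (c = 1)."""
    gamma = 1 / math.sqrt(1 - v**2)
    return gamma * (t - v * x)

v = 0.6  # observer moving at 0.6c along the x-axis

ta = lorentz_time(0.0, 0.0, v)   # event A: t = 0, x = 0
tb = lorentz_time(0.0, 1.0, v)   # event B: t = 0, x = 1 light-second

print(f"Moving observer's times: A at {ta:.2f} s, B at {tb:.2f} s")
# A at 0.00 s, B at -0.75 s: the moving observer sees B happen before A.
```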

“There’s no universal freezing of time, according to Einstein,” says Jayanti. “So, when our methods of reasoning about distributed systems depend on pausing at different moments in time, how will we design algorithms that behave correctly, and how do we verify that they are doing so?”

In his paper, which he presented on June 18 at the Association for Computing Machinery’s Symposium on Principles of Distributed Computing, Jayanti establishes a connection between the properties of classical, relativistic, and computational executions of distributed algorithms.

“The paper shows a way of taking algorithms that we have built for the classical world and making them work in the relativistic world,” he says.

In the paper, Jayanti considers a host of algorithms that have been proven to be correct on classical distributed systems and transports them to scenarios where observers watch them execute from different reference frames as the machines of the distributed system travel at relativistic speeds.

“The surprising result of the paper is that if the algorithm you’re running is classically correct, then every observer will agree that it is correct in a relativistic setting. Simultaneously, the observers might disagree about why the algorithm is correct,” he says.

Jayanti’s proof hinges on causality, the principle that a cause must precede its effect even where relativity operates.

The central idea behind the proof, he says, is to correctly formulate a distributed computing notion of causality that is independent of physics, and then to bind this purely mathematical notion to relativistic causality, which is a real physical one.
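In distributed computing, a physics-independent notion of causality is typically captured by a “happens-before” ordering in the style of Lamport’s logical clocks. The sketch below is a generic illustration of that standard idea, not code from Jayanti’s paper.

```python
# Generic illustration (not from the paper): Lamport logical clocks, a
# standard way to capture the distributed-computing notion of causality.
# Each machine keeps a counter; local events increment it, and every
# message carries a timestamp so the receiver orders itself after the send.

class Machine:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self, label):
        self.clock += 1
        print(f"{self.name}: {label} at logical time {self.clock}")

    def send(self, label):
        self.clock += 1
        print(f"{self.name}: send {label} at logical time {self.clock}")
        return self.clock  # timestamp travels with the message

    def receive(self, label, timestamp):
        # Causality: the receive is always ordered after the send.
        self.clock = max(self.clock, timestamp) + 1
        print(f"{self.name}: recv {label} at logical time {self.clock}")

earth, mars = Machine("earth"), Machine("mars")
earth.local_event("prepare data")
ts = earth.send("data")
mars.receive("data", ts)
mars.local_event("process data")  # guaranteed a later logical time than the send
```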

This result forms the foundation of how we can understand a relativistic distributed system, and there is a lot more work to be done, says Jayanti. “As we try to explore space more, with computation being a central tool in how we do this, having a concrete understanding of what’s going on, what should go on, and how we can design and verify correct systems is key.”

Harini Barath