Real-Time Networking Architecture

Origins

I come from a background in the games industry, and have written a lot of networking code for real-time strategy games. These games all involved a mix of both TCP/IP (connection oriented / streaming / guaranteed delivery) and UDP (datagram / potentially out-of-order / no guaranteed delivery) protocols. This mix of traits tended to create a pretty complicated architecture.

The logic behind the use of TCP/IP was that there were a large number of things that needed to be delivered correctly and in the right order, like player commands, information about levels and objects, etc. On the other hand, real-time updates of objects and transient things like projectiles were all things where we were only interested in the latest updates; we didn’t want the overhead of TCP/IP re-sends and guaranteed delivery, and with UDP we could trivially discard anything that was out of order (by including a sequence number) and only pay attention to the latest real-time updates.
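The sequence-number trick for discarding stale UDP updates can be sketched roughly like this (a minimal illustration, not code from any of those games; the field names are made up):

```cpp
#include <cstdint>

// Hypothetical per-object record of the newest update seen so far.
struct ObjectState {
    uint32_t last_seq = 0;  // sequence number of the most recent applied update
    float x = 0, y = 0;     // example real-time fields (e.g. a unit's position)
};

// Returns true if the update was applied, false if it was discarded as stale.
// An out-of-order or duplicate datagram is simply dropped; a later one will
// supersede it anyway. (A real implementation would also handle sequence
// number wraparound, which this sketch ignores.)
bool apply_update(ObjectState& obj, uint32_t seq, float x, float y) {
    if (seq <= obj.last_seq) return false;  // older or duplicate: discard
    obj.last_seq = seq;
    obj.x = x;
    obj.y = y;
    return true;
}
```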

The complications arose when we used TCP/IP to (reliably) say that something was created (and later destroyed) while at the same time using UDP to transport real-time updates about that object. Oftentimes we needed to be robust at the application level in order to handle the case when we got a late / out-of-order / unreliable update (UDP) about an object that had previously been reliably destroyed (TCP/IP). In general it was just a conceptual mess because the protocols had such different traits.

Further complicating things is the fact that TCP/IP is stream oriented and makes no guarantee that data you pass to it will arrive as a cohesive chunk (which is what we wanted). UDP on the other hand will either deliver your packet intact or not at all. We wrote messaging abstractions based around the idea of a “network command” that we expected to arrive intact or not at all, and making this work over TCP/IP required a lot of under-the-hood trickery. In hindsight this was just stupid; the protocol doesn’t work that way, so why did we try to force the issue?
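The usual form of that under-the-hood trickery is length-prefixed framing: because TCP delivers a byte stream in arbitrary fragments, the receiver has to accumulate bytes and peel off complete messages itself. A minimal sketch (assuming a 4-byte native-endian length prefix; not the actual format we used):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Each "network command" is sent as a 4-byte length prefix followed by the
// payload. On receive, TCP may hand us any fragment sizes, so we keep a
// growing buffer and extract only the messages that have fully arrived.
// Incomplete trailing bytes stay in the buffer for the next call.
std::vector<std::string> extract_messages(std::string& buffer) {
    std::vector<std::string> messages;
    while (buffer.size() >= 4) {
        uint32_t len;
        std::memcpy(&len, buffer.data(), 4);  // assumes sender's native endianness
        if (buffer.size() < 4 + len) break;   // message not complete yet
        messages.push_back(buffer.substr(4, len));
        buffer.erase(0, 4 + len);
    }
    return messages;
}
```

Note that the receiver may see half a message on one call and the rest on the next; this is exactly the stream behaviour that made the "intact or not at all" abstraction a poor fit for TCP/IP.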

A Simpler Approach

Some years later I happened to read an article on how id Software’s game Quake III Arena worked on a networking level. It seemed that John Carmack at id had had similar experiences to my own when it came to tricky and unreliable networking architectures during development of the Quake series, but with Quake III Arena he felt that he had finally “got it right”.

To summarize, the idea behind Q3A is simply that (using only UDP) the client application continually sends a single type of input packet, and the server continually responds with a single type of state packet. That’s it. There are lots of tricky details related to compression in there, but in general the idea seemed to be to get rid of complicated event-based protocols and express everything in terms of a single type of upstream packet (from the client) and a single type of downstream packet (from the server).

It just so happened that just prior to starting work on Dreamler I had done some experiments with a simple collaborative real-time networked game, using the ideas from Q3A. The client would continually send input (at about 20 Hz) including the position of the player avatar as well as some interaction information. The server would continually use these inputs to simulate a game world, a big part of which was allowing the player avatars to edit the world (which was a procedurally generated mass of tiles).

One new idea here was to have the server not track the “connection state” of any clients at all; any inputs that arrived would be written as player avatar position and interaction state into a permanent store, and the server would respond to one (1) input with one (1) state packet. The state packet back from the server included all the avatar information the server had hitherto recorded (in order for players to be able to see other players).
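The connectionless request/response loop can be sketched as follows (names and fields are illustrative, not the actual protocol; the store is an in-memory map standing in for the permanent store):

```cpp
#include <map>
#include <string>

// One input packet from a client: just its id and latest avatar state.
struct PlayerInput {
    std::string player_id;
    float x, y;  // avatar position
};

// Stand-in for the permanent store: latest known input per player.
using Store = std::map<std::string, PlayerInput>;

// The server keeps no per-connection state. One (1) input arrives, its
// avatar data is written into the store, and one (1) state packet goes
// back containing every avatar recorded so far.
Store handle_input(Store& store, const PlayerInput& input) {
    store[input.player_id] = input;  // record latest avatar state
    return store;                    // state packet: all avatars seen so far
}
```

Because there is no connection state, a client that goes silent simply stops being updated, and one that reappears is handled identically to a brand-new one.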

The game was procedural and practically infinite, so all clients could generate the initial state of the world themselves, but as the game allowed anyone to edit any part of the world these changes had to be transmitted somehow. Obviously as time passed the total number of edits would grow, and hence the total size of the world state would also grow. This would quickly have exceeded what UDP can transmit in a single packet (which was the design), so I came up with the second new idea.

The second new idea was simply to not store or send world state, but to store and send CHANGES to world state in the form of transactions. In hindsight I realize that this is basically what is recorded by banks to keep track of historical changes to your bank account; much better to know EVERYTHING that has ever happened and DERIVE what your current balance is than to ONLY store your current balance.
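The bank-account analogy in miniature: store every change ever made, and derive the current value on demand (a toy sketch, not Dreamler's transaction format):

```cpp
#include <numeric>
#include <vector>

// A change to the state, never the state itself.
struct Transaction {
    int amount;  // positive = deposit, negative = withdrawal
};

// The current balance is DERIVED by replaying the full history of changes.
int derive_balance(const std::vector<Transaction>& history) {
    return std::accumulate(
        history.begin(), history.end(), 0,
        [](int sum, const Transaction& t) { return sum + t.amount; });
}
```

The same replay applies to world state: interpreting every edit in order, starting from the generated initial world, reproduces the current world exactly.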

With this idea in place the server state packet, in addition to the latest avatar state, includes simply a list of world state edits. A client will thus continually pass to the server (as part of the input) an acknowledgement (ack) about how many world state edits it has received. This starts at 0 initially, and with each received input the server will look at the ack number and pack as many edits as can fit into a single UDP packet (starting at this number and potentially ending at the last edit that has happened). Once the client receives and interprets a bunch of edits he will change the ack in his input to be the number of the last received edit.
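The server side of this ack scheme can be sketched like so (a simplified illustration under assumed names; real edits would be binary, and the budget would account for headers and the avatar section):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// "edits" is the full, ever-growing list of world edits; "ack" is how many
// edits the client says it has already received. We pack whole edits, in
// order, until the next one would push the packet past the UDP byte budget.
std::vector<std::string> pack_edits(const std::vector<std::string>& edits,
                                    size_t ack, size_t budget_bytes) {
    std::vector<std::string> packet;
    size_t used = 0;
    for (size_t i = ack; i < edits.size(); ++i) {
        if (used + edits[i].size() > budget_bytes) break;
        used += edits[i].size();
        packet.push_back(edits[i]);
    }
    return packet;
}
```

After interpreting the packet, the client simply sets its ack to `ack + packet.size()` in its next input; if the packet is lost, the unchanged ack makes the server send the same edits again.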

The great thing about this is that it “just works”. There is no technical difference between the “loading the world” state and the normal state of being up to date and receiving edits in real-time. Since we are using UDP we simply piggy-back acks (from the client to the server) on top of real-time position and interaction information, and conversely respond (from the server to the client) with as many edits as we can, starting from the ack number that the client supplies. This effectively achieves a “poor man’s TCP/IP” in that over time all edits will eventually be transmitted from the server to each client, and everyone will be up to date with the state of the world as well as see real-time avatars moving around and making further edits to the world.

Dreamler is really just a game

For Dreamler, Thomas had mentioned wanting 60 frames per second graphics and real-time networking, which to me meant basing the networking on UDP. The “Q3A with transactions” approach seemed like a good fit, so when Samuel and I started work on the client and the server for Dreamler I suggested that we should try this approach. We got the basics up and running in just a couple of days, with a basic command line client in C++ and a server in C#. This became what we now refer to as our “Level 0” protocol, and that protocol has not changed since its inception almost 2 years ago.

The real-time avatar part of the data consists of the client screen center, the world space rectangle that corresponds to the screen, as well as the world space position of the mouse cursor. This allows clients to see exactly what others are doing in real time. The analogue of “world edits” in the game case are in the Dreamler case “client commands” that all have a higher-level meaning to the application; typical examples are “create an activity”, “move an activity”, etc., all “transactions” that modify the state of the game-board.
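A fixed-size layout for that avatar section might look roughly like this (field names are illustrative, not Dreamler's actual wire format):

```cpp
// Hypothetical fixed-size real-time avatar section: screen center, the
// world-space rectangle the screen corresponds to, and the cursor position.
struct AvatarData {
    float screen_center_x, screen_center_y;  // client screen center (world space)
    float view_min_x, view_min_y;            // world-space rectangle that the
    float view_max_x, view_max_y;            //   client screen corresponds to
    float cursor_x, cursor_y;                // world-space mouse cursor position
};
static_assert(sizeof(AvatarData) == 8 * sizeof(float),
              "all-float struct packs with no padding on typical platforms");

// Example use of the view rectangle: is this client's cursor visible on
// its own screen?
bool cursor_on_screen(const AvatarData& a) {
    return a.cursor_x >= a.view_min_x && a.cursor_x <= a.view_max_x &&
           a.cursor_y >= a.view_min_y && a.cursor_y <= a.view_max_y;
}
```

A fixed size keeps the real-time section trivially cheap to pack and unpack, at the cost of being the one application-specific piece of Level 0.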

Level 0 is basically application agnostic, except perhaps for the fact that our real-time avatar data section of the client input and server state is fixed in size. It is easy to imagine this approach working for any number of applications that have requirements similar to Dreamler’s. Since the beginning we have run at 10 Hz from the client, and the server still only responds when a client packet arrives. We have also added zip compression to both the client input and the server state, both to keep bandwidth down and to be able to pack more client commands into a single server state packet.

As our test game-board grew in size (number of transactions) we did however quite soon run into throughput problems, as transmitting large amounts of data over UDP in this way is very inefficient when compared to TCP/IP. What happened was that when connecting to a game-board the client had to sit through a pretty long process of watching all the historical transactions play out before their eyes before being “up to date”. To fix this issue we implemented a “history pre-fetch” over HTTP, getting all the past transactions in a giant compressed binary blob, and after that has been interpreted the client continues with normal input with the ack set to the last received transaction number.

The only problem that we have found with Level 0 as it currently stands is that we are still bound by the size of the Maximum Transmission Unit (MTU); the biggest UDP packet that routers on the Internet will typically allow. This is somewhere in the neighborhood of 1500 bytes, and some of our larger transactions can exceed this size by several times. As a result we are considering implementing an alternative protocol over Websockets that functions similarly to what Level 0 does over UDP.

It is interesting, regardless of the actual network transport protocol one might use, to consider that the server really doesn’t know anything about the MEANING of any client commands / transactions; it simply stores and re-transmits them to other clients as requested. This is indeed a powerful aspect of Level 0 and one of the reasons that the protocol has never changed. This did however turn out to have profound implications for Dreamler as an application, as the things that we built on top of Level 0 turned into a complexity nightmare…
