
Gamedev #4: Benefits of full-stack Rust

Efficiently synchronizing a client application with the server can be challenging. Today, I write about my recent experience with this topic and the benefits I found in using Rust for both the server and the client endpoints.

Everything in this post is based on my long-term hobby project called Paddlers, an online multiplayer game playable in the browser. The repository is here.

There can be many benefits to using a single language across all parts of a project. Some of them are purely from a business perspective, but I will not write about those. Instead, I want to take a look at the technical benefits that I can see.

First, I will give a short overview of the relevant game features and the software architecture of Paddlers, just enough to understand the two practical examples that follow. I selected the examples to represent some of my personal experience with using Rust as a full-stack development language.


Background for examples discussed in this post

The game Paddlers contains a variety of different game mechanics. For this post, we will take a closer look at the part that is essentially a simple tower-defence game.

Each player has a village, and units periodically walk through these villages. Players can place defensive buildings that attack the passing units. Once a wave of units has been defeated, it grants the player a reward.

On a side note, the theme of Paddlers is all about peaceful cooperation between different peoples. As a result, the units are not really attacking; rather, they are depressed ducklings visiting the town. And the defending towers are, for example, beautiful flowers which should help the ducklings overcome their sadness. But for this blog post, I will assume the traditional tower-defence theme, which is mostly based on violence and destruction.

Image: Example of a Paddlers lane with visitors (pre-alpha footage of Paddlers)

All units are stored in a PostgreSQL database, alongside the buildings and player information. The server applications access this database directly and generally operate on the data structures provided by diesel. The client in the browser, on the other hand, represents this data in an ECS (Entity component system) using specs.

To reduce the workload on the server, the final result of an attacking wave is computed only once, at the end. However, the client continuously has to recompute the health points of all units, so that they can be displayed correctly to the player.

In the end, the final result of the fight should be the same on the client and on the server. Otherwise, an inconsistent and wrong view will be displayed to the player. Considering that the player can insert new towers that influence the result at any time, this is not a trivial property to achieve. Fortunately, with the help of Rust on both ends, I found a setup that helps to reach this goal. Let me show you how I did it.

The workspace setup

How does a Rust project have to be set up to enable code sharing between the server and the client? There are probably other solutions (I am interested to learn about them!) but here is my take on it.

  1. Separate applications for the server(s) and the client. They have different compilation targets, in my case x86_64 and wasm32. (Server app 1, Server app 2, Client app)
  2. A shared library that can be used by both applications. (Shared lib)
  3. A cargo workspace spanning all the crates. (Cargo.toml)

This setup has worked out quite well for me. However, I quickly realized that when I wanted to use procedural macros defined in external crates on my shared types, for example to derive the traits necessary to make an enum usable with diesel, I had to pull the dependency into the shared library. But because I did not want to include a dependency like diesel in my client app, I added some feature flags in the shared library that the applications can select from (see Cargo.toml). This way, adding derive annotations on shared types looks something like this.

#[derive(Debug, Clone)]
#[cfg_attr(feature = "graphql", derive(juniper::GraphQLEnum))]
#[cfg_attr(feature = "sql_db", derive(DbEnum), DieselType = "My_shared_enum")]
pub enum MySharedEnum {/* ... */ }

This is now an enumeration that has a corresponding PostgreSQL type in the database and even a GraphQL type, which is going to be important soon.
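The same feature flags can also gate entire modules of the shared library, so that, for example, diesel-specific code is only compiled when a server application selects the corresponding feature and never ends up in the wasm32 build. A minimal sketch (the module names here are illustrative, not the actual Paddlers modules):

// In shared lib: src/lib.rs (sketch with illustrative module names)
#[cfg(feature = "sql_db")]
pub mod schema; // diesel table definitions, server-only

#[cfg(feature = "graphql")]
pub mod graphql_helpers; // juniper-specific helpers, server-only

pub mod models; // shared types, compiled for every target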

Connecting two API endpoints written in Rust

There are two separate APIs in Paddlers. One is a simple REST API that the client uses to send game commands to the server upon player actions. The data is transmitted over the network as JSON, which can be marshalled using serde.

For example, to place a new building, the data is defined as follows.

#[derive(Clone, Serialize, Deserialize)]
pub struct BuildingPurchase {
    pub village: VillageKey,
    pub building_type: BuildingType,
    pub x: usize,
    pub y: usize,
}

The client can create an instance of this structure and the server then operates on the exact same type. This is pretty cool because it makes defining a new API function quick and easy. Furthermore, the client and the server inherently agree on the data format, since they share the same source code.
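As a rough sketch of what this buys, the same shared type can be serialized on one end and deserialized on the other with serde_json. The VillageKey constructor and the BuildingType variant below are assumptions for illustration, and the actual HTTP handling in Paddlers is left out:

// Sketch only: the shared type is used on both ends of the REST API.
fn building_purchase_round_trip() {
    // Client side: build the command and serialize it into the request body.
    let msg = BuildingPurchase {
        village: VillageKey(1),               // assumed newtype constructor
        building_type: BuildingType::Flowers, // assumed enum variant
        x: 3,
        y: 4,
    };
    let body = serde_json::to_string(&msg).expect("serialization failed");

    // Server side: deserialize the request body into the very same type.
    let received: BuildingPurchase = serde_json::from_str(&body).expect("deserialization failed");
    assert_eq!(received.x, msg.x);
}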

But there is also that second API, which uses GraphQL to define the interface. This interface is used for much more complex data transmission, namely all information about the world, the players, and the stuff that belongs to the players. After a player has logged in, the browser has to load a selection of that data from the server somehow.

For example, consider the data for an attacking unit, stored in a SQL table like this:

CREATE TYPE unit_color 
AS ENUM ('yellow', 'white', 'camo')
;
CREATE TABLE attackers (
	id bigserial NOT NULL,
	home int8 NOT NULL,
	color unit_color NULL,
	speed float4 NOT NULL,
	hp int8 NOT NULL,
	CONSTRAINT attackers_pkey PRIMARY KEY (id)
);

Some player-specific rows from this table have to be transferred to the server application and then to the client in the browser. I thought this could be another opportunity to use the exact same data structures and marshalling code on both endpoints. However, I have not found the perfect solution for this so far. But I have something that works, so keep on reading.

Using diesel, the Rust representation of a data row looks like this:

// In shared lib: src/models.rs
#[cfg(feature = "sql_db")]
#[derive(Debug, Queryable, Identifiable, AsChangeset, Clone)]
pub struct Attacker {
    pub id: i64,
    pub home: i64,
    pub color: Option<UnitColor>,
    pub speed: f32,
    pub hp: i64,
}
#[derive(Debug, PartialEq, Eq, Clone, Copy, Serialize, Deserialize)]
#[cfg_attr(feature = "graphql", derive(juniper::GraphQLEnum))]
#[cfg_attr(feature = "sql_db", derive(DbEnum), DieselType = "Unit_color")]
pub enum UnitColor {
    Yellow,
    White,
    Camo,
}

This covers the interface between the database and the server application. To expose an interface for the client to use over HTTP, I am using juniper to create a GraphQL schema. Ideally, all the fields of the structure are offered one-to-one on the GraphQL interface. This could be done with juniper's procedural macros and an inherent impl block on the Attacker type. But inherent impl blocks are only allowed in the crate where the type is defined, which would be the shared library. To define it in the server application instead, I had to wrap the type in a helper type. It is simply called GqlAttacker and contains nothing but an instance of an Attacker.

pub struct GqlAttacker(pub paddlers_shared_lib::models::Attacker);

The GraphQL fields are defined through methods which load the requested data. In this case, the fields that we are interested in are simply fields of the underlying data object, hence the method bodies are very simple.

// Server-only code (paddlers-db-interface/src/graphql/gql_public.rs:201)
#[juniper::object (Context = Context)]
impl GqlAttacker {
    pub fn id(&self) -> juniper::ID {
        self.0.id.to_string().into()
    }
    pub fn color(&self) -> &Option<paddlers_shared_lib::models::UnitColor> {
        &self.0.color
    }
    pub fn hp(&self) -> i32 {
        self.0.hp as i32
    }
    pub fn speed(&self) -> f64 {
        self.0.speed as f64
    }
    // ...
}

Have you noticed how often the primitive types are cast in the snippet above? This is because I have not gotten around to implementing custom scalar types yet, and i32 and f64 correspond to the default GraphQL scalars Int! and Float! as defined in the latest GraphQL specification, whereas in the database I used i64 and f32. Fixing this would be easy, but I thought I would mention it to show how type preservation through GraphQL can be an issue.

I used the graphql-client library to call this interface from the client. When I give its CLI tool the address where the GraphQL server interface is accessible, it reads the schema from it and keeps a copy to type-check queries later on.

For example, this is a query which selects all attack waves for a specified village. For each attack, it then selects the fields defined earlier.

query AttacksQuery($village_id: Int!) {
  village(villageId: $village_id) {
    attacks {
      id
      units {
        attacker {
          id
          color
          hp
          speed
        }
      }
    }
  }
}

From this query definition, client-side Rust code is generated to execute the API call and parse the response into the defined values. For each query, a set of new types is generated as well, to allow type-safe access to all fields.
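The generation is driven by a derive macro that points at the stored schema copy and the query file. Roughly like this, where the file paths are placeholders for wherever those files live in the project:

// Client-side sketch: generates the module attacks_query with all query types.
#[derive(graphql_client::GraphQLQuery)]
#[graphql(
    schema_path = "api/schema.json",
    query_path = "api/attacks_query.graphql",
    response_derives = "Debug"
)]
pub struct AttacksQuery;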

This is great, except that now I have a new set of types that are only used on the client. Even for the enum UnitColor, a new type is generated for every query that uses it. To convert it to the original type used by the server, I had to add some boilerplate code.

impl Into<UnitColor> for &attacks_query::UnitColor {
    fn into(self) -> UnitColor {
        match self {
            attacks_query::UnitColor::YELLOW => UnitColor::Yellow,
            attacks_query::UnitColor::WHITE => UnitColor::White,
            attacks_query::UnitColor::CAMO => UnitColor::Camo,
            attacks_query::UnitColor::Other(_err) => panic!("Could not parse unit color"),
        }
    }
}
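For reference, the generated enum inside the attacks_query module looks roughly like this (simplified; the exact shape depends on the graphql-client version):

pub enum UnitColor {
    YELLOW,
    WHITE,
    CAMO,
    Other(String), // fallback for enum values unknown at code-generation time
}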

Since the object fields are defined as methods, which may perform complex computations, there is, in general, no server-side object corresponding to the objects used on the client. Thus, it makes sense that the client has its own generated types. But for enums, I would like to find a solution that auto-deserializes to my original types rather than newly generated ones. Maybe it is already possible with custom scalars or something similar, but I have not tried it yet.

So, that is not ideal. But it is not the end of the world, either. Having GraphQL as an intermediate interface and types generated from it is more than good enough for me when it comes to objects. This still gives me compile-time guarantees that the server code and the client code use matching data structures, because the schema is generated from the server's source and the client code for queries is type-checked against that schema. But there is no actual need for, or benefit from, having Rust on both ends here.

As an example of this compile-time safety, if I add another colour to the shared enum UnitColor, the schema will have this option added automatically. If I then forget to add a line to the boilerplate code above that handles the new variant, I will receive a compile-time error in the client code because the compiler knows the pattern match is non-exhaustive. (The Other(_) => panic!() arm is not a wildcard; it only handles parsing errors.)

Conclusion on API building

To conclude what I have shown so far: when creating a REST API directly, without an intermediate specification, using Rust on both ends really simplifies the implementation.

On the other hand, the fact that I used Rust on both ends did not help me much with building APIs based on GraphQL. Rust's type system is useful and there is decent tooling around it, but all of this could, in essence, also work with two different languages. There is certainly room to improve my approach, but I could not immediately find an obvious way to do so.

In both kinds of API designs, I was able to use the Rust compiler to guarantee that server and client use the same data structures, which extends the benefits of a strongly-typed language across modules.

Next, I will show an example where the advantage of sharing types between client and server goes beyond the API.

Sharing business logic across nodes

Another benefit comes up when business logic has to be implemented on both the frontend and the backend. Business logic can be quite complex, and especially when it is defined by business analysts or someone else, we programmers are not familiar with the rules given to us. It may then happen that we have to fix the code again later due to misunderstandings. To make things worse, I have found that business analysts are capable of changing their minds about what the rules are, often after seeing a first complete implementation.

Given these circumstances, we would really like to have the business logic implemented centrally, without duplication, even more so than other parts of our code. For Paddlers, I experimented with this idea and tried to write business logic in the shared library whenever it is used by more than one module. But considering the vastly different data representations used by the client and the server, it may not be quite as easy as I had initially hoped.

Let us look at an example. Remember the tower-defence game mechanic I mentioned earlier? Units walk a static path through the player's village and are affected by various buildings, defending units, and player actions. The client uses an ECS to efficiently update the health and position of all attackers every frame. For the server, this would be too heavy a load, so it only recomputes the health after the attacker wave has gone through.

Image: Example of health displayed in the client (pre-alpha footage of Paddlers)

For this example, consider the computation of the health points a single unit has lost at a given moment in time. To focus on just one aspect, we only look at the damage due to defending towers. Further, we assume that each tower attacks each unit at most once and always does so as soon as the unit is in range. Under these assumptions, it is enough to collect a list of all buildings that have attacked the unit.

To compute the lost health points in this model, the past movement of the unit has to be simulated, which I decided to do in steps whose length equals the tile grid width. For each step, we need to compute which buildings already existed and were in range.

The step length is a conscious decision to simplify the computation. With a smaller step length, the result could differ in some cases, for example when a new building is placed while the unit is right at the edge of the building's range. It is therefore crucial that client and server use the same step size, or they will not always agree on the result.

To extract the information I need for the step-wise simulation, I defined two traits in the shared library. About the attacker, I need to know its speed and the time it arrived in the town. The path the attacker takes is assumed to be static to simplify the example. For the buildings, I decided to just add a function that returns a list of all buildings within range of a certain tile at a given moment in time.

/// Provides information about a hobo currently attacking
pub trait IAttacker {
    fn speed(&self) -> f32;
    fn arrival(&self) -> Timestamp;
}
/// Trait for town information required to perform hp computations
pub trait IDefendingTown {
    type BuildingId: Ord + PartialEq;
    fn buildings_in_range(&self, index: VillageTile, time: Timestamp) -> Vec<(Self::BuildingId, i32)>;

    // ...
}

The client and the server can both implement these traits for different types. On the client, the data will be fetched from the ECS, whereas the server will use data loaded from the database. The associated type BuildingId on the server is the primary key of the building in the PostgreSQL database. On the client, it is the entity identifier used in the ECS. Crucially, both are integers that uniquely identify buildings.

Using only the functions defined in the traits, I then programmed the health computation of a unit. A simplified version of it is provided below.

pub trait IDefendingTown {
    
    // ...

    fn touched_buildings<ATTACKER: IAttacker>(
        &self,
        now: Timestamp,
        attacker: &ATTACKER,
    ) -> Vec<(Self::BuildingId, i32)> 
    {
        let mut out = vec![];
        let mut t = attacker.arrival();
        let t_per_tile = Timestamp::from_float_seconds(1.0 / attacker.speed());
        // Path is fixed to simplify the sample code
        let tiles = magic_function_that_computes_path();
        for tile in tiles {
            if t > now {
                break;
            }
            let mut buildings = self.buildings_in_range(tile, t);
            out.append(&mut buildings);
            t = t + t_per_tile;
        }
        out.sort();
        out.dedup();
        out
    }
}

The code above is an implementation of a piece of business logic, which is defined only once, in the shared library. Nevertheless, both server and client have it compiled into their binaries.

The server uses the code whenever it checks the outcome of an attacker wave. The client uses it when loading the initial state, to determine the health points a unit has lost before the current moment. Afterwards, the client uses a more efficient way to continuously determine, every frame, which new buildings have attacked the unit. That code is not shared because the server does not need continuous computations.

Other parts, like the geometric proximity calculation, are still duplicated in the current implementation. But with more complex traits, all of this could become part of the shared library.

The cost of the shared logic is that everything has to fit through the artificial abstraction layer, which can sometimes make the implementation a bit more cumbersome. On the other hand, I am now also able to write unit tests for this code by defining mock structures that implement the same traits. Without an abstraction layer, it is often complicated to write tests effectively, as mocking the data can take much more effort.

The example above follows my own code in Paddlers very closely but it has been simplified as much as possible. If you are interested in the full code, here are the trait definitions, the client implementation, the server implementation, and the unit tests with their own implementation. (In that code, attackers are named hobos because of the underlying theme in Paddlers and instead of counting buildings, more general effects called auras are tracked.)
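To give an idea of how such a test can look without any real client or server data, here is a rough sketch with mock implementations of the two traits. The mock types and values are invented for illustration, the trait methods hidden behind the // ... above are ignored, and the sketch assumes that the fixed path consists of at least one tile:

// Mock attacker: just stores the two values the trait asks for.
struct MockAttacker {
    speed: f32,
    arrival: Timestamp,
}
impl IAttacker for MockAttacker {
    fn speed(&self) -> f32 { self.speed }
    fn arrival(&self) -> Timestamp { self.arrival }
}

// Mock town: pretends the same two buildings cover every tile at all times.
struct MockTown;
impl IDefendingTown for MockTown {
    type BuildingId = i32;
    fn buildings_in_range(&self, _index: VillageTile, _time: Timestamp) -> Vec<(i32, i32)> {
        vec![(1, 10), (2, 10)]
    }
}

#[test]
fn each_building_attacks_at_most_once() {
    let attacker = MockAttacker {
        speed: 1.0,
        arrival: Timestamp::from_float_seconds(0.0),
    };
    let touched = MockTown.touched_buildings(Timestamp::from_float_seconds(10.0), &attacker);
    // Although the mock reports both buildings as in range on every simulated
    // tile, the sort + dedup in touched_buildings counts each of them once.
    assert_eq!(touched, vec![(1, 10), (2, 10)]);
}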

Conclusion on shared business-logic

I presented how traits can be used to share fight-simulation logic across modules that store the same data in vastly different ways, taking advantage of the fact that Rust is the programming language for all modules. Although it adds a bit to the programming effort, it effectively removes code duplication and it also enables easier unit-testing.

Closing word

These have been some of the thoughts I had while working with Rust in a full-stack fashion. I would not claim there is anything novel in this post; it is just a report of the issues I ran into and how I solved them. There is probably not much professional interest in this kind of thing, but maybe some other hobbyists working on full-stack Rust projects will find it useful.

What do you think: is Rust everywhere a feasible choice for web projects with a WASM frontend? Can you see technical benefits other than those I have mentioned? I am interested to hear your opinions; the post will be linked on Reddit for discussion. Also, if you find any kind of mistake, I would appreciate it if you let me know!

Next post in series: Gamedev #5: Version 0.2 Released

Technology Stack

Rust

The Rust programming language had its first stable release in May 2015. Although it is at its core a systems programming language, it has been adopted rapidly in many different environments. Many programmers love using it and the community is growing quickly.

WebAssembly

WebAssembly is a new web standard (1.0 since October 2017) that allows running bytecode in the browser. While it is possible to compile C/C++ and other languages to Wasm, right now the tooling for Rust is definitely the most advanced, partly because both the Rust and the Wasm projects originated at Mozilla and are actively pushed by it.

GraphQL

GraphQL is used to define interfaces between services. It allows the client to send complex declarative queries and thereby reduces the number of round-trips. It typically makes the client implementation much easier than with traditional REST interfaces.