Building GeoLink: Preparing a Real-Time Game for Scale
Most real-time multiplayer games are built for single servers. What happens when you need to scale?
GeoLink is a real-time geography game where players chain cities together (Tokyo → Oslo → Ottawa). Currently, it runs on a single server. But we built it with distributed locking from day one, so when we need to scale, we won't have to rewrite core game logic.
The key insight: Most tutorials show you how to build a game that works on one server. We built a game that's ready for multiple servers, even if we don't need them yet.
The Challenge: Why Real-Time Multiplayer Gets Hard at Scale
The Single-Server Assumption
Most tutorials assume one server handling all game state. You build a Socket.IO server, store game state in memory, and everything works perfectly—until you need to scale.
Here's what happens in a typical real-time game:
- A player submits a city and the turn switches
- At the same time, a timer expires or another event fires
- Both try to modify game state simultaneously, potentially causing:
  - Race conditions in turn management
  - Inconsistent scores
  - State corruption
  - Lost updates
On a single server, you can use in-memory locks or simple synchronization. But what happens when you need multiple servers for reliability, performance, or geographic distribution?
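On one server, that synchronization can be as small as an async mutex keyed by game ID. A minimal sketch of the idea (the name `withGameLock` is illustrative, not from the GeoLink codebase):

```javascript
// Minimal per-game async mutex for a single server (illustrative sketch).
// Each game ID maps to a promise chain; updates on the same game run one
// at a time, in arrival order.
const gameLocks = new Map();

function withGameLock(gameId, fn) {
  const prev = gameLocks.get(gameId) || Promise.resolve();
  const next = prev.then(fn, fn);              // run fn once prior work settles
  gameLocks.set(gameId, next.catch(() => {})); // keep the chain alive on error
  return next;
}

// Usage: two concurrent read-modify-write updates on the same game are
// serialized, so neither write is lost.
const state = { turn: 0 };
async function demo() {
  await Promise.all([
    withGameLock('game:123', async () => {
      const t = state.turn;                      // read
      await new Promise(r => setTimeout(r, 10)); // simulate async work
      state.turn = t + 1;                        // write
    }),
    withGameLock('game:123', async () => {
      state.turn += 1;
    }),
  ]);
  return state.turn;
}
```

This works only because all handlers share one process's memory, which is exactly the assumption that breaks with multiple servers.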
Distributed Locking: A Standard Pattern, Done Early
The Problem: Race Conditions at Scale
Even in a turn-based game, race conditions can occur. For example, when a player submits a city and the turn switches, a timer might expire at the same time, or multiple servers might process different events for the same game:
Without locking:
- Server A processes a city submission → reads game state → switches turn → writes state
- Server B processes a timer expiration → reads game state (before Server A writes) → switches turn → writes state
- Result: Server B's write overwrites Server A's write. The city submission is lost, scores are inconsistent, and the game state is corrupted.
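The lost update above is easy to reproduce. A toy sketch, with an in-memory object standing in for shared game state (all names here are illustrative):

```javascript
// Demonstrates the lost update: two handlers read the same snapshot,
// then both write, and the second write clobbers the first.
let store = { cities: [], turn: 'A' };

const read = () => JSON.parse(JSON.stringify(store)); // snapshot read
const write = (snapshot) => { store = snapshot; };

function serverA_submitCity(snapshot) {
  snapshot.cities.push('Tokyo'); // record the submission
  snapshot.turn = 'B';           // switch the turn
  write(snapshot);
}

function serverB_timerExpired(snapshot) {
  snapshot.turn = 'B';           // timer also switches the turn
  write(snapshot);
}

// Both read BEFORE either writes — the classic interleaving.
const snapA = read();
const snapB = read();
serverA_submitCity(snapA);
serverB_timerExpired(snapB); // overwrites Server A's write
// store.cities is now empty again: the city submission is gone.
```

The turn looks correct afterward, which is what makes this class of bug so easy to miss: the state is plausible, just wrong.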
With distributed locking:
- Server A acquires lock for `game:123` → reads state → processes submission → switches turn → writes → releases lock
- Server B tries to acquire lock → waits (lock held by Server A) → acquires lock → reads state (now includes the submission) → processes timer → writes → releases lock
- Result: All updates are processed correctly, scores are consistent, and game state is valid.
The Solution: Redis-Based Distributed Locks
We use Redis for atomic lock acquisition. Locks automatically expire to prevent deadlocks if a server crashes. All game state updates go through a lock manager that ensures only one server can modify a game's state at a time.
Why This Matters (When We Need It)
This architecture will enable:
- Horizontal scaling: Add more servers without rewriting game logic
- Race condition prevention: Handles concurrent updates that could break game state
- Reliability: If one server crashes, others can continue handling games
- Geographic distribution: Run servers in different regions, all coordinating through Redis
Most real-time games don't do this upfront. They're built for single servers and require significant rewrites to scale. We built it this way from day one, even though we don't need it yet. It's a bit of over-engineering, but it's the kind of over-engineering that pays off when you actually need to scale.
The Game: Quick Overview
GeoLink itself is simple: a real-time geography game where players chain cities together (Tokyo → Oslo → Ottawa). The interesting decision wasn't the game; it was building for scale from the start, even though we don't need it yet.