The framework presented here is built on top of Microsoft’s XNA toolset and managed runtime environment. The key objective is to comprehensively unite all necessary components into a single set of interfaces for developing networked games for the Xbox 360.
The managed nature of the runtime environment is, in principle, the wrong approach for building high-performance applications. But it is currently the only way for an indie developer to gain access to the glamorous world of today’s gaming consoles.
It is incredibly difficult to synchronise a real-time system with complex state over a conventional network that suffers from varying quality of service. Latency, packet loss and limited bandwidth are the main challenges to address. Consider a global connection, e.g. Zurich to New York: the air-line distance between these two cities amounts to roughly 6,000 km. Signals travel at about 65% of the speed of light in fibre, so this connection already lags with an RTT of ~60 ms. Intermediate routers make things worse, since each look-up and forwarding procedure takes additional time to execute. Taking these facts into account, games should still work fine with average latencies between 25 and 250 ms. We therefore developed a prediction-based algorithm that smoothly translates objects even if information is temporarily missing.

We use TCP for critical packets that must be delivered reliably to sustain essential synchronisation; UDP is used for frequent packets with low priority. The network topology can basically be selected freely. However, we used a hybrid topology for the two game releases so far: most positioning and voice chat data is distributed in a peer-to-peer manner, whereas critical game state and unit selection information is hosted by one machine that acts as server, with the remaining participants as clients.

Since Microsoft promises that every Xbox 360 online game can be played smoothly over an ISDN connection or better, bandwidth is limited to 8 kbit/s by the XNA framework. In order to maximise transmission efficiency, we inserted another network layer on top of UDP/TCP/XNA which disassembles and reassembles value types, saving bandwidth at the cost of a small precision loss. In the case of our game, we are able to keep up to 8 players synchronised, including team-based voice chat, without exceeding the 8 kbit/s limit! The network protocol we designed for this purpose consists of 36 different packet types.
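The value-type disassembly described above can be illustrated by quantisation: a minimal sketch, assuming a position component within a known world range is mapped onto a 16-bit integer, halving the payload at the cost of a small precision loss. The range, field names and function names here are illustrative, not the framework's actual protocol.

```python
import struct

# Assumed world-coordinate range; the real protocol would pick its own.
WORLD_MIN, WORLD_MAX = -1024.0, 1024.0

def quantize(value: float) -> int:
    """Map a float in [WORLD_MIN, WORLD_MAX] onto a 16-bit unsigned integer."""
    clamped = max(WORLD_MIN, min(WORLD_MAX, value))
    scale = (clamped - WORLD_MIN) / (WORLD_MAX - WORLD_MIN)
    return round(scale * 0xFFFF)

def dequantize(raw: int) -> float:
    """Reverse mapping; error is at most half a quantisation step
    (~0.016 world units for this range)."""
    return WORLD_MIN + (raw / 0xFFFF) * (WORLD_MAX - WORLD_MIN)

def pack_position(x: float, y: float, z: float) -> bytes:
    # 6 bytes on the wire instead of 12 for three raw 32-bit floats.
    return struct.pack('<3H', quantize(x), quantize(y), quantize(z))

def unpack_position(payload: bytes):
    return tuple(dequantize(v) for v in struct.unpack('<3H', payload))
```

The same idea extends to velocities, angles and other value types; each packet type simply defines which fields are quantised and at what resolution.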
The XNA toolset provides some basic functionality to read out whether certain buttons on the gamepad are pressed or not. We improved on this basic concept with an event-driven component that dynamically invokes delegates on the object that currently holds the user focus. We then extended this component to work cross-platform (Windows and Xbox 360), including a mapping component that independently assigns keyboard, mouse and gamepad events to a common event-based input interface.
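The event-driven idea can be sketched as follows: raw device states are polled each frame, state transitions are turned into events, and the matching delegate is invoked on whichever object holds the focus. All names here (`InputDispatcher`, `on_button_down`, the mapping entries) are illustrative assumptions, not the framework's actual API.

```python
class FocusTarget:
    """An object that can receive input events while it holds the focus."""
    def on_button_down(self, button: str): pass
    def on_button_up(self, button: str): pass

class InputDispatcher:
    def __init__(self):
        self.focused = None            # object currently holding user focus
        self._previous = {}            # last polled device snapshot
        # Mapping layer: device-specific names -> common logical buttons,
        # so keyboard, mouse and gamepad feed one shared interface.
        self.mapping = {'Key.Space': 'Action', 'GamePad.A': 'Action'}

    def poll(self, current):
        """Compare the new device snapshot with the previous one and fire
        events only on state transitions (edge-triggered, not level)."""
        for raw, pressed in current.items():
            was = self._previous.get(raw, False)
            if pressed != was and self.focused is not None:
                logical = self.mapping.get(raw, raw)
                handler = (self.focused.on_button_down if pressed
                           else self.focused.on_button_up)
                handler(logical)
        self._previous = dict(current)
```

Because both `Key.Space` and `GamePad.A` map to the same logical `Action` event, game objects never need to know which physical device produced the input.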
Today’s input devices often support tactile feedback technology that allows forces and/or vibrations to be applied back to the user’s input device. The framework automatically detects whether such a feedback configuration is available for the currently attached device. A set of functions allows differently shaped signals to be output, such as sine and square waves or simple ramp and constant forces. A proximity-dependent variable can be attached to this output signal in order to simulate a 3D feedback environment.
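A minimal sketch of such proximity-dependent feedback, under the assumption of a linear distance falloff: a shaped base signal is scaled by the distance between the player and the vibration source, so rumble intensity conveys nearness. Function names and the falloff model are illustrative.

```python
import math

def base_signal(shape: str, t: float, frequency: float = 2.0) -> float:
    """Return a value in [0, 1] for the requested waveform at time t."""
    phase = math.sin(2 * math.pi * frequency * t)
    if shape == 'sine':
        return 0.5 + 0.5 * phase
    if shape == 'square':
        return 1.0 if phase >= 0 else 0.0
    if shape == 'ramp':
        return (t * frequency) % 1.0
    return 1.0  # constant force

def proximity_gain(distance: float, max_range: float = 50.0) -> float:
    """Assumed linear falloff: full vibration at distance 0, none beyond max_range."""
    return max(0.0, 1.0 - distance / max_range)

def motor_amplitude(shape: str, t: float, distance: float) -> float:
    # The value fed to the device's vibration motor each update.
    return base_signal(shape, t) * proximity_gain(distance)
```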
The XNA toolset provides a low-level programming interface that takes advantage of hardware acceleration to translate and display textures and 3D objects. We extended this basic concept to be able to rotate, scale and smoothly fade textures. This was achieved by extending the Direct3D pipeline using HLSL (High Level Shader Language). Rendering scenes that are many times larger than the current user’s view can waste an extensive amount of CPU/GPU cycles. It was therefore inevitable to build an intermediate graphics layer that decides which objects are actually rendered and which are not. The graphics component also contains a particle system to visualise smoke and explosions, as well as a video player. Moreover, the terrain of the virtual game environment can be linked with meta data that describes the surface appearance (friction, slip, medium); objects moving on this surface react according to the defined meta data. We additionally designed and implemented a pixel shader written in HLSL that renders any 3D scene in anaglyph mode, allowing play in real 3D using red/blue glasses.
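The visibility decision of such an intermediate layer can be sketched with a simple bounding-box test: each object carries an axis-aligned bounding box, and only boxes overlapping the camera's view rectangle are submitted to the renderer. This 2D sketch with illustrative class names stands in for whatever culling the actual layer performs.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box in world coordinates."""
    min_x: float
    min_y: float
    max_x: float
    max_y: float

    def intersects(self, other: 'AABB') -> bool:
        return (self.min_x <= other.max_x and self.max_x >= other.min_x and
                self.min_y <= other.max_y and self.max_y >= other.min_y)

@dataclass
class SceneObject:
    name: str
    bounds: AABB

def visible_objects(objects, view: AABB):
    """Cull everything outside the view before spending GPU time on it."""
    return [obj for obj in objects if obj.bounds.intersects(view)]
```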
The audio system manages multiple output lines to mix customised sound effects and simulates a 3D environment depending on the user’s position in the virtual world. All effects can be parametrised (pan, pitch, volume and distance) at runtime. Meta data can be applied to simulate dynamic, site-specific sound effects (e.g. birdsong in forests, the rushing sea on beaches, etc.).