Spatial Rebasing: An Unreal Engine Odyssey


Taming floating point precision errors

(Especially in space games!)

Developing a game on a grand scale (solar system or galactic) is fraught with technical complications. This is quadrupled for any game built around a multiplayer experience. One of the most elementary complications that inevitably pops up: floating point precision errors in graphics and physics. This is something Epic Games has invested heavily in addressing via Unreal Engine 5 and its upcoming Large World Coordinates (LWC) feature. Will they succeed? Let’s talk about it!

Defining the problem

“Naively, a big world game, we will make.”
Yoda dev

Let’s create a new level in your favorite game engine… We’re going to make the World’s largest Ice hockey game!! The table is as wide as the Earth!

We set the surface’s friction value to “Ice Hockey” (zero), and kick a hockey puck. So far so good. But we want to see the hockey puck hit the goal, which is 6,371 km away. Let’s attach a camera to the puck to focus on its journey. At some point the puck starts to appear contorted. And the contortions become ever more severe the farther we get from the Origin. What theeeee?

“Floating point precision errors!” you shout. Ahhh, I can see you’ve made this game before. How does the issue affect our game?

Floating point precision errors and where to find them

Rendering

As we’ve seen, floating point precision errors here can cause corrupted rendering. Near the Earth’s surface, a 32-bit float can only resolve positions down to about half a meter. If you render a character as far from the origin as Earth’s radius, the mesh won’t be mis-positioned. Rather, every vertex of the mesh will be. Differently. Every frame. Congrats, you’ve made Human Soup. Single precision floats give you about 7 significant digits. The Earth’s radius is about 6,371,000 meters. 7 digits.

Let’s pick a random number, move that far from the origin, and work out how much error we’ll see in our vertex positions:

  • Pick a random number
    • Hint: pick 16,777,216 (that’s 2^24)
  • Convert 16,777,216 to its binary representation, the way the CPU’s float significand sees it.
    • It’s 1 0000 0000 0000 0000 0000 0000 (a 1 followed by 24 zeros)
  • Increment it by the least amount possible
    • 1 0000 0000 0000 0000 0000 0001
  • Convert that back to a 32-bit float
    • Answer? You can’t. That value needs 25 significant bits, but a 32-bit float’s significand only holds 24. 16,777,217 is the first integer a 32-bit float cannot represent; the nearest representable values are 16,777,216 and 16,777,218, a full 2 meters apart.

Beyond 8,388,608 (2^23) a 32-bit float cannot represent fractional parts at all, and beyond 16,777,216 it cannot even represent every integer.
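
If you want to check this yourself, here’s a quick standalone test in plain C++ (standard library only, nothing engine-specific):

#include <cmath>
#include <cstdio>

int main()
{
    // Spacing between adjacent floats just above 2^23 is already a full meter.
    float f23 = 8388608.0f;   // 2^23
    std::printf("%f\n", std::nextafterf(f23, INFINITY) - f23);   // 1.000000

    // Just above 2^24 the spacing doubles to two meters.
    float f24 = 16777216.0f;  // 2^24
    std::printf("%f\n", std::nextafterf(f24, INFINITY) - f24);   // 2.000000

    // 16,777,217 itself isn't representable; the literal rounds away.
    std::printf("%f\n", 16777217.0f);                            // 16777216.000000
    return 0;
}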

Floating point precision corruption when rendering

Physics

Floating point precision errors are worse in physics. For spatial position values, we have déjà vu all over again - the same issues we had with graphics. But now there are issues with velocities as well. Say we attach two hockey pucks together with a physics constraint/joint. They’re moving independently, but the constraint generates forces to keep them together. If the velocities of the objects are high enough, precision errors between the two velocities can create phantom accelerations/forces that break the joint. Even if the object is near the origin. Floating point errors in the velocity blew up our joint. So, there’s that.
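
Here’s a toy illustration of the velocity half of the problem (my own sketch, not engine code). At orbital speeds, a 32-bit float can’t even represent the relative velocity the constraint is trying to maintain:

#include <cstdio>

int main()
{
    // Two constrained bodies at low-orbit speed, ~7,700 m/s. Near 7,700 the
    // spacing between adjacent 32-bit floats is 2^-11 m/s (~0.49 mm/s), so
    // the solver can't see relative velocities any finer than that.
    float vA = 7700.0f;
    float vB = 7700.0005f;   // intended relative velocity: 0.5 mm/s

    // Prints 0.000488281, not 0.000500000 -- a ~2% phantom error before the
    // constraint has even done anything.
    std::printf("%.9f\n", (double)(vB - vA));
    return 0;
}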

Collisions

And then there’s collisions. You put a character on the Earth’s surface, and the player reports that every time the character bumps into the Earth they’re flung off in some random direction. The smart dev that you are, you’ve determined it’s because the Earth is rotating underneath the player, but the collision reaction was calculated as if that surface point were stationary. The player, the Earth’s surface, and the camera are all moving together in one direction - everything appears stationary through the camera lens. But you’ve determined collisions happen as if the Earth were not rotating… So bumping the Earth’s surface affects the player as if the surface point were moving in a totally different direction. It “flings” the player’s character.

Technically this is not (yet) a precision issue, it’s a physics coordinate system issue… The physics solver has to be “told” the Earth’s angular velocity - the Earth must be a physics Rigid-Body, and then the solver has to make a bunch of other assumptions to work out the mesh-sub-chunk velocity + angular velocity AND know it SHOULD take all that into account for the collision etc etc etc etc. And that’s what’s going wrong. It may not be a precision issue but it’s still an issue the player sees when you naively plop the Earth down at the coordinate system’s origin, and then “spin” the Earth yourself to emulate the planet’s daily rotation.

But must you “spin” the Earth? No, you don’t have to. With a stationary Earth it’s much easier to generate realistic-looking collisions. But then you’re doing rigid-body physics in a “non-inertial” frame. An “inertial frame” is one whose coordinate system is neither accelerating nor rotating. The equations used to simulate rigid-body physics are only valid in an inertial frame. In our rotating frame, rigid bodies won’t move correctly. Objects that should stay in orbit are instead flung to oblivion. An important detail for a Spacecraft Game. This can be overcome, however, by applying “phantom forces” to every rigid body on every frame.

The point is…

At Earth’s scale 32-bit graphics and physics + collisions will be completely broken.


Solutions

Like I always say, there’s 101 solutions for any problem… None are “right” or “wrong”, they’re just optimized to different criteria. Your role is to understand which criteria are most important for your case, then identify which solution best meets those needs. Answers will vary from game to game.


“World Origin Rebasing”

World Origin Rebasing simply means:

  • When the camera’s focus moves too far from the origin
    • Teleport it back to the origin, and move everything else in the scene equally

If everything moves equally including the camera, nothing will look like it moved, right?
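
Right. The whole trick fits in a few lines. A minimal sketch, with hypothetical types and names rather than Unreal’s actual API:

#include <vector>

// Hypothetical minimal types -- not the Unreal Engine API.
struct Vec3 { double X = 0, Y = 0, Z = 0; };
Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.X - b.X, a.Y - b.Y, a.Z - b.Z }; }

struct Actor { Vec3 Position; };

// Shift the entire scene so the camera's focus lands back on (0,0,0).
// Every relative position is unchanged, so nothing appears to move, but
// every absolute coordinate is small again.
void RebaseWorldOrigin(std::vector<Actor>& scene, Vec3& cameraFocus)
{
    const Vec3 shift = cameraFocus;
    for (Actor& actor : scene)
        actor.Position = actor.Position - shift;
    cameraFocus = Vec3{};   // the focus itself is now the origin
}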

This is Unreal Engine’s “World Origin Rebasing” in a nutshell. Also, many games do this themselves without any game engine support. One downside is that not everything enjoys being teleported. Does your particle system support relocating World Space particles? Do your shaders use World Space for tri-planar terrain texture coordinates? Are any components using position deltas to compute an implied velocity? Etc. Many features were developed under the assumption that the coordinate system does not move; they break if it does. The only one who can weigh these tradeoffs is you - the game developer. You’re the only one who knows what specific features your game requires.

Btw, I’ve never used the Unreal Engine implementation myself and can’t really speak to it. All things equal it’s best to use an engine’s native support for a feature, right? On the other hand, the engine doesn’t always have your specific use-case in mind… For instance I doubt Unreal Engine does Velocity Rebasing, and without it, following the ISS around its Earthly orbit would likely thrash position-only rebasing.


Velocity Rebasing

“Velocity Rebasing”, I said? What’s that? It’s just the same idea as position rebasing, but applied to velocities. You can relocate objects from one inertial frame (constant velocity) to another frame that has a constant but different velocity. In other words, to prevent velocities from becoming too large, you can adjust the velocity of everything in the scene by an equal amount to bring them back near zero.
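
The same loop as the origin-rebasing sketch above, just applied to velocities (hypothetical types and names again):

#include <vector>

// Same hypothetical types as the position-rebasing sketch.
struct Vec3 { double X = 0, Y = 0, Z = 0; };
Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.X - b.X, a.Y - b.Y, a.Z - b.Z }; }

struct Body { Vec3 Velocity; };

// Subtract one shared reference velocity from every body. This moves the
// scene into a different inertial frame: relative velocities (the physically
// meaningful part) are untouched, but the raw numbers stay near zero.
void RebaseVelocities(std::vector<Body>& scene, const Vec3& referenceVelocity)
{
    for (Body& body : scene)
        body.Velocity = body.Velocity - referenceVelocity;
}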

Velocity rebasing comes with many of the same issues as World Origin Position Rebasing. Many engine features were developed under the assumption that the coordinate system is stationary with respect to the fixed earth. In game development it always seems like one critical feature DEPENDS on you doing XY&Z, but another critical feature BREAKS if you do XY&Z. That very much applies here. The fun is in figuring out which workaround path most aligns with what matters to you (“easily?” “quickly?” “cheaply?” “robustly?” “performantly?” “elegantly?” etc etc etc)


64-bit “double precision” floats

Yes, you can put all your scene-graph math into double precision floats. 15 significant digits. This postpones precision issues enough to live with them at solar-system scale and a bit beyond. Pluto averages about 5.9 billion kilometers from the sun. 5.9 E+9 km. 5.9 E+12 meters. That means at Pluto’s distance we can resolve distances to about 1 millimeter using double precision floats.
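
That 1 millimeter figure is easy to verify in plain C++ (standard library only):

#include <cmath>
#include <cstdio>

int main()
{
    // Pluto's average distance from the sun, in meters.
    double pluto = 5.9e12;
    // Spacing between adjacent doubles at that magnitude:
    std::printf("%.10f\n", std::nextafter(pluto, INFINITY) - pluto);
    // Prints 0.0009765625 -- about a millimeter.
    return 0;
}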

One form of World-Origin and Velocity Rebasing is to maintain a higher-order scene-graph in double precision, then map the camera location to a 32-bit space for graphics and physics. E.g., the 32-bit origin shifts within the 64-bit scene-graph. Anything in rigid-body physics moves within the 32-bit “bubble”. Anything outside the bubble only moves procedurally (“ephemeris data” or computational orbit propagation, etc). The 32-bit origin position/velocity is rebased to follow the camera, etc.

But why does this work? For it to work, the Earth’s surface must be built from mesh sub-chunks, whose vertices are positioned relative to the chunk itself. The position of each sub-chunk can then be computed using 64-bit math, and mapped precisely into the 32-bit bubble. Chunks near the 32-bit scene-graph’s origin will be positioned with precision, and 32 bits of precision in the chunk-relative vertex data is enough to render correctly.
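
A sketch of that mapping, with hypothetical types (the real thing would live wherever your scene-graph feeds the renderer and physics):

// Hypothetical sketch of the 64-bit -> 32-bit "bubble" mapping.
struct DVec3 { double X, Y, Z; };   // 64-bit scene-graph position
struct FVec3 { float  X, Y, Z; };   // 32-bit bubble-local position

// Subtract in doubles, where precision is ample, and only narrow the small
// *relative* offset to 32 bits. A chunk near the bubble origin ends up with
// a precise float position; its chunk-relative vertices stay small too.
FVec3 ToBubbleSpace(const DVec3& worldPos, const DVec3& bubbleOrigin)
{
    return FVec3{
        (float)(worldPos.X - bubbleOrigin.X),
        (float)(worldPos.Y - bubbleOrigin.Y),
        (float)(worldPos.Z - bubbleOrigin.Z)
    };
}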

If, however, your Earth has sub-chunks that are relative to the center of the Earth… this system breaks. 32-bit vertex data can only represent position data to about half a meter resolution when the value of the vertices is the scale of planet earth. Same problem, now originating in the vertex data.

The point here is, the scheme of “let’s just move our scene-graph to 64 bits” is not a magic cure-all if you’re still constrained to 32 bits in parts of the graphics/physics pipelines. Yes, it can be made to work… But it places requirements on the scene’s content. A naive planet meshing system with 32-bit vertices relative to the origin still breaks, regardless of the outer 64-bit scene-graph.


Integral Positions

Floating point values for position information are “wasteful” in that they can represent anything near the origin in extremely fine detail, but the farther away you move, the more precision is lost. If you establish exactly how large a space you wish to represent, you can represent all portions of it evenly by discretizing the position values into integer values. Using 1mm increments and an unsigned 32-bit integer, you can represent a space about 4,294,967 meters across, or about 4,295 km. That’s less than Earth’s radius, bummer. An unsigned 64-bit integer gets you 18,446,744,073,709,551,615 mm, which is 18,446,744,073,709,551 meters, which is 18,446,744,073,709 km, … which is ~123,309 AU. Which is almost 2 Light Years… Which makes it about halfway to the nearest star system. Pluto’s aphelion is 49.3 AU, for comparison. Anyways, we can do much better within a 2 LY box with integers than floats because we’re not wasting so much of the 64-bit space on nanometer etc scales.
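
A minimal sketch of such an integral position type (hypothetical; a real implementation needs overflow care, arithmetic operators, conversions, and so on):

#include <cstdint>

// Hypothetical fixed-point position: signed 64-bit millimeter counts.
// (Signed halves the span of the unsigned example above -- still roughly
// a light year in every direction from the center.)
struct FixedPosition
{
    int64_t Xmm, Ymm, Zmm;

    // Precision is uniform everywhere: exactly 1 mm at the origin, and
    // exactly 1 mm a light year away. Convert to camera-relative floats
    // only at render time.
    static FixedPosition FromMeters(double x, double y, double z)
    {
        return { (int64_t)(x * 1000.0), (int64_t)(y * 1000.0), (int64_t)(z * 1000.0) };
    }
};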

Other than a consistent precision, integers are a lot like larger floats. Not a fundamental fix. Not enough precision? How about some more? Solutions along these lines solve practical problems despite not solving the root issue. Even so, postponing an issue outside your use case is a fair solution. O(n^2) solutions are common when N is known to be bounded to “2”. Your graphics driver is probably full of O(n^2) blt implementations (bitmap image copy, aka BLock Transfer).


“Large World Coordinates”: Coming to a game engine near you!

The Unreal Engine 5 Preview build contains an initial base for “Large World Coordinates”. Unfortunately, it’s not usable yet. Epic’s implementation is likely to solve the issues for the vast majority of use cases (or why bother?). I haven’t looked through it in detail yet, so I can’t speak to it. At all. And yet, I will anyway, and (foolishly, based on assumptions alone) say there’s one thing the implementation CANNOT be… And that’s a grand reworking of the whole graphics & physics pipelines to 64 bits. Which seems to be what a lot of people expect. That would require Epic to rewrite every graphics driver, every shader… to sneak into your house in the middle of the night and add transistors to your GPU… Large World Coordinates will 100% give developers “64-bit float positions” for Actors, Components, Shader Types, etc… But the idea is still to internally map an “absolute” 64-bit scene-graph into an inner 32-bit “world” scene-graph… Or something.

And that’s great, it’ll solve the vast majority of cases. But it may not solve cases that require things like multiple physics bodies (and/or cameras) in multiple “bubbles”. For instance, simulating both a SpaceX booster’s return to Earth AND the upper stage’s flight simultaneously may be difficult. It’s not like they’re in one big, fully 64-bit scene. Within an outer 64-bit “absolute” scene-graph, this would require two different 32-bit scene-graphs, one for the booster and one for the upper stage. I could imagine that - multiple physics and rendering scene-graphs - and it would be awesome… But it’s still not a magic wave of a 64-bit wand.

Also, I know my own use cases best, and most of mine don’t apply to other engine users… Only I am able to identify the unique solutions that apply only to my own specific use cases, right? And my current use case is: vehicles in orbit.


Special Case solution for Orbits

Two-Body Propagation Rebasing

Rebasing Positions and Velocities makes multiplayer support hard. Really hard. So, what’s a Multiplayer-Centric game to do?

For instance, if you just turn on position replication… clients, at any given point in time, may have scene objects that were positioned relative to different reference points. Extra support is obviously required. If you’re using an engine’s support for this, great! The engine is probably handling all the multiplayer nuances of rebased replication for you.

Objects in orbit, though, are going to need continual rebasing, because objects in orbit continually accelerate.

Multiplayer is hard. Especially for large world games with continuously accelerating objects.
Assumptions so far

So far in all the rebasing discussion, the assumption has been that the goal is to make the physics and graphics pipelines work with our scene-graph’s data… along with the assumption that the physics system requires an inertial reference frame to do its job. An inertial coordinate system implies we feed rectangular (X, Y, Z) coordinates of a non-accelerating, non-rotating coordinate system into the physics engine… It solves physics and emits the results back into our inertial frame.

But, we’ve already touched on doing physics in a non-inertial reference frame above. We discussed doing physics in a rotating frame. We can do that, if we add “fictitious forces” to account for the non-inertialness of it all. Specifically, in a rotating coordinate system we can add “Centrifugal”, “Coriolis”, and “Euler” forces to all bodies in the physics solver, and voilà, physics works again. Assuming the planet’s rotation rate is constant, the Euler term is zero.
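
For reference, the standard fictitious accelerations in a frame rotating with angular velocity ω, applied to a body at position r moving with velocity v (both measured in the rotating frame):

  • Centrifugal: −ω × (ω × r)
  • Coriolis: −2 ω × v
  • Euler: −(dω/dt) × r

These are exactly the terms computed in the code further down.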

May the fictitious force be with Apple B’s

Another “fictitious force” we can add is a frame’s acceleration… Which isn’t complicated if the frame is accelerating linearly and it isn’t rotating… We can just subtract our reference frame’s acceleration (from every rigid-body) before we send it all into the physics solver.

For example… Imagine dropping two apples at the same time. We can use the first apple’s position as the origin of a (non-inertial) coordinate system… and do rigid-body physics in that coordinate system on anything else in that frame, such as Apple “B”. We just have to add a fictitious force to Apple B before we physics it… And the fictitious force in this case is the negation of Apple A’s acceleration.

So, if we drop the Apples at exactly the same time, what do we see? Both Apples move exactly the same way. So, after subtracting Apple A’s acceleration from B, what does Apple B’s motion look like in A’s coordinate system? They accelerate at exactly the same rate, starting from the same position and the same velocity. We see no acceleration on Apple B at all. It’s stationary… in Apple A’s frame. Makes sense, yes?

If we drop Apple A first, then it already has some velocity when we drop Apple B… And even though Apple A is moving faster than Apple B, they still have the same acceleration due to gravity. So, in Apple A’s frame, Apple B still does not appear to be accelerating. But it is moving backwards, since Apple A gets farther and farther ahead as they fall. We can do rigid-body physics in Apple A’s frame; we just have to subtract out the frame’s acceleration due to gravity.

The 'fictitious' force of gravity
So Elemental!

We can in fact rebase a coordinate system that’s defined by orbital elements. E.g., what rocket scientists would call “Two-Body Orbit Propagation”. This is a simple mathematical model of a body under the influence of a single dominant gravity source, absent other perturbations. The motion of the object in orbit can be computed for any given time. What about this frame’s acceleration? Simple! That’s just the acceleration due to gravity wherever the orbit currently is. So, we can do rigid-body physics in this frame whose motion is defined by an orbit - we just have to cancel out the gravity measured at the origin… on every body in the physics solver. And voilà, valid rigid-body physics.

Only one fictitious force. Even simpler than the rotating reference frame!

There is no acceleration for an object that follows the same orbit. It remains stationary - at the origin.

In code, it may look something like:

void UOriginComponent::GetSceneAccelerations_Implementation(FVector& scenegraphAccelerations, const FCState& state)
{
    // Start with the gravitational acceleration at this rigid body's position.
    FSDimensionlessVector accelerations;
    UCelestialStatics::GravityAcceleration(state.StateVector, CurrentSphereGM, accelerations);

    if (IsOriginRotating)
    {
        // Rotating frame: add the Centrifugal and Coriolis fictitious forces.
        // (The Euler term is zero; the rotation rate is constant.)
        FSDimensionlessVector centrifugal, coriolis;
        UCelestialStatics::CentrifugalForce(state.StateVector, OriginAngularVelocity, centrifugal);
        UCelestialStatics::CoriolisForce(state.StateVector, OriginAngularVelocity, coriolis);

        accelerations += centrifugal;
        accelerations += coriolis;
    }

    if (Mode == EC_OriginMode::Orbit)
    {
        // Orbit-following frame: subtract the gravitational acceleration at the
        // origin itself, so a body sharing the origin's orbit feels nothing.
        FSDimensionlessVector originAcceleration;
        UCelestialStatics::GravityAcceleration(Origin.StateVector, CurrentSphereGM, originAcceleration);
        accelerations -= originAcceleration;
    }

    // Map the result from the origin's reference frame into scene-graph space.
    TransformForce_Implementation(scenegraphAccelerations, Origin.ReferenceFrame, accelerations);
}

For a given rigid body, we compute the acceleration due to gravity at its current position. If the frame is rotating, it means we’re near the planet’s surface and want good collision results. So, the planet stops rotating (and everything else rotates around it). We fix up rigid-body physics by adding the Centrifugal and Coriolis forces.

And finally, as discussed… If we’re rebasing to an orbit, we simply subtract out the gravitational acceleration of the origin itself.

Just for reference, the fictitious forces are simple to compute.

// Coriolis acceleration in a rotating frame: -2 * (av x v),
// where av is the frame's angular velocity and v is the body's velocity.
void UCelestialStatics::CoriolisForce(
    const FSStateVector& state,
    const FSAngularVelocity& av,
    FSDimensionlessVector& coriolis
)
{
    FSDimensionlessVector _av;
    FSDimensionlessVector _v;

    av.AsDimensionlessVector(_av);
    state.v.AsDimensionlessVector(_v);

    // vcrss = SPICE's vector cross product
    USpice::vcrss(_av, _v, coriolis);

    coriolis = -2. * coriolis;
}

// Centrifugal acceleration in a rotating frame: -av x (av x r).
void UCelestialStatics::CentrifugalForce(
    const FSStateVector& state,
    const FSAngularVelocity& av,
    FSDimensionlessVector& centrifugal
)
{
    FSDimensionlessVector _av;
    FSDimensionlessVector _r;

    av.AsDimensionlessVector(_av);
    state.r.AsDimensionlessVector(_r);

    USpice::vcrss(_av, _r, centrifugal);
    USpice::vcrss(-_av, centrifugal, centrifugal);
}

// Newtonian point-mass gravity: -(GM / r^2) * r-hat.
void UCelestialStatics::GravityAcceleration(
    const FSStateVector& state,
    const FSMassConstant& GM,
    FSDimensionlessVector& gravity
)
{
    double r = state.r.Magnitude().AsDouble();
    gravity = -(GM).AsDouble() / (r * r) * state.r.Normalized();
}

The types in the code above can be found in MaxQ.

Beyond Orbits

So, fundamentally, why does this work with Orbital motion?

It works because we have an analytical solution to the differential equations of motion. We know the position and acceleration for any given time based on the solution. This means the motion of the reference point must obey the mathematical laws governing acceleration, velocity, and position - kinematics.

The idea’s utility in simplifying multiplayer development is that all clients (and the server) are able to compute the reference point’s position and acceleration for any point in time. It factors out any required cooperation on determining the reference point used to set the coordinate system’s origin. The idea could still be useful in single player while numerically solving the motion. But if there’s no precise analytical solution, in multiplayer the clients and server are bound to disagree on the precise motion of the reference point… which brings back a level of required coordination regarding the reference point.

Orbital motions are likely to be the main situation in which the solution is relevant, but it would apply to other kinematic solutions. Circular motion, constant linear acceleration, oscillations due to spring force, etc.

Profit!!!

Rebasing the coordinate system to a solved motion (such as an orbit) nearly eliminates the accelerations of any objects near said motion… In fact, an object that shares the same motion as the coordinate system will remain motionless - by definition, right? There’s a LOT less rebasing for an object that never needs rebasing. None! And objects nearby only accelerate at a rate equal to the difference in acceleration between their position and the origin’s. Anything nearby will accelerate a LOT less. For example, objects following the same exact motion but a bit ahead or a bit behind. Such an object will never move much farther from the origin than it already is. If you watch the object, it will appear to “orbit” the origin as the two mutually follow the same path. However, in the case of orbital motion, objects off the center will be in unstable relative positions… if they’re nearer the gravity source they’ll drift toward it and away from the origin. Objects farther away will drift ever farther away. But… the accelerations are much more manageable than trying to manage positions and velocities in a rectangular “inertial” coordinate system. All for the low cost of adding a fictitious force that we’ve probably already computed anyway.

Motions of objects through the coordinate system

And this… (brief pause)… (for the effect)… transforms multiplayer development from brutal to barely any worse than any other vehicle game. Rather than shooting 32- or 64-bit positions and velocities of continuously accelerating objects all over the network, you’re putting the much more manageable values of barely-accelerating things on the wire. Imagine two players docking spacecraft in orbit… Managing that in a rectangular coordinate system with spacecraft continuously accelerating due to gravity = hard. Managing one player’s motion relative to the other’s frame by adding a small fictitious force that nearly eliminates all accelerations and motion = a much easier nut to network.

Simple, eh?
