This wiki is archived and useful information is being migrated to the main bzflag.org website

Dead Reckoning

Dead Reckoning (DR) is a method to find the current position by measuring the course and distance from a past known point.  It is used in Distributed Interactive Simulation to conserve bandwidth in the communication between two different network entities, when exchanging position information of a moving object. It starts with a kinematic model of the object.
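As a rough illustration, a first-order kinematic DR model extrapolates a position from the last known point along the last known velocity. The sketch below is illustrative only; the names and structure are assumptions, not taken from the BZFlag source:

```cpp
#include <cassert>
#include <cmath>

// Minimal kinematic state for a dead-reckoned object. A fuller model
// would also track orientation and angular velocity.
struct DRState {
    float pos[3];  // last known position (x, y, z)
    float vel[3];  // last known velocity
};

// First-order dead reckoning: extrapolate the position dt seconds
// past the last known point along the last known course.
void drExtrapolate(const DRState& s, float dt, float out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = s.pos[i] + s.vel[i] * dt;
}
```

For a tank last seen at the origin moving 10 units/s along x, this model predicts x = 5 half a second later.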
This page gives information and suggestions on what Dead Reckoning (DR) is, how it is implemented, and how it could be improved.
  
As a simple example, entity A controls an object which is a "tank". A second entity, B, receives position updates for the tank from A.  A updates the tank's position continuously as it changes, taking into account the environment, the tank's control inputs, and the (virtual) physics laws that the game imposes. A is the master (driver) of the tank.  B recreates the position and orientation of A's tank locally using the data provided by A; B's representation of the tank is a slave.
 
  
In a network with no delay and very high bandwidth, A can communicate the tank position and orientation to B any time it updates its own internal tank representation. B would update its internal copy of the tank, using the newest data to have arrived. In the real, inter-networked world, this is impractical because bandwidth is limited.
  
To conserve bandwidth, A and B can share a Dead Reckoning (DR) model of the tank. Instead of sending a position update every time the position changes, A compares the current tank position to a predicted position calculated using the DR model. If the real and predicted representations are "close enough" (within a certain tolerance of being the same), it does not send an update message to B. With no update received from A, B uses its own DR model to calculate the position of the tank. This calculated position is intended to be the actual current tank position, with no more error than the tolerance allows. However, the true tank position *is* still sent periodically at a low rate, so that new players entering the game can get in sync. To keep all players' DR models synchronized, the DR model parameters are sent with the update messages.  This way, position updates are only sent when the present position cannot be accurately derived from the model data, reducing the amount of network bandwidth used.
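The sender-side decision can be sketched as follows. The threshold comparison is the general technique described above; the names and the use of squared distance are assumptions for illustration, not BZFlag's actual code:

```cpp
#include <cassert>
#include <cmath>

// A only transmits when the true position has drifted from the DR
// prediction by more than a tolerance (compared as squared distance
// to avoid a square root).
bool needsUpdate(const float actual[3], const float predicted[3],
                 float tolerance) {
    float err2 = 0.0f;
    for (int i = 0; i < 3; ++i) {
        float d = actual[i] - predicted[i];
        err2 += d * d;
    }
    return err2 > tolerance * tolerance;
}
```

B runs the same prediction, so as long as `needsUpdate` stays false on A's side, B's extrapolated position is within the tolerance of the truth.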
 
  
In a fixed [[lag]] network, B has the "correct" representation of the object and its history, but delayed by the network lag. Lag can be at least partially compensated for by adding a measure of the lag to the DR model.
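A sketch of that compensation: B extrapolates not from the moment a packet arrived but from the moment A sampled the state, by adding the measured one-way lag to the prediction interval. The names are illustrative, and a measured one-way lag is assumed to be available:

```cpp
#include <cassert>
#include <cmath>

// Lag-compensated dead reckoning: the sampled state is already
// oneWayLag seconds old when it arrives, so B predicts that far
// ahead in addition to the time elapsed since arrival.
void drExtrapolateWithLag(const float pos[3], const float vel[3],
                          float timeSinceArrival, float oneWayLag,
                          float out[3]) {
    float dt = timeSinceArrival + oneWayLag;  // age of the sampled state
    for (int i = 0; i < 3; ++i)
        out[i] = pos[i] + vel[i] * dt;
}
```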
  
In a network with significant [[jitter]], the fastest packets arrive with a delay near the minimum for the network path, while others arrive later.  The tank updates from A to B not only arrive delayed, but with a varying delay, causing effects of time compression and expansion that A is not aware of. DR will alter B's perception of A's tank because simple lag compensation is not able to keep the models in sync. This can result in the remote (to B) tank suddenly "jerking" to a new position when A sends a periodic update.
 
  
The examples most disruptive to game-play occur when A's tank is not under the influence of its control inputs, as when jumping or falling. In these cases, the world physics, the DR model, and relatively few periodic updates from A determine the tank's trajectory.  With a jump as an example, A might only send updates at the start of the jump, halfway through the rising arc, at the top, halfway through the descent, and upon landing. If the lag varies significantly between these updates, the DR model makes invalid predictions that are "corrected" upon each update received, forcing the tank to a new position.  The visual effect from player B's perspective is a tank that rapidly jumps between different positions and trajectories.
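While the tank is ballistic, a second-order DR model (constant gravity) can fill in the arc between A's sparse updates. The gravity constant below is the textbook value, used purely for illustration; BZFlag's world physics may use a different constant:

```cpp
#include <cassert>
#include <cmath>

const float GRAVITY = -9.8f;  // illustrative; not BZFlag's actual constant

// Second-order dead reckoning for a falling or jumping tank:
// linear extrapolation in x and y, constant acceleration in z.
void drBallistic(const float pos[3], const float vel[3], float dt,
                 float out[3]) {
    out[0] = pos[0] + vel[0] * dt;
    out[1] = pos[1] + vel[1] * dt;
    out[2] = pos[2] + vel[2] * dt + 0.5f * GRAVITY * dt * dt;
}
```

Between updates, B can run this model every frame; the jerking appears only when an arriving update disagrees with the predicted arc.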
  
As with lag, jitter compensation can be incorporated into the DR model. However, by its nature, jitter is rarely constant and remains a challenge to overcome completely.
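One common way to hide the corrections jitter forces on the slave tank is to blend the displayed position toward each new DR target over a short window instead of snapping to it. This is a general smoothing technique, not necessarily what BZFlag implements:

```cpp
#include <cassert>
#include <cmath>

// Move the displayed position a fraction alpha of the way toward the
// corrected DR target each frame; alpha in [0, 1], where 1 snaps
// immediately and smaller values trade accuracy for smoothness.
void blendToward(float shown[3], const float target[3], float alpha) {
    for (int i = 0; i < 3; ++i)
        shown[i] += alpha * (target[i] - shown[i]);
}
```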
  
 
[[Category:Development]]
 
