Dead Reckoning

Dead Reckoning (DR) is a method to find the current position by measuring the course and distance from a past known point. It is used in Distributed Interactive Simulation to conserve bandwidth in the communication between two different network entities, when exchanging position information of a moving object. It starts with a kinematic model of the object.
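
For instance, a first-order kinematic model predicts the current position by advancing the last known position along the last known velocity. The sketch below illustrates the idea; the type and function names are made up for this example, not taken from the BZFlag source.

<syntaxhighlight lang="cpp">
#include <cstdio>

// First-order (constant-velocity) kinematic DR model.  The names here
// are illustrative, not from the BZFlag source.
struct DRState {
    double t0;       // time of the last known fix (seconds)
    double pos0[3];  // position at t0
    double vel0[3];  // velocity at t0
};

// Extrapolate the position at time t from the last known fix:
//   p(t) = p0 + v0 * (t - t0)
void extrapolate(const DRState &s, double t, double out[3])
{
    const double dt = t - s.t0;
    for (int i = 0; i < 3; ++i)
        out[i] = s.pos0[i] + s.vel0[i] * dt;
}

int main()
{
    const DRState fix = { 10.0, {0.0, 0.0, 0.0}, {5.0, 0.0, 0.0} };
    double p[3];
    extrapolate(fix, 10.5, p);                    // half a second after the fix
    std::printf("%g %g %g\n", p[0], p[1], p[2]);  // prints: 2.5 0 0
    return 0;
}
</syntaxhighlight>

A second-order model would also carry an acceleration term (½·a·dt²), at the cost of one more parameter in each update.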

As a simple example, entity A controls an object, a "tank". A second entity (B) receives position updates for the tank from A. A updates the tank's position continuously as it changes, taking into account the environment, the tank's control inputs, and the (virtual) physics laws that the game imposes. A is the master (driver) of the tank. B recreates the position and orientation of A's tank locally using the data provided by A; B's representation of the tank is a slave.
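
For this to work, the data A shares with B is essentially a timestamped fix plus the derivatives B needs for prediction. A hypothetical update message might carry fields like these (illustrative only, not BZFlag's actual wire format):

<syntaxhighlight lang="cpp">
// Hypothetical update message from the master (A) to slaves (B).
// Field names and layout are illustrative, not BZFlag's wire format.
struct TankUpdate {
    double timestamp;        // sender's clock when the fix was taken
    float  pos[3];           // tank position at that instant
    float  velocity[3];      // linear velocity (a DR model parameter)
    float  azimuth;          // facing angle, in radians
    float  angularVelocity;  // turn rate (a DR model parameter)
};
</syntaxhighlight>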

In a network with no delay and very high bandwidth, A can communicate the tank position and orientation to B any time it updates its own internal tank representation. B would update its internal copy of the tank, using the newest data to have arrived. In the real, inter-networked world, this is impractical because bandwidth is limited.

To conserve bandwidth, A and B could share a Dead Reckoning (DR) model of the tank. Instead of sending an update every time the position changes, A compares the current tank position with the position predicted by the DR model. If the "real" and predicted representations are close enough (within a certain tolerance), A does not send an update message to B. When no update has arrived from A, B uses its own copy of the DR model to calculate the position of the tank. This calculated position is intended to match the actual current tank position with no more error than the tolerance allows. However, the true tank position *is* still sent periodically at a low rate, so that new players entering the game can get in sync. To keep all players' DR models synchronized, the DR model parameters are sent with the update messages. This way, position updates are only sent when the present position cannot be accurately derived from the model data, reducing the amount of network bandwidth used.
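
A sender-side sketch of that decision follows; the tolerance and refresh interval are hypothetical values, not BZFlag's actual thresholds:

<syntaxhighlight lang="cpp">
// Hypothetical tuning values; BZFlag's actual thresholds differ.
static const double kPosTolerance = 0.1;  // world units
static const double kMaxQuietTime = 5.0;  // seconds between forced updates

// A's decision: send only when B's DR prediction has drifted from the
// simulated position by more than the tolerance, or when a low-rate
// periodic refresh is due so newly joined players can sync.
bool needUpdate(const double actual[3], const double predicted[3],
                double now, double lastSentTime)
{
    if (now - lastSentTime > kMaxQuietTime)
        return true;  // periodic keep-alive update

    double err2 = 0.0;
    for (int i = 0; i < 3; ++i) {
        const double d = actual[i] - predicted[i];
        err2 += d * d;
    }
    return err2 > kPosTolerance * kPosTolerance;  // squared-distance test
}
</syntaxhighlight>

Orientation would get the same treatment, with its own angular tolerance.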

In a network with fixed [[lag]], B has the "correct" representation of the object and its history, but delayed by the network lag. The lag can be at least partially compensated for by adding a measure of it to the DR model.
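
As a sketch of that compensation: when an update arrives, B can backdate the fix by an estimated one-way lag (for example, half of a measured round-trip time) so that extrapolating to "now" also covers the time the packet spent in flight. The types and names below are again illustrative:

<syntaxhighlight lang="cpp">
// Receiver-side DR state (same shape as the DRState sketch above).
struct DRState {
    double t0;       // local-clock time the fix refers to
    double pos0[3];  // position at t0
    double vel0[3];  // velocity at t0
};

// Apply an incoming update, backdating it by the estimated one-way lag
// so the DR model extrapolates across the packet's flight time too.
void applyUpdate(DRState &s, const double pos[3], const double vel[3],
                 double arrivalTime, double oneWayLag)
{
    for (int i = 0; i < 3; ++i) {
        s.pos0[i] = pos[i];
        s.vel0[i] = vel[i];
    }
    s.t0 = arrivalTime - oneWayLag;  // the fix describes the tank 'oneWayLag' ago
}
</syntaxhighlight>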

In a network with significant [[jitter]], the fastest packets arrive after roughly the minimum transit time of the network path, while others arrive later. The tank updates from A to B then arrive not only delayed, but with a varying delay, causing effects of time compression and expansion that A knows nothing about. DR alters B's perception of A's tank, because simple lag compensation cannot keep the two models in sync. The result can be the remote (to B) tank suddenly "jerking" to a new position when A sends a periodic update.

The cases most disruptive to game-play occur when A's tank is not under the influence of its control inputs, as when jumping or falling. There, the world physics, the DR model and relatively few periodic updates from A determine the tank's trajectory. Taking a jump as an example, A might only send updates at the start of the jump, halfway through the rising arc, at the top, halfway through the descent, and upon landing. If the lag varies significantly between these updates, the DR model makes invalid predictions that are "corrected" upon each update received, forcing the tank to a new position. The visual effect, from player B's perspective, is a tank that rapidly jumps between different positions and trajectories.

As with lag, jitter compensation can be incorporated into the DR model. However, by its nature, jitter is rarely constant and remains a challenge to overcome completely.
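
One plausible scheme, sketched here rather than taken from BZFlag's implementation, is to timestamp every update with the sender's clock and track the smallest sender-to-local offset observed. That minimum approximates the fixed part of the delay (plus any clock skew), each packet's deviation from it is jitter, and the update can be placed in time using the fixed part alone:

<syntaxhighlight lang="cpp">
#include <algorithm>

// Tracks the offset between sender timestamps and local arrival times.
// The smallest offset seen approximates the fixed path delay (plus any
// clock skew); per-packet deviation from it is the jitter.  This is an
// illustrative scheme, not BZFlag's actual code.
class JitterFilter {
public:
    JitterFilter() : minOffset(1e30), smoothedJitter(0.0) {}

    // Call once per update; returns the local-clock time at which the
    // update should be placed, with this packet's jitter discarded.
    double sampleTime(double senderStamp, double arrivalTime)
    {
        const double offset = arrivalTime - senderStamp;
        minOffset = std::min(minOffset, offset);  // fixed-delay estimate
        // Exponentially smoothed delay variation (similar in spirit to
        // the RTP jitter estimator).
        smoothedJitter += ((offset - minOffset) - smoothedJitter) / 16.0;
        return senderStamp + minOffset;  // map onto the local clock
    }

    double jitter() const { return smoothedJitter; }

private:
    double minOffset;       // smallest sender-to-local offset seen
    double smoothedJitter;  // running estimate of delay variation
};
</syntaxhighlight>

[[Category:Development]]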