Latency (engineering)
Latency, from a general point of view, is the time delay between the cause and the effect of some physical change in the system being observed. Lag, as it is known in gaming circles, refers to the latency between the input to a simulation and the visual or auditory response, and often occurs because of network delay in online games.[1]
Latency is physically a consequence of the limited velocity at which any physical interaction can propagate. The magnitude of this velocity is always less than or equal to the speed of light. Therefore, every physical system with any physical separation (distance) between cause and effect will experience some sort of latency, regardless of the nature of the stimulation to which it has been exposed.
The precise definition of latency depends on the system being observed or the nature of the simulation. In communications, the lower limit of latency is determined by the medium being used to transfer information. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is "in-flight" at any one moment. In the field of human–machine interaction, perceptible latency has a strong effect on user satisfaction and usability.
Communication
Online games are sensitive to latency (or "lag"), since fast response times to new events occurring during a game session are rewarded while slow response times may carry penalties. Because of the delay in transmitting game events, a player with a high-latency internet connection may show slow responses in spite of an appropriate reaction time. This gives players with low-latency connections a technical advantage.
Capital markets
Minimizing latency is of interest in the capital markets,[2] particularly where algorithmic trading is used to process market updates and turn around orders within milliseconds. Low-latency trading occurs on the networks used by financial institutions to connect to stock exchanges and electronic communication networks (ECNs) to execute financial transactions.[3] Joel Hasbrouck and Gideon Saar (2011) measure latency based on three components: the time it takes for information to reach the trader, the time the trader's algorithms take to analyze the information and decide a course of action, and the time for the generated action to reach the exchange and be implemented. Hasbrouck and Saar contrast this with the way in which latencies are measured by many trading venues, which use much narrower definitions, such as the processing delay measured from the entry of an order (at the vendor's computer) to the transmission of an acknowledgement (from the vendor's computer).[4] Electronic trading now makes up 60% to 70% of the daily volume on the New York Stock Exchange, and algorithmic trading close to 35%.[5] Trading using computers has developed to the point where millisecond improvements in network speeds offer a competitive advantage for financial institutions.[6]
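This three-part decomposition can be made concrete with a short sketch. The following Python fragment is purely illustrative: receive_update, run_strategy and send_order are hypothetical placeholders, not the API of any real trading system.

```python
import time

def measure_order_latency(receive_update, run_strategy, send_order):
    """Time the three latency components described by Hasbrouck and Saar:
    data arrival, decision-making, and order transmission. The three
    callables stand in for a real market-data feed, strategy and gateway."""
    t0 = time.perf_counter_ns()
    update = receive_update()        # information reaches the trader
    t1 = time.perf_counter_ns()
    order = run_strategy(update)     # algorithms analyze and decide
    t2 = time.perf_counter_ns()
    send_order(order)                # the action travels to the exchange
    t3 = time.perf_counter_ns()
    return {
        "data_us": (t1 - t0) / 1_000,
        "decision_us": (t2 - t1) / 1_000,
        "transmission_us": (t3 - t2) / 1_000,
        "total_us": (t3 - t0) / 1_000,
    }
```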
Packet-switched networks
Network latency in a packet-switched network is measured as either one-way (the time from the source sending a packet to the destination receiving it) or round-trip delay time (the one-way latency from source to destination plus the one-way latency from the destination back to the source). Round-trip latency is more often quoted, because it can be measured from a single point. Note that round-trip latency excludes the amount of time that a destination system spends processing the packet.[citation needed] Many software platforms provide a service called ping that can be used to measure round-trip latency. Ping uses the Internet Control Message Protocol (ICMP) echo request, which causes the recipient to return the received data in an immediate echo reply, providing a rough way of measuring round-trip delay time. Ping cannot perform accurate measurements,[7] principally because ICMP is intended only for diagnostic or control purposes and differs from real communication protocols such as TCP. Furthermore, routers and internet service providers might apply different traffic-shaping policies to different protocols.[8][9] For more accurate measurements it is better to use purpose-built software such as hping, Netperf or Iperf.
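A rough alternative to ICMP ping is to time the transport protocol itself. The sketch below assumes that establishing a TCP connection costs approximately one round trip (SYN and SYN-ACK); the host name in the usage comment is only a placeholder.

```python
import socket
import time

def tcp_connect_rtt(host, port=80, samples=5):
    """Estimate round-trip latency by timing TCP connection establishment,
    which takes roughly one round trip plus a little kernel processing."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            rtts.append((time.perf_counter() - start) * 1000)  # milliseconds
    return min(rtts), sum(rtts) / len(rtts)

# Hypothetical usage: best, average = tcp_connect_rtt("example.com")
```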
However, in a non-trivial network, a typical packet will be forwarded over multiple links and gateways, each of which will not begin to forward the packet until it has been completely received. In such a network, the minimal latency is the sum of the transmission delay of each link, plus the forwarding latency of each gateway. In practice, minimal latency also includes queuing and processing delays. Queuing delay occurs when a gateway receives multiple packets from different sources heading towards the same destination. Since typically only one packet can be transmitted at a time, some of the packets must queue for transmission, incurring additional delay. Processing delays are incurred while a gateway determines what to do with a newly received packet. Bufferbloat can increase latency by an order of magnitude or more. The combination of propagation, serialization, queuing, and processing delays often produces a complex and variable network latency profile.
Latency limits total throughput in reliable two-way communication systems as described by the bandwidth-delay product.
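A worked example (with illustrative figures) shows how the bandwidth-delay product caps throughput when a reliable protocol limits the amount of data in flight:

```python
# Bandwidth-delay product: how much data can be "in flight" on a path.
bandwidth_bps = 100e6          # 100 Mbit/s link (illustrative)
rtt_s = 0.080                  # 80 ms round-trip time (illustrative)

bdp_bytes = bandwidth_bps * rtt_s / 8
print(bdp_bytes)               # 1,000,000 bytes must be unacknowledged to fill the link

# A sender limited to a classic 64 KiB window cannot reach the link rate:
window_bytes = 64 * 1024
print(window_bytes * 8 / rtt_s / 1e6)   # ≈ 6.55 Mbit/s achievable
```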
Fiber optics
Latency in optical fiber is largely a function of the speed of light, which is 299,792,458 meters/second in vacuum. This would equate to a latency of 3.33 µs for every kilometer of path length. The index of refraction of most fiber optic cables is about 1.5, meaning that light travels about 1.5 times as fast in a vacuum as it does in the cable. This works out to about 5.0 µs of latency for every kilometer. In shorter metro networks, higher latency can be experienced due to extra distance in building risers and cross-connects. To calculate the latency of a connection, one has to know the distance traveled by the fiber, which is rarely a straight line, since it has to traverse geographic contours and obstacles, such as roads and railway tracks, as well as other rights-of-way.
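The figures above can be reproduced with a short calculation; the refractive index and the path length used here are illustrative assumptions.

```python
# Propagation delay in optical fiber.
C_VACUUM = 299_792_458       # speed of light in vacuum, m/s
N_FIBER = 1.5                # approximate group index of silica fiber

def fiber_latency_us(path_km):
    """One-way propagation delay, in microseconds, over a fiber path."""
    speed_in_fiber = C_VACUUM / N_FIBER            # ≈ 2.0e8 m/s
    return path_km * 1_000 / speed_in_fiber * 1e6  # seconds → microseconds

print(fiber_latency_us(1))      # ≈ 5.0 µs per kilometer
print(fiber_latency_us(5_600))  # ≈ 28 ms one way over an illustrative 5,600 km route
```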
Due to imperfections in the fiber, light degrades as it is transmitted through it. For distances greater than 100 kilometers, amplifiers or regenerators are deployed, and the latency they introduce must be taken into account.
Satellite transmission
Satellites in geostationary orbits are far enough away from Earth that communication latency becomes significant – about a quarter of a second for a trip from one ground-based transmitter to the satellite and back to another ground-based transmitter; close to half a second for two-way communication from one Earth station to another and then back to the first. Low Earth orbit is sometimes used to cut this delay, at the expense of more complicated satellite tracking on the ground and requiring more satellites in the satellite constellation to ensure continuous coverage.
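The quarter-second figure follows from the geometry alone. The calculation below is simplified, assuming ground stations directly beneath the satellite; real paths are somewhat longer.

```python
# Propagation delay via a geostationary satellite (simplified geometry).
C = 299_792_458              # speed of light, m/s
GEO_ALTITUDE_M = 35_786_000  # altitude of a geostationary orbit above the equator

one_hop_s = 2 * GEO_ALTITUDE_M / C     # ground -> satellite -> ground
print(one_hop_s)                       # ≈ 0.24 s, the "quarter of a second"
print(2 * one_hop_s)                   # ≈ 0.48 s before a reply arrives back
```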
Audio
Audio latency is the delay between when an audio signal enters and when it emerges from a system. Potential contributors to latency in an audio system include analog-to-digital conversion, buffering, digital signal processing, transmission time, digital-to-analog conversion and the speed of sound in air.
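Two of these contributions are easy to quantify. The sketch below uses an illustrative buffer size, sample rate, and listening distance; real systems accumulate several such stages.

```python
# Latency contributed by one audio buffer and by the speed of sound in air.
sample_rate_hz = 48_000        # illustrative sample rate
buffer_frames = 256            # illustrative buffer size

print(buffer_frames / sample_rate_hz * 1000)   # ≈ 5.3 ms per buffer

speed_of_sound = 343           # m/s in air at about 20 °C
listener_distance_m = 3
print(listener_distance_m / speed_of_sound * 1000)   # ≈ 8.7 ms of acoustic delay
```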
Video
Video latency refers to the delay between the moment a transfer of a video stream is requested and the moment the transfer actually begins. Networks that exhibit relatively small delays are known as low-latency networks, while their counterparts are known as high-latency networks.
Workflow
Any individual workflow within a system of workflows can be subject to some type of operational latency. An individual system may even have more than one type of latency, depending on the type of participant or goal-seeking behavior. This is best illustrated by the following two examples involving air travel.
From the point of view of a passenger, latency can be described as follows. Suppose John Doe flies from London to New York. The latency of his trip is the time it takes him to go from his house in England to the hotel he is staying at in New York. This is independent of the throughput of the London–New York air link – whether there were 100 passengers a day making the trip or 10,000, the latency of the trip would remain the same.
From the point of view of flight operations personnel, latency can be entirely different. Consider the staff at the London and New York airports. Only a limited number of planes are able to make the transatlantic journey, so when one lands they must prepare it for the return trip as quickly as possible. It might take, for example:
- 35 minutes to clean a plane
- 15 minutes to refuel a plane
- 10 minutes to load the passengers
- 30 minutes to load the cargo
Assuming the above are done consecutively, minimum plane turnaround time is:
- 35 + 15 + 10 + 30 = 90 minutes
However, cleaning, refueling and loading the cargo can be done at the same time. Passengers can only be loaded after cleaning is complete. The reduced latency, then, is:
- Cleaning, then loading the passengers: 35 + 10 = 45 minutes
- Refueling: 15 minutes
- Loading the cargo: 30 minutes
- Minimum latency = 45 minutes
The people involved in the turnaround are interested only in the time it takes for their individual tasks. When all of the tasks are done at the same time, however, it is possible to reduce the latency to the length of the longest task. If some steps have prerequisites, it becomes more difficult to perform all steps in parallel. In the example above, the requirement to clean the plane before loading passengers results in a minimum latency longer than any single task.
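The reasoning in this example generalizes: with prerequisites, the minimum latency is the length of the longest chain of dependent tasks. A minimal sketch, using the durations from the example above:

```python
# Minimum turnaround latency with task prerequisites (air-travel example).
# Each task maps to (duration in minutes, list of tasks that must finish first).
tasks = {
    "clean":      (35, []),
    "refuel":     (15, []),
    "load_pax":   (10, ["clean"]),   # passengers board only after cleaning
    "load_cargo": (30, []),
}

def finish_time(name):
    """Earliest finish time of a task: its duration plus its latest prerequisite."""
    duration, prereqs = tasks[name]
    return duration + max((finish_time(p) for p in prereqs), default=0)

print(max(finish_time(t) for t in tasks))   # 45 – the minimum latency in minutes
```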
Mechanics
Any mechanical process encounters limitations modeled by Newtonian physics. The behavior of disk drives provides an example of mechanical latency. Here, the latency is the seek time for the actuator arm to be positioned above the appropriate track plus the rotational latency for the data encoded on a platter to rotate from its current position to a position under the disk read-and-write head.
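For a rotating disk, the rotational component follows directly from the spindle speed; the seek time below is an illustrative figure rather than a specification of any particular drive.

```python
# Average mechanical latency of a hard disk (illustrative 7200 RPM drive).
rpm = 7200
avg_seek_ms = 9.0                          # illustrative average seek time

revolution_ms = 60_000 / rpm               # ≈ 8.33 ms per full revolution
avg_rotational_ms = revolution_ms / 2      # on average, wait half a revolution
print(avg_rotational_ms)                   # ≈ 4.17 ms
print(avg_seek_ms + avg_rotational_ms)     # ≈ 13.2 ms average mechanical latency
```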
Computer hardware and operating systems
Computers run instructions in the context of a process. In the context of computer multitasking, the execution of the process can be postponed if other processes are also executing. In addition, the operating system can schedule when to perform the action that the process is commanding. For example, suppose a process commands that a computer card's voltage output be set high-low-high-low and so on at a rate of 1000 Hz. The operating system schedules the process for each transition (high-low or low-high) based on a hardware clock such as the High Precision Event Timer. The latency is the delay between the events generated by the hardware clock and the actual transitions of voltage from high to low or low to high.
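This scheduling latency can be observed with a simple experiment: request a fixed delay from the operating system and measure how much later the process actually resumes. The sketch below is illustrative; the numbers it produces depend entirely on the hardware, operating system, and load.

```python
import time

def timer_overshoot_us(period_s=0.001, samples=1000):
    """Ask the OS to sleep for `period_s` and record how late each wake-up is.
    The overshoot is scheduling latency on top of the requested delay."""
    overshoots = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(period_s)                 # request a 1 ms delay (1000 Hz)
        overshoots.append((time.perf_counter() - start - period_s) * 1e6)
    return min(overshoots), sum(overshoots) / len(overshoots), max(overshoots)

print(timer_overshoot_us())   # on a desktop OS, typically tens to hundreds of µs late
```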
Many desktop operating systems have performance limitations which create additional latency. The problem may be mitigated with real-time extensions and patches such as PREEMPT_RT.
On embedded systems, the real-time execution of instructions is often supported by a real-time operating system.
Simulations
In simulation applications, latency refers to the time delay, often measured in milliseconds, between initial input and output clearly discernible to the simulator trainee or simulator subject. Latency is sometimes also called transport delay. Some authorities[who?] distinguish between latency and transport delay by using the term latency in the sense of the extra time delay of a system over and above the reaction time of the vehicle being simulated, but this requires detailed knowledge of the vehicle dynamics and can be controversial.
In simulators with both visual and motion systems, it is particularly important that the latency of the motion system not be greater than that of the visual system, or symptoms of simulator sickness may result. This is because, in the real world, motion cues are those of acceleration and are quickly transmitted to the brain, typically in less than 50 milliseconds; this is followed some milliseconds later by a perception of change in the visual scene. The visual scene change is essentially one of change of perspective or displacement of objects such as the horizon, which takes some time to build up to discernible amounts after the initial acceleration that caused the displacement. A simulator should therefore reflect the real-world situation by ensuring that the motion latency is equal to or less than that of the visual system, and not the other way round.
See also
- Feedback
- Interrupt latency
- Jitter
- Lagometer
- Lead time
- Memory latency
- Performance engineering
- Response time (technology)
- Responsiveness
References
- ^ "Latency" Retrieved 2020-10-27.
- ^ TABB (2009). High Frequency Trading Technology: a TABB Anthology. Archived from the original on 2016-08-01. Retrieved 2017-02-11.
- ^ Mackenzie, Michael; Grant, Jeremy (2009). "The dash to flash" (PDF). Financial Times. Archived from the original (PDF) on 23 July 2011. Retrieved 18 July 2011. "extracting tiny slices of profit from trading small numbers of shares in companies, often between different trading platforms, with success relying on minimal variations in speed - or 'latency', in the trading vernacular."
- ^ Hasbrouck, Joel; Saar, Gideon. "Low-Latency Trading" (PDF). p. 1. Archived from the original (PDF) on 11 November 2011. Retrieved 18 July 2011.
- ^ Heires, Katherine (July 2009). "Code Green: Goldman Sachs & UBS Cases Heighten Need to Keep Valuable Digital Assets From Walking Out The Door. Millions in Trading Profits May Depend On It" (PDF). Securities Industry News. Retrieved 18 July 2011.
- ^ "High-frequency trading: when milliseconds mean millions". The Telegraph. Retrieved 2018-03-25.
- ^ "Don't misuse ping!". Retrieved 29 April 2015.
- ^ Shane Chen (2005). "Network Protocols Discussion / Traffic Shaping Strategies". knowplace.org. Archived from the original on 2007-01-09.
- ^ "Basic QoS part 1 – Traffic Policing and Shaping on Cisco IOS Router". The CCIE R&S. Retrieved 29 April 2015.
Further reading
- M. Brian Blake (December 2003). "Coordinating Multiple Agents for Workflow-Oriented Process Orchestration" (PDF). Information Systems and E-Business Management Journal. Springer-Verlag. Archived from the original (PDF) on 2006-09-01.