
Why do System.nanoTime() and System.currentTimeMillis() drift apart so rapidly?

For diagnostic purposes, I want to be able to detect changes in the system time-of-day clock in a long-running server application. Since System.currentTimeMillis() is based on wall clock time and System.nanoTime() is based on a system timer that is independent(*) of wall clock time, I thought I could use changes in the difference between these values to detect system time changes.

I wrote up a quick test app to see how stable the difference between these values is, and to my surprise the values diverge immediately, at a rate of several milliseconds per second. A few times I saw much faster divergence. This is on a Win7 64-bit desktop with Java 6. I haven't tried the test program below under Linux (or Solaris or MacOS) to see how it performs. For some runs of this app the divergence is positive; for others it is negative. It appears to depend on what else the desktop is doing, but it's hard to say.

public class TimeTest {
  private static final int ONE_MILLION  = 1000000;
  private static final int HALF_MILLION =  499999;

  public static void main(String[] args) {
    long start = System.nanoTime();
    // Baseline offset: wall-clock time minus monotonic time, in milliseconds.
    long base = System.currentTimeMillis() - (start / ONE_MILLION);

    while (true) {
      try {
        Thread.sleep(1000);
      } catch (InterruptedException e) {
        // Don't care if we're interrupted
      }
      long now = System.nanoTime();
      // How far the current offset has moved away from the baseline.
      long drift = System.currentTimeMillis() - (now / ONE_MILLION) - base;
      // Elapsed monotonic time, rounded to the nearest millisecond.
      long interval = (now - start + HALF_MILLION) / ONE_MILLION;
      System.out.println("Clock drift " + drift + " ms after " + interval
                         + " ms = " + (drift * 1000 / interval) + " ms/s");
    }
  }
}

Inaccuracies with the Thread.sleep() time, as well as interruptions, should be entirely irrelevant to timer drift.

Both of these Java "System" calls are intended for use as a measurement -- one to measure differences in wall clock time and the other to measure elapsed intervals, so when the real-time clock is not being changed, these values should change at very close to the same speed, right? Is this a bug or a weakness or a failure in Java? Is there something in the OS or hardware that prevents Java from being more accurate?

I fully expect some drift and jitter(**) between these independent measurements, but I expected well under a minute per day of drift. One msec per second of drift, if monotonic, comes to almost 90 seconds per day! My worst-case observed drift was perhaps ten times that. Every time I run this program, I see drift on the very first measurement. So far, I have not run the program for more than about 30 minutes.

I expect to see some small randomness in the values printed, due to jitter, but in almost all runs of the program I see a steady increase in the difference, often as much as 3 msec per second, and a couple of times much more than that.

Does any version of Windows have a mechanism similar to Linux that adjusts the system clock speed to slowly bring the time-of-day clock into sync with the external clock source? Would such a thing influence both timers, or only the wall-clock timer?

(*) I understand that on some architectures, System.nanoTime() will of necessity use the same mechanism as System.currentTimeMillis(). I also believe it's fair to assume that no modern Windows server runs on such hardware. Is this a bad assumption?

(**) Of course, System.currentTimeMillis() will usually have a much larger jitter than System.nanoTime() since its granularity is not 1 msec on most systems.
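
A quick way to see that granularity on a given machine is to busy-poll the clock and print how often its value actually advances; on Windows the step typically matches the 10 ms or 15.6 ms timer-interrupt period discussed in the first answer below. A minimal illustrative sketch (class name arbitrary; busy-polling is assumed acceptable for a short diagnostic):

public class MillisGranularity {
  public static void main(String[] args) {
    long prev = System.currentTimeMillis();
    int ticks = 0;
    while (ticks < 20) {
      long now = System.currentTimeMillis();
      if (now != prev) {
        // The clock value just advanced; print the size of the step.
        System.out.println("tick: +" + (now - prev) + " ms");
        prev = now;
        ticks++;
      }
    }
  }
}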


You might find this Sun/Oracle blog post about JVM timers to be of interest.

Here are a couple of the paragraphs from that article about JVM timers under Windows:

System.currentTimeMillis() is implemented using the GetSystemTimeAsFileTime method, which essentially just reads the low resolution time-of-day value that Windows maintains. Reading this global variable is naturally very quick - around 6 cycles according to reported information. This time-of-day value is updated at a constant rate regardless of how the timer interrupt has been programmed - depending on the platform this will either be 10ms or 15ms (this value seems tied to the default interrupt period).

System.nanoTime() is implemented using the QueryPerformanceCounter / QueryPerformanceFrequency API (if available, else it returns currentTimeMillis*10^6). QueryPerformanceCounter (QPC) is implemented in different ways depending on the hardware it's running on. Typically it will use either the programmable-interval-timer (PIT), or the ACPI power management timer (PMT), or the CPU-level timestamp-counter (TSC). Accessing the PIT/PMT requires execution of slow I/O port instructions, and as a result the execution time for QPC is on the order of microseconds. In contrast, reading the TSC is on the order of 100 clock cycles (to read the TSC from the chip and convert it to a time value based on the operating frequency). You can tell if your system uses the ACPI PMT by checking if QueryPerformanceFrequency returns the signature value of 3,579,545 (i.e. 3.57 MHz). If you see a value around 1.19 MHz then your system is using the old 8254 PIT chip. Otherwise you should see a value approximately that of your CPU frequency (modulo any speed throttling or power-management that might be in effect.)
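
A rough, non-authoritative way to guess which of those sources backs System.nanoTime() on a given box is to time a burst of back-to-back calls: going by the figures quoted above, well under ~100 ns per call suggests the TSC, while microsecond-scale costs suggest PIT/PMT port I/O. A sketch (class name arbitrary; the numbers are only indicative, given JIT warm-up and power management):

public class NanoTimeCost {
  public static void main(String[] args) {
    final int N = 1000000;
    long sink = 0;                      // keeps the JIT from eliding the calls
    long t0 = System.nanoTime();
    for (int i = 0; i < N; i++) {
      sink += System.nanoTime();
    }
    long t1 = System.nanoTime();
    // Average includes loop overhead, so treat it as an upper bound per call.
    System.out.println("avg ns/call ~ " + ((t1 - t0) / (double) N)
                       + "  (ignore: " + sink + ")");
  }
}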


I am not sure how much this will actually help. But this is an area of active change in the Windows/Intel/AMD/Java world. The need for accurate and precise time measurement has been apparent for several (at least 10) years. Both Intel and AMD have responded by changing how TSC works. Both companies now have something called Invariant-TSC and/or Constant-TSC.

Check out rdtsc accuracy across CPU cores. Quoting from osgx (who refers to an Intel manual).

"16.11.1 Invariant TSC

The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8].

The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."

See also http://www.citihub.com/requesting-timestamp-in-applications/. Quoting from the author

For AMD:

If CPUID 8000_0007.edx[8] = 1, then the TSC rate is ensured to be invariant across all P-States, C-States, and stop-grant transitions (such as STPCLK Throttling); therefore, the TSC is suitable for use as a source of time.

For Intel:

Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behaviour moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource.
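
There is no pure-Java way to execute CPUID directly, but on Linux the kernel exposes the result of that capability bit as CPU flags named constant_tsc and nonstop_tsc. A hedged sketch (Linux-only assumption; class name arbitrary; requires Java 8+ for this Files overload):

import java.nio.file.Files;
import java.nio.file.Paths;

public class TscFlags {
  public static void main(String[] args) throws Exception {
    // /proc/cpuinfo lists one "flags" line per core; the first is enough here.
    for (String line : Files.readAllLines(Paths.get("/proc/cpuinfo"))) {
      if (line.startsWith("flags")) {
        System.out.println("constant_tsc: " + line.contains("constant_tsc"));
        System.out.println("nonstop_tsc : " + line.contains("nonstop_tsc"));
        break;
      }
    }
  }
}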

Now the really important point is that the latest JVMs appear to exploit the newly reliable TSC mechanisms. There isn't much online to show this. However, do take a look at http://code.google.com/p/disruptor/wiki/PerformanceResults.

"To measure latency we take the three stage pipeline and generate events at less than saturation. This is achieved by waiting 1 microsecond after injecting an event before injecting the next and repeating 50 million times. To time at this level of precision it is necessary to use time stamp counters from the CPU. We choose CPUs with an invariant TSC because older processors suffer from changing frequency due to power saving and sleep states. Intel Nehalem and later processors use an invariant TSC which can be accessed by the latest Oracle JVMs running on Ubuntu 11.04. No CPU binding has been employed for this test"

Note that the authors of the "Disruptor" have close ties to the folks working on Azul and other JVMs.

See also "Java Flight Records Behind the Scenes". This presentation mentions the new invariant TSC instructions.


"Returns the current value of the most precise available system timer, in nanoseconds.

"This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change. Differences in successive calls that span greater than approximately 292 years (2**63 nanoseconds) will not accurately compute elapsed time due to numerical overflow."

Note that it says "precise", not "accurate".

It's not a "bug in Java" or a "bug" in anything. It's a definition. The JVM developers look around to find the fastest clock/timer in the system and use it. If that's in lock-step with the system clock then good, but if it's not, that's just the way the cookie crumbles. It's entirely plausible, say, that a computer system will have an accurate system clock but then have a higher-rate timer internally that's tied to the CPU clock rate or some such. Since clock rate is often varied to minimize power consumption, the increment rate of this internal timer would vary.


System.currentTimeMillis() and System.nanoTime() are not necessarily provided by the same hardware. System.currentTimeMillis() is backed by GetSystemTimeAsFileTime(), which reports time in 100 ns units; its source is the system timer. System.nanoTime() is backed by the system's high-performance counter. A whole variety of different hardware can provide this counter, so its resolution varies with the underlying hardware.

In no case can it be assumed that these two sources are in phase. Measuring one against the other will reveal that they run at slightly different speeds. If the update of System.currentTimeMillis() is taken as the real progress in time, the output of System.nanoTime() may run sometimes slower, sometimes faster, and the ratio may itself vary.

A careful calibration has to be done in order to phase-lock these two time sources.
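
As an illustration of such a calibration, here is a minimal sketch of the diagnostic the question asks for: it tracks the wall-clock-minus-monotonic offset with a slow moving average, absorbs gradual drift or slewing, and flags a sudden step as a clock change. The 50 ms threshold and 0.05 smoothing factor are illustrative assumptions, not calibrated values:

public class ClockStepDetector {
  public static void main(String[] args) throws InterruptedException {
    // Offset between wall clock and monotonic clock, in milliseconds.
    double avgOffset = System.currentTimeMillis() - System.nanoTime() / 1e6;
    while (true) {
      Thread.sleep(1000);
      double offset = System.currentTimeMillis() - System.nanoTime() / 1e6;
      double delta = offset - avgOffset;
      if (Math.abs(delta) > 50.0) {               // illustrative threshold
        System.out.println("Wall clock stepped by ~" + (long) delta + " ms");
        avgOffset = offset;                       // re-baseline after the step
      } else {
        avgOffset += 0.05 * delta;                // absorb slow drift or slewing
      }
    }
  }
}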

A more detailed description of the relation between these two time sources can be found at the Windows Timestamp Project.


Does any version of Windows have a mechanism similar to Linux that adjusts the system clock speed to slowly bring the time-of-day clock into sync with the external clock source? Would such a thing influence both timers, or only the wall-clock timer?

The Windows Timestamp Project does what you are asking for. As far as I know it only affects the wall-clock timer.
