Background
I'm using Apache's NMS library to talk to ActiveMQ. Every sent message is time-stamped by the producer, and I'm measuring message latency by subtracting that timestamp from DateTime.UtcNow.
I'd expect to see latencies of 0-100 ms, but instead I'm seeing latencies of between -1000 and 1000 ms. Negative latencies obviously make no sense, so I initially suspected the system clocks were out of sync; however, I've since independently confirmed that they are correct and within 20 ms of each other.
More background
Measuring broadcast message latency using system clock, good idea?
Question
I now believe that the discrepancies might be due to the way dates are handled between .NET and Java.
- Is the conversion from Java to/from .Net times lossy?
- Can the conversions explain the large negative time spans I was observing?
- Is there something else that could explain the time differences?
MessageProducer.cs -- Producer sets NMSTimestamp
activeMessage.NMSTimestamp = DateTime.UtcNow;
ActiveMqMessage.cs -- NMSTimestamp converts to Java time and stores it on Message
...
public DateTime NMSTimestamp
{
    get { return DateUtils.ToDateTime(Timestamp); }
    set
    {
        Timestamp = DateUtils.ToJavaTimeUtc(value);
        if(timeToLive.TotalMilliseconds > 0)
        {
            Expiration = Timestamp + (long) timeToLive.TotalMilliseconds;
        }
    }
}
...
Message.cs -- Timestamp holds the wire formatted date, the message marshaller sets this value directly
// ActiveMqMessage extends Message
...
public long Timestamp
{
    get { return timestamp; }
    set { this.timestamp = value; }
}
...
DateUtils.cs -- Used to perform the conversions
namespace Apache.NMS.Util
{
    public class DateUtils
    {
        /// <summary>
        /// The start of the Windows epoch
        /// </summary>
        public static readonly DateTime windowsEpoch = new DateTime(1601, 1, 1, 0, 0, 0, 0);

        /// <summary>
        /// The start of the Java epoch
        /// </summary>
        public static readonly DateTime javaEpoch = new DateTime(1970, 1, 1, 0, 0, 0, 0);

        /// <summary>
        /// The difference between the Windows epoch and the Java epoch
        /// in milliseconds.
        /// </summary>
        public static readonly long epochDiff; /* = 11644473600000L; */

        static DateUtils()
        {
            epochDiff = (javaEpoch.ToFileTimeUtc() - windowsEpoch.ToFileTimeUtc())
                        / TimeSpan.TicksPerMillisecond;
        }

        public static long ToJavaTime(DateTime dateTime)
        {
            return (dateTime.ToFileTime() / TimeSpan.TicksPerMillisecond) - epochDiff;
        }

        public static DateTime ToDateTime(long javaTime)
        {
            return DateTime.FromFileTime((javaTime + epochDiff) * TimeSpan.TicksPerMillisecond);
        }

        public static long ToJavaTimeUtc(DateTime dateTime)
        {
            return (dateTime.ToFileTimeUtc() / TimeSpan.TicksPerMillisecond) - epochDiff;
        }

        public static DateTime ToDateTimeUtc(long javaTime)
        {
            return DateTime.FromFileTimeUtc((javaTime + epochDiff) * TimeSpan.TicksPerMillisecond);
        }
    }
}
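As a sanity check, the epochDiff value that DateUtils computes can be re-derived independently. A minimal Java sketch (class name EpochDiff is mine) using java.time, which, like .NET's DateTime, works on the proleptic Gregorian calendar:

```java
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.temporal.ChronoUnit;

public class EpochDiff {
    public static void main(String[] args) {
        // Windows FILETIME epoch: 1601-01-01T00:00:00Z
        Instant windowsEpoch =
            OffsetDateTime.of(1601, 1, 1, 0, 0, 0, 0, ZoneOffset.UTC).toInstant();
        // Java epoch: 1970-01-01T00:00:00Z
        Instant javaEpoch = Instant.EPOCH;
        long diffMs = ChronoUnit.MILLIS.between(windowsEpoch, javaEpoch);
        System.out.println(diffMs); // 11644473600000
    }
}
```

The result, 11644473600000 ms, matches what the static constructor computes from ToFileTimeUtc, so the epoch offset itself cannot account for errors of any size, let alone ±1000 ms.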
My code calculates latency as follows
var latency = (DateTime.UtcNow - msg.NMSTimestamp).TotalMilliseconds;
Yes, the conversion to/from Java time is lossy, but only below the millisecond. Java time is a count of whole milliseconds, while a Windows DateTime is a count of 100-nanosecond ticks, so the conversion is accurate down to the millisecond; any sub-millisecond ticks are discarded.
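To see the loss concretely, here is a small Java sketch (the sample tick value and names are illustrative) mimicking the integer division that ToJavaTime performs:

```java
public class Truncation {
    public static void main(String[] args) {
        // A FILETIME-style timestamp in 100 ns ticks (hypothetical value)
        long ticks = 131_234_567_891_234_567L;
        // ToJavaTime-style truncation: 10,000 ticks per millisecond
        long ms = ticks / 10_000;
        // Round-trip back to ticks, as ToDateTime would
        long back = ms * 10_000;
        System.out.println(ticks - back); // 4567 ticks (0.4567 ms) lost
    }
}
```

The round trip loses at most 9,999 ticks, i.e. strictly less than one millisecond per conversion.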
The NMS DateTime utility conversion functions are working correctly. Part of the noise is coming from your latency calculation: TotalMilliseconds returns a double, so you are comparing a full-precision DateTime.UtcNow against a timestamp that has already been truncated to whole milliseconds, and the fractional part of the result is meaningless. Since anything below a whole millisecond was lost in the conversion anyway, truncate the latency to whole milliseconds:
long latency = (long) System.Math.Floor((DateTime.UtcNow - msg.NMSTimestamp).TotalMilliseconds);
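For scale, a hypothetical Java sketch (all values illustrative) of the worst case: because the stored send timestamp is floored to the millisecond, a latency measured against a tick-precision clock is inflated by up to 1 ms, which bounds how much of the observed spread this truncation alone can explain:

```java
public class LatencyBias {
    public static void main(String[] args) {
        long sendTicks = 1_000_007_500L;       // send time in 100 ns ticks
        long storedMs = sendTicks / 10_000;    // timestamp as stored on the wire
        long recvTicks = sendTicks + 3_000;    // received 0.3 ms later
        // Latency computed against the truncated timestamp
        double latencyMs = (recvTicks - storedMs * 10_000) / 10_000.0;
        System.out.println(latencyMs);         // ~1.05 ms measured vs 0.3 ms actual
    }
}
```

The truncation bias is always positive and under 1 ms, so negative latencies on the order of -1000 ms must come from somewhere else, such as residual clock skew or scheduling delay between the clock read and the send.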