State of the Standards
What Matters When Measuring
The UNIX-NTP real-world compromise
What about SNTP (e.g., Windows)
What's wrong with the UNIX-NTP compromise?
The other big problem is that the compromise assumes that none of the processing software fully implements UTC leap seconds. One problem area is time-difference computation.
What is the time difference between 20081231235959 and 20090101000001? Almost all systems will say "2 seconds". It's the obvious answer, but in UTC it's wrong: the difference is 3 seconds, because there will be a leap second at the end of December 2008. The UNIX-NTP compromise was pragmatic; it is motivated by allowing continued use of the extensive library of existing software that does not handle leap seconds.
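The discrepancy is easy to demonstrate. The sketch below contrasts POSIX-style arithmetic, which ignores leap seconds, with a leap-second-aware computation. The names `LEAP_BOUNDARIES` and `utc_delta` are illustrative, not from any standard library, and the one-entry leap table is deliberately minimal.

```python
from datetime import datetime, timezone

# Two UTC instants straddling the leap second inserted at the end of 2008.
t1 = datetime(2008, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
t2 = datetime(2009, 1, 1, 0, 0, 1, tzinfo=timezone.utc)

# POSIX-style arithmetic: leap seconds are invisible, so this yields 2.
posix_delta = (t2 - t1).total_seconds()

# A deliberately tiny leap-second table: each entry is the UTC midnight
# immediately after an inserted leap second. A real table lists every
# leap second since 1972 and must be kept up to date.
LEAP_BOUNDARIES = [datetime(2009, 1, 1, tzinfo=timezone.utc)]

def utc_delta(a, b):
    """Elapsed seconds between a and b, counting inserted leap seconds."""
    leaps = sum(1 for ls in LEAP_BOUNDARIES if a < ls <= b)
    return (b - a).total_seconds() + leaps

print(posix_delta)        # 2.0
print(utc_delta(t1, t2))  # 3.0
```

The extra second only appears when the interval being measured happens to contain a leap-second boundary, which is exactly why most software never notices the problem.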
Implementing leap seconds completely means maintaining a configuration file that lists all the leap seconds, and checking every time-difference computation to see whether the interval includes one. UNIX can do this, but it means that you must regularly update this configuration file: leap seconds are not predictable more than a year in advance. This greatly complicates both system administration and software development.
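For concreteness: the configuration data typically comes from the leap-seconds.list file distributed by NIST and the IERS (the tz database's "right" zoneinfo files are built from the same data). Each entry gives an NTP timestamp (seconds since 1900) at which a new TAI-UTC offset takes effect. The December 2008 leap second appears as a line of this form:

```
#  NTP timestamp  TAI-UTC
3439756800	34	# 1 Jan 2009
```

When the table expires without an update, software has no way to know whether a future leap second has been scheduled.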
The compromise is pragmatically correct: very few applications use the leap-second configuration file in their time-difference computations. But it is a potential problem whenever systems get updated, and you never know when an old application will be updated. (I know of systems that deal with this issue by intentionally editing the configuration files to eliminate the list of leap seconds. This may mean deviations from UTC in some instances, but those deviations don't matter there, and it ensures that all of the time-analysis programs, both UTC-aware and not, give the same results.)
Quite a mess, isn't it?