Network Time Protocol (NTP)
is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks.
OVERVIEW
·
NTP provides Coordinated
Universal Time (UTC) including scheduled leap second
adjustments. No information about time zones or
daylight
saving time is transmitted; this information is outside
its scope and must be obtained separately.
·
NTP uses Marzullo’s
algorithm and is designed to resist the effects of variable
latency. NTP can usually maintain time to within tens of milliseconds over the
public Internet,
and can achieve 1 millisecond accuracy in local
area networks under ideal conditions.
NTP SOFTWARE
IMPLEMENTATIONS
·
For
modern Unix systems, the NTP client is implemented as a daemon process (ntpd) that runs continuously in user space.
·
Because
of its sensitivity to timing, however, it is important to have the standard NTP clock phase-locked loop
implemented in kernel space. All recent versions of Linux, BSD, Mac OS X, Solaris and AIX implement it in this manner.
·
The
NTP packet is a UDP datagram, carried on port 123.
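·
As a concrete illustration of the packet exchange, here is a minimal SNTP (Simple NTP) client sketch: it sends a 48-byte client request over UDP port 123 and decodes the server's transmit timestamp. The server name pool.ntp.org and the 5-second timeout are assumptions for the example, and SNTP provides none of full NTP's filtering, so this is not how ntpd disciplines a clock.

```python
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"   # assumed public pool server
NTP_PORT = 123                # NTP/SNTP runs over UDP port 123
NTP_UNIX_DELTA = 2208988800   # seconds from the NTP era (1900) to the Unix epoch (1970)

def sntp_time(server=NTP_SERVER):
    """Return the server's transmit time as Unix seconds (UTC)."""
    packet = bytearray(48)
    packet[0] = 0x1B          # LI = 0, version = 3, mode = 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, NTP_PORT))
        data, _ = sock.recvfrom(48)
    # Transmit Timestamp, integer seconds, lives at bytes 40-43 (big-endian).
    seconds_since_1900 = struct.unpack("!I", data[40:44])[0]
    return seconds_since_1900 - NTP_UNIX_DELTA

if __name__ == "__main__":
    t = sntp_time()
    print("server time (UTC):", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(t)))
```

Note that the result is UTC; as stated above, converting it to a local time zone is outside NTP's scope.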
MICROSOFT WINDOWS
·
Microsoft Windows
NT 4.0 did not
come with an NTP implementation. The reference implementation of NTP can be
used on NT4 systems.
·
All
Microsoft Windows versions since Windows 2000
include the Windows Time Service ("w32time"), which can
synchronize the computer clock to an NTP server.
·
The
version in Windows 2000 and Windows XP only implements Simple NTP, and violates
several aspects of the NTP version 3 standard. Beginning with Windows Server 2003 and Windows Vista,
a compliant implementation of full NTP is included.
·
However,
Microsoft does not guarantee that their implementation will be more accurate
than 2 seconds. If higher accuracy is desired, Microsoft recommends using a
different NTP implementation.
LAMPORT'S BAKERY ALGORITHM
·
Lamport's bakery algorithm is an algorithm
devised by computer scientist Leslie Lamport,
intended to improve safety in the use of shared resources among
multiple threads by means of mutual exclusion.
·
In
computer science, it is common for multiple threads to
simultaneously access the same resources. Data corruption
can occur if two or more threads try to write into the same memory location, or if one thread reads a
memory location before another has finished writing into it.
·
Lamport's
bakery algorithm is one of many mutual exclusion
algorithms designed to prevent concurrent threads from entering critical sections
of code simultaneously, eliminating the risk of data corruption.
Algorithm:
·
Lamport
envisioned a bakery with a numbering machine at its entrance so each customer
is given a unique number. Numbers increase by one as customers enter the store.
·
A global counter displays the number of the
customer that is currently being served. All other customers must wait in a
queue until the baker finishes serving the current customer and the next number
is displayed.
·
When
the customer is done shopping and has disposed of his or her number, the clerk
increments the number, allowing the next customer to be served. That customer
must draw another number from the numbering machine in order to shop again.
·
Due
to the limitations of computer architecture, some parts of Lamport's analogy need
slight modification. It is possible that more than one thread will get the same
number when they request it; this cannot be avoided.
·
Therefore,
it is assumed that the thread identifier i is also a priority. A lower
value of i means a higher priority and threads with higher priority will
enter the critical section first.
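·
As a rough sketch of how the modified analogy becomes code, here is a minimal Python rendering of the bakery lock for a few threads; the thread and iteration counts are illustrative. It relies on CPython's global interpreter lock to make the plain list reads and writes behave atomically; on weakly ordered hardware, or in languages without that crutch, the same algorithm needs sequentially consistent (atomic) loads and stores.

```python
import threading

N = 4                        # number of competing threads (illustrative)
entering = [False] * N       # entering[i]: thread i is choosing a ticket
number = [0] * N             # ticket numbers; 0 means "not competing"
shared_counter = 0           # resource protected by the bakery lock

def bakery_lock(i):
    # Draw a ticket one larger than any ticket currently held.
    entering[i] = True
    number[i] = 1 + max(number)
    entering[i] = False
    for j in range(N):
        if j == i:
            continue
        # Wait until thread j has finished choosing its ticket.
        while entering[j]:
            pass
        # Wait while j holds a smaller ticket, or an equal ticket with a
        # smaller id (the id is the priority described above).
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass

def bakery_unlock(i):
    number[i] = 0            # dispose of the ticket

def worker(i):
    global shared_counter
    for _ in range(100):
        bakery_lock(i)
        shared_counter += 1  # critical section
        bakery_unlock(i)

threads = [threading.Thread(target=worker, args=(k,)) for k in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_counter)        # expected: N * 100 = 400
```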
· Lamport was the first to give a
distributed mutual exclusion algorithm as an illustration of his clock
synchronization scheme. Let Ri be
the request set of site Si
, i.e. the set of sites from which Si needs permission when it
wants to enter the CS. In Lamport's algorithm, ∀i : 1 ≤ i ≤ N :: Ri = {S1, S2, ..., SN}. Every site
Si keeps a queue, request_queuei,
which contains mutual exclusion requests ordered by their timestamps. This
algorithm requires messages to be delivered in FIFO order between every
pair of sites.
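·
A small sketch of the per-site state this implies, assuming nothing beyond the description above: a Lamport logical clock plus request_queuei kept as a priority queue ordered by (timestamp, site id), so equal timestamps are broken by site id. The site ids and counts are illustrative.

```python
import heapq

class SiteState:
    """Per-site bookkeeping for Lamport's algorithm (sketch)."""
    def __init__(self, site_id, n_sites):
        self.site_id = site_id
        self.clock = 0                          # Lamport logical clock
        self.request_set = set(range(n_sites))  # Ri = {S1, ..., SN}, as above
        self.request_queue = []                 # min-heap of (ts, site id)

    def tick(self, received_ts=0):
        # Lamport clock rule: advance past any timestamp seen so far.
        self.clock = max(self.clock, received_ts) + 1
        return self.clock

    def enqueue(self, ts, site_id):
        # Requests are totally ordered by (ts, site id); ties break by id.
        heapq.heappush(self.request_queue, (ts, site_id))

# Two requests with equal timestamps: the lower site id is served first.
s = SiteState(site_id=1, n_sites=3)
s.enqueue(5, 3)
s.enqueue(5, 2)
print(s.request_queue[0])    # -> (5, 2)
```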
Requesting the critical
section.
1. When a site Si wants to enter the CS,
it sends a REQUEST(tsi, i) message to all the sites in its request set Ri
and places the request on request_queuei (tsi is the
timestamp of the request).
2. When a site Sj receives the REQUEST(tsi,
i) message from site Si, it returns a timestamped REPLY message to Si
and places site Si's request on request_queuej.
Executing the critical
section.
1. Site Si enters the CS when the
two following conditions hold:
a) [L1:] Si has received a message with a
timestamp larger than (tsi, i) from all other sites.
b) [L2:] Si's request is at the top of request_queuei.
Releasing the critical
section.
1. Site Si, upon exiting the CS, removes
its request from the top of its request queue and sends a timestamped RELEASE
message to all the sites in its request set.
2. When a site Sj receives a RELEASE
message from site Si, it removes Si's request from its request
queue.
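·
The sketch below ties the three steps together in a single-process simulation: REQUEST, REPLY and RELEASE messages are passed through in-memory FIFO inboxes rather than a real network, and the small driver at the end is illustrative. It follows the description above, not any particular production implementation, and uses 0-based site ids.

```python
import heapq

N = 3  # number of sites (illustrative)

class Site:
    def __init__(self, i, network):
        self.i = i
        self.network = network      # network[j] is site j's FIFO inbox
        self.clock = 0              # Lamport logical clock
        self.request_queue = []     # min-heap of (ts, site id)
        self.my_request = None
        self.last_ts_from = {}      # highest timestamp seen from each site

    def send(self, j, msg):
        self.network[j].append(msg)

    def request_cs(self):
        # Requesting, step 1: timestamp the request, enqueue it locally,
        # and send REQUEST(ts, i) to every other site in the request set.
        self.clock += 1
        self.my_request = (self.clock, self.i)
        heapq.heappush(self.request_queue, self.my_request)
        for j in range(N):
            if j != self.i:
                self.send(j, ("REQUEST", self.clock, self.i))

    def release_cs(self):
        # Releasing, step 1: remove the own request and broadcast RELEASE.
        self.request_queue = [r for r in self.request_queue if r != self.my_request]
        heapq.heapify(self.request_queue)
        self.my_request = None
        self.clock += 1
        for j in range(N):
            if j != self.i:
                self.send(j, ("RELEASE", self.clock, self.i))

    def handle(self, msg):
        kind, ts, j = msg
        self.clock = max(self.clock, ts) + 1
        self.last_ts_from[j] = ts
        if kind == "REQUEST":
            # Requesting, step 2: enqueue Sj's request and send a timestamped REPLY.
            heapq.heappush(self.request_queue, (ts, j))
            self.send(j, ("REPLY", self.clock, self.i))
        elif kind == "RELEASE":
            # Releasing, step 2: drop Sj's request from the local queue.
            self.request_queue = [r for r in self.request_queue if r[1] != j]
            heapq.heapify(self.request_queue)
        # A REPLY only updates the clock and last_ts_from.

    def can_enter_cs(self):
        # Conditions L1 and L2: every other site has sent a message with a
        # larger timestamp, and the own request heads the local queue.
        if self.my_request is None:
            return False
        l1 = all(self.last_ts_from.get(j, -1) > self.my_request[0]
                 for j in range(N) if j != self.i)
        l2 = bool(self.request_queue) and self.request_queue[0] == self.my_request
        return l1 and l2

# Driver: sites 0 and 1 both request the CS; inboxes are drained in FIFO order.
network = {j: [] for j in range(N)}
sites = [Site(i, network) for i in range(N)]
sites[0].request_cs()
sites[1].request_cs()
for _ in range(3):                            # a few delivery rounds
    for j in range(N):
        while network[j]:
            sites[j].handle(network[j].pop(0))
print([s.can_enter_cs() for s in sites])      # -> [True, False, False]
```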