U(Δt) ≈ U_f(Δt/2) U_v(Δt) U_f(Δt/2).  (6.97)

Writing this out (exercise 1 on page 209) it is easily seen that the velocity-

Verlet scheme (6.87) is recovered. Concatenating the force operator for

successive steps yields the leap-frog algorithm, (6.85). The method is pow-

erful enough to derive higher-order and multiple time-step algorithms.

(Footnote: U is the transformation matrix of the transformation of (x, v)(0)
to (x, v)(t). The Jacobian of the transformation, which is the determinant of
U, is equal to 1 because of the general rule det[exp(A)] = exp(tr A); the trace
of D is zero.)

For example, a double time-step algorithm with short- and long-range forces is
obtained (Tuckerman et al., 1992) by applying the propagator

U_l(Δt/2) [U_s(δt/2) U_v(δt) U_s(δt/2)]^n U_l(Δt/2),  (6.98)

where Us and Ul are the propagators for the short- and long-range forces,

respectively, and Δt = nδt.
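The nested propagator (6.98) maps directly onto code. The sketch below is a minimal illustration, not the reference implementation of Tuckerman et al.: a stiff harmonic force stands in for the short-range part and a soft one for the long-range part, and all parameters are invented for the example. U_f-type factors appear as half-step velocity "kicks" and U_v as free flight.

```python
def respa_step(x, v, f_short, f_long, m, dt, n):
    """One outer step of the propagator (6.98):
    Ul(dt/2) [Us(delta/2) Uv(delta) Us(delta/2)]^n Ul(dt/2), delta = dt/n."""
    delta = dt / n
    v += 0.5 * dt * f_long(x) / m              # Ul(dt/2): long-range half-kick
    for _ in range(n):
        v += 0.5 * delta * f_short(x) / m      # Us(delta/2): short-range half-kick
        x += delta * v                         # Uv(delta): free flight
        v += 0.5 * delta * f_short(x) / m      # Us(delta/2)
    v += 0.5 * dt * f_long(x) / m              # Ul(dt/2)
    return x, v

# Illustrative split: a stiff "short-range" and a soft "long-range" harmonic force.
k_s, k_l, m = 100.0, 1.0, 1.0
f_s = lambda x: -k_s * x
f_l = lambda x: -k_l * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, f_s, f_l, m, dt=0.01, n=10)
```

With n = 1 and the full force assigned to f_short (f_long = 0), respa_step reduces to the velocity-Verlet scheme implied by (6.97).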

6.4.1 Constraints

The incorporation of constraints is fully treated in Section 15.8 of Chapter

15, to which we refer. The most popular method is coordinate resetting, as

in the routine SHAKE and its variants SETTLE and RATTLE. In the Verlet
algorithm, the coordinate prediction x(t + Δt) is first made as if no con-
straints exist and subsequently the coordinates are iteratively reset in the
direction of x(t) until all constraints are satisfied. The most robust and
stable method is the projection method LINCS. The influence of constraints

on the statistical mechanics of canonical averages is treated in Chapter 17,

Section 17.9.3 on page 499.
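The coordinate-resetting idea can be made concrete for a single bond constraint between two particles. The sketch below is a simplified, single-constraint SHAKE-type iteration (the tolerance, masses, and positions are assumptions for illustration, not the production algorithm): after an unconstrained position prediction, the particles are moved along the old bond vector until the bond length is restored.

```python
def shake_bond(x1_old, x2_old, x1, x2, m1, m2, d, tol=1e-10, max_iter=50):
    """Iteratively reset the positions x1, x2 along the OLD bond vector until
    the constrained distance d is satisfied (single-constraint SHAKE-type loop)."""
    r_old = [a - b for a, b in zip(x1_old, x2_old)]
    for _ in range(max_iter):
        r = [a - b for a, b in zip(x1, x2)]
        sigma = sum(c * c for c in r) - d * d          # constraint violation
        if abs(sigma) < tol:
            break
        # first-order Lagrange-multiplier estimate for this iteration
        g = sigma / (2.0 * (1.0/m1 + 1.0/m2) * sum(a * b for a, b in zip(r, r_old)))
        x1 = [a - g * c / m1 for a, c in zip(x1, r_old)]
        x2 = [b + g * c / m2 for b, c in zip(x2, r_old)]
    return x1, x2

# Unconstrained Verlet prediction stretched the bond (d = 1) to about 1.2:
x1, x2 = shake_bond([0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                    [-0.1, 0.05, 0.0], [1.1, 0.0, 0.0],
                    m1=1.0, m2=1.0, d=1.0)
```

Each pass applies a first-order correction, so a few iterations suffice for a well-behaved bond; with many coupled constraints the same loop is swept repeatedly over all bonds.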

6.5 Controlling the system

In almost all cases it is necessary to make modifications to the Newtonian
equations of motion in order to avoid undesirable effects due to the inexact
solution of the equations of motion and the inexact evaluation of forces.

In most applications it is desirable to simulate at constant temperature,

preferably generating a canonical NVT ensemble, and in many applications
simulation at constant pressure (NpT) is preferred over constant volume.

In some applications simulation at constant chemical potential is desirable.

Finally, a very important class of applications is non-equilibrium simula-
tions, where the system is externally driven out of equilibrium, usually into

a steady-state condition, and its response is measured.

Depending on the purpose of the simulation it is more or less important to

generate an exact ensemble and to know the nature of the ensemble distribu-

tion. When the sole purpose is to equilibrate an initially disturbed system,

any robust and smooth method that does not require intervention is ac-

ceptable. Generally, when only average equilibrium quantities are required,

the exact nature of the generated equilibrium ensemble is less important as

long as the system remains close to Hamiltonian evolution. One should be

aware that there are system-size effects on averages that will depend on the

nature of the ensemble (see Chapter 17, Section 17.4.1 on page 462). When


one wishes to use the properties of fluctuations, e.g., to determine higher

derivatives of thermodynamic quantities, knowledge of the exact nature of

the generated distribution function is mandatory.

Four classes of methods are available to control the system externally:

(i) Stochastic methods, involving the application of stochastic forces to-

gether with friction forces. They are particularly useful to control

temperature. Such forces mimic the effect of elastic collisions with

light particles that form an ideal gas at a given temperature. They

produce a canonical ensemble. Other types of stochastic control make

use of reassigning certain variables (such as velocities to control
temperature) to preset distribution functions. Stochastic methods in general

enforce the required ensemble distribution, but disturb the dynamics

of the system.

(ii) Strong-coupling methods apply a constraint to the desired quantity,

e.g., for temperature control one may scale the velocities at every time

step to set the total kinetic energy exactly at the value prescribed by

the desired temperature. This is the Gauss isokinetic thermostat.

This method follows an old principle by Gauss stating that external

constraints should be applied in such a way that they cause the least

disturbance. The system dynamics is non-Hamiltonian. The Gauss

thermostat produces a canonical distribution in configuration space,

but disturbs the dynamical accuracy.

(iii) Weak-coupling methods apply a small perturbation aimed at smoothly
reducing the property to be controlled to a preset value by a
first-order rate equation. Such couplings can be applied to velocity

scaling to control the temperature and/or to coordinate and volume

scaling to control pressure. As the dynamics of the system is non-

Hamiltonian, weak-coupling methods do not generate a well-defined

ensemble. Depending on the coupling strength they generate an en-

semble in between microcanonical and canonical for temperature scal-

ing and in between isochoric and isobaric for coordinate scaling. It

seems warranted to use ensemble averages but not fluctuations in or-

der to determine thermodynamic quantities. Weak-coupling methods

are well suited to impose non-equilibrium conditions.

(iv) Extended system dynamics extends the system with extra degrees of

freedom related to the controlled quantity, with both a “coordinate”

and a conjugate “momentum.” The dynamics of the extended sys-

tems remains fully Hamiltonian, which enables the evaluation of the

distribution function, but the dynamics of the molecular system is


disturbed. A proper choice of extended variables and Hamiltonian
can combine the control of temperature and/or pressure with a canonical
distribution in configuration space.
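The contrast between classes (ii) and (iii) can be made concrete for temperature control. In the sketch below, the isokinetic (strong-coupling) factor rescales velocities so the kinetic temperature lands exactly on the target each step, while a weak-coupling factor with time constant τ, in the commonly used first-order form λ² = 1 + (Δt/τ)(T₀/T − 1), moves it only a fraction Δt/τ of the way; the numerical values are illustrative assumptions.

```python
import math

def isokinetic_scale(T_kin, T0):
    """Strong coupling: rescale velocities so the kinetic temperature equals T0 exactly."""
    return math.sqrt(T0 / T_kin)

def weak_coupling_scale(T_kin, T0, dt, tau):
    """Weak coupling: first-order relaxation of T_kin toward T0 with time constant tau."""
    return math.sqrt(1.0 + (dt / tau) * (T0 / T_kin - 1.0))

T_kin, T0 = 350.0, 300.0
lam_strong = isokinetic_scale(T_kin, T0)
lam_weak = weak_coupling_scale(T_kin, T0, dt=0.002, tau=0.1)
# Kinetic temperature scales with lambda^2:
T_after_strong = T_kin * lam_strong**2   # lands exactly on 300.0
T_after_weak = T_kin * lam_weak**2       # moves only 2% of the way, to 349.0
```

Applied every step, the weak-coupling factor relaxes the temperature exponentially toward T₀, which is why it is well suited to equilibration and to imposing non-equilibrium conditions.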

The influence of such external modifications on the equilibrium distribu-
tion function can be evaluated by the following considerations. First we

assume that the underlying (unperturbed) system is Hamiltonian, not only

as a differential equation, but also in the algorithmic implementation. This

is never true, and the effects of deviations become apparent only on longer

time scales as a result of accumulation of errors. For a true Hamiltonian

system, the evolution of density in phase space f (z) is given by the Liouville

equation (see Chapter 17, Section 17.8 on page 492, and (17.158)). It can

easily be seen that any distribution f (H) in phase space that depends only

on the total energy H = K + V will be stationary:

∂f(H)/∂t = −ż · ∇f(H)
         = −(df(H)/dH) Σ_{i=1}^{3N} (∂H/∂p_i ∂H/∂q_i − ∂H/∂q_i ∂H/∂p_i) = 0.  (6.99)

This means that in principle any initial distribution f (H) (e.g., the canonical

distribution) will not change in time, but there is no restoring force inherent

in the dynamics that will correct any deviations that may (slowly) develop.

If an external in¬‚uence drives the system to a given distribution, it provides

the necessary restoring force. If the Hamiltonian dynamics is accurate, only

a small restoring force is needed.
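The cancellation in (6.99) can be checked numerically. For a one-dimensional harmonic oscillator with f(H) = exp(−H/k_BT), the sketch below evaluates ż · ∇f at a few arbitrary phase-space points; the oscillator and its parameters are chosen purely for illustration.

```python
import math

def flow_dot_grad_f(q, p, m=1.0, k=1.0, kBT=1.0):
    """Evaluate zdot . grad f for H = p^2/2m + k q^2/2 and f(H) = exp(-H/kBT).
    Hamilton's equations give qdot = dH/dp and pdot = -dH/dq, so the two
    terms of (6.99) cancel identically."""
    H = p * p / (2.0 * m) + 0.5 * k * q * q
    dfdH = -math.exp(-H / kBT) / kBT
    dHdq, dHdp = k * q, p / m
    qdot, pdot = dHdp, -dHdq
    # = dfdH * (dHdp*dHdq - dHdq*dHdp) = 0 for every point (q, p)
    return dfdH * (qdot * dHdq + pdot * dHdp)

vals = [flow_dot_grad_f(q, p) for q in (-1.0, 0.3, 2.0) for p in (-0.5, 1.7)]
```

The result is zero at every point, confirming that any f(H) is stationary under the Hamiltonian flow, while nothing in the flow pulls a perturbed distribution back toward f(H).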

6.5.1 Stochastic methods

The application of stochastic disturbances to control temperature goes back

to Schneider and Stoll (1978) and corresponds to Langevin dynamics (see

Section 8.6); we shall call this method the Langevin thermostat. The idea is

to apply a frictional force and a random force to the momenta:

ṗ_i = F_i − γ p_i + R_i(t),  (6.100)

where Ri (t) is a zero-average stationary random process without memory:

⟨R_i(0) R_i(t)⟩ = 2 m_i γ_i k_B T δ(t).  (6.101)

For convenience we now drop the subscript i; the following must be valid for

any particle i and the friction and noise can be chosen differently for every

degree of freedom. The random disturbance is realized in time steps Δt and

the change in p is a random number drawn from a normal distribution with


variance ⟨(Δp)²⟩ given by

⟨(Δp)²⟩ = ⟨( ∫_0^Δt R(t′) dt′ )²⟩ = ∫_0^Δt dt′ ∫_0^Δt dt″ ⟨R(t′) R(t″)⟩
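Inserting (6.101) into this double integral gives a Gaussian momentum kick of variance 2mγk_BT Δt per step. The sketch below applies such kicks, together with the friction term of (6.100), to a single free particle; the simple Euler update and all parameter values are assumptions for illustration, and the long-time average of p²/m recovers the target temperature.

```python
import math
import random

random.seed(2)
m, gamma, kBT, dt = 1.0, 1.0, 1.0, 0.01
# Standard deviation of the random momentum kick per step, from (6.101):
sigma = math.sqrt(2.0 * m * gamma * kBT * dt)

p, p2_sum, nsteps = 0.0, 0.0, 500000
for _ in range(nsteps):
    # Free particle (F = 0): friction plus random kick, per (6.100)
    p += -gamma * p * dt + random.gauss(0.0, sigma)
    p2_sum += p * p

T_kin = p2_sum / nsteps / m   # long-time average of p^2/m approaches kBT
```

The friction drains kinetic energy while the noise feeds it back in, and the balance between the two (the fluctuation-dissipation relation (6.101)) pins the stationary kinetic temperature at T.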