

The virtual matter laboratory

M. J. G ILLAN

New types of computer simulation are being used to investigate matter on the atomic scale.

Unlike older simulation methods, the new techniques represent a solid or a liquid as a

collection of nuclei and electrons, and the laws of quantum mechanics are used to calculate

the energy and the forces from first principles. The most successful method so far for doing

first-principles simulation is based on density functional theory and the pseudopotential

approximation. I describe the main ideas of these techniques and explain how they can be

used to perform simulations in which chemical bonds are made and broken. I give

illustrations of current work in two areas: the atomic-scale behaviour of water and other

liquids; and molecular processes at surfaces. I stress that the new methods are still evolving

rapidly and I point to likely developments in the next few years.

1. Introduction

In the last few years, more and more condensed-matter

scientists have become gripped by a powerful idea. The idea

is to use the fundamental laws of quantum mechanics to

produce highly realistic simulations of solids and liquids on

the atomic scale. In this article, I will explain how these new

simulations work and I will give some examples of the rapid

progress they are bringing to the understanding of

condensed matter.

We all have an image of the world in our mind. We

consult this mental image when we want to interpret the

world, or when making plans to change it. But our image is

too unreliable and approximate. We construct maps,

models and other artefacts to make it more precise. The

ideal would be to have a faithful working model of the

world which would tell us what would happen in any

situation: a simulation. Powerful computers are making

simulation a familiar concept. The aerodynamic perfor-

mance of a new aircraft design, the response of a building

to an earthquake, the behaviour of a car in a high-speed

collision: these and many other problems are now studied

by simulation, in the confidence that what is seen on the

computer screen is a true picture of the real world. This

confidence rests on one key thing: the laws of physics. A

knowledge of these laws makes simulation possible.

We know the laws governing atomic-scale matter. These

are the laws of quantum mechanics obeyed by the nuclei

and electrons of which matter is made. It should therefore

be possible to make a precise working model of any

material. It has taken many years to develop the techniques

to do this. We have to simulate systems of many atoms and

each atom may contain many electrons. Since we are doing

quantum mechanics, each electron must be represented by a

wave. The electrons interact with each other and the

motion of one affects the motion of all the others. There are

tremendous difficulties.

When people first tried to simulate matter (Alder and

Wainwright 1957, 1959, Gibson et al. 1960), no-one

thought of modelling the behaviour of the electrons: it

just seemed too difficult. Instead, the atoms were treated as

rigid objects interacting with each other. Their interaction

was usually modelled by some empirical potential function,

representing the energy of two atoms as a function of their

distance apart. A commonly used interaction model was the

Lennard-Jones potential shown in figure 1. With such an

empirical interaction, the computer could be used to find

the equilibrium structure of a system of atoms; alternatively,

dynamical simulations could be performed, with all

the atoms moving according to Newton's equation of

motion. Since the interactions between the atoms were

represented by simple pair potentials, only very limited

kinds of system could be studied and much of this early

work was done on the rare-gas elements, particularly argon

(Rahman 1964, Verlet 1967). These elements have closed

electronic shells and the electrons are tightly bound to the

nuclei, so that a simple pair interaction is quite reasonable.
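The curve of figure 1 is simple enough to reproduce in a few lines. The Lennard-Jones potential is v(r) = 4ε[(σ/r)¹² − (σ/r)⁶]; the parameter values below (ε/k = 120 K, σ = 3.4 Å) are rough textbook numbers for argon, used here purely for illustration, not the exact values of any particular study:

```python
import math

# Lennard-Jones pair potential v(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6),
# with parameters roughly appropriate for argon (illustrative values only).
eps_over_k = 120.0   # well depth divided by Boltzmann's constant, in kelvin
sigma = 3.4          # distance where v crosses zero, in Angstrom

def v_over_k(r):
    """Potential divided by Boltzmann's constant, in kelvin (as in figure 1)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps_over_k * (sr6 * sr6 - sr6)

r_min = 2 ** (1 / 6) * sigma   # position of the minimum, r = 2^(1/6) * sigma
print(round(r_min, 3), round(v_over_k(r_min), 1))  # 3.816 -120.0
```

The minimum sits at r = 2^(1/6)σ with depth exactly −ε, which is why the well depth is usually quoted as ε/k in kelvin.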

In spite of this limitation to simple systems, the early

simulations made a great impact. They represented the

properties of simple liquids and solids quite well. Later,

people found that many other kinds of materials, particularly

ionic and molecular materials (Woodcock and Singer

1971, Cheung and Powles 1975), could be tackled in the

same way, and simulation grew into one of the major ways

of investigating condensed matter (Allen and Tildesley

1987). But still, simulation was limited by its rather

empirical approach.

Author's address: Physics Department, Keele University, Keele, Staffordshire ST5 5BG, UK.

Contemporary Physics, 1997, volume 38, number 2, pages 115-130. 0010-7514/97 $12.00 © 1997 Taylor & Francis Ltd.

In the 1980s radical new ideas emerged and these have

completely changed the simulation of matter. The main

new idea was that the electrons should be included

explicitly in the simulations, so that the system would be

represented as a collection of nuclei and electrons. The

fundamental equations of quantum mechanics would be

used to calculate the energy of the system and the forces on

the nuclei, and the atoms would then move under the action

of these forces. The first attempt to do this was reported by

Car and Parrinello (1985) and this paper has had a major

influence on simulation during the past ten years. The new

ideas changed the approach to simulation for two reasons.

First, they greatly expanded the range of materials and

problems that could be simulated. It was no longer

necessary to stick to closed-shell systems, because now all

kinds of ionic, covalent and metallic bonding could be

handled in the same way, and even the making and

breaking of chemical bonds could be simulated. Second,

instead of approaching every system in an empirical ad hoc

way, all of matter would now be treated using a unified

method. Most importantly, no adjustable parameters

would be needed. The whole of condensed matter would

be reconstructed from the ground up, using only the values

of the fundamental constants: the electronic charge e and

mass m and Planck’s constant h.

These new developments have brought us near to the

ideal of constructing a precise working model of atomic-

scale matter. I call this ideal the 'virtual matter laboratory'.

By observing virtual matter in this computer-generated

laboratory, we are observing a true image of real matter.

But more than this, by manipulating virtual matter and

observing the consequences, we are learning to make real

matter do what we want. What we have today is still an

approximation to the virtual matter laboratory (our

image of the real world is not perfect) but I hope to

convince you that the approximation is already good

enough to be extremely useful.

What about those tremendous difficulties that I mentioned,

the large numbers of wave-like electrons and

their complicated interactions? This is an important part

of the story and I will explain in section 2 how an

extremely effective, but still incomplete, way of overcoming

the difficulties has been found. If you are not

interested in that part of the story, and you want to know

about the science that is coming out of the new methods,

the article will still make sense if you simply skip section 2.

In talking about the science, I have made arbitrary

choices. First-principles simulation is now contributing

to so many fields that it would be impossible to do justice

to them all. Guided by my own interests, I have chosen

illustrations concerning liquids and surfaces. In discussing

liquids, I will describe in section 3 some of the recent

progress in understanding liquid metals and semiconduc-

tors, and I will also say something about that most vital of

all liquids, water. The new methods are also bringing

rapid progress in surface science and are giving new

insights into the way molecules interact with surfaces. I

will talk in section 4 about recent work on the interaction

of the H2 molecule with metal surfaces, and the

interaction of the Cl2 molecule with the surface of silicon.

First-principles simulation is undergoing a tremendous

boom and I will end the paper by giving some predictions

of what the next ten years will bring.

2. First-principles simulation

We want to simulate atoms moving about in solids and

liquids. I will assume that as the nuclei move, the system of

electrons is always in its ground state: I am not interested in

electronic excitations. This assumption, technically

known as the Born-Oppenheimer approximation, is

usually a good one. The electrons are so much lighter than

the nuclei that they can adjust themselves very rapidly to

what the nuclei do.

[Figure 1. The Lennard-Jones form of interaction potential v(r) between two rare-gas atoms, used in early simulations of liquid argon. The curve shows v(r) divided by Boltzmann's constant k, in units of degrees K.]

From this point of view, the problem of doing realistic simulations breaks into two questions: first,

how do you find the ground state of the system of electrons

when the nuclei are sitting in certain positions and, second,

how should the nuclei move? The first question is by far the

most difficult, and will occupy most of this section; the

second is rather straightforward, once we have answered

the first. In fact, I will explain that the first question consists

of three separate problems: electron correlation, the

representation of the electron waves, and the treatment of

large numbers of atoms. The reader who feels dissatisfied

with the brief details I will give here should consult the

original paper by Car and Parrinello (1985) and the more

recent reviews by, for example, Galli and Parrinello (1991)

and Payne et al. (1992).

2.1. Quantum mechanics

Let's start from basics by recalling the physics of the

hydrogen atom. The single electron bound to a proton is

very similar to a planet orbiting the sun, except that the

electron is attracted to the proton by the electrostatic

attraction between unlike charges, instead of gravitation.

The other great difference is that we must use quantum

mechanics instead of Newton's laws of motion, so that the

electron is described by a wavefunction ψ(r). The wavefunction

itself is not directly observable, but its square

|ψ(r)|²

gives the probability of finding the electron at any

point r.

To find the allowed energies E_n of the hydrogen atom

and the corresponding wavefunctions ψ_n(r) of the electron,

you have to solve the Schrödinger equation, which says

that

−(ħ²/2m) ∇²ψ_n(r) + V(r) ψ_n(r) = E_n ψ_n(r).   (1)

Here, ħ is Planck's constant h divided by 2π, and V(r)

represents the potential energy of the electron in the

electrostatic field of the proton:

V(r) = −e²/(4πε₀r),   (2)

where r is the distance between the electron and the proton.

The problem of the hydrogen atom can be solved

exactly, and there is a well-known formula for its allowed

energies:

E_n = −me⁴/[2(4πε₀)²ħ²n²].   (3)

The state of lowest possible energy, the ground state, is

obtained by putting n = 1, and in this case the wavefunction

is given by the simple formula:

ψ₁(r) = (1/πa₀³)^(1/2) exp(−r/a₀),   (4)

where a₀ is the Bohr radius, which has a value of about

0.529 Å.
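Equations (3) and (4) invite a quick numerical check. The short script below (with CODATA values of the fundamental constants typed in by hand) recovers the Bohr radius, evaluates equation (3) at n = 1 to get the familiar −13.6 eV, and confirms by radial integration that ψ₁ is normalized:

```python
import math

# Fundamental constants (SI, CODATA values)
m = 9.1093837015e-31     # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
h = 6.62607015e-34       # Planck's constant, J s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = h / (2 * math.pi)

# Bohr radius: a0 = 4*pi*eps0*hbar^2 / (m*e^2)
a0 = 4 * math.pi * eps0 * hbar**2 / (m * e**2)

# Ground-state energy from equation (3) with n = 1, converted to eV
E1 = -m * e**4 / ((4 * math.pi * eps0)**2 * 2 * hbar**2) / e

# Normalization of psi_1: integrate |psi_1|^2 * 4*pi*r^2 dr on a radial grid
dr = a0 / 1000
norm = sum(math.exp(-2 * r / a0) / (math.pi * a0**3) * 4 * math.pi * r**2 * dr
           for r in (i * dr for i in range(1, 30001)))

print(f"a0   = {a0 * 1e10:.4f} Angstrom")  # ~0.5292
print(f"E1   = {E1:.3f} eV")               # ~-13.606
print(f"norm = {norm:.4f}")                # ~1.0000
```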

Unfortunately, as soon as you consider atoms with more

than one electron, it becomes impossible to do the quantum

mechanics exactly, and it is even worse for many atoms. To

see what the problem is, let us look at the helium atom. We

now have two electrons, acted on by the electrostatic

attraction of the nucleus and their own electrostatic

repulsion. It is this repulsion that causes all the trouble.

If there were no repulsion, each electron would behave as if

the other was not there and we would be back to a single-

electron problem like equation (1), which is easy to solve.

In this situation, the two electrons would have their own

wavefunctions ψ_a(r₁) and ψ_b(r₂), and it turns out that the

wavefunction ψ(r₁, r₂) representing the two-electron system

is just the product of the two:

ψ(r₁, r₂) = ψ_a(r₁) ψ_b(r₂).   (5)

The repulsion between the electrons completely spoils

this beautiful simplicity. The true wavefunction, instead of

being expressible as the product form shown in equation

(5), is some very complicated function of the electron

positions, which we can never hope to find exactly. This

reflects the fact that the electrons do not move independently:

their motion is correlated. We are now face-to-face

with the first and most profound problem of doing first-principles

calculations.

2.2. Ignoring correlation: Hartree theory

We cannot simply ignore the repulsion between electrons.

When the distance between two electrons is 1 Å, their

electrostatic interaction energy is roughly 14 eV, and if we

ignored an energy as large as this we would get completely

wrong results. However, instead of ignoring their interaction,

we can ignore their correlation. This idea leads to a

method called Hartree-Fock theory, which is a step in the

right direction.
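The 14 eV figure is easy to verify: a few lines of Python (constants in SI units) evaluate the Coulomb energy e²/4πε₀r at r = 1 Å:

```python
import math

# Coulomb repulsion energy of two electrons 1 Angstrom apart,
# E = e^2 / (4*pi*eps0*r), converted to electronvolts.
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
r = 1.0e-10              # separation, m

E_joule = e**2 / (4 * math.pi * eps0 * r)
E_eV = E_joule / e
print(f"{E_eV:.1f} eV")  # 14.4 eV
```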

What does it mean to ignore correlation? Suppose I am

an observer sitting at some point r inside an atom. I observe

an electron orbiting around the nucleus. At my observation

point I measure the potential V(r) due to the charge on the

nucleus and the charge on the orbiting electron. Because

the electron is moving, the potential V(r) fluctuates. Let me

take the average value of the potential at my observation

point r, which I will call V_m(r). At every point in space,

there is an average potential. Now instead of being an

observer, I will be an electron. I will orbit the nucleus under

the action of this average potential V_m(r). The other

electrons will do the same. We will all move, not acted on

by the true fluctuating potential, but acted on by the

average potential due to the nucleus and the electrons.

Since we are all moving in a static potential, we are

behaving like independent electrons, but at the same time

we feel each other's repulsion, though only in an average

sense.


What I have just described is the essence of Hartree

theory (the Fock part comes later). It replaces the

(insoluble) many-electron problem by the (soluble) problem

of single electrons moving in the average potential V_m(r):

we are back to equation (1). But we have paid a price: by

ignoring correlation, we are making a deliberate error,

which may have serious consequences.

2.3. The lonely electron: Hartree-Fock theory

Up to now, I have avoided mentioning two very important

facts about electrons. Indeed, some of the things I said were

not quite correct. Now I must put things right. The first fact

is that electrons are spinning about their own axis.

Quantum mechanics says that the component of the spin

angular momentum along a given direction can only have

the two values ±½ħ. The spin can only be 'up' or 'down'.

This means that the wavefunction ψ_σ(r) for an electron

should depend on a spin variable σ, which has the values

'up' or 'down', which I will write ↑ and ↓. For example,

|ψ_↑(r)|² gives the probability for finding the electron at

point r with its spin pointing up.

The second important fact is that electrons are indis-

tinguishable: there is absolutely no way of telling which is

which. According to quantum mechanics, this has extra-

ordinary consequences. The main point for us is that the

wavefunction of a system of electrons must change its sign

when any two electrons are interchanged. This requirement,

called 'exchange symmetry', means that for two electrons

the simple product wavefunction shown in equation (5) is

not correct. Instead, if the two electrons both have their

spins up, the two-electron wavefunction must be:

ψ_a↑(r₁) ψ_b↑(r₂) − ψ_a↑(r₂) ψ_b↑(r₁).   (6)

This exchange symmetry is the origin of the Pauli exclusion

principle, which says that two electrons cannot be in the

same quantum state. It also says that two electrons with the

same spin cannot be found at the same place: if you put

r₁ = r₂ into equation (6), the wavefunction vanishes, so that

there is zero probability of finding them at the same place.

Electrons are unsociable: they keep away from each other.
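The two properties just described (sign change under exchange, zero amplitude at coincident points) can be checked directly for any pair of orbitals. A little sketch, in which the two one-dimensional orbitals are invented stand-ins, not real atomic wavefunctions:

```python
import math

# Two invented one-dimensional "orbitals", standing in for psi_a and psi_b.
def psi_a(x):
    return math.exp(-abs(x))

def psi_b(x):
    return x * math.exp(-abs(x) / 2)

def psi_pair(x1, x2):
    """Antisymmetrized two-electron wavefunction, as in equation (6)."""
    return psi_a(x1) * psi_b(x2) - psi_a(x2) * psi_b(x1)

# Swapping the two electrons flips the sign of the wavefunction ...
assert math.isclose(psi_pair(0.3, 1.7), -psi_pair(1.7, 0.3))
# ... and two same-spin electrons at the same point give zero amplitude.
assert psi_pair(1.2, 1.2) == 0.0
print("exchange symmetry verified")
```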

The unsociability of electrons is not included in Hartree

theory, because the theory does not respect exchange

symmetry. But all we have to do is to replace the simple

product of wavefunctions like equation (5) by the correct

'antisymmetrized' product like equation (6). The resulting

scheme is Hartree-Fock theory. The inclusion of exchange

symmetry lowers the energy. This is because it keeps the

electrons away from each other, so that they feel less

of the positive repulsive energy. The energy reduction due to

exchange symmetry is called exchange energy.

Hartree-Fock theory has been widely used, but it is not

very accurate and would not be good enough for the virtual

matter laboratory. It is not enough simply to neglect the

correlation due to electronic repulsion and this is why we

need density functional theory.

2.4. Coping with correlation: density functional theory

We have seen how the Hartree and Hartree-Fock theories

include the repulsive interaction between electrons by

ignoring correlations. All the electrons move independently

under the action of a static potential, and the average

repulsive interaction is included as part of this potential.

Amazingly, it turns out to be possible to include correlation

simply by modifying the static potential, and the miracle

is that this can in principle be done exactly. This is the idea

behind density functional theory (DFT) (Hohenberg and

Kohn 1964, Kohn and Sham 1965, Jones and Gunnarsson

1989).

To understand what DFT does, remember how exchange

symmetry results in the lowering of the energy by keeping

electrons apart. Correlation has exactly the same effect. The

electrostatic repulsion between electrons also tends to keep

them apart and this is the essence of the effect I am calling

correlation. Electrons avoid each other for two reasons:

first, because of exchange symmetry; and second, because

of correlation. Both mechanisms lower the energy. Because

exchange and correlation have such similar effects, they are

often lumped together. The reduction of energy caused by

exchange and correlation is called the exchange-correlation

energy and denoted by E_xc. The key statement of DFT

for us is that E_xc can be expressed solely in terms of the

electron density distribution.

Now there is one system for which we know all about the

exchange-correlation energy. This is the uniform gas of

interacting electrons, sometimes called jellium. Theorists

have spent a lot of effort on jellium, and the exchange-correlation

energy per electron is accurately known for a

wide range of electron densities (Ceperley and Alder 1980,

Perdew and Zunger 1981). Let us call ε_xc(n) the exchange-correlation

energy per electron in jellium when the electron

density is n. The great breakthrough in dealing with

correlation was the discovery that the exchange-correlation

energy of electrons in a system of atoms is very similar

to that in jellium. In collections of atoms, the electron

density varies from place to place; let's call it n(r). Now

assume that the exchange-correlation energy per electron

at point r is given by ε_xc(n(r)), the quantity appropriate to

jellium. Then the amount of exchange-correlation energy

per unit volume is n(r) ε_xc(n(r)), and the total exchange-correlation

energy is given by:

E_xc = ∫ n(r) ε_xc(n(r)) dr.   (7)

This remarkably simple expression, called the local density

approximation (LDA), has proved astonishingly successful

for many condensed-matter systems. Recently even better

expressions have been found, which allow for the gradients

of n(r). These new expressions are called 'generalized

gradient approximations' (GGA) (Perdew 1986, Becke

1988).
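As a concrete, exchange-only illustration of equation (7): in Hartree atomic units the exchange part of ε_xc for the unpolarized jellium gas has the closed form ε_x(n) = −(3/4)(3/π)^(1/3) n^(1/3), and feeding it the hydrogen 1s density n(r) = e^(−2r)/π on a radial grid gives an LDA-style exchange energy for the hydrogen atom. This sketch deliberately omits correlation and spin polarization, which real calculations include through the full parametrized ε_xc:

```python
import math

# LDA exchange-only sketch of equation (7), in Hartree atomic units.
# Dirac exchange energy per electron of unpolarized jellium at density n:
def eps_x(n):
    return -0.75 * (3.0 / math.pi) ** (1.0 / 3.0) * n ** (1.0 / 3.0)

# Hydrogen 1s density n(r) = exp(-2r)/pi (atomic units).
def density(r):
    return math.exp(-2.0 * r) / math.pi

# Radial integration of equation (7): E_x = int n(r) eps_x(n(r)) 4*pi*r^2 dr
dr = 1e-4
E_x = sum(density(r) * eps_x(density(r)) * 4 * math.pi * r**2 * dr
          for r in (i * dr for i in range(1, 300001)))

print(f"LDA-style exchange energy of H: {E_x:.4f} Hartree")  # ~ -0.2127
```

For this density the integral can also be done analytically, and the numerical sum above reproduces that closed-form value.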

If you have followed me this far, you can permit yourself

a large sigh of relief because our discussion of correlation is

now done. I called this question of electron correlation the

first and most profound problem, but the approximate

solution has turned out to be amazingly simple. The LDA

and GGA form the basis for all the simulations I will

discuss later.

2.5. Representing the wavefunctions: pseudopotentials

and plane waves

We now come to the second major problem: how do you

represent wavefunctions on a computer? Computers are

discrete, finite machines designed to store and manipulate

lists of numbers, so this means that wavefunctions must be

represented as lists of numbers. There are two opposing

schools of thought about how to do this.

One school says that the wavefunctions ψ_i(r) of electrons

in an assembly of atoms are like the wavefunctions φ_a(r) of

electrons in isolated atoms. The idea is that the wavefunction

ψ_i(r) can be represented by adding together the atomic

wavefunctions φ_a(r):

ψ_i(r) = Σ_a c_ia φ_a(r).   (8)

We know about the atomic wavefunctions; they are like the

wavefunctions of the hydrogen atom given in equation (4).

The 'list of numbers' manipulated by the computer is the

set of coefficients c_ia. The computer has to vary these

coefficients until the electrons are as near as possible to the

ground state. The φ_a(r) functions are called 'basis

functions', because they provide a basis for representing the

wavefunctions ψ_i(r).

The other school of thought also uses basis functions,

but starts from a very different viewpoint. It says that the

electrons in condensed matter can run about rather freely

so that they are like free particles. Now the wavefunction

for a free electron is exp(ik·r), where k is the momentum of

the electron divided by ħ; in fact, k is just the wavevector of

the de Broglie wave. So the idea is to use these plane waves

exp(ik·r) as basis functions:

ψ_i(r) = Σ_k c_ik exp(ik·r),   (9)

and now the coefficients c_ik will be the 'list of numbers' to

be varied.

The two schools of thought have both produced strong

arguments why their method is best. But in practice, for

large numbers of atoms in condensed matter, the plane-

wave method has been much more successful. This is

surprising, because the electrons in matter are clearly

nothing like free particles. In most atoms, there are tightly-

bound core electrons confined to small regions around the

nucleus, so that the plane-wave method seems to contradict

common sense. In fact, the method only makes sense when

combined with another idea: the pseudopotential concept.

What is a pseudopotential? It is basically a modified

form of the true potential experienced by the electrons

(Heine 1970, Cohen and Heine 1970, Heine and Weaire

1970). When they are near the nucleus, the electrons feel a

strong attractive potential and this gives them a high kinetic

energy. But this means that their de Broglie wavelength is

very small, and their wavevector k is very large. Because of

this, a plane-wave basis would have to contain so many

wavevectors k in equation (9) that the calculations would

become impossible. A remarkable way of eliminating this

problem was discovered about 35 years ago by Heine,

Cohen and others, who showed that you can represent the

interaction of the valence electrons with the atomic cores by

a weak effective 'pseudopotential' and still end up with a

correct description of the electron states and the energy of

the system. In this way of doing it, the core electrons are

assumed to be in exactly the same states that they occupy in

the isolated atom, which is usually valid.

The discovery that the true potential can be replaced by a

much weaker pseudopotential is an extraordinary one and

it has had a deep influence on condensed-matter physics.

There is now a highly developed theory which specifies how

pseudopotentials should be constructed for all the elements

in the periodic table so that they reproduce the properties

of the real potentials as exactly as possible (Bachelet et al.

1982). With these pseudopotentials, plane-wave basis sets

can be used for any element.

Plane waves have proved to be very successful for many

reasons. The wavefunctions can be made as accurate as

necessary by increasing the number of plane waves, so that

the method is systematically improvable. Plane waves are

simple so that the computer's job is easy. It also turns out

that the forces on the ions are straightforward to calculate,

so that it is easy to move them. Finally, plane waves are

unbiased. The calculations are unaffected by the prejudices

of the user, an important advantage for any method that

is going to be widely used.
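The 'systematically improvable' claim is easy to demonstrate on a toy periodic function: keep more and more Fourier components (the one-dimensional analogue of adding plane waves to the basis) and watch the representation error fall. A sketch with NumPy, where the function exp(cos x) is an arbitrary smooth periodic example:

```python
import numpy as np

# A smooth periodic "wavefunction" sampled on [0, 2*pi).
N = 256
x = 2 * np.pi * np.arange(N) / N
f = np.exp(np.cos(x))

# Plane-wave (Fourier) coefficients: c_k in f(x) = sum_k c_k exp(i*k*x)
c = np.fft.fft(f) / N

def reconstruct(kmax):
    """Rebuild f keeping only plane waves with |k| <= kmax."""
    k = np.fft.fftfreq(N, d=1.0 / N)   # integer wavevectors
    mask = np.abs(k) <= kmax
    return np.real(np.fft.ifft(np.where(mask, c, 0) * N))

err4 = np.max(np.abs(f - reconstruct(4)))
err8 = np.max(np.abs(f - reconstruct(8)))
print(err4, err8)  # the error drops rapidly as the basis grows
```

In a plane-wave DFT code the same knob is the kinetic-energy cutoff: raising it adds wavevectors to the basis and converges the answer systematically.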

2.6. Supercells

I have now finished with the difficult ideas, but there is still

one more important problem. In condensed matter, we are

dealing with enormous numbers of atoms, numbers like

10²³. But realistically we can never do first-principles

simulations on numbers like that. Fortunately, this does

not matter in most situations. Like short-sighted people in

a crowd, atoms are aware only of their neighbours. More

scientifically, the electrons arrange themselves so that the


physical interactions between atoms have a range of only a

few atomic diameters. This means that the properties of

matter containing many atoms can be understood by

calculations on quite small numbers of atoms.

There are two ways of doing this: the cluster method and

the supercell method. In the cluster method, we carve out a

small piece of material and do the simulations on that. The

small piece is a cluster of atoms, which in the crudest

approximation is simply in free space, but more commonly

is embedded in a simple representation of the surrounding

material. In principle, one should try to increase the size of

the cluster until the properties of interest cease to vary. The

cluster method is not usually a good idea, for a simple

reason: its properties are dominated by its surface, unless

the cluster is very large. (Of course, real physical clusters

are interesting in their own right, but that is another

matter.)

In the supercell method, the calculations are also done
on a limited set of atoms, but there is a big difference. The
set of atoms is surrounded on all sides by images of itself,
which are periodically repeated as shown in figure 2. This
device has the excellent effect of eliminating unwanted
boundaries and surfaces. The set of atoms that are
actually simulated are fooled into thinking that they are
part of an infinite system. The price we pay is that the
system is made artificially periodic, but the effects of this
generally disappear rapidly as the size of the system is
increased.

There is another great advantage of supercells: they fit
very well with plane-wave basis sets. In a periodic system, a
plane-wave representation is the same as a Fourier series.
Since Fourier series are expressly designed to represent
periodic functions, there is something extremely natural
about the combination of the supercell method and
plane-wave basis sets.
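As a small illustration of why the combination is so natural, the sketch below (my own, in Python with NumPy, not code from the work described here) treats a function that is periodic on a one-dimensional "supercell" of length L: its plane-wave coefficients are exactly the Fourier coefficients, recovered with an FFT, and summing the plane waves rebuilds the function.

```python
import numpy as np

# A smooth function that is periodic on a 1D "supercell" of length L,
# sampled at n grid points.
L = 10.0
n = 64
x = np.linspace(0.0, L, n, endpoint=False)
f = np.exp(np.cos(2.0 * np.pi * x / L))

# Plane-wave (Fourier) coefficients c_G for wavevectors G = 2*pi*m/L.
c = np.fft.fft(f) / n
G = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)

# Resynthesize f(x) as the plane-wave sum  f(x) = sum_G c_G exp(i G x).
f_rebuilt = np.real(sum(cG * np.exp(1j * g * x) for cG, g in zip(c, G)))
```

Truncating the sum at some maximum |G| (a plane-wave cutoff) gives a controlled approximation, which is the sense in which the method is systematically improvable.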

2.7. Move the atoms!

Now the ideas are all in place. The difficult problem of
electron correlation is handled by the local density
approximation for exchange and correlation or by one of
the improved approximations. The wavefunctions of the
electrons are represented in terms of plane waves, using the
pseudopotential scheme. The supercell device makes the
atoms behave as they would in bulk matter.

By doing things this way, we can find the ground state of
a system of many atoms, by varying the plane-wave
coefficients until the energy is a minimum. We can also
calculate the forces F_i on all the atomic cores. These forces
come in two parts: the electrostatic forces between the
charges of the cores and the forces exerted on the cores by
the electrons. This second part can be calculated once we
know the wavefunctions of the electrons in the ground
state.

We are now ready to move the atoms. We let our system
evolve in time by making the atomic positions R_i follow
Newton's equation of motion:

d²R_i/dt² = F_i/M_i,   (10)

where M_i is the mass of atom i. Our working model of
matter moves into action! In fact, this way of moving the
atoms was already being used many years ago in the
old-style simulations based on empirical models for the
interaction between atoms (Gibson et al. 1960, Rahman
1964, Verlet 1967, Allen and Tildesley 1987). But now there
is no model. Everything is calculated from the quantum
mechanics of the electrons. As the nuclei move, the
electrons follow, and for each new set of atomic positions
R_i the electronic ground state and hence the forces on the
atoms are recalculated.
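The loop itself is simple. The sketch below (mine, not the authors' code) integrates equation (10) with the standard velocity-Verlet scheme; a stand-in force routine marks the place where a real first-principles code would solve for the electronic ground state at every new set of positions.

```python
import numpy as np

def forces(R):
    """Stand-in for the first-principles force calculation: in the real
    method, the electronic ground state is found for the positions R
    and the forces F_i follow from it.  Here a simple harmonic pull
    toward the origin plays that role, so the loop is runnable."""
    return -R

def velocity_verlet(R, V, M, dt, n_steps):
    """Integrate d2R_i/dt2 = F_i/M_i (equation (10)) for all atoms."""
    F = forces(R)
    for _ in range(n_steps):
        V = V + 0.5 * dt * F / M   # half kick
        R = R + dt * V             # drift
        F = forces(R)              # recompute forces at the new positions
        V = V + 0.5 * dt * F / M   # second half kick
    return R, V

# Two atoms of unit mass in three dimensions, started from rest.
R0 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
V0 = np.zeros((2, 3))
M = np.array([[1.0], [1.0]])
R, V = velocity_verlet(R0, V0, M, dt=0.01, n_steps=100)
```

The expensive step in a genuine ab initio simulation is the `forces` call: each one is a full quantum-mechanical ground-state calculation.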

I must explain a striking feature of these simulations.
We are using quantum mechanics for the electrons but
classical mechanics for the nuclei. Clearly the electrons
demand quantum mechanics, because they are in the
ground state, but what justifies the use of classical
mechanics for the nuclei? The answer is that quantum
mechanics goes over to classical mechanics for highly
excited states, and for most situations the nuclei are in
highly excited states. Put another way, the de Broglie
wavelength of the nuclei is very short compared with the
distance between the atoms, so that quantum effects are
negligible. But this is not always justified. For very light
atoms like hydrogen, we cannot use Newton's laws, and I
will discuss an example of this later.
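The criterion is easy to check numerically. The short sketch below (my own, using the standard thermal de Broglie wavelength λ = h/√(2πMk_BT)) compares a silicon atom near the melting point with a hydrogen atom at room temperature: for Si the wavelength comes out well below 0.1 Å, tiny compared with interatomic distances of a few Å, while for H it is about 1 Å, comparable to them.

```python
import math

h = 6.626e-34    # Planck constant, J s
kB = 1.381e-23   # Boltzmann constant, J/K
amu = 1.661e-27  # atomic mass unit, kg

def thermal_de_broglie(mass_amu, T):
    """Thermal de Broglie wavelength, in metres."""
    return h / math.sqrt(2.0 * math.pi * mass_amu * amu * kB * T)

lam_Si = thermal_de_broglie(28.09, 1700.0) * 1e10  # liquid Si, angstroms
lam_H = thermal_de_broglie(1.008, 300.0) * 1e10    # H at room temperature
```

This is the quantitative content of the statement above: classical nuclei are fine for Si, questionable for hydrogen.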

Figure 2. An illustration of the supercell method. The
simulation cell, in this case a cube containing 6 atoms, is
repeated to form an infinite periodic system.


3. Liquids

The physical concepts that go into the simulation of liquids

are fairly simple. We take a collection of electrons and

nuclei in our periodically repeating cell, with the volume of

the box chosen to give the density we want. The electrons

are brought to the ground state and we determine the

energy of the system and the forces on the atoms. We then

give the nuclei some kinetic energy and let the system

evolve in time. In the language of statistical mechanics, the

system explores the microstates (atomic positions and

momenta) associated with the thermodynamic state having

the given density and energy. In other words, our

simulation generates the microcanonical ensemble (Mandl

1988). The temperature T of the system can be found using
the equipartition principle: the average kinetic energy per
atom is (3/2)k_B T, where k_B is Boltzmann's constant. If we
want to adjust the temperature, we add or remove kinetic
energy by rescaling the velocities.
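A minimal sketch of this bookkeeping (mine, not from the work described): measure the temperature from the total kinetic energy via equipartition, then multiply every velocity by a common factor to reach a target temperature.

```python
import numpy as np

kB = 1.381e-23  # Boltzmann constant, J/K

def temperature(V, M):
    """Instantaneous temperature from equipartition: the average
    kinetic energy per atom is (3/2) kB T."""
    n_atoms = len(M)
    ke = 0.5 * np.sum(M[:, None] * V**2)
    return 2.0 * ke / (3.0 * n_atoms * kB)

def rescale(V, M, T_target):
    """Adjust the temperature by rescaling all velocities."""
    return V * np.sqrt(T_target / temperature(V, M))

# 64 silicon-mass atoms with random velocities, rescaled to 1700 K.
rng = np.random.default_rng(0)
M = np.full(64, 4.66e-26)                  # masses in kg
V = rng.normal(scale=800.0, size=(64, 3))  # velocities in m/s
V = rescale(V, M, 1700.0)
```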

3.1. Liquid silicon

The first liquid studied with the new methods was liquid
silicon (Štich et al. 1989, 1991), and this is an excellent
illustration because its covalent bonding makes it very
difficult to model realistically with empirical interaction
potentials.

Silicon is in the same column of the periodic table as
carbon and crystallizes in the diamond structure shown in
figure 3. In this structure, each Si atom is connected by
covalent bonds to four neighbouring atoms. Crystalline Si
is a semiconductor with a band gap of 1.1 eV. (Its
semiconducting properties form the basis of the computer
industry, so that with delightful circularity we are using Si
to study Si!) When Si melts, its structure changes
completely, as we shall see, and it becomes a metal. As
the atoms move around in the liquid, there must be a
constantly shifting pattern of bonding to their neighbours.
This means that the electrons in the system must be
continually rearranging themselves in response to the
motion of the atoms. This is exactly the kind of problem
that the new simulation methods are designed to address.

Štich et al. performed a completely first-principles
simulation of liquid silicon (I will use the abbreviation l-Si),
using the molecular dynamics methods sketched in section
2.7. The repeating simulation cell they used contained only
64 atoms and the duration of the simulation (after
equilibration) was only 1.2 ps, but subsequent work has
shown that this is just about enough to represent the bulk
liquid.

The most direct way of testing that such a simulated

system is faithfully mimicking its counterpart in the real

world is by examining the radial distribution function, and

I need to say a few words about this. Suppose you are

riding on a Si atom as the simulation evolves. You observe

the average density of atoms at some distance r away from

you. If the atoms were arranged completely randomly (as in
a perfect gas, for example), this density would just be the
bulk number density ρ. But the interactions between atoms
make the arrangement far from random. To describe this,
we say that the average density at a distance r away from
your atom, instead of being ρ, is ρg(r), where g(r) is the
radial distribution function. At distances r where g(r) is
greater than unity, there is more than the usual probability
of finding atoms, and where it is less than unity the
probability is less.

The radial distribution function is easy to monitor in the
simulation, basically by constructing a histogram of the
interparticle distances. It is also directly measurable in
either X-ray or neutron diffraction experiments. The
comparison of the radial distribution function of simulated
and real l-Si reported by Štich et al. just above the melting
point is reproduced in figure 4. Notice how, in both the
simulated and the real systems, g(r) is zero for short
distances: there is no probability of finding atoms less
than a certain distance apart, because of the strong
repulsion between them. There is a pronounced peak at a
distance of 2.46 Å, representing the first shell of neighbours
surrounding any given atom. In the crystal, this first peak
would contain four neighbours, but in the liquid the area
under the peak corresponds to 6.5 neighbours, according to
both simulation and experiment. This is the change of
structure that I mentioned before, which is spontaneously
reproduced by the simulated system. At larger distances,
there are weaker peaks, indicating the presence of rather
ill-defined further shells of neighbours.

Figure 3. The diamond crystal structure.

The good agreement
between the radial distribution functions of the simulated
and real systems is even more remarkable when one recalls


that there are no adjustable parameters whatever involved:
the only experimental input to the simulations is the values
of the fundamental constants e, m and h. The combination
of Schrödinger's equation for the electrons and Newton's
equation of motion for the ions provides an almost perfect
working model of the real liquid. (It is only 'almost perfect'
because there is an approximation: the local density
approximation mentioned in section 2.4.)
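The histogram procedure mentioned above is simple to write down. The sketch below (my own, in Python) computes g(r) for atoms in a periodic cubic box, using the minimum-image convention for distances and normalizing each distance bin by the count an ideal gas at the same density would give, so that g(r) fluctuates around unity for a random configuration.

```python
import numpy as np

def radial_distribution(positions, box, r_max, n_bins):
    """g(r) from a histogram of interparticle distances in a cubic
    periodic box, normalized by the ideal-gas expectation so that
    g(r) -> 1 for uncorrelated atoms."""
    n = len(positions)
    rho = n / box**3
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n):
        d = positions[i] - positions[i + 1:]
        d -= box * np.round(d / box)          # minimum-image convention
        dist = np.sqrt((d**2).sum(axis=1))
        hist += np.histogram(dist, bins=edges)[0]
    shell = (4.0 / 3.0) * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = hist / (0.5 * n * rho * shell)        # each pair counted once
    return 0.5 * (edges[1:] + edges[:-1]), g

# For a random (ideal-gas-like) configuration, g(r) stays close to 1.
rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 10.0, size=(200, 3))
r, g = radial_distribution(positions, box=10.0, r_max=4.0, n_bins=8)
```

Integrating ρg(r)·4πr² over the first peak gives the coordination number, which is how figures like the 6.5 neighbours quoted above are extracted.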

3.2. More complicated liquids

The success of these simulations on l-Si stimulated
investigations of many other liquids (see e.g. Zhang et al.
1990, Gong et al. 1993). As an illustration of current work,
I will describe some results from the work of my own
group, on liquid alloys of silver and selenium (Kirchhoff et
al. 1996). Silver is a typical metal and in the solid state its
atoms pack together like billiard balls in the face-centred
cubic structure. Selenium is completely different, because it
is a covalent material, with its atoms held together by
strong directional chemical bonds. Since it has a valency of
two, it likes to form chains, and the normal form of the Se
crystal consists of spiral chains stacked parallel to each
other (figure 5). Liquid alloys of very different elements,
like Ag and Se, are fascinating because the type of bonding
in the liquid varies continuously as the composition
changes. Many experiments have been done on such liquid
alloys, to find out how their structure changes with
composition and how this affects their electrical properties.
(For a review, see Enderby and Barnes (1990).)

Our first-principles molecular dynamics simulations were
done in a repeating cell containing 69 atoms and they lasted
for about 3 ps. Hence they are rather similar to the l-Si
simulations of Štich et al. The difference is that we now
have two kinds of atoms, and we made simulations at three
different compositions Ag1-xSex for which x = 0.33, 0.42
and 0.65. As with l-Si, a direct check against the real world
can be made through the radial distribution functions.

Since there are two kinds of atoms, there are now three
different radial distribution functions. There is gAgAg(r),
describing the distribution of Ag atoms around an Ag
atom; gSeSe(r), describing the distribution of Se around Se;
and gAgSe(r) for Se around Ag (or Ag around Se, which is
the same thing). Recently, all three radial distribution
functions have been measured for real liquid Ag2Se by
neutron diffraction (Lague et al. 1996), so we can verify
completely that the simulation is mimicking reality. The
comparison shown in figure 6 leaves no doubt that this is
being achieved, and I stress again that the values of e, m and
h are the only data entering the simulation.

Now let us find out what happens when we change the
composition. Experimentally, the structure has been
measured only for the composition Ag2Se. But if the
simulations mimic reality, we can confidently use them to
find out about the structure at different compositions.

Figure 5. The crystal structure of Se, with bonds drawn

between atoms in spiral chains. The outline indicates one unit cell

of the crystal.

Figure 4. The radial distribution function g(r) of simulated
liquid Si (solid lines) compared with the experimental results
obtained by neutron diffraction (dotted line) and X-ray
diffraction (dash-dotted line). Simulation was performed by Štich et al.
(1989), neutron diffraction by Gabathuler and Steeb (1979) and
X-ray diffraction by Waseda and Suzuki (1975). Reproduced
from Štich et al. (1989), with permission.


Figure 7 shows what our simulations predict for the radial
distribution functions at the three compositions we have
studied. Nothing much changes in gAgAg(r) and gAgSe(r),
but there is clearly something rather dramatic happening in
gSeSe(r): as the amount of Se increases, a completely new
peak grows up at the distance 2.35 Å. On going more
deeply into the simulations, we found that this new peak is
caused by Se atoms linking together by covalent bonds.
This process became completely clear when we took
snapshots of the atom arrangements, like the ones shown
in figure 8. The fascinating thing here is that the formation
of chemical bonds between Se atoms happens completely
spontaneously in the simulations. The laws of quantum
mechanics for the electrons and Newton's law for the nuclei
make it happen. Although this bonding between Se atoms
has not yet been observed experimentally in liquid Ag-Se
alloys, diffraction measurements by Barnes and Enderby
(1988) on liquid CuSe display the same short-distance peak
in the radial distribution function that we see in our
simulations. This leaves little doubt that the Se bonding
effects shown in figure 8 occur in the real world.
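The way such bonds are identified is worth spelling out: as in the caption of figure 8, two Se atoms are counted as bonded when their minimum-image separation falls below a cutoff of 2.9 Å. A small sketch of that criterion (my own code, with made-up coordinates):

```python
import numpy as np

def bonded_se_pairs(positions, species, box, cutoff=2.9):
    """Pairs of Se atoms closer than `cutoff` (angstroms) under the
    minimum-image convention -- the criterion used to draw Se-Se bonds
    in the liquid-alloy snapshots.  Other species are ignored."""
    se = [i for i, s in enumerate(species) if s == "Se"]
    pairs = []
    for a, i in enumerate(se):
        for j in se[a + 1:]:
            d = positions[i] - positions[j]
            d -= box * np.round(d / box)   # minimum-image convention
            if np.sqrt((d**2).sum()) < cutoff:
                pairs.append((i, j))
    return pairs

# Made-up configuration: two Se atoms 2.0 A apart are bonded, a third
# Se atom is too far from both, and the Ag atom is ignored.
positions = np.array([[0.0, 0.0, 0.0],
                      [2.0, 0.0, 0.0],
                      [15.0, 0.0, 0.0],
                      [5.0, 5.0, 5.0]])
species = ["Se", "Se", "Se", "Ag"]
pairs = bonded_se_pairs(positions, species, box=20.0)
```

Counting such pairs as a function of composition is how the growth of Se chains in figure 8 is quantified.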

3.3. Water

If you were asked to name the most important liquid, you

would probably say water. The medium of all biology, a

major force in geology, a key agent in the physics and

chemistry of the atmosphere, a universal solvent: it is

important for many reasons. But it has proved one of the
most difficult liquids to understand on the atomic scale.
The work of the virtual matter laboratory is starting to give
new insights.

Figure 6. The three radial distribution functions gαβ(r) of liquid
Ag2Se obtained from ab initio simulations of Kirchhoff et al.
(1996) (solid lines), compared with the neutron diffraction results
of Lague et al. (1996).

Figure 7. The radial distribution functions gαβ(r) of liquid
Ag1-xSex at three values of the Se fraction x obtained from
ab initio simulations by Kirchhoff et al. (1996).

Figure 8. Snapshots of typical atomic arrangements observed in ab initio simulations of liquid Ag1-xSex by Kirchhoff et al. (1996).
Dark and light spheres represent Ag and Se atoms, respectively, and panels (a), (b) and (c) show snapshots for Se fractions of x = 0.33,
0.42 and 0.65 respectively. Bonds are drawn between pairs of Se atoms separated by less than 2.9 Å.


The water molecule, shown in figure 9, has a bent form,
with an angle of 105° between the O-H bonds. The oxygen

atom attracts electrons to itself (it is electronegative) and

this means that it acquires a negative charge, leaving the H

atoms positively charged. The molecule is thus an electric

dipole and interacts electrostatically with other water

molecules and with dissolved ions. It is this electrostatic

property of the H2O molecule that makes water such a

good solvent and a lot of its action in biology also depends

on this.

Because of their shape and their dipole moment, H2O
molecules like to arrange themselves as shown in figure 10,
with the positive H atoms pointing towards the negative O
atoms. But there is more to it than just electrostatics. The
positive charge on a H atom attracts electrons on
surrounding molecules towards itself. The electron clouds
on the surrounding molecules get pulled out of shape as the
H atom tries to form a weak chemical bond with an O atom
on a neighbouring molecule. Chemists refer to this as a
hydrogen bond. There have been many models of water
and many attempts to make simulations based on empirical
interactions, none of them fully satisfactory. The new
simulation methods promise to change this situation. The
electrons are explicitly included in the simulation, and the
distortion of the electron clouds as the molecules move is all
represented in the first-principles calculations.

But does it work in practice? Recent simulations by
Parrinello, Car and their co-workers are encouraging
(Laasonen et al. 1993, Fois et al. 1994). They reported
simulations on a collection of 32 water molecules in a
periodically repeated box at the experimental density and
room temperature (300 K). The simulations ran for 1.5 ps,
which is just long enough to represent thermal equilibrium.
As usual, the crucial test of realism is the radial distribution
functions, and the comparison with experiment is shown in
figure 11. The agreement is impressive, given that no
adjustable parameters are involved. Under the action of
first-principles quantum mechanics, the hydrogen and
oxygen atoms spontaneously arrange themselves almost
exactly as in real water.

Figure 9. The geometry of the H2O molecule.

Figure 10. Illustration of the general way in which H2O
molecules arrange themselves in liquid water so as to lower their
energy. Dashed lines indicate hydrogen bonds between H and O
atoms.

Figure 11. The three radial distribution functions gαβ(r)
between H and O nuclei in liquid water from ab initio simulation
of Laasonen et al. (1993) (solid lines), compared with
neutron-diffraction results of Soper and Phillips (1986) (dotted lines).
Reproduced from Laasonen et al. (1993), with permission.

Building on this success, Parrinello's group have now
moved on to the next challenge: the structure of hydrogen


and hydroxyl ions in water (Tuckerman et al. 1995). Even
in pure water, the molecules have a very slight tendency to
dissociate into hydrogen and hydroxyl ions:

H2O ⇌ H+ + OH−.   (11)

In thermal equilibrium, there is a certain concentration of
these ions, which is affected by whatever is dissolved in the
water. (Chemists describe the concentration of H+ ions by
the pH of the solution.) However, life is not so simple. No
one seriously believes that H+ could be present in the form
of isolated protons, because electrostatic attraction will
surely make the protons attach themselves to surrounding
water molecules. Sometimes, people talk of hydronium ions
(H3O+). But are these stable objects? Experiments show
that hydrogen ions diffuse very rapidly, and this suggests
that H3O+ ions (if they exist) must be continually breaking
apart and reforming. The true nature of the OH− ions has
been equally obscure.

The first glimpses of the true situation are now emerging
from the virtual matter laboratory. By simulating the
hydrogen ion in water, Parrinello's group have indeed
observed the H3O+ ion, linked by hydrogen bonds to
surrounding water molecules. But it turns out to be a
transient and ever-changing object. It is easy for one of the
protons in the H3O+ to move to another water molecule.
The simulations show that for about 40% of the time the
extra proton is attached more or less equally to two water
molecules. This picture of hydrogen diffusion occurring by
the continual swapping of partners has long been the
favoured model among physical chemists, where it is called
the Grotthuss mechanism (Atkins 1994), but this is the first
time it has been revealed by simulation. The simulations are
also giving insights into the structure and dynamics of the
OH− ion.

Clearly, this story has a long way to go. Bigger systems

and longer simulations are needed. An interesting challenge

for the future will be to study whether the simulations
reproduce the celebrated density maximum of water at 4 °C.

But the long-term impact on our understanding of water,

and of all kinds of aqueous solutions, will certainly be

immense.

4. Surfaces

Surfaces make the world what it is. When we observe

physical objects, it is usually their surfaces that we are

seeing. Materials placed in contact deform each other or

stick together at their surfaces. They rub on each other and

are worn away, or are corroded by their environment

through surface processes. The growth of crystals from

liquids or vapours depends on the dynamics of atoms at

surfaces. Many chemical processes would be virtually

impossible if they were not catalysed by surfaces. The

desire to understand adhesion, wear, corrosion, growth,
catalysis and many other surface phenomena has
stimulated an enormous experimental effort over the past thirty
years, and a vast panoply of techniques has been deployed
to probe surfaces on the atomic scale.

In spite of all this effort, experiments are still unable to
answer many basic questions, and the virtual matter

laboratory is playing an increasingly important role. To

give a glimpse of what is happening, I will present some

examples of recent work on the interaction of molecules

with surfaces. The break-up of molecules when they land

on a surface, known as dissociative chemisorption, is

important for many reasons. For example, it is involved in

corrosion and it underlies the operation of many important

catalysts, such as the catalytic converter used in cars.

However, these cases are too complex to describe here and I

will use simpler examples to illustrate the ideas, starting

with the H2 molecule on metal surfaces.

4.1. Hydrogen on metal surfaces

The case of H2 is a good place to start, because it is the

simplest molecule. However, the small mass of the H atom

means that quantum effects are important for the motion of
the nuclei and the use of Newton's laws of motion would be

questionable. Instead, the quantum methods described in

section 2 are being used to map out how the energy varies

as the molecule goes from place to place. This energy map

can then be used to work out what happens when the

molecule hits the surface.

One of the first cases studied was copper surfaces (White et
al. 1994, Hammer et al. 1994). I will talk about the work of
White et al., which was concerned with the (100) surface.
(The notation (100) just means that the surface is made by
cutting the crystal along a plane perpendicular to one of the
cubic axes of the face-centred cubic Cu lattice.) As I
explained in section 2.6, periodic boundary conditions are
used, and to make this possible, the metal is represented as a
slab containing a certain number of layers of atoms. Since the
H atom is so light, it moves very rapidly and it can then be
assumed that the Cu atoms do not move much when the
molecule hits the surface, so the calculations are done with
the Cu atoms in the slab frozen in their positions. The
supercell approach means that the calculations are done on a
periodic array of H2 molecules on the surface. How does one
ensure that the artificial periodicity does not affect the
results? The system one really wants to study is a large piece
of metal with a single molecule on the surface, but the
supercell method does not allow this. However, it does allow
a good approximation to it, provided the slab is thick
enough, the vacuum between neighbouring slabs is wide
enough and the H2 molecules are far enough apart. It turns
out that this is quite easy to achieve. The calculations of
White et al. were done with five layers in the slab, a vacuum
width of 7 Å and a distance of 3.6 Å between H2 molecules; a


plan view of their periodically repeated system, looking down
onto the surface, is shown in figure 12. Their tests showed
that increasing the distance between H2 molecules and
changing the number of layers made almost no difference to
the energetics. For this size of system, a single calculation is
very quick, and speed is of the essence, because the
calculations have to be repeated hundreds of times.

The reason so many calculations are needed is that the H
atoms can be in many different positions. This is the energy
map that I just mentioned. In making a map of a country,
the height of the land is systematically measured at a large
number of places. This large set of data can then be used to
construct a contour map, allowing us to understand the
shape of the mountains and valleys. In the same way, for a
molecule on a surface, one needs a map of the total energy
as a function of the atomic positions. There is a big
difference though. In the map of a country, the height
depends on a two-dimensional position. For a molecule on
a surface there are far more variables. Even if the atoms in
the surface are frozen, each atom in a molecule has a
three-dimensional position, so that six variables are needed.
What is needed is a map of the energy as a function of
these six variables. That is why many calculations are
needed.
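In outline, building the map is just a loop over grid points, with one full total-energy calculation per point. In the sketch below (mine), a cheap stand-in function plays the role of the first-principles calculation at each fixed bond length d and height z; everything about its form is invented for illustration.

```python
import numpy as np

def total_energy(d, z):
    """Stand-in for one first-principles calculation at bond length d
    and height z (angstroms).  In the real study every call would be a
    full ground-state calculation for the frozen slab; this toy form
    just has a molecular valley near d = 0.74 A and repulsion close to
    the surface."""
    return (d - 0.74) ** 2 + np.exp(-2.0 * z)

# One calculation per grid point, like spot heights for a contour map.
d_vals = np.linspace(0.5, 2.5, 21)
z_vals = np.linspace(0.5, 3.0, 26)
energy = np.array([[total_energy(d, z) for d in d_vals] for z in z_vals])
```

The `energy` array is the kind of data a contour plot such as figure 13 is drawn from; the expensive part in practice is that each entry is a separate quantum-mechanical calculation.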

A six-dimensional map is rather hard to display, and it is
easier to look at chosen two-dimensional sections. A
convenient choice is to fix the orientation of the molecule
and the surface site directly below the molecule, and to plot
the energy as a function of the distance d between the atoms
in the molecule (the molecular bond length) and the height
z of the molecule above the surface. An example of this
contour plot is shown in figure 13, which is taken from the
work of White et al. In looking at this plot, notice that in
the top left-hand corner the H atoms are far from the
surface and we are in a deep valley in which the minimum
energy corresponds to the bond length of the free molecule
(0.74 Å). In the bottom right-hand corner, we are again in
a valley, but now with the H atoms bound to the surface
and far apart, so that the molecule has dissociated. The
contours make it clear that in order for the molecule to
approach the surface from the gas phase and then
dissociate and stick to the surface, an energy barrier has
to be overcome. The path marked by dots in the figure
shows a favourable way for this to happen. The search for
the most favourable path is rather similar to crossing a
range of mountains by the lowest pass.

What does this energy barrier mean in terms of real
experiments? Well, clearly it means that the molecule
cannot stick to the surface and dissociate unless it arrives
with at least enough energy to overcome the barrier. The
sticking of molecules to surfaces has been intensively
studied in experiments and is characterized by the 'sticking
coefficient' S. This gives the fraction of incoming molecules
that succeed in sticking; the others bounce back into the
vacuum. If there is an energy barrier, then S will generally
increase as the energy of the incoming molecule increases.
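A toy model (entirely mine, with invented numbers) shows the qualitative point: if the barrier height varies with impact site and orientation, then the fraction of sampled approaches whose barrier lies below the incident energy E gives a sticking coefficient that rises with E.

```python
import numpy as np

def sticking_coefficient(E, barriers):
    """Toy estimate of S(E): the fraction of sampled approaches
    (sites and orientations) whose energy barrier lies below the
    incident energy E.  The barrier values are invented."""
    return float(np.mean(np.asarray(barriers) <= E))

barriers_eV = [0.5, 0.6, 0.7, 0.9, 1.2]          # hypothetical barriers
S_low = sticking_coefficient(0.4, barriers_eV)   # below every barrier
S_mid = sticking_coefficient(0.65, barriers_eV)  # clears two of five
S_high = sticking_coefficient(1.3, barriers_eV)  # above every barrier
```

This crude classical picture only captures the barrier case; the steering and quantum effects discussed later change the behaviour qualitatively at low energy.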

Figure 12. The periodically repeated system used in the ab

initio calculations of White et al. (1994) on the dissociative

adsorption of H2 on the Cu (100) surface. Large and small

spheres represent Cu and H atoms.

Figure 13. Contour plot of the total energy of the H2 molecule
on the Cu (100) surface as a function of the height z of the centre
of the molecule above the surface and the separation d between
the H nuclei. In the case shown, the H-H bond is parallel to the
surface, and the mid-point of the bond is sited as in figure 12.
Contour spacing is 0.05 eV, with dashed contours at 0.5 and
1.0 eV above the energy of the free molecule. Dotted line shows a
favourable path for dissociative adsorption of the molecule.
Reproduced from White et al. (1994) with permission.


Once the energy surface has been mapped out, it is possible
to calculate S as a function of the incident energy, and this
has been done with some success for H2 on one of the
copper surfaces by Gross et al. (1994).

Real laboratory experiments show that the sticking
coefficient for H2 on metals does not always increase with
incident energy. On many transition metals, S increases
when the incident energy decreases to low values. This has
given rise to much controversy, and suggestions have been
made that perhaps the molecule becomes trapped in a
so-called precursor state on the surface. One well-known case
is the (100) surface of tungsten, and this has very recently
been investigated by first-principles simulation (White et al.
1996). It turns out that there is no energy barrier at all in
this case: there are paths that take the molecule from the
gas phase to the dissociated adsorbed state, which go
downhill in energy all the way. The detailed calculations
have given a very clear explanation for the energy
dependence of S. In a recent paper, Kay et al. (1995) have
taken the energy surface calculated for H2 on W (100) and
have used it to predict the dynamics of the H2 molecule
when it hits the surface, allowing for the quantum
behaviour of the H nuclei. The results agree with
experiment in showing a minimum sticking coefficient at
a certain energy, and the calculations show that the effect
arises from 'steering'. Even though the most favourable
dissociation path does not involve a barrier, it can require
the system to follow a rather tortuous path. Just as a
motorist may fail to negotiate a sharp corner if he arrives
too fast, so a molecule with too much energy may strike a
hill in the energy plot and be reflected back into the gas
phase. A slow molecule will have time to adjust its path to
the twists and turns and will succeed in following the valley.
The increase of sticking probability at very high energies is
simply because the molecule is then able to ride over
obstacles in the contour plot.

At the same time as the work of White et al. (1996) and
Kay et al. (1995), similar calculations for H2 dissociation on
the Pd (100) surface were reported by Gross et al. (1995).
Here again, S shows a minimum at a certain energy, as shown
in figure 14, and the calculations give strong evidence that
steering is the reason. The distinctive feature of these
calculations for H2 on Pd (100) is that the quantum dynamics
of the H nuclei were included in a very complete way.

Taken together, all these ab initio studies on H2
dissociation represent a really decisive step forward. The
reader who wants to find out more about this area is
strongly recommended to look at the very recent review by
Darling and Holloway (1995).

4.2. The dynamics of molecular break-up

Hydrogen adsorption on metals is a very special case in one respect: it is one of the few cases where the response of the surface can be ignored. For many problems, the surface itself may be strongly distorted by its interaction with the molecule. This means that the energy will depend on the positions of many atoms, and any method based on the study of energy surfaces seems doomed to failure. But there is one saving feature: in many cases, quantum effects in the dynamics of the atoms will be small, so that dynamical ab initio simulations based on Newton's equation of motion can be used, as discussed in section 2.7.
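As a reminder of what such a dynamical simulation involves, here is a minimal sketch of velocity Verlet integration of Newton's equation for a small cluster. Everything in it is illustrative: a Lennard-Jones pair potential stands in for the first-principles forces, which in a real ab initio simulation would come from a full electronic-structure calculation at every step.

```python
import numpy as np

# Minimal molecular-dynamics loop.  In a first-principles simulation the
# force routine would be a DFT calculation; here a Lennard-Jones pair
# potential stands in, so that the integration scheme itself can be seen.
def lj_forces(pos, eps=1.0, sigma=1.0):
    n = len(pos)
    f = np.zeros_like(pos)
    pot = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            r2 = d @ d
            sr6 = (sigma**2 / r2) ** 3
            pot += 4.0 * eps * (sr6**2 - sr6)
            fij = 24.0 * eps * (2.0 * sr6**2 - sr6) / r2 * d  # force on atom i from j
            f[i] += fij
            f[j] -= fij
    return f, pot

def verlet(pos, vel, mass, dt, steps, force_fn=lj_forces):
    """Velocity Verlet: kick - drift - kick, recording the total energy."""
    f, pot = force_fn(pos)
    energies = []
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f, pot = force_fn(pos)
        vel += 0.5 * dt * f / mass
        energies.append(pot + 0.5 * mass * np.sum(vel**2))
    return pos, vel, energies

# a three-atom cluster started at rest, close to its equilibrium geometry
pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.55, 1.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel, energies = verlet(pos, vel, mass=1.0, dt=0.002, steps=2000)
print(f"total-energy drift over the run: {energies[-1] - energies[0]:.2e}")
```

The check on the total energy at the end is the standard sanity test for any such integration: a well-behaved Verlet run conserves it to high accuracy.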

The first attempt to use direct dynamical simulation to study molecular adsorption was made by my group at Keele in collaboration with the group of Payne in Cambridge (De Vita et al. 1993). The example chosen was the adsorption of the Cl2 molecule at the (111) surface of Si. The geometry of this surface is very interesting in its own right, because it is very different from what you get if you simply cut the bulk crystal. Instead, the atoms at the surface rearrange themselves, so as to satisfy the `dangling' chemical bonds at the surface and lower the energy. The equilibrium structure of the Si(111) surface was itself greatly clarified by dynamical simulations done some years ago (Ancilotto et al. 1990).

The main aim in our simulations of Cl2 adsorption at the Si(111) surface was to find out whether the molecule would dissociate spontaneously, whether this depended on where the molecule hit the surface, and whether the response of the surface played an important part. To study this question, we performed five simulations, in which the Cl2 molecule was sent from the gas phase towards different sites on the surface and in different orientations. At the rather high incident energies we used, we found that the molecule dissociated on hitting the surface in all cases, and the Cl atoms formed new chemical bonds with the surface atoms. An example of this process is displayed in figure 15, which shows how the electron density evolves as the molecule is adsorbed. The breaking of the Cl-Cl bond and the formation of new Cl-Si bonds are very clear. One clear conclusion from these simulations was that surface distortions are very large for this system, so that no approach based on contour plots would stand much chance of success.

Figure 14. Comparison of calculated and experimental sticking coefficients as a function of incident kinetic energy for H2 on the Pd(100) surface, from Gross et al. (1995). Dashed line: H2 molecules initially in the rotational ground state; solid line: H2 molecules with an initial rotational and energy distribution appropriate for molecular beam experiments; circles: experiment (Rendulic et al. 1989). Reproduced with permission.

Many other simulations of molecular adsorption have been reported in the last few years, including some on oxide surfaces. A good recent example of dynamical simulations used to study molecular break-up was the work of Langel and Parrinello (1994) on the adsorption of water on stepped MgO surfaces.

5. Where are we going?

All I have been able to do here is to point to a few examples of work done recently in the virtual matter laboratory. It is important to understand that these are only a very small part of a large worldwide effort now going into the new simulation methods. Some of the developments of the next few years are easy to predict by simple extrapolation from what is happening now.

For example, the new methods will certainly make a tremendous impact on our understanding of surfaces. Many of the important catalytic processes underlying industrial chemistry are still very poorly understood, and the ability to observe these processes directly in simulation models will bring a completely new level of understanding. It is also highly likely to bring the insights needed to modify and improve these processes in a rational way. The examples of surface simulation that I have given concerned mainly metals and semiconductors, but first-principles simulations will be increasingly used for oxides (Pugh and Gillan 1994, Manassidis et al. 1995, Kantorovich et al. 1995, Goniakowski and Gillan 1996) and for molecular processes in microporous materials such as zeolites (Shah et al. 1996a,b, Nusterer et al. 1996a,b). There are also many other reasons apart from catalysis for wanting to understand surfaces. The growth of crystals by deposition of atoms from the vapour phase is important in applications as different as fabricating computer chips and growing diamond films (Ashfold et al. 1994). Here too, the virtual matter laboratory will be important.

There will also be enormous progress in the understanding of real-world liquids. The work on the liquid Ag-Se mixture that I mentioned shows how one can observe a kind of primitive chemical reaction involving the spontaneous formation of covalently bonded clusters. The general idea of using the quantum simulation methods to observe chemical reactions in liquids is an immensely powerful one. Given more space, I could have talked about recent simulations by Chiarotti et al. (1995), in which they have observed the spontaneous polymerization of acetylene under high pressures. Parrinello's work on water that I described in section 3.3 will surely form the starting point for investigations of all kinds of problems in aqueous solutions. I believe that the examples that we have seen so far are only a taste of the tremendous advances that will be made in the next few years.

Figure 15. Evolution of the electron density during the dissociative chemisorption of a Cl2 molecule on the Si(111) surface, according to the ab initio dynamical simulations of De Vita et al. (1993). From top to bottom, the three panels show the molecule above the surface, its first contact as the bond between the Cl atoms is broken, and the full formation of chemical bonds with surface atoms. Each panel shows an isovalue surface of the electron density.

Before I conclude, I want to point to one area that I have not had space to mention at all: materials under high pressures. This area is extremely important for the understanding of planetary interiors, but also for practical applications such as the behaviour of explosives. There has been a strong interest among earth scientists for many years in the use of simulation to help understand the materials of the earth's core and mantle under the extreme conditions that exist at great depths. The reason for this interest is easy to understand: it is still experimentally difficult to study materials under the conditions of the earth's deep interior. The wonderful thing about simulation is that extreme conditions present no problem; if you want to increase the pressure two-fold (or a thousand-fold), it is only a question of resetting a parameter. The last few years have seen the beginning of an effort to investigate the mantle material magnesium silicate at very high pressures using first-principles simulation (Wentzcovitch et al. 1993, 1995). The same methods will certainly be enlisted in the study of liquid iron, which is believed to make up much of the earth's core.
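The sense in which pressure is just a parameter can be made concrete with the third-order Birch-Murnaghan equation of state, the form routinely fitted to first-principles energy-volume curves in this field. The sketch below uses round, illustrative values of the bulk modulus K0 and its pressure derivative K0' of the right order of magnitude for a mantle silicate; they are not taken from any of the calculations cited here.

```python
# Third-order Birch-Murnaghan equation of state: the pressure at any
# compression follows from a single volume parameter.  K0 and K0p are
# round, illustrative numbers, not fitted to any calculation.
def birch_murnaghan(V, V0=1.0, K0=250.0, K0p=4.0):
    """Pressure in GPa at volume V (same units as V0)."""
    x = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * K0 * (x**7 - x**5) * (1.0 + 0.75 * (K0p - 4.0) * (x**2 - 1.0))

# scanning to extreme compression is literally just changing one number
for v in (1.0, 0.9, 0.7, 0.5):
    print(f"V/V0 = {v:.1f}  gives  P = {birch_murnaghan(v):6.0f} GPa")
```

By construction the pressure vanishes at the equilibrium volume V0 and rises steeply on compression; reaching core-like pressures in this description requires nothing more than choosing a smaller V.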

I want to end with this thought: the first-principles simulation of matter is still young. The techniques are not in their final state. The plane-wave pseudopotential method has been spectacularly successful, but still better methods may yet be found. Ever more complex problems are being tackled. First-principles simulations on systems containing hundreds of atoms are becoming common; the next few years will bring simulations on thousands, or even tens of thousands. The last ten years have been a time of extraordinary progress. The next ten will be equally exciting.

Acknowledgements

I am grateful to Professor A. M. Stoneham FRS for his comments on the manuscript. I also thank Professors D. Bird, M. Parrinello and M. Scheffler for permission to reproduce figures from their publications.

References

Alder, B. J., and Wainwright, T. E., 1957, J. Chem. Phys., 27, 1208.
Alder, B. J., and Wainwright, T. E., 1959, J. Chem. Phys., 31, 459.
Allen, M. P., and Tildesley, D. J., 1987, Computer Simulation of Liquids (Oxford: Clarendon Press).
Ancilotto, F., Andreoni, W., Selloni, A., Car, R., and Parrinello, M., 1990, Phys. Rev. Lett., 65, 3148.
Ashfold, M. N. R., May, P. W., Rego, C. A., and Everitt, N. M., 1994, Chem. Soc. Rev., 23, 21.
Atkins, P. W., 1994, Physical Chemistry, 5th edition (Oxford: OUP), chapter 24.
Bachelet, G. B., Hamann, D. R., and Schlüter, M., 1982, Phys. Rev. B, 26, 4199.
Barnes, A. C., and Enderby, J. E., 1988, Phil. Mag. B, 58, 497.
Becke, A. D., 1988, Phys. Rev. A, 38, 3098.
Car, R., and Parrinello, M., 1985, Phys. Rev. Lett., 55, 2471.
Ceperley, D. M., and Alder, B. J., 1980, Phys. Rev. Lett., 45, 566.
Cheung, P. S. Y., and Powles, J. G., 1975, Mol. Phys., 30, 921.
Chiarotti, G. L., et al., 1995, unpublished.
Cohen, M., and Heine, V., 1970, Solid State Phys., edited by H. Ehrenreich, F. Seitz and D. Turnbull, Vol. 24, p. 38.
Darling, G. R., and Holloway, S., 1995, Rep. Prog. Phys., 58, 1595.
De Vita, A., Štich, I., Gillan, M. J., Payne, M. C., and Clarke, L. J., 1993, Phys. Rev. Lett., 71, 1276.
Enderby, J. E., and Barnes, A. C., 1990, Rep. Prog. Phys., 53, 85.
Fois, E. S., Sprik, M., and Parrinello, M., 1994, Chem. Phys. Lett., 223, 441.
Gabathuler, J. P., and Steeb, S., 1979, Z. Naturf., 34a, 1314.
Galli, G., and Parrinello, M., 1991, Proceedings of NATO ASI `Computer Simulation in Materials Science', edited by M. Meyer and V. Pontikis (Dordrecht: Kluwer), p. 283.
Gibson, J. B., Goland, A. N., Milgram, M., and Vineyard, G. H., 1960, Phys. Rev., 120, 1229.
Gillan, M. J., 1993, New Scientist, 3rd April, p. 34.
Gong, X. G., Chiarotti, G. L., Parrinello, M., and Tosatti, E., 1993, Europhys. Lett., 21, 469.
Goniakowski, J., and Gillan, M. J., 1996, Surf. Sci., 350, 145.
Gross, A., Hammer, B., Scheffler, M., and Brenig, W., 1994, Phys. Rev. Lett., 73, 3121.
Gross, A., Wilke, S., and Scheffler, M., 1995, Phys. Rev. Lett., 75, 2718.
Hammer, B., Scheffler, M., Jacobsen, K. W., and Nørskov, J. K., 1994, Phys. Rev. Lett., 73, 1400.
Heine, V., 1970, Solid State Phys., edited by H. Ehrenreich, F. Seitz and D. Turnbull, Vol. 24, p. 1.
Heine, V., and Weaire, D., 1970, Solid State Phys., edited by H. Ehrenreich, F. Seitz and D. Turnbull, Vol. 24, p. 250.
Hohenberg, P., and Kohn, W., 1964, Phys. Rev., 136, B864.
Jones, R. O., and Gunnarsson, O., 1989, Rev. Mod. Phys., 61, 689.
Kantorovich, L. N., Holender, J. M., and Gillan, M. J., 1995, Surf. Sci., 343, 221.
Kay, M., Darling, G. R., Holloway, S., White, J. A., and Bird, D. M., 1995, Chem. Phys. Lett., 245, 311.
Kirchhoff, F., Holender, J. M., and Gillan, M. J., 1996, Europhys. Lett., 33, 605.
Laasonen, K., Sprik, M., Parrinello, M., and Car, R., 1993, J. Chem. Phys., 99, 9080.
Lague, S. B., Barnes, A. C., and Salmon, P. S., 1996, unpublished.
Langel, W., and Parrinello, M., 1994, Phys. Rev. Lett., 73, 504.
Manassidis, I., Goniakowski, J., Kantorovich, L. N., and Gillan, M. J., 1995, Surf. Sci., 339, 258.
Mandl, F., 1988, Statistical Physics, 2nd edition (New York: John Wiley).
Nusterer, E., Blöchl, P., and Schwarz, K., 1996a, Angew. Chem., 35, 175.
Nusterer, E., Blöchl, P., and Schwarz, K., 1996b, Chem. Phys. Lett., 253, 448.
Payne, M. C., Teter, M. P., Allan, D. C., Arias, T. A., and Joannopoulos, J. D., 1992, Rev. Mod. Phys., 64, 1045.
Perdew, J. P., 1986, Phys. Rev. B, 33, 8822.
Perdew, J., and Zunger, A., 1981, Phys. Rev. B, 23, 5048.
Pugh, S., and Gillan, M. J., 1994, Surf. Sci., 320, 331.
Rahman, A., 1964, Phys. Rev., 136, A405.
Rendulic, K. D., Anger, G., and Winkler, A., 1989, Surf. Sci., 208, 404.
Shah, R., Gale, J. D., Payne, M. C., and Lee, M.-H., 1996a, Science, 271, 1395.
Shah, R., Gale, J. D., and Payne, M. C., 1996b, J. Phys. Chem., 100, 11688.
Štich, I., Car, R., and Parrinello, M., 1989, Phys. Rev. Lett., 63, 2240.
Štich, I., Car, R., and Parrinello, M., 1991, Phys. Rev. B, 44, 4262.
Tuckerman, M., Laasonen, K., Sprik, M., and Parrinello, M., 1995, J. Phys. Chem., 99, 5749.
Verlet, L., 1967, Phys. Rev., 159, 98.
Waseda, Y., and Suzuki, K., 1975, Z. Phys. B, 20, 339.
Wentzcovitch, R. M., Martins, J. L., and Price, G. D., 1993, Phys. Rev. Lett., 70, 3947.
Wentzcovitch, R. M., Ross, N. L., and Price, G. D., 1995, Phys. Earth Planet. Interiors, 90, 101.
White, J. A., Bird, D. M., Payne, M. C., and Štich, I., 1994, Phys. Rev. Lett., 73, 1404.
White, J. A., Bird, D. M., and Payne, M. C., 1996, Phys. Rev. B, 53, 1667.
Woodcock, L. V., and Singer, K., 1971, Trans. Faraday Soc., 67, 12.
Zhang, Q. M., Chiarotti, G., Selloni, A., Car, R., and Parrinello, M., 1990, Phys. Rev. B, 42, 5071.

Michael Gillan obtained a D.Phil. from Oxford University, and worked at the University of Minnesota and at Harwell Laboratory before being appointed Professor of Theoretical Physics at Keele University in 1988. Since moving to Keele, his main research interest has been in the ab initio simulation of solids and liquids, and particularly the use of simulation to study liquid metals and molecular processes at surfaces. He is currently coordinator of the UK Car-Parrinello consortium, a collaboration of 10 research groups which is using the Edinburgh Cray T3D supercomputer to study a range of problems in condensed-matter physics and chemistry.
