

Density-functional theory (DFT) [1,2] has made a unique impact [3] on science, well beyond the traditional realms of quantum-mechanical simulation, into disciplines such as microelectronics [4], biochemistry [5] and geology [6]. This broad multidisciplinary appeal has its origin in the ability of DFT to provide a description of electron correlation that is sufficiently accurate for many purposes, at a computational cost which scales favourably (as the cube of the system size N) compared with correlated wave-function methods (which typically exhibit N⁵ to N⁷ scaling).

The origin of the N³ asymptotic scaling of DFT can be understood with reference to Fig. 1, which shows one of the solutions ψn(r) to the single-particle Kohn-Sham equation

    [ −½∇² + Veff[n](r) ] ψn(r) = εn ψn(r) .    (1)

In addition to the potential due to the ion cores, the effective potential Veff[n](r) describes electron-electron interactions in a mean-field manner (via the electron density n(r)). As a result, Eq. 1 must be solved self-consistently along with

    n(r) = 2 Σn fn |ψn(r)|²    (2)

where the occupation numbers {fn} are unity (zero) for states below (above) the chemical potential at absolute zero, and the factor of two accounts for spin degeneracy. For small basis sets, Eq. 1 may be solved by direct matrix diagonalisation, which scales as N³. For large-scale simulations or large basis sets (not least the plane-wave pseudopotential method [7,8], the leading workhorse for DFT calculations), iterative methods [9] are more efficient. In this case, the orthonormality of the Kohn-Sham orbitals {ψn(r)} must be imposed directly:

    ∫ ψm*(r) ψn(r) d³r = δmn .    (3)

Figure 1: A Kohn-Sham orbital for an oligopeptide molecule: in general, each orbital extends over the entire system.

The number of orbital pairs, and thus the number of constraints (3), is proportional to N². As illustrated in Fig. 1, each Kohn-Sham orbital extends over the entire system, so the overlap integral in (3) requires a computational effort which scales linearly with N (in the same manner as the volume of the system). Thus the total cost of enforcing the orthonormality constraints scales as N³.
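The cubic cost of direct diagonalisation, and the connection between the constraints (3) and the overlap matrix of the occupied orbitals, can be illustrated with a small toy calculation. The sketch below is not DFT: a random symmetric matrix stands in for the Kohn-Sham Hamiltonian, and NumPy's dense eigensolver (an O(N³) operation) stands in for a full self-consistent cycle; all names and sizes are illustrative.

```python
import numpy as np

def toy_kohn_sham_step(n_basis, n_occ, rng):
    """One step of a toy Kohn-Sham-like problem: build a random
    symmetric 'Hamiltonian', diagonalise it (an O(N^3) operation),
    and form the overlap matrix appearing in the constraints (3)."""
    # Random symmetric matrix standing in for -1/2 nabla^2 + V_eff
    h = rng.standard_normal((n_basis, n_basis))
    h = 0.5 * (h + h.T)

    # Direct diagonalisation: cost grows as the cube of the basis size
    eigenvalues, orbitals = np.linalg.eigh(h)

    # Occupied orbitals: f_n = 1 for the n_occ lowest states
    occ = orbitals[:, :n_occ]
    # Diagonal of Eq. (2): factor of 2 for spin degeneracy
    density_diag = 2.0 * np.sum(occ**2, axis=1)

    # Overlap matrix: N^2 orbital pairs, each integral costing O(N)
    overlap = occ.T @ occ
    return eigenvalues, density_diag, overlap

rng = np.random.default_rng(0)
evals, density, overlap = toy_kohn_sham_step(n_basis=50, n_occ=10, rng=rng)

# eigh returns orthonormal eigenvectors, so the overlap is the identity
assert np.allclose(overlap, np.eye(10))
# The density integrates to 2 * n_occ electrons
assert np.isclose(density.sum(), 20.0)
```

Doubling `n_basis` roughly octuples the time spent in `eigh`, while the overlap matrix alone already costs N² pairs times O(N) per pair.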

While relatively benign compared with other methods, this N³ scaling still presents a bottleneck which restricts the size of simulations to a few hundred atoms, even on the most powerful supercomputers. In recent years there has therefore been considerable interest in the development of linear-scaling or order-N methods (see the review articles [10,11] and references therein). These offer the potential to revolutionise the scope and scale of first-principles simulations based on DFT, extending them to entire biological molecules and nanostructures containing many thousands of atoms.

While many different computational schemes have been proposed, this article focuses on one recent method, which has been implemented in the ONETEP code [12,13]. The twofold aim of this work has been to develop a general-purpose scheme which exhibits both true linear scaling and controlled accuracy.

In all proposed order-N schemes the computational effort associated with the dominant part of the calculation (e.g. evaluation of the Hamiltonian matrix elements, or application of the Hamiltonian to a trial vector) scales linearly with system size. However, this does not guarantee that the total calculation time will scale so well. For example, while the time for a single self-consistent iteration may scale linearly, various forms of ill-conditioning [14] can cause the number of iterations required to reach convergence to increase with system size. True linear scaling thus requires that the total effort to calculate the desired physical properties of a system scale linearly.
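Whether a given implementation achieves this in practice can be checked empirically: measure the total time to convergence for a series of system sizes and fit the exponent on a log-log scale. A minimal sketch, with made-up timings standing in for real measurements:

```python
import numpy as np

def scaling_exponent(sizes, times):
    """Fit t = c * N**p on a log-log scale and return the exponent p."""
    slope, _intercept = np.polyfit(np.log(sizes), np.log(times), 1)
    return slope

# Illustrative (made-up) timings for a cubic-scaling and a
# linear-scaling method; real data would come from wall-clock runs
sizes = np.array([100, 200, 400, 800])
t_cubic = 1e-6 * sizes**3
t_linear = 1e-3 * sizes

assert abs(scaling_exponent(sizes, t_cubic) - 3.0) < 1e-6
assert abs(scaling_exponent(sizes, t_linear) - 1.0) < 1e-6
```

A fitted exponent near one over the full range of system sizes of interest, for total time rather than time per iteration, is the operational test of true linear scaling.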

In order to obtain any linear-scaling method, it is necessary to make further approximations that exploit the "nearsightedness" [15,16] of quantum many-particle systems, in addition to those common to all DFT methods (exchange-correlation functional, finite basis set, discrete Brillouin zone sampling and choice of pseudopotential if appropriate). Pursuing Kohn's analogy of myopia, this involves deliberately impairing the vision of the simulated quantum system so that the local density is blind to distant perturbations in the potential. As the range of sight is decreased, the calculation will become cheaper, but with a consequent loss of accuracy. Therefore this approximation must be controlled to reduce any error to an acceptable level if the results are to be used with confidence.
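This nearsightedness can be made concrete with a toy model (unrelated to the ONETEP formalism itself): the zero-temperature density matrix of a one-dimensional tight-binding chain decays with the separation between sites (algebraically for this gapless model, exponentially for gapped systems), so truncating it beyond a cutoff radius leaves a banded matrix whose storage grows only linearly with the number of sites. All parameters below are illustrative.

```python
import numpy as np

def density_matrix_chain(n_sites, n_occ):
    """Zero-temperature density matrix of a 1D nearest-neighbour
    tight-binding chain (a stand-in for a real Kohn-Sham problem)."""
    h = np.zeros((n_sites, n_sites))
    idx = np.arange(n_sites - 1)
    h[idx, idx + 1] = h[idx + 1, idx] = -1.0   # hopping terms
    _, orbitals = np.linalg.eigh(h)
    occ = orbitals[:, :n_occ]                  # f_n = 1 for lowest states
    return occ @ occ.T                         # rho_ij = sum_n psi_n(i) psi_n(j)

rho = density_matrix_chain(n_sites=200, n_occ=50)
sites = np.arange(200)
dist = np.abs(np.subtract.outer(sites, sites))

# "Nearsightedness": rho_ij falls off with the distance |i - j|
near = np.abs(rho[dist == 1]).max()
far = np.abs(rho[dist > 40]).max()
assert far < 0.5 * near

# Truncating beyond a cutoff radius leaves at most 2*cutoff + 1
# entries per row, so storage (and matrix-vector products) scale
# linearly with the number of sites
cutoff = 20
rho_trunc = np.where(dist <= cutoff, rho, 0.0)
assert np.count_nonzero(rho_trunc) <= 200 * (2 * cutoff + 1)
```

Shrinking `cutoff` makes the calculation cheaper but discards more of the density matrix, which is exactly the accuracy-cost trade-off that a controlled linear-scaling method must manage.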

The accuracy of a calculation is heavily influenced by the choice of basis set. Linear-scaling methods can be divided into two categories: those which use relatively small basis sets of atomic-like orbitals such as numerical atomic orbitals [17,18,19,20,21], Slater type orbitals [22], localised spherical waves [23], or Gaussian type orbitals [24,25]; and those which use larger sets of simpler functions such as B-splines or blip functions [26], finite elements [27], or methods based on finite-difference [28] and multigrid [29] techniques. To satisfy the requirement for controlled accuracy, it is necessary to choose a basis set which may be improved systematically, ideally via a single parameter, and which spans the range from minimal atomic sets to plane-wave accuracy.

In Sec. 2 an overview of the ONETEP linear-scaling method is given, and in Sec. 3 an assessment is made of representative results for a variety of systems in the light of the above aims.


Page last modified on January 23, 2009, at 02:07 PM