
Variational Method Applied to the Harmonic Oscillator
S. Keith Dunn
Department of Chemistry
Centre College
Danville, Kentucky 40422
[email protected]

© Copyright 2002 by the Division of Chemical Education, Inc., American Chemical Society. All rights reserved. For classroom
use by teachers, one copy per student in the class may be made free of charge. Write to JCE Online,
[email protected], for permission to place a document, free of charge, on a class Intranet.

Goal

This exercise provides undergraduate physical chemistry students with an understandable introduction to the
Variational Method. Students use linear combinations of basis functions for finding approximate solutions to the
Schrödinger equation for the quantum harmonic oscillator.

Objectives

Upon completion of the exercise students should be able to:

1. solve an eigenvalue problem


2. simplify computational procedures using the consequences of wavefunction orthogonality
3. explain the importance of kinetic and potential contributions to the total energy of a system
4. adjust a variational parameter to minimize the energy of a system
5. discuss the importance of the additivity of basis functions in achieving the best trial
wavefunction for a given basis set
6. discuss the value and limitations of approximate solutions to the Schrödinger equation.

Prerequisites

Students should have at least two terms of calculus and a physics course, as well as a solid introduction to quantum
mechanics. Students should be familiar with both the quantum particle-in-a-box and the quantum harmonic oscillator
problems. Additionally, they should understand the concepts of orthonormality and linear combinations of
wavefunctions. A rudimentary familiarity with Mathcad is required. A familiarity with the use of matrices and their
applications in quantum mechanics is useful.

Many of the equations presented, particularly in the Introductory section, are merely present for explanatory purposes
and have therefore been disabled. These disabled equations are distinguished from active equations by their yellow
background.

Instructor's notes and commands that the students will enter are distinguished by a blue background. I suggest the
instructor remove all such elements prior to distribution.

Direct instructions for completion of the exercises are distinguished by a pink background so the students can easily
find the instructions while navigating through the document.

References

1. Sims, J. S. and Ewing, G. E., "Experiments in Quantum Chemistry: The Linear Variation
Method," J. Chem. Ed. 56, 546 (1979).

2. Knox, K., "Particle in the One-Dimensional Champagne Bottle," J. Chem. Ed. 57, 626 (1980).

3. Rioux, F., "Linear Variation Method in One Dimension (CS)," J. Chem. Ed. 59, 773 (1982).

4. Knudson, S. K., "Applications of the Variation Method," J. Chem. Ed. 74, 930 (1997).

5. Besalu, E. and Marti, J., "Exploring the Rayleigh-Ritz Variational Principle," J. Chem. Ed. 75, 105 (1998).

6. Casaubon, J. I. and Doggett, G., "Variational Principle for a Particle in a Box," J. Chem. Ed. 77, 1221 (2000).

7. Grubbs, W. T., "Variational Methods Applied to the Particle in the Box," J. Chem. Ed. 78, 1557 (2001).

8. Raff, L. M., Principles of Physical Chemistry, Prentice Hall, Upper Saddle River, NJ, 2001.

9. Atkins, P. W., Physical Chemistry, 6th edition, W. H. Freeman and Company, NY, 1997.

10. Bransden, B. H. and Joachain, C. J., Introduction to Quantum Mechanics, Wiley, NY, 1989.

Introduction and Background


The Variational Method is one of the most important tools in computational quantum chemistry. The method itself is
simple, but the technique is often introduced using examples, such as the construction of molecular orbitals from
linear combinations of hydrogenic atomic orbitals, that obscure the basic principles of the technique with
complicated mathematical details. Several recent papers describe exercises that introduce the Variational Method
using simpler systems [1-7]. Reference [7], which provides a clever exercise in which students find approximate
solutions to the particle-in-a-1d-box problem, is the only one that uses a Mathcad worksheet. Professor Grubbs's
exercise demonstrates the basics of the Variational Method very clearly and efficiently. However, the project
presented here deals in greater depth with the construction of approximate wavefunctions using linear combinations of
basis functions, the backbone of Molecular Orbital (MO) Theory. In short, the students get hands-on experience with
all the important details of a MO calculation but in one dimension, where the wavefunctions are easy to graph and the
integrals are simple, before they are asked to solve more complicated problems using a computational chemistry
package like Spartan or Hyperchem.

In this exercise, students find approximate solutions to the Schrödinger equation for the quantum harmonic oscillator
using particle-in-a-1d box wavefunctions as a basis set. One of the most satisfying aspects of the exercise is that
students can graph the one-dimensional basis functions and understand how these basis functions add together to
produce the trial wavefunctions. Additionally, since the harmonic oscillator wavefunctions are known, the students
can compare their trial wavefunctions and energies with actual ones. By comparison then, students can see the
value and limitations of approximate methods of solving the Schrödinger equation.

The Variation Method
Most undergraduate physical chemistry texts [8,9] provide a reasonable introduction to the Variational Method, so only
one specific example is provided here. A more general treatment of the details is found in references [1,5, 8-10].

The Variation Theorem states that the energy obtained using a trial wavefunction will always be greater than or equal
to the true energy of the system. Therefore, the basic strategy of the Variational Method is to find the trial
wavefunction that results in the lowest possible energy.

Trial wavefunctions often consist of linear combinations of a set of basis functions. Consider a trial wavefunction, φ,
made from a linear combination of two basis functions, f1 and f2, so that

φ := c1 ⋅ f1 + c2 ⋅ f2 (1)

where c1 and c2 are the weighting coefficients.

Since the trial wavefunctions are not necessarily eigenfunctions of the hamiltonian, H, the energy of the system, E,
can be obtained using the expectation value expression


E := ( ∫ φ⋅H⋅φ dx ) / ( ∫ φ⋅φ dx )    (2)    The trial wavefunctions are assumed to be real and one-dimensional.

Substituting the trial wavefunction of equation (1) into equation (2) gives


E := ( ∫ (c1⋅f1 + c2⋅f2)⋅H⋅(c1⋅f1 + c2⋅f2) dx ) / ( ∫ (c1⋅f1 + c2⋅f2)⋅(c1⋅f1 + c2⋅f2) dx )    (3)

or
E := ( c1²⋅H11 + c1⋅c2⋅H12 + c2⋅c1⋅H21 + c2²⋅H22 ) / ( c1²⋅S11 + c1⋅c2⋅S12 + c2⋅c1⋅S21 + c2²⋅S22 )    (4)

where


Hjk := ∫ fj⋅H⋅fk dx    (5)

and


Sjk := ∫ fj⋅fk dx    (6)

Sjk is called the overlap integral of fj and fk. Since H12 = H21 and S12 = S21, equation (4) simplifies to

E := ( c1²⋅H11 + 2⋅c1⋅c2⋅H12 + c2²⋅H22 ) / ( c1²⋅S11 + 2⋅c1⋅c2⋅S12 + c2²⋅S22 )    (7)

Remember that the Variation Theorem states that the energy for any trial wavefunction will always be greater than or
equal to the actual energy. So, to get the best trial wavefunction, we need to find the set of coefficients, c1 and c2,
that minimizes the energy expression in equation (7). To minimize E, we simultaneously set the partial derivatives of
E with respect to c1 and c2 equal to zero. The derivative with respect to c1 is

(d/dc1) E := ( 2⋅c1⋅H11 + 2⋅c2⋅H12 ) / ( c1²⋅S11 + 2⋅c1⋅c2⋅S12 + c2²⋅S22 )
             − ( c1²⋅H11 + 2⋅c1⋅c2⋅H12 + c2²⋅H22 )⋅( 2⋅c1⋅S11 + 2⋅c2⋅S12 ) / ( c1²⋅S11 + 2⋅c1⋅c2⋅S12 + c2²⋅S22 )²    (8)    (c2 held constant)
Substituting E from equation (7) into equation (8) gives

d  2 ⋅ c1⋅ H11 + 2 ⋅ c2⋅ H12 E⋅ ( 2 ⋅ c1⋅ S11 + 2 ⋅ c2⋅ S12)


 E  := − ( 9)
 d c1  c2 2 2 2 2
c1 ⋅ S11 + 2 ⋅ c1⋅ c2⋅ S12 + c2 ⋅ S22 c1 ⋅ S11 + 2 ⋅ c1⋅ c2⋅ S12 + c2 ⋅ S22

Setting equation (9) equal to zero and multiplying both sides by (1/2)*(denominator) gives

( c1⋅ H11 + c2⋅ H12) − E⋅ ( c1⋅ S11 + c2⋅ S12) := 0 ( 10)

Collecting terms gives

c1⋅ ( H11 − E⋅ S11) + c2⋅ ( H12 − E⋅ S12) := 0 ( 11)

Likewise setting the derivative of E with respect to c2 equal to zero results in

c1⋅ ( H21 − E⋅ S21) + c2⋅ ( H22 − E⋅ S22) := 0 ( 12)

The simultaneous equations (11) and (12) can be solved either by setting c1 = c2 = 0, the trivial solution, or by
setting the following determinant equal to zero:

| H11 − E⋅S11    H12 − E⋅S12 |
| H21 − E⋅S21    H22 − E⋅S22 | := 0    (13)

If f1 and f2 are orthonormal (orthogonal and normalized), then

Sjk := 1 if j = k
Sjk := 0 otherwise    (14)

So, equation (13) simplifies to

| H11 − E      H12     |
| H21          H22 − E | := 0    (15)

Equation (15) is said to be the characteristic equation of the matrix

| H11    H12 |
| H21    H22 |

So, if we know the values of H11, H12, H21 and H22, we can solve for the two energy roots of equations (11) and (12),
by solving equation (15). Substituting one of the energy roots back into equations (11) and (12) and using the
additional constraint that

c1² + c2² := 1    (16)

for a normalized trial wavefunction, provides a unique set of c1 and c2 for construction of the trial wavefunction of
equation (1).

The preceding example is called an eigenvalue problem. These types of problems are central to computational
quantum mechanics. The energy roots are termed eigenvalues and the corresponding sets of coefficients are called
eigenvectors.
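If you want to check this kind of eigenvalue problem outside Mathcad, the same calculation is easy to script. The sketch below is a minimal illustration in Python with NumPy (an assumption on my part, not part of the worksheet), for an orthonormal basis so that S is the identity matrix; the H values are those used in Exercise 1a.

import numpy as np

# Hamiltonian matrix in an orthonormal basis (S = identity); values from Exercise 1a.
H = np.array([[5.0, -2.0],
              [-2.0, 2.0]])

# numpy.linalg.eigh returns the eigenvalues in ascending order together with
# normalized eigenvectors (one per column).
energies, coeffs = np.linalg.eigh(H)

for E, c in zip(energies, coeffs.T):
    print("E =", round(E, 3), "   coefficients (c1, c2) =", c)

# Expected: energy roots 1 and 6, with coefficients (0.447, 0.894) and (-0.894, 0.447),
# up to an overall sign, matching the hand solution of Exercise 1a.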

Exercise 1a (This exercise should be turned in at the BEGINNING of the lab period)

Consider the case where f1 and f2 are orthogonal and normalized (form an orthonormal set), H11=5, H12=H21=-2, and
H22=2. Find the energy roots and the trial wavefunctions associated with each root by hand on a separate sheet of
paper. Recall that for a determinant

| a    b |
| c    d | := a⋅d − b⋅c

You also could have Mathcad solve the three simultaneous equations (11), (12), and (16) with a solve block as I've
shown below. However, one of the reasons I started this project was to take the "black box" magic out of the kind of
calculations we do with Spartan or Hyperchem. If you're not as paranoid as I am about the "black box" problem, you
may want to have them use the solve block instead of solving the problem by hand.

H11 := 5 H12 := −2 H21 := H12 H22 := 2 S11 := 1 S12 := 0 S21 := 0 S22 := 1

Given
c1⋅ ( H11 − E⋅ S11) + c2⋅ ( H12 − E⋅ S12) = 0 ( 11)

c1⋅ ( H21 − E⋅ S21) + c2⋅ ( H22 − E⋅ S22) = 0 ( 12)

c1² + c2² = 1    (16)

 1 −1 −2 2 
 ⋅ 5 ⋅ 5 ⋅ 5 ⋅ 5 
0.447 −0.447 −0.894 0.894 
 
5 5 5 5

Find( c1 , c2 , E) →   
2 −2 1 −1  = 0.894 −0.894 0.447 −0.447
⋅ 5 ⋅ 5 ⋅ 5 ⋅ 5  
 5 5 5 5   1 1 6 6 
 
 1 1 6 6 

Exercise 1b (To be completed during the lab period)
The following exercise walks you through using Mathcad to find the eigenvalues and the respective eigenvectors for
the problem you solved by hand in exercise 1a.

Enter the following matrix in the space below

        | H11    H12 |
M :=    | H21    H22 |

by typing M:, choosing Matrix from the Insert button on the tool bar, choosing 2 Rows and 2 Columns, and hitting OK.
Type in the values of H11, H12, H21, and H22 from exercise 1a and press [return].

 5 −2 
M :=  
 −2 2 

Recall that equation (15) is said to be the characteristic equation for the matrix M, so solving equation (15) for E is
the same thing as finding the eigenvalues for M. To find the eigenvalues using Mathcad, use the eigenvals(M)
command.

Set the eigenvalues equal to a vector, W, by typing W:eigenvals(M)[return].

W := eigenvals( M )

To view the eigenvalues, type W=[return].

6
W=  
1

The eigenvalues displayed, 6 and 1, should be the same results you got from finding the energy roots by hand.
Notice that Mathcad does NOT put the eigenvalues in any particular order.

To arrange the eigenvalues in order from smallest to largest, type E:sort(W)[return].

E := sort( W)

To view the sorted eigenvalues, type E=[return].

1
E= 
6

The eigenvalues are now stored in a vector E. To access the individual eigenvalues, you can call up the individual
components of this vector. Mathcad starts ordering vector and matrix components with 0, so the individual
eigenvalues are stored under the variables E0 and E1 .

To view the individual eigenvalues type E[0=[return] and E[1=[return]. Note the [ key is used to define subscripts in
an equation. The subscript tells Mathcad which component of a vector or matrix you're requesting.

E0 = 1

E1 = 6

Now, to find the eigenvector associated with a particular eigenvalue, z, use the eigenvec(M,z) command.

Set the eigenvector for E0 equal to the variable v0, by typing v0:eigenvec(M,E[0). To view the eigenvector, type
v0=[return]. Set v1 equal to the eigenvector for E1 and view v1 as well.

 0.447 
v0 := eigenvec( M , E ) v0 =  
0  0.894 

 −0.894 
v1 := eigenvec( M , E ) v1 =  
1  0.447 

Each eigenvector contains the coefficients, c1 and c2 of equation (1), for the trial wavefunction associated with a
particular eigenvalue. Therefore, the trial wavefunctions, φ1 and φ2, for E0 and E1 , respectively, are as follows:

φ1 := 0.447⋅f1 + 0.894⋅f2

φ2 := −0.894⋅f1 + 0.447⋅f2

Exercise 2
Several constants you will use in the following exercises are defined below. Please
note the units on each constant.

Defining constants:
h := 6.626⋅10⁻³⁴         Planck's constant (J s)

hbar := h/(2⋅π)          Planck's constant divided by (2⋅π)

µ := 1.7⋅10⁻²⁷           mass of hydrogen atom (kg)

κ := 500 force constant (N/m), a typical value for an H-X bond


L := 0.32⋅10⁻¹⁰          The size of the box ranges from −L to +L. L (m) is used as a Variational Parameter to
                         minimize the energy.

I have chosen L as the value achieved by minimizing the variational energy using 4 basis
functions, so that the figures below look like the final result of the student's report. I suggest
changing L to a value of 0.2 angstroms to start.
c := 2.998⋅10¹⁰          Speed of light (cm/s)

ω := (κ/µ)^0.5           The frequency of the harmonic oscillator (s⁻¹)

The Harmonic Oscillator


The harmonic oscillator provides a completely solvable, one-dimensional problem that demonstrates many important
aspects of quantum mechanics, including the usefulness of symmetry. Solutions of the Schrödinger equation for this
system provide a series of wavefunctions that are commonly used to approximate the quantized vibrational energy
levels of molecules. The hamiltonian for this system is

H := −( hbar²/(2⋅µ) )⋅( d²/dx² ) + (1/2)⋅κ⋅x²    (17)

where x is the displacement from the oscillator's equilibrium position. The hamiltonian has contributions from both
kinetic, T, and potential, U, energy operators, as shown below.

T := −( hbar²/(2⋅µ) )⋅( d²/dx² )    (18)
and

U := (1/2)⋅κ⋅x²    (19)

The exact solutions to the Schrödinger equation are known, but not intuitive without experience solving differential
equations. They have the form
ψv(x) := N(v)⋅Yv(x)⋅exp( −(α⋅x)²/2 )        v = 0, 1, 2, ...    (20)

where
α := ( µ⋅κ/hbar² )^(1/4)    (21)    Note: this equation is active.

and the normalization constant, N(v), is given by


N(v) := ( α/( π^0.5 ⋅ 2^v ⋅ v! ) )^0.5    (22)
The first ten Hermite polynomials, Y0(x) through Y9(x), are given below

Y0(x) := 1
Y1(x) := 2⋅(α⋅x)
Y2(x) := 4⋅(α⋅x)² − 2
Y3(x) := 4⋅(α⋅x)⋅( 2⋅(α⋅x)² − 3 )
Y4(x) := 12 − 48⋅(α⋅x)² + 16⋅(α⋅x)⁴
Y5(x) := 8⋅(α⋅x)⋅( 15 − 20⋅(α⋅x)² + 4⋅(α⋅x)⁴ )    (23)
Y6(x) := −120 + 720⋅(α⋅x)² − 480⋅(α⋅x)⁴ + 64⋅(α⋅x)⁶
Y7(x) := 16⋅(α⋅x)⋅( −105 + 210⋅(α⋅x)² − 84⋅(α⋅x)⁴ + 8⋅(α⋅x)⁶ )
Y8(x) := 1680 − 13440⋅(α⋅x)² + 13440⋅(α⋅x)⁴ − 3584⋅(α⋅x)⁶ + 256⋅(α⋅x)⁸
Y9(x) := 32⋅(α⋅x)⋅( 945 − 2520⋅(α⋅x)² + 1512⋅(α⋅x)⁴ − 288⋅(α⋅x)⁶ + 16⋅(α⋅x)⁸ )

The first ten harmonic oscillator wavefunctions, ψv(x), are constructed according to equations (20) through (23) and
are listed below:

ψ0(x) := N(0)⋅Y0(x)⋅exp( −(α⋅x)²/2 )        ψ5(x) := N(5)⋅Y5(x)⋅exp( −(α⋅x)²/2 )
ψ1(x) := N(1)⋅Y1(x)⋅exp( −(α⋅x)²/2 )        ψ6(x) := N(6)⋅Y6(x)⋅exp( −(α⋅x)²/2 )
ψ2(x) := N(2)⋅Y2(x)⋅exp( −(α⋅x)²/2 )        ψ7(x) := N(7)⋅Y7(x)⋅exp( −(α⋅x)²/2 )    (24)
ψ3(x) := N(3)⋅Y3(x)⋅exp( −(α⋅x)²/2 )        ψ8(x) := N(8)⋅Y8(x)⋅exp( −(α⋅x)²/2 )
ψ4(x) := N(4)⋅Y4(x)⋅exp( −(α⋅x)²/2 )        ψ9(x) := N(9)⋅Y9(x)⋅exp( −(α⋅x)²/2 )

The first few wavefunctions are plotted below

[Plot: "Actual Harmonic Oscillator Wavefunctions" — ψ0(x), ψ1(x), and ψ2(x) vs. x (m), from about −4×10⁻¹¹ to 4×10⁻¹¹ m.]

The energy levels are given by

E(v) := ( v + 1/2 )⋅hbar⋅ω    (25)

where v is the vibrational quantum number.

Notice that if you divide the energy by (hbar*ω ), the energy levels take the values of half integers. We will use this
convention throughout the exercise. To display the energy of a particular level, simply type E(v)/(hbar*ω )=[return],
where v is any integer. The first few energy levels are displayed below.

E(0)/(hbar⋅ω) = 0.5        E(1)/(hbar⋅ω) = 1.5        E(2)/(hbar⋅ω) = 2.5        E(3)/(hbar⋅ω) = 3.5

If you don't divide the energies by (hbar⋅ω), they will be displayed in units of Joules. However, the energy in Joules for
a quantized molecular vibration is a very small number. In fact, it's below Mathcad's default zero threshold for
numerical calculations. So, typing E(v)=[return] will result in a zero being displayed. To avoid this annoyance, you
can change the zero tolerance by choosing Number or Result under the Format option on the button bar and
entering 20 in the Zero Tolerance or Zero Threshold box. Use the Mathcad Help Index for details.

E(0) = 2.86 × 10⁻²⁰        (ground-state energy in Joules)

In the exercise that follows you will find approximate wavefunctions and energy levels for the harmonic oscillator using
the variation method and an appropriate set of basis functions. You will then compare your results with the actual
wavefunctions and energies given above.
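If you would like to check equations (20) through (25) outside Mathcad, the short sketch below reproduces the wavefunctions and energies numerically. It assumes Python with NumPy and SciPy (an illustration on my part, not part of the worksheet) and uses scipy.special.eval_hermite for the Hermite polynomials Yv, with the constants defined in Exercise 2.

import numpy as np
from scipy.special import eval_hermite, factorial
from scipy.integrate import quad

# Constants from Exercise 2 (SI units).
h = 6.626e-34                    # Planck's constant (J s)
hbar = h / (2 * np.pi)
mu = 1.7e-27                     # mass of hydrogen atom (kg)
kappa = 500.0                    # force constant (N/m)
omega = np.sqrt(kappa / mu)      # oscillator frequency as defined in Exercise 2
alpha = (mu * kappa / hbar**2) ** 0.25   # equation (21), units of 1/m

def psi(v, x):
    # harmonic oscillator wavefunction, equations (20)-(23)
    N = np.sqrt(alpha / (np.sqrt(np.pi) * 2.0**v * factorial(v)))   # equation (22)
    return N * eval_hermite(v, alpha * x) * np.exp(-(alpha * x)**2 / 2)

def E(v):
    # harmonic oscillator energy, equation (25)
    return (v + 0.5) * hbar * omega

print(E(0), "J,  or", E(0) / (hbar * omega), "in units of hbar*omega")   # about 2.86e-20 J and 0.5

norm, _ = quad(lambda x: psi(0, x)**2, -4e-11, 4e-11)
print(norm)   # close to 1: psi_0 is normalized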

Using n Basis Functions

Most interesting problems require more than two basis functions to obtain reasonable estimates for the
wavefunctions and energy levels of a system. Although the calculations for such problems become more
cumbersome with the addition of more, and often more complicated, basis functions, the basic methods by which
eigenvalue problems are solved are essentially identical to those employed in the problem you just finished.

As you will see through the course of the next exercise, adding additional basis functions often increases the
accuracy of a calculation. However, the computational time required to solve an eigenvalue problem with n basis
functions increases as n², so doubling the number of basis functions results in quadrupling the computational time.
Computational quantum chemists always reach a compromise between the accuracy they desire and the limited
computational time available. As you will see later, including symmetry arguments can significantly reduce the time
needed to solve a problem.

For n basis functions, f1 through fn, a total of n different trial wavefunctions can be expressed as

φi := Σ (j = 1..n) cij⋅fj        i = 1, 2, ..., n    (26)

For the trial functions to be normalized,

Σ (j = 1..n) cij² := 1    (27)

The energy expression is


E := ( Σ (j = 1..n) Σ (k = 1..n) cij⋅cik⋅Hjk ) / ( Σ (j = 1..n) Σ (k = 1..n) cij⋅cik⋅Sjk )    (28)

Setting the derivative of E with respect to each coefficient equal to zero gives the following set of
simultaneous equations.

Σ (j = 1..n) cij⋅( Hkj − Skj⋅E ) := 0        k = 1, 2, ..., n    (29)

In this case the following equation

| H11 − E      H12          ..      H1n      |
| H21          H22 − E      H23     ..       |
| ..           ..           ..      ..       | := 0    (30)
| Hn1          ..           ..      Hnn − E  |

is the characteristic equation of the matrix

        | H11    H12    ..    H1n |
        | H21    H22    ..    ..  |
M :=    | ..     ..     ..    ..  |    (31)
        | Hn1    ..     ..    Hnn |

So, solving equation (30) provides the energy roots. Substituting each of these roots into equations (29) and using
the constraint of equation (27) provides a unique set of coefficients, cij, used to construct the corresponding trial
wavefunction of equation (26). Obviously, problems involving more than a few simple basis functions are too
time-consuming to do by hand. Fortunately, computers are fantastic at doing repetitive calculations. We can find
the energy roots and coefficients for any eigenvalue problem by defining the proper matrix, M, and using the
eigenvals(M) and eigenvec(M,z) commands, respectively as in exercise 1b.

Basis Functions
Often basis functions are chosen because they are eigenfunctions for a simpler yet related potential. An example is
the choice of the orbitals of atomic hydrogen as a basis set for finding approximate wavefunctions for the H2
molecule. In this exercise you will use the solutions for a particle confined to a 1-dimensional box as a basis set for
finding approximate wavefunctions and energies of the harmonic oscillator. The graph below shows the potential
energy curves for both systems.

 1 2
  ⋅ κ⋅ x
 2

x
−L L
Potential Energy functions:
Red Curve: Harmonic Oscillator
Green Curve: Particle in a Box

The box is of length 2L and is centered at x = 0. Both potentials rise to infinity away from the center of the box. However,
the particle-in-a-box potential does so abruptly at x = ±L, while the harmonic-oscillator potential rises as x².

The wavefunctions for a particle confined to a box, as defined in the figure above, are

f(n, x) := (1/L)^0.5 ⋅ sin( n⋅(π/2)⋅(1 + x/L) )    (32)

The difference between these functions and those typically presented in textbooks results from our setting x = 0 as
the centre of the box rather than as the left edge. The first few of these functions are displayed below. Note the
similarities and differences in these basis functions and the actual wavefunctions of the harmonic oscillator pictured
above.

[Plot: basis functions f(1, x), f(2, x), and f(3, x) vs. x, from −L to L.]

Verify that the first few basis functions satisfy the boundary conditions for a box with infinite potential at x = -L,L.

f(1, L) = 2.165 × 10⁻¹¹        f(1, −L) = 0        f(1, 0) = 1.768 × 10⁵

f(2, L) = −4.33 × 10⁻¹¹        f(2, −L) = 0        f(n, L) is effectively zero compared
                                                   to the maximum at f(1, 0)
f(3, L) = 6.494 × 10⁻¹¹        f(3, −L) = 0

Notice that the values of the basis functions appear to be nonzero at x = L. In fact, the values are on the order of
10⁻¹¹. Compared to the maximum values of about 10⁵ (see the graph above), these values are essentially zero.
They are only nonzero because of the way in which Mathcad numerically evaluates the sine function. Often
numerical computational algorithms produce very small nonzero numbers for operations that should return zero
values. You could use the procedure described above to change the zero tolerance, and Mathcad would display
zeros for these results.
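A quick numerical illustration of equation (32) and of the boundary values just discussed — a sketch assuming Python with NumPy (not part of the worksheet), with the box half-width L from Exercise 2:

import numpy as np

L = 0.32e-10   # box half-width (m); the box runs from -L to +L

def f(n, x):
    # particle-in-a-box basis function of equation (32), box centered at x = 0
    return np.sqrt(1.0 / L) * np.sin(n * (np.pi / 2.0) * (1.0 + x / L))

for n in (1, 2, 3):
    # f(n, -L) is exactly zero; f(n, +L) is ~1e-11, round-off in the sine rather than a real
    # violation of the boundary conditions (compare with the maximum, f(1, 0) ~ 1.8e5).
    print(n, f(n, L), f(n, -L), f(n, 0.0))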

Solving the Integrals for the Variation Calculation:


With the basis functions and the hamiltonian defined, you can now solve the integrals needed to construct the matrix
M of equation (31). Since our basis functions are only defined between −L and L, all integrals will be definite integrals
evaluated between these limits.

The Overlap Integrals

The general overlap integral, S(j,k), is defined below, where j and k represent the quantum numbers for the two basis
functions.

S(j, k) := ∫[−L..L] f(j, x)⋅f(k, x) dx    (33)

Show the results for several of these overlap integrals below by typing S(1,1)=[return], etc. Note any trends and
provide an explanation in your report.

S(1, 1) = 1
S(1, 2) = 1.574 × 10⁻¹⁷
S(2, 2) = 1                        The overlap integral evaluated for specific (j, k) values. Note the integral is 1
S(1, 3) = 3.815 × 10⁻¹⁷            if j = k and 0 if not. This simply shows that the basis functions are orthogonal
S(2, 3) = 2.766 × 10⁻¹⁷            and normalized.
S(3, 3) = 1
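The same overlap integrals can be reproduced outside Mathcad; the following sketch assumes Python with SciPy's quad routine (again, an illustration rather than part of the worksheet):

import numpy as np
from scipy.integrate import quad

L = 0.32e-10   # box half-width (m)

def f(n, x):
    # particle-in-a-box basis function, equation (32)
    return np.sqrt(1.0 / L) * np.sin(n * (np.pi / 2.0) * (1.0 + x / L))

def S(j, k):
    # overlap integral, equation (33)
    val, _ = quad(lambda x: f(j, x) * f(k, x), -L, L)
    return val

for j, k in [(1, 1), (1, 2), (2, 2), (1, 3), (2, 3), (3, 3)]:
    print("S(%d,%d) = %.3g" % (j, k, S(j, k)))

# Diagonal elements come out as 1 and off-diagonal elements at round-off level (~1e-17),
# i.e. the basis functions form an orthonormal set.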

The Energy Integrals

As discussed earlier, the Energy integrals, of the general form Hjk, can be broken down into contributions from the
potential and kinetic energy operators, U and T, respectively. The overall energy is the sum of the potential and
kinetic contributions. You can use whatever energy units you choose, as long as you are consistent. If you use the
constants as defined above, your energy units will be Joules. Often vibrational energies are expressed in cm⁻¹.
These units can be obtained by dividing the energy in J by (hc), where c has units of (cm/s). Alternatively, you may
wish to divide all of your energies, in Joules, by (hbar)*ω , so that the actual energy roots for the harmonic oscillator
have the values 1/2, 3/2, 5/2, etc. as discussed previously. I have chosen the last option below.

The kinetic energy integrals are defined as

T(j, k) := ( 1/(hbar⋅ω) ) ⋅ ∫[−L..L] f(j, x) ⋅ [ −( hbar²/(2⋅µ) )⋅( d²/dx² ) ] f(k, x) dx    (34)

You could have the students show the results of the kinetic energy integrals directly from equation (34); however, I
think it is instructive to have them show by hand that (34) simplifies to (35). See explanation below.

Equation (34) simplifies to

T(j, k) := ( hbar²/( 2⋅µ⋅(hbar⋅ω) ) ) ⋅ ( k⋅π/(2⋅L) )² ⋅ S(j, k)    (35)

In your report, prove that equation (34) simplifies to equation (35). In the space below, show the results for several of
the kinetic energy integrals. Note any trends and provide an explanation in your report. What does each of the nonzero
kinetic energy integrals represent?
T(1, 1) = 0.138                T(1, 2) = 8.678 × 10⁻¹⁸        T(1, 3) = 4.731 × 10⁻¹⁷
T(2, 1) = 2.169 × 10⁻¹⁸        T(2, 2) = 0.551                T(2, 3) = 3.43 × 10⁻¹⁷
T(3, 3) = 1.24                 T(3, 4) = 1.322 × 10⁻¹⁶

The kinetic energy integrals are nonzero only if j = k. This is so because each basis function is an eigenfunction of T:
operating on f(k, x) with T returns a constant times f(k, x). The constant comes out of the integral, and what remains is
simply the overlap integral, S(j, k), which is nonzero only if j = k, since the basis functions are orthogonal. For j = k,
T(j, k) gives the energy of the jth level for a particle in a box.
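Because each basis function is an eigenfunction of T, equation (35) gives the kinetic integrals without any numerical differentiation. A sketch of that shortcut, assuming Python with NumPy and the constants from Exercise 2 (an illustration, not part of the worksheet):

import numpy as np

h = 6.626e-34
hbar = h / (2 * np.pi)
mu = 1.7e-27                  # kg
kappa = 500.0                 # N/m
omega = np.sqrt(kappa / mu)
L = 0.32e-10                  # box half-width (m)

def T(j, k):
    # kinetic energy integral of equation (35), in units of hbar*omega
    S_jk = 1.0 if j == k else 0.0            # the basis functions are orthonormal
    return hbar**2 / (2 * mu * hbar * omega) * (k * np.pi / (2 * L))**2 * S_jk

print(T(1, 1), T(2, 2), T(3, 3))   # approximately 0.138, 0.551, 1.24 for L = 0.32 angstrom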

The potential energy integrals are defined as

U(j, k) := ( 1/(hbar⋅ω) ) ⋅ ∫[−L..L] f(j, x) ⋅ (1/2)⋅κ⋅x² ⋅ f(k, x) dx    (36)

Show the results for several of the potential energy integrals below. Note any trends and give an explanation of the
trends in your report. Hint: The trends can be explained in terms of the symmetry of the potential and the trial
wavefunctions. One of the most useful concepts in Group Theory is that in order for an integral to be nonzero, the
integrand must be totally symmetric (with respect to the symmetry of the hamiltonian). As you will learn later in the
course, this concept is the basis for many spectroscopic selection rules. You might be able to explain these trends
more effectively if you look back at the plots of the potential and the basis functions above. See the instructor for
further guidance.

U(1, 1) = 0.585        U(1, 2) = −3.24 × 10⁻¹⁷         U(1, 3) = 0.68         U(1, 4) = −3.683 × 10⁻¹⁷
U(2, 2) = 1.265        U(2, 3) = −1.468 × 10⁻¹⁸        U(2, 4) = 0.806        U(2, 5) = 2.026 × 10⁻¹⁶
U(3, 3) = 1.391        U(3, 4) = −8.715 × 10⁻¹⁷        U(3, 5) = 0.85         U(3, 6) = 5.533 × 10⁻¹⁷

The potential energy integrals are nonzero only if j=k or (j+k) is even. As alluded to in the Hint above, this is a result
of the symmetry of the basis functions and the potential. Since the potential energy operator, U, is symmetric
about the origin, then the product of f(j,x)*f(k,x) must also be symmetric for the integral to be nonzero. For this to
occur, j and k must either both be odd or both be even. If you don't cover Group Theory in your pchem course, you
may wish to exclude the explanation part of the potential energy integrals. I still think it's useful for the students to
identify the trend. This trend is the reason why the odd basis functions contribute to one set of trial wavefunctions,
and the even basis functions contribute to another. Seeing this example of symmetry ramifications will give the
students some reference to help them grasp why, in homonuclear diatomic molecules, the pz atomic orbitals
contribute to σ MO's, while the px and py atomic orbitals contribute only to π MO's. I've found this simple, 1-d
example provides an understandable introduction as to why symmetry arguments are so useful in quantum
mechanics.

H( j , k ) := T( j , k ) + U( j , k ) ( 37)

You could enter the matrix elements by hand, as you did in exercise 1b. But, Mathcad provides a more efficient way
of defining matrices if the elements (j,k) are functions of j and k. Two range variables, p and q are defined below. Let's
start by using 4 basis functions, so p and q will range from 0 to 3. When you redo the exercise using ten basis
functions, you will need to increase the range of both p and q to be 0 to 9.
p := 0 , 1 .. 3
( 38)
q := 0 , 1 .. 3

The elements of the matrix M are defined below by typing M[p,q[right arrow]:H(p+1,q+1) as follows:

M_p,q := H(p + 1, q + 1)    (39)

Note that objects entered after the [ key appear as subscripts. This is how Mathcad refers to specific elements of a
matrix or a vector. Remember that by default Mathcad starts numbering elements of an array with (0,0), while our
basis functions are defined for quantum numbers of 1,2,3,..n. Therefore, the matrix element M0,0=H(1,1), etc. Since
the matrix elements have been defined using range variables, basis functions can be added simply by increasing the
range on p and q. So, when you repeat the exercise with more basis functions, you won't have to enter any matrix
elements by hand.
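For readers who want to see the whole construction in one place, here is a sketch that builds the same matrix of H(j, k) = T(j, k) + U(j, k), in units of hbar⋅ω, and diagonalizes it. It assumes Python with NumPy and SciPy (not part of the worksheet); change nbasis to 10 to repeat the exercise with ten basis functions.

import numpy as np
from scipy.integrate import quad

h = 6.626e-34
hbar = h / (2 * np.pi)
mu = 1.7e-27                  # kg
kappa = 500.0                 # N/m
omega = np.sqrt(kappa / mu)
L = 0.32e-10                  # box half-width (m), the variational parameter
nbasis = 4                    # number of particle-in-a-box basis functions

def f(n, x):
    # particle-in-a-box basis function, equation (32)
    return np.sqrt(1.0 / L) * np.sin(n * (np.pi / 2.0) * (1.0 + x / L))

def T(j, k):
    # kinetic integral, equation (35), with S(j, k) = 1 if j = k and 0 otherwise
    return hbar**2 / (2 * mu * hbar * omega) * (k * np.pi / (2 * L))**2 * (1.0 if j == k else 0.0)

def U(j, k):
    # potential integral, equation (36)
    val, _ = quad(lambda x: f(j, x) * 0.5 * kappa * x**2 * f(k, x), -L, L)
    return val / (hbar * omega)

M = np.array([[T(j, k) + U(j, k) for k in range(1, nbasis + 1)]
              for j in range(1, nbasis + 1)])

energies, vectors = np.linalg.eigh(M)   # eigh sorts the eigenvalues in ascending order
print(np.round(energies, 3))            # roughly [0.505, 1.511, 2.849, 3.946] for L = 0.32 angstrom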

The matrix M is displayed below by typing M=[return].

 −2.372 × 10
− 17
−1.891 × 10
− 16 
 0.723 0.68 
 − 17 − 17 
 −3.859 × 10 1.817 3.283 × 10 0.806 
M= ( 40)
 − 17 − 17 
0.68 1.489 × 10 2.632 4.502 × 10
 
 − 17 − 17 
 −5.195 × 10 0.806 −3.313 × 10 3.64 

Eigenvalues

As in exercise 1b, you can find the eigenvalues using the eigenvals(M) command.

Set the eigenvalues equal to a vector, W, by typing W:eigenvals(M)[return].

W := eigenvals( M )

Recall, however, that the eigenvalues are not in any particular order.

To arrange the eigenvalues in order from smallest to largest, type E:sort(W)[return].

E := sort( W)

To view the sorted eigenvalues, type E=[return].

 0.505 
 
 1.511 
E=
 2.849 
 
 3.946 

At this point in the exercise, we use the Variational Principle again to find the best set of trial wavefunctions. Recall
that the trial wavefunction always produces an energy greater than or equal to the actual energy of the system. Here,
we use L, the size of the box, as a Variational parameter and minimize the energy of the system as a function of L.
We choose the highest energy state as the reference, since this state is the strongest function of L.

You could minimize the ground state energy and still get good results for the ground-state. However, I found that the
energies of the excited states are stronger functions of L. Minimizing the energy of the highest energy level provides
much better trial wavefunctions and energies for the excited states.

Record, in your lab notebook, the value of the largest eigenvalue obtained using the initial value of L. Return to the
section of the worksheet where the constants are defined and change the value of L slightly. Record the new result for
the energy. Continue varying L until you find the minimum value of the largest eigenvalue. This value of L produces
the best set of trial wavefunctions. In your report plot the largest eigenvalue as a function of L from 0.2 to 1.0
angstroms to confirm that you have indeed found the minimum energy. Report a percent error for your ground-state
energy using your best L value. Does the variational method provide an accurate approximation in this instance?
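If you prefer to script the search rather than change L by hand, the sketch below (Python with NumPy/SciPy, 4 basis functions; an illustration of the procedure, not part of the worksheet) scans L and prints the largest eigenvalue at each value:

import numpy as np
from scipy.integrate import quad

h = 6.626e-34
hbar = h / (2 * np.pi)
mu = 1.7e-27
kappa = 500.0
omega = np.sqrt(kappa / mu)
nbasis = 4

def largest_eigenvalue(L):
    # largest variational eigenvalue, in units of hbar*omega, for a box of half-width L (m)
    def f(n, x):
        return np.sqrt(1.0 / L) * np.sin(n * (np.pi / 2.0) * (1.0 + x / L))
    def H(j, k):
        T = hbar**2 / (2 * mu * hbar * omega) * (k * np.pi / (2 * L))**2 * (1.0 if j == k else 0.0)
        U, _ = quad(lambda x: f(j, x) * 0.5 * kappa * x**2 * f(k, x), -L, L)
        return T + U / (hbar * omega)
    M = np.array([[H(j, k) for k in range(1, nbasis + 1)] for j in range(1, nbasis + 1)])
    return np.linalg.eigvalsh(M)[-1]     # eigvalsh returns eigenvalues in ascending order

for L_angstrom in np.arange(0.20, 1.01, 0.05):
    print("L = %.2f angstrom   largest eigenvalue = %.3f"
          % (L_angstrom, largest_eigenvalue(L_angstrom * 1e-10)))

# The L that gives the smallest value of the largest eigenvalue is the best variational parameter.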

Included here is a plot of E vs. L using 10 basis functions. I did the plot in Excel and cut and pasted the graph.

[Plot: "Energy Minimization" — the largest eigenvalue (in units of hbar⋅ω) vs. L (angstroms), from 0 to 1.2 Å, computed with 10 basis functions.]

Eigenvectors

Define v0 as the eigenvector corresponding to the ground-state eigenvalue, by typing v0:eigenvec(M,E[0). Display
the eigenvector, v0, by typing v0=[return]. Define the remaining eigenvectors in the same manner. Display each of
these eigenvectors in the space below. Note any trends in the eigenvectors and give an explanation of the trends
observed in your report. Hint: Again, the trends can be explained in terms of the symmetry of the hamiltonian and
the trial wavefunctions. Since the hamiltonian is symmetric about x=0, all allowable wavefunctions for the system
must be either symmetric or antisymmetric about x=0. You might try plotting a linear combination of i) two odd
basis functions, ii) two even basis functions, and iii) one even and one odd basis function and investigate the
symmetry of each of these combinations. Choose -L and L as the x-axis limits on any graph you plot, since the
basis functions are only defined in the region between these limits. How can the symmetry arguments above be
applied to significantly reduce the computational time needed for this problem?
v0 := eigenvec(M, E0)        v1 := eigenvec(M, E1)        v2 := eigenvec(M, E2)        v3 := eigenvec(M, E3)

       | −0.952          |          | 3.83 × 10⁻¹⁷    |          | 0.305           |          | −5.6 × 10⁻¹⁷   |
v0 =   | −3.318 × 10⁻¹⁷  |   v1 =   | 0.935           |   v2 =   | 3.656 × 10⁻¹⁷   |   v3 =   | 0.354          |
       | 0.305           |          | −2.146 × 10⁻¹⁷  |          | 0.952           |          | 7.059 × 10⁻¹⁸  |
       | −4.029 × 10⁻¹⁸  |          | −0.354          |          | 2.263 × 10⁻¹⁷   |          | 0.935          |

Notice that the odd basis functions contribute to one set of trial wavefunctions, while the even basis functions
contribute to another. I really gave the answer in the Hint above. All the students need to do is to plot the 3 linear
combinations described in the Hint and note that the third one is neither symmetric nor antisymmetric about the
origin. The computational time can be significantly reduced by using the odd basis functions and the even basis
functions as separate basis sets. Solving two (2x2) determinants takes only half as long as solving one (4x4).
Again, if you don't cover Group Theory in your pchem course, you might want to exclude the explanation for this part.
[Plots: (i) f(1, x) + f(3, x), symmetric about x = 0; (ii) f(2, x) + f(4, x), antisymmetric about x = 0; (iii) f(1, x) + f(2, x), neither symmetric nor antisymmetric. Each is plotted from −L to L.]

Constructing the Trial Wavefunctions

For convenience, the basis functions have been put into a vector, vbasis(x), below. When you redo the exercise with
n basis functions, you will need to redefine vbasis(x) by inserting a matrix with n rows and one column in equation
(41).

               | f(1, x) |
               | f(2, x) |
vbasis(x) :=   | f(3, x) |    (41)
               | f(4, x) |

Again, I've run into a problem with Mathcad that I can't solve. It would be much nicer to define the individual elements
of vbasis(x) in terms of the basis functions using one of the range variables defined in equation (38). But, I can't get
Mathcad to let me do that kind of matrix definition with functions. If you can tell me how to do this, please send me
an e-mail.

The trial wavefunctions are then just vbasis(x) multiplied by the appropriate eigenvector. The first four trial
wavefunctions are defined below by typing g0(x):v0*vbasis(x).

When you redo the exercise using n basis functions, you will need to define the remaining trial wavefunctions similarly.

g0(x) := v0 ⋅ vbasis(x)        The ground-state trial function (v = 0)

g1(x) := v1 ⋅ vbasis(x)        first excited state (v = 1)
                                                                       (42)
g2(x) := v2 ⋅ vbasis(x)        second excited state (v = 2)

g3(x) := v3 ⋅ vbasis(x)        third excited state (v = 3)
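The same step in script form: a sketch (Python with NumPy/SciPy; an illustration, not part of the worksheet) that forms each trial wavefunction as the dot product of an eigenvector with the vector of basis functions, as in equations (41) and (42). Note that a numerical eigensolver may return an eigenvector multiplied by −1 relative to the worksheet values, which is physically irrelevant.

import numpy as np
from scipy.integrate import quad, trapezoid

h = 6.626e-34
hbar = h / (2 * np.pi)
mu = 1.7e-27
kappa = 500.0
omega = np.sqrt(kappa / mu)
L = 0.32e-10
nbasis = 4

def f(n, x):
    # particle-in-a-box basis function, equation (32)
    return np.sqrt(1.0 / L) * np.sin(n * (np.pi / 2.0) * (1.0 + x / L))

def H(j, k):
    T = hbar**2 / (2 * mu * hbar * omega) * (k * np.pi / (2 * L))**2 * (1.0 if j == k else 0.0)
    U, _ = quad(lambda x: f(j, x) * 0.5 * kappa * x**2 * f(k, x), -L, L)
    return T + U / (hbar * omega)

M = np.array([[H(j, k) for k in range(1, nbasis + 1)] for j in range(1, nbasis + 1)])
energies, vectors = np.linalg.eigh(M)       # columns of `vectors` are the eigenvectors

def g(i, x):
    # trial wavefunction for state i: eigenvector dotted into the basis functions, equation (42)
    vbasis = np.array([f(n, x) for n in range(1, nbasis + 1)])
    return vectors[:, i] @ vbasis

x = np.linspace(-L, L, 400)
print(trapezoid(g(0, x)**2, x))   # close to 1: the trial wavefunction is normalized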

Plotting the Actual and Trial Wavefunctions


Graph an X-Y plot of the actual, ψ0(x), and trial, g0(x), ground-state wavefunctions vs. x below by choosing Graph
under the Insert option on the button bar. Remember that the sign of the wavefunction has no physical meaning, so
you may need to plot −g0(x) instead of g0(x).
Choose the range of x to be from -L to L, and choose an appropriate range for the y-axis. Graph each of your trial
functions together with the corresponding actual wavefunction. Note the similarities and differences in your report.
You may wish to change the value of L away from the best value and see how the trial wavefunctions are affected.

How does g0(x) differ from pure f(1,x)? How does the addition (or subtraction) of "higher energy" basis functions to
f(1,x) result in a lower energy than pure f(1,x)? Include a complete discussion of these questions in your report.
Consider both kinetic and potential energy contributions in your answer. You may benefit from looking back at the
plot of the potential vs. x. Recall that the probability of the system being at a displacement of x at any given time is
given by the square of the wavefunction, so a plot of g0²(x) vs. x might also be useful.

Reference [5] provides a good discussion of how adding "higher energy" basis functions leads to lower variational
energies. However, the material is probably presented at a level above most undergraduates.

When you have finished exercise 2, repeat it using 10 basis functions instead of 4. Include in your report an
energy-level diagram of the actual energy levels together with those obtained using 4 and 10 basis functions. Also,
include a discussion of how additional basis functions affect the energy levels and trial wavefunctions.

Report
Your report should consist of the following:

- the completed Mathcad worksheet (an electronic copy is fine.)

- an explanation of the trends in the overlap integrals

- proof that equation (34) simplifies to equation (35)

- an explanation of the trends in the kinetic energy integrals

- an explanation of the trends in the potential energy integrals

- a graph of the largest eigenvalue vs. L

- calculations of the percent errors for the ground-state energies

- an explanation of the trends observed in the eigenvectors

- a suggestion as to how the computational time might be decreased significantly by the use of symmetry
arguments

- a discussion of the differences and similarities in the trial and actual wavefunctions

- a discussion of the importance of the additivity of basis functions in achieving the best trial wavefunctions

- an energy level diagram of the actual energy levels together with those obtained using 4 and 10 basis functions

- any graphs that you find useful in making your arguments

As shown below, the agreement between the trial and actual wavefunctions is quite good for the ground and first
excited states. For these two states the square-well potential used in defining the basis functions contains most of
the displacement experienced by the system. However, for higher energy levels, the probability density of the
actual wavefunctions is significant outside the box. Since the basis functions exist only between -L and L, the trial
wavefunctions can not have any amplitude outside the box. Using ten basis functions gives very good fits for the
first six levels, but again, the higher energy levels have significant probability density outside the box. Both basis
sets give good results for the ground state, but the larger basis set gives much better results for the first few excited
levels. As is pointed out in reference [5], using the complete (infinite) set of basis functions would provide the exact
wavefunctions and energy levels.

[Plots: trial vs. actual wavefunctions over −L to L — −g0(x) with ψ0(x); −g1(x) with ψ1(x); g2(x) with ψ2(x).]

[Plot: −g3(x) with ψ3(x) over −L to L.]

Shown below is a plot of g0(x)², f(1,x)², and f(3,x)². Notice that by subtracting a bit of f(3,x) from f(1,x), the
probability density is concentrated more toward the centre of the well and away from the extremes of the allowed
displacement where the harmonic potential rises rapidly. So subtracting a bit of f(3,x) decreases the potential
energy of the system. However, concentrating the probability density toward the centre of the well (further localizing
the motion) actually increases the average curvature of the wavefunction or the kinetic energy. So the total energy
of the system is minimized by decreasing the potential energy, but at the cost of increasing the kinetic energy.

[Plot: g0(x)², f(1, x)², and f(3, x)² vs. x, from −L to L.]

