9. Boundary Value and Eigenvalue Problems#

9.1. Introduction#

Many of the important equations of physics can be cast in the form of a linear, second-order differential equation:

\(\frac{ \mathrm{d}^2 y } { \mathrm{d} x^2 } + k^2 (x) y = S(x)\),

where \(S(x)\) is an “inhomogeneous” (or “driving”, or “source”) term and \(k^2(x)\) is a real function.

When \(k^2>0\), the solutions of the homogeneous equation (i.e. with \(S=0\)) are oscillatory, with wavenumber \(k\).

When \(k^2<0\), the solutions grow or decay exponentially at a rate \(\sqrt{-k^2}\).

An example of such a problem is trying to find the electrostatic potential \(\Phi\), generated by a localized charge distribution \(\rho(\mathbf{r})\).

The starting point would be Poisson’s equation: \(\nabla^2 \Phi = - 4 \pi \rho\).

(Note that this is written in Gaussian units, where the permittivity of free space is replaced by \(1/4\pi\): \(\varepsilon_0 \rightarrow 1/4\pi\). See https://en.wikipedia.org/wiki/Gaussian_units for further detail.)

For spherically-symmetric \(\rho\), and hence \(\Phi\), the Poisson equation turns into:

\(\frac{1}{r^2} \frac{ \mathrm{d} }{\mathrm{d}r} \left( r^2 \frac{ \mathrm{d}\Phi } { \mathrm{d} r} \right) = - 4 \pi \rho\).

The standard substitution: \(\Phi(r) = r^{-1} \phi(r)\) then yields:

\(\frac{ \mathrm{d}^2 \phi } { \mathrm{d} r^2 } = -4\pi r\rho\),

which is of the aforementioned form, with \(k^2=0\) and \(S=-4 \pi r \rho\).

Another example is the quantum-mechanical wave function for a particle of mass \(m\) and energy \(E\), moving in a central potential \(V(r)\). This can be written as:

\(\Psi(\mathbf{r}) = r^{-1} R(r) Y_{\ell M} (\theta, \phi)\),

where \(Y_{\ell M}\) are the spherical harmonics, and \(\ell, M\) are quantum numbers relevant for the angular momentum, and the radial wave function satisfies:

\(\frac{ \mathrm{d}^2 R } { \mathrm{d} r^2 } + k^2 (r) R = 0\),

where \(k^2(r) = \frac{2 m}{\hbar^2} \left[ E - \frac{\ell(\ell+1)}{2 mr^2} \hbar^2 - V(r)\right]\).

The above equation is also in the general form stated above, with \(S=0\).

These equations appear unremarkable and readily treated by the methods discussed in Chapter 7, except for two points:

  • Boundary conditions imposed by the physics often appear as constraints on the dependent variable at two separate points of the independent variable. Therefore, a solution of the problem as an initial value problem is not obviously possible.

  • The (time-independent) Schrödinger equation is an eigenvalue problem, in which we must find the energies that lead to physically-acceptable solutions satisfying the appropriate boundary conditions.

We begin by deriving an integration algorithm particularly suited to equations of this form, and then discuss the boundary value and eigenvalue problems in turn.

9.2. The Numerov Algorithm#

There is a particularly simple and efficient method for integrating the second-order differential equations of the desired form (i.e. without a first-order derivative). This is known as the Numerov (or Cowling’s) method.

Consider the following expansions of the function \(y\) about \(x\):

\(y(x+h) = y(x) + h y'(x) + \frac{h^2}{2} y''(x) + \frac{h^3}{6} y'''(x) + \frac{h^4}{4!} y''''(x) + \mathcal{O}(h^5)\),

\(y(x-h) = y(x) - h y'(x) + \frac{h^2}{2} y''(x) - \frac{h^3}{6} y'''(x) + \frac{h^4}{4!} y''''(x) + \mathcal{O}(h^5)\).

Add the two together:

\(y(x+h) + y(x-h) = 2y(x) + h^2 y''(x) + \frac{h^4}{12} y''''(x) + \mathcal{O}(h^6)\),

since the odd-order terms (in particular the \(\mathcal{O}(h^3)\) and \(\mathcal{O}(h^5)\) ones) cancel out due to their opposite signs.

After some minor rearrangements, we get a three-point approximation for the second derivative \(y''(x)\):

\(y''(x) + \frac{h^2}{12} y''''(x) + \mathcal{O}(h^4) = \frac{ y(x+h) + y(x-h) - 2 y(x) }{h^2}\),

where we have kept the error term \(\frac{h^2}{12} y''''(x)\), which we will further manipulate below.

Before we get to that, let’s use the above to write down the standard three-point approximation for the second derivative:

\(y''(x_n) = \frac{ y_{n+1} - 2 y_n + y_{n-1} } { h^2 } + \mathcal{O}(h^2)\),

where we have defined the \(n\)-th iteration of the independent variable as \(x_n = x_0 + nh\).

The differential equation itself tells us that:

\(y'' = -k^2 y + S\).

If we differentiate this twice more and evaluate it at \(x_n\):

\(y''''(x_n) \equiv y_n'''' = \frac{\mathrm{d}^2}{\mathrm{d}x^2} \left.\left(-k^2 y + S\right)\right|_{x=x_n}\).

We can then use the three-point approximation of the second derivative derived above to deduce that: \(y_n'''' = -\frac{ (k^2 y)_{n+1} - 2 (k^2 y)_n + (k^2 y)_{n-1} }{h^2} + \frac{ S_{n+1} - 2 S_n + S_{n-1}}{h^2} + \mathcal{O}(h^2)\).

This can be substituted into the “error” term of the three-point approximation for the second derivative \(y''(x)\) to get:

\(y_n'' + \frac{h^2}{12} \left(-\frac{ (k^2 y)_{n+1} - 2 (k^2 y)_n + (k^2 y)_{n-1} }{h^2} + \frac{ S_{n+1} - 2 S_n + S_{n-1}}{h^2} + \mathcal{O}(h^2)\right) + \mathcal{O}(h^4) = \frac{ y_{n+1} - 2 y_n + y_{n-1} } { h^2 }\).

Multiplying through by \(h^2\) and using the differential equation itself, i.e. \(y_n'' = - (k^2y)_n + S_n\):

\(h^2 \left[ - (k^2y)_n + S_n \right] + \frac{h^2}{12} \left( -(k^2 y)_{n+1} + 2 (k^2 y)_n - (k^2 y)_{n-1} + S_{n+1} - 2 S_n + S_{n-1} \right) + \mathcal{O}(h^6) = y_{n+1} - 2 y_n + y_{n-1}\).

After further rearrangement, we get a recursion relation for \(y_{n+1}\) in terms of \(y_n\) and \(y_{n-1}\), the previous two steps:

\(y_{n+1} = \frac{ 2(1- \frac{5h^2}{12} k_n^2) y_n - (1+\frac{h^2}{12} k^2_{n-1}) y_{n-1} + \frac{h^2}{12}(S_{n+1} + 10 S_n + S_{n-1}) + \mathcal{O}(h^6) }{ 1 + \frac{h^2}{12} k_{n+1}^2 }\).

Note that one can also solve for \(y_{n-1}\) in terms of \(y_n\) and \(y_{n+1}\) to integrate backward in \(x\).

We emphasize here that the Numerov method is one more order accurate than the fourth-order Runge-Kutta method, which might be used to integrate the problem as two coupled first-order equations. The Numerov method is also more efficient, as each step requires computation of \(k^2\) and \(S\) at only the “lattice” points.

9.2.1. Example 9.1: The Numerov Algorithm for Simple Harmonic Oscillation#

Apply the Numerov algorithm to the Simple Harmonic Oscillator problem:

\(\frac{ \mathrm{d}^2 y } {\mathrm{d}x^2} = - 4 \pi^2 y\), with \(y(0)=1\), \(y'(0)=0\).

Integrate from \(x=0\) to \(x=4\). Note that you will have to use some special procedure to generate the value of \(y_1 \equiv y(h)\), needed to start the three-term recursion relation.

Plot the resulting function \(y(x)\).
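A minimal Python sketch of one way to carry this out (the step size \(h\) and the use of a Taylor expansion to generate the start-up value \(y_1\) are our own choices, not the only possibilities):

```python
import numpy as np
import matplotlib.pyplot as plt

# Simple harmonic oscillator: y'' = -4 pi^2 y, i.e. k^2 = 4 pi^2 (constant), S = 0
ksq = 4.0 * np.pi**2

N = 400                                    # number of steps (an assumed choice)
x, h = np.linspace(0.0, 4.0, N + 1, retstep=True)
y = np.zeros_like(x)

# Start-up values: y(0) = 1, and y(h) from the Taylor expansion
# y(h) = y(0) + h y'(0) + (h^2/2) y''(0) + (h^4/24) y''''(0) + ...,
# with y'(0) = 0, y''(0) = -ksq, y''''(0) = ksq^2:
y[0] = 1.0
y[1] = 1.0 - 0.5 * ksq * h**2 + ksq**2 * h**4 / 24.0

# Numerov recursion for constant k^2 and S = 0:
c = h**2 * ksq / 12.0
for n in range(1, N):
    y[n + 1] = (2.0 * (1.0 - 5.0 * c) * y[n] - (1.0 + c) * y[n - 1]) / (1.0 + c)

plt.plot(x, y, label='Numerov')
plt.plot(x, np.cos(2.0 * np.pi * x), '--', label=r'exact, $\cos 2\pi x$')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
```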

9.3. Direct Integration of Boundary Value Problems: the Poisson Equation#

Consider trying to solve Poisson’s equation for a charge density distribution:

\(\rho(r) = \frac{1}{8 \pi} e^{-r}\).

If we integrate the charge density over all space, we obtain the total charge:

\(Q = \int \rho(r) \mathrm{d}^3 r = 4\pi \int_0^\infty \rho(r) r^2 \mathrm{d} r = 1\).

The exact solution to this problem is:

\(\phi(r) = 1 - \frac{1}{2} (r+2) e^{-r}\),

and therefore we can obtain the potential \(\Phi = \phi/r\).

The solution has the expected behavior at large \(r\): since \(\phi \rightarrow 1\), we have \(\Phi \rightarrow 1/r\), the Coulomb potential from a unit charge (in Gaussian units).

Let’s try to solve this example as an ordinary initial value problem. The charge density \(\rho\) has no singular behavior at the origin, and therefore we expect \(\Phi\) to be regular there, which implies that \(\phi = r \Phi\) vanishes at the origin, \(r=0\). We can readily check that this is indeed the case for the explicit solution:

\(\phi(0) = 1 - \frac{1}{2} (0+2) e^{0} = 0\).

We could then integrate \(\frac{ \mathrm{d}^2 \phi } { \mathrm{d} r^2 } = -4\pi r\rho\) outward from the origin using the Numerov method:

\(\phi_{n+1} = 2 \phi_n - \phi_{n-1} + \frac{h^2}{12} (S_{n+1} + 10 S_n + S_{n-1})\),

with source term \(S=-4\pi r\rho = -\frac{1}{2} r e^{-r}\).

However, to be able to accomplish this, we also need to know the value of \(\phi_1 = \phi(h)\) in addition to \(\phi_0 = 0\). Note that, to leading order in \(h\), this is equivalent to knowing \(\Phi(0)\), since \(\phi(h) \approx h\, \phi'(0)\) and:

\(\Phi(0) = \lim_{r\rightarrow 0} \frac{\phi}{r} = \left. \frac{ \mathrm{d} \phi } { \mathrm{d} r}\right|_{r\rightarrow 0}\), after applying L’Hôpital’s rule.

This is unfortunate, since \(\phi_1\) is nominally part of the function we are trying to find, and we do not know it a priori.

We will discuss below what to do in the general case, but for now, let’s just take \(\phi_1\) from the analytical solution:

\(\phi_1 = \phi(r=h) = 1 - \frac{1}{2} (h + 2) e^{-h}\).

9.3.1. Example 9.2: The Numerov Algorithm for the Poisson Equation#

Apply the Numerov algorithm to solve Poisson’s equation with \(\rho(r) = \frac{1}{8 \pi} e^{-r}\).

Assume the exact solution \(\phi(r) = 1 - \frac{1}{2} (r+2) e^{-r}\) and plot the error up to \(r=20\).

To start up the recursion, use \(\phi_1 = \phi(r=h) = 1 - \frac{1}{2} (h + 2) e^{-h}\).
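One possible implementation (again, the number of steps is an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt

N = 2000                                     # number of steps (assumed)
r, h = np.linspace(0.0, 20.0, N + 1, retstep=True)
S = -0.5 * r * np.exp(-r)                    # source term S = -4 pi r rho

phi = np.zeros_like(r)
phi[0] = 0.0                                 # phi(0) = 0
phi[1] = 1.0 - 0.5 * (h + 2.0) * np.exp(-h)  # phi_1 from the exact solution

# Numerov recursion with k^2 = 0:
for n in range(1, N):
    phi[n + 1] = (2.0 * phi[n] - phi[n - 1]
                  + h**2 / 12.0 * (S[n + 1] + 10.0 * S[n] + S[n - 1]))

exact = 1.0 - 0.5 * (r + 2.0) * np.exp(-r)
plt.plot(r, np.abs(phi - exact))
plt.xlabel('r')
plt.ylabel('|error|')
plt.show()
```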

After solving Example 9.2, you will have noticed that the error grows at large \(r\)!

Let’s try to understand the origin of this phenomenon by considering a more general case, where we do not have an analytical formula to give us \(\phi\) near the origin that is necessary to get the three-term recursion relation started.

One way to proceed is to find \(\Phi(0)\) by direct numerical quadrature of the Coulomb potential. At a point \(\mathbf{r}\), the potential will be given by an integral over the whole charge distribution:

\(\Phi(\mathbf{r}) = \int \frac{ \rho (|\mathbf{r}'|)}{|\mathbf{r} - \mathbf{r}'|} \mathrm{d}^3 r'\).

At the origin, this would be easy to calculate:

\(\Phi(0) = \int \frac{ \rho (|\mathbf{r}'|) }{ |\mathbf{r}'| } \mathrm{d}^3 r' = 4\pi \int_0^\infty r' \rho(r')\, \mathrm{d} r'\), where we have assumed a spherically-symmetric \(\rho(r)\).

We could achieve this, e.g. by using Simpson’s rule. There will be, however, some error associated with the value obtained. Let’s say this error is 5%. Let’s just impose such an error in our Example 9.2 result and compare the effect to what we had obtained previously.
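For reference, the quadrature step itself might look like the following sketch (the grid and the cutoff \(r_\mathrm{max} = 20\) are assumptions; for this \(\rho\), the exact value is \(\Phi(0) = 1/2\)):

```python
import numpy as np

def simpson(y, h):
    """Composite Simpson's rule on a uniform grid (needs an odd number of points)."""
    return h / 3.0 * (y[0] + y[-1] + 4.0 * np.sum(y[1:-1:2]) + 2.0 * np.sum(y[2:-1:2]))

N = 2000                                   # even number of intervals (assumed)
r, h = np.linspace(0.0, 20.0, N + 1, retstep=True)
rho = np.exp(-r) / (8.0 * np.pi)

Phi0 = 4.0 * np.pi * simpson(r * rho, h)   # Phi(0) = 4 pi * integral of r' rho(r')
print(Phi0)                                # ~0.5, the exact value for this rho
```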

9.3.2. Example 9.3: Numerov Algorithm Error Exploration#

Apply a 5% assumed error on the \(\phi_1\) input of Example 9.2 and compare your error to the previous one.

You will have noticed that disaster has struck: A 5% change on the initial conditions has induced a 50% error on the solution at large values of \(r\)!

What’s going on?

To understand what has happened, consider solutions to the homogeneous version of our differential equation:

\(\frac{ \mathrm{d}^2 \phi } { \mathrm{d} r^2 } = 0\).

Such solutions can be added to any particular solution to give another solution.

There are two linearly-independent homogeneous solutions:

\(\phi \sim r\) and \(\phi \sim \mathrm{constant}\).

The general solution to \(\frac{ \mathrm{d}^2 \phi } { \mathrm{d} r^2 } = -4\pi r\rho\) in the asymptotic region, where \(\rho\) vanishes and the equation is in fact homogeneous, can be written as a linear combination of these two functions. Of course, the latter, sub-dominant solution (which corresponds to \(\Phi \sim 1/r\)) is the physical one.

What has occurred in our problem is that an imprecision in the specification of \(\Phi\) at the origin, or any numerical round-off error in the integration process, can introduce a small admixture of the \(\phi \sim r\) solution, which will eventually dominate at large \(r\).

There exists a straightforward cure for this difficulty: subtract a multiple of the “bad”, unphysical solution to the homogeneous equation from the numerical result, to guarantee the physical behavior in the asymptotic region. The “bad” results vary linearly with \(r\) for large \(r\). We can fit the last few points of the numerical solution to the form:

\(\phi = mr + b\),

and subtract \(mr\) from the numerical results to reinstate the appropriate large-\(r\) behavior. Let’s do this in the next example.

9.3.3. Example 9.4: Bad Result Correction#

Correct the “bad” results with the 5% error of Example 9.3.

The “bad” results vary linearly with \(r\) for large \(r\). Fit the last few points of the numerical solution to the form:

\(\phi = mr + b\),

and subtract \(mr\) from the numerical results to reinstate the appropriate large-\(r\) behavior.
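A sketch of the correction step, assuming `r` and a contaminated solution `phi_bad` from Example 9.3 are already available (the use of the final 50 points is an arbitrary choice):

```python
import numpy as np

# Fit the last few points (here, arbitrarily, the final 50) to phi = m*r + b ...
m, b = np.polyfit(r[-50:], phi_bad[-50:], 1)

# ... and subtract the linearly-growing homogeneous admixture m*r:
phi_corrected = phi_bad - m * r
```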

In this relatively simple example, the instabilities are not too severe, and satisfactory results for moderate values of \(r\) are obtained with outward integration when the exact (or reasonably-accurate approximation) value of \(\phi_1\) is used.

Alternatively, it is also feasible to integrate inward, starting at large \(r\) with \(\phi = Q\), independent of \(r\). This results in a solution that often satisfies the boundary condition at \(r=0\) accurately and avoids having to perform a quadrature to determine the (approximate) starting value \(\phi_1\).

9.4. Green’s Function Solution of Boundary Value Problems#

It’s possible that the two solutions to the homogeneous equation (i.e. with the RHS=0) have very different behaviors. In that case, some extra precautions must be taken.

For example, consider the equation describing the potential from a charge distribution of multipole order \(\ell >0\):

\(\left[ \frac{ \mathrm{d^2}}{ \mathrm{d}r^2} - \frac{ \ell(\ell+1)}{r^2} \right] \phi = -4 \pi r \rho\),

which has two homogeneous solutions:

\(\phi \sim r^{\ell + 1}\) and \(\phi \sim r^{-\ell}\).

For large \(r\), the first of these solutions is much larger than the second, so that ensuring the correct asymptotic behavior by subtracting a multiple of this dominant homogeneous solution from a particular solution obtained by outward integration is subject to large round-off errors. Inward integration would also be unsatisfactory, since the unphysical solution \(r^{-\ell}\) is likely to dominate at small \(r\).

One possible way to generate an accurate solution is by combining the two methods: inward integration can be used to obtain the potential for \(r\) greater than some intermediate radius \(r_m\), and outward integration for the potential when \(r < r_m\). As long as \(r_m\) is chosen so that neither homogeneous solution is dominant, the outer and inner potentials obtained, respectively, from these two integrations will match at \(r_m\) and, together, will describe the entire solution. If the inner and outer potentials don’t quite match, a multiple of the homogeneous solution can be added to the former to correct for any deficiencies in our knowledge of \(\phi'(r=0)\).

Sometimes the two homogeneous solutions have such different behaviors that it is impossible to find a value of \(r_m\) that permits satisfactory integration of the inner and outer potentials. Such cases can be handled using the Green’s function of the homogeneous equation.

To illustrate this, consider our prototypical equation:

\(\frac{ \mathrm{d}^2 y } { \mathrm{d} x^2 } + k^2 (x) y = S(x)\),

with boundary conditions \(y(x=0) = y(x=\infty) = 0\).

Since this is a linear problem, the solution to this equation can be written as:

\(y(x) = \int_0^\infty G(x,x') S(x') \mathrm{d}x'\),

where \(G\) is a Green’s function that satisfies:

\(\left[ \frac{ \mathrm{d}^2 } { \mathrm{d} x^2 } + k^2 (x)\right] G(x,x') = \delta(x-x')\).

Evidently, \(G\) satisfies the homogeneous equation for \(x\neq x'\). However, the derivative of \(G\) is discontinuous at \(x=x'\) as can be seen by integrating the above equation from \(x=x'-\varepsilon\) to \(x=x'+\varepsilon\) and letting \(\varepsilon \rightarrow 0\).

Since \(\int \frac{ \mathrm{d}^2 G } { \mathrm{d} x^2 } \mathrm{d}x = \frac{ \mathrm{d} G } { \mathrm{d} x }\) (evaluated at the limits of integration), the integral of the \(\delta\) function over an interval that includes \(x=x'\) must be unity, and the remaining integral \(\int_{x'-\varepsilon}^{x'+\varepsilon} k^2(x) G(x,x')\, \mathrm{d}x\) vanishes as \(\varepsilon \rightarrow 0\), we are left with:

\(\left.\frac{ \mathrm{d} G } { \mathrm{d} x }\right|^{x'+\varepsilon}_{x' - \varepsilon} = 1\).

Then, to construct the Green’s function, we first solve the homogeneous equation:

\(\left[ \frac{ \mathrm{d}^2 } { \mathrm{d} x^2 } + k^2 (x)\right] G(x,x') = 0\) for \(x > x'\) and \(x < x'\) and then impose the appropriate boundary conditions, including the discontinuity in the derivative. Let’s recap the process by considering a simple analytical example.

9.4.1. Example 9.5: A Simple Analytical Green’s Function Problem#

Solve the equation:

\(\frac{ \mathrm{d}^2 y } { \mathrm{d} x^2 } + y = S(x)\),

with boundary conditions \(y(0) = y(\pi/2) = 0\), using the Green’s function method.
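A sketch of one possible construction, for checking your work: \(y_<(x) = \sin x\) satisfies the boundary condition at \(x=0\), and \(y_>(x) = -\cos x\) the one at \(x=\pi/2\); their Wronskian, \(y_>' y_< - y_<' y_> = \sin^2 x + \cos^2 x\), is already unity. Following the prescription given below, the Green’s function is \(G(x,x') = -\sin x_< \cos x_>\) (with \(x_<\), \(x_>\) the smaller and larger of \(x\) and \(x'\)), so that:

\(y(x) = -\cos x \int_0^x \sin x'\, S(x')\, \mathrm{d}x' - \sin x \int_x^{\pi/2} \cos x'\, S(x')\, \mathrm{d}x'\).

One can verify that \(\mathrm{d}G/\mathrm{d}x\) jumps by \(\sin^2 x' + \cos^2 x' = 1\) across \(x = x'\), as required.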

There’s a general prescription to obtain the Green’s function for our differential equation:

\(\frac{ \mathrm{d}^2 y } { \mathrm{d} x^2 } + k^2 (x) y = S(x)\),

  • Get the two solutions to the homogeneous equation that satisfy the two boundary conditions: \(y_<\) and \(y_>\). The first will satisfy the boundary conditions to the left (e.g. at \(x=0\)), and the second to the right (e.g. at \(x=\infty\)).

  • These have to be normalized such that their Wronskian equals unity: \(W = \frac{ \mathrm{d} y_> } { \mathrm{d} x } y_< - \frac{ \mathrm{d} y_< } { \mathrm{d} x } y_> = 1\).

  • Then the Green’s function is given by: \(G(x,x') = y_< (x_<) y_> (x_>)\), where \(x_<\) and \(x_>\) are the smaller and larger of \(x\) and \(x'\) respectively. (Check that this conforms with Example 9.5!).

  • Then the explicit solution is given by: \(y(x) = y_> (x) \int_0^x y_< (x') S(x') \mathrm{d}x' + y_< (x) \int_x^\infty y_> (x') S(x') \mathrm{d}x'\).

In general, this expression can be evaluated by a numerical quadrature and is not subject to any of the stability problems we have seen associated with a direct integration of the inhomogeneous equation.

In the case of arbitrary \(k^2\), the homogeneous solutions \(y_<\) and \(y_>\) can be found numerically by outward and inward integrations, respectively, of initial value problems and then normalized to satisfy the Wronskian relation.

For simple forms of \(k^2(x)\), they are known analytically. For example, for the problem:

\(\left[ \frac{ \mathrm{d^2}}{ \mathrm{d}r^2} - \frac{ \ell(\ell+1)}{r^2} \right] \phi = -4 \pi r \rho\),

it can be shown that:

\(\phi_< (r) = r^{\ell+1}\) and \(\phi_> (r) = - \frac{ 1 } { 2\ell + 1 } r^{-\ell}\), are one possible set of homogeneous solutions satisfying the appropriate boundary conditions and the differential equation.
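As a quick check that these satisfy the unit-Wronskian normalization used above:

\(W = \frac{ \mathrm{d} \phi_> }{ \mathrm{d} r } \phi_< - \frac{ \mathrm{d} \phi_< }{ \mathrm{d} r } \phi_> = \frac{\ell}{2\ell+1} r^{-\ell-1} \cdot r^{\ell+1} + (\ell+1) r^{\ell} \cdot \frac{1}{2\ell+1} r^{-\ell} = \frac{\ell + (\ell+1)}{2\ell+1} = 1\).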

9.4.2. Example 9.6: Green’s Function Example#

Solve the problem:

\(\frac{ \mathrm{d^2} \phi}{ \mathrm{d}r^2} = -4 \pi r \rho\),

for \(\rho(r) = \frac{1}{8\pi} e^{-r}\) using the Green’s function method.

Compare your results to the exact solution by plotting the error as in the previous examples.
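A sketch of one possible implementation for the \(\ell = 0\) case at hand, where \(\phi_< = r\) and \(\phi_> = -1\) give a unit Wronskian (we use the trapezoidal rule for the quadratures for simplicity; Simpson’s rule would be more accurate):

```python
import numpy as np
import matplotlib.pyplot as plt

def cumtrapz(y, h):
    """Cumulative trapezoidal integral of uniformly-sampled y, starting from 0."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * h * (y[1:] + y[:-1]))
    return out

N = 2000
r, h = np.linspace(0.0, 20.0, N + 1, retstep=True)
S = -0.5 * r * np.exp(-r)               # source term S = -4 pi r rho

# Homogeneous solutions for l = 0: phi_< = r, phi_> = -1.
inner = cumtrapz(r * S, h)              # integral_0^r    of phi_<(r') S(r') dr'
outer = cumtrapz(-S, h)                 # integral_0^r    of phi_>(r') S(r') dr'
outer = outer[-1] - outer               # integral_r^rmax of phi_>(r') S(r') dr'

phi = -1.0 * inner + r * outer          # phi_>(r) * inner + phi_<(r) * outer

exact = 1.0 - 0.5 * (r + 2.0) * np.exp(-r)
plt.plot(r, np.abs(phi - exact))
plt.xlabel('r')
plt.ylabel('|error|')
plt.show()
```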

9.5. Eigenvalues of the Wave Equation#

Eigenvalue problems involving differential equations often arise in finding normal-mode solutions of wave equations.

We begin our discussion of eigenvalue problems with a simple example: that of normal modes of a stretched string of uniform mass density.

After suitable scaling of the physical quantities, the equation and boundary conditions defining these modes can be written as:

\(\frac{ \mathrm{d^2} \phi}{ \mathrm{d}x^2} = -k^2 \phi\),

with \(\phi(x=0) = \phi(x=1) = 0\).

Here we have:

  • \(0 < x < 1\) is the coordinate along the string,

  • \(\phi\) is the transverse displacement of the string,

  • \(k\) is the constant wavenumber, linearly related to the frequency of vibration.

The equation is an eigenvalue equation in the sense that solutions satisfying the boundary conditions exist only for particular values of \(k\), \(\{k_n\}\), which we must find.

Furthermore, since it is linear and homogeneous, the normalization of the eigenfunction corresponding to any \(k_n\), which we denote by \(\phi_n\), is not fixed, but can be chosen for convenience.

The un-normalized eigenfunctions and eigenvalues of this problem are of course known analytically:

\(k_n = n \pi\), \(\phi_n = \sin n \pi x\), where \(n\) is a positive integer.

The strategy for solving this problem is an iterative one:

  • Guess a trial eigenvalue and obtain the general solution by integrating the differential equation as an initial value problem (e.g. using the Numerov method).

  • If the resulting solution does not satisfy the boundary conditions, we change the trial eigenvalue and integrate again, repeating the process until a trial eigenvalue is found for which the b.c.’s are satisfied to within a predetermined tolerance.

The above method is known as the “shooting method”.

For the problem at hand: for each trial value of \(k\), we integrate forward from \(x=0\) with the initial conditions:

\(\phi(x=0) = 0\) and \(\phi'(x=0) = \delta\),

where \(\delta\) is arbitrary and can be chosen for convenience, since the problem we are solving is a homogeneous one, and the normalization of solutions is not specified.

Upon integrating to \(x=1\), we will find, in general, a non-vanishing value of \(\phi\), since the trial eigenvalue will not be one of the true eigenvalues. We must then readjust \(k\) and integrate again, repeating the process until we find \(\phi(x=1)=0\) to within a specified tolerance.

The problem of finding a value of \(k\) for which \(\phi(1)\) vanishes is a root-finding problem, such as the ones that we have already discussed (Chapter 6). It is safest to use a simple search to locate an approximate eigenvalue, e.g. the bisection method.

9.6. Example 9.7: The Shooting Method#

Find the lowest eigenvalue of the stretched string problem by employing the shooting method described above.

Start your search at \(k=1\) and terminate the search when the eigenvalue is determined within a precision of \(10^{-5}\).
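A minimal sketch of one possible implementation, combining the Numerov integration with a coarse scan followed by bisection (the scan step \(\Delta k = 0.5\) and the grid spacing are our own choices):

```python
import numpy as np

def phi_at_1(k, N=1000):
    """Integrate phi'' = -k^2 phi from x = 0 to x = 1 with the Numerov
    method and return phi(1).  Initial conditions: phi(0) = 0 and an
    arbitrary slope, here phi'(0) = 1, so phi_1 ~ h."""
    h = 1.0 / N
    c = (h * k)**2 / 12.0
    phi_prev, phi_cur = 0.0, h
    for _ in range(N - 1):
        phi_prev, phi_cur = phi_cur, (2.0 * (1.0 - 5.0 * c) * phi_cur
                                      - (1.0 + c) * phi_prev) / (1.0 + c)
    return phi_cur

# Coarse scan upward from k = 1 until phi(1) changes sign ...
k_lo, dk = 1.0, 0.5
while phi_at_1(k_lo) * phi_at_1(k_lo + dk) > 0.0:
    k_lo += dk
k_hi = k_lo + dk

# ... then bisect down to the requested precision.
while k_hi - k_lo > 1e-5:
    k_mid = 0.5 * (k_lo + k_hi)
    if phi_at_1(k_lo) * phi_at_1(k_mid) > 0.0:
        k_lo = k_mid
    else:
        k_hi = k_mid

print(0.5 * (k_lo + k_hi), np.pi)   # the lowest eigenvalue, compared to pi
```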

9.7. The One-Dimensional Schrödinger Equation#

A rich example of the shooting method for eigenvalue problems is the task of finding the stationary quantum states of a particle of mass \(m\) moving in a one-dimensional potential \(V(x)\).

The time-independent Schrödinger equation is given by:

\(\frac{-\hbar^2}{2m} \frac{ \mathrm{d}^2}{\mathrm{d}x^2} \psi(x) + V(x)\psi(x) = E\psi(x)\).

If we rescale the \(x\) coordinate by a physical length \(a\), then \(\mathrm{d} x \rightarrow a \mathrm{d}x\), and we can rewrite this as:

\(\frac{-\hbar^2}{2ma^2} \frac{ \mathrm{d}^2}{\mathrm{d}x^2} \psi(x) + V(x)\psi(x) = E\psi(x)\).

We can then divide the LHS and RHS by \(V_0\), a characteristic scale of the potential:

\(\frac{-\hbar^2}{2ma^2 V_0} \frac{ \mathrm{d}^2}{\mathrm{d}x^2} \psi(x) + \frac{V(x)}{V_0} \psi(x)= \frac{E}{V_0}\psi(x)\),

to reach the form:

\(\left[-\frac{1}{z_0^2} \frac{ \mathrm{d}^2}{\mathrm{d}x^2} + v(x) - \epsilon\right] \psi(x) = 0\),

where

\( z_0^2 = \frac{2 m a^2 V_0}{\hbar^2}\;, \)

which characterizes the “depth” of the potential, and we have defined the dimensionless energy \(\epsilon = E/V_0\).

The equation is then of the form that we have previously addressed:

\(\frac{ \mathrm{d}^2 \psi}{\mathrm{d}x^2} - z_0^2 \left[v(x) - \epsilon\right] \psi(x) = 0\),

with \(k^2(x) = - z_0^2 \left[v(x) - \epsilon\right]\).

If \(v(x)\) is negative within the potential well and vanishes outside it (at the “walls”), our goal is then to find “bound” solutions with \(-1 < \epsilon < 0\), which are localized within the potential and decay exponentially outside.

This eigenvalue problem can be solved by the shooting method. Suppose that we are seeking a bound state, and therefore start with a negative trial eigenvalue.

We can integrate toward larger \(x\) via the “forward” Numerov algorithm, from some initial value \(x_\mathrm{min}\) to obtain the wave function \(\psi_<(x)\) (the “left” wave function). However, once we reach the “classically forbidden” region, we will start to generate an admixture of the undesirable exponentially-growing solution. Therefore, as a rule, integration into a classically forbidden region is likely to be inaccurate.

Therefore, at each energy, it is wiser to generate a second solution, \(\psi_>(x)\) (the “right” wave function), by integrating from \(x_\mathrm{max}\) toward a smaller \(x\), using a “backward” Numerov algorithm.

To determine whether the energy is an eigenvalue, \(\psi_<(x)\) and \(\psi_>(x)\) can be compared at a matching point \(x_m\), chosen so that neither integration is inaccurate. A convenient choice for \(x_m\) is the left turning point.

Since both \(\psi_<(x)\) and \(\psi_>(x)\) satisfy a homogeneous equation, their normalizations can always be chosen so that the two functions are equal at \(x_m\):

\(\psi_<(x_m) = \psi_>(x_m)\).

Furthermore, the derivative has to be continuous as well:

\(\int_{x_m - \varepsilon}^{x_m + \varepsilon} \mathrm{d} x\, \frac{ \mathrm{d}^2 \psi}{\mathrm{d}x^2} \rightarrow 0\) as \(\varepsilon \rightarrow 0\) (since the right-hand side of the differential equation is finite), and so:

\(\left.\frac{ \mathrm{d} \psi_< }{\mathrm{d}x}\right|_{x_m} = \left.\frac{\mathrm{d} \psi_>}{\mathrm{d} x}\right|_{x_m}\).

If we approximate the derivatives by their simplest finite difference approximations, i.e.:

\(\frac{ \mathrm{d} \psi (x) }{\mathrm{d}x} \approx \frac{ \psi(x) - \psi(x-h) } { h }\),

then, recalling that the normalizations have been chosen so that \(\psi_<(x_m) = \psi_>(x_m)\), an equivalent measure of the mismatch of the derivatives is:

\(f = \frac{ \psi_<(x_m - h) - \psi_>(x_m - h) } { \phi }\),

where \(\phi\) is a normalization factor, chosen to make \(f\) typically of order unity; e.g. \(\phi\) could be the maximum value of \(\psi_<\) or \(\psi_>\). An eigenvalue corresponds to \(f = 0\).
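As an illustration, a small hypothetical helper that evaluates \(f\), assuming `psi_left` and `psi_right` have already been tabulated on a common lattice and `m` is the index of the matching point:

```python
import numpy as np

def mismatch(psi_left, psi_right, m):
    """Matching function f at lattice index m; an eigenvalue gives f = 0."""
    # Rescale the right solution so the two agree at the matching point ...
    psi_right = psi_right * (psi_left[m] / psi_right[m])
    # ... and compare the backward differences there, normalized so that
    # f is typically of order unity:
    scale = np.abs(psi_left).max()
    return (psi_left[m - 1] - psi_right[m - 1]) / scale
```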

Note that if there are no turning points, then \(x_m\) can be chosen anywhere, while if there are more than two turning points, three or more homogeneous solutions, each accurate in different regions, must be patched together.

We will tackle the particular problem of the finite square well potential in Exercise 9.2:

\[\begin{split} V(x)= \begin{cases} -V_0& \quad \text{if $-a \leq x \leq a$;}\\ 0& \quad \text{if $|x| > a$.}\\ \end{cases} \end{split}\]