
Solving PDEs

1. Introduction to the numerical solution of PDEs

In this article I’ll detail the basics of the numerical solution of partial differential equations (PDEs), particularly the finite difference method (FDM). Along with examples, I’ll showcase a selection of different FDM algorithms that can be used to numerically solve PDEs. I’ll use the heat equation and the inviscid and viscous Burgers’ equations as examples to show how to apply the presented methods in practice.

Introduction

Partial differential equations, and differential equations in general, play a fundamental role in the mathematical description of our world. From physics to economics, differential equations are used in virtually every field of science to describe the dynamics of systems, phenomena and processes.

To demonstrate the application of PDE solver methods, I’ll use the following famous and easily manageable equations:

\[\begin{align} \label{eq:1} \frac{\partial u}{\partial t} = \alpha \frac{\partial^{2} u}{\partial x^{2}} &\quad\quad (\text{Heat equation}) \newline\nonumber\newline \label{eq:2} \frac{\partial u}{\partial t} = -u \frac{\partial u}{\partial x} &\quad\quad (\text{Inviscid Burgers' equation}) \newline\nonumber\newline \label{eq:3} \frac{\partial u}{\partial t} = \nu \frac{\partial^{2} u}{\partial x^{2}} - u \frac{\partial u}{\partial x} &\quad\quad (\text{Viscous Burgers' equation}) \end{align}\]

Finite differences

Several universal approaches and schemes exist for the solution of both ordinary differential equations (ODEs) and PDEs. Moreover, specific equations or families of equations almost always have their own unique solution schemes, both analytical and numerical. One of the most popular numerical approaches, used extensively for both ODEs and PDEs, is the finite difference method (FDM). Its straightforward concept, numerically cheap steps and easy implementation all make it a feasible choice for the numerical solution of differential equations (e.g. in Euler’s method or in the family of Runge–Kutta methods). Just to mention other solution schemes (which won’t be detailed in this post), PDE solution techniques also include the method of lines, multidimensional generalizations of the finite difference concept such as the finite-element method and the finite-volume method, as well as various meshless methods (e.g. smoothed-particle hydrodynamics) and others.

Just like the schemes mentioned above, FDM also refers to a class of several different methods that are connected by the same concept. In this approach all derivatives are approximated using forward, backward or central finite differences. Discretizing a 1D function between two arbitrary limits $a$ and $b$ results in a discrete function with values at the points $x_{1}, x_{2}, \dots, x_{N}$, spaced a distance $h$ apart, where $N$ is the number of grid points in the $\left[ a, b \right]$ interval and

\[\begin{equation} \notag h = (b - a) / (N - 1). \end{equation}\]

Differences in function values between neighbouring points are what we refer to as finite differences. Depending on their calculation method we distinguish

  • forward : $f \left( x_{i+1} \right) - f \left( x_{i} \right)$,
  • backward : $f \left( x_{i} \right) - f \left( x_{i-1} \right)$,
  • and central : $\left[ f \left( x_{i+1} \right) - f \left( x_{i-1} \right) \right]\,/\,2$

finite differences. When we’re numerically solving an ODE or a PDE, we discretize both the time and the space domain and approximate the derivatives on this discrete field. But how do finite differences help us in solving differential equations?

The two concepts (finite differences and differentiation) are in fact firmly related. The well-known definition of differentiation describes the derivative as the limit of a finite difference as $h \to 0$. Using the three finite differences introduced above, we can write down some straightforward approximating formulae for derivatives.
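For reference, the limit in question is

\[\begin{equation} \notag f'(x) = \lim_{h \to 0} \frac{f \left( x + h \right) - f \left( x \right)}{h}, \end{equation}\]

and the finite difference approximations below simply keep $h$ finite instead of taking the limit.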

The forward difference approximates a derivative using the difference of the function values at $x_{i}$ and $x_{i+1}$:

\[\begin{equation} \label{eq:4} f_{\text{FW}}'(x_{i}) \approx \frac{f \left( x_{i+1} \right) - f \left( x_{i} \right)}{x_{i+1} - x_{i}} \end{equation}\]

The backward difference uses the function values at $x_{i-1}$ and $x_{i}$:

\[\begin{equation} \label{eq:5} f_{\text{BW}}'(x_{i}) \approx \frac{f \left( x_{i} \right) - f \left( x_{i-1} \right)}{x_{i} - x_{i-1}} \end{equation}\]

The central difference combines these methods and uses the function values at $x_{i-1}$ and $x_{i+1}$ as follows:

\[\begin{equation} \label{eq:6} f_{\text{CE}}'(x_{i}) \approx \frac{f \left( x_{i+1} \right) - f \left( x_{i-1} \right)}{x_{i+1} - x_{i-1}} \end{equation}\]
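To see these approximations in action, the following is a minimal, self-contained C++ sketch (an illustration written for this post, not code from the repository linked at the end) that discretizes $f(x) = \sin(x)$ on a uniform grid and compares the three finite difference derivatives at an interior point against the exact derivative $\cos(x)$:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    // Discretize f(x) = sin(x) on [a, b] with N grid points, h = (b - a) / (N - 1)
    const double pi = std::acos(-1.0);
    const double a = 0.0, b = 2.0 * pi;
    const int N = 101;
    const double h = (b - a) / (N - 1);

    std::vector<double> f(N);
    for (int j = 0; j < N; ++j) { f[j] = std::sin(a + j * h); }

    // Approximate f'(x_i) at an interior grid point with the three finite differences
    const int i = N / 2;
    const double x_i = a + i * h;
    const double fw = (f[i + 1] - f[i]) / h;            // forward difference
    const double bw = (f[i] - f[i - 1]) / h;            // backward difference
    const double ce = (f[i + 1] - f[i - 1]) / (2 * h);  // central difference

    std::printf("exact   : % .6f\n", std::cos(x_i));
    std::printf("forward : % .6f\n", fw);
    std::printf("backward: % .6f\n", bw);
    std::printf("central : % .6f\n", ce);
    return 0;
}
```

Since the central difference is second-order accurate in $h$, it will typically land noticeably closer to the exact value than the two one-sided differences.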

Flavors of the finite difference methods

What really distinguishes the various finite difference methods is the strategy of how they’re applied to different differential equations (you just can’t avoid using several meanings of the word “different” in this context). Mathematically, the difference arises from the way derivatives on the time and space domains are approximated. But our method of choice is always selected according to the problem we’re facing. Some methods are numerically more stable in various circumstances, some are cheap to iterate, while others are simply easy to understand and program. The choice of what to use in a given situation is always ours. Regardless, some methods are generally more popular, because they offer a comfortable balance of all the advantages a solution scheme can provide. In the following sections I’ll summarize some of these methods for 1D cases of the heat equation and/or the Burgers’ equation.

I should note here that the formulae I’ll show in the following sections are not universally true. They only hold for differential equations that are first-order in time (i.e. $\partial u / \partial t$ is the highest-order time derivative in them). I’ll detail how other types of differential equations differ down below.

A common notation I’ll also employ in the following sections is

\[\begin{equation} \notag u_{i}^{n} \equiv u \left( x_{i}, t_{n} \right) \equiv u \left( i \Delta x, n \Delta t \right) \end{equation}\]

where the index $i$ indexes the 1D space domain and the index $n$ indexes the time domain. On the right side of the equation, $\Delta t$ and $\Delta x$ are the resolutions of the time and space domains, respectively.
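In code, this notation maps directly onto array indexing. As a minimal sketch (the names and values below are illustrative assumptions, not taken from the repository linked at the end), the discretized field can be stored so that `u[n][i]` corresponds to $u_{i}^{n} = u \left( i \Delta x, n \Delta t \right)$; in practice one often keeps only the current time step in memory:

```cpp
#include <vector>

int main()
{
    // Illustrative discretization parameters
    const int    N  = 101;     // number of spatial grid points (index i)
    const int    T  = 1000;    // number of time steps (index n)
    const double dx = 0.01;    // spatial resolution
    const double dt = 1e-4;    // temporal resolution

    // u[n][i] stores u_i^n = u(i*dx, n*dt)
    std::vector<std::vector<double>> u(T, std::vector<double>(N, 0.0));

    // Example initial condition at n = 0: a simple step profile
    for (int i = 0; i < N; ++i)
        u[0][i] = (i * dx < 0.5) ? 1.0 : 0.0;

    // A solver then fills u[n+1] from u[n] for n = 0 ... T-2
    return 0;
}
```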

The FTCS (explicit) method

FTCS, meaning “Forward Time Centered Space”, does what its name says. It uses the forward Euler method in the time domain and the central difference in the space domain to iterate the numerical PDE calculation. While iteration in time is known from the numerical solution of ODEs, “iteration” in space could be non-trivial for someone who was just introduced to partial differential equations.

In practice the “FT” part means that we’re iterating in the time domain using the well-known forward Euler method, as we would do in the case of any ODE. Considering an equation that is first-order in time (e.g. the heat equation or the Burgers’ equation), we can write the following formula for the time iteration:

\[\begin{equation} \label{eq:7} \frac{\partial u_{i}^{n}}{\partial t} = \frac{u_{i}^{n+1} - u_{i}^{n}}{\Delta t} \quad \Rightarrow \quad u_{i}^{n+1} = u_{i}^{n} + \frac{\partial u_{i}^{n}}{\partial t} \cdot \Delta t, \end{equation}\]

where $\partial u_{i}^{n} / \partial t$ is given by the specific equation we’re currently trying to solve. The formula on the left side of the arrow is what we refer to as the forward Euler method. In the case of the viscous Burgers’ equation we can get the value of $\partial u_{i}^{n} / \partial t$ from Eq. \eqref{eq:3} to be

\[\begin{equation} \label{eq:8} \frac{\partial u_{i}^{n}}{\partial t} = \nu \frac{\partial^{2} u_{i}^{n}}{\partial x^{2}} - u_{i}^{n} \frac{\partial u_{i}^{n}}{\partial x}. \end{equation}\]

The non-trivial “CS” part in “FTCS” tells us how to approximate the spatial derivatives on the right side of the equation above. In this “Centered Space” case, we use the central difference method (which we discussed in Eq. \eqref{eq:6}) to approximate the spatial derivatives in Eq. \eqref{eq:8}. These approximations for the first and second-order derivatives in the Burgers’ equation will be

\[\begin{equation} \label{eq:9} \frac{\partial u_{i}^{n}}{\partial x} = \frac{u_{i+1}^{n} - u_{i-1}^{n}}{2 \Delta x} \end{equation} \begin{equation} \label{eq:10} \frac{\partial^{2} u_{i}^{n}}{\partial x^{2}} = \frac{\partial}{\partial x} \frac{u_{i+1}^{n} - u_{i-1}^{n}}{2 \Delta x} = \frac{u_{i+1}^{n} - 2 u_{i}^{n} + u_{i-1}^{n}}{\left( \Delta x \right)^{2}} \end{equation}\]

In the case of the Burgers’ equation we can now write up the original PDE with the FTCS approximations from Eqs. \eqref{eq:7}, \eqref{eq:9} and \eqref{eq:10} as

\[\begin{equation} \label{eq:11} u_{i}^{n+1} = u_{i}^{n} + \nu \frac{u_{i+1}^{n} - 2 u_{i}^{n} + u_{i-1}^{n}}{\left( \Delta x \right)^{2}} \Delta t - u_{i}^{n} \frac{u_{i+1}^{n} - u_{i-1}^{n}}{2 \Delta x} \Delta t. \end{equation}\]
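To make Eq. \eqref{eq:11} concrete, here is a minimal C++ sketch of a single FTCS time step for the viscous Burgers’ equation with periodic boundary conditions (an illustration with assumed parameter names, not the implementation from the repository linked at the end):

```cpp
#include <vector>

// One FTCS step of the viscous Burgers' equation (Eq. 11) with periodic boundaries.
// u holds u_i^n on the spatial grid; the function returns u_i^{n+1}.
std::vector<double> ftcs_step(const std::vector<double>& u,
                              double nu, double dt, double dx)
{
    const int N = static_cast<int>(u.size());
    std::vector<double> u_next(N);

    for (int i = 0; i < N; ++i)
    {
        // Periodic neighbours: indices i-1 and i+1 wrap around the domain
        const double u_m = u[(i - 1 + N) % N];
        const double u_p = u[(i + 1) % N];

        const double diffusion  = nu * (u_p - 2.0 * u[i] + u_m) / (dx * dx);
        const double convection = u[i] * (u_p - u_m) / (2.0 * dx);

        u_next[i] = u[i] + (diffusion - convection) * dt;
    }
    return u_next;
}
```

Keep in mind that an explicit scheme like this is only conditionally stable: for the purely diffusive part the classic requirement is $\nu \Delta t / \Delta x^{2} \leq 1/2$, and the convective term imposes a CFL-type restriction on $\Delta t$ as well.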

The BTCS (implicit) method

While FTCS is an explicit method, BTCS is an implicit one. The name “BTCS” stands for “Backward Time Centered Space”. The difference between the two methods is that BTCS uses the backward Euler method to iterate the PDE in time, which can be written similarly to Eq. \eqref{eq:7} in the following form:

\[\begin{equation} \label{eq:12} \frac{u_{i}^{n+1} - u_{i}^{n}}{\Delta t} = \frac{\partial u_{i}^{n+1}}{\partial t} \quad \Rightarrow \quad u_{i}^{n+1} = u_{i}^{n} + \frac{\partial u_{i}^{n+1}}{\partial t} \cdot \Delta t. \end{equation}\]

The obvious problem we’re facing here is that $u_{i}^{n+1}$ is not known at the $n$’th time step of the integration, and according to \eqref{eq:12} we have to determine its value implicitly. To understand how, and to derive the correct formulae, we have to expand $\partial u / \partial t$ for the specific differential equation we’re trying to solve. In this case, let’s write down the central difference approximation of the spatial derivatives in the Burgers’ equation:

\[\begin{equation} \label{eq:13} \frac{\partial u_{i}^{n+1}}{\partial x} = \frac{u_{i+1}^{n+1} - u_{i-1}^{n+1}}{2 \Delta x} \end{equation} \begin{equation} \label{eq:14} \frac{\partial^{2} u_{i}^{n+1}}{\partial x^{2}} = \frac{\partial}{\partial x} \frac{u_{i+1}^{n+1} - u_{i-1}^{n+1}}{2 \Delta x} = \frac{u_{i+1}^{n+1} - 2 u_{i}^{n+1} + u_{i-1}^{n+1}}{\left( \Delta x \right)^{2}} \end{equation}\]

Now these equations are universal for PDEs that are first-order in time. Depending on the equation we intend to solve, we can substitute the previous approximations into the original PDE and rearrange it to determine the value of $u_{i}^{n+1}$. Of course, this will be different for every PDE, but we would still get an implicit equation in every case. It is also always true that $u_{i}^{n+1}$ depends on $u_{i}^{n}$, $u_{i+1}^{n+1}$ and $u_{i-1}^{n+1}$ if we use the central difference method to approximate the spatial derivatives.

The trick we employ here is to rearrange the PDE (again) into a better form. The idea is to obtain an equation for every $x_{i}$ point along the spatial dimension, where $u_{i}^{n}$ (the only known value) is the subject of the equation. This way we obtain $N$ equations, one for every discrete spatial point. All these equations will have the known $u_{i}^{n}$ term from the $n$’th time step on one side and all the $u_{\times}^{n+1}$ terms on the other side. The quintessence of this trick is to interpret these equations as a system of equations and solve it to determine the values of $u_{i}^{n+1}$ for every $i$.

Unfortunately the viscous Burgers’ equation presented in Eq. \eqref{eq:3} is not easy to solve implicitly. The discretization and integration of the spatial domain needs special treatment, as was shown e.g. by Mittal and Arora[1]. Since the goal of this article is to present the general idea of the existing numerical methods that can be used to solve PDEs, I’ll use the heat equation from Eq. \eqref{eq:1} here as an example for the BTCS method.

Similarly to \eqref{eq:11}, we can approximate the heat equation using the BTCS method from Eq. \eqref{eq:12} and Eq. \eqref{eq:14} as

\[\begin{equation} \label{eq:15} u_{i}^{n+1} = u_{i}^{n} + \alpha \frac{u_{i+1}^{n+1} - 2 u_{i}^{n+1} + u_{i-1}^{n+1}}{\left( \Delta x \right)^{2}} \cdot \Delta t. \end{equation}\]

Denote $\left( \Delta x \right)^{2}$ simply as $\Delta x^{2}$ from now on. Rearranging the equation as detailed above we get the final form for the BTCS method, where $u_{i}^{n}$ is the subject of the equation:

\[\begin{equation} \label{eq:16} u_{i}^{n} = - \frac{\alpha \Delta t}{\Delta x^{2}} u_{i+1}^{n+1} + \left( \frac{2 \alpha \Delta t}{\Delta x^{2}} + 1 \right) u_{i}^{n+1} - \frac{\alpha \Delta t}{\Delta x^{2}} u_{i-1}^{n+1} \end{equation}\]

For every $i$ value we can write down the same equation to get a system of equations. Denote the coefficient of $u_{i+1}^{n+1}$ as $a_{i}$, the coefficient of $u_{i}^{n+1}$ as $b_{i}$ and the coefficient of $u_{i-1}^{n+1}$ as $c_{i}$. Similarly, denote the known left-hand side $u_{i}^{n}$ as $d_{i}$. If we have $N$ discrete points along the spatial dimension, indexed from $0$ to $N-1$, then the system of equations will be the following:

\[\begin{align} d_{0} = &a_{0} u_{0+1}^{n+1} + b_{0} u_{0}^{n+1} + c_{0} u_{0-1}^{n+1} \newline d_{1} = &a_{1} u_{1+1}^{n+1} + b_{1} u_{1}^{n+1} + c_{1} u_{1-1}^{n+1} \newline &\vdots \newline d_{N-1} = &a_{N-1} u_{(N-1)+1}^{n+1} + b_{N-1} u_{N-1}^{n+1} + c_{N-1} u_{(N-1)-1}^{n+1} \end{align}\]

Terms with indexes smaller than $0$ or larger than $N-1$ can be either simply dropped or evaluated with periodic boundary conditions.
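Because every equation couples only the three neighbouring unknowns $u_{i-1}^{n+1}$, $u_{i}^{n+1}$ and $u_{i+1}^{n+1}$, the coefficient matrix of this system is tridiagonal and can be solved efficiently, for example with the Thomas algorithm. Below is a minimal C++ sketch of one BTCS step for the heat equation with the out-of-range boundary terms simply dropped (an illustration written for this post, not the implementation from the repository linked below):

```cpp
#include <vector>

// One BTCS step of the heat equation: solve the tridiagonal system from Eq. 16,
//   -r*u[i-1]^{n+1} + (1 + 2r)*u[i]^{n+1} - r*u[i+1]^{n+1} = u[i]^n,   r = alpha*dt/dx^2,
// with out-of-range terms dropped, using the Thomas algorithm.
std::vector<double> btcs_step(const std::vector<double>& u,
                              double alpha, double dt, double dx)
{
    const int N = static_cast<int>(u.size());
    const double r = alpha * dt / (dx * dx);

    // Tridiagonal coefficients and the known right-hand side d_i = u_i^n
    std::vector<double> lower(N, -r);            // c_i, coefficient of u_{i-1}^{n+1}
    std::vector<double> diag(N, 1.0 + 2.0 * r);  // b_i, coefficient of u_i^{n+1}
    std::vector<double> upper(N, -r);            // a_i, coefficient of u_{i+1}^{n+1}
    std::vector<double> d(u);

    // Forward sweep: eliminate the lower diagonal
    for (int i = 1; i < N; ++i)
    {
        const double w = lower[i] / diag[i - 1];
        diag[i] -= w * upper[i - 1];
        d[i]    -= w * d[i - 1];
    }

    // Back substitution yields u_i^{n+1}
    std::vector<double> u_next(N);
    u_next[N - 1] = d[N - 1] / diag[N - 1];
    for (int i = N - 2; i >= 0; --i)
        u_next[i] = (d[i] - upper[i] * u_next[i + 1]) / diag[i];

    return u_next;
}
```

This is the usual trade-off of implicit methods: every step requires solving a linear system, but in exchange BTCS is unconditionally stable for the heat equation, so $\Delta t$ is not restricted by $\Delta x$.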

The Crank–Nicolson method

Examples

Solving the 1D Burgers’ equation

Solving the 2D Burgers’ equation

Code availability

The C++ implementation of the complete solution can be found on my GitHub page in this repository under the project/ folder.

References

  1. Mittal, R. C., and Geeta Arora. "Numerical solution of the coupled viscous Burgers’ equation." Communications in Nonlinear Science and Numerical Simulation 16.3 (2011): 1304-1313. https://doi.org/10.1016/j.cnsns.2010.06.028