Parallel Computing with the 1-D Heat Equation: Part 1

Derivation of the 1-D Heat Equation and Basic Numerical Methods

When I took a course in Large Scale Computation at Tulane University, I learned a lot and forgot a lot. Understanding how to take a physical phenomenon, develop software that is scalable and highly parallelizable, and optimize that software for the High Performance Computing (HPC) resources available is enough material to justify a whole degree, so trying to absorb it all in the semester courses taught on HPC (if any are taught at all) can feel like drinking water from a fire hose. In going back over my class notes I rediscovered a problem that touches all the important parts of scientific computing on HPC resources, and in relearning it I realized it would make a great example for beginners to work through.

Heat equation

The heat equation is one of the best-known and simplest partial differential equations (PDEs). The physical intuition behind it, and the techniques for solving it, provide the foundation for many other equations throughout mathematics and physics: the heat equation and related diffusive PDEs appear throughout statistical mechanics, describing random walks and Brownian motion. While the 1-D heat equation and some other PDEs have analytical solutions, most PDEs must be solved numerically.

In this first part of a multi-part series using the example of heat flow in a uniform rod, we’ll go over the physical interpretation of the heat equation, mathematical setup, and discretization for the numerical solution. In later parts of the series we’ll use C++ and open-source parallel computing libraries to solve the problem numerically on HPC resources.

Physical interpretation

From the second law of thermodynamics, heat flows from hot to cold at a rate proportional to the temperature difference and the thermal conductivity of the material between them. This gives the intuitive result that the rate of heating or cooling at a point is proportional to how much hotter or cooler the surrounding area is.

Conservation of heat

A slender rod, where \phi (x,t) is some conserved quantity, f(x,t) is a source term, and F_1 and F_2 are the fluxes at x_1 and x_2, respectively.

 

Here we have a conserved quantity, \phi (x,t) where \phi might be heat or mass. The total amount of this conserved quantity in an interval x_1 \leq x \leq x_2 is defined as,

(1)   \begin{equation*} \int_{x_1}^{x_2} \phi (x,t) d x \end{equation*}

To describe how the amount changes with time over the same interval,

(2)   \begin{equation*} \frac{d}{d t} \int_{x_1}^{x_2} \phi (x,t) d x \end{equation*}

Because \phi is a conserved quantity, Eq. (2) has to balance with the flux at the boundaries, F_1 and F_2, and the generation/consumption term f(x,t) within the interval,

(3)   \begin{equation*} \frac{d}{d t} \int_{x_1}^{x_2} \phi (x,t) d x = F_1(t) - F_2(t) + \int_{x_1}^{x_2}f(x,t) d x \end{equation*}

Using Fick’s Law, the flux is proportional to the gradient, F=-\mathcal{D} \frac{\partial }{\partial x}\phi (x,t), where \mathcal{D} is a constant related to the diffusivity of the material. Combining Fick’s Law with Eq. (3),

    \begin{equation*} \frac{d}{dt} \int_{x_1}^{x_2} \phi (x,t) d x = -\left[-\mathcal{D} \frac{\partial}{\partial x}\phi (x,t) \right ]^{x_2}_{x_1}+ \int_{x_1}^{x_2}f(x,t) d x \implies \end{equation*}

(4)   \begin{align*} \frac{d}{dt} \int_{x_1}^{x_2} \phi (x,t) d x =& -\int_{x_1}^{x_2}\frac{\partial}{\partial x}\left (-\mathcal{D} \frac{\partial}{\partial x}\phi (x,t) \right )dx\\&+ \int_{x_1}^{x_2}f(x,t) d x \end{align*}

Leibniz integral rule

Using the Leibniz integral rule, we can differentiate under the integral sign on the left side of Eq. (4). The Leibniz integral rule states that for a multivariate function f(x,t) (here f(x,t) is a generic function of x and t, not the generation/consumption term of the physical problem) with limits -\infty \leq a(x) , b(x) \leq \infty, the derivative of the integral \int_{a(x)}^{b(x)} f(x,t) dt can be expressed as,

    \begin{align*} \frac{d}{dx}\int_{a(x)}^{b(x)} f(x,t) dt  =& f(x,b(x))\cdot \frac{d}{dx} b(x) - f(x,a(x))\cdot \frac{d}{dx} a(x) \\&+ \int_{a(x)}^{b(x)} \frac{\partial}{\partial x} f(x,t) dt \end{align*}
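As a quick sanity check of the rule, take the concrete example f(x,t) = xt with a(x)=0 and b(x)=x (chosen purely for illustration). Direct integration gives \int_0^x xt \, dt = x^3/2, whose derivative is 3x^2/2, and the Leibniz rule reproduces this,

    \begin{equation*} \frac{d}{dx}\int_{0}^{x} xt \, dt = f(x,b(x))\cdot \frac{d}{dx}b(x) + \int_{0}^{x} \frac{\partial}{\partial x}(xt) \, dt = x^2 + \int_{0}^{x} t \, dt = \frac{3x^2}{2} \end{equation*}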

Often the above simplifies into,

    \begin{equation*} \frac{d}{dx}\int_{a(x)}^{b(x)} f(x,t) dt = \int_{a(x)}^{b(x)} \frac{\partial}{\partial x} f(x,t) dt \end{equation*}

This happens when the limits of integration, a(x) and b(x), are constants, so that their derivatives are 0.

Final formulation

Applying the Leibniz integral rule with Eq. (4),

(5)   \begin{align*} \int_{x_1}^{x_2} \frac{\partial}{\partial t} \phi (x,t) dx =&- \int_{x_1}^{x_2}\frac{\partial}{\partial x}\left (-\mathcal{D} \frac{\partial}{\partial x}\phi (x,t) \right )dx \\&+ \int_{x_1}^{x_2}f(x,t) d x \end{align*}

And because all terms of Eq. (5) have the same limits of integration and same differential it can be simplified to,

(6)   \begin{equation*} \frac{\partial}{\partial t} \phi (x,t) = \frac{\partial }{\partial x}\left (\mathcal{D} \frac{\partial}{\partial x}\phi (x,t) \right ) + f(x,t) \end{equation*}

Steady-state with Dirichlet boundary condition

At steady state the time derivative of \phi is zero, and the equation reduces to,

(7)   \begin{equation*} \frac{d }{d x}\left (\mathcal{D} \frac{d}{d x}\phi \right ) + f(x) = 0 \end{equation*}

With the Dirichlet boundary conditions of \phi(0)=A and \phi(1)=B.

Finite difference method

To evaluate the steady-state heat equation numerically, Eq. (7) must be discretized. This is done by approximating the derivative -\mathcal{D} \frac{d \phi}{d x} evaluated at a point i+1/2 by a finite difference between the two neighboring nodes, \phi_i and \phi_{i+1}, divided by the “infinitesimal” distance between the two nodes, \Delta x_{i+1/2}.

Visual representation of the central difference, where \left [ -\mathcal{D} \frac{d \phi}{d x} \right ] evaluated at i+1/2 can be approximated by the difference between \phi_{i+1} and \phi_i divided by the spacing between them, \Delta x_{i+1/2}.

Again using Fick’s Law, where the flux is proportional to the gradient, we can now apply the central difference by substituting the flux, F, into Eq. (7),

(8)   \begin{equation*} -\frac{d F}{d x}+f(x) = 0 \end{equation*}

Visual representation of Eq. (7) and of Eq. (8).

The central difference of the flux,

    \begin{equation*} -\left [ \frac{d F}{d x} \right ]_{i} \simeq - \frac{F_{i+1/2} - F_{i-1/2}}{\Delta x_i} \end{equation*}

And since,

    \begin{equation*} F_{i+1/2} = \left [ -\mathcal{D} \frac{d \phi}{d x} \right ]_{i+1/2} \simeq -\mathcal{D}\frac{\phi_{i+1} - \phi_{i}}{\Delta x_{i+1/2}} \end{equation*}

and,

    \begin{equation*} F_{i-1/2} = \left [ -\mathcal{D} \frac{d \phi}{d x} \right ]_{i-1/2} \simeq -\mathcal{D}\frac{\phi_{i} - \phi_{i-1}}{\Delta x_{i-1/2}} \end{equation*}

The discrete form of Eq. (8) is,

(9)   \begin{equation*} \frac{\mathcal{D}}{\Delta x_i}\left [ \frac{(\phi_{i+1}-\phi_i)}{\Delta x_{i+1/2}} - \frac{(\phi_{i}-\phi_{i-1})}{\Delta x_{i-1/2}} \right ]+ f(x_i) = 0 \end{equation*}

And if \Delta x_{i+1/2} = \Delta x_{i-1/2} = \Delta x_i and \Delta x_i is uniform Eq. (9) simplifies into,

(10)   \begin{equation*} \frac{\mathcal{D}}{\Delta x^2}(\phi_{i+1} - 2\phi_i + \phi_{i-1}) + f(x_i) = 0 \end{equation*}

Numerical grid and linear system

Consider a uniform grid within 0\leq x \leq 1,

In this example, the uniform grid has 7 node points, and including the boundary conditions the value of \phi must satisfy the following linear system of equations,

    \begin{align*} \phi_0 =&A \\ \phi_2 - 2\phi_1 + \phi_0 =& -\frac{\Delta x^2}{\mathcal{D}}f(x_1)\\ \phi_3 - 2\phi_2 + \phi_1 =& -\frac{\Delta x^2}{\mathcal{D}}f(x_2)\\ \phi_4 - 2\phi_3 + \phi_2 =& -\frac{\Delta x^2}{\mathcal{D}}f(x_3)\\ \phi_5 - 2\phi_4 + \phi_3 =& -\frac{\Delta x^2}{\mathcal{D}}f(x_4)\\ \phi_6 - 2\phi_5 + \phi_4 =& -\frac{\Delta x^2}{\mathcal{D}}f(x_5)\\ \phi_6 =& B \end{align*}

This linear system of equations in Matrix-Vector form,

(11)   \begin{gather*} \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & -2 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & -2 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & -2 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & -2 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} \phi_0 \\ \phi_1 \\ \phi_2 \\ \phi_3 \\ \phi_4 \\ \phi_5 \\ \phi_6 \\ \end{bmatrix} = \begin{bmatrix} A \\ -\frac{\Delta x^2}{\mathcal{D}}f(x_1) \\ -\frac{\Delta x^2}{\mathcal{D}}f(x_2) \\ -\frac{\Delta x^2}{\mathcal{D}}f(x_3) \\ -\frac{\Delta x^2}{\mathcal{D}}f(x_4) \\ -\frac{\Delta x^2}{\mathcal{D}}f(x_5) \\ B \\ \end{bmatrix} \end{gather*}

This forms the classic Matrix-Vector equation

(12)   \begin{equation*} \mathbf{A} \mathbf{x} = \mathbf{b} \end{equation*}

Where \mathbf{A} is a matrix, and both \mathbf{x} and \mathbf{b} are vectors. Through matrix algebra, \mathbf{x} can be solved for by computing \mathbf{x} = \mathbf{A}^{-1} \mathbf{b}.

For HPC, and scientific computing in general, it is beneficial for \mathbf{A} to be a symmetric matrix. In Eq. (11), the matrix \mathbf{A} is close to symmetric, and we can take advantage of the fact that \phi_0 and \phi_6 are known but are included in the unknown vector \mathbf{x}. Eliminating \phi_0 and \phi_6 from the system of linear equations and the matrix \mathbf{A} in Eq. (11),

    \begin{align*} \phi_2 - 2\phi_1 + A =& -\frac{\Delta x^2}{\mathcal{D}}f(x_1) \\ B - 2\phi_5 + \phi_4 =& -\frac{\Delta x^2}{\mathcal{D}}f(x_5) \end{align*}

Resulting in,

(13)   \begin{gather*} \small \begin{bmatrix} -2 & 1 & 0 & 0 & 0  \\ 1 & -2 & 1 & 0 & 0  \\ 0 & 1 & -2 & 1 & 0  \\ 0 & 0 & 1 & -2 & 1  \\ 0 & 0 & 0 & 1 & -2  \\ \end{bmatrix} \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \\ \phi_4 \\ \phi_5 \\ \end{bmatrix} = -\frac{\Delta x^2}{\mathcal{D}} \begin{bmatrix} f(x_1) \\ f(x_2) \\ f(x_3) \\ f(x_4) \\ f(x_5) \\ \end{bmatrix} - \begin{bmatrix} A \\ 0 \\ 0 \\ 0 \\ B \\ \end{bmatrix} \end{gather*}

And now \mathbf{A} is symmetric. With the physical problem discretized and set up so that \mathbf{A} in \mathbf{x}=\mathbf{A}^{-1}\mathbf{b} is symmetric, we can begin to solve numerically.

Iterative method

When solving for the unknown vector \mathbf{x} in the linear system of Eq. (12), iterative methods use the scheme,

(14)   \begin{equation*} \mathbf{x}^{k+1} = \mathbf{B}\mathbf{x}^k + \mathbf{c} \end{equation*}

Where \mathbf{B} is a constant matrix, \mathbf{c} is a constant vector, \mathbf{x}^k is an iterate that may not yet satisfy Eq. (12), and k is the iteration counter. To solve for \mathbf{c}, note that the true solution must satisfy,

    \[\mathbf{x}=\mathbf{B}\mathbf{x} + \mathbf{c}\]

Solving for \mathbf{c} and using \mathbf{x}=\mathbf{A}^{-1}\mathbf{b},

    \[\mathbf{c}= (\mathbf{I} - \mathbf{B})\mathbf{A}^{-1}\mathbf{b}\]

The iterative scheme of Eq. (14) is now,

(15)   \begin{gather*} \mathbf{x}^{k+1} = \mathbf{B}\mathbf{x}^k + (\mathbf{I} - \mathbf{B})\mathbf{A}^{-1}\mathbf{b} \qquad\implies \\ \mathbf{A}(\mathbf{I} - \mathbf{B})^{-1} \mathbf{x}^{k+1} = \mathbf{A}(\mathbf{I} - \mathbf{B})^{-1}\mathbf{B} \mathbf{x}^k + \mathbf{b}\qquad\implies \\ \mathbf{D}\mathbf{x}^{k+1} = \mathbf{N}\mathbf{x}^{k} + \mathbf{b} \end{gather*}

Where \mathbf{D}=\mathbf{A}(\mathbf{I} - \mathbf{B})^{-1} and \mathbf{N}=\mathbf{D}\mathbf{B}. \mathbf{A} can be split as,

(16)   \begin{equation*} \mathbf{A}=\mathbf{D}-\mathbf{N} \end{equation*}

We arrive at the final equation for computing the unknown vector \mathbf{x}^{k+1},

(17)   \begin{equation*} \mathbf{x}^{k+1} = \mathbf{D}^{-1} \left (\mathbf{N}\mathbf{x}^k + \mathbf{b} \right ) \end{equation*}

For a practical iterative method, the scheme must converge rapidly, and computing \mathbf{N}\mathbf{x}^k and applying \mathbf{D}^{-1} must be computationally inexpensive.

Convergence

The error for an iterative method at step k is \mathbf{e}^k=\mathbf{x}-\mathbf{x}^k. Subtracting Eq. (14) from \mathbf{x}=\mathbf{B}\mathbf{x}+\mathbf{c}, the error must satisfy,

(18)   \begin{equation*} \mathbf{e}^{k+1}=\mathbf{B}\mathbf{e}^k \end{equation*}

And converges if,

    \[\lim_{k \to \infty}\mathbf{e}^k=0\]

Assuming that \mathbf{B} has the eigenvalue equation,

(19)   \begin{equation*} \mathbf{B}\mathbf{z}_i = \lambda_i \mathbf{z}_i \qquad i = 1,\cdots,\mathrm{rank}\left [ \mathbf{B}\right ] \end{equation*}

The initial error, \mathbf{e}^0 using the eigenvectors from Eq. (19) is defined as

(20)   \begin{equation*} \mathbf{e}^0 := \sum_{i=1}^{\mathrm{rank}\left [ \mathbf{B}\right ]} a_i \mathbf{z}_i \end{equation*}

Where the a_i are constants. Applying Eq. (18) k times results in the formula for \mathbf{e}^k,

(21)   \begin{equation*} \mathbf{e}^k = \sum_{i=1}^{\mathrm{rank}\left [ \mathbf{B}\right ]} a_i (\lambda_i)^k \mathbf{z}_i \end{equation*}

If the magnitude of the eigenvalues, \lambda_i is less than 1, \mathbf{e}^k converges to 0 as k \to \infty. For convergence the spectral radius of \mathbf{B}, \rho ( \mathbf{B}), must meet the criteria,

(22)   \begin{equation*} \rho (\mathbf{B})=\lim_{k\to \infty} \left | \left | \mathbf{B}^k \right | \right |^{1/k} < 1 \end{equation*}

For the iterative scheme built from the matrix \mathbf{A} of Eq. (12) to have guaranteed convergence, the following criterion must be met,

(23)   \begin{equation*} \rho\left ( \mathbf{D}^{-1} \mathbf{N} \right ) < 1 \end{equation*}

Jacobi Method for the 1-D heat equation

The simplest iterative method is the Jacobi Method. For an n \times n matrix \mathbf{A}, the splitting matrix \mathbf{D} is composed of the diagonal elements of \mathbf{A},

    \[\mathbf{D}_{ii}=\mathbf{A}_{ii}\]

Computing the inverse of only the diagonal elements is fast and easy, resulting in \mathbf{D}_{ii}^{-1}=1/\mathbf{A}_{ii}. We will define \mathbf{N} as the negated off-diagonal elements of \mathbf{A},

    \[\mathbf{N}_{ij}=-\mathbf{A}_{ij} \qquad i \ne j\]

For any iterative method to have guaranteed convergence, the spectral radius must satisfy Eq. (23). The Jacobi Method also provides another sufficient condition for convergence, diagonal dominance,

(24)   \begin{equation*} \left | \mathbf{A}_{ii} \right | > \sum_{j \ne i} \left | \mathbf{A}_{ij} \right | \qquad i = 1, \cdots, n \end{equation*}

The matrix \mathbf{A} from Eq. (13) is diagonally dominant (strictly so in the first and last rows; the interior rows satisfy Eq. (24) with equality, which is still sufficient for this irreducible matrix), so our unknown vector \mathbf{x}^k will converge as k \to \infty. The diagonal matrix \mathbf{D},

    \begin{equation*} \mathbf{D}= \begin{bmatrix} -2 & 0 & 0 & 0 & 0  \\ 0 & -2 & 0 & 0 & 0  \\ 0 & 0 & -2 & 0 & 0  \\ 0 & 0 & 0 & -2 & 0  \\ 0 & 0 & 0 & 0 & -2  \\ \end{bmatrix} \end{equation*}

And from Eq. (16) -\mathbf{N} has the off-diagonal elements of \mathbf{A},

    \begin{equation*} \mathbf{N}= \begin{bmatrix} 0 & -1 & 0 & 0 & 0  \\ -1 & 0 & -1 & 0 & 0  \\ 0 & -1 & 0 & -1 & 0  \\ 0 & 0 & -1 & 0 & -1  \\ 0 & 0 & 0 & -1 & 0  \\ \end{bmatrix} \end{equation*}

The iterative scheme for solving \phi(x) from Eq. (13) is now,

(25)   \begin{gather*} \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \\ \phi_4 \\ \phi_5 \\ \end{bmatrix}^{k+1} = \mathbf{D}^{-1} \left ( \mathbf{N} \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \\ \phi_4 \\ \phi_5 \\ \end{bmatrix}^k + \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \\ b_5 \\ \end{bmatrix} \right ) \end{gather*}

Where k is the iteration counter. The vector \mathbf{b} in Eq. (17) is,

(26)   \begin{gather*} \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \\ b_5 \\ \end{bmatrix} = -\frac{\Delta x^2}{\mathcal{D}} \begin{bmatrix} f(x_1) \\ f(x_2) \\ f(x_3) \\ f(x_4) \\ f(x_5) \\ \end{bmatrix} - \begin{bmatrix} A \\ 0 \\ 0 \\ 0 \\ B \\ \end{bmatrix} \end{gather*}

For the Jacobi Method, the general equation for \phi_i at node i after k+1 steps is,

(27)   \begin{equation*} \phi_{i}^{k+1}=\frac{1}{D_{ii}}\left ( b_i + \sum_{j \ne i} N_{ij}\phi_j^k \right) \end{equation*}

Gauss-Seidel Method

The Gauss-Seidel Method is another iterative method. The main difference from the Jacobi Method is that in Gauss-Seidel the matrix \mathbf{D} is no longer diagonal but is built from the lower-triangular elements of \mathbf{A},

(28)   \begin{equation*} \mathbf{D} = A_{ij} \qquad j \leq i, \;\; i=1,\cdots , n \end{equation*}

And now the \mathbf{N} matrix contains the upper-triangular elements of \mathbf{A},

(29)   \begin{equation*} \mathbf{N} = - A_{ij} \qquad j > i, \;\; i=1,\cdots , n \end{equation*}

Similar to the Jacobi Method, if the matrix \mathbf{A} is diagonally dominant, Gauss-Seidel convergence is guaranteed by Eq. (24).

Solving the 1-D heat equation’s linear system of equations from Eq. (13), the matrices for Gauss-Seidel become,

    \begin{equation*} \mathbf{D}= \begin{bmatrix} -2 & 0 & 0 & 0 & 0  \\ 1 & -2 & 0 & 0 & 0  \\ 0 & 1 & -2 & 0 & 0  \\ 0 & 0 & 1 & -2 & 0  \\ 0 & 0 & 0 & 1 & -2  \\ \end{bmatrix} \end{equation*}

and,

    \begin{equation*} \mathbf{N}= \begin{bmatrix} 0 & -1 & 0 & 0 & 0  \\ 0 & 0 & -1 & 0 & 0  \\ 0 & 0 & 0 & -1 & 0  \\ 0 & 0 & 0 & 0 & -1  \\ 0 & 0 & 0 & 0 & 0  \\ \end{bmatrix} \end{equation*}

The iterative scheme is again Eq. (17). For Gauss-Seidel, the computation of \mathbf{D}^{-1} is more involved than for the Jacobi Method. Our system is,

    \begin{gather*} \mathbf{D} \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \\ \phi_4 \\ \phi_5 \\ \end{bmatrix}^{k+1} = \mathbf{N} \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \\ \phi_4 \\ \phi_5 \\ \end{bmatrix}^k + \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \\ b_5 \\ \end{bmatrix} \end{gather*}

And now,

    \begin{gather*} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ \end{bmatrix} = \mathbf{N} \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \\ \phi_4 \\ \phi_5 \\ \end{bmatrix}^k + \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \\ b_5 \\ \end{bmatrix} \end{gather*}

The linear system of equations becomes,

    \begin{gather*} \begin{bmatrix} D_{11} & 0 & 0 & 0 & 0  \\ D_{21} & D_{22} & 0 & 0 & 0  \\ 0 & D_{32} & D_{33} & 0 & 0  \\ 0 & 0 & D_{43} & D_{44} & 0  \\ 0 & 0 & 0 & D_{54} & D_{55}  \\ \end{bmatrix} \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \\ \phi_4 \\ \phi_5 \\ \end{bmatrix}^{k+1} = \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ \end{bmatrix} \end{gather*}

We can derive the general form for \phi_i^{k+1} by forward substitution. The first term, \phi_1^{k+1}, is,

    \[\phi_1^{k+1}=\frac{c_1}{D_{11}}\]

The second term, \phi_2^{k+1},

    \begin{gather*} D_{21}\phi_1^{k+1}+D_{22}\phi_2^{k+1}=c_2 \implies\\ \phi_2^{k+1} = \frac{(c_2 - D_{21}\phi_1^{k+1})}{D_{22}} \end{gather*}

And the third term,

    \begin{gather*} D_{32}\phi_2^{k+1} + D_{33}\phi_3^{k+1} = c_3 \implies \\ \phi_3^{k+1} = \frac{(c_3 - D_{32}\phi_2^{k+1})}{D_{33}} \end{gather*}

And finally, for the ith node at step k+1, the general equation is,

(30)   \begin{equation*} \phi_{i}^{k+1} = \frac{\left ( c_i - \sum_{j<i} D_{ij} \phi_{j}^{k+1} \right )}{D_{ii}} \end{equation*}

Next in the series

In future posts of this series, we’ll solve this problem numerically using C++ with OpenMP and MPI!
