# Numerical Methods Exam Prep: Cheatsheet
### Advection-Diffusion Equation

The conservative form for a scalar species concentration $\phi(x, t)$ is: $$\frac{\partial (\rho\phi)}{\partial t} + \nabla \cdot (\rho\vec{u}\phi) = \kappa\nabla^2\phi$$ where $\vec{u}$ is the velocity vector field, $\kappa$ is the diffusion coefficient, and $\rho$ is the density field.

#### Continuity Equation

The continuity equation is: $$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\vec{u}) = 0$$

#### Non-Conservative Form

Using the continuity equation, the non-conservative form of the species conservation equation is: $$\rho\frac{\partial \phi}{\partial t} + \rho\vec{u} \cdot \nabla \phi = \kappa\nabla^2\phi$$

#### Advantage of Conservative Form

The main advantage of numerically solving the conservative form over the non-conservative form is that it inherently conserves the total amount of the scalar quantity $\phi$ within the computational domain. This is crucial for physical accuracy, especially near discontinuities or sharp gradients, because the numerical solution then cannot create or destroy the scalar, even when the velocity field is not perfectly divergence-free on the discrete grid.

#### QUICK Upwind Scheme for Convective Flux

To discretize the convective flux $\int_{S_e} \rho\phi\,\vec{u} \cdot \hat{n}\, dS \approx \dot{m}_e \phi_e$ with the QUICK (quadratic upstream interpolation) scheme for $\dot{m}_e > 0$, $\phi_e$ is expressed in terms of neighboring nodal values. The east face lies at $x_e = \frac{x_P + x_E}{2}$; using Lagrange polynomials on a general non-uniform grid, $\phi_e$ is built from a quadratic through the two upwind nodes $W, P$ and the downwind node $E$ (for $\dot{m}_e < 0$ the stencil shifts downstream to $P, E, EE$). Let $x_W, x_P, x_E$ be the coordinates of the nodes.
The general form of the QUICK interpolation at face 'e' (between P and E) for $\dot{m}_e > 0$ is: $$\phi_e = C_1 \phi_P + C_2 \phi_E + C_3 \phi_W$$ The coefficients $C_1, C_2, C_3$ come from the quadratic Lagrange polynomial through $x_W, x_P, x_E$ evaluated at the face location; on a non-uniform grid they depend explicitly on those coordinates. For a uniform grid and $\dot{m}_e > 0$, the standard QUICK weights are: $$\phi_e = \frac{6}{8}\phi_P + \frac{3}{8}\phi_E - \frac{1}{8}\phi_W$$ For non-uniform grids, explicit expressions in $x_W, x_P, x_E$ and the corresponding $\phi$ values follow from the same Lagrange interpolation.

### FDM and FVM Comparison

#### Finite Difference Method (FDM)

- **Description:** FDM approximates derivatives in a PDE using Taylor series expansions. The domain is discretized into a grid, the PDE is applied at each grid point, and derivatives are replaced with finite difference approximations.
- **Diagram:** A grid of points where the solution is computed at specific nodes.
- **Example:** $\frac{\partial^2 \phi}{\partial x^2} \approx \frac{\phi_{i+1} - 2\phi_i + \phi_{i-1}}{\Delta x^2}$.

#### Finite Volume Method (FVM)

- **Description:** FVM divides the domain into a set of non-overlapping control volumes (finite volumes). The PDE is integrated over each control volume, and divergence terms are converted to surface integrals using Gauss's divergence theorem. Values are typically stored at cell centers.
- **Diagram:** A grid of control volumes, with fluxes calculated across the faces of each volume.
- **Example:** For $\nabla \cdot (\rho\vec{u}\phi)$, the integral over a control volume $V$ is $\int_V \nabla \cdot (\rho\vec{u}\phi)\, dV = \oint_{\partial V} (\rho\vec{u}\phi)\cdot \hat{n}\, dS$.

#### Main Differences

1. **Conservation:** FVM is inherently conservative because it integrates the PDE over control volumes, ensuring that fluxes entering one volume exit the adjacent one.
FDM, while it can be made conservative, does not guarantee it by formulation.
2. **Geometry:** FVM can easily handle complex geometries and unstructured grids because it works with arbitrary control volumes. FDM is typically easier to implement on structured, orthogonal grids.
3. **Variable Location:** FDM usually defines variables at grid nodes. FVM typically defines variables at the centroids of control volumes.

#### Advantages of FDM over FVM

1. **Simplicity:** FDM is generally simpler to formulate and implement for simple geometries and structured grids, especially for lower-order approximations.
2. **Accuracy (for smooth solutions):** For smooth solutions on regular grids, FDM can achieve high orders of accuracy more straightforwardly than FVM.

#### Advantages of FVM over FDM

1. **Conservation Property:** FVM inherently conserves extensive properties (mass, momentum, energy) at a discrete level, which is critical for many physical problems.
2. **Complex Geometries:** FVM can be applied to complex and unstructured meshes more easily than FDM, making it versatile for real-world engineering problems.

### 1D Steady Convection-Diffusion Equation

Consider the 1-D steady convection-diffusion equation with a source term: $$u\frac{\partial \phi}{\partial x} = \Gamma\frac{\partial^2 \phi}{\partial x^2} + \beta\phi$$ where $u > 0$ is a constant velocity, $\Gamma > 0$ is the diffusivity, and $\beta$ is a scalar constant. We use first-order upwind differencing for the advection term and second-order central differencing for the diffusion term. Given $\phi_i = \phi(x_i)$ where $x_i = x_{i-1} + \Delta x$.
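The discrete-conservation property claimed for FVM above admits a one-line numerical illustration (a sketch with arbitrary made-up face fluxes, not tied to any particular scheme): summing the per-cell flux balances telescopes, so interior fluxes cancel in pairs and only the domain-boundary fluxes remain.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                           # number of control volumes
F = rng.standard_normal(N + 1)   # arbitrary face fluxes F[0..N]

# Per-cell balance: net flux out of cell i is F[i+1] - F[i].
cell_balance = F[1:] - F[:-1]

# Summing over all cells, interior fluxes cancel (telescoping sum),
# leaving only the two boundary fluxes: no interior creation/destruction.
total = cell_balance.sum()
print(total - (F[-1] - F[0]))    # ~0 to machine precision
```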
#### Discretized Form at an Interior Node

Using first-order upwind for advection ($u>0 \implies \phi_i - \phi_{i-1}$) and second-order central for diffusion: $$u\frac{\phi_i - \phi_{i-1}}{\Delta x} = \Gamma\frac{\phi_{i+1} - 2\phi_i + \phi_{i-1}}{\Delta x^2} + \beta\phi_i$$ Rearranging terms: $$\left(\frac{u}{\Delta x} + \frac{\Gamma}{\Delta x^2}\right)\phi_{i-1} + \left(-\frac{u}{\Delta x} - \frac{2\Gamma}{\Delta x^2} - \beta\right)\phi_i + \frac{\Gamma}{\Delta x^2}\phi_{i+1} = 0$$

#### Truncation Error

The truncation error is found by substituting Taylor series expansions for $\phi_{i-1}$ and $\phi_{i+1}$ around $\phi_i$:
$\phi_{i-1} = \phi_i - \Delta x \phi'_i + \frac{\Delta x^2}{2}\phi''_i - \frac{\Delta x^3}{6}\phi'''_i + O(\Delta x^4)$
$\phi_{i+1} = \phi_i + \Delta x \phi'_i + \frac{\Delta x^2}{2}\phi''_i + \frac{\Delta x^3}{6}\phi'''_i + O(\Delta x^4)$
Substituting these into the discretized equation: $u\left(\phi'_i - \frac{\Delta x}{2}\phi''_i + \frac{\Delta x^2}{6}\phi'''_i + \dots\right) = \Gamma\left(\phi''_i + \frac{\Delta x^2}{12}\phi''''_i + \dots\right) + \beta\phi_i$ The original PDE is $u\phi'_i - \Gamma\phi''_i - \beta\phi_i = 0$. The truncation error is the difference between the discretized equation and the original PDE: $T_i = \left[u\frac{\phi_i - \phi_{i-1}}{\Delta x} - \Gamma\frac{\phi_{i+1} - 2\phi_i + \phi_{i-1}}{\Delta x^2} - \beta\phi_i\right] - \left[u\phi'_i - \Gamma\phi''_i - \beta\phi_i\right]$ Substituting the Taylor expansions: $T_i = \left[u(\phi'_i - \frac{\Delta x}{2}\phi''_i + O(\Delta x^2)) - \Gamma(\phi''_i + O(\Delta x^2)) - \beta\phi_i\right] - \left[u\phi'_i - \Gamma\phi''_i - \beta\phi_i\right]$ $$T_i = -\frac{u\Delta x}{2}\phi''_i + O(\Delta x^2)$$ The truncation error is $O(\Delta x)$.

#### Scarborough Criterion for Gauss-Seidel

For a Gauss-Seidel iterative scheme to converge, the Scarborough criterion (diagonal dominance condition) must be met.
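The $O(\Delta x)$ truncation-error result above can be checked numerically; a minimal sketch, assuming the smooth test function $\phi(x) = \sin x$ and illustrative constants $u$, $\Gamma$, $\beta$ (all hypothetical choices):

```python
import numpy as np

u, Gamma, beta = 1.0, 0.5, 0.2   # assumed illustrative constants
x0 = 0.7
phi = np.sin
dphi = np.cos
def d2phi(x):
    return -np.sin(x)

def residual(h):
    """Discrete operator minus the exact PDE terms at x0 (the truncation error)."""
    upwind = u * (phi(x0) - phi(x0 - h)) / h
    central = Gamma * (phi(x0 + h) - 2 * phi(x0) + phi(x0 - h)) / h**2
    exact = u * dphi(x0) - Gamma * d2phi(x0) - beta * phi(x0)
    return (upwind - central - beta * phi(x0)) - exact

r1, r2 = residual(1e-3), residual(5e-4)
print(r1 / r2)                             # ~2: halving h halves the error
print(r1 / (-u * 1e-3 / 2 * d2phi(x0)))   # ~1: matches -(u*dx/2)*phi''
```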
For the discretized equation $A_i\phi_{i-1} + B_i\phi_i + C_i\phi_{i+1} = 0$, the condition is $|B_i| \ge |A_i| + |C_i|$ for all $i$, with strict inequality for at least one $i$. Here, $A_i = \frac{u}{\Delta x} + \frac{\Gamma}{\Delta x^2}$, $B_i = -\frac{u}{\Delta x} - \frac{2\Gamma}{\Delta x^2} - \beta$, $C_i = \frac{\Gamma}{\Delta x^2}$. The condition becomes: $\left|-\frac{u}{\Delta x} - \frac{2\Gamma}{\Delta x^2} - \beta\right| \ge \left|\frac{u}{\Delta x} + \frac{\Gamma}{\Delta x^2}\right| + \left|\frac{\Gamma}{\Delta x^2}\right|$ Since $u, \Gamma, \Delta x > 0$, and assuming the diagonal coefficient $B_i$ is negative (true whenever $\beta > -\frac{u}{\Delta x} - \frac{2\Gamma}{\Delta x^2}$, and in particular for any $\beta \ge 0$): $\left(\frac{u}{\Delta x} + \frac{2\Gamma}{\Delta x^2} + \beta\right) \ge \left(\frac{u}{\Delta x} + \frac{\Gamma}{\Delta x^2}\right) + \frac{\Gamma}{\Delta x^2}$ $\frac{u}{\Delta x} + \frac{2\Gamma}{\Delta x^2} + \beta \ge \frac{u}{\Delta x} + \frac{2\Gamma}{\Delta x^2}$ This simplifies to $\beta \ge 0$. So a sufficient condition for Gauss-Seidel convergence is $\beta \ge 0$. (Note: if $\beta < 0$, diagonal dominance fails; since the criterion is sufficient rather than necessary, the iteration may still converge, but convergence is no longer guaranteed.)

### Wave Equation

Consider the wave equation: $$\frac{\partial^2 \phi}{\partial t^2} = c^2\frac{\partial^2 \phi}{\partial x^2}$$ where $c$ is a real scalar wave speed. Using the central difference scheme on a uniform grid $x_j = x_{j-1} + \Delta x$ for $\frac{\partial^2 \phi}{\partial x^2}$: $$\frac{\partial^2 \phi}{\partial t^2} = c^2\frac{\phi_{j+1} - 2\phi_j + \phi_{j-1}}{\Delta x^2}$$

#### Second Order ODE for $v(t)$

Assume $\phi_j(t) = v(t) \exp(ikx_j)$ for wavenumber $k$. Substitute into the discretized wave equation: $$\frac{d^2v}{dt^2} \exp(ikx_j) = c^2 \frac{v(t)\exp(ikx_{j+1}) - 2v(t)\exp(ikx_j) + v(t)\exp(ikx_{j-1})}{\Delta x^2}$$ Divide by $v(t)\exp(ikx_j)$: $$\frac{1}{v}\frac{d^2v}{dt^2} = c^2 \frac{\exp(ik\Delta x) - 2 + \exp(-ik\Delta x)}{\Delta x^2}$$ Using Euler's formula, $\exp(ik\Delta x) + \exp(-ik\Delta x) = 2\cos(k\Delta x)$.
$$\frac{1}{v}\frac{d^2v}{dt^2} = c^2 \frac{2\cos(k\Delta x) - 2}{\Delta x^2} = \frac{2c^2}{\Delta x^2} (\cos(k\Delta x) - 1)$$ Using the identity $\cos(\theta) - 1 = -2\sin^2(\theta/2)$: $$\frac{1}{v}\frac{d^2v}{dt^2} = -\frac{4c^2}{\Delta x^2}\sin^2\left(\frac{k\Delta x}{2}\right)$$ Let $\omega_k^2 = \frac{4c^2}{\Delta x^2}\sin^2\left(\frac{k\Delta x}{2}\right)$. The second order ODE governing $v(t)$ is: $$\frac{d^2v}{dt^2} + \omega_k^2 v = 0$$

#### System of 2 Coupled First Order ODEs

Let $y_1 = v$ and $y_2 = \frac{dv}{dt}$. Then $\frac{dy_1}{dt} = y_2$ and $\frac{dy_2}{dt} = -\omega_k^2 y_1$. In matrix form: $$\frac{d}{dt}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -\omega_k^2 & 0 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$$

#### Eigenvalues of the Matrix

The matrix is $A = \begin{pmatrix} 0 & 1 \\ -\omega_k^2 & 0 \end{pmatrix}$. To find the eigenvalues, solve $\det(A - \lambda I) = 0$: $\det \begin{pmatrix} -\lambda & 1 \\ -\omega_k^2 & -\lambda \end{pmatrix} = 0$ $(-\lambda)(-\lambda) - (1)(-\omega_k^2) = 0$ $\lambda^2 + \omega_k^2 = 0$ $\lambda = \pm i\omega_k$ The eigenvalues are $\pm i\omega_k$.

#### Maximum Time Step for Stability

This part uses the stability diagrams of Explicit Euler and 4th order Runge-Kutta.

- **Explicit Euler:** The region of stability is the disk $|1 + \lambda\Delta t| \le 1$: a circle of radius 1 centered at $(-1, 0)$ in the complex $\lambda\Delta t$ plane. The eigenvalues here are purely imaginary ($\lambda = \pm i\omega_k$), and $|1 \pm i\omega_k\Delta t| = \sqrt{1 + \omega_k^2\Delta t^2} > 1$ for every $\Delta t > 0$: the imaginary axis touches the stability circle only at the origin. Explicit Euler is therefore unstable for this purely oscillatory problem at any time step; reducing $\Delta t$ slows the growth (amplification $\sqrt{1 + \omega_k^2\Delta t^2}$ per step) but never removes it.
- **4th Order Runge-Kutta:** The stability region for 4th order RK is much larger. For imaginary eigenvalues, the stability region extends along the imaginary axis.
From typical RK4 stability plots, the imaginary axis is stable for $|\lambda \Delta t| = \omega_k\Delta t \le 2\sqrt{2} \approx 2.828$. Thus, $\Delta t \le \frac{2\sqrt{2}}{\omega_k} = \frac{2\sqrt{2}\Delta x}{2c|\sin(k\Delta x/2)|} = \frac{\sqrt{2}\Delta x}{c|\sin(k\Delta x/2)|}$. The worst case is $|\sin(k\Delta x/2)| = 1$, so the maximum stable time step over all modes is $\Delta t = \frac{\sqrt{2}\Delta x}{c}$.

Comparison: the RK4 stability region contains the imaginary-axis segment $\omega_k\Delta t \le 2\sqrt{2}$, so RK4 admits a finite stable time step for this purely oscillatory problem and is far better suited to it than Explicit Euler, whose stability circle only touches the imaginary axis at the origin.

### Boundary Value Problem Discretization

Consider the boundary value problem: $$\frac{d^2\phi}{dx^2} + \alpha\frac{d\phi}{dx} + \beta\phi = f(x)$$ with $\phi(0) = \phi_0, \phi(L) = \phi_L$. Discretized over $x_j = j\Delta x$, $\Delta x = L/N$.

#### Discretization for Interior Node

Using second-order finite differencing for $d\phi/dx$ and $d^2\phi/dx^2$:
$\frac{d^2\phi}{dx^2} \approx \frac{\phi_{j+1} - 2\phi_j + \phi_{j-1}}{\Delta x^2}$
$\frac{d\phi}{dx} \approx \frac{\phi_{j+1} - \phi_{j-1}}{2\Delta x}$
Substituting into the ODE: $$\frac{\phi_{j+1} - 2\phi_j + \phi_{j-1}}{\Delta x^2} + \alpha\frac{\phi_{j+1} - \phi_{j-1}}{2\Delta x} + \beta\phi_j = f_j$$ Multiplying by $\Delta x^2$: $$(\phi_{j+1} - 2\phi_j + \phi_{j-1}) + \frac{\alpha\Delta x}{2}(\phi_{j+1} - \phi_{j-1}) + \beta\Delta x^2\phi_j = \Delta x^2 f_j$$ Rearranging terms for $\phi_{j-1}, \phi_j, \phi_{j+1}$: $$\left(1 - \frac{\alpha\Delta x}{2}\right)\phi_{j-1} + (-2 + \beta\Delta x^2)\phi_j + \left(1 + \frac{\alpha\Delta x}{2}\right)\phi_{j+1} = \Delta x^2 f_j$$ This applies for $j \in \{1, 2, \dots, N-1\}$.
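The imaginary-axis stability claims in the Wave Equation section above follow directly from the one-step amplification factors $R(z) = 1 + z$ (Explicit Euler) and $R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24$ (RK4), with $z = \lambda\Delta t$; a quick numerical scan:

```python
import numpy as np

def R_euler(z):
    return 1 + z

def R_rk4(z):
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# Purely imaginary z = i*omega*dt, as for the semi-discrete wave equation.
y = np.linspace(1e-3, 3.0, 3000)
z = 1j * y

# Explicit Euler: |1 + iy| = sqrt(1 + y^2) > 1 for every y > 0 -> unstable.
assert np.all(np.abs(R_euler(z)) > 1)

# RK4: |R(iy)| <= 1 up to y = 2*sqrt(2), amplifying beyond it.
amp = np.abs(R_rk4(z))
limit = 2 * np.sqrt(2)
assert np.all(amp[y <= limit] <= 1 + 1e-12)
assert np.all(amp[y > limit + 1e-6] > 1)
```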
#### Gauss-Seidel Successive Over-Relaxation (SOR)

For the discretized equation $A_j\phi_{j-1} + B_j\phi_j + C_j\phi_{j+1} = F_j$, where:
$A_j = 1 - \frac{\alpha\Delta x}{2}$
$B_j = -2 + \beta\Delta x^2$
$C_j = 1 + \frac{\alpha\Delta x}{2}$
$F_j = \Delta x^2 f_j$
The SOR update for $\phi_j$ at iteration $(k+1)$ with relaxation parameter $\omega$ is: $$\phi_j^{(k+1)} = (1-\omega)\phi_j^{(k)} + \omega\left[\frac{1}{B_j}\left(F_j - A_j\phi_{j-1}^{(k+1)} - C_j\phi_{j+1}^{(k)}\right)\right]$$

#### Point-Jacobi Convergence

For the Point-Jacobi iteration to converge, the spectral radius of the iteration matrix must be less than 1; strict diagonal dominance is a sufficient condition. Given $\beta\Delta x^2 = 3$ and $N=100$, the coefficients at the interior points are: $A_j = 1 - \frac{\alpha\Delta x}{2}$, $B_j = -2 + 3 = 1$, $C_j = 1 + \frac{\alpha\Delta x}{2}$. Diagonal dominance requires $|B_j| > |A_j| + |C_j|$: $$1 > \left|1 - \frac{\alpha\Delta x}{2}\right| + \left|1 + \frac{\alpha\Delta x}{2}\right|$$ If $\frac{\alpha\Delta x}{2} \le 1$, the right-hand side equals $(1 - \frac{\alpha\Delta x}{2}) + (1 + \frac{\alpha\Delta x}{2}) = 2$, so the condition reads $1 > 2$, which is false. If $\frac{\alpha\Delta x}{2} > 1$, the right-hand side equals $(\frac{\alpha\Delta x}{2} - 1) + (1 + \frac{\alpha\Delta x}{2}) = \alpha\Delta x$, so the condition reads $1 > \alpha\Delta x$, contradicting $\alpha\Delta x > 2$. Strict diagonal dominance therefore fails for every $\alpha\Delta x \ge 0$; since dominance is only a sufficient test, a sharper statement must use the spectral radius of the tridiagonal iteration matrix.
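A minimal sketch of the SOR sweep defined above, assuming the convergent special case $\alpha = \beta = 0$, $f = 0$ with $\phi(0) = 0$, $\phi(L) = 1$ (an assumed test problem whose exact solution is linear, so the result is easy to check):

```python
import numpy as np

# SOR sweeps for A*phi[j-1] + B*phi[j] + C*phi[j+1] = F[j] on interior nodes.
N = 20
A, B, C = 1.0, -2.0, 1.0        # coefficients for alpha = beta = 0
F = np.zeros(N + 1)
phi = np.zeros(N + 1)
phi[-1] = 1.0                    # Dirichlet values: phi_0 = 0, phi_L = 1
omega = 1.5                      # relaxation parameter in (0, 2)

for sweep in range(2000):
    for j in range(1, N):        # sweep left to right: phi[j-1] is already new
        gs = (F[j] - A * phi[j - 1] - C * phi[j + 1]) / B
        phi[j] = (1 - omega) * phi[j] + omega * gs

exact = np.linspace(0.0, 1.0, N + 1)   # exact solution is linear in x
print(np.max(np.abs(phi - exact)))     # ~0 after enough sweeps
```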
A negative $\beta$ would strengthen the diagonal ($B_j = -2 + \beta\Delta x^2$ would become large and negative), but with the given $\beta\Delta x^2 = 3$ the diagonal is only $B_j = 1$. Spot checks confirm the dominance inequality fails: for $\alpha\Delta x = 0$, $1 > |1| + |1| = 2$ is false; for $\alpha\Delta x = 2$, $1 > |0| + |2| = 2$ is false. Under the given parameters, diagonal dominance therefore never certifies Point-Jacobi convergence (the question may intend different coefficients).

#### Iterations for Discretization Error Reduction

Given $\beta\Delta x^2 = 3$, $\alpha\Delta x = 1.75$, $N=100$. The question asks how many iterations reduce the error in a guess for $\phi(x)$ by a factor of $10^{-3}$; this is governed by the convergence rate of the iterative method. The required count is the smallest $k$ with $\rho^k \le 10^{-3}$, where $\rho$ is the spectral radius of the iteration matrix. The row-sum bound $\max_j \frac{|A_j| + |C_j|}{|B_j|}$ gives, with $|B_j| = 1$, $|A_j| = |1 - \frac{1.75}{2}| = 0.125$ and $|C_j| = |1 + \frac{1.75}{2}| = 1.875$, the value $0.125 + 1.875 = 2$; this exceeds 1, so the bound is inconclusive (it is an upper bound on $\rho$, not $\rho$ itself). For this constant-coefficient tridiagonal system the Jacobi spectral radius is known exactly: $\rho_J = \frac{2\sqrt{A_j C_j}}{|B_j|}\cos\left(\frac{\pi}{N}\right) = 2\sqrt{0.125 \times 1.875}\,\cos\left(\frac{\pi}{100}\right) \approx 0.968 < 1$, so Point-Jacobi converges, though slowly. Then $\rho_J^k \le 10^{-3}$ requires $k \ge \frac{\ln 10^3}{-\ln 0.968} \approx 211$ iterations.

### 1D Diffusion Equation Stability Analysis

Consider the 1-D diffusion equation: $$\frac{\partial \phi}{\partial t} = \alpha\frac{\partial^2 \phi}{\partial x^2}$$ Discretized on a uniform grid $x_j = j\Delta x$, $\Delta x = L/N$.
#### Method 1: Explicit Scheme

$$\frac{\phi_j^{n+1} - \phi_j^n}{\Delta t} = \alpha\frac{\phi_{j+1}^n - 2\phi_j^n + \phi_{j-1}^n}{\Delta x^2}$$ Substitute $\phi_j^n = G^n e^{ikx_j}$: $G^{n+1}e^{ikx_j} - G^n e^{ikx_j} = \alpha\frac{\Delta t}{\Delta x^2}(G^n e^{ikx_{j+1}} - 2G^n e^{ikx_j} + G^n e^{ikx_{j-1}})$ Divide by $G^n e^{ikx_j}$: $G - 1 = \frac{\alpha\Delta t}{\Delta x^2}(e^{ik\Delta x} - 2 + e^{-ik\Delta x})$ $G = 1 + 2\frac{\alpha\Delta t}{\Delta x^2}(\cos(k\Delta x) - 1) = 1 - 4\frac{\alpha\Delta t}{\Delta x^2}\sin^2\left(\frac{k\Delta x}{2}\right)$ Let $r = \frac{\alpha\Delta t}{\Delta x^2}$: $$G = 1 - 4r\sin^2\left(\frac{k\Delta x}{2}\right)$$ For stability, $|G| \le 1$: $-1 \le 1 - 4r\sin^2\left(\frac{k\Delta x}{2}\right) \le 1$ The right inequality $1 - 4r\sin^2\left(\frac{k\Delta x}{2}\right) \le 1$ implies $-4r\sin^2\left(\frac{k\Delta x}{2}\right) \le 0$, which is always true since $r>0$ and $\sin^2 \ge 0$. The left inequality $1 - 4r\sin^2\left(\frac{k\Delta x}{2}\right) \ge -1$ implies $2 \ge 4r\sin^2\left(\frac{k\Delta x}{2}\right)$, so $r \le \frac{1}{2\sin^2\left(\frac{k\Delta x}{2}\right)}$. The most restrictive condition occurs when $\sin^2\left(\frac{k\Delta x}{2}\right) = 1$. Therefore, $r \le \frac{1}{2}$, or $\frac{\alpha\Delta t}{\Delta x^2} \le \frac{1}{2}$. **This scheme is conditionally stable.** The maximum $\Delta t$ for stability is $\Delta t \le \frac{\Delta x^2}{2\alpha}$.
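The conditional-stability bound $r \le 1/2$ can be confirmed by scanning the amplification factor $G = 1 - 4r\sin^2(k\Delta x/2)$ over all wavenumbers; a short sketch:

```python
import numpy as np

# theta = k*dx spans one period of resolvable wavenumbers.
theta = np.linspace(0.0, np.pi, 1001)

def G(r, theta):
    """Von Neumann amplification factor of the explicit diffusion scheme."""
    return 1 - 4 * r * np.sin(theta / 2) ** 2

# r <= 1/2: |G| <= 1 for every wavenumber -> stable.
assert np.all(np.abs(G(0.5, theta)) <= 1 + 1e-12)

# r > 1/2: the highest wavenumber (theta = pi) is amplified -> unstable.
print(np.max(np.abs(G(0.51, theta))))   # 1.04 > 1
```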
#### Method 2: Crank-Nicolson Scheme

$$\frac{\phi_j^{n+1} - \phi_j^n}{\Delta t} = \frac{\alpha}{2}\left(\frac{\phi_{j+1}^{n+1} - 2\phi_j^{n+1} + \phi_{j-1}^{n+1}}{\Delta x^2} + \frac{\phi_{j+1}^n - 2\phi_j^n + \phi_{j-1}^n}{\Delta x^2}\right)$$ Substitute $\phi_j^n = G^n e^{ikx_j}$ and $\phi_j^{n+1} = G^{n+1} e^{ikx_j}$: $G - 1 = \frac{\alpha\Delta t}{2\Delta x^2}\left[G(e^{ik\Delta x} - 2 + e^{-ik\Delta x}) + (e^{ik\Delta x} - 2 + e^{-ik\Delta x})\right]$ $G - 1 = \frac{r}{2}\left[G(2\cos(k\Delta x) - 2) + (2\cos(k\Delta x) - 2)\right]$ $G - 1 = \frac{r}{2}\left[-4\sin^2\left(\frac{k\Delta x}{2}\right)G - 4\sin^2\left(\frac{k\Delta x}{2}\right)\right]$ $G - 1 = -2r\sin^2\left(\frac{k\Delta x}{2}\right)(G+1)$ Let $A = 2r\sin^2\left(\frac{k\Delta x}{2}\right)$. Then $G - 1 = -A(G+1)$, so $G(1+A) = 1 - A$ and $$G = \frac{1-A}{1+A}$$ For stability, $|G| \le 1$. Since $A = 2r\sin^2\left(\frac{k\Delta x}{2}\right) \ge 0$, we have $|1-A| \le 1+A$, so $\left|\frac{1-A}{1+A}\right| \le 1$ always holds. **This scheme is unconditionally stable.** There is no maximum $\Delta t$ for stability.

### 2D Steady Advection-Diffusion Equation (FV Method)

Consider the steady two-dimensional advection-diffusion equation: $$\nabla \cdot (\vec{u}\phi) = \Gamma\nabla^2\phi$$ with velocity field $\vec{u}(x,y) = u(x,y)\vec{i} + v(x,y)\vec{j}$, discretized over a uniform grid of cell width $\Delta x = L/N$.

#### Diffusion Term Discretization

For interior nodes, the diffusion term $\int_{\Omega_{FV}} \Gamma\nabla^2\phi\, dV$ is discretized to second order by applying Gauss's divergence theorem: $\int_{\Omega_{FV}} \nabla \cdot (\nabla\phi)\, dV = \oint_{\partial \Omega_{FV}} (\nabla\phi) \cdot \hat{n}\, dS$. For a control volume (cell P) with faces e, w, n, s, the diffusive flux across each face is approximated with central differences; the east-face contribution to $\oint \Gamma\nabla\phi \cdot \hat{n}\, dS$ is $\Gamma \frac{\phi_E - \phi_P}{\Delta x}\Delta y$ (the physical diffusive flux vector is $-\Gamma\nabla\phi$; the sign here comes from the outward normal on the east face).
The total diffusion term is: $$A_P\phi_P + A_E\phi_E + A_W\phi_W + A_N\phi_N + A_S\phi_S$$ where $A_P = -2\Gamma\left(\frac{\Delta y}{\Delta x} + \frac{\Delta x}{\Delta y}\right)$, $A_E = \Gamma\frac{\Delta y}{\Delta x}$, $A_W = \Gamma\frac{\Delta y}{\Delta x}$, $A_N = \Gamma\frac{\Delta x}{\Delta y}$, $A_S = \Gamma\frac{\Delta x}{\Delta y}$. Assuming $\Delta x = \Delta y$: $$A_P = -4\Gamma, \quad A_E = A_W = A_N = A_S = \Gamma$$

#### QUICK Upwind Scheme for Convective Flux (Boundary Face)

Consider a face (e.g., the west face 'w') where node P is adjacent to the boundary $x=0$, with $u_w > 0$ (flow into the control volume through the boundary). The convective surface integral is $\int_{S_w} (\rho\vec{u}\phi) \cdot \hat{n}\, dS$; since $\hat{n}$ points outward, $\hat{n} = -\vec{i}$ on the west face and the outward flux is $-\rho u_w \phi_w \Delta y$ (an inflow for $u_w > 0$). For a second-order accurate treatment, introduce a ghost node $W_g$ outside the boundary at $x = -\Delta x$. With $u_w > 0$ the QUICK stencil for the face value $\phi_w$ uses upwind information; taking the boundary node $W$ at $x = 0$ together with the ghost node, a QUICK-type approximation is: $$\phi_w = \frac{6}{8}\phi_W + \frac{3}{8}\phi_P - \frac{1}{8}\phi_{W_g}$$ Here $\phi_W$ is the boundary value $\phi(0)$, $\phi_P$ is the first interior node, and $\phi_{W_g}$ is a ghost value fixed from the boundary condition (which relates $\phi_{W_g}$ to $\phi_W$ and $\phi_P$). The convective flux across the west face is then $\dot{m}_w \phi_w$.
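The uniform-grid QUICK weights used above ($6/8$, $3/8$, $-1/8$) can be recovered by fitting a quadratic through the two upwind nodes and one downwind node and evaluating it at the face; a sketch assuming unit spacing:

```python
import numpy as np

h = 1.0                            # assumed unit node spacing
xs = np.array([-h, 0.0, h])        # x_W, x_P, x_E on a uniform grid
x_face = h / 2                     # face e midway between P and E

# Lagrange basis values at the face: fit a quadratic through each unit
# basis vector (1 at one node, 0 at the others) and evaluate at x_face.
weights = [np.polyval(np.polyfit(xs, e, 2), x_face) for e in np.eye(3)]
w_W, w_P, w_E = weights
print(w_P, w_E, w_W)               # 0.75, 0.375, -0.125 = 6/8, 3/8, -1/8
```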
#### Ghost-Cell Method for Boundary Conditions

The advection-diffusion equation is discretized as $A_E\phi_E + A_W\phi_W + A_N\phi_N + A_S\phi_S + A_P\phi_P = Q_P$. Suppose node P is adjacent to $x=L$, where the Neumann boundary condition $\frac{\partial\phi}{\partial x}\big|_{x=L} = C$ applies. Introduce a ghost node $E_g$ at $x = L + \Delta x$. The central difference approximation for the derivative at $x=L$ (face 'e') is: $$\frac{\phi_{E_g} - \phi_P}{2\Delta x} = C \implies \phi_{E_g} = \phi_P + 2C\Delta x$$ For the cell P adjacent to the boundary, the neighbor term $A_E\phi_E$ becomes $A_E\phi_{E_g}$. Substituting, $A_E\phi_{E_g} = A_E\phi_P + 2A_E C\Delta x$: the diagonal coefficient becomes $A_P + A_E$, the known term $2A_E C\Delta x$ moves to the right-hand side so that $Q_P$ becomes $Q_P - 2A_E C\Delta x$, and the coefficient $A_E$ is removed from the equation.

### Lagrange Interpolation

A function $f(x)$ is discretized on a stretched grid $x_0=0$, $x_{i+1} = x_i + \beta(x_i - x_{i-1})$. Given $f(x_0)=0, f(x_1)=1, f(x_2)=0, f(x_3)=1$, we want the approximation to $f''(x_0)$ from the third-order (cubic) interpolant, using Lagrange polynomials or any other technique. Writing $\Delta x \equiv x_1 - x_0$, the grid points are:
$x_0 = 0$
$x_1 = \Delta x$
$x_2 = x_1 + \beta(x_1 - x_0) = (1+\beta)\Delta x$
$x_3 = x_2 + \beta(x_2 - x_1) = (1+\beta)\Delta x + \beta^2\Delta x = (1+\beta+\beta^2)\Delta x$
The second derivative at $x_0$ is approximated from polynomial interpolation; a cubic through the four points $x_0, x_1, x_2, x_3$ is the minimum stencil.
Let $P_3(x)$ be the cubic Lagrange interpolating polynomial. $P_3(x) = f(x_0)L_0(x) + f(x_1)L_1(x) + f(x_2)L_2(x) + f(x_3)L_3(x)$ Given $f(x_0)=0, f(x_1)=1, f(x_2)=0, f(x_3)=1$: $P_3(x) = L_1(x) + L_3(x)$ We need $P_3''(x_0)$. $L_j(x) = \prod_{k \ne j} \frac{x-x_k}{x_j-x_k}$ $L_1(x) = \frac{(x-x_0)(x-x_2)(x-x_3)}{(x_1-x_0)(x_1-x_2)(x_1-x_3)}$ $L_3(x) = \frac{(x-x_0)(x-x_1)(x-x_2)}{(x_3-x_0)(x_3-x_1)(x_3-x_2)}$ Calculating $L_j''(x_0)$ is tedious. A more straightforward method is using finite difference formulas from Taylor series expansions, or using divided differences. For a general set of points $x_0, x_1, x_2, x_3$, the formula for $f''(x_0)$ can be derived. A common method is to use Newton's forward difference formula, or directly derive a finite difference stencil. For $f''(x_0)$ using $x_0, x_1, x_2, x_3$: $f''(x_0) \approx A f(x_0) + B f(x_1) + C f(x_2) + D f(x_3)$ This is a standard problem for deriving finite difference coefficients. Let's use the method of undetermined coefficients for $f''(x_0)$: $f''(x_0) = Af(x_0) + Bf(x_1) + Cf(x_2) + Df(x_3)$ Expand each $f(x_i)$ around $x_0$ using Taylor series: $f(x_i) = f(x_0) + (x_i-x_0)f'(x_0) + \frac{(x_i-x_0)^2}{2}f''(x_0) + \frac{(x_i-x_0)^3}{6}f'''(x_0) + O((x_i-x_0)^4)$ Substitute these into the equation and equate coefficients of $f(x_0), f'(x_0), f''(x_0), f'''(x_0)$ to solve for A, B, C, D. Coefficients: $f(x_0): A+B+C+D = 0$ $f'(x_0): B(x_1-x_0) + C(x_2-x_0) + D(x_3-x_0) = 0$ $f''(x_0): B\frac{(x_1-x_0)^2}{2} + C\frac{(x_2-x_0)^2}{2} + D\frac{(x_3-x_0)^2}{2} = 1$ $f'''(x_0): B\frac{(x_1-x_0)^3}{6} + C\frac{(x_2-x_0)^3}{6} + D\frac{(x_3-x_0)^3}{6} = 0$ Let $h_1 = x_1-x_0 = \Delta x$ $h_2 = x_2-x_0 = (1+\beta)\Delta x$ $h_3 = x_3-x_0 = (1+\beta+\beta^2)\Delta x$ The system of equations is: 1. $A+B+C+D = 0$ 2. $B h_1 + C h_2 + D h_3 = 0$ 3. $B \frac{h_1^2}{2} + C \frac{h_2^2}{2} + D \frac{h_3^2}{2} = 1$ 4. 
$B \frac{h_1^3}{6} + C \frac{h_2^3}{6} + D \frac{h_3^3}{6} = 0$

Given $f(x_0)=0, f(x_1)=1, f(x_2)=0, f(x_3)=1$, we have $f''(x_0) \approx B f(x_1) + D f(x_3) = B + D$, so only $B$ and $D$ are needed. From (2), (3), (4):
$B h_1 + C h_2 + D h_3 = 0$
$B h_1^2 + C h_2^2 + D h_3^2 = 2$
$B h_1^3 + C h_2^3 + D h_3^3 = 0$
with $h_1=\Delta x$, $h_2=(1+\beta)\Delta x$, $h_3=(1+\beta+\beta^2)\Delta x$. Rather than solving this linear system directly, Newton's divided-difference form of the interpolant gives the result more quickly. Writing $P_3(x) = f[x_0] + f[x_0,x_1](x-x_0) + f[x_0,x_1,x_2](x-x_0)(x-x_1) + f[x_0,x_1,x_2,x_3](x-x_0)(x-x_1)(x-x_2)$, differentiating twice, and evaluating at $x=x_0$: $$f''(x_0) \approx P_3''(x_0) = 2f[x_0,x_1,x_2] + 2\big[(x_0-x_1)+(x_0-x_2)\big]f[x_0,x_1,x_2,x_3] = 2f[x_0,x_1,x_2] - 2(h_1+h_2)f[x_0,x_1,x_2,x_3]$$ The divided differences:
$f[x_0,x_1] = \frac{f(x_1)-f(x_0)}{x_1-x_0} = \frac{1-0}{\Delta x} = \frac{1}{\Delta x}$
$f[x_1,x_2] = \frac{f(x_2)-f(x_1)}{x_2-x_1} = \frac{0-1}{\beta\Delta x} = -\frac{1}{\beta\Delta x}$
$f[x_0,x_1,x_2] = \frac{f[x_1,x_2] - f[x_0,x_1]}{x_2-x_0} = \frac{-1/(\beta\Delta x) - 1/\Delta x}{(1+\beta)\Delta x} = \frac{-(1+\beta)}{\beta(1+\beta)\Delta x^2} = -\frac{1}{\beta\Delta x^2}$
$f[x_2,x_3] = \frac{f(x_3)-f(x_2)}{x_3-x_2} = \frac{1-0}{\beta^2\Delta x} = \frac{1}{\beta^2\Delta x}$
$f[x_1,x_2,x_3] = \frac{f[x_2,x_3] - f[x_1,x_2]}{x_3-x_1} = \frac{1/(\beta^2\Delta x) + 1/(\beta\Delta x)}{(\beta+\beta^2)\Delta x} = \frac{(1+\beta)/(\beta^2\Delta x)}{\beta(1+\beta)\Delta x} = \frac{1}{\beta^3\Delta x^2}$
$f[x_0,x_1,x_2,x_3] = \frac{f[x_1,x_2,x_3] - f[x_0,x_1,x_2]}{x_3-x_0} = \frac{1/(\beta^3\Delta x^2) + 1/(\beta\Delta x^2)}{(1+\beta+\beta^2)\Delta x} = \frac{1+\beta^2}{\beta^3(1+\beta+\beta^2)\Delta x^3}$
With $h_1 + h_2 = (2+\beta)\Delta x$: $$f''(x_0) = 2\left(-\frac{1}{\beta\Delta x^2}\right) - 2(2+\beta)\Delta x \cdot \frac{1+\beta^2}{\beta^3(1+\beta+\beta^2)\Delta x^3}$$ $$f''(x_0) = -\frac{2}{\beta\Delta x^2} - \frac{2(2+\beta)(1+\beta^2)}{\beta^3(1+\beta+\beta^2)\Delta x^2} = -\frac{2(\beta^4 + 2\beta^3 + 3\beta^2 + \beta + 2)}{\beta^3(1+\beta+\beta^2)\Delta x^2}$$ As a check, $\beta = 1$ (uniform grid) gives $f''(x_0) = -\frac{6}{\Delta x^2}$, which matches the standard one-sided stencil $\frac{2f_0 - 5f_1 + 4f_2 - f_3}{\Delta x^2}$ applied to the data $(0,1,0,1)$. This is the approximation to $f''(x_0)$ from the cubic (third-order) interpolant.

### Compressible vs. Incompressible Flow Pressure Calculation

The fundamental difference in calculating the pressure field for compressible versus incompressible flows lies in their governing equations and how density variations are treated.

#### Incompressible Flow

- **Assumption:** Density ($\rho$) is constant ($\rho = \text{const}$).
- **Continuity Equation:** $\nabla \cdot \vec{u} = 0$. This equation does not directly contain pressure.
- **Momentum Equation:** $\rho\left(\frac{\partial \vec{u}}{\partial t} + (\vec{u}\cdot\nabla)\vec{u}\right) = -\nabla P + \mu\nabla^2\vec{u}$.
- **Pressure Calculation:** Since the continuity equation defines a solenoidal velocity field but does not explicitly involve pressure, pressure in incompressible flow is determined by solving a **Poisson equation**. Taking the divergence of the momentum equation and using the continuity equation gives: $$\nabla^2 P = -\rho\nabla \cdot ((\vec{u}\cdot\nabla)\vec{u}) + \mu\nabla^2(\nabla \cdot \vec{u}) = -\rho\nabla \cdot ((\vec{u}\cdot\nabla)\vec{u})$$ This Poisson equation for pressure ensures that the velocity field obtained from the momentum equations satisfies the incompressibility constraint ($\nabla \cdot \vec{u} = 0$). The pressure acts as a Lagrange multiplier to enforce this constraint.

#### Compressible Flow

- **Assumption:** Density ($\rho$) is variable and depends on pressure and temperature, typically through an **equation of state** (e.g., Ideal Gas Law: $P = \rho RT$).
- **Continuity Equation:** $\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\vec{u}) = 0$.
This equation explicitly contains density, which is linked to pressure.
- **Momentum Equation:** $\frac{\partial (\rho\vec{u})}{\partial t} + \nabla \cdot (\rho\vec{u}\vec{u}) = -\nabla P + \nabla \cdot \tau$, where $\tau$ is the viscous stress tensor.
- **Energy Equation:** An additional energy equation is required to close the system, relating temperature to the other thermodynamic variables.
- **Pressure Calculation:** In compressible flow, pressure is a primary thermodynamic variable obtained directly from the **equation of state** once density and temperature (or internal energy) are determined from the continuity and energy equations. There is no separate Poisson equation for pressure; instead, pressure evolves dynamically as part of the coupled system of conservation equations. The pressure gradient term in the momentum equation drives the flow, and changes in pressure directly produce changes in density (and vice versa) via the equation of state.

In summary, for incompressible flow, pressure is a dynamic variable that acts to enforce the divergence-free velocity constraint via a Poisson equation. For compressible flow, pressure is a thermodynamic variable directly linked to density and temperature through an equation of state, and the conservation equations (mass, momentum, energy) are solved simultaneously for all flow variables, including pressure.
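Returning to the stretched-grid second-derivative stencil from the Lagrange Interpolation section: the undetermined-coefficients (moment) system and a direct cubic fit can be cross-checked numerically; a sketch with assumed illustrative values $\beta = 2$, $\Delta x = 1$:

```python
import numpy as np

beta, dx = 2.0, 1.0                    # assumed illustrative values
x = np.array([0.0, dx, (1 + beta) * dx, (1 + beta + beta**2) * dx])
f = np.array([0.0, 1.0, 0.0, 1.0])

# Route 1: exact cubic interpolant through the four points; its second
# derivative at x0 is the 4-point finite-difference value.
p = np.polyfit(x, f, 3)
fpp_fit = np.polyval(np.polyder(p, 2), x[0])

# Route 2: solve the moment system for the stencil weights (A, B, C, D)
# and apply the stencil to the data.
h = x - x[0]
V = np.vstack([h**0, h, h**2 / 2, h**3 / 6])   # moment conditions
coef = np.linalg.solve(V, np.array([0.0, 0.0, 1.0, 0.0]))
fpp_stencil = coef @ f

print(fpp_fit, fpp_stencil)            # agree (~ -12/7 for beta = 2)
```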