Module 1: First-Order ODEs

Key Definitions

Differential Equation: An equation involving a dependent variable and its derivatives with respect to one or more independent variables.
Ordinary Differential Equation (ODE): A differential equation with only one independent variable.
Order of ODE: The order of the highest derivative present.
Solution of ODE: A function $y = f(x)$ that, when substituted into the ODE, reduces it to an identity.
General Solution: A solution containing arbitrary constants, equal in number to the order of the equation.
Particular Solution: A solution obtained by applying specific initial conditions to the general solution.
Separable Equation: An ODE that can be written in the form $dy/dx = f(x)g(y)$.
Homogeneous Function (of degree $n$): A function $f(x,y)$ such that $f(tx,ty) = t^n f(x,y)$.
Homogeneous Equation: An ODE $M(x,y)dx + N(x,y)dy = 0$ where $M$ and $N$ are homogeneous functions of the same degree.
Exact Differential: An expression $M(x,y)dx + N(x,y)dy$ for which there exists a function $f(x,y)$ such that $\partial f/\partial x = M$ and $\partial f/\partial y = N$.
Exact Equation: An ODE $M(x,y)dx + N(x,y)dy = 0$ where $Mdx + Ndy$ is an exact differential.
Integrating Factor: A function $\mu(x,y)$ that transforms a non-exact ODE into an exact one.
Linear Equation (First Order): An ODE of the form $dy/dx + P(x)y = Q(x)$.

Core Formulas & Methodologies

Separable Equations:
1. Rewrite as $dy/g(y) = f(x)dx$.
2. Integrate both sides: $\int \frac{dy}{g(y)} = \int f(x)dx + C$.

Homogeneous Equations ($dy/dx = f(x,y)$ where $f$ is homogeneous of degree 0):
1. Substitute $y = zx$, so $dy/dx = z + x(dz/dx)$.
2. The equation becomes $z + x(dz/dx) = f(1,z)$, which is separable.
3. Solve the separable equation for $z$.
4. Substitute back $z = y/x$ to get the solution in terms of $x$ and $y$.

Exact Equations:
1. Check for exactness: $\partial M/\partial y = \partial N/\partial x$.
2. If exact, integrate $M$ with respect to $x$: $f(x,y) = \int M(x,y)dx + g(y)$.
3. Differentiate $f(x,y)$ with respect to $y$ and set it equal to $N$: $\partial f/\partial y = \frac{\partial}{\partial y}\left(\int M(x,y)dx\right) + g'(y) = N(x,y)$.
4. Solve for $g'(y)$ and integrate to find $g(y)$.
5. The general solution is $f(x,y) = C$.

Integrating Factors:
If $\frac{1}{N}\left(\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}\right) = G(x)$ (a function of $x$ only), then $\mu(x) = e^{\int G(x)dx}$.
If $\frac{1}{M}\left(\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}\right) = H(y)$ (a function of $y$ only), then $\mu(y) = e^{\int H(y)dy}$.
Multiply the ODE by $\mu$ and solve the resulting exact equation.

Linear Equations ($dy/dx + P(x)y = Q(x)$) (worked example below):
1. Calculate the integrating factor $\mu(x) = e^{\int P(x)dx}$.
2. Multiply the entire equation by $\mu(x)$: $\mu(x)\frac{dy}{dx} + \mu(x)P(x)y = \mu(x)Q(x)$.
3. The left side is now $\frac{d}{dx}(\mu(x)y)$, so $\frac{d}{dx}(\mu(x)y) = \mu(x)Q(x)$.
4. Integrate both sides: $\mu(x)y = \int \mu(x)Q(x)dx + C$.
5. Solve for $y$: $y = \frac{1}{\mu(x)}\left(\int \mu(x)Q(x)dx + C\right)$.

Reduction of Order (for 2nd-order ODEs missing $y$ or $x$ explicitly):
Case 1: $y$ is missing ($F(x, y', y'') = 0$):
1. Substitute $p = y'$ and $p' = y''$. The equation becomes $F(x, p, p') = 0$, a first-order ODE in $p$.
2. Solve for $p(x)$.
3. Integrate $p(x)$ to find $y(x)$: $y = \int p(x)dx + C$.
Case 2: $x$ is missing ($G(y, y', y'') = 0$):
1. Substitute $p = y'$ and $y'' = p(dp/dy)$. The equation becomes $G(y, p, p\,dp/dy) = 0$, a first-order ODE in $p$ and $y$.
2. Solve for $p(y)$.
3. Integrate $p(y)$ to find $y(x)$: $\int \frac{dy}{p(y)} = \int dx + C$.
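Worked example (linear equation): for $y' + \frac{1}{x}y = x$ with $x > 0$, the integrating factor is $\mu(x) = e^{\int dx/x} = x$, so the equation becomes $\frac{d}{dx}(xy) = x^2$. Integrating gives $xy = \frac{x^3}{3} + C$, hence $y = \frac{x^2}{3} + \frac{C}{x}$.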
Important Theorems

Existence & Uniqueness (First Order): If $f(x,y)$ and $\partial f/\partial y$ are continuous on a rectangle $R$, then through each point $(x_0, y_0)$ in the interior of $R$ passes a unique integral curve of $dy/dx = f(x,y)$.

Watch Out!

For homogeneous equations, ensure $M$ and $N$ are of the *same* degree.
When using integrating factors, correctly identify whether the factor depends on $x$ only or $y$ only.
Reduction of order for second-order ODEs works only if $x$ or $y$ is *explicitly* missing.

Module 2: Second-Order Linear ODEs

Key Definitions

Linear Equation (Second Order): An ODE of the form $y'' + P(x)y' + Q(x)y = R(x)$.
Homogeneous Equation: $y'' + P(x)y' + Q(x)y = 0$.
Nonhomogeneous Equation: $y'' + P(x)y' + Q(x)y = R(x)$ where $R(x) \not\equiv 0$.
Trivial Solution: $y(x) = 0$ for all $x$.
Linear Combination: $c_1 y_1(x) + c_2 y_2(x)$.
Linearly Dependent Solutions: Two solutions $y_1, y_2$ where one is a constant multiple of the other.
Linearly Independent Solutions: Two solutions $y_1, y_2$ where neither is a constant multiple of the other.
Wronskian: $W(y_1, y_2)(x) = y_1 y_2' - y_2 y_1'$.
Auxiliary Equation: For $y'' + py' + qy = 0$, it is $m^2 + pm + q = 0$.

Core Formulas & Methodologies

General Solution Structure: For a nonhomogeneous equation $y'' + P(x)y' + Q(x)y = R(x)$, the general solution is $y(x) = y_g(x) + y_p(x)$, where $y_g$ is the general solution of the associated homogeneous equation and $y_p$ is any particular solution of the nonhomogeneous equation.

Homogeneous Equations with Constant Coefficients ($y'' + py' + qy = 0$):
1. Form the auxiliary equation: $m^2 + pm + q = 0$.
2. Find the roots $m_1, m_2$.
3. Determine $y_g(x)$ based on root type:
   Distinct Real Roots ($m_1 \neq m_2$): $y_g(x) = c_1 e^{m_1 x} + c_2 e^{m_2 x}$.
   Distinct Complex Roots ($m_1, m_2 = a \pm ib$): $y_g(x) = e^{ax}(c_1 \cos(bx) + c_2 \sin(bx))$.
   Equal Real Roots ($m_1 = m_2 = m$): $y_g(x) = c_1 e^{mx} + c_2 x e^{mx}$.

Method of Undetermined Coefficients (for $y'' + py' + qy = R(x)$) (see the worked example below):
1. Find $y_g(x)$ (as above).
2. Guess a form for $y_p(x)$ based on $R(x)$:
   $R(x) = A e^{\alpha x}$: try $y_p = C e^{\alpha x}$; multiply by $x$ if $\alpha$ is a simple root of the auxiliary equation, or by $x^2$ if $\alpha$ is a double root.
   $R(x) = A \cos(\beta x) + B \sin(\beta x)$: try $y_p = C \cos(\beta x) + D \sin(\beta x)$; multiply by $x$ if $\pm i\beta$ are roots of the auxiliary equation.
   $R(x) = P_n(x)$, a polynomial of degree $n$: try $y_p = Q_n(x)$, a general polynomial of degree $n$; multiply by $x$ if $0$ is a root of the auxiliary equation, or by $x^2$ if $0$ is a double root.
3. Substitute $y_p(x)$ and its derivatives into the nonhomogeneous ODE.
4. Equate coefficients to find the undetermined constants.

Method of Variation of Parameters (for $y'' + P(x)y' + Q(x)y = R(x)$):
1. Find two linearly independent solutions $y_1(x), y_2(x)$ of the homogeneous equation $y'' + P(x)y' + Q(x)y = 0$.
2. Calculate the Wronskian $W(y_1, y_2)(x) = y_1 y_2' - y_2 y_1'$.
3. The particular solution is $y_p(x) = -y_1(x) \int \frac{y_2(x)R(x)}{W(y_1, y_2)(x)}dx + y_2(x) \int \frac{y_1(x)R(x)}{W(y_1, y_2)(x)}dx$.
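Worked example (undetermined coefficients): solve $y'' - 3y' + 2y = e^x$. The auxiliary equation $m^2 - 3m + 2 = 0$ has roots $m = 1, 2$, so $y_g = c_1 e^x + c_2 e^{2x}$. Since $\alpha = 1$ is a simple root, try $y_p = Cxe^x$. Substituting $y_p' = C(x+1)e^x$ and $y_p'' = C(x+2)e^x$ gives $C(x+2)e^x - 3C(x+1)e^x + 2Cxe^x = -Ce^x = e^x$, so $C = -1$ and $y = c_1 e^x + c_2 e^{2x} - xe^x$.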
Important Theorems

Existence & Uniqueness (Second Order Linear): If $P(x), Q(x), R(x)$ are continuous on $[a,b]$, then for any $x_0 \in [a,b]$ and any numbers $y_0, y_0'$, the equation $y'' + P(x)y' + Q(x)y = R(x)$ has a unique solution $y(x)$ on $[a,b]$ satisfying $y(x_0)=y_0$ and $y'(x_0)=y_0'$.
Principle of Superposition (Homogeneous): If $y_1, y_2$ are solutions of a homogeneous linear ODE, then $c_1 y_1 + c_2 y_2$ is also a solution.
Wronskian Test for Linear Independence: Two solutions $y_1, y_2$ of $y'' + P(x)y' + Q(x)y = 0$ are linearly independent iff $W(y_1, y_2)(x) \neq 0$ for all $x$ in the interval. If $W(y_1, y_2)(x_0) = 0$ for some $x_0$, then $W(y_1, y_2)(x) = 0$ for all $x$.
Abel's Formula: For solutions $y_1, y_2$ of $y'' + P(x)y' + Q(x)y = 0$, $W(y_1, y_2)(x) = C e^{-\int P(x)dx}$.

Watch Out!

For undetermined coefficients, remember to modify the trial solution if it overlaps with the homogeneous solution.
Variation of parameters is more general (it works for variable coefficients) but often involves harder integrals than undetermined coefficients.
Ensure the ODE is in standard form ($y'' + P(x)y' + Q(x)y = R(x)$) before applying variation of parameters.

Module 3: Systems of ODEs

Key Definitions

System of First-Order ODEs: A set of $n$ equations in which the derivatives of $n$ unknown functions are expressed in terms of the independent variable and the functions themselves.
Homogeneous Linear System: A system of linear ODEs where all constant terms are zero.
Nonhomogeneous Linear System: A system of linear ODEs with at least one non-zero constant term.
Trivial Solution (for systems): A solution where all unknown functions are identically zero.
Wronskian (for systems): For two solutions $\begin{pmatrix} x_1(t) \\ y_1(t) \end{pmatrix}$ and $\begin{pmatrix} x_2(t) \\ y_2(t) \end{pmatrix}$ of a $2 \times 2$ system, $W(t) = x_1(t)y_2(t) - x_2(t)y_1(t)$.
Linearly Dependent Solutions (for systems): Two solutions are linearly dependent if one is a constant multiple of the other.
Auxiliary Equation (for constant-coefficient systems): A polynomial equation whose roots determine the exponential terms in the solutions.

Core Formulas & Methodologies

Reduction of a Higher-Order ODE to a System: For an $n$-th order ODE $y^{(n)} = f(x, y, y', \dots, y^{(n-1)})$, define new variables $y_1 = y$, $y_2 = y'$, $\dots$, $y_n = y^{(n-1)}$. The system becomes:
$y_1' = y_2$
$y_2' = y_3$
$\vdots$
$y_n' = f(x, y_1, y_2, \dots, y_n)$.

Homogeneous Linear Systems with Constant Coefficients ($dx/dt = a_1 x + b_1 y$, $dy/dt = a_2 x + b_2 y$):
1. Assume solutions of the form $x = A e^{mt}$ and $y = B e^{mt}$.
2. Substitute into the system and simplify to get a linear algebraic system for $A$ and $B$:
$(a_1 - m)A + b_1 B = 0$
$a_2 A + (b_2 - m)B = 0$
3. Form the auxiliary (characteristic) equation by setting the determinant of coefficients to zero: $(a_1 - m)(b_2 - m) - b_1 a_2 = 0 \implies m^2 - (a_1 + b_2)m + (a_1 b_2 - a_2 b_1) = 0$.
4. Find the roots $m_1, m_2$. For each root, substitute back into step 2 to find the corresponding $(A,B)$ pairs (eigenvectors).
5. Construct the general solution based on root types (worked example below):
   Distinct Real Roots ($m_1 \neq m_2$):
   $x(t) = c_1 A_1 e^{m_1 t} + c_2 A_2 e^{m_2 t}$
   $y(t) = c_1 B_1 e^{m_1 t} + c_2 B_2 e^{m_2 t}$
   Distinct Complex Roots ($m_1, m_2 = a \pm ib$): If $(A^*, B^*)$ is a complex solution for $m_1 = a+ib$, then real solutions are obtained from $\text{Re}(A^* e^{(a+ib)t})$ and $\text{Im}(A^* e^{(a+ib)t})$:
   $x(t) = e^{at}(c_1 (A_R \cos(bt) - A_I \sin(bt)) + c_2 (A_I \cos(bt) + A_R \sin(bt)))$
   $y(t) = e^{at}(c_1 (B_R \cos(bt) - B_I \sin(bt)) + c_2 (B_I \cos(bt) + B_R \sin(bt)))$
   (where $A^* = A_R + iA_I$ and $B^* = B_R + iB_I$).
   Equal Real Roots ($m_1 = m_2 = m$): One solution is $x_1 = A e^{mt}$, $y_1 = B e^{mt}$. A second linearly independent solution is found by substituting $x_2 = (A_1 + A_2 t)e^{mt}$, $y_2 = (B_1 + B_2 t)e^{mt}$ into the original system and solving for $A_1, A_2, B_1, B_2$.
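Worked example (distinct real roots): for $dx/dt = x + y$, $dy/dt = 4x + y$, the auxiliary equation is $m^2 - 2m - 3 = 0$ with roots $m_1 = 3$, $m_2 = -1$. For $m_1 = 3$, $(1-3)A + B = 0$ gives $(A_1, B_1) = (1, 2)$; for $m_2 = -1$, $(1+1)A + B = 0$ gives $(A_2, B_2) = (1, -2)$. Hence $x(t) = c_1 e^{3t} + c_2 e^{-t}$ and $y(t) = 2c_1 e^{3t} - 2c_2 e^{-t}$.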
Important Theorems

Existence & Uniqueness (Systems): For a system $\mathbf{y}' = \mathbf{F}(x, \mathbf{y})$, if $\mathbf{F}$ and its partial derivatives with respect to $\mathbf{y}$ are continuous in a region $R$, then for any initial point $(x_0, \mathbf{y}_0) \in R$ there exists a unique solution.
Principle of Superposition (Systems): If $\mathbf{y}_1$ and $\mathbf{y}_2$ are solutions of a homogeneous linear system $\mathbf{y}' = A(t)\mathbf{y}$, then $c_1 \mathbf{y}_1 + c_2 \mathbf{y}_2$ is also a solution.
Wronskian Test for Linear Independence (Systems): For a system of two homogeneous linear ODEs, two solutions are linearly independent iff their Wronskian $W(t)$ is non-zero; $W(t)$ is either identically zero or never zero.
General Solution (Nonhomogeneous Systems): If $\mathbf{y}_g$ is the general solution of the associated homogeneous system and $\mathbf{y}_p$ is a particular solution of the nonhomogeneous system, then $\mathbf{y} = \mathbf{y}_g + \mathbf{y}_p$ is the general solution of the nonhomogeneous system.

Watch Out!

For constant-coefficient systems, calculate the roots of the auxiliary equation carefully, as they dictate the form of the solution.
If complex roots occur, convert the complex-valued solutions into real-valued solutions using Euler's formula.
When equal roots occur, the second independent solution requires a multiplier of $t$.

Module 4: Series Solutions

Key Definitions

Elementary Functions: Algebraic functions and elementary transcendental functions (trigonometric, exponential, logarithmic, etc.) and combinations thereof.
Higher Transcendental/Special Functions: Functions beyond the elementary ones, often arising as ODE solutions (e.g., Bessel, Legendre).
Power Series: An infinite series of the form $\sum_{n=0}^\infty a_n (x-x_0)^n$.
Radius of Convergence ($R$): The value for which a power series converges for $|x-x_0| < R$ and diverges for $|x-x_0| > R$.
Analytic Function: A function that can be represented by a convergent power series in a neighborhood of a point.
Ordinary Point: A point $x_0$ for $y'' + P(x)y' + Q(x)y = 0$ at which $P(x)$ and $Q(x)$ are analytic.
Singular Point: A point that is not an ordinary point.
Regular Singular Point: A singular point $x_0$ where $(x-x_0)P(x)$ and $(x-x_0)^2 Q(x)$ are analytic.
Irregular Singular Point: A singular point that is not regular.
Frobenius Series: A series of the form $x^m \sum_{n=0}^\infty a_n x^n = \sum_{n=0}^\infty a_n x^{n+m}$, where $a_0 \neq 0$.
Indicial Equation: The quadratic equation for $m$ obtained when substituting a Frobenius series into an ODE at a regular singular point. Its roots are the exponents of the ODE.
Sturm Separation Theorem: Concerns the alternating zeros of linearly independent solutions of $y'' + P(x)y' + Q(x)y = 0$.
Sturm Comparison Theorem: Compares oscillation rates of solutions of $y'' + q(x)y = 0$ based on $q(x)$.

Core Formulas & Methodologies

Power Series Solutions at Ordinary Points ($y'' + P(x)y' + Q(x)y = 0$):
1. Assume $y = \sum_{n=0}^\infty a_n x^n$.
2. Substitute $y, y', y''$ into the ODE.
3. Combine terms and equate the coefficient of each power of $x$ to zero to obtain a recurrence relation for $a_n$.
4. Express $a_n$ in terms of $a_0$ and $a_1$.
5. Write $y = a_0 y_1(x) + a_1 y_2(x)$, where $y_1$ and $y_2$ are linearly independent series solutions.
The radius of convergence $R$ is at least as large as the minimum of the radii of convergence of $P(x)$ and $Q(x)$.
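Worked example (ordinary point): for $y'' + y = 0$, substituting $y = \sum a_n x^n$ gives $\sum_{n=0}^\infty [(n+2)(n+1)a_{n+2} + a_n]x^n = 0$, so $a_{n+2} = -\frac{a_n}{(n+2)(n+1)}$. The even coefficients build $y_1 = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots = \cos x$ from $a_0$, and the odd coefficients build $y_2 = x - \frac{x^3}{3!} + \cdots = \sin x$ from $a_1$, recovering $y = a_0 \cos x + a_1 \sin x$.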
Frobenius Method (at Regular Singular Points, $x_0 = 0$):
1. Assume $y = \sum_{n=0}^\infty a_n x^{n+m}$.
2. Substitute $y, y', y''$ into the ODE.
3. Extract the indicial equation (the coefficient of the lowest power of $x$) and solve for $m_1, m_2$.
4. Use the recurrence relation (from equating the other coefficients to zero) for each root of the indicial equation.
   Case 1: $m_1 - m_2$ is not an integer: two linearly independent Frobenius series solutions $y_1, y_2$ exist.
   Case 2: $m_1 = m_2$: one Frobenius series solution $y_1$. The second solution is $y_2 = y_1 \ln|x| + \sum_{n=1}^\infty b_n x^{n+m}$.
   Case 3: $m_1 - m_2$ is a positive integer: if the recurrence relation allows, two Frobenius series solutions may exist. Otherwise one Frobenius series $y_1$ exists, and the second solution is $y_2 = k y_1 \ln|x| + \sum_{n=0}^\infty b_n x^{n+m_2}$ (where $k$ can be 0).

Important Theorems

Picard's Theorem (Existence & Uniqueness for Series Solutions): If $P(x)$ and $Q(x)$ are analytic at $x_0$, then a unique analytic solution of $y'' + P(x)y' + Q(x)y = 0$ exists satisfying $y(x_0)=a_0$ and $y'(x_0)=a_1$. The radius of convergence of the solution is at least the minimum of the radii of convergence of $P(x)$ and $Q(x)$.
Sturm Separation Theorem: If $y_1(x)$ and $y_2(x)$ are two linearly independent solutions of $y'' + P(x)y' + Q(x)y = 0$, then their zeros are distinct and occur alternately (the zeros of $y_1$ separate the zeros of $y_2$, and vice versa).
Sturm Comparison Theorem: Given $y'' + q(x)y = 0$ and $z'' + r(x)z = 0$ with $q(x) > r(x) > 0$: if $z(x)$ has two successive zeros $x_1, x_2$, then $y(x)$ must vanish at least once between $x_1$ and $x_2$.

Watch Out!

Ensure $a_0 \neq 0$ when assuming a Frobenius series.
Carefully handle the recurrence relation for the different root cases of the indicial equation.
For regular singular points, the existence of two Frobenius series solutions depends on the difference between the roots of the indicial equation.

Module 5: Special Functions

Key Definitions

Legendre's Equation: $(1-x^2)y'' - 2xy' + n(n+1)y = 0$.
Legendre Polynomials ($P_n(x)$): Polynomial solutions of Legendre's equation when $n$ is a non-negative integer.
Bessel's Equation: $x^2 y'' + xy' + (x^2 - p^2)y = 0$.
Bessel Functions of the First Kind ($J_p(x)$): Solutions of Bessel's equation, bounded at $x=0$.
Bessel Functions of the Second Kind ($Y_p(x)$): Solutions of Bessel's equation, unbounded at $x=0$.
Gamma Function ($\Gamma(p)$): A generalization of the factorial function to complex and real numbers, defined by $\Gamma(p) = \int_0^\infty t^{p-1}e^{-t}dt$.
Chebyshev's Equation: $(1-x^2)y'' - xy' + p^2 y = 0$.
Chebyshev Polynomials ($T_n(x)$): Polynomial solutions of Chebyshev's equation.

Core Formulas & Methodologies

Legendre Polynomials:
Generating Function: $\frac{1}{\sqrt{1-2xt+t^2}} = \sum_{n=0}^\infty P_n(x)t^n$.
Rodrigues' Formula: $P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n}(x^2-1)^n$.
Recurrence Relation: $(n+1)P_{n+1}(x) = (2n+1)xP_n(x) - nP_{n-1}(x)$.
First Few: $P_0(x)=1$, $P_1(x)=x$, $P_2(x)=\frac{1}{2}(3x^2-1)$, $P_3(x)=\frac{1}{2}(5x^3-3x)$.
Legendre Series: To expand $f(x)$ on $[-1,1]$: $f(x) = \sum_{n=0}^\infty a_n P_n(x)$ where $a_n = \frac{2n+1}{2} \int_{-1}^1 f(x)P_n(x)dx$.
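Quick check (Legendre recurrence): with $n = 2$, $3P_3(x) = 5x \cdot \frac{1}{2}(3x^2-1) - 2x = \frac{15x^3 - 9x}{2}$, so $P_3(x) = \frac{1}{2}(5x^3-3x)$, matching the polynomial listed above.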
Bessel Functions:
Bessel Function of the First Kind: $J_p(x) = \sum_{n=0}^\infty \frac{(-1)^n}{\Gamma(n+1)\Gamma(n+p+1)} \left(\frac{x}{2}\right)^{2n+p}$.
Recurrence Relations:
$J_{p-1}(x) + J_{p+1}(x) = \frac{2p}{x} J_p(x)$
$J_{p-1}(x) - J_{p+1}(x) = 2J_p'(x)$
Derivative Relations:
$\frac{d}{dx}[x^p J_p(x)] = x^p J_{p-1}(x)$
$\frac{d}{dx}[x^{-p} J_p(x)] = -x^{-p} J_{p+1}(x)$

Gamma Function:
$\Gamma(p+1) = p\Gamma(p)$.
$\Gamma(n+1) = n!$ for integer $n \ge 0$.
$\Gamma(1/2) = \sqrt{\pi}$.

Chebyshev Polynomials:
Definition: $T_n(x) = \cos(n \arccos x)$ for $x \in [-1,1]$.
Recurrence Relation: $T_{n+1}(x) = 2xT_n(x) - T_{n-1}(x)$.
First Few: $T_0(x)=1$, $T_1(x)=x$, $T_2(x)=2x^2-1$.

Orthogonality Relations

Legendre Polynomials: $\int_{-1}^1 P_m(x)P_n(x)dx = \begin{cases} 0 & \text{if } m \neq n \\ \frac{2}{2n+1} & \text{if } m = n \end{cases}$
Bessel Functions ($J_p(\lambda_n x)$, where the $\lambda_n$ are zeros of $J_p(x)$): $\int_0^1 x J_p(\lambda_m x) J_p(\lambda_n x)dx = \begin{cases} 0 & \text{if } m \neq n \\ \frac{1}{2} J_{p+1}(\lambda_n)^2 = \frac{1}{2} J_{p}'(\lambda_n)^2 & \text{if } m = n \end{cases}$ (orthogonality with respect to the weight function $x$).
Chebyshev Polynomials: $\int_{-1}^1 \frac{T_m(x)T_n(x)}{\sqrt{1-x^2}}dx = \begin{cases} 0 & \text{if } m \neq n \\ \pi & \text{if } m = n = 0 \\ \pi/2 & \text{if } m = n \neq 0 \end{cases}$ (orthogonality with respect to the weight function $\frac{1}{\sqrt{1-x^2}}$).

Watch Out!

The Gamma function $\Gamma(p)$ is undefined for $p = 0, -1, -2, \dots$.
The Bessel function $Y_p(x)$ is unbounded at $x=0$, so it is often discarded in physical problems requiring boundedness at the origin.
Ensure correct use of the recurrence relations and Rodrigues' formula for Legendre polynomials.

Module 6: Laplace Transforms

Key Definitions

Laplace Transform ($\mathcal{L}\{f(t)\}$): An integral transform $\mathcal{L}\{f(t)\} = F(s) = \int_0^\infty e^{-st}f(t)dt$.
Inverse Laplace Transform ($\mathcal{L}^{-1}\{F(s)\}$): The function $f(t)$ corresponding to a given $F(s)$.
Piecewise Continuous Function: A function continuous on finite intervals, with only a finite number of jump discontinuities.
Function of Exponential Order: A function $f(t)$ for which there exist constants $M > 0$ and $c$ such that $|f(t)| \le M e^{ct}$ for all $t \ge 0$.
Convolution ($f*g$): $(f*g)(t) = \int_0^t f(\tau)g(t-\tau)d\tau$.
Unit Step Function ($u(t-a)$): $u(t-a) = \begin{cases} 0 & \text{if } t < a \\ 1 & \text{if } t \ge a \end{cases}$
Dirac Delta Function ($\delta(t-a)$): A generalized function representing an impulse at $t=a$; $\mathcal{L}\{\delta(t-a)\} = e^{-as}$.

Core Formulas & Methodologies

Basic Laplace Transforms:
$f(t) = 1$: $F(s) = 1/s$
$f(t) = t^n$ ($n \ge 0$ an integer): $F(s) = n!/s^{n+1}$
$f(t) = e^{at}$: $F(s) = 1/(s-a)$
$f(t) = \sin(bt)$: $F(s) = b/(s^2+b^2)$
$f(t) = \cos(bt)$: $F(s) = s/(s^2+b^2)$

Properties of Laplace Transforms:
Linearity: $\mathcal{L}\{af(t) + bg(t)\} = aF(s) + bG(s)$.
Derivatives of $f(t)$:
$\mathcal{L}\{f'(t)\} = sF(s) - f(0)$
$\mathcal{L}\{f''(t)\} = s^2F(s) - sf(0) - f'(0)$
$\mathcal{L}\{f^{(n)}(t)\} = s^n F(s) - s^{n-1}f(0) - \dots - f^{(n-1)}(0)$
Transform of an Integral: $\mathcal{L}\left\{\int_0^t f(\tau)d\tau \right\} = \frac{F(s)}{s}$.
Multiplication by $t^n$: $\mathcal{L}\{t^n f(t)\} = (-1)^n \frac{d^n}{ds^n}F(s)$.
Division by $t$: $\mathcal{L}\left\{\frac{f(t)}{t}\right\} = \int_s^\infty F(\sigma)d\sigma$.
Shifting Theorem (First Translation): $\mathcal{L}\{e^{at}f(t)\} = F(s-a)$.
Shifting Theorem (Second Translation): $\mathcal{L}\{f(t-a)u(t-a)\} = e^{-as}F(s)$.
Convolution Theorem: $\mathcal{L}\{(f*g)(t)\} = F(s)G(s)$.
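Example (using the properties): $\mathcal{L}\{t\sin(bt)\} = -\frac{d}{ds}\frac{b}{s^2+b^2} = \frac{2bs}{(s^2+b^2)^2}$ by the multiplication-by-$t$ rule, and $\mathcal{L}\{e^{at}\cos(bt)\} = \frac{s-a}{(s-a)^2+b^2}$ by the first shifting theorem.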
Solving Initial Value Problems with Laplace Transforms:
1. Take the Laplace transform of both sides of the ODE.
2. Use the derivative properties to replace $\mathcal{L}\{y'(t)\}$, $\mathcal{L}\{y''(t)\}$, etc., with expressions involving $s$ and $Y(s) = \mathcal{L}\{y(t)\}$, incorporating the initial conditions.
3. Solve the resulting algebraic equation for $Y(s)$.
4. Find the inverse Laplace transform $\mathcal{L}^{-1}\{Y(s)\}$ to get the solution $y(t)$. (This often requires partial fraction decomposition of $Y(s)$.)

Solving Integral Equations (Volterra Type):
1. For $f(t) = y(t) + \int_0^t k(\tau)y(t-\tau)d\tau$, rewrite the integral as the convolution $k*y$.
2. Take the Laplace transform: $F(s) = Y(s) + K(s)Y(s)$.
3. Solve for $Y(s)$: $Y(s) = \frac{F(s)}{1+K(s)}$.
4. Find $\mathcal{L}^{-1}\{Y(s)\}$ to get $y(t)$.
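Worked example (Volterra equation): for $y(t) + \int_0^t (t-\tau)\,y(\tau)\,d\tau = t$ we have $f(t) = t$ and $k(t) = t$, so $F(s) = K(s) = 1/s^2$ and $Y(s) = \frac{1/s^2}{1 + 1/s^2} = \frac{1}{s^2+1}$, giving $y(t) = \sin t$.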
Watch Out!

Laplace transforms only apply for $t \ge 0$.
Initial conditions are automatically incorporated when transforming derivatives.
$F(s) \to 0$ as $s \to \infty$ is a necessary condition for $F(s)$ to be a Laplace transform (though not sufficient).
Partial fraction decomposition is crucial for finding inverse transforms of rational functions.
Remember to use the correct shifting theorem (first for $e^{at}f(t)$, second for $f(t-a)u(t-a)$).

Module 7: Fourier Series

Key Definitions

Trigonometric Series: A series of the form $\frac{a_0}{2} + \sum_{n=1}^\infty (a_n \cos(nx) + b_n \sin(nx))$.
Fourier Series: A trigonometric series whose coefficients $a_n, b_n$ are calculated using Euler's formulas.
Fourier Coefficients: The $a_n, b_n$ values determined by Euler's formulas.
Periodic Function: A function $f(x)$ for which $f(x+p) = f(x)$ for some constant $p > 0$ (the period).
Periodic Extension: Extending a function defined on a finite interval to the entire real line by periodicity.
Simple/Jump Discontinuity: A point where the left and right limits are finite but unequal.
Piecewise Smooth Function: A function that is piecewise continuous and has a piecewise continuous first derivative.
Even Function: $f(-x) = f(x)$.
Odd Function: $f(-x) = -f(x)$.
Orthogonal Functions: A sequence of functions $\{\phi_n(x)\}$ on $[a,b]$ such that $\int_a^b \phi_m(x)\phi_n(x)dx = 0$ for $m \neq n$.
Orthonormal Sequence: An orthogonal sequence where $\int_a^b \phi_n(x)^2 dx = 1$ for all $n$.
Mean Square Error: $E_n = \int_a^b [f(x) - p_n(x)]^2 dx$.
Mean Convergence: A sequence $p_n(x)$ converges in the mean to $f(x)$ if $E_n \to 0$ as $n \to \infty$.

Core Formulas & Methodologies

Euler's Formulas for Fourier Coefficients (on $[-\pi, \pi]$):
$a_n = \frac{1}{\pi} \int_{-\pi}^\pi f(x)\cos(nx)dx$ for $n=0,1,2,\dots$
$b_n = \frac{1}{\pi} \int_{-\pi}^\pi f(x)\sin(nx)dx$ for $n=1,2,3,\dots$

Fourier Series on an Arbitrary Interval $[-L, L]$:
$f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty \left(a_n \cos\left(\frac{n\pi x}{L}\right) + b_n \sin\left(\frac{n\pi x}{L}\right)\right)$
$a_n = \frac{1}{L} \int_{-L}^L f(x)\cos\left(\frac{n\pi x}{L}\right)dx$
$b_n = \frac{1}{L} \int_{-L}^L f(x)\sin\left(\frac{n\pi x}{L}\right)dx$

Fourier Sine Series (for odd functions, or functions on $[0,L]$ extended oddly):
$f(x) = \sum_{n=1}^\infty b_n \sin\left(\frac{n\pi x}{L}\right)$ with $b_n = \frac{2}{L} \int_0^L f(x)\sin\left(\frac{n\pi x}{L}\right)dx$

Fourier Cosine Series (for even functions, or functions on $[0,L]$ extended evenly):
$f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos\left(\frac{n\pi x}{L}\right)$ with $a_n = \frac{2}{L} \int_0^L f(x)\cos\left(\frac{n\pi x}{L}\right)dx$

Important Theorems

Dirichlet's Theorem (Pointwise Convergence): If $f(x)$ is defined and bounded on $[-\pi, \pi]$ (or $[-L,L]$), has a finite number of discontinuities and a finite number of maxima/minima, and is periodically extended, then its Fourier series converges to $\frac{1}{2}[f(x-) + f(x+)]$ at every point $x$. At points of continuity it converges to $f(x)$.
Parseval's Equation: For a function $f(x)$ integrable on $[-\pi, \pi]$ with Fourier coefficients $a_n, b_n$: $\frac{1}{\pi} \int_{-\pi}^\pi [f(x)]^2 dx = \frac{a_0^2}{2} + \sum_{n=1}^\infty (a_n^2 + b_n^2)$. (This implies that the Fourier series converges to $f(x)$ in the mean.)
Bessel's Inequality: $\frac{a_0^2}{2} + \sum_{n=1}^\infty (a_n^2 + b_n^2) \le \frac{1}{\pi} \int_{-\pi}^\pi [f(x)]^2 dx$.
Property of Orthogonal Functions: The Fourier coefficients minimize the mean square error.
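Worked example (Fourier series and Parseval): for $f(x) = x$ on $[-\pi, \pi]$, oddness gives $a_n = 0$ and $b_n = \frac{2(-1)^{n+1}}{n}$, so $x = 2\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}\sin(nx)$. Parseval's equation then reads $\frac{1}{\pi}\int_{-\pi}^\pi x^2 dx = \frac{2\pi^2}{3} = \sum_{n=1}^\infty \frac{4}{n^2}$, which yields $\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$.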
Watch Out!

Always periodically extend the function when considering its Fourier series outside the initial interval.
At jump discontinuities, the Fourier series converges to the average of the left and right limits.
For functions on $[0,L]$, be careful whether you're asked for a sine series (odd extension) or a cosine series (even extension).
The constant term in a Fourier series is $\frac{a_0}{2}$, not $a_0$.

Module 8: Boundary Value Problems

Key Definitions

Boundary Value Problem (BVP): A differential equation with conditions specified at two different points (boundaries).
Eigenvalue: A constant $\lambda$ for which a homogeneous BVP has a non-trivial solution.
Eigenfunction: The non-trivial solution corresponding to an eigenvalue.
Homogeneous Boundary Conditions: Boundary conditions where all terms are zero (e.g., $y(a)=0, y(b)=0$). Any linear combination of solutions satisfying these also satisfies them.
Sturm–Liouville Problem: A second-order linear ODE of the form $\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right] + [q(x) + \lambda r(x)]y = 0$ with homogeneous boundary conditions.
Regular Sturm–Liouville Problem: A Sturm–Liouville problem where $p(x), p'(x), q(x), r(x)$ are continuous and $p(x) > 0$, $r(x) > 0$ on a finite interval $[a,b]$.
Singular Sturm–Liouville Problem: A Sturm–Liouville problem where the conditions for "regular" are not met (e.g., $p(x)$ or $r(x)$ vanishes at an endpoint, or the interval is infinite).

Core Formulas & Methodologies

Solving Simple BVPs ($y'' + \lambda y = 0$ with $y(0)=0, y(\pi)=0$): consider the three cases for $\lambda$.
$\lambda < 0$: General solution $y = c_1 e^{\sqrt{-\lambda}\,x} + c_2 e^{-\sqrt{-\lambda}\,x}$. Applying the boundary conditions shows only the trivial solution $y=0$ exists.
$\lambda = 0$: General solution $y = c_1 x + c_2$. Again the boundary conditions force the trivial solution $y=0$.
$\lambda > 0$: General solution $y = c_1 \cos(\sqrt{\lambda}x) + c_2 \sin(\sqrt{\lambda}x)$.
1. $y(0)=0 \implies c_1 = 0$, so $y = c_2 \sin(\sqrt{\lambda}x)$.
2. $y(\pi)=0 \implies c_2 \sin(\sqrt{\lambda}\pi) = 0$. For a non-trivial solution, $\sin(\sqrt{\lambda}\pi) = 0$.
3. Solve for $\lambda$: $\sqrt{\lambda}\pi = n\pi \implies \lambda_n = n^2$ for $n=1,2,3,\dots$. These are the eigenvalues.
4. The corresponding eigenfunctions are $y_n(x) = \sin(nx)$.

Important Theorems

Properties of Eigenvalues (Regular Sturm–Liouville Problems):
Eigenvalues are real numbers, and they form an increasing sequence $\lambda_1 < \lambda_2 < \lambda_3 < \cdots \to \infty$.
Eigenfunctions corresponding to distinct eigenvalues are orthogonal with respect to the weight function $r(x)$.
Each eigenfunction is unique up to a non-zero constant factor.
The eigenfunction $y_n(x)$ corresponding to $\lambda_n$ has exactly $n-1$ zeros in the open interval $(a,b)$.
Orthogonality of Eigenfunctions (Sturm–Liouville): If $y_m(x)$ and $y_n(x)$ are eigenfunctions corresponding to distinct eigenvalues $\lambda_m$ and $\lambda_n$, then they are orthogonal with respect to the weight function $r(x)$: $\int_a^b r(x)y_m(x)y_n(x)dx = 0$.

Watch Out!

Always check all three cases for $\lambda$ (negative, zero, positive) in simple BVPs to confirm that non-trivial solutions exist only for specific $\lambda > 0$.
Eigenfunctions are determined only up to a constant multiplier.

Module 9: PDEs

Key Definitions

Partial Differential Equation (PDE): An equation involving partial derivatives of an unknown function of multiple independent variables.
Wave Equation (1D): $\frac{\partial^2 y}{\partial t^2} = a^2 \frac{\partial^2 y}{\partial x^2}$. Describes vibrations of a string.
Heat Equation (1D): $\frac{\partial w}{\partial t} = a^2 \frac{\partial^2 w}{\partial x^2}$. Describes heat conduction in a rod.
Laplace's Equation (2D): $\frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2} = 0$. Describes steady-state phenomena.
Steady-State Solution: A solution where the time derivative is zero ($\partial/\partial t = 0$).
Harmonic Function: A solution of Laplace's equation.
Dirichlet Problem: Finding a function satisfying a PDE in a region and taking prescribed values on the boundary.

Core Formulas & Methodologies

Method of Separation of Variables (General Steps):
1. Assume a solution of the form $u(x,t) = X(x)T(t)$ (or the appropriate product for more variables).
2. Substitute this form into the PDE.
3. Separate variables so that one side depends only on one variable and the other side depends only on the other.
4. Equate both sides to a separation constant (e.g., $-\lambda$).
5. Solve the resulting two (or more) ODEs.
6. Apply boundary conditions to determine the allowed values of $\lambda$ (eigenvalues) and the corresponding solutions (eigenfunctions).
7. Form a general solution by summing (or integrating) these particular solutions (e.g., a Fourier series).
8. Apply initial conditions to determine the coefficients in the general solution.

1D Wave Equation Solution (Fixed Ends, Zero Initial Velocity): $\frac{\partial^2 y}{\partial t^2} = a^2 \frac{\partial^2 y}{\partial x^2}$ with $y(0,t)=0$, $y(L,t)=0$, $y(x,0)=f(x)$, $\frac{\partial y}{\partial t}(x,0)=0$.
Solution: $y(x,t) = \sum_{n=1}^\infty b_n \sin\left(\frac{n\pi x}{L}\right) \cos\left(\frac{n\pi a t}{L}\right)$ where $b_n = \frac{2}{L} \int_0^L f(x)\sin\left(\frac{n\pi x}{L}\right)dx$.
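Example (wave equation): if the initial shape is $f(x) = \sin\left(\frac{\pi x}{L}\right)$, then $b_1 = 1$ and $b_n = 0$ for $n \ge 2$ by orthogonality, so the string vibrates in its fundamental mode: $y(x,t) = \sin\left(\frac{\pi x}{L}\right)\cos\left(\frac{\pi a t}{L}\right)$.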
1D Heat Equation Solution (Fixed Zero Ends): $\frac{\partial w}{\partial t} = a^2 \frac{\partial^2 w}{\partial x^2}$ with $w(0,t)=0$, $w(L,t)=0$, $w(x,0)=f(x)$.
Solution: $w(x,t) = \sum_{n=1}^\infty b_n e^{-a^2(n\pi/L)^2 t} \sin\left(\frac{n\pi x}{L}\right)$ where $b_n = \frac{2}{L} \int_0^L f(x)\sin\left(\frac{n\pi x}{L}\right)dx$.

Steady-State 1D Heat Conduction (Laplace's Equation in 1D): $\frac{d^2 w}{dx^2} = 0$ with $w(0)=w_1$, $w(L)=w_2$.
Solution: $w(x) = w_1 + \frac{w_2-w_1}{L}x$.

Dirichlet Problem for the Unit Circle (Laplace's Equation in Polar Coordinates): $\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial w}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 w}{\partial \theta^2} = 0$ with $w(1,\theta)=f(\theta)$. (See the quick check at the end of this module.)
Solution (assuming $w(r,\theta)$ is bounded at $r=0$): $w(r,\theta) = \frac{a_0}{2} + \sum_{n=1}^\infty r^n (a_n \cos(n\theta) + b_n \sin(n\theta))$ where $a_n = \frac{1}{\pi} \int_{-\pi}^\pi f(\theta)\cos(n\theta)d\theta$ and $b_n = \frac{1}{\pi} \int_{-\pi}^\pi f(\theta)\sin(n\theta)d\theta$.
Poisson Integral Formula: $w(r,\theta) = \frac{1}{2\pi} \int_{-\pi}^\pi \frac{1-r^2}{1-2r\cos(\theta-\phi)+r^2} f(\phi)d\phi$.

Watch Out!

The choice of separation constant is crucial: if it leads to non-physical (e.g., unbounded) solutions, it must be rejected.
Boundary conditions typically determine the spatial part (eigenvalues/eigenfunctions) of the solution.
Initial conditions typically determine the coefficients of the Fourier series.
Ensure the correct Fourier series (sine or cosine) is used, based on the boundary conditions and the function extension.
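Quick check (Dirichlet problem): for boundary data $f(\theta) = \sin\theta$, the coefficient formulas give $b_1 = 1$ and all other coefficients zero, so $w(r,\theta) = r\sin\theta$. This is just the harmonic function $w = y$ in Cartesian coordinates, and it matches the boundary values at $r=1$.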