1. First-Order Differential Equations Differential equations involving only the first derivative of the unknown function. 1.1 General Form A first-order differential equation can be expressed as $F(x, y, y') = 0$, where $y'$ is the first derivative of $y$ with respect to $x$. Often, it's written in the explicit form $y' = f(x, y)$, indicating that the slope of the solution curve at any point $(x,y)$ depends on both $x$ and $y$. Solutions to these equations typically involve one arbitrary constant. 1.2 Separable Equations Form: These equations can be rearranged so that all terms involving $y$ (and $dy$) are on one side, and all terms involving $x$ (and $dx$) are on the other. This form is $y' = g(x)h(y)$ or, equivalently, $M(x)dx + N(y)dy = 0$. Solution Procedure: Step 1: Separate Variables. Rewrite the equation as $\frac{dy}{h(y)} = g(x)dx$. If the form is $M(x)dx + N(y)dy = 0$, the variables are already separated. Step 2: Integrate Both Sides. Integrate the $y$-side with respect to $y$ and the $x$-side with respect to $x$: $\int \frac{dy}{h(y)} = \int g(x)dx + C$. Remember to add the constant of integration $C$ to one side (typically the $x$-side). Step 3: Solve for $y$ (if possible). Explicitly solve the resulting equation for $y$ in terms of $x$ and $C$. 1.3 Exact Equations Form: An equation of the form $M(x, y)dx + N(x, y)dy = 0$ is exact if it corresponds to the total differential $d\Phi = \frac{\partial \Phi}{\partial x}dx + \frac{\partial \Phi}{\partial y}dy$ for some function $\Phi(x, y)$. Condition for Exactness: The necessary and sufficient condition for exactness is that the partial derivative of $M$ with respect to $y$ equals the partial derivative of $N$ with respect to $x$: $\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}$. Solution Procedure: Step 1: Verify Exactness. Check if $\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}$. If not, the equation is not exact (and might require an integrating factor). 
Step 2: Find $\Phi(x, y)$. Since $\frac{\partial \Phi}{\partial x} = M(x, y)$, integrate $M(x, y)$ with respect to $x$, treating $y$ as a constant. This gives $\Phi(x, y) = \int M(x, y)dx + h(y)$, where $h(y)$ is an arbitrary function of $y$ (analogous to the constant of integration). Step 3: Determine $h(y)$. Differentiate the expression for $\Phi(x, y)$ from Step 2 with respect to $y$: $\frac{\partial \Phi}{\partial y} = \frac{\partial}{\partial y} \left( \int M(x, y)dx \right) + h'(y)$. Set this equal to $N(x, y)$, i.e., $\frac{\partial \Phi}{\partial y} = N(x, y)$. This allows you to solve for $h'(y)$. Step 4: Integrate $h'(y)$. Integrate $h'(y)$ with respect to $y$ to find $h(y)$. Do not add another constant here; it's absorbed into the final general solution. Step 5: Write the General Solution. Substitute the found $h(y)$ back into the expression for $\Phi(x, y)$ from Step 2. The general solution is then given implicitly by $\Phi(x, y) = C$, where $C$ is an arbitrary constant. 1.4 Integrating Factors for Non-Exact Equations If an equation $M(x, y)dx + N(x, y)dy = 0$ is not exact ($\frac{\partial M}{\partial y} \neq \frac{\partial N}{\partial x}$), sometimes it can be made exact by multiplying it by a suitable integrating factor $\mu(x, y)$. The goal is to find $\mu$ such that $\mu M dx + \mu N dy = 0$ is exact. Common Cases for Finding $\mu$: If $\mu$ depends only on $x$: Calculate the expression $\frac{1}{N} \left( \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \right)$. If this expression is solely a function of $x$ (let's call it $f(x)$), then the integrating factor is $\mu(x) = e^{\int f(x)dx}$. If $\mu$ depends only on $y$: Calculate the expression $\frac{1}{M} \left( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right)$. If this expression is solely a function of $y$ (let's call it $g(y)$), then the integrating factor is $\mu(y) = e^{\int g(y)dy}$. 
Once $\mu$ is found, multiply the original equation by $\mu$ and solve the resulting exact equation using the procedure from Section 1.3. 1.5 Linear First-Order Equations Form: A linear first-order differential equation for $y$ is of the form $y' + P(x)y = Q(x)$, where $P(x)$ and $Q(x)$ are continuous functions of $x$. Solution Procedure: Step 1: Identify $P(x)$ and $Q(x)$. Ensure the equation is in the standard form $y' + P(x)y = Q(x)$. Step 2: Calculate the Integrating Factor. The integrating factor is $\mu(x) = e^{\int P(x)dx}$. Step 3: Multiply by the Integrating Factor. Multiply the entire differential equation by $\mu(x)$: $\mu(x)y' + \mu(x)P(x)y = \mu(x)Q(x)$. The left-hand side is now the derivative of a product: $(\mu(x)y)'$. Step 4: Integrate Both Sides. Integrate both sides with respect to $x$: $\int (\mu(x)y)'dx = \int \mu(x)Q(x)dx$. This yields $\mu(x)y = \int \mu(x)Q(x)dx + C$. Step 5: Solve for $y$. Divide by $\mu(x)$ to get the general solution: $y(x) = \frac{1}{\mu(x)} \left( \int \mu(x)Q(x)dx + C \right)$. 1.6 Bernoulli Equations Form: Bernoulli equations are non-linear first-order differential equations of the form $y' + P(x)y = Q(x)y^n$, where $n$ is any real number except $0$ or $1$ (if $n=0$ or $n=1$, the equation is already linear). Solution Procedure: Step 1: Transform to Linear. Divide the entire equation by $y^n$: $y^{-n}y' + P(x)y^{1-n} = Q(x)$. Step 2: Substitute. Let $v = y^{1-n}$. Differentiate $v$ with respect to $x$ using the chain rule: $v' = (1-n)y^{-n}y'$. Step 3: Rewrite and Solve. Substitute $v$ and $v'$ into the transformed equation. Notice that $y^{-n}y' = \frac{1}{1-n}v'$. The equation becomes $\frac{1}{1-n}v' + P(x)v = Q(x)$, which is a linear first-order equation in $v$. Solve this linear equation for $v$ using the method described in Section 1.5. Step 4: Substitute Back. Once $v(x)$ is found, substitute back $y = v^{1/(1-n)}$ to obtain the solution for $y$. 
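The integrating-factor procedure of Section 1.5 can be sanity-checked numerically. The equation $y' + 2y = e^{-x}$ below is an illustrative example of mine, not one from the text; for it, $P(x) = 2$ and $Q(x) = e^{-x}$, so $\mu(x) = e^{2x}$, $(e^{2x}y)' = e^{x}$, and hence $y = e^{-x} + Ce^{-2x}$:

```python
import math

# Check of Section 1.5 on the illustrative equation y' + 2y = e^{-x}.
# Integrating factor mu(x) = e^{2x} gives (e^{2x} y)' = e^{x}, so
#     y(x) = e^{-x} + C e^{-2x}.

def y(x, C=1.0):
    """General solution y(x) = e^{-x} + C e^{-2x} for the example above."""
    return math.exp(-x) + C * math.exp(-2 * x)

def residual(x, C=1.0, h=1e-5):
    """y' + 2y - e^{-x}, with y' estimated by a central difference."""
    dy = (y(x + h, C) - y(x - h, C)) / (2 * h)
    return dy + 2 * y(x, C) - math.exp(-x)

# The residual vanishes (up to finite-difference error) for any C.
for x in [0.0, 0.5, 1.0, 2.0]:
    assert abs(residual(x, C=3.7)) < 1e-6
```

That the residual is zero for every $C$ mirrors the fact that the general solution of a first-order equation carries one arbitrary constant.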
1.7 Homogeneous Equations Form: A first-order differential equation is homogeneous if it can be written in the form $y' = f(y/x)$. This means that the right-hand side is a function of the ratio $y/x$. Solution Procedure: Step 1: Substitute. Let $v = y/x$. This implies $y = vx$. Step 2: Differentiate. Differentiate $y = vx$ with respect to $x$ using the product rule: $y' = v \cdot 1 + x \cdot v' = v + xv'$. Step 3: Transform to Separable. Substitute $v$ and $y'$ into the original homogeneous equation: $v + xv' = f(v)$. This equation can always be rearranged into a separable form: $xv' = f(v) - v \Rightarrow \frac{dv}{f(v) - v} = \frac{dx}{x}$. Step 4: Solve Separable Equation. Integrate both sides: $\int \frac{dv}{f(v) - v} = \int \frac{dx}{x} + C$. Step 5: Substitute Back. Solve for $v$, then replace $v$ with $y/x$ to obtain the solution for $y$. 2. Second-Order Linear Differential Equations Differential equations involving the second derivative of the unknown function, where the unknown function and its derivatives appear linearly. 2.1 General Form The general form of a second-order linear differential equation is $P(x)y'' + Q(x)y' + R(x)y = G(x)$, where $P(x)$, $Q(x)$, $R(x)$, and $G(x)$ are given functions of $x$. If $G(x) = 0$, the equation is homogeneous; otherwise, it is non-homogeneous. 2.2 Homogeneous Equations with Constant Coefficients Form: These are equations of the form $ay'' + by' + cy = 0$, where $a, b, c$ are constants and $a \neq 0$. Solution Procedure: Assume a solution of the form $y = e^{rx}$. Substituting this into the equation yields the characteristic equation (also known as the auxiliary equation): $ar^2 + br + c = 0$. The roots of this quadratic equation determine the form of the general solution $y_h(x)$. Case 1: Two Distinct Real Roots ($r_1, r_2$) If the discriminant $b^2 - 4ac > 0$, there are two distinct real roots. The general solution is a linear combination of two exponential functions: $y_h(x) = C_1e^{r_1x} + C_2e^{r_2x}$. 
Case 2: One Repeated Real Root ($r = r_1 = r_2$) If the discriminant $b^2 - 4ac = 0$, there is one repeated real root $r = -b/(2a)$. The general solution is a linear combination of $e^{rx}$ and $xe^{rx}$: $y_h(x) = C_1e^{rx} + C_2xe^{rx}$. Case 3: Complex Conjugate Roots ($\alpha \pm i\beta$) If the discriminant $b^2 - 4ac < 0$, the roots are the complex conjugates $r = \alpha \pm i\beta$, where $\alpha = -b/(2a)$ and $\beta = \sqrt{4ac - b^2}/(2a)$. The general solution is $y_h(x) = e^{\alpha x}(C_1\cos(\beta x) + C_2\sin(\beta x))$. 2.3 Non-Homogeneous Equations Form: $a y'' + b y' + c y = G(x)$, where $G(x) \neq 0$. This can also be written using differential operators as $F(D)y = G(x)$, where $F(D) = aD^2 + bD + c$ and $D = \frac{d}{dx}$. General Solution: The general solution to a non-homogeneous linear differential equation is the sum of the homogeneous solution ($y_h(x)$) and a particular solution ($y_p(x)$): $y(x) = y_h(x) + y_p(x)$. The homogeneous solution is found by setting $G(x)=0$ and solving the resulting equation (as in Section 2.2). The particular solution $y_p(x)$ is any specific solution that satisfies the non-homogeneous equation. Its calculation is the focus of the following methods. 2.4 Operator Method for Particular Integral ($y_p$) This method uses the inverse differential operator $\frac{1}{F(D)}$ to find $y_p$. The particular integral is given by $y_p = \frac{1}{F(D)}G(x)$. 2.4.1 Case 1: $G(x) = e^{ax}$ To find $y_p = \frac{1}{F(D)}e^{ax}$: Rule: Replace $D$ with $a$ in $F(D)$. If $F(a) \neq 0$: The particular integral is $y_p = \frac{1}{F(a)}e^{ax}$. If $F(a) = 0$: This means $a$ is a root of the characteristic equation. If $a$ is a simple root (i.e., $F(a)=0$ but $F'(a) \neq 0$): $y_p = x\frac{1}{F'(a)}e^{ax}$. (Differentiate $F(D)$ with respect to $D$, then substitute $a$). If $a$ is a double root (i.e., $F(a)=0$, $F'(a)=0$ but $F''(a) \neq 0$): $y_p = x^2\frac{1}{F''(a)}e^{ax}$. (Differentiate $F(D)$ twice, then substitute $a$). This rule can be generalized for roots of higher multiplicity. 
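A minimal numerical check of the $F(a)$-substitution rule from Case 1, using an illustrative equation of my choosing, $y'' + 3y' + 2y = e^{3x}$; here $F(D) = D^2 + 3D + 2$, $F(3) = 20 \neq 0$, so the rule gives $y_p = e^{3x}/20$:

```python
import math

# Check of Section 2.4.1 on the illustrative equation
# y'' + 3y' + 2y = e^{3x}, where F(D) = D^2 + 3D + 2.

def F(r):
    """Characteristic polynomial F evaluated at r."""
    return r**2 + 3*r + 2

a = 3.0
assert F(a) != 0                        # a is not a characteristic root
yp = lambda x: math.exp(a * x) / F(a)   # y_p = e^{3x} / F(3) = e^{3x} / 20

def residual(x, h=1e-4):
    """y_p'' + 3 y_p' + 2 y_p - e^{3x}, derivatives by central differences."""
    d1 = (yp(x + h) - yp(x - h)) / (2 * h)
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2
    return d2 + 3 * d1 + 2 * yp(x) - math.exp(a * x)

for x in [0.0, 0.3, 1.0]:
    assert abs(residual(x)) < 1e-5
```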
2.4.2 Case 2: $G(x) = \sin(ax)$ or $G(x) = \cos(ax)$ To find $y_p = \frac{1}{F(D)}\sin(ax)$ or $y_p = \frac{1}{F(D)}\cos(ax)$: Rule: Replace $D^2$ with $-a^2$ in $F(D)$. This rule is only applicable if $F(D)$ can be expressed as a function of $D^2$. If $F(D)$ contains odd powers of $D$, you might need to multiply the numerator and denominator by $F(-D)$ to obtain $D^2$ terms. If $F(-a^2) \neq 0$: The particular integral is $y_p = \frac{1}{F(-a^2)}\sin(ax)$ or $y_p = \frac{1}{F(-a^2)}\cos(ax)$. If $F(-a^2) = 0$: This means $\pm ia$ are roots of $F(D)=0$. This is the resonant case. For example, if $F(D) = D^2+a^2$: $y_p = \frac{1}{D^2+a^2}\sin(ax) = -\frac{x}{2a}\cos(ax)$ and $y_p = \frac{1}{D^2+a^2}\cos(ax) = \frac{x}{2a}\sin(ax)$. Alternative and often easier method: Use complex exponentials. For $\sin(ax)$, consider $G(x) = \text{Im}(e^{iax})$, so $y_p = \text{Im}\left(\frac{1}{F(D)}e^{iax}\right)$. For $\cos(ax)$, consider $G(x) = \text{Re}(e^{iax})$, so $y_p = \text{Re}\left(\frac{1}{F(D)}e^{iax}\right)$. Then apply the rules for $e^{ax}$ (Case 1). 2.4.3 Case 3: $G(x) = x^m$ (Polynomial) To find $y_p = \frac{1}{F(D)}x^m$, where $m$ is a positive integer: Rule: Rearrange $\frac{1}{F(D)}$ into the form $[c(1 + \phi(D))]^{-1}$ or $[cD^k(1 + \phi(D))]^{-1}$, where $c$ is a constant and $\phi(D)$ is a function of $D$ such that $\phi(0)=0$. Expand $[1 + \phi(D)]^{-1}$ using the binomial series $(1+u)^{-1} = 1 - u + u^2 - u^3 + \dots$ or $(1-u)^{-1} = 1 + u + u^2 + u^3 + \dots$. Terminate the series after the $D^m$ term, because applying $D^{m+1}$ or higher powers to $x^m$ results in zero ($D^k x^m = 0$ for $k > m$). Apply the resulting polynomial in $D$ to $x^m$. Remember $D x^m = mx^{m-1}$, $D^2 x^m = m(m-1)x^{m-2}$, etc. 
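The resonant formulas of Case 2 can be verified directly. Taking the example $y'' + y = \cos(x)$ (so $F(D) = D^2 + 1$, $a = 1$, and $F(-a^2) = 0$), the claimed particular integral is $y_p = \frac{x}{2}\sin(x)$; a short check with derivatives computed by hand:

```python
import math

# Check of the resonant case in Section 2.4.2: for y'' + y = cos(x),
# the formula gives y_p = (x/2) sin(x).

def yp(x):
    return 0.5 * x * math.sin(x)

def yp2(x):
    # By hand: y_p' = sin(x)/2 + (x/2) cos(x),
    #          y_p'' = cos(x) - (x/2) sin(x)
    return math.cos(x) - 0.5 * x * math.sin(x)

# y_p'' + y_p reduces exactly to cos(x): the (x/2) sin(x) terms cancel.
for x in [0.0, 1.0, 2.5, 10.0]:
    assert abs(yp2(x) + yp(x) - math.cos(x)) < 1e-12
```

Note the factor of $x$ in $y_p$: the forcing frequency coincides with the natural frequency, so the response grows linearly in $x$, which is exactly what "resonance" means here.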
2.4.4 Case 4: $G(x) = e^{ax}V(x)$ To find $y_p = \frac{1}{F(D)}e^{ax}V(x)$, where $V(x)$ is another function (e.g., polynomial, sine/cosine): Rule: The exponential term can be "shifted" to the left of the operator by replacing $D$ with $(D+a)$ in the operator: $y_p = e^{ax}\frac{1}{F(D+a)}V(x)$. Now, solve $\frac{1}{F(D+a)}V(x)$ using the appropriate method for $V(x)$ (e.g., Case 2.4.2 for trigonometric $V(x)$, or Case 2.4.3 for polynomial $V(x)$). 2.4.5 Case 5: $G(x) = xV(x)$ To find $y_p = \frac{1}{F(D)}xV(x)$, where $V(x)$ is a function that can be easily operated on by $F(D)$ (like $e^{ax}$, $\sin(ax)$, $\cos(ax)$): Rule: $y_p = x \frac{1}{F(D)}V(x) - \frac{F'(D)}{[F(D)]^2}V(x)$. First, calculate $\frac{1}{F(D)}V(x)$ using the relevant rule (Case 2.4.1 or 2.4.2). Then, calculate $F'(D)$ (the derivative of $F(D)$ with respect to $D$). Apply the operator $\frac{F'(D)}{[F(D)]^2}$ to $V(x)$. This often involves applying the rules for $e^{ax}$ or $\sin(ax)/\cos(ax)$ multiple times or using partial fractions for $\frac{1}{[F(D)]^2}$. 2.5 Variation of Parameters This is a general method for finding a particular solution $y_p(x)$ for $P(x)y'' + Q(x)y' + R(x)y = G(x)$, even when the coefficients are not constant or $G(x)$ is not of a form suitable for the operator method. It requires knowing two linearly independent solutions ($y_1(x)$ and $y_2(x)$) of the associated homogeneous equation $P(x)y'' + Q(x)y' + R(x)y = 0$. Solution Procedure: Step 1: Standard Form. First, convert the equation to the standard form: $y'' + p(x)y' + q(x)y = g(x)$, by dividing by $P(x)$. So, $p(x) = Q(x)/P(x)$, $q(x) = R(x)/P(x)$, and $g(x) = G(x)/P(x)$. Step 2: Find $y_h(x)$. Find two linearly independent solutions, $y_1(x)$ and $y_2(x)$, to the homogeneous equation $y'' + p(x)y' + q(x)y = 0$. Step 3: Calculate the Wronskian. Compute the Wronskian of $y_1$ and $y_2$: $W(y_1, y_2)(x) = y_1y_2' - y_2y_1'$. The Wronskian must be non-zero for $y_1, y_2$ to be linearly independent. 
Step 4: Formulate $y_p(x)$. Assume the particular solution has the form $y_p(x) = u_1(x)y_1(x) + u_2(x)y_2(x)$, where $u_1(x)$ and $u_2(x)$ are unknown functions. Step 5: Calculate $u_1'(x)$ and $u_2'(x)$. The derivatives of $u_1$ and $u_2$ are given by: $u_1'(x) = -\frac{y_2(x)g(x)}{W(y_1, y_2)(x)}$ $u_2'(x) = \frac{y_1(x)g(x)}{W(y_1, y_2)(x)}$ Step 6: Integrate to find $u_1(x)$ and $u_2(x)$. Integrate the expressions from Step 5 to find $u_1(x) = \int u_1'(x)dx$ and $u_2(x) = \int u_2'(x)dx$. Do not include arbitrary constants of integration here, as they would simply reproduce terms already in the homogeneous solution. Step 7: Substitute. Substitute $u_1(x)$ and $u_2(x)$ back into the expression for $y_p(x)$ from Step 4. 2.6 Cauchy-Euler Equations Form: Cauchy-Euler (or Euler-Cauchy) equations are linear differential equations with variable coefficients that are powers of $x$: $ax^2y'' + bxy' + cy = 0$, where $a, b, c$ are constants. Solution Procedure: Assume a solution of the form $y = x^r$. Calculate the derivatives: $y' = rx^{r-1}$ and $y'' = r(r-1)x^{r-2}$. Substitute these into the equation: $ax^2(r(r-1)x^{r-2}) + bx(rx^{r-1}) + cx^r = 0$ This simplifies to $ar(r-1)x^r + brx^r + cx^r = 0$. Since $x^r \neq 0$, we can divide by it to get the characteristic equation (also called the indicial equation): $ar(r-1) + br + c = 0$, which expands to $ar^2 + (b-a)r + c = 0$. Solve this quadratic equation for $r$. Case 1: Two Distinct Real Roots ($r_1, r_2$) The general solution is $y_h(x) = C_1x^{r_1} + C_2x^{r_2}$. Case 2: One Repeated Real Root ($r = r_1 = r_2$) The general solution is $y_h(x) = C_1x^{r} + C_2x^{r}\ln|x|$. Case 3: Complex Conjugate Roots ($\alpha \pm i\beta$) The roots are of the form $r = \alpha \pm i\beta$. The general solution is $y_h(x) = x^{\alpha}(C_1\cos(\beta \ln|x|) + C_2\sin(\beta \ln|x|))$. This uses the identity $x^{i\beta} = e^{i\beta \ln x} = \cos(\beta \ln x) + i\sin(\beta \ln x)$. 
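The Cauchy-Euler procedure can be illustrated with a small check. For the equation $x^2y'' - xy' + y = 0$ (an example chosen for illustration, not from the text), $a=1$, $b=-1$, $c=1$, so the indicial equation $r^2 - 2r + 1 = 0$ has the repeated root $r = 1$, and Case 2 predicts $y_h = C_1x + C_2x\ln x$ for $x > 0$:

```python
import math

# Check of Section 2.6 on the illustrative Cauchy-Euler equation
# x^2 y'' - x y' + y = 0 (a=1, b=-1, c=1; indicial roots r = 1, 1).

def residual(y, dy, d2y, x):
    """Left-hand side x^2 y'' - x y' + y at the point x."""
    return x**2 * d2y(x) - x * dy(x) + y(x)

# First solution y1 = x, with y1' = 1, y1'' = 0.
r1 = residual(lambda x: x, lambda x: 1.0, lambda x: 0.0, 2.0)

# Second solution y2 = x ln x, with y2' = ln x + 1, y2'' = 1/x.
r2 = residual(lambda x: x * math.log(x),
              lambda x: math.log(x) + 1.0,
              lambda x: 1.0 / x, 2.0)

assert abs(r1) < 1e-12 and abs(r2) < 1e-12
```

The $\ln x$ factor in the second solution plays the same role that the factor $x$ plays for repeated roots in the constant-coefficient case (Section 2.2, Case 2), reflecting the substitution $t = \ln x$ that maps Cauchy-Euler equations onto constant-coefficient ones.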
For non-homogeneous Cauchy-Euler equations ($ax^2y'' + bxy' + cy = G(x)$), methods like Variation of Parameters can be used after finding the homogeneous solutions. 2.7 Reduction of Order This method is used to find a second linearly independent solution of a homogeneous second-order linear differential equation, $y'' + p(x)y' + q(x)y = 0$, when one non-trivial solution $y_1(x)$ is already known. Solution Procedure: Step 1: Assume a Second Solution. Assume the second solution is of the form $y_2(x) = v(x)y_1(x)$, where $v(x)$ is an unknown function. Step 2: Differentiate. Calculate the first and second derivatives of $y_2(x)$: $y_2' = v'y_1 + vy_1'$ $y_2'' = v''y_1 + v'y_1' + v'y_1' + vy_1'' = v''y_1 + 2v'y_1' + vy_1''$ Step 3: Substitute and Simplify. Substitute $y_2$, $y_2'$, and $y_2''$ into the original homogeneous equation: $ (v''y_1 + 2v'y_1' + vy_1'') + p(x)(v'y_1 + vy_1') + q(x)(vy_1) = 0 $ Rearrange terms: $ v(y_1'' + p(x)y_1' + q(x)y_1) + v''(y_1) + v'(2y_1' + p(x)y_1) = 0 $. Since $y_1$ is a solution to the homogeneous equation, $(y_1'' + p(x)y_1' + q(x)y_1) = 0$. This simplifies the equation to: $v''y_1 + v'(2y_1' + p(x)y_1) = 0$. Step 4: Solve for $v'(x)$. This is a first-order linear (and separable) differential equation for $v'$. Let $w = v'$. Then $w'y_1 + w(2y_1' + p(x)y_1) = 0$. Separate variables: $\frac{w'}{w} = -\frac{2y_1' + p(x)y_1}{y_1} = -2\frac{y_1'}{y_1} - p(x)$. Integrate both sides with respect to $x$: $\ln|w| = -2\ln|y_1| - \int p(x)dx + C_0$. Exponentiate: $w = C_1 \frac{1}{y_1^2} e^{-\int p(x)dx}$. So, $v'(x) = C_1 \frac{e^{-\int p(x)dx}}{[y_1(x)]^2}$. Step 5: Integrate to find $v(x)$. Integrate $v'(x)$ to find $v(x) = C_1 \int \frac{e^{-\int p(x)dx}}{[y_1(x)]^2}dx + C_2$. Step 6: Form $y_2(x)$. For a second linearly independent solution, we can choose $C_1=1$ and $C_2=0$. Thus, $y_2(x) = y_1(x) \int \frac{e^{-\int p(x)dx}}{[y_1(x)]^2}dx$. The general solution is $y(x) = C_1y_1(x) + C_2y_2(x)$. 3. 
Systems of First-Order Linear Differential Equations These involve multiple coupled first-order differential equations, often expressed in matrix form. We focus on homogeneous systems with constant coefficients. 3.1 General Form A system of $n$ first-order linear differential equations can be written in vector-matrix form as $\mathbf{x}' = A\mathbf{x}$, where $\mathbf{x}(t)$ is a column vector of $n$ unknown functions ($x_1(t), \dots, x_n(t)$), $\mathbf{x}'(t)$ is its derivative vector, and $A$ is an $n \times n$ constant matrix. For a 2D system, this is $\begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$. 3.2 Eigenvalue Method for $\mathbf{x}' = A\mathbf{x}$ The solution to $\mathbf{x}' = A\mathbf{x}$ is found by analyzing the eigenvalues and eigenvectors of the matrix $A$. We assume solutions of the form $\mathbf{x}(t) = \mathbf{v}e^{\lambda t}$, where $\lambda$ is an eigenvalue and $\mathbf{v}$ is its corresponding eigenvector. Solution Procedure: Step 1: Find Eigenvalues. Solve the characteristic equation $\det(A - \lambda I) = 0$ for $\lambda$, where $I$ is the identity matrix. The roots are the eigenvalues $\lambda_i$. Step 2: Find Eigenvectors. For each eigenvalue $\lambda_i$, solve the system $(A - \lambda_i I)\mathbf{v}_i = \mathbf{0}$ to find the corresponding eigenvector $\mathbf{v}_i$. Step 3: Construct the General Solution based on Eigenvalue Types. Case 1: Two Distinct Real Eigenvalues ($\lambda_1, \lambda_2$) If $A$ has two distinct real eigenvalues $\lambda_1$ and $\lambda_2$, with corresponding eigenvectors $\mathbf{v}_1$ and $\mathbf{v}_2$, the general solution is a linear combination of the two fundamental solutions: $\mathbf{x}(t) = C_1\mathbf{v}_1e^{\lambda_1 t} + C_2\mathbf{v}_2e^{\lambda_2 t}$. Case 2: Repeated Real Eigenvalues ($\lambda_1 = \lambda_2 = \lambda$) Subcase 2a: Two Linearly Independent Eigenvectors. 
If the repeated eigenvalue $\lambda$ has a multiplicity of 2 and there are two linearly independent eigenvectors $\mathbf{v}_1, \mathbf{v}_2$ (this happens if $A$ is a diagonal matrix or can be diagonalized), then the general solution is $\mathbf{x}(t) = C_1\mathbf{v}_1e^{\lambda t} + C_2\mathbf{v}_2e^{\lambda t}$. Subcase 2b: Only One Linearly Independent Eigenvector. If the repeated eigenvalue $\lambda$ only yields one linearly independent eigenvector $\mathbf{v}$ (algebraic multiplicity is 2, geometric multiplicity is 1), then one solution is $\mathbf{x}_1(t) = \mathbf{v}e^{\lambda t}$. A second linearly independent solution is found using a generalized eigenvector: $\mathbf{x}_2(t) = (\mathbf{v}t + \mathbf{\eta})e^{\lambda t}$, where $\mathbf{\eta}$ is a generalized eigenvector that satisfies the equation $(A - \lambda I)\mathbf{\eta} = \mathbf{v}$. The general solution is then $\mathbf{x}(t) = C_1\mathbf{x}_1(t) + C_2\mathbf{x}_2(t)$. Case 3: Complex Conjugate Eigenvalues ($\lambda = \alpha \pm i\beta$) If $A$ has complex conjugate eigenvalues $\lambda_1 = \alpha + i\beta$ and $\lambda_2 = \alpha - i\beta$, find the eigenvector $\mathbf{v}$ corresponding to one of them (e.g., $\lambda_1 = \alpha + i\beta$). The complex solution is $\mathbf{z}(t) = \mathbf{v}e^{\lambda_1 t} = \mathbf{v}e^{(\alpha + i\beta)t} = \mathbf{v}e^{\alpha t}(\cos(\beta t) + i\sin(\beta t))$. The real and imaginary parts of this complex solution form two linearly independent real solutions. Let $\mathbf{v} = \mathbf{a} + i\mathbf{b}$. Then: $\mathbf{z}(t) = (\mathbf{a} + i\mathbf{b})e^{\alpha t}(\cos(\beta t) + i\sin(\beta t)) = e^{\alpha t}[(\mathbf{a}\cos(\beta t) - \mathbf{b}\sin(\beta t)) + i(\mathbf{a}\sin(\beta t) + \mathbf{b}\cos(\beta t))]$. The general real solution is $\mathbf{x}(t) = C_1 e^{\alpha t}(\mathbf{a}\cos(\beta t) - \mathbf{b}\sin(\beta t)) + C_2 e^{\alpha t}(\mathbf{a}\sin(\beta t) + \mathbf{b}\cos(\beta t))$. 
This can also be written as $\mathbf{x}(t) = C_1 \text{Re}(\mathbf{v}e^{\lambda_1 t}) + C_2 \text{Im}(\mathbf{v}e^{\lambda_1 t})$. 4. Initial and Boundary Conditions Differential equations often have arbitrary constants in their general solutions. These constants are determined by imposing additional conditions. 4.1 Initial Value Problems (IVPs) For an $n$-th order ODE, an IVP specifies the value of the solution and its first $n-1$ derivatives at a single point (the initial point). For a first-order ODE $y' = f(x,y)$, an IVP is $y(x_0) = y_0$. For a second-order ODE $y'' = f(x,y,y')$, an IVP is $y(x_0) = y_0$ and $y'(x_0) = y_1$. The constants $C_i$ in the general solution are uniquely determined by these conditions. 4.2 Boundary Value Problems (BVPs) For an $n$-th order ODE, a BVP specifies the value of the solution or its derivatives at two or more different points (the boundary points). For a second-order ODE, typical BVPs are: $y(a) = y_a$, $y(b) = y_b$ (Dirichlet conditions) $y'(a) = y_a'$, $y'(b) = y_b'$ (Neumann conditions) Mixed conditions, e.g., $y(a) = y_a$, $y'(b) = y_b'$. Unlike IVPs, BVPs do not always have a unique solution or even any solution. They can have infinitely many solutions or no solution at all. 5. Laplace Transform Method for ODEs The Laplace Transform is a powerful tool for solving linear ODEs, especially non-homogeneous equations with discontinuous forcing functions or for IVPs. It converts a differential equation in the time domain ($t$) into an algebraic equation in the frequency domain ($s$). 
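The defining integral $F(s) = \int_0^\infty e^{-st}f(t)\,dt$ can be made concrete with a numerical sketch; the truncation point $T$ and the trapezoid rule below are numerical conveniences of this sketch, not part of the method. The known pair $\mathcal{L}\{e^{-t}\} = \frac{1}{s+1}$ serves as the check:

```python
import math

# Numerical sketch of the Laplace transform integral, truncated at t = T
# and approximated by a composite trapezoid rule.

def laplace(f, s, T=40.0, n=100_000):
    """Approximate F(s) = integral_0^T e^{-st} f(t) dt by the trapezoid rule."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

# Sanity check against the exact pair L{e^{-t}} = 1/(s+1).
for s in [0.5, 1.0, 2.0]:
    approx = laplace(lambda t: math.exp(-t), s)
    assert abs(approx - 1.0 / (s + 1.0)) < 1e-4
```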
5.1 Definition and Properties Laplace Transform: $\mathcal{L}\{f(t)\} = F(s) = \int_0^\infty e^{-st}f(t)dt$ Inverse Laplace Transform: $\mathcal{L}^{-1}\{F(s)\} = f(t)$ Key Properties: $\mathcal{L}\{y'(t)\} = sY(s) - y(0)$ $\mathcal{L}\{y''(t)\} = s^2Y(s) - sy(0) - y'(0)$ $\mathcal{L}\{c_1f_1(t) + c_2f_2(t)\} = c_1F_1(s) + c_2F_2(s)$ (Linearity) $\mathcal{L}\{e^{at}f(t)\} = F(s-a)$ (First Shifting Theorem) $\mathcal{L}\{t^n f(t)\} = (-1)^n \frac{d^n}{ds^n}F(s)$ $\mathcal{L}\{u(t-a)f(t-a)\} = e^{-as}F(s)$ (Second Shifting Theorem, where $u(t-a)$ is the Heaviside step function) $\mathcal{L}\{\int_0^t f(\tau)d\tau\} = \frac{F(s)}{s}$ $\mathcal{L}\{(f*g)(t)\} = F(s)G(s)$ (Convolution Theorem, $(f*g)(t) = \int_0^t f(\tau)g(t-\tau)d\tau$) 5.2 Solving ODEs using Laplace Transform Procedure for $ay'' + by' + cy = G(t)$ with $y(0)=y_0, y'(0)=y_1$: Step 1: Take Laplace Transform of both sides. Apply $\mathcal{L}$ to each term of the ODE, using the properties for derivatives and initial conditions. $a(s^2Y(s) - sy_0 - y_1) + b(sY(s) - y_0) + cY(s) = \mathcal{L}\{G(t)\} = G(s)$ Step 2: Solve for $Y(s)$. This results in an algebraic equation for $Y(s)$. Isolate $Y(s)$: $Y(s)(as^2 + bs + c) = G(s) + a(sy_0 + y_1) + by_0$ $Y(s) = \frac{G(s) + a(sy_0 + y_1) + by_0}{as^2 + bs + c}$ Step 3: Find the Inverse Laplace Transform of $Y(s)$. Use partial fraction decomposition for $Y(s)$ (if necessary) and then apply the inverse Laplace transform to each term to find $y(t) = \mathcal{L}^{-1}\{Y(s)\}$. This $y(t)$ is the unique solution to the IVP. 6. Series Solutions of ODEs When an ODE has variable coefficients and cannot be solved by elementary methods, a series solution (power series) can be used. This method assumes the solution can be expressed as a power series $y(x) = \sum_{n=0}^\infty a_n(x-x_0)^n$. 6.1 Ordinary and Singular Points Consider $y'' + p(x)y' + q(x)y = 0$. 
Ordinary Point: If $p(x)$ and $q(x)$ are analytic at $x_0$ (i.e., they have convergent power series expansions about $x_0$), then $x_0$ is an ordinary point. We can assume a solution of the form $y(x) = \sum_{n=0}^\infty a_n(x-x_0)^n$. Singular Point: If $x_0$ is not an ordinary point, it's a singular point. Regular Singular Point: If $(x-x_0)p(x)$ and $(x-x_0)^2q(x)$ are analytic at $x_0$. We use the Method of Frobenius. Irregular Singular Point: If $x_0$ is not a regular singular point. 6.2 Power Series Method (for Ordinary Points) Procedure for $y'' + p(x)y' + q(x)y = 0$ around an ordinary point $x_0=0$: Step 1: Assume a Power Series Solution. Let $y(x) = \sum_{n=0}^\infty a_nx^n$. Step 2: Differentiate. Calculate $y'(x) = \sum_{n=1}^\infty na_nx^{n-1}$ and $y''(x) = \sum_{n=2}^\infty n(n-1)a_nx^{n-2}$. Step 3: Substitute into ODE. Substitute $y, y', y''$ and the power series for $p(x)$ and $q(x)$ into the differential equation. Step 4: Shift Indices and Combine. Adjust the summation indices so that all terms have $x^k$. Combine terms with the same power of $x$. Step 5: Equate Coefficients to Zero. Since the power series must be zero for all $x$, the coefficient of each power of $x$ must be zero. This leads to a recurrence relation for the coefficients $a_n$. Step 6: Solve Recurrence Relation. Solve for $a_n$ in terms of $a_0$ and $a_1$. $a_0$ and $a_1$ are arbitrary constants, corresponding to the two fundamental solutions. Step 7: Write the General Solution. Substitute the coefficients back into the series: $y(x) = a_0y_1(x) + a_1y_2(x)$. 6.3 Method of Frobenius (for Regular Singular Points) Procedure for $y'' + p(x)y' + q(x)y = 0$ around a regular singular point $x_0=0$: Step 1: Assume a Frobenius Series Solution. Let $y(x) = \sum_{n=0}^\infty a_nx^{n+r}$, where $a_0 \neq 0$ and $r$ is a constant to be determined. Step 2: Differentiate. Calculate $y'(x) = \sum_{n=0}^\infty (n+r)a_nx^{n+r-1}$ and $y''(x) = \sum_{n=0}^\infty (n+r)(n+r-1)a_nx^{n+r-2}$. 
Step 3: Substitute into ODE. Substitute $y, y', y''$ and the series for $(x-x_0)p(x)$ and $(x-x_0)^2q(x)$ into the modified equation (multiply the original ODE by $(x-x_0)^2$). Step 4: Equate Lowest Power Coefficient to Zero (Indicial Equation). The coefficient of the lowest power of $x$ (usually $x^r$) gives a quadratic equation in $r$, called the indicial equation. The roots of this equation ($r_1, r_2$) determine the form of the solutions. Step 5: Find Recurrence Relation. Equate the coefficient of $x^{n+r}$ (for $n \ge 1$) to zero to find a recurrence relation for $a_n$. Step 6: Determine Solutions based on Roots of Indicial Equation. Case 1: Distinct Roots, $r_1 - r_2$ is not an integer. $y_1(x) = \sum_{n=0}^\infty a_n(r_1)x^{n+r_1}$ and $y_2(x) = \sum_{n=0}^\infty a_n(r_2)x^{n+r_2}$. Case 2: Repeated Roots, $r_1 = r_2 = r$. $y_1(x) = \sum_{n=0}^\infty a_n(r)x^{n+r}$ and $y_2(x) = y_1(x)\ln|x| + \sum_{n=1}^\infty b_n x^{n+r}$. Case 3: Distinct Roots, $r_1 - r_2$ is a positive integer. $y_1(x) = \sum_{n=0}^\infty a_n(r_1)x^{n+r_1}$. The second solution might be of the form $y_2(x) = C y_1(x)\ln|x| + \sum_{n=0}^\infty b_n x^{n+r_2}$ (if $a_n(r_2)$ becomes infinite, $C \neq 0$) or $y_2(x) = \sum_{n=0}^\infty a_n(r_2)x^{n+r_2}$ (if $C=0$).
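The power-series method of Section 6.2 can be sketched on the classic example $y'' + y = 0$ about the ordinary point $x_0 = 0$. Substituting $y = \sum a_nx^n$ and shifting indices gives the recurrence $a_{n+2} = -a_n/((n+2)(n+1))$, so $a_0$ generates the cosine series and $a_1$ the sine series:

```python
import math

# Sketch of the power-series method on y'' + y = 0 about x0 = 0,
# using the recurrence a_{n+2} = -a_n / ((n+2)(n+1)).

def series_solution(a0, a1, n_terms=25):
    """Coefficients a_0..a_{n_terms-1} generated by the recurrence."""
    a = [0.0] * n_terms
    a[0], a[1] = a0, a1
    for n in range(n_terms - 2):
        a[n + 2] = -a[n] / ((n + 2) * (n + 1))
    return a

def eval_series(a, x):
    """Evaluate the partial sum of the power series at x."""
    return sum(c * x**k for k, c in enumerate(a))

coeffs_cos = series_solution(a0=1.0, a1=0.0)  # fundamental solution y1
coeffs_sin = series_solution(a0=0.0, a1=1.0)  # fundamental solution y2

# The partial sums should match cos(x) and sin(x) for moderate x.
for x in [0.0, 0.5, 1.0, 2.0]:
    assert abs(eval_series(coeffs_cos, x) - math.cos(x)) < 1e-10
    assert abs(eval_series(coeffs_sin, x) - math.sin(x)) < 1e-10
```

The two arbitrary constants $a_0$ and $a_1$ of Step 6 appear here as the two inputs of `series_solution`, exactly as in the general solution $y = a_0y_1 + a_1y_2$.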