### Relation
- If $\mathbf{A}$ and $\mathbf{B}$ are two non-empty sets, then a relation $\mathbf{R}$ is a subset of the Cartesian product of sets $\mathbf{A}$ and $\mathbf{B}$.
- $\mathbf{m}$ = No. of elements in set $\mathbf{A}$
- $\mathbf{n}$ = No. of elements in set $\mathbf{B}$
- **Total Relations:** $\mathbf{2^{mn}}$

#### Types of Relations
- **Identity Relation:** A relation is an identity relation if $\mathbf{R = \{(a,a) : a \in A\}}$.
  - Example: If $\mathbf{A = \{1,2,3\}}$, then $\mathbf{\{(1,1), (2,2), (3,3)\}}$ is the only identity relation.
  - Total No. of identity relations on a set $\mathbf{A}$ = $\mathbf{1}$.
- **Universal Relation:** $\mathbf{R = A \times A}$, i.e., the set of all possible ordered pairs from set $\mathbf{A}$ to $\mathbf{A}$.
  - Example: If $\mathbf{A = \{1,2\}}$, then $\mathbf{\{(1,1), (1,2), (2,1), (2,2)\}}$ is the only universal relation.
  - Total No. of universal relations on set $\mathbf{A}$ = $\mathbf{1}$.
- **Empty Relation:** If $\mathbf{R = \phi}$ (empty set), i.e., no ordered pair satisfies the given condition.
  - Example: $\mathbf{A = \{1,2,3\}}$, $\mathbf{B = \{4,5\}}$, $\mathbf{R = \{(a,b) : a \in A, b \in B, a+b=20\}}$. This is an empty relation.
  - The empty relation is unique, so the total No. of empty relations = $\mathbf{1}$; the No. of non-empty relations = $\mathbf{2^{mn} - 1}$ (subtract $\mathbf{\phi}$ from the total possible relations).

#### Reflexive Relation
- A relation $\mathbf{R}$ on a set $\mathbf{A}$ is reflexive if $\mathbf{(a,a) \in R}$ for all $\mathbf{a \in A}$.
- Note: In a reflexive relation, ordered pairs other than $\mathbf{(a,a)}$ may also exist.
- Example: If $\mathbf{A = \{1,2,3\}}$, then $\mathbf{R_1 = \{(1,1), (2,2), (3,3)\}}$ is reflexive.
- $\mathbf{R_2 = \{(1,1), (2,2), (3,3), (1,2)\}}$ is also reflexive.
- Identity and universal relations are always reflexive, symmetric, and transitive.
- Total No. of reflexive relations = $\mathbf{2^{n^2-n}}$

#### Symmetric Relation
- If $\mathbf{(a,b) \in R \Rightarrow (b,a) \in R}$ for all $\mathbf{a,b \in A}$.
- Total No. of symmetric relations = $\mathbf{2^{\frac{n(n+1)}{2}}}$

#### Anti-symmetric Relation
- If $\mathbf{(a,b) \in R}$ and $\mathbf{(b,a) \in R \Rightarrow a = b}$ (i.e., for $\mathbf{a \neq b}$, $\mathbf{(a,b)}$ and $\mathbf{(b,a)}$ cannot both belong to $\mathbf{R}$), then $\mathbf{R}$ is an anti-symmetric relation.
- Example: $\mathbf{a \le b}$ and $\mathbf{b \le a}$ imply $\mathbf{a = b}$, so "$\mathbf{\le}$" is anti-symmetric.
- Total No. of anti-symmetric relations = $\mathbf{2^{n} \cdot 3^{\frac{n(n-1)}{2}}}$

#### Transitive Relation
- If $\mathbf{(a,b) \in R}$ and $\mathbf{(b,c) \in R \Rightarrow (a,c) \in R}$ for all $\mathbf{a,b,c \in A}$.
- There is no simple closed-form formula for the total number of transitive relations on a set $\mathbf{A}$ (Bell's formula counts equivalence relations, not transitive relations); the counts for small sets are listed below.
- Note: When $\mathbf{R}$ contains no pair of ordered pairs $\mathbf{(a,b)}$ and $\mathbf{(b,c)}$ with a matching middle element (e.g., "x is wife of y"), then $\mathbf{R}$ is vacuously transitive.
- Total transitive relations for:
  - 2 elements = $\mathbf{13}$
  - 3 elements = $\mathbf{171}$
  - 4 elements = $\mathbf{3994}$

#### Equivalence Relation
- If a relation is reflexive, symmetric, and transitive, then it is an equivalence relation.
- Example: $\mathbf{R = \{(L_1, L_2) : L_1 \text{ is parallel to } L_2;\ L_1, L_2 \in \text{set of lines in a plane}\}}$.
- Total equivalence relations (Bell numbers) for:
  - 3 elements = $\mathbf{5}$
  - 4 elements = $\mathbf{15}$
  - 5 elements = $\mathbf{52}$

#### Equivalence Classes
- The subsets into which an equivalence relation partitions the set are called equivalence classes; all elements of a class are related to one another.
- Example: $\mathbf{A = \{0,1,2,\dots,12\}}$, $\mathbf{R = \{(a,b) : |a-b| \text{ is divisible by } 4\}}$.
- Equivalence class of $\mathbf{2}$ is $\mathbf{\{2,6,10\}}$ because $\mathbf{|2-2|, |2-6|, |2-10|}$ are divisible by $\mathbf{4}$.
- Note: A relation may or may not be a function, but a function is always a relation.
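The counting formulas above can be sanity-checked by brute force on a small set. A minimal Python sketch (not part of the original notes; the set `A` is illustrative):

```python
from itertools import product

A = [1, 2, 3]
n = len(A)
pairs = list(product(A, A))                 # all n*n ordered pairs

total = reflexive = symmetric = 0
# every relation on A is a subset of `pairs`, so iterate over all 2^(n^2) subsets
for mask in range(1 << len(pairs)):
    R = {p for i, p in enumerate(pairs) if (mask >> i) & 1}
    total += 1
    if all((a, a) in R for a in A):
        reflexive += 1
    if all((b, a) in R for (a, b) in R):
        symmetric += 1

print(total == 2 ** (n * n))                 # True: 2^(n^2) relations on A
print(reflexive == 2 ** (n * n - n))         # True: 2^(n^2 - n) reflexive relations
print(symmetric == 2 ** (n * (n + 1) // 2))  # True: 2^(n(n+1)/2) symmetric relations
```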
### Functions & Mappings
- If $\mathbf{A}$ and $\mathbf{B}$ are two non-empty sets, then a mapping $\mathbf{f: A \to B}$ is defined as: "each element of set $\mathbf{A}$ is associated with a unique element of set $\mathbf{B}$."
- Set $\mathbf{A}$ is called the **Domain**; set $\mathbf{B}$ is called the **Co-domain**.
- The elements of set $\mathbf{B}$ that are associated with elements of set $\mathbf{A}$ are called **images**; the set of all images is the **Range**.
- Total No. of functions: $\mathbf{n(B)^{n(A)}}$

#### Graphical Definition
- $\mathbf{f}$ is a function from set $\mathbf{A}$ to $\mathbf{B}$ only if every line parallel to the $\mathbf{y}$-axis cuts the graph at exactly one point (vertical line test).
- In these two cases, $\mathbf{f}$ is not a function from $\mathbf{A}$ to $\mathbf{B}$:
  - (i) An element in $\mathbf{A}$ has no image in $\mathbf{B}$.
  - (ii) An element in $\mathbf{A}$ has two images in $\mathbf{B}$.

#### Types of Functions

##### One-One (Injective)
- If different elements of the domain have different images in the co-domain.
- If $\mathbf{f'(x) > 0}$ or $\mathbf{f'(x) < 0}$ throughout the domain (strictly monotonic), then $\mathbf{f}$ is one-one.
- Total one-one functions (with $\mathbf{m = n(A)}$, $\mathbf{n = n(B)}$):
  - If $\mathbf{m \le n}$: $\mathbf{^nP_m}$ (no. of permutations of $\mathbf{n}$ items taken $\mathbf{m}$ at a time)
  - If $\mathbf{m > n}$: $\mathbf{0}$

##### Many-One Function
- If two or more elements of the domain have the same image in the co-domain, then $\mathbf{f}$ is said to be a many-one function.
- Examples: The modulus function, trigonometric functions, the Greatest Integer function, and the Signum function (on $\mathbf{R}$) are many-one functions.
- Total many-one functions: (Total functions) $-$ (One-one functions)

##### Onto Function (Surjective)
- If every element of the co-domain has at least one pre-image in the domain, then the function is onto.
- i.e., Co-domain = Range.
- Examples:
  - $\mathbf{f(x) = 4x+3}$, $\mathbf{f: R \to R}$
  - $\mathbf{f(x) = \sin x}$, $\mathbf{f: R \to [-1,1]}$
  - $\mathbf{f(x) = |x|}$, $\mathbf{f: R \to [0,\infty)}$
  - $\mathbf{f(x) = \lceil x \rceil}$, $\mathbf{f: R \to Z}$
  - All are onto functions as Co-domain = Range.
- Total onto functions (with $\mathbf{m = n(A)}$, $\mathbf{n = n(B)}$):
  - If $\mathbf{m < n}$: $\mathbf{0}$
  - If $\mathbf{m \ge n}$: $\mathbf{\sum_{k=0}^{n} (-1)^k \binom{n}{k} (n-k)^m}$

##### Into Function
- If there exists at least one element in the co-domain which has no pre-image in the domain, the function is said to be an into function.
- In other words: Co-domain $\neq$ Range.
- Total into functions: (Total functions) $-$ (Onto functions)

#### Properties of Functions
- (i) The composition of functions is associative.
- (ii) The composition of two one-one functions is one-one.
- (iii) The composition of two onto functions is onto.
- (iv) The composition of bijective functions is bijective.
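The counting results above (total $\mathbf{= n(B)^{n(A)}}$, one-one $\mathbf{= {}^nP_m}$, onto by inclusion–exclusion) can be verified by enumerating all functions between two small sets. A minimal Python sketch (the sets are illustrative, not from the original notes):

```python
from itertools import product
from math import comb, perm

A, B = [1, 2, 3], ['a', 'b']          # m = n(A) = 3, n = n(B) = 2
m, n = len(A), len(B)

# every function A -> B is a choice of one image in B for each element of A
funcs = list(product(B, repeat=m))
one_one = [f for f in funcs if len(set(f)) == m]
onto = [f for f in funcs if set(f) == set(B)]

print(len(funcs) == n ** m)                            # total functions = n(B)^n(A)
print(len(one_one) == (perm(n, m) if m <= n else 0))   # one-one count (0 here, m > n)
print(len(onto) == sum((-1) ** k * comb(n, k) * (n - k) ** m
                       for k in range(n + 1)))         # onto count by inclusion-exclusion
```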
### Inverse Trigonometric Functions
- Trigonometric functions are periodic functions. Their inverses exist only when the functions are restricted to suitable intervals of their domain and range, as listed below.

| Function | Domain | Range |
|----------|--------|-------|
| $\mathbf{\sin^{-1}x}$ | $\mathbf{[-1,1]}$ | $\mathbf{[-\frac{\pi}{2}, \frac{\pi}{2}]}$ |
| $\mathbf{\cos^{-1}x}$ | $\mathbf{[-1,1]}$ | $\mathbf{[0, \pi]}$ |
| $\mathbf{\tan^{-1}x}$ | $\mathbf{R}$ | $\mathbf{(-\frac{\pi}{2}, \frac{\pi}{2})}$ |
| $\mathbf{\cot^{-1}x}$ | $\mathbf{R}$ | $\mathbf{(0, \pi)}$ |
| $\mathbf{\sec^{-1}x}$ | $\mathbf{R - (-1,1)}$ | $\mathbf{[0, \pi] - \{\frac{\pi}{2}\}}$ |
| $\mathbf{\csc^{-1}x}$ | $\mathbf{R - (-1,1)}$ | $\mathbf{[-\frac{\pi}{2}, \frac{\pi}{2}] - \{0\}}$ |

- $\mathbf{[-\frac{\pi}{2}, \frac{\pi}{2}]}$, $\mathbf{[0, \pi]}$, etc. are called principal ranges.

#### Set 1
- $\mathbf{\sin^{-1}(\sin\theta) = \theta}$, $\mathbf{\theta \in [-\frac{\pi}{2}, \frac{\pi}{2}]}$
- $\mathbf{\cos^{-1}(\cos\theta) = \theta}$, $\mathbf{\theta \in [0, \pi]}$
- $\mathbf{\tan^{-1}(\tan\theta) = \theta}$, $\mathbf{\theta \in (-\frac{\pi}{2}, \frac{\pi}{2})}$
- $\mathbf{\cot^{-1}(\cot\theta) = \theta}$, $\mathbf{\theta \in (0, \pi)}$
- $\mathbf{\sec^{-1}(\sec\theta) = \theta}$, $\mathbf{\theta \in [0, \pi] - \{\frac{\pi}{2}\}}$
- $\mathbf{\csc^{-1}(\csc\theta) = \theta}$, $\mathbf{\theta \in [-\frac{\pi}{2}, \frac{\pi}{2}] - \{0\}}$

#### Set 2
- $\mathbf{\sin(\sin^{-1}x) = x}$, $\mathbf{x \in [-1,1]}$
- $\mathbf{\cos(\cos^{-1}x) = x}$, $\mathbf{x \in [-1,1]}$
- $\mathbf{\tan(\tan^{-1}x) = x}$, $\mathbf{x \in R}$
- $\mathbf{\cot(\cot^{-1}x) = x}$, $\mathbf{x \in R}$
- $\mathbf{\sec(\sec^{-1}x) = x}$, $\mathbf{x \in R - (-1,1)}$
- $\mathbf{\csc(\csc^{-1}x) = x}$, $\mathbf{x \in R - (-1,1)}$

#### Set 3
- $\mathbf{\sin^{-1}(-x) = -\sin^{-1}x}$
- $\mathbf{\tan^{-1}(-x) = -\tan^{-1}x}$
- $\mathbf{\csc^{-1}(-x) = -\csc^{-1}x}$
- $\mathbf{\cos^{-1}(-x) = \pi - \cos^{-1}x}$
- $\mathbf{\sec^{-1}(-x) = \pi - \sec^{-1}x}$
- $\mathbf{\cot^{-1}(-x) = \pi - \cot^{-1}x}$

#### Set 4
- $\mathbf{\sin^{-1}x + \cos^{-1}x = \frac{\pi}{2}}$, $\mathbf{x \in [-1,1]}$
- $\mathbf{\tan^{-1}x + \cot^{-1}x = \frac{\pi}{2}}$, $\mathbf{x \in R}$
- $\mathbf{\sec^{-1}x + \csc^{-1}x = \frac{\pi}{2}}$, $\mathbf{x \in R - (-1,1)}$
- $\mathbf{\tan^{-1}x + \tan^{-1}y = \tan^{-1}\left(\frac{x+y}{1-xy}\right)}$ if $\mathbf{xy < 1}$
- $\mathbf{\tan^{-1}x - \tan^{-1}y = \tan^{-1}\left(\frac{x-y}{1+xy}\right)}$ if $\mathbf{xy > -1}$
- $\mathbf{2\tan^{-1}x = \sin^{-1}\left(\frac{2x}{1+x^2}\right) = \cos^{-1}\left(\frac{1-x^2}{1+x^2}\right) = \tan^{-1}\left(\frac{2x}{1-x^2}\right)}$

#### Set 5
- $\mathbf{\sin^{-1}x + \sin^{-1}y = \sin^{-1}(x\sqrt{1-y^2} + y\sqrt{1-x^2})}$
- $\mathbf{\sin^{-1}x - \sin^{-1}y = \sin^{-1}(x\sqrt{1-y^2} - y\sqrt{1-x^2})}$
- $\mathbf{\cos^{-1}x + \cos^{-1}y = \cos^{-1}(xy - \sqrt{1-x^2}\sqrt{1-y^2})}$
- $\mathbf{\cos^{-1}x - \cos^{-1}y = \cos^{-1}(xy + \sqrt{1-x^2}\sqrt{1-y^2})}$

#### Set 6 (Trigonometric Formulas to Remember)
1. $\mathbf{1 + \sin A = (\cos\frac{A}{2} + \sin\frac{A}{2})^2}$
2. $\mathbf{1 - \sin A = (\cos\frac{A}{2} - \sin\frac{A}{2})^2}$
3. $\mathbf{1 + \cos A = 2\cos^2\frac{A}{2}}$
4. $\mathbf{1 - \cos A = 2\sin^2\frac{A}{2}}$
5. $\mathbf{\frac{1+\tan A}{1-\tan A} = \tan(\frac{\pi}{4} + A)}$
6. $\mathbf{\frac{1-\tan A}{1+\tan A} = \tan(\frac{\pi}{4} - A)}$
7. $\mathbf{\sin 2A = 2\sin A \cos A = \frac{2\tan A}{1+\tan^2 A}}$
8. $\mathbf{\cos 2A = \cos^2 A - \sin^2 A = 2\cos^2 A - 1 = 1 - 2\sin^2 A = \frac{1-\tan^2 A}{1+\tan^2 A}}$
9. $\mathbf{\cos(A \pm B) = \cos A \cos B \mp \sin A \sin B}$
10. $\mathbf{\sin(A \pm B) = \sin A \cos B \pm \cos A \sin B}$
### Matrices
- A set of $\mathbf{m \times n}$ numbers (real or complex) arranged in $\mathbf{m}$ rows and $\mathbf{n}$ columns is called a matrix of order $\mathbf{m \times n}$.
- It is represented by $\mathbf{A, B, C}$, etc. and written as $\mathbf{A = [a_{ij}]_{m \times n}}$.
- $\mathbf{i}$ = element's row position, $\mathbf{j}$ = element's column position.
- $\mathbf{m}$ = No. of rows (horizontal lines), $\mathbf{n}$ = No. of columns (vertical lines).
- Order $\mathbf{m \times n}$ means that each row contains $\mathbf{n}$ elements and each column contains $\mathbf{m}$ elements.
- For the total No. of matrices of different orders that can hold a given number of elements, use the prime factorization method (count the divisors).
- Example: Let $\mathbf{n(A) = 2000}$. Prime factorization: $\mathbf{2000 = 2^4 \times 5^3}$.
  - No. of matrices of different orders = No. of divisors = $\mathbf{(4+1)(3+1) = 20}$.

#### Types of Matrices

##### Row Matrix
- If $\mathbf{m=1}$, i.e., No. of rows = $\mathbf{1}$.
- Example: $\mathbf{[1 \quad 5 \quad 7 \quad 9]}$

##### Column Matrix
- If $\mathbf{n=1}$, i.e., No. of columns = $\mathbf{1}$.
- Example: $\mathbf{\begin{bmatrix} 2 \\ 5 \\ 9 \end{bmatrix}}$

##### Diagonal Elements
- If $\mathbf{i=j}$, i.e., the element's row and column positions are the same. $\mathbf{a_{11}, a_{22}, a_{33}, \dots}$ are called diagonal elements.

##### Square Matrices
- If $\mathbf{m=n}$, i.e., No. of rows = No. of columns.
- Example: $\mathbf{\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}_{3 \times 3}}$

##### Rectangular Matrices
- If $\mathbf{m \neq n}$, i.e., No. of rows $\neq$ No. of columns.
- There are two types of rectangular matrices:
  - (i) **Horizontal Matrix:** If $\mathbf{m < n}$, i.e., No. of rows < No. of columns.
  - (ii) **Vertical Matrix:** If $\mathbf{m > n}$, i.e., No. of rows > No. of columns.
    - Example: $\mathbf{\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}}$

##### Diagonal Matrix
- In a square matrix, if $\mathbf{a_{ij} = 0}$ for $\mathbf{i \neq j}$, i.e., all non-diagonal elements are $\mathbf{0}$.
- Example: $\mathbf{\begin{bmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 9 \end{bmatrix}}$

##### Scalar Matrix
- In a square matrix, if $\mathbf{a_{ij} = 0}$ for $\mathbf{i \neq j}$ and $\mathbf{a_{ii} = K}$ (where $\mathbf{K}$ is a fixed real or complex number).
- Example: $\mathbf{\begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix}}$

##### Unit Matrix (Identity Matrix)
- In a square matrix, if $\mathbf{a_{ij} = 0}$ for $\mathbf{i \neq j}$ and $\mathbf{a_{ii} = 1}$.
- It is represented by $\mathbf{'I'}$ or $\mathbf{I_2, I_3, I_n}$, etc.
- Example: $\mathbf{\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}}$
- Note: Minimum no. of zeroes in a diagonal or scalar matrix = $\mathbf{n(n-1)}$, where $\mathbf{n}$ is the order of the matrix.
- If $\mathbf{f(x) = x^2+3x+5}$ is a polynomial, then $\mathbf{f(A) = A^2+3A+5I}$.

##### Lower Triangular Matrix
- In a square matrix, if $\mathbf{a_{ij} = 0}$ for $\mathbf{i < j}$, i.e., all the elements above the principal diagonal are zero.

##### Upper Triangular Matrix
- In a square matrix, if $\mathbf{a_{ij} = 0}$ for $\mathbf{i > j}$, i.e., all the elements below the principal diagonal are zero.
- Example: $\mathbf{\begin{bmatrix} 1 & 8 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 9 \end{bmatrix}}$
- Note: Minimum zeroes in an upper (or lower) triangular matrix = $\mathbf{\frac{n(n-1)}{2}}$.

##### Trace of a Matrix
- The sum of all the diagonal elements in a square matrix is called the trace of the matrix.
- $\mathbf{T(A) = \sum a_{ii}}$
- Example: For $\mathbf{\begin{bmatrix} 1 & 0 & 5 \\ 7 & 5 & 3 \\ 3 & 9 & 7 \end{bmatrix}}$, $\mathbf{T(A) = 1+5+7 = 13}$.
- Note: A diagonal matrix is both upper and lower triangular.
- A triangular matrix is called strictly triangular if $\mathbf{a_{ii} = 0}$ for all $\mathbf{i}$.
- The product of two lower triangular matrices is lower triangular, and the product of two upper triangular matrices is upper triangular.

##### Addition and Subtraction of Two Matrices
- The addition and subtraction of two matrices are possible only when the matrices have the same order.

#### Matrix Multiplication
- The product $\mathbf{AB}$ is possible only when the number of columns of the pre-matrix $\mathbf{A}$ equals the number of rows of the post-matrix $\mathbf{B}$ (i.e., each row of $\mathbf{A}$ has as many elements as each column of $\mathbf{B}$).
- Example: $\mathbf{A = \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}}$, $\mathbf{B = \begin{bmatrix} 2 & 3 \\ 5 & 0 \end{bmatrix}}$. $\mathbf{AB = \begin{bmatrix} 1 \cdot 2 + 2 \cdot 5 & 1 \cdot 3 + 2 \cdot 0 \\ 2 \cdot 2 + 3 \cdot 5 & 2 \cdot 3 + 3 \cdot 0 \end{bmatrix} = \begin{bmatrix} 12 & 3 \\ 19 & 6 \end{bmatrix}}$.
- Note: If $\mathbf{A}$ is a matrix of order $\mathbf{m \times n}$ and $\mathbf{B}$ is of order $\mathbf{n \times p}$, then the order of $\mathbf{AB}$ is $\mathbf{m \times p}$.
- $\mathbf{BA}$ is possible only if $\mathbf{m=p}$.

#### Properties of Matrix Operations
- (i) $\mathbf{A+B = B+A}$ (Commutative law)
- (ii) $\mathbf{A+(B+C) = (A+B)+C}$ (Associative law)
- (iii) $\mathbf{A+0 = A = 0+A}$ (Identity law)
- (iv) $\mathbf{A+(-A) = 0 = (-A)+A}$ (Additive inverse law)
- (v) In general $\mathbf{AB \neq BA}$ (multiplication is not commutative; $\mathbf{BA}$ may not even be defined)
- (vi) $\mathbf{A(BC) = (AB)C}$ (Associative law)
- (vii) $\mathbf{A(B+C) = AB+AC}$ (Distributive law)
- (viii) $\mathbf{AI = A = IA}$ (Identity law)
- (ix) $\mathbf{AA^{-1} = I = A^{-1}A}$ (Inverse property)
- (x) $\mathbf{AB = 0}$ does not imply $\mathbf{A = 0}$ or $\mathbf{B = 0}$; both factors can be non-zero.

#### Transpose of a Matrix
- If rows and columns are interchanged, the matrix so formed is the transpose of the given matrix.
- For $\mathbf{A = [a_{ij}]_{m \times n}}$, $\mathbf{A^T = [a_{ji}]_{n \times m}}$.

#### Properties of Transpose
- (i) $\mathbf{(A+B)^T = A^T+B^T}$
- (ii) $\mathbf{(A-B)^T = A^T-B^T}$
- (iii) $\mathbf{(AB)^T = B^T A^T}$
- (iv) $\mathbf{(ABC)^T = C^T B^T A^T}$
- (v) $\mathbf{(KA)^T = K A^T}$ where $\mathbf{K}$ is a constant.
- (vi) $\mathbf{(A^T)^T = A}$

#### Orthogonal Matrix
- If $\mathbf{AA^T = I = A^T A}$, then $\mathbf{A}$ is said to be an orthogonal matrix.
- Note: The determinant of an orthogonal matrix = $\mathbf{\pm 1}$.

#### Special Matrices
- **Idempotent Matrix:** If $\mathbf{A^2 = A}$.
- **Involutory Matrix:** If $\mathbf{A^2 = I}$.
- **Nilpotent Matrix:** If $\mathbf{A^k = 0}$ for some positive integer $\mathbf{k}$.
- **Periodic Matrix:** If $\mathbf{A^{n+1} = A}$ for some integer $\mathbf{n \ge 1}$.

#### Symmetric Matrix
- In a square matrix, if $\mathbf{A^T = A}$ or $\mathbf{a_{ij} = a_{ji}}$, i.e., corresponding rows and columns are identical.
- Example: $\mathbf{\begin{bmatrix} 1 & 2 & 4 \\ 2 & 5 & 9 \\ 4 & 9 & 8 \end{bmatrix}}$

#### Properties of Symmetric Matrix
- (i) For any square matrix $\mathbf{A}$, $\mathbf{A+A^T}$ is symmetric.
- (ii) If $\mathbf{A, B}$ are symmetric matrices, then $\mathbf{AB+BA}$ is also symmetric.
- (iii) If $\mathbf{A}$ is a symmetric matrix, then $\mathbf{A^n}$ is also symmetric for $\mathbf{n \in N}$.

#### Skew-Symmetric Matrix
- In a square matrix, if $\mathbf{A^T = -A}$ or $\mathbf{a_{ij} = -a_{ji}}$ and $\mathbf{a_{ii} = 0}$ (all diagonal elements are $\mathbf{0}$).
- Example: $\mathbf{\begin{bmatrix} 0 & -3 & 5 \\ 3 & 0 & -4 \\ -5 & 4 & 0 \end{bmatrix}}$

#### Properties of Skew-Symmetric Matrix
- (i) For any square matrix $\mathbf{A}$, $\mathbf{A-A^T}$ is skew-symmetric.
- (ii) If $\mathbf{A, B}$ are symmetric matrices, then $\mathbf{AB-BA}$ is skew-symmetric.
- (iii) If $\mathbf{A}$ is a skew-symmetric matrix, then $\mathbf{A^{2n}}$ is symmetric (even positive integral power) and $\mathbf{A^{2n+1}}$ is skew-symmetric (odd positive integral power).
- (iv) Any square matrix can be expressed as a sum of a symmetric and a skew-symmetric matrix: $\mathbf{A = \frac{1}{2}(A+A^T) + \frac{1}{2}(A-A^T)}$ (see the sketch below).
- (v) A matrix which is both symmetric and skew-symmetric is a zero matrix.
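Property (iv) can be checked numerically. A minimal NumPy sketch (not part of the original notes; the matrix entries are arbitrary):

```python
import numpy as np

A = np.array([[1., 2., 4.],
              [0., 5., 9.],
              [7., 3., 8.]])

P = (A + A.T) / 2        # symmetric part
Q = (A - A.T) / 2        # skew-symmetric part

print(np.allclose(P, P.T))          # True: P is symmetric
print(np.allclose(Q, -Q.T))         # True: Q is skew-symmetric
print(np.allclose(np.diag(Q), 0))   # True: diagonal of a skew-symmetric matrix is 0
print(np.allclose(P + Q, A))        # True: A = (A + A^T)/2 + (A - A^T)/2
```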
### Determinants
- A determinant is a unique number associated with a square matrix.
- It is represented by $\mathbf{|A|}$ or $\mathbf{\det(A)}$.
- Viewed as a linear transformation, the determinant measures the scaling of area in 2 dimensions and of volume in 3 dimensions.

#### Properties of Determinants
- (i) $\mathbf{|A^T| = |A|}$
- (ii) $\mathbf{|AB| = |A||B|}$
- (iii) $\mathbf{|KA| = K^n |A|}$ (where $\mathbf{K}$ is a scalar and $\mathbf{n}$ is the order)
- (iv) $\mathbf{|KAB| = K^n |A||B|}$
- (v) If we multiply $\mathbf{R_1}$ by $\mathbf{a}$, $\mathbf{R_2}$ by $\mathbf{b}$, and $\mathbf{R_3}$ by $\mathbf{c}$, the new determinant is $\mathbf{abc|A|}$.
- (vi) Determinant of an orthogonal matrix = $\mathbf{\pm 1}$.
- (vii) Determinant of a skew-symmetric matrix of odd order = $\mathbf{0}$.
- (viii) Vandermonde Matrix: a matrix in which each row (or column) is a geometric progression whose first entry is $\mathbf{1}$.
- (ix) Determinant of a unit matrix is $\mathbf{1}$.

#### Determinant of a $\mathbf{2 \times 2}$ Matrix
- For $\mathbf{A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}}$, $\mathbf{|A| = ad-bc}$.

#### Determinant of a $\mathbf{3 \times 3}$ Matrix
- We can expand this row-wise or column-wise.
- For $\mathbf{A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}}$
- $\mathbf{|A| = a_{11}(a_{22}a_{33}-a_{32}a_{23}) - a_{12}(a_{21}a_{33}-a_{31}a_{23}) + a_{13}(a_{21}a_{32}-a_{31}a_{22})}$.

#### Determinant of a $\mathbf{3 \times 3}$ Matrix by Sarrus Rule
- $\mathbf{|A| = (a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32}) - (a_{13}a_{22}a_{31} + a_{11}a_{23}a_{32} + a_{12}a_{21}a_{33})}$.
- Note: The Sarrus rule is applicable only to $\mathbf{3 \times 3}$ matrices.

#### Minors
- The determinant obtained by deleting the row and column in which a particular element lies is called the minor of that element.
- It is represented by $\mathbf{M_{ij}}$.
- Example: For $\mathbf{A = \begin{bmatrix} 1 & 5 & 7 \\ 2 & 9 & 1 \\ 0 & -3 & 4 \end{bmatrix}}$, $\mathbf{M_{23} = \begin{vmatrix} 1 & 5 \\ 0 & -3 \end{vmatrix} = 1(-3) - 5(0) = -3}$.
- Minor of the $\mathbf{a_{23}}$ element = $\mathbf{-3}$.

#### Cofactors
- $\mathbf{A_{ij} = (-1)^{i+j} M_{ij}}$.
- Example: $\mathbf{A_{23} = (-1)^{2+3} M_{23} = -1(-3) = 3}$.
- Sign chart of cofactors for a $\mathbf{3 \times 3}$ matrix: $\mathbf{\begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix}}$.

#### Working Rule for Evaluating a Determinant without Expanding
1. Look for common factors. If there is a common factor in any row or column, take it out.
2. Add or subtract two or more rows (or columns) and check whether the results become identical; if so, apply the operation.
3. If it is a circulant-type determinant, use $\mathbf{R_1 \to R_1+R_2+R_3}$.
4. If it is a Vandermonde determinant, use $\mathbf{R_1 \to R_1-R_2}$ and $\mathbf{R_2 \to R_2-R_3}$.
5. Try to create zeros in a row or in a column.
6. Apply operations according to the required RHS.
7. Multiply rows or columns by suitable variables, compensating the determinant accordingly.

#### Singular Matrix
- If $\mathbf{|A| = 0}$.
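Properties (i)–(iii) above can be confirmed numerically. A minimal NumPy sketch (not from the original notes; the random matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.integers(-3, 4, (n, n)).astype(float)
B = rng.integers(-3, 4, (n, n)).astype(float)
k = 2.0

det = np.linalg.det
print(np.isclose(det(A.T), det(A)))               # |A^T| = |A|
print(np.isclose(det(A @ B), det(A) * det(B)))    # |AB| = |A||B|
print(np.isclose(det(k * A), k ** n * det(A)))    # |kA| = k^n |A|
```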
#### Non-Singular Matrix
- If $\mathbf{|A| \neq 0}$.
- The maximum value of the determinant of a $\mathbf{3 \times 3}$ matrix with entries from $\mathbf{\{0,1\}}$ is $\mathbf{2}$, and the minimum value is $\mathbf{-2}$.

#### Properties of Determinants
- If all the elements in a row or in a column are zero, then the determinant is $\mathbf{0}$.
- If two rows (or columns) are interchanged, the sign of the determinant changes.
- If two rows (or columns) are identical, or one is $\mathbf{K}$ times another, then the determinant is $\mathbf{0}$.
- For a determinant of type $\mathbf{\begin{vmatrix} x+\alpha & p & l \\ y+\beta & q & m \\ z+\gamma & r & n \end{vmatrix}}$, write it as $\mathbf{\begin{vmatrix} x & p & l \\ y & q & m \\ z & r & n \end{vmatrix} + \begin{vmatrix} \alpha & p & l \\ \beta & q & m \\ \gamma & r & n \end{vmatrix}}$.

### Adjoint & Inverse of a Matrix

#### Adjoint of a Matrix
- $\mathbf{\text{adj}A = [A_{ij}]^T}$.
- A matrix formed by the transpose of the cofactor matrix of a given matrix.
- For $\mathbf{A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{bmatrix}}$, the cofactor matrix is $\mathbf{A_{cf} = \begin{bmatrix} A_{11} & A_{12} & \dots \\ A_{21} & A_{22} & \dots \\ \vdots & \vdots & \ddots \end{bmatrix}}$.
- Then $\mathbf{\text{adj}A = [A_{cf}]^T}$.

#### Properties of Adjoint
- (i) $\mathbf{A (\text{adj}A) = (\text{adj}A)A = |A|I_n}$.
- (ii) $\mathbf{\text{adj}(AB) = (\text{adj}B)(\text{adj}A)}$.
- (iii) $\mathbf{\text{adj}(KA) = K^{n-1}\text{adj}A}$.
- (iv) $\mathbf{\text{adj}(A^T) = (\text{adj}A)^T}$.
- (v) $\mathbf{\text{adj}(A^n) = (\text{adj}A)^n}$.
- (vi) $\mathbf{|\text{adj}A| = |A|^{n-1}}$.
- (vii) $\mathbf{\text{adj}(\text{adj}A) = |A|^{n-2}A}$.
- (viii) $\mathbf{|\text{adj}(\text{adj}A)| = |A|^{(n-1)^2}}$.
- (ix) $\mathbf{|\text{adj}(\text{adj}(\text{adj}A))| = |A|^{(n-1)^3}}$.
- (x) If $\mathbf{A}$ is a symmetric matrix, its adjoint is also symmetric.
- (xi) $\mathbf{|\text{adj}(AB)| = |\text{adj}B||\text{adj}A| = |B|^{n-1}|A|^{n-1}}$.
- (xii) $\mathbf{|A\, \text{adj}A| = |A|^n}$.

#### Adjoint of a $\mathbf{2 \times 2}$ Matrix
- For $\mathbf{A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}}$, $\mathbf{\text{adj}A = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}}$ (interchange the diagonal elements, change the signs of the off-diagonal elements).

#### Adjoint of a $\mathbf{3 \times 3}$ Matrix
- For $\mathbf{A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}}$, $\mathbf{\text{adj}A = \begin{bmatrix} -3 & 6 & -3 \\ 6 & -12 & 6 \\ -3 & 6 & -3 \end{bmatrix}}$.

#### Inverse of a Matrix
- If $\mathbf{AB = BA = I}$, then $\mathbf{A}$ is said to be the inverse of $\mathbf{B}$, and $\mathbf{B}$ is said to be the inverse of $\mathbf{A}$.
- $\mathbf{A^{-1} = \frac{1}{|A|} \text{adj}A}$, where $\mathbf{|A| \neq 0}$.
- Note: $\mathbf{A}$ and $\mathbf{B}$ are square matrices of the same order.

#### Conditions and Properties of Inverse
- (i) $\mathbf{A^{-1}A = AA^{-1} = I}$.
- (ii) $\mathbf{(AB)^{-1} = B^{-1}A^{-1}}$.
- (iii) $\mathbf{(ABC)^{-1} = C^{-1}B^{-1}A^{-1}}$.
- (iv) $\mathbf{(A^T)^{-1} = (A^{-1})^T}$.
- (v) $\mathbf{(\text{adj}A)^{-1} = \text{adj}(A^{-1})}$.
- (vi) If $\mathbf{A}$ is symmetric, then $\mathbf{A^{-1}}$ is symmetric.
- (vii) $\mathbf{|A^{-1}| = |A|^{-1}}$.
- (viii) The inverse of a unit matrix is a unit matrix.
- (ix) If $\mathbf{A^2 = I}$, then $\mathbf{A^{-1} = A}$.
- (x) If $\mathbf{AP = B}$, then $\mathbf{P = A^{-1}B}$.
- (xi) The inverse of a diagonal matrix is a diagonal matrix.
- (xii) The inverse of a scalar matrix is a scalar matrix.
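Since NumPy has no adjugate routine, the sketch below uses the identity $\mathbf{\text{adj}A = |A|\,A^{-1}}$ (valid when $\mathbf{|A| \neq 0}$) to spot-check the adjoint and inverse properties; the matrix is illustrative, not from the notes:

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
n = A.shape[0]
dA = np.linalg.det(A)                 # |A| (non-zero, so A is non-singular)
adjA = dA * np.linalg.inv(A)          # for an invertible A, adj A = |A| * A^(-1)

print(np.allclose(A @ adjA, dA * np.eye(n)))            # A (adj A) = |A| I
print(np.isclose(np.linalg.det(adjA), dA ** (n - 1)))   # |adj A| = |A|^(n-1)
print(np.allclose(np.linalg.inv(A), adjA / dA))         # A^(-1) = adj A / |A|
```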
### Solution of System of Equations
- The system of equations can be written as $\mathbf{AX = B}$, where $\mathbf{A}$ is the coefficient matrix, $\mathbf{X}$ is the variable matrix, and $\mathbf{B}$ is the constant matrix.

#### Three Possibilities for Solutions
1. **Unique Solution:** Consistent.
2. **No Solution:** Inconsistent.
3. **Infinitely Many Solutions:** Consistent.

#### Unique Solution
- If $\mathbf{|A| \neq 0}$, the system has a unique solution: $\mathbf{X = A^{-1}B}$.

#### Many Solutions
- If $\mathbf{|A| = 0}$ and $\mathbf{(\text{adj}A)B = 0}$ (zero matrix).

#### No Solution
- If $\mathbf{|A| = 0}$ and $\mathbf{(\text{adj}A)B \neq 0}$ (non-zero matrix).

#### Homogeneous System of Equations
- $\mathbf{AX = 0}$, i.e., $\mathbf{B=0}$.
- If $\mathbf{|A| \neq 0}$, then $\mathbf{X=0}$ is the only (trivial) solution, $\mathbf{x=y=z=0}$.
- If $\mathbf{|A| = 0}$, then there are infinitely many solutions.

#### Note
- If we have to multiply two matrices first and then solve the system of equations, the product of the two matrices must work out to a scalar matrix (which then gives the required inverse directly).

### Continuity
- A function is said to be continuous at a point if it attains the same value as its limit from the neighbourhood of that point:
- $\mathbf{\lim_{x \to a^-} f(x) = \lim_{x \to a^+} f(x) = f(a)}$.
- In general, when there is no break in the graph, the function is continuous.
- Equivalently, $\mathbf{|f(x) - f(a)|}$ can be made as small as we like by taking $\mathbf{x}$ sufficiently close to $\mathbf{a}$.
- Useful limits:
  - $\mathbf{\lim_{x \to 0} \frac{a^x - 1}{x} = \log a}$ ($\mathbf{a > 0}$)
  - $\mathbf{\lim_{x \to 0} (1+x)^{1/x} = e}$ ($\mathbf{e \approx 2.7183}$)
  - $\mathbf{\lim_{x \to 0} \frac{\log(1+x)}{x} = 1}$
  - $\mathbf{\lim_{x \to \infty} (1+\frac{1}{x})^x = e}$

#### Continuous Functions
- Constant functions are continuous for all $\mathbf{x \in R}$.
- Identity functions are continuous for all $\mathbf{x \in R}$.
- Polynomial functions are continuous for all $\mathbf{x \in R}$.
- Modulus functions are continuous for all $\mathbf{x \in R}$.
- Exponential functions are continuous for all $\mathbf{x \in R}$.
- Sine and cosine functions are continuous for all $\mathbf{x \in R}$.
- Log functions are continuous for $\mathbf{x > 0}$.

#### Properties of Continuous Functions
- The greatest integer function is continuous at non-integral (decimal) values.
- Rational functions $\mathbf{\frac{P(x)}{Q(x)}}$ are continuous wherever $\mathbf{Q(x) \neq 0}$.
- Tangent, secant, cotangent, and cosecant functions are continuous on their domains:
  - Domain of $\mathbf{\tan\theta}$ and $\mathbf{\sec\theta}$ is $\mathbf{R - \{(2n+1)\frac{\pi}{2}\}}$.
  - Domain of $\mathbf{\cot\theta}$ and $\mathbf{\csc\theta}$ is $\mathbf{R - \{n\pi\}}$.
- The composition of two continuous functions is always continuous.
- If $\mathbf{f, g}$ are continuous, then $\mathbf{f \circ g, g \circ f, f \circ f, g \circ g}$ are continuous.
- If $\mathbf{f, g}$ are continuous, then $\mathbf{f+g, f-g, fg, \frac{f}{g}}$ (where $\mathbf{g \neq 0}$) are also continuous.
- If one function is continuous and the other is not, their composition may or may not be continuous.

### Differentiability
- A function is derivable at $\mathbf{x=a}$ if $\mathbf{\lim_{x \to a^-} \frac{f(x)-f(a)}{x-a} = \lim_{x \to a^+} \frac{f(x)-f(a)}{x-a}}$ (both limits exist and are equal).
- When a function is derivable at a point, it must be continuous at that point, but the converse may or may not be true.
- The greatest integer function is derivable at non-integral values.
- $\mathbf{|x-1|+|x-2|}$ is not derivable at $\mathbf{x=1,2}$, although it is continuous at $\mathbf{x=1,2}$.
- $\mathbf{e^x = 1+x+\frac{x^2}{2!} + \dots \infty}$.
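The "continuous but not derivable" example above can be checked numerically by comparing left- and right-hand difference quotients. A minimal Python sketch (step size and point are illustrative):

```python
f = lambda x: abs(x - 1) + abs(x - 2)

a, h = 1.0, 1e-6
left = (f(a) - f(a - h)) / h          # left-hand derivative estimate  ~ -1
right = (f(a + h) - f(a)) / h         # right-hand derivative estimate ~  0
gap = abs(f(a + h) - f(a))            # tiny -> continuous at x = 1

print(round(left, 3), round(right, 3))   # -1.0  0.0  (unequal => not derivable)
print(gap < 1e-5)                        # True       (no jump => continuous)
```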
- $\mathbf{a^x = 1+x(\log a) + \frac{x^2}{2!}(\log a)^2 + \dots \infty}$.
- $\mathbf{\log|x|}$ is not defined (hence not continuous) at $\mathbf{x=0}$, and $\mathbf{|\log|x||}$ is not differentiable at $\mathbf{x=-1,0,1}$.

### Differentiation
- Differentiation is the process of finding the rate of change of a dependent variable $\mathbf{y}$ with respect to an independent variable $\mathbf{x}$.
- The slope of the tangent to the curve is represented by $\mathbf{\frac{dy}{dx}, f'(x), y', y_1, D}$.
- All formulas are derived using the ab-initio (first principles) method.

#### Formulas of Differentiation
- $\mathbf{\frac{d}{dx}(x^n) = nx^{n-1}}$
- $\mathbf{\frac{d}{dx}(ax+b)^n = n(ax+b)^{n-1} \cdot a}$
- $\mathbf{\frac{d}{dx}(\frac{1}{x^n}) = -\frac{n}{x^{n+1}}}$
- $\mathbf{\frac{d}{dx}(\sqrt{x}) = \frac{1}{2\sqrt{x}}}$
- $\mathbf{\frac{d}{dx}(\sqrt{ax+b}) = \frac{a}{2\sqrt{ax+b}}}$
- $\mathbf{\frac{d}{dx}(c) = 0}$
- $\mathbf{\frac{d}{dx}(e^x) = e^x}$
- $\mathbf{\frac{d}{dx}(a^x) = a^x \log a}$
- $\mathbf{\frac{d}{dx}(\log x) = \frac{1}{x}}$
- $\mathbf{\frac{d}{dx}(\log_a x) = \frac{1}{x \log a}}$
- $\mathbf{\frac{d}{dx}(\log(ax+b)) = \frac{a}{ax+b}}$
- Note: here $\mathbf{\log x}$ denotes $\mathbf{\log_e x}$ (the natural logarithm).
- $\mathbf{\frac{d}{dx}(\sin x) = \cos x}$
- $\mathbf{\frac{d}{dx}(\cos x) = -\sin x}$
- $\mathbf{\frac{d}{dx}(\tan x) = \sec^2 x}$
- $\mathbf{\frac{d}{dx}(\cot x) = -\csc^2 x}$
- $\mathbf{\frac{d}{dx}(\sec x) = \sec x \tan x}$
- $\mathbf{\frac{d}{dx}(\csc x) = -\csc x \cot x}$

#### Rules of Differentiation
- **Product Rule:** $\mathbf{\frac{d}{dx}(uv) = u\frac{dv}{dx} + v\frac{du}{dx}}$.
- **Quotient Rule:** $\mathbf{\frac{d}{dx}(\frac{u}{v}) = \frac{v\frac{du}{dx} - u\frac{dv}{dx}}{v^2}}$.
- **Chain Rule:** $\mathbf{\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}}$.

#### Implicit Function
- When $\mathbf{y}$ is not directly expressed in terms of $\mathbf{x}$, the function is said to be implicit.
- Example: $\mathbf{x^2+y^2+2gx+2fy+c=0}$.

#### Parametric Form
- When both the dependent variable $\mathbf{y}$ and the independent variable $\mathbf{x}$ depend on another variable $\mathbf{t}$, then $\mathbf{\frac{dy}{dx} = \frac{dy/dt}{dx/dt}}$.

#### Derivative of a Function with respect to another Function
- For $\mathbf{y = f(x)}$ and $\mathbf{z = g(x)}$, $\mathbf{\frac{dy}{dz} = \frac{f'(x)}{g'(x)}}$.

#### Inverse Trigonometric Differentiation
- $\mathbf{\frac{d}{dx}(\sin^{-1}x) = \frac{1}{\sqrt{1-x^2}}}$
- $\mathbf{\frac{d}{dx}(\cos^{-1}x) = -\frac{1}{\sqrt{1-x^2}}}$
- $\mathbf{\frac{d}{dx}(\tan^{-1}x) = \frac{1}{1+x^2}}$
- $\mathbf{\frac{d}{dx}(\cot^{-1}x) = -\frac{1}{1+x^2}}$
- $\mathbf{\frac{d}{dx}(\sec^{-1}x) = \frac{1}{x\sqrt{x^2-1}}}$
- $\mathbf{\frac{d}{dx}(\csc^{-1}x) = -\frac{1}{x\sqrt{x^2-1}}}$

#### Substitution Method for Inverse Trigonometric Functions
- $\mathbf{\sqrt{a^2-x^2}}$: put $\mathbf{x = a\sin\theta}$ or $\mathbf{a\cos\theta}$.
- $\mathbf{\sqrt{a^2+x^2}}$: put $\mathbf{x = a\tan\theta}$ or $\mathbf{a\cot\theta}$.
- $\mathbf{\sqrt{x^2-a^2}}$: put $\mathbf{x = a\sec\theta}$ or $\mathbf{a\csc\theta}$.
- $\mathbf{\sqrt{1+x}, \sqrt{1-x}, \sqrt{1-x^2}}$: put $\mathbf{x = \cos\theta}$ (or $\mathbf{x = \cos 2\theta}$ when $\mathbf{\sqrt{1+x}}$ and $\mathbf{\sqrt{1-x}}$ occur together).
- $\mathbf{\frac{1+x}{1-x}, \frac{1-x}{1+x}}$: put $\mathbf{x = \tan\theta}$.
- Note: $\mathbf{\sin^{-1}x + \cos^{-1}x = \tan^{-1}x + \cot^{-1}x = \sec^{-1}x + \csc^{-1}x = \frac{\pi}{2}}$.
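The derivative table and the chain rule can be spot-checked symbolically. A minimal SymPy sketch (assumes SymPy is available; the functions chosen are taken from the lists above):

```python
import sympy as sp

x = sp.symbols('x')
checks = {
    sp.asin(x): 1 / sp.sqrt(1 - x**2),   # d/dx sin^-1 x
    sp.atan(x): 1 / (1 + x**2),          # d/dx tan^-1 x
    sp.log(x):  1 / x,                   # d/dx log x
    sp.exp(x):  sp.exp(x),               # d/dx e^x
}
for f, expected in checks.items():
    assert sp.simplify(sp.diff(f, x) - expected) == 0

# chain rule: d/dx sin(x^2) = 2x cos(x^2)
assert sp.simplify(sp.diff(sp.sin(x**2), x) - 2*x*sp.cos(x**2)) == 0
print("all listed derivative formulas verified")
```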
#### Tangent and Normal
- **Slope:** The tangent of the angle which a line makes with the positive direction of the x-axis.
- It is represented by $\mathbf{m}$.
- $\mathbf{m = \tan\theta = \frac{y_2-y_1}{x_2-x_1} = \frac{dy}{dx}}$.
- For a linear equation: $\mathbf{m = - \frac{\text{Coefficient of } x}{\text{Coefficient of } y}}$.
- **Tangent:** The line touching the curve $\mathbf{y=f(x)}$ at a point.
- **Normal:** The line perpendicular to the tangent at the point of contact.
- When the tangent is parallel to the x-axis: $\mathbf{\frac{dy}{dx} = 0}$.
- When the tangent is parallel to the y-axis: $\mathbf{\frac{dx}{dy} = 0}$.

#### Rolle's Theorem
- If $\mathbf{y=f(x)}$ is a function such that:
  - (i) It is continuous on $\mathbf{[a,b]}$.
  - (ii) It is derivable on $\mathbf{(a,b)}$.
  - (iii) $\mathbf{f(a) = f(b)}$.
- Then there exists at least one value $\mathbf{c \in (a,b)}$ such that $\mathbf{f'(c) = 0}$.
- Rolle's theorem is useful for finding at least one point on the curve where the tangent is parallel to the x-axis.

#### Lagrange's Mean Value Theorem
- If $\mathbf{y=f(x)}$ is a function such that:
  - (i) It is continuous on $\mathbf{[a,b]}$.
  - (ii) It is derivable on $\mathbf{(a,b)}$.
- Then there exists at least one value $\mathbf{c \in (a,b)}$ such that $\mathbf{\frac{f(b)-f(a)}{b-a} = f'(c)}$.
- Lagrange's mean value theorem gives at least one point on the curve where the tangent is parallel to the chord joining $\mathbf{(a,f(a))}$ and $\mathbf{(b,f(b))}$.

#### Strictly Increasing and Strictly Decreasing
- If $\mathbf{x_1 < x_2 \Rightarrow f(x_1) < f(x_2)}$, then $\mathbf{f}$ is strictly increasing.
- If $\mathbf{x_1 < x_2 \Rightarrow f(x_1) > f(x_2)}$, then $\mathbf{f}$ is strictly decreasing.
- If $\mathbf{f'(x) > 0}$, $\mathbf{f}$ is an increasing function; if $\mathbf{f'(x) < 0}$, $\mathbf{f}$ is a decreasing function.
- Example: For $\mathbf{f(x) = e^{ax}}$ with $\mathbf{a > 0}$, $\mathbf{f'(x) = ae^{ax} > 0}$ for all $\mathbf{x \in R}$, so $\mathbf{f}$ is strictly increasing.
- **Local Maximum:** A function $\mathbf{f}$ has a maximum value in an interval $\mathbf{I}$ at $\mathbf{x_0}$ if $\mathbf{x_0 \in I}$ and $\mathbf{f(x_0) \ge f(x)}$ for all $\mathbf{x \in I}$. $\mathbf{f(x_0)}$ is the maximum value and $\mathbf{x_0}$ is the point of maximum.
- **Local Minimum:** A function $\mathbf{f}$ has a minimum value in an interval $\mathbf{I}$ at $\mathbf{x_0}$ if $\mathbf{x_0 \in I}$ and $\mathbf{f(x_0) \le f(x)}$ for all $\mathbf{x \in I}$. $\mathbf{f(x_0)}$ is the minimum value and $\mathbf{x_0}$ is the point of minimum.

#### First Derivative Test
- Let $\mathbf{f(x)}$ be a differentiable function of $\mathbf{x}$ and $\mathbf{x \in I}$.
- (i) $\mathbf{x_0}$ is a local maximum if $\mathbf{f'(x_0) = 0}$ and $\mathbf{f'(x)}$ changes sign from $\mathbf{+}$ to $\mathbf{-}$ as $\mathbf{x}$ increases through $\mathbf{x_0}$.
- (ii) $\mathbf{x_0}$ is a local minimum if $\mathbf{f'(x_0) = 0}$ and $\mathbf{f'(x)}$ changes sign from $\mathbf{-}$ to $\mathbf{+}$ as $\mathbf{x}$ increases through $\mathbf{x_0}$.
- (iii) If $\mathbf{f'(x_0) = 0}$ and $\mathbf{f'(x)}$ does not change sign, then $\mathbf{x_0}$ is a point of inflection.

#### Higher Order Derivative Test
- (i) $\mathbf{x_0}$ is a local maximum if $\mathbf{f'(x_0) = 0}$ and $\mathbf{f''(x_0) < 0}$.
- (ii) $\mathbf{x_0}$ is a local minimum if $\mathbf{f'(x_0) = 0}$ and $\mathbf{f''(x_0) > 0}$.
- (iii) If $\mathbf{f''(x_0) = 0}$, then find $\mathbf{f'''(x_0)}$. If $\mathbf{f'''(x_0) \neq 0}$, then $\mathbf{x_0}$ is a point of inflection.
- (iv) If $\mathbf{f''(x_0) = 0}$ and $\mathbf{f'''(x_0) = 0}$, examine $\mathbf{f^{(4)}(x_0)}$: if $\mathbf{f^{(4)}(x_0) < 0}$, $\mathbf{x_0}$ is a local maximum; if $\mathbf{f^{(4)}(x_0) > 0}$, $\mathbf{x_0}$ is a local minimum.
- For trigonometric or logarithmic functions, the higher-order derivative test is very useful.

#### Approximation Value of a Number or a Function
- **Approximate value:** $\mathbf{f(x + \Delta x) \approx f(x) + f'(x)\,\Delta x}$ (original value + derivative $\mathbf{\times}$ change).
- **Relative error** in $\mathbf{V = \frac{dV}{V}}$.
- **Relative error** in $\mathbf{A = \frac{dA}{A}}$.
- It is not necessary that a function has both maximum and minimum values.
- Maximum value of $\mathbf{a\sin x + b\cos x = \sqrt{a^2+b^2}}$.
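A short SymPy sketch of the second (higher order) derivative test on a sample function chosen for illustration (not from the notes):

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x                          # illustrative function

fp, fpp = sp.diff(f, x), sp.diff(f, x, 2)
critical = sp.solve(sp.Eq(fp, 0), x)    # f'(x) = 0  ->  x = -1, 1

for c in critical:
    kind = "local max" if fpp.subs(x, c) < 0 else "local min"
    print(c, kind, "value =", f.subs(x, c))
# -1 local max value = 2
#  1 local min value = -2
```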
### Integration (Antiderivative)

#### Basic Formulas
- $\mathbf{\int x^n dx = \frac{x^{n+1}}{n+1} + C}$ (for $\mathbf{n \neq -1}$)
- $\mathbf{\int \frac{1}{x^n} dx = -\frac{1}{(n-1)x^{n-1}} + C}$ (for $\mathbf{n \neq 1}$)
- $\mathbf{\int (ax+b)^n dx = \frac{(ax+b)^{n+1}}{(n+1)a} + C}$
- $\mathbf{\int \frac{1}{ax+b} dx = \frac{1}{a} \log|ax+b| + C}$
- $\mathbf{\int \frac{1}{\sqrt{x}} dx = 2\sqrt{x} + C}$
- $\mathbf{\int \frac{1}{\sqrt{ax+b}} dx = \frac{2\sqrt{ax+b}}{a} + C}$
- $\mathbf{\int e^x dx = e^x + C}$
- $\mathbf{\int a^x dx = \frac{a^x}{\log a} + C}$
- $\mathbf{\int \sin x\, dx = -\cos x + C}$
- $\mathbf{\int \cos x\, dx = \sin x + C}$
- $\mathbf{\int \tan x\, dx = \log|\sec x| + C = -\log|\cos x| + C}$
- $\mathbf{\int \cot x\, dx = \log|\sin x| + C}$
- $\mathbf{\int \sec x\, dx = \log|\sec x + \tan x| + C = \log|\tan(\frac{\pi}{4}+\frac{x}{2})| + C}$
- $\mathbf{\int \csc x\, dx = \log|\csc x - \cot x| + C = \log|\tan(\frac{x}{2})| + C}$
- $\mathbf{\int \sec^2 x\, dx = \tan x + C}$
- $\mathbf{\int \csc^2 x\, dx = -\cot x + C}$
- $\mathbf{\int \sec x \tan x\, dx = \sec x + C}$
- $\mathbf{\int \csc x \cot x\, dx = -\csc x + C}$

#### Special Forms
- $\mathbf{\int \frac{dx}{x^2+a^2} = \frac{1}{a}\tan^{-1}\frac{x}{a} + C}$
- $\mathbf{\int \frac{dx}{x^2-a^2} = \frac{1}{2a}\log|\frac{x-a}{x+a}| + C}$
- $\mathbf{\int \frac{dx}{a^2-x^2} = \frac{1}{2a}\log|\frac{a+x}{a-x}| + C}$
- $\mathbf{\int \frac{dx}{\sqrt{a^2-x^2}} = \sin^{-1}\frac{x}{a} + C}$
- $\mathbf{\int \frac{dx}{\sqrt{x^2+a^2}} = \log|x+\sqrt{x^2+a^2}| + C}$
- $\mathbf{\int \frac{dx}{\sqrt{x^2-a^2}} = \log|x+\sqrt{x^2-a^2}| + C}$
- $\mathbf{\int \sqrt{a^2-x^2}\, dx = \frac{x}{2}\sqrt{a^2-x^2} + \frac{a^2}{2}\sin^{-1}\frac{x}{a} + C}$
- $\mathbf{\int \sqrt{x^2+a^2}\, dx = \frac{x}{2}\sqrt{x^2+a^2} + \frac{a^2}{2}\log|x+\sqrt{x^2+a^2}| + C}$
- $\mathbf{\int \sqrt{x^2-a^2}\, dx = \frac{x}{2}\sqrt{x^2-a^2} - \frac{a^2}{2}\log|x+\sqrt{x^2-a^2}| + C}$
- $\mathbf{\int e^{ax}\sin(bx)\, dx = \frac{e^{ax}}{a^2+b^2}(a\sin(bx) - b\cos(bx)) + C}$
- $\mathbf{\int e^{ax}\cos(bx)\, dx = \frac{e^{ax}}{a^2+b^2}(a\cos(bx) + b\sin(bx)) + C}$
- $\mathbf{\int x e^x dx = e^x(x-1) + C}$
- $\mathbf{\int \log x\, dx = x(\log x - 1) + C}$

#### Integration by Parts
- $\mathbf{\int u v\, dx = u \int v\, dx - \int \left(\frac{du}{dx} \int v\, dx\right) dx}$.
- **ILATE (LIATE) Rule** for choosing the first function $\mathbf{u}$ — take as first the function that appears earlier in this list:
  - **I** $\to$ Inverse trigonometric
  - **L** $\to$ Logarithmic
  - **A** $\to$ Algebraic
  - **T** $\to$ Trigonometric
  - **E** $\to$ Exponential
- For rational integrands, the degree of the numerator should be less than that of the denominator; if not, first divide the numerator by the denominator.
- For the standard formulas, the power/angle must be linear in $\mathbf{x}$. Example: $\mathbf{e^{x^2}}$ and $\mathbf{\sin(x^2)}$ are not of this form.
- $\mathbf{\int \sin^m x \cos^n x\, dx}$:
  - If $\mathbf{n}$ is odd, write the integrand as $\mathbf{\sin^m x \cos^{n-1} x \cdot \cos x}$ and convert $\mathbf{\cos^{n-1} x}$ (an even power) into sine functions, then substitute $\mathbf{t = \sin x}$.
  - If $\mathbf{m}$ is odd, write it as $\mathbf{\cos^n x \sin^{m-1} x \cdot \sin x}$ and convert $\mathbf{\sin^{m-1} x}$ into cosine functions, then substitute $\mathbf{t = \cos x}$.
  - If $\mathbf{m, n}$ are both even, convert to multiple angles using $\mathbf{\sin^2 x = \frac{1-\cos 2x}{2}}$ and $\mathbf{\cos^2 x = \frac{1+\cos 2x}{2}}$.

#### Partial Fractions
- For $\mathbf{\int \frac{Q(x)}{P(x)} dx}$ where $\mathbf{Q(x)}$ is a polynomial of degree less than $\mathbf{P(x)}$:
- Case 1: Denominator has distinct linear factors: $\mathbf{\frac{Q(x)}{(x-a)(x-b)} = \frac{A}{x-a} + \frac{B}{x-b}}$.
- Case 2: Denominator has repeated linear factors: $\mathbf{\frac{Q(x)}{(x-a)^2(x-b)} = \frac{A}{x-a} + \frac{B}{(x-a)^2} + \frac{C}{x-b}}$.
- Case 3: Denominator has irreducible quadratic factors: $\mathbf{\frac{Q(x)}{(x^2+ax+b)(x-c)} = \frac{Ax+B}{x^2+ax+b} + \frac{C}{x-c}}$.
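A minimal SymPy sketch that differentiates a few antiderivatives back to the integrand and performs a Case-1 partial-fraction split; the integrands are illustrative, not from the notes:

```python
import sympy as sp

x = sp.symbols('x')

# check a few table entries by differentiating the antiderivative back
for f in [sp.tan(x), 1 / (x**2 + 4), sp.exp(2*x) * sp.sin(x)]:
    F = sp.integrate(f, x)
    assert sp.simplify(sp.diff(F, x) - f) == 0

# partial fractions, Case 1 (distinct linear factors)
expr = (3*x + 5) / ((x - 1) * (x + 2))
print(sp.apart(expr, x))       # 8/(3*(x - 1)) + 1/(3*(x + 2))
```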
### Definite Integral

#### Limit of a Sum
- $\mathbf{\int_a^b f(x) dx = \lim_{h \to 0} h[f(a) + f(a+h) + f(a+2h) + \dots + f(a+(n-1)h)]}$, where $\mathbf{nh = b-a}$.
- $\mathbf{\sum_{k=1}^{n-1} k = \frac{n(n-1)}{2}}$.
- $\mathbf{\sum_{k=1}^{n-1} k^2 = \frac{n(n-1)(2n-1)}{6}}$.
- $\mathbf{\sum_{k=1}^{n-1} k^3 = \left(\frac{n(n-1)}{2}\right)^2}$.

#### Properties of Definite Integrals
- $\mathbf{\int_a^b f(x) dx = -\int_b^a f(x) dx}$ (if the limits are reversed, the sign changes).
- $\mathbf{\int_a^b f(x) dx = \int_a^b f(t) dt}$ (dummy variable property).
- $\mathbf{\int_a^b f(x) dx = \int_a^c f(x) dx + \int_c^b f(x) dx}$ (interval splitting property, where $\mathbf{a < c < b}$).

### Straight Lines

#### Equation of a Line
- **Parallel to y-axis:** $\mathbf{x=a}$ (where $\mathbf{a \in R}$).
- **Parallel to x-axis:** $\mathbf{y=b}$ (where $\mathbf{b \in R}$).
- **Passing through origin:** $\mathbf{y=mx}$.
- **Passing through origin and equally inclined to co-ordinate axes:** $\mathbf{y=x}$.
- **Intercept Form:** $\mathbf{\frac{x}{a} + \frac{y}{b} = 1}$.
  - $\mathbf{x}$-intercept = $\mathbf{(a,0)}$.
  - $\mathbf{y}$-intercept = $\mathbf{(0,b)}$.
- **Through a point $\mathbf{(x_1,y_1)}$ with slope $\mathbf{m}$:** $\mathbf{y-y_1 = m(x-x_1)}$.

### Circles

#### Equation of a Circle
- **Center $\mathbf{(a,b)}$ and Radius $\mathbf{r}$:** $\mathbf{(x-a)^2 + (y-b)^2 = r^2}$.
- **General Form:** $\mathbf{x^2+y^2+2gx+2fy+c=0}$.
  - Center = $\mathbf{(-g,-f)}$.
  - Radius = $\mathbf{\sqrt{g^2+f^2-c}}$.
- **Center $\mathbf{(0,0)}$ and Radius $\mathbf{r}$:** $\mathbf{x^2+y^2=r^2}$.
- **Diameter Form (endpoints $\mathbf{(x_1,y_1)}$ and $\mathbf{(x_2,y_2)}$):** $\mathbf{(x-x_1)(x-x_2) + (y-y_1)(y-y_2) = 0}$.

### Ellipse

#### Equation of an Ellipse
- **Standard Form:** $\mathbf{\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1}$.
- **If $\mathbf{a>b}$:**
  - Ends of the major axis = $\mathbf{(\pm a,0)}$.
  - Ends of the minor axis = $\mathbf{(0,\pm b)}$.
- **If $\mathbf{b>a}$:**
  - Ends of the major axis = $\mathbf{(0,\pm b)}$.
  - Ends of the minor axis = $\mathbf{(\pm a,0)}$.

### Parabola

#### Equation of a Parabola
- **Standard Form:** $\mathbf{(y-y_1)^2 = 4a(x-x_1)}$.
  - Vertex = $\mathbf{(x_1,y_1)}$.
  - The axis of the parabola is parallel to the $\mathbf{x}$-axis.
- **Standard Form:** $\mathbf{(x-x_1)^2 = 4a(y-y_1)}$.
  - Vertex = $\mathbf{(x_1,y_1)}$.
  - The axis of the parabola is parallel to the $\mathbf{y}$-axis.

### Area Under Curves

#### Working Rule
1. Write the main features of the given curves or lines.
2. Find the points of intersection of the given curves.
3. Draw a rough sketch of the curves and shade the area bounded by the curves or lines.
4. Using proper limits of integration, evaluate the area.

#### Area Formulas
- **Area lying below the x-axis:** If $\mathbf{f(x) \le 0}$ for $\mathbf{a \le x \le b}$, then the area is $\mathbf{\int_a^b -f(x) dx}$.
- **Area lying above and below the x-axis:** If $\mathbf{f(x) \ge 0}$ for $\mathbf{a \le x \le c}$ and $\mathbf{f(x) \le 0}$ for $\mathbf{c \le x \le b}$, then the area is $\mathbf{\int_a^c f(x) dx + \int_c^b -f(x) dx}$.
- **Area between two curves:** $\mathbf{\int_a^b (y_{\text{upper}} - y_{\text{lower}}) dx}$.
- **Area of an ellipse:** $\mathbf{\pi ab}$.
- **Area of a circle:** $\mathbf{\pi r^2}$.
- **Area bounded by the parabola $\mathbf{y^2=4ax}$ (or $\mathbf{x^2=4ay}$) and its latus rectum:** $\mathbf{\frac{8a^2}{3}}$.
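The last two closed-form areas can be recovered by direct integration. A SymPy sketch (the ellipse semi-axes 3 and 2 are illustrative):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

# area bounded by y^2 = 4ax and its latus rectum x = a:
# integrate the upper half y = 2*sqrt(a*x) from 0 to a and double it
parabola_area = 2 * sp.integrate(2 * sp.sqrt(a * x), (x, 0, a))
print(sp.simplify(parabola_area))        # 8*a**2/3

# area of the ellipse x^2/9 + y^2/4 = 1 (semi-axes 3 and 2): pi*a*b = 6*pi
ellipse_area = 4 * sp.integrate(2 * sp.sqrt(1 - x**2 / 9), (x, 0, 3))
print(sp.simplify(ellipse_area))         # 6*pi
```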
### Differential Equations

#### Definition
- If an expression involves a dependent variable $\mathbf{y}$, an independent variable $\mathbf{x}$, and its derivatives ($\mathbf{\frac{dy}{dx}, \frac{d^2y}{dx^2}}$, etc.), it is called a differential equation.
- Examples: $\mathbf{x\frac{dy}{dx} + 5y = 0}$, $\mathbf{\frac{d^2y}{dx^2} + 3x\frac{dy}{dx} = 0}$.

#### Order
- The order of a differential equation is the order of the highest derivative involved.
- Example: $\mathbf{x\frac{d^2y}{dx^2} + y\frac{dy}{dx} = 1}$. Order = $\mathbf{2}$.

#### Degree
- The degree of a differential equation is the exponent of the highest order derivative, once the equation has been freed from radicals and fractional powers of the derivatives.
- Example: $\mathbf{x\frac{d^2y}{dx^2} + y\frac{dy}{dx} = 1}$. Degree = $\mathbf{1}$.
- Example: $\mathbf{\sin(\frac{dy}{dx}) = \frac{d^2y}{dx^2}}$. Order = $\mathbf{2}$; Degree not defined (not a polynomial in $\mathbf{\frac{dy}{dx}}$).
- Example: $\mathbf{(\frac{d^2y}{dx^2})^3 + (\frac{dy}{dx})^2 + 3y = 0}$. Order = $\mathbf{2}$, Degree = $\mathbf{3}$.
- Example: $\mathbf{(\frac{d^3y}{dx^3})^2 + (\frac{dy}{dx})^4 + 3y = 0}$. Order = $\mathbf{3}$, Degree = $\mathbf{2}$.
- Example: $\mathbf{\sin(\frac{d^3y}{dx^3}) = x}$. Order = $\mathbf{3}$, Degree not defined.

#### Linear Differential Equation
- A differential equation is linear if the dependent variable $\mathbf{y}$ and its derivatives occur in the first degree only and are never multiplied together; otherwise, it is non-linear.
- Example: $\mathbf{x\frac{d^2y}{dx^2} + \frac{dy}{dx} + y = 0}$ (Linear).
- Example: $\mathbf{y\frac{d^2y}{dx^2} + 3y = 8}$ (Non-linear, since $\mathbf{y}$ multiplies its derivative).

#### Solving Differential Equations

##### 1. Variable Separable Method
- In this method, we separate the $\mathbf{x}$ terms and the $\mathbf{y}$ terms and then integrate both sides.
- Example: $\mathbf{\frac{dy}{dx} = (x+y)^2}$. Put $\mathbf{x+y=z}$. Then $\mathbf{1+\frac{dy}{dx} = \frac{dz}{dx}}$, so $\mathbf{\frac{dy}{dx} = \frac{dz}{dx}-1}$.
- $\mathbf{\frac{dz}{dx}-1 = z^2 \Rightarrow \frac{dz}{dx} = z^2+1 \Rightarrow \int \frac{dz}{z^2+1} = \int dx \Rightarrow \tan^{-1}z = x+C}$.
- $\mathbf{\tan^{-1}(x+y) = x+C}$.

##### 2. Homogeneous Differential Equation
- A homogeneous differential equation is one in which $\mathbf{\frac{dy}{dx}}$ can be written as a function of $\mathbf{\frac{y}{x}}$.
- Put $\mathbf{y=vx}$, so $\mathbf{\frac{dy}{dx} = v + x\frac{dv}{dx}}$.
- Example: $\mathbf{\frac{dy}{dx} = \frac{x^2+y^2}{x^2+xy}}$.

##### 3. Linear Differential Equation
- **Case 1:** $\mathbf{\frac{dy}{dx} + Py = Q}$, where $\mathbf{P, Q}$ are functions of $\mathbf{x}$.
  - Integrating Factor (IF) = $\mathbf{e^{\int P dx}}$.
  - Solution: $\mathbf{y \cdot (\text{IF}) = \int Q \cdot (\text{IF}) dx}$.
- **Case 2:** $\mathbf{\frac{dx}{dy} + Px = Q}$, where $\mathbf{P, Q}$ are functions of $\mathbf{y}$.
  - Integrating Factor (IF) = $\mathbf{e^{\int P dy}}$.
  - Solution: $\mathbf{x \cdot (\text{IF}) = \int Q \cdot (\text{IF}) dy}$.
- **Shifting the Origin Method:** For $\mathbf{\frac{dy}{dx} = \frac{x+y-2}{x-y+3}}$.
  - Put $\mathbf{x = X+h}$, $\mathbf{y = Y+k}$.
  - Then $\mathbf{\frac{dy}{dx} = \frac{dY}{dX}}$.
  - $\mathbf{\frac{dY}{dX} = \frac{X+h+Y+k-2}{X+h-(Y+k)+3}}$.
  - Solve $\mathbf{h+k-2=0}$ and $\mathbf{h-k+3=0}$ for $\mathbf{h, k}$. Then integrate.

#### Bernoulli's Equation
- $\mathbf{\frac{dy}{dx} + Py = Qy^n}$.
- Divide by $\mathbf{y^n}$: $\mathbf{y^{-n}\frac{dy}{dx} + Py^{1-n} = Q}$.
- Put $\mathbf{z = y^{1-n}}$. Then $\mathbf{\frac{dz}{dx} = (1-n)y^{-n}\frac{dy}{dx}}$.
- This reduces the equation to a linear differential equation.
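A minimal SymPy sketch solving one equation of the Case-1 linear form $\mathbf{\frac{dy}{dx} + Py = Q}$; the choice $\mathbf{P = 1}$, $\mathbf{Q = x}$ is illustrative:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# linear ODE  dy/dx + y = x,  IF = e^x,  y*e^x = integral(x*e^x) = e^x (x - 1) + C
ode = sp.Eq(y(x).diff(x) + y(x), x)
sol = sp.dsolve(ode, y(x))
print(sol)                                # Eq(y(x), C1*exp(-x) + x - 1)
print(sp.checkodesol(ode, sol)[0])        # True: the solution satisfies the ODE
```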
### Vectors

#### Vector Quantity
- A physical quantity which has magnitude as well as direction.
- Represented by an arrow (e.g., $\vec{a}, \vec{b}$, or $\vec{AB}$).

#### Magnitude of a Vector
- The length from the initial point to the terminal point.
- Represented by $\mathbf{|\vec{a}|}$, $\mathbf{|\vec{b}|}$, or $\mathbf{|\vec{AB}|}$.
- In two dimensions: $\mathbf{\vec{v} = x\hat{i} + y\hat{j}}$, magnitude $\mathbf{|\vec{v}| = \sqrt{x^2+y^2}}$.
- In three dimensions: $\mathbf{\vec{v} = x\hat{i} + y\hat{j} + z\hat{k}}$, magnitude $\mathbf{|\vec{v}| = \sqrt{x^2+y^2+z^2}}$.

#### Types of Vectors
- **Unit Vector:** A vector with magnitude $\mathbf{1}$.
  - $\mathbf{\hat{a} = \frac{\vec{a}}{|\vec{a}|}}$.
  - For $\mathbf{\vec{a} = x\hat{i} + y\hat{j}}$, $\mathbf{\hat{a} = \frac{x\hat{i} + y\hat{j}}{\sqrt{x^2+y^2}}}$.
- **Co-initial Vectors:** Vectors having the same initial point.
- **Coplanar Vectors:** Vectors lying in the same plane (intersecting or parallel).
- **Zero Vector:** Having magnitude $\mathbf{0}$ (point form).
- **Like Vectors:** Having the same direction.
- **Unlike Vectors:** Having opposite directions.
- **Position Vectors:** The position of a point is measured from a fixed reference point (the origin).
  - Position vector of point $\mathbf{A}$ is $\mathbf{\vec{OA}}$.
  - Position vector of point $\mathbf{B}$ is $\mathbf{\vec{OB}}$.
  - $\mathbf{\vec{AB} = \vec{OB} - \vec{OA}}$.

#### Triangle Law of Vector Addition
- If two sides of a triangle are taken in order (in magnitude and direction), their sum equals the third side taken in the opposite order.
- $\mathbf{\vec{OA} + \vec{AB} = \vec{OB}}$.

#### Parallelogram Law of Vector Addition
- If $\mathbf{\vec{OA}}$ and $\mathbf{\vec{OB}}$ are adjacent sides of a parallelogram, then the diagonal $\mathbf{\vec{OC} = \vec{OA} + \vec{OB}}$.

#### Dot Product (Scalar Product)
- $\mathbf{\vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}|\cos\theta}$.
- $\mathbf{\hat{i} \cdot \hat{i} = \hat{j} \cdot \hat{j} = \hat{k} \cdot \hat{k} = 1}$.
- $\mathbf{\hat{i} \cdot \hat{j} = \hat{j} \cdot \hat{k} = \hat{k} \cdot \hat{i} = 0}$.
- **Scalar projection of $\mathbf{\vec{a}}$ on $\mathbf{\vec{b}}$:** $\mathbf{\frac{\vec{a} \cdot \vec{b}}{|\vec{b}|}}$.
- If $\mathbf{\theta}$ is obtuse, $\mathbf{\cos\theta < 0}$, so $\mathbf{\vec{a} \cdot \vec{b} < 0}$.

### 3D Geometry

#### Direction Ratios (DRs)
- Numbers proportional to the projections of a line on the co-ordinate axes; they fix the direction of the line in space with respect to the axes. Represented by $\mathbf{\langle a, b, c \rangle}$.
- For points $\mathbf{A(x_1,y_1,z_1)}$ and $\mathbf{B(x_2,y_2,z_2)}$, DRs of $\mathbf{AB}$ = $\mathbf{\langle x_2-x_1,\ y_2-y_1,\ z_2-z_1 \rangle}$.
- If two lines are parallel, their DRs are proportional (can be taken the same).

#### Direction Cosines (DCs)
- The cosines of the angles which a line makes with the co-ordinate axes. Represented by $\mathbf{(l,m,n)}$.
- $\mathbf{l = \cos\alpha = \frac{a}{\sqrt{a^2+b^2+c^2}}}$.
- $\mathbf{m = \cos\beta = \frac{b}{\sqrt{a^2+b^2+c^2}}}$.
- $\mathbf{n = \cos\gamma = \frac{c}{\sqrt{a^2+b^2+c^2}}}$.
- $\mathbf{l^2+m^2+n^2 = 1}$.
- If a line is equally inclined to the co-ordinate axes, DRs = $\mathbf{\langle 1, 1, 1 \rangle}$.
- DRs of the x-axis: $\mathbf{\langle 1, 0, 0 \rangle}$; of the y-axis: $\mathbf{\langle 0, 1, 0 \rangle}$; of the z-axis: $\mathbf{\langle 0, 0, 1 \rangle}$.

#### Angle Between Two Lines
- $\mathbf{\cos\theta = \frac{a_1a_2+b_1b_2+c_1c_2}{\sqrt{a_1^2+b_1^2+c_1^2}\sqrt{a_2^2+b_2^2+c_2^2}}}$.
- If two lines are perpendicular: $\mathbf{a_1a_2+b_1b_2+c_1c_2 = 0}$.
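A small NumPy check of direction cosines and the angle formula; the direction ratios are illustrative, not from the notes:

```python
import numpy as np

b1 = np.array([1., 2., 2.])           # direction ratios of line 1
b2 = np.array([2., -2., 1.])          # direction ratios of line 2

l, m, n = b1 / np.linalg.norm(b1)     # direction cosines of line 1
print(np.isclose(l**2 + m**2 + n**2, 1.0))      # True: l^2 + m^2 + n^2 = 1

cos_t = b1 @ b2 / (np.linalg.norm(b1) * np.linalg.norm(b2))
print(np.degrees(np.arccos(cos_t)))             # 90.0 degrees
print(np.isclose(b1 @ b2, 0.0))                 # True: a1a2 + b1b2 + c1c2 = 0 (perpendicular)
```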
#### Equation of a Line
- **Through one point $\mathbf{\vec{a}}$ with direction $\mathbf{\vec{b}}$:** $\mathbf{\vec{r} = \vec{a} + \lambda\vec{b}}$.
- **Cartesian form:** $\mathbf{\frac{x-x_1}{a} = \frac{y-y_1}{b} = \frac{z-z_1}{c}}$.
- **Through two points $\mathbf{\vec{a}, \vec{b}}$:** $\mathbf{\vec{r} = \vec{a} + \lambda(\vec{b}-\vec{a})}$.
- **Cartesian form:** $\mathbf{\frac{x-x_1}{x_2-x_1} = \frac{y-y_1}{y_2-y_1} = \frac{z-z_1}{z_2-z_1}}$.
- Any general point on the line: $\mathbf{(x_1+\lambda a,\ y_1+\lambda b,\ z_1+\lambda c)}$.
- On the xy-plane: $\mathbf{z=0}$. On the yz-plane: $\mathbf{x=0}$. On the zx-plane: $\mathbf{y=0}$.

#### Distance of a Point from a Line
- $\mathbf{d = \frac{|\vec{PQ} \times \vec{b}|}{|\vec{b}|}}$, where $\mathbf{P}$ is the point, $\mathbf{Q}$ is a point on the line, and $\mathbf{\vec{b}}$ is the direction of the line.

#### Shortest Distance Between Two Lines
- **Skew lines:** $\mathbf{d = \frac{|(\vec{a_2}-\vec{a_1}) \cdot (\vec{b_1} \times \vec{b_2})|}{|\vec{b_1} \times \vec{b_2}|}}$.
- **Parallel lines:** $\mathbf{d = \frac{|(\vec{a_2}-\vec{a_1}) \times \vec{b}|}{|\vec{b}|}}$.

#### Plane
- **General Equation:** $\mathbf{ax+by+cz+d=0}$.
- **Vector Equation:** $\mathbf{\vec{r} \cdot \vec{n} = q}$.
  - $\mathbf{\vec{r} = x\hat{i}+y\hat{j}+z\hat{k}}$.
  - $\mathbf{\vec{n}}$ is the normal vector perpendicular to the plane.
  - $\mathbf{q}$ is a constant.
- **Through one point $\mathbf{(x_1,y_1,z_1)}$:** $\mathbf{a(x-x_1)+b(y-y_1)+c(z-z_1)=0}$.
  - $\mathbf{\langle a, b, c \rangle}$ are the direction ratios of the normal to the plane.
- **Normal Form:** $\mathbf{\vec{r} \cdot \hat{n} = p}$.
  - $\mathbf{\hat{n}}$ is the unit vector normal to the plane.
  - $\mathbf{p}$ is the perpendicular distance from the origin.
- **Intercept Form:** $\mathbf{\frac{x}{a} + \frac{y}{b} + \frac{z}{c} = 1}$.

#### Coplanar Lines
- Two or more lines that intersect or are parallel to each other and lie in one plane.
- **Condition for coplanarity of two lines $\mathbf{\vec{r}=\vec{a_1}+\lambda\vec{b_1}}$ and $\mathbf{\vec{r}=\vec{a_2}+\mu\vec{b_2}}$:** $\mathbf{(\vec{a_2}-\vec{a_1}) \cdot (\vec{b_1} \times \vec{b_2}) = 0}$.
- Equivalently, the general point of the first line must satisfy the equation of the second line.

#### Angle Between Two Planes
- $\mathbf{\cos\theta = \frac{\vec{n_1} \cdot \vec{n_2}}{|\vec{n_1}||\vec{n_2}|}}$.
- If two planes are perpendicular: $\mathbf{\vec{n_1} \cdot \vec{n_2} = 0}$.
- If two planes are parallel: $\mathbf{\vec{n_1} = \lambda \vec{n_2}}$.

#### Equation of a Plane
- **Parallel to a given plane $\mathbf{ax+by+cz+d=0}$:** $\mathbf{ax+by+cz+K=0}$.
- **Through 3 points $\mathbf{(x_1,y_1,z_1), (x_2,y_2,z_2), (x_3,y_3,z_3)}$:** $\mathbf{\begin{vmatrix} x-x_1 & y-y_1 & z-z_1 \\ x_2-x_1 & y_2-y_1 & z_2-z_1 \\ x_3-x_1 & y_3-y_1 & z_3-z_1 \end{vmatrix} = 0}$.
- **Through two points and parallel to a line with DRs $\mathbf{\langle a, b, c \rangle}$:** $\mathbf{\begin{vmatrix} x-x_1 & y-y_1 & z-z_1 \\ x_2-x_1 & y_2-y_1 & z_2-z_1 \\ a & b & c \end{vmatrix} = 0}$.
- **Through one point and parallel to two lines with DRs $\mathbf{\langle a_1,b_1,c_1 \rangle}$ and $\mathbf{\langle a_2,b_2,c_2 \rangle}$:** $\mathbf{\begin{vmatrix} x-x_1 & y-y_1 & z-z_1 \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{vmatrix} = 0}$.
- **Containing two coplanar lines:** use the same determinant with a point of one line and the DRs of both lines.
- **Containing the line $\mathbf{\frac{x-x_1}{a_1} = \frac{y-y_1}{b_1} = \frac{z-z_1}{c_1}}$ and parallel to another line:** use the same determinant form with the point $\mathbf{(x_1,y_1,z_1)}$ and the two sets of DRs.

#### Angle Between a Line and a Plane
- $\mathbf{\sin\phi = \frac{|\vec{b} \cdot \vec{n}|}{|\vec{b}||\vec{n}|}}$.
- When the line is perpendicular to the plane: $\mathbf{\frac{a}{l} = \frac{b}{m} = \frac{c}{n}}$ (DRs of the line are proportional to the DRs of the normal).
- When the line is parallel to the plane: $\mathbf{\vec{b} \cdot \vec{n} = 0}$.

#### Equation of a Plane Through the Line of Intersection of Two Planes
- $\mathbf{P_1 + \lambda P_2 = 0}$, where $\mathbf{P_1=0}$ and $\mathbf{P_2=0}$ are the equations of the planes.
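The skew-line shortest-distance formula above, evaluated numerically for two illustrative lines (not from the notes):

```python
import numpy as np

# lines r = a1 + s*b1 and r = a2 + t*b2 (sample skew lines)
a1, b1 = np.array([1., 2., 3.]), np.array([1., -1., 1.])
a2, b2 = np.array([2., 4., 5.]), np.array([2., 1., 2.])

n = np.cross(b1, b2)                              # b1 x b2
d = abs((a2 - a1) @ n) / np.linalg.norm(n)        # |(a2 - a1) . (b1 x b2)| / |b1 x b2|
print(round(d, 4))                                # 0.7071  (= 1/sqrt(2))
```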
### Probability

#### Definition
- The numerical measure of the occurrence or non-occurrence of an event in an experiment.
- Represented by $\mathbf{P(E)}$.
- $\mathbf{P(E) = \frac{\text{Number of favorable outcomes}}{\text{Total number of exhaustive cases}}}$.
- $\mathbf{0 \le P(E) \le 1}$.
- If $\mathbf{P(E) = 0}$, it is an impossible event.
- If $\mathbf{P(E) = 1}$, it is a sure event.

#### Odds
- **Odds in favor of an event:** $\mathbf{\frac{m}{n}}$. Then $\mathbf{P(E) = \frac{m}{m+n}}$.
- **Odds against an event:** $\mathbf{\frac{n}{m}}$. Then $\mathbf{P(E) = \frac{n}{m+n}}$.

#### Conditional Probability
- $\mathbf{P(A|B) = \frac{P(A \cap B)}{P(B)}}$ (probability of event $\mathbf{A}$ given that event $\mathbf{B}$ has already occurred).
- $\mathbf{P(B|A) = \frac{P(A \cap B)}{P(A)}}$ (probability of event $\mathbf{B}$ given that event $\mathbf{A}$ has already occurred).
- $\mathbf{0 \le P(A|B) \le 1}$.

#### Multiplication Law of Probability
- $\mathbf{P(A \cap B) = P(A) \cdot P(B|A)}$.
- $\mathbf{P(A \cap B \cap C) = P(A) \cdot P(B|A) \cdot P(C|A \cap B)}$.

#### Independent Events
- Two events $\mathbf{A, B}$ are independent if the occurrence of one does not affect the probability of the other.
- If $\mathbf{A, B}$ are independent, then $\mathbf{P(A \cap B) = P(A) \cdot P(B)}$.
- If $\mathbf{A, B}$ are independent:
  - $\mathbf{A}$ and $\mathbf{B^c}$ are independent.
  - $\mathbf{A^c}$ and $\mathbf{B}$ are independent.
  - $\mathbf{A^c}$ and $\mathbf{B^c}$ are independent.

#### Mutually Exclusive Events
- If $\mathbf{A \cap B = \phi}$, then $\mathbf{P(A \cap B) = 0}$.

#### Total Probability
- Let $\mathbf{\{E_1, E_2, \dots, E_n\}}$ be a partition of the sample space, each event having non-zero probability.
- If $\mathbf{A}$ is any arbitrary event associated with $\mathbf{S}$, then $\mathbf{P(A) = \sum_{i=1}^n P(E_i)P(A|E_i)}$.

#### Bayes' Theorem
- $\mathbf{P(E_i|A) = \frac{P(E_i)P(A|E_i)}{\sum_{j=1}^n P(E_j)P(A|E_j)}}$.
- $\mathbf{P(E_i)}$ are prior probabilities.
- $\mathbf{P(E_i|A)}$ are posterior probabilities.

#### Random Variable
- A variable whose values are determined by the outcomes of a random experiment.
- A real-valued function whose domain is the sample space and whose range lies in the real line.

#### Discrete Random Variable
- A random variable is discrete if it takes only a finite (or countably infinite) number of values.

#### Probability Distribution of a Random Variable
- A random variable $\mathbf{X}$ assumes values $\mathbf{x_1, x_2, \dots, x_n}$ with probabilities $\mathbf{p_1, p_2, \dots, p_n}$ such that:
  - (a) $\mathbf{0 \le p_i \le 1}$ for $\mathbf{i=1,2,\dots,n}$.
  - (b) $\mathbf{p_1+p_2+\dots+p_n = 1}$.

#### Mean of a Probability Distribution
- $\mathbf{\text{Mean} = E(X) = \sum_{i=1}^n x_i p_i}$.

#### Variance of a Probability Distribution
- $\mathbf{Var(X) = E(X^2) - [E(X)]^2 = \sum_{i=1}^n x_i^2 p_i - (\sum_{i=1}^n x_i p_i)^2}$.
- $\mathbf{SD = \sqrt{\text{Variance}}}$.

#### Binomial Distribution
- $\mathbf{P(X=r) = \binom{n}{r} p^r q^{n-r}}$.
  - $\mathbf{n}$ = total trials.
  - $\mathbf{r}$ = required successes.
  - $\mathbf{p}$ = probability of success.
  - $\mathbf{q}$ = probability of failure ($\mathbf{q=1-p}$).
- Mean = $\mathbf{np}$.
- Variance = $\mathbf{npq}$.
- SD = $\mathbf{\sqrt{npq}}$.
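A small Python sketch that builds a binomial distribution and confirms Mean $\mathbf{= np}$ and Variance $\mathbf{= npq}$ (the values of $\mathbf{n}$ and $\mathbf{p}$ are illustrative):

```python
from math import comb

n, p = 5, 0.4                      # 5 trials, P(success) = 0.4
q = 1 - p
pmf = [comb(n, r) * p**r * q**(n - r) for r in range(n + 1)]   # P(X = r)

mean = sum(r * pr for r, pr in enumerate(pmf))
var = sum(r**2 * pr for r, pr in enumerate(pmf)) - mean**2

print(round(sum(pmf), 10))          # 1.0        (probabilities sum to 1)
print(round(mean, 10), n * p)       # 2.0  2.0   (mean = np)
print(round(var, 10), n * p * q)    # 1.2  1.2   (variance = npq)
```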
### Linear Programming

#### Definition
- A technique for determining an optimum schedule of interdependent activities in view of the available resources.

#### Objective Function
- The linear function which is to be optimized (maximized or minimized) subject to given conditions.
- These conditions are called **Constraints**.
- Examples: diet problems, optimal product line problems, transportation problems, investment problems.

#### Feasible Region
- The common region determined by all the constraints.
- A feasible region is always convex (a convex polygon when it is bounded).
- If the feasible region is bounded, the maximum and minimum values of the objective function must occur at vertices (corner points) of the convex polygon.
- If the feasible region is unbounded, the optimal value of the function may or may not exist.
- If the feasible region is unbounded, a value $\mathbf{M}$ of $\mathbf{Z}$ found at a corner point is the maximum only if the open half-plane $\mathbf{ax+by > M}$ has no point in common with the feasible region; similarly, a value $\mathbf{m}$ is the minimum only if the open half-plane $\mathbf{ax+by < m}$ has no point in common with the feasible region.

#### Corner Point
- The intersection point of two boundary lines of the feasible region.
- In an LPP with a bounded feasible region, the maximum and minimum values of the objective function $\mathbf{Z = ax+by}$ are finite and are attained at corner points.
- If the objective function has the same maximum value at two corner points of the feasible region, then every point on the line segment joining these two points gives the same maximum value.

#### Methods for Solving an LPP
1. **Geometrical or Graphical Method** (for two variables):
   - Corner Point Method.
   - Isoprofit or Isocost Method.
2. **Simplex Method** (for three or more variables).
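A minimal corner-point-method sketch in Python for an illustrative LPP (maximise $\mathbf{Z = 3x + 4y}$ subject to $\mathbf{x + y \le 4}$, $\mathbf{x, y \ge 0}$; the data are not from the notes):

```python
# vertices of the (bounded) feasible region for x + y <= 4, x >= 0, y >= 0
corners = [(0, 0), (4, 0), (0, 4)]

Z = lambda x, y: 3 * x + 4 * y                 # objective function
values = {pt: Z(*pt) for pt in corners}        # evaluate Z at each corner point
best = max(values, key=values.get)

print(values)                                  # {(0, 0): 0, (4, 0): 12, (0, 4): 16}
print("maximum of Z =", values[best], "at", best)   # maximum of Z = 16 at (0, 4)
```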