1. Asymptotic Notations

- Big-O Notation: $O(g(n))$ - upper bound. $f(n) \in O(g(n))$ if $f(n) \le c \cdot g(n)$ for some constant $c > 0$ and all $n \ge n_0$.
- Omega Notation: $\Omega(g(n))$ - lower bound. $f(n) \in \Omega(g(n))$ if $f(n) \ge c \cdot g(n)$ for some constant $c > 0$ and all $n \ge n_0$.
- Theta Notation: $\Theta(g(n))$ - tight bound. $f(n) \in \Theta(g(n))$ if $c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n)$ for some constants $c_1, c_2 > 0$ and all $n \ge n_0$.
- Little-o Notation: $o(g(n))$ - strict upper bound: $\lim_{n \to \infty} \frac{f(n)}{g(n)} = 0$.
- Little-omega Notation: $\omega(g(n))$ - strict lower bound: $\lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty$.

2. Recurrences

2.1. Substitution Method

Guess the form of the solution, then use mathematical induction to find the constants and show that the solution works.

2.2. Recursion Tree Method

Draw a tree representing the recursive calls and the cost at each level, then sum the costs across all levels to determine the total cost.

2.3. Master Theorem

For recurrences of the form $T(n) = aT(n/b) + f(n)$, where $a \ge 1$ and $b > 1$ are constants and $f(n)$ is asymptotically positive:

- Case 1: If $f(n) = O(n^{\log_b a - \epsilon})$ for some constant $\epsilon > 0$, then $T(n) = \Theta(n^{\log_b a})$.
- Case 2: If $f(n) = \Theta(n^{\log_b a})$, then $T(n) = \Theta(n^{\log_b a} \log n)$.
- Case 3: If $f(n) = \Omega(n^{\log_b a + \epsilon})$ for some constant $\epsilon > 0$, AND $a f(n/b) \le c f(n)$ for some constant $c < 1$ and all sufficiently large $n$ (regularity condition), then $T(n) = \Theta(f(n))$.

3. Sorting Algorithms

3.1. Comparison Sorts

- Insertion Sort: $O(n^2)$ worst case, $O(n)$ best case. In-place, stable.
- Merge Sort: $O(n \log n)$ worst case. Not in-place, stable. $T(n) = 2T(n/2) + \Theta(n)$ (see the sketch following Section 4).
- Heap Sort: $O(n \log n)$ worst case. In-place, not stable. Uses a max-heap.
- Quick Sort: $O(n \log n)$ average case, $O(n^2)$ worst case. In-place, not stable. Pivot selection is critical.
- Lower Bound for Comparison Sorts: any comparison sort needs $\Omega(n \log n)$ comparisons in the worst case (decision-tree argument).

3.2. Non-Comparison Sorts

- Counting Sort: $O(n+k)$, where $k$ is the range of the input integers. Stable.
- Radix Sort: $O(d(n+k))$, where $d$ is the number of digits and $k$ is the number of possible digit values. Stable.
- Bucket Sort: $O(n)$ average case, $O(n^2)$ worst case. Requires the input to be uniformly distributed.

4. Data Structures

4.1. Heaps

- Max-Heap Property: parent is $\ge$ its children: $A[\text{PARENT}(i)] \ge A[i]$.
- Min-Heap Property: parent is $\le$ its children: $A[\text{PARENT}(i)] \le A[i]$.

Operations (the first two are sketched following Section 4):

- MAX-HEAPIFY(A, i): $O(\log n)$
- BUILD-MAX-HEAP(A): $O(n)$
- HEAPSORT(A): $O(n \log n)$
- HEAP-MAXIMUM(A): $O(1)$
- HEAP-EXTRACT-MAX(A): $O(\log n)$
- HEAP-INCREASE-KEY(A, i, key): $O(\log n)$
- MAX-HEAP-INSERT(A, key): $O(\log n)$

4.2. Hash Tables

- Direct Addressing: $O(1)$ lookup if keys are small integers.
- Collision Resolution:
  - Chaining: a linked list per slot. $O(1+\alpha)$ average search/insert/delete, where $\alpha = n/m$ is the load factor (toy implementation following Section 4).
  - Open Addressing: probe a sequence of slots until an empty one is found.
    - Linear Probing: $h(k, i) = (h'(k) + i) \pmod m$.
    - Quadratic Probing: $h(k, i) = (h'(k) + c_1 i + c_2 i^2) \pmod m$.
    - Double Hashing: $h(k, i) = (h_1(k) + i \cdot h_2(k)) \pmod m$.

4.3. Binary Search Trees (BST)

- Property: for every node, all keys in its left subtree are $\le$ its key, which is $\le$ all keys in its right subtree.
- Operations: SEARCH, MINIMUM, MAXIMUM, SUCCESSOR, PREDECESSOR, INSERT, DELETE: all $O(h)$, where $h$ is the height of the tree ($\Theta(\log n)$ if balanced, $\Theta(n)$ in the worst case).

4.4. Red-Black Trees

Properties:
- Every node is red or black.
- The root is black.
- Every leaf (NIL) is black.
- If a node is red, both its children are black.
- All simple paths from a node to descendant leaves contain the same number of black nodes (the black-height).

Height: $O(\log n)$. Operations: INSERT, DELETE, SEARCH: $O(\log n)$ worst case.
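The Python sketches below illustrate a few of the entries above; all function names and signatures are my own choices for illustration, not from any particular library. First, merge sort as a concrete instance of the recurrence $T(n) = 2T(n/2) + \Theta(n)$: here $\log_2 2 = 1$ and $f(n) = \Theta(n)$, so Case 2 of the Master Theorem gives $\Theta(n \log n)$.

```python
def merge_sort(a):
    """Sort a list. T(n) = 2T(n/2) + Theta(n), i.e. Theta(n log n) by Master Theorem Case 2."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge step: Theta(n) work per level of the recursion tree.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```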
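Next, a minimal sketch of MAX-HEAPIFY and BUILD-MAX-HEAP from Section 4.1, using 0-based indexing (the textbook pseudocode is 1-based, so the child-index formulas differ). The bottom-up loop in BUILD-MAX-HEAP is what yields the $O(n)$ total bound.

```python
def max_heapify(a, i, heap_size):
    """Float a[i] down until the max-heap property holds below i; O(log n)."""
    left, right = 2 * i + 1, 2 * i + 2   # 0-based children
    largest = i
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

def build_max_heap(a):
    """Heapify every internal node bottom-up; the per-level costs sum to O(n)."""
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i, len(a))
```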
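Finally for this part, a toy hash table with chaining (Section 4.2). As an assumption for brevity, Python lists stand in for the per-slot linked lists, and Python's built-in hash with the division method stands in for a real hash function.

```python
class ChainedHashTable:
    """Collision resolution by chaining; expected O(1 + alpha) per operation, alpha = n/m."""
    def __init__(self, m=8):
        self.m = m
        self.slots = [[] for _ in range(m)]   # one chain per slot

    def _h(self, key):
        return hash(key) % self.m             # division-method stand-in

    def insert(self, key, value):
        chain = self.slots[self._h(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:                      # overwrite an existing key
                chain[i] = (key, value)
                return
        chain.append((key, value))

    def search(self, key):
        for k, v in self.slots[self._h(key)]:
            if k == key:
                return v
        return None
```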
5. Dynamic Programming

Characteristics: optimal substructure, overlapping subproblems.

Steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution (bottom-up, or top-down with memoization).
4. Construct an optimal solution from the computed information.

Examples:
- Matrix Chain Multiplication: $O(n^3)$.
- Longest Common Subsequence (LCS): $O(mn)$ (sketched at the end of these notes).
- Knapsack Problem (0/1): $O(nW)$, where $W$ is the capacity (pseudo-polynomial).

6. Greedy Algorithms

Characteristics: optimal substructure, greedy-choice property.

Steps:
1. Cast the optimization problem as one in which we make a choice and are left with a subproblem.
2. Prove that there is always an optimal solution that makes the greedy choice.
3. Show that, having made the greedy choice, what remains is a subproblem with the same structure.

Examples:
- Activity Selection Problem: $O(n \log n)$ (dominated by sorting).
- Huffman Coding: $O(n \log n)$.
- Minimum Spanning Tree (MST): Prim's, Kruskal's.
- Single-Source Shortest Paths: Dijkstra's.

7. Graph Algorithms

7.1. Graph Representations

- Adjacency List: $O(V+E)$ space. Good for sparse graphs.
- Adjacency Matrix: $O(V^2)$ space. Good for dense graphs and $O(1)$ edge-existence checks.

7.2. Graph Traversal

- Breadth-First Search (BFS): $O(V+E)$. Finds shortest paths in unweighted graphs.
- Depth-First Search (DFS): $O(V+E)$. Used for topological sort and strongly connected components.

7.3. Minimum Spanning Trees (MST)

- Prim's Algorithm: $O(E \log V)$ with a binary heap, $O(E + V \log V)$ with a Fibonacci heap.
- Kruskal's Algorithm: $O(E \log E)$, equivalently $O(E \log V)$, using Union-Find.

7.4. Single-Source Shortest Paths

- Bellman-Ford Algorithm: $O(VE)$. Handles negative edge weights and detects negative cycles.
- Dijkstra's Algorithm: $O(E \log V)$ with a binary heap, $O(E + V \log V)$ with a Fibonacci heap. Requires non-negative edge weights (binary-heap version sketched at the end of these notes).

7.5. All-Pairs Shortest Paths

- Floyd-Warshall Algorithm: $O(V^3)$. Handles negative edge weights (but not negative cycles).
- Johnson's Algorithm: $O(VE + V^2 \log V)$. Faster for sparse graphs with negative weights.

7.6. Maximum Flow

- Ford-Fulkerson Method: $O(E \cdot f)$, where $f$ is the value of the maximum flow (integer capacities).
- Edmonds-Karp Algorithm: $O(VE^2)$. Ford-Fulkerson with BFS to find augmenting paths.

8. String Matching

- Naive Algorithm: $O((n-m+1)m)$ worst case.
- Rabin-Karp Algorithm: $O((n-m+1)m)$ worst case, $O(n+m)$ expected. Uses hashing.
- Knuth-Morris-Pratt (KMP) Algorithm: $O(n+m)$ worst case. Uses a prefix function to avoid re-matching (sketched at the end of these notes).

9. NP-Completeness

- P: problems solvable in polynomial time.
- NP: problems whose solutions are verifiable in polynomial time.
- NP-Hard: at least as hard as every problem in NP (every problem in NP reduces to it in polynomial time).
- NP-Complete: both in NP and NP-Hard.
- Common NP-Complete Problems: SAT, 3-SAT, Clique, Vertex Cover, Hamiltonian Cycle, Traveling Salesperson Problem (TSP), Subset Sum.
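A few more illustrative Python sketches, again with names of my own choosing. First, the bottom-up LCS table from Section 5, $O(mn)$ time and space:

```python
def lcs_length(x, y):
    """Bottom-up LCS: c[i][j] = length of an LCS of x[:i] and y[:j]."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1        # extend a common subsequence
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]
```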
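Next, Dijkstra's algorithm (Section 7.4) with a binary heap via Python's heapq. As assumptions for this sketch, the graph is an adjacency dict {vertex: [(neighbor, weight), ...]} with every vertex present as a key, and stale heap entries are skipped lazily instead of using a decrease-key operation.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with non-negative weights; O(E log V) with a binary heap."""
    dist = {u: float('inf') for u in adj}
    dist[source] = 0
    pq = [(0, source)]                      # (distance, vertex) min-priority queue
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                     # stale entry; lazy deletion
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:             # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```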
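Last, the KMP prefix function and matcher from Section 8; this sketch assumes a non-empty pattern.

```python
def prefix_function(p):
    """pi[i] = length of the longest proper prefix of p[:i+1] that is also a suffix; O(m)."""
    pi = [0] * len(p)
    k = 0
    for i in range(1, len(p)):
        while k > 0 and p[k] != p[i]:
            k = pi[k - 1]                   # fall back to a shorter border
        if p[k] == p[i]:
            k += 1
        pi[i] = k
    return pi

def kmp_search(text, p):
    """Return the start indices of all occurrences of p in text; O(n + m) total."""
    pi, k, hits = prefix_function(p), 0, []
    for i, ch in enumerate(text):
        while k > 0 and p[k] != ch:
            k = pi[k - 1]                   # reuse previously matched prefix
        if p[k] == ch:
            k += 1
        if k == len(p):                     # full match ending at position i
            hits.append(i - len(p) + 1)
            k = pi[k - 1]
    return hits
```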