The Bisection Method is a bracketing method for finding the root of a function. It works by repeatedly halving the interval in which the root lies.
- Given a continuous function $f(x)$ and an interval $[a, b]$ such that $f(a) \cdot f(b) < 0$, a root exists between $a$ and $b$.
- Compute the midpoint:
$$ c = \frac{a + b}{2} $$
- Evaluate $f(c)$:
  - If $f(a) \cdot f(c) < 0$, set $b = c$
  - Else, set $a = c$
- Repeat until the interval $[a, b]$ is sufficiently small.
✅ Advantages: Always converges (if root exists in interval)
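
A minimal Python sketch of these steps (the function name `bisection`, the tolerance `tol`, and the test function $x^2 - 2$ are illustrative assumptions, not part of the notes):

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Find a root of f in [a, b] by repeatedly halving the interval."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2              # midpoint of the current interval
        if f(a) * f(c) < 0:          # root lies in [a, c]
            b = c
        else:                        # root lies in [c, b]
            a = c
        if b - a < tol:              # interval is sufficiently small
            break
    return (a + b) / 2

# Example: the root of x^2 - 2 on [1, 2] is sqrt(2)
print(bisection(lambda x: x**2 - 2, 1.0, 2.0))  # ~1.41421356
```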
The Secant Method is an open method that uses a secant line (a straight line through two points on the curve) to approximate the root.
- Given two initial guesses $x_0$ and $x_1$, draw a secant line passing through the points $(x_0, f(x_0))$ and $(x_1, f(x_1))$.
- The x-intercept of the secant gives the next approximation:
$$ x_n = x_{n-1} - f(x_{n-1}) \cdot \frac{x_{n-1} - x_{n-2}}{f(x_{n-1}) - f(x_{n-2})} $$
- Use the latest two approximations $x_{n-1}$ and $x_{n-2}$ for each new iteration.
Unlike the bisection method, the secant method does not require bracketing, and it often converges faster, but convergence is not guaranteed.
📝 In most problems, you are given the two initial guesses $x_0$ and $x_1$ to start the iteration.
✅ Advantages: Faster than bisection
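
A minimal Python sketch of the iteration (names and tolerance are illustrative; the guard against a zero denominator is a practical safeguard, not part of the notes):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Iterate the secant formula from two initial guesses."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                 # secant is horizontal; avoid dividing by zero
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # x-intercept of the secant
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2              # keep only the latest two approximations
    return x1

print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # ~1.41421356
```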
The Regula Falsi Method is a hybrid of the bisection and secant methods. It uses a secant-like formula but keeps the root bracketed, as in the bisection method.
- Start with two points $a$ and $b$ such that $f(a) \cdot f(b) < 0$.
- Use the following formula to find the x-intercept of the secant:
$$ x = a - f(a) \cdot \frac{b - a}{f(b) - f(a)} $$
- Check the sign of $f(x)$:
  - If $f(a) \cdot f(x) < 0$, set $b = x$
  - Else, set $a = x$
- Repeat the process while keeping the interval $[a, b]$ bracketing the root.
✅ Advantages: Safer than secant, faster than bisection
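
A minimal Python sketch of these steps (names, tolerance, and the stopping test on $|f(x)|$ are illustrative choices):

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """Secant-like updates that keep the root bracketed in [a, b]."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        x = a - f(a) * (b - a) / (f(b) - f(a))  # x-intercept of the secant
        fx = f(x)
        if abs(fx) < tol:
            break
        if f(a) * fx < 0:            # root lies in [a, x]
            b = x
        else:                        # root lies in [x, b]
            a = x
    return x

print(regula_falsi(lambda x: x**2 - 2, 1.0, 2.0))  # ~1.41421356
```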
The Newton-Raphson Method is a powerful open method that uses the first derivative to approximate the root of a function.
- Given an initial guess $x_0$, use the iterative formula:
$$ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} $$
- Repeat until $x_n$ converges to a root.
It requires that the derivative $f'(x)$ be known and nonzero at each iterate.
✅ Advantages: Very fast convergence (quadratic), especially near the root
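
A minimal Python sketch; the caller supplies the derivative `df`, and the zero-derivative guard is an illustrative safeguard:

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x - f(x)/f'(x) from an initial guess x0."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:                 # the method fails if f'(x) vanishes
            raise ZeroDivisionError(f"f'(x) is zero at x = {x}")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))  # ~1.41421356
```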
The Muller Method fits a parabola through three points and computes the root of that parabola. It can also find complex roots.
- Given three approximations $x_0$, $x_1$, $x_2$, fit the parabola
$$ p(x) = a_2 (x - x_2)^2 + a_1 (x - x_2) + a_0 $$
through the points $(x_0, f(x_0))$, $(x_1, f(x_1))$, $(x_2, f(x_2))$.
- Solve the quadratic equation $p(x) = 0$ for the root nearest $x_2$. Then:
$$ x_3 = x_2 - \frac{2 a_0}{a_1 \pm \sqrt{a_1^2 - 4 a_2 a_0}} $$
Note: Choose the sign in the denominator that gives the larger magnitude, i.e., the same sign as $a_1$, to avoid small denominators and improve convergence.
✅ Advantages: Can find complex roots, superlinear convergence
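
A Python sketch, assuming the parabola's coefficients are computed from divided differences (one standard construction); `cmath` is used so complex roots can emerge from real starting values:

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, max_iter=50):
    """Fit a parabola through three points and step to its nearest root."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h0, h1 = x1 - x0, x2 - x1
        d0, d1 = (f1 - f0) / h0, (f2 - f1) / h1
        a2 = (d1 - d0) / (h1 + h0)   # quadratic coefficient
        a1 = a2 * h1 + d1            # linear coefficient
        a0 = f2                      # constant coefficient
        disc = cmath.sqrt(a1 * a1 - 4 * a2 * a0)
        # pick the sign that gives the larger-magnitude denominator
        denom = a1 + disc if abs(a1 + disc) >= abs(a1 - disc) else a1 - disc
        x3 = x2 - 2 * a0 / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

print(muller(lambda x: x**2 + 1, 0.5, 1.0, 1.5))  # ~±1j (a complex root)
```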
The Chebyshev Method is an open root-finding method that improves the Newton-Raphson method by incorporating the second derivative for faster convergence.
- Given an initial guess $x_0$, use the iteration formula:
$$ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \frac{1}{2} \cdot \left( \frac{f(x_n)^2 \cdot f''(x_n)}{f'(x_n)^3} \right) $$
✅ Advantages: Cubic convergence near root
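
A minimal Python sketch; the caller supplies both derivatives `df` and `d2f` (the names are illustrative):

```python
def chebyshev(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Newton step plus a second-derivative correction term."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        x_new = x - fx / dfx - 0.5 * (fx * fx * d2f(x)) / dfx**3
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = x^2 - 2, f'(x) = 2x, f''(x) = 2
print(chebyshev(lambda x: x**2 - 2, lambda x: 2 * x, lambda x: 2.0, 1.0))  # ~1.41421356
```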
The Method of Successive Approximations (also known as Fixed Point Iteration) is an iterative technique to solve equations of the form:
$$ x = g(x) $$
- Start with an initial guess $x_0$.
- Iterate using:
$$ x_{n+1} = g(x_n) $$
- Continue until $|x_{n+1} - x_n| < \epsilon$ (a small tolerance).
For convergence:
- $g(x)$ must be continuous.
- $|g'(x)| < 1$ near the root.
✅ Advantages: Simple to implement
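
A minimal Python sketch; $g(x) = \cos x$ is an illustrative choice with $|g'(x)| < 1$ near its fixed point:

```python
import math

def fixed_point(g, x0, eps=1e-10, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive values agree."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < eps:     # |x_{n+1} - x_n| < epsilon
            return x_new
        x = x_new
    return x

# Example: x = cos(x) has a fixed point near 0.739
print(fixed_point(math.cos, 1.0))  # ~0.7390851
```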
The Gauss-Jordan Elimination Method is a direct method for solving systems of linear equations by transforming the augmented matrix into reduced row-echelon form (RREF).
- Convert the augmented matrix $[A \mid b]$ to RREF.
- Make the leading diagonal elements 1.
- Make all other elements in the pivot columns 0.
- The resulting matrix directly gives the solution:
$$ x_1, x_2, \dots, x_n $$
✅ Advantages: Direct solution
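
A NumPy sketch of the reduction (the partial-pivoting row swap is an illustrative stability choice beyond the bare steps above):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to RREF and read off x."""
    M = np.hstack([np.array(A, float), np.array(b, float).reshape(-1, 1)])
    n = len(M)
    for i in range(n):
        p = i + np.argmax(abs(M[i:, i]))  # partial pivoting for stability
        M[[i, p]] = M[[p, i]]
        M[i] = M[i] / M[i, i]            # make the pivot 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]   # zero the rest of the pivot column
    return M[:, -1]                       # last column holds the solution

print(gauss_jordan([[2, 1], [1, 3]], [3, 5]))  # [0.8 1.4]
```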
The LU Decomposition Method factorizes a square matrix $A$ into a lower triangular matrix $L$ and an upper triangular matrix $U$:
$$ A = LU $$
Then solve:
- $LUx = b$ ⟹ Let $Ux = z$, then:
  - Solve $Lz = b$ (forward substitution)
  - Solve $Ux = z$ (backward substitution)
Two common variants:
- Doolittle Method: $L$ has 1s on the diagonal ($l_{ii} = 1$)
- Crout Method: $U$ has 1s on the diagonal ($u_{ii} = 1$)
✅ Advantages: Efficient for multiple right-hand sides
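
A NumPy sketch of the Doolittle variant plus the two-stage solve (function names are illustrative; no pivoting is done, so a zero pivot would fail):

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle factorization: A = LU with 1s on the diagonal of L."""
    A = np.array(A, float)
    n = len(A)
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):            # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):        # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution for z, then back substitution for x."""
    n = len(b)
    z = np.zeros(n)
    for i in range(n):                   # Lz = b
        z[i] = b[i] - L[i, :i] @ z[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):         # Ux = z
        x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L, U = lu_doolittle([[2, 1], [1, 3]])
print(lu_solve(L, U, np.array([3.0, 5.0])))  # [0.8 1.4]
```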
The Cholesky Factorization is used for symmetric positive-definite matrices.
$$ A = LL^T $$
Where:
- $L$ is a lower triangular matrix
- $L^T$ is the transpose of $L$
Given $Ax = b$:
- Solve $Lz = b$ (forward substitution)
- Solve $L^T x = z$ (backward substitution)
A matrix $A$ is symmetric positive-definite if $A = A^T$ and $x^T A x > 0$ for every nonzero vector $x$.
✅ Advantages: Fast and stable for symmetric positive-definite matrices
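
A NumPy sketch of the factorization (the loop form is one standard way to compute the entries of $L$):

```python
import numpy as np

def cholesky(A):
    """Factor a symmetric positive-definite A as L @ L.T."""
    A = np.array(A, float)
    n = len(A)
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = L[i, :j] @ L[j, :j]      # dot product of computed entries
            if i == j:
                L[i, j] = np.sqrt(A[i, i] - s)   # diagonal entry
            else:
                L[i, j] = (A[i, j] - s) / L[j, j]
    return L

A = np.array([[4.0, 2.0], [2.0, 3.0]])
L = cholesky(A)
print(np.allclose(L @ L.T, A))  # True
```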
The Partition Method solves systems by partitioning the matrix and its inverse into submatrices.
Given:
$$ A = \begin{pmatrix} B & C \\ E & D \end{pmatrix} \quad \text{and} \quad A^{-1} = \begin{pmatrix} X & Y \\ Z & V \end{pmatrix} $$
Then the inverse components are given by:
- $V = (D - EB^{-1}C)^{-1}$
- $Z = -VEB^{-1}$
- $X = B^{-1} - B^{-1}CZ$
- $Y = -B^{-1}CV$
✅ Advantages: Useful in block matrix computations
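
A NumPy sketch that applies these block formulas and checks the result (the example matrix and the choice of partition are illustrative):

```python
import numpy as np

def block_inverse(B, C, E, D):
    """Invert A = [[B, C], [E, D]] from the partition formulas."""
    B_inv = np.linalg.inv(B)
    V = np.linalg.inv(D - E @ B_inv @ C)   # V = (D - E B^{-1} C)^{-1}
    Z = -V @ E @ B_inv                     # Z = -V E B^{-1}
    Y = -B_inv @ C @ V                     # Y = -B^{-1} C V
    X = B_inv - B_inv @ C @ Z              # X = B^{-1} - B^{-1} C Z
    return np.block([[X, Y], [Z, V]])

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
# Partition: B = top-left 2x2 block, D = bottom-right 1x1 block
Ainv = block_inverse(A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:])
print(np.allclose(Ainv @ A, np.eye(3)))  # True
```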