
Numerical Computing in C

Turn mathematical knowledge into code. Implement the algorithms most commonly seen in science and engineering computation with C.

Statistics (Mean, Variance, Standard Deviation)

Computing basic statistics from a dataset — by far the most common operation in science/engineering reports and research.
Mean: x̄ = (1/n) Σxᵢ
Variance: s² = (1/n) Σxᵢ² - x̄²
Std. dev.: s = √s²
#include <stdio.h>
#include <math.h>

int main(void) {
    double data[] = {72.5, 85.3, 91.0, 68.2, 77.8, 83.1, 90.5, 65.9};
    int n = 8;
    double sum = 0, sum2 = 0;

    for (int i = 0; i < n; i++) {
        sum  += data[i];       // sum
        sum2 += data[i] * data[i]; // sum of squares
    }

    double avg = sum / n;
    double var = sum2 / n - avg * avg; // variance = E[X^2] - (E[X])^2
    double sd  = sqrt(var);

    printf("N:       %d\n", n);
    printf("Mean:    %.2f\n", avg);
    printf("Var:     %.2f\n", var);
    printf("Std:     %.2f\n", sd);
    return 0;
}
Key point: Variance can be computed in one pass as "mean of squares minus square of mean", so you don't need two loops. Be aware that this formula subtracts two nearly equal numbers when the spread is small relative to the mean, which can cost floating-point precision.

Newton's Method (Numerical Root Finding)

An algorithm that numerically finds the solution of f(x) = 0. It iteratively uses the x-intercept of the tangent as the next approximation.
xₙ₊₁ = xₙ - f(xₙ) / f'(xₙ)
Example: find the positive solution of x² - 2 = 0 (= √2 ≈ 1.41421356...).
#include <stdio.h>
#include <math.h>

double f(double x)  { return x*x - 2; }       // f(x) = x^2 - 2
double df(double x) { return 2*x; }            // f'(x) = 2x

int main(void) {
    double x = 2.0;  // initial guess
    double EPS = 1e-10;

    for (int i = 0; i < 100; i++) {
        double x_new = x - f(x) / df(x);
        printf("iter %2d: x = %.15f  err = %e\n", i+1, x_new, fabs(x_new - x));
        if (fabs(x_new - x) < EPS) break;
        x = x_new;
    }
    printf("sqrt(2) = %.15f\n", x);
    return 0;
}
iter  1: x = 1.500000000000000  err = 5.00e-01
iter  2: x = 1.416666666666667  err = 8.33e-02
iter  3: x = 1.414215686274510  err = 2.45e-03
iter  4: x = 1.414213562374690  err = 2.12e-06
iter  5: x = 1.414213562373095  err = 1.59e-12
sqrt(2) = 1.414213562373095
15-digit accuracy in just 5 iterations. Newton's method is quadratically convergent, roughly doubling the number of correct digits each iteration.
Caveat: a poor initial guess may fail to converge. Watch out for points where f'(x)=0.

Monte Carlo Method (Estimating Pi)

A simulation technique that uses random numbers to probabilistically answer a question. Drop random points into a 1x1 square, and estimate pi from the fraction that fall inside the unit circle.
pi ≈ 4 × (points inside circle / total points)
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    int N;
    printf("Trials: "); scanf("%d", &N);
    srand((unsigned)time(NULL));

    int inside = 0;
    for (int i = 0; i < N; i++) {
        double x = (double)rand() / RAND_MAX; // random in [0,1]
        double y = (double)rand() / RAND_MAX;
        if (x*x + y*y <= 1.0) inside++;     // inside circle?
    }

    double pi = 4.0 * inside / N;
    printf("pi ≈ %.6f  (N=%d)\n", pi, N);
    printf("true: 3.141593\n");
    return 0;
}
Accuracy vs. number of trials:
Trials     | Estimate (ex.) | Error
-----------+----------------+----------
1,000      | 3.156          | ~0.01
10,000     | 3.1432         | ~0.001
100,000    | 3.14052        | ~0.001
1,000,000  | 3.141284       | ~0.0003
10,000,000 | 3.141552       | ~0.00004
Convergence is slow (error ∝ 1/√N), but the method extends easily to high-dimensional problems and is simple to code. It's widely used in physics simulation and financial engineering.

Numerical Integration (Trapezoid & Simpson's Rule)

Functions that can't be integrated analytically can still be approximated numerically.

Trapezoid Rule

∫ₐᵇ f(x) dx ≈ h × [ f(a)/2 + f(x₁) + f(x₂) + ... + f(b)/2 ]   (h = (b-a)/n)
#include <stdio.h>
#include <math.h>

double f(double x) { return sin(x); } // integrand

// Trapezoid rule
double trapezoid(double a, double b, int n) {
    double h = (b - a) / n;
    double sum = (f(a) + f(b)) / 2.0;
    for (int i = 1; i < n; i++)
        sum += f(a + i * h);
    return h * sum;
}

// Simpson's rule (higher accuracy; n must be even)
double simpson(double a, double b, int n) {
    double h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; i++)
        sum += (i % 2 == 0) ? 2*f(a+i*h) : 4*f(a+i*h);
    return h / 3.0 * sum;
}

int main(void) {
    printf("True value of integral of sin(x) from 0 to pi = 2.0\n\n");
    int ns[] = {10, 100, 1000};
    for (int i = 0; i < 3; i++) {
        printf("n=%4d  trap=%.10f  Simpson=%.10f\n",
            ns[i], trapezoid(0, M_PI, ns[i]), simpson(0, M_PI, ns[i]));
    }
    return 0;
}
Accuracy comparison: Simpson's rule is orders of magnitude more accurate than the trapezoid rule (fourth-order accuracy).

Linear Systems (Gaussian Elimination)

Solve a system Ax = b with forward elimination followed by back substitution.
2x + y - z = 8
-3x - y + 2z = -11
-2x + y + 2z = -3
#include <stdio.h>
#include <math.h>
#define N 3

void gauss(double a[N][N+1], double x[N]) {
    // forward elimination (no pivoting; assumes the pivot a[i][i] is never 0)
    for (int i = 0; i < N; i++) {
        for (int j = i+1; j < N; j++) {
            double ratio = a[j][i] / a[i][i];
            for (int k = i; k <= N; k++)
                a[j][k] -= ratio * a[i][k];
        }
    }
    // back substitution
    for (int i = N-1; i >= 0; i--) {
        x[i] = a[i][N];
        for (int j = i+1; j < N; j++)
            x[i] -= a[i][j] * x[j];
        x[i] /= a[i][i];
    }
}

int main(void) {
    double a[N][N+1] = {
        { 2,  1, -1,   8},
        {-3, -1,  2, -11},
        {-2,  1,  2,  -3}
    };
    double x[N];
    gauss(a, x);
    for (int i = 0; i < N; i++)
        printf("x[%d] = %.4f\n", i, x[i]);
    return 0;  // answer: x=2, y=3, z=-1
}

Bisection Method (Bracketing Roots)

If f(a) and f(b) have opposite signs, a root lies between them. Halve the interval repeatedly. Slower than Newton's method, but guaranteed to converge.
#include <stdio.h>
#include <math.h>

double f(double x) { return x*x*x - x - 1; } // x^3 - x - 1 = 0

double bisection(double a, double b, double eps) {
    while (b - a > eps) {
        double mid = (a + b) / 2.0;
        if (f(a) * f(mid) <= 0)
            b = mid;  // root in left half
        else
            a = mid;  // root in right half
    }
    return (a + b) / 2.0;
}

int main(void) {
    double root = bisection(1.0, 2.0, 1e-10);
    printf("root of x^3 - x - 1 = 0: %.10f\n", root);
    printf("check: f(root) = %e\n", f(root));
    return 0;
}
Bisection: slow (linear convergence) but safe. No worrying about initial guesses.
Newton's method: fast (quadratic convergence) but needs the derivative and a good initial guess.

Ordinary Differential Equations (Euler & Runge-Kutta)

Equations of the form dy/dx = f(x, y) can be solved step-by-step from an initial value. Projectile motion, epidemic models, electrical circuits, chemical kinetics — all fit this form.

Test problem

dy/dx = -y
y(0) = 1
Exact: y(x) = e⁻ˣ
We integrate up to x=5 with three methods and compare. The exact value is e⁻⁵ ≈ 0.00673795.

① Euler method (1st-order)

The simplest scheme — use the current slope for one step.
yₙ₊₁ = yₙ + h · f(xₙ, yₙ)

② Improved Euler / Heun (2nd-order)

Predict, then correct with the slope at the predicted point.
k1 = f(xₙ, yₙ)
k2 = f(xₙ + h, yₙ + h·k1)
yₙ₊₁ = yₙ + h · (k1 + k2) / 2

③ Runge-Kutta 4 (RK4, 4th-order)

Evaluates the slope four times (start, two midpoints, end) and takes a weighted average. The most widely used general-purpose solver.
k1 = f(xₙ, yₙ)
k2 = f(xₙ + h/2, yₙ + h/2 · k1)
k3 = f(xₙ + h/2, yₙ + h/2 · k2)
k4 = f(xₙ + h, yₙ + h · k3)
yₙ₊₁ = yₙ + h/6 · (k1 + 2·k2 + 2·k3 + k4)

Comparison program

#include <stdio.h>
#include <math.h>

// dy/dx = f(x, y)
double f(double x, double y) {
    (void)x;
    return -y;
}

double euler(double x0, double y0, double x_end, double h) {
    double x = x0, y = y0;
    while (x < x_end) {
        y = y + h * f(x, y);
        x = x + h;
    }
    return y;
}

double heun(double x0, double y0, double x_end, double h) {
    double x = x0, y = y0;
    while (x < x_end) {
        double k1 = f(x,     y);
        double k2 = f(x + h, y + h * k1);
        y = y + h * (k1 + k2) / 2;
        x = x + h;
    }
    return y;
}

double rk4(double x0, double y0, double x_end, double h) {
    double x = x0, y = y0;
    while (x < x_end) {
        double k1 = f(x,         y);
        double k2 = f(x + h/2,   y + h/2 * k1);
        double k3 = f(x + h/2,   y + h/2 * k2);
        double k4 = f(x + h,     y + h * k3);
        y = y + h / 6 * (k1 + 2*k2 + 2*k3 + k4);
        x = x + h;
    }
    return y;
}

int main(void) {
    double exact = exp(-5.0);
    printf("exact y(5) = %.8f\n\n", exact);
    printf("   h    | Euler       | Heun        | RK4\n");
    printf("--------+-------------+-------------+-------------\n");
    double hs[] = {0.5, 0.1, 0.01};
    for (int i = 0; i < 3; i++) {
        double h = hs[i];
        printf(" %5.3f  | %.8f  | %.8f  | %.8f\n",
               h, euler(0, 1, 5, h),
                  heun(0, 1, 5, h),
                  rk4(0, 1, 5, h));
    }
    return 0;
}
$ gcc ode_compare.c -o ode_compare -lm
$ ./ode_compare
exact y(5) = 0.00673795

   h    | Euler       | Heun        | RK4
--------+-------------+-------------+-------------
 0.500  | 0.00097656  | 0.01000214  | 0.00679947
 0.100  | 0.00515378  | 0.00681497  | 0.00673797
 0.010  | 0.00657202  | 0.00673874  | 0.00673795
Order of accuracy: dividing h by 10 reduces the error by a factor of about 10 (Euler, 1st order), 100 (Heun, 2nd order), and 10,000 (RK4, 4th order). RK4 already matches 6 digits at h=0.1, while Euler agrees only to 2 digits even at h=0.01.

Extension to systems (spring oscillator)

Projectile motion, Lotka-Volterra and the SIR model all involve multiple coupled variables. RK4 extends naturally. Here we solve a spring oscillator m·x'' = -k·x by rewriting it as two first-order equations:
dx/dt = v, dv/dt = -(k/m)·x
#include <stdio.h>

#define K 1.0
#define M 1.0

// right-hand side: returns dx/dt and dv/dt
void deriv(double x, double v, double *dx, double *dv) {
    *dx = v;
    *dv = -(K / M) * x;
}

void rk4_step(double *x, double *v, double h) {
    double k1x, k1v, k2x, k2v, k3x, k3v, k4x, k4v;
    deriv(*x,              *v,              &k1x, &k1v);
    deriv(*x + h/2*k1x, *v + h/2*k1v, &k2x, &k2v);
    deriv(*x + h/2*k2x, *v + h/2*k2v, &k3x, &k3v);
    deriv(*x + h*k3x,    *v + h*k3v,    &k4x, &k4v);
    *x += h/6 * (k1x + 2*k2x + 2*k3x + k4x);
    *v += h/6 * (k1v + 2*k2v + 2*k3v + k4v);
}

int main(void) {
    double x = 1.0, v = 0.0;   // amplitude 1, released from rest
    double h = 0.05;
    printf("t,x,v\n");
    for (double t = 0; t <= 20; t += h) {
        printf("%.2f,%.4f,%.4f\n", t, x, v);
        rk4_step(&x, &v, h);
    }
    return 0;
}
Energy check: theoretically E = (1/2)·k·x² + (1/2)·m·v² is constant. Plain Euler drifts and the energy grows; RK4 keeps it essentially constant over long times. That is one reason RK4 is so widely used.
See also
The projectile motion, SIR and Lotka-Volterra examples on the Build Simulations page become far more accurate when the Euler update is replaced with RK4 — often at fewer steps.

Challenge Problems

Apply what you've learned to the following problems.
C1. Cube root via Newton's method
Find x such that x³ = a. Apply Newton's method with f(x) = x³ - a, f'(x) = 3x². Verify that you get x=3 when a=27.
C2. Definite integral via Monte Carlo
Compute ∫₀¹ e^(-x²) dx with Monte Carlo. (Part of the Gaussian integral; the integrand has no elementary antiderivative.)
Hint: For random x in [0,1], compute f(x) = e^(-x²). The mean of those values approximates the integral.
C3. Frequency distribution & histogram
Generate 100 random numbers in [0,1], compute a frequency distribution with 0.1-wide bins, and display a text-based histogram.
C4. Confidence interval
Estimate pi via Monte Carlo N times, then compute a 95% confidence interval (x̄ ± 2σ/√N).
C5. Prime sieve (Sieve of Eratosthenes)
Use the Sieve of Eratosthenes to enumerate all primes ≤ N. Use an array and confirm the O(N log log N) complexity.
C6. Matrix multiplication as a function
Write a function that computes C = AB for n×n matrices A, B. Verify with 3×3 matrices.