These notes are based on Chapter 13 of Jones, O., Maillardet, R., & Robinson, A. (2014). Introduction to scientific programming and simulation using R (2nd ed.). CRC Press.

Introduction

Recall, from your calculus class, that solving an ordinary differential equation (ODE) \[\frac{d\mathbf{y}}{dt} = \mathbf{f}(t,\mathbf{y}(t))\] means finding a (vector-valued) function \(\mathbf{y}(t)\) that satisfies the equation. Here is a famous model: the Lotka-Volterra equations.

Predator–prey model

The predator-prey model describes the population dynamics of two species, one the predator and the other the prey. The model consists of a pair of ODEs: \[\begin{align} \frac{dx}{dt} &= ax - bxy\\ \frac{dy}{dt} &= gbxy - ey \end{align}\] where

  • \(x\): the prey population,
  • \(y\): the predator population,
  • \(a\): the natural growth rate of prey in the absence of predation,
  • \(b\): the death rate of prey per predator encounter,
  • \(g\): the efficiency with which captured prey are converted into new predators,
  • \(e\): the natural death rate of predators.

The system is known to have periodic solutions, which do not admit a closed-form expression. Fortunately, it is still easy to obtain an implicit equation relating \(x\) and \(y\).

Eliminating \(dt\) from the above system of ODEs, we have: \[\begin{align} &(ax-bxy)dy = (gbxy-ey)dx\\ \Leftrightarrow\quad &\int \frac{a-by}{y}dy = \int \frac{gbx-e}{x}dx\\ \Leftrightarrow\quad &C = gbx-e\log(x)+by-a\log(y) \end{align}\] where \(C\) is a constant.

With \(a=0.05\), \(b=0.0002\), \(g=0.8\), \(e=0.03\), and \(C=-0.3\), we obtain the following orbit in the phase plane.

# model parameters and the constant C
a <- 0.05
b <- 0.0002
g <- 0.8
e <- 0.03
C <- -0.3
# evaluate g*b*x - e*log(x) + b*y - a*log(y) on a grid
# (start the grid just above 0 to avoid log(0))
x <- seq(1, 800, length=1000)
y <- seq(1, 830, length=1000)
out <- outer(x, y, function(x,y) g*b*x - e*log(x) + b*y - a*log(y))
# the orbit is the level set at height C
contour(x, y, out, levels=C)
# arrows showing the direction of the flow along the orbit
from_x <- c(16.05, 400, 500)
from_y <- c(401, 48, 652.3)
to_x <- c(16, 401, 499)
to_y <- c(400, 48.1, 653)
# y0 = (150, 804) is a point on the orbit
y0 <- c(150, 804)
points(y0[1], y0[2], pch=16)
text(y0[1], y0[2]-50, expression(y[0]), pos=4)
arrows(from_x, from_y, to_x, to_y, length=.1)
title(main=expression(C == g*b*x - e*log(x) + b*y - a*log(y)), xlab="prey", ylab="predator")
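As a quick check, we can verify numerically that the marked point \((150, 804)\) lies on the level set \(C=-0.3\) (the helper function `H` below is introduced just for this check):

```r
# verify that y0 = (150, 804) lies (approximately) on the level set C = -0.3
a <- 0.05; b <- 0.0002; g <- 0.8; e <- 0.03; C <- -0.3
H <- function(x, y) g*b*x - e*log(x) + b*y - a*log(y)
H(150, 804)   # approximately -0.3, so (150, 804) is on the orbit
```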

Numerical solutions

As nonsensical as it sounds, if we knew \(\mathbf{y}(t)\) on the right-hand side of the ODE, solving it would reduce to an ordinary integral with respect to \(t\): \[\mathbf{y}(t) = \int \mathbf{f}(t,\mathbf{y}(t))dt .\] But, of course, we do not know \(\mathbf{y}(t)\); that is the whole point of the exercise.

Nevertheless, when we pose an initial value problem with initial condition \(\mathbf{y}(0)\), \[\mathbf{y}(h) = \mathbf{y}(0) + \int_0^h \mathbf{f}(t,\mathbf{y}(t))dt ,\] this “nonsensical” integral serves as a reminder:

Solving a differential equation is doing (often difficult) integration.

Almost always, whether the integration is analytical or numerical,

When it is hard, we approximate the integrand by simple (often, linear or even step) functions.

Recall the technique used in Numerical Integration, where we approximate a complicated function on each interval of length \(h\) by simple functions so that we can carry out the integral with ease: linear functions in the trapezoidal rule and quadratic functions in Simpson’s rule. Why not apply the same idea here and approximate the complicated (indeed, unknown) function \(\mathbf{y}(t)\)?

Euler’s method

Let’s try linear approximation. Let \(\hat{\mathbf{y}}(t)\) be an approximation to \(\mathbf{y}(t)\) for \(t>0\), with \(\hat{\mathbf{y}}(0) = \mathbf{y}(0)\). If \(\mathbf{y}(t)\) is linear, then \(\mathbf{f}\) is a constant function. A natural choice is \(\mathbf{f}(t,\mathbf{y}(t)) = \mathbf{f}(0,\mathbf{y}(0))\) for \(t\in[0,h]\), and we can easily compute \(\mathbf{f}(0,\mathbf{y}(0))\) using the knowledge of \(\mathbf{y}(0)\) and \(\mathbf{f}\). Thus, \[\begin{align} \hat{\mathbf{y}}(h) &= \hat{\mathbf{y}}(0) + \int_0^h \mathbf{f}(0,\hat{\mathbf{y}}(0))dt\\ &= \hat{\mathbf{y}}(0) + h\mathbf{f}(0,\hat{\mathbf{y}}(0)) \end{align}\]

Applying this approximation successively, we obtain Euler’s method for solving ODEs: \[\begin{align} \hat{\mathbf{y}}(nh+h) &= \hat{\mathbf{y}}(nh) + \int_{nh}^{nh+h} \mathbf{f}(nh,\hat{\mathbf{y}}(nh))dt\\ &= \hat{\mathbf{y}}(nh) + h\mathbf{f}(nh,\hat{\mathbf{y}}(nh)) \end{align}\] for \(n\in\{0,1,\dots\}\).
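For instance, a single Euler step of the predator–prey model from \(y_0=(150,804)\) with \(h=1\), using the parameter values from before, works out to:

```r
# one Euler step for the predator-prey model
a <- 0.05; b <- 0.0002; g <- 0.8; e <- 0.03
f <- function(y) c(a*y[1] - b*y[1]*y[2], g*b*y[1]*y[2] - e*y[2])
y0 <- c(150, 804)
h <- 1
y1 <- y0 + h*f(y0)   # c(133.38, 799.176): both populations decreasing
```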

By its nature of successive approximation, each approximation error \(\|\hat{\mathbf{y}}(nh) - \mathbf{y}(nh)\|\) accumulates over time; the global error of Euler’s method is of order \(O(h)\).
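The size of the accumulated error is easiest to see on a toy scalar problem with a known solution, say \(y' = -y\), \(y(0)=1\), whose exact solution is \(e^{-t}\) (this test problem is chosen purely for illustration; it is not part of the predator-prey model):

```r
# global error of Euler's method at t = 1 for y' = -y, y(0) = 1
euler_scalar <- function(h) {
  y <- 1
  for (n in 1:round(1/h)) y <- y + h*(-y)   # one Euler step
  y
}
err1 <- abs(euler_scalar(0.1)  - exp(-1))
err2 <- abs(euler_scalar(0.05) - exp(-1))
err1 / err2   # close to 2: halving h roughly halves the error (first order)
```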

Let’s illustrate this using the predator–prey model. Each trajectory starts at \(y_0=(150, 804)\), which is a point on the orbit.

# ODEs: rates of change (dx/dt, dy/dt) at state y = (prey, predator)
LV <- function(t, y) {
  c(a*y[1] - b*y[1]*y[2], g*b*y[1]*y[2] - e*y[2])
}
# Euler's method: N steps of size h from the initial condition (t0, y0)
euler <- function(f, t0, y0, h, N) {
  y <- matrix(0, nrow=N+1, ncol=length(y0))
  y[1,] <- y0
  t <- t0
  for (n in 1:N) {
    y[n+1,] <- y[n,] + h*f(t, y[n,])  # one Euler step
    t <- t + h
  }
  return(y)
}
# y0 = (150, 804) is a point on the orbit
# y0 <- c(150, 804)
for (h in c(1, 0.1)) {
  N <- 820/h                         # integrate up to t = 820
  y <- euler(LV, 0, y0, h, N)
  plot(y[,1], y[,2], type='l', xlab="prey", ylab="predator")
  points(y[1,1], y[1,2], pch=16)     # filled dot: starting point
  text(y[1,1]+10, y[1,2], expression(y[0]), pos=1)
  points(y[N+1,1], y[N+1,2])         # open dot: end point
  title(paste("Euler's method (h = ", h, ")", sep=""))
}
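A complementary way to see the drift, without knowing the exact solution, is through the implicit equation from earlier: along a true solution the quantity \(H(x,y) = gbx - e\log(x) + by - a\log(y)\) stays equal to \(C\), so \(|H(\hat{\mathbf{y}}(t)) - C|\) measures how far the numerical trajectory has wandered off the orbit. A minimal sketch (restating the parameters and the Euler loop so it is self-contained; `drift` is a helper introduced here for illustration):

```r
# measure how far Euler's method drifts off the orbit via the conserved quantity
a <- 0.05; b <- 0.0002; g <- 0.8; e <- 0.03; C <- -0.3
LV <- function(y) c(a*y[1] - b*y[1]*y[2], g*b*y[1]*y[2] - e*y[2])
H  <- function(y) g*b*y[1] - e*log(y[1]) + b*y[2] - a*log(y[2])
drift <- function(h, Tend=820) {
  y <- c(150, 804)                  # y0 is on the orbit, so H(y0) is about C
  for (n in 1:round(Tend/h)) y <- y + h*LV(y)
  abs(H(y) - C)                     # deviation from the level set at t = Tend
}
drift(0.1) > drift(0.01)   # smaller step size stays closer to the orbit
```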