Fixed Point Analysis

So far, the cases we've looked at seem to have a number of common features. Each has a solution with $\theta(t)$ and $v(t)$ constant; there are solutions which immediately tend toward the constant solution (or oscillate around it), and solutions which do some number of loops first. (For R = 3 we didn't demonstrate the looping: if the initial angle is 0, a solution needs an initial velocity larger than 86.3 before it will do a loop.) How do the possible solutions depend on the value of R? For example, the behavior of solutions for R = 0.1 and R = 0.2 is quite similar, but dramatically different from that for R = 0 and R = 3.


One way we can get a better handle on exactly how the behavior of the solutions depends on R is to examine what happens to solutions near the constant solution. In the $\theta$-$v$ plane, this solution corresponds to a single point, which is often referred to as a ``fixed point''. We explicitly found the fixed point $\{\theta(t) = 0, v(t) = 1\}$ for R = 0 in §2 by noticing that a solution is constant exactly when $\dot{\theta} = 0$ and $\dot{v} = 0$. Using solve, we can get Maple to tell us where the fixed point is for arbitrary values of R. We use convert to insist that Maple represent the result as a radical, instead of using the RootOf notation.

> 
  R:='R':
  FixPoint:=convert(
            solve({(v^2-cos(theta))/v=0, -sin(theta)-R*v^2=0}, {theta,v}),
            radical);


\begin{maplelatex}
\begin{displaymath}
\mathit{FixPoint} := \left\{ v = \left(\frac{1}{R^{2}+1}\right)^{1/4},\;
\theta = -\arcsin\!\left(R\,\sqrt{\frac{1}{R^{2}+1}}\right) \right\}
\end{displaymath}\end{maplelatex}

So, we see that there is a fixed point for all values of R (since $R^2 + 1$ is always positive, the radical always takes real values). Note that $-\arcsin(R\sqrt{1/(R^2+1)})$ is just $-\arctan R$. For R > 0, $\theta(t)$ at this fixed solution is negative, so this corresponds to a diving solution. As R increases, the angle of the dive becomes steeper and steeper.
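As a quick sanity check (our addition, not part of the original worksheet), we can substitute FixPoint back into the right-hand sides of the system; a genuine fixed point makes both components vanish.

> 
  # Both components of the vector field should simplify to 0 at the fixed point.
  simplify(subs(FixPoint, [(v^2-cos(theta))/v, -sin(theta)-R*v^2]));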


What can we say about the behavior of solutions near this fixed solution?

In order to answer this, notice that we can apply Taylor's theorem to the right-hand side of our system of differential equations. That is, think of our differential equation as being in the vector form

\begin{displaymath}
\frac{d\vec{X}}{dt} = \vec{F}(\vec{X}),
\end{displaymath}

where (in our case) $\vec{X} = [\theta(t), v(t)]$ and $\vec{F}(\vec{X}) = [(v^2 - \cos\theta)/v,\; -\sin\theta - Rv^2]$. Taylor's theorem says that for $\vec{X}$ near $\vec{X}_0$,

\begin{displaymath}
\vec{F}(\vec{X}) = \vec{F}(\vec{X}_0) + D\vec{F}(\vec{X}_0)(\vec{X} - \vec{X}_0) + \mbox{higher order terms},
\end{displaymath}

where $D\vec{F}(\vec{X}_0)$ is the Jacobian of $\vec{F}$ evaluated at $\vec{X}_0$. Since $\vec{X}_0$ is a fixed point, $\vec{F}(\vec{X}_0)$ is the zero vector. Hence, we can get a good idea of what is happening close to the fixed point by studying the linear system of differential equations given by the derivative at the fixed point.
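To make this concrete, here is a small illustration of our own (assuming classic Maple, where readlib loads mtaylor): expanding each component of $\vec{F}$ to first order about the R = 0 fixed point $(\theta, v) = (0, 1)$ keeps exactly the Jacobian terms.

> 
  # readlib is needed in classic Maple to make mtaylor available
  readlib(mtaylor):
  mtaylor((v^2-cos(theta))/v, [theta=0, v=1], 2);   # linear part: 2*(v-1)
  mtaylor(-sin(theta), [theta=0, v=1], 2);          # linear part: -theta

So near this fixed point, the nonlinear system behaves like the linear system $\dot{\theta} = 2(v - 1)$, $\dot{v} = -\theta$.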


First, we give a brief refresher on linear 2×2 systems of equations. The reader is referred to the texts in the references for a more comprehensive treatment of this topic.

Linear Systems of ODEs

Linear differential equations (with constant coefficients) are the simplest ones to solve and understand. Such equations have the form $\frac{dX}{dt} = AX$, where A is an n×n matrix and X is an n-vector. We'll be content with n = 2.

First, notice that X = 0 is always a fixed point for this system, and it is the only one (provided A is invertible). Secondly, if $X_1(t)$ and $X_2(t)$ are solutions to the system, then so is any combination $c_1 X_1(t) + c_2 X_2(t)$. This property is called superposition and will be very useful.

Let's look at the simplest case for the 2×2 system: $A = \left(\begin{array}{cc} a & 0 \\ 0 & d \end{array}\right)$, that is,

\begin{displaymath}
\dot{x} = ax, \qquad \dot{y} = dy.
\end{displaymath}

If we have an initial condition with y(0) = 0, then y(t) = 0 for all t. This means that the problem reduces to the one-dimensional case, and $x(t) = x(0)e^{at}$. If a > 0, the solution moves away from the origin along the x-axis as t increases; if a < 0, the solution moves toward the origin. If the initial condition is on the y-axis, the sign of d controls whether the solution moves away from (d > 0) or toward (d < 0) the origin. Because of the superposition property mentioned before, an initial condition with $x(0) \ne 0$ and $y(0) \ne 0$ gives rise to a solution which is a combination of these two behaviors.
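We can confirm this with dsolve (a minimal check of our own):

> 
  # general solution of the decoupled system with symbolic initial data
  dsolve({diff(x(t),t) = a*x(t), diff(y(t),t) = d*y(t),
          x(0) = x0, y(0) = y0}, {x(t), y(t)});

which returns $x(t) = x_0 e^{at}$ and $y(t) = y_0 e^{dt}$, exactly the behavior described above.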


\begin{mfigure}\centerline{ \psfig{figure=linear01.eps,height=2.5in} \hfil
\psfig{figure=linear02.eps,height=2.5in}}
\centerline{ \psfig{figure=linear03.eps,height=2.5in} \hfil
\psfig{figure=linear04.eps,height=2.5in}}\end{mfigure}

A large number of linear systems behave very similarly to the cases above, except the straight-line solutions may not lie exactly along the coordinate axes.

Suppose there is a solution of $\dot{X} = AX$ which lies along a straight line, that is, along some vector $\vec{v}$. Because the system is linear, the tangent vectors to this solution must be of the form $\lambda X$ for some number $\lambda$. Such a number $\lambda$ is called an eigenvalue of A, and $\vec{v}$ is a corresponding eigenvector. Note that in the case above, we had eigenvalues a and d with eigenvectors [1, 0] and [0, 1].

To find the eigenvalues, we need to solve

\begin{displaymath}
AX = \lambda X, \qquad \mbox{or equivalently,} \qquad (A - \lambda I)X = 0.
\end{displaymath}

This can only happen if X is the zero vector, or the determinant of $A - \lambda I$ is zero. In the latter case, writing $A = \left(\begin{array}{cc} a & b \\ c & d \end{array}\right)$, we must have

\begin{displaymath}
(a - \lambda)(d - \lambda) - bc = 0,
\end{displaymath}

that is,

\begin{displaymath}
\lambda^2 - (a + d)\lambda + (ad - bc) = 0.
\end{displaymath}

Using the quadratic formula, we see that the eigenvalues must be

\begin{displaymath}
\lambda = \frac{(a+d) \pm \sqrt{(a+d)^2 - 4(ad - bc)}}{2}.
\end{displaymath}

The quantity a + d is called the trace of A (more generally, the trace of a matrix is the sum of its diagonal entries), and ad - bc is the determinant of A. The eigenvalues of a 2×2 matrix can be expressed in terms of the trace and the determinant[3.3] as

\begin{displaymath}
\lambda = \frac{\operatorname{Tr} A \pm \sqrt{(\operatorname{Tr} A)^2 - 4 \det A}}{2}.
\end{displaymath}

We'll use this form again in a little while.
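As a consistency check (our addition, not in the original worksheet), Maple's eigenvals reproduces this formula for a generic 2×2 matrix:

> 
  with(linalg):
  A := matrix(2,2, [a,b,c,d]):
  eigenvals(A);                                    # both roots of the quadratic
  (trace(A) + sqrt(trace(A)^2 - 4*det(A)))/2;      # the ``+'' root from the formula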


For linear systems, the eigenvalues determine the ultimate fate of solutions to the ODE. From the above, it should be clear that there are always two eigenvalues for a 2×2 matrix (although sometimes there might be a double eigenvalue, when $(\operatorname{Tr} A)^2 - 4\det A = 0$). Let's call them $\lambda_1$ and $\lambda_2$.

We've already seen prototypical examples where the eigenvalues are real, nonzero, and distinct. But what if the eigenvalues are a complex-conjugate pair (which happens when $(\operatorname{Tr} A)^2 < 4\det A$)? In this case, there is no straight-line solution in the reals. Instead, solutions turn around the origin. If the real part of the eigenvalues is positive, solutions spiral away from the origin; if it is negative, they spiral toward it. If the eigenvalues are purely imaginary, the solutions neither move in nor out; rather, they circle around the origin.
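For instance (an example of our own, with linalg loaded as above), a matrix with trace 2 and determinant 5 has eigenvalues $1 \pm 2i$, so its solutions spiral outward:

> 
  eigenvals( matrix(2,2, [1,-2,2,1]) );   # 1+2*I, 1-2*I: a spiral source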

We can summarize this in the table below. We are skipping over the degenerate cases, when $\lambda_1 = \lambda_2$ and when one of the eigenvalues is zero.

\begin{displaymath}
\begin{array}{llc}
\lambda_1 > \lambda_2 > 0 & \mbox{source} & \mbox{\psfig{figure=lin_source.eps,height=.5in}} \\
\lambda_i \mbox{ complex},\; \operatorname{Re}\lambda > 0 & \mbox{spiral source} & \mbox{\psfig{figure=lin_ssrc.eps,height=.5in}} \\
\lambda_1 < \lambda_2 < 0 & \mbox{sink} & \mbox{\psfig{figure=lin_sink.eps,height=.5in}} \\
\lambda_i \mbox{ complex},\; \operatorname{Re}\lambda < 0 & \mbox{spiral sink} & \mbox{\psfig{figure=lin_ssink.eps,height=.5in}} \\
\lambda_1 < 0 < \lambda_2 & \mbox{saddle} & \mbox{\psfig{figure=lin_saddle.eps,height=.5in}} \\
\lambda_i \mbox{ complex},\; \operatorname{Re}\lambda = 0 & \mbox{center} & \mbox{\psfig{figure=lin_center.eps,height=.5in}}
\end{array}
\end{displaymath}


Finally, we remark that there is a convenient way to organize this information into a single diagram. Since the eigenvalues of A can be written as

\begin{displaymath}
\lambda = \frac{\operatorname{Tr} A \pm \sqrt{(\operatorname{Tr} A)^2 - 4 \det A}}{2},
\end{displaymath}

we can consider the matrix A as being a point in the trace-determinant plane. Then the curve where $(\operatorname{Tr} A)^2 = 4\det A$ is a parabola; if A is above this parabola, it has complex eigenvalues. Furthermore, if the determinant is negative, we must have a saddle because[3.4] $\sqrt{(\operatorname{Tr} A)^2 - 4\det A} > |\operatorname{Tr} A|$, which means there is one positive and one negative eigenvalue. Finally, if the determinant is positive, the eigenvalues (or their real parts, if they are complex) have the same sign as the trace of A. We summarize this in the diagram below.


\begin{mfigure}\centerline{\psfig{figure=lin_bifdiag.eps,height=2in}}\end{mfigure}
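The diagram suggests a simple decision procedure. Below is a hypothetical helper of our own (the name classify and its structure are ours, not from the original worksheet) that places a 2×2 matrix in the trace-determinant plane, skipping the degenerate boundary cases just as the table above does:

> 
  classify := proc(A)
    local tr, dt;
    tr := linalg[trace](A);  dt := linalg[det](A);
    if dt < 0 then "saddle"
    elif tr^2 > 4*dt then
      if tr > 0 then "source" else "sink" fi
    elif tr > 0 then "spiral source"
    elif tr < 0 then "spiral sink"
    else "center"
    fi
  end:
  classify( matrix(2,2, [0,-1,1,0]) );    # purely imaginary eigenvalues: center
  classify( matrix(2,2, [-1,0,0,-2]) );   # two negative real eigenvalues: sink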

Fixed Points for the Glider

We now return to our nonlinear system, and look at the linearization near the fixed point. As we saw earlier in this section, for every value of R the Phugoid model

\begin{displaymath}
\frac{d\theta}{dt} = \frac{v^2 - \cos\theta}{v}, \qquad
\frac{dv}{dt} = -\sin\theta - Rv^2
\end{displaymath}

has a fixed point at

\begin{displaymath}
\theta = -\arcsin\!\left(R\sqrt{\frac{1}{1 + R^2}}\right) = -\arctan R, \qquad
v = \sqrt[4]{\frac{1}{1 + R^2}}.
\end{displaymath}

We now calculate the Jacobian matrix, using Maple.
> 
  with(linalg):
  R:='R':
  J:=jacobian( [(v^2-cos(theta))/v, -sin(theta)-R*v^2], [theta, v]);


\begin{maplelatex}
\begin{displaymath}
J := \left[
\begin{array}{cc}
{\displaystyle \frac{\sin(\theta)}{v}} & {\displaystyle 2 - \frac{v^{2} - \cos(\theta)}{v^{2}}} \\[2ex]
-\cos(\theta) & -2Rv
\end{array}
\right]
\end{displaymath}\end{maplelatex}

Once we know the trace and determinant of the Jacobian matrix at the fixed point, we can use that information to determine how its type (sink, source, saddle, etc.) depends on R. Rather than solve for the eigenvalues directly, we will compute the trace and determinant, which have a simpler form in this case.

First, let's find the fixed point again, issuing the same command we did earlier. Then we'll substitute it into the output of trace and det to evaluate the trace and determinant at the fixed point.

> 
  FixPoint:=convert(
            solve({(v^2-cos(theta))/v=0, -sin(theta)-R*v^2=0}, {theta,v}),
            radical);


\begin{maplelatex}
\begin{displaymath}
\mathit{FixPoint} := \left\{ \theta = -\arcsin\!\left(R\,\sqrt{\frac{1}{R^{2}+1}}\right),\;
v = \left(\frac{1}{R^{2}+1}\right)^{1/4} \right\}
\end{displaymath}\end{maplelatex}

> 
  trfix := simplify(subs(FixPoint, trace(J)));
  detfix:= simplify(subs(FixPoint, det(J)));


\begin{maplelatex}
\begin{displaymath}
\mathit{trfix} := \frac{-3R}{(R^{2}+1)^{1/4}}
\end{displaymath}
\begin{displaymath}
\mathit{detfix} := 2\sqrt{R^{2}+1}
\end{displaymath}\end{maplelatex}

So, we see that for all real values of R the determinant is positive (meaning saddles are impossible). Furthermore, for R > 0 the trace is negative, so the fixed point is always a sink. (When R = 0, the trace is zero, which means it is possible for this fixed point to be a center.[3.5]) From the plots we made earlier, we expect that for small values of R we will have a spiral sink, but that the solutions will probably not spiral for sufficiently large values of R. We can check this by solving for the point where $(\operatorname{Tr} J)^2 = 4\det J$, which happens when $R = 2\sqrt{2}$.

> 
  solve( (trfix)^2 = 4*detfix, R);


\begin{maplelatex}
\begin{displaymath}2 \sqrt{2},   - 2 \sqrt{2} \end{displaymath}\end{maplelatex}



Footnotes

[3.3] It is true for all matrices that the trace is the sum of the eigenvalues, and the determinant is their product.

[3.4] Since the determinant is the product of the eigenvalues, the only way it can be negative is if they are of opposite signs.

[3.5] We didn't address this point. If the trace is nonzero, it is impossible to have a center. But if the trace is zero, then the higher-order terms in the Taylor series play a role, and anything is possible. In fact, it can be shown that the fixed point for R = 0 is indeed a center. One way to do this is to show that the quantity $E(\theta, v) = v^3 - 3v\cos\theta$ must be constant along any solution when R = 0. Thus, the solutions are the level sets of E, which has a local minimum at $\theta = 0$, $v = 1$. Around a local minimum, the level sets are simple closed curves, so the fixed point is indeed a center.
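Checking this conservation law is a short Maple computation (our addition, a minimal sketch): differentiate E along solutions and substitute the R = 0 equations of motion.

> 
  E := v(t)^3 - 3*v(t)*cos(theta(t)):
  simplify(subs({diff(theta(t),t) = (v(t)^2-cos(theta(t)))/v(t),
                 diff(v(t),t)     = -sin(theta(t))},
                diff(E, t)));     # returns 0, so E is constant when R = 0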
