Arguments That Seem True but Are Actually False in Differential Equations
The author did not state precisely what he meant, and thus left a loophole.
In [Ches, p.181, l.10–l.12], Chester says, "The fact that t is missing from the right-hand sides makes the family of solutions an n- instead of an (n+1)-parameter family."
It is easy to find a counterexample: dx/dt = 1, but x = t+c. I think [Joh, p.9, l.6–p.10, l.6] was what Chester meant. In fact, there is a better interpretation than John's. We can reduce the three equations of [Joh, p.9, (4.4)] to the two equations of [Joh, p.9, (4.3)]. Then we have only two integration constants.
 Handwaving arguments

 The author's wording creates a façade that leaves readers under the impression that they have finished the proof even though they have not.

The author omits so many details that readers are left with no clue about how to start a rigorous proof.
Example 1. [Kre, p.49, l.-2–l.-1]. Kreyszig should have said, "Since a_{1} = b_{1} and a_{2} = b_{2}, to first order all we need to prove is (da_{3}/ds)|_{s_{0}} = (db_{3}/ds*)|_{s*_{0}}."
 Either the argument's loose ends need to be tied together or its confusing
points need to be clarified.
Example 1. In the proof of [Spi, vol.1, p.203, Theorem 5], Spivak leaves out the part of the proof given in [Arn1, p.306, l.19–p.307, l.2]. By contrast, in Arnold's proof he points out what tasks need to be done in order to tie the loose ends together, and what analyses need to be refined in order to clarify the confusion.
Example 2. [Spi, vol.1, p.214, Theorem 10] leaves its proof unfinished. In order to complete the proof, we must prove the following lemma: If Xf = Yf for every C^{∞} f: M → R, then X = Y.
Proof: Let X = Σ a_{i}(∂/∂x^{i}) and Y = Σ b_{i}(∂/∂x^{i}). Applying the hypothesis to the coordinate functions f = x^{i} gives a_{i} = b_{i} locally ⇒ X = Y locally.
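The coefficient-extraction step in this lemma (applying a vector field to the coordinate functions f = x^{i} to read off its coefficients) can be sketched numerically. Everything below (the coefficient functions a1 and a2, the sample point, and the finite-difference step) is invented for illustration:

```python
# Sketch: applying the vector field X = sum_i a_i(x) d/dx^i to the
# coordinate function f(x) = x^j recovers the coefficient a_j(x).
# The coefficient functions a1, a2 below are hypothetical.
def apply_field(coeffs, f, p, h=1e-6):
    """X f at the point p, via central finite differences."""
    total = 0.0
    for i in range(len(p)):
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        total += coeffs[i](p) * (f(q_plus) - f(q_minus)) / (2 * h)
    return total

a1 = lambda q: q[0] + 2 * q[1]   # hypothetical coefficient a_1
a2 = lambda q: q[0] * q[1]       # hypothetical coefficient a_2
p = [0.3, 0.7]

x1 = lambda q: q[0]              # coordinate function f = x^1
print(abs(apply_field([a1, a2], x1, p) - a1(p)) < 1e-8)  # prints True
```

Central differences are exact for the linear coordinate functions, so applying the field to f = x^{j} returns a_{j}(p) up to rounding; two fields that agree on every such f therefore have equal coefficients.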
 Analysis needs to be refined and the abuse of notation needs to be examined.
Example.
The equality given in [Spi1, p.102, l.1].
Case j ≠ i: Let 1 ≤ m ≤ k and m ≠ i. If m ≠ j, I_{(j,α)}*(dx^{m}) = dx^{m}. If m = j, I_{(j,α)}*(dx^{m}) = 0. If j ≠ i, (∃m)_{1≤m≤k}: m = j.
Case j = i: Strictly speaking, f should be defined on [0,1]^{k}. Spivak's notation is sloppy: f on [0,1]^{k-1} should have been written as f_{(j,α)}. For k = 2, the integral given in [Spi1, p.102, l.1] is actually a line integral. It can be considered an area integral only if we add a factor ∫_{[0,1]} dx^{i} = 1.
 Information that should have been provided is not.
In [Spi, vol. 1, p.290, l.13], Spivak says, "It is easy to prove that dω is alternating, …", but he fails to provide any hints to start the proof. It would be helpful if he could provide the following hints: Assume X_{i_{0}} = X_{j_{0}}, where i_{0} < j_{0}. For S_{1}, consider the cancellation between X_{i_{0}}ω and X_{j_{0}}ω. For S_{2}, consider the cancellations among the four cases: i < i_{0}, i_{0} < j, i < j_{0}, and j_{0} < j.
 If a quoted theorem [Har, p.25, Lemma 2.1] has several versions whose proofs are similar, we still have to present every version in detail even though we may prove one version and omit the proofs of the other versions. If we omit any version, serious confusion may arise (compare [Har, p.25, (2.2)] with the equality given in [Har, p.25, l.13]). A random guess only leads to an incorrect result (e.g., u_{0} ≥ v(t_{0}) given in [Har, p.27, l.10] should have been v(t_{0}) ≥ u^{0}). In addition, we must clarify the meaning of ambiguous statements (the large n in [Har, p.25, l.14; l.7] refers to n such that α_{n} = min(a, b/(M+n^{-1})) > α').
Remark.
The various versions of [Har, p.25, Lemma 2.1]:
(*)_{max} u' = U(t,u), u(t_{0}) = u_{0}
(**)_{max} u(t) ≤ u^{0}(t)
(*)_{min} u' = U(t,u), u(t_{0}) = u^{0}
(**)_{min} u(t) ≥ u_{0}(t)
 (The left-half interval) Let U(t,u) be continuous on a rectangle R: t_{0}-a ≤ t ≤ t_{0}, |y-y_{0}| ≤ b; let |U(t,u)| ≤ M and α = min(a, b/M). Then
 (The maximal solution) (*)_{max} has a solution u = u^{0}(t) on [t_{0}-α, t_{0}] with the property that every solution u = u(t) of u' = U(t,u), u(t_{0}) ≤ u_{0} satisfies (**)_{max} on [t_{0}-α, t_{0}].
 (The minimal solution) (*)_{min} has a solution u = u_{0}(t) on [t_{0}-α, t_{0}] with the property that every solution u = u(t) of u' = U(t,u), u(t_{0}) ≥ u^{0} satisfies (**)_{min} on [t_{0}-α, t_{0}].
 (The right-half interval) Let U(t,u) be continuous on a rectangle R: t_{0} ≤ t ≤ t_{0}+a, |y-y_{0}| ≤ b; let |U(t,u)| ≤ M and α = min(a, b/M). Then
 (The maximal solution) (*)_{max} has a solution u = u^{0}(t) on [t_{0}, t_{0}+α] with the property that every solution u = u(t) of u' = U(t,u), u(t_{0}) ≤ u_{0} satisfies (**)_{max} on [t_{0}, t_{0}+α].
 (The minimal solution) (*)_{min} has a solution u = u_{0}(t) on [t_{0}, t_{0}+α] with the property that every solution u = u(t) of u' = U(t,u), u(t_{0}) ≥ u^{0} satisfies (**)_{min} on [t_{0}, t_{0}+α].
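The distinction between a maximal solution u^{0} and a minimal solution u_{0} is easiest to see in a non-uniqueness example. The sketch below uses U(t,u) = 2|u|^{1/2} with t_{0} = 0, u_{0} = 0 (a standard textbook choice made for this sketch; it is not the example in [Har]) and checks numerically that u^{0}(t) = t^{2}, u_{0}(t) = 0, and an intermediate solution all solve the equation, with the intermediate one squeezed between the extremes:

```python
import math

# U(t,u) = 2*sqrt(|u|), u(0) = 0: infinitely many solutions on [0, 1].
# u^0(t) = t^2 is the maximal solution, u_0(t) = 0 the minimal one, and
# u_c(t) = max(t - c, 0)^2 lies between them for each 0 < c < 1.
def U(t, u):
    return 2.0 * math.sqrt(abs(u))

u_max = lambda t: t * t                    # maximal solution u^0(t)
u_min = lambda t: 0.0                      # minimal solution u_0(t)
u_c   = lambda t: max(t - 0.5, 0.0) ** 2   # intermediate solution (c = 0.5)

h = 1e-6
for t in [0.2, 0.7, 1.0]:                  # sample points avoiding the kink at t = 0.5
    for u in (u_max, u_min, u_c):
        du = (u(t + h) - u(t - h)) / (2 * h)   # numerical derivative
        assert abs(du - U(t, u(t))) < 1e-4     # each function solves u' = U(t,u)
    assert u_min(t) <= u_c(t) <= u_max(t)      # (**)_min and (**)_max both hold
print("ok")
```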
 Be specific. It is inappropriate to say, "It is impossible for Statement A to be true." Instead, we must pinpoint which previous statement would be contradicted if Statement A were true. Otherwise, the readers will have no clue to follow.
Example. "This is impossible" [Har, p.40, l.3]. Hartman should be more specific and say that
[(V(y(t)) → c as t → ∞) ⇒ ((d/dt)V(y(t)) → 0 as t → ∞)].
 A proof may apply to a particular case, but fail to apply to the general case.
Example. We want to prove ∫_{[z, z+2K]} dn^{2}z dz = ∫_{[0, 2K]} dn^{2}z dz [Gon1, p.441, (5.245)]. The proof given in [Gon1, p.441, l.18] applies only to the case where z is a real number. If z = x+iy, we should use [Gon1, (5.1725), (5.184) and (5.186)].
 The argument is misleading even though the conclusion is correct.
In [Gon1, p.476, l.4–l.5], González says that e_{1} is the largest root of 4u^{3} - g_{2}u - g_{3} = 0 because ℘'(x) < 0 for 0 < x < ω_{1}. However, the reason he provides is not fundamental. Assume e_{2} > e_{1}. Then there exists an x_{0} such that 0 < x_{0} < ω_{1} and ℘(x_{0}) = e_{2} [Gon1, p.475, l.-14–l.-13]. By [Gon1, p.376, Theorem 5.17], the existence of the root x_{0} contradicts [Gon1, p.449, Corollary 5.23a].
 The proof given in [Wat1, p.452, l.9–l.16] provides rough ideas, but fails to specify a method to execute them. In contrast, the proof of [Gon1, p.378, Theorem 5.19] provides specific details.
 Links {1}.
 We must pinpoint which theorem we use in a proof.
If the proofs of Theorems A, B, and C are similar and we use Theorem A in a proof, we should say that we use Theorem A rather than Theorem B or Theorem C.
Example. In [Har, p.32, l.15–l.16, proof of Theorem 6.1], Hartman claims that he uses [Har, p.27, Corollary 4.3] and the remark following [Har, p.26, Theorem 4.1]. In view of all the versions of [Har, p.26, Theorem 4.1] listed below, Hartman actually uses its (the minimal solution; the left-half interval) version rather than its other versions or [Har, p.27, Corollary 4.3].
Remark.
The various versions of [Har, p.26, Theorem 4.1]:
(*)_{max} u' = U(t,u), u(t_{0}) = u_{0}
(**)_{max} v(t) ≤ u^{0}(t)
(*)_{min} u' = U(t,u), u(t_{0}) = u^{0}
(**)_{min} v(t) ≥ u_{0}(t)
 (The maximal solution) Let U(t,u) be a continuous function on an open (t,u)-set E and u = u^{0}(t) be the maximal solution of (*)_{max}.
 (The left-half interval) Let v(t) be a continuous function on [t_{0}-a, t_{0}] satisfying the conditions v(t_{0}) ≤ u_{0} and (t, v(t)) ∈ E, and let v(t) have a left derivative D_{L}v(t) on (t_{0}-a, t_{0}] such that D_{L}v(t) ≥ U(t, v(t)). Then on a common interval of existence of u^{0}(t) and v(t), (**)_{max} holds.
 (The right-half interval) Let v(t) be a continuous function on [t_{0}, t_{0}+a] satisfying the conditions v(t_{0}) ≤ u_{0} and (t, v(t)) ∈ E, and let v(t) have a right derivative D_{R}v(t) on [t_{0}, t_{0}+a) such that D_{R}v(t) ≤ U(t, v(t)). Then on a common interval of existence of u^{0}(t) and v(t), (**)_{max} holds.
 (The minimal solution) Let U(t,u) be a continuous function on an open (t,u)-set E and u = u_{0}(t) be the minimal solution of (*)_{min}.
 (The left-half interval) Let v(t) be a continuous function on [t_{0}-a, t_{0}] satisfying the conditions v(t_{0}) ≥ u^{0} and (t, v(t)) ∈ E, and let v(t) have a left derivative D_{L}v(t) on (t_{0}-a, t_{0}] such that D_{L}v(t) ≤ U(t, v(t)). Then on a common interval of existence of u_{0}(t) and v(t), (**)_{min} holds.
 (The right-half interval) Let v(t) be a continuous function on [t_{0}, t_{0}+a] satisfying the conditions v(t_{0}) ≥ u^{0} and (t, v(t)) ∈ E, and let v(t) have a right derivative D_{R}v(t) on [t_{0}, t_{0}+a) such that D_{R}v(t) ≥ U(t, v(t)). Then on a common interval of existence of u_{0}(t) and v(t), (**)_{min} holds.
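The comparison statement in the (maximal solution; right-half interval) version can be illustrated with the hypothetical choice U(t,u) = u, t_{0} = 0, u_{0} = 1 (my choice, not Hartman's): the maximal solution of (*)_{max} is u^{0}(t) = e^{t}, and v(t) = 1+t satisfies v(0) ≤ 1 and D_{R}v(t) = 1 ≤ 1+t = U(t, v(t)) for t ≥ 0, so (**)_{max} must hold:

```python
import math

# Sketch of (**)_max for U(t,u) = u, t_0 = 0, u_0 = 1 (hypothetical example):
# u^0(t) = e^t is the maximal solution, and v(t) = 1 + t satisfies the
# hypotheses of the (maximal solution; right-half interval) version.
u_sup = math.exp               # maximal solution u^0(t) = e^t
v = lambda t: 1.0 + t          # comparison function with v(0) <= u_0 = 1

for k in range(101):
    t = k / 100.0              # grid on [0, 1]
    assert 1.0 <= v(t)         # D_R v(t) = 1 <= v(t) = U(t, v(t))
    assert v(t) <= u_sup(t) + 1e-12   # (**)_max: v(t) <= u^0(t)
print("ok")
```

The inequality v(t) ≤ e^{t} here is just the elementary bound 1+t ≤ e^{t}, which is what the comparison theorem predicts.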
 A statement is false because of the use of improper notation.
Example. [Spi, vol.1, p.214, l.2]. Correction: g(t,p) should have been ∫_{0}^{1} (∂/∂y)f(st,p) ds, where y = st.
 The final answer is correct, but the intermediate calculations are not.
Remark. The statement given in [Inc1, p.120, l.-17–l.-13] is correct, but the coefficient of u^{(n)} is (-1)^{n}D(u_{1},…,u_{n}) rather than D(u_{1},…,u_{n}) [Inc1, p.120, l.19]. Each step of a proof must be carefully examined; no guess or wishful thinking should be allowed.
 In order to prove the equality of two series, we have to consider their domains of convergence in addition to their algebraic sums [Wat1, p.16, l.-16–l.-10].
 A complex variable's argument assigned in the proof of a theorem should be
consistent with the variable's argument assigned in the theorem's hypothesis
[1].
 The statement that seems true but is actually false: If a_{n} ≠ 0 for n ≥ 1, then [Perr, p.289, (1)] ⇒ [Perr, p.290, (3)].
From the above statement we should learn the following lessons:
 How does this guess arise?
Ans. If a_{n+1} = 0 for some positive integer n, then [Perr, p.289, Satz 44] holds.
 Formally, the statement seems true [Perr, p.13, l.15]. If it were true, what contradiction would we obtain?
Ans. [Perr, p.290, l.5–l.16]
 How do we correct the statement?
Ans. For the strong case given in [Perr, p.290, l.21], we can find the minimum requirement (i.e., necessary and sufficient conditions) [Perr, p.290, Satz 45]. For the weak case given in [Perr, p.291, l.9], we can only find sufficient conditions [Perr, pp.291–292, Satz 46, B, C, or D].
 (A theorem's proof should be guided by its physical theme)
Without being guided by a theorem's physical theme, one may easily get lost in the maze of its proof. [Cod, p.319, l.-15–l.-10] uses the following argument:
if [((A and C) ⇒ B) and (B ⇒ C)], then (A ⇒ B) (*),
where
A = [Cod, p.318, (1.16) & (1.17)];
C = [φ(t) ≤ δ and Cod, p.319, (1.22)] (see [Cod, p.319, l.15–l.16]);
B = [Cod, p.319, (1.23)].
If in (*) we substitute C for B, we see that the conclusion (A ⇒ C) is false. Thus, Levinson's argument is incorrect. However, the hypothesis [((A and C) ⇒ B) and (B ⇒ C)] ensures that under condition A, B and C are equivalent. We can correct Levinson's mistake by the following method: even though the estimate given in [Pon, p.211, l.11] is less effective than that given in [Cod, p.319, (1.23)], we may use the former estimate to prove C. Thereby, we obtain the better estimate B.
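The propositional point can be confirmed mechanically. A brute-force truth table (plain Python, nothing from [Cod]) shows that the hypothesis does not entail (A ⇒ B), yet under condition A it forces B and C to be equivalent:

```python
from itertools import product

# Truth-table check: H = [((A and C) => B) and (B => C)] does NOT entail
# (A => B); the only assignment satisfying H while refuting (A => B) is
# A = True, B = False, C = False.  Under A, however, H forces B <=> C.
imp = lambda p, q: (not p) or q    # material implication

refuting = []
for A, B, C in product([False, True], repeat=3):
    H = imp(A and C, B) and imp(B, C)
    if H and not imp(A, B):
        refuting.append((A, B, C))
    if H and A:
        assert B == C              # under condition A, H makes B and C equivalent

print(refuting)                    # prints [(True, False, False)]
```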
 [Inc1, p.196, l.7–l.8] says that the limits of integration are to be determined so that the term [t^{e}(1-t)^{c-e}(∂u/∂t)]_{a}^{b} vanishes identically. Actually, it leaves out another term [u^{(0)}(p_{0}v)^{(1)}]_{a}^{b}, where p_{0} = t(1-t). See [Inc1, p.186, l.8] and [Cod, p.86, (6.12)].
If u = F(a,b;e;xt), both terms are 0 when a = 0, b = 1, provided that e-1 > 0 and c-e-1 > 0.
Remark. The equality given in [Inc1, p.196, l.9] holds when b > 1, c > b+1. By analytic continuation [Guo, p.153, l.-10–l.-7], it also holds when b > 0, c > b.
 Prove ψ(m+1) = 1^{-1}+2^{-1}+3^{-1}+…+m^{-1} - γ [Wat, p.60, l.4].
Incorrect proof. ψ(m+1) = -γ + Σ_{n=1}^{∞}(n^{-1} - (m+n)^{-1}) [Guo, p.108, (10)]
= -γ + Σ_{n=1}^{∞}n^{-1} - Σ_{n=m+1}^{∞}n^{-1} (incorrect step).
Correct proof. Γ(z+1) = zΓ(z)
⇒ Γ'(z+1) = Γ(z) + zΓ'(z)
⇒ Γ'(z+1)[Γ(z)]^{-1} = 1 + zΓ'(z)[Γ(z)]^{-1}
⇒ Γ'(z+1)[Γ(z+1)]^{-1} = z^{-1} + Γ'(z)[Γ(z)]^{-1}.
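Both the stated identity and the recurrence ψ(z+1) = z^{-1} + ψ(z) can be checked numerically. The sketch below approximates ψ by a central difference of math.lgamma (a convenience of this sketch, not Watson's method) and uses the numerical value of Euler's constant γ:

```python
import math

# Numerical check of psi(m+1) = 1 + 1/2 + ... + 1/m - gamma and of the
# recurrence psi(z+1) = 1/z + psi(z), where psi = Gamma'/Gamma is
# approximated by a central difference of log Gamma.
EULER_GAMMA = 0.5772156649015329
H = 1e-5

def psi(z):
    # psi(z) = d/dz log Gamma(z), central finite difference
    return (math.lgamma(z + H) - math.lgamma(z - H)) / (2 * H)

m = 5
harmonic = sum(1.0 / n for n in range(1, m + 1))      # 1 + 1/2 + ... + 1/m
assert abs(psi(m + 1) - (harmonic - EULER_GAMMA)) < 1e-8

z = 2.5
assert abs(psi(z + 1) - (1.0 / z + psi(z))) < 1e-8    # the recurrence above
print("ok")
```

Note that the "incorrect step" above splits the convergent series Σ(n^{-1} - (m+n)^{-1}) into two divergent harmonic tails; the numerical check confirms the identity without any such splitting.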
 Links {1 (the symbol O)}.