y = x^2
As x increases,
y is decreasing for x < 0 and
y is increasing for x > 0.
-------------------------------------------------------------
This is part of (all) middle school textbooks.
(In fact, our textbook was written by a professor.
Of course, we were taught about differentiation in high school.)
Why is x = 0 ignored?
Namely,
y is decreasing for x <= 0 and
y is increasing for x >= 0.
What do you think about it?
By almost anyone's definition of "increasing on an interval",
the function x^2 is increasing on every interval (open, closed,
half-open) that is a subset of [0, oo) and decreasing on
every interval (open, closed, half-open) that is a subset
of (-oo, 0].
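To make the endpoint issue concrete, here is a short Python
sketch (my own finite-sample illustration of the definition,
not a proof) checking that x^2 satisfies the pairwise
definition of "decreasing" on samples of [-3, 0] and of
"increasing" on samples of [0, 3], with x = 0 included both
times:

  from itertools import combinations

  def strictly_increasing(f, xs):
      # Pairwise definition: x_1 < x_2 implies f(x_1) < f(x_2).
      return all(f(a) < f(b) for a, b in combinations(sorted(xs), 2))

  def strictly_decreasing(f, xs):
      return all(f(a) > f(b) for a, b in combinations(sorted(xs), 2))

  sq = lambda x: x * x
  left  = [k / 10 for k in range(-30, 1)]   # samples of [-3, 0], 0 included
  right = [k / 10 for k in range(0, 31)]    # samples of [0, 3], 0 included

  print(strictly_decreasing(sq, left))   # True: including x = 0 causes no trouble
  print(strictly_increasing(sq, right))  # True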
This topic comes up often in the ap-calculus math group
(archived at Math Forum). Below are some of the posts
I've made on this topic in that group, roughly ordered
by their mathematical level.
--------------------------------------------------------
16 October 2006
http://mathforum.org/kb/message.jspa?messageID=5261301
Starla Negin wrote:
http://mathforum.org/kb/thread.jspa?messageID=5260694
> 1990 AB 5 asks for intervals on which
> f(x) = sin^2(x) - sin(x) is increasing from
> 0 to 3pi/2. The solution includes the endpoints
> of the intervals. Since these are local extrema,
> I generally tell my students to use open intervals
> that exclude these. My reasoning is that the
> function is neither increasing nor decreasing
> at extrema. Am I mistaken? What do the readers
> look for?
In elementary college calculus courses it is typical,
and in ap-calculus courses it is universal, that
differentiation is defined pointwise and notions of
increasing/decreasing are defined on intervals.
When done this way, your comment "the function is
neither increasing nor decreasing at extrema"
is a category mistake [1]. To say that a function
f is (strictly) increasing on the interval J means
that whenever x_1 and x_2 belong to J and x_1 < x_2,
then f(x_1) < f(x_2). There is nothing about derivatives
or about endpoints in this definition. It happens to
be the case that if J is an open interval, if the
derivative of f exists at each point of J, and if
the derivative is positive at each point of J, then
f will be increasing on J. But a function can be
increasing on an open interval without the derivative
existing at some points (x times the greatest integer
function, for example) or without the derivative
being positive at some points (x^3 at x=0; see also
the example in [2]).
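As a quick finite-sample illustration of this definition (my
own sketch, not a proof; the interval (1, 3) for x times the
greatest integer function is my choice), the following Python
checks the pairwise condition directly, with no reference to
derivatives:

  import math
  from itertools import combinations

  def increasing_on_samples(f, xs):
      # The definition above, checked pairwise on a finite sample of J.
      return all(f(a) < f(b) for a, b in combinations(sorted(xs), 2))

  # x^3 on (-3, 3): increasing even though the derivative is 0 at x = 0.
  print(increasing_on_samples(lambda x: x**3,
                              [k / 100 for k in range(-299, 300)]))  # True

  # x*floor(x) on (1, 3): increasing although no derivative exists at x = 2.
  print(increasing_on_samples(lambda x: x * math.floor(x),
                              [k / 100 for k in range(101, 300)]))   # True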
AP tests are worded and graded so that, *for this
topic*, including or excluding endpoint(s) of
an interval will not affect a student's score,
although I suppose if a student went so far as
to say -1/x is increasing on [0, oo], they wouldn't
receive full credit for whatever they were working on.
[1] http://en.wikipedia.org/wiki/Category_error
[2] http://mathforum.org/kb/thread.jspa?messageID=5204230
--------------------------------------------------------
13 September 2007
http://mathforum.org/kb/message.jspa?messageID=5906902
Getting back to calculus, you don't want to identify
"increasing" with "positive derivative". For one thing,
a function can be strictly increasing on an interval
without having a positive derivative everywhere on
the interval (e.g. x^3 on the entire real line; also,
a strictly increasing variant of the Cantor singular
function in real analysis is continuous everywhere,
yet its derivative is zero at infinitely many points
in every interval). For another thing, it wouldn't
be a good idea to replace a simple geometrical
concept (perhaps "coordinate-geometrical concept"
is more correct, since this concept isn't invariant
under rigid motions) like increasing on an interval
with a much more sophisticated analytical notion
involving signs of derivatives. It's better
mathematical practice to let the defined concepts
be the simplest versions (when different versions
are mathematically equivalent, which isn't even
the case here, but let's suppose it were) and then
let its equivalence to the more complicated version
be a theorem.
--------------------------------------------------------
2 October 2006
http://mathforum.org/kb/message.jspa?messageID=5204230
James Wysocki wrote:
http://mathforum.org/kb/thread.jspa?messageID=5203325
> Given the function y = x^(1/3), I was wondering
> what the consensus would be on the intervals
> for which x is increasing and decreasing.
This function is strictly increasing on the
interval (-oo, oo).
Suppose r < s. I'll show that r^(1/3) < s^(1/3)
by cases. The cases are sequenced by lexicographic
order on the pairs (r,s) to make it easier to see
that they are exhaustive for r < s.
1. If r and s are both less than 0, then the
desired inequality holds because the function
has a positive derivative on the interval (-oo, 0).
2. If r is less than 0 and s is greater than or equal
to 0, then the desired inequality holds because
the function will be negative at x = r and
non-negative at x = s.
3. If r = 0 and s is greater than 0, then the desired
inequality holds because the function will be zero
at x = r and positive at x = s.
4. If r and s are both greater than 0, then the
desired inequality holds because the function
has a positive derivative on the interval (0, oo).
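Here is a small random sanity check of the conclusion (an
illustration, not a replacement for the case analysis); the
helper cbrt is mine, since Python's ** operator yields a
complex value for a negative base with a fractional exponent:

  import math
  import random

  def cbrt(x):
      # Real cube root of a real number.
      return math.copysign(abs(x) ** (1 / 3), x)

  random.seed(0)
  for _ in range(10_000):
      r, s = sorted(random.uniform(-10.0, 10.0) for _ in range(2))
      if r < s:
          assert cbrt(r) < cbrt(s), (r, s)
  print("cbrt(r) < cbrt(s) held for every sampled pair with r < s")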
--------------------------------------------------------
27 November 2008
http://mathforum.org/kb/message.jspa?messageID=6514477
As Richard Sisley has pointed out (in reply to another post,
see [1]), f is increasing on (-oo,0) and f is increasing
on (0,oo). In fact, f is increasing on (2/3, pi].
{note added 29 December 2008: The function 'f' is the
function f:R --> R defined by f(x) = x^3.}
[1] http://mathforum.org/kb/message.jspa?messageID=6505744
It is usually understood that one gives a *maximal*
interval (see [2]) in these situations. Thus, x^3 is
increasing on (-oo,oo) and x^2 is increasing on [0,oo).
[2] The reason I wrote "a maximal interval" instead of
"the maximal interval" is that there may be more
than one "maximal interval" (e.g. sin(x) or sin(x^2)).
By the way, there is no mathematical necessity for the
restriction to intervals, although this is done in
calculus courses. Thus, one can speak of a function
strictly increasing on a set B of real numbers. The
definition of this is: For all b_1 and b_2 in B such
that b_1 < b_2, it follows that f(b_1) < f(b_2). By
the way, note that if g is an antiderivative of
-x(x-2)(x-3)(x-4), then g is strictly increasing on
each of the intervals [0,2] and [3,4], but g is not
strictly increasing on the *set* [0,2] union [3,4].
[Use for b_1 a number less than 2 and very close
to 2 and use for b_2 a number greater than 3 and
very close to 3. The fact that the function strictly
decreases as x varies from 2 to 3 is going to make
g(b_1) > g(b_2), at least if you choose b_1 and b_2
close enough to 2 and 3.]
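To see this numerically (my own sketch; the constant of
integration is taken to be 0), expand g'(x) = -x(x-2)(x-3)(x-4)
and integrate term by term:

  def g(x):
      # Antiderivative of -x(x-2)(x-3)(x-4) = -(x^4 - 9x^3 + 26x^2 - 24x),
      # with the constant of integration taken to be 0.
      return -(x**5 / 5 - 9 * x**4 / 4 + 26 * x**3 / 3 - 12 * x**2)

  b1, b2 = 1.99, 3.01              # b1 just below 2, b2 just above 3
  print(g(b1), g(b2))              # about 8.27 and 7.65
  print(b1 < b2 and g(b1) > g(b2)) # True: g is not strictly increasing
                                   # on the set [0,2] union [3,4]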
As for "strictly increasing" and "non-decreasing", which
has come up in other posts, here are some definitions.
f non-decreasing on set B: For all b_1 and b_2 in B
such that b_1 < b_2, we have f(b_1) <= f(b_2).
NOTE: An equivalent condition arises if we replace
"b_1 < b_2" with "b_1 <= b_2".
f strictly increasing on set B: For all b_1 and b_2
in B such that b_1 < b_2, we have f(b_1) < f(b_2).
The notions "non-increasing" and "strictly decreasing"
can be defined by reversing the inequalities or by
requiring that the appropriate condition above holds
for the function -f.
f monotonic on set B: f is non-decreasing on B or f is
non-increasing on B.
f strictly monotonic on set B: f is strictly increasing
on B or f is strictly decreasing on B.
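These definitions transcribe almost verbatim into code. The
following Python predicates (finite-sample illustrations of
mine, not verifications) check them on a sample of points
from B:

  from itertools import combinations

  def non_decreasing(f, B):
      return all(f(a) <= f(b) for a, b in combinations(sorted(B), 2))

  def strictly_increasing(f, B):
      return all(f(a) < f(b) for a, b in combinations(sorted(B), 2))

  def non_increasing(f, B):
      # Defined via the appropriate condition for -f, as noted above.
      return non_decreasing(lambda x: -f(x), B)

  def strictly_decreasing(f, B):
      return strictly_increasing(lambda x: -f(x), B)

  def monotonic(f, B):
      return non_decreasing(f, B) or non_increasing(f, B)

  def strictly_monotonic(f, B):
      return strictly_increasing(f, B) or strictly_decreasing(f, B)

  # Example: a constant function is monotonic but not strictly monotonic.
  B = [k / 10 for k in range(-10, 11)]
  print(monotonic(lambda x: 1.0, B), strictly_monotonic(lambda x: 1.0, B))  # True False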
--------------------------------------------------------
4 May 2007
http://mathforum.org/kb/message.jspa?messageID=5692679
Skerbie (Bill) wrote:
http://mathforum.org/kb/message.jspa?messageID=5689882
> Let me clarify...I understand that consideration
> of the derivative is not necessary for determining
> whether a function is increasing or decreasing...I am
> just wondering if students would lose points for
> saying "the function f is increasing on [-3,-2]
> because f'(x)>0." My point is that they are saying
> f'(x)>0 on the whole interval when in actuality
> f'(-2)=0, so f' is NOT greater than zero on the
> interval they mentioned. So it's not the reasoning
> behind using f' to determine inc/dec., it's the
> inconsistency of the student's own statement that
> I'm asking about. If *I* were grading, I would take
> points off (little points...:)
This is one of those things where I wonder how far
graders would be willing to go in awarding credit
for correct mathematics. It's well known (but for
some reason, it's rarely mentioned in undergraduate
real analysis texts) that if a continuous function
is assumed to have a positive derivative everywhere
except for possibly a countable set of points
(which could be dense, like the rational numbers),
then the function must be strictly increasing.
This was proved by Ludwig Scheeffer way back in 1884
(see pp. 282-283 of [6]), and at the time it was
one of the very first applications of Cantor's
(then) new ideas/theories involving infinite sets.
More precisely, one can prove the following for
a function f defined on an open interval, where
"co-countably many" means "all but countably many":
* If f is continuous and f' > 0 at co-countably
many points, then f is strictly increasing.
* If f is continuous and f' non-negative at
co-countably many points, then f is non-decreasing.
* If f is continuous and f' = 0 at co-countably
many points, then f is constant.
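Here is a sanity check of the first statement on a simple
example of my own choosing: f(x) = 2x + |x| is continuous and
has f' > 0 everywhere except at the single point x = 0
(certainly a countable set), and sampling agrees that it is
strictly increasing:

  from itertools import combinations

  f = lambda x: 2 * x + abs(x)   # f' = 1 for x < 0, f' = 3 for x > 0,
                                 # and f' does not exist at x = 0
  xs = [k / 50 for k in range(-100, 101)]
  print(all(f(a) < f(b) for a, b in combinations(xs, 2)))   # True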
See Maurey/Tacchi [2] for a historical essay about
Scheeffer's result. (Their essay also deals with other
results that Scheeffer proved, such as the fact that
given any perfect nowhere dense set P and countable
set Z, there exists a dense set of translations of P
which are disjoint from Z.) More general versions of
Scheeffer's result hold. [[ Replace "f is continuous
at each x" with "the lim-sup (h --> 0+) of f(x-h) is
less than or equal to f(x) is less than or equal to
the lim-sup (h --> 0+) of f(x+h) at each x", and
replace f' with any specified Dini derivate of f. ]]
See Hobson [1] (p. 365), McShane [3] (pp. 200-201),
and Saks [5] (p. 204) for these more general results.
[1] Ernest W. Hobson, THE THEORY OF FUNCTIONS OF A REAL
VARIABLE AND THE THEORY OF FOURIER'S SERIES, Volume I,
Dover Publications, 1927/1957.
[2] Bernard Maurey and Jean-Pierre Tacchi, "Ludewig
Scheeffer et les extensions du théorème des
accroissements finis" [Ludwig Scheeffer and the
extensions of the finite-increment theorem],
pp. 1-60 in Travaux Mathématiques XIII, Centre
Universitaire de Luxembourg, 2002.
http://www.math.jussieu.fr/~maurey/articles/
[3] Edward James McShane, INTEGRATION, Princeton University
Press, 1947.
[4] Arnoud C. M. Van Rooij and Wilhelmus H. Schikhof,
A SECOND COURSE ON REAL FUNCTIONS, Cambridge
University Press, 1982.
[5] Stanislaw Saks, THEORY OF THE INTEGRAL, 2nd revised
edition, Dover Publications, 1937/1964.
[6] Ludwig Scheeffer, "Zur Theorie der Functionen einer
reellen Veränderlichen" [On the theory of functions
of a real variable], Acta Mathematica 5 (1884),
183-194 & 279-296.
--------------------------------------------------------
23 April 2006
http://mathforum.org/kb/message.jspa?messageID=4661014
Lin McMullin wrote (in part):
http://mathforum.org/kb/message.jspa?messageID=4658938
> Any "definition" of increasing at a point would
> have to include something implying an interval
> (e.g. "as the function moves through the point"
> or "in some open interval around the point,"
> or some such). I know of no book that defines
> this phrase.
Spivak's beginning calculus book (below) has a
long problem, #65 (a) through (f) in Chapter 11
(pp. 214-215), that deals with the idea "f is
increasing at a" in the same sense that I use
below (Definition #1).
Michael Spivak, "Calculus", 3'rd edition,
Publish or Perish, 1994, xiv + 670 pages.
ISBN 0-914098-89-6
This idea occurs in many undergraduate
and graduate level real analysis texts
and it's a well-known and useful idea in
mathematical research. More generally, this
idea is one instance of a general notion
that is sometimes called "localization at
at point". For functions, you can form the
pointwise version of any interval property
(increasing, concave up, etc.) by requiring
the property to hold on all sufficiently
small intervals centered at the point
(this is Definition #2 below). The idea
of monotonicity at a point (in the sense of
Definition #1 below) is also used extensively
in certain areas of probability and statistics,
such as in the analysis of Brownian motion,
as well as in the analysis of "fractal like
functions", wavelet theory, harmonic analysis,
and many other mathematical areas.
Without getting into other variations that
one also encounters (such as local monotonicity
variations that arise from requiring only that
the restriction of the function to (all
sufficiently small) sets of some type, not
necessarily full intervals, be monotone),
here are the notions of monotonicity that I've
found to be the most commonly used, specialized
to the case of "strict increase".
In increasing order of strength, they are:
* strictly increasing at a point
* strictly increasing near a point
* strictly increasing on a specified interval
DEFINITION 1: "f is strictly increasing AT x=b"
means there exists a delta > 0 such that for
all points L belonging to (b - delta, b) we have
f(L) < f(b) and for all points R belonging to
(b, b + delta) we have f(b) < f(R).
DEFINITION 2: "f is strictly increasing NEAR x=b"
means that for some delta > 0, f is strictly
increasing on the interval (b - delta, b + delta).
DEFINITION 3: "f is strictly increasing on the
interval I" means that for all c and d in I,
if c < d, then f(c) < f(d).
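For concreteness, here are finite-sample Python renderings of
the three definitions (mine, and only illustrations: the
existential "there exists a delta > 0" in Definitions 1 and 2
cannot be verified by computation, so each check takes a
candidate delta as an explicit argument):

  def increasing_at(f, b, delta, n=1000):
      # DEFINITION 1 on sampled points of (b - delta, b) and (b, b + delta).
      hs = [delta * k / (n + 1) for k in range(1, n + 1)]
      return (all(f(b - h) < f(b) for h in hs) and
              all(f(b) < f(b + h) for h in hs))

  def increasing_on(f, xs):
      # DEFINITION 3 on a sorted finite sample of the interval I; for
      # strict increase it suffices to compare consecutive sample points.
      xs = sorted(xs)
      return all(f(c) < f(d) for c, d in zip(xs, xs[1:]))

  def increasing_near(f, b, delta, n=1000):
      # DEFINITION 2: strictly increasing on (b - delta, b + delta), sampled.
      return increasing_on(f, [b - delta + 2 * delta * k / (n + 1)
                               for k in range(1, n + 1)])

  # Example: x^3 passes all the checks at b = 0 with delta = 1.
  print(increasing_at(lambda x: x**3, 0.0, 1.0),
        increasing_near(lambda x: x**3, 0.0, 1.0))   # True True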
Sometimes the phrase "locally increasing" is used
for Definition #2, but since this phrase is also
often used for Definition #1, I'll use the word
"near" in order to distinguish them. (I have not
seen "at" and "near" used to distinguish these
two concepts before, but this seems to be a nice
way to distinguish them.)
THEOREM 4: Let I be an open interval. The following
are logically equivalent:
(1) f is strictly increasing at each point of I.
(2) f is strictly increasing near each point of I.
(3) f is strictly increasing on I.
Proof (outline): (3) ==> (2) ==> (1) are immediate,
and the proof that (1) ==> (3) involves a compactness
argument (not compactness of I, but compactness of
the closed interval [c,d], where c < d are the two
points arbitrarily chosen in I during the process
of proving that f is strictly increasing on I).
THEOREM 5: Let f be a function defined on an
open interval I containing b. Then
for x=b we have (3) ==> (2) ==> (1),
but neither of these two implications
is reversible.
Proof: Again, (3) ==> (2) ==> (1) are immediate.
(2) doesn't imply (3): sin(x) is increasing
near x=0 but sin(x) is not increasing on
the interval (-10, 10).
(1) doesn't imply (2): Define the function f
by f(x) = x + (x^2)*sin(1/x^2), with f(0) = 0.
Then f is strictly increasing at x=0 (in fact,
f' exists and equals 1 at x=0), but f has
infinitely many intervals of strict increase
and strict decrease arbitrarily close to x=0,
on both sides of x=0 in fact, and hence f
isn't strictly increasing near x=0 (nor even
non-decreasing on either side of x=0).
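Both halves of this last counterexample can be watched
numerically (an illustration of the proof, not part of it; the
sampling scheme is mine):

  import math

  def f(x):
      return 0.0 if x == 0 else x + x * x * math.sin(1 / (x * x))

  # Increasing AT x=0: since |x^2 sin(1/x^2)| <= x^2 < |x| for 0 < |x| < 1,
  # f(x) has the same sign as x, so delta = 1 works in Definition 1.
  hs = [10 ** (-k / 10) for k in range(10, 60)]
  print(all(f(-h) < 0.0 < f(h) for h in hs))              # True

  # Not increasing NEAR x=0: each sampled interval (s, 2s) contains pairs
  # a < b with f(a) > f(b).  (As s shrinks, n must grow for the sampling
  # to resolve the ever-faster oscillation.)
  def has_drop(lo, hi, n=4000):
      xs = [lo + (hi - lo) * k / n for k in range(n + 1)]
      return any(f(a) > f(b) for a, b in zip(xs, xs[1:]))

  print([has_drop(s, 2 * s) for s in (1e-2, 1e-3, 1e-4)]) # [True, True, True]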
The relationship between these notions and pointwise
differentiation notions (specifically, the four
Dini derivates that one encounters in beginning
graduate level real analysis classes) is a little
involved, but one general theme in this relationship
is that, roughly, the sign of the differentiation
notion corresponds to a pointwise monotonicity
notion (Definition #1 above) along with a lower bound
on how rapidly the function increases or decreases
at that point. For example, the function x^3 is
increasing at x=0, but not with sufficient rapidity
at x=0 for its derivative to be positive at x=0.
--------------------------------------------------------
Dave L. Renfro
That's an interesting question. What's happening to the range of the
function around the point x=0? Try to define an interval including
zero that doesn't make zero an endpoint, and that has the y values
strictly increasing or decreasing. Your attempt may illuminate the
dilemma.
If "y is decreasing for x <= 0" is an acceptable statement for when x
increases, then it's implied that at zero the function will be
decreasing further.
My solution would be to write, "As x increases, y is decreasing for x
< 0 and y is increasing for x >= 0."
Remember, this is just my own opinion.