Sir Six
Randolph Finder
rf...@andrew.cmu.edu
"And yet I would give it all up to be human" - Lt. Cmdr Data
I believe that 5.5" is the maximum side length for the squares. Consider
the center point of the paper. Every solution to the problem (two identical
squares) can be made symmetric about that point.
But by observation with a 5.5" square, the only suitable solution occurs when
the square's sides are parallel to the edges of the paper, so a larger square
will not work.
Keith
--
ARPA: miy...@cs.purdue.edu
UUCP: ...!{decwrl,gatech,ucbvax}!purdue!miyake
Let the paper be ABCD (AB < BC). Draw lines at 45 degrees from A and C;
these intersect the median of the paper parallel to AB at E and F (see
the figure).
^ X|
| |\
| | \<-separator line
B-----+-----C | \
| |/ /| | \ E
| E/ / | | \/
| /| / | |... /\
| / | / | | /. \
| / |/ | | / . \
| / /F | | / . \
|/ /| | |/ . \Y
A-----+-----D A----------------------
If two squares are cut from the paper, there is a line which separates
them. W.l.o.g. we may assume that this line has a common point with the
ray AB starting at A. (If it is parallel to AB, then at least one part
is too small to contain a square of side 5.5".) This means that the line
must have a common point either with segment AE or with segment CF.
Suppose again that this is AE. In this case one of the squares lies
entirely inside the right triangle AXY. Now it is easy to check
that the biggest square inside a right triangle is the one with
two sides at the right-angle corner (shown by dots); thus its diagonal
is part of AE, and therefore has length <= AE. Consequently the side of the
square cannot be > 5.5", QED.
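A quick numeric restatement of the bound just proved (a throwaway Python
sketch; the 11" side and the 45-degree construction are as in the post):

from math import sqrt, isclose

long_side = 11.0                   # BC, the longer side of the page
AE = (long_side / 2) * sqrt(2)     # 45-degree segment from A to the median parallel to AB
max_side = AE / sqrt(2)            # the largest square has its diagonal along AE
print(AE, max_side)                # about 7.78 and 5.5
assert isclose(max_side, 5.5)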
The same idea can be used to prove that if the unit square contains
two disjoint squares with sides a and b, then a+b<=1.
Problem: What is the smallest square into which all the squares with
sides 1, 1/2, 1/3, 1/4, 1/5, ... can be packed without overlapping?
Laci Csirmaz
This would be a wondrous solution indeed.
The area of an 8.5x11 piece of paper is 93.5 square inches; the area of
two squares with sides of 7 2/3 inches is 117 5/9 square inches.
I believe that the original question is equivalent to "what is the
largest square that will fit into a trapezoid constructed by drawing a
(straight) line through the center of an 8.5x11 rectangle?" Among
other limits, such a square certainly cannot have a side longer than
hypot(5.5, 4.25), which is 6.9507...
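The arithmetic is quick to check (a small Python sketch; Fraction is used
only to keep 7 2/3 exact):

from fractions import Fraction
from math import hypot

print(8.5 * 11)                  # 93.5 square inches for the page
print(2 * Fraction(23, 3)**2)    # 1058/9 = 117 5/9 for two squares of side 7 2/3
print(hypot(5.5, 4.25))          # about 6.95, half the diagonal of the page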
--
John Sahr, | Electrical Engineering - Space Plasma Physics
jo...@alfven.spp.cornell.edu | Cornell University, Ithaca, NY 14853
Please DON'T give me a proof that says "just take
the determinant of the matrix of zeros and ones representing
the permutation" unless you have a definition of the
determinant that doesn't depend on the sign of a
permutation being well-defined!!!!!!!!!!!!!
I.e., no circular arguments please.
And can you do it by tomorrow morning????
No, this is not a homework problem!!!!!!!!
I'M TEACHING THE COURSE.
p(x_1,...,x_n) = \prod_{i<j} (x_i - x_j).
S_n acts on p by permuting the variables. Then p^(sigma) = +- p.
For any transposition sigma = (rs),
p^(sigma)(x_1,...,x_r,...,x_s,...,x_n) = p(x_1,...,x_s,...,x_r,...,x_n)
= - p(x_1,...,x_n).
Therefore, tau \in S_n cannot have representations as both an even and
an odd number of transpositions: an even number of transpositions leaves
p unchanged, while an odd number changes its sign. We require here
that p neq -p, which follows because p is not identically zero, as can
be seen by specializing the variables appropriately, say x_i = i.
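Here is a throwaway Python sketch of exactly this action, using the
specialization x_i = i suggested above (the helper names are mine, not
from the post):

from itertools import combinations

def p(xs):
    # prod over i < j of (x_i - x_j), evaluated at the numbers xs
    result = 1
    for i, j in combinations(range(len(xs)), 2):
        result *= xs[i] - xs[j]
    return result

def sign(perm):
    # sign of perm (a tuple rearranging 0..n-1), read off from its action on p
    xs = list(range(1, len(perm) + 1))        # specialize x_i = i, so p != 0
    return p([xs[k] for k in perm]) // p(xs)  # exactly +1 or -1

print(sign((1, 0, 2, 3)))   # a single transposition: -1
print(sign((1, 2, 0, 3)))   # a 3-cycle, i.e. two transpositions: +1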
What is happening here is that we define a mechanism for specifying
the sign of each term of a determinant quite independently of any
discussion of permutations. If you consider the Vandermonde matrix
[ 1 1 ... 1 ]
[ x_1 x_2 ... x_n ]
[ x_1^2 x_2^2 ... x_n^2 ] ,
[ ..................... ]
[ x_1^{n-1} x_2^{n-1} ... x_n^{n-1} ]
then each possible term of its determinant is uniquely determined by the
vector (e_1,...,e_n) of exponents of x_1,...,x_n which it involves. Each
such term x_1^{e_1}...x_n^{e_n}, with (e_1,...,e_n) a rearrangement of
(0,...,n-1), also appears in p.
Use the sign appearing in p as the sign for the term in the determinant.
This makes p equal to the Vandermonde determinant, and everything else
works out from then on.
...Paul Pedersen
> Help!!! I'd like a short proof if possible that the
> sign of a permutation is really well-defined, i.e.,
> that any element which is a product of an even number of
> interchanges is not the product of an odd number.
The following (due to Hans Liebeck) is my favorite proof of this
result.
If a permutation f is the product of both an odd and an even number
of transpositions then we may write the identity e as the product
of an odd number of transpositions. Assume WLOG
e = (x_1 y_1) * ... * (x_n y_n) with 1 <= x_i < y_i, where n is odd.
Next, replace (x_i y_i) with (1 x_i)(1 y_i)(1 x_i) if x_i > 1. We
now have that e = (1 z_1) * ... * (1 z_m) with m odd and z_i>1 for
all i. This gives a contradiction, since each symbol (1 k) with k>1
must appear an even number of times, and m is odd.
Q.E.D.
--
Chris Long, 272 Hamilton St. Apt. 1, New Brunswick NJ 08901 (201) 846-5569
"In a study of schoolboys, an educator discovered a correlation between size
of feet and quality of handwriting. The boys with the larger feet were,
on the average, older." Wallis & Roberts, _The Nature of Statistics_
This doesn't work. For example,
e = (1 2) (1 3) (1 2) (1 3) (1 2) (1 3)
but (1 2) appears an odd number of times.
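The counterexample is easy to verify mechanically (a small Python sketch;
transpositions are composed as functions):

def transposition(a, b, n):
    # the permutation of {1,...,n} that swaps a and b
    f = {k: k for k in range(1, n + 1)}
    f[a], f[b] = b, a
    return f

def compose(f, g):
    # (f*g)(x) = f(g(x))
    return {x: f[g[x]] for x in g}

word = [(1, 2), (1, 3), (1, 2), (1, 3), (1, 2), (1, 3)]
product = {k: k for k in range(1, 4)}
for a, b in word:
    product = compose(product, transposition(a, b, 3))

print(product)                 # {1: 1, 2: 2, 3: 3} -- the identity
print(word.count((1, 2)))      # 3 -- (1 2) occurs an odd number of times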
-------
Dean Hickerson, Dept. of Math., Penn State Univ., University Park, PA 16802
h...@psuvm.psu.edu
Let's assume we are talking about a permutation of 1,2,...,n, so that
we can describe the permutation by a sequence, e.g. (5,1,3,2,4).
Define an _inversion_ to be a place where a larger number in the
sequence precedes a smaller one. The above example has 5 inversions (5
precedes four smaller numbers; 3 precedes one). Define the parity of
the permutation to be the parity of the number of inversions.
Lemma: Any transposition (i.e., interchange) reverses the parity of
the permutation. (Swapping two adjacent entries changes the inversion
count by exactly one, and interchanging entries with k entries between
them amounts to 2k+1 adjacent swaps.)
Corollary: Any permutation that consists of an odd number of
transpositions is odd, i.e., has an odd number of inversions.
Similarly for even.
That does the job.
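A minimal Python sketch of the inversion count and the Lemma (the example
sequence is the one above):

from itertools import combinations

def inversions(seq):
    # pairs (i, j) with i < j but seq[i] > seq[j]
    return sum(1 for i, j in combinations(range(len(seq)), 2) if seq[i] > seq[j])

s = (5, 1, 3, 2, 4)
print(inversions(s))           # 5, as counted above

t = list(s)
t[1], t[3] = t[3], t[1]        # interchange two entries (here 1 and 2)
print(inversions(s) % 2, inversions(t) % 2)   # the parities differ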
For a permutation in S_n define its sign to be 1 or -1 according to
what effect that permutation has on the polynomial
product for 1 <= i < j <=n of (x_i - x_j)
It is then obvious that the sign defined this way is a character, i.e.,
a homomorphism from S_n to the multiplicative group {1, -1}, that
assigns the value -1 to any transposition and, therefore, "counts mod 2"
the number of transpositions in any factorization of a permutation.
-- Bill
The above proof is of course elegant. It also requires some
sophistication of the student, so it may not be the best one if the
students haven't been exposed to polynomial rings and groups of units.
If the course is concentrating on group theory at the moment, it may
be instructive to grind through a proof along the following lines:
1. Show that every permutation has a unique decomposition as a product
of disjoint cycles.
2. Define the sign of a k-cycle to be (-1)^{k-1} (i.e. (-1) to the
smallest number of transpositions one can multiply to obtain a
k-cycle).
3. Define the sign of a product of disjoint cycles to be the product
of the signs of the cycles in it.
4. Let sigma be any product of disjoint cycles and let tau be a
transposition. Explicitly write down the product tau sigma as a
product of disjoint cycles. (If tau permutes i and j, then the result
depends on how i and j sit in the cycle structure of sigma.)
By explicit calculation, show that the
sign of tau sigma is the negative of the sign of sigma.
A proof along these lines is a bit tedious, but totally elementary. It
also shows the students that if they take an idea, like the
decomposition of a permutation as a product of disjoint cycles, then
they can use it to straightforwardly prove a useful result, given
sufficient confidence and persistence.
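For what it's worth, a short Python sketch of steps 1-4 above (the helper
names are mine; the sign in step 3 comes out as the product of (-1)^(k-1)
over the cycles):

def cycles(perm):
    # disjoint-cycle decomposition of perm, a dict on {1,...,n}; fixed points included
    seen, result = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]
        result.append(tuple(cyc))
    return result

def sign(perm):
    # product over the cycles of (-1)^(length - 1)
    s = 1
    for cyc in cycles(perm):
        s *= (-1) ** (len(cyc) - 1)
    return s

sigma = {1: 2, 2: 3, 3: 1, 4: 5, 5: 4}        # (1 2 3)(4 5)
print(cycles(sigma), sign(sigma))             # [(1, 2, 3), (4, 5)]  -1

tau = {1: 4, 4: 1, 2: 2, 3: 3, 5: 5}          # the transposition (1 4)
tau_sigma = {x: tau[sigma[x]] for x in sigma} # multiply by tau: the sign flips
print(sign(tau_sigma))                        # +1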
--Mark
Can anyone see how to correct it?
Good idea, but not taken far enough. Do NOT slice the ENTIRE width of
the page, but stop short of one edge by an amount equal to half of
the thickness of the page. You now have one sheet of about 11x17 inches,
folded in half! Unfold and REPEAT ad infinitum (or, as Feynman would
point out, until you reach the level of sheets one molecule thick).
Followups are directed to rec.puzzles only.
--
Mark Brader, SoftQuad Inc., Toronto "The problem is that tax lawyers are
utzoo!sq!msb, m...@sq.com amazingly creative." -- David Sherman
This article is in the public domain.
In my abstract algebra course here at UF we proved this theorem.
Here is the proof from our book (Fraleigh's A_FIRST_COURSE_IN_ABSTRACT_ALGEBRA,
4th edition)
Theorem: No permutation in Sn (the set of all permutations of n elements) can
be expressed both as a product of an even number of transpositions and
as a product of an odd number of transpositions.
Proof: We say that two integers have the same parity if they are either both
odd or both even. Let sigma (s) be an element of Sn and suppose that
s appears as the product of these transpositions:
s = t_k t_{k-1} ... t_2 t_1  [the book uses taus and subscript numbers]
Let r be the number of orbits of s. We shall show that the numbers k
and n-r have the same parity. Since n-r does not depend upon the way
we write s as a product of transpositions, this will complete the proof.
Note that if k=0, then the permutation is the identity with r=n orbits,
so n-r=0 also. Suppose k=1, so that s = t_1. Clearly t_1 has r=n-1
orbits, so n-r=1 also. Thus k and n-r have the same parity for k=0 and k=1.
We proceed by induction on the number k of transpositions, having
checked the cases k=0 and k=1. Suppose now that k and n-r have the
same parity for k <= m, and let s = t_{m+1} t_m t_{m-1} ... t_2 t_1.
Let r' be the number of orbits of t_m ... t_1. By our induction assumption
m and n-r' have the same parity. By a previously stated Lemma, the
number r of orbits of s differs from the number r' by 1, and of course
m+1 differs from m by 1. Thus both m+1 and n-r have the same parity,
namely the opposite parity from m and n-r'. This completes the
induction proof.
Lemma Let s be an element of Sn and let t be a transposition in Sn. The
number of orbits of s and the number of orbits of ts differ by 1.
This proof is on pages 79-81 of the book.
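A small Python illustration of the theorem's bookkeeping and of the Lemma
(orbits here include fixed points, so the identity has r=n orbits as in the
proof; the helper name is mine):

def orbit_count(perm):
    # number of orbits of perm (a dict on {1,...,n}), counting fixed points
    seen, count = set(), 0
    for start in perm:
        if start not in seen:
            count += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = perm[x]
    return count

n = 5
s = {1: 3, 3: 1, 2: 2, 4: 4, 5: 5}            # the transposition (1 3): k=1
print(n - orbit_count(s))                     # 1, same parity as k

t = {2: 4, 4: 2, 1: 1, 3: 3, 5: 5}            # another transposition, (2 4)
ts = {x: t[s[x]] for x in s}                  # multiply by t
print(abs(orbit_count(ts) - orbit_count(s)))  # 1, as the Lemma says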
*******************************************************************************
Joshua C. Sasmor -- famous mathematician-to-be * j...@lab4.math.ufl.edu
************************************************* bras...@maple.circa.ufl.edu
"The highest form of pure thought is in * x999...@maple.circa.ufl.edu
mathematics" -- Plato *
*******************************************************************************
>> Help!!! I'd like a short proof if possible that the
>> sign of a permutation is really well-defined, i.e.,
>> that any element which is a product of an even number of
>> interchanges is not the product of an odd number.
Thanks for all those proofs which have come flooding in!!!
Please don't send me any more proofs... I've got
some elegant ones.
I'm teaching a mathematical physics grad course and
just showed how the symmetric group has 2 1-dimensional
representations, the boson (= trivial) and fermion
(=sign of permutation). I pointed out that it takes
some thinking to show that the sign of a permutation was
actually well-defined; one girl taking algebra now
happened to have a proof in her notes! The physicists
were of course perfectly willing to believe this fact;
one fellow suggested to use the determinant of a matrix
and I pointed out the circularity of this. I will present
the (prize-winning :-) proof in my lecture next week,
since I was too disorganized to do so today. My criteria
will be 1) elegance and memorability (for me these are
very correlated), 2) hardware required (the less the better),
and 3) insight given.
Consider the homomorphism S(n) --> {1,-1} defined as follows:
Product( X_p(k) - X_p(j) : j<k)
p |--> --------------------------------
Product( X_k - X_j : j<k )
Product( X_pq(k) - X_pq(j) : j<k)
= ----------------------------------
Product( X_q(k) - X_q(j) : j<k)
for arbitrary permutations p and q.
It's cute that the order of the set {1,...,n} is irrelevant.
Replace the standard order by any (total or linear) order and
you'll get the same notion of the sign (or parity). But it
helps to prove things.
Thus the order of a finite set plays a role with respect to
permutations similar to that of triangulations for polyhedra and of
Morse functions for smooth manifolds.
This is cute since we deal with a FINITE set and still need
a substitute for a triangulation (or for coordinates in the case of
Euclidean geometry).
And we certainly have a definition of the determinant which
doesn't depend on the sign of a permutation, the axiomatic
definition! A multi-linear, anti-symmetric (det(...A...A...)=0),
normalized (det(I)=1) function of columns.
I better hurry now :-)
Wlod
It is not difficult to define the determinant without mentioning
permutations; in fact, it is preferable in some ways. (When I teach
elementary linear algebra out of, say, Anton's book, I always ignore
a permutation-based development of det(A) and instead roughly follow the
development in Apostol's Calculus books. The students tend to like it.)
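Along those lines, here is a rough Python sketch of computing the
determinant with no permutations anywhere, using only the row-operation
properties (my own illustration, not the development in Anton or Apostol):

def det(rows):
    # Gaussian elimination: row swaps flip the sign, row replacements change
    # nothing, and the determinant of a triangular matrix is the product of
    # its diagonal entries.
    a = [list(r) for r in rows]
    n = len(a)
    d = 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if abs(a[pivot][col]) < 1e-12:
            return 0.0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            d = -d
        d *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return d

print(det([[1.0, 2.0], [3.0, 4.0]]))   # -2.0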
-David
Well I mailed you on this one, but it bounced. But since you've posted
it again - fair game.
The determinant of a matrix is entirely well defined without any
prior recourse to permutations. If you take my point of view, the
determinant of a matrix is defined as the product of the eigenvalues
(counting algebraic multiplicity). Note that you need not calculate
the characteristic polynomial via the determinant you are trying to
define in order to ascertain the algebraic multiplicities - you
use the dimensions of the cyclic subspaces. (You can use Jordan or
Frobenius form here).
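A quick numeric illustration of that definition (a sketch with numpy; for a
real matrix the non-real eigenvalues come in conjugate pairs, so the product
is real):

import numpy as np

A = np.array([[0.0, -1.0, 2.0],
              [1.0,  0.0, 0.0],
              [3.0,  1.0, 1.0]])

eigenvalues = np.linalg.eigvals(A)     # algebraic multiplicities included
print(np.prod(eigenvalues).real)       # product of the eigenvalues
print(np.linalg.det(A))                # agrees, up to rounding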
If your view of Linear Algebra includes the dreaded 'cofactor' and
'classical adjoint' concepts, you have my profound condolences.
Later,
Andrew Mullhaupt
There must have been a short between my ears. I knew this
approach to calculating determinants but I guess I had
permutations (and the "classical" formula using sums of
products of permuted columns with signs thrown in) on the
brain. Now my only question - hopefully equally easy -
is how you prove the formula det(AB) = det(A) det(B) using
this definition (trivial if A and B commute).
No, my linear algebra was not blighted by massive use of
minors, cofactors et al. Hmm.... digging it out ....
blowing off the dust... Birkhoff & MacLane's Modern
Algebra does it this way: First they talk about the
symmetric group, and show the sign of a permutation is
well defined in terms of its action on Product(xi - xj) .
(By the way, this is my favorite approach of all the ones
people so kindly sent me. As a mathematical physicist I
think groups exist in order to have representations: they
should be earning a living by *acting* on something, not
just sitting there symmetrically. Proving things about
them using representations appeals to me when it saves time,
the "intrinsic" proofs that the identity cannot be
written as an odd number of transpositions seem rather
clunky, though they have a kind of "homebrew" charm of
their own.) Anyway, later on they define the determinant
the classical way and, using elementary matrices, show it's
a homomorphism.
So the current state of affairs is that I know the
slickest possible proof that the sign of a permutation
is a well-defined homomorphism (which is what I wanted),
but now I'm a bit curious about determinants. Seems to
me that the best road to showing that the determinant is
a homomorphism is to talk about the action of GL(V) on
the top exterior power of V . Kids should learn about
differential forms in calculus anyway :-).
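And a throwaway numeric sanity check of the multiplicative property in
question (numpy again; a check, not a proof):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
print(np.linalg.det(A @ B))
print(np.linalg.det(A) * np.linalg.det(B))   # equal up to floating-point error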
I don't see why the "intrinsic" approach has to be presented in a clunky
way. Here is my attempt at a non-clunky presentation:
Define the sign of a permutation as the "normalized" parity of the number of
cycles ("normalized" in the sense that the identity permutation is always
taken to have even parity). Well-definedness is then trivial, and the
homomorphism property follows from the fact that every permutation is a
product of transpositions, plus the following:
Lemma: Multiplication by a transposition changes the parity of the number of
cycles.
Proof: Multiplication by a transposition splices two cycles into one if the
transposed elements are in different cycles, and slices one cycle into two if
they are in the same cycle.
James Dolan, Math Dept., SUNY at Buffalo