Yet another proof of Theorem PSPHS


Valerón

Mar 2, 2009, 10:55:18 PM
to fcla-discuss
I think I've got a more intuitive proof of this theorem, so I'd like
to post it here. I'm not good at checking for errors, though, so I
don't know whether there's a bug in my proof. I hope you can help me
debug it, or tell me that I got it right.

Proof: Suppose A is the coefficient matrix of a system of m equations
in n variables. Let B be a row-equivalent m * (n + 1) matrix in
reduced row-echelon form for the augmented matrix of LS(A, b), and let
B' be the one for LS(A, 0). Applying Theorem VFSLS, we know that we
can construct any solution of a consistent system as a linear
combination. For B, we use u1, u2, u3, ..., u(n-r) to denote the
vectors corresponding to the variable part of the construction, and c
for the constant vector. For B', we use u'1, u'2, u'3, ..., u'(n-r)
for the variable part and c' for the constant vector.
Row-reducing only invokes row operations, and a row operation changes
each entry using only other entries in the same column, so the columns
never mix. Since [A | b] and [A | 0] agree in their first n columns,
the only difference between B and B' is the last column, [B](i, n+1)
versus [B'](i, n+1) for 1 <= i <= m, and [B'](i, n+1) = 0 for all
1 <= i <= m. This makes u1 = u'1, u2 = u'2, u3 = u'3, ...,
u(n-r) = u'(n-r), and c' = 0.
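
Not part of the proof, but here is a quick sanity check of the "only
the last column differs" claim on a small made-up example (the A and b
below are just something I picked, and SymPy's rref() is only used to
row-reduce the two augmented matrices and compare them):

# Row-reduce [A | b] and [A | 0] and confirm they agree everywhere
# except the last column, whose homogeneous version is all zeros.
from sympy import Matrix, zeros

A = Matrix([[1, 2, 0, 3],
            [2, 4, 1, 7],
            [1, 2, 1, 4]])
b = Matrix([3, 7, 4])            # picked so that LS(A, b) is consistent

B,  _ = A.row_join(b).rref()             # B  = rref of [A | b]
Bp, _ = A.row_join(zeros(3, 1)).rref()   # B' = rref of [A | 0]

n = A.cols
assert B[:, :n] == Bp[:, :n]      # first n columns are identical
assert Bp[:, n] == zeros(3, 1)    # last column of B' is zero, so c' = 0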

Note: the scalars used in the proof are all chosen arbitrarily.
The first half:
Since w and y are solutions to LS(A, b),

w = c + a1 * u1 + a2 * u2 + a3 * u3 + ... + a(n-r) * u(n-r)
y = c + b1 * u1 + b2 * u2 + b3 * u3 + ... + b(n-r) * u(n-r)

y - w = (c + b1 * u1 + b2 * u2 + ... + b(n-r) * u(n-r))
      - (c + a1 * u1 + a2 * u2 + ... + a(n-r) * u(n-r))
      = (b1 - a1) * u1 + (b2 - a2) * u2 + ... + (b(n-r) - a(n-r)) * u(n-r)

This shows that y - w is a linear combination of u1, u2, u3, ...,
u(n-r), so it is a solution to LS(A, 0); in other words, it lies in N(A).
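
Again just an illustration, not part of the proof: with the same
made-up A and b as above, the vectors c, u1, u2 below are what VFSLS
gives for that particular example (free variables x2 and x4). Building
two solutions w and y with arbitrarily chosen scalars, their difference
really does land in N(A):

from sympy import Matrix, zeros

A = Matrix([[1, 2, 0, 3],
            [2, 4, 1, 7],
            [1, 2, 1, 4]])
b = Matrix([3, 7, 4])

# c, u1, u2 read off from the row-reduced matrix of this example
c  = Matrix([3, 0, 1, 0])
u1 = Matrix([-2, 1, 0, 0])
u2 = Matrix([-3, 0, -1, 1])

w = c + 5 * u1 + 2 * u2     # scalars a1 = 5,  a2 = 2, chosen arbitrarily
y = c - 1 * u1 + 7 * u2     # scalars b1 = -1, b2 = 7, also arbitrary

assert A * w == b and A * y == b    # both really are solutions of LS(A, b)
assert A * (y - w) == zeros(3, 1)   # their difference lies in N(A)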

Second half:
According to the hypothesis and VFSLS,

w = c + a1 * u1 + a2 * u2 + a3 * u3 + ... + a(n-r) * u(n-r)
z = e1 * u1 + e2 * u2 + e3 * u3 + ... + e(n-r) * u(n-r)

y = w + z
  = c + a1 * u1 + a2 * u2 + ... + a(n-r) * u(n-r)
      + e1 * u1 + e2 * u2 + ... + e(n-r) * u(n-r)
  = c + (a1 + e1) * u1 + (a2 + e2) * u2 + ... + (a(n-r) + e(n-r)) * u(n-r)

This shows that y is a solution to LS(A, b).

Done.
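
And a matching check for the second half, using the same made-up
system: take a particular solution w and any z from the null space
(SymPy's nullspace() gives a basis), and confirm that w + z still
solves LS(A, b):

from sympy import Matrix

A = Matrix([[1, 2, 0, 3],
            [2, 4, 1, 7],
            [1, 2, 1, 4]])
b = Matrix([3, 7, 4])

w = Matrix([3, 0, 1, 0])         # one particular solution of LS(A, b)
assert A * w == b

z1, z2 = A.nullspace()           # a basis of N(A); here n - r = 2
z = 4 * z1 - 3 * z2              # an arbitrary vector of N(A)
y = w + z

assert A * y == b                # w + z is again a solution of LS(A, b)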

By the way, how can we format proofs better without TeX, or do I have
to learn TeX to write proofs?
HTML is awful for math proofs, so we'd better set up a standard for
discussion in the group.

Rob Beezer

Mar 3, 2009, 10:51:01 PM
to fcla-discuss
Hi Valeron,

I've started a new thread about notation, so it will be easier for
others to find. It's a good idea to address that - thanks for asking.

Can you clarify what you mean by:

For B, we use u1, u2, u3, ..., u(n-r) to denote the vectors
corresponding to the variable part of the construction, ...

Are the u_i vectors columns of B, or maybe rows? It's not clear what
you mean by the "construction", or what the variable part is.

Rob

Valerón

Mar 4, 2009, 1:36:37 AM
to fcla-discuss
I was a bit lazy about writing it down...
All these notions are borrowed from the proof of VFSLS in the book.
Here is the full definition:
Vector Form of Solutions to Linear Systems
Suppose that [A | b] is the augmented matrix for a consistent linear
system LS(A, b) of m equations in n variables. Let B be a
row-equivalent m * (n + 1) matrix in reduced row-echelon form. Suppose
that B has r nonzero rows, columns without leading 1's with indices
F = {f_1, f_2, f_3, ..., f_(n-r), n+1}, and columns with leading 1's
(pivot columns) having indices D = {d_1, d_2, d_3, ..., d_r}. Define
vectors c, u_j, 1 <= j <= n - r, of size n by
... (these can all be copied from the proof of VFSLS, because I don't
know how to write "is an element of a set" or "is a subset of another
set" in plain text)

For LS(A, 0), the augmented matrix is [A | 0], and the notation for
each corresponding object carries a prime (" ' "); for example, B'
means the reduced row-echelon form of [A | 0], and c' means its
constant vector.
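
In case it helps, here is a rough sketch (my own reading of the VFSLS
construction, not copied from the book) of how c and the u_j come out
of the reduced row-echelon form; the example A and b are made up, and
SymPy's rref() does the row-reducing. With b = 0 the same code gives
c' = 0 and the same u_j, which is the point above:

from sympy import Matrix, zeros

A = Matrix([[1, 2, 0, 3],
            [2, 4, 1, 7],
            [1, 2, 1, 4]])
b = Matrix([3, 7, 4])
n = A.cols

B, pivots = A.row_join(b).rref()          # pivots = D (0-based here)
D = list(pivots)
F = [j for j in range(n) if j not in D]   # free-variable columns, without n+1

# c: the constants of B go in the pivot-variable slots, zeros elsewhere
c = zeros(n, 1)
for i, d in enumerate(D):
    c[d] = B[i, n]

# u_j: a 1 in free-variable slot f_j, the negated column f_j of B in
# the pivot-variable slots, zeros elsewhere
u = []
for f in F:
    v = zeros(n, 1)
    v[f] = 1
    for i, d in enumerate(D):
        v[d] = -B[i, f]
    u.append(v)

assert A * c == b                                 # c is a particular solution
assert all(A * v == zeros(A.rows, 1) for v in u)  # each u_j lies in N(A)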

Rob Beezer

Mar 5, 2009, 11:30:38 PM
to fcla-discuss
Hi Valeron,

Sorry - I didn't pick up on the notation coming out of VFSLS.

Yes, I think this proof is correct (and if not, it would only need
minor changes). In other words, this seems like a nice alternative.

In some ways it is like the existing proof, since it manipulates
linear combinations, but maybe it is more intuitive. I think the
"arbitrary-ness" of the scalars does have some appeal over the scalars
being entries of w, y and z in the existing proof. I'll think about
making this the proof in the book.

Any comments from others?

Rob

Zoli

Mar 12, 2009, 5:30:33 PM
to fcla-discuss

I vote for the existing proof.

I think the existing proof is the simplest and most intuitive one
possible. The mathematical background of this theorem consists of just
the notions <equation>, <addition>, and <zero>. The existing proof
reflects this background well by using only these notions.

However, the proposed alternative proof uses much more complicated
notions like <Gauss elimination> and <number of pivot columns>.

Rob Beezer

Mar 21, 2009, 12:05:19 AM
to fcla-discuss
Maybe Valeron's idea could be used as a "T" exercise that suggests a
certain approach.

Give another proof of PSPHS using Theorem VFSLS by...., etc, etc