
Proof that Multiplication Modulo n is Associative?


Bill Lewis Clark

Mar 9, 2004, 6:17:15 PM
I'm trying to find an example of a proof that multiplication modulo n
is associative.

Every reference I can dig up on the web either deals with some other
problem and tells the reader that the associativity of modular
multiplication can be assumed, or else claims it "trivially" follows
from the associativity of regular multiplication.

It's been (too many) years since I took any courses in abstract
algebra, and I don't currently have access to any textbooks that might
supply the proof.

This was actually a problem on a friend's quiz (he's taking an
abstract algebra course at night school for fun... the weirdo :) and
ever since he brought it up I've been driving myself crazy trying to
prove this seemingly simple fact.

I've come up with several candidate proofs myself... but all of them
seem to depend upon other facts that I'm not sure I can assume.

Thanks,

-Bill

Arturo Magidin

Mar 9, 2004, 6:33:57 PM
In article <29b75758.04030...@posting.google.com>,

Bill Lewis Clark <wcl...@eden.rutgers.edu> wrote:
>I'm trying to find an example of a proof that multiplication modulo n
>is associative.
>
>Every reference I can dig up on the web either deals with some other
>problem and tells the reader that the associativity of modular
>multiplication can be assumed, or else claims it "trivially" follows
>from the associativity of regular multiplication.

Yes: you are looking at the quotient ring Z/nZ, so associativity
follows.

>It's been (too many) years since I took any courses in abstract
>algebra, and I don't currently have access to any textbooks that might
>supply the proof.

Well, you can try proving it from the definition of "congruent modulo
n":

a = b (mod n) if and only if n|(a-b).

Now you want to prove that (ab)c = a(bc) (mod n); which means that you
need to show that n|[(ab)c - a(bc)], which is "trivial" since the
multiplication here is the usual multiplication for the integers.


Since you say this is "driving you crazy", I assume that what you
really want to do is define the multiplication just by
remainders. That is, replace a with the remainder of dividing a by n,
b with the remainder of dividing b by n; ab by the remainder of
dividing ab by n, etc, and check it directly on the remainders
somehow....

If so, I can see how it would drive you crazy!

Take a, b, c, and we may assume that a, b, and c satisfy 0<= a,b,c <
n.

We want to show that remainder(remainder(ab)*c) =
remainder(a*remainder(bc)), where "remainder(x)" means the remainder
of dividing x by n.

Write  ab   = p1*n + r1,  0 <= r1 < n
       r1*c = q1*n + R1,  0 <= R1 < n
       bc   = p2*n + r2,  0 <= r2 < n
       a*r2 = q2*n + R2,  0 <= R2 < n

and you want to show that R1 = R2.

Consider R1-R2 = (r1*c - q1*n) - (a*r2 - q2*n)
               = (r1*c - a*r2) + (q2-q1)*n
               = (ab-p1*n)*c - a*(bc-p2*n) + (q2-q1)*n
               = (ab)*c - a*(bc) - p1*c*n + a*p2*n + (q2-q1)*n
               = n*(q2-q1 - p1*c + a*p2).

That means that R1-R2 is a multiple of n, and therefore is either
equal to 0 or has absolute value at least n.

If R2<= R1, then 0<=|R1-R2| = R1 - R2 <= R1 < n, so R1-R2 = 0, so R1=R2.
If R1<= R2, then 0<=|R1-R2| = R2-R1 <= R2 < n, so R1-R2 = 0, so R1=R2.

Since either R2<=R1 or R1<=R2 (or both), we conclude that in either
case, R1=R2.
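
(Aside: if you want a purely numerical sanity check of this, the little
Python sketch below, with a made-up helper rem_mul, just brute-forces the
identity remainder(remainder(ab)*c) = remainder(a*remainder(bc)) over all
residues for some small n. It is illustrative only and proves nothing.)

    # Brute-force check of remainder(remainder(ab)*c) = remainder(a*remainder(bc)).
    def rem_mul(x, y, n):
        """Multiply x and y, then take the remainder on division by n."""
        return (x * y) % n

    def check_associativity(n):
        for a in range(n):
            for b in range(n):
                for c in range(n):
                    left = rem_mul(rem_mul(a, b, n), c, n)
                    right = rem_mul(a, rem_mul(b, c, n), n)
                    assert left == right, (n, a, b, c)

    for n in range(1, 30):
        check_associativity(n)
    print("checked all residues for n = 1..29")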

--
======================================================================
"It's not denial. I'm just very selective about
what I accept as reality."
--- Calvin ("Calvin and Hobbes")
======================================================================

Arturo Magidin
mag...@math.berkeley.edu

The World Wide Wade

Mar 9, 2004, 9:19:27 PM
In article <29b75758.04030...@posting.google.com>,

wcl...@eden.rutgers.edu (Bill Lewis Clark) wrote:

> I'm trying to find an example of a proof that multiplication modulo n
> is associative.
>
> Every reference I can dig up on the web either deals with some other
> problem and tells the reader that the associativity of modular
> multiplication can be assumed, or else claims it "trivially" follows
> from the associativity of regular multiplication.

Can you give the precise definition of multiplication modulo n? The reason
I ask is that associativity does follow trivially from the associativity
of regular multiplication once the definition is understood.

Brian Quincy Hutchings

Mar 10, 2004, 12:30:50 AM
It really does seem trivial, as the statement n | (ab)c - a(bc)
holds since (real?) multiplication is associative within the parentheses.
Especially in base one!

mag...@math.berkeley.edu (Arturo Magidin) wrote in message news:<c2lk95$1cu3$1...@agate.berkeley.edu>...



> Now you want to prove that (ab)c = a(bc) (mod n); which means that you
> need to show that n|[(ab)c - a(bc)], which is "trivial" since the
> multiplication here is the usual multiplication for the integers.


Bill Dubuque

Mar 10, 2004, 2:41:04 PM
Arturo Magidin <mag...@math.berkeley.edu> wrote: (edited)

>Bill Lewis Clark <wcl...@eden.rutgers.edu> wrote:
>>
>> I'm trying to find an example of a proof
>> that multiplication modulo n is associative.
>
> We want to show rem(rem(ab)*c) = rem(a*rem(bc))
> where rem(x) := remainder of dividing x by n.
>
> Write  ab = p n + r,  0 <= r < n
>        rc = q n + t,  0 <= t < n
>        bc = j n + s,  0 <= s < n
>        as = k n + u,  0 <= u < n
>
> To show t = u
>
> consider t-u = (rc-qn) - (as-kn)
>              = rc - as + (k-q)n
>              = (ab-pn)c - a(bc-jn) + (k-q)n
>              = (ab)c - a(bc) + (aj - pc + k-q)n

              | a  r-ab |   | a  r |
Simpler:  0 = |         | = |      |   (mod n)
              | c  s-cb |   | c  s |

i.e. n| r-ab & n|s-bc

=> n|(r-ab)c - a(s-bc) = rc - as

Hence rc, as have equal residues (mod n). QED
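
(Purely as an illustrative check, the identity behind that last step,
(r-ab)c - a(s-bc) = rc - as, can be verified symbolically; the sympy
snippet below is just a sketch.)

    # n | r-ab and n | s-bc  =>  n | (r-ab)c - a(s-bc) = rc - as
    from sympy import symbols, expand

    a, b, c, r, s = symbols('a b c r s')
    lhs = (r - a*b)*c - a*(s - b*c)
    assert expand(lhs) == expand(r*c - a*s)   # the abc terms cancel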

Appended below are two of my prior sci.math posts which
provide further related examples of (ring) congruences.

http://google.com/groups?selm=y8zk7vhhrc6.fsf%40nestle.ai.mit.edu
Richard J. Yanco <rjy...@james.amherst.edu> wrote:
>
> AHSME 1956 #48: If n is a positive integer then (3n+25)/(2n-5)
> can be a positive integer iff
>
> a. 3<=n; b. 3<=n<=35; c. n<=35; d. n=35; e. n=3 or n=35

You can bound the divisors in your problems by eliminating
the variable "n" from two known multiples, e.g. see below.
Here I employ the standard notation x|y == "x divides y".

(1) 2n-5 | 3n+25 -> 2n-5 | 2(3n+25) - 3(2n-5) = 65

i.e. an+b | cn+d -> an+b | a(cn+d) - c(an+b) = ad-bc

(2) n+9 | 10n -> n+9 | 10n - 10(n+9) = -90

via m(n+9) = 10n if (n+10)/(10n+m) = 1/m as below. So

(1) 0 < 2n-5 <= 65 -> 3 <= n <= 35, i.e. AHSME choice (b).
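
(Illustrative sanity check only: a short Python search over n confirms
that the admissible n do land in 3 <= n <= 35.)

    # For which positive integers n is (3n+25)/(2n-5) a positive integer?
    good = [n for n in range(1, 1000)
            if 2*n - 5 > 0 and (3*n + 25) % (2*n - 5) == 0]
    print(good)                            # -> [3, 5, 9, 35]
    assert all(3 <= n <= 35 for n in good)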

It proves insightful to examine related methods of solution.

That an+b | ad-bc has a determinant representation as follows:

                             | a  an+b |   | a  b |
cn+d = 0 (mod an+b)  =>  0 = |         | = |      |   (mod an+b)
                             | c  cn+d |   | c  d |

For n = -1 this is nothing but a proof on congruence multiplication,
i.e.  a = b, d = c  =>  ad = bc  (mod m), as follows
(the 2nd column is = 0 (mod m), hence the det = 0):

                              | a  -a+b |   | a  b |
c = d, a = b (mod m)  =>  0 = |         | = |      |   =>   ad = bc
                              | c  -c+d |   | c  d |
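
(Again just an illustrative check: sympy confirms that the first
determinant above equals ad - bc, with the n-terms cancelling.)

    from sympy import symbols, Matrix, expand

    a, b, c, d, n = symbols('a b c d n')
    det = Matrix([[a, a*n + b],
                  [c, c*n + d]]).det()
    assert expand(det) == a*d - b*c   # the terms involving n cancel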

In fact we can directly derive our result by congruence multiplication:

   -cn = d, b = -an (mod an+b)       |    3n = -25, 5 = 2n (mod 2n-5)
                                     |
=> -acn = ad, bc = -anc              |    6n = -50, 15 = 6n
                                     |
thus ad = bc, i.e. ad-bc = 0         |    so -50 = 15, i.e. 65 = 0

This illustrates the power of congruence based problem-solving.
The just-presented congruence approach allows us to use familiar
arithmetic operations to eliminate the variable "n" from the two
linear equations in n, thus deriving a purely numerical equation.
Contrast this with the first approach, where we were forced to
manipulate cumbersome equations involving divisibility relations.

Because the integers modulo m inherit their arithmetic laws from
the integers, we can reuse all our knowledge about the integers
when working in the integers modulo m, e.g. above we reused
methods of elimination for solving linear equations. The common
structure and laws shared between these two number systems are
formalized when one studies "abstract algebra" at university.

These two number systems are examples of a fundamental algebraic
structure known as a "ring". Just as above, in general rings
one may compute modulo congruences, which may greatly simplify
matters. For example, parity arguments correspond to arithmetic
of integers modulo 2, i.e. to arithmetic in the ring Z/(2).
Here's a related congruence proof on Pythagorean triples:

CLAIM x^2 + y^2 = z^2 => x or y is even, since

Mod 4: odd^2 + odd^2 = 2 isn't odd^2 (= 1) or even^2 (= 0).
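
(An illustrative brute-force check in Python, for what it's worth;
the mod 4 argument already settles it in general.)

    # Squares mod 4 are only 0 or 1, so odd^2 + odd^2 = 2 (mod 4) is never a square.
    assert {k*k % 4 for k in range(100)} == {0, 1}

    # Check directly that no small Pythagorean triple has both legs odd.
    for x in range(1, 60):
        for y in range(1, 60):
            for z in range(1, 90):
                if x*x + y*y == z*z:
                    assert x % 2 == 0 or y % 2 == 0, (x, y, z)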

http://google.com/groups?selm=y8z1ya4tmcy.fsf%40nestle.ai.mit.edu
Gerry Myerson <ge...@mpce.mq.edi.ai.i2u4email> wrote:
>Martin L'Heureux <mlheu...@hotmail.com> wrote:
>>
>> Does A = a, B = b (mod n) => AB = ab (mod n) ?
>
> ab - AB = (a-A)b + A(b-B)

This has a pretty representation in determinant form:

A = a              | a-A  A |   | a  A |
  (mod n)  =>  0 = |        | = |      | = ab - AB   (mod n)
B = b              | B-b  b |   | B  b |

The same proof yields lim(AB) = lim(A) lim(B) in calculus, see
http://google.com/groups?selm=y8zu1pepbho.fsf%40nestle.ai.mit.edu

Indeed Calculus is Algebra via NSA (NonStandard Analysis): the
product rule for limits is just the product rule for congruences
upon NSA algebraicization. For if I is the ideal of infinitesimals
then lim x->0 F(x) = f iff F := F(i) = f (mod I), i in I, and
then lim(FG) = lim(F)lim(G) = fg is just congruence multiplication:

F = f (mod I)      FG = fg + (F-f)G + f(G-g)
               =>
G = g (mod I)         = fg  (mod I)   since F-f, G-g in I

The analytic proofs use FG - fg = (F-f)G + f(G-g) e.g. see [**] below.

In determinant speak: (see above)

F = f              | f-F  F |   | f  F |
  (mod I)  =>  0 = |        | = |      |  =>  FG = fg  (mod I)
G = g              | G-g  g |   | G  g |

Spelled out in functional notation one sees the underlying identity
as a product rule for differences

f(x)g(x) - f(a)g(a) = [f(x)-f(a)] g(x) + f(a) [g(x)-g(a)]

-> D(fg) = (Df) g + f (Dg)

which becomes the Leibniz product rule for derivatives D after
dividing thru by x-a and passing to the "limit" as x -> a.
In fact this approach yields an elegant algebraic proof of
the Leibniz product rule for polynomials if one defines the
derivative of a poly p(x) as p'(x) = Q(x,x) where
p(x) - p(y) = (x - y) Q(x,y) in the poly ring R[x,y].
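
(Here is an illustrative sympy sketch of that algebraic derivative; the
helper name and the sample polynomials are made up, but the Leibniz rule
check goes through exactly as described.)

    # p(x) - p(y) = (x - y) Q(x,y); define p'(x) := Q(x,x).
    from sympy import symbols, cancel, expand, diff

    x, y = symbols('x y')

    def algebraic_derivative(p):
        Q = cancel((p - p.subs(x, y)) / (x - y))   # exact quotient Q(x,y)
        return expand(Q.subs(y, x))                # p'(x) = Q(x,x)

    f = x**3 + 2*x + 1
    g = x**2 - 5*x

    assert algebraic_derivative(f) == diff(f, x)   # agrees with the usual derivative

    # Leibniz product rule: D(fg) = (Df) g + f (Dg)
    lhs = algebraic_derivative(expand(f*g))
    rhs = expand(algebraic_derivative(f)*g + f*algebraic_derivative(g))
    assert lhs == rhs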

-Bill Dubuque

Michael Barr <ba...@barrs.org> wrote: (edited)
>
> Here is how I would do it.
>
> |f(x)g(x) - fg| <= |f(x)g(x) - f(x)g| + |f(x)g - fg|
>
>                  = |f(x)| |g(x) - g| + |f(x) - f| |g|              [**]
>
>                  < (|f| + 1) |g(x) - g| + |f(x) - f| (|g| + 1)
>
> for x sufficiently close to c that |f(x) - f| < 1 and |g(x) - g| < 1.
> Then, given E > 0 and also < 1 (to enable the line above), choose
> D sufficiently small that for 0 < |x - c| < D both inequalities
> |f(x) - f| < E/(2(|g| + 1)) and |g(x) - g| < E/(2(|f| + 1)) hold;
> then you get the required inequality.
>
> BTW, it is much easier with infinitesimals. For then the limit is the
> ordinary part of f(x+h) for h a non-zero infinitesimal, assuming
> that the ordinary part does not depend on h. Then for infinitesimal h,
> there are infinitesimals i,j so that f(x+h)g(x+h) = (f + i)(g + j) =
> f g + f j + i g + i j and the last three terms are infinitesimal.
> All the nasty work is done once and henceforth hidden in the definition
> of infinitesimal.

Virgil

Mar 10, 2004, 3:25:08 PM
Somewhere in your proof you have to use the associativity of the
original ring: if a(bc) = (ab)c then a(bc) == (ab)c (mod n).

Bill Lewis Clark

Mar 10, 2004, 9:52:39 PM
mag...@math.berkeley.edu (Arturo Magidin) wrote in message news:<c2lk95$1cu3$1...@agate.berkeley.edu>...

> Since you say this is "driving you crazy", I assume that what you
> really want to do is define the multiplication just by
> remainders. That is, replace a with the remainder of dividing a by n,
> b with the remainder of dividing b by n; ab by the remainder of
> dividing ab by n, etc, and check it directly on the remainders
> somehow....

Yes, that's exactly what I'm trying to do.

> If so, I can see how it would drive you crazy!

Thanks, now I don't feel quite as stupid :)

> Take a, b, c, and we may assume that a, b, and c satisfy 0<= a,b,c <
> n.

> We want to show that remainder(remainder(ab)*c) =
> remainder(a*remainder(bc)), where "remainder(x)" means the remainder
> of dividing x by n.

> Write  ab   = p1*n + r1,  0 <= r1 < n
>        r1*c = q1*n + R1,  0 <= R1 < n
>        bc   = p2*n + r2,  0 <= r2 < n
>        a*r2 = q2*n + R2,  0 <= R2 < n
> and you want to show that R1 = R2.

That's actually how I started out... but then I switched over to
expressing a, b, and c each in the form kn+r (which led to regress
problems that were throwing me off.)

For some reason, I was thinking that I wouldn't be able to freely
express r1*c and a*r2 in the right form because r1 and r2 were related
to each other.

> Consider R1-R2 = (r1*c - q1*n) - (a*r2 - q2*n)
>                = (r1*c - a*r2) + (q2-q1)*n
>                = (ab-p1*n)*c - a*(bc-p2*n) + (q2-q1)*n
>                = (ab)*c - a*(bc) - p1*c*n + a*p2*n + (q2-q1)*n
>                = n*(q2-q1 - p1*c + a*p2).
>
> That means that R1-R2 is a multiple of n, and therefore is either
> equal to 0 or has absolute value at least n.
>
> If R2<= R1, then 0<=|R1-R2| = R1 - R2 <= R1 < n, so R1-R2 = 0, so R1=R2.
> If R1<= R2, then 0<=|R1-R2| = R2-R1 <= R2 < n, so R1-R2 = 0, so R1=R2.
>
> Since either R2<=R1 or R1<=R2 (or both), we conclude that in either
> case, R1=R2.

Thank you! I'll make sure to give you proper credit when I show this
solution to my friend (who was the one who asked me about this in the
first place.)

-Bill
