
Feb 26, 1992, 5:26:30 PM2/26/92


Re: pi and the mandelbrot set

This is a "re-post", I posted this about a year ago, hoping for some

response by someone more knowledgeable. Didn't get any responses, except

for a couple of e-mails saying (in effect) "that's pretty strange".

So... here we go again.

I was writing a quick-n-dirty program to verify that the 'neck' of the

M-set at (real=-.75, imag=0) is actually of zero thickness. Accordingly,

I was testing the # of iterations that points of the form (-.75,x) (x

being a small number) went thru before escaping. Here's a quick list for

special values of x:

x           # of iterations
.1          33
.01         315
.001        3143
.0001       31417
.00001      314160
.000001     3141593
.0000001    31415928

Does the number of iterations strike you as a suspicious number? How about

the product of the number of iterations with x? It's pi, to within +- x.

My initial reaction was "What the HELL is pi doing here?". Come to think

of it, that's still my reaction.

Adopting the motto "When in doubt, keep going", I tried the same experiment

at the "butt" of the M-set, located at (real=.25, imag = 0.0). I was now

trying points of the form (.25 + x, 0), with x again a small number. Here's

some more results for various values of x:

x               # of iterations
.1              8
.01             30
.001            97
.0001           312
.00001          991
.000001         3140
.0000001        9933
.00000001       31414
.000000001      99344
.0000000001     314157
.00000000001    993457
.000000000001   3141625

Again we get the same type of relationship; this time it is

pi = sqrt(x) * (num. of iterations)

I've made some attempts to show these results mathematically instead of

numerically, but haven't made much headway.

Has anyone seen this? What's going on?

--

------------------------------------------------------------

Dave Boll bo...@handel.cs.colostate.edu

"The speed of time is 1 second per second"

------------------------------------------------------------

Feb 26, 1992, 9:11:08 PM2/26/92


--

Philip Yzarn de Louraille Internet: yz...@chevron.com

Research Support Division Unix & Open Systems

Chevron Information & Technology Co. Tel: (213) 694-9232

P.O. Box 446, La Habra, CA 90633-0446 Fax: (213) 694-7709

Feb 26, 1992, 9:38:59 PM2/26/92


How far have you carried these tests out? It may be coincidence. If you can go

to, say, twenty decimal places, then I think I would be convinced something

was going on.


What do the orbits of zero look like? You may be close to Siegel disks which

have attractors with circular orbits. Try plotting the points as you iterate.

Let me know what happens...

-John Hart

Electronic Visualization Lab

University of Illinois at Chicago

Feb 28, 1992, 9:45:01 AM2/28/92


>Re: pi and the mandelbrot set

>

> This is a "re-post", I posted this about a year ago, hoping for some

> response by someone more knowledgeable. Didn't get any responses, except

> for a couple of e-mails saying (in effect) "that's pretty strange".

>

> So... here we go again.

>

> I was writing a quick-n-dirty program to verify that the 'neck' of the

> M-set at (real=-.75, imag=0) is actually of zero thickness. Accordingly,

> I was testing the # of iterations that points of the form (-.75,x) (x

> being a small number) went thru before escaping. Here's a quick list for

> special values of x:

>


[data omitted]

======================================

I read this last year, and I was one of those replying that this

is weird. In the course of the year, I have thought about it off

and on, with some ideas, but no complete resolution. Below are

some comments.

We will need to consider the conformal map psi(w) from the

outside of the unit disk onto the outside of the Mandelbrot set.

psi(w) = w + sum (n=0 to infinity) b_n w^(-n)

= w - (1/2) + (1/8) w^(-1) - (1/4) w^(-2) + (15/128) w^(-3)

+ 0 w^(-4) - (47/1024) w^(-5) + ...

These coefficients can be computed recursively, but a closed form

is not known.

If we use as parameter the point z > 1/4 outside the Mandelbrot set,

and do our iterations z_0=z, z_1=z_0^2+z, ...,

z_(k+1) = z_k^2+z, and stop when z_n = 10,

then psi(10^(2^-n)) = z, at least approximately when n

is large. [See _The Science of Fractal Images_, page 189 ff.

We will see the particular choice of stopping

rule '10' is unimportant.] Consider the points on the positive real

axis. When w approaches 1 from the right, z = psi(w) approaches

1/4 from the right. The numerical evidence of Dave Boll

suggests that if z = 1/4 + delta diverges in n steps,

then sqrt(delta)*n approaches pi. If z = psi(w), where

w = 10^(2^-n), this means that psi(10^(2^-n)) should behave

asymptotically like 1/4 + pi^2/n^2, as n goes to infinity.

Now if w = 1 + epsilon = 10^(2^-n), we have as n -> infinity, that

log w [approximately equal to epsilon] is asymptotic to 2^-n * log 10

log log w [approx log(epsilon)] is asymptotic to -n*log 2

[so the choice of '10' goes away]

If epsilon and delta are positive numbers

related so that psi(1+epsilon) = 1/4+delta, then we know

epsilon -> 0 when delta -> 0: the Boll evidence becomes:

delta is asymptotic to [pi log 2 / log(1/epsilon)]^2

So the question really deals with how fast the function psi

approaches the boundary point 1/4.

This is related intimately to the geometry of the

boundary of the Mandelbrot set near the cusp at 1/4.

My comments and (mostly) questions relating to this

are in another article with title

"Boundary maps in the Riemann mapping theorem"

to be submitted to the newsgroup sci.math.research.

The rate [1/log(1/epsilon)]^2, above, is the proper rate

for a cusp like a cardioid. Similarly, a rate like

1/log(1/epsilon), which is what Boll's data suggest for

the point -3/4, is the proper rate for two tangent curves

with non-zero curvature. But the exact constants appearing

here are mysterious to me. For the bare cardioid, without decorations,

the rate would be

delta asymptotic to [pi/(2*log(1/epsilon))]^2 .

So (according to Boll's data) the decorations cause the constant

to be adjusted from pi/2 to pi*log 2.

--

Gerald A. Edgar Internet: ed...@mps.ohio-state.edu

Department of Mathematics Bitnet: EDGAR@OHSTPY

The Ohio State University telephone: 614-292-0395 (Office)

Columbus, OH 43210 -292-4975 (Math. Dept.) -292-1479 (Dept. Fax)

Feb 28, 1992, 1:30:53 PM2/28/92


ed...@function.mps.ohio-state.edu (Gerald Edgar) writes:

>I read this last year, and I was one of those replying that this

>is weird. In the course of the year, I have thought about it off

>and on, with some ideas, but no complete resolution. Below are

>some comments.

I have a simple question. Has anyone independently verified the original

claims *experimentally*? To me, it sounds too weird to be true, so I'd like

to know if the behavior can be duplicated by someone else, with different

code, etc., to make sure it's not some kind of anomaly or artifact of the

original poster's program.

Since no one has provided a simple theoretical explanation, this would seem

to be the next best thing.

--

Paul Callahan

call...@BIFFVM.cs.jhu.edu

Feb 28, 1992, 2:53:37 PM2/28/92


Yes, I have verified it to many digits. It does not

affect the answer if you count the # of iter. to get the

norm bigger than 2 or bigger than 10 or whatever. Likewise

I checked various seq. of delta->0 , not just 1/10^k and it

again does not change anything. I have some vague ideas

theoretically, but not enough to be coherent - if anyone

gets anything, let me know.

Feb 28, 1992, 6:03:20 PM2/28/92


>ed...@function.mps.ohio-state.edu (Gerald Edgar) writes:

>

>>I read this last year, and I was one of those replying that this

>>is weird. In the course of the year, I have thought about it off

>>and on, with some ideas, but no complete resolution. Below are

>>some comments.

>

>I have a simple question. Has anyone independently verified the original

>claims *experimentally*?

>


Yes, it's absolutely right!

David Petry

Feb 28, 1992, 4:02:24 PM2/28/92


>I have a simple question. Has anyone independently verified the original

>claims *experimentally*? [i.e. claims of divergence after 10^n*Pi iterations]

I checked the claims with a short C program and the same thing happens for

me. I checked at the left and right sides of the main cardioid. I also

checked to see what happens at the top bud of the main cardioid, but the

effect didn't seem to happen there. I don't have any explanation other

than it probably has something to do with equipotential curves.

I used a small bound of 2 or 4 for my tests. Note however, that the bound

doesn't really matter, since a large bound only adds a constant number of

iterations. This constant increment will become negligible for large n.

Ken Shirriff shir...@sprite.Berkeley.EDU

Feb 28, 1992, 6:39:53 PM2/28/92


Here's a somewhat heuristic theoretical explanation of what's going on,

at least around the point (1/4,0).

When we iterate the equation x := x^2 + 1/4+epsilon, starting at x = 0,

x increases slowly to 1/2, and after it passes 1/2, it zooms off rapidly

to infinity. So the interesting behavior is when x = 1/2. Let x = y+1/2.

Then our equation reads y := y^2 + y + epsilon, or using subscripts

y_(n+1) = (y_n)^2 + y_n + epsilon.

The y_n's are increasing smoothly and slowly, at least near y = 0, so it

is reasonable to consider y to be a function of the continuous variable n,

and y_(n+1) - y_n is very close to y'(n) (the derivative of y).

So our equation now reads y'(n) = y^2 + epsilon. This has the solution

y = a*tan(a*n+c) where a = Sqrt(epsilon). The initial point and end point

of our iteration correspond to consecutive poles of the tangent function,

giving a*n = pi, where a = Sqrt(epsilon) and n is the number of iterations

to leave the set, exactly what Mr. Boll has found.

A similar method seems to work around the point (-3/4,0), but I haven't

completed the analysis (it seems to require a second degree diffeq).

I did a little more experimentation, and found something really neat at

the points (-1.25,epsilon). There, n*epsilon/pi (n = #iterations) jumps

around chaotically, but it is always very close to an integer or a half

integer for epsilon very small. Weird!

David Petry

Feb 28, 1992, 6:55:31 PM2/28/92


>ed...@function.mps.ohio-state.edu (Gerald Edgar) writes:

>

>>I read this last year, and I was one of those replying that this

>>is weird. In the course of the year, I have thought about it off

>>and on, with some ideas, but no complete resolution. Below are

>>some comments.

>

>I have a simple question. Has anyone independently verified the original

>claims *experimentally*? To me, it sounds too weird to be true, ...


Here's some code to test it yourself. It'll ask you for values of x, and

epsilon. Enter -0.75 for x, and 0.00001 or something like that for

epsilon. Also especially interesting is using -1.25 for x. The code needs

to be modified just slightly to test around the point (0.25,0).

David Petry

/*************************** cut here *******************************/

#include <stdio.h>

#define MAX_ITER 100000000      /* maximum number of iterations */
#define PI 3.14159265358979

/* Count iterations of z -> z^2 + (px + i*py), from z = 0, until |z|^2 >= 4. */
int depth_in_set(double px, double py)
{
    int k = 0;
    double x = 0.0, y = 0.0, a = 0.0, b = 0.0;

    while (k++ < MAX_ITER && a + b < 4.0) {
        a = x * x;
        b = y * y;
        y = 2.0 * x * y + py;
        x = a - b + px;
    }
    return k;
}

int main(void)
{
    int a;
    double x, y;

    for (;;) {
        printf("Enter x >> ");
        if (scanf("%lf", &x) != 1)
            break;
        printf("Enter epsilon >> ");
        if (scanf("%lf", &y) != 1)
            break;
        a = depth_in_set(x, y);
        printf("\n\nDepth = %d\n", a);
        printf("Depth*y/pi = %f\n", a * y / PI);
    }
    return 0;
}

Mar 2, 1992, 4:17:54 PM3/2/92


pe...@pythagoras.math.washington.edu (David Petry) writes:

>In article <1992Feb28....@blaze.cs.jhu.edu> call...@biffvm.cs.jhu.edu (Paul Callahan) writes:

>>ed...@function.mps.ohio-state.edu (Gerald Edgar) writes:

>>

>>>I read this last year, and I was one of those replying that this

>>>is weird. In the course of the year, I have thought about it off

>>>and on, with some ideas, but no complete resolution. Below are

>>>some comments.

>>

>>I have a simple question. Has anyone independently verified the original

>>claims *experimentally*? To me, it sounds too weird to be true, ...

>Here's some code to test it yourself. It'll ask you for values of x, and

>epsilon. Enter -0.75 for x, and 0.00001 or something like that for

>epsilon. Also especially interesting is using -1.25 for x. The code needs

>to be modified just slightly to test around the point (0.25,0).

>David Petry

[..code deleted..]

Since we know that a given value of x will take (approximately) 10 times as
many iterations as the previous value (if we decrease x by a factor of 10
each time), we can probably speed up the code somewhat by running a plain
for-loop, with no escape test, out to roughly 10*N iterations
(N = the iteration count for the old x), and then iterating with the escape
test to get the count for the next decimal place exactly.

This probably allows for a number of further optimisations.

In any case, for serious research you'd have to integerise

the code.

Damian C Jackson

dam...@castle.ed.ac.uk
