>On 14 Oct, 19:19, "David C. Ullrich" <dullr...@sprynet.com> wrote:
>> In article
>> <bb5c9397-5927-43d0-9c5e-ee697953c...@y21g2000hsf.googlegroups.com>,
>>
>> s...@btinternet.com wrote:
>> > On 14 Oct, 11:29, zdu <zhao.hui...@gmail.com> wrote:
>> > > Is it true that
>> > > \lim_{n->+\infty}(\sum_{i=1}^{n}\frac{(-i)^{n-i}e^i}{(n-i)!})-2n=\frac{2}{3}
>>
>> > It certainly looks like it is true (though TeX is hard to decipher in
>> > an ascii newsgroup).
>>
>> > Limit as n goes to +infinity of
>> > Sum_{i=1 to n} (-i)^(n-i) * exp(i) / (n-i)! - 2n = 2/3
>>
>> Why do you say so? Maybe I'm doing something totally wrong -
>> my numerical experiments don't seem to indicate anything like
>> this. (What do you get for, say, n=1, 10, or 100?)
>>
>> --
>> David C. Ullrich
>
>Note that the -2n is outside the sum.
I was summing from 1 to n-1 (never mind how that happened).
Yes, it does appear to be right.
>For n=1 you get (-1)^0 * e / 0! - 2 = e - 2 i.e. about 0.718
>
>For n=2 you get e^2 - e - 4 about 0.671
>
>For n=3 you get e^3 - 2*e^2 + (1/2)*e - 6 about 0.66657
>
>For n=10 you get e^10 - 9*e^9 + 32*e^8 - (343/6)*e^7 + 54*e^6 -
>(625/24)*e^5 + (256/45)*e^4 - (243/560)*e^3 + (2/315)*e^2 -
>(1/362880)*e - 20 about 0.66666666649
>
>For larger numbers the sum simply gets longer and more prone to
>precision problems in calculation
David C. Ullrich
"Understanding Godel isn't about following his formal proof.
That would make a mockery of everything Godel was up to."
(John Jones, "My talk about Godel to the post-grads."
in sci.logic.)
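The partial sums quoted above are easy to check directly; here is a quick sketch in Python (the function name `partial` is just for illustration; plain double precision is adequate up to about n = 10, after which cancellation starts to bite):

```python
import math

def partial(n):
    # S(n) = Sum_{i=1 to n} (-i)^(n-i) * exp(i) / (n-i)!  -  2n
    s = sum((-i) ** (n - i) * math.exp(i) / math.factorial(n - i)
            for i in range(1, n + 1))
    return s - 2 * n

for n in (1, 2, 3, 10):
    print(n, partial(n))
```

This reproduces the values 0.718, 0.671, 0.66657 and 0.66666666649 given above.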
Fruitlessly exploring this, I came across some pretty identities which
I had not seen before. One is
Sum_{i=0 to k} (-1)^i * (n-i)^k * C(k,i) = k!
where n does not affect the result and does not need to be positive or
even an integer.
So for example
1*4^0 = 1 = 0!
1*4^1 - 1*3^1 = 1 = 1!
1*4^2 - 2*3^2 + 1*2^2 = 2 = 2!
1*4^3 - 3*3^3 + 3*2^3 - 1*1^3 = 6 = 3!
1*4^4 - 4*3^4 + 6*2^4 - 4*1^4 + 1*0^4 = 24 = 4!
and even
1*4^5 - 5*3^5 + 10*2^5 - 10*1^5 + 5*0^5 - 1*(-1)^5 = 120 = 5! etc.
This might explain n+1 of the 2n+2/3 in the original conjecture by
expanding the exponentials.
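The first identity is the k-th finite difference of x^k evaluated at x = n, which is why n drops out of the result. A quick exact check with Python's Fraction (a sketch; the name `kth_difference` is mine):

```python
from fractions import Fraction
from math import comb, factorial

def kth_difference(n, k):
    # Sum_{i=0 to k} (-1)^i * (n-i)^k * C(k,i)
    return sum((-1) ** i * (Fraction(n) - i) ** k * comb(k, i)
               for i in range(k + 1))

# holds for any n, positive or not, integer or not
for k in range(8):
    assert kth_difference(4, k) == factorial(k)
    assert kth_difference(Fraction(-7, 2), k) == factorial(k)
```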
Some other pretty identities are:
Sum_{i=0 to n} (-1)^(n-i) * i^(n-k) * C(n,i) = 0 [for n >= k > 0]
Sum_{i=0 to n} (-1)^(n-i) * i^n * C(n,i) = n!
Sum_{i=0 to n} (-1)^(n-i) * i^(n+k) * C(n,i) = n! * S2(n+k,n)
where S2 represents Stirling numbers of the second kind
so for example
1*4^0 - 4*3^0 + 6*2^0 - 4*1^0 + 1*0^0 = 0 taking 0^0=1
1*4^1 - 4*3^1 + 6*2^1 - 4*1^1 + 1*0^1 = 0
1*4^2 - 4*3^2 + 6*2^2 - 4*1^2 + 1*0^2 = 0
1*4^3 - 4*3^3 + 6*2^3 - 4*1^3 + 1*0^3 = 0
1*4^4 - 4*3^4 + 6*2^4 - 4*1^4 + 1*0^4 = 24 = 4!
1*4^5 - 4*3^5 + 6*2^5 - 4*1^5 + 1*0^5 = 240 = 4! * 10
1*4^6 - 4*3^6 + 6*2^6 - 4*1^6 + 1*0^6 = 1560 = 4! * 65
Sadly these later identities do not look as if they will help much
with the original conjecture. The last is stated in Abramowitz and
Stegun, Handbook of Mathematical Functions, Section 24.1.4.I.C
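All three identities can be verified exactly in a few lines; a sketch, computing the Stirling numbers of the second kind by the standard recurrence S2(m,j) = j*S2(m-1,j) + S2(m-1,j-1):

```python
from math import comb, factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(m, j):
    # Stirling numbers of the second kind, S2(m, j)
    if m == j:
        return 1
    if m == 0 or j == 0:
        return 0
    return j * stirling2(m - 1, j) + stirling2(m - 1, j - 1)

def alt_sum(n, p):
    # Sum_{i=0 to n} (-1)^(n-i) * i^p * C(n,i); Python's 0**0 is 1
    return sum((-1) ** (n - i) * i ** p * comb(n, i) for i in range(n + 1))

for n in range(1, 7):
    for k in range(1, n + 1):
        assert alt_sum(n, n - k) == 0               # for n >= k > 0
    assert alt_sum(n, n) == factorial(n)
    for k in range(1, 5):
        assert alt_sum(n, n + k) == factorial(n) * stirling2(n + k, n)
```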
----
Here is a possible probabilistic solution:
Consider the total distance n+E when the total first hits or exceeds
n, with a final step F. So both the excess distance E and the final
step F are in [0,1] with F>E. F will tend to be big because it is
conditioned on passing n, while E will tend to be small since it is
conditioned on n having been passed.
For large n, the probability density of F is close to 2x while the
density of E is close to 2-2x. See
http://domino.research.ibm.com/Comm/wwwr_ponder.nsf/challenges/June2007.html
for more details.
This makes the expected value of E be 1/3 and so the expected total
distance is about n+1/3. Since for large n the average distance per
step is close to 1/2, the expected number of steps is about 2n+2/3.
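This is easy to test by simulation; a rough Monte Carlo sketch (2n + 2/3 and 1/3 are only exact in the limit, so the tolerances are loose):

```python
import random

random.seed(12345)
n, trials = 50, 100000
steps = excess = 0.0
for _ in range(trials):
    total = 0.0
    count = 0
    while total < n:            # add uniform [0,1) steps until we pass n
        total += random.random()
        count += 1
    steps += count
    excess += total - n         # the excess distance E

print(steps / trials)   # should be close to 2*n + 2/3 = 100.667
print(excess / trials)  # should be close to 1/3
```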
The idea sounds great.
In the solution of the problem, it says:
Values between 0 and 1 are equally likely but larger values are more
likely to cause the sum to exceed n+x. So as n --> infinity the
differential probability that x_k=t is 2*t*dt.
It is difficult to understand why the differential probability is
2*t. Do you know why the result is 2*t?
And given that the expected value of E is 1/3 and the expected distance
of each step is 1/2, can we conclude that the expected number of steps
is about 2n+2/3 when n is large enough?
The argument goes something like:
The joint distribution of the final step F and the excess E tends
toward being flat for large n subject to the constraint
0 <= E < F <= 1.
We have int_{x=0 to 1} int_{y=0 to x} dy dx = 1/2 so the joint
probability density must be close to 2 for large n.
So Prob(F<=t) = int_{x=0 to t} int_{y=0 to x} 2 dy dx = t^2
and the marginal probability density for F is (close to) 2t
while Prob(E<=t) = int_{y=0 to t} int_{x=y to 1} 2 dx dy = 2t - t^2
and the marginal probability density for E is (close to) 2 - 2t
This is how I would do the check:
With y and z in [0,1),
let g(y,z) = Prob(first point at or above n+1 is at or below n+1+z |
given that point n+y is visited)
Note that n+y does not have to be either the first or the last point
visited in the interval [n,n+1).
I think you have
if y>=z then g(y,z) = z + int_{x=y to 1} g(x,z) dx
if y<=z then g(y,z) = y + int_{x=y to 1} g(x,z) dx
which has solution
if y>=z then g(y,z) = z*exp(1-y)
if y<=z then g(y,z) = 1 + (z*exp(1)-exp(z))*exp(-y)
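One can confirm numerically that this solution satisfies both integral equations; a sketch using a midpoint-rule quadrature (function names are mine):

```python
import math

def g(y, z):
    # the claimed solution
    if y >= z:
        return z * math.exp(1 - y)
    return 1 + (z * math.e - math.exp(z)) * math.exp(-y)

def rhs(y, z, steps=100000):
    # min(y,z) + int_{x=y to 1} g(x,z) dx, by the midpoint rule;
    # min(y,z) covers both the y>=z and y<=z equations at once
    h = (1 - y) / steps
    integral = sum(g(y + (j + 0.5) * h, z) for j in range(steps)) * h
    return min(y, z) + integral

for y, z in [(0.7, 0.2), (0.2, 0.7), (0.5, 0.5), (0.05, 0.95)]:
    assert abs(g(y, z) - rhs(y, z)) < 1e-6
```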
Now to check 2*(1-t):
So we want to look at the probability that the first point at or above
n+1 is at or below n+1+z, using the prior density 2*(1-t) for the
first point at or above n being at n+t,
i.e. int_{t=0 to 1} [2*(1-t) * g(t,z)] dt
which is int_{t=0 to z} [2*(1-t) + 2*(1-t)*(z*exp(1)-exp(z))*exp(-t)] dt
+ int_{t=z to 1} [2*(1-t)*z*exp(1-t)] dt
which is 2*z - z^2
and the derivative of that is 2*(1-z), as I would have hoped for.
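The same quadrature idea verifies this step: integrating the density 2*(1-t) against g(t,z) does give 2z - z^2 (a sketch):

```python
import math

def g(y, z):
    # the solution of the integral equations above
    if y >= z:
        return z * math.exp(1 - y)
    return 1 + (z * math.e - math.exp(z)) * math.exp(-y)

def cdf_next(z, steps=100000):
    # int_{t=0 to 1} 2*(1-t)*g(t,z) dt, by the midpoint rule
    h = 1.0 / steps
    return sum(2 * (1 - (j + 0.5) * h) * g((j + 0.5) * h, z)
               for j in range(steps)) * h

for z in (0.1, 0.25, 0.5, 0.9):
    assert abs(cdf_next(z) - (2 * z - z * z)) < 1e-6
```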
That is a good question. I made what I thought was an attractive
argument, but I can also cast doubt on it.
For example, the expected position immediately before reaching n tends
towards n-1/3 for large n but the expected number of steps to reach it
is not close to 2n-2/3, instead being close to 2n-1/3.
I would distinguish the position immediately after reaching n from the
position immediately before reaching n. Reaching n is a "stopping
rule" since we know at that point that we have reached n, while for
any point before n there is always the possibility of a small step
which does not reach n. Since we have a stopping rule, I believe we
can use the optional stopping property of martingales, and so with
independent steps of expected length 1/2, the expected total distance
is half the expected number of steps under any stopping rule.
Take as an example this particular problem for n=1:
What is the expected number of steps until the total distance traveled
exceeds 1?
Answer: exp(1),
as the expected number of steps from the point x<1 is exp(1-x) and
initially x=0.
What is the expected total distance when the total distance traveled
first exceeds 1?
Answer: exp(1)/2,
since my earlier analysis gave
g(0,z) = 1 + z*exp(1) - exp(z)
which has a derivative exp(1) - exp(z) and so the answer
is 1 + int_{z=0 to 1} z*(exp(1)-exp(z)) dz
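Both n=1 answers are easy to confirm by simulation (a sketch; exp(1) is about 2.71828 and exp(1)/2 about 1.35914):

```python
import random

random.seed(7)
trials = 200000
steps = dist = 0.0
for _ in range(trials):
    total = 0.0
    count = 0
    while total <= 1:           # step until the total exceeds 1
        total += random.random()
        count += 1
    steps += count
    dist += total

print(steps / trials)  # should be close to exp(1)
print(dist / trials)   # should be close to exp(1)/2
```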
Or as another stopping rule example:
What is the expected total number of steps until (and including) when
a step length first exceeds 2/3?
Answer: 3
= 1*(1/3) + 2*(2/9) + 3*(4/27) + ...
What is the expected total distance when a step length first exceeds
2/3?
Answer: 3/2
= (5/6)*(1/3) + (7/6)*(2/9) + (9/6)*(4/27) + ...
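A quick simulation of this last example (a sketch) confirms both values, and with them the "distance = half the steps" martingale relation:

```python
import random

random.seed(3)
trials = 200000
steps = dist = 0.0
for _ in range(trials):
    count = 0
    total = 0.0
    while True:
        x = random.random()
        count += 1
        total += x
        if x > 2 / 3:           # stop once a single step exceeds 2/3
            break
    steps += count
    dist += total

print(steps / trials)  # should be close to 3
print(dist / trials)   # should be close to 3/2
```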