
Transforming "C" into a Turing equivalent language 001 [Providing unlimited memory access to C]


olcott

unread,
Aug 30, 2020, 2:56:13 PM8/30/20
to
The end goal of this sequence of posts is to show that the basic "C"
programming language (without the "C" libraries) can be fully mapped to
an abstract model of computation that is equivalent to a Turing machine
in such a way that any Turing complete computation can be written in the
"C" programming language.

When "C" is mapped to an abstract model of computation that can provide
an arbitrary number of arbitrary length linked lists, then "C" acquires
Turing complete memory access. This is computationally the same as
providing "C" with an unlimited number of Turing machine tapes.

When we define this abstract memory access such that it can be
concretely implemented on machines with finite memory and this concrete
implementation automatically scales to any increase in physical memory,
then the memory access aspect of the concrete implementation is
computationally identical to the abstract Turing complete machine.

Such a virtual machine would provide Turing complete memory access to
"C" because the memory access aspect would behave exactly the same
across the Turing complete abstract machine and the concrete machine for
all computations not requiring more memory than the concrete machine has.

The virtual machine code that the "C" programs would be translated into
would be a Turing equivalent language. Thus the machine description
language would have identical execution on the concrete physical machine
as it would on the abstract Turing equivalent machine.

The input, output, state changes, and moves of the Tape head would be
identical between the two machines for any computation not requiring
more memory than the physical machine actually has.
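
A rough sketch, with names invented here, of the kind of linked-list
memory interface described above: any number of arbitrary-length doubly
linked lists, each playing the role of one Turing machine tape. On the
abstract machine cell_new() never fails; on a concrete machine it fails
only when physical memory runs out.

#include <stdlib.h>

typedef struct Cell {
    unsigned char symbol;        /* one tape symbol */
    struct Cell  *prev;
    struct Cell  *next;
} Cell;

typedef struct Tape {
    Cell *head;                  /* the cell under the tape head */
} Tape;

static Cell *cell_new(void)
{
    Cell *c = calloc(1, sizeof *c);
    if (!c)
        abort();                 /* concrete machine out of memory */
    return c;
}

static Tape tape_new(void)
{
    Tape t = { cell_new() };
    return t;
}

static void move_right(Tape *t)  /* the list grows on demand */
{
    if (!t->head->next) {
        t->head->next = cell_new();
        t->head->next->prev = t->head;
    }
    t->head = t->head->next;
}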


--
Copyright 2020 Pete Olcott

Kaz Kylheku

unread,
Aug 30, 2020, 3:20:07 PM8/30/20
to
On 2020-08-30, olcott <No...@NoWhere.com> wrote:
> The end goal of this sequence of posts is to show that the basic "C"
> programming language (without the "C" libraries) can be fully mapped to
> an abstract model of computation that is equivalent to a Turing machine
> in such a way that any Turing complete computation can be written in the
> "C" programming language.

Turing's infinite tape can be mapped to the <stdio.h> facility.
The possible symbols on that tape can be mapped to characters,
and the operations and positioning can be mapped to the available
library operations.
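
A minimal sketch of that mapping, assuming a hosted implementation; the
helper names here are invented, the tape is a tmpfile(), cell i lives at
byte offset i, and '_' plays the role of the blank symbol. Note that the
cell index is still a bounded long and that fseek is allowed to fail,
caveats this thread comes back to.

#include <stdio.h>
#include <stdlib.h>

static FILE *tape_open(void)
{
    FILE *t = tmpfile();
    if (!t)
        abort();
    return t;
}

static int tape_read(FILE *t, long cell)
{
    int c;
    if (fseek(t, cell, SEEK_SET) != 0)
        abort();                    /* fseek may fail */
    c = fgetc(t);
    return c == EOF ? '_' : c;      /* beyond the written end is blank */
}

static void tape_write(FILE *t, long cell, int c)
{
    if (fseek(t, cell, SEEK_SET) != 0)
        abort();
    fputc(c, t);
    fflush(t);   /* a flush or seek must separate output from later input */
}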

> When "C" is mapped to an abstract model of computation that can provide
> an arbitrary number of arbitrary length linked lists, then "C" acquires

In the absence of I/O, the C storage model is finite, and therefore not
Turing complete. However, perhaps not quite.

I had a discussion about this with someone who convinced me of the
following: in C we can allocate automatic storage (informally,
"local variables on the stack") via recursion. If we do this without
taking the address of anything, then pointers do not appear in
the program, and thus, in the abstract sense, we don't run into the
issue of pointers being of finite width. With that restriction, we have
no random access (the program cannot access material in existing
activation frames without returning to them). Therefore what we have is
a push-down automaton (PDA). A PDA is strictly less powerful than a
Turing machine. It contains an infinity in that it can push down as far
as it needs to, but there are some computations that can't be carried
out with a universal push down machine.
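
A minimal sketch of that restriction, with the caveat that a real
implementation still has a finite call stack: the recursion depth acts
as the pushdown stack, no address is ever taken, and the program
recognizes the non-regular language a^n b^n, which a finite-state
machine cannot do but a PDA can.

#include <stdio.h>

static int match(void)              /* "push" by recursing on each 'a' */
{
    int c = getchar();
    if (c == 'a')
        return match() && getchar() == 'b';   /* "pop" expects one 'b' */
    if (c != EOF)
        ungetc(c, stdin);           /* not an 'a': leave it for later */
    return 1;
}

int main(void)
{
    int ok = match() && getchar() == '\n';
    puts(ok ? "accepted" : "rejected");
    return 0;
}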

> When we define this abstract memory access such that it can be
> concretely implemented on machines with finite memory and this concrete
> implementation automatically scales to any increase in physical memory
> then the memory access aspect of the concrete implementation is
> computationally identical to the abstract Turing complete machine.

This scaling is impossible in general because:

- an expression like sizeof (void *) is a compile-time constant
(see the sketch after this list).

- pointers are embedded in data structures where not only is there
no room to grow them wider, but programs are not prepared to have
the offsets of members change at run-time to accommodate.
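
A minimal illustration of the first point, assuming a C11 compiler for
_Static_assert: the pointer width is fixed when the program is
translated and cannot grow at run time.

#include <stdio.h>

static char pointer_image[sizeof (void *)];   /* sized at compile time */

_Static_assert(sizeof (void *) == sizeof pointer_image,
               "pointer width is fixed at translation time");

int main(void)
{
    printf("pointer width on this implementation: %zu bytes\n",
           sizeof (void *));
    return 0;
}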

> Such a virtual machine would provide Turing complete memory access to
> "C" because the memory access aspect would behave exactly the same
> across the Turing complete abstract machine and the concrete machine for
> all computations not requiring more memory than the concrete machine has.

The problem is that the amount of memory has to be known in advance. So
that is to say, for a given C program and its input case, we have to
know how much memory it will need. Then choose an instance of the
abstract machine which will fit. Then instantiate the machine,
translate the program, and execute it on the intended input.

The reason that is a problem is that knowing how much capacity to
provision for a given instance is equivalent to the halting problem.

You're much better off chasing Turing completeness via the stdio
stream route.

--
TXR Programming Language: http://nongnu.org/txr
Music DIY Mailing List: http://www.kylheku.com/diy
ADA MP-1 Mailing List: http://www.kylheku.com/mp1

olcott

unread,
Aug 30, 2020, 3:57:39 PM8/30/20
to
On 8/30/2020 2:19 PM, Kaz Kylheku wrote:
> On 2020-08-30, olcott <No...@NoWhere.com> wrote:
>> The end goal of this sequence of posts is to show that the basic "C"
>> programming language (without the "C" libraries) can be fully mapped to
>> an abstract model of computation that is equivalent to a Turing machine
>> in such a way that any Turing complete computation can be written in the
>> "C" programming language.
>
> Turing's infinite tape can be mapped to the <stdio.h> facility.
> The possible symbols on that tape can be mapped to characters,
> and the operations and positioning can be mapped to the available
> library operations.
>
>> When "C" is mapped to an abstract model of computation that can provide
>> an arbitrary number of arbitrary length linked lists, then "C" acquires
>
> In the absence of I/O, the C storage model is finite, and therefore not
> Turing complete. However, perhaps not quite.
>
> I had a discussion about this with someone who convinced me of the
> following: in C we can allocate automatic storage (informally,
> "local variables on the stack") via recursion. If we do this without
> taking the address of anything, then pointers do not appear in
> the program, and thus, in the abstract sense, we don't run into the
> issue of pointers being of finite width. With that restriction, we have

Good point that I had not considered.

> no random access (the program cannot access material in existing
> activation frames without returning to them). Therefore what we have is
> a push-down automaton (PDA). A PDA is strictly less powerful than a

This immediately occurred to me.
The only thing that we need is the capability to create an unlimited
number of unlimited length linked-lists.

There is no need to know the memory requirements in advance. We could
give it a subset of the std::list interface.

Unlimited-length integers could be stored as space-delimited numeric
digits in ASCII, hex, or some other base. This can be leveraged to
provide a pointer to the top of an unlimited heap.
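
A rough sketch, in C and with names invented here, of the representation
suggested above: an unlimited-length integer stored as a linked list of
decimal digits (least significant first), the kind of value that could
serve as the pointer to the top of an unlimited heap.

#include <stdlib.h>

typedef struct Digit {
    unsigned char value;        /* 0..9 */
    struct Digit *next;         /* next more-significant digit */
} Digit;

static Digit *digit_new(unsigned char v)
{
    Digit *d = malloc(sizeof *d);
    if (!d)
        abort();
    d->value = v;
    d->next = NULL;
    return d;
}

/* Increment an unbounded counter; it grows by one digit on carry out. */
static void bignum_increment(Digit *n)
{
    while (n->value == 9) {
        n->value = 0;
        if (!n->next)
            n->next = digit_new(0);
        n = n->next;
    }
    n->value++;
}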

olcott

unread,
Aug 30, 2020, 4:02:34 PM8/30/20
to
On 8/30/2020 2:41 PM, Kaz Kylheku wrote:
> On 2020-08-30, Siri Cruise <chine...@yahoo.com> wrote:
>> In article <202008301...@kylheku.com>,
>> Kaz Kylheku <793-84...@kylheku.com> wrote:
>>
>>> In the absence of I/O, the C storage model is finite, and therefore not
>>> Turing complete. However, perhaps not quite.
>>
>> Make the memory a shift register implemented as modules connected
>> by USB? cables. Arrange it so you can cable in any number of
>> modules. You now have memory of finite but unbounded length, just
>> like a Turing tape.
>
> Peter Olcott could be right in the following way.
>
> Suppose we restrict ourselves to only highly portable programs,
> which don't depend on any features of an implementation such as pointer
> size. (I deliberately don't want to say strictly conforming, because
> strictly conforming programs don't violate implementation limits.)
>
> These highly portable programs, if they halt on their input and have
> enough capacity from the implementation, will calculate the same result
> regardless of the implementation characteristics, in particular, pointer
> size.
>
> Then the strategy is conceivable of allowing a program to calculate
> until it hits a memory-related resource. Then, at that point,
> instantiate a new implementation of C which adds one more bit to the
> width of a pointer relative to the previous implementation (and has
> the storage to match it).
>
> Then recompile the program for that implementation, and execute it on
> the same input. It will have to repeat every calculation done so far,
> but if the program is portable, then the result of that part is
> identical, and will bring the machine into the same abstract state
> to that point where it previously had to abort; and with the memory
> doubled, it can then continue.
>
> Our "meta implementation" which scales itself lazily when the program
> hits a limit looks like could be a UTM.
>
> But, so what?
>

This is all focused on leveraging my credibility for my halt decider
written in "C" and implemented as x86 machine code.

I want to make totally sure in advance that my correct halt decider is
not dismissed as inapplicable to Turing machines.

olcott

unread,
Aug 30, 2020, 5:19:58 PM8/30/20
to
On 8/30/2020 3:36 PM, Ben Bacarisse wrote:
> Kaz Kylheku <793-84...@kylheku.com> writes:
>
>> On 2020-08-30, olcott <No...@NoWhere.com> wrote:
>>> The end goal of this sequence of posts is to show that the basic "C"
>>> programming language (without the "C" libraries) can be fully mapped to
>>> an abstract model of computation that is equivalent to a Turing machine
>>> in such a way that any Turing complete computation can be written in the
>>> "C" programming language.
>>
>> Turing's infinite tape can be mapped to the <stdio.h> facility.
>> The possible symbols on that tape can be mapped to characters,
>> and the operations and positioning can be mapped to the available
>> library operations.
>
> I think (but it's hard to get answers) that Peter Olcott is only
> considering what we'd call freestanding implementations here.
>
> But, worse, he's not interested in simply showing that hosted C is
> Turing complete by simulating a universal TM. I think he aims to show
> that "ordinary C" can be made Turing complete just by implementing it on
> a Turing complete sub-system.
>
> (BTW, even an IO-based UTM needs a bit of checking. Functions like
> fgetpos need to be allowed to fail. I think the specification is plenty
> loose enough for that since it says almost nothing about what failures
> are permissible.)
>
>>> When "C" is mapped to an abstract model of computation that can provide
>>> an arbitrary number of arbitrary length linked lists, then "C" acquires
>>
>> In the absence of I/O, the C storage model is finite, and therefore not
>> Turing complete. However, perhaps not quite.
>>
>> I had a discussion about this with someone who convinced me of the
>> following: in C we can allocate automatic storage (informally,
>> "local variables on the stack") via recursion. If we do this without
>> taking the address of anything, then pointers do not appear in
>> the program, and thus, in the abstract sense, we don't run into the
>> issue of pointers being of finite width. With that restriction, we have
>> no random access (the program cannot access material in existing
>> activation frames without returning to them). Therefore what we have is
>> a push-down automaton (PDA). A PDA is strictly less powerful than a
>> Turing machine. It contains an infinity in that it can push down as far
>> as it needs to, but there are some computations that can't be carried
>> out with a universal push down machine.
>
> Hmm... Modern C has threads and therefore more than one stack. A PDA
> with two stacks is Turing complete so there may be a way after all.
> Nice one!
>

There you go: teamwork between you and Kaz.

Now the key question of how to make an unlimited-depth stack is
directly addressed by the other key implementation detail:

How do you provide a linked list of unlimited depth that automatically
scales to whatever memory is available?

This problem has just been simplified by you and Kaz in that we now only
need two of these. Also, with the piece added by Kaz, we have a much
simpler interface: push() and pop().

A two-stack-based machine might be a cleaner implementation because it
would never have to do garbage collection. It may provide a more
cumbersome model of computation than a RAM machine, though. If that is
the case then it is much more difficult to map "C" to the two-stack
machine.
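
A rough sketch of the two-stack idea, with names invented here: the tape
to the left of the head is one stack, the cell under the head and
everything to its right is the other, and moving the head pops from one
stack and pushes onto the other, so nothing ever needs collecting beyond
freeing popped cells.

#include <stdlib.h>

typedef struct Node {
    int          symbol;
    struct Node *below;
} Node;

typedef struct {
    Node *left;     /* symbols left of the head, nearest on top */
    Node *right;    /* symbol under the head and everything rightward */
} TwoStackTape;

static void push(Node **top, int symbol)
{
    Node *n = malloc(sizeof *n);
    if (!n)
        abort();
    n->symbol = symbol;
    n->below  = *top;
    *top = n;
}

static int pop(Node **top)          /* an empty stack reads as blank */
{
    Node *n = *top;
    int s;
    if (!n)
        return '_';
    s = n->symbol;
    *top = n->below;
    free(n);
    return s;
}

static void move_right(TwoStackTape *t) { push(&t->left,  pop(&t->right)); }
static void move_left (TwoStackTape *t) { push(&t->right, pop(&t->left));  }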

> Often, things like IO and threads (unless wired into the language
> syntax) are informally excluded when asking if language X is Turing
> complete, but I don't think the rules of this particular game are clear.
>
> There is, though, another way to argue about C. Implement your UTM in
> our all-too-finite C as, say, compiled by gcc. If a computation fails
> for lack of space, just get the gcc people to double the size of all the
> pointers and try again. Repeat as necessary, using pointers simulated
> in software. Each iteration can conform to the C standard, and since
> you only need to be able to run terminating computations to be Turing
> complete, one can show that there is (in theory) a C implementation that
> can run any given computation.
>
> Whether this quantifier fiddling is permitted ("∀ p in programs, ∃ C
> compiler s.t. C runs p" is not the same as "∃ C compiler s.t. ∀ p in
> programs, C runs p") depends on the rules of the game. None of this is
> as tidy and formal as it is with the various mathematical models of
> computation.
>
> <cut>

olcott

unread,
Aug 30, 2020, 9:35:42 PM8/30/20
to
On 8/30/2020 6:23 PM, Mike Terry wrote:
> On 30/08/2020 21:36, Ben Bacarisse wrote:
>> Kaz Kylheku <793-84...@kylheku.com> writes:
>>
>>> On 2020-08-30, olcott <No...@NoWhere.com> wrote:
>>>> The end goal of this sequence of posts is to show that the basic "C"
>>>> programming language (without the "C" libraries) can be fully mapped to
>>>> an abstract model of computation that is equivalent to a Turing machine
>>>> in such a way that any Turing complete computation can be written in
>>>> the
>>>> "C" programming language.
>>>
>>> Turing's infinite tape can be mapped to the <stdio.h> facility.
>>> The possible symbols on that tape can be mapped to characters,
>>> and the operations and positioning can be mapped to the available
>>> library operations.
>>
>> I think (but it's hard to get answers) that Peter Olcott is only
>> considering what we'd call freestanding implementations here.
>>
>> But, worse, he's not interested in simply showing that hosted C is
>> Turing complete by simulating a universal TM.  I think he aims to show
>> that "ordinary C" can be made Turing complete just by implementing it on
>> a Turing complete sub-system.
>>
>> (BTW, even an IO-based UTM needs a bit of checking.  Functions like
>> fgetpos need to be allowed to fail.  I think the specification is plenty
>> loose enough for that since it says almost nothing about what failures
>> are permissible.)
>>
>>>> When "C" is mapped to an abstract model of computation that can provide
>>>> an arbitrary number of arbitrary length linked lists, then "C" acquires
>>>
>>> In the absence of I/O, the C storage model is finite, and therefore not
>>> Turing complete. However, perhaps not quite.
>>>
>>> I had a discussion about this with someone who convinced me of the
>>> following: in C we can allocate automatic storage (informally,
>>> "local variables on the stack") via recursion. If we do this without
>>> taking the address of anything, then pointers do not appear in
>>> the program, and thus, in the abstract sense, we don't run into the
>>> issue of pointers being of finite width.  With that restriction, we have
>>> no random access (the program cannot access material in existing
>>> activation frames without returning to them).  Therefore what we have is
>>> a push-down automaton (PDA).  A PDA is strictly less powerful than a
>>> Turing machine. It contains an infinity in that it can push down as far
>>> as it needs to, but there are some computations that can't be carried
>>> out with a universal push down machine.
>>
>> Hmm...  Modern C has threads and therefore more than one stack.  A PDA
>> with two stacks is Turing complete so there may be a way after all.
>> Nice one!
>
> Isn't it the case though, that
>
> -  In any C implementation (or C++) the size of a pointer is fixed
>
> -  Every distinct object (class/struct/variable/etc...) must have
>    its own unique address (that fits the pointer size)
> ?
>
> If so any C implementation can only have finitely many distinct
> addressable objects, and so can not fully implement a TM tape, and can
> only in principle run a subset of TM calculations
>
> Of course, any C implementation can implement a partial TM emulator, and
> this might have the capacity to run interesting TM calculations, but it
> seems to me not all TM calculations.
>
> Sure, if we consider a sequence of C implementations with ever larger
> address pointer sizes, then given any /terminating/ TM calculation,
> eventually one of them will be capable of running it.  But some
> non-terminating TM calculations cannot run on a finite tape, so we're
> sort of losing this theoretical battle.
>
> Now if we could change the C spec to allow variable length pointer
> sizes, then we would have a theoretically infinite pool of addressable
> objects, so suddenly a full TM emulator becomes easy in terms of
> theoretical program design, but obviously not when it comes to running
> on a given physical implementation, whose memory will only be finite.
> (But this probably doesn't matter for PO, who knows?  Certainly not PO,
> who is quite lost in all this...)
>
> Or if we can use IO facilities then a potentially infinite virtual
> memory results - e.g. parts of the tape could be backed in a cloud-based
> system addressed through arbitrarily long URLs.  (Ok, maybe URLs have a
> max length, but we can envisage some new storage device addressed
> through an IO subsystem not having this restriction.)
>

I only really need to know about computations that are TM equivalent:
meaning, for example, a computation performed by a "C" or x86 machine
language program that, executed on an input, derives output equivalent
to that of a Turing machine on equivalent input.

>>
>> Often, things like IO and threads (unless wired into the language
>> syntax) are informally excluded when asking if language X is Turing
>> complete, but I don't think the rules of this particular game are clear.
>>
>> There is, though, another way to argue about C.  Implement your UTM in
>> our all-too-finite C as, say, compiled by gcc.  If a computation fails
>> for lack of space, just get the gcc people to double the size of all the
>> pointers and try again.  Repeat as necessary, using pointers simulated
>> in software.  Each iteration can conform to the C standard, and since
>> you only need to be able to run terminating computations to be Turing
>> complete, one can show that there is (in theory) a C implementation that
>> can run any given computation.
>
> Yes, for a given terminating calculation.  But to do this we just
> declare a big enough array to run the calculation, and run it.  This
> doesn't take too much thought or discussion regarding either program
> design or language considerations, although PO threads typically get to
> thousands of posts so we'll see how it goes in this newsgroup.
>
>
> Regards,
> Mike.

I was just having this exact same idea. This means that a single feature
that I added to my x86-UTM environment makes this environment Turing
Complete for any computation that fits within its 32-bit memory space:
u32* Allocate(u32 Size);
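
The x86-UTM environment itself is not shown here, so the following is
only a guessed-at sketch of what a primitive with that signature could
look like: a bump allocator over a fixed arena inside a 32-bit address
space, where the arena size and the choice of counting Size in u32
cells are assumptions made for the sketch.

#include <stdint.h>
#include <stdlib.h>

typedef uint32_t u32;

#define ARENA_CELLS (1u << 20)      /* arbitrary arena size: 1M u32 cells */

static u32 arena[ARENA_CELLS];
static u32 next_free;               /* index of the next unused cell */

u32 *Allocate(u32 Size)             /* Size assumed to be in u32 cells */
{
    u32 *p;
    if (Size > ARENA_CELLS - next_free)
        abort();                    /* 32-bit memory space exhausted */
    p = &arena[next_free];
    next_free += Size;
    return p;
}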

>
>>
>> Whether this quantifier fiddling is permitted ("∀ p in programs, ∃ C
>> compiler s.t. C runs p" is not the same as "∃ C compiler s.t. ∀ p in
>> programs, C runs p") depends on the rules of the game.  None of this is
>> as tidy and formal as it is with the various mathematical models of
>> computation.
>>
>> <cut>
>>


olcott

unread,
Aug 30, 2020, 10:35:55 PM8/30/20
to
On 8/30/2020 8:30 PM, Ben Bacarisse wrote:
> olcott <No...@NoWhere.com> writes:
>

>>>> I had a discussion about this with someone who convinced me of the
>>>> following: in C we can allocate automatic storage (informally,
>>>> "local variables on the stack") via recursion. If we do this without
>>>> taking the address of anything, then pointers do not appear in
>>>> the program, and thus, in the abstract sense, we don't run into the
>>>> issue of pointers being of finite width. With that restriction, we have
>>>> no random access (the program cannot access material in existing
>>>> activation frames without returning to them). Therefore what we have is
>>>> a push-down automaton (PDA). A PDA is strictly less powerful than a
>>>> Turing machine. It contains an infinity in that it can push down as far
>>>> as it needs to, but there are some computations that can't be carried
>>>> out with a universal push down machine.
>>>
>>> Hmm... Modern C has threads and therefore more than one stack. A PDA
>>> with two stacks is Turing complete so there may be a way after all.
>>> Nice one!
>>
>> There you go: teamwork between you and Kaz.
>
> Note "may". In the end it is likely to come down to how one chooses to
> interpret the standard. It's really not a clear-cut question. (See my
> reply to Mike Terry about that.)

On 8/30/2020 6:23 PM, Mike Terry wrote:
> On 30/08/2020 21:36, Ben Bacarisse wrote:
>> There is, though, another way to argue about C. Implement
>> your UTM in our all-too-finite C as, say, compiled by gcc.
>> If a computation fails
>> for lack of space, just get the gcc people to double the size
>> of all the pointers and try again. Repeat as necessary,
>> using pointers simulated in software. Each iteration can
>> conform to the C standard, and since you only need to be able
>> to run terminating computations to be Turing complete, one can
>> show that there is (in theory) a C implementation that
>> can run any given computation.
>
> Yes, for a given terminating calculation. But to do this we just
> declare a big enough array to run the calculation, and run it. This
> doesn't take too much thought or discussion regarding either program
> design or language considerations,
>
> Regards,
> Mike.

So Kaz spurred an idea in you that spurred an idea in Mike that seems to
complete the circle. If Mike is right then any computation performed
within the 32-bit 4GB address space of the x86 architecture that
terminates normally would be equivalent to a computation on a Turing
Machine having equivalent input. Since the x86 machine language program
was generated by a "C" compiler then the "C" version of the program is a
Turing equivalent computation.

As long as this reasoning is completely correct this part of the
analysis is finished. Every computation by a "C" language program on an
input that terminates normally is equivalent to the computation on a
Turing machine on equivalent input.

Here is where the analysis gets extended:
The input to the halt decider is the x86 machine language of itself.
This means that the input to the Turing machine must be this same
algorithm implemented as a conventional five-Tuple Turing machine.

Since the task of making a "C" to conventional five-tuple Turing Machine
translator is very difficult, we define some augmentations that create a
Turing equivalent virtual machine language that "C" maps to much more
directly.

When this virtual machine executes a halt decider that takes its own
virtual machine description as its input then we can know by proxy that
the x86 halt decider has equivalent input to an actual Turing machine.

The input to the x86 based UTM derives equivalent output to the output
of a virtual machine that is equivalent to a Turing machine and has
input equivalent to the input to the x86 based UTM.

This proves (by proxy) that input to an actual Turing machine that is
equivalent to the input to the x86 UTM would derive equivalent output
from the actual Turing machine.

olcott

unread,
Aug 31, 2020, 12:55:58 AM8/31/20
to
If we assume that Mike's analysis is correct then we can conclude that
every x86 program that derives a result has an equivalent Turing machine
computation on equivalent input deriving an equivalent result.

If an x86 based halt decider derives a correct halting decision on the
basis of its own x86 code then there exists an equivalent Turing machine
(using the same algorithm) that correctly decides halting on its own
Turing machine description.

Now if we just had some computer science to back that up.

olcott

unread,
Aug 31, 2020, 10:59:38 AM8/31/20
to
On 8/30/2020 8:24 PM, Ben Bacarisse wrote:
> Mike Terry <news.dead.p...@darjeeling.plus.com> writes:
>
>> On 30/08/2020 21:36, Ben Bacarisse wrote:
>>> Kaz Kylheku <793-84...@kylheku.com> writes:
>>>
>>>> On 2020-08-30, olcott <No...@NoWhere.com> wrote:
>>>>> The end goal of this sequence of posts is to show that the basic "C"
>>>>> programming language (without the "C" libraries) can be fully mapped to
>>>>> an abstract model of computation that is equivalent to a Turing machine
>>>>> in such a way that any Turing complete computation can be written in the
>>>>> "C" programming language.
>>>>
>>>> Turing's infinite tape can be mapped to the <stdio.h> facility.
>>>> The possible symbols on that tape can be mapped to characters,
>>>> and the operations and positioning can be mapped to the available
>>>> library operations.
>>>
>>> I think (but it's hard to get answers) that Peter Olcott is only
>>> considering what we'd call freestanding implementations here.
>>>
>>> But, worse, he's not interested in simply showing that hosted C is
>>> Turing complete by simulating a universal TM. I think he aims to show
>>> that "ordinary C" can be made Turing complete just by implementing it on
>>> a Turing complete sub-system.
>>>
>>> (BTW, even an IO-based UTM needs a bit of checking. Functions like
>>> fgetpos need to be allowed to fail. I think the specification is plenty
>>> loose enough for that since it says almost nothing about what failures
>>> are permissible.)
>>>
>>>>> When "C" is mapped to an abstract model of computation that can provide
>>>>> an arbitrary number of arbitrary length linked lists, then "C" acquires
>>>>
>>>> In the absence of I/O, the C storage model is finite, and therefore not
>>>> Turing complete. However, perhaps not quite.
>>>>
>>>> I had a discussion about this with someone who convinced me of the
>>>> following: in C we can allocate automatic storage (informally,
>>>> "local variables on the stack") via recursion. If we do this without
>>>> taking the address of anything, then pointers do not appear in
>>>> the program, and thus, in the abstract sense, we don't run into the
>>>> issue of pointers being of finite width. With that restriction, we have
>>>> no random access (the program cannot access material in existing
>>>> activation frames without returning to them). Therefore what we have is
>>>> a push-down automaton (PDA). A PDA is strictly less powerful than a
>>>> Turing machine. It contains an infinity in that it can push down as far
>>>> as it needs to, but there are some computations that can't be carried
>>>> out with a universal push down machine.
>>>
>>> Hmm... Modern C has threads and therefore more than one stack. A PDA
>>> with two stacks is Turing complete so there may be a way after all.
>>> Nice one!
>>
>> Isn't it the case though, that
>>
>> - In any C implementation (or C++) the size of a pointer is fixed
>>
>> - Every distinct object (class/struct/variable/etc...) must have
>> its own unique address (that fits the pointer size)
>> ?
>
> Yes, indeed. But I suspect the argument proposed to Kaz relies on what
> is permitted when no addresses are used. This imaginary implementation
> would collapse in a heap (no pun intended) if any use were ever made of
> an address. The question being, is a C implementation obliged to
> restrict the set of objects if the program takes no note of any pointers
> and could therefore not tell?
>
> Going the other way, it is widely considered acceptable for an
> implementation to give distinct objects, with overlapping lifetimes, the
> same address provided the program's observable behaviour is not
> affected.
>
> I'm not really inclined to "go to court" (either way) over this one!
> The rules are not precise and the question is of no practical or
> theoretical importance. But I think it's fun the think about...
>
>> If so any C implementation can only have finitely many distinct
>> addressable objects, and so can not fully implement a TM tape, and can
>> only in principle run a subset of TM calculations
>>
>> Of course, any C implementation can implement a partial TM emulator,
>> and this might have the capacity to run interesting TM calculations,
>> but it seems to me not all TM calculations.
>>
>> Sure, if we consider a sequence of C implementations with ever larger
>> address pointer sizes, then given any /terminating/ TM calculation,
>> eventually one of them will be capable of running it. But some
>> non-terminating TM calculations cannot run on a finite tape, so we're
>> sort of losing this theoretical battle.
>
> Usually, (I thought) the relevant definitions are in terms of
> terminating computations. Failing on the ones that don't terminate is
> no failure of completeness. Mind you, I'd need to re-read someone like
> Martin on this because I don't recall the technical definition.
>
>> Now if we could change the C spec to allow variable length pointer
>> sizes, then we would have a theoretically infinite pool of addressable
>> objects, so suddenly a full TM emulator becomes easy in terms of
>> theoretical program design, but obviously not when it comes to running
>> on a given physical implementation, whose memory will only be
>> finite. (But this probably doesn't matter for PO, who knows?
>> Certainly not PO, who is quite lost in all this...)
>
> That's one way, but it would need quite a few detailed changes in the
> language. Another, rather arbitrary, way would be to allow at least two
> unbounded integers.
>
>> Or if we can use IO facilities then a potentially infinite virtual
>> memory results - e.g. parts of the tape could be backed in a
>> cloud-based system addressed through arbitrarily long URLs. (Ok, maybe
>> URLs have a max length, but we can envisage some new storage device
>> addressed through an IO subsystem not having this restriction.)
>
> Though that steps outside of standard C. The "standard" way would be to
> use tmpfile, fseek(_, +1/-1, SEEK_CUR) and fread and fwrite to get a
> one-ended unbounded tape.
>
>>> Often, things like IO and threads (unless wired into the language
>>> syntax) are informally excluded when asking if language X is Turing
>>> complete, but I don't think the rules of this particular game are clear.
>>>
>>> There is, though, another way to argue about C. Implement your UTM in
>>> our all-too-finite C as, say, compiled by gcc. If a computation fails
>>> for lack of space, just get the gcc people to double the size of all the
>>> pointers and try again. Repeat as necessary, using pointers simulated
>>> in software. Each iteration can conform to the C standard, and since
>>> you only need to be able to run terminating computations to be Turing
>>> complete, one can show that there is (in theory) a C implementation that
>>> can run any given computation.
>>
>> Yes, for a given terminating calculation. But to do this we just
>> declare a big enough array to run the calculation, and run it.
>
> Yes, you can switch larger integer indexes for larger pointers. The two
> views are interchangeable. (Or have I missed the point you are
> making?).
>

If we had pointer names that were handled under-the-covers by a Turing
equivalent process then we could still have the essence of "C" on an
abstract machine with unlimited memory. This is not much more than a
paraphrase of what Kaz already said. The difference is that we switched
the context from the stack to the heap.

I think that I just figured out how to make at least one linked list of
truly unlimited length. It depends on unlimited length integers. To
explain this system I would use 1024-bit addressing that could be scaled
upward at any point in time.

2^1024 (1.797693134862315907729305190789e+308) bytes of RAM
179769313486231590772930519078902473361797697894230657273430081157732675805500963132708477322407536021120113879871393357658789768814416622492847430639474124377767893424865485276302219601246094119453082952085005768838150682342462881473913110540827237163350510684586298239947245938479716304835356329624224137216

Here is the maximum address value in hex:
0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff

Implementing such a system on actual hardware requires some sort of
relative addressing. Using every single atom in the whole universe to
encode one single binary digit each would provide a woefully deficient
amount of RAM for 1024-bit addressing:

...it is estimated that there are between 10^78 and 10^82 atoms in
the known, observable universe...
https://www.universetoday.com/36302/atoms-in-the-universe/#

The key thing about this memory system is that, because it has no
theoretical limit to its scalability, every Turing equivalent machine
tied to this abstract memory system that can be implemented in actual
hardware would be computationally identical to its abstract machine
version until the concrete machine reached its memory limit.

This provides a way to empirically test many theory of computation
problems.
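
A rough sketch, with names invented here, of how memory keyed by
unlimited-length integer addresses could be modelled: the store is a
linked list of (address, value) pairs in which each address is an
arbitrary-length digit string, so nothing in the interface fixes a
pointer width; a concrete machine simply runs out of cells when
physical memory is exhausted.

#include <stdlib.h>
#include <string.h>

typedef struct MemCell {
    char           *address;    /* arbitrary-length digit string */
    unsigned char   value;
    struct MemCell *next;
} MemCell;

static MemCell *store;

static unsigned char mem_read(const char *address)
{
    MemCell *c;
    for (c = store; c; c = c->next)
        if (strcmp(c->address, address) == 0)
            return c->value;
    return 0;                   /* untouched cells read as zero */
}

static void mem_write(const char *address, unsigned char value)
{
    MemCell *c;
    char *a;
    for (c = store; c; c = c->next)
        if (strcmp(c->address, address) == 0) {
            c->value = value;
            return;
        }
    c = malloc(sizeof *c);
    a = malloc(strlen(address) + 1);
    if (!c || !a)
        abort();                /* concrete memory limit reached */
    strcpy(a, address);
    c->address = a;
    c->value   = value;
    c->next    = store;
    store      = c;
}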

Keith Thompson

unread,
Aug 31, 2020, 2:19:09 PM8/31/20
to
olcott <No...@NoWhere.com> writes:
> The end goal of this sequence of posts is to show that the basic "C"
> programming language (without the "C" libraries) can be fully mapped
> to an abstract model of computation that is equivalent to a Turing
> machine in such a way that any Turing complete computation can be
> written in the "C" programming language.
[...]

I encourage anyone in comp.lang.c or comp.lang.c++ who wants to
reply to these posts to take a look at comp.theory, which has been
dominated by these arguments for years.

--
Keith Thompson (The_Other_Keith) Keith.S.T...@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */

Jorgen Grahn

unread,
Aug 31, 2020, 5:35:08 PM8/31/20
to
["Followup-To:" header set to comp.lang.c.]

On Mon, 2020-08-31, Keith Thompson wrote:
> olcott <No...@NoWhere.com> writes:
>> The end goal of this sequence of posts is to show that the basic "C"
>> programming language (without the "C" libraries) can be fully mapped
>> to an abstract model of computation that is equivalent to a Turing
>> machine in such a way that any Turing complete computation can be
>> written in the "C" programming language.
> [...]
>
> I encourage anyone in comp.lang.c or comp.lang.c++ who wants to
> reply to these posts to take a look at comp.theory, which has been
> dominated by these arguments for years.

Ugh. I took a look and now wish I hadn't.

(Not that it seemed toxic or anything; it was just a very depressing
ghost of a newsgroup.)

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Juha Nieminen

unread,
Sep 1, 2020, 5:14:03 AM9/1/20
to
In comp.lang.c++ Jorgen Grahn <grahn...@snipabacken.se> wrote:
> (Not that it seemed toxic or anything; it was just a very depressing
> ghost of a newsgroup.)

Nowadays usenet is mostly dead anyway. This group might be one of the last
dinosaurs still alive and kicking. Expect it to die within a decade, though.

(Not that I consider it a good thing. Just making an observation.)

bol...@nuttyella.co.uk

unread,
Sep 1, 2020, 5:25:07 AM9/1/20
to
On Tue, 1 Sep 2020 09:13:46 +0000 (UTC)
Juha Nieminen <nos...@thanks.invalid> wrote:
>Nowadays usenet is mostly dead anyway. This group might be one of the last
>dinosaurs still alive and kicking. Expect it to die within a decade, though.
>
>(Not that I consider it a good thing. Just making an observation.)

People were saying that 10 years ago. Usenet won't die until either all the
NNTP servers vanish or the people still using it finally die. I suspect the
former will come first as there aren't many now as it is.

Also plenty of the uk.* groups are still well used.

olcott

unread,
Sep 1, 2020, 6:00:37 PM9/1/20
to
On 9/1/2020 3:39 PM, Mike Terry wrote:
> On 31/08/2020 20:46, Ben Bacarisse wrote:
>> Mike Terry <news.dead.p...@darjeeling.plus.com> writes:
>>
>>> On 31/08/2020 02:24, Ben Bacarisse wrote:
>>>> Mike Terry <news.dead.p...@darjeeling.plus.com> writes:
>>>>

>>> My point was just that if PO's intention is simply to run a specific
>>> terminating calculation, then the issue becomes trivial, so no need
>>> for much discussion.  I think that /is/ his intention, because in the
>>> past he claimed to have a fully coded TM that "refuted the usual
>>> diagonal argument used in the TM halting problem proof".  Having
>>> admitted he doesn't have this, I think he now wants to claim he has a
>>> C program that refutes the equivalent diagonal argument for C
>>> programs.  Of course he cannot have this, as the diagonal argument
>>> demonstrates.
>>
>> Ah, you are a great revelation behind the times.  He now has a halt
>> decider!  Not sure what it decides halting for, but he's gone beyond the
>> "one impossible case" from Dec 2018.  (BTW, was it you that pointed out
>> to me then that, despite his claiming to have "exactly Linz's H", he was
>> talking about only one M/M^ related pair and not a halt decider?  If so,
>> thanks.)
>
> (Yes, you're welcome...)
>
> I haven't seen a claim that he has produced a complete halt decider, but
> I've only recently (last few days) come here.  As you know, he's been
> absent from sci.logic for some time so I'm out of date.
>
> What he's said in responses I've read here:
>
> ::  What I have now is the every single detail design of an x86 program
> ::  that
> ::  correctly decides halting for two classes of computations that are
> ::  equivalent to H_hat correctly deciding halting on itself.
>
> and in response to pointing out that this is just a partial halt decider:
>
> ::  When it is shown how H_Hat correctly decides halting on itself
> ::  the wildcard state of H makes H into a complete halt decider, or
> ::  at least one that is no longer shown to be incomplete.
>
> The "or at least" seems to be PO recognising that H is not in fact shown
> to be a complete decider - all it would mean is that one argument
> showing it to be incomplete has been refuted.
>

It empirically shows that every self-referential halting problem
counter-example that "proves" that halting is undecidable has been refuted.

Here is how I summed up your analysis:
Every x86 program that terminates has a Turing Machine equivalent.

It is identical to my intuition from 2018, yet now I have someone that
agrees and a process for proving that it is true rather than merely a
hypothesis.

A "C" program maps to an x86 program that maps to a model of computation
that is provably equivalent to a Turing machine; therefore "C" programs
and x86 programs have equivalent TMs.

The key thing that I wanted to be certain of was that there were zero
cases where a Turing machine derived a different result than a "C"
program or an x86 program on equivalent input.

So now I have an analytical chain-of-inference that proves that all "C"
programs and x86 programs that terminate have equivalent (functionally
identical) Turing machines, thus for all practical purposes are
one-and-the-same thing as an actual Turing machine for specific
computations.

Bottom line this means that my halting problem refutation would apply to
Turing machines.

Öö Tiib

unread,
Sep 1, 2020, 7:34:33 PM9/1/20
to
On Monday, 31 August 2020 21:19:09 UTC+3, Keith Thompson wrote:
> olcott <No...@NoWhere.com> writes:
> > The end goal of this sequence of posts is to show that the basic "C"
> > programming language (without the "C" libraries) can be fully mapped
> > to an abstract model of computation that is equivalent to a Turing
> > machine in such a way that any Turing complete computation can be
> > written in the "C" programming language.
> [...]
>
> I encourage anyone in comp.lang.c or comp.lang.c++ who wants to
> reply to these posts to take a look at comp.theory, which has been
> dominated by these arguments for years.

It is unlikely that anyone wants to. Maybe only Scott Newman will
praise those posts, as he likes to tease insane people.

wyn...@gmail.com

unread,
Sep 2, 2020, 2:56:19 AM9/2/20
to
olcott wrote on Monday, August 31, 2020 at 2:56:13 AM UTC+8:
ANY programming language is already a Turing equivalent language

James Kuyper

unread,
Sep 2, 2020, 9:34:28 AM9/2/20
to
On 9/2/20 2:56 AM, wyn...@gmail.com wrote:
...
> ANY programming language is already a Turing equivalent language

Languages that aren't Turing equivalent do exist, but examples are
pretty rare:
<https://en.wikipedia.org/wiki/Turing_completeness#Non-Turing-complete_languages>

wyn...@gmail.com

unread,
Sep 2, 2020, 10:24:41 AM9/2/20
to
James Kuyper wrote on Wednesday, September 2, 2020 at 9:34:28 PM UTC+8:
I know there are series of FPGA and *PROM programming languages; I just
would not like to type too many of them and reveal my bad English.

bol...@nuttyella.co.uk

unread,
Sep 2, 2020, 11:15:21 AM9/2/20
to
On Wed, 2 Sep 2020 09:34:14 -0400
James Kuyper <james...@alumni.caltech.edu> wrote:
>On 9/2/20 2:56 AM, wyn...@gmail.com wrote:
>....
>> ANY programming lannguage is already Turing equivalent language
>
>Languages that aren't Turing equivalent do exist, but examples are
>pretty rare:
><https://en.wikipedia.org/wiki/Turing_completeness#Non-Turing-complete_languages>

SQL.

olcott

unread,
Sep 2, 2020, 1:08:55 PM9/2/20
to
I now have the means to prove that the x86 language is Turing equivalent
by defining a provably Turing equivalent abstract model of computation
that the x86 language maps to.

The following abstract machine maps the x86 language onto machines whose
pointer size is fixed at the width of the largest integer address
actually needed by the computation.

Many algorithms could be implemented directly as x86/x64 machines
equivalent to Turing machines for the same computation.

Instruction
: INTEGER ":" OPCODE
| INTEGER ":" OPCODE INTEGER
| INTEGER ":" OPCODE INTEGER "," INTEGER

HEXDIGIT [a-fA-F0-9]
INTEGER {HEXDIGIT}+
OPCODE {HEXDIGIT}{4}

Address:OpCode
Address:OpCode Param
Address:OpCode Param, Param
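
A minimal sketch, with names invented here, of a data structure for the
instruction format above (Address:OpCode with zero, one, or two integer
parameters); the address and parameters are kept as arbitrary-length hex
strings so that nothing in the representation fixes their width.

typedef struct Instruction {
    char *address;        /* arbitrary-length hex integer */
    char  opcode[5];      /* exactly four hex digits plus terminator */
    int   param_count;    /* 0, 1, or 2 */
    char *param[2];       /* arbitrary-length hex integers */
} Instruction;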

Every "C" prorgam can be mapped to the above model thus proving that "C"
is Turing Equivalanet.

olcott

unread,
Sep 2, 2020, 1:27:29 PM9/2/20
to
On 9/2/2020 12:22 PM, Kaz Kylheku wrote:
> On 2020-09-02, olcott <No...@NoWhere.com> wrote:
>> This abstract model maps the x86 language to unlimited memory access
>
> Well, the Olcott x86 language, not the Intel x86 language.
>

The Olcott x86 language is a Turing equivalent adaptation of the Intel
x86 language that provides the basis for a proof that all Intel x86
programs mapping to the Olcott x86 language are Turing equivalent
computations.

Jorgen Grahn

unread,
Sep 6, 2020, 10:38:31 AM9/6/20
to
On Mon, 2020-08-31, Keith Thompson wrote:
> olcott <No...@NoWhere.com> writes:
>> The end goal of this sequence of posts is to show that the basic "C"
>> programming language (without the "C" libraries) can be fully mapped
>> to an abstract model of computation that is equivalent to a Turing
>> machine in such a way that any Turing complete computation can be
>> written in the "C" programming language.
> [...]
>
> I encourage anyone in comp.lang.c or comp.lang.c++ who wants to
> reply to these posts to take a look at comp.theory, which has been
> dominated by these arguments for years.

And now, six days later, the normal/crackpot ratio in both comp.lang.c
and comp.lang.c++ has dropped to close to zero. (Not that it was very
high to begin with, especially not in comp.lang.c.)