merge

John Skaller2

Aug 9, 2018, 1:12:47 AM
to dypgen, felix google
I am having some trouble understanding why this doesn’t work:

let dyp_merge_Obj_sexpr ol = keep_all ol

let dyphack (ls : ( 'a * string) list) : 'a =
match ls with
| [x,_] -> x
| [] -> failwith "No parse found"
| _ -> failwith "Multiple parses found"
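
For reference, as I understand dypgen's library (an assumption on my part,
not verified against the source), keep_all amounts to the identity on the
list of parse values, so after the merge dyphack should see every surviving
parse. A minimal OCaml sketch of that expectation:

let keep_all l = l (* assumed semantics: keep every parse *)

let () =
  match keep_all [ "parse1"; "parse2" ] with
  | [x] -> print_endline x
  | [] -> print_endline "No parse found"
  | _ -> print_endline "Multiple parses found" (* expected for two parses *)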

In my system, there is really only one constructor: Obj_sexpr.
This is because all non-terminals have an action rule which executes
some Scheme and returns an S-expression.

My system consists of a bootstrap parser which processes files containing
EBNF syntax like this:

private assertion_stmt := "assert" sexpr ";" =># "`(ast_assert ,_sr ,_2)";

The RHS after =># here is a string containing Scheme; the _sr and
_2 are global Scheme variables which are assigned temporarily
by the actual Dypgen Ocaml action code, so the Scheme, when
executed, generates an S-expression.

My system “saves up” these extensions in a data structure and then
the user can “open” them. At this point, the system adds rules
to Dypgen’s automaton dynamically. The whole language syntax
is therefore defined as an extension of the bootstrap parser,
which extends itself by reading grammar rules from the
standard library.

Here’s my problem: given this EBNF:

//$ Tuple formation by cons: right associative.
x[stuple_cons_pri] := x[>stuple_cons_pri] ",," x[stuple_cons_pri] =>#
"`(ast_tuple_cons ,_sr ,_1 ,_3)";

//$ Tuple formation by append: left associative
x[stuple_cons_pri] := x[stuple_cons_pri] "<,,>" x[>stuple_cons_pri] =>#
"`(ast_tuple_snoc ,_sr ,_1 ,_3)";

I expected to get a “Multiple parses found” error on this input:

var q = 22,,x<,,>99;

But, I don't. The code above compiles and runs. It's hard to tell
what is happening, because each of the parses makes a distinct
AST but the semantics are the same for both. The two interpretations are:

(22,,x) <,,> 99 // prepend first, then append
22,, (x <,,> 99) // append first, then prepend

I added debugging to the Scheme so it prints the stuff being reduced:

//$ Tuple formation by cons: right associative.
x[stuple_cons_pri] := x[>stuple_cons_pri] ",," x[stuple_cons_pri] =>#
"""(begin (display "cons ")(display _1)(display ",,") (display _3)(display "\n") `(ast_tuple_cons ,_sr ,_1 ,_3))""";

//$ Tuple formation by append: left associative
x[stuple_cons_pri] := x[stuple_cons_pri] "<,,>" x[>stuple_cons_pri] =>#
"""(begin (display "snoc ")(display _1)(display "<,,>")(display _3)(display "\n")`(ast_tuple_snoc ,_sr ,_1 ,_3))""";

and I get this:

cons (ast_literal 22),,(ast_name x)
cons (ast_literal 22),,(ast_name x)
cons (ast_literal 22),,(ast_name x)
cons (ast_literal 22),,(ast_name x)
snoc (ast_name x)<,,>(ast_literal 99)
cons (ast_literal 22),,(ast_tuple_snoc (ast_name x) (ast_literal 99))
cons (ast_literal 22),,(ast_tuple_snoc (ast_name x) (ast_literal 99))
cons (ast_literal 22),,(ast_tuple_snoc (ast_name x) (ast_literal 99))
cons (ast_literal 22),,(ast_tuple_snoc (ast_name x) (ast_literal 99))
snoc (ast_tuple_cons (ast_literal 22) (ast_name x))<,,>(ast_literal 99)
snoc (ast_tuple_cons (ast_literal 22) (ast_name x))<,,>(ast_literal 99)
snoc (ast_tuple_cons (ast_literal 22) (ast_name x))<,,>(ast_literal 99)
snoc (ast_tuple_cons (ast_literal 22) (ast_name x))<,,>(ast_literal 99)


The duplication seems a bit confusing but what’s happening is that

22,,x

is a legit expression; however, if we parse that as a top level expression
we then hit the <,,> symbol and that parse is screwed. So the top level
reduce has to be discarded in favour of a shift.

Similarly in reverse, when we have the whole actual expression and try
to go further we hit a semicolon “;” and that parse is screwed by overshooting.

I guess this happens internally as well, which explains rather a lot of reductions.
But the point is that BOTH final forms are reduced:

cons (ast_literal 22),,(ast_tuple_snoc (ast_name x) (ast_literal 99))
snoc (ast_tuple_cons (ast_literal 22) (ast_name x))<,,>(ast_literal 99)

I don't have a print of the reduction of the containing non-terminal
but it's reasonable to assume "cons .." generates "ast_tuple_cons"
and "snoc .." generates "ast_tuple_snoc", because that's what
the Scheme code says (i.e. there's no print of the reduced term
in the debugging, only the input to the reduction rule).

So BOTH parses are produced but a merge is discarding one of them!!!

But the generic merge explicitly says to keep all parses.

So it looks like the *extended automaton* is disobeying this rule and
falling back to the default of just picking one. In this case it
doesn't matter semantically, but in other cases it definitely does.

Is there a bug in Dypgen here or is it something in my own code?
Anyone have any clues?


John Skaller
ska...@internode.on.net





John Skaller2

Aug 11, 2018, 7:45:16 PM
to felix google
I’m losing track here! So …

I am developing a POWERFUL generalised algebra that seems obvious, but which
I have not seen implemented anywhere else.

There are several parts to this so I have to present them in pieces.
The problem is managing the combination of all the pieces!

In most programming languages you can index a tuple:

(1, "hello", 42.3).1 // "hello"

This is actually an application of a projection. In Felix you can separate
out the projection:

typedef isd = int * string * double;
var prj = proj 1 of (isd); // stand alone projection function
var x = 1, "hello", 42.3; // tuple
println$ x.prj; // application

The projection is an ordinary function of type

isd -> string

so you can compose projections: if you have a tuple

typedef outer = int * isd * long;
var pouter = proj 1 of (outer);
var pinner = proj 1 of (isd);
var prj = pouter \odot pinner; // reverse composition
var x = 2, (1, "hello", 42.3), 33L;
println$ x.prj; // "hello" again

The fact that projections are ordinary functions and can therefore be
composed by ordinary function composition is basic category theory.
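
To make this concrete, here is a minimal OCaml sketch of the same point;
the names pouter and pinner are illustrative, not Felix internals:

(* projections are ordinary functions, so composition is just
   function composition *)
let pouter (_, isd, _) = isd (* proj 1 of (outer) *)
let pinner (_, s, _) = s (* proj 1 of (isd) *)
let prj x = pinner (pouter x) (* reverse composition, \odot *)

let () =
  let x = (2, (1, "hello", 42.3), 33L) in
  print_endline (prj x) (* prints "hello" *)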

Felix goes another step! You can have projections of *pointers* as well
as values. Given a pointer to a tuple, a projection can be applied
to produce a pointer to a component:

var x = (1, "hello", 42.3);
var pprj = proj 1 of (&isd);
var ps = &x . pprj;
println$ *ps; // "hello"
ps <- "bye"; // store value in component
println$ *ps; // "bye"
println$ x; // (1, "bye", 42.3)

This is fairly easy to understand but the implications are PROFOUND.
The address calculus is purely functional and referentially transparent.
There are no lvalues or references. The only imperative code involved
is the store operator <-. Just to repeat LVALUES AND REFERENCES
ARE ELIMINATED ENTIRELY.

Felix goes another step to keep up with C++: we have read only,
write only and read-write pointers. The normal pointer is read-write,
which is a subtype of both read pointers and write pointers.
The syntax for address taking is:

&x // read-write pointer
&<x // read only pointer, aka pointer to const
&>x // write only pointer

and the type notation follows the pattern (&int, &<int, &>int).

The next complication is compact linear tuples. These work the
same as non-compact linear types: that’s the specification!
However the type of pointers is different:

_pclt<D,C>

is the type of a pointer to a C stored in a compact linear product D.
The representation consists of a pointer to D, a machine word,
together with a divisor and modulus used to obtain the relevant
component by the formula

*p / divisor % modulus

This is a generalisation of the well-known method for obtaining
a sub-bit-string from a bit array, where the divisor and modulus
are powers of 2.

Projection functions are a bit tricky because there are
TWO cases: the argument pointer of the projection
can be either an ordinary pointer or a compact linear pointer.
The result in both cases is a compact linear pointer.
The value composition for compact linear projections is

v / (divisor1 * divisor2) % modulus2

Pointer projections store a divisor and modulus
the same as value projections.
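
Here is an OCaml sketch of the arithmetic; the packing of 2 * 3 * 5
below is an assumed layout chosen just to make the numbers visible:

type clprj = { divisor : int; modulus : int }

let apply p v = v / p.divisor mod p.modulus

(* composition per the formula above: multiply the divisors,
   keep the inner modulus *)
let compose outer inner =
  { divisor = outer.divisor * inner.divisor; modulus = inner.modulus }

let () =
  (* the value (1,2,4) : 2 * 3 * 5 packed as ((1*3)+2)*5 + 4 = 29 *)
  let v = ((1 * 3 + 2) * 5) + 4 in
  let p0 = { divisor = 15; modulus = 2 } in
  let p1 = { divisor = 5; modulus = 3 } in
  let p2 = { divisor = 1; modulus = 5 } in
  Printf.printf "%d %d %d\n" (apply p0 v) (apply p1 v) (apply p2 v);
  (* view 2 * 3 * 5 as 2 * (3 * 5): outer component 1, then inner 0 *)
  let outer1 = { divisor = 1; modulus = 15 } in
  let inner0 = { divisor = 5; modulus = 3 } in
  Printf.printf "%d\n" (apply (compose outer1 inner0) v) (* prints 2 *)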

In the Ocaml code, all the above complex interaction is made harder
because the representation of a tuple type is a list of types,
unless all the types are the same, in which case the type
is an array and we just use the base type and the index count.
In fact, arrays are more general: the index can be any compact
linear type, not just a unit sum, that is, arrays don’t have to be
linear. But we’ll get to that later .. or rather .. we won’t because
that’s where stuff hits the fan.

Now back again. For arrays, there is a second kind of projection,
called an array projection. Unlike tuple projections, array
projections allow the projection index to be an expression,
not just a constant. This can work because all the components
have the same type.
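
In OCaml terms the idea is just this (a sketch, not the Felix
implementation):

(* an array projection takes a runtime index; that's fine because
   every component has the same type *)
let aproj (i : int) : 'a array -> 'a = fun a -> a.(i)

let () = Printf.printf "%d\n" (aproj 2 [| 10; 20; 30; 40 |]) (* prints 30 *)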

We have to make array projections work for both values
and pointers, and for both ordinary types and compact linear
types. The complexity is around cubic now!

But we’re not done yet!

Category theory has a concept called duality. Products have
a dual type called a coproduct or sum type. Instead of
projections, we have injections. These are ordinary functions
too, just as projections are.

Coproducts work backwards from products in the following sense:
a product constructor produces a set of projections, the projections
are destructors (analysers, eliminators, extractors). There can be
many kinds of constructors for products: tuples are one kind,
but in Felix we also have records (with named projections instead
of numbered ones), and structs. The important thing is that the
projections for any product obey laws: the projections *characterise*
the product.

For coproducts there are many kinds of constructors too,
and there are laws, so now the injections characterise the
coproduct. We usually use pattern matching for destructors.

We can have compact linear sums as well as compact
linear products.

You think it's getting messy .. we're nowhere near done yet!

As with products, where products of the same type repeatedly
are arrays, we have repeated sums, which, naturally,
should be called coarrays! For example unitsums are coarrays!
Here is another:

4 *+ int = int + int + int + int

The operator *+ is used for a repeated sum, just as ^ is used for
a repeated product. Technically it is isomorphic to a product,
however

4 * int

is a pair of an index and an integer, whereas 4 *+ int is a coarray.
Pity about the notation: the fact is the universal representation
of a coproduct is precisely an index and a pointer
to a heap allocated value of the type, so it really is a pair,
but there are optimisations so the representation isn't uniform.
So we need another type constructor.

The thing about coarrays is the same as arrays. You can
index the injections used to create a sum type with
an expression, whereas for an ordinary sum the index
has to be a constant.

Getting messy? We’re not done yet! Don’t forget
compact linear types!

But there is much worse to come!

It is possible to use an expression to index a tuple!
You may wonder, how can we do that, if the types of
the components are different???

Here is the type of a *generalised projection*:

3 -> int * string * double -> int + string + double

We have unified the types in the tuple by just adding them
together. There is another way to look at this, it is
NOT equal but it is isomorphic: any tuple IS an array:

(int + string + double) ^ 3

and now an ordinary projection on that is isomorphic to
a generalised projection on a tuple.

What makes arrays special is that we can *smash* the index
of the generalised projection away:

3 -> int * int * int -> int + int + int |- smash -> int

Smash throws away the information about which component was accessed;
that information is otherwise required so you can do a pattern match,
which is what lets the constructor arguments have different types.

The point is that with smash, which is a generic operator,
WE NO LONGER NEED TUPLE PROJECTIONS.

Since ALL projections can be dynamically indexed, we only need
generalised projections: to get back to an ordinary projection
you can just apply smash if the result is a coarray, or use a pattern
match for each case otherwise. In other words, we can define
a standard constant projection in terms of an array projection,
even for tuples.
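
Here is an OCaml sketch of a generalised projection and of smash;
the variant names are made up for illustration:

(* generalised projection: a runtime index into int * string * float
   yields a value of the sum int + string + float *)
type isd_sum = I of int | S of string | D of float

let gproj i ((a, b, c) : int * string * float) : isd_sum =
  match i with
  | 0 -> I a
  | 1 -> S b
  | 2 -> D c
  | _ -> invalid_arg "gproj"

(* for int * int * int the result is int + int + int; smash forgets
   which injection was used, legal only because all summands agree *)
type iii_sum = I0 of int | I1 of int | I2 of int

let gproj3 i ((a, b, c) : int * int * int) : iii_sum =
  match i with
  | 0 -> I0 a
  | 1 -> I1 b
  | 2 -> I2 c
  | _ -> invalid_arg "gproj3"

let smash = function I0 x | I1 x | I2 x -> x

let () =
  (match gproj 1 (1, "hello", 42.3) with
   | S s -> print_endline s (* prints "hello" *)
   | _ -> ());
  Printf.printf "%d\n" (smash (gproj3 1 (10, 20, 30))) (* prints 20 *)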

Unfortunately things are getting really complicated now.
The problem is, although we can fully generalise the computations
to a very small number of operations, they will run dead slow.
For example you do NOT want this:

(1, "hello", 42.3).1

to actually construct a tuple, then construct a projection,
then apply the projection! NO! You actually want to reduce
the expression to just

"hello"

immediately. So actually we DO need the compiler to handle a
lot of special cases, because the generalised computations
aren’t fast enough.

Don’t forget that when we have generalised projections
we have to have generalised injections too!

BUT WE ARE NOT EVEN CLOSE TO BEING FINISHED YET.

So far we have assumed the index of an array is a unit sum.
Ditto, coarrays.

What programmer hasn’t heard of a thing called a matrix?
A matrix is NOT an array of arrays:

double ^ 3 ^ 4 // array of arrays

No way! A matrix is a two dimensional array:

double ^ (4 * 3) // matrix

you access a matrix like this:

x. (i,j)

Notice the index is a tuple. Of course there are isomorphisms here
but that’s the point.

And OF COURSE matrix is only an example.

THE INDEX TYPE OF AN ARRAY CAN BE ANY COMPACT LINEAR TYPE.

Actually an array is just a function, so we can go even further, and that's
another complication: Felix treats arrays as objects to which
we apply projection functions, but there is another view, that an array
is a function applied to an index value.

And of course don’t forget that the repeat count of a coarray also
can be any compact linear type.

And again, for an array, the base type can be compact linear and itself
an array, and the index type could also be an array! In fact:

double ^ ( 4 ^ 2 )

is of course just a square matrix with both dimensions 4.
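
Going back to double ^ (4 * 3), an OCaml sketch with the compact
linear index flattened by hand (row-major layout is an assumption here):

(* the matrix is stored linearly; the index (i, j) of compact linear
   type 4 * 3 maps to the integer i * 3 + j *)
let get (m : float array) (i, j) = m.(i * 3 + j)

let () =
  let m = Array.init 12 float_of_int in
  Printf.printf "%g\n" (get m (2, 1)) (* element (2,1) = 7 *)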

You can see why it is absolutely essential to have a compact
calculus to cover all the intricate combinations, and then
we can do special cases knowing that everything will work
even if it is slow.

The problem here isn't quadratic, cubic, or even quintic.
I have no idea of the order of the problem, but it feels like
there are at least 8 bifurcations, so the problem is, so to speak, octonionic.
2^8 of course is 256. That's a LOT of cases to write out
in the compiler!!!!!




John Skaller
ska...@internode.on.net





John Skaller2

Aug 12, 2018, 4:19:14 PM
to felix google

Ah. Finally constructed a breaking example. Uggh. Polymorphism!

The problem is like this: consider a function:

fun f[A,B] (p: &(A * B)): &B => p.1;

This is just a wrapper for a pointer projection.
The problem is does NOT work if A,B are compact linear,
because then the return type is

_pclt<A* B,B>

and NOT &B. In other words, the type of p.1 depends on
whether p points at a compact linear type or not. If A, B
are both compact linear, then so is A * B.

One solution is to use compact linear pointers EVERYWHERE.
In particular we can define:

&T = _pclt<T,T>

in other words the container for the target is the target.
After monomorphisation, the usual 3 word pointer can
be optimised to a normal pointer.

To say this another way, pointers would now, in general,
be *interior* pointers, meaning the general case requires
specifying BOTH the container type and the interior type targeted.

Another solution is to use generalised pointers everywhere.
That’s just a pair of get and set methods.






John Skaller
ska...@internode.on.net





John Skaller2

Aug 12, 2018, 5:17:31 PM
to felix google


> On 13 Aug 2018, at 06:19, John Skaller2 <ska...@internode.on.net> wrote:
>
>
> Ah. Finally constructed a breaking example. Uggh. Polymorphism!
>
> The problem is like this: consider a function:
>
> fun f[A,B] (p: &(A * B)): &B => p.1;
>
> This is just a wrapper for a pointer projection.
> The problem is that it does NOT work if A and B are compact linear,
> because then the return type is
>
> _pclt<A* B,B>
>
> and NOT &B.

Other solutions would be, say, to use type classes with a virtual type
for the result of the projection.

The real problem here is that it seems the type system is unsound
because I think it *assumes* that the type of the projection from
&(A * B) is &B. Note, non-pointer projections are OK.

Another solution is to say:

A \* B

to mean a packed product (and use \+ for a packed sum and maybe
even A \^ B for a packed exponential (array)). Perhaps equivalently

packed[A * B]

would mean the same, that would allow an “unpack” operator
to remove the “packed”, but this would be an actual run time
operation extracting the components and splitting them into
addressable stores. To make this work we simply disallow
pointer projections into packs. Pointer projections outside packs, or to
packs, are OK, and pointer projections inside packs are also OK.
Migrating the boundary is also OK, it just has to be explicit.

Note that the monomorphic calculus generates code for all
these cases already. The monomorphic type system is sound
(at least in this respect .. at least I think it is).

The way to approach this problem has to be to *generalise it*.
One perhaps bad example where a container/interior pointer
model would be useful is actually arrays. For example the interior
pointer would be a pointer to the whole array plus an index
and the type would also support incrementing the index with
a bounds check. This would be quite useful for varrays,
where at present you have to call

va.stl_begin

to get a machine pointer to the start of the array you can increment,
and then stuff is unsafe because you can go off the end.

Another use might be a file handle plus an offset into the file.
Fixing the type system is probably easy, though I'm not sure;
the problem is finding the right model. In particular once
monomorphised we have to generate efficient (optimised)
code as we do now. It's not clear this would be possible
if for example an abstract pointer were used:

(get: 1->T, set: T->0)

since the record fields are likely to be closures.
The idea of that model is that it depends only on T
so everything works. We use functional abstraction
(of the closures) to hide other details.
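
For the record, here is a sketch of that abstract pointer model in
OCaml; the names are illustrative, not Felix's actual representation:

(* an abstract pointer is a pair of get and set closures; it depends
   only on the target type, which is why it sidesteps the container
   type, at the cost of closure calls *)
type 'a aptr = { get : unit -> 'a; set : 'a -> unit }

let of_ref (r : 'a ref) : 'a aptr =
  { get = (fun () -> !r); set = (fun v -> r := v) }

(* the same interface over a component of a compact linear word,
   in the style of _pclt<D,C>, using the divisor/modulus scheme *)
let clt_ptr (word : int ref) ~divisor ~modulus : int aptr =
  { get = (fun () -> !word / divisor mod modulus);
    set = (fun v ->
      let old = !word / divisor mod modulus in
      word := !word + (v - old) * divisor) }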

Hmmmm..


John Skaller
ska...@internode.on.net





John Skaller2

Aug 12, 2018, 11:37:23 PM
to felix google
Well this is interesting indeed. The bad Ocaml code has been fixed, so now
a type variable is considered to be a compact linear type.

The idea is that a compact linear pointer _pclt<T,T> will degrade during
monomorphisation to an ordinary pointer &T. In other words &T is a “subtype"
of _pclt<D,C>.

The degradation should ensure optimal code generation.

The thing is, I didn't implement the degrade! This would mean all pointers
to products would be compact linear pointers, which require TWO type variables,
not one.

I expected everything to break!

It didn’t!

I had to remove some hacks that allowed math on unitsums. They weren't any good
anyhow. I also changed this:

fun unsafe_get (var a: array[t, n], j: size): t => a . (j :>> n);

because the compiler barfed trying to do the coercion. I first tried this:

fun unsafe_get (var a: array[t, n], j: size): t => a . (j % len a);

but that led to an infinite loop. Type class stuff. So I hacked it to this,
which is what used to be used before compact linear types came along:

fun unsafe_get: array[t, n] * size -> t = "$1.data[$2]";

This won’t work on compact linear types of course! Surprisingly with that hack
only 3 tests failed, and Felix also built!

The array stuff in src/packages/array.fdoc is a right royal mess.
It tries to handle various cases, like fudging array indices to integers
when in reality an array of type int ^ 6 for example requires the index
to be of type 6. No bounds check is required if you use a precisely correct
index, instead, the check would occur coercing an integer TO the correct
type. That’s not required if you map an array, or use an iterator over
values of type 6.

So .. those classes need to be fixed, or possibly removed.
The compiler knows what a fixed length array is and should
support the operations on it intrinsically (we don’t need any classes
thank you). The reason for generalising to classes is that functions
like map and fold can be defined once for more than one kind of array,
eg for varrays and darrays as well. However the abstraction I have
doesn't work well. To process a varray you have to use "stl_begin",
which returns a C pointer which can be incremented .. including
incrementing right off the end. No good.

So I’m inclined at the moment to FORCE the correct Ocaml code
through, that is, to FORCE products involving type variables
to be considered as compact linear, and then let them degrade
to non-compact linear after the type variables are removed.

Although I didn’t implement any degrading code .. what I think happens
is that the test on a type after monomorphisation will be the same as
before because the only difference was that a type variable was considered
compact linear, and after monomorphisation there are none.



John Skaller
ska...@internode.on.net





John Skaller2

Aug 14, 2018, 2:59:22 AM
to felix google
Ok so, I just implemented array *value* projections: a simple test case
appears to work:

begin
var arr = `0:5,`1:5,`2:5,`3:5;
var index =`2:4;
var arrprj = aproj index of 5^4;
var result = arr.arrprj;
println$ result :>> int;// should be `2:5
end

// array projection of non-compact linear array
begin
var arr = 0, 1, 2, 3;
var index =`2:4;
println$ "Index value is " + index._strr;
var arrprj = aproj index of int^4;
var result = arr.arrprj; // should be 2
println$ result;
end

What’s important here is that `aproj` creates a projection
with a *dynamic* index, that is, the index is computed at
run time. Previously I implemented an exhaustive collection of
routines that handle constant projections. So more generally:

begin
println$ "loop ..";
var arr = `0:5,`1:5,`2:5,`3:5;
for i in 0..<4 do
var index = i :>> 4;
var arrprj = aproj index of 5^4;
var result = arr.arrprj;
println$ result :>> int; // prints 0 1 2 3 in turn
done
end

works! There’s no way to loop through a unitsum type at the moment
so I use an integer and a coercion. This has to be fixed, but I will leave that
until a bit later.

The code that does this work should ALSO work on an array with ANY
compact linear type as an index. This is very important!

The reason it should work is actually trivial: compact linear types
are just integers. And arrays with compact linear types for indices
such as the matrix:

var m : double ^ (10 * 10);

are actually linear arrays underneath. So if we coerce an integer
in the range 0..99 to 10 * 10, and index a matrix with it,
the code just coerces the pair of type 10 * 10 to an integer
to access the matrix linearly. In particular this means you can
loop through a matrix with a SINGLE loop whereas with an
array of arrays, you have to use two indices and thus two loops.

Therefore we have rank independence, that is, we have polyadic
array programming! At the moment Felix requires you to use nasty
type coercions to apply the isomorphisms. Again, this is a “will
fix later” issue. The isomorphisms are generally associators
(in other words they just move parentheses around to reshape
the structure).
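
A sketch of what rank independence buys, in OCaml (row-major layout
assumed):

(* double ^ (10 * 10) is linear underneath, so a single loop over the
   compact linear index 0..99 visits every element; the (i,j) view is
   just the isomorphism k = i * 10 + j *)
let sum (m : float array) =
  let s = ref 0.0 in
  for k = 0 to 99 do s := !s +. m.(k) done; (* one loop, not two *)
  !s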

My next step is to get array projections to work with pointers.
The machinery is much the same except there are two kinds
of pointers to deal with.

Now, the next step is to fix the really hacked-up code in the
library that handles arrays to use projections. This is the main
reason to do the work. The projections are *compiler supported*
and so using them in the library should work.

The critical thing is polymorphism: library functions are polymorphic.
The array algebra, and in general pointer algebra, cannot work
polymorphically UNLESS the underlying projections do.

To make this work, a projection on a product (or array)
MUST USE A COMPACT LINEAR FORM FOR POINTERS
since we don’t know if a product of two types U and V is
compact linear or not. After monomorphisation, the compact
linear form must reduce to a non-compact linear form if
possible otherwise we don’t have any machine pointers.

So the key design principle is that a pointer into an array
has two types associated with it: the whole array container type
AND the type of the array element. If the result will work
with a machine pointer (eg the element type isn’t compact linear)
then the type degrades to an ordinary pointer. For example
with a pointer into an array of double, the elements are addressable
so we could degrade the representation to a plain old pointer
to the element.

A key thing here is that because the index used for an array
is compact linear, since the array has a fixed length defined
by a compact linear type, no bounds checks are ever needed.
If you want to use a plain old int as an index, you have to coerce
it first, and a bounds check can be done then. The point is in
a loop with a synthesised compact linear type you don’t need
a bounds check.

I am hopeful the model can be extended to work with varrays.
For example, instead of the current crap, where you have to
get a C pointer to scan a varray, we can use a pair consisting
of a pointer to the whole array (a varray actually IS a pointer!)
plus an integer index. An arbitrary access requires a bounds
check. However again, a generated scan shouldn’t.

The core idea here is that bounds checks only occur when
you coerce a type to a subtype, e.g. an integer to a subrange
of integers.
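
The principle is easy to sketch in OCaml (an illustrative module, not
Felix library code): the check lives in the coercion that constructs
the index, not in the access.

module Idx : sig
  type t = private int
  val of_int : bound:int -> int -> t (* the only place a check occurs *)
end = struct
  type t = int
  let of_int ~bound i =
    if i < 0 || i >= bound then invalid_arg "index out of range" else i
end

(* access with an already-checked index needs no further check *)
let get (a : 'a array) (i : Idx.t) = Array.unsafe_get a (i :> int)

let () =
  let a = [| 10; 20; 30 |] in
  Printf.printf "%d\n" (get a (Idx.of_int ~bound:3 2)) (* prints 30 *)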





John Skaller
ska...@internode.on.net





John Skaller2

Aug 15, 2018, 8:33:50 AM
to felix google


> On 14 Aug 2018, at 16:59, John Skaller2 <ska...@internode.on.net> wrote:

OK! Now I think, with some hackery, all the projections work.
Some fixes to the array handling in the library, but here is the crunch:

This code is correct:

//$ Pointer to array as value.
instance[t,n] ArrayValue[&array[t,n], &t] {
fun len (x:&array[t, n]): size => Typing::arrayindexcount[n];
/* won't work for compact linear types! */
//fun unsafe_get: &array[t, n] * size -> &t = "(&($1->data[$2]))";
fun unsafe_get (var a: &array[t, n], j: size) => a.(aproj (j :>> n) of (&(t^n)));
}

That is, the uncommented function there is correct.

class ArrayValue[t,v] {

virtual fun unsafe_get: t * size -> v;
}

This is fine. But the instance is not. The return type of unsafe_get above
is actually

_pclt<t^n,t>

and not

&t

The reason is that, if t is compact linear, that has to be the type,
and since t is a type variable we HAVE to assume it is compact linear.

What we really want to do is use the commented out formula is t is not
compact linear otherwise the uncommented out one if it is, but there
is no way to say that. I have been thinking to introduce a new kind

CLT // a compact linear type

which is a subkind of TYPE. That way we can have TWO instances,
one for the compact linear case and one for the ordinary case. During monomorphisation
type classes are instantiated and at this time we actually *know* for a particular
substitution of t which instance to choose. Whether I can actually get this
to work is another story, but the information should be there.

In the meantime, I seek another solution that may be a less dramatic
change. I'm not sure there is one. I had to do some hackery already,
for example

instance[t,n] ArrayValue[array[t,n], t] {
fun len (x:array[t, n]): size => Typing::arrayindexcount[n];
fun unsafe_get (var a: array[t, n], j: size): t => a . (j :>> n);
}

That code failed to build, because the coercion to a type variable
can’t be justified. I had to allow it, otherwise the function can’t work.
At best I have to make sure the monomorphised coercion
is actually *checked* because the coercion to a type variable can’t be.

The problem now is we have to allow some cases to degrade back
to ordinary pointers, and worse, I need to be sure the container type
and target type of a compact linear pointer type are always available
when required.

When should a compact linear pointer degrade to an ordinary one?

Well obviously we can degrade

_pclt<int^5, int> --> &int

because int is not a compact linear type. Another question is,
how do we *promote* a normal pointer to a compact linear pointer?

Clearly:

&int --> _pclt<int^1, int> == _pclt<int,int>

I think more generally:

&T = _pclt<T,T>

for all T. The argument is that if we already have an actual machine
pointer to T, then the whole container into which we point is of type T,
and of course the component of the 1-ary product T^1 = T we want is
just the first (and only) component of the array, of type T.

This formula says the transformation is an embedding, that is,
&T is a subtype of _pclt<X,T>.


So let's see, exactly what is the degradation rule?

1. _pclt<T,T> degrades to &T because we already have a machine
pointer to a T (the first argument) and the component type is the whole
type, so the pointer to the component must BE that machine pointer.
OTHERWISE

2. _pclt<D,C>: if D is compact linear, then, since D != C due to the
otherwise above, it doesn't degrade, because a pointer
into the interior of a compact linear value isn't addressable.
OTHERWISE

3. Since D isn’t compact linear, a subcomponent C of it
must be addressable even if C is itself compact linear.
For example, take a tuple D = int * 3 * long with C = 3;
then C is compact linear. Can we have a pointer
with

D = int * (3 * 4) * long
C = 3

?? No we cannot. A pointer with C = 3 here would have to be

_pclt<3 * 4, 3>

In other words: D must be a product and C must be one of its
components, 3 is not a component of the D above.
Remove the parens:

D = int * 3 * 4 * long

then _pclt<D,3> = &3, an ordinary pointer.

The degradation rule can be implemented directly in the
type constructor, in one place, so this is a fully localised change!
Yay!
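
Here is a sketch of that localisation in OCaml; the type
representation is made up for illustration and is not the Felix
compiler's:

type typ =
  | Ptr of typ (* &T : ordinary machine pointer *)
  | Pclt of typ * typ (* _pclt<D,C> : compact linear pointer *)
  | Prod of typ list
  | Unitsum of int
  | Prim of string (* int, long, ... *)

(* stub: unitsums, and products of compact linear types, are
   compact linear *)
let rec compact_linear = function
  | Unitsum _ -> true
  | Prod ts -> List.for_all compact_linear ts
  | Prim _ | Ptr _ | Pclt _ -> false

(* the smart constructor applies rules 1..3 above *)
let mk_pclt d c =
  if d = c then Ptr c (* rule 1: _pclt<T,T> degrades to &T *)
  else if compact_linear d then Pclt (d, c) (* rule 2: no degrade *)
  else Ptr c (* rule 3: C is an addressable component of D *)

let _ = mk_pclt (Prod [Prim "int"; Unitsum 3; Prim "long"]) (Unitsum 3)
(* = Ptr (Unitsum 3): rule 3, the D = int * 3 * long example above *)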

The subtyping rule is trickier. It's simple to put in the unification engine,
but this doesn't actually affect the conversion. Have to think about that.

So now there is one more thing to note, and it is not so pleasant:

instance[t,n] ArrayValue[&array[t,n], &t] {

This is nasty. We need this instead:

instance[t,n] ArrayValue[&array[t,n], _pclt<array[t,n],t>] {

because remember that is what

fun unsafe_get (var a: &array[t, n], j: size) => a.(aproj (j :>> n) of (&(t^n)));

actually returns (prior to monomorphisation). But there is worse, we also need
the ARGUMENT to be a compact linear type:

fun unsafe_get[D] (var a: _pclt<D,array[t, n]>, j: size) => a.(aproj (j :>> n) of (&(t^n)));

because of this type:

D = 3 * 2 ^4

which is compact linear. Otherwise we cannot do an unsafe_get on a pointer
to the second component, because that pointer is a compact linear pointer,
not an ordinary one.

In fact, this case *subsumes* the ordinary pointer case because it degrades
to it, so we can in fact drop the original instance entirely.

My worry is that just about ALL polymorphic code where we have &T will
have to be changed to _pclt<D,T> which also means adding a D type
parameter to a lot of functions…..



John Skaller
ska...@internode.on.net





John Skaller2

Aug 15, 2018, 11:05:36 AM
to felix google
BUT there may be another case: D is NOT a structural product.
For example:

_pclt<varray[T],T>

or just

_pclt<somecontainer, T>

In this case, we might think about degrading to some type given by a
type class. For example (get: 1->T, set:T->0), an abstract pointer.
Hmmm.


John Skaller
ska...@internode.on.net




