To check the compiler is doing what’s expected, I’m adding a bunch
of unit tests to ensure correctness. By being systematic I’m hoping to get
genuine coverage. A number of omissions and errors have already been corrected.
There is a compiler refactoring in sight which could make a serious mess,
and these tests will help ensure the refactoring is correct.
The tests are also useful for tutorial purposes since they supposedly cover
all the cases with minimal examples.
The first batch of tests deals exclusively with projections.
The first one I wrote looks like this:
@title Stand Alone Tuple Value Integer (Constant) Projection Tuple domain
@felix
typedef d_t = int * string * double;
var x : d_t = 1,"Hello",42.1;
var p0 = proj 0 of d_t;
var p1 = proj 1 of d_t;
var p2 = proj 2 of d_t;
println$ x . p0;
println$ x . p1;
println$ x . p2;
@expect
1
Hello
42.1
@
This test deals with the CORE case of a stand alone tuple value projection.
Stand alone means the projection is constructed as a first class function
before being applied. The argument of the projection operator “proj” here
must be a simple int literal; typed integer literals are not allowed.
The argument of “proj” must be a constant because the codomain
of the projection varies with the index: above, p0 has codomain int
but p1 has codomain string, so the index must be statically known.
Then we have further tests of a similar nature for
* compact linear tuples
* arrays
* arrays with compact linear base
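Here’s a minimal sketch of the compact linear tuple case, assuming the
stand alone “proj” form and the _strr rendering (shown in a later
example) both apply:
@title Stand Alone Tuple Value Integer (Constant) Projection Compact Linear Tuple domain
@felix
typedef d_t = 2 * 3 * 4;          // compact linear tuple type
var x : d_t = `1:2,`2:3,`3:4;
var p1 = proj 1 of d_t;           // stand alone projection
println$ x . p1 . _strr;
@expect
case 2 of 3
@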
Also we have to check
* structs
* records
because they’re products with projections too; they just use the field name
instead of an integer as the projection index.
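A sketch of both cases; the struct and record syntax here is from memory,
so treat it as illustrative:
@title Inline Struct and Record Field Name Projection
@felix
struct vec { x:int; y:string; }
var s = vec (1, "Hi");
println$ s . x;
println$ s . y;
var r = (a=42, b="Hello");
println$ r . a;
println$ r . b;
@expect
1
Hi
42
Hello
@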
Also singletons and unit tuples need to be handled right and checked!
Next, we have to check array projections:
@title Stand Alone Precise Array value projection
@felix
typedef a_t = int^3;
var x : a_t = 1,2,3;
for i in ..[3] do
var p = aproj i of a_t;
println$ x.p;
done
@expect
1
2
3
@
Unlike tuple projections, array projections accept a *variable* index.
We use “aproj” for array projections. They only apply to arrays, because
only arrays have the same codomain for all indices. Note that at
present, the index expression must be of the precisely correct type;
a plain integer will not do. In the above example the required type is 3.
It’s a bit inconsistent, since tuple projections require an integer literal,
and won’t accept the precisely correct type!
So far, we’ve only dealt with value projections. Recall all projections
in Felix ALSO work for pointers. This is the secret algebraic power that
allows Felix to get rid of references and lvalues.
So we have to repeat all the above tests, but this time using pointers
instead of values. And it’s not that simple, because there are THREE
modes of pointers:
* read/write pointers &x
* read only pointers &<x (called pointer to const in C++)
* write only pointers &>x (not available in C++)
I am thinking of adding read-modify-write pointers &<>x as well.
This works like a read/write pointer EXCEPT that it allows atomic
read/modify/write operations.
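To fix the shape of these tests, here’s a minimal read/write sketch,
assuming the usual Felix store operator <- and deref *, and that
projections apply directly to pointers as described:
@title Inline Tuple Read/Write Pointer Projection
@felix
typedef d_t = int * string;
var x : d_t = 1,"Hello";
var px = &x;              // read/write pointer to x
px . 0 <- 42;             // project the pointer, then store
println$ *(px . 0);       // project the pointer, then read
println$ x . 1;
@expect
42
Hello
@
The read only and write only variants should be the same test, with &<x
allowed only in the read lines and &>x only in the store.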
In addition, for compact linear types there are TWO kinds of pointers!!
The pointer to the top level type is an ordinary machine pointer.
However a projection of that pointer to a compact linear component
yields a special kind of pointer, a compact linear pointer. It uses 3 machine
words. The first is the machine pointer to the outer compact linear object,
and then there is a divisor and modulus for extracting the value.
So we need an additional (batch of) tests in which the machine pointer
is projected to a compact linear pointer which is then projected again
to another compact linear pointer, to obtain coverage.
And of course compact linear pointers have read/write attributes too.
Don’t forget, we need both proj and aproj checks!
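As a sketch of the double projection, hedged heavily since this machinery
is exactly what’s under test:
@title Compact Linear Pointer Projection of a Pointer Projection
@felix
typedef d_t = (2 * 3) * 4;        // compact linear overall
var x : d_t = (`1:2,`2:3),`3:4;
var p = &x . 0;                   // machine pointer -> compact linear pointer
var q = p . 1;                    // compact linear pointer -> compact linear pointer
q <- `0:3;
println$ x . 0 . 1 . _strr;
@expect
case 0 of 3
@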
Are we done?
Not by a LONG SHOT!
Next, we have inline projections:
@title Inline Tuple Value Integer (Constant) Projection Compact Linear Array domain
@felix
typedef d_t = 5 ^ 3;
var x : d_t = `1:5,`2:5,`4:5;
println$ x . 0 . _strr;
println$ x . 1 . _strr;
println$ x . 2 . _strr;
@expect
case 1 of 5
case 2 of 5
case 4 of 5
@
An inline projection is a direct application of the index to the
product. Here we usually allow a plain int as well as the precise type, and
for arrays the index can be an expression. They have to work with pointers too.
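For example, a minimal sketch of the array case, assuming inline int
indices and index expressions work as just described:
@title Inline Array Value Projection with Variable and Expression Index
@felix
var a : int^4 = 1,2,3,4;
var i = 2;
println$ a . i;          // variable int index
println$ a . (i + 1);    // int expression index
@expect
3
4
@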
Done yet?
NOPE!
There are two further major extensions.
First, slices. A slice is a *range* of indices,
so all the stuff above has to be repeated for slices.
A slice is used to extract a subarray, which is a contiguous sequence
of components: it’s just the concatenation of the results of a sequence
of projections.
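I won’t commit to the slice syntax here; as a purely hypothetical sketch
of the semantics only, using nothing but the projections already shown:
@felix
// hypothetical: a slice 1..3 applied to x should yield the
// subarray (x.1, x.2, x.3), i.e. the concatenation of the
// projections 1, 2 and 3
var x : int^5 = 1,2,3,4,5;
println$ x . 1;
println$ x . 2;
println$ x . 3;
@expect
2
3
4
@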
And now for the BIG one! Arrays support a special isomorphism
of the index types which gives higher rank arrays. In particular
consider
(int ^ 3) ^ 2 :>> int ^ ( 2 * 3)
This isomorphism changes an array of length 2 of arrays of length 3
into a matrix, with dimensions 2 * 3. The difference is the matrix
takes a SINGLE index of type 2 * 3, that is, the index is a tuple.
Felix also allows you to coerce that to
int ^ 6
which is a linear array again. In particular, it allows you to write
some rank independent operators. For example you can add two
matrices by linearising them, adding the linear arrays, and then
coercing back to a matrix.
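A rough sketch of that, hedged because I’m writing the componentwise add
as an explicit loop with inline pointer projections rather than any
library routine:
@title Matrix Addition by Linearisation
@felix
var a : (int^3)^2 = ((1,2,3),(4,5,6));
var b : (int^3)^2 = ((10,20,30),(40,50,60));
var la = a :>> int^6;             // linearise
var lb = b :>> int^6;
var lc : int^6;
for i in ..[6] do
  &lc . i <- la . i + lb . i;     // add the linear arrays
done
var c = lc :>> (int^3)^2;         // coerce back to a matrix
println$ c . 1 . 2;
@expect
66
@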
Now the point is, the matrix has a projection with domain 2 * 3.
So we need to support (AND THIS IS NOT DONE YET):
proj (`1:2, `1:3) of int ^ (2*3)
and of course the aproj equivalent.
And we’re STILL not done. The above is of course recursively applicable.
BUT NOW WE NEED TO HANDLE SUM TYPES.
Sums have injections instead of projections. We have three core coproducts:
* sums (dual of tuples)
* variants (dual of structs)
* polymorphic variants (dual of records**)
and of course we have
* repeated sums (dual of arrays)
** Note, polymorphic variants are not quite the dual of records
because records allow scoped labels, whilst polymorphic variants
do not.
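Injections on an anonymous sum look roughly like this; a sketch, assuming
the “case i of T” constructor form and the usual match syntax:
@title Inline Sum Type Injections
@felix
typedef s_t = int + string;
var a = (case 0 of s_t) 42;
var b = (case 1 of s_t) "Hello";
match a with
| case 0 i => println$ i;
| case 1 s => println$ s;
endmatch;
match b with
| case 0 i => println$ i;
| case 1 s => println$ s;
endmatch;
@expect
42
Hello
@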
Also note, Felix has polyrecords, which are records with
“row polymorphism”. There should be a corresponding
“column polymorphism” for polymorphic variants, but there isn’t.
THIS IS A LOT OF TESTS!
Why do we need all this? Well consider:
((1,2,3),(4,5)) : int^3 * int^2
is isomorphic to
(1,2,3) + (4,5) // notation??
in other words
int ^ (3 + 2)
In other words, we first select the left or right array,
and then we have projections with domain 3 and 2 respectively.
We can linearise that to int ^ 5 of course.
And note, 3 + 2 is a sum type. So this is an alternate view of the
original tuple of tuples which, as for the matrix case, allows
a SINGLE index type.
What’s IMPORTANT in this calculus is to observe we can LINEARISE
the base without losing structure, by moving the structure INTO
the index.
Why is that important? Well it’s a FUNDAMENTAL RULE OF CATEGORY THEORY
that functors preserve structure. We can get POLYADIC array handling (and more)
by transforming ANY product composite into a SINGLE product with
a structured index. And then, any functorial operations on the structure
can be performed by:
(a) transform composite structure to index
(b) flatten index
(c) perform operation on linear array
(d) unflatten index
(e) transform back to original structure
The ultimate power of this is enormous. It’s sometimes called RESHAPING.
The critical thing is:
(a) static type checking the reshaping operations
(b) OPTIMISING THE CALCULATIONS
All the calculations can be done exclusively at run time.
Python can do this with a well known extension.
But Felix can do compile time checks on the types,
and it can do the moral equivalent of constant folding
when constants are involved.
Constant folding like this has little use in quantum mechanical calculations
using matrices of double precision floats. But it is MANDATORY for cryptographic
work, signal processing, and other jobs involving heavily structured small scale
data.
—
John Skaller
ska...@internode.on.net