kr support


Aaron Gottlieb

Aug 9, 2023, 5:51:46 PM
to Kona Users
Is/was there any support for \kr?

As in:

\kr file

The script file.k in the current OS directory is converted into a runtime program file.kr. Such programs are compiled and encoded K code, and may be loaded into both the developer and runtime versions of K (see Load). Note that unencoded scripts of the form file.k may not be loaded into runtime K.

from the K3 spec

I feel like the startup time of some of my scripts is problematic and was wondering if there was support for compiled k

Kevin Lawler

Aug 9, 2023, 6:30:50 PM
to kona...@googlegroups.com
no. kona has a kind of AST (but not a bytecode), so it could be done.
but read on.

kerf1 has an intermediate bytecode and supports a compiled
representation. https://github.com/kevinlawler/kerf1

> I feel like the startup time of some of my scripts is problematic and was wondering if there was support for compiled k

the thing is, the interpretation/compilation time of array langs tends
to be negligible in kind of a strong way, for several reasons: the
syntax is simple to parse, the bytecode tends to be simple also, and
the code itself is short. so you're really not going to see any
kind of appreciable speedup, to the point that i'd suggest
double-checking whether this really is your bottleneck. so kona didn't
implement this because it doesn't really do anything.

my sense was that \kr was largely for show. or perhaps for obscuring source.

if anyone has counterarguments or clarifications i'd love to hear

there may be more that was written on this in the past but i don't
have pointers atm.
> --
> You received this message because you are subscribed to the Google Groups "Kona Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to kona-user+...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/kona-user/356e780f-c869-4c43-bdd2-c6f2c1864691n%40googlegroups.com.

Kevin Lawler

Aug 9, 2023, 6:36:23 PM
to kona...@googlegroups.com
> and because the bytecode tends to be simple also

a representative example would be that a single bytecode maps to a
verb, and verbs aren't "compiled" (except as C/C++ functions), so that
verb running on a single large vector basically dominates the time of
everything else
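here's a rough Python sketch of that shape, just to illustrate why
per-node dispatch cost gets amortized over a whole vector. the node
layout is made up and has nothing to do with kona's actual AST:

```python
# Toy AST-walking interpreter in the array-language style: each "verb"
# node costs one dispatch, but the verb itself runs across an entire
# vector, so dispatch overhead is amortized over N elements.
# (Hypothetical node shapes, purely for illustration.)

def eval_node(node):
    op = node[0]
    if op == "lit":        # literal vector
        return node[1]
    if op == "+":          # dyadic verb: one dispatch, N additions
        return [a + b for a, b in zip(eval_node(node[1]), eval_node(node[2]))]
    raise ValueError(f"unknown op: {op}")

n = 1_000_000
ast = ("+", ("lit", list(range(n))), ("lit", list(range(n))))
result = eval_node(ast)    # three dispatches total for 10^6 additions
print(result[:3])          # → [0, 2, 4]
```

three dispatches for a million additions: whether those dispatches come
from an AST walk or a bytecode loop barely registers next to the vector
work itself.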

tavmem

Aug 9, 2023, 9:59:24 PM
to Kona Users
> but i don't have pointers atm.

What does "pointers atm" refer to?

Bakul Shah

Aug 9, 2023, 11:54:41 PM
to kona...@googlegroups.com
I think Kevin means he doesn't have references to old discussions on this topic at the moment.

One argument *for* bytecode is that its interpreter will likely fit in L1 cache, as it will be stripped of the compiler bits.

Kona is much slower than the old k3, and my guess is that it would match k3's speed with a bytecode interpreter.
As an example:

\t do[1000000;{x+y}[1;2]]

takes 110ms on k3 but 1835ms on kona.

\t 1000000{x+0}/1

takes 69ms on k3 but 1632ms on kona.

\t do[1000000;{x+0}/1]

takes 186ms on k3 and... dies silently on kona.
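For a sense of what such a loop actually stresses, here is a rough
Python analogue of the first timing. The lambda and loop are stand-ins,
not anything from kona, and absolute numbers are machine-dependent:

```python
import time

# Rough Python analogue of `\t do[1000000;{x+y}[1;2]]`: a million
# scalar function calls. What gets measured is per-call interpreter
# overhead, not any vector work, which is why this kind of benchmark
# is hard on array-language execution paths.
f = lambda x, y: x + y

t0 = time.perf_counter()
for _ in range(1_000_000):
    f(1, 2)
elapsed_ms = (time.perf_counter() - t0) * 1000.0
print(f"{elapsed_ms:.0f}ms for 10^6 scalar calls")
```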

Kevin Lawler

Aug 10, 2023, 9:41:41 AM
to kona...@googlegroups.com
> One argument *for* bytecode is its interpreter will likely fit in L1 cache...

this has been discussed at various times over the years. i think most
usefully at length on hacker news within the last six years or so. the
consensus there, which i agree with, was that executable code fitting
in the cache doesn't really do anything. this seemed to be a bit of
marketing from kx. i was long suspicious of this claim, but this area
is not one where i have strong enough expertise to make
pronouncements. i defer to that discussion. my memory was this was
just something said to further justify golfing the source.

> timings

i don't dispute the timings here. i will add two things. one, these
are fairly pathological if you look at them (perhaps necessary to
illustrate the case). two, i don't remember how badly bytecode
affected these cases, but i will say that the result could just as
likely be due to something extraneous in the kona execution path.
bytecode is not necessarily the culprit. kona caches the ASTs if i
remember correctly.

> Kona is much slower than the old k3

perhaps you meant in the context of bytecode interpreting. i'll
comment on the overall case to say, not really, since it provides some
elucidating commentary.

this is particularly not true when you mod out the fact that k3
doesn't support 64-bit integers and this was a design criterion in
kona. most of the verbs perform basically exactly the same, some
better, and the language, when not doing "\t 10^6", is basically
compositions of verbs. If "\t 10^6" is the criterion, the execution
path in k3/k* (and by extension kona) is terrible. It is also fairly
bad in moving to lambda+adverbs for similar reasons. there was a
project to jit kx which was abandoned (and I would strongly guess
never successfully revisited) because the gains just aren't worth it
for the source bloat, for that given project. if you had a premier
open-source project you could do such things sensibly, but otherwise
optimizing "a+=1" and other imperative-style state manipulation is a
huge lift that doesn't buy you anything practical in an array
language. I forget when/where the k line had tail recursion, but
poking around in that *kind* of area is where you will find one of k's
weak points: it incorporates hardly any of what might be called
compiler logic.

anyway, i should know better at this point than to argue with the
legend of k. there are lots of things you could push forward, but most
of the newcomer interest, of what there is, is of a historical or
backward looking flavor.

Kevin Lawler

Aug 10, 2023, 9:48:03 AM
to kona...@googlegroups.com
> i think most usefully at length on hacker news within the last six years or so.

the "L1 cache" incident may also have been discussed on the APL Farm
discord server. i forget how much took place where. It at any rate is
a useful resource for this and other topics:
https://discord.gg/G24v5FWh6z

Aaron Gottlieb

Aug 10, 2023, 1:05:09 PM
to Kona Users
Thank you all for the responses and information

Bakul Shah

Aug 10, 2023, 2:39:41 PM
to kona...@googlegroups.com
On Aug 10, 2023, at 6:41 AM, Kevin Lawler <kevin....@gmail.com> wrote:
>
>> One argument *for* bytecode is its interpreter will likely fit in L1 cache...
>
> this has been discussed at various times over the years. i think most
> usefully at length on hacker news within the last six years or so. the
> consensus there, which i agree with, was that executable code fitting
> in the cache doesn't really do anything. this seemed to be a bit of
> marketing from kx. i was long suspicious of this claim, but this area
> is not one where i have strong enough expertise to make
> pronouncements. i defer to that discussion. my memory was this was
> just something said to further justify golfing the source.

I just asked Nick on the APL Discord about what ngn/k does, as its
speed is as good as or better than k3's. He responded that it compiles to bytecode.

>> timings
>
> i don't dispute the timings here. i will add two things. one, these
> are fairly pathological if you look at them (perhaps necessary to
> illustrate the case). two, i don't remember how badly bytecode
> affected these cases, but i will say that the result could just as
> likely be due to something extraneous in the kona execution path.
> bytecode is not necessarily the culprit. kona caches the ASTs if i
> remember correctly.

They were just some random examples. It is worse pretty much across
everything.

>> Kona is much slower than the old k3
>
> perhaps you meant in the context of bytecode interpreting. i'll
> comment on the overall case to say, not really, since it provides some
> elucidating commentary.
>
> this is particularly not true when you mod out the fact that k3
> doesn't support 64-bit integers and this was a design criterion in
> kona. most of the verbs perform basically exactly the same, some
> better, and the language, when not doing "\t 10^6", is basically
> compositions of verbs. If "\t 10^6" is the criterion, the execution
> path in k3/k* (and by extension kona) is terrible. It is also fairly
> bad in moving to lambda+adverbs for similar reasons. there was a
> project to jit kx which was abandoned (and I would strongly guess
> never successfully revisited) because the gains just aren't worth it
> for the source bloat, for that given project. if you had a premier
> open-source project you could do such things sensibly, but otherwise
> optimizing "a+=1" and other imperative-style state manipulation is a
> huge lift that doesn't buy you anything practical in an array
> language. I forget when/where the k line had tail recursion, but
> poking around in that *kind* of area is where you will find one of k's
> weak points: it incorporates hardly any of what might be called
> compiler logic.

The point of \t 10^6 is to get a more accurate timing, nothing more.
Note ngn/k ints are also 64 bit, the same as kona:

-3#64(2*)\1
4611686018427387904 0N 0
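(To unpack that snippet: `64(2*)\1` doubles 1 sixty-four times and `-3#`
keeps the last three results. 2^62 still fits in a signed 64-bit int;
2^63 wraps to INT64_MIN, which k displays as the null 0N, assuming the
usual k convention that 0N is the smallest 64-bit integer; doubling once
more wraps to 0. A Python sketch of the same wraparound:)

```python
# Simulate signed 64-bit two's-complement wraparound, as a sketch of
# what `-3#64(2*)\1` exhibits in a 64-bit k.

def wrap64(x):
    """Reduce a Python int to the signed 64-bit range."""
    x &= (1 << 64) - 1
    return x - (1 << 64) if x >= (1 << 63) else x

vals = [1]
for _ in range(64):
    vals.append(wrap64(vals[-1] * 2))
print(vals[-3:])  # → [4611686018427387904, -9223372036854775808, 0]
```

the middle value, -9223372036854775808, is what k renders as 0N.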

> anyway, i should know better at this point than to argue with the
> legend of k. there are lots of things you could push forward, but most
> of the newcomer interest, of what there is, is of a historical or
> backward looking flavor.

I used k3 just as a "reference". Sort of like POSIX! At present I mostly
play with https://codeberg.org/ngn/k.git as it seems to be the most
performant open source k version.

Bakul Shah

Aug 15, 2023, 4:06:38 AM
to kona...@googlegroups.com
A somewhat relevant paper that may be of interest:


Our results show that both systems indeed reach performance close to Node.js/V8. Looking at interpreter only performance, our AST interpreters are on par with, or even slightly faster than their bytecode counterparts. After just-in-time compilation, the results are roughly on par. This means bytecode interpreters do not have their widely assumed performance advantage. However, we can confirm that bytecodes are more compact in memory than ASTs, which becomes relevant for larger applications. However, for smaller applications, we noticed that bytecode interpreters allocate more memory because boxing avoidance is not as applicable, and because the bytecode interpreter structure requires memory, e.g., for a reified stack.

Tom Szczesny

Aug 16, 2023, 11:10:13 AM
to kona...@googlegroups.com
Thanks !!


Tom Szczesny

Aug 16, 2023, 11:44:56 AM
to kona...@googlegroups.com
"After just-in-time compilation, the results are roughly on par."

I assume that both strategies require a just-in-time compilation step.
Otherwise, if the bytecode strategy does not, it would appear to be faster, overall.

Bakul Shah

unread,
Aug 16, 2023, 2:12:49 PM8/16/23
to kona...@googlegroups.com
AST+JITC is similar to bytecode (& not JITC) in performance.
