We're going to split the Swiftz subtree soon, pull the core and Basis into the main lib, then split off some parts into other repos. We were thinking of linking the new Swiftz against LlamaKit, then layering our own primitives on top: ones that admit the kind of Haskell-ese needed for the other libraries. Max had a particularly elegant solution (seeing as Swift is having some real trouble with generic extensions):
LlamaKit ships Box, Result, Either
Swiftz links to those, then wraps them with our own "category"-ified Box, Result, and Either. A common API is bridged between both frameworks so either solution is a drop-in replacement, and you can hot-swap without a hitch. If you want more FP-ese, just link to Swiftz; less, link to LlamaKit.
What do you think?
--
You received this message because you are subscribed to the Google Groups "llamakit" group.
To unsubscribe from this group and stop receiving emails from it, send an email to llamakit+u...@googlegroups.com.
To post to this group, send email to llam...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/llamakit/a3fe4e2a-b657-44eb-a494-76c69d095859%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
As for Either, I don't feel it's especially necessary to a framework that exports Result. ErrorType also saves me some work (Basis has an ErrorType of its own I can now delete).
Excited to hear you'd be up for this.
Hot-swapping here means some user wants to move more stuff to Swiftz, but they're in LlamaKit. Rather than have two different boxes or results and whatnot, you define a LlamaKit.Box, we define a Swiftz.Box that wraps yours. All the user has to do is import Swiftz instead of LlamaKit, and they'll be using your value completely opaquely but with our functions and typeclasses!
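The wrapping scheme might look something like this sketch; `LlamaBox` and `SwiftzBox` are stand-in names for illustration, not the real API of either framework:

```swift
// Hypothetical sketch of the bridging scheme: a Swiftz box wraps the
// LlamaKit box opaquely and layers the FP-ese functions on top.

// Stands in for LlamaKit.Box:
public final class LlamaBox<T> {
    public let unbox: T
    public init(_ value: T) { self.unbox = value }
}

// Stands in for Swiftz.Box: a wrapper holding the LlamaKit value.
public struct SwiftzBox<T> {
    public let underlying: LlamaBox<T>
    public init(_ box: LlamaBox<T>) { self.underlying = box }
    public var value: T { return underlying.unbox }

    // The "category"-ified extras live only on the Swiftz side.
    public func fmap<U>(_ f: (T) -> U) -> SwiftzBox<U> {
        return SwiftzBox<U>(LlamaBox(f(value)))
    }
}

let b = SwiftzBox(LlamaBox(2)).fmap { $0 * 21 }
// b.value == 42
```

The user only switches which module they import; the underlying LlamaKit value is never copied or converted, just wrapped.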
GCD stuff I've been thinking about... I came up with a couple of ideas:
- GCD-specific functions
Concurrent already has fork and forkIO. I'm open to a variant of forkOn or forkQueue(barrier), etc. that takes a GCD queue and a computation that handles all of it behind the scenes.
- A GCD class (non-ideal).
Basically, a central class that wraps all the functions and lifts them into Swift land.
- A GCD monad (Hear me out).
People love performing work, then hopping queues back to main. We could implement some kind of GCD monad (maybe in Concurrent?) that allows forking computations onto queues, either provided or opaque. After all, if the IO monad represents computation, the GCD monad will represent computation on a different thread. I'm hoping to throw an STM into Concurrent soon, and it would be so cool to see it work with that.
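The queue-hopping idea in that last bullet could be sketched as a small continuation-style type; everything here (the `GCD` type, `onQueue`, `bind`) is hypothetical naming, not anything Concurrent or LlamaKit actually ships:

```swift
import Dispatch

// Hypothetical sketch of a "GCD monad": a deferred computation tagged
// with the queue it should run on, with a bind that hops queues.
struct GCD<A> {
    // Run the computation, feeding its result to a continuation.
    let run: (@escaping (A) -> Void) -> Void

    // Lift a plain computation onto a specific queue.
    static func onQueue(_ q: DispatchQueue, _ work: @escaping () -> A) -> GCD<A> {
        return GCD { k in q.async { k(work()) } }
    }

    // Sequence two computations, possibly on different queues.
    func bind<B>(_ f: @escaping (A) -> GCD<B>) -> GCD<B> {
        return GCD<B> { k in self.run { a in f(a).run(k) } }
    }
}

// "Do work in the background, then hop back for UI" reads as a pipeline.
// (A labeled queue stands in for the main queue so this runs as a script.)
let uiQueue = DispatchQueue(label: "ui")
let done = DispatchSemaphore(value: 0)

GCD.onQueue(.global()) { 21 * 2 }
    .bind { n in GCD.onQueue(uiQueue) { print("result: \(n)") } }
    .run { _ in done.signal() }

done.wait()
```

The nice part is that the queue hop is just `bind` — the caller never touches `dispatch_async` directly, and an opaque default queue could be swapped in behind `onQueue`.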
There is no abstraction to be had here (besides Functor, and we're trying to avoid that). What I mean is that all the user has to do is switch imports. We handle the bridging in Swiftz automatically.
I still need mutexes that aren't just private queues I spin synchronous read loops on because, fundamentally, an MVar is a thread of control holding a value at a later date. It is not a queue, it is what one builds queues out of.
The way I have it in Concurrent, you have the library dispatch to pthreads behind the scenes, but I can swap it out for a private queue, or even accept a queue, if a computation requests it (I use [IO] monad and computation interchangeably). See https://github.com/typelift/Parallel/blob/master/ParallelTests/ParallelTests.swift#L45-L50. Of course, there are implicit locks (MVars) in this scheme, but we're going for async-await, right?
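To make the distinction concrete, here is a toy MVar built on a lock and condition variable (not Concurrent's implementation): `take` blocks the calling thread until a value arrives, `put` blocks until the box is empty — a thread of control holding a value at a later date, not a queue:

```swift
import Foundation

// Toy MVar sketch over NSCondition (a mutex + condition variable).
// Not thread-pool aware and not Concurrent's real type — just the
// blocking semantics that make an MVar a primitive, not a queue.
final class MVar<A> {
    private let cond = NSCondition()
    private var contents: A?

    func put(_ x: A) {
        cond.lock(); defer { cond.unlock() }
        while contents != nil { cond.wait() }   // block while full
        contents = x
        cond.broadcast()
    }

    func take() -> A {
        cond.lock(); defer { cond.unlock() }
        while contents == nil { cond.wait() }   // block while empty
        let x = contents!
        contents = nil
        cond.broadcast()
        return x
    }
}

let mv = MVar<Int>()
Thread.detachNewThread { mv.put(7) }   // producer on its own thread
print(mv.take())                       // blocks until the put lands
```

A one-element channel, a bounded queue, a semaphore — all of those fall out of composing these two operations, which is the sense in which MVars are what one builds queues out of.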
I'm starting to see what you're saying, but it still makes me nervous that I wouldn't be holding onto a thread, but a queue. As I said before, the point of an MVar is that it seizes an entire thread, not a queue.
I'd have to lock around the queue, or keep submitting barrier-dispatching operations to an internal queue. And I'm still not entirely sure what happens if I spawn a hundred or a thousand of those things, anyhow. I would need to keep around uniqueness state for each queue (easy with IORefs, but still a pain), and system optimizations might one day cause the whole non-deterministic-threads-of-execution thing to mean you can submit blocks to an open thread that should be blocked by a take on an empty MVar, for example.

As for the other stuff: I'm mitigating the creep of state and keeping laziness around at the same time with those new operators. One should not be allocating objects in a context that isn't stateful, nor should one be using said objects without having to force the thunk that keeps such an allocation from occurring. This way, you can pass futures around, throw them in collections and whatnot, and still have the option of forcing the value when you want to, not when a queue is empty. The binary <- is getting quite annoying to use, though. I'm thinking of proposing a strictness operator (prefix-bang) to force thunks instead.
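The prefix-bang idea could look something like this; `Future` here is a toy memoizing thunk for illustration, not Concurrent's actual type:

```swift
// Toy sketch of a lazy future: the allocation/computation is deferred
// behind a thunk and forced explicitly, never implicitly.
final class Future<A> {
    private let thunk: () -> A
    private var memo: A?          // memoized result (not thread-safe; a sketch)
    init(_ thunk: @escaping () -> A) { self.thunk = thunk }

    func force() -> A {
        if let v = memo { return v }
        let v = thunk()
        memo = v
        return v
    }
}

// The proposed strictness operator: prefix-bang forces the thunk.
prefix func ! <A>(f: Future<A>) -> A {
    return f.force()
}

let f = Future { 2 + 2 }
let xs = [f]          // futures sit in collections unforced
let v = !xs[0]        // forced only here, at the point of use
// v == 4
```

Compared with a binary <-, the prefix form keeps the forcing site visually attached to the value being forced, which is most of the ergonomic win.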