Join Us...


Robert Widmann

Sep 29, 2014, 9:47:33 PM
to llam...@googlegroups.com
Max and I have been discussing the structure of Swiftz and future plans for TypeLift, and we thought it would be prudent to bring up merging LlamaKit into the organization, in the interest of having one place to go for FP tools.

We're going to split the Swiftz subtree soon and pull the core and Basis into the main lib, then split off some parts into other repos. We were thinking of linking the new Swiftz against LlamaKit, then layering our own primitives on top: ones that admit the kind of Haskell-ese needed for the other libraries. Max had a particularly elegant solution (seeing as Swift is having some real trouble with generic extensions):

- LlamaKit ships Box, Result, and Either.
- Swiftz links against those, then wraps them with our own "category"-ified Box, Result, and Either. A common API is bridged between both frameworks so either solution is a drop-in replacement, and you can hot-swap without a hitch. If you want more FP-ese, link to Swiftz; if less, link to LlamaKit.
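A rough sketch of that layering (hypothetical names and modern Swift syntax; neither library necessarily looks like this):

```swift
// Hypothetical LlamaKit-side primitive: a reference box, the usual
// workaround for Swift's restrictions on recursive generic values.
final class LlamaBox<T> {
    let value: T
    init(_ value: T) { self.value = value }
}

// Hypothetical Swiftz-side wrapper: same storage underneath, with the
// "category"-flavored API (map, etc.) layered on top.
struct SwiftzBox<T> {
    let unwrap: LlamaBox<T>
    init(_ box: LlamaBox<T>) { self.unwrap = box }
    init(_ value: T) { self.unwrap = LlamaBox(value) }

    func map<U>(_ f: (T) -> U) -> SwiftzBox<U> {
        return SwiftzBox<U>(f(unwrap.value))
    }
}

// A value created by a LlamaKit-only consumer...
let plain = LlamaBox(21)
// ...is usable, unchanged, through the Swiftz layer.
let doubled = SwiftzBox(plain).map { $0 * 2 }
print(doubled.unwrap.value) // 42
```

The point of the sketch: the wrapper owns no state of its own, so swapping imports never copies or converts the underlying value.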

What do you think?

Rob Napier

Sep 30, 2014, 11:54:38 AM
to llam...@googlegroups.com
That's the kind of basic structure I'd really like, at least in general terms. My ideal situation is that large, complementary frameworks like ReactiveCocoa and Swiftz can work together easily and types flow between them naturally. And I want small, low-level frameworks that need minimal dependencies to still be able to fit into the system cleanly.

I am not completely clear on the hot-swap implementation in practice. If you return a TypeLift.Result, then that wouldn't be passable to a non-TypeLift framework, right? Or do you mean sharing a protocol?

(Twenty minutes of hacking up a small project might go much faster than four paragraphs of discussion :D)

A few other details:

Either comes up a bit. I currently haven't planned to put that in LlamaKit, since I feel it duplicates Swift's enums. Do you see Either as a fundamental thing that most devs should use in addition to enums?

LlamaKit now exports ErrorType. It's just an empty protocol that identifies "error-like things" for Result. Is that more useful to TypeLift or does it get in the way? (I'd assume it'd be a big win, since I assume you're not using NSError.)

I'd want to apply the same basic structure to concurrency. In particular, I want to provide a GCD-based Future that other things can build on. Maybe Promise. Maybe Task (Future<Result>). I'd really like to talk through these on the list, because I don't have a total design yet. But I definitely want them to bridge seamlessly to GCD queues since that's going to ensure good platform interop and performance optimization.

Sounds like we're on a good path. I'd like to get towards a possible implementation of this idea, and get Justin's input on how it would impact consuming frameworks. And then maybe build a couple of toy apps to see how it impacts app devs.

BTW, I have no problem coming under the TypeLift repos if we can make all that work cleanly for downstream users, and we can provide a smooth transition from familiarity to power. In principle, I'm not tied to any specific name or repo.

-Rob



--
You received this message because you are subscribed to the Google Groups "llamakit" group.
To unsubscribe from this group and stop receiving emails from it, send an email to llamakit+u...@googlegroups.com.
To post to this group, send email to llam...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/llamakit/a3fe4e2a-b657-44eb-a494-76c69d095859%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.



--
Rob Napier
Cocoaphony blog -- http://robnapier.net/blog
iOS Programming Pushing the Limits -- http://robnapier.net/book

Robert Widmann

Sep 30, 2014, 12:46:29 PM
to llam...@googlegroups.com
Hot-swapping here means some user wants to move more stuff to Swiftz, but they're in LlamaKit. Rather than have two different boxes or results and whatnot, you define a LlamaKit.Box, we define a Swiftz.Box that wraps yours. All the user has to do is import Swiftz instead of LlamaKit, and they'll be using your value completely opaquely but with our functions and typeclasses!

As for Either, I don't feel it's especially necessary to a framework that exports Result. ErrorType also saves me some work (Basis has an ErrorType of its own I can now delete).

Excited to hear you'd be up for this.


Rob Napier

Sep 30, 2014, 12:52:51 PM
to llam...@googlegroups.com
On Tue, Sep 30, 2014 at 12:46 PM, Robert Widmann <widman...@gmail.com> wrote:
Hot-swapping here means some user wants to move more stuff to Swiftz, but they're in LlamaKit.  Rather than have two different boxes or results and whatnot, you define a LlamaKit.Box, we define a Swiftz.Box that wraps yours.  All the user has to do is import Swiftz instead of LlamaKit, and they'll be using your value completely opaquely but with our functions and typeclasses!

But does that cause problems if they're working with other libraries that return a mix of LlamaKit.Box and Swiftz.Box? Would libraries always be expected to return LlamaKit.Box (and would that work)?

My gut is that there has to be just one specific protocol, if not one specific struct, that comes out of a consistent package. My preference I think is a single struct that other modules can extend with functions (and eventually will be able to extend with methods; I don't know if you can do that right now in Swift).
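Something like this is what I have in mind by a single struct (or enum) that other modules extend; illustrative only, and note Swift now permits extending generic types from other modules, which it couldn't at the time:

```swift
// The one canonical type, shipped by a single module.
enum Either<L, R> {
    case left(L)
    case right(R)
}

// In another module: add functionality via an extension, without
// introducing a second, incompatible Either.
extension Either {
    func map<U>(_ f: (R) -> U) -> Either<L, U> {
        switch self {
        case .left(let l):  return .left(l)
        case .right(let r): return .right(f(r))
        }
    }
}

let e: Either<String, Int> = .right(21)
if case .right(let n) = e.map({ $0 * 2 }) {
    print(n) // 42
}
```

Because every framework sees the same concrete type, values flow between libraries with no bridging at all.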

 -Rob

Robert Widmann

Sep 30, 2014, 12:53:57 PM
to llam...@googlegroups.com
GCD stuff I've been thinking about... I came up with a couple ideas
- GCD-specific functions
Concurrent already has fork and forkIO. I'm open to a variant of forkOn or forkQueue(barrier), etc. that takes a GCD queue and a computation that handles all the rest behind the scenes.

- A GCD class (non-ideal).
Basically, a central class that wraps all the functions and lifts them into Swift land.

- A GCD monad (Hear me out).
People love performing work, then hopping queues back to main. We could implement some kind of GCD monad (maybe in Concurrent?) that allows forking computations onto queues, either provided or opaque. After all, if the IO monad represents computation, the GCD monad would represent computation on a different thread. I'm hoping to throw an STM into Concurrent soon, and it would be so cool to see it work with that.
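For what it's worth, one way the "GCD monad" idea could be sketched (invented names, no claim that Concurrent does it this way): a computation tagged with the queue it runs on, composed in continuation-passing style so each bind can hop queues:

```swift
import Dispatch

// Sketch of a queue-tagged computation. `body` receives a callback
// and must eventually call it with the result.
struct OnQueue<T> {
    let queue: DispatchQueue
    let body: (@escaping (T) -> Void) -> Void

    // "return"/pure: lift a plain value onto a queue.
    static func pure(_ value: T, on queue: DispatchQueue) -> OnQueue<T> {
        return OnQueue(queue: queue) { done in done(value) }
    }

    // Bind: run this computation, then feed its result to the next
    // one, which may live on a completely different queue.
    func flatMap<U>(_ f: @escaping (T) -> OnQueue<U>) -> OnQueue<U> {
        return OnQueue<U>(queue: queue) { done in
            self.run { x in f(x).run(done) }
        }
    }

    // Dispatch the body onto its queue; deliver the result to `done`.
    func run(_ done: @escaping (T) -> Void) {
        queue.async { self.body(done) }
    }
}

let workerA = DispatchQueue(label: "worker.a")
let workerB = DispatchQueue(label: "worker.b")
let sem = DispatchSemaphore(value: 0)
var result = 0

OnQueue.pure(20, on: workerA)
    .flatMap { n in OnQueue(queue: workerB) { k in k(n + 1) } } // hops queues
    .run { x in result = x * 2; sem.signal() }

sem.wait()
print(result) // 42
```

Everything here is async dispatch, so composing computations on the same serial queue can't deadlock the way nested sync calls would.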

Robert Widmann

Sep 30, 2014, 12:55:22 PM
to llam...@googlegroups.com
There is no abstraction to be had here (besides Functor, and we're trying to avoid that). What I mean is that all the user has to do is switch imports. We handle the bridging in Swiftz automatically.

Rob Napier

Sep 30, 2014, 1:31:51 PM
to llam...@googlegroups.com
I'm not clear what these examples mean. This is what I'm seeing for Future (minus the operators):


I'm not yet certain how to build things on top of it. I designed a Task that wraps Future<Result> and a Promise that allows you to complete a Future by hand. I'm not sure those are the best approaches.

But the goal is to allow things like:

let x = future { longComputation(param) }
    .map { $0 * 2 }

Alternately, I think this syntax is very useful:

let x = future { longComputation() }
x.onComplete {
    dispatch_async(dispatch_get_main_queue()) { self.result = x.result() }
}

Of course onComplete() can also be achieved with flatMap(), and I'm open to other ways of composing these. But these are the kinds of things I'm trying to provide. I think the "future {...}" syntax is very powerful. Justin can weigh in on how this would be useful or not useful to RxCocoa.

But one key point is that no library should generate its own threads directly (i.e. via pthreads). That directly fights Apple's system optimizations and imposes a big penalty on the caller. I'm not quite clear yet how your suggestions play into that and what they would look like.

-Rob



On Tue, Sep 30, 2014 at 12:52 PM, Robert Widmann <widman...@gmail.com> wrote:
GCD stuff I've been thinking about... I came up with a couple ideas

- GCD-specific functions
Concurrent already has fork and forkIO.  I'm open to a variant of forkOn or forkQueue(barrier), etc. that takes a GCD queue and a computation that handles all of it behind the scenes.


- A GCD class (non-ideal).
Basically, a central class that wraps all the functions and lifts them into Swift land.

- A GCD monad (Hear me out).
People love performing work, then hopping queues back to main.  We could implement some kind of GCD monad (maybe in Concurrent?) that allows forking computations onto queues either provided or opaque.  After all, if the IO monad represents computation, the GCD monad will represent computation on a different thread.  I'm hoping to throw an STM into Concurrent soon, and it would be so cool to see it work with that.



Rob Napier

Sep 30, 2014, 1:36:03 PM
to llam...@googlegroups.com
If a caller uses two libraries, and one of the libraries accepts and returns Swiftz.Result and the other library accepts and returns LlamaKit.Result, how will they interoperate? Am I missing something? If a library uses LlamaKit and calls another library that uses LlamaKit, but the main app uses Swiftz, will they have problems?

Today the app dev has to compile all the third-party frameworks together. But we should expect in the near future (within a year or two) there to be Swift third-party binary frameworks. It feels like this would necessarily break in that case.


-Rob

On Tue, Sep 30, 2014 at 12:55 PM, Robert Widmann <widman...@gmail.com> wrote:
There is no abstraction to be had here (besides Functor, and we're trying to avoid that).  What I mean is that all the user has to do is switch imports.  We handle the bridging in Swiftz automatically.

Robert Widmann

Sep 30, 2014, 3:28:48 PM
to llam...@googlegroups.com
The way I have it in Concurrent, you have the library dispatch to pthreads behind the scenes, but I can swap it out for a private queue, or even accept a queue, if a computation requests it (I use IO monad and computation interchangeably). See https://github.com/typelift/Parallel/blob/master/ParallelTests/ParallelTests.swift#L45-L50. Of course, there are implicit locks (MVars) in this scheme, but we're going for async-await, right?


I still need mutexes that aren't just private queues I spin synchronous read loops on because, fundamentally, an MVar is a thread of control holding a value at a later date. It is not a queue, it is what one builds queues out of.
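For concreteness, here's a sketch of an MVar-like cell built on GCD semaphores rather than a dedicated pthread; this is invented code for illustration, not the Parallel implementation:

```swift
import Dispatch

// An MVar-like single-slot cell: `put` blocks while the cell is full,
// `take` blocks while it is empty. The two semaphores serialize all
// access, so no pthread is held on the cell's behalf.
final class MVar<T> {
    private var value: T?
    private let full = DispatchSemaphore(value: 0)   // signaled when a value is present
    private let empty = DispatchSemaphore(value: 1)  // signaled when the cell is free

    func put(_ x: T) {
        empty.wait()
        value = x
        full.signal()
    }

    func take() -> T {
        full.wait()
        let x = value!
        value = nil
        empty.signal()
        return x
    }
}

let mv = MVar<Int>()
// A producer on a background queue...
DispatchQueue.global().async { mv.put(42) }
// ...and a consumer that blocks until the value arrives.
print(mv.take()) // 42
```

The blocked caller here is whatever thread called take(), not a thread the cell owns, which is the crux of the disagreement above.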

Rob Napier

Sep 30, 2014, 4:15:12 PM
to llam...@googlegroups.com
So does the call to forkFuture() generate a new thread every time it's called? That's the thing we should be strongly avoiding. As a start, it should use a private queue by default, and should accept a queue if passed. See my Future for what I mean. This allows the caller to manage priorities and serialization using standard GCD mechanisms. And seamlessly allows the caller to ensure a closure runs on the main queue (which is often required).

GCD provides mutexes. There's no need for pthreads to get that (GCD is much more than just queues). If you look at my Future code, you'll notice I do my blocking with dispatch_group, which is just a convenient wrapper around a dispatch_semaphore that can accept a completion block. Doing it that way avoids spinning up a thread (and its TLS/stack and requiring kernel context switches) just to block on a mutex. When there is no contention, GCD doesn't even have to switch into the kernel (unlike pthreads). This greatly improves performance, while keeping the code very simple.
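In sketch form (invented names, and written with the modern DispatchQueue syntax rather than the C API; my actual code differs):

```swift
import Dispatch

// A bare-bones Future of the shape described above: work dispatched to
// a caller-supplied queue, with a dispatch group providing both the
// blocking wait and the non-blocking completion notification.
final class Future<T> {
    private var value: T?
    private let group = DispatchGroup()

    init(queue: DispatchQueue = .global(), _ work: @escaping () -> T) {
        group.enter()
        queue.async {
            self.value = work()
            self.group.leave()
        }
    }

    // Block the calling thread until the value is ready.
    func result() -> T {
        group.wait()
        return value!
    }

    // Deliver the value to a queue (e.g. main) without blocking anyone.
    func onComplete(queue: DispatchQueue = .global(), _ handler: @escaping (T) -> Void) {
        group.notify(queue: queue) { handler(self.value!) }
    }
}

let f = Future { (1...10).reduce(0, +) }
print(f.result()) // 55
```

No thread is created anywhere in this sketch; the group's wait/notify does all the synchronization, and the caller controls scheduling by passing queues.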

Also remember that a Future may be created inside of a dispatched closure (this in fact may be common, during a NSURLSession handler for instance). There are tricky things to be careful about when calling pthreads functions inside of a dispatched closure. Read the "Compatibility with pthreads" section of the "Migrating away from threads" document I sent earlier. Your current code *probably* doesn't mutate threads illegally, but it would need careful auditing to make sure of that, especially if I put a dispatched closure inside of a Future (which is common because of the main queue requirements in UIKit). pthreads is just really dangerous and discouraged in Cocoa apps.

I get the channels. It's an interesting thing to add to Swift. But I believe channels should be independent of Future. And consumers should have access to Future without importing any new operators (like <-) or requiring the infrastructure for do-syntax.

Future is a really basic type, like Result. You should be able to use it without requiring a lot of infrastructure, so that it's easily shared between frameworks. That's how I generally feel about types. Types are all about interoperability. Syntax (like do-syntax) is localized to just the code that uses it, so that's ok to have dependencies. But types should stand alone as much as they can.

-Rob


On Tue, Sep 30, 2014 at 3:28 PM, Robert Widmann <widman...@gmail.com> wrote:
The way I have it in Concurrent, you have the library dispatch to pthreads behind the scenes, but I can swap it out for a private queue, or even accept a queue, if a computation requests it (I use [IO] monad and computation interchangeably).  See https://github.com/typelift/Parallel/blob/master/ParallelTests/ParallelTests.swift#L45-L50.  Of course, there are implicit locks (MVars) in this scheme, but we're going for Async-await, right?


I still need mutexes that aren't just private queues I spin synchronous read loops on because, fundamentally, an MVar is a thread of control holding a value at a later date.  It is not a queue, it is what one builds queues out of.

Robert Widmann

Sep 30, 2014, 5:57:33 PM
to llam...@googlegroups.com
I'm starting to see what you're saying, but it still makes me nervous that I wouldn't be holding onto a thread, but a queue.  As I said before, the point of an MVar is that it seizes an entire thread, not a queue.  I'd have to lock around the queue, or keep submitting barrier-dispatching operations to an internal queue.  And I'm still not entirely sure what happens if I spawn a hundred or a thousand of those things, anyhow.  I would need to keep around uniqueness state for each queue (easy with IORefs, but still a pain), and system optimizations might one day cause the whole non-deterministic threads of execution thing to mean you can submit blocks to an open thread that should be blocked by a take on an empty MVar, for example.

As for the other stuff: I'm mitigating the creep of state and keeping laziness around at the same time with those new operators.  One should not be allocating objects in a context that isn't stateful.  Nor should one be using said objects without having to force the thunk that keeps such an allocation from occurring.  This way, you can pass futures around and throw them in collections and whatnot, and still have the option of forcing the value when you want to, not when a queue is empty.  The binary <- is getting quite annoying to use, though.  I'm thinking of proposing a strictness operator (prefix-bang) to force thunks instead.

Rob Napier

Sep 30, 2014, 6:27:23 PM
to llam...@googlegroups.com
On Tue, Sep 30, 2014 at 5:57 PM, Robert Widmann <widman...@gmail.com> wrote:
I'm starting to see what you're saying, but it still makes me nervous that I wouldn't be holding onto a thread, but a queue.  As I said before, the point of an MVar is that it seizes an entire thread, not a queue.

The point of a Future is not an MVar. A Future should only complete once. Isn't that an IVar? And my ML is really weak; I don't understand how an MVar or IVar has anything to do with threads at all. Threads are an implementation detail. The point is concurrency, atomicity, and serialization, not seizing threads. GCD queues offer all of those things, and control over them. Am I missing something?

 
 I'd have to lock around the queue, or keep submitting barrier-dispatching operations to an internal queue.  And I'm still not entirely sure what happens if I spawn a hundred or a thousand of those things, anyhow.  I would need to keep around uniqueness state for each queue (easy with IORefs, but still a pain), and system optimizations might one day cause the whole non-deterministic threads of execution thing to mean you can submit blocks to an open thread that should be blocked by a take on an empty MVar, for example.

As for the other stuff: I'm mitigating the creep of state and keeping laziness around at the same time with those new operators.  One should not be allocating objects in a context that isn't stateful.  Nor should one be using said objects without having to force the thunk that keeps such an allocation from occurring.  This way, you can pass futures around and throw them in collections and what not and still have the option of forcing the value when you want to, not when a queue is empty.  The binary <- is getting quite annoying to use, though.  I'm thinking of proposing a strictness operator (prefix-bang) to force thunks instead.

I don't understand the rest of this at all. You can put my Future in a collection and force values when you want them (by calling .result()). I'm not sure what you mean by "when a queue is empty." I don't think we're talking about the same things here. The typical GCD queue is concurrent, so things dispatched to it will execute as soon as a core is available to run them. Is my Future code clear, or do I need to document it more? Maybe my use of dispatch_group isn't obvious. Have you done much work in GCD?

I'm assuming a caller who is mostly programming in imperative style, integrating with a vast amount of existing code, and just wants a more elegant way to deal with some asynchronous operations. That same type could be used for functional composition, but I'm not assuming that in the construction of the type. A Future is just "a value that may not yet be available, but will be when it is completed." It doesn't imply anything about threads or anything else. Everything else is an implementation detail.

-Rob

Robert Widmann

Sep 30, 2014, 7:34:19 PM
to llam...@googlegroups.com
Sorry. A lot of that post was based on unpushed code to some TypeLift stuff. Basically, I have an STM that operates under the assumption that every MVar has a pthread_t associated with it. With pthreads, this was incredibly easy to maintain because you could use pthread_t's like integers and hold them in dictionaries and the like. With GCD, I would have to keep track of queue labels and keep around a function for generating unique queue names to spawn.

The stateful stuff still stands. Allocations are still thunks unto themselves (see any of the new* functions in Parallel; they're not the cleanest things around), and some can potentially execute stateful operations, especially under the STM with TVars. I cannot return raw TVars without performing some costly mutations of state that may not even need to be forced if the user never uses the TVar.

Robert Widmann

Sep 30, 2014, 7:45:01 PM
to llam...@googlegroups.com
Yes, they will have type errors. The two types will share a name and a common bridge, but they are conceptually separated by the modules they reside in. If possible, Swiftz.Result may be able to offer a destruct() or unwrap() or bridge() function that breaks it down into a LlamaKit primitive.

Rob Napier

Sep 30, 2014, 8:20:40 PM
to llam...@googlegroups.com
I really don't know what this means. I'm just a code monkey. What doesn't my Future implementation do that an iOS app dev really wishes it would? I mean, I know what an STM is, and I kind of know what a TVar is, but I don't know why the caller has to have these to be allowed to discuss a future value. It feels like implementation details have gotten in the way of types here. What common app problem is this solving?

I don't know why you would need a separate queue for every future. I'm currently handling all futures in the whole system on a single concurrent queue. Futures are not promised to be time-ordered. If you want them generated in some order, or if they depend on each other in some way, then it's up to you to provide a serial queue to create that order. Futures are just a type. They just mean "a value that will eventually be readable."

-Rob


Robert Widmann

Sep 30, 2014, 11:00:12 PM
to llam...@googlegroups.com
The difference between Concurrent and Parallel (and the reason I'm going to change the repo name) is that Concurrent is for deterministic concurrency and concurrency effects. Parallel is for non-deterministic (often compiler-optimized) concurrency and effects. In the interest of that, the concurrency primitives act in a manner that requires they know about each other's threads (passing exceptions to and from different threads; see throwTo), but not necessarily the state of those threads until the action is executed. You're describing an inversion of that principle, wherein the environment knows nothing about the threads themselves (because GCD handles them all) and everything about the environment, and that requires a certain statefulness that leaves a bad taste in my mouth. Don't get me wrong, it's possible to refactor the thing to work like that; it just doesn't lend itself to the patterns I need.

Then again, the patterns I need come from Haskell.  I'll let you know when I publish so you can look over the source and see more of what I mean.  For now, we've gone off on a pretty wild tangent here :)

Rob Napier

Sep 30, 2014, 11:22:21 PM
to llam...@googlegroups.com
Ok, so if people build things on LlamaKit Futures and rely primarily on GCD for their async work, they can't integrate those with any of the Concurrent package you're working on. That's OK, but it would make it awkward to actually merge the repositories. (Swiftz could still rely on LlamaKit's Result; it just would be confusing if the same org had incompatible concurrency systems.)

Note that GCD provides concurrency, not parallelism. It provides mechanisms to reason about how operations depend on other operations. It doesn't promise that anything will actually run in parallel. That sometimes confuses people because they think GCD promises parallelism or is primarily about parallelism, which it doesn't and isn't.

Rob

Robert Widmann

Oct 1, 2014, 2:01:16 PM
to llam...@googlegroups.com
I see it as more a case where the gradient solves this stuff.  If you push a simple future, then people will use it.  But the day they wake up (most likely with a big knot on their skull from where they got beat up by a Haskeller) and decide they need MVars and Actors and an STM, then they'll use the big-boy Future built on those things.

Rob Napier

Oct 1, 2014, 2:55:53 PM
to llam...@googlegroups.com
That's fine, and I'll keep pointing over to TypeLift for the big-kid toys, but it sounds like we can't really merge the repos or orgs. That's not a problem, just wanted to figure it out. 

I do find myself looking over my shoulder now for those roaming bands of Haskellers… :)

Rob

Robert Widmann

Oct 1, 2014, 8:28:46 PM
to llam...@googlegroups.com
Oh? Why not? Here, I've got a gist that shows what I mean by wrapping your stuff in ours. This is a direct response to Swift 1.1 removing the ability to extend generic classes and structures outside your own module. *sigh*

Robert Widmann

Oct 1, 2014, 10:19:04 PM
to llam...@googlegroups.com
Oh? Why not? Here's a gist of what Max was thinking about (unfortunately, a direct result of Swift 1.1 removing support for extending generic classes outside your own module).

Rob Napier

Oct 2, 2014, 11:57:56 AM
to llam...@googlegroups.com
LlamaKit promises tight platform integration (OS, Cocoa, and stdlib). If the same umbrella framework includes a Future that provides that, and an apparently-related-but-completely-different Future that removes platform integration and has very different performance characteristics, that's too confusing to the caller IMO. If there are implementation discontinuities that impact the calling app when adding TypeLift features, I want it to be clear to the caller that they're moving to a different framework and not just adding a few features.

Your work on replacing Apple's libraries is interesting. It may lead to a very powerful platform. It's just not the platform I'm working on right now.

-Rob

Justin Spahr-Summers

Oct 2, 2014, 12:46:13 PM
to llam...@googlegroups.com
I'm trying to avoid getting involved in this thread, because the last thing it needs is more opinions. I'll just say that I agree almost 100% with what Rob is saying.

Cocoa and Swift are what we see today, and libraries _that want to see adoption_ should be cognizant of that. If Cocoa or Swift changes significantly in the future, so be it--we can adapt then, but it's silly to build things in expectation of that (because it may be a long time, or never happen at all).

-- Justin


Robert Widmann

Oct 5, 2014, 1:28:25 AM
to llam...@googlegroups.com
I respect your decision.  We will still move forward, and I will split out a minimal core from Swiftz and the Basis.  Thanks for the discussion, all.