Some thoughts on the future direction of Happstack


Matthew Elder

Jun 15, 2009, 11:21:21 AM
to HA...@googlegroups.com
All, as I have been learning how to be a dad, I have also had some
time to reflect on the goals I initially had for Happstack. I have
also been scanning the various topics that have been appearing on the
list. One thing that has become apparent is that happs has grown into
a framework that is being used for very many purposes -- and that
people have very different needs.

That said, I would like to focus on narrowing the scope of the project
a bit more and drive towards a common goal.

What is Happstack at this point in the road? Here are some of the
parts of the puzzle:

- a fairly mature http application server
- a fairly immature frontend http server (ipv6 and file serving -- to
name a couple of the areas of weakness)
- a fairly immature prevayler style persistence layer (lack of tools
and learning curve are a couple of issues here)
- a mixed bag of mature and immature utility functions of which some
are slowly decaying (in terms of maintenance)

Each of these four aspects could easily be a full-time job for one developer

Some open questions to the community:

1. How valuable do you find each of these pieces to be in your work
(whether it be personal or professional)?
2. What do you see as the main weakness of the current implementation
and/or community?
3. Which items could be potentially taken away to ease the overall
code footprint with minimal impact and/or which items really belong in
a separate project altogether (state anyone?).

Regards,
Matthew Elder

--
Sent from my mobile device

Need somewhere to put your code? http://patch-tag.com
Want to build a webapp? http://happstack.com

Gregory Collins

Jun 15, 2009, 12:58:38 PM
to HA...@googlegroups.com
Matthew Elder <ma...@mattelder.org> writes:

> Each of these four aspects could easily be a full-time job for one developer
>
> Some open questions to the community:
>
> 1. How valuable do you find each of these pieces to be in your work
> (whether it be personal or professional)?

Greetings from Budapest! (I'm on my honeymoon here).

If I were to evaluate how important the various happstack pieces are to
me:

* HTTP application server -- absolutely critical, the raison d'etre for
the package, and the part of it I'm actually using, both for personal
projects and at work -- but it needs an extensive overhaul and maybe a
re-architecting (as we've discussed)

* HTTP frontend server -- also critical for similar reasons, but the
current implementation is not so hot and should be replaced by
something better.

* Persistence layer -- I'm completely disinterested and will remain so
until the design and code documentation reaches an acceptable level of
quality. Right now I have no idea how it actually *works* because it's
far too complicated and under-documented. I also have some pretty deep
reservations about its performance/overhead/scalability
characteristics. Has anyone done a comprehensive benchmark/performance
evaluation? I've heard apocryphal stories of it gobbling tons of RAM.

* Utility functions -- should probably be packages in their own right if
they're self-contained enough; "kitchen-sink"-type libraries are a sure
sign of questionable design decisions.


> 2. What do you see as the main weakness of the current implementation
> and/or community?

Lazy I/O in the HTTP protocol handler; this has bitten me in the ass
with unpredictable performance characteristics several times.

Community-wise things are fine, I've felt no obstacles to stepping in
and contributing. I wish for a better wiki/bug-tracker though. (Trac?)
We should also be supplying better-written, cleaner, more
aesthetically-pleasing tutorial material.


> 3. Which items could be potentially taken away to ease the overall
> code footprint with minimal impact and/or which items really belong in
> a separate project altogether (state anyone?).

I think I know where you're going with this. It's my position that the
question of persistence/data layer should be orthogonal to the HTTP
application server stuff, and any effort to entwine them further should
be resisted. Happstack should be usable without the state portion.

I know for a fact that there are people who feel exactly the opposite
way, and wish for Happstack to be a Rails-ish HTTP/user
management/data-layer/ORM/templating megalith. So you're right that we
need to develop a consensus on the direction of the project.

***

As a thought experiment, let's enumerate a couple of different projects
we could be working on:

* A rewrite of the base-level HTTP application server stuff. The
ServerPartT transformer stack is pretty cool and has some great
features but is overcomplicated, exposes too much of its internals in
the public interface, and has problems re: lazy I/O. To me this one is
a sine qua non for any further work; we need a more solid foundation
to build upon. I'm planning on working on this at Hac Phi; by the end
of that weekend we should have a candidate replacement for the old
stuff along with some kind of a compatibility layer for ServerPartT to
avoid breaking old code.

* An implementation of sharding for happstack-state -- lots of people
have asked about this; being unable to scale beyond 4GB datasets is an
unreasonably severe limitation.

* An implementation of happstack-state based on secondary storage
(i.e. not 100% in RAM) -- see above -- no idea how this might work.

* Some useful Rails-ish utility modules for building real-world
web apps: user management, model-view-controller CRUD, built-in
Javascript stuff, etc. I'm definitely not keen on moving in an
explicitly "Ruby-on-Rails-clone" direction; Haskell isn't like Ruby
and I think we can definitely do better.

In my mind we would need some design consensus re: how all that stuff
would plug together before we can proceed on that front. My feeling is
that the natural way to do pluggable componentry in Haskell is via
typeclasses: for example, to do user management you'd make your
toplevel website monad implement a "UserManager" typeclass that would
provide the operations you need, probably by wrapping some common
authentication code.

Keeping the interface separate from the implementation would allow you
to plug in different user auth modules depending on which design
decisions make sense for your problem domain, i.e. challenge-response
vs. plaintext/SSL, MySQL vs. happstack-state vs. Tokyo Tyrant, MD5 vs
SHA, web browsers/cookies vs. middleware/SOAP, etc.
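
As a concrete illustration of that typeclass approach, here is a rough
sketch. All the names (UserManager, UserId, the in-memory backend) are
hypothetical, standing in for whichever auth and storage choices you
actually make:

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
module UserManagerSketch where

import qualified Data.Map as Map
import Control.Monad.State (State, gets, modify)

newtype UserId = UserId String deriving (Eq, Ord, Show)

-- The interface handlers program against; it says nothing about storage.
class Monad m => UserManager m where
  authenticate :: String -> String -> m (Maybe UserId)  -- login, password
  createUser   :: String -> String -> m UserId

-- One stand-in backend: a plain in-memory Map threaded through State.
-- A real site would instead wrap happstack-state transactions, MySQL,
-- Tokyo Tyrant, etc. behind the very same class.
type Users = Map.Map String (UserId, String)            -- login -> (id, password)

newtype InMemory a = InMemory (State Users a)
  deriving (Functor, Applicative, Monad)

instance UserManager InMemory where
  authenticate login pw = InMemory $ do
    entry <- gets (Map.lookup login)
    return $ case entry of
      Just (uid, stored) | stored == pw -> Just uid
      _                                 -> Nothing
  createUser login pw = InMemory $ do
    let uid = UserId login
    modify (Map.insert login (uid, pw))
    return uid

Swapping the instance is the only change needed to move between backends;
the handlers themselves never mention one.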

***

Hope this provides some fodder for conversation.

G
--
Gregory Collins <gr...@gregorycollins.net>

MightyByte

Jun 15, 2009, 2:20:23 PM
to HA...@googlegroups.com
Since I am currently running a production website developed with
Happstack, I'll throw in some comments.

1. The http application server is an integral part of my app, which
uses it to serve dynamic content. It works well for me now, but I'm
certainly open to improvements.
2. I only serve small css, javascript, and image files, so the current
front end functionality is sufficient for me at this time, although I
agree that it should be improved.
3. My site is built around Happstack's persistence layer (and makes
heavy use of IxSet for convenience), which has worked nicely, but
since going live has given me cause for concern.

I find both the http server and the persistence layer to be very
valuable. I make heavy use of formlets, which allows me to stay very
close to Haskell data types. Since I'm close to the type system on
the formlet/UI side, it's very nice to stay close to the type system
on the persistence side.

However, memory use is a cause for concern. I'm looking at ways to
reduce my storage requirements, but optimizations like bytestrings can
only take me so far. The current lack of sharding means that I'll
have to scale up rather than out should the need arise. It was
sobering to realize that this will only take me so far, and beyond
that point as everything stands right now, I'd be dead in the water.
(Admittedly the same is true if you're using MySql. The difference is
that you'll go a lot longer before running into this wall with MySql.)
So I'm having to analyze whether it would be a better use of my time
to work on sharding for Happstack or migrating to some other solution
for persistence.

So in my mind, scalability in the persistence layer is Happstack's
biggest weakness right now. The everything-in-memory design means
that this will be a problem much sooner than it will for a RDBMS-based
site. I was fully aware of this fact when I chose Happstack for my
project, but it's a much more pressing concern now. Some will point
out that if this is a problem for me, I should contribute the code to
fix it. I undoubtedly will do something about it, but given the
complexity and obscurity Gregory mentioned, there's a distinct
possibility that I'll end up deciding to use a different approach.

Another concern is that even if sharding were available tomorrow, once
the database becomes large enough, creating checkpoints may become a
problem. I've already discovered that regular checkpointing helps to
reduce memory usage, but when the checkpoint files become large, it
may be too time consuming. At this point, I haven't heard a
satisfactory answer to this concern.

Jake McArthur

Jun 15, 2009, 2:41:53 PM
to HA...@googlegroups.com
Matthew Elder wrote:
> 1. How valuable do you find each of these pieces to be in your work
> (whether it be personal or professional)?

* HTTP application server: Critical
I think web services should be Happstack's primary focus.
* HTTP front end server: Critical
I think it is important to be able to make Happstack serve as
either a front end OR a back end.
* Persistence layer: Important
I love the idea of primary storage being in RAM. I would rather
blacklist certain kinds of data from cache (RAM) than whitelist
certain kinds of data (from disk) for cache. That said, we don't
have a very easy way to do that yet.
* Utility functions: Not Important
Such things should definitely be put into separate libraries.

> 2. What do you see as the main weakness of the current implementation
> and/or community?

Too much Template Haskell in State. The TH makes it extremely hard to
follow and understand. If it's too complex without the TH, we should
probably address that, not hide it.

State also should provide some way to optionally remove some data from
RAM, retrieving it from disk as needed. For example, I believe the
current way to upload and save large files to a Happstack application is
to simply save them to disk and file serve them. It would be excellent
to be able to store these along with other associated data in State, but
only load them into memory on demand. This can be extrapolated into any
infrequently accessed data, however large or small, of course.

Sharding will be great once it's working properly, but I honestly think
that proper optimization for the resources available to you on a single
system is the best way to go at first. Note that I am not advocating the
traditional way of storing data on the hard drive only and caching in
memory as needed. Rather, I'm advocating storing data primarily in
memory but making exceptions for some data to store only on the hard
drive (and perhaps later in the development of State even automatically
caching some of the most frequently read data in memory, as is traditional).

> 3. Which items could be potentially taken away to ease the overall
> code footprint with minimal impact and/or which items really belong in
> a separate project altogether (state anyone?).

I think nearly every component should be as loosely coupled and
separated as possible. New packages can then be created to tie things
together if the Railsy types want that sort of thing, but I would rather
the core be highly modular. State, Ix, and many of the utilities can be
separated pretty cleanly. I honestly don't even understand why
happstack-util exists, as its components have very little to do with
Happstack proper or each other. The happstack package's dependencies
ideally shouldn't even have "happstack" in their names, although that
would be overly pedantic. In my opinion, the core of Happstack is the
server, so happstack-server should be the new happstack, and the current
happstack package should cease to exist. Other pieces can be added as
needed.

- Jake

John MacFarlane

Jun 15, 2009, 2:48:15 PM
to HA...@googlegroups.com
+++ Matthew Elder [Jun 15 09 08:21 ]:

>
> All, as I have been learning how to be a dad, I have also had some
> time to reflect on the goals I initially had for Happstack. I have
> also been scanning the various topics that have been appearing on the
> list. One thing that has become apparent is that happs has grown into
> a framework that is being used for very many purposes -- and that
> people have very different needs.
>
> That said, I would like to focus on narrowing the scope of the project
> a bit more and drive towards a common goal.
>
> What is Happstack at this point in the road? Here are some of the
> parts of the puzzle:
>
> - a fairly mature http application server
> - a fairly immature frontend http server (ipv6 and file serving -- to
> name a couple of the areas of weakness)
> - a fairly immature prevayler style persistence layer (lack of tools
> and learning curve are a couple of issues here)
> - a mixed bag of mature and immature utility functions of which some
> are slowly decaying (in terms of maintenance)
>
> Each of these four aspects could easily be a full-time job for one developer
>
> Some open questions to the community:
>
> 1. How valuable do you find each of these pieces to be in your work
> (whether it be personal or professional)?

I use the application server and HTTP server in gitit. I don't use
happstack-state at all, mostly because of the complexity and the
difficulty managing migrations. I think it is important that
happstack-server be made orthogonal to happstack-state -- but that
has mostly been achieved in happstack-0.2.

> 2. What do you see as the main weakness of the current implementation
> and/or community?

happstack-0.2 is a huge improvement over the original HAppS. I'd like to
see happstack-0.3 released as soon as possible, since it fixes a serious
memory leak (among other things). But the community seems great -- I
hope momentum can be kept up.

John

MightyByte

Jun 15, 2009, 2:54:45 PM
to HA...@googlegroups.com
> * Persistence layer:        Important
>     I love the idea of primary storage being in RAM. I would rather
>     blacklist certain kinds of data from cache (RAM) than whitelist
>     certain kinds of data (from disk) for cache. That said, we don't
>     have a very easy way to do that yet.

> State also should provide some way to optionally remove some data from
> RAM, retrieving it from disk as needed.

I've been thinking about this idea as well. I think it could be
useful, although it won't solve the scaling problems I mentioned
earlier.

Flavio Botelho

Jun 15, 2009, 3:19:37 PM
to HA...@googlegroups.com

Personally I think that this idea that everything has to be in RAM
does not make any economic sense, except perhaps for a very few specific
applications.
Now, being able to store data based on your language's data types -- that
is a really good idea.

My 2 cents.
Flavio Botelho

Vagif Verdi

Jun 15, 2009, 4:53:53 PM
to HAppS
OMG! I've been saying this from the moment I saw the original HAppS, and I
was also very frustrated when you guys continued that path with
Happstack. Why on earth do you think HAppS did not take off? Because
of the unfortunate decision to force web server users to use your
homegrown data storage.

Happstack should be only about the web server and application server.

1. Remove data storage into separate project.
2. Get rid of Template Haskell from end user code. If you need to use
it inside the project, it is perfectly fine. But to demand from end
user (often beginner haskell programmer) to learn and maintain another
level of complexity - is too much.
3. Please implement sessions that are a: not based on your data
storage, b: do not require Template Haskell wizardry.

Forget about all that fancy Rails functionality. The main problem
happs has is the incredibly high entry barrier. Make it simple to
start writing stateful web applications with it.

MightyByte

Jun 15, 2009, 5:13:54 PM
to HA...@googlegroups.com
On Mon, Jun 15, 2009 at 4:53 PM, Vagif Verdi<Vagif...@gmail.com> wrote:
>
> OMG! I've been saying this from the moment I saw the original HAppS, and I
> was also very frustrated when you guys continued that path with
> Happstack. Why on earth do you think HAppS did not take off? Because
> of the unfortunate decision to force web server users to use your
> homegrown data storage.

For the record, neither HAppS nor Happstack force the user to use our
data storage.

> 1. Remove data storage into separate project.

...


> happs has is the incredibly high entry barrier. Make it simple to
> start writing stateful web applications with it.

These two items seem to be in conflict.

Jake McArthur

Jun 15, 2009, 5:23:16 PM
to HA...@googlegroups.com
Vagif Verdi wrote:
> OMG! I've been saying this from the moment I saw the original HAppS, and I
> was also very frustrated when you guys continued that path with
> Happstack. Why on earth do you think HAppS did not take off? Because
> of the unfortunate decision to force web server users to use your
> homegrown data storage.

I don't think this attitude of "don't do it yourself" will result in
new, useful, interesting models.

> Happstack should be only about the web server and application server.
>
> 1. Remove data storage into separate project.

Agreed.

> 2. Get rid of Template Haskell from end user code. If you need to use
> it inside the project, it is perfectly fine. But to demand from end
> user (often beginner haskell programmer) to learn and maintain another
> level of complexity - is too much.

Agreed, mostly.

> 3. Please implement sessions that are a: not based on your data
> storage, b: do not require Template Haskell wizardry.

This does not sound like it should be in the scope of the Happstack
project, proper. It sounds more like a utility, which implies it should
be in a separate package.

- Jake

Shu-yu Guo

Jun 15, 2009, 5:26:44 PM
to HA...@googlegroups.com
On Mon, Jun 15, 2009 at 1:53 PM, Vagif Verdi<Vagif...@gmail.com> wrote:
>
> 1. Remove data storage into separate project.
> 2. Get rid of Template Haskell from end user code. If you need to use
> it inside the project, it is perfectly fine. But to demand from end
> user (often beginner haskell programmer) to learn and maintain another
> level of complexity - is too much.
> 3. Please implement sessions that are a: not based on your data
> storage, b: do not require Template Haskell wizardry.
>
> Forget about all that fancy Rails functionality. The main problem
> happs has is the incredibly high entry barrier. Make it simple to
> start writing stateful web applications with it.

Unfortunately I don't think it's easy to have a simple state system in
an inherently multithreaded program written in a pure functional
language. That said, I do think happstack-state could be a lot
simpler. Why don't we just use STM and have people write their own
savers for data?
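
A back-of-the-envelope sketch of that suggestion, using nothing beyond
Control.Concurrent.STM (the Saver type and updateAndSave are invented
names for illustration, and this deliberately ignores everything the
event log gives you, such as replay and migration):

import Control.Concurrent.STM

-- The application supplies its own persistence strategy.
type Saver st = st -> IO ()     -- e.g. append to a file, talk to a database

-- Apply a pure update transactionally, then hand the new state to the saver.
updateAndSave :: TVar st -> Saver st -> (st -> st) -> IO st
updateAndSave var save f = do
  new <- atomically $ do
    st <- readTVar var
    let st' = f st
    writeTVar var st'
    return st'
  save new                      -- durability is entirely the saver's problem
  return new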

--
shu

Rick R

Jun 15, 2009, 5:30:21 PM
to HA...@googlegroups.com
I am just about to put an app into production, part of which uses happstack. I learned pretty early on that the goals of Happstack are a bit different from mine, but I stuck with it, if for no other reason than it's the only app framework I know of for Haskell.

Anyways, onto my priorities.

> a fairly mature HTTP application server

Obviously the most important.

> a fairly immature frontend http server (ipv6 and file serving -- to name a couple of the areas of weakness)

Important, but honestly this is a *very* difficult task to execute correctly (security, DoS resistance, etc.).
I would be fine with tight integration with any (one?) of the other, purpose-built http servers out there (nginx, lighttpd, apache, etc.). My app runs with a lighttpd front end which load-balances requests to Happstack instances. I'm too afraid to have any other server face the public.

> a fairly immature prevayler style persistence layer (lack of tools and learning curve are a couple of issues here)

Not important to me at all. There are at least a dozen amazing, mature, scalable data solutions out there. My app currently uses Berkeley DB, but I am putting the finishing touches on a Haskell MongoDB driver.  As far as session-level persistence, what is wrong with Memcached? Really?


> a mixed bag of mature and immature utility functions of which some are slowly decaying (in terms of maintenance)

These are actually nice to have, but they can certainly be in other libraries. IMO, the only thing that makes Rails really nice is its helpers/utils. That, however, is largely a testament to its maturity. The libraries can be created and made available by and for Happstack customers as they are needed.


As far as weaknesses, I would like to echo my concern over strictness guarantees in the server. Perhaps that is my control-freak nature coming through in that I like computers to do things exactly when I tell them to. This is something I deal with in all of my Haskell apps.

IMO the documentation for the core system really isn't as bad as others are saying it is. Perhaps that is because I've ignored anything to do with State.

As far as future direction, I've stated this before, but I think the trend in web applications is moving towards an event-driven client/server approach. It would be nice to see better utils/support/documentation for ajax/comet. 

Since Happstack is its own web server, it does have the capability to do things truly differently.  Nitrogen (an Erlang-based app server) is the only app server I've seen that is really designed as an event-driven system. As a result, it is terribly easy to provide simple, interactive user interfaces.






--
"The greatest obstacle to discovering the shape of the earth, the continents, and the oceans was not ignorance but the illusion of knowledge."
- Daniel J. Boorstin

Lemmih

Jun 15, 2009, 6:00:58 PM
to HA...@googlegroups.com

Guaranteeing data integrity with reasonable efficiency constraints
isn't as trivial as that, I'm afraid.

--
Cheers,
Lemmih

Lemmih

Jun 15, 2009, 7:10:03 PM
to HA...@googlegroups.com

Err, I meant data persistency, not data integrity.

--
Cheers,
Lemmih

Duncan Coutts

Jun 15, 2009, 8:22:27 PM
to HAppS
On Jun 15, 4:21 pm, Matthew Elder <m...@mattelder.org> wrote:

> - a fairly mature http application server
> - a fairly immature frontend http server (ipv6 and file serving -- to
> name a couple of the areas of weakness)
> - a fairly immature prevayler style persistence layer (lack of tools
> and learning curve are a couple of issues here)
> - a mixed bag of mature and immature utility functions of which some
> are slowly decaying (in terms of maintenance)
>
> Each of these four aspects could easily be a full-time job for one developer
>
> Some open questions to the community:
>
> 1. How valuable do you find each of these pieces to be in your work
> (whether it be personal or professional)?

So my interest in happs and now happstack is in using it for the
hackage server. The main reason I chose happs (and then worked on it
with Lemmih) over other frameworks is because of the persistence layer
(meaning we can use Haskell types) and the fact that it's a standalone
server (making deployment easy -- cabal install and go). These are the
things I thought important based on the existing hackage server
implementation which uses apache+cgi (ie hard to deploy) and file
system data storage (ie writing lots of manual serialisation /
deserialisation and having to mix in a lot of IO).

> 2. What do you see as the main weakness of the current implementation
> and/or community?

I've got no real complaints at the moment. Lack of decent http
authentication is a bit of a pain.

> 3. Which items could be potentially taken away to ease the overall
> code footprint with minimal impact and/or which items really belong in
> a separate project altogether (state anyone?).

Splitting things up is fine. Please don't deprecate or abandon the
state system. It's really the most interesting bit. I appreciate
everything in-memory doesn't work for every application. For the
hackage server I'm using a combination of in-memory indexes and "blob"
storage for large objects like tarballs. I'm also designing some
compact data structures for other indexes (eg tarball contents). So
far so good.

That said, I don't trust the state migration for upgrades etc. That's
not because I've found that it doesn't work in practice but because as
one of the authors of the binary package I'm worried by the fragility
of Binary class instances. For the hackage server I'm going to
implement a dump/restore in a separate non-binary format and use that
for backups and as an escape mechanism if upgrades do not go well.

Some people have mentioned sessions. Personally I have zero interest
in sessions. I'm trying to follow a restful design and sessions are
antithetical to that (and to caching).

Duncan

Bjorn Buckwalter

Jun 15, 2009, 8:53:27 PM
to HA...@googlegroups.com
On Mon, Jun 15, 2009 at 20:22, Duncan
Coutts<duncan...@googlemail.com> wrote:

> So my interest in happs and now happstack is in using it for the
> hackage server. The main reason I chose happs (and then worked on it
> with Lemmih) over other frameworks is because of the persistence layer

> (meaning we can use Haskell types)...

> Splitting things up is fine. Please don't deprecate or abandon the
> state system. It's really the most interesting bit.

I'm just a lurker who has hardly used happstack nor contributed to it
but I want to second this. What makes happstack really interesting and
unique is IMO the persistence model and (promise of) multimaster and
sharding. Those are the main reasons I'm lurking here and, I contend,
the reason happs[tack] has garnered some interest outside of the
Haskell community.

Thanks,
Bjorn

Hugo Gomes

Jun 16, 2009, 5:19:32 AM
to HA...@googlegroups.com
Solving that IPv6 bug would be really neat; it should be solved in 0.3, and it must be top priority.

It is useful especially for 127.0.0.1 testing, since *most* modern *nix systems come with it by default (hmm, don't they all nowadays?). And when you are not a team of 1 coder it comes as a must-have feature.

I think. 

Matthew Elder

Jun 16, 2009, 10:38:11 AM
to HA...@googlegroups.com
IPv6 only broke in the latest GHC 6.10.3, I believe. The problem is
that for some reason the TH code is now improperly detecting it. Anyone
want to take a stab?

thomas hartman

Jun 16, 2009, 3:01:30 PM
to HAppS
I perceive the most contentious issues to be Template Haskell and
State, and these are of course connected.

The detail issues are

1) having to learn template haskell or accept the black magic if you
want to use macid (personally, so far I have just accepted the black
magic)
2) documentation lacking (not so bad in my opinion, but certainly
could be better)
3) fragility / integrity fears (duncan's message)
4) some level of disappointment, perhaps, that sharding and
multimaster have not materialized.

The meta issues for somebody doing consumer-facing web 2.0 apps, like
myself:

* Does this really save me time compared to rails when I am
prototyping?
* Does this really save me time compared to rails when I am scaling?

My tentative answer to both metas is probably no, not yet. For me
personally I am better with happs than rails, but even then I
recognize that with rails you get the benefits of an established
ecosystem and plug and play solutions to many problems that have to
some extent been "widgetized": everything from payments integration to
calendar popups, and on and on. We don't have any of this
infrastructure.

That said, I still like the promise of happs and so the question
becomes, how to get from "here" to "there".

Some random ideas:

* How badly do we want sharding and multimaster, really? Is any of us
capable of implementing this? (Lemmih?) If so, how much work would it
be, and would it be possible to offer a bounty for implementing the
feature? I would be interested in hearing from lemmih and others on
this, and if there is interest we could then see if perhaps we could
get the project funded from the community -- how many of us are
willing to donate, say, $25 for this? Perhaps committed commercial
happs/haskell users that can pony up more?

Then again, maybe multimaster/sharding was never that important.

* We could have a conversation about whether Template Haskell is a net
win or loss with regard to macid usability and understandability. Is
it too much black magic? Is there a better way?

* Duncan thinks the state solution might be lossy. That's a problem,
for serious use, for consumer-facing commercial apps. What's the
scope, how can it be resolved? How can we be confident? Obviously this
is a problem I have thought somewhat hard about though not answered
(see macid stress testing in happstut). For now, I am just trusting
macid plus backups.

Very interesting thread!





On Jun 16, 10:38 am, Matthew Elder <m...@mattelder.org> wrote:
> IPv6 only broke in the latest GHC 6.10.3, I believe. The problem is
> that for some reason the TH code is now improperly detecting it. Anyone
> want to take a stab?

thomas hartman

Jun 16, 2009, 3:03:22 PM
to HAppS
Speaking of which, here's another community question I'm interested in
getting answered.

How many followers of this NG are interested in HAppS for building
commercial facing web 2.0 type apps, and how many are interested in
HAppS for other reasons? (In house apps, web services, etc).


Alex Jacobson

Jun 16, 2009, 3:31:39 PM
to HA...@googlegroups.com
My original goal with HAppS was to make it as simple as possible for a set of Haskell developers to prototype, iterate, and scale internet applications.  This required four major components:

   * I/O via Internet protocols (hence HAppS.HTTP, HAppS.SMTP, and HAppS.DNS)
   * Handling of Internet data types e.g. XML and MIME (hence HAppS.Data)
   * Efficient Relational Collections (hence HAppS.IxSet)
   * Persistent Global State (hence HAppS.State)

Having all of these components in a single executable in a language with a strong type system would massively reduce development, test, deployment, and scaling complexity as compared to the LAMP stack approach with all of its network plumbing and impedance matching between components.  I wanted to eliminate the need for DBAs and sysadmins for most apps.  The advent of data center as a service platforms e.g. Amazon and RackSpace, makes this approach even more appealing.  I now imagine patch-tag extended with a workflow that looks something like this:

1. newProject -- creates a darcs repo associated with AWS credentials on patch-tag
2. pull and develop the project on your laptop with minimal typing of boilerplate or reading of documentation.
3. update -- every submitted patch causes HAppS-Tag to run project/test.sh on EC2.  Patches that fail tests are rejected.
4. publish <tag> <name> -- pushes the current version to a server set.  if things start to look weird, roll back and report

No project member needs to think about servers or network architecture or database schemas.  The main imperative is just to think about a domain name and product features.  Some work is needed to make things this nice, but not as much as it may appear.  Here are what I think are the major todo items to get there.

== Usability of HAppS State ==

TH and the type system have advanced considerably since HAppS-State was written.  State works by requiring all the types within the global state to be instances of (Show,Ord,Eq,Typeable,Version, Serialize).   Writing the relevant deriving/instance/deriveInstance lines for each and every type is a massive beat-down and makes the code less readable (I generally stick all these declarations in some block at the end that I can ignore).   It would be really great if someone updated mkMethods so that it also:

   1. traverses the component types and generates the relevant needed instance or deriving declarations
   2. infers the start/empty state for MyState
   3. declares the entry point for MyState as myState

That would reduce all the work of using happs state just to this one mkMethods TH call and the call to startSystemState.

== Reliability with MultiMaster (starting on Amazon EC2) ==

Before working on allowing HAppS to handle larger data sets (sharding), it makes sense to make sure it handles existing data sets extremely reliably.  Amazon has provided a lot of the server plumbing required to make this job simpler.   We can generalize to other platforms in the future, but we might as well take advantage of what Amazon provides now (perhaps becoming the standard operating system there).

* Define a standard HAppS machine image for use with EC2
* Adapt multimaster so it uses Elastic Block Store (then only one machine needs to be writing checkpoints/logs)
* Integrate multimaster work with Amazon auto-scaling so it spins up/down as many machines as necessary
* Define update/upgrade code to work with cloudwatch so if the upgrade goes badly, you rollback
* Integrate serving static files with Amazon S3 and cloudfront
* Blob abstraction -- blobs should be stored to S3 so that each happs instance can save/load the same data

Note, this activity would come in the context of bulletproofing multimaster and making it industrial strength.  The current multimaster code is more proof of concept and needs a deep clean.

== Sharding ==

Once multimaster is really bulletproof, it may make sense to do sharding.   Until we have automated sharding, the solution for those concerned is manual sharding.  The correct approach to this depends on your application, but a simple approach that works for most cases is a directory server that maps object IDs to the server that hosts it.  At 128 bits per entry, it is highly unlikely that you will have enough objects to exceed 4GB.  

-Alex-

Alex Jacobson

Jun 16, 2009, 3:50:30 PM
to HA...@googlegroups.com
Just to be clear, aside from mkMethods, the TH for HAppS state is just a bunch of instance declarations.  You can manually do this sort of thing for each type if you don't like the magic:

> deriving instance Show OrderId
> deriving instance Eq OrderId
> deriving instance Ord OrderId
> deriving instance Typeable OrderId
> instance Version OrderId
> $(deriveSerialize ''OrderId) 

The point of my last mail is that recent GHC probably lets you fold all of these declarations into mkMethods.
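
For readers who have not used it, here is roughly what the full set of
declarations for one small component looks like today, i.e. the
boilerplate a smarter mkMethods would absorb. This is a sketch from
memory of the 0.x API; the module layout and Component details may
differ slightly between releases:

{-# LANGUAGE TemplateHaskell, DeriveDataTypeable, TypeFamilies #-}
module CounterComponent where

import Happstack.State
import Happstack.Data
import Control.Monad.State  (get, modify)
import Control.Monad.Reader (ask)
import Data.Typeable        (Typeable)

newtype Counter = Counter Int
  deriving (Show, Eq, Ord, Typeable)

-- The per-type boilerplate under discussion:
instance Version Counter
$(deriveSerialize ''Counter)

instance Component Counter where
  type Dependencies Counter = End
  initialValue = Counter 0

-- The actual domain logic:
bump :: Update Counter Int
bump = do
  modify (\(Counter n) -> Counter (n + 1))
  Counter n <- get
  return n

peek :: Query Counter Int
peek = do
  Counter n <- ask
  return n

-- Generates the Bump/Peek event types and their instances; the component
-- is then started with startSystemState as usual.
$(mkMethods ''Counter ['bump, 'peek])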

As for bulletproofing multimaster, I am willing to contribute some funds for lemmih to do it, but I don't want to be the only one doing so.


-Alex-





Matthew Elder

Jun 16, 2009, 5:28:03 PM
to HA...@googlegroups.com
Alex, perhaps we can start a pledgie and we can get donb etc to help
with getting the word out?

Also, do you have the time to take the reins of the state / data parts
for a while so I can focus more on the current issues with the http
implementation?
Need somewhere to put your code? http://patch-tag.com

Bjorn Buckwalter

Jun 16, 2009, 10:04:16 PM
to HA...@googlegroups.com
On Tue, Jun 16, 2009 at 17:28, Matthew Elder<ma...@mattelder.org> wrote:
>
> Alex, perhaps we can start a pledgie and we can get donb etc to help
> with getting the word out?

I think this is a good idea, kudos to Thomas Hartman for bringing it
up. I would contribute a modest amount (<$100) toward improving
state/multimaster/sharding (code and documentation). I don't use
happstack but would like to see these things working and working well!

Thanks,
Bjorn

Gregory Collins

Jun 16, 2009, 10:39:23 PM
to HA...@googlegroups.com
thomas hartman <thomash...@googlemail.com> writes:

> How many followers of this NG are interested in HAppS for building
> commercial facing web 2.0 type apps, and how many are interested in
> HAppS for other reasons? (In house apps, web services, etc).

For me the answer is "both".

Gregory Collins

Jun 16, 2009, 10:45:07 PM
to HA...@googlegroups.com
Alex Jacobson <al...@alexjacobson.com> writes:

> Before working on allowing HAppS to handle larger data sets
> (sharding), it makes sense to make sure it handles existing data sets
> extremely reliably. Amazon has provided a lot of the server plumbing
> required to make this job simpler. We can generalize to other
> platforms in the future, but we might as well take advantage of what
> Amazon provides now (perhaps becoming the standard operating system
> there).

Having Amazon facilities plug into happstack might be cool but please,
only as an optional dependency. We use Amazon at work but I really don't
want to be "married" to it.

> Once multimaster is really bulletproof, it may make sense to do
> sharding. Until we have automated sharding, the solution for those
> concerned is manual sharding. The correct approach to this depends on
> your application, but a simple approach that works for most cases is a
> directory server that maps object IDs to the server that hosts it. At
> 128 bits per entry, it is highly unlikely that you will have enough
> objects to exceed 4GB.

The directory server then becomes another scaling bottleneck, and you'd
*need* multimaster/hot-swap to make sure that a failure of your
dictionary server didn't take down the other cluster.

Another approach you might consider is consistent hashing.
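
For reference, a toy sketch of the consistent hashing idea (purely
illustrative, not tied to any Happstack API; a production ring would add
virtual nodes and replication):

import qualified Data.Map as Map
import Data.Hashable (Hashable, hash)

-- Nodes placed on an integer "ring", keyed by their hash.
type Ring node = Map.Map Int node

mkRing :: Hashable node => [node] -> Ring node
mkRing nodes = Map.fromList [ (hash n, n) | n <- nodes ]

-- A key is owned by the first node at or after its hash, wrapping around.
-- Adding or removing a node only reassigns the keys in that node's arc,
-- so there is no central directory to keep available.
nodeFor :: Hashable key => Ring node -> key -> node
nodeFor ring key =
  case Map.lookupGE (hash key) ring of
    Just (_, node) -> node
    Nothing        -> snd (Map.findMin ring)   -- assumes a non-empty ring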

Alex Jacobson

Jun 17, 2009, 1:01:34 PM
to HA...@googlegroups.com
On 6/16/09 10:45 PM, Gregory Collins wrote:
> Alex Jacobson <al...@alexjacobson.com> writes:
>
>> Before working on allowing HAppS to handle larger data sets
>> (sharding), it makes sense to make sure it handles existing data sets
>> extremely reliably.  Amazon has provided a lot of the server plumbing
>> required to make this job simpler.  We can generalize to other
>> platforms in the future, but we might as well take advantage of what
>> Amazon provides now (perhaps becoming the standard operating system
>> there).
>
> Having Amazon facilities plug into happstack might be cool but please,
> only as an optional dependency. We use Amazon at work but I really don't
> want to be "married" to it.

I don't want to be married to it either, but Amazon has provided a lot of services that we would need to write if we don't use Amazon.  It seems like the best way to start is running on top of Amazon and then writing replacements for the specific services as makes sense, e.g. auto-scaling, server monitoring, reliable network disk for checkpoint and log, blob storage, etc.

See http://aws.amazon.com/ec2/#features for what they are providing.

Step 1: run on top of Amazon EC2 + S3
Step 2: run on top of a few more cloud providers, e.g. Rackspace
Step 3: standardize on a cloud provider SPI

>> Once multimaster is really bulletproof, it may make sense to do
>> sharding.  Until we have automated sharding, the solution for those
>> concerned is manual sharding.  The correct approach to this depends on
>> your application, but a simple approach that works for most cases is a
>> directory server that maps object IDs to the server that hosts it.  At
>> 128 bits per entry, it is highly unlikely that you will have enough
>> objects to exceed 4GB.
>
> The directory server then becomes another scaling bottleneck, and you'd
> *need* multimaster/hot-swap to make sure that a failure of your
> dictionary server didn't take down the other cluster.

Yes, as I said we want multimaster bulletproof before we work on sharding.  Once multimaster is working, a simple directory server is very easy to keep up and scale.  I'm also skeptical that without multimaster on the app side, you could easily overheat a directory server so the priority is really multimaster.

> Another approach you might consider is consistent hashing.

Consistent hashing is how I'd like to see sharding implemented.  The issue with consistent sharding is that you need good handling of splitting/merging parts of hash space (based on density).  It is certainly possible to do that, but it requires care because of the many corner cases.  My view is that we use a directory server approach until we get automated sharding working properly.

-Alex- 



thomas hartman

Jun 17, 2009, 1:18:44 PM
to HAppS, duncan...@googlemail.com
I agree with Alex that multimaster on EC2 is a good approach, and
would add that if the Eucalyptus project (eucalyptus.com), an Amazon
Web Services clone, bears fruit -- which I believe likely -- you will be
able to run AWS-like clouds on private infrastructure using the same
code.

That said, I was thinking that we should perhaps address Duncan's
concerns about the macid storage mechanism (state migration for
upgrades) first.

Bulletproof multimaster on a lossy datastore does not sound like
something that will get a lot of traction to me.

If this is a FUD phantom it needs to be addressed as such; otherwise
really fixed.




Alex Jacobson

Jun 17, 2009, 1:30:17 PM
to HA...@googlegroups.com
On 6/15/09 8:22 PM, Duncan Coutts wrote:
>> 2. What do you see as the main weakness of the current implementation
>> and/or community?
>
> I've got no real complaints at the moment. Lack of decent http
> authentication is a bit of a pain.

http-auth is largely an artifact of a prior era.  A lib that is an open-ID consumer and fails over to email auth otherwise would be more useful.

> Splitting things up is fine. Please don't deprecate or abandon the
> state system. It's really the most interesting bit. I appreciate
> everything in-memory doesn't work for every application. For the
> hackage server I'm using a combination of in-memory indexes and "blob"
> storage for large objects like tarballs. I'm also designing some
> compact data structures for other indexes (eg tarball contents). So
> far so good.

Very cool.

> That said, I don't trust the state migration for upgrades etc. That's
> not because I've found that it doesn't work in practice but because as
> one of the authors of the binary package I'm worried by the fragility
> of Binary class instances. For the hackage server I'm going to
> implement a dump/restore in a separate non-binary format and use that
> for backups and as an escape mechanism if upgrades do not go well.

Can you explicate?  I think HAppS originally serialized using read/show.  If Binary is unreliable, that is interesting.
If we are not going to use a binary format, my generic instinct is to use XML.

-Alex-

Simon Michael

Jun 17, 2009, 2:17:06 PM
to ha...@googlegroups.com
In hledger I currently use only basic http serving. Getting to this point wasn't easy, but I now have reliable, fast
http serving of a small amount of ASCII content with a small memory footprint and a small amount of code. Off the top of
my head, here's what would help me the most:

- documentation and examples at the level of Python frameworks such as Django

- rich support for typical web 2.0 app needs - styling, ajax, forms, authentication etc.

- full support for non-ascii content everywhere

Matthew Elder

Jun 17, 2009, 4:07:35 PM
to HA...@googlegroups.com
While basic auth may be "dated", I can vouch at least with my work that it is still used like nuts all over the place, and is crucial for SSO (single sign-on) in the intranet. Kerberos and Negotiate-style authentication are also valuable. I would love to see an open-id plugin, but I think interoperability is key to creating a more flexible product. They could even be combined; as an example, an open-id provider could authenticate the user via basic-auth before passing on the authentication token to the consuming web-app.
--

stepcut

Jun 17, 2009, 7:58:08 PM
to HAppS
I use happstack for business and pleasure. For me the key essential
piece is happstack-state. In fact, I don't really like happstack-
server at all. I hope to replace it with a combination of hyena and
URLT.

Happstack-Server is based on lazy I/O. We have already seen many
reports about connection problems arising from the unpredictability of
lazy IO (such as running out of file handles, etc). Hyena uses left-
fold enumeration instead of laziness. Serving files, etc, is done
strictly, and with bounded resources (such as the number of file
handles or amount of RAM used). It is also pretty fast today, and will
likely be very fast in the future. The target is 10,000 queries per
second I believe.

The Happstack-Server request pattern matching DSL (dir, method,
path, withDataFn, etc.) provides very little (type) safety. It is very
easy to generate invalid links to resources that are internal to your
application. Additionally, it does not provide a safe way to compose
modules from different vendors. For example, there is no namespace
separation. So, if you had an Image Gallery library, and a Blog
library which both tried to handle requests to '/some/path', you would
be stuck.
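
To make the namespace point concrete, a small sketch using the
combinators named above (the handlers are invented, and the types are
abbreviated from memory):

import Control.Monad (msum)
import Happstack.Server (ServerPartT, Response, dir, ok, toResponse)

-- Imagine these come from two unrelated libraries:
galleryHandler :: ServerPartT IO Response
galleryHandler = dir "some" $ dir "path" $ ok (toResponse "image gallery")

blogHandler :: ServerPartT IO Response
blogHandler = dir "some" $ dir "path" $ ok (toResponse "blog")

-- Composed with msum, whichever handler is listed first silently wins;
-- nothing in the types flags the overlap, and links to "/some/path"
-- elsewhere in the app are just strings the compiler cannot check.
site :: ServerPartT IO Response
site = msum [galleryHandler, blogHandler]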

I have started work on an alternative system which addresses these
issues and more. You can see my initial design here:

http://src.seereason.com/~jeremy/SimpleSite1.html

And the latest implementation here:

http://src.seereason.com/urlt/

I will also say, though, that I think a killer feature of Happstack is
the fact that it is a loosely bound together collection of libraries.
As a result it is relatively easy to swap out the parts that don't
work for you with something that works better (or, perhaps just
differently). So, I am all for the continued efforts to keep happstack-
server, happstack-state, etc, separated. Though, I don't think I would
object to people writing a more rails-like library on top of them if
they felt so inclined. In fact, if that is not possible, we should fix
it so that it is possible.

So, for me the two key features of happstack are:

1. happstack-state
2. the ability to mix-and-match the parts to build what is right for
you

And, my primary interests in future development are:

1. improving happstack-state so that it scales smoothly from small to
very large (see the other thread I created)

2. figure out how to support hyena (in addition to the current
happstack-server). I have successfully used hyena with happstack-
state, so there is no major conflict there, which is excellent. But I
expect there are things that could be done to make it even better.

- jeremy

stepcut

Jun 17, 2009, 8:14:59 PM
to HAppS
On Jun 15, 1:20 pm, MightyByte <mightyb...@gmail.com> wrote:
> Another concern is that even if sharding was available tomorrow once
> the database becomes large enough, creating checkpoints may become a
> problem.  I've already discovered that regular checkpointing helps to
> reduce memory usage, but when the checkpoint files become large, it
> may be too time consuming.  At this point, I haven't heard a
> satisfactory answer to this concern.

Too time-consuming in what way? Checkpoints do not suspend the
transaction system. New updates and queries can be processed while a
checkpoint is being written. Additionally, checkpointing is not very
CPU intensive. So, it should not slow down processing. Writing the
state to disk is sequential, so it should achieve near optimal write
performance for your drive. Obviously, saturating your disk I/O could
be problematic in a normal database situation. Since writing
checkpoints and events to disk is pretty much the only disk I/O that
happstack does, it should not be a problem, because no one else is
fighting for those I/O slices. One exception would be if you are using
fileServe to serve files from the same disk. But there are many
workarounds for that, such as using a different machine for serving static
content (or even using S3).

A modern SATA drive can do a sustained write at around 80MB/s. Servers
tend to max out at around 32GB of RAM. So to serialize 32GB would take
around 7 mins. But, that should not really cause a significant slow
down.

Additionally, if you are running multimaster (i.e., the same data is
stored on multiple machines), you do not have to checkpoint on all of
them. You could have a specific machine that is dedicated to
checkpointing. It might have a faster disk, and could even stop
handling HTTP requests while it was checkpointing if desired...

- jeremy

stepcut

Jun 17, 2009, 8:47:58 PM
to HAppS
Oh, I also wanted to add that I think happstack will have achieved
ultimate success when it no longer exists.

For example, happstack-ixset is not really specific to happstack,
except that it depends on happstack-data. But happstack-data is not
really happstack specific either. And, the stuff in happstack-utils
can mostly be sent upstream to other libraries on hackage if someone
had that ambition. happstack-state and happstack-server are also
independently useful, and many people only use one or the other. So,
over time, they might develop separate user bases and developers. And
'competing' libraries might emerge as well, such as happstack-server
vs hyena.

Hence, at some future date, happstack could be merely a concept that
builds on top of a number of existing libraries. In the same way that
we don't think of mtl or stm as being an explicit part of happstack,
we might some day not think of any of the current components as belonging
exclusively to happstack.

At that point, happstack will be more of a philosophy of how to build
a highly scalable, stable website rather than any specific
implementation of code.

- jeremy

Alex Jacobson

Jun 17, 2009, 9:22:08 PM
to HA...@googlegroups.com
Form support is there. Check out Happstack.Data.Pairs:
automatic conversion between name-value pairs and Haskell data types.

-Alex-

Alex Jacobson

Jun 17, 2009, 9:30:23 PM
to HA...@googlegroups.com
The key to good performance is to write the log to a different disk than you are writing the checkpoint.

If you are not doing that and your checkpoint is large, then you will run into major performance problems as each log write then takes disk access time rather than disk throughput time.

-Alex-

MightyByte

Jun 17, 2009, 9:42:27 PM
to HA...@googlegroups.com
Ok, that is an acceptable answer to my checkpointing concern. For
some reason I had been assuming that the app would block during the
write. Thanks for clearing that up.

Simon Michael

Jun 18, 2009, 1:46:32 AM
to ha...@googlegroups.com
Thanks! I will do so.

Kamil Dworakowski

Jun 19, 2009, 3:17:45 AM
to HAppS

> == Usability of HAppS State ==
> TH and the type system have advanced considerably since HAppS-State was written.  State works by requiring all the types within the global state to be instances of (Show,Ord,Eq,Typeable,Version, Serialize).   Writing the relevant deriving/instance/deriveInstance lines for each and every type is a massive beat-down and makes the code less readable (I generally stick all these declarations in some block at the end that I can ignore).   It would be really great if someone updated mkMethods so that it also:
>    1. traverses the component types and generates the relevant needed instance or deriving declarations
>    2. infers the start/empty state for MyState
>    3. declares the entry point for MyState as myState
> That would reduce all the work of using happs state just to this one mkMethods TH call and the call to startSystemState.

I would like to do that. It is both well scoped and solves an
annoyance I had when starting to use the State. I don't know TH nor
the State source code, but I have experience with similar concepts
from other languages. I have implemented AOP in Nemerle largely by
means of its macro system, which I think is similar to TH; and I have
been using a prevayler-like memory persistence for a couple of years
now. To avoid confusion: I don't expect any funding.

Anton van Straaten

Jun 19, 2009, 10:29:02 PM
to HA...@googlegroups.com
I would take a stab at this IPv6 issue if I could replicate the problem,
but it seems to only be a problem on Mac OS X and perhaps on Vista.
I've tried it on Debian 5.0 and it works fine. I've also tried on
Windows XP, but that uses IPv4 by default, and also works fine.

Pasqualino Assini

Jun 20, 2009, 3:58:55 PM
to HA...@googlegroups.com
Hi,

I find this discussion about future advanced happstack functionality really fascinating.


However, let's not forget that, currently, the major stumbling block to the development of Haskell Web applications is not the lack of advanced functionality such as superior scalability but something much more elementary, namely the lack of:

- a standard API to transparently run a Haskell Web app into any Web server

- an industrial-strength, pure Haskell, Web server implementation


A quick look at hackage reveals that:

- There are already many attempts at defining a common web app API based on existing standards (cgi, fastcgi, hack) as well as many server-specific APIs. All these APIs perform the same function and often have very similar, but maddeningly incompatible, definitions.

- An even greater effort has been spent trying to write a pure Haskell web server (http-server, salvia, mohws, httpd-shed, hyena ...). However, none comes even remotely close to providing the level of functionality that a developer would reasonably expect from even the lightest web server (a la nginx or lighttpd).

If we really want to make Haskell a viable language for Web development, what is sorely needed is for the authors of all these packages to work together to distill the best ideas from their well-meaning but often half-baked attempts and give the Haskell community the simple API and solid and complete implementation that is needed.

Regards,

               titto

--
Pasqualino "Titto" Assini, Ph.D.
http://quicquid.org/







Andrey Chudnov

unread,
Jun 20, 2009, 5:40:02 PM6/20/09
to HAppS
> 1. How valuable do you find each of these pieces to be in your work
> (whether it be personal or professional)?

- application server - very valuable.
- frontend http server - useful. Btw, if you think it's not that good,
why not use mohws (http://hackage.haskell.org/package/mohws)? Or
maybe merge happstack http server with mohws?
- persistence - that was and still is one of the biggest promises of
HAppS for me. But, it seems that nobody in this community knows how to
do it right (and that's a hard task indeed). How about forking it out
and hoping that some distributed systems guru would pick it up? Or
scale down the requirements a little bit - just drop the distributed
part altogether. Or just switch to an RDBMS and write another
turbinado :)
- utils - important, but should be maintained separately.

Happstack would be very useful for me if it had end-to-end type
safety. Combined with expressive type system that would be a killer
feature. That's, also, one of the features HAppS promised.

> 2. What do you see as the main weakness of the current implementation
> and/or community?

The ambitions are too high - especially with respect to HAppS-State.
Otherwise, the community is great (though, more docs would be nice).

> 3. Which items could be potentially taken away to ease the overall
> code footprint with minimal impact and/or which items really belong in
> a separate project alltogether (state anyone?).

Everything, except for the app server and the tutorial application.
The tutorial application would also serve as a test of how well
different parts (assuming they are separate projects) integrate.
State should be separate. HTTP server should be superseded by/merged
with mohws. Each util should be a separate project.

/Andrey

Gregory Collins

unread,
Jun 20, 2009, 6:04:21 PM6/20/09
to HA...@googlegroups.com
Andrey Chudnov <achu...@gmail.com> writes:

> - frontend http server - useful. Btw, if you think it's not that good,
> why not use mohws (http://hackage.haskell.org/package/mohws)? Or
> maybe merge happstack http server with mohws?

One of my goals for Hac Phi is to allow for alternate HTTP backends for
Happstack. Matt Elder mentioned wanting to try Hyena as a backend also,
and personally out of all of the alternatives I find the iteratee-based
approach the most promising. I'll add mohws to my "to-read" list (although
it doesn't seem to have too much in the way of source-level
documentation).

There isn't too much coding involved in writing a shim layer between
happstack ServerParts and other Haskell web libraries; it should mostly
be a matter of translating between Requests and Responses. Then we can
line up the servers and race them :)

> Happstack would be very useful for me if it had end-to-end type
> safety. Combined with expressive type system that would be a killer
> feature. That's, also, one of the features HAppS promised.

Could you please elaborate a little bit on what you mean by "end-to-end
type safety" or "expressive type system"? What in particular is missing
that you would like to see?

Andrey Chudnov

unread,
Jun 20, 2009, 6:37:18 PM6/20/09
to HAppS


On Jun 20, 6:04 pm, Gregory Collins <g...@gregorycollins.net> wrote:
> Andrey Chudnov <achud...@gmail.com> writes:
> > - frontend http server - useful. Btw, if you think it's not that good,
> > why not use mohws (http://hackage.haskell.org/package/mohws)? Or
> > maybe merge happstack http server with mohws?
>
> One of my goals for Hac Phi is to allow for alternate HTTP backends for
> Happstack. Matt Elder mentioned wanting to try Hyena as a backend also,
> and personally out of all of the alternatives I find the iteratee-based
> approach the most promising. I'll mohws to my "to-read" list (although
> it doesn't seem to have too much in the way of source-level
> documentation).

Maybe, I'll be at Hac Phi too.
I picked mohws just because it's one of the oldest haskell http
servers, so it's likely to be stable. I haven't looked at Hyena, but
it has even less docs than mohws :)

> There isn't too much coding involved in writing a shim layer between
> happstack ServerParts and other Haskell web libraries, it should mostly
> be a matter of translating between Requests and Responses. Then we can
> line up the servers and race them :)

Good idea. What about that new Hack library in Hackage?
http://hackage.haskell.org/package/hack
Would it do the trick?

> > Happstack would be very useful for me if it had end-to-end type
> > safety. Combined with expressive type system that would be a killer
> > feature. That's, also, one of the features HAppS promised.
>
> Could you please elaborate a little bit on what you mean by "end-to-end
> type safety" or "expressive type system"? What in particular is missing
> that you would like to see?

Here's what I meant: I should be able to program with native haskell
ADT's and have them be safely converted to/from wire/storage formats
when needed. I didn't mean this doesn't work at all (in fact it does
work well with XML). But somehow it didn't work very well for me.
Maybe having a high-level rails-like library gluing the parts together
to make things like that dead simple would be nice -- but that
wouldn't be possible until all of the components (especially,
persistence) are mature. I think I wrote something along these lines a
while ago.

Gregory Collins

unread,
Jun 20, 2009, 7:31:10 PM6/20/09
to HA...@googlegroups.com
Andrey Chudnov <achu...@gmail.com> writes:

> Maybe, I'll be at Hac Phi too.

That's great, it looks like there will be enough people attending who
are interested in happstack for us to be able to make some real progress
towards some of these goals.

> I picked mohws just because it's one of the oldest haskell http
> servers, so it's likely to be stable. I haven't looked at Hyena, but
> it has even less docs than mohws :)

Also, its interface is in flux; Johan seems to want to rewrite it to use
the left-fold definition from the iteratee library (a good idea in my
opinion).

There's also the standard "HTTP" library, which has some request-parsing
stuff in it; we should consider trying it out too. Happstack should
probably support FastCGI (if it doesn't already?) as well.

>> There isn't too much coding involved in writing a shim layer between
>> happstack ServerParts and other Haskell web libraries, it should mostly
>> be a matter of translating between Requests and Responses. Then we can
>> line up the servers and race them :)
>
> Good idea. What about that new Hack library in Hackage?
> http://hackage.haskell.org/package/hack
> Would it do the trick?

Sorry for not being precise -- I meant it should be easy to write
shim*s* (plural) between Happstack and other Haskell web libraries. If
you take a look at Happstack.Server.HTTP.Types, it already looks awfully
similar to that "Hack" package.

I envisioned that part of Happstack serving a similar role as the "Rack"
interface, i.e. a common-sense interface between web server libraries
and application server code.

>> Could you please elaborate a little bit on what you mean by "end-to-end
>> type safety" or "expressive type system"? What in particular is missing
>> that you would like to see?
>
> Here's what I meant: I should be able to program with native haskell
> ADT's and have them be safely converted to/from wire/storage formats
> when needed. I didn't mean this doesn't work at all (in fact it does
> work well with XML). But somehow it didn't work very well for me.
> Maybe having a high-level rails-like library gluing the parts together
> to make things like that dead simple would be nice -- but that
> wouldn't be possible until all of the components (especially,
> persistence) are mature. I think I wrote something along these lines a
> while ago.

I think we're in agreement that the core stuff should mature before we
really tackle that. I've (tediously, by now) already voiced my doubts
about "one ring to rule them all" data layers :)

Gregory Collins

unread,
Jun 20, 2009, 7:53:42 PM6/20/09
to HA...@googlegroups.com
Gregory Collins <gr...@gregorycollins.net> writes:

> Happstack should probably support FastCGI (if it doesn't already?) as
> well.

Mea culpa, don't know why I didn't notice
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/happstack-fastcgi
before.

Matthew Elder

unread,
Jun 20, 2009, 11:47:50 PM6/20/09
to HA...@googlegroups.com
As far as standardization goes, I think the reverse http "application
server" approach that Happstack supports currently is what is
generally accepted as the de facto scalable solution, even outside the
haskell community.

This is the model the java community uses for the most part also AFAIK.

While fastcgi is convenient, that is where the road ends IMHO.
Semantically speaking, reverse http and fastcgi accomplish the same
thing -- a persistent process which serves pages for your apps. Don't
get hung up on the notion of http as being only a front-facing server.
It is this realization that has driven the designs of projects such as
lighttpd. In its last few versions, lighttpd abstracted the proxy load
balancing feature to work not only for http, but also for fcgi, cgi,
etc. I don't see fcgi having much of a following for scalable webapps
into the future.

If your app uses 50 mb in fcgi it will also use 50 mb as a backend
http server as they are both daemonlike persistent processes; why are
we so worried about supporting fcgi?

My 2c

Gregory Collins

unread,
Jun 21, 2009, 3:25:34 AM6/21/09
to HA...@googlegroups.com
Matthew Elder <ma...@mattelder.org> writes:

> If your app uses 50 mb in fcgi it will also use 50 mb as a backend
> http server as they are both daemonlike persistent processes; why are
> we so worried about supporting fcgi?

In principle I'm inclined to agree with you; on the other hand, there's
little reason NOT to support it -- happstack-fastcgi is only ~220
LOC. FastCGI is supposed to be marginally more efficient (~5%?), and if
you use it e.g. lighttpd will spawn/kill your server instances for you
(also not such a big deal...)

Can we agree that (for example) something like happstack-hyena might get
implemented using the same basic strategy as happstack-fastcgi? We
should be able to knock three or four of those out during Hac Phi, no
sweat. Maybe the http server should be split out from the "rack"-like
middleware layer in happstack-server, as "just another backend"?

stepcut

unread,
Jun 21, 2009, 3:47:53 PM6/21/09
to HAppS
On Jun 21, 2:25 am, Gregory Collins <g...@gregorycollins.net> wrote:

> Can we agree that (for example) something like happstack-hyena might get
> implemented using the same basic strategy as happstack-fastcgi?

I would think, no. It is my impression that happstack-fastcgi is a
glue layer that translates fastcgi requests into happstack-server
Requests which are then handled by the normal happstack-server code as
if it was a normal Request? On the other hand, Hyena is a complete
replacement for happstack-server. In theory hyena *could* act as a
proxy which passed requests to the existing happstack-server backend, but
that would defeat the purpose of using hyena in the first place --
which is to use a backend which is based on left-fold enumerations
instead of lazy I/O.

In fact, you would likely want happstack-fastcgi support *for* hyena
as well.

Hyena defines a generic "Web Application Interface" which could be
used by any web backend:

http://github.com/tibbe/hyena/blob/9655e9e6473af1e069d22d3ee75537ad3b88a732/Network/Wai.hs

In theory, if happstack-server, hyena, and happstack-fastcgi all used
this interface, then you could use happstack-server or hyena with
happstack-fastcgi, and happstack-fastcgi would not have to have any
code that was specific to either one.

Though, I expect that converting happstack-server to use Network.Wai
would not be backwards compatible...

(And, there is no particular reason to believe that Network.Wai is the
best generic interface either. I have not looked at any other options.)

Also, I wrote a tutorial a while back about using hyena with happstack-
state:

http://src.seereason.com/examples/happstack-hyena-tutorial/Hyena.html

And I also hacked up an incomplete version of SimpleHTTP that is based
on Hyena:

http://src.seereason.com/examples/happstack-hyena-tutorial/simpleHyena.hs

This code has suffered bit-rot, but is hopefully of some use.

- jeremy

Gregory Collins

unread,
Jun 21, 2009, 4:51:58 PM6/21/09
to HA...@googlegroups.com
stepcut <jer...@n-heptane.com> writes:

> On Jun 21, 2:25 am, Gregory Collins <g...@gregorycollins.net> wrote:
>
>> Can we agree that (for example) something like happstack-hyena might
>> get implemented using the same basic strategy as happstack-fastcgi?
>
> I would think, no. It is my impression that happstack-fastcgi is a
> glue layer that translates fastcgi requests into happstack-server
> Requests which are then handled by the normal happstack-server code as
> if it was a normal Request? On the other hand, Hyena is a complete
> replacement for happstack-server.

OK, let's go over this. Happstack-server contains:

1. datatype definitions for Request/Response types:

http://happstack.com/docs/0.2/happstack-server/0.2/Happstack-Server-HTTP-Types.html

2. a monad transformer stack for defining and building web
applications, having neat features like filtering/monoid
instances/short-circuiting/etc. (i.e. ServerPartT):

http://happstack.com/docs/0.2/happstack-server/0.2/Happstack-Server-SimpleHTTP.html

This layer depends on the types from #1 above.

3. code which listens on a socket, speaks the HTTP protocol, parses
Requests, passes Requests to a function of type (Request -> IO
Response), and sends the Response back out to the socket (i.e. the
actual "web server" part):

http://happstack.com/docs/0.2/happstack-server/0.2/Happstack-Server-HTTP-LowLevel.html

Application code is written to the interfaces of #1 and #2; the use of
#3 is not mandatory -- indeed happstack-fastcgi works by replacing
happstack's HTTP protocol stuff with a shim that translates types from
the fastcgi library (which defines its own Request/Response types) to
happstack's types (#1 above), then runs the monad transformer from #2 by
calling runServerPartT/runWebT. Happstack's HTTP serving stuff is
bypassed altogether (out of all of those pieces, it's the most suspect).

I'm proposing that a similar shim layer could be written for hyena.
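
To make that concrete, here is a rough sketch of such a shim (purely
illustrative -- the stand-in types below are made up, and the real
happstack-server and hyena types carry much more information):

-- 'Backend' stands in for hyena (or any other frontend); the H* types
-- stand in for happstack's Request/Response from #1 above.
data BackendRequest  = BackendRequest  { bMethod :: String
                                       , bPath   :: String
                                       , bBody   :: String }
data BackendResponse = BackendResponse { bCode   :: Int
                                       , bOut    :: String }

data HRequest  = HRequest  { hMethod :: String
                           , hPath   :: String
                           , hBody   :: String }
data HResponse = HResponse { hCode   :: Int
                           , hOut    :: String }

toHappstackRequest :: BackendRequest -> HRequest
toHappstackRequest (BackendRequest m p b) = HRequest m p b

fromHappstackResponse :: HResponse -> BackendResponse
fromHappstackResponse (HResponse c o) = BackendResponse c o

-- Given an application written against the happstack-style types (#1/#2),
-- produce a handler the other backend can run -- the same strategy
-- happstack-fastcgi uses for the fastcgi library.
shim :: (HRequest -> IO HResponse)
     -> (BackendRequest -> IO BackendResponse)
shim app = fmap fromHappstackResponse . app . toHappstackRequest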


> Hyena defines a generic "Web Application Interface" which could be
> used by any web backend:

So does happstack :)


> In theory, if happstack-server, hyena, and happstack-fastcgi all used
> this interface, then you could use happstack-server or hyena with
> happstack-fastcgi, and happstack-fastcgi would not have to have any
> code that was specific to either one.

I think you're confused about happstack-fastcgi; it's just a thin
adapter to the pre-existing fastcgi library from hackage (which speaks
the fastcgi protocol). You could write a similar shim for apps written
to run on hyena (because hyena defines its own version of #1), maybe
that's what you meant?

I agree that if all of the libraries wrote to the same interface, then
there would be no need for any adapter layers -- this is what the "Rack"
interface does for Ruby. The haskell "Hack" library mentioned yesterday,
a Rack clone, has a similar intention.


> Though, I expect that converting happstack-server to use Network.Wai
> would not be backwards compatible...

Yes, it would break all of the apps, because those are written against
the app-level interface.


> And I also hacked up an incomplete version of SimpleHTTP that is based
> on Hyena:
>
> http://src.seereason.com/examples/happstack-hyena-tutorial/simpleHyena.hs

This is exactly the kind of shim layer I was proposing! Are we on the
same page yet? :)

G.
--
Gregory Collins <gr...@gregorycollins.net>

stepcut

unread,
Jun 21, 2009, 5:40:50 PM6/21/09
to HAppS
On Jun 21, 3:51 pm, Gregory Collins <g...@gregorycollins.net> wrote:

> This is exactly the kind of shim layer I was proposing! Are we on the
> same page yet? :)

I think so. I think part of my confusion is that I imagined that
happstack-fastcgi exercised most of the code in #3, when it probably
exercises none of it :)

In terms of the data types for Request/Response, I think the reason to
prefer hyena's version is that you can implement lazy I/O on top of
left-fold enumeration, but not the other way around. However, I am not
really sure how this would play out in practice.

And, if you are thinking of taking the stuff in SimpleHTTP
(ServerPartT, dir, path, etc), moving it out of happstack-server into
its own package, and generalizing it so that it can work with
happstack-server, hyena, etc, then I am totally in favor of that. I
think the tricky part of doing that right now is that ServerPartT,
dir, path, etc, are pretty tightly integrated with the specific
Request/Response types in happstack-server. So, you either need to add
a type class that abstracts out the type, or get happstack-server and
hyena to use the same type of the Response/Result ? Or, I guess, write
a glue layer that converts between hyena's types and the Response/
Request type (which is what the current happstack-fastcgi does?)
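
A rough sketch of the type class option (class and method names invented
here just for illustration):

-- Hypothetical: abstract the request type behind a class, so that
-- dir/path-style combinators can be written once and reused against
-- happstack-server, hyena, or any other backend's request type.
class WebRequest req where
    pathSegments :: req -> [String]            -- decoded URL path pieces
    header       :: String -> req -> Maybe String

-- A backend-agnostic 'dir'-style guard written only against the class.
matchesDir :: WebRequest req => String -> req -> Bool
matchesDir name rq = case pathSegments rq of
                       (p : _) -> p == name
                       _       -> False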

Anyway, it sounds like you have a better handle on this than me, so
I'm all in favor of whatever you propose ;)

- jeremy

Gregory Collins

unread,
Jun 21, 2009, 7:38:14 PM6/21/09
to HA...@googlegroups.com
stepcut <jer...@n-heptane.com> writes:

> In terms of the data types for Request/Response, I think the reason to
> prefer hyena's version is that you can implement lazy I/O on top of
> left-fold enumeration, but not the other way around. However, I am not
> really sure how this would play out in practice.

One of the things I wanted to do for Hac Phi was to spend some time
reworking the ServerPartT stuff:

* it has an awkward name

* the towering ServerPartT/WebT/FilterT/MaybeT/ErrorT stack is
unnecessarily complicated, which scares away beginners

* ServerPartT leaks its internals out all over the place

While we're looking at that, we could spend some time thinking about the
response type also. We've already discussed extending it to support
sendfile(); we could put in iteratee support there as well.

The simplest way discussed so far is to add more constructors to the
RqBody type:

data RqBody = Body     ByteString
            | Stream   (StreamG ByteString Word8)
            | SendFile FilePath


> And, if you are thinking of taking the stuff in SimpleHTTP
> (ServerPartT, dir, path, etc), moving it out of happstack-server into
> its own package, and generalizing it so that it can work with
> happstack-server, hyena, etc, then I am totally in favor of that.

That would indeed be what I'd propose.


> I think the tricky part of doing that right now is that ServerPartT,
> dir, path, etc, are pretty tightly integrated with the specific
> Request/Response types in happstack-server. So, you either need to add
> a type class that abstracts out the type, or get happstack-server and
> hyena to use the same type of the Response/Result ? Or, I guess, write
> a glue layer that converts between hyena's types and the Response/
> Request type (which is what the current happstack-fastcgi does?)

To me, ServerPart needs to depend on a particular set of
Request/Response types, so I'd argue for going by the latter route.

stepcut

unread,
Jun 22, 2009, 12:40:21 PM6/22/09
to HAppS
On Jun 21, 6:38 pm, Gregory Collins <g...@gregorycollins.net> wrote:
>
> One of the things I wanted to do for Hac Phi was to spend some time
> reworking the ServerPartT stuff:

This all sounds great to me.

- jeremy

Matthew Elder

unread,
Jun 22, 2009, 5:55:35 PM6/22/09
to HA...@googlegroups.com
Hey Gregory, first of all, I can't wait to see what you guys produce at Hac Phi. I would love to abstract the "core" http server from the transformer stack. This would pave the way for plugging in the various http backends currently available so we can "race them" (stealing another's words).

I want to add that I am currently working on the prototype of a small shim library which exposes a sendfile interface portably. I have windows natively supported already; I will release version 0.1 once I add Linux native support. Another nice feature of this lib is that if no native implementation is available, it seamlessly falls back to a portable haskell implementation. This hopefully can be worked into http engines such as Hyena and "happs classic".
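
For a sense of what the portable path amounts to, a minimal sketch
(illustrative only -- the actual sendfile package API may differ, and the
native paths would go through sendfile(2)/TransmitFile instead):

import qualified Data.ByteString.Lazy as L
import System.IO (Handle)

-- Portable Haskell fallback: stream the file through userspace in
-- lazy-bytestring chunks. A native implementation hands the file
-- descriptor straight to the kernel instead of copying through userspace.
portableSendFile :: Handle -> FilePath -> IO ()
portableSendFile outHandle path = L.readFile path >>= L.hPut outHandle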

If anyone wants to take a look at what I have so far and do some really early/alpha testing & feedback, please take a look at:

http://patch-tag.com/r/sendfile/home
--

Johan Tibell

unread,
Jun 23, 2009, 2:46:50 AM6/23/09
to HA...@googlegroups.com
On Mon, Jun 22, 2009 at 11:55 PM, Matthew Elder <ma...@mattelder.org> wrote:
I want to add that I am currently working on the prototype of a small shim library which exposes a sendfile interface portably. I have windows natively supported already; I will release version 0.1 once I add Linux native support. Another nice feature of this lib is that if no native implementation is available, it seamlessly falls back to a portable haskell implementation. This hopefully can be worked into http engines such as Hyena and "happs classic".

Cool. Would you consider merging this into network-bytestring which already has sendfile support for unixes?

Cheers,

Johan

Matthew Elder

unread,
Jun 24, 2009, 2:39:31 AM6/24/09
to HA...@googlegroups.com
My only problem with this is that sendfile really has nothing to do
with bytestring, and more to do with network. If anything I would merge
it into network. For now I am releasing on hackage to stabilize it,
though.
--
Sent from my mobile device

Johan Tibell

unread,
Jun 24, 2009, 3:14:12 AM6/24/09
to HA...@googlegroups.com
On Wed, Jun 24, 2009 at 8:39 AM, Matthew Elder <ma...@mattelder.org> wrote:

My only problem with this is that sendfile really has nothing to do
with bytestring, and more to do with network. If anything I would merge
it into network. For now I am releasing on hackage to stabilize it,
though.

Yes, it probably fits better in network. There's an open ticket for merging network-bytestring into network:

http://trac.haskell.org/network/ticket/15

-- Johan

Matthew Elder

unread,
Jun 24, 2009, 3:27:42 AM6/24/09
to HA...@googlegroups.com
The simplest way discussed so far is to add more constructors to the
RqBody type:

data RqBody = Body     ByteString
           | Stream   (StreamG ByteString Word8)
           | SendFile FilePath

Gregory, didn't you mean Response?

Here is what I plan to commit, instead of returning a Response, you can return a SendFile:

data Response  = Response  { rsCode    :: Int,
                             rsHeaders :: Headers,
                             rsFlags   :: RsFlags,
                             rsBody    :: L.ByteString,
                             rsValidator:: Maybe (Response -> IO Response)
                           }
               | SendFile  { rsCode    :: Int,
                             rsHeaders :: Headers,
                             rsFlags   :: RsFlags,
                             rsPath    :: String,
                             rsValidator:: Maybe (Response -> IO Response)
                           }
               deriving (Show,Typeable)


Then I will teach the handler how to plug this into my new sendfile library : )

Gregory Collins

unread,
Jun 24, 2009, 4:27:51 AM6/24/09
to HA...@googlegroups.com
Matthew Elder <ma...@mattelder.org> writes:

> The simplest way discussed so far is to add more constructors to the
> RqBody type:
>
> data RqBody = Body     ByteString
>             | Stream   (StreamG ByteString Word8)
>             | SendFile FilePath
>
> Gregory, didn't you mean Response?

...yeah, I guess I did. Sorry, I mixed up Request (which has that
RqBody type) and Response (which just uses a plain bytestring now).

Note that between Response and the new SendFile constructor you're
planning on introducing, the only difference is the body type; maybe we
should think about moving SendFile / left-fold (if we do that) down into
the body type?

Either way would be fine, of course, it's an aesthetic decision.
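
A sketch of what pushing it down into the body type could look like
(illustrative only; Headers and RsFlags are stand-ins for happstack's
real types):

import qualified Data.ByteString.Lazy as L

type Headers = [(String, String)]
data RsFlags = RsFlags

-- The variation lives in the body; Response keeps a single constructor.
data RsBody = RsBody L.ByteString   -- in-memory body (the status quo)
            | RsFile FilePath       -- handed off to sendfile()
            -- an iteratee/left-fold constructor could slot in here too

data Response = Response { rsCode      :: Int
                         , rsHeaders   :: Headers
                         , rsFlags     :: RsFlags
                         , rsBody      :: RsBody
                         , rsValidator :: Maybe (Response -> IO Response)
                         }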

Matthew Elder

unread,
Jun 24, 2009, 7:18:57 AM6/24/09
to HA...@googlegroups.com
That would work, yeah, let me sleep on that. (Just changed violet's diapers)

-- oh, and that code is what was in the current source, which is still
lazy, it seems: L.ByteString.

Gregory Collins

unread,
Jun 24, 2009, 9:05:13 AM6/24/09
to HA...@googlegroups.com
Matthew Elder <ma...@mattelder.org> writes:

> That would work, yeah, let me sleep on that. (Just changed violet's
> diapers)

It'd be a backwards-incompatible change, so let's save it for the
upcoming ServerPart refactoring.

Matthew Elder

unread,
Jun 24, 2009, 10:49:41 AM6/24/09
to HA...@googlegroups.com
Right.


John MacFarlane

unread,
Jun 24, 2009, 12:44:51 PM6/24/09
to HA...@googlegroups.com
+++ Gregory Collins [Jun 21 09 03:25 ]:

Jinjing Wang has already had some success getting a happstack app
(gitit) running through his 'hack' (a rack-like middleware layer):
http://groups.google.com/group/gitit-discuss/browse_thread/thread/829595e063caf587

He was even able to run gitit on hyena this way, though he needs some
help with the hyena interface.

John

Matthew Elder

unread,
Jun 24, 2009, 1:06:16 PM6/24/09
to HA...@googlegroups.com
Yeah, I noticed that; very interesting indeed! This will be invaluable to those who are not experienced with application server interfaces.



--

Creighton Hogg

unread,
Jun 25, 2009, 3:07:00 PM6/25/09
to HA...@googlegroups.com
On Wed, Jun 17, 2009 at 5:47 PM, stepcut<jer...@n-heptane.com> wrote:
>
> Oh, I also wanted to add that I think happstack will have achieved
> ultimate success when it no longer exists.
<snip>
> Hence, at some future date, happstack could be merely a concept that
> builds on top of a number of existing libraries. In the same way that
> we don't think of mtl or stm as being an explicit part of happstack,
> we might some day not think any of the current components as belonging
> exclusively to happstack.
>
> At that point, happstack will be more of a philosophy of how to build
> a highly scalable, stable website rather than any specific
> implementation of code.

Getting in late on this discussion as I was busy
packing/moving/unpacking, but I strongly second this sentiment.
Also, I saw that there was a bit of discussion about future work on
Happstack-State, multimaster, & sharding. To be honest, my main
interest in the Happstack project is working on the persistence model
& backend, so I'm more than willing to take the lead on any effort to
rewrite the macid implementation if no one is chomping at the bit.

stepcut

unread,
Jun 25, 2009, 4:08:43 PM6/25/09
to HAppS
On Jun 25, 2:07 pm, Creighton Hogg <wch...@gmail.com> wrote:

> Also, I saw that there was a bit of discussion about future work on
> Happstack-State, multimaster, & sharding.  To be honest, my main
> interest in the Happstack project is working on the persistence model
> & backend so I'm more than willing to take lead on any effort to
> rewrite the macid implementation if no one is chomping at the bit.

What do you mean by rewrite? The current code is largely undocumented,
the multimaster code is incomplete, and sharding has not yet been
started. On the other hand, for a single server system, it seems
pretty stable. I have looked at the code extensively and it
looks pretty good. And it seems like it should be possible to finish
multimaster and add sharding to the current code base.

But, rewrite makes it sound like you want to start over, perhaps with
a different approach? If so, why do you feel the current code is not
suitable, and what do you plan to do differently?

- jeremy

Creighton Hogg

unread,
Jun 25, 2009, 4:13:08 PM6/25/09
to HA...@googlegroups.com

Sorry, I made an awful word choice. I meant 'revise', as in
streamline, polish, & finish. Regardless of my linguistic handicap, I think
we're on the same page.

stepcut

unread,
Jun 25, 2009, 8:49:19 PM6/25/09
to HAppS
On Jun 25, 3:13 pm, Creighton Hogg <wch...@gmail.com> wrote:
> Sorry, I made an awful word choice.  I meant 'revise', as in stream
> line, polish, & finish.  Regardless of my linguistic handicap, I think
> we're on the same page.

Sweet!

If you are looking for a good place to start, you might try debugging
the happstack-state test that fails randomly. Namely,

### Failure in: happstack-state:1:checkpointProperties:
4:prop_runRestoreCongestion

Mae and I looked at it briefly a while ago, and it looked like the
event right after the checkpoint was sometimes lost. If this is a real
bug, then it would be great to squash it. And, in the process, you
should get a clear understanding of how happstack-state works under
the hood (assuming you don't already).

- jeremy