
IMAP in disconnected mode


Roberto Ullfig

Apr 8, 2002, 4:35:02 PM

We're starting to fall victim to the WWW and so must now support IMAP
access from it. We've been using IMP with UWASH IMAP - it works O.K. but
is quite heavy, since each button clicked in the IMP interface starts
and stops an IMAP process. I/O is then roughly equal to the inbox size
times the number of clicks the user makes. Are there any plans to make
IMAP a little lighter, perhaps with some sort of server-side cache? We
also run Qualcomm Popper, which recently implemented a server-side cache
that greatly improves performance (when mail is left on the server) -
the inbox is not read if there are no new messages. Any plans for
something like this in IMAP's future?

--
Roberto Ullfig : rul...@uchicago.edu
Systems Administrator
Networking Services and Information Technologies
University of Chicago
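The cost pattern described above can be made concrete with a toy model. The sketch below compares a stateless front end, where every click re-reads the whole inbox, with a server-side cache like qpopper's, where only new messages are read after the first scan. All numbers are illustrative assumptions, not measurements from IMP or UW IMAP.

```python
# Hypothetical I/O cost model for a stateless webmail front end versus a
# server-side cache. Units are "message reads"; the figures are invented
# for illustration only.

def stateless_io(inbox_msgs: int, clicks: int) -> int:
    """Each click spawns a fresh IMAP process that re-reads the inbox."""
    return inbox_msgs * clicks

def cached_io(inbox_msgs: int, clicks: int, new_msgs_per_click: int) -> int:
    """One full scan up front; later clicks read only new messages."""
    return inbox_msgs + (clicks - 1) * new_msgs_per_click

# A 2000-message inbox and 50 clicks in a session:
print(stateless_io(2000, 50))      # 100000 message reads
print(cached_io(2000, 50, 5))      # 2245 message reads
```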

Mark Crispin

Apr 8, 2002, 6:37:08 PM
to Roberto Ullfig
On Mon, 8 Apr 2002, Roberto Ullfig wrote:
> We're starting to fall victims of the WWW and so must now support IMAP
> access from it. We've been using IMP with UWASH IMAP - works O.K. but is
> quite heavy since each button clicked on the IMP interface starts and
> stops an IMAP process. I/O is then roughly equal to the inbox size times
> the number of clicks the user makes.

A good web-based IMAP client is a difficult thing to build. We weren't
able to find one back when we first investigated the available options, so
we bought the least miserable of the bunch and started work on webpine.

Webpine is now in production use at UW, but I don't know what the plans
are for any external distribution.

-- Mark --

http://staff.washington.edu/mrc
Science does not emerge from voting, party politics, or public debate.

Clifton Royston

Apr 9, 2002, 6:29:17 PM
Roberto Ullfig <rul...@uchicago.edu> wrote:

> We're starting to fall victim to the WWW and so must now support IMAP
> access from it. We've been using IMP with UWASH IMAP - works O.K. but is
> quite heavy since each button clicked on the IMP interface starts and
> stops an IMAP process. I/O is then roughly equal to the inbox size times
> the number of clicks the user makes. Are there any plans to make IMAP a
> little lighter, perhaps with some sort of server-side cache? We also run
> Qualcomm Popper and they just recently implemented a server-side cache
> which greatly improves performance (when mail is left on server) - the
> inbox is not read if there are no new messages. Any plans for something
> like this in IMAP's future?

I don't think there is any reasonable way to make a "lighter" IMAP; the
proper solution is to make the client side more stateful and
connection-oriented, which is a very tough problem.

I posted here a few weeks ago that we have discovered the same issue
with all IMAP webmail clients we have evaluated so far. One solution I
could see is to build a trusted IMAP proxy server and run it on the
webmail machine to maintain state on each user who currently has an
IMAP session, which makes I/O more closely proportionate to # of
concurrent users * size of mailbox, rather than "number of clicks" as
you rightly say. Unfortunately nobody has done this AFAICT and that's
potentially a large, complex, and security-sensitive piece of software
to write.
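The core of the proxy described above is a per-user session cache: one persistent IMAP connection per logged-in user, reused across web clicks instead of reopened each time. The sketch below shows that shape with a stub in place of a real connection (e.g. Python's imaplib.IMAP4); the class and method names are illustrative assumptions, not any existing proxy's API.

```python
# Minimal sketch of a per-user IMAP session cache. ImapSession is a stub
# standing in for a real persistent connection; it counts full-inbox
# scans so the reuse effect is visible.

import time

class ImapSession:
    """Stub for one persistent IMAP connection."""
    def __init__(self, user: str):
        self.user = user
        self.scans = 0                  # full-inbox reads performed
        self.last_used = time.monotonic()

    def select_inbox(self):
        self.scans += 1                 # a real client would SELECT + FETCH here
        self.last_used = time.monotonic()

class SessionProxy:
    """Maps users to live sessions, so I/O scales with concurrent users,
    not with clicks."""
    def __init__(self):
        self.sessions = {}

    def handle_click(self, user: str) -> ImapSession:
        sess = self.sessions.get(user)
        if sess is None:                # first click: open and scan once
            sess = self.sessions[user] = ImapSession(user)
            sess.select_inbox()
        sess.last_used = time.monotonic()   # later clicks reuse the session
        return sess

proxy = SessionProxy()
for _ in range(50):                     # fifty clicks from one user
    proxy.handle_click("roberto")
print(proxy.sessions["roberto"].scans)  # 1: inbox scanned once, not 50 times
```

A production version would also need session expiry, authentication, and careful isolation between users, which is where the "large, complex, and security-sensitive" part comes in.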

BTW, qpopper in "server mode" and UW IMAP will walk all over each
other in the user's inbox. You should be aware of that.
-- Clifton

--
Clifton Royston -- LavaNet Systems Architect -- clif...@lava.net
"What do we need to make our world come alive?
What does it take to make us sing?
While we're waiting for the next one to arrive..." - Sisters of Mercy

Mark Crispin

Apr 9, 2002, 7:57:31 PM
to Clifton Royston
On 9 Apr 2002, Clifton Royston wrote:
> One solution I
> could see is to build a trusted IMAP proxy server and run it on the
> webmail machine to maintain state on each user who currently has an
> IMAP session, which makes I/O more closely proportionate to # of
> concurrent users * size of mailbox, rather than "number of clicks" as
> you rightly say. Unfortunately nobody has done this AFAICT and that's
> potentially a large, complex, and security-sensitive piece of software
> to write.

That is what UW has done with Webpine. And yes, it was a large, complex,
and security-sensitive piece of software to write.

-- Mark --

Neil Hoggarth

Apr 10, 2002, 7:05:11 AM
In article <a8vpvt$rg4$1...@mochi.lava.net>,
Clifton Royston <clif...@lava.net> wrote:

> I posted here a few weeks ago that we have discovered the same issue
> with all IMAP webmail clients we have evaluated so far. One solution I
> could see is to build a trusted IMAP proxy server and run it on the
> webmail machine to maintain state on each user who currently has an
> IMAP session, which makes I/O more closely proportionate to # of
> concurrent users * size of mailbox, rather than "number of clicks" as
> you rightly say. Unfortunately nobody has done this AFAICT and that's
> potentially a large, complex, and security-sensitive piece of software
> to write.

I use the WING[1] web mail system, which includes a program called
"maild". Maild is a c-client based Perl program, one instance running
per login session, persisting throughout the life of the session:


Browser <----------> Web server with <----------> maild <--------> IMAP server
                     WING CGI module

series of transient  series of transient          persistent IMAP
HTTP connections     maild connections            connection
                     (via UNIX domain socket)


[1] http://users.ox.ac.uk/~mbeattie/wing/
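The maild pattern in the diagram can be sketched as a long-lived per-session process answering short-lived requests over a UNIX domain socket. The toy below keeps per-session state (a hit counter) across transient connections, standing in for the persistent c-client/IMAP connection a real maild holds; the one-line request/reply protocol is an invented stand-in, not WING's actual wire format.

```python
# Toy maild: a long-lived thread holding session state, serving transient
# "CGI" requests over a UNIX domain socket. The protocol is invented for
# illustration only.

import os, socket, threading

SOCK_PATH = "/tmp/maild-demo.sock"      # one socket per login session
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK_PATH)
srv.listen(5)

def maild():
    """Per-session daemon: state survives across transient requests."""
    hits = 0
    while True:
        conn, _ = srv.accept()          # one transient CGI connection
        req = conn.recv(1024).decode().strip()
        if req == "QUIT":
            conn.close()
            return
        hits += 1                       # session state, kept between clicks
        conn.sendall(f"OK {req} hit={hits}\n".encode())
        conn.close()

def cgi_request(cmd: str) -> str:
    """What each transient CGI process does: connect, ask, disconnect."""
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(SOCK_PATH)
    c.sendall((cmd + "\n").encode())
    reply = "" if cmd == "QUIT" else c.recv(1024).decode().strip()
    c.close()
    return reply

t = threading.Thread(target=maild)
t.start()
r1 = cgi_request("LIST")
r2 = cgi_request("FETCH 1")
cgi_request("QUIT")
t.join()
srv.close()
print(r1)                               # OK LIST hit=1
print(r2)                               # OK FETCH 1 hit=2
```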

--
Neil Hoggarth Departmental Computer Officer
<neil.h...@physiol.ox.ac.uk> Laboratory of Physiology
http://www.physiol.ox.ac.uk/~njh/ University of Oxford, UK

Roberto Ullfig

Apr 12, 2002, 3:12:32 PM

Yes, we know. Your answer was pretty much what I expected. Our current
solution, then, is to throw lots of hardware at the problem.
