* Sending messages works (or should ;). Currently only smtp is
implemented, but twitter should be easy. There is a new /drafts API
end-point along the lines of what James asked for - /drafts/create (a
POST request returning the ID of the created message) will create a new
message and drafts/send (a PUT request taking an ID as a param) will
send it. No ability yet to update a pre-created draft. Includes tests.
* API message/send works, which is basically a short-hand for
drafts/create + drafts/send (it requires a POST body in the same format
as drafts/create). Includes tests. (A rough client-side sketch of both
end-points follows this list.)
* Message objects returned by the API now have the 'from' element as a
single recipient object rather than a list (and also expect the same
when sending a message). NOTE: I haven't updated any samples which may
be affected by this change. 'to', 'cc' etc. elements are unchanged.
* We can now have multiple queues - each account capable of sending
messages gets its own "outgoing" queue. Currently there is still only a
single incoming queue, but I expect this to be split so things like the
bitly extension can have their own.
* 'run-raindrop.py delete-the-world' now attempts to restore all your
account information after nuking the database (unless --delete-accounts
is specified as an option).
* Formalized 'to_dict' and 'from_dict' methods on objects, replacing the
SASerializer class used in the API.
* Changed storm's checking of require_auth so the test suite can avoid
it - and test.ini now avoids it.
* ensure_tagged now has an optional param 'replace_values' which
defaults to False. If true, any other tags with the same type but
different values are removed.
* All date parsing/formatting methods we use are now classmethods on the
UTCDateTime object.
* More of raindrop.model.storage has been nuked.
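To make the drafts and message/send end-points above concrete, here is a
rough client-side sketch (using the requests library; the base URL,
payload field names and response shape are assumptions - the API tests
are the authoritative source for the real formats):

    import requests

    BASE = "http://127.0.0.1:8080/api"  # assumed local raindrop API root

    draft = {
        # 'from' is now a single recipient object; 'to'/'cc' are still lists.
        "from": {"name": "Bryan", "address": "clarkbw@example.com"},
        "to": [{"name": "Mark", "address": "mhammond@example.com"}],
        "subject": "drafts API test",
        "body": "hello from the new drafts end-point",
    }

    # POST /drafts/create returns the ID of the created message.
    resp = requests.post(BASE + "/drafts/create", json=draft)
    resp.raise_for_status()
    draft_id = resp.json()["id"]  # response shape assumed

    # PUT /drafts/send takes the ID as a param and actually sends it.
    requests.put(BASE + "/drafts/send", params={"id": draft_id}).raise_for_status()

    # message/send is the short-hand: create + send in a single POST.
    requests.post(BASE + "/message/send", json=draft).raise_for_status()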
Let me know how it goes...
Mark
On 17/08/10 3:16 AM, Mark Hammond wrote:
> Hi all,
> I just pushed *lots* of changes. The highlights...
>
> * Sending messages works (or should ;). Currently only smtp is
> implemented, but twitter should be easy. There is a new /drafts API
> end-point along the lines of what James asked for - /drafts/create (a
> POST request returning the ID of the created message) will create a
> new message and drafts/send (a PUT request taking an ID as a param)
> will send it. No ability yet to update a pre-created draft. Includes
> tests.
I'm getting an error message on sync now, here's the text:
INFO:raindrop.conductor:Starting sync of smtp account: smtpgmail-clarkbw@...
ERROR:raindrop.conductor:sync of account '<Account(id=3)>' failed
Traceback (most recent call last):
  File "/Users/clarkbw/moz/raindrop-reboot/raindrop/raindrop/conductor.py", line 142, in _do_sync_thread
    self._do_sync(proto, options)
  File "/Users/clarkbw/moz/raindrop-reboot/raindrop/raindrop/conductor.py", line 152, in _do_sync
    proto.start_sync(self, options)
AttributeError: 'SMTPProtocol' object has no attribute 'start_sync'
> * API message/send works, which is basically a short-hand for
> drafts/create + drafts/send (it requires a POST body in the same
> format as drafts/create). Includes tests.
>
> * Message objects returned by the API now have the 'from' element as
> a single recipient object rather than a list (and also expect the same
> when sending a message). NOTE: I haven't updated any samples which
> may be affected by this change. 'to', 'cc' etc. elements are unchanged.
I haven't had a chance to check these out, but I'm going to try them on
this next flight.
> * We can now have multiple queues - each account capable of sending
> messages gets its own "outgoing" queue. Currently there is still only
> a single incoming queue, but I expect this to be split so things like
> the bitly extension can have their own.
I'm excited for the bitly/incoming extension queue system. Let me know
when you're starting work on that so I can give a brain dump of
thoughts. For now I think there are two distinct cases I can see.
First, we have the bitly-type API systems, where we are looking to a
certain API to return information we need, and we want to have each API
service in its own queue so a remote system only breaks its own API.
Second, I think we have a couple of other extensions that are going to
open a connection to a service (is.gd and others only offer a 301
redirect and no API) and then likely another connection to the final
service; I'm not sure how you'd want to handle that situation.
And finally I think there's a common interaction we need to create with
the UI for when a service is down. The UI will normally be expecting
shortened links to be expanded but when those services are down for some
reason we should have some kind of standard/reliable system for
understanding that the link couldn't be expanded just yet.
> * 'run-raindrop.py delete-the-world' now attempts to restore all your
> account information after nuking the database (unless
> --delete-accounts is specified as an option).
So far this seems to work great for me, thanks!
> * Formalized 'to_dict' and 'from_dict' methods on objects, replacing
> the SASerializer class used in the API.
Nice, I think this will be really helpful in creating a more standard
representation that's delivered via the API.
Thanks,
~ Bryan
> I'm getting an error message on sync now, here's the text:
Pushed a fix for that.
>> * We can now have multiple queues - each account capable of sending
>> messages gets its own "outgoing" queue. Currently there is still only
>> a single incoming queue, but I expect this to be split so things like
>> the bitly extension can have their own.
> I'm excited for the bitly/incoming extension queue system. Let me know
> when you're starting work on that so I can give a brain dump of
> thoughts.
Now is a good time to start thinking about this...
> For now I think there are two distinct cases I can see. First, we
> have the bitly-type API systems, where we are looking to a certain API
> to return information we need, and we want to have each API service in
> its own queue so a remote system only breaks its own API.
I was thinking the API would include the ability to have multiple queues
(ie, these extensions wouldn't be "breaking" the API - the API grows to
incorporate this as a first-class concept).
My thoughts on this are as follows (but obviously up for discussion and
re-thinking):
* As the extension initializes, it reports that it would like its own
queue initialized.
* The extension is passed items as normal (ie, not on the queue - just
as a regular extension). The extension then examines the item, but
instead of processing it, simply asks for the item to be "re-queued" on
its queue. This means that only items which the bitly extension knows
contain links it can expand are queued to it, while the vast majority
of messages never get queued there.
* The raindrop extension framework manages the additional queues and
calls the extension with the queued items - but it calls a different
'entry point' for items it got off the extension queue.
IOW, extensions which want their own queue would be changed so:
* An optional 'initialize' method exists in extensions - it is here they
would indicate they want a queue.
* on_item_update still exists for these extensions - in general they do
no real work, but instead examine each item and optionally re-queue it
for later.
* A new entry-point 'on_queued_item'(?) must be defined for these
extensions. The raindrop framework will magically pop things off the
extension's queue and call this entry-point.
(I'm wondering if extensions should just be a class...)
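To make that shape concrete, here is a very rough sketch of what a
queue-owning extension could look like as a class - the framework object
and its request_queue()/requeue() helpers are invented names for
whatever raindrop would actually provide, not an existing API:

    import re

    BITLY_LINK = re.compile(r"https?://bit\.ly/\S+")

    class BitlyExtension:
        def initialize(self, framework):
            # Report during initialization that we want our own queue.
            self.queue = framework.request_queue("bitly")

        def on_item_update(self, item):
            # Still called for every item as a regular extension; do no real
            # work here, just decide whether the item belongs on our queue.
            if BITLY_LINK.search(item.get("body", "")):
                self.queue.requeue(item)

        def on_queued_item(self, item):
            # The framework pops items off our queue and calls this; the slow
            # remote lookups happen here without blocking the incoming queue.
            expanded = {}
            for link in BITLY_LINK.findall(item.get("body", "")):
                expanded[link] = self._expand(link)
            item["expanded_links"] = expanded

        def _expand(self, link):
            # Placeholder standing in for the real bitly API call.
            return link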
> Second, I
> think we have a couple of other extensions that are going to open a
> connection to a service (is.gd and others only offer a 301 redirect and
> no API) and then likely another connection to the final service; I'm
> not sure how you'd want to handle that situation.
Can you explain this in a little more detail? I do see a need for these
things to chain together somehow - eg, a bitly link gets expanded, and
it is found to be a youtube video - the youtube extension should then be
capable of doing its magic. Is the scenario you describe above similar
to that?
> And finally I think there's a common interaction we need to create with
> the UI for when a service is down. The UI will normally be expecting
> shortened links to be expanded but when those services are down for some
> reason we should have some kind of standard/reliable system for
> understanding that the link couldn't be expanded just yet.
Yeah - do you have any thoughts on that? In the model I sketch above,
the extension has the opportunity to indicate somehow that it is
re-queueing the item (eg, the bitly extension could set some field to
'pending' before it causes the item to be re-queued), but making this
general seems tricky.
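One possible shape for that - the 'link_status' field and its values
below are purely a strawman, not an existing raindrop convention:

    def requeue_for_expansion(item, queue):
        # Stamp the item before re-queueing so the UI knows the link was
        # recognized but has not been expanded yet.
        item["link_status"] = "pending"
        queue.requeue(item)

    def mark_expansion_failed(item):
        # Give the UI something reliable to key off when the remote
        # service is down.
        item["link_status"] = "unavailable"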
Cheers,
Mark