Linking to pictures in torrents

Benjamin

Apr 12, 2010, 9:46:46 AM
to LittleShoot
Great software!!!

I found a nice use for LittleShoot.

I could link to pictures (on the LimeWire network) from my web page like this:

<img width="495" height="493" src="http://p2p2o.littleshoot.org:8107/api/client/download/00_Shantel_-_Planet_Paprika-WEB-2009-Cover-csm.jpg?name=00_Shantel_-_Planet_Paprika-WEB-2009-Cover-csm.jpg&amp;size=25307&amp;uri=urn%3Asha1%3AHMSWPH6A5FGLNOGGFPZW6CCR5A5EA274&amp;cancelOnStreamClose=false&amp;noCache=1271073397798&amp;mimeType=image%2Fjpeg&amp;urn=urn%3Asha1%3AHMSWPH6A5FGLNOGGFPZW6CCR5A5EA274&amp;source=limewire">

But there is no way to do this with torrent files. I would like a way to
specify a single file in a torrent to download, so it can be displayed on
my web page right away, even if there are 1000 other pictures in the
torrent that have not been downloaded yet.

That way I could host one single picture torrent with all the pictures for
my web page and build a nice CSS presentation on the page that links to
the individual pictures.

It would be great to have a web page of ~5 MB that displays as a huge
gallery site backed by 100 gigs of pictures :D
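
Something like this is what I'm imagining (purely a sketch, none of these torrent parameters exist in LittleShoot today; the torrent URL and the parameter names are made up):

<!-- hypothetical: torrentUri and filePath are made-up parameters, and gallery.torrent is a made-up torrent -->
<img width="495" height="493" src="http://p2p2o.littleshoot.org:8107/api/client/download/cover.jpg?torrentUri=http%3A%2F%2Fexample.com%2Fgallery.torrent&amp;filePath=covers%2Fcover.jpg&amp;mimeType=image%2Fjpeg">

The idea is that the client would open the torrent, download only the pieces for that one file, and serve it over HTTP just like it already does for the LimeWire link above.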

Michael Sullivan

Apr 17, 2010, 11:11:11 PM
to littles...@googlegroups.com

adamfisk

Apr 17, 2010, 11:22:45 PM
to LittleShoot
Very creative implementation! This is exactly the type of thing
LittleShoot is designed to do, and the next release will make it much
easier (and documented =) to use LittleShoot in this way.

We hadn't considered the multi-file image torrent scenario, though. It
should be relatively straightforward to achieve, although there's
certainly a fair bit of work to do there. One reason we handle torrents a
little differently is that the startup time tends to be a few seconds
longer while the downloader gathers sources, but they certainly can work
the same way as all the other downloads.

I've created an issue for this, and it's very exciting to see people
using LittleShoot in this way! You're blazing precisely the trail we
want to go down, so thanks. Here's the JIRA issue:

http://dev.littleshoot.org:8081/browse/LS-438

Exciting stuff.

-Adam


On Apr 12, 6:46 am, Benjamin <benjamingustafs...@gmail.com> wrote:
> Great software!!!
>
> I found a nice usage for littleshoot
>
> I could link to pictures (in the limewire network) in my web page like
> this:
>
> <img width="495" height="493" src="http://p2p2o.littleshoot.org:8107/
> api/client/download/00_Shantel_-_Planet_Paprika-WEB-2009-Cover-csm.jpg?
> name=00_Shantel_-_Planet_Paprika-WEB-2009-Cover-
> csm.jpg&amp;size=25307&amp;uri=urn
> %3Asha1%3AHMSWPH6A5FGLNOGGFPZW6CCR5A5EA274&amp;cancelOnStreamClose=false&am p;noCache=1271073397798&amp;mimeType=image

Benjamin

Apr 18, 2010, 6:33:45 AM
to LittleShoot
Well, the delay while gathering sources for a torrent is a problem... I
may need to use it with AJAX, so I can show an "image loading"
placeholder until the file has been downloaded.

Or

There could be a command to gather sources for a specific torrent in
advance, but not download anything until a file is specified. That way
the landing page of the website could prepare the connection to the
torrent.
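
For the AJAX part, something as simple as this might already work (just a sketch: "loading.gif" is whatever placeholder image you like, and the download link is the same kind of LittleShoot link as in my first post, shortened here):

<img id="gallery-1" src="loading.gif" width="495" height="493">
<script>
  // Kick off the P2P download in the background by pointing a hidden
  // Image at the LittleShoot download link, then swap out the
  // placeholder once the browser has received the whole file.
  var p2pImage = new Image();
  p2pImage.onload = function () {
    document.getElementById("gallery-1").src = p2pImage.src;
  };
  // Shortened placeholder; use a full download link like the one in my first post.
  p2pImage.src = "http://p2p2o.littleshoot.org:8107/api/client/download/...";
</script>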


Other thoughts:
It may be interesting to consider all the different magnet-link types and
add a module for each type (much) later in the development. That way the
user could connect to many file networks to search for a given file hash.

In HTML 5:
<video>tiger-tree-hash</video> could, for example, start a DC++ download
of a video with instant playback...
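
A rough sketch of what I mean (completely hypothetical: a browser can't resolve a hash by itself, so in practice it would probably go through the local client over HTTP like the image links, just with a tiger tree URN instead of a SHA-1 one):

<!-- hypothetical: TIGER_TREE_HASH_HERE is a placeholder for a real urn:tree:tiger hash -->
<video controls src="http://p2p2o.littleshoot.org:8107/api/client/download/movie.mp4?urn=urn%3Atree%3Atiger%3ATIGER_TREE_HASH_HERE"></video>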


I'm just starting to understand the code in the project...

//Benjamin

adamfisk

Apr 19, 2010, 2:35:27 AM
to LittleShoot
All excellent thoughts, Benjamin. We've also been considering a quick
WordPress plugin that would make this super easy for WordPress users,
likely using oEmbed to automatically configure LittleShoot embeds.
That might also enable some Ajax loading with embedded JavaScript.

With straight LittleShoot files, the startup time isn't a problem, but
we need to finish off the uploading module for that to work, which is
in part waiting on our Facebook integration.

I don't know how closely you've looked at the links LittleShoot creates
for downloads, allowing you to post links to Facebook, Twitter, etc., but
those are also pretty interesting. They're of the form:

http://www.littleshoot.org/link?uri=[url-encoded uri -- could be
a .torrent file, a p2p2o.littleshoot.org-style link, or more going
forward]

Basically, that URL will just redirect to something LittleShoot can
handle *if the caller has LittleShoot*. If the caller doesn't have
LittleShoot, that'll first prompt for a LittleShoot download and
install, at which point it will detect the install and make the same
request to LittleShoot to start downloading the file.
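
For example, building one of those links for a .torrent by hand looks like this (just an illustration; the torrent URL here is made up):

<script>
  // Build a LittleShoot link for a torrent. The example.com URL is a placeholder.
  var torrentUrl = "http://example.com/photos.torrent";
  var link = "http://www.littleshoot.org/link?uri=" + encodeURIComponent(torrentUrl);
  // link is now:
  // http://www.littleshoot.org/link?uri=http%3A%2F%2Fexample.com%2Fphotos.torrent
</script>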

The main tricky thing with gathering sources across multiple network
types is integrating all the various protocols while keeping the code
clean, especially given that each network has pretty unique ways of
discovering sources in the first place, many of them based on DHTs. It
certainly could be done, but we've so far focused on keeping the
download architecture as clean as possible. We also really want to
push the straight LittleShoot REST API for accessing sources, which
LittleShoot's own downloading architecture is based on. That'll be an
API anyone can use to access all the sources for a file, with all
those sources being available via HTTP (albeit with all sorts of
firewall tricks to get to that point).

The HTML 5 idea is super interesting. It would be neat to create a
standard way to link to sources, with a LittleShoot extension that would
automatically handle the P2P stream.

Anyway, all great stuff Benjamin. We're hoping to flesh out this whole
area thoroughly with LittleShoot 1.0 (as soon as we find enough time
to finish it!), and it would be great to get your input. Where are you
based?

All the Best,

-Adam

Michael Sullivan

Apr 19, 2010, 9:05:51 AM
to littles...@googlegroups.com
REST API for accessing sources..... 

That would be key.  Very cool.

Sull

adamfisk

Apr 19, 2010, 11:45:04 AM
to LittleShoot
Yup!! It's in there now, just undocumented and under-utilized without
the LittleShoot-based publishing, but it's one of the things I'm super
excited about!

-Adam

Benjamin Gustafsson

Apr 19, 2010, 7:10:48 PM
to littles...@googlegroups.com
Adam, I live in Sweden.

I guess "enough time" equals "enough money" in your life, just like it does in my life...
The way your system works now, I think it would be easy to get donations if the upload module got some work, so that people could start to test the possibilities and understand what it is. Without the upload capability it's no good :(

If everything is reachable via P2P HTTP (as a last resort), then lots of users will get it working without "configuration-and-setup hell". Even if it is slower, it won't matter; it will spread if it is easy to use and works all the time without slowing down the computer the way most P2P programs do.

Some input about the upload module:
I have not read the code... but I suggest you add hash calculation with as many hash types as possible (maybe as an advanced setting), then send the bitprint data to a public database; that data is digital gold... It takes only seconds to generate the common bitprints for a file, and it enables sooooo many useful possibilities for P2P.

The following is something I have been thinking of developing myself, but I never found a good system to act as a bitprint generator:

A big database of bitprints would make it possible to relate user ratings and release information from the release groups to the actual files/bitprints. It would thereby enable a "Wikipedia" for files (much more than http://bitzi.org/ can ever become).
It would also enable decentralized searching across many private networks: if a list of requested rare files were published on a web service, users could set up their systems to automatically search for these files and upload them to the requester whenever they are logged in to their private networks... Web services are the future; all that is needed is a database with interesting data. There are many 'almost extinct' files that are only available to search for short periods every week or month (when their peers are online).

http://www.vertor.com is also closing in on the "Wikipedia for files"; it is the future... Soon there will be lots of web services analyzing data and relating bitprints to useful information.

Don't make the system too dependent on a single access point, and don't make it too dependent on your website... It has to be as decentralized as possible, so that it keeps working even if your system gets a 'never-ending' DDoS attack. In my humble opinion, DoS attacks are the biggest threat to any system...


//Benjamin

adamfisk

Apr 22, 2010, 3:32:15 PM
to LittleShoot
Agreed on all of these points, Benjamin, and "enough time" certainly
equals "enough money" in my world as well. Funding startups completely
on one's own is quite an art, and it's honestly hurt LittleShoot in
the last 8 months. That'll change soon, though, and we'll see some
steadier progress.

Totally agreed on the common API for files using different bitprints.
The obvious ones are MD5 and SHA-1 URIs, but I'd also like to start
adding more human-readable URI "permalinks," although we haven't
finalized the format there. The other obvious ones are Tiger Tree hashes
and the related but distinct .torrent files themselves.

I've known Gordon Mohr from Bitzi since the early days (2000/2001 I
guess), and I agree integrating something like this at the client
level makes a lot of sense. The Wikipedia for files approach is
brilliant. One great aspect of that approach would be the phenomenal
SEO.

I agree on the single point of failure in principle, although I think
there's always a balance. Martin Fowler's "First Law of Distributed
Computing" rings true to me -- "Don't Distribute Your Objects." The
reason is that it's just hard. The work I did on distributed search for
LimeWire/Gnutella is the perfect example. It's some really fun and
cool technology, but it's complicated and hard. In the end, you simply
can't make distributed search as fast or as comprehensive as
centralized search, so it's much harder and works much worse. With
LittleShoot, we run all of that part directly on Google App Engine. So
there's a single point of failure in a sense, but it's a single point
of failure based on redundant servers all over the world that have
survived many DDoS attacks.

So to me it just depends. I think P2P really shines for big files, and
it works incredibly well there. With LittleShoot, part of the focus is
to modularize that large file downloading component as much as
possible to make that tech available easily in any context.

Anyway, back to the code!

Thanks for all the thoughts, Benjamin. These are all again precisely
aligned with the direction we want to go, so it's great to hear from
someone so uncannily like-minded!

-Adam

