Handy script for downloading online manga (comics)


♪♫▬aze

Jun 10, 2010, 11:03:53 AM
to foss-...@googlegroups.com
#!/bin/bash
n=$1;
string=$2;
start=$3;
end=$4;
link=$5;
attrib=$6;
mkdir "$n.$string"
for i in $(seq -f %02.f $start $end);
do
wget -c "$link$i.jpg" -O "$n.$string/AK.$n""x$i.jpg";
done;
ls -l "$n.$string";

if [[ -z "$attrib" ]]; then
zip -r "$n.$string.cbz" "$n.$string"
rm -r "$n.$string"
fi
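For reference, here is how the page-name loop expands, checked offline. The six positional arguments are not explained in the script; I am guessing they are: episode number, title, start page, end page, base image URL, and an optional flag that skips the .cbz packaging. The values below are hypothetical stand-ins for $1..$4:

```shell
# Hypothetical values standing in for $1..$4 (episode 498, "Naruto", pages 1-3)
n=498; string=Naruto; start=1; end=3
# seq -f %02.f zero-pads to two digits: 01 02 03
for i in $(seq -f %02.f $start $end); do
  echo "$n.$string/AK.${n}x$i.jpg"   # the filename wget -O would write to
done
```

This prints 498.Naruto/AK.498x01.jpg through 498.Naruto/AK.498x03.jpg, matching the naming scheme the wget line produces.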

SATA

Jun 11, 2010, 5:01:09 AM
to foss-...@googlegroups.com

You should have explained those six arguments with the code.

♪♫▬aze,
Here is a simpler one with just a single argument. Everything else is identified automatically.

#!/bin/bash
# usage: ./managa.sh <URL>
# e.g.: ./managa.sh http://www.onemanga.com/Naruto/498/

# Needs an argument: the URL.
[ -z "$1" ] && exit 1

DIR=`echo "$1" | cut -d'/' -f4`
LINK=`wget -q "${1}01" -O- | sed -n 's/.*manga-page.*src="\(.*\)01\.jpg".*/\1/gp'`
mkdir -p "${DIR}"
for i in `seq -w 1 99`
do
wget -q "${LINK}${i}.jpg" -O "${DIR}/${i}.jpg"
if [ $?  -ne 0 ] ; then
    rm -f "${DIR}/${i}.jpg"
    echo "Done"
    exit 0
fi
done
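The sed line does the heavy lifting: it pulls the image base URL out of the page's manga-page <img> tag. It can be checked standalone with a made-up HTML fragment (the markup shape and URL here are assumptions for illustration, not the real page source):

```shell
# Fake fragment of the page HTML; the real page embeds the first image as ...01.jpg
html='<img class="manga-page" src="http://img.example.com/mangas/123/01.jpg">'
# The capture group grabs everything between src=" and 01.jpg, i.e. the base URL
echo "$html" | sed -n 's/.*manga-page.*src="\(.*\)01\.jpg".*/\1/p'
# prints: http://img.example.com/mangas/123/
```

The loop then only has to append the zero-padded page number and ".jpg" to that base.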


 

--
FOSS Nepal mailing list: foss-...@googlegroups.com
http://groups.google.com/group/foss-nepal
To unsubscribe, e-mail: foss-nepal+...@googlegroups.com
 
Mailing List Guidelines: http://wiki.fossnepal.org/index.php?title=Mailing_List_Guidelines
Community website: http://www.fossnepal.org/



--
Regards,
Suraj Sapkota, aka SATA
सुरज सापकोटा
~ As long as I have a want, I have a reason for living. Satisfaction is death. ~

♪♫▬aze

Jun 11, 2010, 5:32:05 AM
to foss-...@googlegroups.com
But the image links are like http://img.1000manga.com/mangas/00000215/000317232/
there is no way you can get info from that...

The usage to download an episode will be:

$ ./manga.sh <episode no> "title" 1 18 http://img.1000manga.com/mangas/00000200/000317232/

...but I am excited to make your method really work... that would be a lot easier: no need to see the starting and ending episodes..
シ ZZZZZZzzzzzzzssshhhhh..............

SATA

Jun 11, 2010, 5:44:14 AM
to foss-...@googlegroups.com
On Fri, Jun 11, 2010 at 3:17 PM, ♪♫▬aze <the....@gmail.com> wrote:
> But the image links are like http://img.1000manga.com/mangas/00000215/000317232/
> there is no way you can get info from that...
>
> The usage to download an episode will be:
>
> $ ./manga.sh <episode no> "title" 1 18 http://img.1000manga.com/mangas/00000200/000317232/

You should have checked that out before you replied.
Try this:
./managa.sh http://www.onemanga.com/Naruto/498/

[I suppose, you name the script "manga.sh"]

Remember, the URL that I am specifying is not the URL you are assuming.
 

♪♫▬aze

Jun 11, 2010, 5:48:40 AM
to foss-...@googlegroups.com
ahhh, I see it working perfectly... I didn't realize it was in quiet mode....
Great work...


--
シ ZZZZZZzzzzzzzssshhhhh..............

♪♫▬aze

Jun 15, 2010, 12:25:55 PM
to foss-...@googlegroups.com
Script improvements:
1. Fetches non-sequential pages, i.e. 05-06, cover, credit
2. Fetches the episode title
3. Renames the pages and compresses the episode to .cbz
4. Handles links from both "onemanga.com" and "1000manga.com"


#!/bin/bash

# Needs an argument: the URL.
if [ -z "$1" ]; then
echo -e "\
manga.sh: missing URL
Usage: ./manga.sh [URL]"
exit 1;
fi

# extracting info
no=`echo $1 | cut -d'/' -f5`;
initial=`echo $1 | cut -d '/' -f4 | tr -s '_' '\n' | cut -c 1`
initial=`echo $initial | sed -n 's/ //pg'`
site=`echo $1 | cut -d '.' -f2`;
echo $site;

mkdir -p "$initial.$no";
cd "$initial.$no";

#getting episode title
if [ ! -e startpage ]; then
wget "$1" -O "startpage";
fi;
title=`sed -n 's/.*Chapter Title: \(.*\)<.*/\1/p' startpage`
mkdir -p "$no.$title";

#getting first-page
page0=$(sed -n "s/.*href=.*$no\/\(.*\)\">.*Begin.*/\1/p" startpage);
if [ ! -e firstpage ]; then
wget "$1$page0" -O firstpage;
fi;

#getting initial list
if [ $site = "onemanga" ]; then
list="`sed -n "/id_page_select/,/select>/p" firstpage | tr -s "<>" "\n" | sed '4~4!d'`";
else 
list=`sed -n "/id_page_select/,/select>/p" firstpage | sed "s/.*<.*>//p"`;
fi

echo $list;

if [ -z "$list" ]; then
echo "Couldn't retrieve pages";
exit 1;
fi;

#getting link
page0=$(echo $page0 | tr -d '/');
link=$(sed -n "s/.*manga-page.*src=\"\(.*\)$page0.jpg.*/\1/p" firstpage);

if [ -z "$link" ]; then
echo "Couldn't retrieve link";
exit 1;
fi;

#downloading
for i in $list; do
wget -c "$link$i.jpg" -O "$no.$title/$initial.${no}x$i.jpg";
if [ $? -ne 0 ] ; then
echo "Problem downloading file"
exit 1
fi
done;

#compressing
zip -r "$no.$title.cbz" "$no.$title"
mv "$no.$title.cbz" "../$no.$title.cbz"
cd ..
rm -r "$initial.$no"
exit 0;
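The "extracting info" step can be sanity-checked offline. With a hypothetical onemanga-style URL (made up for illustration), the cut/tr pipeline yields the chapter number and the title initials used for the working directory:

```shell
# Hypothetical URL, for illustration only
url="http://www.onemanga.com/One_Piece/589/"
no=$(echo "$url" | cut -d'/' -f5)                                  # field 5 -> 589
initial=$(echo "$url" | cut -d'/' -f4 | tr -s '_' '\n' | cut -c 1) # one letter per word: O, P
initial=$(echo $initial | sed 's/ //g')                            # join "O P" -> "OP"
echo "$no $initial"
# prints: 589 OP
```

So an episode at .../One_Piece/589/ ends up in a working directory named "OP.589", matching the mkdir at the top of the script.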
--
シ ZZZZZZzzzzzzzssshhhhh..............