
Server Side Includes fast or slow?


masonc

May 22, 2015, 4:07:00 PM
I have many pages with three columns (or divs).
Two are repeated on all pages. One is unique to that page.

I think I could use server-side includes for the two repeaters.

Would this slow down page loading?

(I haven't found this info by Googling)

Thanks

MasonC <http://frontal-lobe.info>

Chris F.A. Johnson

May 22, 2015, 5:08:04 PM
On 2015-05-22, masonc wrote:
> I have many pages with three columns (or divs).
> Two are repeated on all pages. One is unique to that page.
>
> I think I could use server-side includes for the two repeaters.
>
> Would this slow down page loading?

I haven't noticed a slowdown, and I use SSI a lot. For example,
this page has 8 SSIs: <http://torquiz.cfaj.ca/>, and I don't notice
a delay.
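
That matches how SSI works: the include is resolved on the server
before the page is sent, so the browser still makes one request, and
the only extra cost is the server reading a couple of small files per
hit. A minimal sketch of the pattern for two repeated columns (the
file names and paths here are invented):

<div class="left">
<!--#include virtual="/includes/left_column.html" -->
</div>
<div class="content">
(content unique to this page)
</div>
<div class="right">
<!--#include virtual="/includes/right_column.html" -->
</div>

The page itself is served as .shtml (or however the host is
configured to trigger SSI parsing).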

--
Chris F.A. Johnson

David E. Ross

May 22, 2015, 6:21:41 PM
Aha! Individuals familiar with SSIs.

I have a number of SSIs that worked perfectly well on a prior Web host.
That host, however, decided to eliminate personal customers and
concentrate on commercial customers. Thus, I had to find an alternative
host.

With my current host, a simple SSI works great. More complicated SSIs,
however, do not work at all. All my SSIs are coded in the UNIX Korn shell.
Instead of changing hosts again, can you suggest what I can do to make
my UNIX scripts work?

Before someone objects to this being off-topic for
comp.infosystems.www.authoring.html, I posted a similar message at
comp.infosystems.www.servers.unix but never received an answer.

--
David E. Ross

Why do we tolerate political leaders who
spend more time belittling hungry children
than they do trying to fix the problem of
hunger? <http://mazon.org/>

Chris F.A. Johnson

May 22, 2015, 11:08:04 PM
On 2015-05-22, David E. Ross wrote:
> On 5/22/2015 1:59 PM, Chris F.A. Johnson wrote:
>> On 2015-05-22, masonc wrote:
>>> I have many pages with three columns (or divs).
>>> Two are repeated on all pages. One is unique to that page.
>>>
>>> I think I could use server-side includes for the two repeaters.
>>>
>>> Would this slow down page loading?
>>
>> I haven't noticed a slowdown, and I use SSI a lot. For example,
>> this page has 8 SSIs: <http://torquiz.cfaj.ca/>, and I don't notice
>> a delay.
>>
>
> Aha! Individuals familiar with SSIs.
>
> I have a number of SSIs that worked perfectly well on a prior Web host.
> That host, however, decided to eliminate personal customers and
> concentrate on commercial customers. Thus, I had to find an alternative
> host.
>
> With my current host, a simple SSI works great. More complicated SSIs,
> however, do not work at all. All my SSIs are coded in the UNIX Korn shell.

What's the difference between a "simple" and a "complex" script?

> Instead of changing hosts again, can you suggest what I can do to make
> my UNIX scripts work?

Can you be more specific about what the problem is?

One of the most common problems is forgetting to give a script the
right permissions: a+rx, go-w
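
In numeric terms that is usually 0755. For example (the script name
here is hypothetical):

chmod a+rx,go-w get_index.ksh   # readable/executable by everyone, writable only by owner
ls -l get_index.ksh             # should now show -rwxr-xr-x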


--
Chris F.A. Johnson

David E. Ross

May 23, 2015, 12:47:12 AM
In both of the following, some statements have been made into comments
in an attempt to make them work. For the first of these, the attempt
was successful. Both scripts have permissions 0755, which corresponds
to chmod a+rx,go-w.

As a reminder, all my scripts worked fine on a prior Web host's server.

A simple script that works displays the current date and time as
specified in RFC 822:

#!/bin/ksh
# Display current date-time in RFC 822 format

# print 'Content-type: text/html'
# print ''

now=$(date -R)
/bin/echo $now

================================================

A complex script that does not work is supposed to list all the HTML
files in my space on the Web server, in UNIX ls format but formatted
as a Web page:

#!/bin/ksh
# Create index file, listing files

typeset -L restline

# fromwhere=${HTTP_REFERER%/*} # Web site (directory, not page)

\rm -f *.fidx # get rid of old files (if any)

savedir=$(pwd) # save old directory
# cd .. # get into directory with HTML files
# cd ~ # get into directory with HTML files

\rm -f index_list.html

ls -ld *.html | grep -v 'get_index.shtml' | grep -v 'lrwxr' > $savedir/raw.fidx
# save only HTML files, but not caller of this file
ls -ld CA_review/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include CA_review directory
ls -ld Canada_trip/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include Canada_trip directory
ls -ld cooking/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include cooking directory
ls -ld editorials/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include editorials directory
ls -ld frauds_n_hoaxes/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include frauds_n_hoaxes directory
ls -ld garden/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include garden directory, exclude diary subdirectory
ls -ld garden/diary/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include garden diary directory
ls -ld internet/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include internet directory
ls -ld malaprops/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include malaprops directory
ls -ld PGP/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include PGP directory
ls -ld quips/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include quips directory
ls -ld SocSec/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include SocSec directory
ls -ld taxes/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include taxes directory
ls -ld unemployed/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include unemployed directory
ls -ld UPS_sucks/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
# include UPS directory

cd $savedir # return to original directory

awk '{ print $5, $6, $7, $8, $9 }' raw.fidx > temp.fidx
# extract size, date, and file-name

touch list.fidx
while read size date1 date2 date3 fname; do
print -n '<tr><td><a href="'$fname'">'$fname'</a></td>' >> list.fidx
print -n '<td align="right">'$size'</td>' >> list.fidx
print '<td>'$date1 $date2 $date3'</td></tr>' >> list.fidx
done < temp.fidx

print '</table>' >> list.fidx
print ' ' >> list.fidx

print '<p align="center">This index was created' \
$(date +'%A, %e %b %Y at %T %Z') >> list.fidx

cat front.tidx list.fidx back.tidx > index.fidx

# print 'Content-type: text/html'
# print ''
while read indexline; do
print $indexline
done < index.fidx

\rm -f *.fidx # get rid of temp files

Here, front.tidx is a text file containing the HTML <head> section and
the beginning of the <body> section, and back.tidx is a text file
containing the HTML for the page's footer with the </body> and </html>
markup.

Chris F.A. Johnson

May 23, 2015, 3:08:05 AM
Which part of the script is not working?

Does it work when you run it at the command prompt?

> typeset -L restline
>
> # fromwhere=${HTTP_REFERER%/*} # Web site (directory, not page)
>
> \rm -f *.fidx # get rid of old files (if any)
>
> savedir=$(pwd) # save old directory

There's no need to call an external command (pwd); use:

savedir=$PWD

Unless there are a huge number of files, that entire block of ls
commands can be reduced to:

{
    ls -ld *.html | grep -v 'get_index.shtml'
    ls -ld CA_review/*.html Canada_trip/*.html cooking/*.html editorials/*.html frauds_n_hoaxes/*.html \
       garden/*.html garden/diary/*.html internet/*.html malaprops/*.html PGP/*.html quips/*.html \
       SocSec/*.html taxes/*.html unemployed/*.html UPS_sucks/*.html
} | grep -v 'lrwxr' > $savedir/raw.fidx

(And if there _are_ too many files, break the second "ls -ld" into two
or three separate commands.)

>
> cd $savedir # return to original directory
>
> awk '{ print $5, $6, $7, $8, $9 }' raw.fidx > temp.fidx
> # extract size, date, and file-name

<http://mywiki.wooledge.org/ParsingLs>
(I think Greg overstates the case, but it's something to be aware of.)
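
The classic trap is a filename containing whitespace: awk splits on
spaces, so $9 catches only the first word of such a name. A sketch of
a middle ground that takes the name from the shell glob rather than
from ls output (still using ls for size and date, one file at a time):

for f in *.html; do
    [ -f "$f" ] || continue      # skip if the glob matched nothing
    set -- $(ls -ld "$f")        # fields: perms links owner group size date...
    print "$5 $6 $7 $8 $f"       # size, three date fields, glob-supplied name
done > temp.fidx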

> touch list.fidx
> while read size date1 date2 date3 fname; do
> print -n '<tr><td><a href="'$fname'">'$fname'</a></td>' >> list.fidx
> print -n '<td align="right">'$size'</td>' >> list.fidx
> print '<td>'$date1 $date2 $date3'</td></tr>' >> list.fidx
> done < temp.fidx
>
> print '</table>' >> list.fidx
> print ' ' >> list.fidx
>
> print '<p align="center">This index was created' \
> $(date +'%A, %e %b %Y at %T %Z') >> list.fidx
>
> cat front.tidx list.fidx back.tidx > index.fidx
>
> # print 'Content-type: text/html'
> # print ''
> while read indexline; do
> print $indexline
> done < index.fidx
>
> \rm -f *.fidx # get rid of temp files
>
> Here, front.tidx is a text file containing the HTML <head> section and
> the beginning of the <body> section, and back.tidx is a text file
> containing the HTML for the page's footer with the </body> and </html>
> markup.

You don't need the closing BODY and HTML tags.

Do you have a DOCTYPE as the first line of front.tidx?
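
If it doesn't, browsers will render the page in quirks mode. For
reference, one plausible opening for front.tidx (the title and table
header here are guesses based on your row format):

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
   "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head><title>Site index</title></head>
<body>
<table>
<tr><th>File</th><th>Size</th><th>Date</th></tr>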

--
Chris F.A. Johnson

David E. Ross

May 23, 2015, 10:44:53 AM
On 5/22/2015 11:20 PM, Chris F.A. Johnson wrote [in part]:
>
> You don't need the closing BODY and HTML tags.
>
> Do you have a DOCTYPE as the first line of front.tidx?
>

Yes, front.tidx has a DOCTYPE.

aitch

Jun 2, 2015, 5:30:05 PM
David E. Ross wrote:

> In both of the following, some statements have been made into comments
> in an attempt to make them work. For the first of these, the attempt
> was successful. Both scripts have permissions 0755, which corresponds
> to chmod a+rx,go-w.
>
> As a reminder, all my scripts worked fine on a prior Web host's server.
>
> A simple script that works displays the current date and time as
> specified in RFC 822:
>
> #!/bin/ksh
> # Display current date-time in RFC 822 format
>
> # print 'Content-type: text/html'
> # print ''
>
> now=$(date -R)
> /bin/echo $now

You could have just done this:

#v+
#!/bin/ksh
echo -e 'Content-Type: text/plain\n'
date -R
#v-
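
Run at a shell prompt, that prints the header line, a blank line, and
then the date, which is exactly the CGI output the server expects to
pass through.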

> A complex script that does not work is supposed to list all the HTML
> files in my space on the Web server, in UNIX ls format but formatted
> as a Web page:
>
> #!/bin/ksh
> # Create index file, listing files
>
> typeset -L restline
>
> # fromwhere=${HTTP_REFERER%/*} # Web site (directory, not page)
>
> \rm -f *.fidx # get rid of old files (if any)
>
> savedir=$(pwd) # save old directory
> # cd .. # get into directory with HTML files
> # cd ~ # get into directory with HTML files
>
> \rm -f index_list.html
>
> ls -ld *.html | grep -v 'get_index.shtml' | grep -v 'lrwxr' > $savedir/raw.fidx
> # save only HTML files, but not caller of this file
> ls -ld CA_review/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
> # include CA_review directory
[... loads of similar lines snipped ...]
> ls -ld UPS_sucks/*.html | grep -v 'lrwxr' >> $savedir/raw.fidx
> # include UPS directory
>
> cd $savedir # return to original directory
>
> awk '{ print $5, $6, $7, $8, $9 }' raw.fidx > temp.fidx
> # extract size, date, and file-name
>
> touch list.fidx
> while read size date1 date2 date3 fname; do
> print -n '<tr><td><a href="'$fname'">'$fname'</a></td>' >> list.fidx
> print -n '<td align="right">'$size'</td>' >> list.fidx
> print '<td>'$date1 $date2 $date3'</td></tr>' >> list.fidx
> done < temp.fidx
>
> print '</table>' >> list.fidx
> print ' ' >> list.fidx
>
> print '<p align="center">This index was created' \
> $(date +'%A, %e %b %Y at %T %Z') >> list.fidx
>
> cat front.tidx list.fidx back.tidx > index.fidx
>
> # print 'Content-type: text/html'
> # print ''
> while read indexline; do
> print $indexline
> done < index.fidx
>
> \rm -f *.fidx # get rid of temp files

It probably isn't working because the UID that the script runs under
doesn't have write permission on the $savedir directory. Your previous
server must have had more relaxed (insecure) permissions. Using the
proper directory for temp files should help:
savedir=${TMPDIR:-/tmp}
If you do this, you'll have to specify a path for the *.tidx files
when you cat them with list.fidx.
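
Something like this, assuming the host has mktemp (not guaranteed
everywhere; ${TMPDIR:-/tmp}/name.$$ is the traditional fallback),
which also gives every run its own files:

#v+
workdir=$(mktemp -d "${TMPDIR:-/tmp}/fidx.XXXXXX") || exit 1
trap 'rm -rf "$workdir"' EXIT   # clean up however the script exits
# ... build raw.fidx, temp.fidx and list.fidx under $workdir as before ...
cat front.tidx "$workdir"/list.fidx back.tidx > "$workdir"/index.fidx
#v-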

Your script seems needlessly complex though, and with all those
temporary files, the output could get munged if there are several
instances running. If you've got access to "find", you could do
something like this (lines may wrap):

#v+
#!/bin/ksh
echo -e 'Content-Type: text/plain\n'
find $DOCUMENT_ROOT -name \*.html -type f -printf \
' <tr><td><a href="/%P">/%P</a></td><td align="right">%s</td><td>%TY-%Tm-%Td</td></tr>\n' \
2>/dev/null | sort -t \> -k 4.2,5
#v-

No temporary files necessary, and you won't have to edit the script if
you add or remove any subdirectories. I've specified the Content-Type
as text/plain, as the output isn't a standalone web page.

> Here, front.tidx is a text file containing the HTML <head> section and
> the beginning of the <body> section, and back.tidx is a text file
> containing the HTML for the page's footer with the </body> and </html>
> markup.

There's no need to do that. You can keep all the static HTML in
get_index.shtml and call the CGI script from inside the table:

#v+
<!DOCTYPE ...>
<html>
...
<table>
<tr><th>File</th><th>Size</th><th>Date</th></tr>
<!--#include virtual="/scriptdir/scriptname.ksh" -->
</table>
<!--#config timefmt="%F %R%z" -->
<p>This index was created <!--#echo var="DATE_LOCAL" --></p>
...
</body>
</html>
#v-

I'm no expert BTW, but HTH.

--
aitch