
How can I use compression over a socket?


Chris Gokey

Feb 25, 2002, 8:15:25 AM
I noticed that most of the java.util.zip classes are geared toward
file-based compression. I'd like to send data over a socket where the
data is intermittent, i.e., the data isn't complete the way it is in a file.
I've tried doing something like this:

in = <some input stream>
out = socket.getOutputStream();
out = new DeflaterOutputStream(out);
byte[] buffer = new byte[1024];
while (true) {
    int noBytes = in.read(buffer);   // block until data is available
    if (noBytes == -1) break;        // end of input
    out.write(buffer, 0, noBytes);   // write out compressed data
    out.flush();
}

I found this link:
http://java.sun.com/products/jdk/1.2/docs/guide/rmi/sockettype.doc.html

But the page specifically says:
Please note: This algorithm is not recommended for use in any application
requiring data compression. It is included only for this example and is
not intended for practical use.

If someone could point to me to where I could find a package that
implements this type of compression, I'd really appreciate it.

I've looked through many usenet posting, but have yet to find anything
that works. I've got some ideas how to implement my own, but would rather
not reinvent the wheel, I can't imagine that someone hasn't already done
this.

Chris
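The stall Chris is running into can be reproduced without a socket. This is a minimal sketch (an editor's illustration, not part of the thread): with a plain DeflaterOutputStream, flush() only flushes the underlying stream, while the Deflater itself keeps holding the data; nothing useful reaches the peer until finish().

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DeflaterOutputStream;

public class FlushStall {
    // returns {bytes on the wire after flush(), bytes on the wire after finish()}
    public static int[] run() throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream(); // stands in for the socket
        DeflaterOutputStream out = new DeflaterOutputStream(wire);

        out.write(new byte[100]); // one intermittent message
        out.flush();              // flushes the underlying stream only
        int afterFlush = wire.size(); // at most a couple of header bytes so far

        out.finish();             // only now is the compressed message forced out
        return new int[] { afterFlush, wire.size() };
    }

    public static void main(String[] args) throws IOException {
        int[] r = run();
        System.out.println("after flush(): " + r[0] + " bytes, after finish(): " + r[1]);
    }
}
```

This is why the rest of the thread moves toward per-chunk streams that get finish()ed explicitly.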

devnull

Feb 25, 2002, 3:21:37 PM
I don't think that's easy to do, because most compression techniques
(all?) use a kind of dictionary to compress data and therefore need the
complete data to set priorities (see the Huffman algorithm), which makes
your task difficult.

You could, however, try to split the data you want to transmit into
smaller fragments: wait until you have accumulated 5000 bytes, send them
compressed, wait for another 5000, and so on...

A. Bolmarcich

Feb 25, 2002, 5:45:10 PM
In article <20020225.091827...@gcmd.nasa.gov>, Chris Gokey wrote:
> I noticed that most of the java.util.zip classes are geared toward
> file-based compression. I'd like to send data over a socket where the
> data is intermittent, i.e., the data isn't complete, like it is in a file.
> I've tried doing something like this:
>
> in = <some input stream>
> out = socket.getOutputStream();
> out = new DeflaterOutputStream(out);
> byte[] buffer = new byte[1024];
> while (true) {
>     int noBytes = in.read(buffer);   // block until data is available
>     out.write(buffer, 0, noBytes);   // write out compressed data
>     out.flush();
> }
>
> I found this link:
> http://java.sun.com/products/jdk/1.2/docs/guide/rmi/sockettype.doc.html

If you want to use the number of bytes each read gets as the amount
to compress and send at a time, you can change the above code to

in = <some input stream>
out = socket.getOutputStream();

byte[] buffer = new byte[1024];

int noBytes = in.read(buffer); // block until data available

while (noBytes > 0) {
    DeflaterOutputStream dos = new DeflaterOutputStream(out);
    DataOutputStream dataOut = new DataOutputStream(dos);

    dataOut.writeInt(noBytes);         // write number of data bytes
    dataOut.write(buffer, 0, noBytes); // write out compressed data
    dataOut.flush();
    dos.finish();
    out.flush();
}

The basic idea is to use a separate DeflaterOutputStream for each chunk
being compressed and invoke the finish() method of DeflaterOutputStream
after each chunk of compressed data has been written. Because the
reading end of the socket will have to use a new InflaterInputStream
for each deflated chunk, it needs to know the size of each chunk.

The code for the reading end of the socket would look like

in = socket.getInputStream();

byte[] buffer = new byte[1024];
while (true) {
    InflaterInputStream iis = new InflaterInputStream(in);
    DataInputStream dataIn = new DataInputStream(iis);
    int noBytes;
    try {
        noBytes = dataIn.readInt();
    } catch (EOFException ee) {
        break;
    }
    dataIn.readFully(buffer, 0, noBytes); // read() might return fewer bytes
}
in.close();

Chris Gokey

Feb 25, 2002, 11:55:23 PM
This approach seems like it will work. I suppose increasing the size of
the buffer would further increase compression and reduce overhead;
maybe using a larger array or a ByteArrayOutputStream would help here.

Also, writing an unsigned short instead of an int could improve
performance here (2 bytes rather than 4, and it still holds lengths up
to 65535). Actually, I just looked in DataOutputStream and I only see a
writeShort(...). Can you write an unsigned short?

Obviously the overhead of writing the length of the buffer each time
would be worthwhile if the stream is sending large chunks of data, but
if it is instead sending bursts of small chunks, I wonder if that extra
short would be counterproductive...

It still seems like Java should provide a
CompressedInputStream/CompressedOutputStream where you could specify the
buffer size and it would handle this type of thing for you, but I do
like your solution.

Thanks, I think I'll try this and let you know my results.

Chris


A. Bolmarcich

Feb 26, 2002, 9:42:11 AM
In article <20020226.005822....@gcmd.nasa.gov>, Chris Gokey wrote:
> This approach seems like it will work. I suppose increasing the size of
> the buffer would further increase compression and reduce overhead;
> maybe using a larger array or a ByteArrayOutputStream would help here.
>
> Also, writing an unsigned short instead of an int could improve
> performance here (2 bytes rather than 4, and it still holds lengths up
> to 65535). Actually, I just looked in DataOutputStream and I only see a
> writeShort(...). Can you write an unsigned short?

You can write an unsigned short by writing a char. If you know that
the value is in the range of a char (0 to 65535), use the
writeChar(int) method.
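In other words (an editor's sketch, not from the thread): writeChar(int) writes exactly two bytes, and readChar() on the other end recovers the full 0 to 65535 range that a signed short could not carry.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class UnsignedShortLength {
    public static int roundTrip(int length) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(baos);
        out.writeChar(length);               // exactly two bytes, value 0..65535

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(baos.toByteArray()));
        return in.readChar();                // char is an unsigned 16-bit value
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip(65535)); // prints 65535
    }
}
```

readShort() on the same bytes would come back negative for values above 32767; readChar() avoids the sign extension.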

> Obviously the overhead of writing the length of the buffer each time
> would be worthwhile if the stream is sending large chunks of data, but
> if it is instead sending bursts of small chunks, I wonder if that extra
> short would be counterproductive...

What is also counterproductive from a compression point of view is using
a new DeflaterOutputStream each time. A new DeflaterOutputStream starts
from scratch; better compression may be obtained by continuing to
use the previous DeflaterOutputStream.
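The effect is easy to measure. The sketch below is an editor's illustration, not from the thread, and it uses Deflater.SYNC_FLUSH, which only exists in later JDKs (7+): it compresses the same chunk twice, once with a fresh Deflater per chunk and once with a single Deflater kept across chunks. The shared history makes the second copy almost free.

```java
import java.util.Random;
import java.util.zip.Deflater;

public class DictionaryCarryover {
    // Fresh Deflater per chunk: no shared history between chunks.
    public static int freshPerChunk(byte[] chunk, int repeats) {
        int total = 0;
        for (int i = 0; i < repeats; i++) {
            Deflater d = new Deflater();
            d.setInput(chunk);
            d.finish();
            byte[] out = new byte[chunk.length * 2 + 64];
            int n = 0, len;
            while ((len = d.deflate(out, n, out.length - n)) > 0) n += len;
            d.end();
            total += n;
        }
        return total;
    }

    // One Deflater for all chunks; SYNC_FLUSH delimits each chunk
    // without resetting the dictionary.
    public static int oneDeflater(byte[] chunk, int repeats) {
        Deflater d = new Deflater();
        byte[] out = new byte[(chunk.length * 2 + 64) * repeats];
        int total = 0;
        for (int i = 0; i < repeats; i++) {
            d.setInput(chunk);
            total += d.deflate(out, total, out.length - total, Deflater.SYNC_FLUSH);
        }
        d.end();
        return total;
    }

    public static void main(String[] args) {
        byte[] chunk = new byte[2000];
        new Random(42).nextBytes(chunk);   // incompressible on its own

        System.out.println("fresh per chunk: " + freshPerChunk(chunk, 2) + " bytes");
        System.out.println("one deflater:    " + oneDeflater(chunk, 2) + " bytes");
    }
}
```

With random (incompressible) data, the fresh-per-chunk total is roughly twice the chunk size, while the continued deflater encodes the second, identical chunk as back-references into its window.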

Chris Gokey

Feb 27, 2002, 9:17:09 AM
I took a closer look at this last night and I don't think this will work.
See below where the problem exists:

> in = <some input stream>
> out = socket.getOutputStream();
> byte[] buffer = new byte[1024];
> int noBytes = in.read(buffer); // block until data available
> while (noBytes > 0) {
>     DeflaterOutputStream dos = new DeflaterOutputStream(out);
>     DataOutputStream dataOut = new DataOutputStream(dos);
>
>     dataOut.writeInt(noBytes); // write number of data bytes

## The line above ends up writing the wrong size for the chunk (it uses
the uncompressed size). Since it isn't possible with DeflaterOutputStream
to know how much the data will compress until you actually send it to the
stream, I was thinking of instead using the Deflater object: give it a
chunk of bytes and have it compress those bytes into a byte array, i.e.,

in = <some input stream>
out = socket.getOutputStream();

byte[] buffer = new byte[1024];  // uncompressed buffer
byte[] cbuffer = new byte[1024]; // compressed buffer

Deflater deflater = new Deflater();
DataOutputStream dataOut = new DataOutputStream(out);

int noBytes = in.read(buffer); // block until data available

while (noBytes > 0) {
    deflater.setInput(buffer, 0, noBytes);
    deflater.finish();
    int noCompBytes = deflater.deflate(cbuffer);
    deflater.reset();
    dataOut.writeInt(noCompBytes);          // write number of compressed data bytes
    dataOut.write(cbuffer, 0, noCompBytes); // write out compressed data
    dataOut.flush();
    noBytes = in.read(buffer);
}

I am looking to do the equivalent on the receiving side, i.e.:

DataInputStream dataIn = new DataInputStream(in);

byte[] buffer = new byte[1024];

int noBytes = dataIn.readInt();
byte[] cbuffer = new byte[noBytes];
dataIn.readFully(cbuffer, 0, noBytes);

Inflater inflater = new Inflater();
inflater.setInput(cbuffer);
int uncompressedBytes = inflater.inflate(buffer);
inflater.reset();

A few things here... I noticed that Inflater has no finish() method. Do
I somehow need to tell the Inflater when the data is complete, as I must
do with the Deflater?

Second, it appears that the number of bytes returned from
inflater.inflate(buffer) is always 1024... it always returns the size
of the array rather than the number of uncompressed bytes.
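On those two questions (an editor's sketch, not part of the thread): the reading side needs no finish(), because the deflate data itself carries an end-of-stream marker written by the Deflater's finish(); inflater.finished() turns true when that marker is consumed. And inflate(buf) returns the number of bytes actually produced, which only equals buf.length when the buffer fills up.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflateCount {
    // returns {bytes produced by inflate(), 1 if the inflater reports finished}
    public static int[] demo() throws DataFormatException {
        byte[] data = "hello hello hello hello".getBytes(StandardCharsets.US_ASCII); // 23 bytes

        Deflater def = new Deflater();
        def.setInput(data);
        def.finish();            // writes the end-of-stream marker into the deflate data
        byte[] comp = new byte[256];
        int clen = def.deflate(comp);
        def.end();

        Inflater inf = new Inflater();
        inf.setInput(comp, 0, clen);
        byte[] out = new byte[1024];
        int n = inf.inflate(out);      // actual byte count, not the buffer size
        boolean done = inf.finished(); // true: the marker ends the stream
        inf.end();
        return new int[] { n, done ? 1 : 0 };
    }

    public static void main(String[] args) throws DataFormatException {
        int[] r = demo();
        System.out.println("inflated " + r[0] + " bytes, finished=" + (r[1] == 1));
    }
}
```

If inflate() does keep returning the full buffer size, that usually means the compressed input really does expand to more than the buffer holds, and inflate() should be called again until finished() is true.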

Anyhow, that is where I'm at... Any comments are welcome.

Chris

A. Bolmarcich

Feb 27, 2002, 10:19:05 AM
In article <20020227.102006...@gcmd.nasa.gov>, Chris Gokey wrote:
> I took a closer look at this last night and I don't think this will work.
> See below where the problem exists:
>
>> in = <some input stream>
>> out = socket.getOutputStream();
>> byte[] buffer = new byte[1024];
>> int noBytes = in.read(buffer); // block until data available
>> while (noBytes > 0) {
>>     DeflaterOutputStream dos = new DeflaterOutputStream(out);
>>     DataOutputStream dataOut = new DataOutputStream(dos);
>>
>>     dataOut.writeInt(noBytes); // write number of data bytes
>
> ## The line above will end up printing the incorrect size of the chunk.

The reading end of the socket will request that many uncompressed bytes.
It is up to the decompresser to read the correct number of compressed
bytes needed to produce the requested number of uncompressed bytes.
As long as the decompresser reads exactly that number of bytes from the
stream, the code I gave works correctly.

There is a problem with the code I posted: I forgot to put a

noBytes = in.read(buffer);

in the loop.
[snip]

Chris Gokey

Feb 27, 2002, 1:29:18 PM
I attached three classes: CompressionInputStream, CompressionOutputStream,
and Test. You specify the size of the buffer in CompressionOutputStream
and it automatically handles flushing when the buffer fills up. (I mostly
adapted it from BufferedOutputStream/BufferedInputStream.)

The CompressionOutputStream seems to work fine and appears to write the
data correctly, but I'm having problems with the CompressionInputStream.
The first iteration of reading the compressed data works fine: it grabs
the int telling it how many bytes to decompress and reads the data...
But on the next iteration through the loop, when it tries to read the
next int, it throws this exception:

[cgokey@mylaptop zip]$ java Test /home/cgokey/notes2.txt > abc.txt
writing 2000
writing 2000
writing 824
reading=2000
Exception in thread "main" java.util.zip.ZipException: unknown compression method

at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:139)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:104)
at java.io.DataInputStream.readInt(DataInputStream.java:338)
at CompressedInputStream.fill(CompressedInputStream.java:25)
at CompressedInputStream.read(CompressedInputStream.java:38)
at Test.main(Test.java:22)

Any ideas why it can't read that next INT?

Chris

CompressedInputStream.java
CompressedOutputStream.java
Test.java

A. Bolmarcich

Feb 27, 2002, 7:53:32 PM
In article <3C7D343F...@gcmd.nasa.gov>, Chris Gokey wrote:
[snip]

> Any ideas why it can't read that next INT?

It looks like a case of insufficient testing by me. I tested the idea
using sockets, rather than a file as used in your program. [The subject
of the thread is about transmitting over a socket.] Due to the timing
of the writing and reading threads, the reading thread had available to
it only as many bytes as were written in one compressed chunk by the
writing thread.

As I wrote in a previous post, the technique works because the reading
thread reads only as many compressed bytes as are needed to produce
the requested number of uncompressed bytes. When reading from a file,
more than that number of bytes are available.

A technique that will work, but will use more space, is to have the
writer:

1. deflate to a ByteArrayOutputStream
2. write the ByteArrayOutputStream length and array to the OutputStream

and have the reader:

1. read the length of the compressed array
2. read the compressed array
3. construct a ByteArrayInputStream on the array
4. inflate from the ByteArrayInputStream

Here are (somewhat sloppy) changes to your code that do this.

filename="CompressedInputStream.java"

import java.io.*;
import java.util.zip.*;

public class CompressedInputStream {
    protected DataInputStream dataIn;  // InputStream being read
    protected InflaterInputStream iis; // Inflater on compressed data

    public CompressedInputStream(InputStream in) throws IOException {
        this.dataIn = new DataInputStream(in);
        fill();
    }

    // Establish InflaterInputStream on next chunk of compressed bytes
    private void fill() throws IOException {
        byte buf[];
        if (iis != null) {
            iis.close();
            iis = null;
        }
        int noBytes;
        try {
            noBytes = dataIn.readInt();
            buf = new byte[noBytes];
        } catch (EOFException ee) {
            return;
        }
        System.err.println("reading=" + noBytes);
        dataIn.readFully(buf);
        iis = new InflaterInputStream(new ByteArrayInputStream(buf));
    }

    public synchronized int read() throws IOException {
        int ret = iis.read();

        while (ret == -1) {
            fill();
            if (iis == null) {
                // there is no more compressed data
                break;
            }
            // decompress first byte (returns -1 if writer wrote a
            // chunk of compressed data of size 0)
            ret = iis.read();
        }

        return ret;
    }

    public void close() throws IOException {
        if (iis != null) {
            iis.close();
            iis = null;
        }
        if (dataIn != null) {
            dataIn.close();
            dataIn = null;
        }
    }
}


filename="CompressedOutputStream.java"

import java.io.*;
import java.util.zip.*;

public class CompressedOutputStream extends FilterOutputStream {
    protected byte buf[]; // internal buffer where uncompressed data is stored
    protected int count;  // number of valid bytes in the buffer
    protected int size;   // total size of the buffer
    protected DataOutputStream dataOut; // DataOutputStream on OutputStream

    public CompressedOutputStream(OutputStream out) {
        this(out, 512);
    }

    public CompressedOutputStream(OutputStream out, int size) {
        super(out);
        if (size <= 0) {
            throw new IllegalArgumentException("Buffer size <= 0");
        }
        this.size = size;
        buf = new byte[size];
        dataOut = new DataOutputStream(out);
    }

    private void flushBuffer() throws IOException {
        if (count > 0) {
            writeBuffer(buf, 0, count);
            count = 0;
        }
    }

    private void writeBuffer(byte buf[], int offset, int len) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream(len / 2 + 1);
        DeflaterOutputStream dos = new DeflaterOutputStream(baos);
        dos.write(buf, offset, len);
        dos.close();
        dataOut.writeInt(baos.size()); // write number of compressed data bytes
        System.err.println("writing " + len);
        baos.writeTo(dataOut);         // write out compressed data
        dataOut.flush();
    }

    public synchronized void write(int b) throws IOException {
        if (count >= buf.length) {
            flushBuffer();
        }
        buf[count++] = (byte) b;
    }

    public synchronized void write(byte b[], int off, int len) throws IOException {
        if (len >= buf.length) {
            /* If the request length exceeds the size of the output buffer,
               flush the output buffer and then write the data directly.
               In this way buffered streams will cascade harmlessly. */
            flushBuffer();
            writeBuffer(b, off, len);
            return;
        }
        if (len > buf.length - count) {
            flushBuffer();
        }
        System.arraycopy(b, off, buf, count, len);
        count += len;
    }

    public synchronized void flush() throws IOException {
        flushBuffer();
        out.flush();
    }
}
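The scheme in these two classes (the numbered writer/reader steps above) can be exercised without sockets. This is a minimal round trip of the editor's own, assuming the same wire format: a 4-byte length followed by a finished deflate stream per chunk.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class ChunkRoundTrip {
    public static byte[][] roundTrip(byte[][] chunks) throws IOException {
        // Writer: deflate each chunk to a byte array, prefix with its length.
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        DataOutputStream dataOut = new DataOutputStream(wire);
        for (byte[] chunk : chunks) {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            DeflaterOutputStream dos = new DeflaterOutputStream(baos);
            dos.write(chunk);
            dos.close();                   // finish() this chunk's deflate stream
            dataOut.writeInt(baos.size()); // length of the *compressed* chunk
            baos.writeTo(dataOut);
        }
        dataOut.flush();

        // Reader: read the length, read exactly that many bytes, inflate them.
        DataInputStream dataIn = new DataInputStream(
                new ByteArrayInputStream(wire.toByteArray()));
        byte[][] result = new byte[chunks.length][];
        for (int i = 0; i < chunks.length; i++) {
            byte[] comp = new byte[dataIn.readInt()];
            dataIn.readFully(comp);
            InflaterInputStream iis =
                    new InflaterInputStream(new ByteArrayInputStream(comp));
            ByteArrayOutputStream plain = new ByteArrayOutputStream();
            int b;
            while ((b = iis.read()) != -1) plain.write(b);
            result[i] = plain.toByteArray();
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        byte[][] chunks = {
            "first chunk".getBytes(StandardCharsets.US_ASCII),
            "second chunk".getBytes(StandardCharsets.US_ASCII)
        };
        for (byte[] b : roundTrip(chunks)) {
            System.out.println(new String(b, StandardCharsets.US_ASCII));
        }
    }
}
```

Because each InflaterInputStream sits on a private ByteArrayInputStream, its internal read-ahead can never swallow the next chunk's bytes.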

Chris Gokey

Feb 27, 2002, 10:24:17 PM
I plugged it into my socket application (the subject of the thread :))
and it worked great. Thanks a million for all your help.

Chris


Chris Gokey

Feb 27, 2002, 9:08:29 PM
In article <slrna7qvsa....@earl-grey.cloud9.net>, "A. Bolmarcich"
<agg...@earl-grey.cloud9.net> wrote:

> In article <3C7D343F...@gcmd.nasa.gov>, Chris Gokey wrote: [snip]
>> Any ideas why it can't read that next INT?
>
> It looks like a case of insufficient testing by me. I tested the idea
> using sockets rather than a file as used in your program. [The subject
> of the thread is about transmitting over a socket.] Due to the timing
> of the writing and reading threads, the reading thread had available to
> it only as many bytes as were written in one compressed chunk by the
> writing thread.
> As I wrote in a previous post, the technique works because the reading
> thread reads only as many compressed bytes as are needed to produce the
> requested number of uncompressed bytes. When reading from a file, more
> than that number of bytes are available.

In the case here, the reading thread knows exactly how much data it needs
to decompress because it is specified in the int sent just prior to the
compressed chunk. So, it shouldn't matter if there is data available in
the stream or not. It is told to read exactly the specified amount. So,
I would think this code should work on file data as well as socket data.

> A technique that will work, but will use more space is to have the
> writer:

I like this idea. This may work better because you are opening a new
InflaterInputStream for each chunk read; therefore, it may not produce the
same glitches produced by the other code.

Thanks for your responses.
Chris

A. Bolmarcich

Feb 28, 2002, 12:31:21 AM
In article <20020227.221127...@gcmd.nasa.gov>, Chris Gokey wrote:
> In article <slrna7qvsa....@earl-grey.cloud9.net>, "A. Bolmarcich"
> <agg...@earl-grey.cloud9.net> wrote:
[snip]

>> As I wrote in a previous post, the technique works because the reading
>> thread reads only as many compressed bytes as are needed to produce the
>> requested number of uncompressed bytes. When reading from a file, more
>> than that number of bytes are available.
>
> In the case here, the reading thread knows exactly how much data it needs
> to decompress because it is specified in the int sent just prior to the
> compressed chunk. So, it shouldn't matter if there is data available in
> the stream or not. It is told to read exactly the specified amount. So,
> I would think this code should work on file data as well as socket data.

Not quite. The writing thread writes the number of uncompressed bytes,
say 300, and then the compressed bytes, say 175 of them. The reading
thread reads the number of uncompressed bytes and then requests 300
bytes. The decompressor needs to read at least 175 bytes from the stream
to return 300 bytes to the reading thread. However, it may read more than
175 bytes: the decompressor is likely reading the stream into a buffer,
and it would normally use what is left in that buffer to satisfy future
read requests.

When I tested using a socket, the writing thread flushed the socket and
was suspended. When the decompressor did a read for the number of bytes
in its buffer, the read returned only the 175 bytes that were available.
The decompressor was able to supply 300 bytes of uncompressed data from
those 175 bytes.

When reading from a file, a decompressor doing a read for the number of
bytes in its buffer will likely have its buffer filled. The same would
happen when reading from a socket if the writing thread were not
suspended while the decompressor did a read to fill its buffer.
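This read-ahead is easy to observe. In the following editor's sketch (not from the thread), two finished deflate streams sit back-to-back in one byte stream; the first InflaterInputStream's very first read pulls everything, including the second chunk, into its private 512-byte buffer, leaving nothing on the underlying stream for a second InflaterInputStream.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class ReadAhead {
    static byte[] deflate(byte[] data) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DeflaterOutputStream dos = new DeflaterOutputStream(baos);
        dos.write(data);
        dos.close(); // finish the deflate stream
        return baos.toByteArray();
    }

    public static int leftoverAfterFirstRead() throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        wire.write(deflate("chunk one".getBytes(StandardCharsets.US_ASCII)));
        wire.write(deflate("chunk two".getBytes(StandardCharsets.US_ASCII)));
        // both chunks together are well under 512 bytes

        ByteArrayInputStream underlying =
                new ByteArrayInputStream(wire.toByteArray());
        InflaterInputStream first = new InflaterInputStream(underlying);
        first.read(); // one decompressed byte; fills the internal buffer

        // Everything, including all of chunk two, now sits inside
        // `first`'s private buffer.
        return underlying.available();
    }

    public static void main(String[] args) throws IOException {
        System.out.println("bytes left on the stream: " + leftoverAfterFirstRead());
    }
}
```

Reading from a flow-controlled socket can mask this, because only one chunk's bytes may have arrived when the buffer is filled; a file (or byte array, as here) always exposes it.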

Chris Gokey

Feb 28, 2002, 12:56:19 AM
That makes perfect sense. Thanks for all your help... It runs fairly
well; I'm getting compression ratios over 50% for data streamed from the
server to the client. The client sends such small chunks that each
request from the client actually ends up larger after compression,
because the amount of data sent is so small. :) But all in all, I
think the compression does make a difference.

Again, thanks for everything.

Chris


Benjamin Chen

Mar 11, 2002, 2:53:24 PM
Hi Chris and everyone,

Over the last 2 weeks, I've been searching for an answer to the
specific problem I've been working on: serializing data transfer for
sockets over an RMI connection.

I was wondering if anyone can help me on the subject. I was SO happy
to find this thread, and I put the CompressedInputStream and
CompressedOutputStream to the test ASAP.

But the results were unsuccessful. What I attempted to do is create a
custom socket for RMI to use.

I changed CompressedInputStream to CompressionInputStream and had it
extend InputStream, and I changed CompressedOutputStream to
CompressionOutputStream. These changes were to make them compatible
with the rest of my code.

Here is the CustomSocket:

public class ZipSocket extends Socket
{
    private InputStream inStream;
    private OutputStream outStream;

    public ZipSocket() { super(); }

    public ZipSocket(String host, int port) throws IOException
    {
        super(host, port);
    }

    public InputStream getInputStream() throws IOException
    {
        if (inStream == null)
            inStream = new CompressionInputStream(super.getInputStream());
        return inStream;
    }

    public OutputStream getOutputStream() throws IOException
    {
        if (outStream == null)
            outStream = new CompressionOutputStream(super.getOutputStream());
        return outStream;
    }

    public synchronized void close() throws IOException
    {
        OutputStream o = getOutputStream();
        o.flush();
        super.close();
    }
}

It seems that when RMI creates a connection, it sends extremely small
pieces of data over the socket, which I'm guessing are mostly connection
confirmation and security information rather than real data.

It seems this initial data is so small that it is not enough to be
compressed, and this causes all my attempted custom input/output streams
to crash, no matter what approach I took (FilterStreams, ZipStreams,
GZIPStreams, In/DeflateStreams).

If anyone can give me some advice on how to approach this problem,
I will be extremely grateful.
Thanks in advance,
Benjamin Chen

Benjamin Chen

Mar 11, 2002, 4:05:13 PM
Hi Mr. Gokey,

My name is Benjamin Chen.
I was reading over Google Groups and I came across your helpful
messages regarding using data compression over sockets.

Here is the CustomSocket:

If you can give me some advice on how to approach this problem
I have right now, I will be extremely grateful.
Thanks in advance,
Benjamin Chen

"Chris Gokey" <cgo...@gcmd.nasa.gov> wrote in message news:<20020228.015916...@gcmd.nasa.gov>...

A. Bolmarcich

Mar 11, 2002, 6:34:58 PM
Note: Followup-To set to comp.lang.java.programmer

In article <20a85bbb.02031...@posting.google.com>, Benjamin Chen wrote:
[snip]


> Here is the CustomSocket:
>
> public class ZipSocket extends Socket
> {
> private InputStream inStream;
> private OutputStream outStream;
>
> public ZipSocket() { super(); }
>
> public ZipSocket(String host, int port) throws IOException
> {
> super(host, port);
> }
>
> public InputStream getInputStream() throws IOException
> {
> if (inStream == null)
> inStream = new CompressionInputStream(super.getInputStream());
> return inStream;
> }
>
> public OutputStream getOutputStream() throws IOException
> {
> if (outStream == null)
> outStream = new CompressionOutputStream(super.getOutputStream());
> return outStream;
> }
>
> public synchronized void close() throws IOException
> {
> OutputStream o = getOutputStream();
> o.flush();
> super.close();
> }
> }

Does your CompressionOutputStream correctly support flush()? Is your
CompressionInputStream able to read a stream that has been flushed?

[snip]

> Seems this amound of initial data is so small, it is not
> enough to be compressed, and this causes all my attempted
> custom input/output streams to crash, no matter what approach
> I took. (FilterStreams, ZipStreams, GZIPStreams, In/DeflateStreams)

The java.util.zip.DeflaterOutputStream does not support flush().
You need to use a compression method that supports flush.
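[This was true when the thread was written. Since Java 7, DeflaterOutputStream has a syncFlush constructor flag that makes flush() perform a Deflater.SYNC_FLUSH, pushing everything written so far to the receiver without finishing the stream. An editor's sketch, not part of the original thread:]

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.Inflater;

public class SyncFlushDemo {
    public static String flushAndPeek() throws IOException, DataFormatException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        DeflaterOutputStream out =
                new DeflaterOutputStream(wire, new Deflater(), 512, true); // syncFlush=true

        out.write("ping".getBytes(StandardCharsets.US_ASCII));
        out.flush(); // SYNC_FLUSH: the compressed "ping" is now on the wire

        // The receiver can decode it even though the stream is not finished.
        Inflater inf = new Inflater();
        inf.setInput(wire.toByteArray());
        byte[] buf = new byte[64];
        int n = inf.inflate(buf);
        inf.end();
        return new String(buf, 0, n, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(flushAndPeek()); // prints "ping"
    }
}
```

On the JDKs of 2002 the thread's length-prefixed chunk scheme (or a third-party zlib binding with flushable streams) was the workaround.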

Chris Gokey

Mar 11, 2002, 9:46:07 PM
In article <20a85bbb.02031...@posting.google.com>, "Benjamin
Chen" <nech...@yahoo.com> wrote:

> Hi Chris and everyone,
>
> Over the last 2 weeks, I've been searching for an answer to the specific
> problem I've been working on. Which is, Serializing data transfer for
> sockets over a RMI connection.

I'm not sure the CompressedOutputStream/CompressedInputStream we came up
with in this thread will work with RMI.
(See http://24.91.182.203:8080/sockets/index.html.)

These classes buffer the OutputStream until either the buffer fills up
(at which point the CompressedOutputStream flushes the data) or the
calling class manually flushes the stream.

For my particular application I was able to control when the stream
needed to be flushed. I could control this by doing something along
these lines:

in = <some input stream>
out = socket.getOutputStream();

for (;;) {
    int data = in.read();
    if (data != -1) {
        out.write(data);
        if (in.available() == 0) {
            out.flush();
        }
    } else {
        out.close();
        in.close();
        return;
    }
}

If you are trying to create your own custom RMISocketFactory where the
socket returned is a ZipSocket, I'm not sure this approach will work for
that.

Chris

Benjamin Chen

Mar 12, 2002, 9:31:53 AM
Sorry for the duplicate message, I thought the first one
didn't go through.

> Does your CompressionOutputStream correctly support flush()? Is your
> CompressionInputStream able to read a stream that has been flushed?

I'm not sure what I have to do to make sure the CompressionInputStream
is able to read a stream that has been flushed.

> The java.util.zip.DeflaterOutputStream does not support flush().
> You need to use a compression method that supports flush.

I thought the DeflaterOutputStream would just use FilterOutputStream's
flush. Is there a java compression stream which supports flush?

Maybe it isn't possible to do compression over RMI.
I have not been successful in finding a working example of
it over the internet. Has anyone had success doing that?

Here is my Code.
Mainly parts taken from this thread with some change:

package server;

import java.io.*;
import java.util.zip.*;

public class CompressionInputStream extends InputStream {

    protected DataInputStream dataIn; // InputStream being read
    protected GZIPInputStream iis;    // Inflater on compressed data

    public CompressionInputStream(InputStream in) throws IOException {
        this.dataIn = new DataInputStream(in);
        fill();
    }

    // Establish InflaterInputStream on next chunk of bytes
    private void fill() throws IOException {
        System.out.println("in fill");

        byte buf[];
        if (iis != null) {
            System.out.println("Fill 1");
            iis.close();
            System.out.println("Fill 2");
            iis = null;
            System.out.println("Fill 3");
        }
        int noBytes;

        System.out.println("dataIn " + dataIn.available());
        //System.out.println("iis " + iis.available());

        /*if (dataIn.available() == 0)
            return;*/

        try {
            System.out.println("Fill 4");
            noBytes = dataIn.readInt();
            System.out.println("Fill 5");
            buf = new byte[noBytes];
            System.out.println("Fill 6");
        } catch (EOFException ee) {
            System.out.println("Fill 7");
            return;
        }
        System.err.println("reading=" + noBytes);
        dataIn.readFully(buf);

        iis = new GZIPInputStream(new ByteArrayInputStream(buf));
        System.out.println("Done Fill");
    }

    public synchronized int read() throws IOException {
        System.out.println("Starting to read");
        int ret = iis.read();

        while (ret == -1) {
            System.out.println("In Read loop " + ret);
            fill();
            if (iis == null) {
                System.out.println("No more compressed data");
                break;
            }
            System.out.println("iss is " + iis);
            // decompress first byte (returns -1 if writer wrote a
            // chunk of compressed data of size 0)
            ret = iis.read();
        }

        System.out.println("Done Reading");
        return ret;
    }

    public void close() throws IOException {
        if (iis != null) {
            iis.close();
            iis = null;
        }
        if (dataIn != null) {
            dataIn.close();
            dataIn = null;
        }
    }
}


package server;

import java.io.*;
import java.util.zip.*;

public class CompressionOutputStream extends FilterOutputStream {

    protected byte buf[]; // internal buffer where uncompressed data is stored
    protected int count;  // number of valid bytes in the buffer
    protected int size;   // total size of the buffer
    protected DataOutputStream dataOut; // DataOutputStream on OutputStream

    public CompressionOutputStream(OutputStream out) {
        this(out, 512);
    }

    public CompressionOutputStream(OutputStream out, int size) {
        super(out);
        if (size <= 0) {
            throw new IllegalArgumentException("Buffer size <= 0");
        }
        this.size = size;
        buf = new byte[size];
        dataOut = new DataOutputStream(out);
    }

    private void flushBuffer() throws IOException {
        if (count > 0) {
            writeBuffer(buf, 0, count);
            count = 0;
        }
    }

    private void writeBuffer(byte buf[], int offset, int len) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream(len / 2 + 1);

        GZIPOutputStream dos = new GZIPOutputStream(baos);
        dos.write(buf, offset, len);
        dos.flush();
        dos.finish();
        dos.close();

        dataOut.writeInt(baos.size()); // write number of compressed data bytes
        System.err.println("writing " + len);
        baos.writeTo(dataOut);         // write out compressed data
        dataOut.flush();
    }

    public synchronized void write(int b) throws IOException {
        System.out.println("Starting to write 2");
        if (count >= buf.length) {
            flushBuffer();
        }
        buf[count++] = (byte) b;
        System.out.println("Finished to write 2");
    }

    public synchronized void write(byte b[], int off, int len) throws IOException {
        System.out.println("Starting to write 1");
        if (len >= buf.length) {
            /* If the request length exceeds the size of the output buffer,
               flush the output buffer and then write the data directly.
               In this way buffered streams will cascade harmlessly. */
            flushBuffer();
            writeBuffer(b, off, len);
            return;
        }
        if (len > buf.length - count) {
            flushBuffer();
        }
        System.arraycopy(b, off, buf, count, len);
        count += len;

        this.flush();
        System.out.println("Finished write 1");
    }

    public synchronized void flush() throws IOException {
        flushBuffer();
        dataOut.flush();
    }
}

Benjamin Chen

Mar 12, 2002, 1:25:52 PM
On another note, I tried this with the CompressionOutputStream and
CompressionInputStream using the ZOutputStream and ZInputStream which
Chris posted on his site.

This seemed like a possible solution, but it still doesn't work.
Am I missing something in my approach? Or does stream compression
just not work with RMI?

As Chris pointed out, RMI doesn't give you access to the socket
reference, and it does all its socket input/output at a lower level
not accessible to the developer. So flushing when the buffer is
filled doesn't work.

I'm attempting to flush the output stream with each piece of byte
array written to it, and taking care of the flushing in a wrapper
class for the streams.

Anyway, it still doesn't work.

Also, A. Bolmarcich, I'm not competent enough in Java to be able to
distinguish which input streams can read from flushed streams and
which can't. If someone can explain the difference between reading
from flushed and unflushed streams, and what needs to be done on the
receiving end, that'd be great.

Thanks so very much,
Ben

Here's my code for CompressionOutputStream and CompressionInputStream,
attempting to adapt the usage of ZInputStream and ZOutputStream
suggested by Chris, in hopes of resolving any possible flushing issues.

package server;

import java.io.*;
import java.util.zip.*;

class CompressionOutputStream extends FilterOutputStream
{
    /*
     * Constructor calls constructor of superclass.
     */
    public CompressionOutputStream(OutputStream out) throws IOException
    {
        super(out);
    }

    public synchronized void write(byte b[], int off, int len) throws IOException
    {
        ZOutputStream cos = new ZOutputStream(out);
        DataOutputStream dos = new DataOutputStream(cos);
        dos.write(b, off, len);
        dos.flush();
    }

    public void flush() throws IOException
    {
        System.out.println("flushing");
        super.flush();
    }
}


package server;

import java.io.*;
import java.net.*;
import java.util.zip.*;

class CompressionInputStream extends FilterInputStream
{
    public CompressionInputStream(InputStream in) throws IOException
    {
        super(in);
    }

    public int read(byte b[], int off, int len) throws IOException
    {
        ZInputStream cis = new ZInputStream(in);
        DataInputStream dis = new DataInputStream(cis);

        //DataInputStream dis = new DataInputStream(in);
        return dis.read(b, off, len);
    }
}
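One possible pitfall in the two classes above: each write() and read() call wraps the underlying stream in a brand-new ZOutputStream or ZInputStream, so every call starts a fresh zlib stream, and the two sides only line up if every read() happens to match exactly one write(). Below is a minimal sketch of the wrap-once alternative. It is hypothetical and uses the JDK's own DeflaterOutputStream with syncFlush=true (a Java 7+ constructor) in place of the third-party ZOutputStream, demonstrated with in-memory streams rather than a socket.

```java
import java.io.*;
import java.util.zip.*;

// Sketch: wrap the underlying stream in a compressing stream ONCE and
// reuse it across writes. syncFlush=true makes flush() emit a zlib
// sync point, so partial data becomes decodable on the other side.
class OnceWrappedCompression {

    // Compress `text` in two intermittent writes through one deflater
    // stream, then inflate the whole thing back.
    static String roundTrip(String text) {
        try {
            ByteArrayOutputStream sink = new ByteArrayOutputStream();
            DeflaterOutputStream def = new DeflaterOutputStream(sink, true);

            int half = text.length() / 2;
            def.write(text.substring(0, half).getBytes("UTF-8"));
            def.flush();                 // sync-flush: first half readable now
            def.write(text.substring(half).getBytes("UTF-8"));
            def.finish();                // terminate the zlib stream

            InflaterInputStream inf = new InflaterInputStream(
                    new ByteArrayInputStream(sink.toByteArray()));
            ByteArrayOutputStream result = new ByteArrayOutputStream();
            byte[] buf = new byte[64];
            for (int n; (n = inf.read(buf)) != -1; ) {
                result.write(buf, 0, n);
            }
            return result.toString("UTF-8");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello world")); // prints "hello world"
    }
}
```

The key difference from the code above is that the deflater keeps its dictionary and state across writes, so the receiving inflater sees one continuous stream.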

A. Bolmarcich

Mar 12, 2002, 2:48:07 PM
In article <20a85bbb.02031...@posting.google.com>, Benjamin Chen wrote:
> On another note,
> I tried this with the CompressionOutputStream and CompressionInputStream
> using the ZOutputStream and ZInputStream which Chris Posted on his site.
>
> This seemed like a possible solution, but it still doesn't work.
> Am I missing something in my approach? Or does stream compression
> just not work with RMI?

After taking a quick look at the code you posted, it looks OK. It would
help if you posted (or made available through a URL) the source files for
a complete runnable program. I'm willing to spend some time on this
problem, but writing an RMI server and an RMI client application that use
the code you have posted would take more time than I have available.

> Like Chris pointed out, RMI doesn't give you access to the socket
> reference and it does all its socket input/output at a lower level
> not accessible to the developer. So flushing when the buffer is
> filled doesn't work.

You may also need to flush the socket. The socket output at the
lower level may be buffered.

> I'm attempting to flush the output stream with each byte array
> written to it, handling the flushing
> in a wrapper class for the streams.
>
> Anyways, still doesn't work.
>
> Also, A. Bolmarcich, I'm not competent enough in Java
> to distinguish which input streams
> can read from flushed streams and which can't.
> If someone could explain the difference between reading from
> flushed streams and unflushed streams, and what needs
> to be done on the receiving end, that'd be great.

It is only output streams that may need to be flushed. Unless you write
something special to the output stream when you flush it, an input
stream connected to the output stream can't tell where the output stream
has been flushed. I mentioned flushing an output stream because an
earlier posting of yours included code that invoked flush() on
an output stream and then invoked close() on an underlying output stream.
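The distinction can be shown concretely. A plain flush() writes no marker into the byte stream, but zlib's SYNC_FLUSH does (it emits an empty stored block), after which an inflater can decode everything written so far without waiting for the end of the stream. Here is a sketch using the JDK's Deflater.SYNC_FLUSH; note this flush mode was only added to the JDK in Java 7, so in 2002 it required a third-party zlib binding.

```java
import java.util.zip.*;

// Sketch: SYNC_FLUSH makes partially written compressed data decodable
// even though the deflater has not been finished.
class SyncFlushDemo {

    // Deflate `data` with SYNC_FLUSH and immediately inflate it back.
    static byte[] deflateThenInflate(byte[] data) {
        try {
            Deflater def = new Deflater();
            def.setInput(data);
            byte[] compressed = new byte[data.length + 64];
            int clen = def.deflate(compressed, 0, compressed.length,
                                   Deflater.SYNC_FLUSH);

            Inflater inf = new Inflater();
            inf.setInput(compressed, 0, clen);
            byte[] out = new byte[data.length];
            int olen = inf.inflate(out); // decodes fully up to the sync point
            byte[] exact = new byte[olen];
            System.arraycopy(out, 0, exact, 0, olen);
            return exact;
        } catch (DataFormatException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] back = deflateThenInflate("partial message".getBytes());
        System.out.println(new String(back)); // prints "partial message"
    }
}
```

Without the SYNC_FLUSH argument, the deflater may hold the tail of the data in its internal buffer, and the inflater would stall exactly as described in this thread.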

Chris Gokey

Mar 12, 2002, 2:57:48 PM
> I'm attempting to flush the output stream with each byte array
> written to it, handling the flushing in a wrapper class
> for the streams.

Just as a side note: it is possible to control these streams
from client to server via RMI, and vice versa. You may be able to glean some
insights from this code, which implements an RMISocketFactory that allows you
to do callbacks through a firewall. The code is very nicely written.

http://cssassociates.com/rmifirewall.html

Chris

Chris Gokey

Mar 13, 2002, 1:32:48 AM
I am now successfully able to create a custom RMISocketFactory using the
zlib package found at:
http://24.91.182.203:8080/sockets/index.html

Please download the latest version, zlib-0.2.jar. It contains a bug
fix for the read(b[], offset, len) method, which was blocking when it
should have returned. A. Bolmarcich, if you have time, could you double-check
the logic of this method? The previous implementation would block if the
number of bytes read was < len; it should return once it has
read at least 1 byte.
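The contract Chris describes could look something like the sketch below. This is hypothetical illustration code, not the actual zlib-0.2 source: read(b, off, len) blocks only until the first byte is in hand, then returns whatever else is already available instead of looping until `len` bytes have arrived.

```java
import java.io.*;

// Sketch of an eager-return read(b, off, len): block for at most one
// byte, then drain only what is already available.
class EagerReturnInputStream extends FilterInputStream {

    EagerReturnInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        if (len == 0) return 0;
        int first = in.read();       // block for at most one byte
        if (first == -1) return -1;  // true end of stream
        b[off] = (byte) first;
        int count = 1;
        // Take bytes that are already available, but never block again.
        while (count < len && in.available() > 0) {
            int c = in.read();
            if (c == -1) break;
            b[off + count++] = (byte) c;
        }
        return count;
    }

    // Demo helper: one read() call against in-memory data.
    static int demoRead(byte[] data, byte[] buf) {
        try (EagerReturnInputStream s =
                new EagerReturnInputStream(new ByteArrayInputStream(data))) {
            return s.read(buf, 0, buf.length);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] buf = new byte[10];
        int n = demoRead(new byte[] {1, 2, 3}, buf);
        System.out.println("read " + n + " bytes"); // prints "read 3 bytes"
    }
}
```

This matches the general InputStream.read contract: the method blocks until at least one byte is read, end of stream is detected, or an exception is thrown, and it is free to return fewer than `len` bytes.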

Also at the web page specified above is another package, zsocket-0.2.jar,
which contains the custom RMISocketFactory that uses
the zlib package above, as well as a small test program.

Hope this helps.
Chris



Benjamin Chen

Mar 14, 2002, 4:03:13 PM
Hi Chris,

Wow, yeah, it works perfectly!! Thanks!! You guys are really smart!! I
don't think I would ever have thought of this solution.

Just a couple of thoughts...
For your ZipSocketFactory, it would be good if you made it

public class ZipSocketFactory extends RMISocketFactory implements
RMIServerSocketFactory, RMIClientSocketFactory, Serializable

in case people want to use it in a class extended from
UnicastRemoteObject; that way they can pass it in to the super
constructor.
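That suggestion could be sketched roughly as follows. The names here are made up for illustration (this is not the actual ZipSocketFactory), and the factory passes sockets through uncompressed; a real one would wrap the streams.

```java
import java.io.*;
import java.net.*;
import java.rmi.*;
import java.rmi.server.*;

// Sketch: a Serializable factory implementing both
// RMIClientSocketFactory and RMIServerSocketFactory, so one object
// can serve both ends of the connection.
class BothEndsSocketFactory extends RMISocketFactory
        implements RMIClientSocketFactory, RMIServerSocketFactory, Serializable {

    public Socket createSocket(String host, int port) throws IOException {
        // A real factory would layer compression streams over this socket.
        return new Socket(host, port);
    }

    public ServerSocket createServerSocket(int port) throws IOException {
        return new ServerSocket(port);
    }

    public static void main(String[] args) {
        RMISocketFactory f = new BothEndsSocketFactory();
        System.out.println(f instanceof Serializable); // prints "true"
    }
}

// The factory can then be handed to the UnicastRemoteObject constructor:
interface Ping extends Remote {
    String ping() throws RemoteException;
}

class PingServer extends UnicastRemoteObject implements Ping {
    PingServer() throws RemoteException {
        // port 0 = anonymous port; same factory type for both ends
        super(0, new BothEndsSocketFactory(), new BothEndsSocketFactory());
    }

    public String ping() throws RemoteException {
        return "pong";
    }
}
```

Making the factory Serializable matters because the client-side factory is serialized inside the remote stub that RMI hands to clients.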

Also, in the ZipInputStream, why have a dataIn?
Couldn't you make use of the in from FilterInputStream and save some
space/complexity?

Changing the constructor to:

public ZInputStream(InputStream in) throws IOException
{
    super(new DataInputStream(in));
    fill();
}

and for every instance of dataIn, use (DataInputStream)in instead.
Is the casting less efficient than having a separate variable
in some way I'm not aware of?

Anyway, just some thoughts.
Great job guys!!!
Thanks so much for your help!!!

Benjamin Chen

Chris Gokey

Mar 16, 2002, 6:22:42 PM
I'm posting this as an update to a URL in this thread. The library
can now be found at:
http://home.attbi.com/~cgokey/java/zlib/index.html

Also included is an implementation of an RMISocketFactory that
uses these libraries to compress communication between
RMI objects.

Thanks for all the responses.
Chris
