Michael Niehren <mic...@niehren.de> wrote:
> Hi together,
>
> I'm running into a problem uploading data to a Tcl server. You can
> see the code below. The client takes a filename as input, reads the
> data, encodes it to base64 and sends it to the server. With small
> files (< 1 MB) it works fine, but if the files are larger than 1 MB
> (I did not check the exact limit), some data is missing on the
> server side. The larger the file, the more data is missing.
>
> Does someone have an idea where my mistake is?
>
> proc readLine {sock} {
> global clientdata
First problem I see: you are using a single global variable to hold
data being read in an event-driven manner. As long as you only ever
have a single sender, you'll be OK here. The moment you have two (or
more) senders at the same time, this will fail miserably, because
their data gets interleaved into the same variable.
> chan configure $sock -buffering line -encoding utf-8 -blocking 0 -translation crlf
>
> puts -nonewline $sock $data
> flush $sock
> close $sock
The other item I see: you set the channel to non-blocking, then puts
the data (so puts returns immediately), then you flush and close, and
then the sending script exits.
I suspect that exiting the sending script is why the amount of
missing data grows with the file size. For sockets, flush does not
mean "the data has been transmitted to the server and acknowledged";
it means "the data has been handed over to the I/O subsystem for
transmission". When you then exit the script, whatever data has not
yet been transmitted over the TCP connection never will be, because
the process ends and all the OS I/O buffers allocated to it are
dropped.
Try adding a "vwait forever" at the end of the sender and see if the
issue with missing data goes away. If it does, then that was the
cause.
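If the vwait confirms it, there are two common ways to fix the sender properly. A sketch, untested; `sendAndClose` is a hypothetical name, not from the original post:

```tcl
# Option 1: switch the channel back to blocking before closing, so
# [close] does not return until the output buffer has been flushed
# to the OS.
proc sendAndClose {sock data} {
    puts -nonewline $sock $data
    chan configure $sock -blocking 1
    close $sock
}

# Option 2: stay non-blocking, but keep the event loop alive so the
# background flush started by [close] can complete:
#   close $sock
#   vwait forever   ;# or better, a per-transfer "done" variable
```

Option 1 is the simpler change for a one-shot client like this one; Option 2 fits a client that has other event-driven work to do while the data drains.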