2011/04/21 2:08, Alexey Zhigaltsov wrote:
> Hi!
>
> First of all, thank you for this awesome software!
>
> I have found an issue with large datasets processing:
>
> ogorod=# SELECT id,date,author FROM posts WHERE (deleted=0 AND
> (votes>-1000) AND ((showto=1))) ORDER BY date DESC LIMIT 208 OFFSET 0;
> ogorod=# SELECT id,date,author FROM posts WHERE (deleted=0 AND
> (votes>-1000) AND ((showto=1))) ORDER BY date DESC LIMIT 209 OFFSET 0;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> ogorod=#
I wonder whether this depends on the size of the result data or not.
Do you always get a SEGV when you specify 'LIMIT 209',
and never with 'LIMIT 208'?
I'm thinking of a way to reproduce it on my server.
I have also realized that I need to make the debug messages
more descriptive.
Regards,
--
NAGAYASU Satoshi <satoshi....@gmail.com>
Thank you.
First, I will try to figure out what causes the crash,
and if I cannot reproduce it, I will ask for your help.
> 2011/04/25 3:02, Alexey Zhigaltsov wrote:
>> On this table I get a SEGV when I state LIMIT 209. Less than 209 is ok.
>> I guess other tables (or other field sets) will have a different
>> faulty number of rows to return.
>> I can send you a dump of this table if it helps you to reproduce the
>> issue. Please let me know if it is necessary.
Finally, I have figured out the problem.
Buffer handling has a bug in caching result data when the size
exceeds 8192 bytes (=PQC_MAX_VALUE).
At the same time, I also found that someone posted a patch
to fix this a few days ago:
http://code.google.com/p/pqc/issues/detail?id=2
I have just applied that patch, so the internal buffer is now allocated
dynamically, and the problem you found has been fixed. Please try the
latest code.
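For reference, the fix amounts to replacing a fixed 8192-byte cache buffer with one that grows on demand. Here is a minimal sketch in C of that pattern; the `pqc_buf` struct and function names are illustrative assumptions, not PQC's actual internals:

```c
/* Sketch of a dynamically growing buffer, assuming the original bug
 * was writing result rows past a fixed PQC_MAX_VALUE-byte array.
 * The struct and function names here are hypothetical. */
#include <stdlib.h>
#include <string.h>

#define PQC_MAX_VALUE 8192

typedef struct {
    char   *data;   /* heap-allocated storage */
    size_t  len;    /* bytes currently used */
    size_t  cap;    /* bytes currently allocated */
} pqc_buf;

/* Start with the old fixed size as the initial capacity. */
static int buf_init(pqc_buf *b)
{
    b->data = malloc(PQC_MAX_VALUE);
    if (!b->data)
        return -1;
    b->len = 0;
    b->cap = PQC_MAX_VALUE;
    return 0;
}

/* Append n bytes, doubling the capacity as needed instead of
 * overrunning the fixed boundary (the cause of the SEGV). */
static int buf_append(pqc_buf *b, const char *src, size_t n)
{
    if (b->len + n > b->cap) {
        size_t newcap = b->cap;
        while (b->len + n > newcap)
            newcap *= 2;
        char *p = realloc(b->data, newcap);
        if (!p)
            return -1;
        b->data = p;
        b->cap = newcap;
    }
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return 0;
}
```

With 40-byte rows, 208 rows fit in 8192 bytes (8320 > 8192 only at row 209), which matches the observed LIMIT 208/209 boundary only roughly; the exact threshold depends on the real per-row encoding.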