If these results hold up, all I can say is good job! Have you compared to https://github.com/go-pg/pg? It doesn't follow database/sql either (uses bits and pieces) and is also rumored to have better performance than lib/pq. If you haven't, would you mind including it in your benchmark?
--
- Taru Karttunen
Nice work. Can you explain why the database/sql interface makes it slower, and whether there is any work that can be done to make it faster (both in Go's stdlib and in your package)?
- RK
Jack,
I tried pgx and its direct interface and got about a 20% improvement. My use case was 50k+ qps with a trivial query, where the overhead of the library is somewhat significant compared to the Postgres query run time (most likely the data is cached in memory). However, I have a few suggestions and some feedback.
1. The row.Scan() API ties the choice between []byte and string to the underlying Postgres types 'text' and 'bytea'. I don't think the standard sql API does this. See https://github.com/jackc/pgx/blob/master/query.go#L211
That is, I want to pass a *[]byte and avoid conversion back and forth between string and []byte. I see the code reads the value as a []byte and converts it to a string here: https://github.com/jackc/pgx/blob/master/msg_reader.go#L155 . I can get the API to call readByte() instead, but that requires the column to be bytea. See https://github.com/jackc/pgx/blob/master/values.go#L1006
Ideally, reading into a []byte shouldn't require the underlying column type to be bytea, which is rarely the case; text and varchar are more common AFAIK.
2. It would be nice to avoid the allocation at https://github.com/jackc/pgx/blob/master/msg_reader.go#L194 if the caller could pass in an already allocated buffer (from a sync.Pool, maybe). But I don't know Go well enough to be sure that is the right approach to reducing allocations; a rough sketch of the calling pattern I have in mind follows below.
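To make both points concrete, here is roughly the code I would like to be able to write. The table and column names are made up; today this either requires the column to be bytea or allocates a new slice on every call, which is exactly what I want to avoid:

import (
    "sync"

    "github.com/jackc/pgx"
)

var bufPool = sync.Pool{
    New: func() interface{} { return make([]byte, 0, 4096) },
}

// fetchBody shows the usage I am after: "messages" and "body" are made-up names,
// and body is a text column, not bytea.
func fetchBody(conn *pgx.Conn, id int32) ([]byte, error) {
    buf := bufPool.Get().([]byte)[:0]
    // Ideally Scan would fill buf in place, growing it only when the value does
    // not fit, instead of allocating a fresh string or []byte on every call.
    if err := conn.QueryRow("select body from messages where id=$1", id).Scan(&buf); err != nil {
        bufPool.Put(buf)
        return nil, err
    }
    return buf, nil // the caller puts buf back into bufPool when done with it
}

At 50k+ qps that would save one allocation and one copy per query in my hot path.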
Thank you again for making the library. Let me know your thoughts on these points.
--
Harry
2. It would be nice to avoid the allocation at https://github.com/jackc/pgx/blob/master/msg_reader.go#L194 if the caller could pass in an already allocated buffer (from a sync.Pool, maybe).

This only gets called when Scan is passed a *[]byte argument, and it expects to return a new slice. I suppose we could do something like this:
type PreallocatedBytes []byte

buf := make([]byte, 1024)
conn.QueryRow("...").Scan((*PreallocatedBytes)(&buf))
Scan would treat *PreallocatedBytes differently than a plain []byte and would copy data into it instead of creating a new slice. It could raise an error if the scanned value is too big, or just allocate if necessary.
I think this might work. Is this the type of thing you were thinking of?
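Roughly the branch I am imagining inside Scan, as a sketch only; scanBytes and src are stand-ins rather than the actual pgx internals, and it reuses the PreallocatedBytes type from above:

import "fmt"

// scanBytes is a made-up helper: src is the raw value read off the wire, dest is
// whatever the caller passed to Scan.
func scanBytes(src []byte, dest interface{}) error {
    switch d := dest.(type) {
    case *PreallocatedBytes:
        // Reuse the caller's buffer when it is big enough; otherwise grow it
        // (or return an error, depending on which behavior we settle on).
        if cap(*d) < len(src) {
            *d = make(PreallocatedBytes, len(src))
        }
        *d = (*d)[:len(src)]
        copy(*d, src)
    case *[]byte:
        // Current behavior: always hand back a freshly allocated slice.
        *d = append([]byte(nil), src...)
    default:
        return fmt.Errorf("unsupported destination type %T", dest)
    }
    return nil
}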
On Saturday, September 27, 2014 2:34:41 PM UTC-7, Domain Admin wrote: I think this might work. Is this the type of thing you were thinking of?
The database/sql Scan() method behaves differently based on whether the argument is []byte, *[]byte, or something else (like a string). Can we do it without defining a new type?
That said, I am still trying to really understand the database/sql behavior difference between *[]byte and []byte :) Maybe some golang guru should chime in here (on how the interface should be). I am afraid I am kind of a newbie and probably not the right person to decide what the ideal interface should be.
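For whatever it is worth, here is how I currently read the stock database/sql side of this (standard library only, made-up query): with a *[]byte destination Scan copies the value into a slice you own, while sql.RawBytes avoids the copy but is only valid until the next Next, Scan, or Close:

import "database/sql"

// readBodies is just an illustration against plain database/sql, not pgx.
func readBodies(db *sql.DB) error {
    rows, err := db.Query("select body, payload from messages")
    if err != nil {
        return err
    }
    defer rows.Close()

    for rows.Next() {
        var body []byte          // copied by Scan; safe to keep after the loop
        var payload sql.RawBytes // driver-owned memory; valid only until the next Next/Scan/Close
        if err := rows.Scan(&body, &payload); err != nil {
            return err
        }
        // use body freely; use payload only inside this iteration
    }
    return rows.Err()
}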
I will try your other patch / update and see what it does for my program.
Thanks
--
Harry