Golang: slow performance with MySQL queries inside a for loop


bill...@gmail.com

Oct 20, 2020, 12:52:49 PM10/20/20
to golang-nuts
I use Go with github.com/go-sql-driver/mysql to connect to MySQL. Either Go's performance is terrible compared to other languages, or I'm missing something.

I use Go for my REST API, an example is below:

We have a table with posts. We fetch the posts, and for each post we query the favorites table (whether the user has favorited the post) and the likes table (whether he liked it).

posts := make([]*MembersModel.Post, 0, 6)

postsResults, err := Config.DB.Query("SELECT id, b64, title, description, total_likes, total_favorites, published_date FROM posts ORDER BY id DESC LIMIT 6")

defer postsResults.Close()

if err != nil {
    fmt.Println(err)
    panic(err.Error())
}

for postsResults.Next() {
    var postID int
    var b64 string
    var title string
    var description string
    var total_likes int
    var total_favorites int
    var published_date string

    postsResults.Scan(&postID, &b64, &title, &description, &total_likes, &total_favorites, &published_date)

    var count int
    var favorited string

    fetchBookmarks := Config.DB.QueryRow("SELECT COUNT(*) FROM favorites WHERE userID = ? AND postID = ? AND status = ?", userID, postID, "added").Scan(&count)

    if fetchBookmarks != nil {
        fmt.Println("error")
    }

    if count == 0 {
        favorited = "no"
    } else {
        favorited = "yes"
    }

    var countSubmitted int
    var likedPost string

    fetchLikes := Config.DB.QueryRow("SELECT COUNT(*) FROM likes WHERE userID = ? AND postID = ? AND status = ?", userID, postID, "liked").Scan(&countSubmitted)

    if fetchLikes != nil {
        fmt.Println("error")
    }

    if countSubmitted == 0 {
        likedPost = "no"
    } else {
        likedPost = "yes"
    }

    post := &MembersModel.Post{
        PostID:      b64,
        Title:       title,
        Description: description,
        Likes:       total_likes,
        PubDate:     published_date,
        Bookmarked:  favorited,
        Liked:       likedPost,
    }

    posts = append(posts, post)
}

Average time to fetch these results: about 10 seconds!

If I exclude the MySQL calls inside the loop, the time to fetch the results drops to about 300ms.

Marcin Romaszewicz

Oct 20, 2020, 1:19:16 PM10/20/20
to bill...@gmail.com, golang-nuts
Go's database layer is generally pretty quick (I use it a lot), but your code immediately sets off my DB alarms: you are running queries inside the body of another query's loop, which can open many connections, and that can be slow.

I'd reorganize as follows.

- Cache the results of your first query in an array; it's only 6 results.
- Create two prepared statements, one for each query inside the loop.
- Loop over your array of 6 results and execute each prepared statement instead of parsing the query each time.

By doing it this way, you should use one connection for the entire loop, and each statement will only be parsed once.

If this is still slow, I'd start looking at your database performance, maybe.

Also, call "defer rows.Close()" only after you've checked that the query didn't return an error. There are no rows to close if err != nil.
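In other words, the minimal shape looks something like this (a sketch; fetchIDs and the query are made up for illustration):

```go
package posts

import "database/sql"

// fetchIDs shows the defer-after-error-check ordering.
func fetchIDs(db *sql.DB) ([]int, error) {
	rows, err := db.Query("SELECT id FROM posts ORDER BY id DESC LIMIT 6")
	if err != nil {
		return nil, err // Query failed; there are no rows to close
	}
	defer rows.Close() // safe now that rows is known to be valid

	var ids []int
	for rows.Next() {
		var id int
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, rows.Err()
}
```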

-- Marcin



--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/golang-nuts/e3f758d8-595f-4f0d-9aa6-d0b87899007fo%40googlegroups.com.

Bill

Oct 21, 2020, 6:41:54 AM10/21/20
to golang-nuts
Hello, 
I rewrote the SQL to fetch all the info in one call instead of multiple calls per row.

One thing I noticed about Go is that when I ping my remote DB from localhost (running in VS Code) I get a ping of 1.573826274s, so I think this is why all my calls to the DB are delayed.
Any idea what the issue could be? Maybe a clean install of Go on macOS would fix it? If I run the code on the server (upload the binary), the response time is normal (200ms - 300ms).

My code to ping:

    begin := time.Now()
    err := Config.DB.Ping()
    log.Printf("Ping in %s (%v)", time.Since(begin), err)

Tamás Gulácsi

Oct 21, 2020, 8:10:59 AM10/21/20
to golang-nuts
To reuse connections, and pay the connection slowness only on the first one, set

    Config.DB.SetMaxIdleConns(2)

or something like that.
As *sql.DB is a pool, this can keep two connections alive: one for your main query, and one for the per-row extra queries.
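The pool can be tuned a bit further in the same spirit (the numbers here are illustrative, not recommendations):

```go
Config.DB.SetMaxOpenConns(10)                 // cap total concurrent connections
Config.DB.SetMaxIdleConns(2)                  // keep two warm connections around
Config.DB.SetConnMaxLifetime(5 * time.Minute) // recycle long-lived connections
```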

Marcin Romaszewicz

Oct 21, 2020, 11:55:06 AM10/21/20
to Bill, golang-nuts
Can you connect to that DB any faster from the same machine not using Go?

-- Marcin

David Riley

Oct 21, 2020, 1:10:00 PM10/21/20
to Marcin Romaszewicz, bill...@gmail.com, golang-nuts
On Oct 20, 2020, at 1:18 PM, Marcin Romaszewicz <mar...@gmail.com> wrote:
>
> Go's database layer is generally pretty quick (I use it a lot), but your code immediately sets off my DB alarms: you are running queries inside the body of another query's loop, which can open many connections, and that can be slow.
>
> I'd reorganize as follows.
>
> - Cache the results of your first query into an array, it's only 6 results.
> - Create two prepared statements, one for each query inside the loop.
> - Loop over your array of 6 results, and execute each prepared statement instead of parsing the query each time.
>

I would also add that this seems like an excellent candidate for a join, not multiple queries, though I admit to not having looked *terribly* closely at the query structure. That should cut down on overhead and complexity a lot.
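For instance, the two per-row lookups could fold into the first query with outer joins. This is only a sketch against the schema implied by the original post, untested:

```go
const postsWithFlags = `
SELECT p.id, p.b64, p.title, p.description, p.total_likes,
       p.total_favorites, p.published_date,
       (f.userID IS NOT NULL) AS favorited,
       (l.userID IS NOT NULL) AS liked
FROM posts p
LEFT JOIN favorites f ON f.postID = p.id AND f.userID = ? AND f.status = 'added'
LEFT JOIN likes     l ON l.postID = p.id AND l.userID = ? AND l.status = 'liked'
ORDER BY p.id DESC
LIMIT 6`

// One round trip instead of 1 + 2*N:
rows, err := db.Query(postsWithFlags, userID, userID)
```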


- Dave



David Riley

Oct 21, 2020, 1:31:59 PM10/21/20
to Bill, golang-nuts
On Oct 21, 2020, at 6:41 AM, Bill <bill...@gmail.com> wrote:
>
> Hello,
> I rewrote the SQL to fetch all the info in one call instead of multiple calls per row.
>
> One thing I noticed about Go is that when I ping my remote DB from localhost (running in VS Code) I get a ping of 1.573826274s, so I think this is why all my calls to the DB are delayed.
> Any idea what the issue could be? Maybe a clean install of Go on macOS would fix it? If I run the code on the server (upload the binary), the response time is normal (200ms - 300ms).
>
> My Code to ping:
>
> begin := time.Now()
> err := Config.DB.Ping()
> log.Printf("Ping in %s (%v)", time.Since(begin), err)

So Ping() just gets a response from the server, but it's not really defined how simple an operation that is. If you have a high-latency connection to your database (international distances, a slow or lossy link, etc.), it might take only a few round trips to reach over a second. Honestly, 200 ms sounds excessively long for an on-server response if it's just going to localhost, though I suppose it depends on how loaded the database is.

In any case, if a simple database ping is taking a second and a half, it's reasonable to assume that a number of transactions which depend on each other could easily amount to 10s. This is where joins are handy, because the database is actually generally really good at shuffling all that data around on its own before sending it back to you in one lump. A good join (especially if you've done your indexing right) will improve both your latency and your code complexity immensely. It sounds like you may have already done that, in which case I assume you've seen some improvement.

As to why it takes so long in the first place: where are you located relative to where the database server is located? What's the response time from a regular command-line database ping? If you're, say, on the other side of the world, it's going to take a while for each round trip, and there's not much way around that other than trying to keep things somewhat closer together where possible (and also reduce or parallelize the round trips where possible). There are fundamental limitations in re: the speed of light when it comes to latency, no matter what your bandwidth.


- Dave


Jesper Louis Andersen

Oct 22, 2020, 6:47:23 AM10/22/20
to bill...@gmail.com, golang-nuts
On Tue, Oct 20, 2020 at 6:51 PM <bill...@gmail.com> wrote:
        var count int
        var favorited string

        fetchBookmarks := Config.DB.QueryRow("SELECT COUNT(*) FROM favorites where userID = ? and postID = ? and status = ?", userID, postID, "added").Scan(&count)

        if fetchBookmarks != nil {
            fmt.Println("error")
        }

        if count == 0 {
            favorited = "no"
        } else {
            favorited = "yes"
        }


While I think you have other problems if a Ping() to the database takes over 1.5s, the code above looks a bit misleading. You aren't really counting all rows, since you're doing a point query, so you can essentially just `SELECT 1 FROM favorites ...` and check whether a row is returned. I'd also definitely look into baking this into an (OUTER) JOIN. The lookup is much faster when the database does the work close to the data than when you pay the RTT twice in every iteration of the loop. Even in moderately fast environments (<10ms RTT), these kinds of query structures tend to cost a lot, since they serialize the queries. While you might be able to run many of them concurrently, each individual query still takes time to complete.
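That point query might look like this (a sketch; hasFavorited and the full WHERE clause are assumptions based on the original post):

```go
package favorites

import (
	"database/sql"
	"errors"
)

// hasFavorited reports whether the user has favorited the post, using a
// point query instead of COUNT(*): absence of a row means "no".
func hasFavorited(db *sql.DB, userID, postID int) (bool, error) {
	var one int
	err := db.QueryRow(
		"SELECT 1 FROM favorites WHERE userID = ? AND postID = ? AND status = ? LIMIT 1",
		userID, postID, "added").Scan(&one)
	if errors.Is(err, sql.ErrNoRows) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
```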

Another point is that you are currently mixing presentation with logic. Your `count` variable is the logic part, but `favorited` pertains to presentation. I'd propose making `favorited` a boolean value. Callers of the function can then use this value more directly, whereas they will have to do string comparison in the above scheme. There are a lot of subtle bugs that can creep in as a result of this. You might even want to make results into a struct and bind a method for presentation to the struct.

Also, I'd propose adding some observability to the errors as they occur. Something like `log.Printf("Failure querying favorites for (User:%v, Post:%v, Status:%v): %v", userID, postID, "added", err)`. This will make the system tell you what is wrong in a log line rather than you having to make an educated guess. It doesn't take much time, and when things start going wrong in a code base for some reason, you don't have the benefit of time at hand most often. Better get it in when things are not looking like chaos.

