Golang ORM Performance


Bhavesh Kothari

Dec 19, 2024, 8:01:25 AM
to golang-nuts

I read a few posts regarding ORMs in Go and realized that quite a few people are not happy with them.

They suggested using raw queries with our own wrapper if we want to make things reusable.

I was actually using GORM, which was super slow, so I started researching more performance-efficient ORMs and found Bun, which was indeed faster than GORM. But then I explored the documentation further and realized it doesn't provide enough functionality to make life easier; it doesn't even seem to have much of a community.

1. I want to know readers' views regarding ORMs in Go.

Let me give some context: my project is a huge one, and I decided to use Go for better performance. But now I feel I have to write a lot of code, and the slow or feature-poor ORMs here are making me irritated.

2. What do you suggest? Is Go really good for large projects?

Brian Candler

Dec 19, 2024, 8:23:53 AM
to golang-nuts
I am not a fan of ORMs. Application performance problems are usually down to poorly formed SQL queries (and/or lack of supporting indexes), which ORMs can mask or amplify.

For an alternative approach, have a look at https://github.com/sqlc-dev/sqlc

In short, you write the set of SQL queries to meet your application's needs, and then they get automatically wrapped in idiomatic Go.
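
To give a flavour of the workflow, here is a sketch (the query, the generated package path, and the connection string are only illustrative; the exact generated API depends on your sqlc.yaml):

```go
// queries.sql (input to `sqlc generate`):
//
//   -- name: GetAuthorByID :one
//   SELECT id, name, bio FROM authors WHERE id = $1;
//
// The application then calls the generated, type-safe wrapper:
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver

	gen "example.com/myapp/gen/db" // package emitted by sqlc (hypothetical path)
)

func main() {
	conn, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	q := gen.New(conn)                                       // sqlc-generated constructor
	author, err := q.GetAuthorByID(context.Background(), 1)  // sqlc-generated method
	if err != nil {
		log.Fatal(err)
	}
	log.Println(author.Name)
}
```

The SQL stays in one place; the application only ever sees the typed Go method.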



Mike Schinkel

Dec 19, 2024, 10:17:34 AM
to GoLang Nuts Mailing List, Brian Candler
Hi Bhavesh,

I am also not a fan of ORMs, but I am a big fan of sqlc so I will 2nd Brian Candler's recommendation. 

Sqlc is one of the few Go packages/tools I consider a must-use for any of my projects that need to interact with SQL databases.

-Mike


Robert Engels

Dec 19, 2024, 10:27:46 AM
to Mike Schinkel, golan...@googlegroups.com, Brian Candler
I go back and forth on ORMs. I think a lot depends on the complexity of the project. 

Still, I wouldn’t expect the overhead of an ORM to be more than 1-2% for IO/db-bound systems. If it is, I suspect it is not properly configured and is generating inefficient queries, or the database tables lack proper indexing.


will....@gmail.com

Dec 19, 2024, 4:07:27 PM
to golang-nuts
https://github.com/ent/ent seemed to perform well, and was quite flexible.

Will

Jason E. Aten

Dec 19, 2024, 7:59:09 PM
to golang-nuts
If you want to avoid boilerplate and keep the lightest weight possible,
you could have a look at the approach I took recently when I
added SQL support in my serialization format, greenpack. 

See here: (only supports MariaDB/MySQL syntax at the moment)



The code generation produces the SQL create, insert, and select statements corresponding to each struct in the files that have //go:generate greenpack -sql mariadb in them.

This is minimal and ultra simple, but it still saves manually writing tedious boilerplate.
It keeps the fields you receive aligned with your select statements.
It is appropriate if your Go code is your source of truth / starting point.
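
The intended workflow looks roughly like this (a sketch only; the struct, its fields, and the zid tags are illustrative, so check the greenpack README for the exact tag and flag syntax):

```go
package model

//go:generate greenpack -sql mariadb

// Event is serialized by greenpack; running `go generate` also emits the
// matching CREATE TABLE / INSERT / SELECT statements for this struct.
type Event struct {
	ID   int64  `zid:"0"`
	Name string `zid:"1"`
	TS   int64  `zid:"2"`
}
```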

The other packages I looked at (ORMs, etc) seem to want
to take the database schema as the starting point, and create Go code from that.

That's a reasonable approach, but not what I needed.

Enjoy,
Jason

Kerem Üllenoğlu

Dec 20, 2024, 12:54:38 PM
to golang-nuts
Hey,

I keep hearing that ORMs, especially GORM, are slow. But I have never felt that it was slow, and I have used it in a couple of projects now. Is there a good article you would recommend reading about ORMs in Go, especially GORM, being slow?

Sorry, my reply is not actually an answer to the original question, but I am really curious about it now. Is it really that important, really that slow?

Thanks,
Kerem

On Friday, December 20, 2024 at 03:59:09 UTC+3, Jason E. Aten wrote:

robert engels

Dec 20, 2024, 1:13:38 PM
to Kerem Üllenoğlu, golang-nuts
It is not slow, but it will be if you don’t use it properly or don’t set the database up properly.


Luca Pascali

Dec 21, 2024, 4:44:04 AM
to Kerem Üllenoğlu, golang-nuts
In my experience, ORMs have a structural flaw, and it cannot be otherwise (as with a lot of automatic tools).

Using an ORM, you are driven to write queries like this (pseudo Go code):

objects := Table.filter(someRules)              // one query
for _, o := range objects {
    if o.field == 5 {
        sub := Table2.filter(someOtherRules)    // one more query per matching row
        ...
    }
}


This cannot be optimized much by the compiler or the ORM engine, because the queries are separate.
So having 10k elements in the first result set can produce up to 10k subqueries, and (almost) the only optimization the engine can apply is to cache query results and decide whether to reuse an old result (which can bring other issues, like memory usage or data retention) or ask the database again (lots, lots, lots of queries).

If you manage to reduce 10k automatically generated queries to just one handcrafted query, you can save hundreds of seconds of execution time. Sorry for stating the obvious.
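
For instance, the loop above can usually be collapsed into a single handwritten JOIN (a sketch with database/sql; the table and column names are made up):

```go
package queries

import (
	"context"
	"database/sql"
)

// fetchJoined replaces the "filter, then sub-query per row" loop with one JOIN.
func fetchJoined(ctx context.Context, db *sql.DB) (map[int64][]string, error) {
	rows, err := db.QueryContext(ctx, `
		SELECT t1.id, t2.value
		FROM   table1 t1
		JOIN   table2 t2 ON t2.t1_id = t1.id
		WHERE  t1.field = 5`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	out := make(map[int64][]string) // table1 id -> related table2 values
	for rows.Next() {
		var id int64
		var value string
		if err := rows.Scan(&id, &value); err != nil {
			return nil, err
		}
		out[id] = append(out[id], value)
	}
	return out, rows.Err()
}
```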

Another aspect that magnifies the problem is that ORM engines generate SQL text that must be parsed by the database server, and this time is definitely not zero.
And not all database servers can precompile (prepare) queries.

Again, the obvious: 1 ms for one query is not a problem; 1 ms over 10k queries takes 10 seconds (plus the execution time, plus the parsing time to build the object representation, plus ...).

So an ORM can be the wrong choice. But if you don't have a complex database structure (meaning only a few connected tables), and/or you can use some sort of advanced join where you specify the projection in the ORM itself (which is close to writing a raw query), an ORM is a fantastic abstraction, because it allows you to write simpler code.

In conclusion, it is not the ORM that is slow.
Sometimes it is just not the right tool.

If you need simple access, proceed with the ORM. If you need something more complex, optimize the code to reduce database access to the essential (prefetch as much as you can) and write your own queries. In that case, sqlc is perfect.

My 2¢

PSK

Robert Engels

Dec 21, 2024, 8:07:35 AM
to Luca Pascali, Kerem Üllenoğlu, golang-nuts
This is incorrect. Most ORMs will efficiently execute the first query with a single SQL statement. It’s called the N+1 select problem, and it is solvable.


Luca Pascali

Dec 21, 2024, 11:43:11 AM
to Robert Engels, Kerem Üllenoğlu, golang-nuts
The first one, yes.

The other 10k, no, if they are issued afterwards in a loop in the host language that runs over every row returned by the first query,
as is done in a lot of REST interfaces.

That is the difference I was pointing out.

In fact, I also said that in the case of simple interfaces an ORM does its work well.

If I got you wrong, please show me an example that can optimize the other queries after the first result set is returned.

PSK

robert engels

Dec 21, 2024, 11:46:52 AM
to Luca Pascali, Kerem Üllenoğlu, golang-nuts
As I said, it is a common problem and readily solved if you know how to use the ORM properly.
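
For example, in GORM the usual fix is to eager-load the association up front; a rough sketch (the User/Order models, field names, and driver here are hypothetical):

```go
package main

import (
	"log"

	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

type Order struct {
	ID     uint
	UserID uint
	Amount int
}

type User struct {
	ID     uint
	Field  int
	Orders []Order // has-many association
}

func main() {
	db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	var users []User
	// Preload eager-loads the association instead of lazily querying per row.
	if err := db.Preload("Orders").Where("field = ?", 5).Find(&users).Error; err != nil {
		log.Fatal(err)
	}
	log.Printf("loaded %d users with their orders", len(users))
}
```

With Preload, GORM fetches the orders for all matched users in one additional IN query, i.e. two round trips instead of one per row.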

Luca Pascali

Dec 21, 2024, 11:56:21 AM
to robert engels, Kerem Üllenoğlu, golang-nuts
Ok, clear.

To put it more simply, you annotate the first query to prefetch information from the joined tables while keeping their object representations distinct.
Fair enough.

Still risky, because the result set can explode, but definitely better than lots of single queries.

PSK

Jason E. Aten

Dec 21, 2024, 1:17:49 PM
to golang-nuts
Bhavesh,

Go is great for big projects.

You don't say specifically which part is slow, nor which
database you are using, so it is hard to give specific advice. 
You should measure and profile to see what part of your
process is taking a long time. Go has great profiling tools.
That's one of the reasons it is good for big projects.

There are some standard tricks if, for example, your initial data load (say, a lot of inserts) into an empty set of indexed tables is too slow. This may mean going outside the ORM and even outside SQL. For example, you may need to resort to the old standby of halting all reads, dropping all indexes (or here, not creating them in the first place), then doing the equivalent of LOAD DATA LOCAL INFILE (specific to MariaDB; the idea is to read from local disk into database storage on local disk without transactional overhead), and then, only once that is done, applying or re-applying the indexes. Creating indexes in batch can be massively faster than updating them after each record is inserted.
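
A sketch of that batch-load pattern, using MariaDB/MySQL syntax via database/sql (the table, index, file path, and DSN are all illustrative):

```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL/MariaDB driver
)

func bulkLoad(ctx context.Context, db *sql.DB) error {
	stmts := []string{
		// Drop (or simply never create) the secondary index before loading.
		`ALTER TABLE events DROP INDEX idx_events_user`,
		// Bulk-load from a local file without per-row transactional overhead.
		`LOAD DATA LOCAL INFILE '/tmp/events.csv' INTO TABLE events
		   FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'`,
		// Re-create the index once, in batch, after the data is in place.
		`CREATE INDEX idx_events_user ON events (user_id)`,
	}
	for _, s := range stmts {
		if _, err := db.ExecContext(ctx, s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := bulkLoad(context.Background(), db); err != nil {
		log.Fatal(err)
	}
}
```

Note that allowing LOAD DATA LOCAL typically has to be enabled on both the server and the client driver.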

That's just one example. The details are... the important bit. You've
left them out. Feel free to tell us more.

Best wishes,
Jason

Samir Faci

Dec 21, 2024, 2:48:59 PM
to Mike Schinkel, GoLang Nuts Mailing List, Brian Candler
This might be a tangent from the thread, but while I really liked SQLC, I found that doing anything remotely dynamic became too cumbersome.  I started out with SQLC and had SQLX as a fallback.

As soon as you want to have optional filters you end up with a hideous query you have to contend with. 

For example:

```sql
-- name: ListAuthors :one
SELECT * FROM authors
WHERE email = CASE WHEN sqlc.arg(email) = '' then NULL else sqlc.arg(email) END
OR username = CASE WHEN sqlc.arg(username) = '' then NULL else sqlc.arg(username) END
LIMIT 1;
```
I mean, sure, that works, but it just feels kind of gross. You can end up with every potential conditional having to be built into the query as CASE statements. Do people really use this pattern in prod?

It feels like as soon as you try to add dynamic queries, sqlc's appeal dwindles. That being said, for simple queries sqlc is awesome. <3



Brian Candler

Dec 22, 2024, 4:50:21 AM
to golang-nuts
> WHERE email = CASE WHEN sqlc.arg(email) = '' then NULL else sqlc.arg(email) END

What database is that? If it's Postgres then I believe that expression collapses to

    WHERE email = NULLIF(sqlc.arg(email), '')

But wouldn't it be better to pass null explicitly, rather than assume "empty string means null"? sqlc allows this by using sqlc.narg() instead of sqlc.arg(), see

```sql
-- name: GetAuthor :one
SELECT * FROM authors
WHERE name = sqlc.narg(name)
OR bio = sqlc.narg(bio)
LIMIT 1;
```

The parameter is then a sql.NullString

```go
type NullString struct {
    String string
    Valid  bool // Valid is true if String is not NULL
}
```
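
With sqlc.narg, the generated call then takes sql.NullString parameters, roughly like this (illustrative only; the params struct and package names depend on the query name and your sqlc config):

```go
package authors

import (
	"context"
	"database/sql"
	"log"

	gen "example.com/myapp/gen/db" // sqlc-generated package (hypothetical path)
)

// findAuthor shows how the nullable narg parameters surface in Go.
func findAuthor(ctx context.Context, q *gen.Queries) {
	author, err := q.GetAuthor(ctx, gen.GetAuthorParams{
		Name: sql.NullString{String: "alice", Valid: true}, // match on name
		Bio:  sql.NullString{},                             // Valid=false sends NULL, so this branch never matches
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(author.ID)
}
```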

And/or look at the emit_pointers_for_null_types option in https://docs.sqlc.dev/en/latest/reference/datatypes.html#text

Henry

Jan 8, 2025, 1:38:07 AM
to golang-nuts
The reason people use ORMs is that they don't want to write SQL. The reason people don't want to write SQL is that SQL is not portable: each database implements its own SQL variant/dialect, and switching databases often involves rewriting the SQL statements. But an ORM is an overengineered solution to this problem. It may be slow, because the generated code may not be tailored to your specific scenario. Debugging can also be problematic, since it is difficult to inspect and fix the underlying SQL statement that is causing the problem. Long-term maintenance may also be difficult: database upgrades, migrations, schema changes, and so on. A thin wrapper over SQL that is portable across databases is a better solution.

Mike Schinkel

Jan 8, 2025, 2:37:28 AM
to Henry, GoLang Nuts Mailing List

On Jan 8, 2025, at 1:38 AM, Henry <henry.ad...@gmail.com> wrote:

> The reason people use ORMs is that they don't want to write SQL. ... A thin wrapper over SQL that is portable across databases is a better solution.

That is insightful. 

Do you know of an existing solution that meets those requirements, for Go, or even for the general case?

-Mike

Henry

Jan 8, 2025, 11:56:46 PM
to golang-nuts
Hi,

I am not aware of any SQL wrapper in Go, but then I didn't look hard enough. At work, we implemented our own SQL wrapper for a few databases we commonly work with. In .Net, you have LINQ, but LINQ is a query-only language. But the idea is the same, which is to provide a common interface when working with data sources. That way you can switch databases with minimal changes to your project code. Working with SQL and its dialects and dealing with each database's quirkiness is quite time consuming.

Mike Schinkel

Jan 9, 2025, 9:42:07 AM
to Henry, GoLang Nuts Mailing List
On Jan 8, 2025, at 11:56 PM, Henry <henry.ad...@gmail.com> wrote:

> I am not aware of any SQL wrapper in Go, but then I didn't look hard enough. At work, we implemented our own SQL wrapper for a few databases we commonly work with. In .Net, you have LINQ, but LINQ is a query-only language. But the idea is the same, which is to provide a common interface when working with data sources. That way you can switch databases with minimal changes to your project code. Working with SQL and its dialects and dealing with each database's quirkiness is quite time consuming.

As someone with off-and-on SQL experience on a variety of platforms starting in the mid-90s, I completely empathize with your concerns. I even designed my own such approach last year, but never implemented it because the scope was too intimidating, and I did not know how to support myself while developing it.

The problem with the concept, however, is that although it has real value, there are no known implementations of said concept[1] other than (at least one) internal solution(s), as you mention, where the sponsor also owns the cost of all maintenance and enhancements. And not being in the business of selling such a solution[2] means not enough use cases get tested, so the design ends up neither truly robust nor broad enough to cover all reasonable use cases.

I would love to see someone produce such a solution. However, given the fact that we have had SQL for right at 50 years[3], I am not holding my breath.  #fwiw

-Mike

1 — LINQ, being an admitted subset and having a design that requires embedding into the language, is unfortunately not a complete solution, and it couples concerns.

2 — This is the core vs. context problem that Geoffrey Moore wrote about, where companies should invest in R&D for their core businesses vs. all those things they need to do to deliver on their core, aka "the context."  IOW, unless one's business is to sell an agnostic SQL wrapper solution, investing in developing one is just an expense and will likely never produce a world-class product (with the caveat being if it is open-sourced — like React — but internal-only solutions by definition are not).

Muhammad Rizsky

Jan 9, 2025, 1:01:05 PM
to Mike Schinkel, Henry, GoLang Nuts Mailing List

My statement will be an unpopular fact.

My workplace uses GORM for an application averaging 3k QPS. At some points it even spikes to around 50k (campaign events). It's a big project. We have never had an issue caused by GORM specifically.



Roland Müller

Jan 9, 2025, 2:49:23 PM
to golang-nuts
Hello,

I am actually scared by the idea of implementing a bigger DB application at the plain-code level, and I tend rather to use an ORM.

The reason is that doing all DB access methods in the application language (Go or Java) tends to produce a codebase where SQL snippets are scattered all over your code. As the application grows, it becomes more and more demanding to make changes. Thus, this does not scale up.

Especially if you change to another DB system, or even to a newer version of your current DB that brings many disruptive changes, fixing such a 100% handcrafted system is quite a big effort.

An ORM, on the other hand, enables you to build a cleaner architecture, but it may have performance issues.

These can be addressed by several means, at least in JPA (I don't know GORM):

  • The setup of the database schema(s) can be done manually, or partly manually, with pure SQL code. The ORM code then just interfaces to your database, while inside the DB there may be custom solutions. The same Java code can interface to different SQL implementations, so the entity classes are effectively only an interface.
  • On the SQL side you can add elements that are transparent to the ORM code, such as indexes, materialized views, etc. JPA also has an @Index annotation that you can use, but you are not forced to.
  • Last but not least, JPA offers the option of adding native queries to the Java code. I don't know whether GORM has such a feature.
To summarize: the basic approach would be to use an ORM and detect where the performance is weak. These places are then fixed either by transparent setup in the database or by partly overriding the ORM queries (native queries in JPA).

BR,
Roland

Brian Candler

Jan 10, 2025, 4:55:17 AM
to golang-nuts
On Thursday, 9 January 2025 at 19:49:23 UTC Roland Müller wrote:
> The reason is that doing all DB access methods in the application language (Go or Java) tends to produce a codebase where SQL snippets are scattered all over your code. As the application grows, it becomes more and more demanding to make changes. Thus, this does not scale up.

The solution to that problem with sqlc is to *centralize* all your SQL code snippets in one place. This then becomes a set of known, supported interfaces that the application is permitted to use (as an auto-generated, clean Go API). The application cannot use ad-hoc SQL.

If the application author finds they need to use a different SQL query, then they get it added to the corpus of SQL snippets. There it can be tested, reviewed, sanity-checked for performance implications etc.  You can also easily find which piece of code uses which snippet.

Rory Campbell-Lange

Jan 10, 2025, 8:26:22 AM
to Brian Candler, golang-nuts
On 10/01/25, 'Brian Candler' via golang-nuts (golan...@googlegroups.com) wrote:
> On Thursday, 9 January 2025 at 19:49:23 UTC Roland Müller wrote:
>
> > The reason is that doing all DB access methods in the application
> > language (Go or Java) tends to produce a codebase where SQL snippets
> > are scattered all over your code. As the application grows, it becomes
> > more and more demanding to make changes. Thus, this does not scale up.
>
> The solution to that problem with sqlc is to *centralize* all your SQL code
> snippets in one place. This then becomes a set of known, supported
> interfaces that the application is permitted to use (as an auto-generated,
> clean Go API). The application cannot use ad-hoc SQL.

If the project doesn't need to move between database types (in my experience, the middleware changes more often than the database on long-term projects), it is worth considering procedural SQL, for instance PL/pgSQL for Postgres.

PL/pgSQL can provide a "shell" around the database with several conveniences, including per-request auth, simple solutions to the so-called N+1 problem, and convenient testing within a transactional environment. Since a major capability of databases is generating composite results that don't naturally fit the entities from which they are derived (whether database tables or middleware types), working with procedural SQL can also help you get the most out of the relational power of SQL.
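
Calling such a PL/pgSQL function from Go is then just an ordinary query (a sketch; the function name, columns, and DSN are hypothetical):

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver
)

func main() {
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// A set-returning PL/pgSQL function is queried like any table.
	rows, err := db.QueryContext(context.Background(),
		`SELECT order_id, total FROM get_customer_orders($1)`, 42)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int64
		var total float64
		if err := rows.Scan(&id, &total); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id, total)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```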

I haven't looked at sqlc's support for procedural SQL, but in the past I've used a simple Go tool to generate code from PL/pgSQL functions; see https://github.com/rorycl/pgtools/tree/main/go-modelmaker.

Cheers,
Rory

Brian Candler

Jan 11, 2025, 6:32:55 AM
to golang-nuts