Can we improve how we work with prepared queries and how they are cached?


benjamin...@gmail.com

unread,
Oct 11, 2025, 1:22:40 PM
to elixir-ecto
Currently, prepared queries are identified by references and store a lot of data in ETS. In SQL we take a different approach to identify queries: we have globally unique ids, which are generated at compile time. I could imagine a world where the only information that needs to be stored is the conn pid and the id, to signify that the query has been prepared on that specific connection or still needs to be prepared.

To be clear, I'm not asking anyone to do this work; I can do it myself. But I want to gauge whether the core team finds this kind of improvement interesting.
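To sketch the idea (all names here are hypothetical, not Ecto's actual internals): with a globally unique query id, the cache degenerates into a membership check keyed by {conn_pid, query_id}, rather than storing the prepared query data itself:

```elixir
# Hypothetical sketch: if every query carried a globally unique, compile-time
# id, "has this connection prepared it?" becomes set membership in ETS,
# instead of caching the full prepared-statement data per connection.
defmodule PreparedSet do
  @table :prepared_set

  def init do
    :ets.new(@table, [:named_table, :set, :public, read_concurrency: true])
  end

  # True if the statement still needs a PREPARE on this connection.
  def needs_prepare?(conn_pid, query_id) do
    not :ets.member(@table, {conn_pid, query_id})
  end

  def mark_prepared(conn_pid, query_id) do
    :ets.insert(@table, {{conn_pid, query_id}, true})
    :ok
  end
end
```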

José Valim

unread,
Oct 11, 2025, 2:10:17 PM
to elixi...@googlegroups.com
Because Ecto queries are composed at runtime, we can't build compile-time IDs. This is why our query builder is based on the runtime parts. If you have good ideas for doing both composition and unique compile-time IDs, that would be appreciated. PRs welcome!
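To illustrate the constraint with a toy stand-in (this is not Ecto's real %Ecto.Query{} struct, just an illustration): the query is built by reducing over values that only exist at runtime, so there is nothing stable to identify at compile time:

```elixir
# Toy stand-in for %Ecto.Query{} (illustration only): the final query shape
# depends on values that exist only at runtime, so there is no stable,
# compile-time identity to attach an id to.
defmodule Compose do
  def base, do: %{from: "users", wheres: []}

  def where(query, clause), do: %{query | wheres: query.wheres ++ [clause]}

  # Which where-clauses end up in the query depends on the filters passed in
  # at runtime, clause by clause.
  def build(filters) do
    Enum.reduce(filters, base(), fn
      {:min_age, age}, q -> where(q, {:age_gte, age})
      {:name, name}, q -> where(q, {:name_eq, name})
    end)
  end
end
```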



benjamin...@gmail.com

unread,
Oct 11, 2025, 2:29:36 PM
to elixir-ecto
Yeah, I was afraid that was the issue, or more specifically the DSL. In SQL it's trivial, as we concatenate tokens before we parse and generate strings. I know it's a huge ask to deprecate the DSL in favour of SQL, and I really wanted to reuse db_connection and all the drivers. Although I do believe there is a lot to gain from having a consistent structure to represent SQL across drivers and libraries.

This feels like being stuck between a rock and a hard place, as there is no straightforward way to satisfy everything and everybody.

benjamin...@gmail.com

unread,
Oct 17, 2025, 5:39:49 PM
to elixir-ecto
The new pool implementation is going great. Unfortunately, db_connection does not fare well and breaks my benchmark when run in parallel. This is with the default configuration:

15:25:52.438 [error] Task #PID<0.353.0> started from #PID<0.101.0> terminating
** (DBConnection.ConnectionError) could not checkout the connection owned by #PID<0.353.0>. When using the sandbox, connections are shared, so this may imply another process is using a connection. Reason: connection not available and request was dropped from queue after 2911ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:

1. Ensuring your database is available and that you can connect to it
2. Tracking down slow queries and making sure they are running fast enough
3. Increasing the pool_size (although this increases resource consumption)
4. Allowing requests to wait longer by increasing :queue_target and :queue_interval

See DBConnection.start_link/2 for more information
    (db_connection 2.7.0) lib/db_connection.ex:989: DBConnection.run/3
    (benchee 1.3.1) lib/benchee/benchmark/collect/time.ex:17: Benchee.Benchmark.Collect.Time.collect/1
    (benchee 1.3.1) lib/benchee/benchmark/runner.ex:291: Benchee.Benchmark.Runner.collect/3
    (benchee 1.3.1) lib/benchee/benchmark/repeated_measurement.ex:91: Benchee.Benchmark.RepeatedMeasurement.do_determine_n_times/5
    (benchee 1.3.1) lib/benchee/benchmark/runner.ex:226: Benchee.Benchmark.Runner.measure_runtimes/4
    (benchee 1.3.1) lib/benchee/benchmark/runner.ex:102: Benchee.Benchmark.Runner.measure_scenario/2
    (elixir 1.20.0-dev) lib/task/supervised.ex:105: Task.Supervised.invoke_mfa/2
    (elixir 1.20.0-dev) lib/task/supervised.ex:40: Task.Supervised.reply/4
Function: #Function<2.13016825/0 in Benchee.Utility.Parallel.map/2>
Args: []

Operating System: macOS
CPU Information: Apple M1 Max
Number of Available Cores: 10
Available memory: 64 GB
Elixir 1.20.0-dev
Erlang 28.1
JIT enabled: true

Benchmark suite executing with the following configuration:
warmup: 2 s
time: 5 s
memory time: 2 s
reduction time: 2 s
parallel: 1
inputs: 1..100_000
Estimated total run time: 22 s

Measured function call overhead as: 0 ns
Benchmarking ecto with input 1..100_000 ...
Benchmarking sql with input 1..100_000 ...
Calculating statistics...
Formatting results...

##### With input 1..100_000 #####
Name           ips        average  deviation         median         99th %
sql      2102.96 K      475.52 ns  ±4854.34%         417 ns         583 ns
ecto      132.93 K     7522.50 ns   ±210.70%        6958 ns       17167 ns

Comparison:
sql      2102.96 K
ecto      132.93 K - 15.82x slower +7046.98 ns

Memory usage statistics:

Name    Memory usage
sql          1.02 KB
ecto         7.21 KB - 7.05x memory usage +6.19 KB

**All measurements for memory usage were the same**

Reduction count statistics:

Name Reduction count
sql              124
ecto             399 - 3.22x reduction count +275

**All measurements for reduction count were the same**

Operating System: macOS
CPU Information: Apple M1 Max
Number of Available Cores: 10
Available memory: 64 GB
Elixir 1.20.0-dev
Erlang 28.1
JIT enabled: true

Benchmark suite executing with the following configuration:
warmup: 2 s
time: 1 s
memory time: 2 s
reduction time: 2 s
parallel: 50
inputs: 1..100_000
Estimated total run time: 7 s

Measured function call overhead as: 0 ns
Benchmarking sql with input 1..100_000 ...
Calculating statistics...
Formatting results...

##### With input 1..100_000 #####
Name           ips        average  deviation         median         99th %
sql        98.67 K       10.13 μs  ±1880.64%        1.17 μs      208.29 μs

Memory usage statistics:

Name         average  deviation         median         99th %
sql          1.18 KB   ±934.54%        1.02 KB        1.02 KB

Reduction count statistics:

Name Reduction count
sql              124

**All measurements for reduction count were the same**

José Valim

unread,
Oct 17, 2025, 6:04:10 PM
to elixi...@googlegroups.com
Your benchmark seems to be using the ownership pool/sandbox, which is for test mode and not designed for production.
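For anyone else hitting this: the sandbox is opted into per environment, so a typical split (the app name and MyApp.Repo are placeholders) looks like:

```elixir
import Config

# config/test.exs - the ownership/sandbox pool, intended for tests only
config :my_app, MyApp.Repo,
  pool: Ecto.Adapters.SQL.Sandbox

# config/prod.exs - production uses DBConnection's default pool,
# so no :pool option is needed; just size it appropriately
config :my_app, MyApp.Repo,
  pool_size: 10
```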



benjamin...@gmail.com

unread,
Oct 17, 2025, 6:49:57 PM
to elixir-ecto
You're right.

Without the sandbox it works, but it's almost 30x slower:

Operating System: macOS
CPU Information: Apple M1 Max
Number of Available Cores: 10
Available memory: 64 GB
Elixir 1.20.0-dev
Erlang 28.1
JIT enabled: true

Benchmark suite executing with the following configuration:
warmup: 2 s
time: 1 s
memory time: 2 s
reduction time: 2 s
parallel: 50
inputs: 1..100_000
Estimated total run time: 14 s


Measured function call overhead as: 0 ns
Benchmarking ecto with input 1..100_000 ...
Benchmarking sql with input 1..100_000 ...
Calculating statistics...
Formatting results...

##### With input 1..100_000 #####
Name           ips        average  deviation         median         99th %
sql       107.00 K        9.35 μs  ±1791.69%        1.13 μs      200.04 μs
ecto        3.63 K      275.80 μs    ±93.24%      190.79 μs     1065.27 μs

Comparison:
sql       107.00 K
ecto        3.63 K - 29.51x slower +266.46 μs


Memory usage statistics:

Name         average  deviation         median         99th %
sql          1.55 KB  ±1406.93%        1.02 KB        1.02 KB
ecto            7 KB     ±0.00%           7 KB           7 KB

Comparison:
sql          1.02 KB
ecto            7 KB - 4.53x memory usage +5.45 KB


Reduction count statistics:

Name Reduction count
sql              124
ecto             361 - 2.91x reduction count +237


**All measurements for reduction count were the same**

benjamin...@gmail.com

unread,
Oct 17, 2025, 6:52:10 PM
to elixir-ecto

And serialized:

Operating System: macOS
CPU Information: Apple M1 Max
Number of Available Cores: 10
Available memory: 64 GB
Elixir 1.20.0-dev
Erlang 28.1
JIT enabled: true

Benchmark suite executing with the following configuration:
warmup: 2 s
time: 5 s
memory time: 2 s
reduction time: 2 s
parallel: 1
inputs: 1..100_000
Estimated total run time: 22 s

Measured function call overhead as: 0 ns
Benchmarking ecto with input 1..100_000 ...
Benchmarking sql with input 1..100_000 ...
Calculating statistics...
Formatting results...

##### With input 1..100_000 #####
Name           ips        average  deviation         median         99th %
sql      2130.22 K      469.44 ns  ±4564.87%         417 ns         583 ns
ecto      184.57 K     5418.06 ns   ±148.97%        5208 ns        6292 ns

Comparison:
sql      2130.22 K
ecto      184.57 K - 11.54x slower +4948.62 ns


Memory usage statistics:

Name    Memory usage
sql          1.02 KB
ecto            7 KB - 6.84x memory usage +5.98 KB


**All measurements for memory usage were the same**

Reduction count statistics:

Name Reduction count
sql              124
ecto             361 - 2.91x reduction count +237

**All measurements for reduction count were the same**

José Valim

unread,
Oct 18, 2025, 2:30:38 AM
to elixi...@googlegroups.com
Can you show the code being benchmarked?



benjamin...@gmail.com

unread,
Oct 18, 2025, 2:50:22 PM
to elixir-ecto
import SQL
import Ecto.Query
SQL.Pool.start_link(%{name: :mypool, protocol: :tcp, size: 10})
defmodule SQL.Repo do
  use Ecto.Repo, otp_app: :sql, adapter: Ecto.Adapters.Postgres
  use SQL, adapter: SQL.Adapters.Postgres, repo: __MODULE__

  def sql() do
    {idx, _} = SQL.Pool.checkout(~SQL[select 1], :mypool)
    SQL.Pool.checkin(idx, :mypool)
  end

  def ecto() do
    checkout(fn -> to_sql(:all, select(from("users"), 1)) end)
  end
end
Application.put_env(:sql, :ecto_repos, [SQL.Repo])
Application.put_env(:sql, SQL.Repo, log: false, username: "postgres", password: "postgres", hostname: "localhost", database: "sql_test#{System.get_env("MIX_TEST_PARTITION")}", pool_size: 10)
SQL.Repo.__adapter__().storage_up(SQL.Repo.config())
SQL.Repo.start_link()
Benchee.run(
  %{
  "sql" => fn _ -> SQL.Repo.sql() end,
  "ecto" => fn _ -> SQL.Repo.ecto() end,
  },
  parallel: 50, time: 1,
  inputs: %{"1..100_000" => Enum.to_list(1..100_000)},
  memory_time: 2,
  reduction_time: 2,
  unit_scaling: :smallest,
  measure_function_call_overhead: true,
  profile_after: :eprof
)

You might say it's not a fair benchmark, since Ecto is doing more work at runtime, but that is not really the issue: we have seen the same performance degradation in every workflow, even when both versions execute similar code:

import SQL
import Ecto.Query
SQL.Pool.start_link(%{name: :mypool, protocol: :tcp, size: 10})
defmodule SQL.Repo do
  use Ecto.Repo, otp_app: :sql, adapter: Ecto.Adapters.Postgres
  use SQL, adapter: SQL.Adapters.Postgres, repo: __MODULE__

  def sql() do
    {idx, _} = SQL.Pool.checkout(SQL.parse("with recursive temp (n, fact) as (select 0, 1 union all select n+1, (n+1)*fact from temp where n < 9)"), :mypool)
    SQL.Pool.checkin(idx, :mypool)
  end

  def ecto() do
    checkout(fn -> SQL.parse("with recursive temp (n, fact) as (select 0, 1 union all select n+1, (n+1)*fact from temp where n < 9)") end)
  end
end
Application.put_env(:sql, :ecto_repos, [SQL.Repo])
Application.put_env(:sql, SQL.Repo, log: false, username: "postgres", password: "postgres", hostname: "localhost", database: "sql_test#{System.get_env("MIX_TEST_PARTITION")}", pool_size: 10)
SQL.Repo.__adapter__().storage_up(SQL.Repo.config())
SQL.Repo.start_link()
Benchee.run(
  %{
  "sql" => fn _ -> SQL.Repo.sql() end,
  "ecto" => fn _ -> SQL.Repo.ecto() end,
  },
  parallel: 50, time: 1,
  inputs: %{"1..100_000" => Enum.to_list(1..100_000)},
  memory_time: 2,
  reduction_time: 2,
  unit_scaling: :smallest,
  measure_function_call_overhead: true,
  profile_after: :eprof
)

Operating System: macOS
CPU Information: Apple M1 Max
Number of Available Cores: 10
Available memory: 64 GB
Elixir 1.20.0-dev
Erlang 28.1
JIT enabled: true

Benchmark suite executing with the following configuration:
warmup: 2 s
time: 1 s
memory time: 2 s
reduction time: 2 s
parallel: 50
inputs: 1..100_000
Estimated total run time: 14 s

Measured function call overhead as: 0 ns
Benchmarking ecto with input 1..100_000 ...
Benchmarking sql with input 1..100_000 ...
Calculating statistics...
Formatting results...

##### With input 1..100_000 #####
Name           ips        average  deviation         median         99th %
sql        33.24 K       30.08 μs   ±311.95%        4.21 μs      232.08 μs
ecto        2.96 K      337.61 μs   ±116.37%         244 μs     1180.75 μs

Comparison:
sql        33.24 K
ecto        2.96 K - 11.22x slower +307.52 μs

Memory usage statistics:

Name    Memory usage
sql         13.76 KB
ecto        16.20 KB - 1.18x memory usage +2.44 KB


**All measurements for memory usage were the same**

José Valim

unread,
Oct 18, 2025, 3:08:27 PM
to elixi...@googlegroups.com
It would probably be best to measure only the time to checkout, without queries. But the second example, where SQL.parse is used in both, is fair, since they both do the same work. If that's the 10x, then that's a solid improvement!

I'd be curious to see how things compare once all pieces of the puzzle are in place. For example, how do you handle these cases?

1. What happens if you check out but the process crashes?
2. What happens if you check out, issue "begin transaction", and then the process crashes?
3. What do you track in your telemetry events? Queue time, checkout time, and idle time?
4. How do you handle overloads? For example, DBConnection performs load shedding. Without it, when the system is overloaded, many checkouts may have already timed out, since the pool cannot keep up, but if you do not preemptively discard them, they will take a connection from the pool, delaying the most recent checkouts, which may then time out too, and so on. Load shedding, circuit breakers, etc. all make pure benchmarks slower, but they are important for resilience in practice!
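On point 4, a minimal sketch of what deadline-based load shedding looks like (my own illustration, not DBConnection's actual implementation): before dedicating a connection to a queued checkout, discard requests whose callers have already timed out, so stale work never occupies the pool.

```elixir
# Deadline-based load shedding sketch: each queued checkout request carries
# the deadline its caller is willing to wait until. When a connection frees
# up, requests that already expired are dropped rather than served.
defmodule Shed do
  def next_request(queue, now) do
    case :queue.out(queue) do
      {{:value, %{deadline: d} = req}, rest} when d > now ->
        {req, rest}

      {{:value, _expired}, rest} ->
        # The caller gave up waiting; drop the request instead of serving it.
        next_request(rest, now)

      {:empty, rest} ->
        {:empty, rest}
    end
  end
end
```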



benjamin...@gmail.com

unread,
Oct 18, 2025, 3:40:26 PM
to elixir-ecto
Yeah, the 10x improvement is from the SQL.parse example. In this initial implementation none of your points is covered yet, but the pool does load balancing, spreading work across the schedulers. The pool is implemented as a GenServer that will eventually deal with connections going into a bad state.
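The scheduler spread works roughly like this simplified sketch (an assumed illustration; the real pool tracks more state): derive the connection slot from the scheduler currently running the caller, so callers on different schedulers contend on different slots.

```elixir
# Simplified scheduler-spread sketch: map the current scheduler id onto a
# connection slot in 1..pool_size, so concurrent callers running on
# different schedulers naturally hit different connections.
defmodule SchedulerSpread do
  def slot(pool_size) when pool_size > 0 do
    scheduler = :erlang.system_info(:scheduler_id)
    rem(scheduler - 1, pool_size) + 1
  end
end
```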

benjamin...@gmail.com

unread,
Oct 20, 2025, 4:27:33 PM
to elixir-ecto
Load seems to be evenly spread across schedulers and connections:
[attachment: Screenshot 2025-10-20 at 4.25.43 PM.png]

benjamin...@gmail.com

unread,
Oct 23, 2025, 2:02:39 PM
to elixir-ecto


Now we have linear scaling of the pool. I've included an Ecto benchmark as well.


➜  sql git:(main) mix sql.bench.pool

🧠 SQL.Pool Live Benchmark | Concurrency: 40 | Scheduler Online: 10 | Scheduler: 10 | Total Queries: 46077 | QPS: 3125.0 | Total dead: 0 | Error Rate: 0.0% | Retry Rate: 0.0%

Time left: 0 s

Pool Size: Initial: 20 | Current: 20 | Recommended: 16 | Active: 20 | Idle: 0 | Dead: 0

λ (pool): 3125.0 req/s: ▆▆▆▇▇▆▆▇▇▇▆▇▇▇▆▆▆▆▆▆▇▇▇▆▇▇▇▆▇▆▇▇▇▇▆▇▇▆▇▇▇▇▇▆█▇▇▇▆▇

W: 6.0 ms: ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇█▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇

L: 20 connections (recommended: 16): ██████████████████████████████████████████████████

Error Rate: 0.0% ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁

Retry Rate: 0.0% ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁

── Per-schedule utilization ─────────────

 1: ██████████ 10%

   ↳  2: ██████████████████████████████████████████████████ 99% (alive)

   ↳  1: ██████████████████████████████████████████████████ 98% (alive)

 2: ██████████ 10%

   ↳  4: ██████████████████████████████████████████████████ 99% (alive)

   ↳  3: ██████████████████████████████████████████████████ 99% (alive)

 3: ██████████ 10%

   ↳  6: ██████████████████████████████████████████████████ 99% (alive)

   ↳  5: ██████████████████████████████████████████████████ 97% (alive)

 4: ██████████ 10%

   ↳  8: ██████████████████████████████████████████████████ 99% (alive)

   ↳  7: ██████████████████████████████████████████████████ 98% (alive)

 5: ██████████ 10%

   ↳  9: ██████████████████████████████████████████████████ 98% (alive)

   ↳ 10: ██████████████████████████████████████████████████ 98% (alive)

 6: ██████████ 10%

   ↳ 12: ██████████████████████████████████████████████████ 99% (alive)

   ↳ 11: ██████████████████████████████████████████████████ 97% (alive)

 7: ██████████ 10%

   ↳ 14: ██████████████████████████████████████████████████ 100% (alive)

   ↳ 13: ██████████████████████████████████████████████████ 98% (alive)

 8: ██████████ 10%

   ↳ 16: ██████████████████████████████████████████████████ 98% (alive)

   ↳ 15: ██████████████████████████████████████████████████ 98% (alive)

 9: ██████████ 10%

   ↳ 17: ██████████████████████████████████████████████████ 97% (alive)

   ↳ 18: ██████████████████████████████████████████████████ 97% (alive)

10: ██████████ 10%

   ↳ 20: ██████████████████████████████████████████████████ 99% (alive)

   ↳ 19: ██████████████████████████████████████████████████ 96% (alive)


  defp loop_checkout(stop_time, counter, pid) do
    if System.monotonic_time() < stop_time do
      case SQL.Pool.checkout(%{id: 1}, :default) do
        {idx, _socket} ->
          :counters.add(counter, 1, 1)
          Process.sleep(Enum.random(1..10))
          SQL.Pool.checkin(idx, :default)

        :none -> Process.sleep(5)
      end

      loop_checkout(stop_time, counter, pid)
    end
  end


➜  sql git:(main) mix run /benchmarks/ecto.exs

🧠 SQL.Pool Live Benchmark | Concurrency: 40 | Total Queries: 22230 | QPS: 1566.7

Pool Size: Initial: 20

Time left: 0 s



  defp loop_checkout(stop_time, counter) do
    if System.monotonic_time() < stop_time do

      try do
        Repo.checkout(fn ->
          :counters.add(counter, 1, 1)
          Process.sleep(Enum.random(1..10))
        end)
      rescue
        _ ->
          Process.sleep(5)
          :error
      end
      loop_checkout(stop_time, counter)
    end
  end

benjamin...@gmail.com

unread,
Oct 26, 2025, 3:25:33 PM
to elixir-ecto
Okay, I wrote a better benchmark to visualize, step by step, how we do compared to Ecto.

defmodule SQL.Repo do
  use Ecto.Repo, otp_app: :sql, adapter: Ecto.Adapters.Postgres
end

Application.put_env(:sql, :ecto_repos, [SQL.Repo])
Application.put_env(:sql, SQL.Repo, log: false, username: "postgres", password: "postgres", hostname: "localhost", database: "sql_test#{System.get_env("MIX_TEST_PARTITION")}", pool_size: 10)
SQL.Repo.__adapter__().storage_up(SQL.Repo.config())

defmodule SQL.Pool.DeterministicBench do
  @moduledoc false
  @pool_size 40
  @lease_time_ms 5
  @duration_ms 5_000

  def run do
    {:ok, pid} = SQL.Repo.start_link()

    IO.puts("Starting scaling benchmark (pool size: #{@pool_size})")

    for clients <- 1..50 do
      counter = :counters.new(1, [:write_concurrency])
      stop_time = System.monotonic_time(:millisecond) + @duration_ms

      Enum.each(1..clients, fn _ ->
        spawn(fn -> loop_checkout(stop_time, pid, counter) end)
      end)

      # Wait for duration
      Process.sleep(@duration_ms + 50)

      total_requests = :counters.get(counter, 1)
      measured_qps = total_requests / (@duration_ms / 1000)
      theoretical_qps = @pool_size / (@lease_time_ms / 1000)

      IO.puts("""
      Clients: #{clients}
      Total requests: #{total_requests}, Measured QPS: #{Float.round(measured_qps, 1)}
      Theoretical max QPS: #{theoretical_qps |> Float.round(1)}
      """)
    end
  end

  defp loop_checkout(stop_time, pid, counter) do
    if System.monotonic_time(:millisecond) < stop_time do
      try do
        SQL.Repo.checkout(fn ->
          :counters.add(counter, 1, 1)
          :rand.seed(:exsplus, {123, 456, 789})
          delay = :rand.uniform(10)  # 1..10 ms
          Process.sleep(delay)

        end)
      rescue
        _ -> :error
      end
      loop_checkout(stop_time, pid, counter)
    end
  end
end
SQL.Pool.DeterministicBench.run()


defmodule SQL.Pool.DeterministicBench do
  @moduledoc false
  @pool_size 40
  @lease_time_ms 5
  @duration_ms 5_000

  def run do
    {:ok, pid} = SQL.Pool.start_link(%{name: :default, size: @pool_size, protocol: :tcp})

    IO.puts("Starting scaling benchmark (pool size: #{@pool_size})")

    for clients <- 1..50 do
      counter = :counters.new(1, [:write_concurrency])
      stop_time = System.monotonic_time(:millisecond) + @duration_ms

      Enum.each(1..clients, fn _ ->
        spawn(fn -> loop_checkout(stop_time, pid, counter) end)
      end)

      # Wait for duration
      Process.sleep(@duration_ms + 50)

      total_requests = :counters.get(counter, 1)
      measured_qps = total_requests / (@duration_ms / 1000)
      theoretical_qps = @pool_size / (@lease_time_ms / 1000)

      IO.puts("""
      Clients: #{clients}
      Total requests: #{total_requests}, Measured QPS: #{Float.round(measured_qps, 1)}
      Theoretical max QPS: #{theoretical_qps |> Float.round(1)}
      """)
    end
  end

  defp loop_checkout(stop_time, pid, counter) do
    if System.monotonic_time(:millisecond) < stop_time do
      {idx, _socket} = SQL.Pool.checkout(%{id: 1}, :default)
      :counters.add(counter, 1, 1)
      :rand.seed(:exsplus, {123, 456, 789})
      delay = :rand.uniform(10)  # 1..10 ms
      Process.sleep(delay)
      SQL.Pool.checkin(idx, :default)
      loop_checkout(stop_time, pid, counter)
    end
  end
end

SQL.Pool.DeterministicBench.run()
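One nit on the harness itself: `:rand.seed(:exsplus, {123, 456, 789})` runs on every iteration, so `:rand.uniform(10)` returns the same delay each time and the sleep is effectively constant rather than uniform. As for the printed ceiling, assuming the intended uniform 1..10 ms sleep, the true mean lease is 5.5 ms rather than the nominal 5 ms:

```elixir
pool_size = 40

# Ceiling as printed, using the nominal 5 ms lease:
nominal = pool_size / (5 / 1000)           # 8.0e3 requests/s as printed

# With a true uniform 1..10 ms sleep the mean lease is 5.5 ms,
# which lowers the ideal ceiling:
mean_lease_s = Enum.sum(1..10) / 10 / 1000
uniform_ceiling = pool_size / mean_lease_s # ≈ 7272.7 requests/s
IO.inspect({nominal, uniform_ceiling})
```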


➜  sql git:(main)   mix run benchmarks/ecto.exs

Starting scaling benchmark (pool size: 40)

All runs printed "Theoretical max QPS: 8.0e3" (pool size 40 / 5 ms lease):

Clients   Total requests   Measured QPS
      1              803          160.6
      2             1629          325.8
      3             2441          488.2
      4             3218          643.6
      5             3999          799.8
      6             4913          982.6
      7             5728         1145.6
      8             6533         1306.6
      9             7311         1462.2
     10             8152         1630.4
     11             8084         1616.8
     12             8193         1638.6
     13             8095         1619.0
     14             8145         1629.0
     15             8182         1636.4
     16             7884         1576.8
     17             7966         1593.2
     18             7510         1502.0
     19             8055         1611.0
     20             8104         1620.8
     21             8125         1625.0
     22             8132         1626.4
     23             8079         1615.8
     24             8107         1621.4
     25             8130         1626.0
     26             7683         1536.6
     27             7894         1578.8
     28             8015         1603.0
     29             8062         1612.4
     30             8024         1604.8
     31             7965         1593.0
     32             7837         1567.4
     33             8025         1605.0
     34             8064         1612.8
     35             8077         1615.4
     36             8191         1638.2
     37             8104         1620.8
     38             8138         1627.6
     39             8103         1620.6
     40             8223         1644.6
     41             8178         1635.6
     42             8196         1639.2
     43             8176         1635.2
     44             8107         1621.4
     45             8186         1637.2
     46             8247         1649.4
     47             7920         1584.0
     48             8057         1611.4
     49             8159         1631.8
     50             8123         1624.6



➜  sql git:(main) mix sql.bench.pool

Starting scaling benchmark (pool size: 40)

All runs printed "Theoretical max QPS: 8.0e3" (pool size 40 / 5 ms lease):

Clients   Total requests   Measured QPS
      1              824          164.8
      2             1645          329.0
      3             2439          487.8
      4             3216          643.2
      5             4092          818.4
      6             4860          972.0
      7             5691         1138.2
      8             6480         1296.0
      9             7392         1478.4
     10             8150         1630.0
     11             8960         1792.0
     12             9816         1963.2
     13            10645         2129.0
     14            11074         2214.8
     15            12024         2404.8
     16            12816         2563.2
     17            13665         2733.0
     18            14515         2903.0
     19            15588         3117.6
     20            16295         3259.0
     21            17256         3451.2
     22            18348         3669.6
     23            19182         3836.4
     24            20016         4003.2
     25            20850         4170.0
     26            21684         4336.8
     27            22518         4503.6
     28            23352         4670.4
     29            24186         4837.2
     30            25020         5004.0
     31            25854         5170.8
     32            26688         5337.6
     33            27522         5504.4
     34            28356         5671.2
     35            29190         5838.0
     36            30024         6004.8
     37            30747         6149.4
     38            31692         6338.4
     39            32526         6505.2
     40            33360         6672.0
     41            34194         6838.8
     42            35028         7005.6
     43            35862         7172.4
     44            36696         7339.2
     45            37530         7506.0
     46            38360         7672.0
     47            39197         7839.4
     48            40030         8006.0
     49            40862         8172.4
     50            41694         8338.8


