
Limiting memory usage


Anton Shepelev

May 5, 2022, 5:40:56 AM
Hello, all.

We have a server with usually seven to ten active MSSQL
instances, of which only a few are intensively used at any
given moment. Manually stopping unused instances and
starting required ones several times a week is too
burdensome for several reasons. In order to improve RAM
utilisation, I have written a simple script that tries to
decrease memory use by instances not in active operation. I
might share and discuss this script later, but now I have a
specific question about the `max server memory' parameter:

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options

In order to estimate its effect on actual memory usage, I
wrote the following test script:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
CREATE TABLE #LOG( N INT IDENTITY, m_used INT, m_max INT )

DECLARE @mem_ref INT
DECLARE @mem_cur INT
DECLARE @mem_max INT

-- Exercise for the curious reader: rewrite the nested loops as a
-- single loop with a single query of sys.dm_os_process_memory:
WHILE 1=1 BEGIN
    SELECT @mem_ref = physical_memory_in_use_kb/1024
    FROM   sys.dm_os_process_memory
    SET @mem_max = @mem_ref
    WHILE 1=1 BEGIN
        SET @mem_max = @mem_max - 1
        IF @mem_max < 512 BREAK
        EXEC sp_configure 'max server memory', @mem_max
        RECONFIGURE
        WAITFOR DELAY '00:00:10'
        SELECT @mem_cur = physical_memory_in_use_kb/1024
        FROM   sys.dm_os_process_memory
        IF @mem_cur < @mem_ref BEGIN
            INSERT INTO #LOG VALUES( @mem_cur, @mem_max )
            BREAK
        END
    END
END
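
For the curious reader mentioned in the comment above, the
single-loop variant could look like this -- an untested
sketch that logs to the same #LOG table and queries
sys.dm_os_process_memory only once per iteration:

DECLARE @mem_ref INT
DECLARE @mem_cur INT
DECLARE @mem_max INT

SELECT @mem_ref = physical_memory_in_use_kb/1024
FROM   sys.dm_os_process_memory
SET @mem_max = @mem_ref

WHILE @mem_max > 512 BEGIN
    SET @mem_max = @mem_max - 1
    EXEC sp_configure 'max server memory', @mem_max
    RECONFIGURE
    WAITFOR DELAY '00:00:10'
    SELECT @mem_cur = physical_memory_in_use_kb/1024
    FROM   sys.dm_os_process_memory
    IF @mem_cur < @mem_ref BEGIN
        INSERT INTO #LOG VALUES( @mem_cur, @mem_max )
        SET @mem_ref = @mem_cur -- new reference point
        SET @mem_max = @mem_ref -- restart the descent from it
    END
END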

SELECT
#LOG.m_used AS Used ,
#LOG.m_max AS [Max] ,
#LOG.m_used - #LOG.m_max AS [Overrun] ,
PREV.m_used - #LOG.m_used AS [Delta Used],
PREV.m_max - #LOG.m_max AS [Delta Max]
FROM #LOG
LEFT JOIN #LOG PREV ON PREV.N = #LOG.N - 1

And here are the typical results:

https://pastebin.com/raw/L5BDGJ7Q
(tab-separated table)

As you can see from the Overrun column, most of the time
actual memory usage exceeds the configured limit by 31-33
megabytes, occasionally coinciding with it, at which moments
an interesting anomaly takes place: a higher memory limit
results in a further decrease of used memory. For example, a
limit of 857 MB caused memory usage to drop to 888 MB, but
then a higher limit of 887 MB decreased usage to the same
887 MB!

Is this behavior documented anywhere, and is there a method
of stable and predictable control of used memory? I can
think of setting the limit to at least 34 megabytes less
than current usage, but this may depend on the MSSQL version
and environment...

--
() ascii ribbon campaign - against html e-mail
/\ http://preview.tinyurl.com/qcy6mjc [archived]

Anton Shepelev

May 5, 2022, 6:18:57 AM
I wrote:

> SELECT
> #LOG.m_used AS Used ,
> #LOG.m_max AS [Max] ,
> #LOG.m_used - #LOG.m_max AS [Overrun] ,
> PREV.m_used - #LOG.m_used AS [Delta Used],
> PREV.m_max - #LOG.m_max AS [Delta Max]
> FROM #LOG
> LEFT JOIN #LOG PREV ON PREV.N = #LOG.N - 1

I forgot to add ORDER BY #LOG.N , because the results came
out in that order anyway, but I understand it is merely a
coincidence, albeit a likely one :-)

Erland Sommarskog

May 5, 2022, 3:28:09 PM
Anton Shepelev (anton.txt@g{oogle}mail.com) writes:
> As you see from the Overrun column, most of the time actual
> memory usage exceeds the configured limit by 31-33
> megabytes, occasionally coinciding with it, at which moments
> an interesting anomaly takes place: a higher memory limit
> results in a further decrease of used memory, for example: a
> limit of 857 Mb caused memory usage to drop to 888 Mb, but
> then a higher limit of 887 megabytes decreased usage to the
> same 887 megabytes!
>
> Is this behavior documented anywhere, and is there a method
> of stable and predictable control of used memory? I can
> think of setting the limit to at least 34 megabytes less
> than current usage, but this may depend on MSSQL version and
> environment...
>

Max server memory mainly controls the buffer cache, which is
also the main consumer. But there are also memory
allocations that fall outside "max server memory", so it is
not unusual for the actual usage to exceed the setting a
little bit.

As for the usage decreasing when you are increasing the
memory, it can be because the OS is signaling memory
pressure, or because SQL Server is still working on trimming
the memory. The memory will not start to increase until
there are queries that need to drag pages into memory.

Don't make this too dynamic. I don't know for sure, but I
would assume that "max server memory" is one of the options
that clears the plan cache, and thus causes a lot of
recompilations. (Because the amount of available memory
affects compilation of queries.)
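
One way to test that assumption (a sketch, not verified
here; 4096 is an arbitrary example value) is to count cached
plans before and after changing the setting:

SELECT COUNT(*) AS plans_before FROM sys.dm_exec_cached_plans
EXEC sp_configure 'max server memory', 4096
RECONFIGURE
SELECT COUNT(*) AS plans_after  FROM sys.dm_exec_cached_plans
-- A large drop in the count would indicate a cachestore flush.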

Anton Shepelev

May 11, 2022, 5:48:52 AM
Erland Sommarskog:

> Max server memory mainly controls the buffer cache, which
> is also the main consumer. But there are also memory
> allocations that fall outside "max server memory", so it is
> not unusual for the actual usage to exceed the setting a
> little bit.

The documentation seems to disagree with you:

Reconfigure the amount of memory (in megabytes) for a SQL
Server process used by an instance of SQL Server.

When I set a limit of 768 MB, restarted the instance, and
put some heavy memory pressure on it, the actual usage never
exceeded 768 MB:

sys.dm_os_process_memory: physical_memory_in_use_kb/1024 = 751
Task Manager : Working set (memory) /1024 = 751

I then increased the limit to 1024 MB and applied more
memory pressure to the instance. The memory usage again
stayed below the limit:

sys.dm_os_process_memory: physical_memory_in_use_kb/1024 = 1015
Task Manager : Working set (memory) /1024 = 1015

As you can see, SQL Server does not exceed its memory limit
unless it is exceeded already (which it is when the limit is
decreased below actual usage)!
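
For the record, memory pressure of this kind can be produced
with something as crude as a repeated full scan of a large
table (illustrative sketch only; dbo.AnyLargeTable is a
placeholder name):

DECLARE @i INT = 0, @c BIGINT
WHILE @i < 10 BEGIN
    -- A full scan drags data pages into the buffer pool:
    SELECT @c = COUNT_BIG(*)
    FROM   dbo.AnyLargeTable WITH (INDEX(0))
    SET @i = @i + 1
END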

> Don't make this too dynamic. I don't know for sure, but I
> would assume that max server memory are one of the options
> that clear the plan cache, and thus causes a lot of
> recompilations. (Because the amount of available memory
> affects compilation of queries.)

My script is executed every hour, and during each working
day it causes the memory usage of an idle SQL Server to drop
by a factor of two closer to the base level (128 MB). In
that sense, it is not very dynamic. The source is far from
publishing quality, but you are welcome to examine it if
interested:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE

DECLARE @srv_n INT = 6    -- number of MSSQL instances
DECLARE @dec_2 INT = 10   -- number of iterations for half-decay
DECLARE @m_res INT = 2048 -- memory reserved for the OS

-- TODO: instead of c_mis, try: round down to next 32, subtract 32.
DECLARE @c_mmi INT = 128
DECLARE @c_mma INT = 2147483647
DECLARE @c_ste INT = 32 -- minimum memory decrement (exp.)
DECLARE @c_two FLOAT = 2
DECLARE @c_min VARCHAR(17) = 'min server memory'
DECLARE @c_max VARCHAR(17) = 'max server memory'

DECLARE @v_low BIT,
        @p_low BIT
-- RAM values, in MB:
DECLARE @m_min INT -- min RAM parameter
DECLARE @m_use INT -- RAM used by this MSSQL instance
DECLARE @m_lim INT -- target RAM limitation
DECLARE @max_l INT -- max limit
DECLARE @N     INT

-- Active only in work hours:
IF NOT DATEPART( hour, GETDATE() ) BETWEEN 9 AND 19 GOTO Finish

SELECT @m_use = physical_memory_in_use_kb/1024,
       @p_low = process_physical_memory_low ,
       @v_low = process_virtual_memory_low
FROM   sys.dm_os_process_memory
IF @v_low = 1 OR @p_low = 1 GOTO Finish
SET @m_min = @c_mmi
IF @m_use <= @m_min GOTO Finish
SET @m_lim = @m_min + (@m_use - @m_min) / POWER(@c_two, 1.0/@dec_2)
SET @max_l = @m_use - @m_use % @c_ste - 1 - @c_ste + 8
IF @max_l < @m_lim SET @m_lim = @max_l
IF @m_lim < @m_min SET @m_lim = @m_min

PRINT FORMATMESSAGE('Decreasing memory use from %i to %i', @m_use, @m_lim)

EXEC sp_configure @c_min, 0
EXEC sp_configure @c_max, @m_lim
RECONFIGURE

SET @N = 12
WHILE @N > 0
BEGIN
    WAITFOR DELAY '00:00:05'
    SELECT @m_use = physical_memory_in_use_kb/1024,
           @p_low = process_physical_memory_low
    FROM   sys.dm_os_process_memory
    IF @p_low = 1 BREAK
    IF @m_use < @m_lim + @c_ste BREAK
    SET @N = @N - 1
END

EXEC sp_configure @c_max, @c_mma
RECONFIGURE

Finish:
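
The half-decay arithmetic can be checked in isolation: with
@dec_2 = 10, applying the per-run divisor POWER(2, 1.0/@dec_2)
ten times halves the excess over the base level. A
standalone check with illustrative values:

DECLARE @dec_2 INT   = 10
DECLARE @m_min FLOAT = 128  -- base level, MB
DECLARE @m_use FLOAT = 1128 -- 1000 MB above the base
DECLARE @i INT = 0
WHILE @i < @dec_2 BEGIN
    SET @m_use = @m_min + (@m_use - @m_min) / POWER(2e0, 1.0/@dec_2)
    SET @i = @i + 1
END
PRINT @m_use -- about 628: the 1000 MB excess has halved to 500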

Anton Shepelev

Jul 4, 2022, 7:31:34 AM
I wrote:

> We have a server with usually seven to ten active MSSQL
> instances, of which only few are intensively used at each
> moment. Manually stopping unused instances and starting
> required ones several times a week is too burdensome for
> several reasons. In order to improve RAM utilisation, I
> have written a simple script that tries to decrease memory
> use by instances not in active operation.
> [...]

I continue to experiment with my memory balancer, and its
worst defect is that sometimes, by decreasing
'max server memory', it causes the instance to freeze and
become unresponsive until restarted. There are three ranges
of this setting:

freeze high CPU Works OK, zero CPU
until usage while usage while idle
restart idle
|--------|-------------|--------------------->
0 V1 V2 max server memory

where V1 and V2 are idiosyncratic to each instance. I have
tried to estimate V1 and V2 by monitoring the following
parameters:

1. sys.dm_os_process_memory:
- process_physical_memory_low
- process_virtual_memory_low

2. sys.dm_os_performance_counters:
- Memory Grants Pending

3. sys.dm_os_memory_pools:
- free_entries_count

4. a percentage of the total .mdf file size in the instance

but in vain.
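
For reference, the checks above amount to a batch along
these lines (a reconstructed sketch, not the exact code I
used):

SELECT process_physical_memory_low ,
       process_virtual_memory_low
FROM   sys.dm_os_process_memory

SELECT cntr_value AS grants_pending
FROM   sys.dm_os_performance_counters
WHERE  counter_name LIKE 'Memory Grants Pending%'

SELECT SUM( free_entries_count ) AS free_entries
FROM   sys.dm_os_memory_pools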

Have you an idea how to estimate the minimum working value
of 'max server memory', that is V2 in the diagram above?

Erland Sommarskog

Jul 5, 2022, 3:45:38 AM
Anton Shepelev (anton.txt@g{oogle}mail.com) writes:
> I continue to experiment with my memory balancer, and its
> worst defect is that sometimes, by decreasing
> 'max server memory', it causes the instance to freeze and
> become unresponsive until restarted. There are three ranges
> of this setting:
>
> freeze high CPU Works OK, zero CPU
> until usage while usage while idle
> restart idle
> |--------|-------------|--------------------->
> 0 V1 V2 max server memory
>
>...
> Have you an idea how to estimate minimum working value for
> server memory, that is V2 in the diagram above?

You don't say which absolute values you are working with. What I've noticed
is that if you set "max server memory" to really low values, below 400 MB,
SQL Server will not even start.

I'm not surprised if you can set "max server memory" to higher values
and still send your server into nirvana.

Anton Shepelev

Jul 5, 2022, 5:16:30 AM
Erland Sommarskog to Anton Shepelev:

> > I continue to experiment with my memory balancer, and
> > its worst defect is that sometimes, by decreasing
> > 'max server memory', it causes the instance to freeze
> > and become unresponsive until restarted. There are three
> > ranges of this setting:
> >
> > freeze high CPU Works OK, zero CPU
> > until usage while usage while idle
> > restart idle
> > |--------|-------------|--------------------->
> > 0 V1 V2 max server memory
> >
> > where V1 and V2 are idiosyncratic to each instance. I
> > have tried to estimate V1 and V2 using:
> > [...]
> > Have you an idea how to estimate minimum working value
> > for server memory, that is V2 in the diagram above?
>
> You don't say which absolute values you are working with.

Deliberately, because I want to arrive at a universal
approach that will not depend on hard-coded values of V1 and
V2 but will rather estimate them. In my case, V1 and V2 are
usually between 190 and 350 MB. My databases are usually
between 5 and 50 GB, with 5 to 15 databases per instance,
and I see no correlation between DB size and minimum
required RAM...

> What I've noticed is that if you set "max server memory"
> to really low values, below 400 MB, SQL Server will not
> even start.

I have had that, too. Most of the time, however, the
instance can be resurrected:

1. rename all the .mdf files for user DBs
(e.g. to .mdf1),
2. start the server (with -m if necessary),
3. set 'max server memory' to a working value,
4. stop the instance,
5. restore the correct names of the .mdf files,
6. start the instance.

> I'm not surprised if you can set "max server memory" to
> higher values and still send your server into nirvana.

And I am surprised, because an enterprise system should
either refuse a setting that sends it into limbo or comply
with it even at the expense of terribly slow performance.
Becoming unresponsive and unable to start is bad manners,
especially because setting server memory requires that the
instance be running! The documentation says the minimum
value is 128 MB.

I am here in search of suggestions on how to detect the
situation between V1 and V2, when the instance still works
but is on the verge of swooning. With the plethora of
performance-monitoring facilities in MSSQL, it must be
possible somehow.

Erland Sommarskog

Jul 5, 2022, 3:27:20 PM
Anton Shepelev (anto...@freeshell.de) writes:
> Deliberately, because I want to arrive at a universal
> approach that will not depend on hard-coded values of V1 and
> V2 but will rather estimate them. In my case, V1 and V2 are
> usually between 190 and 350 Mb. My databases are usually
> between 5-50 Gb, 5-15 databases per instance, and I see no
> correlation between DB size and minimum required RAM...

Even 350 MB is too low in my opinion. I have not tested
rigorously, but my gut feeling is that you need at least
400 MB.

Had you been flipping between, let's say, 2 GB and 8 GB, I
could have had some sympathy for you. Now I can only say:
don't do that.

> And I am surprised, because an enterprise system should
> either refuse a setting that sends it into limbo or comply
> with it even at the expense of a terribly slow performace.

I can certainly sympathize with that opinion, but it may not
be trivial to implement. Say that you have a server with
2 TB of RAM, and most of that is in use. Say now that for
some reason you set "max server memory" to 10 GB. Do you
think the server will stay up? It will certainly grind to a
standstill. (Although in the end it may not be entirely
unresponsive, just unbearably slow.)

But I am not sure how the engine should be able to figure that out.

On the other hand, it should not permit ridiculously low values
where it knows that it cannot even start.

> Becoming unresponsive and unable to start is bad manners,
> especially because setting server memory requires that the
> instance be running! The documentation says the minimum
> value is 128 Mb.

Indeed, and this is also the minimum value listed in
sys.configurations. And 128 MB was a workable value for SQL 2005
and maybe also for SQL 2008. But not for SQL 2019.

I have raised this issue with Microsoft, telling them that
they need to raise the minimum value, and I have a feedback
item here:
https://feedback.azure.com/d365community/idea/3b7f1124-6225-ec11-b6e6-000d3a4f0da0

Anton Shepelev

Jul 5, 2022, 5:45:52 PM
Erland Sommarskog:

> Even 350 MB is too low in my opinion. I have not tested
> vigorously, but my gut feeling is that you need at least
> 400 MB.

Some instances work with 192 MB, whereas others suffocate at
384. You may be correct: I have not seen this happen to an
instance with more than 400 MB.

> Had you been flipping between, let's say, 2 GB and 8 GB,
> I could have had some sympathy for you. Now I can only
> say: don't do that.

I can hard-code a different threshold into my memory
balancer.

> I can certainly sympathize with that opinion, but it may
> not be trivial to implement. Say that you have a server
> with 2TB of RAM, and most of that is in use. Say now that
> for some reason you set "Max server memory" to 10 GB. Do
> you think the server will stand up? It will certainly
> grind to a standstill. (Although in the end it may not be
> entirely unresponsive, just unbearlingly slow.)

Yes, I expect it to go into turtle mode and thrash the HDD
and swap file, but not to die. Another option is to
decrease memory usage to, say, 64 GB and stop there,
raising an event about the impossible memory constraint.

> And 128 MB was a workable value for SQL 2005 and maybe
> also for SQL 2008. But not for SQL 2019.

Growing heavier, aren't they?

> I have raised this issue with Microsoft, telling them
> that they need to raise the minimum value, and I have a
> feedback item here:
> https://feedback.azure.com/d365community/idea/3b7f1124-6225-ec11-b6e6-000d3a4f0da0

Let us hope they pay attention.