MySQL 8 And MySQL 5.7 Memory Consumption On Small Devices


Eduviges Gearlds

May 29, 2024, 9:20:58 AM5/29/24
to leyparrepa

I have a server with only 500 MB of RAM and found that MySQL started using a lot of memory as my tables got larger. After playing with a bunch of the settings, what reduced memory usage for me was to convert all my tables to MyISAM. If you don't need the features of InnoDB, converting tables to MyISAM helps quite a bit. You can convert a table like this:
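The conversion statement can be sketched as follows (the table name is a placeholder):

```sql
-- Convert one table's storage engine to MyISAM
ALTER TABLE my_table ENGINE=MyISAM;

-- Verify which engine each table is using
SELECT TABLE_NAME, ENGINE
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = 'my_database';
```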

After this change I found that memory usage decreased by 20%. To get a further reduction, you can convert ALL of your tables to MyISAM and then turn off InnoDB support in MySQL altogether. This reduced my memory usage by 50%.

The following my.ini for MySQL 5.6 causes the "private bytes" to go from 630 MB down to 20 MB and the "working set" from 450 MB down to 21 MB on a Windows 7 64-bit machine. The performance of a MySQL server configured like this is probably going to be very bad, but technically, if you want it to occupy as little memory as possible, then this is the answer.
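The my.ini itself is not reproduced here; a sketch of the kind of settings that produce such a tiny footprint might look like this (values are illustrative assumptions, not the poster's actual file):

```ini
[mysqld]
# Minimal-footprint sketch for MySQL 5.6; tune at your own risk.
default-storage-engine=MyISAM
default-tmp-storage-engine=MyISAM
skip-innodb                 # drop InnoDB and its buffer pool entirely
key_buffer_size=8K          # MyISAM index cache, kept tiny
query_cache_size=0
table_open_cache=32
max_connections=10
thread_cache_size=0
sort_buffer_size=32K
read_buffer_size=8K
read_rnd_buffer_size=8K
performance_schema=off      # instrumentation itself costs memory
```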

I also agree that, based on this data, the default should be raised to 1 GB for MySQL 8. I also wonder whether MySQL 8 was ever tested on a 512 MB system as part of the release process to validate this manual claim?

I wonder if there is any way to know the memory usage of each part of MySQL.
As you can see below, the operating system tells me that MySQL uses 46.5 GB of RAM,
but performance_schema tells me that MySQL uses 37.7 GB of RAM.
That is nearly a 10 GB gap!
Here are several questions:
1. Is it caused by innodb_flush_log_at_trx_commit=2 and sync_binlog=0?
2. If yes, why is this memory not freed even under a light workload?
3. Is it caused by connection threads?
4. If yes, why doesn't memory usage go down when I kill all connections?
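One way to reproduce the 37.7 GB figure above is to sum everything that Performance Schema instrumentation accounts for (FORMAT_BYTES() needs MySQL 8.0.16+; on 5.7, sys.format_bytes() does the same job):

```sql
-- Total memory tracked by Performance Schema instrumentation;
-- the gap between this and the OS-reported RSS is uninstrumented memory.
SELECT FORMAT_BYTES(SUM(CURRENT_NUMBER_OF_BYTES_USED)) AS instrumented_total
  FROM performance_schema.memory_summary_global_by_event_name;
```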

A couple of things here. First, there are some areas which are NOT covered by memory instrumentation in Performance Schema; for example, the MySQL code itself is also kept in memory, but that is not an allocation that can be seen in Performance Schema.

Hi Peter,
Thanks for your reply.
I think the MySQL code itself will not use much RAM, and the same goes for the other areas (if I am wrong, please kindly point it out).
I want to know where the 10 GB gap comes from,
so is there any way to find out?

We have a container running MariaDB, and a few other small containers on the host. The MySQL container was allocated 21 GB of memory (out of 32 GB total), along with a few other parameters, via the following commands in the docker-compose file:
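The compose file isn't included in the post; a hypothetical sketch of that kind of allocation (service name, image tag, and values are assumptions, and the exact syntax for memory limits varies by Compose version):

```yaml
# docker-compose.yml (sketch) - cap the database container at 21G of 32G
services:
  mariadb:
    image: mariadb:10.6
    mem_limit: 21g            # hard memory cap for the container
    command:
      - --innodb-buffer-pool-size=12G   # illustrative value, not from the post
```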

We tried putting the mysqldump commands in a different container, to try to "isolate" this big operation, but that does not seem to change anything; the bulk of the work is still done by the MySQL container, which ultimately crashes when the other container performs the dump.

By default mysqldump tries to dump entire tables at once, meaning it must load the entire table into memory, and when memory is constrained mysqldump may fail. The --quick option switches to dumping by row, which is slightly slower and makes slightly larger dump files, but uses far less memory.
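A hedged example of the flag described above (host, user, and database names are placeholders). Note that in current MySQL versions --quick is already enabled as part of the default --opt option group, so it mainly matters when the defaults have been disabled:

```shell
# Dump row by row instead of buffering whole tables in client memory;
# --single-transaction gives a consistent InnoDB snapshot without locking.
mysqldump --quick --single-transaction \
  -h db-host -u backup_user -p mydatabase > mydatabase.sql
```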

I am not sure how you are scripting your dumps. I am thinking it may help to break up your dumps into smaller batches (sets of tables/databases) to keep memory usage from getting too high. Just some thoughts on this.

If you don't want to give more memory to MySQL, you can try setting up a replication server and doing the mysqldump from there; the problem is that mysqldump is risky when your server is under load. Please also take a look at the performance-related mysqldump options, which may help as well.

I've built a small project that uses an Arduino Mega2560, a Freetronics 2004 shield, an Arduino Ethernet Shield 2, and a Weatherproof Ultrasonic Sensor w/ Separate Probe (RB-Dfr-720) to monitor a 2,600 gallon water tank, determine the water level, and then update a MySQL DB table with the data it gathered. This thing will run 24x7x365 and I need it to be stable. I live off grid and the tank, which serves my small house, is 1,300 feet away, so I don't want to be hiking there all the time to check on things or reset the setup.

I've been working through various issues with this project (LCD conflicting with Ethernet, sensor stability issues (now using NewPing), my learning curve, etc.) but I think I've now got all devices working and am in the code refactoring phase for stability and efficiency.

Here's the deal... I've set the system up in my house and just let it run for hours/days to check on stability. But I get random hangs where the system LCD displays "Reading Sensor" (my display logic) and just stalls there. I assumed it was still stability issues with the sensor, even though NewPing is supposed to address that issue. Anyway, in troubleshooting I added a slew of other LCD display writes to indicate where I am in the code, and have found that the system seems to hang fairly consistently at a new debug display statement I added: "Writing to DB". I'm now looking at the code and am wondering if my program is overflowing memory and creating more or less random failures.

I'd assumed that the Arduino IDE wraps my code in a loader that does some memory management, but obviously that's not the case. So after that long preamble, here's the question: can anyone help me understand if lots of LCD display statements and/or my MySQL cursor create-and-insert logic is blowing through memory?

Here's the MySQL insert logic. I assumed that my delete-cursor statement was clearing memory, but maybe it's not. And I also understand that strings take up a fair amount of SRAM, and maybe because of all my LCD writes I'm blowing that out. Any memory management tips or tutorials you can refer me to would be great. Or, if you see a stupid problem in my DB write code, hit me with your feedback. I have thick skin so will take all responses as helpful. Thanks...
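The original snippet didn't survive in this copy of the thread; a sketch of the typical create/execute/delete pattern with the MySQL Connector/Arduino library looks like this (connection setup omitted; table, column, and function names are illustrative):

```cpp
// Sketch only: assumes 'conn' is a MySQL_Connection already connected in setup().
#include <MySQL_Connection.h>
#include <MySQL_Cursor.h>

extern MySQL_Connection conn;

void writeLevel(float inches) {
  char num[10];
  dtostrf(inches, 4, 1, num);          // AVR has no printf %f; format manually
  char query[96];
  // Build the INSERT in a stack buffer instead of String to limit SRAM churn
  snprintf(query, sizeof(query),
           "INSERT INTO tank.levels (inches) VALUES (%s)", num);

  MySQL_Cursor *cur = new MySQL_Cursor(&conn);  // allocate request object
  cur->execute(query);                          // run the INSERT
  delete cur;                                   // free it again (no leak)
}
```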

Some of you guys need to both understand how the SQL libraries work AND read the OP's code. It is perfectly normal to create a request object, perform the query, then delete the object. That is EXACTLY what the OP's code snippet does. There is NO memory leak indicated, and after the delete, the heap will look EXACTLY like it did before the new, so there is also NO fragmentation.

When considering the performance of any software, there's a classic trade-off between time and space. In the process of assessing the performance of a MySQL query, we often focus on execution time (or query latency) and use it as the primary metric for query performance. This is a good metric to use, as ultimately, we want to get query results as quickly as possible.

I recently released a blog post about how to identify and profile problematic MySQL queries, with a discussion centered around measuring poor performance in terms of execution time and row reads. However, in this discussion, memory consumption was largely ignored.

Though it may not be needed as often, MySQL also has built-in mechanisms for gaining a deep understanding of both how much memory a query is using and also what that memory is being used for. Let's take a deep dive through this functionality and see how we can perform live monitoring of the memory usage of a MySQL connection.

You should see several hundred results. Each of these represents a different category of memory that can be individually instrumented in MySQL. Some of these categories include a short bit of documentation describing what the memory category represents or is used for. If you'd like to see only memory types that have a non-null documentation value, you can run:
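The query referenced above isn't shown in this copy; against the standard `setup_instruments` table it would look something like this:

```sql
-- Memory instruments that ship with a DOCUMENTATION string (MySQL 8.0+)
SELECT NAME, DOCUMENTATION
  FROM performance_schema.setup_instruments
 WHERE NAME LIKE 'memory/%'
   AND DOCUMENTATION IS NOT NULL;
```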

Notice that there is no specific tracking for memory usage at a per-query level. However, this does not mean we cannot profile the memory usage of a query! To accomplish this, we can monitor the memory usage of whatever connection the query of interest is being executed on. Because of this, we'll focus on the memory_summary_by_thread_by_event_name table, as there is a convenient mapping between a MySQL connection and a thread.

At this point, you should set up two separate connections to your MySQL server on the command line. The first is the one that will execute the query you want to monitor memory usage for. The second will be used for monitoring purposes.
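To connect the two, the monitored connection's id maps to a Performance Schema thread like so (a standard mapping via the `threads` table):

```sql
-- In the monitored connection: note its connection id
SELECT CONNECTION_ID();

-- In the monitoring connection: resolve that id to a thread id
SELECT THREAD_ID
  FROM performance_schema.threads
 WHERE PROCESSLIST_ID = 42;   -- substitute the id from CONNECTION_ID()
```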

Next up, execute some long-running query you'd like to profile the memory usage for. For this example, I'll do a large SELECT from a table that has 100 million rows in it, which should take a while since there is no index on the alias column:
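The two queries aren't preserved in this copy of the post; based on the description (an unindexed `alias` column, sorted descending), they would look roughly like this, with the table name assumed:

```sql
-- Connection 1: the long-running query to profile
SELECT * FROM users ORDER BY alias DESC;

-- Connection 2: sample connection 1's memory usage by category
SELECT EVENT_NAME, CURRENT_NUMBER_OF_BYTES_USED
  FROM performance_schema.memory_summary_by_thread_by_event_name
 WHERE THREAD_ID = 123   -- thread id of connection 1
   AND CURRENT_NUMBER_OF_BYTES_USED > 0
 ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC;
```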

This indicates the amount of memory for each category being used at the exact moment this query was executed. If you run this query several times while the other SELECT alias... query is executing, you may see differences in the results, as memory usage for a query is not necessarily constant over its whole execution. Each execution of this query represents a sample at a moment in time. Thus, if we want to see how the usage changes over time, we'll need to take many samples.

However, the name indicates that it is memory being used for sorting data from a file. This makes sense, as a large part of the expense of this query is sorting the data so that it can be displayed in descending order.

As a next step, we need to be able to sample this memory usage over time. For short queries this will not be as useful, as we'll only be able to execute the sampling query once, or a small number of times, while the profiled query is executing. It will be more useful for longer-running queries, ones that take multiple seconds or minutes. These would be the types of queries we'd want to profile anyway, as they are the ones likely to use a large amount of memory.
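The sampling loop can be sketched in Python. This is a minimal sketch, not the post's actual script: `fetch_rows` is a hypothetical callable you would implement to run the memory_summary_by_thread_by_event_name SELECT over a live connection (e.g. via mysql-connector), returning (event_name, bytes_used) tuples.

```python
import time

def top_categories(rows, top_n=4):
    """Return the top_n (event_name, bytes_used) pairs by bytes used."""
    return sorted(rows, key=lambda r: r[1], reverse=True)[:top_n]

def sample_memory(fetch_rows, interval_s=0.25, samples=20, top_n=4):
    """Poll fetch_rows() repeatedly, keeping the top categories per sample.

    fetch_rows() is expected to return the rows of the per-thread memory
    summary query for the monitored connection's thread.
    """
    history = []
    for _ in range(samples):
        history.append(top_categories(fetch_rows(), top_n))
        time.sleep(interval_s)   # controls how often we ping the server
    return history
```

Tweaking `interval_s` and `top_n` corresponds directly to the adjustments discussed below (ping frequency and categories listed per iteration).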

This could be adjusted in many ways depending on your profiling needs. For example, tweaking the frequency of the ping to the server or changing how many memory categories are listed per iteration. Running this while a query is executing provides results like this:

This is great, but there are a few weaknesses. It would be nice to see more than the top 4 memory usage categories, but increasing that number increases the size of this already-large output dump. It would also be nice to have an easier way to get a picture of the memory usage at a glance via some visualizations. This could be done by having the script dump the results to a CSV or JSON file and then loading them into a visualization tool later. Even better, we could plot the results live, as the data streams in. This provides a more up-to-date view and allows us to observe the memory usage live as it happens, all in one tool.
