/var/log/mysql/error.log - repeating lines "Plugin mysql_native_password reported: ''mysql_native_password' is deprecated"


Jim Adamson

Jan 8, 2024, 7:00:14 AM
to AtoM Users
Hi all,

I'm investigating an intermittent problem reported by a colleague: records are sometimes not editable and occasionally return a 500 Internal Server Error. While looking into it I noticed that /var/log/mysql/error.log had grown to over 1 GB and was full of lines like:

2024-01-05T17:01:25.971507Z 673 [Warning] [MY-013360] [Server] Plugin mysql_native_password reported: ''mysql_native_password' is deprecated and will be removed in a future release. Please use caching_sha2_password instead'
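
As an aside, once the underlying cause is fixed, an error log this size can be rotated without restarting MySQL. Something along these lines should work, though I haven't run it on our setup, so treat it as a sketch:

sudo mv /var/log/mysql/error.log /var/log/mysql/error.log.old
sudo mysql -e "FLUSH ERROR LOGS"   # mysqld reopens the log and starts a fresh file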

We are using the following version of MySQL with AtoM 2.7.3:
mysql-server/unknown,now 8.0.35-1ubuntu20.04 amd64

Googling suggested this might be caused by something connecting to mysqlx, which listens on port 33060:
lsof -i -P -n | grep 3306
mysqld    3911           mysql   20u  IPv6  82053      0t0  TCP *:33060 (LISTEN)
mysqld    3911           mysql   31u  IPv4  82055      0t0  TCP 127.0.0.1:3306 (LISTEN)

I successfully disabled mysqlx by adding mysqlx = off to my.cnf:
/etc/mysql/my.cnf
[mysqld]
mysqlx = off
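
For the record, verifying the change after a MySQL restart is roughly the following (a sketch rather than a verbatim transcript of what I ran):

sudo systemctl restart mysql
sudo lsof -i -P -n | grep 33060   # should return nothing once the X Plugin is off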


However, tail -f-ing the error.log showed that the repeating lines were still coming. After further testing, it looks like simply browsing the site in an ordinary, human way triggers them.

Has anyone else noticed this, and found a way to suppress those lines? I'm surprised it's coming up at all; our server is configured with the default authentication plugin set to caching_sha2_password anyway (as is applicable to AtoM 2.7.x).
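
For anyone wanting to double-check their own server, the default can be confirmed with something like the following (standard MySQL 8.0 variable, but treat the exact invocation as a sketch):

sudo mysql -e "SHOW VARIABLES LIKE 'default_authentication_plugin';"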

Thanks, Jim

Jim Adamson

Jan 8, 2024, 10:26:58 AM
to ica-ato...@googlegroups.com
Hi all,

I've found a way to suppress the warnings, though it's not specific to the deprecation warning that was filling up the log file:

/etc/mysql/my.cnf
[mysqld]
log_error_verbosity=1
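
For what it's worth, verbosity level 1 logs errors only, so this suppresses all warnings and notes, not just the deprecation message. I believe it can also be changed at runtime without a restart, with something like:

sudo mysql -e "SET GLOBAL log_error_verbosity = 1;"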

Thanks, Jim


--
Jim Adamson
Systems Administrator/Developer
Facilities Management Systems
IT Services
LFA/023 | Harry Fairhurst building | University of York | Heslington | York | YO10 5DD

José Raddaoui

Jan 11, 2024, 9:46:47 AM
to AtoM Users
Hi Jim,

We needed to use that legacy authentication method for MySQL in AtoM 2.6.x to be able to make everything work with PHP 7.2.

https://accesstomemory.org/en/docs/2.6/admin-manual/installation/linux/ubuntu-bionic/#mysql

However, that is no longer needed in 2.7.x if you are using PHP 7.4. But if you used the legacy authentication method when setting up MySQL or when creating the AtoM user, it may still be in place. If that's the case, you should be able to change the authentication method with the following statement:

ALTER USER 'username'@'localhost' IDENTIFIED WITH caching_sha2_password BY 'password';
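
To see which accounts are still on the old plugin before and after the change, something like this should work (substituting your actual AtoM database user and password in the statement above):

SELECT user, host, plugin FROM mysql.user WHERE plugin = 'mysql_native_password';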

Best,
Radda.

Jim Adamson

Jan 15, 2024, 9:12:21 AM
to ica-ato...@googlegroups.com
Hi Radda,

Thanks for the explanation and fix - this makes sense now. I think what happened in our case was that, although mysql (mysql.cnf) was configured with the default authentication plugin set to caching_sha2_password, the user authentication plugin was set to mysql_native_password (select user, plugin from mysql.user;).

I think the reason for this is that we tend to follow the upgrade instructions (as opposed to the new installation instructions), so the atom user comes across in the mysqldump file from the previous version of AtoM (as opposed to re-creating the user). Does that make sense?

I guess it might be worth incorporating your ALTER USER statement into the upgrade instructions for 2.7 and 2.8? The huge growth of the MySQL error.log as a direct result of this would certainly justify it.

Thanks, Jim

sally-an...@york.ac.uk

Jan 17, 2024, 4:18:57 AM
to AtoM Users
Hi all,

I just wanted to update this thread to say that we've been experiencing this same problem frequently over the past few days. Jim Adamson has followed Dan Gillean's instructions each time and this has temporarily resolved the issue, but it hasn't proven to be a permanent fix yet. He has now increased the server memory, as restarting the atom-worker service consumes a lot of memory, and we are waiting to see if this makes a difference.

Best,

Sally

Dan Gillean

Jan 17, 2024, 8:24:51 AM
to ica-ato...@googlegroups.com
Hi Sally; Jim, 

I have been waiting for Radda to be available to follow up on the original conversation in this thread. However, to pick up on Sally's latest post: 

Those linked instructions are very general, and are my blanket suggestion when the actual contents / cause of the 500 error message are unknown. Jim: what are you seeing in the logs? As in, is this still part of the MySQL authentication issue, or is there some other issue occurring? Is it the job scheduler failing? If yes, then: 

Cheers, 

Dan Gillean, MAS, MLIS
AtoM Program Manager
Artefactual Systems, Inc.
604-527-2056
@accesstomemory
he / him


Jim Adamson

Jan 17, 2024, 8:43:57 AM
to AtoM Users
Hi Dan,

I suspect the MySQL stuff is unrelated; it was just that I stumbled across that when trying to investigate the problem Sally reports.

I've tied the problem that Sally reports to specific Nginx error log lines, e.g.:

2024/01/15 16:16:25 [error] 281727#281727: *471821 FastCGI sent in stderr: "PHP message: No Gearman worker available that can handle the job arXmlExportSingleFileJob" while reading response header from upstream, client: 144.32.224.129, server: borthcat.york.ac.uk, request: "POST /index.php/ds-6-7-1-7/edit HTTP/2.0", upstream: "fastcgi://unix:/run/php7.4-fpm.atom.sock:", host: "borthcat.york.ac.uk", referrer: "https://borthcat.york.ac.uk/index.php/ds-6-7-1-7/edit"
2024/01/15 16:17:38 [error] 281727#281727: *471821 FastCGI sent in stderr: "PHP message: No Gearman worker available that can handle the job arXmlExportSingleFileJob" while reading response header from upstream, client: 144.32.224.129, server: borthcat.york.ac.uk, request: "POST /index.php/ds-6-7-1-7/edit HTTP/2.0", upstream: "fastcgi://unix:/run/php7.4-fpm.atom.sock:", host: "borthcat.york.ac.uk", referrer: "https://borthcat.york.ac.uk/index.php/ds-6-7-1-7/edit"
2024/01/15 16:18:18 [error] 281727#281727: *471821 FastCGI sent in stderr: "PHP message: No Gearman worker available that can handle the job arUpdatePublicationStatusJob" while reading response header from upstream, client: 144.32.224.129, server: borthcat.york.ac.uk, request: "POST /index.php/ds-6-7-1/informationobject/updatePublicationStatus HTTP/2.0", upstream: "fastcgi://unix:/run/php7.4-fpm.atom.sock:", host: "borthcat.york.ac.uk", referrer: "https://borthcat.york.ac.uk/index.php/ds-6-7-1/informationobject/updatePublicationStatus"
2024/01/15 16:23:40 [error] 281727#281727: *471821 FastCGI sent in stderr: "PHP message: No Gearman worker available that can handle the job arXmlExportSingleFileJob" while reading response header from upstream, client: 144.32.224.129, server: borthcat.york.ac.uk, request: "POST /index.php/ds-7-1-1-1/edit HTTP/2.0", upstream: "fastcgi://unix:/run/php7.4-fpm.atom.sock:", host: "borthcat.york.ac.uk", referrer: "https://borthcat.york.ac.uk/index.php/ds-7-1-1-1/edit"
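
Those errors suggest that no worker was registered with the Gearman job server at the time. If gearman-tools is installed, I believe that can be confirmed with something like:

gearadmin --status   # lists each registered function with queued jobs and available workers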


Working through the instructions you posted in the other thread Sally mentions, I noticed that systemctl status atom-worker returned "Failed to start AtoM worker" (see below for the full output), so a systemctl reset-failed atom-worker followed by systemctl start atom-worker was required. I also noticed that almost immediately after restarting the worker service, it consumed a lot of memory (~2 GB on top of the 4-5 GB already in use, IIRC). We're about to bump the virtual machine's RAM from 8 to 10 GB, so I'm hoping we'll see fewer or zero 500 Internal Server Errors going forward.

* atom-worker.service - AtoM worker
     Loaded: loaded (/lib/systemd/system/atom-worker.service; enabled; vendor preset: enabled)
     Active: failed (Result: signal) since Tue 2024-01-16 14:26:58 GMT; 23h ago
    Process: 779389 ExecStart=/usr/bin/php7.4 -d memory_limit=-1 -d error_reporting=E_ALL symfony jobs:worker (code=killed, signal=KILL)
   Main PID: 779389 (code=killed, signal=KILL)

Jan 16 14:26:28 redacted_hostname systemd[1]: atom-worker.service: Main process exited, code=killed, status=9/KILL
Jan 16 14:26:28 redacted_hostname systemd[1]: atom-worker.service: Failed with result 'signal'.
Jan 16 14:26:58 redacted_hostname systemd[1]: atom-worker.service: Scheduled restart job, restart counter is at 3.
Jan 16 14:26:58 redacted_hostname systemd[1]: Stopped AtoM worker.
Jan 16 14:26:58 redacted_hostname systemd[1]: atom-worker.service: Start request repeated too quickly.
Jan 16 14:26:58 redacted_hostname systemd[1]: atom-worker.service: Failed with result 'signal'.
Jan 16 14:26:58 redacted_hostname systemd[1]: Failed to start AtoM worker.
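
For completeness, the recovery boiled down to roughly the following (the sudo and status check are my additions, so treat this as a sketch):

sudo systemctl reset-failed atom-worker   # clear the "start request repeated too quickly" state
sudo systemctl start atom-worker
sudo systemctl status atom-worker         # confirm it shows active (running)

Given the signal=KILL above and the memory consumption, I suspect the kernel OOM killer; if so, something like this should show it:

sudo journalctl -k | grep -i -e "out of memory" -e oom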

Thanks, Jim

José Raddaoui

Jan 18, 2024, 9:10:17 AM
to AtoM Users
Thanks Jim,

User info won't be included in the database dump, but adding a note to the upgrading docs makes sense to me, for upgrades that reuse the same MySQL server and user.


Best,
Radda.

Jim Adamson

Jan 18, 2024, 12:05:21 PM
to ica-ato...@googlegroups.com
Thanks Radda. You are right; neither the atom user nor other users come across in the MySQL dump file.

In our case we are using Puppet with the puppetlabs/mysql module to manage certain aspects of the DB, including creating the atom user. Unfortunately, it appears that the user's authentication plugin isn't manageable through this module (at least not yet). I was hoping it would be, so that I could add a selector to set the authentication plugin according to the AtoM version, but we'll have to stick with the manual approach for now.

Thanks, Jim

Jim Adamson

Jan 25, 2024, 8:59:36 AM
to AtoM Users
I'm pleased to report that since we increased our server's memory from 8 GB to 10 GB, over a week ago, we have seen no further "No Gearman worker available that can handle the job" errors in the Nginx error logs, and the atom-worker service has stayed running.

Thanks, Jim

Dan Gillean

Jan 25, 2024, 9:59:11 AM
to ica-ato...@googlegroups.com
Interesting... 

Thanks for this update, Jim. I will monitor for similar reports - we may need to increase the recommended minimum technical requirements, which currently suggest 7GB of memory. In the meantime, I'm glad to hear that you've solved this issue for now! 

Cheers, 

Dan Gillean, MAS, MLIS
AtoM Program Manager
Artefactual Systems, Inc.
604-527-2056
@accesstomemory
he / him
