RuntimeError: Failed to get "http://localhost:10000/storage_service/scylla_release_version

Uttam Giri

<uttameast@gmail.com>
Jun 30, 2023, 4:01:34 PM
to ScyllaDB users
Hi, has anyone encountered the following error when starting ScyllaDB in Docker?

RuntimeError: Failed to get "http://localhost:10000/storage_service/scylla_release_version" due to the following error: <urlopen error [Errno 99] Cannot assign requested address>
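Errno 99 (EADDRNOTAVAIL, "Cannot assign requested address") on a connect to "localhost" usually means the name resolved to an address that is not actually configured inside the container (for example ::1 when IPv6 is unavailable). A small diagnostic sketch, run inside the container, that lists what "localhost" resolves to (the port number here is just the Scylla REST API port from the error message):

```python
import socket

# Show every address "localhost" resolves to for a TCP connection to
# port 10000 (the Scylla REST API port from the error above). If only
# an address family that the container cannot use comes back, that
# would explain EADDRNOTAVAIL.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "localhost", 10000, proto=socket.IPPROTO_TCP):
    print(socket.AddressFamily(family).name, sockaddr[0])
```

If this prints only `AF_INET6 ::1` but the container has no usable IPv6 loopback, that mismatch would line up with the Errno 99 failure.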

docker-compose.yml
version: '3'
services:
  scylla-node1:
    image: scylladb/scylla:latest
    container_name: scylla-node1
    ports:
      - 9042:9042
      - 9160:9160
      - 7000:7000
      - 7001:7001
    volumes:
      - ./scylla1:/var/lib/scylla
      - ./scylla-config/scylla.yaml:/etc/scylla/scylla.yaml
    restart: always

Environment:
 cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy


Logs:
running: (['/opt/scylladb/scripts/scylla_dev_mode_setup', '--developer-mode', '1'],)
running: (['/opt/scylladb/scripts/scylla_io_setup'],)
2023-06-30 19:58:49,273 CRIT Supervisor is running as root.  Privileges were not dropped because no user is specified in the config file.  If you intend to run as root, you can set user=root in the config file to avoid this message.
2023-06-30 19:58:49,273 INFO Included extra file "/etc/supervisord.conf.d/rsyslog.conf" during parsing
2023-06-30 19:58:49,273 INFO Included extra file "/etc/supervisord.conf.d/scylla-housekeeping.conf" during parsing
2023-06-30 19:58:49,273 INFO Included extra file "/etc/supervisord.conf.d/scylla-jmx.conf" during parsing
2023-06-30 19:58:49,273 INFO Included extra file "/etc/supervisord.conf.d/scylla-node-exporter.conf" during parsing
2023-06-30 19:58:49,273 INFO Included extra file "/etc/supervisord.conf.d/scylla-server.conf" during parsing
2023-06-30 19:58:49,273 INFO Included extra file "/etc/supervisord.conf.d/sshd-server.conf" during parsing
2023-06-30 19:58:49,277 INFO RPC interface 'supervisor' initialized
2023-06-30 19:58:49,277 CRIT Server 'inet_http_server' running without any HTTP authentication checking
2023-06-30 19:58:49,277 INFO supervisord started with pid 26
2023-06-30 19:58:50,279 INFO spawned: 'rsyslog' with pid 27
2023-06-30 19:58:50,281 INFO spawned: 'scylla' with pid 28
2023-06-30 19:58:50,283 INFO spawned: 'scylla-housekeeping' with pid 29
2023-06-30 19:58:50,284 INFO spawned: 'scylla-jmx' with pid 30
2023-06-30 19:58:50,286 INFO spawned: 'scylla-node-exporter' with pid 32
2023-06-30 19:58:50,287 INFO spawned: 'sshd' with pid 36
rsyslogd: pidfile '/run/rsyslogd.pid' and pid 27 already exist.
If you want to run multiple instances of rsyslog, you need to specify
different pid files for them (-i option).
rsyslogd: run failed with error -3000 (see rsyslog.h or try https://www.rsyslog.com/e/3000 to learn what that number means)
2023-06-30 19:58:50,288 INFO exited: rsyslog (exit status 1; not expected)
ts=2023-06-30T19:58:50.296Z caller=node_exporter.go:182 level=info msg="Starting node_exporter" version="(version=1.4.0, branch=HEAD, revision=7da1321761b3b8dfc9e496e1a60e6a476fec6018)"
ts=2023-06-30T19:58:50.296Z caller=node_exporter.go:183 level=info msg="Build context" build_context="(go=go1.19.1, user=root@83d90983e87c, date=20220926-12:32:56)"
ts=2023-06-30T19:58:50.296Z caller=node_exporter.go:185 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
ts=2023-06-30T19:58:50.297Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
ts=2023-06-30T19:58:50.297Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
ts=2023-06-30T19:58:50.297Z caller=diskstats_common.go:100 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
ts=2023-06-30T19:58:50.297Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:108 level=info msg="Enabled collectors"
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=arp
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=bcache
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=bonding
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=btrfs
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=conntrack
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=cpu
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=cpufreq
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=diskstats
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=dmi
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=edac
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=entropy
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=fibrechannel
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=filefd
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=filesystem
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=hwmon
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=infiniband
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=interrupts
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=ipvs
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=loadavg
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=mdadm
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=meminfo
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=netclass
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=netdev
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=netstat
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=nfs
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=nfsd
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=nvme
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=os
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=powersupplyclass
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=pressure
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=rapl
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=schedstat
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=selinux
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=sockstat
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=softnet
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=stat
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=tapestats
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=textfile
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=thermal_zone
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=time
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=timex
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=udp_queues
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=uname
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=vmstat
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=xfs
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:115 level=info collector=zfs
ts=2023-06-30T19:58:50.298Z caller=node_exporter.go:199 level=info msg="Listening on" address=:9100
ts=2023-06-30T19:58:50.298Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false
Scylla version 5.2.3-0.20230608.ea08d409f155 with build-id ec8d1c19fc354f34c19e07e35880e0f40cc7d8cd starting ...
command used: "/usr/bin/scylla --log-to-syslog 0 --log-to-stdout 1 --default-log-level info --network-stack posix --developer-mode=1 --overprovisioned --listen-address 172.19.0.2 --rpc-address 172.19.0.2 --seed-provider-parameters seeds=172.19.0.2 --alternator-address 172.19.0.2 --blocked-reactor-notify-ms 999999999"
parsed command line options: [log-to-syslog, (positional) 0, log-to-stdout, (positional) 1, default-log-level, (positional) info, network-stack, (positional) posix, developer-mode: 1, overprovisioned, listen-address: 172.19.0.2, rpc-address: 172.19.0.2, seed-provider-parameters: seeds=172.19.0.2, alternator-address: 172.19.0.2, blocked-reactor-notify-ms, (positional) 999999999]
Connecting to http://localhost:10000
Starting the JMX server
WARN  2023-06-30 19:58:50,594 seastar - Requested AIO slots too large, please increase request capacity in /proc/sys/fs/aio-max-nr. available:65536 requested:816416
WARN  2023-06-30 19:58:50,594 seastar - max-networking-io-control-blocks adjusted from 50000 to 3070, since AIO slots are unavailable
INFO  2023-06-30 19:58:50,594 seastar - Reactor backend: linux-aio
WARN  2023-06-30 19:58:50,656 [shard  0] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
INFO  2023-06-30 19:58:50,665 [shard  0] seastar - Created fair group io-queue-0, capacity rate 2147483:2147483, limit 12582912, rate 16777216 (factor 1), threshold 2000
INFO  2023-06-30 19:58:50,665 [shard  0] seastar - IO queue uses 0.75ms latency goal for device 0
INFO  2023-06-30 19:58:50,665 [shard  0] seastar - Created io group dev(0), length limit 4194304:4194304, rate 2147483647:2147483647
INFO  2023-06-30 19:58:50,665 [shard  0] seastar - Created io queue dev(0) capacities: 512:2000:2000 1024:3000:3000 2048:5000:5000 4096:9000:9000 8192:17000:17000 16384:33000:33000 32768:65000:65000 65536:129000:129000 131072:257000:257000
WARN  2023-06-30 19:58:50,768 [shard  5] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,768 [shard  6] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,769 [shard  7] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,769 [shard  3] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,769 [shard  2] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,772 [shard  4] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,772 [shard  9] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,776 [shard 10] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,777 [shard  8] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,778 [shard 11] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,781 [shard 14] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,781 [shard  1] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,781 [shard 15] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,783 [shard 13] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
WARN  2023-06-30 19:58:50,783 [shard 12] seastar - Creation of perf_event based stall detector failed, falling back to posix timer: std::system_error (error system:1, perf_event_open() failed: Operation not permitted)
INFO  2023-06-30 19:58:50,788 [shard  0] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard  2] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard  1] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard  7] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard  3] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard  9] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard  5] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard 11] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard 12] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard  6] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard 13] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard 15] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard  4] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard  8] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard 10] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,788 [shard 14] seastar - updated: blocked-reactor-notify-ms=1000000
INFO  2023-06-30 19:58:50,792 [shard  0] init - installing SIGHUP handler
JMX is enabled to receive remote connections on port: 7199
INFO  2023-06-30 19:58:50,996 [shard  0] init - Scylla version 5.2.3-0.20230608.ea08d409f155 with build-id ec8d1c19fc354f34c19e07e35880e0f40cc7d8cd starting ...

WARN  2023-06-30 19:58:50,997 [shard  0] init - I/O Scheduler is not properly configured! This is a non-supported setup, and performance is expected to be unpredictably bad.
 Reason found: none of --max-io-requests, --io-properties and --io-properties-file are set.
To properly configure the I/O Scheduler, run the scylla_io_setup utility shipped with Scylla.

INFO  2023-06-30 19:58:50,998 [shard  0] init - starting prometheus API server
INFO  2023-06-30 19:58:50,998 [shard  0] init - creating snitch
2023-06-30 19:58:52,001 INFO spawned: 'rsyslog' with pid 107
2023-06-30 19:58:52,001 INFO success: scylla entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-06-30 19:58:52,001 INFO success: scylla-housekeeping entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-06-30 19:58:52,001 INFO success: scylla-jmx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-06-30 19:58:52,001 INFO success: scylla-node-exporter entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-06-30 19:58:52,001 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted.
rsyslogd: activation of module imklog failed [v8.2112.0 try https://www.rsyslog.com/e/2145 ]
2023-06-30 19:58:53,007 INFO success: rsyslog entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Traceback (most recent call last):
  File "/opt/scylladb/scripts/libexec/scylla-housekeeping", line 196, in <module>
    args.func(args)
  File "/opt/scylladb/scripts/libexec/scylla-housekeeping", line 122, in check_version
    current_version = sanitize_version(get_api('/storage_service/scylla_release_version'))
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/scylladb/scripts/libexec/scylla-housekeeping", line 80, in get_api
    return get_json_from_url("http://" + api_address + path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/scylladb/scripts/libexec/scylla-housekeeping", line 75, in get_json_from_url
    raise RuntimeError(f'Failed to get "{path}" due to the following error: {retval}')
RuntimeError: Failed to get "http://localhost:10000/storage_service/scylla_release_version" due to the following error: <urlopen error [Errno 99] Cannot assign requested address>
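For reference, the failing call in scylla-housekeeping boils down to a plain urllib GET against the local REST API; here is a minimal sketch of that request (function name mirrors the script, exact behavior assumed from the traceback, not the script itself), which can help reproduce the error outside the housekeeping service:

```python
import urllib.request
import urllib.error

def get_api(path, api_address="localhost:10000"):
    """Fetch an endpoint from the local Scylla REST API, mimicking the
    request that scylla-housekeeping makes in the traceback above."""
    url = "http://" + api_address + path
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read().decode()
    except urllib.error.URLError as e:
        # Re-raise in the same shape as the housekeeping script's error.
        raise RuntimeError(
            f'Failed to get "{url}" due to the following error: {e.reason}')
```

Running `get_api('/storage_service/scylla_release_version')` inside the container should return the release version string when the API is reachable, and raise the same RuntimeError shown above when it is not.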