Re: [elasticluster] Elasticluster + OpenStack = issue


Riccardo Murri

Sep 3, 2019, 2:19:10 AM
to Shank Mohan, elasticluster
Dear Shank,

> So now, when running `elasticluster start <name>`, it gets past the play and task, then gets held up with "[WARNING]: sftp transfer mechanism failed on [x.x.x.x]. Use ANSIBLE_DEBUG=1 to see detailed information" on all nodes.

My guess would be that the default security group of the OpenStack
cloud you're using does not allow SSH connections to the newly-created
VMs.
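
If that turns out to be the cause, a rule along these lines should open
port 22 (this assumes the `openstack` CLI is set up on your side and
that the group is really called `default`):

    openstack security group rule create --proto tcp --dst-port 22 default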

Can you connect to the "frontend" node using `ssh` from the machine
where you are running `elasticluster start`? (You can see the IP
address with `elasticluster list-nodes`.) If not, posting the output
of `ssh -vv ip.address.of.frontend` here might help (remove sensitive
information before posting!).

Hope this helps,
Riccardo

Shashank Mohan

Sep 3, 2019, 2:26:21 AM
to Riccardo Murri, elasticluster
I had to use `elasticluster ssh slurm` first so I could get the IP, and then use `ssh -l ubuntu <ipaddress>`, which worked. Using root does not, though: it says "Please login as the user "ubuntu" rather than the user "root"."

I am able to SSH to other instances spun up with the default security group as well, since I had modified it earlier.

Shank Mohan

Sep 3, 2019, 2:29:53 AM
to elasticluster
Doing a straight `ssh <IP Address>` fails with "Permission denied (publickey)".

Riccardo Murri

Sep 4, 2019, 5:12:52 AM
to Shank Mohan, elasticluster
Hello Shank,

> Doing a straight ssh <IP Address> fails with Permission denied (publickey)

The command `ssh $ip_address` keeps the user name you have on the
machine you're connecting from, which would explain the "Permission
denied (publickey)" error.
You mentioned in an earlier message that this command works instead
and gets you a shell on the cluster front-end:

ssh ubu...@ip.addr.of.frontend

Likewise, can you confirm that this works?

elasticluster ssh slurm

If both work, then what happens if you give this command?

sftp ubu...@ip.addr.of.frontend

Are you able to copy a local file to the front end (SFTP's `put` command)?
Like this:

sftp> put localfile.txt

Ciao,
R

Shank Mohan

Sep 4, 2019, 7:00:35 PM
to elasticluster
OK, so:

  1. ssh@ipfrontend = working
  2. elasticluster ssh slurm -- I have to export OS_CACERT first (see the note after this list), otherwise it fails with
     "Error: SSL exception connecting to fqdn:5000/v3/auth/tokens: HTTPSConnectionPool(host='fqdn', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))"
  3. sftp ubu...@ip.addr.of.frontend = works
  4. sftp> put localfile.txt = working
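
For reference, the export mentioned in item 2 looks roughly like this; the exact path is site-specific and only an example here:

    export OS_CACERT=/path/to/your/cloud-ca-bundle.pem

`OS_CACERT` is the standard OpenStack client variable pointing at the CA bundle used to verify the Keystone endpoint, which is why the SSL handshake succeeds once it is set.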

Riccardo Murri

Sep 5, 2019, 3:17:25 AM
to Shank Mohan, elasticluster
Hello Shank,

> ssh@ipfrontend = working
> elasticluster ssh slurm -- I have to export OS_CACERT first, otherwise it fails with
> "Error: SSL exception connecting to fqdn:5000/v3/auth/tokens: HTTPSConnectionPool(host='fqdn', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))"
> sftp ubu...@ip.addr.of.frontend = works
> sftp> put localfile.txt = working

Given these data, I cannot explain why it didn't work: if you can SSH
and SFTP into the cluster, Ansible should be able to as well...
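
One generic Ansible workaround you could also try (I am not sure it
applies here, but it is a one-line experiment) is forcing scp instead
of sftp for file transfers, by setting this in the shell before
running `elasticluster`:

    export ANSIBLE_SCP_IF_SSH=yes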

Can you please re-run `elasticluster -vvvv start slurm` and post the
*entire* screen output or send it to me via email? (Watch out and
remove passwords and other sensitive config items.)

Ciao,
R

Shank Mohan

Sep 5, 2019, 11:31:31 PM
to elasticluster
Is there a log file that all of this is output to? The PuTTY session scrollback is limited.

Riccardo Murri

Sep 5, 2019, 11:49:00 PM
to Shank Mohan, elasticluster
> Is there a log file that all of this is output to? The PuTTY session scrollback is limited.

You can save the output to a file with this command (the output will be
saved to the file `screenlog.0`):

screen -L elasticluster -vvvv start slurm
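
Alternatively, if `screen` is not installed on that machine, plain
shell redirection achieves the same:

    elasticluster -vvvv start slurm 2>&1 | tee elasticluster.log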

Ciao,
R