500 globus_xio: ICE negotiation failed


Jamie

Jan 14, 2021, 5:14:40 PM
to Discuss

Attempting to move files from a Windows 10 computer to a Fedora 33 computer. I created an endpoint on each computer and (if I understand correctly) gave myself permissions on the directories of interest (the parent directories, though I'm not sure whether the permissions extend down to their children).

Starting the transfer raises this error:

Thu Jan 14 2021 17:11:51 GMT-0500 (Eastern Standard Time)
Uncategorized Error
Error (session setup)
Endpoint: hlab-fedora
Server: Globus Connect
Command: SITE UPRT  yHau 3F5zOiPB+xy4knL0iGehfo 1,2013266431,172.27.145.132,59622,host 2,1677721855,128.146.189.68,6608,srflx
Message: Fatal FTP response
---
Details: 500 globus_xio: ICE negotiation failed.\r\n

I searched Google and this group without success. Any thoughts? Thanks.

Stephen Rosen

Jan 14, 2021, 5:37:03 PM
to Jamie, Discuss
Hi Jamie,

"ICE negotiation failed" refers to Globus Connect Personal's use of the STUN and ICE protocols to do "NAT hole punching".
These are protocols for traversing a NAT device, typically something like a home router, and establishing peer-to-peer connections between two Globus Connect Personal endpoints.
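For intuition about what STUN actually does here, below is a minimal sketch based on the public RFC 5389 wire format, not on Globus's implementation: a client builds a Binding Request, sends it over UDP to a STUN server, and reads back the public IP and port its NAT assigned from the XOR-MAPPED-ADDRESS attribute. The parsing is simplified (IPv4 only, no error handling):

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value defined by RFC 5389

def build_binding_request():
    """Build a STUN Binding Request: 20-byte header, no attributes."""
    transaction_id = os.urandom(12)
    # type=0x0001 (Binding Request), length=0, magic cookie, transaction ID
    return struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + transaction_id

def parse_xor_mapped_address(resp):
    """Return (ip, port) from the XOR-MAPPED-ADDRESS attribute (IPv4 only)."""
    i = 20  # skip the fixed 20-byte STUN header
    while i + 4 <= len(resp):
        attr_type, attr_len = struct.unpack("!HH", resp[i:i + 4])
        if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS
            # value layout: reserved(1) family(1) x-port(2) x-address(4)
            port = struct.unpack("!H", resp[i + 6:i + 8])[0] ^ (MAGIC_COOKIE >> 16)
            addr = struct.unpack("!I", resp[i + 8:i + 12])[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", addr)), port
        i += 4 + attr_len + (-attr_len % 4)  # attributes are padded to 4 bytes
    return None
```

Each peer learns its NAT-mapped address this way and ICE then tries to pair those addresses up. Behind symmetric NAT, the NAT assigns a different mapping for every destination, so the address learned from the STUN server is useless to the other peer, and negotiation fails.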

Some networks are incompatible with these protocols.
In particular, if both of the endpoints are behind symmetric NAT, ICE cannot be used to establish connections.
If you're getting errors related to this, it strongly suggests that your network topology does not allow ICE to create a connection between the Globus Connect Personal endpoints you're using.


Your endpoints should function correctly when used with any Globus Connect Server, including "Globus Tutorial Endpoint 1" and "Globus Tutorial Endpoint 2".
That's because Globus Connect Personal will simply make outbound connections to Globus Connect Server and doesn't need to try to use ICE.


In order for two Globus Connect Personal Endpoints to connect to one another using ICE, they need to be able to communicate with one another using UDP on ephemeral ports.
You can see a detailed document on the ports which are needed by Globus Connect Personal here:
https://docs.globus.org/how-to/configure-firewall-gcp/

The requirement for Outbound UDP 32768-65535 refers to the use of ephemeral ports.
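If it's the Fedora host's own firewall (rather than the NAT upstream) that's dropping this UDP traffic, opening the range with firewalld might look like the following. This is a generic firewalld sketch, not an official Globus recommendation, and it won't help if the real problem is symmetric NAT:

```shell
# Allow the ephemeral UDP range used for ICE (per the firewall doc above).
# Adjust the zone to match your configuration.
sudo firewall-cmd --zone=public --add-port=32768-65535/udp
# Repeat with --permanent so the rule survives a reboot
sudo firewall-cmd --zone=public --add-port=32768-65535/udp --permanent
```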


If you aren't in control of the network and can't relocate the endpoints, one option is always to put your data through an extra hop using a Globus Connect Server endpoint.
First transfer from the source personal endpoint to the server, then from the server to the destination.
If you do this, wait for the first transfer to complete before submitting the second one, or you won't get a complete data transfer.
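With the globus-cli installed, that two-hop pattern can be scripted so the second hop only starts after the first finishes. The endpoint IDs and paths below are placeholders you'd replace with your own:

```shell
# Hop 1: personal source endpoint -> intermediate server endpoint
TASK_ID="$(globus transfer "$SRC_EP:/data/" "$SERVER_EP:/staging/" \
    --recursive --jmespath 'task_id' --format unix)"

# Block until hop 1 completes, so the staged copy is whole
globus task wait "$TASK_ID"

# Hop 2: server endpoint -> personal destination endpoint
globus transfer "$SERVER_EP:/staging/" "$DST_EP:/data/" --recursive
```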

I hope that explains and helps.
Best,
-Stephen

Chris Taylor

Jan 14, 2021, 5:41:46 PM
to Stephen Rosen, Discuss

To go from one Globus Connect Personal endpoint to another one, don’t you need a paid subscription so you can set up a shared guest collection?

Chris


Stephen Rosen

Jan 14, 2021, 6:00:46 PM
to Chris Taylor, Discuss
Yes, you do need a subscription in order to submit transfers between Globus Connect Personal endpoints.
Setting up a share also requires a subscription, but enables any user with permissions -- even those without subscriptions -- to transfer to your endpoint.
If you don't have a subscription and both of the endpoints are Globus Connect Personal endpoints, not shares, you should not even be able to submit the transfer.

The ICE failure message can appear in any scenario, with or without shares, so long as the two hosts being connected are both running Globus Connect Personal.
That means that even a transfer between two shares can be impacted, if both source and destination are hosted on Globus Connect Personal endpoints.

Best,
-Stephen