curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.
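For reference, the two options that message mentions look like this in practice (the bundle path below is just a common example; adjust it for your system):

    # point curl at an alternate CA bundle (path is an example)
    curl --cacert /etc/ssl/certs/ca-certificates.crt https://example.com/

    # or skip verification entirely (only for testing)
    curl -k https://example.com/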
Hey @clay, is there any chance you can chip in on this? I found very similar issues across the web, but none of the solutions is specific to Local by Flywheel: -error-60-ssl-certificate-unable-to-get-local-issuer-certificate
Long version: I understand this has to do with my server setup. I have followed the guides I found online (downloading the certificates and linking them in the php.ini) without success.
My server is a Linux box on the intranet, with no access from outside. Since it is local, I cannot use Let's Encrypt, so I have created a self-signed certificate for accessing ownCloud. For my private use on the LAN, this is good enough and works fine.
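For reference, a self-signed certificate like that can be generated with something along these lines (the hostname and filenames are just placeholders for whatever the box is called on the LAN):

    # generate a self-signed cert and key, valid for one year (hostname is a placeholder)
    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
        -keyout owncloud.key -out owncloud.crt -subj "/CN=owncloud.lan"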
Running the ownCloud updater over an http (insecure) connection did the trick. Updates over the web interface ran without problems. So it must have something to do with my self-signed SSL certificate (which of course has no trusted issuer).
So does that mean that fiddling around with the PHP or web server config would not make any difference, since ownCloud comes with its own certificates? Is there any way to update the certificates via the admin interface, or is the ca-bundle.crt hard-tied to the ownCloud release?
Most of the solutions involved setting the environment variable CURL_CA_BUNDLE to the proper location, or adding cacert=/etc/ssl/certs/ca-certificates.crt to a (newly created) .curlrc file in my home directory. I have tried both, and neither completely solves the issue. curl is finding this location, but it still doesn't work, giving the error:
Does anyone know how to fix this? Is there a way to actually start fresh with all my certs? Or does anyone know how I can figure out where this self-signed certificate is, and then how to remove it?
This got curl working on the command line. To further get curl to work in R (where I first encountered the problem), I also needed cacert=/etc/ssl/certs/ca-certificates.crt in my .curlrc file, as tried before; otherwise it continued to look for /etc/pki/tls/certs/ca-bundle.crt.
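For anyone else hitting this, the two settings mentioned above boil down to something like the following (the bundle path is the Debian/Ubuntu default; adjust for your distro):

    # ~/.curlrc
    cacert=/etc/ssl/certs/ca-certificates.crt

    # or, equivalently, an environment variable (e.g. in ~/.bashrc)
    export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt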
I have a number of health monitors which require a certificate to be presented to the end device. A certificate & key have been imported using the GUI under System > File Management > SSL Certificate List. The "container" name for these has been specified against "Client Certificate" and "Client Key" within the health monitors. This is working OK.
My question is: how can I do a manual check using curl before deploying new health monitors? I.e., if I want to run a check against a new end server to confirm it is replying OK before I actually configure anything on the LTM, how can I make this check using curl? I tried to specify the existing certificate & key as stored in the LTM file structure using the following command:
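For anyone looking for the general shape of such a check, it is roughly the following; the paths and URL are placeholders for wherever the cert and key are available on the machine running curl, not the LTM's internal filestore names:

    # present a client certificate and key during the TLS handshake (paths are placeholders)
    curl -v --cert /tmp/monitor-client.crt --key /tmp/monitor-client.key \
        https://new-end-server.example/health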
This is pretty straightforward. The reason for the --insecure/-k flag is that you are using a self-signed certificate which curl is unable to validate, since it does not know anything about the root CA that was used to sign it.
When you connect with curl to the HTTP(S) port of Elasticsearch, a TLS handshake is initiated. This basically means that Elasticsearch will send the configured TLS certificate to curl. Now curl needs to validate this certificate. It does so by checking whether the certificate was signed by any known and trusted root CA. Since the root CA you are using is self-generated, this will fail unless you make the root CA known to curl with the switch I posted. curl does not send the root CA to Elasticsearch; it uses it to validate the cert it receives from Elasticsearch.
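Concretely, that is what the --cacert switch is for; a minimal example, assuming the default HTTPS port 9200 and that the root CA was exported as root-ca.pem (Search Guard may additionally require HTTP authentication):

    # tell curl which root CA to trust when validating the cert Elasticsearch presents
    curl --cacert root-ca.pem https://localhost:9200/_cluster/health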
That makes sense, thanks! So if I wanted another application to communicate with Elasticsearch with SG enabled, I would need to have the root-ca.pem file available to that application and added to its keystore, since it would be sending HTTP requests and I would not be able to pass the cert in through a flag like with curl?
The error you are seeing is because your node certificate does not contain a valid hostname or IP address. When you omit the --insecure flag, curl validates the certificate against the root CA (this seems to work in your case) and then validates the hostname in the certificate.
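One way to see which hostnames/IP addresses the node certificate actually contains is to dump its Subject Alternative Name entries, assuming the cert is available as a PEM file (node.pem is a placeholder filename):

    # list the SAN entries that the hostname is checked against
    openssl x509 -in node.pem -noout -text | grep -A1 "Subject Alternative Name"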
One article suggested updating all certificates by creating a roots.sst, but it showed adding 440+ certificates. Many of them were expired, so I decided not to follow it. Certmgr showed I have 94 certificates.
If this is related to outdated certificates, then you need to install the root certificate from our issuer, Google Trust Services LLC:
You need to install the GTS Root R1 and GTS CA 1D4 certificates in the Trusted Root Certification Authorities folder in the certificate manager for the machine, not for the user or personal store (unless you run your commands only as your user, not as an administrator or as a system user).
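On Windows this can also be done from an elevated PowerShell prompt with certutil, assuming the certificate file has already been downloaded (the filename is a placeholder):

    # run from an elevated prompt; adds the downloaded cert to the machine's Root store
    certutil -addstore Root "GTS-Root-R1.crt"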
I have control of my client machine but not of the server, so if I can avoid touching the server, it would save trouble. I believe I need a .pem file and a root certificate, since using curl to access the server results in "unable to get local issuer certificate".
If it is not possible, a kind explanation of why would be appreciated, as I am a bit out of my depth here. It seems to me only logical that it should be possible, as the server has provided its certificate already.
IF your curl was built to use OpenSSL -- as you tagged, although that is not the only option for curl; verify with curl -V -- AND the server is correctly serving any/all intermediate cert(s), THEN to trust the server, curl-with-OpenSSL needs a local 'truststore' containing the root cert for the server's cert chain (NOT the server's cert itself) in PEM format. (Some other SSL/TLS implementations will accept non-root anchor(s), but older versions of OpenSSL cannot do so at all, and even 1.0.2 doesn't do so by default.)
METHOD 0: maybe already there. Packages of curl often include a 'bundle' of widely trusted CA roots such as Symantec, GoDaddy, etc., or link to another package that does (typically named something like ca-bundle or trusted-cas). The format(s) of the certs in such a bundle depend on the SSL/TLS library(ies) you use; check curl -V as above. Check if you have such a bundle installed, and if so, whether your curl is using it. If you don't have or can't get a package (including the case where you build curl yourself) AND use OpenSSL, curl upstream has a bundle of CA roots extracted from Mozilla NSS/Firefox into the concatenated-PEM format needed for OpenSSL (and a tool to do this yourself).
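A quick way to check both points (which TLS library your curl was built against, and whether it knows about a bundle):

    # shows the TLS backend (OpenSSL, GnuTLS, Schannel, ...) curl was built with
    curl -V

    # if curl-config is installed, this prints the compiled-in CA bundle path
    curl-config --ca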
The openssl command line need not use the same default truststore location(s) as curl -- and even if it does, until recent versions the s_client command had a bug that kept it from using the default truststore correctly. If you do have a concatenated-PEM bundle, specify it to s_client with -CAfile bundlefile and see if that makes a difference.
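For example (the hostname and bundle path are placeholders; -servername makes sure the SNI matches what curl would send):

    # verify the chain against an explicit bundle, independent of the default truststore
    openssl s_client -connect server.example:443 -servername server.example -CAfile /path/to/bundle.pem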
METHOD 1: get from browser. IF you can connect to the server with a browser like IE/Edge, Firefox, Chrome, etc. (this requires there to be a URL you can enter to which the server will respond with a successful webpage), THEN double-click on the padlock and click on whichever buttons or links lead to displaying the certificate chain (the exact ones vary per browser). Select the topmost cert in the chain and look for an option to export it, preferably in single-cert PEM format (again, the exact method varies per browser). If you can't get single-cert PEM format, comment with what format you did get (give the first line if it looks like -----BEGIN something) and I'll tell you how to convert it.
Take this file (copy it to the curl machine if you created it somewhere else) and either add it to your truststore (this depends on your packaging system; it may be as simple as concatenating it to the existing concatenated-PEM file) or specify it by itself to curl with the option --cacert filename.
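Either route is a one-liner; for example (the filenames and the bundle path are placeholders for whatever your system uses):

    # one-off: trust just this exported root for a single request
    curl --cacert exported-root.pem https://server.example/

    # or append it to the existing concatenated-PEM bundle (back it up first)
    cat exported-root.pem | sudo tee -a /etc/ssl/certs/ca-certificates.crt > /dev/null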
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
OK, now I understand: one thing is the certs I use to connect to my site, and another thing is the certs that Docker uses for itself.
So could the error I am getting with curl (cURL error 35: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure) when I try to open a document with Collabora be related to the certs of my reverse proxy?
Do you only get the certificate issue when accessing https://[your_account].eu.auth0.com, or do you get similar issues when accessing other HTTPS sites? For example, what happens if you access a different, well-known HTTPS site? Can you also share the output of the following command: nslookup [your_account].eu.auth0.com?
I updated the answer to take into consideration the curl command output. Your machine is sending the request to the incorrect IP address; you may have hardcoded this IP address at some point in time and now it has changed.
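One quick place to check for a hardcoded entry is the hosts file (shown for Linux/macOS; on Windows the equivalent file is C:\Windows\System32\drivers\etc\hosts):

    # a stale hosts-file entry would override DNS for the tenant domain
    grep auth0 /etc/hosts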
Yeah, given that I'm getting the same results you are, I can't figure out why I'm seeing an SSL error, and why disabling the certificate check leads to an issue with the downloaded file. It's very strange!
Edit2: This is also a privacy issue. The fact that anyone looking at your traffic can determine what you've downloaded by its size doesn't mean it's okay to give up on transport security. For anyone who has the same concerns, just run sed -i 's/\s--insecure\|\s--no-check-certificate//g' scripts/download.pl. I have not observed any side effects while compiling images with a basic package set.
35fe9a5643