Re: Google Earth Pro Behind Proxy


Donnell Simon

Jul 12, 2024, 12:12:08 PM
to jouestashiege

And it works fine. However, I need to pull a KML file from a server on an internal network that I cannot access through this proxy. When I then open the KML file that says 'pull from this server,' Google Earth cannot find the server. If I start Google Earth without setting http_proxy, I can access the server but not the internet (so I can't pull Google Earth data).
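One common way to handle this split is to proxy only internet-bound traffic and exempt the internal host via the no_proxy variable, which Google Earth and most proxy-aware applications honor alongside http_proxy. A minimal sketch, assuming a hypothetical proxy host (proxy.example.com:3128) and a hypothetical internal server name (kml-server.internal):

```shell
# Point internet-bound traffic at the proxy (hypothetical host and port):
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
# Exempt the internal KML server so it is reached directly, bypassing the proxy:
export no_proxy="kml-server.internal,localhost,127.0.0.1"
# Then launch Google Earth from this same shell, e.g.:
#   google-earth-pro
```

Whether this works depends on the application honoring no_proxy, but it avoids having to restart Google Earth with different settings for internal versus external resources.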

For a simpler installation process when proxy configuration is necessary, use one of the self-contained versioned archives to install the gcloud CLI. All previous versions of the gcloud CLI are also available to download from the downloads archive.

After you have the gcloud CLI installed, proxy settings can be configured using gcloud CLI properties by running gcloud config. For more information on what gcloud CLI properties are and how to use them, refer to the properties page.
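The proxy settings live under the proxy/ section of the gcloud properties. A sketch of the relevant commands, with hypothetical host, port, and credential values:

```shell
# Tell gcloud what kind of proxy it is and where it listens (hypothetical values):
gcloud config set proxy/type http
gcloud config set proxy/address proxy.example.com
gcloud config set proxy/port 3128
# Only needed if the proxy requires authentication; note these values are
# written to the gcloud configuration file in plain text:
gcloud config set proxy/username myuser
gcloud config set proxy/password mypassword
```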

Alternatively, to avoid having the proxy credentials recorded in any logs (such as shell history or gcloud CLI logs) or in the gcloud CLI configuration file, you can set the properties using environment variables, as shown in the following sample:
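gcloud properties map to environment variables of the form CLOUDSDK_SECTION_PROPERTY, so the proxy settings above can be expressed without touching the configuration file. A sketch with hypothetical values:

```shell
# Equivalent to the proxy/* properties, but nothing is written to disk:
export CLOUDSDK_PROXY_TYPE=http
export CLOUDSDK_PROXY_ADDRESS=proxy.example.com
export CLOUDSDK_PROXY_PORT=3128
export CLOUDSDK_PROXY_USERNAME=myuser
export CLOUDSDK_PROXY_PASSWORD=mypassword
```

Prefix the export lines with a space (if your shell supports ignoring such commands in history) to keep the credentials out of shell history as well.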

If you are behind a proxy and you receive SSL handshake errors, it's likely that your proxy is a man-in-the-middle proxy which uses a custom CA. You need to set the following property to point to your custom CA file:
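The property in question is core/custom_ca_certs_file. For example, with a hypothetical path to your proxy's CA bundle:

```shell
# Trust the MITM proxy's custom CA for all gcloud TLS connections
# (the .pem path here is a placeholder for your own CA file):
gcloud config set core/custom_ca_certs_file /path/to/proxy-ca.pem
```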


One of the more challenging aspects of load testing happens very early on: getting your scripts working. When you run your scripts, you may observe unexpected error status codes (typically in the 4xx-5xx status code range). Other times, you'll run your script and see it receive the expected responses (usually HTTP 200), yet the "thing" your script is meant to create simply doesn't show up anywhere, like on the database or as a record on the site. You start wondering: why on earth is that?

HTTP conversations can be quite complex, particularly when dealing with website HTTP traffic (as opposed to APIs) and when there is a sequence of requests taking place with dynamic identifiers (in other words, Correlation).

Debugging HTTP requires looking at a lot of places to see what could be going wrong. Doing so can be challenging when relying purely on console.log statements or using --http-debug=full, and so this article presents another way of showing the HTTP traffic: using a web debugging proxy.

In essence, a WDP puts itself into the middle of the conversation between the client (like your browser, k6, or any proxy-aware application running on your machine) and the server. In hacker/technical terms, this is the Man-In-The-Middle (MITM) approach. Assuming traffic is flowing through the proxy (we'll get to that in a bit), the HTTP data will be visible for you to inspect. This goes for encrypted HTTPS traffic as well.

Aside from providing a purpose-built UI for viewing HTTP requests and responses, WDPs also cater for an independent view of the conversation as a result of this MITM approach. This can be handy in situations where the server expects some kind of encoded value but you don't know what encoding might be happening behind-the-scenes, or when you need to compare a script replay with a recording taken with the browser (most WDPs support importing HAR files).

Although technically not a WDP, Wireshark deserves a mention here as another debugging tool that can sometimes be useful; however, the process of decrypting HTTPS traffic with it is more involved. Because Wireshark captures all network traffic, not just HTTP/S, it can also be difficult to find the packets you're looking for without setting up filters.

All of the WDPs listed in the table operate in the same way, with most also providing different ways of proxying (such as reverse proxying). To deal with encrypted HTTPS, they all require the installation of a certificate that enables the proxy to act as a Certificate Authority (the mitmproxy documentation provides a great explanation on how exactly that works).

As a long-time Windows user (gasp!), my favorite has been Fiddler Classic, and you'll see a few screenshots of it in action later in this tutorial. Installation and configuration of the proxies is beyond the scope of this article; instead, we'll focus on how to configure k6 to send its HTTP/S traffic through the proxy once it is up and running.

Behind the scenes, k6 is written in Go, and it uses the built-in Go HTTP client for communication. Unlike some applications that use the host OS proxy settings, Go programs depend on the presence of specific Environment Variables to determine whether to use a proxy and where it is listening. These magical Environment Variables are HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (their lowercase equivalents also work).

The Environment Variables are usually set through a terminal where they will only apply to programs also launched from the same terminal. So, it is as simple as setting the relevant variable (or both, if testing a mixture of HTTP and HTTPS) to the correct host:port, followed by issuing your k6 run command.

As the web proxy runs locally, the host portion of the variable is localhost. The port will depend on which WDP you are using. For example, Fiddler and Charles Proxy use port 8888, whereas mitmproxy uses port 8080.
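Putting that together for a locally running Fiddler (default port 8888), the terminal session looks something like this; script.js stands in for whatever k6 script you are debugging:

```shell
# Route both plain and encrypted traffic through the local WDP
# (use port 8080 instead if you are running mitmproxy):
export HTTP_PROXY="http://localhost:8888"
export HTTPS_PROXY="http://localhost:8888"
# Then run the test from the same terminal, e.g.:
#   k6 run script.js
```

Remember that these variables only affect programs launched from this terminal, so a k6 run started elsewhere will bypass the proxy.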

As soon as your script begins sending HTTP requests, you should see them listed in the WDP. If you don't see anything, yet your script appears to be running as expected, this means that traffic is bypassing the proxy. This can happen if the root certificate was not installed properly (this can happen on locked down systems), or there was a misconfiguration with the Environment Variables.

Each entry will allow you to see the HTTP request parameters as well as those of the corresponding response. Headers, POST data, and response bodies are the most interesting of these parameters, and there will usually be ways of formatting data depending on their Content-Type. Being able to display JSON in its "beautified" form can really help make sense of things, especially when you don't know the inner workings of the target system!

The default k6 output will let you know if you receive HTTP responses with a status in the 4xx-5xx range, as these inherently indicate some kind of problem in most cases. k6 will mark returned status codes in this range as failures unless otherwise specified. WDPs tend to do the same, making it easy to narrow in on problematic requests. For instance, if I fail to correlate a couple of key tokens used on the web form during the POST to checkout, I receive a couple of HTTP 403 Forbidden responses, with the selected one returning a -1 when I'm expecting to see a success message:

As web developers have full control over what kind of status codes to send back to the client, you may find that sometimes an error message will be hidden within what might look like an otherwise "OK" response, i.e. those that typically return HTTP 200 OK. A classic example would be an "item out of stock" message when dealing with an eCommerce site: this is a legitimate scenario, and so sending back an error status code might not be appropriate. Still, if your script isn't finding in-stock items, you may receive actual errors later on when you try to check out an empty cart! It's a good idea to add checks to your script to verify that you're getting the pages you expect, usually by looking for text in the response body, but WDPs can help you catch these issues during debugging as well.

A Web Debugging Proxy can be an invaluable tool when debugging your performance tests. Although a lot of debugging can be done using console.log statements dispersed throughout a script, sometimes it is simply easier and faster to look at HTTP requests and responses in a UI designed to display such content. Seeing what is actually transferred "over the wire" can yield insight into problems that might not otherwise be immediately obvious without sifting through lots of responses. Comparing a script replay with an equivalent recording taken with a browser (or tools such as Postman - yes you can proxy that, too!) can yield valuable insight that will help you put together a working script faster.

I have been trying to teach myself QGIS by using this forum and others to help. Most of my questions can be answered by downloading plugins that are available through 3rd party repositories but for some reason I am not able to connect with these. Anyone have any ideas why this is the case?

I was able to set up proxy settings in QGIS (Settings->Options->Network) so that it could communicate through our proxy server. In my case, the Default proxy type didn't work--I had to use HTTPProxy. Once that was done, I had no problem getting to the plugins on the repo servers.

If you are running QGIS from behind some firewall/proxy setups, you can't get the repositories. I have the same issue when running QGIS behind a restrictive firewall via a proxy at work, but no problem when running QGIS from home.

I had the same issue with not being able to connect to the official repository but found how to fix this. I replaced DHCP with a static IP address and QGIS now connects to the repository. How to here: -to-assign-a-static-ip-address-in-xp-vista-or-windows-7/

If so, and that also didn't help, then I don't know how to help you here. I haven't heard back from Frenck, but I have since found that the nginx configuration interface behind this HA add-on is this:

I haven't been able to figure out if that supports a config export though. As I said, without seeing how your nginx instance is actually configured, this all is leading nowhere sadly. All I can tell you is that it works just as it should on my end with nginx in front, as long as it's configured as shown in the linked examples.

The strange thing is that this happened after upgrading OctoPrint. But I understand everything you wrote; I will use a non-SSL connection, because I need to control OctoPrint from outside my network. Not safe, but I have no other option.
