It's odd that everyone's calling it a proxy when the link makes it
clear that it's trying not to be a traditional gopher-web proxy
site.
"The Gaufre client runs client-side in your browser, where it is
sandboxed for security reasons and can not access Gopher's raw
TCP/IP sockets on port 70. The solution: Gopher over HTTP (GoH).
Gaufre makes an HTTP request to a very lightweight Gopher over
HTTP proxy, which relays the request to the intended Gopher server.
The Gopher server's response is returned as raw Gopher data through
an HTTP response to Gaufre. Because a GoH proxy does not need to
parse Gopher content, it is very CPU and RAM efficient compared to
traditional Gopher to HTML proxies."
So it's implemented as a client-side Gopher browser, but being stuck
in a browser it can only make HTTP requests, hence it needs a
proxy that carries the Gopher data over HTTP (but apparently not
converted to HTML, unlike the gopher-web proxy sites).
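To make the distinction concrete, a GoH relay along those lines could be sketched like this in Python, using only the standard library. The URL layout (/host/port/selector) is my own guess for illustration; the actual Gaufre proxy's URL scheme may well differ. The point is that the relay just shuttles bytes: a Gopher request is the selector plus CRLF (per RFC 1436), and the raw response goes straight back in the HTTP body, no parsing.

```python
# Sketch of a minimal Gopher-over-HTTP (GoH) relay. The /host/port/selector
# URL layout is a hypothetical choice, not the real proxy's scheme.
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import unquote

def gopher_request(selector: str) -> bytes:
    # A Gopher request is just the selector followed by CRLF (RFC 1436).
    return selector.encode("utf-8") + b"\r\n"

def fetch_gopher(host: str, port: int, selector: str) -> bytes:
    # Open a raw TCP connection, send the selector, read until EOF.
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(gopher_request(selector))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

class GoHHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical URL layout: /<host>/<port>/<selector...>
        parts = unquote(self.path).lstrip("/").split("/", 2)
        host = parts[0]
        port = int(parts[1]) if len(parts) > 1 and parts[1] else 70
        selector = "/" + parts[2] if len(parts) > 2 else ""
        body = fetch_gopher(host, port, selector)
        # Relay the raw Gopher bytes unmodified; no HTML conversion,
        # hence no content parsing and very little CPU/RAM use.
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the relay: HTTPServer(("", 8070), GoHHandler).serve_forever()
```

The client-side browser would then fetch the result with an ordinary XHR/fetch call and do all the menu parsing itself, which is what keeps the proxy so cheap compared to a Gopher-to-HTML converter.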
I haven't tried it myself, mind (I've got a regular Gopher browser
installed, which is vastly more efficient), so I'm assuming that
the article isn't obviously misleading.
> I hope it's set up to prevent search engine indexing.
Nothing at
https://gopher.commons.host/robots.txt
But it would only be a problem if today's web crawlers can run
JavaScript, which the browser code needs in order to execute. That's
more of a barrier than regular proxies present, but they probably can
do it at this point, because I sure see plenty of search results that
take me nowhere in web browsers without JavaScript support.
--
__ __
#_ < |\| |< _#