Call Of Duty Mw3 Error Could Not Find Zone


Aida Mazyck

Jul 21, 2024, 2:31:06 PM
to asinatna

I'm running dockerized traefik 2.3.2 on an Ubuntu 20.04 host. I'm just trying to set up a basic traefik container and the proverbial whoami container. My problem arises when trying to add SSL Let's Encrypt certs, using Cloudflare as the DNS provider to perform the DNS challenge.

time="2020-11-09T01:16:59-06:00" level=error msg="Unable to obtain ACME certificate for domains \"whoami.xxxxxxx.com\" : unable to generate a certificate for the domains [whoami.xxxxxxx.com]: error: one or more domains had a problem:\n[whoami.xxxxxxx.com] [whoami.xxxxxxx.com] acme: error presenting token: cloudflare: failed to find zone com.: Zone could not be found\n" providerName=le.acme

Although a solution was not explicitly given, the OP felt it was a problem with name resolution.
I'm not sure if this is the problem in my case. I'm using pfSense in front of my servers with a DNS resolver. From within the traefik container I can perform an nslookup of cloudflare.com and it resolves.

pfSense usually intercepts all outgoing port-53 requests and forwards them over port 853 (DNS over TLS).
I also usually have a DNS host override for the domain names in question that resolves to the local IP address of the Docker host.

I thought the entire purpose of the DNS challenge was just to prove ownership of the domain. The domain clearly exists on Cloudflare, so I don't understand what exactly is failing to resolve here.

Anyway -- I've tested all three scenarios (pfSense with split DNS / DNS host override; no pfSense DNS host override but Cloudflare proxy; no pfSense DNS host override and no Cloudflare proxy), and the result is the same when trying to start the container:

I really had to ask around on this one, and I guess my big question at the end of the day is why traefik queries SOA records rather than querying the name servers directly for the NS records and the subsequent TXT records.
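For anyone wondering what that SOA behavior looks like: lego (the ACME library traefik uses) finds the authoritative zone by stripping labels off the record name until an SOA query answers. Here's an illustrative sketch of that walk, not traefik's actual code; `soa_lookup` stands in for a real DNS query so the logic can be shown offline.

```python
# Sketch (assumption-labeled) of the SOA walk used to find a record's zone:
# strip labels from the left until a name answers an SOA query.

def find_zone(fqdn, soa_lookup):
    """Return the first parent name (including fqdn itself) that has an SOA record."""
    labels = fqdn.rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:]) + "."
        if soa_lookup(candidate):
            return candidate
    return None

# With a healthy resolver, the domain's own zone answers first:
healthy = {"xxxxxxx.com.", "com."}
print(find_zone("whoami.xxxxxxx.com.", lambda name: name in healthy))  # xxxxxxx.com.

# If interception mangles the intermediate answers, the walk falls through to
# "com." -- which then gets handed to the Cloudflare API as the zone to update,
# producing exactly the "failed to find zone com." error in the log above.
broken = {"com."}
print(find_zone("whoami.xxxxxxx.com.", lambda name: name in broken))  # com.
```

So a resolver that swallows or rewrites the intermediate SOA answers (as a port-53 redirect can) makes the client pick the wrong zone before the Cloudflare API is ever consulted.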

In doing this split DNS, I'm using the pfSense Unbound resolver and its Host Override functionality. Unfortunately, with a Host Override in place, any subsequent query such as an nslookup performed from a client on the LAN will always resolve to the LAN IP address, EVEN IF A SPECIFIC RESOLVER IS SPECIFIED (i.e. nslookup domain.com 1.1.1.1). With a Host Override, the specified resolver is ignored.

Surely there is a recommended workaround for this case, since pfSense isn't all that esoteric and neither is running split DNS. Other ACME clients I've used in the past, such as acme.sh and certbot, don't seem to have this issue under a Host Override setup, so I suspect they must be querying Cloudflare differently. I suppose I could continue to use acme.sh or certbot for certificate management, but that diminishes some of the advantages of using traefik.

Thanks for the heads up on this issue. It was definitely a pfSense problem: my pfSense was intercepting all requests on port 53 and forwarding them back to its own Unbound resolver. It basically would not let any outbound packets pass on port 53, so the external resolver was never reached. It took me a long time to find the root cause, but you were pretty much correct in pointing me in the right direction on this one.

I have a new computer. I installed R and RStudio, and I am now struggling to run some of my existing code. For example, R and/or the (excellent) clock package does not like my time zone, which is set to 'UTC'. See reprex below.

I tried to set the time zone manually in my .Renviron file, (TZ="America/New_York") but that did not help. I also looked at the output of the set command from my Command Prompt, but I did not see a system environment variable for time zone either.

P.P.S. I have another clue... my win-library folder contains two subfolders, one named 3.4 and one named 3.5. They were created when I copied files/folders to my current computer from an old one, so I think that path is not valid. There is no folder named 4.1.

That character is likely the issue. In the short term you could move the R package library to a location with a simpler path like you suggest and teach .libPaths() about it. That would probably fix it. In the longer term I'll hopefully be able to fix this in tzdb.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.

If you have a query related to it or one of the replies, start a new topic and refer back with a link.

Hi folks - I previously had Let's Encrypt working but made the mistake of updating about thirty packages at once, so now things are broken and I'm not sure what to roll back. I only figured this out because of a very helpful "your cert is expiring" reminder email from LE.

My base domain (66c.dev) is hosted on Google Domains. The _acme-challenge subdomain is CNAMEd to _acme-challenge.acme.66c.dev; the entire acme.66c.dev subdomain is managed by Google Cloud DNS (and this is where certbot used to add/remove challenge records).

I distinctly remember that I needed to patch a line to make this work last time, which I've done per this thread: "DNS plugins don't work if _acme-challenge is a separate zone" (issue #7701, certbot/certbot on GitHub).

Why/where did it happen?
Beats me!
I'd have to guess that something within Home Assistant has that typo.
I would try uninstalling the LE portion and then reinstalling it [if possible].
Otherwise, try checking their support channels for anything related.

I'm using some very sophisticated instrumentation technology (lots of print statements), and it looks like the Google API is returning 0 managed zones in the response for _acme-challenge.66c.dev. Here's the GET (from certbot):
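The zero-zone response makes sense once you see how the plugin picks a zone. A simplified, assumption-labeled sketch of that lookup follows; the real plugin queries the Cloud DNS managedZones.list API for each candidate name, and the zone inventory here mirrors this thread's setup, where only acme.66c.dev lives in Cloud DNS.

```python
# Simplified sketch of certbot's dns_google managed-zone selection (the real
# code calls the Cloud DNS managedZones.list API for each parent-domain guess).

CLOUD_DNS_ZONES = {"acme.66c.dev."}  # dnsName values Cloud DNS actually manages

def find_managed_zone(record_name, zones=CLOUD_DNS_ZONES):
    """Walk parent domains of record_name; return the first managed zone found."""
    labels = record_name.rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:]) + "."
        if candidate in zones:
            return candidate
    return None

# Certbot derives _acme-challenge.66c.dev, whose parent guesses (66c.dev., dev.)
# never match -- hence the API's "0 managed zones" response:
print(find_managed_zone("_acme-challenge.66c.dev"))       # None
# The CNAME target, by contrast, sits inside the delegated zone:
print(find_managed_zone("_acme-challenge.acme.66c.dev"))  # acme.66c.dev.
```

In other words, the plugin never follows the CNAME; it only walks the literal record name, and none of those parents is a zone that Cloud DNS manages.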

Okay - working again! Nothing wrong with Let's Encrypt or certbot, I'd just forgotten the extent of the modifications you need to make. Leaving a little summary here for myself (and hopefully others).

My setup is: domain name registered with Google Domains; zone managed with Google Cloud DNS. You need this split because Google Domains doesn't support programmatic zone editing (required for the ongoing proof of ownership in ACME verification) and Cloud DNS doesn't support domain purchase/renewal. But the two can work together! This connects with the Home Assistant version of Let's Encrypt.

Home Assistant just needs the Let's Encrypt add-on installed. I'm targeting the domain *.66c.dev with privkey.pem / fullchain.pem in their standard locations, challenge type DNS, config provider dns-google, and google_creds google.json.

Let's Encrypt needs two modifications, which is what I'd forgotten. Both of them are in dns_google.py; there's probably a better way of finding it (Docker shuttles it around the filesystem), but I just use sudo find / -name dns_google.py.

Second, you need to change the Authenticator class (the first thing in the file). By default, it's going to try to add/delete the validation TXT records against _acme-challenge.66c.dev, which the Cloud DNS API won't accept, because that's not within its acme.66c.dev purview. You could probably be more elegant about this, but I just hardcode the correct values over validation_name in _perform and _cleanup. E.g., mine now looks like this:
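(The original snippet didn't survive the forum software, so here is a reconstructed sketch rather than the verbatim patch: the `_perform`/`_cleanup` signatures follow certbot's DNSAuthenticator interface, and the hardcoded record name is specific to the 66c.dev delegation described above.)

```python
# Sketch only -- not the verbatim patch from the thread. _perform/_cleanup match
# certbot's DNSAuthenticator interface; the hardcoded name is setup-specific.

DELEGATED_NAME = "_acme-challenge.acme.66c.dev"  # record inside the Cloud DNS zone

class Authenticator:  # stand-in for the class in dns_google.py
    ttl = 60

    def _get_google_client(self):
        raise NotImplementedError  # the real plugin builds a Cloud DNS API client

    def _perform(self, domain, validation_name, validation):
        # Certbot would pass _acme-challenge.66c.dev here, which the Cloud DNS
        # API rejects because only acme.66c.dev is within its purview.
        validation_name = DELEGATED_NAME
        self._get_google_client().add_txt_record(
            domain, validation_name, validation, self.ttl)

    def _cleanup(self, domain, validation_name, validation):
        validation_name = DELEGATED_NAME
        self._get_google_client().del_txt_record(
            domain, validation_name, validation, self.ttl)
```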

This works, including automatic self renewals -- but will presumably break whenever you update LE. I'm tagging @patrakov just in case he has some ideas of how to do this more elegantly -- I think he has been through a similar journey with another DNS provider. But for now, the hacky approach works for me and ticks all my boxes.

@samuelalexmclean not really, my problem was different. You have a CNAME pointing into a different zone, i.e. a challenge alias. I don't have that, but I maintain the _acme-challenge record as a separate zone for security.

What I did was to switch to the RFC2136 plugin which worked well enough for our purposes. Nowadays for complex cases such as yours I recommend skipping the existing DNS plugins and using a hook-based approach that could e.g. call Lexicon.
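For reference, the hook-based approach mentioned above can be very small: certbot's --manual-auth-hook runs a script with CERTBOT_DOMAIN and CERTBOT_VALIDATION in its environment, and the script publishes the TXT record however it likes (Lexicon, the provider's API, nsupdate). A minimal sketch follows; the delegation mapping is an illustrative assumption, not part of certbot.

```python
# Minimal sketch of a certbot --manual-auth-hook script. Certbot exports
# CERTBOT_DOMAIN and CERTBOT_VALIDATION before running the hook; the
# delegation map below is an illustrative assumption for a split-zone setup.

import os

def challenge_record(domain, delegations):
    """Return the TXT record name to publish, following any zone delegation."""
    name = f"_acme-challenge.{domain}"
    return delegations.get(name, name)

if __name__ == "__main__":
    delegations = {"_acme-challenge.66c.dev": "_acme-challenge.acme.66c.dev"}
    name = challenge_record(os.environ.get("CERTBOT_DOMAIN", "66c.dev"), delegations)
    token = os.environ.get("CERTBOT_VALIDATION", "<token>")
    # A real hook would now create the TXT record, e.g. by shelling out to
    # Lexicon or calling the provider's API, then wait for DNS propagation.
    print(f"publish TXT {name} -> {token}")
```

Because the hook owns the record-name logic, the delegated-zone rewrite lives in your own script instead of a patched plugin file, so it survives package updates.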

Changing plugins seems like the only fix if there's not enough manpower to review a PR for such a minor issue (which I totally get - this is a pretty esoteric problem). Next time mine breaks I'll go down that path.

If you encounter issues while using AWS Control Tower, you can use the following information to resolve them according to our best practices. If the issues you encounter are outside the scope of the following information, or if they persist after you've tried to resolve them, contact AWS Support.

If you encounter this issue, check your email. You might have been sent a confirmation email that is awaiting a response. Alternatively, we recommend that you wait an hour and then try again. If the issue persists, contact AWS Support.

Failed StackSets: Another possible cause of landing zone launch failure is AWS CloudFormation StackSet failure. AWS Security Token Service (STS) regions must be enabled in the management account for all AWS Regions that AWS Control Tower is governing, so that the provisioning can be successful; otherwise, stack sets will fail to launch.

Sign in to the management account of your organization as the root user. Your IAM user or IAM Identity Center user must have AWS Control Tower administrator permissions and be part of the AWSControlTowerAdmins group. Then try the update again.

Creation of a new account in Account Factory will fail while other AWS Control Tower configuration changes are in progress. For example, while a process is running to add a control to an OU, Account Factory will display an error message if you try to provision an account.

If you try to enroll an existing AWS account and that enrollment fails, then on a second attempt the error message may tell you that the stack set already exists. To continue, you must remove the provisioned product in Account Factory.

If the reason for the first enrollment failure was that you forgot to create the AWSControlTowerExecution role in the account in advance, the error message you'll receive correctly tells you to create the role. However, when you try to create the role, you are likely to receive another error message stating that AWS Control Tower could not create the role. This error occurs because the process has been partially completed.
