We use IIS with ARR to reverse proxy and handle the SSL for Metabase. I don't have an exact guide but it was something along the lines of Setup IIS with URL Rewrite as a reverse proxy for real world apps - Microsoft Community Hub
I have 2 customers' servers running on Windows (not IIS). Both have the certificate in a Java Keystore (JKS), which is then referenced in the service (created using NSSM).
I've used Portecle to add the certificate to the keystore.
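Portecle is a GUI front end; for reference, the same import can be done with the JDK's keytool, along these lines (alias, file names, and password below are placeholders for your environment):

```shell
# Import a certificate (and its chain) into an existing JKS keystore.
# "metabase", certificate.crt, keystore.jks and the password are placeholders.
keytool -importcert \
  -alias metabase \
  -file certificate.crt \
  -keystore keystore.jks \
  -storepass changeit
```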
Hi @notrom,
I am also trying to enable SSL on a Windows server running IIS with ARR, and I would greatly appreciate it if you could give some more detail on the configuration. I am struggling to get this configured.
Other than that it's mostly in that linked article. Bind the IIS site to port 443; all requests trigger a URL Rewrite rule that pushes traffic to the local Metabase instance running on port 3000. Use NSSM to run Metabase as a service.
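As a sketch, the rewrite rule in the site's web.config would look roughly like this (rule name and port are assumptions; it needs the URL Rewrite and ARR modules installed, with proxying enabled at the server level):

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Forward everything hitting the HTTPS site to Metabase on localhost:3000 -->
        <rule name="MetabaseReverseProxy" stopProcessing="true">
          <match url="(.*)" />
          <action type="Rewrite" url="http://localhost:3000/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```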
If you mean blocking direct access to Metabase's built-in Jetty server (e.g. :3000/) from external IPs, then I'd guess you need to set up some Windows firewall rules to block connections from external IPs, but that'll depend on your setup. Not something I've looked at, sorry.
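As a sketch, a firewall rule like the following would block inbound connections to port 3000; Windows Firewall doesn't filter loopback traffic, so the IIS proxy on the same machine keeps working (the rule name is a placeholder):

```shell
netsh advfirewall firewall add rule ^
  name="Block external Metabase 3000" ^
  dir=in action=block protocol=TCP localport=3000
```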
Heya folks, Ned here again. Many years ago, we made configuring SMB signing in Windows pretty complicated. Then, years later, we made it even more complicated in an attempt to be less complicated. Today I'm here to explain the SMB signing rules once and for all. Probably.
SMB signing means that every SMB 3.1.1 message contains a signature generated using session key and AES. The client puts a hash of the entire message into the signature field of the SMB2 header. If anyone changes the message itself later on the wire, the hash won't match and SMB knows that someone tampered with the data. It also confirms to sender and receiver that they are who they say they are, breaking relay attacks. Ideally, you are using Kerberos instead of NTLMv2 so that your session key starts strong; don't connect to shares with IP addresses and don't use CNAME records - Kerberos is here to help!
By default, domain controllers require SMB signing of anyone connecting to them, typically for SYSVOL and NETLOGON to get group policy and those sweet logon scripts. Less well known is that - starting in Windows 10 - UNC Hardening from the client also requires signing when talking to those same two shares and goes further by requiring Kerberos (it technically requires mutual auth, but for Windows, that means Kerberos).
SMB signing first appeared in Windows 2000, NT 4.0, and Windows 98 - it's old enough to drink. Signing algorithms have evolved over time; SMB 2.02 signing was improved with HMAC SHA-256, replacing the old MD5 method from the late 1990s that was in SMB1 (may it burn in Hades for all eternity). SMB 3.0 added AES-CMAC. In Windows Server 2022 and Windows 11, we added AES-128-GMAC signing acceleration, so if you're looking for the best performance and protection combo, start planning your upgrades.
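Not SMB's actual wire format, but a toy Python sketch of the SMB 2.02-era idea: compute HMAC-SHA256 over the whole message with the session key, truncate to the 16-byte signature field, and any later tampering makes verification fail (real SMB zeroes the header's signature field before computing, and SMB 3.x uses AES-CMAC/GMAC instead):

```python
import hmac
import hashlib

def sign(session_key: bytes, message: bytes) -> bytes:
    # SMB 2.02-style signing: HMAC-SHA256 over the entire message,
    # truncated to fit the 16-byte signature field in the SMB2 header.
    return hmac.new(session_key, message, hashlib.sha256).digest()[:16]

def verify(session_key: bytes, message: bytes, signature: bytes) -> bool:
    # Constant-time comparison, as any real implementation should use.
    return hmac.compare_digest(sign(session_key, message), signature)

session_key = b"derived-from-kerberos-exchange"  # placeholder session key
msg = b"SMB2 WRITE request payload"
sig = sign(session_key, msg)

assert verify(session_key, msg, sig)               # untampered: accepted
assert not verify(session_key, msg + b"evil", sig) # modified on the wire: rejected
```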
This 20+ year evolutionary process brings me to the confusing bit: "requiring" versus "enabling" signing in Windows security policy. We have four settings to control SMB signing, but they behave differently - and mean different things - under SMB2+ and SMB1.
Note my use of bold. "Always" means "required." "If agrees" means "enabled." If I could go back in time and find out who decided to use synonyms here instead of the actual words, it would be my next stop after buying every share of MSFT I could get my hands on in 1986.
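On modern Windows you don't have to touch raw security policy at all; the SmbShare PowerShell module exposes the same knobs (run elevated - RequireSecuritySignature is the "always/required" setting, EnableSecuritySignature the old "if agrees" one):

```powershell
# Inspect the current client-side signing settings
Get-SmbClientConfiguration | Select-Object RequireSecuritySignature, EnableSecuritySignature

# Require signing on the client (the "always" behavior)
Set-SmbClientConfiguration -RequireSecuritySignature $true -Force

# Server-side equivalents
Get-SmbServerConfiguration | Select-Object RequireSecuritySignature, EnableSecuritySignature
Set-SmbServerConfiguration -RequireSecuritySignature $true -Force
```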
The legacy SMB1 client that is no longer installed by default in Windows 10 or Windows Server 2019 commercial editions had a more complex (i.e. bad) behavior based on the naïve idea that clients and servers should sign if they feel like it, but that it was OK not to sign otherwise - known as "enabled", i.e. "if agrees". Signing is still possible in any case; nothing turns the signing code off. This is not a security model we follow anymore, but everyone was wearing 1-strap undone overalls and baggy windbreakers at this point in the 90s and thinking they looked good. SMB1 also had the "required" setting, for those who wanted more strictness, and that overrides the "if I feel like it" behavior as you'd hope. So we end up with this complex matrix. Again, it only matters for the SMB1 protocol that you are not supposed to be using.
If you really, really want to understand SMB signing, the article to read is SMB 2 and SMB 3 security in Windows 10: the anatomy of signing and cryptographic keys by Edgar Olougouna, who works in our dev support org and is a seriously smart man to be trusted in all things SMB.
As for all these weird ideas we had around signing back in the late 90s - I wasn't around for these decisions but it's ok, you can still blame me if you want. At least I never wore the 1-strap overalls.
Windows nodes support one elastic network interface per node. By default, the number of Pods that you can run per Windows node is equal to the number of IP addresses available per elastic network interface for the node's instance type, minus one. For more information, see IP addresses per network interface per instance type in the Amazon EC2 User Guide.
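As a worked example, you can look up the per-interface IP count for a given instance type with the AWS CLI and subtract one for the node itself to get the Pod capacity (m5.large here is just an illustration):

```shell
# Number of IPv4 addresses per elastic network interface for this instance type;
# max Pods on a Windows node of this type = this number minus one.
aws ec2 describe-instance-types \
  --instance-types m5.large \
  --query 'InstanceTypes[0].NetworkInfo.Ipv4AddressesPerInterface'
```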
In an Amazon EKS cluster, a single service with a load balancer can support up to 1024 back-end Pods, each with its own unique IP address. The previous limit of 64 Pods no longer applies after a Windows Server update, starting with OS Build 17763.2746.
When specifying a custom AMI ID for Windows managed node groups, add eks:kube-proxy-windows to your AWS IAM Authenticator configuration map. For more information, see Limits and conditions when specifying an AMI ID.
An existing cluster. The cluster must be running one of the Kubernetes versions and platform versions listed in the following table. Any Kubernetes and platform versions later than those listed are also supported. If your cluster or platform version is earlier than one of the following versions, you need to enable legacy Windows support on your cluster's data plane. Once your cluster is at one of the following Kubernetes and platform versions, or later, you can remove legacy Windows support and enable Windows support on your control plane.
Your cluster must have at least one (we recommend at least two) Linux node or Fargate Pod to run CoreDNS. If you enable legacy Windows support, you must use a Linux node (you can't use a Fargate Pod) to run CoreDNS.
If your cluster isn't at or later than one of the Kubernetes and platform versions listed in the Prerequisites, you must enable legacy Windows support instead. For more information, see Enabling legacy Windows support.
If you enabled Windows support on a cluster that is earlier than a Kubernetes or platform version listed in the Prerequisites, then you must first remove the vpc-resource-controller and vpc-admission-webhook from your data plane. They're deprecated and no longer needed.
If you don't have Amazon Linux nodes in your cluster and use security groups for Pods, skip to the next step. Otherwise, confirm that the AmazonEKSVPCResourceController managed policy is attached to your cluster role. Replace eksClusterRole with your cluster role name.
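The policy can be attached with the AWS CLI; as in the text, eksClusterRole is a placeholder for your cluster role name:

```shell
aws iam attach-role-policy \
  --role-name eksClusterRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController
```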
Verify that your aws-auth ConfigMap contains a mapping for the instance role of the Windows node to include the eks:kube-proxy-windows RBAC permission group. You can verify by running the following command.
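The verification presumably uses the standard aws-auth lookup (kube-system and the ConfigMap name are the EKS defaults):

```shell
kubectl get configmap aws-auth -n kube-system -o yaml
```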
You should see eks:kube-proxy-windows listed under groups. If the group isn't specified, you need to update your ConfigMap or create it to include the required group. For more information about the aws-auth ConfigMap, see Apply the aws-auth ConfigMap to your cluster.
If you enabled Windows support on a cluster that is earlier than a Kubernetes or platform version listed in the Prerequisites, then you must first remove the vpc-resource-controller and vpc-admission-webhook from your data plane. They're deprecated and no longer needed because the functionality that they provided is now enabled on the control plane.
Uninstall the vpc-resource-controller with the following command. Use this command regardless of which tool you originally installed it with. Replace region-code (only the instance of that text after /manifests/) with the AWS Region that your cluster is in.
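The uninstall is a kubectl delete against the published manifest; the exact URL comes from the AWS documentation, so treat the one below as an approximation. Note that region-code appears twice, and per the text only the instance after /manifests/ should be replaced:

```shell
kubectl delete -f https://s3.us-west-2.amazonaws.com/amazon-eks/manifests/region-code/vpc-resource-controller/latest/vpc-resource-controller.yaml
```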
If your cluster is at or later than one of the Kubernetes and platform versions listed in the Prerequisites, then we recommend that you enable Windows support on your control plane instead. For more information, see Enabling Windows support.
The following steps help you to enable legacy Windows support for your Amazon EKS cluster's data plane if your cluster or platform version are earlier than the versions listed in the Prerequisites. Once your cluster and platform version are at or later than a version listed in the Prerequisites, we recommend that you remove legacy Windows support and enable it for your control plane.
Enable Windows support for your Amazon EKS cluster with the following eksctl command. Replace my-cluster with the name of your cluster. This command deploys the VPC resource controller and VPC admission controller webhook that are required on Amazon EKS clusters to run Windows workloads.
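That eksctl command is, in the versions I've seen, the install-vpc-controllers utility (flag spelling has varied across eksctl releases, so check eksctl's help output):

```shell
eksctl utils install-vpc-controllers --cluster my-cluster --approve
```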
The VPC admission controller webhook is signed with a certificate that expires one year after the date of issue. To avoid down time, make sure to renew the certificate before it expires. For more information, see Renewing the VPC admission webhook certificate.
If the output includes Error from server (NotFound), then the cluster does not have the necessary cluster role binding. Add the binding by creating a file named eks-kube-proxy-windows-crb.yaml with the following content.
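A plausible shape for that file, binding the eks:kube-proxy-windows group to Kubernetes' built-in node-proxier role (the roleRef and labels are assumptions; check the AWS documentation for the canonical content), applied with kubectl apply -f eks-kube-proxy-windows-crb.yaml:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks:kube-proxy-windows
  labels:
    k8s-app: kube-proxy
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: "eks:kube-proxy-windows"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
```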