Partitions


Anna Pybus

Jul 9, 2024, 8:02:04 AM7/9/24
to warcbulklesbbu

I'm partitioning a table by date and using it as an archive. I've written code to extract a list of partitions to remove (because they are old), but I can't find the method that takes this list and removes those partitions from the existing table.
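The question doesn't say which database is in use. If it's a MySQL-style range-partitioned table, there is no single "remove this list of partitions" call; you issue `ALTER TABLE ... DROP PARTITION` with the names joined together. A minimal sketch (the table and partition names here are made-up examples):

```python
# Hypothetical sketch: given a list of old partition names, build the DDL
# that drops them from a MySQL-style range-partitioned table.
# "archive_table" and the partition names are illustrative only.

def drop_partition_sql(table, partitions):
    """Build one ALTER TABLE statement dropping the given partitions."""
    if not partitions:
        return None
    return "ALTER TABLE {} DROP PARTITION {};".format(table, ", ".join(partitions))

old = ["p2022_01", "p2022_02", "p2022_03"]
print(drop_partition_sql("archive_table", old))
# ALTER TABLE archive_table DROP PARTITION p2022_01, p2022_02, p2022_03;
```

The statement would then be executed through whatever DB connection the archiving code already uses.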

Hello! I'm attempting to set up a DMZ External partition and a VIP Internal partition on my VPX device and am stuck on the VLANs and IPs. Is there a good guide to setting up partitions I can use? I didn't see one by Carl out there.

I ask because I have 3 IPs: a management IP for the default partition, a DMZ SNIP, and an internal VIP SNIP. I've tried various ways to set up these IPs and assign them to the partitions, but it isn't working. I'm guessing I don't understand how to make the DMZ SNIP live only inside the DMZ partition so that all DMZ traffic goes through it; the same goes for the internal side, where I'm not quite sure how to set up each partition with its own SNIP. We also have 3 interfaces on the NetScaler: management, DMZ, and internal. My goal is to divide the DMZ and internal traffic so neither side "sees" the other, for security reasons. Eventually the DMZ side will host a VPN and external ICA while the internal side will host website VIPs.

So then I made a partition with VLAN 1113 on it. I logged into that partition and created the SNIP for interface 2, VLAN 1113... and then nothing happens. The SNIP is never seen on the network; nothing created in the partition ever makes it onto the network.

However, if I skip the partition and just add the SNIP to the default one, it works right away. So whatever isn't working is inside the partition, but no documentation talks extensively about how to set up the VLANs and SNIPs in a partition.

If your VPX is on an SDX, then as far as I know you have to create the VLAN-to-interface bindings from the SVM. I had issues in the past when creating a VLAN on the VPX; I had to do it from the SVM of the SDX.

"NetScaler VPX relies upon the hypervisor for its L2 networking services. Generally this does not limit how the NetScaler VPX can be deployed, but certain L2 functionality that is configured on a physical NetScaler appliance must be configured on the underlying hypervisor."

Have you gained any traction with this? I'm working on a similar setup to prove out partitions as a way to isolate traffic on our physical MPXs in the DMZ. To test, I deployed VPXs on VMware in our DMZ with multiple virtual network adapters, each dedicated to a specific VLAN. I created the VLANs, marked them for partition sharing (even though I won't be sharing between partitions), and assigned interfaces and a SNIP to each VLAN; it all makes sense and should work easily. But when I create the partitions and bind the VLANs, no traffic will leave the NetScaler. I first noticed I needed to add the SNIP again inside the partition, but that still didn't do anything. Even after removing the partitions and testing traffic flow, I'm still not seeing anything on all but the first two interfaces. Maybe there are some other VPX-related settings I need to be aware of?

Each partition will need the appropriate VLANs and SNIP(s) for the networks you need to reach. The SNIPs and VIPs within an admin partition should not conflict with those in other partitions (no duplicates). While an admin partition isn't as segregated as a separate NetScaler instance, view it as separate from the networking standpoint once the VLANs have been allocated. The partition needs its VLANs to reach the actual NICs and to segregate its traffic from the default and other partitions. To pass traffic, each partition will need a SNIP and a set of VIPs that do not conflict with other partitions.
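As a rough sketch of those steps in the NetScaler CLI (the partition name, VLAN, interface, and addresses below are example values, not from this thread):

```
# From the default partition: create the partition and dedicate a VLAN to it.
add ns partition dmz-part
add vlan 1113
bind vlan 1113 -ifnum 1/2
bind ns partition dmz-part -vlan 1113

# Then switch into the partition and add its own SNIP on that VLAN's subnet.
switch ns partition dmz-part
add ns ip 10.20.30.5 255.255.255.0 -type SNIP
```

The key point is that the VLAN-to-interface binding happens in the default partition, while the partition's SNIP is added from inside the partition itself.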

So here's a fun fact I've learned that is NOT commonly documented: you can't partition the gateways. The default partition is effectively your "DMZ"; it's the only place you can run the Citrix Gateways for VPN and external ICA traffic. You cannot make a DMZ partition and expect to run anything in the NetScaler Gateway section there.

Knowing that key fact, I set up the NetScaler as if it lived purely in the DMZ. Then I added another NIC to carry the SNIP for internal VIPs, and assigned that interface and SNIP to traffic domain 777 (pick any number). Then I made a default route in domain 777 that sends all traffic out the SNIP on the internal VIP subnet.

After that, when I made VIPs for internal services, I assigned them to traffic domain 777. Everything else on the NetScaler is set up as if it lives in the DMZ; everything needed to support internal LB traffic (VIPs) is assigned to domain 777, and it seems to work great.
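The traffic-domain approach described above looks roughly like this in the CLI (the domain ID matches the post; the VLAN, interface, and addresses are made-up examples):

```
# Create the traffic domain and tie it to a VLAN/interface.
add ns trafficDomain 777
add vlan 778
bind vlan 778 -ifnum 1/3
bind ns trafficDomain 777 -vlan 778

# SNIP and default route scoped to the traffic domain.
add ns ip 10.40.50.5 255.255.255.0 -type SNIP -td 777
add route 0.0.0.0 0.0.0.0 10.40.50.1 -td 777

# Internal VIPs are then created with -td 777 as well.
add lb vserver internal-web HTTP 10.40.50.100 80 -td 777
```

Everything without `-td` stays in the default traffic domain, which is why the Gateway config can live alongside this untouched.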

Sorry, I didn't mention this. Not all features are supported in admin partitions (Citrix Gateway being the biggest). Always check this section of the admin guide for which features are not supported, to know whether partitions are useful for you or not: -us/citrix-adc/12-1/admin-partition/admin-partition-config-types.html

Also: admin partitions are mostly a configuration boundary more than anything else. While you can have duplicate entities (like services) in different partitions, you still have to make sure there are no SNIP/VIP IP conflicts across partitions, any more than you would on separate devices, unless you are also using traffic domains.

They have some separation of memory/bandwidth, but it's not segregated the way separate instances would be. If you need configuration/processing separation for performance OR security, you need separate ADCs/NetScalers, either as separate VPXs, MPXs, or separate instances on an SDX. Modes, features, policies, and entities (for supported features) can be configured per partition. Finally, they don't really do admin rights at the group level using group extraction; it's mostly handled per user account (unless something changed in 12.1).

Thanks for the info. I've been looking at admin partitions more as a way to separate/isolate network traffic (Tier 1, 2, 3, etc.) in our DMZ, without any duplicate networks, and I did plan to keep Gateway in the default partition. This is the first time I've tried setting up a VPX with multiple interfaces, and I'm struggling to get traffic to go out the right SNIP, so I added the partition and bound a VLAN thinking that would clear it up; it still doesn't. So my problem appears to be a routing issue, not so much admin partitions. Even when I bind VLANs to an interface and SNIP, I'm still not getting traffic to use the correct route. I've even created net profiles to force a service to use a specific SNIP. Frustrating.

We have traffic domains configured on our older production MPXs, and that seems to handle this just fine, but I didn't set it up, and since we'll need to move onto new hardware in the near future I'm trying to redesign around today's best practices. I feel like I have a really good handle on many NetScaler features, but the network/routing piece is difficult to wrap my head around.

Networks can be associated with admin partitions, and the docs recommend partitions over traffic domains. They work at different levels, though: admin partitions aren't solely a network segregation tool, but you can achieve some network segregation in service of the configuration separation that admin partitions give you.

The main things to remember about regular ADC networking are: 1) all IPs are associated with all interfaces by default; 2) if you want the ADC to segregate IP and interface ownership, you need VLANs to limit specific IPs to specific NICs (or channels) — the VLAN just limits which IPs are GARPed (owned) from which NICs/MAC addresses; 3) then it's about proper routing. In a two-armed or multi-armed config, you may still have to use PBRs or MAC-based forwarding to guarantee traffic goes in/out the correct interface/IP as needed.
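Points 1–3 above can be sketched in the CLI like this (VLAN, interface, and addresses are illustrative):

```
# Restrict a SNIP to one NIC by binding both the interface and the
# subnet to the same VLAN.
add vlan 200
bind vlan 200 -ifnum 1/2
bind vlan 200 -IPAddress 10.0.200.5 255.255.255.0

# Optional: MAC-based forwarding so replies leave via the interface
# they arrived on, which helps in multi-armed setups.
enable ns mode MBF
```

Without the VLAN binding, the ADC will answer for that SNIP on every interface, which is usually the source of "traffic leaves the wrong NIC" problems.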

When you start allocating one or more VLANs per partition, you still need to approach the partition networking in a similar way. Is the partition going to be one-armed or two-armed? Are the right VLANs allocated to the partition to give it access to the physical interfaces it requires, and to let you make VIP/SNIP assignments within those VLANs for its own entities? How many VLANs will it need to pass traffic and tie the in-partition IPs to the physical interfaces/channels the VLANs are associated with? You still need to make sure the SNIPs/VIPs/VIP ranges in the partition are valid IPs in the network segments the partition participates in, without IP conflicts (unless traffic domains are used instead). And you still need proper routing, and possibly PBRs or MBF, at the partition level too.

Currently DSS expects an explicit list of partitions to build. If you want to run a recipe on all partitions, you can use a scenario with an "Execute Python code" step.

When you redispatch partitions, all partitions will be redispatched by default.
If you have a date partition dimension, you can set a date range that covers all partitions.
If you have a discrete dimension, you can set a variable listing all of your partitions and build with that.

One of the benefits of partitions is not having to rebuild everything every time. When building them the first time, you can indeed use the scenario or the other methods mentioned above.
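DSS build steps take the partition list as a comma-separated spec string. A minimal sketch of such a scenario step (the dataiku calls in the comments and the dataset names are assumptions for illustration, not from this thread):

```python
# Hedged sketch for a DSS "Execute Python code" scenario step.
# The dataset name "archive" is made up.

def partition_spec(partitions):
    """DSS build steps take partitions as a comma-separated spec string."""
    return ",".join(partitions)

spec = partition_spec(["2024-01-01", "2024-01-02", "2024-01-03"])
print(spec)  # 2024-01-01,2024-01-02,2024-01-03

# Inside DSS the step would look roughly like (assumed API):
#   import dataiku
#   from dataiku.scenario import Scenario
#   parts = dataiku.Dataset("archive").list_partitions()
#   Scenario().build_dataset("archive_out", partitions=partition_spec(parts))
```

This keeps the "build everything" case in one place while normal runs can still name just the partitions they need.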

Kind Regards,

A maximum of about 4,000 partitions per broker is often recommended (Kafka: The Definitive Guide, plus the Apache Kafka and Confluent blogs). My understanding is that more partitions per broker require more RAM, and probably also more CPU, due to the additional work required per partition.

However, I can easily scale CPU and RAM vertically in the cloud, and I wonder whether this recommendation still applies to newer Kafka versions (v2.6.0+). If so, what is the problem with more than 4k partitions? And how can I tell whether my broker suffers from problems due to the number of partitions?

@weeco is referencing this blog post. But the post makes no mention of the underlying hardware on which 4K partitions run seamlessly. One should also keep in mind the cost of unplanned downtime when loading up a broker with 4K partitions: a hard shutdown on a broker serving a large number of partitions can take a very long time to recover from.
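One crude first check for "is my broker suffering" is whether partition leadership is skewed toward one broker. A sketch of that count — in practice the metadata would come from something like confluent_kafka's `AdminClient.list_topics()`, but here it is modeled as a plain dict so the counting logic is visible (topic and broker IDs are made up):

```python
from collections import Counter

def leader_partitions_per_broker(topics):
    """topics: {topic_name: {partition_id: leader_broker_id}}
    Returns how many partitions each broker leads."""
    counts = Counter()
    for partitions in topics.values():
        for leader in partitions.values():
            counts[leader] += 1
    return dict(counts)

metadata = {
    "orders": {0: 1, 1: 2, 2: 3},
    "payments": {0: 1, 1: 1},
}
print(leader_partitions_per_broker(metadata))  # {1: 3, 2: 1, 3: 1}
```

A broker leading far more partitions than its peers is the one that will take longest to recover after a hard shutdown.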
