Personally, I do not use this much, since everything I do in each landscape layer tends to be entirely within that layer.
If you are struggling to work out how to use this, I would suggest you may not need to worry about it. Every example I can think of is handled by doing the instructions within the layer. It may only be useful for more advanced cases where you do things that prevent the compiler from optimizing them, but right now I'm struggling to come up with a good example.
To harp on the issue a little more, we really need some way to branch in materials. Real branching, with dependent nodes placed inside the if block in the generated HLSL. It is very common to have expensive computations and texture lookups that are only applied to a portion of a mesh, especially with landscapes. Even more common is having material effects that you want to turn on and off with a uniform parameter. I know these techniques have pitfalls, but there are those of us who understand them and can make good decisions about when to use them. Your engine shaders make extensive use of them, for example.
As for LandscapeLayerSwitch, its intended usage is to skip code if the layer in question is not present on a landscape component. As Ryan mentioned, in most cases such optimization will be performed automatically, so this node is not widely used, but occasionally you might need to mess with it.
Personally, I would like to see the If node behave exactly the same as an if statement in HLSL, with all computations in the right scope, letting the compiler decide based on its optimization heuristics. Then it would really make sense to have the explicit Dynamic Branch node, and people who actually want a blend can just use Lerp. That would fit best with the experience shader programmers have built up over the years.
A L2 switch does switching only. This means that it uses MAC addresses to switch packets from a port to the destination port (and only the destination port). It therefore maintains a MAC address table so that it can remember which MAC addresses are associated with which ports.
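The learn-and-forward behavior described above can be sketched in a few lines of Python (a simplified illustration, not any vendor's actual implementation): the switch records the source MAC against the ingress port, then either switches to the single known destination port or floods when the destination is unknown.

```python
# Minimal sketch of L2 switch forwarding: learn the source MAC on the
# ingress port, forward to the port where the destination MAC was last
# seen, and flood out all other ports when the destination is unknown.

class L2Switch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                 # MAC address -> port

    def handle_frame(self, src_mac, dst_mac, in_port):
        """Return the set of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port   # learn the source MAC
        out_port = self.mac_table.get(dst_mac)
        if out_port is not None and out_port != in_port:
            return {out_port}               # known destination: switch it
        return self.ports - {in_port}       # unknown destination: flood

sw = L2Switch(ports=[1, 2, 3, 4])
sw.handle_frame("aa:aa", "bb:bb", in_port=1)  # bb:bb unknown -> flood to 2,3,4
sw.handle_frame("bb:bb", "aa:aa", in_port=2)  # aa:aa learned -> port 1 only
```

Once both hosts have sent a frame, the table is populated and traffic between them stops being flooded, which is exactly why a busy L2 segment quiets down after the initial learning phase.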
A L3 switch also does switching exactly like a L2 switch. The L3 means that it has an identity from the L3 layer. Practically this means that a L3 switch is capable of having IP addresses and doing routing. For intra-VLAN communication, it uses the MAC address table. For extra-VLAN communication, it uses the IP routing table.
This is simple, but you could say "Hey, but my Cisco 2960 is a L2 switch and it has a VLAN interface with an IP!". You are perfectly right, but that VLAN interface cannot be used for IP routing since the switch does not maintain an IP routing table.
Either together on the same ports (using Integrated Routing and Bridging, i.e. IRB): if the DMAC in the incoming IP packet is that of the IRB interface, routing (layer 3 behavior) is done. Otherwise, the packet is bridged (layer 2 behavior) on the ports in the same VLAN.
Or, on separate sets of ports of the switch (some ports acting as L2 ports while others act as L3 ports): a set of "x" ports on the switch may be configured as a bridge (and will bridge packets), while another set of "y" ports may have IP addresses assigned to them and will act as router ports (routing received IP packets).
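The IRB decision described above (route if the frame is addressed to the IRB interface's own MAC, otherwise bridge within the VLAN) can be sketched as follows. The IRB MAC, the frame layout, and the exact-prefix route lookup are all illustrative assumptions; a real switch does longest-prefix matching in hardware.

```python
# Hypothetical sketch of the IRB forwarding decision: frames addressed to
# the IRB (VLAN) interface's MAC are handed to the routing table (layer 3);
# everything else is bridged to the other ports in the same VLAN (layer 2).

IRB_MAC = "0000.5e00.0101"   # assumed MAC of the IRB interface

def forward(frame, routing_table, vlan_ports):
    """Return ("route", next_hop) or ("bridge", egress_ports)."""
    if frame["dst_mac"] == IRB_MAC:
        # Layer 3 behavior. Real switches do longest-prefix match; this
        # sketch simplifies to an exact /24 lookup for brevity.
        prefix = ".".join(frame["dst_ip"].split(".")[:3]) + ".0/24"
        return ("route", routing_table.get(prefix, "drop"))
    # Layer 2 behavior: bridge within the VLAN, excluding the ingress port.
    return ("bridge", vlan_ports - {frame["in_port"]})

routes = {"10.1.2.0/24": "via 10.0.0.2"}
ports = {1, 2, 3}

# Frame addressed to the IRB MAC -> routed.
print(forward({"dst_mac": IRB_MAC, "dst_ip": "10.1.2.9", "in_port": 1}, routes, ports))
# Frame addressed to another host's MAC -> bridged on the VLAN.
print(forward({"dst_mac": "aaaa.bbbb.cccc", "dst_ip": "10.1.2.9", "in_port": 1}, routes, ports))
```

This also makes the Cisco 2960 point concrete: a pure L2 switch has the IRB MAC and IP for management, but no `routing_table` to consult, so the first branch effectively does not exist.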
Where I work we're working diligently to provide robust resiliency and redundancy for our firewalls using dual power supplies, HA, and multiple ISP circuits with policy-based routing for failover. Our core switching (also our core router) is also fully redundant, with an IRF stack of two H3C 7506 chassis in physically disparate locations in our main campus connected via divergent fiber runs, and all the distribution closets connected back to the core via an aggregation group consisting of a link to each half of the core. The weakest link in this scenario is down at layer 2: our leaf switches.
No matter what we think of, there's no getting around the fact that any single access switch failing is going to cause downtime for some group of users. Short of getting dual NICs in each machine and trying to build BAGs/LAGs for each workstation, what is the best way to mitigate the impact of this?
Obviously, invest in high quality switches with fully redundant and hot swappable PSUs and fans. I've thought that always maintaining an unused spare switch in the stack would be a potential strategy for N+1, so that if a single switch dies we can simply add the spare to replace the failed member remotely and then have someone physically move cables (now becoming a task fit for a help desk technician rather than requiring a network engineer to be available to go manually replace the stack member). But I was wondering if anyone knows of any technology that might automate this or provide this warm spare feature explicitly.
In the past we had 'spare' switches that had the same code level and a base config on them. If a prod switch bit the dust, we could then take our config backup and dump it onto the spare, replace the dead one, and plug everything back in.
Spanning Tree will protect against loops and therefore allows multiple paths in a switch fabric, and therefore redundancy between the access layer and the core. It does not, however, allow for redundancy at the access layer itself... you know, what endpoints actually physically connect to. Nothing I can think of really does or could, except dual connections between each endpoint and multiple switches, but that would require some kind of link aggregation and thus double the number of drops/patches/switches, and would therefore be cost-prohibitive to all but those with the heftiest of coffers.
But, since this is the weakest link in the chain of the network's armor and the one that actually serves the users themselves, I was wondering if maybe there are some solutions out there that work around this problem.
Yes, this is essentially what we do now. We have a stock of spare switches for each make and model and when one fails we load up the matching firmware and then rejoin it to the stack (on Comware this does not require restoring a config, you simply add it to the stack in place of the missing member). Our environment at the moment is all over the place in terms of code levels (bad, I know, but not uncommon), so it would not be possible to maintain a spare for each code level until we normalize our codebase across the organization.
Also, aside from needing to load up code, our current strategy relies on a network engineer being available to come into the office and physically install a switch. I was hoping that others might have found creative solutions to that requirement.
Your point about wireless is interesting. The only issue I see there is that making each wired host also a wireless host would create a huge additional load to our wireless infrastructure and wireless capacity is already an issue in most places. It would be cool if there were a solution that could maintain a preconfigured and tested wireless NIC that would remain cold but automatically turn itself on and connect if the wired network failed (like if it were unable to ping some IP).
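As far as I know no off-the-shelf product does exactly this, but the "cold wireless NIC that wakes up when the wired network dies" idea could be sketched as a simple watchdog: probe a check IP over the wired link, and after N consecutive failures bring up the preconfigured wireless interface. The threshold, the probe mechanism, and the command that would actually enable the radio are all assumptions for illustration.

```python
# Hypothetical failover watchdog for the idea above: feed it one wired
# reachability probe result at a time (e.g. the outcome of pinging some
# check IP); after FAIL_THRESHOLD consecutive failures it decides the
# preconfigured wireless NIC should be enabled, and it goes cold again
# as soon as the wired link recovers.

FAIL_THRESHOLD = 3   # assumed number of consecutive failed probes

class FailoverWatchdog:
    def __init__(self):
        self.failures = 0
        self.wireless_up = False

    def observe(self, wired_ping_ok):
        """Record one probe result; return True if wireless should be up."""
        if wired_ping_ok:
            self.failures = 0
            self.wireless_up = False   # wired is back: disable wireless
        else:
            self.failures += 1
            if self.failures >= FAIL_THRESHOLD:
                # Here a real agent would enable the radio, e.g. by
                # running the OS's wireless management command.
                self.wireless_up = True
        return self.wireless_up

wd = FailoverWatchdog()
for ok in [True, False, False, False]:
    state = wd.observe(ok)
print(state)   # → True: wireless enabled after 3 consecutive failures
```

Keeping the NICs cold until failover is what would protect the wireless capacity you mention: only the hosts behind the single failed access switch would associate, rather than every wired host all the time.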
Nobody has dictated it, per se, but when a switch fails it's a "drop everything else" class emergency. When this happens on the weekends or after hours, there's no one here to address it, and my employer is a hospital, so it's a 24x7x365 operation where anytime something breaks the words "patient safety" quickly get used.
I was wondering if someone hadn't built a stacking framework that allowed for running a warm spare switch in the stack that would automatically take over the place and config of a failed member in the stack. That would reduce the repair procedure to just the moving-cables part... which could easily be explained to and completed by a non-network engineer, say at 3am on a Sunday. N+1 is a concept that is ubiquitous in almost every other area of infrastructure, except for access layer switches (as far as I've seen).
Coming from a person that has spent a lot of time in a hospital bed for one reason or another (me), I can appreciate the work you guys do. While there may not be a warm spare feature, there is nothing that says you cannot have a switch in the stack that is for 'emergencies' only. Meaning it's there and working and ready for someone to just move patch cables to it.
However, I have used many different brands, and even when the vendor states that a failure in one switch won't affect the stack, I have seen otherwise :(. For that reason, at the last place I worked, we didn't stack anything (300+ access switches). All the switches were standalone, and we separated phones and computers onto different switches. While we were not a 24/7 shop, we were a call center, and downtime was tons of money lost and potential contracts lost.
Getting funding for these items shouldn't be an issue. You already mentioned the argument to use: 'patient safety'. Use the same argument to get additional switching, in-house on-call admins, or an outsourced company that can come in within a short timeframe to assist.