

This is pretty cool, not only for the functionality, but also for the consolidation and the mess of Ethernet and power cables you avoid by not using standalone components. If quarter-height blade servers are used, the M1000e can support up to 32 servers. The Dell PowerEdge M I/O Aggregator also provides 32 internal 10 GbE connections for the Dell blade servers you install in the Dell PowerEdge M1000e chassis. The base blade comes with 2 x 40 GbE ports that are configured as 8 x 10 GbE ports by default; if desired, these ports can instead be used as 40 GbE stacking ports. Think of it as an advanced Layer 2 switch that provides expandable uplink connectivity.

Finally, the FX2 chassis can come with PCIe slots or without, and you assign each slot to a blade; that setup is better suited to FC interfaces, but it works fine for Ethernet as well.

The Dell PowerEdge M I/O Aggregator is a slick blade switch that plugs into the Dell PowerEdge M1000e chassis and requires barely any configuration or networking knowledge. If a stack member fails, the IOA will flip to the other member, and if the IOA itself fails, the hosts see a dead interface and your ESXi NIC failover flips the host over to the other IOA.

Are your hosts half width or quarter width? Half width has more internal NICs. Check the Cisco docs on the port channel requirements: IOA 1 will port channel to two different stack members, and IOA 2 does the same.
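To make that redundancy argument concrete, here is a minimal sketch in plain Python. All of the names (`vmnic0`, `ioa1`, `stack1`, and so on) are made up for illustration and are not taken from any Dell or Cisco tooling; it just models each IOA port channelling to two different stack members and checks that any single IOA or stack-member failure still leaves the host a working path to the core.

```python
# Minimal redundancy sketch: hypothetical device names, not from any Dell/Cisco tool.
# A host has one NIC to each IOA via the chassis midplane; each IOA has a port
# channel whose member links land on two different core stack members.

from itertools import chain

# host NIC -> IOA it connects to
HOST_NICS = {"vmnic0": "ioa1", "vmnic1": "ioa2"}

# IOA -> core stack members its port-channel member links terminate on
IOA_UPLINKS = {"ioa1": {"stack1", "stack2"}, "ioa2": {"stack1", "stack2"}}


def host_has_path(failed: set) -> bool:
    """True if the host can still reach the core with the given devices failed."""
    for nic, ioa in HOST_NICS.items():
        if ioa in failed:
            continue  # ESXi NIC teaming sees this uplink as dead and skips it
        if IOA_UPLINKS[ioa] - failed:  # at least one surviving port-channel member
            return True
    return False


# Every single-device failure (an IOA or a stack member) should leave a path.
devices = set(chain(IOA_UPLINKS, *IOA_UPLINKS.values()))
for dev in sorted(devices):
    ok = host_has_path({dev})
    print(f"fail {dev}: {'still reachable' if ok else 'ISOLATED'}")
```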

So long as you can port channel each IOA to your Cisco core, you should have full redundancy. We changed ours to “dumb” switches because our network team wanted nothing to do with them (the network team only touches Cisco), so we just have a single interface per switch connected, and that internally connects to the host side of the backplane. The Dell switches are basically fully featured switches; you can port channel them up to your core and it will work great. We have the FX2 chassis, and we actually use it as our dev environment since we went R640 for our prod HCI.
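On the Cisco side, the core simply sees each IOA uplink bundle as an ordinary LACP port channel spanning two stack members. The snippet below is only an illustrative sketch: it prints IOS-style configuration for hypothetical interface names, channel numbers, and descriptions (none of this comes from an actual deployment), and your platform's interface naming and trunk/VLAN details will differ.

```python
# Illustrative only: prints IOS-style config for one LACP port channel per IOA.
# Interface names and channel numbers are hypothetical placeholders.

UPLINKS = {
    # port-channel id -> (description, core-switch member interfaces on two stack members)
    101: ("Uplink to IOA-A1", ["TenGigabitEthernet1/0/1", "TenGigabitEthernet2/0/1"]),
    102: ("Uplink to IOA-A2", ["TenGigabitEthernet1/0/2", "TenGigabitEthernet2/0/2"]),
}

for po, (desc, members) in UPLINKS.items():
    print(f"interface Port-channel{po}")
    print(f" description {desc}")
    print(" switchport mode trunk")
    print("!")
    for intf in members:
        print(f"interface {intf}")
        print(f" description member of Po{po} ({desc})")
        print(" switchport mode trunk")
        print(f" channel-group {po} mode active")  # 'active' = LACP
        print("!")
```

Because the two member links of each port channel land on different stack members, losing a single core stack member only drops one member link per bundle rather than the whole uplink.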
