The fabric manager, or QFabric Director, acts as a pure management plane only; it runs on a separate server that connects to the other QFabric components through an out-of-band 1 Gbit/s Ethernet network (typically implemented using up to eight Juniper EX Series switches). QFabric allows multiple distributed physical devices in the network to share a common control plane and a separate, common management plane, so that they behave as if they were a single large switch.
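To make the division of roles concrete, here is a minimal sketch that models the three QFabric component types and their architectural limits as plain Python data structures. The class names, attributes, and per-node port counts (48 × 10 GbE access ports and 4 × 40 GbE uplinks on the QFX3500) are illustrative assumptions for this sketch, not a Juniper API.

```python
from dataclasses import dataclass, field

# Illustrative model of QFabric component roles (names are hypothetical):
# the Director is management plane only, the Interconnect is the data-plane
# Clos stage, and edge nodes perform all forwarding lookups.

@dataclass
class Director:              # management plane, runs on a separate server
    name: str

@dataclass
class Interconnect:          # data-plane "backplane" chassis
    name: str

@dataclass
class EdgeNode:              # QFX3500 edge switch ("blade" of the big chassis)
    name: str
    access_ports_10g: int = 48
    uplinks_40g: int = 4

@dataclass
class QFabric:
    director: Director
    interconnects: list[Interconnect]
    nodes: list[EdgeNode] = field(default_factory=list)

    def add_node(self, node: EdgeNode, homed_to: int = 2) -> None:
        # Each edge node attaches to either two or four interconnects,
        # and the architecture supports at most 128 edge switches.
        assert homed_to in (2, 4) and homed_to <= len(self.interconnects)
        assert len(self.nodes) < 128
        self.nodes.append(node)

fabric = QFabric(Director("director-0"),
                 [Interconnect(f"ic-{i}") for i in range(4)])
fabric.add_node(EdgeNode("node-0"), homed_to=4)
print(len(fabric.nodes), "edge node(s) managed as one logical switch")
```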
Because the fabric behaves as one switch, QFabric devices are not referred to as edge or core switches, and the overall approach is called a fabric rather than a network. The main advantage of this approach is a single, very large network fabric with relatively low latency between any two edge ports.

The interconnect chassis architecture is a Clos fat tree that acts as the forwarding plane for data traffic, while the edge switch acts like a blade in a larger chassis, performing all the forwarding lookups, traffic classification, and backbone encapsulation functions. The edge switch, the QFX3500, can also be used as a standalone top-of-rack switch. According to Juniper, the uplinks between the edge switches and the interconnect are effectively backplane extensions, implemented as 40 Gbit/s links over OM3- or OM4-grade fiber or short copper links. Each edge node connects to either two or four interconnects, and the architecture supports up to 128 edge switches. A maximum nonblocking configuration thus scales to 2048 ports of 10 Gbit/s traffic; for high-availability, dual-homed servers, a fully populated fabric equates to 3072 servers (a short calculation reproducing these figures appears at the end of this section).

According to Juniper, any two edge ports in the QFabric experience the same latency, around 5 µs. The uplink ports are capable of accommodating higher data rates in the future, which could allow lower oversubscription ratios in future generations of the fabric; as of this writing, such configurations have not been announced. As of November 2012, the largest publicly released independent QFabric test involved 1536 ports of 10G Ethernet, or approximately 25% of QFabric's theoretical maximum capacity, and various articles have speculated on the protocols used within a QFabric [16]. QFabric supports up to 96K MAC addresses and up to 24K IP addresses.

According to Juniper, all 128 edge switches in a QFabric can be managed as if they were a single, large, logical switch. In principle, this simplifies the fabric (e.g., by reducing the number of devices that need to maintain authentication credentials). Since QFabric performs routing within the fabric using proprietary approaches, it does not use STP, TRILL, or many other networking protocols past the edge switch and into the core of the fabric. According to a recent analysis of QFabric [9], the initial release of QFabric runs STP only within the network node. To protect against the case where a multihomed server starts bridging between its ports and sending BPDUs, each QFabric server node implements automatic BPDU guard. Further, QFabric apparently uses LLDP to implement a form of cable-error detection; for example, if two ports of a server node were connected back-to-back, QFabric detects this and disables both ports (a minimal sketch of this kind of check appears at the end of this section).

Traffic entering or exiting the edge switch is industry-standard lossless Ethernet, while traffic on the uplinks and internal to QFabric uses a Juniper proprietary protocol in which frames are tagged at the edge and forwarded across the interconnect core switch. In other words, the forwarding decision is performed at the fabric edge. This approach is similar to the connections between line cards and the supervisor in a traditional switch chassis, which use a frame/packet encapsulation devised to meet the platform's requirements.
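The scaling figures quoted above are consistent with simple arithmetic, assuming the QFX3500's commonly cited port configuration of 48 × 10 GbE access ports and 4 × 40 GbE uplinks (an assumption not stated explicitly in the text). The short calculation below reproduces the 2048-port nonblocking maximum, the 3072 dual-homed servers, and the 25% test-coverage figure.

```python
# Worked arithmetic behind the scaling claims above (figures as stated in
# the text; the per-node port counts are an assumption, not a specification).

edge_nodes_max   = 128        # architectural maximum number of edge switches
access_10g_ports = 48         # assumed 10 Gbit/s access ports per QFX3500
uplink_40g_ports = 4          # assumed 40 Gbit/s uplinks per QFX3500

# Fully populated (oversubscribed) maximum: every access port in use.
total_ports = edge_nodes_max * access_10g_ports            # 6144 ports
dual_homed_servers = total_ports // 2                      # 3072 servers

# Nonblocking maximum: access bandwidth limited to uplink bandwidth.
uplink_gbps_per_node = uplink_40g_ports * 40               # 160 Gbit/s
nonblocking_ports_per_node = uplink_gbps_per_node // 10    # 16 ports
nonblocking_ports = edge_nodes_max * nonblocking_ports_per_node   # 2048

# The largest published independent test (1536 ports) as a share of the
# fully populated maximum.
tested_fraction = 1536 / total_ports                       # 0.25

print(total_ports, dual_homed_servers, nonblocking_ports, tested_fraction)
```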
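The LLDP-based cable-error check mentioned earlier can be illustrated with a minimal sketch: if the LLDP neighbor seen on a port reports the same chassis ID as the local node itself, the cable must loop back into the same device, and both ends are disabled. The data structures and names below are hypothetical; this shows the general principle, not Juniper's implementation.

```python
# Minimal sketch of LLDP-based back-to-back cable detection, assuming a
# neighbor table of (local_port, neighbor_chassis_id, neighbor_port) tuples.

LOCAL_CHASSIS_ID = "aa:bb:cc:00:00:01"   # hypothetical chassis ID of this node

lldp_neighbors = [
    # (local port, neighbor chassis ID, neighbor port)
    ("xe-0/0/10", "aa:bb:cc:00:00:01", "xe-0/0/11"),  # looped back to ourselves
    ("xe-0/0/12", "de:ad:be:ef:00:42", "eth0"),       # normal server attachment
]

def back_to_back_ports(neighbors, local_chassis_id):
    """Return the set of local ports cabled directly to each other."""
    bad = set()
    for local_port, chassis_id, neighbor_port in neighbors:
        if chassis_id == local_chassis_id:
            # The neighbor is this very node: the cable connects two of our
            # own ports back-to-back, so both ends should be disabled.
            bad.update({local_port, neighbor_port})
    return bad

for port in sorted(back_to_back_ports(lldp_neighbors, LOCAL_CHASSIS_ID)):
    print(f"disabling {port}: back-to-back cable detected via LLDP")
```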