
Tech Field Day 11 Preview: Plexxi

June 9th, 2016

Tech Field Day 11 is happening in Boston from 22-24 June, and I’m super happy to be invited as a delegate.

I’ve been previewing the companies attending; have a look at my introductory post: I’m heading to Tech Field Day 11 and DockerCon!

Plexxi

Plexxi is still a newish company that has previously presented at Networking Field Days 5, 6, and 7. It was started by some clever brains who want to make life easier for the people they term “Network Cloud Builders”.

Plexxi has three parts to its solution.

Plexxi Switch is an Ethernet switch connected to a bunch of other switches in a physical ring using a clever optical multiplexed connector, which means they act as if they were fully meshed without all the cabling. When a switch receives an optical signal from a neighbouring switch, it basically either terminates the signal (for traffic destined for the switch it has just arrived at) or passively passes it through optically to the next switch in the ring, without having to actually switch it.

Think how useful this is for reducing latency if you don’t have to actually switch traffic at every switch you pass through; great even for high frequency trading. Plexxi calls this LightRail. You can talk directly to switches up to 5 along either side of you in this way, with the equivalent of 20Gb of direct bandwidth between every pair of switches (it’s like a mesh, remember) but with far less cabling, as the multiplexed cable can carry the signals for all the switches. If you need more, or the bandwidth between any pair of switches creates contention, it can then change into switching mode. I think I have this right but I’m happy to be corrected.

Oh, did I mention these optical connections can be up to 10km in length, so strung together your span can be up to 80km. Interested yet? This gives you an 11-switch, 20Gb fully meshed topology. It uses Broadcom’s commodity Trident II chipsets, which are fairly common in the switching market. You can also use a pod-like architecture to create a 6-switch LightRail ring which acts as a single leaf switch and can then be connected up to an Optical Spine Layer (OSL). Multiple LightRail pods can be connected to the OSLs. This can scale out to 6 x 12 rack rows, which could give you up to 2448 10GbE ports! This drastically reduces the need for north-south switching connections and increases the east-west connectivity where your servers actually want to talk to each other.
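To make the cabling saving concrete, here’s a back-of-envelope sketch in Python. The numbers are just lifted from the description above (11 switches in a ring, direct optical reach of 5 switches either side); it compares the cable count of a full physical mesh with the ring and checks that every pair of switches really does have a direct optical path.

```python
# Back-of-envelope sketch of the LightRail ring described above.
# Assumptions are mine, taken from the text: 11 switches in a ring,
# each multiplexed fibre carrying wavelengths that reach the 5
# switches on either side.

RING_SIZE = 11
REACH = 5  # switches reachable on each side via passive passthrough

# A full physical mesh of n switches needs n*(n-1)/2 point-to-point cables.
full_mesh_cables = RING_SIZE * (RING_SIZE - 1) // 2

# The ring needs just one multiplexed cable per adjacent pair.
ring_cables = RING_SIZE

def ring_distance(a: int, b: int) -> int:
    """Hops between two switches, going whichever way round is shorter."""
    d = abs(a - b)
    return min(d, RING_SIZE - d)

# Every other switch is at most REACH hops away in one direction or the
# other, so every pair has a direct optical path.
all_pairs_direct = all(
    ring_distance(a, b) <= REACH
    for a in range(RING_SIZE) for b in range(a + 1, RING_SIZE)
)

print(f"Full mesh would need {full_mesh_cables} cables")            # 55
print(f"LightRail ring needs {ring_cables} cables")                 # 11
print(f"Every pair has a direct optical path: {all_pairs_direct}")  # True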

Plexxi Control

We’re in SDN world now too, as the whole system is managed as a single switch. This makes it much easier to configure, and all the configuration is based on policy. This part of the solution is called Plexxi Control, and it’s also the part that makes the dynamic configuration change between optical passthrough and switching. Remember the 20Gb between all switches? Well, if one particular direct connection gets saturated but there are other paths around the ring that aren’t fully loaded, Plexxi Control can dynamically reconfigure the bandwidth to stay in optical passthrough mode for as long as possible to keep latency low.
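As a toy model of that behaviour (my guess at the logic, not Plexxi Control’s actual algorithm), path selection might look something like the sketch below: prefer any optical passthrough path with headroom, and only fall back to packet switching when every optical path is saturated.

```python
# Illustrative sketch only -- a toy model of the behaviour described
# above, not Plexxi Control's actual algorithm.
from dataclasses import dataclass

@dataclass
class Path:
    hops: list          # intermediate switches traversed
    optical: bool       # True if the path can stay in passthrough
    utilisation: float  # 0.0 - 1.0 load on the busiest segment

def pick_path(candidates: list, saturation: float = 0.9) -> Path:
    # Prefer the least-loaded optical passthrough path with headroom...
    optical = [p for p in candidates if p.optical and p.utilisation < saturation]
    if optical:
        return min(optical, key=lambda p: p.utilisation)
    # ...otherwise fall back to the least-loaded path, switched if need be.
    return min(candidates, key=lambda p: p.utilisation)

paths = [
    Path(hops=[2], optical=True, utilisation=0.95),      # direct lane saturated
    Path(hops=[10, 9], optical=True, utilisation=0.40),  # go the other way round
]
print(pick_path(paths))  # chooses the 40%-loaded optical path
```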

You can also create affinity rules, so an application on switch 1 can talk to another application on switch 2 with its own dedicated bandwidth; in fact, its own dedicated wavelength. This can be fixed or dynamic. Remember, this is all a single switch, so this config is pushed down in a single policy, and everything happens at layer 2, so there’s no spanning tree in the “mesh”. This also means traffic can be directly linked between any two switches (via optical passthrough through the others), so vMotion traffic, for example, will only be terminated between two switches. Another cool thing: if a switch fails, the optical passthrough continues to work; it just continues to shine the light across the connection.
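To make the idea concrete, an affinity rule might be expressed something like the sketch below; the field names are entirely my invention, not Plexxi’s actual policy schema.

```python
# Hypothetical shape of an affinity rule -- field names are mine,
# just to illustrate the concept described above.
affinity_rule = {
    "name": "db-replication",
    "src": {"switch": 1, "workload": "app-primary"},
    "dst": {"switch": 2, "workload": "app-replica"},
    "bandwidth_gbps": 10,          # dedicated capacity between the pair
    "dedicated_wavelength": True,  # pin the traffic to its own lambda
    "mode": "fixed",               # or "dynamic" to let Control adjust it
}
```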

This is really a new approach to SDN. The Broadcom Trident II chipset also natively includes VXLAN support for VTEPs, so you can create an underlay network that’s as physically simple as your overlay network.

Plexxi Connect is the third part of the solution: an open platform for workflow integration based on StackStorm, basically bringing networking into DevOps. You can dynamically provision the network as you deploy new workloads or scale out storage and compute, for example. Plexxi Connect allows you to configure the network with automation engines like Puppet, Chef and many others. You could, say, define a Puppet manifest with the port configuration for ESXi hosts, update the manifest with a new VLAN or port, and Puppet can instruct Plexxi to add the new VLAN to the physical switch or configure the new port for the host. You can also define latency or bandwidth via policy and automate the deployment. Think how useful this could be in a converged or hyper-converged offering, where you can manage the physical network as easily as you can manage virtual ones.
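As a rough illustration of that workflow (the endpoint and payload here are hypothetical; the real integration goes through StackStorm packs or the Puppet/Chef modules), the push from an automation tool might boil down to something like:

```python
# Sketch of the kind of workflow described above, with a made-up REST
# endpoint -- I'm guessing at the API shape; Plexxi Connect's real
# integration points will differ.
import json
import urllib.request

PLEXXI_CONNECT = "https://plexxi-connect.example.com/api"  # hypothetical URL

def add_vlan_to_host_ports(vlan_id: int, host: str) -> None:
    """Ask the fabric to trunk a new VLAN to every port facing `host`,
    mirroring what a Puppet manifest change would trigger."""
    payload = json.dumps({"vlan": vlan_id, "host": host}).encode()
    req = urllib.request.Request(
        f"{PLEXXI_CONNECT}/ports/vlans",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())

# e.g. a new port group on an ESXi host needs VLAN 210 on the physical side
add_vlan_to_host_ports(210, "esxi-07.lab.local")
```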

Plexxi Connect can also react to triggers in the network, so it can provision new workloads or distribute them differently if network traffic changes. You can also create other policies, for security for example, to keep workloads isolated.

What’s new

I believe Plexxi is going to be showing us more of what Plexxi Connect can do with converged infrastructure and storage environments, with new integrations with VMware, Nutanix and Hortonworks.

The VMware integration is about optimising the network for VM migrations and vMotions within or between data centers without affecting other applications. I’d love to see more automation around how distributed switches can automatically trigger a configuration change on a physical network port, and how Plexxi and NSX can work together.

Plexxi has also joined the “Nutanix Ready” program. I’m not sure what the integration is, and I hope it isn’t just “works with” Nutanix, which would be pretty obvious. I wonder whether adding port groups to AHV, for example, will automatically configure the host ports with the new VLAN. Perhaps Plexxi can optimise rebuild operations between nodes when there is a disk or node failure, or can help with moving data around for Nutanix’s data locality. If you can dynamically increase the bandwidth between nodes to help the data locality operation after a vMotion, this could speed up application performance. I’m sure this must be great for stretched clusters.

Optimising the network for Hadoop and Hortonworks is a pretty obvious one: with a distributed system, network latency can be a serious problem when the system starts working hard, and you don’t want your Hadoop cluster to start impacting your other workloads.

I can certainly see the attraction of optimising the storage network. I haven’t seen the need for Fibre Channel networking for many years, but having a network that can intelligently ensure storage traffic gets what it needs in an IP network will make storage administrators who prefer the old segregation of FC sleep easier at night.

What I’d like to see

I’m certainly no network expert, but the Plexxi solution looks intriguing. I’d like to hear some customer success stories and find out how they are selling. Cisco and Juniper are such heavyweights in the data center; is Plexxi getting any traction? I’m wondering whether Plexxi Connect and Control can be decoupled from the switches, since changing physical switch vendor is a fairly significant strategic decision. Although I like the idea of the optical passthrough, I wonder whether companies want to introduce a new type of switch or whether they’d rather investigate white box switching, where the hardware doesn’t matter as much. How does Plexxi respond to customers that want white box switching?

Looking forward to hearing more.

Gestalt IT is paying for travel, accommodation and things to eat so I can attend Tech Field Day, but isn’t paying a penny for me to write anything good or bad about anyone.
