If everything is more or less stock Linux, what value does Cumulus actually add for it to be worth the licensing costs?
This all seems to be a giant step backwards. JunOS is now the gold standard for its CLI and configuration, due to being highly structured and well organised. Being able to validate configurations before committing them, and automatic rollbacks with commit confirm, is a much improved way of doing things over IOS. Given that Juniper hardware can often be bought significantly discounted, the cost savings of white box gear are going to be small, if they exist at all. Thus far, white box switches seem to be sold in such small volumes that comparable Juniper or Cisco gear can easily be purchased for less.
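For anyone who hasn't used it, the workflow I mean looks roughly like this (the prompt and the 5-minute timer are just illustrative):

    [edit]
    user@switch# commit check          # validate the candidate config without applying it
    user@switch# commit confirmed 5    # apply, with automatic rollback after 5 minutes
    user@switch# commit                # confirm within the window to keep the change
    user@switch# rollback 1            # or load the previous config back into the candidate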
Also, how exactly is Cumulus separating the control plane and data plane? From the look of things, SDN is going to end up being relegated to just easy automation instead, like how Cloud has just been relegated to VMs with flexible billing instead of abstracting underlying infrastructure.
>If everything is more or less stock Linux, what value does Cumulus actually add for it to be worth the licensing costs?
Cumulus's value is that it provides the translation between Linux kernel bridging/routing and the hardware-accelerated routing performed by the Broadcom Trident family. Somebody has to sign Broadcom's agreements and program for it, and Cumulus is doing that cheaper than anybody else as far as I can tell.
Another value is that you can now choose between all sorts of switch vendors, and not be as susceptible to lock-in from that angle.
Retail pricing on a white box 48x 10Gb + 4x 40Gb switch is less than $6k, and an annual Cumulus license is $1k. If Juniper can match or beat that, I would love to see it, but I doubt I ever will, because you have to spend days cultivating a relationship with Juniper to even start getting decent pricing. And really, JunOS, despite being nice, puts Juniper at a disadvantage against Linux as the network OS. I'd even pay a small premium to use Linux, as it is so much more programmable and makes it so much easier to get new racks up and going.
You can buy an EX4550 new in the $6K range, albeit with just 32x 10Gb ports; you'd need to buy the modules for 40Gb or additional 10Gb ports. This is just on the grey market (check eBay), no negotiation or relationship required. If you need support, J-Care service is $525/yr MSRP for the EX4550, with a reinstatement fee of about the same for products out of warranty. You'd be able to pay off the modules with the difference in licensing/support costs over time.
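Rough math on that last point, using the figures already mentioned:

    Cumulus license:         ~$1,000/yr
    J-Care on the EX4550:      $525/yr MSRP
    Difference:               ~$475/yr, i.e. roughly $1,400-$2,400 over a 3-5 year life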
Maybe I'm just bad at negotiating, but I haven't been able to get vendors to quote me less than double the cost of the Cumulus hardware quotes I've seen (primarily Quanta). Also, looking at support costs, the quotes I have for Cumulus are about 20% cheaper than the cheapest quotes I've seen from Juniper/Dell, just for software support.
That said, NBD or sooner hardware replacement plans are a bit weird with Cumulus. I think they expect you to keep onsite spares, which still ends up being cheaper.
I think if Cumulus becomes more popular, we might see some community code that implements validation, confirmed commits, and other such features. If we decide to go with Cumulus, and we are thinking about it, we might write something like that.
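A minimal sketch of what I have in mind (the paths, the timer, and the reload command are assumptions on my part, not anything Cumulus ships):

    #!/bin/sh
    # Apply a candidate interfaces file, roll back automatically unless the
    # change is confirmed (by removing the scheduled job) within 5 minutes.
    set -e
    cp /etc/network/interfaces /etc/network/interfaces.prev
    cp "$1" /etc/network/interfaces
    ifreload -a    # ifupdown2 on Cumulus; plain ifdown/ifup elsewhere
    echo 'cp /etc/network/interfaces.prev /etc/network/interfaces && ifreload -a' \
        | at now + 5 minutes
    # To confirm: find the job with atq and atrm it before the timer fires.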
In theory, the value of Cumulus is (1) support and (2) their proprietary switchd. (Edit: I also second the other comments that Cumulus licensing appears to be really cheap. But if you're looking for "CentSwitch" you're out of luck.)
>Also, how exactly is Cumulus separating the control plane and data plane?
"Proprietary" means "if you ever have issues with that that our support won't be able to solve for you in meaningful time span, you're basically out of luck".
We're running a gateway router on Vyatta/VyOS, and with every single hardware and software change we've had some issues. Most were relatively minor (like default settings not being appropriate for our workload) and already documented/solved by someone on the 'net. Some required reading source code and debugging/profiling to see what was going on. I must also admit some were beyond my understanding and were mysteriously solved with some shamanism and voodoo magic (like randomly tinkering with firmware and driver version combos).
I wouldn't characterize us as "anti-SDN", instead I would say that Cumulus Linux is not an SDN solution by itself.
We work closely with all the major SDN vendors for use-cases that are virtualization-centric or cloud-centric. That said, there are use-cases like Hadoop clusters that have very little need for an SDN layer, just very fast L3 IP connectivity.
It is stock Linux (Debian based), but behind the scenes we're programming a hardware forwarding ASIC to do the actual packet forwarding. This is why we run on white-box switches and not servers; see our HCL for details:
http://cumulusnetworks.com/support/linux-hardware-compatibil...
This is how a 1U switch drawing less than 200W can forward 2+ terabits/sec. A server full of 10gig/40gig NICs would be 50x slower and draw way more power.
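To put rough numbers on that (the server figure is a ballpark assumption, not a measured benchmark):

    48 x 10 Gb/s + 4 x 40 Gb/s  =  640 Gb/s per direction, ~1.28 Tb/s full duplex
    (the denser 40G boxes are where the 2+ Tb/s figure comes from)
    2,000 Gb/s / 50  ~=  40 Gb/s, about what a server with a handful of 10G/40G
    NICs can realistically push through its PCIe bus and kernel stack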
I prefer JunOS to IOS/NX-OS as well, but both of their CLIs are optimized for managing switches with hand-written (or Perl-script-generated) config files. Most Cumulus customers use automation tools. We've had folks use Chef, Puppet, CFEngine, Ansible, Saltstack, as well as a few mega-scale customers who have home-grown automation tools. The idea is to use whatever you're already using to automate servers to automate the switches; often there is substantial sharing between the server automation scripts and the network automation.
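As a toy illustration (the addressing is made up), the same tooling that templates a server's /etc/network/interfaces can template the switch's, since it's literally the same file and the front-panel ports (swp1, swp2, ...) are ordinary Linux interfaces:

    # /etc/network/interfaces fragment on the switch, typically generated
    # from a template by whatever config management you already run
    auto swp1
    iface swp1 inet static
        address 10.0.1.1
        netmask 255.255.255.252

    auto swp2
    iface swp2 inet static
        address 10.0.2.1
        netmask 255.255.255.252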
The volume is quite high already; Cumulus Linux alone is managing well over 1 million 10G ports today, and we're not the only white box OS option. Our software pricing is available on our website:
http://cumulusnetworks.com/product/pricing/
and some of our hardware partners publish pricing (keep in mind, these are web-order, quantity-1 prices):
http://whiteboxswitch.com/collections/all-switches
We don't really separate the control and data planes, and we don't really consider ourselves an SDN company; we enable many approaches to SDN by building very high performance fabrics that you can run your SDN layer on top of. We work closely with VMware NSX (formerly Nicira), Midokura, Nuage, and PLUMgrid. Some of our higher scale customers have their own SDN solution.
Hi Nolan, thank you for responding to my question. In my naivete I presumed support for the ASICs would end up in the Linux kernel, or at least be made available due to GPL requirements, but I now see another post mentioned you have a proprietary switching daemon, so I guess it extends a lot further than just driver support.
Thanks for the link; I've seen similar pricing on the Quantas before, but only from obscure online retailers whose prices are often well below those from more trustworthy sources. Would you recommend whiteboxswitch.com as a reliable source?
I don't think optics are a fair selling point, however, as you can easily purchase 3rd party optics programmed to be Cisco, Juniper, etc. compatible. We currently buy Juniper-coded SFP+ SR optics for <$25 from a distributor in China.
We've been working with the kernel folks on trying to get support for forwarding ASICs into the kernel, but it is probably going to take a while. For now, ASIC programming is done from userspace by a daemon we call "switchd", which observes changes in the kernel's state, and programs the hardware accordingly.
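Conceptually (this is only an illustration of the idea, not how switchd is actually implemented), it's similar to:

    # watch the kernel's link/address/route/neighbor state and mirror it into hardware
    ip monitor all | while read -r event; do
        program_asic "$event"    # hypothetical stand-in for the vendor SDK calls
    done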
whiteboxswitch.com is a fine option, though bm-switch.com seems to have lower web pricing at the moment.
Most of our customers are concerned about support, and thus aren't interested in unsupported optics. I guess as long as Juniper doesn't find out... =)
The use of Linux and commodity hardware is interesting but when I read some of the sales copy on the Cumulus website I was reminded of why I hate enterprise software.
> The emerging software-defined data center (SDDC) paradigm involves automated control of all network, server, storage and application resources, resulting in a cloud operating system. Unified visibility is essential, enabling the cloud operating system to efficiently allocate resources, detect problems and ensure consistent performance.
Oh man, I just had flashbacks of pages and pages of enterprise lingo that were much less clear than your sentence. I think Ansible suffered from that at one point, too.
I think it is very safe to say that most dedicated "smart" network hardware is going to disappear in the next 5-15 years to be replaced by virtual machines acting as several pieces of network equipment.
You just patch all the cables straight into what is effectively a hypervisor, and then you generate virtual switches, routers, firewalls, and so on completely via the VM management console.
You're already seeing some of Cisco's smaller competition go this way. It saves physical space, works on standard hardware, and is easier to centrally manage (as you aren't physically moving wires after initial install).
It will be interesting to see if Cisco jumps aboard this train or continues to pretend the sands underneath it aren't shifting. I'm sure there would be a market for VMs running Cisco's IOS, which is still by far the most popular network operating system.
"I think it is very safe to say that most dedicated "smart" network hardware is going to disappear in the next 5-15 years to be replaced by virtual machines acting as several pieces of network equipment."
I agree with this idea in that most of the intelligence will be pushed out towards the edges of the network, and overlay networks will make a lot of the physical network invisible to the applications and even the management. However, this does not mean that the networking hardware can all be just dumb devices. First of all, this kind of imperative networking will not scale to large data center implementations, and secondly, dumb devices become useless once disconnected from their controller.
"It will be interesting to see if Cisco jumps aboard this train or continues to pretend like the sands underneath it aren't shifting. I'm sure there would be a market for VMs running Cisco's IOS, it is still by far the most popular network operating system."
The way Cisco is getting into the SDN market is by leveraging Application Centric Infrastructure (ACI) to define the complete DC (compute, storage and network) with templates and letting the whole system configure itself. The switches supporting this infrastructure are based on merchant silicon in combination with Cisco ASICs to provide optimal performance at these intelligent edges. Add to that the available open APIs and the investment in OpenStack, and you have a rock-solid and cost-effective solution for the future of the DC and DevOps.
Cisco already has virtual switches. My company just bought the Cisco Nexus 1000V virtual switch. You can run your entire company from just one physical piece of hardware with Hyper-V and a Cisco virtual switch installed.
>These appear as Ethernet interfaces normally do on Linux, visible with ip link show. (ifconfig has long been deprecated by the community in favor of the iproute2 family of tools.)
I am amused when I read things like this; 99% of the Linux-savvy folks I know use ifconfig.
I'm the CTO of Cumulus, and I still sometimes use ifconfig. My fingers are just too programmed at this point.
That said, ifconfig doesn't support easy addition of multiple IP addresses to a single interface. You have to create 1 alias interface per IP, which gets unwieldy fast.
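For example (addresses are placeholders):

    # ifconfig: one alias interface per extra address
    ifconfig eth0:0 192.0.2.10 netmask 255.255.255.0 up
    ifconfig eth0:1 192.0.2.11 netmask 255.255.255.0 up

    # iproute2: any number of addresses on the interface itself
    ip addr add 192.0.2.10/24 dev eth0
    ip addr add 192.0.2.11/24 dev eth0
    ip addr show dev eth0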
I don't think you should. Every UNIX operating system has ifconfig. Writing a nonstandard tool instead of fixing the standard one is very antisocial behaviour from the Linux community, mirroring that of Microsoft. Have a look, for example, at how excellent the ifconfig program for OpenBSD is. These days Linux has become so popular that it starts to pretend other operating systems and standards are no longer relevant.