Facebook is in the midst of a huge project that should be worrying Cisco.
Facebook is building an entirely new kind of network, using it to support its own massive operations, and — this is the scary part for Cisco — giving the software and designs for the hardware away for free.
This project has been going on for a while, but today Facebook announced another advance, a new networking product it calls the “6-pack.”
It’s interesting that Facebook chose today, the day Cisco reports earnings, to offer an update on the technology that challenges the network industry. Cisco is far and away the market share leader of this $US23 billion market, holding over 60% of it.
In its blog post, Facebook also called out how “traditional networking technologies…tend to be too closed, monolithic, and iterative for the scale at which we operate and the pace at which we move.” Another clear dig at the industry leader.
The 6-pack looks like this:
The 6-pack builds on a radically new piece of network equipment that Facebook introduced last June, called the Wedge. The Wedge is a new kind of network switch, a piece of equipment that moves data around a company’s computer network. With the Wedge, everything is standard, from the software to the choice of processor (Intel, AMD, or ARM), and modular — you can pick and choose components and snap them together like Legos. It’s also “open source,” which means the software and hardware designs are given away for free and anyone can use or modify them.
With the 6-pack, Facebook invented a way to stack the Wedge units together. To grow the network, you simply add more switches to a box and presto, your network can now handle a lot more data. This lets the network grow as your company’s needs grow.
Here’s a closer look at the card that stacks together in a 6-pack.
This product is also part of a new trend in building networks called “software defined networking.”
SDN is a radically new way to build networks that takes the fancy features baked into network equipment and puts them into software. You still need hardware, but you need less of it, and less expensive varieties. The hardware switch becomes easier to move around and manage, and far less expensive, all things that work better with today’s cloud-computing environments.
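The core SDN idea can be seen in a few lines of code. This is a minimal sketch, not any real SDN product’s API: a central controller holds the forwarding policy in software and pushes simple rules down to switches, so the switches themselves can stay cheap and dumb. All class and port names here are invented for illustration.

```python
# Minimal sketch of the SDN split: the "smarts" live in controller software,
# the switch just stores and applies a rule table. Names are illustrative.

class Controller:
    """Central software brain that computes forwarding rules."""
    def __init__(self):
        self.policy = {}  # destination address -> output port

    def set_route(self, dest, port):
        self.policy[dest] = port

    def push_rules(self, switch):
        # The switch receives a plain rule table; no fancy features on-box.
        switch.rules = dict(self.policy)

class Switch:
    """Commodity switch: no built-in intelligence, only a rule table."""
    def __init__(self):
        self.rules = {}

    def forward(self, dest):
        # Apply whatever rule the controller installed; otherwise drop.
        return self.rules.get(dest, "drop")

controller = Controller()
controller.set_route("10.0.0.1", "port-1")
controller.set_route("10.0.0.2", "port-2")

switch = Switch()
controller.push_rules(switch)
print(switch.forward("10.0.0.1"))     # port-1
print(switch.forward("192.168.0.9"))  # drop (no rule installed)
```

Because the policy lives in one place in software, changing how the whole network behaves means updating the controller and re-pushing rules, rather than reconfiguring each proprietary box by hand.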
To be clear, Facebook won’t sell the Wedge or the new 6-pack. Both are part of one of Facebook’s most important side tech projects, the Open Compute Project (OCP).
Members of the OCP are redesigning all kinds of data center hardware to make that hardware faster, easier to fix, and greener. Anyone is free to contribute to the designs, and contract manufacturers are standing by to build them.
Cisco has its own SDN offering, the Nexus 9000, which the company says is selling well. It is Cisco’s fastest, most powerful piece of network equipment. Plus, Cisco’s competitors, like HP and Juniper, also have SDN offerings.
We reached out to Cisco for comment, and although we haven’t heard back yet, in the past the company has told us that it’s not worried about these kinds of projects, as they will only appeal to a few big customers.
That’s true: A typical enterprise doesn’t buy hardware at the scale that would make going through this process worthwhile. But huge internet companies like Facebook, Microsoft, and Rackspace are interested in this new way to build networks. In fact, people from these companies sit on the OCP board, use its designs in their own data centres, and build their own computers and network switches instead of buying commercial hardware.
This particular network project within Facebook is being closely watched by the whole network industry. If Facebook can run its huge network this way, delivering all the photos and videos and instant messages and status updates it does, then others will be willing to try it.
In fact, a startup called Pluribus — run by a former long-time Cisco exec and backed by Yahoo founder Jerry Yang — just raised $US50 million, bringing its total funding to date to $US95 million. It is building products in a similar style to Facebook’s Wedge, using off-the-shelf components in its hardware and its own flavour of open source software. Investors in Pluribus’s latest round included big internet companies in Asia and Europe that are using its device and/or offering to sell it to others.
In other words, a shift is already happening in internet and service provider networks worldwide and, even though Cisco’s got game when it comes to SDN, this new tech is still a threat to Cisco’s high margins — typically around 60% on networking gear.
Earlier this week, in advance of Cisco’s earnings, Credit Suisse analyst Kulbinder Garcha warned in a research note:
SDN a secular threat to GM [gross margins]. Despite potential near term momentum, we remain concerned regarding the impact of SDN threatening what remains the most profitable part of the IT stack. We believe it will introduce competition at multiple points in the network and while the impact will take time, the threat will be very real, shrinking gross profit dollars for the entire networking stack.
None of this has been lost on the network engineers at Facebook inventing their new network. In a detailed post about the new network, Yuval Bachar, a hardware engineer at Facebook (and lead engineer on 6-pack), took some mild digs at the networking industry that Cisco dominates.
He explained that he hopes the whole industry will adopt this tech, which is open and not controlled by the vendors.
Here are the full geeky details as explained by Bachar.
Introducing “6-pack”: the first open hardware modular switch
As Facebook’s infrastructure has scaled, we’ve frequently run up against the limits of traditional networking technologies, which tend to be too closed, monolithic, and iterative for the scale at which we operate and the pace at which we move. Over the last few years we’ve been building our own network, breaking down traditional network components and rebuilding them into modular disaggregated systems that provide us with the flexibility, efficiency, and scale we need.
We started by designing a new top-of-rack network switch (code-named “Wedge”) and a Linux-based operating system for that switch (code-named “FBOSS”). Next, we built a data center fabric, a modular network architecture that allows us to scale faster and easier. Both of these projects were a big step forward, helping us break apart the hardware and software layers of the stack and opening up greater visibility, automation, and control in the operation of our network.
But even with all that progress, we still had one more step to take. We had a TOR, a fabric, and the software to make it run, but we still lacked a scalable solution for all the modular switches in our fabric. So we built the first open modular switch platform. We call it “6-pack.”
The “6-pack” platform is the core of our new fabric, and it uses “Wedge” as its basic building block. It is a full mesh non-blocking two-stage switch that includes 12 independent switching elements. Each independent element can switch 1.28Tbps. We have two configurations: One configuration exposes 16x40GE ports to the front and 640G (16x40GE) to the back, and the other is used for aggregation and exposes all 1.28T to the back. Each element runs its own operating system on the local server and is completely independent, from the switching aspects to the low-level board control and cooling system. This means we can modify any part of the system with no system-level impact, software or hardware. We created a unique dual backplane solution that enabled us to create a non-blocking topology.
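A quick back-of-the-envelope check makes the numbers above concrete. Each element’s 1.28 Tbps is exactly 16 x 40GE to the front plus 16 x 40GE to the back, and twelve such elements give the chassis its raw switching capacity. (Working in Gbps keeps the arithmetic exact.)

```python
# Sanity-checking the "6-pack" figures quoted in the post.
# One switching element: 16 x 40GE front-facing + 16 x 40GE back-facing.
GBPS_PER_ELEMENT = 16 * 40 + 16 * 40   # 1280 Gbps = 1.28 Tbps
ELEMENTS = 12                          # independent switching elements per chassis

print(GBPS_PER_ELEMENT)                # 1280
print(ELEMENTS * GBPS_PER_ELEMENT)     # 15360 Gbps of raw switching capacity
```

That per-element split — half the capacity facing servers, half facing the fabric — is what makes the two-stage design non-blocking: the back side can always carry everything the front side takes in.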
We run our networks in a split control configuration. Each switching element contains a full local control plane on a microserver that communicates with a centralized controller. This configuration, often called hybrid SDN, provides us with a simple and flexible way to manage and operate the network, leading to great stability and high availability.
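The hybrid-SDN split described above can be sketched in code. This is an illustrative model with invented names, not Facebook’s actual software: each element’s microserver keeps a full local copy of the forwarding state and syncs with a centralized controller, so traffic keeps flowing even if the controller becomes unreachable.

```python
# Sketch of "hybrid SDN" split control: a local control plane on each
# element syncs with a central controller but can operate on its own.
# All names here are invented for illustration.

class CentralController:
    """Holds the global view of routes; may become unreachable."""
    def __init__(self):
        self.global_routes = {}
        self.reachable = True

    def fetch_routes(self):
        if not self.reachable:
            raise ConnectionError("controller unreachable")
        return dict(self.global_routes)

class LocalControlPlane:
    """Runs on each element's microserver; owns a local copy of state."""
    def __init__(self, controller):
        self.controller = controller
        self.routes = {}

    def sync(self):
        try:
            self.routes = self.controller.fetch_routes()
        except ConnectionError:
            pass  # keep using last-known-good local state

    def forward(self, dest):
        return self.routes.get(dest, "drop")

ctrl = CentralController()
ctrl.global_routes = {"10.0.0.1": "port-3"}

element = LocalControlPlane(ctrl)
element.sync()
print(element.forward("10.0.0.1"))   # port-3

ctrl.reachable = False               # controller outage...
element.sync()
print(element.forward("10.0.0.1"))   # still port-3: local plane carries on
```

That failure behaviour is the point of the hybrid design: centralized management for simplicity, local autonomy for availability.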
The only common elements in the system are the sheet metal shell, the backplanes, and the power supplies, which makes it very easy for us to change the shell to create a system of any radix with the same building blocks.
Below you can see the high-level “6-pack” block diagram and the internal network data path topology we picked for the “6-pack” system.
The line card
If you’re familiar with “Wedge,” you probably recognise the central switching element used on that platform as a standalone system utilising only 640G of the switching capacity. On the “6-pack” line card we leveraged all the “Wedge” development efforts (hardware and software) and simply added the backside 640Gbps Ethernet-based interconnect. The line card has an integrated switching ASIC, a microserver, and server support logic to make it completely independent and to make it possible for us to manage it like a server.
The fabric card
The fabric card is a combination of two line cards facing the back of the system. It creates the full mesh locally on the fabric card, which in turn enables a very simple backplane design. For convenience, the fabric card also aggregates the out-of-band management network, exposing an external interface for all line cards and fabrics.
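To see why pre-wiring the full mesh on the fabric card simplifies the backplane, it helps to count the links involved. A full mesh of n nodes needs n*(n-1)/2 point-to-point links, which grows quickly; putting those links on the fabric card rather than the backplane keeps the backplane itself simple. (The formula and the sketch below are general illustration, not figures from the post.)

```python
# Number of point-to-point links in a full mesh of n nodes: each of the
# n nodes connects to the other n-1, and each link is shared by two nodes.
def full_mesh_links(n):
    return n * (n - 1) // 2

for n in (2, 4, 8):
    print(n, full_mesh_links(n))
# 2 -> 1, 4 -> 6, 8 -> 28: link count grows roughly with n squared
```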
Bringing it together
With “6-pack,” we have created an architecture that enables us to build any size switch using a simple set of common building blocks. And because the design is so open and so modular — and so agnostic when it comes to switching technology and software — we hope this is a platform that the entire industry can build on. Here’s what we think separates “6-pack” from the traditional approaches to modular switches:
“6-pack” is already in production testing, alongside “Wedge” and “FBOSS,” in Facebook data centres. We plan to propose the “6-pack” design as a contribution to the Open Compute Project, and we will continue working with the OCP community to develop open network technologies that are more flexible, more scalable, and more efficient.
Thanks to the entire Facebook team who have contributed to the development of “6-pack,” “Wedge,” “FBOSS,” and “fabric.”