“Some larger operators already have datacentres facing these issues. You either follow the current path with the notion that you are going to hit the wall in a short while or take a capital expenditure hit with a complete overhaul,” says Halilovic.
“We are not talking about a software solution, or repurposing or building on top of current infrastructure. In most cases, it is a serious rip-and-replace.”
However, Trevor Dearing, head of enterprise marketing at Juniper Networks, believes the inadequacies of existing legacy datacentre architecture are only the initial driver. “Phase one of that adoption is where they see the benefits of convergence in the server rack and the cabling, and the fact they have to go to 10GbE to support virtual machines, so they may as well go to converged network adapters as well, for power and space savings,” he says.
“Phase two will be application-driven, where apps such as Skype, or Fusion-io running on solid state disks, demand a lot of data movement. This will change the way we build datacentres: less of a centralised and more of a distributed storage model.”
Converging in a crisis
As ever, cost is an issue, particularly in a market where many of the countries you would normally expect to lead technology adoption face a continuing economic crisis.
Full convergence requires new switches, cabling and server adapters, if not new servers with converged network adapters (CNAs) built in. Top-of-rack switches offering dual-purpose Ethernet and FC ports that plug into existing datacentre networks are only a low-cost starting point: their chief benefit is as a component that can be reused when the full convergence revamp takes place.
“There is no cost premium between a SAN-enabled switch and a 10GbE switch as we are trying to push it [LAN/SAN convergence] forwards, so we would like to think our solutions are relatively inexpensive and the total cost of ownership [TCO] is good,” says HP’s Berger.
Uncertainty about standards is another hurdle, says Halilovic, as is entrenched support and expertise for FC SAN technology. The FC roadmap is also a problem: it aims to deliver transmission speeds of 32Gbit/s in 2014, with 64Gbit/s equivalents to follow when the market demands. FCoE, by contrast, follows the Ethernet roadmap, currently delivering 10Gbit/s, with 40Gbit/s a few years away and 100Gbit/s further down the line.
“First and foremost, there are barriers within organisations in terms of how things have worked historically with separate networks, separate storage and separate servers, and that is hard to break,” he says.
So can FCoE succeed where FCIP, iSCSI and a host of other technologies have previously failed? Can it persuade potential customers that it is worth building a single network to connect all the resources that separate LANs and SANs serve today?
“FCIP in effect morphed into what FCoE is now. When iSCSI came out, there was not the same extent of adoption for 10GbE, but now we have 10Gbit/s Ethernet and we are moving towards 40Gbit/s IP pipes, so you have more bandwidth to play with and a big enough pipe to converge the two,” explains Berger.
“FCoE has more support from vendors, including Cisco and Brocade, than FCIP, and converged network adapters are built into the fabric,” says Dearing. “People think FCoE will be as simple as Ethernet, even if the reality is that they are all SCSI over something. It feels like there is more acceptance across the board for Ethernet, whereas iSCSI was driven by mid-tier storage vendors. If you are going down the convergence route, it may be easier to end up with non-storage people building Ethernet storage networks.”
Vendor race to the FCoE line
Fibre Channel over Ethernet (FCoE) may not be the only technology component in the converged datacentre architecture put forward by major networking, server and storage companies, including Cisco, Juniper Networks, Brocade, HP, Oracle and IBM. But it is the main constituent of that architecture's foundation: local area network (LAN) and storage area network (SAN) convergence.
FCoE is a storage protocol that enables data traffic from Fibre Channel (FC) devices to run directly over Ethernet networks, converging storage and IP protocols into a single cable and interface used by servers, storage arrays, networks and other devices within the datacentre.
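To make the single-cable idea concrete, a minimal sketch of the FC-BB-5 frame layout: a full Fibre Channel frame is carried unmodified inside an Ethernet frame with the FCoE Ethertype (0x8906), bracketed by start-of-frame and end-of-frame delimiters. The function name and delimiter choice (SOFi3/EOFn) here are illustrative, not from the article.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE

def fcoe_encapsulate(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an FCoE Ethernet frame (FC-BB-5 layout)."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # FCoE header: version nibble plus reserved bits (13 bytes), then SOF byte
    fcoe_header = bytes(13) + bytes([0x2E])   # 0x2E = SOFi3 delimiter
    # Trailer: EOF byte plus 3 reserved bytes
    trailer = bytes([0x41]) + bytes(3)        # 0x41 = EOFn delimiter
    return eth_header + fcoe_header + fc_frame + trailer
```

The point of the layout is that the FC frame crosses the Ethernet network intact, so existing FC storage arrays and drivers see ordinary Fibre Channel traffic at each end.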
The technology was originally defined in 2009 by the T11 committee of the International Committee for Information Technology Standards (INCITS) in its FC-BB-5 specification. It requires that FCoE-enabled switches support the IEEE's datacentre bridging (DCB) traffic management standards.
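The DCB requirement exists because Fibre Channel assumes a lossless transport, which plain Ethernet is not. Priority-based flow control (IEEE 802.1Qbb), one of the DCB standards, lets a switch pause only the traffic class carrying storage while other classes keep flowing. As a rough illustration of the mechanism, a sketch of the PFC pause frame a receiver emits (the function name and API here are illustrative):

```python
import struct

MAC_CONTROL_ETHERTYPE = 0x8808             # IEEE 802.3 MAC control frames
PFC_OPCODE = 0x0101                        # 802.1Qbb priority flow control
PAUSE_DST = bytes.fromhex("0180C2000001")  # reserved pause multicast address

def pfc_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
    """Build a PFC frame pausing the given priorities for the given quanta."""
    enable_vector = 0
    timers = [0] * 8                       # one pause timer per priority class
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio
        timers[prio] = quanta
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *timers)
    return PAUSE_DST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
```

Pausing, say, priority 3 (commonly used for FCoE traffic) back-pressures storage frames hop by hop instead of dropping them, which is what lets FC run over Ethernet without its usual buffer-to-buffer credits.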
Proposals for its successor – FC-BB-6 – are subject to the usual vendor squabbling about what it should and should not include. HP solution architect Eugene Berger says that, like its competitors, HP is busy “investigating” the proposals and sitting on the relevant standards committees. But the delay is seen as something of a problem for all interested vendors.
“We need to all use the same FCoE protocols. Otherwise, we risk customer lock-in, especially converging the network and the SAN,” he says.