Putting fabric computing to the test

The tech sector loves a new buzzword. Company slogans and hyped-up products come and go. I believe it’s good to stay sceptical of what gets thrown around, because when a truly powerful shift happens, you can see it more clearly. Today, we are in the midst of one of the greatest shifts in tech infrastructure in recent times: the unified data centre fabric.

Gartner reports that fabric computing is on the radar for several IT groups, and we are seeing companies across Asia-Pacific migrating to cloud-based operations built on fabric data centres. But what exactly is a unified data centre fabric?

Fabric computing is, in a word, unified. It weaves together compute, networking, storage access and virtualisation into one cohesive system. The term ‘fabric’ comes from its appearance: the intertwined components and cabling resemble a woven fabric.

Fabric computing is so powerful because it brings traditionally disparate, siloed processes into one unified system. A fabric allows you to reconfigure all system components at the same time, which reduces management time and cost by letting you manage previously separate systems holistically.

Previously, data centres were full of siloed legacy infrastructure and, as a result, were complex. They were very large and consumed massive amounts of power. This made them expensive to run, extremely difficult to scale to support the growth of new applications, and tremendously difficult to upgrade.

Fabrics are optimised for virtualisation. In practical terms, this means that whenever you add a new blade or rack server to your data centre, the time it takes for that server to come online is reduced from hours to minutes.

Fabric computing consolidates infrastructure in the data centre by reducing what you physically have on premise and then connecting what you have with high-bandwidth connections. This saves both space and cost by increasing energy efficiency and allowing your servers to do what they do best: deliver computing power.

I know, this sounds a bit theoretical, so let me go through an example from Australia.

The Country Fire Authority (CFA) is a largely volunteer fire and emergency services organisation that serves 3.3 million people in more than one million homes and properties across the Australian state of Victoria.

In response to some of the devastating bushfires in Australia, the Australian government established a Royal Commission to improve the way that communities prepare for and respond to bushfires. The Commission recommended that the technology in CFA should be run less like a volunteer organisation and more like a professional one. In particular, the CFA needed to ensure high availability of critical applications and services for rapid emergency response; reduce operational risk as well as power, cooling, cabling and server provisioning time; and offer better technology services without additional budget.

As a first step towards these goals, CFA deployed Cisco networking equipment that helped it extend the reach and performance of its WAN communications across greater geographic areas and support rapid responses to emergencies. Once its WAN communications capabilities had been upgraded, CFA turned its attention to the data centre.

At the time, CFA had a single data centre built using a client-server architecture with most applications running on single 1U, 2U, and 5U servers. It also had a second data centre that it used as a disaster recovery site in an active-passive architecture. The CFA team found a more flexible and scalable infrastructure in a data centre solution consisting of the Cisco Unified Computing System™, VMware, and NetApp.

“Cisco was already a trusted partner due to many successful networking initiatives,” says Glenn Kerr, network administrator at CFA. “But more importantly, UCS was clearly the most appropriate solution for CFA. And when we looked at the benefits we’d get by combining UCS with VMware virtualisation and NetApp storage, we realised we had a great fit for achieving our high-availability goals.”

Today, CFA has two data centres that are approximately 140 kilometres apart in an active-active architecture. Cisco Nexus® 1000V Series Switches were also added to provide visibility to virtual machines at the virtual access layer and virtualisation intelligence to the network.

With its new data centre architecture, CFA can achieve greater value from its data centre investment and provide enhanced services with the same number of personnel. What’s more, building a new server takes about 20 per cent of the time it did previously.

“The time we save in deploying our physical servers gives us more time to invest in designing even better services,” says Kerr. “Before, we deployed point solutions. Now we deploy highly available and redundant solutions. If we had to do the same task using a physical approach, it would take us four or five times longer than it does under UCS.”

With the resource pooling achieved through virtualisation, CFA can direct more resources to a particular application running on UCS, where previously resources could run out. In three or four minutes, CFA can back up its whole next-generation environment, and restoring from one of those backups takes just 10 minutes. Previously, CFA had to find the right tape, load it, catalogue it, and then restore from it; it once took three weeks to recover three Exchange mailboxes. In the UCS environment, it takes maybe 20 minutes.

Something this powerful isn’t just a buzzword or a hyped-up product. Fabric is where we blend public, private and hybrid clouds to offer whatever solution is needed to transform a business. It turns IT departments into active business profit centres by allowing infrastructure and applications to be deployed rapidly and efficiently. Fabric is a collaborative solution at its best.

Courtney Dodds is the manager of Cisco's ANZ data centre division.

