Why hyper-converged systems are the future of the data center

Emre Bozlak, Solution Architect
Hyper-converged systems have taken the IT world by storm and have quickly started to change how we think the data center should and could be architected. In this article, I'll explain the benefits of adopting a hyper-converged system, which integrates compute (CPU and memory) and storage in the same box instead of as individual units. I'll also provide examples of traditional data center issues that can be resolved with hyper-converged systems.

A remedy for storage headaches
If you asked me what the most radical change brought by hyper-converged systems is, I would immediately say the simplification of storage configuration.

In conventional data center architecture, installing and managing storage has always been a troublesome process. After the arrays are installed, the RAID levels set, the zones configured on the SAN switches, and the volumes and LUNs created on the storage, every LUN presented to vSphere still has to be defined and tuned according to the vendor's best practices. This is far more complex and error-prone than it sounds, and improper storage configuration is a common cause of ESXi issues. On top of the installation work, the whole stack has to be continuously monitored and maintained. And if you want features such as deduplication, volume backup, or volume replication, you usually need additional licenses.
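
To make the amount of manual tuning concrete, here is a minimal Python sketch of the kind of per-LUN scripting an administrator often ends up writing against an ESXi host. It simply wraps two real esxcli commands; the device identifiers and the choice of round-robin path selection are illustrative assumptions on my part, not vendor guidance.

# Minimal sketch of the per-LUN tuning a vSphere admin may end up scripting
# in a traditional SAN setup. Assumes shell access on an ESXi host; the NAA
# identifiers and the round-robin policy below are illustrative only.
import subprocess

def list_nmp_devices() -> str:
    """Return the raw NMP device listing from the local ESXi host."""
    return subprocess.run(
        ["esxcli", "storage", "nmp", "device", "list"],
        capture_output=True, text=True, check=True,
    ).stdout

def set_round_robin(device_id: str) -> None:
    """Switch one LUN's path selection policy to round robin."""
    subprocess.run(
        ["esxcli", "storage", "nmp", "device", "set",
         "--device", device_id, "--psp", "VMW_PSP_RR"],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical NAA identifiers: in practice you would parse them out of
    # list_nmp_devices() and filter by storage array model.
    for lun in ["naa.60000000000000000000000000000001",
                "naa.60000000000000000000000000000002"]:
        set_round_robin(lun)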

Now, with hyper-converged systems, those issues have all but disappeared. You simply rack a new node in the data center and start using it. Because compute and storage are integrated in the same box, no SAN cabling or manual storage configuration is required. And since most hyper-converged systems ship with a hypervisor, the node arrives with a sensible configuration and with the deduplication, backup, and replication features you need.

Here, one of the major hyper-converged vendors, SimpliVity, discusses the main problems of traditional enterprise storage and how hyper-convergence can help solve them:

https://www.simplivity.com/data-center-infrastructure/enterprise-storage/

Inter-compatibility of hardware updates made easy
Another important data center challenge that hyper-converged systems solve is the inter-compatibility of hardware and firmware updates.

Traditionally, even a simple firmware update or hardware change means checking compatibility against every other piece of hardware in the ecosystem and against the hypervisor. If new hardware is purchased, its compatibility with the current system has to be verified as well. And no matter how carefully you confirm that everything should work, you can still hit operational risk or an unexpected issue during the update. Unfortunately, I know this from personal experience: while updating the firmware of our HBA cards, both HBA modules rebooted simultaneously, and we soon realized the HBAs would not come back up after the update. It was an unforeseen bug, and we had to stop all update work until our vendor released a new update package.
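
As a small illustration of that cross-checking burden, here is a hypothetical Python sketch of the firmware-versus-driver matrix lookup you end up doing by hand before every update. The matrix contents are invented for the example; in practice you would check your vendor's support matrix and the VMware compatibility lists.

# Illustrative sketch of pre-update compatibility checking: the new HBA
# firmware is validated against the driver and ESXi build actually running.
# The matrix below is made up for the example.
from typing import NamedTuple

class HostInventory(NamedTuple):
    esxi_build: str
    hba_driver: str
    hba_firmware: str

# Hypothetical support matrix: (esxi_build, hba_driver) -> supported firmware
SUPPORT_MATRIX = {
    ("6.0-u2", "lpfc-11.1"): {"11.1.38.0", "11.1.40.2"},
    ("6.0-u2", "lpfc-10.7"): {"10.7.110.0"},
}

def firmware_supported(host: HostInventory, target_firmware: str) -> bool:
    """Return True if the target firmware is listed for this build/driver pair."""
    supported = SUPPORT_MATRIX.get((host.esxi_build, host.hba_driver), set())
    return target_firmware in supported

host = HostInventory(esxi_build="6.0-u2", hba_driver="lpfc-11.1",
                     hba_firmware="11.1.38.0")
print(firmware_supported(host, "11.1.40.2"))    # True: safe to schedule
print(firmware_supported(host, "12.0.193.13"))  # False: stop and check the vendor lists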

By offering an all-in-one solution, hyper-converged systems update the firmware of every hardware component in a single pass, and because everything comes from one manufacturer, compatibility issues are eliminated.
Most vendors follow a similar update process. See the link below from Nutanix to see how easy it is to update the firmware and hypervisor of a hyper-converged box:

http://www.nutanix.com/2014/11/19/radically-simple-hypervisor-upgrades/

De-centralizing the data center solves problems faster
Hyper-convergence also allows for a de-centralized data center, which is a huge benefit when something goes wrong: it removes single points of failure such as a blade chassis or a storage controller.

In a non-hyper-converged data center, a chassis failure or similar incident suddenly interrupts every physical host in that chassis and hundreds of VMs, not to mention the storage controllers. Although we design the critical components of our data centers with redundancy, we rarely have a spare blade chassis or an additional storage controller that can deliver the same performance once that much IO throughput is lost.

Hyper-converged systems don't require smart chassis; each node is a storage controller and the data is kept distributed across the nodes, so the problems described above are avoided.
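
To make that idea concrete, here is a minimal Python sketch, under simplifying assumptions, of how blocks of data could be spread across nodes with a replication factor of 2 so that a single node failure never removes the only copy. The round-robin placement and the replication factor are illustrative; real hyper-converged platforms use their own placement logic.

# Minimal sketch: each node acts as a storage controller and every block is
# written to more than one node, so losing a single node does not lose data.
from collections import defaultdict

NODES = ["node-1", "node-2", "node-3", "node-4"]
REPLICATION_FACTOR = 2

def place_block(block_id: int) -> list[str]:
    """Pick REPLICATION_FACTOR distinct nodes for one block (round robin)."""
    return [NODES[(block_id + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]

def surviving_copies(block_id: int, failed_node: str) -> list[str]:
    """Copies of a block still reachable after one node fails."""
    return [n for n in place_block(block_id) if n != failed_node]

placement = defaultdict(list)
for block in range(8):
    for node in place_block(block):
        placement[node].append(block)

print(dict(placement))                 # blocks spread across all four nodes
print(surviving_copies(5, "node-2"))   # at least one copy survives the failure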

Understanding the decentralized data center starts with understanding how hyper-converged storage works. Both links below explain how data is distributed across nodes and how data redundancy is handled, and I strongly recommend reading them to fully understand your infrastructure:

http://stevenpoitras.com/the-nutanix-bible/#idp319984
http://demand.simplivity.com/download-hyperconverged-infrastructure-for-dummies

Disaster recovery simplified
Disaster recovery has always been a troublesome and complex process. Traditionally, the storage in the data center has to be replicated to the DR site, and when that synchronization runs over the WAN, dedicated WAN optimization is usually required. Hyper-converged systems generally ship with storage replication and WAN optimization out of the box. This not only simplifies the initial setup but also greatly simplifies off-site backup, since the data can be synchronized to a cloud service provider instead of shipping tapes.
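
As a rough, back-of-the-envelope illustration of why built-in data reduction and WAN optimization matter here, the short Python sketch below estimates how long one day of changed data would take to replicate over a WAN link. All the numbers are hypothetical; plug in your own daily change rate, data-reduction ratio, and bandwidth.

# Back-of-the-envelope check: can a day's worth of changes cross the WAN in
# time to meet the replication window? All inputs are hypothetical.
def replication_hours(changed_gb_per_day: float,
                      data_reduction_ratio: float,
                      wan_mbps: float) -> float:
    """Hours needed to ship one day of changes across the WAN."""
    effective_gb = changed_gb_per_day / data_reduction_ratio
    seconds = (effective_gb * 1024 * 8) / wan_mbps   # GB -> MB -> megabits
    return seconds / 3600

# 500 GB of daily change, 4:1 data reduction, 100 Mbps WAN link
print(round(replication_hours(500, 4.0, 100), 1))  # about 2.8 hours, fits a nightly window
# Same change rate with no data reduction
print(round(replication_hours(500, 1.0, 100), 1))  # about 11.4 hours, the WAN becomes the bottleneck
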
The resources below will help you understand which hyper-convergence features and components can help you build an effective DR/BC architecture:

http://go.nutanix.com/rs/nutanix/images/Nutanix_Disaster_Recovery_Solution_Brief.pdf
https://www.simplivity.com/data-protection/disaster-recovery/

The takeaway
Many IT managers I've spoken with say they are trying to make their data centers as lean as possible in order to simplify operations, streamline purchasing, and keep the learning curve gentle for IT staff.

This is ultimately why hyper-convergence will end up dominating the data center. By supplying all storage and compute needs from a single manufacturer and managing the whole infrastructure from one point, it significantly simplifies the data center. That simplicity means fewer problems and a reduced operational burden for infrastructure teams, and being able to call a single manufacturer for support when something does go wrong is valuable to the entire department.

 

Comments (1)

Danny Child, IT Manager

Commented:
Great article, very useful for me as a new starter with hyper-convergence.  Appreciate the links for further reading too.  Nice one.
