
What is Composable Infrastructure and where is it targeted?

by Philip Sellers

Last month, just before the flooding, I was onsite at HPE in Houston, TX, for a Tech Day focused on the HPE Hyper Converged and Composable Infrastructure portfolio.  I have since had a chance to reflect on everything I learned about HPE’s Composable strategy – and on the larger industry direction of containers and orchestration and where it is taking us all.  There are potential benefits and lots of hurdles in both of these major industry initiatives.  The reasoning behind these concepts and solutions is delivering faster results and more flexibility for IT organizations.  While at the Tech Day, I was able to dig in again, refresh what I’d heard before and learn more detail about HPE’s Composable strategy.

What is Composable Infrastructure?

I’ve had the opportunity to drill into HPE’s and the broader industry definition of Composable Infrastructure a couple of times in the past year.  HPE released Synergy Platform last December at HP Discover in London.  This was the first purpose-built hardware platform for composability, but the people I talked with at HP warned me that it would not be the only platform.

Composability is the concept of being able to take standard compute, storage and networking hardware and assemble them in a particular way using orchestration to meet a specific use case.   When your use case is completed, you can decompose and recompose the same components in a different way to meet new needs.  Beyond that, it also includes the concept of being able to do this with hardware on demand as your usage and requirements change throughout the day.   For peak workloads, you may compose infrastructure to run a billing cycle, then decompose and recompose the same components as web front-end servers for e-commerce during a usage peak – like Black Friday.  The key here is orchestration and, with the orchestration, speed.
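
To make that lifecycle concrete, here is a minimal sketch against a hypothetical orchestration client – the Orchestrator class, template names and node handling are all illustrative stand-ins, not a real HPE interface.

```python
# Minimal sketch of the compose/decompose lifecycle against a hypothetical
# orchestration API -- the client, method names and template names below are
# illustrative, not an actual HPE interface.

class Orchestrator:
    """Stand-in for a composable-infrastructure control plane."""

    def compose(self, template, count):
        # Allocate compute, storage and fabric from the shared pools and
        # apply the template (BIOS settings, firmware, boot image, networks).
        print(f"Composing {count} nodes from template '{template}'")
        return [f"node-{i}" for i in range(count)]

    def decompose(self, nodes):
        # Release the hardware back into the fluid resource pools.
        print(f"Returning {len(nodes)} nodes to the pool")


orch = Orchestrator()

# Overnight: stand up a cluster for the billing batch run...
billing = orch.compose(template="billing-batch", count=8)
orch.decompose(billing)

# ...then reuse the same physical hardware as web front-ends for Black Friday.
web_tier = orch.compose(template="ecommerce-web", count=8)
```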

In many ways, Composable Infrastructure provides a pool of resources similar to how virtualization provides access to a pool of resources that can be consumed in multiple ways.  The critical difference is that these are physical resources being sliced and diced without a hypervisor layer providing the pooling.

Characteristics of Composable Infrastructure

HPE has a strict definition it is following for Composable Infrastructure.  If any of these characteristics are missing, HPE will not classify a solution as composable, no matter how closely it may resemble one.

  • Unified API
    • Single line of code to abstract every element of the infrastructure for full infrastructure programmability
    • Bare-metal interface of infrastructure as a service
  • Software-Defined Intelligence
    • Template driven workload composition
    • Automation to enable streamlined operation of the system (HPE calls it frictionless operation)
  • Fluid Resource Pools
    • Single infrastructure of disaggregated pools
    • Physical, virtual and containers
    • Auto-integration of resource capacity

The API is straightforward, along with the ability to use templates to define and automate system builds.  Fluid resource pools also make sense if you’ve spent any time with a virtualization technology.  The point that didn’t immediately make sense to me is ‘frictionless operations.’
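
For a sense of what that unified API looks like in practice, here’s a hedged sketch loosely modeled on HPE OneView’s REST endpoints; the composer host, credentials and template/hardware URIs are placeholders, and error handling is omitted.

```python
# Hedged sketch of driving composition through a unified REST API, loosely
# modeled on HPE OneView's documented endpoints; host, credentials and URIs
# are placeholders, not a working configuration.
import requests

COMPOSER = "https://composer.example.com"

# Authenticate once and reuse the session token for every call.
auth = requests.post(
    f"{COMPOSER}/rest/login-sessions",
    json={"userName": "administrator", "password": "secret"},
    headers={"X-API-Version": "300"},
    verify=False,  # lab sketch only; validate certificates in production
)
token = auth.json()["sessionID"]
headers = {"Auth": token, "X-API-Version": "300"}

# Compose a server by instantiating a profile from a template -- the same
# call pattern serves bare metal, hypervisor hosts and container hosts.
profile = {
    "name": "web-frontend-01",
    "serverProfileTemplateUri": "/rest/server-profile-templates/<template-id>",
    "serverHardwareUri": "/rest/server-hardware/<bay-id>",
}
requests.post(f"{COMPOSER}/rest/server-profiles",
              json=profile, headers=headers, verify=False)
```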

In terms of frictionless operations, what HPE is talking about today is automation and workflow tools within the management interface that streamline the upgrade processes required on the system.  Those upgrades may include OS images as well as firmware and driver bundles, along with the management interface rollups.

Self-Healing Infrastructure

Now, take this concept and definition a step forward and layer on monitoring and mitigation software.  What becomes possible is a self-healing infrastructure, reacting to events and remediating them on its own.  Self-healing is a compelling concept, but attempts so far have been far from exceptional, and it’s really tough to achieve.  There is a huge amount of work required in standardization and orchestration that just doesn’t fit with traditional IT software and OS concepts.  Where I say traditional, HPE uses the word legacy or old-style.  They’re talking about client-server applications – the commercial, off-the-shelf software so many enterprises are running today.

But looking into the future, HPE can see that self-healing can be realized with cloud-native, scale-out software solutions.  And it is betting that if it can build a physical infrastructure capable of programmatically assembling and disassembling systems on demand, it can power this self-healing future.
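
As a thought experiment, the loop might look something like this minimal sketch; the monitor and orchestrator objects are hypothetical stand-ins, not a shipping HPE product.

```python
# Conceptual sketch of the self-healing loop the composable model enables:
# monitoring detects a failure, orchestration recomposes around it. Both
# objects here are hypothetical stand-ins.
import time

def self_healing_loop(monitor, orchestrator):
    while True:
        for alert in monitor.poll_alerts():
            if alert.kind == "compute-failure":
                # Release the failed node back to the pool for diagnostics...
                orchestrator.decompose([alert.node])
                # ...and compose a replacement from the same template, so the
                # stateless workload simply re-images onto healthy hardware.
                orchestrator.compose(template=alert.template, count=1)
        time.sleep(30)  # polling stands in for an event-driven trigger
```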

HPE Synergy Platform

HPE Synergy Platform is the hardware HPE believes will power the future of on-premises, cloud-native IT, with the flexibility to host legacy applications in the same pool of resources.

While examining Synergy Platform at Discover, I first noticed that the hardware looks very similar to an HPE BladeSystem chassis.  It differs in the number of compute nodes, the dimensions of those compute nodes and the networking, but physically it looks very much like a BladeSystem.  What you quickly notice are the new capabilities, like a disk shelf that spans two compute bays and provides a pool of disk that can be shared throughout a Synergy Frame (not called a chassis anymore).

[Image: HPE Synergy frame with Composer, Storage Module, and 480, 660, 620 and 680 Compute Modules]

Compute

Blades become compute nodes, but the concept is much the same.  The form factor does not change much – you have half-height and full-height options for compute nodes in Synergy, though the dimensions are larger to accommodate a fuller range of hardware inside the compute node.  Starting with the half-height form factor, HPE is offering the Synergy 480 Compute Node, a Gen9 server equipped much like the BL460c and aimed at similar use cases.

Another change in Synergy: the slots for compute nodes no longer restrict you to a single unit – there is no metal separation between the compute bays.  You can do double-width units in Synergy, in addition to full-height.  HPE is making use of that with the double-width, full-height Synergy 680 Compute Node.  The 680 is an impressive blade with four sockets and up to 6TB of RAM across 96 DIMM slots.  It is a beast of a blade.  The other full-height options are the Synergy 620 and 660 Compute Nodes.

Composer & Image Streamer

Orchestration is really what sets Synergy Platform apart from HP’s c7000 blade chassis.  The orchestration is achieved by management modules on the Synergy Platform.

First, the Synergy Composer is the brains and management of the operation, and it is built on HPE OneView.  From an architecture standpoint, a single Composer module can manage up to 21 interlinked frames.  Each frame has two 10Gb management connections – on a Frame Link Module – that link frames together.  Each frame connects upstream to one frame and downstream to another, forming a management ring.  Using this management network, the Synergy Composer is able to manage all 21 frames of infrastructure.  Although each frame has a slot for a Composer management module, only one is required, and a second can be added in a different frame to establish high availability.
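
To illustrate, once the ring is formed, a script pointed at the Composer could enumerate every linked frame through a single endpoint – a hedged sketch loosely modeled on OneView’s /rest/enclosures API, with a placeholder host and session token.

```python
# Hedged sketch: one Composer sees every frame in the management ring.
# Loosely modeled on OneView's /rest/enclosures endpoint; the host and
# session token are placeholders.
import requests

COMPOSER = "https://composer.example.com"
headers = {"Auth": "<session-token>", "X-API-Version": "300"}

resp = requests.get(f"{COMPOSER}/rest/enclosures", headers=headers, verify=False)
for frame in resp.json().get("members", []):
    print(frame["name"], frame.get("state"))
```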

Synergy Image Streamer is all about boot images.  You take a golden image, create a clone-like copy (similar to VMware’s linked clones) and boot it; when the image is rebooted, nothing is retained.  Everything about the image must be sequenced and configured during boot.  Stateless operation is very much a cloud-native concept, requiring additional services – like centralized logging – to be deployed in the environment to enable long-term storage of data from the workloads.  Composer also takes updates and patching into account by allowing the administrator to commit these to the golden master and then kick off a set of rolling reboots to bring all the running images up to date.  Just like Composer, the Image Streamer only needs a single module, or two in two different frames for redundancy.
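
The rolling-update pattern might look like this minimal sketch, assuming hypothetical streamer and node objects rather than the actual Image Streamer API.

```python
# Sketch of the rolling-update pattern described above: commit changes to the
# golden image, then reboot nodes one at a time so each picks up the new
# image on boot. The streamer and node objects are hypothetical stand-ins.

def rolling_update(streamer, golden_image, patch, nodes):
    # Patching happens once, against the golden master...
    streamer.apply_patch(golden_image, patch)
    for node in nodes:
        # ...then each stateless node is rebooted in turn; because nothing is
        # retained across reboots, it comes back up on the updated image.
        node.reboot()
        node.wait_until_healthy()  # keep the service up during the roll
```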

Management in Synergy is built to scale, eliminating the need for onboard management modules and for separate software outside the hardware to manage many units.

OS Support for Streaming and Traditional Deployment

At launch, Synergy’s Image Streamer supports Linux, ESXi and container OSes with the full benefit of image, configure and run.  The images are stateless, meaning nothing is retained when the system reboots.  Windows was not supported with the Image Streamer as of December.  The Synergy and composable concept is clearly targeted at bare-metal deployments of cloud-native systems.

Now, even though you can’t use the Image Streamer to run Windows or stateful Linux on Synergy, it doesn’t mean they won’t run.  It is still possible to create a boot disk and provision it to a compute node (boot from SAN, boot from USB or SD, etc.).  For compatibility, you can use a Synergy compute node like a traditional rack-mount or blade server.  Of course, when you do, you lose all of the potential benefits of the platform’s imaging and automation engines.

The Fabric

The Synergy Fabric is the other huge differentiating factor with Synergy Platform.  A single frame is probably not what you’ll see deployed anywhere.  Synergy is built to scale up within the rack – with up to 5 frames interconnected to the same converged fabric, providing Ethernet, Fibre Channel, FCoE and iSCSI across all of the frames.  Synergy uses parent/child modules to extend the managed fabric across multiple frames: a parent module is inserted in one frame and child modules in up to 4 additional frames.  Similar to the Composer and Image Streamer modules, Synergy uses a pair of parent interconnect modules in different frames to achieve high availability.  Management of the interconnect modules and fabric is a single interface, and MLAG connections between the modules communicate management changes.
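
For example, defining a VLAN once and letting the fabric carry it across all five frames could look like this hedged sketch, loosely modeled on OneView’s ethernet-networks endpoint, with a placeholder host and token.

```python
# Hedged sketch of defining a network once for the whole multi-frame fabric;
# loosely modeled on OneView's /rest/ethernet-networks endpoint, with a
# placeholder host and session token.
import requests

COMPOSER = "https://composer.example.com"
headers = {"Auth": "<session-token>", "X-API-Version": "300"}

network = {
    "name": "prod-web",
    "type": "ethernet-networkV300",
    "vlanId": 120,
    "ethernetNetworkType": "Tagged",
    "purpose": "General",
}
# One call defines the VLAN for the whole 5-frame fabric -- the parent/child
# interconnect modules propagate it, so there is no per-frame switch config.
requests.post(f"{COMPOSER}/rest/ethernet-networks",
              json=network, headers=headers, verify=False)
```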

Shared Local Disk

StoreVirtual is a primary use case with Synergy Platform, and HPE expects many users will choose its VSA to do software-defined storage in Synergy.  But it is hard to get enough disks to matter in the blade form factor.   To address this, HPE is also showing off a new disk shelf that fits into two compute bays in a frame.  The HPE Synergy D3940 disk shelf can hold up to 40 small form factor disks that can be carved out and consumed by any of the compute nodes in the frame.  One important limitation, however, is that these local disks are only accessible inside a single frame.  The SAS module used for these disks sits in a separate bay, apart from the fabric, so all of the disks must be presented to compute within the frame.

However, StoreVirtual comes to the rescue.  Running bare-metal or, more likely, as a VSA, StoreVirtual can take those local disks and present them in a way that can be consumed or clustered with compute in other frames.  All 40 disks may be presented to a single StoreVirtual instance and then clustered with two additional StoreVirtual instances in other frames – and then the fabric can consume the storage from StoreVirtual.  The great thing is you have the choice, as a consumer, to instantiate and then recompose these resources as needed on Synergy.
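
Conceptually, the pattern looks something like this sketch, with hypothetical frame and VSA objects standing in for the real deployment tooling.

```python
# Conceptual sketch of the pattern described above, with hypothetical frame
# and VSA objects: present the frame-local D3940 disks to one StoreVirtual
# instance per frame, then cluster the instances so the capacity is reachable
# from compute in any frame via the fabric.

def build_storevirtual_cluster(frames, vsa):
    instances = []
    for frame in frames:
        disks = frame.d3940_disks()       # local disks, visible only in-frame
        node = vsa.deploy(frame, disks)   # a VSA consumes them locally...
        instances.append(node)
    return vsa.cluster(instances)         # ...and the cluster exports them
                                          # fabric-wide as shared storage

# cluster = build_storevirtual_cluster(frames=[f1, f2, f3], vsa=StoreVirtual())
```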

Who benefits from these characteristics?

It is critically important to note that you can run traditional workloads side-by-side with cloud-native workloads.  While cloud-native workloads benefit more from the Image Streamer and are built for stateless OS operation, traditional IT workloads can run on the same hardware.  From the Composer, you can assemble a traditional server – with boot from SAN or boot from local disk in the compute node – and run a client-server application.  This flexibility is important as organizations attempt to build new-style apps while still supporting existing applications.  It means that a single hardware platform will be able to deliver both.  Unfortunately, organizations that choose to run legacy systems on Synergy won’t be able to fully realize all the benefits Synergy Platform has to offer, since most do not apply to these legacy workloads, but having the flexibility is key.

From my viewpoint, no company is going to buy into Synergy simply to run legacy applications on it.  The real benefits are for companies that are planning, or are in the middle of, an application transition.  For companies that have not begun the transition process, Synergy and Composable Infrastructure are going to be a tough sell – because they’re not yet thinking in ways where the benefits Synergy delivers matter.
