
Looking Forward to 2018


2017 was the year of virtualization; the days of buying dedicated hardware are coming to a close. I can only see this continuing, as the power of virtualization is clear for all to see: dedicated, task-oriented servers are relics of the past, being phased out in all sectors.

The resilience and IT agility that virtualization brings are undeniable, and they are being widely embraced.

Hardware is becoming more and more unified, with compute and storage sold as commodities while virtualization provides the functional granularity and separation of servers and services.

Recent developments such as Scale Computing’s HC3 Unity will start to shift the paradigm from purely in-house private clouds towards hybrid cloud, allowing the public cloud to be leveraged to fill performance gaps in peak load situations. For me this is a very exciting prospect: it negates the need to over-provision compute resources, because organizations can worry less about peak load than average load when buying hardware, knowing that their private cloud infrastructure integrates seamlessly with the public cloud. What IT department isn’t going to jump at the chance of spending less on hardware and gaining redundancy in the deal?
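To put some rough numbers on that, here is a back-of-the-envelope sketch; the node capacity, demand figures and the sizing rule are my own illustrative assumptions rather than anything from Scale:

```python
# Hypothetical capacity-planning sketch: sizing on-premises nodes for
# average load and bursting to the public cloud for peaks, versus
# buying enough hardware to cover the worst case outright.

NODE_CAPACITY_VCPUS = 64      # assumed usable vCPUs per on-prem node
AVERAGE_DEMAND_VCPUS = 300    # assumed steady-state demand
PEAK_DEMAND_VCPUS = 900       # assumed worst-case (e.g. month-end) demand


def nodes_needed(demand_vcpus: int) -> int:
    """Round up to whole nodes."""
    return -(-demand_vcpus // NODE_CAPACITY_VCPUS)


if __name__ == "__main__":
    for_peak = nodes_needed(PEAK_DEMAND_VCPUS)
    for_average = nodes_needed(AVERAGE_DEMAND_VCPUS)
    burst_vcpus = PEAK_DEMAND_VCPUS - for_average * NODE_CAPACITY_VCPUS

    print(f"Sized for peak:    {for_peak} nodes, mostly idle off-peak")
    print(f"Sized for average: {for_average} nodes on premises")
    print(f"Burst to cloud:    {burst_vcpus} vCPUs, only during peaks")
```

With these made-up numbers you buy roughly a third of the hardware and rent the rest only when you actually need it, which is the whole appeal of the hybrid model.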

“Throughout 2017 we have seen many organizations focus on implementing a 100% cloud focused model and there has been a push for complete adoption of the cloud. There has been a debate around on-premises and cloud, especially when it comes to security, performance and availability, with arguments both for and against. But the reality is that the pendulum stops somewhere in the middle. In 2018 and beyond, the future is all about simplifying hybrid IT. The reality is it’s not on-premises versus the cloud. It’s on-premises and the cloud. Using hyperconverged solutions to support remote and branch locations and making the edge more intelligent, in conjunction with a hybrid cloud model, organizations will be able to support highly changing application environments.” – Jason Collier, co-founder at Scale Computing

“This will be the year of making hybrid cloud a reality. In 2017, companies dipped their toes into a combination of managed service providers (MSP), public cloud and on-premises infrastructure. In 2018, organizations will look to leverage hybrid clouds for orchestration and data mobility to establish edge data centers that take advantage of MSPs with large bandwidth into public clouds, ultimately bringing mission critical data workloads closer to front-end applications that live in the public cloud. Companies will also leverage these capabilities to bring test or QA workloads to burst in a MSP or public cloud. Having the ability to move data back and forth from MSPs, public clouds and on-premises infrastructure will also enable companies to take advantage of the cost structure that hybrid cloud provides.” – Rob Strechay, SVP Product, Zerto


The big challenge for IT admins now is designing their virtual infrastructure around their organizational needs. For me there are two main challenges here: designing the virtual networks to optimize the performance of virtualized services, and ensuring the virtualized servers can communicate with their storage effectively, with I/O operations optimized for their needs across enterprise storage.

2018 must continue this trend, but it will bring its own challenges: for the hybrid cloud to offer truly seamless integration of private and public clouds, performance-critical services such as databases will still need to be hosted predominantly in-house on optimized storage.

“From a storage perspective, I think what will surprise many is that in 2018 we will see the majority of organizations move away from convergence and instead focus on working with specialist vendors to get the expertise they need. The cloud will be a big part of this, especially as we’re going to see a major shift in public cloud adoption. I believe public cloud implementation has reached a peak, and we will even see a retreat from the public cloud due to hidden costs coming to light and the availability, management and security concerns.”  – Gary Watson, Founder and CTO of Nexsan

“2018 will be the year that DR moves from being a secondary issue to a primary focus. The last 12 months have seen mother nature throw numerous natural disasters at us, which has magnified the need for a formal DR strategy. The challenge is that organizations are struggling to find DR solutions that work simply at scale. It’s become somewhat of the white whale to achieve, but there are platforms that are designed to scale and protect workloads wherever they are – on-premises or in the public cloud.” – Chris Colotti, Field CTO at Tintri

Disaster recovery is another great benefit of the hybrid cloud: you can have an exact replica of your in-house cloud on the public cloud, ready to fail over at a moment’s notice. Natural disasters must be taken seriously in this ever-changing world, but they need not take down public-facing services…


It’s 2018 and computing is fundamental to business worldwide; there’s no debate, and it has been so for a long time. But that doesn’t necessarily mean it’s being done in the best way…

“In 2018, we will continue to hear a lot about companies taking on this journey called ‘digital transformation.’ The part that we are going to have to start grappling with is that there is no metric for knowing when we get there – when a company is digitally transformed. The reality is that it is a process, with phases and gates – just like software development itself. A parallel that is extremely relevant. In fact, digital transformation in 2017, and looking ahead to 2018, is all about software. The ‘bigger, better, faster, cheaper’ notion of the 90s, largely focused around hardware, is gone. Hardware is commoditized, disposable and simply an access point to software. The focus is squarely on software and unlimited data storage to push us forward. Now the pressure is on the companies building software to continue to lead the way and push us forward.” – Bob Davis, CMO at Plutora

“The total volume of traffic generated by IoT is expected to reach 600 Zettabytes by 2020, that’s 275X more traffic than is projected to be generated by end users in private datacenter applications. With such a deluge of traffic traversing WANs and being processed and stored in the cloud, edge computing — in all of its forms — will emerge as an essential part of the WAN edge in 2018.” – Todd Krautkremer, CMO, Cradlepoint

“As enterprise data continues to accumulate at an exponential rate, large organizations are finally getting a handle on how to collect, store, access and analyze it. While that’s a tremendous achievement, simply gathering and reporting on data is only half of the battle in the ultimate goal of unleashing that data as a transformative business force. Artificial intelligence and machine learning capabilities will make 2018 the year when enterprises officially enter the ‘information-driven’ era, an era in which actionable information and insights are provided to individual employees for specific tasks where and when they need it. This transformation will finally fulfill the promise of the information-driven enterprise, allowing organizations and employees to achieve unprecedented efficiency and innovation.” – Scott Parker, Senior Product Manager, Sinequa

Now, my PhD was in molecular informatics, and our research group was dedicated to finding novel applications for computing, processing data in order to extract principal factors. To me this has strong parallels with the concept of digital transformation, and it will be something I follow closely this year.

Happy 2018 to you all,

Thanks for reading

Tchau for now…

Phil

Scale Computing Release HC3 Unity

Scale Computing have teamed up with Google to leverage the Google Cloud Platform (GCP) to bring ultimate IT agility.

In recently announced news, Scale Computing and Google have teamed up to bring us HC3 Unity, an extension to Scale’s excellent hyperconverged infrastructure (HCI) solution, HC3. It seamlessly integrates HC3’s on-premises cloud with the off-premises public cloud using Google’s hardware-accelerated nested virtualization.

Reposting Scale Computing’s announcement:

“Today we announced HC3 Cloud Unity℠, a new partnership with Google that has been two years in the making. Both companies have committed significant resources and technology to make this happen, and we’re super excited to announce it.

So what is it? Simply put, HC3 Cloud Unity puts the resources on Google’s cloud onto your local LAN.  It becomes a component in your infrastructure, addressable locally, which your applications can inter-operate with in the exact same way they would with any local system.

The impact on operations is significant. For example, this takes the concept of cloud-based disaster recovery to a whole new level, because again, those cloud resources are part of the local LAN. This means the networking nightmare that is typically present in DR is gone, and an application which fails over to the cloud resource will retain the same IP address that it had before — and all the other systems, users, and applications will continue to communicate with the “failed” resource as though it never moved.

This also enables you to think about DR in a completely different way. Usually we think of DR as “site failure” — and certainly that could hold true here.  But, in addition, we can now think of using this type of cloud-failover for individual apps and not necessarily entire sites.  Again, since those apps failover into the same LAN, retaining IP addressing, they will work in either location.

Those are two concepts, simplified networking and DR, that we think customers will gain immediate benefit from. In addition, those examples should point you to something very new and exciting: true hybrid cloud. With everything on the same network, an application which may use several VMs can have those VMs spread across both their on-premises systems and the cloud, without any change in configuration or use.  Furthermore, moving an application to the cloud is as simple as live migrating a VM between on-prem servers, because from a networking perspective, the cloud is “on prem.”

To accomplish this we have combined technology with Google, and both sides have also introduced new tech. On the Google side, this uses the resources of their cloud combined with newly launched nested virtualization technology.  On the Scale side, we are using the HC3 platform with Hypercore and our SCRIBE SDS layers, and have now added SD-WAN capabilities to automatically bridge the networks together into the same LAN.

The end result, in-line with all Scale products, is extreme simplicity. These cloud resources are there on your LAN. Any VM can access them, use them, and move in or out of the cloud without reconfiguration or cloud awareness. We know our customers are often running a wide mix of workloads, some of which may be older, legacy systems. Whether old or new, these apps can now run in the cloud with a simple click in the UI.

When we first were approached by Google two years ago, we both immediately saw the similarities of our platforms and approaches. From KVM to software-defined storage, there was a lot that was already “in alignment” that enabled our platforms to work so seamlessly together.

Delivering this type of hybrid cloud functionality is the road we’ve been driving our customers down for a long time. From first coining the term “hyperconvergence” in 2012, to now bringing customers into this cloud-converged environment, we will continue to innovate to meet customer needs while maintaining the ease of use and interoperability that is fundamental to the Scale platform.”
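The live-migration point in that announcement is the one I keep coming back to. HC3 is KVM-based under the hood, and while Scale’s HyperCore has its own management layer (so this is emphatically not how you drive an HC3 cluster), a plain KVM/libvirt sketch gives a feel for what a live migration between two hosts on a shared network looks like; the host URIs and VM name below are placeholders of my own:

```python
# Generic KVM/libvirt illustration of a live migration between two hosts
# that share a network, using the libvirt Python bindings. This is NOT
# Scale's HyperCore API; hostnames and the VM name are placeholders.
import libvirt

SOURCE_URI = "qemu+ssh://host-a.example.local/system"   # hypothetical
DEST_URI = "qemu+ssh://host-b.example.local/system"     # hypothetical
VM_NAME = "web-frontend"                                 # hypothetical


def live_migrate(vm_name: str, source_uri: str, dest_uri: str) -> None:
    """Live-migrate a running domain; guest state and IP carry over."""
    src = libvirt.open(source_uri)
    dst = libvirt.open(dest_uri)
    try:
        domain = src.lookupByName(vm_name)
        # VIR_MIGRATE_LIVE keeps the guest running while memory is copied.
        domain.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    finally:
        src.close()
        dst.close()
```

Because Cloud Unity stretches the LAN across sites, the destination in a setup like this could in principle just as well be a node running in GCP as the box in the next rack.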

Google released a blog post detailing Compute Engine’s beta nested virtualization support here.
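If you are curious whether a given Linux VM is actually seeing the virtualization extensions that nested KVM relies on, a quick check (my own illustration, nothing to do with Scale’s or Google’s tooling) looks like this:

```python
# Hypothetical check, run inside a Linux guest, for the pieces nested
# virtualization relies on: the vmx CPU flag exposed to the guest and
# the kvm_intel module's "nested" parameter.
from pathlib import Path


def has_vmx_flag() -> bool:
    """True if /proc/cpuinfo advertises Intel VT-x (vmx) to this guest."""
    cpuinfo = Path("/proc/cpuinfo").read_text()
    return any("vmx" in line.split() for line in cpuinfo.splitlines()
               if line.startswith("flags"))


def kvm_nested_enabled() -> bool:
    """True if the kvm_intel module reports nested virtualization on."""
    param = Path("/sys/module/kvm_intel/parameters/nested")
    return param.exists() and param.read_text().strip() in ("1", "Y", "y")


if __name__ == "__main__":
    print("vmx flag visible:", has_vmx_flag())
    print("kvm_intel nested:", kvm_nested_enabled())
```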


HC3 Unity promises to merge an on-premises HC3 virtual network with a virtual network on GCP, creating a single, ultimately flexible virtualization platform. This will allow (assuming sufficient internet bandwidth) virtual machines and applications running on site to migrate off site as needed, as well as letting you lease compute or storage capacity from Google at times of high load. That facilitates optimum utilization of on-site hardware with minimal over-provisioning, knowing that GCP can pick up the slack when the in-house hardware can’t cope.
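As a thought experiment, a crude burst policy for that scenario might look like the sketch below; the thresholds, the priority scheme and the VM model are all invented for illustration and bear no relation to anything Scale actually ships:

```python
# Hypothetical burst-to-cloud policy: when the on-premises cluster runs hot,
# move the least critical VMs to leased cloud capacity, and bring them back
# once load drops. Thresholds and the VM model are illustrative only.
from dataclasses import dataclass


@dataclass
class VM:
    name: str
    vcpus: int
    priority: int              # lower = safer to run off site
    location: str = "on-prem"


BURST_THRESHOLD = 0.85         # assumed: start bursting above 85% local load
RETURN_THRESHOLD = 0.60        # assumed: repatriate VMs below 60% local load


def local_utilization(vms, local_capacity_vcpus):
    used = sum(vm.vcpus for vm in vms if vm.location == "on-prem")
    return used / local_capacity_vcpus


def rebalance(vms, local_capacity_vcpus):
    """Keep local load inside the band by shuffling VMs to/from the cloud."""
    while local_utilization(vms, local_capacity_vcpus) > BURST_THRESHOLD:
        local = [vm for vm in vms if vm.location == "on-prem"]
        if not local:
            break
        # Stand-in for a live migration to the cloud side of the LAN.
        min(local, key=lambda vm: vm.priority).location = "cloud"

    while local_utilization(vms, local_capacity_vcpus) < RETURN_THRESHOLD:
        remote = [vm for vm in vms if vm.location == "cloud"]
        if not remote:
            break
        candidate = max(remote, key=lambda vm: vm.priority)
        projected = (local_utilization(vms, local_capacity_vcpus)
                     + candidate.vcpus / local_capacity_vcpus)
        if projected > BURST_THRESHOLD:
            break
        # Stand-in for migrating back on premises when there is headroom.
        candidate.location = "on-prem"
```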

This for me is a game changer: having the power, scale and reliability of Google’s cloud in house changes the paradigm. Infrastructure as a Service (IaaS) is here to stay, and IT departments can forget about hardware and concentrate on what matters to clients and users – the application.

Scale on the Up with Scale Computing HyperCore 7.3 and Hardware Updates


Scale Computing have been making waves in the hyperconvergence world, and their recent announcement of hardware updates and new admin-friendly additions to their HyperCore management software is sure to continue to impress. That is especially true considering the recent news of HC3 Cloud Unity, which integrates Scale’s excellent HC3 seamlessly with Google’s cloud platform, allowing previously unthought-of (for me anyway) IT agility for anyone with an existing VMware virtualized infrastructure or those wishing to implement one. Any company can buy a complete HCI package from Scale, with assured future-proofing and effortless upscaling on premises or off.
Now let’s turn to the HyperCore 7.3 updates and hardware releases from Scale; I got hold of some slides for your enjoyment.

As you can see, there are some impressive updates, with all-flash systems now available and increased storage utilization. Furthermore, IT departments should be pleased with the multi-user interface, allowing greater tracking in multi-admin IT departments.
Stay tuned for more details on HC3 Cloud Unity from Scale…

Phil.