
E8 Storage Announces InfiniBand Support Extending High Performance Compute (HPC) Options for New and Existing Customers

Big news from growing storage player E8: InfiniBand is coming to its blazingly fast storage solutions. In a very welcome move, E8 has announced added support for InfiniBand in its storage products, for customers looking to pair the blazing speed of centralized NVMe all-flash storage with the low latency of InfiniBand in storage-intensive workloads.

This will allow local flash storage speeds from networked, centralized storage, ensuring sufficient IOPS for even the most intensive workloads.
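To put the 100Gb/s links the press release below mentions into perspective, here is a quick back-of-the-envelope calculation. These are my own illustrative numbers, not E8's or Mellanox's; real-world throughput is lower once protocol overhead is accounted for.

```python
# Back-of-the-envelope: what a 100 Gb/s link means in practice.
# Raw line rate only; RDMA/protocol overhead reduces real throughput.

LINK_GBPS = 100                            # line rate in gigabits per second

bytes_per_second = LINK_GBPS * 1e9 / 8     # convert gigabits to bytes: 12.5 GB/s
terabyte = 1e12                            # 1 TB in bytes (decimal)

seconds_per_tb = terabyte / bytes_per_second

print(f"Raw throughput: {bytes_per_second / 1e9:.1f} GB/s")   # 12.5 GB/s
print(f"Time to move 1 TB: {seconds_per_tb:.0f} s")           # 80 s
```

In other words, at raw line rate a single 100Gb/s link could shift a terabyte in well under two minutes, which is why this class of interconnect matters for data-hungry HPC workloads.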

Their press release follows:

E8 Storage Announces InfiniBand Support Extending High Performance Compute (HPC) Options for New and Existing Customers

 

Support for the solution highlights the close alignment with Mellanox Technologies and delivers unrivaled performance and ultra-low latency     

SANTA CLARA, CALIF., April 17, 2018 – E8 Storage today announced availability of InfiniBand support to its high performance, NVMe storage solutions. The move comes as a direct response to HPC customers that wish to take advantage of the high speed, low latency throughput of InfiniBand for their data hungry applications. E8 Storage support for InfiniBand will be seamless for customers who now have the flexibility to connect via Ethernet or InfiniBand when paired with Mellanox ConnectX InfiniBand/VPI adapters.

“As our economy becomes more digitally-defined, HPC environments play a critical role in unlocking and harnessing the transformational power of data,” commented Scott Sinclair, senior analyst at ESG. “InfiniBand connectivity combined with the full potential of flash as delivered by E8 Storage is an almost unstoppable partnership in terms of speed, low latency and scalability. Together, this makes E8 Storage and Mellanox a powerful combination for data center implementations and for databases that drive financial trading systems, real-time data analytics, and healthcare applications.”

The announcement also strengthens E8 Storage’s long-standing technology partnership with Mellanox that originally enabled the integration of E8 Storage’s rack scale flash architecture with selected Mellanox adapters. Today, this relationship allows E8 Storage systems to attach directly to existing RDMA high performance networks over Converged Ethernet (RoCE) or InfiniBand connections and makes E8 Storage’s full support of InfiniBand possible especially in HPC environments where the majority of InfiniBand adoption is occurring. Like Ethernet, the InfiniBand products perform at 100Gb/s allowing enhanced scalability as E8 Storage solutions are deployed.

According to the most recent Top 500 supercomputer rankings (Nov 2017), InfiniBand is the second most-used internal system interconnect technology. Further, in a recent evaluation by Gilad Shainer, vice president of marketing for HPC products at Mellanox Technologies, Mellanox accounts for 62 percent share of the compute clustering of the true HPC systems.

“We are very pleased that E8 Storage is expanding this partnership to offer full support for our InfiniBand products. Working closely together, our combined products can deliver the lightning-fast and highly scalable network necessary to power data intensive applications, such as artificial intelligence and machine learning that now drive today’s economy,” stated Scot Schultz, Senior Director, HPC/AI and Technical Computing, Mellanox Technologies.

“Today we demonstrate once again that E8 Storage’s architecture can expand, evolve and always extract the full potential of flash performance,” comments Zivan Ori, co-founder and CEO of E8 Storage. “Partnering with market leaders like Mellanox that deliver the very best network connectivity technology ensures we continue to meet and, frequently, exceed the needs of our HPC customers even in their most demanding environments.”

E8 Storage software and appliances with InfiniBand support are available now; contact E8 Storage or your reseller for information on pricing.

E8 Storage is also a sponsor of the IBM Spectrum Scale (GPFS) User Group which will be held in London from April 18-19, 2018. The event features Spectrum Scale filesystem experts sharing the latest updates. To schedule a meeting with executives, visit https://e8storage.com/events/.

 

About E8 Storage

E8 Storage is a pioneer in shared accelerated storage for data-intensive, high-performance applications that drive business revenue. E8 Storage’s affordable, reliable and scalable solution is ideally suited for the most demanding low-latency workloads, including real-time analytics, financial and trading applications, transactional processing and large-scale file systems. Driven by the company’s patented architecture, E8 Storage’s high-performance shared NVMe storage solution delivers 10 times the performance at half the cost of existing storage products. With E8 Storage, enterprise datacenters can enjoy unprecedented storage performance density and scale, delivering NVMe performance without compromising on reliability and availability. Privately held, E8 Storage is based in Santa Clara with R&D in Tel Aviv, and channel partners throughout the US and Europe. For more information, please visit www.e8storage.com, and follow us on Twitter @E8Storage and LinkedIn.

 

Who wouldn’t be excited by faster access to their data? I know I am!

Thanks for reading

Tchau for now!

Installing ESXi 6.5

OK, so it’s been a while, but now the holidays are over I’m back on it. After discovering that there were significant changes between ESXi 6.0 and 6.5, I decided it was time to start over with 6.5.

So without further ado, let’s rebuild my virtual lab and get a hypervisor installed. As a reminder, I’m building my lab within VMware Workstation Pro 12, so upon firing it up I hit the handy “Create a New Virtual Machine” button, bringing up the “New Virtual Machine Wizard”.

Defaults should work in the majority of cases, so I figured let’s go with that…

Hitting Next brings up the guest OS selection page; I pointed it at the downloaded ISO of the 6.5 VMvisor installer.

Give the machine a name; the disk location is set automatically, but you can change it here if you like.

You can change the disk capacity here if you like. I left it at the defaults, knowing that as my virtual environment grows I will more than likely need to provision more space, but I want to see how to do that. Moving on…

The final page gives a summary of my selections and the opportunity to go back, or to edit the hardware configuration. Being happy, I clicked Finish…

…to be greeted by the installer boot loader, which automatically boots from the CD unless you intervene. As that’s exactly what I wanted, I let it go.

After a couple of minutes watching the installer screen…

…I got a compatibility warning, but as I’m installing within a VMware VM I was pretty confident I need not worry about that, so I hit Enter to continue.

Accept the EULA (F11).

Select the disk to install to.

Keyboard layout..

And set a root password.

Again you get an opportunity to sanity-check your selections before hitting F11 to install.

A couple of minutes later it’s installed, and you get a useful reminder to remove the installer disk before rebooting.
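Once the host is back up, you can sanity-check the install from the ESXi console (or over SSH, if you enable the ESXi Shell and SSH services) with a couple of read-only esxcli commands. A sketch; the exact output will vary with your build:

```shell
# Confirm the installed version and build number
esxcli system version get

# Check that the management network picked up an IPv4 address
esxcli network ip interface ipv4 get
```

Handy for confirming you really are on 6.5 and that the host is reachable before moving on.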

And hey presto! A running hypervisor!

 

This process was beautifully simple for anyone with basic IT skills, and remarkably fast, which is a real advantage when I think about enterprise-level infrastructure: any new hardware can have a hypervisor installed and ready to be integrated into an existing setup within minutes, and it requires no special knowledge. Overall a very slick and smooth process, for which VMware should be congratulated.

So the first step in my virtual lab is done, and I look forward to creating VMs on my hypervisor.

Thanks for reading

Tchau for now

Phil

Looking Forward to 2018

Looking Forward to 2018….

2017 was the year of virtualization; the days of buying dedicated hardware are coming to a close. I can only see this continuing, as the power of virtualization is clear for all to see: dedicated, task-oriented servers are relics of the past, being phased out in all sectors.

Virtualization, and the resilience and IT agility it brings, are undeniable and being widely embraced.

Hardware is becoming more and more unified, with compute and storage solutions being sold as commodities and virtualization providing the functional granularity and separation of servers and services.

Recent developments such as Scale Computing’s HC3 Unity will start to shift the paradigm from in-house private clouds towards the public cloud, allowing public cloud capacity to fill performance gaps in peak-load situations using hybrid cloud technology. For me this is a very exciting prospect: it negates the need to over-provision compute resources, as organizations can worry less about peak load than average load when buying compute hardware, knowing that their private cloud infrastructure integrates seamlessly with the public cloud. What IT department isn’t going to jump at the chance of spending less on hardware and gaining redundancy in the deal?

“Throughout 2017 we have seen many organizations focus on implementing a 100% cloud focused model and there has been a push for complete adoption of the cloud. There has been a debate around on-premises and cloud, especially when it comes to security, performance and availability, with arguments both for and against. But the reality is that the pendulum stops somewhere in the middle. In 2018 and beyond, the future is all about simplifying hybrid IT. The reality is it’s not on-premises versus the cloud. It’s on-premises and the cloud. Using hyperconverged solutions to support remote and branch locations and making the edge more intelligent, in conjunction with a hybrid cloud model, organizations will be able to support highly changing application environments” –said Jason Collier, co-founder at Scale Computing.

“This will be the year of making hybrid cloud a reality. In 2017, companies dipped their toes into a combination of managed service providers (MSP), public cloud and on-premises infrastructure. In 2018, organizations will look to leverage hybrid clouds for orchestration and data mobility to establish edge data centers that take advantage of MSPs with large bandwidth into public clouds, ultimately bringing mission critical data workloads closer to front-end applications that live in the public cloud. Companies will also leverage these capabilities to bring test or QA workloads to burst in a MSP or public cloud. Having the ability to move data back and forth from MSPs, public clouds and on-premises infrastructure will also enable companies to take advantage of the cost structure that hybrid cloud provides.” – Rob Strechay, SVP Product, Zerto

 

The big challenge for IT admins now is designing their virtual infrastructure around their organizational needs. For me there are two main challenges here: designing the virtual networks to optimize the performance of virtualized services, and ensuring the virtualized servers can communicate with their storage effectively, with IO operations optimized for their needs across enterprise storage.

2018 must continue this trend, but it will bring its own challenges: for the hybrid cloud to offer truly seamless integration of private and public clouds, performance-critical services such as databases will remain hosted predominantly in-house on optimized storage.

“From a storage perspective, I think what will surprise many is that in 2018 we will see the majority of organizations move away from convergence and instead focus on working with specialist vendors to get the expertise they need. The cloud will be a big part of this, especially as we’re going to see a major shift in public cloud adoption. I believe public cloud implementation has reached a peak, and we will even see a retreat from the public cloud due to hidden costs coming to light and the availability, management and security concerns.”  – Gary Watson, Founder and CTO of Nexsan

“2018 will be the year that DR moves from being a secondary issue to a primary focus. The last 12 months have seen mother nature throw numerous natural disasters at us, which has magnified the need for a formal DR strategy. The challenge is that organizations are struggling to find DR solutions that work simply at scale. It’s become somewhat of the white whale to achieve, but there are platforms that are designed to scale and protect workloads wherever they are – on-premises or in the public cloud.” – Chris Colotti, Field CTO at Tintri

Disaster recovery is another great benefit of the hybrid cloud: you can have an exact replica of your in-house cloud in the public cloud, ready to fail over at a moment’s notice. Natural disasters need not take down public-facing services, and they must be taken seriously in this ever-changing world…

 

It’s 2018 and computers are fundamental to the core of business worldwide; there’s no debate, and it’s been so for a long time. But that doesn’t necessarily mean it’s being done in the best way…

“In 2018, we will continue to hear a lot about companies taking on this journey called ‘digital transformation.’ The part that we are going to have to start grappling with is that there is no metric for knowing when we get there – when a company is digitally transformed. The reality is that it is a process, with phases and gates – just like software development itself. A parallel that is extremely relevant. In fact, digital transformation in 2017, and looking ahead to 2018, is all about software. The ‘bigger, better, faster, cheaper’ notion of the 90s, largely focused around hardware, is gone. Hardware is commoditized, disposable and simply an access point to software. The focus is squarely on software and unlimited data storage to push us forward. Now the pressure is on the companies building software to continue to lead the way and push us forward.” – Bob Davis, CMO at Plutora

“The total volume of traffic generated by IoT is expected to reach 600 Zettabytes by 2020, that’s 275X more traffic than is projected to be generated by end users in private datacenter applications. With such a deluge of traffic traversing WANs and being processed and stored in the cloud, edge computing — in all of its forms — will emerge as an essential part of the WAN edge in 2018.” – Todd Krautkremer, CMO, Cradlepoint

“As enterprise data continues to accumulate at an exponential rate, large organizations are finally getting a handle on how to collect, store, access and analyze it. While that’s a tremendous achievement, simply gathering and reporting on data is only half of the battle in the ultimate goal of unleashing that data as a transformative business force. Artificial intelligence and machine learning capabilities will make 2018 the year when enterprises officially enter the ‘information-driven’ era, an era in which actionable information and insights are provided to individual employees for specific tasks where and when they need it. This transformation will finally fulfill the promise of the information-driven enterprise, allowing organizations and employees to achieve unprecedented efficiency and innovation.” – Scott Parker, Senior Product Manager, Sinequa

Now, my PhD was in molecular informatics, and our research group was dedicated to providing novel applications for computing, processing data in order to extract principal factors. To me this has strong parallels with this concept of digital transformation, and it will be something I follow closely this year…

Happy 2018 to you all,

Thanks for reading

Tchau for now…

Phil

Repost: VMware ESXi Release and Build Number History as of Dec 15 2017

Repost: VMware ESXi Release and Build Number History as of Dec 15 2017

The following listings are a comprehensive collection of the flagship hypervisor product by VMware. All bold versions are downloadable releases. All patches have been named by their release names. Please note that the ESXi hypervisor has been available since version 3.5.

 

So I am still learning, and when I went to download ESXi I was unable to download version 6.5; it would only allow me to download version 6. Being a newbie, and assuming there were no major differences between versions 6 and 6.5, I did so.

But now, with hindsight, I understand that was not the case, and I am reposting this excellent article on VMware’s build history to help you avoid falling into the same trap.

VMware ESXi Release and Build Number History

So what’s the difference?

The main difference appears to be with vSphere, which has moved from a client application to an HTML5 web interface. Seeing as it’s a complete interface change, it would be better to learn that from the start. But have no fear, all is not lost: I now have the perfect opportunity to look into upgrade paths and learn about those!

Anyway, stay tuned for the next installment.

Tchau for now.

Phil

 

 

Installing a server on our hypervisor with vSphere 6.0

Installing a server on our hypervisor with vSphere

So, where are we?

We have a virtual server running the ESXi hypervisor as the base for a virtual lab. Now, as an (out of touch) Windows domain admin, I think installing Windows Server to refresh myself is as good a place as any to start. Since the last time I administered a domain was when Server 2008 R2 was new and starting to get adopted, I figured I’d grab an evaluation copy of Windows Server 2016 and see what’s new. So I’ve grabbed an ISO of that, and I’m ready to get started.

So here I am at the vSphere front screen. Inventory, Administration and Recent Tasks seem like reassuring panes to have when connecting to a hypervisor, especially when you can see power events in the tasks pane. Looks like I’m good to go.

Now, as a noob, I must confess it isn’t entirely obvious where to go to start creating virtual machines, but hey, I figure Inventory is the most likely of the options presented to me.

So opening Inventory I find:

 

After that brief moment of confusion, everything is much clearer: another MMC-like window with a nice explanation of what a virtual machine is and a handy “Create a new virtual machine” link.

That’s more like it; now I’m happy again. So let’s do that and create a virtual machine for Windows Server 2016.

Up comes another great wizard, which prompts you for each configuration option. This is so quick and easy that I am basically going to accept the defaults at every stage where possible; after all, a vanilla 2016 install must be one of the most common tasks performed, and I want to see how it does.

Just calling it testvm for now.

Again, the default storage location.

I changed the guest OS from the default to Windows Server 2016 (64-bit).

The default network configuration looks good to me.

Again I accept the defaults, reducing the disk to 10 GB, assuming I can always grow it later if necessary.
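For what it’s worth, growing a virtual disk later can be done from the ESXi command line with vmkfstools. A sketch only: the datastore path below is an assumption for illustration, and the guest OS filesystem still needs extending separately afterwards.

```shell
# Grow an existing VMDK to 20 GB from the ESXi shell
# (path is hypothetical; power the VM off first)
vmkfstools -X 20G /vmfs/volumes/datastore1/testvm/testvm.vmdk
```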

Click Finish, and in the blink of an eye it’s done, and all I have to show for it is a new item under Inventory: testvm.

Looks like it worked; let’s see if we can get Windows installing on it.

It’s nice to note that the create virtual machine task is registered under Recent Tasks. Let’s click on testvm and see what we can do.

The first thing I notice is that we get a toolbar with play, pause and stop buttons. Hitting the play button…

…does very little, apart from putting a play icon over the virtual machine and adding a “power on virtual machine” event in the task log.
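As an aside, the same power operations can be driven from the host’s command line over SSH with vim-cmd, which is handy on a truly headless box. A sketch, assuming the ESXi Shell or SSH is enabled; the numeric VM ID will differ on your host:

```shell
# List registered VMs and note the numeric VM ID
# (assume testvm turns out to be ID 1)
vim-cmd vmsvc/getallvms

# Power the VM on, check its state, then power it off again
vim-cmd vmsvc/power.on 1
vim-cmd vmsvc/power.getstate 1
vim-cmd vmsvc/power.off 1
```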

OK, that makes sense: it’s for virtual servers, which are usually headless and remote, and from past playing with virtual machines at Cambridge I know I need to connect to the console of the virtual machine. Luckily for me, one of the icons in the new toolbar has a tooltip that says “Launch the virtual machine console”, so I hit it:

And I’m greeted by a failed PXE boot screen. Again, this makes sense: I haven’t given it any install media, and the VM is network booting by default. Very useful for remote installs…

So we need to give the virtual machine some install media. I have an ISO, and as I’m too lazy to burn it I will try to mount it. So, closing down the console and shutting down the VM, let’s see if we can mount the ISO to install from. Right-clicking on testvm, I have an option to edit its settings…

Now, under CD/DVD drive, the device type is set to “Client Device”, with a message telling us to power on the virtual machine and then click the “Connect CD/DVD” button in the toolbar.

Sounds sensible, let’s try that…

On the third reboot I worked out to press F2 to enter the BIOS of the VM, in order to give me enough time to mount the ISO to boot from it.
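Rather than racing the POST screen on every boot, there is a friendlier trick I’ve since learned: adding a boot delay to the VM’s .vmx configuration file (or setting the equivalent in the VM’s boot options) gives you a guaranteed window to hit F2 or attach the ISO. A sketch of the relevant .vmx entries:

```
# Pause for 5 seconds at the BIOS splash screen (value is in milliseconds)
bios.bootDelay = "5000"

# Or force entry into the BIOS setup screen on the next boot only
bios.forceSetupOnce = "TRUE"
```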

Now, I’m not going to talk you through installing Windows; I’m sure you are capable of that. In my next post I will mention any hurdles I hit.

Off to install Windows. I hope you enjoyed it.

Tchau for now

Phil

Hypervisor installed, what now?

Hypervisor installed, so what now?

Well, a hypervisor has one purpose: to host and allocate resources to virtual machines. So to do that we need some way to manage the hypervisor and the virtual hardware it will provide.

How do we manage ESXi? Well, looking at the splash screen on the ESXi server, we have a friendly prompt:

“Download tools to manage this server from http://192.168.159.128”

That sounds like just the ticket, noting that it’s the IP address of our newly installed ESXi server. So, firing up our internet browser of choice and heading over to that URL…

 

We find a nice VMware ESXi welcome page, and the first paragraph suggests downloading something called vSphere to manage the server. I liked the sound of that, so I downloaded it.

Running the installer gives us a standard compressed Windows package installer. (Note: I was pretty sure it was going to need administrator privileges to install, so I granted them when asked.)

Let’s keep it simple and install it in English!

Accept the license agreement…

Specify the install location…

Install

Well, that was painless. Let’s see what has appeared in our Start menu…

VMware vSphere Client. That’s new; let’s take a look.


OK, the first thing the client wants to do is connect to a server to manage, so I enter the IP of our new ESXi server and the root credentials I set at install time.

Now, as this is a home lab to play around with, I don’t have a certificate authority, so vSphere will prompt you to accept the self-signed certificate the first time you connect.

Success!

vSphere has been successfully downloaded, installed, and connected to a remote hypervisor, in what can only be described as a super slick, fast and intuitive process.

I find it very exciting that this process is all that is involved in deploying new hardware in a virtualised environment: install the hypervisor on the bare metal and connect it up to the existing infrastructure; the rest can be done remotely.

 

Now, next up, let’s install a server on our hypervisor and get to know vSphere a little.

Tchau for now

 

Phil

 

Let’s get started…. with VMware workstation pro (12)

Let’s get started….

So, I need to learn about VMware. What’s the best way to go about it, I asked myself? Jump on in, install VMware Workstation Pro (12) and have a play!

My desktop has eight 4 GHz cores, 24 GB of RAM and a few terabytes of disk, so I think I should be fine. So, off to vmware.com to download VMware Workstation 12 after creating a free account.

I’m running Windows, so I’ll grab the appropriate version. I have a key for version 12, so that’s what I’m getting. The 400-ish megabyte download is quick and slick, as is the installer that follows: the usual EULA fare and customise options. If you’re considering VMware you have enough knowledge that I don’t have to walk through the installer process; the only thing of note is that you get a full-featured 30-day evaluation, so if you want to play and explore the possibilities, you can.

Frankly, installing VMware to this point has been a simple, slick, professional experience, as is to be expected given VMware’s reputation and industry standing.

Now, to learn VMware I figure I need a virtual environment, so I’m going to set up a virtual server within VMware Workstation on which I will install a hypervisor (ESXi) and use that as the base for a home lab.

So, let’s see what we have and open Workstation for the first time…

 

 

It’s not unlike a Microsoft Management Console (MMC) window, and I immediately felt at home, with the addition of some helpful shortcuts on the home screen.

It strikes me that VMware Workstation is very easy to install for anyone with basic computer skills, and that’s great: it’s accessible for people to learn, it’s quick and easy for tech-heads like me to install, and I can only imagine that’s a plus in the enterprise world…

Next time: installing the hypervisor…

Tchau for now..

Phil

 

 

It Begins…

Howdy!

Phew, now that’s out of the way. I was expecting writer’s block, but hey, no!

 

Who is this fool you ask?
Well, let me introduce myself. My name is Phil Marsden, and I have been invited by @rogerlund to start blogging my experiences, or at least my experiences in the world of cloud computing and virtualization. To that end, I think it’s only fair that you should be familiar with the guy currently occupying your eyeballs. I met Roger through a shared love of photography, and it is true, photography has become somewhat of a passion of mine, but that is unimportant for now.

What is important is that I love technology and what technology can do for us. I could use a photography analogy here: old-school photographers shot film and developed it themselves in a darkroom, patiently waiting for the developing and fixing chemicals to fix the image. They would then take that fragile negative and lovingly enlarge and process a print, a process that, in the most professional of labs, might take an hour or more for “instant news”. Today we shoot digital, get a 15+ MB RAW data dump from every shot, and load and develop it in digital darkrooms in a matter of minutes. Not only can we replicate the old systems much faster; technology allows us to go much further than ever before. Part of my love for technology comes from my background. I’m nearly 40, and I grew up in a nice southern English market town, where I was always a bit of a nerd at school. I never enjoyed school very much, but I did well at it, and finally, at the ripe old age of 18, I shipped off to university. I studied chemistry, which again was fascinating, and I did well at it, understanding and growing in the process, but nothing really grabbed me and shouted “this is what you want to do!”
In the final year of my degree we had a module called computer-aided drug design, and I was hooked: hooked by the concept of a computer being able to model a chemical process. By calculating a chemical’s properties and its interactions with other compounds, we could virtually screen compounds in silico, or design the perfect compound for a drug to act on a specific target. Now, this process wasn’t perfect, but in a matter of hours, days or weeks we could come up with something that, through traditional methods, would have taken years, with dozens of chemists synthesizing and testing compounds. In short, we could get faster and better, thanks to computers.

I was so hooked that it took me on to do my PhD at Cambridge in molecular informatics, where my project was very cool: I got to play with stereo 3D visualization long before 3D was even in the cinema. Even in the sea of intellectuality that was the theoretical and computational chemistry department at the University of Cambridge, I was still the nerd when it came to computers. I gamed, and as I was a poor student I’d build my own computers; I frequently overclocked them too far and broke components, so I was always fixing something. That carried over into my professional life, and I was frequently the person asked to help out when workstations did funny things. After (finally) writing up my PhD I had nothing planned, and the head of the group asked me if I wanted to stay on as a computer officer (now we are getting somewhere). Very quickly I realised it was the challenge of learning things, coupled with the use of technology, that gets me going!

When I joined the group, IT was basically firefighting. All users had root or admin on their workstations, and frankly it was a nightmare. There were network policies defining whether personal machines were allowed on the network, and this department of 2,500 registered machines and around 10,000 registered users had a team of 7 computer officers managing everything: OS installs, upgrades, application installs, machine meltdowns, network infrastructure maintenance and socket patching. It was all firefighting; there was no manpower for development or departmental infrastructure.
Now, I came on as an assistant to the computer officer in the Unilever Centre (hello Charlotte!), and Charlotte, to my eternal gratitude, instead of using me as a personal slave assistant in a very hectic work environment, decided she wanted me to manage the training area: a room of 25 identical workstations we used to host events and workshops. The training area was frequently reinstalled, with various packages installed for particular workshops. Indeed, one of my first tasks was to install Office 2000 on each machine. Amazingly, we had only one CD, and it involved going to each machine in turn, powering it up, logging on, inserting the disk and installing Office, 25 times!

Now, I’m not the brightest, but even to me it seemed there must be a better way. Charlotte sat me down and said she wanted me to go on a training course for Active Directory, as she had done one “a while ago” and was “fairly sure it had the answer”. And boy, was she right!

The next couple of years of my life were spent implementing an AD infrastructure in our little sub-department. Our carrot to entice users into letting us manage their machines was a little fileserver I knocked together out of recycled bits kicking around; IIRC it had a 6TB RAID 6 array. On it I gave each user space mapped as a network drive, along with an assurance that three disks would need to die simultaneously for their data to be lost.
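That “three disks must die” assurance is just the RAID 6 arithmetic: dual parity costs two disks’ worth of capacity and survives any two simultaneous drive failures. A quick sketch; the eight-disk layout below is my guess at the configuration, as I no longer remember the exact drives.

```python
def raid6_usable_capacity(num_disks: int, disk_tb: float) -> float:
    """Usable capacity of a RAID 6 array: two disks' worth goes to parity."""
    if num_disks < 4:
        raise ValueError("RAID 6 needs at least 4 disks")
    return (num_disks - 2) * disk_tb

# e.g. eight 1 TB disks would have yielded that 6 TB array
print(raid6_usable_capacity(8, 1.0))   # -> 6.0

# RAID 6 tolerates any 2 failed disks; a 3rd simultaneous failure loses data
FAILURES_TOLERATED = 2
```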
Over time this grew, and when Charlotte went off to follow her passion for marine biology and her job became available, I was successfully recruited, and the domain slowly grew, research group by research group. It grew thanks to a cohesive strategy emerging within the department, which resulted in departmental resources that we spent on servers, partly because the domain had proved its worth in terms of network security, machine maintenance and the resiliency of user data, and so now justified departmental investment. When we first set up the domain we used old workstations, probably 1GHz Celerons, as domain controllers, and made sure we had enough of them for replication to keep us safe.

Now, with investment, we moved onto virtualized servers, where each virtual server was hosted on a pair of physical servers with network-mirrored hard drives, automatic failover and so on. System uptimes from that point on were just great: 99.9% and upwards.
Now, whilst at the university I met a lovely Brazilian doctor; we fell in love and got married, and in time my eldest son was born. Circumstances then led us down a path which resulted in us moving to Brazil and my becoming a stay-at-home dad.
Fast forward 7 years: I was talking to my family at lunch about the opportunity to be blogging this today, and my son ended up asking, “What is cloud computing?”

That’s pretty much where I am. I was a sysadmin 7 years ago; I’m familiar with the concept of virtualization, and I’ve run virtual machines in the real world. I’m a bit out of touch at the moment, but I am now at a point in my children’s lives where I have more time to develop, and a brain that still wants to learn.
I eagerly accept Roger’s invitation to learn about the VMware platform and blog about my experience.

Thanks Roger!

 

Philip