DH2i Closes 2021 As Another Year of Record Sales Growth, Product Innovation, and Strategic Partnership Development
Strengthened Foothold Across Key Verticals and Transitioned to 100% Software as a Subscription Model
The other day I ended up writing a .bat file to reset the admin password on the current version of Symantec Endpoint Protection Manager.
@start "UPDATPASS" "E:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\jre\bin\javaw.exe" -Dprism.order=sw -Xms128m -Xmx256m -Djava.library.path="%PATH%;E:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\tomcat\bin;E:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\ASA\win64" -Dcatalina.home="E:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\tomcat" -cp "E:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\Tools\tools.jar;E:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\bin\inst.jar;E:\Program Files (x86)\Symantec\Symantec Endpoint Protection Manager\bin\inst-res.jar" com.sygate.scm.tools.DatabaseFrame setpassword admin admin
I hope this helps someone.
Roger
Hello everyone! This is the first post in a series I plan to do around Splunk. One of the things we wanted to do in our Splunk environment was have the Splunk HTTP Event Collector (HEC) set up across multiple Heavy Forwarders (HFs). This way we could have all the same inputs defined once and deployed to all HEC-enabled HFs. There are two ways to do this (NOTE – this document assumes an on-premises Splunk deployment): (1) manage and deploy the splunk_httpinput app itself from the deployment server, or (2) create a dedicated deployment app that contains only your HEC inputs.
We chose the latter option for a very important reason: some of our HEC-enabled HFs also had Splunk DB Connect installed. DB Connect uses the HEC config that is stored in $SPLUNK_HOME/etc/apps/splunk_httpinput/. If you use option number 1, when the app is deployed it will delete the splunk_httpinput folder you already have on the DB Connect HF and you will lose connectivity to DB Connect. I learned that lesson the hard way. Luckily, we had a git server that recorded the changes that were made, so I was able to roll it back. The rest of this post will show you how to create the app on the deployment server, use another Splunk instance to create your HEC inputs and generate tokens, copy those inputs from inputs.conf into the app, and then deploy that app to the HEC-enabled HFs.
First, let's create the app. As it is only going to hold an inputs.conf file, here is what I did. Get into the CLI of your deployment server and change into the $SPLUNK_HOME/etc/deployment-apps/ folder. Create a new folder using whatever name you want the app to have; in this example, let's use HECinputs. Go into the HECinputs folder and create a local folder. We will come back to the CLI in a bit.
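For reference, those CLI steps boil down to something like the following. This is a minimal sketch that assumes a Linux deployment server with $SPLUNK_HOME set and the example app name HECinputs:

cd $SPLUNK_HOME/etc/deployment-apps
mkdir -p HECinputs/local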
Next, open up the Splunk GUI on one of your other Splunk instances. I have a locally installed copy on my laptop that I use for this. After you log in, click Settings and then Data inputs.
On the Data inputs screen, click HTTP Event Collector.
On the HEC screen, click New Token at the top right:
On the Add Data screen, enter the name for this HEC token and click the green Next button at the top of the screen:
On the next page, you need to either pick an existing source type or click New and type in the new sourcetype name. In the app context drop down, pick an app to store this config in. Remember this app as you will need to go there in the CLI to get the inputs.conf file. Choose the allowed indexes for this input and also choose a default index. In most cases, this will be a HEC input for one index so you would choose that index for allowed and default. Click the green Review button at the top.
Review your settings and click the green Submit button at the top.
You have created the HEC input. The token value is auto-generated and you can see it in the GUI or in the inputs.conf file. Since we are going to copy the contents of inputs.conf from this server to the app on the deployment server, let's go to the CLI of the server you just created the input on. In my example, this is a Linux server. Once you are logged into the CLI, go into the $SPLUNK_HOME/etc/apps folder. Next, do you remember the App Context setting? That is the folder you want to go into. I put the Test HEC input into the Search app, so I need to go into the search folder ($SPLUNK_HOME/etc/apps/search). Go into the local folder under search. In this folder, there will be an inputs.conf file. Show the contents of the file:
[http://Test]
disabled = 0
index = main
indexes = main
sourcetype = _json
token = 3396a456-a09d-484e-bb08-ec82086479b8
As you can see, a HEC input stanza will start with http:// and have the name you specified. See the token? That is what you have to provide to the person who needs to send data to this HEC input. Copy this entire section and paste it into an inputs.conf file in the app you created on the deployment server. Save the file and your app is ready to deploy.
This post will not cover the aspects of the deployment process. Refer to Splunk Docs link below for deploying apps to HFs (Deploy Apps From Deployment Server)
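Once the app has been deployed to your HEC-enabled HFs, you can sanity-check a token by posting a test event straight to the collector endpoint. Here is a minimal sketch using PowerShell's Invoke-RestMethod; the hostname is a placeholder, 8088 is the default HEC port, the token is the example value from above, and -SkipCertificateCheck (PowerShell 7) is only needed if the endpoint still uses Splunk's self-signed certificate:

Invoke-RestMethod -Method Post -Uri "https://your-hf.example.com:8088/services/collector/event" -Headers @{ Authorization = "Splunk 3396a456-a09d-484e-bb08-ec82086479b8" } -ContentType "application/json" -Body '{"event": "HEC test event", "sourcetype": "_json", "index": "main"}' -SkipCertificateCheck

A successful call comes back with a Success response from the collector, and the test event should show up in the index you chose.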
Do you have some Splunk things you would like for me to blog about? Please reach out to me and let me know!
As of writing, the Cloud Native Computing Foundation lists 137 different certified distributions, or flavors, of Kubernetes. That's a very crowded and fragmented marketplace for consumers to choose from. Much of the differentiation between distributions is within the CNI and CSI – the container networking interface and container storage interface. A company's opinion or innovation around how to handle networking and storage generally powers the differentiation of a distribution.
HPE Container Platform was first introduced back in November 2019. Unlike its competitors, HPE offers CSIs for many of its storage offerings, allowing persistent storage across many Kubernetes distributions. And if the company's strategy is to support many distros, why have its own? HPE's Container Platform is based on 100% pure Kubernetes. That's not differentiated, so how is HPE trying to make its distro different?
Kubernetes is Kubernetes. In HPE's case, the lifecycle controller and management plane that came from the BlueData acquisition is the first place it is differentiating its offering. HPE touts it as the first enterprise-ready platform – marketing that is next to meaningless. While BlueData released multiple earlier versions, the first HPE release, version 5 in November 2019, made the switch to Kubernetes as the orchestration engine while continuing to use the BlueData controller.
Controllers and management planes are important. Lifecycle is an important concept, and Kubernetes is like any other software: you must upgrade and manage your orchestration software. A lot of the value proposition of VMware's PKS/TKGi offering is its lifecycle management driven through BOSH. The HPE Container Platform has a very similar value proposition to ease administration for the enterprise.
HPE's Container Platform is bare-metal. It runs directly on hardware and does not have the extra licensing costs or overhead associated with competitors' offerings that run inside of a hypervisor. But unlike other bare-metal competitors, HPE touts that its physical platform can support both stateless and stateful workloads. HPE is squarely talking about VMware here.
VMware has made a lot of noise around Kubernetes in the last couple of years by integrating it as a first-class citizen on its hypervisor in the latest release. VMware's opinion is that the best way to orchestrate containers is on a hypervisor – and HPE is here to say that it believes bare metal is the best way to run containers.
HPE touts its data layer as the answer to running both stateless, cloud-native workloads and stateful, traditional workloads side by side on its Container Platform. This is an important concept. A big barrier to entry for cloud-native software and operations is the legacy software and systems that companies have investments in.
Within my own organization, the conversation about what is container-ready and what is not is a big sticking point in migrating to an overall DevOps methodology of software release and management. HPE's ability to let stateful, legacy software run in a container beside stateless, cloud-native services is a potentially massive advantage.
The BlueData acquisition powers most of what is in the Ezmeral portfolio. It is an opinionated release of Kubernetes that packages AI/ML and data analytics software together into an app store. In terms of distributions and addressable markets, HPE is smart to focus on a market with maturing software products that run cloud-native and tap into the full ability of bare metal to deliver value to customers, and to develop a full ecosystem around that market.
Artificial Intelligence and Machine Learning both benefit from lots of resources – both CPU and RAM – along with many other data services and analytic software packages. This is a smart bet to align to the company’s strength in hardware and focus on software that extends those benefits.
Think DevOps married to Machine Learning and you get ML Ops. It is DevOps-like speed and practice brought to Machine Learning workflows. Machine Learning, by its very nature, is iterative, and workflows are key to iterative work. This is a specialized niche of the industry at this time, but it's one where HPE can stake out a larger opinion and leadership position, rather than targeting mass adoption like its failed Helion cloud efforts in the past.
Today, HPE introduced Ezmeral, a new software portfolio and brand name, at its Discover Virtual Experience, with product offerings spanning container orchestration and management, AI/ML and data analytics, cost control, IT automation and AI-driven operations, and security. The new product line continues the company's trend of creating software closely bundled with its hardware offerings.
Three years ago, HPE divested itself of its software division. Much of the 'spin-merge', as it was called, was a portfolio of software components that didn't have a cohesive strategy. The software that remained in the company was closely tied to hardware and infrastructure – its core businesses. And like the software that powers its infrastructure, OneView and InfoSight, the new Ezmeral line is built around infrastructure automation, namely Kubernetes.
Infrastructure orchestration is the core competency of the Ezmeral suite of software – with the HPE Ezmeral Container Platform serving as the foundational building block for the rest of the portfolio. Introduced in November, HPE Container Platform is a pure Kubernetes distribution, certified by the Cloud Native Computing Foundation (CNCF) and supported by HPE. HPE touts it as an enterprise-ready distribution. HPE is not adding a lot of value in the container layer itself, but it is working to help enterprises adopt and lifecycle Kubernetes.
Kubernetes has emerged as the de facto infrastructure orchestration software. Originally developed by Google, it allows container workloads to be defined and orchestrated in an automated way, including remediation and leveling. It provides an abstracted way to handle networking and storage through the use of plug-ins developed by different providers.
A single piece of software does not make a portfolio, however, so there is more. In addition to the Kubernetes underpinnings, the acquisition of BlueData provides HPE with an app store and controller that assist with the lifecycle and upgrades of both the products running on the Container Platform and the platform itself. There is a lot of value to companies in these components.
The HPE Data Fabric is also a component of the portfolio, allowing customers to pick and choose their stateful storage services and plugins within the Kubernetes platform.
The focus of the App Store is on analytics and big data, with artificial intelligence and machine learning featured prominently as well. The traditional big data software titles – Spark, Kafka, Hadoop, etc. – will continue to be big portions of the workloads intended to run on the HPE Ezmeral Container Platform.
The new announcement at the HPE Discover Virtual Experience centers on Machine Learning and ML Ops.
According to the press release, HPE Ezmeral ML Ops software leverages containerization to streamline the entire machine learning model lifecycle across on-premises, public cloud, hybrid cloud, and edge environments. The solution introduces a DevOps-like process to standardize machine learning workflows and accelerate AI deployments from months to days. Customers benefit by operationalizing AI / ML data science projects faster, eliminating data silos, seamlessly scaling from pilot to production, and avoiding the costs and risks of moving data. HPE Ezmeral ML Ops will also now be available through HPE GreenLake.
“The HPE Ezmeral software portfolio fuels data-driven digital transformation in the enterprise by modernizing applications, unlocking insights, and automating operations,” said Kumar Sreekanti, CTO and Head of Software for Hewlett Packard Enterprise. “Our software uniquely enables customers to eliminate lock-in and costly legacy licensing models, helping them to accelerate innovation and reduce costs, while ensuring enterprise-grade security. With over 8,300 software engineers in HPE continually innovating across our edge to cloud portfolio and signature customer wins in every vertical, HPE Ezmeral software and HPE GreenLake cloud services will disrupt the industry by delivering an open, flexible, cloud experience everywhere.”
HPE also reiterated its engagement with open-source projects and specifically to the Cloud Native Computing Foundation (CNCF), the sponsor of Kubernetes, with an upgraded membership from silver to gold status. The recent acquisition of Scytale, a founding member of the SPIFFE and SPIRE projects, underscores the commitment, representatives say.
“We are thrilled to increase our involvement in CNCF and double down on our contributions to this important open source community,” said Sreekanti. “With our CNCF-certified Kubernetes distribution and CNCF-certification for Kubernetes services, we are focused on helping enterprises deploy containerized environments at scale for both cloud native and non-cloud native applications. Our participation in the CNCF community is integral to building and providing leading-edge solutions like this for our clients.”
Disclaimer: HPE invited me to attend the HPE Discover Virtual Experience as a social influencer. Views and opinions are my own.
In April 2020, VMware released vRealize Automation 8.1 – an incremental upgrade to vRealize Automation 8 – which includes a number of new features. Kubernetes support has been enhanced, Red Hat OpenShift support has been introduced, approval policies have been added, and there is now support for PowerShell in ABX and in vRealize Orchestrator, along with NSX and networking updates, RBAC enhancements, and a slew of other new features.
Let me be honest – giving vRealize Automation 8.0 and 8.1 the same name as the previous product is actually a disservice. This is an entirely new product and a new way of automating infrastructure for VMware. The new product is modern, itself containerized, and modular in the way VMware builds and distributes the software. It introduces modern concepts like code pipelines, continuous delivery, and infrastructure-as-code definitions.
vRealize Automation 8 is made up of three major components – Cloud Assembly, Code Stream, and Service Broker. Also packaged is vRealize Orchestrator, which has been closely coupled with the legacy vRA for many years.
Cloud Assembly is the portion of the product used to create blueprints and workflows. This is the section that allows us to create infrastructure-as-code definitions for templates that can be deployed. Cloud Assembly allows for a common definition of deployments through the abstraction of infrastructure. This lets the IaC definitions be normalized while the specifics are mapped back for operational purposes. For instance, in AWS or Azure you pick a specific VM image and instance size; in vSphere, those options are a combination of vCPU, RAM, and disk space. Cloud Assembly allows you to set up definitions – t-shirt sizes, if you like – that define small on vSphere as 2 vCPU, 8 GB of RAM, and 40 GB of disk space and map that to a specific SKU for a matching cloud instance.
Code Stream is a code pipeline allowing for continuous delivery. It is a DevOps tool with integrations to other tools that developers will be using to create software.
Service Broker is the catalog or portal for users to access and request services. This is the primary interface for non-infrastructure users. Governance is also built into Service Broker. In addition to blueprints created in Cloud Assembly, some services can be surfaced directly into Service Broker, like AWS Cloud Formation Templates.
The new vRealize Automation is made up of Kubernetes-orchestrated containers.
Dell EMC PowerPath is a fantastic tool for migrations (as long as you run Dell EMC storage products). The PowerPath tool is a monitoring and path management suite that includes migration capabilities. The cool thing is the migration capabilities also extend to Microsoft Failover Clusters.
Recently, I was trying to retire all of our remaining VNX out of the datacenter and move some LUNs onto XtremIO. PowerPath Migration to the rescue. I’m also reposting this because the original post I used is no longer online.
I like to use either a PowerShell or CMD prompt window to get started. Navigate to C:\Program Files\EMC\PowerPath\. In that directory, the powermt, powermig and powermigcl commands are available.
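For example, from a PowerShell window that looks like this (the default install path; adjust it if PowerPath was installed somewhere else):

cd "C:\Program Files\EMC\PowerPath"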
The first step is to identify the source and target hard disks. To do this, you can use diskpart to list the disks:
diskpart
list disk
You can also use PowerPath tools to list all of the disks and their aliases. In the output, find the source and target disks (their aliases from the array are also displayed) and look for the line called “Pseudo name.” It should be something like harddisk#.
powermt display dev=all
The next step is to set up the cluster for migrations. List the cluster resource groups first with powermigcl.
powermigcl display -all
Next, you will issue a command to set up one of the cluster resource groups.
powermigcl config -group "GroupName"
You may also verify that the group is now configured by re-issuing the display command used above.
The next step is to set up a pair of disks for migration. For this, you use the powermig command.
powermig setup -techtype hostcopy -src harddisk9 -tgt harddisk13 -cluster mscsv -no
In the output, a handle will be assigned to this job. You will use that handle in subsequent commands for this migration job. Also, because each pair of disks has its own handle, you can migrate multiple disk pairs at the same time. Reuse this command to set up additional pairs of disks for migration if you need to.
Now, you’re ready to start the real work and kick off the migration.
powermig sync -handle c1
Now that you have kicked off your migrations, you monitor them. Issue the following command to get details on the migration of each pair. Once a pair is in sync, you will see a status of sourceSelected.
powermig query -all
Once the migration is complete and the disks are in sync (sourceSelected), you issue the command to transition from the source to the target disk.
powermig commit -handle c1
Once a handle completes, you can remove the configuration
powermig cleanup -handle c1 -no
And like anything else, cleanup is always best. The last command takes the cluster group out of migration mode. This command cannot complete if any handles are still active; the cleanup has to be completed for all handles before this final step can be performed.
powermigcl unconfig -group "GroupName"
I have had a couple of disk migrations fail. So what do you do? There is a powermig command to abort the handle, which backs the migration out so you can start again.
powermig abort -handle c1
Last month, I had the chance to attend UIPath Together in Washington, DC. UIPath is a company that makes robotic process automation (RPA) software – software that uses defined task routines to complete repetitive tasks. Like any other automation, you define the steps required to complete a task and combine that with variables that change based on the job run to do work. The difference between RPA tools and, say, a PowerShell routine is that the RPA robot acts like your employee: it logs into an interactive desktop session and completes the work just like a human would, doing the same steps repetitively based on the job definition.
Being set in DC, the event had a Federal government focus, but even though the speakers were from government agencies, it was pretty clear that RPA has a universal value proposition: replacing the rote and mundane tasks in workers' days with robots that can do the same work, freeing your employees for higher-value tasks.
One observation is that RPA software is complementary to all of the other applications that you already have in your portfolio. It doesn't really compete with any existing software, and because of the nature and design of RPA software, it leverages your current applications no matter where they live. RPA jobs can be leveraged against your on-premises systems and infrastructure, against your cloud-deployed applications, and against third-party applications – anywhere, anytime.
The robot doesn’t have worker laws dictating the maximum number of hours and conditions under which it works. The robot can work 24 hours a day and 7 days a week without rest, if the job queue has enough work to keep it busy.
RPA jobs can also be set up to mimic the interaction of a real user, meaning there are few limitations to this software. Unlike API-based automations, RPA can be set up to interact with websites that lack APIs or interfaces of any kind. The robot launches and interacts with a webpage in a very similar way to a real user.
User accounts and permissions for robots are also set up like real users', with the key difference being the number of hours we are able to work that 'employee.'
For 2018, I’m hoping to do a weekly PowerShell related post based on some of my experiences. I have found that a lot of my infrastructure-focused peers don’t have a great programming background and my hope is that these posts can help folks feel more comfortable conquering their work in PowerShell and save a lot of time!
For the first post, I’m sharing 5 basic things that can help you do more with PowerShell, faster.
Everyone should know to use Get-Help. It is the equivalent of a Linux man page within PowerShell, and it tells you all about a cmdlet or lists the available cmdlets. Wouldn't you like to have that information while you're writing a command, without scrolling? Try adding -ShowWindow to the Get-Help command. This will put the output into a separate window, with resizing, search, and full details.
[code language="powershell"]Get-Help Get-ADUser -ShowWindow[/code]
The Get-Member cmdlet, also abbreviated GM, will enumerate the properties and methods inside of an object in PowerShell. This lets you know what methods you can run directly on the object and what properties you can select, scope, or sort with. Methods are things that you can do, and many of them return data; the data type of the returned data is shown in the Definition. Properties are pieces of data, and the Definition shows each one's type.
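As a quick illustration (my own example, not tied to any particular module), pipe any cmdlet's output into Get-Member; Get-Process is used here simply because it is available on every Windows box:

[code language="powershell"]Get-Process | Get-Member[/code]

The MemberType column separates the Methods from the Properties, and the Definition column shows the data type each member works with.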
This is one of my go-to problem solvers, especially in PowerCLI. If you have a cmdlet that requires parameters that need to be other objects, you can always embed a cmdlet in that position, enclosed in parentheses. PowerShell will execute the cmdlet in the parentheses, return the object, and pass it in as a parameter. This isn't the pipeline. Here's a PowerCLI example – if you need to move a VM with Move-VM, you may want to specify a datastore. To find a datastore, you can use Get-Datastore or Get-DatastoreCluster.
[code language="powershell"]Move-VM -VM vmname -Datastore (Get-Datastore DatastoreName)[/code]
PowerShell is object oriented, meaning all returned data is stored in objects. An object has both individual data fields with values and methods that can be used to change or update that data. PowerShell has the concept of the pipeline. The pipeline allows you to move an object from command to command while doing work with the object.
For instance, you have a Get-ADUser cmdlet that returns users from AD. There are 10 users returned from your result and they come back as an object displayed on screen. If you use the pipe character – | – you can take this object and pass it into the next command, the way a pipeline passes water, fuel or oil from segment to segment.
The easiest way to start with commands is to use similar ones – Get-ADUser and Set-ADUser work well together to make changes to user accounts. In the example below, we set the company name for users using Set-ADUser.
[code language="powershell"]Get-ADUser -Filter * | Set-ADUser -Company "mycompany"[/code]
Sometimes you can't pipe an object into another cmdlet. In those cases, don't forget you can loop in PowerShell. Loops are very useful, and my favorite is ForEach. A ForEach loop will accept objects via the pipeline from another cmdlet and then iterate through every object to let you do work. Inside of the loop, $_ represents the current object, and you can use dot notation to access its properties and methods. Everything in the loop is enclosed in a set of curly braces { }.
You will commonly use this method when working between unlike cmdlets – like getting a list of computers from PowerCLI (VMware's PowerShell toolkit) and then doing work on them using AD cmdlets from the built-in toolset.
[code language="powershell"]Get-VM | ForEach { <# do some work with $_ here #> }[/code]
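To tie that back to the PowerCLI-plus-AD scenario above, here is a minimal sketch (my own example, not from the original post); it assumes the ActiveDirectory module and VMware PowerCLI are both loaded and that your VM names match their AD computer account names:

[code language="powershell"]Get-VM | ForEach-Object { Get-ADComputer -Filter "Name -eq '$($_.Name)'" }[/code]

Each pass through the loop looks up the AD computer object whose Name matches the current VM; from there you could just as easily pipe into Set-ADComputer to update a description or move the object to another OU.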
HPE is taking on information archiving in a new cloud play, introducing the Verity Information Archiving solution. Verity is intended to be a single source of all your information archives and allow for easy access into data repositories in the enterprise.
With built-in connectors for many enterprise applications like Exchange, SharePoint and Skype for Business, administrators are freed from managing the processes of moving data to third-party archives. Other archives can also be manually uploaded to the platform through an easy import wizard.
But Verity is more than an archiving solution – it is a SaaS framework. HPE calls it the 'single source of truth' and intends to extend the framework to include other archives like eDiscovery, backup and recovery data, structured data, and even business apps. Building on the technology of the initial release, extensions will be introduced over time, not only extending capabilities but also allowing the framework to run on Amazon Web Services, HPE Helion OpenStack, Microsoft Azure, and even on-premises OpenStack deployments. This will allow the enterprise to benefit from Verity's capabilities and the economics of cloud, regardless of where they want or need to host the application.
HPE has built Verity to be a compliance-oriented information archive, tailoring the capabilities to mitigate risk and ensure defensible policy management. It includes information security as a planned feature of the platform, not an afterthought. The security measures include encryption for the data, WORM policies, and data sovereignty to ensure that companies can meet the demands that litigation and regulation place on them. Because it is a cloud-native platform, it scales and adapts to changing requirements easily. On top of all of these features, HPE is also combining its rich data analytics assets into the solution.
The Verity framework includes a set of rich analytics, as you would expect from the company that owns Autonomy and Vertica. The analytic visualizations are interactive and actionable – meaning that you can drill into the data from its summary formats and immediately act upon the data you've found. HPE says that "archiving is no longer about just retaining data, but transforming passive data into big data analytics."
HPE Verity is built with a modern web user interface. Built on the same technologies as HPE OneView and all of HPE's modern software, the interface is intuitive and approachable for generalists and experts alike. The user interface is consistent across all of the applications, and it includes visualizations to easily tap into the data stored on the platform. Users are able to work with archived data in real time, and the framework allows for the enterprise-required roles and permissions to target and limit users' access to the specific data required for their role.
For more information, see http://www.hpe.com/software/hpeverity.