The VMM must employ a deny-all, permit-by-exception policy to allow the execution of authorized software programs and guest VMs by verifying Image Profile and VIB Acceptance Levels.
Verify that the ESXi Image Profile only allows signed VIBs. An unsigned VIB represents untested code installed on an ESXi host. The ESXi Image Profile supports four acceptance levels: (1) …
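As a rough sketch of the check described above, the snippet below ranks VIB acceptance levels and flags anything below a chosen floor. The inventory data is hypothetical (on a real host you would gather it from the output of `esxcli software vib list` or via PowerCLI); only the four acceptance-level names come from ESXi itself.

```python
# Minimal sketch: rank ESXi VIB acceptance levels and flag VIBs below a
# minimum threshold. The level names are the real ESXi acceptance levels;
# the inventory list below is made-up sample data, not live host output.

ACCEPTANCE_RANK = {
    "VMwareCertified": 3,
    "VMwareAccepted": 2,
    "PartnerSupported": 1,
    "CommunitySupported": 0,
}

def flag_vibs(vib_rows, minimum="PartnerSupported"):
    """Return names of VIBs whose acceptance level is below `minimum`."""
    floor = ACCEPTANCE_RANK[minimum]
    return [name for name, level in vib_rows if ACCEPTANCE_RANK[level] < floor]

# Hypothetical inventory: (VIB name, acceptance level)
sample_inventory = [
    ("esx-base", "VMwareCertified"),
    ("net-driver-x", "PartnerSupported"),
    ("homelab-tool", "CommunitySupported"),
]

print(flag_vibs(sample_inventory))  # -> ['homelab-tool']
```

Anything this flags would be a candidate for removal or for justification as a documented exception under the deny-all policy.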
Once you’ve determined what needs to be addressed, my recommendation would be to either use this script to remediate the hosts and then apply the changes to all hosts via Host Profiles, or deploy the VIB that’s over on the VMware Flings page.
VMware released another great security guide for vSphere 6.7.
vSphere Security provides information about securing your vSphere environment for VMware vCenter Server and VMware ESXi. To help you protect your vSphere environment, this documentation describes available security features and the measures that you can take to safeguard your environment from attack.
Proactive HA will detect degraded hardware conditions on a host and allow you to evacuate the VMs before the issue causes an outage. Failure happens at the most inopportune times. Degraded hardware can go on for minutes, hours, or even days, and when it eventually fails, workloads need to be restarted by HA. In reality, if only vCenter or the administrator had known, the workloads could have been kept from failing!
Proactive HA can respond to different types of failures.
Currently, there are five failure events that it uses.
In a typical server failure, the server just goes down and HA restarts the VMs. Proactive HA, however, allows you to configure certain actions for events that MAY lead to VM downtime. For instance, let’s say a power supply has gone down. Your server has redundant power supplies, so it is still up, but it now has a single point of failure and is in a degraded state. When this occurs, Proactive HA will be triggered, the remaining VMs will be evacuated to a healthy host in the cluster, and the degraded host will be put into one of the “modes” below.
How do we know the host is degraded? There are new components called Health Providers that come into play. The Health Providers as of this writing are Dell, Cisco, and HP, but I am sure more will be added in the future.
The health provider reads all the sensor data from the server, analyzes the results, and sends the state of the host to vCenter Server. These states are Healthy, Moderate Degradation, Severe Degradation, and Unknown — also known as Green, Yellow, and Red! Each provider will differ depending on the server vendor and may offer additional features or functionality compared to its competitors, so be aware of that. Once vCenter is in the loop and aware of the degraded host, DRS can act based on the state of the hosts in the cluster. As with traditional DRS, it evaluates where VMs can go and migrates them to their new hosts.
There are three options for partial failed hosts:
Quarantine mode – Do not add new VMs to the host.
Maintenance mode – Migrate all VMs of the host and place it in maintenance mode.
Mixed mode – For a moderate failure, keep the VMs running; for a severe failure, migrate the VMs off the host.
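The decision table above can be sketched as a small function. This is illustrative only — the state and mode labels mirror the prose here, not actual vCenter API identifiers, and the real decisions are made by DRS using health-provider data:

```python
# Sketch of the Proactive HA decision table described above.
# Labels are illustrative strings, not vSphere API constants.

def remediation_action(health_state, remediation_mode):
    """Map a host health state + configured remediation mode to an action."""
    if health_state == "Healthy":
        return "none"
    if remediation_mode == "Quarantine":
        return "quarantine"      # stop placing new VMs on the host
    if remediation_mode == "Maintenance":
        return "maintenance"     # evacuate all VMs from the host
    if remediation_mode == "Mixed":
        # Moderate failure: keep VMs running (quarantine only);
        # severe failure: migrate the VMs off.
        if health_state == "ModerateDegradation":
            return "quarantine"
        return "maintenance"
    return "none"

print(remediation_action("ModerateDegradation", "Mixed"))  # quarantine
print(remediation_action("SevereDegradation", "Mixed"))    # maintenance
```

Mixed mode is the interesting row: it only escalates to a full evacuation when the host's condition is severe.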
Let’s talk about Quarantine mode first. Quarantine mode allows VMs to be vMotioned off the host only if:
There is no performance impact on any other VMs in the cluster.
None of the DRS rules are compromised.
Quarantine mode also makes sure that none of the newly
built VMs in the cluster are placed on that host: when you build a new machine,
DRS takes the host’s state into consideration and will not place the new
machine there. (Evacuating the VMs off the host entirely is what Maintenance
Mode does.)
Now that we’ve covered quarantine mode, let’s cover
maintenance mode in a bit more detail. Maintenance Mode will evacuate all the
VMs off the host. You might be familiar with this mode already, as it’s been
around for a while and is often used for patching hosts. It does not allow any
VMs to run on the host. With Quarantine Mode, by contrast, a full evacuation is
not guaranteed. Quarantine Mode is considered the new middle ground: an
ESXi host in quarantine can and will be used to satisfy VM demand where needed,
the opposite of Maintenance Mode.
VMware vApps are perhaps one of
the most underutilized features of vCenter Server. A vApp is an application
container, like a resource pool if you will but not quite, containing one or
more virtual machines. Similar to a VM, a vApp can be powered on or off,
suspended and even cloned. The feature I like best is the ability to have
virtual machines power up (or shut down) in a sequential fashion using one
single mouse click or command. Suppose you have a virtualized Microsoft-centric
environment comprising a file server, a DNS server, a couple of AD domain
controllers, and an Exchange server. VMware refers to such environments as multi-tiered applications.
Normally you would switch on the
DNS server first, followed by the domain controllers, the file server and
finally the Exchange server. The reverse sequence holds true when it comes to
powering down the entire environment perhaps due to scheduled maintenance. A
vApp allows you to group all these components under one logical container.
Better still, you can specify the VM startup order and the delay between
powering up or shutting down each VM.
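The startup-order behavior can be sketched as a short sequence with per-VM delays. Everything here is illustrative — the VM names and the list structure are made up for the Microsoft-centric example above, and a real implementation would call the vSphere API (e.g. via pyVmomi) rather than just collecting names:

```python
# Sketch of a vApp-style sequential power-on with delays between VMs.
# VM names and delays are hypothetical; the power-on itself is a placeholder.
import time

start_order = [
    ("dns01", 30),          # (VM name, seconds to wait before the next VM)
    ("dc01", 30),
    ("dc02", 30),
    ("fileserver01", 60),
    ("exchange01", 0),      # last VM: no trailing delay needed
]

def power_on_sequence(order, wait=time.sleep):
    """Start VMs in order, pausing between each; returns the start order."""
    started = []
    for vm, delay in order:
        started.append(vm)  # placeholder for the actual power-on call
        if delay:
            wait(delay)     # injectable so tests can skip real sleeping
    return started

print(power_on_sequence(start_order, wait=lambda s: None))
# -> ['dns01', 'dc01', 'dc02', 'fileserver01', 'exchange01']
```

Shutting the environment down is the same walk in reverse, which is exactly what a vApp gives you with a single click.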
Creating a vApp:
Change the view to “Host and Clusters”, right-click on
the cluster object and select “New vApp”.
(Optionally) Configure the virtual machine boot order and IP allocation policy.
You will find vApps to be extremely handy in disaster recovery scenarios where you would want
to automate and quickly power up mutually dependent virtual machines using a
single click or command. vApps also lend themselves extremely well to any
backup strategy by providing the means to quickly back up and restore
multi-tiered applications or environments using a single OVF package,
assuming they are static workloads. This in turn can be backed up or archived
as part of a disaster recovery plan.
Hey everyone, re:Invent this year was huge and VMware presented news about their future plans on AWS infrastructure. Much of my time was spent on Twitter and social media checking to see if any new relevant announcements would come out surrounding VMware on AWS. There’s been a lot – and I mean a lot – of activity since it was announced. VMware Cloud on AWS is the only hybrid cloud solution allowing you to modernize, protect and scale vSphere-based apps to the cloud, leveraging AWS. Together, these services integrate allowing you to rapidly extend and migrate your VMware environment to the AWS public cloud.
I thought I’d put together a quick post to highlight some of my favorite items that were announced at re:Invent. So here goes!
VMware Cloud on AWS Outposts
VMware and AWS are already huge Goliaths in the virtualization and cloud market, but they’ve partnered again to deliver a new as-a-service, on-premises offering that includes the full VMware software stack (think vSphere, vSAN, and NSX) running on AWS Outposts. After partnering last year to bring VMware virtualization software to the AWS public cloud, they’re now joining up to introduce “Outposts,” hardware that brings the AWS cloud on-premises. It’s a fully managed and configurable server built to run on AWS-designed hardware. It will be a subscription-based service and will support existing VMware payment options.
AWS CEO Andy Jassy said AWS Outposts provides a way to run AWS infrastructure on premises for a “truly consistent” hybrid experience. It’s available in two options, with the first through the VMware Cloud on AWS offering and the second as AWS native.
Option #1: For customers who want to use the same VMware control plane and APIs they’ve been using to run their infrastructure, they will be able to run VMware Cloud on AWS locally on AWS Outposts.
Option #2: For customers who prefer the same exact APIs and control plane they’re used to running in AWS’s cloud, but on-premises, they can use the AWS native variant of AWS Outposts.
AWS Outposts are in private preview, with public general availability in the second half of 2019, according to Amazon.
This offering is the AWS and VMware answer to the hybrid cloud deployment model Microsoft has been pushing with Azure Stack. This provides AWS a hybrid cloud play that they previously lacked, and sets up a rivalry of sorts in an area that Azure has dominated (hybrid deployments). There have been many AWS customers looking for this type of play, as well as many VMware customers wanting a more native hybrid offering. This solution covers both bases, and it will be interesting to see how it evolves over the coming year.
VMware Cloud Foundation for EC2
Another huge announcement from re:Invent was the addition of services that extend data center management to the public cloud, coined VMware Cloud Foundation for EC2. There are two major components: a mechanism to insert and manage these services on Amazon EC2, and the networking, security, data, and management services themselves. Together, they create a common set of data center services that spans the hybrid cloud. These services support all types of workloads, from traditional VM-based enterprise applications to modern container-based workloads on platforms like PKS or Red Hat OpenShift.
Hopefully, the above tools will help expand some environments. When they officially go live, it will be interesting to see the adoption. I’ll leave you with one of my favorite sessions I watched from re:Invent — I still have several more to catch up on. It’s a great video for anybody wondering about connectivity for VMware Cloud on AWS. If you’re new to it, or even just considering it, you should check this session out. One day I hope to make it to re:Invent. I hear it’s a great conference to go to!
If you’ve attended and would like to share your experience, let us know in the comments section below!