Horizon on VMC on AWS Lessons learned


This is a follow-up blog on designing and deploying Horizon on VMC on AWS; I’m creating a blog series on the lessons learned during several projects.

Deploying Horizon on VMC on AWS


All solutions have limitations, and Horizon on VMC on AWS has multiple features that are currently not supported or never will be, for example, linked clones.

VMware has done a great job of providing a clear overview of supported and unsupported features: https://kb.vmware.com/s/article/58539

  • vTPM is currently not supported, so using Windows 11 as the VDI OS is not yet possible.
  • vGPU is currently also not available (we are waiting for something like “G3” node availability).

Some additional limitations regarding NSX that are important for you to know:

  • NSX load balancing is currently not supported; use the NSX Advanced Load Balancer (Avi) or an AWS EC2 load balancer instead.
  • NSX micro-segmentation requires the additional vRealize Log Insight and vRealize Network Insight services for visibility into logs and traffic flows.

Integrating AWS Native services

Horizon on VMC on AWS allows for direct integration with native Amazon Web Services (AWS) offerings like:

  • Load balancing
  • DB service
  • File service
  • AD/DS service

I will not go in-depth on all possible AWS services, as there are far more than these four (200+ services).
Instead, I will primarily focus on load balancing, as this is the service most frequently combined with Horizon on VMC.


In a standard deployment, the following components of the Horizon solution need load balancing: AppVolumes, Connection Servers, and UAGs.

Horizon on VMC on AWS network topology

The AWS EC2 load balancing service is a good fit for VMC on AWS, but there are some caveats.


When you have created your DMZ VLANs (single or dual DMZ) in your VMC SDDC, some additional changes need to be made.
When you create a public load balancer, it will automatically be assigned to one of three subnets generated and managed by your AWS VPC: public-subnet-A/B/C.

By default, these subnets route all their traffic towards the Internet Gateway (IGW), as they are marked for a “public/external-facing load balancer”. So to ensure the external load balancer can communicate with the UAGs in the DMZ VLANs on the SDDC, we need to add an additional rule to the routing table of the VPC.

AWS public load balancer routing

As you can see in the last rule, I added an additional route for my DMZ subnet that forces traffic to the ENI interface.
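If you prefer scripting this change, the extra route can be added with the AWS CLI. This is a minimal sketch; the route table ID, DMZ CIDR, and ENI ID below are placeholders that you must replace with the values from your own VPC and SDDC:

```shell
# Add a route that sends traffic destined for the SDDC DMZ subnet
# to the ENI that connects the VPC with the SDDC.
# rtb-..., 192.168.240.0/24, and eni-... are placeholder values.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 192.168.240.0/24 \
  --network-interface-id eni-0123456789abcdef0
```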


Cookie-based persistence is required for all Horizon components that are load balanced. The same applies to the EC2 load balancer, but I have learned that EC2 load balancing works differently than industry-standard load balancers like F5 and Kemp.

By default, if traffic flows over multiple Application Load Balancers with persistence enabled, only one layer will utilize the default persistence cookie. More info: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/sticky-sessions.html

To resolve this issue, we modified the target group to utilize an application cookie called ACCESSPOINTSESSIONID, which is sent by the UAG.
AWS EC2 Load balancer session cookie VMware Horizon UAG

As seen above, after modifying this on the LB target group for the external connections (UAGs), the internal session persistence for the Connection Servers kept working. This resolved an issue where authentication was failing because requests were being ping-ponged between the two Connection Servers.
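The same target-group change can be scripted with the AWS CLI. This is a sketch, assuming a placeholder target group ARN; the attribute keys are the standard ALB stickiness attributes:

```shell
# Switch the UAG target group from the default LB cookie to the
# application cookie that the UAG sends (ACCESSPOINTSESSIONID).
aws elbv2 modify-target-group-attributes \
  --target-group-arn "<your-uag-target-group-arn>" \
  --attributes \
    Key=stickiness.enabled,Value=true \
    Key=stickiness.type,Value=app_cookie \
    Key=stickiness.app_cookie.cookie_name,Value=ACCESSPOINTSESSIONID \
    Key=stickiness.app_cookie.duration_seconds,Value=86400
```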

Horizon secondary protocols:

A current limitation of the AWS Application Load Balancer is that it can only load balance TCP traffic. An AWS Network Load Balancer can load balance both UDP and TCP, but it does not work correctly with the UAGs and their health checks.

This means that forcing all traffic flows, both the primary and secondary Horizon flows, through the load balancer is not an option. For those who don’t know what the primary and secondary connection flows are:

  1. The initial connection to the Connection Server on TCP 443 for authentication and XML-API calls
  2. The PCoIP or Blast session established between the client and the VDI agent

So a viable solution is to utilize an N+1 VIP UAG deployment to ensure that secondary traffic flows are redirected directly to the UAGs themselves and not through the LB.
More info on the UAG deployment topologies: https://communities.vmware.com/t5/Horizon-Documents/Load-Balancing-across-VMware-Unified-Access-Gateway-Appliances/ta-p/2777028

The following diagram illustrates the N+1 deployment topology (method 3 in the link mentioned above):

N+1 VIP UAG topology

Relational Database Service (RDS):

The RDS services provided by AWS are a perfect way to provide an easy and cost-effective database service for your Horizon solution. They can be utilized to provide a highly available SQL instance for your AppVolumes deployment.

The configuration is very straightforward and eliminates the need for an additional standalone Windows SQL VM in your cluster.
A good and easy-to-follow guide is the one written by AWS themselves: https://aws.amazon.com/getting-started/hands-on/create-microsoft-sql-db/
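As a rough sketch of what such a deployment looks like from the AWS CLI (the identifier, instance class, credentials, and sizing below are example values only; follow the AWS guide above for the full setup):

```shell
# Create a Multi-AZ SQL Server Standard Edition instance for AppVolumes.
# All names and sizes are example placeholders.
aws rds create-db-instance \
  --db-instance-identifier appvolumes-db \
  --engine sqlserver-se \
  --license-model license-included \
  --db-instance-class db.m5.large \
  --allocated-storage 100 \
  --multi-az \
  --master-username avadmin \
  --master-user-password '<strong-password>'
```

Note that Multi-AZ for SQL Server on RDS requires the Standard or Enterprise edition.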

Amazon Certificate Manager (ACM):

If you are using SSL traffic flows in combination with, for example, load balancing, you need to ensure the appropriate certificates are installed on the LBs. This is done through AWS ACM, which can provision AWS-generated certificates, or you can import third-party certificates, whether issued by a public CA or your own CA server.

A key point not to forget: I highly recommend including the FQDN of the load balancer as a SAN attribute in your certificate. This allows you to test directly against the load balancer FQDN without receiving SSL errors, as an AWS LB never gets a static IP and is only reachable via an AWS-assigned FQDN.
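For example, requesting such a certificate through ACM with the load balancer FQDN as a SAN could look like this (both domain names are placeholders for your own Horizon FQDN and the FQDN AWS assigned to your load balancer):

```shell
# Request a public certificate with the Horizon FQDN as the subject
# and the AWS-assigned load balancer FQDN as a SAN.
aws acm request-certificate \
  --domain-name desktops.example.com \
  --subject-alternative-names my-lb-1234567890.eu-west-1.elb.amazonaws.com \
  --validation-method DNS
```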

VMC cluster configuration

Within VMC on AWS, the concept of affinity rules does not exist. Instead, you will have to utilize compute policies to enforce anti-affinity.

The following article describes how the compute policies work and how they are configured:


I hope this lessons-learned post has helped you better implement and understand your Horizon on VMC on AWS solution.

Want to get started with designing your Horizon on VMC on AWS? Read my design article: Designing a Hybrid Cloud with Horizon on VMC on AWS.
