The Virtual Horizon Lab – February 2020

It’s been a while since I’ve done a home lab update. In fact, the last one was over four years ago. William Lam’s home lab project, along with an upcoming appearance on an episode of “Hello from My Home Lab” with Lindy Collier, has convinced me that it’s time for an update.

My lab has both changed and grown since that last update. Some of this was driven by vSphere changes – vSphere 6.7 required new hardware to replace my old R710s. Changing requirements, new technology, and replacing broken equipment have also driven lab changes at various points.

My objectives have changed a bit too. At the time of my last update, there were four key technologies and capabilities that I wanted in my lab. These have changed as my career and my interests have changed, and my lab has evolved with them. Today, my lab primarily focuses on end-user computing, learning Linux and AI, and running Minecraft servers for my kids.

vSphere Overview

The vSphere environment is probably the logical place to start. My vSphere environment now consists of two vCenter Servers – one for my compute workloads and one for my EUC workloads. The compute vCenter has two clusters – a four-node cluster for general compute workloads and a one-node cluster for backup. The EUC vCenter has a single two-node cluster for running desktop workloads.

Both environments run vSphere 6.7 U3 and utilize the vCenter Server virtual appliance. The EUC cluster utilizes vSAN and Horizon. I don’t currently have NSX-T or vRealize Operations deployed, but both are on the roadmap to be redeployed.

Compute Overview

My lab has grown a bit in this area since the last update, and this is where the most changes have happened.

Most of my 11th-generation Dell servers have been replaced, and I only have a single R710 left. They were initially replaced by Cisco C220 M3 rackmounts, but I’ve since switched back to Dell. I preferred the Dell servers due to cost, availability, and the HTML5-based remote management in the iDRACs. Here are the specs for each of my clusters:

Compute Cluster – 4 Dell PowerEdge R620s with the following specs:

  • 2x Processors
  • 96GB of RAM

The R620s each have a 10GbE network card, but these cards are for future use.

Backup Cluster – 1 Dell PowerEdge R710 with the following specs:

  • 2x Processors
  • 24GB of RAM

This server is configured with local storage for my backup appliance. This storage is provided by 1TB SATA SSDs.

VDI Cluster – 2 Dell PowerEdge R720s with the following specs:

  • 2x Intel Xeon E5-2630 Processors
  • 96 GB RAM
  • NVIDIA Tesla P4 Card

Like the R620s, the R720s each have 10GbE networking available.

I also have an R730; however, it is not currently being used in the lab.

Network Overview

When I last wrote about my lab, I was using a pair of Linksys SRW2048 switches. I’ve since replaced these with a pair of 48-port Cisco Catalyst 3560G switches. One of the switches has PoE, and the other is a standard switch. In addition to switching, routing has been enabled on these switches, and they act as the core router in the network. HSRP is configured for redundancy. These uplink to my firewall. Traffic in the lab is segregated into multiple VLANs, including a DMZ environment.

I use Ubiquiti AC-Lite APs for my home Wi-Fi. The newer ones support standard PoE, which is provided by one of the Cisco switches. The UniFi management console is installed on a Linux VM running in the lab.

For network services, I have a pair of Pi-hole appliances. These appliances run as virtual machines in the lab. I also have Avi Networks deployed for load balancing.

Storage Overview

There are two main options for primary storage in the lab. Most primary storage is provided by Synology. I’ve upgraded my Synology DS1515+ to a DS1818+. The Synology appliance has four 4TB WD Red drives for capacity and four SSDs. Two of the SSDs are used for a high-performance datastore, and the other two are used as a read-write cache for my primary datastore. The array presents NFS-backed datastores to the VMware environment, and it also presents CIFS for file shares.

vSAN is the other form of primary storage in the lab. The vSAN environment is an all-flash deployment in the VDI cluster, and it serves up storage for VDI workloads.

The Cloud

With the proliferation of cloud providers and cloud-based services, it’s inevitable that cloud services work their way into home lab setups. My lab is no exception.

I use several cloud services in operating my lab, spread across a few SaaS and cloud providers. These include:

  • Workspace ONE UEM and Workspace ONE Access
  • Office 365 and Azure – integrated with Workspace ONE through Azure AD
  • Amazon Web Services – management integrated into Workspace ONE Access, S3 as an offsite repository for backups
  • Atlassian Cloud – Jira and Confluence Free Tier integrated into Workspace ONE with Atlassian Access

Plans Going Forward

Home lab environments are dynamic, and they need to change to meet the technology and education needs of their users. My lab is no different, and I’m planning to grow my lab and its capabilities over the next year.

Some of the things I plan to focus on are:

  • Adding 10 GbE capability to the lab. I’m looking at some Mikrotik 24-port 10GbE SFP+ switches.
  • Upgrading my firewall
  • Implementing NSX-T
  • Deploying VMware Tunnel to securely publish internal services
  • Putting my R730 back into production
  • Expanding my knowledge around DevOps and building pipelines to find ways to bring this to EUC
  • Working with Horizon Cloud Services and Horizon 7

Installing and Configuring the NVIDIA GRID License Server on CentOS 7.x

The release of NVIDIA GRID 10 included a new version of the GRID license server. Rather than do an in-place upgrade of the existing Windows-based license servers that I was using in my lab, I decided to rebuild them on CentOS.

Prerequisites

In order to deploy the NVIDIA GRID license server, you will need two servers. The license servers should be deployed in a highly available architecture since the features enabled by the GRID drivers will not function if a license cannot be checked out. These servers should be fully patched. All of my CentOS boxes run without a GUI, and all of the install steps will be done through the console, so you will need SSH access to the servers.

The license servers only require 2 vCPU and 4GB of RAM for most environments. The license server component runs on Tomcat, so we will need to install Java and the Tomcat web server; we will do that as part of the install. Newer versions of Java default to IPv6, so if you are not using this technology in your environment, you will need to disable IPv6 on the server. If you don’t, the license server will not be listening on any IPv4 addresses. While there are other ways to change Java’s default behavior, I find it easier to just disable IPv6 since I do not use it in my environment.

The documentation for the license server can be found on the NVIDIA docs site.

Installing the Prerequisites

First, we need to prepare the servers by installing and configuring our prerequisites. We need to disable IPv6, install Java and Tomcat, and configure the Tomcat service to start automatically.

If you are planning to deploy the license servers in a highly available configuration, you will need to perform all of these steps on both servers.

The first step is to disable IPv6. As mentioned above, Java appears to default to IPv6 for networking in recent releases on Linux.

The steps to do this are:

  1. Open the sysctl.conf file with the following command (substitute your preferred editor for nano).

    sudo nano /etc/sysctl.conf

  2. Add the following two lines at the end of the file:

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1

  3. Save the file.
  4. Reboot to allow the changes to take effect.

Note: there are other ways to prevent Java from defaulting to IPv6. These methods usually involve making changes to the application parameters when Java launches. I selected this method because it was the easiest route to implement and I do not use IPv6 in my lab.
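If you would rather not wait for a reboot, the same keys can be loaded immediately with sysctl. This is a minimal sketch of steps 1–4 as a single non-interactive sequence:

```shell
# Append the two IPv6-disable keys to sysctl.conf, then load them
# immediately so the change applies without waiting for a reboot.
printf '%s\n' \
  'net.ipv6.conf.all.disable_ipv6 = 1' \
  'net.ipv6.conf.default.disable_ipv6 = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p                                # apply the new settings now
sysctl net.ipv6.conf.all.disable_ipv6         # should report "= 1"
```

A reboot is still a good sanity check that the settings persist.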

After the system reboots, the install can proceed. The next steps are to install and configure Java and Tomcat.

  1. Install Java and Tomcat using the following commands:

    sudo yum install -y java tomcat tomcat-webapps

  2. Enable the Tomcat service so that it starts automatically on reboot.

    sudo systemctl enable tomcat.service

  3. Start Tomcat.

    sudo systemctl start tomcat.service

Finally, we will want to configure our JAVA_HOME variable. The license server includes a command-line tool, nvidialsadmin, that can be used to configure password authentication for the license server management console, and that tool requires a JAVA_HOME variable to be configured. These steps will create the variable for all users on the system.

  1. Run the following command to see the path to the Java install:

    sudo alternatives --config java

  2. Copy the path to the Java folder, which is in parentheses. Do not include anything after “jre/”.
  3. Create a Bash plugin for Java with the following command:

    sudo nano /etc/profile.d/java.sh

  4. Add the following lines to the file:

    export JAVA_HOME=(Your Path to Java)
    export PATH=$PATH:$JAVA_HOME/bin

  5. Save the file.
  6. Reboot the system.
  7. Test to verify that the JAVA_HOME variable is set up properly.

    echo $JAVA_HOME
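If you would rather not copy the path by hand from the alternatives output, the Java home directory can be derived from the resolved location of the java binary. This is a sketch, and it assumes the symlink layout used by the CentOS OpenJDK packages:

```shell
# Resolve the real path of the java binary, then strip the trailing
# /bin/java to get the JRE directory to use for JAVA_HOME.
java_bin=$(readlink -f "$(command -v java)")
java_home=${java_bin%/bin/java}
echo "JAVA_HOME will be set to: $java_home"

# Write the profile.d script for all users (same result as steps 3-5 above).
printf 'export JAVA_HOME=%s\nexport PATH=$PATH:$JAVA_HOME/bin\n' "$java_home" | \
  sudo tee /etc/profile.d/java.sh
```

Log out and back in (or reboot) before testing `echo $JAVA_HOME`, since profile.d scripts only run at login.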

Installing the NVIDIA License Server

Now that the prerequisites are configured, the NVIDIA license server software can be installed. The license server binaries are stored on the NVIDIA enterprise licensing portal, and they will need to be downloaded on another machine and copied over using a tool like WinSCP.

The steps for installing the license server, once the installer has been copied to the servers, are:

  1. Set the binary to be executable.

    chmod +x setup.bin

  2. Run the setup program in console mode.

    sudo ./setup.bin -i console

  3. The first screen is a EULA that will need to be accepted. To scroll down through the EULA, press Enter until you get to the EULA acceptance.
  4. Press Y to accept the EULA.
  5. When prompted, enter the path for the Tomcat WebApps folder.  On CentOS, this path is:
    /usr/share/tomcat
  6. When prompted, press 1 to enable firewall rules for the license server. This will open the license server port, TCP 7070.
    Since this is a headless server, the management port, TCP 8080, will also need to be opened. This will be done in a later step.
  7. Press Enter to install.
  8. When the install completes, press Enter to exit the installer.

After the install completes, the management port firewall rules will need to be configured. While the management interface can be secured with usernames and passwords, this is not configured out of the box. The normal recommendation is to just use a browser on the local machine to set the configuration, but since this is a headless machine, that isn’t available either. For this step, I’m applying the rules to an internal zone and restricting access to the management port to the IP address of my management machine. The steps for this are:

  1. Create a firewall rule for TCP port 8080.

    sudo firewall-cmd --permanent --zone=internal --add-port=8080/tcp

  2. Create a firewall rule for the source IP address.

    sudo firewall-cmd --permanent --zone=internal --add-source=management-host-ip/32

  3. Reload the firewall daemon so the new rules take effect:

    sudo firewall-cmd --reload
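The three steps above can be combined into a short script. MGMT_IP is a placeholder for your management workstation’s address, so substitute your own value:

```shell
# Open the management port in the internal zone, restrict that zone to a
# single management host, then reload and verify the active rules.
MGMT_IP=192.0.2.10    # placeholder - replace with your management host IP
sudo firewall-cmd --permanent --zone=internal --add-port=8080/tcp
sudo firewall-cmd --permanent --zone=internal --add-source=${MGMT_IP}/32
sudo firewall-cmd --reload
sudo firewall-cmd --zone=internal --list-all   # confirm the port and source appear
```

The final `--list-all` is a quick sanity check that both the port and the source made it into the running configuration.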

Configuring the License Server For High Availability

Once the firewall rules for accessing the management port are in place, the server configuration can begin. These steps will consist of configuring the high-availability features. Registering the license servers with the NVIDIA licensing portal and retrieving and applying licenses will not be handled in this step.

In order to set the license servers up for high availability, you will need two servers running the same version of the license server software. You will also need to identify which servers will be the primary and secondary servers in the infrastructure.

  1. Open a web browser on your management machine and go to <primary license server hostname or IP>:8080/licserver
  2. Click on Configuration.
  3. In the License Generation section, fill in the following details:
    1. Backup URI:
      <secondary license server hostname or IP>:7070/fne/bin/capability
    2. Main URI:
      <primary license server hostname or IP>:7070/fne/bin/capability
  4. In the Settings for server to server sync between License servers section, fill in the following details:
    1. Synchronization to fne enabled: True
    2. Main FNE Server URI:
      <primary license server hostname or IP>:7070/fne/bin/capability
  5. Click Save.
  6. Open a new browser window or tab and go to <secondary license server hostname or IP>:8080/licserver
  7. Click on Configuration.
  8. In the License Generation section, fill in the following details:
    1. Backup URI:
      <secondary license server hostname or IP>:7070/fne/bin/capability
    2. Main URI:
      <primary license server hostname or IP>:7070/fne/bin/capability
  9. In the Settings for server to server sync between License servers section, fill in the following details:
    1. Synchronization to fne enabled: True
    2. Main FNE Server URI:
      <primary license server hostname or IP>:7070/fne/bin/capability
  10. Click Save.
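Once both nodes are saved, a quick way to confirm that each license server is still reachable on its license port is to probe the capability endpoint from your management machine. The hostnames below are placeholders, and the exact HTTP status returned can vary since the endpoint expects license requests rather than plain GETs:

```shell
# Probe each license server's FNE capability endpoint; a response of any
# kind confirms the service is listening on TCP 7070.
for host in grid-ls1.example.com grid-ls2.example.com; do   # placeholder hostnames
  curl -s -o /dev/null -w "$host -> HTTP %{http_code}\n" \
    "http://$host:7070/fne/bin/capability"
done
```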

Summary

After completing the high-availability setup, the license servers are ready for the license file. In order to generate and install this, the two license servers will need to be registered with the NVIDIA licensing service. The steps to complete those tasks will be covered in a future post.

Integrating Rubrik Andes 5.1 with Workspace ONE Access

Early in December, Rubrik released the latest version of their core data protection platform – Andes 5.1. One of the new features in this release is support for SAML identity providers. SAML integration provides new capabilities to service providers and large enterprises by enabling integration into enterprise networks without having to integrate directly with Active Directory.

Rubrik also supports multi-factor authentication, but the only method supported out of the box is RSA SecurID. SAML integration enables enterprises to utilize other forms of multi-factor authentication, including RADIUS-based services and Azure MFA. It also allows for other security policies to be implemented, including device-based compliance checks.

Prerequisites

Before we can begin configuring SAML integration, there are a few things we need to do. These prerequisites are similar to the Avi Networks SAML setup, but we won’t need to open the Workspace ONE Access metadata file in a text editor.

First, we need to make sure a DNS record is in place for our Rubrik environment. This will be used for the fully qualified domain name that is used when signing into our system.

Second, we need to get the Workspace ONE Access IdP metadata. Rubrik does not import this automatically by providing a link to the idp.xml file, so we need to download this file. The steps for retrieving the metadata are:

  1. Log into your Workspace ONE Access administrator console.
  2. Go to the App Catalog.
  3. Click Settings.
  4. Under SaaS Apps, click SAML Metadata.
  5. Right-click Identity Provider Metadata and select Save Link As. Save the file as idp.xml.
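The idp.xml file can also be fetched directly with curl if you prefer the command line. The tenant URL below is a placeholder, and the metadata path is the one commonly used by Workspace ONE Access tenants, so verify it against your own tenant before relying on it:

```shell
# Download the Workspace ONE Access IdP metadata and sanity-check it.
TENANT=https://myaccess.example.com    # placeholder - your Access tenant URL
curl -s -o idp.xml "$TENANT/SAAS/API/1.0/GET/metadata/idp.xml"
grep -o 'entityID="[^"]*"' idp.xml     # a valid file will show the IdP entity ID
```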

Rubrik SAML Configuration

Once the prerequisites are taken care of, we can start the SAML configuration on the Rubrik side. This consists of generating the Rubrik SAML metadata and uploading the Workspace ONE metadata file.

  1. Log into your Rubrik appliance.
  2. Go to the gear icon in the upper right corner and select Users.
  3. Select Identity Providers.
  4. Click Add Identity Provider.
  5. Provide a name in the Identity Provider Name field.
  6. Click the folder icon next to the Identity Provider Metadata field.
  7. Upload the idp.xml file we saved in the last step.
  8. Select the Service Provider Host Address option. This can be a DNS name or the cluster floating IP, depending on your environment configuration. For this setup, we will use a DNS name.
  9. Enter the DNS name in the field.
  10. Click Download Rubrik Metadata.
  11. Click Add.
  12. Open the Rubrik metadata file in a text editor. We will need it in the next step.

Workspace ONE Configuration

Now that the Rubrik side is configured, we need to create our Workspace ONE catalog entry. The steps for this are:

  1. Log into your Workspace ONE Access administrator console.
  2. Go to the Catalog tab.
  3. Click New to create a new App Catalog entry.
  4. Provide a name for the new Rubrik entry in the App Catalog.
  5. If you have an icon to use, click Select File and upload the icon for the application.
  6. Click Next.
  7. In the Authentication Type field, select SAML 2.0.
  8. In Configuration, select URL/XML.
  9. Copy the contents of the Rubrik metadata XML file.
  10. Paste them into the URL/XML textbox.
  11. Scroll down to the Advanced Properties section.
  12. Expand Advanced Properties.
  13. Click the toggle switch under Sign Assertion.
  14. Click Next.
  15. Select an Access Policy to use for this application. This will determine the rules used for authentication and access to the application.
  16. Click Next.
  17. Review the summary of the configuration.
  18. Click Save and Assign.
  19. Select the users or groups that will have access to this application.
  20. Click Save.

Authorizing SAML Users in Rubrik

The final configuration step is to authorize Workspace ONE users within Rubrik and assign them to a role. This step only works with individual users; while testing, I couldn’t find a way to have it accept users based on a group or SAML attribute.

The steps for authorizing Workspace ONE users are:

  1. Log into your Rubrik appliance.
  2. Go to the gear icon in the upper right corner and select Users.
  3. Select Users and Groups.
  4. Click Grant Authorization.
  5. Select the directory.
  6. Select User and enter the username that the user will use when signing into Workspace ONE.
  7. Click Continue.
  8. Select the role to assign to the user and click Assign.
  9. The SAML user has been authorized to access the Rubrik appliance through SSO.

Testing SAML Authentication and Troubleshooting

So now that we have our authentication profiles configured in both Rubrik and Workspace ONE Access, we need to test to ensure our admin users can sign in. In order to test access, you need to sign out of your Rubrik appliance. When you return to the login screen, you’ll see that it has changed slightly, and there will be a large “Sign in with SSO” button above the username field. When pressed, users will be directed to Workspace ONE and authenticated.

While Rubrik may be listed in the Workspace ONE Access app catalog, launching it from the app catalog will just bring you to the login page. I could not figure out how to get IdP-initiated logins to work, and some of my testing resulted in error pages that showed metadata errors.

Integrating Microsoft Azure MFA with VMware Unified Access Gateway 3.8

One of the common questions I see is around integrating VMware Horizon with Microsoft Azure MFA. Natively, Horizon only supports RSA and RADIUS-based multi-factor authentication solutions. While it is possible to integrate the two this way, it requires Network Policy Server and a special plugin for the integration (another option existed – Azure MFA Server – but that is no longer available).

Earlier this week, VMware released Horizon 7.11 with Unified Access Gateway 3.8. The new UAG contains a pretty cool new feature – the ability to utilize SAML-based multi-factor authentication solutions. SAML-based multi-factor authentication allows Horizon to consume a number of modern cloud-based solutions, including Microsoft’s Azure MFA.

And…you don’t have to use TrueSSO in order to implement this.

If you’re interested in learning how to configure Unified Access Gateways to utilize Okta for MFA, as well as tips on creating web links for Horizon applications that can be launched from an MFA portal, you can read the operational tutorial that Andreano Lanusso wrote. It is currently available on the VMware Digital Workspace Tech Zone.

Prerequisites

Before you can configure Horizon to utilize Azure MFA, there are a few prerequisites that will need to be in place.

First, you need to have licensing that allows your users to utilize the Azure MFA feature. Microsoft bundles this into their Office 365 and Microsoft 365 licensing SKUs as well as the free version of Azure Active Directory.

Note: Not all versions of Azure MFA may work with this integration. I have only tested with the full version of Azure MFA that comes with the Azure AD Premium P1 license. I have not tested with the free tier or the MFA for Office 365 feature-level options.

Second, you will need to make sure that you have Azure AD Connect installed and configured so that users are syncing from the on-premises Active Directory into Azure Active Directory. You will also need to enable Azure MFA for users or groups of users and configure any MFA policies for your environment.

If you want to learn more about configuring the cloud-based version of Azure MFA, you can view the Microsoft documentation.

There are a few URLs that we will need when configuring single sign-on in Azure AD. These URLs are:

  • Portal URL:
  • SAML URL:

Case sensitivity matters here. If you put caps in the SAML URL, you may receive errors when uploading your metadata file.

Configuring Horizon UAGs as a SAML Application in Azure AD

The first thing we need to do is create an application in Azure Active Directory. This will allow the service to act as a SAML identity provider for Horizon. The steps for doing this are:

  1. Sign into your Azure Portal. If you just have Office 365, you do have Azure Active Directory, and you can reach it from the Office 365 administrator console.
  2. Go into the Azure Active Directory blade.
  3. Click Enterprise Applications.
  4. Click New Application.
  5. Select Non-Gallery Application.
  6. Give the new application a name.
  7. Click Add.
  8. Before we can configure our URLs and download metadata, we need to assign users to the app. Click Assign Users and Groups.
  9. Click Add User.
  10. Click where it says Users and Groups – None Selected.
  11. Select the users or groups that will have access to Horizon. There is a search box at the top of the list to make finding groups easier in large environments.
    Note: I recommend creating a top-level group to nest your Horizon user groups in to simplify setup.
  12. Click Add.
  13. Click Overview.
  14. Click Set up single sign on.
  15. In the section labeled Basic SAML Configuration, click the pencil in the upper right corner of the box. This will allow us to enter the URLs we use for our SAML configuration.
  16. Enter the following items. Please note that the URL paths are case sensitive, and putting in PORTAL, Portal, or SAMLSSO will prevent this from being set up successfully:
    1. In the Identifier (Entity ID) field, enter your portal URL.
    2. In the Reply URL (Assertion Consumer Service URL) field, enter your UAG SAML SSO URL.
    3. In the Sign on URL field, enter your UAG SAML SSO URL.
  17. Click Save.
  18. Review your user attributes and claims, and adjust as necessary for your environment. Horizon 7 supports logging in with a user principal name, so you may not need to change anything.
  19. Click the download link for the Federation Metadata XML file.

We will use this metadata file in the next step to configure our identity provider on the UAG.

Once the file is downloaded, the Azure AD side is configured.

Configuring the UAG

Once we have completed the Azure AD configuration, we need to configure our UAGs to utilize SAML for multi-factor authentication.

In order to do these steps, you will need to have an admin password set on the UAG appliance so you can access the admin interface. I recommend doing the initial configuration and testing on a non-production appliance. Once testing is complete, you can either manually apply the settings to the production UAGs or download the configuration INI file and copy the SAML configuration into the production configuration files for deployment.

Note: You can configure SAML on the UAGs even if you aren’t using TrueSSO.  If you are using this feature, you may need to make some configuration changes on your connection servers.  I do not use TrueSSO in my lab, so I have not tested Azure MFA on the UAGs with TrueSSO.

The steps for configuring the UAG are:

  1. Log into the UAG administrative interface.
  2. Click Configure Manually.
  3. Go to the Identity Bridging Settings section.
  4. Click the gear next to Upload Identity Provider Metadata.
  5. Leave the Entity ID field blank. This will be generated from the metadata file you upload.
  6. Click Select.
  7. Browse to the path where the Azure metadata file you downloaded in the last section is stored. Select it and click Open.
  8. If desired, enable the Always Force SAML Auth option.
    Note: SAML-based MFA acts differently than RADIUS and RSA authentication. The default behavior has you authenticate with the provider, and the provider places an authentication cookie on the machine. Subsequent logins may redirect users from Horizon to the cloud MFA site, but they may not be forced to reauthenticate. Enabling the Always Force SAML Auth option makes SAML-based cloud MFA providers behave similarly to the existing RADIUS and RSA-based multi-factor solutions by requiring reauthentication on every login. Please also be aware that things like Conditional Access policies in Azure AD and Azure AD-joined Windows 10 devices may impact the behavior of this solution.
  9. Click Save.
  10. Go up to Edge Services Settings and expand that section.
  11. Click the gear icon next to Horizon Edge Settings.
  12. Click the More button to show all of the Horizon Edge configuration options.
  13. In the Auth Methods field, select one of the two options to enable SAML:
    1. If you are using TrueSSO, select SAML.
    2. If you are not using TrueSSO, select SAML and Passthrough.
  14. Select the identity provider that will be used. For Azure MFA, this will be the one imported from the Azure metadata file.
  15. Click Save.

SAML authentication with Azure MFA is now configured on the UAG, and you can start testing.

User Authentication Flows when using SAML

Compared to RADIUS and RSA, user authentication behaves a little differently when using SAML-based MFA. When a user connects to a SAML-integrated environment, they are not prompted for their RADIUS or RSA credentials right away.

After connecting to the Horizon environment, the user is redirected to the website for their authentication solution. They will be prompted to authenticate with this solution using their primary and secondary authentication options. Once this completes, the Horizon client will reopen, and the user will be prompted for their Active Directory credentials.

You can configure the UAG to use the same username for Horizon as the one that is used with Azure AD, but the user will still be prompted for a password unless TrueSSO is configured.

Configuring SAML with Workspace ONE for AVI Networks

Earlier this year, VMware closed the acquisition of Avi Networks. Avi Networks provides an application delivery controller solution designed for the multi-cloud world. While many ADC solutions aggregate the control plane and data plane on the same appliance, Avi Networks takes a different approach: they utilize a management appliance for the control plane and multiple service engine appliances that handle load balancing, web application firewall, and other services for the data plane.

Integrating Avi Networks with Workspace ONE Access

The Avi Networks controller appliance offers multiple options for integrating the management console into enterprise environments for authentication management. One of the options that is available is SAML. This enables integration into Workspace ONE Access and the ability to take advantage of the app catalog, network access restrictions, and step-up authentication when administrators sign in.

Before I walk through the steps for integrating Avi Networks into Workspace ONE Access via SAML, I want to thank my colleague Nick Robbins. He provided most of the information that enabled this integration to be set up in my lab environment and this blog post. Thank you, Nick!

There are three options that can be selected for the URL when configuring SAML integration for Avi Networks. The first option is to use the cluster VIP address. This is a shared IP address that is used by all management nodes when they are clustered. The second option is to use a fully qualified domain name.

These options determine the SSO URL and entity ID that are used in the SAML configuration, and they are automatically generated by the system.

The third option is to use a user-provided entity ID.

For this walkthrough, we are going to use a fully qualified domain name.

Prerequisites

Before we can begin configuring SAML integration, there are a few things we need to do.

First, we need to make sure a DNS record is in place for our Avi controller. This will be used for the fully qualified domain name that is used when signing into our system.

Second, we need to get the Workspace ONE Access IdP metadata. Avi does not import this automatically by providing a link to the idp.xml file, so we need to download this file. The steps for retrieving the metadata are:

  1. Log into your Workspace ONE Access administrator console.
  2. Go to the App Catalog.
  3. Click Settings.
  4. Under SaaS Apps, click SAML Metadata.
  5. Right-click Identity Provider Metadata and select Save Link As. Save the file as idp.xml.
  6. Open the idp.xml file in your favorite text editor. We will need to copy its contents into the Avi SAML configuration in the next step.

Avi Networks Configuration

The first thing that needs to be done is to configure an authentication profile to support SAML on the Avi Networks controller. The steps for this are:

  1. Log into your Avi Networks controller as your administrative user.
  2. Go to Templates -> Security -> Auth Profile.
  3. Click Create to create a new profile.
  4. Provide a name for the profile in the Name field.
  5. Under Type, select SAML.
  6. Copy the Workspace ONE SAML IdP information into the IdP Metadata field.  This information is located in the idp.xml file that we saved in the previous section.
  7. Select Use DNS FQDN
  8. Fill in your organizational details.
  9. Enter the fully-qualified domain name that will be used for the SAML configuration in the FQDN field.
  10. Click Save

Next, we will need to collect some of our service provider metadata.  Avi Networks does not generate an XML file that can be imported into Workspace One Access, so we will need to enter our metadata manually.  There are three things we need to collect:

  • Entity ID
  • SSO URL
  • Signing Certificate

We will get the Entity ID and SSO URL from the Service Provider Settings screen.  Although this screen also has a field for the signing certificate, it doesn't seem to populate anything in my lab, so we will have to get the certificate information from the SSL/TLS Certificates tab.

The steps for getting into the Service Provider Settings are:

  1. Go to Templates -> Security -> Auth Profile.
  2. Find the authentication profile that you created.
  3. Click on the Verify box on the far right side of the screen.  This is the square box with a question mark in it.
  4. Copy the Entity ID and SSO URL and paste them into your favorite text editor.  We will be using these in the next step.
  5. Close the Service Provider Settings screen by clicking the X in the upper right-hand corner.

Next, we need to get the signing certificate.  This is the System-Default-Portal-Cert.  The steps to get it are:

  1. Go to Templates -> Security -> SSL/TLS Certificates.
  2. Find the System-Default-Portal-Cert.
  3. Click the Export button.  This is the circle with the down arrow on the right side of the screen.
  4. The certificate information is in the lower box labeled Certificate.
  5. Click the Copy to Clipboard button underneath the certificate box.
  6. Paste the certificate in your favorite text editor.  We will also need this in the next step.
  7. Click Done to close the Export Certificate screen.

Configuring the Avi Networks Application Catalog item in Workspace One Access

Now that we have our SAML profile created in the Avi Networks controller, we need to create our Workspace One catalog entry.  The steps for this are:

  1. Log into your Workspace One Access admin interface.
  2. Go to the Catalog tab.
  3. Click New to create a new App Catalog entry.
  4. Provide a name for the new Avi Networks entry in the App Catalog.
  5. If you have an icon to use, click Select File and upload the icon for the application.
  6. Click Next.
  7. Enter the following details.  For the next couple of steps, you need to remain on the Configuration screen.  Don’t click next until you complete all of the configuration items:
    1. Authentication Type: SAML 2.0
    2. Configuration Type: Manual
    3. Single Sign-On URL: Use the single sign-on URL that you copied from the Avi Networks Service Provider Settings screen.
    4. Recipient URL: Same as the Single Sign-On URL
    5. Application ID: Use the Entity ID setting that you copied from the Avi Networks Service Provider Settings screen.
    6. Username Format: Unspecified
    7. Username Value: ${user.email}
    8. Relay State URL: FQDN or IP address of your appliance
  8. Expand Advanced Properties and enter the following values:
    1. Sign Response: Yes
    2. Sign Assertion: Yes
    3. Copy the value of the System-Default-Portal-Cert certificate that you copied in the previous section into the Request Signature field.
    4. Application Login URL: FQDN or IP address of your appliance.  This will enable SP-initiated login workflows.
  9. Click Next.
  10. Select an Access Policy to use for this application.  This will determine the rules used for authentication and access to the application.
  11. Click Next.
  12. Review the summary of the configuration.
  13. Click Save and Assign
  14. Select the users or groups that will have access to this application and the deployment type.
  15. Click Save.

Enabling SAML Authentication in Avi Networks

In the last couple of steps, we created our SAML profile in Avi Networks and a SAML catalog item in Workspace One Access.  However, we haven't actually turned SAML on yet or assigned any users to roles.  In this next section, we will enable SAML and grant superuser rights to SAML users.

Note: It is possible to configure more granular role-based access control by adding application parameters into the Workspace One Access catalog item and then mapping those parameters to different roles in Avi Networks.  This walkthrough will just provide a simple setup, and deeper RBAC integration will be covered in a possible future post.

  1. Log into your Avi Networks Management Console.
  2. Go to Administration -> Settings -> Authentication/Authorization.
  3. Click the pencil icon to edit the Authentication/Authorization settings.
  4. Under Authentication, select Remote.
  5. Under Auth Profile, select the SAML profile that you created earlier.
  6. Make sure the Allow Local User Login box is checked.  If this box is not checked, and there is a configuration issue, you will not be able to log back into the controller.
  7. Click Save.
  8. After saving the authentication settings, some new options will appear in the Authentication/Authorization screen to enable role mapping.
  9. Click New Mapping.
  10. For Attribute, select Any.
  11. Check the box labelled Super User.
  12. Click Save.

SAML authentication is now configured on the Avi Networks management appliance.

Testing SAML Authentication and Troubleshooting

So now that we have our authentication profiles configured in both Avi Networks and Workspace One Access, we need to test them to ensure our admin users can sign in.  There are two tests that should be run.  The first is launching Avi Networks from the Workspace One Access app catalog, and the second is doing an SP-initiated login by going to your Avi Networks URL.

In both cases, you should see a Workspace One Access authentication screen for login before being redirected to the Avi Networks management console.

In my testing, however, I had some issues in one of my labs where I would get a JSON error when attempting SAML authentication.  If you see this error, and you have validated that all of your settings match, then reboot the appliance.  This solved the issue in my lab.

If SAML authentication breaks, and you need to gain access to the appliance management interface with a local account, then you need to provide a different URL.  That URL is .

Minimal Touch VDI Image Building With MDT, PowerCLI, and Chocolatey

Recently, Mark Brookfield posted a three-part series on the process he uses for building Windows 10 images in HobbitCloud (, , ). Mark has put together a great series of posts that explain the tools and the processes that he is using in his lab, and it has inspired me to revisit this topic and talk about the process and tooling I currently use in my lab and the requirements and decisions that influenced this design.

Why Automate Image Building?

Hand-building images is a time-intensive process.  It is also potentially error-prone, as it is easy to forget applications and specific configuration items, requiring additional work or even new image builds depending on the steps that were missed.  Incremental changes that are made to templates may not make it into the image-building documentation, requiring additional work to update the image after it has been deployed.

Automation helps solve these challenges and provides consistent results.  Once the process is nailed down, you can expect consistent results on every build.  If you need to make incremental changes to the image, you can add them into your build sequence so they aren't forgotten when building the next image.

Tools in My Build Process

When I started researching my image build process back in 2017, I was looking for a way to save time and provide consistent results on each build.  I wanted a tool that would allow me to build images with little interaction with the process on my part, but it also needed to fit into my lab.  The main tools I looked at were Packer with the JetBrains vSphere plugin and Microsoft Deployment Toolkit (MDT).

While Packer is an incredible tool, I ended up selecting MDT as the main tool in my process.  My reason for selecting MDT has to do with NVIDIA GRID.  The vSphere plugin for Packer does not currently support provisioning machines with a vGPU, so using this tool would have required manual post-deployment work.

One nice feature of MDT is that it can utilize a SQL Server database for storing details about registered machines, such as the computer name, the OU where the computer object should be placed, and the task sequence to run when booting into MDT.  This allows a new machine to be provisioned in a zero-touch fashion, and the database can be populated from PowerShell.

Unlike Packer, which can create and configure the virtual machine in vCenter, MDT only handles the operating system deployment.  So I needed some way to create and configure the VM in vCenter with a vGPU profile.  The best method of doing this is PowerCLI.  While there are no native cmdlets for managing vGPUs or other shared PCI objects in PowerCLI, the device can still be added by reconfiguring the VM through the vSphere APIs that PowerCLI exposes.

While MDT can install applications as part of a task sequence, I wanted something a little more flexible.  Typically, when a new version of an application is added, the way I had structured my task sequences required them to be updated to utilize the newer version.  The reason for this is that I wasn't using application groups for certain applications that were going into the image, mainly the agents that were being installed, as I wanted to control the install order and manage reboots. (Yes…I may have been using this wrong…)

I wanted to reduce my operational overhead when applications were updated, so I went looking for alternatives.  I ended up settling on Chocolatey to install most of the applications in my images, with applications being hosted in a private repository running on the free edition of ProGet.
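As an illustration, Chocolatey can install a pinned set of packages in one command (choco install packages.config) from a packages.config file.  The package IDs below are examples only — in practice you would use the IDs published to your private ProGet feed:

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- Example package IDs; substitute the packages hosted in your own feed -->
  <package id="googlechrome" />
  <package id="notepadplusplus" version="7.8.2" />
  <package id="adobereader" />
</packages>
```

Updating an application then becomes a matter of pushing a new package version to the repository rather than editing the task sequence.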

My Build Process Workflow

My build workflow consists of seven steps with one branch.  These steps are:

  1. Create a new VM in vCenter
  2. Configure VM options such as memory reservations and video RAM
  3. GPU Flag Only – Add a virtual GPU with the correct profile to the VM.
  4. Identify the task sequence that will be used.  There are different task sequences for GPU and non-GPU machines, and logic in the script builds the task sequence name from parameters passed when running the script.
  5. Create a new computer entry in the MDT database.  This includes the computer name, MAC address, task sequence name, role, and a few other variables.  This step is performed in PowerShell using the .
  6. Power on the VM. This is done using PowerCLI. The VM will PXE boot to a Windows PE environment configured to point to my MDT server.
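The task-sequence selection and database record from steps 4 and 5 can be sketched like this.  Note that this is a Python sketch of the decision logic only — my actual scripts are PowerShell, and the naming convention, field names, and values here are hypothetical:

```python
def select_task_sequence(os_version, gpu=False):
    """Build an MDT task sequence ID from the Windows 10 release and a GPU flag.

    The naming convention (e.g. W10-1809-GPU) is made up for illustration;
    substitute whatever convention your deployment share uses.
    """
    suffix = "GPU" if gpu else "STD"
    return f"W10-{os_version}-{suffix}"

# A record like this is what gets written to the MDT database for the new VM
vm_record = {
    "ComputerName": "VDI-IMG-01",          # example name
    "MacAddress": "00:50:56:aa:bb:cc",     # example MAC from the created VM
    "TaskSequenceID": select_task_sequence("1809", gpu=True),
}
print(vm_record["TaskSequenceID"])  # W10-1809-GPU
```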

Build Process

After the VM is powered on and boots into Windows PE, the rest of the process is hands-off. All of the MDT prompts, such as the prompts for a computer name or the task sequence, are disabled, and the install process relies on the database for things like the computer name and task sequence.

From this point forward, it takes about forty-five minutes to an hour to complete the task sequence. MDT installs Windows 10 and any drivers, like the VMXNET3 driver, installs Windows Updates from an internal WSUS server, installs any agents or applications, such as VMware Tools, the Horizon Agent, and the UEM/DEM agent, silently runs the OSOT tool, and stamps the registry with the image build date.

Future Direction and Process Enhancements

While this process works well today, it is a bit cumbersome. Each new Windows 10 release requires a new task sequence for version control. It is also difficult to work tools like the OSDeploy PowerShell scripts by David Segura (used for slipstreaming updates into a Windows 10 WIM) into the process. While there are ways to automate MDT, I'd rather invest the time in automating builds using Packer.

There are a couple of post-deployment steps that I would like to integrate into my build process as well. I would like to utilize Pester to validate the image build after it completes, and then, if it passes, execute a shutdown and a VM snapshot (or conversion to template) so it is ready to be consumed by Horizon. My plan is to utilize a tool like Jenkins to orchestrate the build pipeline and do something similar to the process that Mark Brookfield has laid out.

The ideal process that I am working towards will have multiple workflows to manage various aspects of the process. Some of these are:

1. A process for automatically creating updated Windows 10 ISOs with the latest Windows updates using the OSDeploy PowerShell module.

2. A process for creating Chocolatey package updates and submitting them to my ProGet repository for applications managed by Chocolatey.

3. A process to build new images when Windows 10 or key applications (such as VMware Tools, the Horizon Agent, or NVIDIA drivers) are updated. This process will ideally use Packer as the build tool to simplify management. The main dependency for this step is adding NVIDIA GRID support to the JetBrains Packer vSphere plugin.

So this is what I'm doing for image builds in my lab, and the direction I'm planning to go.

Horizon 7 Administration Console Changes

Over the last couple of releases, VMware has included an HTML5-based Horizon Console for managing Horizon 7.  Each release has brought this console closer to feature parity with the Flash-based Horizon Administrator console that is currently used by most administrators.

With the end-of-life date rapidly approaching for Adobe Flash, and some major browsers already making Flash more difficult to enable and use, there will be some changes coming to Horizon administration.

  • The HTML5 console will reach feature parity with the Flash-based Horizon Administrator in the next release.  This includes a dashboard, which is one of the major features missing from the HTML5 console.  Users will be able to access the HTML5 console using the same methods that are used with the current versions of Horizon 7.
  • In the releases that follow the next Horizon release, users connecting to the current Flash-based console will get a page that provides them a choice to either go to the HTML5 console or continue to the Flash-based console.  This is similar to the landing page for vCenter where users can choose which console they want to use.

More information on the changes will be coming as the next version of Horizon is released.

Using Amazon RDS with Horizon 7 on VMware Cloud on AWS

Since I joined VMware back in November, I've spent a lot of time working with VMware Cloud on AWS – particularly around deploying Horizon 7 on VMC in my team's lab.  One thing I hadn't tried until recently was utilizing Amazon RDS with Horizon.

no, we’re not talking about the traditional remote desktop session host role. this is the amazon relational database service, and it will be used as the event database for horizon 7.

After building out a multisite Horizon 7.8 deployment in our team lab, we needed a database server for the Horizon events database.  Rather than deploy and maintain a SQL Server in each lab, I decided to take advantage of one of the benefits of VMware Cloud on AWS and use Amazon RDS as my database tier.

this isn’t the first time i’ve used native amazon services with horizon 7.  .

Before we begin, I want to call out that this might not be 100% supported.  I can't find anything in the documentation, , or the readme files that explicitly states that RDS is a supported database platform.  RDS is also not listed in the product interoperability matrix.  However, SQL Server 2017 Express is supported, and there are minimal operational impacts if this database experiences an outage.

What Does a VDI Solution Need With A Database Server?

VMware Horizon 7 utilizes a SQL Server database for tracking user session data, such as logins and logouts, and for auditing administrator activities that are performed in the Horizon Administrator console. Unlike on-premises environments, where there are usually existing database servers that can host this database, deploying Horizon 7 on VMware Cloud on AWS would require a new database server for this service.

Amazon RDS is a database-as-a-service offering built on the AWS platform. It provides highly scalable and performant database services for multiple database engines, including Postgres, Microsoft SQL Server, and Oracle.

Using Amazon RDS for the Horizon 7 Events Database

There are a couple of steps required to prepare our VMware Cloud on AWS infrastructure to utilize native AWS services. While the initial deployment includes connectivity to a VPC that we define, there is still some networking that needs to be put into place to allow these services to communicate. We'll break this work down into three parts:

  1. Preparing the VMC environment
  2. Preparing the AWS VPC environment
  3. Deploying and Configuring RDS and Horizon

Preparing the VMC Environment

The first step is to prepare the VMware Cloud on AWS environment to utilize native AWS services. This work takes place in the VMware Cloud on AWS management console and consists of two main tasks. The first is to document the availability zone that our VMC environment is deployed in; native Amazon services should be deployed in the same availability zone to reduce any networking costs. The second is to configure firewall rules on the VMC compute gateway to allow traffic to pass to the VPC.

The steps for preparing the VMC environment are:

  1. Log into
  2. Click Console
  3. In the My Services section, select VMware Cloud on AWS
  4. In the Software-Defined Data Centers section, find the VMware Cloud on AWS environment that you are going to manage and click View Details.
  5. Click the Networking and Security tab.
  6. In the System menu, click Connected VPC. This will display information about the Amazon account that is connected to the environment.
  7. Find the VPC subnet. This will tell you what AWS Availability Zone the VMC environment is deployed in. Record this information as we will need it later.

Now that we know which availability zone we will be deploying our database into, we will need to create our firewall rules. The firewall rules will allow our Connection Servers and other VMs to connect to any native Amazon services that we deploy into our connected VPC.

This next section picks up from the previous steps, so you should be in the Networking and Security tab of the VMC console. The steps for configuring our firewall rules are:

  1. In the Security Section, click on Gateway Firewall.
  2. Click Compute Gateway
  3. Click Add New Rule
  4. Create the new firewall rule by filling in the following fields:
    1. In the Name field, provide a descriptive name for the firewall rule.
    2. In the Source field, click Select Source. Select the networks or groups and click Save.
      Note: If you do not have any groups, or you don’t see the network you want to add to the firewall, you can click Create New Group to create a new Inventory Group.
    3. In the Destination field, click Select Destination. Select the Connected VPC Prefixes option and click Save.
    4. In the Services field, click Select Services. Select Any option and click Save.
    5. In the Applied To field, remove the All Interfaces option and select VPC Interfaces.
  5. Click Publish to save and apply the firewall rule.

There are two reasons that the VMC firewall rule is configured this way. First, Amazon assigns IP addresses at service creation. Second, this firewall rule can be reused for other AWS services, and access to those services can be controlled using AWS security groups instead.

The VMC gateway firewall does allow for more granular rule sets; they are just not going to be utilized in this walkthrough.

Preparing the AWS Environment

Now that the VMC environment is configured, the RDS service needs to be provisioned. There are a couple of steps to this process.

First, we need to configure a security group that will be used for the service.

  1. Log into your Amazon Console.
  2. Change to the region where your VMC environment is deployed.
  3. Go into the VPC management interface. This is done by going to Services and selecting VPC.
  4. Select Security Groups
  5. Click Create Security Group
  6. Give the security group a name and description.
  7. Select the VPC where the RDS Services will be deployed.
  8. Click Create.
  9. Click Close.
  10. Select the new Security Group.
  11. Click the Inbound Rules tab.
  12. Click Edit Rules
  13. Click Add Rule
  14. Fill in the following details:
    1. Type – Select MS SQL
    2. Source – Select Custom and enter the IP Address or Range of the Connection Servers in the next field
    3. Description – Description of the server or network
    4. Repeat as Necessary
  15. Click Save Rules
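For reference, the inbound rule created above maps to the structure below if you script it with the AWS SDK for Python. This is only the parameter shape you would pass to boto3's authorize_security_group_ingress call — the CIDR range is an example standing in for your Connection Server subnet:

```python
# Ingress rule allowing the Connection Servers to reach SQL Server (TCP 1433).
# This dict mirrors the console steps: Type "MS SQL", a custom source range,
# and a description for the rule.
sql_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 1433,   # MS SQL default port
    "ToPort": 1433,
    "IpRanges": [
        {
            "CidrIp": "192.168.10.0/24",  # example: Connection Server subnet
            "Description": "Horizon Connection Servers",
        }
    ],
}

# With a boto3 EC2 client this would be applied as:
# ec2.authorize_security_group_ingress(GroupId=sg_id,
#                                      IpPermissions=[sql_ingress_rule])
print(sql_ingress_rule["FromPort"])
```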

This security group will allow our Connection Servers to access the database services that are being hosted in RDS.

Once the security group is created, the RDS instance can be deployed. The steps for deploying the RDS instance are:

  1. Log into your Amazon Console.
  2. Change to the region where your VMC environment is deployed.
  3. Go into the RDS management interface. This is done by going to Services and selecting RDS.
  4. Click Create Database.
  5. Select Microsoft SQL Server.
  6. Select the version of SQL Server that will be deployed. For this walkthrough, SQL Server Express will be used.

    Note: There is a SQL Server Free Tier offering that can be used if this database will only be used for the Events Database. The Free Tier offering is only available with SQL Server Express. If you only want to use the Free Tier offering, select the Only enable options eligible for RDS Free Tier Usage checkbox.

  7. Click Next.
  8. Specify the details for the RDS Instance.
    1. Select License Model, DB Engine Version, DB instance class, Time Zone, and Storage.
      Note: Not all options are available if RDS Free Tier is being utilized.
    2. Provide a DB Instance Identifier. This must be unique for all RDS instances you own in the region.
    3. Provide a master username. This will be used for logging into the SQL Server instance with SA rights.
    4. Provide and confirm the master username password.
    5. Click Next.
  9. Configure the Networking and Security Options for the RDS Instance.
      1. Select the VPC that is attached to your VMC instance.
      2. Select No under Public Accessibility.
        Note: This refers to access to the RDS instance via a public IP address. You can still access the RDS instance from VMC since routing rules and firewall rules will allow the communication.
      3. Select the Availability Zone that the VMC tenant is deployed in.
      4. Select Choose Existing VPC Security Groups
      5. Remove the default security group by clicking the X.
      6. Select the security group that was created for accessing the RDS instance.

  10. Select Disable Performance Insights.
  11. Select Disable Auto Minor Version Upgrade.
  12. Click Create Database.
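If you would rather script the deployment, the console choices above roughly correspond to these boto3 create_db_instance parameters. This is only a sketch: the identifier, instance class, storage size, availability zone, and security group ID are example values, and the password is a placeholder.

```python
# Parameters mirroring the RDS console walkthrough above. This is only the
# request body; with a boto3 RDS client you would call
# rds.create_db_instance(**rds_params).
rds_params = {
    "DBInstanceIdentifier": "horizon-events",      # must be unique per region
    "Engine": "sqlserver-ex",                      # SQL Server Express edition
    "DBInstanceClass": "db.t2.micro",              # Free Tier eligible class
    "AllocatedStorage": 20,                        # GiB
    "MasterUsername": "horizonsa",
    "MasterUserPassword": "<master-password>",     # placeholder
    "AvailabilityZone": "us-east-1a",              # match your VMC AZ
    "VpcSecurityGroupIds": ["sg-0123456789abcdef0"],  # group created earlier
    "PubliclyAccessible": False,                   # no public IP
    "AutoMinorVersionUpgrade": False,
}
print(rds_params["Engine"])
```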

Once Create Database is clicked, the deployment process starts. This takes a few minutes to provision. After provisioning completes, the endpoint URL for accessing the instance will be available in the RDS Management Console. It's also important to validate that the instance was deployed in the correct availability zone. While testing this process, some database instances were created in an availability zone that was different from the one selected during the provisioning process.

Make sure you copy your endpoint URL. You will need it in the next step to configure the database and Horizon.

Creating the Horizon Events Database

The RDS instance that was provisioned in the last step is an empty SQL Server instance. There are no databases or SQL Server user accounts, and these will need to be created in order to use this server with Horizon. A tool like SQL Server Management Studio is required to complete these steps, and we will be using SSMS for this walkthrough. The instance must be accessible from the machine that has the database management tools installed.

The Horizon events database does not utilize Windows authentication, so a SQL Server user will be required along with the database that we will be setting up. This user also requires db_owner rights on the database so Horizon can provision the tables when we configure it for the first time.

The steps for configuring the database server are:

  1. Log into the new RDS instance using SQL Server Management Studio with the Master Username and Password.
  2. Right Click on Databases
  3. Select New Database
  4. Enter HorizonEventsDB in the Database Name Field.
  5. Click OK.
  6. Expand Security.
  7. Right click on Logins and select New Login.
  8. Enter a username for the database.
  9. Select SQL Server Authentication
  10. Enter a password.
  11. Uncheck Enforce Password Policy
  12. Change the Default Database to HorizonEventsDB
  13. In the Select A Page section, select User Mapping
  14. Check the box next to HorizonEventsDB
  15. In the Database Role Membership section, select db_owner
  16. Click OK
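If you prefer T-SQL over clicking through SSMS, the same configuration can be applied with a script along these lines. The login name and password are placeholders; HorizonEventsDB matches the database name used in the steps above:

```sql
-- Create the events database
CREATE DATABASE HorizonEventsDB;
GO

-- Create a SQL Server login for Horizon (placeholder name and password)
CREATE LOGIN horizonevents
    WITH PASSWORD = '<password>',
    CHECK_POLICY = OFF,                 -- matches unchecking Enforce Password Policy
    DEFAULT_DATABASE = HorizonEventsDB;
GO

-- Map the login into the database and grant db_owner
USE HorizonEventsDB;
GO
CREATE USER horizonevents FOR LOGIN horizonevents;
ALTER ROLE db_owner ADD MEMBER horizonevents;
GO
```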

Configuring Horizon to Utilize RDS for the Events Database

Now that the RDS instance has been set up and configured, Horizon can be configured to use it for the events database. The steps for configuring this are:

  1. Log into Horizon Administrator.
  2. Expand View Configuration
  3. Click on Event Configuration
  4. Click Edit
  5. Enter the Database Server, Database Name, Username, and Password and click OK.

Benefits of Using RDS with Horizon 7

Combining VMware Horizon 7 with Amazon RDS is just one example of how you can utilize native Amazon services with VMware Cloud on AWS. This allows organizations to get the best of both worlds – easily consumed cloud services backing enterprise applications on a platform that requires few changes to the applications themselves or to operational processes.

Utilizing native AWS services like RDS has additional benefits for EUC environments. When deploying Horizon 7 on VMware Cloud on AWS, the management infrastructure is typically deployed in the software-defined datacenter alongside the desktops. By utilizing native AWS services, resources that would otherwise be reserved for and consumed by servers can now be utilized for desktops.


More Than VDI…Let’s Make 2019 The Year of End-User Computing

It seems like the popular joke question at the beginning of every year is “Is this finally the year of VDI?”  The answer, of course, is always no.

Last week, Johan wrote a post about the virtues of VDI technology with the goal of making 2019 the “year of VDI.”  Johan made a number of really good points about how the technology has matured to be able to deliver to almost every use case.

And today, Brian published a response.  In it, he stated that while VDI is a mature technology that works well, it is just a small subset of the broader EUC space.

I think both Brian and Johan make good points. VDI is a great set of technologies that has matured significantly since I started working with it back in 2011.  But it is just a small subset of what the EUC space has grown to encompass.

And since the EUC space has grown, I think it’s time to put the “year of VDI” meme to bed and, in its place, start talking about 2019 as the “year of end-user computing.”

When I say that we should make 2019 the “year of end-user computing,” I’m not referring to some tipping point where EUC solutions become nearly ubiquitous. EUC projects, especially in large organizations, require a large time investment for discovery, planning, and testing, so you can’t just buy one and call it a day.

i’m talking about elevating the conversation around end-user computing so that as we go into the next decade, businesses can truly embrace the power and flexibility that smartphones, tablets, and other mobile devices offer.

Since the new year is only a few weeks away, and the 2019 project budgets are most likely already allocated, conversations you have around any new end-user computing initiatives will likely be for 2020 and beyond.

So how can you get started with these conversations?

if you’re in it management or managing end-user machines, you should start taking stock of your management technologies and remote access capabilities.  then talk to your users.  yes…talk to the users.  find out what works well, what doesn’t, and what capabilities they’d like to have.  talk to the data center teams and application owners to find out what is moving to the cloud or a saas offering.  and make sure you have a line of communication open with your security team because they have a vested interest in protecting the company and its data.

if you’re a consultant or service provider organization, you should be asking your customers about their end-user computing plans and talking to the end-user computing managers. it’s especially important to have these conversations when your customers talk about moving applications out to the cloud because moving the applications will impact the users, and as a trusted advisor, you want to make sure they get it right the first time.  and if they already have a solution, make sure the capabilities of that solution match the direction they want to go.

End-users are the “last mile of IT.” They’re at the edges of the network, consuming the resources in the data center. At the same time, life has a tendency to pull people away from the office, and we now have the technology to bridge the work-life gap.  As applications are moved from the on-premises data center to the cloud or SaaS platforms, a solid end-user computing strategy is critical to delivering business-critical services while providing those users with a consistently good experience.

Rubrik 5.0 “Andes” – A Refreshing Expansion

Since they came out of stealth in 2015, Rubrik has significantly expanded the features and capabilities of their core product.  They have had 13 major releases and added features for cloud providers and multi-tenant environments, along with Polaris, a software-as-a-service platform that provides enhanced cloud features and global management, and Radar, a service that detects and protects against ransomware attacks.

Today, Rubrik is announcing their 14th major release – Andes 5.0.  The Andes release builds on top of Rubrik’s feature-rich platform to further expand the capabilities of the product.  It expands support for both on-premises mission-critical applications and cloud-native applications, and it extends or enhances existing product features.

Key features of this release are:

Enhanced Oracle Protection

Oracle database backup support was introduced in the Rubrik 4.0 Alta release, and it was basically a scripted RMAN backup to a Rubrik managed volume.  The Rubrik team has been hard at work enhancing this feature.

Rubrik is introducing a connector agent that can be installed on Oracle hosts or RAC nodes.  This connector will be able to discover instances and databases automatically, allowing SLAs to be applied directly to the hosts or to the databases.

Simplified administration of Oracle backups isn’t the only Oracle enhancement in the Andes release.  The popular Live Mount feature has now been extended to Oracle environments.  If you’re not familiar with Live Mount, it is the ability to run a virtual machine or database directly from the backup.  This is useful for test and development environments or for retrieving a single table or row that was accidentally dropped from a database.

Point-in-time recovery is another new Oracle enhancement.  This feature allows Oracle administrators to restore a database to a specific point in time.  Rubrik will orchestrate the recovery of the database and replay log files to reach the specified point in time.
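Conceptually, point-in-time recovery works by restoring the most recent backup taken before the target time and then replaying committed log records up to that instant. The sketch below illustrates the replay step with a hypothetical key-value data model; it is not Rubrik's or Oracle's actual implementation, just the general idea.

```python
# Illustrative sketch of point-in-time recovery: start from a backup,
# then replay log records whose commit timestamp is <= the target time.
# The data model here is hypothetical, not Rubrik's implementation.
from dataclasses import dataclass


@dataclass
class LogRecord:
    ts: int      # commit timestamp of the change
    key: str     # item that was modified
    value: str   # new value written


def recover_to(backup: dict, log: list, target_ts: int) -> dict:
    """Apply committed log records with ts <= target_ts on top of the backup."""
    state = dict(backup)
    for rec in sorted(log, key=lambda r: r.ts):
        if rec.ts <= target_ts:
            state[rec.key] = rec.value
    return state
```

For example, recovering to timestamp 7 applies a change committed at timestamp 5 but skips one committed at timestamp 9, landing the database exactly where it was at the requested moment.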

SAP HANA Protection

SAP HANA is the in-memory database that drives many SAP implementations.  In Andes 5.0, Rubrik offers an SAP-certified HANA backup solution that utilizes SAP’s Backint APIs for HANA data protection.  This solution integrates with HANA Studio and SAP Cockpit.  The SAP HANA protection feature also supports point-in-time recovery and log management.

HANA protection relies on another new Andes feature called Elastic App Service.  Elastic App Service is a managed volume mounted on the Rubrik CDM, and it provides the same SLA-driven policies that other Rubrik objects get.

Microsoft SQL Server Enhancements

Rubrik has supported Microsoft SQL Server backups since the 3.0 release, and there has been a steady stream of enhancements to this feature.  The Andes release is no different, and it adds two major SQL Server backup features.

The first is the introduction of changed block tracking for SQL Server databases. This feature acts similarly to the CBT function provided in VMware vSphere.  The benefit is that the Rubrik Backup Service can now look at the database change file to determine which blocks need to be backed up rather than scanning the database for changes, allowing for a shorter backup window and reduced overhead on the SQL Server host.
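The core idea behind changed block tracking is simple: record which blocks are dirtied as writes happen, so an incremental backup can copy just those blocks instead of scanning everything. A minimal sketch of that bookkeeping (hypothetical names, not Rubrik's actual agent code):

```python
# Illustrative sketch of changed block tracking (CBT): writes update a
# change map inline, so an incremental backup copies only dirty blocks
# instead of scanning the whole volume. Hypothetical, for illustration.

class TrackedVolume:
    def __init__(self, num_blocks: int):
        self.blocks = [b"\x00"] * num_blocks
        self.changed = set()  # indices of blocks dirtied since last backup

    def write(self, index: int, data: bytes) -> None:
        self.blocks[index] = data
        self.changed.add(index)  # change map updated as part of the write

    def incremental_backup(self) -> dict:
        """Copy only the changed blocks, then reset the change map."""
        delta = {i: self.blocks[i] for i in sorted(self.changed)}
        self.changed.clear()
        return delta
```

After one write, an incremental backup transfers a single block; a second backup with no intervening writes transfers nothing, which is where the shorter backup window comes from.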

Another SQL Server enhancement is group Volume Shadow Copy Service (VSS) snapshots.  Rubrik utilizes Microsoft’s SQL Writer Service to provide a point-in-time copy of the database.  The SQL Writer Service does this by freezing all operations on, or quiescing, the database to take a VSS snapshot.  Once the snapshot is complete, the database resumes operations while Rubrik performs any backup operations against the snapshot.  This process needs to be repeated on each individual database that Rubrik backs up, and that can lead to lengthy backup windows when there are multiple databases on each SQL Server.

Group VSS snapshots allow Rubrik to protect multiple databases on the same server with one VSS snapshot operation.  Databases that are part of the same SLA group will have their VSS snapshots taken and processed at the same time, essentially parallelizing backup operations for that SLA group.  The benefits are a reduction in SQL Server backup times and the ability to perform backups more frequently.
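The win here is reducing the number of quiesce/snapshot cycles from one per database to one per SLA group. The grouping step can be sketched as follows (hypothetical data shapes, purely to show the arithmetic):

```python
# Illustrative sketch: with group VSS snapshots, databases sharing an SLA
# domain are quiesced together, so the number of snapshot operations drops
# from one per database to one per SLA group. Hypothetical data model.
from collections import defaultdict


def snapshot_groups(databases: dict) -> dict:
    """Map SLA domain -> list of databases snapshotted in one VSS operation."""
    groups = defaultdict(list)
    for db, sla in databases.items():
        groups[sla].append(db)
    return dict(groups)
```

With six databases spread across two SLA domains, the server takes two snapshot operations instead of six, which is where the shorter backup window comes from.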

Windows Bare-Metal Recovery

Rubrik started off as a virtualization backup product.  However, there are still large workloads that haven’t been virtualized.  While Rubrik supported some physical backups, such as SQL Server database backups, it never supported full backup and recovery of physical Windows servers.  This meant that it couldn’t fully support all workloads in the data center.

The Andes 5.0 release introduces the ability to protect workloads and data that reside on physical Windows servers.  This is done with the same level of simplicity as all other virtualized and physical database workloads.

Physical Windows backup is done through the existing Rubrik Backup Service that is used for database workloads.  The initial backup is a full system backup saved to a VHDX file, and all subsequent backups utilize changed block tracking to back up only the changes to the volumes.

Restoring to bare metal isn’t fully automated, but it seems fairly straightforward.  The host server boots into a WinPE environment, mounts a Live Mount of the Windows volume snapshots, and then runs a PowerShell script to restore the volumes. Once the restore is complete, the server can be rebooted to the normal boot drive.

This option is not only good for backing up and protecting physical workloads, but it can also be used for P2V and P2C (or physical-to-cloud) migrations.

The Windows BMR feature only supports Windows Server 2008 R2, Server 2012 R2, and Server 2016.  It does not support Windows 7 or Windows 10.

SLA Policy Enhancements

Setting up backup policies inside of Rubrik is fairly simple.  You create an SLA domain, set the frequency and retention period of backup points, and apply that policy to virtual machines, databases, or other objects.

But what if you need more control over when certain backups are taken?  There may be policies in place that dictate when certain kinds of backups need to occur.

Andes 5.0 introduces advanced SLA policy configuration. This optional feature enables administrators not only to specify the frequency and retention period of a backup point, but also to specify when those backups take place.

For example, my policy may dictate that I take my monthly backup on the last day of each month.  Under Rubrik’s normal scheduling engine, I can only specify a monthly backup; I can’t create a schedule that runs only on the last day of the month.
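A "last day of the month" rule is easy to state but needs a calendar-aware check, since months end on day 28, 29, 30, or 31. The snippet below shows the kind of test such a schedule implies (illustrative only, not Rubrik's scheduler):

```python
# Illustrative check for a "last day of the month" schedule rule, the kind
# of constraint the advanced SLA configuration makes expressible.
# Not Rubrik's actual scheduling code.
import calendar
from datetime import date


def is_last_day_of_month(d: date) -> bool:
    """True when d falls on the final day of its month (handles leap years)."""
    days_in_month = calendar.monthrange(d.year, d.month)[1]
    return d.day == days_in_month
```

A scheduler evaluating this rule daily would fire the monthly backup on February 29 in a leap year and on February 28 otherwise, something a plain "every 30 days" cadence can't express.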

Office365 Backup

Office365 is quickly replacing on-premises Exchange and SharePoint servers as organizations move to the software-as-a-service model. While Microsoft provides tools to help retain data, it is possible to permanently delete data. There are also scenarios where it is not easy to move data – such as migrating to a new Office365 tenant.

Starting with the Andes 5.0 release, Rubrik will support backup and recovery of Office365 email and calendar objects through the Polaris platform. Polaris will act as the control plane for Office365 backup operations, and it will utilize the customer’s own Azure cloud storage to host the backup data and the search index.

SLAs can be applied to individual users or to all users in a tenant.  When applied to all users, new users and mailboxes automatically inherit the SLA so they are protected as soon as they are created.

The Office365 protection feature allows individual items, folders, or entire mailboxes to be recovered.  These items can be restored to their original mailbox location or exported to another user’s mailbox.

Other Enhancements

The Andes 5.0 release is a very large release, and I’m only scratching the surface of what’s included.  Some other key highlights of this release are:

  • NAS Direct Archive – Direct backup of NAS filesets into the Cloud
  • Live Mount VMDKs from Snapshots
  • Improved vCenter Recovery – Can recover directly to ESXi host
  • EPIC EHR Database Backup on Pure Storage
  • Snapshot Retention Enhancements
  • Support for RSA Multi-factor Authentication
  • API Tokens for Authentication
  • Cloud Archive Consolidation

Thoughts

This is another impressive release from Rubrik.  There are a number of long-awaited feature enhancements in this release, and they continue to add new features at a rapid pace.