In this post we will detail three features of our recent GitHub project that automatically deploys an Azure Virtual Desktop infrastructure (VDI), including host servers based on a Golden Image, and Azure DevOps pipelines to update and deploy new versions of the image.
The environment is configured with Azure Active Directory (AAD) joined hosts, and no line of sight to on-premises Active Directory (AD) domain controllers is required. Further, the Virtual Desktop hosts are configured with FSLogix containers so that users' profiles can roam among the different hosts. Typically, a Virtual Desktop environment requires the hosts to be AD joined so that users can save their profiles to an Azure File Share. However, in this project we use a blob container instead of a file share to store the FSLogix profiles. Thus, neither the VDI hosts nor the users need to be registered in AD.
First, we will show the steps to programmatically join the virtual desktop hosts to AAD and verify that an AAD-only user can log on to them. Then, we will go over the steps to enable FSLogix profiles stored in blob containers on the hosts. Lastly, we will detail the process for deploying the initial Golden Image version, plus the automated process for updating and deploying new versions of the image.
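To give a feel for the FSLogix piece, here is a minimal sketch of pointing FSLogix Cloud Cache at an Azure blob container via the host's registry. The storage account name and key are placeholders, and the project's actual script may differ; in practice the connection string should be injected at deploy time, never hard-coded.

```powershell
# Minimal sketch: configure FSLogix Cloud Cache to store profiles in Azure blob storage.
# <storageAccount> and <accountKey> are placeholders.
$regPath = 'HKLM:\SOFTWARE\FSLogix\Profiles'
New-Item -Path $regPath -Force | Out-Null

# Enable FSLogix profile containers on this host
Set-ItemProperty -Path $regPath -Name 'Enabled' -Value 1 -Type DWord

# A Cloud Cache location of type "azure" stores the profile container in blob storage
$ccd = 'type=azure,connectionString="DefaultEndpointsProtocol=https;' +
       'AccountName=<storageAccount>;AccountKey=<accountKey>;EndpointSuffix=core.windows.net"'
Set-ItemProperty -Path $regPath -Name 'CCDLocations' -Value $ccd -Type String
```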
In this post we detail the security measures we took in the development and automated deployment of a Graph API application we created. The console application accesses an Office 365 user's Excel file stored on OneDrive and sends "New" or "Reply" emails from the user's account, depending on the contents of tables in the Excel document. The application uses the Microsoft Graph API to access Office 365 resources without the user's credentials; instead, it authenticates with its own service account configured in Azure Active Directory (AAD).
The console application is deployed to Azure as a WebApp-triggered WebJob. This allows us to increase the security of the application by granting only the WebApp running the WebJob access to the application secrets (such as the application service account ID and password). Thus, the application as configured will not work outside an Azure resource whose identity has been granted access to the secrets.
The application is deployed and maintained with an Azure DevOps CI Pipeline. Every resource needed by the application, the DevOps project, and the pipelines is deployed with a bash shell script that runs AZ CLI commands. The shell script is included in the GitHub repository and is named deployResourcesProject.sh.
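As an illustration of the lock-down described above, the sketch below grants only the WebApp's managed identity access to the secrets. It assumes the secrets live in an Azure Key Vault and uses Az PowerShell rather than the project's AZ CLI script; all resource names are placeholders.

```powershell
# Minimal sketch: restrict secret access to the WebApp that hosts the WebJob.
# 'rg-graphapp', 'app-graphjob', and 'kv-graphapp' are placeholder names.
Connect-AzAccount

# Resolve the system-assigned managed identity of the WebApp
$webApp = Get-AzWebApp -ResourceGroupName 'rg-graphapp' -Name 'app-graphjob'
$principalId = $webApp.Identity.PrincipalId

# Grant only that identity permission to read secrets; no other principal is added
Set-AzKeyVaultAccessPolicy -VaultName 'kv-graphapp' `
    -ObjectId $principalId `
    -PermissionsToSecrets get, list
```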
In this post I will detail the steps for the automatic migration of a file share from an on-premises server to the Azure cloud, and for enabling SMB-over-QUIC on the migrated share. Afterwards, roaming users will be able to securely access the migrated share over the Internet.
To follow best security practices, the source files that deploy the resources should not contain any sensitive information, such as passwords. Instead, sensitive information is stored in Azure KeyVault. Further, the scripts use the Principle of Least Privilege to deploy all resources required for the project. For example, the cloud storage account hosting the migrated share only allows access to the on-premises network and the project's cloud virtual network.
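Here is a minimal sketch of that network restriction, using Az PowerShell (the project's own scripts may use a different tool). The resource names and the on-premises address range are placeholders.

```powershell
# Minimal sketch: lock the storage account down to the on-premises network
# and the project's virtual network. Names and ranges are placeholders.
$rg = 'rg-sharemigration'
$account = 'stmigratedshare'

# Default-deny: block all traffic that is not explicitly allowed
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName $rg -Name $account -DefaultAction Deny

# Allow the on-premises network's public address range
Add-AzStorageAccountNetworkRule -ResourceGroupName $rg -Name $account -IPAddressOrRange '203.0.113.0/24'

# Allow the project's virtual network subnet (requires the Microsoft.Storage service endpoint)
$subnet = Get-AzVirtualNetwork -ResourceGroupName $rg -Name 'vnet-project' |
    Get-AzVirtualNetworkSubnetConfig -Name 'snet-share'
Add-AzStorageAccountNetworkRule -ResourceGroupName $rg -Name $account -VirtualNetworkResourceId $subnet.Id
```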
The process consists of three steps: deploying the cloud resources, migrating a local share to the cloud, and setting up a cloud VM with the replicated share for SMB-over-QUIC access. I will describe the scripts that automatically execute each of these steps in sequence. I will also point out the places in these scripts where enhanced security measures or automation logic are applied.
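For the third step, the core of enabling SMB-over-QUIC is binding a TLS certificate to the SMB server on the cloud VM (Windows Server 2022 Azure Edition). A minimal sketch follows; the DNS name and certificate subject are placeholders, and the certificate is assumed to be installed already.

```powershell
# Minimal sketch: map a certificate to the SMB server so clients can reach
# the share over QUIC (UDP 443). 'share.example.com' is a placeholder name.
$fqdn = 'share.example.com'

# Find the TLS certificate previously installed for the share's public name
$cert = Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.Subject -eq "CN=$fqdn" }

# Bind the certificate to the SMB server endpoint for SMB-over-QUIC
New-SmbServerCertificateMapping -Name $fqdn -Thumbprint $cert.Thumbprint -StoreName My
```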
In this post I will detail the steps for the automatic migration of tables from an on-premises Microsoft SQL Server to a cloud-based Azure Cosmos DB account. The migration is executed with a custom Console Application, and for testing the migration we developed a containerized Blazor Server Web Application. Both applications are written in C# and use the latest .NET 6 version. All the cloud resources and applications are deployed automatically via an Azure DevOps Pipeline.
To follow best security practices, the code for the Blazor application does not include the connection string for the Cosmos DB account. Instead, this sensitive piece of information is stored in an Azure KeyVault secret, and the KeyVault allows access to its secret only to the managed identity of the application's container. The project relies on the publicly available WideWorldImporters database, and the GitHub repository containing all the scripts and source code is also public, so anyone wishing to replicate the project can do so.
First, I will describe how to automatically deploy the Azure DevOps project along with its resources. Then, I will explain how to automatically set up the WideWorldImporters database and export its StockItem-related tables. Next, I will go over the details of the Azure DevOps Pipeline that deploys the resources and migrates the tables. Lastly, I will go over the details of the console application that migrates the exported tables into the Cosmos DB container, and the Blazor Server Web App that tests the results.
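To make the resource-deployment step concrete, here is a sketch of creating the target Cosmos DB account, database, and container in Az PowerShell. This is not the project's actual pipeline step; the account name, database name, container name, and partition key are all placeholders.

```powershell
# Minimal sketch: Cosmos DB resources to receive the migrated tables.
# All names and the partition key ('/id') are placeholders.
$rg = 'rg-cosmosmigration'

# Core (SQL) API account
New-AzCosmosDBAccount -ResourceGroupName $rg -Name 'cosmos-wwi-demo' -Location 'East US'

# Database and container for the exported StockItem data
New-AzCosmosDBSqlDatabase -ResourceGroupName $rg -AccountName 'cosmos-wwi-demo' -Name 'WideWorldImporters'
New-AzCosmosDBSqlContainer -ResourceGroupName $rg -AccountName 'cosmos-wwi-demo' `
    -DatabaseName 'WideWorldImporters' -Name 'StockItems' `
    -PartitionKeyKind Hash -PartitionKeyPath '/id'
```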
This blog will go over the details of the Ansible project and explain how to automatically deploy a VPN tunnel between an Azure Virtual Network Gateway (VNETGW) and an on-premises Cisco ASA 5506 firewall. To allow secure access to the LAN interface of the ASA firewall, the DevOps CI Pipeline executes on a self-hosted agent running on an on-premises Ubuntu server. Also, to avoid writing any sensitive information in the playbooks, the process relies on an Azure KeyVault that stores the secrets required by the project.
First, I will describe how to set up the self-hosted agent. Then, I will explain how to set up the KeyVault for the project and how Ansible accesses KeyVault secrets. Lastly, I will go over how the playbook queries the random IPs created at run time, which are required for the proper configuration of the ASA firewall.
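As one illustration of keeping secrets out of the playbooks, a pipeline step on the self-hosted agent could pull a secret just before the Ansible run, as sketched below in Az PowerShell. It assumes the step is already signed in to Azure (for example via the pipeline's Azure PowerShell task); the vault and secret names are placeholders, and the post's playbooks may instead read KeyVault directly through an Ansible lookup plugin.

```powershell
# Minimal sketch: fetch the tunnel's pre-shared key from KeyVault at run time.
# 'kv-asa-vpn' and 'tunnel-psk' are placeholder names.
$psk = Get-AzKeyVaultSecret -VaultName 'kv-asa-vpn' -Name 'tunnel-psk' -AsPlainText

# Expose it to the playbook via the environment instead of writing it to disk
$env:TUNNEL_PSK = $psk
```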
The playbooks for the DevOps Pipeline can be found in the public GitHub repository below:
Earlier this year I had to migrate users and groups between two disconnected domains. Also, I needed to do this without administrative access to the source domain. To do this, I programmed a small utility that automatically creates the Active Directory PowerShell module commands required to recreate the users and groups in the destination domain. The program works by parsing the results of two PowerShell "Get" commands that can be run on any source-domain workstation by a regular user without administrative privileges.
In this blog I will first give an overall description of how the program works, and then I will go into detail on some aspects of the program that help ensure the users and groups are created in the destination domain without errors, such as handling empty user properties and creating the Organizational Units (OUs) that contain them.
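The empty-property handling is easiest to show with a simplified sketch. The input file name, property list, and target OU below are placeholders, and the real utility emits the New-ADUser commands as text rather than running them directly.

```powershell
# Simplified sketch: rebuild users from a non-privileged Get-ADUser export,
# skipping empty properties so New-ADUser never receives blank values.
Import-Module ActiveDirectory

$users = Import-Csv 'sourceUsers.csv'   # placeholder export file

foreach ($u in $users) {
    # Start with the properties every account must have
    $params = @{ Name = $u.Name; SamAccountName = $u.SamAccountName }

    # Add optional properties only when they are non-empty
    if ($u.GivenName) { $params['GivenName'] = $u.GivenName }
    if ($u.Surname)   { $params['Surname']   = $u.Surname }

    New-ADUser @params -Path 'OU=Imported,DC=dest,DC=local' -Enabled $false
}
```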
The source code for the utility and the PowerShell commands that create the reports can be found on this public GitHub repository:
This blog post will explain some of the design considerations for the playbooks involved in the automated deployment of a LAMP stack (Linux, Apache, MySQL, PHP) on cloud infrastructure, via Azure DevOps Continuous Integration (CI) triggers and Ansible playbooks.
First, I will outline the general steps of the process. Then, we'll examine how SSH key authentication is used throughout the pipeline, followed by a closer look at how the LAMP playbooks are executed from within the new Azure subnet. Lastly, we will look at how the pipeline checks for the success of the LAMP deployment.
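One way such a success check can be implemented is a simple smoke test against the freshly deployed web tier, sketched below in PowerShell; the post's pipeline may use a different mechanism, and the test URL is a placeholder.

```powershell
# Minimal sketch: request a PHP test page on the new VM and fail the
# pipeline step if it does not answer. The URL is a placeholder.
$response = Invoke-WebRequest -Uri 'http://10.0.1.4/info.php' -UseBasicParsing -TimeoutSec 30
if ($response.StatusCode -ne 200) {
    throw "LAMP smoke test failed with status $($response.StatusCode)"
}
Write-Output 'LAMP deployment responded successfully.'
```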
The playbooks for the DevOps Pipeline can be found in the public GitHub repository below: