In this post I will detail the steps for the automatic migration of tables from an on-premises Microsoft SQL Server to a cloud-based Azure Cosmos DB account. The migration is executed by a custom console application, and to test the migration we developed a containerized Blazor Server web application. Both applications are written in C# and target .NET 6. All the cloud resources and applications are deployed automatically via an Azure DevOps pipeline.
To follow security best practices, the code for the Blazor application does not include the connection string for the Cosmos DB account. Instead, this sensitive piece of information is stored in an Azure KeyVault secret, and the KeyVault grants access to that secret only to the managed identity of the application's container. The project relies on the publicly available WideWorldImporters database, and the GitHub repository containing all the scripts and source code is also public, so anyone wishing to replicate the project can do so.
First, I will describe how to automatically deploy the Azure DevOps project along with its resources. Then, I will explain how to automatically set up the WideWorldImporters database and export its StockItem-related tables. Next, I will go over the details of the Azure DevOps pipeline that deploys the resources and migrates the tables. Lastly, I will go over the details of the console application that migrates the exported tables into the Cosmos DB container, and the Blazor Server web app that tests the results.
The source code for this project can be found in this public GitHub repository:
Better-Computing-Consulting/mssql-cosmosdb-migrator-blazorapp-dbrowser-ci-deployment: Console app to migrate MSSQL export into Azure Cosmos DB and containerized Blazor server app to browse the new database and create excel exports. The apps are automatically deployed via Azure DevOps Pipeline CI. (github.com)
I have also posted a YouTube video showing the entire process.
To set up the DevOps project, clone the project repository into any console that has the Azure Command Line Interface (az cli) installed. Edit the variables of the script named setdevopsproj.sh located at the root of the repository; you need to supply the values for your copy of the GitHub repository and your Azure DevOps organization. Log in to Azure with the az login command and run the script. To make the script run completely unattended, first export your GitHub personal access token as an environment variable with this command:
export AZURE_DEVOPS_EXT_GITHUB_PAT=enter-github-pat-here
Among other things, the setdevopsproj.sh script creates the Azure DevOps project, the Service Endpoint (service connection) to Azure, and the pipeline that runs azure-pipelines.yml.
The role of the Service Endpoint for Azure is to grant the pipeline access to deploy resources in Azure. The name of the Service Endpoint, AzureServiceConnection, is hard coded into the command because the task in the pipeline's YAML file, azure-pipelines.yml, refers to it by name, so the two must match. See lines 14-17 of azure-pipelines.yml:
- task: AzureCLI@2
  displayName: Deploy resources
  inputs:
    azureSubscription: 'AzureServiceConnection'
Since the Service Endpoint is responsible for deploying the resources, the service principal associated with it must have sufficient access rights to deploy all of them. By default, Azure grants every new service principal created with the az ad sp create-for-rbac command the Contributor role on the subscription, so it can create resource groups and other resources. This access would be enough for every deployment in the pipeline except one.
In this project we need to assign the Blazor web app's container a managed identity so it can authenticate to the KeyVault containing the Cosmos DB connection string secret. However, the Contributor role denies the ability to create role assignments. See the permissions | notActions section of the JSON definition for the built-in Contributor role:
"permissions": [
    {
        "actions": [
            "*"
        ],
        "notActions": [
            "Microsoft.Authorization/*/Delete",
            "Microsoft.Authorization/*/Write",
            "Microsoft.Authorization/elevateAccess/Action",
            "Microsoft.Blueprint/blueprintAssignments/write",
            "Microsoft.Blueprint/blueprintAssignments/delete",
            "Microsoft.Compute/galleries/share/action"
        ],
Therefore, unless we grant this right to the service principal, the command that deploys the Web App Container Instance will fail.
The setdevopsproj.sh script handles this lack of access by first creating a custom role with the az role definition create command. The command takes as an argument a JSON file defining the role. The file included in the repository is a copy of the default Contributor role definition minus the line that denies the access:
"Microsoft.Authorization/*/Write",
So, the notActions section of our CustomRole.json file looks like this:
"notActions": [
    "Microsoft.Authorization/*/Delete",
    "Microsoft.Authorization/elevateAccess/Action",
    "Microsoft.Blueprint/blueprintAssignments/write",
    "Microsoft.Blueprint/blueprintAssignments/delete",
    "Microsoft.Compute/galleries/share/action"
],
The custom role JSON file also needs a subscription ID. The script queries the subscription ID and substitutes it for the placeholder in the file (lines 2 and 3 of setdevopsproj.sh), creates the custom role (line 4), puts the placeholder back (line 5), and then creates the service principal with the new custom role (line 6):
subscription_id=$(az account show --query id -o tsv)
sed -i "s/enter-subscription-id-here/$subscription_id/g" CustomRole.json
az role definition create --role-definition @CustomRole.json --only-show-errors --query "{roleType: roleType, roleName:roleName}" -o table
sed -i "s/$subscription_id/enter-subscription-id-here/g" CustomRole.json
spkey=$(az ad sp create-for-rbac --name $spname --role "Custom SQL Demo Contributor Role" --only-show-errors --query password -o tsv)
For this step, clone the repository on a computer running Microsoft SQL Server (the free Developer Edition works). Then, change directory to mssql-cosmosdb-migrator-blazorapp-dbrowser-ci-deployment\dbmigrate\Files\ and run the PowerShell script named SQLRestoreExport.ps1. The script also works with a remote SQL Server if the local computer has the sqlcmd and bcp commands installed; to use a remote MSSQL server, edit the value of the $sqlserver variable in the script. If a remote SQL Server is used, the account running the script must be able to create folders on the remote C$ administrative share and create databases on the remote SQL Server. The script restores the WideWorldImporters sample database and exports the StockItem-related tables (StockItems, StockItemHoldings, Suppliers, Colors, and PackageTypes) to tab-delimited text files with the bcp command.
When the script ends, commit the new export files back to the repository:
git status
git add .
git commit -m "add files"
git push origin master
Committing the exported files back to the GitHub repository will trigger the execution of the pipeline we created in the first step of the process.
The pipeline runs the azure-pipelines.yml file. This file contains a single AzureCLI task, which executes the deploy.sh script. The deploy.sh script handles both the deployment of the Azure resources and the building of the .NET programs. In total the script deploys four resources: a Container Registry, a Cosmos DB account, a KeyVault, and a Container Instance. Before deploying any resource, the script checks whether the resource is already deployed. The key steps of the deployment are described below.
The script builds and runs the dbmigrate program to create the database and container in the Cosmos DB account and to insert the transformed records from the MSSQL export files created in Step 2 above.
The DevOps Linux server running the agent comes with most of the tools required to run the deploy.sh script (the Azure CLI, the .NET SDK, and Docker), but at the time of writing it only included the .NET SDK up to version 5. Dbmigrate is a .NET 6 application, so before executing the dotnet run command the script installs the .NET 6 SDK with the sudo apt-get install -y dotnet-sdk-6.0 command.
The script passes the Cosmos DB connection string, database ID, container ID, and partition key as command-line arguments to the dotnet run command:
dotnet run -c Release --project dbmigrate/dbmigrate.csproj $dbConnStr $databaseid $containerid $partitionkey
The script pushes the cosmosdbviewer Blazor application image to the Container Registry created in the first step of the pipeline with the docker push command. Before running that command, the script logs into the registry with the docker login command, using the username and password of the service principal created for the Azure Service Endpoint when we ran setdevopsproj.sh to set up the DevOps project in Step 1. The pipeline exposes these two pieces of information as environment variables that can be accessed like this:
docker login $acrLoginSrv --username ${servicePrincipalId} --password ${servicePrincipalKey}
After the pipeline ends, if you query the Azure resource group of the project, you will see the following resource listing:
$ az resource list --query "[?resourceGroup=='SQLWestRG'].{ name: name, flavor: kind, resourceType: type, resourceGroup: resourceGroup, CreatedTime: CreatedTime }" -o table
Name ResourceType ResourceGroup Flavor
------------------------ ------------------------------------------- --------------- ----------------
cosmosdbviewer Microsoft.ContainerInstance/containerGroups SQLWestRG
bccDevContainerRegistry1 Microsoft.ContainerRegistry/registries SQLWestRG
bccdevcosmosdb1 Microsoft.DocumentDB/databaseAccounts SQLWestRG GlobalDocumentDB
bccDevKeyVault1 Microsoft.KeyVault/vaults SQLWestRG
The dbmigrate program is a .NET 6 console application that uses the new template with top-level statements. That is, the template created by the Visual Studio wizard does not contain a class, a namespace, or a Main method; you write your program as if you were writing inside the Main method of a .NET 5 template. Most of the code examples currently on Microsoft's documentation sites are still based on the .NET 5 templates, so they must be adapted to the new .NET 6 templates.
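As a minimal illustration (this is not code from the repository), the entire entry point of a trivial .NET 6 console program can look like this, whereas in .NET 5 the same two statements would have to live inside a Main method:
// Program.cs with .NET 6 top-level statements: no namespace, class, or Main declaration.
// In a .NET 5 template these lines would sit inside
// namespace dbmigrate { class Program { static void Main(string[] args) { ... } } }
Console.WriteLine($"Received {args.Length} arguments.");
return args.Length == 4 ? 0 : 1;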
The program takes the Cosmos DB connection string, database ID, container ID, and partition key as command-line arguments. See lines 1-10 of Program.cs below; these lines also show that the namespace, class, and Main definitions a .NET 5 program would have are absent.
using Microsoft.Azure.Cosmos;
using dbmigrate;
if (args.Length == 4)
{
    try
    {
        using CosmosClient client = new("" + args[0] + "", new CosmosClientOptions());
        string databaseId = args[1];
        string containerId = args[2];
        string partitionKey = args[3];
The purpose of the program is to convert the exported MSSQL tables into container items in Cosmos DB. I set up the Cosmos DB container to hold StockItem entries, so I base the import mainly on the exported StockItems table. The idea is to loop through the contents of the StockItems.txt file and create StockItem objects to be inserted into Cosmos DB. Each StockItem object must contain as sub-objects all the related entities that MSSQL references by ID, such as its supplier, color, package types, and warehouse holdings. Therefore, before looping through the StockItems.txt file, the program converts each of the related tables into a list of objects, i.e., a list of Supplier objects, a list of Color objects, and so on. For each line in the StockItems.txt file, the program matches the IDs in that line to the IDs of the objects in those lists. See Program.cs lines 48-51:
List<StockItemHolding> stockItemHoldings = GetStockItemHoldings();
List<Color> colors = GetColors();
List<PackageType> packageTypes = GetPackageTypes();
List<Supplier> suppliers = GetSuppliers();
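Each lookup helper can then be a simple search over one of these lists. As a minimal sketch (assuming the Supplier class exposes a SupplierID property; the repository's implementation may differ), GetSupplier could look like this:
// Matches the SupplierID read from a StockItems.txt line to the Supplier object
// built from Suppliers.txt. Find() returns null when there is no match.
static Supplier? GetSupplier(int supplierId, List<Supplier> suppliers)
{
    return suppliers.Find(s => s.SupplierID == supplierId);
}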
By default, the bcp command terminates each field with a tab character, so each line of every exported file is split on the tab character to create an array of string values. We know the order of the columns beforehand, so for each line we can walk through the values by incrementing a column index, performing data conversions as needed, and assigning the current value to the right object property:
using StreamReader sr = new(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Files", "StockItems.txt"));
while (sr.Peek() >= 0)
{
    string[] vals = sr.ReadLine().Split('\t');
    int i = 0;
    StockItem itm = new()
    {
        Id = Guid.NewGuid().ToString(),
        StockItemID = Int32.Parse(vals[i++].Trim()),
        StockItemName = vals[i++].Trim(),
        Supplier = GetSupplier(Int32.Parse(vals[i++].Trim()), suppliers),
        Color = GetColor(ToNullableInt(vals[i++].Trim()), colors),
        UnitPackage = GetPakageType(Int32.Parse(vals[i++].Trim()), packageTypes),
        OuterPackage = GetPakageType(Int32.Parse(vals[i++].Trim()), packageTypes),
        Brand = vals[i++].Trim(),
        Size = vals[i++].Trim(),
        LeadTimeDays = Int32.Parse(vals[i++].Trim()),
        QuantityPerOuter = Int32.Parse(vals[i++].Trim()),
        IsChillerStock = Convert.ToBoolean(Int32.Parse(vals[i++].Trim())),
        StockHolding = GetStockItemHolding(Int32.Parse(vals[0].Trim()), stockItemHoldings),
        Barcode = vals[i++].Trim(),
        TaxRate = decimal.Parse(vals[i++].Trim()),
        UnitPrice = decimal.Parse(vals[i++].Trim()),
        RecommendedRetailPrice = decimal.Parse(vals[i++].Trim()),
        TypicalWeightPerUnit = decimal.Parse(vals[i++].Trim()),
        MarketingComments = vals[i++].Trim(),
        InternalComments = vals[i++].Trim(),
        Photo = vals[i++].Trim(),
        CustomFields = vals[i++].Trim(),
        Tags = vals[i++].Trim(),
        SearchDetails = vals[i++].Trim(),
        LastEditedBy = Int32.Parse(vals[i++].Trim()),
        ValidFrom = DateTime.Parse(vals[i++].Trim()),
        ValidTo = DateTime.Parse(vals[i++].Trim()),
    };
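The snippet relies on a small helper, ToNullableInt, for columns that may be empty, such as the color ID. A minimal sketch of such a helper (the repository's version may differ) is:
// Returns null when the exported field is empty or not a valid integer.
static int? ToNullableInt(string value)
{
    return int.TryParse(value, out int result) ? result : (int?)null;
}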
We applied the same idea, parsing exported files by splitting each line on the tab character, in our previous blog post, Active Directory PowerShell Module Users and Groups Migration Commands Generator, where we migrated exported AD accounts to a separate, disconnected domain. In that project, for each object created in the loop we generated a PowerShell command to create the user; in this case we create a database entry from each line. The process of looping through exported files and performing data transformations on each line is useful in many different types of migrations.
Another point from the code above: building the file path for the StreamReader with Path.Combine and AppDomain.CurrentDomain.BaseDirectory allows the program to run both on the Windows computer where I developed the program and on the Linux server that runs it. However, for this to work we need to tell Visual Studio to copy the exported files to the output directory. We configure this on lines 23-39 of dbmigrate.csproj:
<ItemGroup>
  <None Update="Files\Colors.txt">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
  <None Update="Files\PackageTypes.txt">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
  <None Update="Files\StockItemHoldings.txt">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
  <None Update="Files\StockItems.txt">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
  <None Update="Files\Suppliers.txt">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>
Finally, once we are done looping through the StockItems.txt file and we have our list of StockItem objects, we loop through this list and enter each item into our Cosmos DB container:
foreach (StockItem anitem in allitems)
{
    ItemResponse<StockItem> response = await container.CreateItemAsync(anitem);
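The snippet above is truncated. A hedged sketch of the same loop with an explicit partition-key argument and basic conflict handling might look like the following; the partition-key argument and the error handling are illustrative additions, not necessarily the repository's exact code:
foreach (StockItem anitem in allitems)
{
    try
    {
        // The container is partitioned on /StockItemID, so pass the value explicitly.
        ItemResponse<StockItem> response =
            await container.CreateItemAsync(anitem, new PartitionKey(anitem.StockItemID));
        Console.WriteLine($"Created item {anitem.StockItemID}: {response.RequestCharge} RUs");
    }
    catch (CosmosException ex) when (ex.StatusCode == System.Net.HttpStatusCode.Conflict)
    {
        // An item with the same id already exists in the container; skip it.
        Console.WriteLine($"Item {anitem.StockItemID} already exists, skipping.");
    }
}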
The cosmosdbviewer is another .NET 6 application, this one using the new minimal hosting model template. Setting up the application, its environment variables, and its services can all be done within the Program.cs file; there is no need for a separate Startup.cs file. Again, the .NET 6 template for this application lacks a class, a namespace, and a Main method. In this section I will show: 1) the application configuration for KeyVault access with a managed identity, 2) how it displays Cosmos DB data, 3) how the web app performs data verification for the data entry form, and 4) the process of producing Excel spreadsheet reports.
Setting up the application is simple: the environment variables, KeyVault secret, and Cosmos DB service can all be set within the first 13 lines of Program.cs.
using cosmosdbviewer.Services;
using Azure.Identity;
using Microsoft.Azure.Cosmos;

var builder = WebApplication.CreateBuilder(args);

builder.Configuration.AddEnvironmentVariables();
Uri vaultUri = new(builder.Configuration["VaultUri"]);
builder.Configuration.AddAzureKeyVault(vaultUri, new DefaultAzureCredential());

builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();
builder.Services.AddSingleton<ICosmosDbService>(InitializeCosmosClientInstanceAsync(builder.Configuration).GetAwaiter().GetResult());
In line 7 we load the environment variables that we assigned to the Azure Container Instance in step 8 of the pipeline execution.
If you query these variables from the Container Instance in Azure, you get this:
$ az container show -g SQLWestRG --n cosmosdbviewer --query "containers[].environmentVariables[].{name: name, value: value}" -o table
Name Value
------------ ----------------------------------------
VaultUri https://bccdevkeyvault1.vault.azure.net/
databaseid WideWorldImporters
containerid StockItems
partitionkey /StockItemID
As you can see, the first line of the output shows the container has a variable named VaultUri that holds the address of the KeyVault. In line 8 we create a Uri object from the value of the VaultUri environment variable, and in line 9 we add the KeyVault's secrets to the application configuration using that Uri object.
Note that, in addition to the vault's Uri, we pass the AddAzureKeyVault function in line 9 an Azure.Identity.DefaultAzureCredential object. Passing the DefaultAzureCredential object instructs the program to present the managed identity we created for the container in step 8 of the pipeline to the KeyVault as its means of authentication.
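DefaultAzureCredential probes a chain of credential sources (environment variables, managed identity, Visual Studio, the Azure CLI, and so on), which is why the same code works on a development machine and in the container. If you wanted to restrict the app to the container's managed identity only, you could hypothetically swap in a ManagedIdentityCredential; this is an alternative sketch, not what the repository does:
// Alternative (illustrative only): authenticate to the KeyVault exclusively with the
// managed identity assigned to the Container Instance, skipping the rest of the chain.
builder.Configuration.AddAzureKeyVault(vaultUri, new ManagedIdentityCredential());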
In line 13, right before building the app, we add the Cosmos DB service. The function that returns the Cosmos DB service is defined at the bottom of the Program.cs file, lines 28-39:
static async Task<CosmosDbService> InitializeCosmosClientInstanceAsync(IConfiguration configuration)
{
    string databaseId = configuration["databaseid"];
    string containerId = configuration["containerid"];
    string partitionKey = configuration["partitionkey"];
    string connStr = configuration["CosmosDB1ConnectionString"];
    CosmosClient client = new(connStr, new CosmosClientOptions());
    CosmosDbService cosmosDbService = new(client, databaseId, containerId);
    DatabaseResponse database = await client.CreateDatabaseIfNotExistsAsync(databaseId);
    await database.Database.CreateContainerIfNotExistsAsync(new ContainerProperties(containerId, partitionKey), ThroughputProperties.CreateManualThroughput(1000));
    return cosmosDbService;
}
As you can see, the InitializeCosmosClientInstanceAsync function uses all the variables we added to the configuration at the top of the file: three container environment variables, databaseid, containerid, and partitionkey, and the one secret from the KeyVault, CosmosDB1ConnectionString. We created the KeyVault secret CosmosDB1ConnectionString in step 3 of the pipeline execution.
Cosmosdbviewer is a single-page web application; all the HTML and C# code is contained in the Index.razor file. The application's main purpose is to show the data that resides in the Cosmos DB container into which we migrated the StockItems from the MSSQL database, which lets us easily compare items between MSSQL and Cosmos DB. As the first step to show all the items, we declare a list of StockItem objects in the code section of Index.razor, line 251:
List<StockItem> items = new List<StockItem>();
Then, we get all the items from Cosmos DB in the overridden OnInitializedAsync method:
protected override async Task OnInitializedAsync()
{
    items = await cosmosDB.GetItemsAsync("SELECT * FROM c");
Note the query we use to get all the items, including all the related information in the container: Supplier, Color, PackageType, and StockHoldings. Our select statement, "SELECT * FROM c", returns everything. To get the same information from MSSQL we would need to join five different tables; this is because the migration process "denormalizes" five MSSQL tables into a single Cosmos DB container.
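The GetItemsAsync method belongs to the application's CosmosDbService. A minimal sketch of how such a method can be written with the Cosmos SDK's query iterator (assuming the service keeps the container in a _container field; the repository's implementation may differ in its details):
public async Task<List<StockItem>> GetItemsAsync(string queryString)
{
    List<StockItem> results = new();
    // GetItemQueryIterator pages through the results of the SQL-like query.
    FeedIterator<StockItem> iterator = _container.GetItemQueryIterator<StockItem>(new QueryDefinition(queryString));
    while (iterator.HasMoreResults)
    {
        FeedResponse<StockItem> page = await iterator.ReadNextAsync();
        results.AddRange(page);
    }
    return results;
}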
Once we have our list of StockItems, in the HTML section of Index.razor we loop through it to fill the body of our table.
<tbody>
    @foreach (var item in items)
    {
        <tr>
            <td>
                <details>
                    <summary>
                        @item.StockItemID
I also wanted to test adding items to Cosmos DB. In this respect, some features of Blazor help minimize the coding required to validate data in entry forms. Usually with data entry forms you need to make sure that the data in the form is of the type and format expected by the backend. With Blazor you can create forms that use the EditForm class and combine them with EditContext objects and data annotations on your custom objects; these features handle most of the data verification in a Blazor application.
For example, in lines 21-22 of Index.razor we start the StockItem data entry form:
<EditForm hidden="@isNotAddingItem" EditContext=itemEditContext OnValidSubmit=EnterItem>
    <DataAnnotationsValidator />
The EditForm tag indicates that this form is of the EditForm class, that it has an EditContext object named itemEditContext bound to it, and that it should use the DataAnnotationsValidator. Further down, in the code section of the file, line 243, we declare the EditContext object bound to the form:
private EditContext? itemEditContext;
Next, when we override the OnInitialized method, we set the HandleItemFieldChanged function to handle the OnFieldChanged event of the itemEditContext object:
protected override void OnInitialized()
{
    itemEditContext = new(item);
    itemEditContext.OnFieldChanged += HandleItemFieldChanged;
The HandleItemFieldChanged function validates the form based on the data annotations (the itemEditContext.Validate() call on line 688 below), and we also use it to perform an application-side check of the uniqueness of new StockItem names (lines 692-695). Cosmos DB has its own mechanisms for ensuring data integrity, but by performing the check here the program does not even attempt to enter duplicate data:
private void HandleItemFieldChanged(object sender, FieldChangedEventArgs e)
{
    isItemError = !itemEditContext.Validate();
    if (item.StockItemName is not null)
    {
        if (items.FindIndex(s => s.StockItemName.ToUpper() == item.StockItemName.ToUpper()) >= 0)
        {
            debug = "Existing StockItemName ";
            isItemError = true;
Note the isItemError variable (lines 688 and 695 above). When isItemError is set to true, the Submit button of the form is disabled. See line 111 of Index.razor:
<button type="Submit" disabled=@isItemError>Enter Item</button>
This variable is declared and initialized to true at the beginning of the code section, line 234.
private bool isItemError { get; set; } = true;
To complement the DataAnnotationsValidator option of the form, we need to decorate the objects with annotations that indicate whether a property is required for a new object and, optionally, the requirements for the data and the message to show the user when the field fails validation. For example, lines 11-13 of StockItem.cs, which defines our StockItem class, have this for the StockItemName property:
[Required(ErrorMessage = "A unique StockItemName is required."), MinLength(4), MaxLength(50), ]
[JsonProperty("StockItemName")]
public string? StockItemName { get; set; }
This indicates that the StockItemName field is required, what error message the user should get, and that the StockItemName length should be between 4 and 50 characters.
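Other properties can be constrained the same way. For example, a numeric property could hypothetically combine Required with a Range check (this is only an illustration, not a line from the repository's StockItem.cs):
[Required(ErrorMessage = "A unit price is required."), Range(0.01, 100000, ErrorMessage = "UnitPrice must be between 0.01 and 100,000.")]
[JsonProperty("UnitPrice")]
public decimal? UnitPrice { get; set; }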
The last step in defining the EditForm data validation process is selecting where to display the form's errors to the user. For the StockItem form this happens on lines 103-105:
<tr>
    <td colspan="6"><ValidationSummary /></td>
</tr>
This designates a row spanning all the columns at the bottom of the table to display the ValidationSummary.
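Once every check passes and the user clicks the enabled Submit button, the form's OnValidSubmit handler, EnterItem, runs. The actual handler lives in Index.razor; a hypothetical sketch, assuming the Cosmos DB service exposes an AddItemAsync method, might look like this:
private async Task EnterItem()
{
    // Give the new document an id and persist it through the Cosmos DB service
    // (AddItemAsync is an assumed method name, not necessarily the repository's).
    item.Id = Guid.NewGuid().ToString();
    await cosmosDB.AddItemAsync(item);

    // Refresh the local list so the new StockItem shows up in the table,
    // then reset the entry form to a fresh StockItem.
    items = await cosmosDB.GetItemsAsync("SELECT * FROM c");
    item = new StockItem();
    itemEditContext = new(item);
    itemEditContext.OnFieldChanged += HandleItemFieldChanged;
}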
The last feature of the cosmosdbviewer program I will detail is its ability to produce Excel reports. Companies of many different types (e-commerce, research, legal services, etc.) find it useful to be able to create Excel reports by querying the SQL backend directly. These reports can be triggered on a schedule or manually by the users, and they can be used to extend the reporting capabilities of existing software.
In cosmosdbviewer we add Excel report capabilities to the program by including the NPOI NuGet Package, which is open-source, community-supported, and freely licensed.
To use the package within Index.razor, we add using statements at the top of the file; see lines 5-6:
@using NPOI.XSSF.UserModel
@using NPOI.SS.UserModel
At the top of the page, right below the header, there is a button named "Download Report" that, when clicked, creates an Excel spreadsheet of the StockItem data and prompts the user to download it. The onclick event of the button is bound to the DownloadFileFromStream function. See line 19 of Index.razor:
<button @onclick="DownloadFileFromStream" hidden="@isAddingItem" disabled="@IsGettingReport">Download Report</button>
The DownloadFileFromStream function is short:
private async Task DownloadFileFromStream()
{
    IsGettingReport = true;
    var fileStream = GetFileStream();
    var fileName = "ItemsReport." + DateTime.Now.ToString("yyyyMMddHHmmss") + ".xlsx";
    using var streamRef = new DotNetStreamReference(stream: fileStream);
    await JS.InvokeVoidAsync("downloadFileFromStream", fileName, streamRef);
    IsGettingReport = false;
}
We create the Excel spreadsheet in the GetFileStream function, which returns the Excel file contents as a memory stream to the DownloadFileFromStream function:
private Stream GetFileStream()
{
    IWorkbook workbook = new XSSFWorkbook();
    var dataFormat = workbook.CreateDataFormat();
    var dataStyle = workbook.CreateCellStyle();
    dataStyle.DataFormat = dataFormat.GetFormat("MM/dd/yyy HH:mm:ss");
    string[] tabs = { "All Items", "Week Items" };
    foreach (string tab in tabs)
    {
        // ... lines 266-605, which create each worksheet and fill its rows, are omitted here ...
        for (int i = 0; i < headers.Length - 1; i++) { worksheet.AutoSizeColumn(i); }
    }
    MemoryStream ms = new MemoryStream();
    workbook.Write(ms);
    var binaryData = ms.ToArray();
    var fileStream = new MemoryStream(binaryData);
    return fileStream;
}
In this program we create only two tabs (the tabs array, line 263 of Index.razor): one that includes all items, and another that includes only the items created within the last 7 days. The spreadsheets are easily expanded with more tabs that include complex statistical analysis or calculations on the data. You can find an example of such a complex spreadsheet in our public GitHub repository https://github.com/Better-Computing-Consulting/ecometry-pre-order-report, which includes many tabs created by separate SQL SELECT statements and columns that contain calculated data.
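The code that actually creates and fills each worksheet (lines 266-605 of Index.razor) is elided from the GetFileStream snippet above. A minimal sketch of how the body of the foreach loop over the tabs could create and fill one worksheet with NPOI, assuming a headers string array, the items list shown earlier, and ValidFrom as the creation timestamp (the repository's version differs in its details):
ISheet worksheet = workbook.CreateSheet(tab);

// Header row.
IRow headerRow = worksheet.CreateRow(0);
for (int i = 0; i < headers.Length; i++) { headerRow.CreateCell(i).SetCellValue(headers[i]); }

// Data rows; the "Week Items" tab only keeps items from the last 7 days.
int rowIndex = 1;
foreach (StockItem anitem in items)
{
    if (tab == "Week Items" && anitem.ValidFrom < DateTime.Now.AddDays(-7)) { continue; }
    IRow row = worksheet.CreateRow(rowIndex++);
    row.CreateCell(0).SetCellValue(anitem.StockItemID);
    row.CreateCell(1).SetCellValue(anitem.StockItemName);
    ICell dateCell = row.CreateCell(2);
    dateCell.SetCellValue(anitem.ValidFrom);
    dateCell.CellStyle = dataStyle; // apply the date format created above
}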
To make the Excel spreadsheets more readable, I like to resize all the columns after the data has been entered, so that each column is as wide as its widest cell. See the worksheet.AutoSizeColumn call above (line 606 of Index.razor).
For this call to work on the Linux Container Instance running the web app, you need to install the libgdiplus and libc6-dev packages in the container image. You do this by adding a RUN command to the application's Dockerfile:
RUN apt-get update && apt-get -y install libgdiplus libc6-dev
Going back to producing the Excel file: after the spreadsheet is created and saved as a memory stream, the DownloadFileFromStream function uses the DotNetStreamReference class (line 619 of Index.razor), which is new in .NET 6, to create a stream reference for the JavaScript function downloadFileFromStream, which is responsible for downloading the file to the user's browser. Thus, the Excel file is never saved in the container; it is kept in memory until it is downloaded.
I placed the JavaScript function downloadFileFromStream and its helper function triggerFileDownload as the last two entries in the body section of the _Layout.cshtml file.
I hope you find this project and blog post useful. I realize I did not cover every single aspect of the setup scripts, the pipeline, or the programs, but I aimed to cover all the unique technological and logical aspects of this process from beginning to end. If you have any questions, do not hesitate to reach out to info@bcc.bz.
Thank you for reading.
IT Consultant
Better Computing Consulting