Category Archives: Azure

Sending email via the Azure Communication Service

I want to send emails from an Azure Function in response to a call from my React UI. Azure has the Communication Service and the Email Communication Service for this functionality.

In the Azure Portal

  • Create a Communication Service resource in your resource group
  • Create an Email Communication Service resource in your resource group

Add a free Azure subdomain

If you now go to the Email Communication Service | Overview, you can “Add a free Azure subdomain”. This is a really quick and simple way to get a domain up and running, but it comes with lower sending quotas than a custom domain. That said, let’s click the “1-click add” and create an Azure subdomain.

When completed, go to Settings | Provision domains, where it should show a domain name, a domain type of Azure subdomain, and all statuses showing Verified.

Add a custom domain

Before trying anything out, let’s cover the custom domain. We’ll assume you have a domain with a non-Azure DNS provider, for example GoDaddy. In the Azure Email Communication Service | Overview, click the “Setup” button.

  • Enter your domain
  • Re-enter to confirm
  • Click the confirm button
  • Click the Add button
  • We now need to verify the domain, so click Verify Domain and copy the TXT value
  • In my instance my DNS supplier offers a “Verify domain” option, but you can just as easily add a TXT record with a name of @ and the copied TXT value, OR use the “Verify Domain Ownership” button if one exists
  • Once validation has completed, go to the Email Communication Service | Settings | Provision domains and you’ll notice SPF, DKIM and DKIM2 are not verified, i.e. they’ll show “Configure”
  • Click “Configure” on any of them (SPF will do); this will show the configuration for SPF, DKIM and DKIM2
  • For SPF, copy the SPF value, go to your DNS supplier and create a new DNS record of type TXT with a name of @, and paste the copied value
  • For DKIM, copy the DKIM record name, go to your DNS supplier and create a new DNS record of type CNAME, paste the record name into the name of your record and then copy the DKIM value from Azure into the CNAME value (if you have an option for Proxy, set it to DNS only)
  • Finally, for DKIM2, copy the DKIM2 record name, go to your DNS supplier and create a new DNS record of type CNAME, paste the record name into the name of your record and then copy the DKIM2 value from Azure into the CNAME value (if you have an option for Proxy, set it to DNS only)
  • Go back to the Azure SPF configuration and click Next then Done

Verification can take some time, but when completed your custom domain should show Domain Status, SPF Status, DKIM status and DKIM2 status all Verified.
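
As an aside, if your custom domain’s DNS happened to be hosted in an Azure DNS zone rather than a third party, the same records could be added from the CLI. This is only a sketch; the names and values in braces are placeholders for whatever Azure shows you in the “Configure” panel.

```shell
# SPF: a TXT record at the zone apex (@), value copied from the Configure panel
az network dns record-set txt add-record \
  --resource-group {RESOURCE_GROUP} --zone-name {DOMAIN_NAME} \
  --record-set-name "@" --value "{SPF_VALUE}"

# DKIM (and likewise DKIM2): a CNAME record, name and value copied from Azure
az network dns record-set cname set-record \
  --resource-group {RESOURCE_GROUP} --zone-name {DOMAIN_NAME} \
  --record-set-name "{DKIM_RECORD_NAME}" --cname "{DKIM_VALUE}"
```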

Connecting to the Communication Service

We’ve configured our domains; now we want to connect them to the “Communication Service” that you created earlier.

  • From the “Communication Service” go to Email | Domains
  • Click on Connect domains
  • Select your subscription, resource group, the email service and finally the verified domain you wish to use – you can add multiple verified domains, for example a custom domain and your free Azure subdomain.

Now all that’s left is to test the email, so

  • From the “Communication Service” go to Email | Try Email
  • Select the domain
  • Select your sender
  • Enter one or more recipients
  • I’ll leave the rest as default
  • If all fields are correct a “Send” button will appear, click it to send the email.

Whilst trying the email out you’ll have noticed the source code on the right – this gives you the code to place in your Azure function or other services.

Code

Here’s an example of the code generated via “Try Email”

using System;
using System.Collections.Generic;
using Azure;
using Azure.Communication.Email;

string connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING");
var emailClient = new EmailClient(connectionString);


var emailMessage = new EmailMessage(
    senderAddress: "DoNotReply@<from_domain>",
    content: new EmailContent("Test Email")
    {
        PlainText = @"Hello world via email.",
        Html = @"
		<html>
			<body>
				<h1>
					Hello world via email.
				</h1>
			</body>
		</html>"
    },
    recipients: new EmailRecipients(new List<EmailAddress>
    {
        new EmailAddress("<to_email>")
    }));
    

EmailSendOperation emailSendOperation = emailClient.Send(
    WaitUntil.Completed,
    emailMessage);
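
For a quick smoke test without writing any code at all, the Azure CLI’s communication extension has (assuming you have the extension installed; verify the flags with --help) an equivalent send command. The addresses below are placeholders.

```shell
# One-off test send via the CLI (requires the "communication" extension)
az communication email send \
  --connection-string "$COMMUNICATION_SERVICES_CONNECTION_STRING" \
  --sender "DoNotReply@{YOUR_DOMAIN}" \
  --to "{TO_EMAIL}" \
  --subject "Test Email" \
  --text "Hello world via email."
```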

Azure Static Web App Preview Environments

If you’re using something like GitHub Actions to deploy your static web app to Azure, you might not realise that you can have your PRs deployed to “preview” environments.

Go to your static web app and select Settings | Environment and your PRs will each have a deployment listed in the preview environment section.

If the URL for your site is something like this

https://green-bat-01ba34c03-26.westeurope.3.azurestaticapps.net

The 26 is the PR number, so the pattern is

https://green-bat-01ba34c03-{PR Number}.westeurope.3.azurestaticapps.net

And you can simply open that PR via your browser and verify all looks/works before merging to main.
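
The pattern above is easy to script; for example, a small helper (the host and region are the example values from above) to print the preview URL for a given PR number:

```shell
# Build a Static Web App preview URL from the default host name, region and PR number
pr_preview_url() {
  local host="$1" region="$2" pr="$3"
  echo "https://${host}-${pr}.${region}.azurestaticapps.net"
}

pr_preview_url "green-bat-01ba34c03" "westeurope.3" 26
# → https://green-bat-01ba34c03-26.westeurope.3.azurestaticapps.net
```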

If you’ve set up your PR-close step to correctly delete these preview environments, for example from GitHub Actions

- name: Close Pull Request
  id: closepullrequest
  uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    app_location: "./dist"
    action: "close"

then this will delete the preview environment once your PR is merged.

However, as I found, if your PR-close step is not working correctly these preview environments accumulate until you reach the maximum number of preview environments; at that point you’ll get an error trying to build the PR, and it cannot be deployed to a preview environment until you select and delete some of the old ones.
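
If you do hit the limit, the stale environments can also be listed and removed from the CLI rather than clicking through the portal. The names in braces are placeholders for your own app and environment.

```shell
# List all environments (including previews) for the static web app
az staticwebapp environment list \
  --name {STATIC_WEB_APP_NAME} --resource-group {RESOURCE_GROUP}

# Delete a specific stale preview environment by its name
az staticwebapp environment delete \
  --name {STATIC_WEB_APP_NAME} --resource-group {RESOURCE_GROUP} \
  --environment-name {ENVIRONMENT_NAME}
```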

Configuring your DNS through to your Azure Kubernetes Cluster

Note: I’m going to have to list the steps I think I took to buy a domain name on the Azure Portal, as I didn’t note all the steps down at the time – so please double check things when creating your own.

You can create your domain wherever you like; I happened to have decided to create mine via Azure.

  • Go to the Azure portal
  • Search for DNS Zones and click the Create button
  • Supply your subscription and resource group
  • Select your domain name
  • Click Review + Create, then Create, to create the DNS zone

To set up the DNS zone (I cannot recall if this was part of the above or a separate step), run

az network dns zone create \
--resource-group {RESOURCE_GROUP} \
--name {DOMAIN_NAME}

I’m going to assume you have Kubernetes installed.

We need a way to get from the outside world into our Kubernetes cluster, so we’ll create an ingress controller using

helm install ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace --namespace ingress-nginx \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.service.externalTrafficPolicy=Local

Next we need to update our DNS record to use the EXTERNAL_IP of the ingress controller we’ve just created, so

  • Run the following to get the EXTERNAL_IP
    kubectl get svc ingress-nginx-controller -n ingress-nginx
    
  • You can go into the DNS record and change the A record (the @ type and any other subdomains you’ve added) to use the EXTERNAL_IP address, or use
    az network dns record-set a add-record --resource-group {RESOURCE_GROUP} \
    --zone-name {DOMAIN_NAME} --record-set-name @ --ipv4-address {EXTERNAL_IP}
    

At this point you’ll obviously need to set up your service with its own ingress, using your domain in the “host” value of the ingress.

Adding Nginx Ingress controller to your Kubernetes cluster

You’ve created your Kubernetes cluster and added a service, so you’ve set up deployments, services and an ingress, but now you want to expose the cluster to the outside world.

We need to add an ingress controller such as nginx.

  • helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    
  • helm repo update
    
  • helm install ingress-nginx ingress-nginx/ingress-nginx \
      --create-namespace \
      --namespace ingress-nginx \
      --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
      --set controller.service.externalTrafficPolicy=Local
    

Note: In my case my namespace is ingress-nginx, but you can set to what you prefer.

Now I should say, originally I installed using

helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace

but I seemed unable to reach this from the web – I’m including it here just for reference.

To get your EXTERNAL-IP, i.e. the one exposed to the web, use the following (replace the -n with the namespace you used).

kubectl get svc ingress-nginx-controller -n ingress-nginx

If you’re using some script to get the IP, you can extract just that using

kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

Now, my IP is not static, so upon redeploying the controller it’s possible this might change, so be aware of this. Of course, to get around it you could create a static IP with Azure (at a nominal cost).
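
One way to do that (a sketch; the resource names are placeholders, and the static IP must live in the cluster’s node resource group, the MC_… one by default) is to create the public IP up front and point the ingress controller’s LoadBalancer service at it:

```shell
# Create a static public IP in the AKS node resource group
az network public-ip create \
  --resource-group {NODE_RESOURCE_GROUP} \
  --name ingress-nginx-ip \
  --sku Standard --allocation-method static

# Tell the ingress-nginx chart to use that IP for its LoadBalancer service
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --reuse-values \
  --set controller.service.loadBalancerIP={STATIC_IP}
```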

Still not accessing your services, and getting 404s with the Nginx web page displayed?

Remember that in your ingress (i.e. the services ingress), you might have something similar to below.

Here, we set the host name, hence in this case the service will NOT be accessible via the IP itself; requests need to use the domain name so that they match against the ingress rule for the service.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
  namespace: development
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mydomain.com  # Replace with your actual domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-service
            port:
              number: 80
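
A quick way to check this without waiting on DNS is to present the expected host name yourself while hitting the external IP directly (mydomain.com and the IP are placeholders):

```shell
# Bypass DNS: hit the ingress controller's EXTERNAL-IP but send the expected Host header
curl -H "Host: mydomain.com" http://{EXTERNAL_IP}/

# Or resolve the domain to the IP for this one request only
curl --resolve mydomain.com:80:{EXTERNAL_IP} http://mydomain.com/
```

If the first form returns your service but hitting the bare IP returns the Nginx 404 page, the host-based routing is working as intended.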

Tracking events etc. with Application Insights

In my previous post I looked at what we need to do to set up and use Application Insights for our logs, but we also have access to the TelemetryClient in .NET (Microsoft also has clients for other languages) and this allows us to send information to some of the other Application Insights features, for example tracking events.

Tracking events is useful as a specific type of logging, i.e. we might want to track whether one of our application’s options is ever used, or to what extent it’s used. Imagine we have a button that runs some long-running calculation – if nobody ever uses it, maybe it’s time to deprecate it and get rid of it.

Of course we could just use logging for this, but the TelemetryClient allows us to capture data within the customEvents and customMetrics tables within Application Insights (we’ll look at the available tables in the next post, on the basics of KQL) and hence reduce the clutter of lots of logs.

Take a look at my post Logging and Application Insights with ASP.NET core to see the code for a simple test application. We’re going to simply change the app.MapGet code to look like this (note I’ve left the logging in place as well, so we can see all the options for telemetry and logging)

app.MapGet("/test", (ILogger<Program> logger, TelemetryClient telemetryClient) =>
{
    telemetryClient.TrackEvent("Test Event");
    telemetryClient.TrackTrace("Test Trace");
    telemetryClient.TrackException(new Exception("Test Exception"));
    telemetryClient.TrackMetric("Test Metric", 1);
    telemetryClient.TrackRequest("Test Request", DateTimeOffset.Now, TimeSpan.FromSeconds(1), "200", true);
    telemetryClient.TrackDependency("Test Dependency", "Test Command", DateTimeOffset.Now, TimeSpan.FromSeconds(1), true);
    telemetryClient.TrackAvailability("Test Availability", DateTimeOffset.Now, TimeSpan.FromSeconds(1), "Test Run", true);
    telemetryClient.TrackPageView("Test Page View");

    logger.LogCritical("Critical Log");
    logger.LogDebug("Debug Log");
    logger.LogError("Error Log");
    logger.LogInformation("Information Log");
    logger.LogTrace("Trace Log");
    logger.LogWarning("Warning Log");
})
.WithName("Test")
.WithOpenApi();

As you can see, we’re injecting the TelemetryClient object and Application Insights is set up (as per my previous post) using

builder.Services.AddApplicationInsightsTelemetry(options =>
{
    options.ConnectionString = configuration["ApplicationInsights:InstrumentationKey"];
});

From the TelemetryClient we have these various “Track” methods and, as you can no doubt surmise, they map to

  • TrackEvent: maps to the customEvents table
  • TrackTrace: maps to the traces table
  • TrackException: maps to the exceptions table
  • TrackMetric: maps to the customMetrics table
  • TrackRequest: maps to the requests table
  • TrackDependency: maps to the dependencies table
  • TrackAvailability: maps to the availabilityResults table
  • TrackPageView: maps to the pageViews table

Telemetry along with standard logging to Application Insights gives us a wealth of information that we can look at.

Of course, assuming we’re sending information to Application Insights, we’ll then want to look at features such as Application Insights | Monitoring | Logs, where we can start to query against the available tables.
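
For example, a simple query over the customEvents table populated by TrackEvent above, either pasted into the Logs blade or run via the CLI’s application-insights extension (the resource names are placeholders):

```shell
# Count each custom event seen over the last day
az monitor app-insights query \
  --app {APP_INSIGHTS_NAME} --resource-group {RESOURCE_GROUP} \
  --analytics-query "customEvents | where timestamp > ago(1d) | summarize count() by name"
```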

Azure web app with IIS running ASP.NET core/Kestrel

When you deploy your ASP.NET core (.NET 8) application to an Azure web app, you’ll likely have created the app to work with Kestrel (so you can deploy to pretty much any environment). But when you deploy as an Azure Web App, you’re essentially deploying to an IIS application.

So we need IIS to simply proxy across to our Kestrel app. We achieve this by adding a Web.config to the root of our published app, with configuration such as below

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="." inheritInChildApplications="false">
    <system.webServer>
      <handlers>
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      </handlers>
      <aspNetCore processPath=".\MyAspNetCoreApp.exe" stdoutLogEnabled="false" stdoutLogFile="\\?\%home%\LogFiles\stdout" hostingModel="inprocess" />
    </system.webServer>
  </location>
</configuration>

Creating a static web site on Azure (includes Blazor WebAssembly sites)

Azure offers a free static web site option; the process for creating a static web site is the same as creating one for a Blazor standalone application…

  • Create a resource group and/or select an existing one
  • Click Create button
  • Select or search for Static Web App via the marketplace
  • Click Create
  • I’m using the free hosting plan – so click Free, For hobby or personal projects plan type
  • I want the code to be deployed automatically from GitHub, so ensure the GitHub deployment details are set up
  • If you need to amend the GitHub account, do so
  • Set your organization, repository and branch to the github account/repo etc.
  • In Deployment configuration I use Deployment Token
  • In Advanced, set your region

As part of this process, if you tie the application to your GitHub repo, you’ll also find a GitHub Actions CI/CD pipeline added to your repository, which will carry out continuous deployment upon commits/merges to your main branch.

It’s likely you’ll want to map your Azure website name to DNS. I have my domain with a different company, so I need to change the DNS records there.

In my third-party host I enter a DNS record

  • CNAME
  • app (or www in most cases)
  • your Azure website name (I have to terminate the URL with a “.”)

In Azure, in the static web app, select Settings | Custom domains and Add | Custom domain on other DNS (if your host is not in Azure)

  • Type the URL name, i.e. www.mywebsite.co.uk (my subdomain uses app.mywebsite.co.uk)
  • Azure will create the CNAME record (which I’ll show below)
  • Click Next then Add (having copied your CNAME values)

Don’t forget to change the Azure supplied workflow

Whilst this is a pretty seamless experience (usually just waiting on CNAME and DNS updates to propagate), one thing not to forget is that the generated workflow file will need some changes to point to your app source etc.

At the end of the build_and_deploy_job you’ll see app_location, api_location and output_location; these need setting to your application’s source location, the API location if you are using Azure Functions, and finally the build output location. The first of these you do need to set; the others are optional.
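
In the generated workflow, the relevant part looks something like the below (the paths are examples only; adjust them to your repository layout):

```yaml
with:
  azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
  action: "upload"
  app_location: "/src"       # required: where your app source lives
  api_location: "api"        # optional: Azure Functions source, if used
  output_location: "dist"    # optional: build output folder
```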

Azure Functions, AWS Lambda Functions, Google Cloud Functions

Some companies, due to regulatory requirements, a desire not to get locked into one cloud vendor, or the like, look towards a multi-cloud strategy. With this in mind, this post is the first of a few showing some of the same functionality (but with different names) across the top three cloud providers: Microsoft’s Azure, Amazon’s AWS and Google Cloud.

We’re going to start with the serverless technology known as Lambda Functions in AWS (and I think they might have been the first), Azure Functions, and the Google Cloud equivalent, Google Cloud Functions. Now, in truth the three may not be 100% compatible in terms of their APIs, but they’re generally close enough to allow us to worry only about the request and response and keep the same code for the specific function. Of course, if your function uses DBs or queues, then you’re probably starting to get tied more to the vendor than is the intention of this post.

I’ve already covered Azure Functions in the past, but let’s revisit, so we can compare and contrast the offerings.

I’m not too interested in the code we’re going to deploy, so we’ll stick with JavaScript for each provider and just write a simple echo service, i.e. we send in a value and it responds with the value preceded by “Echo: ” (we can look at more complex stuff in subsequent posts).

Note: We’re going to use the UI/Dashboard to create our functions in this post.

Azure Functions

From Azure’s Dashboard

  • Type Function App into the search box or select it from your Dashboard if it’s visible
  • From the Function App page click the Create Function App button
  • From the Create Function App screen
    • Select your subscription and resource group OR create a new resource group
    • Supply a Function app name. This is essentially our apps name, as the Function app can hold multiple functions. The name must be unique across Azure websites
    • Select Code. So we’re just going to code the functions in Azure not supply an image
    • Select a runtime stack, let’s choose Node.js
    • Select the version (I’m sticking with the default)
    • Select the region, look for the region closest to you
    • Select the Operating System, I’m going to leave this as the default Windows
    • I’ve left the Hosting to the default Consumption (Serverless)
  • Click Review + create
  • If you’re happy, now click Create

Once Azure has done its stuff, we’ll have a resource and associated resources created for our functions.

  • Go to resource, or type Function App into the search box and navigate there via this option
  • You should see your new function app with a Status of Running etc.
  • Click on the app name and you’ll navigate to the app’s page
  • Click on the Create in Azure portal button. You could choose VS Code Desktop or set up your own editor if you prefer
  • We’re going to create an HTTP trigger, which is basically a function which will start-up when an HTTP request comes in for the function, so click HTTP trigger
    • Supply a New Function, I’m naming mine Echo
    • Leave Authorization level as Function OR set to Anonymous for a public API. Azure’s security model for functions is nice and simple, so I’ve chosen Function for this function, but feel free to change to suit
    • When happy with your settings, click Create

If all went well you’re now looking at the Echo function page.

  • Click Code + Test
  • The default .js code is essentially an echo service, but I’m going to change it slightly to the following
    module.exports = async function (context, req) {
      const text = (req.query.text || (req.body && req.body.text));
      context.log('Echo called with ' + text);
      const responseMessage = text
        ? "Echo: " + text
        : "Pass a POST or GET with the text to echo";
    
      context.res = {
        body: responseMessage
      };
    }
    

Let’s now test this function. The easiest way is to click the Test/Run option

  • Change the Body to
    {"text":"Scooby Doo"}
    
  • Click Run and if all went well you’ll see Echo: Scooby Doo
  • To test from our browser, let’s get the URL for our function by clicking on the Get function URL
  • The URL will be in the following format and we’ve added the query string to use with it
    https://your-function-appname.azurewebsites.net/api/Echo?code=your-function-key&text=Shaggy
    

If all went well you’ll see Echo: Shaggy and we’ve basically created our simple Azure Function.
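
Outside the portal, you can exercise both the GET and POST paths of the function with curl (the app name and function key below are placeholders):

```shell
# GET: text supplied via the query string
curl "https://{FUNCTION_APP}.azurewebsites.net/api/Echo?code={FUNCTION_KEY}&text=Shaggy"

# POST: text supplied in the JSON body
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"text":"Scooby Doo"}' \
  "https://{FUNCTION_APP}.azurewebsites.net/api/Echo?code={FUNCTION_KEY}"
```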

Note: Don’t forget to delete your resources when you’ve finished testing this OR use it to create your own code

AWS Lambda

From the AWS Dashboard

  • Type Lambda into the search box
  • Click the button Create function
  • Leave the default as Author from scratch
  • Enter the function name. echo in my case
  • Leave the runtime (this should be Node), architecture etc. as the default
  • Click Create function

Once AWS has done its stuff, let’s look at the code file index.mjs and change it to

export const handler = async (event, context) => { 
  console.log(JSON.stringify(event));
  const response = {
    statusCode: 200,
    body: JSON.stringify('Echo: ' + event.queryStringParameters.text),
  };
  return response;
};

You’ll need to Deploy the function before it updates to use the latest code, but you’ll find that, at this point, you’ll probably get errors using the Test option. One thing we haven’t yet done is supply a trigger.

  • Either click Add trigger or from the Configuration tab click Add trigger
  • Select API Gateway, which will add an API to create an HTTP endpoint for REST/HTTP requests
  • If you’ve not created an existing API then select Create a new API
    • We’ll select HTTP API from here
    • I’m not going to create JWT authorizer, so for Security for now, select Open
    • Click the Add button
  • From the Configuration tab you’ll see an API endpoint, in your browser paste the endpoint url and add the query string so it looks a bit like this

    https://end-point.amazonaws.com/default/echo?text=Scooby%20Doo
    

Note: Don’t forget to delete your functions when you’ve finished testing OR use them to create your own code

Google Cloud Functions

From the Google Cloud dashboard

  • Type Cloud Functions into the search box
  • From the Functions page, click Create Function
  • If the Enable required APIs popup appears, you’ll need to click ENABLE to ensure all APIs are enabled

From the Configuration page

  • Set the Environment if required; mine defaulted to 2nd gen, which is the latest environment
  • Supply the function name, mine’s again echo
  • Set the region to one near you
  • The default trigger is HTTPS, so we won’t need to change this
  • Just to save on having to set up authentication, let’s choose Allow unauthenticated invocations, i.e. making a public API
  • Let’s also copy the URL for now, which will be something like
    https://your-project.cloudfunctions.net/echo

  • Click the Next button

This defaulted to creating a Node.js runtime. Let’s change our code to the familiar echo code

  • The code should look like the following
    const functions = require('@google-cloud/functions-framework');

    functions.http('helloHttp', (req, res) => {
      res.send(`Echo: ${req.query.text || req.body.text}`);
    });

  • Click the Test button and GCP will create the container etc.

Once everything is deployed, change the test payload to

{
  "text": "Scooby Doo"
}

and click Run Test. If all went well you’ll see the Echo response in the GCF Testing tab.

Finally, when ready, click Deploy and then we can test our Cloud Function via the browser, using the previously copied URL, like this

https://your-project.cloudfunctions.net/echo?text=Scooby%20Doo

Note: Don’t forget to delete your function(s) when you’ve finished testing OR use them to create your own code

Azure Functions

Azure Functions (like AWS Lambdas and GCP Cloud Functions) allow us to write serverless code literally just as functions, i.e. no need to fire up a web application or VM. Of course, just like Azure containers, there is a server component, but we, the developers, need not concern ourselves with handling its configuration etc.

Azure functions will be spun up as and when required, meaning we will only be charged when they’re used. The downside of this is they have to spin up from a “cold” state. In other words the first person to hit your function will likely incur a performance hit whilst the function is started then invoked.

The other thing to remember is that Azure Functions are stateless. You might store state with a DB like Cosmos DB, but essentially a function is invoked, does something, then after a timeout period it’s shut back down.

Let’s create an example function and see how things work…

  • Create a new Azure Functions project
  • When you get to the options for the Function, select Http trigger and select the Anonymous Authorization level
  • Complete the wizard by clicking the Create button

The Authorization level allows the function to be triggered without providing a key. The HTTP trigger, as it sounds, means the function is triggered by an HTTP request.

The following is basically the code that’s created from the Azure Function template

public static class ExampleFunction
{
  [FunctionName("Example")]
  public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
  {
    log.LogInformation("HTTP trigger function processed a request.");

    string name = req.Query["name"];

    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    var responseMessage = string.IsNullOrEmpty(name) 
      ? "Pass a name in the query string or in the request body for a personalized response."
            : $"Hello, {name}. This HTTP triggered function executed successfully.";

    return new OkObjectResult(responseMessage);
  }
}

We can actually run this and debug via Visual Studio in the normal way. We’ll get a URL supplied, something like this http://localhost:7071/api/Example to access our function.

As you can see from the above code, we’ll get passed an ILogger and an HttpRequest. From this we can get query parameters, so this URL above would be used like this http://localhost:7071/api/Example?name=PutridParrot

Of course, the whole purpose of the Azure Function is for it to run on Azure. To publish it…

  • From Visual Studio, right mouse click on the project and select Publish
  • For the target, select Azure. Click Next
  • Select Azure Function App (Windows) or Linux if you prefer. Click Next again
  • Either select a Function instance if one already exists, or create a new instance from this wizard page

If you’re creating a new instance, select the resource group etc. as usual and then click Create when ready.

Note: I chose Consumption plan, which is the default when creating an Azure Functions instance. This is basically a “pay only for executions of your functions app”, so should be the cheapest plan.

The next step is to Finish the publish process. If all went well you’ll see everything configured and you can close the Publish dialog.

From the Azure dashboard you can simply type Function App into the search textbox and you should see the published function with a status of Running. If you click on the function name it will show you the current status of the function as well as its URL, which we can access as we did with localhost, i.e.

https://myfunctionssomewhere.azurewebsites.net/api/Example?name=PutridParrot

Blazor and the GetFromJsonAsync exception TypeError: Failed to Fetch

I have an Azure-hosted web API. I also have a simple Blazor standalone application that’s meant to call the API to get a list of categories to display, i.e. the Blazor app is meant to call the Azure web API, fetch the data and display it – should be easy enough, right?

The web API can easily be accessed via a web browser or a console app using the .NET HttpClient, but the Blazor code using the following simply kept throwing an exception with the cryptic message “TypeError: Failed to Fetch”

@inject HttpClient Http

// Blazor and other code

protected override async Task OnInitializedAsync()
{
   try
   {
      _categories = await Http.GetFromJsonAsync<string[]>("categories");
   }
   catch (Exception e)
   {
      Debug.WriteLine(e);
   }
}

What was happening is that I was actually getting a CORS error, sadly not really reported via the exception, so not exactly obvious.

If you get this error interacting with your web API via Blazor, then go to the Azure dashboard. I’m running my web API as a Container App, so I type CORS into the left search bar of the resource (in my case a Container App); you should see the CORS subsection under the Settings section.

Add * to the Allowed Origins and click apply.

Now your Blazor app should be able to interact with the Azure web api app.
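
The same change can, I believe, also be scripted via the containerapp CLI extension (verify the command against the current docs; the names in braces are placeholders). Note that * is fine for getting things working, but you’ll likely want to restrict the allowed origins to your Blazor app’s domain in production.

```shell
# Allow all origins on the Container App's ingress (testing only; lock down later)
az containerapp ingress cors enable \
  --name {CONTAINER_APP_NAME} \
  --resource-group {RESOURCE_GROUP} \
  --allowed-origins "*"
```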