Chaos Engineering has been around for a while, having been popularized by Netflix during their migration to the cloud. However, despite their best efforts to open source their tooling, a properly secure and reliable setup was complicated enough to put off most people.

Fast forward to re:Invent 2020, where AWS announced a new managed chaos engineering service called AWS Fault Injection Simulator in limited preview. After a couple of months of limited access, the service is now GA (us-east-1 only at the time of this post) and today’s post is about getting started with it.

The Setup

There are a number of actions the service can perform (stop/terminate instances, throttle APIs, etc) against a number of different targets (EC2, ECS, RDS with more to come). For this entry, we’ll keep it simple and focus on a single experiment: terminating a production EC2 instance. In this particular case, I’ll be using the sample NodeJS application managed by Elastic Beanstalk.

The Application

As mentioned before, I’m just using the sample NodeJS application that Elastic Beanstalk offers you to quickly get started. However, I wanted to highlight some of the configuration choices that I made to my environment.

The first bit of configuration (and one to pay attention to) is around the high availability of my environment. You’ll notice that while it is load balanced and can scale up to 4 instances, the minimum has actually been set at 1.

You can also see the resources the service created for us, which in this case is one EC2 instance to which I’ve applied a resource tag at the application level. The tag is of the form chaos:ready, which is descriptive enough for me to understand which instances I want FIS to target during its experiments. You could choose whatever key-value pair you like, or not have one at all.
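For reference, applying that kind of tag from the CLI looks something like this (the instance ID below is just a placeholder):

> aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=chaos,Value=ready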

Finally, here’s what the sample application looks like; it also serves as a quick way to see that our environment is up and running.

Experiment Time

From the FIS homepage, you’ll see your option is to create a new experiment template so go ahead and hit that button.

Disclaimer: FIS will execute whatever actions you define against your resources. The service doesn’t produce fake metrics or wizardry to simulate how a potential disruption affects your system. The service will indeed terminate your instances if that’s the action you have chosen. You will be provided with a number of warning signs along the way but it’s better to be safe than sorry.

Think of the template as the definition for your experiments, the place in which you can specify actions, targets and alarms on top of the usual name, role (the role requires a trust relationship on ‘’) and tags that we’re used to from other AWS services. As previously mentioned, today’s experiment will only perform a terminate instance action.

When creating our action, we’re asked to provide a name for it as well as an action type from a predefined list. Once you’ve selected your action type, the Target dropdown will appear with a prepopulated value created for you. The last option is something called “Start after”: in cases where a template has multiple actions, you can choose to run them in parallel or in sequence. Right now, it can be ignored given we’re only going for the one action.

Now, let’s edit the target FIS created for us. I’ll start by updating the name to something a bit more descriptive; the Resource Type can stay as is because we are indeed targeting EC2 instances. Now comes the fun part, and arguably the area on which you need to focus the most: how we are going to target these resources.

We see the method selected by default is using a resource ID. For our particular example it might look like enough, and it indeed could be for a one-off execution. It is true we’re only running one EC2 instance, but we would have to save the template with a fixed ID, which means we’re not really in a position to reuse the template: if we succeed and actually terminate the instance, that particular ID will be lost.

So let’s use tags and filters instead. As soon as we select that method, a couple of “resource” options will appear. The first one is tags, and as you can imagine the experiment will only run against resources with the specified tags. This is the place in which I’ll use that chaos:ready tag from before.

The second option is called filters and I highly recommend following the documentation link, as this is the area where targets become truly powerful. For the sake of simplicity (this post is already too long) but not to leave you hanging, I’ll create one that targets only EC2 instances that are in a running state.
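To give you an idea of where this all ends up, here’s a sketch of how such a target might look inside the template’s JSON (the target name is made up; the resource type and filter path are the documented values for EC2 instances):

    "targets": {
        "chaos-ready-instances": {
            "resourceType": "aws:ec2:instance",
            "resourceTags": {
                "chaos": "ready"
            },
            "filters": [
                {
                    "path": "State.Name",
                    "values": ["running"]
                }
            ],
            "selectionMode": "ALL"
        }
    }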

The Stop Condition section provides you with the necessary safeguard to stop the experiment if certain criteria are met. It is optional and I won’t be using it now, but I’d suggest always having one for serious experiments.

Go ahead and finish the creation of the template. The service will make sure you’re sure about it with a nice warning sign.

I’m now ready to start the template, which will in turn create an experiment instance. The start process comes with the same warning as the creation one, and it should run successfully.

Now that the experiment has finished, let’s have a look at the chaos it caused.

My beanstalk URL now returns an error, which means the underlying EC2 instance has been successfully terminated.

We can confirm our suspicions by looking at the health of our environment, as well as the specific time at which it happened by looking at the metrics.

Beanstalk will automatically spin up a new instance and your environment will be back to healthy in a minute or two, but it is a good reminder that even if you’re using a managed service, the service can only do what you tell it to do. In our case, because our minimum configuration was one instance, terminating it meant a complete disruption of our application.

In our follow up post, we’ll look at a way of mitigating that while still being able to run chaos experiments on our environments.

For some time now, Azure Cognitive Services has offered a “Text Analytics” feature, which can be used for finding topics within a piece of text, or even sentiment analysis to see if the overall sentiment of the text was positive or negative.

In early 2020, Azure released an additional feature to this API called “Opinion Mining”. Opinion mining is almost a cross between topic discovery and sentiment analysis. Instead of finding the overall sentiment of a piece of text, it finds the sentiment of individual topics. For example, in a piece of text such as :

The food here was terrible!

We would expect it to understand that not only is this a negative sentence, but specifically, we are talking negatively about the food. Being able to understand not just whether something is overall positive or negative, but also what is being talked about in that light can be invaluable in machine learning scenarios.

So let’s jump right in!

Setting Up Azure Cognitive Services For Testing

For the purposes of this article, we’re not going to get into the individual SDKs for Python, C#, Java, or any other language (although these are available). Instead, we’re just going to use a simple Postman example of calling the API, with our key as a header, and retrieving results. This should be enough for us to see how the API works, and what sort of results we can get from it.

The first thing we need to do is head to our Cognitive Services account in the Azure Portal (Or go ahead and make one if you need to, the first 5000 requests are free so there is no immediate cost to creating the account!).

Under Keys and Endpoint, copy out your endpoint and one of your keys from this screen :

For our test, we are going to call a POST URL in the format of :

https://ABC.cognitiveservices.azure.com/text/analytics/v3.1/sentiment?opinionMining=true

Where ABC is replaced with your cognitive endpoint taken from the above screenshot (the API version segment in the path may differ depending on when you read this).

Additionally, we will be sending a header of “Ocp-Apim-Subscription-Key”, which will be our key, again taken from the screenshot above. In Postman it will end up looking like so :

The body of our request will always look like the following :

  "documents": [
    "language": "en",
    "id": "1",
    "text": "Horrible location as it's right next to a construction site. But the food was amazing! Really friendly waiter too!"

Documents is actually an array because you can send multiple documents at once to the API to have them all mined at once. You still pay per document, so it isn’t a cost saver, but sending multiple documents at once can save time over sending them one by one.
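If Postman isn’t your thing, the same call sketched out in curl would look roughly like this (the endpoint, key and API version are placeholders to swap for your own):

> curl -X POST "https://ABC.cognitiveservices.azure.com/text/analytics/v3.1/sentiment?opinionMining=true" \
    -H "Ocp-Apim-Subscription-Key: your-key-here" \
    -H "Content-Type: application/json" \
    -d '{"documents":[{"language":"en","id":"1","text":"The food here was terrible!"}]}'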

Now we’re all set up, let’s get mining!

Testing Opinion Mining Out

First let’s try out a typical restaurant review :

Horrible location as it’s right next to a construction site. But the food was amazing! Really friendly waiter too!

So what we are looking for here is that it identifies that the location is negative, but that the food and waiter were positive. And what do you know (Note that the full API response is much more verbose, I’m just cutting it down to see what we need!)

  "sentiment": "negative",
  "confidenceScores": {
    "positive": 0.0,
    "negative": 1.0
  "text": "location"
  "sentiment": "positive",
  "confidenceScores": {
    "positive": 1.0,
    "negative": 0.0
  "text": "food"
  "sentiment": "positive",
  "confidenceScores": {
    "positive": 1.0,
    "negative": 0.0
  "text": "waiter"

So as we can see it’s actually identified the noun that we are trying to describe, and whether our opinion was positive or negative.

Let’s try something slightly harder. The opinion mining above spotted the adjectives “Horrible” and “Amazing”, which should be fairly easy to pick up. But how about this sentence :

I felt the food was bland. The music was also very loud so we couldn’t hear anything anyone said.

So again we are leaving a review, but this time we are saying that the food is “bland” and the music was “loud”. These are very specific to the sentence and aren’t common adjectives you might use to describe something. But again :

  "sentiment": "negative",
  "confidenceScores": {
    "positive": 0.01,
    "negative": 0.99
  "text": "food"
  "sentiment": "negative",
  "confidenceScores": {
    "positive": 0.04,
    "negative": 0.96
  "text": "music"

And more importantly we see that it even picked up that the food being bland and the music being loud is why the opinion is negative.

"opinions": [
    "sentiment": "negative",
    "confidenceScores": {
      "positive": 0.01,
      "negative": 0.99
    "text": "bland",

Really impressive stuff! Does that mean it always gets it right? Absolutely not. Using sentences with colloquial terms (for example, “The food here is the bee’s knees!”) just returns neutral scores, but for out-of-the-box opinion mining with no training required at all (and very little developer legwork), opinion mining with Azure Cognitive Services is pretty impressive!

Not long ago, I wrote about “Creating MultiPart Uploads on S3” and the focus of the post was on the happy path, without covering failed or aborted uploads. It was already long enough as it was, so I decided to write a separate entry to discuss in detail how to clean up your buckets so you don’t incur unnecessary storage costs.

What’s this all about?

Let’s review the basics: S3 allows you to store objects in exchange for a storage fee. Simple enough; however, when we think of objects in the context of S3, most people picture the output of running a list-objects (or ls) operation, or what they see looking at their buckets through the console (which performs the same API call). In those situations, parts of an object created through a multipart upload won’t show up, but the service is still storing them for you, which means you are paying for that storage.

If none of this surprises you, then this post might not be for you. However, if you’ve been doing multipart uploads for a while or you’re just new to them, I’d recommend reading on as you might find you could optimize your storage costs.

Let’s pick up where we left off

I’ll continue with the setup from our previous post, a bucket with a single 100MB file.

This is what list-objects has to say about it.

    "Contents": [
            "Key": "large_file",
            "LastModified": "",
            "ETag": "",
            "Size": 104857600,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "",
                "ID": ""

So now, I’ll create a new multipart upload (I’ll be reusing the same file) but to simulate failure or an aborted operation, only the first part will be uploaded.

Let’s have a look at what list-objects has to say about it now.

    "Contents": [
            "Key": "large_file",
            "LastModified": "",
            "ETag": "",
            "Size": 104857600,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "",
                "ID": ""

It is the same output as before, however if we list-parts for this particular upload we can see how we’re using an extra 25MB from our first part.

> aws s3api list-parts --bucket your-bucket-name --key your_large_file --upload-id UploadId

    "Parts": [
            "PartNumber": 1,
            "LastModified": "",
            "ETag": "",
            "Size": 26214400
    "Initiator": {
        "ID": "",
        "DisplayName": ""
    "Owner": {
        "DisplayName": "",
        "ID": ""
    "StorageClass": "STANDARD"

As far as I’m aware, the only native way (as in not wrangling scripts or 3rd party tools) to get the entire size of the bucket is through CloudWatch metrics. You can see how the total size of my bucket is correctly represented at 125MB.

So where do we go from here? Deleting unneeded parts sounds like the path forward.

S3 provides you with an API to abort multipart uploads and this is probably the go-to approach when you know an upload failed and have access to the required information to abort it.

The command to execute in this situation looks something like this

> aws s3api abort-multipart-upload --bucket your-bucket-name --key your_large_file --upload-id UploadId

However, this is not a very scalable way of controlling orphaned parts across multiple uploads and buckets. You could craft a couple of scripts (using the list-multipart-uploads command) that run on a schedule to check for those files, or you can set up a lifecycle policy on your buckets to clean up failed uploads.
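As a rough sketch of the scripted route (the bucket name is a placeholder, and you’d want to dry run this before pointing it at a real account), something like this would abort every in-progress upload in a bucket:

> aws s3api list-multipart-uploads --bucket your-bucket-name --query 'Uploads[].[Key,UploadId]' --output text | while read -r key upload_id; do aws s3api abort-multipart-upload --bucket your-bucket-name --key "$key" --upload-id "$upload_id"; done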

Luckily for us, S3 makes this easy to set up. Head over to the management settings for your bucket and create a new Lifecycle Rule.

First of all, give it a name and then define what the scope of the policy will be. Your options are to apply it to the entire bucket or to a specific prefix (for example “/uploads”). In my case, I’ll set it up across the entire bucket and the service will rightfully let me know about it.

Next up is defining what we want this rule to do. As you can see, there’s already a predefined option for incomplete multipart uploads.


And finally, configure the parameters for this action. Remember, S3 doesn’t know if your upload failed, which is why the wording (and behavior!) is around incomplete uploads. As such, it is entirely up to you how soon after they were created you want parts to be deleted.
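If you’d rather not click through the console, the same rule can be sketched out with the CLI like so (the seven days is an arbitrary choice of mine):

> aws s3api put-bucket-lifecycle-configuration --bucket your-bucket-name --lifecycle-configuration '{"Rules":[{"ID":"abort-incomplete-uploads","Status":"Enabled","Filter":{"Prefix":""},"AbortIncompleteMultipartUpload":{"DaysAfterInitiation":7}}]}'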

A very common query I get when storing files in Azure is “Why are we using Blob Storage instead of File Storage? After all, aren’t we storing files?”. And it’s actually a pretty good question. Luckily, it has a very simple answer.

When To Use Azure File Storage

Azure File Storage is specifically used when storing files to be used like a managed file share. For example, if you are currently using a network share within your company on an old PC sitting under someone’s desk, you can move these files to the cloud using Azure File Storage, and have it act exactly the same as your current networked file share. Importantly, it supports both “Server Message Block (SMB)” and “Network File System (NFS)” protocols, so can be used across Windows, Mac and Linux operating systems.

While a company wide network share is obviously a good use case, another very common example is when you have an existing application (Such as a Windows Service) that you simply lift and shift onto a VM in Azure. If this application requires the use of a network share, instead of having to create a tunnel back into your office network, you can lift and shift the network share into Azure File Storage. Meaning minimal code rewrites, and making it a true lift and shift approach.

When To Use Azure Blob Storage

Azure Blob Storage is best used when storing unstructured or binary data in the cloud, and you don’t need access to it via Windows Explorer or other SMB protocols. Realistically, this means if you are storing files for your application, that are then read back via that same application, Azure Blob Storage will suffice.

It should be noted that there are Windows applications and addons that will make a blob storage account act like a file share, but it’s not recommended as some features that are available on Azure File Storage are not available on Blob and vice versa. If your main use case for moving files into Azure is to have them act as a network file share, you should use Azure File Storage instead of Blob.

File vs Blob Pricing

The other very important thing to note is that there are pricing differences between Azure File Storage and Azure Blob Storage. Sometimes it’s a matter of cents per GB, but often the transaction costs are vastly different on the File Storage side. For example, write operations will cost you 30% more on Azure File Storage.

While it does pay to check pricing, your use case should dictate which option you go for rather than any cost difference.

When you’re using S3, an object store with an unlimited volume of data and a maximum object size of up to 5TB (the maximum for a single PUT request is 5GB), you might be tempted to start uploading some pretty big files.

So today’s focus is on making use of the multipart upload capabilities of S3 to reduce the amount of time it takes for a large object to land in your buckets.

The “managed” way

The AWS CLI has a number of commands that will help you upload those large files by automatically making use of multipart uploads, so chances are that if you have used the CLI to upload documents into your buckets you have come across them. Those commands are cp, mv and sync and they can be used as follows.

> aws s3 cp your_large_file s3://your-bucket/

> aws s3 mv your_large_file s3://your-bucket/

> aws s3 sync your_large_file s3://your-bucket/

The differences between the three are out of scope for this post; however, I’ll finish by saying that you can still change their configuration in order to make better use of your bandwidth. You can set the new configuration values through the CLI or directly in your AWS profile. A list of all possible configuration values can be found here.
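For example, the multipart related values can be tweaked like so (the numbers are just illustrative, tune them to your own bandwidth):

> aws configure set default.s3.multipart_threshold 64MB

> aws configure set default.s3.multipart_chunksize 16MB

> aws configure set default.s3.max_concurrent_requests 20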

The “unmanaged” way

AWS recommends using those commands when possible (and with good reason!) but there are cases in which they don’t fit the bill and you have to do a bit of plumbing yourself. Luckily you are not left alone, and the AWS CLI still provides you with the necessary commands to achieve the same result.

So let’s go ahead and upload a large file in parts into our bucket. In my case, I’ll create a 100MB test file from the command line like this

> truncate -s 100M large_file

Now, I’ll use the split command to get four 25MB parts. Split is available on both Linux and OSX (however, the OSX version might be out of date and you might need to install the GNU core utilities).

> split -b 25M large_file

If you list the files in your directory, it should look something like this

> ls

large_file  xaa  xab  xac  xad

We are now ready to start interacting with S3!

The first step in the process is to actually create a multipart upload

> aws s3api create-multipart-upload --bucket your-bucket-name --key your_file_name

The response from the API contains only three values, two of which were provided by you. The last value is the UploadId and, as you can imagine, this will be our reference to this multipart upload operation, so go ahead and save it.
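For reference, the response should look roughly like this (with the UploadId value elided):

    {
        "Bucket": "your-bucket-name",
        "Key": "your_file_name",
        "UploadId": ""
    }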

It is time to start uploading our parts. The following is the command to upload a single part, which you’ll have to repeat N times depending on how many parts you’ve split your file into (in my case N=4, and the command below is for the first part); the values for part-number and body will need to be updated accordingly for every part you upload.

> aws s3api upload-part --bucket your-bucket-name --key your_file_name --part-number 1 --body xaa --upload-id UploadId

The ETag value that each upload-part returns will be used to complete the upload.

Once all parts are uploaded, you need to instruct S3 that the upload is complete. Remember, S3 has no knowledge of how many parts there should be or what their references are, so passing that information back to it will complete the process. In order to do so, we need to compile a JSON array of all our parts and their respective ETag values.

You can use the ETag values that you have been collecting or retrieve them again by listing all parts in the upload

> aws s3api list-parts --bucket your-bucket-name --key your_file_name --upload-id UploadId

Save the output of the “Parts” array into a new file (I’ll call mine parts.json) and make sure not to include the LastModified and Size keys in the final file. Once you’re done, the file should look something like this; remember that in my case I was only dealing with four parts.

  "Parts": [
      "PartNumber": 1,
      "ETag": ""
      "PartNumber": 2,
      "ETag": ""
      "PartNumber": 3,
      "ETag": ""
      "PartNumber": 4,
      "ETag": ""

Now let’s use that to complete the upload with one final API call.

> aws s3api complete-multipart-upload --multipart-upload file://parts.json --bucket your-bucket-name --key your_file_name --upload-id UploadId

And we’re done! The response will contain the location of your newly uploaded file. We can call the list-objects API or check the console if we want to double check our file is there.
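For a quick check from the CLI, a head-object call (assuming the same placeholder names as before) will confirm the object exists along with its full size:

> aws s3api head-object --bucket your-bucket-name --key your_file_name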

While many things in Azure have straightforward “spin this up, pay this per hour” type pricing models, Azure SQL is not one of them! While it does have the option of paying per hour, per database, per machine size, that’s only one of many ways to use Azure SQL. So I thought it would be worth talking through how pricing works with Azure SQL, and hopefully make it a little simpler to find the right option for you.

Before we get started, I just want to note that when I say Azure SQL, I am referring specifically to Microsoft SQL Server in the cloud. Things like Postgres on Azure carry their own names (e.g. “Azure Database for PostgreSQL”), but if you see Azure SQL on its lonesome, it’s referring to SQL Server. Easy!

With that out of the way, let’s get started!

Single Database vs Elastic Pool

The first decision you are going to have to make is whether you are going to use a Single Database (Or many Single Databases), or use an Elastic Pool.

Single Database is exactly how it sounds: it’s a price per single database you spin up. It’s important to note this is *not* a single server, but a single database. So if your application uses two databases, for example one for transactional data and another just used for logging, then you will pay for two different Azure SQL databases. The benefit however is that each database has its own resources dedicated to it, and the databases are therefore isolated from one another. A downside is that if your application uses multiple databases (for example a single tenant SAAS application that uses a database per customer), then your costs are going to skyrocket.

Elastic Pools are a collection of SQL databases that share computing power; you pay for a “pool” of resources. Elastic Pools do start with higher pricing than Single Databases (e.g. the minimum spend is much larger than that of a single database), but if you have a data model that requires spinning up multiple databases (and possibly spinning them down), then Elastic Pools are for you. I would note that Elastic Pools also have other factors to consider (e.g. max DTU sizes), and the shared resources can sometimes be more of a hindrance than a help. For that reason, I only recommend using Elastic Pools when you truly do have a “pool” of databases, like that of a single tenant SAAS application, and not as a way to save a few dollars on hosting costs for your 3 databases in production.

DTU Pricing Model

DTU stands for “Database Transaction Unit”. It takes measures of CPU, Memory and IO and combines them into a single metric. That makes it hard to talk about, because the first question I usually get fired back when talking about DTUs is “So how many CPUs is that? How much memory?”. And the answer is… we don’t know. Or more so, because it’s a blended metric, 1 DTU could be comprised of almost all memory and very little CPU, or completely vice versa!

That’s actually one of the benefits of DTU. It’s a single “processing power” metric without having to juggle exact memory or CPU sizes. If you’ve ever had to grab a VM that has a huge amount of memory, but very little CPU, and it’s left you saying “Well.. I just want to increase the CPU, but not the memory, but the next VM class up doubles the memory!”, then that’s why DTUs are in some ways very powerful.

However, clearly a blended metric hides exactly what you are purchasing and for some people that’s a deal breaker. It makes it hard to understand initial provisioning sizes because at first, you will have nothing to compare it to. However, vertical scaling is absolutely no issue with Azure SQL, and so starting low and working your way up is always an option.

vCore Pricing Model

As an alternative to DTU pricing, you can still purchase Azure SQL using the vCore pricing model. vCore is your standard Azure SQL on hardware pricing, where you know exactly how many CPU cores and how much memory you are being given. It’s great if you know exactly the computing power you need, or prefer the transparency of resourcing over the DTU pricing model.

Under vCore, there are actually two additional options. There is a price-per-core model, which is great for unpredictable workloads that may need to scale multiple times per day. Under this model, you simply pay per CPU core, per hour. And that’s it!

As an alternative, there is a “standard” set of machines available that are essentially built into standard “tier” sizes, e.g. 2 core 10GB and 4 core 20GB vCore machines. These are great if you know the computing power you need and it won’t need to scale vertically that often.

DTU vs vCore

Unfortunately, after reading all of this you may come to the conclusion that you want to use vCore for its transparency, so that you know exactly what you’re getting. And Microsoft knows it; that’s why they’ve put the minimum provisioned vCore Azure SQL prices at around $400 USD per month (depending on region)! There is no lightweight entry into the vCore pricing model, it’s almost an all or nothing approach.

On the DTU side of things, pricing can start for as little as $15 USD per month (depending on region), and the price step ups are much more granular, making it a much more viable solution for small start-ups and small businesses that just need a single database in the cloud.

Another option is using a DTU pricing model for Dev/Test workloads, and a vCore model for Production. Again, this works great, but only if you are happy with the minimum spend per month for (possibly) far more computing power than you need.

In the end, DTU vs vCore is less about pricing models and how resources are allocated, and more about the minimum level of pricing. In the majority of cases, DTU pricing is the way to go simply so you can start smaller, and ramp up over time.

Following the theme of our previous post, “Static Website Hosting With CloudFormation“, I thought that before moving onto a radically different topic we could keep exploring alternative ways of reaching similar outcomes.

To no one’s surprise, there are always multiple ways of doing the same thing on AWS. Options typically vary on pricing, functionality, ease of use, management overhead and more, so today we will be having a look at Amplify Console. The Amplify brand is all geared towards making web and mobile development as easy and streamlined as possible (the “umbrella” contains client side libraries, backend provisioning, hosting, CI/CD and a CLI). Amplify Console is their take on hosting and CI/CD, in an easier and significantly less “hands-on” way compared to rolling out your own deployment pipelines, buckets, SSL, etc.

The Application

Feel free to follow this step by step or use your own application. In my case, I’ll be using the calculator app from the community built React applications that you can find on their site. The majority of the instructions apply to any application, with some exceptions needed for apps built using create-react-app.

Getting Started

Head over to the Amplify homepage, create a new app and choose “Host web app”.

We’re now presented with a choice of source code provider. In my case I’ll be using CodeCommit, but you can choose from the most common ones, as well as manually uploading your application (which, unless it’s for a quick prototype, kind of defeats the purpose). Depending on your provider of choice, you’ll need to accept and enable the various permissions the service needs to read your repository, as that’ll be required in order to detect changes and pull the code.


The first bit of magic comes in after hitting the Next button. You might find there’s a slight delay, and that’s because Amplify is looking into your repository in order to find build settings for your application, or to auto create them for you. Amplify is able to detect and automatically create settings for most common setups. It’ll be able to see if you use NPM or YARN, React or Vue, and even if you’re using a static site generator such as Hugo or Gatsby. You could be in luck and the auto generated settings won’t need any manual intervention, but even if you need to change a value here or there, what’s presented to you will get you at least 80% of the way there.


After that, you’re presented with everything up to that point and you should be able to save your project. The service will bootstrap your application and start a new deployment, providing you with a unique URL (with SSL enabled) to access your site. There are also a number of settings on the left hand side menu such as domain management, notifications, access control (i.e. setting a password for your site), monitoring, redirects and more. Let me know if you would like to dive deeper into those in future posts.

And that’s it! Your site should be up and running, a deployment pipeline is set up for you behind the scenes so any future changes that you make to the connected branch will trigger a new deployment.


A note on create-react-app

If you followed along with the same application or are using create-react-app for yours, you are going to see that things won’t work out of the box. I’ll list below the changes I made in order to get it running; however, you could also make some adjustments within the actual build process to move files across.


The index.html is located under the “public” folder as opposed to the root of the repository, so a redirect rule was needed there.


Build Settings

Not much needed to change here except where our base directory is located. The create-react-app build process saves all its files under the “build” folder, so I just needed to adjust the settings to reflect that difference.



Finally, create-react-app uses the “homepage” value in your package.json to work out where your app lives. All I had to do there was set it to an empty value (“”).
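In other words, the relevant bit of package.json ends up looking like this:

    {
      "homepage": ""
    }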

When using Azure API Manager, one of its benefits is the ability to respond to certain API requests without having to ping your backend for the response, essentially short circuiting API responses and offloading that compute power to a simple API management layer. In this case, we are specifically talking about being able to respond to a browser’s OPTIONS request (e.g. when using CORS) without your entire application pipeline kicking into gear.

Now, like most things, there are multiple ways to achieve the same result. So we are first going to talk about how we can get API Manager to respond to the request without involving your backend API at all, but later we will also show how you can allow a pass through to your API if you prefer to manage CORS configuration inside your application code rather than in API Manager.

Using A CORS Policy

If you’re looking for the most hands off approach possible, your best bet is to use a CORS Inbound Policy within APIM. Previously, you had to manage most of this via XML copy and paste, but now there is an easy configuration wizard to follow. Inside Azure API Manager, select your API and All Operations, then go ahead and select “Add Policy” on Inbound processing.

On the next screen, find the option to “Allow cross-origin resource sharing (CORS)” as shown below

By default, the only option presented to you will be “Allowed origins”, which defaults to *. I highly recommend instead selecting the “Full” options and completing them. If you use Basic, by default it will only allow GET and POST requests. This might be OK for some APIs, but modern applications generally make use of things like PUT, DELETE and PATCH.

After saving the configuration on this screen, your API manager will actually “short circuit” all OPTIONS requests and respond with these options defined here. Meaning that your backend service never has to put in computing power for trivial requests again!

Under the hood, using this wizard just adds a cors policy to the inbound policy XML. A sketch of what the generated policy can look like is below (this mirrors the Full options described above; your generated values will differ) :
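<cors allow-credentials="false">
    <allowed-origins>
        <origin>*</origin>
    </allowed-origins>
    <allowed-methods>
        <method>GET</method>
        <method>POST</method>
        <method>PUT</method>
        <method>DELETE</method>
        <method>PATCH</method>
    </allowed-methods>
    <allowed-headers>
        <header>*</header>
    </allowed-headers>
</cors>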


You may find guides around Azure API Manager that still refer to pasting in this XML manually via the Azure Portal, and it’s still a valid way to do it, but for the most part Azure has added a tonne of quick wizards that essentially do the same thing without you having to do janky notepad copy and pastes all over the place.

Using A Wildcard OPTIONS Request

This is actually the default when you add a backend service such as an Azure App Service to APIM via the wizard, instead of mapping each endpoint manually. When you do this, it creates a wildcard OPTIONS operation that does nothing but pass every request through to your backend.

If this wildcard operation doesn’t show for you, simply create a new operation for OPTIONS that has a wildcard URL as depicted above.

After doing this, it will be up to your application to respond to OPTIONS requests. This means that your application is now getting hit when APIM could take the load off for you, but it does mean that you have more fine grained control if you prefer to manage CORS from within your existing application code rather than in Azure API Manager.

Map Individual OPTIONS Requests

Instead of using a wildcard operation inside APIM, you can of course map each endpoint individually. In some cases this may be more effort than it’s worth, but if you want complete fine grained control over which requests are forwarded to your application, this is an option.

I personally try to avoid this, as it essentially means duplicating each request to add an OPTIONS operation. If you are looking at doing this for security reasons, then in my opinion the use of a CORS inbound policy, with the attack surface area handled by Azure API Manager, is a much better option.

Static site generators such as Hugo, Gatsby and Jekyll, as well as the front-end practice of decoupling presentation and APIs, have pushed static web hosting into primetime. One of the many benefits of this is the ability to very easily deploy these sites on simple object stores without the need to provision virtual machines.

Enabling Amazon S3 Static Web Hosting

Amazon S3 is one of AWS’ oldest services and it has been able to host static websites at the click of a button for about as long as it has been around.

I’ll show you how to enable static web hosting on an existing bucket; the complexity of doing this at creation time is the same, but the UI might look slightly different. Head over to the bucket on which you want to enable static web hosting, select the Properties tab and scroll to the bottom of the screen. By default, this functionality is disabled so you should see something like this.


Go ahead and click the Edit button. Here you will be presented with a number of very straightforward options, such as whether you want to enable the functionality and what type of hosting you want. By default, S3 will look for an index.html file at the root of your bucket, but you can specify a different file in the Index document section. Pay attention to the information banner telling you that for this to work, not only do you need to enable the functionality, you also need to make the content in your bucket public.
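As an aside, the same can be done from the CLI with something like this (the bucket name is a placeholder, and you’d still need to make the content public):

> aws s3 website s3://your-bucket-name/ --index-document index.html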

You can also configure redirection rules, which we won’t cover now, so scroll to the bottom and save your changes.


If everything went according to plan, you should now see it enabled and ready to be used.


You can also achieve this programmatically with CloudFormation (or any other IaC provider). Here’s what the YAML version of a CloudFormation template would look like to achieve similar results.

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      WebsiteConfiguration:
        IndexDocument: index.html
    DeletionPolicy: Retain
  BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      PolicyDocument:
        Id: MyPolicy
        Version: 2012-10-17
        Statement:
          - Sid: PublicReadForGetBucketObjects
            Effect: Allow
            Principal: '*'
            Action: 's3:GetObject'
            Resource: !Join
              - ''
              - - 'arn:aws:s3:::'
                - !Ref S3Bucket
                - /*
      Bucket: !Ref S3Bucket
Outputs:
  WebsiteURL:
    Value: !GetAtt
      - S3Bucket
      - WebsiteURL
    Description: URL for website hosted on S3


A Few Things To Note

HTTPS

S3 doesn’t let you make HTTPS requests to your site (you will be greeted by a nice 403). If this is a pre-production environment that might be enough; however, the moment you start serving content to your customers you want to secure the traffic over the wire. You can get that functionality by pairing your newly hosted website on S3 with AWS’ CDN offering, CloudFront.

Domain Name

Two different endpoint formats are supported to access your content; let’s see if you can spot the difference.
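For example, for a bucket named your-bucket (a placeholder) sitting in us-east-1, the two formats would be:

http://your-bucket.s3-website-us-east-1.amazonaws.com
http://your-bucket.s3-website.us-east-1.amazonaws.com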

Don’t feel bad if you can’t. The only difference is the choice of “.” or “-” between s3-website and the region in which your bucket lives.

CNAME and Custom Domains

Both CNAME and custom domains are supported. For CNAME, if you have a registered domain, you can create a bucket with that domain as its name and then create a DNS CNAME entry that points to your-bucket.s3-website-[region-placeholder].amazonaws.com. On the other hand, if you want a completely different name for your site that doesn’t match your bucket’s name, you can use a domain registered on Route53 by creating an Alias with your bucket’s information.

There has been a boom in recent years of people falling in love with static site generators such as Hugo, Jekyll and Gatsby to name a few. Along with this, there has been a need for cheap static website hosting with little (to no) compute power required. After all, there’s no reason to spin up a huge VM to serve a couple of HTML pages!

Azure has entered the fray with their option titled “App Service Static Web Apps”, which is really just the ability to use blob storage to serve files as a website, under a custom domain. Officially, it’s still under preview at the time of writing, but it’s still a very solid service for hosting static websites of any size. On top of this, it can be a great place to host front ends of your single page application, built in Angular, React, or Vue, that can then call directly into an App Service or Function written in your favourite backend programming language.

Let’s get started!

Configuring App Service Static Web Apps

Head to your Azure Storage Account via the Azure Portal, and select the “Static Website” option under the left hand settings menu.

Then go ahead and enable your static website. You will be given a primary endpoint, and have the option to enter an index file name and an error file name. These are the files that blob storage should serve when someone hits the index of your site, and when a user requests a file that doesn’t exist (404), respectively.

For static websites, these will likely be index.html and something like 404.html. However, for front end frameworks such as Angular, you will want to direct the “error” page to the index as well, since all requests should be directed to the root page, irrespective of the URL, for Angular routing to handle the request. In this context, a 404 means no physical file exists in blob storage at a particular URL, not that your Angular application has no route for the URL.
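For what it’s worth, the same settings can be applied through the Azure CLI with something along these lines (the account name is a placeholder):

> az storage blob service-properties update --account-name yourstorageaccount --static-website --index-document index.html --404-document 404.html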

In the end, things should end up looking a bit like below :

Also notice the message about a container being created for you; this does exactly what it says on the tin! If we hit save and check the containers within the account, we will see Azure has created a container named $web. Content uploaded to it will automatically be served under our primary endpoint URL from the previous screen.

Deploying via Azure Devops/AZCopy

If you are using Azure Devops to deploy your code, then you already have an inbuilt way to send files to your blob storage account.

Consider the following YAML :

- task: AzureFileCopy@4
  inputs:
    sourcePath: '$(Pipeline.Workspace)/FileFolder/*'
    azureSubscription: '$(serviceConnectionNameHere)'
    destination: 'AzureBlob'
    storage: '$(storageAccountName)'
    containerName: '$web'

Nice and easy! If you are using the classic editor instead of YAML, the same “AzureFileCopy” task is available to you via the GUI.

It’s also extremely important to note that AZCopy by default will try to guess the mimetypes of your files. This is a good thing! However, in various versions of both AZCopy and the AzureFileCopy task, this can easily be accidentally overridden. For example, in some versions of the AzureFileCopy task, it will guess the mimetype *unless* you pass in additional flags, in which case it will automatically stop guessing mimetypes.

There is no one size fits all solution, as different versions of the task and of AZCopy behave differently. But if you are finding that your files are not being uploaded with the correct mimetypes, or that your files “download” in the browser instead of being rendered, it’s likely an issue of AZCopy not guessing mimetypes correctly.

Adding A Custom Domain

Adding a custom domain is actually very simple: take your favourite domain provider and create a CNAME record between your domain and your primary endpoint domain (the web.core.windows.net address from earlier). That’s it! It’s highly recommended to use Azure CDN in front of your static website, which can help with handling custom domains and HTTPS, but it’s not a hard requirement, and for many Dev/Test scenarios using the primary endpoint directly is more than sufficient.

What About ARM Templates?

At the time of writing, and probably owing to the fact that the service is in preview, users are currently unable to configure Static Web Applications via ARM templates. While the storage account itself can be created via a template, the actual configuration of a Static Web Application, including even turning the feature on, has to be done via the Azure Portal.