How to deploy a Resource via ARM Templates even if it's not documented?

Hi,

Recently, I tried to enable the Update Management service in Azure Automation via an ARM template. Not very complicated; many blog posts and examples can be found across the web.

But what I have not found anywhere is how to create an Update Deployment configuration via ARM template (what you see in the picture).

After about 10 minutes of searching the net, I decided to do it myself. But the question is: if it's undocumented, how can I guess the ARM template JSON definition?

Two questions arose:

1- Is this supported via ARM templates?

2- How to do it in case it’s supported?

Since no documentation can be found, the answer to question 1 is hard to get directly, but if I answer question 2 and test it, I can answer question 1 as well.

1- What are ARM templates after all?

Long story short, all ARM deployments are basically REST API calls, so any 'operation' is supported through the REST API. ARM templates are just the REST API content (the body of the REST call, for PUT/POST requests).

So when you write an ARM template, you create the body of the REST call and send it to Azure. Azure uses it under the hood to build the REST calls, and bingo.

NB: I'm not sure whether the template-to-REST conversion is done client side (PowerShell, CLI) or server side, but this is not important in our case.

So the KEY here is: if you can get the REST call body, you can 'probably' derive the ARM template JSON definition.

NB: Before continuing, first try to export the ARM template of your resource or resource group and check whether it contains the JSON definition of your resource. If you don't find it, go to section 2.

2- How to get the REST API body content?

There are many ways to get this:

  • You can use ARMClient
  • You can use the dev tools in the Azure Portal
  • You can use any tool that can unwrap an HTTP request and let you see inside (your favorite tool)

3- Getting the Body Content using the Azure Portal

In my example above, my goal is to create an Update Management deployment via ARM template, so I proceeded as follows:

1- Go to the Azure Portal and use the wizard to create the UMD, but before submitting the creation (Create button), do step 2

2- Open the Dev Tools (F12 in Chrome and Firefox) and start recording a network session

3- Submit the creation (Click Create in my case)

4- Stop the recording

5- Find the request that contains the REST call for your operation

6- Take the following information:

  • Resource Type: it's contained in the Request URL. In my example it's Microsoft.Automation/automationAccounts/softwareUpdateConfigurations
  • api-version
  • Body: it's contained in the Request Payload. You can use 'view source' to get the JSON definition directly. It contains the name of the resource and the properties

4- Authoring your ARM template

I will not go through ARM template authoring, but prepare at least a basic ARM template with parameters and resources (resources is actually the only required part).

A resource JSON definition in an ARM template usually has the following format:

{
	"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
	"contentVersion" : "1.0.0.0",
	"parameters" : {},
	"variables" : {},
	"resources" : [
		{
			"type" : "",
			"dependsOn" : [],
			"apiVersion" : "",
			"name" : "",
			"location" : "",
			"properties" : {}
		}
	]
}

Here is the view with JSONedit (http://tomeko.net/software/JSONedit/):

Now, you guessed it, fill in the blanks:
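To make it concrete, here is a minimal sketch of the filled-in skeleton, written as a PowerShell here-string so it can be saved to a file. The parameter names are my own invention; the apiVersion and the whole properties object are not guessed, they are pasted from the captured Request URL and Request Payload:

# Sketch only: the resource type comes from the captured Request URL,
# the apiVersion and "properties" come from the captured request (not invented here)
$template = @'
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "automationAccountName": { "type": "string" },
        "updateConfigName": { "type": "string" }
    },
    "resources": [
        {
            "type": "Microsoft.Automation/automationAccounts/softwareUpdateConfigurations",
            "apiVersion": "<api-version captured in the Request URL>",
            "name": "[concat(parameters('automationAccountName'), '/', parameters('updateConfigName'))]",
            "properties": {
                "_comment": "paste here the 'properties' object captured in the Request Payload"
            }
        }
    ]
}
'@
$template | Out-File -FilePath .\umd-template.json -Encoding utf8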

That's it, you have 95% of the ARM template; now adapt it to your context (add parameters, variables, concat, functions if needed…).

5- Deploying the ARM template

Now, you can deploy your ARM template and get the answer to question 1 (in fact, we are not yet sure that this is supported through ARM templates, even if it's supported via the REST API).
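For reference, a deployment sketch with the Az PowerShell module (the resource group, account and configuration names are placeholders):

New-AzResourceGroupDeployment -Name "umd-deployment" `
    -ResourceGroupName "myAutomationRG" `
    -TemplateFile .\umd-template.json `
    -automationAccountName "myAutomationAccount" `
    -updateConfigName "monthly-patching"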

I don't have the screenshot, but the deployment was successful, and I was able to author an ARM template even though no documentation is provided. This is great!

Introduction to Azure Private Endpoints

Azure Private Link and Private Endpoints have recently been announced in Public Preview after months of Private Preview and testing. Private Link/Endpoint is a huge step in Azure networking, as it makes it possible to consume any internet-facing public service privately (like PaaS services: Azure SQL, Azure Storage…), and it provides a unified way to expose and consume services between tenants, partners, or even within the same customer.

This post is an introduction to Private Endpoints.

1- Concept

There are two terms that should be memorized:

  • Private Link: see my next blog post
  • Private Endpoint: an association (pairing) of a supported resource (e.g. an Azure SQL server) and a subnet in your virtual network, so the resource can be assigned a private IP address

2- Example Scenario

2.1- Need

One of the easiest/most requested scenarios is the ability to reach an Azure SQL Database privately from a virtual network, without exposing it to the internet or using its public IP address.

The following picture shows a Virtual Machine (SQLClient) deployed to a VNET/subnet (VNETEastUS/VM), and an Azure SQL DB (azuresqlprivatedb01) deployed within an Azure SQL server (azuresqlprivate).

The goal is for the VM (SQLClient) to reach azuresqlprivate over a private connection, and not via the internet (we will not allow any IP to consume the Azure SQL server through the firewall): the dotted orange line is the goal.

2.2- Solution via Private Endpoints

The solution is to create a Private Endpoint that will expose the Azure SQL server via a private IP address on a subnet. The following picture shows a Private Endpoint (PE) that uses an IP address from the subnet Privatesubnet and is connected to azuresqlprivate.

2.3- Deploy the Solution via the Azure Portal

1- Go to the Private Link Center –> Private endpoints –> Add

2- Type the required information like the Subscription, RG, Location…

3- Choose the resource that you want to create a Private Endpoint for (in my case, it's the azuresqlprivate Azure SQL server)

4- Choose a VNET/subnet that will enable access to your resource. Under the hood, a network interface (NIC) will be created, assigned an IP address, and attached to your resource. You can optionally integrate the resource with an Azure Private DNS zone so that you can call the private endpoint using a DNS name. This is not required, as you can create your own DNS record (A record) on your own DNS service.

5- After the deployment, you will notice the following:

  • The Private Endpoint is created and the Connection State is Approved*

* Approved means that the Azure SQL side has approved the Private Endpoint. This is useful when both parties are not from the same team/tenant: the requester asks for the Private Endpoint connection and waits for the owner to approve it. In addition, you can reject the connection at any time.

  • A NIC has been created and deployed to the Subnet
  • A DNS record has been created (in case you have enabled private DNS option)
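If you prefer scripting, here is a hedged sketch of the same deployment with the Az.Network cmdlets (the resource group name is a placeholder; 'sqlServer' is the group ID used for Azure SQL servers):

# Sketch only: create a private endpoint for the azuresqlprivate SQL server
# NB: the subnet must have private endpoint network policies disabled
$vnet = Get-AzVirtualNetwork -Name "VNETEastUS" -ResourceGroupName "myRG"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "Privatesubnet" }
$sql = Get-AzSqlServer -ResourceGroupName "myRG" -ServerName "azuresqlprivate"

$connection = New-AzPrivateLinkServiceConnection -Name "azuresqlprivate-plsc" `
    -PrivateLinkServiceId $sql.ResourceId -GroupId "sqlServer"

New-AzPrivateEndpoint -Name "PE" -ResourceGroupName "myRG" -Location "eastus" `
    -Subnet $subnet -PrivateLinkServiceConnection $connection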

2.4- Test the Private Endpoint

On the SQLClient Virtual Machine, you can install SQL Server Management Studio and test the access to the DB


Note that the private IP is resolved.
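A quick way to check the resolution from the VM (a small sketch; the FQDN is the one of my SQL server):

# From SQLClient: the server FQDN should now resolve to the private IP of the endpoint
Resolve-DnsName azuresqlprivate.database.windows.net
Test-NetConnection azuresqlprivate.database.windows.net -Port 1433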

2.5- Secure the access to the Private Endpoint

Now that we have private access to a PaaS service, there are many ways to secure access to it:

  • Use Network Security Groups for the PE inbound traffic: you can create an NSG, apply it to the subnet or to the Private Endpoint NIC, and filter inbound rules like any other NSG –> It looks like this is not yet supported in the Public Preview:
    • Applying an NSG to the NIC is not supported
    • Applying an NSG to the subnet has no effect on Private Endpoints
  • Use Network Security Groups for the outbound traffic: you can filter outbound traffic from your sources to your Private Endpoints. This is supported but not convenient, since it's better to filter at the destination and not at the source when securing access from a destination standpoint (the picture shows a rule that blocks access to the private IP of the private endpoint, applied to the SQLClient VM; a scripted sketch follows this list)
  • Since the IP address of the Private Endpoint is within your VNET, you can filter access to it on your perimeter firewalls, like Azure Firewall or your own firewall
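Here is a hedged sketch of the outbound filtering mentioned above (the private IP 10.0.1.4 and the names are placeholders; the NSG would then be associated with the SQLClient subnet or NIC):

# Sketch: block outbound traffic from the SQLClient VM to the private endpoint IP
$rule = New-AzNetworkSecurityRuleConfig -Name "Deny-PE-SQL" -Direction Outbound -Access Deny `
    -Priority 100 -Protocol Tcp -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "10.0.1.4/32" -DestinationPortRange 1433

$nsg = New-AzNetworkSecurityGroup -Name "sqlclient-nsg" -ResourceGroupName "myRG" `
    -Location "eastus" -SecurityRules $rule
# Then associate sqlclient-nsg with the VM subnet or NIC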

3- Private Links

The next blog post will introduce the Private Link concept, which is a service that allows your external customers/partners to securely access your services via a private connection, and using an Approval Process.


Error deleting an App Service Plan, ghost function

Hi,

You may not be able to remove an App Service Plan, hitting the following error:

Succeeded: 0, Failed: 1, Canceled: 0.Error details {app service plan name}: Server farm ‘ app service plan name ‘ cannot be deleted because it has web app(s) setproperties assigned to it. (Code: Conflict) Server farm ‘ app service plan name ‘ cannot be deleted because it has web app(s) setproperties assigned to it. undefined (Code: Conflict) undefined

It looks like a bug where a Web App or Function was not 'correctly' removed by ARM. You can verify this by looking at the apps deployed to that App Service Plan.

The solution is to force delete this ghost app, and this can easily be done through PowerShell:

  • Go to Sites and copy the Resource ID of the troublesome site
  • Open a PowerShell window (you need the Az module) or use the Cloud Shell in the Azure Portal
  • Use the Remove-AzResource cmdlet to remove the ghost resource:
Remove-AzResource -ResourceId "/subscriptions/qsqsqsqsqsqsqs/resourceGroups/qsqsqsqsqsq/providers/Microsoft.Web/sites/setproperties"
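If you don't know the exact resource ID beforehand, a small sketch to list the remaining sites in the resource group first (the resource group name is a placeholder):

# List the web/function apps still attached to the resource group and grab the ghost one
Get-AzResource -ResourceGroupName "myResourceGroup" -ResourceType "Microsoft.Web/sites" |
    Select-Object Name, ResourceId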

Now that the resource has been completely cleaned up, you can delete your App Service Plan.

Azure ARM Template: “ObjectID” with Azure Key vault policy assignment

Hi all,

When you author ARM templates and you deploy a Key Vault, setting the access policies via the template, be careful about the content of objectId.

  "accessPolicies" : [                    {                        "tenantId": "xxxxx-30d9-xxxxx-8015-ddddddd",                        "objectId": "rrrrr-tttt-rrrr-rrrr-tttttt",                        "permissions": {
"keys": ["all"],
"secrets": ["all"]
}
},

If you are assigning the policy to a user account, use the objectId value found on Azure AD:

If you are assigning the policy to a Service Principal, use the objectId of the application that you get from the Enterprise Applications blade, not the App Registrations blade.

Good

Wrong
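A quick way to grab the right value with PowerShell (a sketch; the display name and UPN are placeholders). Get-AzADServicePrincipal returns the Enterprise Application object, which is the one you want:

# objectId of the service principal (Enterprise Application), the value to use in accessPolicies
(Get-AzADServicePrincipal -DisplayName "my-app").Id

# objectId of a user account
(Get-AzADUser -UserPrincipalName "user@domain.io").Id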

Delete Azure Backup Restore Points collections error: InternalOperationError, goal seeking tasks failed

Hi,

During an operation to move Azure resources between subscriptions (or resource groups), we were obliged to delete the "Microsoft.Compute/restorePointCollections" resources in order to be able to move VMs protected by a backup policy, as described here.

Unfortunately, when deleting the "Microsoft.Compute/restorePointCollections" resources, we were hit by the following error:

 {X} goal seeking tasks failed. 

It took us time to figure out that retrying the deletion of the same resources several times eventually ends with a successful operation. But because each attempt took about 1 minute, doing it by hand would be a waste of time.

So today, I'm sharing with you a PowerShell script that runs all the deletion operations in parallel!

Go here
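If you cannot grab the script, here is a minimal sketch of the idea, assuming the Az module (Remove-AzResource supports -AsJob, and retrying covers the transient 'goal seeking tasks failed' errors; the resource group name is a placeholder):

# Delete all restore point collections of a resource group in parallel
$collections = Get-AzResource -ResourceGroupName "myResourceGroup" `
    -ResourceType "Microsoft.Compute/restorePointCollections"

foreach ($rpc in $collections) {
    Remove-AzResource -ResourceId $rpc.ResourceId -Force -AsJob
}

# Wait for the jobs, then re-run the loop for any collection still present
Get-Job | Wait-Job | Out-Null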

Unable to assign RBAC role to the root management group

Hi,

If you try to assign an RBAC role to the root management group, you may encounter the following error, even if you are the Azure Account Owner, the Global Administrator…

New-AzureRmRoleAssignment -SignInName user@domain.io -RoleDefinitionName "Reader" -Scope /providers/Microsoft.Management/managementGroups/root

New-AzureRmRoleAssignment : The client ‘xxxx@domain.io’ with object id ’46f38ab7-404e-4a36-906f-3a19299cf41c’ does not have authorization to perform action ‘Microsoft.Authorization/roleAssignments/write’ over scope ‘/providers/Microsoft.Management/managementGroups/root/providers/Microsoft.Authorization/roleAssignments/e3a41417-f5b5-4476-8171-14866f42481f’.
At line:1 char:1

New-AzureRmRoleAssignment -SignInName
user@domain.io -RoleDef …

~~~~~~~~~~~~~~~~~

CategoryInfo : CloseError: (:) [New-AzRoleAssignment], CloudException

FullyQualifiedErrorId : Microsoft.Azure.Commands.Resources.NewAzureRoleAssignmentCommand

Solution

You need to Elevate access for the Global Admin in order to control the root management group.

Do this: https://docs.microsoft.com/en-us/azure/role-based-access-control/elevate-access-global-admin
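For completeness, a sketch of how this can be scripted with the newer Az cmdlets (the elevateAccess call below is the documented REST endpoint behind the portal toggle; run it as the Global Administrator, then retry the assignment):

# Elevate access for the signed-in Global Administrator (Az.Accounts)
Invoke-AzRestMethod -Method POST `
    -Path "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"

# The role assignment on the root management group should then succeed
New-AzRoleAssignment -SignInName user@domain.io -RoleDefinitionName "Reader" `
    -Scope "/providers/Microsoft.Management/managementGroups/root"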

Azure Data Factory: How to access the output of an Activity

Hi,

When using ADF (in my case V2), we create pipelines. Inside these pipelines, we create a chain of Activities.

In most cases, we need the output of an activity to be the input of the next or of a further activity.

The following screenshot shows a pipeline of 2 activities:

  1. Get from Web: an HTTP activity that gets data from an HTTP endpoint
  2. Copy to DB: an activity that takes the output of the first activity and copies it to a DB


 

How to get the output of an activity?

The output of any activity at a given time is: @activity('Activity Name').output

So the output of my Get from Web activity will be: @activity('Get from Web').output

Example

If my Get from Web activity gets a JSON output from the HTTP endpoint like this:

{
  "name": "Samir",
  "mail": "samir.farhat@mvp.com",
  "age": "30",
  "childrens": [
    {
      "name": "Mark",
      "age": "7"
    },
    {
      "name": "Helena",
      "age": "9"
    }
  ]
}

 

@activity('Get from Web').output

will contain the JSON shown above.

  • If I want to access only childrens, I can do this:

@activity('Get from Web').output.childrens

  • If I want to access only the first child:

@activity('Get from Web').output.childrens[0]
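You can also chain property access: for the sample payload above, @activity('Get from Web').output.childrens[0].name evaluates to Mark, which is handy when you only need a single field (for example as a parameter of the next activity).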

Understanding Azure CosmosDB Partitioning and Partition Key

Hi,

Many of you have asked me about the real meaning of Cosmos DB partitions, the partition key and how to choose a partition key if needed. This post is all about this.

1- Cosmos DB partitions: what are they?

The official Microsoft article explains partitions in Cosmos DB well, but to simplify the picture:

  • When you create a container in a Cosmos DB database (a collection in the case of the SQL API), Cosmos DB will provision capacity for that container
  • If the container capacity is more than 10 GB, then Cosmos DB requires an additional piece of information to create it: WHY?

When Cosmos DB provisions a container, it reserves capacity on its compute and storage resources. The storage and compute resources are called physical partitions.

Within the physical partitions, Cosmos DB uses logical partitions; the maximum size of a logical partition is 10 GB.

You get it now: when the size of a container exceeds (or can exceed) 10 GB, Cosmos DB needs to spread the data over multiple logical partitions.

The following picture shows 2 collections (containers):

  • Collection 1: the size is 10 GB, so Cosmos DB can place all the documents within the same logical partition (Logical Partition 1)
  • Collection 2: the size is unlimited (greater than 10 GB), so Cosmos DB has to spread the documents across multiple logical partitions


Spreading the documents across multiple logical partitions is called partitioning.

NB1: Cosmos DB may distribute documents across logical partitions that live on different physical partitions. Logical partitions of the same container do not necessarily belong to the same physical partition, but this is managed by Cosmos DB.

NB2: Partitioning is mandatory if you select Unlimited storage for your container, and supported if you choose 1000 RU/s or more.

NB3: Switching between partitioned and un-partitioned containers is not supported. You need to migrate your data

2- Why does partitioning matter?

This is the first question I asked myself: why do I need to know about this partitioning stuff? The service is managed, so why should I care about this information if Cosmos DB automatically distributes my documents across partitions?

The answer is :

  • Cosmos DB does not manage logical partitioning on its own
  • Partitioning impacts performance and is tied to the partition size limit

2.1- Performance

When you query a container, Cosmos DB will look into the documents to get the required results (to keep it simple; the reality is more elaborate). When a request spans multiple logical partitions, it consumes more Request Units, so the request charge per query will be greater.
–> HINT: with this constraint, it's better that queries don't span multiple logical partitions, so the documents related to the same query should stay within the same logical partition.

2.2- Size

When you request the creation of a new document, Cosmos DB will place it within a logical partition. The question is how Cosmos DB distributes the documents between the logical partitions, given that a logical partition can't exceed 10 GB: Cosmos DB must distribute documents between the logical partitions intelligently. This sounds easy; a mechanism like round robin could be enough, but that is not true! With round robin, your documents would be spread across N logical partitions, and we have seen that queries spanning multiple logical partitions consume a lot of RUs, so this is not optimal.


We now have the performance-size dilemma: how can Cosmos DB deal with these two factors? How can we find the best configuration to:

  1. Keep ‘documents of the same query’ under the same Logical Partition
  2. Not reach the 10 GB limit too easily

–> The answer is: Cosmos DB can't deal with this; you have to, by choosing a partition key.


3- The Partition Key

My definition: the partition key is a HINT that tells Cosmos DB where to place a document, and whether two documents should be stored within the same logical partition. The partition key is a value within the JSON document.

NB: The partition key must be submitted when querying Cosmos DB.
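As a side note, here is a hedged sketch of creating a partitioned container with the Az.CosmosDB cmdlets (the account, database, container and resource group names are placeholders); the /hotelid key is the one used in the example that follows:

# Sketch: create a container (SQL API collection) partitioned on /hotelid
New-AzCosmosDBSqlContainer -ResourceGroupName "myRG" -AccountName "mycosmosaccount" `
    -DatabaseName "hotelsdb" -Name "rooms" `
    -PartitionKeyKind Hash -PartitionKeyPath "/hotelid" -Throughput 1000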

Let me explain this by an example:

Suppose we have a multi-tenant hotel application that manages hotel rooms and reservations. Each room is represented by a document where all the room's information is stored. The hotel is identified by a hotelid and the room by an id.

The document structure is like the following:

{
  "hotelid": "",
  "name": "",
  "room": {
    "id": "",
    "info": {
      "info1": "",
      "info2": ""
    }
  }
}

The following is the graphical view of the JSON document:


Suppose we have documents for 6 rooms, spread across three hotels (hotelid 2222, 3333 and 4444).


3.1- No Partition Key

If you create a container with a size of 10 GB, the container will not be partitioned, and all the documents will be created within the same logical partition. So all your documents together should not exceed 10 GB.

3.2- Partition Key : Case 1

Partition Key = /hotelid

In this case, when Cosmos DB creates the 6 documents based on /hotelid, it will spread them across 3 logical partitions, because there are 3 distinct /hotelid values.

  • Logical Partition 1 : /hotelid = 2222
    • 3 documents
  • Logical Partition 2 : /hotelid = 3333
    • 2 documents
  • Logical Partition 3 : /hotelid = 4444
    • 1 document

What are the pros and limits of this partitioning scheme?

  • Pros
    • Documents from the same hotel will be placed in their own logical partition
    • Each hotel can have up to 10 GB of documents
    • Queries within the same hotel will perform well since they will not span multiple logical partitions
  • Limits
    • All rooms of the same hotel will be placed within the same logical partition
    • The 10 GB limit may be reached as the room count grows

3.3- Partition Key: Case 2
NB: Cosmos DB supports only one JSON property as the partition key, so in my case I will create a new property called PartitionKey.

Suppose that after making a calculation, we figured out that each hotel will generate 16 GB of documents. This means I need the documents to be spread over two logical partitions. How can I achieve this?

  • The /hotelid has only 1 distinct value per hotel, so it's not a good partition key
  • I need to find a value that can have at least 2 distinct values per hotel
  • I know that each hotel has multiple rooms, and therefore multiple room ids

The idea is to create a new JSON property called PartitionKey, which can have two values:

  • hotelid-1 if the roomid is odd
  • hotelid-2 if the roomid is even

This way:

  • When you create a new document (which contains a room), you look at the roomid: if it's even then PartitionKey = hotelid-2, if it's odd then PartitionKey = hotelid-1
  • This way, Cosmos DB will place even rooms within one logical partition and odd rooms within another (a small sketch follows below)

–> Result: the hotel documents will span two logical partitions, so up to 20 GB of storage.
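As a sketch of the idea (PowerShell here just for illustration; your application code would compute the same value before inserting the document):

# Compute the synthetic PartitionKey value from hotelid and roomid
function Get-PartitionKey ($hotelid, $roomid) {
    if ($roomid % 2 -eq 0) { return "$hotelid-2" }   # even room -> second logical partition
    else                   { return "$hotelid-1" }   # odd room  -> first logical partition
}

Get-PartitionKey -hotelid 2222 -roomid 101   # returns 2222-1
Get-PartitionKey -hotelid 2222 -roomid 102   # returns 2222-2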

What are the pros and limits of this partitioning scheme?

  • Pros
    • Documents related to the same hotel will be placed on only 2 logical partitions
  • Limits
    • Queries related to the same hotel will span 2 logical partitions, which results in an additional request charge

4- How to choose the Partition Key?

This is the most difficult exercise when designing your future Cosmos DB data structure. Here are some recommendations to guide you through it:

  • What is the expected size per document? This will give you an idea of the level of partitioning you will need. Think about the examples above: if each document is 100 KB max, then you can have up to about 105k documents per logical partition, which means 105k rooms per hotel (more than enough), so /hotelid is a good partition key against the size constraint
  • If you are faced with several candidate partition keys and are unable to decide, do the following:
    • Do not use a partition key that will hit the size constraint quickly: reaching the size limit makes the application unusable
    • Choose the partition key that will consume the least request charge. How to predict that? Determine the most-used queries across your application, and choose the best partition key according to them
  • Add new properties to your JSON document (e.g. PartitionKey), even if they are not really useful by themselves, just to achieve good partitioning

5- I have determined a good partition key, but I'm afraid of hitting the 10 GB limit per logical partition

This is the most asked question after choosing the partition key: what if all the documents with the same partition key value hit the 10 GB limit?!

Like the example above, try to find a mandatory value that gives you the X factor you want. The idea is to ask: can I find an additional property that I can use in my partition key?

NB: The request charge will be multiplied by X, but at least I can predict it.

This was simple in my case, but for an arbitrary factor X you can use a bucket calculator function: you just provide how many logical partitions you want to spread your documents into. A good blog post about the subject can be found here.
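A minimal sketch of such a bucket function (a hash of the roomid modulo the number of logical partitions you want; X = 4 here is just an example):

# Spread the documents of one hotel across X buckets (logical partitions)
function Get-BucketPartitionKey ($hotelid, $roomid, $bucketCount = 4) {
    $bucket = [math]::Abs($roomid.GetHashCode()) % $bucketCount
    return "$hotelid-$bucket"
}

Get-BucketPartitionKey -hotelid 2222 -roomid 4711   # e.g. 2222-3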

Hope that this article helps.

Cheers

Get Azure Datacenter IP ranges via API V2

Hi,

In my previous post, I showed how to create a lightweight Azure Function that allows you to request the Azure Datacenter IP ranges via an API. You can rapidly test it by following the instructions in section "3- Try it before deploying it" here: https://buildwindows.wordpress.com/2017/11/19/get-azure-datacenter-ip-ranges-via-api/

The feedback was positive, but many of you asked for a way to see whether there were updates compared to the last version, and what the updates are, if any.

In this post, I will publish the second version of the API (with the how-to), which allows you to:

  • Get the current Azure Datacenter IP ranges
  • Get the current Azure Datacenter IP ranges for a specific region
  • Get region names (since, unfortunately, the region names published by Microsoft are not exactly the same as those used by the Microsoft Azure services)
  • New : Get the current version ID of the Azure Datacenter IP ranges
  • New : Get the previous version ID of the Azure Datacenter IP ranges
  • New : Get the difference between the current and the previous version for all regions
  • New : Get the difference between the current and the previous version for a specific region

The new features allow easy integration with your environment and simplify the update of the firewall rules within your infrastructure.

1- How to request the API?

The API supports only POST requests. You can make the API calls below using the following body constructions.

Here are examples using PowerShell, but you can use any tool to request the API with the same body content.

#V1

#Get the current Azure IP address ranges of all regions

$body = @{"region"="all";"request"="dcip"} | ConvertTo-Json

#Get the current Azure IP address ranges of a specific region, for example europewest

$body = @{"region"="europewest";"request"="dcip"} | ConvertTo-Json

#Get the Azure region names that we can request IPs for

$body = @{"request"="dcnames"} | ConvertTo-Json

#Post the request

$webrequest = Invoke-WebRequest -Method "POST" -Uri "https://azuredcip.azurewebsites.net/getazuredcipranges" -Body $body

ConvertFrom-Json -InputObject $webrequest.Content

#New in V2

#Get the (added and/or removed) IP address range updates of a specific region

$body = @{"request"="getupdates";"region"="asiaeast"} | ConvertTo-Json

#Get the (added and/or removed) IP address range updates of all regions

$body = @{"request"="getupdates";"region"="all"} | ConvertTo-Json

#Get the current Azure DC IP ranges version ID

$body = @{"request"="currentversion"} | ConvertTo-Json

#Get the previous Azure DC IP ranges version ID

$body = @{"request"="previousversion"} | ConvertTo-Json

#Post the request

$webrequest = Invoke-WebRequest -Method "POST" -Uri "https://azuredcip.azurewebsites.net/getazuredcipupdates" -Body $body

ConvertFrom-Json -InputObject $webrequest.Content

2- How to build the solution?

2.1- Solution components

The V2 version is still using only Azure Functions, but unlike V1, it uses multiple functions within the Function App:

  • 1 Function App
    • Function 1
    • Function 2
    • Function 3
    • Proxy1
    • Proxy2
    • Storage Account

The following tables detail each component's configuration. If you want to recreate the solution within your environment, create the same components using the given configuration:

Function App

  • Name: azuredcip
  • Description: This Function App will host the entire solution. It will include 3 functions, 1 storage account and two proxies.
  • App Service Plan: Shared or greater. Use at least a Basic tier to benefit from SSL, custom names and backup.

Function1

  • Name: azuredcipranges
  • Description: This function returns the V1 information (https://buildwindows.wordpress.com/2017/11/19/get-azure-datacenter-ip-ranges-via-api/).
  • Language: HttpTrigger – PowerShell
  • Integrate: Allowed HTTP methods: POST / Authorization level: Anonymous
  • Manage: You can add function keys if you want to secure the API access. In my case, my API stays public (anonymous) to keep supporting my V1.

Function2

  • Name: azuredciprangesupdater
  • Description: This function will do the following:
    – Get the current Azure DC IP ranges version and store it in the storage account
    – Always keep the previous Azure DC IP ranges version file in the storage account
    – Create and store a file containing the difference between the current and previous versions in the storage account
    – Return the mentioned information based on the API request body
  • Language: HttpTrigger – PowerShell
  • Integrate:
    Inputs (type: Azure Blob Storage):
      vnowinblob –> azuredcipfiles/vnow.json (SA connection: AzureWebJobDashboard)
      vpreviousinblob –> azuredcipfiles/vprevious.json (SA connection: AzureWebJobDashboard)
      vcompareinblob –> azuredcipfiles/vcompare.json (SA connection: AzureWebJobDashboard)
    Outputs (type: Azure Blob Storage):
      vnowoutblob –> azuredcipfiles/vnow.json (SA connection: AzureWebJobDashboard)
      vpreviousoutblob –> azuredcipfiles/vprevious.json (SA connection: AzureWebJobDashboard)
      vcompareoutblob –> azuredcipfiles/vcompare.json (SA connection: AzureWebJobDashboard)
    Keep the default HTTP output.
    Allowed HTTP methods: POST
    Authorization level: Function
  • Manage: You can use the default function key or generate a new key. This API will not be directly exposed, so you can protect it with a key.

Function3

  • Name: triggerazdciprangesupdate
  • Description: This function triggers azuredciprangesupdater weekly to update the current and previous versions.
  • Language: TimerTrigger – PowerShell
  • Schedule: 0 0 0 * * 3 (each Wednesday, but you can choose any day of the week, as Microsoft will not apply the updates before one week after their publication)

Proxy 1

  • Name: getazuredcipranges
  • Description: This proxy relays requests to azuredcipranges
  • Route template: /getazuredcipranges
  • Allowed HTTP methods: POST
  • Backend URL: https://azuredcip.azurewebsites.net/api/azuredcipranges (add the key if you have secured it)

Proxy 2

  • Name: getazuredcipupdates
  • Description: This proxy relays requests to azuredciprangesupdater
  • Route template: /getazuredcipupdates
  • Allowed HTTP methods: POST
  • Backend URL: https://azuredcip.azurewebsites.net/api/azuredciprangesupdater?code=typethekeyhere

Storage Account

  • Name: azuredcipsa (this is the storage account automatically created with the Function App; it can have any other name)
  • Container: azuredcipfiles
  • Files: upload the following files: vnow.json and vprevious.json

NB: These files are fake files at first. During the first API update request, the vnow content will be copied to the vprevious file, and the vnow content will then be replaced by the real current version. At this moment, you can only request the current version. After one week, a new version of the file will be published by MS, so another cycle will set vnow and vprevious to two real consecutive versions, and you can then benefit from the update and comparison feature.

2.2- Download the files

You can download the needed files here; you will find the PowerShell code for each function (function1, function2, function3) and the two JSON files (vnow.json and vprevious.json).

NB: As mentioned before, after the first request to the API (getupdates), you will have a valid vnow version, but the previous version will still be the one uploaded now. You need to wait at least one week to have two valid versions.

Enjoy!!

Get Azure Datacenter IP address ranges via API

Hi folks,

One of the struggles we may face when dealing with Azure services is network filtering. Today, we consume a lot of services provided by Azure, and some Azure services consume services from our infrastructure. Microsoft publishes an XML file that contains the exhaustive list of the Azure Datacenter IP ranges for all its public regions (no gov). This XML file is regularly updated by Microsoft to reflect the list updates, as ranges can be added or removed.

The problem is that consuming an XML file is not very convenient, as we need to make many transformations to consume the content. Many of you have requested that Microsoft at least publish this list via an API so it can be consumed by many sources using REST requests. Until Microsoft does, I will show you today how to create a very lightweight web app on Azure (using Azure Functions) that will 'magically' do the job.

NB: The Azure Datacenter IP ranges include all the address space used by the Azure datacenters, including the customers' address space.

1- Why do we need these address ranges?

If you want to consume Azure services or some Azure services want to consume your services, and you don’t want to allow all the “Internet” space, you can ‘reduce’ the allowed ranges to only the Azure DC address space. In addition, you can go further and select the address ranges by Azure region in case the Azure services are from a particular region.

Examples :

  • Some applications on your servers need to consume Azure Web Apps hosted on Azure. Without the Azure DC address space, you would have to allow them to access the internet, which is a bad idea. Instead, you can configure your firewalls to only permit access to the Azure DC IP ranges.
  • If you are using Network Virtual Appliances on Azure, and you want to allow the VM's Azure agent to access the Azure services (storage accounts) in order to function properly, you can allow access only to the Azure DC IPs instead of the internet.

2- The Solution

In order to consume the Azure Datacenter IPs via an API, I used the powerful and simple Azure Functions to provide a very lightweight 'file to JSON' converter. The Azure Function will do the following:

  • Accept only a POST request
  • Download the Azure Datacenter IP ranges xml file
  • Convert it to a JSON format
  • Return an output based on the request:
    • A POST request can accept a body of the following format: { "region": "regionname", "request": "requesttype" }.
      • The "request" parameter can have the value of:
        • dcip: this will return the list of the Azure Datacenter IP ranges, depending on the "regionname" parameter. "regionname" can be:
          • all: this will return a JSON output of all the Azure Datacenter IP ranges of all regions
          • regionname: this will return a JSON output of the region's Azure Datacenter IP ranges
        • dcnames: this will return a list of the Azure Datacenter region names. The "regionname" parameter will be ignored in this case
      • In case of a bad region name or request value, an error will be returned

3- Try it before deploying it

If you want to see the result, you can make the following requests using your favorite tool, against an Azure Function hosted on my platform. In my case, I'm using PowerShell and my function URI is https://azuredcip.azurewebsites.net/api/azuredcipranges

3.1- Get the address ranges of all the Azure DCs regions

#Powershell code

$body = @{"region"="all";"request"="dcip"} | ConvertTo-Json

$webrequest = Invoke-WebRequest -Method "POST" -Uri "https://azuredcip.azurewebsites.net/api/azuredcipranges" -Body $body

ConvertFrom-Json -InputObject $webrequest.Content


3.2- Get the address ranges of the North Europe region

In this case, note that we must use europenorth instead of northeurope.

#Powershell code

$body = @{"region"="europenorth";"request"="dcip"} | ConvertTo-Json

$webrequest = Invoke-WebRequest -Method "POST" -Uri "https://azuredcip.azurewebsites.net/api/azuredcipranges" -Body $body

ConvertFrom-Json -InputObject $webrequest.Content


3.3- Get the region names

$body = @{"request"="dcnames"} | ConvertTo-Json

#or
#$body = @{"region"="anything";"request"="dcip"} | ConvertTo-Json

$webrequest = Invoke-WebRequest -Method "POST" -Uri "https://azuredcip.azurewebsites.net/api/azuredcipranges" -Body $body

ConvertFrom-Json -InputObject $webrequest.Content

4- How to deploy it to your system?

In order to deploy this function within your infrastructure, you will need to create an Azure Function App (section: Create a function app). You can use an existing Function App or App Service Plan.

NB: The App Service Plan OS must be Windows.

After creating the Function App, do the following:

1- Go to your Function App and click Create new (+).

2- In Language, choose PowerShell, then select HttpTrigger – PowerShell.

3- Give a name to your function and choose an Authorization level. In my case, I set the authorization to Anonymous in order to keep the steps simple. We will see later how to secure access to the function HTTP trigger.

4- Copy/paste the following code in the function tab, then click Save:

# POST method: $req
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$region = $requestBody.region
$request = $requestBody.request

#Main
if (-not $region) {$region = 'all'}

$URi = "https://www.microsoft.com/en-us/download/confirmation.aspx?id=41653"
$downloadPage = Invoke-WebRequest -Uri $URi -UseBasicParsing
$xmlFileUri = ($downloadPage.RawContent.Split('"') -like "https://*PublicIps*")[0]
$response = Invoke-WebRequest -Uri $xmlFileUri -UseBasicParsing
[xml]$xmlResponse = [System.Text.Encoding]::UTF8.GetString($response.Content)

$AzDcIpTab = @{}

if ($request -eq 'dcip')
{
    foreach ($location in $xmlResponse.AzurePublicIpAddresses.Region)
    {
        if ($region -eq 'all') {$AzDcIpTab.Add($location.Name,$location.IpRange.Subnet)}
        elseif ($region -eq $location.Name) {$AzDcIpTab.Add($location.Name,$location.IpRange.Subnet)}
    }
    if ($AzDcIpTab.Count -eq 0) {$AzDcIpTab.Add("error","the requested region does not exist")}
}
elseif ($request -eq 'dcnames')
{
    $AzDcIpTab = $xmlResponse.AzurePublicIpAddresses.Region.Name
}
else
{
    $AzDcIpTab.Add("error","the request parameter is not valid")
}

$AzDcIpJson = $AzDcIpTab | ConvertTo-Json
Out-File -Encoding Ascii -FilePath $res -InputObject $AzDcIpJson


Go to the Integrate tab and choose the following:

  • Allowed HTTP methods : Selected methods
  • Selected HTTP methods : Keep only POST

Click Save

It's done!

To test the function, go back to the main blade and expand the Test tab.

In the request body, type:

{
  "region": "europewest",
  "request": "dcnames"
}

Then click Run

You should see the results in the output, with HTTP status 200 OK.

Click Get function URL to get the URL of your function in order to query it via an external tool

 

5- Securing the access to the function URL

There are 3 options that let you secure the access to the Function URL:

5.1- Network IP Restrictions

I personally think that this is the best option to secure access to the Function URL. IP restrictions allow only a set of public IP addresses to access the Function URL. For example, if you have an automation script that requests the API and updates a firewall object or a database, you can whitelist only the public IP address used by this automation workflow, which is its outbound IP address. This feature is available for Basic tier App Service Plans and greater; it's not supported for the Free and Shared SKUs. Start using it by following this tutorial: https://docs.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions.

NB: The Networking blade for a Function App can be found by clicking on the Function App name –> Platform features –> Networking.
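As a sketch, the same restriction can also be scripted with the Az.Websites module (the resource group, app name, rule name and IP below are placeholders):

# Allow only the outbound IP of the automation workflow to reach the Function App
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "myRG" -WebAppName "azuredcip" `
    -Name "allow-automation" -Priority 100 -Action Allow -IpAddress "40.112.10.25/32"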

5.2- Function Key and Host Key

You can restrict access to the Function URL by leveraging the authorization feature, protecting calls to the URL with a 'key'. There are two key types: the function key, which is defined per function, and the host key, which is shared by all the functions within a Function App. In order to protect a function via a function key, do the following:

1- Go to the Integrate blade, and change the Authorization level to Function.

2- Go to the Manage blade. You can use the default generated key or add a new function key. Generating a new key is useful for key rotation.

3- Now, you can access the 'protected' URL by clicking Get function URL on the main blade. You can select which key to use.

 

5.3- Authentication and authorization feature in Azure App Service

This feature allows you to secure your Function App by requiring the caller to authenticate against an identity provider and provide a token. You can use it with Azure Active Directory or Facebook, for example. I will not detail the steps in this post, but here are some materials:

Overview: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/app-service-authentication-overview.md

How to configure your App Service application to use Azure Active Directory login: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/app-service-mobile-how-to-configure-active-directory-authentication.md