Get Azure Datacenter IP address ranges via API

Hi folks,

One of the struggles we may face when dealing with Azure services is network filtering. Today we consume a lot of services provided by Azure, and some Azure services can consume services from our infrastructure. Microsoft publishes an 'xml' file that contains the exhaustive list of the Azure Datacenter IP ranges for all its public regions (government clouds are not included). Microsoft updates this 'xml' file regularly to reflect changes, as ranges can be added or removed.

The problem is that consuming an 'xml' file is not very convenient, since we need several transformations to consume its content. Many of you have asked Microsoft to at least publish this list via an API so it can be consumed by any client using REST requests. Until Microsoft makes this available, I will show you today how to create a very lightweight web app on Azure (using Azure Functions) that will 'magically' do the job.

NB : The Azure Datacenter IP ranges include all the address space used by the Azure datacenters, including the customers' address space

1- Why do we need these address ranges ?

If you want to consume Azure services, or some Azure services need to consume your services, and you don't want to allow the whole "Internet" address space, you can 'reduce' the allowed ranges to only the Azure DC address space. You can go further and select the address ranges per Azure region in case the Azure services are hosted in a particular region.

Examples :

  • Some applications on your servers need to consume Azure Web Apps hosted on Azure. Without the Azure DC address space, you would have to allow them access to the whole Internet, which is a bad idea. Instead, you can configure your firewalls to permit access to the Azure DC IP ranges only
  • If you are using Network Virtual Appliances on Azure, and you want to allow the VM's Azure agent to access the Azure services it needs (storage accounts) in order to function properly, you can allow access to the Azure DC IPs only instead of the whole Internet

2- The Solution

In order to consume the Azure Datacenter IPs via an API, I used the powerful and simple Azure Functions to build a very lightweight 'file to JSON' converter. The Azure function will do the following:

  • Accept only a POST request
  • Download the Azure Datacenter IP ranges xml file
  • Convert it to JSON
  • Return an output based on the request:
    • A POST request accepts a body of the following format : { "region": "regionname", "request": "requesttype" }
      • The "request" parameter can have one of the following values:
        • dcip : returns the list of the Azure Datacenter IP ranges, depending on the "region" parameter, which can be:
          • all : returns a JSON output of the Azure Datacenter IP ranges of all regions
          • regionname : returns a JSON output of that region's Azure Datacenter IP ranges
        • dcnames : returns the list of the Azure Datacenter region names. The "region" parameter is ignored in this case
      • In case of a bad region name or request value, an error is returned
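For illustration, here is what a request body and the shape of the response look like. The region name follows the xml file naming convention, and the subnets shown are placeholders, not real ranges:

#Request body
{ "region": "europewest", "request": "dcip" }

#Response shape (illustrative, truncated)
{
    "europewest": [ "x.x.x.x/17", "y.y.y.y/18" ]
}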

3- Try it before deploying it

If you want to see the result, you can make the following requests using your favorite tool against an Azure function hosted on my platform. In my case, I'm using PowerShell, and my function URI is https://azuredcip.azurewebsites.net/api/azuredcipranges

3.1- Get the address ranges of all the Azure DC regions

#Powershell code

$body = @{"region"="all";"request"="dcip"} | ConvertTo-Json
$webrequest = Invoke-WebRequest -Method "POST" -Uri "https://azuredcip.azurewebsites.net/api/azuredcipranges" -Body $body
ConvertFrom-Json -InputObject $webrequest.Content
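Since ConvertFrom-Json returns an object with one property per region, you can then access a given region's ranges directly. A small usage sketch (the europewest property name assumes the xml region naming discussed in the next section):

#Powershell code
$ranges = ConvertFrom-Json -InputObject $webrequest.Content
$ranges.europewest   #lists the subnets returned for the europewest region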


3.2- Get the address ranges of the North Europe region

In this case, note that we must use europenorth instead of northeurope.

#Powershell code

$body = @{"region"="europenorth";"request"="dcip"} | ConvertTo-Json
$webrequest = Invoke-WebRequest -Method "POST" -Uri "https://azuredcip.azurewebsites.net/api/azuredcipranges" -Body $body
ConvertFrom-Json -InputObject $webrequest.Content


3.3- Get the region names

$body = @{"request"="dcnames"} | ConvertTo-Json
#or
#$body = @{"region"="anything";"request"="dcnames"} | ConvertTo-Json

$webrequest = Invoke-WebRequest -Method "POST" -Uri "https://azuredcip.azurewebsites.net/api/azuredcipranges" -Body $body
ConvertFrom-Json -InputObject $webrequest.Content

4- How to deploy it to your system ?

In order to deploy this function within your infrastructure, you will need to create an Azure Function App (Section : Create a function app). You can use an existing Function App or App Service Plan.

NB : The App Service Plan OS must be Windows

After creating the Function App, do the following:

1. Go to your Function App and click Create new (+).
2. In Language, choose PowerShell, then select HttpTrigger – PowerShell.
3. Give a name to your function and choose an Authorization level. In my case, I set the authorization to anonymous in order to keep the steps simple. We will see later how to secure access to the function HTTP trigger.
4. Copy and paste the following code into the function tab, then click Save:

# POST method: $req
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$region = $requestBody.region
$request = $requestBody.request

#Main
#Default to all regions when no region is supplied
if (-not $region) {$region = 'all'}

#Locate the current xml file URI on the Microsoft download page
$URi = "https://www.microsoft.com/en-us/download/confirmation.aspx?id=41653"
$downloadPage = Invoke-WebRequest -Uri $URi -UseBasicParsing
$xmlFileUri = ($downloadPage.RawContent.Split('"') -like "https://*PublicIps*")[0]

#Download and parse the xml file
$response = Invoke-WebRequest -Uri $xmlFileUri -UseBasicParsing
[xml]$xmlResponse = [System.Text.Encoding]::UTF8.GetString($response.Content)

$AzDcIpTab = @{}

if ($request -eq 'dcip')
{
    foreach ($location in $xmlResponse.AzurePublicIpAddresses.Region)
    {
        if ($region -eq 'all') {$AzDcIpTab.Add($location.Name,$location.IpRange.Subnet)}
        elseif ($region -eq $location.Name) {$AzDcIpTab.Add($location.Name,$location.IpRange.Subnet)}
    }
    if ($AzDcIpTab.Count -eq 0) {$AzDcIpTab.Add("error","the requested region does not exist")}
}
elseif ($request -eq 'dcnames')
{
    $AzDcIpTab = $xmlResponse.AzurePublicIpAddresses.Region.Name
}
else
{
    $AzDcIpTab.Add("error","the request parameter is not valid")
}

#Return the result as JSON through the $res output binding
$AzDcIpJson = $AzDcIpTab | ConvertTo-Json
Out-File -Encoding Ascii -FilePath $res -InputObject $AzDcIpJson


Go to the Integrate tab, and choose the following:

  • Allowed HTTP methods : Selected methods
  • Selected HTTP methods : Keep only POST

Click Save

It’s done !

To test the function, go back to the main blade and expand the Test tab

On the request body, type :

{
"region" : "europewest",
"request" : "dcnames"
}

Then click Run

You should see the results in the output, with an HTTP 200 OK status

Click Get function URL to get the URL of your function so you can query it from an external tool

 

5- Securing the access to the function URL

There are 3 options that let you secure access to the function URL:

5.1- Network IP Restrictions

I personally think that this is the best option to secure access to the function URL. IP Restrictions allows you to permit only a set of public IP addresses to access the function URL. For example, if you have an automation script that queries the API and updates a firewall object or a database, you can whitelist only the public IP address used by this automation workflow, which is its outbound IP address. This feature is available for Basic tier App Service Plans and greater; it's not supported for the Free and Shared SKUs. Start using it by following this tutorial : https://docs.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions.

NB : The Networking blade for a Function App can be found by clicking the Function name –> Platform features –> Networking

5.2- Function Key and Host Key

You can restrict access to the function URL by leveraging the Authorization feature, protecting the URL with a 'key'. There are two key types : the Function Key, which is defined per function, and the Host Key, which is defined for, and shared by, all the functions within a Function App. In order to protect a function via a Function Key, do the following :

1. Go to the Integrate blade and change the Authorization level to Function.
2. Go to the Manage blade. You can use the default generated key or add a new function key. Generating a new key is useful for key rotation.
3. You can now access the 'protected' URL by clicking Get function URL on the main blade, where you can select which key to use.
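With function-level authorization enabled, anonymous calls are rejected and the key must accompany the request. A minimal PowerShell sketch (the key value is a placeholder; Azure Functions accepts the key either as the code query string parameter or as the x-functions-key header):

#Powershell code
$functionKey = "<your-function-key>"   #placeholder
$body = @{"region"="all";"request"="dcip"} | ConvertTo-Json
#Key passed in the query string
$webrequest = Invoke-WebRequest -Method "POST" -Uri "https://azuredcip.azurewebsites.net/api/azuredcipranges?code=$functionKey" -Body $body
#Alternatively, pass the key as a header
$webrequest = Invoke-WebRequest -Method "POST" -Uri "https://azuredcip.azurewebsites.net/api/azuredcipranges" -Headers @{"x-functions-key"=$functionKey} -Body $body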

 

5.3- Authentication and authorization feature in Azure App Service

This feature allows you to secure your Function App by requiring the caller to authenticate to an identity provider and provide a token. You can use it against Azure Active Directory or Facebook, for example. I will not detail the steps in this post, but here are some materials :

Overview : https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/app-service-authentication-overview.md

How to configure your App Service application to use Azure Active Directory login : https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/app-service-mobile-how-to-configure-active-directory-authentication.md

 


Use the new Azure Services Tags preview

Hi all,

At Ignite 2017, Microsoft announced a long-awaited feature : Azure service endpoint tags for Network Security Groups. The new additional service tags will permit you to allow/deny access to and from Azure Datacenter IP addresses.

Why is this important ?

A lot of Azure services, especially services related to Azure IaaS, rely on network access to Azure endpoints located in the Internet address space. For example, you cannot use Azure Backup for IaaS virtual machines if your virtual machines do not have network access to the Azure storage endpoints of the same region. This causes a lot of frustration : if you are using NSGs, you cannot easily create and maintain rules for only the Azure IP addresses, since the list is huge and dynamic (Azure Datacenter IP addresses). The result is that all VMs end up with outbound HTTPS access to the Internet.

What is new ?

Additional service tags have been added in some regions to allow filtering access to :

  • Azure Storage endpoints
  • Azure SQL

The feature is now in preview, and other regions will be added in the future. Take a look at this article for the latest information : Azure Services Tags

How to use the feature ?

You just need to register for the new preview feature. Use the following PowerShell code against your subscription :

Register-AzureRmProviderFeature -FeatureName AllowAccessRuleExtendedProperties -ProviderNamespace Microsoft.Network
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network
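Registration is not instantaneous. You can check its status with the same AzureRM module (a quick sketch; the feature moves from 'Registering' to 'Registered' after a few minutes):

#Check the registration status of the preview feature
Get-AzureRmProviderFeature -FeatureName AllowAccessRuleExtendedProperties -ProviderNamespace Microsoft.Network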

Creating an Azure Resource Policy via a template

Hi all,

This post is about Azure Resource Policy and, specifically, about the creation of an ARP via a template.

If you want to understand what an ARP is and why you need it, you can refer to the official documentation here : Link1

When you start testing resource policies, things are not complicated : you can use PowerShell to create the policy definition and then the assignment. But when you start creating real-world policies with a lot of variables, you may want a better way of defining the core of the ARP, and that is where templates come in.

So the template is just, as usual, a JSON file where you put the policy definition in a structured way, and then use this file to create the policy. This is still easy since, at the end, you paste the policy definition into a JSON file and then use the same command, just passing the file path. This is documented here : Link2

But what is missing from the documentation today is how to create templates of ARPs that rely on parameters.

1- Why do I need parameters on an ARP ?

The answer to this question is very simple. Suppose you want to apply a policy to different subscriptions or different resource groups. You may notice that the policy is the same, but some property values are just different.

Example : I want only a specific set of VM sizes to be used within my subscriptions. The goal is the same, but I know that every subscription will have specific sizes.

  • Subscription A : A-Series
  • Subscription B: A-Series, D-Series
  • Subscription C : A-Series, D-Series, F-Series

So in a world without parameters, I would have to create 3 policy definitions and assign one policy to each subscription:

#Powershell example

#Policy A
$PolicyName = "AllowedVMSizesA"
$PolicyFile = "C:\path\AllowedVMSizesA.json"
$definition = New-AzureRmPolicyDefinition -Name $PolicyName -Policy $PolicyFile
New-AzureRMPolicyAssignment -Name $PolicyName -Scope "/subscriptions/SubA-ID" -PolicyDefinition $definition

#Policy B
$PolicyName = "AllowedVMSizesB"
$PolicyFile = "C:\path\AllowedVMSizesB.json"
$definition = New-AzureRmPolicyDefinition -Name $PolicyName -Policy $PolicyFile
New-AzureRMPolicyAssignment -Name $PolicyName -Scope "/subscriptions/SubB-ID" -PolicyDefinition $definition

#Policy C
$PolicyName = "AllowedVMSizesC"
$PolicyFile = "C:\path\AllowedVMSizesC.json"
$definition = New-AzureRmPolicyDefinition -Name $PolicyName -Policy $PolicyFile
New-AzureRMPolicyAssignment -Name $PolicyName -Scope "/subscriptions/SubC-ID" -PolicyDefinition $definition

 

 

In a world with parameters, I only have to create 1 policy with a parameter : the list of allowed sizes. When assigning the policy to subscription X, I just pass the list of the related sizes.

So in practice, the need for parameters is crucial.

2- What is the solution?

The solution in this case is to define a policy with parameters; each time you assign the policy to a scope, you supply the parameter value(s).

A parameterized ARP template is composed of 2 or 3 files:

  • The policy definition file : contains the policy rule, which relies on parameters
  • The policy parameter file : declares the parameters used by the definition
  • The assignment parameter file (or parameter object) : contains the parameter values

NB : The first 2 files are only used when creating the policy; the third is used to make the assignment

3- What is the syntax of each file ?

The policy definition file is a copy/paste of the policy definition as described in the MS link : Link1

You just have to copy and paste your definition into a JSON file and include the parameters in your definition. The parameters have the following format : [parameters('parameterName')]
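To make this concrete, here is a minimal sketch of a policy definition file for the allowed VM sizes scenario described above. The parameter name listOfAllowedSKUs is my own choice (it would be declared with the array type in the parameter file, rather than the string type shown in the generic example below), and the field alias is the one documented for VM SKUs:

{
    "if": {
        "allOf": [
            {
                "field": "type",
                "equals": "Microsoft.Compute/virtualMachines"
            },
            {
                "not": {
                    "field": "Microsoft.Compute/virtualMachines/sku.name",
                    "in": "[parameters('listOfAllowedSKUs')]"
                }
            }
        ]
    },
    "then": {
        "effect": "deny"
    }
}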

The policy parameter file has the following syntax (the following example file can be used):

#########################

{
    "parameterName1" : {
        "type" : "string",
        "metadata" : {
            "description" : "The description"
        }
    },
    "parameterName2" : {
        "type" : "string",
        "metadata" : {
            "description" : "The description"
        }
    },
    "parameterName3" : {
        "type" : "string",
        "metadata" : {
            "description" : "The description"
        }
    }
}

################################

Important : If a parameter is present in the definition file but not in the parameter file, an error will be thrown during the policy creation.

4- How to create and assign the policy ?

Use the following script to create and assign the policy

###################

#Variables
$PolicyName = "PolicyName"
$PolicyFile = "Path of the JSON policy definition file"
$PolicyFileparam = "Path of the JSON policy parameter file"
$ScopeID = "Type here the scope ID"

#Params
$param1value = "the value of parameter 1"
$param2value = "the value of parameter 2"
$paramNvalue = "the value of parameter N"

#Create the definition
$definition = New-AzureRmPolicyDefinition -Name $PolicyName -Policy $PolicyFile -Parameter $PolicyFileparam

#Assign the policy
New-AzureRMPolicyAssignment -Name $PolicyName -Scope $ScopeID -PolicyDefinition $definition -PolicyParameterObject @{"Param1"=$param1value; "Param2"=$param2value; "ParamN"=$paramNvalue}

###################
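For the VM sizes scenario, and assuming the parameter file declares a single array parameter named listOfAllowedSKUs (as in my earlier sketch), the creation and the assignment for subscription B could look like this (all names and values are illustrative):

#Powershell example (illustrative values)
$definition = New-AzureRmPolicyDefinition -Name "AllowedVMSizes" -Policy "C:\path\AllowedVMSizes.json" -Parameter "C:\path\AllowedVMSizes.params.json"
New-AzureRMPolicyAssignment -Name "AllowedVMSizesB" -Scope "/subscriptions/SubB-ID" -PolicyDefinition $definition -PolicyParameterObject @{"listOfAllowedSKUs" = @("Standard_A1","Standard_D2_v2")}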

Azure Managed vs Unmanaged disks : The choice

Hi folks,

Recently (a few months ago), a new feature was announced that brings a new capability to Azure virtual machines : Azure Managed Disks.

Many blog posts explain well the purpose of managed disks and how they enhance Azure IaaS virtual machines. I recommend the following readings :

The latter post shows the advantages of using Azure managed disks, which I agree with and confirm. But in the meantime, there are some 'inflexible' properties of managed disks that may not be suitable for your expectations. This is the purpose of this post : which model fits my requirements, managed or unmanaged disks?

1- The main difference between Managed disks and Storage Accounts based disks

There are some main differences between managed and unmanaged disks :

Management
  • Managed disks : an ARM (Azure Resource Manager) object (resource)
  • Unmanaged disks : not an ARM resource, but a file (.vhd) residing in an Azure Storage Account; the storage account itself is an ARM object

Size
  • Managed disks : sizes are fixed (and can be resized), which means you cannot choose a custom size; you will need to pick from a list. See (1)
  • Unmanaged disks : you can choose the disk size during provisioning (and resize it later) when using Standard Storage. See (2)

Cost
  • Managed disks : with Standard Storage, you pay a fixed price per disk size (per month) whatever the disk usage is, plus the operations cost*; with Premium Storage, a fixed price per disk size whatever the disk usage is. See (1)
  • Unmanaged disks : with Standard Storage, you pay the GB/month disk usage (you pay only for what you consume), plus the operations cost*; with Premium Storage, a fixed price per disk size whatever the disk usage is. See (3) and (4)

Performance
  • Managed disks : predictable performance, with Standard Storage (500 IOPS) or Premium Storage (depends on the disk)
  • Unmanaged disks : only Premium Storage disks have predictable performance (depends on the disk). Standard Storage disks have predictable performance (500 IOPS) unless they are impacted by the Storage Account performance limits (a maximum of 40 disks per standard Storage Account is recommended, otherwise disks can be throttled). See (5)

Availability
  • Managed disks : when placing VMs using managed disks in an Availability Set, disks are placed in different fault domains in order to achieve a better SLA (the Availability Set SLA covers only compute)
  • Unmanaged disks : when placing VMs using unmanaged disks in an Availability Set, there is no guarantee that the disks are placed in different fault domains, even if they are in different Storage Accounts

Redundancy
  • Managed disks : LRS
  • Unmanaged disks : LRS, GRS

Encryption
  • Managed disks : ADE, SSE (coming soon)
  • Unmanaged disks : ADE, SSE

* Operations cost means : replication data transfer cost (in case of GRS) + storage operations costs

 

2- Are managed disks more expensive than unmanaged disks ?

The answer is : it depends, but in most cases managed disks are more expensive than unmanaged disks. Let's prove it :

Standard Storage disk cost per month :

  • Managed disk cost = fixed cost (per disk size) + operations cost
  • Unmanaged disk cost = storage_usage_in_GB * cost_per_GB + operations cost

Because the operations cost is the same for both models, we will omit it from the calculation. And because the managed disk pricing model is not per usage, we will calculate the disk size equity* value in order to compare with unmanaged disks :

Managed disk type   Size (GB)   Price   Cost per GB   Disk size equity*
S4                  32          1.3     0.040625      31
S6                  64          2.54    0.0396875     60
S10                 128         4.98    0.03890625    118
S20                 512         18.36   0.035859375   435
S30                 1024        34.56   0.03375       818

(Standard Storage price per GB : 0.0422)
* If you use less than the given equity size, unmanaged disks will cost less than managed disks. If you use more, managed disks will cost less than unmanaged disks. The effective price per GB grows as you consume less storage space.
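The disk size equity is simply the managed disk monthly price divided by the standard storage price per GB. A quick check of the S4 row:

#Powershell check (prices taken from the table above)
$s4MonthlyPrice = 1.3
$standardPricePerGB = 0.0422
[math]::Round($s4MonthlyPrice / $standardPricePerGB)   #returns 31 : below ~31 GB of real usage, the unmanaged disk is cheaper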

NB : New disk sizes have been announced (6) that finally put an end to the 1 TB disk size limit, with 2 and 4 TB sizes for managed disks and up to 4 TB for unmanaged disks. The rollout started in the West US Central region and will be generalized to the remaining regions during the coming months.

3- Do I really need managed disks ?

This is a good question, but the answer depends on your needs. As you have probably read in the posts mentioned earlier, there are many benefits to using managed disks:

  • Disk snapshots
  • Predictable performance
  • Distribution in different fault domains when associated with Availability Sets
  • ARM object

Some workarounds can be used to obtain similar properties with unmanaged disks:

  • Disk snapshots : no workaround
  • Predictable performance : place fewer than 40 disks per Storage Account
  • Distribution in different fault domains when associated with Availability Sets : no workaround. There is no way to know whether the disks are placed in different pools, even if they are in different Storage Accounts
  • ARM object : place each disk in its own Storage Account. Check whether this fits your needs (do not forget quotas)
 

4- Verdict

As you can see, managed disks bring a new experience and new features to Azure VM storage that permit better control of VM storage. Personally, I would recommend using managed disks, even if the 'pay as you consume' model is not adopted there : the features and the simplicity are worth the 'little' difference in pricing.

How to protect and backup your Branch and Remote offices data (Files and Folders) ?

Hi everyone,

Since the first days of the adoption of information systems by companies, backing up workloads has been crucial and a production blocker : no production without backup; no backup, no business.

Today, companies better master and understand their backup needs and solutions, and they are continually seeking better, simpler and more cost-effective backup software.

One of the 'headache' subjects that bothers the majority of backup admins and decision makers is the backup of Remote Offices / Branch Offices (ROBO) 'files and folders' data.

In this post, I will show why Azure Backup via the MARS agent is your best choice to get rid of the ROBO workloads backup problem. I will present :

  • Use cases for using Azure Backup via the MARS agent
  • What you need to know in order to be comfortable with this solution
  • The steps to plan and start using Azure Backup via the MARS agent for your ROBO locations

1- Use cases for using Azure Backup via MARS agent

Azure Backup is the name of a complete enterprise backup solution allowing several backup scenarios and using the latest technologies, especially the ability to back up to the cloud and to benefit from a GFS model (a model allowing efficient long-term retention policies).

What is interesting about Azure Backup via the MARS agent is that it allows you to back up your files and folders without deploying a backup infrastructure or a storage infrastructure. This opens up a lot of use cases :

Backup without backup infrastructure

The following picture shows the end-to-end data journey from your Windows server or workstation to the cloud storage (more details about the components later in this post). As you can see, the backup requires only installing the Azure Backup agent (MARS agent : Microsoft Azure Recovery Services agent) and configuring it to back up data to a cloud location (Recovery Services vault)

This is fantastic, since it removes the classic requirements for enabling workload backup :

  • Backup software infrastructure (backup server, backup proxy…)
  • Local storage : no need for a SAN or a NAS. Azure Backup will directly send data to the cloud using an Internet connection

Short and Long term retention without backup infrastructure

In addition to the value discussed above, Azure Backup provides both short-term and long-term retention within the same policies. No need for tapes, and no need for an external provider to handle them. Azure Backup uses a GFS model to allow long-term retention without any additional configuration. You can reach up to 99 years of retention for up to 9999 recovery points (these values may change in the future).

Low bandwidth/Latency ROBO locations

The Azure Backup agent supports throttling (2) the data transfer to the cloud location (not on all operating systems). This is very important for ROBO locations with limited bandwidth, where backing up to a central backup repository is not an option.

 

2- What do you need to know

In this section, I will summarize the important information you need to know about Azure Backup (especially with the MARS agent). This information will give you the ability to decide, design and implement Azure Backup in your information system.

2.1- Pricing

Fortunately, the Azure Backup pricing is very simple. It's well explained in the official documentation (1), but to summarize:

When you back up a workload, you pay for :

  • An Azure Backup fixed cost for each backed-up instance (the cost depends on the size of the data being backed up)

 

  • The storage used by the recovery points:
    • You can choose between LRS and GRS storage (3). To summarize, LRS (locally redundant storage) is storage available only within the region where you create the Recovery Services vault. GRS is storage replicated asynchronously to another paired region, providing protection against a region failure, but it is more expensive (4) (roughly twice the price)
    • The redundancy cannot be changed after the first workload backup, so be sure of your decision before going forward

 

For example, if you back up 4 Windows servers, you will pay:

  • 4 × the Azure Backup fixed cost
  • The cost of the Azure storage (cloud storage) used by the recovery points
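As a purely illustrative calculation (the prices below are placeholders, not current Azure pricing; check the pricing page (1) for real numbers):

#Hypothetical example : 4 servers, 100 GB of backup data each, LRS storage
$fixedCostPerInstance = 10        #placeholder monthly fixed cost per instance
$storagePricePerGB    = 0.0224    #placeholder LRS price per GB per month
$monthlyCost = (4 * $fixedCostPerInstance) + (4 * 100 * $storagePricePerGB)
$monthlyCost                      #48.96 with these placeholder values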

2.2- Requirements

In this section, I will summarize what you need in order to be technically ready to use Azure Backup (via the MARS agent).

 

2.2.1-  Azure Level

As discussed earlier in this post, you need a location to send and store backups : this is called a Recovery Services vault (RSV). An RSV is a Microsoft Azure resource, which means that you need to subscribe to Azure in order to deploy it. Subscribing to Microsoft Azure is very simple; there are many ways to achieve it, depending on your needs and the billing/relationship model you want. In order to use Azure, you need to create an Azure subscription (5). After creating it, you can directly, without any prerequisite, create a Recovery Services vault ready to host your backups (within minutes).

You will then need access* to the Recovery Services vault in order to begin. You can use the Azure RBAC roles (6) to obtain or grant the required permissions.

In order to back up files and folders via the MARS agent, you will just need:

  • The MARS agent installation file : allows you to install the agent on the required servers
  • The vault credentials : allow the MARS agent to find and authenticate to the Recovery Services vault

Both can be downloaded from the Azure portal via the Recovery Services vault blades.

* Technically, you don’t need access to the Recovery Vault to enable backups. An Admin can send you the required information instead.

2.2.2- Local level

By local level, I mean what you need at the server level (the server holding the folders and files to be backed up) in order to start backing up :

  • A supported operating system : only Windows is supported; Linux is not yet supported.
  • Internet connectivity : the agent needs an outbound Internet connection to the Azure services in order to send data. Using a proxy is supported. You can additionally limit the outbound flows to only the Azure public IPs (7) (and even further, to only the IPs belonging to the RSV region)

 

There are limitations regarding the supported operating systems, what you can back up, how often you can back up, and more. Please refer to the Azure Backup FAQ for complete information.

 

2.3- Security and data confidentiality

Azure Backup via the MARS agent provides many valuable security features; let me enumerate some of them :

  • You need a vault credentials file in order to register an agent to a vault. Only backup admins can download such a file from the Azure portal
  • Before enabling the backup, you will be prompted to provide a 'passphrase'. A passphrase is a 'complex password' used to encrypt data before sending it to the RSV. Data is encrypted and sent via HTTPS to the RSV, where it remains encrypted. Note that without this passphrase, you will not be able to restore data if you lose the original server (or its configuration), so the passphrase must be kept securely somewhere (you can use Azure Key Vault to store your secrets)
  • If your server is compromised, the attacker (hacker, malicious admin) cannot delete your recovery points. Azure Backup provides a security setting (enabled by default) that requires the 'remover' to log in to the Azure portal and generate a PIN code. The probability that the attacker owns the credentials to log in to the Azure portal is small. In addition, you can use the MFA feature of the Azure portal to further secure portal access.
  • In case of a ransomware/crypto-locker attack or infection, your backup data is protected, since the backup media is totally independent of the server.
  • Other security prevention features are also available (8) :
    • Retention of deleted backup data : backup data is retained for 14 days after a delete operation
    • Minimum retention range checks : ensures more than one recovery point exists in case of attacks
    • Alerts and notifications : for critical operations like 'stop backup with delete data'
    • Multiple layers of security : a security PIN is required for critical operations (already mentioned)

2.4- Monitoring and reporting

As you have noticed, there is no server and no console to install in order to monitor or see what is happening : everything is done via the Azure portal. You can use the Azure portal to view :

  • Backup items : view the backed-up items (server name, volume…)
  • Backup status : view the status of the backups, with filtering options
  • Backup jobs : see the backup jobs and their status, including the duration and the size of backup and restore operations
  • Notifications : configure and see the notifications related to the jobs. Currently, you can only configure notifications based on job status (critical, warning, information)

Currently, there is no reporting feature for Azure Backup in the portal, but this feature is coming very soon.

3- How to start : The plan

In this third and final section, I will present the steps to successfully plan and implement your 'folders and files' backup. The main steps are :

  1. Create a Recovery Services Vault
  2. Configure the vault
  3. Download the Recovery Vault credentials
  4. Install the MARS Agent on the server
  5. Create a backup policy and a schedule

This link shows the detailed steps to achieve the above steps : https://docs.microsoft.com/en-us/azure/backup/backup-configure-vault
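As a complement to the portal walkthrough, steps 1, 2 and 5 can also be scripted. The sketch below uses the AzureRM.RecoveryServices cmdlets for the vault, and the MARS agent's own cmdlets (available on the protected server once the agent is installed) for the policy; all names and values are examples, not prescriptions:

#Steps 1-2 : create and configure the vault (run from any admin machine)
New-AzureRmResourceGroup -Name "rg-backup" -Location "West Europe"
$vault = New-AzureRmRecoveryServicesVault -Name "rsv-robo" -ResourceGroupName "rg-backup" -Location "West Europe"
#Choose the redundancy before the first backup; it cannot be changed afterwards
Set-AzureRmRecoveryServicesBackupProperties -Vault $vault -BackupStorageRedundancy GeoRedundant

#Step 5 : create a backup policy and schedule (run on the protected server)
$policy = New-OBPolicy
$schedule = New-OBSchedule -DaysOfWeek Saturday,Sunday -TimesOfDay 16:00
Set-OBSchedule -Policy $policy -Schedule $schedule
$retention = New-OBRetentionPolicy -RetentionDays 32
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention
$fileSpec = New-OBFileSpec -FileSpec "D:\Data"
Add-OBFileSpec -Policy $policy -FileSpec $fileSpec
Set-OBPolicy -Policy $policy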

The Azure Backup FAQ contains answers to most of your questions :

https://docs.microsoft.com/en-us/azure/backup/backup-azure-backup-faq

To finish, here are my recommendations when planning to implement Azure Backup via the MARS agent:

Are my source servers located in the same region ?
It's recommended to back up data to the nearest location in order to benefit from better performance/latency during backup and restore operations.

Do I need to back up to the same RSV ?
No, but to keep the design simple, it's better to minimize the number of RSVs for a group of similar servers.

When do I need to back up to different RSVs ?
What can differentiate two Recovery Services vaults :

–  The redundancy of the storage (LRS or GRS)
–  The user rights on the RSV
–  The vault credentials

So :

–  If your data has different importance levels and you want to optimize costs, you can create LRS RSVs for less important data and GRS RSVs for more important and critical data
–  You can give permissions to access or manage the Recovery Services vault. If you want different security levels for your vaults, you can create multiple RSVs
–  The vault credentials are unique to an RSV. A user with a valid vault credentials file (it expires after 2 days) can back up data to the vault

Should I use the same passphrase for each server ?
No. This is absolutely not recommended, for the simple reason that someone who compromises the passphrase can access all your servers' restore points (they will also need a valid vault credentials file).

 

Useful Links:

 

(1) Azure Backup pricing : https://azure.microsoft.com/en-us/pricing/details/backup/

(2) Azure Backup agent network throttling : https://docs.microsoft.com/en-us/azure/backup/backup-configure-vault

(3) Azure Storage redundancy : https://docs.microsoft.com/en-us/azure/storage/storage-redundancy

(4) Azure Storage pricing : https://azure.microsoft.com/en-us/pricing/details/storage/blobs-general/

(5) Designing Azure Subscriptions : https://buildwindows.wordpress.com/2016/03/30/azure-iaas-arm-architecting-and-design-series-azure-subscriptions/

(6) Azure Backup roles : Backup Contributor, Backup Operator, Backup Reader

(7) Azure Public IP ranges : https://www.microsoft.com/en-us/download/details.aspx?id=41653

(8) Azure-backup-security-feature : https://azure.microsoft.com/en-us/blog/azure-backup-security-feature/

(9) Azure subscription and service limits, quotas, and constraints : https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits

New Azure Portal Feature : Find your Quotas and Limits values !

Hello All,

This is a quick post about a fresh new Azure portal feature that will help a lot of admins in some cases.

You all know that you cannot create as many Azure resources as you want : there are limits and quotas on the number of deployed resources. Such information is very important, and I would say crucial, when designing your Azure infrastructure.

Some examples :

  • Network Security Groups : by default, you cannot create more than 100 NSG objects within an Azure region (this is the Azure Resource Manager limit model; the ASM limit model is per subscription, not per region). So if you are using NSGs to secure your environment, you will need to track the object count usage –> this is the purpose of this post
  • Static Public IP addresses : by default, you cannot create more than 20 static public IP addresses within an Azure region, so monitoring and tracking this resource usage is important

You can always visit the official link for the latest information about the service limits, quotas and constraints. Keep in mind that for several resources, you can ask Microsoft Support to increase a limit value.

Back to the main goal of this post : you can now consult the usage of your resources and their status against the quota values.

Go to the Azure Portal (Portal.azure.com) –> Subscriptions –> Select the Subscription –> Usage + Quotas


You can filter the items to get a more customized view, and you can use the provided link to directly open a support case to increase the limits.
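If you prefer PowerShell to the portal, usage and quota values can also be queried per region. A small sketch with the AzureRM cmdlets (Get-AzureRmNetworkUsage requires a recent AzureRM.Network module version):

#Compute quotas and current usage for a region
Get-AzureRmVMUsage -Location "West Europe"
#Network quotas (NSGs, public IP addresses, ...)
Get-AzureRmNetworkUsage -Location "West Europe"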

How to edit an existing Azure Custom RBAC role ?

Hello all,

Azure provides the ability to create custom roles in order to better fit your needs and give admins more flexibility in choosing the permissions they grant to users.

Many posts discuss Azure RBAC and custom roles; here are some materials:

In this post, I will clarify the right method to modify an existing custom role.

When you create a custom role, you configure several parameters:

  • Custom Role Name
  • Custom Role description
  • Custom Role Actions
  • Custom Role No-Actions
  • Custom Role assignable scopes

There are some scenarios where you would like to change one or more of these definitions, for several reasons:

– You already created a custom role assigned to only some scopes, and you want to extend or reduce those scopes

– You decided to add or remove an Action or a No-Action on an existing custom role

– You noticed a typo in the description and decided to change it

– And more reasons can come…

How to proceed ?

This step-by-step guide uses Azure PowerShell, so download and install Azure PowerShell before proceeding (Download and Install Azure PowerShell).

As an example, I will make several changes to the custom role "Azure DNS Reader", which initially has a scope at the subscription level "/subscriptions/1111111-1111-1111-11111-11111111111". The changes are:

  • New name –> Azure DNS Zone RW
  • New description –> Lets you view and modify Azure DNS zone objects
  • Add or remove an Action –> Microsoft.Network/dnsZones/write
  • Add or remove a No-Action –> Microsoft.Network/dnsZones/write
  • Add or remove a scope –> "/subscriptions/222222-2222-2222-2222-2222222222222"

1- Login to Azure

Login to Azure using the following command:

Login-AzureRmAccount

2- Get the Custom Role Definition :

  • If your custom role is assigned to the default subscription : $CustomRole = Get-AzureRmRoleDefinition -Name "Azure DNS Reader"
  • If your custom role is assigned to a scope : $CustomRole = Get-AzureRmRoleDefinition -Name "Azure DNS Reader" -Scope "/subscriptions/1111111-1111-1111-11111-11111111111"


3- Make changes* and commit

*Note that you can make all the changes and commit in a single final step

A- Change the role Name
$CustomRole.Name = "Azure DNS Zone RW"
$CustomRole | Set-AzureRmRoleDefinition


B- Change the role description
$CustomRole.Description = "Lets you view and modify Azure DNS zone objects"
$CustomRole | Set-AzureRmRoleDefinition


C- Add or Remove an Action
$Action = "Microsoft.Network/dnsZones/write"

$CustomRole.Actions.Add($Action)
#or to remove
$CustomRole.Actions.Remove($Action)

$CustomRole | Set-AzureRmRoleDefinition

D- Add or Remove a No-Action
$NoAction = "Microsoft.Network/dnsZones/write"

$CustomRole.NotActions.Add($NoAction)
#or to remove
$CustomRole.NotActions.Remove($NoAction)

$CustomRole | Set-AzureRmRoleDefinition

E- Add or Remove a Scope
$Scope = "/subscriptions/222222-2222-2222-2222-2222222222222"

$CustomRole.AssignableScopes.Add($Scope)
#or
$CustomRole.AssignableScopes.Remove($Scope)

$CustomRole | Set-AzureRmRoleDefinition

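Finally, you can verify that all the changes were committed by reading the definition back, using the new role name set in step A:

Get-AzureRmRoleDefinition -Name "Azure DNS Zone RW" | Format-List Name, Description, Actions, NotActions, AssignableScopes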