Microsoft Azure: The ways to upload VHDs to Azure : AZcopy

Hi All,

This week is all about the ways to upload VHDs to Azure. And guess what, I'm not done yet: every day I'm discovering something new. Remember my blog about the Azure PowerShell command Add-AzureVHD, which lets you upload VHDs to Azure. It's a powerful command that enables scripting and automation, but we saw some drawbacks with it, like the MD5 hash calculation that can take considerable time and that we can't skip. I don't know why I never thought of AzCopy. I had used AzCopy before to copy VHDs between two storage accounts, but I forgot that it can also upload files (VHDs) from on-premises to Azure.

AzCopy is a utility that gives you many options: copying files between Azure storage accounts and subscriptions, copying files from on-premises to Azure, making batch copies, and more. The good news is that the MD5 hash calculation is optional (and not enabled by default). Use AzCopy if your disk read throughput is low and you can't afford to wait for the hash calculation.

Keep in mind that AzCopy does not convert dynamic VHDs to fixed VHDs during upload. You need to upload fixed-type VHDs.
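As a quick illustration, here is what an upload might look like with the classic AzCopy.exe. This is a hedged sketch: the storage account, container, key and file names are placeholders, and the exact flags can vary between AzCopy versions.

```powershell
# Hypothetical example: storage account, key and file names are placeholders.
# /BlobType:page matters because Azure only runs VHDs stored as page blobs.
AzCopy /Source:C:\VHDs `
       /Dest:https://mystorage.blob.core.windows.net/vhds `
       /DestKey:<storage-account-key> `
       /Pattern:"server01.vhd" `
       /BlobType:page
```

Note that, unlike Add-AzureVHD, nothing here forces an MD5 pass over the whole file before the transfer starts.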

Download AzCopy here

See this Microsoft article describing the AzCopy options

See this step-by-step blog here


Microsoft Azure: The ways to upload VHDs to Azure (v2)

  UPDATE 12-22-2016

This post is also applicable to Add-AzureRmVhd with Azure PowerShell 1.0 for Azure Resource Manager

  UPDATE 

  • This post replaces my previous post Microsoft Azure: The ways to upload VHDs to Azure (Retired). The aim is to add an important piece of information related to CloudBerry Explorer
  • It also describes another utility

 

Hi all,

More and more customers are moving to Azure, or at least moving some workloads there, or are just starting to use it. In any of these cases, you may want to move some VMs to Azure so you can start using them in a production or test scenario. Before I continue: if you plan to migrate a whole platform to Azure, uploading VHD by VHD is not suitable for you; you should look for a more automated and complete solution. Look at my blog here, where I gave a good description of that topic.

But if you want to upload some VHDs to Azure, and you are lost in googling and binging, I hope you will find your answers here. So let's begin:

To upload virtual hard disks to Azure, you can use several tools:

CloudBerry Explorer for Microsoft Azure Blob Storage

This is just an excellent tool, and my favorite. CloudBerry Explorer offers a handsome UI where you can drag and drop VHDs between your local disks and Azure blob storage, and vice versa. You can initiate many simultaneous uploads, pause and resume uploads, and view the remaining upload time, all for free. In fact, CloudBerry Explorer comes in a free and a PRO edition. The PRO edition lets you do more things, like creating upload rules, analytic reports, multithreaded uploading, encryption, compression and more. But if you just want to upload some VHDs, the free version is really great. Update: only VHDs stored as page blobs will work in Azure, and CloudBerry copies files as block blobs by default, so you should use the Copy As Page Blob button at the top of the window. So, what are you waiting for? Start now here

Add-AzureVHD

This is a PowerShell command provided by the Microsoft Azure PowerShell module. Add-AzureVHD is great if you want to script several VHD uploads. You can download Azure PowerShell here. Follow this blog to get started with the Add-AzureVHD command. Add-AzureVHD is a powerful way to upload VHDs to Azure, but to be honest you may hit some limits and drawbacks. A scripted batch upload might look like the sketch below.
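Here is a minimal sketch of such a batch upload; the local folder, storage account and container names are placeholders, and the thread count is just an illustrative value:

```powershell
# Hedged sketch: paths, storage account and container names are placeholders.
# Uploads every VHD found in a local folder to the same Azure container.
$storageBase = "https://mystorage.blob.core.windows.net/vhds"

Get-ChildItem "D:\VHDs" -Filter *.vhd | ForEach-Object {
    Add-AzureVhd -LocalFilePath $_.FullName `
                 -Destination "$storageBase/$($_.Name)" `
                 -NumberOfUploaderThreads 8
}
```

Under the Resource Manager model mentioned in the update above, Add-AzureRmVhd takes the same kind of parameters plus -ResourceGroupName.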

Azure Storage Explorer

Azure Storage Explorer is a free utility for viewing and acting on Microsoft Azure Storage. It lets you upload VHDs to Azure blobs and perform several additional operations. Azure Storage Explorer can be downloaded here and is currently a preview version. I will not rate this tool because I only used it for 5 minutes. The upload experience was awful (no upload status, no upload percentage) and the UI froze unexpectedly.

Azure Drive Explorer

Azure Drive Explorer is a server/client tool that allows you not only to upload VHDs to Azure blobs, but also to upload files inside VHDs already in Azure. Azure Drive Explorer requires that you install its server component in Azure (by deploying packages into Azure), and you then use the client component to perform the uploads. If you want to test it, just go here, and good luck. PS: I have not tested or used this tool, so it will be up to you to rate it.

Voila, I'm done here. If you want to follow my recommendation, use CloudBerry Explorer. If you want to automate, script and batch uploads, if the disk(s) where your VHDs are located provide high read throughput, and if your VHDs are dynamic, use the Azure PowerShell command.

Microsoft Azure : Optimize VHD upload to Azure : Add-AzureVHD

Hi all,

It was an interesting week for me, dealing with 9 terabytes of VHDs to upload to Azure. To be honest, I was surprised by how long it took, because all the calculations we had made to predict the total upload time were unfortunately wrong. How and why?

To upload VHDs to Azure, I used the Azure PowerShell cmdlet Add-AzureVHD. You can use Add-AzureVHD by downloading and installing the Azure PowerShell module: download HERE. You should install the module on the machine from which you will initiate the upload, as in the sketch below.
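A minimal sketch of the session, assuming the classic (ASM) Azure module is installed; the subscription name, paths and URLs are placeholders:

```powershell
# Hedged sketch: subscription name, paths and URLs are placeholders.
Add-AzureAccount                                    # sign in to Azure
Select-AzureSubscription -SubscriptionName "MySub"  # pick the target subscription

Add-AzureVhd -LocalFilePath "D:\VHDs\app01.vhd" `
             -Destination "https://mystorage.blob.core.windows.net/vhds/app01.vhd"
```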

The aim of this post is not to explain how to use the Add-AzureVHD command, but to give you hints to get the best out of it.

The upload process

When you upload a VHD to Azure using Add-AzureVHD, the following steps are conducted:

Step 1: Hash calculation: an MD5 hash is calculated against the VHD. This step can't be skipped or avoided. The aim is to be able to check the VHD's integrity after its upload to Azure.

Step 2: Blob creation in Azure: a page blob is created in Azure with the same size as the VHD.

Step 3: Empty data block detection (for fixed VHDs only): the process looks for empty data blocks to avoid copying blank blocks to Azure.

Step 4: Upload: the data is uploaded to Azure.

How to optimize

Step 1: Hash calculation

The hash calculation time depends on three factors: disk speed, VHD size and processor speed. Let's optimize each factor:

  • Disk speed: the higher the read throughput, the faster the hash calculation. If your VHD sits on a SATA disk with 60 MB/s read throughput, the hash calculation will run at roughly 500 Mbit/s, so for a 500 GB VHD it will need more than two hours. Place your VHD on fast disks to obtain a significant time gain.
  • VHD size: the larger the VHD, the longer the hash calculation takes. Can we optimize it? The answer lies in using dynamic VHDs. A dynamic VHD is roughly the same size as the data it contains. Imagine a 500 GB fixed VHD containing just 100 GB of data: what a waste to go through 400 GB of blank blocks to calculate the hash. In addition, you can compact your dynamic VHDs before uploading them, which can reduce the VHD size further. Note that the blob created in Azure during the upload will be the size of the VHD for fixed VHDs and the maximum size for dynamic VHDs. But you don't have to worry about that when compacting your dynamic VHD, because you can later expand the VHD in Azure if you want a greater size.
  • Processor speed: the hash calculation is a mathematical operation, so clearly the faster the processor, the faster the calculation. However, today's processors are fast enough to handle such operations, and the bottleneck here is the disk read throughput, unless you are using an old 1 GHz dual-core processor to hash a VHD located on RAID 10 SSD drives behind a 10 Gbit/s FC SAN. Take a look at Task Manager during a hash calculation to see the processor usage.

Step 2: Blob creation in Azure

In this step, a blob with the same size as the VHD is allocated in Azure. There is nothing to optimize.

Step 3: Empty data blocks detection

This step is only performed if the VHD to be uploaded is a fixed-size VHD. The Azure cmdlet scans the VHD looking for empty data blocks. I really like this step because it can bring an enormous upload time gain. Imagine that you want to upload a 500 GB fixed-size VHD of which only 100 GB is really used: empty data block detection lets you gain 4x on upload time. On the other hand, this step is itself time consuming, because the whole VHD is processed to look for empty data; processing a 500 GB VHD can take more than one hour. This is why, again, uploading a dynamic VHD is more advantageous: there is no empty data to scan.

Step 4: Upload

This is the final step: the data is uploaded to Azure. The only optimization is to have a fast internet connection (fast upload link).

Lessons from experience:

  • The VHDs to be uploaded should be dynamically expanding VHDs
  • If your VHDs are fixed size, convert them to dynamic before uploading; you will save significant time (hash calculation + empty data block detection)
  • If your VHDs are already dynamic, try to compact them to the minimal size (hash calculation gain). You can then expand them to the desired size in Azure. A sketch of both operations follows this list.
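Both operations can be done with the Hyper-V PowerShell module. This is a minimal sketch with placeholder paths; note that Optimize-VHD's Full mode expects the VHD to be mounted read-only:

```powershell
# Hedged sketch using the Hyper-V PowerShell module; paths are placeholders.
# 1) Convert a fixed VHD to dynamic so hashing/scanning only touches real data.
Convert-VHD -Path "D:\VHDs\app01-fixed.vhd" `
            -DestinationPath "D:\VHDs\app01.vhd" -VHDType Dynamic

# 2) Compact an existing dynamic VHD; Full mode requires a read-only mount.
Mount-VHD -Path "D:\VHDs\app01.vhd" -ReadOnly
Optimize-VHD -Path "D:\VHDs\app01.vhd" -Mode Full
Dismount-VHD -Path "D:\VHDs\app01.vhd"
```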

Azure Virtual Machines : Storage IOPS and Throughput

Hi All,

This is a quick post about something I ran into while trying to understand data disk performance on Azure virtual machines.

If you read the official TechNet article where the different Azure virtual machine series and their configurations are detailed, maybe (like me) you will be confused about the data disk performance figures of the A-Series, D-Series and G-Series versus the DS-Series.

In the A-Series, D-Series and G-Series virtual machines, data disk performance is described using the metric: Max IOPS

In the DS-Series virtual machines, data disk performance is described using the metric: Max. disk IOPS and bandwidth

The questions that I was asking myself are:

  1. What does this IOPS figure mean? How much data per I/O?
  2. Why is there an additional bandwidth metric in the DS-Series?

And here, I will answer based on the information I got:

  What does this IOPS figure mean? How much data per I/O? 

IOPS means Input/Output Operations Per Second, and it is absolutely not a bandwidth figure. There is a tight relation between bandwidth and IOPS, but we need another parameter to link them: the input/output size. What does that mean?

Let's imagine a game you played as kids: you need to fill a bucket with water, cross the road, empty the bucket, re-cross the road, and keep doing that for 1 minute; at the end we see who wins. The winner is the one who carried the most water. Yes, the most water, not the most back-and-forths: it depends on how full the bucket is. If my bucket is completely full and I made 5 trips, and my friend's bucket is half full and he made 5 trips, I win! This is exactly the same logic: you first have to know the size of the input/output unit used, and you win.

So in the case of the A, D and G Series, using an IO unit of 8 KB with the 500 IOPS per disk results in approximately 8 × 500 = 4000 KB/s ≈ 3.9 MB/s.

This blogger ran a nice test to verify that, thanks to him: LINK HERE

In fact, he found that with a 4-disk stripe, only 1130 IOPS were achieved (we expect 4 × 500 = 2000). So either storage performance is really best effort and not guaranteed, or the test is not relevant, or, and I guess this is the most likely, the total throughput is throttled.

[Screenshot: IOPS test results from the linked blog]

In addition, I wonder why Microsoft did not publish the maximum bandwidth, because it can be calculated: for an 8 KB IO unit, the maximum bandwidth is (number of disks × 500 × 8)/1024 MB/s. Based on this, I could achieve 62 MB/s on an A11 VM and 250 MB/s on a G5 VM. Why didn't Microsoft make things clearer? This can't be true! There is certainly something I missed. The formula is illustrated in the sketch below.
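A small helper that just restates the post's formula (throughput = IOPS × IO size); the disk counts per VM size are the ones assumed in my calculation above:

```powershell
# Illustration of the formula above: throughput (MB/s) = disks x IOPS x IO size (KB) / 1024.
function Get-MaxThroughputMBps {
    param([int]$DiskCount, [int]$IopsPerDisk = 500, [int]$IoSizeKB = 8)
    ($DiskCount * $IopsPerDisk * $IoSizeKB) / 1024
}

Get-MaxThroughputMBps -DiskCount 16   # A11 with 16 data disks -> 62.5 MB/s
Get-MaxThroughputMBps -DiskCount 64   # G5 with 64 data disks  -> 250 MB/s
```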

  Why is there an additional bandwidth metric in the DS-Series? 

DS-Series VMs can use Azure Premium Storage. Azure Premium Storage is a new storage service that gives you high throughput, low latency and maximum performance. Premium Storage is based on SSD disks. To use Premium Storage, you need a premium storage account.

To confirm the high performance of its Premium Storage service, Microsoft added an additional metric to describe the storage throughput.

[Screenshot: DS-Series Max disk IOPS and bandwidth]

But what does this mean?

First, it's a little complicated (just a little), but I will explain it here as simply as I can (my explanations are based on this Microsoft article: http://azure.microsoft.com/en-us/documentation/articles/storage-premium-storage-preview-portal/)

  • With Azure Premium Storage you can achieve 50,000 IOPS and 32 TB of storage. The throughput is not mentioned here because it has to be calculated
  • Premium Storage is based on three Azure storage disk types: P10, P20 and P30. Your VHDs will be stored on disks of these types. You need to know that the IOPS and throughput depend on the disk size (and therefore on the VHD size)

[Screenshot: P10/P20/P30 disk specifications]

The table above shows the P10, P20 and P30 specifications:

A P10 disk is 128 GB; it can achieve 500 IOPS and up to 100 MB/s.

A P20 disk is 512 GB; it can achieve 2300 IOPS and up to 150 MB/s.

A P30 disk is 1024 GB; it can achieve 5000 IOPS and up to 200 MB/s.

You can see that the specifications are not linear: capacity, IOPS and throughput are not linearly dependent. For example, the P20 disk is 4 times larger than the P10 disk, but its IOPS figure is only a little more than 4 times greater, and its throughput is only 1.5 times greater. So we need to be sharp when we create our VHDs.

Let's see some examples to better assimilate the facts.

Example 1: I want to create a 200 GB VHD (option 1)

Azure will round up your choice, so the most appropriate disk is a P20, because P10 disks are only 128 GB and the VHD will not fit. You will then benefit from 2300 IOPS and up to 150 MB/s.

Example 2: I want to create a 200 GB VHD (option 2)

You create 2 VHDs of 100 GB each. Azure will place these two VHDs on P10 disks (since they are smaller than 128 GB). Then you use Windows (or Linux) to create a stripe (Storage Spaces, for example) across these 2 VHDs. The result is a 200 GB volume with 1000 IOPS and 200 MB/s.

Example 3: I want to create a VHD with 600 GB and 400 MB/s of throughput

You will not get that throughput by just creating a 600 GB VHD, because Azure will place it on a P30 disk, giving you only 200 MB/s.

To achieve it, you should use striping, and you can proceed in different ways:

Way 1: You create two 600 GB VHDs. Azure will place them on P30 disks. Then you use your striping tool (Storage Spaces) to create a 1200 GB volume. This volume will deliver 400 MB/s and 10,000 IOPS, but you will have 600 unneeded GB.

Way 2: You create 3 VHDs of 200 GB each. Azure will place them on P20 disks (Example 1, option 1). Then you use your striping tool (Storage Spaces) to create a 600 GB volume. This volume will deliver 450 MB/s (150 MB/s × 3) and 6900 IOPS (2300 IOPS × 3). A Storage Spaces sketch for this layout follows below.
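Here is a hedged Storage Spaces sketch of Way 2, run from inside the guest OS. The pool and disk names are placeholders, and it assumes the three 200 GB data disks are attached and still unallocated:

```powershell
# Hedged sketch of Way 2 inside the guest (Windows Storage Spaces).
# Assumes three 200 GB data disks (P20) are attached and unallocated.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "DataPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "Windows Storage*").FriendlyName `
    -PhysicalDisks $disks

# "Simple" resiliency = striping; 3 columns spread every IO across the 3 disks.
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "Striped" `
    -ResiliencySettingName Simple -NumberOfColumns 3 -UseMaximumSize

Get-VirtualDisk -FriendlyName "Striped" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume
```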

Example 4: I want to create a VHD with 600 GB and 600 MB/s of throughput

Unfortunately, we can't just dream and ask Azure to do it, at least not today. In fact, the maximum possible throughput is 512 MB/s; we can't do better.

  IMPORTANT   

  • The total data storage, IOPS and throughput are limited by the VM series and size. Each Azure virtual machine type is limited to a number of disks (total storage size), a maximum IOPS and a maximum throughput. For example, you can achieve 400 MB/s (Example 3) only on a Standard_DS14 VM; all other VM types will throttle your IOPS or throughput when you reach their threshold. The following picture (Microsoft credit) shows the DS-Series maximum storage performance

[Screenshot: DS-Series maximum storage performance]

Follow this link for all the Azure virtual machine types, sizes and specifications: https://msdn.microsoft.com/en-us/library/azure/dn197896.aspx

  • You should know that Azure will throttle storage whenever either of the two thresholds is reached: IOPS or throughput. Take a DS2 VM: if you run a 2000 IOPS workload with a 100 KB IO unit, you reach a throughput of 195 MB/s. This exceeds the DS2's 64 MB/s threshold, so Azure will throttle your disk access; this results in a huge IO queue and you will suffer performance degradation.
  • When you choose your storage, you are choosing your VM too, and vice versa. So make a reasonable calculation before spending money

  Conclusion 

Azure is relatively new, new features and terms arrive every day, and the lack of information and the ambiguity are a real headache, especially for people who have to make a decision. Storage for virtual machines in Azure is a bit tricky, and Microsoft does not provide any tool where we can enter our needs and get a list of possible configurations. Until that day, I invite you to proceed as follows:

  1. Determine your storage needs: (1) how many data disks, (2) how much IOPS and throughput for each disk
  2. Calculate the options when converting to Azure storage (P10, P20, P30), as in Examples 1, 2, 3 and 4 above
  3. Match the results to the Azure virtual machine series to find the most suitable size: for each option found in step 2, find the VM sizes that fit
  4. Create a table with all this information, exposing all the scenarios
  5. Calculate the cost of each scenario
  6. Choose the best spec/cost VM

The way to migrate from anywhere to Azure: Migration Accelerator (MA)

MA is no longer supported, and was replaced by Azure Site Recovery

Hi all,

I’m continuing my blog series about the ways to migrate workloads to Hyper-V or Azure. I started with the VMM V2V feature, then I blogged about MVMC, and today I’m completing this blog series with this last post. You can find the previous posts here:

  1. The way to migrate to Hyper-V / Azure : Introduction
  2. The way to migrate  VMware to Hyper-V: SCVMM V2V
  3. The way to migrate VMware/Physical to  Hyper-V / Azure: MVMC/MAT

So, what if I tell you that Microsoft found Aladdin's magic lamp, and the lamp's name is InMage? InMage is a software company based in the US and India. It markets a product line called Scout that uses continuous data protection (CDP) for backup and replication. Scout consists of two product lines: the host-offload line, which uses a software agent on the protected servers, and the fabric line, which uses an agent on the Fibre Channel switch fabric.

Microsoft bought InMage in July 2014, and the goal of the acquisition is to use Scout to offer customers a powerful solution to protect and migrate their infrastructure using Microsoft Azure. The overall solution is called Migration Accelerator (MA). Today, MA is in its first version and is currently in preview. MA is only available in North America, which means you can only use it to protect and replicate workloads to North American Azure regions.

  Is Migration Accelerator free? 

The MA preview is free, but Microsoft has not said anything about MA licensing. Maybe it will be free, maybe it will be paid; hard to guess. If it is paid, I think it will be licensed per protected or migrated instance. Anyway, keep your fingers crossed.

  A small enlightenment 

MA is not a conversion tool but a replication tool. It is agent-based, i.e. an agent is installed in each operating system to protect, and data is replicated from inside the OS. This is why, as we will see, the mechanism differs from conversion tools like MVMC.

  What can we do with MA 

So let's talk about MA directly. I guess the first thing we think about when we hear about MA is the migration paths.

  Migration paths 

MA is a real piece of jewelry for Microsoft Azure, because it creates Azure lovers.

  MA can replicate workloads from 

  • VMware vSphere
  • VMware vCenter
  • VMware vCloud
  • Amazon Web Services
  • Hyper-V hosts
  • Physical servers

  To 

  • Microsoft Azure

Yes, as you can see, the target platform is always Azure: no Hyper-V, no VMM clouds. Microsoft is shifting gears; the ultimate goal is to accelerate Azure adoption and give customers an efficient migration tool. The following picture shows the Cloud Services/Protection view in the MA (Preview) portal. You can see the icons used to connect MA to the source platforms.

[Screenshot: MA portal, Cloud Services/Protection view]

  How it works 

This is the best part, because here we discuss and explain how the solution works. You may notice that I always refer to MA as a solution and not a tool, and this is intentional. MA is like ASR (Azure Site Recovery): it's a service hosted in Azure that orchestrates replication and migration between different sites. It's the operation master, and it's accessed via a web portal. The current preview URL is https://ma-wus-01.cloudapp.net. You need a subscription to use MA, and, to everyone's delight, you can request a free preview subscription via this link

  Components   

The MA solution includes 5 mandatory components. These components must be correctly installed and configured before you can start protecting and migrating workloads. The following picture describes the components and their placement:

[Diagram: MA components and their placement]

  1. Mobility Service (MS): The MS is a lightweight OS-based service, i.e. what we call the agent. It has to be installed on every source machine to be protected. The MS captures data in real time and syncs source volumes to target volumes. The Mobility Service can be installed manually or pushed via the MA portal. Of course, the push method will almost always be used (easiest, fastest) unless you can't use it and need to do it by hand.
  2. Process Server (PS): This is an on-premises server that facilitates communication between the Mobility Service and the target virtual machines in Azure. The PS provides caching, queuing, compression, encryption and bandwidth management. You should dedicate a server or a virtual machine on your source platform (LAN) for the best performance. If you enable compression or encryption for the replication, the PS will be more CPU intensive, which is why sizing is important: the PS handles the replication of all your workloads (1 VM or 100 VMs...).
  3. Master Target (MT): The MT is the target for the replicated disks of the on-premises servers. It's installed on a dedicated Azure VM in an Azure subscription. Disks are attached to the MT to maintain the duplicate copies. Can you see it? This VM (like the PS) is very resource intensive: it handles all the replication traffic and commits the changes to the attached disks (VHDs). The hardware configuration of this machine should be carefully calculated or even oversized. Bad sizing of the PS or MT will result in poor performance, slow replication processing and possibly errors and timeouts.
  4. Configuration Server (CS): The CS manages the communication between the Master Target and the MA portal. It is installed on a dedicated Azure VM in an Azure subscription. Regular synchronization occurs between the CS and the MA portal to maintain a valid and consistent view of the overall process.
  5. MA Portal: This is the web portal we talked about: a multitenant portal to discover and configure protection and to migrate the on-premises workloads into Azure. It's a single pane of glass where you configure the protection and migrations.

[Screenshot: MA portal dashboard]

The MA portal includes a dashboard, a report view and a settings view. It also exposes a Support view to open a support ticket or view help links and articles related to MA.

  A fast jump to the portal   

So let's see what we can do with MA, and more precisely with the MA portal. After signing in, let's go to the Cloud Services view. It includes two options: Protection and Migration.

[Screenshot: Cloud Services view, Protection and Migration]

This is a good point: what is the difference between them? First, you should know that MA is a one-way replication and migration tool; it only allows replicating data to Microsoft Azure. Reversing the replication, even after failover, is neither possible nor supported. So if you plan to use MA as a DR-to-Azure tool, it's not what you need. MA is a migration tool to Azure.

  • Protection: the Protection view gives you the ability to initiate replication from on-premises to Azure. This is the first step: you discover your workloads and tell MA to replicate them to Azure.
  • Migration: this is the "failover" view; it lets you cut over from your on-premises environment and start using your workloads in Azure.

KEEP IN MIND: MA is a one-way replication and failover tool. After you fail over to Azure, you can't use it to go back to on-premises.

Now if we go to the Protection view, MA will ask us which Account to use

[Screenshot: account selection]

This is very interesting: since MA supports multiple accounts, partners can use MA to migrate their customers to Azure using the same portal and subscription. After selecting the account, the Protection view appears with two tabs: the Primary tab and the Cloud tab.

[Screenshot: Protection view, Primary and Cloud tabs]

The Cloud tab lets us add an Azure subscription, i.e. where to replicate the workloads. It also shows the Scout components that reside in Azure, namely the MT server and the CS server. The red arrow shows that the only possible target is Azure

[Screenshot: Primary tab]

The picture above shows the Primary tab. This tab allows us to add the on-premises workloads to replicate and migrate. As discussed at the beginning, 6 sources are possible: AWS, vSphere, vCloud, vCenter, Hyper-V and physical.

Let's select a resource to protect. The following picture shows that a related protection pane appears on the right. You should first (if it is not already done) install the Mobility Service (the agent) on the target, then click Protect to configure the protection (i.e. the replication). The red arrow points to a shield icon. In this picture the shield is grey because the machine is not protected; after you start the protection, it turns green.

[Screenshot: selecting a resource to protect]

What if I click Protect?

The following picture shows a Protect configuration page:

[Screenshot: Protect configuration page]

You can see that here you will configure several settings:

Replication policy: choose between replicating with or without compression. You can even create a custom policy with custom values.

Azure cloud account: choose the Azure subscription you will replicate to.

Configuration Server: choose which Configuration Server to use for this replication.

You also have to configure the Protection Details, such as the MT server, the retention drive, the Azure storage account to use and the Process Server (next picture)

[Screenshot: Protection Details]

  NB  Notice that when configuring protection, there are options for the global protection (Protection Options) and other options per protected instance (Protection Details). This is because you can include other instances under the same protection group; they share the Protection Options, each with its own Protection Details.

Finally, we can see here the Migration view, which permits a ONE-WAY failover to Azure

[Screenshot: Migration view]

  Conclusion 

Here we are, this was the last post in this series. The series intends to give you a global picture of how to migrate to Hyper-V and Azure and of the tools and solutions provided by Microsoft. There are other partner and third-party tools, like Double-Take Move and 5nine V2V, that can be used too. You can find below a table comparing these tools

[Table: comparison of migration tools]

Before you quit this page, keep in mind:

  • MA is not a real protection tool; it is a migration tool and a one-time failover tool. Migration is one-way only, always to Azure.
  • MA is still in preview; there may be feature changes or enhancements. The MA preview is free, and there is no word about the GA date or licensing.
  • If your on-premises datacenter uses Hyper-V and VMM, think about ASR (Azure Site Recovery) instead; don't use MA. ASR is more efficient, very specific to Hyper-V, and gives you two-way replication and failover to Azure

I will blog about a real MA and ASR scenario soon, where I will discuss these scenarios technically and share the step-by-step, recommendations, tricks, and also bugs and limits.

Thank you

The way to migrate VMware/Physical to Hyper-V / Azure: MVMC/MAT

Hi folks,

Welcome again to this third post of my blog series The way to migrate to Hyper-V / Azure. In the previous posts we saw the first common way to migrate and convert virtual machines from VMware to Hyper-V using SCVMM (you can check it HERE). We also saw that this option is not convenient for every scenario, given the mandatory dependency on VMM and other requirements.

Fortunately, Microsoft provides another tool that can convert not only VMware virtual machines but also physical machines: Microsoft Virtual Machine Converter (MVMC), today at version 3.0. The tool can be downloaded right here

  Short history 

  • MVMC 1.0 was released in 2013 to provide a way to convert VMware virtual machines and virtual hard disks to a Hyper-V 2012 platform. It provided a command-line interface for scripted conversions.
  • MVMC 2.0 was released in April 2014 and brought new features: support for vCenter and ESX(i) 5.5, support for more VMware virtual hardware versions (4 to 10), support for converting Linux-based virtual machines, and the ability to convert/migrate VMware VMs to Azure. MVMC 2.0 also added native PowerShell support for a more efficient automation and scripting experience.

  Today 

We all know that with the release of System Center 2012 R2, the P2V feature was removed from VMM. This was a real mess for the community, left without a clean Microsoft way to convert physical machines to Hyper-V.

Microsoft reacted and included the P2V feature in the third release of MVMC, MVMC 3.0. But to be honest, Hyper-V 2012 R2 and VMM 2012 R2 went GA in October 2013 and MVMC 3.0 a year after that, so what did Microsoft think customers would do in the meantime (convert the dirty way!)?

  Is MVMC free? 

MVMC is a Microsoft tool provided free of charge.

  MVMC in action 

  Requirements 

MVMC has to be installed on a machine running one of the following operating systems: Windows Server 2008 R2 SP1, Windows Server 2012, Windows Server 2012 R2, Windows 8, Windows 8.1. You must also install the following prerequisites:

  • Microsoft .NET Framework 3.5 and .NET Framework 4 if you install MVMC on Windows Server 2008 R2 SP1
  • Microsoft .NET Framework 4.5 if you install MVMC on Windows Server 2012/2012 R2 or Windows 8/8.1
  • The BITS Compact Server feature
  • Visual C++ Redistributable for Visual Studio 2012 Update 1

NB: The PowerShell cmdlets are only supported on Windows Server 2012/2012 R2 and Windows 8/8.1 because they require PowerShell 3.0.

You can then remotely convert your VMware VMs or physical hosts, assuming you have network connectivity to both the source and target platforms.

  What can we convert and what are the conversion paths? 

  •   Physical to Hyper-V 

P2V path

MVMC can convert physical machines running Windows Server 2008 / 2008 R2 / 2012 / 2012 R2 and Windows Vista / 7 / 8 / 8.1. Linux operating systems are not supported for P2V.

MVMC can convert physical machines to Hyper-V 2008 R2 SP1, 2012 and 2012 R2.

  NB 

  • Only online P2V is supported; offline P2V is not possible with MVMC
  • As you can see, we can't convert physical machines directly to Azure. As an alternative, convert them to Hyper-V, then use another tool or solution to send them to Azure (this will be the topic of the next post in this series)
  • The only target disk format is VHD; MVMC will not let you choose VHDX as a target disk format, even when the target hosts run Windows Server 2012 or 2012 R2. As a (heavy) alternative, you can convert the VHD to VHDX after the VM conversion using Hyper-V (this reminds me of the P2V feature in VMM 2012 SP1 🙂 where only the VHD format was supported too)

The following picture shows how the P2V process is triggered:

[Diagram: P2V process]

  1. MVMC connects to the source machine (the source OS), installs the MVMC conversion agent, and runs a system scan to gather information about the OS (operating system version, volumes, configuration)
  2. You select the volumes to convert, the target VM settings and the target Hyper-V host
  3. MVMC starts the conversion and triggers VSS for the volumes to be converted
  4. The data is copied to the target storage
  •   VMware to Hyper-V or Azure (V2V) 

[Diagram: V2V process]

  Update : Microsoft published KB 2977338 stating that converting VMs running under a standalone ESXi 5.5 host is not possible because of an ESXi design limitation; vCenter is needed. https://support.microsoft.com/en-us/kb/2977338

MVMC can convert VMware-based virtual machines running on ESX/ESXi 4.1, 5.1 and 5.5; vCenter 4.1, 5.1 and 5.5; vSphere 4.1, 5.1 and 5.5.

MVMC will let you convert VMware VMs running on version 5.0, but this case is simply unsupported by Microsoft.

  NB  Note that unlike SCVMM, MVMC can communicate with ESX/ESXi without the need for vCenter (except for ESXi 5.5)

MVMC can V2V virtual machines to Hyper-V 2008 R2 SP1/2012/2012 R2, but also to Microsoft Azure.

Unlike P2V, you can use VHD or VHDX as the virtual hard disk format for 2012 and 2012 R2 targets. For Azure, only VHD is supported.

  NB  At the time of writing, the maximum supported OS VHD size in Azure is 127 GB, so if your OS disk is larger, you have to shrink it below the 127 GB limit

[Screenshot: supported guest operating systems]

MVMC can convert Windows- and Linux-based virtual machines. For Windows, both server and client OSes are supported: server OSes from Windows Server 2008 to Windows Server 2012 R2, and client OSes from Windows Vista to Windows 8.1.

For Linux guests, MVMC can convert RHEL 5/6, Ubuntu 10.04/12.04, SLES 11, CentOS 5/6, Debian GNU/Linux 7 and Oracle Linux 5/6. Both x64 and x86 are supported.

  NB 

  • Additional steps may be required for Linux conversions; visit this TechNet link for detailed information
  • Not all of these versions are supported when converting to Azure; only guest OSes that are supported on Azure can be converted (look here in the "Supported configurations for converting virtual machines" section)
  • MVMC will uninstall the VMware Tools before converting Windows-based virtual machines; you will need to uninstall them manually for Linux virtual machines

The following picture shows the steps when converting a VMware virtual machine using MVMC

[Diagram: V2V conversion steps]

  1. MVMC connects to vCenter or ESX/ESXi and selects the target VM
  2. MVMC creates a VM snapshot, uninstalls the VMware Tools, then stops the virtual machine
  3. MVMC copies the data and creates the VHD/VHDX on the target storage. In the Azure case, the VHD is stored in a temporary location
  4. After the conversion, MVMC restores the VMware virtual machine (applies the snapshot). In addition, if the target platform is Azure, the VHD is uploaded to Azure storage (blob)
  •   Migration Automation Toolkit (MAT) 

MVMC is a nice tool for converting VMware virtual machines, but its UI wizard only lets you convert one virtual machine at a time. So what if we have hundreds or thousands of VMware virtual machines to convert?

Fortunately, MVMC comes with native PowerShell support, so we can script and automate virtual machine conversions and trigger multiple simultaneous conversions at the same time. The good news is that Microsoft did the job for us and provided a set of PowerShell scripts that wrap MVMC with automation to permit parallel and batch conversion operations. You can download MAT here. A sketch of the underlying MVMC cmdlets follows.
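For context, this is roughly what MVMC's native PowerShell support looks like. A hedged sketch: the install path is the default one, the file names are placeholders, and it shows only the disk-level conversion cmdlet.

```powershell
# Hedged sketch of MVMC 3.0's cmdlet module; paths and file names are placeholders.
Import-Module "C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1"

# Convert a VMware VMDK to a dynamic VHDX (offline, disk-only conversion).
ConvertTo-MvmcVirtualHardDisk `
    -SourceLiteralPath "D:\Source\app01.vmdk" `
    -DestinationLiteralPath "D:\Converted\app01.vhdx" `
    -VhdType DynamicHardDisk `
    -VhdFormat Vhdx
```

MAT essentially orchestrates this kind of call across many VMs and, optionally, many MVMC servers.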

  So how does MAT work? 

[Diagram: MAT architecture]

  1. You run MAT on a machine, normally the one where MVMC is installed; this machine is called the control machine. MAT needs a SQL Server instance: you can use SQL Server Express or an existing SQL installation. MAT uses a small database to record virtual machine conversion states.
  2. MAT queries a vSphere server to collect the virtual machines managed by the ESX/vCenter server(s)
  3. MAT records this information in its database, then generates a text file containing all the VM names. You open this file with a text editor and delete the names of the VMs you do not want to convert
  4. MAT then starts the conversions using MVMC, based on the list you provided
  5. Because a single MVMC server can only convert 3 virtual machines at a time (by design, for performance reasons), you can add other MVMC servers to your configuration and tell MAT to spread the conversion tasks across them. These nodes are called helper nodes; using them accelerates simultaneous conversion operations.

  Conclusion 

In this post, we have seen how to perform P2V and V2V to a Hyper-V or Azure platform. Microsoft Virtual Machine Converter offers both physical and virtual conversion capabilities, with great PowerShell support. For batch conversions, we can use MAT to convert several virtual machines at a time.

Remember:

  • Only V2V operations are possible with MAT; physical conversion is not supported with MAT
  • MVMC just copies the VHD to Azure storage; it is up to you to create and configure the virtual machine in Azure
  • MVMC causes downtime during V2V (the VMs are stopped), so you have to schedule a wide enough maintenance window for the conversion. If you cannot afford downtime, you can use MVMC to P2V your VMware virtual machine: MVMC only talks to the OS during a P2V and does not check whether that OS is virtualized or not. In that case, however, you will not be able to use MAT
  • You cannot convert physical machines to Azure. With MVMC, you have to do a P2V first, then use another tool to migrate to Azure
  • During a conversion to Azure, the VHD is stored on a local path and then uploaded to Azure after the V2V, so you will have to provide enough local storage, and the total V2V duration will be: conversion time + upload time

This was the third post in this blog series. In the next post, I will present the Migration Accelerator (MA), the solution that enables customers to migrate and convert everything to Azure.

Virtual Machine Manager: Granular permissions with Clouds and User roles

Hi All,

During my virtualization missions at customer sites, there are a lot of new questions that I answer on the spot, and a few others I have to read about before answering.

But there's one topic customers always ask about: they want to give some users permissions on specific virtual machines so they can perform some operations like shutdown, start, save, snapshot...

Out of the box, Hyper-V does not offer such granularity (by default, you are a Hyper-V admin or you are not). With the emergence of the cloud concept, Microsoft added to Virtual Machine Manager a way to achieve great things regarding resource pooling, resource partitioning, user permissions, and the list goes on.

Today, I will talk about features that were introduced with VMM 2012: clouds, user roles and self-service users.

Briefly, I will define these three terms to be accurate:

Clouds: A cloud is a container of VMM host groups (i.e. Hyper-V servers) + storage + networking + resources (VM templates...). Clouds let you divide your infrastructure into logical pieces with a defined configuration. Example: I want a cloud that uses the production cluster but only 20 virtual CPUs, 30 GB of memory and 500 GB of storage, and that only connects to the database network.

User role: A user role is a role (let's say a group) where you define permissions: which cloud the role can use, how many resources it can consume, what actions it can perform, and so on.

Self-service user: SSUs can create, deploy and manage their own virtual machines and services by using the VMM console or a web portal (the web portal was retired in VMM 2012 SP1; as an alternative, you can use App Controller or Windows Azure Pack. App Controller will be discontinued and will not be released in System Center vNext). A self-service user has to be part of a user role.


You can find below a step-by-step guide for one of the use cases of VMM clouds.

  • The goal

I need to allow some users to perform only specific actions on specific virtual machines

  • How

Using SCVMM (2012 R2 in this post), we can achieve that. Here is how:

  1. Create a cloud that covers the host group where our hosts (the ones hosting the VMs) belong
  2. Assign the VMs to this cloud so they appear in that cloud
  3. Create a user role with the specific permissions we want
  4. Add the user to the user role
  5. Assign the user role to this cloud
  6. Share the VM access with the created user role
  7. Install the VMM console on the users' PCs or publish it via RDS RD Web Access (for script lovers, a PowerShell sketch of steps 2 to 6 closes this post)

Here are my scenario details (the bold items are the parameters you will adapt to your environment):

  • The VM I want to give access to is named A1 and is hosted on the SRV-HV02 server
  • The host where the VM resides (SRV-HV02) is under the Earth host group
  • The cloud I will create is named DEVCLOUD
  • The permissions I want the user to have are: shut down VM, start VM, connect to VM, stop VM
  • The user role I will create is named DEVCLOUDROLE
  • The user that will use this VM is named Ali

1- Create a cloud that covers the host group where our hosts (the ones hosting the VMs) belong

  • Go to the SCVMM console -> VMs and Services -> All Hosts and verify that SRV-HV02 is under the Earth host group (in your case, locate the host group your server belongs to)

Host group

  • Now go to the Clouds view, right-click Clouds, and select Create Cloud

Create Cloud

  • Type a name for your cloud, in my example DEVCLOUD

Cloud Name

  • Choose the host group the Hyper-V server or cluster is located in

Cloud host group

  • Click Next until you reach the Capacity step. Because this cloud is only for management, choose the minimum value for all the dimensions

Cloud capacity

  • Click Next, then Finish; the cloud will be created with empty resources

Cloud End

2- Assign the VMs to this cloud so they appear in that cloud

  • Now go to your VM, right-click it and choose Properties

A1 Prop

  • In the Cloud drop-down list, choose the cloud you created (DEVCLOUD)

A1 belong to DEVCLOUD

  • Verify that the VM you configured now shows in the DEVCLOUD view

DEVCLOUD view A1

3-Create a user role with the specific permissions we want 

4-Add the user to the user role

5- Assign the user role to this cloud

  • Now go to Settings, Security, User Roles. Click Create User Role; the Create User Role wizard starts. Type a name for your role (DEVCLOUDROLE)

Create role

  • For the user profile, choose Application Administrator (Self-service User) then click Next

Create role 2

  • Now, add the members that you want to be part of this role (LAB\Ali)

Create role 3

  • In the scope step, choose the cloud you want this role to be mapped to (DEVCLOUD)

Create role 4

  • On the Quota page, you can leave the default values in this case, because the user role will not be allowed to place anything in this cloud; alternatively, set all the values to 0

Create role 5

  • This is an important step: select the actions the users inside this user role are allowed to perform. In our case, I selected Remote Connection, Shutdown, Stop and Start

Create role 6

6. Share the VM access with the created User Role

Go to VMs and Services, select your VM and choose Properties. In the Access tab, add the user role (in our case DEVCLOUDROLE) under "Shared with these self-service users and roles"

VM properties

It's done; now you have to provide the VMM console to your users:

  • You can install the VMM console (using the VMM installation media) for each user who will use VMM
  • If you have RDS (Remote Desktop Services) in your infrastructure, install the VMM console on an RDSH server and publish it

When a user opens the console with his credentials, he will only be able to perform the exact actions configured in the role settings.
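For script lovers, here is the promised sketch of steps 2 to 6 using the VMM cmdlets. The names match this post's scenario, but treat the calls as illustrative rather than a tested script; in particular, the allowed actions are still selected in the wizard, and sharing a resource via Grant-SCResource is how I would sketch step 6 (assumption, verify against your VMM version):

```powershell
# Hedged sketch of steps 2 to 6 with the VMM PowerShell cmdlets.
$cloud = Get-SCCloud -Name "DEVCLOUD"
$vm    = Get-SCVirtualMachine -Name "A1"

# Step 2: assign the VM to the cloud so it shows in the DEVCLOUD view.
Set-SCVirtualMachine -VM $vm -Cloud $cloud

# Steps 3-5: create the self-service role, add the member, scope it to the cloud.
$role = New-SCUserRole -Name "DEVCLOUDROLE" -UserRoleProfile SelfServiceUser
Set-SCUserRole -UserRole $role -AddMember "LAB\Ali" -AddScope $cloud

# Step 6: share the VM with the role (allowed actions are set in the wizard).
Grant-SCResource -Resource $vm -UserRoleName "DEVCLOUDROLE"
```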