Monday, July 20, 2015

Copying BLOBs Between Azure Storage Containers

In the past, when I needed to move BLOBs between Azure Containers, I would use a script that I put together based on Michael Washam's blog post Copying VHDs (Blobs) between Storage Accounts. With my latest project, however, I actually needed to move several BLOBs from the Azure Commercial Cloud to the Azure Government Cloud.
Right off the bat, the first problem is that the endpoint for the Government Cloud is not the default endpoint when using the PowerShell cmdlets. After spending some time updating my script to work in either Commercial or Government Azure, I was still not able to move anything. After a bit of "this worked before, why are you not working now?" frustration, it was time for plan B.

Plan B

Luckily enough, the Windows Azure Storage Team had put together a command-line utility called AzCopy. AzCopy is a very powerful tool: it will copy items from a machine on your local network into Azure, and it will also copy items from one Azure tenant to another Azure tenant. The problem that I ran into is that the copy is synchronous, meaning that it copies one item at a time, and you cannot start another copy until the previous operation has finished. I also ran the command in ISE rather than directly at the command line, which was not as nice. Run directly at the command line, AzCopy displays a status that shows the elapsed time and tells you when the copy has completed; in ISE, you only know your BLOB is copied when the script has finished running. You can read up on and download AzCopy from Getting Started with the AzCopy Command-Line Utility. This is the script that I used to move BLOBs between the Azure Commercial Tenant and the Azure Government Tenant.
$sourceContainer = "https://commercialsharepoint.blob.core.windows.net/images"
$sourceKey = "insert your key here"
$destinationContainer = "https://governmentsharepoint.blob.core.usgovcloudapi.net/images"
$destinationKey = "insert your key here"
$file1 = "Server2012R2-Standard-OWA.vhd"
$file2 = "Server2012R2-Standard-SP2013.vhd"
$file3 = "Server2012R2-Standard-SQL2014-Enterprise.vhd"
$files = @($file1,$file2,$file3)
function copyFiles {
    foreach ($file in $files) {
        & 'C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe' /Source:$sourceContainer /Dest:$destinationContainer /SourceKey:$sourceKey /DestKey:$destinationKey /Pattern:$file 
    }
}
function copyAllFiles {
    & 'C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe' /Source:$sourceContainer /Dest:$destinationContainer /SourceKey:$sourceKey /DestKey:$destinationKey /S
}
# copyFiles
# copyAllFiles
While I was waiting for my BLOBs to copy over, I decided to look back at my Plan A and see if I could figure out my issue(s).

Plan A

After cleaning up my script and taking a bit of a "Type-A personality" look at it, I noticed that I was grabbing the Azure Container object, but not grabbing the BLOB object before copying the item. Once I piped the container to the BLOB before copying, it all worked as expected. Below is my script; please notice that on the Start-AzureStorageBlobCopy cmdlet, I am using the -Force parameter to overwrite the destination BLOB if it already exists.
# Source Storage Information
$srcStorageAccount = "commercialsharepoint"
$srcContainer = "images"
$srcStorageKey = "insert your key here"
$srcEndpoint = "core.windows.net"
# Destination Storage Information
$destStorageAccount  = "governmentsharepoint"  
$destContainer = "images"
$destStorageKey = "insert your key here"
$destEndpoint = "core.usgovcloudapi.net" 
# Individual File Names (if required)
$file1 = "Server2012R2-Standard-OWA.vhd"
$file2 = "Server2012R2-Standard-SP2013.vhd"
$file3 = "Server2012R2-Standard-SQL2014-Enterprise.vhd"
# Create file name array
$files = @($file1, $file2, $file3)
# Create blobs array
$blobStatus = @()
### Create the source storage account context ### 
$srcContext = New-AzureStorageContext   -StorageAccountName $srcStorageAccount `
                                        -StorageAccountKey $srcStorageKey `
                                        -Endpoint $srcEndpoint 
### Create the destination storage account context ### 
$destContext = New-AzureStorageContext  -StorageAccountName $destStorageAccount `
                                        -StorageAccountKey $destStorageKey `
                                        -Endpoint $destEndpoint
#region Copy Specific Files in Container
    function copyFiles {
        $blobStatus = @()
        foreach ($file in $files) {
            # Grab the container, then the BLOB, then start the asynchronous copy
            $blobStatus += Get-AzureStorageContainer -Name $srcContainer -Context $srcContext | 
                           Get-AzureStorageBlob -Blob $file | 
                           Start-AzureStorageBlobCopy -DestContainer $destContainer -DestContext $destContext -DestBlob $file -ConcurrentTaskCount 512 -Force 
        }  
        getBlobStatus -blobs $blobStatus     
    }
#endregion
#region Copy All Files in Container
    function copyAllFiles {
        $blobStatus = @()
        $blobs = Get-AzureStorageContainer -Name $srcContainer -Context $srcContext | Get-AzureStorageBlob
        foreach ($blob in $blobs) {
            # Change $destBlobName here if you need to rename the BLOB at the destination
            $destBlobName = $blob.Name
            $blobStatus += $blob | 
                           Start-AzureStorageBlobCopy -DestContainer $destContainer -DestContext $destContext -DestBlob $destBlobName -ConcurrentTaskCount 512 -Force
        }  
        getBlobStatus -blobs $blobStatus 
    }
#endregion
#region Get Blob Copy Status
    function getBlobStatus($blobs) {
        $completed = $false
        While ($completed -ne $true) {
            # Count the BLOBs that have not finished copying on each pass
            $pending = 0
            foreach ($blob in $blobs) {
                $status = $blob | Get-AzureStorageBlobCopyState
                Write-Host ($blob.Name + " has a status of: " + $status.Status)
                if ($status.Status -ne "Success") {
                    $pending++
                }
            }
            if ($pending -eq 0) {
                $completed = $true
            }
            ELSE {
                Write-Host "Waiting 30 seconds..."
                Start-Sleep -Seconds 30
            }
        }
    }
#endregion
# copyFiles
# copyAllFiles

Conclusion

Having more than one way to get something accomplished within Azure is fantastic. There is not a lot of documentation out there on how to work with Azure and PowerShell within the Government Cloud, so hopefully this will make life easier for someone. Remember that these scripts can be used across any tenant: Commercial, Government, and On-Premises.

Updates

08/05/2015 Fixed cut and paste variable issues and added $destBlobName for renaming BLOBs at the destination location, and updated BLOB status check wait time.

Thursday, July 2, 2015

Provisioning SQL Server Always-On Without Rights

Separation of roles, duties, and responsibilities in a larger corporate/government environment is a good thing. It is a good thing, that is, unless you are actually trying to get something accomplished quickly on your own. But this is why there is a separation of roles: so that one person cannot simply go and add objects into Active Directory on a whim, or play with the F5 because they watched a video on YouTube. I had recently designed a solution that was going to take advantage of SQL Server 2012 High Availability and Disaster Recovery Always-On Group Listeners. The problem was that I was not a domain admin and did not have rights to create a computer object for the Server OS Windows Cluster or the SQL Group Listener.

Creating the OS Cluster

Creating the OS Cluster was the easy part; I just needed to find an administrator who had the rights to create a computer object in the domain. Once that was accomplished, I made sure that the user had local admin rights on all of the soon-to-be clustered machines, and had them run the following script:
$node1 = "Node-01.contoso.local"
$node2 = "Node-02.contoso.local"
$osClusterName = "THESPSQLCLUSTER"
$osClusterIP = "192.168.1.11"
# $ignoreAddress = "172.20.0.0/21"
$nodes = @($node1, $node2)
Import-Module FailoverClusters
function testCluster {
    # Run the cluster validation tests
    $test = Test-Cluster -Node $nodes
    $testPath = $env:LOCALAPPDATA + "\Temp\" + $test.Name.ToString()
    # View Report
    $IE = New-Object -ComObject InternetExplorer.Application
    $IE.navigate2($testPath)
    $IE.visible = $true
}
function buildCluster {
    # Build Cluster
    $new = New-Cluster -Name $osClusterName -Node $nodes -StaticAddress $osClusterIP -NoStorage # -IgnoreNetwork $ignoreAddress
    Get-Cluster | Select-Object *
    # View Report
    $newPath = "C:\Windows\cluster\Reports\" + $new.Name.ToString()
    $IE = New-Object -ComObject InternetExplorer.Application
    $IE.navigate2($newPath)
    $IE.visible = $true
}
# Un-comment what you want to do...
# testCluster
buildCluster

Creating the Group Listener

Creating the Group Listener was a bit more challenging, but not too bad. Once the OS Cluster computer object (thespsqlcluster.contoso.local) was created, the newly created computer object needed to be given rights as well:
- The cluster identity 'thespsqlcluster' needs Create Computer Objects permissions. By default all computer objects are created in the same container as the cluster identity 'thespsqlcluster'.
- If there is an existing computer object, verify the Cluster Identity 'thespsqlcluster' has 'Full Control' permission to that computer object using the Active Directory Users and Computers tool.
You will also want to make sure that the quota for computer objects for 'thespsqlcluster' has not been reached.
The domain administrator was also given Sysadmin rights to all of the SQL Server instances in the cluster.
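As an illustration, the "Create Computer Objects" grant described above can be scripted with the built-in dsacls utility. This is only a sketch, and it assumes the cluster account sits in the default Computers container of contoso.local; adjust the distinguished name and domain to your environment.

```powershell
# Hypothetical example: grant the cluster identity 'thespsqlcluster$' the right
# to create child computer objects in the default Computers container.
# "CC;computer" = Create Child objects of class computer.
dsacls 'CN=Computers,DC=contoso,DC=local' /G 'CONTOSO\thespsqlcluster$:CC;computer'
```

If the computer object for the listener already exists instead, the grant would go on that object's own distinguished name (Full Control, as noted above) rather than on the container.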
After all the permissions were set, the Domain admin could run the following script on the Primary SQL Instance to create the Group Listener:


Import-Module ServerManager -EA 0
Import-Module SQLPS -DisableNameChecking -EA 0
$listenerName = "LSN-TheSPDatabases"
$server = $env:COMPUTERNAME
$path = "SQLSERVER:\sql\$server\default\availabilitygroups\"
$groups = Get-ChildItem -Path $path
$groupPath = $path + $groups[0].Name
$groupPath
New-SqlAvailabilityGroupListener `
    -Name $listenerName `
    -StaticIp "192.168.1.12/255.255.255.0" `
    -Port "1433" `
    -Path $groupPath 

Important

After the group listener is created, all of the rights that were put in place can be removed again, with the understanding that if you wish to add another listener at another time, the permissions will have to be temporarily reinstated. In my case, once all of the computer objects were created successfully, all rights were removed from the cluster computer object and the domain administrator was removed from SQL.
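The cleanup can be sketched with dsacls as well, under the same assumption as before that the grant was made on the default Computers container:

```powershell
# Hypothetical example: revoke the permissions previously granted to
# 'thespsqlcluster$' once the listener objects have been created.
dsacls 'CN=Computers,DC=contoso,DC=local' /R 'CONTOSO\thespsqlcluster$'
```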

Updates

07/06/2015 Cleaned up diction and grammar, added the Important section.
10/21/2015 Updated computer object permission requirements