Monday, June 20, 2016

Download All Files in a Yammer Group!

I was lucky enough to take a week-long Azure Cloud Solution Architect training class hosted by Microsoft. Unluckily, they uploaded all of the documents for the class into Yammer, approximately 80 files. Now, I could just go to each file and download it individually, but where is the fun in that? So, I decided to do a quick search and found a couple of blog posts.
I found a GitHub post, Download all files in a Yammer.com group, and a blog post from Sahil Malik (https://twitter.com/sahilmalik), Download Multiple Files from Yammer - easily.
The first problem that I ran into was that neither script worked any longer due to the URL structure change, just as Sahil predicted. The second problem was that the code opened up a new tab for each file downloaded. I figured there had to be a better way.
Now, I am fortunate enough to be able to reach out to someone I consider to be one of the best JavaScript developers around, Matthew Bramer (https://twitter.com/iOnline247) for a bit of help.
In this post, I am going to show you a couple of ways to download your files. The first is the full-length developer version, while the other is a quick, easily repeatable version.
We decided to test this in Chrome only. The developer version should also work in Edge, but it was only tested in Chrome. If you want to complain that you cannot get it to work, TRY CHROME FIRST!

Do This First
1) Within Chrome, open up Settings, and select Show advanced settings...
2) Under Downloads, set a download location and make sure that the Ask where to save each file before downloading check box is NOT selected.

Full Length Developer Version
1) Open up Yammer, and go to the files location.
     a) Make sure that you scroll down and click the More button to show all of the files.
2) Hit F12 to open the Developer Tools
3) Under Sources, select Snippets.
4) Insert the following code into the Script snippet window:
5) Click the run snippet button (Ctrl + Enter) to start your downloads

Easily Repeatable Version
1) Within Chrome, open the Bookmark Manager (Ctrl + Shift + O)
2) Under Folders, select (or create) the appropriate folder
3) Under Organize, click the Organize drop-down and select Add page...
4) Give the page an appropriate name, like Download All Yammer Files on Page
5) For the URL, paste the following code.
6) Open up Yammer, and go to the files location.
     a) Make sure that you scroll down and click the More button to show all of the files.
7) Open the bookmark that you just created to start downloading all the files.

Thank you Matthew for helping me get this up and running in time for Ignite...

Thursday, January 28, 2016

Backup and Restore SQL User Databases Using PowerShell

There are several ways to back up and restore SQL databases. Over time, the way I back up and restore databases has changed.
Originally my backup database code looked like this:
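The original code block did not survive in this archive, but as a rough sketch, the old SSMS-dependent approach looked something like the following SMO-based backup; the instance name and backup folder are placeholders, not the original values.
# Rough sketch (hypothetical values): SMO-based backup, which requires the SMO assemblies installed with SSMS
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
$serverName   = $env:COMPUTERNAME      # placeholder: SQL Server instance
$backupFolder = "E:\Backups"           # placeholder: backup target folder
$server = New-Object Microsoft.SqlServer.Management.Smo.Server($serverName)
foreach ($db in ($server.Databases | Where-Object { -not $_.IsSystemObject })) {
    $backup = New-Object Microsoft.SqlServer.Management.Smo.Backup
    $backup.Action   = [Microsoft.SqlServer.Management.Smo.BackupActionType]::Database
    $backup.Database = $db.Name
    $backup.Devices.AddDevice("$backupFolder\$($db.Name).bak", [Microsoft.SqlServer.Management.Smo.DeviceType]::File)
    $backup.SqlBackup($server)          # run the backup
}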
The problem is that you needed to have SQL Server Management Studio (SSMS) installed for the code to run correctly. Having SSMS installed on a production server is not the best of ideas, so luckily PowerShell gives us the ability to back up all user databases very easily:
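The script itself is not reproduced here; a minimal sketch of the PowerShell-only approach using Backup-SqlDatabase (from the SQLPS module) could look like this, with the instance name and backup folder as placeholders.
# Minimal sketch (placeholders): back up every user database with Backup-SqlDatabase
Import-Module SQLPS -DisableNameChecking -EA 0
$serverInstance = $env:COMPUTERNAME          # placeholder: default local instance
$backupFolder   = "E:\Backups"               # placeholder: backup target folder
$databases = Get-ChildItem "SQLSERVER:\SQL\$serverInstance\DEFAULT\Databases"   # user databases only
foreach ($db in $databases) {
    $backupFile = Join-Path $backupFolder ($db.Name + ".bak")
    Backup-SqlDatabase -ServerInstance $serverInstance -Database $db.Name -BackupFile $backupFile
    Write-Output "Backed up $($db.Name) to $backupFile"
}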
Now that we have our databases backed up, let's take a look at the old way that I used to restore databases:
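Again, the original block is missing; as a rough sketch, the older approach leaned on the same SMO assemblies (and therefore SSMS) to restore, roughly like this, with hypothetical names and paths.
# Rough sketch (hypothetical values): SMO-based restore, requiring the SMO assemblies installed with SSMS
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
$serverName = $env:COMPUTERNAME
$server = New-Object Microsoft.SqlServer.Management.Smo.Server($serverName)
$restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
$restore.Database = "pcDemo_Database"                                # placeholder database name
$restore.Devices.AddDevice("E:\Backups\pcDemo_Database.bak", [Microsoft.SqlServer.Management.Smo.DeviceType]::File)
$restore.ReplaceDatabase = $true                                     # overwrite the existing database
$restore.SqlRestore($server)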
Below is the newer, PowerShell-only way that I use to restore the databases we just backed up:
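The original script is not included here either; a minimal sketch with Restore-SqlDatabase, assuming the .bak naming used in the backup sketch above, would be:
# Minimal sketch (placeholders): restore every .bak file just created with Restore-SqlDatabase
Import-Module SQLPS -DisableNameChecking -EA 0
$serverInstance = $env:COMPUTERNAME          # placeholder: default local instance
$backupFolder   = "E:\Backups"               # placeholder: folder holding the .bak files
foreach ($backupFile in Get-ChildItem -Path $backupFolder -Filter *.bak) {
    Restore-SqlDatabase -ServerInstance $serverInstance `
                        -Database $backupFile.BaseName `
                        -BackupFile $backupFile.FullName `
                        -ReplaceDatabase
    Write-Output "Restored $($backupFile.BaseName) from $($backupFile.Name)"
}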
Hopefully this will give you a couple of good solutions for backing up and restoring your SQL Server Databases through PowerShell.
-PC

Wednesday, December 30, 2015

Moving User SQL Databases Using PowerShell

I grew tired of manually moving databases around using a combination of SQL and "Copy / Paste", so I wrote a bit of PowerShell to save me some time and effort.
Notice that I am using Copy-Item and then deleting the original item, not just moving it. This is because of how permissions on objects are handled with a copy versus a move, plus I am paranoid about not having my original database handy if the move fails or the moved database gets corrupted in transit.

Let's take a look at the code

In the first section we will be setting the variables.
The next step is to get the database information:
Once we have the database information, the database will need to be taken OFFLINE.
Once the database is offline, we can copy the files, set the ACLs, and update the database with the new .mdf and .ldf file locations.
Then we can bring the DB back ONLINE.
Once the DB is back ONLINE, wait for 10 seconds and delete the original database files.
Here is a look at the code once it is all put together:
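The individual snippets from the original post are not reproduced here, so below is a minimal sketch of the overall flow under a few assumptions: the database lives on the local default instance, the destination path is a placeholder, and the SQL Server service runs as the default MSSQLSERVER service account. The original script's copyItem function, UNC path handling, and verbose output are not recreated.
Import-Module SQLPS -DisableNameChecking -EA 0

# Variables (placeholders)
$serverInstance = $env:COMPUTERNAME
$databaseName   = "pcDemo_Database"
$destination    = "E:\SQLData"

# Get the database file information (logical names and current physical locations)
$files = Invoke-Sqlcmd -ServerInstance $serverInstance -Query `
    "SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('$databaseName')"

# Take the database OFFLINE
Invoke-Sqlcmd -ServerInstance $serverInstance -Query `
    "ALTER DATABASE [$databaseName] SET OFFLINE WITH ROLLBACK IMMEDIATE"

foreach ($file in $files) {
    $newPath = Join-Path $destination (Split-Path $file.physical_name -Leaf)

    # Copy the file (not move), then point the database at the new location
    Copy-Item -Path $file.physical_name -Destination $newPath

    # Set the ACL so the SQL Server service account can use the copied file
    $acl  = Get-Acl -Path $newPath
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("NT SERVICE\MSSQLSERVER", "FullControl", "Allow")
    $acl.AddAccessRule($rule)
    Set-Acl -Path $newPath -AclObject $acl

    # Update the database with the new file location
    Invoke-Sqlcmd -ServerInstance $serverInstance -Query `
        "ALTER DATABASE [$databaseName] MODIFY FILE (NAME = '$($file.name)', FILENAME = '$newPath')"
}

# Bring the database back ONLINE
Invoke-Sqlcmd -ServerInstance $serverInstance -Query "ALTER DATABASE [$databaseName] SET ONLINE"

# Wait 10 seconds, then delete the original files
Start-Sleep -Seconds 10
foreach ($file in $files) { Remove-Item -Path $file.physical_name }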
Updates
01/01/2016: Fixed issues with ACL for moved files by converting file location to UTC based path format.
01/02/2016: Updated to include snippets and comments
01/05/2016: Major update to fix $destination to UTC path, added copyItem function, item extension switch, updated outputs with write-output and write-verbose.

Monday, August 24, 2015

Manually Download and Install the Prerequisites for SharePoint 2016

NOTE: This is for SharePoint 2016 Beta 2 release.

At some point within your career of deploying SharePoint, you will hopefully come across a scenario where your SharePoint servers are not allowed internet access. Most of the server farms that I work on are not allowed access to the Internet, or are firewalled off from the ability to surf the web or download items directly. This brings me to the need to download the required files to a specific location and use that location for SharePoint's PrerequisiteInstaller.exe to complete its installation.
If you need to do an offline installation of SharePoint 2016, you will need to have the prerequisite files downloaded ahead of time. You will also need the SharePoint 2016 .iso (download here). These scripts are based off of the scripts provided by Craig Lussier (@craiglussier).
Currently this is for the SharePoint Server 2016 Beta 2 release on Windows Server 2012R2 or on Windows Server 2016 Technical Previews 3 and 4.
The only change that needs to be made is the location of the SharePoint prerequisiteinstaller.exe file in script #3 line #3. So, update the $sp2016Location variable before running. If the SP2016 .iso is mounted to the "D:\" drive, you have nothing to change and, in theory, this should just work for you out of the box.
The first thing that I like to do is to add the windows features:
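The original script is not included in this archive; as a minimal sketch, the general approach with Install-WindowsFeature looks like the following. The feature list shown here is an abbreviated assumption based on the usual SharePoint prerequisites, so use the complete list from the scripts this post is based on.
# Minimal sketch: install the Windows features the SharePoint prerequisite installer expects
# The feature list below is abbreviated on purpose; -Source points at your Windows Server media's side-by-side store for offline installs.
$features = @(
    "Net-Framework-Features",
    "Web-Server",
    "Web-WebServer",
    "Web-Windows-Auth",
    "Web-Mgmt-Console",
    "Windows-Identity-Foundation"
)
Install-WindowsFeature -Name $features -IncludeAllSubFeature -IncludeManagementTools -Source "D:\sources\sxs"
Write-Output "Windows feature installation complete; a restart may be required."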
The second step is to download the items required for the prerequisite installer:
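The download script itself is not reproduced here. As a sketch of the technique, the files can be pulled down with BITS on an internet-connected machine and then copied to the offline server; the folder and URLs below are placeholders, and the real list of prerequisite URLs comes from the referenced scripts.
# Minimal sketch: download the prerequisite files to a staging folder for offline installation
Import-Module BitsTransfer
$downloadFolder = "C:\SP2016-Prereqs"                      # placeholder: local staging folder
if (-not (Test-Path $downloadFolder)) { New-Item -Path $downloadFolder -ItemType Directory | Out-Null }

# Placeholder URLs; substitute the actual prerequisite download links
$urls = @(
    "https://download.example.com/sqlncli.msi",
    "https://download.example.com/WindowsServerAppFabricSetup_x64.exe"
)
foreach ($url in $urls) {
    $destination = Join-Path $downloadFolder (Split-Path $url -Leaf)
    Start-BitsTransfer -Source $url -Destination $destination
    Write-Output "Downloaded $(Split-Path $url -Leaf)"
}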
The next step is to run the prerequisite installer. With the Beta release, there is a requirement to restart the server during the installation and provisioning of settings:
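The original script is not shown here; as a sketch, the installer is pointed at the downloaded files with command-line switches along the lines of the ones below. The switch names are assumptions carried over from earlier SharePoint versions, so confirm them with prerequisiteinstaller.exe /? before running.
# Minimal sketch: run the prerequisite installer unattended against the offline files
$sp2016Location = "D:\"                                    # assumption: SP2016 .iso mounted to D:\
$prereqFolder   = "C:\SP2016-Prereqs"                      # staging folder from the download step
$installer      = Join-Path $sp2016Location "prerequisiteinstaller.exe"

$arguments = "/unattended " +
             "/SQLNCli:`"$prereqFolder\sqlncli.msi`" " +
             "/AppFabric:`"$prereqFolder\WindowsServerAppFabricSetup_x64.exe`""
             # ...add one switch per downloaded prerequisite

Start-Process -FilePath $installer -ArgumentList $arguments -Wait
# The Beta bits require a restart partway through; after the reboot, re-run the installer to continue.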
The final step is to continue the installation of the prerequisites:
I hope that this saves you some time and headaches trying to get SP2016 installed and running correctly.

Updates

08/24/2015 Added ability to install on either Technical Preview for Server 2016 or Server 2012R2
08/25/2015 Added verbiage on updating $sp2016Location variable, and added workaround for PowerShell bug in download script.
11/25/2015 Updated for installation of SharePoint 2016 Beta 2 bits.

Monday, July 20, 2015

Copying BLOBs Between Azure Storage Containers

In the past, when I needed to move BLOBs between Azure containers, I would use a script that I put together based off of Michael Washam's blog post, Copying VHDs (Blobs) between Storage Accounts. However, with my latest project, I actually needed to move several BLOBs from the Azure Commercial Cloud to the Azure Government Cloud.
Right off the bat, the first problem is that the endpoint for the Government Cloud is not the default endpoint when using the PowerShell cmdlets. After spending some time updating my script to work in either Commercial or Government Azure, I was still not able to move anything. After a bit of "this worked before, why are you not working now?" frustration, it was time for Plan B.

Plan B

Luckily, the Windows Azure Storage Team has put together a command-line utility called AzCopy. AzCopy is a very powerful tool, as it will allow you to copy items from a machine on your local network into Azure, and it will also copy items from one Azure tenant to another Azure tenant. The problem that I ran into is that the copy is synchronous, meaning that it copies one item at a time, and you cannot start another copy until the previous operation has finished. I also ran the command line in ISE rather than directly at the command line, which was not as nice: in the AzCopy command-line utility a status is displayed letting you know the elapsed time and when the copy has completed, while in ISE you only know your BLOB is copied when the script has finished running. You can read up on and download AzCopy from Getting Started with the AzCopy Command-Line Utility. This is the script that I used to move BLOBs between the Azure Commercial Tenant and the Azure Government Tenant:
$sourceContainer = "https://commercialsharepoint.blob.core.windows.net/images"
$sourceKey = "insert your key here"
$destinationContainer = "https://governmentsharepoint.blob.core.usgovcloudapi.net/images"
$destinationKey = "insert your key here"
$file1 = "Server2012R2-Standard-OWA.vhd"
$file2 = "Server2012R2-Standard-SP2013.vhd"
$file3 = "Server2012R2-Standard-SQL2014-Enterprise.vhd"
$files = @($file1,$file2,$file3)
function copyFiles {
    foreach ($file in $files) {
        & 'C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe' /Source:$sourceContainer /Dest:$destinationContainer /SourceKey:$sourceKey /DestKey:$destinationKey /Pattern:$file 
    }
}
function copyAllFiles {
    & 'C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe' /Source:$sourceContainer /Dest:$destinationContainer /SourceKey:$sourceKey /DestKey:$destinationKey /S
}
# copyFiles
# copyAllFiles
While I was waiting for my BLOBs to copy over, I decided to look back at my Plan A and see if I could figure out my issue(s).

Plan A

After cleaning up my script and taking a bit of a "Type-A personality" look at it, I noticed that I was grabbing the Azure Container object but not grabbing the BLOB object before copying the item. Once I piped the container to the BLOB before copying, it all worked as expected. Below is my script, but please notice that on the Start-AzureStorageBlobCopy cmdlet, I am using the -Force parameter to overwrite the existing destination BLOB if it exists.
# Source Storage Information
$srcStorageAccount = "commercialsharepoint"
$srcContainer = "images"
$srcStorageKey = "insert your key here"
$srcEndpoint = "core.windows.net"
# Destination Storage Information
$destStorageAccount  = "governmentsharepoint"  
$destContainer = "images"
$destStorageKey = "insert your key here"
$destEndpoint = "core.usgovcloudapi.net" 
# Individual File Names (if required)
$file1 = "Server2012R2-Standard-OWA.vhd"
$file2 = "Server2012R2-Standard-SP2013.vhd"
$file3 = "Server2012R2-Standard-SQL2014-Enterprise.vhd"
# Create file name array
$files = @($file1, $file2, $file3)
# Create blobs array
$blobStatus = @()
### Create the source storage account context ### 
$srcContext = New-AzureStorageContext   -StorageAccountName $srcStorageAccount `
                                        -StorageAccountKey $srcStorageKey `
                                        -Endpoint $srcEndpoint 
### Create the destination storage account context ### 
$destContext = New-AzureStorageContext  -StorageAccountName $destStorageAccount `
                                        -StorageAccountKey $destStorageKey `
                                        -Endpoint $destEndpoint
#region Copy Specific Files in Container
    function copyFiles {
        $i = 0
        foreach ($file in $files) {
            $files[$i] = Get-AzureStorageContainer -Name $srcContainer -Context $srcContext | 
                         Get-AzureStorageBlob -Blob $file | 
                         Start-AzureStorageBlobCopy -DestContainer $destContainer -DestContext $destContext -DestBlob $file -ConcurrentTaskCount 512 -Force 
            $i++ 
        }  
        getBlobStatus -blobs $files     
    }
#endregion
#region Copy All Files in Container
    function copyAllFiles {
        $blobs = Get-AzureStorageContainer -Name $srcContainer -Context $srcContext | Get-AzureStorageBlob
        $i = 0
        foreach ($blob in $blobs) {
            # Keep the same name at the destination; change $destBlobName to rename the BLOB during the copy
            $destBlobName = $blob.Name
            $blobs[$i] =  Get-AzureStorageContainer -Name $srcContainer -Context $srcContext | 
                          Get-AzureStorageBlob -Blob $blob.Name | 
                          Start-AzureStorageBlobCopy -DestContainer $destContainer -DestContext $destContext -DestBlob $destBlobName -ConcurrentTaskCount 512 -Force
            $i++
        }  
        getBlobStatus -blobs $blobs 
    }
#endregion
#region Get Blob Copy Status
    function getBlobStatus($blobs) {
        $completed = $false
        While ($completed -ne $true) {
            # Count how many blobs are still copying on this pass
            $counter = 0
            foreach ($blob in $blobs) {
                $status = $blob | Get-AzureStorageBlobCopyState
                Write-Host($blob.Name + " has a status of: " + $status.Status)
                if ($status.Status -ne "Success") {
                    $counter ++
                }
            }
            if ($counter -eq 0) {
                # Every blob reports Success, so all copies are finished
                $completed = $true
            }
            ELSE {
                Write-Host("Waiting 30 seconds...")
                Start-Sleep -Seconds 30
            }
        }
    }
#endregion
# copyFiles
# copyAllFiles

Conclusion

Having more than one way to get something accomplished within Azure is fantastic. There is not a lot of documentation out there on how to work with Azure and PowerShell within the Government Cloud, so hopefully this will make life easier for someone. Remember that these scripts can be used across any tenant: Commercial, Government, and on-premises.

Updates

08/05/2015 Fixed cut and paste variable issues and added $destBlobName for renaming BLOBs at the destination location, and updated BLOB status check wait time.

Thursday, July 2, 2015

Provisioning SQL Server Always-On Without Rights

Separation of roles, duties, and responsibilities in a larger corporate/government environment is a good thing. It is a good thing, that is, unless you are actually trying to get something accomplished quickly on your own. But this is why there is a separation of roles: so that one person cannot simply go and add objects into Active Directory on a whim, or play with the F5 because they watched a video on YouTube. I recently designed a solution that was going to take advantage of SQL Server 2012 High Availability and Disaster Recovery Always-On Group Listeners. The problem was that I was not a domain admin, and did not have rights to create a computer object for the Windows Server OS cluster or the SQL Group Listener.

Creating the OS Cluster

Creating the OS Cluster was the easy part; I just needed to find an administrator who had the rights to create a computer object in the domain. Once that was accomplished, I made sure that the user had local admin rights on all of the soon-to-be clustered machines, and had them run the following script:
$node1 = "Node-01.contoso.local"
$node2 = "Node-02.contoso.local"
$osClusterName = "THESPSQLCLUSTER"
$osClusterIP = "192.168.1.11"
# $ignoreAddress = "172.20.0.0/21"
$nodes = ($node1, $node2)
Import-Module FailoverClusters
function testCluster {
    # Run cluster validation against all nodes
    $test = Test-Cluster -Node $nodes
    $testPath = $env:USERPROFILE + "\AppData\Local\Temp\" + $test.Name.ToString()
    # View the validation report in Internet Explorer
    $IE = New-Object -ComObject InternetExplorer.Application
    $IE.Navigate2($testPath)
    $IE.Visible = $true
}
function buildCluster {
    # Build the cluster
    $new = New-Cluster -Name $osClusterName -Node $nodes -StaticAddress $osClusterIP -NoStorage # -IgnoreNetwork $ignoreAddress
    Get-Cluster | Select *
    # View the cluster creation report in Internet Explorer
    $newPath = "C:\Windows\cluster\Reports\" + $new.Name.ToString()
    $IE = New-Object -ComObject InternetExplorer.Application
    $IE.Navigate2($newPath)
    $IE.Visible = $true
}
# un-comment what you want to do...
# testCluster
buildCluster

Creating the Group Listener

Creating the Group Listener was a bit more challenging, but not too bad. Once the OS Cluster computer object was created (thespsqlcluster.contoso.local), the newly created computer object needed to be given rights as well.
- The cluster identity 'thespsqlcluster' needs Create Computer Objects permissions. By default all computer objects are created in the same container as the cluster identity 'thespsqlcluster'.
- If there is an existing computer object, verify the Cluster Identity 'thespsqlcluster' has 'Full Control' permission to that computer object using the Active Directory Users and Computers tool.
You will also want to make sure that the quota for computer objects for 'thespsqlcluster' has not been reached.
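In my case the grant was done by a domain admin through Active Directory Users and Computers, but as a rough sketch it could also be scripted with dsacls; the container distinguished name below is a placeholder, and the syntax should be verified in your environment before use.
# Rough sketch (placeholder DN): grant the cluster identity rights to create computer objects in its container
# CC = create child; the trailing ;computer restricts the grant to computer objects.
$container = "CN=Computers,DC=contoso,DC=local"
dsacls $container /G 'CONTOSO\thespsqlcluster$:CC;computer'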
The domain administrator was also given Sysadmin rights to all of the SQL Server instances in the cluster.
After all the permissions were set, the Domain admin could run the following script on the Primary SQL Instance to create the Group Listener:


Import-Module ServerManager -EA 0
Import-Module SQLPS -DisableNameChecking -EA 0
$listenerName = "LSN-TheSPDatabases"
$server = $env:COMPUTERNAME
$path = "SQLSERVER:\sql\$server\default\availabilitygroups\"
$groups = Get-ChildItem -Path $path
$groupPath = $path + $groups[0].Name
$groupPath
New-SqlAvailabilityGroupListener `
    -Name $listenerName `
    -StaticIp "192.168.1.12/255.255.255.0" `
    -Port "1433" `
    -Path $groupPath 

Important

After the group listener is created, all of the rights that were put in place can be removed again, with the understanding that if you wish to add another listener at a later time, the permissions will have to be temporarily reinstated. In my case, once all of the computer objects were created successfully, all rights were removed from the cluster computer object and the domain administrator was removed from SQL.

Updates

07/06/2015 Cleaned up diction and grammar, added the Important section.
10/21/2015 Updated computer object permission requirements

Sunday, June 28, 2015

SharePoint and FIPS Exceptions

A couple of weeks ago, I started a "Greenfield" implementation of SharePoint 2013 for a client. This organization has SharePoint 2003, 2007, 2010 already existing in their environment, so I ignorantly figured that the installation should go pretty smoothly.
All of the SharePoint and SQL bits installed correctly; however, when trying to provision Central Administration, I ran into an issue where I was not able to create the config database:

What is FIPS?

FIPS stands for the Federal Information Processing Standards, and is used for the standardization of information, such as FIPS 10-4 for Country Codes or FIPS 5-2 for State Codes. However my problem is with FIPS 140-2, the Security Requirements for Cryptography which states:
This Federal Information Processing Standard (140-2) specifies the security requirements that will be satisfied by a cryptographic module, providing four increasing, qualitative levels intended to cover a wide range of potential applications and environments. The areas covered, related to the secure design and implementation of a cryptographic module, include specification; ports and interfaces; roles, services, and authentication; finite state model; physical security; operational environment; cryptographic key management; electromagnetic interference/electromagnetic compatibility (EMI/EMC); self-tests; design assurance; and mitigation of other attacks. [Supersedes FIPS 140-1 (January 11, 1994): http://www.nist.gov/manuscript-publication-search.cfm?pub_id=917970]
In essence, FIPS 140-2 is a standard that can be tested against and certified, so that a server is hardened up to a government standard. The US is not the only government that uses the FIPS standard for server hardening. The FIPS Local/Group Security Policy flag can be found here:

FIPS and SharePoint

There are a couple of problems with using SharePoint on a FIPS-enabled server. SharePoint Server uses MD5 for computing hash values (not for security purposes), and MD5 is not a FIPS-approved algorithm. According to Microsoft (https://technet.microsoft.com/en-us/library/cc750357.aspx), the Schannel Security Package is forced to negotiate sessions using TLS 1.0, and the following supported cipher suites are disabled:

  • TLS_RSA_WITH_RC4_128_SHA
  • TLS_RSA_WITH_RC4_128_MD5
  • SSL_CK_RC4_128_WITH_MD5
  • SSL_CK_DES_192_EDE3_CBC_WITH_MD5
  • TLS_RSA_WITH_NULL_MD5
  • TLS_RSA_WITH_NULL_SHA
If you want to read up more, here are some good posts:

What's Next?

Disabling FIPS is easy; however, a larger discussion needs to be had. Is FIPS set at the GPO level, or is it part of the provisioned image with FIPS enabled by default? Will the security team come after you if you disable it without their knowledge? Why do they have FIPS enabled, and what are they trying to accomplish with it? All of these questions need to be answered before changing your server settings.

Fixing FIPS with PowerShell

This is how I reset the FIPS Algorithm Policy so that I could get Central Administration provisioned. Remember that FIPS will need to be disabled on all of your SharePoint Servers.

$sets = @("CurrentControlSet","ControlSet001","ControlSet002")
foreach ($set in $sets) {
    $path = "HKLM:\SYSTEM\$set\Control\LSA\FipsAlgorithmPolicy"
    if ((Get-ItemProperty -Path $path).Enabled -ne 0) {
        Set-ItemProperty -Path $path -Name "Enabled" -Value "0"
        Write-Host("Set $path Enabled to 0")
    }
}

Thursday, April 16, 2015

Binding SSL Certificates to SNI Enabled Host Headers with PowerShell

It is funny how some things are just easier to deploy using a GUI versus trying to figure out a problem in code or PowerShell. That is, until you run into a client that says you cannot use a GUI within their environment. With SharePoint 2013, when you have a single Web Application (IIS site) in IIS for Host Named Site Collections (HNSC), it is necessary to use a combination of host names and Server Name Indication (SNI) to route the incoming requests correctly.
You would think that, with the ability to create a New-WebBinding in PowerShell, you would have the ability to set all of your binding properties in one cmdlet, but you would be wrong.
If you run the Get-WebBinding cmdlet for your site, you will see that there are properties for the certificate thumbprint (certificateHash) and the certificate store location.
Looking at the Set-WebBinding cmdlet, there is the ability to set PropertyNames and Values, but they will not allow you to update the thumbprint or certificate store location...
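As a quick illustration (the site name below is a placeholder), you can inspect those certificate-related properties on an existing binding like this:
# Hypothetical example: look at the SSL-related properties on a site's HTTPS bindings
Import-Module WebAdministration -EA 0
Get-WebBinding -Name "pcDemo 2013 Hosting Web App" -Protocol https |
    Select-Object protocol, bindingInformation, sslFlags, certificateHash, certificateStoreName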
So, how do we set these values through PowerShell:
Import-Module WebAdministration -EA 0
# Prerequisites
# SAN Certificate must already be installed
# ALL Host Headers (DNS Names) must be in the Subject Alternative Name
# Variables
$site = "pcDemo 2013 Hosting Web App"      # IIS Site Name to bind certificates to
$certFriendlyName = "san.pcDemo.net"       # Certificate Friendly Name
$caHostHeader = "ca.pcDemo.net"            # Central Admin Host Header
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
$dnsNames = $cert.DnsNameList | Select Unicode
foreach ($dnsName in $dnsNames) {
    $hostHeader = $dnsName.Unicode.ToString()
    if ($hostHeader -ne $caHostHeader) {
        Write-Host("Creating $hostHeader binding...")
        # Create the Binding
        New-WebBinding -Name $site -SslFlags 1 -HostHeader $hostHeader -Protocol https -Port 443 -IPAddress "*"
        Write-Host("Binding " + $cert.FriendlyName + " to $hostHeader...")
        # Add the Certificate
        New-Item -Path "IIS:\SslBindings\*!443!$hostHeader" -Thumbprint $thumbprint -SSLFlags 1
    }
}
Now, you are not truly done yet. Currently the site does not have a default SSL binding, as all of the bindings created so far have host headers associated with them, which will create an error within IIS.
For those of you implementing SharePoint, it is now time to bind your App Domain. This is the binding that will accept all incoming requests that do not have an SNI associated with them.
Import-Module WebAdministration -EA 0
# Prerequisites
# SAN Certificate must already be installed
# ALL Host Headers must be in the Subject Alternative Name as DNS Names
# Variables
$site = "pcDemo 2013 Hosting Web App"      # IIS Site Name to bind certificates to
$certFriendlyName = "san.pcDemo.net"       # Certificate Friendly Name
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
# is there already a binding?
$bindings = Get-WebBinding -Name $site -Port 443 -Protocol https
$exists = $bindings | ? {$_.sslFlags -eq 0}
if ($exists) {
    Write-Host("Binding already exists...")
    Remove-WebBinding -Name $site -Port 443 -Protocol https -HostHeader $null
    Remove-Item -Path "IIS:\SslBindings\0.0.0.0!443"
    Write-Host("Binding removed...")
}
# Create new binding with correct certificate
# Create the Binding
New-WebBinding -Name $site -Protocol https -Port 443 -IPAddress "*"
Write-Host("Binding " + $cert.FriendlyName)
# Add the Certificate
New-Item -Path "IIS:\SslBindings\0.0.0.0!443" -Thumbprint $thumbprint
At this point the SharePoint Web Application (IIS site) should have all of its bindings in place. So what could possibly be left? Well, what about the Central Administration (CA) Web Application?
This one will be a combination of the last two scripts because CA will require SNI.
Import-Module WebAdministration -EA 0
# Prerequisites
# SAN Certificate must already be installed
# ALL Host Headers must be in the Subject Alternative Name as DNS Names
# Variables
$site = "SharePoint Central Administration v4"      # IIS Site Name to bind certificates to
$certFriendlyName = "san.pcDemo.net"                # Certificate Friendly Name
$caHostHeader = "ca.pcdemo.net"                     # Central Administration Host Header
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
Write-Host("Creating $caHostHeader binding...")
# Create the Binding
New-WebBinding -Name $site -SslFlags 1 -HostHeader $caHostHeader -Protocol https -Port 443 -IPAddress "*"
Write-Host("Binding " + $cert.FriendlyName + " to $caHostHeader...")
# Add the Certificate
New-Item -Path "IIS:\SslBindings\*!443!$caHostHeader" -Thumbprint $thumbprint -SSLFlags 1
Now to put the entire thing together:
Import-Module WebAdministration -EA 0
# Prerequisites
# SAN Certificate must already be installed
# ALL Host Headers (DNS Names) must be in the Subject Alternative Name
# Variables
$site = "pcDemo 2013 Hosting Web App"         # IIS Site Name for SharePoint Hosting
$sanCertFriendlyName = "san.pcDemo.net"       # SAN Certificate Friendly Name
$appCertFriendlyName = "pcdemo-apps.net"      # App Domain Wild Card Certificate Friendly Name
$caCertFriendlyName = "ca.pcDemo.net"         # Central Administration Certificate FriendlyName
$caHostHeader = "ca.pcdemo.net"               # Central Administration Host Header
#region Create SAN Bindings
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $sanCertFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
$dnsNames = $cert.DnsNameList | Select Unicode
foreach ($dnsName in $dnsNames) {
    $hostHeader = $dnsName.Unicode.ToString()
    if ($hostHeader -ne $caHostHeader) {
        Write-Host("Creating $hostHeader binding...")
        # Create the Binding
        New-WebBinding -Name $site -SslFlags 1 -HostHeader $hostHeader -Protocol https -Port 443 -IPAddress "*"
        Write-Host("Binding " + $cert.FriendlyName + " to $hostHeader...")
        # Add the Certificate
        New-Item -Path "IIS:\SslBindings\*!443!$hostHeader" -Thumbprint $thumbprint -SSLFlags 1
    }
}
#endregion
#region Create Default 443 Bindings
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $appCertFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
# is there already a binding?
$bindings = Get-WebBinding -Name $site -Port 443 -Protocol https
$exists = $bindings | ? {$_.sslFlags -eq 0}
if ($exists) {
    Write-Host("Binding already exists...")
    Remove-WebBinding -Name $site -Port 443 -Protocol https -HostHeader $null
    Remove-Item -Path "IIS:\SslBindings\0.0.0.0!443"
    Write-Host("Binding removed...")
}
# Create new binding with correct certificate
# Create the Binding
New-WebBinding -Name $site -Protocol https -Port 443 -IPAddress "*"
Write-Host("Binding " + $cert.FriendlyName)
# Add the Certificate
New-Item -Path "IIS:\SslBindings\0.0.0.0!443" -Thumbprint $thumbprint
#endregion
#region Create Central Administration Bindings
$site = "SharePoint Central Administration v4"      # IIS Site Name to bind certificates to
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $caCertFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
Write-Host("Creating $caHostHeader binding...")
# Create the Binding
New-WebBinding -Name $site -SslFlags 1 -HostHeader $caHostHeader -Protocol https -Port 443 -IPAddress "*"
Write-Host("Binding " + $cert.FriendlyName + " to $caHostHeader...")
# Add the Certificate
New-Item -Path "IIS:\SslBindings\*!443!$caHostHeader" -Thumbprint $thumbprint -SSLFlags 1
#endregion
Updates
04/19/2015 Fixed paragraph 3 from Get-WebBinding to Set-WebBinding
04/21/2015 Added CA certificate checking and added Remove-Item to allow App Domain cert to be added with New-Item

Monday, February 9, 2015

SharePoint 2013 and Web Application Proxy (WAP) Server

The other day I was talking with a friend of mine, Miguel Wood (@MiguelWood) about a SharePoint 2013 farm that I was provisioning in Azure. We were talking about endpoints and direct access into the SharePoint farm, and I realized that I really should have put all of my public facing URLs through a reverse proxy. Luckily, I had a Windows Server 2012R2 WAP server in Azure already, handling all of the ADFS 3.0 authentication traffic, so I updated my DNS entries to point at the WAP Load Balancing URL.
This is where things started to get a bit messy. After updating the URLs and adding the SSL certificates to the WAP server, I was able to hit my public facing URLs. I was very happy to add this extra level of security to the environment. My happiness was short lived, because when I went and clicked on an application that lived in the App domain, I received a 404 page not found error.
Silly me, I had not set up the WAP server to pass the app domain through. After spending way too much time trying to figure it out, I decided that I had best do a bit of research. I came across this article on TechNet, https://technet.microsoft.com/en-us/library/dn383655.aspx, that explains why I was not able to pass the app domain successfully.
Well, this is a bit frustrating... So I went back into DNS and pointed the app domain back at the Cloud Service URL for the app domain. Long story short, I reverted back to creating CNAMEs for all of the public facing URLs, pointing at the appropriate Azure Cloud Service.

But Wait There's More!

Luckily, there is a new version of the Web Application Proxy coming out with the new Windows Server OS. It is still in beta, but here is what this Application Proxy Blog post, http://blogs.technet.com/b/applicationproxyblog/archive/2014/10/01/introducing-the-next-version-of-web-application-proxy.aspx, has to say:

Wildcard domain publishing – new patterns and easier SharePoint 2013 apps publishing

We are adding the ability to publish not only a specific domain name but an entire sub-domain. This opens new opportunities for customer that want to publish sites in bulk and not one by one. For example, if all your apps are under http://*.apps.internal/, you can publish them using a single external domain like https://*.apps.contoso.com/.
This pattern is important when there is a need to publish SharePoint 2013 apps that uses a special sub-domain to all apps. In this case, only wildcard certificates would work as the specific apps domain may change over time.

OK, that is all good, so let's see how to set up the new WAP server for SharePoint 2013, including the App Domain.

Let's Get Started

If you are already running WAP servers, you will run into issues unless you remove the older versions first. Ideally, you have a development environment in which to install ADFS and WAP, as you do not want to deploy the technical evaluation bits in your production environment. As just mentioned, if you do add the new WAP server into an older WAP cluster, you will run into this issue:
I recommend standing up a new ADFS server on a separate VLAN that WAP can connect to, but not actually using that server for ADFS.
The first thing you will need to do is grab the ISO from the TechNet Evaluation Center, http://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview
Create a new server for ADFS and a new server for WAP
After you install the OS on the WAP server, make sure that you have at least 2 NIC ports to separate your traffic. You will use one NIC for your incoming Internet traffic and one NIC for your connection to your Internal Network. 
You will want to add the server to your network but NOT to your domain. Do NOT add the server to your DOMAIN.
The WAP server in this demonstration is going to be used solely as a reverse proxy for SharePoint, and not for ADFS. There are plenty of blog posts on how to set up ADFS 3.0, but you will want to install ADFS from the Technical Preview bits. Here is a simple post that installs ADFS on Server 2012R2; nothing has changed.
You will need your App Domain Wildcard Certificate and your SharePoint SAN certificates with Key.
For this blog post, we will add the WAP server to the network, configure the reverse proxy for SharePoint 2013 hybrid, including the app domain on-premises, and re-point the DNS entries to the new WAP server

Prepare The Server

Ideally you would want to run your WAP server with 2 NIC cards, so at this point, label and set your 2 NIC Cards, and assign your IP Addresses for each. For example, to start, my Network looks like this: 
My external facing NIC looks like this:
My internal facing NIC looks like this:
Adding the default gateway information and the DNS information will allow us to not rely on using the HOSTS file and will update our network types:
By having the NICs on different network types, you can then close off the ports appropriately.
After setting up your NICs, clean up your firewall so that only the required ports for each network type are open. For example, you do not want to allow Remote Desktop from the Internet side of your server. Make sure that you open up port 80 on the public side of your firewall, but keep it closed on the internal side.
Next is to enable the WAP server feature.
Add-WindowsFeature -Name web-application-proxy -IncludeManagementTools
If we run:
Get-Command –Module WebApplicationProxy
on both the old and new WAP servers, during the beta phase at least, the available cmdlets are the same.

Import / Export Certificates

We will need to import certificates that contain private keys. We will be populating the WAP server with the following information:
We have placed all of our certificates in the C:\Certificates\Imports folder. To import them all run the following:
$importFolder = "C:\Certificates\Imports"
$file = (Get-ChildItem -Path $importFolder -Recurse)
$certificate = $file | Import-Certificate -CertStoreLocation  Cert:\LocalMachine\My
$certificate.DnsNameList | Select Unicode 
Now, if your WAP server is in production, you should have fail-over onto another WAP server. Remember that at this time we are working with the Technical Preview bits, which are not for production use. To export your certificates so you can move them to your next WAP server:
$exportFolder = "C:\Certificates\Exports"
$pw = ConvertTo-SecureString -String "Pass@w0rd#1" -Force -AsPlainText
$certs = Get-ChildItem -Path cert:\LocalMachine\my | Where-Object {$_.HasPrivateKey -eq $true}
foreach ($cert in $certs) {
    $filePath = $exportFolder + "\" + $cert.FriendlyName.ToString() + ".pfx"
    $cert | Export-PfxCertificate -FilePath $filePath -Password $pw -ChainOption BuildChain
} 
To import the .pfx certificates into the next WAP server:
$importFolder = "C:\Certificates\Imports"
$file = (Get-ChildItem -Path $importFolder -Recurse)
$pw = ConvertTo-SecureString -String "Pass@w0rd#1" -Force -AsPlainText
$certificate = $file | Import-PfxCertificate -CertStoreLocation  Cert:\LocalMachine\My -Password $pw -Exportable #If you want to export again
$certificate.DnsNameList | Select Unicode
Now that the certificates are installed, it's time to set up WAP for the incoming URLs. But first, let's compare the Add-WebApplicationProxyApplication cmdlet from Server 2012R2 with the one in the Technical Preview.
The items in yellow are the new properties at this point. The one that is really important to me is EnableHTTPRedirect. This will allow users to go to http://sp2013.contoso.com and get redirected to https://sp2013.contoso.com automatically, without having to open port 80 on your firewall (private network) or on your SharePoint server. Basically, this stops us from having to create a redirect in IIS as described in my blog post How to Redirect from HTTP to HTTPS with URL Rewrite.

Update HOSTS File

If you do not put any DNS server information in your internal NIC settings, you will have to put all of your DNS entries in the HOSTS file. Since our WAP server is not attached to any domain and does not have access to the internal DNS, you will need to modify the server's HOSTS file so that requests can be routed to the correct internal servers.
You will find the HOSTS file here(typically): C:\Windows\System32\drivers\etc 
Since your ADFS server should be a demo server as well, you will also want to include the ADFS server address and URL in your HOSTS file. 
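As an example, with purely hypothetical addresses, the HOSTS entries for this demo environment might look like the following:
# Example HOSTS entries (hypothetical IP addresses); one entry per published internal URL, plus the ADFS server
192.168.1.20    sp2013.pcdemo.net
192.168.1.20    hybrid.pcdemo.net
192.168.1.30    sts.domain.net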

Bind to ADFS

The WAP server is meant to be associated with an existing ADFS farm, and to complete the installation of the WAP server, you will need to either run the Installation Wizard or run the Install-WebApplicationProxy cmdlet. If you are not using the GUI, your PowerShell will look something like this:
$certFriendlyName = "sts.domain.net"
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint
$FScredential = Get-Credential
Install-WebApplicationProxy -FederationServiceName "sts.domain.net" -FederationServiceTrustCredential $FScredential -CertificateThumbprint $thumbprint

Publish the SharePoint On-Premises and WAC Web Applications 

The PowerShell for creating the web applications will be the same for your SharePoint needs (not for hybrid) as it will be for your WAC server.
$externalURL = "https://sp2013.pcdemo.net" # Incoming URL
$backendURL = "https://sp2013.pcdemo.net"  # URL for SharePoint WA
$name = "sp2013.pcdemo.net"                # WAP published application name
$certFriendlyName = "san.pcDemo.net"       # Cert Friendly Name
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint
Add-WebApplicationProxyApplication -ExternalPreauthentication PassThrough `
    -ExternalUrl $externalURL `
    -BackendServerUrl $backendURL `
    -name $name `
    -ExternalCertificateThumbprint $thumbprint `
    -ClientCertificatePreauthenticationThumbprint $thumbprint `
    -DisableTranslateUrlInRequestHeaders:$False `
    -DisableTranslateUrlInResponseHeaders:$False `
    -EnableHTTPRedirect:$true
If you want to create Web Applications for all of the DNS names in your SAN certificate you can run this script:
$certFriendlyName = "san.pcDemo.net"       # Cert Friendly Name
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$dnsNames = $cert.DnsNameList | Select Unicode
foreach ($dnsName in $dnsNames) {
    $externalURL = "https://" + $dnsName.Unicode
    $backendURL = "https://" + $dnsName.Unicode
    $name = $dnsName.Unicode
    $thumbprint = $cert.Thumbprint
    Add-WebApplicationProxyApplication -ExternalPreauthentication PassThrough `
        -ExternalUrl $externalURL `
        -BackendServerUrl $backendURL `
        -name $name `
        -ExternalCertificateThumbprint $thumbprint `
        -ClientCertificatePreauthenticationThumbprint $thumbprint `
        -DisableTranslateUrlInRequestHeaders:$False `
        -DisableTranslateUrlInResponseHeaders:$False `
        -EnableHTTPRedirect:$true 
}

Publish the SharePoint Hybrid Web Application

This one is a bit different since the hybrid connection relies on pre-authentication using a certificate that you have uploaded to O365.
$externalURL = "https://hybrid.pcdemo.net"  # Incoming URL
$backendURL = "https://hybrid.pcdemo.net"   # URL for SharePoint WA
$name = "hybrid.pcdemo.net"                 # WAP published application name
$certFriendlyName = "san.pcDemo.net"        # Cert Friendly Name
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint
Add-WebApplicationProxyApplication -ExternalPreauthentication ClientCertificate `
   -ExternalUrl $externalURL `
   -BackendServerUrl $backendURL `
   -name $name `
   -ExternalCertificateThumbprint $thumbprint `
   -ClientCertificatePreauthenticationThumbprint $thumbprint `
   -DisableTranslateUrlInRequestHeaders:$False `
   -DisableTranslateUrlInResponseHeaders:$False

Publish the SharePoint App Domain

There is an issue with not having access to the DNS of your internal network: the HOSTS file does not accept wildcard entries. To get around this, you can install a tool like Acrylic DNS Proxy (http://mayakron.altervista.org/wikibase/show.php?id=AcrylicHome).
To create your wildcard domain Web Application for your App Domain, run the following script.
$externalURL = "https://*.pcdemo-apps.net"  # Incoming URL
$backendURL = "https://*.pcdemo-apps.net"   # URL for SharePoint WA
$name = "*.pcdemo-apps.net"                 # WAP published application name
$certFriendlyName = "pcDemo-apps.net"       # Cert Friendly Name
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint
Add-WebApplicationProxyApplication -ExternalPreauthentication PassThrough `
    -ExternalUrl $externalURL `
    -BackendServerUrl $backendURL `
    -name $name `
    -ExternalCertificateThumbprint $thumbprint `
    -ClientCertificatePreauthenticationThumbprint $thumbprint `
    -DisableTranslateUrlInRequestHeaders:$False `
    -DisableTranslateUrlInResponseHeaders:$False `
    -EnableHTTPRedirect:$true 
After you have created all of your Web Applications, if you go into the Remote Access Management Console, your list of Published Web Applications might look similar to this:
There you go! You now have the ability to create all of the published Web Applications that you will need to allow external users into your SharePoint environment safely. It is easy to edit or delete the Web Applications in the GUI if you make a mistake.
The GUI will even supply the PowerShell for later use.
Once you have ADFS up and running, regardless of whether you actually use it, the new WAP server is even better than the previous product. Hopefully this post will help get you started on the road to creating a SharePoint Hybrid environment, or just creating a reverse proxy server to keep your SharePoint (or any other site) safe.

Test It Out!

Click on the links below and notice that they get redirected from port 80 to port 443.


Monday, January 26, 2015

PowerShell for Microsoft Azure- Creating Storage

This is part 4 of the blog series on getting Started with Microsoft Azure.
Part 1: Microsoft Azure- Getting Started
Part 2: PowerShell for Microsoft Azure- Getting Started
Part 3: Microsoft Azure- Create Geo Redundancy and Virtual Networks
Part 4: PowerShell for Microsoft Azure- Creating Storage (This post)
Part 5: PowerShell for Microsoft Azure- Upload VHD Images (In Progress)
Part 6: PowerShell for Microsoft Azure- Create Machines (Coming soon)
Part 7: Active Directory and DNS in the Cloud and Azure AD (Coming Soon)
-----
So what is Azure Storage? Feel free to read up on the Introduction to Azure Storage site to get a good handle on the Azure storage offerings. If you are not going to read it, please at least know that there is a new offering called Premium Storage, which uses SSDs for low-latency IOPS to support I/O-intensive systems like SharePoint and SQL. You can read up more on the Azure Premium Storage site.
Think of Azure Storage as a secure container with a unique namespace that holds all of your storage requirements. You can store all kinds of objects such as blobs, tables, and files.

Let's Get Started

Azure is a great tool for building out servers through its GUI; however, I am not a big fan of the cryptic storage account names that Azure generates if you just create a server before creating your storage.
This cryptic name is carried down to the URL of the storage account.
Not a very user friendly way to go...

Creating Storage- PowerShell

In the second post of this series, PowerShell for Microsoft Azure- Getting Started, we reviewed just that... how to get started. Now it is time to start using some of your PowerShell skills. Let's say, for example, that you wish to upload your own images into a container named "images", and you want that container to live within your "Contoyso East Network". Unfortunately, storage account names need to be all lowercase letters and/or numbers, with no spaces and no dashes (container names must also be lowercase, though they may contain dashes). By default, when you create a storage account, Geo Redundant Storage is automatically enabled. This is perfect if your storage does not require high IOPS. To create a storage account, run the following:
$storageAccountName = "contoysoweststorage1"
$containerName = "images"
# Create a new Azure Storage Account, set GEO Replication Type to Local Redundant Storage (Standard_LRS) for heavy IOPS
$storage = New-AzureStorageAccount -StorageAccountName $storageAccountName -Location "West Europe" # -Type Standard_LRS # Local Redundant Storage
# Get Storage Key information
$key = Get-AzureStorageKey -StorageAccountName $storageAccountName
# Get Context
$context = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $key.Primary
# Create the container to store blob
$container = New-AzureStorageContainer -Name $containerName -Context $context
This would create a Storage Account that looks like this:

And the West Europe container would look like this:

Now, let's say that we need to put our data on SSDs as we are running SharePoint in Azure, and we want to optimize our SQL drives. Ideally you would want to put the drives in the new Premium Storage.
$storageAccountName = "contoysoweststorage2"
$containerName = "images"
# Create a new Azure Storage Account
New-AzureStorageAccount -StorageAccountName $storageAccountName -Location "West Europe" -Type Premium_LRS
# Get Storage Key information
$key = Get-AzureStorageKey -StorageAccountName $storageAccountName
# Get Context
$context = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $key.Primary
# Create the container to store blob
$container = New-AzureStorageContainer -Name $containerName -Context $context

Unfortunately, at the time of this blog post, Azure Premium storage is only available in the West US, East US 2, and West Europe. If your Azure account is not set up correctly, you will receive notice that your "subscription is not authorized for feature Premium Storage".

Create Storage for SQL

Just like with on-premises storage for SQL, for the best optimization of data transfer you want to put your SQL storage on different LUNs. In Azure, you have the ability to break out your disk storage into different LUNs as well.
Now, the VM tier and size that you are using (A0 Standard, for example) determines the number of drives that you can have attached to your VM. So, while architecting your Azure infrastructure/machines, make sure the VMs that you want to use support the number of disks required. I suggest reading through Virtual Machine and Cloud Service Sizes for Azure before building out your VMs.
Remember that in a virtual environment, a disk associated with a specific VM is just another .vhd file that does not have an OS installed/assigned to it. So to add a disk to a SQL Server, it is easier to add the data disk after the VM has already been created. We will go through creating the data disk in Part 6: PowerShell for Azure- Creating Virtual Machines.

Warning

Do not put all of your eggs in one basket. There is a potential for a Storage Account to get corrupted, and you could lose everything. Make sure that you have backups set up to a secondary Geo-Redundant Storage Account. If you have the ability to delay the creation of your backup storage account, it will lessen the likelihood of your Storage Accounts being created in the same storage pool within Azure. Here is an example of another storage strategy:
And once again, any time you are building out anything in Azure, you will want to do it through PowerShell:
# Local Redundant Storage Information
$lrsAccountNames = @("contoysostorage1east","contoysostorage2east","contoysobustorage1east","contoysobustorage2east")
$lrsContainerNames = @("images","sql-drives","data-drives")
# Global Redundant Storage Information
$grsAccountNames = @("contoysosqlbus")
$grsContainerNames = @("production-bus","development-bus")
# Location Information
$location = "East US"
# Pause Time before Creating Backup Storage Account in Minutes
$time = 120
# Create Local Redundant Storage
foreach ($lrsAccountName in $lrsAccountNames) {
    # Create a new Azure Storage Account, set GEO Replication Type to Local Redundant Storage (Standard_LRS) for heavy IOPS
    $storage = New-AzureStorageAccount -StorageAccountName $lrsAccountName -Location $location -Type Standard_LRS # Local Redundant Storage
    Write-Host("$lrsAccountName storage account created...")
    # Get Storage Key information
    $key = Get-AzureStorageKey -StorageAccountName $lrsAccountName
    # Get Context
    $context = New-AzureStorageContext -StorageAccountName $lrsAccountName -StorageAccountKey $key.Primary
    foreach ($lrsContainerName in $lrsContainerNames) {
        # Create the container to store blob
        $container = New-AzureStorageContainer -Name $lrsContainerName -Context $context
        Write-Host("$lrsContainerName container created...")
    }
}
# Pause to create backup storage account
$counter = 0
do {
    Start-Sleep -Seconds 60
    Write-Host("$counter of $time minutes completed...")
    $counter ++
}
While ($counter -le $time)
# Create Global Redundant Storage
foreach ($grsAccountName in $grsAccountNames) {
    # Create a new Azure Storage Account (defaults to Geo Redundant Storage; un-comment -Type Standard_LRS on the next line for Local Redundant Storage)
    $storage = New-AzureStorageAccount -StorageAccountName $grsAccountName -Location $location # -Type Standard_LRS # Local Redundant Storage
    Write-Host("$grsAccountName storage account created...")
    # Get Storage Key information
    $key = Get-AzureStorageKey -StorageAccountName $grsAccountName
    # Get Context
    $context = New-AzureStorageContext -StorageAccountName $grsAccountName -StorageAccountKey $key.Primary
    foreach ($grsContainerName in $grsContainerNames) {
        # Create the container to store blob
        $container = New-AzureStorageContainer -Name $grsContainerName -Context $context
        Write-Host("$grsContainerName container created...")
    }
}

If you take a look at the image of the locally redundant storage, you will notice that there are containers called "vhds". The PowerShell script does not create these containers since Azure will create the containers as needed when the Virtual Machines are created.

Conclusion

Managing your storage is very important when it comes to optimizing your IOPS and system latency. Using PowerShell to create your storage is also a very nice way to make sure that you keep your naming schema clean and user friendly.
Now that the storage infrastructure is in place, it is time to upload a .vhd into the images container. Please see my next post on Uploading VHDs.

Updates

03/31/2015 Added the Warning section about the potential for Storage Account corruption.
04/02/2015 Added image and PowerShell to Warning section