<b>Why back up Azure VMs?</b><br />
The reasons for backing up VMs on-premises and in Azure are similar. Different customer segments derive different value from the solution, but here is a list of commonly encountered reasons for backing up VMs:<br />
<br />
<b>Reduced restore time during disasters</b> The VM contains all the information needed to get the application up and running: the operating system, the application software, the configuration settings, and the data. Piecing these together at restore time would typically take much longer, so restoring the complete VM is much faster.<br />
<br />
<b>Greater suitability for long-term retention</b> The VM includes the user data along with the application software needed to work with it, so restores from very old backup points are more likely to achieve data retrieval goals.<br />
<br />
<b>Easier management</b> The VM acts as an encapsulating entity for the data: rather than managing tens to hundreds of smaller data entities, the backup administrator manages a single unit. This also makes patch rollback scenarios easy to address.<br />
<span style="color: #999999;"><br /></span>
<span style="color: #999999;">Source of Information : Microsoft System Center</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-43961945644530434162017-12-31T17:18:00.000+08:002017-12-31T17:18:10.689+08:00Backup scenariosThis section examines some of the different backup scenarios that can be implemented using Azure Backup.<br />
<br />
<br />
<b>Tape replacement</b><br />
Many organizations store their backup data on-premises on disk media and keep their long-term retention data on tapes, investing in significant tape infrastructure to meet compliance requirements. Besides the cost of that infrastructure, tapes require manual intervention to replace older media, must be labeled correctly, and risk errors and data loss if they are mishandled. To store tape data offsite, organizations must arrange tape pick-up on a daily or weekly basis, and to recover offsite data, they must request all the relevant tapes and then restore the data.<br />
<br />
Cloud-based backup inherently addresses all of the preceding issues. As long as organizations have network connectivity to the cloud provider, backing up to the cloud saves money: they can leverage the cloud's pay-as-you-go model rather than making upfront investments in tape infrastructure.<br />
<br />
Beyond the cost savings, there are other inherent advantages to storing backup data in the cloud:<br />
• Data can be retrieved even if there is a disaster on-premises.<br />
• Restore times are shorter because there is no need to wait for tape delivery from offsite.<br />
• There is no need to restore all data to retrieve a single item.<br />
<br />
Azure Backup addresses the tape replacement scenario well, giving businesses a competitive alternative to tape.<br />
<br />
<br />
<b>Branch office backup</b><br />
Branch offices typically have fewer machines and smaller infrastructure than a large datacenter. However, the data generated in branch offices is often critical for the business. Some organizations back up this data locally in the branch office, which means purchasing additional storage for each branch and managing the complexity of storage and backup infrastructure in every location.<br />
<br />
With more than a handful of branch offices, management complexity multiplies. In that case, some organizations back up their branch office data to the main office instead, but then the main office must purchase the storage necessary to support all the workloads backed up from each branch.<br />
<br />
With cloud-based backup, however, organizations can eliminate their local storage and back up data directly to the cloud. Azure Backup enables businesses to back up their Windows-based servers directly to the cloud, thereby eliminating local storage at each of the branch offices.<br />
<br />
<br />
<b>Windows client backup</b><br />
With Azure Backup, organizations can back up files and folders on their Windows-based desktops and laptops directly to the cloud using an entirely self-service model in which the IT administrator needs to take minimal or no action on behalf of the user. Because the data is encrypted before it leaves the computer, it remains secure. Depending on the sharing needs among individuals in a small organization, users can either share the same vault or each have a dedicated vault or subscription.<br />
<br />
<br />
<b>Protection of Microsoft Azure assets</b><br />
With an ever-increasing number of enterprises and small businesses moving workloads to the cloud, organizations need a simple mechanism to ensure that data created in the cloud is backed up just as it is on-premises. Microsoft Azure inherently provides high availability and storage redundancy, guaranteeing that if there is a storage or compute outage, the application can continue to run on redundant storage or compute resources. With support for backup of Azure IaaS virtual machines (VMs), which is in preview at the time of this writing, organizations gain additional protection: they can protect workload data from software corruption and data loss scenarios as well. In addition, organizations can test their backups by periodically performing a restore.<br />
<br />
<span style="color: #999999;">Source of Information : Microsoft System Center</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-24964048567560468122017-12-30T17:17:00.000+08:002017-12-30T17:17:05.392+08:00Advantages of Azure BackupAzure Backup makes a great case for moving on-premises tape and disk infrastructure to the cloud. As with all cloud solutions, it is cost effective, with a pay-as-you-go model and no upfront costs. But unlike other cloud-connect strategies, Azure Backup is built as a cloud-first software as a service.<br />
<br />
This model has several advantages. The service comes with 99.9 percent availability. When users create a backup vault, the data placed in it is stored in geo-replicated storage, protecting it from disasters; even if there is an outage of one of the Azure datacenters, the data remains accessible.<br />
<br />
But it is not sufficient for the data to be geo-redundant; the service that provides access to the data must also be geo-redundant. Azure Backup is available in two or more regions per geography and has a built-in business continuity plan, so even when the primary Azure datacenter experiences an outage, the service fails over to a new datacenter. Therefore, whether the organization loses on-premises data or the Azure datacenter has an outage, both the data and the backup service remain available for customers to retrieve their data. If Azure fails over to a secondary datacenter, customers can browse all the recovery points associated with a backup, pick any recovery point, and perform a restore, and they can continue backing up data to the service after failover.<br />
<br />
With Azure Backup, data is encrypted before it leaves the on-premises datacenter and remains encrypted both on the wire and at rest in Azure. The Azure Backup service also maintains backup metadata that enables customers to restore data from Azure to an alternate Windows-based or DPM server.<br />
<br />
<span style="color: #999999;">Source of Information : Microsoft System Center</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-37935834222822597912017-12-29T17:08:00.000+08:002017-12-29T17:08:45.335+08:00Recovering tenant VMsAll tenant VMs are deployed with a single parent VHD. DPM’s original location recovery workflow will not work for tenant VMs. Complete the following steps to recover tenant VMs:<br />
<br />
1. In the VMM console, determine the name of the host on which the VM that you want to recover is located by doing the following:<br />
<br />
a. In the VMs And Services workspace, expand All Hosts, and then click Compute Clusters.<br />
<br />
b. In the VMs pane, type the name of the VM.<br />
<br />
c. Note the value in the Host column that is associated with the VM.<br />
<br />
d. Note which compute cluster the host is a member of. (Under Compute Clusters, click each cluster to view the members.)<br />
<br />
e. Right-click the VM, and then click Properties. Click the Hardware Configuration tab. Under Bus Configuration, the VHDs that are attached to the VM are listed. Click the operating system VHD (typically the first one under IDE Devices) to see if there is a VHD chain. Note the value in the Fully Qualified Path To Parent Virtual Hard Disk box (for example, copy and save it to Notepad). If the VM properties are corrupted and you cannot access them, you can skip this step.<br />
<br />
2. In the VMM console, find a tenant share that has enough available capacity to store the recovered VM by doing the following:<br />
<br />
a. In the Fabric workspace, expand Storage, and then click File Servers.<br />
<br />
b. In the File Servers, File Shares pane, expand the file server that is in the same rack as the compute cluster where the Hyper-V host that you identified in step 1c resides.<br />
<br />
c. Use the Available Capacity column to find a TenantShare with enough free space. (This procedure uses the example share \\<prefix>-FS-02.contoso.com\TenantShare14.)<br />
<br />
3. On the Console VM, open Failover Cluster Manager, and connect to the compute cluster of which the host that you identified in step 1c is a member.<br />
<br />
4. Under the cluster name, click Roles.<br />
<br />
5. In the Roles pane, find the cluster resource name of the VM that you want to recover. The name will be in the format SCVMM VMName Resources.<br />
<br />
6. On the Console VM, open Windows PowerShell, and run the following commands to delete the VM. Press Enter after each command. Note that the Hyper-V host is the host on which the VM that you want to recover is located.<br />
Stop-VM -ComputerName HyperVHostName -Name VMName
Remove-VM -ComputerName HyperVHostName -Name VMName
<br />
7. Create a symbolic link to the tenant share that you identified in step 2 by first running the following command:<br />
Enter-PSSession -ComputerName HyperVHostName<br />
In the remote session, run the following commands:<br />
cd c:\
cmd /c "mklink /d DirectoryName \\SharePath"
exit
<br />
8. On the Console VM, find the DPM server that backs up the VM that you want to recover. To do this, complete the following steps:<br />
<br />
a. Open the Operations console.<br />
<br />
b. In the Monitoring workspace, expand System Center 2012 R2 Data Protection Manager, select State Views, and then click Protected Servers.<br />
<br />
c. In the Look For box, enter the cluster resource name of the VM.<br />
<br />
d. In the DPM server column, note the name of the DPM server that backs up the VM.<br />
You can also do this by running the following Windows PowerShell command from the Operations Manager Shell:<br />
Get-SCOMClassInstance | where {$_.DisplayName -like '*clusterresourcename*'} | foreach { $_.'[Microsoft.SystemCenter.DataProtectionManager.<br />
<br />
9. Open the DPM administrator console, and connect to the DPM server that you identified in step 8. Find and note the name of the protection group that the VM that you want to recover was added to.<br />
<br />
10. On the Console VM, recover the VM by running the following Windows PowerShell commands as an elevated user. Press Enter after each command. Note that DPM-TenantVM-0# is the name of the DPM server that you identified in step 8, ProtectionGroupName is the protection group that the VM is a member of, VMName is the NetBIOS name of the VM that you want to recover, and SymbolicLinkOnHyperVHost is the symbolic link that you created earlier, for example c:\test1.<br />
$pg = Get-DPMProtectionGroup -DPMServerName DPM-TenantVM-0# | where {$_.Name -eq "ProtectionGroupName"}
$ds = Get-DPMDatasource -ProtectionGroup $pg | where {$_.Computer -eq "VMName"}
Get-DPMRecoveryPoint -Datasource $ds | select Name, BackupTime ## this is used for display only
$rps = Get-DPMRecoveryPoint -Datasource $ds
$rpo = New-DPMRecoveryOption -HyperVDatasource -TargetServer HyperVHostName -RecoveryLocation AlternateHyperVServer -RecoveryType Recover -TargetLocation SymbolicLinkOnHyperVHost
$rp = $rps[$rps.Length - 1] ## the last element is the latest recovery point; $rps.Length - 2 would be the recovery point before that
$ri = Get-DPMRecoverableItem $rp -BrowseType Child
Recover-RecoverableItem -RecoverableItem $rp -RecoveryOption $rpo
<br />
11. On the Hyper-V host on which the VM is located, open Windows PowerShell as an elevated user.<br />
<br />
12. Perform a storage migration by running the following command where SOFSShare is the share that you identified in step 2.<br />
Move-VMStorage -ComputerName HyperVHostName -VMName VMName -DestinationStoragePath SOFSShare<br />
<br />
13. Re-parent the VM to its original parent that you identified in step 1e by running the following command. (You can skip this step and continue to step 15 if the original VM configuration was corrupted and you could not get this property value in step 1e.)<br />
Get-VMHardDiskDrive VMName | Get-VHD | where {$_.parentPath -ne $null} | Set-VHD -ParentPath "\\SharePathofParentVHD"<br />
<br />
14. Delete the "local" parent VHD (that was just recovered).<br />
<br />
15. Delete the symbolic link. To do this, open a Windows PowerShell session as an elevated user, and then run the following commands. (Press Enter after each command.)<br />
Enter-PSSession -ComputerName HyperVHostName<br />
del DirectoryName<br />
exit<br />
<br />
16. From a Console VM, run the following Windows PowerShell commands to configure the VM as highly available. Press Enter after each command. (You must connect the VM to its original cluster resource role.) In the following commands, VMClusterResourceName is the cluster resource name for the VM (for example, "SCVMM VMName Resources"), ComputeClusterName is the name of the compute cluster on which the Hyper-V host resides, and VMIdValue and VMConfigLocation are the VMId and ConfigurationLocation values returned by the first Get-VM command.<br />
Get-VM -Name VMName | Select VMId, ConfigurationLocation
$res = Get-ClusterResource -Name "VMClusterResourceName" -Cluster ComputeClusterName
Set-ClusterParameter -InputObject $res -Name VMId -Value VMIdValue -Cluster ComputeClusterName
Set-ClusterParameter -InputObject $res -Name VmStoreRootPath -Value "VMConfigLocation" -Cluster ComputeClusterName
<span style="color: #999999;"><br /></span>
<span style="color: #999999;">Source of Information : Microsoft System Center</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-84578550172220458802017-12-28T17:07:00.000+08:002017-12-28T17:07:21.616+08:00Adding tenant VMs to backupAs the new VMs are deployed on the CPS stamp, customers can run a runbook (called Protect-TenantVMs) to protect new tenant VMs that were just created. All VMs are configured to protect once daily with a retention period of one week. Test VMs that do not need DPM protection can be excluded by specifying an exclusion VM list using a runbook (called Add-DPMExclusionItems).<br />
<br />
You must run the Protect-TenantVMs runbook to manage tenant VM protection. This runbook adds up to 75 newly created VMs to a protection group in DPM. Run it manually or through a scheduled task once each day. After a tenant VM is added to a protection group, the tenant VM is by default configured for daily backup with a retention period of seven days. The runbook protects at most 75 new VMs per run per day to leave enough time for tenant VM backups to complete within the backup window and for the deduplication process to finish. If more than 75 new VMs were created (on one rack) and you need to add them to a protection group on the same day, you can run the runbook more than once to protect the additional VMs.<br />
<br />
The data deduplication process reduces backup storage usage. There is a default schedule for data deduplication and for tenant backups.<br />
<br />
You should plan to run the Protect-TenantVMs runbook so that it does not interfere with the backup window. Therefore, run it any time between 6:00 AM and 6:00 PM local time (at least three to four hours before the backup window starts).<br />
<br />
If you need to prevent protection of some VMs, you can run the Add-DPMExclusionItems runbook and specify VM names (wildcard characters are supported) that should be excluded during VM protection.<br />
<br />
<span style="color: #999999;">Source of Information : Microsoft System Center</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-46797436411369584632017-12-27T17:06:00.000+08:002017-12-27T17:06:46.984+08:00Using DPM servers for tenant backupBy default, the CPS installation process provisions eight tenant DPM servers per rack that can be used for tenant backup. These servers are deployed to the compute clusters and use the naming convention DPM-TenantVM-0# (-01 through -08 on the first rack, -09 through 16 on the second rack, and so on). All of these DPM servers are pre-configured and ready to protect.<br />
<br />
To provide spindle isolation and to keep backups on a separate pool, one storage pool on each rack is assigned for backup. This backup pool is configured as dual parity to maintain N+2 redundancy and has total usable capacity of 115.2 TB. Each tenant backup DPM server is provisioned with 20 TB of allocated disk space (20 VHDs of 1 TB each) that is provisioned on 15.4 TB of physical space. This difference between allocated and physical disk space is addressed by data deduplication that runs on the backup pool.<br />
<br />
<span style="color: #999999;">Source of Information : Microsoft System Center</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-84520202786607971682017-12-26T17:04:00.000+08:002017-12-26T17:04:17.788+08:00Recovering infrastructure VMsThere are three infrastructure VMs (for Active Directory, DNS, and DHCP) in the management cluster. All of them are backed up locally by using Windows Server Backup.<br />
<br />
If one or more of the Active Directory/DNS/DHCP instances fails because of corruption or deletion of critical directories, you can use bare metal recovery to recover the instance. The procedure for using bare metal recovery to recover a single instance is described in this section. If there are multiple instance failures, you must repeat this procedure sequentially for all failed instances.<br />
<br />
1. From the VMM console, connect to the failed domain controller VM. Boot the Active Directory/DNS/DHCP server into Windows Recovery Environment (WinRE). The server automatically boots into WinRE if it fails to boot into normal mode twice. If the server boots normally, run the following commands at a command prompt to restart in WinRE mode:<br />
reagentc /boottore
shutdown /r /t 0
<br />
2. In WinRE mode, click Troubleshoot.<br />
<br />
3. On the Advanced Options screen, click System Image Recovery.<br />
<br />
4. Select the Administrator account on the System Image Recovery screen.<br />
<br />
5. Type the password on the next screen.<br />
<br />
6. In the Re-image Your Computer Wizard, you can see the latest available system image for recovery. If you want to recover to an older point in time, click Select A System Image, and choose the desired point in time. Click Next.<br />
<br />
7. Click Next on the Choose Additional Restore Options page.<br />
<br />
8. Click Finish to complete the Re-image Your Computer Wizard. The following screens display the progress of the recovery as all volumes are restored.<br />
<br />
9. In the dialog box that is displayed, click Restart to restart the computer.<br />
<br />
10. After recovery completes, schedule full server backups by using the Windows Server Backup tool, as described in Configure Automatic Backups to a Volume at http://technet.microsoft.com//library/dd851674.aspx. You can do this as follows:<br />
a. On the Select Backup Configuration page, click Full Server (recommended).<br />
b. On the Specify Backup Time page, click Once A Day, and then select 12:00 AM as the backup time.<br />
c. On the Specify Destination Type page, click Back Up To A Volume.<br />
d. On the Select Destination Volume page, select Local Disk (F:) as the destination volume.<br />
<br />
<span style="color: #999999;">Source of Information : Microsoft System Center</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-10989257133439588362017-12-25T17:00:00.000+08:002017-12-25T17:00:32.624+08:00Recovering Virtual Machine ManagerVMM plays a key role in managing the hosts and VMs in the CPS environment. If you have exhausted all options to try to recover from application failure, you can use DPM to recover the VMM database to an older point in time.<br />
<br />
To recover the VMM database, complete the following steps:<br />
1. From the console VM, open Failover Cluster Manager.<br />
<br />
2. Connect to the management cluster.<br />
<br />
3. Shut down the two VMM VMs (<prefix>-VMM-01 and <prefix>-VMM-02) that are located on the management cluster.<br />
<br />
4. Use the steps in the section "Recovering a database to its original location" to recover the VMM database (VirtualManagerDB, in the SCSHAREDDB SQL Server instance). To minimize data loss, be sure to select the latest recovery point.<br />
<br />
5. Open Failover Cluster Manager, and connect to the management cluster.<br />
<br />
6. Start the VMM VMs.<br />
<br />
7. In the VMM console, verify that the content in the Fabric workspace is updated.<br />
<br />
8. Detect and repair any data consistency issues by following the required steps in the “How to use data consistency runbooks” section in the CPS Admin Guide.<br />
<br />
To recover the VMM VMs, complete the following steps:<br />
1. From the console VM, open Failover Cluster Manager.<br />
<br />
2. Connect to the management cluster.<br />
<br />
3. Shut down the two VMM VMs (<prefix>-VMM-01 and <prefix>-VMM-02) that are located on the management cluster.<br />
<br />
4. Use the steps in the "Recovering VMs to their original location" section to recover the VMM VMs. To minimize data loss, be sure to select the latest recovery point.<br />
<br />
5. In Failover Cluster Manager, connect to the management cluster, and then click Roles. In the Roles pane, right-click each VMM VM, and then click Start.<br />
<br />
6. In Failover Cluster Manager, connect to the VMM guest cluster <prefix>-HA-VMM, and then click Roles. If the <prefix>-HA-VMM clustered role is not running, right-click the role, and then click Start Role.<br />
<br />
7. Detect and repair any data consistency issues by following the required steps in the “How to use data consistency runbooks” section in the CPS Admin Guide.<br />
<br />
<span style="color: #999999;">Source of Information : Microsoft System Center</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-63452524311309489032017-12-24T16:51:00.000+08:002017-12-24T16:51:33.398+08:00What is distributed caching? A cache provides high throughput, low-latency access to commonly accessed application data by storing the data in memory. For a cloud app, the most useful type of cache is a distributed cache, which means that the data is not stored in the individual web server's memory but on other cloud resources, and the cached data is made available to all of an application's web servers (or other cloud VMs that are used by the application).<br />
<br />
When the application scales by adding or removing servers, or when servers are replaced because of upgrades or faults, the cached data remains accessible to every server that runs the application.<br />
<br />
By avoiding the high-latency data access of a persistent data store, caching can dramatically improve application responsiveness. For example, retrieving data from cache is much faster than retrieving it from a relational database.<br />
<br />
A side benefit of caching is reduced traffic to the persistent data store, which may result in lower costs when there are data egress charges for the persistent data store.<br />
<br />
<br />
<b>When to use distributed caching</b><br />
Caching works best for application workloads that do more reading than writing of data and when the data model supports the key/value organization that you use to store and retrieve data in cache. Caching is also more useful when application users share a lot of common data; for example, cache would not provide as many benefits if each user typically retrieves data unique to that user. An example where caching could be very beneficial is a product catalog, because the data does not change frequently and all customers are looking at the same data.<br />
<br />
The benefit of caching becomes increasingly measurable the more an application scales, because the throughput limits and latency delays of the persistent data store become more of a limit on overall application performance. However, you might implement caching for reasons other than performance as well. For data that doesn't have to be perfectly up to date when shown to a user, cache access can serve as a circuit breaker for when the persistent data store is unresponsive or unavailable.<br />
<br />
<br />
<b>Popular cache population strategies</b><br />
To be able to retrieve data from cache, you have to store it there first. There are several strategies for getting the data your application needs into the cache:<br />
<br />
• On demand/cache aside The application tries to retrieve data from cache, and when the cache doesn't have the data (a “miss”), the application fetches it from the data store and adds it to the cache so that it will be available the next time. The next time the application tries to get the same data, it finds what it's looking for in the cache (a “hit”). To prevent serving cached data that has changed in the database, you invalidate the cache when making changes to the data store. (A minimal code sketch follows this list.)<br />
<br />
• Background data push Background services push data into the cache on a regular schedule, and the app always pulls from the cache. This approach works great with high-latency data sources that don't require that you always return the latest data.<br />
<br />
• Circuit breaker The application normally communicates directly with the persistent data store, but when the persistent data store has availability problems, the application retrieves data from cache. Data may have been put in cache using either the cache aside or background data push strategy. This is a fault-handling strategy rather than a performance-enhancing strategy.<br />
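<br />
The cache-aside strategy can be sketched in a few lines of C#. This is a minimal illustration, not the Fix It app's actual code; the ICache, IProductRepository, and Product types are hypothetical stand-ins for whatever cache client and repository an app uses.<br />
<br />
using System;
using System.Threading.Tasks;

public class Product { public string Id; public string Name; }

public interface ICache
{
    T Get<T>(string key);
    void Set<T>(string key, T value, TimeSpan expiration);
    void Remove(string key);
}

public interface IProductRepository
{
    Task<Product> FindAsync(string id);
    Task UpdateAsync(Product product);
}

public class ProductService
{
    private readonly ICache cache;          // e.g., a distributed cache client
    private readonly IProductRepository db; // the persistent data store

    public ProductService(ICache cache, IProductRepository db)
    {
        this.cache = cache;
        this.db = db;
    }

    public async Task<Product> GetProductAsync(string id)
    {
        Product product = cache.Get<Product>("product:" + id); // try the cache first
        if (product == null)                                   // miss
        {
            product = await db.FindAsync(id);                  // fetch from the store
            cache.Set("product:" + id, product, TimeSpan.FromMinutes(30)); // available next time
        }
        return product;                                        // hit
    }

    public async Task UpdateProductAsync(Product product)
    {
        await db.UpdateAsync(product);
        cache.Remove("product:" + product.Id); // invalidate so stale data isn't served
    }
}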
<br />
<br />
To keep data in the cache current, you can delete related cache entries when your application creates, updates, or deletes data. If it's all right for your application to sometimes get data that is slightly out of date, you can rely on a configurable expiration time to set a limit on how old cache data can be.<br />
<br />
You can configure absolute expiration (the amount of time since the cache item was created) or sliding expiration (the amount of time since a cache item was last accessed). Absolute expiration is used when you depend on the cache expiration mechanism to prevent data from becoming too stale. Regardless of the expiration policy you choose, the cache will automatically evict the oldest (least recently used, or LRU) items when the cache's memory limit is reached.<br />
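<br />
To make the two expiration policies concrete, here is a brief sketch using the in-process System.Runtime.Caching.MemoryCache that ships with .NET; distributed cache clients expose equivalent absolute and sliding expiration settings. The cache keys, time windows, and loader methods are illustrative only.<br />
<br />
using System;
using System.Runtime.Caching;

class ExpirationDemo
{
    static void Main()
    {
        MemoryCache cache = MemoryCache.Default;

        // Absolute expiration: evicted five minutes after creation,
        // however often the item is read; caps how stale data can get.
        cache.Set("catalog", LoadCatalog(), new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(5)
        });

        // Sliding expiration: evicted only after 20 minutes without access,
        // so frequently used items stay cached.
        cache.Set("profile:42", LoadProfile(42), new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromMinutes(20)
        });
    }

    static object LoadCatalog() { return new object(); }       // placeholder loader
    static object LoadProfile(int id) { return new object(); } // placeholder loader
}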
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-2860362547781851362017-12-23T16:49:00.000+08:002017-12-23T16:49:58.406+08:00Circuit breakersThere are several reasons why you don’t want to retry too many times over too long a period:<br />
<br />
• Too many users persistently retrying failed requests might degrade other users’ experience. If millions of people are all making repeated retry requests, you could tie up IIS dispatch queues and prevent your app from servicing requests that it otherwise could handle successfully.<br />
<br />
• If everyone is retrying an operation because of a service failure, so many requests could be queued up that the service gets flooded when it starts to recover.<br />
<br />
• If the error is the result of throttling and there’s a window of time the service uses for throttling, continued retries could move that window out and cause the throttling to continue.<br />
<br />
• You might have a user waiting for a webpage to render. Making people wait too long might be more annoying than relatively quickly advising them to try again later.<br />
<br />
Exponential back-off addresses some of these issues by limiting the frequency of retries that a service receives from your application. But you also need circuit breakers: at a certain retry threshold, your app stops retrying and takes some other action, such as one of the following:<br />
<br />
• Custom fallback. If you can’t get a stock price from Reuters, maybe you can get it from Bloomberg; or if you can’t get data from the database, maybe you can get it from cache.<br />
<br />
• Fail silently. If what you need from a service isn’t all-or-nothing for your app, just return null when you can’t get the data. For example, if you're displaying a Fix It task and the Blob service isn't responding, you could display the task details without the image.<br />
<br />
• Fail fast. Error out the user to avoid flooding the service with retry requests that could cause service disruption for other users or extend a throttling window. You can display a friendly “try again later” message.<br />
<br />
There is no one-size-fits-all retry policy. You can retry more times and wait longer in an asynchronous background worker process than you would in a synchronous web app where a user is waiting for a response, and you can wait longer between retries for a relational database service than for a cache service. A "fast first" policy, meaning no delay before the first retry, is common for user-facing requests.<br />
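<br />
To make the back-off and threshold ideas concrete, the following sketch combines a fast first retry, exponential back-off, and a retry threshold that trips a simple breaker. It is a simplified example under stated assumptions (a production circuit breaker would also remember its open state across requests for a cool-down period), not a prescribed implementation.<br />
<br />
using System;
using System.Threading.Tasks;

public static class Resilience
{
    // Retry an operation with a fast first retry and exponential back-off.
    // After maxAttempts failures, stop retrying ("open the circuit") and
    // return the fallback result (custom fallback, fail silently, or fail fast).
    public static async Task<T> ExecuteAsync<T>(
        Func<Task<T>> operation, Func<T> fallback, int maxAttempts = 4)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception)
            {
                if (attempt == maxAttempts) break;  // threshold reached: give up
                if (attempt > 1)                    // fast first: retry #1 immediately
                {
                    double seconds = Math.Pow(2, attempt - 1); // 2s, 4s, 8s, ...
                    await Task.Delay(TimeSpan.FromSeconds(seconds));
                }
            }
        }
        return fallback(); // e.g., cached data, null, or a "try again later" signal
    }
}
<br />
A caller might wrap a stock-price lookup in ExecuteAsync and pass a cache read as the fallback; in a synchronous web app you would use fewer attempts and shorter delays than in a background worker, per the guidance above.<br />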
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-4231829567441636922017-12-22T16:47:00.000+08:002017-12-22T16:47:41.618+08:00Built-in logging support in AzureAzure supports the following kinds of logging in the Websites service:<br />
<br />
• System.Diagnostics tracing (you can turn it on and off and set levels on the fly without restarting the site)<br />
<br />
• Windows events<br />
<br />
• IIS logs (HTTP/FREB)<br />
<br />
<br />
Azure supports the following kinds of logging in Cloud Services:<br />
• System.Diagnostics tracing<br />
<br />
• Performance counters<br />
<br />
• Windows events<br />
<br />
• IIS logs (HTTP/FREB)<br />
<br />
• Custom directory monitoring<br />
<br />
The Fix It app uses System.Diagnostics tracing. All you need to do to enable System.Diagnostics logging in an Azure website is flip a switch in the portal or call the REST API. In the portal, click the Configuration tab for your site and scroll down to see the Application Diagnostics section. You can turn logging on or off and select the logging level you want. You can have Azure write the logs to the file system or to a storage account.<br />
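<br />
On the application side this is plain System.Diagnostics tracing; a minimal sketch follows. The class and method names are illustrative, and no listener configuration appears in the code because Azure wires up the trace listener for you once Application Diagnostics is enabled.<br />
<br />
using System;
using System.Diagnostics;

public class TaskProcessor
{
    public void Process(string taskId)
    {
        Trace.TraceInformation("Processing task {0}", taskId); // Info level
        try
        {
            // ... do the work ...
        }
        catch (Exception ex)
        {
            Trace.TraceError("Task {0} failed: {1}", taskId, ex); // Error level
            throw;
        }
    }
}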
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-62978563851342881102017-12-21T16:44:00.000+08:002017-12-21T16:44:22.890+08:00Log for insightA telemetry package is a good first step, but you still have to instrument your own code. The telemetry service tells you when there’s a problem and tells you what customers are experiencing, but it may not give you a lot of insight into what’s going on in your code.<br />
<br />
You don't want to have to remote into a production server to see what your app is doing. That might be practical when you have one server, but what about when you've scaled to hundreds of servers and you don't know which ones you need to remote into? Your logging should provide enough information that you can analyze, debug, and isolate issues solely through the logs, without ever remoting into production servers.<br />
<br />
<br />
<b>Log in production</b><br />
A lot of people turn on tracing in production only when there’s a problem and they want to debug. This approach can introduce a substantial delay between the time you become aware of a problem and the time you obtain useful troubleshooting information about it. And the information you get might not be helpful for intermittent errors.<br />
<br />
What we recommend for the cloud environment, where storage is cheap, is that you always leave logging on in production. That way, when errors happen, you already have them logged and have historical data that can help you analyze issues that develop over time or happen regularly at different times. You could automate a purge process to delete old logs, but you might find that it's more expensive to set up such a process than it is to keep the logs.<br />
<br />
The added expense of logging is trivial compared with the amount of troubleshooting time and money you can save by having all the information you need already available when something goes wrong. Then, when someone tells you they had a random error sometime around 8:00 last night, but they don’t remember the error, you can readily find out what the problem was.<br />
<br />
For less than $4 a month, you can keep 50 gigabytes of logs on hand, and the performance impact of logging is trivial so long as you keep one thing in mind: be sure your logging library is asynchronous.<br />
<br />
<br />
<b>Differentiate logs that inform from logs that require action</b><br />
Logs are meant to INFORM (I want you to know something) or ACT (I want you to do something). Be careful to write ACT logs only for issues that genuinely require a person or an automated process to take action. Too many ACT logs will create noise, requiring too much work to sift through all the log records to find genuine issues. And if your ACT logs trigger some action, such as sending email to support staff, avoid having a single issue trigger thousands of such actions.<br />
<br />
In .NET System.Diagnostics tracing, logs can be assigned to the Error, Warning, Info, or Debug/Verbose level. You can differentiate ACT from INFORM logs by reserving the Error level for ACT logs and using the lower levels for INFORM logs.<br />
<br />
<br />
<b>Configure logging levels at run time</b><br />
While it’s worthwhile to always have logging on in production, another best practice is to implement a logging framework that enables you to adjust at run time the level of detail that you’re logging, without redeploying or restarting your application. For example, when you use the tracing facility in System.Diagnostics, you can create Error, Warning, Info, and Debug/Verbose logs. We recommend that you always log Error, Warning, and Info logs in production and be able to dynamically add Debug/Verbose logging for troubleshooting on a case-by-case basis.<br />
<br />
The Azure Websites service has built-in support for writing System.Diagnostics logs to the file system, Table storage, or Blob storage. You can select different logging levels for each storage destination, and you can change the logging level on the fly without restarting your application. Logging support in Blob storage makes it easier to run HDInsight analysis jobs on your application logs because HDInsight knows how to work with Blob storage directly.<br />
<br />
<br />
<b>Log exceptions</b><br />
Don't just put exception.ToString() in your logging code. A raw exception string buries important details, and in the case of SQL errors it leaves out the SQL error number. For all exceptions, log the context information, the exception itself, and the inner exceptions to be sure you provide everything needed for troubleshooting. For example, context information might include the server name, a transaction identifier, and a user name (but not the password or any secrets!).<br />
<br />
Not every developer will do the right thing with exception logging if you rely on them to do so individually. To ensure that logging is done the right way every time, build exception handling into your logger interface: pass the exception object itself to the logger class and log the exception data properly in the logger class.<br />
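<br />
A sketch of what that centralization might look like follows. The Logger class here is hypothetical, but it shows the idea: callers pass the exception object itself, and the logger flattens inner exceptions and the SQL error number into the log record.<br />
<br />
using System;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Text;

public static class Logger
{
    // Callers pass context (server name, transaction id, user name; never secrets)
    // plus the exception object itself; the logger extracts the rest.
    public static void LogError(string context, Exception ex)
    {
        var sb = new StringBuilder(context);
        for (Exception e = ex; e != null; e = e.InnerException) // walk inner exceptions
        {
            sb.AppendFormat(" | {0}: {1}", e.GetType().Name, e.Message);
            var sqlEx = e as SqlException;
            if (sqlEx != null)
                sb.AppendFormat(" (SQL error number {0})", sqlEx.Number); // keep the error number
        }
        Trace.TraceError(sb.ToString());
    }
}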
<br />
<br />
<b>Log calls to services</b><br />
We highly recommend that you write a log every time your app calls out to a service, whether to a database, a REST API, or any external service. Include in your logs not only an indication of success or failure but how long each request took. In the cloud environment you’ll often see problems related to slowdowns rather than complete outages. Something that normally takes 10 milliseconds might suddenly start taking a second. When someone tells you your app is slow, you want to be able to look at New Relic or whichever telemetry service you have and validate the user’s experience, and then you want to be able to look at your own logs to dive into the details of why your app is slow.<br />
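<br />
One low-ceremony way to get both the outcome and the duration into the logs is a wrapper like the following sketch; the service name passed in any real call would be your own label for the dependency.<br />
<br />
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class TimedCalls
{
    // Wraps an outbound call so success/failure and elapsed time are always logged.
    public static async Task<T> CallAsync<T>(string serviceName, Func<Task<T>> call)
    {
        Stopwatch watch = Stopwatch.StartNew();
        try
        {
            T result = await call();
            Trace.TraceInformation("{0} succeeded in {1} ms", serviceName, watch.ElapsedMilliseconds);
            return result;
        }
        catch (Exception ex)
        {
            Trace.TraceError("{0} failed after {1} ms: {2}", serviceName, watch.ElapsedMilliseconds, ex);
            throw;
        }
    }
}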
<br />
<br />
<b>Use an ILogger interface</b><br />
What Microsoft recommends doing when you create a production application is to create a simple ILogger interface and stick some methods in it. This makes changing the logging implementation later much easier, and you don’t have to go through all your code to do it. We could use the System.Diagnostics.Trace class throughout the Fix It app, but instead we’re using it under the covers in a logging class that implements ILogger, and we make ILogger method calls throughout the app.<br />
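<br />
The Fix It app's actual interface has more methods than shown here, but a minimal sketch of the pattern looks like this:<br />
<br />
using System;
using System.Diagnostics;

public interface ILogger
{
    void Information(string message, params object[] args);
    void Warning(string message, params object[] args);
    void Error(Exception ex, string message, params object[] args);
}

// Default implementation; application code depends only on ILogger,
// so this class can later be replaced without touching the callers.
public class TraceLogger : ILogger
{
    public void Information(string message, params object[] args)
    {
        Trace.TraceInformation(message, args);
    }

    public void Warning(string message, params object[] args)
    {
        Trace.TraceWarning(message, args);
    }

    public void Error(Exception ex, string message, params object[] args)
    {
        Trace.TraceError(string.Format(message, args) + " Exception: " + ex);
    }
}
<br />
Swapping in NLog or another framework then means writing one new ILogger implementation rather than touching every call site.<br />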
<br />
With an approach such as this, if you ever want to make your logging richer, you can replace System.Diagnostics.Trace with whatever logging mechanism you want. For example, as your app grows, you might decide that you want to use a more comprehensive logging package, such as NLog or Enterprise Library Logging Application Block. (Log4Net is another popular logging framework, but it doesn't perform asynchronous logging.)<br />
<br />
One reason for using a framework such as NLog is to divide logging output into separate high-volume and high-value data stores. Doing that helps you efficiently store large volumes of INFORM data that you don’t need to execute fast queries against, while maintaining quick access to ACT data.<br />
<br />
<br />
<b>Semantic logging</b><br />
For a relatively new way to do logging that can produce more useful diagnostic information, see Enterprise Library Semantic Logging Application Block (SLAB). SLAB uses Event Tracing for Windows (ETW) and EventSource support in .NET 4.5 to enable you to create more structured and queryable logs. You define a different method for each type of event that you log, which enables you to customize the information you write. For example, to log a SQL Database error you might call a LogSQLDatabaseError method. For that kind of exception, you know that a key piece of information is the error number, so you could include an error number parameter in the method’s signature and record the error number as a separate field in the log record you write. Because the number is in a separate field, you can more easily and reliably get reports based on SQL error numbers than you could if you were just concatenating the error number into a message string.<br />
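<br />
A brief sketch of the EventSource pattern that SLAB builds on is shown below; the event source name, event IDs, and the second method are invented for illustration.<br />
<br />
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-FixIt")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    // The SQL error number is a separate, strongly typed payload field,
    // so reports can filter and group on it reliably.
    [Event(1, Level = EventLevel.Error)]
    public void LogSQLDatabaseError(int errorNumber, string message)
    {
        if (IsEnabled()) WriteEvent(1, errorNumber, message);
    }

    [Event(2, Level = EventLevel.Informational)]
    public void TaskCreated(string taskId, string owner)
    {
        if (IsEnabled()) WriteEvent(2, taskId, owner);
    }
}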
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-80564141287805862382017-12-20T16:41:00.000+08:002017-12-20T16:41:22.341+08:00Buy or rent a telemetry solutionOne of the things that’s great about the cloud environment is that it’s really easy to buy or rent your way to victory. Telemetry is an example. Without a lot of effort, you can get a really good telemetry system up and running, very cost-effectively. There are a bunch of great Microsoft partners that integrate with Azure, and some of them have free tiers—so you can get basic telemetry for nothing. Here are just a few of the ones currently available on Azure:<br />
<br />
• New Relic<br />
• AppDynamics<br />
• MetricsHub<br />
• Dynatrace<br />
<br />
As of June 2014, Microsoft Application Insights for Visual Studio Online has not been fully released but is available in preview. Microsoft System Center also includes monitoring features.<br />
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span><br />
<b>SLAs</b><br />
People often hear about service-level agreements (SLAs) in the cloud environment. Basically, these are promises that companies make about how reliable their service is. A 99.9 percent SLA means you should expect the service to be working correctly 99.9 percent of the time. That's a fairly typical value for an SLA, and it sounds like a very high number, but you might not realize how much downtime 0.1 percent actually amounts to. Here's a table that shows how much downtime various SLA percentages amount to over a year, a month, and a week.<br />
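<br />
Derived directly from those percentages (using a 365-day year, a 30-day month, and a 7-day week), the downtime allowances work out roughly as follows:<br />
<br />
SLA        Downtime per year    Downtime per month   Downtime per week
99%        87.6 hours           7.2 hours            1.68 hours
99.9%      8.76 hours           43.2 minutes         10.1 minutes
99.95%     4.38 hours           21.6 minutes         5.04 minutes
99.99%     52.56 minutes        4.32 minutes         1.01 minutes
99.999%    5.26 minutes         25.9 seconds         6.05 seconds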
<br />
So a 99.9 percent SLA means your service could be down 8.76 hours a year or 43.2 minutes a month. That’s more downtime than most people realize. As a developer, you want to be aware that a certain amount of downtime is possible and handle it in a graceful way. At some point someone is going to be using your app, and a service is going to be down, and you want to minimize the negative impact of that on the customer.<br />
<br />
One thing you should know about an SLA is what time frame it refers to: is the clock reset every week, every month, or every year? In Azure, the clock is reset every month, which is better for you than a yearly SLA, since a yearly SLA could hide bad months by offsetting them with a series of good months.<br />
<br />
Of course, Microsoft aspires to do better than the SLA; usually, your app will be down much less than the amounts shown in the previous table. The promise is that if Azure's services are ever down for longer than the maximum downtime, you can ask for money back. The amount you get back probably won't fully compensate for the business impact of the excess downtime, but that aspect of the SLA acts as an enforcement policy and lets you know that Microsoft takes its SLA levels very seriously.<br />
<br />
<br />
<b>Composite SLAs</b><br />
An important thing to think about when you're looking at SLAs is the impact of using multiple services in an app, with each service having a separate SLA. For example, the Fix It app uses the Websites, Storage, and SQL Database services, each of which carried its own SLA on the order of 99.9 percent as of June 2014 (a 99.99 percent SLA is available for Storage at extra cost).<br />
<br />
What is the maximum downtime you would expect for the app on the basis of these service SLAs? You might think that your downtime would be equal to the worst SLA percentage, or 99.9 percent in this case. That would be true if all three services always failed at the same time, but that isn’t necessarily what actually happens. Each service may fail independently at different times, so you have to calculate the composite SLA by multiplying the individual SLA numbers.<br />
<br />
This calculation means that your app could be down not just 43.2 minutes a month but roughly two and a half times that amount (108 minutes a month) and still be within the Azure SLA limits.<br />
<br />
This issue is not unique to Azure. Microsoft actually offers the best cloud SLAs of any cloud service available, and you’ll have similar issues to deal with if you use any vendor’s cloud services. What this highlights is the importance of thinking about how you can design your app to handle the inevitable service failures gracefully, because they might happen often enough to impact your customers or users.<br />
<br />
<br />
<b>Cloud SLAs compared with enterprise downtime experience</b><br />
People sometimes say, “In my enterprise app I never have these problems.” If you ask how much downtime they actually have per month, they usually say, “Well, it happens occasionally.” And if you ask how often, they admit that, “Sometimes we do need to back up or install a new server or update software.” Of course, that counts as downtime. Most enterprise apps, unless they are especially mission-critical, are actually down for more than the amount of time allowed by Microsoft’s service SLAs. But when it’s your server and your infrastructure and you’re responsible for it and in control of it, you tend to feel less angst about down times. In a cloud environment, you’re dependent on someone else, and you don’t know what’s going on, so you might tend to be more worried about it.<br />
<br />
When an enterprise achieves a greater uptime percentage than comes with a cloud SLA, it does so by spending a lot more money on hardware. A cloud service could do that but would have to charge much more for its services. Instead, you take advantage of a cost-effective service and design your software so that the inevitable failures cause minimum disruption to your customers. Your job as a cloud app designer is not so much to avoid failure as to avoid catastrophe, and you do that by focusing on software, not on hardware. Whereas enterprise apps strive to maximize mean time between failures, cloud apps strive to minimize mean time to recover.<br />
<br />
<br />
<b>Not all cloud services have SLAs</b><br />
Be aware also that not every cloud service even has an SLA. If your app is dependent on a service with no uptime guarantee, your app could be down far longer than you might imagine. For example, if you enable login to your site using a social provider such as Facebook or Twitter, check with the service provider to find out whether there is an SLA, and you might find there isn’t one. But if the authentication service goes down or is unable to support the volume of requests you throw at it, your customers are locked out of your app. You could be down for days or longer. The creators of one new app expected hundreds of millions of downloads and took a dependency on Facebook authentication—but they didn’t talk to Facebook before going live and discovered too late that there was no SLA for that service.<br />
<br />
<br />
<b>Not all downtime counts toward SLAs</b><br />
Some cloud services may deliberately deny service if your app overuses them. This is called throttling. If a service has an SLA, it should state the conditions under which your app might be throttled, and your app design should avoid those conditions and react appropriately to the throttling if it happens. For example, if requests to a service start to fail when you exceed a certain number of requests per second, you want to be sure that automatic retries don't happen so fast that they cause the throttling to continue.<br />
<span style="color: #999999;"><br /></span>
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-81737930118536716232017-12-18T16:30:00.000+08:002017-12-18T16:30:26.876+08:00Design to survive failures - Failure scopeYou also have to think about failure scope—whether a single machine is affected, a whole service such as SQL Database or Storage, or an entire region.<br />
<br />
<b>Machine failures</b><br />
In Azure, a failed server is automatically replaced by a new one, and a well-designed cloud app recovers from this kind of failure automatically and quickly. Earlier, we stressed the scalability benefits of a stateless web tier, and ease of recovery from a failed server is another benefit of statelessness. Ease of recovery is also one of the benefits of platform-as-a-service (PaaS) features such as SQL Database and Websites. Hardware failures are rare, but when they occur, these services handle them automatically; you don’t even have to write code to handle machine failures when you’re using one of these services.<br />
<br />
<br />
<b>Service failures</b><br />
Cloud apps typically use multiple services. For example, the Fix It app uses the SQL Database service and the Storage service, and it’s deployed to the Websites service. What will your app do if one of the services you depend on fails? For some service failures a friendly “Sorry, try again later” message might be the best you can do. But in many scenarios you can do better. For example, when your back-end data store is down, you can accept user input, display “Your request has been received,” and store the input someplace else temporarily. Then, when the service you need is operational again, you can retrieve the input and process it.<br />
<br />
The Fix It app stores tasks in SQL Database, but it doesn't have to quit working when SQL Database is down. One approach is to store user input for a task in a queue and use a worker process to read the queue and update the task. If SQL Database is down, the ability to create Fix It tasks is unaffected; the worker process can simply wait and process new tasks when SQL Database is available again. A sketch of this pattern follows.<br />
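<br />
This sketch uses the Azure Storage client library for .NET; the queue name and the TaskWriter type are illustrative assumptions, not the Fix It app's actual code.<br />
<br />
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Newtonsoft.Json;

public class TaskWriter
{
    private readonly CloudQueue queue;

    public TaskWriter(string storageConnectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
        queue = account.CreateCloudQueueClient().GetQueueReference("fixit-tasks");
        queue.CreateIfNotExists();
    }

    // Accept user input even when SQL Database is down: enqueue it now;
    // a worker process reads the queue and writes to the database when it can.
    public void SubmitTask(object task)
    {
        string json = JsonConvert.SerializeObject(task);
        queue.AddMessage(new CloudQueueMessage(json));
    }
}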
<br />
<br />
<b>Region failures</b><br />
Entire regions may fail. A natural disaster might destroy a datacenter, it might be flattened by a meteor, or the trunk line into the datacenter could be cut by a farmer burying a cow with a backhoe. If your app is hosted in the stricken datacenter, what do you do? It's possible to set up your app in Azure to run in multiple regions simultaneously so that if a disaster occurs in one, your app continues running in another region. Such failures are extremely rare, and most apps don't jump through the hoops necessary to ensure uninterrupted service through failures of this sort. See the Resources section at the end of the chapter for information about how to keep your app available even through a region failure.<br />
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-63267881880678261032017-12-17T16:26:00.000+08:002017-12-17T16:26:41.165+08:00What is Blob storage?The Azure Blob Storage service provides a way to store files in the cloud. The Blob service has a number of advantages over storing files in a local network file system:<br />
<br />
• It's highly scalable. A single storage account can store 100 terabytes, and you can have multiple storage accounts. Some of the biggest Azure customers store hundreds of petabytes. Microsoft OneDrive uses Blob storage.<br />
<br />
• It's durable. Every file you store in the Blob service is automatically backed up.<br />
<br />
• It provides high availability. The SLA for Storage promises 99.9 percent or 99.99 percent uptime, depending on which geo-redundancy option you choose.<br />
<br />
• It's a platform-as-a-service (PaaS) feature of Azure, which means you just store and retrieve files, paying only for the actual amount of storage you use, and Azure automatically takes care of setting up and managing all of the VMs and disk drives required for the service.<br />
<br />
• You can access the Blob service by using a REST API or by using a programming language API. SDKs are available for .NET, Java, Ruby, and other languages.<br />
<br />
• When you store a file in the Blob service, you can easily make it publicly available over the Internet.<br />
<br />
• You can secure files in the Blob service so that they can be accessed only by authorized users, or you can provide temporary access tokens that make the files available only for a limited period of time, as sketched after this list.<br />
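<br />
The last two points can be sketched with the Azure Storage client library for .NET; the container name and one-hour expiry window below are illustrative assumptions.<br />
<br />
using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class PhotoStore
{
    private readonly CloudBlobContainer container;

    public PhotoStore(string connectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        container = account.CreateCloudBlobClient().GetContainerReference("images");
        container.CreateIfNotExists();
    }

    // Upload a file and return a URL carrying a shared access signature (SAS),
    // a temporary token that grants read access for a limited period of time.
    public string UploadWithTemporaryAccess(string localPath, string blobName)
    {
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        using (FileStream stream = File.OpenRead(localPath))
        {
            blob.UploadFromStream(stream);
        }

        string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
        });
        return blob.Uri + sas;
    }
}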
<br />
Anytime you're building an app for Azure and you want to store a lot of data that in an on-premises environment would go in files—such as images, videos, PDFs, spreadsheets, and so on—consider the Blob service.<br />
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-16516100191453125902017-12-16T15:37:00.000+08:002017-12-16T15:37:34.846+08:00Vertical partitioningVertical portioning is like splitting up a table by columns: one set of columns goes into one data store, and another set of columns goes into a different data store.<br />
<br />
When you represent the data as a table and look at the different varieties of data, you can see that the columns holding string data can be efficiently stored by a relational database, whereas columns holding images are essentially byte arrays that come from image files. It's possible to store image-file data in a relational database, and a lot of people do that because they don't want to save the data to the file system: they might not have a file system capable of storing the required volumes of data, or they might not want to manage a separate backup and restore system. This approach works well for on-premises databases and for small amounts of data in cloud databases. In the on-premises environment, it might be easier to just let the database administrator (DBA) take care of everything.<br />
<br />
But in a cloud database, storage is relatively expensive, and a high volume of images could make the size of the database grow beyond the limits at which it can operate efficiently. You can address these problems by partitioning the data vertically, which means you choose the most appropriate data store for each column in your table of data. What might work best for this example is to put the string data in a relational database and the images in Blob storage.<br />
<br />
Storing images in Blob storage instead of in a database is more practical in the cloud than in an on-premises environment because you don't have to worry about setting up file servers or managing backup and restore of data stored outside the relational database: all of that is handled for you by the Blob storage service.<br />
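<br />
In code, the split can be as simple as keeping a blob URL in the relational row. The following sketch assumes a hypothetical ITaskDatabase abstraction standing in for an Entity Framework context; the FixItTask shape is illustrative.<br />
<br />
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

// The relational row holds only the string columns plus a pointer to the image.
public class FixItTask
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Owner { get; set; }
    public string Notes { get; set; }
    public string PhotoUrl { get; set; }   // blob URL, not the image bytes
}

public interface ITaskDatabase
{
    void Add(FixItTask task);
    Task SaveChangesAsync();
}

public class TaskStore
{
    private readonly ITaskDatabase db;
    private readonly CloudBlobContainer container;

    public TaskStore(ITaskDatabase db, CloudBlobContainer container)
    {
        this.db = db;
        this.container = container;
    }

    public async Task CreateTaskAsync(FixItTask task, Stream photo)
    {
        // Image bytes go to inexpensive Blob storage...
        CloudBlockBlob blob = container.GetBlockBlobReference(Guid.NewGuid() + ".jpg");
        await blob.UploadFromStreamAsync(photo);
        task.PhotoUrl = blob.Uri.ToString();

        // ...while the string data goes to the relational database.
        db.Add(task);
        await db.SaveChangesAsync();
    }
}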
<br />
Without this partitioning scheme, and assuming an average image size of 3 megabytes (MB), the Fix It app would be able to store only about 40,000 tasks before it hit the maximum database size of 150 gigabytes. After removing the images, the database can store 10 times as many tasks; the application can handle a much larger number of people before you have to think about implementing a horizontal partitioning scheme. And as the app scales, your expenses grow more slowly because the bulk of your storage needs are going into very inexpensive Blob storage.<br />
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-56755122790636749162017-12-15T15:35:00.000+08:002017-12-15T15:35:21.803+08:00The three Vs of data storageTo determine whether you need a partitioning strategy and what it should be, consider three questions about your data:<br />
<br />
• Volume How much data will you ultimately store? A couple gigabytes? A couple hundred gigabytes? Terabytes? Petabytes?<br />
<br />
• Velocity What is the rate at which your data will grow? Is it an internal app that isn’t generating a lot of data? An external app to which customers will be uploading images and videos?<br />
<br />
• Variety What type of data will you store? Relational, images, key-value pairs, social graphs?<br />
<br />
If you think you’re going to have a lot of volume, velocity, or variety, you have to carefully consider what kind of partitioning scheme will best enable your app to scale efficiently and effectively as it grows, and to ensure that you don’t run into any bottlenecks.<br />
<br />
There are basically three approaches to partitioning:<br />
• Vertical partitioning<br />
• Horizontal partitioning<br />
• Hybrid partitioning<br />
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-47257720459706699882017-12-14T15:10:00.000+08:002017-12-14T15:10:17.320+08:00Choosing a data storage optionNo one approach is right for all scenarios. If anyone says that a particular technology is the answer, the first thing to ask is "What is the question?" because different solutions are optimized for different things. The relational model has definite advantages; that’s why it’s been around for so long. But there are also downsides to SQL that can be addressed with a NoSQL solution.<br />
<br />
Often, what we see work best is a composite approach in which SQL and NoSQL are used in a single solution. Even when people say they’re embracing NoSQL, a closer look reveals that they’re using several different NoSQL frameworks—they’re using CouchDB, Redis, and Riak for different things. Even Facebook, which uses NoSQL solutions extensively, uses different NoSQL frameworks for different parts of the service. The flexibility to mix and match data storage approaches is one of the qualities that’s nice about the cloud; it’s easy to use multiple data solutions and integrate them in a single app.<br />
<br />
Here are some questions to think about when you’re choosing an approach:<br />
<br />
<b>Data semantic</b><br />
What is the core data storage and data access semantic (are you storing relational or unstructured data)?<br />
Unstructured data such as media files fits best in Blob storage; a collection of related data such as products, inventories, suppliers, customer orders, etc., fits best in a relational database.<br />
<br />
<br />
<b>Query support</b><br />
How easy is it to query the data?<br />
What types of questions can be efficiently asked?<br />
<br />
Key/value data stores are very good at getting a single row when given a key value, but they are not so good for complex queries. For a user-profile data store in which you are always getting the data for one particular user, a key/value data store could work well. For a product catalog from which you want to get different groupings based on various product attributes, a relational database might work better.<br />
<br />
NoSQL databases can store large volumes of data efficiently, but you have to structure the database around how the app queries the data, and this makes ad hoc queries harder to do. With a relational database, you can build almost any kind of query.<br />
<br />
<br />
<b>Functional projection</b><br />
Can questions, aggregations, and so on be executed on the server?<br />
<br />
If you run SELECT COUNT(*) from a table in SQL, the DBMS will very efficiently do all the work on the server and return the number you’re looking for. If you want the same calculation from a NoSQL data store that doesn't support aggregation, this operation is an inefficient “unbounded query” and will probably time out. Even if the query succeeds, you have to retrieve all the data from the server, bring it to the client, and count the rows there.<br />
<br />
What languages or types of expressions can be used?<br />
With a relational database, you can use SQL. With some NoSQL databases, such as Azure Table storage, you’ll be using OData, and all you can do is filter on the primary key and get projections (select a subset of the available fields).<br />
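<br />
The contrast looks roughly like this in C#, assuming a SQL table and an Azure table that both hold FixItTasks data (the names are illustrative): the relational count runs entirely on the server, while the Table storage version must pull every entity to the client.<br />
<pre>
using System;
using System.Data.SqlClient;
using System.Linq;
using Microsoft.WindowsAzure.Storage.Table;

class CountSketch
{
    static void CountRows(string sqlConnectionString, CloudTable table)
    {
        // Relational: the DBMS aggregates on the server and returns one number.
        using (var conn = new SqlConnection(sqlConnectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM FixItTasks", conn))
        {
            conn.Open();
            int serverSideCount = (int)cmd.ExecuteScalar();
            Console.WriteLine(serverSideCount);
        }

        // Table storage: no server-side aggregation, so every entity is retrieved
        // (projected down to one property here) and counted on the client.
        var query = new TableQuery&lt;DynamicTableEntity&gt;().Select(new[] { "PartitionKey" });
        int clientSideCount = table.ExecuteQuery(query).Count();
        Console.WriteLine(clientSideCount);
    }
}
</pre>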
<br />
<br />
<b>Ease of scalability</b><br />
How often and how much will the data need to scale?<br />
Does the platform natively implement scale-out?<br />
How easy is it to add or remove capacity (size and throughput)?<br />
<br />
Relational databases and tables aren’t automatically partitioned to make them scalable, so they are difficult to scale beyond certain limitations. NoSQL data stores such as Azure Table storage inherently partition everything, and there is almost no limit to adding partitions. You can readily scale Table storage up to 200 terabytes, but the maximum database size for Azure SQL Database is 500 gigabytes. You can scale relational data by partitioning it into multiple databases, but setting up an application to support that model involves a lot of programming work.<br />
<br />
<br />
<b>Instrumentation and Manageability</b><br />
How easy is the platform to instrument, monitor, and manage?<br />
<br />
You need to remain informed about the health and performance of your data store, so you need to know up front what metrics a platform gives you for free and what you have to develop yourself.<br />
<br />
<br />
<b>Operations</b><br />
How easy is the platform to deploy and run on Azure? PaaS? IaaS? Linux?<br />
<br />
Azure Table storage and Azure SQL Database are easy to set up on Azure. Platforms that aren’t built into Azure as PaaS solutions require more effort.<br />
<br />
<br />
<b>API Support</b><br />
Is an API available that makes it easy to work with the platform?<br />
<br />
The Azure Table Service has an SDK with a .NET API that supports the .NET 4.5 asynchronous programming model. If you're writing a .NET app, the work to write and test the code will be much easier for the Azure Table Service than for a key/value column data store platform that has no API or a less comprehensive one.<br />
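<br />
For example, a point lookup with the Table Service's async API might look like this sketch; UserProfileEntity, the table name, and the key values are assumptions for illustration:<br />
<pre>
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical entity: partition key = user ID, row key = a constant.
public class UserProfileEntity : TableEntity
{
    public string DisplayName { get; set; }
    public string Email { get; set; }
}

public class ProfileStore
{
    private readonly CloudTable table;   // a reference to an existing "userprofiles" table

    public ProfileStore(CloudTable table) { this.table = table; }

    public async Task&lt;UserProfileEntity&gt; GetProfileAsync(string userId)
    {
        // Retrieve by partition key + row key: the one query pattern
        // key/value stores excel at.
        TableOperation retrieve =
            TableOperation.Retrieve&lt;UserProfileEntity&gt;(userId, "profile");
        TableResult result = await table.ExecuteAsync(retrieve);
        return (UserProfileEntity)result.Result;
    }
}
</pre>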
<br />
<br />
<b>Transactional integrity and data consistency</b><br />
Is it critical that the platform support transactions to guarantee data consistency?<br />
<br />
For keeping track of bulk emails sent, performance and low data-storage cost might be more important than automatic support for transactions or referential integrity in the data platform, making the Azure Table Service a good choice. For tracking bank account balances or purchase orders, a relational database platform that provides strong transactional guarantees would be a better choice.<br />
<br />
<br />
<b>Business continuity</b><br />
How easy are backup, restore, and disaster recovery?<br />
<br />
Sooner or later production data will become corrupted and you’ll need an undo function. Relational databases often have more fine-grained restore capabilities, such as the ability to restore to a point in time. Understanding what restore features are available is an important part of evaluating each platform you’re considering.<br />
<br />
<br />
<b>Cost</b><br />
If more than one platform can support your data workload, how do they compare in cost?<br />
<br />
For example, if you use ASP.NET Identity, you can store user profile data in Azure Table Service or Azure SQL Database. If you don't need the rich querying facilities of SQL Database, you might choose Azure Table storage in part because it costs much less for a given amount of storage.<br />
<br />
<br />
Microsoft generally recommends that you know the answers to the questions in each of these categories before you choose your data storage solutions.<br />
<br />
In addition, your workload might have specific requirements that some platforms can support better than others. For example:<br />
• Does your application require audit capabilities?<br />
• What are your data longevity requirements—do you require automated archival or purging capabilities?<br />
• Do you have specialized security needs? For example, your data might include personally identifiable information (PII), but you have to be sure that PII is excluded from query results.<br />
• If you have some data that can't be stored in the cloud for regulatory or technological reasons, you might need a cloud data storage platform that facilitates integration with your on-premises storage.<br />
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-24070905031199662292017-12-13T14:42:00.000+08:002017-12-13T14:42:26.913+08:00Platform as a Service (PaaS) versus Infrastructure as a Service (IaaS)The data storage options listed earlier include both Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) solutions.<br />
<br />
In a PaaS solution, Microsoft manages the hardware and software infrastructure and you just use the service. SQL Database is a PaaS feature of Azure. You ask for databases, and behind the scenes Azure sets up and configures the virtual machines (VMs) and sets up the databases on them. You don’t have direct access to the VMs and don’t have to manage them.<br />
<br />
In an IaaS solution, you set up, configure, and manage VMs that run in Microsoft’s data center infrastructure, and you put whatever you want on them. Microsoft provides a gallery of preconfigured VM images for common VM configurations. For example, you can create VMs from preconfigured images for Windows Server 2008, Windows Server 2012, BizTalk Server, Oracle WebLogic Server, Oracle Database, and others.<br />
<br />
PaaS data solutions that Azure offers include:<br />
• Azure SQL Database (formerly known as SQL Azure) A cloud relational database based on SQL Server.<br />
• Azure Table storage A column-oriented NoSQL database.<br />
• Azure Blob storage File storage in the cloud.<br />
<br />
For IaaS, you can run any software that you can load onto a VM, for example:<br />
• Relational databases such as SQL Server, Oracle, MySQL, SQL Compact, SQLite, or Postgres.<br />
• Key/value data stores such as Memcached, Redis, Cassandra, and Riak.<br />
• Column data stores such as HBase.<br />
• Document databases such as MongoDB, RavenDB, and CouchDB.<br />
• Graph databases such as Neo4j.<br />
<br />
The IaaS option gives you almost unlimited data storage options, and many of them are especially easy to use because you can create VMs using preconfigured images. For example, in the management portal, go to Virtual Machines, click the Images tab, and then click Browse VM Depot.<br />
<br />
You then see a list of hundreds of preconfigured VM images, and you can create a VM from an image that has a database management system such as MongoDB, Neo4j, Redis, Cassandra, or CouchDB preinstalled.<br />
<br />
Azure makes IaaS data storage options as easy to use as possible, but the PaaS offerings have many advantages that make them more cost-effective and practical for many scenarios:<br />
<br />
• You don’t have to create VMs; you just use the portal or a script to set up a data store. If you want a 200-terabyte data store, you just click a button or run a command, and in seconds it’s ready for you to use.<br />
<br />
• You don’t have to manage or patch the VMs used by the service; Microsoft does that for you automatically.<br />
<br />
• You don’t have to worry about setting up infrastructure for scaling or high availability; Microsoft handles all that for you.<br />
<br />
• You don’t have to buy licenses; license fees are included in the service fees.<br />
<br />
• You pay only for what you use.<br />
<br />
PaaS data storage options in Azure include offerings by third-party providers. For example, you can choose the MongoLab Add-On from the Azure Store to provision a MongoDB database as a service.<br />
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-44238502545736844922017-12-12T14:32:00.000+08:002017-12-12T14:32:10.703+08:00Hadoop and MapReduceThe high volumes of data that you can store in NoSQL databases may be difficult to analyze efficiently in a timely manner. To perform this type of analysis, you can use a framework such as Hadoop, which implements MapReduce functionality. Essentially, what a MapReduce process does is the following:<br />
<br />
• Limits the size of the data that needs to be processed by selecting out of the data store only the data you actually need to analyze. For example, if you want to know the makeup of your user base by birth year, the process selects only birth years out of your user profile data store.<br />
<br />
• Breaks down the data into parts and sends them to different computers for processing. Computer A calculates the number of people with dates between 1950 and 1959, computer B works on dates between 1960 and 1969, and so on. This group of computers is called a Hadoop cluster.<br />
<br />
• Puts the results of each part back together after the processing on the parts is complete. You now have a relatively short list of how many people have each birth year, and the task of calculating percentages in this overall list is manageable.<br />
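<br />
The birth-year example can be expressed in miniature with LINQ. This is only a conceptual illustration of map (select), partition (group), and reduce (count and merge), not actual Hadoop code:<br />
<pre>
using System;
using System.Linq;

class MapReduceIllustration
{
    static void Main()
    {
        // Stand-in for the user-profile data store.
        var profiles = new[]
        {
            new { Name = "ann", BirthYear = 1955 },
            new { Name = "bob", BirthYear = 1962 },
            new { Name = "cho", BirthYear = 1955 },
        };

        var counts = profiles
            .Select(p => p.BirthYear)   // map: keep only the field being analyzed
            .GroupBy(year => year)      // partition: in Hadoop, each group can go to a different node
            .Select(g => new { Year = g.Key, Count = g.Count() }); // reduce: merge short per-group results

        foreach (var c in counts)
            Console.WriteLine("{0}: {1}", c.Year, c.Count);
    }
}
</pre>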
<br />
On Azure, HDInsight enables you to process, analyze, and gain new insights from big data by using the power of Hadoop. For example, you could use HDInsight to analyze web server logs in the following manner:<br />
<br />
• Enable web server logging to your storage account. This sets up Azure to write logs to the Blob service for every HTTP request to your application. The Blob service is basically cloud file storage and integrates nicely with HDInsight.<br />
<br />
• As the app gets traffic, web server IIS logs are written to Blob storage.<br />
<br />
• In the Azure management portal, click New, Data Services, HDInsight, Quick Create, and then specify an HDInsight cluster name, cluster size (number of HDInsight cluster data nodes), and a user name and password for the HDInsight cluster.<br />
<br />
You can now set up MapReduce jobs to analyze your logs and get answers to questions such as:<br />
<br />
• What times of day does my app get the most or least traffic?<br />
<br />
• What countries is my traffic coming from?<br />
<br />
• What is the average neighborhood income of the areas my traffic comes from? (There's a public dataset that provides neighborhood income by IP address, and you can match that data against the IP addresses in the web server logs.)<br />
<br />
• How does neighborhood income correlate to specific pages or products in the site?<br />
<br />
You could then use the answers to questions such as these to target ads based on the likelihood that a customer would be interested in or would be likely to buy a particular product.<br />
<br />
Most functions that you can perform in the management portal can be automated, and that includes setting up and executing HDInsight analysis jobs. A typical HDInsight script might contain the following steps:<br />
<br />
• Provision an HDInsight cluster and link it to your storage account for Blob storage input.<br />
<br />
• Upload the MapReduce job executables (.jar or .exe files) to the HDInsight cluster.<br />
<br />
• Submit a MapReduce job that stores the output data to Blob storage.<br />
<br />
• Wait for the job to complete.<br />
<br />
• Delete the HDInsight cluster.<br />
<br />
• Access the output from Blob storage.<br />
<br />
By running a script that performs these steps, you minimize the amount of time that the HDInsight cluster is provisioned, which minimizes your costs.<br />
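<br />
Expressed as C# pseudocode, the orchestration might look like the sketch below. Every type and helper method here is hypothetical; a real script would call Azure PowerShell cmdlets or a management SDK, and these names exist only to mirror the steps above.<br />
<pre>
using System;
using System.Threading.Tasks;

public class HdInsightJobRunner
{
    public class Cluster { }   // placeholder for a real cluster handle
    public class Job { }       // placeholder for a real job handle

    public async Task RunLogAnalysisAsync()
    {
        // 1. Provision a cluster linked to the storage account holding the IIS logs.
        Cluster cluster = await ProvisionClusterAsync("loganalysis", dataNodes: 4);
        try
        {
            // 2. Upload the MapReduce executables to the cluster.
            await UploadJobBinariesAsync(cluster, "LogAnalyzer.jar");

            // 3. Submit the job, directing output to Blob storage; 4. wait for it to finish.
            Job job = await SubmitMapReduceJobAsync(cluster, "LogAnalyzer.jar", "results");
            await WaitForCompletionAsync(job);
        }
        finally
        {
            // 5. Delete the cluster as soon as the job finishes: you pay per node
            // while it exists, so a short-lived cluster minimizes cost.
            await DeleteClusterAsync(cluster);
        }
        // 6. The output stays in Blob storage ("results") after the cluster is gone.
    }

    // Hypothetical helpers; their bodies would wrap real management API calls.
    Task&lt;Cluster&gt; ProvisionClusterAsync(string name, int dataNodes) { throw new NotImplementedException(); }
    Task UploadJobBinariesAsync(Cluster c, string jarFile) { throw new NotImplementedException(); }
    Task&lt;Job&gt; SubmitMapReduceJobAsync(Cluster c, string jarFile, string outputContainer) { throw new NotImplementedException(); }
    Task WaitForCompletionAsync(Job j) { throw new NotImplementedException(); }
    Task DeleteClusterAsync(Cluster c) { throw new NotImplementedException(); }
}
</pre>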
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com1tag:blogger.com,1999:blog-6353430478574550535.post-33587923367260976802017-12-11T14:29:00.000+08:002017-12-11T14:29:40.257+08:00Data storage options on AzureThe cloud makes it relatively easy to use a variety of relational and NoSQL data stores. Here are some of the data storage platforms that you can use in Azure.<br />
<br />
There are four main types of NoSQL databases:<br />
<br />
• Key/value databases store a single serialized object for each key value. They’re good for storing large volumes of data in situations where you want to get one item for a given key value and you don’t have to query based on other properties of the item.<br />
<br />
• Azure Blob storage is a key/value database that functions like file storage in the cloud, with key values that correspond to folder and file names. You retrieve a file by its folder and file name, not by searching for values in the file contents.<br />
<br />
• Azure Table storage is also a key/value database. Each value is called an entity (similar to a row, identified by a partition key and row key) and contains multiple properties (similar to columns, but not all entities in a table have to share the same columns). Querying on columns other than the key is extremely inefficient and should be avoided. For example, you can store user profile data, with one partition storing information about a single user. You could store data such as user name, password hash, birth date, and so forth, in separate properties of one entity or in separate entities in the same partition. But you wouldn't want to query for all users with a given range of birth dates, and you can't execute a join query between your profile table and another table. Table storage is more scalable and less expensive than a relational database, but it doesn't enable complex queries or joins.<br />
<br />
• Document databases are key/value databases in which the values are documents. "Document" here isn't used in the sense of a Word or an Excel document but means a collection of named fields and values, any of which could be a child document. For example, in an order history table, an order document might have order number, order date, and customer fields, and the customer field might have name and address fields (see the JSON sketch after this list). The database encodes field data in a format such as XML, YAML, JSON, or BSON, or it can use plain text. One feature that sets document databases apart from other key/value databases is the capability they provide to query on nonkey fields and define secondary indexes, which makes querying more efficient. This capability makes a document database more suitable for applications that need to retrieve data on the basis of criteria more complex than the value of the document key. For example, in a sales order history document database, you could query on various fields, such as product ID, customer ID, customer name, and so forth. MongoDB is a popular document database.<br />
<br />
• Column-family databases are key/value data stores that enable you to structure data storage into collections of related columns called column families. For example, a census database might have one group of columns for a person's name (first, middle, last), one group for the person's address, and one group for the person's profile information (date of birth, gender, and so on). The database can then store each column family in a separate partition while keeping all of the data for one person related to the same key. You can then read all profile information without having to read through all of the name and address information as well. Cassandra is a popular column-family database.<br />
<br />
• Graph databases store information as a collection of objects and relationships. The purpose of a graph database is to enable an application to efficiently perform queries that traverse the network of objects and the relationships between them. For example, the objects might be employees in a human resources database, and you might want to facilitate queries such as "find all employees who directly or indirectly work for Scott." Neo4j is a popular graph database.<br />
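<br />
A hypothetical order document of the kind described in the document-database bullet above might look like this in JSON (all field names are illustrative):<br />
<pre>
{
  "orderNumber": "A1234",
  "orderDate": "2017-12-01",
  "customer": {
    "name": "Contoso Ltd.",
    "address": {
      "street": "1 Main Street",
      "city": "Seattle"
    }
  },
  "items": [
    { "productId": "P-17", "quantity": 2 }
  ]
}
</pre>
Because the database can define a secondary index on a nonkey field such as the customer name, a query like "all orders for Contoso Ltd." remains efficient.<br />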
<br />
Compared with relational databases, the NoSQL options offer far greater scalability and are more cost effective for storage and analysis of unstructured data. The tradeoff is that they don't provide the rich querying and robust data integrity capabilities of relational databases. NoSQL options would work well for IIS log data, which involves high volume with no need for join queries. NoSQL options would not work so well for banking transactions, which require absolute data integrity and involve many relationships to other account-related data.<br />
<br />
A newer category of database platforms, called NewSQL, combines the scalability of a NoSQL database with the querying capability and transactional integrity of a relational database.<br />
<br />
NewSQL databases are designed for distributed storage and query processing, which are often hard to implement in "OldSQL" databases. NuoDB is an example of a NewSQL database that can be used on Azure.<br />
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-92011420337467857202017-12-10T14:24:00.000+08:002017-12-10T14:24:38.997+08:00Async support in ASP.NET 4.5In ASP.NET 4.5, support for asynchronous programming has been added not just to the language but also to the MVC, Web Forms, and Web API frameworks. For example, an ASP.NET MVC controller action method receives data from a web request and passes the data to a view, which then creates the HTML to be sent to the browser. Frequently, the action method needs to get data from a database or web service to display it in a webpage or to save data entered in a webpage. In those scenarios it's easy to make the action method asynchronous: instead of returning an ActionResult object, you return Task&lt;ActionResult&gt; and mark the method with the async keyword. Inside the method, when a line of code kicks off an operation that involves wait time, you mark it with the await keyword.<br />
<br />
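A minimal example of the pattern, using the FindTaskByIdAsync call discussed next (the repository abstraction and the FixItTask type are assumed, not defined in the text):<br />
<pre>
using System.Threading.Tasks;
using System.Web.Mvc;

public class TasksController : Controller
{
    private readonly IFixItTaskRepository repository;   // assumed abstraction over the data store

    public TasksController(IFixItTaskRepository repository) { this.repository = repository; }

    // Returning Task&lt;ActionResult&gt; and using async/await frees the worker
    // thread while the data store call is in flight.
    public async Task&lt;ActionResult&gt; Details(int id)
    {
        FixItTask task = await repository.FindTaskByIdAsync(id);
        if (task == null)
        {
            return HttpNotFound();
        }
        return View(task);
    }
}
</pre>
<br />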
Under the covers the compiler generates the appropriate asynchronous code. When the application makes the call to FindTaskByIdAsync, ASP.NET makes the FindTask request and then unwinds the worker thread and makes it available to process another request. When the FindTask request is done, a thread is restarted to continue processing the code that comes after that call. During the interim, between when the FindTask request is initiated and when the data is returned, you have a thread available to do useful work which otherwise would be tied up waiting for the response.<br />
<br />
There is some overhead for asynchronous code, but under low load conditions, that overhead is negligible, while under high load conditions you’re able to process requests that otherwise would be held up waiting for available threads.<br />
<br />
It has been possible to do this kind of asynchronous programming since ASP.NET 1.1, but it was difficult to write, prone to error, and difficult to debug. Now that the coding for it is simplified in ASP.NET 4.5, there's no reason anymore not to do it.<br />
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-71240360462128652192017-12-09T14:20:00.000+08:002017-12-09T14:20:08.189+08:00Use .NET 4.5’s async support to avoid blocking calls.NET 4.5 enhanced the C# and Visual Basic programming languages to make it much simpler to handle tasks asynchronously. The benefit of asynchronous programming applies not just to parallel processing situations, such as when you want to kick off multiple web service calls simultaneously. It also enables your web server to perform more efficiently and reliably under high load conditions. A web server has only a limited number of threads available, and under high load conditions, when all of the threads are in use, incoming requests have to wait until threads are freed up. If your application code doesn't handle tasks like database queries and web service calls asynchronously, many threads are unnecessarily tied up while the server is waiting for an I/O response. This limits the amount of traffic the server can handle under high load conditions. With asynchronous programming, threads that are waiting for a web service or database to return data are freed up to service new requests until the data is received. In a busy web server, hundreds or thousands of requests that would otherwise be waiting for threads to be freed up can then be processed promptly.<br />
<br />
As you saw earlier, it's as easy to decrease the number of web servers handling your website as it is to increase them. So, if a server can achieve greater throughput, you don't need as many of them, and you can decrease your costs because you need fewer servers for a given traffic volume than you otherwise would.<br />
<br />
Support for the .NET 4.5 asynchronous programming model is included in ASP.NET 4.5 for Web Forms, MVC, and Web API; in Entity Framework 6; and in the Azure Storage API.<br />
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0tag:blogger.com,1999:blog-6353430478574550535.post-60397442740368023732017-12-08T14:18:00.000+08:002017-12-08T14:18:17.216+08:00Stateless web tier behind a smart load balancerStateless web tier means you don't store any application data in the web server memory or file system. Keeping your web tier stateless enables you to both provide a better customer experience and save money:<br />
<br />
• If the web tier is stateless and it sits behind a load balancer, you can quickly respond to changes in application traffic by dynamically adding or removing servers. In the cloud environment, where you pay for server resources only for as long as you actually use them, that ability to respond to changes in demand can translate into huge savings.<br />
<br />
• A stateless web tier is architecturally much simpler for scaling out the application. That enables you to respond to scaling needs more quickly, and spend less money on development and testing in the process.<br />
<br />
• Cloud servers, like on-premises servers, need to be patched and rebooted occasionally. If the web tier is stateless, rerouting traffic when a server goes down temporarily won't cause errors or unexpected behavior.<br />
<br />
Most real-world applications do need to store state for a web session; the main point here is not to store it on the web server. You can store state in other ways, such as on the client in cookies, or out of process on the server side in ASP.NET session state by using the Redis cache provider. You can store files in Azure Blob storage instead of the local file system.<br />
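<br />
As one hedged example, the ASP.NET Redis session state provider is wired up in web.config along these lines; the host, port, and access key are placeholders, and the exact attribute names depend on the provider package you install, so treat this as a sketch:<br />
<pre>
&lt;sessionState mode="Custom" customProvider="RedisSessionStateStore"&gt;
  &lt;providers&gt;
    &lt;add name="RedisSessionStateStore"
         type="Microsoft.Web.Redis.RedisSessionStateProvider"
         host="contoso.redis.cache.windows.net"
         port="6380"
         accessKey="your-access-key"
         ssl="true" /&gt;
  &lt;/providers&gt;
&lt;/sessionState&gt;
</pre>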
<br />
<span style="color: #999999;">Source of Information : Building Cloud Apps With Microsoft Azure</span>code librarieshttp://www.blogger.com/profile/07491157238841842259noreply@blogger.com0