• Virtual network address spaces When you set up a virtual network, you specify the topology of the virtual network, including the available address spaces and subnets. If the virtual network is to be connected to other virtual networks, you must select address ranges that do not overlap. The address space is the range of addresses that the VMs and services in your network can use. These addresses are private and cannot be accessed from the public Internet. This used to be true only for the non-routable IP addresses such as 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16, but now Azure treats any address range as part of the private VNet IP address space that is only reachable within the VNet, within interconnected VNets, and from your on-premises location.

    CIDR specifies an address range using a combination of an IP address and its associated network mask. CIDR notation uses this format: xxx.xxx.xxx.xxx/n, where n is the number of leftmost “1” bits in the mask. For example, 192.168.12.0/23 applies the network mask 255.255.254.0 to the 192.168 network, starting at 192.168.12.0. This notation therefore represents the address range 192.168.12.0–192.168.13.255. Fortunately, the Azure portal displays this information on the screen after you type the CIDR into one of the address space fields so you don’t have to do bit-wise math to figure it out!
    10.0.0.0/8 gives you a usable address range of 10.0.0.0–10.255.255.255. You can certainly use this, but if 10.0.0.0 is being used elsewhere in the network, either on-premises or within Azure, you want to be sure you have no overlap. One way to do this is to specify an address space that is smaller but still has the capacity to hold everything you want to put in it. For example, if you are just going to put a handful of VMs in your virtual network, you could use 10.0.0.0/27, which gives you a usable address range of 10.0.0.0–10.0.0.31.
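
    A quick way to check these ranges without doing the bit-wise math yourself is Python's standard ipaddress module; the networks below are just the examples from this section.

        import ipaddress

        vnet = ipaddress.ip_network("10.0.0.0/27")
        print(vnet.network_address, "-", vnet.broadcast_address,
              f"({vnet.num_addresses} addresses)")   # 10.0.0.0 - 10.0.0.31 (32 addresses)

        # A simple overlap check against another range, such as an on-premises block:
        print(vnet.overlaps(ipaddress.ip_network("10.0.0.0/8")))      # True
        print(vnet.overlaps(ipaddress.ip_network("192.168.0.0/16")))  # False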

    If you work within an organization in which someone else is responsible for the internal networks, you should confer with that person before selecting your address space to make sure there is no overlap and to let them know what space you want to use so they don’t try to use the same range of IP addresses.

    Source of Information : Microsoft Azure Essentials Fundamentals of Azure Second Edition

  • Virtual Network (VNet) Virtual networks (VNets) are used in Azure to provide private connectivity for Azure Virtual Machines (Azure VMs) and some Azure services. VMs and services that are part of the same virtual network can access one another. By default, services outside the virtual network cannot connect to services within the virtual network. You can, however, configure the network to allow access to the external service.

    Traffic between services within a virtual network does not travel through the Azure Load Balancer, which gives you slightly better performance. It's not a significant difference, but sometimes every little bit counts.

    Here’s an example of how you might want to use that virtual network for connectivity to a private resource. Let’s say you have a front-end web application running in a VM, and that application uses a back-end database running in a different VM. You can put the back-end database in the same virtual network as the web application; the web application will access the database over the virtual network. This allows you to use the back-end database from the web application without the database being accessible on the public Internet.
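
    If you prefer to script this instead of using the portal, here is a hedged sketch of creating that virtual network with separate web and data subnets using the azure-mgmt-network Python SDK; the SDK choice, the resource names, and the exact method names (which vary by SDK version) are assumptions, not something the book prescribes.

        from azure.identity import DefaultAzureCredential
        from azure.mgmt.network import NetworkManagementClient

        client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
        client.virtual_networks.begin_create_or_update(
            "my-resource-group",
            "app-vnet",
            {
                "location": "westus",
                "address_space": {"address_prefixes": ["10.0.0.0/24"]},
                "subnets": [
                    {"name": "web", "address_prefix": "10.0.0.0/25"},     # front-end web VM
                    {"name": "data", "address_prefix": "10.0.0.128/25"},  # back-end database VM
                ],
            },
        ).result()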

    A Virtual Network Gateway is a fully managed service in Azure that is used for cross-premises connectivity. You can add a Virtual Network Gateway to a virtual network and use it to connect your on-premises network to Azure, effectively making the virtual network in Azure an extension of your on-premises network. This provides the ability to deploy hybrid cloud applications that securely connect to your on-premises datacenter.

    More complex features available include multisite VPNs, in-region VNet-to-VNet, and cross-region VNet-to-VNet. Most cross-premises connections involve using a VPN device to create a secure connection to your virtual network in Azure. VNet-to-VNet connectivity uses the Azure Virtual Network Gateway to connect two or more virtual networks with IPsec/IKE S2S VPN tunnels. Being able to have cross-premises connectivity gives you flexibility when connecting one or more on-premises sites with your virtual networks. For example, it gives you the ability to have cross-region geo-redundancy, such as SQL Always On across different Azure regions.

    Source of Information : Microsoft Azure Essentials Fundamentals of Azure Second Edition

  • AzCopy: A very useful tool Before finishing the chapter on Azure Storage, you need to know about AzCopy. This is a free tool provided by the Azure Storage team to move data around. The core use case is asynchronous server-side copies. When you copy blobs or files from one storage account to another, they are not downloaded from the first storage account to your local machine and then uploaded to the second storage account. The blobs and files are copied directly within Azure.

    Here are some of the things you can do with AzCopy:
    - Upload blobs from the local folder on a machine to Azure Blob storage.
    - Upload files from the local folder on a machine to Azure File storage.
    - Copy blobs from one container to another in the same storage account.
    - Copy blobs from one storage account to another, either in the same region or in a different region.
    - Copy files from one file share to another in the same storage account.
    - Copy files from one storage account to another, either in the same region or in a different region.
    - Copy blobs from one storage account to an Azure File share in the same storage account or in a different storage account.
    - Copy files from an Azure File share to a blob container in the same storage account or in a different storage account.
    - Export a table to an output file in JSON or CSV format. You can export this to blob storage.
    - Import the previously exported table data from a JSON file into a new table. (Note: It won’t import from a CSV file.)

    As you can see, there are a lot of possibilities when using AzCopy. It also has a bunch of options. For example, you can tell it to only copy data where the source files are newer than the target files. You can also have it copy data only where the source files are older than the target files. And you can combine these options to ask it to copy only files that don’t exist in the destination at all.

    AzCopy is frequently used to make backups of Azure Blob storage. Maybe you have files in Blob storage that are updated by your customer frequently, and you want a backup in case there’s a problem. You can do something like this:
    - Do a full backup on Saturday from the source container to a target container and put the date in the name of the target container.
    - For each subsequent day, do an incremental copy: copy only the files that are newer in the source than in the destination.

    If your customer uploads a file by mistake and contacts you before the end of the day, you can retrieve the previous version from the backup copy.
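
    A minimal sketch of that weekly-full/daily-incremental pattern, written against the azure-storage-blob Python SDK rather than AzCopy itself (so treat the package and method names as assumptions); the copies are still performed server side, and the container names are hypothetical.

        from azure.core.exceptions import ResourceNotFoundError
        from azure.storage.blob import BlobServiceClient

        service = BlobServiceClient.from_connection_string("<connection-string>")
        source = service.get_container_client("customer-files")
        backup = service.get_container_client("backup-2016-05-07")  # dated container from Saturday's full copy

        for blob in source.list_blobs():
            dest = backup.get_blob_client(blob.name)
            try:
                # Incremental rule: skip anything that is not newer than the backup copy.
                if blob.last_modified <= dest.get_blob_properties().last_modified:
                    continue
            except ResourceNotFoundError:
                pass  # not in the backup yet, so copy it
            dest.start_copy_from_url(source.get_blob_client(blob.name).url)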

    Here are some other use cases:
    - You want to move your data from a classic storage account to a Resource Manager storage account. You can do this by using AzCopy, and then you can change your applications to point to the data in the new location.
    - You want to move your data from general-purpose storage to cool storage. You would copy your blobs from the general-purpose storage account to the new Blob storage account, then delete the blobs from the original location.

    Source of Information : Microsoft Azure Essentials Fundamentals of Azure Second Edition

  • Encryption at rest Let’s look at the various options available to encrypt the stored data.

    Storage Service Encryption (SSE)
    This is a new feature, currently in preview, that lets you ask the storage service to encrypt blob data when writing it to Azure Storage. This feature has been requested by many companies to fulfill security and compliance requirements. It enables you to secure your data without having to add any code to any of your applications. Note that it only works for blob storage; tables, queues, and files are unaffected.

    This feature is per-storage account, and it can be enabled and disabled using the Azure portal, PowerShell, the CLI, the Azure Storage Resource Provider REST API, or the .NET storage client library. The keys are generated and managed by Microsoft at this time, but in the future you will get the ability to manage your own encryption keys.
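
    As a rough illustration, here is what enabling SSE on a Resource Manager storage account might look like with the azure-mgmt-storage Python SDK; this SDK is not one of the tools listed above, so the parameter shape and names here are assumptions.

        from azure.identity import DefaultAzureCredential
        from azure.mgmt.storage import StorageManagementClient

        client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")
        client.storage_accounts.update(
            "my-resource-group",
            "mystorageaccount",
            {
                "encryption": {
                    "services": {"blob": {"enabled": True}},  # SSE currently covers blob storage only
                    "key_source": "Microsoft.Storage",        # Microsoft-managed keys
                }
            },
        )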

    This can be used with both Standard and Premium storage, but only with the new Resource Manager accounts. During the preview, you have to create a new storage account to try out this feature.
    One thing to note: after encryption is enabled, the service encrypts only data written to the storage account from that point on; any data already in the account is not encrypted retroactively. If you later disable encryption, new data is written unencrypted, but data written while encryption was enabled remains encrypted.

    If you create a VM using an image from the Azure Marketplace, Azure performs a shallow copy of the image to your storage account in Azure Storage, and it is not encrypted even if you have SSE enabled. After it creates the VM and starts updating the image, SSE will start encrypting the data. For this reason, Microsoft recommends that you use Azure Disk Encryption on VMs created from images in the Azure Marketplace if you want them fully encrypted.


    Azure Disk Encryption
    This is another new feature that is currently in preview. This feature allows you to specify that the OS and data disks used by an IaaS VM should be encrypted. For Windows, the drives are encrypted with industry-standard BitLocker encryption technology. For Linux, encryption is performed using DM-Crypt.

    Azure Disk Encryption is integrated with Azure Key Vault to allow you to control and manage the disk encryption keys.

    Unlike SSE, enabling this encrypts the whole disk, including data that was previously written. You can upload your own pre-encrypted images into Azure, store the keys in Azure Key Vault, and the images remain encrypted. You can also upload an image that is not encrypted or create a VM from the Azure Gallery and ask that its disks be encrypted.

    This is the method recommended by Microsoft to encrypt your IaaS VMs at rest. Note that if you turn on both SSE and Azure Disk Encryption, it will work fine. Your data will simply be double-encrypted.


    Client-side encryption
    We looked at client-side encryption when discussing encryption in transit. The data is encrypted by the application and sent across the wire to be stored in the storage account. When retrieved, the data is decrypted by the application. Because the data is stored encrypted, this is encryption at rest.

    With client-side encryption, you can encrypt data in blobs, tables, and queues, rather than just blobs as with SSE. Also, you can bring your own keys or use keys generated by Microsoft. If you store your encryption keys in Azure Key Vault, you can use Azure Active Directory to specifically grant access to the keys. This allows you to control who can read the vault and retrieve the keys being used for client-side encryption.

    This is the most secure method of encrypting your data, but it does require that you add code to perform the encryption and decryption. If you only have blobs that need to be encrypted, you may choose to use a combination of HTTPS and SSE to meet the requirement that your data be encrypted at rest.
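
    To make the idea concrete, here is a tiny sketch of the client-side pattern, using the general-purpose cryptography package and azure-storage-blob rather than the storage client library's built-in encryption feature, so treat it as an illustration of the concept only; the key handling and names are hypothetical.

        from cryptography.fernet import Fernet
        from azure.storage.blob import BlobClient

        key = Fernet.generate_key()  # in practice, keep this key in Azure Key Vault
        blob = BlobClient.from_connection_string("<connection-string>",
                                                 "secure-container", "report.bin")

        # The application encrypts before upload, so the service only ever sees ciphertext.
        blob.upload_blob(Fernet(key).encrypt(b"sensitive payload"), overwrite=True)
        plaintext = Fernet(key).decrypt(blob.download_blob().readall())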

    Source of Information : Microsoft Azure Essentials Fundamentals of Azure Second Edition

  • Securing your data in transit Another consideration when storing your data in Azure Storage is securing the data when it is being transferred between the storage service and your applications.

    First, you should always use the HTTPS protocol, which ensures secure communication over the public Internet. Note that if you are using a shared access signature (SAS), there is a query parameter that specifies that only the HTTPS protocol can be used with that URL.

    For Azure File shares, SMB 3.0 running on Windows encrypts the data going across the public Internet. When the macOS and Linux SMB 3.0 clients add encryption support, you will be able to mount file shares on those machines and have encrypted data in transit.

    Last, you can use the client-side encryption feature of the .NET and Java storage client libraries to encrypt your data before sending it across the wire. When you retrieve the data, you can then decrypt it. This is built into the storage client libraries for .NET and Java. This also counts as encryption at rest because the data is encrypted when stored.

    Source of Information : Microsoft Azure Essentials Fundamentals of Azure Second Edition

  • Securing access to your data There are two ways to secure access to your data objects. We just talked about the first one—by controlling access to the storage account keys.

    The second way to secure access is by using shared access signatures and stored access policies. A shared access signature (SAS) is a string containing a security token that can be attached to the URI for an asset that allows you to delegate access to specific storage objects and to specify constraints such as permissions and the date/time range of access.

    You can grant access to blobs, containers, queue messages, files, and tables. With tables, you can grant access to specific partition keys. For example, if you were using geographical state for your partition key, you could give someone access to just the data for California.

    You can fine-tune this by using a separation of concerns. You can give a web application permission to write messages to a queue, but not to read or delete them. Then, you can give the worker role or Azure WebJob permission to read, process, and delete the messages. Each component then has only the access required to do its job.
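
    A hedged sketch of that split using the azure-storage-queue Python SDK (the book's own samples use .NET, so the helper names here are assumptions): the web application gets an add-only SAS, while the worker gets process permission, which covers reading and deleting messages.

        from datetime import datetime, timedelta, timezone
        from azure.storage.queue import generate_queue_sas, QueueSasPermissions

        def queue_sas(permission):
            return generate_queue_sas(
                account_name="mystorage",
                queue_name="orders",
                account_key="<account-key>",
                permission=permission,
                expiry=datetime.now(timezone.utc) + timedelta(hours=1),
            )

        web_app_sas = queue_sas(QueueSasPermissions(add=True))      # enqueue only
        worker_sas = queue_sas(QueueSasPermissions(process=True))   # dequeue and delete
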
    Here’s an example of an SAS, with each parameter explained:

    http://mystorage.blob.core.windows.net/mycontainer/myblob.txt (URL to the blob)
    ?sv=2015-04-05 (storage service version)
    &st=2015-12-10T22%3A18%3A26Z (start time, in UTC time and URL encoded)
    &se=2015-12-10T22%3A23%3A26Z (end time, in UTC time and URL encoded)
    &sr=b (resource is a blob)
    &sp=r (read access)
    &sip=168.1.5.60-168.1.5.70 (requests can only come from this range of IP addresses)
    &spr=https (only allow HTTPS requests)
    &sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D (signature used for the authentication of the SAS)

    Note that the SAS query parameters must be URL encoded, such as %3A for colon (:) and %20 for a space. This SAS gives read access to a blob from 12/10/2015 10:18 PM to 12/10/2015 10:23 PM.
    When the storage service receives this request, it takes the query parameters, computes the sig value on its own, and compares it to the one provided. If they match, it then validates the rest of the request. If our URL pointed to a file on a file share instead of a blob, the request would fail because the resource type is specified as a blob. If the request were to update the blob, it would fail because only read access has been granted.
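
    For comparison, here is a hedged sketch of generating a service-level SAS like the one above with the azure-storage-blob Python SDK; the helper name and arguments are assumptions (the book builds the parameters by hand), and the account key is a placeholder.

        from datetime import datetime, timedelta, timezone
        from azure.storage.blob import generate_blob_sas, BlobSasPermissions

        token = generate_blob_sas(
            account_name="mystorage",
            container_name="mycontainer",
            blob_name="myblob.txt",
            account_key="<account-key>",
            permission=BlobSasPermissions(read=True),                   # sp=r
            start=datetime.now(timezone.utc),                           # st=...
            expiry=datetime.now(timezone.utc) + timedelta(minutes=5),   # se=...
            ip="168.1.5.60-168.1.5.70",                                 # sip=...
            protocol="https",                                           # spr=https
        )
        url = "https://mystorage.blob.core.windows.net/mycontainer/myblob.txt?" + token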

    There are both account-level SAS and service-level SAS. With account-level SAS, you can do things like list containers, create containers, delete file shares, and so on. With service-level SAS, you can only access the data objects. For example, you can upload a blob into a container.

    You can also create stored access policies on container-like objects such as blob containers and file shares. This will let you set the default values for the query parameters, and then you can create the SAS by specifying the policy and the query parameter that is different for each request. For example, you might set up a policy that gives read access to a specific container. Then, when someone requests access to that container, you create an SAS from the policy and use it.

    There are two advantages to using stored access policies. First, this hides the parameters that are defined in the policy. So if your policy grants access for 30 minutes, the URL won't show that; it just shows the policy name. This is more secure than exposing all of your parameters in the URL.
    The second reason to use stored access policies is that they can be revoked. You can either change the expiration date to be prior to the current date/time or remove the policy altogether. You might do this if you accidentally provided access to an object you didn’t mean to. With an ad hoc SAS URL, you have to remove the asset or change the storage account keys to revoke access.
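
    Here is a hedged sketch of that flow with the azure-storage-blob Python SDK: define a stored access policy on the container, then issue a SAS that references the policy by name so the permissions and expiry stay revocable on the server side. The package, helper names, and identifiers are assumptions.

        from datetime import datetime, timedelta, timezone
        from azure.storage.blob import (AccessPolicy, BlobServiceClient,
                                        ContainerSasPermissions, generate_container_sas)

        service = BlobServiceClient.from_connection_string("<connection-string>")
        container = service.get_container_client("mycontainer")

        # The policy holds the permission and expiry; the SAS URL will only show the policy name.
        policy = AccessPolicy(permission=ContainerSasPermissions(read=True),
                              expiry=datetime.now(timezone.utc) + timedelta(minutes=30))
        container.set_container_access_policy(signed_identifiers={"read-30min": policy})

        token = generate_container_sas(
            account_name=service.account_name,
            container_name="mycontainer",
            account_key="<account-key>",
            policy_id="read-30min",  # revoke later by changing or deleting the policy
        )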

    Shared access signatures and stored access policies are the two most secure ways to provide access to your data objects.

    Source of Information : Microsoft Azure Essentials Fundamentals of Azure Second Edition

  • Securing your storage account The first thing to think about is securing your storage account.


    Storage account keys
    Each storage account has two authentication keys (a primary and a secondary), either of which can be used for any operation. Having two keys allows you to roll them over occasionally to enhance security. It is critical that these keys be kept secure because their possession, along with the account name, allows unlimited access to any data in the storage account.

    Say you’re using key 1 for your storage account in multiple applications. You can regenerate key 2 and then change all the applications to use key 2, test them, and deploy them to production. Then, you can regenerate key 1, which removes access from anybody who is still using it. A good example of when you might want to do this is if your team uses a storage explorer that retains the storage account keys and someone leaves the team or the company; you don’t want them to have access to your data after they leave. This can happen without much notice, so you should have a procedure in place that identifies all the apps that need to change, and you should practice rotating keys on a regular basis so that it’s simple and not a big problem when you need to rotate the keys in a hurry.
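
    The rotation itself can be scripted; here is a hedged sketch with the azure-mgmt-storage Python SDK (an assumption on my part; the book works through the portal and PowerShell), with hypothetical resource names.

        from azure.identity import DefaultAzureCredential
        from azure.mgmt.storage import StorageManagementClient

        client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

        # 1. Regenerate key 2 so it is fresh, then repoint and redeploy your applications to use it.
        client.storage_accounts.regenerate_key("my-resource-group", "mystorageaccount",
                                               {"key_name": "key2"})
        keys = client.storage_accounts.list_keys("my-resource-group", "mystorageaccount")
        new_key2 = next(k.value for k in keys.keys if k.key_name == "key2")  # distribute to the apps

        # 2. Once everything uses key 2, regenerate key 1 to cut off anyone still holding it.
        client.storage_accounts.regenerate_key("my-resource-group", "mystorageaccount",
                                               {"key_name": "key1"})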


    Using RBAC, Azure AD, and Azure Key Vault to control access to Resource Manager storage accounts
    RBAC and Azure AD
    With Resource Manager RBAC, you can assign roles to users, groups, or applications. The roles are tied to a specific set of actions that are allowed or disallowed. Using RBAC to grant access to a storage account only handles the management operations for that storage account. You can’t use RBAC to grant access to objects in the data plane like a specific container or file share. You can, however, use RBAC to grant access to the storage account keys, which can then be used to read the data objects.

    For example, you might grant someone the Owner role to the storage account. This means they can access the keys and thus the data objects, and they can create storage accounts and do pretty much anything.

    You might grant someone else the Reader role. This allows them to read information about the storage account. They can read resource groups and resources, but they can’t access the storage account keys and therefore can’t access the data objects.

    If someone is going to create VMs, you must grant them the Virtual Machine Contributor role, which grants them access to retrieve the storage account keys but not to create storage accounts. They need the keys to create the VHD files that are used for the VM disks.

    Azure Key Vault
    Azure Key Vault helps safeguard cryptographic keys and secrets used by Azure applications and services. You could store your storage account keys in an Azure Key Vault. What does this do for you? While you can’t control access to the data objects directly using Active Directory, you can control access to an Azure Key Vault using Active Directory. This means you can put your storage account keys in Azure Key Vault and then grant access to them for a specific user, group, or application.

    Let’s say you have an application running as a Web App that uploads files to a storage account. You want to be really sure nobody else can access those files. You add the application to Azure Active Directory and grant it access to the Azure Key Vault with that storage account’s keys in it. After that, only that application can access those keys. This is much more secure than putting the keys in the web.config file where a hacker could get to them.
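
    A minimal sketch of that pattern with the azure-identity and azure-keyvault-secrets Python packages (these post-date the book, and the vault and secret names are hypothetical): the web app authenticates as its Azure AD identity and pulls the storage key from Key Vault instead of reading it from web.config.

        from azure.identity import DefaultAzureCredential
        from azure.keyvault.secrets import SecretClient

        credential = DefaultAzureCredential()  # resolves to the application's Azure AD identity
        vault = SecretClient(vault_url="https://myvault.vault.azure.net", credential=credential)

        storage_key = vault.get_secret("mystorage-account-key").value
        # Build the storage client from storage_key; the key never lives in a config file.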

    Source of Information : Microsoft Azure Essentials Fundamentals of Azure Second Edition

  • Azure Storage services: Redundancy What happens if the storage node on which your blobs are stored fails? What happens if the rack holding the storage node fails? Fortunately, Azure supports several kinds of redundancy. There are four choices; you specify which one to use when you create the storage account. You can change the redundancy setting after the account is created, except in the case of zone-redundant storage.

    - Locally Redundant Storage (LRS): Azure Storage provides high availability by ensuring that three copies of all data are made synchronously before a write is deemed successful. These copies are stored in a single facility in a single region. The replicas reside in separate fault domains and upgrade domains. This means the data is available even if a storage node holding your data fails or is taken offline to be updated.

    When you make a request to update storage, Azure sends the request to all three replicas and waits for successful responses for all of them before responding to you. This means that the copies in the primary region are always in sync.

    LRS is less expensive than GRS, and it also offers higher throughput. If your application stores data that can be easily reconstructed, you may opt for LRS.


    - Geo-Redundant Storage (GRS): GRS makes three synchronous copies of the data in the primary region for high availability, and then it asynchronously makes three replicas in a paired region for disaster recovery. Each Azure region has a defined paired region within the same geopolitical boundary for GRS. For example, West US is paired with East US. This has a small impact on scalability targets for the storage account. The GRS copies in the paired region are not accessible to you, and GRS is best viewed as disaster recovery for Microsoft rather than for you. In the event of a major failure in the primary region, Microsoft would make the GRS replicas available, but this has never happened to date.


    - Read-Access Geo-Redundant Storage (RA-GRS): This is GRS plus the ability to read the data in the secondary region, which makes it suitable for partial customer disaster recovery. If there is a problem with the primary region, you can change your application to have read-only access to the paired region. The .NET storage client library supports a fallback mechanism (Microsoft.WindowsAzure.Storage.RetryPolicies.LocationMode) that tries to read from the secondary copy if the primary copy can’t be reached; this feature is built in for you. Your customers might not be able to perform updates, but at least the data is still available for viewing, reporting, and so on.

    You can also use this if you have an application in which only a few users can write to the data but many people read it. You can point the application that writes the data to the primary region and have the people who only read the data access the paired region. This is a good way to spread out the load on a storage account.
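
    A minimal sketch of reading from the paired region with RA-GRS using the azure-storage-blob Python SDK (an assumption; the book's fallback example is the .NET LocationMode setting): the secondary endpoint is simply the account name with "-secondary" appended. Account, container, and blob names are hypothetical.

        from azure.storage.blob import BlobServiceClient

        secondary = BlobServiceClient(
            account_url="https://mystorage-secondary.blob.core.windows.net",
            credential="<account-key>",
        )
        report = secondary.get_container_client("reports").download_blob("daily.csv").readall()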


    - Zone-Redundant Storage (ZRS): This option can only be used for block blobs in a standard storage account. It replicates your data across two to three facilities, either within a single region or across two regions. This provides higher durability than LRS, but ZRS accounts do not have metrics or logging capability.
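
    Each of these options corresponds to a SKU name chosen when the account is created; here is a hedged sketch with the azure-mgmt-storage Python SDK (an assumption; the book uses the portal), where Standard_LRS, Standard_GRS, Standard_RAGRS, and Standard_ZRS map to the four choices above.

        from azure.identity import DefaultAzureCredential
        from azure.mgmt.storage import StorageManagementClient

        client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")
        client.storage_accounts.begin_create(
            "my-resource-group",
            "mystorageaccount",
            {
                "location": "westus",
                "kind": "StorageV2",                # account kind; newer than the kinds the book describes
                "sku": {"name": "Standard_RAGRS"},  # pick the redundancy option here
            },
        ).result()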

    Source of Information : Microsoft Azure Essentials Fundamentals of Azure Second Edition
