Azure Storage In Depth

The following is from Azure Administrator Training lab for AZ-103

Azure Storage

Azure Storage is Microsoft’s cloud storage solution for modern data storage scenarios. Azure Storage offers a massively scalable object store for data objects, a file system service for the cloud, a messaging store for reliable messaging, and a NoSQL store. Azure Storage is:

  • Durable and highly available. Redundancy ensures that your data is safe in the event of transient hardware failures. You can also opt to replicate data across datacenters or geographical regions for additional protection from local catastrophe or natural disaster. Data replicated in this way remains highly available in the event of an unexpected outage.
  • Secure. All data written to Azure Storage is encrypted by the service. Azure Storage provides you with fine-grained control over who has access to your data.
  • Scalable. Azure Storage is designed to be massively scalable to meet the data storage and performance needs of today’s applications.
  • Managed. Microsoft Azure handles hardware maintenance, updates, and critical issues for you.
  • Accessible. Data in Azure Storage is accessible from anywhere in the world over HTTP or HTTPS. Microsoft provides SDKs for Azure Storage in a variety of languages – .NET, Java, Node.js, Python, PHP, Ruby, Go, and others – as well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or Azure CLI. And the Azure portal and Azure Storage Explorer offer easy visual solutions for working with your data.

Azure Storage is a service that you can use to store files, messages, tables, and other types of information. You can use Azure storage on its own—for example as a file share—but it is often used by developers as a store for working data. Such stores can be used by websites, mobile apps, desktop applications, and many other types of custom solutions. Azure storage is also used by IaaS virtual machines and PaaS cloud services. You can generally think of Azure storage in three categories.

  • Storage for Virtual Machines. This includes disks and files. Disks are persistent block storage for Azure IaaS virtual machines. Files are fully managed file shares in the cloud.
  • Unstructured Data. This includes Blobs and Data Lake Store. Blobs are a highly scalable, REST-based cloud object store. Data Lake Store is Hadoop Distributed File System (HDFS) as a service.
  • Structured Data. This includes Tables, Cosmos DB, and Azure SQL DB. Tables are a key/value, auto-scaling NoSQL store. Cosmos DB is a globally distributed database service. Azure SQL DB is a fully managed database-as-a-service built on SQL.

For more information, you can see: Azure Storage – https://azure.microsoft.com/en-us/services/storage/

Azure Storage Services

Azure Storage includes these data services, each of which is accessed through a storage account.

  • Azure Blobs: A massively scalable object store for text and binary data.
  • Azure Files: Managed file shares for cloud or on-premises deployments.
  • Azure Queues: A messaging store for reliable messaging between application components.
  • Azure Tables: A NoSQL store for schemaless storage of structured data.

Blob storage

Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data, such as text or binary data. Blob storage is ideal for:

  • Serving images or documents directly to a browser.
  • Storing files for distributed access.
  • Streaming video and audio.
  • Storing data for backup and restore, disaster recovery, and archiving.
  • Storing data for analysis by an on-premises or Azure-hosted service.

Objects in Blob storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client applications can access blobs via URLs, the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure Storage client library. The storage client libraries are available for multiple languages, including .NET, Java, Node.js, Python, PHP, and Ruby.

Azure Files

Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST interface or the storage client libraries.

One thing that distinguishes Azure Files from files on a corporate file share is that you can access the files from anywhere in the world using a URL that points to the file and includes a shared access signature (SAS) token. You can generate SAS tokens; they allow specific access to a private asset for a specific amount of time.

File shares can be used for many common scenarios:

  • Many on-premises applications use file shares. This feature makes it easier to migrate those applications that share data to Azure. If you mount the file share to the same drive letter that the on-premises application uses, the part of your application that accesses the file share should work with minimal, if any, changes.
  • Configuration files can be stored on a file share and accessed from multiple VMs. Tools and utilities used by multiple developers in a group can be stored on a file share, ensuring that everybody can find them, and that they use the same version.
  • Diagnostic logs, metrics, and crash dumps are just three examples of data that can be written to a file share and processed or analyzed later.

At this time, Active Directory-based authentication and access control lists (ACLs) are not supported, but they will be at some time in the future. The storage account credentials are used to provide authentication for access to the file share. This means anybody with the share mounted will have full read/write access to the share.

Queue storage

The Azure Queue service is used to store and retrieve messages. Queue messages can be up to 64 KB in size, and a queue can contain millions of messages. Queues are generally used to store lists of messages to be processed asynchronously.

For example, say you want your customers to be able to upload pictures, and you want to create thumbnails for each picture. You could have your customer wait for you to create the thumbnails while uploading the pictures. An alternative would be to use a queue. When the customer finishes the upload, write a message to the queue. Then have an Azure Function retrieve the message from the queue and create the thumbnails. Each of the parts of this processing can be scaled separately, giving you more control when tuning it for your usage.
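The decoupling pattern described above can be sketched locally. The following is a minimal stand-in that uses Python's standard queue module in place of the Azure Queue service, and a thread in place of an Azure Function; the file names and thumbnail logic are invented purely for illustration.

```python
import queue
import threading

# Local stand-in for an Azure Queue; a real solution would use the
# Azure Queue service and an Azure Function trigger instead.
upload_queue: "queue.Queue[str]" = queue.Queue()

def create_thumbnail(picture_name: str) -> str:
    """Placeholder for the real thumbnail-generation work."""
    return f"thumb_{picture_name}"

def worker(results: list) -> None:
    # Drain the queue, processing one message at a time, the way a
    # queue-triggered function would be invoked per message.
    while True:
        try:
            message = upload_queue.get_nowait()
        except queue.Empty:
            break
        results.append(create_thumbnail(message))

# The "customer" finishes uploads and writes one message per picture.
for name in ["cat.jpg", "dog.jpg"]:
    upload_queue.put(name)

results: list = []
t = threading.Thread(target=worker, args=(results,))
t.start()
t.join()
print(results)
```

The producer and the worker never call each other directly, which is what lets each side be scaled and tuned independently.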

Table storage

Azure Table storage is now part of Azure Cosmos DB. To see Azure Table storage documentation, see the Azure Table Storage Overview. In addition to the existing Azure Table storage service, there is a new Azure Cosmos DB Table API offering that provides throughput-optimized tables, global distribution, and automatic secondary indexes. To learn more and try out the new premium experience, check out the Azure Cosmos DB Table API.

Standard and Premium Accounts

General purpose storage accounts have two tiers: Standard and Premium.

Standard storage accounts are backed by magnetic drives (HDD) and provide the lowest cost per GB. They are best for applications that require bulk storage or where data is accessed infrequently.

Premium storage accounts are backed by solid state drives (SSD) and offer consistent low-latency performance. They can only be used with Azure virtual machine disks and are best for I/O-intensive applications, like databases.

✔️ It is not possible to convert a Standard storage account to a Premium storage account or vice versa. You must create a new storage account with the desired type and copy data, if applicable, to the new storage account.

Storage Types

When you create a storage account you can choose from: Storage (general purpose v1), Storage V2 (general purpose v2), and Blob storage. Azure Storage offers three types of storage accounts. Each type supports different features and has its own pricing model. Consider these differences before you create a storage account to determine the type of account that is best for your applications. The three types of storage accounts are:

  • General-purpose v2 accounts. General-purpose v2 storage accounts support the latest Azure Storage features and incorporate all of the functionality of general-purpose v1 and Blob storage accounts. General-purpose v2 accounts deliver the lowest per-gigabyte capacity prices for Azure Storage, as well as industry-competitive transaction prices. General-purpose v2 storage accounts support many services: Blobs (all types: Block, Append, Page), Files, Disks, Queues, and Tables. Microsoft recommends using a general-purpose v2 storage account for most scenarios. You can easily upgrade a general-purpose v1 or Blob storage account to a general-purpose v2 account with no downtime and without the need to copy data.
  • General-purpose v1 accounts. General-purpose v1 accounts provide access to all Azure Storage services, but may not have the latest features or the lowest per-gigabyte pricing. This is a legacy account type for blobs, files, queues, and tables. Use general-purpose v2 accounts instead when possible.
  • Blob storage accounts. A Blob storage account is a specialized storage account for storing unstructured object data as block blobs. Blob storage accounts provide the same durability, availability, scalability, and performance features that are available with general-purpose v2 storage accounts. Blob storage accounts support storing block blobs and append blobs, but not page blobs. Blob storage accounts offer multiple access tiers for storing data based on your usage patterns.

Accessing Storage

Every object that you store in Azure Storage has a unique URL address. The storage account name forms the subdomain of that address. The combination of subdomain and domain name, which is specific to each service, forms an endpoint for your storage account. For example, if your storage account is named mystorageaccount, then the default endpoints for your storage account are:

  • Blob service: http://mystorageaccount.blob.core.windows.net
  • Table service: http://mystorageaccount.table.core.windows.net
  • Queue service: http://mystorageaccount.queue.core.windows.net
  • File service: http://mystorageaccount.file.core.windows.net
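The endpoint pattern above lends itself to a small sketch. The following is illustrative only: it assembles strings in the documented <account>.<service>.core.windows.net shape (shown here with https, which every service also accepts) and makes no calls to Azure.

```python
# Compose Azure Storage endpoint URLs from an account name.
# The account, container, and blob names are the examples from this lab.
ENDPOINT_SUFFIX = "core.windows.net"

def service_endpoint(account: str, service: str) -> str:
    """Default endpoint for one of the services: blob, table, queue, or file."""
    return f"https://{account}.{service}.{ENDPOINT_SUFFIX}"

def blob_url(account: str, container: str, blob: str) -> str:
    """Full URL of a blob: the blob endpoint plus the object's location."""
    return f"{service_endpoint(account, 'blob')}/{container}/{blob}"

for svc in ("blob", "table", "queue", "file"):
    print(service_endpoint("mystorageaccount", svc))

print(blob_url("mystorageaccount", "mycontainer", "myblob"))
```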

The URL for accessing an object in a storage account is built by appending the object’s location in the storage account to the endpoint. For example, to access myblob in the mycontainer container, use this format: http://mystorageaccount.blob.core.windows.net/mycontainer/myblob.

Configuring a Custom Domain

You can configure a custom domain for accessing blob data in your Azure storage account. As mentioned previously, the default endpoint for Azure Blob storage is <storage-account-name>.blob.core.windows.net. You can also use the web endpoint that’s generated as a part of the static websites feature (preview). If you map a custom domain and subdomain, such as www.contoso.com, to the blob or web endpoint for your storage account, your users can use that domain to access blob data in your storage account. There are two ways to configure this service: direct CNAME mapping and an intermediary domain.

Direct CNAME mapping. For example, to enable a custom domain for the blobs.contoso.com subdomain to an Azure storage account, create a CNAME record that points from blobs.contoso.com to the Azure storage account [storage account].blob.core.windows.net. The following example maps a domain to an Azure storage account in DNS:


CNAME record Target
blobs.contoso.com contosoblobs.blob.core.windows.net

Intermediary mapping with asverify. Mapping a domain that is already in use within Azure may result in minor downtime as the domain is updated. If you have an application with an SLA that uses the domain, you can avoid the downtime by using a second option, the asverify subdomain, to validate the domain. By prepending asverify to your own subdomain, you permit Azure to recognize your custom domain without modifying the DNS record for the domain. After you modify the DNS record for the domain, it will be mapped to the blob endpoint with no downtime. The following example maps a domain to the Azure storage account in DNS with the asverify intermediary domain:

CNAME record Target
asverify.blobs.contoso.com asverify.contosoblobs.blob.core.windows.net
blobs.contoso.com contosoblobs.blob.core.windows.net

✔️ A Blob storage account only exposes the Blob service endpoint. You can also configure a custom domain name to use with your storage account.

BLOB Storage


Azure Blob storage is a service that stores unstructured data in the cloud as objects/blobs. Blob storage can store any type of text or binary data, such as a document, media file, or application installer. Blob storage is also referred to as object storage. Common uses of Blob storage include:

  • Serving images or documents directly to a browser.
  • Storing files for distributed access, such as installation.
  • Streaming video and audio.
  • Storing data for backup and restore, disaster recovery, and archiving.
  • Storing data for analysis by an on-premises or Azure-hosted service.

Blob service resources

Blob storage offers three types of resources:

  • The storage account
  • Containers in the storage account
  • Blobs in a container

The following diagram shows the relationship between these resources.

✔️ Within the storage account, you group as many blobs as needed in a container.

For more information, you can see: Azure Blob Storage – https://azure.microsoft.com/en-us/services/storage/blobs/

Blob Containers

A container provides a grouping of a set of blobs. All blobs must be in a container. An account can contain an unlimited number of containers. A container can store an unlimited number of blobs. You can create the container in the Azure Portal.

Name: The name may only contain lowercase letters, numbers, and hyphens, and must begin with a letter or a number. The name must also be between 3 and 63 characters long.

Public access level: Specifies whether data in the container may be accessed publicly. By default, container data is private to the account owner.

  • Use Private to ensure there is no anonymous access to the container and blobs.
  • Use Blob to allow anonymous public read access for blobs only.
  • Use Container to allow anonymous public read and list access to the entire container, including the blobs.
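As a quick sanity check, the container naming rules described above (lowercase letters, numbers, and hyphens; begins with a letter or number; 3 to 63 characters) can be expressed as a short helper. This sketch checks only the rules stated in this lab, not every rule the service may enforce.

```python
import re

# One character class for the allowed alphabet; the first character is
# restricted to a letter or number, and total length must be 3-63.
CONTAINER_NAME = re.compile(r"^[a-z0-9][a-z0-9-]{2,62}$")

def is_valid_container_name(name: str) -> bool:
    """True if the name satisfies the naming rules listed above."""
    return CONTAINER_NAME.fullmatch(name) is not None

print(is_valid_container_name("my-container-01"))  # valid
print(is_valid_container_name("My-Container"))     # invalid: uppercase
print(is_valid_container_name("ab"))               # invalid: too short
```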

✔️ You can also create the Blob container with PowerShell using the New-AzStorageContainer command.

✔️ Have you thought about how you will organize your containers?

Blob Performance Tiers

Azure Storage provides different options for accessing block blob data (as shown in the screenshot), based on usage patterns. Each access tier in Azure Storage is optimized for a particular pattern of data usage. By selecting the right access tier for your needs, you can store your block blob data in the most cost-effective manner.

  • Hot (inferred). The Hot tier is optimized for frequent access of objects in the storage account. Accessing data in the Hot tier is most cost-effective, while storage costs are somewhat higher. New storage accounts are created in the Hot tier by default.
  • Cool. The Cool tier is optimized for storing large amounts of data that is infrequently accessed and stored for at least 30 days. Storing data in the Cool tier is more cost-effective, but accessing that data may be somewhat more expensive than accessing data in the Hot tier.
  • Archive. The Archive tier is optimized for data that can tolerate several hours of retrieval latency and will remain in the Archive tier for at least 180 days. The Archive tier is the most cost-effective option for storing data, but accessing that data is more expensive than accessing data in the Hot or Cool tiers.

✔️ If there is a change in the usage pattern of your data, you can switch between these access tiers at any time.
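The trade-off between tiers can be made concrete with a rough cost model. The per-gigabyte prices below are invented purely for illustration (real prices vary by region and are listed on the Azure pricing page); the point is the shape of the trade-off: cooler tiers store data cheaply but charge more to read it back.

```python
# Hypothetical prices, for illustration only: (storage $/GB/month, read $/GB).
HYPOTHETICAL_PRICES = {
    "hot":     (0.020, 0.000),
    "cool":    (0.010, 0.010),
    "archive": (0.002, 0.020),
}

def monthly_cost(tier: str, stored_gb: float, read_gb: float) -> float:
    """Storage cost plus per-GB read cost for one month in the given tier."""
    storage_price, read_price = HYPOTHETICAL_PRICES[tier]
    return stored_gb * storage_price + read_gb * read_price

# 1000 GB stored, only 10 GB read per month: the cooler tier wins.
print(round(monthly_cost("hot", 1000, 10), 2))
print(round(monthly_cost("cool", 1000, 10), 2))
```

With a heavy read workload the comparison flips, which is why the text recommends matching the tier to the access pattern rather than always picking the cheapest storage price.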

Uploading Blobs

A blob can be any type and size of file. Azure Storage offers three types of blobs: block blobs, page blobs, and append blobs. You specify the blob type when you create the blob. The default is a block blob.

  • Block blobs consist of blocks of data assembled to make a blob. Most scenarios using Blob storage employ block blobs. Block blobs are ideal for storing text and binary data in the cloud, like files, images, and videos.
  • Append blobs are like block blobs in that they are made up of blocks, but they are optimized for append operations, so they are useful for logging scenarios.
  • Page blobs can be up to 8 TB in size and are more efficient for frequent read/write operations. Azure virtual machines use page blobs as OS and data disks.

✔️ Once the blob has been created, its type cannot be changed.

✔️ You can also upload a local file to blob storage using the PowerShell Set-AzStorageBlobContent command.

Blob upload tools

There are multiple methods to upload data to blob storage, including the following:

  • AzCopy is an easy-to-use command-line tool for Windows and Linux that copies data to and from Blob storage, across containers, or across storage accounts. For more information about AzCopy, see Transfer data with AzCopy v10 (Preview).
  • The Azure Storage Data Movement library is a .NET library for moving data between Azure Storage services. The AzCopy utility is built with the Data Movement library.
  • Azure Data Factory supports copying data to and from Blob storage by using the account key, shared access signature, service principal, or managed identities for Azure resources authentication. For more information, see Copy data to or from Azure Blob storage by using Azure Data Factory.
  • Blobfuse is a virtual file system driver for Azure Blob storage. You can use blobfuse to access your existing block blob data in your storage account through the Linux file system. For more information, see How to mount Blob storage as a file system with blobfuse.
  • Azure Data Box Disk is a service for transferring on-premises data to Blob storage when large datasets or network constraints make uploading data over the wire unrealistic. You can use Azure Data Box Disk to request solid-state disks (SSDs) from Microsoft. You can then copy your data to those disks and ship them back to Microsoft to be uploaded into Blob storage.
  • The Azure Import/Export service provides a way to export large amounts of data from your storage account to hard drives that you provide and that Microsoft then ships back to you with your data. For more information, see Use the Microsoft Azure Import/Export service to transfer data to Blob storage.

✔️ Of course, you can always use Azure Storage Explorer.

Blob Access Policies

A stored access policy provides an additional level of control over service-level shared access signatures (SAS) on the server side. Establishing a stored access policy serves to group shared access signatures and to provide additional restrictions for signatures that are bound by the policy. You can use a stored access policy to change the start time, expiry time, or permissions for a signature, or to revoke it after it has been issued. The following storage resources support stored access policies:

  • Blob containers
  • File shares
  • Queues
  • Tables

A stored access policy on a container can be associated with a shared access signature granting permissions to the container itself or to the blobs it contains. Similarly, a stored access policy on a file share can be associated with a shared access signature granting permissions to the share itself or to the files it contains. Stored access policies are currently not supported for account SAS.

✔️ SAS will be covered in more detail in the last lesson, Storage Security.

Blob Storage Pricing

All storage accounts use a pricing model for blob storage based on the tier of each blob. When using a storage account, the following billing considerations apply:

  • Performance tiers: In addition to the amount of data stored, the cost of storing data varies depending on the storage tier. The per-gigabyte cost decreases as the tier gets cooler.
  • Data access costs: Data access charges increase as the tier gets cooler. For data in the cool and archive storage tiers, you are charged a per-gigabyte data access charge for reads.
  • Transaction costs: There is a per-transaction charge for all tiers that increases as the tier gets cooler.
  • Geo-replication data transfer costs: This charge only applies to accounts with geo-replication configured, including GRS and RA-GRS. Geo-replication data transfer incurs a per-gigabyte charge.
  • Outbound data transfer costs: Outbound data transfers (data that is transferred out of an Azure region) incur billing for bandwidth usage on a per-gigabyte basis, consistent with general-purpose storage accounts.
  • Changing the storage tier: Changing the account storage tier from cool to hot incurs a charge equal to reading all the data existing in the storage account. However, changing the account storage tier from hot to cool incurs a charge equal to writing all the data into the cool tier (GPv2 accounts only).

Azure Files

File storage offers shared storage for applications using the industry-standard SMB protocol. Microsoft Azure virtual machines and cloud services can share file data across application components via mounted shares, and on-premises applications can also access file data in the share.

Applications running in Azure virtual machines or cloud services can mount a file storage share to access file data, just as a desktop application would mount a typical SMB share. Any number of Azure virtual machines or roles can mount and access the File storage share simultaneously. Common uses of file storage include:

  • Replace and supplement. Azure Files can be used to completely replace or supplement traditional on-premises file servers or NAS devices.
  • Access anywhere. Popular operating systems such as Windows, macOS, and Linux can directly mount Azure File shares wherever they are in the world.
  • Lift and shift. Azure Files makes it easy to “lift and shift” applications to the cloud that expect a file share to store file application or user data.
  • Azure File Sync. Azure File shares can also be replicated with Azure File Sync to Windows Servers, either on-premises or in the cloud, for performance and distributed caching of the data where it’s being used.
  • Shared applications. Storing shared application settings, for example in configuration files.
  • Diagnostic data. Storing diagnostic data such as logs, metrics, and crash dumps in a shared location.
  • Tools and utilities. Storing tools and utilities needed for developing or administering Azure virtual machines or cloud services.

✔️ Which of the usage cases for files are you most interested in?

For more information, you can see: What is Azure Files? – https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

Files vs Blobs

Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard Server Message Block (SMB) protocol. Azure File shares can be mounted concurrently by cloud or on-premises deployments of Windows, Linux, and macOS. Additionally, Azure File shares can be cached on Windows Servers with Azure File Sync (next lesson) for fast access near where the data is being used.

Sometimes it is difficult to decide when to use file shares instead of blobs or disk shares. Take a minute to review this table that compares the different features.

Feature: Azure Files
Description: Provides an SMB interface, client libraries, and a REST interface that allows access from anywhere to stored files.
When to use: You want to “lift and shift” an application to the cloud which already uses the native file system APIs to share data between it and other applications running in Azure. You want to store development and debugging tools that need to be accessed from many virtual machines.

Feature: Azure Blobs
Description: Provides client libraries and a REST interface that allows unstructured data to be stored and accessed at a massive scale in block blobs.
When to use: You want your application to support streaming and random-access scenarios. You want to be able to access application data from anywhere.

Other distinguishing features when selecting Azure Files:

  • Azure files are true directory objects. Azure blobs are a flat namespace.
  • Azure files are accessed through file shares. Azure blobs are accessed through a container.
  • Azure files provide shared access across multiple virtual machines. Azure disks are exclusive to a single virtual machine.

✔️ When selecting which storage feature to use, you should also consider pricing. Take a minute to view the Azure Storage Overview pricing page.

Creating File Shares

To access your files, you will need a file share. There are several ways to create a file share.

Creating a file share (Portal)

Before you can create a file share you will need a storage account. Once that is in place, provide the file share Name and the Quota. Quota refers to the total size of files on the share. Be sure to test by uploading and accessing a file.

The rules for file service share names are more restrictive than what is prescribed by the SMB protocol for SMB share names, so that the Blob and File services can share similar naming conventions for containers and shares. The naming restrictions for shares are as follows:

  • A share name must be a valid DNS name.
  • Share names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
  • Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in share names.
  • All letters in a share name must be lowercase.
  • Share names must be from 3 through 63 characters long.
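A helper that encodes the share naming restrictions above might look like the following sketch. It checks only the rules listed here (the DNS-name requirement is implied by them), and the real service remains the authority.

```python
import re

# Starts with a letter or number; every dash must sit between two
# letters/numbers, which also rules out leading, trailing, and
# consecutive dashes. Length is checked separately.
SHARE_NAME = re.compile(r"^[a-z0-9](-?[a-z0-9])*$")

def is_valid_share_name(name: str) -> bool:
    """True if the name satisfies the share naming rules listed above."""
    return 3 <= len(name) <= 63 and SHARE_NAME.fullmatch(name) is not None

print(is_valid_share_name("logs-2019"))   # valid
print(is_valid_share_name("logs--2019"))  # invalid: consecutive dashes
print(is_valid_share_name("Logs"))        # invalid: uppercase letter
```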

Creating a file share (PowerShell)

You can also use PowerShell to create a file share.

# Retrieve the storage account context
$storageContext = New-AzStorageContext <storage-account-name> <storage-account-key>

# Create the file share, in this case "logs"
$share = New-AzStorageShare logs -Context $storageContext

Mapping File Shares (Windows)

You can connect to your Azure file share with Windows or Windows Server. Here is what you will need:

  • Mapping Drive Letter: Your choice.
  • UNC Path: In the form \\storagename.file.core.windows.net\filesharename

To map the Windows drive you will also need to supply the account credentials in the Windows Security dialog box.

  • Account User: In the form AZURE\storagename
  • Storage Account Key: To mount an Azure file share, you will need the primary (or secondary) storage key.

All of this information is available by selecting Connect from your file share page.

✔️ Ensure port 445 is open. Azure Files uses the SMB protocol. SMB communicates over TCP port 445 – check that your firewall is not blocking TCP port 445 from the client machine.
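The port 445 check mentioned above can be automated with a small helper. This is a generic TCP reachability probe, not an Azure-specific tool; the storage account host name in the comment is hypothetical.

```python
import socket

def port_reachable(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A quick way to check whether a firewall or ISP is blocking the
    outbound SMB port before trying to map an Azure file share.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (hypothetical storage account name):
# port_reachable("storagename.file.core.windows.net")
```

If this returns False from your client machine, map attempts will fail regardless of credentials, so it is worth running before troubleshooting anything else.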

Mounting File Shares (Linux)

Azure file shares can be mounted in Linux distributions using the CIFS kernel client. There are two ways to mount an Azure file share:

  • On-demand with the mount command.
  • On-boot (persistent) by creating an entry in /etc/fstab.

Prerequisites for mounting the file share in Linux

In addition to the Windows prerequisites, you also need to:

  • Install the cifs-utils package. Consult the documentation to ensure you are running a Linux distribution that supports this package.
  • Understand the SMB client requirements. Azure Files can be mounted either via SMB 2.1 or SMB 3.0. For connections coming from clients on-premises or in other Azure regions, Azure Files will reject SMB 2.1 (or SMB 3.0 without encryption). If secure transfer required is enabled for a storage account, Azure Files will only allow connections using SMB 3.0 with encryption.
  • Decide on the directory/file chmod permissions.

Mount the file share

You can use the Connect page for your file share to view the mount command.

Secure Transfer Required

The secure transfer option enhances the security of your storage account by only allowing requests to the storage account over a secure connection. For example, when calling REST APIs to access your storage accounts, you must connect using HTTPS. Any requests using HTTP will be rejected when Secure transfer required is enabled.

When you are using the Azure Files service, connection without encryption will fail, including scenarios using SMB 2.1, SMB 3.0 without encryption, and some versions of the Linux SMB client.

You can also use tooling to enable this feature. Here is how to use PowerShell and the EnableHttpsTrafficOnly parameter.

Set-AzStorageAccount -Name <StorageAccountName> -ResourceGroupName <ResourceGroupName> -EnableHttpsTrafficOnly $True

✔️ Because Azure storage doesn’t support HTTPS for custom domain names, this option is not applied when using a custom domain name.

File Share Snapshots

Azure Files provides the capability to take share snapshots of file shares. Share snapshots capture the share state at that point in time. A share snapshot is a point-in-time, read-only copy of your data.

Share snapshot capability is provided at the file share level. Retrieval is provided at the individual file level, to allow for restoring individual files. You cannot delete a share that has share snapshots unless you delete all the share snapshots first.

Share snapshots are incremental in nature. Only the data that has changed after your most recent share snapshot is saved. This minimizes the time required to create the share snapshot and saves on storage costs. Even though share snapshots are saved incrementally, you need to retain only the most recent share snapshot in order to restore the share.

When to use share snapshots:

  • Protection against application error and data corruption. Applications that use file shares perform operations such as writing, reading, storage, transmission, and processing. If an application is misconfigured or an unintentional bug is introduced, accidental overwrite or damage can happen to a few blocks. To help protect against these scenarios, you can take a share snapshot before you deploy new application code. If a bug or application error is introduced with the new deployment, you can go back to a previous version of your data on that file share.
  • Protection against accidental deletions or unintended changes. Imagine that you’re working on a text file in a file share. After the text file is closed, you lose the ability to undo your changes. In these cases, you then need to recover a previous version of the file. You can use share snapshots to recover previous versions of the file if it’s accidentally renamed or deleted.
  • General backup purposes. After you create a file share, you can periodically create a share snapshot of the file share to use it for data backup. A share snapshot, when taken periodically, helps maintain previous versions of data that can be used for future audit requirements or disaster recovery.

Storage Security

Azure Storage provides a comprehensive set of security capabilities that together enable developers to build secure applications. In this lesson, we focus on Shared Access Signatures, but also cover storage encryption and some best practices. Here are the high-level security capabilities for Azure Storage:

  • Encryption. All data written to Azure Storage is automatically encrypted using Storage Service Encryption (SSE).
  • Authentication. Azure Active Directory (Azure AD) and Role-Based Access Control (RBAC) are supported for Azure Storage for both resource management operations and data operations, as follows:
    • You can assign RBAC roles scoped to the storage account to security principals and use Azure AD to authorize resource management operations such as key management.
    • Azure AD integration is supported in preview for data operations on the Blob and Queue services.
  • Data in transit. Data can be secured in transit between an application and Azure by using Client-Side Encryption, HTTPS, or SMB 3.0.
  • Disk encryption. OS and data disks used by Azure virtual machines can be encrypted using Azure Disk Encryption.
  • Shared Access Signatures. Delegated access to the data objects in Azure Storage can be granted using Shared Access Signatures.

Authorization Options

Every request made against a secured resource in the Blob, File, Queue, or Table service must be authorized. Authorization ensures that resources in your storage account are accessible only when you want them to be, and only to those users or applications to whom you grant access. Options for authorizing requests to Azure Storage include:

  • Azure Active Directory (Azure AD). Azure AD is Microsoft's cloud-based identity and access management service. Azure AD integration is currently available in preview for the Blob and Queue services. With Azure AD, you can assign fine-grained access to users, groups, or applications via role-based access control (RBAC).
  • Shared Key. Shared Key authorization relies on your account access keys and other parameters to produce an encrypted signature string that is passed on the request in the Authorization header.
  • Shared access signatures. Shared access signatures (SAS) delegate access to a particular resource in your account with specified permissions and over a specified time interval.
  • Anonymous access to containers and blobs. You can optionally make blob resources public at the container or blob level. A public container or blob is accessible to any user for anonymous read access. Read requests to public containers and blobs do not require authorization.

Shared Access Signatures

A shared access signature (SAS) is a URI that grants restricted access rights to Azure Storage resources (for example, a specific blob). You can provide a shared access signature to clients who should not be trusted with your storage account key but to whom you wish to delegate access to certain storage account resources. By distributing a shared access signature URI to these clients, you grant them access to a resource for a specified period of time. SAS is a secure way to share your storage resources without compromising your account keys.

A SAS gives you granular control over the type of access you grant to clients who have the SAS, including:

  • An account-level SAS can delegate access to multiple storage services, for example Blob, File, Queue, and Table.
  • An interval over which the SAS is valid, including the start time and the expiry time.
  • The permissions granted by the SAS. For example, a SAS for a blob might grant read and write permissions to that blob, but not delete permissions.

Optionally, you can also:

  • Specify an IP address or range of IP addresses from which Azure Storage will accept the SAS. For example, you might specify a range of IP addresses belonging to your organization.
  • Specify the protocol over which Azure Storage will accept the SAS. You can use this optional parameter to restrict access to clients using HTTPS.

✔️ There are two types of SAS: account and service. The account SAS delegates access to resources in one or more of the storage services. The service SAS delegates access to a resource in just one of the storage services: the Blob, Queue, Table, or File service. For more information, you can see: What is a shared access signature? – https://docs.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#what-is-a-shared-access-signature

Configuring SAS Parameters

Configuring a SAS includes Permissions, Start and expiry date/time, Allowed IP addresses, Allowed protocols, and Signing key. In this example, we are generating a Blob SAS token and URL.

  • Permissions. Your choices are Read, Create, Write, and Delete. You may select any combination of permissions.
  • Start and expiry date/time. The times during which the SAS is valid.
  • Allowed IP addresses. The IP addresses from which to accept requests.
  • Allowed protocols. Only allowing HTTPS requests is recommended.
  • Signing key. Your choices are: key1 or key2.

PowerShell Options

Create a storage account-level SAS with full permissions:

$context = New-AzStorageContext -StorageAccountName "myaccount" -StorageAccountKey $storageAccountKey
New-AzStorageAccountSASToken -Service Blob,File,Table,Queue -ResourceType Service,Container,Object -Permission "racwdlup" -Context $context

Create a blob-level SAS with read, write, and delete permissions:

New-AzStorageBlobSASToken -Container "ContainerName" -Blob "BlobName" -Permission rwd
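These cmdlets return only the SAS token (the query-string portion), not a full URL. As an illustrative sketch, with a placeholder token value rather than a real signature, appending the token to the resource URI in Python might look like this:

```python
# Hypothetical values: the token shape mirrors the SAS parameters described
# in this lesson, but the signature is a placeholder, not a real one.
blob_url = "https://myaccount.blob.core.windows.net/ContainerName/BlobName"
sas_token = "?sv=2015-04-05&sr=b&sp=rwd&sig=placeholder"

# Append the token to the resource URI, adding the "?" only if it is missing.
full_url = blob_url + sas_token if sas_token.startswith("?") else blob_url + "?" + sas_token
print(full_url)
```

The resulting URL can then be handed to a client, which presents it directly to the storage service with no further authentication.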

URI and SAS Parameters

As you create your SAS, a URI is created using parameters and tokens. The URI consists of your storage resource URI and the SAS token. Here is an example URI; each part is described in the table below.

https://myaccount.blob.core.windows.net/?restype=service&comp=properties&sv=2015-04-05&ss=bf&srt=s&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=F%6GRVAZ5Cdj2Pw4txxxxx
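The individual tokens packed into a URI like this can be inspected with any query-string parser. For example, a short Python snippet using only the standard library (the URI is the one from the text, with its truncated signature left as-is):

```python
from urllib.parse import urlsplit, parse_qs

sas_uri = ("https://myaccount.blob.core.windows.net/?restype=service&comp=properties"
           "&sv=2015-04-05&ss=bf&srt=s&st=2015-04-29T22%3A18%3A26Z"
           "&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70"
           "&spr=https&sig=F%6GRVAZ5Cdj2Pw4txxxxx")

# parse_qs splits the query string and percent-decodes each value,
# so st and se come back as readable UTC timestamps.
params = parse_qs(urlsplit(sas_uri).query)
for name in ("sv", "ss", "srt", "st", "se", "sr", "sp", "sip", "spr"):
    print(f"{name} = {params[name][0]}")
```

This is only a reading aid; a client never needs to decompose the URI, it presents it as-is to the storage service.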

Name | SAS portion | Description
Resource URI | https://myaccount.blob.core.windows.net/?restype=service&comp=properties | The Blob service endpoint, with parameters for getting service properties (when called with GET) or setting service properties (when called with SET).
Storage services version | sv=2015-04-05 | For storage services version 2012-02-12 and later, this parameter indicates the version to use.
Services | ss=bf | The SAS applies to the Blob and File services.
Resource types | srt=s | The SAS applies to service-level operations.
Start time | st=2015-04-29T22%3A18%3A26Z | Specified in UTC time. If you want the SAS to be valid immediately, omit the start time.
Expiry time | se=2015-04-30T02%3A23%3A26Z | Specified in UTC time.
Resource | sr=b | The resource is a blob.
Permissions | sp=rw | The permissions grant access to read and write operations.
IP range | sip=168.1.5.60-168.1.5.70 | The range of IP addresses from which a request will be accepted.
Protocol | spr=https | Only requests using HTTPS are permitted.
Signature | sig=F%6GRVAZ5Cdj2Pw4tgU7IlSTkWgn7bUkkAg8P6HESXwmf%4B | Used to authenticate access to the blob. The signature is an HMAC computed over a string-to-sign and key using the SHA256 algorithm, and then encoded using Base64 encoding.

For more information, you can see: Shared access signature parameters – https://docs.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#shared-access-signature-parameters
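The Signature row above describes how sig is produced: an HMAC-SHA256 over a string-to-sign, then Base64-encoded. Here is a minimal sketch of that computation in Python. Note that the real string-to-sign is a specific newline-delimited field list defined in the service documentation, and the account key below is a made-up placeholder:

```python
import base64
import hmac
from hashlib import sha256

def sign(string_to_sign: str, account_key_b64: str) -> str:
    # Storage account keys are distributed Base64-encoded; decode before keying the HMAC.
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), sha256).digest()
    return base64.b64encode(digest).decode("ascii")

# Placeholder key and a simplified string-to-sign for illustration only.
fake_key = base64.b64encode(b"not-a-real-storage-account-key").decode("ascii")
signature = sign("2015-04-29T22:18:26Z\n2015-04-30T02:23:26Z\nrw", fake_key)
print(signature)
```

Because the service recomputes the same HMAC from the request's SAS parameters and its copy of the key, any tampering with the parameters invalidates the signature.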

Storage Service Encryption

Azure Storage Service Encryption (SSE) for data at rest helps you protect your data to meet your organizational security and compliance commitments. With this feature, the Azure storage platform automatically encrypts your data before persisting it to Azure Managed Disks, Azure Blob, Queue, or Table storage, or Azure Files, and decrypts the data before retrieval. The handling of encryption, encryption at rest, decryption, and key management in Storage Service Encryption is transparent to users. All data written to the Azure storage platform is encrypted through 256-bit AES encryption, one of the strongest block ciphers available.

✔️ SSE is enabled for all new and existing storage accounts and cannot be disabled. Because your data is secured by default, you don't need to modify your code or applications.

Customer Managed Keys

If you prefer, you can use Azure Key Vault to manage your encryption keys. With Key Vault you can create your own encryption keys and store them in a key vault, or you can use Azure Key Vault's APIs to generate encryption keys. Using customer-managed keys gives you more flexibility and control when creating, disabling, auditing, rotating, and defining access controls.

✔️ To use customer-managed keys with SSE, you can either create a new key vault and key or you can use an existing key vault and key. The storage account and the key vault must be in the same region, but they can be in different subscriptions.

✔️ The key vault can also be used to store BitLocker keys.

SAS Best Practices

Risks

When you use shared access signatures in your applications, you need to be aware of two potential risks:

  • If a SAS is leaked, it can be used by anyone who obtains it, which can potentially compromise your storage account.
  • If a SAS provided to a client application expires and the application is unable to retrieve a new SAS from your service, then the application's functionality may be hindered.

Recommendations

The following recommendations for using shared access signatures can help mitigate these risks:

  • Always use HTTPS to create or distribute a SAS. If a SAS is passed over HTTP and intercepted, an attacker performing a man-in-the-middle attack is able to read the SAS and then use it just as the intended user could have, potentially compromising sensitive data or allowing for data corruption by the malicious user.
  • Reference stored access policies where possible. Stored access policies give you the option to revoke permissions without having to regenerate the storage account keys. Set the expiration on these very far in the future (or infinite) and make sure it's regularly updated to move it farther into the future.
  • Use near-term expiration times on an ad hoc SAS. In this way, even if a SAS is compromised, it's valid only for a short time. This practice is especially important if you cannot reference a stored access policy. Near-term expiration times also limit the amount of data that can be written to a blob by limiting the time available to upload to it.
  • Have clients automatically renew the SAS if necessary. Clients should renew the SAS well before the expiration, in order to allow time for retries if the service providing the SAS is unavailable. If your SAS is meant to be used for a small number of immediate, short-lived operations that are expected to be completed within the expiration period, then this may be unnecessary, as the SAS is not expected to be renewed. However, if you have a client that is routinely making requests via SAS, then the possibility of expiration comes into play. The key consideration is to balance the need for the SAS to be short-lived (as previously stated) with the need to ensure that the client is requesting renewal early enough (to avoid disruption due to the SAS expiring prior to successful renewal).
  • Be careful with SAS start time. If you set the start time for a SAS to now, then due to clock skew (differences in current time according to different machines), failures may be observed intermittently for the first few minutes. In general, set the start time to be at least 15 minutes in the past. Or, don't set it at all, which will make it valid immediately in all cases. The same generally applies to expiry time as well – remember that you may observe up to 15 minutes of clock skew in either direction on any request. For clients using a REST version prior to 2012-02-12, the maximum duration for a SAS that does not reference a stored access policy is 1 hour, and any policies specifying a longer term than that will fail.
  • Be specific with the resource to be accessed. A security best practice is to provide a user with the minimum required privileges. If a user only needs read access to a single entity, then grant them read access to that single entity, and not read/write/delete access to all entities. This also helps lessen the damage if a SAS is compromised, because the SAS has less power in the hands of an attacker.
  • Understand that your account will be billed for any usage, including that done with SAS. If you provide write access to a blob, a user may choose to upload a 200 GB blob. If you've given them read access as well, they may choose to download it 10 times, incurring 2 TB in egress costs for you. Again, provide limited permissions to help mitigate the potential actions of malicious users. Use a short-lived SAS to reduce this threat (but be mindful of clock skew on the end time).
  • Validate data written using SAS. When a client application writes data to your storage account, keep in mind that there can be problems with that data. If your application requires that data be validated or authorized before it is ready to use, you should perform this validation after the data is written and before it is used by your application. This practice also protects against corrupt or malicious data being written to your account, either by a user who properly acquired the SAS or by a user exploiting a leaked SAS.
  • Don't assume SAS is always the correct choice. Sometimes the risks associated with a particular operation against your storage account outweigh the benefits of SAS. For such operations, create a middle-tier service that writes to your storage account after performing business rule validation, authentication, and auditing. Also, sometimes it's simpler to manage access in other ways. For example, if you want to make all blobs in a container publicly readable, you can make the container public, rather than providing a SAS to every client for access.
  • Use Storage Analytics to monitor your application. You can use logging and metrics to observe any spike in authentication failures due to an outage in your SAS provider service or to the inadvertent removal of a stored access policy. See the Azure Storage Team Blog for additional information.
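Several of the timing recommendations above (backdate the start time by about 15 minutes to absorb clock skew, keep expiry near-term, and renew well before expiry) can be condensed into a small helper. This is an illustrative sketch only; the margins are the figures mentioned in the text, not values mandated by Azure:

```python
from datetime import datetime, timedelta, timezone

CLOCK_SKEW = timedelta(minutes=15)  # skew allowance suggested in the text

def sas_window(lifetime: timedelta = timedelta(hours=1)):
    """Return (start, expiry): start backdated for clock skew, near-term expiry."""
    now = datetime.now(timezone.utc)
    return now - CLOCK_SKEW, now + lifetime

def needs_renewal(expiry: datetime, margin: timedelta = timedelta(minutes=10)) -> bool:
    """Renew well before expiry, leaving time to retry if the SAS issuer is down."""
    return datetime.now(timezone.utc) + margin >= expiry

start, expiry = sas_window()
print(start.isoformat(), expiry.isoformat())
```

A client would call needs_renewal before each batch of requests and ask the issuing service for a fresh SAS whenever it returns True.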

 

References

https://github.com/MicrosoftLearning/AZ-103-MicrosoftAzureAdministrator