Wednesday, September 10, 2014

C# Code to Upload Images to a BLOB Storage Account in Azure

Using Visual Studio create a new C# Console application.

When the standard console application is created from the template, it will not have a reference to the storage client library. We will add it using the Package Manager (NuGet).
Right-click on the project, and select ‘Manage NuGet Packages’ from the context menu.
This will load up the Package Manager UI. Select the ‘Online’ tab from the Package Manager dialog, and search for ‘Azure Storage’. As of this writing, version 2.0.5.1 was available. Select the Windows Azure Storage package and click ‘Install’.
If you prefer to manage your packages via the Package Manager Console, you can also type Install-Package WindowsAzure.Storage to achieve the same result. The Package Manager will add the references needed by the storage client library. Some of these references won’t be utilized when just working with BLOB storage. Below you can see that several assembly references were added, but specifically the Microsoft.WindowsAzure.Configuration and Microsoft.WindowsAzure.Storage assemblies are what we will be working with in this example.
Open up the program.cs file and add the following using statements:
Then add this code to the Main method:
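A minimal sketch of the using statements and Main method body described below — the account values and file path are placeholders you must replace, and the blob name is illustrative; the container name “samples” comes from the walkthrough:

```csharp
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

// ...inside Program.Main...
string accountName = "YOUR_ACCOUNT_NAME"; // placeholder: your storage account name
string accountKey  = "YOUR_ACCOUNT_KEY";  // placeholder: a key from 'Manage Access Keys'

// Sign all requests with the account credentials and force HTTPS (useHttps: true).
StorageCredentials credentials = new StorageCredentials(accountName, accountKey);
CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, useHttps: true);

// Get a client for the BLOB service and a reference to the 'samples' container,
// creating the container on the service if it doesn't exist yet.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("samples");
container.CreateIfNotExists();

// Get a reference to a block BLOB (it doesn't exist on the service yet)
// and upload a local file to it.
CloudBlockBlob blob = container.GetBlockBlobReference("myimage.jpg");
using (FileStream fileStream = File.OpenRead(@"C:\temp\myimage.jpg"))
{
    blob.UploadFromStream(fileStream);
}
```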
Modify the code to add the storage account name for the accountName variable and the storage account access key from when you created the account in the portal for the accountKey variable. Also, substitute the file name used with the OpenRead method with a file that you wish to upload. I’d suggest an image or something small for this example.  For real applications, you will want to have the account name and key in a configuration file, or passed in so that you are not hard-coding these values.
The account name and key values are used to create a StorageCredentials instance, which is then used to create an instance of the CloudStorageAccount object. The CloudStorageAccount object represents a storage account that you will be accessing. From this object you can obtain references to BLOBs, tables and queues within the storage account. Note that when we create the instance of the CloudStorageAccount, we specify ‘true’ for the useHttps constructor argument. This will cause all of our operation calls to our storage account to be encrypted with SSL. Because all wire-level communications are therefore encrypted, and all requests are signed using the credentials provided in the StorageCredentials object, we can have some confidence that our operations are secure from prying eyes.
Since we are using BLOB storage, we then create an instance of a CloudBlobClient object to provide the client interface that is needed to work with BLOBs. Using the CloudBlobClient object we then get a reference to a container. A container is much like a base directory off the root of a hard drive. In this case, the container we are getting a reference to is named “samples”. On the next line, we call CreateIfNotExists which will actually cause the client library to call out to the public REST endpoint in Windows Azure to check if the container exists and, if not, will create the container. This provides us a location to upload our file to.
The last portion of the code gets a reference to a block BLOB within the container we created. This file doesn’t exist yet; we are just creating a reference to it as a CloudBlockBlob object. We will then use this to upload the file by providing a file stream of our file to the UploadFromStream method. The client library will then send up the file, creating the file in our Azure storage account. Note that many of the client library methods support the asynchronous programming pattern using Begin/End versions. For example, to asynchronously start an upload you can call BeginUploadFromStream.
The client library may make more than one call to the REST API to upload the file if it is fairly large. You can control the size of file that is sent up as a single operation by setting the SingleBlobUploadThresholdInBytes property. This defaults to 32 MB and can be set as high as 64 MB. Anything over that 64 MB mark will be automatically broken into smaller blocks and uploaded. If a file does get broken down by the client library for upload, it will use a single thread by default to perform the upload. You can set the ParallelOperationThreadCount property to indicate how many threads you would like the client library to use to concurrently upload your file. Just be careful: if you are asynchronously uploading a lot of files and setting the ParallelOperationThreadCount option fairly high, you may see some bottlenecking and slow throughput.
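As a sketch, these tuning knobs can be applied per operation via BlobRequestOptions — note that in some versions of the client library these properties live directly on CloudBlobClient instead; the values below are illustrative, and blob/fileStream are assumed to come from the upload code described above:

```csharp
// Hypothetical tuning values - adjust to your workload.
BlobRequestOptions options = new BlobRequestOptions
{
    // Files at or below this size are sent as one Put Blob operation.
    SingleBlobUploadThresholdInBytes = 16 * 1024 * 1024, // 16 MB

    // Number of threads used for concurrent block uploads of larger files.
    ParallelOperationThreadCount = 4
};

// Pass the options to the upload call so they apply to this operation.
blob.UploadFromStream(fileStream, accessCondition: null, options: options);
```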

Wednesday, August 13, 2014

Adding a BLOB Storage Account with the Azure Portal for Uploading

To create a storage account, log in to the Windows Azure management portal at https://manage.windowsazure.com. After you log in to the portal you can quickly create a Storage Account by clicking on the large NEW icon at the bottom left-hand corner of the portal.
From the expanding menu select the ‘Data Services’ option, then ‘Storage’ and finally, ‘Quick Create’.
You will now need to provide a name for your storage account in the URL textbox. This name is used as part of the URL for the service endpoint and so it must be globally unique. The portal will indicate whether the name is available whenever you pause or finish typing. Next, you select a location for your storage account by selecting one of the data center locations in the dropdown. This location will be the primary storage location for your data, or more simply, your account will reside in this Data Center. If you have created ‘affinity groups’, which is a friendly name of a collection of services you want to run in the same location, you will also see those in your dropdown list. If you have more than one Windows Azure subscription related to your login address, you may also see a dropdown list to enable you to select the Azure subscription that the account will belong to.
All storage accounts are stored in triplicate, with transactionally-consistent copies in the primary data center. In addition to that redundancy, you can also choose to have ‘Geo Replication’ enabled for the storage account. ‘Geo Replication’ means that the Windows Azure Table and BLOB data that you place into the account will not only be stored in the primary location but will also be replicated in triplicate to another data center within the same region. So, if you select ‘West US’ for your primary storage location, your account will also have a triplicate copy stored in the East US data center. This mapping is done automatically by Microsoft and you can’t control the location of your secondary replication, but it will never be outside of a region so you don’t have to worry about your West US based account somehow getting replicated to Europe or Asia as part of the Geo Replication feature. Storage accounts that have Geo Replication enabled are referred to as geo redundant storage (GRS) and cost slightly more than accounts that do not have it enabled, which are called locally redundant storage (LRS).
Once you have selected the location and provided a name, you can click the ‘Create Storage Account’ action at the bottom of the screen. The Windows Azure portal will then generate the storage account for you within a few moments. When the account is fully created, you will see a status of Online. By selecting the new storage account in the portal, you can retrieve one of the access keys we will need in order to work with the storage account.
Click on the ‘Manage Access Keys’ at the bottom of the screen to display the storage account name, which you provided when you created the account, and two 512-bit storage access keys used to authenticate requests to the storage account. Whoever has these keys will have complete control over your storage account short of deleting the entire account. They would have the ability to upload BLOBs, modify table data and destroy queues. These account keys should be treated as a secret in the same way that you would guard passwords or a private encryption key. Both of these keys are active and will work to access your storage account. It is a good practice to use one of the keys for all the applications that utilize this storage account so that, if that key becomes compromised, you can use this dialog to regenerate the key you haven’t been using, then update all the apps to use that newly regenerated key, and finally regenerate the compromised key. This would prevent anyone abusing the account with the compromised key.
 Your storage account is now created and we have what we need to work with it. For now, get a copy of the Primary Access Key by clicking on the copy icon next to the text box.

What Kind of BLOB is that?

Any file type can be stored in the Windows Azure BLOB Storage service, such as Image files, database files, text files, or virtual hard drive files. However, when they are uploaded to the service they are stored as either a Page BLOB or a Block BLOB depending on how you plan on using that file or the size of the file you need to work with.
Page BLOBs are optimized for random reads and writes so they are most commonly used when storing virtual hard drive files for virtual machines: in fact, the Page BLOB was introduced when the first virtual drive for Windows Azure was announced: the Windows Azure Cloud Drive (at the time they were known as Windows Azure X-Drives). Nowadays, the persisted disks used by Windows Azure Virtual Machines (Microsoft’s IaaS offering) also use the Page BLOB to store their data and Operating System drives. Each Page BLOB is made up of one or more 512-byte pages of data, up to a total size limit of 1 TB per file.
The majority of files that you upload would benefit from being stored as Block BLOBs, which are written to the storage account as a series of blocks and then committed into a single file. We can create a large file by breaking it into blocks, which can be uploaded concurrently and then committed together into a single file in one operation. This provides us with faster upload times and better throughput. The client storage libraries manage this process by uploading files of less than 64 MB in size in a single operation, and uploading larger files across multiple operations by breaking down the files and running the concurrent uploads. A Block BLOB has a maximum size of 200 GB. For this article we will be using Block BLOBs in the examples.
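To make the block mechanics concrete, here is a sketch of doing the same thing manually with PutBlock and PutBlockList — the 4 MB block size and file names are illustrative, and ‘container’ is assumed to be an existing CloudBlobContainer:

```csharp
// 'container' is assumed to be an existing CloudBlobContainer.
CloudBlockBlob blob = container.GetBlockBlobReference("bigfile.bin");
List<string> blockIds = new List<string>();
const int blockSize = 4 * 1024 * 1024; // 4 MB per block (illustrative)

using (FileStream fs = File.OpenRead(@"C:\temp\bigfile.bin"))
{
    byte[] buffer = new byte[blockSize];
    int read, blockNumber = 0;
    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Block IDs must be Base64-encoded strings, all of the same length.
        string blockId = Convert.ToBase64String(
            Encoding.UTF8.GetBytes(blockNumber.ToString("d6")));

        // Upload this block to the service (uncommitted until PutBlockList).
        blob.PutBlock(blockId, new MemoryStream(buffer, 0, read), null);
        blockIds.Add(blockId);
        blockNumber++;
    }
}

// Commit the uploaded blocks into a single blob in one operation.
blob.PutBlockList(blockIds);
```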

Wednesday, July 16, 2014

Understanding Storage Services in Azure

Introducing the Azure Storage Services

Azure storage provides the following four services: Blob storage, Table storage, Queue storage, and File storage.
  • Blob Storage stores unstructured object data. A blob can be any type of text or binary data, such as a document, media file, or application installer. Blob storage is also referred to as Object storage.
  • Table Storage stores structured datasets. Table storage is a NoSQL key-attribute data store, which allows for rapid development and fast access to large quantities of data.
  • Queue Storage provides reliable messaging for workflow processing and for communication between components of cloud services.
  • File Storage offers shared storage for legacy applications using the standard SMB protocol. Azure virtual machines and cloud services can share file data across application components via mounted shares, and on-premises applications can access file data in a share via the File service REST API.
An Azure storage account is a secure account that gives you access to services in Azure Storage. Your storage account provides the unique namespace for your storage resources. The image below shows the relationships between the Azure storage resources in a storage account:
Azure Storage Resources
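Because the account name forms the host name of each service endpoint, the client library can surface those endpoints directly. A small sketch — the account name ‘contoso’ and the dummy key are hypothetical:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;

// Hypothetical account: 'contoso' with a dummy (all-zero) key.
StorageCredentials credentials = new StorageCredentials(
    "contoso", Convert.ToBase64String(new byte[64]));
CloudStorageAccount account = new CloudStorageAccount(credentials, useHttps: true);

// The account name is baked into each service's endpoint URI,
// e.g. https://contoso.blob.core.windows.net
Console.WriteLine(account.BlobEndpoint);
Console.WriteLine(account.TableEndpoint);
Console.WriteLine(account.QueueEndpoint);
```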
There are two types of storage accounts:

General-purpose Storage Accounts

A general-purpose storage account gives you access to Azure Storage services such as Tables, Queues, Files, Blobs and Azure virtual machine disks under a single account. This type of storage account has two performance tiers:
  • Standard storage performance tier, which is backed by magnetic drives and supports all of the Azure Storage services.
  • Premium storage performance tier, which is backed by solid-state drives and currently supports only Azure virtual machine disks (page blobs).

Blob Storage Accounts

A Blob storage account is a specialized storage account for storing your unstructured data as blobs (objects) in Azure Storage. Blob storage accounts are similar to your existing general-purpose storage accounts and share all the great durability, availability, scalability, and performance features that you use today including 100% API consistency for block blobs and append blobs. For applications requiring only block or append blob storage, we recommend using Blob storage accounts.
Note:
Blob storage accounts support only block and append blobs, and not page blobs.
Blob storage accounts expose the Access Tier attribute which can be specified during account creation and modified later as needed. There are two types of access tiers that can be specified based on your data access pattern:
  • Hot access tier which indicates that the objects in the storage account will be more frequently accessed. This allows you to store data at a lower access cost.
  • Cool access tier which indicates that the objects in the storage account will be less frequently accessed. This allows you to store data at a lower data storage cost.
If there is a change in the usage pattern of your data, you can also switch between these access tiers at any time. Changing the access tier may result in additional charges. Please see Pricing and billing for Blob storage accounts for more details.
For more details on Blob storage accounts, see Azure Blob Storage: Cool and Hot tiers.
Before you can create a storage account, you must have an Azure subscription, which is a plan that gives you access to a variety of Azure services. You can get started with Azure with a free account. Once you decide to purchase a subscription plan, you can choose from a variety of purchase options. If you’re an MSDN subscriber, you get free monthly credits that you can use with Azure services, including Azure Storage. See Azure Storage Pricing for information on volume pricing.
To learn how to create a storage account, see Create a storage account for more details. You can create up to 100 uniquely named storage accounts with a single subscription. See Azure Storage Scalability and Performance Targets for details about storage account limits.

Storage Service Versions

The Azure Storage services are regularly updated with support for new features. The Azure Storage services REST API reference describes each supported version and its features. We recommend that you use the latest version whenever possible. For information on the latest version of the Azure Storage services, as well as information on previous versions, see Versioning for the Azure Storage Services.

Blob Storage

For users with large amounts of unstructured object data to store in the cloud, Blob storage offers a cost-effective and scalable solution. You can use Blob storage to store content such as:
  • Documents
  • Social data such as photos, videos, music, and blogs
  • Backups of files, computers, databases, and devices
  • Images and text for web applications
  • Configuration data for cloud applications
  • Big data, such as logs and other large datasets
Every blob is organized into a container. Containers also provide a useful way to assign security policies to groups of objects. A storage account can contain any number of containers, and a container can contain any number of blobs, up to the 500 TB capacity limit of the storage account.
Blob storage offers three types of blobs: block blobs, append blobs, and page blobs (disks).
  • Block blobs are optimized for streaming and storing cloud objects, and are a good choice for storing documents, media files, backups etc.
  • Append blobs are similar to block blobs, but are optimized for append operations. An append blob can be updated only by adding a new block to the end. Append blobs are a good choice for scenarios such as logging, where new data needs to be written only to the end of the blob.
  • Page blobs are optimized for representing IaaS disks and supporting random writes, and may be up to 1 TB in size. An Azure virtual machine network attached IaaS disk is a VHD stored as a page blob.
For very large datasets where network constraints make uploading or downloading data to Blob storage over the wire unrealistic, you can ship a hard drive to Microsoft to import or export data directly from the data center. See Use the Microsoft Azure Import/Export Service to Transfer Data to Blob Storage.

Table storage

Modern applications often demand data stores with greater scalability and flexibility than previous generations of software required. Table storage offers highly available, massively scalable storage, so that your application can automatically scale to meet user demand. Table storage is Microsoft’s NoSQL key/attribute store – it has a schemaless design, making it different from traditional relational databases. With a schemaless data store, it's easy to adapt your data as the needs of your application evolve. Table storage is easy to use, so developers can create applications quickly. Access to data is fast and cost-effective for all kinds of applications. Table storage is typically significantly lower in cost than traditional SQL for similar volumes of data.
Table storage is a key-attribute store, meaning that every value in a table is stored with a typed property name. The property name can be used for filtering and specifying selection criteria. A collection of properties and their values comprise an entity. Since Table storage is schemaless, two entities in the same table can contain different collections of properties, and those properties can be of different types.
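A sketch of what an entity looks like with the .NET storage client library — the CustomerEntity type, the table name, and the Email property are all hypothetical:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// A hypothetical entity: PartitionKey + RowKey uniquely identify it in a table.
public class CustomerEntity : TableEntity
{
    public CustomerEntity(string lastName, string firstName)
    {
        PartitionKey = lastName;
        RowKey = firstName;
    }

    public CustomerEntity() { } // parameterless ctor required for deserialization

    // A typed property; other entities in the same table may omit it
    // or carry entirely different properties (schemaless design).
    public string Email { get; set; }
}

// ...inside a method; 'storageAccount' is an already-configured CloudStorageAccount...
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("customers");
table.CreateIfNotExists();
table.Execute(TableOperation.Insert(
    new CustomerEntity("Smith", "Ada") { Email = "ada@example.com" }));
```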
You can use Table storage to store flexible datasets, such as user data for web applications, address books, device information, and any other type of metadata that your service requires. You can store any number of entities in a table, and a storage account may contain any number of tables, up to the capacity limit of the storage account.
As with Blobs and Queues, developers can manage and access Table storage using standard REST protocols; however, Table storage also supports a subset of the OData protocol, simplifying advanced querying capabilities and enabling both JSON and AtomPub (XML-based) formats.
For today's Internet-based applications, NoSQL databases like Table storage offer a popular alternative to traditional relational databases.

Queue Storage

In designing applications for scale, application components are often decoupled, so that they can scale independently. Queue storage provides a reliable messaging solution for asynchronous communication between application components, whether they are running in the cloud, on the desktop, on an on-premises server, or on a mobile device. Queue storage also supports managing asynchronous tasks and building process workflows.
A storage account can contain any number of queues. A queue can contain any number of messages, up to the capacity limit of the storage account. Individual messages may be up to 64 KB in size.
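A minimal produce/consume sketch with the .NET storage client library — the queue name and message text are hypothetical, and ‘storageAccount’ is assumed to be a configured CloudStorageAccount:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// ...inside a method; 'storageAccount' is an already-configured CloudStorageAccount...
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("tasks");
queue.CreateIfNotExists();

// Producer: enqueue a message (each message can be up to 64 KB).
queue.AddMessage(new CloudQueueMessage("process-order-42"));

// Consumer: dequeue, process, then delete so the message isn't redelivered
// after its visibility timeout expires.
CloudQueueMessage message = queue.GetMessage();
if (message != null)
{
    // ...process message.AsString...
    queue.DeleteMessage(message);
}
```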

File Storage

Azure File storage offers cloud-based SMB file shares, so that you can migrate legacy applications that rely on file shares to Azure quickly and without costly rewrites. With Azure File storage, applications running in Azure virtual machines or cloud services can mount a file share in the cloud, just as a desktop application mounts a typical SMB share. Any number of application components can then mount and access the File storage share simultaneously.
Since a File storage share is a standard SMB file share, applications running in Azure can access data in the share via file system I/O APIs. Developers can therefore leverage their existing code and skills to migrate existing applications. IT Pros can use PowerShell cmdlets to create, mount, and manage File storage shares as part of the administration of Azure applications.
Like the other Azure storage services, File storage exposes a REST API for accessing data in a share. On-premises applications can call the File storage REST API to access data in a file share. This way, an enterprise can choose to migrate some legacy applications to Azure and continue running others from within their own organization. Note that mounting a file share is only possible for applications running in Azure; an on-premises application may only access the file share via the REST API.
Distributed applications can also use File storage to store and share useful application data and development and testing tools. For example, an application may store configuration files and diagnostic data such as logs, metrics, and crash dumps in a File storage share so that they are available to multiple virtual machines or roles. Developers and administrators can store utilities that they need to build or manage an application in a File storage share that is available to all components, rather than installing them on every virtual machine or role instance.


Thursday, June 26, 2014

High Availability with Storage Replication

Replication for Durability and High Availability

The data in your Microsoft Azure storage account is always replicated to ensure durability and high availability, meeting the SLA for Storage even in the face of transient hardware failures.
See Azure Regions for more information about what services are available in each region.
When you create a storage account, you must select one of the following replication options:
  • Locally redundant storage (LRS). Locally redundant storage maintains three copies of your data. LRS is replicated three times within a single facility in a single region. LRS protects your data from normal hardware failures, but not from the failure of a single facility.
    LRS is offered at a discount. For maximum durability, we recommend that you use geo-redundant storage, described below.
  • Zone-redundant storage (ZRS). Zone-redundant storage maintains three copies of your data. ZRS is replicated three times across two to three facilities, either within a single region or across two regions, providing higher durability than LRS. ZRS ensures that your data is durable within a single region.
    ZRS provides a higher level of durability than LRS; however, for maximum durability, we recommend that you use geo-redundant storage, described below.
    Note:
    ZRS is currently available only for block blobs, and is only supported for versions 2014-02-14 and later.
    Once you have created your storage account and selected ZRS, you cannot convert it to use to any other type of replication, or vice versa.
  • Geo-redundant storage (GRS). GRS maintains six copies of your data. With GRS, your data is replicated three times within the primary region, and is also replicated three times in a secondary region hundreds of miles away from the primary region, providing the highest level of durability. In the event of a failure at the primary region, Azure Storage will failover to the secondary region. GRS ensures that your data is durable in two separate regions.
    For information about primary and secondary pairings by region, see Azure Regions.
  • Read access geo-redundant storage (RA-GRS). Read access geo-redundant storage is enabled for your storage account by default when you create it. Read access geo-redundant storage replicates your data to a secondary geographic location, and also provides read access to your data in the secondary location. Read-access geo-redundant storage allows you to access your data from either the primary or the secondary location, in the event that one location becomes unavailable.


Wednesday, June 11, 2014

Understanding Storage Accounts in Azure

General-purpose Storage Accounts

A general-purpose storage account gives you access to Azure Storage services such as Tables, Queues, Files, Blobs and Azure virtual machine disks under a single account. This type of storage account has two performance tiers:
  • Standard storage performance tier, which is backed by magnetic drives and supports all of the Azure Storage services.
  • Premium storage performance tier, which is backed by solid-state drives and currently supports only Azure virtual machine disks (page blobs).

Blob Storage Accounts

A Blob storage account is a specialized storage account for storing your unstructured data as blobs (objects) in Azure Storage. Blob storage accounts are similar to your existing general-purpose storage accounts and share all the great durability, availability, scalability, and performance features that you use today including 100% API consistency for block blobs and append blobs. For applications requiring only block or append blob storage, we recommend using Blob storage accounts.
Note:
Blob storage accounts support only block and append blobs, and not page blobs.
Blob storage accounts expose the Access Tier attribute which can be specified during account creation and modified later as needed. There are two types of access tiers that can be specified based on your data access pattern:
  • Hot access tier which indicates that the objects in the storage account will be more frequently accessed. This allows you to store data at a lower access cost.
  • Cool access tier which indicates that the objects in the storage account will be less frequently accessed. This allows you to store data at a lower data storage cost.
If there is a change in the usage pattern of your data, you can also switch between these access tiers at any time. Changing the access tier may result in additional charges. Please see Pricing and billing for Blob storage accounts for more details.
For more details on Blob storage accounts, see Azure Blob Storage: Cool and Hot tiers.
Before you can create a storage account, you must have an Azure subscription, which is a plan that gives you access to a variety of Azure services. You can get started with Azure with a free account. Once you decide to purchase a subscription plan, you can choose from a variety of purchase options. If you’re an MSDN subscriber, you get free monthly credits that you can use with Azure services, including Azure Storage. See Azure Storage Pricing for information on volume pricing.
To learn how to create a storage account, see Create a storage account for more details. You can create up to 100 uniquely named storage accounts with a single subscription. See Azure Storage Scalability and Performance Targets for details about storage account limits.

Storage account billing

You are billed for Azure Storage usage based on your storage account. Storage costs are based on the following factors: region/location, account type, storage capacity, replication scheme, storage transactions, and data egress.
  • Region refers to the geographical region in which your account is based.
  • Account type refers to whether you are using a general-purpose storage account or a Blob storage account. With a Blob storage account, the access tier also determines the billing model for the account.
  • Storage capacity refers to how much of your storage account allotment you are using to store data.
  • Replication determines how many copies of your data are maintained at one time, and in what locations.
  • Transactions refer to all read and write operations to Azure Storage.
  • Data egress refers to data transferred out of an Azure region. When the data in your storage account is accessed by an application that is not running in the same region, you are charged for data egress. (For Azure services, you can take steps to group your data and services in the same data centers to reduce or eliminate data egress charges.)

Thursday, May 29, 2014

Understanding Caching

Understanding Caching

The caching approaches compare as follows (characteristics, advantages, disadvantages, and comments for each):
  • Azure Caching Services
    Characteristics: out of process, in memory, distributed.
    Advantages: accessible by all types of Azure technologies; scalable.
    Disadvantages: extra cost for the service.
    Comments: for building high-performance, scalable cloud systems.
  • Azure In-Role Caching
    Characteristics: out of process, in memory, distributed.
    Advantages: can run in the memory space of existing Roles.
    Disadvantages: only available to Web/Worker Roles within Cloud Services.
    Comments: suitable for adaptation to existing applications migrated to Azure.
  • ASP.NET/IIS Cache
    Characteristics: in process, in memory, single-locale.
    Advantages: runs in the memory space of the existing application.
    Disadvantages: requires cache synchronization to work with scalable Azure applications.
    Comments: will require adaptations to use distributed caching when used with Azure load balancing.
  • Azure SQL Database
    Characteristics: out of process, out of memory, distributed.
    Advantages: data is persistently stored in a database table.
    Disadvantages: requires custom coding to serialize data, and for management tasks such as expiration.
    Comments: slower than in-memory options; may be throttled in periods of high request volume.
  • Azure Table Storage
    Characteristics: out of process, out of memory, distributed.
    Advantages: data is persistently stored in Azure Table storage; high performance for very large datasets; cost-effective.
    Disadvantages: requires custom code.
    Comments: slower than in-memory, but faster than SQL; may be throttled in periods of high request volume.
  • Azure Blob Storage
    Characteristics: out of process, out of memory, distributed.
    Advantages: data is persistently stored in Azure Blob storage; can leverage geo-redundant persistent storage.
    Disadvantages: requires custom code.
    Comments: slower than in-memory, but easy to make highly distributed.

Tuesday, May 13, 2014

ASP.NET Azure Session State in Azure

ASP.NET Azure Session State in Azure

In addition to data caching, we can also use Windows Azure Caching as the storage mechanism for ASP.NET Session State and ASP.NET Page Output Caching. The Session State Provider for Windows Azure Caching and the Output Cache Provider for Windows Azure Caching give ASP.NET applications out-of-process storage that works well behind the Windows Azure load balancer. To use Windows Azure Caching for these purposes, first configure the Cache Cluster (using one of the two approaches explained above), and then configure your ASP.NET application to use Windows Azure Caching by adding the necessary settings to the web.config file.
The following web.config configuration enables the Session State Provider for Windows Azure Caching for web applications:

  
  <sessionState mode="Custom" customProvider="DistributedCacheSessionStateProvider">
    <providers>
      <add name="DistributedCacheSessionStateProvider"
           type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
           cacheName="default"
           useBlobMode="true"
           dataCacheClientName="default" />
    </providers>
  </sessionState>
The cacheName specifies the name of the Cache used for persisting Session State data.
The following web.config configuration enables the Output Cache Provider for Windows Azure Caching for web applications:

  
    
  <caching>
    <outputCache defaultProvider="DistributedCacheOutputCacheProvider">
      <providers>
        <add name="DistributedCacheOutputCacheProvider"
             type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache"
             cacheName="default"
             dataCacheClientName="default" />
      </providers>
    </outputCache>
  </caching>