Windows Azure Storage BUILD Talk – What’s Coming, Best Practices and Internals

At Microsoft’s BUILD conference we spoke about Windows Azure Storage internals, best practices, and a set of exciting new features we have been working on. Before we dive into the new features in our pipeline, let us reminisce a little about the past year. It has been almost a year since we blogged about the number of objects we store and the average requests per second we serve.

This past year has once again proven to be a great one for Windows Azure Storage, with many external customers and internal products like Xbox, Skype, SkyDrive, Bing, SQL Server, and Windows Phone driving significant growth and making it their choice for storing and serving critical parts of their services. As a result, Windows Azure Storage now hosts more than 8.5 trillion unique objects and serves over 900K requests/sec on average (that’s over 2.3 trillion requests per month). This is a 2x increase in the number of objects stored and a 3x increase in average requests/sec since we last blogged about it a year ago!

In the talk, we also spoke about a variety of new features in our pipeline. Here is a quick recap of all the features we covered.

  • Queue Geo-Replication: we are pleased to announce that all queues are now geo-replicated for Geo Redundant Storage accounts. This means that all data in Geo Redundant Storage accounts (Blobs, Tables, and Queues) is now geo-replicated.

By the end of CY 2013, we are targeting the release of the following features:

  • Secondary read-only access: we will provide a secondary endpoint that can be used to read an eventually consistent copy of your geo-replicated data. In addition, we will provide an API to retrieve the current replication lag for your storage account. Applications will be able to use the secondary endpoint both as an additional source for computing over the account’s data and as a fallback option when the primary is unavailable (see the first sketch after this list).
  • Windows Azure Import/Export: we will preview a new service that allows customers to move terabytes of data into and out of Windows Azure Blobs by shipping disks.
  • Real-Time Metrics: we will provide near-real-time, per-minute aggregates of storage metrics for Blobs, Tables and Queues. These metrics give more granular insight into your service, which hourly metrics tend to smooth out.
  • Cross-Origin Resource Sharing (CORS): we will enable CORS for the Azure Blob, Table and Queue services. This lets our customers use JavaScript in their web pages to access storage directly, instead of routing requests through a proxy service to work around browsers’ restrictions on cross-domain access (see the second sketch after this list).
  • JSON for Azure Tables: we will enable the OData v3 JSON protocol, which is much lighter and more performant than AtomPub. In particular, the JSON protocol has a NoMetadata option that is very efficient in terms of bandwidth (see the third sketch after this list).
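
To make the secondary fallback concrete, here is a minimal TypeScript sketch of reading a blob from the primary endpoint and falling back to the eventually consistent secondary. The “<account>-secondary” host name, the account name, and the blob path are illustrative assumptions, not a confirmed API.

```typescript
// Minimal sketch: read from the primary, fall back to the read-only secondary.
// The "-secondary" host pattern, account, and blob path are placeholders.
const account = "myaccount";
const blobPath = "mycontainer/data.txt";

async function readBlobWithFallback(): Promise<string> {
  const primary = `https://${account}.blob.core.windows.net/${blobPath}`;
  const secondary = `https://${account}-secondary.blob.core.windows.net/${blobPath}`;
  try {
    const res = await fetch(primary);
    if (!res.ok) throw new Error(`primary returned ${res.status}`);
    return await res.text();
  } catch {
    // Primary unreachable or erroring: read the eventually consistent copy.
    const res = await fetch(secondary);
    if (!res.ok) throw new Error(`secondary returned ${res.status}`);
    return await res.text();
  }
}
```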
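
For CORS, the payoff is that page JavaScript can call the storage REST API directly instead of going through a proxy. Below is a hedged sketch of a browser uploading a block blob through a Shared Access Signature URL; the account, container, and SAS token are placeholders.

```typescript
// Sketch of a direct browser-to-storage upload (requires CORS on the account).
// The SAS URL is a placeholder; a real one would be generated server-side.
const sasUrl =
  "https://myaccount.blob.core.windows.net/uploads/photo.jpg?sv=...";

async function uploadFromBrowser(data: Blob): Promise<void> {
  const res = await fetch(sasUrl, {
    method: "PUT",
    // Put Blob requires the blob type header for block blobs.
    headers: { "x-ms-blob-type": "BlockBlob" },
    body: data,
  });
  if (!res.ok) throw new Error(`upload failed with status ${res.status}`);
}
```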
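
Finally, a sketch of what a NoMetadata table query could look like over the REST API. The table URL and SAS token are placeholders, and the exact header values may differ when the feature ships.

```typescript
// Sketch: query table entities as OData v3 JSON with the NoMetadata option,
// the lightest of the JSON formats. URL and SAS token are placeholders.
const tableUrl =
  "https://myaccount.table.core.windows.net/Customers()?sv=...";

async function queryEntities(): Promise<unknown[]> {
  const res = await fetch(tableUrl, {
    headers: {
      // Ask for JSON with no type metadata on the wire.
      Accept: "application/json;odata=nometadata",
    },
  });
  if (!res.ok) throw new Error(`query failed with status ${res.status}`);
  const body = await res.json();
  return body.value; // entities come back in the "value" array
}
```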

If you missed the BUILD talk, you can now access it here; it covers the features mentioned above in more detail, along with best practices.

Brad Calder and Jai Haridas


Comments (3)

  1. Ted says:

    Great work! Cannot wait to get our hands on CORS and JSON for Table storage.

    Speaking of CORS support: five weeks ago Scott Guthrie mentioned it on Twitter: twitter.com/…/341780823390957568. Any updates on this? We depend on CORS support for our HTML5 app. Should we implement a proxy-style workaround on an Azure web role, or will this feature be released really soon, as Scott tweeted?

  2. Manny Siddiqui says:

Great list of enhancements! Looking forward to everything, especially CORS and the secondary read-only endpoint.

  3. jaidevh1@hotmail.com says:

@Ted, CORS will be released sometime this fall. Scott meant to tweet that it would be announced at BUILD but released this fall. Until then, as you mentioned, a proxy service would be helpful.

    Thanks,

    Jai