Compute TODAY in the PDC CTP means MANAGED CODE ONLY, running in MEDIUM Trust. In the future, as we have already indicated at PDC, there will be support for unmanaged code. There are 2 roles in play:
A web role – which is just a web site: ASP.NET, WCF, images, CSS etc.
A worker role – which is similar to a Windows service; it runs in the background and can be used to decouple processing.
There is a diagram later that shows the architecture, so don't worry about how it fits together just yet. Key to point out: the inbound protocols are HTTP & HTTPS – outbound are any TCP socket (but not UDP). All servers are stateless, and all access is through load balancers.
This should give a short introduction to storage. Key points are that it's durable (meaning once you write something, we write it to disk), scalable (you have multiple servers with your data) and available (the same as compute – we make sure the storage service is always running; there are 3 instances of your data at all times). Quickly work through the different types of storage:
Blobs – similar to the file system; use it to store content that changes: uploads, unstructured data, images, movies etc.
Tables – semi-structured; provides a partitioned entity store (more on partitions etc. in the Building Azure Services talk) – allows you to have tables containing billions of rows, partitioned across multiple servers.
Queues – simple queues for decoupling Compute Web and Worker Roles.
All access is through a REST interface. You can actually access the storage from outside of the data center (you don't need compute) and you can access storage via anything that can make an HTTP request. It also means table storage can be accessed via ADO.NET Data Services.
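Because the interface is plain REST, the endpoints can be addressed by any HTTP client. A minimal sketch of how the three storage URLs are formed – the account, container, blob, table and queue names below are hypothetical placeholders, and the table/queue host patterns are assumed to mirror the blob pattern shown on the blob slide:

```python
# Sketch: how the three Windows Azure storage endpoints are addressed.
# ACCOUNT and all resource names are hypothetical placeholders.
ACCOUNT = "myaccount"

def blob_url(container, blob_path):
    # Blob URL shape from the blob slide: account.blob.core.windows.net/container/path
    return f"http://{ACCOUNT}.blob.core.windows.net/{container}/{blob_path}"

def table_url(table_name):
    # Assumed to follow the same host pattern as blobs.
    return f"http://{ACCOUNT}.table.core.windows.net/{table_name}"

def queue_url(queue_name, messages=False):
    # Assumed host pattern; the /messages suffix addresses queued messages.
    url = f"http://{ACCOUNT}.queue.core.windows.net/{queue_name}"
    return (url + "/messages") if messages else url

print(blob_url("photos", "2008/keynote.jpg"))
# -> http://myaccount.blob.core.windows.net/photos/2008/keynote.jpg
```

The point to land is that nothing .NET-specific is required – any stack that can issue an HTTP request against these URLs can use storage.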
Remind them the cloud is all the hardware across the board. Point out the automated service management.
The Developer SDK is a cloud in a box, allowing you to develop and debug locally without requiring a connection to the cloud. You can do this without Visual Studio, as there are command line tools for executing the "cloud in a box" and publishing to the cloud. There is also a separate download for the Visual Studio 2008 tools, which provides the VS debugging support and templates. Requirements are any version of Visual Studio (including Web Developer Express) and Vista SP1. (Note: the Azure SDK will not yet work with the Win 7 beta – watch out for a fix in the future or future builds of Windows 7.)
There is a small API for the cloud that allows you to do some simple things, such as logging, reading from a service configuration file, and local file system access. The API is small and easy to learn.
To allow us to deploy and operate your service in the cloud, we need to know the structure of your service. You describe your service and its operating parameters through the use of a service model. This service model tells us which roles you have and any service configuration, and can also describe the number of instances you need for each role within your service. Whilst this model is simple today, it will be extended to allow you to describe a much richer operational model – e.g. allowing scale-out and scale-down based upon consumption and performance. This file is also where you would store configuration that may change once deployed. Since all files within a role are read-only, you cannot change either an app.config or web.config file once deployed; the only configuration you can change is in the service model.
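To make the "read-only except for the service model" point concrete, a toy sketch of pulling an instance count and a changeable setting out of a service configuration fragment. The XML shape is an approximation of the CTP configuration file, and the setting name and values are hypothetical:

```python
# Sketch: reading operational values out of a service configuration
# fragment. The XML shape approximates the CTP service configuration;
# the role name, setting name and values are hypothetical.
import xml.etree.ElementTree as ET

CSCFG = """
<ServiceConfiguration serviceName="HelloWorld">
  <Role name="WebRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="WelcomeText" value="Hello cloud" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
"""

root = ET.fromstring(CSCFG)
role = root.find("Role")
count = int(role.find("Instances").get("count"))          # how many instances to run
setting = role.find("ConfigurationSettings/Setting").get("value")
print(count, setting)   # 2 Hello cloud
```

Editing this file (via the portal) is how capacity and settings change after deployment – the deployed web.config never does.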
If you have already presented the overview session, you may wish to skip this. For the demo:
Create a new cloud web project named Hello World.
Change the title on the default.aspx page to say "Hello cloud".
Point out the different parts of the project solution, including the 2 projects and the service configuration files.
Hit F5 to execute the project, then right-click the fabric icon in the icon tray to show the developer UI.
Show the nodes on the UI for the service. Close the web browser (and/or stop debugging).
In this next section, we'll dig a little deeper on storage. Recall there are 3 types of storage. Recall the design point is for the cloud: there are 3 replicas of data, and we implement guaranteed consistency. In the future there will be some transaction support, and this is why we use guaranteed consistency. Access is via a storage account – you can have multiple storage accounts per Live ID. Although the API is REST, there is a sample .NET storage client in the SDK that you can compile and use within your project. This makes working with storage much easier.
Blobs: In computing, a BLOB is a binary large object. A BLOB is a large file – typically an image or a sound file – which, because of its size, must be handled in a special way (for example when uploading, downloading, or storing it in a database). According to Eric Raymond, the main idea in handling BLOBs is that the file handler (such as a database manager) does not care what the file is, only how to process it. Some experts note, however, that this way of handling large data objects is a double-edged sword that can cause problems. Storing large multimedia objects in a database is a typical example of an application handling BLOBs.
Blobs are stored in containers. There are 0 or more blobs per container and 0 or more containers per account (you can have 0 containers, but then you would not have any blobs either). Typically the URL in the cloud is http://accountname.blob.core.windows.net/container/blobpath. Blob paths can contain the / character, so you can give the illusion of multiple folders, but there is only 1 level of containers. Blob capacity at CTP is 50 GB. There is an 8 KB dictionary that can be associated with blobs for metadata. Blobs can be private or public:
Private requires a key to read and write.
Public requires a key to write, but NO KEY to read.
Use blobs where you would use the file system in the past.
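The "illusion of folders" point often needs a demo: blob names are flat, but splitting on / recovers a directory-like view. A small sketch with hypothetical blob names:

```python
# Sketch: blob storage has only one level of containers, but "/" in
# blob names simulates nested folders. These names are hypothetical.
blobs = [
    "images/2008/logo.png",
    "images/2008/banner.png",
    "images/2009/logo.png",
    "uploads/readme.txt",
]

def virtual_folders(blob_names):
    """Collect the distinct 'directory' prefixes implied by flat blob names."""
    folders = set()
    for name in blob_names:
        parts = name.split("/")[:-1]          # drop the file segment
        for i in range(1, len(parts) + 1):
            folders.add("/".join(parts[:i]))
    return sorted(folders)

print(virtual_folders(blobs))
# -> ['images', 'images/2008', 'images/2009', 'uploads']
```

Nothing server-side knows about these "folders" – they exist only because clients agree to treat / in a blob path as a separator.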
Remind folks of the Azure Services Platform. It's worth pointing out the naming again here, e.g. explain that the Azure Services Platform is the name of the platform that comprises Live Services, .NET Services, SQL Services & Windows Azure. Windows Azure is a key component of the Azure Services Platform. Azure != Windows Azure.
Queues are simple:
Messages are placed in queues. Max size is 8 KB (and it's a string).
A message can be read from the queue, at which point it is hidden.
Once whatever read the message from the queue has finished processing it, it should then remove the message from the queue. If not, the message is returned to the queue after a specific user-defined time limit. This can be used to handle code failures etc.
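The read/hide/delete life cycle above can be sketched with a toy in-memory queue (the real service exposes the same semantics over REST; class and message names here are hypothetical):

```python
# Toy sketch of the queue life cycle: get() hides a message for a
# visibility timeout; delete() removes it after processing; if the
# worker dies before delete(), the message reappears.
import time

class ToyQueue:
    def __init__(self):
        self._messages = []                     # list of [body, visible_at]

    def put(self, body):
        self._messages.append([body, 0.0])      # visible immediately

    def get(self, visibility_timeout=30.0):
        """Return the next visible message and hide it for the timeout."""
        now = time.monotonic()
        for msg in self._messages:
            if msg[1] <= now:
                msg[1] = now + visibility_timeout   # hide it
                return msg
        return None

    def delete(self, msg):
        """Call after processing; otherwise the message comes back."""
        self._messages.remove(msg)

q = ToyQueue()
q.put("resize image 42")
m = q.get(visibility_timeout=0.05)
print(m[0])                     # the message body, now hidden
assert q.get() is None          # invisible while being processed
time.sleep(0.1)                 # worker "died" without deleting it...
assert q.get() is not None      # ...so the message reappeared
```

This is the failure-handling story in miniature: a crashed worker never deletes its message, so another worker picks it up after the timeout.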
Tables are simply collections of entities.
Entities must have a PartitionKey and RowKey – and can also contain up to 256 other properties.
Entities within a table need not be the same shape! E.g.:
Entity 1: PartitionKey, RowKey, firstname
Entity 2: PartitionKey, RowKey, firstname, lastname
Entity 3: PartitionKey, RowKey, orderId, orderData, zipCode
Partitions are used to spread data across multiple servers. This happens automatically based on the partition key you provide. Table "heat" is also monitored, and data may be moved to different storage endpoints based upon usage.
Queries should be targeted at a partition, since there are no indexes to speed up performance. Indexes may be added at a later date.
It's important to convey that whilst you could copy tables in from a local data source (e.g. SQL), it would not perform well in the cloud; data access needs to be re-thought at this level. Those wanting a more traditional SQL-like experience should investigate SDS.
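A quick sketch can make the "same table, different shapes" and "target the partition" points concrete. The entities, keys and the server-mapping function below are all hypothetical toys, not the real partitioning algorithm:

```python
# Sketch: entities are just property bags keyed by PartitionKey + RowKey.
# All keys, properties and the mapping function are hypothetical.
entities = [
    {"PartitionKey": "smith", "RowKey": "1", "firstname": "John"},
    {"PartitionKey": "smith", "RowKey": "2", "firstname": "Jane", "lastname": "Smith"},
    {"PartitionKey": "jones", "RowKey": "1", "orderId": 42, "zipCode": "98052"},
]

def assign_server(entity, server_count=3):
    """Toy stand-in for how a partition key maps data onto servers."""
    return hash(entity["PartitionKey"]) % server_count

# Entities sharing a PartitionKey always land on the same server...
assert assign_server(entities[0]) == assign_server(entities[1])

def query_partition(table, partition_key):
    """Fast path: scan only one partition rather than the whole table."""
    return [e for e in table if e["PartitionKey"] == partition_key]

print(len(query_partition(entities, "smith")))
```

...which is exactly why queries should name their partition: a query scoped to one PartitionKey touches one server, while an unscoped query fans out across all of them with no index to help.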
Once you have built and tested your service, you will want to deploy it. The key to deployment and operations is the service model. To deploy, first you build your service; this takes the project output + content (images, CSS etc.) and makes a single file. It also creates an instance of your service metadata. Next you would visit the web portal and upload the 2 solution files – from there the "cloud" takes care of deploying it onto the correct number of machines and getting it to run. To increase and decrease capacity today, you would edit the configuration from the web portal. For more than 1 instance, you should be deployed across fault domains, meaning separate hardware racks. In the portal you have a production and a staging area, with different URLs. You can upload the next version of your project into staging, then flip the switch – which essentially changes the load balancers to point to the new version.
So how do we do the automated deployment & manage your service?
1st – remember the service metadata tells us exactly what we need to deploy, how many instances etc. There is no OS footprint, so your service can be copied around the data center without any configuration requirements. The OS itself is on a VHD, so it can simply be copied to the hardware. The hardware itself is also booted from a VHD, which is likewise copied around. Therefore, to put out a new version of your software, or of the OS that hosts it, all we need to do is copy it to a new machine and spin it up. It also means we can patch and test the OS offline. No live patching!!!
In this demo you want to deploy the Hello World service you created earlier to the portal.
Now your service is deployed, how do YOU monitor it?
1st – you cannot attach a debugger to the cloud – that would be impractical (which one of the 700 instances would you like to attach to?) and not a good idea from a security viewpoint. To get diagnostic information, you must write it to the Windows Azure logs. These logs can be retrieved from the portal at any time and copied to your local machine for analysis. You can expect detailed consumption reporting, and if anything goes wrong you will also receive a Windows Live Alert informing you that your service failed/crashed.
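Since the diagnostic loop is "write logs in the cloud, copy them down, analyze locally", it can help to show what the local half looks like. A minimal sketch – the log line format here is entirely hypothetical:

```python
# Sketch: sifting locally downloaded service logs for failures.
# The timestamp/level/message line format is a hypothetical example.
LOG = """\
2009-01-12T10:00:01 INFO  worker started
2009-01-12T10:00:05 ERROR queue read timed out
2009-01-12T10:00:09 INFO  message processed
"""

errors = [line for line in LOG.splitlines() if " ERROR " in line]
print(len(errors))   # count of error lines found
for line in errors:
    print(line)
```

The takeaway for the audience: if your code never writes it to the logs, you will never see it – instrument everything yourself.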
Some key things to remember:
Design points are scalability and availability – think in terms of lots of small servers rather than a single BIG server.
Table storage is semi-structured – IT'S NOT A RELATIONAL DATABASE – IT NEVER WILL BE. THAT IS SDS.
Everything is stateless (you can maintain state in table or blob storage if YOU want to).
Decouple everything using queues, and write code to be repeatable without breaking anything – in other words, design for failure!
Instrument and log your application yourself.
Work on the idea that once you are on, you stay on. How will you patch/update your service once it is switched on?
Highlight that the purpose of this talk is the Windows Azure piece.
Key points here are to start to message that Windows Azure is designed for the cloud. Scalability and availability are the key design points – remind people throughout the talk that it's designed to be scalable and available. The last point is important: Windows Azure only runs in Microsoft data centers. It's not a shrink-wrapped version of Windows that customers can buy and run in their own data centers. There are no plans to do this.
You can draw the comparison between the desktop/server OS and the cloud OS. The desktop OS abstracts away things like printers, display drivers, memory etc., so you only have to worry about building your desktop application. The cloud OS does this for the cloud, but instead of printers, display drivers etc., it does it across multiple servers and networking components, provides a "cloud file system" for storage, a programming environment etc. The last 2 points:
1. Interoperability – the storage etc. uses REST-based protocols.
2. Designed for utility computing – rather than charging a per-seat license etc., you will likely be charged based on consumption. The details of this are not known, so don't speculate, other than that we will be competitive.
Windows Azure is not about letting you set up and run an entire OS with your application. Instead it is about running your service, using commodity servers that are managed by Microsoft. Microsoft takes care of deploying your service, patching the OS, keeping your service running, configuring hardware, infrastructure etc. All of this is automated. All you need to worry about is writing the service.
Here are some of the features we’ll walk through in the next few minutes.
.NET 3.5 sp1 on IIS7
Server 2008 – 64bit
• Web Sites (ASP.NET)
• Web Services (WCF)
• Worker Role
• Stateless Servers
Durable, scalable, available
– Can be used without compute
All of the hardware
Hardware Load Balancers
Automated service management
• Windows Azure SDK
Local compute environment
Local Mock Storage
Command line tools
Small Managed API
• Logging, working storage
• Microsoft Visual Studio 2008 add-in
Windows Azure API
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudService1" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
<LocalStorage name="scratch" sizeInMB="50"/>
<!-- Must use port 80 for http and port 443 for https when running in the cloud -->
<InputEndpoint name="HttpIn" protocol="http" port="80" />
LB (ASPX, ASMX, WCF) Service
Windows Azure Datacenter