TechNet Events Presents – for the IT Professional
In this session, we will discuss:
Azure architecture from the IT professional’s point of view
Why an IT operations team would want to pursue Azure as an extension to the data center
Configuring, deploying, and scaling Azure-based applications
The Azure roles (web, web service and worker)
Azure storage options
Azure security and identity options
How Azure-based applications can be integrated with on-premises applications
How operations teams can manage and monitor Azure-based applications
24. Heads in the Cloud, Feet on the Ground: the Build vs. Buy matrix (axes run from On Premise through Hoster to Cloud and Vendor). Buy side: "Packaged" Application, an application that I buy "off the shelf" and run myself; Hosted "Packaged", an application that I buy "off the shelf" and then run at a hoster; "Packaged" using cloud, an application that I buy "off the shelf" that is hosted on a cloud platform; "Software as a Service", a hosted application that I buy from a vendor. Build side: "Home Built" Application, an application that I develop and run myself; Hosted "Home Built", an application that I develop myself but run at a hoster; "Home Built" using cloud, an application that I develop myself that is hosted on a cloud platform; "Platform as a Service", a vendor-hosted development and runtime environment.
25. Big Pharmaceutical Example (same Build vs. Buy matrix; Buy side: ERP, CRM, Email; Build side: HR System, Molecule Research, Clinical Trial). ERP: "Too costly to run this myself, but I've made too many customizations"
26. Big Pharmaceutical Example. CRM/Email: "CRM and Email are commodity services – They have no customizations, and it's cheaper for someone else to run these"
27. Big Pharmaceutical Example. HR System: "I can't afford to maintain this old HR application written in VB – it's driving me mad!" "…but due to regulatory issues, I cannot store my HR data off-premise"
28. Big Pharmaceutical Example. Molecule Research: "I wish I had access to cheaper compute and storage when I need it"
29. Big Pharmaceutical Example. Clinical Trial: "THIS is where I want to spend my IT resources – I'm going to double down on this application!"
32. Build vs. Buy matrix recap: Identity and AuthN
37. Warning – this session contains information about Microsoft Technologies that are in the CTP (pre-Beta) stages. Specifics of the technology may change before final release.
40. We are here to help. Send us your questions, doubts, concerns, challenges, adoration, regrets, denials, and alibis. We will start a discussion and help you out. azFeedbk@microsoft.com
41. RTC makes it easy to ship updates and new features.
42. Windows Azure Platform Roadmap. Q4 2009: commercial launch, geo-location. CY 2010: inter-role communication, variable VM sizes, enhanced compliance. Future: additional geos, further enhanced compliance.
44. Windows Azure Platform. Compute: virtualized compute environment based on Windows Server. Storage: durable, scalable, and available storage. Management: automated, model-driven management of the service. Database: relational processing for structured/unstructured data. Service Bus: general-purpose application bus. Access Control: rules-driven, claims-based access control.
50. What does an Operating System do? App1 App2 App3 App4 Management / Security / etc. Task Scheduler Hardware Abstraction Layer DISK CPU GPU Memory
51. Azure does this for the cloud App1 App2 App3 App4 APIs / .NET ACS / etc. Azure Fabric Controller Azure Fabric Server 1 Server 2 Server 3 Server 3,500
67. Using the Cloud for Scale How would Jim do this today on premises? Browser Web Tier N L B Browser Database Web Tier Backend Tier Browser Browser Web Tier Browser
68. Using the Cloud for Scale How would Jim do this today on premises? Browser Backend Tier N L B Browser Database Web Tier Browser Backend Tier Browser Backend Tier Browser
69. Using the Cloud for Scale How would Jim do this today on premises? Browser Web Tier N L B Backend Tier N L B Browser Database Web Tier Browser Backend Tier Browser Web Tier Backend Tier Browser
70. Using the Cloud for Scale How would Jim do this today on premises? Browser p1 p2 p3 Web Tier N L B Backend Tier N L B Browser Database Web Tier Browser Backend Tier Browser Web Tier Backend Tier Browser
72. Using the Cloud for Scale How would Jim do this today on premises? Browser p1 p2 p3 Web Tier N L B Backend Tier N L B Browser Database Web Tier Browser Backend Tier Browser Web Tier Backend Tier Browser “That took a lot of work - and money!”
73. Using the Cloud for Scale How would Jim do this today on premises? p1 p2 p3 “Not so great now…” Web Tier N L B Backend Tier N L B Database Web Tier Browser Backend Tier Web Tier Backend Tier “That took a lot of work - and money!” “Hmmm... Most of this stuff is sitting idle...”
74. Using the Cloud for Scale. [Chart: usage over the year (Jan, Apr, Jul, Oct) against fixed datacenter capacity; load above the datacenter peak line is lost business, capacity above the load curve is idle time.]
76. #1 - Using the Cloud for Scale “Wow! What a great site!” Azure Storage Request Web Role Worker Role Browser Response
77. Using the Cloud for Scale Browser Browser Azure Storage Web Role Worker Role Browser “Server Busy” Browser Browser
79. Using the Cloud for Scale Browser Web Role N L B Browser Azure Storage Web Role Worker Role Browser Browser Web Role Browser You don't see this bit
80. Using the Cloud for Scale Browser Web Role N L B Worker Role N L B Browser Azure Storage Web Role Browser Worker Role Browser Web Role Worker Role Browser
81. Using the Cloud for Scale Browser p1 p2 p3 Web Role N L B Worker Role N L B Browser Azure Storage Web Role Browser Worker Role Browser Web Role Worker Role Browser
90. Project Austin delivers a next-generation, micro-community based opportunity management and collaboration experience that brings a managed feel to the unmanaged space, allowing Microsoft to observe and participate in the sales process at scale through dynamic, customer-driven collaboration. Project Austin leverages Windows Azure, CRM Services, and SharePoint Services to provide a rich set of customer and partner capabilities in the cloud while integrating with existing on-premise solutions. Project Austin significantly enhances our understanding of our customers and partners by facilitating relationships with and between customers, partners, and Microsoft, while providing data that allows Microsoft to identify and promote world-class selling techniques and content. Project Austin Vision
91. Project Goals. Gain first-hand experience with Azure: cloud storage, security, integration, web, SQL Azure. Explore a business scenario that leverages the promises of the cloud. Provide enterprise feedback to the Azure team. Deliver a working prototype in FY09. Project Austin Overview
92. Web Role Multi-Tenant; Web App; Web Service Integration Worker Role; .NET Service Bus; Siebel Data Storage Tables; Blobs; Queues; SQL Azure Live ID Integration Web Auth; Access Control Service; WIF; RPS Technical Overview
93. Micro Community architecture. A Micro Community Factory produces communities; Community Groups (A, B) each provide Personalization, Membership, and Content. High-Level Services: personalization, customization, content, security, integration, navigation, search, membership, identity, groups, … Foundation Services: identity, security, storage, eventing, config, content, … All running on Micro Community Compute.
98. Deploying Your Service To The Cloud: developers build it; test locally; build the package with the tools; upload your package to the web portal; push "deploy"; then monitor, upgrade, scale…
106. Lessons Learned: Operations - Deployment. Have a backup plan. Know how to reload the data. Practice your deployments. Practice your deployments again. Know how to roll back as needed.
107. Lessons Learned: Operations. Store startup config data in the Azure config files; retire use of web.config. Use Azure tables to store shared config across instances. Log to Azure tables in addition to the Azure logs; logging must be asynchronous. Don't forget to close connections.
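The advice above to log to Azure tables asynchronously can be sketched as a background writer. This is a minimal illustration, not the Azure API: the "table" below is simulated with a Python list, and all class and method names are made up.

```python
import queue
import threading

class AsyncTableLogger:
    """Buffers log entries and writes them from a background thread,
    so request-handling code never blocks on logging."""

    def __init__(self):
        self._queue = queue.Queue()
        self._table = []  # stands in for an Azure table
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def log(self, partition_key, message):
        # Non-blocking: enqueue the entry and return immediately.
        self._queue.put((partition_key, message))

    def _drain(self):
        # Background thread: persist entries one by one.
        while True:
            entry = self._queue.get()
            self._table.append(entry)  # real code would call the table API here
            self._queue.task_done()

    def flush(self):
        # Wait until every queued entry has been persisted.
        self._queue.join()

logger = AsyncTableLogger()
logger.log("WebRole1", "request handled")
logger.log("WebRole1", "cache miss")
logger.flush()
print(len(logger._table))  # 2 entries persisted
```

The web role only pays the cost of an in-memory enqueue; the slow table write happens off the request path, which is the point of the "must be asynchronous" lesson.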
116. Windows Azure Storage Service. Applications reach Blobs, Queues, and Tables over HTTP; the storage service runs alongside Compute on the fabric.
117. Blobs. Blobs are stored in containers, with one or more containers per account, and addressed as …/Container/blobpath. Capacity is 50 GB per blob in the CTP. Metadata is stored as name/value pairs (8 KB total) and accessed independently. Container access is private or public. Use blobs where you would use a file system.
118. Windows Azure Storage Service. Storage accounts contain tables; tables contain entities; entities contain properties; each property is a name, a type, and a value.
119. Tables. Entities and properties (think rows and columns); tables are scoped by account and designed for billions of entities and beyond. Scale-out uses partitions, via a partition key and a row key; operations are performed on partitions, and queries are efficient. There is no limit on the number of partitions, and hot data gets automatic load management. Access tables with ADO.NET Data Services.
120. Not a Relational Database. No join, no group by, no order by. Think: a relational DB partitioned to the max.
121. Key Example: Blog Posts (Partition 1, Partition 2). Getting all of dunnry's blog posts is fast: a single partition. Getting all posts after 2008-03-27 is slower: it traverses all partitions.
122. Keys. Partition key: how data is partitioned. Row key: unique within a partition, defines the sort order. Goals: keep partitions small (increased scalability); specify the partition key in common queries; query/sort on the row key.
123. Azure Queues. A web role calls PutMessage to place messages (Msg 1, Msg 2, …) on a queue; worker roles call GetMessage (with a timeout) to retrieve them and RemoveMessage once processing is complete.
124. Queues. A simple asynchronous dispatch queue: create and delete queues; each message is retrieved at least once; max message size is 8 KB.
145. What is a Claim? Example claims presented to a web application/service: Username: Brian; Roles: Evangelist, Speaker; Email: Brian.Prince@microsoft.com; IsOfLegalVotingAge: True.
146. The app is no longer concerned with: authentication; storing and securing usernames and passwords; connecting to directories; managing roles/rights/claims.
148. Basic Scenario – Active Client. The trusted authority (a web service) runs an STS with business rules, backed by a directory/credential store. (1) The smart client gets policy from the relying party (web service); (2) gets claims from the STS via WS-Trust; (3) sends the claims to the relying party.
149. Basic Scenario – Passive Client. The trusted authority (a web app) runs an STS with business rules, backed by a directory/credential store. (1) The browser issues an HTTP GET to the relying party (web app); (2) is redirected to the STS (WS-Federation); (3) sends the token back to the relying party via HTTP POST.
150. Federated Scenario (.NET on one side, Java on the other, across the Internet). Each realm has its own trusted authority (web service) running an STS with business rules. (1) The smart client authenticates to its own STS; (2) exchanges that token at the other realm's STS; (3) sends the resulting token to the relying party (web service).
151. Delegation and ActAs. The trusted authority runs an STS with business rules, backed by a directory/credential store. (1) Dieter's browser gets claims for Dieter; (2) presents ID: Dieter to the web front end; (3) the front end gets claims for svcInv ActAs Dieter; (4) the front end calls the back-end web service as svcInv ActAs Dieter.
155. Purchasing Models. Consumption ("Pay as you go and grow"): available at launch; low barrier to entry and flexibility; optimized for cloud elasticity. Subscription ("Value for a commitment"): select offers at launch; plans for payment predictability; discounts for commitment. Volume Licensing ("Coordinated purchasing"): available post launch; unified purchasing through EA; introduction to volume discounts.
156. Pricing Model. Compute: $0.12 per service hour. Storage: $0.15 per GB/month plus $0.01 per 10K transactions. SQL Azure: Web Edition (1 GB database) $9.99/month; Business Edition (10 GB database) $99.99/month. Messages: $0.15 per 100K message operations. Bandwidth: $0.10/GB inbound, $0.15/GB outbound.
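A quick back-of-the-envelope using the rates on this slide. These are CTP-era prices; the instance counts and usage figures below are invented inputs, and a real bill would include more line items (bandwidth, messages, database).

```python
# Rates taken from the pricing slide (CTP era).
COMPUTE_PER_HOUR = 0.12        # per service hour
STORAGE_PER_GB_MONTH = 0.15    # per GB stored per month
PER_10K_TRANSACTIONS = 0.01    # per 10K storage transactions

def monthly_cost(instances, hours, storage_gb, transactions):
    """Estimate a monthly bill for compute + storage only."""
    compute = instances * hours * COMPUTE_PER_HOUR
    storage = storage_gb * STORAGE_PER_GB_MONTH
    txn = (transactions / 10_000) * PER_10K_TRANSACTIONS
    return round(compute + storage + txn, 2)

# Two instances running a 720-hour month, 50 GB, 1M storage transactions:
print(monthly_cost(2, 720, 50, 1_000_000))  # -> 181.3
```

The dominant term is compute hours, which is why the earlier slides stress matching instance count to load instead of provisioning for peak.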
157. Service Guarantee. Compute connectivity (99.95% SLA): your service is connected and reachable via the web, and internet-facing roles will have external connectivity. Role instance monitoring and restart: all running roles are continuously monitored; if a role is unhealthy, we detect it and initiate corrective action. Storage availability (99.9% SLA): the storage service will be available/reachable, and your storage requests will be processed successfully. The technology promise behind the guarantee: automated systems management.
RTC (release to cloud) makes it easy to release new features and upgrades. This includes better management tools, logging/tracking, etc.
How many servers does your company have? What is the IT pro to server ratio? Usually an average of 1:10 or 1:30. The global datacenter team for Azure runs at 1:30,000. The Azure Fabric makes this possible.
Here's the datacenter in the cloud: a collection of commodity hardware; a collection of storage servers with triple replication; load balancers; the Fabric Controller, the "brains" behind it all; and the web portal, where you deploy and manage applications. A service is any app you want to run. It's about running your service in the Microsoft datacenter; Windows Azure is not a SKU that you would install onsite.
= Service Deployment (so easy, even a CEO can do it) = The service is the application you want to run; the model is the service configuration, which tells what the service looks like, how many instances you want to run, and so on. Today, you must deploy your service through the portal. In the future there will be an API that will allow you to deploy your service through the command line, TFS build procedures, and other types of automation. In this scenario, we deploy our service through the portal: we upload two files (the service package and the model configuration). The Fabric Controller reads the model configuration, which describes how to deploy our service; in this case, to 3 machines. The Fabric Controller determines which 3 machines to deploy to, copies the service package to them, and fires up the services. [Transition] The Fabric Controller then configures DNS so an endpoint is exposed for the outside world to communicate with your services, and from there it configures the load balancers and routers. That's it. It's completely automated.
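The model configuration described above, which tells the Fabric Controller how many instances to run, might look roughly like this. This is an illustrative sketch, not the exact ServiceConfiguration schema, and the service and role names are made up.

```xml
<!-- Illustrative sketch of a service model configuration; the real
     ServiceConfiguration file has more attributes and a namespace. -->
<ServiceConfiguration serviceName="MyService">
  <Role name="WebRole1">
    <!-- The Fabric Controller reads this count, picks 3 machines,
         copies the package to them, and starts the service. -->
    <Instances count="3" />
  </Role>
</ServiceConfiguration>
```

Scaling out then becomes an edit to `count` rather than racking a new server.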
Managed partner pipeline review: opportunities in Siebel; partner and PAM get together and collaborate on opportunities; more social collaboration, with the ability to comment and to bring people in. Didn't do much with SQL Azure, as it wasn't available at the time. Accomplished the goals in 5 months.
Web role: support for multi-tenancy (hosting multiple customers or segments on one set of infrastructure); a web service for updating the opportunity information. The .NET Service Bus was used to integrate on the back end with Siebel. Moved the worker role inside the firewall (on premise), as it made more sense there. Heavy use of tables and blobs. Most queue work is done by the .NET Service Bus under the covers; there was not a lot of code writing directly to queues. During development, SDS did a reset and became SQL Azure; the team used Azure storage until SQL Azure became available (one of the best decisions they made).
Community wants to control Personalization, Content, Membership
http://austin.cloudapp.net/default.aspx. Login with alias, no password. Go to the roadshow page. Click through the headers. Show discussion threads.
Simply put, you basically do what you do today as far as the general process goes. The biggest difference is that you are pushing a package instead of individual bits, with some bizarre, poorly documented steps on how to deploy, written at the last minute.
Native Code/FastCGI: another reason to use Azure. If you aren't used to managing different infrastructure, you can host it on Azure and not have to deal with the diversity.
Demo: ask for logs; show logs in storage that were already moved. This story will get better, especially as the management APIs come online.
Azure storage is interesting. The compute service is pretty standard: .NET, by and large. Storage is not quite as familiar. It is accessed by HTTP (RESTful) and has three parts: blob storage, for big chunks of data; tables, which are not relational tables; and queues, which are what they sound like: queues.
Blobs. Blobs are stored in containers: 0 or more blobs per container and 0 or more containers per account (you can have 0 containers, but then you would not have any blobs either). The typical URL in the cloud is http://accountname.blob.core.windows.net/container/blobpath. Blob paths can contain the / character, so you can give the illusion of multiple folders, but there is only one level of containers. Blob capacity at CTP is 50 GB. An 8 KB dictionary of name/value pairs can be associated with each blob as metadata. Blobs can be private or public: private requires a key to read and write; public requires a key to write but no key to read. Use blobs where you would have used the file system in the past.
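The "illusion of folders" mentioned above can be sketched in a few lines: containers are flat, but because blob names may contain "/", a client can group names by prefix the way a folder listing would. The blob names and delimiter handling below are illustrative, not the Azure listing API.

```python
# Containers are flat; "folders" are just name prefixes.
blob_names = [
    "photos/2009/beach.jpg",
    "photos/2009/city.jpg",
    "photos/2010/snow.jpg",
    "readme.txt",
]

def virtual_folders(names, delimiter="/"):
    """Return the top-level 'folder' prefixes, the way a listing call
    with a delimiter would group flat blob names."""
    folders = set()
    for name in names:
        if delimiter in name:
            folders.add(name.split(delimiter, 1)[0] + delimiter)
    return sorted(folders)

print(virtual_folders(blob_names))  # ['photos/']
```

Nothing in storage actually nests; the hierarchy exists only in how clients parse the names.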
It's easier to describe what Azure tables don't do than what they do. Most everyone, when they hear "tables," thinks of SQL Server or relational database tables and the functionality you get from those, but that's not what we have. In Windows Azure you have storage accounts, and requests must be signed with keys for access. In your account you can have some number of tables, some number of entities, and some number of properties, each with a name, type, and value. So, I ask you: are these tables? Do you see rows, tables, columns? No, they're not relational tables. Here's the truth: Windows Azure tables have some trade-offs.
Tables are simply collections of entities. Entities must have a PartitionKey and RowKey, and can also contain up to 256 other properties. Entities within a table need not be the same shape, e.g.: Entity 1: PartitionKey, RowKey, firstname; Entity 2: PartitionKey, RowKey, firstname, lastname; Entity 3: PartitionKey, RowKey, orderId, orderData, zipCode. Partitions are used to spread data across multiple servers; this happens automatically based on the partition key you provide. Table "heat" is also monitored, and data may be moved to different storage endpoints based upon usage. Queries should be targeted at a partition, since there are no indexes to speed up performance (indexes may be added at a later date). It's important to convey that while you could copy tables in from a local data source (e.g. SQL), it would not perform well in the cloud; data access needs to be re-thought at this level. Those wanting a more traditional SQL-like experience should investigate SDS.
It's an "entity store": you can store entities, retrieve entities, and do simple querying on them. Think of a partitioned SQL Server: A-M on this server, N-Z on that server. To find the top 5 customers that ordered the most, you would have to poll 26 servers and aggregate the data. That's roughly what we have with Azure table storage. We went with a highly partitioned approach up front to gain scale and availability, and had to sacrifice some complex queries, such as joins, to support it. It's just a different way of dealing with your data.
Getting all of dunnry's posts is fast because we're selecting the entities by a partition key. Getting all of the posts after a certain date is slower because we may have to traverse multiple servers: we're selecting entities that span partition keys. A query without the partition key is really a scan.
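The fast-versus-slow distinction above can be simulated in a few lines. Each partition could live on a different server, so a query that names its partition key consults one "server" while a date-range query must traverse them all. The blog data below is illustrative.

```python
# Simulated table storage: PartitionKey -> entities sorted by RowKey.
partitions = {
    "dunnry": [
        {"RowKey": "2008-03-25", "title": "Post A"},
        {"RowKey": "2008-03-28", "title": "Post B"},
    ],
    "smarx": [
        {"RowKey": "2008-03-29", "title": "Post C"},
    ],
}

def by_partition(key):
    # Fast: one partition (one server) consulted.
    return partitions.get(key, [])

def posts_after(date):
    # Slow: no partition key given, so every partition is traversed.
    return [e for rows in partitions.values() for e in rows
            if e["RowKey"] > date]

print(len(by_partition("dunnry")))    # 2 (single-partition lookup)
print(len(posts_after("2008-03-27")))  # 2 (full scan across partitions)
```

Both queries return correct answers; the difference is how many servers had to be touched, which is exactly why the slide says to put the partition key in your common queries.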
Keep partitions small; this increases scalability, allowing us to replicate data when it's hot and spread data across multiple servers.
Use queues as a way of communicating with the back-end worker roles. Worker roles call GetMessage and pass a timeout. The timeout value is important: for its duration, the message is marked invisible in the queue. When we're done processing, we remove the message with a delete. The reason for this pattern: imagine we have a second worker role. If something goes wrong, then once the timeout expires the message becomes visible again, and the next GetMessage call will receive it.
Queues are simple. Messages are placed in queues; max size is 8 KB (and it's a string). A message can be read from the queue, at which point it is hidden. Once whatever read the message has finished processing it, it should remove the message from the queue; if not, the message is returned to the queue after a specific, user-defined time limit. This can be used to handle code failures, etc.
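The get/hide/delete lifecycle described above can be sketched as a toy queue. This simulates the visibility-timeout behavior in-process; it is not the Azure queue API, and the timings are scaled down to fractions of a second for illustration.

```python
import time

class SimQueue:
    """Toy model of the pattern above: get() hides a message for a
    visibility timeout; if the reader never deletes it, the message
    reappears for another worker."""

    def __init__(self):
        self._messages = []  # list of [body, invisible_until]

    def put(self, body):
        self._messages.append([body, 0.0])

    def get(self, timeout):
        now = time.monotonic()
        for msg in self._messages:
            if msg[1] <= now:           # currently visible
                msg[1] = now + timeout  # hide it for `timeout` seconds
                return msg[0]
        return None

    def delete(self, body):
        self._messages = [m for m in self._messages if m[0] != body]

q = SimQueue()
q.put("resize photo 1")
first = q.get(timeout=0.05)   # worker A takes the message
hidden = q.get(timeout=0.05)  # worker B sees nothing: message is invisible
time.sleep(0.06)              # worker A "crashed"; the timeout expires
retry = q.get(timeout=0.05)   # worker B now receives the same message
q.delete("resize photo 1")    # processing done: remove it for good
```

This is why the slide says messages are retrieved "at least once": a crash before delete means redelivery, so worker code must tolerate processing the same message twice.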
So, I have a simple service that I call the thumbnail generator; this is a picture of its conceptual architecture. There are a number of web roles, ASP.NET code sitting behind a load balancer, taking requests in which they receive pictures. They put these pictures as blobs into the cloud storage system. We then have a set of worker roles running asynchronously, watching queues in the cloud; they pick image requests off the queues and generate thumbnails using code written in the worker role. Finally, the images get displayed again on the website. The white box designates the service itself, and all of this is actually running on my desktop in the simulation environment. Key points I want to make with this picture: first, this architecture represents best practices for building cloud services at scale. You don't build up, you build out: you have a bunch of stateless compute nodes, and any of these nodes can fail at any time. It doesn't matter; your service keeps running, because no data is stored in only one place. Second, it's useful to build loosely coupled architectures. This is an example right here: the front end and back end talk to each other through the queue, which is very scalable. Third, this is an open platform. You can access it from anywhere and reach out to anywhere else, so you can imagine many scenarios in which some code runs in our data centers and some elsewhere. So, let's switch over there.
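The loose coupling described above can be condensed into a sketch: the web role drops the image into blob storage and a message into the queue, and worker roles drain the queue independently. The dictionaries, names, and the "thumbnail" transformation below are all stand-ins, not the real service.

```python
from collections import deque

blobs = {}            # stand-in for blob storage
work_queue = deque()  # stand-in for an Azure queue

def web_role_upload(name, image_bytes):
    """Front end: store the image, then enqueue a request for the back end."""
    blobs[name] = image_bytes
    work_queue.append(name)

def worker_role_tick():
    """Back end: process one queued request, if any. Returns True if it did."""
    if not work_queue:
        return False
    name = work_queue.popleft()
    # Fake "thumbnail": take the first bytes of the original.
    blobs["thumb-" + name] = blobs[name][:4]
    return True

web_role_upload("beach.jpg", b"fulljpegbytes")
while worker_role_tick():
    pass
print(sorted(blobs))  # ['beach.jpg', 'thumb-beach.jpg']
```

Because the two sides share only the queue and the blob store, you can add or remove worker instances without the web roles ever knowing, which is the scalability point the notes make.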
Duh!
Any silo inhibits agility, slowing down IT’s ability to support the business to respond to the market
This inhibits reuse, and the ability to easily migrate to new environments
Don't be a plumber. If you are focusing on this, you aren't focusing on what your company does in the market. Focus on code that only you can write.
Many deployments of security endpoints, leads to a greater attack surface, and the multiplication of common flaws across all of your systems.
It is rare that any sizable company has one directory. It is usually many, either through acquisition or on purpose (a hub-and-spoke model in LDAP is common; see me for a walkthrough). Many don't have a directory per se, so some have zero; very small companies might not do this at all. Also, code against AD (or LDAP) is not an easy skill, doesn't work like it should from a dev perspective, and is easiest outsourced somehow (to a component, from /n software, etc.).
Many regulations and IT policies are moving towards more secure authN mechanisms. SmartCards, Certificates, etc.
Of course, the proliferation of accounts for users leads to a diminished security profile: sticky notes stuck to monitors, identical simple passwords everywhere…
What about when you have an extranet that a customer needs access to? Usually you: (1) pollute your AD with their info, thereby increasing AD management costs; (2) create a second AD (which leads to n ADs, one for each customer); or (3) create an island of data in your app, which leads to costs in provisioning and managing the accounts. What if an employee of your customer leaves and still has access to your extranet? What if your customers could use their own credentials from their own company, so they aren't your problem? <<<< Visit the Bike Store story here >>>>
If you move an app into the cloud, you are forced into a separate AuthN/Z infrastructure in this model. What if your internal users could use their everyday creds to log in to the app you just launched into the cloud? Most company applications use creds in a local directory, but you can't do this if the app is running in the cloud, so you must have separate credentials. This is the primary use of federation for everyday companies.
Three geeks walk into a bar in California. The bouncer asks for ID. You whip out your driver's license from the state of Ohio. They inspect it, flash a purple light at it, verify your age, and let you in. They didn't force you to register with them to get a bar credential; otherwise you would end up with a ton of credentials you were forced to use (like those grocery store customer loyalty cards). The bar trusts credentials from a trusted provider, and it has ways to validate that those credentials are genuine (the light, and known embedded security features).
A Claim is a property of a user
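Since a claim is just a property of a user asserted by an issuer the app trusts, a relying party's logic reduces to inspecting claims rather than authenticating anyone itself. A minimal sketch; the issuer URL, token shape, and claim names are all invented for illustration.

```python
# A claim is a statement about a user, asserted by a trusted issuer.
# The relying party never sees a password; it only inspects claims.
token = {
    "issuer": "https://sts.contoso.example",  # hypothetical STS
    "claims": {
        "name": "Brian",
        "roles": ["Evangelist", "Speaker"],
        "is_of_legal_voting_age": True,
    },
}

TRUSTED_ISSUERS = {"https://sts.contoso.example"}

def can_vote(token):
    """Relying-party check: trust the issuer, then read the claim."""
    if token["issuer"] not in TRUSTED_ISSUERS:
        return False  # reject tokens from unknown authorities outright
    return token["claims"].get("is_of_legal_voting_age", False)

print(can_vote(token))  # True
```

Note the app asks "is this user of legal voting age?" rather than "what is this user's birthday?": the STS resolves the raw data into the exact fact the app needs.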
Turns out, companies need this ability even when they are not in a federation scenario. This helps when moving apps to the cloud, allowing customers/partners into your app, or with many directories through mergers.
shows a claims-aware web service (the relying party) and a smart client that wants to use that service. The relying party exposes policy that describes its addresses, bindings, and contracts. But the policy also includes a list of claims that the relying party needs, for example user name, email address, and role memberships. The policy also tells the smart client the address of the STS (another web service in the system) where it should retrieve these claims. After retrieving this policy (1), the client now knows where to go to authenticate: the STS. The smart client makes a web service request (2) to the STS, requesting the claims that the relying party asked for via its policy. The job of the STS is to authenticate the user and return a security token that gives the relying party all of the claims it needs. The smart client then makes its request to the relying party(3), sending the security token along in the security SOAP header. The relying party now receives claims with each request, and simply rejects any requests that don’t include a security token from the issuing authority that it trusts. DEMO: SamplesBasicSimple STS for Active Clients
The user points her browser at a claims-aware web application (relying party). The web application redirects the browser to the STS so the user can be authenticated. The STS in Figure 3 is wrapped by a simple web application that reads the incoming request, authenticates the user via standard HTTP mechanisms, and then creates a SAML token and emits a bit of javascript that causes the browser to initiate an HTTP POST that sends the SAML token back to the relying party. The body of this POST contains the claims that the relying party requested. At this point it is common for the relying party to package the claims into a cookie so that the user doesn’t have to be redirected for each request. The WS-Federation specification includes a section3 that describes how to do these things in an interoperable way. *** The Trusted Auth web app is a simple aspx page with code behind that does all the work. This can easily be converted into an ISAPI handler of HTTP pipeline component.DEMO: SamplesBasicSimple STS For Passive Clients
the client is in a different security realm over in Bob’s bike shop, while the relying party is still in Fabrikam’s data center. In this case, the client (Alice, say) authenticates with Bob’s STS (1) and gets a security token that she can send to Fabrikam. This token indicates that Alice has been authenticated by Bob’s security infrastructure, and includes claims that specify what roles she plays in Bob’s organization. The client sends this token to Fabrikam’s STS, where it evaluates the claims, decides whether Alice should be allowed to access the relying party in question, and issues a second security token that contains the claims the relying party expects. The client sends this second token to the relying party(3), which now discovers Alice as a new user, and allows her to access the application according to the claims issued by Fabrikam’s STS. Note that the relying party didn’t have to concern itself with validating a security token from Bob’s bike shop. Fabrikam’s authority did all of that heavy lifting: making certain to issue security tokens only to trusted partners that have previously established a relationship with Fabrikam. In this example, the relying party will always get tokens from its own STS. If it sees a token from anywhere else, it will reject it outright. This keeps your applications as simple as possible. LAST BUILD: a company that uses .NET Framework and Zermatt to build its applications. They have recently merged with another company whose IT platform is based on Java. Because the Microsoft .NET-connected applications are already claims-aware, the company was able to install an STS built on Java technology and suddenly the Microsoft .NET-connected applications became accessible to users in the Java-based directory, with no changes to application code or even application configuration.
ActAs scenario. Alice has pointed her browser at a web application that, as part of its implementation, makes use of a back end web service. Alice’s browser goes through the passive redirection handshake just like normal in order to present a security token to the web front end. This is where things get interesting: the web front end which, for the sake of this discussion, runs under an identity called Bob, takes Alice’s token and submits it as an “ActAs” parameter in his request to get a security token for the back end web service. The issuing authority notes that Bob wants to make requests to the back end using Alice’s credentials, and so crafts an IClaimsIdentity for Alice and an IClaimsIdentity for Bob, and links them together via the Delegate property, as shown in Figure 23. These identities are serialized into a security token for the back end, where Zermatt rehydrates this same structure so that the back end can see that this is Alice making the request (but technically, Bob is delegating her credentials). The back end can then perform appropriate access control, typically granting access based on Alice’s level of permission. The back end can also audit the request, typically noting the fact that Bob delegated Alice’s credentials to make the request. This is richer than the current model of delegation in Kerberos on the Windows platform today, where the back end has no programmatic way to discover that Alice’s credentials were delegated by some middle tier component. In the claims-based model, the back end can see that Alice went to the web front end (Bob) and that Bob delegated her credentials to get to the back end. If the back end were to receive a token for Alice without Bob as a delegate, it would know that Alice was accessing the back end directly, and could take appropriate action (deny the request, perhaps). Different business logic possibilities: Consider the information the authority gets in this scenario. 
The authority knows which relying party is the target of the request (the back end web service). It knows who is making the request (Bob), and it knows that Bob wants to act on Alice’s behalf. The authority may decide not to issue a security token at all if Alice is a sensitive user, such as an administrator with very high privilege. Or it may issue a token with a restricted set of claims, limiting what Bob can do while using Alice’s credentials. Or it may issue an entirely different set of claims based on what the back end needs. The authority might even deny direct requests from Alice to the back end, if that is desirable. The only limitation is the policy supported by the STS that you buy; of course, if you implement your own STS, you’ll be limited only by your imagination. Kerberos two hop limit: you might ask what the two hop limit is. A very simple explanation is that, by default, impersonated credentials can only be exchanged between two machines. This means that if Machine A requests work to be done on Machine B for an impersonated user, Machine B can perform the work but cannot offload it to Machine C, because authentication for the user will fail. The easiest way to fix this is to implement Kerberos delegation. Configuring this is challenging and fraught with peril: you have to make changes in Active Directory, all systems have to be in the same AD forest, and the accounts must have the right delegation flags. DEMO: Samples\Intermediate\Identity Delegation scenario.
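The policy decisions listed above can be sketched as a single issuing function. Everything here is an assumption for illustration (the sensitive-user list, the claim names, the rule that strips a high-value claim from delegated requests); a real STS would drive this from configured policy:

```python
# Sketch of the policy an authority might apply to an ActAs request
# (all names and rules are illustrative).

SENSITIVE_USERS = {"DomainAdmin"}

def lookup_claims(user):
    # Stand-in for the authority's claims store.
    return {"ReadAccount", "ApproveWireTransfer"} if user == "Alice" else set()

def issue_actas_token(requester, act_as_user, target):
    if act_as_user in SENSITIVE_USERS:
        # Refuse to let a middle tier borrow a high-privilege identity.
        raise PermissionError("delegation of sensitive users is not allowed")
    claims = lookup_claims(act_as_user)
    if requester != act_as_user:
        # Delegated request: issue a restricted set of claims so the
        # middle tier can't perform the highest-value operations.
        claims = claims - {"ApproveWireTransfer"}
    return {"subject": act_as_user, "delegate": requester,
            "target": target, "claims": claims}

token = issue_actas_token(requester="Bob", act_as_user="Alice", target="BackEnd")
print(token["claims"])  # {'ReadAccount'} -- wire transfers need Alice herself
```

The same function also shows where a "deny direct requests" rule would slot in: the authority sees requester, subject, and target together, so any combination can be policy.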
Demo: Samples\Advanced\Authentication Assurance. Sometimes different systems, or different operations within a system, should be protected in a stronger (and usually more cumbersome) manner. The STS adds a claim recording which authentication method was used, and the relying party can decide whether that method is sufficient for the operation at hand. For example, normal operations can be performed with Windows integrated authentication, but a high value wire transfer requires a smart card with a PIN. Demo on page 31. There are two issuers in this example: AuthPassiveSTSWindows and AuthPassiveSTSCert. The first uses Windows integrated authentication; the second requires the client to present a certificate, which is a stronger but more cumbersome form of authentication. Each issuer adds an Authentication claim to the user’s list of claims, indicating the form of authentication used. You can see this in the GetOutputSubjects method found in the App_Code\SampleSTSService.cs file for each of these projects. The relying party in this example is a browser-based application (called AuthAssuranceRP) that exposes a low value page (LowValueResourcePage.aspx) and a high value page (HighValueResourcePage.aspx). The low value page simply checks whether the user is authenticated and, if not, redirects to default.aspx, which hosts an instance of the FederatedPassiveSignIn control. This control presents the user with a link she can click to initiate the WS-Federation passive redirect to AuthPassiveSTSWindows, which uses Windows authentication to authenticate the user quickly and without much hassle. Regardless of whether the user is already authenticated, when she visits HighValueResourcePage.aspx, the code checks not only that the user is authenticated but also that she has a claim indicating the required strength of authentication, “CertOrSmartCard”, which is issued only by the AuthPassiveSTSCert STS, the one requiring the user to authenticate with a certificate (or smart card, if you have that infrastructure). So instead of redirecting the user to default.aspx, the high value page redirects to a separate sign-in page specifically for high-assurance logins. This is easy to implement: if you look at HighAssuranceSignInPage.aspx, you’ll see another instance of the FederatedPassiveSignIn control, one that redirects to the AuthPassiveSTSCert STS instead.
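The high value page's check boils down to a two-part test: authenticated at all, and authenticated strongly enough. A minimal sketch of that logic, modeling the user's claims as a plain dictionary (the page names come from the sample; the dictionary representation is an assumption of this sketch):

```python
# Sketch of the relying party's authentication-assurance check: being
# authenticated is not enough for the high value page; the Authentication
# claim must also record a sufficiently strong method.

REQUIRED_AUTH = "CertOrSmartCard"

def high_value_page(user):
    if user is None:
        # Not signed in at all: go straight to the high-assurance sign-in.
        return "redirect: HighAssuranceSignInPage.aspx"
    if user.get("Authentication") != REQUIRED_AUTH:
        # Authenticated, but only with e.g. Windows integrated auth:
        # step up to the certificate-based STS.
        return "redirect: HighAssuranceSignInPage.aspx"
    return "render: HighValueResourcePage.aspx"

print(high_value_page({"Authentication": "WindowsIntegrated"}))  # redirect
print(high_value_page({"Authentication": "CertOrSmartCard"}))    # render
```

Note that the low value page would use the same shape of check but accept any Authentication claim, which is why it can redirect to default.aspx and the cheaper Windows STS instead.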
Why are companies doing this? Ask the audience if they are, and what their reasons are:
Better use of resources
Quicker provisioning
Decoupling solutions from the physical environment
Giving IT the agility to respond to business needs
This makes you more agile and better able to meet the business’s needs: you can scale not only up and out as needed, but down and in as well. Reduce costs and reduce the amount of grunt work. Focus on maintaining your systems efficiently, not on growing the number of servers under your command.
http://blogs.zdnet.com/microsoft/?p=3344
http://dynamicdatacentertoolkit.com
http://download.microsoft.com/download/0/C/9/0C9EE51A-EFB7-47DE-A1BF-C9E0797F736C/datasheet_dynamicdatacentertoolkitforenterprises.docx
http://www.microsoft.com/hosting/dynamicdatacenter/Home.html
The Dynamic Data Center Toolkit enables you to build an ongoing relationship with your customers while you scale your business with these resources:
Step-by-step instructions and technical best practices to provision and manage a reliable, secure, and scalable data center
Customizable marketing material you can use to help your customers take advantage of these new solutions
Sample code and demos to use in your deployment