Note: this session is in English, presented by Scott Schnoll, Senior Content Developer at Microsoft Corp. It explains how to architect a migration to Exchange 2013 from older versions of Exchange, with Scott Schnoll, an Exchange guru straight from Redmond, as speaker. Coexistence between servers running different versions is also covered.
Speaker : Scott Schnoll (Microsoft)
From development environments to production deployments with Docker, Compose,... (Jérôme Petazzoni)
In this session, we will learn how to define and run multi-container applications with Docker Compose. Then, we will show how to deploy and scale them seamlessly to a cluster with Docker Swarm; and how Amazon EC2 Container Service (ECS) eliminates the need to install, operate, and scale your own cluster management infrastructure. We will also walk through some best-practice patterns used by customers for running their microservices platforms or batch jobs. Sample code and Compose templates will be provided on GitHub afterwards.
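The multi-container pattern this session describes can be sketched in a minimal docker-compose.yml; the service names and images below are illustrative, not taken from the talk:

```yaml
# Hypothetical two-service app: a web front end plus a Redis backend.
version: "2"
services:
  web:
    build: .          # build the app image from the local Dockerfile
    ports:
      - "8000:80"     # expose container port 80 on host port 8000
    depends_on:
      - redis         # start the backend before the front end
  redis:
    image: redis      # off-the-shelf Redis image from Docker Hub
```

Running `docker-compose up` then starts both containers together, which is the "define and run multi-container applications" workflow the abstract refers to.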
This document discusses cloud native principles and definitions. It covers the issues with bare metal servers, how virtualization was an improvement, and defines cloud native principles like using containers for isolation and portability, microservices, elasticity, and automation. It provides Pivotal's definition of cloud native including practices like DevOps, continuous delivery, and BOSH for consistent provisioning. While containers are common, cloud native does not require them - examples like NetflixOSS are given. Migrating applications to the cloud "as-is" can miss benefits, and principles like those defined can "raise the bar" of cloud applications.
[Open Source Consulting] Red Hat ReaR (Relax-and-Recover) Quick Guide (Ji-Woong Choi)
This document covers backing up and restoring OS-level data using ReaR (Relax and Recover), the disaster-recovery solution built into RHEL. ReaR supports various backup data formats, including ISO, but this document describes in detail only how to create and restore ReaR backup data in a PXE-boot format, which can be used smoothly even in corporate environments where bringing in or storing CD/DVD media is usually prohibited for security reasons.
Kubernetes your tests! Automation with Docker on Google Cloud Platform (LivePerson)
Arik Lerner, Automation Team Leader, and Waseem Hamshawi, Automation Infra Developer, present how to build a large-scale automated testing platform by leveraging container orchestration on GCP, with the ability to scale out and provide fast feedback while maintaining a highly reliable test infrastructure.
The presentation introduces a new approach to managing a scalable testing platform of distributed automated tests with Kubernetes and Docker on Google Cloud Platform.
Topics:
• GCP and Kubernetes introduction for automated testing
• Traditional Selenium Grid vs Selenium Standalone with Kubernetes and Docker for Web and Mobile tests
• Distributed and containerized testing environments over a container cluster - different use cases
Ephemerals, "short-lived testing endpoints", is an open-source project by LivePerson that makes large-scale automation testing feel like a "walk in the park".
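The Selenium-standalone-on-Kubernetes approach contrasted above can be sketched as a Deployment of disposable browser pods; the image, replica count, and labels below are assumptions for illustration, not from the talk:

```yaml
# Hypothetical Deployment running disposable standalone Chrome nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-chrome
spec:
  replicas: 10                            # scale out for parallel test runs
  selector:
    matchLabels:
      app: selenium-chrome
  template:
    metadata:
      labels:
        app: selenium-chrome
    spec:
      containers:
      - name: chrome
        image: selenium/standalone-chrome # one browser per pod
        ports:
        - containerPort: 4444             # WebDriver endpoint
```

Compared with a long-lived Selenium Grid, each pod here is cheap to replace, which is what makes the short-lived-endpoint model attractive for reliability.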
DevOps is a methodology that unites software development (Dev) and IT operations (Ops) into a single continuous process focused on improving quality and speed of delivering new apps. It eliminates finger-pointing between Dev and Ops by emphasizing collaboration through principles like culture, measurement, automation and sharing. Adopting DevOps leads to faster time to market, increased quality, and greater organizational effectiveness.
The introduction covers the following:
1. What are Microservices and why should we use this paradigm?
2. 12 factor apps and how Microservices make it easier to create them
3. Characteristics of Microservices
Note: Please download the slides to view animations.
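One concrete 12-factor practice behind point 2 above is storing configuration in the environment rather than in code. A minimal sketch in Python; the variable names and defaults are illustrative assumptions, not from the slides:

```python
import os

def load_config():
    """Read service configuration from environment variables
    (12-factor, factor III: Config).

    Defaults are illustrative; a real service might fail fast instead.
    """
    return {
        "db_url": os.environ.get("DATABASE_URL", "postgres://localhost/dev"),
        "port": int(os.environ.get("PORT", "8080")),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

if __name__ == "__main__":
    print(load_config())
```

Because each microservice reads its own environment, the same image can run unchanged in dev, staging, and production.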
Oracle SOA Suite in use – a practical experience report (Guido Schmutz)
The document discusses two cases where Oracle SOA Suite was used in practical applications. Case 1 describes how SOA Suite was used to integrate an ERP system with external systems, replacing a batch-based interface. Case 2 discusses a modernization project where SOA Suite was used to modernize a legacy system and expose its services.
How to build microservices with Node.js (Katy Slemon)
In this guide, we'll learn how to build microservices with Node.js, i.e., a Node.js app using a microservices architecture. You can clone the GitHub repo provided.
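The guide itself builds its services in Node.js; purely to illustrate the shape of one small, single-responsibility service, here is a hedged sketch using Python's standard library (the route and data are invented):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class UserHandler(BaseHTTPRequestHandler):
    """A tiny 'users' microservice exposing one read endpoint."""

    USERS = {"1": {"id": "1", "name": "Ada"}}  # illustrative in-memory store

    def do_GET(self):
        # Route: GET /users/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "users" and parts[1] in self.USERS:
            body = json.dumps(self.USERS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep output quiet

def serve(port=0):
    """Create the service; port 0 lets the OS pick a free port."""
    return HTTPServer(("127.0.0.1", port), UserHandler)
```

Each service in a microservices architecture owns one narrow capability like this and communicates with the others over HTTP (or a message bus).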
Oracle REST Data Services: Options for your Web Services (Jeff Smith)
ORDS has many options when it comes to delivering web services for your Oracle Database. We have an Automatic feature for your database objects where we handle everything for you. Or, you can write your own services with your SQL & PL/SQL. This slide deck shows exactly what you have to choose from for your applications.
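The "Automatic" option mentioned above is driven by the ORDS PL/SQL API; a hedged sketch, where the schema and table names are invented for illustration:

```sql
-- Hypothetical example: auto-enable REST for a schema and one table.
BEGIN
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'HR',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'hr');

  ORDS.ENABLE_OBJECT(
    p_enabled => TRUE,
    p_schema  => 'HR',
    p_object  => 'EMPLOYEES');  -- AutoREST on the EMPLOYEES table
  COMMIT;
END;
/
```

For the "write your own" option, you would instead define modules, templates, and handlers backed by your SQL and PL/SQL.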
API Lifecycle, Part 2: Monitor and Deploy an API (Postman)
Now that you have a well-defined and designed API from the 'API Lifecycle, Part I: Build and Test an API' session, part 2 walks you through the next steps of the API lifecycle: monitoring and deploying an API.
In this session, Diógenes gives an introduction to the basic concepts that make up OpenShift, paying special attention to its relationship with Linux containers and Kubernetes.
Understanding the Single Thread Event Loop (TorontoNodeJS)
Node.js was built on Google's JavaScript V8 engine and engineered to perform optimally for the web. Learn what Node.js's single-thread event loop is and how it empowers Node to outperform its competitors.
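The single-threaded event-loop model the talk covers is not unique to Node.js; as a rough analogy (not the talk's own code), Python's asyncio loop interleaves I/O waits on one thread in the same way:

```python
import asyncio

async def task(name, delay, log):
    # Each coroutine yields control to the loop while "waiting on I/O".
    await asyncio.sleep(delay)
    log.append(name)

async def main():
    log = []
    # Both tasks run concurrently on a single thread: total wait is about
    # 0.02s, not 0.03s, because the loop interleaves the two sleeps.
    await asyncio.gather(task("slow", 0.02, log), task("fast", 0.01, log))
    return log

if __name__ == "__main__":
    print(asyncio.run(main()))  # ['fast', 'slow']
```

This is why a single-threaded loop can serve many concurrent connections: the thread is never blocked waiting, it just switches to whichever task is ready.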
This document introduces infrastructure as code (IaC) using Terraform and provides examples of deploying infrastructure on AWS including:
- A single EC2 instance
- A single web server
- A cluster of web servers using an Auto Scaling Group
- Adding a load balancer using an Elastic Load Balancer
It also discusses Terraform concepts and syntax like variables, resources, outputs, and interpolation. The target audience is people who deploy infrastructure on AWS or other clouds.
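The "single EC2 instance" step above is conventionally written in Terraform's HCL roughly as follows; the AMI ID and region are placeholders, not values from the document:

```hcl
# Hypothetical minimal example: one EC2 instance.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-example"
  }
}

output "instance_id" {
  value = aws_instance.example.id  # interpolation of a resource attribute
}
```

The later steps (web server, Auto Scaling Group, load balancer) build on the same resource/variable/output syntax, swapping in `aws_autoscaling_group` and load-balancer resources.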
Chef is an open-source configuration management and automation tool. It allows users to define infrastructure through recipes organized into cookbooks. Recipes contain resources that describe how to configure systems. Chef runs use recipes and attributes to test systems and repair any deviations from the defined state. Attributes provide details about nodes and can be used to customize configurations. Ohai detects node attributes which are provided to Chef runs. Cookbooks contain recipes, attributes, files and other components to define common scenarios. Node attributes can be defined in cookbooks and overridden to customize configurations for different environments.
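The recipe/resource/attribute relationship described above looks roughly like this in Chef's Ruby DSL; the package and template names are illustrative assumptions:

```ruby
# Hypothetical recipe: install and run nginx, rendering a config template.
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'                        # template shipped in the cookbook
  variables(worker_count: node['cpu']['total'])  # node attribute detected by Ohai
  notifies :reload, 'service[nginx]'
end

service 'nginx' do
  action [:enable, :start]                       # repair any drift from this state
end
```

On each Chef run, these resources are tested against the node's actual state and only out-of-compliance pieces are repaired.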
This document discusses troubleshooting Oracle Real Application Clusters (RAC). It begins with an overview of RAC architecture including Grid Infrastructure, CRS, ASM, and networking requirements. The document then covers various troubleshooting scenarios for issues like cluster startup failures and node evictions. It also discusses proactive and reactive monitoring tools available in Oracle RAC and recent performance improvements.
This document provides an overview of IT automation using Ansible. It discusses using Ansible to automate tasks across multiple servers like installing packages and copying files without needing to login to each server individually. It also covers Ansible concepts like playbooks, variables, modules, and vault for securely storing passwords. Playbooks allow defining automation jobs as code that can be run on multiple servers simultaneously in a consistent and repeatable way.
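A playbook of the kind described above might look like this; the host group, package, and file paths are illustrative assumptions:

```yaml
# Hypothetical playbook: the same tasks applied to every host in 'webservers'.
- hosts: webservers
  become: yes
  vars:
    pkg_name: nginx
  tasks:
    - name: Install the package on all servers at once
      package:
        name: "{{ pkg_name }}"
        state: present

    - name: Copy a config file without logging in to each server
      copy:
        src: files/app.conf
        dest: /etc/app.conf
```

Running it with `ansible-playbook` applies both tasks to every host in the inventory group, consistently and repeatably.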
Online DevOps (and SecDevOps) Bootcamp by GeeksHubs Academy (Telefónica)
This document describes a 12-week online DevOps specialization bootcamp. The bootcamp teaches skills such as the DevOps role, architecture as code, hybrid infrastructures, containers, load balancing, automation, container clustering, continuous delivery, and continuous testing. The program includes 10 modules and a final project automating a web application. The bootcamp is taught by 10 expert professionals and is 100% online with live classes.
This document provides an introduction to Ansible, an open source automation tool. It discusses what Ansible is, highlighting that it is a simple yet powerful IT automation system. It then covers Ansible fundamentals like architecture, modules, inventory, playbooks and variables. The document also discusses advanced Ansible topics such as YAML, debugging, SSH and plugins. It concludes with best practices for Ansible such as using roles and the automation server Tower.
Red Hat is an open source software company that provides Linux operating systems, middleware, storage, and cloud computing solutions. Some key facts:
- Red Hat is the #1 provider of open source solutions, with over 90% of Fortune 500 companies using their products.
- They have over 7,000 employees worldwide and annual revenue of over $1 billion.
- Their solutions include Red Hat Enterprise Linux, JBoss middleware, OpenShift PaaS, and CloudForms management tools.
Releases are risky. Homegrown scripts, manual steps, and runbook orchestrations often contribute to the risks involved with application releases. Having a controlled release process can strengthen release management by ensuring quality, reducing manual tasks, deploying applications consistently across environments, and more. Development teams, making the changes to meet customers' needs, realized that they could not keep up with the increased demand. Many of those teams turned to Agile methodologies, which help developers create a steady stream of features and solve customers' problems as they arise. Agile allowed developers to make rapid changes. However, organizations were unable to achieve the full benefit of Agile: legacy deployment processes, built for infrequent releases, delayed the release of the applications.
This document summarizes the key features and benefits of Ansible, an agentless automation tool. It notes that Ansible is simple to use with a human-readable YAML language that does not require coding skills. It is powerful yet efficient for deployment, orchestration, and provisioning. It has basic features like modules for managing files, templates, packages, and retrieving file states. Ansible also has wide OS support, integrates with major clouds, works with other configuration tools, and has an easy learning curve and extensible plugin architecture. It helps lower maintenance costs and allows more reliable, faster deployments with automated recovery and failover.
Oracle Drivers configuration for High Availability (Ludovico Caldara)
This document discusses various techniques for achieving high availability and transparent failover in Oracle databases, including:
- Fast Application Notification (FAN) to notify clients of service relocations and allow sessions to drain gracefully.
- Transparent Application Failover (TAF) which automates reconnects for OCI clients and allows resuming queries after a failure.
- Application Continuity (AC) which records transaction state to allow replaying transactions after a failure, requiring code changes or a connection pool.
- Transparent Application Continuity (TAC) which provides the benefits of AC without requiring code changes for supported drivers.
- Connection managers like Traffic Director which can provide session failover without client changes by managing
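Several of these features are driven from the client connect string. As a hedged sketch, a TNS descriptor enabling connect-time failover and retries might look like this; host, alias, and service names are invented:

```
MYAPP =
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 5)(RETRY_COUNT = 3)(RETRY_DELAY = 2)
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)(FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = myapp_svc)))
```

Connecting through a service name (rather than a fixed instance) is what lets FAN, TAF, and Application Continuity relocate sessions transparently when an instance fails.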
Microsoft Exchange 2013 deployment and coexistence (Motty Ben Atia)
This document provides steps for upgrading Exchange servers from Exchange 2010 or 2007 to Exchange 2013 while maintaining coexistence between the different versions:
1. Install updates on Exchange 2010/2007 servers and prepare Active Directory with Exchange 2013 schema.
2. Deploy Exchange 2013 servers and install Client Access and Mailbox roles.
3. Create a legacy namespace for Exchange 2007 access and obtain/deploy certificates.
4. Move mailboxes from Exchange 2010/2007 to Exchange 2013 servers after building a database availability group.
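Step 4 above is typically driven from the Exchange Management Shell; a hedged sketch, where the database names are invented placeholders:

```powershell
# Hypothetical batch move of mailboxes to an Exchange 2013 database.
Get-Mailbox -Database "EX2010-DB01" |
  New-MoveRequest -TargetDatabase "EX2013-DB01"

# Monitor progress of the pending moves.
Get-MoveRequest | Get-MoveRequestStatistics
```

Moving mailboxes in batches like this keeps the coexistence window manageable and lets you verify each batch before continuing.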
Exchange Server 2013: upgrade, migration and coexistence with older v... (Microsoft Technet France)
The document discusses the upgrade and coexistence process for moving from Exchange 2010 or Exchange 2007 to Exchange Server 2013. Key steps include installing updates on the existing servers, deploying Exchange 2013 mailbox and client access servers, creating a legacy namespace for Exchange 2007, obtaining and deploying certificates, switching the primary namespace to the new Exchange 2013 servers, and moving mailboxes in batches between the existing and new servers. The upgrade process allows for coexistence between the new Exchange 2013 deployment and the existing Exchange 2010 or Exchange 2007 servers during the transition period.
Office Track: Exchange 2013 in the real world - Michael Van Horenbeeck (ITProceed)
This document summarizes a presentation about deploying and managing Exchange 2013 in a real-world environment. It discusses planning the namespace design and server topology across multiple datacenters for high availability. It also covers installing Exchange 2013 and ensuring interoperability with older Exchange versions. Finally, it describes the new "Managed Availability" monitoring and remediation features in Exchange 2013.
This document provides an overview of upgrading to Exchange Server 2016. It discusses Exchange Server 2016 requirements and supported upgrade paths from previous versions such as Exchange Server 2010 and 2013. It also covers options for implementing Exchange Server 2016 such as an on-premises only or hybrid deployment. The document then discusses various topics related to planning and implementing the upgrade such as understanding client access and message transport coexistence, migrating public folders, and removing previous Exchange server versions.
The document provides an overview of designing a Client Access Server (CAS) in Exchange 2013, including CAS requirements, technologies, and key configuration steps. It discusses configuring send and receive connectors, namespaces and email address policies, internal and external URLs for Outlook Anywhere and virtual directories, and SSL certificate configuration for the CAS servers.
Exchange 2013 introduces a new server role architecture with two main building blocks - the Database Availability Group (DAG) and the Client Access server role. The DAG allows for multiple Mailbox servers to host copies of mailbox databases and provide failover capabilities. The Client Access role is a load balanced front end that routes clients to the appropriate Mailbox server based on the active database copy. This new architecture aims to simplify deployment and administration while improving hardware efficiency and cross-version interoperability compared to previous versions of Exchange.
This document provides an overview of installing Microsoft Exchange Server 2010, including:
1) Preparing Active Directory by reviewing components like domains, forests, and trusts, and configuring DNS records and partitions for Exchange integration.
2) Installing Exchange server roles like Mailbox, Client Access, and Hub Transport on servers meeting hardware and software requirements.
3) Verifying a successful Exchange installation by testing services, logs, and mail functionality and deploying additional configuration.
Exchange Server 2013 Preview brings several new features and improvements, including:
1. Support for a multigenerational workforce through enhanced search capabilities and easier contact merging.
2. A refreshed user interface for Outlook 2013 Preview and Outlook Web App.
3. Greater integration with Microsoft SharePoint 2013 Preview and Lync 2013 Preview through new site mailboxes and improved eDiscovery capabilities.
Scott Schnoll is one of the technical gurus on Exchange. He speaks at conferences such as Microsoft TechEd, The Experts Conference, TechReady… and he gives us the privilege of presenting this session (note: session in English). He is the author of several reference books on Exchange. In this session, discover the new features of Exchange SP2, released in December 2011, and deployment best practices. The session will be an opportunity to explore what is new in Exchange Server 2010 SP2 while also revisiting some Exchange 2010 fundamentals. We will walk through the improvements around setup and deployment, mailbox auditing, unified messaging, and high availability, as well as the archiving and information-protection features of the messaging system.
This document outlines the evolution of Microsoft Exchange server from version 4.0 to 2013. It discusses key features and changes introduced in each new version, such as support for Outlook, integration with Active Directory, improved web access, mobile device support, continuous replication for high availability, role-based access control and new server roles. Each version built upon the previous one to provide enhanced email, calendaring and collaboration capabilities.
A Brief History of Microsoft Exchange Server (bedekarpm)
This document outlines the evolution of Microsoft Exchange server from version 4.0 to 2013. Key points include the introduction of Exchange 4.0 in 1996 which provided persistent internet connections, Exchange 5.0 in 1997 which integrated email, calendars and address books, Exchange 2007 which introduced roles and eliminated front-end/back-end concepts, and Exchange 2013 which featured the Exchange Admin Center and improved integration with SharePoint and Lync. Each new version brought performance enhancements and additional collaboration and mobile features.
This document provides an overview and agenda for migrating from Exchange Server 2003 and Active Directory 2008 to Exchange Server 2010 and Active Directory 2008 R2. The key steps include installing prerequisites, installing Exchange 2010, configuring Exchange 2010, migrating mailboxes and public folders from Exchange 2003, updating DNS, and removing the legacy Exchange 2003 servers once the migration is complete. PowerShell commands are provided as alternatives to the graphical user interface for many configuration tasks.
This document provides an overview of Module 4 which covers managing client access in Exchange Server. It discusses configuring the Client Access server role, Outlook Web App, and mobile messaging. Key topics include setting up the Client Access server, configuring Outlook Anywhere and Autodiscover, securing connections with certificates, and enabling Exchange ActiveSync on mobile devices. Hands-on demonstrations and labs are included to help administrators configure these client access services.
This document provides an overview of upgrading Exchange Server organizations from Exchange Server 2003 or Exchange Server 2007 to Exchange Server 2010. It discusses the different upgrade options and supported scenarios. It also outlines the processes for installing Exchange Server 2010, implementing coexistence between the server versions, and removing the legacy servers from the organization. Considerations are provided for client access, message transport, and administration during the upgrade and coexistence phases.
Microsoft releases cumulative updates (CUs) for Exchange Server 2013 that include all installation files, allowing updates to be applied without first installing a service pack. Previous versions of Exchange required separate installation of service packs and CUs. The document discusses prerequisites, installation, and post-installation configuration tasks for Exchange Server 2013, including preparing Active Directory, installing prerequisites on the Exchange server, running Setup.exe to install Exchange roles, configuring accepted domains and email address policies, and setting up send/receive connectors and DNS records.
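The Active Directory preparation step mentioned above is run from the Exchange 2013 setup media; a hedged sketch, where the organization name is a placeholder:

```powershell
# Hypothetical: prepare the schema and AD from an elevated prompt
# on a machine with the AD management tools installed.
.\Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
.\Setup.exe /PrepareAD /OrganizationName:"Contoso" /IAcceptExchangeServerLicenseTerms
```

After AD preparation succeeds, the same Setup.exe installs the Mailbox and Client Access roles on the new servers.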
The checklist for preparing your Exchange 2007 infrastructure for Exchange 20...Eyal Doron
The checklist for preparing your Exchange 2007 infrastructure for Exchange 2013 coexistence | 9#23
http://o365info.com/the-checklist-for-preparing-your-exchange-2007-infrastructure-for-exchange-2013-coexistence/
A preparation checklist for the project of - Exchange 2013/2007 coexistence environment, which focus on the needed URL address updates of the Exchange 2007 CAS.
Eyal Doron | o365info.com
Similar to Exchange 2013 Migration & Coexistence (20)
Automatisez, visualisez et améliorez vos processus d’entreprise avec Nintex Microsoft Technet France
Automatiser vos processus métiers vous permet non seulement de sécuriser et de standardiser les flux mais également de sauver du temps de travail a vos équipes, leur permettant de se concentrer sur le cœur de leur métier. Une fois automatisé, la valeur de ces processus peut être mesurée et ainsi le retour sur investissement calculé. Au cours de cette session nous verrons l’intérêt de l’automatisation des processus et les méthodes permettant d’améliorer vos processus, et d’en mesurer la valeur
Dans cette session, nous allons parcourir les différentes options de déploiement de Windows 10 pour l'entreprise. Parmi les nouveautés, nous décrirons la mise à jour « in place » et le provisionnement de machines au travers d'un outil de configuration appelé WICD. Nous verrons notamment ce dernier mode de déploiement dans une demonstration.
Retour d'expérience sur l'utilisation d'OMS Log Search pour constituer un Dashboard personnalisable et évolutif grâce aux informations collectées par les différentes solutions proposées dans OMS. L'objectif est de pouvoir monitorer simplement l'état de santé d'un SI hybride au sein d'une seule interface. Sécurité, performance, disponibilité...... Un Dashboard pour les gourverner tous où qu'ils soient!
Fusion, Acquisition - Optimisez la migration et la continuité des outils col...Microsoft Technet France
La restructuration des services IT lors d’une fusion acquisition est un challenge d’importance pour les entreprises concernées. La transition doit la plupart du temps être rapide, avec une forte contrainte de date buttoir et des impératifs techniques très impactant. Elle ne doit pas perturber les utilisateurs qui vont continuer à utiliser les outils collaboratifs à leur disposition. L'enjeu pour l'IT est de pouvoir migrer rapidement ces utilisateurs et leurs contenus collaboratifs dans la nouvelle structure, malgré bien souvent l’absence de contrôle total sur les environnements sources et destination. Cette session a pour objectif de vous faire part de notre retour d'expérience et des bonnes pratiques pour piloter de manière sereine les migrations Active Directory et de la messagerie Exchange/Office 365 dans de tels contextes.
Début 2016, les deux cabinets de conseil Solucom et Kurt Salmon se sont rapprochés pour former un nouveau leader du conseil en Europe, Wavestone, de 2 300 collaborateurs. Pour faciliter l'intégration des équipes, ce nouvel ensemble a déployé un portail Powell 365. Dans cette session, Yannick Taupiac, Senior Manager chez Wavestone, et Jean-Pierre Vimard, CEO de Powell Software, nous racontent comment ils ont déployé le portail en un temps record de 5 semaines.
Retour d’expérience sur le monitoring et la sécurisation des identités AzureMicrosoft Technet France
"La gestion et la sécurisation des identités cloud est, de nos jours, un sujet plus qu’essentiel. Venez découvrir au travers de cette session notre retour d’expérience sur les méthodes de gestion et de sécurisation de votre environnement Azure Active Directory.
Nous aborderons également les produits Azure AD Identity Protection et Azure AD Privileged Identity Management, inclus dans la suite Azure AD Premium P2."
Présentation des scénarios de mobilité couverts à date par la suite Enterprise Mobility + Security et retours d'expérience basés sur des projets de déploiement de cette solution au sein d'entreprises diverses. Quels sont les services offerts aux utilisateurs les plus primés/implémentés, quels sont les choix à faire avant de déployer ces solutions, quels sont les accompagnements à mettre en place pour garantir l'adhésion des utilisateurs à ces nouveaux services, etc.
Venez découvrir le SharePoint Framework et toutes les nouveautés autour du développement SharePoint. Dans cette session, vous découvrirez comment développer des modules d’extensibilité de la plateforme, comme notamment les principes de personnalisation et de déploiement de contenu via les CDN, les nouvelles méthodes pour des développer des Client Sides Web Parts ou encore les webhooks. Orienté autour d’outils et de technologies open source et de JavaScript, le SharePoint Framework est une nouvelle façon rapide, légère et robuste de développer des extensions à SharePoint Online ou SharePoint Server.
Cette session débutera par la présentation de la stratégie Software Defined Storage (SDN) de Microsoft en balayant les aspects privés, hybrides et publiques. Nous continuerons tout le long de la session par des cas d’usage fonctionnel s’appuyant sur les services de stockage Azure. Vous appréhenderez ainsi les critères de choix à prendre en compte pour concevoir une architecture cible. C’est dans cette démarche que nous clôturerons la session par un retour d’expérience client sur la traçabilité de production et packaging industriel s’appuyant sur les services Azure Tables & Blobs
Cette session vous présente le nouveau cycle de mises à jour introduit pour Windows 10. Avec WAAS, de nouvelles fonctionnalités seront publiées régulièrement : quel est l'impact sur vos process IT ? Comment vous organiser pour prendre en compte ce nouveau rythme ? Quels outils pour vous aider ?
"Les organisations de toute taille s’appuient sur un nombre croissant de services dans le Cloud pour assoir les nouveaux usages et modèles d’affaire dans le cadre de leur transformation numérique. Au-delà des contrôles en place et autres dispositions prises par défaut en matière de sécurité par ces services, d’aucun voit dans le chiffrement de leurs données et l’utilisation de leurs propres clés de chiffrement les clés de la confiance.
Dans ce contexte, cette session vous propose une vue d'ensemble illustrée des différentes solutions de chiffrement proposées dans Azure et Office 365. Elle vise à présenter ces solutions et à donner des indications claires sur la façon de choisir la ou les solutions appropriées en fonction de cas d’usage donnés ou/et d’exigences particulières. Les risques ainsi couverts seront explicités au cas par cas."
Protéger votre patrimoine informationnel dans un monde hybride avec Azure Inf...Microsoft Technet France
"Avec l’évolution en marche vers le Cloud pour la recherche d’économies et d’une meilleure agilité dans le cadre de leur transformation numérique, les organisations font face à des besoins croissants de protection et de contrôle des informations sensibles.
Des questions se posent inévitablement : Comment identifier correctement les informations sensibles ? Et sur cette base, comment appliquer le bon niveau de contrôle pour garantir la sécurité la protection de la vie privée de ces informations ? Comment contrôler les clés qui sont utilisées ?
Dans ce contexte, cette session présente comment la nouvelle solution Azure Information Protection aide les organisations aux différents stades de l’adoption du cloud à protéger leur patrimoine informationnel. Azure Information Protection combine la technologie précédemment disponible dans les services RMS (Rights Management Services) et des apports issus de l’acquisition de Secure Islands pour permettre la classification pertinente des informations (sensibles), leur chiffrement, un contrôle d’accès adapté, l’application de politiques et plus encore."
"Il n’y a aucune économie numérique sans identité. Les relations numériques et la connectivité avec les personnes et les autres acteurs quels qu’ils soient sont en effet essentielles au succès des organisations aujourd’hui. L’identité est au centre de tout, qu’il s’agisse de celle de leurs collaborateurs, partenaires, clients, appareils, « objets », etc.
Cette session introduit la stratégie de Microsoft pour couvrir les scénarios clé de B2E (business-to-employees), B2B (business-to-business) et de B2C (business-to-consumers) afin de permettre les nouveaux usages et/ou modèles d’affaires souhaités dans le cadre de la nécessaire transformation numérique des organisations.
La session illustrera comment les différentes offres et éditions d’Azure Active Directory associent les fonctions plus avancées pour l’identité comme un Service (IDaaS) avec l’externalisation des opérations pour obtenir la réduction des efforts de mise en œuvre, des coûts et des risques."
Vous avez dit « authentification sans mot de passe » : une illustration avec ...Microsoft Technet France
"L’actualité ne cesse de se faire l’écho de cas de vols de mots de passe toujours plus nombreux vis-à-vis de services en ligne. Pour répondre à cette situation, les travaux de l’alliance FIDO (Fast IDentity Online) offrent une authentification sans mot de passe fondée sur la cryptographie asymétrique.
Cette session introduit les spécifications FIDO 2 implémentées dans Windows 10 au travers de Microsoft Hello et de Microsoft Passport, et illustre l’utilisation de ces mécanismes avec la plateforme FranceConnect.
FranceConnect est un nouveau système d’identification à l’initiative de la Direction interministérielle du numérique et du système d’information et de communication de l’État (DINSIC). FranceConnect vise à faciliter l’accès des usagers aux services numériques de l’administration en ligne."
"La version 2016 de SQL Server est une version majeure et apporte de nombreuses nouveautés aussi bien fonctionnelles que techniques. Sans pour autant oublier la sécurité ! Durant cette session nous passerons en revue les fondamentaux de la sécurité dans une base de données, puis nous vous présenterons des méthodes de protection des données, et nous vous présenterons aussi la nouvelle fonctionnalité qu’est « Always Encrypted » disponible aussi dans Azure SQL Database avec Azure KeyVault.
"
Une architecture hybride était souvent vue comme un déploiement temporaire pour la transition vers le Cloud Microsoft. Cependant, avec l'arrivée de SharePoint Server 2016, qui a été conçu et inspiré depuis Office 365, beaucoup d’organisations sont à la recherche de moyens pour combiner leurs investissements SharePoint existants avec le Cloud.
Un déploiement hybride est la voie à suivre pour de nombreuses organisations au moins pour quelques années encore.
Au cours de cette session, nous vous proposons de revenir sur les scenarios déjà existants ainsi que les nouveautés. Que ce soit OneDrive for Business, la recherche, les sites SharePoint, Delve, Delve Analytics, Power BI ou encore les Groupes Office 365, Video ou Planner, nous verrons ensemble comment une topologie hybride peut vous permettre dès maintenant de tirer le potentiel maximum de vos infrastructures SharePoint.
" Avec des utilisateurs mobiles et autonomes, le MDM est une solution de choix pour une gestion légère et efficace des périphériques Windows 10. Cette session est l'occasion de montrer, à travers quelques démonstrations de Microsoft Intune et Azure AD, comment l'identité est au centre de cette gestion et de nouveaux scénarios. Nous vous démontrerons comment déployer des applications universelles métier en entreprise, par exemple, pour faire des achats en volume, pour la facturation ou l'utilisation d'identités professionnelles. avec le Windows Store pour Entreprises."
"La sécurité de votre Système d’Information est à l’honneur dans ce talk.
- Comment sécuriser mes données et mes échanges avec Office 365 ?
- Où sont mes données une fois migrées ?
- Comment sécuriser mes périphériques en mobilité ?
- Protéger mes informations dans l’approche « Cloud First »,
- …
Nos experts répondent à TOUTES vos questions !"
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
2. Exchange Server 2013 Migration and Coexistence
Scott Schnoll
Senior Content Developer, Microsoft Corporation
scott.schnoll@microsoft.com
http://aka.ms/Schnoll
Twitter: @Schnoll
Infrastructure, communication & collaboration
3. Upgrade Approach
Preparing for Exchange 2013
Upgrade and Coexistence
Moving Mailboxes
Public Folders
Managing Coexistence
Quotas
5. Upgrade from Exchange 2010
1. Prepare
   - Verify prerequisites
   - Install Exchange 2010 SP3 or later across the organization
   - Prepare AD with the Exchange 2013 schema
   - Validate existing client access
2. Deploy Exchange 2013 servers
   - Install both the Exchange 2013 MBX and CAS roles
3. Obtain and deploy certificates
   - Obtain and deploy certificates on the Exchange 2013 CAS
4. Switch the primary namespace to Exchange 2013 CAS
   - Exchange 2013 fields all traffic, including traffic from Exchange 2010 users
   - Validate using the Remote Connectivity Analyzer
5. Move mailboxes
   - Build out the DAG
   - Move Exchange 2010 users to Exchange 2013 MBX
   - Migrate legacy Public Folders to Modern Public Folders
6. Repeat for additional sites
[Diagram: clients reach autodiscover.contoso.com and mail.contoso.com in the Internet-facing site (upgrade first), which holds the Exchange 2010 SP3 CAS/HUB/MBX servers alongside the new Exchange 2013 CAS and MBX; an intranet site holds the remaining Exchange 2010 servers.]
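The mailbox-move step above is driven by move requests from the Exchange Management Shell. A minimal sketch, in which the mailbox identity, database names, and batch name are all placeholders:

```powershell
# Move one mailbox to an Exchange 2013 database (online move; the user stays connected)
New-MoveRequest -Identity 'kim@contoso.com' -TargetDatabase 'E2013-DB01'

# Move everything on a legacy database as a named batch, then watch progress
Get-Mailbox -Database 'E2010-DB01' |
    New-MoveRequest -TargetDatabase 'E2013-DB01' -BatchName 'E2010-to-E2013'
Get-MoveRequest -BatchName 'E2010-to-E2013' | Get-MoveRequestStatistics
```

Completed move requests remain in the system until cleared with Remove-MoveRequest, so they can be audited after the batch finishes.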
6. Upgrade from Exchange 2007
1. Prepare
   - Verify prerequisites
   - Install Exchange 2007 SP3 + RU10 or later across the organization
   - Prepare AD with the Exchange 2013 schema
   - Validate existing client access
2. Deploy Exchange 2013 servers
   - Install both the Exchange 2013 MBX and CAS servers
3. Create the legacy namespace
   - Create a DNS record (legacy.contoso.com) pointing to the Exchange 2007 CAS
4. Obtain and deploy certificates
   - Obtain and deploy certificates on the Exchange 2013 CAS
   - Deploy certificates on the Exchange 2007 CAS
5. Switch the primary namespace to Exchange 2013 CAS
   - Validate using the Remote Connectivity Analyzer
6. Move mailboxes
   - Build out the DAG
   - Move Exchange 2007 users to Exchange 2013 MBX
   - Migrate legacy Public Folders to Modern Public Folders
7. Repeat for additional sites
[Diagram: clients reach autodiscover.contoso.com, mail.contoso.com, and legacy.contoso.com in the Internet-facing site (upgrade first), which holds the Exchange 2007 SP3 RU10 CAS/HUB/MBX servers alongside the new Exchange 2013 CAS and MBX; an intranet site holds the remaining Exchange 2007 servers.]
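The legacy-namespace record (step 3) can be created on a Windows DNS server from the command line. A sketch, where the DNS server name, zone, and IP address are placeholders for your environment:

```powershell
# Point legacy.contoso.com at the Exchange 2007 CAS (or its load-balanced VIP)
dnscmd DNS01 /RecordAdd contoso.com legacy A 203.0.113.10

# Verify the record resolves before switching the primary namespace
nslookup legacy.contoso.com DNS01
```

Remember the record usually needs to exist in both the internal and the external (Internet-facing) DNS zones so that all clients can be redirected during coexistence.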
8. Coexistence: supported versions and clients
- Exchange Server 2010 SP3 and later
- Exchange Server 2007 SP3 RU10 and later
- RPC over HTTP is the only method of connectivity for Outlook clients
- Entourage 2008 for Mac, Web Services Edition
- Outlook for Mac 2011

Outlook version  | Minimum supported version                   | Recommended version*
Outlook 2013     | RTM                                         | August 2013 update
Outlook 2010     | SP1 + Nov 2012 update (14.0.6126.5000+)     | June 2013 update
Outlook 2007     | SP3 + Nov 2012 update (12.0.6665.5000+)     | August 2013 update

*The recommended updates fix an issue with Outlook using the wrong Exchange 2013 internal/external settings.
9. Operating system requirements
- Windows Server 2008 R2 SP1 or later, Standard or Enterprise
  - Standard: Exchange 2013 Client Access servers and standalone Mailbox servers
  - Enterprise: Exchange 2013 Mailbox servers in a DAG
- Windows Server 2012 RTM or later, Standard or Datacenter
- Windows Server 2012 R2 (support coming in Exchange Server 2013 SP1)
10. Preparing Active Directory
- Install Exchange 2010 SP3 or Exchange 2007 SP3 RU10 on all servers
- Extend the AD schema for Exchange Server 2013: setup /PrepareSchema (or /ps)
- Prepare the Exchange organization for Exchange Server 2013: setup /PrepareAD (or /p)
- Prepare the remaining AD domains that have, or will have, mail-enabled objects for Exchange Server 2013:
  - Local domain: setup /PrepareDomain (or /pd)
  - Remote domains, one at a time: setup /PrepareDomain:FQDN.of.domain (or /pd:FQDN.of.domain)
  - Or all at once: setup /PrepareAllDomains (or /pad)
- Validate existing client access using the Remote Connectivity Analyzer and test cmdlets
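The preparation switches above can be run as one sequence from the Exchange 2013 setup media; the child-domain name is a placeholder, and the account needs Schema Admins / Enterprise Admins rights:

```powershell
# Run from the root of the Exchange Server 2013 installation media.

# 1. Extend the Active Directory schema
.\Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms

# 2. Prepare the Exchange organization (creates/updates org-level objects)
.\Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms

# 3. Prepare each domain holding mail-enabled objects...
.\Setup.exe /PrepareDomain:child.contoso.com /IAcceptExchangeServerLicenseTerms
# ...or prepare them all in one pass
.\Setup.exe /PrepareAllDomains /IAcceptExchangeServerLicenseTerms
```

Note that Exchange 2013 Setup requires the /IAcceptExchangeServerLicenseTerms switch in unattended mode — the new license-terms parameter mentioned later in this deck.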
12. Installing Exchange 2013
- Install both MBX and CAS servers
  - CAS is proxy only
  - MBX runs the PowerShell commands
- Use the latest CU package
  - No more SP-then-RU installation sequence
- Exchange 2013 Setup
  - GUI and command-line options
  - Command-line parameters
  - New parameter for accepting the license terms
- After the fact: you cannot remove roles in Exchange 2013

Setup.exe /mode:install /roles:c,m,mt /IAcceptExchangeServerLicenseTerms
16. Certificate management
- Export with the private key and import to other CAS from the UI
- Assign services right from the UI
- First expiration notification shown 30 days prior to expiration
- Subsequent notifications provided daily
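The same export/import can be done from the Exchange Management Shell instead of the UI. A sketch, where the thumbprint, server name, and UNC path are placeholders:

```powershell
# Export the certificate (with its private key) from the first Exchange 2013 CAS...
$pfxPassword = Read-Host "PFX password" -AsSecureString
$export = Export-ExchangeCertificate -Thumbprint '<thumbprint>' `
    -BinaryEncoded -Password $pfxPassword
[System.IO.File]::WriteAllBytes('\\fileserver\certs\mail-contoso-com.pfx', $export.FileData)

# ...then import it on another CAS and assign services to it
Import-ExchangeCertificate -Server 'CAS02' `
    -FileData ([System.IO.File]::ReadAllBytes('\\fileserver\certs\mail-contoso-com.pfx')) `
    -Password $pfxPassword
Enable-ExchangeCertificate -Server 'CAS02' -Thumbprint '<thumbprint>' -Services IIS,SMTP
```

Keep the exported PFX on a protected share: it contains the private key.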
18. 31
Use split DNS or pinpoint DNS for Exchange host names
mail.contoso.com for Exchange connectivity on intranet and Internet
mail.contoso.com has different IP addresses in intranet/Internet DNS zones
This is not a requirement; some customers may have unique environments where different names would be helpful
Don’t list machine host names in certificate host name list
Use load-balanced (LB) arrays for intranet and Internet access to servers
Use “Subject Alternative Name” (SAN) certificate
Public CA providers are beginning to restrict the issuing of certs with invalid DNS names
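A hedged sketch of generating such a SAN certificate request; the names and output path are placeholders for your namespace:

```powershell
# The request lists only the load-balanced namespace names - no machine
# host names, and no invalid suffixes such as .local or .internal.
$req = New-ExchangeCertificate -GenerateRequest `
    -SubjectName 'cn=mail.contoso.com' `
    -DomainName 'mail.contoso.com','autodiscover.contoso.com' `
    -PrivateKeyExportable $true
Set-Content -Path '\\fileserver\certs\mail.req' -Value $req
# Submit mail.req to a public CA, then complete the request with
# Import-ExchangeCertificate and assign services.
```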
20.
Layer 7 load balancers no longer required for an Exchange 2013 namespace
Layer 4 (aka no-affinity/persistence) and Layer 7 are supported for Exchange 2013 namespace
Validate creation with https://www.exrca.com/
Legacy namespace should begin or continue to use Layer 7 load balancing
Script the change for legacy namespaces (and have a script to revert back if required)
Update mail and Autodiscover DNS records to point to an Exchange 2013 CAS server
Exchange 2007 and Exchange 2010 Autodiscover will redirect to Exchange 2013 CAS for
Exchange 2013 mailbox
21. Switching to CAS 2013
[Diagram: Outlook Anywhere clients connect to mail.contoso.com over RPC/HTTP through the load balancer (Layer 7 or Layer 4) to the E2013 CAS (OA enabled, IIS Auth: NTLM), which proxies over HTTP to the E2007/E2010 CAS (OA enabled, Client Auth: Basic, IIS Auth: Basic/NTLM) and on via RPC to the Mailbox servers.]
1. Enable Outlook Anywhere on all legacy CAS
2. IIS authentication methods: IIS Auth must have NTLM enabled on all legacy CAS
3. Client settings: make the legacy OA settings the same as the 2013 CAS so all clients get the same proxy hostname
4. DNS cutover: a low TTL on the existing record in the days prior to the cutover is a good idea
[Diagram also shows E2013 and E2007/E2010 Mailbox servers in the Internet-facing site and E2007/E2010 Mailbox servers in the intranet-facing site.]
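Steps 1–3 map onto a handful of cmdlets on the legacy side. A hedged sketch for an Exchange 2010 CAS; the server and host names are placeholders, and the parameters differ slightly on Exchange 2007:

```powershell
# 1. Enable Outlook Anywhere on every legacy CAS.
Enable-OutlookAnywhere -Server CAS2010-01 `
    -ExternalHostname mail.contoso.com `
    -ClientAuthenticationMethod Basic -SSLOffloading $false

# 2. Ensure the IIS authentication methods include NTLM.
Set-OutlookAnywhere 'CAS2010-01\Rpc (Default Web Site)' `
    -IISAuthenticationMethods Basic,NTLM

# 3. Matching -ExternalHostname on legacy and 2013 CAS gives every client
#    the same proxy hostname ahead of the DNS cutover (step 4).
```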
22.
23.
2013 to 2007 in the same AD site
2013 to 2007 in a different AD site
2013 to 2010 in a different AD site
2013 to 2013 in a different AD site
24. Exchange 2010 Coexistence
[Diagram: an OWA request to mail.contoso.com goes through a Layer 4 LB to the E2013 CAS, which either HTTP-proxies to the E2010 CAS protocol head (IIS) and on via RPC to the E2010 MBX store, or issues a cross-site proxy request / cross-site silent redirect across the site boundary to paris.mail.contoso.com (Layer 7 LB, E2010 CAS, RPC to E2010 MBX).]
25. Exchange 2007 Coexistence
[Diagram: an OWA request to mail.contoso.com goes through a Layer 4 LB to the E2013 CAS, which issues a same-site silent redirect to legacy.contoso.com (Layer 7 LB, E2007 CAS, RPC to the E2007 MBX store), HTTP-proxies where needed, or issues a cross-site proxy request / cross-site silent redirect across the site boundary to paris.mail.contoso.com (Layer 7 LB, E2007 CAS, E2007 MBX).]
26. Protocol
Requires:
• 2007 user, 2010 namespace: legacy namespace
• 2007 user, 2013 namespace: legacy namespace
• 2010 user, 2013 namespace: no additional namespaces
OWA:
• 2007 user, 2010 namespace: same AD site: silent or SSO FBA redirect; externally facing AD site: manual or silent/SSO cross-site redirect; internally facing AD site: proxy
• 2007 user, 2013 namespace: silent redirect to the CAS 2007 ExternalURL in the same or a different AD site
• 2010 user, 2013 namespace: same AD site: proxy to CAS 2010; different AD site: cross-site silent redirect to the ExternalURL
EAS:
• 2007 user, 2010 namespace: EAS v12.1+: Autodiscover & redirect; older EAS devices: proxy
• 2007 user, 2013 namespace: proxy to MBX 2013
• 2010 user, 2013 namespace: proxy to CAS 2010 (all noted protocols)
Outlook Anywhere:
• 2007 user, 2010 namespace: direct CAS 2010 support
• 2007 user, 2013 namespace: proxy to CAS 2007
Autodiscover:
• 2007 user, 2010 namespace: Exchange 2010 answers the Autodiscover query for the 2007 user
• 2007 user, 2013 namespace: Exchange 2013 answers the Autodiscover query for the 2007 user
EWS:
• 2007 user, 2010 namespace: uses Autodiscover to find the CAS 2007 EWS ExternalURL
• 2007 user, 2013 namespace: uses Autodiscover to find the CAS 2007 EWS ExternalURL
POP/IMAP:
• 2007 user, 2010 namespace: proxy
• 2007 user, 2013 namespace: proxy to CAS 2007
OAB:
• 2007 user, 2010 namespace: direct CAS 2010 support
• 2007 user, 2013 namespace: proxy to CAS 2007
RPS: n/a
ECP:
• n/a for 2007 users
• 2010 user, 2013 namespace: same AD site: proxy to CAS 2010; different AD site: cross-site silent redirect to the ExternalURL
28.
Batch management
Reporting
Retry semantics
Uses Mailbox Replication Service (MRS) internally
New-MigrationBatch
Get-MigrationUserStatistics
Workload Management (WLM) will throttle moves to maintain a good user experience
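A hedged sketch of the batch cmdlets; the batch name, CSV path, and target database are placeholders, and the CSV needs an EmailAddress column:

```powershell
# Create and auto-start a batch of local moves to Exchange 2013.
New-MigrationBatch -Name 'Wave1' -Local -AutoStart `
    -CSVData ([System.IO.File]::ReadAllBytes('C:\moves\wave1.csv')) `
    -TargetDatabases 'DB01'

# Report on per-user progress, then complete the batch.
Get-MigrationUser -BatchId 'Wave1' | Get-MigrationUserStatistics
Complete-MigrationBatch -Identity 'Wave1'
```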
29.
30. Existing Public Folders can be migrated to Exchange 2013
Public Folder Replication is removed
End user experience doesn’t change
Exchange 2013 users can access Exchange 2010/Exchange 2007 Public Folders
Exchange 2010/Exchange 2007 users cannot access Exchange 2013 Public Folders
Migration of Public Folders is a cut-over migration
Similar to online mailbox moves
31. Tool available to analyze the existing Public Folder hierarchy to determine how many Exchange 2013 Public Folder mailboxes are recommended
Users continue to access existing Public Folder deployment while data is copied
Data migration happens in the background
There will be a short downtime while the migration is finalized
Once migration completes, everyone switches at the same time
Can switch back, but any post migration Public Folder changes are lost
32. Public Folder Migration
from Exchange 2007 or Exchange 2010 Public Folders
1. Prepare
Outlook clients: install the Exchange SPs and/or updates across the org (Exchange 2007 SP3 RU10 or Exchange 2010 SP3)
Migrate all users that require Public Folder access to Exchange 2013
2. Analyze
Take a snapshot of the existing PF folder structure, statistics and permissions
Map PF folders to PF mailboxes
3. Create new Public Folder mailboxes (PF mbx 1, PF mbx 2, PF mbx 3)
Set to HoldForMigration mode; mailboxes are invisible to clients
4. Begin the migration request (copies the legacy PF databases into the PF mailboxes)
Clients continue to access and create new data during the copy
After the copy is complete, the migration request status is AutoSuspended
5. Finalize the migration request
Update the snapshot of the existing PF folder structure, statistics and permissions
Lock the source; clients are logged off; a final sync occurs
6. Validate
Check and verify the destination folders
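The steps above map onto the Exchange 2013 serial public folder migration cmdlets and scripts. A hedged sketch; the file paths, mailbox name, size threshold, and legacy server name are placeholders:

```powershell
# 2. Analyze: snapshot statistics and generate the folder-to-mailbox map
#    (scripts ship in the Exchange 2013 Scripts folder).
.\Export-PublicFolderStatistics.ps1 C:\pf\stats.csv LEGACYPF01
.\PublicFolderToMailboxMapGenerator.ps1 20GB C:\pf\stats.csv C:\pf\map.csv

# 3. Create the target PF mailboxes, held for migration (invisible to clients).
New-Mailbox -PublicFolder 'PF mbx 1' -HoldForMigration:$true

# 4. Begin the migration request; it auto-suspends after the initial copy.
New-PublicFolderMigrationRequest `
    -SourceDatabase (Get-PublicFolderDatabase -Server LEGACYPF01) `
    -CSVData (Get-Content C:\pf\map.csv -Encoding Byte)

# 5. Finalize during the downtime window: lock the source, resume the
#    request for the final sync, then declare the migration complete.
Set-OrganizationConfig -PublicFoldersLockedForMigration:$true
Set-PublicFolderMigrationRequest -Identity \PublicFolderMigration -PreventCompletion:$false
Resume-PublicFolderMigrationRequest -Identity \PublicFolderMigration
Set-OrganizationConfig -PublicFolderMigrationComplete:$true
```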
33.
34. Manage Exchange 2013 mailboxes
Manage Exchange 2013 certificates
Manage Exchange 2013 servers
Manage some Exchange 2007/2010 server attributes
View and update Exchange 2010/2007 mailboxes and properties (with a few limitations)
35.
This is due to more accurate space-usage calculation of items within the database compared to previous versions
Expect roughly a 30% increase in reported quota usage, though it will vary based on the content types
You may want to increase the quotas of any user at 75% or more of their quota prior to moving their mailbox to Exchange 2013
The database size on disk does NOT increase
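A hedged sketch of raising quotas for users above 75% ahead of their move; the 2 GB baseline and the new limits are example values, not defaults:

```powershell
# Assume a 2 GB ProhibitSendQuota today; bump anyone above 75% usage
# by roughly 30% before moving them, to absorb the reporting increase.
Get-Mailbox -ResultSize Unlimited | Where-Object {
    (Get-MailboxStatistics $_).TotalItemSize.Value.ToBytes() -gt (0.75 * 2GB)
} | Set-Mailbox -UseDatabaseQuotaDefaults $false `
      -IssueWarningQuota '2.4GB' `
      -ProhibitSendQuota '2.6GB' `
      -ProhibitSendReceiveQuota '2.8GB'
```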
41. Office-related Blogs
• Office Blogs – http://blogs.office.com/
• Exchange Team Blog – http://aka.ms/ehlo
• Lync Team Blog – http://aka.ms/lyncblog
• SharePoint Blog – http://aka.ms/spblog
• Yammer Blog – http://aka.ms/yammerblog
• Outlook Blog – http://aka.ms/outlookblog
#mstechdays
Infrastructure, communication & collaboration
42. Office-related Blogs
• Excel Blog – http://aka.ms/excelblog
• Power BI Blog – http://aka.ms/pbiblog
• Office 365 for Business Blog – http://aka.ms/o365fbblog
• Project Blog – http://aka.ms/msprojectblog
• OneNote Blog – http://aka.ms/onenoteblog
43. Office-related Blogs
• Access Blog – http://aka.ms/accessblog
• OneDrive Blog – http://blog.onedrive.com/
• PowerPoint Blog – http://aka.ms/pptblog
• Word Blog – http://aka.ms/wordblog
• Office for Mac Blog – http://aka.ms/ofmblog
Invalid name examples: .local, .internal, etc.
Enable Outlook Anywhere on all legacy CAS to allow the 2013 CAS to discover and proxy through rpcproxy.dll. Ensure the IIS authentication methods on the legacy CAS have NTLM enabled. Make sure the external hostname matches between the legacy CAS and the 2013 CAS.
Make sure to point out how 2007 EAS users are proxied: 2013 CAS -> 2007 CAS -> 2007 mailbox.
Batching capabilities are now extended to Exchange on-premises (just as we have had in O365 for some time). This allows retry semantics and the ability to batch users for moves to Exchange 2013. Workload Manager will throttle moves to maintain a positive end-user experience; this can reduce the rate of user mailbox moves.
Now let us look at the steps for migrating public folders from 2007 or 2010 to 2013. This is an overview; we have an interactive session where we can go into details on a specific step. Up here you have the 2007 and 2010 PF deployment, which we will migrate to a 2013 PF deployment.

1. Preparation. The first step is ensuring that the existing Exchange environment is prepared to handle the migration. The coexistence updates are needed on all 2007 and 2010 servers across the organization. All users who require PF access should first be moved to 2013 servers.

2. Analyze. Next, analyze the existing public folder deployment: the structure, item counts in public folders, and permissions. We recommend taking a snapshot of this information so you can validate it later once the data is moved to 2013. A mapping then needs to be created in which public folders are mapped to their host mailboxes. Things to consider when assigning public folders to mailboxes are existing size, room for future growth, and proximity to clients. At release we will have some scripts to help you create the mapping.

3. Create PF mailboxes. Once the mapping is done, the PF mailboxes need to be created on the 2013 servers. To distinguish between mailboxes created pre- and post-migration, they are stamped with a HoldForMigration attribute.

4. Start the migration. This is done by creating a migration request; its input is the mapping created in step 2. This begins a data copy from the public folder databases to the mailboxes. It happens in the background, and clients continue to access the existing PF deployment. Once the copy is complete, the migration request enters an AutoSuspended state, where it remains until the administrator takes the next step: finalizing the migration request.

5. Finalization. The finalization step needs to be planned over a downtime window. First, a quick update to the snapshot taken in step 2 is recommended. Finalization then happens in two steps. First, the administrator sets a flag locking the source public folders; this triggers a logoff of all user clients, after which the administrator waits for replication to complete. Second, the administrator initiates finalization of the migration request, which does a final sync to make the 2013 data current and completes the switchover of clients to 2013.

6. Validate. The last step is a validation to make sure all data has been migrated. The snapshot data from the earlier steps can be used to validate this.

-----------------------------------
For Q&A and more detail: rolling back after migration is possible, but performing this step could lead to a loss of public folder data, since any changes to public folders made after the Exchange 2013 Preview migration was finalized will not be reflected in the Exchange 2010 public folders. In addition, as part of the rollback, we recommend that you remove any Exchange 2013 Preview public folders that were created as part of the migration process.

For the migration of a geo-distributed hierarchy, how can I ensure that the public folders are created in the location nearest to the target users? As part of the migration process, a CSV file is generated (using the publicfoldertomailboxmapgenerator.ps1 script) which contains the folder-to-mailbox mapping for the new hierarchy. You can use this CSV to create public folder mailboxes in the appropriate geographic location and modify the CSV file to place the required folders in the appropriate mailbox so they are near the target users.

Draft PF FAQ; document any additional questions that are not on this list for Andrea: http://technet.microsoft.com/en-us/library/jj552408(v=exchg.150). SP3 is required to migrate from 2010, and the coexistence RU is required to migrate from E12.
The coexistence update contains code in the store that triggers during migration to kick clients off and keep them out until the migration is complete. It also has code to send clients to Exchange 2013 after migration is complete, and code to allow transport to understand Exchange 2013 PFs when delivering mail to mail-enabled PFs in Exchange 2013.