In these Microsoft slides we can see what SQL Server 2014 delivers in areas such as: memory-optimized tables, changes to cardinality estimation, backup encryption, architecture improvements, AlwaysOn, Resource Governor changes, and data files in Azure.
4. SQL Server engine: In-memory OLTP

New high-performance, memory-optimized online transaction processing (OLTP) engine integrated into SQL Server and architected for modern hardware trends.

[Architecture diagram: the In-Memory OLTP engine (memory-optimized tables and indexes, natively compiled SPs and schema, In-Memory OLTP compiler) sits alongside the classic engine components (buffer pool for tables and indexes, lock manager, proc/plan cache for ad-hoc T-SQL, interpreter and plans, parser/catalog/optimizer). Both share the transaction log; memory-optimized tables persist to a memory-optimized table file group, disk-based tables to the data file group.]
5. Suitable application characteristics

• Application is suited for in-memory processing
  ‐ All performance-critical data already fits in memory
  ‐ Transaction locking or physical latching is causing stalls and blocking
• Application is "OLTP-like"
  ‐ Relatively short-lived transactions
  ‐ High degree of concurrent transactions from many connections
  ‐ Examples: stock trading, travel reservations, order processing
• Application porting is simplified if
  ‐ Stored procedures are used
  ‐ Performance problems are isolated to a subset of tables and SPs
6. In-memory OLTP architecture

Drivers (hardware and business trends): steadily declining memory prices and NVRAM; many-core processors; stalling CPU clock rates; TCO.

Tech pillars and their benefits:
• SQL Server integration: same manageability, administration, and development experience; integrated queries and transactions; integrated HA and backup/restore. Benefit: hybrid engine and integrated experience.
• Main-memory optimized: optimized for in-memory data; indexes (hash and range) exist only in memory; no buffer pool; stream-based storage for durability. Benefit: high-performance data operations.
• High concurrency: multiversion optimistic concurrency control with full ACID support; core engine uses lock-free algorithms; no lock manager, latches, or spinlocks. Benefit: frictionless scale-up.
• T-SQL compiled to machine code: compiled via C code generator and Visual C compiler; invoking a procedure is just a DLL entry point; aggressive optimizations at compile time. Benefit: efficient business-logic processing.
7. Design considerations for memory-optimized tables

Table constructs
• Fixed schema; no ALTER TABLE; must drop/recreate/reload
• No LOB data types; row size limited to 8,060 bytes
• No constraints support (primary key only)
• No identity or calculated columns, no CLR

Data and table size considerations
• Size of table = (row size * # of rows)
• Size of hash index = (bucket_count * 8 bytes)
• Max size of SCHEMA_AND_DATA = 512 GB

IO for durability
• SCHEMA_ONLY vs. SCHEMA_AND_DATA
• Memory-optimized filegroup, data and delta files, transaction log, database recovery
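For illustration, a hedged worked sizing example (row count, row size, and bucket count are made up; bucket counts round up to a power of two):

  Size of table      ≈ 200 bytes/row * 5,000,000 rows ≈ 1.0 GB
  Size of hash index ≈ 8,388,608 buckets * 8 bytes    ≈ 64 MB

Both fit comfortably under the 512 GB SCHEMA_AND_DATA limit.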
8. Create Table DDL

CREATE TABLE [Customer](
    [CustomerID] INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),  -- hash index
    [Name] NVARCHAR(250) NOT NULL
        INDEX [IName] HASH WITH (BUCKET_COUNT = 1000000),  -- secondary indexes are specified inline
    [CustomerSince] DATETIME NULL
)
WITH (MEMORY_OPTIMIZED = ON,          -- this table is memory-optimized
      DURABILITY = SCHEMA_AND_DATA);  -- this table is durable
9. Create Procedure DDL

CREATE PROCEDURE [dbo].[InsertOrder] @id INT, @date DATETIME
WITH
    NATIVE_COMPILATION,  -- this proc is natively compiled
    SCHEMABINDING,       -- native procs must be schema-bound
    EXECUTE AS OWNER     -- an execution context is required
AS
BEGIN ATOMIC             -- creates a transaction if there is none; otherwise creates a savepoint
WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
     LANGUAGE = 'us_english')  -- session settings are fixed at create time
    -- insert T-SQL here
END
10. High-concurrency design considerations

Impact of no locks or latches
• Write-write conflict: design the application for this condition with try/catch
• Applications dependent on locking may not be good candidates

Multiple versions of records
• Increases the memory needed by memory-optimized tables
• Garbage collection is used to reclaim old versions

Transaction scope
• Supported isolation levels: Snapshot, Repeatable Read, Serializable
• Commit-time validation; retry logic is needed to deal with failures
11. Example: Write conflict

Time | Transaction T1 (SNAPSHOT)          | Transaction T2 (SNAPSHOT)
1    | BEGIN                              |
2    |                                    | BEGIN
3    | UPDATE t SET c1='bla' WHERE c2=123 |
4    |                                    | UPDATE t SET c1='bla' WHERE c2=123 (write conflict)

First writer wins.
12. Guidelines for usage

1. Declare the isolation level; no locking hints
2. Use retry logic to handle conflicts and validation failures (a sketch follows below)
3. Avoid long-running transactions
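As a hedged sketch of guideline 2, a retry wrapper around the earlier InsertOrder procedure; the retryable error numbers shown (41301, 41302, 41305, 41325) are the documented In-Memory OLTP conflict and validation errors:

DECLARE @retries INT = 10;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        EXEC dbo.InsertOrder @id = 1, @date = '2014-04-01';
        SET @retries = 0;  -- success: exit the loop
    END TRY
    BEGIN CATCH
        -- 41302: write conflict; 41305/41325: validation failures; 41301: dependency failure
        IF ERROR_NUMBER() IN (41301, 41302, 41305, 41325) AND @retries > 1
            SET @retries -= 1;  -- retryable: try again
        ELSE
            THROW;              -- not retryable, or out of retries
    END CATCH
END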
13. Design considerations for natively compiled stored procedures

Aspect                        | Natively compiled stored procedures                    | Non-native compilation
Performance                   | High; significantly fewer instructions to go through   | No different than T-SQL calls in SQL Server today
Migration strategy            | Application changes; development overhead              | Easier app migration, as interpreted T-SQL can still access memory-optimized tables
Access to objects             | Can only interact with memory-optimized tables         | All objects; transactions can span memory-optimized and B-tree tables
Support for T-SQL constructs  | Limited T-SQL surface area                             | Full T-SQL surface area (limits apply to memory-optimized interaction)
Optimization, stats, and plan | Statistics utilized at CREATE (compile) time           | Statistics updates can be used to modify the plan at runtime
Flexibility                   | Limited (no ALTER PROCEDURE; compile-time isolation level) | Ad-hoc query patterns
14. Performance gains

[Architecture diagram: a client app enters SQL Server.exe through the TDS handler and session management (no improvements in the communication stack, parameter passing, or result-set generation). Natively compiled SPs and schema (generated .dll, produced by the In-Memory OLTP compiler from the parser/catalog/algebrizer/optimizer output) run against the In-Memory OLTP engine for memory-optimized tables and indexes, 10-30x more efficient than the existing path through the interpreter for T-SQL, access methods, and buffer pool; query interop bridges the two. Storage: memory-optimized table filegroup (checkpoints are background sequential IO) and data filegroup, sharing the transaction log (reduced log bandwidth and contention; log latency remains).]
15. In-memory data structures

Rows
• New row format: the structure of the row is optimized for memory residency and access
• One copy of each row: indexes point to rows; they do not duplicate them

Indexes
• Hash index for point lookups
• Memory-optimized nonclustered index for range and ordered scans
• Indexes do not exist on disk; they are recreated during recovery
16. Memory-optimized table: Row format

Row header: Begin Ts (8 bytes) | End Ts (8 bytes) | StmtId (4 bytes) | IdxLinkCount (2 bytes) | index pointers (8 bytes * (IdxLinkCount - 1)), followed by the payload (table columns).

Key points
• Begin/End timestamps determine a row's validity
• No data or index pages; just rows
• Row size limited to 8,060 bytes to allow data to be moved to a disk-based table
• Not every SQL table schema is supported
17. Key lookup: B-tree vs. memory-optimized table

[Diagram: on a disk-based table, a key lookup traverses the non-clustered index B-tree pages to the matching index record; on a memory-optimized table, a hash index on Name points directly into a chain of rows (R1, R2, R3).]
19. In-Memory OLTP limitations

• No DML triggers
• No FOREIGN KEY or CHECK constraints
• No IDENTITY columns
• 8,060-byte hard limit on row length
• No UNIQUE indexes other than for the PRIMARY KEY
• A maximum of 8 indexes, including the index supporting the PRIMARY KEY
• No schema changes are allowed once a table is created (drop/recreate)
• Indexes are created only as part of the table creation
20. Index comparison for different kinds of operations

Operation                                                       | Memory hash              | Memory nonclustered | Disk
Index scan, retrieve all table rows                             | Yes                      | Yes                 | Yes
Index seek (=)                                                  | Yes (full key required)  | Yes¹                | Yes
Index seek (>, <, <=, >=, BETWEEN)                              | No (index scan)          | Yes¹                | Yes
Retrieve rows in a sort order matching the index definition     | No                       | Yes                 | Yes
Retrieve rows in a sort order matching the reverse of the index | No                       | No                  | Yes
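To illustrate the table above, a hedged sketch contrasting the two memory-optimized index types (table and column names are hypothetical):

CREATE TABLE dbo.Orders (
    OrderID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),  -- point lookups
    OrderDate DATETIME NOT NULL
        INDEX IX_OrderDate NONCLUSTERED,  -- range index: supports >, <, BETWEEN, and ordered scans
    Amount MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Served by the hash index (full-key equality):
SELECT Amount FROM dbo.Orders WHERE OrderID = 42;

-- Served by the nonclustered index (range predicate, rows returned in index order):
SELECT OrderID FROM dbo.Orders
WHERE OrderDate BETWEEN '2014-01-01' AND '2014-01-31'
ORDER BY OrderDate;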
21. Memory management

Data resides in memory at all times
• Must configure SQL Server with sufficient memory to store memory-optimized tables
• Failure to allocate memory will fail the transactional workload at run time
• Integrated with the SQL Server memory manager; reacts to memory pressure where possible

Integration with Resource Governor
• "Bind" a database to a resource pool
• Memory-optimized tables in a database cannot exceed the limit of the resource pool
• Hard limit (80% of physical memory) to ensure the system remains stable under memory pressure
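A minimal hedged sketch of binding a database to a pool (names and the 50 percent cap are illustrative):

-- Create a pool capping the memory available to memory-optimized tables
CREATE RESOURCE POOL Pool_IMOLTP WITH (MAX_MEMORY_PERCENT = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- Bind the database to the pool; takes effect when the database next comes online
EXEC sp_xtp_bind_db_resource_pool @database_name = N'InMemDB', @pool_name = N'Pool_IMOLTP';

ALTER DATABASE InMemDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE InMemDB SET ONLINE;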
22. Garbage collection

Stale row versions
• Updates, deletes, and aborted insert operations create row versions that (eventually) are no longer visible to any transaction
• They slow down scans of index structures
• They create unused memory that needs to be reclaimed (i.e., garbage collected)

Garbage collection (GC)
• Analogous to the version-store cleanup task that supports Read Committed Snapshot Isolation (RCSI) for disk-based tables
• The system maintains an 'oldest active transaction' hint

GC design
• Non-blocking, cooperative, efficient, responsive, scalable
• A dedicated system thread for GC
• Active transactions work cooperatively and pick up parts of the GC work
23. Cooperative garbage collection

Key points
• Scanners can remove expired rows when found
• Offloads work from the GC thread
• Ensures that frequently visited areas of the index are cleaned regularly
• A row needs to be removed from all indexes before its memory can be freed
• Garbage collection is most efficient if all indexes are frequently accessed

[Diagram: row versions with (Begin Ts, End Ts) pairs: (100, 200) John Smith Kirkland; (200, ∞) John Smith Redmond; (50, 100) Jim Spring Kirkland; (300, ∞) Ken Stone Boston. Transaction TX4 (Begin = 210) scans with the oldest-active hint at 175; versions whose End Ts precedes the hint are no longer visible to anyone and can be collected.]
24. Durability

Memory-optimized tables can be durable or non-durable
• Default is 'durable' (SCHEMA_AND_DATA)
• Non-durable tables are useful for transient data

Durable tables are persisted in a single memory-optimized filegroup
• Storage used for memory-optimized tables has a different access pattern than for disk tables
• The filegroup can have multiple containers (volumes)
• Additional containers aid in parallel recovery; recovery happens at the speed of IO
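A hedged sketch of the storage setup described above; the database name, paths, and the SCHEMA_ONLY table are illustrative:

ALTER DATABASE InMemDB
    ADD FILEGROUP InMemDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;

-- Two containers on separate volumes aid parallel recovery
ALTER DATABASE InMemDB
    ADD FILE (NAME = 'InMemDB_mod1', FILENAME = 'E:\data\InMemDB_mod1')
    TO FILEGROUP InMemDB_mod;
ALTER DATABASE InMemDB
    ADD FILE (NAME = 'InMemDB_mod2', FILENAME = 'F:\data\InMemDB_mod2')
    TO FILEGROUP InMemDB_mod;

-- A non-durable table for transient data: schema survives restart, rows do not
CREATE TABLE dbo.SessionCache (
    SessionID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Payload NVARCHAR(2000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);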
25. On-disk storage

Filestream is the underlying storage mechanism; checksums and single-bit-correcting ECC on files.

Data files
• ~128 MB in size; written in 256 KB chunks
• Store only the inserted rows (i.e., table content)
• Chronologically organized streams of row versions

Delta files
• File size is not constant; written in 4 KB chunks
• Store the IDs of deleted rows
26. Logging for memory-optimized tables

Uses the SQL transaction log to store content; each log record contains a log record header followed by opaque memory-optimized-specific log content.

All logging for memory-optimized tables is logical
• No log records for physical structure modifications
• No index-specific / index-maintenance log records
• No UNDO information is logged

Recovery models: all three recovery models are supported.
27. Backup for memory-optimized tables

Integrated with SQL database backup
• The memory-optimized filegroup is backed up as part of a SQL Server database backup
• Existing backup scripts work with minimal or no changes
• Transaction log backup includes memory-optimized log records

Not supported: differential backup.
28. Recovery for memory-optimized tables

Analysis phase
• Finds the last completed checkpoint

Data load
• Load from the set of data/delta files of the last completed checkpoint
• Parallel load by reading data/delta files using 1 thread per file

Redo phase to apply the tail of the log
• Apply the transaction log from the last checkpoint
• Concurrent with REDO on disk-based tables

No UNDO phase for memory-optimized tables: only committed transactions are logged.
35. In-memory OLTP summary

What's being delivered
• High-performance, memory-optimized OLTP engine integrated into SQL Server and architected for modern hardware trends

Main benefits
• Optimized for in-memory data, with up to 20-30x throughput
  ‐ Indexes (hash and range) exist only in memory; no buffer pool or B-trees
  ‐ T-SQL compiled to machine code via C code generator and Visual C compiler
  ‐ Core engine uses lock-free algorithms; no lock manager, latches, or spinlocks
• Multiversion optimistic concurrency control with full ACID support
• On-ramp for existing applications
• Integrated experience: same manageability, administration, and development experience
37. Cardinality

• Execution plans use cardinality estimates to predict the number of rows at each step of a query
• Although the new cardinality estimator still works from statistics, the algorithm has changed dramatically
• Databases in the native compatibility level (120) use the new estimator
• Change that behavior with OPTION (QUERYTRACEON 9481) to execute with the old version (per-query basis)
• Databases in compatibility level 110 can use the new version with the QUERYTRACEON 2312 hint (per-query basis)
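A brief hedged sketch of both per-query hints (database and table names are hypothetical; QUERYTRACEON requires sysadmin membership):

-- Database in compatibility level 120: opt one query back into the old estimator
ALTER DATABASE MyDB SET COMPATIBILITY_LEVEL = 120;
SELECT c.Name, o.OrderDate
FROM dbo.Customer AS c JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID
OPTION (QUERYTRACEON 9481);  -- old (pre-2014) cardinality estimator for this query

-- Database still in compatibility level 110: try the new estimator for one query
SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= '2014-01-01'
OPTION (QUERYTRACEON 2312);  -- new (2014) cardinality estimator for this query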
38. Scenarios

• The new estimator uses an average cardinality for recently added ascending data (the ascending-key problem) without a statistics update
  ‐ Old version estimates 0 rows
  ‐ New version uses the average cardinality
• The new estimator assumes filtered predicates on the same table have some correlation
  ‐ Old vs. new: depends on the specific sample
• The new estimator assumes filtered predicates on different tables are independent
  ‐ Old vs. new: depends on the specific sample
39. Suggestions

Check the performance of your workloads and do the math:
• Compatibility level 110
• Compatibility level 110, adding OPTION (QUERYTRACEON 2312) to some queries
• Native compatibility level (120)
• Native compatibility level (120), adding OPTION (QUERYTRACEON 9481) to some queries
42. Separation of duties enhancement

Four new permissions
• CONNECT ANY DATABASE (server scope)
• IMPERSONATE ANY LOGIN (server scope)
• SELECT ALL USER SECURABLES (server scope)
• ALTER ANY DATABASE EVENT SESSION (database scope)

Main benefits
• Greater role separation to restrict multiple DBA roles
• Ability to create new roles for database administrators who are not sysadmin (superuser)
• Ability to create new roles for users or apps with specific purposes
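For illustration, a hedged sketch of combining the new server-scope permissions into an auditing role (role and login names are hypothetical):

-- A server role for auditors: may connect anywhere and read everything, but change nothing
CREATE SERVER ROLE AuditorRole;
GRANT CONNECT ANY DATABASE TO AuditorRole;
GRANT SELECT ALL USER SECURABLES TO AuditorRole;
ALTER SERVER ROLE AuditorRole ADD MEMBER [DOMAIN\AuditLogin];  -- hypothetical login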
43. Best practices for separation of duties

• Eliminate the use of superusers (SA login, sysadmin server role, dbo database user)
• Use the permission system rather than superusers
• Use CONTROL SERVER (server level) and CONTROL DATABASE (database level) instead, and use DENY for specifics
• Always document the use of ownership chains
44. Backup encryption

• Increases the security of backups stored separately from the instance (in another environment, such as the cloud)
• Encryption keys can be stored on-premises while backup files live in the cloud
• Supports non-encrypted databases (no need to turn on Transparent Data Encryption anymore)
• Allows different policies for databases and their backups
45. T-SQL BACKUP / RESTORE

BACKUP DATABASE <dbname> TO <device> = <path to device>
WITH ENCRYPTION
(
    ALGORITHM = <algorithm_name>,
    { SERVER CERTIFICATE = <encryptor_name> |
      SERVER ASYMMETRIC KEY = <encryptor_name> }
);

No changes to RESTORE.
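A concrete hedged example of the syntax above, including the certificate prerequisite (names, path, and password are placeholders):

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE BackupCert WITH SUBJECT = 'Backup encryption certificate';

BACKUP DATABASE mydb
TO DISK = 'E:\backup\mydb.bak'
WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert),
     COMPRESSION, STATS = 10;

Back up the certificate and its private key as well; without them the encrypted backup cannot be restored.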
47. Additional details

• Algorithms: AES 128, AES 192, AES 256, and Triple DES
• A unique backup key is generated for each backup
• Encryptor: a certificate, or an asymmetric key from an EKM provider only
• All operations require the certificate or key
• Appending to an existing backup set is not supported
• Compression has no effect on pre-encrypted databases
49. AlwaysOn in SQL Server 2014

What's being delivered
• Increase the number of secondaries from four to eight
• Increased availability of readable secondaries
• Support for Windows Server 2012 CSV (Cluster Shared Volumes)
• Enhanced diagnostics

Main benefits
• Further scale out read workloads across (possibly geo-distributed) replicas
• Use readable secondaries despite network failures (important in geo-distributed environments)
• Improve SAN storage utilization
• Avoid the drive-letter limitation (max 24 drives) via CSV paths
• Increase resiliency of storage failover
• Ease troubleshooting
50. Increase number of Availability Group secondaries

Description
• Increase the number of secondaries (4 to 8)
• Max number of synchronous secondaries is still two

Reason
• Customers want to use readable secondaries
  ‐ One technology to configure and manage
  ‐ Many times faster than replication
• Customers are asking for more database replicas (4-8)
  ‐ To reduce query latency (large-scale environments)
  ‐ To scale out read workloads
51. Support for Windows Server Cluster Shared Volumes

Description
• Allow FCI customers to configure CSV paths for system and user databases

Reason
• Avoid the drive-letter limitation on SAN (max 24 drives)
• Improve SAN storage utilization and management
• Increased resiliency of storage failover (abstraction of temporary disk-level failures)
• Migration path for SQL Server customers using PolyServe (discontinued in 2013)
54. Resource Governor goals
• Ability to differentiate workloads
• Ability to monitor resource usage per group
• Limit controls to enable throttled execution or prevent/minimize the probability of "runaways"
• Prioritize workloads
• Provide predictable execution of workloads
• Specify resource boundaries between workloads
56. Complete resource governance

What's being delivered
‐ Add max/min IOPS per volume to Resource Governor pools
‐ Add DMVs and perf counters for IO statistics per pool, per volume
‐ Update SSMS IntelliSense for the new T-SQL
‐ Update SMO and DOM for the new T-SQL and objects

Main benefits
‐ Better isolation (CPU, memory, and IO) for multitenant workloads
‐ Guaranteed performance in private cloud and hoster scenarios
57. Resource pools
• Represents physical resources
of server
• Can have one or more
workloads assigned to pool
• Pool divided into shared and
non-shared
• Pools control min/max for
CPU/memory and now IOPS
CREATE RESOURCE POOL pool_name
[ WITH
( [ MIN_CPU_PERCENT = value ]
[ [ , ] MAX_CPU_PERCENT = value ]
[ [ , ] CAP_CPU_PERCENT = value ]
[ [ , ] AFFINITY {SCHEDULER = AUTO |
(Scheduler_range_spec) | NUMANODE =
(NUMA_node_range_spec)} ]
[ [ , ] MIN_MEMORY_PERCENT = value ]
[ [ , ] MAX_MEMORY_PERCENT = value ]
[ [ , ] MIN_IOPS_PER_VOLUME = value ]
[ [ , ] MAX_IOPS_PER_VOLUME = value ])
]
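As a hedged example (the pool name and all numbers are assumptions), a pool that caps IO alongside CPU could look like this:

-- Illustrative pool: floor of 5% CPU / 100 IOPS, ceiling of 50% CPU / 1000 IOPS per volume
CREATE RESOURCE POOL poolAdhoc
WITH (MIN_CPU_PERCENT = 5,
MAX_CPU_PERCENT = 50,
MIN_IOPS_PER_VOLUME = 100,
MAX_IOPS_PER_VOLUME = 1000);
-- Apply the new configuration
ALTER RESOURCE GOVERNOR RECONFIGURE;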
58. Resource pools
• Minimums across all resource pools cannot exceed 100 percent
• Non-shared portion provides minimums
• Shared portion provides maximums
• Pools can define min/max for CPU/memory/IOPS
‐ Mins define the non-shared portion
‐ Maxes operate within the shared portion (see the worked example below)
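A worked example (numbers assumed): with pool A at MIN_CPU_PERCENT = 20 and pool B at MIN_CPU_PERCENT = 50, the shared portion is 100 - (20 + 50) = 30 percent, so pool A's effective maximum under contention is min(MAX_CPU_PERCENT of A, 20 + 30) = 50 percent, even if its configured maximum is 100.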
59. Steps to implement Resource Governor
• Create workload groups
• Create function to classify requests into workload group
• Register the classification function from the previous step with the Resource Governor
• Enable Resource Governor
• Monitor resource consumption for each workload group
• Use monitoring results to establish pools
• Assign workload groups to pools (a condensed T-SQL sketch follows)
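A condensed sketch of those steps; every name here (poolAdhoc, groupAdhoc, dbo.fnClassifier, the application name) is assumed for illustration, and the classifier must be created in master:

-- 1. Pool and workload group
CREATE RESOURCE POOL poolAdhoc WITH (MAX_CPU_PERCENT = 50);
CREATE WORKLOAD GROUP groupAdhoc USING poolAdhoc;
GO
-- 2. Classifier function: returns the workload group name for each request
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() = 'AdhocReports'   -- assumed application name
        RETURN 'groupAdhoc';
    RETURN 'default';
END;
GO
-- 3. Register the classifier and enable Resource Governor
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;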
60. Resource Governor scenarios
• Scenario 1: I just got a new version of SQL Server and would like to make use of Resource Governor. How can I use it in my environment?
• Scenario 2 (based on Scenario 1): Based on monitoring results, I would like to see an event any time a query in the ad-hoc group (groupAdhoc) runs longer than 30 seconds.
• Scenario 3 (based on Scenario 2): I want to further restrict the ad-hoc group so that, cumulatively across all its requests, it does not exceed 50 percent of CPU usage (see the sketch below).
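For Scenarios 2 and 3, a possible sketch reusing the assumed names from above; note that REQUEST_MAX_CPU_TIME_SEC raises the CPU Threshold Exceeded event rather than killing the query:

-- Scenario 2: surface an event when a request exceeds 30 seconds of CPU time
ALTER WORKLOAD GROUP groupAdhoc WITH (REQUEST_MAX_CPU_TIME_SEC = 30);
-- Scenario 3: hard-cap the ad-hoc pool at 50 percent CPU overall
ALTER RESOURCE POOL poolAdhoc WITH (CAP_CPU_PERCENT = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;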
61. Monitoring Resource Governor
• System views
‐ sys.resource_governor_resource_pools
‐ sys.resource_governor_configuration
• DMVs (example queries follow this list)
‐ sys.dm_resource_governor_resource_pools
‐ sys.dm_resource_governor_resource_pool_volumes
‐ sys.dm_resource_governor_configuration
• New performance counters
‐ SQLServer:Resource Pool Stats
‐ SQLServer:Workload Group Stats
• XEvents
‐ file_read_enqueued
‐ file_write_enqueued
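For example, a quick way to eyeball the new per-pool IO statistics:

-- Per-pool, per-volume IO statistics (new in SQL Server 2014)
SELECT * FROM sys.dm_resource_governor_resource_pool_volumes;
-- Per-pool configuration and runtime statistics
SELECT * FROM sys.dm_resource_governor_resource_pools;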
63. Backup to Windows Azure
What’s being delivered
• SQL Server supports backups to and restores from the Windows Azure Blob storage service (UI, T-SQL, PowerShell cmdlets)
Main benefit: Take advantage of Windows Azure Blob storage
• Flexible, reliable (3-copies geo-DR), and limitless off-site storage
• No need for backup media management
• No overhead of hardware management
CREATE CREDENTIAL mystoragecred
WITH IDENTITY = 'mystorage',
SECRET = '<your storage access key>';

BACKUP DATABASE mydb
TO URL = 'https://mystorage.blob.core.windows.net/backup-container/mydb-20130411.bak'
WITH CREDENTIAL = 'mystoragecred',
FORMAT, COMPRESSION, STATS = 5,
MEDIANAME = 'mydb backup 20130411',
MEDIADESCRIPTION = 'Backup of mydb';
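The matching restore is symmetric; a sketch reusing the same placeholder credential and blob:

RESTORE DATABASE mydb
FROM URL = 'https://mystorage.blob.core.windows.net/backup-container/mydb-20130411.bak'
WITH CREDENTIAL = 'mystoragecred';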
64. Backup to Windows Azure
Box (on-premises media)
• On-site/off-site storage costs
• Device management costs
Windows Azure drives (XDrives)
• Limited to 1 terabyte per drive
• Max 16 drives
• Manage drives and policy
Windows Azure Blob storage
• Near “bottomless” storage
• Off-site, geo-redundant
• No provisioning
• No device management
• Media safety (decay-free)
• Remote accessibility
65. Backup to Windows Azure
• Simple configuration UI
• Easy creation of Azure credential
• No overhead
66. Backup to Windows Azure Tool
• What is it?
‐ Stand-alone tool that adds backup to Windows Azure capabilities and backup encryption to prior versions of SQL Server
• Benefits
‐ One cloud backup strategy across prior versions of SQL Server, including 2005, 2008, and 2008 R2
‐ Adds backup encryption to prior versions, locally or in the cloud
‐ Takes advantage of backup to Azure
‐ Easy configuration
67. Managed backup to Azure
• What’s being delivered
‐ Agent that manages and automates SQL Server backup policy
• Main benefits
‐ Large-scale management, with no need to manage backup policy by hand
Context-aware – for example, workload/throttling
Minimal knobs – control the retention period
Manage the whole instance or particular databases
‐ Takes advantage of backup to Azure
Inherently off-site
Geo-redundant
Minimal storage costs
Zero hardware management
Example:
EXEC smart_admin.sp_set_db_backup
@database_name='TestDB',
@storage_url=<storage url>,
@retention_days=30,
@credential_name='MyCredential',
@enable_backup=1
69. SQL Server data and log files in Windows Azure storage
• What’s being delivered
‐ Ability to move data and log files to Windows Azure Storage, while keeping the compute node of SQL Server on-premises
‐ Transparent Data Encryption (TDE) is supported
• Main benefits
‐ No application changes required
‐ Centralized copy of data and log files
‐ Enjoy unlimited storage capacity in Azure Storage (built-in HA, SLA, geo-DR)
‐ Secure, because the TDE encryption key can be stored on-premises
‐ Restoring a database is simply an attach operation (a setup sketch follows)
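A minimal setup sketch, assuming a storage account named mystorage and a container named data; this feature expects a credential named after the container URL that carries a Shared Access Signature:

-- Credential named after the container, holding the SAS token
CREATE CREDENTIAL [https://mystorage.blob.core.windows.net/data]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS token>';
-- Create the database with its files directly in blob storage
CREATE DATABASE mydb
ON (NAME = mydb_data,
    FILENAME = 'https://mystorage.blob.core.windows.net/data/mydb.mdf')
LOG ON (NAME = mydb_log,
    FILENAME = 'https://mystorage.blob.core.windows.net/data/mydb.ldf');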
73. Deploy databases to Windows Azure VM
• What’s being delivered
‐ New wizard to deploy databases to SQL Server in a Windows Azure VM
‐ Can also create a new Windows Azure VM if needed
• Main benefits
‐ Easy to use
Perfect for database administrators new to Azure and for ad hoc scenarios
‐ Complexity hidden
Detailed Azure knowledge not needed
Almost no overhead: the defining factor for time-to-transfer is database size
74. Call to action
• Download Trial SQL Server 2014
http://technet.microsoft.com/en-US/evalcenter/dn205290.aspx
• Trial Azure
http://azure.microsoft.com/en-us/pricing/free-trial/
• SQL Server
http://www.microsoft.com/SQLServer
Editor's Notes
select * from [sql]
select * from [sql] where c1>100000
Insert 5000 rows into [sql]
Try the estimated plan (Ctrl+L) for all of those queries at the same time
select * from [sql] where c1>100000 option (querytraceon 2312) -- trace flag 2312 forces the new SQL Server 2014 cardinality estimator
select * from [sql] where c1>100000 option (querytraceon 9481) -- trace flag 9481 forces the legacy (pre-2014) cardinality estimator
select * from [sql] where c1>100000
You can use the Transact-SQL ALTER TABLE...SWITCH statement to quickly and efficiently transfer subsets of your data in the following ways:
Assigning a table as a partition to an already existing partitioned table.
Switching a partition from one partitioned table to another.
Reassigning a partition to form a single table (see the example below).
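For instance, a hypothetical switch (table names and partition number are illustrative; the target must be an empty table with a matching schema on the same filegroup):

-- Move partition 1 of the partitioned table into an empty staging table
ALTER TABLE dbo.SalesPartitioned
SWITCH PARTITION 1 TO dbo.SalesStaging;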
Not only are we enabling unique new hybrid scenarios, we are also simplifying cloud adoption for our customers. With SQL Server 2014 we will also ship a new migration wizard that helps DBAs easily migrate their on-premises SQL Server instance to Windows Azure, again directly through SSMS, as you can see in the screenshot there: point and click, and your instance will be up and running in Azure in no time.
Once you become familiar with Windows Azure, you can start to take advantage of Windows Azure Virtual Machines to run many scenarios in the cloud, including fast dev/test of your SQL Server applications, moving existing applications, the hybrid scenarios we talked about, and BI scenarios in the cloud, because you have full SQL Server functionality including all of the BI services. In addition, you have full control over the VM, so if you want to put your corporate anti-virus software on the VM, you can.
Once you have become comfortable with the Windows Azure environment, you can take advantage of the Windows Azure SQL Database service offering, which speeds development of your new database application even further because you don't have to manage the database: it is a service, and you don't have to patch the OS or the database, we take care of that; all you do is develop your application against the service. The SQL Database service also has unique cloud features, like dynamic scalability of the database using Federations, which Flavorus used to achieve their business goals. In addition, this database service offers an SLA for the database, and you don't have to think about setting up high availability because it is built into the database service by default. This is where we see cloud applications achieving fast development, less maintenance, and faster time to market.