Introduction to Microsoft SQL Server 2008 R2
 

In this presentation we review the new features in SQL Server 2008 R2.

Regards,

Ing. Eduardo Castro Martinez, PhD
http://comunidadwindows.org
http://ecastrom.blogspot.com

  • Datacenter
    Built on SQL Server 2008 R2 Enterprise, SQL Server 2008 R2 Datacenter is designed to deliver a high-performing data platform that provides the highest levels of scalability for large application workloads, virtualization and consolidation, and management for an organization’s database infrastructure. Datacenter helps organizations cost-effectively scale their mission-critical environments. Key features new to Datacenter:
    • Application and Multi-Server Management for enrolling, gaining insight into, and managing more than 25 instances
    • The highest level of virtualization support, for maximum ROI on consolidation and virtualization
    • High-scale complex event processing with SQL Server StreamInsight
    • Support for more than 8 processors and up to 256 logical processors, for the highest levels of scale
    • Support for memory up to the operating system maximum
  • Parallel Data Warehouse
    SQL Server 2008 R2 Parallel Data Warehouse is a highly scalable, appliance-based data warehouse solution. Parallel Data Warehouse delivers performance at low cost through a massively parallel processing (MPP) architecture and compatibility with hardware partners, letting you scale your data warehouse to tens and hundreds of terabytes. Key features new to Parallel Data Warehouse:
    • Tens to hundreds of terabytes, enabled by the MPP architecture
    • Advanced data warehousing capabilities such as star join queries and change data capture
    • Integration with SSIS, SSRS, and SSAS
    • Support for the industry-standard hub-and-spoke data warehousing architecture and parallel database copy
  • Resource Optimization
    Use new tools in SSMS to gain insights for improved consolidation management, maximize investments, and ultimately maintain healthier systems.
    Consolidation management: Use a new explorer in SQL Server Management Studio to access central multi-server management and at-a-glance dashboard views. Dashboard viewpoints provide insight into utilization and policy violations to help identify consolidation opportunities or resources at risk. What’s more, data and log file utilization are rolled up for visibility across databases and volumes, helping you pinpoint potential issues and take action.
    Improve service levels: Set policies to define desired utilization thresholds across target servers or applications within a new central management point. Identify issues with instances and applications, reducing the time spent troubleshooting which applications are running on potential problem servers. Customize which resource properties are displayed based on your needs, and view this information through dashboard views. Dashboard views enable impact analysis, so you can quickly drill in on issues before internal customers come knocking.
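The utilization-policy idea above can be sketched in a few lines. This is a toy illustration of the concept only; the instance names, resource keys, and thresholds are hypothetical, and this is not the actual SSMS Utility Explorer API:

```python
# Toy sketch of a utilization-policy check, loosely modeled on the idea of
# SQL Server Utility dashboards. All names here are hypothetical
# illustrations, not the actual SSMS/Utility API.

# Each managed instance reports resource utilization as a fraction (0.0-1.0).
instances = {
    "SQLPROD01": {"cpu": 0.92, "data_file": 0.40},
    "SQLPROD02": {"cpu": 0.35, "data_file": 0.88},
    "SQLPROD03": {"cpu": 0.20, "data_file": 0.30},
}

# Policy thresholds: utilization above these values is flagged for review.
policy = {"cpu": 0.80, "data_file": 0.85}

def violations(instances, policy):
    """Return (instance, resource, value) for every threshold breach."""
    return [
        (name, resource, value)
        for name, usage in instances.items()
        for resource, value in usage.items()
        if value > policy[resource]
    ]

for name, resource, value in violations(instances, policy):
    print(f"{name}: {resource} at {value:.0%} exceeds policy {policy[resource]:.0%}")
```

A dashboard would render these breaches visually; the underlying idea is the same threshold comparison across the managed pool.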
  • In R2’s SQL Server Management Studio, right-click a database and click Tasks, Extract Data-Tier Application. This starts a wizard that reverse-engineers your database schema, figures out what makes it tick, and packages it so that you can redeploy it on another server. The information is saved in a file with a .dacpac extension, and if you try to open that file directly in SQL Server Management Studio, you’ll hit a stumbling block.
  • What DACs Mean for Database Administrators
    If you never had a change control process and your developers just implemented changes willy-nilly in production, then the DAC approach won’t change anything. Your developers will do what they’ve always done.
    If you’ve got change control processes in place, your developers probably hand you change scripts and tell you to implement them in production. If you’re ambitious, you audit their work as a sanity check to make sure it will scale. In the future, your developers may be creating and updating their database schema, stored procedures, functions, etc. inside Visual Studio, packaging them into DAC packs, and handing them to you. To check their work, you’ll need to switch over into Visual Studio, or perhaps log onto their development SQL Servers to see the schema changes there. This is another nail in the coffin of the power of the DBA. From the NoSQL movement to the DBA-less cloud, DBAs need to be acutely aware of how things are changing.
    This isn’t necessarily a bad thing; it’s worked great in the world of virtualization. As a VMware sysadmin, I didn’t need to understand what each virtual server was doing, whether it conformed to best practices, or even what was running on it. I managed servers in large quantities with low overhead simply by moving things around based on the resources they needed. If a server’s needs grew, I could move it to a larger or less-active VMware host. I purchased resources incrementally for the entire pool rather than micromanaging what each server needed. I didn’t do as good a job as if I’d micromanaged each server’s configuration, but I was able to manage more servers with less manpower. Everything’s a tradeoff.
    What if you, as a production DBA, could manage more instances and more databases in less time? What if, instead of looking at lines of T-SQL code, you were able to step back and see the bigger picture? What if you treated every application as a sealed, hands-off third-party app?
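As an aside on the file format: a .dacpac is essentially a ZIP container holding XML parts that describe the schema, which is why double-clicking it doesn’t open anything useful. A quick way to peek inside one is any ZIP tool. The sketch below builds a toy package so it is self-contained; the two part names are illustrative placeholders, since the real part names vary by DAC version:

```python
import os
import tempfile
import zipfile

def list_dacpac_contents(path):
    """List the parts inside a .dacpac, which is a ZIP container of XML parts."""
    with zipfile.ZipFile(path) as pkg:
        return pkg.namelist()

# Build a toy package so the sketch is self-contained. Real .dacpac part
# names vary by version; these two are illustrative placeholders only.
toy_path = os.path.join(tempfile.gettempdir(), "toy.dacpac")
with zipfile.ZipFile(toy_path, "w") as pkg:
    pkg.writestr("DacMetadata.xml", "<DacType/>")
    pkg.writestr("[Content_Types].xml", "<Types/>")

print(list_dacpac_contents(toy_path))
```

The supported way to work with DAC packs is still the SSMS wizard or Visual Studio; inspecting the ZIP is just a way to see what the wizard produced.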
  • Dramatically reduce the time required for SQL Server installs by creating Sysprep-ed images of standalone SQL Server instances that can be copied and quickly installed on target systems. This enables rapid provisioning and configuration using prepared images stored in VHDMart for Hyper-V deployments. SQL Server Sysprep accomplishes the install in two phases behind the scenes: prepare and configure. In SQL Server 2008 R2, two new SQL Server setup actions are exposed to users: PrepareImage (also referred to as “prepare”) and CompleteImage (also referred to as “complete” or “configure”). PrepareImage takes about 30 minutes for the Engine and RS components, whereas CompleteImage takes a few minutes. The Windows Sysprep process can be run between the two steps to create a Windows OS image and deploy it to target computers, but this is not required; the two steps can also be run back-to-back on the same computer. On average, Sysprep saves an estimated 30 minutes per install. This feature is available in SQL Server 2008 R2 for Database Engine and Reporting Services deployments.
  • SQL Server PowerPivot Add-in for Excel (formerly known as “Gemini”)
    This innovative Excel add-in enables Excel power users to easily create powerful BI solutions by streamlining the integration of data from multiple sources, enabling interactive modeling and analysis of massive amounts of data, and supporting the seamless sharing of data models and reports through Microsoft Office SharePoint 2010.
    SharePoint 2010-based Operations Dashboard
    This SharePoint managed service enables front-line operators and administrators to monitor access and utilization of analyses and reports, as well as track patterns of hardware usage, to help ensure the right security privileges are applied and user-generated solutions are available, up to date, and managed in a consistent way.
    SQL Server Reporting Services Report Builder 3.0
    This updated ad hoc reporting client accelerates report creation, collaboration, and consistency by allowing users to create and share report components that can be accessed via the shared component library, and by enabling the rapid assembly of comprehensive business reports from these shared components.
    Rich visualization of geospatial data
    New support for geospatial visualization, including mapping, routing, and custom shapes, can help your end users create customized reports that leverage existing content objects, such as queries, data regions, and charts and graphs. You can also enhance location-based data reports with Bing Maps in Report Builder 3.0.
  • PowerPivot for Excel supports self-service business intelligence in the following ways:
    • The current row-and-column limitations in Excel are removed so that you can import much more data, far beyond 1,000,000 rows.
    • A data relationship layer lets you integrate data from different sources and work with all of the data holistically. You can enter data, copy data from other worksheets, or import data from corporate databases, then build relationships among the data to analyze it as if it all originated from a single source.
    • Data is portable and reusable. Data stays inside the workbook, so you do not need to manage external data connections. If you publish, move, copy, or share a workbook, all the data goes with it.
    • PowerPivot data is fully and immediately available to the rest of the workbook. You can switch between the Excel and PowerPivot windows to work interactively on the data and its presentation in PivotTables or charts. Working on data and working on its presentation are not separate tasks; you do both together in the same Excel environment.
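The relationship layer described above is conceptually a join: two tables from different sources, related on a key and analyzed as one. A plain-Python sketch of the idea (not the PowerPivot engine; the table contents are made up for illustration):

```python
# Conceptual sketch of a "relationship layer": two tables from different
# sources, related on a shared key and aggregated together. Illustrative
# data only; this is the idea, not the PowerPivot engine.

orders = [  # e.g. imported from a corporate database
    {"order_id": 1, "customer_id": "C1", "amount": 120.0},
    {"order_id": 2, "customer_id": "C2", "amount": 75.5},
    {"order_id": 3, "customer_id": "C1", "amount": 30.0},
]
customers = [  # e.g. pasted from another worksheet
    {"customer_id": "C1", "region": "West"},
    {"customer_id": "C2", "region": "East"},
]

# Define the relationship once (customer_id -> region), then aggregate
# across both tables as if they were a single source.
region_of = {c["customer_id"]: c["region"] for c in customers}

totals = {}
for o in orders:
    region = region_of[o["customer_id"]]
    totals[region] = totals.get(region, 0.0) + o["amount"]

print(totals)
```

In PowerPivot the relationship is declared once in the model and every PivotTable can then slice one table by columns of the other, with no per-cell lookup formulas.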
  • A business intelligence project can often run into the sand because of data quality issues, and tools like PowerPivot and Reporting Services will only reflect these problems back to the business. These quality issues aren’t simply keying errors; they relate to reference data that is stored in multiple places across many systems. An obvious example is the many versions of a customer that exist across these systems: the marketing system has an address where it sends out the catalogue, but this differs from the billing address in the finance system. While this may well need to be fixed, it isn’t killing the business, in that bills are being paid by customers even if the odd catalogue is mis-mailed. The point is that this reference data exists in several systems, and fixing it in the data warehouse is fine for reporting but doesn’t resolve issues that occur in production. This kind of problem is also a business process issue rather than a purely technical one. That said, technology can certainly help, and this is where Master Data Services in SQL Server 2008 R2 comes in. The new release provides a portal where end users can manage this reference data.
  • Master Data Services has three main components:
    • Master Data Services Configuration Manager, from which you can create and configure Master Data Services databases and Web applications.
    • Master Data Manager, from which users can manage master data.
    • The Master Data Services Web service, from which a developer can extend Master Data Services or develop custom solutions for his or her environment.
  • Models (Master Data Services)
    Models are the highest level of data organization in Master Data Services. A model contains the following objects:
    • Entities
    • Attributes and attribute groups
    • Hierarchies (derived and explicit)
    • Collections
    These objects organize and define the master data, which consists of members and their attribute values. Model objects are maintained in the System Administration functional area of the Master Data Manager user interface, and the master data itself is maintained in the Explorer area.
    You can have one or many models. Each model should group similar kinds of data. Master data generally falls into one of four categories: people, places, things, or concepts. For example, you can create a Product model to contain product-related data or a Customer model to contain customer-related data.
    Initially, you create the structure of your model by creating entities to contain members and their attributes. Then you can produce hierarchies and collections to roll up members in different ways for analysis and for publishing to subscribing systems.
    You can assign users and groups permission to view and update objects within the model. If you do not grant permission to the model, it is not displayed.
    At any given time, you can create copies of the master data within a model. These copies are called versions.
    When you have defined a model in a test environment, you can deploy it, with or without the corresponding data, from the test environment to a production environment. This eliminates the need to recreate your models in production.
    Example: In the following example, the Product model defines the way to organize product-related data.
    Product (model)
        Product (entity)
            Name (free-form attribute)
            Code (free-form attribute)
            Subcategory (domain-based attribute and entity)
                Name (free-form attribute)
                Code (free-form attribute)
                Category (domain-based attribute and entity)
                    Name (free-form attribute)
                    Code (free-form attribute)
            StandardCost (free-form attribute)
            ListPrice (free-form attribute)
            ThumbNailPhoto (file attribute)
    Other common models are:
    • Accounts, which could include entities such as balance sheet accounts, income statement accounts, statistics, and account type.
    • Customer, which could include entities such as gender, education, occupation, and marital status.
    • Geography, which could include entities such as postal codes, cities, counties, states, provinces, regions, territories, countries, and continents.
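The model/entity/attribute structure above can be pictured as a small object graph. A minimal sketch, using hypothetical class names rather than the actual Master Data Services API or schema:

```python
from dataclasses import dataclass, field

# Minimal sketch of MDS-style model objects. These class names are
# illustrative only, not the actual Master Data Services API or schema.

@dataclass
class Entity:
    name: str
    attributes: list = field(default_factory=list)  # attribute names
    members: list = field(default_factory=list)     # rows of attribute values

@dataclass
class Model:
    name: str
    entities: dict = field(default_factory=dict)

    def add_entity(self, entity):
        self.entities[entity.name] = entity

# Mirror part of the Product example: a model grouping related entities.
product_model = Model("Product")
product_model.add_entity(Entity("Product", attributes=["Name", "Code", "ListPrice"]))
product_model.add_entity(Entity("Subcategory", attributes=["Name", "Code"]))

print(sorted(product_model.entities))
```

Hierarchies and collections would then be additional objects that reference these entities' members to roll them up for analysis.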
  • Data volumes are exploding, with event data streaming from sources such as RFID, sensors, and web logs across industries including manufacturing, financial services, and utilities. The size and frequency of the data make it challenging to store for data mining and analysis. The ability to monitor, analyze, and act on data in motion provides a significant opportunity to make more informed business decisions in near real time.
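The core pattern in complex event processing is a standing query evaluated over windows of a stream rather than over stored tables. StreamInsight itself expresses such queries in .NET/LINQ; as a language-neutral sketch of one common window type, a tumbling-window average over timestamped sensor readings might look like this (illustrative data and function name):

```python
# Sketch of a tumbling-window aggregate, the kind of standing query a CEP
# engine such as StreamInsight evaluates over data in motion. Plain-Python
# illustration only; StreamInsight queries are written in .NET/LINQ.

def tumbling_window_averages(events, window_seconds):
    """events: iterable of (timestamp_seconds, value) in time order.
    Yields (window_start, average_value) for each non-empty window."""
    window_start = None
    values = []
    for ts, value in events:
        start = ts - (ts % window_seconds)  # window this event falls into
        if window_start is None:
            window_start = start
        if start != window_start:           # window closed: emit its aggregate
            yield window_start, sum(values) / len(values)
            window_start, values = start, []
        values.append(value)
    if values:                              # flush the final window
        yield window_start, sum(values) / len(values)

# e.g. sensor readings as (seconds, temperature)
readings = [(0, 20.0), (2, 22.0), (5, 30.0), (7, 26.0), (11, 24.0)]
print(list(tumbling_window_averages(readings, 5)))
# [(0, 21.0), (5, 28.0), (10, 24.0)]
```

Because each window's aggregate is emitted as soon as the window closes, downstream logic can react in near real time instead of waiting for the raw events to be landed in storage and queried later.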
