This document provides steps to create a data warehouse using SQL Server Integration Services (SSIS). It involves creating the data warehouse structure using a SQL script, then using SSIS to populate the data warehouse tables from the AdventureWorks source database. The SSIS package contains control flow tasks to initialize the data warehouse and load dimension and fact tables. Upon execution of the package, the document verifies that the data warehouse is properly populated by examining the tables.
This document provides an overview and instructions for installing and using the MySQL database system. It describes MySQL's client-server architecture, how to connect to the MySQL server using the command line client, and provides examples of common SQL commands for creating databases and tables, inserting, selecting, updating, and deleting rows of data. It also introduces some basic SQL functions and provides SQL scripts as examples to create tables and insert data.
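The common SQL commands the overview describes can be sketched briefly. This is a minimal illustration using Python's built-in sqlite3 module in place of a MySQL server; the SQL shown is generic, and the table and data are invented for the example:

```python
import sqlite3

# In-memory SQLite database stands in for a MySQL server here;
# the employees table and its rows are invented for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# CREATE: define a table
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# INSERT: add rows
cur.executemany("INSERT INTO employees (name, salary) VALUES (?, ?)",
                [("Alice", 52000.0), ("Bob", 48000.0)])

# SELECT: read rows back
rows = cur.execute("SELECT name, salary FROM employees ORDER BY name").fetchall()
print(rows)  # [('Alice', 52000.0), ('Bob', 48000.0)]

# UPDATE: give Bob a 10% raise
cur.execute("UPDATE employees SET salary = salary * 1.1 WHERE name = 'Bob'")

# DELETE: remove Alice's row
cur.execute("DELETE FROM employees WHERE name = 'Alice'")
remaining = cur.execute("SELECT COUNT(*) FROM employees").fetchone()[0]
print(remaining)  # 1
conn.close()
```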
Spark is fast becoming a critical part of Customer Solutions on Azure. Databricks on Microsoft Azure provides a first-class experience for building and running Spark applications. The Microsoft Azure CAT team engaged with many early adopter customers helping them build their solutions on Azure Databricks.
In this session, we begin by reviewing typical workload patterns and integration with other Azure services like Azure Storage, Azure Data Lake, IoT / Event Hubs, SQL DW, and Power BI. Most importantly, we will share real-world tips and lessons learned that you can take and apply in your Data Engineering / Data Science workloads.
The document discusses MySQL and SQL concepts including relational databases, database management systems, and the SQL language. It introduces common SQL statements like SELECT, INSERT, UPDATE, and DELETE and how they are used to query and manipulate data. It also covers topics like database design with tables, keys, and relationships between tables.
Snowflake is an analytic data warehouse provided as software-as-a-service (SaaS). It uses a unique architecture designed for the cloud that combines elements of shared-disk and shared-nothing architectures. Snowflake's architecture consists of three layers - the database storage layer, the query processing layer, and the cloud services layer - which are deployed and managed entirely on cloud platforms like AWS and Azure. Snowflake offers different editions like Standard, Premier, Enterprise, and Enterprise for Sensitive Data that provide additional features, support, and security capabilities.
The document provides an introduction to MySQL and relational database management systems. It discusses what a database and RDBMS are, common RDBMS terminology like tables, columns, rows, keys, and indexes. It also covers how to install and use MySQL, including creating databases and tables, and performing basic CRUD (create, read, update, delete) operations using SQL statements. The document is aimed at getting readers started with the MySQL database system.
This document discusses database systems and SQL. It begins by defining key database concepts like data models, schemas, and instances. It then provides an introduction to SQL, explaining what SQL is used for and some of its main functions. The document goes on to describe database system architecture, languages, and interfaces. It discusses the three-schema architecture and concepts of data independence. It also covers database management system components, utilities, and classifications.
This document provides information about SQL queries and joins. It begins by introducing SQL (Structured Query Language) which is used to communicate with databases and retrieve required information. It describes the basic CRUD (Create, Read, Update, Delete) functions of SQL. It then discusses different types of SQL queries - aggregate function queries, scalar function queries, and join queries. It provides the syntax and explanation of inner joins, outer joins (left, right, full) which are used to query data from multiple tables based on relationships between columns. The document is presented by Hammad, Bilal and Awais.
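The join types the presentation covers can be illustrated with Python's sqlite3 module (SQLite supports INNER and LEFT joins directly; RIGHT and FULL outer joins mirror the LEFT case from the other side). The tables and data here are invented for the example:

```python
import sqlite3

# Minimal sketch of inner vs. left outer joins; table and column
# names are invented, not taken from the presentation.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER)")
cur.executemany("INSERT INTO departments VALUES (?, ?)", [(1, "Sales"), (2, "IT")])
cur.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                [(1, "Alice", 1), (2, "Bob", None)])

# INNER JOIN: only rows with a match in both tables
cur.execute("""SELECT e.name, d.name FROM employees e
               INNER JOIN departments d ON e.dept_id = d.id""")
inner_rows = cur.fetchall()   # [('Alice', 'Sales')]

# LEFT JOIN: all employees, NULL where no department matches
cur.execute("""SELECT e.name, d.name FROM employees e
               LEFT JOIN departments d ON e.dept_id = d.id
               ORDER BY e.id""")
left_rows = cur.fetchall()    # [('Alice', 'Sales'), ('Bob', None)]
conn.close()
```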
SSIS is a platform for data integration and workflows that allows users to extract, transform, and load data. It can connect to many different data sources and send data to multiple destinations. SSIS provides functionality for handling errors, monitoring data flows, and restarting packages from failure points. It uses a graphical interface that facilitates transforming data without extensive coding.
Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
The document discusses various disaster recovery strategies for SQL Server including failover clustering, database mirroring, and peer-to-peer transactional replication. It provides advantages and disadvantages of each approach. It also outlines the steps to configure replication for Always On Availability Groups which involves setting up publications and subscriptions, configuring the availability group, and redirecting the original publisher to the listener name.
The document describes an OLTP database created for a construction company to store ongoing and closed project data in third normal form. An ETL process was developed using SSIS to load data from Excel spreadsheets and XML files into the database tables. This ETL package was combined with database backup, shrink, and index rebuild processes into a single job scheduled to run regularly via SQL Server Agent. The document includes diagrams and details of the database structure and various SSIS packages developed for the ETL load processes.
SQL Server supports two main types of indexes - clustered and nonclustered. A clustered index physically orders the data on disk based on the index key. Only one clustered index is allowed per table. A nonclustered index contains key values and row locators but does not determine the physical order of data. SQL Server supports up to 999 nonclustered indexes per table. The choice of index depends on the query patterns against the table and the desired performance characteristics.
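The row-locator behavior of a secondary index can be sketched with Python's sqlite3 module. SQLite is not SQL Server (its ordinary tables are clustered on the rowid, and CREATE INDEX builds a nonclustered-style secondary index holding key values plus row locators), but its query-plan output shows the same idea of an index satisfying a lookup. Table and index names are invented:

```python
import sqlite3

# Secondary-index sketch: SQLite's CREATE INDEX builds a structure of
# key values plus row locators, analogous to a nonclustered index.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
cur.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                [("acme", 10.0), ("globex", 20.0), ("acme", 30.0)])

# Index the column the queries filter by
cur.execute("CREATE INDEX ix_orders_customer ON orders (customer)")

# The planner reports that the secondary index satisfies the lookup
cur.execute("EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer = 'acme'")
plan = cur.fetchall()
print(plan)  # the detail column mentions ix_orders_customer
conn.close()
```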
Presentation on tablespaces, segments, extents, and blocks - Vinay Ugave
This presentation discusses database storage concepts in Oracle including blocks, extents, segments, and tablespaces. It defines each concept as follows:
Blocks are the smallest logical unit of storage in Oracle and represent a specific number of bytes on disk. Extents are collections of contiguous data blocks that make up segments. Segments store specific data structures like tables or indexes and are made up of one or more extents. Tablespaces logically store segments and physically store data in associated datafiles.
The control flow manages the execution of tasks and containers in an SSIS package. It contains control flow tasks, containers, and precedence constraints. There are three primary control flow objects - tasks that perform jobs, containers that group tasks and containers, and constraints that define execution order. A control flow task performs operations like sending emails or copying files, and completes as succeeded or failed.
This document provides an overview of NoSQL databases. It begins with a brief history of relational databases and Edgar Codd's 1970 paper introducing the relational model. It then discusses modern trends driving the emergence of NoSQL databases, including increased data complexity, the need for nested data structures and graphs, evolving schemas, high query volumes, and cheap storage. The core characteristics of NoSQL databases are outlined, including flexible schemas, non-relational structures, horizontal scaling, and distribution. The major categories of NoSQL databases are explained - key-value, document, graph, and column-oriented stores - along with examples like Redis, MongoDB, Neo4j, and Cassandra. The document concludes by discussing use cases.
Hive is a data warehouse infrastructure tool used to process large datasets in Hadoop. It allows users to query data using SQL-like queries. Hive resides on HDFS and uses MapReduce to process queries in parallel. It includes a metastore to store metadata about tables and partitions. When a query is executed, Hive's execution engine compiles it into a MapReduce job which is run on a Hadoop cluster. Hive is better suited for large datasets and queries compared to traditional RDBMS which are optimized for transactions.
As a leading data visualization tool, Tableau has many desirable and unique features. Its powerful data discovery and exploration application allows you to answer important questions in seconds. You can use Tableau's drag and drop interface to visualize any data, explore different views, and even combine multiple databases together easily. It does not need any complex scripting. Anyone who understands the business problem can address it with a visualization of the relevant data. When the analysis is finished, sharing with others is as easy as publishing to Tableau Server.
MS SQL Server is a database server produced by Microsoft that enables users to write and execute SQL queries and statements. It consists of several features like Query Analyzer, Profiler, and Service Manager. Multiple instances of SQL Server can be installed on a machine, with each instance having its own set of users, databases, and other objects. SQL Server uses data files, filegroups, and transaction logs to store database objects and record transactions. The data dictionary contains metadata about database schemas and is stored differently in Oracle and SQL Server.
Azure Data Factory is a cloud-based data integration service that orchestrates and automates the movement and transformation of data. In this session we will learn how to create data integration solutions using the Data Factory service and ingest data from various data stores, transform/process the data, and publish the result data to the data stores.
Modern Data Warehousing with the Microsoft Analytics Platform System - James Serra
The Microsoft Analytics Platform System (APS) is a turnkey appliance that provides a modern data warehouse with the ability to handle both relational and non-relational data. It uses a massively parallel processing (MPP) architecture with multiple CPUs running queries in parallel. The APS includes an integrated Hadoop distribution called HDInsight that allows users to query Hadoop data using T-SQL with PolyBase. This provides a single query interface and allows users to leverage existing SQL skills. The APS appliance is pre-configured with software and hardware optimized to deliver high performance at scale for data warehousing workloads.
The document discusses Hadoop, an open-source software framework that allows distributed processing of large datasets across clusters of computers. It describes Hadoop as having two main components - the Hadoop Distributed File System (HDFS) which stores data across infrastructure, and MapReduce which processes the data in a parallel, distributed manner. HDFS provides redundancy, scalability, and fault tolerance. Together these components provide a solution for businesses to efficiently analyze the large, unstructured "Big Data" they collect.
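The map-and-reduce flow described above can be sketched in a few lines of single-process Python. Real Hadoop distributes the same phases across a cluster with HDFS underneath, so this is only a toy illustration of the paradigm (a word count, the classic example):

```python
from collections import defaultdict
from itertools import chain

# Toy, single-process sketch of the MapReduce flow that Hadoop runs
# in parallel across a cluster: map each record to (key, value) pairs,
# shuffle the pairs by key, then reduce each key's value list.
def map_phase(line):
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big clusters", "data everywhere"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'clusters': 1, 'everywhere': 1}
```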
There are many Galera Cluster distributions and sometimes differences are well worth noting. We get a lot of queries about which Galera Cluster to use, or why one should use one distribution over the other.
Learn about Galera Cluster with MySQL 5.7 from Codership, and we’ll compare it with Galera Cluster 4 with MariaDB 10.4, and Percona XtraDB Cluster 5.7 with Galera 3. This is also the webinar where we preview Galera Cluster 4 with MySQL 8.0 as well as compare it with the preview release of Percona XtraDB Cluster 8.0.
Overall, learn why distributions exist, and how you can get the most out of your Galera Cluster experience.
Apache Cassandra is a free, distributed, open source, and highly scalable NoSQL database that is designed to handle large amounts of data across many commodity servers. It provides high availability with no single point of failure, linear scalability, and tunable consistency. Cassandra's architecture allows it to spread data across a cluster of servers and replicate across multiple data centers for fault tolerance. It is used by many large companies for applications that require high performance, scalability, and availability.
Not to be confused with Oracle Database Vault (a commercial db security product), Data Vault Modeling is a specific data modeling technique for designing highly flexible, scalable, and adaptable data structures for enterprise data warehouse repositories. It is not a replacement for star schema data marts (and should not be used as such). This approach has been used in projects around the world (Europe, Australia, USA) for the last 10 years but is still not widely known or understood. The purpose of this presentation is to provide attendees with a detailed introduction to the technical components of the Data Vault Data Model, what they are for and how to build them. The examples will give attendees the basics for how to build, and design structures when using the Data Vault modeling technique. The target audience is anyone wishing to explore implementing a Data Vault style data model for an Enterprise Data Warehouse, Operational Data Warehouse, or Dynamic Data Integration Store. See more content like this by following my blog http://kentgraziano.com or follow me on twitter @kentgraziano.
MySQL was created in 1994 by David Axmark, Allan Larsson, and Michael Widenius in Sweden as a lightweight database based on mSQL that was faster and more flexible. It was initially created for personal usage but is now commonly used as a web database. In 2008, MySQL AB was acquired by Sun Microsystems for $1 billion. MySQL is used for various purposes including data warehousing, e-commerce, and logging, but is most commonly used as a database for websites.
Data visualization is the process of visually representing information to help understand it more quickly. It is important because visuals allow humans to understand complex information instantly. Data visualization helps businesses make better decisions faster by communicating more information than tables and requiring less memory. Tableau is a popular business intelligence tool that allows users to interactively visualize and analyze data through drag-and-drop functionality. It can connect to various data sources and produce many chart types to provide rapid, real-time analysis of large datasets.
1. The document discusses using SQL Server Data Tools in Visual Studio 2013 to explore database and business intelligence projects. It provides steps to create an SSIS project to export data from a SQL Server table to a flat file.
2. The document also discusses using Visual Studio 2013 tools to manage database schemas through reverse engineering and version control capabilities.
3. The last part of the document will cover publishing a database project to an Azure SQL database.
SQL Server 2012 Tutorials: Writing Transact-SQL Statements - Steve Xu
This tutorial provides an introduction to writing basic Transact-SQL statements for creating and manipulating database objects. It is divided into three lessons: Lesson 1 covers creating a database, table, inserting and updating data; Lesson 2 covers configuring permissions on database objects by creating logins, users, views and stored procedures; Lesson 3 covers deleting database objects. The document contains step-by-step tutorials to demonstrate creating a database, table, inserting and reading data, and configuring permissions on the database objects.
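Lesson 2's use of views to scope access can be sketched with Python's sqlite3 module. SQLite has views but not logins or stored procedures, so only the view part is shown; in SQL Server you would then grant SELECT on the view rather than on the base table. All table and column names are invented:

```python
import sqlite3

# View sketch: expose only the columns a restricted reader should see,
# hiding the internal cost column behind the view.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, cost REAL, price REAL)")
cur.execute("INSERT INTO products (name, cost, price) VALUES ('widget', 2.5, 4.0)")

# The view omits cost; permissions would be granted on the view instead
cur.execute("CREATE VIEW vw_products AS SELECT id, name, price FROM products")
rows = cur.execute("SELECT * FROM vw_products").fetchall()
print(rows)  # [(1, 'widget', 4.0)]
conn.close()
```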
This document provides instructions for configuring the database development view in Eclipse to access and work with a Derby database. It describes how to install the database development plugin, create a new Derby connection, specify driver details, and access the Derby database from the command line. It also gives an overview of exploring the database structure in Eclipse and editing, loading, and extracting data from tables.
The document discusses different database management systems like Microsoft SQL Server and MySQL. It covers how to create databases, tables, and queries in both SQL Server Management Studio and MySQL Query Browser. Examples are provided of creating databases and tables using SQL scripts as well as executing queries and viewing the results in the respective management tools.
- The document discusses setting up Microsoft Access databases and connecting them to a Visual Basic project to display data in forms using DataGridView controls.
- It provides steps for adding a database file to a project, configuring a data connection, selecting tables and columns as data sources, and formatting DataGridView controls to display the bound data.
- Two forms are created - one to display course data and another for student data by dragging DataGridView controls and configuring them to show records from tables in the Access database file.
The document discusses interfacing with end users in ASP.NET. It provides two programming models - Web Forms and WCF Services. Web Forms enables creating user interfaces and application logic, while WCF Services enables remote server-side functionality access. It also discusses creating a basic web form in ASP.NET that displays the current date and time when a button is clicked to demonstrate the Web Forms model. Common controls like labels, textboxes, buttons are also summarized with their properties and events.
Cloud and Ubiquitous Computing manual Sonali Parab
This manual consist of cloud and Ubiquitous Computing practicals of the following topics:
1.Implement Windows / Linux Cluster,
2.Developing application for Windows Azure,
3.Implementing private cloud with Xen Server,
4.Implement Hadoop,
5.Develop application using GAE,
6.Implement VMWAre ESXi Server,
7.Native Virtualization using Hyper V,
8.Using OpenNebula to manage heterogeneous distributed data center infrastructures.
( 5 ) Office 2007 Create A Business Data CatologLiquidHub
This document provides instructions for creating a Business Data Catalog in SharePoint that connects to an AdventureWorks database. It involves building metadata that defines entities, methods, filters, and actions. The metadata is used to generate an XML file that can then be imported into SharePoint to register the database. The exercises walk through defining each component in the metadata file and importing it to create a Business Data Catalog application in SharePoint, making the AdventureWorks data available for use.
Treinamento prático com exercícios do tipo step-by-step para você aprender a construir modelos de data mining no SQL Server para determinar padrões e tendências através do uso dos dados. Com este treinamento você será capaz de:
Criar modelos data mining
Visualizar gráficos
Criar uma consulta preditiva
Modelo "Time Series"
This document provides an introduction to creating an OLAP (Online Analytical Processing) project in Microsoft SQL Server Analysis Services (SSAS) 2012. It discusses connecting to data sources, creating dimensions and hierarchies, building cubes, and defining calculations and KPIs. The tutorial uses a sample product inventory dataset to demonstrate how to design and deploy an SSAS project that can then be accessed using Microsoft Excel for analysis and reporting.
( 5 ) Office 2007 Create A Business Data CatologLiquidHub
This lab teaches how to create a business data catalog in SharePoint. The steps include:
1. Creating an XML metadata file that connects to the AdventureWorks database and defines entities, methods, filters, and an IDEnumerator.
2. Importing the XML file into SharePoint's Business Data Catalog.
3. Managing permissions on the catalog's entities so users can access the external business data now exposed in SharePoint.
This document provides instructions for a practice in an Oracle Database 11g: PL/SQL Fundamentals course. It includes:
1. Instructions on setting up the workspace and creating a database connection in SQL Developer.
2. A multi-step practice assignment involving browsing database tables, writing SQL queries, and creating PL/SQL blocks with variables, comments, and logic.
3. Hints that the solutions to practices can be found in an appendix and that students should save their work in a provided labs folder.
Tutorial on how to load images in crystal reports dynamically using visual ba...Aeric Poon
This tutorial will show you how to create a Visual Basic 6 project which will generate a report using Seagate Crystal Reports 8.5 Developer Edition. You will save the path of the image files in a MS Access database where it is protected by password. This project will use an external Crystal Report file and will be previewed using Crystal Viewer control.
Creating a repository using the oracle business intelligence administration toolRavi Kumar Lanke
This 6 hour tutorial shows how to build an Oracle BI metadata repository using the Administration Tool. The document outlines the steps to:
1. Create a new repository called BISAMPLE and import metadata from the BISAMPLE schema including 5 tables.
2. Verify the connection by updating row counts and viewing data.
3. Create aliases for the imported physical tables.
4. Generate physical keys and joins between the tables in the Physical layer.
The tutorial then previews building the Business Model and Mapping layer in the next section.
The tutorial describes the following topics in detail
CREATING AN ADF APPLICATION
DEPLOYING & RUNNING ADF APPLICATION ON WEBLOGIC SERVER
ADF DATA VISUALIZATION COMPONENTS
CREATING MORE COMPLEX BUSINESS COMPONENTS
CREATING MULTIPLE PAGE WEBSITES – PAGE FLOWS
CREATING JEE5 STATELESS SESSION EJBS
CREATING JAX-WS WEB SERVICES
ADDING THE NEW SERVICES INTO THE ADF APPLICATION
DATA VALIDATION (OPTIONAL)
Odi 11g master and work repository creation stepsDharmaraj Borse
The document outlines the steps to create and connect to ODI 11g Master and Work repositories. This includes:
1. Creating schemas and granting privileges for the Master and Work repositories in the database.
2. Using the ODI Studio to create the Master repository by running a wizard and configuring the connection.
3. Creating a login for the Master repository.
4. Creating a Work repository by running a wizard, configuring properties, and creating a login for it.
5. Disconnecting from the Master and connecting to the newly created Work repository.
This document provides steps to create a universe with SAP BusinessObjects XI 4.0's new Information Design Tool. It describes creating a local project, connecting to a database to define a connection, publishing the connection as a secure connection, creating a data foundation using the connection, adding tables to the data foundation, creating a business layer from the data foundation, publishing the universe to a repository, and verifying it is available for use. The key components are connections, data foundations, business layers, and published universes.
This document provides an overview of developing applications using Oracle Application Express (APEX). It discusses the APEX architecture and components used for browser-based application development like the Application Builder, SQL Workshop, and Administrator. The benefits of APEX are also summarized like rapid development, mobile support, and use cases. Steps for creating a demo "help desk" application are outlined, including designing the database tables, loading sample data, and basic application navigation.
How to develop a gateway service using code based implementationnitin2517
This document provides instructions for developing an OData service using SAP NetWeaver Gateway to expose a list of products. It describes creating an entity model based on an existing DDIC structure, generating runtime objects, and implementing GET_ENTITYSET and GET_ENTITY methods to retrieve product data using function modules. The service returns product data in the same structure as the underlying DDIC structure.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Creating a Data Warehouse using
SQL Server Integration Services
After completing this lab, you will be able to:
Create a Data Warehouse from SQL Scripts
Populate the Data Warehouse using SSIS
This lab focuses on the concepts in this module and as a result may not comply with Microsoft
security recommendations.
Exercise 1:
Explore the Data
In this exercise, you will explore the data in the Adventure Works database using the SQL Server
Management Studio.
SQL Server Management Studio is the administration tool introduced with SQL Server 2005. It
combines the functionality of the Enterprise Manager snap-in and the Query Analyzer. Although it is
primarily a tool for administering one or more SQL Server instances, you can also use SQL Server
Management Studio to execute queries and scripts and to manage SQL Server projects.
The Adventure Works database is a sample database for the fictional bicycle company Adventure
Works. Provided by Microsoft, it replaces the Northwind sample database as the basis for the
samples shipped with SQL Server 2008.
Task 1: Open SQL Server Management Studio and connect to your server
From the Windows task bar, select Start | All Programs | Microsoft SQL Server 2008 | SQL
Server Management Studio.
When the Connect to Server dialog box opens, verify that SQL Server is selected as the Server
type, and verify that Windows Authentication is selected as the authentication method.
Change the server name to (local)
Estimated time to complete this lab: 45 minutes
Figure: The Connect to Server dialog
Click Connect.
Note the various areas of the SQL Server Management Studio:
In the upper-left corner of the window you will see the Object Explorer pane.
Figure: Object Explorer
The Object Explorer pane allows you to browse the objects that are available in the SQL Server
instance you are currently connected to.
Task 2: Explore the Adventure Works Database
1. In the Object Explorer expand the AdventureWorks database from inside the Databases folder.
You will notice that the database has folders for Tables, Views, Programmability, and so on. Our
focus will be on the Tables.
Note: Views may also be used; however, we will not take them into consideration in this lab.
2. Expand the Tables folder.
3. Examine the following tables:
a. Production.Product
b. Production.ProductCategory
c. Production.ProductSubCategory
d. Sales.SalesOrderHeader
e. Sales.SalesOrderDetail
You may expand a table to view its available columns – this will show you the fields available
and their data types. You can also open a table to see the data it contains by right-clicking it and
selecting Select Top 1000 Rows.
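If you prefer a query window to the right-click menu, the same exploration can be done in T-SQL. The join below is only an illustration of how the three product tables relate to each other:

```sql
USE AdventureWorks;

-- Preview the first 1000 products
SELECT TOP (1000) * FROM Production.Product;

-- See how products roll up to subcategories and categories
SELECT c.Name AS Category,
       s.Name AS SubCategory,
       p.Name AS Product
FROM Production.Product p
JOIN Production.ProductSubCategory s
  ON p.ProductSubcategoryID = s.ProductSubcategoryID
JOIN Production.ProductCategory c
  ON s.ProductCategoryID = c.ProductCategoryID;
```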
Exercise 2:
Create the Data Warehouse Structure (schema)
In this exercise you will create the Data Warehouse structure. Due to time restrictions, this will be
done with a pre-written SQL script. Your presenter will step through the script with you.
The script is available for download from –
http://sharepoint.ssw.com.au/Training/CallDesign/Documents/01_DWCreateScript.sql
Task 1: Open the SQL Script
1. Save the DWCreateScript.sql from the URL shown above.
2. In the SQL Management Studio click on File > Open and select File.
3. Browse to the location of the DWCreateScript.sql file.
4. Select the DWCreateScript.sql file and click Open.
Figure: Click Open to Open the File
5. You may be presented with a security dialog. If you are, ensure that the Server Name is correct
and that the Authentication is set to Windows Authentication.
6. Click Connect.
Figure: The DWCreateScript.sql file will now be opened in the SQL Management Studio.
7. Read through the script file to see how the Database, and the Dimension and Fact tables are
created.
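The authoritative contents are in the downloaded file itself. As a rough sketch of the pattern you will see there, a dimension table and a fact table are typically created along these lines (the column definitions below are illustrative, not copied from the script):

```sql
CREATE DATABASE AdventureWorksDW_UTS;
GO
USE AdventureWorksDW_UTS;
GO

-- Dimension table: descriptive attributes used to slice the facts
CREATE TABLE dbo.DimProductCategory (
    ProductCategoryID          int NOT NULL PRIMARY KEY,  -- source system key
    EnglishProductCategoryName nvarchar(50) NOT NULL
);

-- Fact table: keys into the dimensions plus numeric measures
CREATE TABLE dbo.SalesOrderDetail (
    SalesOrderID int      NOT NULL,
    ProductID    int      NOT NULL,
    OrderQty     smallint NOT NULL,
    UnitPrice    money    NOT NULL,
    LineTotal AS (OrderQty * UnitPrice)  -- computed here, so it is not loaded by SSIS
);
```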
Task 2: Execute the Script
1. Press F5 to execute the Script. Alternatively you can click the button.
2. Verify that the script has run without any errors or warnings.
3. Refresh the Object Explorer
Figure: Refresh the Databases list
4. Explore the AdventureWorksDW_UTS database.
5. Open the DimDate table to verify that it has been populated.
Figure: Open the DimDate table
Note: You may also open the other tables to see that they are empty.
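You can also confirm this with row counts from a query window (a quick sketch; substitute any of the new tables):

```sql
USE AdventureWorksDW_UTS;

SELECT COUNT(*) AS DateRows     FROM dbo.DimDate;            -- populated by the script
SELECT COUNT(*) AS CategoryRows FROM dbo.DimProductCategory; -- empty until the SSIS load
```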
Exercise 3:
Create the SSIS Package
In this exercise, you will use the SQL Server Business Intelligence Development Studio (Visual
Studio 2008) to create a SSIS project. This project will contain the package used to populate the
Data Warehouse.
Task 1: Create the Integration Services Project
1. From the Start Menu select All Programs | Microsoft SQL Server 2008 | SQL Server
Business Intelligence Development Studio.
2. Click File | New | Project
3. From the Project Types box select Business Intelligence Projects.
4. Select Integration Services Project.
5. Name the Project LoadDW.
6. Change the location to C:\temp\CallDesign
7. Click OK.
You have now created the Integration Services Project.
Task 2: Create the Connection Objects
1. Right Click in the Connection Managers window (lower window).
2. Select New OLE DB Connection…
3. Click on the New button.
4. Enter localhost as the Server Name
5. Select the AdventureWorks database from the list.
6. Click OK.
7. Click OK
8. Repeat steps 1 through 7 for the AdventureWorksDW_UTS database.
You should now have the following connections.
Task 3: Initialize Data Warehouse
1. Click View | Toolbox to display the Toolbox.
2. From Control Flow Items drag an Execute SQL Task onto the Control Flow tab.
3. Right click on the Execute SQL Task and select Edit to display the Execute SQL Task Editor.
4. Set the Name to Initialize DW
5. Set the Description to Clear down DW Tables
6. Set the Connection to Localhost.AdventureWorksDW_UTS
7. Set the SQL Statement to procDWInitialize
8. Click OK
9. Right Click on the Initialize DW task and select Execute Task
10. Verify that the task ran successfully. (It should be green.)
11. Select Debug | Stop Debugging (Shift + F5)
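The lab does not show the body of procDWInitialize. Conceptually, an initialization procedure of this kind simply clears the warehouse tables so the package can be re-run safely; a hypothetical sketch (not the actual procedure from the script) might look like:

```sql
CREATE PROCEDURE dbo.procDWInitialize
AS
BEGIN
    -- Clear fact tables first so no rows reference the dimensions
    DELETE FROM dbo.SalesOrderDetail;
    DELETE FROM dbo.SalesOrderHeader;

    -- Then clear the dimensions; DimDate is left alone, as it is pre-populated
    DELETE FROM dbo.DimProduct;
    DELETE FROM dbo.DimProductSubCategory;
    DELETE FROM dbo.DimProductCategory;
END;
```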
Task 4: Load Data
1. From the Control Flow Items drag a Sequence Container onto the Control Flow tab.
Figure: A Sequence Container is a place to store other items, making it easy to manage related
tasks in one place.
2. Rename the Sequence Container to Load DW
3. Click on the Initialize DW task
4. Drag the Green Arrow onto the Load DW sequence container.
5. From Control Flow Items drag a Data Flow Task into the Load DW sequence container.
6. Rename the Data Flow task Load Product Category
7. Right Click on Load Product Category and select Edit – this will take you to the Data Flow
tab.
8. Drag an OLE DB Source from the Data Flow Sources section of the Toolbox onto the Data
Flow tab.
9. Rename the OLE DB Source to Product Category DB
10. Right Click on Product Category DB and select Edit
11. Select Localhost.AdventureWorks from the Connection Manager dropdown.
12. Select Table or View as the Data Access Mode
13. Select [Production].[ProductCategory] as the Name of the table.
14. Click OK
15. Drag an OLE DB Destination from the Data Flow Destinations section of the Toolbox onto
the Data Flow tab.
16. Rename the OLE DB Destination to Product Category DW
17. Drag the Green Arrow from Product Category DB onto Product Category DW
18. Right Click on Product Category DW and select Edit
19. Select Localhost.AdventureWorksDW_UTS from the Connection Manager dropdown.
20. Select Table or View – fast load as the Data Access Mode
21. Select [dbo].[DimProductCategory] as the Name of the table.
22. Click Mappings in the left pane to display the column mappings.
23. Click on the Name column and drag it across to EnglishProductCategoryName.
24. Click OK
25. Click on the Control Flow tab.
26. Right Click on the Load DW sequence container
27. Select Execute Container
28. Repeat the relevant steps from 5 to 26 for the following four tables:
a. Production.Product
b. Production.ProductSubCategory
i. Map Name to EnglishProductSubcategoryName
ii. Map ProductCategoryID to ProductCategoryID
c. Sales.SalesOrderHeader
d. Sales.SalesOrderDetail
i. Note: You will have to remove the mapping on the SalesOrderDetail.LineTotal
column, as this is a calculated column in our DW, so we do not need to import it.
You can remove the mapping by right-clicking LineTotal in the Input
Column and selecting Delete.
29. Press F5 to Execute the entire package.
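Each data flow simply copies rows from a source table into a warehouse table using the mappings you defined. In T-SQL terms, the Load Product Category flow is roughly equivalent to the following (an illustrative sketch; the package itself, not this statement, performs the load):

```sql
-- Copy category names from the source database into the warehouse dimension
INSERT INTO AdventureWorksDW_UTS.dbo.DimProductCategory (EnglishProductCategoryName)
SELECT Name
FROM AdventureWorks.Production.ProductCategory;
```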
Exercise 5:
Verify the Data
In this exercise, you will return to the SQL Management Studio and verify that the Data Warehouse
has been populated. You will also run a sample query on our Data Warehouse to show it in action.
Task 1: Open SQL Server Management Studio and connect to your server
From the Windows task bar, select Start | All Programs | Microsoft SQL Server 2008 | SQL
Server Management Studio.
When the Connect to Server dialog box opens, verify that SQL Server is selected as the Server
type, and verify that Windows Authentication is selected as the authentication method.
Figure : The Connect to Server Dialog
Click Connect.
Task 2: Explore the AdventureWorksDW_UTS Database
4. In the Object Explorer expand the AdventureWorksDW_UTS database from inside the
Databases folder.
5. Expand the Tables folder.
6. Examine the following tables:
a. DimProduct
b. DimProductCategory
c. DimProductSubCategory
d. SalesOrderHeader
e. SalesOrderDetail
f. DimDate
You may expand a table to view its available columns – this will show you the fields available
and their data types. You can also open a table to see the data it contains by right-clicking it and
selecting Select Top 1000 Rows.
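As a sample query to show the warehouse in action, a typical star-schema aggregation joins the fact table through the dimensions. The join columns below are assumptions about the generated schema, so adjust them to match your tables:

```sql
USE AdventureWorksDW_UTS;

-- Total quantity ordered per product category
SELECT c.EnglishProductCategoryName AS Category,
       SUM(d.OrderQty)              AS TotalQty
FROM dbo.SalesOrderDetail d
JOIN dbo.DimProduct p
  ON d.ProductID = p.ProductID
JOIN dbo.DimProductSubCategory s
  ON p.ProductSubcategoryID = s.ProductSubcategoryID
JOIN dbo.DimProductCategory c
  ON s.ProductCategoryID = c.ProductCategoryID
GROUP BY c.EnglishProductCategoryName;
```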