Ground floor introduction to the tools and best practices surrounding SQL Server’s built-in web-based, enterprise-level reporting engine. We'll start with what SSRS is and what you'll use it for, and give top tips for developing your first reports.
Microsoft Data Platform - What's included (James Serra)
The pace of Microsoft product innovation is so fast that even though I spend half my days learning, I struggle to keep up. And as I work with customers I find they are often in the dark about many of the products that we have since they are focused on just keeping what they have running and putting out fires. So, let me cover what products you might have missed in the Microsoft data platform world. Be prepared to discover all the various Microsoft technologies and products for collecting data, transforming it, storing it, and visualizing it. My goal is to help you not only understand each product but understand how they all fit together and their proper use cases, allowing you to build the appropriate solution that can incorporate any data in the future no matter the size, frequency, or type. Along the way we will touch on technologies covering NoSQL, Hadoop, and open source.
Power BI Report Server Enterprise Architecture, Tools to Publish reports and ... (Vishal Pawar)
To improve the performance, sustainability, security and scalability of enterprise-grade Power BI implementations with constant velocity, we need to adhere to best practices with solid architecture.
In this session Vishal will go over the Power BI ecosystem with a quick example, the evolution of Power BI Report Server from its inception to date, an architecture for enterprise Power BI Report Server, and the various tools available to publish reports (SSDT, SSRS, Power BI Desktop (optimized version), Report Builder, and Mobile Report Builder), along with best practices for Power BI Report Server.
Achieving Lakehouse Models with Spark 3.0 (Databricks)
It’s very easy to be distracted by the latest and greatest approaches with technology, but sometimes there’s a reason old approaches stand the test of time. Star Schemas & Kimball is one of those things that isn’t going anywhere, but as we move towards the “Data Lakehouse” paradigm – how appropriate is this modelling technique, and how can we harness the Delta Engine & Spark 3.0 to maximise its performance?
Embarking on building a modern data warehouse in the cloud can be an overwhelming experience due to the sheer number of products that can be used, especially when the use cases for many products overlap others. In this talk I will cover the use cases of many of the Microsoft products that you can use when building a modern data warehouse, broken down into four areas: ingest, store, prep, and model & serve. It’s a complicated story that I will try to simplify, giving blunt opinions of when to use what products and the pros/cons of each.
Data Discovery at Databricks with Amundsen (Databricks)
Databricks used to use a static, manually maintained wiki page for internal data exploration. We will discuss how we leverage Amundsen, an open source data discovery tool from Linux Foundation AI & Data, to improve productivity with trust by programmatically surfacing the most relevant datasets and SQL analytics dashboards, along with their important information, internally at Databricks.
We will also talk about how we integrate Amundsen with Databricks world class infrastructure to surface metadata including:
Surface the most popular tables used within Databricks
Support fuzzy search and facet search for datasets
Surface rich metadata on datasets:
Lineage information (downstream table, upstream table, downstream jobs, downstream users)
Dataset owner
Dataset frequent users
Delta extended metadata (e.g., change history)
ETL job that generates the dataset
Column stats on numeric type columns
Dashboards that use the given dataset
Use Databricks data tab to show the sample data
Surface metadata on dashboards including: create time, last update time, tables used, etc.
Last but not least, we will discuss how we incorporate internal user feedback and provide the same discovery productivity improvements for Databricks customers in the future.
Building Lakehouses on Delta Lake with SQL Analytics Primer (Databricks)
You’ve heard the marketing buzz, maybe you have been to a workshop and worked with some Spark, Delta, SQL, Python, or R, but you still need some help putting all the pieces together? Join us as we review some common techniques to build a lakehouse using Delta Lake, use SQL Analytics to perform exploratory analysis, and build connectivity for BI applications.
Microsoft SQL Server Analysis Services (SSAS) - A Practical Introduction (Mark Ginnebaugh)
Patrick Sheehan of Microsoft covers platform architecture, data warehousing methodology, and multi-dimensional cube development.
You will learn:
* How to develop and deploy data cubes using SQL Server Analysis Services (SSAS)
* Optimal data warehouse methodology for use with SSAS
* Tips/tricks for designing & building cubes over no warehouse/suboptimal source system (it happens)
* Cube processing types - How/why to use each
* Cube design practices + How to build and deploy cubes!
Building an Effective Data Warehouse Architecture (James Serra)
Why use a data warehouse? What is the best methodology to use when creating a data warehouse? Should I use a normalized or dimensional approach? What is the difference between the Kimball and Inmon methodologies? Does the new Tabular model in SQL Server 2012 change things? What is the difference between a data warehouse and a data mart? Is there hardware that is optimized for a data warehouse? What if I have a ton of data? During this session James will help you to answer these questions.
Agile Methodology Approach to SSRS Reporting: how to utilize principles from the Agile project management process to create better SSRS reports.
Deliver Dynamic and Interactive Web Content in J2EE Applications (infopapers)
F. Stoica, Deliver dynamic and interactive Web content in J2EE applications, Proceedings of the Central and East European Conference in Business Information Systems, Cluj-Napoca, Romania, ISBN 973-656-648-X, pp. 780-789, 2004
Generate reports with SSRS - SQL Server Reporting Services: This session will be a cornucopia of three sub-sessions. The first part will aim to convince the skeptics: why should every organization consider SQL Server Reporting Services as part of its front-end solution? What will SSRS do better than a typical web application/site or a client-server application? The second portion will be a quick demo of the possibilities and will be the shortest. The final part will cover best practices and tips from the field, as well as implementation techniques.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs (Alex Pruden)
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps is. We also held a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Enhancing adoption of Open Source Libraries: a case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are the slides of a talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
3. Introducing Reporting A report is a structured arrangement of information. The report information comes from data in a business application and can be derived from a variety of sources. Historically, reports were available only as part of the business applications themselves.
4. Introducing Reporting (cont’d) Preprogrammed reports rarely answered all the questions that business users needed answered, so report-generation tools shipped with the applications. Tools that could deliver reports to a large number of users were expensive, complex to implement and manage, and difficult to integrate into custom applications and technical infrastructures.
5. A Reporting Platform SSRS was introduced in 2004 as an additional component of Microsoft SQL Server 2000. SSRS is a platform of technologies rather than a single application: an integrated set of applications for report development, management, and viewing. People with different roles and skill sets (a report developer, an IT administrator, or a business user) each work with some aspect of Reporting Services.
6. One of the primary reasons that organizations implement Reporting Services is to provide managed reports to a large number of internal users.
7. SSRS Report Types Managed Report: detailed operational data, gathered from a variety of data sources, organized into a central repository, with standard formatting. Ad Hoc Report: built in Report Builder by users with limited technical skills; simple reports saved privately or shared. Embedded Report: rendered inside portals or custom applications.
8. Reporting Life Cycle The reporting life cycle follows a report from creation to delivery. SSRS supports three phases of the reporting life cycle: report development, management of the report server, and report access by users. Rendering: reproducing a report in a variety of formats.
9. Report Development Selecting data for the report from a variety of data sources. Organizing the report layout and formatting. Interactive features: sorting, parameters, hide/show details, links, and a document map. Previewing the report for testing. Deploying managed reports to the report server or to a Microsoft Office SharePoint Server 2007 Web site. Storing ad hoc reports on your computer, or deploying them to the report server or a SharePoint site.
10. Report Administration Manage the technical environment for the reporting platform. Configure the report server: change the data source connection information and optionally integrate the report server with SharePoint. Fine-tune the report server performance. Manage the location, security, and execution properties of reports: cache the report for faster viewing, keep multiple snapshots of a report, place related reports in a folder, and apply security to the report or the folder.
11. Report Access The most common way is to use a browser and navigate to a central report repository. A portal application with links can guide users to reports in Reporting Services. Corporate applications can embed reports. Users can store a selection of reports in a personal folder, or subscribe to a report to receive it on a scheduled basis in an e-mail inbox, a network file share, or a SharePoint document library. A report can be rendered in HTML, PDF, TIFF, CSV, or XML.
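Browser access ultimately resolves to the report server's URL-access syntax, where `rs:` arguments address the server itself and plain name=value pairs supply report parameters. The sketch below only assembles such a URL; the server, folder, and report names are placeholders, not real resources.

```python
from urllib.parse import quote

def report_render_url(server, report_path, fmt="PDF", parameters=None):
    """Build a native-mode SSRS URL-access string that renders a report.

    `rs:` arguments target the report server; plain name=value pairs
    are passed to the report as parameters. The server and report
    names used below are hypothetical.
    """
    url = (f"http://{server}/ReportServer?{quote(report_path)}"
           f"&rs:Command=Render&rs:Format={fmt}")
    for name, value in (parameters or {}).items():
        url += f"&{quote(name)}={quote(str(value))}"
    return url

# Render a hypothetical sales report as PDF for one year.
print(report_render_url("myserver", "/Sales/YearlySummary", "PDF", {"Year": 2008}))
```

Swapping `rs:Format` for another value (HTML4.0, CSV, XML, IMAGE) selects a different rendering extension, which is exactly the rendering step described in the life cycle above.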
12. Reporting Services Architecture A variety of components, extensions, and application programming interfaces (APIs). A multi-tier architecture with data, application, and server tiers. The modular nature of the architecture provides flexibility: components can be distributed across multiple servers for scalability. Native mode (the default configuration) runs as a stand-alone application server; integrated mode runs within a SharePoint farm.
14. SSRS Architecture (cont’d) Data Tier Consists of a pair of databases: The ReportServer database is the primary database for permanent storage of reports, report models, and other data related to the management of the report server. The ReportServerTempDB database stores session cache information and cached instances of reports. In a scale-out deployment of Reporting Services across multiple report servers, these two databases in the data tier are the only shared requirements. These databases do not need to be on the same server as the report server.
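The division of storage responsibilities above can be summarized in a small lookup sketch; the artifact labels are illustrative descriptions, not catalog identifiers from the actual database schema.

```python
# Which data-tier database holds which kind of Reporting Services
# artifact, per the description above: ReportServer for permanent
# management data, ReportServerTempDB for transient session/cache data.
STORAGE = {
    "report definition": "ReportServer",
    "report model": "ReportServer",
    "report snapshot": "ReportServer",       # snapshots persist in the primary DB
    "session cache": "ReportServerTempDB",
    "cached report instance": "ReportServerTempDB",
}

def database_for(artifact):
    """Return the data-tier database that stores the given artifact kind."""
    return STORAGE[artifact]

print(database_for("cached report instance"))  # ReportServerTempDB
```

Note that a snapshot and a cached instance land in different databases even though both hold a report in its intermediate format, which matters for what survives a ReportServerTempDB rebuild.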
15. SSRS Architecture (cont’d) Application Tier A collection of tools that you use to develop reports and manage the reporting platform. Report Development Tools: Report Designer: full-featured, available as a project template in SQL Server BI Dev Studio. Report Builder: for ad hoc reports; Report Builder 1.0 and Report Builder 2.0. Report Builder was first available in Reporting Services 2005. Using Report Builder 1.0, you can build a simple report that displays data from a single data source as defined by a report model. Report Builder 2.0 is new in Reporting Services 2008. Model Designer: in BI Dev Studio, used to develop and publish a report model for SQL Server, Oracle, and Teradata databases. Programmatic Interface for Report Development: APIs that allow you to build a custom report development tool around the Report Definition Language (RDL).
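Because RDL is plain XML, any tool can emit it, which is what makes a custom report development tool possible. The sketch below builds a deliberately minimal and incomplete skeleton in the 2008 RDL namespace; a usable report would also need data sources, datasets, and report items.

```python
import xml.etree.ElementTree as ET

# Namespace of the RDL schema shipped with Reporting Services 2008.
RDL_NS = "http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition"

def minimal_rdl(width="6.5in", body_height="2in"):
    """Emit a bare-bones RDL skeleton (not a complete, deployable report)."""
    ET.register_namespace("", RDL_NS)  # serialize without a namespace prefix
    report = ET.Element(f"{{{RDL_NS}}}Report")
    body = ET.SubElement(report, f"{{{RDL_NS}}}Body")
    ET.SubElement(body, f"{{{RDL_NS}}}Height").text = body_height
    ET.SubElement(report, f"{{{RDL_NS}}}Width").text = width
    ET.SubElement(report, f"{{{RDL_NS}}}Page")
    return ET.tostring(report, encoding="unicode")

print(minimal_rdl())
```

Report Designer and Report Builder both produce and consume this same XML dialect, which is why reports move freely between the tools and the server.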
16. SSRS Architecture (cont’d) Report Viewers Report Manager: native mode, a Web application. Page through a large report, search, zoom in or out to resize a report, render a report to a new format, print the report, and change report parameters. SharePoint: integrated mode. Open the report in a document library or in a Web Part; search, zoom, render, print, and select parameters. Programmatic Interface for Viewing or Delivering Reports: build a custom application by using the Reporting Services API or by accessing reports through URL endpoints; extend standard functionality by customizing security, data processing, rendering, or delivery options. Note that you cannot use both viewing tools in the same report server instance.
17. SSRS Architecture (cont’d) Management Tools Reporting Services Configuration Manager: configure a local or remote Reporting Services installation. Assign service accounts for running the service and for processing reports in scheduled operations; configure the URLs used by the Reporting Services application; create the report server databases that host the application data; convert a report server to native mode or integrated mode; configure e-mail delivery of reports; connect a report server to a scale-out deployment. SQL Server Management Studio: the management interface for many of the server components in SQL Server, including the report server.
18. SSRS Architecture (cont’d) Management Tools (cont’d) SQL Server Configuration Manager: start or stop the report server Windows service. Report Manager: native mode. Organize, configure, and secure reports; manage report models and subscriptions. Note: in SharePoint integrated mode, you perform these same tasks through the SharePoint interface. Programmatic interface for management: a custom application built on the Reporting Services API.
20. SSRS Architecture (cont’d) Server Tier The central layer of the Reporting Services architecture, implemented as a Windows service. Processor components respond to and process requests to the report server; server extensions are subcomponents to which very specific functions are delegated. Processor components: the Report Processor receives all requests that require execution and rendering of reports; the Scheduling and Delivery Processor receives all requests for scheduled events such as snapshots and subscriptions.
21. SSRS Architecture (cont’d) Server Tier (cont’d) Report Processor: for an on-demand report, calls a data processing extension to execute the report queries and then merges the query results into a temporary format. A cached report is stored in the ReportServerTempDB database in the temporary format; a report snapshot is stored in the ReportServer database in the temporary format. A rendering extension then produces the finished output. Scheduling and Delivery Processor: a user creates a snapshot or subscription schedule for a report, and the Scheduling and Delivery Processor creates a SQL Server Agent job. When the job executes, SQL Server Agent sends a request to the Scheduling and Delivery Processor, which forwards it to the Report Processor to execute and render the report. The Report Processor returns the finished report to the Scheduling and Delivery Processor, which calls a delivery extension to e-mail the report or store it on a network share.
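The subscription flow above can be modeled as a short sketch. This is an illustration of the hand-offs between components, not the real SSRS classes; all names here are invented for the example:

```python
# Illustrative model of the subscription flow: schedule -> Agent job ->
# Scheduling and Delivery Processor -> Report Processor -> delivery extension.

class ReportProcessor:
    def render(self, report, fmt):
        return f"<{fmt} rendering of {report}>"

class EmailDelivery:  # stands in for the e-mail delivery extension
    def deliver(self, rendered, to):
        return f"mailed {rendered} to {to}"

class SchedulingAndDeliveryProcessor:
    def __init__(self, report_processor, delivery):
        self.report_processor = report_processor
        self.delivery = delivery
        self.jobs = []

    def create_schedule(self, report, fmt, to):
        # In SSRS this step creates a SQL Server Agent job.
        self.jobs.append((report, fmt, to))

    def on_agent_fired(self, job):
        # SQL Server Agent calls back when the job executes; forward the
        # request to the Report Processor, then hand the finished report
        # to a delivery extension.
        report, fmt, to = job
        rendered = self.report_processor.render(report, fmt)
        return self.delivery.deliver(rendered, to)

sdp = SchedulingAndDeliveryProcessor(ReportProcessor(), EmailDelivery())
sdp.create_schedule("/Sales/QuarterlySales", "PDF", "sales@contoso.com")
result = sdp.on_agent_fired(sdp.jobs[0])
```

The key design point the sketch captures: the Scheduling and Delivery Processor never renders anything itself; it only brokers between SQL Server Agent, the Report Processor, and the pluggable delivery extensions.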
22. SSRS Architecture (cont’d) Server Extensions Subcomponents that the processors call to perform very specific tasks. This modular approach lets you disable an extension or add your own. SSRS includes five types of server extensions: authentication, data processing, report processing, rendering, and delivery. Authentication extension: Windows authentication by default; only one authentication extension can be active per report server instance, and you can substitute a custom security extension. Data processing extension: connects to a data source, executes a query, and returns the query results. SSRS includes data processing extensions for SQL Server, Analysis Services, Hyperion Essbase, Oracle, SAP NetWeaver Business Intelligence, Teradata, Object Linking and Embedding Database (OLE DB), and Open Database Connectivity (ODBC).
23. SSRS Architecture (cont’d) Server Extensions (cont’d) Report processing extension: an optional component used to process custom report items from third-party vendors. For example, you can obtain charting or mapping add-ins to enhance your reports. Rendering extension: converts a report from the temporary format into a finished format for the user. The built-in rendering extensions include HTML, Excel, CSV, XML, Image, PDF, and Microsoft Office Word, and you can develop your own.
24. SSRS Architecture (cont’d) Server Extensions (cont’d) Delivery extension: handles scheduled report requests. The e-mail delivery extension sends the report embedded in the message body, attached as a file, or referenced as a URL link to the report on a report server. The file share delivery extension saves a report in a specified format to a network share. The null delivery provider is a mechanism for caching a report in advance. A custom delivery extension can target a fax device, a printer, or another application.
25. Installing Reporting Services SSRS is a feature included in every edition of SQL Server except SQL Server 2008 Express and SQL Server Compact. Each edition supports a different set of features to meet specific scalability, performance, and pricing requirements: SQL Server 2008 Express with Advanced Services, Web, Workgroup, Standard, Enterprise, Developer, and Evaluation.
26. Notes about Modes When you switch a report server to a different mode, you cannot migrate reports that you previously published to it, so keep a copy of the report definitions to publish again after the switch.
27. Notes about Modes (cont’d) You can use a SQL Server 2000 or SQL Server 2005 database for Reporting Services in native mode, but you must use a SQL Server 2008 database if you plan to run Reporting Services in SharePoint integrated mode.
28. Notes about Modes (cont’d) You can still use a native-mode report server with SharePoint by using SharePoint Web Parts in a partial integration mode.
31. Report Designer Works with multiple reports: the report server project serves as a container of reports and can define a shared data source. Data regions: tables, matrixes, lists, and charts.