Australian Service Manager User Group. Presentation deck from our Knowledge Event in February 2015. Head to our website to see a recording of the event.
2. Objective
• To share knowledge of SCSM
• To help users get the most from SCSM
• To facilitate an Australia-wide community that can network and learn from peers
• To help users of Cireson apps get the most from their investments
Spread the word
• Tell others about the group
• Share items on social
• Tell us about topics or questions for future knowledge events
This event is being recorded.
Welcome
3. Agenda
Item                                      | Presenter                | Timing
Welcome                                   | John Mustac (Systemology) | 2:00pm
SCSM knowledge: Class System / Data Model | Mat Barnier (Systemology) | 15-30 mins
SCSM Connectors: Best practices           | Chris Ross (Cireson)      | 15-30 mins
Open Q&A                                  | Open                      | 30+ mins
Close                                     |                           | 3:30pm
8. Model Database
Modelling in System Center Service Manager
• All hardware, software, services, and other logical components that you want Service Manager to be aware of are described in a model.
• A model is a computer-consumable representation of software or hardware components that captures the nature of the components and the relationships between them. In ITIL or MOF these are Configuration Items (CIs).
• An example: to monitor an email messaging service:
  • Configuration-level monitoring involves monitoring a variety of components (mailbox servers, front-end servers, operating system components, disk subsystems, Domain Controllers, or DNS servers).
  • Business-service-level monitoring requires discovering and monitoring the interaction between these systems, such as monitoring whether email is flowing through the system.
9. Model Database
• Based on and extends the Operations Manager modelling system
• Uses the same terminology, management pack formats, SDK, APIs, and database support as the other System Center modules
• In Service Manager the model is extended to support:
  • Configuration items
  • Work items
  • Other categories
• Further extends the model with additional class extensions and categories

Work Item examples:
• Incidents
• Activities
• Releases
• Service Requests
• Changes
• Problems

Configuration Item examples:
• Business Services
• Environments
• Computers
• Printers
10. Model Database
Work Item Hierarchy
• Work items are the operational category of things we work with, like:
  • Incidents
  • Change Requests
  • Activities
  • Problems
  • Releases
  • Service Requests
• They inherit properties from their parent objects and extend the model
• They may also have relationships
11. Model Database
Configuration Items Hierarchy
• Configuration Items are the operational category of things we work with, like:
  • Computers
  • Business Services
  • Network Cards
  • Databases
• They inherit properties from their parent objects and extend the model
• They may also have relationships; there are different types of relationships to represent the different ways Configuration Items may relate to each other
14. Management Packs
Introduction to Management Packs
• An XML-based file that contains definitions for classes, workflows, views, forms, reports, and knowledge
• Consists of an XML manifest that defines metadata about these objects, plus references to the resources that the objects use
• Used to extend Service Manager with the definitions and information necessary to implement all or part of a service management process
• You can use a management pack to do the following:
  • Extend Service Manager with new objects
  • Extend Service Manager with new behavior
  • Store new custom objects that you created, such as a form or a template
  • Transport customizations to another Service Manager deployment, or implement the customizations in a newer deployment
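As a rough, illustrative sketch of that structure (the IDs, names, and version numbers below are hypothetical; the element layout follows the System Center management pack schema):

```xml
<!-- Illustrative management pack skeleton; all IDs/versions are hypothetical -->
<ManagementPack ContentReadable="true" SchemaVersion="2.0">
  <Manifest>
    <Identity>
      <ID>Contoso.HR.IncidentManagement</ID>
      <Version>1.0.0.0</Version>
    </Identity>
    <Name>Contoso HR Incident Management</Name>
    <References>
      <!-- Reference to the sealed MP that defines System.WorkItem.Incident -->
      <Reference Alias="WorkItem">
        <ID>System.WorkItem.Library</ID>
        <Version>7.5.1561.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
    </References>
  </Manifest>
  <TypeDefinitions>
    <!-- EntityTypes > ClassTypes: class and relationship definitions go here -->
  </TypeDefinitions>
  <Presentation>
    <!-- Forms and views go here -->
  </Presentation>
</ManagementPack>
```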
15. Classes
Introducing Service Manager Classes
• Class = property bag (set of properties)
  • Each property is defined as "name/type"
  • Properties are always of simple types such as int, string, double, etc.
  • There are no arrays or sets in a property
• A class as defined in the management pack would look similar to the following:
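The XML image from the original slide is not preserved in this transcript; as a hedged stand-in, a simple class definition might look like this (hypothetical IDs; "System" is assumed to alias the sealed System.Library pack that defines System.ConfigItem):

```xml
<!-- Sits inside TypeDefinitions > EntityTypes > ClassTypes; IDs hypothetical -->
<ClassType ID="Contoso.Printer" Base="System!System.ConfigItem"
           Accessibility="Public" Abstract="false" Hosted="false">
  <!-- A property bag: each property is a simple "name/type" pair -->
  <Property ID="Model" Type="string" />
  <Property ID="TrayCount" Type="int" />
  <Property ID="PurchaseCost" Type="double" />
</ClassType>
```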
16. Classes
Properties and Attributes of a Class
• All classes require a base class
  • Except for the root class, Entity
• A class defines all of its properties in addition to the properties it has inherited
• Allowed property values can be further constrained using property attributes in XML (see the sketch after this list):
  • MaxLength
  • CaseSensitive
  • MinValue
  • RegEx
  • Required
• In the SCSM model there are no complex properties; complex properties are emulated using relationship types
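For instance, a hedged sketch of how those attributes appear on Property elements (the property IDs and constraint values are hypothetical):

```xml
<!-- Hypothetical properties showing the constraint attributes listed above -->
<Property ID="AssetTag" Type="string" Required="true"
          MaxLength="20" CaseSensitive="false" RegEx="^CON-[0-9]{6}$" />
<Property ID="TrayCount" Type="int" MinValue="0" />
```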
18. Classes
Defining a New Class
• Define a new class to support new types of managed resources or process artifacts, or when you need to add new behavior
  • For example, managing HVAC units or overhead projectors would require a new class
• Specialising incidents into a new subset (a class called "HRIncident") also requires a new class
  • A query for HRIncident returns only the subset of Incidents that are HRIncidents
  • The new HRIncident class can have its own dedicated set of workflows
• In XML, the new class would look like the following:
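The slide's XML is not reproduced in this transcript; a hedged reconstruction of such a definition (hypothetical IDs; "WorkItem" is assumed to alias the sealed System.WorkItem.Library pack that defines System.WorkItem.Incident) could be:

```xml
<!-- A new class specialising Incident; a query for it returns only HRIncidents -->
<ClassType ID="Contoso.HRIncident" Base="WorkItem!System.WorkItem.Incident"
           Accessibility="Public" Abstract="false" Hosted="false">
  <Property ID="EmployeeId" Type="string" />
</ClassType>
```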
19. Classes
Extending a Class
• Extend an existing class when you have additional properties and behaviors to add, or when you cannot update the type because it is defined in a "sealed" management pack
• Implemented with the addition of a type extension
  • Adds "DepartmentName" and "MyBugId" to all incidents and their descendants
  • The extended incidents behave exactly like an Incident
  • A query returns all classes of incidents, including those derived from the Incident base class, and they will all have the new "DepartmentName" and "MyBugId" properties
  • When you extend the class, all incidents and all classes that descend from it get the new properties
• In XML, the extended class would look like the following:
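Again the slide's XML is not preserved; a hedged reconstruction (hypothetical IDs, same assumed "WorkItem" alias) relies on the Extension attribute:

```xml
<!-- Extension="true" adds these properties to System.WorkItem.Incident itself,
     so every incident and every class derived from it gets them -->
<ClassType ID="Contoso.IncidentExtension" Extension="true"
           Base="WorkItem!System.WorkItem.Incident"
           Accessibility="Public" Abstract="false" Hosted="false">
  <Property ID="DepartmentName" Type="string" />
  <Property ID="MyBugId" Type="string" />
</ClassType>
```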
21. Service Manager 2012 Connector: Best Practices from the Field
Chris Ross, MVP, ITIL v3
Director of Program Management
Cireson
22. What are the Various Connectors?
Out of the box…
• Active Directory
• Configuration Manager
• Operations Manager CI
• Operations Manager Alert
• Orchestrator
• Virtual Machine Manager
• Exchange
• CSV
Cireson Connectors…
• SMA Connector
• Asset Import
• Software Metering
Coming Soon…
• Project Server Connector
• TFS Connector
23. What are the Right Questions to Ask?
• How many objects? What is the quantity of data that will be stored?
• Transaction volume? What are the scenarios?
• What is the degree of customization?
• How many concurrent (active) connectors will there be?
24. Quantity of Data
The bigger the database, the slower every query runs and the more space it takes on disk.
Contained data is especially impactful to performance. For example, Computer -> SQL Server -> Database expands into the following container/contained pairs:

Container Object | Contained Object
Computer 1       | SQL Server 1
SQL Server 1     | Database 1
Computer 1       | Database 1
SQL Server 1     | Database 2
Computer 1       | Database 2
25. Good Data, Bad Data
Good Data                                         | Bad Data
Incidents (w/ action logs and activities)         | Users
Service requests (w/o action logs and activities) | Action logs
Computers from AD or SCCM                         | Contained activities (especially nested)
File attachments                                  | Computers from SCOM
Knowledge                                         | CI data from SCOM in general
26. Good, Bad Customizations
Good Customizations                  | Bad Customizations
User roles                           | Notification subscriptions
Views*                               | Work item event workflows
Data model extensions                | Custom workflows
Templates                            | Groups
List items*                          | Queues
Tasks*                               | Service level objectives
DW extensions                        | SCCM connectors, especially w/ DCM
Notification templates               | SCOM connectors
Reports                              | AD connectors
Portal customizations                |
SLO calendars & metrics              | Form customizations
Analysis libraries & Excel workbooks |
SCVMM and SCOrch connectors          |
27. Scoping Connectors
Active Directory
Scope by domain, OU, or security group
Configuration Manager
Scope by collection
Operations Manager – CI Connector
Scope by the Add|Remove-AllowedList cmdlets (whitelisting)
Operations Manager – Alert Connector
Scope by alert property query criteria (alert subscriptions on SCOM side)
28. Design Better Connectors! [custom]
Query once and do the business logic at runtime using one of these options:
• Custom SCSM workflows (PowerShell)
• Orchestrator (scale out)
The difference is hundreds of queries running periodically vs. a single query running periodically. Evaluating A vs. B vs. … in memory on a management server is lightning fast (see the sketch below).
Don't round-trip back to the database! Pass the data that is needed into the workflow.
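A minimal sketch of the query-once pattern (not from the slides), assuming the community SMLets PowerShell module and its Get-SCSMClass / Get-SCSMObject cmdlets; the class name is the standard Windows computer class, and the 90-day rule is a hypothetical piece of business logic:

```powershell
Import-Module SMLets

# One query: pull every Windows computer into memory on the management server
$computerClass = Get-SCSMClass -Name Microsoft.Windows.Computer$
$allComputers  = Get-SCSMObject -Class $computerClass

# Evaluate A vs. B vs. ... in memory instead of issuing a query per object
$stale = $allComputers | Where-Object { $_.LastModified -lt (Get-Date).AddDays(-90) }
```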
30. Connectors: Do These Things…
Do scope your connectors properly
• Properly scoping your connector(s) helps ensure they run error-free
• Limit each individual connector to ≤ 10,000 objects; if you have more objects, create more connectors
31. Connectors: Do These Things…
Do schedule your connectors to run at different times
• Running multiple connectors simultaneously can impact performance (on SCSM or the source system)
32. Connectors: Do These Things…
Do schedule connectors to run during non-business hours
• Method 1: Change the synchronization schedule using PowerShell
• Method 2: Initiate the synchronization using PowerShell (see the sketch below)
http://bit.ly/1DMchhh
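A minimal sketch of Method 2, assuming the community SMLets module and its Get-SCSMConnector / Start-SCSMConnector cmdlets as used in community scripts; the display name is a placeholder, and a script like this can be run from Task Scheduler after hours:

```powershell
Import-Module SMLets

# Trigger a synchronization for one connector ("AD Connector" is hypothetical)
Get-SCSMConnector |
    Where-Object { $_.DisplayName -eq "AD Connector" } |
    Start-SCSMConnector
```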
33. Connectors: Do These Things…
Do import AD Users
• The AD connector imports all users in a domain, regardless of whether they are enabled or disabled
• If you have contacts in AD that were created as domain users, these are imported as well
• It is very important to consider which OUs to import, and also whether or not to import both enabled and disabled users
34. Connectors: Do These Things…
Do use LDAP queries
• This will limit the amount of data returned by the connector
• It lets you bring in only what is relevant (see the example filter below)
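For example, an illustrative filter (not from the slides) that restricts the connector to enabled user accounts; the OID 1.2.840.113556.1.4.803 is Active Directory's bitwise-AND matching rule, and bit 2 of userAccountControl marks a disabled account:

```
(&(objectCategory=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))
```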
35. Connectors: Do These Things…
Do use unique accounts for connectors
• This creates a separate MonitoringHost.exe process on the workflow management server for each connector when it runs
• This makes it easier to see which connector is currently running and how much memory/CPU it is consuming
• It also makes it easier to isolate that one process from other workflows/connectors, so that it can be terminated without affecting the other workflows/connectors that are running
36. Connectors: Do These Things…
Do keep the Exchange Connector interval at 5 minutes or more
• If you are using queues for security purposes, a longer Exchange Connector interval allows the time needed for group settings to take effect
• It also has less impact on the Exchange environment
37. Connectors: Don’t Do These Things…
Don’t import AD Computers (AD Connector)
• If you're also using the Configuration Manager connector, there may be no need for the AD connector to import all computers
• Doing so only means SCSM needs to import, rationalize, and normalize two sources
• All relevant information about the computers is delivered by the SCCM connector
• There can be cases where the AD connector does need to import computers, or subsets of computers, from AD
38. Connectors: Don’t Do These Things…
Don’t use DCM (really, DON’T)
• There is a rule in the Configuration Manager Connector management pack called Incident_Desired_Configuration_Management_Custom_Rule.Update
• This rule can cause workflows (subscription rules) to lag far behind and cause the grooming jobs to fail, which makes the EntityChangeLog table grow very large
• In turn, this causes an internal SQL stored procedure called p_EntityChangeLogSnapshot to take a long time to finish
• This stored procedure is executed very often, and while it is running, console performance is also heavily impacted
http://bit.ly/1FlY4oq
39. Connectors: Don’t Do These Things…
Don’t sync null values in AD connectors
• Unless needed for a purpose, always select the option: “Do not write null values for properties not set in Active Directory”
• Using this setting ensures the connector does not overwrite attributes maintained by other sources with null values
40. Connectors: Don’t Do These Things…
Don’t synchronize data you don’t need!
When in doubt, use the KISS method: keep it simple!
43. Open Q&A
An opportunity for audience members to ask questions of the group
Questions can be raised via IM or round table discussion
Open Mic
44. Close
• Recording
  • To be posted on the Systemology website
• Post questions and topics for the next knowledge event
  • Post on the ASMUG page on the Systemology website (coming soon)
• Next knowledge event: April 2015
• Share & social: expand the network