The document discusses CA ERwin, a data modeling tool for visualizing data across different platforms through conceptual, logical, and physical data models, helping organizations manage increasingly complex data environments with multiple databases and applications. CA ERwin provides a centralized repository for storing metadata from multiple data sources and supports communication between business and technical stakeholders through intuitive reports and high-level conceptual models.
Effective Capture of Metadata Using CA ERwin Data Modeler, 09/23/2010 (ERwin Modeling)
CA ERwin Data Modeler provides flexible features to effectively capture metadata in data warehouse environments. This includes data sources, transformation rules, and data movement rules. It uses a customer dimension example to demonstrate capturing source tables from various sources, defining transformation templates, and attaching data movement rules to tables. Other options like importing metadata definitions from Excel allow business stakeholders to define and import column metadata. Effective metadata capture helps communicate requirements, identify issues, and understand the data model.
Customer Experience: A Practical Guide, 09/15/2010 (ERwin Modeling)
This document outlines how WorkSafe BC adopted CA ERwin Data Modeler and CA ERwin Model Manager to build and manage their enterprise data models. They faced challenges like standards adoption, new tool learning curves, and change management. They built infrastructure like standards, templates, and procedures. They then built their enterprise data model by merging existing models and resolving conflicts. For model management, they implemented processes for checking models in and out, using the model manager. This centralized their metadata and allowed for faster development and data consistency across projects.
This document provides a case study on using CA ERwin Data Modeler (CA ERwin DM) and CA ERwin Model Manager (CA ERwin MM) in the insurance industry. It discusses why these tools were chosen over other options and how strategic business elements such as vision and goals affect data modeling. Strategic factors such as opportunities, threats, and capabilities are also examined. The document includes practical demonstrations of the tools, including reverse engineering, comparing models, and integrating the two products. Appendices provide a SWOT analysis and a sample data model.
CA ERwin Data Modeler End User Presentation (CA RMDM Latam)
CA ERwin is a suite of data modeling products that allows users to model databases across multiple platforms from a single graphical representation. It reduces costs and improves database performance by standardizing database design. The latest release, ERwin 7.3, features enhancements such as ODBC-based metadata access, a SQL query tool, and template customizations. ERwin helps users manage growing database demands with fewer resources through its ability to support multiple database platforms from a single data model.
Generating Code with Oracle SQL Developer Data Modeler (Rob van den Berg)
This presentation discusses code generation capabilities in Oracle SQL Developer Data Modeler. Key features that support code generation include logical and relational modeling, domains, naming standards, and transformation scripts. The presenter demonstrates how to generate various types of code like entity rules, triggers, and packages by writing custom transformation scripts to query the model object and output code to files. Well-designed models can be transformed into maintainable application code automatically.
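To give a rough feel for what model-driven code generation does, here is a minimal, hypothetical Python sketch: it walks a small in-memory model and emits CREATE TABLE DDL. This is only an illustration of the concept; it is not SQL Developer Data Modeler's actual scripting API, and the table and column definitions are invented.

```python
# Minimal sketch of model-driven DDL generation (hypothetical model structure,
# not the SQL Developer Data Modeler scripting API).
model = {
    "CUSTOMERS": [
        ("CUSTOMER_ID", "NUMBER", "PRIMARY KEY"),
        ("NAME", "VARCHAR2(100)", "NOT NULL"),
        ("EMAIL", "VARCHAR2(255)", ""),
    ],
    "ORDERS": [
        ("ORDER_ID", "NUMBER", "PRIMARY KEY"),
        ("CUSTOMER_ID", "NUMBER", "NOT NULL REFERENCES CUSTOMERS(CUSTOMER_ID)"),
        ("ORDER_DATE", "DATE", ""),
    ],
}

def generate_ddl(model: dict) -> str:
    """Emit a CREATE TABLE statement for every table in the model."""
    statements = []
    for table, columns in model.items():
        cols = ",\n  ".join(
            f"{name} {dtype} {constraint}".rstrip()
            for name, dtype, constraint in columns
        )
        statements.append(f"CREATE TABLE {table} (\n  {cols}\n);")
    return "\n\n".join(statements)

if __name__ == "__main__":
    print(generate_ddl(model))  # could just as well be written out to .sql files
```

In the real tool the same idea is applied to the full model object graph, so a well-designed model can drive generation of triggers, packages, and other artifacts.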
Oracle SQL Developer Data Modeler 3.3 New Features (Philip Stoyanov)
This document discusses new features in Oracle SQL Developer Data Modeler version 3.3/4.0, including enhanced search functionality, improved handling of logical and relational models including surrogate keys and subtyping, and support for identity columns in Oracle Database. Key new features include global and model-level searching, setting common properties on search results, custom reports on search results, improved mapping of relationships and attributes to relational models, and configuration options for implementing entity hierarchies and generating dependent constraints.
Raymond Cochrane has over 20 years of experience developing and administering SQL Server databases and business intelligence systems. He has extensive experience with SQL Server, SSIS, SSAS, and SSRS. He has worked on projects involving data warehousing, ETL, reporting, and analytics for clients in various industries including banking, insurance, healthcare, and retail. His technical skills include SQL, SSIS, SSAS, SSRS, and working with dimensional data models.
- Oracle Database 10g is an object-relational database management system that allows for grid computing. It is based on the relational model and supports multimedia, large objects, and user-defined data types.
- The course aims to teach students how to perform tasks with Oracle like retrieving and updating data using SQL, obtaining metadata from dictionary views, and creating reports.
- Key tables used in the course include EMPLOYEES, DEPARTMENTS, and JOB_GRADES; a brief query sketch follows below.
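For readers who want to try the course tasks programmatically, here is a hedged sketch using the python-oracledb driver. The connection details are placeholders, and the column names assume the standard HR-style schema implied by the table names above; the course itself is not tied to Python.

```python
# Hedged sketch: querying course tables and dictionary views with python-oracledb.
# User, password, and DSN are placeholders, not values from the course material.
import oracledb

conn = oracledb.connect(user="hr", password="hr_password", dsn="localhost/XEPDB1")
cur = conn.cursor()

# Retrieve data from a course table (column names assumed from the HR sample schema).
cur.execute("SELECT employee_id, last_name, salary FROM employees WHERE rownum <= 5")
for row in cur:
    print(row)

# Obtain metadata from a dictionary view, one of the stated course objectives.
cur.execute("SELECT table_name FROM user_tables ORDER BY table_name")
print([name for (name,) in cur])

cur.close()
conn.close()
```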
The document discusses relational databases and how they organize data into tables that can be accessed and reassembled in different ways without reorganizing the tables. It also covers how PeopleSoft uses a 3-tier architecture called PeopleSoft Internet Architecture (PIA), consisting of a web browser, web server, application server, and database server, to deliver pure internet applications to users. PIA provides advantages over traditional client/server architectures, such as thin clients, improved performance, and the ability to scale more easily to meet increasing user demands.
This document introduces Oracle Database 10g and relational database concepts. It lists the course objectives as understanding Oracle 10g features, relational and object relational databases, and being able to perform SQL queries and DML statements. Key aspects of Oracle 10g are its support for multimedia, unified management, and scalability. Relational databases are based on relations and operators defined in the relational model.
Piyush Mittal has over 5 years of experience as a software developer. He received his MS in Computer Science from Northeastern University and his Bachelor's in Information Technology from UPTU, India. Currently, he works as a Software Developer for Staples.com where he has designed and implemented features like automatic re-stocking and email notifications. Previously, he had internships at Amazon developing web applications for resellers and at Accenture testing ETL processes. He is proficient in languages like Java, Ruby, and frameworks like Spring and Hibernate.
This document provides an overview of OData, including:
- OData is a standard API for accessing data in a RESTful manner that allows for querying and interoperability.
- OData addresses limitations of REST by standardizing resource naming, paging support, querying, and status codes.
- The document demonstrates OData resource identification, operations, representations, query parameters, and tools/libraries, concluding with a link to a GitHub demo; a small query sketch follows this list.
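As a concrete taste of the query parameters mentioned above, here is a minimal sketch using the requests library. The service root URL and entity set are hypothetical placeholders; $filter, $select, $orderby, and $top are standard OData system query options.

```python
# Hedged sketch of an OData query. The service URL and entity set are placeholders;
# the query options shown are standard OData system query options.
import requests

BASE_URL = "https://example.com/odata/v4"  # placeholder service root

params = {
    "$filter": "Country eq 'Germany'",
    "$select": "CustomerID,CompanyName",
    "$orderby": "CompanyName",
    "$top": 5,
}
resp = requests.get(f"{BASE_URL}/Customers", params=params,
                    headers={"Accept": "application/json"})
resp.raise_for_status()

# OData wraps result collections in a top-level "value" array.
for customer in resp.json().get("value", []):
    print(customer["CustomerID"], customer["CompanyName"])
```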
1. Aburar Yaseen has over 11 years of experience in the IT industry as a software engineer and data analyst.
2. He has extensive experience developing reports and dashboards using tools like Tableau, Crystal Reports, and SQL.
3. Some of the projects he has worked on include data analytics for companies in various industries like manufacturing, logistics, banking, insurance, consumer goods, and sports/entertainment.
George McGeachie's Favourite PowerDesigner Features (George McGeachie)
These are the slides from my presentation at Data Modelling Zone in Dusseldorf in September 2018, covering features that differentiate the tool from the other players in the market.
1) The document provides guidance on assigning Digital Object Identifiers (DOIs) through DataCite. It discusses decisions that must be made, such as what objects to assign DOIs to and DOI construction.
2) Maintaining DOIs requires ensuring a correct URL and metadata for the object. DOIs also commit an institution to long-term storage of the object for a minimum of 10 years.
3) The quality of the DOI system relies on objects being cite-worthy, having well-described metadata, and the institution committing to long-term storage. Metadata must be provided in an XML file and displayed on a landing page; a minimal metadata sketch follows below.
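To make the XML requirement concrete, here is a minimal sketch that assembles a DataCite-style record with Python's standard library. The DOI, creator, title, and year are placeholders; a real submission must conform to the full DataCite metadata schema and accompany a proper landing page.

```python
# Minimal sketch of a DataCite-style metadata record built with the standard library.
# All values are placeholders; a real record must follow the DataCite schema.
import xml.etree.ElementTree as ET

resource = ET.Element("resource")

identifier = ET.SubElement(resource, "identifier", identifierType="DOI")
identifier.text = "10.1234/example.dataset.1"

creators = ET.SubElement(resource, "creators")
creator_name = ET.SubElement(ET.SubElement(creators, "creator"), "creatorName")
creator_name.text = "Doe, Jane"

title = ET.SubElement(ET.SubElement(resource, "titles"), "title")
title.text = "Example Research Dataset"

ET.SubElement(resource, "publisher").text = "Example University"
ET.SubElement(resource, "publicationYear").text = "2016"

print(ET.tostring(resource, encoding="unicode"))
```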
This document provides an overview of SAP BO Analysis for Office and how to analyze business intelligence data in Excel. It covers getting started with the add-in, creating workbooks, analyzing data through sorting, filtering, and conditional formatting, and sharing content by saving workbooks to the BI platform and creating PowerPoint slides. The document includes step-by-step instructions on tasks like enabling the add-in, inserting data sources, adding measures to crosstabs, creating calculations, inserting dynamic charts and filters, and more.
Pradeep Kumar P has over 3 years of experience as a Tableau and Business Objects developer. He has expertise in data modeling, ETL, reporting, and dashboard development. Some of his skills include universe design, report scheduling, extracting and transforming data, and creating visualizations using Tableau. Currently he works as a reporting developer for Hybridbi Solutions on projects involving Tableau and Business Objects for clients like Aryzta.
The document provides an overview of SAP BusinessObjects Analysis, edition for Microsoft Office. It discusses the two main functionalities, Excel-based OLAP analysis and BI applications. It highlights the usability focus on business users by bringing flexibility and minimizing training. New features like the Design Panel, conditional formatting, and live PowerPoint presentations are summarized. Requirements for the client and server software are also listed.
This document provides an overview of an Excel workshop organized by the Karachi Branch Council of ICMAP on December 10-11, 2011. The workshop aimed to teach participants advanced Excel skills like functions, formulas, pivot tables, data validation, charts and graphs, conditional formatting, data filters, macros and more. It was facilitated by Zahid Mahmood and covered topics like financial modeling, data analysis, problem solving and applying Excel skills to real business situations. The workshop emphasized both foundational Excel skills and customizing lessons to participants' specific needs and problems.
This document describes four primary models for developing Java applications on the AS/400: HTTP servlets, transaction serving, Domino agents, and distributed objects. It compares these models to the traditional interactive job structure and discusses how each handles system services like transactions and security. The models provide different levels of services, with distributed objects eventually providing the most complete environment similar to traditional models.
Open source BI tool Mondrian is an OLAP engine that operates on normalized relational databases to provide multi-dimensional analysis. It is bundled with other open source packages like JPivot for the UI layer. Mondrian uses a schema file to define the logical multi-dimensional model and dimensions, hierarchies, measures and calculated members. It supports extensions through plug-ins and user-defined functions. For performance, aggregate tables, materialized views and query caching can be used. While it provides an open source alternative to proprietary BI tools, it also has some constraints around key joins and schema normalization.
This document discusses data modeling, arguing that it involves more than just creating diagrams: it also means automating tasks like applying naming standards, adding standard audit columns, and checking that designs meet standards. The document highlights how tools can extract statistics to estimate database growth, manage schema versions, and generate JSON. It emphasizes that data modeling requires understanding database dependencies and connections to other systems. A variety of data modeling tools are mentioned, but they should be chosen carefully.
The document provides an overview of SQL (Structured Query Language) including its purpose, benefits, and key components. It describes the SQL environment and data types, as well as the main SQL statements used for database definition (DDL), data manipulation (DML), and control (DCL). Examples are given for common statements like CREATE TABLE, SELECT, INSERT, UPDATE, DELETE, and how to define views, integrity controls, indexes and more.
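As a quick refresher on the statement types that overview covers, the sketch below runs a few representative DDL and DML statements against an in-memory SQLite database; the table, view, and column names are invented for illustration.

```python
# Illustrative DDL/DML examples run against an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define a table with a simple integrity check, and a view over it.
cur.execute("""CREATE TABLE product (
    product_id  INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    unit_price  REAL CHECK (unit_price >= 0)
)""")
cur.execute("CREATE VIEW cheap_product AS SELECT * FROM product WHERE unit_price < 10")

# DML: insert, update, delete, and query.
cur.executemany("INSERT INTO product (name, unit_price) VALUES (?, ?)",
                [("Pencil", 1.5), ("Notebook", 4.0), ("Desk lamp", 25.0)])
cur.execute("UPDATE product SET unit_price = 3.5 WHERE name = 'Notebook'")
cur.execute("DELETE FROM product WHERE name = 'Desk lamp'")

for row in cur.execute("SELECT name, unit_price FROM cheap_product ORDER BY name"):
    print(row)

conn.commit()
conn.close()
```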
The document outlines a PHP training course covering introductory and advanced PHP topics over 1-2 months. It includes introductions to PHP basics like variables, data types, operators, and control structures. It also covers arrays, functions, object-oriented programming, databases, frameworks like CodeIgniter and CakePHP, and content management systems like Joomla. The training is offered by Resistive Technosource Pvt. Ltd. and includes both conceptual and hands-on components.
Database software like Microsoft Access allows users to store, manage, and analyze information from databases. A database contains tables which organize information into records with fields. Common database objects include tables, forms, queries, and reports. Tables contain records and fields, forms are used to enter and view data, queries retrieve and filter data, and reports summarize and present information. Databases are used to manage various types of lists and information.
The document discusses how to create enterprise data standards using CA ERwin Data Modeling. It describes leveraging naming standards, domains, user defined properties, and data type standards to promote consistency and reuse. The presentation also demonstrates sharing standards using CA ERwin Model Manager and reporting standards to stakeholders using various reporting options like Crystal Reports. Live demo examples are provided of implementing standards in CA ERwin Data Modeling.
This document provides a summary of Nagendra Kumar Busetti's work experience and qualifications. He has over 7 years of experience as a data modeler working with tools like Erwin and databases like Oracle, SQL Server, and DB2. Currently he works for TCS as a data modeler on projects for clients like Toys "R" Us, where he is responsible for logical and physical database design, maintaining metadata, and providing support to applications. Previously he worked for iGATE as a senior system engineer, where he performed similar data modeling duties for clients such as MetLife Insurance. He earned his B.Tech degree in 2011 and is proficient with programming languages like SQL and T-SQL.
This document discusses different data models used in database management systems including record-based, relational, network, hierarchical, and entity-relationship models. It provides details on each model such as how data is organized. A record-based model uses fixed-length records and fields. The relational model organizes data into tables with rows and columns. The network model links entities through multiple paths in a graph structure. The hierarchical model arranges data in a tree structure. Finally, the entity-relationship model views the real world as entities and relationships between entities.
Paper presented by the speaker at CA World 2010 in Las Vegas, USA.
Speaker & Author: Rasananda Behera, Insurance Industry Expert on Enterprise Architecture [Business, Data & Applications] Management.
This document discusses using high-level data modeling to facilitate communication between business and IT stakeholders. It provides examples of high-level data models and discusses best practices for building high-level models, including getting input from all relevant parties, choosing an intuitive notation, and using the model to achieve consensus on key business concepts and definitions. The document also describes how modeling tools from CA like ERwin can help manage technical data sources from multiple systems and databases, and share information with various audiences.
This certificate of achievement certifies that Ernesto Arce completed 3 days of training totaling 24 hours in ERwin Data Modeling. He has met all requirements and completed all exercises to receive this certification, which was issued on October 26, 2016.
Integrating Data and Process: Roundtrip Modeling Using ERwin Data Modeler (ERwin Modeling)
This document discusses integrating data and process modeling using ERwin Data Modeler and ERwin Process Modeler. It provides an overview of the modeling tools and a 4 step process for mapping process models to data models. The steps include: 1) Mapping entities to process arrows, 2) Mapping attributes to entities, 3) Identifying process actions on entities, and 4) Identifying process actions on attributes. Rules for allowable actions on entities and attributes based on their usage in the process model are also defined.
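One lightweight way to picture the mapping steps is a CRUD-style matrix relating process steps to the entities they act on. The sketch below is purely illustrative and is not the ERwin roundtrip mechanism itself; the process and entity names are invented.

```python
# Illustrative CRUD-style mapping of process steps to the entities they act on.
# Process and entity names are invented; this is not the ERwin roundtrip format.
process_to_entity_actions = {
    "Register Customer": {"Customer": {"Create"}, "Address": {"Create"}},
    "Place Order":       {"Order": {"Create"}, "Customer": {"Read"}, "Product": {"Read"}},
    "Cancel Order":      {"Order": {"Update"}, "Shipment": {"Delete"}},
}

def entities_touched_by(action: str) -> set[str]:
    """Return every entity on which any process performs the given action."""
    return {
        entity
        for actions in process_to_entity_actions.values()
        for entity, verbs in actions.items()
        if action in verbs
    }

print(entities_touched_by("Create"))  # e.g. {'Customer', 'Address', 'Order'}
```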
This document provides a comprehensive analysis comparing the data modeling capabilities of Sybase PowerDesigner 16.0 Information Architect and CA ERwin Data Modeler r8.1 Standard Edition. It examines how each tool supports key data modeling activities like creating different types of data models (conceptual, logical, physical), impact analysis across model levels, and model integration. The analysis finds that while both tools allow creating different model types and linking models, PowerDesigner provides more robust, integrated support through dedicated model types and built-in impact/lineage analysis. It concludes that PowerDesigner better enables managing relationships across complex data modeling projects.
Data models can facilitate communication between designers, programmers, and users. A well-developed data model can improve understanding of an organization. Data models are a communication tool that represent different types of relationships in a database. Common data models include hierarchical, network, relational, entity-relationship, and object-oriented models. Each model has advantages like conceptual simplicity and flexibility as well as disadvantages like complexity and implementation limitations.
Sneak Peek: CA ERwin Data Modeler r8 Preview, 09/22/2010 (ERwin Modeling)
This document provides an overview of new features and enhancements in the upcoming CA ERwin Data Modeler r8 release. Key highlights include a state-of-the-art visualization with dynamic and customizable user interface, productivity-enhancing workflows, support for additional databases, and improvements to modeling tools, editors, and licensing. The goal is to provide the leading data modeling solution while balancing usability, productivity, and database currency.
CA ERwin Modeling provides data modeling solutions to help reduce costs and increase ROI. Their next release, r8, will include improved visualization, customization, and productivity features. r8 is focused on balancing usability improvements and database support to increase user satisfaction and ROI. The data modeling market is seeing new players and a diversification in how solutions are used, with CA ERwin striving to scale their products to meet new requirements.
Mastering Your Data with CA ERwin DM, 09/08/2010 (ERwin Modeling)
This document discusses using data modeling to build the foundations for strong data quality. It outlines a process with six steps: [1] defining metadata standards, [2] encouraging collaboration, [3] organizing models and data, [4] enforcing standards, [5] changing organizational culture, and [6] creating a "to be" target state. The key points are that data quality requires treating data as a valuable asset, establishing good metadata and modeling habits, and ongoing cultural changes rather than a single solution.
Using CA ERwin Modeling to Assure Data, 09/16/2010 (ERwin Modeling)
Data profiling analyzes data content to infer metadata and increase the accuracy of data assets and models. It can help with data quality assessments, master data management, and reducing risks in data warehousing projects. The presentation provided examples of how profiling was used to uncover issues, validate models and requirements, standardize values, and reduce development times for various organizations.
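A small taste of what profiling infers from raw content, sketched with pandas on an invented sample; real profiling tools add pattern analysis, cross-column checks, and much more.

```python
# Minimal data-profiling sketch: infer types, null counts, distinct counts, and
# candidate keys from raw content. The sample rows are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "email": ["a@x.com", None, "c@x.com", "d@x.com"],
    "state": ["WA", "WA", "OR", "or"],  # inconsistent casing a profiler would flag
})

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "null_count": df.isna().sum(),
    "distinct_values": df.nunique(dropna=True),
    "is_candidate_key": df.nunique(dropna=True) == len(df),
})
print(profile)
```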
This document discusses different types of data models, including hierarchical, network, relational, and object-oriented models. It focuses on explaining the relational model. The relational model organizes data into tables with rows and columns and handles relationships using keys. It allows for simple and symmetric data retrieval and integrity through mechanisms like normalization. The relational model is well-suited for the database assignment scenario because it supports linking data across multiple tables using primary and foreign keys, and provides query capabilities through SQL.
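The key-based linking described above can be demonstrated in a few lines of SQLite; the two tables and their rows are invented for the example.

```python
# Primary/foreign keys and a join across two tables, illustrating the relational model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, dept_name TEXT)")
conn.execute("""CREATE TABLE employee (
    emp_id   INTEGER PRIMARY KEY,
    emp_name TEXT NOT NULL,
    dept_id  INTEGER REFERENCES department(dept_id)
)""")

conn.execute("INSERT INTO department VALUES (10, 'Sales'), (20, 'IT')")
conn.execute("INSERT INTO employee VALUES (1, 'Ana', 10), (2, 'Ben', 20)")

# Relationships are resolved at query time through key values, not fixed pointers.
rows = conn.execute("""
    SELECT e.emp_name, d.dept_name
    FROM employee e JOIN department d ON e.dept_id = d.dept_id
    ORDER BY e.emp_name
""").fetchall()
print(rows)
conn.close()
```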
The document provides an introduction to database management systems (DBMS) and database models. It defines key terms like data, database, DBMS, file system vs DBMS. It describes the evolution of DBMS from 1960 onwards and different database models like hierarchical, network and relational models. It also discusses the roles of different people who work with databases like database designers, administrators, application programmers and end users.
This document discusses different types of data models, including object based models like entity relationship and object oriented models, physical models that describe how data is stored, and record based logical models. It specifically mentions hierarchical, network, and relational models as examples of record based logical data models. The purpose of data models is to represent and make data understandable by specifying rules for database construction, allowed data operations, and integrity.
The document discusses data modeling, which involves creating a conceptual model of the data required for an information system. There are three types of data models - conceptual, logical, and physical. A conceptual data model describes what the system contains, a logical model describes how the system will be implemented regardless of the database, and a physical model describes the implementation using a specific database. Common elements of a data model include entities, attributes, and relationships. Data modeling is used to standardize and communicate an organization's data requirements and establish business rules.
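To make the vocabulary concrete, here is a tiny illustrative metadata structure in Python; the entity and attribute names are invented, and real modeling tools capture far richer metadata at each of the three levels.

```python
# Tiny illustrative structures for entities, attributes, and relationships.
# Names are invented; real tools capture far more detail per modeling level.
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    data_type: str           # relevant at the logical/physical level
    is_identifier: bool = False

@dataclass
class Entity:
    name: str
    attributes: list[Attribute] = field(default_factory=list)

@dataclass
class Relationship:
    parent: str
    child: str
    cardinality: str          # e.g. "one-to-many", a typical business rule

customer = Entity("Customer", [Attribute("customer_id", "INTEGER", True),
                               Attribute("name", "VARCHAR(100)")])
order = Entity("Order", [Attribute("order_id", "INTEGER", True),
                         Attribute("order_date", "DATE")])
places = Relationship(parent="Customer", child="Order", cardinality="one-to-many")

print(places)
```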
Sales Effectiveness and Business Intelligence (marekdan)
Some information about second-generation (in-memory) business intelligence with TIBCO Spotfire and InfomatiX, showing how to use BI and mobile solutions to increase sales and marketing effectiveness.
This document provides an overview of Database Architechs, a consulting firm specializing in database architecture, design, and performance tuning. It describes the company's areas of expertise, including database architecture, data modeling, performance tuning, data warehousing, and high availability solutions. It also outlines Database Architechs' methodology, tools, team of experts, locations of operations, partners, clients, and benchmark results showing improvements in database performance and availability.
Power BI is introducing new self-service data preparation capabilities through Dataflows. Dataflows allow business analysts to prepare and transform data within Power BI using a low-code/no-code interface. The prepared data can then be reused across Power BI reports and models. Dataflows also aim to address challenges with traditional BI solutions such as requiring specialist knowledge, producing fragmented data, and having no consistent data structures. Data prepared through Dataflows can also potentially be leveraged outside of Power BI in other Azure services.
The document discusses how the American Academy of Physician Assistants (AAPA) is leveraging business intelligence (BI) and predictive analytics to deliver real-time forecasting. It notes that the PA profession is the second fastest growing health profession and provides statistics on current and projected PA employment numbers. It then outlines AAPA's membership statistics. The document discusses the challenges of gathering, analyzing, and querying organizational data in real-time. It presents Adaptive Planning and QlikView as cloud-based solutions that can integrate data sources, provide visual analytics dashboards, and enable self-service BI without IT involvement. It also mentions using SPSS Modeler for predictive analytics. The document concludes by inviting questions from the audience.
Leveraging BI and Predictive Analytics to Deliver Real-Time Forecasting (Shyam Desigan)
The document discusses how the American Academy of Physician Assistants (AAPA) used business intelligence and predictive analytics tools to better forecast trends for its members. It notes that PAs are projected to be the second fastest growing healthcare profession in the next decade. AAPA gathered member and industry data from various sources into centralized systems for integrated analysis. This allowed for real-time querying and visualization of key performance indicators. Tools like Adaptive Planning, QlikView and SPSS Modeler were leveraged for dashboarding, predictive modeling and insights. This helped AAPA improve forecasting, decision making and strategic planning for its growing membership base.
The document discusses how the American Academy of Physician Assistants (AAPA) is leveraging business intelligence (BI) and predictive analytics to deliver real-time forecasting. It notes that the PA profession is projected to be the second fastest growing health profession in the next decade. It then outlines AAPA's membership numbers and the top work settings and specialties for PAs. The document discusses the challenges of gathering, analyzing, and querying organizational data in real-time. It presents cloud-based BI and planning tools as a way to augment existing systems with greater flexibility, rapid deployment, and no IT involvement. Visualization and dashboard capabilities are highlighted. Finally, the document notes that AAPA utilizes SPSS Modeler for predictive analytics.
SAP BI Roadmap Overview 2010 - SAP Inside Track STL (sjohannes)
This document discusses the differences between the roadmaps of SAP Business Warehouse and Business Objects Enterprise. It provides an overview of the key tools in the Business Objects platform, such as Web Intelligence, Crystal Reports, and Xcelsius. The document outlines the roadmaps from 2010-2011, noting the integration of data connection tools, in-memory storage, and semantic layer developments. It concludes by discussing next steps for organizations to define their BI strategy and determine how to connect tools to operational systems and external users.
This document discusses how SAP applications and IT landscapes are changing with the adoption of in-memory computing technologies like SAP HANA. It presents examples of how SAP HANA allows organizations to deliver real-time value by enabling smarter, faster and simpler business processes, interactions and reporting. The document also outlines the evolution of enterprise architectures and data center landscapes towards a consolidated environment on SAP HANA that can optimize transactional and analytical workloads.
This document provides an overview of the services offered by Database Architechs, a consulting firm specializing in database architecture, design, and performance. They offer a wide range of database-focused consulting services including database architecture, design, performance tuning, data warehousing, high availability, and training. They have experience with all major database platforms and have helped large clients across various industries with their database needs.
This document summarizes Microsoft Dynamics AX 2012 enterprise resource planning (ERP) software. It is designed for large organizations with 200-7,500 users and includes industry-specific and core ERP capabilities. It supports multinational companies through shared master data and processes across a single instance. The software provides flexible deployment, agile technology, easy access to data and processes, and integrated collaboration tools. It also includes capabilities for procurement, supply chain management, inventory, projects, financials, governance, risk and compliance, human capital management, sales, and marketing.
Microsoft SQL Server 2012 Master Data Services (Mark Ginnebaugh)
Mark Gschwind, VP of Business Intelligence at DesignMind, gave a presentation on Master Data Services (MDS) in SQL Server 2012. He began with an overview of master data and its importance for central curation, quality management, and ease of access for business users. He then reviewed the key capabilities of MDS, including modeling, validation, stewardship, and integration. Gschwind demonstrated creating an MDS model, using the new Excel interface, business rules, and exposing MDS data to a data warehouse. He concluded with tips for successful MDS implementations such as starting small, engaging business users, and using the development environment.
This document provides an overview and summary of a presentation on leveraging PowerPivot. It begins with background on the speaker and their company Superior Consulting Services. It then outlines the session which will cover Microsoft Business Intelligence, what PowerPivot is, a comparison of SSAS and PowerPivot, a demonstration of DAX functions, and what future developments are planned. Audience polling questions are also included to gauge experience levels.
Improve Time to Market with Real-Time Analytics on Time-Series Data (Vin Dahake)
- Maximizes the data that powers financial services with fast cloud performance
- Powers fundamental operations like investment strategies, risk assessments, and fraud detection
- Highly secure, scalable, reliable, and performant infrastructure to handle data-intensive workloads
During the “Architecting for the Cloud” breakfast seminar we discussed the requirements of modern cloud-based applications and how to overcome the constraints of traditional database infrastructure.
We discussed how the right cloud-based database architecture can:
- Provide easy and fast access across multiple geographies
- Handle rapid user growth by adding new servers on demand
- Provide high performance even in the face of heavy application usage
- Offer around-the-clock resiliency and uptime
- Deliver cloud-enabled apps in public, private, or hybrid cloud environments
SQL Server 2012 Smart Dive Presentation, 01/26/2012 (Andrew Mauch)
This document provides an overview of Microsoft SQL Server's business intelligence opportunities and features. It introduces two Microsoft experts, Brian Larson and Dan English, and outlines topics including what business intelligence is, self-service BI, data management, data in the cloud, implementation planning, and SQL Server editions. Live demos are provided of data modeling scalability, Power View, and Data Quality Services.
The document discusses Informatica's data virtualization solution. It provides an overview of the challenges companies face in integrating data from multiple sources and making it available for business intelligence, master data management, and service-oriented architecture use cases. Informatica's solution combines data integration, data virtualization, data profiling, and business-IT collaboration capabilities to provide a common view of data across sources and enable fast delivery of new reports and attributes without moving data. Examples are provided of how HealthNow NY used the solution to improve risk analysis and pricing by delivering a complete and trusted view of customer data.
Total Cost of Non-Ownership in the Cloud - AWS India Summit 2012 (Amazon Web Services)
#1 Analyze your applications and usage patterns to understand your needs. Consider steady state, spiky, and unpredictable usage.
#2 Consider all relevant costs, including hardware, maintenance, personnel, space, power, and bandwidth when comparing options. Some costs are hidden.
#3 Leverage AWS's flexible pricing models like reserved instances, spot instances, and free tier services to optimize for your workload patterns and lower your total cost of ownership over time compared to on-premises infrastructure. Savings increase with higher utilization and longer commitments; a back-of-the-envelope comparison follows below.
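The trade-off in tip #3 can be shown with a back-of-the-envelope calculation. All prices and the utilization figure below are placeholders for illustration, not current AWS rates; always check the live pricing pages before deciding.

```python
# Back-of-the-envelope on-demand vs. reserved comparison. Prices are placeholders,
# not actual AWS rates; check current pricing before making any commitment.
HOURS_PER_YEAR = 24 * 365

on_demand_per_hour = 0.10   # hypothetical $/hour
reserved_upfront = 300.00   # hypothetical one-year upfront fee
reserved_per_hour = 0.04    # hypothetical discounted hourly rate
utilization = 0.80          # fraction of the year the instance actually runs

on_demand_cost = on_demand_per_hour * HOURS_PER_YEAR * utilization
reserved_cost = reserved_upfront + reserved_per_hour * HOURS_PER_YEAR  # paid regardless of use

print(f"On-demand : ${on_demand_cost:,.2f}/year")
print(f"Reserved  : ${reserved_cost:,.2f}/year")
print("Reserved wins" if reserved_cost < on_demand_cost else "On-demand wins")
```

With these made-up numbers the reserved instance comes out cheaper; at lower utilization the comparison can flip, which is exactly the point of analyzing usage patterns first.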
Technically Speaking: How Self-Service Analytics Fosters Collaboration (Inside Analysis)
This document summarizes an upcoming webinar series from Bloor Research Group on enterprise software and business intelligence technologies. The webinars will take place monthly from June to November, covering topics like intelligence, disruption, analytics, integration, databases, and cloud computing. Attendees can ask questions of presenters and get detailed analysis of innovative technologies. The webinars aim to reveal enterprise software characteristics and give vendors a chance to explain their products to analysts.
The document discusses the advantages of cloud computing on the Microsoft Azure platform, including cost reduction, greater efficiency, and flexibility. It describes how customers can migrate existing workloads to the cloud and use services such as storage, databases, and cloud-based development.
The document describes a Citizen Service Portal (Portal de Atendimento ao Cidadão) that lets citizens report problems in their streets or neighborhoods to the appropriate authorities and track the status of their requests. The portal gives the city government a faster understanding of public demands and generates reports to support decision-making.
The document describes Escritório 2.0, an Office 365-based solution that manages municipal information in an integrated and efficient way, centralizing data such as property tax (IPTU) and agreements to improve decision-making and productivity. The solution allows multiple municipal departments to be integrated and managed from a single platform.
The document describes a Command and Control Center that enables integrated management of public safety operations through communication systems and technologies connecting devices, operators, and the control center. Its goals are coordinated action among security agencies, faster response in emergencies, and collaborative incident logging.
5 Essential Truths About the Application Economy (Allen Informática)
1. Every company is in the software business.
2. Your infrastructure is one of your biggest competitive advantages.
3. DevOps should be the new best practice.
4. Security should enable new business, not just protect it.
5. Mobility must be essential to your multichannel strategy.
In short, these are the five essential truths of the Application Economy according to the document.
The document presents an overview of the Microsoft Azure cloud computing platform, including its main features and usage scenarios. It highlights that Azure can be used to run software, develop applications, and consume cloud services in a hybrid model spanning public and private clouds. It also presents common Azure use cases such as data backup, website hosting, and virtual machines.
The document discusses Microsoft Azure, Microsoft's cloud computing platform providing virtual infrastructure, platform, and services. It describes Azure's main features such as data centers, usage scenarios, hybrid cloud, and the opportunities it offers.
The document describes the growth and popularity of Office 365, with more than 1 billion people using at least one Microsoft cloud service and 70% of the Fortune 500 as customers. It also discusses trends in cloud computing, mobility, social, and big data, and how Office 365 addresses these needs through services such as Exchange Online, SharePoint Online, and Lync Online.
The company has 25 years of experience in technology solutions, qualified teams with more than 350 certifications, and partnerships with major companies such as Microsoft and Dell. It delivers large, complex projects aimed at cost reduction, information quality, security, and return on investment, and also offers strategic solutions aligned with the market.
The document discusses:
1) The Cadastro Nacional de Produtos, a Brazilian platform for managing and controlling product numbering using global standards.
2) The system allows registered product lists to be entered and maintained in the cloud.
3) It adds features such as information on dimensions, promotions, images, and URLs related to the products.
The document describes Centrify's services and products for unified identity management across data center, cloud, and mobile environments. Centrify offers software and cloud services that let organizations leverage their existing Active Directory identity infrastructure across different IT environments.
The document describes Centrify for Servers, which integrates UNIX, Linux, Mac, and Windows servers with Microsoft Active Directory, providing capabilities such as single sign-on, access control, and auditing across more than 400 operating systems.
The document describes Centrify for SaaS and mobile applications, providing single sign-on and centralized access management for cloud, internal, and mobile applications using Active Directory credentials.
The document describes Centrify for Mac and mobile devices, enabling centralized identity management and security for these systems through Active Directory, with features such as single sign-on, group policies, and strong authentication.
Theodore Roosevelt argues that it is better to dare mighty things and win glorious triumphs, even at the risk of defeat, than to live a life without great emotions in which one experiences neither victory nor defeat.
Theodore Roosevelt argues that it is better to dare mighty things and win glorious triumphs, even at the risk of defeat, than to live a life of mediocrity with no great achievements or failures.
LCX Event Presentation: Technology with Sophistication in F5 (Allen Informática)
The document discusses how F5 can optimize Microsoft applications for enterprises. F5 can turn IT into a strategic business tool, making IT more agile in meeting business needs and enabling business-driven growth in a flexible and secure way. The partnership between F5 and Microsoft makes it possible to deliver, protect, and optimize Microsoft applications.
Anonymous is a loosely organized international hacktivist group. A typical Anonymous attack involves recruiting hundreds of supporters online, using tools like LOIC to conduct DDoS attacks, and employing skilled hackers to find vulnerabilities using scanners. Targets have included government agencies, companies, and other organizations. Web application firewalls can help mitigate these attacks by detecting vulnerabilities, blocking DDoS traffic, and providing visibility into hacker activities.
Monitoring and Managing Anomaly Detection on OpenShift (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models; a minimal illustrative sketch follows this list.
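As a taste of the notebook-style material in items 1, 8, and 12, here is a minimal, illustrative sketch: a rolling z-score anomaly check whose result is exposed as a Prometheus gauge via the prometheus_client library. The metric names, threshold, and simulated sensor are assumptions for illustration only; the tutorial's actual pipeline (Kafka, S3, ArgoCD, Camel K) is not reproduced here.

```python
# Minimal illustrative sketch: rolling z-score anomaly detection exposed to Prometheus.
# Metric names, threshold, and the simulated sensor are assumptions for illustration.
import random
import statistics
import time

from prometheus_client import Gauge, start_http_server

sensor_value = Gauge("sensor_value", "Latest simulated sensor reading")
anomaly_flag = Gauge("sensor_anomaly", "1 if the latest reading looks anomalous, else 0")

WINDOW, THRESHOLD = 30, 3.0
history: list[float] = []

def is_anomalous(value: float) -> bool:
    """Flag readings more than THRESHOLD standard deviations from the rolling mean."""
    if len(history) < WINDOW:
        return False
    recent = history[-WINDOW:]
    mean = statistics.fmean(recent)
    stdev = statistics.pstdev(recent) or 1e-9
    return abs(value - mean) / stdev > THRESHOLD

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        reading = random.gauss(20.0, 1.0)  # simulated sensor reading
        if random.random() < 0.02:         # occasionally inject a spike
            reading += 15.0
        sensor_value.set(reading)
        anomaly_flag.set(1 if is_anomalous(reading) else 0)
        history.append(reading)
        time.sleep(1)
```

In a Prometheus setup like the one the tutorial describes, these gauges would simply be scraped and alerted on like any other application metric.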
Taking AI to the Next Level in Manufacturing (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
2. Who am I?
— More than 15 years of experience in data management, metadata management, and enterprise architecture.
− Currently VP of Product Marketing for CA’s data modeling solutions.
− Brand Strategy and Product Management roles at Computer Associates and Embarcadero Technologies.
− Senior consultant for PLATINUM technology’s information management consulting division in both the U.S. and Europe.
− Worked with dozens of Fortune 500 companies in the U.S., Europe, Asia, and Africa, and speaks regularly at industry conferences.
− Co-author of several books, including:
• Data Modeling for the Business
• Data Modeling Made Simple with CA ERwin Data Modeler r8
4. Who Are You? Survey
—How would you describe your role?
A. Data Architect, Data Modeler, or Analyst
B. Businessperson or Business Analyst
C. DBA or Technical IT
D. A combination of the above
E. Other
5. Are you Using CA ERwin? Survey
—Are you using CA ERwin currently?
A. Yes!
B. No.
C. I’m not sure
6. What Version of ERwin? Survey
—What version of ERwin are you using?
A. 8.x
B. 7.x
C. 4.x
D. 3.x or earlier
E. I’m not using ERwin, which is very sad.
11. Information in Context
There’s more to data than meets the eye
[Diagram: a conversation about “customer” across the organization (Business Executive, Sales, Accounting, Support Engineer, HR, DBA, Data Architect). The Business Executive asks, “I’d like a report showing all of our customers.” One group says, “A customer is someone who wants to buy our product”; another says, “A customer is someone who owns our product. A person’s not a customer if they don’t have an active maintenance account”; a third says, “My customers are internal employees.” Someone from IT asks, “Which customer database do you want me to pull this from? We have 25,” and the DBA adds, “And, by the way, the databases all store customer information in a different format. ‘CUST_NM’ on DB2, ‘cust_last_nm’ on Oracle, etc. It’s a mess.” The source systems span Sybase, Oracle, DB2, Informix, SQL Server, Teradata, MS SQL Azure, and SAP.]
12. The CA ERwin Solution
Visualize the Power of Your Data
On-Premise or in the Cloud
13. CA ERwin Data Modeler
Know what data you have: Create a visual inventory of source and target systems – Reverse Engineering
Know what your data means: Communicate key business requirements between business and IT stakeholders
Ensure that your data is consistent: Build consistent database structures – Forward Engineering
[Diagram: CA ERwin® Data Modeler connecting to multiple platforms: Oracle, SQL Server, DB2, Sybase, MySQL, Teradata, SQL Azure.]
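Reverse engineering works from ordinary catalog metadata. As a minimal sketch (assuming a SQL Server source; Oracle exposes the same information through USER_TAB_COLUMNS and DB2 through SYSCAT.COLUMNS), this is roughly the column inventory a reverse-engineering pass starts from; CA ERwin DM gathers this kind of metadata through its own connections.

```sql
-- Illustrative only: the catalog query behind a "visual inventory" of a source system.
-- INFORMATION_SCHEMA.COLUMNS is the standard SQL Server / ANSI catalog view.
SELECT TABLE_SCHEMA,
       TABLE_NAME,
       COLUMN_NAME,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH,
       IS_NULLABLE
FROM   INFORMATION_SCHEMA.COLUMNS
ORDER  BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;
```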
15. CA ERwin® Data Modeling
At the Center of Your Data Management Initiatives
[Diagram: CA ERwin data modeling at the center of data management initiatives: Master Data Management (MDM), Business Intelligence + Data Warehousing, Data Governance, Data Quality, ERP, Application Development, Integration, and Cloud/SaaS.]
18. The Challenge
—You’ve been tasked to assist in the creation of a Business
Intelligence (BI) project
—Trying to obtain a single view of ‘customer’
—Technical and political challenges exist
− Numerous systems have been built already—different platforms and databases
− Parties cannot agree on a single definition of what a ‘customer’ is
—Solution: Need to build a High-Level Data Model
19. What is a High-Level Data Model?
—A high-level data model (HDM) uses simple graphical images to
describe core concepts and principles of an organization and what
they mean
—The main audience of an HDM is businesspeople
—An HDM is used to facilitate communication
—It needs to be high-level enough to be intuitive, but still capture
the rules and definitions needed to create database systems.
20. “A Picture is Worth a Thousand Words”
Examples of High-Level Data Models
21. “A Picture is Worth a Thousand Words”
Examples of High-Level Data Models
[Diagram: example HDM showing concepts such as Customer, Product, Order, Location, Region, Raw Material, and Ingredient.]
22. “A Picture is Worth a Thousand Words”
Examples of High-Level Data Models
23. “A Picture is Worth a Thousand Words”
Examples of High-Level Data Models
24. “A Picture is Worth a Thousand Words”
Examples of High-Level Data Models
25. “A Picture is Worth a Thousand Words”
Examples of High-Level Data Models
27. Levels of Data Models
—Models can be built
− Top-Down
− Bottom-Up
− Using a Hybrid Approach
28. How is this Different from a Logical Model?
VHDM (very high-level data model) | HDM | LDM
Defines the scope, audience, and context for information | Defines key business concepts and their definitions | Represents core business rules and data relationships at a detailed level
Main purpose is communication and agreement of scope and context | Main purpose is communication and agreement of definitions and business logic | Provides enough detail for a subsequent first-cut physical design
Relationships optional; if shown, they represent hierarchy | Many-to-many relationships OK | Many-to-many relationships resolved
Cardinality not shown | Cardinality shown | Cardinality shown
No attributes shown | Attributes optional; if shown, can be composite attributes that convey business meaning | Attributes required and all attributes atomic; primary and foreign keys defined
Not normalized (relational models) | Not normalized (relational models) | Fully normalized (relational models)
Subject names should represent high-level data subjects or functional areas of the business | Concept names should use business terminology | Entity names may be more abstract
Subjects link to 1-M HDMs | Many concepts are supertypes, although subtypes may be shown for clarity | Supertypes all broken out to include subtypes
A ‘one pager’ | Should be a ‘one pager’ | May be larger than one page
Business-driven | Cross-functional and more senior people involved in the HDM process, with fewer IT | Multiple smaller groups of specialists and IT folks involved in the LDM process
Informal notation | ‘Looser’ notation required – some formal construct needed, but the ultimate goal is to be understood by a business user | Formal notation required
< 20 objects | < 100 objects | > 100 objects
29. Building a High-Level Data Model
—Let’s go back to our challenge, to achieve a ‘single version of the
truth’ for Customer information
—We have 5 different systems with customer information in them:
− 2 on Oracle
− 1 on DB2
− 1 SAP system
− 1 using MS SQL Server
30. Building a High-Level Data Model
—We start with a very simple HDM, with just one object on it, called
“Customer”.
—We use an ER Model and show business definitions
Too Simple??
31. Too simple?
—Our team thought so, so it went ahead and focused on the technical integration, including (a sketch of the consolidation step follows the list):
− Reverse engineering a physical model from each system
− Creating ETL scripts
− Migrating the data into a single hub
− Building a reporting system off of the data
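As a rough sketch of what the “ETL scripts” item means in practice (table and column names other than CUST_NM and cust_last_nm are hypothetical), consolidating the sources into a hub is essentially a mapping of each system’s layout onto one target:

```sql
-- Hypothetical consolidation into a single customer hub. The differently
-- named source columns (CUST_NM on DB2, cust_last_nm on Oracle) are mapped
-- onto one target layout; staging and hub table names are illustrative only.
INSERT INTO CUSTOMER_HUB (source_system, source_customer_id, customer_last_name)
SELECT 'DB2',    CUST_ID,     CUST_NM       FROM STG_DB2_CUSTOMER
UNION ALL
SELECT 'ORACLE', CUSTOMER_ID, cust_last_nm  FROM STG_ORACLE_CUSTOMER;
```

Technically this runs fine, which is exactly the point of the story that follows: nothing in the script itself says what a “customer” is.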
32. Focusing on the Business
—This implementation went “perfectly”, with no errors in the scripts,
no data type inconsistencies, no delays in schedule, etc.
—We built a complex BI reporting system to show our upper
management the results.
—We even sent out a welcome email to all of our customers, giving
them a 50% off coupon, and thanking them for their support.
33. Focusing on the Business
—Until we showed the report to the business sponsor:
− We can’t have 2000 customers in this region! I know we only have around 400!
− Why is Global Bank Company on this list? They are still evaluating our product!
Sales was negotiating a 10% discount with them, and you just sent them a 50%
coupon!?!?
− You just spent all of that money in IT to build this report with bad data???
34. Back to the Drawing Board
—After doing an extensive review of the five source systems and talking with the system owners, we discovered that:
− The DB2 system was actually used by Sales to track their prospective “customers”
− These “customers” didn’t match our definition—they didn’t own a product of ours!!
35. Oops!
—We were mixing current customers with prospects (non-customers).
− We just sent a discount coupon to 1600 of the wrong people!
− We gave upper management a report showing the wrong figure for our total number
of customers!
− We are now significantly over budget to have to go back and fix this!!
—We started over, this time with a High-Level Data Model
36. Achieving Consensus
We created a report of the various definitions of customer
And verified with the various stakeholders that:
There were 2 (and only 2) definitions of customer
Sales was OK with calling their “customer” a “prospect”
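Once the two definitions were agreed, they could be made explicit wherever the data is used. A minimal sketch (the owns_product and has_active_maintenance flags are hypothetical columns standing in for whatever the source systems actually record):

```sql
-- Hypothetical rule that applies the agreed definitions downstream:
-- a customer owns our product and has an active maintenance account;
-- everyone else Sales is tracking is a prospect, not a customer.
SELECT source_customer_id,
       customer_last_name,
       CASE
         WHEN owns_product = 'Y' AND has_active_maintenance = 'Y' THEN 'CUSTOMER'
         ELSE 'PROSPECT'
       END AS party_type
FROM   CUSTOMER_HUB;
```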
40. A HDM Facilitates Communication
—A High-Level Data Model Facilitates Communication between
Business and IT
− Focus on your (business) audience
• Intuitive display
• Capture the business rules and definitions in your model
− Simplicity does not mean lack of importance
• A simple model can express important concepts
• Ignoring the key business definitions can have negative effects
− A model or tool is only part of the solution
• Communication is key
• Process and Best Practices are critical to achieve consensus and buy-in
41. Communication is the Main Goal
of a High-Level Data Model
—Wouldn’t it be helpful if we did this in daily life, too?
—i.e. “Let’s go on a family vacation!”
Person | Concept | Definition
Father | Vacation | An opportunity to take the time to achieve new goals
Mother | Vacation | Time to relax and read a book
Jane | Vacation | A chance to get outside and exercise
Bobby | Vacation | Time to be with friends
Donna | Vacation | More time to build data models
42. Some Creative Ways to Facilitate Conversations with
Stakeholders
— Food!
− “Lunch and Learn”
− Bring candy to meetings
— Force?
− “No bathroom breaks until we reach consensus!”
— Active Listening
− Understand why there is disagreement (e.g. “Ingredient” vs. Raw Material)
— Fit into their schedule
− Webinars
− The “5 minute rule” for business execs – small, bite-sized models or questions.
— Publish in an easily-accessible, intuitive format
− Web-based publishing
− Spreadsheet-style reporting
43. Identify Model Purpose
— Key to success of any project is finding the right pain-point and
solving it.
— Make sure your model focuses on a particular pain point, e.g. migrating an application or understanding an area of the business
Business:
− Existing: “Today an Account can only be owned by one Customer.”
− Proposed: “By next quarter, an Account can be owned by more than one Customer.”
Application:
− Existing: “In the legacy Account Management system, we call the customer an Account Holder.”
− Proposed: “When we migrate to SAP/R3, Account Holder will be represented as Object.”
44. Managing the Technical Infrastructure
Why do you need a modeling tool, and not a drawing tool?
—Recall that we had multiple data sources on a variety of platforms:
− 2 on Oracle
− 1 on DB2
− 1 SAP system
− 1 using MS SQL Server
—How can CA ERwin help manage this?
45. Creating a Data Inventory
— “Design Once, Reuse Many Times” across heterogeneous platforms
— Design layers allow you to have a single high-level/logical model pointing to numerous physical model platforms.
[Diagram: one logical model forward engineered to Oracle, DB2, and SQL Server physical models.]
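The payoff shows up in the generated DDL: one logical Customer entity, forward engineered to more than one target. The entity and columns below are illustrative; the point is that the platform-specific data types come from the design layer and target platform, not from redesigning the model per database.

```sql
-- Illustrative forward-engineering output for an Oracle target
CREATE TABLE CUSTOMER (
    CUSTOMER_ID   NUMBER(10)   NOT NULL,
    CUST_LAST_NM  VARCHAR2(50),
    CONSTRAINT PK_CUSTOMER PRIMARY KEY (CUSTOMER_ID)
);

-- Illustrative forward-engineering output for a SQL Server target,
-- generated from the same logical model
CREATE TABLE CUSTOMER (
    CUSTOMER_ID   INT          NOT NULL,
    CUST_LAST_NM  VARCHAR(50),
    CONSTRAINT PK_CUSTOMER PRIMARY KEY (CUSTOMER_ID)
);
```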
46. Design Layers Create both Business and Technical Designs
[Diagram: design layers spanning the Business Sponsor, Data Architect, and DBA roles: a Conceptual Data Model, Logical Data Models (Business Area 1 and Business Area 2), and Physical Data Models (Oracle, SQL Server, DB2).]
47. A Data Model can be your Filter
—A Data Model can add:
− Focus – by Subject Area, by Platform, etc.
− Visualization – Different Views for Different Audiences
− Translation – to different DBMS formats AND to non-DBMS formats such as UML, BI tools, Excel, XML, etc.
[Diagram: the data model acting as a filter between source platforms (Oracle, DB2, SQL Server, IDMS, SAP, 3NF models) and different audiences: Developers, Business Sponsors, Data Architects, DBAs.]
52. Managing the Data Inventory with a Central Repository
— A Central Model Store provides a single repository to store all of your data model assets
— A collaborative environment for multiple modeling teams
— Metadata storage for: multiple models, multiple DBMS platforms, multiple tools, multiple audiences
[Diagram: a Central Model Store holding a single definition of “Customer”, feeding multiple models, multiple DBMSs (Oracle, DB2, SQL Server, Teradata), multiple tools (BI tools, ETL tools, spreadsheets), and multiple audiences (Developers, Business Sponsors, Data Architects, DBAs).]
53. Understanding ERP Systems with
CA ERwin Saphir Option
Important metadata is found beyond traditional databases.
ERP Systems also contain critical information about customers, employees, etc.
SAP, Oracle, JD Edwards, etc.
These ERP systems are difficult to manage with a traditional “reverse
engineering” process using a data modeling tool
There are thousands of tables
When we reverse engineer them, we get unintuitive technical names
54. Understanding ERP Systems with
CA ERwin Saphir Option
Using the CA ERwin Saphir Option, we can easily group tables by
subject area, and can translate table and column names into
intuitive, English versions.
And we can more easily integrate ERP data models into our enterprise data architecture.
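To give a flavor of the naming problem and the translation: SAP’s customer master general data lives in the table KNA1, with fields such as KUNNR (customer number), NAME1 (name), and ORT01 (city). The query below is an illustrative sketch of that kind of mapping, not CA ERwin Saphir output:

```sql
-- Illustrative only: rendering SAP technical names as business-friendly ones.
-- KNA1, KUNNR, NAME1 and ORT01 are standard SAP customer-master identifiers;
-- the aliases are the kind of intuitive names the Saphir Option surfaces.
SELECT KUNNR AS customer_number,
       NAME1 AS customer_name,
       ORT01 AS city
FROM   KNA1;
```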
55. CA ERwin Data Model Validator
CA ERwin Data Model Validator checks models for consistency & accuracy
with a “teach me” facility to learn from errors
Great for new modelers and team members.
Helps with governance of modeling projects.
56. What’s New in the CA ERwin Product Family
CA ERwin Data Modeling r8.2
57. CA ERwin Data Modeling r8.2
Three New Offerings:
− CA ERwin® Web Portal – Visualize Information from the Web, for All Audiences
− CA ERwin® Data Modeler for Microsoft SQL Azure – Managing Data Both On-Premise and in the Cloud
− CA ERwin® Data Modeler r8.2 – Collaboration Facilitated
58. Data Management – Moving to the Cloud
Many customers are nervous about moving their data to the Cloud.
Concerns include:
Security/Privacy
Learning curve for new technologies
Integration with other data management systems or applications
A data model can help allay these fears
Assurance that your data is managed securely—using a data model as your roadmap. You
decide what data stays on premise and what moves to the Cloud. Once in the Cloud,
understand and manage the data stored off-premises.
Use Existing Skills: Customers can use the same familiar data modeling paradigm for Cloud-
based data as for their on-premises data using CA ERwin Data Modeler.
Visualize both on-premises (Oracle, Sybase, SQL Server, DB2, etc.) and Cloud-based
databases (MS SQL Azure) from a single data modeling environment
59. CA ERwin Data Modeler for Microsoft SQL Azure
A Data Model is your Roadmap to the Cloud
A Data Model is your “roadmap” for:
What data to move to the Cloud, and what to keep on-premise
Defining data structures (physical model) and business requirements (logical model) for Cloud
databases
Off-Premise doesn’t mean Out of your Control
CA ERwin Data Modeler for Microsoft SQL Azure
Manage data structures in the Cloud on the MS SQL Azure platform
Visualize both on-premise (Oracle, Sybase, SQL Server, DB2, etc.) and Cloud-based databases (MS
SQL Azure) from a single data modeling environment
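The generated DDL for a SQL Azure target is ordinary T-SQL, with the platform’s rules respected. A small illustrative example (table and columns hypothetical); SQL Azure at the time required a clustered index before a table could accept data, which a clustered primary key satisfies:

```sql
-- Illustrative T-SQL for a SQL Azure target: standard SQL Server syntax,
-- with a clustered primary key to satisfy SQL Azure's clustered-index requirement.
CREATE TABLE CUSTOMER (
    CUSTOMER_ID   INT         NOT NULL,
    CUST_LAST_NM  VARCHAR(50),
    CONSTRAINT PK_CUSTOMER PRIMARY KEY CLUSTERED (CUSTOMER_ID)
);
```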
60. CA ERwin Web Portal
Sharing Information with All Audiences
— While some users need a desktop tool to build and analyze data models, many more can access and understand information via a web-based interface.
[Diagram: desktop modeling roles (Data Architect, Database Administrator (DBA), Data Modeler, Developer) alongside the broader web-portal audience (Business Analyst, Business User / Steward, BI Analyst, MDM Analyst, Data Architect, DBA, Data Modeler).]
61. CA ERwin Web Portal
Web-Based Search, Impact Analysis, Reporting
The CA ERwin Web Portal makes it easy to share metadata
(information in context) with both Business and Technical users
Internet-Style Keyword Search
Diagram Visualization
Graphical Impact Analysis
Reporting
Interfaces for Business vs. Technical Users
Easy to roll-out to multiple users (no local install)
66. Active Model Templates
Creating Enterprise Standards
— Ability to Reuse and Synchronize Enterprise Model Objects with other models across the Organization.
[Diagram: Enterprise Model Objects synchronized into Project 1 and Project 2 models.]
67. Active Model Templates
— Ability to define “Enterprise” objects for Reuse
− Share individual model objects, not just models
• tables, entities, domains, etc.
− Wizard-driven
− Synchronize with other model objects
• Automatically on model load
• Or manually, user-driven through Wizard
— First phase in “Data Dictionary” style model sharing
− Next Step is Repository (Mart)-based sharing in r9
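The effect of object-level reuse is easiest to see in what the project models generate. A minimal sketch, assuming a hypothetical enterprise domain “Person Name” defined once in the template as VARCHAR(50) NOT NULL: every model that binds a column to it produces the same definition, and a change to the domain propagates when the models synchronize.

```sql
-- Illustrative output from two project models that both reuse a hypothetical
-- enterprise "Person Name" domain. Changing the domain in the template changes
-- both columns on the next synchronization.
CREATE TABLE EMPLOYEE (
    EMPLOYEE_ID  INT         NOT NULL PRIMARY KEY,
    LAST_NM      VARCHAR(50) NOT NULL  -- bound to enterprise domain "Person Name"
);

CREATE TABLE CUSTOMER_CONTACT (
    CONTACT_ID   INT         NOT NULL PRIMARY KEY,
    LAST_NM      VARCHAR(50) NOT NULL  -- bound to enterprise domain "Person Name"
);
```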