DATA SCIENCE AND BIG DATA ANALYTICS
CHAPTER 2: DATA ANALYTICS LIFECYCLE
DATA ANALYTICS LIFECYCLE
• Data science projects differ from BI projects
• More exploratory in nature
• Critical to have a project process
• Participants should be thorough and rigorous
• Break large projects into smaller pieces
• Spend time to plan and scope the work
• Documenting adds rigor and credibility
DATA ANALYTICS LIFECYCLE
• Data Analytics Lifecycle Overview
• Phase 1: Discovery
• Phase 2: Data Preparation
• Phase 3: Model Planning
• Phase 4: Model Building
• Phase 5: Communicate Results
• Phase 6: Operationalize
• Case Study: GINA
2.1 DATA ANALYTICS LIFECYCLE OVERVIEW
• The data analytics lifecycle is designed for Big Data problems and data science projects
• With six phases, project work can occur in several phases simultaneously
• The cycle is iterative, reflecting how real projects actually run
• Work can return to earlier phases as new information is uncovered (see the sketch after this list)
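The iterative movement between phases can be pictured with a minimal, purely illustrative sketch. The phase names come from the chapter outline; the Phase enum, the next_phase helper, and the exit-criteria flag are hypothetical stand-ins for whatever go/no-go criteria a real project would use.

```python
# Illustrative only: the six phases from the chapter outline, plus a toy helper
# showing that a project advances only when a phase's exit criteria are met
# and, in practice, may move back to an earlier phase.
from enum import Enum

class Phase(Enum):
    DISCOVERY = 1
    DATA_PREPARATION = 2
    MODEL_PLANNING = 3
    MODEL_BUILDING = 4
    COMMUNICATE_RESULTS = 5
    OPERATIONALIZE = 6

def next_phase(current: Phase, exit_criteria_met: bool) -> Phase:
    """Move forward when exit criteria are met; otherwise stay in the phase."""
    if not exit_criteria_met:
        return current
    members = list(Phase)
    return members[min(members.index(current) + 1, len(members) - 1)]

def revisit(_current: Phase, earlier: Phase) -> Phase:
    """Return to an earlier phase when new information is uncovered."""
    return earlier

phase = Phase.DISCOVERY
phase = next_phase(phase, exit_criteria_met=True)   # -> DATA_PREPARATION
phase = revisit(phase, Phase.DISCOVERY)             # new findings force rework
print(phase.name)                                   # DISCOVERY
```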
2.1.1 KEY ROLES FOR A SUCCESSFUL ANALYTICS PROJECT
• Business User – understands the domain area
• Project Sponsor – provides requirements
• Project Manager – ensures milestones and objectives are met
• Business Intelligence Analyst – provides business domain expertise based on a deep understanding of the data
• Database Administrator (DBA) – creates the DB environment
• Data Engineer – provides technical skills, assists with data management and extraction, supports the analytic sandbox
• Data Scientist – provides analytic techniques and modeling
2.1.2 BACKGROUND AND OVERVIEW OF DATA ANALYTICS LIFECYCLE
• Data Analytics Lifecycle defines the analytics process and best practices from discovery to project completion
• The Lifecycle employs aspects of
• Scientific method
• Cross Industry Standard Process for Data Mining (CRISP-DM)
• Process model for data mining
• Davenport’s DELTA framework
• Hubbard’s Applied Information Economics (AIE) approach
• MAD Skills: New Analysis Practices for Big Data by Cohen et al.
https://en.wikipedia.org/wiki/Scientific_method
https://en.wikipedia.org/wiki/Cross_Industry_Standard_Process_for_Data_Mining
http://www.informationweek.com/software/information-management/analytics-at-work-qanda-with-tom-davenport/d/d-id/1085869?
https://en.wikipedia.org/wiki/Applied_information_economics
https://pafnuty.wordpress.com/2013/03/15/reading-log-mad-skills-new-analysis-practices-for-big-data-cohen/
OVERVIEW OF DATA ANALYTICS LIFECYCLE
2.2 PHASE 1: DISCOVERY
1. Learning the Business Domain
2. Resources
3. Framing the Problem
4. Identifying Key Stakeholders
5. Interviewing the Analytics Sponsor
6. Developing Initial Hypotheses
7. Identifying Potential Data Sources
2.3 PHASE 2: DATA PREPARATION
• Includes steps to explore, preprocess, and condition data (see the sketch after this list)
• Create a robust environment – the analytics sandbox
• Data preparation tends to be the most labor-intensive step in the analytics lifecycle
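As a purely illustrative example of what the explore, preprocess, and condition steps might look like inside an analytics sandbox, the pandas sketch below assumes a hypothetical raw CSV extract (sandbox/raw_orders.csv) with order_date and customer_id columns; the file, columns, and derived fields are not from the deck.

```python
# Hypothetical sandbox example: explore, preprocess, and condition a raw extract.
import pandas as pd

# Explore: load the raw extract and inspect shape, types, and missing values.
raw = pd.read_csv("sandbox/raw_orders.csv")     # assumed sandbox file
print(raw.shape)
print(raw.dtypes)
print(raw.isna().sum())

# Preprocess: coerce types and drop records that cannot be used downstream.
raw["order_date"] = pd.to_datetime(raw["order_date"], errors="coerce")
clean = raw.dropna(subset=["order_date", "customer_id"])

# Condition: derive analysis-ready fields and write the conditioned table
# back to the sandbox for the model planning and model building phases.
clean = clean.assign(order_month=clean["order_date"].dt.strftime("%Y-%m"))
clean.to_csv("sandbox/orders_conditioned.csv", index=False)
```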