This paper is an improved version of the WUSS 2011 paper of the same title. This paper covers the following:
(1). The Nature of the SGPLOT procedure.
How are Line Attributes assigned?
(2). The Legend (Line Inconsistency) problem.
(3). 5 Solutions to Line Attribute Inconsistency.
(4). Additional Comments on Attribute Maps.
Unlike earlier versions of this paper, the current paper places more emphasis on attribute maps; the SGPanel procedure is only briefly mentioned.
The document provides an overview of functional dependencies and database normalization. It discusses four informal design guidelines for relational databases: 1) design relations so their meaning is clear, 2) avoid anomalies, 3) avoid null values, and 4) avoid spurious tuples. It then covers functional dependencies, inference rules, equivalence, and normal forms including 1NF, 2NF, 3NF and BCNF. The goals of normalization are also summarized as reducing redundancy, anomalies, and producing high quality schemas. Examples are provided to illustrate each concept.
The document discusses functional dependencies and database normalization. It provides examples of functional dependencies and explains key concepts like:
- Functional dependencies define relationships between attributes in a relation.
- Armstrong's axioms are properties used to derive functional dependencies.
- Decomposition aims to eliminate redundancy and anomalies by breaking relations into smaller, normalized relations while preserving information and dependencies.
- A decomposition is lossless if it does not lose any information, and dependency preserving if the original dependencies can be maintained on the decomposed relations.
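The inference process the bullets above describe can be made concrete. Below is a minimal Python sketch (the function name and FD encoding are illustrative, not from the original documents) that computes the closure of a set of attributes under a set of functional dependencies, which is the standard fixed-point application of Armstrong's axioms:

```python
def closure(attrs, fds):
    """Compute the closure of a set of attributes under functional dependencies.

    fds is a list of (lhs, rhs) pairs, each a frozenset of attribute names.
    Repeatedly apply any FD whose left-hand side is already contained in the
    result, until no new attributes can be added (a fixed point).
    """
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Example: R(A, B, C) with A -> B and B -> C. Then {A}+ = {A, B, C},
# so A functionally determines every attribute of R and is a key.
fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
print(closure({"A"}, fds))  # -> {'A', 'B', 'C'}
```

Checking whether a candidate key's closure covers all attributes of the relation is exactly how candidate keys are verified in the normalization examples the documents describe.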
The document discusses normalization of database relations through various normal forms. It defines first normal form (1NF) as removing repeating groups from relations by either adding empty columns or creating a separate relation. Second normal form (2NF) requires that relations in 1NF have no partial dependencies, where non-key attributes must fully depend on the primary key. The document provides examples of relations in 1NF and 2NF and discusses functional dependencies.
The document discusses normalization of database relations. It begins by describing normalization and its purpose of reducing data redundancy and update anomalies. It then discusses different normal forms (1NF, 2NF, 3NF, BCNF) and concepts used in normalization like functional dependencies and candidate keys. Examples are provided to illustrate key normalization concepts. Finally, example relations are provided and normalized to BCNF to demonstrate the normalization process.
The document discusses decomposition of relations in a database. It defines decomposition as breaking a relation into multiple smaller relations. Decomposition should be lossless, meaning the original relation can be accurately reconstructed from the decomposed relations. There are two types of decomposition: lossless decomposition and dependency preserving decomposition. For a decomposition to be lossless, the common attributes between relations must contain a key. For a decomposition to be dependency preserving, the functional dependencies of the original relation must be logically implied by the functional dependencies of the decomposed relations. Several examples of lossless and non-lossless decompositions are provided based on different schemas and functional dependencies.
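The lossless-join condition stated above ("the common attributes between relations must contain a key") can be tested mechanically for a binary decomposition. A hedged Python sketch (the helper names are my own; the test itself is the standard one for two-relation decompositions):

```python
def closure(attrs, fds):
    """Closure of a set of attributes under (lhs, rhs) frozenset FD pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def is_lossless(r1, r2, fds):
    """Binary lossless-join test: the common attributes of R1 and R2 must
    functionally determine all of R1 or all of R2, i.e. they contain a key
    of at least one of the two decomposed relations."""
    common = set(r1) & set(r2)
    c = closure(common, fds)
    return set(r1) <= c or set(r2) <= c

# Example: R(A, B, C) with A -> B.
fds = [(frozenset("A"), frozenset("B"))]
print(is_lossless({"A", "B"}, {"A", "C"}, fds))  # common {A} is a key of R1: True
print(is_lossless({"A", "B"}, {"B", "C"}, fds))  # common {B} determines neither side: False
```

The second call shows a non-lossless decomposition of the same schema, mirroring the kinds of examples the document walks through.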
This document discusses database normalization through various normal forms. It defines key concepts like functional dependencies and full functional dependencies. It explains the objectives and rules of first, second, third normal forms and BCNF. First normal form requires each field to contain a single value. Second normal form requires fields to depend on the whole primary key. Third normal form and BCNF further eliminate transitive dependencies. The document provides examples to illustrate normalization and resolving anomalies through decomposition. It also introduces multi-valued dependencies and fourth normal form.
These are the slides for my presentation at WUSS 2011 for my paper 'Hunting for Columbus Eggs in the SAS(R) Programming World: A Guidance to Creative Thinking for SAS Programmers'.
How to Speed Up Your Validation Process Without Really Trying? (alice_m_cheng)
In this paper, the author not only provides a brief introduction to the COMPARE procedure, but also shares her experience in speeding up the validation process. The comparison discussed in this paper is mainly between two datasets, even though it is also possible to compare variables within the same dataset using the COMPARE procedure. Below are the topics covered in this paper.
• A brief introduction to the COMPARE procedure
• Orthodox Use of Features in PROC COMPARE to speed up the validation of a dataset
• Unorthodox Use of Features in PROC COMPARE to speed up the validation of a dataset
• A macro named %QCDATA to speed up the validation of a single dataset
• Introduction to &SYSINFO generated by the COMPARE procedure
• A macro named %QCDIR to identify all datasets in the production and validation directories
The document discusses four methods for calculating duration from clinical trial data: (1) add disjoint intervals, (2) discount days in gaps, (3) check one day at a time, and (4) simply count distinct dates. It notes that duration is often needed to determine how long adverse events or medications lasted. Vertical data representation and the challenges of gaps and overlaps are also covered. Examples and SAS code for each duration calculation method are provided.
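To illustrate the first method listed above (the paper itself provides SAS code; this Python sketch with made-up adverse-event dates is only for intuition), one can merge overlapping records into disjoint intervals and then add up their lengths:

```python
from datetime import date

def total_duration_days(intervals):
    """Method (1): merge overlapping (start, end) date intervals into
    disjoint intervals, then sum the length of each, counting both
    endpoints (a 1-day event has duration 1)."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous merged interval: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return sum((end - start).days + 1 for start, end in merged)

# Two overlapping adverse-event records plus one disjoint record:
aes = [
    (date(2024, 1, 1), date(2024, 1, 5)),
    (date(2024, 1, 4), date(2024, 1, 8)),   # overlaps the first record
    (date(2024, 2, 1), date(2024, 2, 2)),   # disjoint record
]
print(total_duration_days(aes))  # 8 days (Jan 1-8) + 2 days (Feb 1-2) = 10
```

Method (2), discounting days in gaps, falls out of the same merge step: the gap days are simply whatever lies between the disjoint merged intervals.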
CDISC journey in solid tumor using RECIST 1.1 (Paper) (Kevin Lee)
This document summarizes the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1 guidelines for evaluating tumor response in clinical trials. It introduces the three types of oncology studies, defines target and non-target lesions, and describes how lesion measurements are used to determine complete response, partial response, stable disease, progressive disease, and not-evaluable responses. It also discusses how RECIST 1.1 data are organized in CDISC SDTM and ADaM domains, and provides an example of how these domains can be used to evaluate tumor response and analyze outcomes like objective response rate, progression-free survival, and time to progression.
This document provides an introduction to SAS (Statistical Analysis System). It describes SAS as a combination of statistical software, database management, and programming language. It outlines the history of SAS and its wide usage across various industries like healthcare, finance, retail, and telecom. The document discusses the SAS user interface and basic program structure. It also highlights the benefits of SAS certification.
Understanding SAS Data Step Processing (guest2160992)
Learning
Base SAS,
Advanced SAS,
Proc SQL,
ODS,
SAS in financial industry,
Clinical trials,
SAS Macros,
SAS BI,
SAS on Unix,
SAS on Mainframe,
SAS interview Questions and Answers,
SAS Tips and Techniques,
SAS Resources,
SAS Certification questions...
visit http://sastechies.blogspot.com
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
20 Comprehensive Checklist of Designing and Developing a Website (Pixlogix Infotech)
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the case of the XZ backdoor have much more than that in common.
Join the presentation to immerse yourself in a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training activities. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (the origin of her nickname, deneb_alpha).
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining, its importance, and an overview of the platform. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and I will share these foundational concepts to build on:
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Legend Consistency, SAS (WUSS 2011)
1. IS THE LEGEND IN YOUR
SAS/GRAPH® OUTPUT STILL
TELLING THE RIGHT STORY?
ALICE M. CHENG
OCTOBER 13, 2011
2. OBJECTIVES
To explore both traditional and the latest approaches
to ensure Legend Consistency.
This paper is meant to be a sequel to the SUGI 29 paper
"Is the Legend in Your SAS/GRAPH® Output Telling the Right Story?"
by Justina M. Flavin and Art Carpenter.
3. AGENDA
The Nature of the SGPLOT Procedure:
How are Line Attributes Assigned?
Re-visit the Line Inconsistency Problem
by means of the SGPLOT Procedure.
5 Solutions to Resolve Line Inconsistency Issues.
The SGPANEL Procedure
(as a Special Case of the SGPLOT Procedure).
4. INTRODUCTION
Line Attributes (line color, symbol and style).
Placebo
Drug A
Drug B
Drug C
It is critical that line attributes are CONSISTENT.
5. THE NATURE OF SGPLOT
(HOW ARE LINE ATTRIBUTES ASSIGNED?)
6. Treatment order is based on input data order.
Figure 1A: Average Scores of 4 Treatments are displayed according
to input data order.
7. INPUT DATASET
data DRUG;
  input TRT & $7. MONTH : 2. SCORE : 3. @@;
  label TRT='Treatment:'
        MONTH='Month'
        SCORE='Average Score';
datalines;
Placebo 1 10 Placebo 2 15 Placebo 3 18 Placebo 4 20 Placebo 5 22
Drug A 1 30 Drug A 2 40 Drug A 3 60 Drug A 4 85 Drug A 5 100
Drug B 1 20 Drug B 2 30 Drug B 3 45 Drug B 4 70 Drug B 5 90
Drug C 1 25 Drug C 2 30 Drug C 3 35 Drug C 4 60 Drug C 5 85
;
run;
Note: Based on the input dataset DRUG,
the order of treatments is: Placebo, Drug A, Drug B, Drug C.
8. DEFINE STYLE
proc template;
  define style STYLES.MYSTYLE;
    parent=STYLES.DEFAULT;
    *--- Re-define styles for line attributes. ---*;
    style GraphData1 from GraphData1 / linestyle=1 contrastcolor=red
      markersymbol='circle';       * Line attributes for 1st treatment. ;
    style GraphData2 from GraphData2 / linestyle=2 contrastcolor=blue
      markersymbol='square';       * Line attributes for 2nd treatment. ;
    style GraphData3 from GraphData3 / linestyle=3 contrastcolor=green
      markersymbol='diamond';      * Line attributes for 3rd treatment. ;
    style GraphData4 from GraphData4 / linestyle=4 contrastcolor=orange
      markersymbol='circlefilled'; * Line attributes for 4th treatment. ;
    *--- Re-define other styles to make the graph easier to read. ---*;
    class GraphFonts / 'GraphLabelFont'=("Arial", 18pt, bold)
                       'GraphValueFont'=("Arial", 15pt, bold)
                       'GraphTitleFont'=("Arial", 25pt, bold);
    style graphbackground from graphbackground / color=_undef_;
  end;
run;
Note: The k-th treatment in the input dataset takes on the k-th color here.
For instance, the first treatment (Placebo) takes on the RED color.
9. USE SGPLOT TO GENERATE FIGURE
ods listing style=STYLES.MYSTYLE;
title 'Score Over Time by Treatment';
proc sgplot data=DRUG;
  series y=SCORE x=MONTH / group=TRT markers;
  keylegend / noborder title='Treatment';
run;
Use the style STYLES.MYSTYLE.
Assign TRT as the GROUP variable.
10. PROC SGPLOT VS PROC GPLOT
Default order of the Group (TRT) variable:
  SGPLOT - data order; GPLOT - internal value.
Line attributes:
  SGPLOT - Style statements in PROC TEMPLATE; GPLOT - SYMBOLn statements.
So if you want to present the values based on the internal value of the Group
variable in PROC SGPLOT (as in GPLOT), prior to SAS 9.3 you have to
sort by the Group (TRT) variable.
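The pre-9.3 workaround described above can be sketched as follows (a minimal sketch; the dataset and variable names follow the DRUG example in this paper):

```sas
*- Pre-9.3 workaround: sort by the group variable so that line   -*;
*- attributes are assigned in the internal (sorted) order of TRT. -*;
proc sort data=DRUG out=DRUG_SORTED;
  by TRT;
run;

proc sgplot data=DRUG_SORTED;
  series y=SCORE x=MONTH / group=TRT markers;
  keylegend / noborder title='Treatment';
run;
```

Because the groups now arrive in sorted order, SGPLOT's data-order assignment coincides with GPLOT's internal-value order.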
11. Or, in SAS Version 9.3, use ...
proc sgplot data=DRUG;
  series y=SCORE x=MONTH / group=TRT
                           grouporder=ascending
                           markers;
run;
The GROUP=variable and GROUPORDER=ASCENDING|DESCENDING|DATA options
in the SERIES statement specify the order of the values of the GROUP
variable, based on which the line attributes are assigned to the groups.
12. Now based on the ascending order of the values of TRT.
Figure 1B: With GROUP=TRT and GROUPORDER=ASCENDING,
the order of the treatments in the legend and the line attributes
have been changed.
14. LINE ATTRIBUTES
Line Attributes (line color, symbol and style).
Drug A
Drug B
Drug C
Placebo
It is critical that line attributes are CONSISTENT.
15. WHAT IF DRUG B IS NO LONGER IN A STUDY?
Comparison of legends for Drug A, Drug B, Drug C, and Placebo:
the SGPLOT default after dropping Drug B vs. the legend we want.
THE LINE ATTRIBUTES ARE NO LONGER CONSISTENT.
16. Error!!! Inconsistency in line attributes:
Drug C should be GREEN and Placebo should be ORANGE!
Figure 2: The same line inconsistency issue seen in the GPLOT
procedure now re-appears in the output of the SGPLOT procedure
when Drug B is excluded.
17. THE FIGURE WE WANT …
Note: The lines in the legend are consistent with the lines when
all 4 treatments are available.
20. Solution # 1. Use Dummy Data
Create a dummy dataset with a single observation
for each treatment group.
In this dummy dataset, only the Group variable has
non-missing values.
Add this dummy dataset to your original dataset.
21. Solution # 1. Use Dummy Data
*- Create a DUMMY dataset so that all treatment groups are represented. -*;
data DUMMY;
  length TRT $7.;
  TRT='Drug A';  MONTH=.; SCORE=.; output;
  TRT='Drug B';  MONTH=.; SCORE=.; output;
  TRT='Drug C';  MONTH=.; SCORE=.; output;
  TRT='Placebo'; MONTH=.; SCORE=.; output;
run;
*- Add the DUMMY dataset to the original DRUG dataset without Drug B. -*;
data DRUG_WT_DUMMY;
  set DUMMY
      DRUG (where=(TRT ne 'Drug B'));
run;
Note: The DUMMY data must come before the DRUG data to retain the right order.
*- Generate Figure 3A. -*;
ods listing style=STYLES.MYSTYLE;
proc sgplot data=DRUG_WT_DUMMY;
  series y=SCORE x=MONTH / group=TRT markers;
  keylegend / noborder title='Treatment';
run;
22. Solution # 1. Use Dummy Data
Since Drug B is not supposed to be in this figure,
Drug B should NOT be in the legend!!!
Figure 3A: The Add Dummy Data approach ensures line consistency;
however, the excluded treatment group Drug B remains in the legend.
24. Solution # 2. Re-Map Line Attributes
Create a LOOKUP dataset with a row for every possible
GROUP variable value and its line attributes.
From the LOOKUP dataset, get the line attributes for the GROUP
variable values present in the current source dataset.
Create a new template by re-defining the
style GraphDatan statements.
Apply the newly defined style to PROC SGPLOT.
25. Solution # 2. RE-MAP LINE ATTRIBUTE
*- Establish a LOOKUP dictionary for line attribute for each drug. -*;
data LOOKUP;
input ID & $7. LINESTYLE : 3. COLOR : $6. SYMBOL : $12.;
cards;
Drug A 1 red circle
Drug B 2 blue square
Drug C 3 green diamond
Placebo 4 orange circlefilled
;
run;
26. Solution # 2. RE-MAP LINE ATTRIBUTES
*- Define the LINEATTR macro to create a Style Template with line attributes. -*;
%macro lineattr ( indata=                      /* Input dataset name.            */
                , lookup=LOOKUP                /* LOOKUP dataset.                */
                , idvar=ID                     /* ID variable in LOOKUP dataset. */
                , groupvar=TRT                 /* Group variable.                */
                , style=STYLES.MYSTYLE2        /* Style name to be output.       */
                , parent_style=STYLES.MYSTYLE  /* Parent style.                  */
                ) / des='To create line attributes in a Style Template';
%local i;
%*- Get the line attributes for the treatment groups from LOOKUP. -*;
%*- Note: only line attributes for treatment groups present in &indata are kept. -*;
proc sql noprint;
  create table LINEATTR as
  select distinct L.*, D.&GROUPVAR
  from &indata D
       left join
       &LOOKUP L
       on D.&GROUPVAR=L.&IDVAR
  order by D.&GROUPVAR;
quit;
27. Solution # 2. RE-MAP LINE ATTRIBUTES
%*- Assign values for the line style, color, and symbol of each treatment. -*;
data _null_;
  set LINEATTR;
  call symput('LINESTYLE' || strip(put(_n_, 3.)), strip(put(LINESTYLE, 3.)));
  call symput('COLOR'     || strip(put(_n_, 3.)), strip(COLOR));
  call symput('SYMBOL'    || strip(put(_n_, 3.)), strip(SYMBOL));
run;
%*- Redefine your line attributes in a new style. -*;
proc template;
  define style &STYLE;
    parent=&PARENT_STYLE;
    %do i=1 %to &sqlobs;
      style GraphData&i from GraphData&i / linestyle=&&LINESTYLE&i
        contrastcolor=&&COLOR&i markersymbol="&&SYMBOL&i";
    %end;
  end;
run;
%mend lineattr;
28. Solution # 2. RE-MAP LINE ATTRIBUTES
*- Create a dataset based on DRUG, excluding Drug B. -*;
data DRUG_noB;
  set DRUG (where=(TRT ne 'Drug B'));
run;
*- Invoke %lineattr to create STYLES.MYSTYLE2 with the correct line styles. -*;
%lineattr(indata=DRUG_noB,
          style=STYLES.MYSTYLE2);
ods listing style=STYLES.MYSTYLE2;
title h=20pt 'Score Over Time by Treatment';
proc sgplot data=DRUG_noB;
  series y=SCORE x=MONTH / group=TRT markers;
  keylegend / noborder title='Treatment: ';
run;
29. This is the figure we want!
Figure 3B: The 3 remaining treatments are represented by lines consistent in
appearance with those seen in Figure 1B. The legend reflects only the lines
in the figure.
31. WHAT IF DRUG B IS NO LONGER IN A STUDY?
Comparison of legends for Drug A, Drug B, Drug C, and Placebo:
the SGPLOT default after dropping Drug B vs. the legend we want.
NOTE: Line attributes before the deleted group, namely DRUG A,
remain intact.
32. WHAT IF PLACEBO IS NOT IN A STUDY?
Comparison of legends for Drug A, Drug B, Drug C, and Placebo:
the SGPLOT default after dropping Placebo vs. the legend we want.
NOTE: Line attributes are consistent.
Line attributes before the deleted group remain intact.
33. Solution # 3. RE-CODE INTERNAL VALUES OF GROUP VARIABLE
So if one knows ahead of time which treatments are going to be
dropped, one can make those treatments sort last
in terms of their internal values.
The line attributes for the remaining treatment groups will
remain intact.
34. Solution # 3. RE-CODE INTERNAL VALUES OF GROUP VARIABLE
*- Define formats for re-coding the group variable, TRT. -*;
proc format;
  invalue INTRTF 'Drug A' = 1
                 'Drug B' = 4
                 'Drug C' = 2
                 'Placebo'= 3;
  value TRTF 1='Drug A'
             2='Drug C'
             3='Placebo'
             4='Drug B';
run;
Note that Drug B is now assigned to the largest value!
*- Re-code the value from TRT to TRT2 so that treatment groups to be -*;
*- dropped have higher internal values and will be listed last. -*;
data DRUG_RECODED;
  set DRUG (where=(TRT ne 'Drug B'));
  TRT2=input(TRT, INTRTF.);
  label TRT2='Treatment: ';
  format TRT2 TRTF.;
run;
*- Sort the data based on the internal value of the re-coded variable TRT2. -*;
proc sort data=DRUG_RECODED;
  by TRT2;
run;
35. Solution # 3. RE-CODE INTERNAL VALUES OF GROUP VARIABLE
*- Re-order the line attributes to match the internal values of the group variable TRT2. -*;
proc template;
  define style STYLES.MYSTYLE3;
    parent=STYLES.MYSTYLE;
    style GraphData1 from GraphData1 / linestyle=1 contrastcolor=RED
      markersymbol='circle';
    style GraphData2 from GraphData2 / linestyle=3 contrastcolor=GREEN
      markersymbol='diamond';
    style GraphData3 from GraphData3 / linestyle=4 contrastcolor=ORANGE
      markersymbol='circlefilled';
    style GraphData4 from GraphData4 / linestyle=2 contrastcolor=BLUE
      markersymbol='square';
  end;
run;
Note: The line attributes for Drug B are now the last in the list.
*- Generate Figure 3B, which excludes Drug B, with the re-coded variable TRT2. -*;
ods listing style=STYLES.MYSTYLE3;
title 'Score Over Time by Treatment';
proc sgplot data=DRUG_RECODED;
  series y=SCORE x=MONTH / group=TRT2 markers;
  keylegend / noborder title='Treatment: ';
run;
36. SOLUTION # 4
USE INDEX OPTION IN
GRAPH TEMPLATE LANGUAGE (GTL)
37. PROC SGPLOT vs Graph Template Language (GTL)
PROC SGPLOT: single-cell plots; quality graphics; some customization;
quick and easy to learn and use.
GTL: multiple-cell plots; higher-quality graphics; more customization;
requires significant time and effort to learn.
38. Use of the INDEX Option in GTL Can Ensure Line Consistency.
What if you have yet to master GTL?
If you have SAS 9.3, you can use an SG Attribute Map (Solution #5).
If not, and if you only need a simple plot that can be produced by an SG
procedure, you can use the TMPLOUT= option to get the code that creates the
template, and then render that template.
Example:
proc sgplot data=DRUG
            tmplout="c:\alice\wuss_2011_graph\line_plot.sas";
  series y=SCORE x=MONTH / group=TRT markers;
  keylegend / noborder title='Treatment: ';
run;
39. Solution # 4. USING INDEX IN GTL
*- Create a variable IDVAL to associate the group values with the -*;
*- line attributes defined in STYLES.MYSTYLE. -*;
data DRUG_IDVAL;
  set DRUG (where=(TRT ne 'Drug B'));
  if TRT='Drug A' then IDVAL=1;
  else if TRT='Drug B' then IDVAL=2;
  else if TRT='Drug C' then IDVAL=3;
  else if TRT='Placebo' then IDVAL=4;
run;
*- The template SGPLOT was obtained from the TMPLOUT= option in PROC SGPLOT. -*;
*- The INDEX= option in the SERIESPLOT statement has been added manually. -*;
proc template;
  define statgraph SGPLOT;
    begingraph;
      EntryTitle "Score Over Time by Treatment" / textattrs=(size=2pct);
      layout overlay;
        SeriesPlot x=MONTH y=SCORE /
          primary=true group=TRT index=IDVAL /* Added manually. */
          display=(markers)
          LegendLabel="Average Score" Name="SERIES";
        DiscreteLegend "SERIES" / Location=Outside Title="Treatment: " Border=false;
      endlayout;
    endgraph;
  end;
run;
40. Solution # 4. USING INDEX IN GTL
*- Use PROC SGRENDER to render the graph from the stored graphics -*;
*- template SGPLOT. -*;
ods listing style=STYLES.MYSTYLE;
proc sgrender data=DRUG_IDVAL template=SGPLOT;
run;
41. SOLUTION # 5
USE OF SG ATTRIBUTE MAP
(Available in SAS Version 9.3)
42. Solution # 5. USING the SG Attribute Map Approach
*- Create an Attribute Map dataset named MYATTRMAP. -*;
data MYATTRMAP;
  retain ID "MYID";
  input VALUE & $7. LINEPATTERN : 3. LINECOLOR : $6.
        MARKERSYMBOL : $12. MARKERCOLOR : $6.;
cards;
Drug A 1 red circle red
Drug B 2 blue square blue
Drug C 3 green diamond green
Placebo 4 orange circlefilled orange
;
run;
*- Reference MYATTRMAP to define the line attributes. -*;
ods listing;
title 'Score Over Time by Treatment';
proc sgplot data=DRUG dattrmap=MYATTRMAP;
  series y=SCORE x=MONTH / group=TRT attrid=MYID
    markers;
  keylegend / noborder title="Treatment: ";
run;
Note: LINEPATTERN and MARKERCOLOR are the attribute map column names
recognized by the SG procedures.
44. A Panel of Graph Cells Produced by PROC SGPANEL
The PANELBY variables (BMI, SUBJID) control the cell order.
Figure 4A: Lines are NOT consistent with the desirable line attributes.
45. THE SGPANEL PROCEDURE
Code for Figure 4A:
ods listing style=STYLES.MYSTYLE;
title h=20pt 'HbA1c (%) by Month';
proc sgpanel data=HBA1C;
  panelby BMI SUBJID / layout=panel columns=3 colheader=top
    rowheaderpos=right uniscale=row start=topleft spacing=5;
  series x=MONTH y=HBA1C / group=TRT name='HBA1C'
    lineattrs=(pattern=solid thickness=3) markers;
  keylegend 'HBA1C' / noborder position=bottom title="Treatment: ";
  colaxis values=(0 3 6 9 12) label='Month';
  rowaxis label='HbA1c (%)';
run;
Unlike SGPLOT, in SGPANEL the default order is controlled by
the order of the PANELBY variables, and then by the order of the GROUP
variable within the same panel.
Here, we can think of each cell as having 3 missing treatment groups.
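Because each cell may be missing some treatment groups, the attribute-map approach of Solution #5 also applies here. A minimal sketch, assuming SAS 9.3's DATTRMAP= option on PROC SGPANEL and the MYATTRMAP dataset from Solution #5 (the HBA1C dataset and its variables are as in the example above):

```sas
*- Sketch: fix line attributes per treatment in SGPANEL via an attribute map. -*;
ods listing;
title h=20pt 'HbA1c (%) by Month';
proc sgpanel data=HBA1C dattrmap=MYATTRMAP;
  panelby BMI SUBJID / layout=panel columns=3;
  series x=MONTH y=HBA1C / group=TRT attrid=MYID markers;
  keylegend / noborder position=bottom title="Treatment: ";
run;
```

Because the map ties each TRT value to fixed attributes, line colors stay consistent across cells regardless of which treatment groups appear in each cell.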
46. A Panel of Graph Cells Produced by PROC SGPANEL
Figure 4B: Lines are consistent with the desirable line attributes.
47. THE SGPANEL PROCEDURE
Code for Figure 4B (Using the INDEX Option in GTL):
ods listing style=STYLES.MYSTYLE;
title h=20pt 'HbA1c (%) by Month';
proc sgpanel data=HBA1C tmplout="c:\wuss_2011\Figure4B.sas";
  panelby BMI SUBJID / layout=panel columns=3 colheader=top
    rowheaderpos=right uniscale=row start=topleft spacing=5;
  series x=MONTH y=HBA1C / group=TRT name='HBA1C'
    lineattrs=(pattern=solid thickness=3) markers;
  keylegend 'HBA1C' / noborder position=bottom title="Treatment: ";
  colaxis values=(0 3 6 9 12) label='Month';
  rowaxis label='HbA1c (%)';
run;
49. ACKNOWLEDGEMENT
Art Carpenter, Caloxy
Shawn Hopkin, Seattle Genetics
Lelia McConnell, SAS Technical Support
Susan Morrison, SAS Technical Support
Marcia Surratt, SAS Technical Support