The WHERE expression allows filters to be declared on SAS System data sets. This presentation illustrates some of its uses in the DATA step and the SAS macro language.
1. Introduction to Where Expressions
Mark Tabladillo, Ph.D.
Software Developer, MarkTab Consulting
Associate Faculty, University of Phoenix
January 30, 2007
2. Introduction
• WHERE expressions allow for processing subsets of observations
• WHERE expressions can be used in the DATA step or with PROC (procedure) statements
• This presentation will contain a series of features and examples of the WHERE expression
• We end with some intensive macros
3. WHERE-expression Processing
WHERE expression
• Enables us to conditionally select a subset of observations, so that SAS processes only the observations that meet a set of specified conditions.
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000999253.htm
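For example, a minimal sketch using the SASHELP.CLASS sample table that ships with SAS (any data set with a numeric variable would serve): the WHERE statement limits the procedure to the observations that satisfy the condition.

/* Print only the students older than 13; other observations are never processed */
proc print data=sashelp.class;
  where age > 13;
run;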
4. Work Sales Dataset
Work.Sales
data work.sales (drop=i randomState);
  length state $2 sales 8 randomState 3;
  do i = 1 to 2500;
    randomState = round(rand('gaussian',3,1)+0.5);
    if randomState in (1,2,3,4,5) then do;
      select(randomState);
        when(1) state='TN';
        when(2) state='AL';
        when(3) state='GA';
        when(4) state='FL';
        when(5) state='MS';
      end;
      sales = int(rand('gaussian',1000000,500000));
      output work.sales;
    end;
  end;
run;
5. Data Set Option or Statement
data work.highSales;
  set work.sales (where=(sales>1500000));
run;

data work.highSales;
  set work.sales;
  where sales>1500000;
run;

proc means data=work.sales;
  where sales>1500000;
run;
6. Data Set Option or Statement
data work.lowSales;
  set work.sales (where=(sales<0));
run;

data work.lowSales;
  set work.sales;
  where sales<0;
run;

proc means data=work.sales (where=(sales<0));
run;
7. Multiple Comparisons
data work.highFloridaSales;
  set work.sales (where=(sales>1500000 and state = 'FL'));
run;

data work.highFloridaSales;
  set work.sales;
  where sales>1500000 and state = 'FL';
run;

proc freq data=work.sales;
  tables state;
  where sales>1500000 and state = 'FL';
run;
8. SAS Functions
data work.highFloridaSales;
  set work.sales (where=(sales>1500000 and substr(state,1,1) = 'F'));
run;

data work.highFloridaSales;
  set work.sales;
  where sales>1500000 and substr(state,1,1) = 'F';
run;

proc means data=work.sales;
  where sales>1500000 and substr(state,1,1) = 'F';
run;
9. Comparison Operators
Priority   Order of Evaluation   Symbols   Mnemonic Equivalent
Group I    right to left         **
                                 +
                                 -
                                 ^ ¬ ~     NOT
                                 ><        MIN
                                 <>        MAX
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000780367.htm
10. Comparison Operators
Priority    Order of Evaluation   Symbols    Mnemonic Equivalent
Group II    left to right         * /
Group III   left to right         + -
Group IV    left to right         || ¦¦ !!
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000780367.htm
11. Comparison Operators
Priority   Order of Evaluation   Symbols   Mnemonic Equivalent
Group V    left to right         <         LT
                                 <=        LE
                                 =         EQ
                                 ¬=        NE
                                 >=        GE
                                 >         GT
                                           IN
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000780367.htm
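The symbols and their mnemonic equivalents are interchangeable inside a WHERE expression. A minimal sketch that restates the earlier Florida subset with mnemonics (the output data set name is illustrative):

/* GT and EQ behave exactly like > and = */
data work.highFloridaSalesMnemonic;
  set work.sales;
  where sales gt 1500000 and state eq 'FL';
run;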
12. Comparison Operators
Priority    Order of Evaluation   Symbols   Mnemonic Equivalent
Group VI    left to right         &         AND
Group VII   left to right         | ¦ !     OR
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000780367.htm
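Because AND (Group VI) is evaluated before OR (Group VII), parentheses can change which observations a compound WHERE expression selects. A minimal sketch against Work.Sales (the output data set names are illustrative):

/* Read as (state='GA' and sales>1500000) or sales<0 */
data work.precedenceDefault;
  set work.sales;
  where state = 'GA' and sales > 1500000 or sales < 0;
run;

/* Parentheses force the OR condition to be evaluated first */
data work.precedenceGrouped;
  set work.sales;
  where state = 'GA' and (sales > 1500000 or sales < 0);
run;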
13. Comparison Operators
data work.extremeNonGeorgia;
  set work.sales
    (where=((sales<0 | sales>1500000) and state in ('TN','AL','FL','MS')));
run;

data work.extremeNonGeorgia;
  set work.sales;
  where (sales<0 | sales>1500000) and state in ('TN','AL','FL','MS');
run;

data work.extremeNonGeorgia;
  set work.sales;
  where ^ (0 <= sales <= 1500000) & state ne 'GA';
run;
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000999255.htm
14. “Between And”
data work.boundedNonGeorgia;
  set work.sales (where=((sales between 1000000 and 1500000) &
    state in ('TN','AL','FL','MS')));
run;

data work.boundedNonGeorgia;
  set work.sales;
  where (sales between 1000000 and 1500000) &
    state in ('TN','AL','FL','MS');
run;
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000999255.htm
15. Contains ?
data work.LStates;
  set work.sales (where=(state contains 'L'));
run;

data work.LStates;
  set work.sales;
  where state contains 'L';
run;

data work.LStates;
  set work.sales;
  where state ? 'L';
run;
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000999255.htm
16. Is Null/Is Missing
data work.nullStates;
  set work.sales (where=(state is null));
run;

data work.missingStates;
  set work.sales (where=(state is missing));
run;

data work.nullSales;
  set work.sales;
  where sales is missing;
run;

data work.nonNullSales;
  set work.sales;
  where sales is not missing;
run;
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000999255.htm
17. Like
data work.likeL;
  set work.sales (where=(state like '%L'));
run;

data work.likeL;
  set work.sales (where=(state like "%L"));
run;

data work.likeL;
  set work.sales (where=(state like "%%L"));
run;

data work.notLikeG;
  set work.sales;
  where state not like 'G_';
run;
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000999255.htm
18. Sounds Like (Soundex)
data work.soundsLikeFill;
  set work.sales (where=(state =* 'fill'));
run;

data work.notSoundsLikeTin;
  set work.sales;
  where state not =* 'tin';
run;
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000999255.htm
19. “Same And”
data work.boundedNonGeorgia;
  set work.sales (where=((sales between 1000000 and 1500000) &
    state in ('TN','AL','FL','MS')));
run;

data work.boundedNonGeorgia;
  set work.sales;
  where (sales between 1000000 and 1500000);
  where same and state in ('TN','AL','FL','MS');
run;

data work.boundedNonGeorgia;
  set work.sales;
  where same and (sales between 1000000 and 1500000);
  where same and state in ('TN','AL','FL','MS');
run;
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a000999255.htm
20. WHERE vs. Subsetting IF
Use a WHERE expression to:
• Make the selection in a procedure without using a preceding DATA step
• Take advantage of the efficiency available with an indexed data set
• Use one of a group of special operators, such as BETWEEN-AND, CONTAINS, IS MISSING or IS NULL, LIKE, SAME-AND, and Sounds-Like
Use a subsetting IF to:
• Base the selection on anything other than a variable value that already exists in a SAS data set. For example, you can select a value that is read from raw data, or a value that is calculated or assigned during the course of the DATA step
• Make the selection at some point during a DATA step rather than at the beginning
• Execute the selection conditionally
http://support.sas.com/onlinedoc/913/getDoc/en/lrcon.hlp/a001000521.htm
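As one added illustration of the subsetting IF cases: a value calculated during the DATA step can be tested only with a subsetting IF, because a WHERE expression can reference only variables that already exist in the input data set (a minimal sketch; the commission variable is invented for the example).

data work.bigCommission;
  set work.sales;
  commission = sales * 0.05;   /* created during this DATA step */
  if commission > 60000;       /* subsetting IF works on the new variable */
  /* where commission > 60000; would not work, because commission is not a
     variable in work.sales */
run;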
21. Intensive Dataset Generation
%macro OurCentury();
  %local year interest;
  %do year = 2001 %to 2100;
    %let interest = %sysfunc(compound(1,.,0.05,%eval(&year.-2001)));
    data work.sales&year. (drop=i randomState index=(state sales));
      length state $2 stateName $20 sales 8 randomState 3;
      do i = 1 to 2500;
        randomState = round(56*rand('uniform')+0.5);
        if randomState <= 56 and randomState not in (3,7,14,43,52) then do;
          state = fipstate(randomState);
          stateName = fipnamel(randomState);
          sales = int(rand('gaussian',1000000*&interest.,500000*&interest.));
          output work.sales&year.;
        end;
      end;
    run;
  %end;
%mend OurCentury;
%OurCentury;
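Because each generated work.sales&year. data set is created with simple indexes on state and sales, a WHERE expression on either variable may be satisfied through the index rather than a full scan. A minimal sketch for one generated year (assuming the macro above has already run; the year 2050 is arbitrary):

/* The equality test on the indexed state variable is a candidate for index use */
proc means data=work.sales2050;
  where state = 'GA';
  var sales;
run;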
22. Year/State Datasets
%macro SalesByYearState();
  %local year stateCode state;
  %do year = 2001 %to 2100;
    %do stateCode = 1 %to 56;
      %if &stateCode. ne 3 & &stateCode. ne 7 & &stateCode. ne 14 &
          &stateCode. ne 43 & &stateCode. ne 52 %then %do;
        %let state = %sysfunc(fipstate(&stateCode.));
        data work.sales&year.&state.;
          set work.sales&year.;
          where state = "&state.";
        run;
      %end;
    %end;
  %end;
%mend SalesByYearState;
%SalesByYearState;
23. Year/State High Sales Datasets
%macro HighSalesByYearState();
  %local year stateCode state interest keepDataset;
  %do year = 2001 %to 2100;
    %let interest = %sysfunc(compound(1,.,0.05,%eval(&year.-2001)));
    %do stateCode = 1 %to 56;
      %if &stateCode. ne 3 & &stateCode. ne 7 & &stateCode. ne 14 & &stateCode. ne 43 &
          &stateCode. ne 52 %then %do;
        %let state = %sysfunc(fipstate(&stateCode.));
        %let keepDataset = 0;
        data work.sales&year.&state.high;
          set work.sales&year.;
          where state = "&state." and sales > 2000000*&interest.;
          call symput('keepDataset',left('1'));
        run;
        %if not(&keepDataset.) %then %do;
          proc datasets lib=work nolist;
            delete sales&year.&state.high;
          run; quit;
        %end;
      %end;
    %end;
  %end;
%mend HighSalesByYearState;
%HighSalesByYearState;
24. Conclusion
• The WHERE expression allows for efficient observation processing in the DATA step and the PROC statements
• The SAS System Documentation provides specific details on the syntax
• Using macros increases the processing power of WHERE expressions
25. Contact Information
• Mark Tabladillo
MarkTab Consulting
http://www.marktab.com/