This presentation is about -
Working Under Change Management,
What is change management?,
Repository types using change management.
For more details, visit:
http://vibranttechnologies.co.in/sas-classes-in-mumbai.html
This presentation is about -
Overview of SAS 9 Business Intelligence Platform,
SAS Data Integration,
Study Business Intelligence,
Overview of Business Intelligence Information Consumers, navigating in SAS Data Integration Studio,
For more details, visit:
http://vibranttechnologies.co.in/sas-classes-in-mumbai.html
Video of Workshop - https://media.dlib.indiana.edu/media_objects/rj430941s
This is a workshop offered via the Social Science Research Center to help students and faculty become familiar with online collaborative writing using LaTeX and Overleaf.
Tool connectors in Warewolf are used to perform common tasks or data manipulation inside your service or microservice.
You do not need to go out of Warewolf and call a data connector to perform the task for you.
This is a short introductory course on Stata statistical software, version 9; the course still applies to later versions of Stata. The course duration was 9 hours. It was given at the Faculty of Economics and Political Science, Cairo University.
Learn how to navigate Stata’s graphical user interface, create log files, and import data from a variety of software packages. Includes tips for getting started with Stata including the creation and organization of do-files, examining descriptive statistics, and managing data and value labels. This workshop is designed for individuals who have little or no experience using Stata software.
Full workshop materials, including example data sets and .do files, are available at http://projects.iq.harvard.edu/rtc/event/introduction-stata
Introduces common data management techniques in Stata. Topics covered include basic data manipulation commands such as: recoding variables, creating new variables, working with missing data, and generating variables based on complex selection criteria, merging and collapsing data sets. Intended for users who have an introductory level of knowledge of Stata software.
All workshop materials including slides, do files, and example data sets can be downloaded from http://projects.iq.harvard.edu/rtc/event/data-management-stata
Provides an introduction to graphics in Stata. Topics include graphing principles, descriptive graphs, and post-estimation graphs. This is an introductory workshop appropriate for those with little experience with graphics in Stata. Intended for those with basic Stata skills.
All workshop materials including slides, do files, and example data sets can be downloaded from http://projects.iq.harvard.edu/rtc/event/graphing-stata
An introduction to SAS, one of the more frequently used statistical packages in business. With hands-on exercises, explore SAS's many features and learn how to import and manage datasets and run basic statistical analyses. This is an introductory workshop appropriate for those with little or no experience with SAS.
Complete workshop materials, including demo SAS programs, are available at http://projects.iq.harvard.edu/rtc/sas-intro
Sample Questions
The following sample questions are not inclusive and do not necessarily represent all of the types of
questions that comprise the exams. The questions are not designed to assess an individual's readiness to
take a certification exam.
SAS 9.4 Base Programming – Performance-based Exam
Practical Programming Questions:
Project 1:
This project will use the data set sashelp.shoes.
Write a SAS program that will:
• Read sashelp.shoes as input.
• Create the SAS data set work.sortedshoes.
• Sort the sashelp.shoes data set:
o First by variable product in descending order.
o Second by variable sales in ascending order.
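The sort described above can be written with PROC SORT; a minimal sketch (variable names Product and Sales as given in sashelp.shoes):

```sas
/* Sort sashelp.shoes: Product descending, then Sales ascending */
proc sort data=sashelp.shoes out=work.sortedshoes;
   by descending product sales;
run;
```

Printing work.sortedshoes with PROC PRINT then lets you read off the values at specific observation numbers.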
Run the program and answer the following questions:
Question 1: What is the value of the product variable in observation 148?
Answer: Slipper
Question 2: What is the value of the Region variable in observation 130?
Answer: Pacific
Project 2:
This project will use the data set sashelp.shoes.
Write a SAS program that will:
• Read sashelp.shoes as input.
• Create a new SAS data set, work.shoerange.
• Create a new character variable SalesRange that will be used to categorize the observations into
three groups.
• Set the value of SalesRange to the following:
o Lower when Sales are less than $100,000.
o Middle when Sales are between $100,000 and $200,000, inclusively.
o Upper when Sales are above $200,000.
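One possible sketch of the steps above (assuming the numeric Sales variable from sashelp.shoes; your solution may differ in style):

```sas
data work.shoerange;
   set sashelp.shoes;
   length SalesRange $6;                       /* character variable, longest value is 'Middle' */
   if sales lt 100000 then SalesRange = 'Lower';
   else if sales le 200000 then SalesRange = 'Middle';  /* 100,000-200,000 inclusive */
   else SalesRange = 'Upper';
run;

/* PROC FREQ counts each group; PROC MEANS gives group means */
proc freq data=work.shoerange;
   tables SalesRange;
run;
proc means data=work.shoerange mean;
   class SalesRange;
   var sales;
run;
```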
Run the program, then use additional SAS procedures to answer the following questions:
Question 3: How many observations are classified into the “Lower” group?
Answer: 288
Question 4: What is the mean value of the Sales variable for observations in the “Middle” group? Round
your answer to the nearest whole number.
Answer: 135127
Project 3:
This project will work with the following program:
data work.lowchol work.highchol;
set sashelp.heart;
if cholesterol lt 200 output work.lowchol;
if cholesterol ge 200 output work.highchol;
if cholesterol is missing output work.misschol;
run;
This program is intended to:
• Divide the observations of sashelp.heart into three data sets, work.highchol, work.lowchol, and
work.misschol
• Only observations with cholesterol below 200 should be in the work.lowchol data set.
• Only observations with cholesterol that is 200 and above should be in the work.highchol data set.
• Observations with missing cholesterol values should only be in the work.misschol data set.
Fix the errors in the above program. There may be multiple errors in the program. Errors may be syntax
errors, program structure errors, or logic errors. In the case of logic errors, the program may not
produce an error in the log.
After fixing all of the errors in the program, answer the following questions:
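One way to repair the program consistent with the stated intent (a sketch; other fixes are possible). Note the missing work.misschol in the DATA statement, the missing THEN keywords, the invalid "is missing" test, and the logic error that a missing cholesterol value compares as less than 200 in SAS and would otherwise fall into work.lowchol:

```sas
data work.lowchol work.highchol work.misschol;
   set sashelp.heart;
   if missing(cholesterol) then output work.misschol;   /* handle missing first */
   else if cholesterol lt 200 then output work.lowchol;
   else output work.highchol;
run;
```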
Question 5: How many observations are in the work.highchol data set?
Answer: 3652
Question 6: How many observations are in the work.lowchol data set?
Answer: 1405
Vibrant Technologies is headquartered in Mumbai, India. We are the best Business Analyst training provider in Navi Mumbai, providing live projects to students. We also provide corporate training. According to our students and corporate clients, we offer the best Business Analyst classes in Mumbai.
This presentation is about -
History of ITIL,
ITIL Qualification scheme,
Introduction to ITIL,
For more details visit -
http://vibranttechnologies.co.in/itil-classes-in-mumbai.html
This presentation is about -
Create & Manage Users,
Set organization-wide defaults,
Learn about record access,
Create the role hierarchy,
Learn about role transfer & mass transfer functionality,
Profiles, Login History,
For more details you can visit -
http://vibranttechnologies.co.in/salesforce-classes-in-mumbai.html
This presentation is about -
Based on the as-a-service model,
• SAAS (Software as a Service),
• PAAS (Platform as a Service),
• IAAS (Infrastructure as a Service),
Based on deployment or access model,
• Public Cloud,
• Private Cloud,
• Hybrid Cloud,
For more details you can visit -
http://vibranttechnologies.co.in/salesforce-classes-in-mumbai.html
This presentation is about -
Introduction to Cloud Computing,
Evolution of Cloud Computing,
Comparison with other computing techniques,
Key characteristics of cloud computing,
Advantages/Disadvantages,
For more details you can visit -
http://vibranttechnologies.co.in/salesforce-classes-in-mumbai.html
This presentation is about -
Designing the Data Mart,
Planning a data warehouse,
Course data for the Orion Star company,
Orion Star data models,
For more details, visit:
http://vibranttechnologies.co.in/sas-classes-in-mumbai.html
What is dimensional modeling?,
Difference between ER modeling and dimensional modeling,
What is a Dimension?,
What is a Fact?,
Star Schema,
Snowflake Schema,
Difference between Star and Snowflake schema,
Fact Table,
Different types of facts,
Dimension Tables,
Factless Fact Table,
Conformed Dimensions,
Non-conformed Dimensions,
Junk Dimensions,
Monster Dimensions,
Degenerate Dimensions,
What are Slowly Changing Dimensions?,
Different types of SCDs,
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Connector Corner: Automate dynamic content and events by pushing a button
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Key Trends Shaping the Future of Infrastructure
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Securing your Kubernetes cluster: a step-by-step guide to success!
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
UiPath Test Automation using UiPath Test Suite series, part 3
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024
As AI technology pushes into IT, I wondered, as an "infrastructure container Kubernetes guy," how this fancy AI technology gets managed from an infrastructure-operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
The Art of the Pitch: WordPress Relationships and Sales
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that lead to closing the deal.
5. Three Ways to Work
• Point and Click
• Command Line
• Programs (the best way)
6. Outline
• Sermon on SYNTAX
• Cleaning data and creating variables
• Never overwrite original data
• Practices that will help you keep track of your work
• Safeguarding your work
7. A Sermon on SYNTAX
• Command line and Point and Click
– Advantages:
• Quick, may require less learning
– Disadvantages:
• Takes longer the second time – you must wade through the point and click menu rather than just change a word
• You do not have a record of what you have done
9. You can point and click to get files, create variables, change variable values, and do analysis, and end up without a record of what you have done. You will be sorry.
10. Or, you can use Point and Click as an aid as you write programs. You can copy syntax created by Point and Click into your program. In SPSS, programs are written in a Syntax Window and have the extension .sps when you save them.
11. You can modify SPSS defaults so that commands will be reflected in the log. This allows you to copy commands from your log into your program file. These changes also make debugging easier.
12. You will find information about how to modify SPSS at the following URL.
16. With R you can point and click, issue commands on the command line, or create .R files. ".R" files store your programs. Results from point and click are reflected so you can copy them into your program.
18. SAS allows some point and click, but immediately offers an editor where you can write your programs. SAS programs end with the .sas extension and are text files. SAS features an enhanced editor with cool color coding that makes it easier to write and debug programs.
20. Scenario 1:
You get a data set and find errors in it. You change the values in the data window. You save it with point and click, over-writing your original data. Later you try to recall what changes you made, when, and why. Of course you can’t. You can’t even be sure that you made the “corrections” for the proper cases. You can’t look back at older data sets to confirm what you did. You sit there sweating.
21. Scenario 2 (same as Scenario 1):
You save it with point and click, over-writing your original data, and while you are saving the file,
1) your computer goes down because of a power outage, OR
2) there is a brief interruption in the network.
HALF OF YOUR DATA SET IS LOST. You cry.
22. Scenario 3:
You get a data set and find errors in it. You write a program that:
1) gets the original data,
2) makes changes in values with SYNTAX,
3) includes comments about the changes,
4) saves the new file under a different name.
Science marches forward.
23. Creating Variables and Recoding is not the same as Cleaning Data
• You always want clean data
• You may not always want the recoded or created variables
• Make new variables, but keep the old ones (don’t over-write). Use the original to check the new.
24. Examples of Recoding/Creating
• Creating a series of dummies from a categorical variable
• Creating an index from a series of scale variables
• Creating a dichotomous or categorical variable from a continuous variable
• Always consider MISSING VALUES
25. Sample SPSS Program
* CleanNew.sps .
* 10/10/05 created dummy for male .
Get file = 'dirty.sav' .
* Cleaning data, PJG, looked at survey form, educ for ID=1 should be 16, 10/9/05 .
If id = 1 educ = 16 .
* Create a dummy variable from "gender" .
If gender = 'm' male = 1 .
If gender = 'f' male = 0 .
If gender = '' male = -9 .
Missing values male (-9) .
Variable labels male 'Male' .
Value labels male 1 'Male' 0 'Female' .
Save outfile = 'CleanNew.sav' / drop = gender .
26. Summary for Cleaning and Creating Variables
• Use syntax (programs) to create and clean variables
• Document when and why in your programs
• Save a new file – do not over-write the old
27. It may be months between the time that you finish a paper, submit it, and get to revise it for publication.
28. What you will need to know:
• The origin of your variables: what is the source for each variable, and how were they created?
• What programs created your final tables?
• What program files created the file you used for your final tables?
29. Create a Directory for the Project
• For example, c:\MA_Thesis
• Store all of the programs and data in that directory and its subdirectories
30. Naming Conventions
• For every data file you have, you should have a program file with a corresponding name.
• When you have finished your paper, create a program file for each table. For example: table1.sas, table2.sas
31. Document your work
• Write comments in your program.
• Put a file in your directory called a_note, readme, or something similar that includes a brief description of the project and important information.
32. Safeguarding your work
• Multiple backups – not all stored in the same basket
• Worry about the future:
– Keep up with formats (cards, tapes, floppy disks, CDs, what next?)
– Store in portable formats
33. For more information, visit:
http://vibranttechnologies.co.in/sas-classes-in-mumbai.html
Thank You !!!