INFORMATIVE ESSAY
The purpose of the Informative Essay assignment is to choose a job or task that you know how to do and then write an Informative Essay of at least 2 and at most 3 full pages teaching the reader how to do that job or task. You will follow the organization techniques explained in Unit 6.
Here are the details:
1. Read the Lecture Notes in Unit 6. You may also find Chapter 10.5 of our text, on Process Analysis, helpful. The lecture notes are the most important resource for this assignment, but here is a link to that chapter that you may read in addition:
https://open.lib.umn.edu/writingforsuccess/chapter/10-5-process-analysis/
2. Choose your topic, that is, the job or task you want to teach. As the notes explain, this should be a job or task that you already know how to do, and it should be something you can do well. At this point, think about your audience (reader). Will your reader need any knowledge or experience to do this job or task, or will you write these instructions for a general reader where no experience is required to perform the job?
3. Plan your outline to organize this essay. Unit 6 notes offer advice on this organization process. Be sure to include an introductory paragraph that has the four main points presented in the lecture notes.
4. Write the essay. It must be at least 2 and no more than 3 FULL pages long. Use the MLA formatting you used in the previous essays from Units 3, 4, and 5.
5. Be sure to include a title for your essay.
6. After writing the essay, be sure to take time to read it several times for revision and editing. It would be helpful to have at least one other person proofread it as well before submitting the assignment.
Quiz2
# comments start with #
# to quit, type q()
# two steps to install and load any package
#install.packages("rattle")
#library(rattle)
setwd("D:/AJITH/CUMBERLANDS/Ph.D/SEMESTER 3/Data Science & Big Data Analy (ITS-836-51)/RStudio/Week2")
getwd()
x <- 3 # x is a vector of length 1
print(x)
v1 <- c(2,4,6,8,10)
print(v1)
print(v1[3])
v <- 1:10 # creates a vector of the integers 1 through 10
print(v)
print(v[6])
# Import test data
test<-read.csv("CVEs.csv")
test1 <- read.csv("CVEs.csv", sep=",")                 # sep="," is already the default for read.csv
test2 <- read.table("CVEs.csv", sep=",", header=TRUE)  # read.table defaults to header=FALSE
write.csv(test2, file="out.csv")
# Write CSV in R
write.table(test1, file = "out1.csv",row.names=TRUE, na="",col.names=TRUE, sep=",")
head(test)
tail(test)
summary(test)
head_rows <- head(test)  # avoid naming variables "head"/"tail": those names mask the base functions
tail_rows <- tail(test)
cor(test$X, test$index)
sd(test$index)
var(test$index)
plot(test$index)
hist(test$index)
str(test$index)
quit()
Quiz3
setwd("C:/Users/ialsmadi/Desktop/University_of_Cumberlands/Lectures/Week2/RScripts")
getwd()
# Import test data
data<-read.csv("yearly_sales.csv")
# A 5-number summary is a set of 5 descriptive statistics for summarizing a continuous univariate data set.
# It consists of the minimum, first quartile (Q1), median, third quartile (Q3), and maximum.
Talk by Brendan Gregg for USENIX LISA 2019: Linux Systems Performance. Abstract: "
Systems performance is an effective discipline for performance analysis and tuning, and can help you find performance wins for your applications and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas of Linux systems performance: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (Ftrace, bcc/BPF, and bpftrace/BPF), and much advice about what is and isn't important to learn. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
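The command-line recipes the abstract mentions can be sketched as a first-pass checklist. This is an illustrative sketch, not the talk's own script; vmstat, mpstat, and iostat assume the procps and sysstat packages are installed.

```shell
#!/bin/sh
# First-pass Linux performance checklist (sketch; assumes procps and sysstat are installed)
uptime              # load averages: 1/5/15-minute trend
free -m             # memory and swap usage in MB
vmstat 1 5          # run queue, memory, swap, and CPU summary; five one-second samples
mpstat -P ALL 1 5   # per-CPU utilization, to spot single-CPU saturation
iostat -xz 1 5      # per-device I/O latency and utilization
```

Each command is cheap and read-only, so the whole list can be run on a live system before reaching for heavier profiling or tracing tools.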
pg_proctab: Accessing System Stats in PostgreSQL (Mark Wong)
pg_proctab is a collection of PostgreSQL stored functions that provide access to the operating system process table using SQL. We'll show you which functions are available and where they collect the data, and give examples of their use to collect processor and I/O statistics on SQL queries. These stored functions currently only work on Linux-based systems.
Beyond PHP - it's not (just) about the code (Wim Godden)
Most PHP developers focus on writing code. But creating Web applications is about much more than just writing PHP. Take a step outside the PHP cocoon and into the big PHP ecosphere to find out how small code changes can make a world of difference on servers and network. This talk is an eye-opener for developers who spend over 80% of their time coding, debugging and testing.
Learn Yara-L from Basic to Advance
Cassandra Performance Tuning Like You've Been Doing It for Ten Years (Jon Haddad)
Slides from my performance talk at the 2023 Cassandra Summit. Here I share my tools and process for improving Cassandra's performance. We look at the OODA loop, the USE method, high-level observability tools, and system tools such as flame graphs and bcc-tools (eBPF). Using the example of giving more memory to Cassandra, we explore how to leverage async-profiler and bcc-tools to generate CPU flame graphs and histograms of I/O performance. We see how identifying a performance bottleneck, such as time spent in decompression, can guide us to solving the right problems - in this case, resizing compression buffers.
Dok Talks #115 - What More Can I Learn From My OpenTelemetry Traces? (DoKC)
https://go.dok.community/slack
https://dok.community/
ABSTRACT OF THE TALK
Of the three observability data types supported by OpenTelemetry (metrics, logs, and traces), traces have the most untapped potential. Tracing gives users insight into how requests are processed by microservices in a modern, cloud-native architecture.
Jaeger and Grafana can visualize a single trace, showing how an individual request traversed your entire system. This helps for distributed debugging and analysis, but using traces only this way is limiting.
What if you stored tracing data in a SQL database? You could ask global questions about your system. You could find slow communication paths, where the error rate spiked since the last deployment, or where the request rate suddenly dropped. Thus, tracing can be used proactively to help you spot issues before your customers do.
This talk will show you how to do all the above by ingesting OpenTelemetry traces into a PostgreSQL/TimescaleDB database, and building custom dashboards using SQL to make the most out of your tracing data.
BIO
John Pruitt is a software engineer at Timescale. His work focuses on database/SQL development for the Promscale open-source observability tool, and currently on adding support for OpenTelemetry tracing. Prior to joining Timescale, John grew the DBA team at Shipt. Most of the balance of his career was spent building custom time-series applications in the energy industry and leading data warehousing efforts at regional banks.
KEY TAKE-AWAYS FROM THE TALK
- What is distributed tracing
- Why viewing individual traces is of limited value
- How SQL can be used to analyze and visualize traces
- What insights can be unlocked using SQL against traces
2. ac Command
Prints statistics about users' connect time.
Syntax: ac [ -d | --daily-totals ] [ -y | --print-year ] [ -p | --individual-totals ] [ people ] [ -f | --file filename ] [ -a | --all-days ] [ --complain ] [ --reboots ] [ --supplants ] [ --timewarps ] [ --compatibility ] [ --tw-leniency num ] [ --tw-suspicious num ] [ -z | --print-zeros ] [ --debug ] [ -V | --version ] [ -h | --help ]
Example: ac -d -y - would display results similar to:
May 2 2010 total 0.21
May 3 2010 total 3.04
May 4 2010 total 10.95
May 5 2010 total 10.49
May 6 2010 total 13.86
May 7 2010 total 6.79
May 9 2010 total 2.22
May 10 2010 total 11.90
May 11 2010 total 9.31
May 12 2010 total 0.82
May 13 2010 total 10.00
May 14 2010 total 29.13
May 15 2010 total 9.47
May 23 2010 total 0.11
Today total 3.37
3. chage Command
Syntax: chage [options] user
The chage command changes the number of days between password changes and the date of the last password change. This information is used by the system to determine when a user must change his or her password.
Example: chage -l fp060 - lists the current password aging information for user fp060.
4. logname Command
Syntax: logname
Returns the name of the user currently logged in.
Example:
fp060@pbs060:~$ logname
fp060
5. passwd Command
Syntax: passwd [user]
Allows you to change your password (or, as root, another user's password).
Example:
fp060@pbs060:~$ passwd fp060
Changing password for fp060.
(current) UNIX password:
Enter new UNIX password:
Retype new UNIX password:
6. id Command
Syntax: id [option] [user]
Prints the numeric user ID, group ID, and group memberships of the current (or specified) user.
Example:
fp060@pbs060:~$ id
uid=1000(fp060) gid=1000(fp060) groups=4(adm),20(dialout),24(cdrom),46(plugdev),104(lpadmin),115(admin),120(sambashare),1000(fp060)
7. last Command
Syntax: last
Shows a listing of last logged-in users.
Example:
fp060@pbs060:~$ last
fp060 pts/1 :0.0 Mon May 24 02:08 - 02:08 (00:00)
fp060 pts/0 :0.0 Sun May 23 23:59 still logged in
fp060 tty7 :0 Sun May 23 23:53 still logged in
reboot system boot 2.6.31-14-generi Sun May 23 23:52 - 02:14 (02:22)
fp060 pts/0 :0.0 Sat May 15 02:26 - 02:36 (00:09)
fp060 pts/5 :0.0 Sat May 15 00:00 - 00:22 (00:22)
fp060 pts/5 :0.0 Fri May 14 23:53 - 23:54 (00:00)
fp060 pts/5 :0.0 Fri May 14 23:50 - 23:51 (00:00)
fp060 pts/6 :0.0 Fri May 14 23:26 - 23:44 (00:17)
fp060 pts/5 :0.0 Fri May 14 23:21 - 23:44 (00:22)
fp060 pts/4 :0.0 Fri May 14 22:56 - 00:22 (01:26)
fp060 pts/3 :0.0 Fri May 14 22:42 - 00:22 (01:40)
fp060 pts/2 :0.0 Fri May 14 22:39 - 00:22 (01:43)
fp060 pts/1 :0.0 Fri May 14 22:35 - 00:22 (01:47)
fp060 pts/0 :0.0 Fri May 14 22:22 - 00:22 (02:00)
fp060 tty7 :0 Fri May 14 22:22 - down (08:39)
reboot system boot 2.6.31-14-generi Fri May 14 22:21 - 07:02 (08:40)
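The identity commands above are often combined in shell scripts. A minimal sketch (the root check and the fallback from logname to id -un are illustrative additions, not from the original notes):

```shell
#!/bin/sh
# Sketch: branch on the invoking user's identity using id(1) and logname(1)
uid=$(id -u)                           # numeric user ID; 0 means root
user=$(logname 2>/dev/null || id -un)  # logname can fail when there is no controlling terminal
if [ "$uid" -eq 0 ]; then
    echo "running as root"
else
    echo "running as regular user $user (uid $uid)"
fi
```

The fallback matters in cron jobs and containers, where logname has no login session to report and id -un still returns the effective user name.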