The Manual Of Operations For Data Analysis
BUKIDNON STATE UNIVERSITY
Graduate Extension Studies
Surigao City

THE MANUAL OF OPERATIONS FOR DATA ANALYSIS

A REQUIREMENT OF THE COURSE
PA 212 – INFORMATION TECHNOLOGY (Data Analysis)

Submitted to: DR. FLORENCIO F. SUNICO, CEO VI
Submitted by: JOHNY S. NATAD

October 2008
TABLE OF CONTENTS

TITLE ………………. PAGE

I. PREPARING THE DATA SHEET
A. Scoring the Data ………………. 1
B. Preparing the Program in the Computer ………………. 2
C. Making Table in MS Office Excel ………………. 2
D. Getting the Mean and the Standard Deviation ………………. 3
E. Getting the Total Number of the Respondents ………………. 4

II. OBTAINING FREQUENCY AND PERCENTAGE
A. Getting Frequency of the Profile of Respondents ………………. 5
B. Getting Percentage of the Profile of Respondents ………………. 6
C. Making the Frequency Table for the Perception of the Respondents ………………. 7

III. COMPUTING THE STATISTICAL TEST
A. Checking the Data Analysis Tool in the Computer ………………. 8
B. Using t-Test: Two-Sample Assuming Equal Variances ………………. 8
C. Using ANOVA: Single Factor ………………. 12
THE MANUAL OF OPERATIONS FOR DATA ANALYSIS

I. PREPARING THE DATA SHEET

A. Scoring the Data
1. After ensuring that data gathering is complete, get columnar paper and classify the data by number of respondents, profile, and perceptions, based on the data questionnaire and the following legend:
Number of Respondents: numbers 1 to 50
Profile: A – refers to Sex
B – refers to Course
C – refers to School
D – refers to Residence
Perceptions: Knowledge, Critical Thinking, Involvement, Mobilization
Questions: Q1 – refers to question number 1
Q2 – refers to question number 2
Q3 – refers to question number 3
Q4 – refers to question number 4
2. Using the answered questionnaires, create a worksheet, following the table below, to tabulate the data onto the columnar paper:
Table 1. Worksheet Table for Data Analysis

No. of Respondent | Profile: A, B, C, D | Perceptions: Knowledge (Q1–Q5) | Critical Thinking (Q1–Q5) | Involvement (Q1–Q5) | Mobilization (Q1–Q3)
3. Fill in the columnar-paper worksheet using the following classified scores for the profile and perceptions:

Sex (A): 1 – Male; 2 – Female
Course (B): 1 – AB/Arts-related courses; 2 – BSC/Commerce; 3 – BEED/BSED/Education; 4 – Computer Science/IT; 5 – Engineering courses; 6 – Nursing/Medical courses; 7 – Criminology
School (C): 1 – SPU; 2 – NEMCO; 3 – SJTIT; 4 – SEC; 5 – SSCT
Residence (D): 1 – Within the city; 2 – Outside city limits; 3 – Island barangays
4. Answers to Questions:
1 – Never
2 – Seldom
3 – Frequently
4 – Always
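The classified scores above can be sketched as lookup tables. The dictionary names and the sample respondent row below are illustrative assumptions, not part of the manual:

```python
# Hypothetical sketch of the scoring legend as Python dictionaries.
# The codes come from the legend above; the variable names are assumptions.

SEX = {1: "Male", 2: "Female"}

COURSE = {
    1: "AB/Arts-related courses",
    2: "BSC/Commerce",
    3: "BEED/BSED/Education",
    4: "Computer Science/IT",
    5: "Engineering courses",
    6: "Nursing/Medical courses",
    7: "Criminology",
}

SCHOOL = {1: "SPU", 2: "NEMCO", 3: "SJTIT", 4: "SEC", 5: "SSCT"}

RESIDENCE = {1: "Within the city", 2: "Outside city limits", 3: "Island barangays"}

ANSWER = {1: "Never", 2: "Seldom", 3: "Frequently", 4: "Always"}

# Decoding one hypothetical respondent's profile codes (columns A, B, C, D):
row = {"A": 2, "B": 4, "C": 5, "D": 1}
decoded = (SEX[row["A"]], COURSE[row["B"]], SCHOOL[row["C"]], RESIDENCE[row["D"]])
print(decoded)  # ('Female', 'Computer Science/IT', 'SSCT', 'Within the city')
```

Encoding answers this way makes every cell of the worksheet a small number, which is what the Excel formulas in the following sections operate on.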
B. Preparing the Program in the Computer
1. Using the mouse, click the Start button on the computer.
2. Choose All Programs.
3. Select MS Office Excel.
4. On the FILE menu, click SAVE and type "The Worksheet Analysis".
C. Making Table in MS Office Excel
1. Using Table 1 (Worksheet Table for Data Analysis), prepare the same table in MS Office Excel.
2. Transfer and encode the data from the columnar-paper worksheet using the format of Table 1.
3. After completing the data transfer and encoding, add two (2) columns after every type of Perception (the Knowledge, Critical Thinking, Involvement, and Mobilization columns) for the Mean and Standard Deviation, and another two (2) columns at the end for the Overall Mean and Overall Standard Deviation.
4. Also add three (3) rows below respondent number 50 and label them N (sample), M (mean), and SD (Standard Deviation).
D. Getting the Mean and the Standard Deviation

How to obtain the Mean
1. To get the Mean (M), type the formula inside the cell under the M column in the following manner:
a. Type the equal sign (=)
b. Type the word AVERAGE followed by an open parenthesis [( ]
c. Highlight the first up to the last scores under the Q columns of the first row, followed by a close parenthesis [ )] with no spaces
Example: =AVERAGE(F5:G5)
Note that scores run from Q1 to Q5 per Perception for Knowledge, Critical Thinking, and Involvement, and only from Q1 to Q3 for Mobilization.
d. Press Enter when done.
2. To display all the Means for the score, move the mouse to the edge of the cell until the plus sign appears.
3. Drag the cursor down to the last score.
Note: Be sure the plus sign is showing while dragging the source of data so that the needed data for M fill in completely.
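The AVERAGE step above can be sketched outside Excel as well. The five scores below are hypothetical Q1–Q5 answers (coded 1–4) for one respondent's Perception:

```python
from statistics import mean

# Hypothetical Q1–Q5 scores for one Perception (e.g. Knowledge), coded 1–4
scores = [3, 4, 2, 3, 4]

m = mean(scores)    # same result as Excel's =AVERAGE(...) over those cells
print(round(m, 2))  # 3.2
```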
How to obtain the Standard Deviation
1. To obtain the Standard Deviation, position the cursor in a cell under the SD column and then follow the procedure below:
a. Type the equal sign (=)
b. Type the word STDEV followed by an open parenthesis [( ]
c. Highlight the first up to the last scores under the Q columns of the first row, followed by a close parenthesis [ )]
Example: =STDEV(F4:G4)
d. Press Enter when done.
2. To display all the SDs for the score, move the mouse to the edge of the cell until the plus sign appears.
3. Drag the cursor down to the last score.
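Excel's STDEV is the sample (n − 1) standard deviation, which Python's `statistics.stdev` also computes. A minimal sketch with the same hypothetical scores as before:

```python
from statistics import stdev

# Hypothetical Q1–Q5 scores for one Perception, coded 1–4
scores = [3, 4, 2, 3, 4]

sd = stdev(scores)   # sample standard deviation (n - 1), matching Excel's STDEV
print(round(sd, 4))  # 0.8367
```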
Note: Continue the process until the Mean and SD of every Perception are completed; the same process is used to get the Mean and SD in the last rows at the bottom of the table.
How to obtain Overall Mean and Overall Standard Deviation
1. To get the Overall Mean, follow these steps:
a. Position the cursor in a cell under the Overall Mean column.
b. In the cell, type the equal sign (=) followed by the word AVERAGE.
c. Highlight the Mean (M) scores under the Knowledge, Critical Thinking, Involvement, and Mobilization Perceptions.
Example: =AVERAGE(K5,R5,Y5,AD5)
d. Press ENTER when done and the number will appear.
2. To display all the Overall Means, follow steps 2 and 3 of obtaining M and SD: move the mouse to the edge of the cell until the plus sign appears.
3. Drag the cursor down to the last score.
4. Repeat the same procedure to get the Overall SD, but use the STDEV formula and click only each SD variable (score) per Perception (Knowledge, Critical Thinking, Involvement, and Mobilization), followed by a comma (,) after every score variable, as in the example below:
Example: =STDEV(L4,S4,Z4,AE4)
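The Overall columns combine the four per-Perception values for one respondent. A sketch with hypothetical per-Perception Means and SDs:

```python
from statistics import mean, stdev

# Hypothetical per-Perception Means for one respondent
# (Knowledge, Critical Thinking, Involvement, Mobilization):
perception_means = [3.2, 2.8, 3.0, 3.4]

overall_mean = mean(perception_means)  # like =AVERAGE(K5,R5,Y5,AD5)
print(round(overall_mean, 2))          # 3.1

# The Overall SD is built the same way from the four per-Perception SDs:
perception_sds = [0.84, 0.45, 0.71, 0.58]
overall_sd = stdev(perception_sds)     # like =STDEV(L4,S4,Z4,AE4)
```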
E. Getting the total number of the Respondents
1. Position the cursor in the bottom row for the sample of respondents (N), in the second column, under Profile A.
2. Type the formula in the following manner:
a. Type the equal sign (=)
b. Type the word COUNT followed by an open parenthesis [( ]
c. Highlight all the scores under the Profile A column, followed by a close parenthesis [ ) ]
Example: =COUNT(B4:B53)
d. Press Enter when done and the figure will appear.
3. To display all the totals of respondents, move the mouse to the edge of the cell until the plus sign appears.
4. Drag the cursor horizontally to the last score.
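Excel's COUNT counts only numeric cells. A minimal sketch over a hypothetical Profile A column, with one blank cell shown as `None`:

```python
# Hypothetical Profile A (Sex) codes for seven rows; None stands for a blank cell
column_a = [1, 2, 2, 1, None, 2, 1]

# COUNT counts numeric entries only, like =COUNT(B4:B53)
n = sum(1 for v in column_a if isinstance(v, (int, float)))
print(n)  # 6
```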
II. OBTAINING THE FREQUENCY AND PERCENTAGE
A. Getting Frequency of the Profile of Respondents
1. Make a frequency table on Sheet 2.
2. Double-click Sheet 2 and rename it (Profile of Respondents).
3. Make two columns, for the Variables and the Frequency.
4. In the first column, type numbers starting from one (1, 2, 3, 4, …) to represent the first up to the last variable.
5. Type the formula in the second (2nd) column to get the frequency, to wit:
a) Type the equal sign (=)
b) Type the word FREQUENCY followed by an open parenthesis [ ( ]
c) Highlight all the scores from the mother data (e.g. the scores of the Sex column), followed by a comma [ , ]
d) Highlight the numbers in the first column of the sheet (Profile of Respondents) that represent the variables (score codes), followed by a close parenthesis [ )]
Example: =FREQUENCY(Data!B4:B53,'Profile of Respondents'!B5:B6)
e) Press Enter when done
5. To display all the frequencies of the said variables, move the mouse to the edge of the cell until the plus sign appears.
6. Drag the cursor vertically to the last score.
7. To get the Total of the frequency, perform the following procedure:
a. Type the equal sign (=)
b. Type the word SUM followed by an open parenthesis [( ]
c. Highlight the scores from top to bottom, followed by a close parenthesis [ )]
Example: =SUM(D5:D6)
d. Press Enter when done and the total will automatically appear. You can also highlight all the scores in the Frequency column down to the total row, then click the AutoSum symbol Σ.
Note:
• Continue the process of getting the frequency for the rest of the profile of the respondents until the frequency table is complete.
• Every time you copy data from the mother data, choose Paste Special to avoid errors in analyzing the data.
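The FREQUENCY and SUM steps above can be sketched together; the ten Sex codes below are hypothetical sample data:

```python
from collections import Counter

# Hypothetical Sex codes for 10 respondents (1 = Male, 2 = Female)
sex_codes = [1, 2, 2, 1, 2, 2, 1, 2, 1, 2]

freq = Counter(sex_codes)   # counts per score code, like =FREQUENCY(...)
print(freq[1], freq[2])     # 4 6

total = sum(freq.values())  # like =SUM(...) for the Total row
print(total)                # 10
```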
B. Getting the Percentage of the Profile of Respondents
1. Add one column to the right side of the Frequency column.
2. Label it "Percentage".
3. Type the formula inside the cell aligned with the first frequency score in the following manner:
a. Type the equal (=) sign in the cell under the Percentage column.
b. Click the first score in the Frequency column, followed by a slash [/] sign.
c. Click the sum total of the Frequency, followed by an asterisk sign (*), and type 100.
Example: =D5/D7*100
d. Press Enter when done. (The AutoSum symbol Σ can also help with automatic summing.)
Note: Be sure the number is not in percentage format. Check it by clicking the number in a cell, then clicking the FORMAT menu > Cell > Number.
4. Repeat the same process, following the same steps and formula, in the next percentage cells until the Percentage column is complete.
5. Then get the sum, similar to step II.A.7, as in the example below:
Example: =SUM(E5:E6)
C. Making the Frequency Table for the Perception of the Respondents
1. Create a new sheet and rename it "Perception of Respondents".
2. Perform the same procedure as stated in making the frequency table, but
use the following scale and descriptions:
1.00 - 1.75 Never
1.76 - 2.75 Seldom
2.76 - 3.75 Frequently
3.76 - 4.00 Always
3. Type the formula to get the frequency in the following manner:
a. Type the equal sign (=) followed by the word FREQUENCY and an open
parenthesis [ ( ].
b. Highlight all the Median (M) scores from the mother data (e.g. the
scores of the knowledge perception under the M columns), followed by a
comma [ , ].
c. Highlight the score-code column of the Perception of Respondents
sheet that represents the variables (score codes, e.g. 1.75,
2.75, 3.75, 4.0), followed by a close parenthesis [ ) ].
Example =FREQUENCY(Data!K4:K53,'Perception of Respondents'!C7:C10)
d. Press Enter when done.
4. To display the frequencies for all the variables, move the mouse to the
corner of the cell until the plus sign (+) appears.
5. Drag the cursor vertically down to the last score.
6. To get the total of the frequencies, perform the following procedure:
a. Type the equal sign (=).
b. Type the word SUM followed by an open parenthesis [ ( ].
c. Highlight the scores from top to bottom, followed by a close
parenthesis [ ) ].
Example =SUM(E7:E10)
d. Press Enter when done and the total will automatically appear. You
can also highlight all the scores in the Frequency column down to the
total row and then click the AutoSum symbol Σ.
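The perception scale works like the FREQUENCY score codes: each upper bound (1.75, 2.75, 3.75, 4.00) closes an interval. The same binning can be sketched in Python; the median scores below are hypothetical:

```python
from bisect import bisect_left

# Upper bounds and descriptions from the manual's perception scale
bounds = [1.75, 2.75, 3.75, 4.00]
labels = ["Never", "Seldom", "Frequently", "Always"]

def describe(median):
    """Map a median score to its verbal description: the first bin is
    <= 1.75, the next is (1.75, 2.75], and so on."""
    return labels[bisect_left(bounds, median)]

# Hypothetical Median (M) scores for five respondents
medians = [1.5, 2.0, 3.0, 3.8, 4.0]
print([describe(m) for m in medians])
# ['Never', 'Seldom', 'Frequently', 'Always', 'Always']
```

This mirrors why the score-code column highlighted in the FREQUENCY formula lists only the upper bound of each interval.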
III. COMPUTING THE STATISTICAL TEST
A. Checking the Data Analysis Tool in the computer
1. Perform the said operations on the next sheet and change its label.
2. Click Tools, then Add-Ins.
3. Check Analysis ToolPak.
4. Click OK in the dialog box, or press ENTER, to install the program
completely.
5. Click Tools to check whether Data Analysis was successfully installed;
if it appears on the menu, the program is ready to analyze the data.
B. Using t-Test: Two-Sample Assuming Equal Variances
Note: If the variable being compared has only two (2) groups, use the t-Test
statistical analysis; if it has more than two (2), use ANOVA.
Since SEX, as one of the variables of the Profile of the Senior College
Students, has only two groups, the t-Test will be chosen for the analysis.
1. Copy the scores of the variable (e.g. SEX) from the mother data, including
the Overall Mean, to the next sheet. Take note that when copying, always
use Paste Special by right-clicking the mouse, clicking Values, and then
clicking OK.
2. Highlight the scores of the variable, then click the Sort Ascending icon;
the scores will be arranged and grouped accordingly.
3. Classify male and female by separating the Male and Female scores into
the next column.
4. Get the total COUNT of each group.
5. Go to Tools, then click Data Analysis.
6. Click t-Test: Two-Sample Assuming Equal Variances.
7. Click OK.
8. Place the cursor at the Variable 1 Range and highlight the Male scores;
then highlight the Female scores to place them in the Variable 2 Range.
Note: The input ranges are the Variable 1 Range and the Variable 2 Range.
9. Click Labels and, under Output Options, click New Worksheet Ply; position
the cursor in the blank portion and name the worksheet. You can also
choose New Worksheet so that the result will be placed in another
worksheet. Afterwards, click OK.
The summary and the computed t-Test: Two-Sample Assuming Equal Variances will
appear on a separate worksheet, like the table below:
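The ToolPak's t-Test: Two-Sample Assuming Equal Variances pools the two group variances before computing t. That calculation can be sketched in pure Python; the male and female scores below are hypothetical:

```python
from math import sqrt
from statistics import mean, variance

def t_two_sample_equal_var(a, b):
    """t statistic with pooled variance, as in the ToolPak's
    t-Test: Two-Sample Assuming Equal Variances."""
    na, nb = len(a), len(b)
    # Pooled variance weights each sample variance by its degrees of freedom
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and degrees of freedom

# Hypothetical Overall Mean scores after sorting by SEX
male = [3.2, 3.5, 3.1, 3.8, 3.4]
female = [3.0, 3.3, 2.9, 3.1, 3.6]
t, df = t_two_sample_equal_var(male, female)
print(round(t, 3), df)
```

Comparing the printed t against the worksheet's "t Stat" is a useful sanity check that the input ranges were highlighted correctly.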
C. Using ANOVA: Single Factor
Since the rest of the variables, such as Residence, Course and School, are
grouped into more than two (2) categories, ANOVA will be used for the analysis
of the data, performing the following steps:
1. Repeat steps 1 to 5 of III.B: copying, pasting special, sorting,
classifying and getting the count. The scores to be copied are from Residence,
Course and School. For example, on School:
2. Choose ANOVA: Single Factor and click OK.
Note: The Input Range is where the highlighted variables are placed.
3. Place the cursor at the Input Range and highlight all the data except the
COUNT, as shown in red above.
4. Check Labels in First Row. Choose among the three (3) choices in the
Output Options, then click OK.
The summary of the data analysis will appear in a separate worksheet (or in a
new workbook, if you chose New Workbook in the Output Options), similar to the
one below:
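ANOVA: Single Factor partitions the variation into a between-groups part and a within-groups part and reports their ratio as F. A pure-Python sketch, using hypothetical scores for three schools:

```python
from statistics import mean

def anova_single_factor(*groups):
    """F statistic for ANOVA: Single Factor (between-groups mean
    square divided by within-groups mean square)."""
    scores = [x for g in groups for x in g]
    grand = mean(scores)
    k, n = len(groups), len(scores)
    # Sum of squares between groups: each group's size times its
    # squared distance from the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Sum of squares within groups: squared distances from each group's own mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    f = (ss_between / df_b) / (ss_within / df_w)
    return f, df_b, df_w

# Hypothetical Overall Mean scores grouped by School
school_a = [3.1, 3.4, 3.2]
school_b = [2.8, 2.9, 3.0]
school_c = [3.5, 3.6, 3.3]
f, df_b, df_w = anova_single_factor(school_a, school_b, school_c)
print(round(f, 2), df_b, df_w)
```

The printed F and degrees of freedom should match the "F", "df Between Groups" and "df Within Groups" cells of the ToolPak's summary table for the same data.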