Presentation given April 27th, 2013, for the Missouri State University Graduate Interdisciplinary Forum. Research conducted by me and Dr. Erin M. Buchanan.
Capturing the Student Perspective: A New Instrument for Measuring Advising Satisfaction
1. Capturing the Student Perspective:
A New Instrument for Measuring
Advising Satisfaction
Marilee L. Teasley,
Erin M. Buchanan, Ph.D.
2. Why is Academic Advising Important?
• Kuh (2008): Quality of advising is a powerful
predictor of campus satisfaction
• Metzner (1989): Lower attrition rates for high
quality advising
▫ But some advising is better than none!
3. What Should Be Measured?
• Types of advising (Crookston, 1972):
▫ Prescriptive
▫ Developmental
• Which is better?
4. Measuring Advising: Reliability & Validity
• Reliability: Consistency of scores
• Validity: How well do you measure what you are
claiming to measure?
8. Existing Measures
• Very few standardized assessments exist
• Validity and Reliability?
• Qualitative versus Quantitative methods
9. Original Research Question
• How do our students feel about their past and
present academic advising experiences?
10. Additional Research Question
• Can we extend the existing advising assessment
research by creating a new scale with statistical
reliability and validity?
14. How Did We Do? (Version 1)
• Exploratory Factor Analysis (EFA):
▫ One factor model, poor overall fit
• Compound questions & clarity issues
▫ “My advisor encourages me to speak freely and
listens to what I have to say.”
15. Our Study: Take 2!
• 181 total students, 177 complete questionnaires,
157 remaining after outlier analysis
• Original demographics questions + 30 revised
advising questions
16. How Did We Do? (Version 2)
• Exploratory Factor Analysis (EFA) showed that
one or two factors would be appropriate
• Best fit: two factors, 24 questions
17. The Two Factor Model
• Advising Functions
▫ “Advising appointments are worth my time”
▫ “I find academic advising appointments to be a
positive experience.”
• Outreach Functions
▫ “I learn how I can contribute to the surrounding
community during my advising appointments.”
▫ “My advisor lets me know about the importance of
our public affairs mission.”
18. One More Time… (Experiment 3)
• 184 total students, 167 remaining after outlier
analysis
• 59 returning students from Experiment 2 for
test-retest reliability purposes
▫ 24 final advising questions, no demographics
• New participants:
▫ Demographics + 24 final advising questions
19. How Did We Do? (Version 3)
• Confirmatory Factor Analysis
▫ Good overall fit
• High test-retest reliability
20. What About Our Original Question?
• Both factors had high averages (above “neutral”)
• Advising factor > Outreach factor
21. What’s Next?
• NACADA Journal, 33(2)
▫ Late Fall/Early Winter 2013
• The goal: impact advising assessment!
22. Thank You!
• For questions about our project or usage of our
scale, please contact either one of us:
Marilee Teasley
(teasley888@live.missouristate.edu)
or
Dr. Erin Buchanan
(erinbuchanan@missouristate.edu)
Kuh 2008: Basically, if students are happy with their advisor, they are pretty likely to be happy with the rest of campus.
Metzner 1989: Building on Kuh's finding, students who are happy with their advisor are more likely to stay in college. Even students who aren't especially impressed with their advisor benefit: just the presence of an advisor is better than no advisor.
We want to make sure that students are receiving high quality advising, so we want to constantly monitor and evaluate the situation. How do we do such a thing and what should be measured?
In 1972, Crookston came up with two types of advising: prescriptive and developmental.
- Prescriptive advising is what most students think of when they consider advising. Here the advisor is seen as an authority figure, and the advisor tells the student what to do. That’s not necessarily a bad thing - prescriptive functions in advising are essential to student success, as they include discussing graduation requirements, course selection, and registration procedures.
- Developmental advising, on the other hand, is focused on an equal and deeper relationship between advisor and advisee and examines the student as a whole person. Developmental advising should be a team effort, where the advisor guides the student in developing skills and self-awareness that will lead to a rewarding college career and beyond. Examples of developmental advising topics include strengthening skills and identifying goals.
- So which is better? Both methods of advising are important and should be utilized at certain times throughout a student’s college career. Much like Maslow’s hierarchy of needs, a student’s basic needs should be met through prescriptive advising before higher-level needs can be met through developmental advising. Typically, a freshman right out of high school will not be as prepared to discuss goals and skills as a junior or senior.
So how do you measure advising? Typically, when you measure a concept, you want to measure it with an instrument that has been psychometrically tested for the properties of reliability and validity, among other things.
Generally speaking, RELIABILITY is how consistent scores are over time. If I take this assessment today and then take it again tomorrow, I should have approximately the same score each time. Measurement error plays a role here too: the less error in the instrument, the more accurate and consistent the scores will be each time.
VALIDITY asks whether we are measuring what we claim to be measuring. This is important: the measurement tool we have created may be very reliable, yet still be measuring something entirely different from advising. This is why we took a look at what should be measured when it comes to advising.
You also need a scale with questions that fit well with one another. Perhaps your instrument measures multiple subfacets of a concept, much like the different types of advising. With Factor Analysis, you examine the instrument to find out what overall themes, or factors, are coming out of your questions, and which questions match to each factor.
So what’s the difference between the two? In short, reliability is about consistency, while validity is about whether you are measuring the right thing at all.
Creating a scale with reliability, validity, and good overall fit is hard work, and it takes multiple rounds of running participants and editing questions to get there.
This is Exploratory Factor Analysis, or EFA. You have all these answers, and you want to see how they group together. The analysis then gives you numbers that tell you HOW WELL they group together. You want answers to be highly related to one and only one group.
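To make that concrete, here is a minimal EFA sketch in Python using the factor_analyzer package. The file name, question columns, and two-factor choice are illustrative assumptions for the sketch, not our actual analysis pipeline.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical data: one row per student, one column per advising
# question (Likert-type responses).
responses = pd.read_csv("advising_responses.csv")  # hypothetical file

# Fit a two-factor model with an oblique rotation, which allows
# the factors to correlate (sensible for related subfacets).
efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa.fit(responses)

# Loadings tell you how strongly each question relates to each factor.
# Ideally each question loads highly on one and only one factor.
loadings = pd.DataFrame(efa.loadings_,
                        index=responses.columns,
                        columns=["Factor1", "Factor2"])
print(loadings.round(2))
```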
This is Confirmatory Factor Analysis, or CFA. You already know how the answers should group together, and you want to make sure you can repeatedly get that result.
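A matching CFA sketch, here using the semopy package and its lavaan-style model syntax. The question names and factor assignments below are hypothetical placeholders, not our final 24-item structure.

```python
import pandas as pd
from semopy import Model, calc_stats

responses = pd.read_csv("advising_responses.csv")  # hypothetical file

# Unlike EFA, the expected structure is specified up front:
# which questions belong to which factor.
spec = """
Advising =~ q1 + q2 + q3 + q4
Outreach =~ q5 + q6 + q7 + q8
"""

model = Model(spec)
model.fit(responses)

# Fit indices (CFI, RMSEA, and so on) summarize how well the
# pre-specified structure reproduces the observed data.
print(calc_stats(model).T)
```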
So what measurement tools are out there already? Well, very few standardized assessments exist.
Many of the existing advising assessments neglect reliability and validity. Some instruments haven't been examined for these properties at all, and others are missing details about the scale development process. Several studies were vague about their scale creation, declaring acceptable reliability and validity without the statistical information needed to confirm these claims.
Furthermore, many of the existing assessment initiatives are qualitative in nature, including interviews and focus groups. While methods of this nature can give us great insight into what students are thinking, you cannot rely on them alone: qualitative data are hard to analyze systematically and leave too much room for interpretation.
When we started this project, here was our original research question:
But after reviewing the existing assessment literature, we added another research question: (NEXT SLIDE)
Because scale development is a long process, we ended up with three separate experiments. Before I get into what we found, I want to introduce the demographics of our three samples. These were all students enrolled in PSY 121, Introduction to Psychology. For class credit, they have to complete six credits of study participation, where one credit typically equals a half hour of participation.
Average age was 19 to 20, with more females than males; participants were mostly freshmen and sophomores, non-transfer students, Caucasian, and students who had already chosen a major.
Unfortunately, our results suggested one overall factor with very poor overall fit. Like I said before, scale development is a long process, so you rarely get perfect results the first time.
We took a good look at our questionnaire to see what went wrong so we could try again, and found that some of our questions could use some more clarity. For example, “My advisor encourages me to speak freely and listens to what I have to say.” This is known as a compound question – it actually asks two things in one question. This is bad – if someone says yes, which one are they saying yes to?
So we split up the compound questions and reworded others for greater clarity. We ended up with 30 questions for the next round.
Factor analysis showed that two factors had the best overall fit once we removed six questions. These six questions either loaded on both factors at once (cross-loading) or did not load strongly on either factor.
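As an illustration of that pruning step, here is one common rule of thumb in Python. The cutoffs and the tiny loadings table are made up for the sketch; they are not the exact criteria or values from our study.

```python
import pandas as pd

# Hypothetical loadings (items x factors), like the EFA output above.
loadings = pd.DataFrame(
    {"Factor1": [0.72, 0.65, 0.45, 0.08, 0.31],
     "Factor2": [0.05, 0.12, 0.38, 0.70, 0.33]},
    index=["q1", "q2", "q3", "q4", "q5"])

abs_load = loadings.abs()
primary = abs_load.max(axis=1)                                        # strongest loading
secondary = abs_load.apply(lambda r: r.nlargest(2).iloc[-1], axis=1)  # next strongest

# Illustrative rule of thumb: keep an item only if its strongest loading
# is meaningful (>= .40) and clearly beats its other loading (gap >= .20).
keep = (primary >= 0.40) & ((primary - secondary) >= 0.20)
print("Kept:", list(loadings.index[keep]))      # q1, q2, q4
print("Dropped:", list(loadings.index[~keep]))  # q3 cross-loads; q5 loads on neither
```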
Once we looked at the distribution of questions onto the two factors, we noticed two themes: advising functions and outreach functions.
Basically, advising functions have to do with the advising appointment itself, and outreach functions involve the greater campus and community.
Just to make sure what we got wasn’t a fluke, we ran new participants to confirm the two factors we found. We even invited students to retake the questionnaire to see if their scores changed from time 1 to time 2. They didn’t have to complete demographics again.
Test-retest reliability: do their scores stay approximately the same over time?
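A minimal sketch of that check, assuming each returning student's total scores from the two administrations sit in one table; the file and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical data: one row per returning student,
# score at time 1 and score at time 2.
scores = pd.read_csv("retest_scores.csv")  # columns: time1, time2

# Test-retest reliability is often summarized as the correlation
# between the two administrations; values near 1 mean scores
# stayed stable over time.
r, p = pearsonr(scores["time1"], scores["time2"])
print(f"Test-retest r = {r:.2f} (p = {p:.3f})")
```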
Overall, our students are happy with their advising experiences – overall scores for both factors were above “neutral.” This is good!
Even so, our advising factor had significantly higher average scores than our outreach factor. This may mean students have not had enough exposure to developmental-style advising just yet (remember all the freshmen in our sample!), or they may not be as happy with it as they are with everything else.
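The slides don't name the test behind "significantly higher"; one natural choice for comparing two scores from the same students is a paired-samples t-test, sketched here with hypothetical factor scores.

```python
import pandas as pd
from scipy.stats import ttest_rel

# Hypothetical data: each student's mean score on each factor.
scores = pd.read_csv("factor_scores.csv")  # columns: advising, outreach

# Each student contributes both scores, so the comparison is paired.
t, p = ttest_rel(scores["advising"], scores["outreach"])
print(scores.mean())                # factor means
print(f"t = {t:.2f}, p = {p:.4f}")
```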
What’s next for this project? We’re published!
Overall, we want this project to impact the assessment of advising, and we hope it is used to do just that. We have had interest from several departments on campus regarding the usage of our instrument in future assessment initiatives, and we hope others are interested too – not just here, but at other institutions as well.