A Presentation Designed for Community Colleges By Russell Kunz
The rationale for Data-Driven Decision-Making attempts to answer two basic questions: What is it that we do, versus what do we think we do? How well do we do it, versus how well do our end users believe we do it?
Data-Driven Decision-Making is a massive culture change, not an activity change. That high degree of culture change is also the biggest reason Data-Driven Decision-Making fails. Implementation of data-driven change often fails because there is no training for all involved, or because change is implemented for the wrong reason.
Create a hypothesis. Data (converted to knowledge) must lead to change, which leads to greater accomplishments. This involves a pre-test (establishing a baseline) and a post-test (measuring the effects of the change). It is a transformation issue: create a culture where change becomes part of the job.
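The pre-test/post-test idea above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original presentation: the scores are invented, and the effect-size formula (Cohen's d with a pooled standard deviation) is one reasonable choice among several.

```python
from statistics import mean, stdev

def change_effect(pre_scores, post_scores):
    """Compare a baseline (pre-test) against results after a change
    (post-test). Returns the raw mean difference and a simple effect
    size (Cohen's d using a pooled standard deviation)."""
    diff = mean(post_scores) - mean(pre_scores)
    pooled = ((stdev(pre_scores) ** 2 + stdev(post_scores) ** 2) / 2) ** 0.5
    return diff, diff / pooled if pooled else 0.0

# Hypothetical exam scores before and after a curriculum change
pre = [68, 72, 75, 70, 66]
post = [74, 78, 80, 73, 71]
gain, d = change_effect(pre, post)
```

A positive gain with a meaningful effect size supports keeping the change; a flat or negative result argues for revisiting the hypothesis.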
When we create curriculum, our initial goal is often the following: “Add VALUE to the process of education.”
Create a value-added evaluation. For a new classroom activity, ask: does it add value to the course, or will it be viewed as busy work? At present, Data-Driven Decision-Making tends to be done for justification purposes rather than for knowledge and insight.
Define “VALUE” and “BY WHOM.” Identify all stakeholders: the students, the administration, the faculty, the business community, the tax-paying community, the legislature, the THECB, and SACS.
We tend to focus all of our data collection and decision-making on satisfying the requirements of our administration, the legislature, the THECB, and SACS because those areas control the rules and our funding.
Identify what activity is specifically valued by each stakeholder group, then ask: “Do I have control over the activity?” If no (the activity is required by the THECB, the Legislature, or some outside source and is driven by funding, laws, or rules), set it aside, because either you do the required activity or you don’t. If yes, measure the activity repeatedly to determine the value/worth of its inclusion.
This is the fundamental question we need to answer. Example: we teach courses well, but what is the correct course to be taught? (If English, why English, and why that particular course in English?) Question: do we keep certain courses in the program curriculum for the right reasons, or are they sacred cows?
Why do the stakeholders value that activity? Identify universally understood components (determined by Root Cause Analysis). Carry Root Cause Analysis to the fifth degree. The purpose of Root Cause Analysis is to determine whether we are collecting the right data for the right reason. Create, maintain, and update benchmarks.
Beware of moving parts and skewed data. Example: different groups of students taking classes with different agendas. Root Cause Analysis is the equivalent of Statistical Process Control 101.
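The Statistical Process Control idea can be sketched as a Shewhart-style control check: results inside the control limits are routine variation, while results outside them are special causes worth a Root Cause Analysis. The course pass rates and the 3-sigma limits below are illustrative assumptions, not data from the presentation.

```python
from statistics import mean, stdev

def control_limits(samples, sigmas=3):
    """Return Shewhart-style control limits (mean +/- sigmas * stdev)
    computed from historical samples."""
    center = mean(samples)
    spread = stdev(samples)
    return center - sigmas * spread, center + sigmas * spread

# Hypothetical course pass rates (%) over eight past terms
history = [82, 85, 80, 84, 83, 81, 86, 84]
lo, hi = control_limits(history)

# A new term's result outside the limits signals a special cause
# worth a Root Cause Analysis, not routine variation.
new_term = 72
special_cause = new_term < lo or new_term > hi
```

The point of the check is restraint in both directions: it keeps routine term-to-term variation from triggering unnecessary change, while flagging genuine shifts for investigation.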
Identify what activities are necessary/valued in the learning process. Instructor’s Perspective: Eliminate those activities that do not contribute to the learning process.
Is part of the curriculum simply busy work? Is part of the curriculum done solely to cover rules/requirements? There must be continuous post-course evaluation (at 6 months, 1 year, etc.) to study the impact of the course/degree (a longitudinal study). We need to study the results of the course more than the presentation skills. Community colleges pride themselves on Applied Learning; we need to evaluate the long-term effectiveness of our Applied Learning techniques.
Implement the needed change and run post-tests to determine the effectiveness of the change. Leader’s Perspective: How do I mentor that learning process to help it become more effective?
Follow-up questions: Did the degree/certificate/course matter to them? Did it have value? Did the student go to work in a field identified as the same as his/her major? This information is already required by Perkins Grant criteria, but it is also significant to determine the worth/value of the related degree/certificate curriculum.
Performance-Driven Institution: leadership defines “performance.” Determine how we measure that performance. Data collection: develop a regular, predictable system that drives change. Convert data to knowledge and apply it to strategic planning to drive changes.
Performance-Driven Institutions implement changes, then administer a post-test to determine the effectiveness of those changes. Note that people generally resist change (sabotage), but also note the reality (the performance gap) that leads to the need for change.
Results of a Performance Driven Institution: Less ego. Less subjective evaluation. More analytic thought and input. Less emotion. More predictable actions.
Results of a Performance Driven Institution: Data driven change. Must supply adequate resources to drive change. Need to act on change (implementation). Courage to change. Documentation of where the organization is versus where it needs to be (Performance Gap).
National education initiatives directed at community colleges and requiring Data-Driven Decision-Making: Achieving the Dream, Completion by Design, Completion Matters, the American Graduation Initiative, and Foundations of Excellence.
If you hold people accountable without changing the culture, people will manipulate the data (e.g., the ISD in Atlanta that changed test scores) or cherry-pick the data. If you hold people accountable but do not change the system, people will ignore you.
How to Evaluate Programs (a systematic process and schedule, usually every 5–7 years). 1. Complete a program discrepancy analysis. Initial discussion or collaboration: “This is what we do now” versus “this is what we want the program to be,” based on best practices, survey results, research results, and more. Look at the discrepancy between what is and what is desired. Resources to examine: financial statements, staffing, technology, and professional development.
How to Evaluate Programs: 2. Determine the extent and/or dimensions (scope) of the program evaluation. 3. Revisit, revise, or draft a philosophy statement for the program to be evaluated. What do we want the evaluation process to achieve: hide problems or maintain the status quo; discover problems and pinpoint possible causes; explore possibilities for real growth; or use the evaluation instrument to justify changes (including hiring additional staff)?
How to Evaluate Programs: 4. Determine the questions to be asked (attributes) and answered by the program evaluation. These should include the essential attributes of the program, so that you know what you are evaluating. Know the degree to which those essential attributes are implemented (needed to distinguish control and experimental groups). Know the levels of satisfaction of the users and of the consumers of the program. Examine results (test results, satisfaction levels, accomplishment of objectives, and more), and look for unintended results.
How to Evaluate Programs: 5. Determine the types of data and the number and types of data collection instruments to be used. Include the following: classroom observation (degree of implementation); survey work (levels of satisfaction and implementation); focus groups (which can help interpret confusing survey results); and contextual data (what else is going on, what task forces are involved, what study teams are being used, how levels of rigor are determined). Data should also include job placement and transfer percentages, number of completers per year, how courses were chosen, number of faculty, growth of the program, program/course assessments, professional development, program demographics, etc.
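Several of the metrics named above (job placement %, transfer %, completers per year) reduce to simple aggregation over student records. A minimal sketch follows; the record fields and the four sample records are hypothetical assumptions for illustration only.

```python
# Hypothetical student records for one program under review.
records = [
    {"year": 2023, "completed": True,  "placed": True,  "transferred": False},
    {"year": 2023, "completed": True,  "placed": False, "transferred": True},
    {"year": 2024, "completed": True,  "placed": True,  "transferred": False},
    {"year": 2024, "completed": False, "placed": False, "transferred": False},
]

# Placement and transfer rates are computed over completers only.
completers = [r for r in records if r["completed"]]
placement_pct = 100 * sum(r["placed"] for r in completers) / len(completers)
transfer_pct = 100 * sum(r["transferred"] for r in completers) / len(completers)

# Completers per year, for trend lines across the review period.
completers_per_year = {}
for r in completers:
    completers_per_year[r["year"]] = completers_per_year.get(r["year"], 0) + 1
```

The value of writing the definitions down this explicitly is that the denominator (completers versus all enrollees) is a policy choice; making it visible in the evaluation plan prevents skewed comparisons across programs.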
How to Evaluate Programs: 6. Decide who will be involved in the program evaluation process: program faculty, other faculty members, associate faculty members, administrators, students, affected business/industry members, and the community.
How to Evaluate Programs: 7. Develop a program evaluation timeline (planning process, questions, data collection, analysis, reporting). This can be a very daunting process and can take several years to complete.