Evaluating the Effectiveness of Academic Development: Principles and Practice


Good Practice: Once the purpose of evaluation, its focus, reporting and responsibility for action have been determined, decide on the method of evaluation that best suits these criteria. There is always an advantage in using several methods of evaluation and correlating their outcomes.

Questionnaire

This familiar method of seeking feedback from learners and participants has the potential advantages of speed of administration, anonymity of response and standardisation for purposes of comparison between cohorts.

The shortcomings can include a poor response rate and weak validity of the outcomes if the questionnaire is not designed with care for its purpose and focus, and, if questionnaires are over-used, the effect of "questionnaire fatigue".

Principles of Good Practice

The University wants to ensure that we actively engage learners in providing feedback that lecturers can respond to and act on. You might also want to run a specific survey about an innovative or new learning activity, to help you evaluate its success or identify areas needing further development; you can do this using Accelerate.

Points to consider: Who should design the questionnaire? The answer is determined by the purpose of the evaluation and is, most commonly, the person(s) responsible for the delivery of the education under evaluation.

It is good practice to seek the views of the intended evaluators on its suitability for the purpose. Freeform responses allow a more subtle range of responses and allow issues to be raised beyond those set out in the questionnaire. However, they take longer to complete, longer to process and much longer to report. It is good practice for the processing and reporting to be done by someone not closely involved with the subject of the evaluation. So that the purpose and focus remain clear, it is good practice to keep a questionnaire short: about 10 questions is right for a rating-scale format, but far fewer if all the questions allow freeform responses.
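To illustrate the processing and reporting step, the short sketch below shows how rating-scale responses might be tabulated so that response rates and mean ratings can be compared between cohorts. It is a minimal example only; the question labels, cohort names and 1-5 rating scale are illustrative assumptions rather than part of this guidance.

# Minimal sketch: summarise rating-scale questionnaire responses per cohort.
# Question labels, cohort names and the 1-5 scale are illustrative assumptions.
from statistics import mean

def summarise_cohort(responses, invited):
    """responses: list of dicts mapping question id -> rating (1-5);
    invited: number of learners asked to complete the questionnaire."""
    rate = len(responses) / invited if invited else 0.0
    questions = sorted({q for r in responses for q in r})
    means = {q: round(mean(r[q] for r in responses if q in r), 2) for q in questions}
    return {"response_rate": rate, "question_means": means}

# Hypothetical data: two cohorts answering the same standardised questions.
cohort_a = [{"Q1": 4, "Q2": 3}, {"Q1": 5, "Q2": 4}, {"Q1": 4, "Q2": 2}]
cohort_b = [{"Q1": 3, "Q2": 4}, {"Q1": 4, "Q2": 4}]

for name, data, invited in (("Cohort A", cohort_a, 5), ("Cohort B", cohort_b, 4)):
    summary = summarise_cohort(data, invited)
    print(name, f"response rate {summary['response_rate']:.0%}", summary["question_means"])

A low response rate flags the "questionnaire fatigue" risk noted above, while the per-question means give a standardised basis for comparison between cohorts.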

When should the questionnaire be administered? The answer depends entirely on the purpose. For example, evaluation after the end of a module gives a more complete picture, but comes too late for that cohort to benefit from the information; evaluation part-way through the module, or after individual classes, gives an incomplete picture, but enables some adjustment of the remainder of the module to benefit that cohort.

The purpose and focus also determine the best frequency of administration, but it is unwise to overload learners to the extent that questionnaire fatigue sets in. It is good practice for a department to have a planned schedule of evaluation, with a higher frequency of evaluation where there is cause for concern and a lower frequency where evaluation gives stable, positive outcomes.

Structured group interview (nominal group technique)

This is a meeting with learners or participants at which they are asked to give their views about a programme, course or class. Typically, learners are asked to work in small groups to reflect upon positive and negative features of the educational provision.

A spokesperson from each group is asked to relay the considered views of the group to the meeting. The role of the member of staff leading the meeting is to compile a summary of such views, to validate them at the meeting and, later, to produce a short report of the main outcomes. It is an advantage for this person to be someone from outside the department or teaching team, to support anonymity and to provide a safe environment in which learners can express their views honestly.

In empowerment theory, empowering processes operate at several levels of analysis. For example, empowering processes for individuals might include organizational or community involvement; empowering processes at the organizational level might include shared leadership and decision making; and empowering processes at the community level might include accessible government, media, and other community resources.

Empowerment theory processes contribute to specific outcomes. Linking the processes to outcomes helps groups specify their chain of reasoning. Zimmerman provides additional insight into the outcome level of analysis to further explicate empowerment theory: empowerment outcomes refer to the operationalization of empowerment so that we can study the consequences of citizens' attempts to gain greater control in their community, or the effects of interventions designed to empower participants.



Empowered outcomes also differ across levels of analysis. When we are concerned with individuals, outcomes might include situation-specific perceived control, skills, and proactive behaviors. When we are studying organizations, outcomes might include organizational networks, effective resource acquisition, and policy leverage.

When we are concerned with community level empowerment, outcomes might include evidence of pluralism, the existence of organizational coalitions, and accessible community resources.


As collaborators, professionals learn about the participants through their cultures, their worldviews, and their life struggles. The professional works with participants instead of advocating for them.

This role relationship suggests that what professionals do will depend on the particular place and people with whom they are working, rather than on technologies predetermined to be applied in all situations. For purposes of clarity, and as it relates to empowerment evaluation, self-determination is used as the umbrella term in this discussion.

Self-determination consists of numerous interconnected capabilities, such as the ability to identify and express needs; establish goals or expectations and a plan of action to achieve them; identify resources; make rational choices from various alternative courses of action; take appropriate steps to pursue objectives; evaluate short- and long-term results, including reassessing plans and expectations and taking necessary detours; and persist in the pursuit of those goals.

Self-determination mechanisms help program staff members and participants implement an empowerment evaluation. Process use assumes that the more people conduct their own evaluations, the more they own them. The greater the sense of ownership, the more likely people are to consider their findings credible and to act on their own recommendations. Empowerment evaluation places evaluation in the hands of community and staff members to facilitate ownership, enhance credibility, and promote action.

In addition, a byproduct of this experience is that people learn to think evaluatively (Patton). This makes them more likely to make decisions and take actions based on their evaluation data. Theories that enable comparisons between action and use are essential. Empowerment evaluation relies on the reciprocal relationship between theories of action and use at every step in the process. A theory of action is the espoused operating theory about how a program or organization works. It is compared with a theory of use: the way the program or organization actually operates in practice.

People engaged in empowerment evaluations create a theory of action and test it against the existing theory of use.



Because empowerment evaluation is an ongoing and iterative process, stakeholders test their theories of action against theories of use during various microcycles to determine whether their strategies are being implemented as recommended or designed. These theories are used to identify gross differences between the ideal and the real.

For example, communities of empowerment evaluation practice compare their theory of action with their theory of use to determine whether they are even pointing in the same direction. Three common patterns emerge from this comparison: in alignment, out of alignment, and alignment in conflict. In alignment means the two theories are parallel or pointed in the same direction. The alignment may be distant or close, but the theories are on the same general track. Out of alignment occurs when actual practice diverges from the espoused theory of how things are supposed to work.

The theory of use is not simply distantly or closely aligned, but actually off target or at least pointed in another direction. Alignment in conflict occurs when the theories of action and use point in diametrically opposite directions. This signals a group or organization in serious trouble or self-denial. After making the first-level comparison, a gross indicator of whether the theories of action and use are even remotely related to each other, communities of empowerment evaluation practice compare their theory of action with their theory of use in an effort to reduce the gap between them.

This assumes they are at least pointed in the same direction. The ideal progression is from distant alignment to close alignment between the two theories. This is the conceptual space in which most communities of empowerment evaluation practice strive to accomplish their goals as they close the gap between the theories. The process of empowerment embraces the tension between the two types of theories and offers a means for reconciling incongruities. Empowerment evaluation is guided by 10 specific principles (Fetterman and Wandersman).

They include:



  • Improvement: empowerment evaluation is designed to help people improve program performance; it is designed to help people build on their successes and re-evaluate areas meriting attention.
  • Community ownership: empowerment evaluation values and facilitates community control; use and sustainability depend on a sense of ownership.
  • Inclusion: empowerment evaluation invites involvement, participation, and diversity; contributions come from all levels and walks of life.
  • Democratic participation: participation and decision making should be open and fair.
  • Social justice: evaluation can and should be used to address social inequities in society.
  • Community knowledge: empowerment evaluation respects and values community knowledge.


  • Evidence-based strategies: empowerment evaluation respects and uses the knowledge base of scholars in conjunction with community knowledge.
  • Organizational learning: data should be used to evaluate new practices, inform decision making, and implement program practices; empowerment evaluation helps organizations learn from their experience, building on successes, learning from mistakes, and making mid-course corrections.
  • Capacity building: empowerment evaluation is designed to enhance stakeholders' capacity to conduct their own evaluations, including collecting and using their own data.
  • Accountability: empowerment evaluation guides participants to hold one another accountable and situates the evaluation within the context of external requirements and credible outcomes.

Empowerment evaluation principles help evaluators and community members make decisions that are in alignment with the larger purpose or goals associated with capacity building and self-determination. The principle of inclusion, for example, reminds evaluators and community members to include rather than exclude members of the community, even though fiscal, logistical, and personality factors might suggest otherwise.

The capacity building principle reminds the evaluator to provide community members with the opportunity to collect their own data, even though it might initially be faster and easier for the evaluator to collect the same information. The accountability principle guides community members to hold one another accountable. It also situates the evaluation within the context of external requirements and credible results or outcomes (see Fetterman).

Critical friend: A critical friend is an evaluator who facilitates the process and steps of empowerment evaluation.

They believe in the purpose of the program but provide constructive feedback. They help to ensure the evaluation remains organized, rigorous, and honest.

Culture of evidence: Empowerment evaluators help cultivate a culture of evidence by asking people why they believe what they believe.


Empowerment evaluation is a cyclical process. Programs are dynamic, not static, and require continual feedback as they change and evolve. Empowerment evaluation is successful when it is institutionalized and becomes a normal part of the planning and management of a program. Group members learn from one another, serving as their own peer review group, critical friend, resource, and norming mechanism. Individual members of the group hold each other accountable for progress toward stated goals.