
ELIXIR – EXCELERATE Train-the-Trainer subtask

Session 4: Assessment and feedback in training

Introduction to assessment and feedback in training

Description: This sequence prompts reflection on feedback and the ways in which it is conveyed.

Keywords: feedback, giving feedback, receiving feedback

Learning objective: Develop an understanding of different types of feedback, when to give and receive feedback, and for what purpose.

Learning outcome:

Introduction "Usually, teachers give feedback to their students, while trainers receive feedback from the learners"

We have to be aware of the difference between giving feedback to learners and receiving feedback from learners. Both types of feedback are useful but have different purposes.

Activity (individual) - Challenge 1: What kind of feedback/assessment do you know as a learner or use as a trainer?

  • What type of assessment did you undertake as a learner or trainer?
  • What was its purpose in your opinion?
  • Was it useful to your learning or teaching?

Write at least one example and discuss it with us.

Assessment timeline - when and why to assess

  1. Pre-course assessment (before the course) - verify the target audience of the course
  2. Preventive assessment (beginning of the course) - final adjustments of the course to the reality of the participants
  3. Formative assessment (during the course) - monitor in real time whether learning is taking place
  4. Summative assessment (right after the course) - measure and evaluate the knowledge and skills acquired
  5. Strategic evaluation (long after the course) - measure the adequacy, quality and impact of the course

Pre-course assessment - Diagnostic questionnaires

Pre-course assessments are useful tools for teachers to get an idea of where the learners, and the group as a whole, stand at the beginning of a course. This is very helpful to set up realistic learning objectives, to meet learners' expectations, and to adapt the course content to fill the gaps identified in the diagnostic questionnaire, avoiding time spent on things that are not necessary. Diagnostic questionnaires can be anonymous or not. Anonymous questionnaires give an idea of the level of knowledge of the whole group of learners. An example of an anonymous diagnostic questionnaire for a session on PPI resources is given here. And here you can see what the responses look like.

Non-anonymous, personal questionnaires make it possible to find out whether a learner has the required prior knowledge and, if not, to point to an appropriate teaching option that fills the gap. For example, a diagnostic questionnaire on Unix commands is used at SIB before an HPC course for which Unix is a prerequisite, and a Unix e-learning module is offered at SIB to learners who are not sure of their level of expertise or have been identified as lacking some knowledge.

Design of MCQs with distractors

We quote from the Center for Teaching at Vanderbilt University.

A multiple choice question consists of a problem, known as the stem, and a list of suggested solutions, known as alternatives. The alternatives consist of one correct or best alternative, which is the answer, and incorrect or inferior alternatives, known as distractors.

Multiple choice test questions, also known as items, can be an effective and efficient way to assess learning outcomes. Multiple choice test items have several potential advantages:

Versatility: Multiple choice test items can be written to assess various levels of learning outcomes, from basic recall to application, analysis, and evaluation. Because students are choosing from a set of potential answers, however, there are obvious limits on what can be tested with multiple choice items. For example, they are not an effective way to test students’ ability to organize thoughts or articulate explanations or creative ideas.

Reliability: Reliability is defined as the degree to which a test consistently measures a learning outcome. Multiple choice test items are less susceptible to guessing than true/false questions, making them a more reliable means of assessment. The reliability is enhanced when the number of MC items focused on a single learning objective is increased. In addition, the objective scoring associated with multiple choice test items frees them from problems with scorer inconsistency that can plague scoring of essay questions.

Validity: Validity is the degree to which a test measures the learning outcomes it purports to measure. Because students can typically answer a multiple choice item much more quickly than an essay question, tests based on multiple choice items can typically focus on a relatively broad representation of course material, thus increasing the validity of the assessment.

The key to taking advantage of these strengths, however, is construction of good multiple choice items.
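To make this anatomy concrete, here is a hypothetical item for an introductory Python session, encoded as a small data structure so that the stem, answer, and distractors are explicit. The question, wording, and field names are our own illustration, not part of the quoted material; note how each distractor targets a plausible misconception.

```python
import random

# Hypothetical MCQ for an introductory Python session (illustration only).
mcq = {
    "stem": "What does len([1, 2, 3]) return?",
    "answer": "3",
    "distractors": [
        "2",           # off-by-one confusion with zero-based indexing
        "[1, 2, 3]",   # confusing the function's result with its argument
        "an error",    # belief that len() applies only to strings
    ],
}

# Present the alternatives in random order, as a test generator would.
alternatives = [mcq["answer"]] + mcq["distractors"]
random.shuffle(alternatives)
print(mcq["stem"])
for label, alt in zip("abcd", alternatives):
    print(f"  {label}) {alt}")
```

Because each distractor encodes a specific misunderstanding, the pattern of wrong answers tells you which misconception to address, not merely that learners got the item wrong.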

Activity (individual) - Challenge 2: Design a questionnaire

Write three MCQs (in your field of teaching) revealing:

  • a knowledge gap ("what")
  • a weakness in a practical skill ("why, when, how")
  • a misconception

Feedback to learners

Feedback to learners is anything we do to help both ourselves, the instructors, and the learners get information about whether learning is occurring (during the teaching) or has occurred (at the end of the teaching). Grades are an example of a type of feedback we can give to students to inform them how they performed in a test or exam. Grading, on the one hand, informs instructors whether the learning took place and whether learners are ready to move on and, on the other, should make learners aware of the knowledge and mastery they have attained by the time they took the test.

Summative and formative assessment

Feedback to learners can be summative or formative.

Summative assessment. An exam or a test at the end of a course is an example of summative assessment. Summative assessment is aimed at evaluating learners' performance at the end of teaching (this could be at the end of a topic, a session, or at the end of the entire course). This is the most frequent type of assessment occurring in schools and universities and usually includes grading. It is less frequent in training.

Formative assessment. Formative assessment takes place during teaching and learning. Its purpose is to help both instructors and learners to become aware of what the focus should be.

Formative "Classroom assessment's purpose is to improve the quality of student learning, not to provide evidence for evaluating or grading students. The assessment is almost never graded and are almost always anonymous." (from From Angelo & Cross, Classroom Assessment techniques, a Handbook for College Teachers)

Formative assessment can be used to collect information about learners'

  • prior knowledge
  • mental models
  • level of mastery of the topic at hand
  • goals and objectives
  • frequent mistakes

And it can help the instructor understand

  • which knowledge gaps need to be filled before moving on
  • whether their mental models are correct
  • if the level of mastery is sufficient according to the course's learning objectives and outcomes
  • if learners' goals and objectives are aligned with the course's goals and objectives
  • which types of mistakes need special attention

From the GLOSSARY OF EDUCATION REFORM (also in PDF):

Formative assessment refers to a wide variety of methods that teachers use to conduct in-process evaluations of student comprehension, learning needs, and academic progress during a lesson, unit, or course.

Formative assessments help teachers identify concepts that students are struggling to understand, skills they are having difficulty acquiring, or learning standards they have not yet achieved so that adjustments can be made to lessons, instructional techniques, and academic support.

The general goal of formative assessment is to collect detailed information that can be used to improve instruction and student learning while it’s happening.

What makes an assessment “formative” is not the design of a test, technique, or self-evaluation, per se, but the way it is used—i.e., to inform in-process teaching and learning modifications.

In order to be useful during teaching, formative assessment has to be quick to administer and evaluate.

Formative assessment can be used as a teaching strategy and, as such, as an actual opportunity to learn.

In particular, it can be used to:

1) activate and explore prior knowledge

"Student's prior knowledge can help or hinder learning"
(Ambrose et al. (2010) "How learning works", principle 1)

  • strategies to activate prior knowledge
    • give examples taken from real life
    • ask students questions designed to trigger recall; this can help them use prior knowledge to aid the integration and retention of new information.
  • strategies to reveal accurate but insufficient prior knowledge
    • administer a diagnostic questionnaire. In preparing a diagnostic questionnaire, be aware of the difference between "declarative knowledge" (knowing what) and "procedural knowledge" (knowing how and when to apply various procedures, methods, theories, etc.). A questionnaire is sufficient to assess declarative knowledge; solving a small exercise may help test procedural knowledge.
    • administer a self-assessment questionnaire. Self-assessment may be a problem because students may not be able to accurately assess their abilities. Generally, people tend to overestimate their knowledge and skills. Accuracy improves when the response options are clear and tied to specific concepts or behaviours. Example in Ambrose et al. (2010), Appendix A.
  • strategies to help learners recognise inappropriate prior knowledge. If students are explicitly taught the conditions and contexts in which knowledge is applicable (and inapplicable), this can help them avoid applying prior knowledge inappropriately (example: Python "methods").
    • make a list of keywords essential to the topic you are teaching and ask learners to classify the terms introduced in a session at the end of the session. Example: pin cards with Python categories (modules, built-in functions, methods, data types, etc.) to the classroom wall, write Python terms on cards, and ask learners to pin the term cards under the correct category while saying aloud why they are putting that term in that category (a Python sketch of such categories is shown after this list).
  • strategies to highlight inaccurate prior knowledge. Inaccurate prior knowledge can be corrected fairly easily if it consists of relatively isolated ideas or beliefs that are not embedded in larger conceptual models (for example, the belief that Pluto is a planet). Some kinds of inaccurate prior knowledge - called misconceptions - are remarkably resistant to correction. Misconceptions are models or theories that are deeply embedded in students' thinking (e.g. the notion that objects of different masses fall at different rates, "folk psychology" myths such as that blind people have more sensitive hearing than sighted people, or that seasons depend on the distance of the Earth from the Sun). Misconceptions are difficult to refute for a number of reasons: 1) many of them have been reinforced over time and across multiple contexts; 2) they often include accurate - as well as inaccurate - elements, thus students may not recognise their flaws; 3) in many cases, they may allow for successful explanation and prediction in a number of everyday circumstances.
    • Administer MCQs with distractors (see below).
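For the Python card-sorting example above, a minimal snippet such as the following (our own sketch; the terms are an arbitrary choice) can be shown at the end of the session so that learners check their classification against running code:

```python
import math                    # "math" is a module

numbers = [3, 1, 2]            # a list is a (built-in) data type
print(len(numbers))            # len() is a built-in function
numbers.sort()                 # sort() is a method of list objects
print(math.sqrt(numbers[-1]))  # sqrt() is a function from the math module
```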

2) to promote peer instruction and content delivery

  • You can use an anonymous diagnostic questionnaire as described below.

3) to practice retrieval

..... (see Small Teaching)

4) to stimulate reflection and prepare learners' brain for learning

5) to highlight learners' weaknesses and difficulties and therefore to set the pace of the following teaching

6) to help learners understand what they have to focus on

Formative assessment can be done in many different ways:

  • Asking questions to learners and getting responses orally;
  • Asking them to describe the strategy they would adopt to solve a problem;
  • Asking them to solve a problem in groups, or individually but in front of the class;
  • Using brainstorming and discussions;
  • Providing diagnostic questionnaires;
  • Providing MCQs with distractors.

In the following, we report the seven assumptions on which CATs (Classroom Assessment Techniques) are based, and five suggestions for using them fruitfully and effectively:

From Angelo and Cross:

Classroom Assessment is based on seven assumptions:

  1. The quality of student learning is directly, although not exclusively, related to the quality of teaching. Therefore, one of the most promising ways to improve learning is to improve teaching.
  2. To improve their effectiveness teachers need first to make their goals and objectives explicit and then to get specific, comprehensible feedback on the extent to which they are achieving those goals and objectives.
  3. To improve their learning, students need to receive appropriate and focused feedback early and often; they also need to learn how to assess their own learning.
  4. The type of assessment most likely to improve teaching and learning is that conducted by faculty to answer questions they themselves have formulated in response to issues or problems in their own teaching.
  5. Systematic inquiry and intellectual challenge are powerful sources of motivation, growth, and renewal for college teachers, and Classroom Assessment can provide such challenge.
  6. Classroom Assessment does not require specialized training; it can be carried out by dedicated teachers from all disciplines.
  7. By collaborating with colleagues and actively involving students in Classroom Assessment efforts, faculty (and students) enhance learning and personal satisfaction.

Five suggestions for a successful start:

  1. If a Classroom Assessment Technique does not appeal to your intuition and professional judgement as a teacher, don't use it.
  2. Don't make Classroom Assessment into a self-inflicted chore or burden.
  3. Don't ask your students to use any Classroom Assessment Technique you haven't previously tried on yourself.
  4. Allow for more time than you think you will need to carry out and respond to the assessment.
  5. Make sure to "close the loop." Let students know what you learn from their feedback and how you and they can use that information to improve learning.

Self-assessment, self-confidence and usage independence

In active learning environments, learners are so involved in the learning process that they often lose awareness of their accumulated knowledge and of its operational value. Learning by doing catches them up in the process, so they often forget to assess it.

In good quality training, instructors make efforts to keep the interaction loop closed. As facilitators, they can steer this build-up.

At carefully chosen times, it may be useful to intervene and stimulate self-assessment (see how to under Instant Feedback below). Self-assessment helps learners regain this awareness. Learners verify that they can do things by themselves that they could not do before, or at least that their need for external help is decreasing. This can be seen as a work-out process towards gaining independence, or mastery, in a subject matter. The conscious learner feels "empowered". It is up to the instructor to moderate and keep this empowerment within reasonable limits. Learners who do not feel empowered often find it by comparing their experience with that of their peers. Dialogues between learners will occur naturally, but can also be stimulated by reflective exercises (such as in Software and Data Carpentry). The instructor will learn to adapt the level of intervention to each situation, keeping in mind that the learner is the focus of the learning process, and that the instructor/learner relationship is the cornerstone of learning as a stimulated human activity.

The recently empowered learner will naturally want to test new knowledge by using it in different contexts. Simple observation of our experience as human beings shows that if our knowledge of a subject is solid, it will also work in different settings or environments. This is good as a positive test, but the novice learner may still fail in several respects, for example by overlooking assumptions. In the closed-loop interaction that is desired in a training environment, this can be seen as experimentation, subject to exactly the same rules as any experimental work. The instructor can help to steer this process: stimulating the testing when they see value in it, helping to highlight and avoid the pitfalls, validating the outcomes, and so on. In this way the instructor is directly stimulating critical thinking.

[A note regarding its usage in training quality assessment] Usage independence gained in active learning environments is a measure of training effectiveness. It can usefully be associated with each learning instance. In particular, a well-designed training exercise with a well-defined learning outcome can be seen as a gauge for measuring effectiveness in a focused way. If this technique is applied systematically in a training instance (a course, a programme), overall quantitative data about training effectiveness may emerge. These data will need to be validated via independent testing, confronted with other assessment methodologies, and ultimately subjected to a critical appraisal of their value.

Quotes from "Peer Instruction: Getting Students to Think in Class" by Eric Mazur, PDF available here

".. while listening is largely a passive activity, reading more easily engages the mind and it allows more time for the imagination to explore questions."

"the first exposure to new material comes from reading printed material before the lecture reading."

to be continued .... SEE QUESTIONNAIRE here

Feedback from learners

Feedback from learners is aimed at:

  • assessing learner reactions to teachers and teaching thus providing context-specific feedback that can improve teaching within a particular course;
  • assessing learner reactions to class activities, assignments, and materials thus giving instructors information that will help them improve their course materials and assignments;
  • assessing learner reactions to course organisational aspects, thus providing the organiser with information that will help him or her to improve the course organisation.

Systematic immediate feedback

In a training course, getting feedback at the end of the event is necessary, as the participants may (and should) have developed encompassing, integrated views by then. However, it is vastly insufficient. Questioning participants frequently during training delivery is rich in information and has very interesting effects. But when should this happen? And how can it be induced so that, like a drug, it has as many positive effects and as few adverse effects as possible?

When? Ideally at natural breakpoints, such as the end of an exercise, a shift to a different subject, or right after a wrap-up session.

How? It should be very focused and quick to execute. The instructor should think of a clearly stated question that has a binary (yes/no) or graded (0-5) response. Ideally the instructor should write the question down and display it, ensuring that everybody knows what the question is at the same time and is aware of the answering method. Then, the instructor collects the answers and records them in a tally.

This is Instant Feedback.
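As a toy illustration of recording and displaying such a tally, here is a minimal sketch (the 0-5 scores are hypothetical; in practice you would count raised fingers or collect answers with an audience-response app):

```python
from collections import Counter

# Hypothetical graded (0-5) instant-feedback responses, one per learner.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

# Tally the responses and print a quick bar chart for the instructor.
tally = Counter(responses)
for score in range(6):
    print(f"{score}: {'#' * tally[score]} ({tally[score]})")
```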

Several methods have been tested, some of them using technology (Clickers, Socrative, Learning Catalytics) and some not (the fist-of-five method). The choice is made according to the availability of the means and to how engaging the audience finds it.


Fist or Five Feedback

by Allegra Via, Kristian Rother and Pedro Fernandes. From: Academis, by Kristian Rother.

How well was your explanation understood? How useful was an exercise? Is your class enthusiastic or frustrated? During a one-week programming course at IGC, Portugal, we asked after each training module:

"How much did you learn during the lesson? Please show one to five fingers. Raise your hands!"

Then we counted how often each number of fingers occurred. This way, the trainees felt more encouraged to provide critical feedback than if we had simply asked:

"Did you understand it or not?"

Trainees do not necessarily use all five fingers. Our course participant Patricia commented:

"It is a good feedback and it is immediate. Although I feel sometimes a little bit shy to express my opinion."

The method needs seconds to execute and no preparation, which is a plus for the teacher. But trainees benefit as well. Our course participant Rita commented:

"I like it because it makes me think. It forces me to review and figure out whether I understood the subject or not and how much. It also shows you are interested."

This feedback is not an objective control of students' knowledge; rather, it gives an indication of how confident they feel at a given point. You can suggest examples of what a zero or a five means, as in the linked article. The fist or five technique has also been recommended as a voting procedure to reach consensus in group discussions. You may test the method after giving a presentation to evaluate yourself.

The numbers we accumulated over more than a dozen sessions using one consistent method helped us to keep the course on track. The counting itself took a bit of practice to do quickly. When we used the Fist or Five technique for the first time in 2012, with a group of 20 people, we asked for each number from zero to five separately. This took a bit longer. For us, the main value of the Fist or Five technique is that it is easy to execute, quantitative, not stressful, and immediate, and it can be repeated many times during a course. We hope you will see lots of 'high fives' in your next course!


Carpentry assessing practices

Notice that the Carpentry teaching practices quoted in session 2 - Sticky notes; Minutes cards; One-up, one-down - are forms of Instant Feedback.

Instant Feedback: benefits worth noticing

  • For the LEARNER. Carefully implemented instant feedback obliges learners to introspect, to answer the question for themselves first (do I really know this? how easy is it for me to do this by myself?). In this way they become aware of their own progress, which is the smartest way to gain self-confidence. When questioned in the end-of-course questionnaire, they are much more able to make encompassing self-assessments.
  • For the INSTRUCTOR. Multiple ways of checking whether what has just been done was effective, depending on the quality of the question. A useful assessment of the quality of the materials and of the instructor's performance. A way of identifying learners who may be lagging behind and need more attention. A way of identifying learners who are getting ahead of the rest of the group and can become more active, receive harder assignments, help their colleagues, etc. A way to judge whether the pace of training delivery is correctly chosen for the audience.

Socrative

Getting instant feedback with the app Socrative:

"Socrative is your classroom app for fun, effective classroom engagement. No matter where or how you teach, Socrative allows you to instantly connect with students as learning happens."

(You can) quickly assess students with prepared activities or on-the-fly questions to get immediate insight into student understanding. Then use auto-populated results to determine the best instructional approach to most effectively drive learning.

Short term feedback - assessment of training quality, participant and instructor performance

Short term feedback is a very important strategic evaluation of a course. It happens immediately at the end of the course, with the purpose of measuring the trainees' perception of: the quality of the training and its organisational aspects, the trainer's capacity to teach (performance), the adequacy of the training to their expectations, and the strengths and weaknesses of the course.

Examples of feedback questionnaires:

  1. This is the type of questionnaire we developed for ELIXIR Italy courses:

Feedback questionnaire for the ELIXIR Italy course on "NGS for evolutionary biologists: from basic scripting to variant calling" (Rome, 23-27 November 2015)

We adapt it to each new course.

  2. This is the type of questionnaire used to assess the quality of bioinformatics courses organised and delivered by the Gulbenkian Training Programme in Bioinformatics at the Instituto Gulbenkian de Ciência:

Feedback questionnaire for the course on "Bioinformatics using Python for Biomedical Researchers" (Oeiras, PT July 11th – July 15th 2016)

  3. And this is the set of common, shared questions used by most of the ELIXIR Nodes (countries).

Feedback questionnaire

Long term post-course feedback

Long term assessments (over 6 months after a course) are rather difficult. First, because learners frequently change jobs or cities and become more difficult to contact. Secondly, because they forget, as all of us do; in particular, they forget the hidden details of what worked for them. Those details may matter, because what we are looking for here is an assessment of the impacts that endure.

Interviewing former course participants would be a possibility, but it requires a lot of time. Sending them short questions by e-mail has worked with a yield of about 30%; so, unless you are training at least several hundred people, you are likely to end up with a very small number of answers (e-mailing 100 former participants, for instance, would yield roughly 30 replies). Currently we see some hope in the usage of social networks to collect valuable data.

Critical appraisals often happen in casual conversations. One should take notes to record them.

Example: Pedro Fernandes, Pooja Jain, Catarina Moita Training Experimental Biologists in Bioinformatics, Adv Bioinformatics. 2012;2012:672749. doi: 10.1155/2012/672749. Epub 2012 Jan 31. (Open Access)

Dealing with (bad) feedback

What to do with bad feedback:

  • Trainees' feedback should be considered alongside other forms of quality evidence:
    • Review what they have effectively learned (in exams)
    • Consider your own experience of teaching
    • Discuss with colleagues and friends
    • Look at the feedback from past sessions of the same course
    • Look at the response rates
    • Look at the counter examples (contradictions)
    • Look at repetitive patterns (not at one single answer)
  • Breathe deeply
  • Humans focus more on negative feedback than on positive (you are not alone)
  • Try to see the point in the criticism, learn from it
  • Don’t take it personally (easier said than done). Try to focus on what they say about what you do (not who you are)

Extra - Going deeper on Training Evaluation

There are several methods that can be used to evaluate training. One of the most referenced comes from Donald Kirkpatrick (1924-2014).

The Kirkpatrick Model

  • Level 1: Reaction The degree to which participants find the training favorable, engaging and relevant to their jobs

  • Level 2: Learning The degree to which participants acquire the intended knowledge, skills, attitude, confidence and commitment based on their participation in the training

  • Level 3: Behavior The degree to which participants apply what they learned during training when they are back on the job

  • Level 4: Results The degree to which targeted outcomes occur as a result of the training and the support and accountability package

This model has been revised and expanded several times, see for example:

http://www.kirkpatrickpartners.com/OurPhilosophy/TheNewWorldKirkpatrickModel/tabid/303/Default.aspx

Applying the Kirkpatrick model and its variants is not easy. One needs to be very careful in checking pre-requisites, assumptions and options in the measurement methods.

The evaluation of training efficiency is a difficult subject. There is an obvious need to standardise to allow for the comparison of observations.

You may like to read an article about applying Kirkpatrick's methods. https://www.mindtools.com/pages/article/kirkpatrick.htm

Extra - 50 classroom assessment techniques (CATs) by Angelo & Cross

[Here](./docs/angelo_and_cross_50_cats.pdf) you can find the 50 CATs by Angelo and Cross. These are fifty assessment techniques, grouped by purpose, which can be used in teaching and in training. Some of them apply better to university semester courses or high-school classes, whereas others may turn out to be useful in training courses/sessions as well. They are fully described and discussed in the book "Classroom Assessment Techniques: A Handbook for College Teachers" (1993) by the same authors.

Techniques for Assessing Course-Related Knowledge & Skills

  • Assessing Prior Knowledge, Recall, and Understanding
  • Assessing Skill in Analysis and Critical Thinking
  • Assessing Skill in Synthesis and Creative Thinking
  • Assessing Skill in Problem Solving
  • Assessing Skill in Application and Performance

Techniques for Assessing Learner Attitudes, Values, and Self-Awareness

  • Assessing Students’ Awareness of Their Attitudes and Values
  • Assessing Students’ Self-Awareness as Learners
  • Assessing Course-Related Learning and Study Skills, Strategies, and Behaviors

Techniques for Assessing Learner Reactions to Instruction

  • Assessing Learner Reactions to Teachers and Teaching
  • Assessing Learner Reactions to Class Activities, Assignments, and Materials

A description of the book written by the authors can be found here.