Discussing, reflecting upon & evaluating THINQ

Last Updated on January 22, 2021

Discussing, reflecting upon, and evaluating the outcomes of the THINQ approach for nurturing questioning skills can be beneficial both for the participants and the facilitator. On the one hand, participants have the opportunity to actively think about the learning topic, their questioning (and thinking) skills, and whether these have improved. They can also state their opinions and contribute towards improving and customizing the process to better serve their needs and preferences. On the other hand, the facilitator gains valuable feedback regarding the overall approach, as well as its outcomes and impact on learner achievement and improvement.

Overview of the THINQ Approach

A. Group discussion and (self-)reflection

A group conversation is held in which participants are invited to freely share their views on a number of issues. The facilitator uses a set of questions as discussion prompts, but participants can also pose their own. Based on their focus, the prompting questions can be classified into three groups:

i. Learning topic
•Do you feel that now you know more about the topic?
•Did your view or understanding of the topic evolve or change? How did this happen?
•Did your interest in the topic grow?
•Did you learn something unexpected or surprising? 
ii. Skills
•Did you acquire any new skill, perspective, or technique regarding question generation? 
•Did you discover something about yourself?
•Which was your greatest strength during the process? What areas would you like to improve?
•Was there something that you could not achieve? Why?
•If you would do this process again, what would you do differently?
•What was the effect of teaming up with another person? 
•How did listening to the other person’s ideas affect your viewpoint or ways of thinking? 
•What happened when you listened to all the other participants’ questions? Did you get inspired or discouraged?
•What can you do with what you learned today? Where else could you apply it? Can it help you in other areas of your life?
•What else would you like to learn about?
iii. Process
•Did you enjoy the process? Did you have fun and laugh?
•Were you bored or excited during the activity?
•Did you feel that time passed fast?
•Did you feel creative?
•Did THINQ help you come up with more and better questions?
•Was the process particularly easy or hard for you?
•Did you experience any problems?
•Would you like to do it again?
•Do you feel that you can conduct the process on your own?
•What were the strong and weak points of the approach?
•Can you suggest any adjustments or improvements?
•Can you think of additional or alternative categories that could be used?
•Are you aware of any similar or complementary approaches?

B. Evaluation of the process and its outcomes

Facilitators can employ two complementary approaches for evaluating THINQ and its outcomes: qualitative subjective assessment and quantitative analysis.

B1. Qualitative subjective assessment

Facilitators use their personal observations and notes, in combination with participants’ verbalizations during the group discussion and self-reflection session, to study the three aspects cited in the previous section (learning topic, skills, process). Some additional questions for guiding observations from the facilitator’s viewpoint include:

i. Learning topic
Did participants ask…
•all the questions that you had in mind?
•questions covering a wide spectrum of areas?
•questions that you did / could not think of?
•questions that you would not expect to be asked for this topic?
•any amazing questions?
ii. Skills
•Were participants challenged to think harder and in new ways?
•Did every subsequent step yield more and better questions?
•As time passed, did the production of new questions become easier or harder?
•Did participants actively collaborate?
•Were there vivid conversations and arguments during the discussion phases?
•Do you consider that (some of) the participants got better at questioning?
•Did you learn something?
•Did any of your skills improve?
iii. Process
•Was the atmosphere during the activity relaxed and enjoyable? 
•Did you experience any particular problems?
•Were the participants engaged? How many did not actively participate or looked bored? 
•Were there any participants who would usually not be active questioners but changed their stance?
•How many participants would be willing to do it again?
•If you followed Step 8 (i.e., iteration), what was the reaction of the participants?
•Would you use THINQ again? How often?

B2. Quantitative analysis

Dori and Herscovitz (1999) concluded that three indices of question-posing capability are the number, complexity, and orientation of the questions posed. An additional useful criterion, though one that is rather hard to measure, is originality (i.e., uniqueness), which is directly related to creativity. More specifically (note: in order to score the following assessment criteria, you will need to keep track of the questions generated by each participant at each step of the process):

I. Number of questions

The number of questions (as long as they are relevant to the topic) can be an indicator of divergent and creative thinking. In addition, a large difference in the number of questions between Step 2 and the end of the process provides evidence that question-generation skills have been successfully nurtured.

To measure this criterion, record the number of generated questions after:

  • Step 2: Per participant.
  • Step 5: Per participant.
  • Step 6: Per group. If several iterations are made, record results separately.

The difference between the total number of questions after each step measures group improvement.

To measure individual improvement:

  • Compare the total number of questions between Steps 2 and 5.
  • Compare the total number of questions between Step 5 (sum the questions of each group member) and Step 6. A significant increase is also an indicator of successful collaborative thinking.
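To illustrate the bookkeeping, here is a minimal Python sketch assuming a simple record of counts per participant and step; the participant names, step labels, and helper functions are hypothetical, not part of THINQ itself.

    # Hypothetical record of question counts per participant: {participant: {step: count}}
    counts = {
        "Anna":  {"step2": 3, "step5": 7},
        "Babis": {"step2": 2, "step5": 5},
    }
    group_counts = {"step6": 15}  # questions produced jointly by the group at Step 6

    def individual_improvement(participant):
        """Difference in question count between Step 5 and Step 2."""
        c = counts[participant]
        return c["step5"] - c["step2"]

    def group_gain():
        """Compare Step 6 (group output) with the sum of members' Step 5 counts."""
        step5_total = sum(c["step5"] for c in counts.values())
        return group_counts["step6"] - step5_total

    for name in counts:
        print(f"{name}: +{individual_improvement(name)} questions from Step 2 to Step 5")
    print(f"Group gain at Step 6 over summed Step 5 counts: {group_gain()}")

A clearly positive group gain would suggest, as noted above, that collaborative thinking worked well.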

II. Complexity

A gradual increase in the complexity of the generated questions is an indicator of improvement in question quality, as well as of higher-order thinking.

In order to use this criterion, classify all questions into two groups:

  1. Simple/closed questions: They have a single, unambiguous answer or require recall of information.
  2. Open-ended questions: They require higher-order thinking, reflection, and understanding to frame and to answer.

An alternative naming for the groups is suggested by Di Teodoro et al. (2011). They use the terms surface (for questions that prompt students to imitate, recall, or apply taught knowledge and information through a mimicking process) and deeper (for questions that provide the opportunity to create, analyze, or evaluate; they are usually open-ended and divergent in nature).

You can measure group and individual improvement following the approach described in the previous section, but in this case compare the totals per question group (simple vs. open-ended).
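As a rough sketch (assuming each question has already been labeled “simple” or “open”; the labels, data, and function below are illustrative only), the share of open-ended questions per step can be tallied as follows:

    # Hypothetical complexity labels ("simple" or "open") assigned to each question per step.
    labels = {
        "step2": ["simple", "simple", "open"],
        "step5": ["simple", "open", "open", "open", "simple"],
    }

    def open_ended_share(step):
        """Fraction of open-ended questions generated at a given step."""
        step_labels = labels[step]
        return step_labels.count("open") / len(step_labels)

    for step in ("step2", "step5"):
        print(f"{step}: {open_ended_share(step):.0%} open-ended")

A rising share of open-ended questions from step to step would point to improving question quality.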

III. Orientation

This criterion is often used to analyze and evaluate the quality of the questions. Assessment works similarly to “II. Complexity” but a different classification approach is used, which is often context-dependent. There are several alternatives to choose from, e.g.:

Dori and Herscovitz (1999) use three orientation attributes:
a) phenomenon and/or problem description;
b) hazards related to the problem;
c) treatment and/or solution.

Chin and Chia (2004) classify questions under four categories:
1) information-gathering questions, which mainly seek basic factual information;
2) bridging questions, which attempt to find connections between two or more concepts;
3) extension questions, which lead students to explore beyond the scope of the problem, resulting in creative invention or application of the newly acquired knowledge;
4) reflective questions, which are evaluative and critical, and sometimes contribute to decision-making or a change of mindset.

King (1994) categorizes questions into three groups according to what they ask for:
a) Factual questions: ask for recall of facts or other information explicitly covered in the lesson.
b) Comprehension questions: ask for a process or term to be described or defined.
c) Integration questions: go beyond what was explicitly stated in the lesson, connect two ideas, or ask for an explanation, inference, justification, etc.

King also maps these groups to corresponding knowledge-construction indicators, ranging from low to high in complexity:
a) Knowledge restating: simple statements of fact or information gleaned directly from the lesson or prior knowledge.
b) Knowledge assimilation: definitions, descriptions, and other material paraphrased in students’ own words.
c) Knowledge integration: new connections that go beyond what was provided in the lesson, such as explanations, inferences, relationships between ideas, justifications, and statements linking lesson content to material from outside the lesson (prior knowledge and personal experience).

The questions can also be mapped to the level of cognitive learning that they support according to (the revised version of) Bloom’s taxonomy (Anderson & Krathwohl, 2001):
1) Remember: facts, terms, basic concepts.
2) Understand: basic comprehension of facts and ideas (organize, compare, translate).
3) Apply: apply acquired knowledge, facts, techniques, and rules in a different way or transfer them to a different context.
4) Analyze: examine and break information into parts, identifying motives, causes, patterns, and relationships.
5) Evaluate: make judgments about information, the validity of ideas, or the quality of work based on a set of criteria.
6) Create: compile information in a different way by combining elements in a new pattern or proposing alternative solutions.
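For example, once each question has been tagged with a category (here, hypothetically, a Bloom level), the shift in orientation across steps can be summarized with a simple tally; the tags and data below are made up for illustration:

    from collections import Counter

    # Hypothetical Bloom-level tags assigned to each question at two points in the process.
    tags = {
        "step2": ["remember", "remember", "understand"],
        "step6": ["understand", "apply", "analyze", "create", "remember"],
    }

    for step, step_tags in tags.items():
        distribution = Counter(step_tags)
        total = len(step_tags)
        summary = ", ".join(f"{level}: {n}/{total}" for level, n in distribution.most_common())
        print(f"{step} -> {summary}")

The same tally works for any of the classification schemes above; only the set of tags changes.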

IV. Originality

Originality is a common criterion of creativity tests, along with fluency, flexibility, and elaboration (Runco et al., 2010). Originality measures how infrequent, unexpected, or novel a question is; it is considerably subjective and quite hard to score. An increase in the originality of the produced questions throughout the process is an indicator of creative thinking.

Assessment works similarly to “II. Complexity”, but this time score each question on a 1-5 scale:

  1. Very common: anyone would think of it
  2. Ordinary: many would think of it
  3. Unfamiliar: some would think of it
  4. Rare: very few would think of it
  5. Unique: nobody would think of it

Depending on the evaluation context and goals, you can score the above scale in relation to the group or to the world. For example, a score of 5 could be interpreted as “nobody else in the group would think of something similar” or “nobody else in the world would think of something similar.”
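One simple way to aggregate these ratings (a sketch with made-up scores; the data layout is an assumption) is to compare the mean originality per step:

    from statistics import mean

    # Hypothetical originality scores (1-5) assigned to each question at two steps.
    scores = {
        "step2": [1, 2, 2, 1],
        "step6": [2, 3, 4, 5, 3],
    }

    for step, step_scores in scores.items():
        print(f"{step}: mean originality {mean(step_scores):.2f}")

An upward trend in the mean, interpreted cautiously given the subjectivity of the scores, suggests growing creative thinking.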

Why do this?

Discussing, reflecting upon, and evaluating the outcomes of any learning process can be beneficial both for the participants and the facilitator. These activities offer an opportunity to think about what was accomplished, they support personal and group development, and they contribute towards improving and extending the process and its application. Depending on the current context (goals, needs, available resources), the rigor with which they are implemented can range from a brief casual discussion to an extensive formal study. In any case, they provide closure, nicely sum up the overall experience, and can work as a stepping stone for advancing it to the “next level.”



Subscribe to our blog to be the first to read the next post in the THINQ series!





References

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.

Chin, C., & Chia, L. G. (2004). Problem-based learning: Using students’ questions to drive knowledge construction. Science Education, 88, 707-727.

Chin, C., & Osborne, J. (2008). Students’ questions: A potential resource for teaching and learning science. Studies in Science Education, 44(1), 1-39. DOI: 10.1080/03057260701828101

Di Teodoro, S., Donders, S., Kemp-Davidson, J., Robertson, P., & Schuyler, L. (2011). Asking good questions: Promoting greater understanding of mathematics through purposeful teacher and student questioning. The Canadian Journal of Action Research, 12 (2), 18-29.

Dori, Y. J., & Herscovitz, O. (1999). Question-posing capability as an alternative evaluation method: Analysis of an environmental case study. Journal of Research in Science Teaching, 36, 411-430.

King, A. (1994). Guiding knowledge construction in the classroom: Effects of teaching children how to question and how to explain. American Educational Research Journal, 31: 338–368.

Runco, M. A., Millar, G., Acar, S., & Cramond, B. (2010). Torrance Tests of Creative Thinking as Predictors of Personal and Public Achievement: A Fifty Year Follow-Up. Creativity Research Journal, 22(4). DOI: 10.1080/10400419.2010.523393

Dimitris is a Principal Researcher at the Institute of Computer Science of the Foundation for Research and Technology - Hellas (FORTH). He specializes in Human-Computer Interaction and also holds a Certificate of Competency for the Torrance Tests of Creative Thinking (TTCT). Since 2014, he has been developing and delivering workshops and events that introduce the concepts and practice of design, creativity, and creative thinking to children, parents, teachers, and the general public. To date, he has run more than 55 workshops in 5 countries with a total of about 3,500 participants.
