Evaluation Models – Seeing, Believing, and Foreseeing – p 103 q2

Identify a recent instructional design or performance technology project on which you have worked. If you have not worked on any such project, interview someone who has. Describe how you did (or would) evaluate the project using one or more of the evaluation models explained in this chapter.


Seeing, Believing, and Foreseeing

(Initially, I had planned to write about the instructional strategy I have started to use for the acoustics portion of my Environmental Tech class. Although I have begun integrating more sound samples, sound-meter work, and case-study discussion into the class, the required exercises and tests are not yet designed to show evidence of sound simulation alongside the basic calculations, so the strategy cannot yet be carried through to completion. Until I find better exercise designs, I cannot discuss how I might evaluate my approach to basic acoustics instruction.)

Instead, I will discuss the introduction of daylighting and solar geometry in the Environmental Tech class that I teach every spring. The subject had not been taught in many US architecture schools for some time and is now enjoying a rediscovery. While the issues of global warming, energy consumption, dependency on traditional fuel resources, and building energy demand are presented with relative ease and accepted positively by the student body, developing actionable knowledge that student designers can learn, apply, and integrate into their growing design philosophy requires much more than traditional prescriptive lectures can deliver. This valuable body of knowledge, even at an introductory level, is better mastered, in my opinion, through experiential learning, systematic project application, and post-project reflection. Ideal proof that the topic had been taken to heart and mind would be seeing the knowledge and skills applied with commitment and precision in the students' studio design projects (which I do not teach).

Integral to this module's instruction in solar geometry is a portable seven-ring device that can simulate solar angles for different locations and seasons, known as the heliodon (see http://www.hpd-online.com/documents/heliodon_manual.pdf). Through demonstration, discussion, and skills transfer, students tackle familiarization exercises about the sun using the heliodon. Additional graphic tools are layered in sequence, aimed at teaching students to design solar-shading responses. Exercises follow that task students with demonstrating these new solar-response skills. Finally, the students form small groups to undertake a major project: designing, refining, testing, analyzing, and evaluating a solar-response design that avoids direct sun exposure during the summer while encouraging direct solar access during the winter. To cap off the module, an examination is given, with questions ranging from quick recall to basic computation to graphic application of solar geometry. The module is conducted over five to six weeks.
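For readers unfamiliar with what the heliodon demonstrates physically, the same solar angles can be computed directly. Below is a minimal Python sketch, not part of the module itself, using the standard Cooper approximation for solar declination and the textbook altitude relation; the function names and the 40°N example latitude are illustrative choices of my own.

```python
import math

def solar_declination(day_of_year: int) -> float:
    """Approximate solar declination in degrees (Cooper's equation)."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def solar_altitude(latitude_deg: float, day_of_year: int, solar_hour: float) -> float:
    """Solar altitude angle in degrees for a given latitude, date, and solar time.

    Uses sin(alt) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(hour_angle),
    where the hour angle is 15 degrees per hour away from solar noon.
    """
    lat = math.radians(latitude_deg)
    dec = math.radians(solar_declination(day_of_year))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_alt))

# Solar noon at 40 degrees N on the summer (day 172) and winter (day 355)
# solstices -- the two extremes a shading design must reconcile.
print(round(solar_altitude(40.0, 172, 12.0), 1))  # high summer sun, about 73 degrees
print(round(solar_altitude(40.0, 355, 12.0), 1))  # low winter sun, about 27 degrees
```

The roughly 47-degree spread between those two noon altitudes is exactly what the students' shading projects exploit: a fixed overhang can block the high summer sun while admitting the low winter sun.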

My own opinions of the module's present state and its pros and cons notwithstanding, I feel that if the module were to be evaluated, it would favor a combination of two models. I opt for this in consideration of the other two professors of the Environmental Tech course, and of the students, who are quite open about discussing the course. If candid conversations can establish what works and what does not, what worth the content has and how it contributes to better design, which instructional steps engage and sustain student effort, and how well these (should) work, then the Brinkerhoff Success Case Method presents itself as an obvious choice for summative evaluation.

However, the Kirkpatrick Training Evaluation Model would also be quite constructive. With some feedback from the summative evaluation, the Kirkpatrick model can contribute to a formative evaluation process for the module. I very much like Kirkpatrick's Level 1, which assesses reaction (engagement and motivation). This alone is quite important, since the subsequent levels depend on how fully students decide to get involved, and on whether they engage for grades alone or for deeper reasons. If the students choose to engage and commit positively, then Level 2 (learning) can be employed and the instruction evaluated properly: did the strategies of learning the content and skills, applying them as tasks, and evaluating them as procedural proofs work, and how well? For this particular module on solar geometry, it would be more difficult to apply the third level, behavior (transfer). However, coordination with the design studios may afford opportunities to observe whether students have absorbed and applied the topic as one of their design strategies. The fourth level, results, would be evidenced by how students exercise solar knowledge in their design practice; while observing this is not impossible, it may not be easy to document empirically.

Nevertheless, to summarize, I would recommend a straightforward combination of the Brinkerhoff and Kirkpatrick models of evaluation. By applying the Brinkerhoff model first, I can gauge the buy-in for continuing with further evaluation; the opportunity to activate the Kirkpatrick model will then make itself obvious.


~ by bdytoc on September 30, 2012.
