(See the previous posting, Part I, first.) Progressing the debate around impact evaluation means unpacking some of the different issues involved in it. In doing this, it should not be assumed that those on various sides of the debate will always line up the same way on each of the sub-issues. In my doctorate (Duignan, 1997) I did this type of analysis for the positivist versus constructivist debate, which was hot for a while within evaluation circles. Drawing on that analysis, we can now look at the sub-issues which underlie the impact evaluation debate, because there are a number of similarities with that earlier debate. The sub-issues which can underlie the impact evaluation debate, and two contrasting positions that can be taken on each of them, include the following: Continue reading
Standardized visual evaluation plans – quick and effective
I’ve not had much time to blog recently due to building a number of large outcomes models for public sector organizations; having input into the further development of DoView; and presenting at international evaluation conferences on Easy Outcomes, DoView and related evaluation and outcomes topics. A lot has been happening, though: from version 1.14, DoView is able to create web page versions of its visual outcomes models. I’ll do several postings showing how this new feature can be used. The first point is that, once an outcomes model has been built in DoView, the user can quickly create a web page version of the same model and have it put up on an intranet or the internet. You can see (and use) a number of examples at OutcomesModels.org. The second great thing is that you can now produce visual evaluation plans that will save you a great deal of time. I delivered a paper on this at the recent European Evaluation Society Biennial Conference in Lisbon. Continue reading
Avoiding the question: Defining quality assurance versus program evaluation
Sometimes it’s more useful to avoid initially answering a question that’s posed in a particular way because there’s a better way of addressing the concern that lies behind the question. Such is the case if you’re ever asked to define the difference between quality assurance (or monitoring) and program evaluation.
Treating the question as a theoretical one and attempting to find a definition that works is a bit like building a house while someone keeps asking you to stop and define, from a theoretical point of view, the difference between the kitchen and the dining room. Some people do things in the dining room that others do in the kitchen, and some do things in the kitchen that others do in the dining room. Still other people don’t have any theoretical problem at all, because they have a kitchen/dining area where they do both.
A more fruitful way of working with the question of the difference between quality assurance (or monitoring) and program evaluation is to identify all of the tasks you would do under each of them. Once you’ve done that, you may find you don’t need to spend much time defining the difference between the two, provided everybody concerned is clear about which of the underlying tasks are, and are not, being done, and by whom. Continue reading