This paper presents a critical discussion of our conceptual and empirical understanding of how evaluation and assessment practices can be used to improve the value of environmental decision and information support tools (EDISTs). The Matthews et al. (2011) framework for understanding the life cycle of EDISTs, from basic science through to use within wider societal governance and media processes, is used as the basis for identifying points where additional evaluation and assessment might offer benefits. Difficulties in evaluating the ultimate outcomes of using EDISTs are examined, and the current lack of an accepted framework for conceptualizing organizations, and the kinds of impacts that EDISTs might have within them, is discussed. What we know about how to conduct evaluation and assessment at each point is briefly reviewed alongside what we know empirically about tool impacts. The paper ends with some recommendations regarding both the framing and measurement of impact across the EDIST life cycle from adoption to outcome.
Keywords: Decision support; Information support; Evaluation; Adoption; Impact

Matthews, K.B., Rivington, M., Blackstock, K., McCrum, G., Buchan, K.
and Miller, D.G. (2011). Raising the bar? The challenges of evaluating the outcomes of environmental modelling and software. Environmental Modelling and Software 26:247-257.

McIntosh, Evaluation of environmental decision and information support tools: from adoption to outcome

1. INTRODUCTION

How can we ensure, and know, that the model-based tools we produce to support environmental decision formulating and making processes are effective? How can we act to ensure that those tools, and the underpinning mathematical, computational and software technologies, are used, are useful, and help us collectively as a species, and more specifically help organizations and agencies with policy and management responsibilities, navigate a successful course around the interfaces between economy, society and the environment? With the population of the world due to grow from around 7 billion to 9.3 billion by 2050 (United Nations, 2010) it is clear that the need to generate and implement effective policy change and direction is pressing.
It is also clear that environmental decision and information support tools (EDISTs) (see Díez and McIntosh, 2011 for a definition of the term), from GIS through to sophisticated integrated assessment models and tools, are used, and have the potential to positively influence policy and management processes and outcomes (Díez and McIntosh, 2011; Nilsson et al., 2008; van Delden et al., 2011). What is less clear is the extent to which those tools are as effective as they could be, to what extent they are effective at all, and what effects they have in the first place (McIntosh et al., accepted). These gaps in our knowledge can be filled through the use of assessment and evaluation practices, both to inform and improve EDISTs, and to contribute to a more robust understanding of their role in the complex and ever-changing world of environmental policy and management.

The objectives of this paper are to:
- Identify a range of points across the development and use of an EDIST that could benefit from more focused use of assessment and evaluation;
- Critically review what we currently know about the kinds of impacts and outcomes that EDISTs have;
- Critically discuss ways in which assessment and evaluation might benefit each life cycle point, and the overall impacts and outcomes of EDISTs.
To enable this discussion it will first be necessary to provide a conceptual framework which articulates the meaning of impact and outcome, and links these concepts to the life cycle of an EDIST from development through adoption to use. The focus of this paper will be on the use of EDISTs by organizations in particular, which the reader should bear in mind.

2. UNDERSTANDING IMPACT AND OUTCOME

How are EDISTs developed? How do they relate to the natural, social and computational science which underpins them? How are they used and what kinds of impacts do they have? Can such impacts be measured, and if so how? Nilsson et al. (2008) present evidence demonstrating the use of a range of policy assessment tools across European countries, including some model-based tools that may reasonably be termed EDISTs, and argue that it is often difficult to tease out the impact of those tools from the complex web of influences which bear on policy processes over time. In a similar vein, Matthews et al.
(2011) argue that whilst it is possible to evaluate the outcomes of processes which underpin the delivery of broader social, economic or environmental outcomes, unpicking the contribution of EDISTs to the complex web of influences on the achievement of those broader outcomes is significantly more difficult, and may be impossible. Considering the notions of impact and outcome as they relate to EDISTs requires a conceptualization of how such tools are generated and how they relate to both science and policy. Impact and outcome can only really be defined and discussed in the context of such a conceptual framework. Building on the 'consultancy model' of McCown (2002), Matthews et al. (2011) provide a framework for doing so (see Figure 1). The framework itself can be viewed as a description of the overall process of knowledge transfer from science to policy, with EDISTs being the product of cyclical development processes, which themselves are products of cyclical science knowledge generation processes, both embedded within larger cyclical governance and social processes which determine funding patterns and use opportunities. In the Matthews et al. (2011) framework environmental modeling might play a role in the research cycle, through helping to generate and test theories, but such modeling is not intended to be policy-relevant.
It is not until the development cycle comes into play that modeling technologies become focused on answering policy or management questions, on informing decision-making and action taking, i.e. take the form of EDISTs. Matthews et al. (2011) identify assessment and evaluation opportunities across this set of inter-linked processes as occurring: through peer review (research cycle); through a set of processes which can inform the development and influence the usefulness of EDISTs (validation, reliability, usability, interpretation and relevance); through the periodic evaluation of broader social, economic and environmental outcomes by governmental and non-governmental agencies; and through subsequent periodic reviews of development and research priorities by those agencies.

Figure 1. A framework for linking decision and information support tools (see the development cycle) to science (see research cycle) and societal use (see operations cycle) (from Matthews et al.
, 2011)

We will not consider peer review evaluation processes within the research cycle any further here, nor processes of research or development priority evaluation. Instead we will focus on assessment and evaluation processes within the development cycle, and on outcome evaluation processes. Matthews et al. (2011) argue that the main class of impacts and outcomes which it is in principle possible to evaluate are those labeled 'process effects' – the set of impacts and outcomes which arise as a consequence of developing and using EDISTs individually or in organizations. Outcomes in the sense of improvements to social well-being, economic success or environmental change cannot be evaluated in the same way, for they are less easily measurable, and where they are measurable they may be subject to contestation. Further, being able to attribute the causes of any such outcome to the use of one or a set of EDISTs is likely to be difficult or impossible, because of the time-lagged nature of action effects on larger scale social, economic or environmental systems, and because of the sheer complexity of the web of causal influences on those outcomes. Using the example of a communicating climate change project, Matthews et al.
(2011) demonstrate that outcomes are not necessarily related to process effects (or outputs) from EDIST use. They found that a range of outcomes were identified, such as participants changing their attitudes towards climate change and enhanced discussion and awareness of adaptation strategies, but that those outcomes were not easily relatable to the measurable process effects due to the complexity of the social processes involved. That is, participants who rated the utility of the EDIST outputs they employed highly were not necessarily those who changed their attitudes and behaviors. In one sense Matthews et al. (2011) are correct, and the framework they developed is useful as the only extant (to the author's best knowledge) description of the relationship between science, EDIST development and use, and broader outcomes. However, one might expect that outcomes in the sense of broader scale social, economic or environmental change should only ever be evaluated periodically, and with regard to the use of a broader suite of policy instruments, tools and processes.
There is an issue of scale here – one should not really expect single EDISTs to be 'game changers', but one might reasonably expect that more widespread use of EDISTs or EDIST types (e.g. GIS) should create some form of broader but measurable outcome. For example, one might expect that the use of GIS to better map the spatial location of endangered species or habitats should play a key role in arresting the further decline of those species or habitats.
Such outcomes are measurable in principle over a period of 5-10 years or more. Matthews et al. (2011) conclude with a note of caution: the danger in an evaluation agenda is that the expectation is placed on researchers or EDIST developers to be able to convincingly and measurably demonstrate outcome-based impact from their work, but this cannot typically be achieved. In response to this position one might reasonably argue for placing less emphasis on trying to understand the relationships between EDIST use and outcome, and instead focusing on better understanding the relationships between EDIST use and 'process effects' – the kinds of impacts which occur with regard to individuals and organizations using EDISTs. Doing so would reposition broader outcomes as being a function of the behaviors of many people or organizations, or of many behaviors of the same people or organizations, and therefore only indirectly a consequence of EDIST use. Assessment and evaluation activities should then focus on improving our understanding of the relationships between EDISTs and individual or organizational behavior as the intermediate step.
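The habitat-mapping example above suggests what periodic outcome measurement might look like in practice. The sketch below is a toy illustration only (all figures are hypothetical): it uses a least-squares slope of habitat area against year as a simple indicator of whether decline has been arrested over a monitoring period.

```python
# Toy illustration (all figures hypothetical): testing whether the decline of a
# mapped habitat has been arrested over a 10-year monitoring period, using a
# simple least-squares slope as the outcome indicator.

def slope(years, areas):
    """Ordinary least-squares slope of area against year."""
    n = len(years)
    my, ma = sum(years) / n, sum(areas) / n
    num = sum((y - my) * (a - ma) for y, a in zip(years, areas))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Hypothetical habitat area (km^2) before and after GIS-supported protection.
years_before = [2000, 2001, 2002, 2003, 2004]
area_before = [120.0, 114.0, 109.0, 103.0, 98.0]   # steady decline
years_after = [2005, 2006, 2007, 2008, 2009]
area_after = [97.5, 97.0, 97.2, 96.9, 97.1]        # roughly stable

print(f"slope before: {slope(years_before, area_before):+.2f} km^2/yr")
print(f"slope after:  {slope(years_after, area_after):+.2f} km^2/yr")
```

Even this trivially simple indicator makes the attribution problem discussed above concrete: the slope may flatten for many reasons unrelated to GIS use, which is precisely why such measures say more about outcomes than about the contribution of any single tool.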
Doing so does not avoid the difficulties in assessing or evaluating outcomes; rather it recognizes that outcomes are complex aggregates of a whole range of individual and organizational behaviors, and should only be assessed or evaluated as such. To focus on assessing and evaluating individuals and organizations means that we need to refine our understanding of how 'process effects' are generated within organizations, and how desirable process effects might be promoted through particular EDIST development practices. We will start with a brief examination of how assessment and evaluation of organizations can be used to inform the development and enhance the adoption of EDISTs, before proceeding to the more problematic issue of assessing organizational change and impact in relation to EDIST use.

3. INCORPORATING ORGANIZATIONAL ASSESSMENT INTO EDIST DEVELOPMENT

There is clear empirical evidence from across the information systems literature that involving users in development is more likely to lead to positive adoption and outcomes from use (Díez and McIntosh, 2009). Such involvement should be viewed as a form of assessment and evaluation, for it involves gathering and analyzing information on users' needs, preferences and desires, and reflecting those appropriately within the design of an EDIST. Such involvement, particularly if geared towards understanding the usability of EDISTs, can be viewed as a trial assessment of the extent to which desirable 'process effects' are generated.
A growing body of professional practice is emerging amongst the environmental modeling and software community about how interactions with users (in the sense of whole organizations and single individuals within those organizations) should be structured to produce more useful and more used tools. Pioneering work over the past decade by commercial companies and academics (van Delden et al., 2011) has shown that a number of different roles are required within an environmental EDIST development project for success. Figure 2 depicts these roles and some of their relationships.

Figure 2. Major roles, responsibilities and interactions during the development of environmental EDISTs (from van Delden et al., 2011)

Critical to success is a clear distinction between the end user (person or persons representing an organization), the scientist (person or persons providing domain specific scientific knowledge into the models), the IT specialist (person or persons providing the skills to code and turn EDIST designs into software with appropriate interfacing) and the architect (the person able to engage with every role and ensure overall co-ordination). Between these roles a number of assessment relationships may exist.
The IT specialist(s) must assess user needs in terms of software functionality and interfacing, and assess the usability of the tool once built. The scientist must be able to assess the scientific and policy question needs of the end user, to assess the suitability of scientific models to answer those questions, and be able to evaluate the quality of the software implemented by the IT specialist. Central to the whole process is the ability of the architect to assess and evaluate what each party is doing, and to help ensure that there are no misunderstandings or miscommunications. How user needs should be captured, and how tool usability and usefulness should be assessed and evaluated, is less well developed within the environmental modeling and software literature. Methods for doing so are, however, significantly more developed within the interaction design literature, which is related both to information systems and more generally to product design.
For example, user needs can be elicited and used to inform EDIST design through a combination of user modeling, to capture the underlying ambitions, needs and desires of individual users (see Cooper et al., 2007), with work modeling (see Holtzblatt et al., 2005), to capture the ways in which EDISTs should interface with and improve users' work flow patterns. Usability analysis can then be employed to test EDIST attributes such as learnability, understandability etc. (see Tullis and Albert, 2008), the results of which can be incorporated into EDIST prototype refinement. Figure 3 shows a proposed scheme for employing these interaction design methods with the kinds of EDIST roles distinguished by van Delden et al. (2011).

Figure 3. Proposed user centered EDIST development approach showing key activities (rounded rectangles), activity inputs / outputs (labeled arcs) and role responsibilities (shaded areas and text at top indicate who does what)
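To make the usability analysis step concrete, one widely used instrument from the usability measurement literature is the System Usability Scale (SUS): ten alternating positively and negatively worded items, each answered on a 1-5 scale, yielding a 0-100 score. A minimal scoring sketch (the example responses are hypothetical, not drawn from any EDIST study):

```python
# Minimal sketch of scoring the System Usability Scale (SUS), a standard
# usability questionnaire: ten 1-5 Likert responses on alternating
# positive/negative items, scaled to a 0-100 score.

def sus_score(responses):
    """Compute a SUS score from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses):
        # Items at even index (1st, 3rd, ...) are positively worded:
        # score = response - 1; negatively worded items score 5 - response.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Hypothetical responses from one EDIST usability trial participant.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # → 85.0
```

Scores like this are, of course, only one of the 'process effect' measures a usability trial might feed back into prototype refinement; task completion times and error rates are equally standard.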
4. PROCESS EFFECTS AND ORGANIZATIONS

Measuring the impacts, or as Matthews et al. (2011) label them, the 'process effects' of EDIST use on organizations depends on one's view of what constitutes an organization – the attributes one might be interested in, or expect to change, will be determined in part by how one views organizations. The framework provided by Matthews et al. (2011) does not provide sufficient conceptual structure regarding organizational attributes to provide a basis for determining how to measure EDIST use impacts.
Something else is required. Quoting Checkland and Scholes (1990), Reeve and Petch (1999) raise some questions that illustrate the basis for a view of organizations which might be labeled 'social interactionist'. The questions they raise challenge the kinds of impacts and effects one might expect to see as a consequence of EDIST use: "In the 1960s the adoption of the standard assumption from management science that organizations could be treated as if they were instrumentalities, goal-seeking machines, seemed not unreasonable. But in the 1990s such an assumption seemed increasingly dubious. Why not treat organizations as if they were not goal-seeking machines but discourses, cultures, political battlegrounds, quasi-families, or communications and task networks?" Answering these questions, Checkland and Holwell (1998) articulate the Processes for Organization Meanings (POM) model to conceptualize what organizations are, in terms of a set of inter-linked social processes, and how those processes relate to using information systems (treated here as synonymous with EDISTs). Figure 4 shows the POM model. Interpreting the POM model, one might expect to see organizational impacts of EDIST use taking the form of behavioral changes amongst individual actors or groups of actors within organizations. For example one might look for changes in individual understandings of the basis or consequences of action; for changes in intention to act; or for changes in agreement or disagreement over issues.
Some of these kinds of individual and group behavioral changes have been observed empirically in relation to group system dynamics model building processes in organizations (Rouwette et al., 2002). Related work in participatory modeling has begun to report some empirical detail (Voinov and Bousquet, 2010), but the context is not organizational and so the insights are not necessarily transferable. Rouwette et al.
(2002) undertook a significant meta-analysis of 107 organizational group model-building processes to understand the extent to which a range of different social outcomes were achieved, and were achieved positively. The outcomes they assessed included, at the individual level, 'insight generated'; at the group level, 'commitment' (the intention to implement changes suggested by the modeling exercise), 'communication improvements', 'consensus generated' and 'shared language created'; and, at the organizational level, 'system changes' and 'positive results'. The relationships between desirable changes and the characteristics of the group model building process were difficult to tease out, with really only 'commitment' and 'system changes' seeming to be lower under one configuration. Díez and McIntosh (2011) undertook a multiple organization study of the drivers for, constraints on, and impacts of EDIST use in a particular field of environmental policy and management – desertification. The results of this study were able to attribute different kinds of impact to different kinds of EDIST type, from remote sensing through GIS to simulation models and DSS.
A wide range of impacts of EDIST use were reported by employees of the organizations interviewed, with the biggest range of impacts reported for the EDIST types most widely used – GIS and remote sensing. Impacts ranged from those related to increases in the efficiency and effectiveness of tasks and entire organizations, through improved participation and communication, to cost impacts including the need to hire new staff, to re-train staff and to invest in IT. There were no clear relationships to EDIST type, leading one either to conclude, as Rouwette et al. (2002) did, that more and better data collection is needed, or perhaps towards the conclusions of Matthews et al. (2011) that the complexities of the social processes are significant and likely to make the identification of easy patterns of cause and effect between EDISTs, use and impacts (whether process effects or outcomes) difficult at the very least.
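Studies of this kind ultimately reduce interview material to coded impact categories per tool type before any pattern-finding can be attempted. The sketch below is purely illustrative (the records and category labels are hypothetical, not data from the study described above), showing the simplest form such a tally might take:

```python
# Illustrative sketch (hypothetical records and labels) of tallying
# interview-reported 'process effects' by tool type and impact category,
# the kind of coding step a multi-organization impact study relies on.
from collections import Counter

# Each record: (tool type reported, impact category coded from the interview).
reports = [
    ("GIS", "task efficiency"),
    ("GIS", "improved communication"),
    ("GIS", "staff re-training cost"),
    ("remote sensing", "task efficiency"),
    ("remote sensing", "IT investment cost"),
    ("simulation model", "improved participation"),
]

by_tool = Counter(tool for tool, _ in reports)
by_impact = Counter(impact for _, impact in reports)

print(by_tool.most_common())
print(by_impact.most_common(1))  # most frequently reported impact category
```

The simplicity of the tally is the point: with few records per tool type, no statistically meaningful tool-to-impact relationship can emerge, which is consistent with the call for more and better data collection noted above.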
Figure 4. Processes for Organization Meanings (POM) model (from Checkland and Holwell, 1998)

5. CONCLUSIONS

The social, economic and environmental challenges of the 21st century require the effective transfer and use of scientific knowledge into policy. EDISTs offer one route for this transfer. Ensuring that EDISTs offer real value as a transfer mechanism will require improved design practices, an improved understanding of the relationships between EDIST use and changes in organizational behaviors and attributes (like efficiency and effectiveness), and an improved understanding of the relationships between those behaviors and attributes and broader social, economic and environmental ambitions.
Assessment and evaluation practices offer the potential to help realize this agenda. Improved use of interaction design methods for capturing user needs and assessing EDIST usability offers the potential to improve tool adoption rates and eventual utility. The extent to which improved adoption will result in desirable organizational changes is unclear, however. Available published evidence is scant, and there is no clear consensus on the kinds of changes which should be prioritized as desirable, nor on how to measure them or how best to promote them in tool development.