2012 45th Hawaii International Conference on System Sciences
Evaluating the Quality of Web Based Sustainability Reports:
A Multi-Method Framework
Frank Teuteberg
Research Group in Accounting and Information Systems, University of Osnabrueck, Germany
[email protected]
Michael Freundlieb
Research Group in Accounting and Information Systems, University of Osnabrueck, Germany
[email protected]
978-0-7695-4525-7/12 $26.00 © 2012 IEEE
DOI 10.1109/HICSS.2012.255

Abstract

Sustainability reports bear the potential to improve a company's image and to create competitive advantages, for they influence consumers' buying decisions and the investment decisions of potential shareholders. There are multiple standards and guidelines on sustainability reporting, which, however, mainly relate to the contents of the reports. We argue that, due to the stakeholder-focused nature of sustainability reports, additional quality criteria exceeding the mere content need to be considered to improve the stakeholders' acceptance. Since these criteria are stakeholder-specific, they need to be defined and evaluated with the direct involvement of the stakeholders. Thus, we propose a multi-method framework for the quality evaluation of web based sustainability reports. In order to evaluate our framework and to gain first insights into stakeholder-focused quality and acceptance criteria, we applied our framework within a preliminary study. First experiences and results are presented and implications for theory and practice are derived.

1. Introduction

Sustainability reports (SRs) provide stakeholders with information on a company's efforts to balance its economic, ecological and social goals, the so-called triple bottom line [10]. In this context sustainability can be defined as "meeting the needs of the present without compromising the ability of future generations to meet their own needs" [2]. For the majority of the world's 250 largest companies, sustainability reporting has become a standard procedure [21]. Studies show that SRs bear the potential to improve a company's image and to influence consumers' buying decisions as well as the investment decisions of potential shareholders [1,23]. Hence, there are potential competitive advantages inherent in sustainability reporting which make SRs an integral part of environmental online communication and a key factor in the sustainable development and decision making of companies and stakeholders alike. Since many companies publish their SR in the form of a website [21], web based sustainability reporting poses an interesting research topic for the IS community.

SRs need to fulfill the requirements of internal and external stakeholders such as employees, supply chain partners, regulatory bodies, consumers and the general public [9,28]. Therefore, designing and implementing a SR that fits the needs of these different stakeholder groups is a difficult task. Aid is provided by a variety of standards and guidelines on sustainability reporting.

Section 2 gives an overview of existing standards and guidelines and outlines the main goals and quality criteria mentioned within these guidelines. Our review of the standards and guidelines shows that existing quality criteria mainly refer to the contents, but not to the ease of use or the visual appeal of sustainability reports, although these have proven to be decisive factors for the acceptance of related IT artifacts such as websites [4,6,19,24,25]. In view of the stakeholder focus of SRs [14], we propose a multi-method framework that directly involves the different internal and external stakeholder groups in the definition of quality criteria and in the quality evaluation process. In Section 3 the developed framework is described in more detail. In order to evaluate our framework, we conducted a preliminary study with 8 participants. In Section 4 we present our key findings regarding the framework itself, the design and implementation of SRs as well as implications for future research. Section 5 sums up our results, outlines the limitations of our research approach and provides a number of starting points for future research.
2. Existing standards and guidelines for
sustainability reports
In order to identify scientific literature dealing with quality criteria for and/or the evaluation of sustainability reports, we conducted a systematic literature review [29]. We searched the top 20 journals from the AIS MIS journal ranking as well as the high-quality conferences AMCIS, ECIS, HICSS, ICIS and Wirtschaftsinformatik for the search terms "quality" or "evaluation" in combination with the terms "sustainability reporting" or "corporate social responsibility reporting". Our search did not lead to relevant results, which is in line with other researchers who have concluded that sustainable development and environmental sustainability have not yet been adequately explored by the IS community [16,28].
Hence, we focused on more practically oriented
standards and guidelines for SRs, which are often referred to when rankings or scorings of SRs are performed [5]. Table 1 provides a short overview of the
sustainability reporting standards and guidelines we
considered most relevant to our work.
Table 1. Sustainability reporting standards and guidelines

Global Reporting Initiative (GRI) [11]: The GRI is a guideline on the content and design of sustainability reports which is based on the "consensus seeking consultation" of different stakeholder groups such as business, labor, non-governmental organizations, investors and accountancy. 13 quality criteria for sustainability reports are developed and corresponding test procedures are proposed. Also, a reference outline including concrete key performance indicators for sustainability reports is suggested.

World Business Council for Sustainable Development (WBCSD) [26]: The approach of the WBCSD is similar to that of the GRI, but it is of a more general and recommendatory character. The recommendations primarily concern the information needs of the stakeholders as well as the contents of reports. In order to motivate non-reporters to publish a sustainability report, especially successful companies are referred to and citations from their top management are given.

Eco-Management and Audit Scheme (EMAS) [8]: EMAS is a regulation regarding the environmental management and environmental audits of organizations which goes beyond the requirements of the international ISO 14001 norm. It applies to European companies only. Annex IV of EMAS includes regulations on environmental reporting, especially regarding environmental statements and key performance indicators, as well as availability and accountability.

International Organization for Standardization (ISO) [12]: The ISO standard 14063 suggests 5 quality principles for environmental management and environmental communication and provides a survey of different channels for environmental communication, such as sustainability reports. A reference procedure for environmental communication is described, but there are no concrete suggestions regarding content or key performance indicators.

We believe that in order to develop a framework for the quality evaluation of SRs, the goals pursued by these reports need to be sufficiently clear. Table 2 gives an overview of the central reporting goals stated in the above listed guidelines and norms. The goals were summarized from the standards and guidelines by the authors and categorized into internal and external goals. Internal goals represent goals within the reporting company towards its internal stakeholders, whereas external goals represent goals towards external stakeholders.

Table 2. Internal and external goals of sustainability reporting

GRI [11]
Internal goals:
- benchmarking and assessing sustainability performance with respect to laws, norms, codes, performance standards, and voluntary initiatives
- comparing performance within an organization over time
External goals:
- provide a balanced and reasonable representation of the sustainability performance of a reporting organization, including both positive and negative contributions
- demonstrate how the organization influences and is influenced by expectations about sustainable development
- compare performance between different organizations over time

WBCSD [26]
External goals:
- showing a balanced and reasonable presentation of an organization's economic, environmental and social performance ('true and fair view', 'presented fairly')

EMAS [8]
Internal goals:
- the active involvement of employees in organizations and appropriate training
External goals:
- promote continuous improvements in the environmental performance of organizations by the establishment and implementation of environmental management systems by organizations
- the systematic, objective and periodic evaluation of the performance of such systems
- the provision of information on environmental performance
- an open dialogue with the public and other interested parties

ISO [12]
Internal goals:
- providing inputs/suggestions for improving the environmental performance of an organization's activities, products and services, and progress toward sustainability
- raising the importance and level of environmental awareness to support an environmentally responsible culture and values within the organization
External goals:
- promoting an organization's environmental credentials, achievements and performance
- increasing business support and shareholder confidence
- enhancing interested parties' perceptions of the organization
- addressing interested parties' concerns and complaints about operational and emergency environmental hazards
- improving understanding of interested parties' needs and concerns to foster trust and dialogue
- assisting interested parties in understanding an organization's environmental commitments, policies and performance
In order to constitute an initial set of quality criteria
for sustainability reports, Table 3 gives an overview of
the quality criteria included in the standards and guidelines. Within the table, references to stakeholders are
underlined in order to highlight the stakeholder focus
inherent in the quality criteria.
Table 3. Quality criteria for sustainability reports

GRI [11]
transparency: complete disclosure of information on the topics and indicators required to reflect impacts and enable stakeholders to make decisions, and the processes, procedures and assumptions used to prepare those disclosures
balanced and reasonable presentation of the organization's performance considering the organization's purpose and experience and the reasonable expectations and interests of the organization's stakeholders
materiality: The information in a report should cover topics and indicators that reflect the organization's significant economic, environmental, and social impacts, or that would substantively influence the assessments and decisions of stakeholders.
stakeholder inclusiveness: The reporting organization should identify its stakeholders and explain in the report how it has responded to their reasonable expectations and interests.
sustainability context: The report should present the organization's performance in the wider context of sustainability.
completeness: Coverage of the material topics and indicators and definition of the report boundary should be sufficient to reflect significant economic, environmental and social impacts and enable stakeholders to assess the reporting organization's performance in the reporting period.
comparability: Issues and information should be selected, compiled, and reported consistently. Reported information should be presented in a manner that enables stakeholders to analyze changes in the organization's performance over time, and could support analysis relative to other organizations.
balance: The report should reflect positive and negative aspects of the organization's performance to enable a reasoned assessment of overall performance.
accuracy: The reported information should be sufficiently accurate and detailed for stakeholders to assess the reporting organization's performance.
timeliness: Reporting occurs on a regular schedule and information is available in time for stakeholders to make informed decisions.
clarity: Information should be made available in a manner that is understandable and accessible to stakeholders using the report.
reliability: Information and processes used in the preparation of a report should be gathered, recorded, compiled, analyzed, and disclosed in a way that could be subject to examination and that establishes the quality and materiality of the information.
report boundary: The sustainability report boundary should include the entities over which the reporting organization exercises control or significant influence both in and through its relationships with various entities upstream (e.g. supply chain) and downstream (e.g. distribution and customers).

WBCSD [26]
relevance: Information is relevant when it helps to evaluate a company's activities and to confirm or correct past evaluations.
materiality: A company should assess the materiality, i.e. the importance, of the information it discloses. Information is to be considered material if its omission or misstatement could influence users when making decisions about their involvement/relations with the company.
reliability: Information is reliable when it is free from material error and bias, and faithfully reflects activities and processes.
comparability: Users should be able to compare the sustainable development reports of a company over time in order to identify trends in its sustainable development performance and position.
cost effectiveness: The benefits derived from producing a report should justify the cost.

EMAS [8]
relevance: Indicators should be relevant to an organization's significant direct environmental aspects.
understandability: Indicators shall be understandable and unambiguous.
comparability:
- indicators shall allow for a year-on-year comparison to assess the development of the environmental performance of the organization
- indicators shall allow for comparison with sector, national or regional benchmarks as appropriate
- indicators shall allow for comparison with regulatory requirements as appropriate
accuracy: Indicators shall give an accurate appraisal of the organization's environmental performance.
clarity: Environmental information shall be presented in a clear and coherent manner.
availability: Anybody interested in the organization's environmental performance can easily and freely be given access to the information.

ISO [12]
transparency: Make the processes, procedures, methods, data sources and assumptions used in environmental communication available to all interested parties, taking account of the confidentiality of information as required. Inform interested parties of their role in environmental communication.
appropriateness: Make information provided in environmental communication relevant to interested parties, using formats, language and media that meet their interests and needs, enabling them to participate fully.
credibility: Conduct environmental communication in an honest and fair manner, and provide information that is truthful, accurate, substantive and not misleading to interested parties. Develop information and data using recognized and reproducible methods and indicators.
responsiveness: Ensure that environmental communication is open to the needs of interested parties. Respond to the queries and concerns of interested parties in a full and timely manner. Make interested parties aware of how their queries and concerns have been addressed.
clarity: Ensure that environmental communication approaches and language are understandable to interested parties to minimize ambiguity.
Despite the different background of the standards
and guidelines, the contained quality criteria are quite
similar. However, the criteria mostly relate to the contents of sustainability reports – many quality criteria
and their definitions are analogous to common data
quality dimensions [27]. The multitude of references to
stakeholders within the quality criteria stresses the
stakeholder focus of sustainability reporting. In view of
this focus, we found that four important aspects were
missing in existing guidelines and standards which we
intend to address with our framework:
• Companies are left in doubt about the specific
needs of the different stakeholder groups since the
existing standards and guidelines do not differentiate between different stakeholder groups.
• Although ease of use and usefulness are common
factors in many established acceptance models (e.g.
[6,24,25]), existing standards and guidelines do not
give concrete advice on how to improve the usability.
• Research on websites indicates that the visual appeal has a significant impact on the level of acceptance [4]. As many companies publish their sustainability reports in the form of a website [21], visual appeal is also likely to be a decisive acceptance factor for web based SRs.
• The existing standards and guidelines do not disclose detailed information on how the included
quality criteria were determined. Since many references to stakeholders are included in the definitions
of the quality criteria, further assistance on how to
involve the stakeholders into the evaluation of these
quality criteria needs to be provided.
Due to the mentioned shortcomings in existing standards and guidelines for sustainability reporting, we have extended our search to standards and guidelines in the field of software usability. Table 4 provides a survey of the quality criteria for usability stated in the ISO standard 9241-110, which deals with the ergonomics of human-system interaction [7]. Again, we have underlined references to stakeholders within the definitions of the quality criteria.

Table 4. Quality criteria for user dialogues

ISO 9241-110 [7]
Suitability for the task: "An interactive system is suitable for the task when it supports the user in the completion of the task, i.e. when the functionality and the dialogue are based on the task characteristics (rather than the technology chosen to perform the task)."
Self-descriptiveness: "A dialogue is self-descriptive to the extent that, at any time, it is obvious to the users which dialogue they are in, where they are within the dialogue, which actions can be taken and how they can be performed."
Conformity with user expectations: "A dialogue conforms with user expectations if it corresponds to predictable contextual needs of the user and to commonly accepted conventions."
Suitability for learning: "A dialogue is suitable for learning when it supports and guides the user in learning to use the system."
Controllability: "A dialogue is controllable when the user is able to initiate and control the direction and pace of the interaction until the point at which the goal has been met."
Error tolerance: "A dialogue is error-tolerant if, despite evident errors in input, the intended result may be achieved with either no, or minimal, corrective action by the user. Error tolerance is achieved by means of error control (damage control), error correction, or error management, to cope with the errors that occur."
Suitability for individualization: "A dialogue is capable of individualization when users can modify interaction and presentation of information to suit their individual capabilities and needs."

Literature in the field of website quality also confirms that quality criteria exceeding the mere content need to be taken into account. Besides usability, these criteria include availability, reliability, adaptability and response time of the system as well as content that is personalized, complete, relevant, easy to understand and secure [6]. Other researchers also confirm the importance of usability and add site design and media richness to the list of important influence factors on the success of websites [19].

In order to fulfill the collected quality criteria, a company needs to know its stakeholders' needs, expectations and capabilities. Thus, concrete quality criteria can only be determined and evaluated by directly involving the stakeholders in the evaluation process. Our framework, which is presented in the following section, is designed to enable such an interactive process. Nevertheless, existing standards and guidelines on sustainability reporting or the usability of websites can provide a useful set of basic quality criteria, which can be complemented with the help of the proposed framework.

3. Framework for the quality evaluation of web based sustainability reports

Multi-method approaches have been successfully used by other researchers in order to evaluate websites and other IT artifacts [3,4,15,17]. Following these researchers, we have designed a framework including a multi-method approach for the quality evaluation of sustainability reports in order to overcome the shortcomings of existing standards and guidelines we pointed out in Section 2. With respect to the identified stakeholder focus, our framework directly involves the stakeholder groups in the evaluation process.

Figure 1. Framework for the quality evaluation
The starting point of the framework is the iterative
development of a catalogue of quality criteria. In order
to assemble this catalogue, different research methods
such as literature reviews, empirical surveys or expert
interviews can be applied. For the preliminary study of
our framework, we used the quality criteria outlined in
Section 2.
As a next step, the technological basis for the implementation of the reports as well as for the evaluation is established. For the preliminary study, we implemented a set of alternative reports using a data warehouse implemented in an Oracle database, with Cognos as the reporting engine.
The establishment of the technological basis is followed by the design and implementation of alternative reports. During this phase, different color schemes, menu structures and layouts should be applied in order to evaluate different alternatives in the following multi-method approach [4].
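The space of report alternatives can be enumerated systematically from such design dimensions before implementation. A minimal sketch; the concrete color schemes, menu structures and layouts below are hypothetical examples, not the ones used in the study:

```python
from itertools import product

# Hypothetical design dimensions; the study's actual alternatives are not
# specified, so these values are illustrative only.
color_schemes = ["corporate blue", "high contrast", "green on white"]
menu_structures = ["top navigation", "left sidebar"]
layouts = ["table only", "chart only", "table and chart"]

# Every combination is one candidate report alternative to implement
# and evaluate in the multi-method approach.
alternatives = [
    {"colors": c, "menu": m, "layout": l}
    for c, m, l in product(color_schemes, menu_structures, layouts)
]

print(len(alternatives))  # 3 * 2 * 3 = 18 candidate designs
```

In practice, only a feasible subset of these combinations would be implemented, but enumerating them first makes the choice of alternatives explicit.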
During the multi-method approach, different research and analysis methods are applied in order to evaluate the reports [3,4,15,17]. Representatives of the different stakeholder groups, such as customers, investors, employees and suppliers, should be directly involved in the evaluation process [18]. Research on the acceptance of IT artifacts often relies on questionnaires in order to measure the acceptance factors [24,25]. We argue that questionnaires are a good method to capture the users' (self-)perception, but in order to achieve a complete view of the acceptance factors, more objective methods also need to be applied [24]. Thus, we suggest adding the following research methods to the multi-method approach:
• observation from multiple perspectives,
• eye-tracking,
• a software-based evaluation.
It should be noted that the framework can be easily
modified by adding or disregarding one or more of the
proposed research methods. One of the strengths of the
proposed framework is that it may also be applied in
order to evaluate other IT artifacts such as websites or
software user interfaces.
In the preliminary study of our framework, we used an eye-tracking device to monitor the users' attention behavior. The participants were given tasks to retrieve information on a company's environmental performance from the reports and, in addition to the eye-tracking, were filmed and observed from multiple perspectives during the usage of the reports. This allowed us to identify perceived shortcomings of the report alternatives by closely observing the users' behavior. The users could later on be confronted with the recordings in order to further investigate the causes of the perceived shortcomings. In order to back up the eye-tracking results, we also performed a software-supported attention analysis using the software EyeQuant [30], which emulates the attention behavior of users by means of a neural network. The software was originally designed and trained for the evaluation of websites. However, we tested whether the software could also aid in evaluating sustainability reports. In addition to these rather innovative research methods, we also used a questionnaire in order to evaluate the reports.
As soon as the design, execution and analysis of the chosen research methods are completed, the findings are consolidated. In the course of this consolidation process, the results provided by the different research methods are checked for consistency, and contradictions are identified and interpreted.
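This consolidation step can be supported by lightweight tooling that merges the per-participant results of the different methods and flags inconsistencies. A minimal sketch; all field names and the contradiction rule are illustrative assumptions, not part of the framework itself:

```python
# Flag contradictions between methods, e.g. a report rated easy to use in
# the questionnaire although the observed task was not solved correctly.
def find_contradictions(results, rating_threshold=4):
    """results: list of dicts, each merging questionnaire, observation and
    eye-tracking outcomes for one participant/report pair (assumed schema)."""
    flagged = []
    for r in results:
        # Contradiction rule (assumed): high self-reported ease of use,
        # but the given task was not solved.
        if r["ease_of_use"] >= rating_threshold and not r["task_solved"]:
            flagged.append((r["participant"], r["report"]))
    return flagged

results = [
    {"participant": "P1", "report": "A", "ease_of_use": 5, "task_solved": False},
    {"participant": "P2", "report": "A", "ease_of_use": 4, "task_solved": True},
    {"participant": "P3", "report": "B", "ease_of_use": 2, "task_solved": False},
]
print(find_contradictions(results))  # [('P1', 'A')]
```

Each flagged pair is then a candidate for interpretation rather than an automatic verdict, since the contradiction itself may be the interesting finding.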
Subsequently, the catalogue of quality criteria is reviewed in the light of the research results: Which report alternatives met the requirements, which did not?
Were any additional requirements raised by the involved stakeholders or did some of the requirements
turn out to be obsolete?
If the quality criteria are not sufficiently satisfied, a cause analysis is conducted. The cause analysis may result in changes to the design and content of the reports, or a different technological basis that better suits the quality criteria may be chosen. Alternatively, the catalogue of quality criteria itself may be modified.
If the quality criteria are finally satisfied, the process model should be run through again after some time to account for changes in stakeholder requirements over time and to enable a continuous improvement process.
4. Evaluation and discussion of preliminary results
In order to evaluate our proposed framework and
gain first insights into the stakeholder-focused requirements, we used the quality criteria collected in
Section 2 to design and implement a number of sustainability report alternatives and later on applied our
multi-method framework to them. Our initial test involved 8 participants (master students and employees
from the University of Osnabrueck) who acted as
stakeholders. We intentionally violated some of the
quality criteria in order to check whether our framework was suitable to discover these violations. Our
findings can be subdivided into findings on the framework itself, findings regarding the design and implementation of sustainability reports and implications for
future research.
4.1 Findings on the framework
We will focus on the eye-tracking and software-supported evaluation results in this section, since most researchers are probably familiar with questionnaires and observations. Additionally, we will point out the main benefits and problems we encountered using a multi-method approach.

Although conducting eye-tracking is both time-consuming and expensive, we found that it was well worth the effort. By using eye-tracking, we gained valuable information on what parts of a report the participants actually used to solve certain tasks. Figure 2 shows the eye-tracking results of one participant for one report alternative. The blue lines indicate saccades (movements of the eyes) whereas the red circles indicate fixations (areas the participant focused on for a longer period of time) [20].

Figure 2. Eye-tracking results

The participant was given the task to state the costs for the cleaning of exhaust air in the year 2010 for the product group washing machines. The eye-tracking allowed us to find out that instead of using the table on the left hand side, which could easily have provided the exact number within a few seconds, this participant decided to use the bar chart on the right hand side, which did not allow him to complete the task successfully. Obviously, the bar chart violated the quality criterion suitability for the task, which was successfully discovered by our framework.

Some of our participants were interested in the technology of eye-tracking and asked to see the visualized eye-tracking data afterwards. During the presentation of the eye-tracking results, we coincidentally made the experience that, when viewing the recorded eye-tracking data, participants were suddenly able to remember small details and decisions they made that were not included in the results of our other research methods. We thus believe that a retrospective confrontation of the participants with their eye-tracking data can lead to interesting insights. In contrast to other methods investigating the users' cognitive processes, such as thinking aloud [22], this method can be applied retrospectively and thus without distracting the participant during the completion of a task.

In addition to eye-tracking, we conducted a software-supported analysis using the software EyeQuant, which is based on a neural network that emulates human attention behavior [30]. It allows generating heatmaps and other visual analyses of the presumed user's attention. Figure 3 shows a heatmap that was generated by applying the software to one of our report designs.

Figure 3. EyeQuant results

Our experiences with EyeQuant were mixed, since the software was originally designed to evaluate websites and not sustainability reports: On the one hand, compared to eye-tracking, the analysis was a lot easier, faster, and cheaper to conduct. The software produced quite good results for reports that contained many different elements such as tables, charts, legends etc. However, for reports that included one large table only, the software was not able to produce comprehensive results. This is probably due to the fact that, in contrast to real participants, the software is not able to focus on a certain task. Generally, the software-supported analysis could not replace, but complement, the eye-tracking data.

Overall, the combination of different research and analysis methods proved to be a good approach, which corresponds to findings from existing literature [3,4,15,17]. The results of each research method provided valuable insights into the stakeholders' view of quality criteria for web based sustainability reports:
• The eye-tracking data allowed us to evaluate which components of a report were used to complete a certain task and which components were ignored. Furthermore, the retrospective eye-tracking provided valuable insights into the users' cognitive processes.
• The questionnaire allowed us to evaluate the stakeholders' perception of the reports.
• The observation allowed us to see whether the stakeholders' perception was biased.
• The software-supported analysis extended our findings from the eye-tracking in a more time- and cost-effective way.
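Raw eye-tracking output is a stream of gaze samples from which fixations and saccades, as visualized in Figure 2, are derived. The source does not state which algorithm the device used; a minimal sketch of the common dispersion-threshold identification (I-DT) approach, with thresholds as illustrative assumptions that depend on sampling rate and screen geometry:

```python
# Dispersion-threshold identification (I-DT): a window of gaze samples whose
# spatial dispersion stays below a threshold for a minimum duration counts as
# a fixation; the movement between fixations is treated as saccadic.
def dispersion(window):
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, max_dispersion=25.0, min_length=5):
    """samples: list of (x, y) gaze points; returns (start, end) index pairs."""
    fixations = []
    i = 0
    while i + min_length <= len(samples):
        j = i + min_length
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the dispersion stays below the threshold.
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

# Synthetic trace: a stable cluster (fixation), a jump (saccade), a second cluster.
trace = [(100, 100), (102, 101), (101, 99), (103, 100), (100, 102),
         (300, 300), (301, 299), (302, 301), (300, 300), (299, 301)]
print(idt_fixations(trace))  # [(0, 4), (5, 9)]
```

Mapping the resulting fixation windows onto report components (table, chart, legend) then yields exactly the kind of usage evidence described in the first bullet above.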
Naturally, using different research and analysis methods may also lead to contradictory results: for example, some participants perceived a report alternative to be useful and easy to use although they were not able to solve a given task using the report. We found that these contradictions provide good starting points for future research, which we point out in Sections 4.3 and 5.

4.2 Findings on the design and implementation of web based sustainability reports

Since the stakeholders did not always choose the visualization alternative of the report contents that was suitable for the given task (cf. Figure 2), we conclude that coaching of the users is desirable and should be considered when designing and implementing SRs. While coaching can easily be provided to internal stakeholders such as employees, it obviously cannot be easily realized for external stakeholders such as customers. For web based reports, coaching of the users can be accomplished by adding a help function, a wizard, or an instructional video or avatar which demonstrates how to switch between different types of visualization and which kind of visualization is suitable for which task. In accordance with the quality criterion suitability for the task (cf. Table 4), the appropriate visualization should be pre-selected where possible.

In view of the fact that we discovered contradictions between the perceived and actual usefulness and ease of use (cf. Section 4.1), we argue that sustainability reports need to address both aspects, the users’ perception as well as the factual properties, since both may contribute to the overall acceptance of a report.

Table 5 shows an excerpt from our questionnaire and observation results. The 8 participants were given two tasks:
1. For the results on the left hand side of the table, the participants were asked to state an exact number for the cleaning costs of exhaust air for the product group washing machines. For this task, the report shown in Figure 2 was used.
2. For the results on the right hand side of the table, the participants were asked to state whether the revenue of a certain company location was going up or down within a period of several weeks. The layout of the report used to solve this task was similar to Figure 2, offering both a table and a bar chart.

The two tasks were presented to the participants in different orders to exclude bias due to the order in which the tasks were presented or due to learning effects. After the completion of the tasks, the participants were asked to fill out a questionnaire using a Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The participants were asked whether the table was helpful for the task, whether the chart was helpful for the task, and whether the visual appeal of the chart was greater than that of the table. We extended the questionnaire answers of the participants in Table 5 by adding whether the chart was perceived to be more helpful than the table. Our observation contributed whether the given task was solved correctly by the participant.

Obviously, we expected the table to be considered more helpful for the first task, whereas we expected the chart to be more helpful for the second task. Although this expectation was generally fulfilled by our participants, as the higher mean values of the table for task 1 and of the chart for task 2 indicate, the results were not as clear as we had imagined: 3 out of 8 participants found the chart to be more helpful for the first task, and one participant was of the opinion that the table was more helpful to solve task 2. Of these 4 participants, highlighted in bold print in Table 5, only one was able to correctly solve the given task.

This underlines our recommendations to
• provide coaching to internal stakeholders,
• offer a help function, wizard, instructional video or avatar to external stakeholders, and
• preselect the visualization that is suitable for the task where possible.

Surprisingly, the results for the second task were more significant than for the first task, although the report designs were quite similar.
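The recommendation to preselect a task-appropriate visualization (cf. Section 4.2) can be made concrete as a simple dispatch rule. The sketch below is a hypothetical illustration; the task categories and component names are our own assumptions rather than part of any reporting guideline.

```python
# Hypothetical preselection rule for a web based sustainability report:
# "lookup" tasks (reading an exact figure) default to the table,
# "trend" tasks (judging a development over time) default to the chart.

DEFAULT_VISUALIZATION = {
    "lookup": "table",
    "trend": "chart",
}

def preselect_visualization(task_type):
    # Fall back to the table for unknown task types.
    return DEFAULT_VISUALIZATION.get(task_type, "table")
```

A report page would, for instance, call `preselect_visualization("trend")` when rendering a revenue-development view, while still letting users switch between visualizations manually.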
Table 5. Questionnaire and observation results (excerpt)

Task 1: stating a concrete number (cf. report Figure 2)

participant | table helpful for task | chart helpful for task | chart > table? | task solved? | visual appeal chart > table?
----------- | ---------------------- | ---------------------- | -------------- | ------------ | ----------------------------
A           | 3                      | 4                      | yes*           | no           | 5
B           | 5                      | 3                      | no             | yes          | 3
C           | 1                      | 4                      | yes*           | yes          | 4
D           | 5                      | 3                      | no             | yes          | 4
E           | 5                      | 1                      | no             | yes          | 4
F           | 3                      | 5                      | yes*           | no           | 4
G           | 4                      | 2                      | no             | no           | 1
H           | 5                      | 1                      | no             | yes          | 1
mean        | 3.875                  | 2.875                  |                |              | 3.25

Task 2: identifying a trend (report similar to Figure 2)

participant | table helpful for task | chart helpful for task | chart > table? | task solved? | visual appeal chart > table?
----------- | ---------------------- | ---------------------- | -------------- | ------------ | ----------------------------
A           | 2                      | 5                      | yes            | yes          | 5
B           | 2                      | 5                      | yes            | yes          | 3
C           | 3                      | 2                      | no*            | no           | 1
D           | 1                      | 5                      | yes            | yes          | 4
E           | 1                      | 5                      | yes            | yes          | 4
F           | 3                      | 5                      | yes            | yes          | 4
G           | 2                      | 3                      | yes            | yes          | 4
H           | 2                      | 5                      | yes            | yes          | 5
mean        | 2                      | 4.375                  |                |              | 3.75

* Participants who found the chart more helpful for task 1, or the table more helpful for task 2 (see discussion in the text).
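As a sanity check, the mean values in Table 5 can be recomputed from the individual Likert answers; the sketch below does this for the two task 1 helpfulness columns (values transcribed from the table).

```python
# Recompute the task 1 mean values of Table 5 from the Likert answers (1-5).
table_helpful_task1 = [3, 5, 1, 5, 5, 3, 4, 5]  # participants A-H
chart_helpful_task1 = [4, 3, 4, 3, 1, 5, 2, 1]

def likert_mean(answers):
    return sum(answers) / len(answers)

print(likert_mean(table_helpful_task1))  # 3.875
print(likert_mean(chart_helpful_task1))  # 2.875
```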
Although the chart was by and large considered to be less helpful for the first task, the participants considered the chart to be visually more appealing in both cases. One could conclude that sustainability reports should therefore generally include charts rather than tables, but as our results also indicate, charts may not always be suitable for the task at hand. Moreover, as our eye-tracking and observation results indicate, the participants were not always able to select the appropriate visualization of the data for the given task.

Figure 4 shows two alternative reports that were designed using different color schemes.

Figure 4. Color Schemes

Although the two alternatives use the same layout and include the same information, the stakeholders’ experiences differed strongly. While the report on the left uses established traffic light colors (green for low costs/high profits, yellow for medium costs and profits, red for high costs/low profits), the report on the right uses colors that have no common connotation. Although the tasks provided to the participants were equally difficult, the participants, not surprisingly, needed on average 72% more time to complete the task using the report on the right. The eye-tracking results backed up this finding by showing confused participants without orientation. Surprisingly, the stakeholders’ questionnaire answers found the two reports to be almost equally appealing (2.9 for the report on the left compared to 2.75 for the report on the right). The results were more significant regarding the helpfulness for the task (3.3 vs. 2.63) and the intuitiveness of the color scheme (3.5 vs. 2.5). Some participants remarked that the colors in the report on the left hand side were too glaring, which might be the reason why the results were not as significant as we expected.

Regarding the color scheme, we draw the following recommendations:
• color schemes should be intuitive,
• a legend of the color scheme should be included in order to provide orientation, and
• colors should not be too glaring but friendly to the eye.

4.3 Implications for future research

Although we were overall satisfied with the combination of research methods we chose for our framework and the preliminary study, further research on the combination of different research methods is needed: Which research methods complement each other? How can contradictions between results from different research methods be resolved during the consolidation of findings?

Since the research and analysis methods included in the multi-method approach can be replaced by others, we believe that our framework can easily be adapted to suit other problem domains.

Our experience so far indicates that some research methods, e.g. eye-tracking, are more time-consuming and costly than others, e.g. the software-supported analysis. While we only considered a rather small group of participants, an economic evaluation of different combinations of research and analysis methods could provide interesting insights for researchers and practitioners alike: Which combination of research methods in a multi-method approach offers the best price-performance ratio?
In view of the many scientific contributions on the acceptance of technologies that rely on questionnaires to determine acceptance factors such as ease of use and usefulness, the discrepancies we discovered between questionnaire and observation results show that future research might have to distinguish between the perceived (questionnaire based) and the actual (observed, measured, eye-tracked) ease of use, as well as between perceived and actual usefulness. Some researchers have already started to make a distinction between perceived and objective acceptance criteria [24]. However, we believe that further research into the causes of the differences between perceived and actual acceptance criteria and their impact on the overall acceptance is needed. Our preliminary results indicate that the visual appeal of a report may have a strong influence on the users’ perceived ease of use and usefulness. The empirical investigation of this hypothesis could be another starting point for future research.
5 Conclusion

This paper contributes to the research on sustainable development in general and on web based sustainability reports in particular by giving an overview of the goals and quality criteria in existing reporting standards and guidelines. Our overview shows that existing standards and guidelines are too focused on the content of the reports and, in view of the stakeholder-focused nature of sustainability reporting, neglect common IS acceptance criteria which have proven to be valid in many other problem domains. In response to these shortcomings, we presented a multi-method framework that directly involves the different stakeholder groups in the definition of quality criteria and the corresponding evaluation. This paper presented initial findings on the framework and the included multi-method approach itself, findings on the design and implementation of sustainability reports, as well as implications and possible starting points for future research.

Naturally, our research approach suffers from some limitations: The preliminary study of our framework was set in an artificial environment (laboratory experiment) that possibly biased the results. However, we believe that the applied multi-method approach reduces this problem to a large extent. Nevertheless, future research can focus on settings that are more ‘natural’ to the participants.

Furthermore, our preliminary study only involved a small number of participants and we only conducted one iteration of the framework. Ideally, a larger number of users and a more representative user population could improve the findings. Thus, our proposed framework needs to be further evaluated in the future. So far, our framework is a theoretical construct that still needs to prove its value in practice. Due to the interchangeability of the included research and analysis methods, however, it can easily be modified by researchers and practitioners alike to meet their needs. We invite other researchers to apply and adapt our framework to problem domains other than sustainability reporting (e.g. the design of websites or software user interfaces) in order to evaluate it further.

For the authors of standards and guidelines on sustainability reporting, we conclude that the quality criteria should be extended by common acceptance factors from the IS discipline [6,24,25]. Since many companies publish their SRs in the form of a website [21], it is no longer sufficient to focus on quality criteria regarding the content of static PDF documents [13,14]. Also, the guidelines could include more advice on how to evaluate the stated quality criteria and information on how the criteria were constituted in the first place.

Other researchers have already started to explore the additional possibilities that web based sustainability reports offer in comparison to traditional, paper based reports. Web based reports can, for example, facilitate two-way communication between the reporting company and its stakeholders and provide the possibility to generate custom-tailored reports for different stakeholder groups [13,14]. The application of the presented framework might help to derive additional quality criteria and to further evaluate these approaches.

Acknowledgement

This work is part of the project IT-for-Green (Next Generation CEMIS for Environmental, Energy and Resource Management). The IT-for-Green project is funded by the European Regional Development Fund (grant number W/A III 80119242). The authors are pleased to acknowledge the support of all involved project partners. Furthermore, we would like to thank the anonymous reviewers for their insightful and constructive comments.
6 References
[1] W. Bartels, J. Iansen-Rogers, and J. Kuszewski, Count
Me In - The Readers’ Take on Sustainability Reporting,
KPMG, 2008.
[2] G.H. Brundtland, ed., Our Common Future, Oxford: Oxford University Press, 1987.
[3] J. Cao, M. Lin, J. Crews, A. Deokar, and J. Burgoon,
“The Interaction Of Research Methods For System Evaluation And Theory Testing: A New Vision Of The Benefits Of
Multi-Methodological Information Systems Research,” Proceedings Of The International Conference On Information
Systems (ICIS), 2004.
[4] D. Cyr, M. Head, H. Larios, and B. Pan, “Exploring
Human Images In Website Design: A Multi-Method Approach,” MIS Quarterly, vol. 33, 2009, pp. 1-32.
[17] J. Mingers, “Combining IS Research Methods: Towards
A Pluralist Methodology,” Information Systems Research,
vol. 12, Sep. 2001, pp. 240-259.
[5] C.-H. Daub, “Assessing The Quality Of Sustainability
Reporting: An Alternative Methodological Approach,” Journal of Cleaner Production, vol. 15, 2007, pp. 75-85.
[18] H. Oesterle and B. Otto, “Consortium Research,” Business & Information Systems Engineering, vol. 2, Aug. 2010,
pp. 283-293.
[6] W.H. DeLone and E.R. McLean, “The DeLone And McLean Model Of Information Systems Success: A Ten-Year Update,” Journal of Management Information Systems, vol. 19, 2003, pp. 9-30.
[19] J.W. Palmer, “Web Site Usability, Design, and Performance Metrics,” Information Systems Research, vol. 13, Jun.
2002, pp. 151-167.
[7] Deutsches Institut für Normung e.V., DIN EN ISO 9241-110, Deutsches Institut für Normung e.V., 2008.
[20] D.D. Salvucci and J.H. Goldberg, “Identifying Fixations
And Saccades In Eye-Tracking Protocols,” Proceedings Of
The 2000 Symposium On Eye Tracking Research And Applications, 2000, pp. 71-78.
[8] EMAS Eco-Management and Audit Scheme, “Regulation
(EC) No 1221/2009 Of The European Parliament And Of
The Council,” Official Journal of the European Union L
342/1, 2009.
[21] A. Slater, International Survey Of Corporate Responsibility Reporting 2008, KPMG, 2008.
[22] M.W. van Someren, Y.F. Barnard, and J.A.C. Sandberg,
The Think Aloud Method - A Practical Guide To Modelling
Cognitive Processes, London: Academic Press, 1994.
[9] O. El-Gayar and B.D. Fritz, “Environmental Management Information Systems (EMIS) for Sustainable Development - A Conceptual Overview,” Communications of the Association for Information Systems, vol. 17, 2006, pp. 756-784.
[23] S. Townsend, W. Bartels, and J.-P. Renaut, Reporting
Change: Readers & Reporters Survey 2010, Futerra Sustainability Communications, 2010.
[10] J. Elkington, “Partnerships from cannibals with forks –
The triple bottom line of 21st-century business,” Environmental Quality Management, vol. 8, 1998, pp. 37-51.
[24] V. Venkatesh and H. Bala, “Technology Acceptance
Model 3 And A Research Agenda On Interventions,” Decision Sciences, vol. 39, May. 2008, pp. 273-315.
[11] GRI Global Reporting Initiative, Sustainability Reporting Guidelines 3.1, Global Reporting Initiative, 2011.
[25] V. Venkatesh, M.G. Morris, G.B. Davis, and F.D. Davis,
“User Acceptance Of Information Technology - Toward A
Unified View,” MIS Quarterly, vol. 27, 2003, pp. 425-478.
[12] ISO International Organization for Standardization,
ISO/FDIS 14063: Environmental Management - Environmental Communication - Guidelines and Examples, 2006.
[26] WBCSD World Business Council for Sustainable Development, Sustainable Development Reporting - Striking
The Balance, World Business Council for Sustainable Development, 2003.
[13] R. Isenmann, C. Bey, and M. Welter, “Online Reporting
For Sustainability Issues,” Business Strategy and the Environment, vol. 16, 2007, pp. 487-501.
[27] R. Wang and D.M. Strong, “Beyond Accuracy: What
Data Quality Means to Data Consumers,” Journal of Management Information Systems, vol. 12, 1996, pp. 5-34.
[14] R. Isenmann, J.M. Gómez, and D. Süpke, “Making Stakeholder Dialogue For Sustainability Issues Happen – Benefits, Reference Architecture And Pilot Implementation For Automated Sustainability Reporting à la Carte,” Proceedings of the 44th Hawaii International Conference on System Sciences, IEEE Computer Society, 2011.
[28] R.T. Watson, M.-C. Boudreau, and A.J. Chen, “Information Systems and Environmentally Sustainable Development:
Energy Informatics and New Directions for the IS Community,” MIS Quarterly, vol. 34, 2010, pp. 23-38.
[15] J. Maib, R. Hall, H. Collier, and M. Thomas, “A Multi-Method Evaluation Of The Implementation Of A Student Response System,” Proceedings Of The Americas Conference On Information Systems (AMCIS), 2006.
[29] J. Webster and R.T. Watson, “Analyzing The Past To Prepare For The Future - Writing A Literature Review,” MIS Quarterly, vol. 26, 2002, pp. xiii-xxiii.
[16] N.P. Melville and S.M. Ross, “Information Systems
Innovation for Environmental Sustainability,” MIS Quarterly, vol. 34, 2010, pp. 1-21.
[30] WhiteMatter Labs, “EyeQuant Attention Analytics,”
URL: http://eyequant.com/ last accessed: 15.06.2011.