Research Article

Indoor Environmental Quality of Classrooms and Student Outcomes: A Path Analysis Approach
SeonMi Choi, PhD
Researcher, Interior Design, College of Design, University of Minnesota

Denise A. Guerin, PhD
Professor, Interior Design, College of Design, University of Minnesota

Hye-Young Kim, PhD
Assistant Professor, Design, Housing, and Apparel, University of Minnesota

Jonee Kulman Brigham, AIA, LEED AP
Research Fellow, Center for Sustainable Building Research, College of Design, University of Minnesota

Theresa Bauer
PhD candidate, Interior Design, College of Design, University of Minnesota

The purpose of this study was to investigate the relationship between indoor environmental quality (IEQ) in a set of university classrooms and students’ outcomes, i.e., satisfaction with IEQ, perceived learning, and course satisfaction. Data collected from students (N = 631) of a Midwestern university were analyzed to test a hypothesized conceptual model by conducting a path analysis. Findings suggested that IEQ of the classrooms, such as thermal conditions, indoor air quality, acoustic conditions, lighting conditions, furnishings, aesthetics, technology, and view conditions, was associated with positive student outcomes. Implications for classroom design were discussed with suggestions for future research.
Introduction

The purpose of this study was to investigate the relationship between indoor environmental quality (IEQ) in a set of university classrooms and students’ outcomes, i.e., their satisfaction with IEQ, their perception of the effect of IEQ on learning, and, subsequently, their course satisfaction. Many researchers have found that IEQ affects people’s performance in work, home, and learning environments. This holds for schools, where poor indoor environments have been found to reduce students’ performance (Fisk, 2000; Mendell et al., 2002). The IEQ of schools is important to study because school buildings are often old, they house vulnerable occupants, i.e., students and children, and their construction, maintenance, and renovation have historically been underfunded (U.S. General Accounting Office, 1995). Because students spend more time in classrooms than in any other interior environment of a school, the IEQ of the classroom can directly influence student outcomes, such as satisfaction and learning.

In this study, a conceptual model representing various IEQ criteria associated with physical environments of classrooms was developed and tested for their relationships to college students’ satisfaction with their learning environments and courses as well as their perceived learning. In so doing, a path analysis was conducted to simultaneously investigate structural relationships among variables, which can deepen our understanding of designed environments and human outcomes. Because there are many variables that may be interdependent, it was important to develop a conceptual model based on theoretical propositions and empirical evidence. By incorporating new insights and methodological advances in research, this study can contribute to the current literature on the effect of classroom design on students’ satisfaction and learning. The remainder of this paper is structured as follows. First, relevant literature is comprehensively reviewed and then the conceptual model and the related research hypotheses are presented. Subsequently, the applied research methodology and the results are discussed. Finally, important implications for educators and design practitioners and directions for future research are provided.

Literature Review
Background

IEQ has been found to both support and hinder people’s comfort, performance, and satisfaction with their physical environments and, therefore, can contribute to environmental and economic goals for sustainable building. Appropriate indoor environmental qualities of air, temperature, sound, light, and visible and physical space, together with occupants' ability to personally control these, are the building's contributions to the biological bases of occupant comfort, health, and well-being (Buildings, Benchmarks, and Beyond-Minnesota Sustainable Building Guidelines (B3-MSBG), 2012). The effect of IEQ on people has become a significant research issue with the advent of sustainable design guidelines such as LEED™ (Leadership in Energy and Environmental Design) or the B3-MSBG, which call for architects, engineers, and interior designers to meet specific IEQ standards in the interiors of the buildings they design. One way to determine whether designing to meet IEQ standards is successful is to conduct a post-occupancy evaluation (POE) about one year after the sustainable building is occupied. In schools, this would be an evaluation of students’ opinions and perceptions of the influence the interior environment has on their learning and how it is related to their satisfaction with the classroom and, perhaps, even with their courses.

The issue with schools is that they are historically poorly funded, which means they may be underfunded at the initial building design stage and often go without proper maintenance or repair. These design, maintenance, and operations issues can produce indoor environments whose IEQ is hazardous to students’ health and can be related to students’ poor health, attendance, and performance. Almost 20 years ago, the U.S. General Accounting Office (1995) reported that 63% of U.S. students attended schools with unsatisfactory indoor environments, that is, schools in need of repair or renovation or with contaminants present. The office also reported that nearly 14 million students were learning in spaces that were below standard or dangerous. These figures reflected physical deterioration of the spaces and did not include specific IEQ criteria, which were just being identified at that time. Although much has been done in the last 20 years to improve schools’ indoor environments, schools remain vulnerable to underfunding, overuse, and a lack of research investigating the effect of IEQ criteria, which could affect building and renovation budgets.

Data about student outcomes in elementary and secondary schools are more readily available than data related to college students, and few studies have examined the various IEQ criteria of college classrooms. Maximizing college students’ academic achievement through increased satisfaction and improved learning is of vested interest to administrators, who must establish institutional credibility or accreditation and prepare young professionals for a knowledge-based workforce. Further, students themselves need to maximize their learning as they prepare to seek positions in a competitive job market (Duque & Weeks, 2010; Roberts, 2009). Therefore, it is important to determine whether there is any relationship between college students’ outcomes and classroom IEQ. Researchers have examined various drivers in this equation by studying faculty perceptions of student learning (Duyar, 2010; Earthman & Lemasters, 2009; Kelting & Montoya, 2011), the relationship between attendance and performance (Mendell & Heath, 2005), the influence of school climate (Duyar, 2010; Uline & Tschannen-Moran, 2008), and instructional delivery methods (Brookhart, 1999; Terenzini, Cabrera, Colbeck, Parente, & Bjorklund, 2001).

However, the link between students’ satisfaction and learning and the classroom’s physical environment has been investigated only on a limited basis. Several studies linked the physical environment of the classroom setting to students’ attendance and learning (Daisey, Angell, & Apte, 2003; Mendell & Heath, 2005; Schneider, 2002; Tanner, 2009). Strange and Banning (2001) cited research linking improved classroom attractiveness and lighting to students’ improved motivation and task performance. Graetz and Goliber (2002) summarized research linking lighting to psychological arousal, overheated spaces to hostility, and density to low student achievement. None of these studies, however, comprehensively investigated the relationships between the various IEQ criteria typically associated with the physical environment and student outcomes. A closer look at several IEQ criteria provides an overview of the relationships involved.

IEQ of the Classroom Environment and Student Outcomes

IEQ criteria of the built environment are typically evaluated in various combinations, using different environmental features and addressing different characteristics associated with user outcomes, e.g., satisfaction, performance, achievement, absenteeism, health, and comfort. In the early 1980s, indoor air quality (IAQ) emerged as a substantial focus in the literature when the World Health Organization (WHO) (1986) reported that up to 30% of new and remodeled buildings across the world had received excessive complaints concerning IAQ. Subsequently, IAQ was associated with sick building syndrome (SBS), which was related to occupant exposure or time spent in a building. IAQ was also linked to building related illness (BRI), diagnosed as illnesses attributed directly to airborne building contaminants (EPA, 1991).

Not surprisingly, national statistics prior to 2000 revealed that over 43% of U.S. schools had reported problems with IAQ (Kelting & Montoya, 2011; National Center for Educational Statistics, 2000). IAQ, ventilation, and CO2 ratings in schools raised concern about health issues related to respiratory illness (asthma), chemical sensitivities, volatile organic compounds, and biological pathogens (Daisey et al., 2003). More specifically, early studies found temperature control (including air conditioning) and air quality to be the IEQ features that contributed most to student learning performance (Cash, 1993; Earthman, 2004). Mendell and Heath (2005) found evidence of direct or indirect connections between indoor pollutants (biological, chemical, or particulate pollutants) and thermal conditions (temperature and humidity) and student performance and absenteeism in school environments. Given the overriding concern for health and performance issues, a greater amount of research focused on IAQ and thermal and ventilation conditions than on other IEQ criteria.

Lighting conditions have long been an important IEQ criterion in the built environment because lighting includes both electric and daylight sources and both ambient and task uses. Each of these elements has a unique role in assessing user experiences within the built environment. Exposure to various types of light can be associated with physiological responses in human performance, and daylight from windows can provide both visual lighting and an opportunity for a view to the outside or natural environment, which have also been found to positively influence human behavior. Studies conducted in elementary school settings found a positive and significant correlation between the presence of daylight and student performance across three different school districts (Heschong Mahone Group, 1999). Daylighting provided through skylights also had a positive effect on students in their classrooms. Subsequent studies showed a 21% increase in student performance in classrooms with the greatest amounts of daylighting compared to classrooms with the least (Heschong Mahone Group, 2003; Kelting & Montoya, 2011). Similarly, electric lighting has long been found to improve test scores, reduce off-task behavior, and play a role in student achievement (Jago & Tanner, 1999).

Studies involving acoustics have examined different aspects of the classroom environment, e.g., the presence of unwanted noise and types of indoor finishes such as hard-surface and soft-surface floorcovering, to better understand the relationship between acoustic conditions and student learning performance (Earthman, 2004; Tanner & Langford, 2003; Uline & Tschannen-Moran, 2008). Learning performance is understandably improved when discussions between students and instructors can be easily heard and clearly distinguished from outside influences. Research examining acoustic conditions in classrooms located near noisy vehicular traffic or community noise, and in rooms without any acoustic treatment, has shown decreased student performance compared with classroom settings located in quiet neighborhoods or with noise abatement treatment, e.g., rubber floor mats, acoustical tiles, etc. (Earthman, 2004; Uline & Tschannen-Moran, 2008). Lastly, studies of schools with soft floorcovering such as carpet found higher student achievement in those rooms than in classrooms with hard-surface flooring. In addition, instructors preferred to teach in classrooms with carpeting because of improved acoustical conditions and lower reverberation times (Tanner & Langford, 2003).

Classroom furniture plays a strategic role in addressing different learning styles and pedagogical delivery methods. New insights into how students learn, and the various methods to enhance this opportunity, are changing how furniture serves the learning experience (Felix & Brown, 2011). Moreover, technology requirements have become integrated into many seating, table, and presentation furniture items used in classroom environments today. Furniture that is flexible and adjustable to the mode of teaching also contributes to supportive learning spaces (Brown & Lippincott, 2003). Assuming a human-factors or user-centered design approach, Cornell (2002) identified four important criteria that can be used in assessing the learning experience in the classroom environment: 1) functionality (wire management, flexibility, and mobility); 2) comfort, safety, and health (not harmful); 3) usability (easy to use with little or no training, prevents accidents, and optimizes use); and 4) aesthetics (a design that is pleasing or acceptable for future use). The concern for ergonomics cannot be overstated when one considers the amount of time that students spend in seated positions throughout the day (Castellucci, Arezes, & Viviani, 2010; Chung & Wong, 2007; Milanese & Grimmer, 2004).

Discussions involving aesthetics frequently invoke attributes such as color, materials, ambiance, and cleanliness. In research regarding IEQ and student learning, the concept of aesthetics is more often associated with building age, features, condition, cleanliness, and overall image. Earthman (2004) found that school environments considered newer or adequately maintained showed higher learning achievement among students than settings where facilities were considered inadequate. In a related case study, test scores of students in a remodeled school environment were found to be noticeably higher after the remodel (Baker & Bernstein, 2012).

Additional criteria related to student learning outcomes include classroom layout and the availability of technology (Lei, 2010). Many studies in the last 10 years have shown that school design and layout, including spatial configuration, affect students’ learning (Schneider, 2002). Use of technology in the classroom for teaching and learning, as well as students’ ability to see the instructor and teaching materials, i.e., the visual images shown on a screen, seems clearly related to student learning performance and satisfaction with the physical environment as well as, perhaps, the course.

It can be seen from previous research that many IEQ criteria of the classroom environment appear to be related to student outcomes. The need for continued research on ways to improve student satisfaction and learning has become more imperative as demands for increased performance and efficiency and a tightening economy exert pressure on higher education faculty and administrators. Concerns for educational and sustainable performance have also risen because of ongoing legislative issues, e.g., state-assisted institutions of higher education face dwindling legislative financial support. In addition, post-occupancy evaluation studies have been a method by which researchers have investigated occupants’ self-reported satisfaction, performance, and health issues related to various IEQ criteria in higher education environments (Khalil, Husin, Wahab, Kamal, & Mahat, 2011). Therefore, it is appropriate to continue the use of POEs in higher education classrooms to determine the influence of IEQ on college students’ satisfaction and learning.

Further, although this literature review is not exhaustive, it is important to note that not all IEQ criteria are equally represented in research studies, yet any one or combination of IEQ criteria could reasonably enhance or hinder students’ learning process as well as their satisfaction. Additionally, research has demonstrated that occupants’ satisfaction with one or more IEQ criteria does not necessarily reflect satisfaction with the overall environment (Humphreys, 2005; Khalil et al., 2011). Therefore, it is important to study the IEQ of classrooms in a comprehensive way that includes all IEQ criteria so that the contribution of each, any, or all criteria, as well as their interaction effects, can be determined. This study therefore used a theoretical framework to investigate the influence of various IEQ criteria on students’ satisfaction and learning.

Theoretical Background

This study integrated a number of streams of research to develop and test a model delineating the impact of IEQ on student outcomes. The basic idea behind our conceptual model, depicted in Figure 1, is consistent with the work of Mehrabian and Russell (1974) who recognized that environmental stimuli affect human responses. Our model also reflects more recent advancements in classroom environment research highlighting that:

a) “…the (classroom) environment should be a place people want to be, not a place they have to be. They should be motivated by fun and enjoyment as much as by a desire to learn…” (Cornell, 2002, p. 41);
b) classroom environment research should embrace the issue of how to use technology effectively to support and enhance the academic performance of today’s learners (Cradler, McNabb, Freeman, & Burchett, 2002);
c) the condition of the classroom may cause morale problems with classroom users (Earthman & Lemasters, 2009); and
d) generic design criteria can form the basis for benchmarking classroom facility performance (Fleming & Storr, 1999).

Specific hypotheses tested in this study were:

H1: Students’ satisfaction with specific IEQ criteria of the classroom positively influences their satisfaction with the overall IEQ of the classroom.
H2: Students’ satisfaction with specific IEQ criteria of the classroom positively influences their perceived effect of IEQ on learning.
H3: Students’ satisfaction with the overall IEQ of the classroom positively influences their perceived effect of IEQ on learning.
H4: Students’ satisfaction with the overall IEQ of the classroom positively influences their course satisfaction.
H5: Students’ satisfaction with the overall IEQ of the classroom indirectly influences their course satisfaction through their perceived effect of IEQ on learning.
H6: Students’ perceived effect of IEQ on learning positively influences their course satisfaction.

Figure 1. The Hypothesized Conceptual Model

Methods
Settings and Participants

The settings of this study were general classrooms of a single building in a major Midwestern university. This building was a 132,000 square foot, four-story classroom-office building designed and constructed based on the B3-MSBG in 2008. It included four 124-seat classrooms and five 75-seat classrooms. All classrooms had similar physical environmental features such as recycled low-emitting materials; windows with blinds to control the sunlight and minimize glare as well as to reduce exterior environmental noise level; doors to minimize noise transmitted from corridors into classrooms; the latest presentation technology with wireless access and dual projections for students’ engagement in presentation materials and interaction with instructors; tiered seating with fixed bench tables and moveable chairs; light reflective ceiling finishes; floor coverings that reduce unwanted noise transmission; aesthetically pleasing color and interior finishes; and occupant-controlled overhead lighting (see Figures 2, 3, and 4).

Participants were students who took classes in these classrooms. The researchers sent an email invitation to 5,490 students to complete the online survey. The survey was completed by 631 students, for a response rate of 11.5%. As shown in Table 1, the percentages of male (45.9%) and female (54.1%) students were similar, as were the percentages of students who took classes in each type of classroom (58.8% used the 124-seat classrooms; 41.2% used the 75-seat classrooms); therefore, no demographic difference effect was expected. Further, the majority of students (77.9%) were 18 to 24 years old, and most students (76.9%) spent 3 to 4 hours per week in one of the classrooms.

Table 1. Demographic information

Measures and Procedure

As part of a series of studies using the Sustainable Post-Occupancy Evaluation Survey (B3-SPOES) tool developed by a research center at a Midwestern university, this study used a self-administered, online questionnaire to evaluate students’ perspectives on IEQ in classrooms. The questionnaire was developed to reflect B3-MSBG IEQ criteria and included questions related to satisfaction with specific IEQ criteria (i.e., thermal conditions, IAQ, acoustic conditions, lighting conditions, furnishings, aesthetics, technology, vibration conditions, and view conditions); satisfaction with the overall IEQ of classrooms; the perceived effect of IEQ on learning; and satisfaction with the course. A 7-point Likert-type scale was used to measure satisfaction (1 = very dissatisfied, 7 = very satisfied); learning (1 = hinders learning, 7 = enhances learning); and course satisfaction statements (1 = strongly disagree, 7 = strongly agree).

Most variables in the questionnaire were measured with a single item. However, satisfaction with view conditions was composed of two items: 1) your ability to see the presenter in your classroom and 2) your ability to see materials presented in your classroom. Course satisfaction included three items: 1) Generally speaking, I am very satisfied with this course; 2) I frequently think of quitting this course; and 3) I am generally satisfied with the kind of work I do in this course. Table 2 shows the variables (independent and dependent), the questions that measured each variable, and the scale used for each measure.

Table 2. Variables, Measures, and Scales

After approval by the Institutional Review Board, data were collected through an online survey tool. The SPOES team announced via email that students would be invited to a voluntary online survey to evaluate their satisfaction with the classrooms, their perceptions of the effect of IEQ on learning, and their course satisfaction following the new building design. Students were given a URL link to the online survey and eight days to complete it; one reminder was sent after seven days. Only completed surveys were used for data analysis.

Data Analysis

The data analysis technique chosen for this study was path analysis, a statistical technique primarily used to examine the comparative strength of direct and indirect relationships among variables (Lleras, 2005). The rationale for using path analysis was that the authors could explicitly examine how the chosen variables relate to one another and thus develop causal hypotheses about the sequential processes influencing a particular student outcome. Another advantage was that the authors could decompose the effect of each variable on a given student outcome into direct and indirect effects while testing the path model (Lleras, 2005).

A series of data analysis procedures was applied. First, the data were purified through a standard procedure for handling missing responses, dealing with outliers, and checking the normality assumption required for path analysis. Some missing data were replaced using the mean imputation method. Univariate outliers, defined as standardized scores (z-scores) beyond ±3.0, and multivariate outliers, defined as cases whose squared Mahalanobis distance had a low p value (less than .001), were carefully considered to avoid biased inferences when analyzing the data (Byrne, 2010; Kline, 2011). To determine whether the data were normal, skewness and kurtosis were checked; for a normal distribution, the absolute value of skewness should be less than 3 and the absolute value of kurtosis should be less than 10 (Kline, 2011). Second, the internal consistency of variables that had multiple items (i.e., view conditions and course satisfaction) was checked using Cronbach’s alpha reliability scores (Gefen, Straub, & Boudreau, 2000). When acceptable reliability at a .70 or .80 cutoff value (Henson, 2001; Nunnally & Bernstein, 1994) was indicated, the composite measure of the items of each variable was used for further analysis. For testing the hypotheses, path analysis was used to examine the simultaneous relationships between students’ satisfaction with specific IEQ criteria of classrooms, their satisfaction with the overall IEQ of classrooms, the perceived effect of IEQ on their learning, and their satisfaction with courses.
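The screening steps described above can be illustrated with a short script. This is a minimal sketch under stated assumptions, not the authors’ code: it assumes the survey responses sit in a numeric pandas DataFrame with one column per 7-point item, and that pandas, NumPy, and SciPy are available; all function and column names are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats

def screen_responses(responses: pd.DataFrame) -> pd.DataFrame:
    """Illustrative purification steps: mean imputation, outlier flags,
    and normality checks (cf. Byrne, 2010; Kline, 2011)."""
    data = responses.copy()

    # 1. Replace missing values with each variable's mean (mean imputation).
    data = data.fillna(data.mean(numeric_only=True))

    # 2. Flag univariate outliers: standardized scores beyond +/-3.0.
    z = data.apply(stats.zscore)
    univariate_outliers = (z.abs() > 3.0).any(axis=1)

    # 3. Flag multivariate outliers: squared Mahalanobis distance with
    #    p < .001 on a chi-square distribution (df = number of variables).
    centered = data - data.mean()
    inv_cov = np.linalg.pinv(np.cov(data.T))
    d2 = np.einsum("ij,jk,ik->i", centered.values, inv_cov, centered.values)
    p_values = 1 - stats.chi2.cdf(d2, df=data.shape[1])
    multivariate_outliers = p_values < .001

    # 4. Check univariate normality: |skewness| < 3 and |kurtosis| < 10.
    normality = pd.DataFrame({"skewness": data.skew(),
                              "kurtosis": data.kurtosis()})

    print(f"Univariate outliers flagged: {univariate_outliers.sum()}")
    print(f"Multivariate outliers flagged: {multivariate_outliers.sum()}")
    print(normality)
    return data

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items measuring one variable."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```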

The fit of the path model was evaluated using various model-fit indices, such as the normed chi-square (χ2/df), the goodness-of-fit index (GFI), the adjusted goodness-of-fit index (AGFI), the normed fit index (NFI), the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the root mean square error of approximation (RMSEA), to provide an accurate evaluation of model fit (Byrne, 2010; Harrington, 2009). The recommended range of χ2/df for a good model fit is between 2.0 and 5.0 (Tabachnick & Fidell, 2007; Wheaton, Muthen, Alwin, & Summers, 1977). GFI and AGFI values of .90 or greater indicate well-fitting models (Hooper, Coughlan, & Mullen, 2008). NFI, CFI, and TLI values close to .95 or greater indicate a good model fit (Hu & Bentler, 1999). RMSEA values less than .08 suggest an adequate model fit (Browne & Cudeck, 1993).
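To make these cutoffs concrete, the sketch below simply compares a set of fit indices against the thresholds cited above. The function and variable names are illustrative rather than part of the study’s analysis; the example values are the fit statistics reported later in the Results section.

```python
# Cutoffs as cited above (Wheaton et al., 1977; Hooper et al., 2008;
# Hu & Bentler, 1999; Browne & Cudeck, 1993). Illustrative only.
FIT_CRITERIA = {
    "chi2/df": lambda v: 2.0 <= v <= 5.0,
    "GFI":     lambda v: v >= .90,
    "AGFI":    lambda v: v >= .90,
    "NFI":     lambda v: v >= .95,
    "CFI":     lambda v: v >= .95,
    "TLI":     lambda v: v >= .95,
    "RMSEA":   lambda v: v < .08,
}

def evaluate_fit(indices: dict) -> dict:
    """Return True/False for each reported index against its cutoff."""
    return {name: rule(indices[name])
            for name, rule in FIT_CRITERIA.items() if name in indices}

# Example using the fit statistics reported in the Results section.
reported = {"chi2/df": 3.521, "GFI": .992, "AGFI": .930, "NFI": .991,
            "CFI": .994, "TLI": .955, "RMSEA": .063}
print(evaluate_fit(reported))  # every index meets its recommended cutoff
```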

Limitation

In this study, most variables, except view conditions and course satisfaction, were measured with only one item. Multiple-item measures should be used to establish a more reliable test and to better represent the underlying dimensions of each variable. Student learning was a self-perceived measure, not a directly assessed learning performance; self-assessment is subjective, which may decrease the accuracy of the assessment. Further studies using objective measures of student outcomes are needed to examine the relationships between variables more accurately. The sample size was large enough to yield reasonably stable estimates, but the data were collected from only one building. The conceptual model of relationships between variables needs to be confirmed by collecting data from a number of buildings to provide consistent evidence.

Results
Data screening

Missing data were minimal and were replaced with the mean of each variable. When variables with multiple items were checked, the negatively worded item was reverse scored so that it correlated positively with the other items. As shown in Table 3, the one negative item under course satisfaction, “I frequently think of quitting this course” (M = 2.09, SD = 1.70), was reverse scored in a positive direction (M = 5.91, SD = 1.70). Several univariate outliers with standardized scores beyond ±3.0 were identified for each variable. To determine whether the detected outliers should be deleted, the raw scores of the outliers for each variable were compared with the mean and standard deviation of the corresponding variable. The descriptive output indicated that the means of the variables ranged from 5.35 to 6.34, with standard deviations ranging from 0.88 to 1.70. All outliers showed scores less than 4. Given the high mean score of each variable, cases less than 4 might be identified as outliers; however, negative as well as positive perceptions of the physical classroom environment should be taken into consideration when analyzing the data. Thus, the researchers decided not to drop the univariate outliers detected in the process of data purification. For the same reason, several multivariate outliers whose squared Mahalanobis distance had a p value less than .001 were not deleted. All variables were normally distributed (-1.82 ≤ skewness ≤ -.96, .83 ≤ kurtosis ≤ 4.78). Finally, a total of 631 cases were retained for path analysis.
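As a quick check of the reverse-scoring arithmetic, on a 1-to-7 scale the reversed value is the sum of the scale endpoints minus the original response, which reproduces the means reported above:

$$x_{\text{reversed}} = (1 + 7) - x = 8 - x, \qquad 8 - 2.09 = 5.91.$$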

When Cronbach’s alpha was checked for the variables with multiple items, view conditions (.81) and course satisfaction (.70) both showed acceptable internal consistency reliability. The composite measures of the multiple items were used to represent each variable in the path analysis. Table 3 summarizes the data conditions for each variable.

Table 3. Mean, Standard Deviation, Skewness, and Kurtosis (N=631)

Hypothesis testing

Path analysis using maximum likelihood estimation was used to simultaneously examine the hypothesized relationships between students’ satisfaction with specific IEQ criteria of the classroom, their satisfaction with the overall IEQ of the classroom, their perceived effect of IEQ on learning, and their course satisfaction. The independent variables (satisfaction with specific IEQ criteria) were not highly correlated with one another; correlations ranged from .35 to .59 (see Table 4). The overall path model was statistically significant (χ2 = 31.7, df = 9, p < .001) and showed a very good fit, with χ2/df = 3.521, GFI = .992, AGFI = .930, NFI = .991, CFI = .994, TLI = .955, and RMSEA = .063. The resulting standardized regression weights of the paths are shown in Figure 5.
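A path model with this structure can be specified in lavaan-style syntax, for example with the Python package semopy if it is available. The sketch below is a schematic of the hypothesized model only, not the authors’ code; the data file and all variable (column) names are assumptions standing in for the composite measures described in the Methods section.

```python
import pandas as pd
import semopy

# Hypothesized paths: the nine specific IEQ satisfaction ratings predict
# overall IEQ satisfaction and perceived learning; overall IEQ satisfaction
# and perceived learning predict course satisfaction.
IEQ_CRITERIA = ("thermal + iaq + acoustics + lighting + furnishings"
                " + aesthetics + technology + vibration + view")
MODEL_DESC = f"""
overall_ieq ~ {IEQ_CRITERIA}
learning ~ {IEQ_CRITERIA} + overall_ieq
course_sat ~ overall_ieq + learning
"""

data = pd.read_csv("classroom_ieq_survey.csv")  # hypothetical data file

model = semopy.Model(MODEL_DESC)
model.fit(data, obj="MLW")          # maximum likelihood (Wishart) estimation
print(model.inspect(std_est=True))  # path estimates, incl. standardized weights
print(semopy.calc_stats(model).T)   # chi2, df, GFI, AGFI, NFI, CFI, TLI, RMSEA
```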

While students’ satisfaction with vibration conditions (β = .03, p = .28) was not a significant predictor of their satisfaction with the overall IEQ of the classroom, satisfaction with thermal conditions (β = .13, p < .001), IAQ (β = .08, p < .05), acoustic conditions (β = .11, p < .001), lighting conditions (β = .11, p < .01), furnishings (β = .23, p < .001), aesthetics (β = .13, p < .001), technology (β = .13, p < .001), and view conditions (β = .10, p < .01) were significantly related to satisfaction with the overall IEQ of the classroom. Satisfaction with furnishings was the strongest predictor of students’ satisfaction with the overall IEQ of the classroom. Therefore, hypothesis 1 was partially supported.

Students’ satisfaction with thermal conditions (β = .09, p < .01), IAQ (β = -.10, p < .01), acoustic conditions (β = .09, p < .01), aesthetics (β = .08, p < .05), technology (β = .13, p < .001), vibration conditions (β = .07, p < .05), and view conditions (β = .17, p < .001) significantly influenced students’ perceived effect of IEQ on learning. Satisfaction with view conditions (the ability to see the presenter and the materials presented) was the strongest factor influencing students’ perceived effect of IEQ on learning, followed by the technology provided for learning. Interestingly, students’ perceived effect of IEQ on learning was not significantly accounted for by satisfaction with lighting conditions (β = .00, p = .994) or furnishings (β = -.03, p = .392). Therefore, hypothesis 2 was partially supported.

Students’ satisfaction with the overall IEQ of the classroom significantly influenced their perceived effect of IEQ on learning (β = .47, p < .001); thus, hypothesis 3 was supported. While students’ satisfaction with the overall IEQ of the classroom did not significantly influence their course satisfaction directly (β = .04, p = .449), the indirect path from this variable to course satisfaction, mediated through students’ perceived effect of IEQ on learning, was significant (indirect effect, β = .12, p < .05). Therefore, hypothesis 4 was not supported, but hypothesis 5 was supported. Students’ perceived effect of IEQ on learning significantly influenced their course satisfaction (β = .26, p < .001). Therefore, hypothesis 6 was supported.
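The reported indirect effect is consistent with the standard path-analytic decomposition, in which an indirect effect is the product of the standardized coefficients along the mediated path:

$$\beta_{\text{indirect}} = \beta_{\text{overall IEQ} \rightarrow \text{learning}} \times \beta_{\text{learning} \rightarrow \text{course satisfaction}} = .47 \times .26 \approx .12.$$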

Table 4. Correlations among Independent Variables

Figure 5. Path diagram with standardized regression weights, direct and indirect effects, and squared multiple correlations

Discussion and Implications

This study investigated simultaneous relationships between the IEQ of classrooms and student outcomes, such as satisfaction with IEQ, perceived learning, and course satisfaction, by developing a hypothesized conceptual model and testing the fit of the data to the model. The findings of the path analysis indicated significant relationships between students’ satisfaction with thermal conditions, IAQ, acoustic conditions, lighting conditions, furnishings, aesthetics, technology, and view conditions and their satisfaction with the overall IEQ of the classroom environment. Further, students’ satisfaction with the overall IEQ of the classroom environment, which was influenced by their satisfaction with these specific IEQ criteria, significantly enhanced their perceived learning. In addition, students’ satisfaction with thermal conditions, IAQ, acoustic conditions, aesthetics, technology, vibration conditions, and view conditions directly influenced their perceived learning. These findings support previous studies indicating that the IEQ of classrooms has a positive effect on students’ satisfaction and learning (Felix & Brown, 2011; Heschong Mahone Group, 1999; Mendell & Heath, 2005).

However, this study showed that although students were satisfied with the lighting conditions and furnishings of their classrooms, their satisfaction with these IEQ criteria did not significantly influence their perceived learning. The finding that lighting and furnishings did not directly contribute to students’ learning is inconsistent with previous studies (Cornell, 2002; Heschong Mahone Group, 2003; Kelting & Montoya, 2011; Lei, 2010) and must be confirmed and investigated further. For example, the setting of this study had non-flexible tables, movable seating, and controllable overhead lighting, all of which may contribute to students’ satisfaction, but the extent of this contribution was not identified in this study. Therefore, future studies should also be conducted in classrooms that have various furniture and lighting types and arrangements to confirm these results.

Researchers have found linkages between specific IEQ criteria and student performance. For example, Strange and Banning (2001) linked lighting to improved student task performance, and Cash (1993) and Earthman (2004) linked IAQ to student learning. However, no other researchers have tried to comprehensively investigate the relationships among students’ satisfaction with IEQ criteria, their satisfaction with the overall classroom environment, and their learning. Further, little research was found on the effect of students’ satisfaction with classroom IEQ and their enhanced learning on their course satisfaction. In this study, students who reported that IEQ had a high level of perceived effect on their learning were also satisfied with their courses. In addition, students’ satisfaction with the overall IEQ of the classroom indirectly influenced their course satisfaction when they perceived a high level of effect of IEQ on learning.

The fact that these relationships occur in sustainably designed classrooms lends further complexity to the issues to be studied. Following the B3-MSBG means that not only were sustainability criteria met, but best practices in IEQ were also used as benchmarks, given the nature of the B3-MSBG. Using path analysis to explore simultaneous relationships in real classroom environments and testing the findings against a theoretical model means that other models can now be built to explore simultaneous relationships, which is how human behavior actually occurs. The value of path analysis is evident: simultaneous relationships were found that affect both satisfaction and learning. These are unique aspects of this study that contribute to our understanding of these relationships.

Future studies could both confirm and build on the methods, framework, and findings of this study. Understanding the boundaries and generalizability of our findings requires additional studies that use objective measures of student outcomes (e.g., GPA, attendance rates, course evaluation scores) rather than students’ perceptions of these outcomes; such objective outcomes can be directly influenced by educational institutions and are therefore more managerially relevant. There is also a need for studies that employ multiple items for each variable instead of relying on a single item. This helps avoid the well-known problem of common-method variance, which can lead to inflated correlations between the measures of antecedents and consequences (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). Additionally, using multiple items to investigate more detailed relationships between classroom IEQ and student outcomes would make it possible to explore specific features of each IEQ criterion. For example, if students were dissatisfied with the IAQ, what component contributed to that dissatisfaction: stagnant air, odors, or something else? Another suggestion for future research is to develop psychometric scales that can measure the relationship of the physical environment to student outcomes. Again, this can lead to the design of classroom environments that predict greater student success.

Conclusions

This study investigated students’ perceptions of the effect of various IEQ criteria of classrooms on their satisfaction with the overall classroom physical environment, their perceived learning, and, subsequently, their course satisfaction in a newly designed sustainable campus building. Further, this study developed and tested a conceptual model of the simultaneous relationships of IEQ criteria to student outcomes.

The findings of this study indicated mostly positive results from the use of sustainable IEQ criteria in the new classroom environments. The generally positive contribution that classrooms make to students’ satisfaction and learning concurs with many other researchers (Earthman, 2004; Heschong Mahone Group, 1999; Mendell & Heath, 2005) who have investigated these issues. This study provided empirical evidence that designing a classroom with attention to sustainable IEQ criteria, e.g., thermal conditions, IAQ, acoustic conditions, lighting conditions, furnishings, aesthetics, technology, and view conditions, is associated with positive student outcomes, including students’ overall satisfaction with classroom IEQ and its perceived effect on their learning, which in turn lead to satisfaction with courses. However, additional study is warranted to ensure that learning is measured more quantifiably.

The findings of this study can be used to underpin designers’ knowledge of IEQ in higher education classroom environments. This offers an opportunity for educational institutions to use classroom design as a means to increase desirable student outcomes. Because of the large number of variables that can affect students’ satisfaction and learning, it is important for designers to understand how individual variables affect student outcomes, specifically the variables that are within designers’ control. With a better understanding of how these variables affect students’ satisfaction and learning, designers can make informed decisions about which variables are the most important and warrant project dollars to ensure the highest level of students’ satisfaction and learning.

References

Baker, L., & Bernstein, H. (2012). The impact of school buildings on student health and performance: A call for research. The Center for Green Schools and McGraw-Hill Research Foundation. Retrieved from http://mcgraw-hillresearchfoundation.org/wp-content/uploads/2012/02/GreenSchoolsWP-2012.pdf

Brookhart, S. M. (1999). The art and science of classroom assessment: The missing part of pedagogy. ASHE-ERIC Higher Education Report, 27(1), 1-128. Retrieved from http://www.eric.ed.gov/PDFS/ED432937.pdf

Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136-162). Newbury Park, CA: Sage.

Brown, M. B., & Lippincott, J. K. (2003). Learning spaces: More than meets the eye. Educause Quarterly, 26(1), 14-16. Retrieved from http://net.educause.edu/ir/library/pdf/eqm0312.pdf.

Byrne, B. M. (2010). Structural equation modeling with AMOS: Basic concepts, applications, and programming (2nd ed.). New York, NY: Taylor and Francis Group.

Cash, C. S. (1993). Building condition and student achievement and behavior. Unpublished doctoral dissertation. Virginia Polytechnic Institute and State University, Blacksburg, VA.

Castellucci, H. I., Arezes, P. M., & Viviani, C. A. (2010). Mismatch between classroom furniture and anthropometric measures in Chilean schools. Applied Ergonomics, 41(4), 563-568. doi: 10.1016/j.apergo.2009.12.001

Chung, J. W. Y., & Wong, T. K. S. (2007). Anthropometric evaluation for primary school furniture design. Ergonomics, 50(3), 323-334. doi:10.1080/00140130600842328

Cornell, P. (2002). The impact of changes in teaching and learning on furniture and the learning environment. New Directions for Teaching and Learning, 92, 33–42. doi: 10.1002/tl.77

Cradler, J., McNabb, M., Freeman, M., & Burchett, R. (2002). How does technology influence student learning? Learning & Leading with Technology, 29(8), 46-56.

Daisey, J. M., Angell, W. J., & Apte, M. G. (2003). Indoor air quality, ventilation and health symptoms in schools: An analysis of existing information. Indoor Air, 13, 53–64. doi: 10.1034/j.1600-0668.2003.00153

Duque L. C., & Weeks, J. R. (2010). Towards a model and methodology for assessing student learning outcomes and satisfaction. Quality Assurance in Education, 18(2), 84-105. doi: 10.1108/09684881011035321

Duyar, I. (2010). Relationship between school facility conditions and the delivery of instruction: Evidence from a national survey of school principals. Journal of Facilities Management, 8(1), 8-25. doi: 10.1108/14725961011019058.

Earthman, G. I. (2004). Prioritization of 31 criteria for school building adequacy. Baltimore: American Civil Liberties Union Foundation of Maryland. Retrieved from http://schoolfunding.info/policy/facilities/ACLUfacilities_report1-04.pdf

Earthman, G. I., & Lemasters, L. K. (2009). Teacher attitudes about classroom conditions. Journal of Educational Administration, 47(3), 323–335. doi:10.1108/09578230910955764.

EPA (U.S. Environmental Protection Agency), Office of Air and Radiation. (1991). Indoor air facts No. 4 (revised): Sick building syndrome. Retrieved from http://www.epa.gov/iaq/pdfs/sick_building_factsheet.pdf

Felix, E., & Brown, M. (2011). The case for a learning space performance rating system. Journal of Learning Spaces, 1(1). Retrieved from http://libjournal.uncg.edu/ojs/index.php/jls/article/view/277/170

Fisk, W. J. (2000). Estimates of potential nationwide productivity and health benefits from better indoor environments: An update. In J. Spengler, J. M. Samet, & J. F. McCarthy (Eds.), Indoor air quality handbook (pp. 4.1–4.36). New York, NY: McGraw Hill.

Fleming, D., & Storr, J. (1999). The impact of lecture theatre design on learning experience. Facilities, 17(7/8), 231-236. doi: 10.1108/02632779910270186

Gefen, D., Straub, D. W., & Boudreau, M-C. (2000). Structural equation modeling and regression: Guidelines for research practice. Communications of the AIS, 4(7), 170.

Graetz, K. A. & Goliber, M. J. (2002). Designing collaborative learning places: Psychological foundations and new frontiers. New Directions for Teaching and Learning, 92, 13-22.

Harrington, D. (2009). Confirmatory factor analysis. New York, NY: Oxford University Press.

Henson, R. K. (2001). Understanding internal consistency reliability estimates: A conceptual primer on coefficient alpha. Measurement and Evaluation in Counseling and Development, 34(3), 177-189.

Heschong Mahone Group (1999). Daylighting in schools: An investigation into the relationship between daylight and human performance. Retrieved from http://www.h-m-g.com/downloads/Daylighting/order_daylighting.htm

Heschong Mahone Group (2003). Windows and classrooms: A study of student performance and the indoor environment (Technical Report No. P500-03-082-A-7). Retrieved from http://www.h-m-g.com/downloads/Daylighting/order_daylighting.htm

Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modeling: Guidelines for determining model fit. The Electronic Journal of Business Research Methods, 6(1), 53-60. Retrieved from www.ejbrm.com

Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural equation modeling, 6(1), 1-55. doi: 10.1080/10705519909540118

Humphreys, M. A. (2005). Quantifying occupant comfort: Are combined indices of the indoor environment practical? Building Research & Information, 33(4), 317-325. doi: 10.1080/09613210500161950

Jago, E., & Tanner, K. (1999). Influence of the school facility on student achievement: Lighting; color. Athens, GA: Dept. of Educational Leadership, University of Georgia. Retrieved from http://www.coe.uga.edu/sdpl/researchabstracts/visual.html

Kelting, S., & Montoya, M. (2011). Green building policy, school performance, and educational leaders’ perspectives in USA. Proceedings of the Sixth International Conferences on Construction in the 21st Century. Kuala Lumpur, Malaysia.

Khalil, N., Husin, H. N., Wahab, L. A., Kamal, K. S., & Mahat, N. (2011). Performance evaluation of indoor environment: Towards sustainability for higher educational buildings. US-China Education Review, 2, 188-195.

Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York, NY: The Guilford Press.

Lei, S. A. (2010). Classroom physical design influencing student learning and evaluations of college instructors: A review of literature. Education, 131(1), 128-134.

Lleras, C. (2005). Path analysis. Encyclopedia of Social Measurement, 3, 25-30.

Mehrabian, A., & Russell, J. (1974). An approach to environmental psychology. Cambridge, MA: The MIT Press.

Mendell, M. J., Fisk, W. J., Kreiss, K., Levin, H., Alexander, D., Cain, W. S., Girman, J. R., Hines, C. J., Jensen, P. A., Milton, D. K., Rexroat, L. P., & Wallingford, K. M. (2002). Improving the health of workers in indoor environments: Priority research needs for a national occupational research agenda. American Journal of Public Health, 92, 1430–1440.

Mendell, M. J., & Heath, G. A. (2005). Do indoor pollutants and thermal conditions in schools influence student performance? A critical review of the literature. Indoor Air, 15, 27–52. doi: 10.1111/j.1600-0668.2004.00320.

Milanese, S., & Grimmer, K. (2004). School furniture and the user population: An anthropometric perspective. Ergonomics, 47(4), 416-426. doi: 10.1080/0014013032000157841

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.

Podsakoff, P.M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903. doi: 10.1037/0021-9010.88.5.879

Roberts, L. W. (2009). Measuring school facility conditions: An illustration of the importance of purpose. Journal of Educational Administration 47(3), 368–380. doi:10.1108/09578230910955791

Schneider, M. (2002). Do school facilities affect academic outcomes? Washington, D.C.: National Clearinghouse for Educational Facilities. Retrieved from www.edfacilities.org/pubs/outcomes.pdf.

Strange, C. C. & Banning, J. H. (2001). Educating by design: Creating campus learning environments that work. San Francisco: Jossey-Bass.

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). New York, NY: Allyn and Bacon.

Tanner, C. K. (2009). Effects of school design on student outcomes. Journal of Educational Administration, 47(3), 381-399. doi: 10.1108/09578230910955809

Tanner, C. K., & Langford, A. (2003). The importance of interior design elements as they relate to student outcomes. Carpet and Rug Institute. Retrieved from http://www.eric.ed.gov/PDFS/ED478177.pdf.

Terenzini, P.T., Cabrera, A. F., Colbeck, C. L., Parente, J. M., & Bjorklund, S. A. (2001). Collaborative learning vs. lecture/discussion: Students’ reported learning gains. Journal of Engineering Education, 90(1), 123-130.

Uline, C., & Tschannen-Moran, M. (2008). The walls speak: The interplay of quality facilities, school climate, and student achievement. Journal of Educational Administration, 46(1), 55-73. doi: 10.1108/09578230810849817

U.S. General Accounting Office. (1995). School facilities: Condition of America’s schools (GAO Report No. HEHS-95-61). Retrieved from http://www.gao.gov/products/HEHS-95-61

Wheaton, B., Muthen, B., Alwin, D. F., & Summers, G. (1977). Assessing reliability and stability in panel models. Sociological Methodology, 8, 84-136.

World Health Organization (WHO). (1986). Indoor air quality research: Report on a WHO meeting (EURO Reports and Studies 103). Stockholm.