 Review
 Open Access
Selection criteria for linear regression models to estimate individual tree biomasses in the Atlantic Rain Forest, Brazil
Carbon Balance and Management volume 13, Article number: 25 (2018)
Abstract
Background
Biomass models are useful for several purposes, especially for quantifying carbon stocks and dynamics in forests. Selecting appropriate equations from a fitted model is a process that can involve several criteria, some widely used and others used to a lesser extent. This study analyzes six selection criteria for models fitted to six sets of individual biomass data collected from woody indigenous species of the Tropical Atlantic Rain Forest in Brazil. Six models were examined and the respective fitted equations evaluated by the residual sum of squares, adjusted coefficient of determination, absolute and relative estimates of the standard error of estimate, and the Akaike and Schwarz (Bayesian) information criteria. The aim of this study was to analyze the numeric behavior of these model selection criteria and to discuss their ease of interpretation. The importance of residual analysis in model selection is stressed.
Results
The adjusted coefficient of determination (\( R^{2}_{adj.} \)) and the standard error of estimate in percentage (Syx%) are relative model selection criteria and are not affected by sample size or by the scale of the response variable. The sum of squared residuals (SSR), the absolute standard error of estimate (Syx), the Akaike information criterion and the Schwarz information criterion, in turn, depend on these quantities. The best fit model was always the same within a given data set regardless of the model selection criterion considered (except for SSR in two cases), indicating that the criteria tend to converge to a common result. However, such criteria are not always closely related across different data sets. General model selection criteria are indicative of the average goodness of fit, but do not capture bias and outlier effects. Graphical residual analysis is a useful tool for detecting these effects and must always be used in model selection.
Conclusions
It is concluded that the criteria for model selection tend to lead to a common result, regardless of their mathematical formulation and statistical rationale. Relative measures of goodness of fit are easier to interpret than absolute ones. Careful graphical residual analysis must always be used to confirm the performance of the models.
Background
There are different methods of calculating biomass and carbon storage in forests. Usually these methods combine information from forest inventories with expansion factors or with fitted linear regression models [1]. Biomass models, usually fitted by linear regression (and called allometric equations by some authors), can be used to obtain indirect estimates from tree measurement data (such as dbh and height, among others) coming from forest inventories, and are widely used for this purpose. Soares and Tomé [2] advocate the use of biomass equations, because the architecture of trees changes over time and under prevailing site conditions, altering the fixed proportion implicit in the expansion factors. Equations for biomass estimation require the examination of different models, which must be judged by statistical indicators of goodness of fit. Selecting the best model is, in principle, a simple task, since there are well known criteria for this purpose, and many tools for choosing the “best model” have been suggested in the literature [3]. However, objectives in modeling other than prediction can exist, which require an integrated view of the different model selection criteria.
Model selection has occupied the minds of many researchers, and a large number of publications devoted to this subject can be found in the literature [4,5,6,7,8,9,10,11,12]. In biomass estimation, particularly, this issue has not been deeply explored. Although model selection criteria for biomass estimation are widely used, a specific discussion of their meaning and application has not yet been published.
Criteria for model selection must incorporate goodness of fit and parsimony, allowing several candidate models to be compared simultaneously [13]. Among the most commonly adopted selection criteria are the following: the adjusted coefficient of determination, the maximum likelihood test, the Akaike information criterion, the Akaike information criterion corrected for small samples, and the Schwarz information criterion (also called Bayesian) [13]. There are variations of the mathematical formulations of these criteria in the literature, though their rationales are similar.
R^{2} (the coefficient of determination) is perhaps the measure of fit most widely used in linear regression modeling but, according to some authors, it has been used improperly [14]. After Anscombe's publication on R^{2} [15], various criticisms have been made of its use as a model selection criterion. His analysis became famous when he presented four different data series that yield the same value of R^{2} when fitted with a straight-line model, the so-called “Anscombe's quartet”. Kvalseth [14] discussed several potential pitfalls of using R^{2} uncritically. Some authors consider this measure antiquated and subject to many restrictions [5, 11, 16].
One feature of R^{2} is that increasing the number of parameters causes a concomitant increase in its value, giving the false impression that a certain model is better than another. Another point is that models with different numbers of coefficients cannot be compared directly by R^{2}; therefore, the adjusted R^{2} should be used instead [17]. Other statistics traditionally employed are the absolute (Syx) and relative (Syx%) standard errors of estimate, as well as graphical residual analysis. Vanclay [18] suggests analyzing the data graphically, also noting the F-values of the regression and the prediction sum of squares (PRESS) statistic [12, 19].
The information criteria proposed by Akaike [20] and Schwarz [21] have been used and recommended for model selection. These alternative indices combine the ability to detect goodness of fit, and therefore the quality of the model, with a penalty on complex models that could otherwise mask the selection results.
Although this matter is of great importance for biomass and carbon modeling of woody species, we have not found in the literature research papers devoted to comparing the results obtained with a variety of data sets when different model selection criteria are applied. In this work, the behavior of six selection criteria is evaluated for estimating individual biomass through six linear regression models fitted to actual data of different woody species indigenous to the Tropical Atlantic Rain Forest in southern/southeastern Brazil.
The aim of this study was to analyze the behavior of six model selection criteria typically used to judge the goodness of fit of equations fitted to six different data series with wide biomass ranges. Besides the effects of sample size and response variable scale on these criteria, we also examined the numeric relations between them. We discuss the ease of interpretation of the model selection criteria and stress the importance of graphical residual analysis to detect bias in estimates.
Methods
Data sources
Six sets of dry biomass data were used in this study, totaling 330 individuals of various woody species indigenous to the Tropical Atlantic Rain Forest in southern/southeastern Brazil (Table 1). Data sets 1–3 are composed of aboveground biomass measurements (trunk + branches + foliage), whereas data sets 4–6 come from total biomass (aboveground + belowground) measurements. Biomass was measured through a destructive method (simple separation of compartments) [22], which consisted of weighing fresh biomass in the field and further analysis in the laboratory to obtain the oven dry biomass.
Data sets of plants with broad ranges of diameter at breast height (1.30 m from ground level) and of total height were deliberately utilized. Individual biomass averages ranged from 0.26 kg (Merostachys skvortzovii bamboo) up to 1493 kg (indigenous old-growth tree species in a mixed-species natural stand). All data sets had 30 plants, except for one of them (native species in restoration forest plantations, data set 4) with 180 plants. Data sets 5 and 6 are subsets of the 4th series, with a smaller number of cases, without and with outliers, respectively.
Data analysis
The dependent (response) variable in the regression models was w (oven dry biomass) and the independent (input) variables were dbh (diameter at breast height, i.e., 1.3 m above the ground, in cm) and h (total height, in m), as well as combinations of both. The models examined in this study were:
where β_{0}, β_{1}, β_{2} are the coefficients to be determined, ln is the natural (Napierian) logarithm, and e_{i} is the random error.
The models examined include formulations with 2, 3 and 4 coefficients. The purpose of this variation was to evaluate the effect of model complexity on the behavior of the model selection criteria. The equations obtained after fitting were evaluated by the model selection criteria (Table 2) and by graphical residual analysis. The statistical significance of each coefficient was examined by means of the t-test, with the following hypotheses: if H_{0} (β_{j} = 0) is not rejected, then x_{j} (the independent variable) should be removed from the model, because it does not influence the response w in a meaningful way; if H_{0} (β_{j} = 0) is rejected, then x_{j} contributes significantly to explaining the response of w.
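As a minimal sketch of this coefficient test (not the paper's code or data; the synthetic relations and the rough |t| > 2 cutoff are illustrative assumptions), the t statistics follow directly from an ordinary least squares fit:

```python
import numpy as np

# Illustrative sketch: test H0: beta_j = 0 for an OLS fit of w on dbh and h,
# using t = b_j / SE(b_j). Data are synthetic, not the study's measurements.
rng = np.random.default_rng(42)
n = 30
dbh = rng.uniform(5, 50, n)                       # cm
h = 0.8 * dbh ** 0.7 + rng.normal(0, 1, n)        # m, loosely tied to dbh
w = 0.005 * dbh**2 * h + rng.normal(0, 5, n)      # kg, hypothetical relation

X = np.column_stack([np.ones(n), dbh, h])         # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, w, rcond=None)
resid = w - X @ beta
k = X.shape[1]
sigma2 = resid @ resid / (n - k)                  # residual variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)             # covariance of coefficients
t_stats = beta / np.sqrt(np.diag(cov))

# |t| below ~2 (alpha = 0.05, n - k = 27 df) suggests dropping x_j
for name, t in zip(["b0", "b_dbh", "b_h"], t_stats):
    print(f"{name}: t = {t:.2f}")
```

In practice the critical t value should be read from the t distribution for the chosen significance level and n − k degrees of freedom.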
The equation fitting was carried out by means of the ordinary least squares method. For the logarithmic models, the predicted values were transformed back to the original response variable to calculate the model selection statistics. In these cases, the logarithmic bias (discrepancy) was corrected by Meyer's factor (MF):
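Meyer's correction factor is commonly expressed as \( MF = e^{S_{yx}^{2}/2} \), with \( S_{yx}^{2} \) the residual variance of the fit in logarithmic units. Assuming that formulation (the data and the log-log model below are synthetic, for illustration only), a sketch of the back-transformation is:

```python
import numpy as np

# Sketch of back-transforming a log-log biomass fit with Meyer's correction,
# assuming MF = exp(Syx^2 / 2) with Syx^2 the residual variance in log units.
rng = np.random.default_rng(0)
n = 30
dbh = rng.uniform(5, 50, n)
ln_w = -2.0 + 2.4 * np.log(dbh) + rng.normal(0, 0.3, n)  # hypothetical model

X = np.column_stack([np.ones(n), np.log(dbh)])
beta, *_ = np.linalg.lstsq(X, ln_w, rcond=None)
resid = ln_w - X @ beta
syx2 = resid @ resid / (n - 2)          # residual variance in log units
mf = np.exp(syx2 / 2)                   # Meyer's correction factor (> 1)

w_naive = np.exp(X @ beta)              # naive back-transform (biased low)
w_corrected = mf * w_naive              # bias-corrected biomass estimates
print(f"MF = {mf:.4f}")
```

Since MF > 1, the correction raises every naive back-transformed prediction, compensating for the downward bias of exponentiating the log-scale mean.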
The detection of influential points (outliers) in the fitting was performed by means of DFFITS and Cook's distance values [23, 24]. Normality and variance homogeneity were evaluated by the Shapiro–Wilk and White tests, respectively.
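A sketch of these influence diagnostics, computed from the hat matrix of an OLS fit, is given below; the data are synthetic with a deliberately planted outlier, the DFFITS form uses internally studentized residuals as an approximation, and the 4/n cutoff is a common rule of thumb rather than the paper's threshold:

```python
import numpy as np

# Influence diagnostics (Cook's distance, approximate DFFITS) for an OLS fit.
# Synthetic data with one planted outlier; thresholds are rules of thumb.
rng = np.random.default_rng(1)
n = 30
x = rng.uniform(5, 50, n)
y = 0.1 * x**2 + rng.normal(0, 10, n)
y[0] += 300                                   # planted outlier

X = np.column_stack([np.ones(n), x, x**2])
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat (projection) matrix
h = np.diag(H)                                # leverages
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta
k = X.shape[1]
s2 = e @ e / (n - k)
r = e / np.sqrt(s2 * (1 - h))                 # internally studentized residuals
cooks_d = r**2 * h / (k * (1 - h))            # Cook's distance
dffits = r * np.sqrt(h / (1 - h))             # DFFITS (approximate form)

flagged = np.where(cooks_d > 4 / n)[0]        # common cutoff: 4/n
print("influential points:", flagged)
```

The exact DFFITS definition uses externally studentized residuals; dedicated routines in statistical packages implement that refinement.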
Results and discussion
Results
The relationships of dbh and height with the respective biomasses were positive, as expected, with a greater or lesser degree of dispersion depending on the data series. The correlation of biomass with dbh was greater than with height, as measured by Pearson's coefficient (Fig. 1). All the linear correlations of biomass with dbh and h were statistically significant, so these two measures can properly be used as input variables in biomass modeling. Some coefficients of the equations were not statistically significant at the 1% level, indicating that the respective models could be reduced in number of parameters (Table 3). However, in order to keep consistency and avoid unnecessary complexity, we decided to maintain the original formulations throughout the analysis.
In general, the fittings for data sets 1–3 could be considered satisfactory regarding \( R^{2}_{adj.} \) and Syx%. However, a loss of accuracy for the models fitted to data set 3 was evidenced, as indicated by the higher value of Syx% in spite of the high \( R^{2}_{adj.} \) (Table 4). We noticed a remarkable reduction of \( R^{2}_{adj.} \) and increase of Syx% for data sets 4–6 in comparison with the previous ones. For data sets 4 and 5, the model fittings could be considered satisfactory if \( R^{2}_{adj.} \) figures alone are taken into account, but for data set 6 they could not. Considering only Syx%, the model fittings to data set 4 could not be regarded as acceptable, whereas those to data sets 5 and 6 could be regarded as fair. From this analysis we can say that the \( R^{2}_{adj.} \times Syx\;\% \) relationship is not always as clear as expected and that the model selection criteria are affected in different ways, depending on the data features and the model examined. Thus, decision making should not be based on a single measure.
The best fit to data sets 1 and 2 was obtained with model 2, considering all the model selection criteria, i.e., the lowest SSR, Syx, Syx%, AIC and BIC values and the largest \( R^{2}_{adj.} \). Equation (6) was the best for data sets 3, 5 and 6, and model 1 gave the best results for data set 4. The model selection criteria did not affect the best fit decision, except for SSR, which gave distinct results for data sets 5 and 6. Therefore, the best fit model does not change regardless of which criterion is used to rank the goodness of fit.
This work revealed a close relationship between the general model selection criteria within each data series, since they are all calculated on the basis of the squared differences between actual and predicted values, the SSR. The relations among them tend to be linear for all combinations of selection criteria, though some deviations from linearity regarding AIC and BIC were noticed (Fig. 2). The relations were direct, i.e., the larger the SSR, the larger the values of the selection criteria Syx, Syx%, AIC and BIC, and inverse for \( R^{2}_{adj.} \). From this analysis, it can be said that all selection criteria converge to a common result within the same data set.
The SSR values are closely related to the scale of the response variable, considering that SSR is an absolute measure of the squared differences between actual and estimated values. The same can be said of Syx. The effect of the unit of measure on data sets 1 and 2 is apparent in the values of these model selection criteria. AIC and BIC are transformed absolute measures of fit and behave somewhat differently: in the case of data sets 1 and 2, negative values were observed for the first and positive values for the second, suggesting that these measures are not affected by the unit of the response variable alone. It is important to note that the AIC and BIC values did not imply any change in the ranking of goodness of fit of the models, and hence no practical advantage in using them for this purpose was evidenced in this study.
The close relationship between the selection criteria did not hold when the data sets were analyzed altogether, even for the relative measures, i.e., \( R^{2}_{adj.} \) and Syx% (Fig. 3). As mentioned before, we detected that fitted biomass equations with high \( R^{2}_{adj.} \) may result in high Syx% (e.g. data set 4), which is not expected. Similarly, low \( R^{2}_{adj.} \) may be accompanied by relatively low Syx% (e.g. data set 6). This means that even these relatively straightforward and easy-to-understand criteria, widely used in model selection, may fail in decision making. Caution should be taken when using any of these model selection criteria.
Graphical residual analysis performed on all the models and data sets used in this study revealed important particularities of the fittings that were not apparent from the other model selection criteria (Figs. 4, 5). All the equations presented good general fitting criteria for data set 1, which could lead one to believe that any of these models would be reliable. However, graphical analysis detected the presence of bias in the residual distribution in some cases [e.g. Eqs. (1) and (3)]. Equation (1), for instance, failed the normality test. The same applies to data set 2.
Biases were also evidenced in the model fittings for data set 3. Equation (1), for example, which showed acceptable behavior by the general model selection criteria, gives biased biomass estimates and lacks normality of residuals. Residual analysis revealed strong biases in biomass prediction for data set 4, particularly in the estimates for small-sized individuals generated by the fitting of Eqs. (1) and (3). This was not detected by the general model selection criteria. All the models examined were negatively affected by lack of normality and by heteroscedasticity of residuals. In other words, all the equations fitted to this data set should, in principle, be rejected.
In addition, biased estimates were also noticed for Eqs. (1)–(3) fitted to data set 5. Although one model was considered the best fit to this data set by the model selection criteria, the residual analysis indicates that another model should be chosen. Finally, residual analysis showed that all fitted models overestimate the biomass of the large-sized individuals of data set 6. However, the range of the residuals of the models fitted to this data set, which showed poor model selection indicators, suggests that the estimates are not as bad as one might suppose from the other model selection criteria.
Thus, residual analysis can reveal facts that are invisible to the model selection criteria; it is useful both for detecting biases and for showing the magnitude of the residuals individual by individual, which is not possible with the general model selection criteria.
Discussion
Many distinct models have been proposed and several model selection indicators used in biomass estimation. Perhaps, in some cases, modelers and users do not care about the quality and reliability of such models. However, a superficial analysis of the general model selection criteria may lead to critical errors.
Some model selection criteria are particularly interesting and useful. Interpretation of \( R^{2}_{adj.} \) and Syx% values is straightforward and allows us to understand whether the fitting is good or not, while the other criteria are sometimes not so friendly. This does not necessarily mean that these are ideal criteria for model selection, nor that they are free of possible misleading interpretations, as shown here and emphasized in the literature.
\( R^{2}_{adj.} \) and Syx% are not affected by the magnitude of the response variable, since they are relative measures. In turn, SSR and Syx vary directly with the biomass unit, i.e., these statistics are directly affected by the scale of the dependent variable. AIC and BIC values are also affected by the scale of the dependent variable, but their changes in value do not maintain a direct relationship with its magnitude, because these information criteria are logarithmic transformations of SSR. It was observed that when the biomass values are in kg the corresponding AIC and BIC are negative, and when in grams they become positive. AIC cannot be used to compare models tested on different data sets [11]. The same can be said of BIC. Moreover, they cannot be used to compare models fitted to the same data set but with different units of the response variable. This should be taken into account in model selection.
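This unit dependence can be demonstrated numerically. The sketch below (synthetic data, and the common least-squares form AIC = n ln(SSR/n) + 2k with constants dropped, which may differ from the paper's exact formulation) shows that converting the response from kg to g shifts every AIC value by the same constant, so absolute values change, even in sign, while the ranking of the models does not:

```python
import numpy as np

# AIC shifts with the unit of the response variable while the model ranking
# is preserved. AIC here is the least-squares form n*ln(SSR/n) + 2k.
rng = np.random.default_rng(7)
n = 30
dbh = rng.uniform(5, 50, n)
w_kg = 0.08 * dbh**2.2 + rng.normal(0, 20, n)    # synthetic biomass in kg
w_g = 1000.0 * w_kg                              # same biomass in grams

def aic_ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ssr = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(ssr / len(y)) + 2 * X.shape[1]

X1 = np.column_stack([np.ones(n), dbh])          # model 1: w = b0 + b1*dbh
X2 = np.column_stack([np.ones(n), dbh, dbh**2])  # model 2: adds dbh^2

for y, unit in [(w_kg, "kg"), (w_g, "g")]:
    print(unit, aic_ols(X1, y), aic_ols(X2, y))
# Changing kg -> g multiplies SSR by 1e6, shifting every AIC by n*ln(1e6),
# so absolute values (even their sign) change but the ranking does not.
```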
Some absolute model selection measures (e.g. AIC and BIC) may not be sensitive to the existence of outliers, indicating that they may not be able to capture the effect of such abnormal data on model fitting. Outliers are not uncommon in forest biomass modeling, and the inability to detect them is very problematic. This was one of the arguments against R^{2} in Anscombe's [15] work and in the work of other authors who criticized this criterion.
It is fundamental at this point to highlight the importance of residual analysis in the selection of regression models for plant biomass estimation. This analysis is very helpful in verifying the presence of bias in model fitting. Taking data set 4 as an example, we can see seriously biased estimation for small-sized individuals, which was not evidenced in any other manner (Figs. 4, 5). Although the general criteria can be very helpful for model selection, the presence of outliers and bias in estimates can only be detected through residual analysis. Residual analysis can be used to check whether a model is adequate and/or to help discriminate the best fit when various models are fitted to the same data set.
The model selection criteria are related to each other, as a consequence of the formulation of the information criteria examined. If the parameters of an ordinary linear regression model are estimated by the maximum likelihood method [13, 25], the information criteria can be written as:

\( \mathrm{AIC} = -2\ln \left[ {L\left( {\hat{\theta }_{p} \mid y} \right)} \right] + 2p \qquad \mathrm{BIC} = -2\ln \left[ {L\left( {\hat{\theta }_{p} \mid y} \right)} \right] + p\ln n \)

where \( \ln \left[ {L\left( {\hat{\theta }_{p} \mid y} \right)} \right] \) is the maximized log-likelihood of the model parameters, p is the number of parameters and n is the sample size.
Assuming this relationship, the close practical relationship between the information criteria and \( R^{2}_{adj.} \) can be readily noticed, in spite of the theoretical differences among them (Fig. 2).
The literature is prolific in works criticizing the use of the coefficient of determination as a criterion for selecting models. Figueiredo Filho et al. [26] claim that there is no substantive significance in the use of R^{2} as an indicator of model fit. Many researchers have completely abandoned the use of the coefficient of determination, mainly after the publication by Anscombe [15].
Several authors have presented alternatives, praising one criterion and criticizing others. According to Vismara [16], criteria have been sought to select the model that best approximates the data from among several possibilities, with different functional relations and different numbers of parameters. The author describes the advantages of using AIC and suggests that it could be an excellent tool for selecting empirical models for predictions in the forest environment.
Burnham and Anderson [11], in turn, point out that AIC represents a new paradigm in the selection of models from empirical data and that the model selection based on the socalled “information theory” represents a quite different approach in the statistical science in comparison to the usual hypothesis tests.
Despite the favorable or unfavorable positions of several authors toward one criterion or another, it is evident that the criteria present similarities in their practical applications, in spite of the differences in their mathematical formulations and theoretical bases. This study shows that \( R^{2}_{adj.} \) and AIC are related to each other. No clear practical advantage of using AIC or BIC in model selection was evidenced in this research. AIC and BIC are strongly affected by the size of the data set in use, which makes it more difficult to use this approach in a broader and more generic analysis of model fitting. Although R^{2}, according to the literature, presents many limitations for use in model selection [5], the other criteria may show similar pitfalls.
Model selection criteria are general indicators of the behavior of the theoretical model against empirical data. They tend to give a good indication of the goodness of fit to the extent that the data have a regular pattern, i.e., without great dispersion and outliers, and that logical models are tested against the actual data. It is also important to point out that in regression modeling, as in any other sampling scheme, it is definitely important to use an amount of data that is representative of the real world. Perhaps the great sin of Anscombe's work was to force an illogical fit of the model to a data set consisting of only 11 values, and with outliers. The problem is in the data set itself and not in R^{2}. The database and the philosophy behind model fitting are more relevant in this sense.
On the other hand, the great merit of Anscombe's work was to highlight the importance of graphical data analysis before performing any model fitting. In this context, graphical analysis of the residuals should be considered the tool to help the modeler select one among the various tested models. The importance of residual analysis is widely addressed by Dubbelman [27] and Cook and Weisberg [24]. Looking only at \( R^{2}_{adj.} \), it could be concluded that the fittings made to data set 4 were good (or at least reasonable), but when we observe the distribution of residuals the weakness of the predictions is evident. By observing the values of AIC and BIC one could inadvertently conclude that there is not much difference in fit between data sets 5 and 6; it would not be possible to identify the presence of outliers in series 6.
\( R^{2}_{adj.} \), taking the criterion of Theil (1961), is based on the assumption that one of the specified models is correct. In this case, if \( \hat{\sigma }_{j}^{2} = \frac{{SSR_{j} }}{{(n - k)_{j} }} \) is the estimate of σ^{2} for the jth model, then \( E(\hat{\sigma }_{j}^{2} ) = \sigma^{2} \) for the correct model, but \( E(\hat{\sigma }_{j}^{2} ) \ge \sigma^{2} \) for a poorly specified model. According to Maddala [28], a model that has all the explanatory variables of the correct model, plus a number of irrelevant variables, will also result in \( E(\hat{\sigma }_{j}^{2} ) = \sigma^{2} \). Thus, choosing the model with the minimum \( \hat{\sigma }^{2} \) leads, on average, to choosing the correct model [29]. Since minimizing \( \hat{\sigma }^{2} \) is equivalent to maximizing \( R_{adj.}^{2} \), the best model is the one with the highest \( R^{2}_{adj.} \), i.e., the maximum \( R^{2}_{adj.} \) rule.
Maddala and Lahiri [29] indicate that the main problem with this rule is that a model containing all the explanatory variables of the correct model, plus a number of irrelevant variables, will also result in \( E(\hat{\sigma }_{j}^{2} ) = \sigma^{2} \). Thus, this rule alone does not allow one to choose the correct model. Ebbeler [30] discussed this aspect, concluding that the probability of choosing the correct model is considerably smaller than 1 when another model includes a number of irrelevant variables. The effect of omitting important variables or including irrelevant ones is widely discussed by Gujarati [17] and Greene [31]. The F-test of the analysis of variance informs the statistical significance of the fitted equation, which is at the same time a measure of the statistical significance of R^{2}. According to Gujarati [17], the F statistic is given by \( F = \frac{SSE/(k - 1)}{SSR/(n - k)} = \left( {\frac{n - k}{k - 1}} \right)\left( {\frac{SSE}{SST - SSE}} \right) = \left( {\frac{n - k}{k - 1}} \right)\left( {\frac{SSE/SST}{1 - (SSE/SST)}} \right), \) and, since \( R^{2} = \frac{SSE}{SST}, \) the value of F can be calculated by \( F = \left( {\frac{n - k}{k - 1}} \right)\left( {\frac{{R^{2} }}{{1 - R^{2} }}} \right) = \frac{{R^{2} /(k - 1)}}{{(1 - R^{2} )/(n - k)}}, \) where SSE is the explained sum of squares and SST the total sum of squares. The F test is a comprehensive test of the equation and in most cases is taken into account as a criterion in the choice of an equation; this only reinforces the notion that the value of R^{2} should not be simply dismissed as a criterion for choosing an equation.
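The identity between the sum-of-squares form and the R^{2} form of the F statistic can be checked numerically. The sketch below uses synthetic data (n = 30 observations, k = 3 coefficients, both assumptions for illustration) and relies on the OLS decomposition SST = SSE + SSR, which holds for models with an intercept:

```python
import numpy as np

# Numerical check: F computed from sums of squares equals F computed from R^2.
rng = np.random.default_rng(3)
n, k = 30, 3
x = rng.uniform(5, 50, n)
y = 2.0 + 0.5 * x + 0.01 * x**2 + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta
sst = np.sum((y - y.mean()) ** 2)     # total sum of squares
sse = np.sum((yhat - y.mean()) ** 2)  # explained sum of squares
ssr = np.sum((y - yhat) ** 2)         # residual sum of squares
r2 = sse / sst

f_from_ss = (sse / (k - 1)) / (ssr / (n - k))
f_from_r2 = (r2 / (k - 1)) / ((1 - r2) / (n - k))
print(f_from_ss, f_from_r2)           # the two forms agree
```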
The literature on model selection has brought to light a number of statistical tests that can be performed for this purpose. There is no ideal criterion for model selection, especially for tree biomass; the choice depends on the objectives of the modeling and on the data at hand [5, 32]. Therefore, it is essential in model fitting, particularly for the biomass of woody plants, that certain basic steps be followed, namely: (1) make a broad exploratory data analysis; (2) study the behavior of the variables and their trends; (3) select appropriate models to be tested, which should describe the cause-and-effect relations between the variables, even if only empirically; (4) use the various model selection criteria to reach the best choice, particularly graphical analysis of residuals; (5) use the fitted equations with parsimony, avoiding extrapolation beyond their range of validity.
It was evidenced that no statistical test alone was able to indicate the equation to be used. Even when the overall tests were combined, they ran into difficulties, especially when the individual tests for the coefficients were evaluated. In addition, even when the comprehensive test and the individual tests were analyzed together, in some cases the selected equation could not meet some of the assumptions tested for validation of the classical linear regression model. This indicates that the choice of equations must pass through several stages. We suggest starting with the evaluation of the assumptions of linear regression, followed by the analysis of the individual coefficients (significance and standard deviation of the coefficients) and the assessment of the overall quality of the fit (based on a series of statistics), and finally performing the residual analysis in order to find the best specification for the model.
If the main concern of the linear regression analysis were only statistical inference on the coefficient estimates, the method of least squares would be good enough. However, linear regression analysis involves inference from the sample estimators to the population. For this reason, the assumptions delineated for a classical linear regression model, which are addressed in detail by Gujarati [17], Greene [31] and Wooldridge [33], should be verified.
In general, when modeling biomass, it is not assumed that the statistical model to be fitted to the data is known beforehand, so that the only issue to be addressed would be the estimation of the coefficients. Thus, the choice of models for biomass is performed after statistical analysis of the fittings. Usually the first evaluation is made on the overall goodness-of-fit statistics of the equation. However, these do not take into account some basic assumptions of the linear regression model, for example: mean random error equal to zero, homoscedasticity of the errors, absence of autocorrelation between the errors, proper specification of the regression model, and absence of multicollinearity. Heteroscedasticity and autocorrelation depend on the particular values of the explanatory variables in the sample [29], and these two assumptions are easily violated, especially when modeling forest biomass. What is expected of the residuals of an equation is that they behave with the same properties as the real errors, i.e., zero mean, constant variance and serial independence.
One of the hypotheses of the classical linear regression model is that the errors \( \hat{e}_{i} \) have common variance σ^{2}, a hypothesis known as homoscedasticity. When the errors do not have constant variance they present heteroscedasticity. One way to detect heteroscedasticity is to build a plot of residuals against predicted values and check whether there is any systematic pattern in the distribution of residuals that suggests heteroscedastic errors [29]. Moreover, statistical tests for heteroscedasticity are available, for example the test proposed by White [34], which involves a regression on all explanatory variables, their squares and cross-products.
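A minimal sketch of White's test for the single-regressor case is shown below: the squared OLS residuals are regressed on the regressor and its square, and the LM statistic n·R^{2} of that auxiliary regression is compared with a chi-square critical value. The data are synthetic and deliberately heteroscedastic, and the hard-coded critical value (5.99, chi-square 0.95 quantile with 2 df) is an assumption made to keep the sketch dependency-free:

```python
import numpy as np

# White's test sketch: LM = n * R^2 of the auxiliary regression of squared
# residuals on x and x^2, compared against a chi-square(2) critical value.
rng = np.random.default_rng(5)
n = 100
x = rng.uniform(1, 10, n)
y = 2 + 3 * x + rng.normal(0, x, n)     # error sd grows with x

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e2 = (y - X @ beta) ** 2                # squared OLS residuals

Z = np.column_stack([np.ones(n), x, x**2])   # auxiliary regression design
g, *_ = np.linalg.lstsq(Z, e2, rcond=None)
fitted = Z @ g
r2_aux = 1 - np.sum((e2 - fitted) ** 2) / np.sum((e2 - e2.mean()) ** 2)
lm = n * r2_aux                          # White's LM statistic, ~ chi2(2)

print("LM =", lm, "-> heteroscedastic" if lm > 5.99 else "-> homoscedastic")
```

With several regressors the auxiliary design also includes the cross-products; statistical packages provide ready-made implementations with exact p-values.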
The main consequences of heteroscedasticity for the least squares estimators are that, although unbiased, they are inefficient, and the estimates of the variances are biased, invalidating, as a result, the tests of significance. Maddala [29] presents the proof of these two results. Therefore, a fundamental check to be conducted at the outset of model selection for biomass is the evaluation of homoscedasticity. When heteroscedasticity is detected in forest biomass data, one solution to this problem is to transform the series into logarithms.
Another assumption of the classical linear regression model is the absence of multicollinearity, a term coined by Frisch [35]: two or more independent variables should not be linearly correlated among themselves. If they are, not all parameters are estimable. In biomass modeling this hypothesis is hardly ever violated, since the independent variables are usually not linearly correlated because, in most cases, they are combined into a single variable (e.g. dbh^{2}h). If a check is nonetheless desired, an appropriate diagnostic is the variance inflation factor. Maddala [29] discusses this hypothesis of the classic linear regression model at length.
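The variance inflation factor is easy to compute by hand: \( VIF_{j} = 1/(1 - R_{j}^{2}) \), where \( R_{j}^{2} \) comes from regressing variable \( j \) on the remaining regressors. A minimal sketch with simulated data (illustrative only), showing why using dbh and h as separate regressors inflates variances while the combined variable dbh^{2}h avoids the issue:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 150
dbh = rng.uniform(5, 60, n)
h = 1.3 + 0.6 * dbh + rng.normal(0, 2, n)  # height strongly correlated with dbh

def vif(X):
    """Variance inflation factor of each column of X (X has no intercept).
    VIF_j = 1 / (1 - R_j^2), R_j^2 from regressing column j on the others."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r = y - Z @ beta
        r2 = 1 - (r @ r) / ((y - y.mean()) ** 2).sum()
        out.append(1 / (1 - r2))
    return out

# dbh and h entered separately: high VIFs flag near-collinearity.
vifs = vif(np.column_stack([dbh, h]))
print("VIF(dbh), VIF(h):", vifs)
# The combined variable dbh^2*h, used alone, sidesteps the problem entirely.
```

A common rule of thumb treats VIF values above about 10 as evidence of problematic collinearity.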
An important hypothesis that must be evaluated in linear regression modeling is whether the errors are normally distributed. A good way to test it is the Shapiro–Wilk test, discussed at length by Huang and Bolch [36]. When modeling tree biomass this problem commonly arises because of the nature of the data. One way suggested by Maddala [29] to escape non-normality is to transform the data so that the normality assumption becomes valid. Among the many ways to make an asymmetric distribution symmetric are raising y to a power or taking its logarithm. Tukey [37] covers data transformations in detail, suggesting that they help make the model approximately linear and the errors more homoscedastic and normally distributed; he presents a large family of transformations, as later did Box and Cox [38]. Box and Watson [39], studying the robustness of tests on regression coefficients when the errors are not normal, argue that if the empirical distribution of the explanatory variable x is approximately normal, the usual tests hold the significance levels assumed.
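The Shapiro–Wilk test is available in standard statistical software (e.g. `scipy.stats.shapiro`); a dependency-free alternative is the Jarque–Bera statistic, which measures departure from normality through sample skewness and kurtosis. A minimal sketch on simulated right-skewed "residuals" (illustrative only), showing how the log transformation restores approximate normality:

```python
import numpy as np

def jarque_bera(resid):
    """Jarque-Bera statistic: n/6 * (S^2 + (K-3)^2 / 4), where S is the
    sample skewness and K the sample kurtosis. Approximately chi^2 with
    2 df under normality (5% critical value 5.99)."""
    r = resid - resid.mean()
    n = len(r)
    s2 = (r ** 2).mean()
    skew = (r ** 3).mean() / s2 ** 1.5
    kurt = (r ** 4).mean() / s2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

rng = np.random.default_rng(3)
# Right-skewed values, as raw biomass residuals often are.
raw = rng.lognormal(0, 0.5, 300)

jb_raw = jarque_bera(raw)          # large: normality rejected
jb_log = jarque_bera(np.log(raw))  # typically small after log transform
print("raw:", round(jb_raw, 1), " log:", round(jb_log, 1))
```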
In view of these facts, it is suggested that the evaluation of biomass models start with two basic assumptions of the classic linear regression model: homoscedasticity and normality. After that, the individual analysis of the coefficients is a good next step, because it makes no sense to keep in the model coefficients that are not statistically significant. The choice of the equation must then rest on the overall goodness-of-fit statistics, with the final decision made only after a thorough analysis of the residuals.
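The individual analysis of the coefficients amounts to computing a t-statistic for each one. A minimal numpy-only sketch, fitting the familiar log-linear model \( \ln w = b_{0} + b_{1}\ln(dbh^{2}h) \) on simulated data (all values illustrative, not from the study's data sets):

```python
import numpy as np

def coef_t_stats(y, X):
    """OLS coefficients and their t-statistics (X includes intercept column)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)           # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)      # covariance matrix of beta
    return beta, beta / np.sqrt(np.diag(cov))

rng = np.random.default_rng(11)
n = 120
dbh = rng.uniform(5, 60, n)
h = 1.3 + 0.6 * dbh + rng.normal(0, 2, n)
lnw = -2.0 + 0.9 * np.log(dbh ** 2 * h) + rng.normal(0, 0.2, n)

# Model: ln(w) = b0 + b1*ln(dbh^2*h). At this sample size, a coefficient
# with |t| below roughly 2 is not significant at the 5% level and is a
# candidate for removal from the model.
beta, t = coef_t_stats(lnw, np.column_stack([np.ones(n), np.log(dbh ** 2 * h)]))
print("coefficients:", beta)
print("t-statistics:", t)
```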
Conclusions

1.
The model selection criteria (\( R^{2}_{adj.} , \) Syx, Syx%, AIC and BIC) are useful as general indicators of goodness of fit;

2.
These criteria are closely related to one another within a given data set, because they are all based on the (root) mean square of the differences between actual and predicted values;

3.
There is no practical advantage in using AIC and BIC instead of the adjusted coefficient of determination, despite the eloquent defense of these information criteria by various authors and the criticism of the traditional R^{2};

4.
The model selection criteria may fail to detect biases and other special features of the data and of the fit that can only be revealed by examining the residuals;

5.
In biomass modeling, it is recommended to perform a detailed exploratory data analysis, preselect logically sound models to be tested and use several model selection criteria, necessarily including a careful residual analysis.
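All the criteria in conclusion 1 can be computed directly from the residual sum of squares of a fit. A minimal sketch on simulated data (illustrative only; the AIC/BIC expressions below are the common least-squares forms \( n\ln(SSR/n) \) plus the penalty, with constant terms dropped, so only differences between models are meaningful):

```python
import numpy as np

def fit_criteria(y, X):
    """SSR, adjusted R^2, Syx, Syx% and least-squares AIC/BIC for an OLS fit.
    X includes the intercept column; k = number of estimated coefficients."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ssr = float(((y - X @ beta) ** 2).sum())
    r2 = 1 - ssr / ((y - y.mean()) ** 2).sum()
    syx = np.sqrt(ssr / (n - k))
    return {"SSR": ssr,
            "R2_adj": 1 - (1 - r2) * (n - 1) / (n - k),
            "Syx": syx,
            "Syx%": 100 * syx / y.mean(),
            "AIC": n * np.log(ssr / n) + 2 * k,
            "BIC": n * np.log(ssr / n) + k * np.log(n)}

rng = np.random.default_rng(5)
n = 100
dbh = rng.uniform(5, 60, n)
h = 1.3 + 0.6 * dbh + rng.normal(0, 2, n)
lnw = -2.0 + 0.95 * np.log(dbh ** 2 * h) + rng.normal(0, 0.2, n)

# Two candidate log-linear models; the criteria usually agree on the winner.
m1 = fit_criteria(lnw, np.column_stack([np.ones(n), np.log(dbh)]))
m2 = fit_criteria(lnw, np.column_stack([np.ones(n), np.log(dbh ** 2 * h)]))
for name, m in [("ln(dbh)", m1), ("ln(dbh^2 h)", m2)]:
    print(name, {key: round(v, 3) for key, v in m.items()})
```

As conclusion 2 notes, within one data set the criteria point to the same model: the better-specified equation has the higher \( R^{2}_{adj.} \) and the lower Syx%, AIC and BIC simultaneously.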
Abbreviations
 \( R^{2}_{adj.} \) :

adjusted coefficient of determination
 Syx:

absolute standard error of estimate
 Syx%:

standard error of estimate in percentage
 w :

dry biomass
 dbh :

diameter at breast height or 1.3 m above the ground
 h :

total height in meters
 AIC:

Akaike information criterion
 AICc:

Akaike information criterion corrected for small-sample bias
 BICp:

Schwartz’s (Bayesian) information criterion
 MF:

Meyer’s factor
References
 1.
Sanquetta CR, Corte AP, da Silva F. Biomass expansion factor and root-to-shoot ratio for Pinus in Brazil. Carbon Bal Manag. 2011. https://doi.org/10.1186/1750-0680-6-6.
 2.
Soares P, Tomé M. Analysis of the effectiveness of biomass expansion factors to estimate stand biomass. In: Hasenauer H, Makela A, editors. Modeling forest production. Vienna: University of Natural Resources and Applied Life Sciences; 2004. p. 368–74.
 3.
Kadane JB, Lazar NA. Methods and criteria for model selection. J Am Stat Assoc. 2004. https://doi.org/10.1198/016214504000000269.
 4.
Linhart H, Zucchini W. Finite sample selection criteria for multinomial models. Stat Hefte. 1986. https://doi.org/10.1007/bf02932566.
 5.
McQuarrie AD, Tsai CL. Regression and time series model selection. 1st ed. Singapore: World Scientific Publishing Company; 1998.
 6.
Forster MR. Key concepts in model selection: performance and generalizability. J Math Psychol. 2000. https://doi.org/10.1006/jmps.1999.1284.
 7.
Zucchini W. An introduction to model selection. J Math Psychol. 2000. https://doi.org/10.1006/jmps.1999.1276.
 8.
Lahiri P. Model selection. Columbus: Institute of Mathematical Statistics; 2001.
 9.
Kuha J. AIC and BIC: comparisons of assumptions and performance. Sociol Methods Res. 2004. https://doi.org/10.1177/0049124103262065.
 10.
Müller S, Scealy JL, Welsh AH. Model selection in linear mixed models. Stat Sci. 2013. https://doi.org/10.1214/12-sts410.
 11.
Burnham KP, Anderson DR. Model selection and multimodel inference: a practical information-theoretic approach. Berlin: Springer Science & Business Media; 2003.
 12.
Aitkin MA, Francis B, Hinde J. Statistical modelling in GLIM 4. 2nd ed. Oxford: Clarendon Press; 2005.
 13.
Johnson JB, Omland KS. Model selection in ecology and evolution. Trends Ecol Evol. 2004. https://doi.org/10.1016/j.tree.2003.10.013.
 14.
Kvålseth TO. Cautionary note about R^{2}. Am Stat. 1985. https://doi.org/10.2307/2683704.
 15.
Anscombe FJ. Graphs in statistical analysis. Am Stat. 1973. https://doi.org/10.2307/2682899.
 16.
Vismara ES. Mensuração da biomassa e construção de modelos para construção de equações de biomassa [thesis]. Universidade de São Paulo; 2016.
 17.
Gujarati DN, Porter D. Basic econometrics. 5th ed. Boston: McGraw-Hill Education; 2009.
 18.
Vanclay JK. Modelling forest growth and yield: applications to mixed tropical forests. 1st ed. Wallingford: CAB International; 1994.
 19.
Weisberg S. Applied linear regression. New York: Wiley; 2005.
 20.
Akaike H. Information theory as an extension of the maximum likelihood principle. In: Petrov BN, Csaki F, editors. Proceedings of the second international symposium on information theory. Budapest: Akademiai Kiado; 1973. p. 267–81.
 21.
Schwarz G. Estimating the dimension of a model. Ann Stat. 1978;6(2):461–4.
 22.
Sanquetta CR, Balbinot R. Métodos de determinação de biomassa florestal. In: Sanquetta CR, Watzlawick LF, Balbinot R, Ziliotto MAB, Gomes FS, editors. As Florestas e o Carbono. Curitiba: UFPR Press; 2002. p. 119–40.
 23.
Belsley DA, Kuh E, Welsch RE. Regression diagnostics: identifying influential data and sources of collinearity. J Market Res. 1980. https://doi.org/10.2307/3150985.
 24.
Cook RD, Weisberg S. Residuals and influence in regression. New York: Chapman and Hall; 1982.
 25.
Doyle J. Model selection procedures and their error-reduction targets. 2011. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1789907. Accessed 15 July 2018.
 26.
Figueiredo Filho DB, Júnior JAS, Rocha EC. What is R^{2} all about? Leviathan. 2011;3:60–8.
 27.
Dubbelman C. Disturbances in the linear model: estimation and hypothesis testing. Leiden: Martinus Nihjoff; 1978. p. 111.
 28.
Maddala G. Econometrics. New York: McGraw-Hill; 1977.
 29.
Maddala G, Lahiri K. Introduction to econometrics. New York: Wiley; 2001.
 30.
Ebbeler DH. On the probability of correct model selection using the maximum choice criterion. Int Econ Rev. 1975;16(2):516–20.
 31.
Greene WH. Econometric analysis. New Jersey: Prentice Hall International; 2003.
 32.
Rao CR, Wu Y, et al. On model selection. IMS Lect Monogr Ser. 2011. https://doi.org/10.1214/lnms/1215540960.
 33.
Wooldridge JM. Introdução à econometria: uma abordagem moderna. São Paulo: Thomson Pioneira; 2006.
 34.
White H. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica. 1980. https://doi.org/10.2307/1912934.
 35.
Frisch R. Statistical confluence analysis by means of complete regression systems. Nord Stat J. 1934;5:1–97.
 36.
Huang CJ, Bolch BW. On the testing of regression disturbances for normality. J Am Stat Assoc. 1974;69(346):330–5.
 37.
Tukey JW. On the comparative anatomy of transformations. Ann Math Stat. 1957;28:602–32.
 38.
Box GE, Cox DR. An analysis of transformations. J R Stat Soc. 1964;26:211–52.
 39.
Box GE, Watson GS. Robustness to non-normality of regression tests. Biometrika. 1962;49(1–2):93–106.
Authors’ contributions
CRS, APC and SPN designed the study. APC, AB and LROP performed the statistical analysis. CRS, APC and SPN discussed the results. Critical revision of the manuscript was provided by all authors. All authors read and approved the final manuscript.
Acknowledgements
We thank the BIOFIX Lab (Center for Excellence in Research on Carbon Fixation in Biomass) for the support in biomass and carbon analysis. CAPES—Brazilian Ministry of Education provided financial support for this study.
Competing interests
The authors declare that they have no competing interests.
Availability of data and materials
The datasets used in this article are available upon request.
Consent for publication
All authors consent to the publication of this manuscript.
Ethics approval and consent to participate
Not applicable.
Funding
Not applicable.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Author information
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Sanquetta, C.R., Dalla Corte, A.P., Behling, A. et al. Selection criteria for linear regression models to estimate individual tree biomasses in the Atlantic Rain Forest, Brazil. Carbon Balance Manage 13, 25 (2018). https://doi.org/10.1186/s13021-018-0112-6
Received:
Accepted:
Published:
Keywords
 Equation fitting
 Modeling
 Regression
 Tropical forest
 Woody species