Reverse transformation of natural log predictions
Now that you have read Duan's paper several times, here's how to apply it to our work. I'm going to provide you with a user-defined function. It will do the following:
1. Exponentiate the residuals from the transformed model
2. Exponentiate the predicted values from the transformed model
3. Calculate the mean of the exponentiated residuals
4. Calculate the smeared predictions by multiplying the values from step 2 by the value from step 3
5. Return the results
Here's the function, which requires only two arguments:
> duan_smear <- function(pred, resid){
    expo_resid <- exp(resid)            # exponentiate the residuals
    expo_pred <- exp(pred)              # exponentiate the predictions
    avg_expo_resid <- mean(expo_resid)  # mean of the exponentiated residuals
    smear_predictions <- avg_expo_resid * expo_pred  # apply the smearing factor
    return(smear_predictions)
  }
Next, we calculate the new predictions from the results of the MARS model:
> duan_pred <- duan_smear(pred = earth_pred, resid = earth_residTest)
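As a reminder, the two inputs come from the MARS model we built earlier on the log of sales price. A minimal sketch of how they might be created is shown here; the object names earth_fit and test are my assumptions standing in for the fitted model and the test features from that section:
> earth_pred <- predict(earth_fit, newdata = test)  # predictions on the log scale (assumed objects)
> earth_residTest <- log(test_y) - earth_pred       # test-set residuals on the log scale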
We can now see how the model error plays out on the original sales price scale:
> caret::postResample(duan_pred, test_y)
RMSE Rsquared MAE
23483.5659 0.9356 16405.7395
We can say that the model is wrong, on average, by $16,406. How does that compare with not smearing? Let's see:
> exp_pred <- exp(earth_pred)
> caret::postResample(exp_pred, test_y)
RMSE Rsquared MAE
23106.1245 0.9356 16117.4235
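If you want to confirm what postResample() is reporting, the metrics are easy to compute by hand; this is just a sketch of the underlying formulas applied to the non-smeared predictions:
> sqrt(mean((exp_pred - test_y)^2))  # RMSE
> mean(abs(exp_pred - test_y))       # MAE
> cor(exp_pred, test_y)^2            # Rsquared, as the squared correlation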
The error is slightly lower without smearing, so in this case it just doesn't seem wise to smear the estimates. I've seen examples where Duan's method, and others, are combined in an ensemble of models. Again, more on ensembles later in this book.
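Just to give a flavor of that idea, here's a hedged sketch of the simplest possible blend, averaging the smeared and non-smeared predictions and scoring the result; a proper ensemble is covered later:
> blend_pred <- (duan_pred + exp_pred) / 2  # naive average of the two prediction vectors
> caret::postResample(blend_pred, test_y)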
Let's conclude the analysis by plotting the non-smeared predictions against the actual values. I'll show how to do this in ggplot fashion:
> results <- data.frame(exp_pred, test_y)
> colnames(results) <- c('predicted', 'actual')
> ggplot2::ggplot(results, ggplot2::aes(predicted, actual)) +
    ggplot2::geom_point(size = 1) +
    ggplot2::geom_smooth() +
    ggthemes::theme_fivethirtyeight()
The output of the preceding code is as follows:
This is interesting, as you can see that there's almost a distinct subset of actual values with higher sales prices than their predicted counterparts. There's some feature or interaction term that we could try to find to address that difference. We also see that, around the $400,000 sale price, there's considerable variation in the residuals, primarily, I would argue, because of the paucity of observations.
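One way to start hunting for that feature or interaction is to rank the test observations by how much the actual price exceeds the prediction and then examine those homes; a quick sketch, reusing the results data frame:
> results$shortfall <- results$actual - results$predicted
> head(results[order(-results$shortfall), ], 10)  # the largest under-predictions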
For starters, we have a pretty good model, and it serves as an excellent foundation for other modeling efforts, as discussed. Additionally, we produced a model that's rather simple to interpret and explain, which in some cases may be more critical than a rather insignificant reduction in error. Hey, that's why you make the big money. If it were easy, everyone would be doing it.