mmpf: Monte-Carlo Methods for Prediction Functions
Zachary M. Jones
The R Journal (2018) 10:1, pages 56–60.
Abstract Machine learning methods can often learn high-dimensional functions which generalize well but are not human-interpretable. The mmpf package marginalizes prediction functions using Monte-Carlo methods, allowing users to investigate the behavior of these learned functions on a lower-dimensional subset of input features: partial dependence and variations thereof. This makes machine learning methods more useful in situations where accurate prediction is not the only goal, such as in the social sciences, where linear models are commonly used because of their interpretability.

Many methods for estimating prediction functions produce estimated functions which are not directly human-interpretable because of their complexity: for example, they may include high-dimensional interactions and/or complex nonlinearities. While a learning method's capacity to automatically learn interactions and nonlinearities is attractive when the goal is prediction, there are many cases where users want both good predictions and the ability to understand how predictions depend on the features. mmpf implements general methods for interpreting prediction functions using Monte-Carlo methods. These methods allow any function which generates predictions to be interpreted. mmpf is currently used in other packages for machine learning like edarf and mlr (Jones and Linder, 2016; Bischl et al., 2016).
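To make the marginalization idea concrete: partial dependence fixes the feature of interest at each value of a grid and averages the model's predictions over a Monte-Carlo sample of the remaining features. The sketch below illustrates this in Python with a generic `partial_dependence` helper; it is a minimal illustration of the technique, not the mmpf API (mmpf itself is an R package), and the function and parameter names are invented for this example.

```python
import numpy as np

def partial_dependence(predict, X, feature, grid, n_samples=100, seed=0):
    """Monte-Carlo estimate of the partial dependence of `predict`
    on one feature.

    For each grid value v, fix the chosen feature column at v in a
    random sample of rows and average the model's predictions,
    thereby marginalizing over the remaining features.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_samples, len(X)), replace=False)
    sample = X[idx]
    pd_values = []
    for v in grid:
        X_mod = sample.copy()
        X_mod[:, feature] = v                    # fix the feature of interest
        pd_values.append(predict(X_mod).mean())  # average over the rest
    return np.array(pd_values)

# Toy "learned" model: f(x) = x0^2 + x1. The partial dependence on
# feature 0 should recover the quadratic shape up to a constant shift.
X = np.random.default_rng(1).normal(size=(500, 2))
f = lambda X: X[:, 0] ** 2 + X[:, 1]
grid = np.array([-2.0, 0.0, 2.0])
pd = partial_dependence(f, X, feature=0, grid=grid)
```

Because the same Monte-Carlo sample is reused for every grid value, the estimated curve here differs from the true marginal effect only by the sampled mean of the other feature, so `pd[0] - pd[1]` equals exactly 4 for this toy model.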
Received: 2017-04-17; online 2018-06-29

@article{RJ-2018-038,
  author  = {Zachary M. Jones},
  title   = {{mmpf: Monte-Carlo Methods for Prediction Functions}},
  year    = {2018},
  journal = {{The R Journal}},
  doi     = {10.32614/RJ-2018-038},
  url     = {https://doi.org/10.32614/RJ-2018-038},
  pages   = {56--60},
  volume  = {10},
  number  = {1}
}