The R Journal: accepted article

This article will be copy edited and may be changed before publication.

mmpf: Monte-Carlo Methods for Prediction Functions
Zachary M. Jones

Abstract Machine learning methods can often learn high-dimensional functions which generalize well but are not human-interpretable. mmpf marginalizes prediction functions using Monte-Carlo methods, allowing users to investigate the behavior of these learned functions on a lower-dimensional subset of the input features: partial dependence and variations thereof. This makes machine learning methods more useful in situations where accurate prediction is not the only goal, such as in the social sciences, where linear models are commonly used because of their interpretability.

Many methods for estimating prediction functions produce estimated functions which are not directly human-interpretable because of their complexity: they may include high-dimensional interactions and/or complex nonlinearities. While a learning method's capacity to automatically learn interactions and nonlinearities is attractive when the goal is prediction, there are many cases where users want both good predictions and the ability to understand how predictions depend on the features. mmpf implements general methods for interpreting prediction functions using Monte-Carlo methods. These methods allow any function which generates predictions to be interpreted. mmpf is currently used in other machine learning packages such as edarf and mlr (Jones and Linder, 2016; Bischl et al., 2016).
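The marginalization the abstract describes can be sketched in a few lines of base R. The snippet below is an illustrative implementation of Monte-Carlo partial dependence, not mmpf's own API: the function name `partial_dependence` and the grid construction are assumptions made for this example. It averages a fitted model's predictions over the observed data while holding one feature fixed at each grid value.

```r
# Minimal sketch of Monte-Carlo partial dependence (illustrative, not mmpf's API).
# For each grid value v, fix the feature of interest at v across the whole data
# set, predict, and average: this marginalizes over the remaining features.
partial_dependence <- function(model, data, var, grid) {
  sapply(grid, function(v) {
    d <- data
    d[[var]] <- v                      # set the feature of interest to v
    mean(predict(model, newdata = d))  # Monte-Carlo average over the other features
  })
}

# Example with a deliberately simple, interpretable model: since the fitted
# relationship between mpg and wt is linear, the estimated partial dependence
# curve should decrease linearly in wt.
fit  <- lm(mpg ~ wt + hp, data = mtcars)
grid <- seq(min(mtcars$wt), max(mtcars$wt), length.out = 5)
pd   <- partial_dependence(fit, mtcars, "wt", grid)
```

With a complex learner (a random forest, say) the same loop recovers a one-dimensional summary of an otherwise opaque prediction function, which is the use case the package targets.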

Received: 2017-04-17; online 2018-06-29

CC BY 4.0
This article is licensed under a Creative Commons Attribution 4.0 International license.

  author = {Zachary M. Jones},
  title = {{mmpf: Monte-Carlo Methods for Prediction Functions}},
  year = {2018},
  journal = {{The R Journal}},
  url = {}
}