Counterfactual explanations have other benefits as well. First, like importance weights, they are defined in terms of domain knowledge (i.e., features) rather than in terms of modeling techniques. As mentioned before, this is critically important for explaining individual decisions made by such models to users. More importantly, these explanations reveal which features would need to change for the decision to change, something that feature importance methods do not capture. Also, because only a fraction of the features will be present in any single explanation, our approach can be used to explain decisions from models with thousands of features (or many more). Studies report cases where such explanations were obtained in seconds for models with tens or hundreds of thousands of features, and where the explanations typically consisted of at most a few dozen features (Martens and Provost, 2014; Chen et al., 2016; Moeyersoms et al., 2016).
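To make the idea concrete, the following is a minimal sketch of how such an explanation might be searched for: a greedy procedure that removes active features from a single instance until the model's predicted class changes, returning the small set of removed features as the explanation. This is an illustrative assumption, not the exact algorithm from the cited papers; the function name `greedy_counterfactual`, the size limit, and the scikit-learn-style classifier with `predict_proba` and a dense feature vector are all assumed here for the sake of the example.

```python
import numpy as np

def greedy_counterfactual(model, x, max_features=30):
    """Greedily remove active features from instance x until the predicted
    class flips; return the removed feature indices, or None if no small
    explanation is found. Sketch only; assumes a fitted classifier with a
    scikit-learn-style predict_proba and a 1-D dense feature vector."""
    x = np.asarray(x, dtype=float).copy()
    orig = int(np.argmax(model.predict_proba(x.reshape(1, -1))[0]))  # originally predicted class
    active = list(np.flatnonzero(x))   # candidate features to remove (nonzero entries)
    removed = []

    for _ in range(max_features):
        best_i, best_p = None, None
        for i in active:
            x_try = x.copy()
            x_try[i] = 0.0             # "remove" feature i by zeroing it out
            p = model.predict_proba(x_try.reshape(1, -1))[0, orig]
            if best_p is None or p < best_p:
                best_i, best_p = i, p  # keep the removal that most lowers the original class score
        if best_i is None:
            break
        x[best_i] = 0.0                # commit the best removal
        removed.append(best_i)
        active.remove(best_i)
        if int(np.argmax(model.predict_proba(x.reshape(1, -1))[0])) != orig:
            return removed             # prediction flipped: counterfactual explanation found
    return None                        # no explanation within the size limit
```

Because the search stops as soon as the class flips, the returned explanation contains only the handful of features that were actually removed, which is consistent with the observation above that explanations tend to be small even for models with very many features.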