This presentation discusses methods of detecting counterfactuals (predictions, what-if questions, and causal inferences) that lie far enough from the data that any inferences based on them will be highly model dependent -- that is, small, indefensible changes in a model's specification have large impacts on our conclusions. The talk also shows how to ameliorate many such situations via matching for causal inference. We introduce matching methods that are simpler, more powerful, and easier to understand than existing approaches. We also show that the most commonly used existing method, propensity score matching, should almost never be used. Easy-to-use software is available to implement all methods discussed.
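To make the core idea concrete, here is a hypothetical sketch (not code from the talk, and using simulated data rather than any real study): when treated and control units differ on a covariate, a naive difference in means leans on model extrapolation and is biased, while one-to-one nearest-neighbor matching on the covariate prunes poor comparisons and recovers something close to the true effect. The simulation parameters, caliper, and function names below are illustrative assumptions.

```python
import random

random.seed(0)

# Toy simulation (illustrative only): treated units tend to have a
# higher covariate x, and the outcome depends strongly on x plus a
# constant treatment effect.
TRUE_EFFECT = 2.0

def outcome(x, t):
    return 3 * x + TRUE_EFFECT * t + random.gauss(0, 0.5)

controls = [(x, outcome(x, 0)) for x in (random.gauss(0, 1) for _ in range(200))]
treated = [(x, outcome(x, 1)) for x in (random.gauss(1, 1) for _ in range(100))]

# Naive difference in means is biased because x is imbalanced
# between the treated and control groups.
naive = (sum(y for _, y in treated) / len(treated)
         - sum(y for _, y in controls) / len(controls))

def matched_estimate(treated, controls, caliper=0.1):
    """One-to-one nearest-neighbor matching on x, with replacement.

    For each treated unit, find the closest control on x; discard
    pairs whose covariate distance exceeds the caliper (pruning),
    then average the within-pair outcome differences.
    """
    diffs = []
    for xt, yt in treated:
        xc, yc = min(controls, key=lambda c: abs(c[0] - xt))
        if abs(xc - xt) <= caliper:
            diffs.append(yt - yc)
    return sum(diffs) / len(diffs)

est = matched_estimate(treated, controls)
```

Because matched pairs are close on x, the estimate no longer depends on how (or whether) we model the outcome's relationship to x; that insensitivity to specification is exactly the reduction in model dependence the abstract refers to.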