Professor Jennifer Hill gave a vivid talk at NYU today arguing against matching methods. In the middle of the talk, it occurred to me that perhaps it is Professor Donald Rubin's great mentorship that made his students, Professor Andrew Gelman and Professor Jennifer Hill, whom I am lucky to work with, both great presenters.
I am on Jennifer's side: I switched my support from matching methods to BART (Bayesian Additive Regression Trees) a year ago.
BART 1.0-0
I might be the first student around Columbia University to use BART. I recall that the first time Jennifer introduced BART in the Quantitative Research Seminar at Columbia, I was so excited to learn that an alternative existed. I had been so frustrated by how different matching methods could yield different estimates. So after Jennifer's talk, I tried BART right away. I was disappointed with BART because it failed on a fake dataset with n = 40 where the true treatment effect was 4: BART gave me a treatment effect of 1.8. I did not know what was going on, and I did not know if I should ask Jennifer. She was always cool, sitting in Andy's multilevel modeling class. I did not know she was such a great mentor, like Andy.
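The kind of fake-data check described above can be sketched as follows. This is a toy illustration, not the original simulation: all numbers and the data-generating process here are made up, and a simple regression adjustment stands in for BART just to show how an estimator is compared against a known truth.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 40
x = rng.normal(size=n)                        # one confounder
z = rng.binomial(1, 1 / (1 + np.exp(-x)))     # treatment depends on x
y = 4 * z + 2 * x + rng.normal(size=n)        # true treatment effect is 4

# The naive difference in means is biased because x confounds z and y;
# a regression adjusting for x recovers the effect much better.
naive = y[z == 1].mean() - y[z == 0].mean()
X = np.column_stack([np.ones(n), z, x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # [intercept, effect, slope]
print(naive, beta[1])
```

With any reasonable estimator, the recovered effect should land near 4; an answer like 1.8 on data like this is a red flag that something is wrong with the implementation, which is exactly what the small-n bug in BART 1.0-0 produced.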
BART 2.0-0
Jeronimo Cortina is a great friend of mine. He just got his Ph.D. at Columbia this April. A year ago, he told me he was working with Jennifer, so I asked him if he knew what was going on with BART. He did not know, and he was not a fan of BART then. But he was upset with matching methods too. I told him he should try BART, and I shared with him how to use it. I was actually hoping that he could use BART in his work; that way, I could get feedback from a real applied case of BART.
I was actually disappointed to learn that Andy's suggestion was to use BART only as a robustness check. But anyway, I had a chance to talk to Jennifer through Jeronimo. I learned then that BART 1.0-0 was buggy when n < 200.
Becoming a fan of BART
Now I had a BART friend, Jeronimo, and a BART instructor, Jennifer. So I began to use BART in my work. Whenever I had a new idea for making BART graphs, I shared it with Jeronimo. Whenever I had a question about BART, I asked Jennifer. I was so excited when I finished an analysis of trichotomous treatments using BART. This can be a bit challenging with matching methods because it involves fitting a multinomial logit model to get the propensity scores.
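The multinomial-logit step mentioned above can be sketched like this. This is a toy illustration with made-up data, using scikit-learn rather than the original analysis: fit a multinomial logistic regression of the three-level treatment on the covariates, then read off each unit's propensity score for the arm it actually received.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

n = 300
X = rng.normal(size=(n, 2))                      # two covariates
# treatment in {0, 1, 2}, loosely tied to the covariates
logits = np.column_stack([np.zeros(n), X[:, 0], X[:, 1]])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
t = np.array([rng.choice(3, p=pi) for pi in p])

# Multinomial logit: one propensity score per treatment arm
model = LogisticRegression().fit(X, t)
scores = model.predict_proba(X)                  # shape (n, 3), rows sum to 1
own_score = scores[np.arange(n), t]              # score for the arm received
```

With a binary treatment the propensity score is a single number per unit; with three arms each unit gets a vector of three probabilities, which is what makes matching on them awkward and a response-surface method like BART attractive.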
Does BART have its future?
During today's talk, someone asked Jennifer a very tricky question, one I had also had in mind for a long time: if she were the journal reviewer of a causal inference paper using BART, how would she judge whether the author was doing the right thing?
Andy was skeptical about BART because he does not like its black-box procedure. We learned that BART does outperform other methods, but there is currently no way to know what is really going on inside. Which variables are used? What are the interactions? I am worried about the future of BART if we cannot show readers these things. It may take time for BART to build up its reputation, to the point where everyone is convinced of its superiority and takes its results for granted.
Jennifer complained about how economics journal reviewers always give her a hard time when she uses matching methods in a paper (they do not trust the ignorability claim underlying matching). Those reviewers might feel the same way about BART. Although I am a fan of BART, I am worried about its future. In the meantime, maybe I will only use BART as a robustness check on other methods.
1 comment:
It reminds me of neural nets a bit. People like to attack them for their black-box nature. I think being conservative about using it is good, as long as we keep trying to understand it.