Copied from X: Interesting post. On the Theory bit: I think Econ moved in the wrong direction recently where 'proper' papers are expected to do both theory and empirical work. Often, neither part is fully convincing, and both are engineered to complement each other. I'd prefer a world where some papers focus solely on establishing robust statistical relationships, i.e. stylized facts, and then theorists come up with mechanisms explaining these. Finally, structural people take competing explanations and empirically test which one fits the data best. Currently many papers do all of these 3 steps jointly, and I think that's far from ideal. Circling back to your post, I think ML techniques are definitely useful in step 1 and probably also in step 3, and there's no reason to dismiss them.
Hi Lukas! You're absolutely right. Like you said, there was a time when Econ embraced a more "modular" approach: one paper might identify an empirical regularity; another would model it; a third might structurally estimate competing theories. Each contribution had value on its own. Now, it often feels like we've replaced this collective, iterative process with an all-in-one production model, where a paper isn't "publishable" unless it does everything: data, theory, estimation, robustness, policy implications. And I think this comes at a cost. We lose clarity: papers are bloated, and each part gets just enough attention to be "passable", but rarely convincing. We also discourage specialisation because empiricists and theorists have less room to work independently, even if their insights would be sharper alone. And we kill the pipeline of ideas. A clean empirical pattern that doesn't come with a theory gets desk-rejected. A new theoretical insight without immediate empirical validation is "too abstract".
In theory, co-authorship should solve this because everyone would bring their strengths. But in practice, this means you don't even get to contribute at all unless you have the right co-author lined up in advance. We've gone from a world of cumulative progress to a system that expects fully-finished "products" at the submission stage. It’s no surprise then that ML gets dismissed because it excels in exactly the kind of modular, iterative knowledge building that the current "gatekeeping" structure discourages. But we should remember that Econ moved forward fastest when it was ok to publish just a stylized fact, or just a model. Maybe it's time to bring that back.
Right, and I think it's worse than that, in the sense that I don't think co-authorship can solve this. Even with co-authors the "do 3 steps in one paper" model is restrictive. It is really hard to find interesting stylized facts. And often the most interesting stylized facts are those for which it is really hard to come up with a theoretical mechanism explaining them. Never mind that for the third step (showing that your mechanism is empirically relevant) the incentives are completely screwed if you do it in the same paper.
I think the underlying issue is that aggregation of knowledge is much less rewarded in Econ than are originality and novelty. If one finds a novel stylized fact, I think the focus should first be to confirm it is robust and holds across various settings. Instead, we immediately go to theoretically explaining and structurally estimating it, after which it is often swiftly forgotten and we move on to the next novel thing.