Showing posts with label research. Show all posts

Monday, December 30, 2013

Is there a small-state effect?

In countries where some parliamentary chamber allocates the same number of seats to each member state regardless of its population, small states are deemed to enjoy a disproportionately strong influence. One paper that analyzes whether this small-state effect is empirically significant is by Gary Hoover and Paul Pecorino, who show that US states with higher per capita representation also receive more federal funding. Does this mean that the open question is now closed? Of course not: the scientific process would have us revisit it to test whether the result holds more generally, whether the effect disappears with time, and whether it is robust to different specifications.

Stratford Douglas and Robert Reed (link corrected) address the last question. They run a robustness exercise of a kind that is unfortunately too rare in Economics. They confirm the results of Hoover and Pecorino, but find that when you switch from ordinary least squares to cluster-robust standard errors and include population growth, the small-state effect vanishes. So we are not done with this question.

We should have more replication studies in Economics. It saddens me that Douglas and Reed felt the need to add the following footnote on the front page: "we wish to express our special appreciation to Gary Hoover and Paul Pecorino for their willingness to allow their study to be subject to critical analysis. Openness and integrity such as theirs is the basis by which science advances." This should be obvious.

Wednesday, October 2, 2013

What kind of jobs are academic scholars looking for?

Any university ranking that is published these days features a majority of US-based schools at the top. It is clear that across every field the United States is able to attract the best talent, at least when looking at top schools. Why is that so? What attracts scholars to the US?

Jürgen Janger and Klaus Nowotny have created an interesting data set by surveying 10,000 academics across the world and letting them choose between various hypothetical jobs. Those jobs varied along a series of characteristics, which makes it possible to infer what academics value most. The results indicate that the standard US tenure-track system is pretty much close to optimal. What matters most is pay, which should have a performance component, valuable peers, internal grants, and a good mix of teaching and research. Location does not matter much, presumably because academics are so focused on their work. Those early in their careers value financial and intellectual autonomy, as well as some prospect of internal promotion based on performance. The more senior ones do not like being bound to a particular research stream and prefer a departmental setup to a chair-like system. Given all this, it is no surprise that the US manages to attract the best talent. But one can wonder whether the responses also reflect the realization that the United States has attracted the best researchers, and hence that its system must be better, independently of personal preferences.

Wednesday, September 25, 2013

The fuss about big data

"Big data" is the latest buzzword describing the next technological revolution, wherein enormous amounts of data can be collected about our daily lives and used to improve our choices and better understand what is going on in all sorts of dimensions. That includes very detailed information about transactions, locations, and even online behavior. Who has not noticed ads suddenly turning to what one searched for a few days ago, if not emails about it? Whether big data will keep its promise will depend in part on what happens with privacy protection. Europe has already taken steps, for example by requiring that web cookies be accepted by users. In the US, people have so far been very tolerant of companies (but not the government) spying on them, but the tide could turn. But what are the real promises of big data?

Liran Einav and Jonathan Levin focus on economic policy and research. Quite obviously, we complain when data is not available when we want to measure something. Will big data make that possible? While I do not think the (mostly) random collection of big data will allow us to get exactly what we need, the authors think that with new statistical techniques and computer algorithms being developed specifically for big data, there should be something useful for economists. They hope to achieve better statistical power from massively larger and finer data. The opening of larger administrative data sets also has a lot of potential, especially, I would add, if the researcher is allowed to link them to each other. Denmark has shown how great data allows for better research and policy, and also makes researchers flock to you. But again, all this depends on how privacy laws will evolve.

Thursday, August 22, 2013

A look at faculty workload

A common complaint about teachers is that they have too much vacation time. Such complaints are even louder for university faculty, as the academic calendar specifies even shorter teaching periods, and on top of this the weekly classroom hours are ridiculously low. These complaints emerge because teaching is the only face time university faculty have with the paying public. We do a lot of other things that the taxpayer does not see, and in particular does not realize how much time they take. But how much do university faculty actually work?

Manuel Crespo and Denis Bertrand have analyzed surveys distributed to faculty of a "Quebec research-intensive university." Using results from 130 tenured faculty who agreed to spend significant time thinking about their use of time, the average workweek is 57 hours. That takes into account that there are parts of the year where the workload is lighter (summers) and others where there is more to do. Only about a third of the time is dedicated to research, which I find surprising as this is supposed to be a research university. 44% of the time, or 25 hours, is dedicated to teaching, a surprisingly low 3 hours a week to administration, and 9 hours a week to "public service" (would my blogging count?). The report goes through more details, some of which I want to highlight: only 10% of the time related to teaching is actually spent in the classroom. The rest is mostly preparing for classes, face time with individual students, and grading. Time spent on teaching has increased over a decade, attributed foremost to increasing class sizes (I do not think there is much value to this result, as faculty also got older and in some cases tenured). And there are very few gender differences in time allocation.

Thursday, August 15, 2013

Top Economics graduate programs are not as good as you think

Along with business schools, Economics is where pedigree matters most in the placement of PhD students to academic positions. Students from top-ranked (or considered such) programs have a job almost guaranteed in research universities, and students from lower-ranked universities find it very hard to break into such universities no matter what their performance is. In part, this is because we tend to hire faculty fresh out of graduate school, while other fields go through post-docs first, and because publication delays imply that graduating students typically have no publications. Thus one has to rely on reputation alone (or actually read their papers, but then are you going to read the papers of all students from lower-ranked programs?).

John Conley and Ali Sina Onder find that while there is indeed a steep gradient across program rankings, there is an even steeper gradient within programs. They use student rankings within programs and cohorts and their publication output after six years, that is, when they are up for tenure. Looking at AER equivalents, they find that the top Toronto student is equivalent to the number three from Berkeley, for example. And to illustrate how steep the gradient is, the median Harvard student has after six years only 0.04 AER equivalent publications, despite coming from the #2 program. This means that more than half of Harvard students are not tenurable in any research-oriented institution.

I see two major conclusions from this: 1) Stop worrying so much about where PhD students are graduating from. It is OK to hire students from lower-ranked programs as long as they excelled in those programs. 2) Even the top places should acknowledge that not all students should take research positions and need to prepare them for other ones, like industry, government or purely teaching jobs. These students are screwed twice: they are sent to tenure-track positions in which they will never get tenure, and they are woefully unprepared for the jobs they should take.

Wednesday, July 31, 2013

Groom yourself to publish better research?

There is plenty of evidence that being beautiful helps you on the job market. First impressions count a lot, and physical appearance is likely the main factor in first impressions. But does beauty matter in situations where there are no such first impressions? Take the case of scholarly publishing: editors and referees do not see a picture of the author(s), thus it should not matter. If it still matters, it must be that beauty is correlated with something that makes you more likely to get published, say, confidence, or more subtly that beautiful people are healthier, and thus should have had fewer illness-related disruptions in schooling and more human capital. Anyway, we need some evidence.

Alexander Dilger, Laura Lütkenhüner and Harry Müller want to offer some. They asked attendees at a conference of business researchers about their happiness, took their pictures and had others judge these mug shots. They then looked at the publication records of their subjects over the next two years. It turns out that happy people publish more, but of course the causality could run the other way, as you may be happy that your research agenda is progressing well, especially when you are asked about your happiness at a conference in your research field. Maybe more interesting is that a trustworthy appearance is correlated with a better publication record. Is it really the appearance that matters here, or simply that a person who is capable of keeping himself in order is also more likely to be well organized enough to publish well? Also, the population under study (n=49, by the way) is faculty from business schools. It is notorious that appearances matter a lot in business schools; after law schools, it is where they matter most. Not the kind of sample I would use to make general claims about the research productivity of scholars as it relates to appearance.

Thursday, May 30, 2013

Open science in commercial firms

Universities engage in research and put results in the public domain because it fosters the public good. In recent years, though, they have put more focus on patenting research results in order to obtain more revenue in the face of dwindling income from public sources. For-profit firms, though, seem to follow the opposite evolution. They hire more and more researchers and let them publish their results in scientific journals instead of patenting them. This even happens to economists, who get hired, for example, by Google, Microsoft, Yahoo, AT&T, and commercial banks to conduct research. For the economists, I kind of understand it as a way to secure top talent when needing advice in complex markets. For the hard sciences and engineering, my prior is that these firms have realized that patenting has become very inefficient, as seeking exclusivity is now more of a lawyer's than a scientist's job.

Markus Simeth and Julio Raffo have another interpretation of what is happening in for-profit firms, and it looks like what is happening for economists. Using a dataset of R&D-performing firms in France that they match with academic publications, they find that the old way of just collaborating with academics is not sufficient to acquire knowledge from the technology frontier: you need to hire them full-time. Adopting academic discourse and disclosure practices allows the firm to benefit from that frontier as well. And as with firms participating in the open-source movement, I suppose participating in the open dissemination of science also buys you some academic credibility that can attract top talent.

Sunday, May 5, 2013

Blog mentions: are they citations?

RePEc is putting up for a vote some modifications to its rankings, including whether blog mentions should be counted as citations. The blog mentions are taken from EconAcademics, to which the present blog seems to be the largest contributor. So my opinion may matter as to whether I consider my mentioning of a paper to be equivalent to a citation.

Sometimes I point out bad papers, and this should obviously not count as a citation. Then again, people often cite rather bad works because they want to improve on them, so nothing unusual here. Also, people often cite other works for strategic reasons, because the cited author is the editor, a potential referee, or otherwise powerful. Yet we still count that as a citation. A blog post, however, is entirely dedicated to a paper. I do not discuss a paper because I somehow want to gain favors with someone, as I am incognito. For other bloggers that may be different, though. What you get from my confused statements is that I am not quite sure blog mentions should count as citations. What I observe, though, is that authors do treat them differently. Never do you see an author listing a particular citation on a CV or homepage, but they sometimes do so for blog mentions. For this last reason, I tend to think a blog mention is worth even more than a citation, as long as there is some control over which blogs count, something EconAcademics provides.

Monday, April 29, 2013

Department size and research productivity

This is a bit late for the current job market for Economics PhDs, but say you have to choose among several job offers (lucky you). The departments are all of equal prestige, and working conditions, salary, and geographic environment are all comparable. The only difference is the size of the faculty. Which offer should you take if you care about your future research output?

Clément Bosquet and Pierre-Philippe Combes say you should go for the uniformly good, more field-diverse, and larger department. At least this applies to French academic economists. Even more interesting, they observe that department characteristics are as important as researcher characteristics. The initial placement of a researcher thus matters a lot for her future, and this may explain why there are so few success stories after an initially poor placement, at least in Economics. Of course, the academic market in France is very different from any other country's, so I am not sure the results generalize, but this is a start.

Thursday, April 11, 2013

Test statistics and the publication game

It is well known that journals do not like replications or confirmations of hypotheses. They are looking for empirical results that contradict popular wisdom, and this must be influencing the way researchers look for test results. To increase your chances of success, you want to mention only highly significant results and ignore the so-so ones.

Abel Brodeur, Mathias Lé, Marc Sangnier and Yanos Zylberberg look at the distribution of p-values in articles published in the top three economics journals. I am not quite sure what the distribution of p-values would be if the publication process were unbiased, but it would probably be single-peaked, declining monotonically on each side of the mode. What the authors find does not look at all like this. There is a distinct lack of test results that just miss 5% or 10% significance, and distinctly more that just pass those thresholds, making the distribution bimodal. Interestingly, this problem is less present when stars are not used to highlight significance or when the authors are tenured.
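To see how selective reporting alone can generate this bunching, here is a toy simulation of my own (not from the paper, and with made-up parameters): every researcher tests a true null effect, tries up to five specifications, and reports the first p-value that clears the 5% threshold, or the last draw otherwise.

```python
import random

random.seed(0)

def reported_p_values(n_studies=20000, max_tries=5):
    """Simulate specification searching under a true null: report the
    first p-value below 0.05 among max_tries draws, else the last one.
    Under the null, each draw's p-value is uniform on [0, 1]."""
    reported = []
    for _ in range(n_studies):
        p = 1.0
        for _ in range(max_tries):
            p = random.random()
            if p < 0.05:
                break
        reported.append(p)
    return reported

ps = reported_p_values()
just_passing = sum(1 for p in ps if 0.04 <= p < 0.05)  # just significant
just_missing = sum(1 for p in ps if 0.05 <= p < 0.06)  # just insignificant
print(just_passing, just_missing)
```

Even though every underlying effect is null, the reported distribution piles up just below 0.05 and thins out just above it, the same pattern around the threshold that the authors document.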

These results indicate that there is more than a selection bias at work. There is an inflation bias, whereby the researcher presents only the most significant results, obtained by hunting for the specification that passes the magic significance thresholds. I do not think this is ethical, but the publishing game makes it unavoidable, so the profession is apparently fine with it. I guess we have to tolerate this and take it into account when reading papers, much like we account for grade inflation when looking at transcripts or for similar inflation when reading recommendation letters.

PS: This paper is a strong candidate for the best paper title of the year. Bravo!

PS2: What is really unethical is claiming results are significant when they are not. The case of Ulrich Lichtenthaler comes to mind, who added "significance stars" to his results when they were not warranted. The fact that he still managed to publish widely is an indictment of the quality of research in business journals, too.

Wednesday, November 28, 2012

Research and teaching are complements in terms of quality

Are good teachers also good researchers? Does spending more time on research improve one's teaching? Or does research get in the way of good teaching performance? It seems everyone has his own theory about this, with some thinking that teaching and research are substitutes (you hear that mostly in teaching colleges) and others that they are complements (research universities think that). What about some empirical evidence on the matter?

Aurora García-Gallego, Nikolaso Georgantzís, Joan Martín-Montaner and Teodosio Pérez-Amaral provide this for a Spanish university. It turns out research and teaching are complements, at least for that university. What I find striking is that the professors who perform the most research also teach the most, and do it better. Those doing no research are among the worst teachers. That seems like the way a social planner would do it: Get the best people to work their ass off, while the worst should be kept away from anything productive. That is the way to increase productivity. That does not seem very fair, though, unless pay is tied to performance, which is not likely in this case. And how this study can generalize to other universities is not clear, especially as we do not know much about the university in this case.

Monday, October 8, 2012

Mathematics, Econometrics and top economists' career outcomes

Some people have been complaining about the increasing mathematization of economics and how it leads to a disconnect between economics and real life (which I suppose is void of mathematics). I would argue that this is actually a good thing, first because it forces you to make rigorous arguments, second because you often need quantitative answers to questions whose outcomes are qualitatively ambiguous, and third because it simply allows us to look at more complex problems. But has mathematization actually increased?

Miguel Espinosa, Carlos Rondon and Mauricio Romero look at the publications of top economists and count how many equations or econometric outputs per article they produced. Their analysis of the last century shows a gradual increase in mathematization throughout, except for the number of equations, which went through a serious recession in the 1980s. And of course, econometrics only took off in earnest in the 1950s. It also appears that professional success, as measured by prestigious prizes, is certainly linked to the use of mathematics, but not of the econometric kind.

Thursday, September 13, 2012

Does publishing better pay better in California?

In some countries, academics can be compensated for performance. Some universities provide incentives for good research and/or teaching through bonuses or pay raises. Others may only respond to outside offers, but the end effect is the same: performance compensation responds to market pressure. Or so we would like to think. There is, however, a substantial source of disagreement about how much a particular publication is worth. Everyone has his own ranking in mind, and the correlation among all those rankings is not that high.

John Gibson, David Anderson and John Tressler look at Economics departments in the University of California system. They look at 700 Economics journals and how publication in them translates into tenure, promotion and salary increases. They compare different journal ranking schemes and find that it is not only the order of journals that matters but also the convexity of the scores: how fast they drop after the top journals. This matter is consequential: depending on the scheme, the average lifetime output of a UC economist varies between 36 and 144 American Economic Review equivalent pages.

Results show that the statistical fits of various journal ranking schemes to salary outcomes are surprisingly close to each other. I wonder, though, how even more convex ranking schemes would have fared, such as the RePEc journal H-index, which is used in several departments (though I am not sure about UC departments). Maybe this was not considered because of endogeneity issues. Another interesting result is that a 10-page article in the AER raises compensation by 1.3%, or US$27,500 in average net present value (neglecting the impact on retirement pensions).
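To make the convexity point concrete, here is a small sketch with invented numbers (the journal tiers, weights, and publication counts are all my own assumptions, not taken from the paper): two ranking schemes that order journals identically but differ in how steeply the weights fall off can value the same CV very differently.

```python
# Hypothetical publication record: counts of articles by journal tier.
record = {"top": 1, "second_tier": 3, "field": 6}

# Two ranking schemes with the same ordering but different convexity.
flat_weights = {"top": 1.0, "second_tier": 0.8, "field": 0.6}     # gentle drop-off
convex_weights = {"top": 1.0, "second_tier": 0.3, "field": 0.05}  # steep drop-off

def weighted_output(record, weights):
    """Lifetime output in top-journal equivalents under a weighting scheme."""
    return sum(count * weights[tier] for tier, count in record.items())

print(round(weighted_output(record, flat_weights), 2))    # 7.0 top-journal equivalents
print(round(weighted_output(record, convex_weights), 2))  # 2.2 top-journal equivalents
```

The same record is worth more than three times as much under the flat scheme as under the convex one, which is the kind of spread behind the 36-to-144-pages range the paper reports.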

Friday, June 22, 2012

Are consulting and research substitutes or complements?

Think tanks have a horrible reputation everywhere but in the media. The bad reputation comes from the fact that they are often very biased and sell out to their funders. The media attention comes from the fact that think tank staff are willing to provide the expected sound bites to journalists, no matter what the topic. All this would be OK if think tanks were good at conducting independent research. It turns out the most prominent ones do not do much of it on the hot topics they talk about, according to Dan Farber, who finds that they do not publish much of relevance (and this is not even considering peer-reviewed research).

Interestingly, the picture is very different with respect to consulting. Looking at academics across all fields from five Spanish universities, Pablo D'Este, Francesco Rentocchini, Liney Manjarrés-Henrìquez and Rosa Grimaldi find that getting grant money is positively associated with getting consulting contracts. In other words, good researchers also get consulting gigs. And in some fields, consulting is where the financial rewards of research really lie, especially in the social sciences, where grants are usually relatively small.

Sunday, June 10, 2012

What is up with Elsevier?

Whether you like it or not, Elsevier matters in the dissemination of research in Economics. By far the largest player in the field, it enjoys considerable market power (and the profit margin of around 30% that comes with it). And even though its journals are not at the frontier of research in Economics, what happens at Elsevier still matters because it controls so many of the top field journals.

According to its web page on the global dissemination of research, Elsevier states:
We recognise that access to quality research is vital to the scientific community and beyond. For us this means providing support and the latest tools to maintain the quality and integrity of published scientific literature, achieving the widest dissemination of content, and embracing the opportunities of open access. We will continue to identify access gaps, and work towards ensuring that everyone has access to quality scientific content anytime, anywhere.

These are all nice words, but this is not at all what Elsevier practices. First of all, all of the Economics content of Elsevier is gated, and academic libraries have to pay through the nose so that faculty can access the content, including their own works. Even errata and retraction notices are gated. There is no open-access journal in Economics, and even in other fields where open access is available, the cost is prohibitive (usually US$3000, even more with color charges!), which cannot be justified in any reasonable way by hosting costs. Indeed, Elsevier spends considerable resources trying to keep potential readers away, by gating the material for the general public and making it difficult for individuals to buy subscriptions, especially for hard copies. All this management of subscriptions and filtering of web traffic would disappear with open access, making it much cheaper, not more expensive.

But this is not an issue with Elsevier only (Springer is much worse in this respect). Elsevier, with its market power, is trying to kill any competition and any initiative that tries to open up the dissemination of research. For example, it was a huge backer of the Research Works Act in the US, which would have prohibited mandates that publicly funded research be available in open-access repositories. Of course this generated a huge outcry from the scientific community (you know, the one that Elsevier claims to serve) and led to a call for a boycott. This seems to have been successful, as Elsevier reversed its stance, thereby killing the bill.

Unfortunately, few economists seem to have participated in the boycott, which is probably why Elsevier continues to flout the research community with no remorse. For example, it has not updated the listings of its journals in RePEc for over a year, and it still vigorously refuses to let RePEc perform citation analysis on its contents. Repeated attempts on my part to get a reaction from Elsevier have been unsuccessful. My suspicion is that RePEc threatens some of the products that Elsevier is pushing (Sciverse, Scopus), and the interest of the research community plays second fiddle. From what I hear, people are deserting the Economics desk at Elsevier, starting with its head, which makes you wonder who is in charge of "the widest dissemination of content."

To understand further what a fine business Elsevier is, here are some of my previous posts:
The evil empire strikes again
The evil empire strikes again (II)
Copyright and the lack of competition in academic publishing
Why I am boycotting Elsevier

Tuesday, June 5, 2012

How did online journals change the economics literature?

Scientific publication is not what it used to be, now that we can easily access the literature over the Internet. No more trips to the library, far fewer waits for interlibrary loans, and no more chasing whoever took or misplaced the volume from the racks. But did all this change anything in the way we publish our results?

This is what Timo Boppart and Kevin E. Staub study by looking at the diversity of topics covered in journals and how the availability of on-line publication may have changed it. The idea is that on-line publication allows one to discover and read more material, and in particular to stray away from the usual topics. No doubt about that. But I wonder why Boppart and Staub focus on journals. After all, working papers are where it's at in Economics, and journal readership has not really increased, I believe. The treatment variable is the share of cited articles available on-line the year before publication. This seems wrong. There is no way it takes only one year from the literature search to the print issue. Not in Economics, where I would say the minimum is three years, with cases below that really rare. In fact, a good share of my own papers took more than a year from final acceptance to actual publication. And then, what about working papers? That is what people read, not articles.

Sunday, April 15, 2012

On the state of economic research blogging

If you are an economist and maintain a blog, the best way to get an audience is to get into fights with pundits and journalists, and you then easily become a pundit yourself, losing the impartiality you are supposed to have as a scientist. The most popular economist bloggers are either openly libertarian or quite far right, or in reaction resolutely on the left. Almost all engage in politicking and wars of words and have basically abandoned the principles of impartiality their scientific upbringing taught them. The impartial scientists are drowned.

EconAcademics.org tries to rectify that by aggregating blog posts that discuss research, or at least refer to research. The list of monitored blogs is impressively long, yet it saddens me to see that the "popular" blogs are nowhere to be seen near the top of the list of those who have discussed the most research, despite their considerable volume of posts. While I could be proud to be (far) ahead on this last list, it saddens me again how little consistent discussion of research there is. Out in the blogosphere, are we really so few economists doing this? Why can't the top economics blogs relate more to the results of their field?

A positive externality of EconAcademics.org is that it compiles a list of all the papers I have discussed, including where they ultimately got published. I have put the link in the sidebar and will come back to discuss it sometime soon.

Saturday, March 17, 2012

Bruno Frey: the epilogue?

A little less than a year ago, a controversy erupted about the publishing practices of Bruno Frey and his students. Indeed, they tend to repackage their research and submit it to multiple journals simultaneously (or sometimes successively), without cross-references and without alerting editors to this. This is in clear violation of the submission conditions of most academic journals and even goes against principles Bruno Frey has himself advocated in multiple (of course) publications: there is not enough space for everyone to publish on the one hand, and the pressure to publish leads people to (self-)plagiarize on the other hand. On his homepage, Bruno Frey crows about over 500 or 600 publications, depending on where you look, numbers that are completely surreal for any self-respecting academic economist.

The scheme blew up in his face when some editors and some blogs started raising questions after very similar articles about the Titanic, written with Benno Torgler and David Savage, appeared in four journals (some say there is even a fifth one in German, but I cannot verify this). And the article was not even original: a similar analysis had been done and published 25 years earlier and is now standard reading and exercise material in statistics courses. Newspapers picked up the story; Frey went into denial but finally confessed to the editor of the Journal of Economic Perspectives, who published the correspondence about the case and publicly admonished him for the multiple submissions (the editor did not yet know about the prior literature). But that covers only this case; there are all the other ones. The University of Zurich, from which Bruno Frey recently retired, promised an investigation. That was sometime in the Summer. Since then, nothing.

One could suspect the University would do nothing, as Bruno Frey is the best-ranked economist in German-speaking universities. And the prolonged silence clearly seemed to corroborate this. But rumors started circulating in the hallways, rumors that were not encouraging at all. Still, no evidence from Zurich.

Finally, I got good evidence from a reliable source. And it is indeed not encouraging. The University of Zurich mandated three prominent academics to look into the case. But the mandate was formulated in such a way that only the articles about the Titanic could be analyzed. The experts came to the obvious conclusion that unethical behavior was at play in this case. They could not mention the others, and thus the University concluded that this was a one-off misstep. The University gave Frey a verbal admonishment, which does not go on his record, and did not release the report.

But this was not a one-off misstep. Frey has been banned from the editorial board of Public Choice for a similar case of re-publication. He is by now banned from publishing in at least half a dozen journals. To make matters worse, he has himself advocated going after plagiarizers and others who unnecessarily take up valuable publication space. The investigation should have looked at his whole career, as happens when a scientist is suspected of fabricating data and all his publications are subjected to scrutiny. And it is not as if the information would be difficult to obtain: it is readily available, and people have even compiled it, as documented in the FreyPlag Wiki.

For more about the case, you can read my past blog posts: 30 April 2011, 3 September 2011, 27 September 2011. Also, Olaf Storbeck's Economics Intelligence blog was the one that convinced the University of Zurich to finally (pretend to) act: 4 July 2011, 4 July 2011, 5 July 2011, 6 July 2011, 7 July 2011, 9 July 2011, 20 August 2011, 29 August 2011, 12 September 2011

Tuesday, March 13, 2012

Women do not patent

A frighteningly low proportion of patents are granted to women: 7.5% in the United States, and only 5.5% of commercialized patents are from women. Maybe they are of a more generous nature and realize how progress-crippling patenting can be these days. But I doubt this effect can be that strong. The elephant in the room is of course the low proportion of women in science and engineering professions. It cannot be the only explanation, because women's share of those professions is higher than 7.5%, but it is a start.

Jennifer Hunt, Jean-Philippe Garant, Hannah Herman and David Munroe claim that the missing women's patents have little to do with women's overall proportion in the science and engineering fields. Rather, the culprit is the lack of women in science and engineering jobs that involve development and design, in particular electrical and mechanical engineering. What pushes women out? Lack of interest, abilities, or discrimination? The paper is silent on this (except for hurdles in the promotion process) but ventures to say that correcting this would increase GDP by 2.7%. This number is based on the idea that if more women were working in those fields, there would be proportionally more patents, and GDP would be proportionally higher. I do not think it is that simple.

Wednesday, March 7, 2012

About the cult of statistical significance

A large part of economic research is devoted to empirical studies, and the name of the game there is statistical significance. Once you find an interesting effect that is significant, you have a study worth writing, and possibly publishing. If you cannot find an effect, you try another specification until it is statistically significant. Nobody will know how many you tried; the significant one is all that counts. A null result, by contrast, is really difficult to publish. This game of finding statistical significance is unfortunately misleading, as the hunt dominates theory or even common sense when choosing specifications, and often completely neglects economic significance. What if a statistically significant result is tiny even though it is precise? And what about a large effect that is statistically weak?
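The divergence between statistical and economic significance is easy to see in a minimal simulation (the numbers below are made up for illustration, not taken from any study): a negligible effect estimated on a huge sample comes out highly "significant", while a substantial effect estimated on a small sample does not.

```python
import math
import random

random.seed(0)

def mean_and_p(sample):
    """Sample mean and two-sided p-value for H0: mean = 0 (normal approximation)."""
    n = len(sample)
    m = sum(sample) / n
    var = sum((x - m) ** 2 for x in sample) / (n - 1)
    z = m / math.sqrt(var / n)
    return m, math.erfc(abs(z) / math.sqrt(2))

# A tiny true effect (0.01) measured on a million observations:
# statistically significant, economically negligible.
tiny_precise = [random.gauss(0.01, 1.0) for _ in range(1_000_000)]

# A large true effect (0.5) measured on ten observations:
# economically meaningful, statistically weak.
large_noisy = [random.gauss(0.5, 1.0) for _ in range(10)]

for label, data in (("tiny but precise", tiny_precise),
                    ("large but noisy", large_noisy)):
    m, p = mean_and_p(data)
    print(f"{label}: estimate={m:.3f}, p-value={p:.4f}")
```

The p-value rewards precision, not magnitude, which is exactly why it cannot substitute for asking whether an effect is large enough to matter.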

I am certainly guilty as well of confusing statistical and economic significance, including on this blog. Indeed, it is often difficult even to understand what the size of an effect is, because the specification does not allow one to relate the effect to something tangible.

The reason I am mentioning all this is that I came across a recent paper by Walter Krämer, who revisits the argument made by Stephen Ziliak and Deirdre McCloskey that statistical significance is useless. While he seems to concur on the general abuse of statistical significance, he claims it can still be useful under some circumstances, namely in exploratory testing, as it allows one to discard unviable hypotheses or specifications. But one has to remember a point of elementary statistics that is too often ignored: one can only reject or fail to reject a hypothesis, never accept it. So even exploratory testing has its limitations.
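The danger with exploratory testing is also easy to simulate (again with made-up numbers): run enough specifications on pure noise and some will clear the 5% bar by chance, which is precisely why a "significant" result cannot be read as accepting a hypothesis.

```python
import math
import random

random.seed(1)

def two_sided_p(sample):
    """Two-sided p-value for H0: mean = 0 (normal approximation)."""
    n = len(sample)
    m = sum(sample) / n
    var = sum((x - m) ** 2 for x in sample) / (n - 1)
    z = m / math.sqrt(var / n)
    return math.erfc(abs(z) / math.sqrt(2))

# 100 "specifications", every one of them pure noise with no true effect.
rejections = 0
for _ in range(100):
    sample = [random.gauss(0.0, 1.0) for _ in range(200)]
    if two_sided_p(sample) < 0.05:
        rejections += 1

print(f"{rejections} of 100 pure-noise specifications look 'significant' at the 5% level")
```

On average about five of the hundred null specifications will reject, so a researcher who reports only the significant one is reporting noise.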