I was listening to a recent More or Less which had a piece about statistical significance. The guest was Stephen Ziliak who has a book on the topic. I actually thought he gave a slightly confusing account of the limitations of significance testing ("likelihood of the magnitude"?). His book also has a lot of hostile reviews on Amazon suggesting it reads a bit like a blog rant. Perhaps this Gerd Gigerenzer article is better written.
The reason for the More or Less article was a recent US Supreme Court decision that medical trial results could not be brushed under the carpet simply because they were "statistically insignificant". In the case in question, it seems that there might have been prior reasons to suspect side effects of the type observed, so the fact that they had not (at that time) reached an arbitrary significance threshold was not adequate justification for concealing them.
As I've mentioned before, IMO most of the confusion over significance testing stems from the fact that the p-value doesn't actually answer the question people are interested in (the probability of a hypothesis being true), but is routinely misinterpreted in that way. The same confusion extends to confidence intervals, of course, and these errors are routinely found even in articles that claim to be authoritative (e.g. here, and of course also here). But I wouldn't call it a cult; it's more likely to be down to confused thinking and laziness on the whole.
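To see why the p-value isn't the probability of the hypothesis, here's a minimal simulation (my own illustrative setup, not from the article: a two-sided z-test with known unit variance, 90% of tested hypotheses truly null, and a modest true effect for the rest). Among results that cross the p < 0.05 threshold, the fraction that are actually false positives can be far larger than 5%:

```python
import math
import random

random.seed(1)

def p_value(sample, mu0=0.0):
    """Two-sided z-test p-value for the mean, assuming known sd = 1."""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

n_experiments = 20000
n_per_exp = 30
prior_null = 0.9   # assumed: 90% of hypotheses tested are truly null
effect = 0.3       # assumed true effect size when H0 is false

sig_and_null = 0   # "significant" results where H0 was actually true
sig_total = 0      # all "significant" results
for _ in range(n_experiments):
    is_null = random.random() < prior_null
    mu = 0.0 if is_null else effect
    sample = [random.gauss(mu, 1.0) for _ in range(n_per_exp)]
    if p_value(sample) < 0.05:
        sig_total += 1
        sig_and_null += is_null

# Fraction of significant results that are false positives - with these
# assumptions it comes out around a half, not 5%:
print(sig_and_null / sig_total)
```

The point is simply that P(data | H0) and P(H0 | significant result) are different quantities, and the gap between them depends on the base rate of true effects, which the p-value alone says nothing about.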
And, as several people spotted, it also turned up on xkcd: