Phishing is one of those threats that has been around since the dark ages of the Internet and has never really gone away. It's constantly on everyone's lips, and most people know someone who has fallen victim to the scam somewhere along the line. I think that, as a threat, phishing is on the decline (at least compared with the uncontrolled escalation of some other threats), but it will likely never go away as long as we continue with Internet 1.0.
From the criminal's perspective, a critical component of the scam is the hosting of the counterfeit Web site. While I'd say most security professionals have a good feel for how often compromised Web sites end up hosting phishing content, I'd never really encountered any public analysis of the compromise vectors, and the percentages involved were little more than a finger in the air.
Today I came across a very interesting paper, Evil Searching: Compromise and Recompromise of Internet Hosts for Phishing, by security researchers Tyler Moore and Richard Clayton, which actually quantifies what's been going on.
Of particular interest to me was their analysis of how the criminals use search engines to find vulnerable Web servers, and how the repeated compromise of those servers (for the purpose of phishing) can be measured.
It's a very interesting paper, and I'd recommend those of you looking to protect your sites from Phishers take the time out to read through it.
Personally, I think the best defense against the search vector is what we've been saying for decades (and what has appeared in every single pentest report I can remember writing): watch out for information leakage, and change or obfuscate all service banners! Yes, I know that won't work in every case, but it's a damn good place to start!
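As a minimal sketch of what that banner-hygiene advice looks like in practice, the snippet below audits a set of HTTP response headers for software and version disclosure, the kind of string an "evil search" keys off once a search engine has indexed it. The header names and example values here are my own illustrative assumptions, not drawn from the paper.

```python
# Response headers that commonly advertise server software and versions.
# This list is an assumption for illustration; extend it for your own stack.
LEAKY_HEADERS = {"server", "x-powered-by", "x-aspnet-version", "x-generator"}

def find_banner_leaks(headers):
    """Return the subset of headers whose names are known banner fields."""
    return {name: value for name, value in headers.items()
            if name.lower() in LEAKY_HEADERS}

# Hypothetical response from an unhardened server.
sample = {
    "Content-Type": "text/html",
    "Server": "Apache/2.2.3 (CentOS) PHP/5.1.6",
    "X-Powered-By": "PHP/5.1.6",
}

for name, value in find_banner_leaks(sample).items():
    print(f"leak: {name}: {value}")
```

Anything this flags is a candidate for removal or obfuscation at the server (e.g. Apache's ServerTokens directive); an empty result just means the obvious banner fields are clean, not that the box leaks nothing.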
You should probably also check out the paper I wrote several years back on Passive Information Gathering.