During the weekend, Yahoo!'s social bookmarking property del.icio.us appears to have blocked the robots of rival search engines, including Google, from spidering the site and picking up fresh web pages, bookmarks and websites, observes Collin.
It was clearly not a simple robots.txt exclusion; instead, the site was returning a 404 response based on the requesting User-Agent.
When the User-Agent was set to Googlebot, the 404 error appeared every time, and the same response came back for any page other than the homepage.
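A check like this is easy to reproduce: request the same URL with different User-Agent headers and compare the status codes. Below is a minimal sketch using only Python's standard library; the del.icio.us URL is historical and the site no longer behaves this way, so the example is illustrative rather than a live test.

```python
# Probe how a server responds to different User-Agent strings.
import urllib.error
import urllib.request

def status_for(url: str, user_agent: str) -> int:
    """Return the HTTP status code the server sends for the given User-Agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        # urlopen raises on 4xx/5xx; the code is still the answer we want.
        return err.code

# At the time of the post, a comparison along these lines would have shown
# the behavior Collin describes:
#   status_for("http://del.icio.us/popular", "Mozilla/5.0")   -> 200
#   status_for("http://del.icio.us/popular", "Googlebot/2.1") -> 404
```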
"I took a look at del.icio.us' robots.txt," says Collin, "and found that it was disallowing Googlebot, Slurp, Teoma, and msnbot for the following:
- Disallow: /inbox
- Disallow: /subscriptions
- Disallow: /network
- Disallow: /search
- Disallow: /post
- Disallow: /login
- Disallow: /rss"
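For reference, those rules can be checked with Python's standard-library robots.txt parser. The block below reproduces the four user agents and the Disallow lines quoted above; the URLs tested are just examples.

```python
# Check the quoted del.icio.us robots.txt rules with Python's stdlib parser.
from urllib.robotparser import RobotFileParser

RULES = """\
User-agent: Googlebot
User-agent: Slurp
User-agent: Teoma
User-agent: msnbot
Disallow: /inbox
Disallow: /subscriptions
Disallow: /network
Disallow: /search
Disallow: /post
Disallow: /login
Disallow: /rss
"""

parser = RobotFileParser()
parser.parse(RULES.splitlines())

# The named bots are kept out of the listed paths...
print(parser.can_fetch("Googlebot", "https://del.icio.us/search"))  # False
# ...but the homepage itself is not disallowed.
print(parser.can_fetch("Googlebot", "https://del.icio.us/"))        # True
```

Note that these rules alone would not explain Collin's observation of 404s on every non-homepage URL, which supports his point that more than a robots.txt exclusion was in play.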
Why on earth might Yahoo! have done this? Competition, perhaps. Yahoo! has already added del.icio.us results to its search pages, and del.icio.us is an important property for Yahoo! to bank on. Apparently, the company has begun using that leverage, blocking other search engines from taking advantage of the information contained there by spidering it.
Moreover, Yahoo! has an edge over Ask, Google and MSN: through human bookmarking and tagging on del.icio.us, it can notice new web pages before the other search engines can, which can translate into better search results and rankings.
In this case, Yahoo! seems to have used its discretionary powers to fend off competition. While keeping rivals away from your business secrets is a common strategy among businesses, it is quite a bold move from the company. What are your thoughts about it?
There's a discussion about it at Spinn.