Amid what Google describes as a potentially infinite web, the search engine has identified a landmark one trillion unique URLs. That is quite some figure when you consider that it passed the billion mark only back in 2000.
The mind-boggling figure illustrates the challenge search engines face in trying to judge value among so many competing sources of information, in an environment where there are both fair (white hat SEO) and foul (black hat SEO) means of presenting content.
It is understandable, then, that social software has emerged to facilitate peer review of content sources and better identify value. After all, value is in the eye of the beholder, and part of that value derives from one’s own community.
The likes of Delicious and FriendFeed offer forms of community search, both allowing you to apply search filters through your own network. From my experiments, I find these useful for finding quality content, but not for providing the comprehensive reach that the major search engines offer.
While Google uses your own search history data to improve search results, it does of course also draw on the wider community by apportioning value to inbound links (what’s of value to the community is of value to you). That community is narrowed by attaching more value to links from websites on related topics, to better identify the kind of niche communities that you benefit from on social software.
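To make that idea concrete, here is a minimal, purely illustrative sketch of weighting inbound links by topic relatedness. The link graph, topic labels and weights are invented for the example and are not anything Google has published:

```python
# Toy illustration: score a page by its inbound links, giving extra weight
# to links from pages on a related topic. All data below is hypothetical.

INBOUND_LINKS = {
    "mysite.com/article": [
        ("bigportal.com", "general"),
        ("nicheblog.com", "search"),
        ("friendsblog.com", "search"),
    ],
}

def link_score(page, page_topic, related_weight=2.0, base_weight=1.0):
    """Sum inbound-link weights, boosting links from same-topic sources."""
    score = 0.0
    for source, source_topic in INBOUND_LINKS.get(page, []):
        score += related_weight if source_topic == page_topic else base_weight
    return score

print(link_score("mysite.com/article", "search"))  # 2 related + 1 general = 5.0
```

The point of the sketch is simply that two pages with the same number of inbound links can end up with different scores once the topical "community" behind those links is taken into account.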
As yet, neither offers the best combination of the personal and the comprehensive to deliver the ideal search mix for the individual. There remain challenges in key areas such as privacy, simplicity, data portability and applying the right filters at the right times. Then we can start talking about finding that one-in-a-trillion URL.
jeremydestin
Would love to see the breakdown of that trillion. How many are blogs, what’s the long tail of professional publishers, what’s the breakdown of traffic, etc. etc. What’s the real picture of the web?