Distributed search is a search engine model in which the tasks of Web crawling, indexing and query processing are distributed among multiple computers and networks.
Originally, most search engines ran on a single supercomputer. In recent years, however, most have moved to a distributed model. Google Search, for example, relies on thousands of computers crawling the Web from multiple locations all over the world.
In Google's distributed search system, each computer involved in indexing crawls and reviews a portion of the Web, taking a URL and following every link available from it (minus those marked for exclusion). The computer gathers the crawled results from the URLs and sends that information back to a centralized server in compressed format. The centralized server then coordinates that information in a database, along with information from other computers involved in indexing.
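The crawl-and-ship step described above can be sketched in a few lines of Python. This is a minimal illustration, not Google's actual implementation: the in-memory `FAKE_WEB`, the `EXCLUDED` set (standing in for URLs marked for exclusion) and the function names are all hypothetical, and "sending" to the central server is reduced to compressing the payload.

```python
import zlib

# Hypothetical in-memory "web": URL -> (page text, outgoing links).
FAKE_WEB = {
    "http://a.example/": ("Page A", ["http://b.example/", "http://c.example/"]),
    "http://b.example/": ("Page B", ["http://a.example/"]),
    "http://c.example/": ("Page C", []),
}

# URLs marked for exclusion (e.g. via robots.txt).
EXCLUDED = {"http://c.example/"}

def crawl(seed_url):
    """Take a URL and follow every link reachable from it, minus exclusions."""
    seen, frontier, results = set(), [seed_url], {}
    while frontier:
        url = frontier.pop()
        if url in seen or url in EXCLUDED:
            continue
        seen.add(url)
        text, links = FAKE_WEB[url]
        results[url] = text
        frontier.extend(links)
    return results

def package_for_central_server(results):
    """Compress the crawled results before shipping them back."""
    payload = "\n".join(f"{url}\t{text}" for url, text in sorted(results.items()))
    return zlib.compress(payload.encode())

compressed = package_for_central_server(crawl("http://a.example/"))
```

A real crawler would fetch pages over HTTP, parse links out of HTML and batch its uploads, but the shape is the same: traverse, skip exclusions, compress, ship.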
When a user types a query into the search field, Google's domain name server (DNS) software relays the query to the most logical cluster of computers, based on factors such as the cluster's proximity to the user and its current load. At the recipient cluster, the Web server software distributes the query to hundreds or thousands of computers to search simultaneously. Hundreds of computers scan the database index to find all relevant records. The index server compiles the results, the document server pulls together the titles and summaries and the page builder creates the search result pages.
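The query side follows a scatter-gather pattern: fan the query out to many machines, each of which scans its slice of the index, then merge the ranked hits and attach titles and summaries. The sketch below shows that flow under simplifying assumptions; the `SHARDS` and `DOC_STORE` data, the scoring values and the function names are invented for illustration, and the per-shard scans run sequentially here rather than in parallel.

```python
# Hypothetical per-machine index shards: term -> list of (doc_id, score).
SHARDS = [
    {"search": [(1, 0.9), (2, 0.4)]},
    {"search": [(3, 0.7)], "engine": [(3, 0.5)]},
]

# Hypothetical document store: doc_id -> (title, summary).
DOC_STORE = {
    1: ("Distributed search", "A model that spreads crawling and indexing..."),
    2: ("Web crawling", "How crawlers traverse links..."),
    3: ("Query processing", "How queries are answered..."),
}

def scatter(term):
    """Each shard scans its portion of the index (in parallel on real clusters)."""
    hits = []
    for shard in SHARDS:
        hits.extend(shard.get(term, []))
    return hits

def gather(hits, k=10):
    """Index server merges and ranks; document server adds titles and summaries."""
    ranked = sorted(hits, key=lambda hit: hit[1], reverse=True)[:k]
    return [DOC_STORE[doc_id] for doc_id, _ in ranked]

results = gather(scatter("search"))
```

The page builder would then render `results` into HTML; splitting index serving from document serving lets the two tiers scale independently.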
Some projects, such as Wikia Search (formerly Grub), are moving toward an even more decentralized search model. Like distributed computing projects such as SETI@home, many distributed search projects are supported by a network of volunteer users whose computers run client software in the background.