The earliest search engines had an index that contained a few hundred thousand pages and documents and received perhaps one or two thousand queries per day. A top search engine today will respond to tens of millions of queries per day and index hundreds of millions of pages.
Before a search engine can tell you where a file or document is, that file must first be found. To compile lists of the words found on the hundreds of millions of Web pages currently accessible, a search engine employs special software robots called spiders.
The process by which a spider builds these lists is known as Web crawling.
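The two ideas above, following links from page to page and compiling a word list for each page, can be sketched in a few lines of Python. This is a minimal illustration, not a production crawler: the `PAGES` dictionary is a hypothetical in-memory "web" standing in for real HTTP fetches, and the URLs and page contents are invented for the example.

```python
from collections import deque
from html.parser import HTMLParser
import re

# Hypothetical in-memory "web": URL -> HTML, standing in for real HTTP fetches.
PAGES = {
    "http://example.com/": '<a href="http://example.com/about">About</a> search engines',
    "http://example.com/about": "Spiders compile word lists by crawling",
}

class Spider(HTMLParser):
    """Extracts the words and outgoing links found on one page."""
    def __init__(self):
        super().__init__()
        self.words, self.links = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":  # collect outgoing links to crawl next
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_data(self, data):
        # Compile the list of words appearing on this page.
        self.words.extend(re.findall(r"[a-z0-9]+", data.lower()))

def crawl(seed):
    """Breadth-first crawl from a seed URL, building a word -> pages index."""
    index, seen, queue = {}, {seed}, deque([seed])
    while queue:
        url = queue.popleft()
        spider = Spider()
        spider.feed(PAGES.get(url, ""))
        for word in spider.words:
            index.setdefault(word, set()).add(url)
        for link in spider.links:
            if link not in seen:  # never visit the same page twice
                seen.add(link)
                queue.append(link)
    return index

index = crawl("http://example.com/")
print(sorted(index["spiders"]))  # pages on which the word "spiders" appears
```

A real spider would fetch pages over HTTP, respect `robots.txt`, and store the index on disk, but the core loop, visit a page, record its words, queue its links, is the same.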