The web creates new challenges for information retrieval. The amount of information on the web is growing rapidly, as well as the number of new users inexperienced in the art of web research. People are likely to surf the web using its link graph, often starting with high quality human maintained indices such as Yahoo! or with search engines. Human maintained lists cover popular topics effectively but are subjective, expensive to build and maintain, slow to improve, and cannot cover all esoteric topics. Automated search engines that rely on keyword matching usually return too many low quality matches. To make matters worse, some advertisers attempt to gain people's attention by taking measures meant to mislead automated search engines. We have built a large-scale search engine which addresses many of the problems of existing systems. It makes especially heavy use of the additional structure present in hypertext to provide much higher quality search results. We chose our system name, Google, because it is a common spelling of googol, or 10^100, and fits well with our goal of building very large-scale search engines.

1.1 Web Search Engines -- Scaling Up: 1994 - 2000

Search engine technology has had to scale dramatically to keep up with the growth of the web. In 1994, one of the first web search engines, the World Wide Web Worm (WWWW) [McBryan 94] had an index of 110,000 web pages and web accessible documents. As of November, 1997, the top search engines claim to index from 2 million (WebCrawler) to 100 million web documents (from Search Engine Watch). It is foreseeable that by the year 2000, a comprehensive index of the Web will contain over a billion documents. At the same time, the number of queries search engines handle has grown incredibly too. In March and April 1994, the World Wide Web Worm received an average of about 1500 queries per day. In November 1997, Altavista claimed it handled roughly 20 million queries per day. With the increasing number of users on the web, and automated systems which query search engines, it is likely that top search engines will handle hundreds of millions of queries per day by the year 2000. The goal of our system is to address many of the problems, both in quality and scalability, introduced by scaling search engine technology to such extraordinary numbers.

1.2. Google: Scaling with the Web

Creating a search engine which scales even to today's web presents many challenges. Fast crawling technology is needed to gather the web documents and keep them up to date. Storage space must be used efficiently to store indices and, optionally, the documents themselves. The indexing system must process hundreds of gigabytes of data efficiently. Queries must be handled quickly, at a rate of hundreds to thousands per second.

These tasks are becoming increasingly difficult as the Web grows. However, hardware performance and cost have improved dramatically to partially offset the difficulty. There are, however, several notable exceptions to this progress such as disk seek time and operating system robustness. In designing Google, we have considered both the rate of growth of the Web and technological changes. Google is designed to scale well to extremely large data sets. It makes efficient use of storage space to store the index. Its data structures are optimized for fast and efficient access (see section
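As a rough consistency check on the figures above (our own back-of-the-envelope arithmetic, not part of the original paper), 100 million queries per day averages out to a bit over a thousand queries per second, which is why a query-serving target of hundreds to thousands of queries per second matches the projected year-2000 load. The short sketch below works through the arithmetic; the 3x peak-to-average factor is an illustrative assumption, not a figure from the paper.

# Back-of-the-envelope query-rate check (illustrative; the peak factor is an assumption).
queries_per_day = 100_000_000            # projected year-2000 daily volume cited above
seconds_per_day = 24 * 60 * 60           # 86,400
avg_qps = queries_per_day / seconds_per_day
peak_qps = 3 * avg_qps                   # assumed 3x peak-to-average ratio
print(f"average: {avg_qps:,.0f} queries/sec")        # ~1,157
print(f"assumed peak: {peak_qps:,.0f} queries/sec")  # ~3,472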