You can find almost anything you're looking for on the Web if you search long enough, but who has time for extended Web surfing? Good news! If you know how to "phrase" your Web search questions, you can find what you want in a fraction of the time you might otherwise spend stumbling around in the dark. Read on for some useful online searching tips.

Tip #1: Have a clear idea of what it is you're looking for. Let's say, for example, that you need audio-visual material to accompany a 3rd grade lesson on the local history of

If you know what you're doing, search engines can help you find whatever you need quickly and efficiently. Discover the secrets that savvy searchers know!

Before a search engine can tell you where a file or document is, it must be found. To find information on the hundreds of millions of Web pages that exist, a search engine employs special software robots, called spiders, to build lists of the words found on Web sites. When a spider is building its lists, the process is called Web crawling. (There are some disadvantages to calling part of the Internet the World Wide Web - a large set of arachnid-centric names for tools is one of them.) In order to build and maintain a useful list of words, a search engine's spiders have to look at a lot of pages.

How does any spider start its travels over the Web? The usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.

Google began as an academic search engine. In the paper that describes how the system was built, Sergey Brin and Lawrence Page give an example of how quickly their spiders can work. They built their initial system to use multiple spiders, usually three at one time. Each spider could keep about 300 connections to Web pages open at a time. At its peak performance, using four spiders, their system could crawl over 100 pages per second, generating around 600 kilobytes of data each second.

Keeping everything running quickly meant building a system to feed necessary information to the spiders. The early Google system had a server dedicated to providing URLs to the spiders. Rather than depending on an Internet service provider for the domain name server (DNS) that translates a server's name into an address, Google had its own DNS, in order to keep delays to a minimum.

When the Google spider looked at an HTML page, it took note of two things: the words within the page, and where the words were found. Words occurring in the title, subtitles, meta tags and other positions of relative importance were noted for special consideration during a subsequent user search. The Google spider was built to index every significant word on a page, leaving out the articles "a," "an" and "the." Other spiders take different approaches.

These different approaches usually attempt to make the spider operate faster, allow users to search more efficiently, or both. For example, some spiders will keep track of the words in the title, sub-headings and links, along with the 100 most frequently used words on the page and each word in the first 20 lines of text. Lycos is said to use this approach to spidering the Web.

Other systems, such as AltaVista, go in the other direction, indexing every single word on a page, including "a," "an," "the" and other "insignificant" words. The push to completeness in this approach is matched by other systems in the attention given to the unseen portion of the Web page, the meta tags. Learn more about meta tags on the next page.
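The crawl strategy described above - start from a list of popular seed pages, index each page's words, and follow every link outward - can be sketched as a simple breadth-first traversal. This is a minimal illustration, not any real engine's code: the `web` dictionary is a hypothetical in-memory stand-in for the live Web, mapping each URL to its text and outgoing links.

```python
from collections import deque

def crawl(web, seeds, limit=100):
    """Breadth-first crawl: begin with popular seed pages, index
    the words on each page, and follow every link found, so the
    crawl spreads out across the most widely used pages first.
    `web` is a stand-in for the Web: {url: (page_text, links)}."""
    index = {}            # word -> set of URLs containing it
    visited = set()
    frontier = deque(seeds)
    while frontier and len(visited) < limit:
        url = frontier.popleft()
        if url in visited or url not in web:
            continue
        visited.add(url)
        text, links = web[url]
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
        frontier.extend(links)   # keep traveling outward
    return index

# A tiny example "Web": one popular site linking outward.
web = {
    "a.example": ("spiders crawl pages", ["b.example", "c.example"]),
    "b.example": ("pages link to pages", ["c.example"]),
    "c.example": ("crawl the web", []),
}
index = crawl(web, ["a.example"])
```

A production spider would replace the dictionary lookup with an HTTP fetch and HTML link extraction, and - as the text notes for Google - feed the frontier from a dedicated URL server rather than a local queue.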
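The indexing choices the text contrasts - skipping "insignificant" articles the way the Google spider did, while giving words in positions of relative importance (such as the title) special consideration - can be sketched as follows. The weighting number is purely illustrative, not taken from any real engine.

```python
STOP_WORDS = {"a", "an", "the"}   # articles the Google spider left out

def index_page(title, body):
    """Sketch of position-aware indexing: count every significant
    word in the body, then give title words an extra (illustrative)
    bonus so they get special consideration during a later search."""
    weights = {}
    for word in body.lower().split():
        if word not in STOP_WORDS:
            weights[word] = weights.get(word, 0) + 1
    for word in title.lower().split():
        if word not in STOP_WORDS:
            weights[word] = weights.get(word, 0) + 5  # title bonus
    return weights

w = index_page("The History of Spiders", "a spider crawls the web")
# "a" and "the" are skipped; "history" outweighs body-only words
```

An AltaVista-style indexer would simply drop the `STOP_WORDS` filter and record every word, trading index size for completeness.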