Search Engines: How do they rank websites?
Posted by Markus Paulsen on 07 February 2011 17:12

Every user of search engines is familiar with the routine: after a search term has been keyed in, the engine immediately returns a list of matching pages, apparently selected from the millions of pages published on the Internet. The matches are also sorted by priority, with the most relevant hits at the top of the list.

However, search engines are not always right. Every so often, you might find pages at the top of the list that are not at all relevant to your search term. Most users also know from experience that it can take a long time to actually find the information they are looking for. In general, however, search engines provide an outstanding service. What they obviously lack is experience and judgment when ranking websites on their result pages. Better and more "intelligent" software agents are being developed, but they are not yet ready to be implemented. So how do search engines actually determine the relevance of a page in relation to a search term? They simply follow a set of rules, of which the most important concerns the position and frequency of specific keywords on a website.

Pages that contain the keywords in their title are considered more relevant than those where none of the keywords appear in the title. Search engines also check whether the keywords appear in the top section of the website, i.e. in the heading or the first text block, on the assumption that relevant websites will mention the key terms in the first few lines. The frequency of a term is another main factor in ranking: search engines evaluate the number of keyword occurrences in relation to the total number of words on a page, and pages where the keywords appear frequently are considered more relevant than others.
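
As a minimal sketch, such a rule set could be scored roughly as follows. The weights, the 50-word cutoff for the "top section", and the function name are all invented for illustration; no search engine publishes its actual formula:

    # Illustrative only: a naive relevance score combining title match,
    # position near the top of the page, and keyword frequency (density).
    # All weights and cutoffs are invented for this example.
    def relevance_score(title, body, keyword):
        keyword = keyword.lower()
        words = body.lower().split()
        score = 0.0
        if keyword in title.lower().split():
            score += 3.0   # keyword in the title counts most
        if keyword in words[:50]:
            score += 2.0   # keyword in the heading or first text block
        if words:
            score += words.count(keyword) / len(words)   # keyword density
        return score

Under such a scheme, a page that mentions the keyword in its title, in its opening lines and throughout the body would outrank one that mentions it only once near the bottom.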

All major search engines also use more refined methods. As these specialized methods vary from engine to engine, the search results differ considerably. One obvious difference is that certain search engines list more hits than others. Some search engines also index websites more frequently than others, so no two engines browse through exactly the same collection of websites. In addition, certain search engines prefer certain websites for particular reasons. Some apply link popularity as a ranking criterion: pages that the index records as having many incoming links are listed towards the top, on the reasoning that a site to which many other sites link must be important. Certain hybrid search engines, namely those with linked directories, may even prioritize pages that have been reviewed by their editors. This is done on the assumption that a site considered worth a review must be superior to one that has not been reviewed.
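
A simplified sketch of such a link-popularity calculation, in the spirit of the well-known PageRank algorithm, might look like this. The damping factor and iteration count are conventional textbook values, not any particular engine's parameters:

    # Illustrative only: iteratively redistribute rank along links, so
    # pages with many (or highly ranked) incoming links end up on top.
    # Pages without outgoing links are simply ignored for simplicity.
    def link_popularity(links, damping=0.85, iterations=20):
        """links maps each page to the list of pages it links to."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, targets in links.items():
                if targets:
                    share = damping * rank[page] / len(targets)
                    for target in targets:
                        new_rank[target] += share
            rank = new_rank
        return rank

For example, link_popularity({"A": ["B", "C"], "B": ["C"], "C": ["A"]}) gives "C" the highest score, since two pages link to it while "A" and "B" each receive only one incoming link.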

Meta tags are wrongly seen by certain web designers as a form of "secret tuning": they believe that this feature alone can push a site to the top of a result page. In reality, meta tags are only one factor among many, and there are numerous examples of sites that contain no meta tags at all and still show up at the very top of the lists.

Search engines can equally ban sites or push them down in their index if they detect that an attempt was made to "outwit" the engine. A typical example is a keyword repeated hundreds of times on a page, simply to boost its frequency and thus push the site up the result page. Search engines are constantly on the lookout for such pages, and they also follow up on user complaints in order to eliminate such sites.
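
As a hypothetical illustration, a crude check for this kind of keyword stuffing could flag any page where a single word makes up an implausibly large share of the text. The 30% threshold here is invented for the example; real detection heuristics are far more sophisticated and are not published:

    # Illustrative only: flag a page where one word dominates the text.
    # The threshold is invented; real engines combine many such signals.
    from collections import Counter

    def looks_stuffed(body, threshold=0.3):
        words = body.lower().split()
        if not words:
            return False
        _, count = Counter(words).most_common(1)[0]
        return count / len(words) > threshold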