How Search Engines Work

Search engines have one objective – to provide you with the most relevant results possible for your search query. If the search engine succeeds in giving you information that meets your needs, you are a happy searcher, and happy searchers are more likely to come back to the same search engine time and time again because they are getting the results they need.

In order for a search engine to display results when a user types in a query, it needs an archive of available information to choose from. Every search engine has proprietary methods for gathering and prioritizing website content, but regardless of the specific tactics or methods used, this process is called indexing. Search engines attempt to scan the entire online universe and index all of the information so they can show it to you when you enter a search query.
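
To make the idea of an index concrete, here is a minimal sketch in Python of the classic data structure behind one: an inverted index that maps each word to the pages containing it. The page URLs and text are invented for illustration; real search engine indexes are enormously more sophisticated, but the lookup-instead-of-rescan principle is the same.

```python
# A minimal inverted index: map each word to the set of pages containing it.
# The pages below are made up; this only illustrates the principle.
from collections import defaultdict

pages = {
    "example.com/seo": "search engines index pages and rank them by relevance",
    "example.com/recipes": "how to cook pasta with fresh tomatoes",
    "example.com/links": "inbound links help search engines judge authority",
}

index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

# Answering a query becomes a lookup instead of a rescan of every page.
query = "search engines"
matches = set.intersection(*(index[word] for word in query.lower().split()))
print(matches)  # the pages that contain every word in the query
```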

How do they do it? Every search engine has what are referred to as bots, or crawlers, that constantly scan the web, indexing websites for content and following links on each webpage to other webpages. If your website has not been indexed, it cannot appear in the search results. Unless you are running a shady online business or trying to cheat your way to the top of the search engine results page (SERP), chances are your website has already been indexed.
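
The crawling loop itself can be sketched in a few lines. The toy below follows links breadth-first from a hypothetical start URL; it ignores robots.txt, crawl delays, JavaScript, and everything else a real crawler must handle, so treat it purely as an illustration of "fetch a page, record it, follow its links".

```python
# Toy crawler: fetch a page, record it, queue the links it points to.
# The start URL is hypothetical; robots.txt, politeness, and JS are ignored.
import re
from urllib.parse import urljoin
from urllib.request import urlopen

def crawl(start_url, max_pages=10):
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except Exception:
            continue
        seen.add(url)  # in a real engine, the page would be indexed here
        for href in re.findall(r'href="([^"#]+)"', html):
            queue.append(urljoin(url, href))  # follow links to other webpages
    return seen

print(crawl("https://example.com"))  # hypothetical starting point
```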

So, big search engines like Google, Bing, and Yahoo are constantly indexing hundreds of millions, if not billions, of webpages. How do they know what to show on the SERP when you enter a search query? The search engines consider two main areas when determining what your website is about and how to prioritize it.

  1. Content on your website: When indexing pages, the search engine bots scan each page of your website, looking for clues about what topics your website covers and scanning your website's back-end code for certain tags, descriptions, and instructions.
  2. Who's linking to you: As the search engine bots scan webpages for indexing, they also look for links from other websites. The more inbound links a website has, the more influence or authority it has. Essentially, every inbound link counts as a vote for that website's content. Also, each inbound link holds different weight. For instance, a link from a highly authoritative website like The New York Times (nytimes.com) will give a website a bigger boost than a link from a small blog site. This boost is sometimes referred to as link juice (see the sketch after this list).
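
The "votes with different weights" idea is easy to illustrate. In the sketch below the authority numbers and site names are invented; real engines derive authority iteratively from the whole link graph (in the spirit of PageRank) rather than from a hand-written table, but the comparison shows why one strong inbound link can beat several weak ones.

```python
# Toy illustration: every inbound link is a vote, but votes carry different weight.
# Authority values and sites are made up for the example.
linker_authority = {
    "nytimes.com": 9.5,     # highly authoritative news site
    "smallblog.com": 0.4,   # small personal blog
    "friendsite.com": 0.6,  # another low-authority site
}

inbound_links = {
    "site-a.com": ["nytimes.com"],                      # one strong link
    "site-b.com": ["smallblog.com", "friendsite.com"],  # two weak links
}

link_juice = {
    site: sum(linker_authority[src] for src in sources)
    for site, sources in inbound_links.items()
}
print(link_juice)  # {'site-a.com': 9.5, 'site-b.com': 1.0} -- fewer but stronger links win
```
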
When a search query is entered, the search engine looks in its index for the most relevant information and displays the results on the SERP, listed in order of relevance and authority.

If you conduct the same search on different search engines, chances are you will see different results on the SERP. This is because each search engine uses a proprietary algorithm that considers multiple factors in order to determine what results to show in the SERP when a search query is entered.
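
One way to picture why the same query produces different SERPs on different engines: imagine each engine combining broadly similar signals with its own weights. Everything in the sketch below is hypothetical (URLs, scores, and both weightings), but flipping the weights is enough to reorder the results.

```python
# Hypothetical: the same candidate pages ranked by two differently weighted "algorithms".
pages = [
    # (url, relevance to the query, link authority) -- invented scores
    ("deep-guide.example.com", 0.9, 0.2),
    ("big-brand.example.com",  0.6, 0.9),
    ("thin-page.example.com",  0.3, 0.5),
]

def rank(pages, w_relevance, w_authority):
    score = lambda p: w_relevance * p[1] + w_authority * p[2]
    return [url for url, *_ in sorted(pages, key=score, reverse=True)]

print(rank(pages, w_relevance=0.8, w_authority=0.2))  # relevance-heavy engine
print(rank(pages, w_relevance=0.4, w_authority=0.6))  # authority-heavy engine puts the big brand first
```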

In December 2012, the search landscape looked like this:
  • Google: 114.7 billion searches, 65.2% share
  • Baidu: 14.5 billion searches, 8.2% share
  • Yahoo: 8.6 billion searches, 4.9% share
  • Yandex: 4.8 billion searches, 2.8% share
  • Microsoft: 4.5 billion searches, 2.5% share
  • Others: 28.7 billion searches, 16.3% share


A few factors that a search engine algorithm may consider when deciding what information to show in the SERP include the following (a toy weighting of such signals is sketched after the list):

  • Geographic location of the searcher
  • Historical performance of a listing (clicks, bounce rates, etc.)
  • Link quality (reciprocal vs. one-way)
  • Webpage content (keywords, tags, pictures)
  • Back-end code or HTML of the webpage
  • Link type (social media sharing, link from media outlet, blog, etc.)
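
To see how such disparate signals could be folded into a single ordering, here is a purely illustrative sketch: each candidate page gets a handful of feature scores and the "algorithm" is just a weighted sum. The feature names, weights, and numbers are all invented; real algorithms use hundreds of signals and far more elaborate math.

```python
# Purely illustrative: folding several ranking signals into one score.
weights = {
    "content_relevance": 0.50,
    "link_authority":    0.30,
    "historical_ctr":    0.15,
    "local_match":       0.05,  # e.g. searcher and page are in the same city
}

candidates = {
    "pizza-blog.example.com":     {"content_relevance": 0.9, "link_authority": 0.2,
                                   "historical_ctr": 0.4, "local_match": 0.0},
    "local-pizzeria.example.com": {"content_relevance": 0.7, "link_authority": 0.5,
                                   "historical_ctr": 0.6, "local_match": 1.0},
}

scores = {
    url: sum(weights[f] * value for f, value in features.items())
    for url, features in candidates.items()
}
for url, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{s:.2f}  {url}")  # the local pizzeria edges out the more "relevant" blog
```
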
With a $200B market cap, Google dominates the search engine market. Google became the leader by fundamentally revolutionizing the way search engines work and giving searchers better results with its more advanced algorithm. With 64% market share, according to Compete, Inc., Google is still viewed as the primary innovator and master in the space.

Before the days of Google (circa 1997), search engines relied solely on indexing webpage content and considering factors like keyword density to determine what results to put at the top of the SERP. This approach gave rise to what are now referred to as black-hat SEO tactics, as website engineers began intentionally stuffing their webpages with keywords so they would rank at the top of the search engines, even if the pages were completely irrelevant to the search query.
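
Keyword density is simple enough to show why it was so easy to game: it is just the share of a page's words that match the keyword, so repeating the keyword mechanically inflated the score. The snippet below is only an illustration of that early-era signal, not how any modern engine ranks pages.

```python
# Keyword density: the share of words on a page that match the keyword.
# Early engines leaned heavily on signals like this, which is why stuffing worked.
def keyword_density(text, keyword):
    words = text.lower().split()
    return words.count(keyword.lower()) / len(words)

honest  = "we sell handmade leather shoes and boots in portland"
stuffed = "cheap shoes shoes shoes buy shoes best shoes cheap shoes here"

print(keyword_density(honest, "shoes"))   # ~0.11
print(keyword_density(stuffed, "shoes"))  # ~0.55 -- "wins" under a density-only rule
```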
