How a Search Engine Works
The work of a search engine is divided into three stages: crawling, indexing and retrieval.
Search engines use web crawlers, also called spiders, to perform crawling. The crawler's task is to visit a web page, read it and follow its links to the other pages of the site. Each time the crawler visits a page, it makes a copy of that page and adds its URL to the index. It then revisits the site regularly, say every month or two, to look for updates or changes.
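The visit-read-follow loop above can be sketched as a small breadth-first crawler. This is a minimal illustration, not a production crawler: the `PAGES` dictionary is a hypothetical in-memory "web" standing in for real HTTP fetches, so the link-following logic stays visible.

```python
from collections import deque
from html.parser import HTMLParser

# Hypothetical in-memory web: URL -> HTML (a real crawler would fetch over HTTP).
PAGES = {
    "http://example.com/":  '<a href="http://example.com/a">A</a>',
    "http://example.com/a": '<a href="http://example.com/">home</a>'
                            '<a href="http://example.com/b">B</a>',
    "http://example.com/b": "<p>no links here</p>",
}

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed):
    """Breadth-first crawl: visit a page, store a copy, follow its links."""
    seen, queue, copies = set(), deque([seed]), {}
    while queue:
        url = queue.popleft()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        copies[url] = PAGES[url]      # the crawler's stored copy of the page
        parser = LinkExtractor()
        parser.feed(PAGES[url])
        queue.extend(parser.links)    # follow links to other pages
    return copies

copies = crawl("http://example.com/")
```

Starting from the seed URL, the crawler reaches every linked page exactly once and keeps a copy of each, which is the raw material the indexing stage works from.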
In the indexing stage, the search engine builds its index from what the crawler found. The index is like a huge book containing a copy of every web page the crawler has discovered; whenever a page changes, the crawler updates the book with the new content.
So the index comprises the URLs of the pages the crawler has visited, together with the information collected from them. Search engines use this information to give users relevant answers to their queries. If a page has not been added to the index, it cannot appear in the results.
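One common way to organise such an index is an inverted index, which maps each word to the set of URLs containing it. The sketch below assumes a small hypothetical `copies` mapping of URL to page text, as the crawling stage might have produced; real engines store far richer information per page.

```python
import re

# Hypothetical stored copies from the crawl: URL -> page text.
copies = {
    "http://example.com/":  "search engines crawl the web",
    "http://example.com/a": "the index maps words to pages",
}

def build_index(copies):
    """Inverted index: each word maps to the set of URLs that contain it."""
    index = {}
    for url, text in copies.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index.setdefault(word, set()).add(url)
    return index

def lookup(index, query):
    """Return URLs containing every query word.
    A page that was never indexed can never be returned."""
    word_sets = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*word_sets) if word_sets else set()

index = build_index(copies)
```

A query such as `lookup(index, "index")` returns only the page whose stored text contains that word, which is why an unindexed page is invisible to users.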
Retrieval is the final stage, in which the search engine returns the most useful and relevant answers in a particular order. Search engines use ranking algorithms to improve the search results so that only genuine information reaches users; PageRank, for example, is a well-known algorithm that scores a page by the links pointing to it. The engine sifts through the pages recorded in the index and shows the pages it judges best on the first page of results.
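The idea behind PageRank can be sketched with a few lines of power iteration. This is a simplified textbook version over a tiny hypothetical three-page link graph, not how any real engine computes rankings at web scale.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank.
    links: page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)     # split rank among outgoing links
                for q in outs:
                    if q in new:
                        new[q] += damping * share
            else:
                # dangling page with no links: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical graph: A and C both link to B; B links back to A and C.
ranks = pagerank({"A": ["B"], "B": ["A", "C"], "C": ["B"]})
```

Because two pages link to B while A and C each receive only half of B's rank, B ends up with the highest score, illustrating how link structure decides which pages appear first.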