Monday, 16 December 2013

Black Hat SEO

Black hat methods are unethical and strongly discouraged ways of bringing traffic to your website. They can bring short-term results, but once a search engine notices that you are cheating the system, your page can be penalized or banned.



Black hat methods include:
  • Hidden Content
  • Keyword Stuffing
  • Doorway or Gateway pages
  • Link Farming
  • Cloaking
Each term will be explained in detail in the next post.

Saturday, 7 December 2013

White Hat SEO – Pure Organic SEO without Spamming

When we say “white”, we mean pure, free of impurities or bad elements; similarly, white hat SEO means achieving top rankings without illegal or deceptive tactics. It relies on purely legitimate methods, with no spamming or keyword stuffing.
White Hat SEO refers to the use of SEO strategies, techniques and tactics that focus on human audiences rather than on search engines, and that completely follow search engine rules and policies.

Any SEO tactic that maintains the integrity of your website and of the SERPs (search engine results pages) is considered a "white-hat" search engine optimization tactic. These are the only tactics we should use; they enhance, rather than detract from, your website and its rankings.

Thursday, 5 December 2013

What is the difference between crawling, indexing and caching?


Crawling is the process by which search engine spiders/bots move from web page to web page by following the links on those pages. The pages "found" in this way are later ranked by an algorithm and indexed into the search engine's database. In short, crawling is when a search engine visits your website and works out its depth and hierarchy.
Indexing is where the search engine ranks the URLs it has crawled using various criteria and places them in its database, or index. It is like a library: when a new book arrives, the librarian creates an index entry for it before putting it on the right shelf.

Caching is where copies of web pages are stored locally, either on an Internet user's hard drive or within a search engine's database. "Cache" is the more technical term for this temporary store of your website. Google updates its cache on a regular basis, depending on what you have indicated in your sitemap.xml.
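The crawl-then-index cycle described above can be sketched in a few lines. This is a minimal illustration, not how any real search engine is implemented: the in-memory `TOY_WEB` graph is a made-up stand-in for real HTTP fetches, so the whole example is an assumption-laden toy.

```python
from collections import deque

# A toy "web": each URL maps to (page text, outgoing links).
# This in-memory dict is a stand-in for real HTTP fetches.
TOY_WEB = {
    "/home":   ("welcome to the site", ["/about", "/blog"]),
    "/about":  ("about our seo blog", ["/home"]),
    "/blog":   ("posts on white hat seo", ["/home", "/post-1"]),
    "/post-1": ("crawling indexing caching", []),
}

def crawl(start):
    """Breadth-first crawl: follow links from page to page, like a spider."""
    seen, queue, index = set(), deque([start]), {}
    while queue:
        url = queue.popleft()
        if url in seen or url not in TOY_WEB:
            continue
        seen.add(url)
        text, links = TOY_WEB[url]
        index[url] = text      # "indexing": store what was found
        queue.extend(links)    # "crawling": follow outgoing links
    return index

index = crawl("/home")
```

Starting from `/home`, the spider discovers every page reachable by links, which is why pages with no inbound links tend not to get crawled at all.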

Wednesday, 4 December 2013

How Do Search Engines Work?

The first basic truth you need to know to learn SEO is that search engines are not humans. While this might be obvious to everybody, the differences between how humans and search engines view web pages are not. Unlike humans, search engines are text-driven.

Search engines perform several activities in order to deliver search results – crawling, indexing, processing, calculating relevancy, and retrieving.

First, search engines crawl the Web to see what is there. This task is performed by a piece of software, called a crawler or a spider (or Googlebot, as is the case with Google). Spiders follow links from one page to another and index everything they find on their way.

After a page is crawled, the next step is to index its content. The indexed page is stored in a giant database, from where it can later be retrieved.

When a search request comes, the search engine processes it – i.e. it compares the search string in the search request with the indexed pages in the database.

Since it is likely that more than one page (in practice, millions of pages) contains the search string, the search engine calculates the relevancy of each of those pages in its index to the search string.

The last step in a search engine's activity is retrieving the results. Basically, this is simply displaying them in the browser – the endless pages of search results, sorted from the most relevant to the least relevant sites.
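The indexing–processing–relevancy–retrieval steps above can be sketched as a toy pipeline. This is a deliberately crude illustration under stated assumptions: the three sample pages are invented, and the "relevancy" score is just a count of matched query words, nothing like a real ranking algorithm.

```python
# Toy pipeline: index a few "pages", then process a query and rank
# results by a crude relevancy score (number of matching terms).
pages = {
    "page1": "white hat seo follows search engine guidelines",
    "page2": "black hat seo tricks search engines",
    "page3": "baking bread at home",
}

# Indexing: build an inverted index mapping each word to the pages containing it.
inverted = {}
for url, text in pages.items():
    for word in set(text.split()):
        inverted.setdefault(word, set()).add(url)

def search(query):
    """Processing + relevancy: score each page by how many query terms it matches."""
    scores = {}
    for word in query.split():
        for url in inverted.get(word, ()):
            scores[url] = scores.get(url, 0) + 1
    # Retrieval: most relevant results first
    return sorted(scores, key=lambda u: (-scores[u], u))

results = search("seo guidelines")
```

Running `search("seo guidelines")` returns `page1` ahead of `page2`, because `page1` matches both query terms while `page2` matches only one; `page3` matches nothing and is not retrieved at all.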

Google Technical guidelines


· Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site.


· Allow search bots to crawl your sites without session IDs or arguments that track their path through the site. These techniques are useful for tracking individual user behavior, but the access pattern of bots is entirely different. Using these techniques may result in incomplete indexing of your site, as bots may not be able to eliminate URLs that look different but actually point to the same page.


· Make sure your web server supports the If-Modified-Since HTTP header. This feature allows your web server to tell Google whether your content has changed since we last crawled your site. Supporting this feature saves your bandwidth and overhead.


· Make use of the robots.txt file on your web server. This file tells crawlers which directories can or cannot be crawled. Make sure it's current for your site so that you don't accidentally block the Googlebot crawler. Visit http://code.google.com/web/controlcrawlindex/docs/faq.html to learn how to instruct robots when they visit your site. You can test your robots.txt file to make sure you're using it correctly with the robots.txt analysis tool available in Google Webmaster Tools.
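You can check how a robots.txt file will be interpreted with Python's standard-library `urllib.robotparser`. The rules below are a made-up sample (the `/search/` and `/private/` paths are illustrative), parsed inline rather than fetched over HTTP:

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt, supplied inline instead of fetched from a server.
robots_txt = """\
User-agent: *
Disallow: /search/
Disallow: /private/

User-agent: Googlebot
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A generic bot must skip the blocked directories...
blocked = rp.can_fetch("SomeBot", "/search/results?q=seo")   # False
# ...while Googlebot has its own group allowing everything.
allowed = rp.can_fetch("Googlebot", "/private/page.html")    # True
```

Note how the more specific `User-agent: Googlebot` group overrides the `*` group for that crawler; a mistake in ordering or in the user-agent name is exactly how sites accidentally block Googlebot.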


· Make reasonable efforts to ensure that advertisements do not affect search engine rankings. For example, Google's AdSense ads and DoubleClick links are blocked from being crawled by a robots.txt file.


· If your company buys a content management system, make sure that the system creates pages and links that search engines can crawl.


· Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines.
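For example, a robots.txt fragment like the following (the paths shown are illustrative placeholders; your site's internal-search and auto-generated URLs may differ) keeps crawlers away from such low-value pages:

```
User-agent: *
Disallow: /search/
Disallow: /calendar/
```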


· Test your site to make sure that it appears correctly in different browsers.


· Monitor your site's performance and optimize load times. Google's goal is to provide users with the most relevant results and a great user experience. Fast sites increase user satisfaction and improve the overall quality of the web (especially for those users with slow Internet connections), and we hope that as webmasters improve their sites, the overall speed of the web will improve.
Google strongly recommends that all webmasters regularly monitor site performance using Page Speed, YSlow, WebPagetest or other tools. In addition, the Site Performance tool in Webmaster Tools shows the speed of your website as experienced by users around the world.

Tuesday, 26 November 2013

Why does SEO play an important role in website ranking on any search engine?



SEO (Search Engine Optimization) improves the visibility of a website on any search engine. It makes your website easy for both users and search engine robots to understand. Search engines can't see and understand a web page the way a human does. SEO allows webmasters to provide hints that the search engine can use to understand the content, and in turn to figure out what each page is about and how it may be useful to users. SEO adds proper structure to your content, without which many websites are effectively invisible to search engines.

Because of the high degree of competition among web-based businesses, it has become very difficult to make your business visible to its target audience. To increase the visibility of your website, get it search engine optimized.

Advantages of SEO:
  • SEO helps search engines crawl your website more deeply and more frequently, and thereby helps in quick indexing.
  • It drives more organic traffic to your website, with a natural and smooth flow.
  • It gives search engines a clear picture of your website, its services and products, enhancing the authenticity of your business.
  • It gives you a winning edge by making your presence prominent among the listings for relevant search terms.
  • Quality SEO keeps you safe from being penalized by search engines for any sort of unethical behavior (black hat techniques).

Google Quality Guidelines for websites


Webmasters who spend their energies upholding the spirit of the basic principles will provide a much better user experience and subsequently enjoy better ranking than those who spend their time looking for loopholes they can exploit.
Quality guidelines - basic principles
  • Make pages primarily for users, not for search engines.
  • Don't deceive your users.
  • Avoid tricks intended to improve search engine rankings. A good rule of thumb is whether you'd feel comfortable explaining what you've done to a website that competes with you, or to a Google employee. Another useful test is to ask, "Does this help my users? Would I do this if search engines didn't exist?"
  • Think about what makes your website unique, valuable, or engaging. Make your website stand out from others in your field.
Avoid the following techniques:
  • Automatically generated content
  • Cloaking
  • Participating in link schemes
  • Hidden text or links
  • Doorway pages
  • Sneaky redirects
  • Scraped content
  • Participating in affiliate programs without adding sufficient value
  • Loading pages with irrelevant keywords
  • Creating pages with malicious behavior such as phishing or installing viruses
  • Sending automated queries to Google
  • Abusing rich snippets markup 

Engage in good practices like the following:
  • Monitoring your site for hacking and removing hacked content as soon as it appears
  • Preventing and removing user-generated spam on your site


If your site violates one or more of these guidelines, then Google may take manual action against it. Once you have remedied the problem, you can submit your site for reconsideration.