Google recommends that all websites use https:// when possible. The hostname is where your website is hosted, commonly using the same domain name that you'd use for email. Google differentiates between the "www" and "non-www" versions (for example, "www.example.com" or just "example.com"). When adding your website to Search Console, we recommend adding both the http:// and https:// versions, as well as the "www" and "non-www" versions.
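For a hypothetical domain example.com, that means adding all four combinations as separate properties:

```
https://example.com/
https://www.example.com/
http://example.com/
http://www.example.com/
```

Search Console treats each of these as a distinct property, so verifying all four ensures you see data no matter which variant users and crawlers reach.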
Why? Today, we're faced with a plethora of disinformation and misinformation, crafted and concocted by clever minds looking more to extract money from you than help you to earn it. That latest "proven traffic system" that you just plopped down $997 for isn't going to bring you the results you expected. That new video series by the latest raving internet marketer on how you can drive "unlimited" traffic to your website? Nope. That isn't going to work either.
Attempting to replace a dead link with your own is easily and routinely identified as spam by the Wikipedia community, which expects dead links to be replaced with equivalent links at archive.org. Persistent attempts will quickly get your account blocked, and your website can be blacklisted (the Wikipedia blacklist is public, and there is evidence that Google uses it to determine rankings), which will have negative SEO consequences.
You may not want certain pages of your site crawled because they might not be useful to users if found in a search engine's search results. If you do want to prevent search engines from crawling your pages, Google Search Console has a friendly robots.txt generator to help you create this file. Note that if your site uses subdomains and you wish to have certain pages not crawled on a particular subdomain, you'll have to create a separate robots.txt file for that subdomain. For more information on robots.txt, we suggest this Webmaster Help Center guide on using robots.txt files.
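As a rough sketch, a robots.txt file placed at the root of the host might look like the following; the blocked paths here are hypothetical examples, not recommendations for any particular site:

```
# robots.txt — served from the root, e.g. https://example.com/robots.txt
# Block all crawlers from cart pages and internal search results (hypothetical paths)
User-agent: *
Disallow: /cart/
Disallow: /search

# Optionally point crawlers at the sitemap
Sitemap: https://example.com/sitemap.xml
```

Remember that each subdomain (for example, blog.example.com) needs its own robots.txt file at its own root.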
Commenting on blog posts written by industry experts with lots of followers can bring your website a lot of traffic. When you post a comment, most blogs allow you to leave a link back to your site for other readers to check out, and as long as you leave an insightful comment you WILL get traffic from your blog comments. Make sure you comment as quickly as possible when new blog posts go up. The higher in the comments you are, the more clicks you'll get. I have Google Reader set up to alert me when new blog posts are made on the industry blogs I follow, and I comment immediately to lock in my first-place spot.
To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots (usually <meta name="robots" content="noindex">). When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.
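The parse-then-check behavior described above can be sketched with Python's standard urllib.robotparser module, which implements the same rules a well-behaved crawler follows; the robots.txt content and URLs below are hypothetical examples:

```python
from urllib import robotparser

# Hypothetical robots.txt content, as a crawler might fetch it from
# https://example.com/robots.txt before crawling any other page.
rules = """
User-agent: *
Disallow: /cart/
Disallow: /search
""".splitlines()

# Parse the rules once; a real crawler would cache this result,
# which is why stale copies can lead to unwanted crawls.
rp = robotparser.RobotFileParser()
rp.parse(rules)

# Disallowed path: the crawler should skip it.
print(rp.can_fetch("*", "https://example.com/cart/checkout"))   # False
# Path not matched by any Disallow rule: crawling is permitted.
print(rp.can_fetch("*", "https://example.com/products/widget")) # True
```

Note that robots.txt only discourages crawling; to keep an already-discoverable page out of the index itself, the robots meta tag mentioned above is the appropriate tool.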