Once you’ve submitted your website to Google for indexing, you’re probably raring to go. After all, you put a lot of time and money into presenting your brand in the online arena. Naturally, you want it to pay off in traffic, lead generation, patronage, and sales.

There’s a lot you can do to promote your website and improve its placement in search results (even though you can no longer see your PageRank score, Google still uses ranking signals like it behind the scenes). You want to increase your online presence.

Yet, to an extent, you’re reliant on Google (and other search engines) to index and rank new content. Is there anything you can do to speed the process?

You can’t necessarily make web crawlers work faster. However, Google has a slew of Webmaster Tools designed to help site owners direct crawlers, so that new content can be indexed more quickly and start delivering results like traffic and conversions.

One tool from the Webmaster lineup that you may not have heard about yet is the Fetch as Google tool.

This tool simulates a Google web crawl. According to Google, it lets you see whether Googlebot can access a page on your site and how it’s rendered. Additionally, you’ll be able to determine whether any page resources (such as images or scripts) are blocked to Googlebot.
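
Blocked resources are frequently the result of a robots.txt rule. If you want a quick pre-check outside the tool, here’s a minimal Python sketch (the URLs are placeholders) that uses the standard library’s robotparser to test whether Googlebot is allowed to fetch a page and one of its resources:

    from urllib import robotparser

    # Load this site's robots.txt (placeholder domain).
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # Test a page and a script resource the page depends on.
    for url in ["https://example.com/new-post",
                "https://example.com/js/app.js"]:
        verdict = "allowed" if rp.can_fetch("Googlebot", url) else "blocked"
        print(url, "->", verdict)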

You’ll gain valuable information about your website, including insight into why new content isn’t being indexed as fast as you expect. There’s even an equivalent tool for mobile apps. So how do you use the Fetch as Google Webmaster Tool?

Here’s what you need to know.

Find the Tool

It’s easy enough to get started with your fetch. Go to your Webmaster Tools homepage and select your website. In the toolbar, locate and select the “crawl” function. Then, select “fetch as Google” from the menu.

Next, enter the URL of the page you want to simulate a crawl on. One caveat: the fetch won’t follow redirects. A real web crawler will follow a redirect when everything is functioning properly, but the tool stops when it hits one.

If the page you entered redirects elsewhere, you may be able to populate the destination by clicking the “follow” button, depending on where the redirect leads. Otherwise, you’ll have to copy and paste the redirect URL into the fetch box yourself.
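
If you’re unsure where a redirect chain ends, you can trace it yourself before fetching. Here’s a rough sketch, assuming the third-party requests library and a placeholder URL:

    import requests

    # Follow the redirect chain and print each hop, ending with the
    # final URL you would paste into the fetch box.
    response = requests.head("http://example.com/old-page",
                             allow_redirects=True, timeout=10)
    for hop in response.history:
        print(hop.status_code, "->", hop.headers.get("Location"))
    print("Final URL:", response.url)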

Choose Your Googlebot

Once you’ve entered relevant information about the location of your indexing expedition, you need to select a Googlebot. What the heck does this mean?

Different devices view your pages in different ways. This is part of the indexing process. The Googlebot you choose will view your page as those devices would.

For most websites, you’ll either want to view your page as if you were a desktop browser or mobile device accessing the content. You just need to select the appropriate Googlebot to accomplish this.

There are a few variants for mobile use, some of which are more common in other countries. Generally, though, desktop and mobile are the two you’ll use.

Desktop browser is the default Googlebot setting. Within this category, you can choose subcategories, including the standard Googlebot crawler or specialty crawlers for news, images, videos, AdSense, and AdsBot. The one you select will depend on your site and the content of the pages you’re hoping to index.
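
If you’re curious which Googlebot variants already visit your site, your server’s access log can tell you. Here’s a small sketch, assuming a standard combined log format where the user agent is the last quoted field (the file name and format are assumptions):

    from collections import Counter

    # Tally user agents containing "Googlebot" from an access log.
    counts = Counter()
    with open("access.log") as log:
        for line in log:
            parts = line.rsplit('"', 2)
            if len(parts) == 3 and "Googlebot" in parts[1]:
                counts[parts[1]] += 1

    for agent, hits in counts.most_common():
        print(hits, agent)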

Fetch vs. Fetch and Render

You’re almost ready to make fetch happen. Now you have to decide if you want a simple fetch or a fetch and render. Think of the standard fetch tool as a quick once-over.

Let’s say you’re pretty sure your site and your pages are fine. You’re just exercising due diligence so you don’t waste any time once you’ve posted content. In that case, the fetch command will probably suffice.

The basic fetch crawl should give you information about connectivity. It will report whether the simulated crawl runs into any errors, redirects, or security problems, along with any minor fixes that need to happen before you post content. Addressing these ensures that Google can quickly and easily index your page.
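
You can anticipate some of these findings with a quick check of your own. Here’s a minimal sketch, assuming the requests library and a placeholder URL, that surfaces the same classes of problems the fetch reports:

    import requests

    try:
        # A healthy page should answer 200; 3xx means a redirect,
        # 4xx/5xx means an error the crawl would also hit.
        response = requests.get("https://example.com/new-post",
                                timeout=10, allow_redirects=False)
        print("Status:", response.status_code)
    except requests.exceptions.SSLError:
        print("Certificate problem; fix this security issue first.")
    except requests.exceptions.ConnectionError:
        print("Connectivity problem; the crawl would likely fail too.")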

The fetch and render process is a bit more complex. This tool views your web page not only as a web crawler would, but as a browser would. That means it looks at more than just the underlying code that makes up the page: it also renders the data into a visible representation, essentially what a visitor would see upon opening the page in a web browser.

Rendering provides additional information about how your page loads, which can be valuable for indexing, and it shows you the same experience your page visitors get.
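
One way to tell whether rendering matters for your page is to check whether text you can see in a browser actually appears in the raw HTML. A rough sketch with the requests library (the URL and phrase are placeholders):

    import requests

    raw_html = requests.get("https://example.com/new-post", timeout=10).text
    phrase = "Subscribe to our newsletter"  # text visible in the browser

    if phrase in raw_html:
        print("Present in raw HTML; a plain fetch sees it.")
    else:
        print("Missing from raw HTML; it likely appears only after "
              "JavaScript runs, so review the rendered view closely.")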

Fetch History

Once you’ve submitted your fetch request, it should populate in the fetch history table. The request may take some time to complete, showing a “pending” status. Once it is complete you can see which elements succeeded and failed.

You’ll be able to gather additional information about the portions of the web crawl that succeeded; the table will show either a “complete” or “partial” status. Plus, you’ll have a roadmap of which portions failed so that you can fix them.

If you selected the fetch and render option, you can also see the rendering of your page and which elements, if any, failed to render. The table keeps your last hundred fetch requests.

From there, assuming your fetch delivered successful results all around, you have the option to submit the page to Google for actual indexing. This is a great way to check and index pages quickly if you only have a handful to submit.

Fetch Limits

Unfortunately, you don’t get unlimited fetches: the tool allows up to 500 per week. That works out to roughly 70 fetches a day, though, so the average website will never hit the limit.

Unless you’re running a busy news site, chances are you’ll never even see the warning that your limit is approaching, much less exceed it.

You can do this!
