
WHAT IS TECHNICAL SEO? WHY IS IT IMPORTANT?

Technical SEO is the process of optimizing your website to help search engines find, understand, and index your pages. For beginners, technical SEO doesn’t need to be all that technical. And for that reason, this article will be focused on the basics so you can perform regular maintenance on your site and ensure that your pages can be discovered and indexed by search engines.

IMPORTANCE OF TECHNICAL SEO

Basically, if search engines can’t properly access, read, understand, or index your pages, you won’t rank or even be found. So I want to discuss five things that should help you avoid innocent mistakes like removing yourself from Google’s index or diluting a page’s backlinks. They are:

  1. Noindex Meta Tag
  2. Robots.txt
  3. Sitemaps
  4. Redirect
  5. Canonical tag

 

  • Noindex Meta Tag

By adding a noindex meta tag [<meta name="robots" content="noindex">] to your page, you are telling search engines not to add it to their index. And you probably don’t want to do that. This actually happens more often than you might think. For example, let’s say you hire bloxvaresources.com to create or redesign a website for you. During the development phase, they may build it on a subdomain of their own site.

So it makes sense for them to noindex the site they are working on. But what often happens is that after you have approved the design, they migrate it over to your domain and forget to remove the noindex meta tag. As a result, your pages end up getting removed from Google’s search index or never making it in.

Now there are times when it actually makes sense to noindex certain pages. For example, if you feel that your blog’s author pages provide very little value from an SEO perspective, then you can leave them noindexed. Although from a user experience standpoint, it can be argued that they should stay, because some people have favorite authors on a blog and want to read only their content.

Generally speaking, you won’t need to worry about noindexing specific pages for small sites. Just keep your eye out for noindex tags on your pages, especially after a redesign.
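
To make this concrete, here is a minimal sketch of what the <head> of a noindexed page might look like (the page title is just a placeholder):

  <!DOCTYPE html>
  <html>
  <head>
    <title>Staging page</title>
    <!-- Tells all search engine crawlers not to add this page to their index -->
    <meta name="robots" content="noindex">
  </head>
  <body>
    ...
  </body>
  </html>

If you spot this tag on a live page that should be ranking, removing it (and requesting reindexing in Google Search Console) is usually the fix.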

 

  • Robots.txt

This is a file that usually lives on your root domain, and you should be able to access it at yourdomain.com/robots.txt. The file itself includes a set of rules for search engine crawlers and tells them where they can and cannot go on your site. It’s also important to note that a website can have multiple robots.txt files if you are using subdomains.

For example, if you have a blog on recipes.com, then you would have a robots.txt file for just the root domain. But you might also have an e-commerce store that lives on store.recipes.com, so you can have a separate robots.txt file for your online store.

That means that crawlers could be given two different sets of rules depending on the domain they are trying to crawl. Now the rules are created using something called “directives.” And while you probably don’t need to know what all of them are or what they do, there are two that you should know about from an indexing standpoint.

  1. User-agent: This defines the crawler that the rule applies to. And the value for this directive would be the name of the crawler.
  2. Disallow: This is a page or directory on your domain that you don’t want the user agent to crawl. While this might sound like something you would never use, there are times when it makes sense to block certain parts of your site or to block specific crawlers.

For example, if you have a WordPress site and you don’t want your wp-admin folder to be crawled, you can set the user agent to * (which matches all crawlers) and set the disallow value to /wp-admin/, as in the sketch below. Now, if you are a beginner, I wouldn’t worry too much about your robots.txt file. But if you run into any indexing issues that need troubleshooting, robots.txt is one of the first places I would check.
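
Here is a minimal sketch of what that robots.txt file could look like:

  # The rule applies to all crawlers
  User-agent: *
  # Don't crawl anything under the WordPress admin area
  Disallow: /wp-admin/

The file would live at yourdomain.com/robots.txt, and any crawler that respects the standard will skip the disallowed directory.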

  • Sitemaps

These are usually XML files, and they list the important URLs on your website. These can be pages, images, videos, and other files. Sitemaps help search engines like Google crawl your site more intelligently. Creating an XML sitemap can be complicated if you don’t know how to code, and it’s almost impossible to maintain manually. But if you are using a CMS like WordPress, plugins like Yoast and Rank Math will automatically generate sitemaps for you. To help search engines find your sitemap, you can use the Sitemap directive in your robots.txt file and also submit it in Google Search Console.
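
For reference, here is a minimal sketch of an XML sitemap with a single URL (the domain, path, and date are placeholders), followed by the robots.txt line that points crawlers to it:

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>https://yourdomain.com/best-chocolate-recipes</loc>
      <lastmod>2024-01-15</lastmod>
    </url>
  </urlset>

And in robots.txt:

  Sitemap: https://yourdomain.com/sitemap.xml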

  • Redirect

A redirect takes visitors and bots from one URL to another. Its purpose is to consolidate signals. For example, let’s say you have two pages on your website about the best chocolate recipes: one at domain.com/top-10-best-chocolate-recipes [redirected URL] and another at domain.com/best-chocolate-recipes [destination URL].

Seeing as these are highly relevant to one another, it would make sense to redirect the top-10 version to the current version. By consolidating these pages, you are telling search engines to pass the signals from the redirected URL to the destination URL.
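
How you set up a redirect depends on your platform, and WordPress plugins can handle it for you. As one hedged example, on an Apache server you could add a permanent (301) redirect to your .htaccess file like this (the paths are placeholders from the example above):

  # Permanently redirect the old top-10 page to the current page
  Redirect 301 /top-10-best-chocolate-recipes https://domain.com/best-chocolate-recipes

A 301 signals that the move is permanent, which is what tells search engines to pass signals to the destination URL.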

  • Canonical tag

A canonical tag is a snippet of HTML code that looks like this: <link rel="canonical" href="https://yourdomain.com/slug" />. Its purpose is to tell search engines what the preferred URL is for a page, and this helps to solve duplicate content issues. For example, let’s say your website is accessible at both http://yourdomain.com and https://yourdomain.com, and for whatever reason, you weren’t able to use a redirect.

These will be exact duplicates, but by setting a canonical URL, you are telling search engines that there is a preferred version of the page. As a result, they will pass signals such as links to the canonical URL, so they are not diluted across two different pages.
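
To illustrate, here is a minimal sketch of the <head> markup that both versions of the page could carry, pointing search engines to the HTTPS version as the preferred URL (the domain is a placeholder):

  <head>
    <title>Your Page</title>
    <!-- Declares the secure HTTPS version as the preferred (canonical) URL -->
    <link rel="canonical" href="https://yourdomain.com/" />
  </head>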

It is important to note that Google may choose to ignore your canonical tag. For example, if we set the canonical tag of a website to the insecure HTTP page, Google would probably choose the secure HTTPS version instead. If you are running a simple WordPress site, you shouldn’t have to worry about this too much. CMSs are pretty good out of the box and will handle a lot of these basic technical issues for you.

So these are some of the foundational things that are good to know when it comes to indexing, which is arguably the most important part of SEO. Because if your pages aren’t getting indexed, nothing else really matters. You don’t need to start worrying about indexing issues if you are not having any problems with your site. Instead, focus on technical SEO best practices to keep your website in good health.

Emeka Okorie

Emeka is a microbiologist, a content creator and an affiliate marketer. His marketing style centres around SEO traffic and list building.
