Technical SEO Checkup Method

  • Make Sure You’re Using HTTPS
  • Check for Duplicate Versions of Your Site in Google’s Index
  • Find and Fix Crawl Errors
  • Set Up Google Search Console
  • Improve Your Site Speed
  • Fix Broken Internal and Outbound Links
  • Make Sure Your Website Is Mobile-Friendly
  • Use an SEO-Friendly URL Structure
  • Add Structured Data / Schema Markup
  • Check Your Site’s Crawl Depth
  • Check for Temporary 302 Redirects
  • Set Up Google Analytics
  • Use Canonical Tags

What is HTTP ?

Hypertext Transfer Protocol – An application layer protocol that transfers data over the Internet, enabling the exchange of information between a web server and a web browser.

What is HTTPS ?

HTTPS stands for Hypertext Transfer Protocol Secure. It is the secure version of the HTTP protocol, used for communication between a web browser and a website. HTTPS is designed to provide secure communication by encrypting the data exchanged between browsers and websites. This encryption ensures that transmitted information, such as login credentials, credit card details, and other sensitive data, is protected from interception and tampering by unauthorized parties.

HTTPS uses the SSL (Secure Sockets Layer) or TLS (Transport Layer Security) protocols to establish a secure connection. When a user accesses a website that uses HTTPS, the browser initiates a process called the SSL/TLS handshake, in which the site’s certificate is verified and the encryption keys for the session are negotiated.
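
To tie this back to the first checklist item, here is a minimal sketch (an assumed example, not taken from any tool mentioned above) that uses Python’s requests library to check whether plain-HTTP requests are redirected to HTTPS; the domain name is a placeholder.

```python
# Minimal sketch: verify that http:// requests are redirected to https://.
# The domain below is a placeholder.
import requests

def check_https_redirect(domain: str) -> None:
    # Follow redirects and inspect the final URL the server sends us to.
    response = requests.get(f"http://{domain}", allow_redirects=True, timeout=10)
    if response.url.startswith("https://"):
        print(f"OK: http://{domain} redirects to {response.url}")
    else:
        print(f"Warning: {domain} is still served over plain HTTP ({response.url})")

if __name__ == "__main__":
    check_https_redirect("example.com")
```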

What is Indexing ?

Indexing is the process by which a search engine stores and organizes the pages it has crawled in its database, so that they can appear in search results.

What is Crawling ?

Crawling is the process by which a search engine’s bots discover web pages by following links and fetch their content, so that it can be evaluated and indexed.

What is Crawler ?

A crawler, also known as a web crawler, is a program or automated script used to systematically browse and index the content of websites. It is an essential component of search engines like Google, Bing, and Yahoo. The main purpose of a crawler is to discover and collect information from web pages, including URLs, text content, images, and other relevant data.

Crawlers work by starting from a list of seed URLs and following links from one web page to another. They visit web pages, download their content, and extract relevant information such as keywords, titles, and meta tags. This information is indexed by search engines to facilitate quick and accurate retrieval of information in response to user queries.

Web crawlers generally follow a set of rules called the robots.txt protocol, which webmasters can use to control which parts of their websites are accessible to crawlers. This protocol helps ensure that crawlers respect the preferences of website owners and do not access restricted or sensitive content.

Crawlers have a variety of applications beyond search engines, including website monitoring, data mining, and content aggregation. They are often used to collect information for research purposes, create website backups, or support automated processes that rely on web data.
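
The sketch below is a deliberately simplified, assumed example of how such a crawler works: it starts from a placeholder seed URL, downloads pages, extracts links with Python’s standard-library HTML parser, and visits each same-site URL once.

```python
# A simplified, assumed crawler sketch: start from a seed URL, download pages,
# extract links, and visit each same-site URL once. The seed URL is a placeholder.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href values of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url: str, max_pages: int = 10) -> None:
    queue = deque([seed_url])   # URLs waiting to be visited
    seen = {seed_url}           # URLs already discovered
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except Exception as error:
            print(f"Failed to fetch {url}: {error}")
            continue
        print(f"Crawled {url} ({len(html)} bytes)")
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # Stay on the same host and avoid revisiting pages.
            if urlparse(absolute).netloc == urlparse(seed_url).netloc and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

if __name__ == "__main__":
    crawl("https://example.com/")
```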

When Does a Crawler Visit a Website ?

  1. When a new website is launched
  2. After existing content is updated

When Google’s crawler collects new or updated information from a new or existing website and stores it in Google’s index database, the indexing system that holds this data is known as Caffeine.

What is Robots.txt ?

Robots.txt is the file that tells search engines which files or sections of a website they may crawl and index and which they may not.

The robots.txt file is a text file that webmasters create to instruct web robots (also known as crawlers or spiders) how to interact with their website. It is usually placed in the root directory of a website. The main purpose of the robots.txt file is to tell search engine crawlers which pages or directories they are allowed to crawl and index, and it helps control the access of web robots to different parts of a website. By specifying rules in the robots.txt file, webmasters can prevent certain pages from being crawled or indexed, or they can allow access to certain directories while disallowing others.

The syntax of the robots.txt file is relatively simple. Each rule consists of two parts: a user-agent and a directive. The user-agent specifies which web robot the rule applies to, and the directive indicates the action that should be taken. The most common directives are “Disallow” and “Allow”.
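
As an assumed example of the syntax described above, the sketch below parses a sample robots.txt with Python’s standard-library robotparser and checks whether a given crawler may fetch a given URL; the rules and URLs are placeholders.

```python
# Assumed example: parse a sample robots.txt and check which URLs a crawler
# may fetch. The rules and URLs below are placeholders.
from urllib.robotparser import RobotFileParser

SAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Allow: /admin/public/

User-agent: Googlebot
Disallow: /drafts/
"""

parser = RobotFileParser()
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

for agent, url in [
    ("Googlebot", "https://example.com/drafts/post.html"),
    ("Googlebot", "https://example.com/blog/post.html"),
    ("*", "https://example.com/admin/public/help.html"),
]:
    allowed = parser.can_fetch(agent, url)
    print(f"{agent} may fetch {url}: {allowed}")
```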

Common reasons a website may not be indexed or ranked include:

  1. No links at all (internal or external)
  2. No navigation, header, or clear structural location
  3. Crawlers blocked by a simple rule in your robots.txt
  4. You’ve been penalized by Google in the past and haven’t cleaned up your act yet

If a website publishes content that is unrelated to its niche, or content associated with scam sites, Google will penalize it.

What is a Canonical Tag ?

The tag we use to prevent a web page, web URL, or web content from being treated as duplicate content is called the canonical tag. It tells search engines which URL is the preferred version of a page.
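
For illustration, here is a minimal, assumed sketch that extracts the canonical URL from a page’s HTML so you can check which version of the page it points to; the sample HTML is a placeholder.

```python
# Assumed example: extract the canonical URL declared in a page's <head>.
# The sample HTML below is a placeholder.
from html.parser import HTMLParser

SAMPLE_HTML = """
<html>
  <head>
    <title>Example product page</title>
    <link rel="canonical" href="https://example.com/products/blue-widget/">
  </head>
  <body>Blue widget</body>
</html>
"""

class CanonicalExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "link" and attributes.get("rel") == "canonical":
            self.canonical = attributes.get("href")

parser = CanonicalExtractor()
parser.feed(SAMPLE_HTML)
print("Canonical URL:", parser.canonical)
```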

What is Structured Data / Schema Markup ?

Structured data, or schema markup, is a way of describing the specific data of a website or page in a machine-readable format so that search engines can display it separately in search results.

In more detail:

Schema markup, also known as structured data, is a code that you can add to your website to provide search engines with more information about the content on your pages. This markup helps search engines understand the context and meaning of your content, which can enhance the display of search results.

Schema markup uses a specific vocabulary of tags (such as Schema.org microdata or JSON-LD) that you add to the HTML of your web pages. These tags define different types of information such as events, reviews, products, organizations, and more. By implementing schema markup, you make it easier for search engines to interpret and present your content in a more structured and informative manner.

For example, if you have an article, you can use schema markup to specify details such as the author, publication date, and the article’s main topic. This can result in rich snippets in search results, providing users with additional information before they click on a link.

Schema markup is supported by major search engines like Google, Bing, Yahoo, and Yandex, and it can positively impact your website’s visibility and click-through rates in search engine results pages (SERPs). It’s a valuable tool for improving the way search engines understand and present your content to users.
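
As an assumed illustration (the article details are placeholders), the sketch below builds Article schema markup as JSON-LD and wraps it in the script tag that would go in the page’s head.

```python
# Assumed example: build Article schema markup as JSON-LD.
# All article details below are placeholders.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Technical SEO Checkup Method",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2024-01-15",
    "about": "Technical SEO",
}

json_ld = json.dumps(article_schema, indent=2)
print(f'<script type="application/ld+json">\n{json_ld}\n</script>')
```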

What is Google Search Console (GSC) ?

Google Search Console (formerly known as Google Webmaster Tools) is a free web service provided by Google that allows website owners and webmasters to monitor and manage their site’s presence in Google Search results. It provides a variety of tools and reports that help website owners understand how Google’s search engine interacts with their site.

Key features and functionalities of Google Search Console include:

  1. Performance Reports: Provides insights into the performance of your website in Google Search. It includes information on search queries that lead users to your site, the pages that are most popular in search results, and the countries where your site is most frequently seen.
  2. Index Coverage Report: Helps you understand which pages of your site are indexed by Google and if there are any issues preventing certain pages from being indexed.
  3. URL Inspection Tool: Allows you to check how a specific URL is indexed and provides information about any indexing issues or errors.
  4. Sitemap Submission: Enables you to submit XML sitemaps, which helps Google understand the structure and organization of your website (see the sitemap sketch after this list).
  5. Mobile Usability Report: Highlights issues related to the mobile-friendliness of your site, helping you ensure a good user experience for visitors using mobile devices.
  6. Security Issues Alerts: Notifies you if Google detects any security issues, such as malware or hacked content, on your website.
  7. Structured Data and Rich Results Reports: Shows information about the structured data on your site and how it may be used to generate rich results (enhanced snippets) in search engine results.
  8. Links to Your Site: Provides information about external websites linking to your site. This data can help you understand your site’s backlink profile.
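
As referenced in item 4, here is a minimal, assumed sketch that generates a basic XML sitemap with Python’s standard library; the page URLs and dates are placeholders, and the resulting sitemap.xml is what you would submit in Google Search Console.

```python
# Assumed example: generate a basic XML sitemap. The URLs and dates are
# placeholders; the output file would be submitted in Google Search Console.
import xml.etree.ElementTree as ET

PAGES = [
    ("https://example.com/", "2024-01-15"),
    ("https://example.com/blog/technical-seo-checkup/", "2024-02-01"),
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for loc, lastmod in PAGES:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "lastmod").text = lastmod

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
print(ET.tostring(urlset, encoding="unicode"))
```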

What is Redirection ?

Redirection refers to the process of forwarding one URL to another. When a web page or website undergoes changes, such as a change in the URL structure, domain name, or when content is moved to a new location, redirection is commonly used to ensure that visitors and search engines are directed to the correct, updated page.

There are different types of redirects, and each serves a specific purpose:

  1. 301 Redirect (Permanent Redirect): This is a permanent redirect that informs search engines that the original URL has permanently moved to a new location. It is recommended for SEO purposes when you want the new page to inherit the ranking and authority of the old page.
  2. 302 Redirect (Temporary Redirect): This is a temporary redirect, indicating that the move is only temporary. Search engines may not transfer the ranking and authority to the new URL with a 302 redirect.
  3. 303 Redirect (See Other): Tells the browser that the response can be found at another URL and should be retrieved with a GET request; it is typically used after a form submission rather than for moving content.
  4. 307 Redirect (Temporary Redirect): Similar to a 302 redirect, this indicates a temporary move. It was introduced in HTTP/1.1 and, unlike 302, guarantees that the request method (for example, POST) is not changed when the redirect is followed.
  5. Meta Refresh: This is a method that uses HTML meta tags to instruct the browser to automatically refresh to another URL after a specified time. While it can be a form of redirection, it’s not as commonly used as server-side redirects.
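
To check redirects in practice, the assumed sketch below follows a URL’s redirect chain with the requests library and reports the status code of each hop, which makes it easy to spot temporary 302 redirects that should probably be permanent 301s; the URL is a placeholder.

```python
# Assumed example: follow a URL's redirect chain and report each hop's status
# code, to spot 302 redirects that should be 301s. The URL is a placeholder.
import requests

def report_redirect_chain(url: str) -> None:
    response = requests.get(url, allow_redirects=True, timeout=10)
    for hop in response.history:
        label = "permanent" if hop.status_code == 301 else "temporary/other"
        print(f"{hop.status_code} ({label}): {hop.url} -> {hop.headers.get('Location')}")
    print(f"Final destination: {response.status_code} {response.url}")

if __name__ == "__main__":
    report_redirect_chain("http://example.com/old-page")
```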

To summarize, a technical SEO checkup covers:

  1. Crawling (robots.txt, sitemap)
  2. Indexing (GSC and GA set up, internal and external links correct, no duplicate content thanks to canonical tags, a clean website structure)
  3. Schema / structured data (additional data); using a plugin is usually the easiest way to add it
  4. Canonical tags (usually set up automatically)
  5. Broken internal or external links: check and fix (for pages and posts)
  6. 404 errors fixed (for the main website)
  7. Page speed (see the timing sketch below)
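
As a rough, assumed illustration of the last item, the sketch below times how long a page takes to download; a real audit would rely on tools such as Google PageSpeed Insights, and the URL here is a placeholder.

```python
# Assumed example: time a page download as a rough page-speed check.
# Real audits would use tools such as Google PageSpeed Insights.
import time
import requests

def time_page_download(url: str) -> None:
    start = time.perf_counter()
    response = requests.get(url, timeout=30)
    elapsed = time.perf_counter() - start
    size_kb = len(response.content) / 1024
    print(f"{url}: {response.status_code}, {size_kb:.1f} KB downloaded in {elapsed:.2f} s")

if __name__ == "__main__":
    time_page_download("https://example.com/")
```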
