What is JavaScript SEO?
JavaScript SEO is a branch of technical SEO that makes JavaScript-reliant websites easy for search engines to crawl, render, and index, as well as user-friendly for searchers. The aim is for these sites to be found and to rank higher in search results.
Is JavaScript bad for SEO; is JavaScript evil? Not at all. It simply differs from what many SEOs are used to, and there's a bit of a learning curve. People often reach for JavaScript even when a more appropriate solution exists, but sometimes you have to work with what you've got. Just know that JavaScript isn't perfect and it isn't always the right tool for the job. Unlike HTML and CSS, JavaScript can't be parsed progressively, and it can hurt page load time and performance. In many cases, you may be trading performance for functionality.
How Google processes pages with JavaScript
In the early days of search engines, downloading the HTML response was enough to see the content of most pages. Thanks to the rise of JavaScript, search engines now have to render many pages as a browser would so they can see the content the way a user does.
At Google, the Web Rendering Service (WRS) is the system responsible for rendering. Google has published a simple diagram to illustrate how the process works.
1. Crawler
The crawler sends GET requests to the server. The server responds with the headers and contents of the files, which are then stored.
The request will most likely come from a mobile user agent, since Google has largely moved to mobile-first indexing. You can use the URL Inspection Tool in Search Console to check how Google is crawling your site. When you run it for a URL, look at the Coverage information to see whether you're still on desktop indexing or have been switched to mobile-first indexing.
2. Processing
Resources and Links
Google doesn't navigate from page to page the way a user does. Part of Processing is checking the page for links to other pages and for the files needed to build the page. The links Google finds are extracted and added to the crawl queue, which is how Google prioritizes and schedules which pages to crawl next.
Google picks up the resources (stylesheets, JavaScript, etc.) needed to build the page from tags such as <link> and <script>. For Google to register links to other pages, they have to follow a specific format: internal and outbound links alike must be wrapped in an <a> tag that contains an href attribute. There are plenty of ways to build navigation that works for users but isn't search-friendly.
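To make the link rule concrete, here's an illustrative check (this is not Google's actual parser, just a sketch of the rule it applies): only an <a> element carrying an href attribute counts as a crawlable link, while onclick-driven navigation does not.

```javascript
// Sketch of the rule: a crawlable link is an <a> tag with an href attribute.
// Button- or onclick-based "links" are invisible to the crawler.
function isCrawlableLink(html) {
  return /<a\b[^>]*\bhref\s*=/i.test(html);
}

// Crawlable: a real anchor with an href.
isCrawlableLink('<a href="/products">Products</a>');              // true
// Not crawlable: navigation wired up purely through JavaScript.
isCrawlableLink('<span onclick="goTo(\'/products\')">Products</span>'); // false
isCrawlableLink('<a onclick="goTo(\'/products\')">Products</a>');       // false
```

The goTo handler here is hypothetical; the point is simply that without an href, Google never discovers the destination URL.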
Caching
Google caches everything it downloads aggressively, whether HTML pages, JavaScript files, CSS files, or anything else. Google won't honor your cache timings; it downloads a fresh copy whenever it decides one is needed. I'll go into more detail on why this matters in the Renderer section.
Duplicate elimination
The HTML that comes back may have duplicate content removed or deprioritized before it's sent on for rendering. With app shell models, very little content or code is present in the HTML response; in fact, every page on the site may ship the same code, and that code may be identical to what appears on many other websites. This can sometimes cause pages to be treated as duplicates and delay them from being indexed. The search results may even show the wrong page, or an entirely different website. This usually resolves itself over time, but it can cause problems, especially for newer websites.
3. Render queue
Every page now goes to the renderer. One of SEOs' biggest worries about JavaScript and two-stage indexing (HTML first, then the rendered page) was that pages might not get rendered for days or even weeks. When Google investigated, they found that pages typically reached the renderer within 5 seconds, with the slowest 10% taking up to several minutes. In most cases, the gap between fetching the HTML and rendering the page is nothing to worry about.
4. Renderer
Google uses the renderer to see what a user sees when they load the page. This is where it processes the JavaScript and any changes JavaScript makes to the Document Object Model (DOM).
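A minimal sketch of why this rendering step matters for client-side-rendered pages: the raw HTML response is often just an empty shell, and the content only exists after JavaScript mutates the DOM. The page and product names below are purely illustrative.

```javascript
// What the crawler's GET request returns: an empty app shell.
const rawHtml = '<div id="app"></div>';

// What the page's own JavaScript would do in the browser. (In a real
// browser this would mutate document.getElementById("app"); here we
// model it as a string transform so the example is self-contained.)
function clientSideRender(shell, product) {
  return shell.replace(
    '<div id="app"></div>',
    `<div id="app"><h1>${product.title}</h1><p>${product.description}</p></div>`
  );
}

const renderedHtml = clientSideRender(rawHtml, {
  title: "Blue Widget",
  description: "Our best-selling widget.",
});
// Only renderedHtml contains the content Google needs to index,
// which is exactly why the WRS has to execute JavaScript at all.
```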
5. Crawl queue
Google publishes some information on crawl budget, but the short version is that every website has one, and Google has to decide which requests matter most. It also has to balance the time spent crawling your website against every other website on the internet. Newer sites, or sites with a lot of dynamic pages, will likely be crawled more slowly. Some pages are updated less often than others, and some resources may be requested less frequently too.
SEO JavaScript Auditing 101: How to Identify JavaScript SEO Errors
JavaScript is often implemented in ways that make it hard for search engines to do their job. That's the exact opposite of what technical SEO work is for. We want to make valuable content easy to access, so anything standing in the way of that is a problem.
To keep your website healthy, you need to know how to find, diagnose, and fix any JavaScript mistakes or issues. Let's walk through that process now.
Step 1 – Preparation: Identify Your Tech Stack
Before you can analyze how JavaScript is performing, you need a clear picture of the technology behind the website: the mix of programming languages, frameworks, and tools used to build the site or application.
We recommend a Chrome extension called Wappalyzer. It will tell you whether the site uses React, Angular, or another technology that might affect how easily search engines can access or render it.
Knowing your tech stack helps you scope how much audit work is actually needed.
Wappalyzer identifies the components of your site's tech stack, such as the CMS, frameworks, and programming languages in use. If you don't recognize something it reports, a quick Google search is the best next step.
If Wappalyzer reports a JS library or framework, you can infer you're dealing with a JavaScript-driven website. Where it's neither a JavaScript framework site nor a SPA, you can still inspect individual interactive elements visually using the bookmarklet.
Sitebulb's Website Auditing Tool is also a fine choice for identifying the tech stack, possibly as part of your usual SEO assessment procedure.
Step 2 – Site Audit: Macro JS SEO Issues
Now that you've confirmed JavaScript is in use on your website (true of almost every modern website), the next step is to look for common problems that can be fixed globally.
Begin by crawling the site with JavaScript rendering enabled. We like Screaming Frog, Ahrefs, and Sitebulb for this. Be sure to consult each tool's documentation on how to turn on JavaScript execution (really, they offer much better guidance than we can on how to do it).
Typically, these crawl reports will show you more immediate issues, such as:
- 404 status codes not being served from your SPA, meaning Google just sees lots of soft 404s and very thin, unoriginal content
- redirects being handled at the page level instead of at the server request (i.e. JS redirects and meta refresh redirects), so if you were counting on redirects to pass page equity, hah, kiss that goodbye
- metadata missing and/or duplicated multiple times
- canonicalization is nowhere to be seen
- the robots.txt file blocks JS, image or CSS files
- orphan pages, or pages with few internal links, because internal linking doesn't use proper <a> href tags
The aim of this initial audit is to catch major problems that can be fixed globally. You may also spot patterns that look problematic, which you can dig into more thoroughly in the next step.
Step 3 – Page Audit: Micro JS SEO Issues
The goal of this step is to investigate and review issues at the page level by inspecting pages visually. Start by identifying the most important pages to check first: the most-viewed pages, plus one sample page for each main template design.
For instance, your homepage, a few category pages, product detail pages (if you run an eCommerce site), your contact page, a blog post, and any other pages built on key layouts are the most important pages to review (normally about 3-8 pages, depending on the size of the site). Be sure to include pages with a lot of interactive functionality, as these are usually where technical problems develop. The point is to understand how each layout/content type and its code behaves with and without JavaScript, although some overlap between templates is normal.
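A quick way to do that with-vs-without-JavaScript comparison is to diff the text of the raw HTML response against the text of the rendered DOM: anything that only appears after rendering depends on JavaScript to get indexed. A rough, regex-based sketch (not from any SEO tool; a real audit would use a proper parser):

```javascript
// Collect the visible words of an HTML string, ignoring tags and scripts.
function visibleWords(html) {
  return new Set(
    html.replace(/<script[\s\S]*?<\/script>/gi, "")
        .replace(/<[^>]+>/g, " ")
        .toLowerCase()
        .split(/\W+/)
        .filter(Boolean)
  );
}

// Words present only after JavaScript has run.
function jsOnlyWords(rawHtml, renderedHtml) {
  const raw = visibleWords(rawHtml);
  return [...visibleWords(renderedHtml)].filter((w) => !raw.has(w));
}

jsOnlyWords(
  '<div id="app"></div>',
  '<div id="app"><h1>Blue Widget</h1></div>'
); // → ["blue", "widget"]
```

In practice you'd feed it the server response (View Source) and the rendered markup (DevTools "Copy outerHTML") for each template you audit.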
Step 4 – (Optional) Troubleshoot JS SEO Issues
As you audit for JS SEO problems, you'll uncover a number of issues that need to be documented and fixed. You may need to communicate your findings to a development team, which means giving them detailed information on the why, when, and how of each issue.
This step gives you the specific techniques and methods to accurately identify and explain the JavaScript issues affecting your SEO. Depending on your engineering team, it may not be necessary.
Step 5 – Prepare Issue Insights for Engineering Handoff
Prioritize problems by how important the page type is and how critical the missing functionality is. For example, the occasional JavaScript error isn't nearly as damaging as search bots being unable to reach product links from the category pages of an eCommerce site (since surfacing products is one of a category page's main jobs, and without those links it's hard for Google to discover, or pass PageRank to, the products you need to sell). Use the JS SEO Problem Level listed above to gauge how serious each JS error really is.
This is a two-part system: first decide what to prioritize, then decide what information to hand to the engineering team. Unless you're fixing issues yourself, the essential next step in your JavaScript site audit is communicating what's broken, where it's broken, and what impact it has on the business. Grouping issues by template makes it easier for developers to fix problems on the pivotal templates/layouts and the highest-performing URLs.
Step 6 – Stage QA, Go-Live, Live QA and Performance Monitoring
This step is all about making sure the fixes get implemented, tested, and followed through to completion.
During initial QA testing, keep in mind that extra resources may be needed to verify a fix. Tools like the Mobile-Friendly Test can only be used on publicly accessible pages (in other words, you can't run it on staging URLs that aren't publicly reachable).
QA for Content Changes
Any changes to important SEO content must be reviewed for quality. Confirm that the new material is still present and meets (or exceeds) your expectations when you inspect it.
It sounds obvious, but confirm all of the following are in order: page titles and meta descriptions, hreflang and internationalization setup, on-page text, internal links, images, videos, and any other file types.
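For the title/description/canonical part of that checklist, a rough helper like the one below can pull the key elements out of both the raw response and the rendered markup so you can compare them side by side. It's regex-based and illustrative only; a real QA script should use a proper HTML parser.

```javascript
// Extract the key on-page SEO elements from an HTML string for QA comparison.
function extractSeoElements(html) {
  const get = (re) => (html.match(re) || [null, null])[1];
  return {
    title: get(/<title[^>]*>([\s\S]*?)<\/title>/i),
    metaDescription: get(/<meta\s+name=["']description["']\s+content=["']([^"']*)["']/i),
    canonical: get(/<link\s+rel=["']canonical["']\s+href=["']([^"']*)["']/i),
  };
}

extractSeoElements(
  '<title>Blue Widget</title>' +
  '<meta name="description" content="Our best widget.">' +
  '<link rel="canonical" href="https://example.com/blue-widget">'
);
// → { title: "Blue Widget", metaDescription: "Our best widget.",
//     canonical: "https://example.com/blue-widget" }
```

Run it against the raw HTML and the rendered DOM for each key template; any field that's null in one but not the other is a JS-dependency worth flagging.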
Performance Monitoring
Just as vital as fixing issues is making sure they don't recur. We suggest a few tools and steps to assist with ongoing quality assurance and performance monitoring.
- Post-Release Crawl – Your primary line of defense to verify that fixes are indeed in place
- Automated Weekly Crawling – Use SEO tools like Moz, Ahrefs, Deepcrawl, SEMRush or your auditing crawler of choice to automatically notify you of new issues that arise on a weekly basis. Then pay attention to these emails and make fixes as needed.
Post-fix QA and ongoing performance monitoring aren't especially difficult; the key is having the right tools and knowing where to look. Automation helps, but re-running your original auditing process is the safest way to manually verify any top-priority fixes.