
Webinar Recording: Real-World JavaScript SEO Problems

Published 2024-03-28

When auditing websites for technical SEO, it’s increasingly important to understand how JavaScript impacts the content that Googlebot can access for indexing, and perhaps more importantly, the content it can’t access…

Sitebulb’s Patrick Hathaway was joined by an expert panel of Aleyda Solis, Sam Torres and Arnout Hellemans to discuss all things JavaScript.

Interested in learning about JavaScript SEO? Register for our free on-demand training course.


Watch the JavaScript SEO webinar recording

Here's the webinar recording to watch at your leisure.

Don't forget to subscribe to our YouTube channel to get more videos like this!

Read the webinar transcript

[Patrick intro]

I would like to start with Google. So if you read their JavaScript SEO documentation, it essentially says that since every page they want to index gets rendered, it's no problem to use JavaScript to set or update page content, links, titles, meta descriptions, H1s, etc.
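To illustrate the kind of client-side update that documentation is describing, here is a minimal sketch; the values are placeholders:

```javascript
// Minimal sketch: client-side JavaScript setting page metadata after load.
// This is the pattern Google's docs say is fine once the page gets rendered.
document.title = 'Running Shoes | Example Store';

let desc = document.querySelector('meta[name="description"]');
if (!desc) {
  desc = document.createElement('meta');
  desc.name = 'description';
  document.head.appendChild(desc);
}
desc.content = 'Browse our full range of running shoes.';
```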

So I want to know, does this differ from your experience when dealing with JavaScript on client sites? 

Aleyda: Yes, yes indeed. I believe one thing we can all agree on is the theory of what they can ideally do. But of course, we know that a lot of the websites we end up optimizing are very large.

So yes, they can render JavaScript, but up to what point, when they need to go through millions of URLs? What happens when the JavaScript takes so long to load? And yes, they can render very fast now, but some client-side JavaScript has a lot to render, right? So there's this layer of complexity that, we can all agree, often ends up going badly. Only in very, very few cases have I found that everything is indexed as expected. And let's not even go into the scenarios in which what is rendered is very different from what is seen in the raw HTML.

And I remember a few years back, when I was contributing to the Web Almanac SEO chapter, I literally asked Martin Splitt directly: what do you take into account when the raw and the rendered HTML differ? Which canonical, which title tag? And it's like, “it depends”. It depends, right? So maybe that has changed since then, but the reality is that there's this additional layer of uncertainty. And why? Why is it needed? Many, many times, the decision to use client-side rendered JavaScript, or even dynamic rendering (which is often used as a solution when it is pretty much a workaround), came down to a choice that is no longer up to date, made because it provided certain functionality that nowadays can be obtained in many other ways.

And another thing: the real use cases come with so many constraints and restrictions. Yeah. I mean, I saw everyone else nodding along.

Sam: So I think it's more about understanding that Google can understand these things. I mean, Google can even go through and parse out videos.

It just takes a lot of resources to do it. And especially now, the web is just getting larger and larger and we can talk about generative AI and how it's really inflating the internet. So I think a lot of times when we're talking about JavaScript that's on a site, it's more about what are the signals that you're sending to Google to tell them that it's actually worth the rendering, the execution time.

And I think that's where a lot of sites maybe fall short, or aren't having an honest conversation with themselves about what content is actually worth Google's notice or time versus not. Especially when you are working with larger sites, there often ends up being a lot of fluff. There's a lot going on, so you really need to home in on what's important.

And JavaScript, if you are relying on it to render your content, can make it so much harder to make sure that Google actually does that. So I think it's really just understanding that a few years ago, you'd post a page and you could almost count on it getting indexed. It's not the same anymore.

We can't count on that. It's a privilege these days instead of a right. And JavaScript just convolutes that.

And so then when it comes to the sort of stage of producing website audits, when you're looking at websites for customers, how are you approaching the topic of rendering? And how has that changed over the last few years? 

Arnout: Yeah. Having worked with a lot of PWAs, React apps, and all kinds of stuff, you kind of come to a conclusion when people ask you, well, why is my title different, or whatever. One of the first things I check is actually a tiny extension in Google Chrome called View Rendered Source, which basically highlights the raw HTML versus the rendered source.

And you see the weirdest things happening, where people insert a noindex in the rendering, or they remove links, or they do all kinds of stuff. But the page title is also often updated in these kinds of cases. So I think the first thing you start with in an audit is trying to figure out what kind of JavaScript is used, what kind of framework is used.

Is it a React site? Is it a WordPress site? Is it whatever? Because that allows you to at least assess what the SEO impact of JavaScript might be. So that's, I think, a big change over the past five years for me, because back in the day it was a lot easier, right? But it's something to really take into account. And adding on to what Sam was saying, I think we also need to understand that the real-life impact, not just from rendering, but the cost of excessive JavaScript, is massive, right? I mean environmentally, but also for cost savings at Google, right? If we send a lot of JavaScript out there, they're going to look through all of it, which takes a massive amount of time.

And then they go, like, they haven't indexed all of my pages. No, because if most of them take a lot of seconds to actually render and assess, then that's going to happen. So I think we need to do our homework, we as SEOs, but also as website builders, right? And developers.
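Arnout's raw-versus-rendered check can also be scripted. A minimal Node.js sketch, assuming puppeteer is installed (Node 18+ for the built-in fetch), using a simple link count as one example comparison signal:

```javascript
// Compare raw HTML (no JavaScript) against the rendered DOM for one URL.
const puppeteer = require('puppeteer');

async function compare(url) {
  // Raw HTML: what a plain HTTP fetch returns, before any JavaScript runs.
  const raw = await (await fetch(url)).text();

  // Rendered HTML: the DOM after a headless browser executes the JavaScript.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });
  const rendered = await page.content();
  await browser.close();

  // One example signal: how many <a> tags only exist after rendering?
  const links = (html) => (html.match(/<a[\s>]/gi) || []).length;
  console.log(`links in raw: ${links(raw)}, links rendered: ${links(rendered)}`);
}

compare('https://example.com');
```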

Aleyda: Yeah. I agree, and it's funny how this has changed, because one of the first things that I do every time a potential client comes to me, and I'm sure you do this too, is just disable JavaScript in the browser, or with a web developer extension in Chrome, and take a look at how complex this is going to be as a potential project, right? In reality, it's funny, because I have also seen many scenarios where the company is supposedly aware that the framework they chose when developing their website was not necessarily the most straightforward solution.

And many, many times I have found that they say, but we're already doing dynamic rendering, or rehydration, or some very complex sophisticated development solution just to pre-render something there. And then when you double check, you validate and say, okay, I mean, yes, the top navigation, the top five categories are crawlable or linked. The main content is indexable.

However, the second level navigation is not, because you're still relying on an on-click event to trigger and to pretty much render these additional very, very important links to your top facets, right? So there are a lot of levels or layers, let's say, of complexity here. And unfortunately, many times they think that this is solved already, or taken care of, but it's not, indeed. 
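To make that on-click pitfall concrete, an illustrative sketch with hypothetical markup: Googlebot follows href attributes found in the markup but does not click, so only the second pattern exposes a crawlable link.

```html
<!-- Not crawlable: no href, the "link" only exists via a click handler -->
<div onclick="loadCategory('/shoes/running')">Running shoes</div>

<!-- Crawlable: a real href is in the markup; JS can still enhance the click -->
<a href="/shoes/running" onclick="loadCategory(this.pathname); return false;">
  Running shoes
</a>
```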

So is rendering like a key part of, is it one of the first things you look at when you deal with a new website for the first time? 

Sam: I would say usually I'm just trying to figure out what stack they're on first, because every stack has some kind of unique caveats.

And then also, Aleyda, Arnout, I totally agree. There are so many times where clients are like, oh no, we've handled that. And you're like, but did you? Because I hate to be the bearer of bad news, but maybe the developers who built the site don't know, hey, there's this option.

I mean, I've got a client right now that we're doing that, and the developers didn't know some of the things that were available in Next.js. And I'm like, no, it's beautiful. It's automatically there. Just turn it on.

It's great. So I think knowing the stack can then also sometimes help you figure out what those rendering pitfalls are, but more importantly, what the possibilities are. Because there's really client side on one end, and then there's server side on the other. And there's so much in between, like, Aleyda, you mentioned rehydration, and there's partial rehydration and trisomorphic and isomorphic and dynamic.

And there's a lot of fancy sounding words just because there's a lot of super grey area as far as what's going on with the rendering. So I will say for us, when we're working with clients who have JavaScript based sites, we write a ton of documentation about what are the rendering options and then have to deliver that to the engineering team because they have to decide based on their own resources. Like server side always sounds super great.

That's what every SEO wants, but it's really expensive from a resources perspective and like what your server needs to be able to do. So yes, rendering is something we talk about a whole lot. And we also talk about what needs to be in the initial HTML response versus what can wait to be executed.

So there's a lot of like middle ground that you can find here and the answer is almost never server side because it's just too expensive. 

When do you make this suggestion for server side? 

Sam: So it's really going to depend on the, well, of course it always depends, right? It depends on what kind of client you're working with. There are a lot of cases where, if your website is brochureware, what JavaScript gives you is a lot of interactivity. So at that point, if your content isn't changing that much, we'd probably first want to approach the client about whether a replatform is in order, because you don't need all this. Now, of course, that can get really expensive and it scares a lot of people.

So if you do have a super interactive site or your content is changing often, then these types of frameworks do make a ton of sense for you. And at that point, it just becomes, what kind of traffic are you talking about? What's the level of user engagement that you're getting? And so there have been times where we're working with our clients and putting together the business use case of you need server side because 80% of your business is coming from organic search. That's what it's driven by.

You're in an industry that's heavily driven by search. This needs to become part of your costs. A lot of times, though, we'll back off from that just a little bit and start talking about which elements on a page should be in the initial HTML, and we'll go by page template.

Because frankly, I mean, we were talking about it. I use a lot of the same tools as far as like, what does this page look like without JavaScript? And as long as my content is there, it doesn't have to be pretty. Right.

So I feel like that was kind of a mixed answer, but- 

Aleyda: No, no. I think Sam, you mentioned something fundamental here. It's like the context and the trade-offs at the end of the day.

And many, many times, they are still a small website looking to scale, they have the resources and the flexibility, and they're using a framework that makes it easy to start server-side rendering. So even if it is going to be a non-trivial cost at the beginning, if they need the scale, and all of the impact on crawl budget in the future, et cetera, then it's better to make this investment in a good base right at the beginning, one that will allow them to grow well. But the complexity comes with an already very big, established website, where there are a lot of trade-offs and a lot of stakeholders involved.

There is also someone taking care of the speed of the website, who has their own, let's say, criteria and opinions on things. That is where I have found the complexity. For example, I have a client that preferred to move little by little and start testing things out with a very well-known dynamic rendering solution, even though they understood this was a workaround, to see what the impact was on certain core pages. And once they saw the impact: okay, let's start investing in this, doing partial migrations, more and more, on the core elements that were most important and that we expected to be most impactful, right? But yes, many, many times it's not a very straightforward thing, because of the trade-offs and the cost-effectiveness of it all.

So my recommendation would be to, yeah, prioritize depending on the stage of the development process and how mature and big the website is, and show value first. The whole thing here is to show value and to show the potential impact that this can have. And also to validate whether this is actually the right thing to prioritize at that stage, rather than a lot of other stuff that you could potentially do too. Yeah. And test, test, test.

Arnout: I also feel Sam brings up a really good point, is do you really need that React website if you're just brochureware, right? Now I've seen this many, many times.

And then it's almost to the point where I go, well, actually, if you want all of this, then you'd better start over. I've seen real use cases for PWAs and for proper apps, but in a lot of cases you're better off without. Like, if you want to build a multi-brand website and you do everything in React, it just becomes a freaking nightmare, right? In loads of cases.

And I also really want to bring to the attention that sometimes trying to fix it might not be the right solution. You might be better off replatforming to something that fits your needs in a way better way.

And I think that's something to also take into account. Yeah.

Sam: I do want to comment, though, because I feel like there are a lot of times, especially for SEOs, where we're hampered by other business needs. Somebody on another team is championing this, and you just have to adapt. That's when I would say, really start looking at what the options are, depending on what platform you're on.

Because especially if it's anything using versions from really the last couple of years, the rendering options that we have now as website developers are light years ahead of where they were three or four years ago. There are so many options. There are a lot of areas for us to win.

But it is also moving so frequently that as a developer, it's hard to keep up with. So it's not that anybody's not doing their job. It's not that nobody's staying on top of it.

It's just, it's hard. There's so many things out there. It's exactly like being an SEO.

You're just constantly inundated with so many updates. So I would say just depending on your platform, really do the research, see what options are there. For example, with Next.js, I'm going to keep using that because it's top of mind, because I'm working on a project with it right now.

The options that we had for rendering two years ago were okay, but they were pricey. Now static site generation is actually part of it. So basically, all of your HTML that isn't changing all the time can be served statically, and then it'll layer in the interactive content using hydration, but a very sophisticated type of hydration.

So it does really well. But these are options that are new to us, that are worth your time to research. And unfortunately, as SEOs, it does mean that we have to start really digging into specific platforms and frameworks, and that's usually going to be picked outside of your control.

But there are really good options out there because we're not unique, right? Every business needs this. Every website is looking at these types of things because SEO is so prevalent. It is so required for really any business.

Well, I guess not any, but most of them, so the options are getting much bigger and much better than they used to be. So there's still hope.
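For reference, the Next.js static generation Sam describes looks roughly like this in the pages router. A minimal sketch; the route and data source are hypothetical:

```javascript
// pages/products/[slug].js — static generation with periodic regeneration.
export async function getStaticPaths() {
  const products = await fetch('https://api.example.com/products').then((r) => r.json());
  return {
    paths: products.map((p) => ({ params: { slug: p.slug } })),
    fallback: 'blocking', // unknown slugs get rendered server-side on first hit
  };
}

export async function getStaticProps({ params }) {
  const product = await fetch(`https://api.example.com/products/${params.slug}`)
    .then((r) => r.json());
  return {
    props: { product },
    revalidate: 3600, // re-build this page's static HTML at most once an hour
  };
}

export default function Product({ product }) {
  // Served as static HTML first, then hydrated in the browser for interactivity.
  return <h1>{product.name}</h1>;
}
```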

So I guess back on the topic of doing audits for customers, and I also want to acknowledge the different levels of customer that everyone watching will have as well. They won't all be as technically sophisticated as the ones that you've just been talking about a minute ago. How are you approaching the education element?

So when you're dealing with a website, for instance, that is changing a small amount of things, but still perhaps too much, in your opinion, through JavaScript, how are you communicating that to your customers to say, all right, I think that maybe this is something we should look at? 

Aleyda: I try to understand first the why, how the decision was made to use this or that framework.

Because many, many times, as I mentioned before, there may have been a reason that made total sense five years ago, but it doesn't anymore, or it's achievable in different ways, right? And then there are the implications for other areas. So for example, we know that something can be solved with rehydration, but think of the implications this has on TBT: it will improve certain Core Web Vitals but will tend to have bad implications for others. So really good refinement and alignment in optimization is required. And also get those other areas involved, the ones more related to WPO or user experience, etc.

So the other day, for example, I understood that a lot of the decisions about how some PDPs were implemented on a certain e-commerce website, highly reliant on client-side rendered JavaScript, were because they were using certain third-party modules that relied by default on client-side rendered JavaScript to show UGC: the reviews and the questions and answers. And then I realized that those were mostly not getting indexed, right? But how did we get there? It was not even a decision of the company itself; it was how the solution was sold to them, as plug-and-play, right? Without requiring much configuration, etc. And I realized that it was actually doable to at least, by default, server-side render a few of those initial reviews. Not all, but a few.

So let's start with that, right? So understanding the context, understanding the decision, where this solution came from, the why behind it: I would say that should ideally always be the first step. And based on that, you can recommend, and also develop awareness of your goals, right? Because at the end of the day, even if I know about certain web development best practices and options and alternatives, the in-house developers are the ones who understand the whole context, and how and why they have implemented the current website or the current framework in certain ways, why they have chosen this or that. So if you explain and raise awareness about why you need all of this, what the impact is, and what your final goal is, they will better understand the different alternatives and solutions, and they can even suggest ones that you are not aware of.

So this ongoing, clear communication, explaining well not only what you need but the why and the context, is important.
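A minimal sketch of the reviews compromise Aleyda describes, assuming Node with Express and a hypothetical review API: the first few reviews go into the initial HTML server-side, and the third-party widget appends the rest in the browser.

```javascript
const express = require('express');
const app = express();

// Hypothetical helper standing in for the review provider's API.
async function fetchReviews(productId) {
  const res = await fetch(`https://reviews.example.com/api/${productId}`);
  return res.json();
}

app.get('/product/:id', async (req, res) => {
  const reviews = await fetchReviews(req.params.id);
  // Server-render just the first few reviews into the initial HTML.
  const firstFew = reviews
    .slice(0, 3)
    .map((r) => `<li>${r.text}</li>`)
    .join('');
  res.send(`<!doctype html>
<html><body>
  <ul id="reviews">${firstFew}</ul>
  <!-- the widget appends the remaining reviews client-side -->
  <script src="/reviews-widget.js" defer></script>
</body></html>`);
});

app.listen(3000);
```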

Arnout: Yeah, I think just showing people what their website looks like with JavaScript turned off is usually a big eye-opener, because they experience it themselves. And then if you explain how crawling works, and rendering and indexing and everything, they start to grasp it: okay, so does this mean that initially Googlebot won't see all of these kinds of things? I'm like, no, but they will eventually, when they render it, right? And because they can actually see it, it's like, oh, now I get it, right? So that's another great way.

Sam: And then also we might just do a comparison of, like, here's the code that you're seeing, like, when you're looking at it in your browser, and here's what Google tells us they're seeing, because you can get that out of Google Search Console. And so kind of highlighting the differences and showing, like, hey, remember all that time you spent writing that content to be optimized and use your target keyword and really add some relevance? Yeah, it's out there. So like I said, usually that starts to get a lot of emotion going and people get invested.

Aleyda: And another thing that sadly works well, it shouldn't be like that, but it tends to work well, is to do a little bit of a benchmark of what they are doing versus what their main competitors are doing, especially those that are ranking for core topics and queries they really care about. Because they come up with reasons like, oh, it's because this is how we have this really cool option to showcase more products without loading any more pages, this dynamic sort of configuration, whatever, that this or that player has, and we need to be better, right? But then you go to these other players, or even players that do something better than they do, and show that those competitors are able to provide the functionality while server-side rendering their pages, getting indexed much more, outranking them, et cetera. So I do these sorts of benchmarks. I work with a lot of e-commerce sites and marketplaces, so it's always PLPs, categories, facets, and PDPs, product pages, every level, and I showcase: this is what you are showing, this is what Google is seeing. Let's go to the URL inspection tool or the Rich Results Test, or even Sitebulb's tools, to show what Google is actually seeing, how they are configured versus their competitors, and how that correlates with rankings at the end of the day.

So I want to talk about internal links. Everybody loves links, right? So I think this question has got two parts. Have you experienced issues with URL discovery due to reliance on internal links being loaded in by JavaScript? And then how are you identifying and tackling these issues for customers?

Arnout: I've seen plenty of those, right? One of the things I regularly use is Sitebulb's response versus rendered report, which will actually just show you all the links that are added. And I've seen some really, really weird things happening there, right? Where certain templates just don't have any normal links, and everything is just rendered.

And then they basically are like, why aren't these being indexed at the same speed as all the other ones? I'm like, well, there you go. So that's my way of doing it. But there are different ways of doing it, right?

Sam: I'd say for us, a lot of new clients come to us because they're looking for traffic investigations; they're trying to figure out why they are declining. And that is one of the first things that we look at: all right, this page, you say it's important to you, and it's been declining. But what do you have pointing to it? Are you actually sending the right signals to Google that this is a priority page for you? And you find pretty often that the internal linking does not reflect the actual priority that the brand is placing on that page. Yeah, we do a lot of the same things that Arnout just said.

We use the response versus render report on Sitebulb. And that's something we run regularly for every client. Yeah, there you go. We love the tool, so I'm happy to promote it anytime. So we'll run that in our regular crawls so that it can be flagged at any time. It's part of our QA processes when any change is being put in.

Usually, though, unless something has broken, there are other indicators that make you go and discover it, like declines and things like that.

Aleyda: I have to say, and it's not only because this is a Sitebulb webinar, that I want to highlight how useful the response versus render functionality of Sitebulb, which Sam mentioned before, is in different ways.

So for example, even in the single page analysis, I love how it organizes it in tabs. Because with a lot of extensions, it's true that you can do a quick validation of the page and see what changes between the raw and the rendered version. But with the single page analysis, you see the links, the images, the text, right? So every single area or element is clearly segmented.

So you can go and pinpoint very, very easily, even for people who are non-technical: okay, take a look at how it is changing, and this is the number of links seen in the raw versus the rendered version.

So it's like, I love it. Then on the other hand, besides what has been mentioned before, I have seen not only the number of internal links changing a lot between the raw and the rendered version, but the anchor text too. Many, many times there's empty anchor text, or images with empty alt descriptions.

So you have pretty much empty links pointing to pages. And it's important to give Google context about what a page is about, what it is popular for, and to be consistent with it. Many, many times these are very little things that are easily overlooked.

But at the end of the day, it's the consistency and alignment of it all that will help the pages rank better for the right queries, right?

So I want to hear some JavaScript horror stories. I've got a specific one I want to talk to you about, Sam. We'll do that after. But yeah, I want to hear anything you guys can think of that's just, that kind of blew your mind when you saw it. 

Arnout: Oh, I think mine was back in the day when somebody apparently put a noindex in a new version without telling anyone, right? So they pushed new code from dev to live, but it wasn't in the source code. It was actually injected by JavaScript.

So suddenly there was like, what the heck? Nothing happened. And then a few years later, they started getting alerts in Search Console. And then I was looking at it, like, I didn't know what was happening.

This was a few years ago. And I started digging in and this actually happened. And it basically de-indexed half the site.

It was quite a significant site. So it was just like, fuck. Yeah.

That's bad, right? And I've seen others where you see headings, basically everything, built using JavaScript, when a development agency just builds everything that way.

So I ended up, and I also pinged you guys on this, because basically they were merging JavaScript. They ticked the box in Magento that takes all these small pieces of JavaScript and says, let's make it one big file. But it ended up being 7.5 megs.

And when I found out that that was happening... because nobody was looking; they were only looking at the results, like, ah, that looks fine, the functionality works. But they just kept on adding little bits of JavaScript on top of it, and the merged file became 7.5 megs.

And then we were looking in the inspect tool in Search Console and it wouldn't render. Right? Because 7.5 megs is too much. Right? I'm slowly unpicking this now, but this was kind of a biggie.

Sam: Thanks, Magento. That was really cool of you.

Arnout: Exactly. And thanks, Magento developers, right? Because I don't think it was purely Magento, but that little tick box saying, hey, let's merge all the JavaScript. I'm like, yeah.
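For anyone auditing for the first problem Arnout describes, the injected noindex looks something like this inside a bundled script: nothing in the raw HTML source, but the rendered DOM, which is what Google acts on, carries the tag. Illustrative only:

```javascript
// A robots noindex added at runtime. View-source shows nothing; the rendered
// DOM carries the tag, and Google honors the rendered version.
const meta = document.createElement('meta');
meta.setAttribute('name', 'robots');
meta.setAttribute('content', 'noindex');
document.head.appendChild(meta);
```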

Yeah. Come on. I want some more horror stories, guys.

Sam: So, I would say one that keeps coming up for me is, especially when people are using JavaScript frontends like Gatsby, which I still love, there's a whole build process that happens, and if the build process fails, all of the page versions that were created before it failed are still available.

So, you end up having like 17 or 70 or 700 versions of your homepage, depending on how often you do builds, just sitting out there, available in a subfolder. So, I would say always be careful about that. Yeah.

That one's really fun to find. Because then suddenly like a site, I think the first time we had it, we had a site that was relatively small, but they were e-commerce. So, they had like less than 20,000 pages, but we couldn't figure out why all of a sudden we were having all these issues with crawl budget.

And then, oh, it's because all these nightly builds that you're doing are failing. By the way, did you guys know they're failing? Because that seems problematic. So, yeah, we've definitely run into quite a few of those.

I keep seeing it happen across all different kinds of platforms and frameworks. So, definitely, if you're using anything, and even if you're using some of the plugins that just do like the static site generation for WordPress, things like that, make sure that it's not adding a bunch of fluff. And the other thing I've seen usually take up crawl budget is like these JSON API files that get indexed or that Google keeps crawling because they're referenced in the code of any of your actual pages.

So, send a noindex on those. Tell it to stop.
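One way to implement Sam's suggestion, sketched as Express middleware (the /api prefix is an assumption; adapt to your stack). Google honors the X-Robots-Tag header on non-HTML resources like JSON:

```javascript
const express = require('express');
const app = express();

// Every response under /api carries X-Robots-Tag: noindex, so crawled JSON
// endpoints drop out of (or never enter) Google's index.
app.use('/api', (req, res, next) => {
  res.set('X-Robots-Tag', 'noindex');
  next();
});
```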

Aleyda: That is a really good call.

In fact, you don't even need access to web server log files. You can just go to the Crawl Stats report in Google Search Console to see those. Very straightforward, because, yes, they tend to consume a lot of the crawl budget, unfortunately, many times.

Unfortunately, the horror stories that I have run into are the other way around from what you just mentioned: JavaScript files that were needed to crawl and render the page being blocked. So, I have run into those. Very straightforward, thankfully, to fix.

I have also run into these scenarios in which, somehow, I don't understand why, right? Developers choosing to implement hreflang annotations and canonical tags through JavaScript, not with the actual tag directly there in the HTML, only in the rendered version. Everything through JavaScript. It's like, why? Right? It was just a decision because, yeah, I don't know.

To feel cool? I have no idea. It was replaced very quickly after I pointed it out, because there was no necessity for it. But yes, I would say the biggest, worst scenario, which was thankfully very quickly fixed, was when I started doing SEO for this location-focused marketplace that was just starting out.

So, it was very early in the process, a great moment to make a switch in how they were configuring things. They were using hash-based URLs for every location: for LA, for Seattle, for Denver. So, we pretty much just changed it to output actual crawlable, indexable URLs.

And just by doing that, well, they 4Xed their traffic after a couple of months, right? So, it was that straightforward, right? Unfortunately, it's not that easy many, many times. But if you're early... that is why I tell clients and everybody who speaks with me about this: involve an SEO early on, even if it is not a full SEO strategy or process, at least for some sort of validation or involvement. If there is a redesign, migration, replatforming, whatever structural change you're doing, ensure that the core, the base for good crawlability and indexability, independently of the framework that you're using, is as much as possible in there from the start.
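Why the hash-URL fix worked, sketched briefly: Google ignores URL fragments, so every location "page" collapsed into one URL. The History API lets an SPA keep client-side routing while exposing real, indexable paths (each path still needs to return real HTML from the server); renderCurrentRoute is a hypothetical function here.

```javascript
// Before: https://example.com/#/seattle and https://example.com/#/denver are
// both crawled as https://example.com/ because the fragment is ignored.
// After: https://example.com/seattle is a separate, indexable URL.
history.pushState({}, '', '/seattle'); // change the URL without a page reload

window.addEventListener('popstate', () => {
  renderCurrentRoute(location.pathname); // hypothetical client-side render step
});
```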

So, I've got one more question, and this is about a support query that came in from Tori, who works with Sam, about an issue with a site that wasn't crawling properly: why can't Sitebulb crawl this? And when we started digging into it, we found that there was loads and loads of content loaded in by the shadow DOM, and Sitebulb was just sort of going and going and going, and we ended up having to limit the depth that it would crawl. So, since we implemented the fix into Sitebulb, I haven't heard what happened next.

So, I'm hoping that Sam, you've got some interesting story to share about what happened with this site, and what went wrong. 

Sam: Yeah. So, this site has been super fun to work on.

And I say super fun because we're learning a lot. So, basically, like you said, what we found is that the JavaScript files that they have were constantly running, so it was constantly adjusting the shadow DOM.

We ended up, and Patrick, I'm going to steal your thunder here, but yeah, we basically were like, all right, limit it to this many levels deep, as far as the number of children in the shadow DOM, so that it could finally stop. Because the problem that we were seeing was that traffic was going down, but at a more accelerated rate than we thought it would. Because this client also changed from server side to client side, which was the first time I had ever seen that happen.

Yeah, they made a lot of changes. This is where definitely other business requirements were calling the shots, and we were doing our best to try to put out the fires. So, it was declining even more than we thought it would.

Pages weren't getting indexed after some migrations. So, we were trying to figure out why. Because when we put them in Google Search Console, Google can totally see all the content, it looks fine.

But yeah, what we found is that actually what's happening is they're just constantly spinning their wheels. So, it's a large enterprise organization, so change takes a long time. We'll put in a ticket, and it's probably at least six weeks before that gets worked on, most of the time, not with all of them, but before it can actually get to a sprint.

So, I will say with them, we've done a lot of work of that, here's the elements that need to be in the initial HTML response. And so, we've been slowly moving those things over. That is also where we're starting to, they've done some upgrades on their JavaScript library so that we can use some of their more sophisticated rendering options.

But in short, we've had to accept that for some of these pages, until they can be refactored and recoded, like this particular template, we're just going to have to live with the loss right now. It's thankfully not the most conversion-driving part of their business; that's why we can put it at a lower priority, because essentially that's what it needs: to be entirely rewritten. It is also a very dynamic site.

So, the industry that they have, their content is changing at least hourly. It's new content all the time, lots of filtering, lots of search. So, having the framework make sense for their business use case, it's really the level of interactivity that they need.

But some of the page templates where we're seeing the declines, that's evergreen content that doesn't change a whole lot. So it's like, why did we put these in JavaScript? Why did we do that? There are some statistics on some of these pages that update, but they update like once a day. So can we just build them statically? Why are we doing this? And I'm pretty sure their development team probably doesn't like us, but that's okay.

Aleyda: But you know what, Sam? You have mentioned something interesting, because, for example, in my case, I have seen something similar with a website that has a very dynamic inventory; they're changing the state of many products pretty much every few hours, right? And we even used to get warnings from Google because of outdated structured data, things like that. And this is the thing: a lot of it was being pre-rendered/cached through their CDN, because they were heavily reliant on JavaScript. So, there's that.

And at the end of the day, also be consistent with your own content needs. At the end of the day, the technology needs to fulfill the need and the nature of that particular context. And yes, update it consistently.

So, these are the constraints that we're talking about, versus the ideal configuration that we all know and supposedly should follow, right? On the other hand, there is something interesting here regarding how things are supposed to work versus what we see day-to-day. Another thing that I have seen in the past is with these pre-rendering services that rely on CDN workers, whatever: one thing is what they are supposed to do, and another is how often Googlebot actually ends up going through the CDN to see our content.

And yes, there is a certain percentage of the time that they don't go through them. So, there's also that, and many times also the rules. I remember one time there was a release on a website where we had supposedly made sure that Googlebot would see certain configurations already server-side, but this is not what we saw reflected out there.

And then at the end of the day, after a lot of validations and workarounds, we ended up having a call with the company's DevOps, and they were like, let me see the order of the rules that I have in the CDN. Oh, sorry, there's that. Switch. I'm going to switch it. Thank you.

It was that. Thank you very much. So, yes, there are also a lot of cases and configurations like this, so good communication and coordination are always a must.

Arnout: Yeah. I think there's one more thing I'd like to add, which you briefly touched upon: structured data. I see a lot of companies implementing structured data through Tag Manager and these kinds of solutions, right? JavaScript.

And that, especially if you're in e-commerce, can be kind of a biggie, because Google sees a different price on the page than in the structured data, because the structured data is rendered and handled later. So, you get disqualified in the feed and these kinds of things. So, I see a lot of these discrepancies, especially in e-commerce.

And it's something people just don't think about, because they go, ah, it's fine. No, it's not fine, because there is a gap in time between the rendering of that structured data through JavaScript and the actual feed getting updated. So, I see a lot of those as well.
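The safer pattern Arnout is arguing for, as an illustrative sketch with placeholder values: emit the Offer markup server-side in the initial HTML, from the same source of truth as the visible price, instead of injecting it later via Tag Manager.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example product",
  "offers": {
    "@type": "Offer",
    "price": "49.99",
    "priceCurrency": "EUR",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```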

End-of-webinar Q&A

Google's cache was a useful way to understand what Google could see, especially in regard to JavaScript. With this going away, is there another reliable way to better understand what Google is able to render? 

Sam: We use the inspect URL tool within Google Search Console. One of the tabs will be the HTML that they see.

Can be a little bit laborious to try to actually pull out. I don't know if I said that right. It can take a lot of work to try to pull out that code and then put it into something that you can actually read, especially for larger pages.

I've definitely seen it break my Chrome. But that's what we're using. Cool.

Aleyda: Yeah. I would say that the most reliable one would be that one. In case you don't have access to Google Search Console, I also like technicalseo.com; shout out to Merkle for making the website and all the free tools accessible to everybody.

They also have a fetch and render option. And, of course, Sitebulb, with its single page analysis, is available to everybody out there. And if it is not part of your toolkit yet, then you should totally start using it.

Another way to validate it. 

There is a move away from React now. Are you recommending clients consider flat file headless sites yet? 

Sam: All the time.

I love headless. I love it so much. Yes.

Yes. 

Shout out to Astro. I used to be a huge Gatsby stan. Now I'm all about Astro.

It's beautiful. You can use any JavaScript library you like and it just spits out beautiful HTML. It's super elegant.

Love it. 

Aleyda: Headless is great. Again, it's important to have an SEO involved right from the beginning.

We know there are certain platforms that, yeah, it should support everything. But, yeah, good specification right from the start. Yeah. 

Any suggestions for a good framework? 

Sam: So, I'll just shout out Astro again. If you've got a lot of interactivity going on, a lot of changing content, Next.js is the one I see most commonly. It may not be the best, but it has such huge community support that finding developers who know how to work on it is a lot easier.

What impact on traffic have you seen from fixing JS issues? 

Arnout: Massive.

I've seen massive. Especially in an e-commerce store where all the images were basically served through JavaScript, and we fixed that. It skyrocketed the image search traffic.

So, it can be, I guess it depends, but there are use cases where it's massive. Especially with images. 

Aleyda: That is a great shout there.

The impact that it can have on the thumbnails in rich results in search, yeah, and on the click-through rate of already-ranking pages.

Indeed. Just because of relying on JavaScript to show the images and making them big enough. 
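To illustrate the image point with a hypothetical snippet: Google discovers images from src and srcset attributes in the markup, so images attached only by script, or left sitting in a data-src attribute by a custom lazy-loader, are easy for it to miss.

```html
<!-- Risky: no src until a script swaps data-src in, so crawlers may see nothing -->
<img data-src="/img/product.jpg" alt="Product name">

<!-- Safer: real src plus native lazy loading; crawlable and still deferred -->
<img src="/img/product.jpg" alt="Product name" loading="lazy"
     width="800" height="600">
```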

Sam: Kind of a benchmark that we've been using when we do forecasting for clients who are making the switch from client side to server side is typically we see about a 30% increase.

Now, obviously it's going to depend on how competitive things are. Are your competitors already doing that? If so, you're just trying to catch up. But typically we see the site traffic and engagement go up around 30%.

Apart from Screaming Frog, would you recommend other tools for auditing websites' JavaScript regularly?

Aleyda: Sorry, I have to say something here, right? I will say that, if I remember well, Sitebulb was the first SEO crawler out there that properly started to report on the discrepancies and the gap between the raw and the rendered HTML.

And the one that up to right now provides the best UI and the most flexibility to troubleshoot and work around it. Right? So, Sitebulb, well done. Yeah.

We have a website showing blank when using the live test in Google's URL inspection tool, but when using the same bot in Screaming Frog, the content is showing. What could be the reason?

Arnout: Oh, it could well be that the JavaScript is way too large.

So, basically, Google just times out. So, that could be a reason. Maybe there's other reasons, but that's the first one that jumps to mind.

Aleyda: Or resources are blocked for Googlebot, but not for other crawlers, or even blocked by IP, things like that. Yeah. So, yeah.

Always try to simulate, but remember that at the end of the day, it is a simulation. Yeah.

Patrick: And you can go and change the user agent in your crawl tool to Googlebot as well, increase the render timeout, muck around with those settings, and you might be able to figure out why that's happening.

Do you recommend a WordPress plugin to help with this? 

Sam: Leave WordPress. I can't stand WordPress.

Also, it still uses jQuery, which is so old and antiquated. Can we please move up? I hate it. 

Aleyda: I will say, if Jono was here, he would say you're already using WordPress.

It's perfect, isn't it? 

Arnout: No. I'm going to say the biggest problem I see with JavaScript and WordPress is the sheer amount of plugins that people install. Just trim down on your plugins.

And then there's the free version of, what's it called? QUIC.cloud, with LiteSpeed, which basically does a lot of the optimizations, and you can connect it with Cloudflare. It's a great way to fix a lot of the stuff that's broken, though I'd rather fix it at the core. But LiteSpeed is a good one.

Aleyda: By the way, I don't know if the question is potentially also asking whether they are using some sort of template that client-side renders a lot of things, like navigation and core elements of the site pages, right? If that's the case, then the good thing about WordPress is that you can easily switch to another template that doesn't do that. So I would say go and take a look at the many different templates, free and paid, and ensure that the one you use is lightweight and doesn't rely, or over-rely, on client-side rendered JavaScript for things that are not actually required for your particular functionality. There are plenty of choices out there.

If for some reason you're right now using a template that does that, go and take a look at many others, and it should be pretty straightforward to do a migration for the better in this case. Yeah. Awesome.

Talk more about Shadow DOM…

Sam: Y'all, this is a whole other webinar. I would say, except that it's a grey area, I've met even a number of developers who don't understand the Shadow DOM; they just essentially know that it's there. As for the Shadow DOM and what you're seeing and how it's rendered, I don't know how to answer. It's convoluted and complex, and usually, if you're having issues with the Shadow DOM, you're going to see it coming out in the response versus render report.

If you're looking at the code itself, you've got developer tools open in Chrome, you'll see it's constantly changing. Those are all signs that something's wrong with your Shadow DOM. 
