Static GitHub Issues

[3073] Google crawler parses empty DOM with SSR


The logic below: if the user agent is a crawler/bot (e.g. Googlebot), res.spa stays false and Nuxt server-renders the page; for regular browsers res.spa is set to true and Nuxt serves the SPA shell.

  // Matches common crawler/bot user agents (simplified; the real list is longer)
  const crawlersRegex = /googlebot|bingbot|yandex|baiduspider|facebookexternalhit|twitterbot|bot|crawler|spider/i

  serverMiddleware: [{
    handler(req, res, next) {
      // Bots get SSR (res.spa = false); regular browsers get the SPA shell
      const isBot = crawlersRegex.test(req.headers['user-agent'] || '')
      res.spa = !isBot
      next()
    }
  }]

Using Fetch as Google (https://www.google.com/webmasters/tools/googlebot-fetch), I get this strange result (screenshot):

Obviously the content is empty. That was with the loading indicator disabled.

If the loading indicator is enabled, it shows up in the fetched result (screenshot):


Other:

  • Google uses the Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) user agent. If I fake it in my Chrome, I correctly get an SSR rendering.
  • curl returns the correct DOM structure when a bot user agent (like the one above) is used (see the sketch after this list).
  • The Google result looks like they get a CSR rendering.
  • Facebook, Twitter etc. parse the content correctly (e.g. from the Open Graph meta tags). This issue happens only with the Google crawler.
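
The curl check can also be scripted. Below is a minimal sketch (not from the original report) that fetches the page once with a bot user agent and once with a browser user agent, then looks for Nuxt's serialized state as a heuristic for SSR output. It assumes Node 18+ (global fetch) and that the app is running at http://localhost:3000; both are placeholders.

  // compare-ssr.js — minimal sketch, assumes Node 18+ (global fetch)
  // and a Nuxt app listening on http://localhost:3000
  const GOOGLEBOT_UA =
    'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'
  const BROWSER_UA =
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'

  async function fetchHtml (userAgent) {
    const res = await fetch('http://localhost:3000/', {
      headers: { 'User-Agent': userAgent }
    })
    return res.text()
  }

  async function main () {
    const [botHtml, browserHtml] = await Promise.all([
      fetchHtml(GOOGLEBOT_UA),
      fetchHtml(BROWSER_UA)
    ])
    // Heuristic: Nuxt 2 SSR responses serialize their state with
    // serverRendered:true; the SPA shell does not.
    console.log('bot got SSR:', botHtml.includes('serverRendered:true'))
    console.log('browser got SSR:', browserHtml.includes('serverRendered:true'))
  }

  main().catch(console.error)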

Assumptions:

  • If no one else is affected by this but me, then the conditional rendering above might be the issue.
  • If Nuxt returns basic initial HTML first and the Google crawler is programmed to stop as soon as it sees some HTML (before the server has finished responding), that might also be a reason.

Update:

  • If I force-set res.spa = false on every request (as sketched below), Google renders the results correctly. Therefore something is happening with the conditional rendering.
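
For reference, the forced variant described above is the same middleware, minus the user-agent check:

  serverMiddleware: [{
    handler(req, res, next) {
      // Force SSR for every request, regardless of user agent
      res.spa = false
      next()
    }
  }]
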
This question is available on the Nuxt.js community (https://nuxtjs.cmty.io): #c2660, https://nuxtjs.cmty.io/nuxt/nuxt.js/issues/c2660