thinking about shadow dom and SEO


Seth Ladd

Aug 15, 2013, 2:46:08 PM
to polymer-dev
Hi all,

When I see custom elements, I think of a great way to create modular web pages. I have an instinct to do this:

Page 1:

<body>
  <div>Some content</div>
  <contact-us></contact-us>
</body>

Page 2:

<body>
  <div>Some other content</div>
  <contact-us></contact-us>  <== awesome! reuse!
</body>

Where <contact-us> might have:

<polymer-element name="contact-us">
  <template>
    <div>Our address is Foo and our phone number is bar</div>
  </template>
</polymer-element>

If I did this, I'm afraid the semantics of custom elements and Shadow DOM would prevent crawlers from finding and indexing the info inside custom elements. Or am I overthinking it? Or am I misusing custom elements?

Thanks for your insight!
Seth

John Messerly

Aug 15, 2013, 2:52:59 PM
to Seth Ladd, polymer-dev
I might be missing something, but I don't think this has anything to do with Shadow DOM. It's more about Custom Elements, right? To interpret the custom element, the crawler would have to either understand polymer-element or run the associated JavaScript code that registers the custom element. If the crawler can get far enough to instantiate the custom element (and possibly create the Shadow DOM, if the custom element has one), then it can easily discover and traverse into the Shadow DOM via DOM APIs.
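For illustration, here's a minimal sketch of that traversal, assuming a shadowRoot accessor on the host element (prefixed as webkitShadowRoot in some browsers today, or provided by the polyfill):

// Sketch: once the page's scripts have run and <contact-us> has been
// upgraded, a DOM-aware crawler could reach its shadow tree like this.
var host = document.querySelector('contact-us');
var root = host && host.shadowRoot;  // null if no shadow root was created

if (root) {
  // From here the shadow content is an ordinary subtree to index.
  console.log(root.textContent);
}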


Seth Ladd

Aug 15, 2013, 3:11:26 PM
to John Messerly, polymer-dev
On Thu, Aug 15, 2013 at 11:52 AM, John Messerly <jmes...@google.com> wrote:
I might be missing something, but I don't think this has anything to do with Shadow DOM. It's more about Custom Elements, right? To interpret the custom element, the crawler would have to either understand polymer-element or run the associated JavaScript code that registers the custom element. If the crawler can get far enough to instantiate the custom element (and possibly create the Shadow DOM, if the custom element has one), then it can easily discover and traverse into the Shadow DOM via DOM APIs.

My question is about both custom elements and Shadow DOM, really. And you're right, the first-order question is whether crawlers are smart enough to run the JS that registers and runs the custom elements. But let's pretend they are running the JS on the page: since the contents of the custom element live in its shadow DOM, should crawlers go in there?

John Messerly

Aug 15, 2013, 4:52:40 PM
to Seth Ladd, polymer-dev
On Thu, Aug 15, 2013 at 12:11 PM, Seth Ladd <seth...@google.com> wrote:
On Thu, Aug 15, 2013 at 11:52 AM, John Messerly <jmes...@google.com> wrote:
I might be missing something, but I don't think this has anything to do with Shadow DOM. It's more about Custom Elements, right? To interpret the custom element, the crawler would have to either understand polymer-element or run the associated JavaScript code that registers the custom element. If the crawler can get far enough to instantiate the custom element (and possibly create the Shadow DOM, if the custom element has one), then it can easily discover and traverse into the Shadow DOM via DOM APIs.

My question is about both custom elements and Shadow DOM, really. And you're right, the first-order question is whether crawlers are smart enough to run the JS that registers and runs the custom elements. But let's pretend they are running the JS on the page: since the contents of the custom element live in its shadow DOM, should crawlers go in there?

Ah, I see what you mean. Yeah, to find all of the links they'd want to traverse into the ShadowRoot. And maybe they'd want to see the composed tree, which would mean running the composition algorithm.

But if we're assuming the crawler has a JS engine, then I guess we can also assume it has a DOM? Which would mean it either has native ShadowRoot support or it can run the polyfill, which will produce the composed tree. Either way, it should work at least as well as any other client-side render-in-JS framework. I think :)
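For what it's worth, a rough, hypothetical sketch of that kind of walk (it only descends into shadow trees; a real composed-tree walk would also resolve <content> insertion points):

// Hypothetical crawler-side walk: collect every <a href>, descending
// into shadow roots so links that exist only in shadow trees are found.
function collectLinks(node, links) {
  links = links || [];
  if (node.tagName === 'A' && node.href) {
    links.push(node.href);
  }
  if (node.shadowRoot) {
    collectLinks(node.shadowRoot, links);  // recurse into the shadow tree
  }
  var children = node.children || [];
  for (var i = 0; i < children.length; i++) {
    collectLinks(children[i], links);
  }
  return links;
}

var allLinks = collectLinks(document.body);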