Hi TJ,
Single-page apps that use Polymer tend to face the same set of challenges as other SPAs. That is, crawlers (and especially social sharing bots, like Facebook's and Twitter's) may or may not run JavaScript, so they may not see the right content and meta tags if your app populates those client-side. Some people use server-side rendering to work around these issues in SPAs, but shadow DOM isn't really compatible with server-side rendering.
In practice, shadow DOM itself tends to be less of an issue, as long as you include the correct polyfills. Crawlers that run script should see the content in the shadow roots (typically, they're not doing "view source", they're looking at the rendered page).
There are a couple of approaches to ensuring that your site is indexable. For content sites like blogs, you might use server-side or build-time templating to inject the content into light DOM and populate the metadata. The Polymer docs site takes this approach. After initial page load, it acts as a SPA, loading page content dynamically when you navigate, so it's a kind of hybrid structure.
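To make the build-time templating idea concrete, here's a minimal sketch. The helper and placeholder names (renderShell, the HTML comments used as insertion points) are hypothetical, not how the Polymer docs site actually does it; the point is just that the content and meta tags land in light DOM at build time, so crawlers see them without running any script.

```javascript
// Hypothetical build-time helper: inject per-page content and meta tags
// into a static HTML shell. After first load, the client-side router
// takes over and the site behaves as a SPA.
function renderShell(shell, page) {
  return shell
    .replace(
      '<!-- meta -->',
      `<title>${page.title}</title>\n` +
        `<meta property="og:title" content="${page.title}">\n` +
        `<meta property="og:description" content="${page.description}">`
    )
    .replace('<!-- content -->', page.lightDomHtml);
}

// Usage: run this once per page at build time.
const shell =
  '<html><head><!-- meta --></head>' +
  '<body><my-app><!-- content --></my-app></body></html>';

const html = renderShell(shell, {
  title: 'Getting started',
  description: 'A quick-start guide',
  lightDomHtml: '<h1>Getting started</h1>',
});
```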
Sam Li from the Polymer team recently gave a talk about another approach—using headless Chrome to render the app server side. In this case, when the site receives a request from a predefined list of bots, it forwards the request to the renderer. It forces the shadow DOM polyfill on, so that the contents can be rendered server side and sent to the client. The webcomponents.org catalog uses this approach.
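The request-forwarding part is easy to sketch. This is just the idea, not Rendertron's actual API: the user-agent patterns and the /render/ URL shape below are assumptions for illustration.

```javascript
// Hypothetical bot-forwarding logic: requests whose User-Agent matches a
// known bot get proxied to the headless-Chrome renderer; everyone else
// gets the normal client-side app.
const BOT_UA_PATTERNS = [
  /googlebot/i,
  /facebookexternalhit/i,
  /twitterbot/i,
  /linkedinbot/i,
];

function isBot(userAgent) {
  return BOT_UA_PATTERNS.some((re) => re.test(userAgent || ''));
}

function targetUrl(rendererBase, requestUrl, userAgent) {
  // Pass the original URL along so the renderer knows what to load.
  return isBot(userAgent)
    ? `${rendererBase}/render/${encodeURIComponent(requestUrl)}`
    : requestUrl;
}
```

In practice you'd wire this into your server's middleware and also cache rendered pages, since spinning up a headless render per request is expensive.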
SSR for web components overview (Gray Norton)
SSR with headless Chrome (Sam Li)
Sam's headless chrome renderer (Rendertron):
Hope that helps.
Cheers,
Arthur