Hi Yonni -
Trace viewer actually started out using "cr.js", a very minimalist framework that preceded Polymer. It provided mechanisms for having UI elements that were subclasses of HTML elements, which was nice for writing component-style UI code. Later we switched to Polymer, but the migration took over two years... it was very hard because of the size of the code. We chose Polymer mainly because other members of the Chrome team were using it, so it was just a popular thing and I was curious to try it. Not much more to it than that.
I would note that if you dig into the code, we don't actually use much of Polymer --- we're mainly using HTML imports and its custom elements capabilities, rather than the full Polymer elements library. Much of the cleverness of Polymer is kinda lost on me, but I like HTML imports because a single file contains all the assets for a component, versus having your CSS and HTML template and JS all scattered across different directories or files. Then, I like custom elements largely because they make the UI more testable --- I can write a component that is an element, test the heck out of it in isolation, and then embed it pretty easily into my overall app too. Both React and Angular do this too, so I suppose I could get to enjoying those as well were we based on them. But again, I doubt I'd personally use them heavily --- just the code module features and the component-ization stuff.

I do dig JSX as a way to do things --- one thing that drives me personally crazy is that HTML imports look like HTML to emacs, but then there's JS inside, and emacs js-mode and js2-mode don't like that; there are hacks to make emacs edit mixed-mode files, but they never really work to my liking. So I use Sublime to hack on trace viewer, which is... nice enough, but I do miss emacs. JSX-styled files have a much better story in emacs. But that's just me being a nerd. :D
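To make the single-file thing concrete, here's a hypothetical component file in the HTML imports + custom elements v0 style (the names are made up for illustration, not actual trace viewer code) --- template, style, and behavior all live in one importable file:

```html
<!-- my-chart.html: a hypothetical self-contained component.
     Import it with <link rel="import" href="my-chart.html">. -->
<template id="my-chart-template">
  <style>
    .title { font-weight: bold; }
  </style>
  <div class="title">chart goes here</div>
</template>
<script>
  // Capture the import's own document at script-eval time; inside
  // callbacks, document.currentScript is no longer set.
  var importDoc = document.currentScript.ownerDocument;

  // Register a custom element (v0 API) so the component can be tested
  // in isolation and then embedded anywhere as <my-chart>.
  var proto = Object.create(HTMLElement.prototype);
  proto.createdCallback = function() {
    var template = importDoc.querySelector('#my-chart-template');
    this.appendChild(document.importNode(template.content, true));
  };
  document.registerElement('my-chart', {prototype: proto});
</script>
```

The testability win is that a test page can just create the element, poke at it, and assert on its DOM, without ever booting the full app.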
Tracing was built with the assumption that the trace model (post-parsing) would fit in memory. This let us focus on UI and features a lot more. In a previous life, I tried to build a tool like tracing that supported giant files, such that the actual processed trace model could be paged in and out depending on what the user was looking at. That goal made it very hard to also support fancy UI ideas. In trace viewer, I wanted to focus on UI richness and keep the project agile and simple, so we just assume the model fits. When it doesn't, we crash. :( Our workaround is to have lots of controls in the recording part of tracing so people can limit what gets traced to a reasonable size, and that has allowed us to limp along for years with a shockingly small number of engineers building the tool, even given how many people use it.

That having been said, I would *really really really* have liked to build trace viewer for larger files... large files are a source of headaches and bugs for a fair number of our users, and fixing the viewer to handle them well at this point is very hard to do. Sometimes I think about spending a month just hacking on the code to do streaming import, for instance, but that would only fix the memory footprint we use to load the file: the assumption that the resulting parsed model fits in RAM is too hard to remove from the code at this point without a full-scale rewrite. I think. Do you have clever ideas here? ^_^
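For what it's worth, the streaming-import half alone might look something like this generator-based sketch (all names hypothetical, not real trace viewer APIs) --- and it also shows exactly why it's only half a fix: the consumer here gets away with a rolling summary, whereas the real viewer wants the whole parsed model resident.

```javascript
// Hypothetical streaming importer: parse events chunk by chunk off a
// newline-delimited JSON stream, so the raw file text is never fully
// buffered in memory.
function* streamEvents(chunks) {
  let buffered = '';
  for (const chunk of chunks) {
    buffered += chunk;
    let newline;
    while ((newline = buffered.indexOf('\n')) !== -1) {
      const line = buffered.slice(0, newline);
      buffered = buffered.slice(newline + 1);
      if (line.trim()) yield JSON.parse(line);
    }
  }
  if (buffered.trim()) yield JSON.parse(buffered);
}

// This consumer only keeps a running summary, so memory stays flat.
// The hard part in the viewer is that everything downstream assumes
// the full event list is in RAM, not a summary like this.
function summarizeTrace(chunks) {
  let eventCount = 0;
  let lastTimestamp = 0;
  for (const event of streamEvents(chunks)) {
    eventCount += 1;
    lastTimestamp = Math.max(lastTimestamp, event.ts);
  }
  return { eventCount, lastTimestamp };
}
```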
Best,
- Nat