I wasn't aware that the development of the IE WebDriver is community-driven. I had assumed that, just like the drivers for the other browsers, it was developed by Microsoft employees. I apologize for striking the wrong tone on that point; I didn't mean to insult anyone.
Have you simply looked through the code, or did you actually try to develop a solution to the problem you described?
Reading your short description of the architecture ... What speaks against offering a special "click" API that would definitely wait for the DocumentComplete event before allowing further execution? It would then lie in the hands of the API's user to determine when a click causes a page navigation and when it doesn't.
That would give me the ability to actually achieve what I want, because as the page owner I certainly know which clicks cause a page navigation and which don't.
Then there is still the second problem you mentioned, which, frankly, I don't quite understand yet; I would have to look into the code to fully understand it. But maybe there is a way of ensuring that what you described doesn't happen. That this solution might come with a lot of overhead wouldn't matter, because it would only be offered through a special click function that is KNOWN to be inefficient and ONLY meant for clicks that cause a page navigation. This way it would again be the user's responsibility to pick whichever variant is suited.
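To make the idea concrete, here is a rough sketch of what such an extension could look like from the caller's side. Everything here is invented for illustration: neither the interface nor the method exists in Selenium or the IE driver.

```java
import org.openqa.selenium.WebElement;

/**
 * Hypothetical extension interface for drivers that can block on
 * navigation. Names and signature are invented for illustration only.
 */
public interface NavigationAwareDriver {

    /**
     * Clicks the element and, instead of returning as soon as the click
     * is dispatched, blocks until the browser has fired DocumentComplete
     * for the resulting navigation, or until timeoutMillis elapses.
     * The caller opts in, so they accept the extra overhead and assert
     * that this particular click really does navigate.
     */
    void clickAndWaitForNavigation(WebElement element, long timeoutMillis);
}
```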
As for the improvements to make in the code ... Well, it's not always that simple. Consider that the element I am looking for already exists on the page, and now I want to see whether its text has changed once the page navigation has finished.
I assume that ExpectedConditions.textToBePresentInElement does not actually re-fetch the passed element on each poll.
So I would have to write custom code to achieve this, possibly a while loop with a custom timer that fetches the element again and again and checks its content.
That is definitely possible, but you can already see that it would bloat the code far beyond the example you gave.
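For what it's worth, such a loop doesn't need a hand-rolled timer. A WebDriverWait with a condition that re-finds the element by its locator on every poll covers it; the locator and expected text below are placeholders:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.StaleElementReferenceException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.WebDriverWait;

public class RefetchingTextWait {

    /**
     * Waits until the element found by `locator` contains `text`,
     * re-finding the element on every poll so that a stale reference
     * left over from a page navigation doesn't abort the wait.
     */
    public static void waitForText(WebDriver driver, By locator, String text) {
        new WebDriverWait(driver, Duration.ofSeconds(10))
            // The old element going stale mid-poll just means: poll again.
            .ignoring(StaleElementReferenceException.class)
            .until(d -> d.findElement(locator).getText().contains(text));
    }
}
```

Selenium also ships ExpectedConditions.textToBePresentInElementLocated(By, String), which, unlike the WebElement-based overload, re-finds the element by locator on each poll, so for this particular case a one-liner may already do.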
I would really love to craft a utility class that wraps the driver's API in a more "secure" API, meaning I still call .click() but internally have the waits and everything. But there are many custom cases, so I would have to provide a lot of methods, like clickAndWaitForTextToBePresentInElement(WebElement, String, boolean), where the boolean controls whether the element is re-fetched, etc.
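A minimal sketch of what I mean, assuming such a wrapper; the class and method names are invented, and I added a By parameter because re-fetching needs a locator to re-find the element by:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

/** Hypothetical wrapper that adds explicit waits around click(). */
public class SafeDriver {
    private final WebDriver driver;
    private final Duration timeout = Duration.ofSeconds(10);

    public SafeDriver(WebDriver driver) {
        this.driver = driver;
    }

    /**
     * Clicks `element`, then waits until the element identified by
     * `locator` contains `text`. If `refetch` is true, the element is
     * re-found by locator on every poll (surviving a page navigation);
     * otherwise the original WebElement reference is polled directly.
     */
    public void clickAndWaitForText(WebElement element, By locator,
                                    String text, boolean refetch) {
        element.click();
        WebDriverWait wait = new WebDriverWait(driver, timeout);
        if (refetch) {
            wait.until(ExpectedConditions
                .textToBePresentInElementLocated(locator, text));
        } else {
            wait.until(ExpectedConditions
                .textToBePresentInElement(element, text));
        }
    }
}
```

And that's just one condition; every other ExpectedCondition I care about would need its own clickAndWaitFor... variant, which is exactly the bloat I mean.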
There is one thing I wonder, though ... Are the Chrome and Firefox browsers designed better? Or how else were their drivers able to handle this page-navigation issue reliably?
You said employees designed those drivers, but employees aren't necessarily better. They might have more time on their hands, but I don't think that's it.
Did those two perhaps actually adjust their browsers' APIs to work with Selenium?