--
You received this message because you are subscribed to the Google Groups "robotframework-users" group.
To post to this group, send email to robotframe...@googlegroups.com.
To unsubscribe from this group, send email to robotframework-u...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/robotframework-users?hl=en.
| Documentation: | Clicks a button identified by `locator`. |
|---|---|
| Start / End / Elapsed: | 20120321 11:18:11.910 / 20120321 11:19:42.622 / 00:01:30.712 |
| 11:19:42.568 | INFO | Clicking button 'css=input[type="submit"]'. |
| 11:19:42.568 | DEBUG | Parsed locator 'css=input[type="submit"]' to search expression 'css=input[type="submit"]' |
| 11:19:42.621 | FAIL | Timed out after 90000.0ms |
| 11:19:42.621 | DEBUG | Traceback (most recent call last): |

      File "<string>", line 2, in click_button
      File "C:\Python27\lib\site-packages\SeleniumLibrary\runonfailure.py", line 40, in _run_on_failure_wrapper
        return method(*args, **kwargs)
      File "C:\Python27\lib\site-packages\SeleniumLibrary\click.py", line 82, in click_button
        self._click(self._parse_locator(locator, 'input'), dont_wait)
      File "C:\Python27\lib\site-packages\SeleniumLibrary\click.py", line 107, in _click
        dont_wait)
      File "C:\Python27\lib\site-packages\SeleniumLibrary\click.py", line 122, in _click_or_click_at
        self.wait_until_page_loaded()
      File "<string>", line 2, in wait_until_page_loaded
      File "C:\Python27\lib\site-packages\SeleniumLibrary\runonfailure.py", line 40, in _run_on_failure_wrapper
        return method(*args, **kwargs)
      File "C:\Python27\lib\site-packages\SeleniumLibrary\browser.py", line 191, in wait_until_page_loaded
        self._selenium.wait_for_page_to_load(timeout * 1000)
      File "C:\Python27\lib\site-packages\SeleniumLibrary\selenium.py", line 1751, in wait_for_page_to_load
        self.do_command("waitForPageToLoad", [timeout,])
      File "C:\Python27\lib\site-packages\SeleniumLibrary\selenium.py", line 217, in do_command
        raise Exception, data
Hi all!
I have read the questions and answers here, and I'd like to know if someone can help me with my problem.
My main problem is downloading a file from a pop-up window using the keywords that Robot Framework provides. Those keywords only work with the GUI inside the browser, while the download and upload dialogs belong to the operating system, so I can't take control of them.
Trying to find a solution, I found a library for the framework (AutoIt) that lets me control OS windows, but for a reason that is still not clear to me I can only manipulate the buttons in the upload window, not in the download window.
After that I tried to configure the browser I use for the tests (Firefox) to automatically accept all downloads of a specific document type, so that I wouldn't have to manipulate the download window at all. However, the "Open Browser" keyword with a profile as a parameter doesn't work: it opens the browser, but without the modification that accepts the downloads.
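For reference, the Firefox behaviour described here (saving certain file types without showing the download dialog) is normally set through preferences in the profile that gets passed to the browser. This is only a sketch; the download directory and MIME types below are hypothetical and must match the actual files being downloaded:

```
// user.js inside the Firefox profile directory
user_pref("browser.download.folderList", 2);          // 2 = use a custom download directory
user_pref("browser.download.dir", "C:\\Downloads");   // hypothetical target directory
// MIME types to save to disk without asking (comma-separated):
user_pref("browser.helperApps.neverAsk.saveToDisk", "application/pdf,text/csv");
```

If the dialog still appears, a common cause is that the server sends a different Content-Type than the one listed in `neverAsk.saveToDisk`.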
That is basically my main problem. Have you ever been in a similar situation or worked on something similar? Is there anything you can suggest to help me get past this blocker?
Ugh
Before doing the download, one should really consider whether it is necessary at all. If one is only going to download the file and not perform any checks on it, perhaps it isn't worth the effort to download the file in the first place.
If one really needs to download the file (at least I have to do such a thing), most of the time the download mechanism itself is not interesting. For me it doesn't matter how the file is downloaded; only the file content matters. I don't download the file using the browser, because that is very annoying for all the reasons you explained. Instead I use cURL to download the file, and then I check the content of the file. If I lived in the *nix world, I would use wget.
Lately I have been thinking of moving that cURL logic into a library keyword and using requests[1] to download the file, because setting up all the headers, the session ID and so on with cURL is really hard to debug when something doesn't work. Other than that, cURL works really well, and the debugging issue is a minor one.
With cURL or wget you can perform the download manually first, and once you get it working you can move that logic into a keyword.
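The "download in code, then check the content" approach can be sketched as a small Python library keyword. This is a minimal illustration using only the standard library's urllib (Tatu's requests suggestion would look similar, with nicer session and header handling); the function name is made up, not part of any existing library:

```python
import urllib.request


def download_file_should_contain(url, expected_text, timeout=30):
    """Hypothetical library keyword: download `url` and fail unless
    the body contains `expected_text`. Returns the body for further checks."""
    with urllib.request.urlopen(url, timeout=timeout) as response:
        # Decode leniently; binary formats would be checked differently.
        body = response.read().decode("utf-8", errors="replace")
    if expected_text not in body:
        raise AssertionError(
            "Downloaded file from %s does not contain %r" % (url, expected_text))
    return body
```

Imported into a suite as a library, this would surface as a keyword like `Download File Should Contain  ${url}  ${expected}`, bypassing the browser's download dialog entirely.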
-Tatu
[1] http://docs.python-requests.org/en/master/