headless chrome & aws lambda


tadeus...@gmail.com

Feb 16, 2017, 11:39:54 AM2/16/17
to headless-dev
Hi,

Has anyone tried to run headless chrome inside aws lambda function? Would that be even possible?

Justin Ribeiro

Feb 16, 2017, 3:44:38 PM2/16/17
to tadeus...@gmail.com, headless-dev
Lambda won't let you run headless within the code environment. That said, Lambda can start Container Service tasks, which means you could run headless in an ECS cluster via Docker and then use either job queues or other triggers to do your headless work as needed.
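
Something along these lines, for example (a rough sketch using the Node.js aws-sdk; the cluster and task definition names here are made up):

// Hypothetical Lambda handler that kicks off a headless Chrome task on an ECS cluster
const AWS = require('aws-sdk')
const ecs = new AWS.ECS()

exports.handler = (event, context, callback) => {
  ecs.runTask({
    cluster: 'headless-chrome-cluster',      // hypothetical cluster name
    taskDefinition: 'headless-chrome-task',  // hypothetical task definition
    count: 1,
  }, (err, data) => {
    if (err) return callback(err)
    callback(null, { startedTasks: data.tasks.length })
  })
}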

On Thu, Feb 16, 2017 at 8:39 AM <tadeus...@gmail.com> wrote:
Hi,

Has anyone tried to run headless chrome inside aws lambda function? Would that be even possible?


tadeus...@gmail.com

Feb 17, 2017, 5:07:17 AM2/17/17
to headless-dev, tadeus...@gmail.com
Thanks. I know people run PhantomJS in Lambda, hence the question. PhantomJS is a standalone binary, so it's relatively easy to do; with Chrome, not so much, since Lambda runs on the Amazon Linux AMI, which is based on CentOS and does not include all the dependencies Chrome requires (and there is no way to install them)... but I wonder if it would be possible to provide all those dependencies in chrome/opt/google/chrome/lib and then try? Sounds difficult, and what's more, Lambda has a 250 MB limit on total code size, so it will be hard to fit within that. Are there any plans to provide a version of headless Chrome that only runs in that mode? Could that mean it would be smaller in size and require fewer external dependencies?

Justin Ribeiro

Feb 17, 2017, 9:36:50 AM2/17/17
to tadeus...@gmail.com, headless-dev
I can't speak to plans, but you can use headless as a lib to embed into any C++ application you like (see https://cs.chromium.org/chromium/src/headless/app/headless_example.cc). You could reasonably build exactly what you're looking to do (I package headless into a deb, for instance, and other folks are doing their own embeds inside existing apps).

That, and I misspoke: it's not that Lambda won't run a binary, but rather that I wouldn't approach/use Lambda in that way (separation allows more control, particularly when you have something that potentially needs to run long or has memory requirements, which you don't control in Lambda).

tadeus...@gmail.com

Feb 17, 2017, 10:17:13 AM2/17/17
to headless-dev
I guess you're right about using Amazon ECS for that; I could use a normal Chrome build then. Still, in some cases, running headless Chrome straight inside Lambda would be great. Thanks for your help!

Marco Lüthy

Mar 13, 2017, 7:01:26 AM3/13/17
to headless-dev, tadeus...@gmail.com
Mostly for shits 'n' giggles I had a go at running headless chrome in AWS Lambda.

I was able to build headless_shell in an EC2 instance mimicking the Lambda execution environment (using the AMI named amzn-ami-hvm-2016.03.3.x86_64-gp2). Compiling a non-debug build yields a headless_shell and a libosmesa.so which, combined, are just under 44 MB when gzipped. This means it fits within the Lambda function package's 50 MB file size limitation. So far so good. The headless_shell binary (and code to use CDP via a websocket) runs fine in another new, clean instance created from the amzn-ami-hvm-2016.03.3.x86_64-gp2 AMI... but it's all downhill from here.

The headless_shell executable will run in the Lambda function. On startup of the Lambda function, executing headless_shell logs these errors:

$ /tmp/headless-chrome/headless_shell --disable-gpu --no-sandbox --remote-debugging-port=9222 --user-data-dir=/tmp --trace-startup=*
prctl(PR_SET_NO_NEW_PRIVS) failed
[0313/090756.636269:WARNING:discardable_shared_memory_manager.cc(193)] Less than 64MB of free space in temporary directory for shared memory files: 0

From within the Lambda function's execution, a GET to http://127.0.0.1/json/new returns seemingly without issue:

{ description: '',
devtoolsFrontendUrl: '/devtools/inspector.html?ws=127.0.0.1:9222/devtools/page/251f985e-5724-4515-b1bf-7a9a921dc574',
id: '251f985e-5724-4515-b1bf-7a9a921dc574',
title: '',
type: 'page',
url: 'about:blank',

Then, opening a connection to the websocket given by webSocketDebuggerUrl is also successful.
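
For context, the connection code looks roughly like this (a simplified sketch using the ws package, not the exact code from my repo):

const http = require('http')
const WebSocket = require('ws')

// Ask headless_shell for a new tab, then connect to its DevTools websocket
http.get('http://127.0.0.1:9222/json/new', res => {
  let body = ''
  res.on('data', chunk => { body += chunk })
  res.on('end', () => {
    const tab = JSON.parse(body)
    const socket = new WebSocket(tab.webSocketDebuggerUrl)

    socket.on('open', () => {
      // Send a Chrome Debugger Protocol command over the socket
      socket.send(JSON.stringify({ id: 1, method: 'Page.navigate', params: { url: 'https://example.com/' } }))
    })

    socket.on('message', data => console.log('CDP message:', data))
  })
})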

However, not a whole lot happens when you send a message over the socket to control Chrome with the Chrome Debugger Protocol. For example, the Lambda function sends:

{ "id": 1, "method": "Log.enable", "params": {} }  ==> No messages received from the socket
{ "id": 1, "method": "Domain.Doesnt.Exist.Fake", "params": {} }  ==> No messages received from the socket
{ "id": 1, "method": "Network.enable", "params": {} }  ==> No messages received from the socket
{ "id": 1, "method": "Page.enable", "params": {} }  ==> No messages received from the socket
{ "id": 1, "method": "Page.navigate", "params": { "url": "https://google.com/" } }  ==> results in message from socket: { "method": "Inspector.targetCrashed", "params": {} }

Inspector.targetCrashed sounds a lot like the "Aw, Snap!" error you get in regular Chrome when the dev tools crash.

In the headless_shell startup, I don't really understand the prctl error. I assume it has something to do with Chrome trying to gain more privileges. I have no idea if this is a show stopper, but in the Lambda container we have very few privileges. We can't write anywhere other than /tmp.

The second issue seems to be the result of the Lambda function's container not having a tmpfs at /dev/shm. According to Comment 19 in this bug thread, "Chrome will still work and render pages correctly even if the discardable memory limit is set to 0. Performance will just suffer as nothing will be cached." But maybe that's not true?

It's quite a mess and definitely in that hack-shit-together-and-get-something-working phase, but the code, including how I built headless Chrome for Lambda, is available here. The code running in Lambda is in src/argh.js, here.

A startup trace for headless_shell is available here. I don't know how to make much sense of this file. Does it tell us anything useful?

I wonder if any Chrome developers might be able to provide some context/details on prctl and the /dev/shm warning/failures—and whether or not they're show stoppers.

Sami Kyostila

Mar 13, 2017, 7:51:34 AM3/13/17
to Marco Lüthy, headless-dev, tadeus...@gmail.com
Thanks Marco, this is pretty interesting. PR_SET_NO_NEW_PRIVS is basically used as a security measure to limit the capabilities of the renderer processes. Chrome should still work even if that fails.

The lack of /dev/shm might be a show stopper but it's hard to say for sure without more information. Maybe try with --enable-logging --v=99? Another thing to try would be --single-process.

By the way, if you're using --disable-gpu you won't need libosmesa.so.

- Sami

Marco Lüthy

Mar 13, 2017, 9:30:15 AM3/13/17
to headless-dev, marco....@gmail.com, tadeus...@gmail.com
Thank you for your quick reply, Sami!

The --single-process suggestion gets me past the Inspector.targetCrashed error! All of my Chrome Debugger Protocol messages sent over the websocket to Chrome now return a response message. Unfortunately we're not quite there yet, though. The CDP Page.navigate method appears to start loading the page successfully, but then bails out after Chrome gets a response.

Over the CDP websocket, I send to Chrome:

{ "id": 1, "method": "Page.navigate", "params": { "url": "https://github.com/adieuadieu" } } 

CDP replies with (more of me talking/rambling after the JSON):

{ "method": "Page.frameStartedLoading", "params": { "frameId": "9.2" } }

{ "method": "Network.requestWillBeSent", "params": { "requestId": "9.1", "frameId": "9.2", "loaderId": "9.1", "documentURL": "https://github.com/adieuadieu", "request": { "url": "https://github.com/adieuadieu", "method": "GET", "headers": { "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/59.0.3039.0 Safari/537.36" }, "mixedContentType": "none", "initialPriority": "VeryHigh", "referrerPolicy": "no-referrer-when-downgrade" }, "timestamp": 3748.707215, "wallTime": 1489410281.82881, "initiator": { "type": "other" }, "type": "Document" } }

{ "method": "Network.responseReceived", "params": { "requestId": "9.1", "frameId": "9.2", "loaderId": "9.1", "timestamp": 3749.183834, "type": "Document", "response": { "url": "https://github.com/adieuadieu", "status": 200, "statusText": "OK", "headers": { "Date": "Mon, 13 Mar 2017 13:04:42 GMT", "Content-Encoding": "gzip", "X-Content-Type-Options": "nosniff", "Transfer-Encoding": "chunked", "Status": "200 OK", "Vary": "X-Requested-With, Accept-Encoding", "X-XSS-Protection": "1; mode=block", "Public-Key-Pins": "max-age=5184000; pin-sha256=\"WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18=\"; pin-sha256=\"RRM1dGqnDFsCJXBTHky16vi1obOlCgFFn/yOhI/y+ho=\"; pin-sha256=\"k2v657xBsOVe1PQRwOsHsw3bsGT2VzIqz5K+59sNQws=\"; pin-sha256=\"K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q=\"; pin-sha256=\"IQBnNBEiFuhj+8x6X8XLgh01V9Ic5/V3IRQLNFFc7v4=\"; pin-sha256=\"iie1VXtL7HzAMF+/PVPR9xzT80kQxdZeJ+zduCB3uj0=\"; pin-sha256=\"LvRiGEjRqfzurezaWuj8Wie2gyHMrW5Q06LspMnox7A=\"; includeSubDomains", "X-UA-Compatible": "IE=Edge,chrome=1", "X-Runtime": "0.207397", "Server": "GitHub.com", "X-GitHub-Request-Id": "8C86:15B94:54F3AB9:8A898AD:58C698E9", "X-Frame-Options": "deny", "X-Served-By": "9e87b28fb3a65e563c8d72545356c3e0", "Strict-Transport-Security": "max-age=31536000; includeSubdomains; preload", "Content-Type": "text/html; charset=utf-8", "X-Request-Id": "15a2d73b52420af45488b994000d3047", "Cache-Control": "no-cache", "Content-Security-Policy": "default-src 'none'; connect-src 'self' uploads.github.com status.github.com collector.githubapp.com api.github.com www.google-analytics.com github-cloud.s3.amazonaws.com wss://live.github.com; font-src assets-cdn.github.com; frame-src render.githubusercontent.com; img-src 'self' data: assets-cdn.github.com identicons.github.com collector.githubapp.com github-cloud.s3.amazonaws.com *.githubusercontent.com; media-src 'none'; script-src assets-cdn.github.com; style-src 'unsafe-inline' assets-cdn.github.com" }, "mimeType": "text/html", "connectionReused": false, "connectionId": 10, "remoteIPAddress": "192.30.253.113", "remotePort": 443, "fromDiskCache": false, "fromServiceWorker": false, "encodedDataLength": 1982, "timing": { "requestTime": 3748.707744, "proxyStart": -1, "proxyEnd": -1, "dnsStart": 0.199000000066008, "dnsEnd": 12.7460000003339, "connectStart": 12.7460000003339, "connectEnd": 173.563000000286, "sslStart": 88.9600000000428, "sslEnd": 173.52800000026, "workerStart": -1, "workerReady": -1, "sendStart": 173.629000000346, "sendEnd": 173.664000000372, "pushStart": 0, "pushEnd": 0, "receiveHeadersEnd": 474.75200000008 }, "protocol": "http/1.1", "securityState": "secure", "securityDetails": { "protocol": "TLS 1.2", "keyExchange": "ECDHE_RSA", "keyExchangeGroup": "P-256", "cipher": "AES_128_GCM", "certificateId": 0, "subjectName": "github.com", "sanList": [ "github.com", "www.github.com" ], "issuer": "DigiCert SHA2 Extended Validation Server CA", "validFrom": 1457568000, "validTo": 1526558400, "signedCertificateTimestampList": [ { "status": "Verified", "origin": "Embedded in certificate", "logDescription": "Google 'Pilot' log", "logId": "A4B90990B418581487BB13A2CC67700A3C359804F91BDFB8E377CD0EC80DDC10", "timestamp": 1457630341662, "hashAlgorithm": "SHA-256", "signatureAlgorithm": "ECDSA", "signatureData": "3045022100871D2118FD138ADBFB0E9636CA68D11C296CFA0711C934F3AD8D2CAE5674A7E1022027A46ABD86D25F5BCA2DE5FBBE99CE7C201F4B663C941E5134CC24EAEB364220" }, { "status": "Verified", "origin": "Embedded in certificate", "logDescription": "Google 'Aviator' log", "logId": 
"68F698F81F6482BE3A8CEEB9281D4CFC71515D6793D444D10A67ACBB4F4FFBC4", "timestamp": 1457630341607, "hashAlgorithm": "SHA-256", "signatureAlgorithm": "ECDSA", "signatureData": "3045022100D9A5DE52FB7B68F24EE57037960618890128984E4DAB3404F6EA555A337C615B0220354AAB908383669460FA4861A7C6A0EB907C9AED29E095009A44436E262746F6" }, { "status": "Verified", "origin": "Embedded in certificate", "logDescription": "DigiCert Log Server", "logId": "5614069A2FD7C2ECD3F5E1BD44B23EC74676B9BC99115CC0EF949855D689D0DD", "timestamp": 1457630341785, "hashAlgorithm": "SHA-256", "signatureAlgorithm": "ECDSA", "signatureData": "3046022100E79B7592B65BC4F7D1828B34B1F941AD1A6424D964E89283E0A3585F8AFF3320022100FAD8797AC182C780F635165A807822F99C66DB218D7B289D3F0C206D6ED7317C" } ] } } } }

{ "method": "Network.loadingFailed", "params": { "requestId": "9.1", "timestamp": 3749.184127, "type": "Document", "errorText": "net::ERR_INSUFFICIENT_RESOURCES", "canceled": false } }


:-(

I can reproduce the same ERR_INSUFFICIENT_RESOURCES message by running headless_shell on an EC2 instance where I've unmounted the /dev/shm tmpfs mount point.

Is it possible to tell Chrome to use /tmp instead of /dev/shm? Or, point at a different location for /dev/shm?

During its execution, the Lambda function used a max of 179 MB. It has 1536 MB available.


Here's a log of what headless_shell spits out on stderr/out. Also, the startup trace log is here.

[0313/130426.711586:WARNING:resource_bundle.cc(338)] locale_file_path.empty() for locale 
prctl(PR_SET_NO_NEW_PRIVS) failed
[0313/130426.716834:WARNING:discardable_shared_memory_manager.cc(193)] Less than 64MB of free space in temporary directory for shared memory files: 0
[0313/130426.725816:VERBOSE1:webrtc_internals.cc(106)] Could not get the download directory.
[0313/130426.726829:VERBOSE1:proxy_service.cc(958)] PAC support disabled because there is no system implementation
[0313/130426.730730:WARNING:histograms.cc(40)] Started multiple compositor clients (Browser, Renderer) in one process. Some metrics will be disabled.
[0313/130426.904640:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: Google 'Pilot' log
[0313/130426.904672:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: Google 'Aviator' log
[0313/130426.904680:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: DigiCert Log Server
[0313/130426.904685:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: Google 'Rocketeer' log
[0313/130426.904691:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: Symantec log
[0313/130426.904697:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: Venafi log
[0313/130426.904702:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: Symantec 'Vega' log
[0313/130426.904707:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: CNNIC CT log
[0313/130426.904712:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: WoSign log
[0313/130426.904717:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: StartCom CT log
[0313/130426.904722:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: Google 'Skydiver' log
[0313/130426.904727:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: Google 'Icarus' log
[0313/130426.904733:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: Izenpe log
[0313/130426.904738:VERBOSE1:multi_log_ct_verifier.cc(75)] Adding CT log: Certly.IO log
[0313/130431.723516:INFO:browser_main_loop.cc(305)] Completed startup tracing to chrometrace.log
[0313/130441.999919:VERBOSE1:nss_ocsp.cc(586)] OCSP create session: host=ocsp.digicert.com port=80
[0313/130441.999957:ERROR:nss_ocsp.cc(591)] No URLRequestContext for NSS HTTP handler. host: ocsp.digicert.com
[0313/130441.999972:VERBOSE1:nss_ocsp.cc(586)] OCSP create session: host=ocsp.digicert.com port=80
[0313/130441.999981:ERROR:nss_ocsp.cc(591)] No URLRequestContext for NSS HTTP handler. host: ocsp.digicert.com
[0313/130442.000007:VERBOSE1:nss_ocsp.cc(586)] OCSP create session: host=crl4.digicert.com port=80
[0313/130442.000017:ERROR:nss_ocsp.cc(591)] No URLRequestContext for NSS HTTP handler. host: crl4.digicert.com
[0313/130442.001003:VERBOSE1:nss_ocsp.cc(586)] OCSP create session: host=ocsp.digicert.com port=80
[0313/130442.001017:ERROR:nss_ocsp.cc(591)] No URLRequestContext for NSS HTTP handler. host: ocsp.digicert.com
[0313/130442.001024:VERBOSE1:nss_ocsp.cc(586)] OCSP create session: host=ocsp.digicert.com port=80
[0313/130442.001028:ERROR:nss_ocsp.cc(591)] No URLRequestContext for NSS HTTP handler. host: ocsp.digicert.com
[0313/130442.001040:VERBOSE1:nss_ocsp.cc(586)] OCSP create session: host=crl4.digicert.com port=80
[0313/130442.001044:ERROR:nss_ocsp.cc(591)] No URLRequestContext for NSS HTTP handler. host: crl4.digicert.com
[0313/130442.304115:VERBOSE7:cookie_monster.cc(1728)] SetCookie() line: logged_in=no; domain=.github.com; path=/; expires=Fri, 13 Mar 2037 13:04:42 -0000; secure; HttpOnly
[0313/130442.304201:VERBOSE7:cookie_monster.cc(1766)] SetCookie() key: github.com cc: name: logged_in value: no domain: .github.com path: / creation: 1489410282
[0313/130442.304269:VERBOSE7:cookie_monster.cc(1728)] SetCookie() line: _gh_sess=eyJzZXNzaW9uX2lkIjoiMzAwNzE0MDEwZTg0MzZmNzdkOTFmNmQ4M2Q0OTJhMGMiLCJfY3NyZl90b2tlbiI6ImswcnBtWVNyaWI0REpwLzFicG16c2k3QytzbHBqNStHalE0SUpnRDBPNUE9In0%3D--d04e18c28fe3969ce400a69e6e2898e022d95feb; path=/; secure; HttpOnly
[0313/130442.304305:VERBOSE7:cookie_monster.cc(1766)] SetCookie() key: github.com cc: name: _gh_sess value: eyJzZXNzaW9uX2lkIjoiMzAwNzE0MDEwZTg0MzZmNzdkOTFmNmQ4M2Q0OTJhMGMiLCJfY3NyZl90b2tlbiI6ImswcnBtWVNyaWI0REpwLzFicG16c2k3QytzbHBqNStHalE0SUpnRDBPNUE9In0%3D--d04e18c28fe3969ce400a69e6e2898e022d95feb domain: github.com path: / creation: 1489410282
[0313/130442.306131:VERBOSE1:navigator_impl.cc(240)] Failed Provisional Load: https://github.com/adieuadieu, error_code: -12, error_description: , showing_repost_interstitial: 0, frame_id: 4




Marco

Sami Kyostila

Mar 13, 2017, 9:38:18 AM3/13/17
to Marco Lüthy, headless-dev, tadeus...@gmail.com
Maybe try tracing where the ERR_INSUFFICIENT_RESOURCES error is being generated from? It might be that we generally assume /dev/shm to be available.

- Sami

depre...@gmail.com

Mar 14, 2017, 11:50:45 AM3/14/17
to headless-dev, tadeus...@gmail.com
I have exactly the same setup and need as Marco, and I can confirm the same ERR_INSUFFICIENT_RESOURCES error on page navigation, even though the Lambda process only used a fraction of the available memory. I tried tracing to see what's happening; unfortunately, all the trace switches --trace-{startup | shutdown} seem to be ignored when --single-process is also present, and no output trace files are produced. I tried various combinations of switches, to no avail... So I'm kinda stuck, any ideas on how to move on? Running Chromium headless in a Lambda would be really convenient for our case!

Sami Kyostila

Mar 14, 2017, 12:22:36 PM3/14/17
to depre...@gmail.com, headless-dev, tadeus...@gmail.com
It's likely that the lack of /dev/shm is the root cause here. Traces likely wouldn't help here since they don't have full coverage of low level things like this. I'd suggest adding some logging statements (e.g., LOG(ERROR), base::debug::StackTrace().Print() is useful too) to anything using shared memory or using a tool like strace to see where things are going wrong.

- Sami


Marco Lüthy

Mar 14, 2017, 8:24:54 PM3/14/17
to headless-dev, depre...@gmail.com, tadeus...@gmail.com
Good news: it's possible to successfully run headless Chrome in Lambda!

The ERR_INSUFFICIENT_RESOURCES error is indeed related to Chrome trying to use the tmpfs at /dev/shm. It looks like on Linux builds, using shared memory at /dev/shm is hard-coded into Chrome. By modifying GetShmemTempDir() in the Chrome source code in base/files/file_util_posix.cc (starts at line 928) to return /tmp instead of /dev/shm, I was able to make a build of headless Chrome which works correctly in Lambda and gets me past the ERR_INSUFFICIENT_RESOURCES error.


bool GetShmemTempDir(bool executable, FilePath* path) {
#if defined(OS_LINUX)
  bool use_dev_shm = true;
  if (executable) {
    static const bool s_dev_shm_executable = DetermineDevShmExecutable();
    use_dev_shm = s_dev_shm_executable;
  }

  // cuz lambda
  use_dev_shm = false;

  if (use_dev_shm) {
    *path = FilePath("/dev/shm");
    return true;
  }
#endif
  return GetTempDir(path);  // <-- this part's interesting, as it suggests it's OK to run Chrome without /dev/shm
}


It looks like the fallback behaviour of GetShmemTempDir is to return /tmp, so perhaps this hack isn't as awful as it feels. I have no clue what the repercussions of this change are for Chrome, other than it probably being much slower than using /dev/shm... which is not ideal.

Does the community think it's possible/acceptable to add a flag to provide a path to the shared memory dir or a flag to disable it in Chrome?


Marco
lambda-hack.patch

Marco Lüthy

Mar 15, 2017, 9:29:16 AM3/15/17
to headless-dev, depre...@gmail.com, tadeus...@gmail.com
Although it's still a work in progress, I've packaged this all up into a project here: https://github.com/adieuadieu/serverless-chrome

I hope it'll help get other people started more quickly.


Marco

Sami Kyostila

Mar 15, 2017, 12:42:50 PM3/15/17
to Marco Lüthy, headless-dev, depre...@gmail.com, tadeus...@gmail.com
Thanks for tracking this down. One option could be to add code that checks if /dev/shm is available and falls back to /tmp if not (with a warning).

- Sami

Will Rubel

Mar 15, 2017, 10:46:25 PM3/15/17
to headless-dev, depre...@gmail.com, tadeus...@gmail.com
So what do I need to do with headless_shell in Lambda to test it out? Can you provide a snippet?

Thanks!

Marco Lüthy

Mar 16, 2017, 1:22:18 AM3/16/17
to headless-dev, depre...@gmail.com, tadeus...@gmail.com
In my Lambda function, I'm running the headless shell with this command:

/path/to/headless_shell --single-process --disable-gpu --no-sandbox --remote-debugging-port=9222 --user-data-dir=/tmp/user-data  --data-path=/tmp/data-path --homedir=/tmp --disk-cache-dir=/tmp/cache-dir

I haven't thoroughly checked which of those /tmp path flags are actually required. I pass them all because /tmp is the only writeable place we have access to from the Lambda function.

How you spawn the headless_shell process from within your Lambda function's code will depend on what language you're using. So far I've only done it from Node.js with something like:

const os = require('os')
const childProcess = require('child_process')

const chrome = childProcess.spawn(
  '/var/task/your/path/to/headless_shell',
  [
    '--single-process',
    '--disable-gpu',
    '--no-sandbox',
    '--remote-debugging-port=9222',
    '--user-data-dir=/tmp/user-data',
    '--data-path=/tmp/data-path',
    '--homedir=/tmp',
    '--disk-cache-dir=/tmp/cache-dir',
  ],
  {
    cwd: os.tmpdir(),
    shell: true,
    detached: true,
    stdio: 'ignore', // if you want to listen to events from stdout or stderr, remove this. However, doing so will cause the Lambda function to never exit correctly despite detached: true and unref(). I dunno why.
  },
)

chrome.unref()

Once you've spawned headless_shell, in your Lambda function's code you can then use the Chrome Debugger Protocol, which was set to run on port 9222 with the --remote-debugging-port=9222 flag, to drive/control headless Chrome. It takes headless Chrome a few hundred milliseconds to start up, so be sure to wait for it to finish its start-up before trying to connect. I haven't seen anything reliable that headless Chrome logs to stdout when it's finished starting up, so I just try GET requests to http://127.0.0.1:9222/json until I get a response (or time out).
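
Something like this is what I mean by the polling (a rough sketch with a hypothetical waitUntilChromeIsReady() helper, not code from the repo):

const http = require('http')

// Poll the DevTools HTTP endpoint until headless Chrome answers, or give up after maxAttempts
function waitUntilChromeIsReady (attempt = 0, maxAttempts = 50) {
  return new Promise((resolve, reject) => {
    const request = http.get('http://127.0.0.1:9222/json', () => resolve())

    request.on('error', () => {
      if (attempt >= maxAttempts) return reject(new Error('Chrome did not start in time'))
      // try again in 100 ms
      setTimeout(() => waitUntilChromeIsReady(attempt + 1, maxAttempts).then(resolve, reject), 100)
    })
  })
}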

I use the chrome-remote-interface package to wrap the Chrome Debugger Protocol. Be aware that chrome-remote-interface depends on ws@2.x, which uses Node v6's Buffer.from, which does not exist in Node 4.3 (which Lambda still uses... sadly). Either force install ws@1.x, or polyfill Buffer.from.
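
If you go the polyfill route, something as small as this should cover the common string/array cases (a rough sketch; on Node 4.3 the old Buffer constructor accepts the same arguments for those cases):

// Polyfill Buffer.from for Node 4.3, where it doesn't exist yet
if (typeof Buffer.from !== 'function') {
  Buffer.from = function (value, encodingOrOffset, length) {
    return new Buffer(value, encodingOrOffset, length)
  }
}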

Lambda function handler code might look something like:

const Cdp = require('chrome-remote-interface')

module.exports.run = function(event, context, callback) {
  codeToSpawnChromeShownAbove()

  Cdp.Version(function(err, info) {
    callback(null, info)
  })
}

If you don't want to do all of the above from scratch, this might give you a jump start:

npm install serverless@latest -g
cd serverless-chrome
npm install

Then provide your AWS permissions. Either

export AWS_ACCESS_KEY_ID=<your-key-here>
export AWS_SECRET_ACCESS_KEY=<your-secret-key-here>

Or if you make use of profiles in ~/.aws/credentials

export AWS_PROFILE=<your-profile-name>

Then, modify config.js:

import Cdp from 'chrome-remote-interface'
import { sleep } from './src/utils'

const LOAD_TIMEOUT = 1000 * 30

export default {
  logging: true,
  async handler (event) {
    const requestsMade = []
    let loaded = false

    const loading = async (startTime = Date.now()) => {
      if (!loaded && Date.now() - startTime < LOAD_TIMEOUT) {
        await sleep(100)
        await loading(startTime)
      }
    }

    const tab = await Cdp.New({ host: '127.0.0.1' })
    const client = await Cdp({ host: '127.0.0.1', tab })

    const { Network, Page } = client

    Network.requestWillBeSent(params => requestsMade.push(params))

    Page.loadEventFired(() => {
      loaded = true
    })

    const versionInfo = await Cdp.Version()

    await Network.enable()

    await Page.enable()

    await Page.navigate({ url: 'https://www.chromium.org/' })

    // wait until page is done loading, or timeout
    await loading()

    // this seems to be important. If you don't close the connection, Lambda just hangs until killed after max execution duration
    await client.close()

    return {
      statusCode: 200,
      body: JSON.stringify({
        versionInfo,
        requestsMade,
      }),
      headers: {
        'Content-Type': 'application/json',
      },
    }
  },
}

Then, deploy the function with:

serverless deploy

At the end of deploying, the script should provide you with an API Gateway endpoint, something like:

endpoints:
    GET - https://XXXXXXXX.execute-api.us-west-2.amazonaws.com/dev/chrome

Navigate to the URL and cross your fingers that it worked! The URL response should include some version info of Headless Chrome and all of the requests which Chrome sent when loading/rendering https://www.chromium.org/

Let me know if you can't get it to work. Maybe I screwed something up.


Marco

Sami Kyostila

Mar 16, 2017, 8:27:50 AM3/16/17
to Marco Lüthy, headless-dev, depre...@gmail.com, tadeus...@gmail.com
Thanks for the great write-up! One tip for figuring out when the browser has finished launching: https://bugs.chromium.org/p/chromium/issues/detail?id=624837#c4

- Sami

Marco Lüthy

May 16, 2017, 8:59:04 AM5/16/17
to headless-dev, marco....@gmail.com, depre...@gmail.com, tadeus...@gmail.com
Hello,

So far, in the context of the AWS Lambda environment, I've only been able to get the headless shell to run with the --single-process flag. The problem is that, with the --single-process flag, it seems some metrics are disabled which are used by reporting tools like Lighthouse (for example). It's also slower.

--single-process causes the following to be logged to stderr:

[0516/112658.332373:WARNING:histograms.cc(40)] Started multiple compositor clients (Browser, Renderer) in one process. Some metrics will be disabled.


When running the headless shell without --single-process, the process crashes when I try to open a new tab. The following error is logged multiple times to stderr:

prctl(PR_SET_NO_NEW_PRIVS) failed

I assume this is somehow related to Chromium trying to sandbox itself. Is this assumption correct? However, adding the --no-sandbox flag doesn't seem to make a difference in addressing the prctl error. What could be causing the prctl error, and how can I work around it?

Without the --no-sandbox flag, the following is logged to stderr:

[0516/113410.811078:FATAL:zygote_host_impl_linux.cc(107)] No usable sandbox! Update your kernel or see https://chromium.googlesource.com/chromium/src/+/master/docs/linux_suid_sandbox_development.md for more information on developing with the SUID sandbox. If you want to live dangerously and need an immediate workaround, you can try using --no-sandbox.

However, the link in that log message seems to indicate that the Linux SUID sandbox has been mostly removed? Additionally, the --disable-setuid-sandbox flag doesn't seem to do anything.


Any help or direction is greatly appreciated. Unfortunately I've only been able to produce these issues within the Lambda execution environment, which is rather restrictive and doesn't let you do anything with setuid. Is there a way to completely disable the sandboxing in the headless shell?


Thank you,

Marco

Sami Kyostila

May 17, 2017, 8:46:45 AM5/17/17
to Marco Lüthy, headless-dev, depre...@gmail.com, tadeus...@gmail.com

On Tue, May 16, 2017, 13:59 Marco Lüthy <marco....@gmail.com> wrote:

Hello,

So far, in the context of the AWS Lambda environment, I've only been able to get the headless shell to run with the --single-process flag. The problem with this is that, with the --single-process flag, it seems some metrics are disabled, which are used by reporting tools like Lighthouse (for example.) It's also slower..

--single-process causes the following to be logged to stderr:

[0516/112658.332373:WARNING:histograms.cc(40)] Started multiple compositor clients (Browser, Renderer) in one process. Some metrics will be disabled.

This warning refers to Chromium's user metrics analysis (UMA) feature, which isn't used by Lighthouse. I suspect Lighthouse is just getting confused because it assumes each renderer process is separate from the browser process. 


When running the headless shell without --single-process, the process crashes when I try to open a new tab. The following error is logged multiple times to stderr:

prctl(PR_SET_NO_NEW_PRIVS) failed

I assume this is somehow related to Chromium trying to sandbox itself. Is this assumption correct? However, adding the --no-sandbox flag doesn't seem to make a difference at addressing the prctl error. What could be causing the prctl error, and how can I work around it?

Right, the prctl error is related to the sandbox attempting to drop privileges. It's not a fatal error per se, but does mean that the sandbox is less well protected than it would be normally.

If you're getting a crash in this mode, it's probably coming from somewhere else. Please try --enable-logging --v=99 to see if there are more clues.


Without the --no-sandbox flag, the following is logged to stderr:

[0516/113410.811078:FATAL:zygote_host_impl_linux.cc(107)] No usable sandbox! Update your kernel or see https://chromium.googlesource.com/chromium/src/+/master/docs/linux_suid_sandbox_development.md for more information on developing with the SUID sandbox. If you want to live dangerously and need an immediate workaround, you can try using --no-sandbox.

However, the link in that log message seems to indicate that the linux suid sandbox has been mostly removed? Additionally, the --disable-setuid-sandbox doesn't seem to do anything.


Any help, or direction are greatly appreciated. Unfortunately I've only been able to produce these issues within the Lambda execution environment, which is rather restrictive and doesn't let you do anything with setsuid. Is there a way to completely disable the sandboxing in the headless shell?

Yes, --no-sandbox will do that. You may also want to try --no-zygote, although that will make renderer startup somewhat slower.

- Sami

 

Boris Okusnkiy

Jun 1, 2017, 4:32:26 AM6/1/17
to headless-dev, tadeus...@gmail.com
Hi guys,

Thanks for your great efforts in making Chrome work in Lambda. Running a "real" headless browser in serverless opens a lot of unique possibilities of automating the web at crazy scale, which is generally not achievable with classic computation clusters in a cost-effective way.

I've done my share of experimenting as well and also managed to run it with the /dev/shm hack and some combo of --no-sandbox, --single-process and a bunch of other flags. Since we run in an isolated world anyway, we most likely don't need any sandboxing, so I have no problem with the --no-sandbox flag, but please correct me if my understanding is incomplete and it poses some serious security threat.

Another issue I bumped into is that Chrome apparently doesn't like being suspended. If the same instance executes a few simple jobs (e.g. just navigate to some website and return its title), then after a few runs Chrome eventually crashes with:

[0601/081857.236642:FATAL:sandbox_ipc_linux.cc(91)] poll(2) failing. SandboxIPCHandler aborting.
#0 0x000001423bf7 base::debug::StackTrace::StackTrace()
#1 0x00000143526b logging::LogMessage::~LogMessage()
#2 0x00000085a0a6 content::SandboxIPCHandler::Run()
#3 0x000001470495 base::SimpleThread::ThreadMain()
#4 0x00000146cc43 base::(anonymous namespace)::ThreadFunc()
#5 0x7f0c5393edc5 start_thread
#6 0x7f0c5196bc9d __clone
Received signal 6
#0 0x000001423bf7 base::debug::StackTrace::StackTrace()
#1 0x00000142376f base::debug::(anonymous namespace)::StackDumpSignalHandler()
#2 0x7f0c53946100 <unknown>
#3 0x7f0c518aa5f7 __GI_raise
#4 0x7f0c518abce8 __GI_abort
#5 0x000001422742 base::debug::BreakDebugger()
#6 0x000001435572 logging::LogMessage::~LogMessage()
#7 0x00000085a0a6 content::SandboxIPCHandler::Run()
#8 0x000001470495 base::SimpleThread::ThreadMain()
#9 0x00000146cc43 base::(anonymous namespace)::ThreadFunc()
#10 0x7f0c5393edc5 start_thread
#11 0x7f0c5196bc9d __clone
r8: 00007f0c4af43a1b r9: 00007f0c519c3a40 r10: 0000000000000008 r11: 0000000000000206
r12: 0000000003ca3353 r13: 00007f0c4af43cc8 r14: 00007f0c4af43cd8 r15: 00007f0c4af43cc8
di: 000000000000000c si: 000000000000000d bp: 0000000003ca338c bx: 0000000000000000
dx: 0000000000000006 ax: 0000000000000000 cx: 00007f0c518aa5f7 sp: 00007f0c4af43718
ip: 00007f0c518aa5f7 efl: 0000000000000206 cgf: 0000000000000033 erf: 0000000000000000
trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
[end of stack trace]
Calling _exit(1). Core file will not be generated.

While tracing the problem from sandbox_ipc_linux.cc I realized that it has something to do with how the browser communicates with a renderer process (which is sort of strange, because I run it with --single-process and --no-sandbox anyway).

Further tracing led me to try running with the --no-zygote flag, and to my surprise it worked: Chrome doesn't die anymore. But before celebrating I'd like to understand the consequences, what the trade-offs are, and whether there are other alternatives. For example, if I don't intend to open a new tab for every function invocation, then I don't need super-fast tab process spawning, right?

Anyway, I am very excited that people from the Chrome team are starting to look into application scenarios for Headless. Keep up the good work!



Alex Clarke

Jun 1, 2017, 5:27:41 AM6/1/17
to Boris Okusnkiy, headless-dev, tadeus...@gmail.com
On 1 June 2017 at 09:32, Boris Okusnkiy <bo...@ub.io> wrote:
Hi guys,

Thanks for your great efforts in making Chrome work in Lambda. Running a "real" headless browser in serverless opens a lot of unique possibilities of automating the web at crazy scale, which is generally not achievable with classic computation clusters in a cost-effective way.

I've done my share of experiment as well and also managed to run it with /dev/shm hack and some combo of --no-sandbox, --single-process and a bunch of other flags. Since we run in isolated world anyway,

I don't know what your use case is, but isolated worlds shouldn't be regarded as a hardened security boundary. V8 exploits exist, and the isolated world executes in the same process as the (potentially malicious) target web content.
 


Sami Kyostila

Jun 1, 2017, 7:13:07 AM6/1/17
to Boris Okusnkiy, headless-dev, tadeus...@gmail.com
Thanks for the great feedback Boris! Replies inline.

to 1. kesäk. 2017 klo 9.32 Boris Okusnkiy <bo...@ub.io> kirjoitti:
Hi guys,

Thanks for your great efforts in making Chrome work in Lambda. Running a "real" headless browser in serverless opens a lot of unique possibilities of automating the web at crazy scale, which is generally not achievable with classic computation clusters in a cost-effective way.

I've done my share of experiment as well and also managed to run it with /dev/shm hack and some combo of --no-sandbox, --single-process and a bunch of other flags. Since we run in isolated world anyway, we most likely don't need any sandboxing, so I have no problem with --no-sandbox flag — but please correct me if my understanding is incomplete and it poses some serious security threat.

If you're running the resulting binary in a sandbox (which I believe is part of the contract with Lambda) and treating any data you read out of it as untrusted, then not using Chromium's own sandbox as a second layer seems like a reasonable trade-off.
Interesting. That retry count for poll() was there from the beginning and I wasn't able to find any discussion about why it was thought to be necessary.
 

Further tracing has lead me to try running with --no-zygote flag — and to my surprise it worked, Chrome doesn't die anymore. But before celebrating I'd like to understand the consequences, what are the trade-offs and whether there are other alternatives. For example, if I don't intend to open new tab for every function invokation, then I don't need super-fast tab process spawning, right?

I think the problem is that even with --no-sandbox and --single-process we'd start a mostly useless zygote process which ends up doing IPC, leading to the failure you were seeing. --no-zygote makes this process go away and should not have any other functional or performance differences (there was a problem with fonts recently which was fixed).


Anyway, I am very excited that people from Chrome team are starting to look into application scenarios of Headless. Keep up the good work!

Thanks!

- Sami
 




ar.ar...@gmail.com

Jul 2, 2017, 10:47:28 AM7/2/17
to headless-dev, tadeus...@gmail.com
Hi. I am also trying to run Chrome in Lambda, and it actually runs OK, but every 5th execution Chrome exits by itself with code '1'. I also tried adding the --no-zygote flag, but nothing changed. At the moment I am just listening to the Chrome process, and if it exits I launch a new one immediately (a rough sketch of that restart logic is at the end of this message). This solution at least prevents the whole Lambda from crashing, but it also increases the execution time of every 5th call. So, actually, I would really like to fix it in some normal way. Here are my current flags: '--headless',
'--disable-gpu',
'--remote-debugging-port=9222',
'--window-size=1000,800',
'--no-sandbox',
'--no-zygote',
'--user-data-dir=/tmp/user-data',
'--hide-scrollbars',
'--enable-logging',
'--log-level=0',
'--v=99',
'--single-process',
'--data-path=/tmp/data-path',
'--ignore-certificate-errors',
'--homedir=/tmp',
'--disk-cache-dir=/tmp/cache-dir'

Do you have any ideas about what I've probably done wrong?
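
For reference, the restart logic I mentioned above is roughly this (a simplified sketch, not my exact code; the binary path is just an example):

const childProcess = require('child_process')

// Spawn headless Chrome and relaunch it immediately whenever it exits
function launchChrome (flags) {
  const chrome = childProcess.spawn('/var/task/headless_shell', flags, { detached: true })

  chrome.on('exit', code => {
    console.log('chrome exited with code', code, '- relaunching')
    launchChrome(flags)
  })

  return chrome
}
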
On Thursday, June 1, 2017 at 11:32:26 UTC+3, Boris Okunskiy wrote:

Matthew Mueller

Jul 2, 2017, 11:41:21 PM7/2/17
to headless-dev, tadeus...@gmail.com
Hey folks!

I'm running into something quite strange. While I was able to get Chrome headless running inside Lambda (using Marco Lüthy's excellent Medium post), I'm consistently hitting crashes with the Page.navigate command. Here are the Chrome DevTools Protocol commands I'm running:

{ "method": "Page.enable", "params": {} }
{ "method": "Network.enable", "params": {} }
{ "method": "Runtime.enable", "params": {} }
{ "method": "Page.setControlNavigations", "params": { "enabled": true } }
{ "method": "Page.navigate", "params": { "url": "https://google.com" } }

I've also got something that listens for `Page.navigationRequested` and runs:

{ "method": "Page.processNavigation", "params": { "response": "Proceed", "navigationId": $int } }
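
The listening side is something like this (a rough sketch of the idea, not my exact code; `socket` here is the DevTools websocket connection):

// When Chrome asks whether a navigation may proceed, reply with Page.processNavigation
socket.on('message', data => {
  const message = JSON.parse(data)

  if (message.method === 'Page.navigationRequested') {
    socket.send(JSON.stringify({
      id: Date.now(),
      method: 'Page.processNavigation',
      params: { response: 'Proceed', navigationId: message.params.navigationId },
    }))
  }
})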

Unfortunately, I'm getting the following stack trace every time:

[0703/033315.949451:ERROR:bad_message.cc(23)] Terminating renderer for bad IPC message, reason 138
[0703/033315.949509:FATAL:browser_message_filter.cc(165)] Check failed: false.
#0 0x00000140d027 base::debug::StackTrace::StackTrace()
#1 0x00000141de0d logging::LogMessage::~LogMessage()
#2 0x000000462130 content::BrowserMessageFilter::ShutdownForBadMessage()
#3 0x000000809ad4 content::DOMStorageMessageFilter::OnOpenStorageArea()
#4 0x00000080998f _ZN3IPC8MessageTI38DOMStorageHostMsg_OpenStorageArea_MetaSt5tupleIJil4GURLEEvE8DispatchIN7content23DOMStorageMessageFilterES8_vMS8_FvilRKS3_EEEbPKNS_7MessageEPT_PT0_PT1_T2_
#5 0x000000809501 content::DOMStorageMessageFilter::OnMessageReceived()
#6 0x000001456483 base::SequencedWorkerPool::Inner::ThreadLoop()
#7 0x000001455ce1 base::SequencedWorkerPool::Worker::Run()
#8 0x000001458ec5 base::SimpleThread::ThreadMain()
#9 0x0000014552d3 base::(anonymous namespace)::ThreadFunc()
#10 0x7f5a85539dc5 start_thread
#11 0x7f5a835916ed __clone


Received signal 6
#0 0x00000140d027 base::debug::StackTrace::StackTrace()
#1 0x00000140cb9f base::debug::(anonymous namespace)::StackDumpSignalHandler()
#2 0x7f5a85541370 <unknown>
#3 0x7f5a834cf1d7 __GI_raise
#4 0x7f5a834d08c8 __GI_abort
#5 0x00000140bce2 base::debug::BreakDebugger()
#6 0x00000141e160 logging::LogMessage::~LogMessage()
#7 0x000000462130 content::BrowserMessageFilter::ShutdownForBadMessage()
#8 0x000000809ad4 content::DOMStorageMessageFilter::OnOpenStorageArea()
#9 0x00000080998f _ZN3IPC8MessageTI38DOMStorageHostMsg_OpenStorageArea_MetaSt5tupleIJil4GURLEEvE8DispatchIN7content23DOMStorageMessageFilterES8_vMS8_FvilRKS3_EEEbPKNS_7MessageEPT_PT0_PT1_T2_
#10 0x000000809501 content::DOMStorageMessageFilter::OnMessageReceived()
#11 0x000001456483 base::SequencedWorkerPool::Inner::ThreadLoop()
#12 0x000001455ce1 base::SequencedWorkerPool::Worker::Run()
#13 0x000001458ec5 base::SimpleThread::ThreadMain()
#14 0x0000014552d3 base::(anonymous namespace)::ThreadFunc()
#15 0x7f5a85539dc5 start_thread

For a while I was just thinking that I was sending a faulty command, but I've checked that quite a bit. Is there any way to get more detailed information about what the bad IPC message is? It seems to be on the listening side, actually.

One other thing about the setup is I'm running inside a target, so the raw messages look like this:

{"id":469339106,"method":"Target.sendMessageToTarget","params":{"targetId":"b3be4d9d-68e3-453a-9328-7a0f664517e0","message":"{\"id\":1458323237,\"method\":\"Page.navigate\",\"params\":{\"url\":\"https://google.com\"}}"}}

It definitely seems to be the Page.navigate function that kills the linux chrome process though.

Alex Clarke

Jul 3, 2017, 3:40:15 AM7/3/17
to Matthew Mueller, headless-dev, Tadeusz Wojcik
Looks like DSH_WRONG_STORAGE_PARTITION, which suggests a mismatch between the browser session and the renderer. I'm not really sure how that can happen; can you tell us a little more about your use case? Are you using Target.createBrowserContext?


Matthew Mueller

Jul 3, 2017, 5:35:53 AM7/3/17
to headless-dev, mattm...@gmail.com, tadeus...@gmail.com
Thanks Alex for your response. I am using Target.createBrowserContext. Here's the full sequence of commands I'm running on Chrome Linux v60.0.3095.0:

{"id":1298498081,"method":"Target.createBrowserContext"}
{"id":1427131847,"method":"Target.createTarget","params":{"browserContextId":"6d6fbee5-024f-4512-bac1-ef3000d6a03a","height":500,"url":"about:blank","width":940}}
{"id":911902081,"method":"Target.attachToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38"}}
{"id":336122540,"method":"Target.sendMessageToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38","message":"{\"id\":140954425,\"method\":\"Page.enable\",\"params\":{}}"}}
{"id":1106410694,"method":"Target.sendMessageToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38","message":"{\"id\":646203300,\"method\":\"Runtime.enable\",\"params\":{}}"}}
{"id":817455089,"method":"Target.sendMessageToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38","message":"{\"id\":460128162,\"method\":\"DOM.enable\",\"params\":{}}"}}
{"id":607811211,"method":"Target.sendMessageToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38","message":"{\"id\":1006933274,\"method\":\"Network.enable\",\"params\":{}}"}}
{"id":469339106,"method":"Target.sendMessageToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38","message":"{\"id\":1458323237,\"method\":\"Page.setControlNavigations\",\"params\":{\"enabled\":true}}"}}
{"id":1225511528,"method":"Target.sendMessageToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38","message":"{\"id\":774965466,\"method\":\"Emulation.setVisibleSize\",\"params\":{\"width\":1920,\"height\":951}}"}}
{"id":637979947,"method":"Target.sendMessageToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38","message":"{\"id\":629458047,\"method\":\"Page.navigate\",\"params\":{\"url\":\"https://google.com\"}}"}}
{"id":1858292790,"method":"Target.sendMessageToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38","message":"{\"id\":443632888,\"method\":\"Page.processNavigation\",\"params\":{\"response\":\"Proceed\",\"navigationId\":0}}"}}


I can also confirm that every command works up until Page.navigate. Actually you can reproduce this with just:

{"id":1298498081,"method":"Target.createBrowserContext"}
{"id":1427131847,"method":"Target.createTarget","params":{"browserContextId":"6d6fbee5-024f-4512-bac1-ef3000d6a03a","height":500,"url":"about:blank","width":940}}
{"id":911902081,"method":"Target.attachToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38"}}
{"id":637979947,"method":"Target.sendMessageToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38","message":"{\"id\":629458047,\"method\":\"Page.navigate\",\"params\":{\"url\":\"https://google.com\"}}"}}

Last thing worth mentioning is that it doesn't happen on *every* website, but it happens on most that I've tried, including Google & Medium.

Matt

Matthew Mueller

Jul 5, 2017, 11:58:50 PM7/5/17
to headless-dev, mattm...@gmail.com, tadeus...@gmail.com
Just a heads up that this appears related to:

{"id":911902081,"method":"Target.attachToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38"}}

I'm pretty sure I misunderstood what attachToTarget is meant for.

Alex Clarke

Jul 6, 2017, 3:33:55 AM7/6/17
to Matthew Mueller, Pavel Feldman, headless-dev, Tadeusz Wojcik
+Pavel

This sounds like a DevTools issue.  I've filed a bug, could you please triage:  https://bugs.chromium.org/p/chromium/issues/detail?id=739635


Sami Kyostila

Jul 6, 2017, 7:50:47 PM7/6/17
to Alex Clarke, Matthew Mueller, Pavel Feldman, headless-dev, Tadeusz Wojcik
Not sure if it would lead to this, but is it possible that the lambda's temporary directory or some other part of the filesystem has become full?

- Sami

On 6 July 2017 at 08:33, 'Alex Clarke' via headless-dev <headle...@chromium.org> wrote:
+Pavel

This sounds like a DevTools issue.  I've filed a bug, could you please triage:  https://bugs.chromium.org/p/chromium/issues/detail?id=739635
On 6 July 2017 at 04:58, Matthew Mueller <mattm...@gmail.com> wrote:
Just a heads up that this appears related to:

{"id":911902081,"method":"Target.attachToTarget","params":{"targetId":"b5e8db2f-2a90-46dd-ad9b-284466a0ba38"}}

I'm pretty sure I misunderstood what attachToTarget is meant for.



chilakama...@gmail.com

Aug 21, 2017, 10:40:16 PM8/21/17
to headless-dev, tadeus...@gmail.com