Implement Transferable Streams: ServiceWorker <=> Window


guest271314

May 17, 2021, 5:50:19 PM
to Chromium Extensions
Chromium/Chrome has implemented Transferable Streams.


The Port objects of chrome.runtime.postMessage(), chrome.runtime.connectNative(), and chrome.runtime.sendNativeMessage() do not support transferable objects.

While we can send objects via postMessage() https://groups.google.com/a/chromium.org/g/chromium-extensions/c/cTjYAX0nkF0/m/Yw3lD09uBAAJ, we cannot transfer an ArrayBuffer or stream with ReadableStream/WritableStream (Streams API) https://bugs.chromium.org/p/chromium/issues/detail?id=248548.
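
For contrast, a minimal sketch of what already works with plain postMessage() on the web platform versus extension messaging (otherWindow below stands for any Window handle, for example one returned by window.open()):

// Works today: ReadableStream is transferable over window postMessage().
const { readable } = new TransformStream();
otherWindow.postMessage({ readable }, '*', [readable]);

// Does not work today: chrome.runtime messaging JSON-serializes its payload,
// so the stream (or an ArrayBuffer) cannot cross the channel as a transferable.
// chrome.runtime.sendMessage({ readable });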

While we can request localhost from an extension URL, we cannot request localhost from the console at an arbitrary URL (e.g., github.com) due to CSP directives. We need to get the response in the extension and send the JavaScript object/JSON using runtime.postMessage().

Since Transferable Streams is already implemented, we should be able to effectively use the background ServiceWorker as a router, or forwarding server: a server running locally responds with STDOUT from native application and shell script calls to work around browser restrictions and limitations. For example, the Web Speech API supports neither capture of audio output, e.g., to a MediaStreamTrack (https://github.com/WICG/speech-api/issues/69; https://bugs.chromium.org/p/chromium/issues/detail?id=1185527), nor SSML input parsing (https://github.com/WICG/speech-api/issues/10; https://bugs.chromium.org/p/chromium/issues/detail?id=795371; https://bugzilla.mozilla.org/show_bug.cgi?id=1425523).
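
For reference, this pattern already works on the open web platform between a Window and the ServiceWorker that controls it. A minimal sketch, assuming a registered, controlling service worker and a hypothetical same-origin /stdout endpoint:

// page.js - request a stream from the controlling ServiceWorker over a MessageChannel.
const { port1, port2 } = new MessageChannel();
port1.onmessage = ({ data: { readable } }) => {
  readable
    .pipeThrough(new TextDecoderStream())
    .pipeTo(new WritableStream({ write(chunk) { console.log(chunk); } }));
};
navigator.serviceWorker.controller.postMessage({ url: '/stdout' }, [port2]);

// sw.js - fetch and hand the Response body back without copying it.
self.addEventListener('message', (event) => {
  const [port] = event.ports;
  event.waitUntil(
    fetch(event.data.url).then((response) =>
      port.postMessage({ readable: response.body }, [response.body])
    )
  );
});

The feature request is, in effect, to expose the same transfer capability on the chrome.runtime messaging surface.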

While my use case is input to a native application, espeak-ng, for the purpose of parsing SSML and streaming STDOUT in "real time", other applications can also be called.

The API can look something like:

manifest.json

{
  "permissions": ["transferableStreams"],
  "externallyConnectableTransferableStreams": ["<all_urls>"]
}

console

// create a MessageChannel in Window, with a unique identifier
// multiple transferable channels can be created
const port = chrome.runtime.createTransferableStream('1'); // MessagePort
port.onmessage = async (e) => {
  const {readable} = e.data;
  readable.pipeThrough(...).pipeTo(...);
};

background.js

// MessageChannel port in ServiceWorker
const port = chrome.runtime.createTransferableStream('1');
// e.g., fetch a local server that streams STDOUT of a native application
fetch('http://localhost:8000')
  .then((response) => response.body)
  .then((readable) => port.postMessage(readable, [readable]))
  .catch(console.error);

Simeon Vincent

May 18, 2021, 1:35:29 PM
to guest271314, Chromium Extensions
This appears to be a feature request. Please file an issue on crbug.com so the Chromium team can triage it.

As a quick bit of feedback, I didn't realize this was a feature request until I got to the code snippets and realized you were using code that we don't currently support. One small tweak you could make to help readers more easily understand the request is to add section headings for "Problem" and "Solution".

Cheers,

Simeon - @dotproto
Chrome Extensions DevRel



guest271314

May 19, 2021, 3:09:06 AM
to Chromium Extensions, Simeon Vincent, Chromium Extensions, guest271314
Relevant Chromium bug closed as WontFix https://bugs.chromium.org/p/chromium/issues/detail?id=1115640.

Problem:

1) There is no way to capture Web Speech API audio output for speechSynthesis.speak() (stream, in "real time");
2) There is no way to input SSML to the speech synthesis engine; Web Speech API does not support SSML parsing (assistive technologies; et al.);
3) Various other use cases not specifically included in 1) and 2) would be remedied by a means to establish a connection to a local server to run arbitrary native applications and shell scripts.

Solutions:

TL;DR

guest271314

May 20, 2021, 10:08:01 AM
to Chromium Extensions, guest271314, Simeon Vincent, Chromium Extensions
Simeon Vincent,

Do you gather what my feature request is here?

guest271314

May 20, 2021, 8:03:50 PM
to Chromium Extensions, guest271314, Simeon Vincent, Chromium Extensions
Given that the parent organization Google has partnered with StackBlitz to run Node.js in the browser, this feature request is within the scope of running arbitrary native applications and shell scripts in the browser https://blog.stackblitz.com/posts/introducing-webcontainers/

> Introducing WebContainers: Run Node.js natively in your browser

> Today we're excited to announce a new technology we've been working on in concert with the teams at Next.js and Google.

> A few years ago we realized that the web was heading towards a key inflection point. The advent of WebAssembly and new capabilities APIs made it seem possible to write a WebAssembly-based operating system powerful enough to run Node.js, entirely inside your browser.

> Today we're excited to announce WebContainers.

> WebContainers allow you to create fullstack Node.js environments that boot in milliseconds and are immediately online & link shareable—in just one click. The environment loads with VS Code's powerful editing experience, a full terminal, npm and more. It also runs entirely inside your browser ...

> Again, these environments are not running on remote servers. Instead, each environment is completely contained within your web browser. That’s right: the Node.js runtime itself is running natively, inside the browser, for the first time ever.


guest271314

May 20, 2021, 8:51:44 PM
to Chromium Extensions, guest271314, Simeon Vincent, Chromium Extensions
There cannot possibly be any rational objection to this feature request. Given the below, the closure of the linked Fugu issue https://bugs.chromium.org/p/chromium/issues/detail?id=1115640 makes no sense; it made no sense to close the issue even before the partnership between Google and StackBlitz was announced.


> Run servers. In your browser.
> Yes, actually. WebContainers include a virtualized TCP network stack that's mapped to your browser's ServiceWorker API, enabling you to instantly create live Node.js servers on-demand that continue to work even when you go offline.
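
For illustration only (this is not StackBlitz's implementation), the basic mechanism of "serving" from a ServiceWorker is synthesizing streamed responses in its fetch handler:

// sw.js - respond to a virtual path with a stream constructed inside the worker.
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (url.pathname === '/virtual') {
    const body = new ReadableStream({
      start(controller) {
        controller.enqueue(new TextEncoder().encode('Hello from the ServiceWorker\n'));
        controller.close();
      },
    });
    event.respondWith(
      new Response(body, { headers: { 'Content-Type': 'text/plain' } })
    );
  }
});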

guest271314

May 24, 2021, 12:51:24 AM
to Chromium Extensions, guest271314, Simeon Vincent, Chromium Extensions
FWIW this is how I implemented a proof of concept using existing web platform technologies https://github.com/guest271314/NativeTransferableStreams.

Steps:

1. Start your local server. This can be achieved using a browser extension with Native Messaging to toggle the local server on and off.

2. Create an HTML document in the root of the server directory.

3. Turn off popup blocker at browser settings/preferences.

4. Open `Window` using `window.open()` with URL set to HTML document at 2.

5. `postMessage()` to `opener` from newly opened `Window`.

6. Transfer `ReadableStream` representing `STDIN` using `postMessage()` from `opener` to newly opened `Window`.

7. Read `ReadableStream` at newly opened Window.

8. `fetch()` localhost with `POST` body set as command to run at a local shell, for example using PHP `passthru()`.

9. Transfer `ReadableStream` of `Response.body` representing STDOUT using `postMessage()` from newly opened `Window` to `opener`.

10. Read `ReadableStream` at `opener`.

Local server

<?php
if (isset($_POST["tts"])) {
  header('Vary: Origin');
  header("Access-Control-Allow-Origin: *");
  header("Access-Control-Allow-Methods: GET, POST, OPTIONS, HEAD");
  header("Access-Control-Allow-Headers: Content-Type, Access-Control-Allow-Headers");
  header("Content-Type: text/plain");
  header("X-Powered-By:");
  // Run the POSTed command in a local shell and write its STDOUT to the response.
  echo passthru($_POST["tts"]);
  exit();
}

index.html in root of server

<!DOCTYPE html>
<html>
  <body>
    NativeTransferableStream
    <script>
      onload = async (e) => {
        blur();
        // Signal the opener that this window is ready to receive the transferred stream.
        opener.postMessage('Ready', name);
        onmessage = async ({ data }) => {
          await data
            .pipeThrough(new TextDecoderStream())
            .pipeTo(
              new WritableStream({
                async write(value, c) {
                  // POST the command (STDIN) to the local server, then transfer
                  // the Response body (STDOUT) back to the opener.
                  const fd = new FormData();
                  fd.append('tts', value);
                  const { body } = await fetch('http://localhost:8000', {
                    method: 'post',
                    body: fd,
                  });
                  opener.postMessage(body, name, [body]);
                },
              })
            )
            .catch(() => close());
        };
      };
    </script>
  </body>
</html>

Usage at any origin

async function audioStream(readable) {
  let readOffset = 0;
  let duration = 0;
  let init = false;
  const ac = new AudioContext({
    sampleRate: 22050,
    latencyHint: 0,
  });
  await ac.suspend();
  const msd = new MediaStreamAudioDestinationNode(ac, {
    channelCount: 1,
  });
  let inputController;
  const inputStream = new ReadableStream({
    async start(_) {
      return (inputController = _);
    },
  });
  const abortable = new AbortController();
  const { signal } = abortable;
  const inputReader = inputStream.getReader();
  const { stream } = msd;
  const [track] = stream.getAudioTracks();
  const osc = new OscillatorNode(ac, { frequency: 0 });
  const processor = new MediaStreamTrackProcessor({ track });
  const generator = new MediaStreamTrackGenerator({ kind: 'audio' });
  const { writable } = generator;
  const { readable: audioReadable } = processor;
  const audioWriter = writable.getWriter();
  const mediaStream = new MediaStream([generator]);
  const source = new MediaStreamAudioSourceNode(ac, { mediaStream });
  source.connect(ac.destination);
  osc.connect(msd);
  osc.start();
  track.onmute = track.onunmute = track.onended = (e) => console.log(e);
  // const recorder = new MediaRecorder(mediaStream);
  // recorder.ondataavailable = ({ data }) => console.log(URL.createObjectURL(data));
  // recorder.start();
  let channelData = [];
  await Promise.all([
    readable.pipeTo(
      new WritableStream({
        async write(value, c) {
          let i = 0;
          if (!init) {
            init = true;
            i = 44;
          }
          for (; i < value.buffer.byteLength; i++, readOffset++) {
            if (channelData.length === 440) {
              inputController.enqueue([...channelData]);
              channelData.length = 0;
            }
            channelData.push(value[i]);
          }
        },
        async close() {
          console.log('Done writing input stream.');
          if (channelData.length) {
            inputController.enqueue(channelData);
          }
          inputController.close();
        },
      })
    ),
    audioReadable.pipeTo(
      new WritableStream({
        async write({ timestamp }) {
          if (inputController.desiredSize === 0) {
            msd.disconnect();
            osc.disconnect();
            source.disconnect();
            track.stop();
            // abortable.abort();
            await audioWriter.close();
            await audioWriter.closed;
            await inputReader.cancel();
            generator.stop();
            await ac.close();
            console.log(
              `readOffset:${readOffset}, duration:${duration}, ac.currentTime:${ac.currentTime}`,
              `generator.readyState:${generator.readyState}, audioWriter.desiredSize:${audioWriter.desiredSize}`
            );
            return await Promise.all([
              new Promise((resolve) => (stream.oninactive = resolve)),
              new Promise((resolve) => (ac.onstatechange = resolve)),
            ]);
          }
          const uint8 = new Uint8Array(440);
          const { value, done } = await inputReader.read();
          if (!done) uint8.set(new Uint8Array(value));
          const uint16 = new Uint16Array(uint8.buffer);
          const floats = new Float32Array(220);
          // https://stackoverflow.com/a/35248852
          for (let i = 0; i < uint16.length; i++) {
            const int = uint16[i];
            // If the high bit is on, then it is a negative number, and actually counts backwards.
            const float =
              int >= 0x8000 ? -(0x10000 - int) / 0x8000 : int / 0x7fff;
            floats[i] = float;
          }
          const buffer = new AudioBuffer({
            numberOfChannels: 1,
            length: floats.length,
            sampleRate: 22050,
          });
          buffer.getChannelData(0).set(floats);
          duration += buffer.duration;
          const frame = new AudioData({ timestamp, buffer });
          await audioWriter.write(frame);
        },
        close() {
          console.log('Done reading input stream.');
        },
      }),
      { signal, preventClose: false }
    ),
    ac.resume(),
  ]);
  return 'Done streaming.';
}


async function nativeTransferableStream(stdin) {
  return new Promise((resolve) => {
    onmessage = async (e) => {
      if (e.data === 'Ready') {
        const encoder = new TextEncoder();
        const input = encoder.encode(stdin);
        const readable = new ReadableStream({
          start(c) {
            c.enqueue(input);
            c.close();
          },
        });
        e.source.postMessage(readable, e.origin, [readable]);
      }
      if (e.data instanceof ReadableStream) {
        const message = await audioStream(e.data);
        onmessage = null;
        transferableWindow.close();
        resolve(message);
      }
    };
    const transferableWindow = window.open(
      'http://localhost:8000/index.html',
      location.href,
      'menubar=no,location=no,resizable=no,scrollbars=no,status=no,width=100,height=100'
    );
  }).catch((err) => {
    throw err;
  });
}

let text = `... So we need people to have weird new ideas. We need more ideas to break it and make it better.

Use it
Break it
File bugs
Request features

- Real time front-end alchemy, or:
capturing, playing, altering and encoding video and audio streams, without servers or plugins!
by Soledad Penadés

von Braun believed in testing. I cannot emphasize that term enough – test, test, test.
Test to the point it breaks.

- Ed Buckbee, NASA Public Affairs Officer, Chasing the Moon

Now watch. Um, this how science works.
One researcher comes up with a result.
And that is not the truth. No, no.
A scientific emergent truth is not the
result of one experiment. What has to
happen is somebody else has to verify
it. Preferably a competitor. Preferably
someone who doesn't want you to be correct.

- Neil deGrasse Tyson, May 3, 2017 at 92nd Street Y`;

try {
  await nativeTransferableStream(`espeak-ng -m --stdout -v 'Storm' "${text}"`);
} catch (err) {
  console.error(err);
}

guest271314

May 29, 2021, 7:06:53 PM
to Chromium Extensions, guest271314, Simeon Vincent, Chromium Extensions
> Steps:

> 1. Start your local server. This can be achieved using a browser extension with Native Messaging to toggle the local server on and off.

To start and stop the local server with a user action or programmatically:

manifest.json

{
  "name": "native_messaging_local_server",
  "version": "1.0",
  "manifest_version": 3,
  "background": {
    "service_worker": "background.js"
  },
  "permissions": ["nativeMessaging", "tabs"],
  "action": {}
}

native_messaging_local_server.json

{
  "name": "native_messaging_local_server",
  "description": "Start, stop local server",
  "path": "/path/to/localServer.sh",
  "type": "stdio",
  "allowed_origins": [
    "chrome-extension://<id>/"
  ]
}

$ cp native_messaging_local_server.json ~/.config/chromium/NativeMessagingHosts

localServer.sh

#!/bin/bash
sendMessage() {
    message=''
    if pgrep -f 'php -S localhost:8000' > /dev/null; then
      message+='"Local server on."'  
    else
      message+='"Local server off."' 
    fi
    # Calculate the byte size of the string.
    # NOTE: This assumes that byte length is identical to the string length!
    # Do not use multibyte (unicode) characters, escape them instead, e.g.
    # message='"Some unicode character:\u1234"'
    messagelen=${#message}
    # Convert to an integer in native byte order.
    # If you see an error message in Chrome's stdout with
    # "Native Messaging host tried sending a message that is ... bytes long.",
    # then just swap the order, i.e. messagelen1 <-> messagelen4 and
    # messagelen2 <-> messagelen3
    messagelen1=$(( ($messagelen      ) & 0xFF ))               
    messagelen2=$(( ($messagelen >>  8) & 0xFF ))               
    messagelen3=$(( ($messagelen >> 16) & 0xFF ))               
    messagelen4=$(( ($messagelen >> 24) & 0xFF ))               
    # Print the message byte length followed by the actual message.
    printf "$(printf '\\x%x\\x%x\\x%x\\x%x' \
        $messagelen1 $messagelen2 $messagelen3 $messagelen4)%s" "$message"
}

localServer() {
  if pgrep -f 'php -S localhost:8000' > /dev/null; then
    pkill -f 'php -S localhost:8000' & sendMessage  
  else
    php -S localhost:8000 & sendMessage
  fi
}
localServer
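
For reference, the framing above is the standard Native Messaging wire format: a 32-bit message length in native byte order followed by the UTF-8 JSON message. A minimal sketch of the same framing in Node.js, assuming a Node script were configured as the host "path" instead of the shell script:

// nodeHost.js - write one length-prefixed Native Messaging response to STDOUT.
function send(message) {
  const json = Buffer.from(JSON.stringify(message), 'utf8');
  const header = Buffer.alloc(4);
  header.writeUInt32LE(json.length, 0); // native byte order (little-endian on x86/x64)
  process.stdout.write(Buffer.concat([header, json]));
}

send('Local server on.');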

background.js

function sendNativeMessage(title, url) {
  chrome.runtime.sendNativeMessage('native_messaging_local_server', {}, (nativeMessage) => {
    console.log({nativeMessage, title, url});
  });
}
// Setting 'document.title'
chrome.tabs.onUpdated.addListener((tabId, {title}, {url}) => {
  if (/(start|stop)_local_server/.test(title)) {
    sendNativeMessage(title, url);
  }
});
// Clicking extension badge
chrome.action.onClicked.addListener(({title, url}) => {
  sendNativeMessage(title, url);
});

The local server can be started and stopped by a) clicking the extension badge; b) setting 'document.title' to 'start_local_server' or 'stop_local_server'.
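
For example, a page in a tab could trigger the toggle programmatically (hypothetical usage; the listener above reacts to title changes reported by tabs.onUpdated):

// In the page (any origin):
document.title = 'start_local_server'; // or 'stop_local_server'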



guest271314

May 30, 2021, 1:07:57 PM
to Chromium Extensions, Simeon Vincent, Chromium Extensions, guest271314
I went ahead and filed another Fugu issue https://bugs.chromium.org/p/chromium/issues/detail?id=1214621.

Perhaps this time the issue will be marked with appropriate components and relevant developers will gather what I propose.


PhistucK

May 30, 2021, 4:01:00 PM
to guest271314, Chromium Extensions, Simeon Vincent
I think you are miscategorising your request yourself. Fugu is for open web platform features, but you are talking about extension APIs (a totally different group of people), right?

PhistucK


guest271314

May 30, 2021, 4:23:39 PM
to Chromium Extensions, PhistucK, Chromium Extensions, Simeon Vincent, guest271314
No. Both apply. 

Transferable Streams and WebTransport can be useful when specified and implemented within the scope of Web extensions, and conversely, though not mutually exclusively, there is interest in connecting to local devices and applications, which does not necessarily require any networking.

Ideally users should be capable of connecting to local applications and devices, and running arbitrary shell scripts, without using an extension. I could use a local shell script to perform a task when a local file is written with the File System Access API https://github.com/guest271314/captureSystemAudio#usage, which is a workaround for not being able to start and stop services directly https://github.com/whatwg/html/issues/3443. In the proof of concept I linked to earlier I am using Native Messaging, which, again, can benefit from updating the specification and implementations to use the Streams API, Transferable Streams, and WebTransport instead of JSON https://bugs.chromium.org/p/chromium/issues/detail?id=248548; see https://groups.google.com/a/chromium.org/g/web-transport-dev/c/njMLrjHdyLs.

Due to the varied and wide interests on the Web there are examples of proposals for specifications and implementations that overlap - potentially within the same parent organization sphere - without the different efforts being aware of each other. For example, compare the proposals https://github.com/httpslocal and https://github.com/backkem/local-devices-api.

Unpacked extensions and Native Messaging hosts reside locally, thus there is an infrastructure in place with regard to accessing local applications and devices, though it uses JSON instead of streams due to a disconnect between various aspects of Web platform interests and stakeholders. I have been attempting to bridge gaps where work is being done disparately that could be done collaboratively.

There is no reason SSML input and parsing and the capability to capture audio output from the Web Speech API cannot be specified and implemented. Infrastructures are in place to do that. The reason that has not occurred is that specification owners and contributors pass the buck, as it were. Easier to just ban me for 1,000 years while Chrome users have to ask for the WebTransport server being tested https://github.com/GoogleChrome/samples/issues/718 to be pulled into Google repositories https://github.com/GoogleChrome/chrome-extensions-samples/issues/548. Again, a disconnect, which can be fixed; Google and its cohorts and specifications do not lack resources, rather the will to connect disparate though actually common interests into a common effort, which is correctable, if you folks listen to individuals outside of your closed-loop system. Or not. I will continue with my individual research, experimentation, and working around the omissions of these very smart (in some ways) individuals and organizations.



guest271314

May 30, 2021, 4:44:18 PM
to Chromium Extensions, PhistucK, Chromium Extensions, Simeon Vincent, guest271314
In theory https://github.com/backkem/local-devices-api/issues/6, https://github.com/httpslocal/usecases/blob/master/NetworkBasedAPI.md, https://bugs.chromium.org/p/chromium/issues/detail?id=248548, and to the extent applicable or necessary https://bugs.chromium.org/p/chromium/issues/detail?id=1207214, can all be solved with something like

Web page <=> Transferable Streams <=> ServiceWorker <=> WebTransport <=> Native Messaging (updating the Native Messaging connection/messaging protocol; utilizing the deprecated quic-transport/HTTP/3 backend to connect to, pair with, local applications and devices, virtual or otherwise, run arbitrary shell scripts, et al.)


guest271314

May 30, 2021, 4:56:00 PM
to Chromium Extensions, PhistucK, Chromium Extensions, Simeon Vincent, guest271314
Technically the flow chart could be reduced to 

Web page <=> Transferable Streams <=> ServiceWorker

where the MV3 background ServiceWorker serves as server, router, with access to whatever server is being run locally, to connect or pair with arbitrary devices, run scripts, do stuff that is not possible using APIs shipped with the browser or extensions - because we can fetch() localhost from the ServiceWorker and stream STDOUT as a ReadableStream.
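
A minimal sketch of the half that already works today in an MV3 background ServiceWorker, assuming host permissions include http://localhost/* (the manifest earlier in this thread omits that) and where message.command is a placeholder field name:

// background.js - STDOUT of the local server arrives as a ReadableStream.
chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  fetch('http://localhost:8000', {
    method: 'post',
    body: new URLSearchParams({ tts: message.command }),
  }).then((response) => {
    const readable = response.body; // ReadableStream of STDOUT
    // Missing piece: there is no way to transfer `readable` to the page;
    // sendResponse() and port.postMessage() JSON-serialize their payloads.
  });
  return true;
});

The missing half is the transfer of that stream from the ServiceWorker to the page.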

Right now I need to open a new Window to get a Window handle to transfer the ReadableStream. Some users might prefer to use WebTransport.

What I find revealing is that Native Messaging provides the means to connect to local applications and devices, yet proposals to do so do not mention Native Messaging, which does not get much contribution https://github.com/browserext/native-messaging/issues/6#issuecomment-666053799

> People interested in this work being resumed are encouraged to drum up support among the various implementors.

perhaps due to the archaic runtime.sendMessage() protocol, when the Streams API is already being used in various new media and networking proposals and APIs.

With some form of Transferable Streams and/or WebTransport connection and communication backend, the goals of those proposals for local device connection and streaming can be realized relatively simply by using existing technologies already shipped with the browser, merging like interests, and contributing to and updating the Native Messaging specification. Thus, all stakeholder interests are covered.

guest271314

May 30, 2021, 5:21:19 PM
to Chromium Extensions, PhistucK, Chromium Extensions, Simeon Vincent, guest271314
Right now, the MV3 ServiceWorker has MessageChannel defined, which provides a means to connect to a Web page and utilize Transferable Streams, yet extension messaging still uses

    // NOTE: sendMessage() builds the message payload; it is defined elsewhere in
    // the original script and not shown here.
    var now = performance.now();
    var disconnected = false;
    var timer;
    var id = 'jmnojflkjiloekecianpibbbclcgmhag';
    var port = chrome.runtime.connect(id);
    port.onMessage.addListener((e) => {
      console.log(e, ((performance.now() - now) / 1000) / 60);
      if (disconnected === true) {
        console.log(port, e, e.sender);
        port.disconnect();
        clearTimeout(timer);
        return;
      }
      timer = setTimeout(() => {
        port.postMessage(sendMessage());
      }, (1000 * 60) * 6);
    });
    port.onDisconnect.addListener((e) => { disconnected = true; clearTimeout(timer); console.log(e); });
    port.postMessage(sendMessage());

again, is JSON-based, meaning when I stream raw audio I need to further process the plain object in the onMessage handler, which makes no sense and wastes the technology. We should be able to connect to the ServiceWorker using a MessagePort, or directly with onmessage/postMessage.

Also, WebTransport is HTTP, and sites' CSP and CORS apply, so I will not be able to just connect to any device I want from any site I want with that API. Initially I could with the 'quic-transport' protocol, then GitHub set a blocker for that protocol. There needs to be a means to use WebTransport on any site I want, at the console or otherwise, with a URL pointing to a local address. That is what the code I cobbled together does. I can send input, including SSML, and get streaming output of espeak-ng, ultimately as a MediaStreamTrack. The only specification I am aware of for the procedure is the 10-step one I wrote in the Explainer https://github.com/guest271314/NativeTransferableStreams/blob/main/Explainer.md.

Very simple.

Am I really missing something here?

Or, are all of the pieces available, just not cohesively merged and interoperable?



PhistucK

May 30, 2021, 7:27:37 PM
to guest271314, Chromium Extensions, Simeon Vincent
Sorry for not reading your whole message(s), you write too much and you spread onto other territories.
A suggestion - it is fine to file meta feature requests, but you should "back them up" with isolated, specific feature requests (those are easier to implement, since they are usually implemented by a specific team), because the meta ones are too broad/require man-months.

PhistucK

guest271314

May 30, 2021, 9:05:39 PM
to Chromium Extensions, PhistucK, Chromium Extensions, Simeon Vincent, guest271314
> Sorry for not reading your whole message(s),

Well, that is problematic. Perhaps whatever role you believe you are attempting to fill here is not the correct task for you if you are lazy, do not care enough to read what users in the field write, or have issues comprehending individuals who might not view the world the same as you.

> you write too much

That makes no sense. I reject your assertion.

> and you spread onto other territories.

The whole universe is my territory.

> A suggestion - it is fine to file meta feature requests, but you should "back them up" with isolated, specific feature requests (those are easier to implement, since they are usually implemented by a specific team), because the meta ones are too broad/require man-months.

You could have followed your own suggestion and just written that without the first sentence of your reply.

I laid out the entire roadmap. Browser implementers are some of the smartest people in the world. If they don't get it, or decide not to get what I propose, I have performed my own due diligence. I am well-suited to just continue creating and using workarounds while issues in various territories can remain unfixed although the tools exist in the shop to fix them.

PhistucK

May 31, 2021, 5:19:54 AM
to guest271314, Chromium Extensions, Simeon Vincent
I am sorry if this offends you, but you do generally write a lot and not everyone has the time to read it (as is kind of evident from your blink-dev thread, and I think you also mentioned your feature requests get closed because they did not understand what you were talking about).
By "other territories" I meant, you post a web platform feature request in a Chromium extensions group - know your audience...
They are smart (and they are human. Some of them are even lazy, I have been told by one of them), but they also have a lot of other things to do than to decipher your long and too broad feature requests, as simple as that.
Also note that having a workaround lowers the priority of a feature request generally (and probably also makes the reader look away, as it might be deemed "not too important to implement, as there is a workaround").

PhistucK

guest271314

May 31, 2021, 11:42:24 AM
to Chromium Extensions, PhistucK, Chromium Extensions, Simeon Vincent, guest271314
> I am sorry if this offends you

Sorry is meaningless to me. I cannot be offended.

> By "other territories" I meant, you post a web platform feature request in a Chromium extensions group - know your audience...

The concept applies to both territories.

>  but they also have a lot of other things to do than to decipher your long and too broad feature requests, as simple as that.

I need not provide a list of specification or implementer issues or feature requests that have at least as much detail, or more. Encoding and decoding multipart/form-data in the Streams and HTML specifications is but one example that has spanned several years and issues. When a formal specification is published there are normative and non-normative references that can be quite lengthy, e.g., RFCs, references to prior art, etc. In that regard the feature requests I have filed are no different.

> Also note that having a workaround lowers the priority of a feature request generally (and probably also makes the reader look away, as it might be deemed "not too important to implement, as there is a workaround").

That may or may not be the case. From my perspective, given your rationale, https://github.com/httpslocal and https://github.com/backkem/local-devices-api should stop attempting to formalize a specification and just use the workaround I implemented?

PhistucK

May 31, 2021, 3:20:31 PM
to guest271314, Chromium Extensions, Simeon Vincent
Regarding your last point, a specification feature request is different from a browser feature request. Filing a feature request at crbug.com with a workaround could make the prospective-implementer/decision-maker reader look away.

Also, Chrome usually implements proposals that have gone through some consensus (not always the case if the proposal came from Google), or at least visible engagement from other vendors (negative or positive). If your crbug.com feature requests were closed, trying to promote your specification to be looked at by other vendors and/or web developers (showing interest is always a good sign) is probably the next step. crbug.com stars are also an indicator, but they have not been influencing decisions as much as I would have liked.

Another option is to implement it yourself in Chrome/Chromium, ideally following an intent to prototype announcement in advance, in order not to waste your time implementing something that might not be accepted.

PhistucK
