And by the way, it would be nice to drop naming conventions like getLength()/setLength(), as both SpiderMonkey and V8 (AFAIK) provide getter/setter structures.
Good idea - does anybody know what is the situation with our other
targets (Rhino, JSCore) ?
I think the cleanliness is so much better.
I like the basic API as proposed. I'd likely add SHA256/SHA384/SHA512
and CRC32, plus an optional mimetype property so that something that
gets passed the binary object can deal with it more easily. Along the
same lines, I think some kind of toJSON might be cool, where the
mimetype and the Base64 data are wrapped up as JSON.
I agree, though you should also be able to pass in streams (namely
Files) so you can hash large files without loading the whole thing
into memory. That might be why Davey suggested they be on the File
object (?)
-Tom
The idea of an encodings module works for me, though it would add a
dependency from Binary to that module to make the toJSON method work.
Something like
var jsonContent = myBinaryObject.toJSON('MD5');
The potential worry is that Binary depends on Encodings, which depends
on File. That may be fine, although I'd like to avoid the situation
where we get import explosions.
I like the cleanliness of automatic getters and setters as well. And they
should be fine for, say, networking code that will only run on the server.
But one of the main reasons we're all interested in SSJS is the possibility
of sharing code between the browser and the server.
If this binary object was implemented in javascript, why shouldn't it be usable
on both the client and server?
Starr
Ondrej Zara wrote:
> [snip]
>> And by the way, it would be nice to drop naming conventions like
>> getLength()/setLength() as both SpiderMonkey and V8 (AFAIK) provide
>> getter/setter structures.
>>
>>
>
> Good idea - does anybody know what is the situation with our other
> targets (Rhino, JSCore) ?
>
All engines have getters and setters at the native level (including
Rhino, JSCore, and even JScript), they must in order to provide the host
objects in the browser (many of which rely on getter/setters). I would
really prefer to keep this more array-like and use the length property
and allow access to the bytes through numerical index access:
binary[3] // gets the 4th byte
binary.length // returns the number of bytes
binary[2] = 22 // puts 22 in the third byte slot
(NodeList could be a good model for this).
Thanks,
Kris
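The NodeList-like, array-style access Kris describes can be illustrated like this. A typed array (Uint8Array, a later standard) is used here only as a stand-in for the proposed Binary type.

```javascript
// Array-like byte access: numerical indexing plus a length property,
// with no getLength()/setLength() methods. Uint8Array is just a
// convenient stand-in for the proposed Binary object.
const binary = new Uint8Array([10, 20, 30, 40]);

const fourth = binary[3];   // gets the 4th byte
const len = binary.length;  // returns the number of bytes
binary[2] = 22;             // puts 22 in the third byte slot
```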
I am not a huge fan of this. Please note that both Base64 and MD5/SHA1
are _not_ defined on "string"; these functions are defined for an
array of bytes. From the coder's view, there is no bijection between a
JS string and the corresponding byte structure: this conversion must
be performed under certain assumptions (for instance, that the string
can be decomposed into bytes using the UTF-8 standard).
What I am trying to say: it is not generally possible to compute a
unique hash for "žščřďťň" (letters very common in my language),
because there are numerous ways to represent these in binary. A more
semantic approach is to first convert the string to binary (at this
phase, an encoding should be supplied!) and then perform the hash
computation on the binary object.
Ondrej
I don't think that's the case at all; it's about common server-side
APIs. Trying to do something like this and have it work with, say,
IE5 would be a complete nightmare. I think we are more concerned with
having things that work like 'privileged' JavaScript, e.g. FF plugins.
I agree some of the APIs may be client-runnable, but sacrificing
ourselves on the altar of browser compatibility is a really horrible
thought for me. Client use is a nice plus but largely secondary.
> If this binary object was implemented in javascript, why shouldn't it be usable
> on both the client and server?
Basically, yes, if the features we want need privileged access. That
may or may not be the case for Binary.
They built a browser on it. Chrome uses getters and setters extensively
for its host objects.
Kris
var $ = require('jquery').jQuery
would be neat (and would actually work in Jaxer as we have the real
DOM, but we're an odd one from that perspective)
for me it's
+1 module support on client
-1 server modules lobotomized to run on the client
In Jaxer we have a Jaxer.isOnServer property that can be checked and
behavior adjusted accordingly. This lets us have code that doesn't
depend on the server where needed for client-side sharing. I usually
hate it when I have to deal with that, as it drops me back into the
multi-browser lowest-common-denominator syntax and abilities.
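The environment-check pattern described above might look like the following sketch. The isOnServer flag here stands in for something like Jaxer.isOnServer; deriving it from the presence of a `window` global is just one possible heuristic, and readConfig is a hypothetical example function.

```javascript
// Branch on whether we're running server-side or in a browser, so the
// same file can be shared between the two environments.
const isOnServer = (typeof window === 'undefined');

function readConfig() {
  if (isOnServer) {
    // server-only path: e.g. read configuration from the filesystem
    return { source: 'filesystem' };
  }
  // client path: fall back to something browser-safe
  return { source: 'localStorage' };
}
```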
I hope that great features serverside would eventually be avail on the
client, but it's a security nightmare if nothing else, and I can't see
browser vendors racing to give us file system or socket access. (HTML5
provides some of this of course)
Ok, let me restate that. This is the reason that I'm interested in SSJS. I think
that to make JS a really great tool for creating rich applications, we need
server-level tools and a standard library.
And granted, privileged code will never run in the browser. And as Kris mentioned,
maybe you can't efficiently implement certain parts of the library in javascript.
I just wanted to bring up the subject now because - even if we're only making
certain pieces of the library available to the browser - we're going to
need to work that out ahead of time.
And nobody said anything about IE5. As long as you're not doing DOM stuff, it's
not a big deal to make JS code work across all modern browsers.
SH
Ionut Gabriel Stan wrote:
> Actually, starting with FF 3 we're able to read local files by means of
> the file input element. There's also an active W3C draft[1] regarding
> this, which is edited by a Mozilla employee.
>
> On my blog (link in signature) there's an exhaustive tutorial about AJAX
> file uploads in FF3 (<- shameless plug :D). The binary methods are
> File.getAsBinary() and XMLHttpRequest.sendAsBinary(). Anyway, I do NOT
> advocate for supporting a certain API in the browsers. All I want is JS
> on the server.
>
> [1] http://dev.w3.org/2006/webapi/FileUpload/publish/FileUpload.html
>
That's the same link I shared (just in a different format).
getAsBinary() returns the binary data as a hex-encoded string. I think
the point of this whole exercise was to provide something better than
strings, dealing with binary data through hex-encoded strings is pretty
clunky. Blob was an attempt at that (in your/our link).
Kris
I don't think you want to subclass an array to make this happen. JS
arrays are too "heavy" to represent binary data streams.
You're much better off malloc'ing a bunch of RAM, hooking the JS
resolver for properties on your byte array (IF they are to be used
singly), and de-referencing and returning as a JS String on demand.
Recall that myBytes[4] is roughly equivalent to the property lookup
myBytes["4"].
But most of the time you won't be looking at individual bytes. So
you're going to want a meaningful .toString method. And of course,
whatever File object gets spec'd out will want to know about this
class.
I don't like things like getLength / setLength -- those should be
length getters and setters. Also, how is setting the length
meaningful? What if you make it bigger?
Adding String.toBinary is not really necessary IMHO. It won't get
called during type promotion, so why not use a cleaner syntax like
var myBytes = new Binary("hello");
I definitely don't think Base64 encoding/decoding belongs here. That
should be in another class that understands String and Binary. md5
and sha should be in a crypto lib.
Binary.push, pop, shift and unshift are probably not useful enough to
standardize on now, although I suppose they wouldn't hurt. If we're
doing those, we should also slice and splice.
getByte, setByte should be done with array brackets, I think. If not,
they should be "byteAt()" to mirror String.charAt(). Note that since
strings are immutable, there is no String.setByteAt() either.
Wes
--
Wesley W. Garland
Director, Product Development
PageMail, Inc.
+1 613 542 2787 x 102
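The accessor style Wes prefers (a length getter rather than getLength(), and a byteAt() that mirrors String.charAt()) can be sketched like this. BinarySketch is a hypothetical stand-in, not the proposed API.

```javascript
// A read-only `length` getter instead of getLength()/setLength(), plus
// a byteAt() accessor modeled on String.prototype.charAt.
function BinarySketch(bytes) {
  this._bytes = bytes.slice(); // private copy of the byte values
}

Object.defineProperty(BinarySketch.prototype, 'length', {
  get: function () { return this._bytes.length; } // getter only; no setter
});

BinarySketch.prototype.byteAt = function (i) {
  return this._bytes[i]; // mirrors String.charAt, but returns a number
};

const b = new BinarySketch([1, 2, 3]);
```

Because only a getter is defined, there is no way to "set the length" and no need to answer what growing it would mean.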
I have not proposed any setLength(). Not sure how this got into the
thread; I can assure you that nothing like this is in my proposal
(which I encourage you to look at prior to discussing it :) ).
> Adding String.toBinary is not really necessary IMHO. It won't get
> called during type promotion, so why not use a cleaner syntax like
>
> var myBytes = new Binary("hello");
>
> I definitely don't think Base64 encoding/decoding belongs here. That
> should be in another class that understands String and Binary. md5
> and sha should be in a crypto lib.
>
I have already expressed my negative feelings about this a few posts
before: http://groups.google.com/group/serverjs/msg/0243f943fffb542e
Ondrej
/**
* Decodes Encodes the data in Base64
* @returns {string}
*/
Binary.prototype.base64encode = function() {};
---
Is the above supposed to be as below (or not exist at all)?
/**
* Decodes the data in Base64
* @returns {Binary}
*/
Binary.prototype.base64decode = function() {};
Regards,
Scott Christopher
Possibly. If "base64encode" converts Binary to String, it seems
logical that the inverse function converts String to Binary.
Specifically, that
var a = "anyString";
a.toBinary().base64encode().base64decode().toString() == a
:-)
On the other hand, one can add base64decode to binary as well.
O.
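The round-trip identity Ondrej describes can be sketched with Node's Buffer standing in for the proposed toBinary/base64encode/base64decode methods (an assumption for illustration only).

```javascript
// String -> binary -> Base64 -> binary -> string should give back the
// original value, provided the same encoding is used on both ends.
const a = 'anyString';

const encoded = Buffer.from(a, 'utf8').toString('base64'); // binary -> Base64 string
const decoded = Buffer.from(encoded, 'base64');            // Base64 string -> binary
const roundTrip = decoded.toString('utf8');                // binary -> string again
```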
>
> I think that the base64, sha1 and md5 functions should all be removed
> and made into their own classes.
>
> I'd rather have a Base64 class with .encode() and .decode()
> You could pass a String or a Binary data type to .encode() and it
> would do its thing.
>
> If I have a String in the future that has base64 data or I want to md5
> or sha it, it seems to make more sense that I can just Base64.decode()
> it rather than having to go through a Binary object.
I dislike an API that takes "whatever" for a type and then converts on
the fly. It's messy, because you can't read through your code and
immediately understand what kind of data you are working with. It also
leads to messy, convoluted implementations.
I'd rather see encode() and encodeBinary() or something to that
effect. I'm OK with those being properties of a Base64 Object though.
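The typed API preferred above might look like the following sketch: a separate entry point per input type instead of one method that accepts "whatever". Base64Codec and its method names are hypothetical, and Node's Buffer stands in for the proposed Binary type.

```javascript
// One method per input type, so the reader of calling code always knows
// what kind of data is being encoded.
const Base64Codec = {
  encode: function (str) {          // String in -> Base64 string out
    return Buffer.from(str, 'utf8').toString('base64');
  },
  encodeBinary: function (buf) {    // Binary (Buffer) in -> Base64 string out
    return buf.toString('base64');
  },
  decode: function (b64) {          // Base64 string in -> Binary out
    return Buffer.from(b64, 'base64');
  }
};
```

Calling code then reads unambiguously: Base64Codec.encode('hi') for strings, Base64Codec.encodeBinary(buf) for binary data.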