Drag and drop/selectable notation items


Chris Ward

Sep 28, 2012, 10:37:14 AM
to vex...@googlegroups.com
Hi,
 
Sorry if this has been covered - I've done some searching and seen some vague matches but nothing concrete.
 
Short version of the question: "Is there any work being done (or that has been done) on drag and drop, or on making the notation items on the staff selectable?"
 
 
Longer version
 
A while back I developed my own non-VexFlow basic drag and drop music staff which takes single notes (whole/half/quarter/eighth), rests (the same set), clefs, accidentals, etc.  This uses <DIV>s with JS/jQuery and TouchPunch to make it work on iPad/iPhone.
 
However, it's limited and I'm faced with some choices:
 
  1. do I try to extend it with the current approach?
  2. do I swap to rolling my own but using HTML5 canvas or SVG?
       or (and hence why I am posting here)
  3. do I try to take advantage of the great work done in VexFlow?
 
I'm so impressed with VexFlow but my requirements are for drag and drop.  I am aware of the complexity involved in building up even quite simple groups of notation and how this means more work than just dragging images. 
 
I am aware of VexFlow JSON and have thoughts about using that as an intermediate data model but what I need is to be able to select the notes etc on the staff by mouse click or touch.
 
 
Any feedback on this would be great - I'm chasing my tail wondering which way to proceed.
 
Thanks,
Chris
 
 

Mohit Muthanna

Sep 28, 2012, 2:39:25 PM
to vex...@googlegroups.com
On Fri, Sep 28, 2012 at 10:37 AM, Chris Ward <christopher...@gmail.com> wrote:
> Hi,
>
> Sorry if this has been covered - I've done some searching and seen some vague matches but nothing concrete.
>
> Short version of the question: "Is there any work being done (or that has been done) on drag and drop, or on making the notation items on the staff selectable?"

There have been many attempts at this, but nothing that stuck.

I think this can be bolted onto VexFlow/SVG without too much work, but it will require some thought on how to group the vectors that belong to a single object (e.g., a note, chord, or beam).

Doing it for Canvas would be very messy, because you would have to implement all the "collision detection" stuff yourself (e.g., note boundaries, mouse-over, etc.).
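[Editor's note] That "collision detection" layer might be sketched roughly like this - a minimal, hypothetical example (none of these names are VexFlow API): every item drawn to the canvas registers a bounding box, and a click is tested against the list in reverse draw order so the topmost item wins.

```javascript
// Hypothetical sketch of manual hit testing on a canvas (not a VexFlow API).
// Each drawn item registers an axis-aligned bounding box; a click is tested
// against the list in reverse draw order so the topmost item wins.
const hitRegions = [];

function registerRegion(id, x, y, width, height) {
  hitRegions.push({ id, x, y, width, height });
}

function hitTest(px, py) {
  for (let i = hitRegions.length - 1; i >= 0; i--) {
    const r = hitRegions[i];
    if (px >= r.x && px <= r.x + r.width && py >= r.y && py <= r.y + r.height) {
      return r.id;
    }
  }
  return null;
}
```

In a real page you would call `hitTest` from the canvas's click handler, after translating the event coordinates into canvas space.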

Mohit.
   




--
Mohit Muthanna [mohit (at) muthanna (uhuh) com]

Chris Ward

Sep 29, 2012, 12:35:13 PM
to vex...@googlegroups.com, mo...@muthanna.com
Hi,

Thanks for the feedback - I didn't want to (potentially) attempt to do something if (a) it's been done by someone already, or (b) it's a doomed quest.

I have thought about the SVG thing since I believe that SVG has a DOM and you can get hold of things by id.  

I have recently been playing around with KineticJS.  Using this I was able to start reimplementing my current <DIV> and jQuery thing.  It seems to put the required abstraction layer onto the Canvas to allow the user to drag/drop/select.  I didn't see much on the "drop" side though.  In jQuery UI drag and drop there is a wealth of control over things made ".droppable()".  In KineticJS it appears to be up to the user to use the coordinates of the drag-end event to work out what to do.  I could be wrong on this - I've only done a few hours of playing with KineticJS, so if anyone knows differently please tell me.

Would there be any merit in using something like kineticJS though?

> Doing it for Canvas would be very messy, because you would have to implement all the "collision detection" stuff your self (e.g., note boundaries, mouse over, etc.)

Yes - I've given it a lot of thought and usually take the easy road of giving up.  Then I look at how nice VexFlow is and how much I would need to duplicate (badly, probably) and I get motivated to try again.  I have made a lot of sketches and notes on it.

Any views on kineticJS or any of the similar libs?

Thanks again,

Chris

Michael Scott Cuthbert

Sep 29, 2012, 2:41:55 PM
to vex...@googlegroups.com, mo...@muthanna.com

Hi Chris, Mohit, and community.

 

There is great interest in selecting/editing, so please keep at it and keep the community informed about your progress.  Even if drag-to-edit is far from completion, just getting to the point where any event (clicks, mouseovers, etc.) could bubble up with the object that was selected would be a huge benefit to a lot of VexFlow projects.  Click to play from a certain note/measure, tooltips on chords to say whether they're major or minor, etc. could all be added much more easily with JavaScript events attached to SVG VexFlow objects.
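[Editor's note] The event-bubbling idea could be sketched like this - purely hypothetical names, assuming each rendered note ends up in an SVG group with a known id (VexFlow does not provide this registry itself): keep a map from element id to notation object, and resolve DOM events through it.

```javascript
// Hypothetical sketch: a registry linking SVG group ids to notation objects,
// so any DOM event can be resolved back to the object that was drawn.
const noteRegistry = new Map();

function registerNote(elementId, noteData) {
  noteRegistry.set(elementId, noteData);
}

// In a browser this would be wired up roughly like:
//   svgRoot.addEventListener("click", (e) => {
//     const hit = resolveEvent(e.target.closest("g").id);
//     if (hit) { /* play from this note, show a tooltip, etc. */ }
//   });
function resolveEvent(elementId) {
  return noteRegistry.get(elementId) || null;
}
```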

 

Best,

Michael Cuthbert

http://web.mit.edu/music21/

Chris Ward

Sep 30, 2012, 4:28:57 AM
to vex...@googlegroups.com, mo...@muthanna.com
Hi,

In a recent round of searching I did see someone mention that they had got a limited set of "selectable" items working.

What I'm looking for is the ability to have any/all of the placed items selectable.  This I can do with my (rather clunky) DIV and jQuery thing.

Once they are selectable I can identify them in the data model I have (that represents the staff contents), thus I can update the model and then re-render it.

I am quite new to the use of Canvas, but noticed that KineticJS puts an abstraction layer on top of it and provides drag and drop (and thus must have the ability to identify and select items on the canvas).

I've been looking at a couple of books



This is why I am keen to know what the others in the Community think about kineticJS or similar.

I've not done any dev work on VexFlow yet.  I've downloaded it and run the tests - which I think are very impressive. 

Any pointers to where I'd need to try to drop in the kineticJS stuff would be great.  

Or should I forget that and look at the SVG flavor?

Chris

Cyril

Oct 1, 2012, 3:12:02 PM
to vex...@googlegroups.com, mo...@muthanna.com
Is there a reason why you need each individual object selectable? I have implemented selection editing in a prototype, but you can only select measures and notes, not any other individual elements. But since those VexFlow objects have methods which return positions (getX, getYs, etc.), click detection is pretty easy.
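[Editor's note] A minimal sketch of click detection against note-like objects exposing `getX()` and `getYs()` as described above (the 10px tolerance around each notehead is an arbitrary choice, not a VexFlow constant):

```javascript
// Sketch of click detection against note-like objects that expose getX()
// and getYs(). getYs() returns one y per notehead, so chords get several
// clickable spots. The tolerance box is a made-up value for illustration.
const TOLERANCE = 10;

function findClickedNote(notes, clickX, clickY) {
  return notes.find((note) =>
    Math.abs(note.getX() - clickX) <= TOLERANCE &&
    note.getYs().some((y) => Math.abs(y - clickY) <= TOLERANCE)
  ) || null;
}
```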

In general I'm wary of "drag and drop" notation programs, because they tend to be slow in terms of workflow. Not sure what you're trying to build, though.

Using the DOM to display notation will get really slow if you have large numbers of elements. Canvas will be the quickest. SVG bogs down with large numbers of SVG elements, but the upside is that everything is already an element.

Chris Ward

Oct 2, 2012, 9:25:50 AM
to vex...@googlegroups.com, mo...@muthanna.com
Hi there, thanks for the reply,

The reason I would like drag and drop is that I am doing something to test users and can't expect them to learn the VexFlow "language".  Thus I'd like an intuitive, tactile drag and drop interaction.  As I've mentioned in earlier posts here I have got this running now using a different approach (not VexFlow) and it works on tablets etc.  

The basic building blocks will tend to be notes/rests/accidentals/clefs etc.

Now imagine my user has dragged four quarter-notes to the staff and they are expected to beam them.  This is why I need to be able to "select" the notes.  They may want to beam the first two only, so I would like to be able to select them and then proceed with some action to do the beaming.

Another example is selecting the start and stop notes for ties or slurs.

I hold an intermediate model of the staff so all I really need is to be able to get the placed items to identify themselves somehow so I can update the model.  The rendering of the model would spit out VexFlow format or even VexFlow JSON (anyone got a view on this?)

I was looking into the possibility of integrating KineticJS, which seems to offer the selection features.  It works on the concept of a "Stage" (which I think would be the staff) and then layers on top of that (could be a layer per item? or a notes layer and an accidentals layer, etc.).  Not sure.

I am interested you mention detection by x and y position.  Maybe this is something I could use.  Any further pointers/examples on this would be very much appreciated.  

If I could get to the stage where I could identify WHAT item was clicked on something like this ...


... I would be excited!

Thanks for your time
Chris

Michael Scott Cuthbert

Oct 2, 2012, 2:11:23 PM
to vex...@googlegroups.com, mo...@muthanna.com

Dear All,

 

Could it work faster if the score itself were rendered as a Canvas, but with the individual measures' or staves' positions recorded?  Then onclick a measure could be rendered in SVG on top of the Canvas so that particular notes could be selected.  It could be an intermediate stage that could later be discarded if SVG rendering ever gets fast enough.

 

Finale’s Speedy Entry works something like this, but less transparently than I imagine Canvas + SVG could be rigged to do.

 

My sense is that if this were implemented, by default SVG elements should have the ability to have event listeners attached to them, while Canvas would not. 

 

At music21 we have a webservice that can translate MusicXML, ABC, and other notations to (rudimentary) VexFlow.  I'll send the URL to anyone who is interested in non-client-based translation.

 

Best,

Myke

Chris Ward

Oct 2, 2012, 4:00:46 PM
to vex...@googlegroups.com, mo...@muthanna.com
Interesting.

If I understand correctly, what you are suggesting here is analogous to having a "flat" image representation (the rendered canvas) with a list of "hot spots" for the items that have been rendered on it.  

When the user clicks/touches somewhere within the rendered area the coordinates are used to check against these hot spots.  

If there is a "hit" then the settings for the rendered notation item (that is associated with the hot spot) are used to create a new "object" version in SVG sitting over the top of the canvas.

Is that broadly it?

If so, I do like the idea of a sort of flyweight approach - only items that get selected need to be inflated to object form.  What this would give is the ability to move a placed item - I am wondering if I need the SVG bit or whether I can get away with just Canvas-based drag and drop.
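[Editor's note] The flyweight idea might look roughly like this (all names and the settings shape are hypothetical): the rendered canvas stays flat, and a hit on a hot spot "inflates" the stored settings into an editable object only when needed.

```javascript
// Hypothetical flyweight sketch: hot spots carry just enough settings to
// re-create an editable object on demand; everything else stays as pixels.
const hotSpots = [
  { x: 20, y: 40, w: 14, h: 12,
    settings: { type: "note", pitch: "g/4", duration: "q" } },
];

function inflateAt(px, py) {
  const spot = hotSpots.find(
    (s) => px >= s.x && px <= s.x + s.w && py >= s.y && py <= s.y + s.h
  );
  if (!spot) return null;
  // Only now build a full editable object (e.g. to hand to an SVG overlay,
  // or to a canvas-based drag handler).
  return {
    ...spot.settings,
    selected: true,
    moveTo(x, y) { spot.x = x; spot.y = y; },
  };
}
```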

Something to ponder certainly.

Many thanks.
Chris

Cyril

Oct 3, 2012, 1:06:15 AM
to vex...@googlegroups.com, mo...@muthanna.com
I implemented hot spots in a rectangular way. You can see a prototype of what I am working on here:

Michael Scott Cuthbert

Oct 5, 2012, 3:49:10 PM
to vex...@googlegroups.com, mo...@muthanna.com

Can I just say “BRAVO!” to this implementation – it really shows what Vexflow is capable of! It’s really fantastic.  Will you consider releasing the non .min.js version of the editor?  Very cool and responsive.

Cyril

Oct 5, 2012, 4:28:10 PM
to vex...@googlegroups.com, mo...@muthanna.com
Thank you very much! I've spent a lot of time working on this. I will likely release the code fully, but not until I clean it up and refactor a lot. Some parts are embarrassingly dirty because the UI is pretty hacked together. But I can briefly describe how it works, because it's actually quite simple.

The thing is, I'm really only using VexFlow for the rendering. I have separate objects that essentially run parallel to the VexFlow objects: an Editor.Measure object renders out a Vex.Flow.Stave, an Editor.Note renders out a Vex.Flow.StaveNote, etc. After every action, I rebuild all the Vex.Flow objects based off the Editor objects. So essentially I have my own custom abstract classes which get modified, and then I have the Vex.Flow objects for the graphical representation.

For the click functionality, I just test the click against the VexFlow object coordinates on the canvas. If I find the object that was clicked, then, since the Editor objects run in parallel, I can quickly find the abstract object and modify it. Then I rebuild the VexFlow objects and redraw everything.
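[Editor's note] The parallel-model pattern described here could be sketched as follows; the Editor.* names come from the post, but this code is an illustration, not the actual editor, and `toRenderSpec` is a placeholder for building real Vex.Flow objects.

```javascript
// Sketch of the "parallel model" pattern: editor objects hold the
// authoritative state; after every edit the render list is rebuilt from
// scratch rather than mutated in place.
class EditorNote {
  constructor(pitch, duration) {
    this.pitch = pitch;
    this.duration = duration;
  }
  // Placeholder for constructing the matching Vex.Flow.StaveNote.
  toRenderSpec() {
    return { keys: [this.pitch], duration: this.duration };
  }
}

class EditorMeasure {
  constructor() { this.notes = []; }
  addNote(note) { this.notes.push(note); }
  // Rebuild the full render list after every action, as described above.
  rebuild() { return this.notes.map((n) => n.toRenderSpec()); }
}
```

After a click resolves to an EditorNote, you mutate it, call `rebuild()`, and redraw the whole measure.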

Making all this happen is just a small matter of programming ;)

I have a whole "vision" where I integrate this editor into a website for social practicing. So I'm wary of really releasing or open-sourcing any code until that vision starts to develop.  Is that douchey? I probably don't really need to be so proprietary, but sometimes I am just paranoid! haha

Chris Ward

Oct 9, 2012, 7:39:58 AM
to vex...@googlegroups.com, mo...@muthanna.com
Many thanks Cyril.

It's very interesting to see what you've done - thank you very much for sharing the link.  I've not had a chance to play around with it very much yet, but it's got a lot of the features I was thinking of embarking on - in particular the selecting of a placed item.  Once you can identify what the user's clicked on, you can get its context (not the Canvas "context") from your data model and apply changes.

I've not done any coding in VexFlow yet - did you have to set up the hotspot list yourself, or is there something available "out of the box"?

Please keep us posted on any further development on this.

Many thanks again,
Chris

Cyril

Oct 9, 2012, 5:40:55 PM
to vex...@googlegroups.com, mo...@muthanna.com
Many of the VexFlow objects have methods which return relative coordinates of the object on the canvas. So, essentially, I just test each item using these coordinates to see if it's being clicked. Keep in mind that's a simplified explanation, the actual algorithm is more optimized.