Automatic Fingering Prediction 0.1ish

Synthesia is a living project. You can help by sharing your ideas.
Search the forum before posting your idea. :D

No explicit, hateful, or hurtful language. Nothing illegal.
Frost
Posts: 51

Post by Frost » 07-28-09 5:27 pm

I expected it was going to take a while before we started getting halfway usable results out of this.
A single one-finger difference in 228 notes is something like 99.6% accuracy, right? That's incredible. Congratulations.
Nah, Hanon is mostly repeats, so the accuracy isn't really that good (though I'm glad it finds the same fingering in each starting key; I feared it would find different fingerings depending on the presence of intermediate black keys). Also, Hanon no. 1 (and the few other Hanons I know) have very obvious, easy-to-find fingerings. In actual, harder pieces with thumb passes, wrist movement, etc. it may have problems. But I will say it's definitely _halfway_ usable right now; the predictions may not be the best, but they definitely have some kind of logic to them, and you don't find yourself in a weird position, wondering how to jump from one note to the next with the given fingering.
If AFP already performs that well, we can start thinking about including the full sets of

1. Hanon
2. Czerny

with their full correct fingerings in Synthesia.
Yes, I think they would be immensely helpful. The metadata editor should be released in a few weeks according to Nicholas (right?), so the file format should be designed by now (I'm curious how the finger information is attached to a specific note). Once it's complete, with a few tweaks and hacks, the user-corrected fingering information could be written into the metadata file by the software. From Synthesia's viewpoint, I think reading the metadata and showing a finger number on the keys while playing should be relatively easy for Nicholas. The only problem is converting the exercises; I haven't encountered free Hanon MIDIs besides 10 of them, plus a collection of Czerny's (60 and 100), so creating the MIDIs may be necessary.

TonE
Synthesia Donor
Posts: 1180

Post by TonE » 07-29-09 12:22 am

Frost wrote:(I'm curious about how the finger information is added to a specific note). Once it's complete, with a few tweaks and hacks, the user-corrected fingering information could be written into the metadata file by the software.
I would suggest two (or later three) different methods of writing fingering information to .mid files:
1. Algorithmically, using the help of AFP: we could take the output of AFP and add the fingering numbers directly into the MIDI as Lyrics and/or Text and/or Marker events. In my case I would use the MIDI programming language Keykit, or better, its command-line version lowkey.exe, for this purpose.
2. Manually, using the metadata file editor Nicholas will create.
3. Algorithmically, using the help of AFP, where the metadata file editor takes over the information from AFP automatically.
It could be useful to have two different layers/versions of fingering in the metadata file editor: in the first layer we would add the algorithmically generated fingering numbers, and in the second layer users could add their modifications. Synthesia would always check both layers and, if the second layer is not empty, prefer those fingerings. The result would be that every MIDI file gets some fingering suggestions, provided the files were prepared properly (left/right separation was done correctly beforehand).
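The two-layer lookup described here could work roughly like this minimal sketch (Python for brevity; the function name and data shapes are my own illustration, not Synthesia's actual metadata format):

```python
# Hypothetical sketch of the two-layer idea: read both layers and prefer
# the user layer wherever it has an entry, so AFP suggestions survive
# unless the user explicitly corrected a note.

def resolve_fingerings(algorithmic, user):
    """Merge per-note fingerings: user corrections win over AFP output.

    Both arguments map a note index to a finger number 1..5;
    an empty user layer leaves the AFP suggestions untouched.
    """
    merged = dict(algorithmic)
    merged.update({i: f for i, f in user.items() if f})  # layer 2 overrides
    return merged

# AFP suggested fingers 1 2 3 for three notes; the user corrected note 1.
afp_layer = {0: 1, 1: 2, 2: 3}
user_layer = {1: 3}
print(resolve_fingerings(afp_layer, user_layer))  # {0: 1, 1: 3, 2: 3}
```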
Frost wrote:From synthesia viewpoint, I think reading the metadata and showing a finger number on keys while playing should be relatively easy for Nicholas. The only problem is converting them; I haven't encountered (free) Hanon's beside 10 and a collection of Czerny's (60 and 100), creating the midis may be necessary.
I already have the following parts in .mid format, which I can of course share here; I hope Nicholas will then include them in the Synthesia package:

1. Hanon 1..43 (I even split some exercises into smaller parts, like exercises 39..43, which are the scales and arpeggios)
2. Czerny 1..100 (meaning it is complete)

Rickeeey
Posts: 647

Post by Rickeeey » 07-29-09 3:22 am

TonE wrote:I already have the following parts in .mid format, which I can of course share here; I hope Nicholas will then include them in the Synthesia package:

1. Hanon 1..43 (I even split some exercises into smaller parts, like exercises 39..43, which are the scales and arpeggios)
2. Czerny 1..100 (meaning it is complete)
Wow, could you share all those exercises with us, at least the Hanon ones?

Nicholas
Posts: 12289

Post by Nicholas » 07-29-09 4:39 pm

Frost wrote:The metadata editor should be released in a few weeks according to Nicholas (right?), so the file format should be designed by now (I'm curious how the finger information is attached to a specific note)...
The first pass of the editor (that will only have simple fields like title, composer, difficulty, and tags) is nearly finished. This first effort was to basically get my foot in the door with metadata. Once it's complete, the half-dozen or so tasks on the voting list down in the "with dependencies" section will be unleashed on the masses and I fully expect them to be voted up rapidly.

That said, AFP is a special case. I'll be "accepting" the "Manual Note Fingerings" feature (which now desperately needs to be renamed and re-worded) for 0.7.1 regardless of votes. With results as promising as yours and with the potential impact being as strong as it is, it would be a mistake not to make it happen as soon as possible.

(I will also have to take that little lock symbol off the feature. Since note fingerings will be community-developed, it doesn't make any sense for me to make any money from it.)

Anyway, to answer your other question, I'm *also* curious how finger information will be added to specific notes. ;)

At first, the (open source) C# metadata editor was just going to be a decent code-example for community members to see how to parse and serialize the metadata XML. But, there was always the tricky problem of making note and track information available too. (I wasn't tackling that this time around, but I have had it in mind the whole time.)

My solution for now, to get as much code reuse as possible, was to make a little (C++) library DLL out of my Synthesia MIDI code that can handle things like loading, format-0 MIDI splitting, quantization, etc., and make it available via some interop layer in the C# editor.

Now, if I go to all that trouble, and also taking into account that the first community effort we're seeing (before the editor has even been released!) also happens to be a .NET app, I'm wondering if I shouldn't take the editor a tiny bit further and make it a platform that supports plug-ins.

To make your app a plug-in, you'd basically implement some interface. One of the interface methods would be to return a Windows.Forms.Control for display on a generated tab. Plug-ins would be placed in a nice song-based context where they could ask the top-level app for things like the song's notes. Finally, when it came time to load/save, each plug-in would get its chance to read/write from the xml element of that particular song.
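A rough sketch of that interface might look like the following (Python for brevity rather than the editor's actual C#/Windows.Forms; every method name here is a guess for illustration, not Synthesia's real API):

```python
# Hypothetical plug-in contract: a UI control for a generated tab, plus
# load/save hooks against the song's metadata element.
from abc import ABC, abstractmethod

class MetadataPlugin(ABC):
    @abstractmethod
    def create_control(self):
        """Return a UI control to display on the plug-in's generated tab."""

    @abstractmethod
    def load(self, song_element):
        """Read this plug-in's data from the song's metadata element."""

    @abstractmethod
    def save(self, song_element):
        """Write this plug-in's data back into the song's metadata element."""

class FingeringPlugin(MetadataPlugin):
    def __init__(self):
        self.fingerings = {}          # note index -> finger 1..5

    def create_control(self):
        return "<fingering tab control>"   # placeholder for a real widget

    def load(self, song_element):
        self.fingerings = dict(song_element.get("fingerings", {}))

    def save(self, song_element):
        song_element["fingerings"] = dict(self.fingerings)

# Round-trip: load, apply a user correction, save.
plugin = FingeringPlugin()
song = {"fingerings": {0: 1, 1: 2}}
plugin.load(song)
plugin.fingerings[1] = 3
plugin.save(song)
print(song["fingerings"])  # {0: 1, 1: 3}
```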

I'm not sure how that sounds yet. What do you think?

Nicholas
Posts: 12289

Post by Nicholas » 07-29-09 4:55 pm

More thoughts:

Regardless of the plug-in architecture decision, the per-note metadata question is still very real. If you have any suggestions, I still haven't come up with anything excellent. The main complication is that Synthesia does a fair bit of manipulation to songs during the load process. It's not just format-0 files that are broken up into additional tracks; tracks whose events refer to more than one channel are broken up too. On top of that, as soon as the manual track-splitting metadata gets in there, it's even more complicated, because the user will be defining their own splits.

Should the note list available via the utility DLL be pre-split or post-split? If pre, I'll have to start tracking the changes occurring during the splitting process much more carefully. If post, the order that certain plug-ins are allowed to execute in (e.g. the track-splitting plug-in) may impact the input the following plug-in gets.

The easier answer feels like taking the responsibility on myself and giving all plug-ins the same note lists. Then, the in-game song loading just has to get a little smarter and involve the metadata right in the loading process.

But, some of the earlier plug-in processes could represent added value to later ones. Using the manual-split plug-in to delineate both hands in a single-track MIDI would be valuable knowledge for the fingering plug-in...

Again, I'd love to hear any thoughts you guys had. It's a tough problem.

vicentefer31
Posts: 899
Location: Madrid, Spain

Post by vicentefer31 » 07-29-09 5:16 pm

I have an idea:
Example:
- We have a song with two tracks: track 1 = left hand and track 2 = right hand.
- Synthesia creates two new tracks: track1fingerings + track2fingerings.
- Synthesia edits those new tracks and adds fingerings. Where to add the fingerings? In the velocity.
Example: v=0 (no finger), v=1 (left hand 1), v=2 (left hand 2), ... v=10 (right hand 5)
256 On ch=1 n=60 v=2
256 On ch=1 n=60 v=5
512 On ch=1 n=60 v=3
512 On ch=1 n=64 v=7
768 On ch=1 n=64 v=9
768 On ch=1 n=67 v=4
Synthesia loads those "new tracks" like ghost tracks and uses them only for fingerings.
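The velocity-encoding step could be sketched like this (Python; the function name and the tuple representation of note-on events are my own illustration, not any real Synthesia or MIDI-library API):

```python
# Sketch of the ghost-track idea: copy one hand's note-ons and overwrite
# each velocity with a finger code, leaving the real track's dynamics
# untouched. Encoding per the proposal: left hand -> 1..5, right -> 6..10.

def make_fingering_track(events, fingers, hand):
    """events: list of (tick, note, velocity) note-ons for one hand.
    fingers: one finger number 1..5 per event."""
    offset = 0 if hand == "left" else 5
    return [(tick, note, offset + f)
            for (tick, note, _v), f in zip(events, fingers)]

right = [(256, 60, 80), (512, 64, 90), (768, 67, 70)]
ghost = make_fingering_track(right, [1, 3, 5], "right")
print(ghost)  # [(256, 60, 6), (512, 64, 8), (768, 67, 10)]
```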

Edit:
When I said I have an idea, I didn't say it was a good idea :lol:
Picasso: I am always doing that which I cannot do, in order that I may learn how to do it.

Nicholas
Posts: 12289

Post by Nicholas » 07-29-09 5:35 pm

I already foresee these metadata files becoming large. MIDI excels at being *extremely* compact (to the detriment of readability). These are going to be verbose XML. I guess we'll have to see how it goes... ;)

Still, you may be on to something. A sort of plug-in based filter graph might not be a bad way to go. Like, graphical and everything.

As an example: you open a two-track MIDI, so you start with two (unlabeled, unknown) boxes. You can then drag a "track muxer" filter onto the graph. It has two (or more as you need them?) inputs, and a single output. You draw lines from the original two track output pins to the joiner.

Then, drag a "finger predictor" filter onto the graph and connect the joined single-track output to the finger predictor input.

I'm not sure what the output of that would be... maybe a track+fingering type of thing. I was going to mention that right at the end of the chain you'd add a "track label" filter. That would be a similarly strange output. Like, it's still a track but now it's annotated with some extra info, too.

In either case, I do like the filter graph idea. It takes advantage of (and solves) all the plug-in ordering problems. It's also a well-defined interface that's been in use for years and years. It would also give the top-level metadata editor a better reason to exist than "decent code example".

Instead of providing a UI control for a generated tab on the editor form, the plug-in architecture would require plug-in authors to provide a UI control that would be displayed as some property window of their particular filter graph node.
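The ordering property of a filter graph can be shown with a tiny sketch (Python, purely illustrative; the "muxer" and "predictor" node names echo the examples in the post, and the lambdas are invented stand-ins, not real Synthesia code):

```python
# Minimal filter-graph runner: each node transforms its inputs, and nodes
# execute in dependency order, which is exactly what resolves the plug-in
# ordering problem. No cycle detection; this is a sketch only.

def run_graph(nodes, edges, sources):
    """nodes: name -> function taking a list of inputs, returning output.
    edges: (upstream, downstream) pairs. sources: name -> initial data."""
    done = dict(sources)
    pending = [n for n in nodes if n not in done]
    while pending:
        for n in list(pending):
            ups = [u for u, d in edges if d == n]
            if all(u in done for u in ups):             # inputs ready?
                done[n] = nodes[n]([done[u] for u in ups])
                pending.remove(n)
    return done

nodes = {
    "muxer": lambda ins: ins[0] + ins[1],               # join two tracks
    "predictor": lambda ins: [(n, 1 + i % 5) for i, n in enumerate(ins[0])],
}
edges = [("track1", "muxer"), ("track2", "muxer"), ("muxer", "predictor")]
out = run_graph(nodes, edges, {"track1": [60, 62], "track2": [64, 65]})
print(out["predictor"])  # [(60, 1), (62, 2), (64, 3), (65, 4)]
```

The key point the sketch shows: the predictor never runs before the muxer, because its input pin isn't satisfied until the muxer has produced output.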

vicentefer31
Posts: 899
Location: Madrid, Spain

Post by vicentefer31 » 07-29-09 6:07 pm

Let me think about this... yes, more or less what I was trying to say :lol:
Attachments
thinking.png
Picasso: I am always doing that which I cannot do, in order that I may learn how to do it.

TonE
Synthesia Donor
Posts: 1180

Post by TonE » 07-29-09 10:29 pm

vicentefer31 wrote:I have an idea:
Example:
- We have a song with two tracks: track 1 = left hand and track 2 = right hand.
- Synthesia creates two new tracks: track1fingerings + track2fingerings.
- Synthesia edits those new tracks and adds fingerings. Where to add the fingerings? In the velocity.
Example: v=0 (no finger), v=1 (left hand 1), v=2 (left hand 2), ... v=10 (right hand 5)
256 On ch=1 n=60 v=2
256 On ch=1 n=60 v=5
512 On ch=1 n=60 v=3
512 On ch=1 n=64 v=7
768 On ch=1 n=64 v=9
768 On ch=1 n=67 v=4
Synthesia loads those "new tracks" like ghost tracks and uses them only for fingerings.

Edit:
When I said I have an idea, I didn't say it was a good idea :lol:
"Encoding" fingering information as note velocities in new, copied MIDI tracks is just another nice idea for how it could be done. The nice thing about using Lyrics/Marker/Text events would be that many other MIDI-supporting applications, like sequencers and MIDI players, already have built-in display support for such events, so those MIDI files could be used, viewed, and edited there as well.

For simplicity, I would suggest just using normal text files for a start, with a new line for each note. As an example, for a MIDI file named classic.mid, the right-hand fingering file could be named R.classic.mid.txt, with the following content for the first two measures:

Code:

1
2
3
4

1
3
4
5
We can use an empty line to mark the end of each bar. It would be easy to write, no extra editors would be necessary, and the output from AFP could be used directly.
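A parser for this one-finger-per-line format could be as small as the sketch below (Python; the `#` comment syntax is my own assumption, since the proposal only says comments could be allowed per line):

```python
# Sketch: parse a fingering text file where each line is one finger
# number and a blank line marks the end of a bar.

def parse_fingering_text(text):
    """Return a list of bars, each a list of finger numbers 1..5."""
    bars, current = [], []
    for line in text.splitlines():
        stripped = line.split('#')[0].strip()   # assumed comment syntax
        if not stripped:                        # blank line = bar break
            if current:
                bars.append(current)
                current = []
        else:
            current.append(int(stripped))
    if current:
        bars.append(current)
    return bars

sample = "1\n2\n3\n4\n\n1\n3\n4\n5\n"
print(parse_fingering_text(sample))  # [[1, 2, 3, 4], [1, 3, 4, 5]]
```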

Nicholas, if you can make Synthesia display Lyrics and Marker events from MIDI files, I can easily make a tool that adds such text-file information directly into the MIDI file. I would use AutoHotkey and lowkey.exe for this purpose.

Actually, any simple text format would work, for example this:

Code:

1234 1345
or this

Code:

1 2 3 4 1 3 4 5
as we have now in AFP.
The first method above would have the advantage that you could add some comments to each line if you wanted.

But imagining that we can later also have chords in one hand, I think vicentefer31's idea is just great; it would be a precise definition. Only I would suggest the values 1,2,3,4,5 for the left hand but 10,20,30,40,50 for the right hand, so we could keep the "standard 1..5 thinking" for fingerings. Or just the opposite, using 10,20,30,40,50 for the left hand and 1,2,3,4,5 for the right hand, as we would most probably use right-hand fingering annotations more often.

Frost
Posts: 51

Post by Frost » 07-31-09 3:54 pm

Actually any simple text format would work, for example also this:

Code:

1234 1345
But that approach would not be suitable for polyphonic pieces.
AFP currently uses two output types. One consists of an array of all 88 notes, so it contains both which notes are pressed and the accompanying finger (think of a vector of array[88] snapshots over time).
The other, simpler one uses 5 lines of output for polyphonic melodies; an example:
----time ---->
1:
2: 1 2 1
3: 1 2 3 4 2 2
4: 5 4
5:

So, if 5 notes are played at the same time, all 5 lines would have notes; otherwise, fewer than 5. But that approach does not easily allow for user-friendly line breaks after bars, and it is also hard to edit. A table-like structure would be better for filling in the data manually, maybe in a spreadsheet program, or better, in the editor (which could also show the accompanying notes). For clarity from the software's perspective, however, fingering information embedded directly with the note itself would be better and much clearer. Encoding fingers as velocities may be a bit of overkill (velocities change, after all; we shouldn't overwrite them, since they carry the dynamics), but if the MIDI file has rarely used fields, the fingers could be appended there.
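To make the 5-line layout concrete, here is a toy reconstruction of it (Python; the `(time, voice, finger)` representation and the `'-'` placeholder are my own assumptions for illustration, not AFP's internals):

```python
# Sketch: rebuild the 5-line view (one line per voice, time running left
# to right) from per-note data.

def five_line_view(notes):
    """notes: list of (time_index, voice 1..5, finger) triples.
    Returns one list per voice, with '-' where the voice is silent."""
    steps = 1 + max((t for t, _, _ in notes), default=-1)
    grid = {(t, v): f for t, v, f in notes}
    return {v: [grid.get((t, v), '-') for t in range(steps)]
            for v in range(1, 6)}

# A two-note chord (voices 2 and 3) followed by a single note in voice 3.
view = five_line_view([(0, 2, 1), (0, 3, 1), (1, 3, 2)])
print(view[2])  # [1, '-']
print(view[3])  # [1, 2]
```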
What Nicholas suggested also makes a lot of sense. The user can do what they want in the editor by wiring the input MIDI to different modules/plug-ins; then, when saving, the editor can store it as a compact binary file. Even in a human-readable format such as an XML file, it would be hell to edit concurrent notes with different durations, different tracks, instruments... This way, we can just leave the saving to the editor, and each module can show the information it needs, making editing very easy.

Frost
Posts: 51

Post by Frost » 08-08-09 9:13 am

The new version got delayed, but I'm still working on it. I've finished most of what I was planning: it loads MIDIs, etc. Now I'm improving the MIDI import to differentiate between channels and polyphonic melodies. The more important news: I've finished the first version of the genetic algorithm for finding optimal parameters. I tried it with a longer MIDI, and it found the parameters that give exactly the same fingering. I'm thrilled! Now we need many songs with fingerings to optimize the parameters against.
Anyway, just thought I'd share. I'll try to post more information ASAP, but things are quite busy around here.

Choul
Posts: 487

Post by Choul » 08-08-09 12:02 pm

That's great news, @Frost. Which MIDI files do you use, if I may ask? Does it already work for both left and right hands? Do we always need to use MIDI files with two tracks to use this feature? Or am I asking too much now? :mrgreen: I'm sorry, I haven't followed this thread any more because it became too technical for me, so maybe I'm asking what's already known.

TonE
Synthesia Donor
Posts: 1180

Post by TonE » 08-08-09 12:13 pm

Great news from Frost again.
Choul wrote:Do we always need to use MIDI files with two tracks to use this feature?
Yes, it should be like that.

Choul
Posts: 487

Post by Choul » 08-08-09 1:25 pm

TonE wrote:Great news from Frost again.
Choul wrote:Do we always need to use MIDI files with two tracks to use this feature?
Yes, it should be like that.
That's great news too. :D No more 1 track files.

vicentefer31
Posts: 899
Location: Madrid, Spain

Post by vicentefer31 » 08-08-09 4:42 pm

Thanks Frost, your news is very encouraging.
Picasso: I am always doing that which I cannot do, in order that I may learn how to do it.

vicentefer31
Posts: 899
Location: Madrid, Spain

Post by vicentefer31 » 09-14-09 4:30 am

I wonder if Frost could tell us how his "Automatic Fingering Prediction Software" is going to know, from a MIDI song, whether a track is for the right or the left hand. I suppose there are a few options:
a) Tracks on a specific channel. Example: tracks on channel 1 = left hand; tracks on channel 2 = right hand
b) Rename the tracks to "RH track" and "LH track"
c) An algorithm that does this work automatically

Note: Options a) and b) could be done with Anvil Studio
Picasso: I am always doing that which I cannot do, in order that I may learn how to do it.

TonE
Synthesia Donor
Posts: 1180

Post by TonE » 09-14-09 4:35 am

vicentefer31 wrote:I wonder if Frost could tell us how his "Automatic Fingering Prediction Software" is going to know from a midi song if a track is for the right or the left hand.
Frost's software does not consider this at all; it assumes the incoming notes already belong to a single hand. Where they come from and how they are separated is another topic, e.g. as you have suggested here.

vicentefer31
Posts: 899
Location: Madrid, Spain

Post by vicentefer31 » 09-14-09 5:46 pm

TonE, someone who writes software like this shouldn't have any problem adding that feature. I could even bet that it is already ready to use.
Picasso: I am always doing that which I cannot do, in order that I may learn how to do it.

Frost
Posts: 51

Post by Frost » 09-25-09 10:02 am

Yeah... umm... well.
I've done very little work since my last post; it's still in the same phase. Lately, most of the time has been spent not on the algorithm itself but on the GUI, input/output, etc., which is, honestly, very boring. But I have some free time coming up, so I'll try to finish the basics.

To make the software useful, the parameters have to be really good. As I said, the parameter optimizer is ready: you input a song and its actual fingering, and the software tries to reproduce the given fingering by changing the parameters. If it's successful (I don't know... should accuracy be >95%? 90%? 80%?), we can release it as it is. Otherwise, I'll tweak the algorithm a little more until it gets the fingering right.
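As an illustration only, the optimization loop described here might look like the following sketch (random search stands in for the genetic algorithm, and `toy_predict` is an invented stand-in model, not AFP's real cost function):

```python
# Sketch of the parameter-optimization idea: search for parameter
# settings under which a fingering model reproduces known fingerings.
import random

def accuracy(predicted, actual):
    """Fraction of notes whose predicted finger matches the known one."""
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)

def optimize(predict, songs, n_candidates=200, seed=0):
    """songs: list of (notes, known_fingering). predict(params, notes)
    returns a fingering. Random search stands in for the real GA."""
    rng = random.Random(seed)
    best_params, best_score = None, -1.0
    for _ in range(n_candidates):
        params = [rng.uniform(0, 1) for _ in range(3)]
        score = sum(accuracy(predict(params, n), f)
                    for n, f in songs) / len(songs)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Invented stand-in model: fingers cycle 1..k, where k depends on a parameter.
def toy_predict(params, notes):
    k = 1 + int(params[0] * 4.999)          # k in 1..5
    return [1 + i % k for i in range(len(notes))]

songs = [([60, 62, 64, 65, 67] * 4, [1 + i % 5 for i in range(20)])]
params, score = optimize(toy_predict, songs)
print(round(score, 2))  # 1.0 once the search finds k == 5
```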

So, if by any chance you'd like to help in this process, we can use all hands. What we need is some musical piece and the accompanying fingers, separated by hand. We should get the parameters for monophonic and polyphonic songs separately.

An example:

Piece: Hanon 1
Notes: C3 E3 F3 G3 A3 G3 F3... (or it can be key numbers as well)
Hand: Left
Fingerings: 1 2 3 4 5 4 3....
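A record in that shape could be collected and sanity-checked with a small parser like this hypothetical sketch (the field names follow the example above; the validation rules are my own assumptions):

```python
# Sketch: parse one contributed training record of the form
#   Piece: ... / Notes: ... / Hand: ... / Fingerings: ...

def parse_record(text):
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip().lower()] = value.strip()
    notes = fields["notes"].split()
    fingers = [int(f) for f in fields["fingerings"].split()]
    assert len(notes) == len(fingers), "one finger per note expected"
    assert all(1 <= f <= 5 for f in fingers), "fingers must be 1..5"
    return fields["piece"], fields["hand"].lower(), notes, fingers

record = """Piece: Hanon 1
Notes: C3 E3 F3 G3 A3 G3 F3
Hand: Left
Fingerings: 1 2 3 4 5 4 3"""
print(parse_record(record))
```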


That's it. But we need many, many songs. Monophonic ones will be harder to come by (after we exhaust Hanon/Czerny, etc.), but any part of a song will do; the whole piece is not necessary. We'll use ALL pieces at the same time to optimize the parameters. Afterwards, we'll use polyphonic MIDIs to really hone them.

To make the process easier, the notes can be extracted from the MIDI itself. The fingering, however, needs to be entered manually.

So, in short: We'll need actual fingering data to further develop AFP. By myself, I can only create so much. If you'd like to help, I'll be glad.

Frost
Posts: 51

Post by Frost » 09-25-09 10:07 am

Note that those note fingerings may later be used by the Synthesia metadata editor: when you are adding song metadata, you can use the fingering you developed as well. The reverse is also true; an extension to the metadata editor could be written to export a song's notes/fingerings for use in the AFP parameter optimizer. So if you were planning on creating song metadata for community use, it's still useful for AFP.

For the more visual types, maybe I (or someone) can develop a tool for easy transcription, i.e. you click the notes on a graphical piano and then enter the actual finger; maybe that would be easier than note reading, and less error prone.
