Synthesia 11


Postby meirppp » 01-23-18 1:25 pm

Please, if you can't transfer the split area to the main board, could you at least make the sound play in the split area too?

Thanks Meir
meirppp
 
Posts: 25

Postby Nicholas » 01-24-18 4:33 am

Done. That's a nice low-effort workaround that was fun to add. Now, as you pan through the song while splitting a part, notes from the part you're editing will be played. (There is a little speaker icon at the top-right corner that you can use to toggle this on or off.)



That demo is just about the worst-case scenario (using an instrument that sustains forever), and it's still not so bad. All you need to do is rewind the song the slightest bit and it will cut off all sound.

That feature will show up in the next dev preview.
Nicholas
 
Posts: 11664

Postby meirppp » 01-26-18 5:01 am

A big THANK YOU from me.

Meir
meirppp
 
Posts: 25

Postby Tiothae » 01-28-18 8:55 am

I really love the changes in the dev build. It's finally gotten me to learn how to read sheet music a bit, which was long overdue! In the process, though, I found what appears to be an unreported bug (or at least one I haven't seen in this thread; I may have missed it).

If a chord contains both the natural and the flat/sharp version of the same note, the natural and the flat/sharp display on top of each other. Here's an example:
[screenshot: the overlapping accidentals]

This screenshot is from one of the pieces that came with Synthesia - The Silly Seal, under Easiest. I guess in this case it could be changed to display as an F# instead of a G flat, but I don't know whether that's down to the file itself or to how Synthesia reads the file. I also don't know whether there would be a way for Synthesia to resolve this automatically and display it as an F# and G chord so it's more readable. Although that might look weird if, say, there were a chord of F, F#/G flat, and G together (if anyone would ever want that).

I saw elsewhere that you're adding more features to how notes are displayed on the sheet music to bring it more in line with the falling notes. Are you planning to include colouring the notes by hand?
Tiothae
 
Posts: 4

Postby Nicholas » 01-28-18 2:23 pm

Yeah, multiple note-heads with correctly stacked accidentals is on the list for Synthesia 11. It's a rather involved topic (scroll down a page or two), so I can't promise it will end up perfect. But, it should end up in a place better than "completely broken" like it is today. :)
Nicholas
 
Posts: 11664

Postby Tiothae » 01-31-18 2:23 pm

Wow, that looks super complicated! :?

Fortunately, it isn't too common (at least in the stuff I'm playing!), and even if it isn't clear initially, you can tell what it means after seeing it once.
Tiothae
 
Posts: 4

Postby Nicholas » 07-29-18 3:50 pm

Progress Update!

Actually, 10.5 is moving along quite rapidly, especially in the last two weeks. The BASSMIDI-based replacement synth for Windows and Android has been a delight to implement. Their API is very nice and it does most of the work for you, which is the opposite of what I'm used to! :lol:

It's much faster, and (once I finish working out licensing with the Voice Crystal folks) I'm hoping to include an upgraded version of the sound set we've been using on the iPad since 2012, with a higher-quality grand piano. That's a huge upgrade for Windows and something approaching a quantum leap for Android. It's so nice that I'm even considering adding it to the Mac and iPad versions, too, to have a consistent, high-quality sound across all four platforms.

The latency is so much better that I'm hoping to actually quantify it in a repeatable way this time, so that the "X% faster" bullet point in the feature list will be based on real data. As a little hobby project in the evenings, I've been cobbling together a Teensy 2.0-based latency tester that sends out a MIDI note (via USB or MIDI port) and starts a timer to see how long it takes before its headphone jack picks up any sound. (Most of the work has already been done by Google, but it's still been fun. I don't get to tinker with hardware nearly as much as I'd like.)

Choosing the software synth on the Settings screen now gives you a new "Reverb" slider which -- used with some restraint -- can add some nice extra depth to the sound. (Without restraint you can make it sound like your piano has been installed in a public restroom!) :lol: And, if the background song scanner (that populates the song list) happens to stumble across any SF2 SoundFont files in the same folders, they're shown in a list on the same settings screen and you can choose between and audition them in a single click. That means you can bring your own favorite/preferred SoundFont quite easily, even on Android. :D

Other work in the meantime: the constant tide of Android bugs continues to assault and be pushed back. I think I may have found and fixed the current #1 source of Android crashes, which also coincidentally resulted in a nice speed/responsiveness improvement on that platform.

Remaining work includes a final bug fix before Chromebook support is complete, automatic latency detection for the new synth on Android (I've still witnessed some drop-outs using BASS's built-in detection, so hopefully we'll be able to layer another detector on top of that or at least include a manual slider), some UI wrap-up for the new built-in synth settings screen, actually measuring some MIDI latency :D, and the last handful of tiny bugs that have been reported recently.

Then this eight month(?!) "emergency" bug-fix release will be out the door and we'll be able to get back to work on Synthesia 11.
Nicholas
 
Posts: 11664

Postby Nicholas » 08-01-18 10:10 am

[attachment: tester.jpg — the latency tester prototype]

Preliminaries

I built a little latency tester prototype and ran it against most of the test hardware I have around here.

There are 1000 milliseconds (ms) in one second. On a typical computer monitor or tablet (running at 60 frames per second), each frame lasts for 1000/60 = 16.7ms.
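To make that frame arithmetic concrete, here's a quick sketch in Python (the numbers are just the ones discussed in this post):

```python
# One video frame at common refresh rates.
for fps in (60, 160):
    frame_ms = 1000 / fps
    print(f"{fps} fps -> {frame_ms:.2f} ms per frame")

# At 60 fps a frame lasts ~16.7 ms, so a 70 ms latency spans
# about 70 / 16.7, i.e. a little over four frames.
```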



Procedure

This testing device sends a "Note On: Middle C at max velocity" message through a MIDI cable, starts a timer immediately, and counts the time until the very first evidence of any sound at all coming in from the headphone jack. It can measure that result down to the nearest two microseconds (a microsecond being a thousandth of a millisecond), so we've got about 500x the resolution we need to accurately characterize things at the millisecond level. It continues to send Note On/Off pairs automatically to acquire 25 separate measurements.
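The tester's firmware isn't shown in the thread, but the bookkeeping is simple enough to sketch. Here's a rough Python outline of the protocol, where `send_note_on` and `read_audio_level` are hypothetical stand-ins for the Teensy's MIDI output and headphone-jack sampling:

```python
import statistics
import time

def measure_once(send_note_on, read_audio_level, threshold=0.01):
    """Send one Note On and time how long until any sound appears."""
    send_note_on(note=60, velocity=127)    # Middle C at max velocity
    start = time.perf_counter()
    while read_audio_level() < threshold:  # first evidence of any sound
        pass
    return (time.perf_counter() - start) * 1000  # elapsed milliseconds

def measure(send_note_on, read_audio_level, runs=25):
    """Collect 25 samples and summarize them as (mean, std dev)."""
    samples = [measure_once(send_note_on, read_audio_level) for _ in range(runs)]
    return statistics.mean(samples), statistics.stdev(samples)
```

On the real device the timing is done in hardware at microsecond resolution; this is only the shape of the measurement loop, not the implementation.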



Baseline

As a sanity check (and because I was curious :D ), I tested a few digital pianos directly. Their whole job is to produce sound quickly after a key strike, so presumably they will also respond to MIDI messages quickly. Keyboards should be the fastest things around, so these tests establish a nice baseline for the fastest we should realistically ever hope to see a software synth behave.

Keyboard average time to respond to a MIDI "Note On" message:
  • 6.4ms (800μs std dev) - Casio LK-100
  • 9.9ms (260μs std dev) - Yamaha EZ-200
  • 12.4ms (780μs std dev) - Yamaha P-70
These are delightfully fast (as you'll see) with an amazingly tight distribution. Every one of them produces sound in less time (often substantially) than a single video frame.

More data points that may be interesting:
  • 57.1ms (2.3ms std dev) - Synthesia on iPad 4
  • 57.2ms (30μs std dev) - Synthesia on iPad 2



Results

Computer/tablet average time to respond to a MIDI "Note On" using legacy synths (Windows MME on Windows and Sonivox on Android), with the latency tester connected through an E-MU Xmidi 1x1:

  • 241.2ms (10ms std dev) - Surface Pro 2 (Windows 8.1)
  • 245.5ms (8.8ms std dev) - Intel NUC (D54250WYK, Windows 10 MME driver)
  • 59.7ms (3.8ms std dev) - Intel NUC (D54250WYK, Windows 10 UWP driver)
  • 182.0ms (6.7ms std dev) - Nvidia SHIELD tablet (Android 7.0)
  • 119.9ms (10.5ms std dev) - Google Nexus 7 (2013, Android 6.0.1)
  • 195.2ms (11.8ms std dev) - Kindle Fire 7 (Android 5.1.1)
  • 254.2ms (10.7ms std dev) - ASUS Transformer Pad (TF103C, Android 4.4.2)
Right away we can see that the legacy synths take roughly 15-40x longer to produce a note than a keyboard. The Windows synth, at a quarter of a second, is especially shameful. The wider spread most likely has to do with Synthesia's own MIDI polling, which happens once per frame. (2x either of those standard deviations is close to a 16.7ms frame.)
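That once-per-frame polling explanation can be sanity-checked with a tiny simulation: if a note arrives at a random moment within a frame, polling adds a delay that's roughly uniform on [0, 16.7) ms, and a uniform distribution on [0, T) has a standard deviation of T/√12 ≈ 4.8 ms at 60 fps. A sketch in Python (the base latency here is a made-up number, not one of the measurements above):

```python
import random
import statistics

FRAME_MS = 1000 / 60      # ~16.7 ms per frame at 60 fps
BASE_LATENCY_MS = 62.0    # hypothetical fixed synth latency

random.seed(0)
# The note lands at a random point inside a frame; the app only notices
# it at the next polling pass, adding a uniform [0, FRAME_MS) delay.
samples = [BASE_LATENCY_MS + random.uniform(0, FRAME_MS)
           for _ in range(10_000)]

print(f"mean   = {statistics.mean(samples):.1f} ms")   # base + about half a frame
print(f"stddev = {statistics.stdev(samples):.1f} ms")  # close to FRAME_MS / sqrt(12)
```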

Computer/tablet average time to respond to a MIDI "Note On" using the new BASS 2.4.13.8 based synth (WASAPI on Windows and OpenSL on Android), with the latency tester connected through an E-MU Xmidi 1x1:

  • 76.7ms (5.5ms std dev) - Surface Pro 2 (Windows 8.1)
  • 69.9ms (0.2ms std dev) - Surface Pro 2 (Windows 8.1) with V-sync off (~160 fps)
  • 69.4ms (6.5ms std dev) - Intel NUC (D54250WYK, Windows 10)
  • 74.8ms (6.1ms std dev) - Nvidia SHIELD tablet (Android 7.0)
  • 114ms (6.6ms std dev) - Google Nexus 7 (2013, Android 6.0.1)
  • 241.6ms (7.6ms std dev) - Kindle Fire 7 (Android 5.1.1)
  • 331.8ms (8.0ms std dev) - ASUS Transformer Pad (TF103C, Android 4.4.2)

[attachment: chart.png — latency comparison chart]

There is a lot to take in here.

For computers and "newer" Android, BASS consistently takes half to one-third the time of the old synths! Four video frames is still inside the range of "imperceptible delay" for me (and hopefully for you!), and this synth can use arbitrary SoundFonts and add effects like reverb (without impacting that latency). This is very exciting. :D

Tripling the frame rate (on the Surface Pro 2) appears to have soaked up the "average 8ms in either direction" slop and tightened up the standard deviation (by even more than I expected).

The older the Android device, the less rosy the picture. On devices that don't have an audio "Fast Path", it's neck-and-neck with Sonivox usually winning on speed but BASS easily still winning on sound quality. I've decided to leave it up to users: both synths will be available in the Android version of Synthesia 10.5 (defaulting to the new BASS synth, which is the easy answer on Android 6 and later).

(Checking against my manual measurements from six years ago, I'm happy to see these results are very consistent with those.)



Conclusions

  • The only software synths that can break into the high-50ms range are Apple's and the broken/unusable UWP synth in Windows 10, the latter at the expense of pops and crackles. So, really, just Apple's. :D
  • BASS is often just behind that, at ~70ms response time.
  • BASS universally has tighter response timing (i.e., lower standard deviations).
  • BASS universally sounds (or can sound) better than the built-in alternatives.
  • BASS is always faster (often much faster) on modern hardware.
  • On older 4.x and 5.x Android hardware, your mileage may vary... but as the owner of one of those devices, you already knew that! :lol:
Nicholas
 
Posts: 11664

Postby jimhenry » 08-01-18 1:10 pm

The rule of thumb in the organ world, if I remember correctly, is that 20 ms is the threshold of perceptible delay. Organists learn to cope with perceptible delay: the electro-pneumatic mechanism that opens a pipe valve takes some time to open after an organ key is depressed, a pipe takes some time to "speak" after air starts flowing into it, and the pipes are usually far enough away from the organist that the sound takes a perceptible length of time to reach them. Having a pipe 20 feet from the organist, which is close, is enough to introduce a perceptible delay. I haven't played real pipe organs enough to learn to cope with the delays. I'm told that good organists don't listen to the sound because it will throw you off.
Jim Henry
Author of the Miditzer, a free virtual theatre pipe organ
http://www.VirtualOrgan.com/
User avatar
jimhenry
 
Posts: 1741
Location: Southern California

Postby Nicholas » 08-01-18 2:45 pm

Yeah, playing with units a bit, it looks like sound travels at 1.1 ft/ms. So that pipe at twenty feet is already enough to eat the 20ms.
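Spelling that out (taking the speed of sound as roughly 1125 ft/s at room temperature):

```python
SPEED_FT_PER_MS = 1125 / 1000   # ~1.1 ft of travel per millisecond
distance_ft = 20                # a "close" pipe
delay_ms = distance_ft / SPEED_FT_PER_MS
print(f"{distance_ft} ft of air adds about {delay_ms:.1f} ms of delay")
```

So a pipe twenty feet away already uses up nearly the whole 20ms budget on its own.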

I suppose when my options are 75ms or 240ms after twelve years of 240ms, I'm pretty happy to take 75! :lol:

Certainly the advice (now backed up by real data) will continue to be "use your keyboard's synth if you've got one", but in those cases where a keyboard synth isn't available, things might have reached "passable" now instead of "what do you mean a quarter second?!"
Nicholas
 
Posts: 11664
