Page 2 of 3
Posted: 04-29-11 12:48 am
I don't see the price ever going that high. That's a different market segment than I'm interested in. I like the $25 crowd.
If I'm understanding your mock-up, you're looking for real-time finger detection in the live video feed? It would be really cool to be able to place markers on the fingers like that so you could see exactly where they needed to go.
Incredibly hard problem. Something I've been thinking about for years though. (There are some ancient forum posts to that effect someplace.)
Not only is video lag your enemy there, but your detection and tracking also has to happen almost instantaneously. Add to that all the wacky shapes you can get your hand into where one finger is covered by another, or how at a particular camera angle you can't quite see things perfectly, or there isn't enough light in the room to get a good clean signal...
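To put rough numbers on the latency side of the problem, here's a minimal back-of-the-envelope sketch. Every figure below is an illustrative assumption, not a measurement of any real camera or display:

```python
# Rough real-time budget for finger tracking at 30 fps.
# All latency numbers are illustrative assumptions, not measurements.

FRAME_BUDGET_MS = 1000 / 30   # ~33.3 ms between frames at 30 fps

camera_capture_ms = 15        # assumed sensor exposure + USB transfer delay
display_latency_ms = 10       # assumed render + monitor delay

# Whatever is left is all the time detection and tracking may use before
# the on-screen finger markers visibly trail the player's actual hands.
processing_budget_ms = FRAME_BUDGET_MS - camera_capture_ms - display_latency_ms

print(f"processing budget per frame: {processing_budget_ms:.1f} ms")
```

Under those assumptions the detector gets well under 10 ms per frame, which is why "almost instantaneous" is not an exaggeration.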
Still, it's sort of the holy grail, isn't it?
I would love to do that.
Posted: 04-29-11 11:54 am
Nicholas wrote:...It would be really cool to be able to place markers on the fingers like that so you could see exactly where they needed to go.
Incredibly hard problem.
That's what I thought. I've been learning C# and ASP.NET (took a few months of courses at a tutorial center), but I'm still doing relatively simple programming. So I can imagine something like this must be complicated.
Posted: 04-29-11 12:11 pm
Check this one out; it's just like Kasper's mock-up. It's some AR stuff I found a few weeks ago:
http://ilab.cs.ucsb.edu/projects/taehee ... ndyAR.html
I'm not sure it would help with video lag, though.
Let's see how Synthesia looks with just the upcoming fingering numbers first.
Posted: 04-29-11 1:15 pm
Seems like there needs to be some kind of hand-detection calibration step.
The same goes for the piano: "where are the keys?"
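The "where are the keys?" half of that calibration could be quite simple: mark the left and right edges of the keyboard in the camera image once, then interpolate every key position from those two points. A minimal sketch, assuming an 88-key piano (52 white keys) seen roughly head-on so key spacing is uniform in the image; the pixel coordinates are made up for illustration:

```python
def white_key_centers(x_left, x_right, n_white=52):
    """Interpolate the x pixel position of each white key's center,
    given the image x coordinates of the keyboard's left and right
    edges (e.g. clicked by the user during a one-time calibration).
    Assumes uniform key spacing, i.e. a roughly head-on camera view."""
    key_width = (x_right - x_left) / n_white
    return [x_left + (i + 0.5) * key_width for i in range(n_white)]

# Example: keyboard spans pixels 100..1140 in the camera image.
centers = white_key_centers(100, 1140)
print(centers[0], centers[-1])  # centers of the lowest and highest white keys
```

Black keys and a tilted camera would need more work, but even this crude version answers "where are the keys?" well enough to overlay hints.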
Posted: 04-29-11 5:23 pm
That is very cool. Still a much simpler version of the problem (palm open, no gestures, no weird hand configurations), but it's definitely a start.
The source link on the page is dead, but I just contacted one of the authors hoping he could point me in the right direction. Here's to hoping.
Posted: 05-06-11 3:06 am
Maybe this YouTube video is also interesting:
All you need is a 100-Euro Microsoft Kinect sensor.
Posted: 05-06-11 4:05 am
I've been looking into depth sensors since a year before the Kinect (or Project Natal, for that matter) was announced.
I haven't gotten a chance to play around with it yet, but it is an incredibly low-cost way to get a really awesome sensor. Placement is still tricky. To get it to see an entire keyboard, you'd have to have it near your ceiling.
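A quick back-of-the-envelope check on that placement claim. Using the Kinect v1's roughly 57° horizontal field of view and a full 88-key keyboard width of about 1.22 m (both approximate figures), the minimum distance at which the sensor can see the whole keyboard is:

```python
import math

KINECT_HFOV_DEG = 57.0    # Kinect v1 horizontal field of view (approximate)
KEYBOARD_WIDTH_M = 1.22   # full 88-key keyboard width (approximate)

# Distance at which the horizontal FOV just spans the keyboard:
# half the keyboard width divided by the tangent of half the FOV angle.
half_fov = math.radians(KINECT_HFOV_DEG / 2)
min_distance_m = (KEYBOARD_WIDTH_M / 2) / math.tan(half_fov)

print(f"minimum sensor distance: {min_distance_m:.2f} m")
```

That works out to a bit over a meter, comfortably past the Kinect's minimum depth range as well, so in most rooms a straight-down view really does mean mounting it up near the ceiling.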
Posted: 05-06-11 4:20 am
Nicholas wrote:Placement is still tricky. To get it to see an entire keyboard, you'd have to have it near your ceiling.
Well, yes, that might be the required view direction for the sensor with normal piano playing, meaning hanging above the keyboard. Using another Kinect for a side view would even give you 6D data, which might help reduce the algorithm's CPU use.
Posted: 05-06-11 5:03 am
Then you run into the problem of finding people who have two Kinect sensors. Ideally the feature would be something a large number of people would be able to use.
Posted: 05-06-11 5:08 am
OK, then: one Kinect as the standard setup, and two Kinects as a bonus version for those who want the possible improvements.
Posted: 05-06-11 1:43 pm
Or build one yourself from a DIY kit?
Posted: 05-06-11 10:40 pm
So if this goes ahead (and it probably will), how long will it take to finish?
Or even to produce?
Posted: 07-09-11 1:39 am
Any new development news on this feature?
Just politely asking.
Posted: 07-09-11 1:48 am
It still has a ways to go on the voting list (though it was a late addition, so it's kind of behind the rest of the features there).
Otherwise, over the last release or two (and even more in the next couple) I've been slowly wrangling various bits of code here-and-there into place to get ready for it.
Right now I have the iPad port prioritized higher though, so it'll be after that, for better or worse.
Posted: 11-11-11 7:28 pm
Okay, I think I've got a pretty good idea for how this could work...
Lots of people nowadays have iPhones, right? What if you made an iPhone app that would act as a webcam? It wouldn't send the video over WiFi like a Skype/FaceTime call; it could be plugged into your computer and transmit the video via USB... If you made an app for the App Store, it could promote not only Synthesia, but also the live video feed feature in Synthesia.
Of course, the question comes up: how would I get my iPhone positioned directly over my keyboard? I could make it work... I have a metal gated shelf right above my computer. I could rest my iPhone on top of it and place it just so that the camera is in between the bars of the shelf... If that description doesn't make sense, don't bother trying to figure it out; all I'm saying is that I can make it work... Could you?
Yes or no?
Maybe we should take a poll to see how many people could make this work, to see if it's a good idea...
Posted: 11-11-11 8:05 pm
You can already do that with some random iCrap app, if you like paying to use hardware you already have.
I'll just use my webcam.
Posted: 11-11-11 8:11 pm
What if you don't have a webcam but you do have an iPhone? (I do.)
And I doubt that app integrates perfectly with Synthesia...
Posted: 11-12-11 9:33 am
So you want to use your iPhone as a webcam? If Synthesia comes out on the iPad, will you still use your iPhone as a webcam? Duct-taping it to the ceiling seems like going a bit far for a phone, doesn't it?
Posted: 11-12-11 5:37 pm
Maybe this would be a manageable way to provide hints with a live keyboard feed.
You could capture a view of the keyboard with no hands on it before you start playing, generate a polygon for each key, and transform the keyboard image and polygons so the back edge of the keys lines up with the falling-note display. Then you could fill the polygons with colors as the notes fall to create a virtual lighted keyboard.
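The transform step above amounts to fitting a perspective transform (homography) from the four keyboard corners in the camera image to the flat rectangle of the falling-note display, then warping each key polygon through it. Here's a minimal pure-Python sketch so it stays self-contained (in practice a library call like OpenCV's getPerspectiveTransform would do this); all coordinates are made up for illustration:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 homography mapping four src (x, y) points onto four dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]  # fix the last entry to 1
    return [h[0:3], h[3:6], h[6:9]]

def warp(H, p):
    """Map an image point through the homography (with perspective divide)."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Keyboard corners as seen by the camera (a perspective trapezoid) ...
camera_corners = [(120, 400), (1160, 410), (1050, 560), (200, 555)]
# ... mapped onto the flat rectangle used by the falling-note display.
display_corners = [(0, 0), (1040, 0), (1040, 120), (0, 120)]
H = homography(camera_corners, display_corners)
```

Once H is known, every key polygon from the hands-free reference image can be warped into display coordinates and filled as its note falls.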
Posted: 11-12-11 5:47 pm
Yeah, that's true... After posting it I realized it was a pretty stupid idea...