All joints are now being saved, and I included a “dashboard” exposing what I expect the conductor to be able to do (change colors, point size, select joints, and so on). It is now possible to do all of that without a second Kinect, making this a viable standalone application (not to mention how much it will help with debugging). We can toggle visibility on all joints and change their size and color individually. There’s a lot more we can do, of course! Plus, there’s a “conductor mode” checkbox (to be implemented next) that toggles whether a conductor is present.
Here is the new UI!
I’m uploading the new version on Dropbox right now.
I have just merged my code into Danilo’s, which brings a couple of improvements. Data is now saved as a continuous stream (rather than being limited to what is currently on screen), and the filtering script now also outputs each joint as a separate *.obj file, which can be imported into Blender and the like!
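As a rough illustration of the per-joint *.obj export (the function name, the input structure, and the one-file-per-joint layout are my assumptions, not the actual script), each joint’s recorded positions can be written out as `v x y z` vertex lines, which is all Blender needs to import a point cloud:

```python
import os

# Hypothetical sketch: write each joint's recorded positions as vertices
# in a separate Wavefront .obj file, one file per joint, so the point
# clouds can be imported into Blender and similar tools.
def write_joint_objs(positions_by_joint, out_dir="."):
    """positions_by_joint maps a joint label (e.g. "H") to a list of
    (x, y, z) tuples collected over the recording. Returns the paths
    of the files written."""
    paths = []
    for joint, points in positions_by_joint.items():
        path = os.path.join(out_dir, f"{joint}.obj")
        with open(path, "w") as f:
            for x, y, z in points:
                f.write(f"v {x} {y} {z}\n")  # .obj vertex statement
        paths.append(path)
    return paths
```

A file containing only `v` lines imports as loose vertices, which is enough for inspecting the motion trail of a joint.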
I added functionality for the program to save positions to a file, with X and Y as pixel positions and Z as depth. Each joint is labeled (H: Head, WL: Wrist Left, and so on) and flagged as tracked or inferred. The screenshot shows a skeleton in “sitting position”, hence no joints below the spine.
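The exact file layout isn’t shown in the post; purely as an illustration, assuming one whitespace-separated record per joint (label, tracked/inferred flag, X, Y in pixels, Z as depth), a reader for such lines might look like:

```python
# Hypothetical parser for the per-joint position records described above.
# The line format here ("H T 312 240 1.85") is an assumption for
# illustration, not the program's actual output format.
def parse_joint_line(line):
    label, flag, x, y, z = line.split()
    return {
        "joint": label,           # e.g. "H" for Head, "WL" for Wrist Left
        "tracked": flag == "T",   # False means the joint was inferred
        "x": int(x),              # pixel position
        "y": int(y),              # pixel position
        "z": float(z),            # depth
    }
```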
I’m working on a Python script that reads the file and outputs another with the positions averaged out, in order to reduce noise.
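The averaging idea can be sketched as a simple moving average over each joint’s position samples (the window size and the list-of-tuples input are my assumptions about the script, not its actual design):

```python
# Sketch of the noise-reduction step: smooth a joint's (x, y, z) samples
# by averaging each sample with its neighbors in a sliding window.
def moving_average(samples, window=5):
    """samples is a list of (x, y, z) tuples for one joint, in time order.
    Returns a list of the same length with each coordinate averaged over
    the surrounding window (truncated at the ends of the recording)."""
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - window // 2)
        hi = min(len(samples), i + window // 2 + 1)
        chunk = samples[lo:hi]
        n = len(chunk)
        smoothed.append(tuple(sum(p[k] for p in chunk) / n for k in range(3)))
    return smoothed
```

A small window smooths jitter without lagging much behind fast movements; a larger one suppresses more noise at the cost of responsiveness.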