I have just merged my code into Danilo’s, so we now have a couple of improvements: data is saved as a continuous stream (rather than being limited to what is currently on screen), and the filtering script now outputs each joint as a separate *.obj file, which can be imported into Blender and the like!
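For reference, the OBJ export itself can be very simple: each vertex is a line of the form “v x y z”, which Blender reads as mesh vertices. A minimal sketch (the function name and point format here are illustrative, not the actual script):

```python
def write_obj(path, points):
    """Write 3-D points as OBJ vertices, one "v x y z" line per point."""
    with open(path, "w") as f:
        f.write("# joint positions exported as a point cloud\n")
        for x, y, z in points:
            f.write(f"v {x} {y} {z}\n")
```

One file per joint keeps the trajectories separate, so each joint’s path shows up as its own object in Blender.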
Troels and I went on a mission to interview UCSD dancers about motionDraw. We asked the following questions:
1) What are some examples of dance that have used technology that you like? Dislike?
2) Have you ever tried working with computing and projection in dance?
3) Could you see benefits of interacting with a computer while dancing?
a. In real time vs. post-production
4) Would you prefer to control a device that changed a projection, or would you prefer a second “conductor” to alter the atmosphere?
a. In general, as a dancer, do you prefer set choreography or more open improvisation?
We learned a lot and will post answers soon!!
I added functionality for the program to save positions to a file, X and Y as pixel positions and Z as depth. Each joint is labeled (H: Head, WL: Wrist Left, and so on), and flagged as tracked or inferred. The screenshot is of a skeleton in “sitting position”, hence no joints below the spine.
Working on a Python script to read the file and output another with the positions averaged out, in order to reduce noise.
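The averaging can be done with a sliding-window (moving-average) filter over each joint’s samples. A minimal sketch, assuming the positions have already been parsed into (x, y, z) tuples (the function name and window size are illustrative):

```python
from collections import deque

def smooth(samples, window=5):
    """Replace each (x, y, z) sample with the mean of the last `window` samples."""
    buf = deque(maxlen=window)  # keeps only the most recent `window` points
    out = []
    for point in samples:
        buf.append(point)
        n = len(buf)
        out.append(tuple(sum(p[i] for p in buf) / n for i in range(3)))
    return out
```

Each joint (H, WL, and so on) would be filtered independently; a larger window smooths more noise but lags behind fast movement.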
Example of usage from the C# library: RenderTargetBitmap. The snippet was cut off mid-statement; the tail below completes it with the standard WPF render-and-encode-to-PNG steps.

// Requires: System.IO, System.Windows, System.Windows.Controls,
// System.Windows.Media, System.Windows.Media.Imaging
public void Export(Uri path, Canvas surface)
{
    if (path == null) return;

    // Save current canvas transform, then reset it (in case it is scaled or rotated)
    Transform transform = surface.LayoutTransform;
    surface.LayoutTransform = null;

    // Get the size of the canvas
    Size size = new Size(surface.Width, surface.Height);
    // Measure and arrange the surface
    // VERY IMPORTANT
    surface.Measure(size);
    surface.Arrange(new Rect(size));

    // Create a render bitmap and push the surface to it
    RenderTargetBitmap renderBitmap = new RenderTargetBitmap(
        (int)size.Width, (int)size.Height, 96d, 96d, PixelFormats.Pbgra32);
    renderBitmap.Render(surface);

    // Encode as PNG and write to the target file
    using (FileStream stream = new FileStream(path.LocalPath, FileMode.Create))
    {
        PngBitmapEncoder encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(renderBitmap));
        encoder.Save(stream);
    }

    // Restore previously saved transform
    surface.LayoutTransform = transform;
}
We were looking for ways to use PNG images as brushes. These are some of the web pages I found that might help us.