Kinect – Infrared Stream Q&A
Got the following question via email:
I am trying to understand the differences between the depth stream and the IR stream. Both can see in the dark. What can the depth stream do that the IR stream can’t and vice versa?
The docs say that I can also use IR data to capture an IR image in darkness as long as I provide my own IR source. In the demo app I darkened the room (not pitch black, but dark), put my hand in view of the sensor, and could see it quite well. What was providing the IR in that scenario?
First, some background:
In Kinect for Windows SDK 1.6 we added the ability to get an infrared stream, alongside the depth stream that has been available since v1.0. The depth stream is created by processing the dot pattern that the sensor's infrared emitter projects onto the scene, as seen by the infrared camera. One of the goals of our v1.6 release was to unlock more of the data that the sensor can provide (accelerometer, infrared, extended depth) and to give more control over how it captures that data (color camera settings, infrared emitter control).
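To make the raw infrared data a bit more concrete: each infrared pixel in the 1.6 stream is a 16-bit intensity value, so a common first step is to scale it down to 8 bits for display as a grayscale image. Below is a minimal sketch in Python with simulated stand-in data (the real SDK delivers the frame as a buffer of 16-bit values; the pixel values here are invented for illustration):

```python
# Sketch: converting raw 16-bit infrared pixels to 8-bit grayscale for display.
# The frame below is simulated stand-in data, not real sensor output.

def ir_to_grayscale(ir_pixels):
    """Scale 16-bit infrared intensities (0..65535) down to 8-bit (0..255)."""
    return [p >> 8 for p in ir_pixels]  # keep the high byte of each sample

# Simulated 2x2 infrared "frame" with 16-bit intensities.
frame = [0x0000, 0x1234, 0x8000, 0xFFFF]
print(ir_to_grayscale(frame))  # [0, 18, 128, 255]
```

A simple right shift like this throws away the low byte; in a real viewer you might instead normalize against the frame's actual min/max to get better contrast in a dark room.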
Some additional pointers with more detail:
- Kinect for Windows team blog post: “Inside the Newest Kinect for Windows SDK — Infrared Control”
- See my discussion of infrared in my “Build 2012 Kinect for Windows Programming Deep Dive” talk (watch the video from 29m 5s to 32m 30s). If you want the demo code from that talk, it is located here: “KinectMagicMirror demo”.
What other clarifications would people like?