The part of Natal that players see looks like a webcam. (Microsoft's not divulging details about this hardware yet, presumably because the release is many months in the future, but we do know that it measures relative distances using a black-and-white camera sensor and a near-infrared beam.) But it's the software inside, which Microsoft casually refers to as "the brain," that makes sense of the images captured by the camera. It's been programmed to analyze images, look for a basic human form, and identify about 30 essential parts, such as your head, torso, hips, knees, elbows, and thighs.

In programming this brain—a process that's still going on—Microsoft relies on an advancing field of artificial intelligence called machine learning. The premise is this: Feed the computer enough data—in this case, millions of images of people—and it can learn for itself how to understand them. That saves programmers the near-impossible task of coding rules that describe all the zillions of possible movements a body can make. The process is a lot like a parent pointing to many different people's hands and saying "hand," until a baby gradually figures out what hands look like, how they can move, and that, for instance, they don't vanish into thin air when they're momentarily out of sight.
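To make the premise concrete, here is a minimal sketch of learning from examples rather than hand-coded rules. This is not Microsoft's algorithm; it's a toy nearest-centroid classifier in which each "image" is reduced to a hypothetical two-number feature vector, and the system learns one prototype per body part by averaging the examples it's shown—much like the baby gradually forming an idea of "hand."

```python
# Toy illustration of the machine-learning premise: show the computer many
# labeled examples and let it infer the pattern itself, instead of writing
# explicit rules for every possible pose.

from collections import defaultdict
import math

def train(examples):
    """Average the feature vectors seen for each label into one prototype."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for features, label in examples:
        for i, value in enumerate(features):
            sums[label][i] += value
        counts[label] += 1
    return {label: [v / counts[label] for v in vec]
            for label, vec in sums.items()}

def classify(prototypes, features):
    """Label a new feature vector by its closest learned prototype."""
    return min(prototypes,
               key=lambda label: math.dist(features, prototypes[label]))

# Hypothetical training data: (width, height) ratios standing in for the
# real image features a depth camera would produce.
training = [
    ([0.20, 0.20], "hand"),  ([0.25, 0.20], "hand"),
    ([0.30, 0.30], "head"),  ([0.30, 0.35], "head"),
    ([0.50, 0.90], "torso"), ([0.55, 0.85], "torso"),
]
brain = train(training)
print(classify(brain, [0.22, 0.21]))  # a new, unseen hand-like shape
```

The point of the sketch is that `classify` was never told what a hand is; it simply picks the label whose averaged examples the new input most resembles, which is the premise the real system scales up to millions of images.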