One emerging application of computing technology is interactive rooms and furniture. Interactive tables, for instance, provide a workspace that is intuitive, natural, and conducive to multi-user collaborative work. Although several interactive table prototypes have been built with touch-screen technology, we have engineered a method based on a computer vision system, which offers the end user greater flexibility: it can ignore, or even make use of, objects placed upon the table, and it is less prone to accidental input. In this work we present a method for implementing such a camera-driven interactive table with a ceiling-mounted camera and demonstrate some of its potential uses. The vision system uses a novel hand detection and segmentation technique designed to tolerate any level of background complexity on the display and any reasonable range of indoor lighting conditions, allowing the end user the greatest possible freedom. It searches the output of multi-scale line- and curve-finding systems for thimble-shaped finger models, marks matches as candidate fingers, and applies a set of geometric and texture-based tests to each to remove false positives. Finally, it groups finger detections that are similar in location and appearance, reintroducing weak candidates that are supported by strong neighbors, into hand detections with finger and palm locations. Results demonstrate the system's ability to extract enough information from images of hands against very complex backgrounds to recognize finger and palm placement.
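The final grouping stage described above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's implementation: candidate fingertips carry a confidence score from the earlier geometric and texture tests, strong candidates seed hand clusters, and weak candidates are reintroduced only when a strong neighbor supports them. The `Candidate` class, thresholds, and greedy clustering scheme are all assumptions chosen for clarity.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Candidate:
    x: float
    y: float
    score: float  # confidence from the geometric/texture-based tests

STRONG = 0.6        # assumed threshold separating strong from weak candidates
NEIGHBOR_DIST = 80  # assumed max pixel distance between fingers of one hand

def dist(a: Candidate, b: Candidate) -> float:
    return hypot(a.x - b.x, a.y - b.y)

def group_fingers(cands: list[Candidate]) -> list[list[Candidate]]:
    """Cluster candidate fingertips into per-hand groups."""
    strong = [c for c in cands if c.score >= STRONG]
    weak = [c for c in cands if c.score < STRONG]
    # Reintroduce weak candidates that have a strong neighbor;
    # isolated weak candidates are treated as false positives.
    kept = strong + [w for w in weak
                     if any(dist(w, s) <= NEIGHBOR_DIST for s in strong)]
    # Greedy proximity clustering: each cluster approximates one hand.
    hands: list[list[Candidate]] = []
    for c in sorted(kept, key=lambda c: -c.score):
        for hand in hands:
            if any(dist(c, f) <= NEIGHBOR_DIST for f in hand):
                hand.append(c)
                break
        else:
            hands.append([c])
    return hands

def palm_center(hand: list[Candidate]) -> tuple[float, float]:
    # Crude palm estimate: centroid of the grouped fingertips.
    n = len(hand)
    return (sum(f.x for f in hand) / n, sum(f.y for f in hand) / n)
```

In this sketch a weak detection near two strong ones survives the grouping, while an equally weak but isolated detection is discarded, mirroring the reintroduction behavior described in the abstract.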