In everyday conversation, people communicate not only through speech but also through nonverbal cues. Facial expressions are one important cue, as they can convey useful information about the state of the conversation, for instance, whether the interlocutor seems to understand or appears puzzled. Similarly, in human-robot interaction, facial expressions provide feedback about the interaction situation. We present a Wizard of Oz user study in an object-teaching scenario in which subjects showed several objects to a robot and taught it the objects' names. Afterward, the robot was expected to name the objects correctly. In a first evaluation, we had other people watch short video sequences from this study. Judging solely from the human's face, they decided whether the robot's answer was correct (unproblematic situation) or incorrect (problematic situation). We conducted the experiments under several conditions, varying the amount of temporal and visual context information, and compared the results with related experiments described in the literature.