Lip Sync mapping fails when sound is moved or cut up

Several of my students have run into problems mapping phonemes to mouth layers after adjusting the start time of their sounds or cutting pieces out. For example, one student moved his sound to start at frame 100 instead of frame 1, and the mapping then failed entirely (nothing was mapped). When we took the sound into Audacity and prepended artificial silence, so that the clip's waveform registered starting at frame 1, the mapping worked. Am I missing something, or will Harmony not recognize audio for mapping if it is preceded by silence or empty frames?