I’ve used auto-detection on the clip and mapped everything correctly; it has applied the mapping to the correct layer and the mouth is moving… but it bears absolutely no relation to the audio! I have simple audio of a single voice speaking, and the mapping is just completely off. “Unusably bad,” to quote a thread I found complaining about this same issue from 13 years ago - have there really been no improvements made in all that time? It’s so bad that the mouth just cycles through random shapes during moments of complete silence. Is there some trick to this that I’m not doing, or is it really just this unbelievably bad? I don’t expect it to be perfect, but it should at least bear SOME resemblance to the audio it’s supposedly syncing to, right?
If you are using longer audio files around 8 minutes or more, the lip-syncing animation will progressively go out of sync as you get closer to the end of the timeline as I have shown in the video below:
I’ve noticed Adobe Animate has this problem as well with its Auto Lip-Sync feature when using longer audio files.
The issue shouldn’t be too noticeable with multiple shorter audio files, and splitting the audio up that way might also prevent the mouth from moving during silent portions. It is more work, though, to cut up and arrange multiple audio files in Harmony as opposed to using one long one.
If that’s not the issue, then Harmony might be picking up inaudible but still-present white noise in your audio that’s causing the mouth to move when it shouldn’t. You could try running a noise gate filter on your audio in software like Audacity to completely silence the areas where no one is speaking.
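If you’d rather script the gating than do it by hand in Audacity, something like the following works on a 16-bit WAV. This is just a rough sketch using Python’s standard wave module; the threshold and chunk size are arbitrary starting points you’d tune by ear, not values from Harmony or Audacity:

```python
# Rough noise-gate sketch for a 16-bit WAV, standard library only.
# Any chunk whose average amplitude falls below the threshold is
# zeroed out completely. The threshold (500 out of a 32767 max) and
# the 20 ms chunk size are placeholder values.
import array
import wave

def noise_gate(in_path, out_path, threshold=500, chunk_ms=20):
    with wave.open(in_path, "rb") as src:
        params = src.getparams()
        assert params.sampwidth == 2, "sketch assumes 16-bit samples"
        samples = array.array("h", src.readframes(params.nframes))

    # Samples per chunk (all channels of a frame stay in the same chunk).
    chunk = max(1, params.framerate * chunk_ms // 1000) * params.nchannels
    for start in range(0, len(samples), chunk):
        block = samples[start:start + chunk]
        # Mean absolute amplitude as a cheap loudness estimate.
        loudness = sum(abs(s) for s in block) / len(block)
        if loudness < threshold:  # quieter than the gate: silence it
            for i in range(start, start + len(block)):
                samples[i] = 0

    with wave.open(out_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(samples.tobytes())
```

A proper noise gate would fade in and out around the gated regions to avoid clicks, but for feeding an auto lip-sync tool, hard-zeroed silence is arguably exactly what you want.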
And if that still doesn’t fix the mouth-moving-during-silence issue, I’m afraid you might have to manually fix those errors in the lipsync animation wherever they occur.
As for the general quality of Harmony’s lip-syncing, it doesn’t appear to be that great. I think even Adobe Animate’s lip-syncing is slightly better, since it allows for 12 different mouth shapes (visemes) instead of Harmony’s 8, and I don’t typically get the mouth moving when it’s not supposed to in that program.
Regardless, it would be nice if some improvements could be made to Harmony’s auto lip-sync feature so that it at least supports more mouth shapes and uses them properly more of the time.
Thanks for the response. It’s not an issue with drift, it’s just totally off from the beginning. I’ve recently started transitioning to Harmony after years of using Animate, and it’s so much more powerful in so many other areas I figured the auto-sync would at least be comparable to Animate’s, which I actually think is quite good. As a workaround I ended up copying my mouth animations from Harmony into Animate, auto-syncing in there (same audio file worked perfectly in there, very accurate no issues), then exporting as a PNG sequence and putting that BACK into Harmony. Tedious, but still less time-consuming than it’d be to manually sync the whole thing.
As a note about the drift in both programs with longer audio, here’s something I noticed while doing this process: my Harmony scene is 15,000 frames, which works out to exactly 10:25 at 24 fps. My audio file is exactly 10:25, and in Harmony it runs right to the end of the scene just like it should. In Animate, however, with the scene set to 15,000 frames at 24 fps, the exact same audio file ENDS before the scene does, about 20 frames early. So when playing back the lip-synced animation in Animate itself, it appears to drift out of sync over time. After exporting the image sequence and importing it into Harmony, however, it stays perfectly in sync all the way through to the end. I assume it’s something to do with how each program interprets the bitrate of the audio file, but I’m glad it worked out this way, because the last thing I needed was another headache in this process. Figuring it all out already set me back a full day on my schedule.
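For anyone wanting to double-check that frame math, here’s the quick conversion (the numbers are the ones from this thread; the helper itself is just illustrative):

```python
# Convert a frame count at a given frame rate into a minutes:seconds
# timecode, to sanity-check scene-length math.
def frames_to_timecode(frames, fps):
    seconds = frames / fps
    return f"{int(seconds // 60)}:{int(seconds % 60):02d}"

print(frames_to_timecode(15000, 24))       # → 10:25 (full scene length)
print(frames_to_timecode(15000 - 20, 24))  # → 10:24 (~0.8 s early)
```

So a 20-frame shortfall at 24 fps is a bit under a second, which lines up with the audio ending just barely early in Animate.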
Thank you for sharing that workaround and your test results!
I too am transitioning from Adobe Animate to Toon Boom Harmony, and I also expected, or at least hoped, that Harmony’s auto lip-sync feature would be as good as Adobe’s. Too bad that’s not the case.
I’ll have to try out Moho’s auto lip-sync feature in the future too when I have more time to see how it compares.
I can also confirm the audio drift issue is happening for me in Adobe Animate, for both my 10-minute-long .MP3 and .WAV files, in a 16,000-frame scene at 30 FPS.
The final exported videos for both file formats show progressive desync as you get closer to the end of them as you can see below:
Adobe Animate - WAV Sync Test:
Adobe Animate - MP3 Sync Test:
For anyone reading this who’s not already aware, Adobe Animate handles .MP3 files differently from .WAV files in ActionScript projects, so it was important for me to test both formats. That difference doesn’t appear to be causing this particular issue, however, since the progressive desync happens with both file types. Also, both Harmony and Adobe Animate have the same 16,000-frame limit for scenes.
At least Harmony’s audio and animation syncing seems reliable for longer scenes. I tested it in the videos below using the same 10-minute WAV and MP3 files with a 60-frame looping animation made with paste cycles only (no timing- or looping-related nodes).
Toon Boom Harmony - WAV Sync Test:
Toon Boom Harmony - MP3 Sync Test:
The results are pretty much the same with no noticeable audio desync.
Interesting - are you exporting from Animate directly to a video file, or as an image sequence and then compiling afterward with the sound?
No image sequences were used in any of my tests.
The Adobe Animate videos were created by exporting them from Adobe Animate as .MP4 (H.264) files in the usual way (File > Export > Export Video…).
And the Harmony-related videos were exported directly from Harmony as .MOV (H.264) files using File > Export > Movie.
Based on my experience figuring out this workaround, you might not run into the sync issue in Animate if you export your animation as an image sequence and then reassemble it in Premiere (or whatever else) with the original sound file. I’ve always found Animate’s export-to-video options much buggier and less reliable than doing it this way.
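If you’d rather script that reassembly step than do it in Premiere, the usual approach is to hand the image sequence and the original audio straight to ffmpeg. This is only a sketch; the file names, sequence pattern, and fps are placeholders, and it assumes ffmpeg is installed separately:

```python
# Sketch of recombining an exported PNG sequence with the original
# audio via ffmpeg, rather than trusting the in-app video export.
# File names, the frame pattern, and fps are all placeholders.
import subprocess

def build_ffmpeg_cmd(sequence_pattern, audio_path, out_path, fps=24):
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps), "-i", sequence_pattern,  # e.g. frame_%04d.png
        "-i", audio_path,
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-c:a", "aac",
        "-shortest",  # stop at whichever of video/audio ends first
        out_path,
    ]

cmd = build_ffmpeg_cmd("frame_%04d.png", "dialogue.wav", "synced.mp4")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```

Since the image sequence carries exact per-frame timing, muxing it against the untouched audio sidesteps whatever resampling the in-app exporter is doing.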