Can you map auto lip-sync data to stack slider?

Hello everyone! I’m rigging a character with a rotating head. I’m using the stack wizard so I can create a head that rotates and also talks. I know I can set it up to keyframe between the stacks to select different mouth positions, created either with drawing swaps (different exposures) or with deformers. What I’m wondering is whether there is a way to drive the slider function of this master controller stack from the auto lip-sync feature available in the layer properties of a sound file. In other words, can I auto lip-sync into a stack, or can auto lip-sync only affect individual layers?
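In case it helps anyone thinking about a workaround: if the auto lip-sync can export (or you can otherwise get at) Papagayo-style switch data, where each line is a frame number and a phoneme name, you could in principle translate that into slider values and re-enter them as keyframes on the master controller by hand or via scripting. Here's a minimal sketch of the idea; the phoneme-to-slider mapping is entirely hypothetical and would need to match the order of the mouth positions in your own stack:

```python
# Hypothetical mapping from phoneme names to slider positions --
# adjust to match the order of mouths in your stack.
PHONEME_TO_SLIDER = {
    "rest": 0, "AI": 1, "E": 2, "O": 3, "U": 4,
    "WQ": 5, "MBP": 6, "FV": 7, "L": 8, "etc": 9,
}

def switch_data_to_keyframes(lines):
    """Yield (frame, slider_value) pairs from switch-data text lines."""
    keys = []
    for line in lines:
        parts = line.split()
        if len(parts) != 2 or not parts[0].isdigit():
            continue  # skip header lines like "MohoSwitch1" and blanks
        frame, phoneme = int(parts[0]), parts[1]
        keys.append((frame, PHONEME_TO_SLIDER.get(phoneme, 0)))
    return keys

sample = """MohoSwitch1
1 rest
5 MBP
9 O
14 rest"""
print(switch_data_to_keyframes(sample.splitlines()))
# -> [(1, 0), (5, 6), (9, 3), (14, 0)]
```

That at least gets the lip-sync timing into slider terms, even if the software can't drive the stack directly.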

Thanks in advance for your help!
