Animation & Trackview


Beginner | Member since: 09.09.2011, 22:06 | Posts: 1 | Likes: 0

Topic: Lip syncing
Posted: 18.12.2011, 05:03
Hello,

I am currently working on a machinima project with dialogue that needs lip syncing. I can get the dialogue to play, but I have not been able to get the characters to lip sync.

I have tried the following:

Creating a flow graph and using the Sound:Dialog node (I have tried with both an AI grunt and an AnimObject).

My flow graph:
[flow graph screenshot]


Creating a sequence, adding a sound track, and checking the 'Voice' and 'LipSync' checkboxes.

My sequence:
[sequence screenshot]

Notes:
- I have an .mp2, a .wav, and an .fsq file. All three share the same name and reside in the same folder.
- The lip sync plays perfectly in the Facial Editor.
- The sound clip plays every time, just without lip syncing.
- I have also tried file names with no spaces, in case that makes a difference.

Any help would be appreciated.

Honourable Member | Member since: 01.11.2010, 15:28 | Posts: 605 | Location: Orlando, FL | Likes: 10

Topic: Re: Lip syncing
Posted: 29.12.2011, 23:01
There is one special thing to know about the automatic playback of FSQ and WAV files:
You will need to put these files inside the proper folder under Languages and create an English.pak file for your sound and FSQ files - and put it inside the Localized folder. If a file is not inside the PAK, it will not be loaded or played. I am not sure whether you have to create entries in the recording lists in there as well.

The reason for this is that the localization system, which handles this automatic playback, requires PAK files (so that you can work with multiple languages at the same time).
The easiest approach is to write a little script that builds the PAK file from your Languages folder automatically, if you want it to work this way (a PAK is just a ZIP file, so you can use WinRAR to create one).
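As a minimal sketch of such a script, assuming a typical folder layout (the source and output paths here are just examples you would adjust to your project):

Code:
# Build English.pak from the Languages folder. A .pak is just a renamed
# .zip, so Python's standard zipfile module is enough.
import os
import zipfile

SOURCE_DIR = "Game/Languages/English"      # example source folder
OUTPUT_PAK = "Game/Localized/English.pak"  # example output path

with zipfile.ZipFile(OUTPUT_PAK, "w", zipfile.ZIP_DEFLATED) as pak:
    for root, _dirs, files in os.walk(SOURCE_DIR):
        for name in files:
            full_path = os.path.join(root, name)
            # Store paths relative to the source folder so the PAK's
            # internal layout mirrors the Languages folder.
            pak.write(full_path, os.path.relpath(full_path, SOURCE_DIR))
print("Wrote", OUTPUT_PAK)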

You can test this with the playsound node.
Alternatively, just play the FSQ through TrackView as well; simply add a track for facials to it.
Trainee | Member since: 21.11.2007, 23:13 | Posts: 183 | Likes: 19

Topic: Re: Lip syncing
Posted: 12.07.2012, 13:58
Hello. After many, many trials I got a facial animation to play in TrackView by adding an animation track along with the facial track... But I would love to know the specifics of how to make it work with a sound + FSQ via flow graph using the Sound:Dialog node. I tried adding my files to Localized/English.pak without success. Cry-Mika, could you please give a more detailed explanation of how to do this? I am using the Free SDK.

Thanks!!!
Honourable Member | Member since: 01.11.2010, 15:28 | Posts: 605 | Location: Orlando, FL | Likes: 10

Topic: Re: Lip syncing
Posted: 25.07.2012, 19:46
The language localization system has changed a bit with the latest SDK release.
One of the changes is that FSQ files no longer HAVE to be inside PAK files (hooray).
The drawback is that it is now a little unclear where the files have to be placed in order to be played automatically. I know that the system constructs the folder path by prepending the language name (for example, english) somehow, but because this has already been changed again (see below), I never investigated it further.

With the next release there will be an update that makes the system backwards compatible, meaning the FSQ file can be placed in the same folder as the WAV file again. As long as you then tick the "Voice" and "Lip Sync" flags, it will automatically be played with the sound file (this works in TrackView as well). No manual triggering in FG or TrackView needed.

Until that release you will unfortunately be stuck with either explicitly triggering the FSQ file, through code or TrackView, or coding your own system in C++ that starts the sound and the FSQ (if one exists) simultaneously - personally I opted for the latter for my own project.
Trainee | Member since: 21.11.2007, 23:13 | Posts: 183 | Likes: 19

Topic: Re: Lip syncing
Posted: 25.07.2012, 20:02
Thanks so much, Cry-Mika. I wish I had enough C++ skills to code it myself; I can compile the game.dll in VS, but that's it :( I will keep learning, and in the meantime I will wait for the next release. I hope they also fixed the problem with morphs, which are unreliable at best, crashing the editor and not always working. I switched to the bone system for facial expressions, which I find much more difficult to set up than morphs. Anyway, thanks a ton for the input.
Honourable Member | Member since: 01.11.2010, 15:28 | Posts: 605 | Location: Orlando, FL | Likes: 10

Topic: Re: Lip syncing
Posted: 26.07.2012, 13:53
Hi there,

what problems are you experiencing with morphs? I am using them all the time (with the Free SDK) and do not have any problems with them.

Just guessing wildly here, but did you UNCHECK the "Write Vertex Animation" checkbox before exporting your .chrs (or any geometry, for that matter)? That box needs to be unchecked; otherwise you end up with very large files that will also likely crash the build. Despite what the name suggests, this checkbox has nothing to do with morphs.
Trainee | Member since: 21.11.2007, 23:13 | Posts: 183 | Likes: 19

Topic: Re: Lip syncing
Posted: 20.11.2012, 21:49
Hey Cry-Mika, could you explain where the .fsq files have to be placed and listed for the new flow node "PlayFacialSequence" to be able to see them in the editor?
The old system is working now: the sound + facial sequence plays when the files are in the same directory with the same name, using the sound player with "Voice" checked... Yay!

But could you tell us where to place the .fsq files to use this node? It seems rather handy. Thanks!
Honourable Member | Member since: 01.11.2010, 15:28 | Posts: 605 | Location: Orlando, FL | Likes: 10

Topic: Re: Lip syncing
Posted: 20.11.2012, 22:05
Hi kimba,

you can place the FSQs wherever you want, though I would recommend a subfolder in the Animations folder.
If you want to use them with the FG node, you will have to make an entry for your FSQ in the chrparams file of your head.chr (your filename may obviously differ). The FSQs are then loaded like regular animation files.

Your head .chr needs its own chrparams file, as you have probably already figured out when setting up the facial expression library.
Just add lines like these to that file:

<Animation name="#filepath" path="animations\human\facial"/>
<Animation name="mysequence_name" path="miscellaneous/myFirstSequence.fsq"/>

Does that answer your question?
Trainee | Member since: 21.11.2007, 23:13 | Posts: 183 | Likes: 19

Topic: Re: Lip syncing
Posted: 20.11.2012, 22:08
Yes! That is the answer!
Thanks so much!
Uber Modder | Member since: 29.05.2010, 11:46 | Posts: 1259 | Location: Germany | Likes: 148

Topic: Re: Lip syncing
Posted: 20.11.2012, 22:58
Hi Mika,

since we are on the topic :P
How would you go about dynamically creating lip sync from phonemes coming from a speech synthesizer?

https://github.com/hendrikp/Plugin_Flit ... onemes.cpp

Is there already something in place, or will I need to generate a chrparams and an FSQ at runtime?

Flite uses these phonemes:
https://github.com/hendrikp/Plugin_Flit ... lish.c#L43

So it seems like IFacialSentence would be the way to go. Do you think it's doable with the Free SDK, or will I be missing required functionality? :P


Honourable Member | Member since: 01.11.2010, 15:28 | Posts: 605 | Location: Orlando, FL | Likes: 10

Topic: Re: Lip syncing
Posted: 21.11.2012, 16:11
Hi there,

WARNING - looong post ahead :)

OK, first off, a bit of information so you don't go down the wrong path.
You cannot generate a chrparams file at runtime. Sure, you can create the XML file, but chrparams files cannot be hot-reloaded for a character that is already loaded, so this won't help you.

You can, however, create an FSQ file at runtime and simply load and play it manually alongside your speech synthesizer. This is more direct - and since you know which file you want, there is no need to map it in a chrparams file first anyway.

Creating an FSQ is not as simple as creating an IFacialSentence. There is no code in CryEngine that will do this for you based on a text string; the facial tools were all designed to work with audio tools and use a plugin like Annosoft to recognize the audio. You will have to write code that constructs an FSQ for your sentence by stringing together the phonemes for its words. This can be done (depending on your level of C++ skill it won't even be too hard), but you will have to write that code yourself - roughly along the lines of the sketch below.
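Just to illustrate the "stringing phonemes together" step (not the FSQ file format itself, which is engine-specific and omitted here), a minimal sketch; the phoneme table and durations are made-up placeholders, and a real implementation would take both from the synthesizer:

Code:
# Turn a sentence into a timed phoneme track. PHONEMES and the
# per-phoneme duration below are invented placeholders.
PHONEMES = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}
PHONEME_DURATION = 0.08  # seconds per phoneme, a guessed constant
WORD_GAP = 0.05          # pause between words, also a guess

def phoneme_track(sentence):
    """Return a list of (phoneme, start_time, duration) tuples."""
    track = []
    t = 0.0
    for word in sentence.lower().split():
        for ph in PHONEMES.get(word, []):
            track.append((ph, t, PHONEME_DURATION))
            t += PHONEME_DURATION
        t += WORD_GAP
    return track

# Each tuple would then become one keyframe in the generated FSQ.
for entry in phoneme_track("hello world"):
    print(entry)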

Now, the question remains whether an FSQ is the right way to go in the first place. I am honestly not sure, for multiple reasons.

One is timing. You might know what sentence your speech synthesizer will say, but you don't know the timing of when each word/vowel/phoneme is actually generated, right? (I have not worked with speech synthesizers before, so I might be wrong.) If that is the case, then the lip sync will certainly be off.

For this case I have experimented with generic talking animations: one or more looping FSQs (more than one, so it is not repetitive) that look convincingly like a talking character and are played for the entire time the speech is active. It worked surprisingly well; the only issue was that the mouth kept moving during pauses in a sentence. I always thought this could be improved by stopping the FSQ, or overlaying it with a closed-mouth one, whenever the amplitude of the playing sound was close to 0, but I never tried that out - something like the gate sketched below.
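A minimal, engine-agnostic sketch of that amplitude gate (the threshold and window size are guesses you would tune; hooking the result up to the FSQ is left as comments):

Code:
import math

SILENCE_THRESHOLD = 0.02  # made-up RMS threshold, tune by ear
WINDOW_SIZE = 1024        # samples per analysis window, also a guess

def is_talking(samples):
    """True if the RMS amplitude of this audio window exceeds the
    threshold. `samples` is a window of floats in [-1.0, 1.0]."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms > SILENCE_THRESHOLD

# Per frame, feed in the most recent window of the playing sound:
# if is_talking(window): keep the talking FSQ looping
# else: stop it, or crossfade to a closed-mouth sequence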

Just to give you even more to consider: a friend of mine has played around with live lip syncing a bit and took a different approach. Instead of playing an FSQ, he played the expressions for the individual phonemes directly on the face the moment his code recognized them in the audio - event-driven, roughly like the sketch below.
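The shape of that streaming approach, again as an engine-agnostic sketch (apply_expression is a hypothetical callback standing in for whatever facial API you drive, and VISEME_FOR is a made-up lookup table):

Code:
# Map recognized phonemes straight to facial expressions as they arrive;
# no pre-built sequence, so the timing comes from the recognizer itself.
VISEME_FOR = {"HH": "viseme_open", "OW": "viseme_round"}  # placeholder table

def on_phoneme_recognized(phoneme, apply_expression):
    """Called by the recognizer the moment a phoneme is detected.
    `apply_expression` is a hypothetical hook into the facial system."""
    viseme = VISEME_FOR.get(phoneme)
    if viseme is not None:
        apply_expression(viseme)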

Another option: if you already know which sentences your speech synthesizer will say, you could record them prior to shipping and have a lip sync tool create dedicated FSQs for those sentences. Even if you don't know all the sentences yet, maybe this is an option for those that you _do_ know.

I hope this helped you at least a little bit in finding the best way for you to go. :)
Uber Modder | Member since: 29.05.2010, 11:46 | Posts: 1259 | Location: Germany | Likes: 148

Topic: Re: Lip syncing
Posted: 22.11.2012, 01:29
Yes, thanks! I'll post here again if I have some results or a proof of concept, but it could take some time ;)