Early attempts at speech and expression sync.
Using the ElevenLabs API for text-to-speech (at least for testing), we generate viseme timestamps that we can pass to code and then map in Rive.
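A minimal sketch of the timestamps-to-visemes step. This assumes alignment data shaped like character-level timestamps (characters plus start times in seconds); the character-to-viseme table and the function name are illustrative only, since a real pipeline would map phonemes rather than letters before driving a Rive input.

```python
# Hypothetical, simplified char-to-viseme lookup for illustration.
CHAR_TO_VISEME = {
    "a": "open", "e": "open", "i": "wide", "o": "round", "u": "round",
    "m": "closed", "b": "closed", "p": "closed",
    "f": "teeth", "v": "teeth",
}

def viseme_keyframes(characters, start_times):
    """Collapse per-character timestamps into (time, viseme) keyframes,
    emitting a keyframe only when the viseme changes."""
    keyframes = []
    last = None
    for ch, t in zip(characters, start_times):
        viseme = CHAR_TO_VISEME.get(ch.lower(), "rest")
        if viseme != last:
            keyframes.append((t, viseme))
            last = viseme
    return keyframes

# Made-up alignment data for a quick check:
chars = list("mama")
times = [0.00, 0.08, 0.20, 0.28]
print(viseme_keyframes(chars, times))
# → [(0.0, 'closed'), (0.08, 'open'), (0.2, 'closed'), (0.28, 'open')]
```

Each keyframe could then be fed to a Rive state-machine input at the given time to swap mouth shapes in sync with playback.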
I plan to add other means of bodily expression, and obviously the UI, but for a first draft it looks promising.
A robust avatar creator I launched about a year ago, built with Rive and Swift. This was well before Rive's data binding feature. Crazy to think how much the workflow would change if I tried it today.
Try it at:
https://apps.apple.com/gb/app/jas-character-creator/id6740578290