Thinking about the logic + features of the character itself
- nathanenglish
- Feb 2, 2023
- 3 min read
Updated: Feb 19, 2023
So I'm thinking a little bit about what all I want/need my interactive character(s) to be capable of. Aside from just using ChatGPT and Oculus voice to talk, the character will need to feel more "alive" through things like animations, emotes, and states.
I want to build a state machine that transitions between the character's different states, including its talking state. Ideally it would switch dynamically between Talking and Idling, with different emotion states like boredom woven in between. A state machine will also be useful for any cases where the character's responses aren't powered by ChatGPT and are instead pre-written responses or reactions, and it will help drive different animations and facial expressions depending on the current state.
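Here's a rough sketch of the kind of enum-based state machine I'm imagining. The state names and the Animator triggers are placeholders I made up; the real set will grow once I actually start building it:

```csharp
using UnityEngine;

// Placeholder state names; the real list will grow as the character develops.
public enum CharacterState { Idle, Talking, Bored, Angry }

public class CharacterStateMachine : MonoBehaviour
{
    public CharacterState Current { get; private set; } = CharacterState.Idle;

    [SerializeField] private Animator animator; // drives animations per state

    public void TransitionTo(CharacterState next)
    {
        if (next == Current) return;

        Current = next;
        // Assuming the Animator has a trigger parameter named after each state.
        animator.SetTrigger(next.ToString());
    }

    private void Update()
    {
        // Transition logic would live here: drop back to Idle a few seconds
        // after talking ends, drift into Bored if nothing happens for a while, etc.
    }
}
```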
I've only looked at the theory around state machines, and haven't ever implemented a proper one in Unity. So this will be a nice excuse to figure one out.
I'm also looking a little bit into the Blend Shapes I'll have to make for the talking and facial expressions. There are a handful that need to be made for the lip syncing; these mouth shapes are called visemes.
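If I end up on the Oculus Lipsync viseme set (just an assumption for now, not a final decision), the mouth shapes to sculpt would look roughly like this:

```csharp
public static class VisemeShapes
{
    // Roughly the standard Oculus Lipsync viseme names; subject to change
    // depending on which lip sync approach I actually go with.
    public static readonly string[] Names =
    {
        "sil", "PP", "FF", "TH", "DD", "kk", "CH",
        "SS", "nn", "RR", "aa", "E", "ih", "oh", "ou"
    };
}
```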
Unity also has documentation on getting and setting Blend Shape weights, and the values can be controlled through script. That way, combinations of different Blend Shapes can produce dozens of facial expressions, driven through script based on the current emotional state.
For example, if the character is getting angrier and angrier, the face could gradually become more expressively angry by lowering the brow and turning the mouth to a frown.
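Something like this is what I have in mind for driving the weights from script. The blend shape names here are made up and would match whatever I end up sculpting; Unity's blend shape weights run 0–100:

```csharp
using UnityEngine;

public class FacialExpressionDriver : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer face;
    [Range(0f, 1f)] public float anger; // 0 = calm, 1 = furious

    private int browIndex;
    private int frownIndex;

    private void Awake()
    {
        // Placeholder blend shape names on the character mesh.
        browIndex = face.sharedMesh.GetBlendShapeIndex("BrowLower");
        frownIndex = face.sharedMesh.GetBlendShapeIndex("MouthFrown");
    }

    private void Update()
    {
        // Blend shape weights in Unity run 0–100, so scale the anger value up.
        face.SetBlendShapeWeight(browIndex, anger * 100f);
        face.SetBlendShapeWeight(frownIndex, anger * 100f);
    }
}
```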
These blend shapes can also be combined with animations. My assumption is that they naturally get overlaid on top of baked animations, but if not, Unity has animation layers as an option for controlling specific parts of an armature and overlaying aspects of multiple different animations.
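If the blend shapes don't just overlay cleanly, a fallback could be an Animator layer masked to the head. "FaceLayer" here is a placeholder layer name, not something that exists yet:

```csharp
using UnityEngine;

public class FaceLayerController : MonoBehaviour
{
    [SerializeField] private Animator animator;
    private int faceLayer;

    private void Awake()
    {
        // Assumes an Animator layer named "FaceLayer" with an Avatar Mask
        // restricting it to the head bones.
        faceLayer = animator.GetLayerIndex("FaceLayer");
    }

    public void SetFaceLayerWeight(float weight)
    {
        // 0 = base animation only, 1 = face layer fully overrides the masked bones.
        animator.SetLayerWeight(faceLayer, Mathf.Clamp01(weight));
    }
}
```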
Something I need to figure out, and which I'm hoping ChatGPT itself can help implement, is a way to track emotional states or emotional responses based on a current "mood". For example, if the character feels angry or insulted, its facial expressions and responses should reflect that. I'd like to implement a sort of keyword system that gets fed to the chat AI and back: the character's current emotional state influences its responses, and each response comes back with a keyword dictating its mood (if it responded in an angry manner, this system could receive an ANGRY keyword and adjust the facial expressions accordingly). That keyword would then keep influencing subsequent responses until something changed the ANGRY state to a content or happy one.
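A minimal sketch of the parsing side, assuming I'd ask the model to append a tag like [ANGRY] to the end of each reply (the tag format and mood names are just something I made up for now):

```csharp
public static class MoodKeywordParser
{
    // Splits a reply like "Leave me alone. [ANGRY]" into the spoken text
    // and the mood keyword. Falls back to the previous mood if no tag is found.
    public static (string text, string mood) Parse(string reply, string fallbackMood)
    {
        int open = reply.LastIndexOf('[');
        int close = reply.LastIndexOf(']');

        if (open >= 0 && close > open)
        {
            string mood = reply.Substring(open + 1, close - open - 1).Trim().ToUpperInvariant();
            string text = reply.Remove(open, close - open + 1).Trim();
            return (text, mood);
        }

        return (reply.Trim(), fallbackMood);
    }
}
```

The parsed mood string could then feed the state machine and blend shape driver above, and get prepended to the next prompt (something like "The character currently feels ANGRY") so the model keeps responding in that mood.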
This could perhaps expand into a point-based favor system, like RPGs with romanceable characters have. There could even be some kind of check system when you ask about things the character likes or has preferences towards (something ChatGPT can recognize, if my brief experiments are anything to go by).
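The favor side could start as small as this; the point values and thresholds are arbitrary placeholder numbers just to illustrate the idea:

```csharp
public class FavorTracker
{
    public int Points { get; private set; }

    // Positive for things the character likes, negative for insults, etc.
    public void AddFavor(int amount) => Points += amount;

    // Placeholder thresholds for how the character treats the player.
    public string Disposition =>
        Points >= 50 ? "Friendly" :
        Points <= -50 ? "Hostile" : "Neutral";
}
```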
Speech Recognition?
Right now, taking in speech from a microphone to talk to the characters is not on my radar. Mostly just because I, personally, hate using a microphone for anything (lmao). I'm much more interested in plain text-based user input. Adding speech recognition is very low on (if not last on) my list of interests, and would be a last-minute addition if I had any spare time.