I am still unable to replicate your situation.
Did you update your Spine file from the last time that I downloaded it?
Try clearing the Chrome cache for the itch.io website. Maybe the 503 server error caused your browser to cache corrupted Prototype files.
Another cause could be Spine runtime texture loading. If the texture has not finished loading by the time the Spine runtime renders, the texture is black by default.
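For context, here is a minimal sketch of how a page using the spine-ts WebGL runtime typically avoids this race, assuming the standard spine.webgl.AssetManager API; the file names and the startRendering function are placeholders, not the Prototype's actual code:

```typescript
declare const spine: any; // spine-ts WebGL runtime, loaded globally

// Load every asset first, and only start rendering once loading completes.
// "model.json" and "model.atlas" are placeholder file names.
const canvas = document.getElementById("canvas") as HTMLCanvasElement;
const gl = canvas.getContext("webgl")!;
const assetManager = new spine.webgl.AssetManager(gl);

assetManager.loadText("model.json");
assetManager.loadTextureAtlas("model.atlas"); // also loads the atlas page textures

function waitForAssets() {
  if (assetManager.isLoadingComplete()) {
    startRendering(); // safe: all textures exist, nothing renders black
  } else {
    requestAnimationFrame(waitForAssets); // poll again next frame
  }
}
requestAnimationFrame(waitForAssets);

function startRendering() {
  // placeholder: build the skeleton and run the render loop here
}
```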
Did you update your Spine file from the last time that I downloaded it?
No, I did not make any changes to my Spine file.
I have tried several times, and the texture appears to turn black depending on the timing. Here is the video:
I simply reloaded the browser and repeated the same steps; sometimes it worked and sometimes it didn't. You can see the bug at the following times in the video:
0:07
1:07
1:19
2:12
Another cause could be Spine runtime texture loading. If the texture has not finished loading by the time the Spine runtime renders, the texture is black by default.
I believe this is what is happening in your situation after your further testing. The texture may be loading incorrectly and inconsistently because you have the web developer console open. I know for a fact that an open console can delay script execution. The console lags inconsistently, which could explain why the texture loads correctly only sometimes.
When I first read your reply, I thought that explained it, but I was able to reproduce the problem even without opening the developer tools. In most cases, just reloading the page and re-uploading the files solves the problem, so I'm still not sure what the cause is. Maybe other people who encounter this problem will eventually have a clue about the cause.
There is a workaround and it is not a very serious bug, so I think you can ignore it for the time being.
Add a premultiplied alpha setting to Single Value Properties. It takes the value "false" or "true". Each model can be rendered with either premultiplied alpha or straight alpha (see the sketch after this list).
Expand the *.svp file to save the premultiplied alpha setting.
Add the ability to load a video for face tracking. You can find it in the Video Player Settings menu. The video player accepts *.mp4, *.webm, and *.ogg as long as your web browser supports those formats and the corresponding video codecs.
Allow the required Spine animations to be placed in a folder. All the required animations need to be in the same folder; you cannot spread the required animations across multiple folders.
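For anyone curious how a flag like this maps onto the runtime, here is a rough sketch using the spine-ts WebGL SkeletonRenderer, whose premultipliedAlpha flag switches between the two blend modes; the svp parsing and key name shown are hypothetical illustrations, not the Prototype's actual code:

```typescript
declare const spine: any; // spine-ts WebGL runtime, loaded globally

// Hypothetical parse of a *.svp single-value property such as
// { "premultiplied alpha": "true" } -- the key name is an assumption.
function configureRenderer(canvas: HTMLCanvasElement, svpText: string) {
  const svp: Record<string, string> = JSON.parse(svpText);
  const context = new spine.webgl.ManagedWebGLRenderingContext(canvas);
  const renderer = new spine.webgl.SkeletonRenderer(context);
  // true: blend with (ONE, ONE_MINUS_SRC_ALPHA) for premultiplied textures;
  // false: blend with (SRC_ALPHA, ONE_MINUS_SRC_ALPHA) for straight alpha.
  renderer.premultipliedAlpha = svp["premultiplied alpha"] === "true";
  return renderer;
}
```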
@SilverStraw I just tried the video player feature in 1.0.9 and it's GREAT!!! Thanks to the ability to stop the video midway through, it is very easy to find the points I need to fix, which is very helpful for improving my model.
@OatmealMu The mouth rig of your model is very cool!
I just tried the video player feature in 1.0.9 and it's GREAT!!! Thanks to the ability to stop the video midway through, it is very easy to find the points I need to fix, which is very helpful for improving my model.
Sorry for the late reply. I have been working on a troublesome experimental feature. I am glad that the video player is helpful to everyone. I hope you will review the next experimental feature, because it looks promising. Erika and Cydomin requested it.
Update 1.1.0
Override the spine.SkeletonData.findAnimation method to take a regular expression string parameter. This allows regular expression patterns to be used when finding animation names (a sketch of such an override follows this list).
Add an experimental menu for unfinished features, allowing others to give feedback during development.
Add "Motion Capture Snapshot" to the experimental menu to test saving a motion capture pose as an animation in JSON. The JSON can then be re-imported as data into the Spine editor. This only works with a Spine JSON file imported into Spine Vtuber Prototype; a Spine skel binary will not work because Spine Vtuber Prototype needs to duplicate the Spine JSON. Bone and draw order keyframes are currently operational. This is a requested unfinished feature.
Add a slider (range) to control the video player opacity in the video player settings menu. At full transparency, the video player can still be controlled through the right-mouse context menu.
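A rough sketch of what the findAnimation override could look like, assuming the spine-ts SkeletonData with its animations array; the Prototype's actual code may differ:

```typescript
declare const spine: any; // spine-ts runtime, loaded globally

// Treat the name argument as a regular expression and return the first
// animation whose name matches, instead of requiring an exact match.
spine.SkeletonData.prototype.findAnimation = function (pattern: string) {
  const regex = new RegExp(pattern);
  for (const animation of this.animations) {
    if (regex.test(animation.name)) return animation;
  }
  return null;
};

// Usage: skeletonData.findAnimation("^blink") matches "blink", "blink-left", ...
```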
Add an experimental menu for unfinished features, allowing others to give feedback during development.
Add "Motion Capture Snapshot" to the experimental menu to test saving a motion capture pose as an animation in JSON. The JSON can then be re-imported as data into the Spine editor. This only works with a Spine JSON file imported into Spine Vtuber Prototype; a Spine skel binary will not work because Spine Vtuber Prototype needs to duplicate the Spine JSON. Bone and draw order keyframes are currently operational. This is a requested unfinished feature.
I'm really excited about this new feature's potential! :heart:
As far as I've tried, I seem to obtain only one frame so far; is this the right behavior? The pose on the frame did indeed match the pose of the model at the time the download was performed, so if the current behavior is as intended, then it is working correctly.
Add a slider (range) to control the video player opacity in the video player settings menu. At full transparency, the video player can still be controlled through the right-mouse context menu.
This is really helpful! It would be useful for taking screenshots or screen recordings.
This is really helpful! It would be useful for taking screenshots or screen recordings.
Yes, for those who do not want to show their faces.
As far as I've tried, I seem to obtain only one frame so far; is this the right behavior? The pose on the frame did indeed match the pose of the model at the time the download was performed, so if the current behavior is as intended, then it is working correctly.
Currently there is only one frame. There were several changes to the Spine JSON format from version 3 to version 4. I have not added functions for all the keyframe types yet. I am still testing a single keyframe. Once that is done, I will move on to saving multiple keyframes.
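To make the JSON shape concrete, here is an illustrative single-keyframe snapshot as it could appear in a Spine 4.x JSON skeleton, written as a TypeScript literal; the animation name, bones, and values are made up, not the Prototype's actual output:

```typescript
// One captured pose: each bone gets a single key (time defaults to 0),
// so re-importing into the Spine editor yields a static pose.
const snapshot = {
  animations: {
    "mocap-snapshot": {
      bones: {
        head: {
          rotate: [{ value: 12.5 }],        // single rotate key at time 0
          translate: [{ x: 3.2, y: -1.4 }], // single translate key at time 0
        },
      },
      drawOrder: [{ time: 0 }],             // single draw order key, no offsets
    },
  },
};
```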
Currently there is only one frame. There were several changes to the Spine JSON format from version 3 to version 4. I have not added functions for all the keyframe types yet. I am still testing a single keyframe. Once that is done, I will move on to saving multiple keyframes.
Great to hear that!! Being able to save multiple frames of captured motion would be a very beneficial update, so that people who want to use Spine skeletons to make a music video can use this tool for lip sync and face tracking. (I was just recently asked about lip sync by a Japanese user, so I believe there is real demand for it.)
I'm really looking forward to the next update very much :bananatime:
• Add "Start Record" button to the Experimental menu. This button starts motion capturing the poses of the model(s). The user will need to enter the amount of delay between each capture (milliseconds). A sketch of such a capture loop follows this list.
• Add a slowly flashing "Recording" label to the Experimental menu. This signals when the application is motion capturing.
• Add "Stop Record" button to the Experimental menu. This button stops motion capturing.
• Add "Play Keyframes" button to the Experimental menu. This button plays back the motion capture without any in-betweening.
• Add "Save Motion Capture" button to the Experimental menu. This button saves the current motion capture. The user will be asked to enter a name for the motion capture animation and the amount of delay between each keyframe (milliseconds). The user will also be asked to save a JSON file that can be re-imported back into the Spine editor. The user can save multiple motion capture animations in one JSON file by choosing "Cancel" in the save dialog window after each motion capture session, then saving the JSON file at the end when ready to re-import into the Spine editor.
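A minimal sketch of what the record loop could look like, assuming a spine-ts skeleton whose bones expose rotation and x/y translation; the captured-frame structure is an assumption for illustration, not the Prototype's internals:

```typescript
// The skeleton and keyframe shapes are assumptions for illustration.
interface CapturedFrame {
  rotations: Map<string, number>;               // bone name -> rotation (degrees)
  translations: Map<string, [number, number]>;  // bone name -> [x, y]
}

const capturedFrames: CapturedFrame[] = [];
let recordTimer: number | null = null;

function startRecord(skeleton: any /* spine.Skeleton */, captureDelayMs: number) {
  recordTimer = window.setInterval(() => {
    const frame: CapturedFrame = { rotations: new Map(), translations: new Map() };
    for (const bone of skeleton.bones) {
      frame.rotations.set(bone.data.name, bone.rotation);
      frame.translations.set(bone.data.name, [bone.x, bone.y]);
    }
    capturedFrames.push(frame); // one keyframe per tick
  }, captureDelayMs);
}

function stopRecord() {
  if (recordTimer !== null) window.clearInterval(recordTimer);
  recordTimer = null;
}
```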
1.1.1 is Super COOL!! I obtained the following animation in about 10 minutes using the motion capture of 1.1.1:
A supplement for people who have not yet tried this feature: the animation in the video above is not the state immediately after importing the JSON obtained with the motion capture feature; I did a little editing (removing unnecessary keyframes, matching the first and last keyframes, etc.).
I set the amount of time between each keyframe, which can be defined when starting a recording, to 50 (milliseconds). I also tried 250 (the default) and 100, but 50 seemed about right to achieve the smoothness I was looking for. @SilverStraw I was afraid that changing the time setting would cause some kind of error or freeze the recording, but it didn't at all, and I got this result without any errors. I also tried recording for a longer time and had no problems even then, and I was amazed at how fast the captured data JSON file was saved. This is really fantastic! :heart:
By the way, I have a question. When I record with 50 milliseconds between each keyframe, the captured animation becomes smooth, but I felt that the motion is slower than when it was captured. Does the setting affect the FPS of the result?
By the way, I have a question. When I record with 50 milliseconds between each keyframe, the captured animation becomes smooth, but I felt that the motion is slower than when it was captured. Does the setting affect the FPS of the result?
What I wanted to ask is whether a shorter time between keyframes means a higher FPS in the saved animation. For example, with 250 milliseconds the animation would be keyframed at 4 FPS, but with 50 milliseconds at 20 FPS. I would like to save an animation that plays back at the same speed as when it was captured, but the actual saved animation seems to move slower, and I would like to know how to play it back at the correct speed.
Do you mean the playback animation in Spine Vtuber Prototype or in Spine editor?
The "Start Record" and "Save Motion Capture" buttons each ask for a time delay, and you do not necessarily need to input the same value for both. The capture delay determines how often a pose is sampled in real time, while the save delay determines the spacing of the keyframes in the saved animation, so when using the "Save Motion Capture" button you can enter a shorter delay than the one you captured with to speed the animation up. I hope this helps.
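As a hypothetical numeric illustration (the Prototype's internals may differ): if poses were captured every 50 ms, saving with 50 ms per keyframe reproduces real time, while saving with 25 ms per keyframe plays the motion back twice as fast.

```typescript
// Keyframe i lands at time i * saveDelay, so playback speed relative to
// real time is captureDelay / saveDelay.
function keyframeTimes(frameCount: number, saveDelayMs: number): number[] {
  return Array.from({ length: frameCount }, (_, i) => (i * saveDelayMs) / 1000);
}

const captureDelayMs = 50;          // pose sampled every 50 ms
console.log(keyframeTimes(4, 50));  // [0, 0.05, 0.1, 0.15] -> real-time speed
console.log(captureDelayMs / 25);   // 2 -> saving with 25 ms plays back 2x faster
```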
I meant in the Spine editor, sorry for being unclear!
The "Start Record" and "Save Motion Capture" buttons each ask for a time delay, and you do not necessarily need to input the same value for both. The capture delay determines how often a pose is sampled in real time, while the save delay determines the spacing of the keyframes in the saved animation, so when using the "Save Motion Capture" button you can enter a shorter delay than the one you captured with to speed the animation up. I hope this helps.
I see, so I can make adjustments that way. Thank you for your response!
This may be an unpopular idea/opinion, but instead of face tracking, could the character follow the mouse instead, or both in different modes? And maybe when you press/hold a certain button, the expression changes (I imagine having animation types in Spine like idle, happy, sad, angry, etc.), or even the clothing changes (which in Spine would be skins). Anyway, just a thought I had.
Here's a program with that example (except of course it uses still images and has nothing to do with Spine animations): https://dreamtoaster.itch.io/honk
This may be an unpopular idea/opinion, but instead of face tracking, could the character follow the mouse instead, or both in different modes?
Mouse movements cannot be tracked when the web browser is out of focus.
And maybe when you press/hold a certain button, the expression changes (I imagine having animation types in Spine like idle, happy, sad, angry, etc.), or even the clothing changes (which in Spine would be skins)
There are no built-in animation types for facial expressions in the Spine editor. There are plans to add more animation tracks for customizable overrides of lower tracks.
There is support for skins, but not runtime mix-and-match skins.
Here's a program with that example (except of course it uses still images and has nothing to do with Spine animations): https://dreamtoaster.itch.io/honk
Have you purchased that program?
Work in Progress: integrating a face expression AI into the project. It is going to be another experimental feature.
Allow a model to be loaded into its default pose even if the model lacks the required animations.
Add a face expression AI test to the Experimental menu. The "Face Expression AI Active" checkbox starts and stops the facial expression recognition. The camera has to be active for the AI. This experiment outputs the recognized facial expression state and lists all the possible facial expressions with their respective confidence levels.
Add a "Change Facial Expression Threshold" button to the Experimental menu to adjust the confidence threshold for each possible facial expression. The neutral expression will be ignored in future releases.
The face expression AI is tricky to use. Some facial expressions are easier to recognize than others.
By default, the AI chooses the facial expression with the highest confidence level.
I added a way for the user to adjust the confidence threshold for each possible facial expression as another criterion for determining the face expression state.
This AI experiment does not affect any rendered model, so you can test it without loading any Spine files. The camera has to be active for the AI to get image data.
Determining the facial expressions you want is an important step before the project starts messing with your vtuber models.
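A minimal sketch of the selection rule described above, assuming face-api.js-style predictions of expression names with probabilities; the shapes and names are assumptions, not the Prototype's actual code:

```typescript
// face-api.js-style predictions are assumed: one probability per expression.
type Prediction = { expression: string; probability: number };

function pickExpression(
  predictions: Prediction[],
  thresholds: Record<string, number>, // per expression: 0 (easiest) .. 1 (hardest)
): string | null {
  let best: Prediction | null = null;
  for (const p of predictions) {
    if (p.probability < (thresholds[p.expression] ?? 0)) continue; // below threshold
    if (best === null || p.probability > best.probability) best = p;
  }
  return best ? best.expression : null; // null: nothing recognized
}

// "surprised" wins: it clears its threshold and has the highest confidence.
pickExpression(
  [
    { expression: "angry", probability: 0.3 },
    { expression: "surprised", probability: 0.8 },
  ],
  { angry: 0.6, surprised: 0.4 },
); // -> "surprised"
```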
The face expression AI is fun! As you mentioned, some facial expressions seem to be difficult to recognize, like angry and fearful. However, just being able to recognize happy, surprised, and sad seems to give me a lot more freedom!
I would definitely like to add facial expression animations to my model when it is possible to change the model's facial expression with the AI. (My understanding is that this is not possible yet, but is it already possible?)
Have you tried lowering the confidence threshold for the harder facial expressions and raising the confidence threshold for the easier ones?
This picture might help people understand what the AI is looking for when it tries to recognize facial expressions.
I would definitely like to add facial expression animations to my model when it is possible to change the model's facial expression with the AI. (My understanding is that this is not possible yet, but is it already possible?)
Have you tried lowering the confidence threshold for the harder facial expressions and raising the confidence threshold for the easier ones?
Yes, but even when the threshold was set to 1, "angry" and "fearful" were not recognized well. As far as I've tried, "disgusted" is the most difficult to recognize.
Yes, it should be possible.
Oh, so could you tell me what animation name is required for each expression?
Yes, but even when the threshold was set to 1, "angry" and "fearful" were not recognized well. As far as I've tried, "disgusted" is the most difficult to recognize.
I might not have explained it well. The values should be between 0 and 1, where 1 makes an expression harder to trigger and 0 makes it easier. If you set those expressions to 1, then you make them harder to recognize.
Oh, so could you tell me what animation name is required for each expression?
I am sorry for the misunderstanding. The current prototype is not ready yet for adding facial expression animations.
I might not have explained it well. The values should be between 0 and 1, where 1 makes an expression harder to trigger and 0 makes it easier. If you set those expressions to 1, then you make them harder to recognize.
Ah, I see. Indeed, when I set the threshold for "angry" to 0, my facial expression was often recognized as "angry" even when it was almost expressionless.
I am sorry for the misunderstanding. The current prototype is not ready yet for adding facial expression animations.
Got it, so I'll try to add the animations when it is ready.
Is there a guide on how to export correctly? I tried a very big Spine file, but some images are completely black and shaped like their meshes. It would also be nice if this supported multiple skins.
Edit 1: I found out that this does not accept multiple PNGs.
No worries at all. This already feels very usable for most aspiring VTubers! The Spine team should consider fast-tracking official support for this, because it's still the perfect time to enter the market!
Separate error catching on Spine animation tracks.
Allow the "left eye open" Spine animation track to keyframe both eyes while detecting an eye wink.
Add a wink threshold to determine the ease of detecting eye winks. It ranges from 0 (easiest) to 1 (hardest). The setting is located in Model Settings > Single Value Properties > wink threshold. This property can be saved into and loaded from the svp file. A sketch of how such a threshold could work follows this list.
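A minimal sketch of how a 0-to-1 wink threshold could gate detection, assuming eye-openness values (0 = closed, 1 = open) from the face tracker; this is an illustration, not the Prototype's actual logic:

```typescript
// Eye-openness inputs (0 = closed, 1 = open) from the face tracker are an
// assumption about the Prototype's internals.
function detectWink(
  leftOpen: number,
  rightOpen: number,
  winkThreshold: number, // 0 (easiest) .. 1 (hardest)
): "left" | "right" | null {
  // A wink requires one eye to be clearly more closed than the other;
  // raising the threshold demands a bigger difference between the eyes.
  const difference = Math.abs(leftOpen - rightOpen);
  if (difference < winkThreshold) return null; // no wink detected
  return leftOpen < rightOpen ? "left" : "right";
}
```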
I have tried 1.1.3, and it certainly seems to make it easier to detect eye winks! Here is the result of my test (the model was also modified slightly):
It had been a while since I tested this tool, but I found it enjoyable once again.
Thank you for keeping this cool tool updated!
One thought: it might be a good idea to add a threshold value for the eye pupils, so that subtle changes in position can be ignored. My model's pupils wobbled while winking, so I adjusted the left/right pupil pitch strength and left/right pupil yaw strength parameters, but if these values are set too low, the model's eyes will not follow the eye movement at all, which is not ideal. FYI, in the model in the video above, left/right pupil pitch strength is set to 4 and left/right pupil yaw strength to 5. I would be happy if you could consider it!
Add skeleton debug mode to the scene render. The checkbox is located under the "Canvas Settings" menu.
Add a debug bones checkbox under the "Canvas Settings" menu. This includes options for bone center color, bone line color, and bone line width (minimum of 0 and maximum of 10).
Add a debug region attachments checkbox under the "Canvas Settings" menu. This includes an option for region attachment line color.
Add a debug mesh triangles checkbox under the "Canvas Settings" menu. This includes options for mesh triangle line color and mesh line opacity (minimum of 0 and maximum of 100).
Add a debug clipping checkbox under the "Canvas Settings" menu. This includes an option for clipping line color.
Add left/right pupil pitch/yaw threshold settings to the "Single Value Properties" drop-down list under the "Model Settings" menu (a sketch of how such a threshold could act as a dead zone follows this list).
Fix a bug where the premultiplied alpha setting was not loaded when it was assigned false.
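A minimal sketch of the pupil pitch/yaw threshold as a dead zone, which matches the wobble-reduction behavior described below; the previous/next value plumbing is an assumption, not the Prototype's code:

```typescript
// Dead-zone sketch: ignore tracker deltas smaller than the threshold so the
// pupils stop wobbling; the previous/next plumbing is an assumption.
function applyPupilThreshold(previous: number, next: number, threshold: number): number {
  // e.g. with a pitch threshold of 0.15, wink-induced jitter of 0.05 is ignored
  return Math.abs(next - previous) < threshold ? previous : next;
}
```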
One thought: it might be a good idea to add a threshold value for the eye pupils, so that subtle changes in position can be ignored. My model's pupils wobbled while winking, so I adjusted the left/right pupil pitch strength and left/right pupil yaw strength parameters, but if these values are set too low, the model's eyes will not follow the eye movement at all, which is not ideal. FYI, in the model in the video above, left/right pupil pitch strength is set to 4 and left/right pupil yaw strength to 5. I would be happy if you could consider it!
I had to reinstate the threshold mechanism after I had forgone it for a moving average. You cannot export those settings to .svp yet. I want you to test it out before I make more commitments to the pupil threshold.
Thank you so much for adding left/right pupil pitch/yaw threshold settings!! I have tried them and changing the threshold values actually helped to reduce the eye wobbling. Here is the result:
The settings related to my model's eyes are as follows:
left/right eye strength: 50
left/right pupil pitch strength: 10
left/right pupil pitch threshold: 0.15
left/right pupil yaw strength: 4
left/right pupil yaw threshold: 0.07
While testing these settings, I found I should modify my animations, so the model in the video above is updated. Here are the updated project files: chara-for-Spine-Vtuber-Prototype_20230117.zip
As shown in the following image, I made the pose not change immediately after detecting changes in the eyelid and mouth movements captured in some animations:
This adjustment was made so that minute movement changes would not cause subtle opening of the eyelids or mouth. Also, since I set the pupil pitch thresholds to higher values than the pupil yaw thresholds, I adjusted the pitch down/up animations so the pupils do not move from frame 0 to frame 10, so that the position does not appear to jump suddenly when the threshold is exceeded.
By the way, you said:
You cannot export those settings to .svp yet.
but somehow I can export the threshold settings to .svp. (Maybe you updated this tool after replying to this thread?)
Anyway, I am happy with the results this time! There are some things I would like to fix in my rig (e.g., the half-closed-eye pose is not very good, although I have adjusted it many times), but I think the current state of this tool is already great for vtubing. I am looking forward to the day when facial expression animations can be added. Great work!! :yes:
but somehow I can export the threshold settings to .svp. (Maybe you updated this tool after replying to this thread?)
It has been a while since I worked on the source code. I forgot that I have a function that exports settings from a list of default setting values. I updated that list with the left/right pupil pitch/yaw threshold default values; therefore, they got exported. :lol:
Update 1.1.5
Add a "Backface culling" checkbox to the "Canvas Settings" menu. The back faces of attachments are invisible when checked. Which side counts as the back is kept consistent with the Spine editor.
Add a "Flip Horizontal" checkbox to the "Canvas Settings" menu. When checked, the world X-axis is flipped.
Add empty animation tracks: custom tracks 1 to 4.
Add customizable expression buttons 0 to 20 underneath the rendering canvas. Button 0 resets all custom tracks, removing all active customizable expressions. The buttons are usable once the expression slot has been set up.
Add a customizable expression setup interface under the "Model Settings" menu.
Add a customizable expression slot drop-down menu (1 to 20) in the setup. Each slot allows you to set up multiple custom track indices (1 to 4), transitional animations, and animation loops that follow the transitional animations.
Include a button to add a setup interface row for adding more custom tracks (1 to 4), transitional animations, and animation loops.
Add a button to remove the last setup interface row. You do not want any rows with incomplete information, as you will not be able to finish setting up the customizable expression slot.
Add a button to assign all the custom tracks (1 to 4), transitional animations, and animation loops to the numbered customizable expression slot (1 to 20).
Each setup interface row has three parts: custom track index (1 to 4), transitional animation, and animation loop. The custom track index and transitional animation are required for setup, while the animation loop is optional. The transitional animation and animation loop input fields are drop-down menus that list all the animations found in the file (.json | .skel). The transitional animation does not loop, and the animation loop plays after the transitional animation. There is no mix duration between the transitional animation and the animation loop (a sketch of this playback behavior follows this list).
The customizable expression slots are saved into and can be loaded from the SVP file.
Remove animation selection from the Single Value Properties drop-down list under "Model Settings".
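A rough sketch of what pressing an expression button could translate to with the spine-ts AnimationState API (setAnimation, addAnimation, setEmptyAnimation); the row and slot types are assumptions for illustration, not the Prototype's actual code:

```typescript
declare const spine: any; // spine-ts runtime, loaded globally

// One setup row per custom track; the types are assumptions for illustration.
interface ExpressionRow {
  track: number;       // custom track index, 1 to 4
  transition: string;  // transitional animation name (required)
  loop?: string;       // animation loop name (optional)
}

function applyExpressionSlot(state: any /* spine.AnimationState */, rows: ExpressionRow[]) {
  for (const row of rows) {
    state.setAnimation(row.track, row.transition, false);            // plays once
    if (row.loop) state.addAnimation(row.track, row.loop, true, 0);  // queued, no gap
  }
}

function resetExpressions(state: any /* spine.AnimationState */) {
  // Button 0: clear custom tracks 1 to 4 back to empty
  for (let track = 1; track <= 4; track++) state.setEmptyAnimation(track, 0);
}
```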
The expressions feature is really fun and wonderful!! I haven't been able to test it much because preparing the animations takes more time, but here are the results of a quick test I did:
I know that the transition animations back to the default pose also need to be registered in the expression buttons, but I have not yet been able to do that at this time.
My current model is in a very half-assed state, but I'll leave the data here for anyone who wants to test it: chara-for-Spine-Vtuber-Prototype_20230130.zip
When I make more improvements, I will share the data here again.
I should add another setting for a 'return to default pose' animation to the customizable expression feature. That way you save another expression slot.
After this feature, facial expression recognition AI should be as easy as setting which expression slot you want to use for the AI result.
@SilverStraw Wow, the tool can finally capture body movements!! That is definitely an innovative update. I'm really looking forward to the day when it will be available!!