Update: BodyLanguageVR 1.1
This first update of the asset mainly sees me finish the Detect Inverse option. This gives users more freedom to trigger some inputs in a more natural-feeling way. For example, the head-shake "no" motion previously required Right->Left->Right->Left, but a user's natural instinct may be to go Left->Right->Left->Right. Now both work.
Besides some minor bug fixes and the addition of a manual, most of this update is preparation for the future direction of the asset. I wanted to get more done for the next update, but felt it better to release what I have early so I can take my time and do things properly.
I've been reflecting on the direction I want to take this asset, and to do that I had to break down and define its current goals. I want this asset to allow replacing traditional input with VR motions. Traditional input means things like digital button presses and analog stick/button values. As is, my asset does not imitate such input very well, not even a simple digital button press. Pressing a button down is a simple TRUE/FALSE thing, and my asset does that, but a button also tells you how long it was held down, which my asset did not. Motions are very event-based: performing a motion triggers a TRUE value for a single frame, and that's it. That's only good for yes/no types of scenarios.
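To make that concrete, here's a quick Unity-style sketch of what even a basic digital button gives you for free compared to a one-frame event (the "Jump" button is just Unity's default input example):

using UnityEngine;

public class ButtonHoldExample : MonoBehaviour
{
    private float heldTime;

    void Update()
    {
        // Event-style input: true only on the frame the button goes down.
        // This is roughly all a 1.0 motion sequence gave you.
        if (Input.GetButtonDown("Jump"))
            Debug.Log("Jump pressed this frame");

        // State-style input: true on every frame the button is held,
        // which makes measuring hold duration trivial.
        if (Input.GetButton("Jump"))
            heldTime += Time.deltaTime;
        else
            heldTime = 0f;
    }
}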
This update attempts to let you return values for more than a single frame, but it's kind of limited and not fully thought out. I don't have any stock demonstration input ideas for the additions yet.
This brings me to the future direction of the asset.
Right now, the asset is designed around what I refer to as a "DoThen" setup. That means the user Does a motion, Then a value is returned. This is as opposed to what I'd additionally like to support, which I currently refer to as "WhileDo": return a value While the user Does a motion/sequence. This would better simulate holding a button/stick for a period of time. While you can currently just make a one-motion sequence, I'd also like to better support singular motions.
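As a rough, purely illustrative sketch of the difference (the WhileDo part doesn't exist yet, and IsMotionActive() is just a placeholder name for whatever it ends up being):

using UnityEngine;

public class DoThenVsWhileDo : MonoBehaviour
{
    // An InputMethod set up with some motion sequence; the field name
    // is only for this example.
    public InputMethod punchInput;

    void Update()
    {
        // "DoThen": the user Does the motion, Then a value comes back.
        // GetMotionTriggered() fires for a single frame, like a button-down event.
        if (punchInput.motion.GetMotionTriggered())
            Debug.Log("Punch motion completed");

        // "WhileDo" (planned): a value would come back While the user Does
        // the motion/sequence, behaving like a held button or stick.
        // Hypothetically something like:
        //
        //     if (punchInput.motion.IsMotionActive())
        //         ChargePunch();
        //
        // where IsMotionActive() and ChargePunch() are placeholder names.
    }
}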
In the future I can see this asset supporting input like Fruit Ninja, oddball movement setups, etc. I may even just be able to finally make the game idea I had that originally sparked creation of this asset! =)
Changelog:
v1.1
- Added DetectInverse option. This allows the user to do the mirror image of a sequence to trigger it.
- Added GetMotionAxis() to MotionDetection. It's still crude at the moment, but it lets you get an analog value back from the input. Right now the value is just a float from 0 to 1.0f, based on how quickly the user completes the sequence.
- Along with that, there's a new option called ResultDurationType. This allows you to get the non-zero result of GetMotionTriggered() and GetMotionAxis() for longer than just a single frame.
- Fixed some issues with changing the size of the Input and MotionSequence lists.
- Removed the GetMotionTriggered() method in the BodyLanguageVR class in favor of calling it directly from MotionDetection.
- Tweaked the expected usage to use a more common instance/object-oriented approach instead of the string-based lookup method.
- Added an optional OnMotionTriggered() method selector to the UI. I don't recommend it for most users; it's just for new users who aren't great with code, to help them grasp the idea of the asset quickly. Proper/recommended usage is getting an instance of the InputMethod, checking myInputMethod.motion.GetMotionTriggered() every frame, and calling your own code when it returns true (see the sketch after this changelog). GetMotionAxis() is not supported with OnMotionTriggered().
- Split the InputExample script, along with the Demo scene, into two to reflect the setup differences for the above.
- Created a new Manual.pdf to replace the ReadMe.
- Updated demo scene objects.
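For anyone wanting a concrete reference for the recommended usage above, here's a rough sketch. The member names are the ones mentioned in the changelog; how you wire up the InputMethod reference (Inspector field, your own lookup, etc.) is up to your scene:

using UnityEngine;

public class ShakeNoHandler : MonoBehaviour
{
    // Assign the InputMethod configured with your motion sequence
    // (e.g. the head-shake "no" sequence) in the Inspector.
    public InputMethod shakeNoInput;

    void Update()
    {
        // Poll every frame (recommended) rather than relying on the
        // optional OnMotionTriggered() selector in the UI.
        if (shakeNoInput.motion.GetMotionTriggered())
            Debug.Log("Head-shake 'no' detected");

        // New in 1.1: a crude analog value from 0 to 1.0f based on how
        // quickly the sequence was completed. With a ResultDurationType
        // set, this (and GetMotionTriggered) can stay non-zero for more
        // than a single frame.
        float strength = shakeNoInput.motion.GetMotionAxis();
        if (strength > 0f)
            Debug.Log("Motion strength: " + strength);
    }
}

Polling like this keeps your gameplay code in one place and, unlike the OnMotionTriggered() selector, also lets you read GetMotionAxis().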