Indie VR & Game Developer

Hello! I'm Jason.

I'm a computer programmer with almost two decades of experience, now working to turn my hobby into a career. I've spent the last several years learning game development with the Unity game engine, and I specialize in C# scripting.

I publish my own assets on the Unity Asset Store, aspire to make my own games, and will entertain offers for jobs on projects that suit my skills and interests.

Please enjoy my projects, consider my services, and contact me anytime!

Find Out More

Projects


Wednesday, December 11, 2019

Update: BodyLanguageVR 1.1



This first update of the asset mainly sees me finish the DetectInverse option. It basically gives a user more freedom to trigger some inputs in a more natural-feeling way. For example, the head shake "No" motion previously required Right->Left->Right->Left, but a user's natural instinct may be to go Left->Right->Left->Right. Now both work.

Besides some minor bug fixes and adding a manual, most of this update serves as preparation for the future direction of the asset. I wanted to get more done for this update, but felt it better to release what I have early so I can take my time and do things properly.

I've been reflecting on the direction I want to take this asset, and to start I had to break down and define its current goals. I want this asset to allow replacing traditional input with VR motions. Traditional input is stuff like digital button presses and analog stick/button values. As is, my asset does not imitate such input very well, even a simple digital button press. When you press a button down, sure, it's a simple TRUE/FALSE thing, and my asset does that. But a button also gives you something my asset did not: how long the button was held down. When you do a motion, it's very event-based. Performing a motion triggers a TRUE value for a single frame. That's it. That's only good for yes/no types of scenarios.
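
To make that concrete in plain Unity terms, here's a tiny sketch using the standard Input class (nothing to do with my asset) showing the difference between a one-frame press and a held button you can time:

    using UnityEngine;

    // Standard Unity Input API, not this asset: contrasts a one-frame "pressed"
    // event with a held button that accumulates a duration.
    public class ButtonHoldExample : MonoBehaviour
    {
        float heldTime;

        void Update()
        {
            // Event style: true for exactly one frame, like a motion trigger.
            if (Input.GetKeyDown(KeyCode.Space))
                Debug.Log("Pressed this frame (yes/no style)");

            // Held style: true every frame the button stays down, so you can
            // measure how long it has been held. This is what motions can't do yet.
            if (Input.GetKey(KeyCode.Space))
            {
                heldTime += Time.deltaTime;
                Debug.Log($"Held for {heldTime:F2}s");
            }
            else
            {
                heldTime = 0f;
            }
        }
    }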

This update attempts to allow you to return values for more than a single frame, but it's kind of limited and not well thought out. I have no stock demonstration input ideas for the additions.

This brings me to the future direction of the asset.

Right now, the asset is designed around what I refer to as a "DoThen" setup. What that means is: the user Does a motion, Then a value is returned. This is as opposed to what I'd like to additionally support, which I currently refer to as "WhileDo": return a value While the user Does a motion/sequence. This would better simulate holding a button/stick for a period of time. While you can currently just make a one-motion sequence, I'd also like to better support singular motions.
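
Roughly, the difference looks like this in code. The GetMotionTriggered() call is the existing one; the WhileDo-style call is purely hypothetical and does not exist yet:

    using UnityEngine;

    // Conceptual sketch of "DoThen" vs "WhileDo". The WhileDo call is a
    // placeholder name for a feature that doesn't exist yet.
    public class DoThenVsWhileDo : MonoBehaviour
    {
        public InputMethod inputMethod; // assigned elsewhere in your scene

        void Update()
        {
            // DoThen: the user Does the motion, Then a one-frame TRUE comes back.
            if (inputMethod.motion.GetMotionTriggered())
                Debug.Log("Motion completed");

            // WhileDo (planned, hypothetical name): a value would be returned the
            // whole time the user is performing the motion, like reading a held stick.
            // float held = inputMethod.motion.GetMotionHeld();
        }
    }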

In the future I can see this asset supporting input like Fruit Ninja, oddball movement setups, etc. I may even finally be able to make the game idea that originally sparked the creation of this asset! =)

Changelog:

v1.1
  • Added DetectInverse option. This allows the user to do the mirror image of a sequence to trigger it.
  • Added GetMotionAxis() to MotionDetection. It's still crude atm, but this allows you to get an analog value back from the input. Right now the value is just a float from 0 to 1.0f, based on the speed at which the user completes the sequence.
  • Along with that, there's a new option called ResultDurationType. This allows you to get the non-zero result of GetMotionTriggered() and GetMotionAxis() for longer than just a single frame.
  • Fixed some issues with changing the size of the Input and MotionSequence lists.
  • Removed the GetMotionTriggered() method in the BodyLanguageVR class in favor of calling it directly from MotionDetection.
  • Tweaked the expected usage to use a more common instance/object-oriented approach instead of the string-based lookup method.
  • Added an optional OnMotionTriggered() method selector to the UI. I don't recommend its usage for most users; it's just for new users who aren't great with code, to help them grasp the idea of the asset quickly. Proper/recommended usage is getting an instance of the InputMethod, checking myInputMethod.motion.GetMotionTriggered() every frame, and calling your own code when it's true (see the sketch after this changelog). GetMotionAxis() is not supported with OnMotionTriggered().
  • Split the InputExample script, along with the Demo scene, into two to reflect the differences in setup for the above.
  • Created new Manual.pdf instead of a ReadMe.
  • Updated demo scene objects.
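
For reference, the recommended usage mentioned above looks roughly like this. The names come from the changelog; how the InputMethod reference gets assigned (Inspector, your own lookup, etc.) is up to your project, so treat this as a sketch rather than the exact setup:

    using UnityEngine;

    // Minimal usage sketch based on the recommended pattern described above.
    public class MotionInputExample : MonoBehaviour
    {
        public InputMethod inputMethod; // your configured input method

        void Update()
        {
            // True for the frame(s) the motion sequence counts as triggered.
            if (inputMethod.motion.GetMotionTriggered())
                Debug.Log("Motion triggered!");

            // New in 1.1: an analog 0..1 value based on how quickly the sequence
            // was completed. How long it stays non-zero depends on ResultDurationType.
            float axis = inputMethod.motion.GetMotionAxis();
            if (axis > 0f)
                Debug.Log($"Motion axis: {axis:F2}");
        }
    }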

Tuesday, October 15, 2019

New Asset: BodyLanguageVR 1.0



Hello!

Today at last I release my next asset!

This asset is something I conceived of and worked on a few years ago, but I was forced to shelve it before it was finished. Ever since then, it's been hard to pick up work on it again, as it's complex and a very daunting task to return to after stopping!

But recently it felt like such a waste that it was so near completion yet going unsold. So I doubled down and finished it up by removing and shelving the numerous ideas I had for it that contributed to feature creep and would never let me get around to releasing it. That feature creep was partly why it was so daunting.

At the moment, it's a little basic. But I think it should be well worth its price as is, and over time, as I get back to adding those new features, its value will grow!

For now, this may be my last VR-focused asset, as I try to go in a new, less niche direction and make some assets for a larger pool of developers.

Anyway, onto the asset itself!

The purpose of this asset is to let more VR devs include appropriate, intuitive input for VR: input that suits the VR experience and keeps immersion alive. In real life you don't press F to say "Yes". You just speak the word "yes", or in this case, you nod your head up and down.

Yes/No nods are included as examples, and they will work on any VR device!

Additionally, I've included an example for waving "Hello". This of course requires a VR device with motion-tracked controllers, like the Vive or Rift. It may even work with other AR-type hand-tracking setups, provided you select a hand model as the tracked object. I don't know; I don't own such devices.

I've included a current feature list at the end of this post.

I hope many people enjoy this asset!



Included Example Gestures:
  • Head Shake: Yes
  • Head Shake: No
  • Hand Wave: Hello

Supported Custom Gesture Detection Options (see the illustrative sketch below):
  • Positional Distance
  • Rotational Degrees
  • Direction Held
  • More TBA...
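
To give a rough idea of what those options measure, here's an illustrative sketch checking similar criteria against a tracked transform. This is not the asset's actual code or API, just the general idea:

    using UnityEngine;

    // Illustrative only: a rough idea of what each detection option measures.
    public class GestureCriteriaSketch : MonoBehaviour
    {
        public Transform tracked;        // e.g. the HMD or a motion controller
        public float minDistance = 0.2f; // metres the object must travel
        public float minDegrees = 30f;   // degrees it must rotate
        public Vector3 heldDirection = Vector3.down; // direction it must keep facing

        Vector3 startPos;
        Quaternion startRot;

        void OnEnable()
        {
            startPos = tracked.position;
            startRot = tracked.rotation;
        }

        void Update()
        {
            // Positional Distance: has the object moved far enough from its start?
            bool movedEnough = Vector3.Distance(startPos, tracked.position) >= minDistance;

            // Rotational Degrees: has it rotated far enough from its start?
            bool turnedEnough = Quaternion.Angle(startRot, tracked.rotation) >= minDegrees;

            // Direction Held: is it still pointing roughly the required way?
            bool holdingDirection = Vector3.Angle(tracked.forward, heldDirection) < 45f;

            if (movedEnough || turnedEnough || holdingDirection)
                Debug.Log($"moved={movedEnough} turned={turnedEnough} held={holdingDirection}");
        }
    }
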
Available now on the Unity Asset Store!

My Services

Unity C# Scripting

I'm available for C# scripting for small Unity projects.

Read More

Basic 3D Modeling

I'm experienced with Blender. Basic modeling and rigging.

Read More

Unity Development

I've created many demos, scenes, and even a simple game.

Read More

Virtual Reality Consultation

I've been a VR enthusiast for a few years, and I've made VR content.

Read More

News / Dev Blog

Contact

Contact me

You can reach me by email or twitter.

Email: darkakumaz-net.us

Twitter: @DarkAkuma_

LinkedIn: Profile