TUNING VORTX (AND WHY IT IS COMPLEX)

When the dialog first started about Vortx and the possibility of using content analysis to trigger haptics, I knew we had our work cut out for us. Most of what needed to be done required a complete redesign of my predecessor’s mechanisms, and I was OK with that. I’m no stranger to embedded software, and DSP is something I had been playing with as both a hobbyist and a professional for a long time. Still, we faced a couple of problems right at the outset.

The first of these problems? Latency and safety. We needed to abstract the existing firmware mechanisms, which meant a complete rewrite of the system internals. My predecessor had large data transfers happening in an effort to control much of the internal scope of the Vortx from external software. This was the first piece that had to go - allowing external software complete control over the functionality of Vortx and its mechanics was a safety risk, and one that came with a latency price tag to boot. After implementing a new protocol and subsystem design, we were off to the races. We now use very small packets to control the abstract functionality of the unit, and we manage all our safety mechanisms internally - resulting in lower latency and a higher transfer rate. Vortx was fast - and safe.
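To make that concrete (the real Vortx wire format isn’t public, so every opcode, field, and limit below is a placeholder), a “very small packet” protocol in this spirit might look something like this:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical wire format in the spirit described above: tiny fixed-size
// packets carry abstract commands, and the firmware enforces safety on its
// own side. All names and sizes here are illustrative, not Vortx's actual protocol.
#pragma pack(push, 1)
struct VortxPacket {
    uint8_t  opcode;   // abstract command, e.g. "set intensity"
    uint8_t  channel;  // which actuator group to address
    uint16_t value;    // command argument
    uint8_t  checksum; // integrity check over the preceding bytes
};
#pragma pack(pop)

static_assert(sizeof(VortxPacket) == 5, "packets stay tiny for low latency");

// Firmware-side guard: external software never gets raw control of the
// mechanics - every request is clamped to the device's own safety envelope.
uint16_t applySafetyLimit(uint16_t requested, uint16_t maxSafe) {
    return std::min(requested, maxSafe);
}
```

Keeping the safety clamp on the device side means a misbehaving host application can, at worst, send bad requests - it can never push the hardware outside its envelope.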

The next problem was data acquisition. Capturing audio has long been easy to do at intervals under 10 milliseconds, but video is more of a challenge. After a bit of trial and error (mixed with R&D), we decided to use DX11 and its mechanisms for grabbing frame data at the adapter output. This gives us a capture rate that matches the on-screen framerate, which is good enough for our purposes. Getting these systems in place was not an incredibly difficult task - what took more work and intelligence was configuring everything above in a way that requires little-to-no user interaction. No configuration. No menus and pages of options to wade through. (Lord knows modern gamers have enough of that to deal with.) Vortx is, for the most part, plug-and-play. We auto-adapt gain, grab foreground buffers, and drive Vortx with what you see and hear.
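The post doesn’t name the exact interface, so treat this as an assumption: DXGI desktop duplication (part of the DX11-era API surface) is one way to grab frame data at the adapter output. The acquire call blocks until the compositor presents a new frame, so capture naturally tracks the on-screen framerate:

```cpp
#include <d3d11.h>
#include <dxgi1_2.h>
#pragma comment(lib, "d3d11.lib")

int main() {
    // Create a D3D11 device on the default adapter.
    // (Error handling trimmed for brevity throughout.)
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, &context)))
        return 1;

    // Walk device -> adapter -> output to reach the duplication interface.
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);
    IDXGIOutput* output = nullptr;
    adapter->EnumOutputs(0, &output); // primary monitor
    IDXGIOutput1* output1 = nullptr;
    output->QueryInterface(__uuidof(IDXGIOutput1), (void**)&output1);

    IDXGIOutputDuplication* dup = nullptr;
    if (FAILED(output1->DuplicateOutput(device, &dup)))
        return 1;

    for (;;) {
        // Blocks (up to 16 ms here) until a new frame is presented, so the
        // loop runs at the on-screen framerate.
        DXGI_OUTDUPL_FRAME_INFO info;
        IDXGIResource* frame = nullptr;
        HRESULT hr = dup->AcquireNextFrame(16, &info, &frame);
        if (hr == DXGI_ERROR_WAIT_TIMEOUT) continue; // nothing new yet
        if (FAILED(hr)) break;

        // ...copy or analyze the frame texture here...

        frame->Release();
        dup->ReleaseFrame();
    }
    return 0;
}
```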

OK - so now we’ve got all the data we need, and awesome low-latency hardware. (Thanks, Tim.) How do we get from data to haptic output? This is the art, and it is where we’ve spent a large part of our development resources. We started with the basics: RMS provides Vortx with a perceptual safety net, and the short-time Fourier transform gives us the decoupled frequency-domain information that we need. We’ve built all sorts of toys to process and filter video and audio over the past year, and all of these tools create an expanding set of ‘ingredients’ for Vortx. We are always developing more of these ingredients, but the magic of Vortx and its output lies in how we combine them. Oh, and it’s fast. Did I mention it’s fast? We use 6-8% CPU on average, and that disappears into your process list at a whopping 0.1% at idle, when you aren’t consuming media.
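For the curious, here is roughly what those two basics look like in code - a plain RMS level and one STFT column. The naive O(N²) DFT loop keeps the sketch dependency-free; a production build would use an FFT library:

```cpp
#include <cmath>
#include <complex>
#include <vector>

const double kPi = 3.14159265358979323846;

// RMS of one audio frame: a cheap overall-level estimate, usable as a
// perceptual safety net that caps how hard the haptics are driven.
double rms(const std::vector<float>& frame) {
    double acc = 0.0;
    for (float s : frame) acc += double(s) * s;
    return std::sqrt(acc / frame.size());
}

// One STFT column: Hann-window the frame, then take DFT magnitudes.
std::vector<double> stftColumn(const std::vector<float>& frame) {
    const size_t n = frame.size();
    std::vector<double> mags(n / 2 + 1);
    for (size_t k = 0; k <= n / 2; ++k) {
        std::complex<double> bin(0.0, 0.0);
        for (size_t i = 0; i < n; ++i) {
            double hann = 0.5 * (1.0 - std::cos(2.0 * kPi * i / (n - 1)));
            double phase = -2.0 * kPi * double(k) * double(i) / double(n);
            bin += hann * double(frame[i])
                 * std::complex<double>(std::cos(phase), std::sin(phase));
        }
        mags[k] = std::abs(bin); // energy in frequency bin k
    }
    return mags;
}
```

Slide that window along the stream at short hops (on the order of the sub-10-millisecond audio intervals mentioned above) and you get the decoupled time/frequency picture the rest of the pipeline feeds on.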

This brings us to our final (and, I believe, nearly perpetual) task with Vortx - tuning. This is the tough (and fun) stuff. There are many ingredients to consider, as we aren’t simply using level-based feedback to control the Vortx, but rather more complex analytics of both frequency content and audiovisual landscape characteristics. We break down each of these characteristics, applying heuristic weights in some cases to make decisions for Vortx at a given instant in time (a toy sketch of that weighting step follows below). Combining all of this data into meaningful haptic output is much like balancing a beach ball on the end of a broomstick. It takes quite a bit of calculation, but all the math in the world will not save us - the process is more like an art. Fortunately, it is the art that we love, and the art that creates the experience for our users. Personally, I look forward to continued improvements to Vortx and its mechanisms, and I feel fortunate to be a part of such an incredible team.
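And that promised toy sketch of the weighting step: the feature names and weights below are pure placeholders (the real set and its tuning are exactly the art described above), but the shape of the computation is the point.

```cpp
#include <algorithm>

// Illustrative 'ingredients' - each is one analysis output, normalized 0..1.
// These names and the weights below are made up for the sketch.
struct Ingredients {
    double bassEnergy;      // low-band STFT energy
    double transientScore;  // frame-to-frame spectral change
    double sceneMotion;     // per-frame video luminance delta
};

double hapticDrive(const Ingredients& in, double rmsLevel) {
    // Heuristic weights, tuned by feel rather than derived - placeholders here.
    double raw = 0.55 * in.bassEnergy
               + 0.30 * in.transientScore
               + 0.15 * in.sceneMotion;

    // RMS acts as the perceptual safety net: quiet content can never drive
    // the unit hard, and the final value is clamped to the safe range.
    double ceiling = std::min(1.0, 4.0 * rmsLevel);
    return std::clamp(raw, 0.0, ceiling);
}
```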