Spotify in VR

I designed a speech-enabled user interface for music content exploration in VR, which was accepted as a demo at ISMAR (the International Symposium on Mixed and Augmented Reality) in October 2018.

I worked independently on this project under the supervision of an assistant professor in the Columbia University Graphics and User Interfaces Lab.

Click HERE to watch a demo!

Why build this?

VR headsets are bulky, uncomfortable and cumbersome to take off.

We wanted to create a VR application that would let users control their Spotify music collection without having to take off the headset.

First step?

Identify how this application would be used, and what its capabilities would be.

For a user to control music without removing their headset, the application had to support three things:

  1. Search through the entire Spotify music collection (see the search sketch after this list).

  2. Select songs.

  3. Play, pause and stop songs.
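
The search capability maps onto the Spotify Web API's public search endpoint. Here is a minimal sketch of what a track search might look like, assuming a valid OAuth access token; the endpoint and parameters come from the public Web API, but the helper itself is illustrative rather than the project's actual code.

```python
import requests

SEARCH_URL = "https://api.spotify.com/v1/search"  # public Spotify Web API endpoint

def search_tracks(query, access_token, limit=10):
    """Search the Spotify catalog for tracks matching `query`.

    `access_token` is an OAuth bearer token obtained through one of
    Spotify's standard authorization flows (not shown here).
    """
    response = requests.get(
        SEARCH_URL,
        params={"q": query, "type": "track", "limit": limit},
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["tracks"]["items"]
    # Return (track name, artist, URI) triples; the URI is what a
    # playback request would need later.
    return [(t["name"], t["artists"][0]["name"], t["uri"]) for t in items]
```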

What next?

I designed and began to build the user interface.

We modeled a music collection as a set of playlists.

Each playlist was represented as a square, since users were accustomed to square album covers on Spotify.

Users could tap a playlist with their virtual hands to select it, since tapping to select was already a familiar gesture.

Once a playlist was tapped, every song in it appeared in a helix in front of the user, so that songs were easy to distinguish from playlists.
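
The helix itself is just a parametric curve: each song sits a fixed angle further around a vertical axis and a fixed height above the previous one. Here is a minimal sketch of that placement math; the radius and spacing values are chosen purely for illustration, not taken from the actual application.

```python
import math

def helix_positions(num_songs, radius=1.5,
                    angle_step=math.radians(30), rise_per_song=0.12):
    """Return (x, y, z) positions laying songs along a vertical helix."""
    positions = []
    for i in range(num_songs):
        angle = i * angle_step
        x = radius * math.cos(angle)  # circle around the vertical (y) axis
        z = radius * math.sin(angle)
        y = i * rise_per_song         # each song sits slightly higher
        positions.append((x, y, z))
    return positions

# Example: lay out a 20-song playlist in front of the user.
for i, (x, y, z) in enumerate(helix_positions(20)):
    print(f"song {i}: x={x:.2f}, y={y:.2f}, z={z:.2f}")
```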

Users could grab a particular song from the helix, then use a control pad on their arm to play, pause, or stop it.
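
On Spotify's side, play and pause map directly onto the Web API's player endpoints. The API has no dedicated stop call, so the sketch below approximates stop as pause plus a seek back to the start; the endpoints are real, but the wiring around them is illustrative and the OAuth flow is again omitted.

```python
import requests

PLAYER_URL = "https://api.spotify.com/v1/me/player"

def _player_put(path, access_token, **params):
    """Send a PUT to one of the Web API player endpoints."""
    response = requests.put(
        f"{PLAYER_URL}/{path}",
        params=params,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    response.raise_for_status()

def play(access_token):
    _player_put("play", access_token)   # resume playback on the active device

def pause(access_token):
    _player_put("pause", access_token)

def stop(access_token):
    # No stop endpoint exists, so pause and rewind to the beginning.
    pause(access_token)
    _player_put("seek", access_token, position_ms=0)
```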

I chose three distinct modalities (tapping, grabbing, and the control pad) so that users would not confuse selecting a playlist with selecting a song, or either with playing a song.

Then what?

I asked 10 Columbia University students to come into the lab and try the VR experience.

I gave them an open-ended task: find a song you like and play it.

Results of the tests were documented with screen grabs and post-test interviews.

I learned that most students were able to find a particular song, but struggled to find the control pad they could use to play the song.

I also learned that most students fatigued quickly, since they had to use their arms for every interaction.

So, I decided to add voice control to the application.

Users could say ‘play’, ‘pause’, or ‘stop’ out loud to play, pause, or stop the music.
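
Keyword-based voice control like this reduces to matching a recognized transcript against a tiny command vocabulary. Here is a sketch of that dispatch, assuming a speech recognizer that delivers text transcripts and a player object with play/pause/stop methods; both are hypothetical stand-ins, since the write-up does not name the actual recognizer or playback layer.

```python
# Hypothetical playback interface standing in for the real application's
# playback layer.
class Player:
    def play(self):
        print("playing")

    def pause(self):
        print("paused")

    def stop(self):
        print("stopped")

# The three spoken commands the interface listens for.
COMMANDS = {"play", "pause", "stop"}

def handle_transcript(transcript, player):
    """Dispatch a recognized utterance to the matching playback action."""
    for word in transcript.lower().split():
        if word in COMMANDS:
            getattr(player, word)()  # e.g. "pause" -> player.pause()
            return word
    return None  # no command heard; ignore the utterance

# Example: feed recognizer output through the dispatcher.
handle_transcript("please pause the music", Player())  # prints "paused"
```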

This dramatically reduced fatigue and removed the need to hunt for the control pad.

Results?

This speech-enabled user interface was accepted as a demo at ISMAR (the International Symposium on Mixed and Augmented Reality) in October 2018.