# On Making VR Useful

Date: 2024-10-12
Tags: Tech, VR

# VR is Pretty Cool

But despite this, when you get a VR headset, it doesn't take long before you go "okay, but what else is there?"

I got a Valve Index in June of 2020, a year after it had released. Back then it was still backordered by six weeks, and I remember seeing the FedEx guy approaching and thinking "oh my god, it's the Index." I immediately set it up on a Razer Blade 15 laptop and played the included Half-Life: Alyx. I was impressed with everything. It was the first time I had actually tried VR. Remember, back then the Quest 2 hadn't even been released yet. I could go on about how impressed I was, but I'll refrain.

So I played Alyx, I played Beat Saber, and had my friends and family try it. It was great fun for a few weeks, and receiving it in the middle of summer was perfect timing. But when school started back up, a thought immediately popped into my mind...

# Can I use this thing for more than games?

The answer, in theory, is absolutely YES. Imagine a 3D world where the most important human senses are reliably simulated: sight, sound, and some degree of touch, all within a virtual environment. On top of that, you aren't bound by any rules of physics. Doors can swing both ways or even phase through the wall. You can fly, you can change height, you can handle devices you don't own or that don't even exist. It's like having access to everything in the entire world; it's to the material world what the internet is to information. Anything, from anywhere, at any time.

But it's now 2024. I've had my Index for over four years. Graduation is coming into sight, and the Index, despite still being so popular, is showing its age.

# Let's check up on the Software

There is none... What? With all the possibilities VR offers, no one has figured out how to use this thing for anything other than games? Well, let me correct myself. There are a few things, but not a lot:

* VRChat dominates the VR demographic's time bank.
It's consistently in the top 50 **on Steam alone** by player count, let alone with Questies. Although fundamentally a game, I count it as a bit more than "just" a game.
* FreebirdXR for Blender, a tool that puts your Blender scene in VR. I've used it a few times and it's actually been useful, though not for actual creation, more just for checking proportions.
* Google's Tilt Brush lets you do some cool stuff, but it's not a comprehensive tool. That didn't stop a Counter-Strike skin made with it from making it into the game, but that's a one-off.

And... That's it? That's all we've done? And the games haven't even gotten past Alyx yet? What's going on? It's not like the hardware market is slowing down. Quest has more users than ever, Apple released their thing, and Facebook is burning more money than ever... Why isn't there anything?

# Figuring out what the fuck to do

I know I'm making it sound like I'm surprised, but I'm not. Until a few very recent releases, the hardware hasn't been able to display text very well. In a world where most productivity relies on written information, it makes sense that VR hasn't taken off. But what if you think past that? Humans don't work exclusively with words, and shoehorning in a task that works better on another tool is just a waste of time.

The answer appears to be that we simply haven't figured VR out yet. I know, I know, breaking news here. Next up, water is wet. But seriously, there seems to be a disconnect. Just like I was expecting to use MS Word in VR, people have been trying for some time to just "work" in VR. It doesn't work. It's not comfortable, it looks like ass, it works like ass, and if you can do something in VR, you can do it better on a screen, which you definitely have if you can afford VR.

I think there needs to be a refocus. Let me reiterate: the goal is to use VR for more than just entertainment like games, socialising, or viewing content. What does that include, though?
Well, we already established that anything you can do easily on a monitor should stay on a monitor, so almost anything on PC is out of the question. What else is there, then?

# The Next Steps

All this got me thinking. With everything above as context, I've started looking at my current work to find what could be done. What VR can probably do, then, is a mix between the physical and the digital. If the internet lets you do anything with information, VR lets you do anything with "objects", something more "physical". The parallel is easy to see in engineering and architecture, but that's probably because I'm almost exclusively exposed to those fields. Imagine an engineering project for maintenance where your experts can walk around a digitised structure. This sounds amazing, but the bigger challenge is making those digital twins in the first place, which is outside the scope of this blog.

But it doesn't stop there. Basically, we should actively look for times when our work would be better WITHOUT a computer monitor (for those who mainly work with one, which incidentally is also the kind of person who could even remotely consider VR as a work tool). For instance, reviewing large PDF files? (I'm actually working on a project for that, more info in the future maybe...) What about CAD? Actually, that's one of the biggest uses I can see. Modelling in CAD on a screen can be hard because you lose all sense of scale. Somehow, there's no VR CAD software right now though. What next? Visual design in architecture, maybe? Did you know there's an entire field of design dedicated to lighting? Imagine being able to simulate an entire space when doing lighting design. That sort of thing is what I think VR can be useful for.

There's also the Apple approach, which focuses on bringing what we currently do on monitors into the VR space. But this is a challenge, and in my opinion you're fighting the wrong battle by going this way.
It's better to find out what VR does well instead of making it do something we can already do well elsewhere. When computers first got GUIs, document production and storage was one of the first uses, because it was simply better than mountains of paper files. When graphics got better, computers could also handle graphic design and engineering plans. Before that, those things were done by hand, simply because the computer wasn't good at them.

Right now, VR isn't good at writing or reading text. It's not good at precise movements, and it's not good at viewing videos. But it's great at simulating an entire 3D space. The PC took over the desktop. VR can take over the rest of the room. We just need to figure it out properly and go from there.

# What I'm doing

I mentioned it earlier, but I want to experiment with PDF work in VR. I'll post more info when I get there. Other potential use cases could be new workflows in planning or other fields intricately tied to the physical world. Design is also something I mentioned, but the tech isn't quite there yet to get past the tech-demo level of usability.

One of the big shifts that can happen right now, though, is optimising user interactions. Text and laser pointers are horrible. You're in a simulated world not bound by reality; the least you can do is give your user intuitive interactions. You aren't born with a tutorial panel. You don't walk into a bathroom only for the stall doors to glow to tell you they can be interacted with. When making an app, especially in VR, make use of existing human concepts and intuitions to guide the user through an immersive interaction. Play-test with people clueless about VR and change things around until it works. There's a story about how Valve ran playtests for the original Half-Life, and every time a player tried to open a decorative door, it would be tweaked or removed until no one tried anymore.
When they made Portal, playtesters assumed they could only enter one of the two portals and couldn't go back through. So they built the first few levels around only the blue portal, forcing you to discover that portals work both ways. The first thing you do in the game is go through the blue portal to exit your cell.

Do this in VR. Don't pepper your user with prompts, glowing shit, and garbage interactions. Make everything run on the user's hands and gaze; make it intuitive. If you need a tutorial window, you're doing it very, very wrong.

Once we figure this out, we'll be able to make all these cool things. Until then, we're stuck with games and bad PC app ports.

Side note: I hate that I had to write a LaTeX math parser just to post this blog lmao
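The hands-and-gaze idea above can be sketched in code. This is a minimal, engine-agnostic illustration of gaze-based picking, not from any real toolkit: cast a ray from the user's head along their gaze and select the nearest object it intersects, with each object's grab region approximated by a bounding sphere. The `Sphere` type and `gaze_pick` helper are my own made-up names for the sketch.

```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    """Simplified stand-in for a grabbable object's bounding volume."""
    center: tuple[float, float, float]
    radius: float

def gaze_pick(origin, direction, objects):
    """Return the nearest object hit by the gaze ray, or None.

    `origin` is the head position and `direction` a unit vector of
    where the user is looking. Standard ray-sphere intersection.
    """
    best, best_t = None, math.inf
    for obj in objects:
        # Vector from the ray origin to the sphere's center.
        oc = [c - o for c, o in zip(obj.center, origin)]
        # Distance along the gaze ray to the closest approach.
        t = sum(a * b for a, b in zip(oc, direction))
        if t < 0:
            continue  # object is behind the user
        # Squared distance from the ray to the sphere's center.
        closest = [o + t * d for o, d in zip(origin, direction)]
        d2 = sum((a - b) ** 2 for a, b in zip(closest, obj.center))
        if d2 <= obj.radius ** 2 and t < best_t:
            best, best_t = obj, t
    return best
```

The point of a helper like this is that selection falls out of where the user is already looking; pair it with a hand pinch to confirm, and nothing needs to glow or grow a laser beam.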