Darien Smith

VR Project - 'The Fog that Watches'

Updated: Apr 15, 2023


Background


Up until this point, I had only a few experiences with VR. I had played some basic VR games, but as a more traditional filmmaker, I had never considered using VR for storytelling. I could see its potential and had even witnessed the power of VR storytelling first-hand. The reason I had not considered telling a story in VR myself is that it was far outside my comfort zone and my technical ability. I always thought it would take years of practice learning game engines, 3D modelling, and animation to produce a good VR experience. For this reason, taking on the challenge of creating my own VR experience was as exciting as it was daunting.



Planning


During a meeting with my two group mates, we discussed ideas for a 2-3 minute VR experience. We debated whether to use a 360 camera and shoot a film in a more traditional manner, or go down a more challenging route that would allow us more creative freedom. It was at this point we stepped out of our comfort zones and turned to 3D rendering to tell our story.


With the medium decided, we conceptualised a story that would take advantage of one of the things VR does best. We decided to play with scale, something that is hard to represent in 2D space but shines in 3D VR.


After Josh and Anthony finalised the story and created storyboards, I was able to begin my end of the work. I decided to handle the entire 3D side of things, including modelling, animating, and rendering. My 3D software of choice was Blender, which I picked because it has a huge online community that I knew could help me when I needed it.


The first task was to create the assets included in the story.



Creating the VR Project


The Eye


I downloaded a basic eye model from here - https://free3d.com/3d-model/eye-995738.html


Next I made animated eyelids following this tutorial - https://www.youtube.com/watch?v=SrZ-yYquoh8&ab_channel=BlenderStudy


I changed the iris of the eye to look more akin to that of a monster. I did this by playing around with the material settings in Blender's Shader Editor.
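The same kind of tweak can also be made through Blender's Python API rather than the Shader Editor UI. Below is a minimal sketch of the idea; the material and node names are assumptions, since they depend on the downloaded model, and the colour values are just illustrative:

    import bpy

    # Tint the iris via the material's Principled BSDF node.
    # "Iris" is a hypothetical material name from the downloaded model.
    mat = bpy.data.materials["Iris"]
    bsdf = mat.node_tree.nodes["Principled BSDF"]
    bsdf.inputs["Base Color"].default_value = (0.8, 0.1, 0.05, 1.0)  # monstrous red
    bsdf.inputs["Roughness"].default_value = 0.2  # low roughness for a wet look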



After some further tweaking of the look and scale of the eye, I moved on to creating the animated book.


The Book



Creating the spine


Creating the covers



Creating the pages




Adding effects to the pages for realistic motion


I extracted the textures from this existing book model and applied them to my book using Blender's UV mapping tools - https://free3d.com/3d-model/book-pack-93349.html


I learned how to apply textures from this video - https://www.youtube.com/watch?v=r5YNJghc81U&ab_channel=TutsByKai
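For reference, the texture-assignment step can also be scripted. This is a rough sketch of building an image-texture material and attaching it to the book; the object name and texture path are placeholders, and it assumes the mesh already has the UV layout from the downloaded model:

    import bpy

    # Create a node-based material with an image texture
    mat = bpy.data.materials.new(name="BookCover")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load("//textures/book_cover.png")  # placeholder path

    # Wire the texture's colour into the default Principled BSDF
    bsdf = nodes["Principled BSDF"]
    links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])

    # Assign the material to the book object (name is a placeholder)
    book = bpy.data.objects["Book"]
    book.data.materials.append(mat)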


This is the finished book:


I made a basic book stand by changing the geometry of a cube using the tools available in Blender's Edit Mode.
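For anyone wanting to script the same thing, the box-modelling step looks roughly like this with Blender's bmesh module (the proportions here are made up for illustration):

    import bpy, bmesh

    # Start from a cube and reshape it in Edit Mode
    bpy.ops.mesh.primitive_cube_add(size=1.0)
    stand = bpy.context.active_object
    bpy.ops.object.mode_set(mode='EDIT')

    bm = bmesh.from_edit_mesh(stand.data)
    # Taper the top face inward so the cube reads as a simple stand
    top = max(bm.faces, key=lambda f: f.calc_center_median().z)
    bmesh.ops.scale(bm, vec=(0.6, 0.6, 1.0), verts=list(top.verts))
    bmesh.update_edit_mesh(stand.data)

    bpy.ops.object.mode_set(mode='OBJECT')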


I brought all the assets together and began building the world.



World


Following a tutorial, I used a noise texture to add stars to the black skybox for a space effect - https://www.youtube.com/watch?v=vLC86GhYi5w&t=1s&ab_channel=ContentWonder
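The node setup essentially crushes a high-frequency noise texture into tiny bright dots. A rough scripted equivalent is below; the threshold values are just a starting point, not the tutorial's exact numbers:

    import bpy

    world = bpy.context.scene.world
    world.use_nodes = True
    nodes = world.node_tree.nodes
    links = world.node_tree.links

    # High-frequency noise gives lots of small specks
    noise = nodes.new("ShaderNodeTexNoise")
    noise.inputs["Scale"].default_value = 800.0

    # A tight colour ramp crushes the noise to mostly-black with rare bright dots
    ramp = nodes.new("ShaderNodeValToRGB")
    ramp.color_ramp.elements[0].position = 0.70
    ramp.color_ramp.elements[1].position = 0.72

    background = nodes["Background"]
    links.new(noise.outputs["Fac"], ramp.inputs["Fac"])
    links.new(ramp.outputs["Color"], background.inputs["Color"])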


With the main assets in place, I added a 3D animated model of the solar system that Anthony sourced for me. I removed everything but the planets from that model and placed it so the planets appeared to orbit the eye.



Audio


Josh wrote a script and recorded the voice lines for the eyeball character in the film. Using my DAW of choice, Reaper, I edited Josh's voice performance: I added EQ to bring out the low end, pitch-shifted the voice to give it size, and added reverb to make it sound out of this world. I also added a distortion effect for some interest and some non-linearity in the waveform. Finally, I cut out some of the more significant words and placed them behind the listener's head using a binaural panner.
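All of this was done with stock plugins inside Reaper, but a similar chain can be sketched in Python with the pedalboard library. The parameter values below are illustrative guesses, not my actual Reaper settings, the filenames are placeholders, and the binaural placement isn't reproduced here:

    from pedalboard import Pedalboard, LowShelfFilter, PitchShift, Distortion, Reverb
    from pedalboard.io import AudioFile

    # EQ for low end, a pitch shift for size, distortion for interest,
    # and reverb to make it sound out of this world
    board = Pedalboard([
        LowShelfFilter(cutoff_frequency_hz=200, gain_db=4.0),
        PitchShift(semitones=-4),        # assumed downward shift
        Distortion(drive_db=8.0),
        Reverb(room_size=0.8, wet_level=0.3),
    ])

    with AudioFile("voice_raw.wav") as f:
        audio = f.read(f.frames)
        sr = f.samplerate

    processed = board(audio, sr)

    with AudioFile("voice_processed.wav", "w", sr, processed.shape[0]) as f:
        f.write(processed)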


This is a snippet of the voice-over before and after processing:




With the voice lines added, I now had a reference for the pacing and animation timings.



Rendering


Before performing the final render, I created a couple of low-res test renders to check the timing, as well as the overall composition of the assets. The test render revealed I was a little out of sync with the audio due to a mistake when converting frames to seconds. After tweaking the timing and increasing the size of the eye, I was ready for the final render.
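The conversion itself is trivial but easy to get wrong if you assume the wrong frame rate. Something like this, using Blender's default 24 fps as an example:

    fps = 24  # Blender's default; assuming the wrong rate (e.g. 25 or 30) drifts the sync

    def seconds_to_frame(t_seconds):
        return round(t_seconds * fps)

    # A voice cue at 12.5 s should land on frame 300 at 24 fps,
    # but a 30 fps assumption would put it on frame 375
    print(seconds_to_frame(12.5))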


Using a tutorial, I set up my Blender project for a stereoscopic 360 render - https://www.youtube.com/watch?v=OMGxpJKmLn0&t=7s&ab_channel=AmortMedia


Additional information on 360 stereoscopic export - https://renderpool.net/blog/blender-360-video/
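For reference, the key settings from those guides look roughly like this in Blender's Python API (Blender 3.x; in some versions the panorama type lives on the camera data directly rather than under .cycles):

    import bpy

    scene = bpy.context.scene
    cam = scene.camera.data

    # Cycles with an equirectangular panoramic camera gives the full 360 view
    scene.render.engine = 'CYCLES'
    cam.type = 'PANO'
    cam.cycles.panorama_type = 'EQUIRECTANGULAR'

    # Render a left/right stereo pair, stacked top-bottom in a single frame
    scene.render.use_multiview = True
    scene.render.views_format = 'STEREO_3D'
    scene.render.image_settings.views_format = 'STEREO_3D'
    scene.render.image_settings.stereo_3d_format.display_mode = 'TOPBOTTOM'
    cam.stereo.convergence_mode = 'OFFAXIS'
    cam.stereo.interocular_distance = 0.065  # roughly human eye separation, in metres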


I ran into a problem with how long the render would take. I wanted the video to be as high quality as I could reasonably get. However, between the 8K resolution required for good-looking 360 video, the fact that stereoscopic rendering means each frame is rendered twice (left eye, right eye), raytracing, and a relatively long scene, the expected render time was approximately 47 days. This was not acceptable given the timeframe, so I had to make some changes to reduce the render time.


I lowered the resolution from 8K to 2K, reduced the light bounces from 12 to 3, and halved the samples from 300 to 150. This brought the render time down to 47 hours, which was much more acceptable. To compensate for the lower sample count, I enabled noise reduction in the render settings. This improved the visuals significantly and only added an extra 2 seconds per frame.
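Expressed as Blender Python, the changes amounted to something like this (the resolution figures are an approximate equirectangular equivalent of "2K", not my exact values):

    import bpy

    scene = bpy.context.scene

    # Lower resolution: "2K" instead of 8K (equirectangular frames are 2:1)
    scene.render.resolution_x = 2048
    scene.render.resolution_y = 1024

    # Fewer light bounces and half the samples
    scene.cycles.max_bounces = 3    # down from 12
    scene.cycles.samples = 150      # down from 300

    # Denoising compensates for the reduced sample count
    scene.cycles.use_denoising = True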


The resource I used to help me lower the render times - https://irendering.net/how-to-improve-speed-of-blenders-cycles-x-rendering-engine/


48 hours later, the render was finished; the project, however, was not. There was still more to be done before handing the video off to my groupmates for post-processing.


Firstly, I had to add 3D 360-specific metadata to the video so that other programs could recognise what the video was and handle it properly. This step was crucial because without it, the file was just a video of two stretched copies of the scene sitting on top of one another.



Where to download Spatial Media Metadata Injector v2.1- https://github.com/google/spatial-media/releases/
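The injector also ships as a command-line tool in the same repository. If I remember the flags correctly, a top-bottom stereo 360 file is tagged something like this (filenames are placeholders):

    # -i injects metadata, --stereo sets the stereo layout
    python spatialmedia -i --stereo=top-bottom render.mp4 render_injected.mp4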


With the correct metadata injected into the video file, it was ready to send over to Anthony for sound design. At this point, my contribution to the project was concluded.


Link to Blender File and Video:


The video is VR-compatible. Watch in VR for the full experience.



Review


Overall, my group and I are very pleased with how the VR experience turned out. It was almost identical to what we all had in our heads when conceptualising the idea.


There were certainly some ups and downs throughout the creation process. For example, we originally wanted to create the 3D assets within Blender, export them into a game engine such as Unity, and tell our VR story as a real-time game with a level of interactivity. However, exporting the assets from Blender to Unity with the correct animations and textures proved to be very difficult. After hours of discussion with my group mates and searching online for a solution, I was still unable to port the assets over correctly. As frustrating as this was, it turned out to be a blessing in disguise. All the trouble that arose from this process gave me the idea of doing the whole experience within Blender. This meant we would lose any interactivity; however, outputting the experience as a pre-rendered VR video rather than a game allowed us to produce a higher-quality result without having to learn a game engine on top of already learning Blender. Another advantage of rendering video from Blender was that Anthony could edit and add sound design with video editing software he already had much experience with.


Even if the assets had ported over to Unity correctly, the benefits of doing everything within Blender and exporting the project as a VR video outweighed the benefits of creating a game. It was only through failure that I came up with the idea of a 360 stereoscopic video. Had Unity been friendlier to me, I would likely have spent a significant portion of my time learning how to use it and undoubtedly run into even bigger problems that were further out of my depth than I already was.



Final Thoughts


Throughout the creation of this VR experience, I had to navigate 3D software as a means of telling a story: everything from conceptualisation to modelling, animating, and rendering. It was a lot to learn in a short amount of time; however, after the initial steep learning curve, the 3D software became fairly intuitive to use. Furthermore, thanks to a plethora of YouTube videos and articles, I was able to find an enormous amount of information on how to do exactly what I wanted.


As a more traditional filmmaker, looking at the final output, I never thought I would be able to make such a thing. 3D rendering and animation seemed like a herculean task that would take me years to learn; however, in this age of information, learning is as simple as navigating a search bar. Thanks to the huge online Blender community, I was able to create something in just a couple of weeks that I thought would take years of learning to achieve. This project is a real testament to the power of the vast, free information that is accessible to anyone with an internet connection and the willingness to learn.


I now feel I have a new, powerful tool in my media creation toolbox which I am sure to use again in future projects.
