
The Alpha Prototype

This week began with adding some finishing touches to the project design document (~2 hours) and coming up with a plan for implementing the alpha prototype (~5 hours); the rest of the week was spent starting to implement that plan (~3 hours).


We decided that the biggest hurdle for this project is going to be creating pieces of furniture that can snap together to form a single larger piece, so this is what we want to have completed for the alpha prototype.  There are a couple of approaches we could take to achieve this effect, and most of the planning was spent determining which one worked best for us in terms of effort, functionality, and modularity.

The first option was to reuse functionality from an existing component.  The best library we found is the aframe-snapto-component, but it doesn't quite have the functionality we are looking for: the snapping can cause performance hits if performed continuously, and the documentation seems to imply that the snapping cannot be turned on and off or triggered by events.  I have not had a chance to fully test this component yet, so we may find that we can make it work in the end.

Another option is a custom snapping component that does not constantly search for a snap point and instead only snaps in response to certain triggers.  This implementation would rely on invisible spherical or hemispherical collision objects placed on the surfaces of objects at points where a snap can occur.  When two snap points collide, they trigger an event whose handlers check whether the two 'colliding' objects are compatible (probably by comparing two identifying properties on the snap point entities); if they are compatible, the snap is performed, otherwise nothing happens.  This implementation is likely to draw from the existing aframe-snapto-component if there is a lot of transferable code.
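To make the idea concrete, here is a rough sketch of what that event-driven component could look like. The component name (snap-point) and its schema properties (group, ownerId) are placeholders of my own invention, not from any existing library, and the 'collide' event is the one the physics system is assumed to emit when two collision bodies touch; the compatibility check itself is pulled out into a plain function.

```javascript
// Pure compatibility check: two snap points mate when their groups match
// but they belong to different parent furniture pieces. Kept as a plain
// function so the logic can be tested outside the browser.
function snapCompatible(a, b) {
  return a.group === b.group && a.ownerId !== b.ownerId;
}

// A-Frame wiring (only runs in the browser where AFRAME is defined).
// Component and property names here are hypothetical, for illustration.
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('snap-point', {
    schema: {
      group:   { type: 'string' },  // which kind of snap point this one mates with
      ownerId: { type: 'string' }   // id of the furniture piece this point belongs to
    },
    init: function () {
      // The physics system fires 'collide' when two collision bodies touch.
      this.el.addEventListener('collide', (evt) => {
        const other = evt.detail.body.el &&
                      evt.detail.body.el.components['snap-point'];
        if (other && snapCompatible(this.data, other.data)) {
          // Hand off to whatever performs the actual alignment.
          this.el.emit('snap', { target: other.el });
        }
      });
    }
  });
}
```

The point of the pure snapCompatible helper is that the decision logic stays independent of the physics plumbing, so it can be swapped or unit tested without a scene.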

However, before any of the snapping could really be investigated, I had to create an A-Frame environment that supported testing this kind of functionality, and that turned out to be a lot more work than expected.  This environment needed to include a few things:
  1. A simple physics system
  2. Grabbing and moving objects with a mouse (and eventually a controller)
  3. Free camera movement and rotation
All three of these functionalities have to work concurrently.  Getting two of these features working at once was doable, but adding a third always broke something.  For instance, the default implementation of grabbing objects requires holding down the left mouse button, but the default method of rotating the camera also requires holding down the left mouse button.  After much trial and error, I eventually got everything working by creating the camera entity shown below.

<a-entity camera
          wasd-controls
          look-controls="pointerLockEnabled: true"
          position="0 1.2 1.5"
          capture-mouse
          raycaster="objects: .grabbable"
          cursor="rayOrigin: mouse"
          static-body="shape: sphere; sphereRadius: 0.001"
          super-hands="colliderEvent: raycaster-intersection;
                       colliderEventProperty: els;
                       colliderEndEvent: raycaster-intersection-cleared;
                       colliderEndEventProperty: clearedEls;">
</a-entity>

There are a few important elements to this camera entity, but the biggest problem solver was the look-controls component with the "pointerLockEnabled" property set to true.  This locks the mouse cursor to the window, which lets the camera be rotated with mouse movement rather than by clicking and dragging.  The super-hands component comes from the aframe-super-hands-component package; it is a relatively powerful, but complex, implementation of grabbing objects in A-Frame.
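For context, an object that this camera setup can pick up would look something like the sketch below. The "grabbable" class matches the raycaster's objects selector on the camera, and the grabbable component is the one provided by aframe-super-hands-component; the dynamic-body component is assumed to come from the physics system. Exact attribute values here are illustrative.

```
<!-- A box the camera's raycaster can target and super-hands can grab. -->
<a-box class="grabbable"
       grabbable
       dynamic-body
       position="0 1 -2"
       width="0.4" height="0.4" depth="0.4"
       color="#7BC8A4">
</a-box>
```

The key detail is that the class on the object and the raycaster's "objects: .grabbable" selector on the camera have to agree, or the cursor will never report an intersection for super-hands to act on.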

With the environment complete and the modularity functionality begun, we are well on track for completing our alpha prototype.
