
The Alpha Prototype

This week began with adding some finishing touches to the project design document (~2 hours) and coming up with a plan for implementing the alpha prototype (~5 hours); the rest of the week was spent starting to implement that plan (~3 hours).


We decided that the biggest hurdle for this project is going to be creating pieces of furniture that can snap together into a single piece of furniture, so this is what we want to have completed for the alpha prototype.  There are a couple of approaches we could take to achieve this effect, and most of the planning time was spent determining which would work best for us in terms of effort, functionality, and modularity.

The first option was to reuse functionality from an existing component. The best library we found is the aframe-snapto-component, but it doesn't quite have the functionality we are looking for: the snapping can cause performance hits if performed continuously, and the documentation seems to imply that the snapping cannot be turned on and off or triggered by events.  I have not had a chance to fully test this component yet, so we may find that we can make it work in the end.

Another option is a custom implementation of a snapping component that is not constantly searching for a snap point and instead only snaps in response to certain triggers.  This implementation would rely on invisible spherical or hemispherical collision objects placed on the surfaces of objects at points where a snap can occur.  When two snap points collide, they will trigger an event whose handlers check whether the two 'colliding' objects are compatible (probably by comparing two identifying properties on the snap point entities); if they are compatible, this triggers the snap, otherwise nothing happens.  This implementation is likely to draw from the existing aframe-snapto-component if there is a lot of transferable code.
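
Since we have not built this yet, here is only a rough sketch of the idea, assuming the 'collide' event provided by aframe-physics-system; the snap-point component name and its snapType/accepts properties are placeholder identifiers, not a final design.

// Rough sketch of the event-driven snap component described above.
// Assumes aframe-physics-system, which emits 'collide' on entities with a physics body.
AFRAME.registerComponent('snap-point', {
  schema: {
    snapType: { type: 'string' },  // what kind of snap point this is
    accepts:  { type: 'string' }   // which snapType this point is allowed to mate with
  },
  init: function () {
    this.el.addEventListener('collide', (evt) => {
      const other = evt.detail.body.el;  // entity attached to the other physics body
      const otherSnap = other && other.components['snap-point'];
      if (!otherSnap) { return; }        // hit something that is not a snap point
      if (otherSnap.data.snapType !== this.data.accepts) { return; }  // incompatible pair
      // Compatible: let the furniture logic handle the actual snap (aligning/reparenting).
      this.el.emit('snap-matched', { target: other });
    });
  }
});

<!-- Hypothetical usage: an invisible sphere marking a snap location on a furniture piece.
     How its physics body gets attached (child body vs. compound shape) is still to be decided. -->
<a-sphere radius="0.02" visible="false"
          snap-point="snapType: table-leg; accepts: table-top">
</a-sphere>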

However, before any of the snapping could really be investigated, I had to create an A-Frame environment that supported testing this kind of functionality, and that turned out to be a lot more work than expected.  This environment needed to include a few things:
  1. A simple physics system
  2. Grabbing and moving objects with a mouse (and eventually a controller)
  3. Free camera movement and rotation
All three of these features have to work concurrently; getting two of them working at once was doable, but adding the third always broke something.  For instance, the default implementation of grabbing objects requires holding down the left mouse button, but the default method of rotating the camera also requires holding down the left mouse button.  After much trial and error, I eventually got everything working by creating the camera entity shown below.

<a-entity camera
          wasd-controls
          look-controls="pointerLockEnabled: true"
          position="0 1.2 1.5"
          capture-mouse
          raycaster="objects: .grabbable"
          cursor="rayOrigin: mouse"
          static-body="shape: sphere; sphereRadius: 0.001"
          super-hands="colliderEvent: raycaster-intersection;
                       colliderEventProperty: els;
                       colliderEndEvent: raycaster-intersection-cleared;
                       colliderEndEventProperty: clearedEls;">
</a-entity>

There are a few important elements to this camera entity, but the biggest problem solver was the look-controls component with the "pointerLockEnabled" property set to true. This lets the window lock the mouse cursor, which allows the camera to be moved with mouse movement rather than by clicking and dragging. The super-hands component is from the aframe-super-hands-component package, and it is a relatively powerful but complex implementation of grabbing objects in A-Frame.
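
For completeness, here is an example of the kind of entity this camera setup can pick up; the raycaster above only intersects entities with the .grabbable class, and the grabbable and draggable reaction components come from the same aframe-super-hands-component package. The exact component mix here is illustrative rather than our final furniture setup.

<!-- Illustrative grabbable object: the class matches the camera's raycaster selector,
     and dynamic-body (from aframe-physics-system) lets it respond to physics. -->
<a-box class="grabbable"
       grabbable
       draggable
       dynamic-body
       position="0 0.5 -1.5"
       color="#7B3F00">
</a-box>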

With the environment complete and the modularity functionality begun, we are well on track to complete our alpha prototype.
