M7 - Finalized showtime website content (#117)
* Add our own project - m4, edit m4-index.md

* Added project image

* Updated project image

* Added tech-stack image

* Edit m4 index page.

* Added some feature information and pictures

* Unnecessary files deleted, unnecessary Lorem Ipsum text removed

* Added some texts and images.

* Added process.md and more images

* WIP

* Changes on tech-stack site.

* Resized bench drill image, highlighted some words on the tech-stack site.

* content changes

* Fixed some texts, added links

* upstream changes

* Added more content

* Final changes to content

* Deleted .vs folder and reduced gif size

* Added embedded showtime video

* M7 - Setup new master project

* M7 - Added first website draft

* M7 - Added more text content

* M7 - Added Pre-finalized text and exchanged images

* M7 - Added Pre-finalized text and exchanged images

* M7 - Finalized showtime website content

* M7 - Added merge changes

* M7 - Removed unnecessary merge duplicates

Co-authored-by: Marvin Kullick <mkullick@web.de>
Sabrows and Rein3ke authored Jul 8, 2021
1 parent 08f6ac3 commit 09d2203
Showing 13 changed files with 17 additions and 77 deletions.
@@ -14,8 +14,7 @@ supervisor = "Tamara Voigt, Martin Steinicke"
+++

{{<section title="Summary">}}
How focused will you stay in a busy workshop?
------
@@ -34,7 +33,6 @@ Our application in the first iteration was a 'sandbox' experience, where a task
Now, faced with a series of challenges modeled after real workplace assignments, users must cope with time pressure and keep their cool while operating machines.
A task submission setup was engineered, and submitted tasks are evaluated for precision by a supervisor.


User avatar for increased immersion
------

@@ -38,16 +38,12 @@ The most observable improvement is the virtual environment itself.
Our modelling team took one extra step to provide our users with a more appealing and wider playing area.
The **new VR workshop** unites the goal of further increasing the player's immersion with many possibilities of later additions to the narrative framing.

{{<image src="previous_workshop.jpg" alt="previous workshop" caption="Previous workshop">}}
{{<image src="improved_workshop.jpg" alt="improved workshop" caption="Improved workshop">}}

{{</section>}}

{{<gallery>}}
{{<image src="image_placeholder.jpg" alt="previous workshop" caption="Previous workshop">}}
{{<image src="image_placeholder.jpg" alt="improved workshop" caption="Improved workshop">}}
{{</gallery>}}


{{<section title="The brand-new 33+%" >}}

@@ -65,10 +61,7 @@ Our research gave us a number of ideas from which we chose a selection based on

- **Custom Event System**

In terms of dynamically triggering distractions at runtime, we created a system that follows the **Observer design pattern**.
From the implementation perspective, this means that we have defined certain areas in the virtual workshop in which distractions can take place, the so-called hotspots.

Every hotspot is also a listener to any kind of incoming event.
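
To make the pattern more concrete, the following engine-agnostic Python sketch shows how such a dispatcher and its hotspot listeners could interact. The names (`DistractionDispatcher`, `Hotspot`, `on_event`) and the event kinds are invented for this illustration and are not taken from our actual implementation.

```python
import random
from dataclasses import dataclass


@dataclass(frozen=True)
class DistractionEvent:
    """An incoming event, e.g. a buzzing fly or a flickering lamp."""
    kind: str


class Hotspot:
    """A predefined area in the workshop where distractions can take place."""

    def __init__(self, name: str, supported_kinds: set[str]):
        self.name = name
        self.supported_kinds = supported_kinds

    def on_event(self, event: DistractionEvent) -> None:
        # Every hotspot hears every event, but only stages the kinds it supports.
        if event.kind in self.supported_kinds:
            print(f"{self.name}: staging distraction '{event.kind}'")


class DistractionDispatcher:
    """The 'subject' of the observer pattern: hotspots register as listeners."""

    def __init__(self) -> None:
        self.listeners: list[Hotspot] = []

    def register(self, hotspot: Hotspot) -> None:
        self.listeners.append(hotspot)

    def trigger(self, event: DistractionEvent) -> None:
        # Notify all listeners; each decides for itself whether to react.
        for hotspot in self.listeners:
            hotspot.on_event(event)


if __name__ == "__main__":
    dispatcher = DistractionDispatcher()
    dispatcher.register(Hotspot("window_sill", {"fly"}))
    dispatcher.register(Hotspot("ceiling_lamp", {"flickering_light"}))
    dispatcher.trigger(DistractionEvent(random.choice(["fly", "flickering_light"])))
```

The benefit of this structure is that new distraction types or new hotspots can be added without changing the dispatcher itself.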
@@ -108,22 +101,20 @@ In such cases, creative workarounds must be found. We designed our tasks in such
To simulate these, a dynamic animation targets the lamps relevant to the position of the user and the machine they are working with.


{{<image src="fly_distraction.gif" alt="fly gif" caption="Fly distraction">}}


Full-Body VR Avatar
------

One of the greatest challenges was our full-body VR avatar. To have full control and freedom of choice, we decided to develop everything that VR avatar creation revolves around by ourselves.
That included 3D modelling a male and female body, texturing and UV-mapping them in ZBrush, and rigging and weight-painting them in Blender.
After that, the biggest challenge of implementing the avatars became mapping them to the player's movements. For that, we chose a tool common in the industry: so-called procedural animations.


- **Procedural Animations**

_Procedural Animations describe the mathematical computation of movement that, through a set of rules and states, is adapted to the specifics of different environments._

@@ -142,21 +133,13 @@ Another one being the positioning of the avatar's legs, since VR setups usually
Finding and implementing answers to these types of questions, as well as defining structured, mathematical rules eventually enabled us to fully procedurally animate our VR avatars.
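
As a minimal illustration of what such a rule can look like, here is an engine-agnostic Python sketch of a simple foot-placement rule: the foot stays planted until the body has drifted past a threshold, then it steps back under the player. The function name and the numeric values are purely illustrative and not taken from our implementation.

```python
import math
from dataclasses import dataclass


@dataclass
class Foot:
    """A planted foot position on the workshop floor (x/z plane)."""
    x: float
    z: float


def update_foot(foot: Foot, hip_x: float, hip_z: float,
                step_threshold: float = 0.35, stance_offset: float = 0.15) -> Foot:
    """One procedural-animation rule: keep the foot planted until the hip
    (derived here from the tracked headset, an assumption of this sketch)
    drifts too far away, then step it to a new resting spot near the hip."""
    dx, dz = hip_x - foot.x, hip_z - foot.z
    distance = math.hypot(dx, dz)
    if distance <= step_threshold:
        return foot  # rule not violated: the foot stays where it is
    scale = (distance - stance_offset) / distance
    return Foot(foot.x + dx * scale, foot.z + dz * scale)


# The player walks forward; the foot only moves once the threshold is crossed.
foot = Foot(0.0, 0.0)
for hip_z in (0.1, 0.2, 0.3, 0.4, 0.5):
    foot = update_foot(foot, hip_x=0.0, hip_z=hip_z)
    print(f"hip z={hip_z:.1f} -> foot at ({foot.x:.2f}, {foot.z:.2f})")
```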


{{<image src="avatar.gif" alt="avatar in mirror gif" caption="Full-body VR avatar">}}


New Challenge Mode
------

Our challenge mode consists of a multitude of tasks, which become increasingly complex throughout the playthrough.
Users are given a limited amount of time (currently 10 minutes for a total of 6 tasks) to create and deliver a variety of metal pieces.
To extend this feature we designed and modeled a new machine, the export station.
It has an overview interface, in which the user sees a leaderboard of the top 3 recent users' achieved scores, their current performance in the ongoing session, and the time they have left.

The machine provides a green outline as a template of what the to-be-delivered workpiece should look like.
To enable the user to accomplish their best result, helpful tools and mechanics such as a yardstick or manually marking the worksheet's texture at runtime were included.
The green template also marks the spot where the user is to lay down their submission when it is ready for evaluation.
When the user places their finished workpiece here, they can submit it by pressing the green button on the side of the machine.
@@ -172,10 +155,7 @@ When the user places their finished workpiece here, they can submit it by pressing the green button on the side of the machine.
On the computer monitor, the teacher or supervisor is then shown the metal piece on top of the template.
Aided with a ruler grid, they can then visually evaluate the piece, manually enter the points they think the work deserves, and submit these for the user to see in the simulation.
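
For illustration, the bookkeeping behind this loop could look roughly like the Python sketch below: a timed session collects the supervisor-entered points per task, and a small helper produces the top-3 leaderboard shown on the export station's overview screen. `ChallengeSession`, its fields, and the example values are hypothetical and chosen only for this sketch.

```python
import time
from dataclasses import dataclass, field


@dataclass
class ChallengeSession:
    """One user's timed run: submitted tasks, supervisor scores, remaining time."""
    user: str
    duration_s: float = 600.0                    # illustrative: 10 minutes for 6 tasks
    started_at: float = field(default_factory=time.monotonic)
    scores: list = field(default_factory=list)   # (task_id, points) entries

    def time_left(self) -> float:
        return max(0.0, self.duration_s - (time.monotonic() - self.started_at))

    def submit(self, task_id: int, supervisor_points: int) -> None:
        # The supervisor inspects the piece on the template grid and enters the points by hand.
        self.scores.append((task_id, supervisor_points))

    def total(self) -> int:
        return sum(points for _, points in self.scores)


def leaderboard(sessions: list, top_n: int = 3) -> list:
    """Top scores of recent users, as shown on the export station's overview screen."""
    return sorted(((s.user, s.total()) for s in sessions),
                  key=lambda entry: entry[1], reverse=True)[:top_n]


# Example: two earlier runs plus the ongoing one feed the overview interface.
current = ChallengeSession("trainee_3")
current.submit(task_id=1, supervisor_points=8)
print(leaderboard([ChallengeSession("trainee_1"), ChallengeSession("trainee_2"), current]))
print(f"time left: {current.time_left():.0f}s")
```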

{{</section>}}

{{<gallery>}}
{{<image src="image_placeholder.jpg" alt="fly gif" caption="Fly distraction">}}
{{<image src="image_placeholder.jpg" alt="avatar in mirror" caption="Full-body VR avatar">}}
{{<image src="image_placeholder.jpg" alt="export station" caption="Export station">}}
{{</gallery>}}
{{<image src="challenge_mode.gif" alt="export station gif" caption="Submitting a task at the export station">}}

{{</section>}}
@@ -87,42 +87,4 @@ If the project is to be continued by a dedicated team and regular communication

Next steps are already planned: a follow-up test session at the HTW workshop to further prove or disprove our current findings. Additionally, the Berliner Stadtreinigungsbetriebe (BSR) have expressed their interest in trying out our application.

The cutting edge we have been working on encompasses a number of exciting areas that offer room for expansion.

Our product (in its current form) makes use of gamification, which remains a debated topic.
Considering how young our user base is, there is untapped potential, both for the single-user experience and for integration into the social setting that an apprenticeship provides.
As with all serious games, the selection and implementation of gamification elements must be made carefully and tested. If done wrong, gamification can be demotivating and, in our case, even harmful, if too much focus is shifted from the actual lessons of workshop safety to the completion of game goals.

So far, our product is also designed for single-user use only.
We believe that allowing two or more users to participate in a single learning session could greatly increase engagement and widen the range of scenarios,
interaction (and distraction!) possibilities. It could allow for creative roleplay within the already existing workshop, e.g. where one user plays a supervisor or visitor while another
user must fulfil work assignments. Furthermore, it could allow supervisors or teachers to be 'present' in the workshop while a trainee performs operations.

Experiencing the environment (including surround sound and visual effects such as flickering lights) will allow for a closer understanding and discussion of circumstances that lead
to an injury or lapse in judgement. Of course, a longer time frame for the work would allow for a holistic integration into the apprenticeship education.
This would mean working out a larger educative concept in which for example users would keep a log of their various injuries, spaced repetition could be employed, and
user feedback/in-game information could be expanded.

The standardized questionnaire we used to evaluate Avatar Embodiment was published earlier this year (2021) and highlights the significance this phenomenon has in virtual reality products.
The work is motivated by the need for standardization in measuring users' experience, noting that so far this research has been conducted in individual experiments with varying capacities.
The research we have found and conducted ourselves promises that avatar embodiment plays a crucial role in making VR more stimulating, engaging and immersive.
Perhaps improvements in this area will create more comfortable, credible and enjoyable experiences for users, making the technology more accessible and attractive for mainstream use.
The most experimental aspect of our project is creating a multisensory experience of physical injuries. Whereas for obvious reasons VR safety and hazard training is made for industrial settings,
little research has aimed at using VR to simulate 'shocking' bodily harm for educational purposes.
We have found research demonstrating that users in VR will avoid virtual injuries, which suggests avatar embodiment.
However, considering the psychological implications that such experiences entail, it would be unethical to push boundaries in this aspect without professional psychological guidance.

Besides these, the existing product still offers room for improvement. More accurate body tracking and avatar placement can improve the existing avatar embodiment experience.
Our tasks can be designed to require more complex machine operations, ideally leading to more hectic situations and thus more injuries.
Our colleague-NPC can be rigged and given more realistic movement animations, thus improving its credibility as a real-life representation.

Larger milestones within the existing simulation would include further mesh manipulation (bending and filing, for example) and requested machines such as the lathe.
Performance optimizations would allow the placement of more mirrors or other reflective surfaces throughout the workshop, which function as reinforcement of avatar embodiment
(by allowing users to see the body they are controlling). Body/limb tracking with additional VR sensors would allow a more accurate avatar portrayal.

Finally, creating consistently reliable physics and, ideally, very accurate hand tracking would greatly improve usability across the whole application.
Many operational steps on the machines of the workshop involve precise hand movements (tightening, holding, gripping, sliding and more), and many injuries can happen to single fingers from single movements.
Creating this level of detail will help improve our simulation's capacity for accurately and realistically reproducing injuries.

{{</section>}}
@@ -3,7 +3,7 @@ title = "Process"
weight = 2
+++

{{<image src="image_placeholder.jpg" alt="agile process" caption="Our Agile Development Process">}}
{{<image src="agile_process.png" alt="agile process" caption="Our Agile Development Process">}}

{{<section title="Starting off">}}

Expand Down Expand Up @@ -68,7 +68,7 @@ vocational training center. This first test round was planned as a proving groun
Five potential users tried out our simulation and participated in interviews.


{{<image src="image_placeholder.jpg" alt="bfw user test" caption="First user test at the bfw">}}
{{<image src="bfw_user_test.png" alt="bfw user test" caption="First user test at the bfw">}}


Improvement and Preparation for Second Round Testing
@@ -3,7 +3,7 @@ title = "Tech Stack"
weight = 3
+++

{{<image src="image_placeholder.jpg" alt="tech stack" caption="Our Tech Stack">}}
{{<image src="tech_stack.png" alt="tech stack" caption="Our Tech Stack">}}

{{<section title="Used Technologies">}}
