Gameplay video:

The Elemental Tetrad

Introduction & Story

Valheim is a 3D survival game for 1 to 10 players, based on Norse mythology, where players gather resources, craft tools and weapons, build shelters, and finally find and defeat bosses to prove themselves to the god Odin.


The game unfolds in a huge, dynamic, procedurally generated world that players are free to explore, although the flow of the game itself is sequential.

The game starts with players at the world origin, where an altar lets them offer loot from each of the six bosses as a sacrifice to Odin. The world consists of many biomes, such as dark forests, swamps and mountains. Each biome contains its own crafting materials and monsters, and each biome type has one corresponding boss.

Players can explore the world, find the bosses and defeat them, but only in a sequential manner, as there are mechanics discouraging players from meaningfully exploring a biome that corresponds to a later boss. For example, the Black Forest has an abundance of copper deposits, but players need to defeat the previous boss before they can craft their first pickaxe and mine them.


In terms of graphics and modelling, the game is not great: entities have very coarse 3D meshes, and the animations are not impressive. However, the game has very impressive rendering and lighting effects.

The game also has immersive background music matched to the theme of each biome. When players enter a "higher-level" biome for the first time, eerie sounds warn them that they are entering a biome beyond their strength.


Valheim is an indie game developed by Iron Gate Studio, a team of only five developers, using the Unity game engine. Despite its many elements and mechanics, the game is surprisingly small (probably a result of the low mesh detail of in-game entities): the program size is under 1 gigabyte, and it has very low system requirements.

Game Interface


#3 Lens of Venue

The game map is randomly generated for each world, ensuring unique gameplay each time you start a new character. However, different biomes have different generation criteria based on distance from the world center. This ensures that new players are not immediately confronted with dangerous biomes when they first begin, and players are expected to travel further from their starting point to explore as the game progresses.
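The distance-based generation described above can be sketched in a few lines. The ring radii and biome list below are hypothetical illustrations, not Valheim's actual generator values:

```python
import math

# Hypothetical biome rings, ordered by distance from the world origin.
# The real generator mixes in noise and more biomes; this only
# illustrates the "harder biomes further out" idea.
BIOME_RINGS = [
    (1000.0, "Meadows"),        # starting biome around the origin
    (3000.0, "Black Forest"),
    (6000.0, "Swamp"),
    (9000.0, "Mountain"),
]

def biome_at(x: float, z: float) -> str:
    """Pick a biome purely by distance from the world center."""
    dist = math.hypot(x, z)
    for radius, biome in BIOME_RINGS:
        if dist <= radius:
            return biome
    return "Plains"             # everything beyond the last ring
```

New characters spawning near the origin therefore always land in the easiest ring, while late-game biomes only appear far from the spawn point.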

#8 Lens of Problem Solving

To defeat each boss, players must first explore and find the boss's altar. There, they must offer a specific item found in the biome as a sacrifice to summon the boss. A carved stone at the altar shows a riddle hinting at the correct sacrifice. This mechanism not only compels players to think and solve the riddle, but also encourages them to explore the biome thoroughly so that they gain access to all its items.

The Altar to summon the boss

#35 Lens of Expected Value

When players are killed in Valheim, they drop all items in their inventory at the corpse, and their combat and survival skill levels drop. They then respawn and must travel back to the corpse to recover their items. Such punishment discourages players from entering a new biome or summoning a boss before they are well equipped. On the other hand, venturing into a new biome means access to new materials for crafting more powerful equipment. As a result, players must weigh carefully whether to keep upgrading their equipment in the current biome or to advance the game's progress.

#51 Lens of Imagination

One element that makes Valheim stand out from other survival games is its extreme level of realism. Buildings in Valheim follow real-world physics principles: different materials have different load-bearing capacities, and you can see the load on each segment of a building through color codes while building it. In addition, players must account for things such as ventilation when setting up a fire indoors. Such principles push players to build realistic shelters, giving them a more immersive experience and making them feel as if they were warriors surviving in the wild themselves.
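The load colouring can be pictured as a simple support calculation: each piece further from the ground keeps less support, and the remaining fraction maps to a colour. The decay factor and thresholds below are invented for illustration and are not Valheim's real numbers:

```python
# Toy version of build-piece "load colouring": support decays by a fixed
# factor for each piece further from the ground. All constants here are
# hypothetical.
DECAY = 0.75

def support_levels(height: int) -> list:
    """Support fraction for each piece in a vertical stack (ground = 1.0)."""
    return [DECAY ** i for i in range(height)]

def colour(support: float) -> str:
    """Map remaining support to a colour code like the in-game overlay."""
    if support > 0.66:
        return "green"      # well supported
    if support > 0.33:
        return "yellow"     # stressed
    return "red"            # about to collapse
```

Stacking pieces ever higher without a new support path to the ground eventually drives the fraction into the "red" band, which is exactly the visual cue the game gives before a structure collapses.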

A player’s base in game
Cooking on fire in Valheim



Explore the Mona Lisa in Virtual Reality (VR)

Introduction of Mona Lisa: Beyond the Glass

In 2019, the Louvre launched its first virtual reality experience, Mona Lisa: Beyond the Glass, during its Leonardo da Vinci exhibition. Through this immersive experience, the audience can appreciate the famous artwork at close range and learn the details behind it. This is an application of VR in the fields of both education and tourism.

Why do I like it?

I physically visited the Louvre in 2018 and saw the Mona Lisa with my own eyes. However, about 30,000 tourists from around the world come to see the Mona Lisa every day, so unsurprisingly I was only able to take a quick look at the famous artwork from meters away in the crowd. Mona Lisa: Beyond the Glass gives me a brand-new experience of visiting the painting virtually, allowing me to study it closely and explore its mysteries at my own pace. As a member of the audience, I love the interactive design of the application and appreciate its educational value. With this application, more people can "visit" the Mona Lisa and learn about this great masterpiece.

Why is it engaging?

The understanding of art should not come from explanations alone, but from experience as well.

The design of this application is interactive, with sound, images, and animations, and the virtual experience follows a storytelling style. The story of the Mona Lisa is told during the virtual tour, and the details of the painting are presented with clear explanations. Different aspects of the Mona Lisa are shown so that the audience can understand the painting from more angles.

What features are well done?             

One amazing feature of the application is that the audience can see, in the virtual space, how the look of the painting has changed over the years. This is something the audience cannot see in the real world, yet this virtual application gives them the chance to view history. It is also amazing that a 3D model of the Mona Lisa is built into the application, so the audience can view the woman from 360 degrees and watch her elegant movements.

What features can be improved?

From my own perspective, this is a near-perfect application and an art piece in itself. If I had to suggest an improvement, the view of the painting's landscape at the end of the virtual tour could be presented in a more real-world style instead of a painterly one, so the experience would feel more genuine to the audience.


Remarkable Beat Saber, from an 8-year music-game player's view

I have been a music-game player for more than 8 years, so I am supposed to be picky about them. Yet I still cannot forget the shock of my first sight of Beat Saber, so much so that whenever VR comes to my mind, Beat Saber comes with it. The first thing I did after switching to a PC with an RTX 2060 was to buy a VR device to play it.

Dance Monkey • Expert • Beat Saber • Mixed Reality [1]

Released in 2018, Beat Saber is still a widely praised VR game on Steam. Its content is good, but it could not have achieved such huge success without VR. At my first sight of it, the player looked like a brave warrior wielding two fantastic red and blue sabers, nimbly dodging the oncoming light walls as in a sword dance, surrounded by gorgeous light effects and an epic soundtrack. This sums up why Beat Saber is so engaging: the strong visual impact brought by VR, the wonderful interactive UI, and the sense of engagement rooted in a music game.

Immersive VR’s application

I have tried so many wonderful music games that I hardly glance at average ones. Beat Saber, however, is different. Vivid music blocks fly at your face while you wave your awesome lightsabers, as if dancing to the beat, in an unfamiliar wonderland. What an impressive experience! And that is what VR can offer: a fictional world full of fancy.

Less is more: a simple UI, but multisensory interaction.

[Beat Saber] YOASOBI – Racing into the Night (Yoru ni Kakeru) Collaboration with MASO [2]

At first sight of Beat Saber, you will notice that it contains few words. Instead, you are embraced by a new musical world where instructions hide in the properties of objects, across sight, hearing and touch. For example, the red/blue saber splits red/blue blocks, arrows on blocks tell you in which direction to slice them, and feedback like an electric shock tells you that bombs and walls are to be dodged. All of this makes the game easy to play, as if you were born knowing how.
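Those implicit rules can be summarised in a few lines. The colour/arrow rules below are the game's; the data representation is made up for illustration:

```python
# Minimal model of Beat Saber's cut rules: a block only counts when hit
# by the matching-colour saber, in the direction its arrow shows.
# "any" stands for the dot blocks that accept every swing direction.
def is_good_cut(block_colour: str, block_arrow: str,
                saber_colour: str, swing_dir: str) -> bool:
    if block_colour != saber_colour:
        return False                      # wrong saber
    return block_arrow == "any" or block_arrow == swing_dir
```

The point of the sketch is how little there is to it: three perceptual cues (colour, arrow, haptics) replace an entire tutorial's worth of text.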

A sense of engagement rooted in a music game

Hearing plays an important role in our lives. Even without sight, a piece of music alone can create a sense of engagement. To some extent, "VR in sound" is the earliest and most mature technology in VR development. As a music game, Beat Saber's sound design is without doubt excellent. In my opinion, when a VR application offers me a wonderful visual and auditory experience, it is already praiseworthy.

Limitations of today's VR interaction

Not just Beat Saber: many of today's VR applications limit interaction to 6-DoF controllers. However, due to the limits of current spatial-positioning equipment, tracking between the base station and the headset or controllers can drift after the user moves their body significantly. During games involving whole-body movement, users may find themselves gradually moving toward a spot where the controllers no longer track well.

What's more, space is far more expensive than VR equipment. These kinds of VR applications require a lot of room; in small spaces such as bedrooms they are difficult to play, and players are likely to hit the furniture. As a result, the greatest cost of this game is a wide room rather than a VR device, which makes such VR applications impractical for many users.


In conclusion, a successful VR application nowadays should, in my opinion, be a "VR+X" application. VR alone does not work. Only when it is combined with good content, in this case a wonderful music game, does it have the potential to succeed.


[1] Dance Monkey • Expert • Beat Saber • Mixed Reality, xoxobluff, Youtube

[2] [Beat Saber] YOASOBI – Racing into the Night (Yoru ni Kakeru) Collaboration with MASO, Artemisblue, Youtube

Google Lens


The concept of augmented reality is highly intriguing, offering a wealth of possibilities and the potential to transform our daily lives. It enables us to augment our perception of the physical world by providing an additional layer of information, without the need for active searching.

The application that I will be talking about in this post is one that is readily accessible to most people and might have already been used by most readers. That application is Google Lens.

About Google Lens

Google Lens was announced at Google I/O 2017, originally as a visual add-on to Google Assistant. It was at first pre-installed only on the Pixel 2; over the following two years it was rolled out to non-Pixel devices, and in June 2018 it was separated out into its own standalone app.

How Google Lens enhances my life (the features I like and that are well done)

For the few people unfamiliar with Google Lens, here is a short demo of the features it possesses:

The translate feature has been available from Google since 2006, but it had limited use in day-to-day life. Sure, it could let you read text on your computer written in a language you don't comprehend, but it could not help in situations away from a computer. Google Lens bridges that disconnect between the virtual and physical worlds. With Google Lens, travelling to a different country and understanding signage isn't a hassle: it's intuitive and it just works. This real-time processing and translation is far from the flashy visions of XR that we imagine, but in my opinion it impacts our lives the most, to the extent that we already take it for granted.

As a vegetarian, I have dietary restrictions. I look at ingredients for products that I buy to ensure that they don’t contain any animal product. This ends up being hard for products which are in a language other than in English. Google Lens has simplified that entire process for me and others who have similar dietary restrictions or allergies.

While travelling, gone are the days of running around with a pocket translator book or typing words in a foreign language into Google Translate. With the Live Translate feature, all you need to do is open the camera app and the translated text is superimposed on top of whatever you want to translate. And on devices from the Pixel 6 onward, all this translation happens on-device thanks to the Google Tensor chip.

The seamlessness, and the fact that Lens overlays the translation on top of the original text rather than showing it in a separate window, while tucking additional functionality off to the side (such as copying the translation or sending it to linked devices), makes it a truly unique product that can do things no other product can.

Future Possibilities and Upcoming Updates

And this is just the beginning of the interactions possible with augmented reality and computer vision in Google Lens. At Google I/O 2022, Google announced an expansion to its multisearch feature that lets you add search queries on top of a picture. Building on this, Google announced Scene Exploration, a feature that works much like "Ctrl+F" for the real world: users can identify multiple products or objects in their surroundings by moving their camera around, and the feature automatically recognizes the objects and provides insights.

The demo they gave during the presentation involved identifying the chocolates in a grocery aisle and, based on the user's nutritional needs and reviews, picking one out and pointing it out to them using AR. The demo can be found here:

What makes Google lens engaging

During the Made by Google event in 2019, Google's SVP of devices and services, Rick Osterloh, discussed Google's vision for the future of computing. He described a world beyond smartphones, where computing is not confined to a device in your pocket but is all around you, integrated into everything. He called this concept "ambient computing": devices and services work together with artificial intelligence to provide assistance wherever you want it, and the technology should be seamless and fade into the background when not needed, with the focus on the user, not the device.

Google Lens is a step toward seamless ambient computing, where the interactions needed to get the information you want are so natural that they don't stick out. What makes Google Lens engaging is that it is so seamless and intuitive that anyone can easily pick it up and explore an environment they might not otherwise be able to. Its clean, unobtrusive interface blends right into the scene it is analyzing. Engagement is the quality of being engrossed and connected. When looking out of a window, one doesn't look at the glass but at the view outside. Google Lens is a window to an augmented world, and the fact that we forget its existence is a testament to how engaging it is.

What can be made better?

For all their talk of ambient computing that just exists, on most phones other than Google's own products, Google Lens is still a standalone app that has to be launched separately. This needs to improve: it adds several extra steps, in which time the user could have searched for the information in a traditional browser, breaking the immersion and preventing them from fully connecting with what is in front of them. Google needs to work on integrating Lens natively with third-party manufacturers' camera applications to give the average consumer access to this technology. Google Lens is still a relatively niche product, and this move would let it reach many more consumers.

Unfortunately, Google Lens is also only able to translate the following languages:

  • Chinese
  • French
  • German
  • Hebrew
  • Hindi
  • Italian
  • Japanese
  • Korean
  • Portuguese
  • Spanish

They can improve the product by branching out to other languages, helping create a more interconnected, multilingual world.


It Takes Two Review

The game: It Takes Two.

Game of the Year 2021, with a 10/10 rating on Steam, It Takes Two is an action-adventure platform video game developed by Hazelight Studios. It is a multiplayer game in which players choose to play as either May or Cody, a couple on the brink of divorce. They are sent to a magical realm where they become dolls, and they must complete a set of tasks from a magical book, Dr. Hakim, before they can become human again.


Essential Experience:

The game had a very novel implementation, expanding on technology used in its sister game A Way Out: either player can see the other player's screen in real time, even when not playing locally (where split-screen is usually the norm). This simulates the experience of playing alongside your friend in person, as you can see what your friend is looking at at any point in time. The game also came out in the middle of COVID-19 (March 2021), when this experience of being virtually side by side was especially welcome, even with both players in different locations.

Games Like It Takes Two: 2 Player Co-Op Action Game Alternatives


The gameplay was full of surprises and twists. This was mainly brought out with mechanics and the storyline.

One of the more memorable surprises was a scene where the characters hid inside dolls to evade the guards, but in doing so gained the dolls' "superpowers", and the game switched into something like a Multiplayer Online Battle Arena (MOBA), with the players facing off against the guards. This was a very surprising mechanic switch, going from typical third-person gameplay to a MOBA, and it was handled through the very smooth integration of the characters gaining the skills the moment they hid inside the dolls' armour.

IT TAKES TWO – Episode 12: Ice and Fire | Let's Play - YouTube


The game also taps into nostalgia and integrates real-life components, like using fidget spinners as wings to fly, giving very intuitive yet creative gameplay mechanics. Every level also used different items and had a different theme, keeping the game fresh and fun.

Also, instead of making players run to every location, the developers integrated fun mechanics like riding spiders to climb a tree; the spiders shot webs to carry the player across to another tree.

The Garden - It Takes Two Wiki Guide - IGN


The game had several interactable components that did not contribute to gameplay progression but let players satisfy their curiosity about the scene. The mechanics are quite intuitive, as they usually involve everyday objects. One example was collecting coins from the floor to put into a piggy bank, then jumping on the piggy bank to break it.

It Takes Two – Break the Bank achievement guide - Gamepur

There were also various highly aesthetic scenes the player could wander around and explore. One of them was a very intricately designed clocktower that the players could attempt to climb, which unlocked an achievement.

Cuckoo Clock - It Takes Two Wiki Guide - IGN

Microsoft Flight Simulator 2020 – ready to be in the very front seat?


Microsoft Flight Simulator (abbreviated as MSFS) is a series of flight simulation applications that was first released in 1982. Starting with the 2020 version, the application now runs in Virtual Reality (VR) mode, allowing users to experience a highly interactive and realistic flight simulation.

What makes it engaging?

Porting the simulator to VR makes this application especially engaging and entrancing to the users. The users feel like they are inside the cockpit and maneuvering the actual aircraft. This sense of immersion is strengthened by the detailed depiction of surroundings, including the airports, cities, and skyscrapers, as well as the natural landscape, providing the users with a life-like experience. While flying over the Grand Canyon, users can see the intricate details of the canyon’s rock formations and the winding Colorado River. Additionally, while flying over a city like New York, users can see detailed 3D models of famous buildings such as the Empire State Building and the Statue of Liberty.

Features that are well-done

High-fidelity representation of the surroundings

Being a simulator, when it comes to representational fidelity, MSFS strives to be on the realistic side of the triangle. As such, it employs a number of methods to ensure that the surroundings users interact with are as realistic as possible. To begin with, it uses high-quality 3D photogrammetry data from Microsoft's Azure 3D maps library. When an area is not well captured by this in-house data, it applies a deep-learning algorithm to the 2D satellite image to reconstruct the sight as 3D graphics. Lastly, some areas worthy of attention are modeled manually by the designers[1].
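The three-tier pipeline can be sketched roughly like this; all names and example areas below are hypothetical, since MSFS's real internals are not public:

```python
# Toy sketch of the three-tier terrain pipeline described above.
# The area names and data sets are invented for illustration.
PHOTOGRAMMETRY = {"new_york", "tokyo"}        # areas with full 3D capture
SATELLITE_2D   = {"small_town", "new_york"}   # areas with 2D imagery only
HAND_MODELED   = {"eiffel_tower"}             # landmarks modelled by hand

def terrain_source(area: str) -> str:
    """Pick the best available data source for one map tile."""
    if area in HAND_MODELED:
        return "manual model"
    if area in PHOTOGRAMMETRY:
        return "photogrammetry"
    if area in SATELLITE_2D:
        return "ML reconstruction from 2D satellite"
    return "generic procedural terrain"
```

The design choice worth noting is the graceful degradation: every tile renders something, and fidelity simply steps down as richer data runs out.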

User-friendly interactions inside the cockpit

When run in VR mode, MSFS does not use any HUD. Instead, it relies on the virtual-world reference frame: users learn the status of the current flight from the dashboard inside the cockpit. Previously, VR controller interaction was not supported by MSFS, meaning users had to purchase simulator-compatible controllers to maneuver the aircraft fully[2]. With the update in November 2021, users can now use their VR motion controllers to interact with the planes' cockpits, greatly increasing the immersion of the simulation.

Features that need improvement


As MSFS tries to make the flying experience as realistic as possible, performance and hardware requirements are a natural concern. Users have complained about FPS drops and performance issues when the application runs, especially in VR. While Asobo Studio, the developer of this simulation, regularly releases updates and hotfixes to improve frame rates, more can be done to optimize hardware usage and graphics rendering[3].

Control complexity

While it is fantastic that users can directly interact with the dashboards and controls, the complexity of flying an aircraft poses a significant learning curve for most users. Although MSFS positions itself as an amateur flight simulator and it is important to preserve the realism of flight simulation, it also needs to consider new users and provide more intuitive ways to maneuver the aircraft, such as voice-recognition commands or a virtual co-pilot who does the heavy lifting.


If you are into flight simulation and want the surreal experience of becoming a pilot yourself, MSFS is a phenomenal application that can fulfil that dream. Nevertheless, beware of the influx of complex instructions and settings, on both the hardware and software side, that you will need to learn before embarking on your dream journey.


[1] “Exploring the Whole World in VR with Bing 3D Maps and MRTK.” TECHCOMMUNITY.MICROSOFT.COM, 1 May 2022,

[2] Feltham, Jamie, et al. “Microsoft Flight Simulator Finally Has VR Controller Support.” UploadVR, 22 Nov. 2021,

[3] Chawake, Anurag. “Microsoft Flight Simulator ‘Fps Drops’ & ‘Performance Issues’ in V1.25.9.0.” PiunikaWeb, 12 May 2022,

Hololy – The AR app that brings your idol to the real world.

Hololy is an Augmented Reality (AR) application that lets you project 3D anime idols into the real world through your phone. The application lets you choose from a selection of models, poses and dances, making your AR model appear as if they were right where you are.

How it works.

The main attraction of the application is using your creativity to bring the 2D world into the 3D world and make interesting pictures and videos. First, the application identifies a flat area where the model can suitably be placed; this prevents the model from floating in mid-air. This area becomes the reference point for the model. After confirming the location, you can place the model and rotate it, and moving the camera around shows different perspectives of the model, as if it were really in that spot. From there, the user can choose poses, expressions and dance moves to make the model look alive in the scene.
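Under the hood, the placement step boils down to a ray-plane intersection: cast a ray from the camera through the tapped pixel and anchor the model where it hits the detected floor plane. A minimal sketch of that math (the function name is invented for illustration; real AR frameworks wrap this in a hit-test API):

```python
import numpy as np

def place_on_plane(ray_origin, ray_dir, plane_point, plane_normal):
    """Return the 3D anchor where the camera ray hits the plane, or None."""
    denom = float(np.dot(plane_normal, ray_dir))
    if abs(denom) < 1e-6:              # ray parallel to the plane
        return None
    t = float(np.dot(plane_normal, plane_point - ray_origin)) / denom
    if t < 0:                          # surface is behind the camera
        return None
    return ray_origin + t * ray_dir
```

Once the anchor is fixed, moving the phone only changes the camera pose, not the anchor, which is why the model appears to stay put in the room.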

What makes it fun?

The idea that you can see something usually confined to the 2D world, whether from a cartoon or a game, in 3D is pretty amazing. Usually we view 3D models and animation through a 2D screen, so our perspective is locked to that screen; even when the perspective changes, the viewer is glued to the same spot. Through this AR app, the user can see and feel how a model looks from different angles and how it fits into the environment. Being able to position your model to sit on a chair, or let them dance in your room, allows a lot of wacky photos to be taken.

Things to improve on

The first glaring issue upon using the app is that the 3D models are lit in a fixed, uniform way, which conflicts with the lighting of the environment: shadows appear on sides that are clearly lit in the scene. If the app could estimate the scene lighting, for instance through machine learning, it would enhance immersion by making the model blend into the picture.

The second issue is that the detection of a surface to stand on is not very accurate; at times the model seems to float in the air. Also, restricting the reference point to standing poses limits creativity. For example, sitting or leaning poses could anchor the model to furniture, which would make the experience more natural, since standing bolt upright in someone's house isn't the most natural pose to be in.


Hololy is a decent app for AR immersion, especially for fans of the anime characters. It lets you combine your creativity with the AR powers of the application to create memorable photos and videos. However, there is plenty of room for improvement, as the model does not yet really fit into the 3D world.


[2022/23S2 CS4240 VAR] XR – Ikea Kreativ


Ikea Kreativ is a Virtual Reality (VR) design tool that helps customers visualise how furniture will look in their room. Powered by VR and AI technology, it makes home redesign and renovation easy, helping customers bring their ideas into their home. Features include:

  1. Scanning of users' rooms with an iPhone camera to accurately capture room dimensions.
  2. Removal of existing furniture and placement of virtual furniture in its place to check the fit.
  3. More than 50 virtual showrooms in which to place furniture.
  4. Thousands of furniture items and decorations for users to interact with.
Figure: Ikea Kreativ

What do I like about it?

Ikea Kreativ makes redesigning a space convenient and hassle-free, skipping the process of measuring furniture and space, and gives users the ability to see their ideas not just in their mind but right in front of their eyes.

It also provides a good degree of customisability, with thousands of furniture items, decorations, and accessories for users to customise their space with. The ability to select items to remove from the room further increases customisability for users who wish to integrate new items into a space containing existing ones.

Why is it engaging?

I found the application engaging because of the good degree of freedom it provides. Ikea Kreativ allows users to adjust Ikea items with ease, rotating and moving them around the space just as they would move items in real life.

What features are done well?

Placing Ikea's furniture into the virtual room does a good job of letting users picture the furniture's proportions relative to the room. Ikea Kreativ's VR element helps users visualise how items would look in their space, skipping the hassle of measuring the dimensions of both furniture and room.

The wide selection of items and showrooms also does a good job of replicating Ikea's real-life stores, allowing users to browse item selections without compromising the experience they get from physical stores.

What features could be improved and how?

Though marketed as a selling point, the erasing tool is not perfect. For example, when removing existing items, the software does not always understand how to fill in the empty space. This could be improved by training the underlying model on more data. Nonetheless, it does not obstruct users from adding Ikea's furniture into the space. After all, the eraser tool is a new feature, and I believe Ikea will continue to improve it for a better user experience.

Another feature that could be improved is the degree of freedom to navigate around the room. This would require more devices, such as a head-mounted display and trackers, allowing users to "feel" the surrounding space and the ambience of the new environment (that is, the space after the user finishes setting it up).


Porter, J. (2022, June 22). Ikea's new virtual design tool deletes your furniture and replaces it with Ikea's. Retrieved from The Verge:

Wilson, M. (2022, June 22). Ikea’s new app deletes your living room furniture so you can buy even more. Retrieved from Fast Company:

IKEA Launches AI-Powered “Kreativ” Mixed Reality App. (2022, June 22). Retrieved from Hypebeast:

Extended Reality (XR) in Arts and Entertainment

Van Gogh experience Virtual Reality (VR)

I am not a big art fan. With art being so abstract and open to interpretation, it usually bores me. However, I recently got the chance to visit the Van Gogh exhibition, which includes a 360º immersive experience of his art and his journey through life. It was held in a cathedral where all the walls, and even the floor, had projections on them. It was quite a spectacle, bringing the audience through his thought process and showcasing his artwork, and this was just the beginning.

Image of 360º projection display in York, St Mary’s Cathedral [1]

After the 35-minute showcase, there was a VR experience: we wore headsets and speakers and were seated on chairs that allowed us to turn 360º. It brought us through the time periods and locations that inspired Van Gogh and explained the motivation behind each painting.

A snippet of Van Gogh Experience VR section [2]

One of the more famous pieces showcased was Starry Night. We were brought to the exact location of the painting, with Van Gogh "narrating" his thought process, such as the colors he saw and why he decided to use a particular color in the piece. I liked that the information was easy to digest, on top of the fact that we could see what Van Gogh saw and thought as he painted the various pieces.

However, one thing I think can be improved is the mobility of the experience. I think it would have been better if we could walk through the whole “exhibit” as if we were in that time period and explore the area and paintings at our own pace. This could be done by providing users with controllers and adding demarcations in the VR to allow users to move around the area without having a big venue to move around in.

VR brings art to life, making it easier for people to understand the artist’s point of view. This reinvents museums and strays away from traditional art galleries, which is likely to attract more youths to the art scene.

Zero Latency (Sol Raiders)

VR in gaming is nothing new anymore. With the rise of games like Beat Saber pushing innovation, multiplayer VR experiences have become possible too. Zero Latency is a free-roaming, multiplayer VR experience offering games such as Survivor, Outbreak Origin, and Sol Raiders. I, alongside 7 other players, participated in Sol Raiders, a 4v4 game where the objective is to complete as many tasks as possible while minimizing the number of deaths on each team.

Sol Raiders trailer by Zero Latency [3]

I do not usually play first-person shooter games, but this was so much fun! After putting on the vest, headset, headphones, and gun, we were transported into the game dimension, and all the players (teammates and opponents) were dressed up like robots! Everything was so lifelike; it really felt like I was a robot in that reality, especially with the sound effects.

Players in real world [4]
Players in game world [5]

At some point during the game, I felt very lightheaded because of the mismatch between the real world and the game world. The game world was multi-level, with slopes and lifts, while the real-world room was flat ground, so walking up slopes and taking lifts was disorienting.

I think one thing that could be improved is the headset's UI. The in-game view did not include any information about the game status, which was shown only on a screen at the end of each round. Since this was a team game with a common objective, the display could have included information such as kills, deaths, and objectives fulfilled. This would have let us plan our game better instead of constantly trying to keep track of this data ourselves.

Overall, I think the game was very well done, especially since it had to keep all 8 players in sync throughout the game.


[1] Google Maps. [Online]. Available: [Accessed: 20-Jan-2023].

[2] “Van Gogh: The immersive experience,” YouTube, 24-Feb-2021. [Online]. Available: [Accessed: 20-Jan-2023].

[3] “Sol Raiders – trailer – zero latency VR,” YouTube, 07-Feb-2019. [Online]. Available: [Accessed: 20-Jan-2023].

[4] “Zero Latency’s Latest Free-Roam Experience Made Me A Believer In VR Esports,” VRScout, 10-Aug-2018. [Online]. Available: [Accessed: 20-Jan-2023].

[5] “Sol Raiders,” Zero Latency Luxembourg, 07-Aug-2022. [Online]. Available: [Accessed: 20-Jan-2023].

Snapchat 👻

Snapchat is primarily a social media application where people can send photos and videos, in the form of snaps, directly to their friends or post them to their stories for all of their friends to see. They can also use the application to look at highlights, other people’s stories, and many other features. However, it also has an AR component: filters and lenses.

Snapchat Filters

Unlike most AR apps, which interact only with the surroundings, Snapchat interacts with both your face and your surroundings. It was one of the first apps to come up with interactive filters, and other social media applications soon caught on to the trend. Some filters or lenses are as simple as adding colors or effects to the surroundings, while others are more complex, like swapping your face with your friend’s or changing your appearance to look like a different gender. Snapchat also has a newer Scan feature that lets users interact with their surroundings, such as solving a mathematics problem, identifying a plant or a car, and many more possibilities. Scan is relatively new compared to the filters and lenses and adds to the AR experience. On its own, the Scan function is very similar to Google Lens, but what makes it different is the inclusion of filters and lenses, and the social media aspect.

Soon after this new feature, Snapchat also released Snap AR Lens Studio, where developers and artists can create new augmented reality experiences for users and exercise their creativity.

Why do I like it?

It is simple and fun. It lets you experiment with many different filters, and users can create filters as well. The fun part about these filters is that they are very realistic. On top of that, it is very easy to use: all you have to do is point the camera at your face, and it detects your face and puts the filter on top of it. I also like the ‘explore’ option, which does not limit a user to a small number of filters. Snapchat also keeps coming up with new filters and ideas, and keeps rotating the default filters, so users do not get bored.

The Scan feature, on the other hand, is like a visual search engine. It is very useful, quick, responsive, and easy to use. It is linked with the filters and lenses as well; for example, once a user points the camera at a face, Scan will suggest some face filters. It also has many different options to choose from, and the fact that all of these features are in one place makes it even better. Users can use it to find a product online, and the feature is not limited to visual elements: it can also detect the music playing, which I think is very useful and fun.

Why is it engaging?

The filters are the most engaging aspect, as they let you play around and see how you would look in different ways. Some filters just enhance facial features, while others add dog ears, tongues, and the like. They become more engaging as they add interactions. For example, one of the most famous filters adds dog ears and a nose, and if you stick your tongue out, it plays a dog tongue-swiping motion. Many other filters respond to interactions such as raising your eyebrows or smiling. The filters also detect faces on pets, on screens, or on anything that looks like a face.
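Gesture-triggered interactions like these typically boil down to thresholding distances between tracked face landmarks. Here is a minimal sketch in Python of how a "mouth open" trigger could work; the landmark names, coordinates, and threshold are all invented for illustration (Snapchat's actual tracking pipeline is not public):

```python
# Hypothetical sketch: triggering a filter effect from face landmarks.
# Landmark names and coordinates are made up for illustration; a real
# pipeline would get them from a face-tracking model every frame.

def mouth_open_ratio(landmarks):
    """Ratio of mouth height to mouth width from four landmark points."""
    top = landmarks["mouth_top"]
    bottom = landmarks["mouth_bottom"]
    left = landmarks["mouth_left"]
    right = landmarks["mouth_right"]
    height = abs(bottom[1] - top[1])
    width = abs(right[0] - left[0])
    return height / width if width else 0.0

def should_play_tongue_animation(landmarks, threshold=0.6):
    """Trigger the dog-tongue effect when the mouth opens wide enough."""
    return mouth_open_ratio(landmarks) > threshold

# Example: a wide-open mouth crosses the threshold, a closed one does not.
open_mouth = {"mouth_top": (50, 80), "mouth_bottom": (50, 120),
              "mouth_left": (30, 100), "mouth_right": (80, 100)}
closed_mouth = {"mouth_top": (50, 98), "mouth_bottom": (50, 102),
                "mouth_left": (30, 100), "mouth_right": (80, 100)}

print(should_play_tongue_animation(open_mouth))    # True: ratio 40/50 = 0.8
print(should_play_tongue_animation(closed_mouth))  # False: ratio 4/50 = 0.08
```

Running the same check every frame is what makes the tongue animation appear exactly when you open your mouth and disappear when you close it.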

The dog filter on pets

Another aspect that makes these filters engaging is that you get to share them with other people. You can send snaps to your friends or put them on your stories for more people to see. It creates something fun for people to do together. The scan feature is also engaging as it now suggests lenses or filters based on what the camera is pointed at. This makes it less tedious for users and allows them to explore new lenses easily.

What features are well done?

I think the filters look very realistic, and that is the part I believe is best done. For example, a guy can look exactly like a girl using some of these filters, and it even fools some people. I also think the ability to create your own filters is a really good feature: users get to showcase their creativity, and some people use it for marketing and publicity. The Scan features are also built very well. For example, the find-music feature lets users identify any music playing in the background; it is very accurate and also gives links to the most popular music applications. The find-a-product feature is also really helpful, giving links to the Amazon page so users can buy the item directly. Small details like providing these links, which make things easier for users and require the least effort from them, are a well-thought-out aspect. Scan has other features, such as identifying a plant or car or solving a math problem, that are also very useful.

What features can be improved and how?

I think the one thing that can be improved is face detection, which is not accurate all the time.

Inaccurate face detection

It is especially poor for people wearing glasses: a lot of the time, if a person wearing glasses tries on a filter, it is not positioned correctly on the face. This excludes a portion of users and does not give them the full experience. Another weakness is filter accuracy while the face moves; make-up filters in particular do not stay in the correct position when someone is speaking. This makes them look less realistic, and so less engaging and fun. The filters should move with the movement of facial features. The Scan feature is relatively new, but so far I have not encountered any problems with it; the only way it could be improved is by offering users more options than the limited set available now.
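One common way to keep a filter glued to a moving face is to recompute the overlay's position, scale, and rotation from tracked landmarks on every frame rather than placing it once. A minimal sketch in Python, using a glasses sticker anchored to the two eye centers; the landmark values, sticker width, and the "glasses span about twice the eye distance" rule are all invented for illustration:

```python
import math

# Hypothetical sketch of landmark-anchored overlay placement: recompute the
# glasses sticker's position, scale, and rotation from the two eye centers
# every frame, so the filter follows the face instead of lagging behind it.

def place_glasses(left_eye, right_eye, sticker_width=100.0):
    """Return (center, scale, angle_degrees) for a glasses overlay."""
    cx = (left_eye[0] + right_eye[0]) / 2          # midpoint between the eyes
    cy = (left_eye[1] + right_eye[1]) / 2
    eye_dist = math.dist(left_eye, right_eye)
    scale = (eye_dist * 2.0) / sticker_width       # assume glasses span ~2x eye distance
    angle = math.degrees(math.atan2(right_eye[1] - left_eye[1],
                                    right_eye[0] - left_eye[0]))
    return (cx, cy), scale, angle

# Frame 1: level face. Frame 2: head tilted, so the overlay rotates to match.
print(place_glasses((40, 100), (90, 100)))   # ((65.0, 100.0), 1.0, 0.0)
print(place_glasses((40, 100), (90, 110)))   # rotated overlay for a tilted head
```

Because the transform is derived from the landmarks themselves, any tracking error (say, landmarks thrown off by real glasses) propagates straight into a misplaced overlay, which matches the behavior described above.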

Overall, Snapchat is a fun and engaging application, which makes use of Augmented Reality in various ways. It has many good features, giving its users an enjoyable user experience, with an easy-to-use and fun user interface.