Bemore Optics- Three stages of smart glasses development
By bemoreoptics November 3rd, 2023

Three stages of smart glasses development

 

In the next 10 years, we predict that the development of smart glasses will go through three stages.

 

These three stages can be distinguished along four dimensions: output mode, environment perception capability, input mode, and environment integration capability. Environment integration capability depends not only on the development of smart glasses themselves, but also on the progress of other key technologies such as 5G, AI, and IoT.

 

Characteristics of the first stage

From 2021 to 2023, the characteristics of consumer smart glasses at this stage include:

 

Output method

The field of view (FOV) of the glasses is relatively small: a birdbath (BB) optical module may reach at most 50 to 60 degrees, while an optical waveguide module may reach at most 40 to 50 degrees. Whichever solution is used, it is constrained by brightness, computing power, battery life, and so on. In terms of form factor, it is difficult to achieve both an "ordinary glasses look" and all-day battery life, so smart glasses at this stage are mainly split designs tethered to an external compute unit.

Environmental awareness

Glasses at this stage lack full environmental awareness and motion tracking, and cannot respond to the movement and rotation of the user's head the way a Vive or Oculus headset can. The glasses may carry an IMU, but because of the small FOV and resolution, presenting information around the user as the head rotates works poorly.

Input

The glasses accept button or touch input on the frame, but raising a hand to operate them is tiring, and this input can only cover one-dimensional operations such as toggling on/off or adjusting volume. It is therefore only suitable for low-frequency, ultra-lightweight interaction. Heavier interaction requires dedicated input hardware, such as a joystick or trackpad.

Environment integration capability

There is almost no environmental integration capability.

 

Characteristics of the second stage

From 2023 to 2025, the characteristics of consumer smart glasses at this stage include:

 

Output method

The field of view (FOV) of the glasses increases further, and most glasses will adopt optical waveguide solutions; the birdbath design is limited by optical principles and struggles to approach the form factor of ordinary glasses. Battery life remains a major challenge, but other issues such as computing power will be greatly alleviated at this stage. Smart glasses at this stage are still mainly split designs.

Environmental awareness

At this stage, some glasses have six-degree-of-freedom (6DOF) motion tracking. Smart glasses can then "place" a virtual object at a fixed position in space, much as the iPhone's ARKit does today. Through the cooperation of sensors and algorithms, the glasses can obtain partial depth information about the environment, allowing real objects to occlude virtual content.
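The two ideas here, world-anchored placement and depth occlusion, can be sketched in a few lines. This is a hypothetical simplification, not any vendor's API: the headset pose is reduced to a 3-D position plus a yaw angle (a real 6DOF system tracks a full rotation), and occlusion is a single depth comparison along one ray.

```python
import math

def world_to_camera(anchor_xyz, cam_pos, cam_yaw_rad):
    """Transform a world-space anchor point into the camera frame.

    Hypothetical simplification: pose = position + yaw only;
    a real 6DOF tracker supplies a full 4x4 pose matrix.
    """
    dx = anchor_xyz[0] - cam_pos[0]
    dy = anchor_xyz[1] - cam_pos[1]
    dz = anchor_xyz[2] - cam_pos[2]
    cos_y, sin_y = math.cos(-cam_yaw_rad), math.sin(-cam_yaw_rad)
    # Rotate the world-space offset into the camera's heading.
    cx = cos_y * dx + sin_y * dz
    cz = -sin_y * dx + cos_y * dz
    return (cx, dy, cz)

def is_occluded(anchor_depth, env_depth):
    """A real object hides the virtual one when the environment depth
    sample along the same view ray is closer than the anchor."""
    return env_depth < anchor_depth
```

As the user walks around, the anchor's world coordinates never change; only the camera pose does, which is why the virtual object appears fixed in space.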

Input

Lightweight interaction on the glasses body still exists, but at this stage smart glasses gain at least one "natural interaction" method: one that covers most usage scenarios and intuitively maps actions to input. Like multi-touch on smartphones, even children can use it easily.

Environment integration capability

By connecting to peripheral devices, smart glasses will gain some environment integration capability; for example, the glasses could display the status of the air conditioner in the room. However, due to concerns about privacy leakage, camera-based environment capture and recognition will be restricted, limiting the further expansion of environment integration. Beyond technological progress, this stage also requires relevant laws and usage guidelines to be established.

 

 

Characteristics of the third stage

From 2025 to 2030, the characteristics of consumer smart glasses at this stage include:

 

Output method

Smart glasses will become a standard personal item like mobile phones, in a form almost identical to traditional glasses. The output method at this stage will trend toward "integration", that is, realizing both AR and VR functions in a single pair of glasses, and smart glasses will truly become humanity's last screen.

Environmental awareness

Glasses will be able to obtain complete environmental depth information at high frequency. Whether placing virtual items in the real world or bringing real-world items into virtual scenes, smart glasses will handle both well and bring users an unprecedented immersive experience.

Input

Natural interaction on smart glasses matures and converges, most likely built on gestures and supplemented by eye tracking, voice input, and other methods. Realistic feedback in interaction is the focus of breakthroughs at this stage. Brain-computer interfaces, a form of direct output of intention, face the challenge of consumers' concerns about privacy and security.

Environment integration capability

With the full deployment of 5G, AI, IoT, and other technologies, smart glasses act as the personal hub through which individuals obtain information and express intent, working with these technologies to achieve environment integration and interaction. Many everyday scenes will change significantly, just as when we switched from cash and card payments to QR codes and people gradually stopped taking wallets out of their pockets. The scope and depth of this shift will be far greater than the move from PCs to smartphones.

 

First stage of interaction

Since first-stage consumer smart glasses lack motion tracking, glasses-side content at this stage mainly addresses "digital offloading": migrating some smartphone content and scenarios to the glasses.

At this stage, smart glasses, like smart watches, are digital devices that cannot exist independently of smartphones. Glasses provide a better experience than phones in some scenarios, letting users take out their phones less often, but these scenarios may be fragmented and isolated.

If the smart glasses body is used as a pure output terminal, the applicable scenarios mainly depend on the user's input method.

 

Ultra lightweight interaction

Examples include answering/hanging up calls, checking the time, and receiving push notifications. These low-frequency simple or passive interactions can be handled with the buttons or touch controls on the glasses body. TWS-earbud-style operations in music scenarios, such as play/pause, volume adjustment, and track switching, can be handled the same way.
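The whole ultra-lightweight tier can be thought of as a small table mapping one-dimensional touch gestures to state changes. The gesture names and actions below are illustrative assumptions, not any product's event model:

```python
# Hypothetical mapping from on-frame touch gestures to player actions.
ACTIONS = {
    "tap": "play_pause",
    "double_tap": "next_track",
    "swipe_forward": "volume_up",
    "swipe_back": "volume_down",
}

def handle_touch_event(event, state):
    """Apply a one-dimensional touch gesture to the media state."""
    action = ACTIONS.get(event)
    if action == "play_pause":
        state["playing"] = not state["playing"]
    elif action == "volume_up":
        state["volume"] = min(100, state["volume"] + 5)
    elif action == "volume_down":
        state["volume"] = max(0, state["volume"] - 5)
    elif action == "next_track":
        state["track"] += 1
    return state
```

The point of the table is the ceiling it exposes: every reachable action is a toggle or a step along one axis, which is exactly why this input method tops out at ultra-lightweight interaction.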

 

Lightweight interaction

High-frequency simple interactions can be carried out with input devices such as a joystick ring. If a single-stick controller is shrunk to the size of a ring, users can wear it on a finger and perform operations such as flicking and pressing. Many native mobile applications are driven by simple screen swipes, and others can be adapted to the joystick after some simplification.

For example, in a reading application, everything from selecting books and chapters to scrolling content can be done through the ring.

 

Heavy interaction

Some mobile phone manufacturers already offer keyboard-and-mouse modes, such as Samsung DeX, which essentially adds a desktop-style, keyboard-and-mouse-friendly shell to the phone's Android system. However, early smart glasses are limited by resolution and FOV and may not deliver a good desktop experience. As both improve, smart glasses have great potential in desktop application scenarios.

Most existing applications are designed for smartphones and adapted to touch. If these applications can be migrated to the glasses, the shortage of device content can be eased considerably. We need an approach that both preserves touch operation and maps those operations onto content shown in the glasses; a floating touchpad is one feasible direction.
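The core of a floating touchpad is a coordinate mapping: the phone screen becomes a blank absolute touchpad, and each touch position is projected onto the virtual screen rendered in the glasses. A minimal sketch of that mapping, with made-up pad and screen dimensions:

```python
def map_touch_to_glasses(touch_xy, pad_size, screen_size):
    """Project an absolute touch position on the phone-as-touchpad
    onto the virtual screen shown in the glasses.

    All sizes are in pixels; dimensions here are illustrative.
    """
    tx, ty = touch_xy
    pad_w, pad_h = pad_size
    screen_w, screen_h = screen_size
    # Normalize to [0, 1] on the pad, then scale to the virtual screen.
    return (tx / pad_w * screen_w, ty / pad_h * screen_h)
```

With absolute mapping the user's muscle memory for touch positions carries over directly; a relative (trackpad-style) mapping is the other option, at the cost of needing a visible cursor.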

 

Contents of the first stage

Super lightweight content

For passive content, the logic must consider how to avoid disturbing users and how to make handling convenient. Taking push notifications as an example, we must be very careful about what is pushed to users, to prevent notification bombardment. One possible approach is that not every notification alerts the user; instead, the user chooses which kinds of information may appear on the glasses.

When a notification reaches the glasses, it can appear at the edge of the field of view, where the user can see it with a slight shift of gaze but can quickly look away if uninterested.

When the user does need to act, we provide a way to handle it with one-dimensional operations. For example, on an incoming call, the user can answer or hang up by pressing a button, or by sliding along the touch-sensitive temple.

In ultra-lightweight scenarios, active content with complex interactions is unsuitable. Active-content UI should therefore focus on quick wake/dismiss and present only the most essential information, like the Apple Watch face: it lights up when the wrist is raised, and the user gets the information they want at a glance.

 

Light content

Assuming lightweight content is adapted to a joystick input device, the entire UI and operation logic must accommodate the joystick's up/down/left/right movements and presses (including double-click and long press). There are many examples of such operation schemes on PlayStation consoles and TV set-top boxes.

However, the brightness, FOV, and resolution of early smart glasses are not comparable to a TV's, and designers need to plan layout and content accordingly. Also, while most remote controls have multiple buttons, a joystick ring may have only a single stick, or at most one extra button; designers must devise interaction logic to match.
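With essentially one button, the interaction vocabulary has to come from timing: how long a press is held and how quickly presses follow each other. A sketch of such a classifier, with threshold values chosen arbitrarily for illustration:

```python
LONG_PRESS_MS = 500       # hold at least this long -> long press
DOUBLE_CLICK_GAP_MS = 300  # next press within this gap -> double click

def classify_presses(press_times_ms, release_times_ms):
    """Turn raw press/release timestamps from a single-button ring
    into click / double_click / long_press events."""
    events = []
    i = 0
    while i < len(press_times_ms):
        held = release_times_ms[i] - press_times_ms[i]
        if held >= LONG_PRESS_MS:
            events.append("long_press")
            i += 1
        elif (i + 1 < len(press_times_ms)
              and press_times_ms[i + 1] - release_times_ms[i] <= DOUBLE_CLICK_GAP_MS):
            events.append("double_click")
            i += 2  # the second press is consumed by the double click
        else:
            events.append("click")
            i += 1
    return events
```

The trade-off baked into the gap threshold is latency: a single click cannot be confirmed until the double-click window has expired, so a larger gap makes double clicks easier to perform but every single click feel slower.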

 

Heavy content

We will expand on this in detail in subsequent articles.

 

Interaction and content in the second stage

Second-stage consumer smart glasses have motion tracking, and with larger FOV and higher resolution they gradually move beyond simple "two-dimensional migration" toward spatially aware three-dimensional content; in other words, AR becomes increasingly prominent.

Smart glasses gradually become like the iPad: a digital product that works alongside the phone rather than depending on it. Scenarios exclusive to smart glasses will steadily grow, making them the most-used digital product after the phone at this stage.

The natural interaction method at this stage is most likely gesture-based. Why gestures rather than other forms? First, given the state of the technology and concerns about privacy and security, more cutting-edge brain-computer methods are not yet ready for mass adoption. Other methods are either not convenient enough, such as controller input, or cannot cover every scenario, such as voice input.

One challenge for gesture input is tactile feedback. Pinching fingers in mid-air looks cool but is not comfortable: holding the hands up for long periods, combined with the lack of tactile feedback, quickly tires users. There are two solutions. One is to simulate touch through technology, such as the haptic gloves some companies are developing (though we believe gloves are not the optimal form). The other is to "merge" the input device into the user's surroundings: every object becomes an input device, so no dedicated input device is needed.

 

Interaction

Once the natural interaction method matures, ultra-lightweight, lightweight, and heavy interaction can all be unified under it. Just as users operate a smartphone through its touch screen, all content on the glasses is operated through this gesture-based method.
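One reason gestures can absorb existing content is that a pinch maps cleanly onto the touch event stream 2-D apps already consume: pinch start is a touch-down, a held pinch is a drag, release is a touch-up. A hypothetical translation layer (the frame format is assumed, not from any real hand-tracking API):

```python
def pinch_to_touch(pinch_frames):
    """Translate per-frame pinch state into the touch events a 2-D app
    already understands: touch_down / touch_move / touch_up.

    Each frame is (is_pinching, cursor_xy); how the cursor position is
    derived from hand tracking is outside this sketch.
    """
    events = []
    was_pinching = False
    for is_pinching, xy in pinch_frames:
        if is_pinching and not was_pinching:
            events.append(("touch_down", xy))   # pinch just started
        elif is_pinching:
            events.append(("touch_move", xy))   # pinch held: drag
        elif was_pinching:
            events.append(("touch_up", xy))     # pinch released
        was_pinching = is_pinching
    return events
```

Because the output is the same down/move/up vocabulary as a touch screen, existing touch-adapted applications need no changes to accept gesture input through a layer like this.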

In some special scenarios, users may still use traditional input devices: a keyboard and mouse when editing text in a desktop scenario, a controller when playing games. This is somewhat like the iPad, where users mainly use touch but in some scenarios pick up the Apple Pencil or a keyboard and mouse.

Detailed interaction design will be developed further in subsequent articles.

 

Content

Two-dimensional content will adapt well to smart glasses at this stage, because gesture-based natural interaction is highly compatible with touch operations. However, the new generation of personal computing platforms will still pose new challenges for designers, such as safety guidelines for wearing glasses and the stacking of "application windows" in space.

In addition, 3D content will grow explosively, and people will quickly turn their enthusiasm to generating and sharing it. Looking at the evolution of two-dimensional content, we went from text (Twitter) to pictures (Instagram) to video (TikTok): the threshold for creating content kept falling while the information density of what was shared kept rising. Three-dimensional content is denser still than two-dimensional content, and a new content platform focused on 3D content may emerge.

Social platforms based on 3D will also flourish at this stage. Users will no longer be satisfied with interacting with friends through a screen, but will "see" friends in reality and engage in face-to-face activities such as playing board games. Social experiences ever closer to real meetings will also push more companies and teams toward virtual offices.
