Hi. We're LAWK
Shaping the future
Lawaken Technologies is an AR eyewear company based in Hangzhou, China. The company was founded in 2021 by entrepreneur Roy, who is credited with creating China's first AI-powered smart speaker, Tmall Genie, and who is also vice president of Xiaomi TV. The founding team brings together top talent from the internet, artificial intelligence, and smart-device industries, including individuals with backgrounds at Alibaba, Huawei, Vivo, Xiaomi, Uber, and more.
Lawaken Technologies, with AR devices as its platform and AI as its computational core, is dedicated to seamlessly integrating AR+AI into users' lives, with a focus on outdoor use cases. Our commitment is to make AR devices truly valuable and dependable tools for users. Positioned at the forefront of the AR+AI eyewear industry, we are actively expanding our business globally.
What Have We Achieved
Augmented Reality
Artificial Intelligence
Dual-chip Dual-system Architecture
Binocular Air Screen
The optics use JBD's light engine, which delivers a brightness of up to 3,000,000 nits, combined with the jointly developed "single-unit binocular diffractive waveguide." Average in-eye brightness reaches up to 1,150 nits.
High brightness, non-intrusive
A 30° FOV, a 12 × 10 mm eyebox, and imaging equivalent to a 316-inch screen viewed at 15 meters ensure that information is displayed clearly while blending seamlessly with the external environment.
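As a sanity check, these numbers are mutually consistent (assuming the 30° FOV and the 316-inch size both refer to the diagonal): the apparent screen size follows from simple viewing geometry.

```latex
D = 2\,d\,\tan\!\left(\tfrac{\mathrm{FOV}}{2}\right)
  = 2 \times 15\,\mathrm{m} \times \tan 15^{\circ}
  \approx 8.04\,\mathrm{m}
  \approx 316\,\mathrm{in}
```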
Zero-feel nose support
Our independently developed and patented zero-feel nose support (Patent No. 202223020793.2) combines a unique structural design with careful material selection, delivering a comfortable feel while keeping the total nose-support weight at just 1.7 g.
Customized waveguide
To meet the safety requirements of sports scenarios, conventional waveguides are made thicker than 1.5 mm, which pushes their weight to around 20 g. Through repeated simulation and validation, we customized a stack of a 0.5 mm waveguide with 0.4 mm tempered glass on each side, keeping the overall thickness at 1.3 mm and reducing weight by 30%.
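The stated stack adds up, assuming the 0.4 mm tempered glass is laminated onto each face of the waveguide:

```latex
t_{\mathrm{total}} = t_{\mathrm{waveguide}} + 2\,t_{\mathrm{glass}}
                   = 0.5\,\mathrm{mm} + 2 \times 0.4\,\mathrm{mm}
                   = 1.3\,\mathrm{mm}
```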
Multidimensional Visual Algorithm Model
Voice Interaction
Large models upgrade the interaction experience: the glasses better understand user intent, answer queries with more expert knowledge, and fulfill user needs with higher quality.
Large-Model Dispatching
Built on extensive real-world user data, our self-developed X.Hub recognizes the intent behind each interaction, intelligently routes it to the appropriate large model, and automatically fulfills a variety of complex user needs.
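An intent-based dispatcher of this shape can be sketched as follows. The class, intent labels, handlers, and keyword-based classifier below are purely illustrative stand-ins, not LAWK's actual X.Hub API; a production system would use a learned intent model rather than keyword matching.

```python
# Hypothetical sketch of an intent-based model dispatcher; all names are
# invented for illustration and do not reflect LAWK's real implementation.
from typing import Callable, Dict


class Dispatcher:
    def __init__(self):
        # Maps an intent label to the handler (model endpoint) serving it.
        self.routes: Dict[str, Callable[[str], str]] = {}

    def register(self, intent: str, handler: Callable[[str], str]) -> None:
        self.routes[intent] = handler

    def classify(self, query: str) -> str:
        # Stand-in for a learned intent classifier trained on user data.
        q = query.lower()
        if any(w in q for w in ("navigate", "route", "directions")):
            return "navigation"
        if "translate" in q:
            return "translation"
        return "general_qa"

    def dispatch(self, query: str) -> str:
        # Recognize intent, then route the request to the matching model.
        return self.routes[self.classify(query)](query)


hub = Dispatcher()
hub.register("navigation", lambda q: f"[nav model] {q}")
hub.register("translation", lambda q: f"[translation model] {q}")
hub.register("general_qa", lambda q: f"[general LLM] {q}")

print(hub.dispatch("Navigate me to the trailhead"))
# → [nav model] Navigate me to the trailhead
```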
ARGC Video Generation
Combining visual SLAM, a self-developed 3D rendering engine, and video style transfer to create first-person AR videos from multiple templates.
In-house Developed Low-Power AR Rendering Engine
The engine's underlying architecture adopts an Entity-Component (EC) design; for specific apps, a full Entity-Component-System (ECS) architecture can be employed. Together with dynamic batching and instanced rendering, this comfortably supports large scenes containing thousands of objects.
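To illustrate the pattern (this is a generic sketch, not LAWK's engine code): in an ECS, entities are plain IDs, components are pure data stored per type, and systems are functions that iterate over every entity holding a given set of components.

```python
# Minimal Entity-Component-System (ECS) sketch; all names are illustrative.
from dataclasses import dataclass


@dataclass
class Position:
    x: float
    y: float


@dataclass
class Velocity:
    dx: float
    dy: float


class World:
    """Entities are integer IDs; components live in per-type stores."""

    def __init__(self):
        self._next_id = 0
        self.components = {}  # component type -> {entity_id: component}

    def create_entity(self, *components):
        eid = self._next_id
        self._next_id += 1
        for comp in components:
            self.components.setdefault(type(comp), {})[eid] = comp
        return eid

    def query(self, *types):
        """Yield (entity_id, components...) for entities having all types."""
        stores = [self.components.get(t, {}) for t in types]
        for eid in set(stores[0]).intersection(*stores[1:]):
            yield (eid, *(store[eid] for store in stores))


def movement_system(world: World, dt: float) -> None:
    # A system is pure logic over matching component pairs; batching and
    # instancing optimizations would hook into iteration like this.
    for _, pos, vel in world.query(Position, Velocity):
        pos.x += vel.dx * dt
        pos.y += vel.dy * dt


world = World()
e = world.create_entity(Position(0.0, 0.0), Velocity(1.0, 2.0))
movement_system(world, dt=0.5)
print(world.components[Position][e])  # → Position(x=0.5, y=1.0)
```

Because components of one type are stored contiguously, systems touch only the data they need, which is what makes the pattern cheap enough for low-power, thousand-object scenes.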