Hi. We're LAWK
Shaping the future
Lawaken Technologies, founded in 2021, is a pioneering smart eyewear company. As an innovator in AI-centric smart eyewear, we integrate AR technology into intelligent glasses that enhance daily efficiency and elevate quality of life.
We've fine-tuned our own AI model specifically for smart eyewear, ensuring it meets the unique demands of our users. Our goal is to make smart eyewear a tool that significantly improves people's quality of life and efficiency, enabling greater freedom and reducing dependence on smartphones. We strive to make technology more accessible and beneficial, ensuring it serves humanity in meaningful ways.
What We've Achieved
Augmented Reality
Artificial Intelligence
Dual-chip Dual-system Architecture
Binocular Air Screen
The optical module pairs JBD's light engine, with a brightness of up to 3,000,000 nits, with the jointly developed "single-machine binocular diffractive waveguide," delivering an average in-eye brightness of up to 1,150 nits.
High brightness, non-intrusive
With a 30° FOV, a 12 × 10 mm eyebox, and a virtual image equivalent to a 316-inch screen viewed at 15 meters, information displays clearly while blending seamlessly with the external environment.
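As a quick sanity check (assuming the 30° figure is the diagonal FOV, which is our inference rather than a stated spec), basic trigonometry recovers the quoted screen size, and the ratio of in-eye to light-engine brightness gives the implied end-to-end optical efficiency:

```python
import math

# Virtual image size implied by a 30° (assumed diagonal) FOV at 15 m.
fov_deg = 30.0
distance_m = 15.0
diagonal_m = 2 * distance_m * math.tan(math.radians(fov_deg / 2))
diagonal_in = diagonal_m / 0.0254
print(f"virtual image diagonal: {diagonal_m:.2f} m = {diagonal_in:.0f} in")  # ~316 in

# Implied end-to-end optical efficiency of the display path.
engine_nits, in_eye_nits = 3_000_000, 1_150
print(f"implied optical efficiency: {in_eye_nits / engine_nits:.4%}")  # ~0.0383%
```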
Zero-feel nose support
Our independently developed "zero-feel" nose support (Patent No. 202223020793.2) combines a unique structural design with careful material selection, ensuring a comfortable feel while keeping the total nose-support weight at just 1.7 g.
Customized waveguide
To meet the safety requirements of motion scenarios, traditional waveguides are typically built thicker than 1.5 mm, which pushes their weight to around 20 g. Through repeated simulation and validation, we customized a stack of a 0.5 mm waveguide plus 0.4 mm tempered glass on each side, keeping the overall thickness at 1.3 mm and reducing weight by 30%.
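The stack arithmetic is easy to verify; note that the ~14 g result below is simply 30% off the quoted 20 g baseline, a derived figure rather than a published spec:

```python
# Waveguide stack: 0.5 mm waveguide + 0.4 mm tempered glass on each side.
waveguide_mm, glass_mm = 0.5, 0.4
total_mm = waveguide_mm + 2 * glass_mm
print(total_mm)  # 1.3 mm, matching the quoted overall thickness

# A 30% reduction from the ~20 g traditional baseline.
baseline_g = 20.0
print(baseline_g * (1 - 0.30))  # ~14 g (derived, not a published spec)
```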
Multidimensional Visual Algorithm Model
Voice Interaction
Large models upgrade the intelligence of every interaction: better understanding user intent, answering queries more professionally, and fulfilling user needs with higher quality.
Large-Model Dispatch
Built on extensive real-user data, our self-developed large-model dispatch hub, X.Hub, recognizes the intent behind each interaction, intelligently routes it to the right model, and automatically fulfills a variety of complex user needs.
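Since X.Hub itself is proprietary and undocumented here, the following is only a minimal sketch of intent-based dispatching as the description suggests; every handler name and routing rule is hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of intent-based dispatch in the style described for
# X.Hub; the names and rules below are illustrative, not Lawaken's API.

@dataclass
class Handler:
    name: str
    run: Callable[[str], str]

HANDLERS = {
    "navigation": Handler("nav-model", lambda q: f"routing directions for: {q}"),
    "translation": Handler("mt-model", lambda q: f"translating: {q}"),
    "general_qa": Handler("chat-model", lambda q: f"answering: {q}"),
}

def classify_intent(query: str) -> str:
    # A real system would use a learned intent classifier; a keyword
    # stand-in keeps the sketch self-contained.
    if "navigate" in query or "route" in query:
        return "navigation"
    if "translate" in query:
        return "translation"
    return "general_qa"

def dispatch(query: str) -> str:
    handler = HANDLERS[classify_intent(query)]
    return handler.run(query)

print(dispatch("navigate me to the trailhead"))
```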
ARGC Video Generation
Visual SLAM, a self-developed 3D rendering engine, and video style transfer combine to create first-person AR videos from multiple templates.
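The description reads as a three-stage pipeline; the sketch below shows that composition with placeholder functions (all names are ours, not Lawaken's): SLAM supplies per-frame poses, the rendering engine composites template content against them, and style transfer finishes the clip.

```python
# Illustrative three-stage ARGC pipeline: SLAM pose tracking ->
# 3D overlay rendering -> style transfer. All stand-in logic.

def track_poses(frames):
    # Stand-in for visual SLAM: one camera pose per frame.
    return [{"frame": f, "pose": i} for i, f in enumerate(frames)]

def render_overlay(tracked, template):
    # Stand-in for the 3D engine compositing template content per pose.
    return [f"{t['frame']}+{template}@pose{t['pose']}" for t in tracked]

def stylize(rendered, style):
    # Stand-in for video style transfer.
    return [f"{style}({r})" for r in rendered]

frames = ["f0", "f1", "f2"]
video = stylize(render_overlay(track_poses(frames), "ski-template"), "comic")
print(video)
```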
In-house Developed Low-Power AR Rendering Engine
The engine's underlying architecture adopts an Entity-Component (EC) design; for specific apps, an Entity-Component-System (ECS) architecture can be employed, and with dynamic batching and instanced rendering it comfortably supports large scenes with thousands of objects.
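To make the EC/ECS distinction concrete, here is a minimal ECS sketch, not Lawaken's engine: entities are plain IDs, components are pure data, and a system iterates every entity that has the components it needs; dynamic batching and instancing would then group the resulting draw calls by shared mesh and material.

```python
from dataclasses import dataclass
from collections import defaultdict

# Minimal ECS sketch (illustrative only): entities are integer IDs,
# components are plain data, systems hold the logic.

@dataclass
class Transform:
    x: float = 0.0
    y: float = 0.0

@dataclass
class Velocity:
    dx: float = 0.0
    dy: float = 0.0

class World:
    def __init__(self):
        self.next_id = 0
        self.components = defaultdict(dict)  # component type -> {entity_id: instance}

    def spawn(self, *comps):
        eid = self.next_id
        self.next_id += 1
        for c in comps:
            self.components[type(c)][eid] = c
        return eid

    def query(self, *types):
        # Yield component tuples for entities that have every requested type.
        ids = set.intersection(*(set(self.components[t]) for t in types))
        for eid in ids:
            yield tuple(self.components[t][eid] for t in types)

def movement_system(world, dt):
    # One system updates all matching entities in a tight loop; this data
    # layout is what keeps thousand-object scenes cheap to iterate.
    for tf, vel in world.query(Transform, Velocity):
        tf.x += vel.dx * dt
        tf.y += vel.dy * dt

world = World()
for i in range(1000):
    world.spawn(Transform(x=float(i)), Velocity(dx=1.0))
movement_system(world, dt=0.016)
```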