In 2017 the buzzword was the "Internet of Things"; by early 2018 it was "artificial intelligence." The ICT industry appeared broadly optimistic about AI's prospects for 2018. According to data from Whale, a venture-capital big-data platform, the segments of China's capital market that attracted the most AI investment in 2016 and 2017 were computer vision, deep learning, autonomous driving, and natural language processing, which indicates that investors placed a high value on both visual and language-based AI interaction.
From another perspective, artificial intelligence can be seen as intelligent interaction, whether between machines and humans (service robots, for example) or between machines and their environment (self-driving cars). The evolution of these interaction methods is a key indicator of AI's progress. In settings where the demands on human-machine interaction are comparatively modest, such as the car cockpit, the shift has been especially clear: from traditional mechanical buttons to touchscreens, and on to voice, gesture, and facial recognition.
Statistics put the global automotive instrument market at roughly $7.7 billion in 2016, up 9% from the previous year, and it is expected to reach $9.5 billion by 2020, so the growth potential remains. The pace of change in interaction has also accelerated: the transition from mechanical to digital instruments took decades, while voice, gesture, and facial recognition have risen within just a few years. Touch-based virtual instruments are thus still on an upward trend even as the next interaction methods emerge and gain traction.
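As a quick back-of-the-envelope check (my arithmetic, not a figure from the source), those two endpoints imply a compound annual growth rate of roughly 5.4% over 2016-2020:

```latex
\mathrm{CAGR} = \left(\frac{9.5}{7.7}\right)^{1/4} - 1 \approx 0.054 \quad (\approx 5.4\%\ \text{per year})
```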
Fujitsu, for example, has introduced a 3D fully virtual instrument cluster solution that is already in mass production; it is built on Socionext's Triton-C chip and a dedicated graphics processor, and it features advanced display capabilities alongside security features. Touch-interactive virtual instruments are becoming central to cockpit functions and are paving the way for the interaction methods that follow. Tesla, true to its innovative reputation, removed the traditional instrument panel and control buttons from the Model 3 entirely, replacing them with a single 15-inch touchscreen that integrates ADAS and networking functions, a significant step beyond traditional methods toward fully digitized human-vehicle interaction.
Voice interaction has grown just as substantially. In 2016, natural language processing attracted more than 100 investment deals in China, second only to computer vision. The surge of speech recognition applications in 2017 showed how mature the industry chain had become, spanning software, algorithms, sensors, and chips. Amazon (Alexa), Google (Google Assistant), and Microsoft (Cortana) have led on the service side, while hardware suppliers such as Infineon, Bosch, and ams have launched high-performance MEMS microphones for far-field voice capture. Qualcomm, a major player in mobile processors, has integrated voice recognition into its platforms, supporting multiple voice services across regions.
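Far-field voice capture typically combines several MEMS microphones into an array and beamforms toward the speaker. The sketch below illustrates the generic delay-and-sum idea only, not any vendor's actual pipeline; the sample rate, delay, and noise level are all assumed values for the toy example:

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Align each mic channel by its integer sample delay, then average.

    channels: list of 1-D numpy arrays, one per microphone
    delays:   per-channel delay in samples, relative to the earliest mic
    """
    usable = min(len(c) - d for c, d in zip(channels, delays))
    aligned = [c[d:d + usable] for c, d in zip(channels, delays)]
    return np.mean(aligned, axis=0)  # coherent speech adds up, noise averages down

# Toy example: a 300 Hz tone reaching mic 2 five samples later, plus noise.
fs = 16_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 300 * t)
rng = np.random.default_rng(0)
mic1 = tone + 0.5 * rng.standard_normal(fs)
mic2 = np.concatenate([np.zeros(5), tone[:-5]]) + 0.5 * rng.standard_normal(fs)

enhanced = delay_and_sum([mic1, mic2], delays=[0, 5])
```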
While voice interaction has gained momentum, gesture recognition has developed more slowly but still shows promise. At the 2017 Munich electronics show, Bosch Sensortec introduced a laser-projection micro-scanner that combines ToF technology for applications such as distance measurement and in-air gesture control. Sony, after acquiring SoftKinetic, developed 3D gesture-recognition image sensors for automotive use, letting drivers operate controls hands-free and thereby improving safety.
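Continuous-wave ToF sensors of this kind generally infer distance from the phase shift between an emitted modulated light signal and its reflection. A minimal sketch of that conversion follows; the 20 MHz modulation frequency is an assumed illustrative value, not a SoftKinetic or Bosch specification:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(phase_rad, mod_freq_hz):
    """Distance from the phase shift of a continuous-wave ToF measurement.

    The light travels to the target and back, so the round trip covers 2*d:
        d = c * phase / (4 * pi * f_mod)
    The result is unambiguous only up to c / (2 * f_mod), the ambiguity range.
    """
    return SPEED_OF_LIGHT * phase_rad / (4.0 * math.pi * mod_freq_hz)

# Example: a quarter-cycle phase shift at an assumed 20 MHz modulation
d = tof_distance(phase_rad=math.pi / 2, mod_freq_hz=20e6)  # ~1.87 m
```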
Facial recognition has made similar strides, especially after Apple introduced Face ID on the iPhone X. The feature sparked wide interest in 3D facial recognition and pushed companies such as STMicroelectronics and ams to develop near-infrared camera sensors and color/ambient-light sensors. These components not only improved smartphone security but also opened new opportunities in automotive applications: facial analysis can now gauge driver fatigue and concentration, contributing to safer driving.
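One widely published approach to camera-based fatigue detection is the eye aspect ratio of Soukupová and Čech, offered here purely as an illustration of the principle rather than what any of these vendors ship; the threshold and frame-count values are assumptions:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks ordered corner, top-left, top-right,
    corner, bottom-right, bottom-left (the usual 68-point convention)."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal, corner to corner
    return (v1 + v2) / (2.0 * h)          # drops toward 0 as the eye closes

EAR_THRESHOLD = 0.2   # below this the eye is treated as closed (assumed value)
CLOSED_FRAMES = 48    # ~1.6 s at 30 fps before raising an alert (assumed value)

def is_drowsy(ear_history):
    """True if the eyes have stayed below threshold for too many frames."""
    recent = ear_history[-CLOSED_FRAMES:]
    return len(recent) == CLOSED_FRAMES and all(e < EAR_THRESHOLD for e in recent)

# Toy landmark set for a wide-open eye: EAR comes out at 1.0
open_eye = np.array([[0, 0], [1, 2], [3, 2], [4, 0], [3, -2], [1, -2]], float)
print(eye_aspect_ratio(open_eye))
```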
Moreover, a recent Google patent that uses optical sensors to detect vital signs such as blood-flow velocity suggests facial recognition may evolve beyond identity verification into health monitoring. As AI advances, the transition from digital to intelligent interaction in the car cockpit is accelerating. With breakthroughs in sensors and processors, edge computing will enable increasingly sophisticated AI applications. In the future, human-vehicle and human-computer interaction will likely blend seamlessly, built on the high-performance, reliable sensors and processors that form the backbone of edge AI, and that shift promises new opportunities for the AI-driven automotive industry.
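To make the vital-signs idea above concrete: remote photoplethysmography exploits the subtle periodic color changes the pulse causes in skin, so averaging the green channel over a face region and finding the dominant frequency in the heart-rate band yields beats per minute. The sketch below is a simplified illustration of that general principle, not Google's patented method; the data is synthetic and the frame rate is assumed:

```python
import numpy as np

def estimate_bpm(green_means, fps):
    """Estimate pulse rate from a per-frame mean-green-intensity trace.

    green_means: 1-D array, mean green value of the face region per frame
    fps:         camera frame rate in Hz
    """
    signal = green_means - np.mean(green_means)   # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)        # plausible 42-240 bpm band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

# Synthetic 10 s trace at 30 fps with a 1.2 Hz (72 bpm) pulse plus noise
fps = 30
t = np.arange(10 * fps) / fps
trace = (0.05 * np.sin(2 * np.pi * 1.2 * t)
         + 0.01 * np.random.default_rng(1).standard_normal(t.size))
print(estimate_bpm(trace, fps))  # ~72
```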