Artificial intelligence drives innovative breakthroughs as car-cockpit interaction modes usher in a period of change

In 2017, the term "Internet of Things" was widely discussed, and by early 2018, "artificial intelligence" had taken center stage. This shift reflects general optimism within the ICT industry about AI's prospects for 2018. According to data from Whale, a venture-capital big-data platform, China's AI sector attracted significant investment in 2016 and 2017, particularly in computer vision, deep learning, autonomous driving, and natural language processing, areas that highlight the growing interest in both visual and linguistic AI interaction.

Artificial intelligence can be viewed as intelligent interaction between machines and humans, as in service robots, or between machines and their environments, as in self-driving cars. The evolution of these interaction models is a key indicator of AI's progress. Even in scenarios where the demands on human-machine interaction are relatively modest, the evolution of in-car human-vehicle interaction has been especially notable: from mechanical buttons to touchscreens, and now to voice, gesture, and facial recognition, the transition has been rapid. In 2016, the global automotive instrument market reached approximately $7.7 billion, up 9% from the previous year, and is expected to reach $9.5 billion by 2020, showing continued potential for expansion.

The timeline of these interactive changes keeps compressing. While the shift from mechanical to digital instruments took decades, voice, gesture, and facial recognition have risen in just a few years. Touch-based virtual instruments are still in a growth phase, yet new interaction methods are already emerging, signaling a dynamic, fast-evolving industry.

Fujitsu introduced its 3D full virtual instrumentation solution at the 2018 Munich Electronics Show, built on the Triton-C chip and a dedicated graphics processor from Socionext. The system supports up to 8 display layers and multiple video inputs, making it well suited to secure, high-performance instrument displays.
Meanwhile, Tesla's Model 3 eliminated traditional controls, replacing them with a single 15-inch touchscreen that integrates ADAS and networking features, a significant step toward fully digitized human-vehicle interaction.

Voice interaction has also grown rapidly. In 2016, natural language processing attracted over 100 investment cases in China, second only to computer vision, and the popularity of speech-recognition applications in 2017 highlighted the maturity of the industry chain, from software and algorithms to sensors and chips. Companies like Amazon (Alexa), Google (Google Assistant), and Microsoft (Cortana) have driven demand, while sensor manufacturers such as Infineon and Bosch have developed high-performance MEMS microphones for far-field voice applications. Qualcomm has made significant strides, supplying chips to Jaguar Land Rover and launching Home Hub platforms based on its SDA624 and SDA212 SoCs; these platforms support the major voice services, demonstrating Qualcomm's broad presence in voice recognition. MediaTek has also been active in the voice-chip market.

While voice interaction has grown quickly, gesture recognition has developed more slowly. Still, companies like Bosch Sensortec have introduced innovative solutions, such as the BML050 laser projection scanner, which enables 3D scanning and in-air gesture control. Sony, after acquiring SoftKinetic, has developed automotive-grade 3D gesture-recognition sensors that enhance driver safety by allowing hands-free operation.

Facial recognition has gained traction, especially with Apple's Face ID on the iPhone X, which boosted interest in 3D facial-recognition technology. STMicroelectronics and ams have supplied components for it, contributing to revenue growth for ams in 2017. Beyond smartphones, facial recognition holds great potential in automotive applications, offering identity authentication and safety monitoring, such as detecting fatigue or distraction.
Research institutions such as Germany's FZI are exploring driver-condition monitoring systems based on facial recognition, and Google has patented optical sensors that can detect vital signs, potentially extending facial recognition into health monitoring.

Overall, car cockpits are transitioning from digitization to intelligence. With advances in sensors and processors, edge computing is enabling more sophisticated AI applications. As human-vehicle and human-computer interaction evolves, high-performance, reliable sensors and chips will play a crucial role, opening new opportunities in the AI market.
