CNBC AI News – June 16, 2025: Elon Musk Weighs In: Refining the Future of Autonomous Driving
In a recent interview shared by a Tesla vice president, Elon Musk doubled down on his vision for self-driving technology, asserting that the optimal approach combines artificial intelligence, digital neural networks, and cameras.
Musk argued that the world’s road systems are designed for “eyes” and biological neural networks, not for systems that rely on “emitting lasers from the eyes.” He added, “Our vehicles also feature microphones to identify the sounds of emergency vehicles.”
“Moreover, we’ve observed that systems employing multiple sensors often present conflicts. You’re left deciding whether to trust the camera or the lidar, and if the perception system contradicts itself, it can lead to safety issues,” he elaborated.
Yet the industry isn’t entirely in lockstep with Musk’s vision. Several Chinese companies are championing lidar technology, with some, like Huawei, arguing that the approach is a necessity.
At a panel discussion in September 2024, a Huawei representative reiterated the company’s commitment to lidar, a stance predicated on the safety advantages the sensor provides.
The representative stated, “With a camera-only solution and no lidar, if the camera is blinded, it’s game over. Cameras struggle in certain scenarios, such as rain or fog, and have inherent limitations, which is why auto manufacturers should also be deploying lidar and millimeter-wave radar.”
The speaker then added, “Life is invaluable, and the cost of safety is insignificant when compared to it.”
Notably, there is one exception in the marketplace: a company that strongly champions a vision-based, large-model approach. One of its executives has argued that the widespread perception that lidar offers longer detection range is a misconception.
Lidar is an active sensor: it emits near-infrared light and calculates the time of flight (ToF) of the reflected signal to determine the distance to obstacles. This design brings certain shortcomings.
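As a rough sketch of the ToF principle (a generic illustration, not any vendor’s implementation), the pulse’s round-trip travel time maps to distance via the speed of light:

```python
# Minimal sketch of lidar time-of-flight ranging (illustrative only).
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to target: the pulse travels out and back, so halve it."""
    return C * round_trip_seconds / 2.0

# A target at roughly 200 m returns the pulse after ~1.33 microseconds.
print(f"{tof_distance(1.334e-6):.1f} m")  # ≈200 m
```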
As distance increases, the laser beam diverges, spreading its energy over a wider footprint, so energy density falls off in inverse proportion to the square of the distance. At longer ranges, the return-signal intensity and point-cloud density are therefore significantly reduced.
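A back-of-the-envelope illustration of that inverse-square falloff (the 50 m reference range is an assumption chosen purely for illustration):

```python
# Illustrative inverse-square falloff of lidar return energy density.
def relative_intensity(distance_m: float, reference_m: float = 50.0) -> float:
    """Energy density relative to a reference range, assuming I ∝ 1/d^2."""
    return (reference_m / distance_m) ** 2

for d in (50, 100, 200):
    print(f"{d:>3} m: {relative_intensity(d):7.2%} of the 50 m energy density")
# 50 m: 100.00%, 100 m: 25.00%, 200 m: 6.25%
```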
Consider the information an advanced 192-line lidar can extract at 200 meters compared with what an 8-megapixel camera captures at the same range. A vehicle needs fine detail to distinguish, for instance, a flimsy plastic bag from a swiftly moving scooter at 200 m. For detecting distant targets with large-model methods, high-resolution cameras are therefore the better solution.
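To make that comparison concrete, here is a hypothetical sketch; the 192-line and 8-megapixel figures come from the article, while the fields of view, angular resolutions, and object size are assumptions picked purely for illustration:

```python
import math

# Hypothetical sensor parameters (assumed, not from any datasheet).
LIDAR_H_RES_DEG = 0.1           # horizontal angular resolution
LIDAR_V_RES_DEG = 25.0 / 192    # 25-degree vertical FOV over 192 lines
CAM_PIX_RES_DEG = 30.0 / 3840   # 8 MP camera (3840 px wide), 30-degree HFOV

def angular_size_deg(object_m: float, distance_m: float) -> float:
    """Angle subtended by an object of the given size at the given range."""
    return math.degrees(math.atan2(object_m, distance_m))

ang = angular_size_deg(0.5, 200.0)  # a 0.5 m object at 200 m

lidar_points = (ang / LIDAR_H_RES_DEG) * (ang / LIDAR_V_RES_DEG)
camera_pixels = (ang / CAM_PIX_RES_DEG) ** 2
print(f"lidar: ~{lidar_points:.0f} points, camera: ~{camera_pixels:.0f} pixels")
# A handful of lidar returns versus a few hundred camera pixels.
```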
Lidar is also prone to multipath effects and refreshes far more slowly than a camera. As an active sensor, lidar can trigger multiple reflections when measuring complex terrain or distant obstacles; the returns mix and distort, introducing errors that make true targets difficult to identify correctly, or easy to misidentify outright. Meanwhile, the processing frame rate of mainstream lidars is only about half that of a camera, and the lower frame rate further amplifies recognition errors for rapidly moving objects at a distance.
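The frame-rate point can be quantified. Taking the article’s “half the camera’s rate” claim at face value and assuming a 30 fps camera against a 15 Hz lidar (ballpark figures, not measured specs), a fast-moving object travels noticeably farther between successive lidar frames:

```python
# Distance a moving object covers between successive sensor frames.
# 30 fps camera vs. 15 Hz lidar are assumed ballpark rates.
def gap_per_frame_m(speed_kmh: float, frame_rate_hz: float) -> float:
    return (speed_kmh / 3.6) / frame_rate_hz

speed = 72.0  # km/h, i.e. 20 m/s
print(f"lidar  @ 15 Hz : {gap_per_frame_m(speed, 15):.2f} m between frames")
print(f"camera @ 30 fps: {gap_per_frame_m(speed, 30):.2f} m between frames")
# 1.33 m vs. 0.67 m -- coarser temporal sampling of fast, distant objects.
```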
In addition, lidar is highly weather-sensitive, which is exactly the gap millimeter-wave radar is used to fill. Near-infrared light has a short wavelength, and per wave-particle duality, the shorter the wavelength, the stronger the particle-like behavior and the weaker the diffraction. In rain, snow, or fog, this produces a cluster of noise within a few meters of the sensor, and the beam cannot penetrate these obscurants to capture targets behind them, effectively “blinding” the system. Millimeter-wave radar, by contrast, has a much longer wavelength with favorable diffraction properties, allowing it to handle rain and fog far more effectively.
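The wavelength gap behind that argument is easy to put in numbers. Assuming a typical 905 nm lidar and a 77 GHz automotive radar (common values, but assumptions here, not figures from the article):

```python
# Operating wavelength vs. a ~1 mm raindrop (905 nm lidar and 77 GHz
# radar are assumed typical values, not from the article).
C = 299_792_458.0  # speed of light, m/s

lidar_wavelength_m = 905e-9    # near-infrared lidar
radar_wavelength_m = C / 77e9  # 77 GHz millimeter-wave radar
raindrop_m = 1e-3              # rough raindrop diameter

print(f"lidar: {lidar_wavelength_m * 1e9:.0f} nm, "
      f"~{raindrop_m / lidar_wavelength_m:.0f}x smaller than a raindrop")
print(f"radar: {radar_wavelength_m * 1e3:.1f} mm, "
      f"longer than a raindrop, so it diffracts around rain and fog droplets")
```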
Therefore, lidar is a low-information-density sensor that is susceptible to interference, making it ill-suited to serve as the “eyes” for a powerful AI brain.
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/2591.html