
Understanding self-driving cars and how to profit from them

Recognizing an emerging megatrend is key to finding the best stocks to buy to profit from it

If you’ve been reading these issues for a long time, you’ll know that being at the forefront of innovation is central to my core investment approach. As such, I keep a close watch on several corners of the tech world, always on the lookout for the latest and most exciting developments. And in the last few years, one such corner has caught my attention – self-driving cars.

Indeed, I have been optimistic about autonomous vehicles for some time. But while industry developments have been promising, self-driving cars remain “five years away from five years away”…

Until now, that is.

Thanks to the rapid expansion of autonomous ride-hailing services in places like Phoenix and San Francisco, the launch of self-driving trucks in Texas and Arizona, and Elon Musk’s upcoming robotaxi unveiling on October 10, I think the stage is set for self-driving cars to begin transforming the $11 trillion transportation services industry.

Now, this is all great information to know. But it doesn’t mean much if we don’t understand how these vehicles actually work.

After all, understanding a developing megatrend is key to finding the best stocks to buy to profit from it.

Therefore, to potentially turn the Age of AVs into a massive long-term payday, we must first understand how a self-driving car works.

A Tech Trifecta

Essentially, a self-driving car is operated by a combination of sensors – the “hardware stack” – and AI-powered software – unsurprisingly called the “software stack”.

In short, the car’s sensors gather information about its surroundings. AI software then processes that data to determine whether the car accelerates, brakes, changes lanes, turns, etc. And all of this happens in a fraction of a second.

Typically, the “hardware stack” comprises three types of sensor: cameras, radar, and lidar. A typical self-driving car uses all three, as each has strengths and weaknesses that complement the others nicely.

Cameras are used to collect visual data. They capture high-resolution images of the vehicle’s environment, much as a human driver’s eyes do. These cameras help recognize signs, lane markings and traffic lights, and can distinguish between objects such as pedestrians, cyclists and vehicles. They are very good at providing detailed visual information that helps the car understand its surroundings. But they tend to perform poorly in low light or inclement weather.

An AV’s radar sensors emit radio waves that bounce off objects and return to the sensor, providing information about the distance, speed and movement of obstacles in the car’s vicinity. These sensors perform well in all weather conditions (complementing cameras nicely), but offer limited resolution and detail (where cameras excel).

Lidar – which stands for light detection and ranging – is essentially radar powered by lasers. These sensors emit laser pulses that also bounce off surrounding objects and return to the sensor. By measuring the time it takes for the light to return, the lidar can create a high-resolution 3D map of the vehicle’s environment. This provides accurate depth perception, allowing the machine to understand the exact shape, size and distance of surrounding objects. However, lidar does not capture color or texture information (as cameras do).
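The time-of-flight idea behind lidar is simple enough to show with a few lines of arithmetic. Here is a toy sketch (not from the article, and far simpler than a real lidar pipeline): the pulse travels out and back at the speed of light, so halving the round trip gives the distance.

```python
# Toy illustration of lidar time-of-flight ranging.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def lidar_distance(round_trip_seconds):
    """Estimate distance to an object from a laser pulse's round-trip time.

    The pulse travels out to the object and back, so the one-way
    distance is half the total path.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds hit something about 30 meters away.
print(round(lidar_distance(200e-9), 1))  # 30.0
```

A real sensor fires millions of such pulses per second across its field of view, turning these individual distances into the 3D map described above.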

In other words, cameras are used to see things. Radar is used to sense how fast those things are going. And lidar helps calculate the exact position of these things.
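The division of labor above can be sketched as a tiny data structure. This is a hypothetical illustration (the names and numbers are mine, not the industry's): each field of the tracked object comes from the sensor best suited to provide it.

```python
# Toy sketch of sensor fusion: one object, three sensors, one combined picture.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    label: str         # from the camera: what the object is
    speed_mps: float   # from the radar: how fast it is moving
    distance_m: float  # from the lidar: exactly how far away it is

# Hypothetical readings fused into a single record the software stack can act on.
pedestrian = TrackedObject(label="pedestrian", speed_mps=1.4, distance_m=12.5)
print(f"{pedestrian.label} at {pedestrian.distance_m} m, moving {pedestrian.speed_mps} m/s")
```

Real fusion systems are vastly more complex, but the principle is the same: merge complementary readings into one coherent model of the scene.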

In this sense, it is easy to see how these three sensors work together in a self-driving car.
