Elon is right! Lidar autopilot systems are completely useless!

In June, Cornell University will present a paper at the 2019 Conference on Computer Vision and Pattern Recognition describing a breakthrough in the development of autonomous systems, and thereby for autopilot systems like Tesla's.

Cameras have been considered unsuitable for autonomous systems: they are mounted too low, which causes problems for various reasons, such as road grime. Another objection has been that neural networks require a great deal of processing power to analyze the data they produce.
The article is linked and reproduced in full below.


Among the many tidbits of wisdom that Elon Musk dropped at a Tesla company investor event on Monday was the revelation that Lidar, a laser-based scanning technology that images objects in 3D, was “friggin’ stupid,” and that “…anyone relying on LiDAR is doomed.” It seemed a grandiose claim given how many autonomous car initiatives rely on the tech, but Cornell researchers have just backed up Musk’s predictions with a new method for self-driving cars to see the world in 3D using a pair of cheap cameras.

Being able to visualize and detect objects around a vehicle in three dimensions is crucial for autonomous cars to safely operate in a world where roads are shared with other vehicles, cyclists, and often pedestrians. As a driver, every time you turn your head to scan what’s around your car, your brain is instantly visualizing your surroundings in 3D and assessing potential hazards. Using cheap sensors to simply detect objects near a self-driving car isn’t enough. When it’s cruising down the road at 60 MPH, it needs to see what’s ahead and be able to plan for avoiding hazards.

That’s why you’ll often see Lidar (Light Detection and Ranging) systems perched atop autonomous vehicles. Using spinning lasers they scan a vehicle’s surroundings and generate 3D images of objects near and far, allowing the software to analyze the results and pinpoint things to avoid. Lidar’s expensive, though, often adding $10,000 worth of components to a car’s price tag, and it needs to be perched atop a vehicle for the best vantage point. In a time when we’re trying to maximize the range of both gas and electric vehicles, a Lidar upgrade adds a lot of drag to a car’s aerodynamics and its performance.
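The ranging principle behind those spinning lasers is simple time-of-flight: a pulse is fired, its round trip is timed, and distance follows from the speed of light. The sketch below is an illustrative calculation only, not any vendor's actual code:

```python
# Lidar ranging by time-of-flight: a laser pulse travels to the target and
# back, so distance = c * t / 2, where t is the measured round-trip time.

C = 299_792_458.0  # speed of light in m/s

def lidar_distance_m(round_trip_time_s: float) -> float:
    """Distance to a target, given the round-trip time of a laser pulse."""
    return C * round_trip_time_s / 2.0

# A pulse that returns after 200 nanoseconds hit something ~30 m away.
print(lidar_distance_m(200e-9))  # ≈ 29.98
```

A full scanner repeats this measurement millions of times per second across many angles, which is how it builds the 3D point cloud the software then analyzes.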

In a paper that will be presented at the 2019 Conference on Computer Vision and Pattern Recognition in June, Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving, Cornell researchers detail a potential breakthrough for autonomous vehicles. Cameras have typically been considered an inferior technology to Lidar given that they're often installed at low angles, near a vehicle's bumper, resulting in images that tend to distort objects in the distance, which confuses neural networks trying to process and interpret the data.

But a pair of cheap cameras placed on either side of a vehicle, behind its windshield, produces stereoscopic images that can be converted to 3D data. Because the images are generated from a higher vantage point, closer to where Lidar systems are typically installed, the 3D data derived from the cameras was found to be nearly as precise as what laser scanners generate, without distortion, and at a fraction of the cost.
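The conversion from a stereo pair to depth rests on a simple triangulation relation: a point's depth is inversely proportional to its disparity, the horizontal pixel shift between the left and right images. The sketch below is illustrative only, with made-up camera parameters rather than anything from the Cornell paper:

```python
# Stereo ("pseudo-lidar") depth from disparity: for two horizontally aligned
# cameras, Z = f * B / d, where f is the focal length in pixels, B the
# baseline (distance between the cameras), and d the disparity in pixels.

def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point, given its disparity between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero means point at infinity)")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, cameras 0.5 m apart, 25 px of disparity.
print(stereo_depth_m(1000.0, 0.5, 25.0))  # 20.0
```

Doing this for every matched pixel turns the image pair into a point cloud much like a lidar scan, which is why the paper calls the result "pseudo-lidar"; the hard part in practice is estimating disparity accurately, which is where the neural networks come in.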

It will probably be a long time before this research makes its way into self-driving vehicles, however. Lidar is still reliable and incredibly accurate, and companies working on autonomous vehicles are more concerned about safety and liabilities at the moment, instead of costs. But as the technologies improve, the software improves, and restrictions limiting where and when autonomous cars can roam are lifted, self-driving will soon be a big selling point for consumers buying new vehicles—and they do care about costs. Cornell’s approach will make it much cheaper to implement self-driving features on a car, and it could eventually make Lidar obsolete. So maybe Musk was right?

https://gizmodo.com/elon-musk-was-right-cheap-cameras-could-replace-lidar-1834266742

Tesla's full autopilot – 100 times ahead of the competition?

In a recent podcast interview, Elon Musk spoke about how important Tesla's collection of real-world data is. Thanks to that data collection, Tesla is far ahead of its competitors. Ben Sullins explains in this clip how that came to be.

But… how does it work?

To achieve the best possible autopilot, video should also be usable, together with collected road data ("fleet learning") and image data, with the help of smart algorithms and neural networks, instead of trying to build a model of the surroundings from scratch through traditional programming.

Many of the competitors instead collect data (in some cases combined with limited image collection) as follows.
A planned route is driven to scan the surroundings with laser scanning systems such as LIDAR.
Data such as pedestrian crossings is then entered manually. Mapping the surroundings can therefore take many working hours, depending on the length and character of the route. That work is often done in countries where labor costs are low.

Read more at this link about how it works and the levels by which autopilot systems are defined! http://blog.ho-form.se/?p=6370