Adding a comment here with some info on LIDAR human safety, since many are asking.
There are two wavelengths of interest used:
a) 905 nm/940 nm (roof and bumpers): 70–100 µJ per pulse max, regulated by IEC 60825 since this wavelength is focused onto the retina
b) 1550 nm systems (the Laser Bear Honeycomb): 8–12 mJ per pulse allowed (roughly 100x more, since this wavelength is absorbed at the cornea and never reaches the retina)
The failure mode of these LIDARs can be akin to a weapon. A stuck mirror or frozen phased array turns into a continuous-wave pencil beam.
A 1550 nm LIDAR leaking 1 W continuous will raise corneal temperature by >5 °C in 100 ms. The threshold for cataract formation is only a 4 °C rise.
A 905 nm Class 1 system stuck on one pixel puts 10 mW continuous on the retina, capable of creating a lesion in 250 ms or less.
20 cars at an intersection = 20 overlapping scanners, meaning even if each meets single-device Class 1, linear addition could give your retina a 20x dose, enough to push into Class 3B territory. The current regs (IEC 60825-1:2014) assume single-source exposure. There is no standard for multi-source, multi-axis, moving-platform overlay.
Additionally, no LIDAR manufacturer publishes beam-failure shutoff latency. Most are >50 ms, which can be long enough for permanent injury.
The article talks about eye safety a bit in section 4.
> a stuck mirror
This is one of the advantages of using an array of low power lasers rather than steering a single high power laser. The array physically doesn't have a failure mode where the power gets concentrated in a single direction. Anyway, theoretically, you would hope that class 1 eye-safe lidars should be eye safe even at point blank range, meaning that even if the beam gets stuck pointing into your eye, it would still be more or less safe.
> 20 cars at an intersection = 20 overlapping scanners, meaning even if each meets single-device Class 1, linear addition could offer your retina a 20x dose enough to push into Class 3B territory.
In the article, I point out a small nuance: If you have many lidars around, the beams from each 905 nm lidar will be focused to a different spot on your retina, and you are no worse off than if there was a single lidar. But if there are many 1550 nm lidars around, their beams will have a cumulative effect at heating up your cornea, potentially exceeding the safety threshold.
Also, if a lidar is eye-safe at point-blank range, then with multiple cars tens of meters away, beam divergence already starts to reduce the intensity; and when the lidars are scanning properly, the probability of all of them pointing at the same spot at the same time is vanishingly small.
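To put rough numbers on the divergence point, here's a toy Python calculation (the power, aperture, divergence, and pupil values are made-up illustrative numbers, not from any real device):

```python
import math

# Illustrative numbers only, not from any particular lidar.
P_avg   = 5e-3     # 5 mW average optical power out of the aperture
d_exit  = 5e-3     # 5 mm exit aperture
theta   = 3e-3     # 3 mrad full-angle divergence
d_pupil = 7e-3     # 7 mm dark-adapted pupil

def into_pupil_mw(R):
    d_beam = d_exit + 2 * R * math.tan(theta / 2)   # beam diameter at range R
    frac = min(1.0, (d_pupil / d_beam) ** 2)        # fraction caught by the pupil
    return 1e3 * P_avg * frac

for R in (0.1, 1.0, 10.0, 30.0):
    print(f"{R:5.1f} m: {into_pupil_mw(R):.3f} mW enters the eye")
```

Even with these generous assumptions, by a few tens of meters the beam is far wider than a pupil, so only a small fraction of an already eye-safe beam can enter the eye.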
By the way, the Waymo Laser Bear Honeycomb is the bumper lidar (940 nm iirc) and not the big 1550 nm unit that was on the Chrysler Pacificas. The newer Jaguar I-Pace cars don't have the 1550 nm lidar at all but have a much bigger and higher performance spinning lidar.
> > a stuck mirror
Detect the mirror being stuck and shut the beam off. Easy.
Hint: how bad would it be if the MCU in your gas heating boiler latched up and wouldn't shut the burner off? How is this mitigated?
This was addressed in the original comment:
> Additionally, no LIDAR manufacturer publishes beam-failure shutoff latency. Most are >50ms, which can be long enough for permanent injury
You should be able to do it way faster than that.
Pressure switches, flow sensors, mechanical flame detectors, power supply monitoring, watchdog timers, and XX years of Honeywell or whoever knowing what they are doing.
So yes, a mirror trip reset is probably a good start. But would I trust someone's vision to this alone?
> Pressure switches, flow sensors, mechanical flame detectors, power supply monitoring, watchdog timers, and XX years of Honeywell or whoever knowing what they are doing.
Nope, nothing as complicated as that. You're close with the watchdog timer.
The solenoid is driven by a charge pump, which is capacitively coupled to the output of the controller. The controller toggles the gas grant output on and off a couple of times a second, and it doesn't matter if it sticks high or low - if there are no pulses the charge pump will "go flat" after about a second and drop the solenoid out.
Do the same thing. If a sensor at the edge of the LIDAR's scan misses a scan, kill the beam.
Same way we used to do for electron beam scanning.
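A minimal sketch of that dead-man's-pulse idea applied to a lidar interlock, in Python rather than real firmware (the signal names and the 10 ms timeout are hypothetical):

```python
import time

SCAN_EDGE_TIMEOUT_S = 0.010   # hypothetical: kill the beam if no edge pulse for 10 ms

class BeamInterlock:
    """Laser stays enabled only while scan-edge pulses keep arriving.
    Mirrors the boiler charge-pump idea: a stuck controller (high or low)
    stops producing pulses, and the enable decays away on its own."""
    def __init__(self):
        self.last_edge = time.monotonic()

    def on_scan_edge_pulse(self):
        # Called whenever the photodiode at the edge of the scan sees the beam sweep past.
        self.last_edge = time.monotonic()

    def laser_enable(self) -> bool:
        # Polled by (or, better, wired into) the laser driver.
        return (time.monotonic() - self.last_edge) < SCAN_EDGE_TIMEOUT_S
```

As with the boiler, you'd really want this path in analog or hard logic rather than software, so a hung controller can't keep the enable asserted.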
>> if there are no pulses the charge pump will "go flat" after about a second and drop the solenoid out.
>> Do the same thing. If a sensor at the edge of the LIDAR's scan misses a scan, kill the beam.
Sounds like a great plan, but I question the "about a second" timing; the GP post calculates that "about a second" is between 4X and 10X the time required to cause damage. So, how fast do these things scan/cycle across their field of view? Could this be solved by speeding up the cycle, or would that overly compromise the image? Change the scan pattern, or insert more check-points in the pattern?
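To put toy numbers on that question (the spin rates and checkpoint counts below are invented, not from any real unit):

```python
# Worst-case time to notice a stuck scan, if the interlock is fed by optical
# checkpoints spread around the rotation (all values hypothetical).
for rot_hz in (10, 20):                  # plausible spin rates for mechanical lidars
    for checkpoints in (1, 4, 16):
        worst_case_ms = 1000.0 / (rot_hz * checkpoints)
        print(f"{rot_hz:2d} Hz spin, {checkpoints:2d} checkpoints/rev -> "
              f"worst case {worst_case_ms:6.1f} ms to notice a missed checkpoint")
# Compared with the ~100-250 ms damage times quoted upthread, a single checkpoint
# per revolution is marginal, but ~16 per revolution gets detection under 10 ms
# even at a 10 Hz spin rate.
```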
> The array physically doesn't have a failure mode where the power gets concentrated in a single direction
Ok, but now the software can cause the failure. Not sure if that's much of a relief.
For the downvoters:
https://en.wikipedia.org/wiki/Beamforming
It is possible for the array to produce a concentrated beam into one direction. The software determines when that happens and in what direction.
Beamforming with a phased array is talked about in the article, but you are conflating two very different types of arrays. You can't form beams with the types of macroscopic arrays I was referring to, since they consist of macroscopic array elements whose phase cannot be controlled, and reside behind a fixed lens that ensures that they all point in different directions.
Yeah, I was surprised stories of Volvo's EX90 lidar damaging camera sensors didn't get more traction: https://www.thedrive.com/news/volvo-ex90s-lidar-sensor-will-...
One would hope there would be more regulation around this.
A quick note about units -- you correctly quote the limits as an energy-per-pulse limit. The theory behind this is that pulses are short enough that rotation during a pulse is negligible, so they tend to hit a single point (on the retina, at focusable wavelengths; the cornea itself at longer wavelengths), and the absorption of that energy is what causes damage. But LiDAR range is determined not by energy per pulse, but by power. This drives a desire for minimum-time pulses, often < 10 ns -- if you can halve your pulse length, you can increase your range substantially while still being eye-safe. GaN FETs are one of the enabling technologies for pulsed lidar, since they're really the only way out there to steer tens of amps in single-digit nanoseconds. Even once you've solved generating short pulses, though, you still need to interpret short responses. Which drives either a need for very fast ADCs (gigasample+) or TDCs, which are themselves fascinating components.
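A back-of-envelope sketch of the energy-vs-peak-power trade (toy numbers; the fixed detection threshold and the 1/R² extended-target scaling are my assumptions, not from the comment above):

```python
import math

# Toy numbers only -- not from any datasheet.
E_pulse = 1e-6                      # 1 uJ of optical energy per pulse (hypothetical budget)
c = 3e8

for tau in (20e-9, 10e-9, 5e-9):    # pulse widths
    P_peak = E_pulse / tau          # same energy squeezed into less time
    print(f"tau = {tau*1e9:4.1f} ns -> peak power {P_peak:6.1f} W, "
          f"range resolution ~{c*tau/2:4.2f} m")

# If detection needs a fixed peak received power, and the return from an extended
# diffuse target falls off ~1/R^2, usable range scales ~sqrt(peak power):
R_ref, tau_ref = 100.0, 20e-9       # hypothetical: 100 m reach with a 20 ns pulse
for tau in (10e-9, 5e-9):
    print(f"shrinking to {tau*1e9:.0f} ns -> ~{R_ref*math.sqrt(tau_ref/tau):.0f} m")
```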
I live in town, and walk past countless stopped and moving cars every day.
I also know how the tech industry makes decisions about safety and responsibility (hahaha...). And I have seen some of the recent changes that automakers have somehow slipped past safety regulators. So it seems foolish to trust any of them on this safety issue.
Do we all have to move to rural areas, if we want to be able to go outside without wearing laser safety goggles?
Time to make a sunglasses and window tint company specifically formulating their products to shield from these kinds of lasers.
You know, I was just thinking a headset like Oculus would be pretty great for night driving if it was sensitive enough; my night vision is getting really bad, and coupled with most new cars having annoyingly bright LED lamps with auto high beams, it's getting super uncomfortable to drive in the dark. Then it would automatically shield your eyes from lasers too!
One could go further, and have an integrated system where the headset shows video feed from cameras around the car. You could almost get a 3rd person view of your own car like in video games.
If you find anything, please let me know. The least obtrusive option I've found is this Zeiss lens coating ("Thermo Force") which claims to block 90% of IR "between 780 and 2000 nm", which covers the NIR used in both types of lidar. It only seems to be available as part of sunglasses though.
Laser safety glasses are off-the-shelf, but are usually tuned for a single stop band. There are at least three wavelengths of near-IR LiDAR in use in the wild.
Hm, the ones I can find have a heavy green reflex, and their optical density seems about twice or thrice what you'd need for a 1 W CW laser. Maybe it's unavoidable given the closeness of near IR and deep red, but I wonder if there exist glasses with a cutoff sharp enough, and a reduced OD, so as to not result in a noticeable color shift.
We always used Thorlabs' [0]. If you want to block Waymo (900 - 940 nm), but not Ouster (840 nm), the LG11s may work and are quite transmissive and neutral, but I've never used them. The LG20s are the standard NIR blockers that I'm familiar with.
[0] https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=76...
I was thinking more "one could create a business" vibe. The requirements for making everyday protections from errant lasers is different from what you'd need for lab safety working with lasers.
I was always curious about this; it's impossible to find any safety certifications or details about the lidars used by e.g. Waymo. Are we supposed to just trust that they didn't cut corners, especially given the financial incentive to convince people that lidar is necessary (because there's a notable competitor that doesn't use it)?
To date most Class 1 lasers have also been hidden/enclosed I think (and Class 1M is only safe when not viewed through magnifying optics), so I'm not convinced that the limits for long-term daily exposure have been properly studied.
Until I see 3rd-party studies otherwise, I plan to treat vehicle lidar no differently than laser pointers and avoid looking directly at them. If/when such cars become common enough that this is too hard to do, maybe I'll purchase NIR-blocking glasses (though most of the ones I found have an ugly green tint; I wonder if it's possible to make the frequency cutoff sharp enough that it doesn't filter out visible reds).
How would you go about "avoid looking directly at" invisible LIDAR beams from cars passing you on the road?
Every day dozens of Waymos are in close proximity to the people cleaning them and plugging them in, and they are maneuvering in tight spaces amongst other Waymos. That's not a peer reviewed study, but it seems to work.
The visual system can patch over tiny defects (see: blindspot) and visual field tests have not been part of standard yearly eye exams I've been to. And possible longer-term risks (say increased risk of cataracts) would be harder to conclusively show. And the sample size involved would skew heavily towards young healthy adults instead of people with pre-existing eye conditions.
I realize it's not easily possible to prove the negative, but when you're exposing the public the burden must be on the company to be transparent and rigorous. And from what I see it's difficult to even find certification documents for the lidars used in commercial self-driving vehicles, possibly because everything is proprietary and trade secret.
That's not even a study. That's an opinion, based on guesses.
“…Every day dozens of cigarettes are smoked in close proximity to other people… that’s not a peer reviewed study, but it seems to work…” - someone probably, sometime in the 1950s
The fried iPhone pixels are spooky. Eyes clearly aren't as affected, but they also aren't as easy to replace.
A camera CMOS sensor dies at a 1–2 µJ pulse, and that same pulse energy reaches the cornea. If a sensor has fried, the dose to your eye is already in the zone of cataract creation. Human corneal endothelial cells do not regenerate. If the endothelium is damaged, stromal fluid accumulates and opacity can develop progressively over months to a year from a single hit. You might never know what caused it.
Workplace injuries have never been swept under the rug, especially if those cleaners are non-persons in the eyes of the government.
That's a possibility. Google appears to be contracting out depot work to car rental companies because a Waymo depot is basically a car rental lot. They need three shifts for each depot. So there's probably a couple hundred people who would otherwise be cleaning out rental cars working the depots. At some point injuries would get hard to sweep under the rug.
> There are two wavelengths of interest used
Ouster uses (or at least used to use, not sure if they still do) 840 nm. Much higher quantum efficiency for standard silicon receivers, without having to play games with stressed silicon and stuff; but also much tighter focusing onto the retina, so lower power permitted.
The incredible irony is that reduced injury and death from collisions is how these things are sold to regulators and cities, but no one mentions that in a city full of millions of poorly maintained lidars, they just might slowly make everyone blind instead.
Enormous complexity, safety risks, and completely unnecessary for successful level 4 FSD - the hurdle to full autonomous driving was basically jumped by Tesla this year. I don't see why lidar is even allowed in public at this point, it seems dangerous enough that you'd want it effectively restricted to highly regulated and licensed uses, like military or academic scanning, with all sorts of deliberate safeguards and liability checks.
Social media is full of little clips of lidar systems burning out camera pixels, and I'm sure big proponents of the tech have paid people off over eye injuries at this point. There've probably been a ton of injuries that just got written off as random environmental hazards, "must have looked at the sun" etc.
It's nuts that this stuff gets deployed.
> the hurdle to full autonomous driving was basically jumped by Tesla this year.
Tesla doesn't have driverless operations anywhere, and their Austin fleet consists of <30 vehicles with full-time safety drivers that have a far worse safety record than Waymo vehicles.
It's not nothing, but it's a long way from being a complete system (let alone the obviously superior one).
IIRC, Tesla's safety record is about 30% worse than Waymo's. The gap has been closing rapidly. It wasn't that long ago that Tesla made an order of magnitude more mistakes than Waymo.
That's with safety drivers, a small fleet, and literally only the most recent data (since it wasn't broken out before). My experience with AV deployments is that your incident rate is significantly different once you remove humans, and small fleet sizes/deployment areas hide a lot of long tail issues.
Waymo is operating at a much larger scale across a huge range of conditions with hardware that's generations behind their latest and still performing better.
Ok, I guess we'll need to wait until Tesla removes the safety drivers to see the whole truth.
No need to wait. It's got the same reliability as FSD. If people were sitting in the back seat taking a nap while FSD drove, I'd believe it's comparable to Waymo.
If that was true they would've already released a full, no safety driver service. They have not.
I built a LiDAR system for an autonomy company in the past. This is a great write-up, but it omits what I found to be one of the more interesting challenges. For our system (bistatic, discrete edge-emitting laser diodes and APDs; much like a Velodyne system at a high level), we had about an inch of separation between our laser diodes and our photodiodes. With 70 A peak currents through the laser diodes. And nanoamp sensitivity in the photodiodes. EMI is... interesting. Many similar lidars ignore the problem by blanking out responses very close to firing time, giving a minimum range sensitivity, and by waiting for the maximum delay to elapse before firing the next salvo -- but this gives a maximum fire rate that can be an issue. For example, a 32 channel system running at 20 kHz/channel would be limited to ~200 m range (32 × 20 kHz means ~1.56 µs between firings, i.e. ~468 m of round-trip path, with some blanking time needed)... so to get both high rate (horizontal resolution) and high channel count (vertical resolution), you need to be able to ignore your own cross-talk and be able to fire while beams are in flight.
200 m range seems adequate for passenger vehicle use. Even at 100 kph that's over 7 seconds to cover the distance, even if you aren't trying to slow down. I think there are diminishing returns in chasing even longer ranges. Even fully loaded trucks are expected to stop in about 160 m or so.
Yep, 200 m is pretty close to standard. Which is why 32 channel and 20 kHz is a pretty common design point. But customers would love 64 channel and 40 kHz, for example. Also, it's worth noting that if your design range is 200 m -- your beam doesn't just magically stop beyond that. While the inverse square law is on your side in preventing a 250 m target from interfering with the next pulse, a retro-reflector at 250 m can absolutely provide a signal that aliases with a ~16 m signal (assuming ~234 m of unambiguous range between pulses) on the next channel under the right conditions. This is an edge case -- but it's one that's observable under steady-state conditions, it's not just a single pulse that gets misinterpreted.
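A quick sanity check on those numbers, assuming the 32 channels fire sequentially:

```python
C = 3e8                                  # speed of light, m/s
channels, rate_per_channel = 32, 20e3    # the design point from the comments above

fire_interval = 1 / (channels * rate_per_channel)    # time between successive firings
rt_path = C * fire_interval                          # round-trip light path per slot
print(f"{fire_interval*1e6:.2f} us between firings -> {rt_path:.0f} m round trip "
      f"-> ~{rt_path/2:.0f} m unambiguous one-way range")

# The retro-reflector edge case: a very strong return from beyond that range
# arrives during the *next* channel's listening window and looks close.
retro = 250.0                                        # m
print(f"a retro-reflector at {retro:.0f} m shows up as a ~{retro - rt_path/2:.0f} m "
      f"ghost on the following channel")
```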
Don't these things use Gold codes or something similar to eliminate temporal aliasing problems? I guess that wouldn't make multipath issues go away completely, but it could fix the case you're referring to.
You can, and we did an extremely limited form of that — see other comment on reducing correlations. But you have an energy limit from eye safety concerns, so energy spent on spreading the signal over time and modulating it directly takes away from the peak power that gives you range. And doing non-trivial modulation isn't easy — most of these pulses are generated by a capacitive discharge, which limits shaping.
100kph is rather slow. Relative speeds of cars exceed that regularly even on city streets. Relative speeds in excess of 200kph are common outside cities.
In case anyone else made the same mistake as me:
I wrote a whole paragraph, then realised that "relative speeds" means the sum of opposing speeds, i.e. two cars going in opposite directions at 50 km/h each make up a relative speed of 100 km/h.
>we had about an inch of separation between our laser diodes and our photodiodes
Why can't you place them further away from each other using an additional optical system (i.e. a mirror) and adjusting for the additional distance in software?
You can, but customers like compact self-contained units. All trade offs.
Edit: There's basically three approaches to this problem that I'm aware of. Number one is to push the cross-talk below the noise floor -- your suggestion helps with this. Number two is to do noise cancellation by measuring your cross-talk and deleting it from the signal. Number three is to make the cross-talk signal distinct from a real reflection (e.g. by modulating the pulses so that there's low correlation between an in-flight pulse and a being-fired pulse). In practice, all three work nicely together; getting the cross-talk noise below saturation allows cancellation to leave the signal in place, and reduced correlation means that the imperfections of the cancellation still get cleaned up later in the pipeline.
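A toy illustration of approach number two, cross-talk cancellation by template subtraction (the waveforms, template, and threshold below are all fabricated):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                                  # samples in one listening window
t = np.arange(n)

# Fabricated waveforms: a big EMI burst at firing time plus a weak real echo.
xtalk_template = 0.8 * np.exp(-t / 5.0) * np.cos(t)      # characterized once, e.g. lens capped
echo = 0.05 * np.exp(-((t - 120.0) ** 2) / 8.0)          # real return near sample 120
rx = xtalk_template + echo + 0.005 * rng.standard_normal(n)

# Approach two: subtract the measured cross-talk, then threshold what's left.
cleaned = rx - xtalk_template
hits = np.flatnonzero(cleaned > 0.03)
print("echo found around sample:", int(hits.mean()) if hits.size else "nothing")
```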
For those that have never seen it, Sebastian Thrun's talk at Google after winning the DARPA Grand Challenge is excellent (and he's very funny).
So many great lines:
- "We tried to find the smoothest thing in the frame but the smoothest thing turned out to be the sky"
- "We had it adapt to rough terrain by having me drive the car and it learned from my driving. Granted, it drives like a German now."
- "Nobody tells you their sensor error rate so we had to drive the car around and have the car learn the error probabilities"
- "Nobody needs to tell you this but Stanford students are amazing"
- "A lot of the people who entered are what I would call: 'car nuts' "
https://www.youtube.com/watch?v=PXQlpu8Y4fI
Some other comments have posted about the laser safety being the issue but I have a more physical story:
Recently got a Waymo for the first time to take my kids and me from one hotel to another in Phoenix.
- Car pulls up
- I walk up to the trunk as I have a suitcase
- Out of habit, I go to open the trunk by pressing the button under the "handle" (didn't realize you have to unlock the car via the app first)
- My hand moves past the rear trunk lidar, which is spinning, and it "whacks" my hand.
Not a big deal but seems like an interesting design choice to place a motorized spinning device right next to where people are going to be reaching to open the trunk.
The externally spinning Waymo Laser Bear Honeycombs do indeed cause whacking and pinching, and occasionally get gunked up with wet leaves and debris. One reason they are exposed like that is that they have very large fields of view: a cylindrical plastic cover seriously degrades optical quality, especially when the beam hits it at a steep angle. Another reason is that there's a heatsink on the back of the spinny part. Earlier Waymos like the Firefly did in fact cover up this lidar, e.g. on the "nose" and the side mirrors [1]. But they went back to leaving it exposed for better performance.
Likewise with the big spinning lidar on top, which was covered in the older Chrysler Pacificas but externally spinning in the newer Jaguar I-Paces.
[1] https://commons.wikimedia.org/wiki/File:Waymo_self-driving_c...
A correction here: FMCW lidar does not need fiber lasers. In fact most fiber lasers are actually very difficult to frequency-sweep internally. Typical lasers used in swept-wavelength interferometry (which is really the same thing) are so-called external cavity lasers, which rely on a laser diode plus an external cavity, e.g. with wavelength-selective feedback (still comparably expensive, though).
Baraja's selling point was AFAIK that they used an integrated swept laser source (those typically have lower coherence, but you can work around that in DSP).
On a related note Volvo just dropped lidar and may not be activating it for cars already sold with it...
https://tech.yahoo.com/transportation/articles/volvo-ends-re...
In the current state of self-driving tech, lidar is clearly the most effective and safest option. Yet companies like Tesla refuse to integrate lidar, preferring to rely solely on cameras. This is partially to keep costs down. But this means the Tesla self-driving isn't quite as good as Waymo, which sits pretty comfortably at level 4 autonomy.
But humans have no lidar technology. We rely almost solely on sight for driving (and a tiny bit on sound I guess). Hence in principle it should be possible for cars to do so too. My question is this: at what point, if at all, will self-driving get good enough to make automotive lidar redundant? Or will lidar always be able to make self-driving 1% better than cameras alone?
I can't speak for lidar, but the Tesla self driving with cameras only on HW4 in my little Model 3 is so good that I don't even think about it anymore. I never thought I would trust this type of technology.
Over the last 2 days I drove from Greenville, SC to Raleigh, NC (4-5 hours) and back with self driving the entire way. Traffic, Charlotte, navigating parking lots to pull into a super charger. The only place I took over was the conference center parking lot for the Secure Carolina's Conference.
It drives at least as well or better than me in almost all cases...and I'm a pretty confident driver.
I say all that to say this...I can't imagine lidar improving on what I'm already seeing that much. Diminishing returns would be the biggest concern from a standpoint of cost justification. The fact that this type of technology exists in a vehicle as affordable as the Model 3 is mind blowing.
Anecdotal evidence isn't super useful here in preventing tragedy, because the people with negative anecdotes might be dead, and thus cannot give them.
To wit: plenty of other Tesla owners in a similar position to yours probably similarly praised the system, until it slammed them into a wall, car, or other obstacle, killing them.
The one good thing about death statistics is that they are difficult to hide or game the reporting thresholds.
https://www.tesladeaths.com/
Autopilot kills loads of people, but my understanding is that Autopilot is the dumb driver assist while FSD is the one that tries to solve general-purpose driving.
Has FSD really only killed 2 people? FSD has driven 6 billion miles and the human driver death rate is about 10 per billion miles, so it has killed 2 where "as good as a human" would mean 60. That seems really good tbh.
EDIT: and it looks like "deactivate before collision" doesn't work as a cheat, NHTSA requires reporting if it was active at any time within 30 seconds of the crash: https://www.nhtsa.gov/laws-regulations/standing-general-orde...
There are unquestionably some cases where Lidar adds actual data that cameras can't see and is relevant to driving accuracy. So the real question is whether there are cases where Lidar actually hurts. I think that is possible but unlikely to be the case.
I think the safety of other humans' eyes (lidar exposure) is the real negative for lidar use.
The MKBHD YouTube video where he shows that his phone camera has burned-out pixels from reviewing lidar-equipped cars is revealing (if I recall correctly, he shows it happening live). I don't want that pointed at my eye.
I love lidar from an engineering / capability perspective. But I grew up with the "don't look into a laser!" warnings everywhere, even on super low power units... and it's weird that those have somehow gone away. :P
If cameras end up only slightly better than humans - who cause 40k deaths annually in the US and 1M worldwide, or a world war's worth of deaths every 15 years or so - but are rapidly deployable due to cost, they will save more lives than a handful of lidar cars.
As far as Tesla, time will tell. I ride their robotaxis daily and see them performing better than Waymo, but it's obviously meaningless until we see accident stats after they remove safety monitors.
> I ride their robotaxis daily and see them performing better than Waymo, but it's obviously meaningless until we see accident stats after they remove safety monitors.
I've seen this claimed a lot but never have gotten a definitive answer.
Is this like "overall better but hard to pinpoint" or "this maneuver is smoother than Waymo" or something in between?
Would love to hear experiences with them since they're so limited currently.
Yeah Tesla has more smoothing, but IMO that's less interesting than the ability to navigate tricky scenarios and model other actors. Here's my collection of interesting videos, Tesla only because those are the ones I get forwarded to me. I'd love to see a similar collection for Waymo.
Crowd: https://www.youtube.com/watch?v=3DWz1TD-VZg
Negotiation: https://www.youtube.com/shorts/NxloAweI6nU
Model: https://www.youtube.com/shorts/KVa4GWepX74
Smoother maneuvers, or things like seamlessly backing up a bit when it predicts that a large vehicle turning from an intersecting street won't have enough room to turn unless the car moves out of its way. It's really cool.
Human eyes are incredible in so many dimensions, and that’s before you go to our embedded evolved world models and reflexes.
I think a future where cameras are more eye like would be a big leap forward especially in bad weather - give them proper eyelids, refined tears, rotating ability, actual lenses to refocus at different distances, etc.
> We rely almost solely on sight for driving (and a tiny bit on sound I guess).
And proprioception. If I'm driving in snowy conditions, I'm definitely paying attention to whether the wheels are slipping, the car is sliding, the steering wheel suddenly feels slack, etc. combined with memorized knowledge of the road.
However, that's ... not great. It requires a lot of active engagement from the driver and gets tiring fast.
Self-driving can be way better than this.
GPS with dead reckoning tells the car exactly where it is relative to a memorized map of the road--it won't miss a curve in whiteout conditions because it doesn't need to see the curve--that's a really big deal and gets you huge improvements over humans. Radar/lidar will detect a stopped car in front of you long before your sight will. And a computer system won't get tired after driving in stressful conditions for a half hour, etc.
Let's just do a quick comparison: the visual cortex consumes about 10x more volume of the human brain than the language center. So... that's a rough comparison of difficulty. I seem to remember the visual centers are also a lot older, evolutionarily, than the language centers?
> My question is this: at what point, if at all, will self-driving get good enough to make automotive lidar redundant?
By 2018, if you listen to certain circa-2015 full self-driving technologists.
Many humans do a really bad job at driving, so I'm not sure we should try to emulate that.
And it is certain that in India they use sound for echolocation.
> Many humans do a really bad job at driving, so I'm not sure we should try to emulate that
Agreed, but there are still really good human drivers, who still operate on sight alone. It's more about the upper bound, not the human average, that can be achieved with only sight.
That upper bound can be pretty low in bad lighting conditions. If you have no strategy to work around that, your performance is going to be bad compared to vehicles with radar and lidar. On top of all that, Waymo's performance advantage might come in part from the staggering amount of geospatial data available to Waymo vehicles and unique to Waymo's parent company.
The second and third place companies in terms of the number of deployed robotaxis are both subsidiaries of large Chinese Internet platforms, and both of them are also leaders in providing geospatial data and navigation in China. Neither operates camera-only vehicles.
I learned a lot from this article. The breakdown of the different LiDAR types and how they fit into real automotive sensor stacks was especially helpful. Nice to see a clear explanation without the usual hype or ideology around cameras vs. LiDAR.
I am surprised that I didn’t see discussion about Audi’s lidar that’s been in use in production vehicles now. Yes, it’s on a different level, only used for ADAS, but it’s still lidar that’s actively used.
If I remember correctly, the Valeo Scala that's in the Audi cars uses a discrete 16 element 1D array (940 nm diodes + APDs) plus a rotating mirror to scan.
No mention of flash LIDAR, which really ought to be used more for the short-range units for side and rear views.
Interference between LIDARs can be a problem, mostly with the continuous-wave emitters. Pulsed emitters are unlikely to collide in time, especially if you put some random jitter in the pulse timing to prevent it. The radar people figured this out decades ago.
A flash lidar is simply a 2D array of detectors plus a light source that's not imaged. It's mentioned super briefly at the start of section 3 but you're right, I should have gone into more detail given how common and important they are.
For pulsed emitters, indeed adding random jitter in the timing would avoid the problem of multiple lidars being synced up and firing at the same time. For some SPAD sensors, it's common to emit a train of multiple pulses to make a single measurement. Adding random jitter between them is a known and useful trick to mitigate interference. But in fact it isn't super accurate to say that interference is a problem for continuous-wave emitters either. Coherent FMCW lidar are typically quite robust against interference by, say, using randomized chirp patterns.
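A tiny Monte Carlo of why per-shot jitter helps (every number here is invented): with fixed relative timing the interference shows up at the same apparent range on every shot, i.e. it looks like a real target; with jitter it smears out and simple temporal filtering rejects it.

```python
import numpy as np

rng = np.random.default_rng(1)
C, shots = 3e8, 200

# Per-shot delay of the interfering lidar's pulse relative to our own firing instant.
def ghost_ranges(jitter_s):
    jitter = rng.uniform(0.0, jitter_s, shots) if jitter_s > 0 else np.zeros(shots)
    offset = 2.0e-7 + jitter                  # a fixed 200 ns lag, plus optional jitter
    return offset * C / 2                     # apparent range of the interference

fixed = ghost_ranges(0.0)
jittered = ghost_ranges(1e-6)                 # up to 1 us of per-shot jitter

print("fixed timing: ghost pinned at", np.unique(np.round(fixed, 1)), "m on every shot")
print("with jitter : ghosts smeared over", round(float(jittered.max() - jittered.min())), "m,")
print("so a median over a handful of shots throws the interference away")
```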
I do wonder what’s preventing a Lidar device from cross talking with other lidar devices. I remember talking to somebody about this and they told me that each signal is uniquely encoded per machine.
This seems like it will be a growing problem with increased autonomy on the roads
It is likely to be similar to how a dozen or more GPS sats can use the same frequency at once without interfering with each other. The outgoing signal from each satellite is modulated with a maximal-length shift register sequence for that specific bird, each sequence being chosen for both minimal autocorrelation with itself and minimal cross-correlation with the others.
I'm not aware of the inner workings of automotive lidar, but I can't imagine building one that didn't work that way.
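A toy version of that GPS-style trick, using random ±1 codes as stand-ins for true maximal-length or Gold sequences:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1023                                       # same length as the GPS C/A codes
code_a = rng.choice([-1.0, 1.0], N)            # stand-in codes; real systems use
code_b = rng.choice([-1.0, 1.0], N)            # maximal-length / Gold sequences

def circ_corr(x, y):
    """Normalized circular correlation at every lag."""
    return np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(y))).real / len(x)

print(f"autocorrelation peak : {circ_corr(code_a, code_a).max():.2f}")          # ~1.0
print(f"worst cross-corr     : {np.abs(circ_corr(code_a, code_b)).max():.2f}")  # small, ~0.1
```

Correlating against your own code gives a sharp peak only for your own transmissions; everyone else's signal averages down toward the noise floor.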
See my other comments in this discussion. For long-range pulsed LiDAR, full modulation is not feasible due to the firing circuits used. Minimal modulation can be used, and jitter injection means that any incident is likely to affect a single sample, not be repeated; but the main protection is narrow field of view and a duty cycle well under 0.1%.
The discrete array must be accurate for them to be close like that and not get overlap (e.g. receiver 1 gets the beam from emitter 2).