Robotic driving and the Eye

Blog vol. 6.32.


This past week, there was an interesting article in The Economist on autonomous driving. We have come a long way with digital cameras and their use in a variety of robotic applications. The latest Model S from Tesla has eight exterior cameras to monitor road conditions. That is definitely more angles than we can cover with our two eyes, so all those cameras must be better than the human eye, right? Think again.


The problem lies in the difference between optical flow and the motion field. The motion field is the true motion of the scene projected onto the image; optical flow is only the apparent motion of brightness patterns. A camera does not track objects the way we do; all it can measure is how brightness patterns shift from frame to frame. Even with a sensor packed with pixels, the camera can only detect these brightness changes as an object moves by. Ideally the two would coincide, with image velocity equal to the projected scene velocity, but changing lighting, reflections, and textureless surfaces can pull them apart.
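To make the idea concrete, here is a minimal sketch of how optical flow is commonly estimated: assume brightness constancy (a pixel keeps its brightness as it moves) and solve for velocity by least squares. This is a textbook Lucas-Kanade-style toy on synthetic frames, not the system discussed in the article, and all names in it are my own:

```python
import numpy as np

# Toy optical-flow estimate under the brightness-constancy assumption:
# Ix*vx + Iy*vy = -It, solved by least squares over the whole image.
# Assumes one small, global translation between the two frames.

def global_flow(f1, f2):
    """Estimate a single (vx, vy) image velocity from two frames."""
    Iy, Ix = np.gradient(f1)          # spatial brightness gradients
    It = f2 - f1                      # temporal brightness change
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (vx, vy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return vx, vy

# Synthetic scene: a Gaussian blob shifted by 1 pixel in x.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((x - cx)**2 + (y - cy)**2) / 50.0)
f1, f2 = blob(30, 32), blob(31, 32)

vx, vy = global_flow(f1, f2)          # vx comes out close to 1.0, vy near 0.0
```

Note what this recovers: the motion of brightness patterns in the image, which matches the true scene motion only in this ideal, cleanly lit synthetic case.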


Unfortunately, the delay while a camera pipeline processes its images can exceed half a second. At 90 kilometres an hour, that is 25 metres per second, so a car covers roughly 12 metres while acting on outdated information. That is definitely not good enough for autonomous vehicles to safely navigate our highways and streets.
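The arithmetic behind that figure is worth a quick check (the 90 km/h speed and half-second delay are the values given in the text):

```python
# Back-of-the-envelope check of the article's numbers.
speed_ms = 90 * 1000 / 3600      # 90 km/h converted to m/s -> 25.0
latency_s = 0.5                  # "more than half a second" of processing delay
blind_distance_m = speed_ms * latency_s
print(blind_distance_m)          # prints 12.5
```

So by the time the system has finished looking, the world it analysed is a bus-length behind it.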


A roboticist, Shuo Gao of Beihang University in China, looked to the human visual system for help. When studying the pathways for visual processing, the place to start is the eye: in the retina, photons of light are converted into neural responses, which are processed by the ganglion cells, leave the eye along the optic nerve, and travel to the visual cortex via the LGN.


The Lateral Geniculate Nucleus (LGN) is located in the thalamus. It acts as a relay station for visual signals, and much processing and feedback occurs there. Remarkably, about 95% of the neural input to the LGN comes from the rest of the brain, and only about 5% comes from the eyes. Feedback from the cortex selectively amplifies or suppresses visual information, so factors like emotion and stress can prioritize certain stimuli over others.


In this new device, an LGN-like layer was introduced into the artificial vision system to guide the attention of the optical-flow algorithms. The neuromorphic hardware integrates processing and storage in the same place, which lets the device build up a picture of where and when motion is occurring. It is reported to have increased processing speed by 400% while actually improving accuracy in some cases.
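The attention idea can be illustrated in software, with the caveat that this is my own toy sketch of the concept and not the published hardware design: an "LGN-like" gain mask amplifies the pixels that matter before the flow solve, so the estimate is dominated by the attended region rather than by everything in view.

```python
import numpy as np

# Toy illustration of attention-gated optical flow (my own sketch, not
# the Beihang design): pixel weights from a gain mask enter a weighted
# least-squares solve of Ix*vx + Iy*vy = -It.

def attended_flow(f1, f2, gain):
    Iy, Ix = np.gradient(f1)
    It = f2 - f1
    w = np.sqrt(gain.ravel())                 # sqrt so weights act as gain in the normal equations
    A = np.stack([Ix.ravel() * w, Iy.ravel() * w], axis=1)
    b = -It.ravel() * w
    (vx, vy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return vx, vy

# Scene with two blobs moving in opposite directions.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx: np.exp(-((x - cx)**2 + (y - 32)**2) / 18.0)
f1 = blob(20) + blob(44)
f2 = blob(21) + blob(43)                      # left blob moves +1 px, right blob -1 px

attend_left = np.where(x < 32, 1.0, 1e-3)     # "amplify" the left half of the scene
vx, vy = attended_flow(f1, f2, attend_left)   # vx comes out positive: the attended motion wins
```

With a uniform mask the two opposite motions would nearly cancel; with the left half amplified, the estimate follows the attended blob.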


We must marvel at how well our own visual systems work. By mimicking them, this new technology runs that much quicker, and yet our brains still make these visual-pathway decisions in a fraction of a second. For camera systems to work optimally, they need to lose that 12-metre lag. This is a start.



Til next week,




The good doctor


By Dr. Mark Germain April 16, 2026