Robotic driving and the Eye

Blog vol 6.32.


This past week, The Economist ran an interesting article on autonomous driving. We have come a long way with digital cameras and their use in a variety of robotic applications. The latest Tesla Model S has eight exterior cameras monitoring road conditions. That is definitely more angles than our two eyes can cover, so all those cameras must be better than human eyes, right? Think again.


The problem lies in the difference between optical flow and the motion field. The motion field is the true motion of the scene projected onto the image; optical flow is only the apparent motion of brightness patterns. A camera does not track motion the way we do: all it can do is measure how brightness patterns move across the image. Even with a high-resolution image, the camera can only detect these brightness changes as the object moves by. Ideally for the camera, the image velocity would equal the scene velocity, but the two do not always match.
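The brightness-constancy idea behind optical flow can be sketched in a few lines. This is only a toy one-dimensional illustration, not a production flow algorithm; the sine pattern and the shift value are invented for the example.

```python
import numpy as np

def estimate_shift(frame0, frame1):
    """Estimate a single global 1-D shift from the brightness-constancy
    constraint I_x * v + I_t = 0, solved in a least-squares sense."""
    ix = np.gradient(frame0)   # spatial brightness gradient (per sample)
    it = frame1 - frame0       # temporal brightness change
    return -np.sum(ix * it) / np.sum(ix * ix)   # shift in samples

x = np.linspace(0, 2 * np.pi, 200)
dx = x[1] - x[0]
frame0 = np.sin(x)             # brightness pattern at time t
frame1 = np.sin(x - 0.05)      # same pattern, shifted, at time t+1

shift = estimate_shift(frame0, frame1) * dx    # convert back to x-units
print(round(shift, 3))         # → 0.05, the true shift
```

Note that the camera never sees the object itself here, only the moving brightness pattern; if the lighting changed between frames instead of the object moving, the same code would happily report a "motion" that never happened.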


Unfortunately, cameras take more than half a second to process an image, so a car travelling at 90 kilometres an hour covers over 12 metres on outdated information. That is definitely not good enough for autonomous driving to safely navigate our highways and streets.
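The arithmetic behind that 12-metre figure is quick to check:

```python
# Distance travelled on stale information during camera processing delay.
speed_kmh = 90
delay_s = 0.5

speed_ms = speed_kmh * 1000 / 3600   # 90 km/h = 25 m/s
stale_distance = speed_ms * delay_s  # metres covered during the delay

print(speed_ms)        # 25.0
print(stale_distance)  # 12.5
```

At 25 metres per second, even a half-second delay costs twelve and a half metres; any extra latency beyond that only widens the gap.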


A roboticist, Shuo Gao of Beihang University in China, looked to the human visual system for help. When studying the pathways of visual processing, the place to start is the eye: in the retina, light photons are converted into neural responses, which are processed by the ganglion cells, leave the eye through the optic nerve, and travel to the visual cortex via the LGN.


The lateral geniculate nucleus (LGN) is located in the thalamus. It acts as a relay station for visual signals, where much processing and feedback occurs. Crazily, 95% of the neural input to the LGN comes from the rest of the brain and only 5% from the eyes. The cortex feeds back to the LGN, selectively amplifying or suppressing visual information; factors like emotion and stress affect which stimuli get prioritized.


In this new device, an LGN-like layer was introduced into the artificial vision system to guide the attention of the optical-flow algorithms. The neuromorphic hardware integrates processing and storage functions, which lets the device build up a picture of when motion is occurring. It has increased processing speed by 400% and has actually improved accuracy in some cases.
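The actual device is neuromorphic hardware and its details differ, but the attention-gating idea can be sketched in software: a cheap first stage flags which image regions changed, so the expensive optical-flow stage only runs where something is happening. The block size and threshold here are invented parameters for illustration.

```python
import numpy as np

def attention_mask(frame0, frame1, block=4, threshold=0.1):
    """Flag per-block regions whose brightness changed enough between
    frames to be worth running optical flow on (LGN-like gating)."""
    diff = np.abs(frame1 - frame0)
    h, w = diff.shape
    # Average the change within each block x block region
    blocks = diff.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return blocks > threshold

rng = np.random.default_rng(0)
frame0 = rng.random((8, 8))
frame1 = frame0.copy()
frame1[0:4, 0:4] += 0.5   # brightness change (motion) in one corner only

mask = attention_mask(frame0, frame1)
print(mask)   # only the top-left region is flagged for flow processing
```

The speed-up in this scheme comes from skipping the flow computation in the three unflagged regions; the hardware version goes further by fusing the memory and the computation so the gating itself adds almost no latency.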


We must marvel at how well our visual systems work: we make these visual-pathway decisions in a fraction of a second. By mimicking them, this new technology works that much quicker, but for camera systems to work optimally they still need to lose that 12-metre lag. This is a start.



Til next week,




The good doctor


By Dr. Mark Germain March 5, 2026