Sunday, July 10, 2016

Thoughts on the Tesla Autopilot incident

http://money.cnn.com/2016/06/30/technology/tesla-autopilot-death/index.html?iid=EL

So, all technical aspects aside, I had some thoughts about the Tesla crash. One is that people keep making the same mistakes over and over again, and different people repeat each other's mistakes. This is not the case with machines/learning algorithms. They also learn by example, and this particular scenario will now be well covered in the training data used to detect dangerous situations. So I think there are a few lessons:

1) The training data was sparse with respect to rare corner cases.
2) The technology and training datasets will keep improving until it's much, much safer than driving yourself (ultimately to the point of near-zero risk, I think).
3) There are other scenarios where humans wouldn't react properly/optimally/in time to save a life, compared to an algorithm.
4) The technology will improve very rapidly, and every crash will be heavily scrutinized and ensured not to be repeated by any self-driving car manufacturer.

While the statistics on this are currently somewhat limited (Tesla claims 130 million miles on Autopilot), I think this logic points toward improving safety compared to the average driver. There's also a somewhat irrational phenomenon of trusting ourselves even though our knowledge may be more superficial than others'. But I guess in the end we still end up trusting technology and other people, such as a plane pilot (mostly the plane's autopilot, largely dependent on sensors and instrumentation, I believe) or a surgeon, to do their critical work.
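To give a sense of why one crash in 130 million miles doesn't settle the question either way, here's a rough back-of-the-envelope sketch. The baseline of roughly one US road fatality per ~94 million vehicle miles is my assumption (it's the commonly cited figure from around that time, not something from this post), and treating fatalities as a Poisson process is a simplification:

```python
# Back-of-the-envelope fatality-rate comparison.
# Assumptions (mine, not from the post): US average of roughly one road
# fatality per ~94 million vehicle miles, and fatalities modeled as a
# Poisson process.

autopilot_miles = 130e6    # miles Tesla claimed at the time
autopilot_deaths = 1       # the single known fatality
us_miles_per_death = 94e6  # assumed baseline rate

autopilot_rate = autopilot_deaths / autopilot_miles  # deaths per mile
us_rate = 1 / us_miles_per_death

print(f"Autopilot:  {autopilot_rate * 1e8:.2f} deaths per 100M miles")
print(f"US average: {us_rate * 1e8:.2f} deaths per 100M miles")

# With only one observed event, the uncertainty is huge: the exact 95%
# Poisson (Garwood) interval for an observed count of 1 spans roughly
# 0.025 to 5.57 events, so the data alone can't yet show Autopilot is
# safer or less safe than the average driver.
low = 0.025 / autopilot_miles
high = 5.57 / autopilot_miles
print(f"95% interval: {low * 1e8:.3f} to {high * 1e8:.2f} per 100M miles")
```

The point of the interval is just that a single data point is consistent with rates both well below and well above the human baseline, which is why more miles (and more scrutiny per crash) matter so much.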