Friday, May 10, 2019

Tesla Autonomy Investor Day (22 Apr 2019)

I had a chance to watch on YouTube Tesla's unveiling of the custom computers that are now being installed in new Tesla cars. These new computers will allow the vehicles to drive themselves, which is something that still doesn't quite sit right with a lot of people. I wanted to learn more.

Up until now, Tesla has been using off-the-shelf computer components to give their cars intelligence, but Tesla's goal has always been a level of intelligence and autonomy that would be difficult to achieve with existing hardware and software. Let's face it, most computer hardware on the market today was designed for business productivity or gaming, not for driving a car and keeping its passengers safe. Most computers aren't designed to connect to 8 cameras, 12 ultrasonic sensors, a radar, and pedal and steering-angle sensors to build an accurate picture of what's going on around a vehicle for several car lengths. So Tesla hired specialists to build their own task-specific computers, and other specialists to write the neural-net software that runs on them.

The new computer is a dual-redundant design: both sets of circuitry analyze what's coming in from the sensors and cameras, devise a plan, and then compare plans to ensure that no mistake was made. The system can keep operating even if half of it were to malfunction. Tesla's failure-prediction analysis suggests that you're 100 to 1,000 times more likely to lose consciousness at the wheel than the computer is to fail. Having custom-made hardware also makes it possible to run only software that has been cryptographically signed by Tesla; in other words, hacking the car and taking control would be pretty tough to do. The chips themselves are built around a neural-network design, which is quite different from a traditional chip, and they are fast: 144 TOPS (tera-operations per second), or 144 trillion operations per second. If you are a gamer, you know that achieving 60+ frames per second of video is only possible with really great video hardware. Tesla's new computer processes over 3000 frames of video per second. As impressive as this is, Tesla is already halfway through the design of the next-generation computer.
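
To make the redundancy idea concrete, here's a toy sketch in Python. Tesla hasn't published this logic, so the plan format, the tolerance, and all the names below are my own invention:

```python
# A toy sketch of dual-redundant plan checking -- illustration only.
# Tesla has not published this logic; every name here is invented.

def plans_agree(plan_a, plan_b, tolerance=0.05):
    # Both halves must arrive at essentially the same plan before acting.
    return all(abs(a - b) <= tolerance for a, b in zip(plan_a, plan_b))

def drive_step(plan_from_chip_a, plan_from_chip_b):
    # Each half of the computer has already run its own full perception
    # and planning pass on the same sensor input.
    if plan_from_chip_a is None:          # half the system malfunctioned:
        return plan_from_chip_b           # keep driving on the healthy half
    if plan_from_chip_b is None:
        return plan_from_chip_a
    if plans_agree(plan_from_chip_a, plan_from_chip_b):
        return plan_from_chip_a           # normal case: both halves concur
    raise RuntimeError("Plans disagree; hand control back to the driver")

# Example: two near-identical steering plans (radians over the next second)
print(drive_step([0.10, 0.12, 0.13], [0.11, 0.12, 0.13]))
```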

A few people in the audience asked whether Tesla was afraid of other companies stealing the design. Elon Musk said that reverse engineering and building a duplicate system would take about 3 years, assuming the copycats were skilled enough, and that within 2 years Tesla will have released an upgraded design with at least 3 times the capability. Elon also stressed that unlike other companies attempting self-driving, Tesla is collecting and analyzing real-world driving data from 500,000 cars and counting to tune its software. In other words, Tesla has gone into a manic sprint and left the rest of the field far behind.

(Image: what the 8 cameras are seeing in real time in the car.)

In the composite of what the 8 cameras are seeing, the drivable space is shown in blue, with dotted lines identifying the lane markings. Based on the neural-net designer's explanation, the computer's critical task is to recognize everything it sees and determine accurately how far away those things are from the car. This is very difficult for a computer, because while our brains identify objects instantly, a computer only sees billions of pixels of varying brightness. It needs to learn how to put boundaries around those bits of brightness and determine that what it's seeing is most likely a bike, or a person, or a giant truck. The bigger problem is that the 'shape' and brightness of a truck will differ depending on the direction from which it is lit by the sun or streetlights. And if the sun is directly in the camera's line of sight, the truck will appear much darker than it normally would. This is easy for us, because the brain is a high-performing neural network that excels at pattern recognition.

So Tesla's computers are learning about their surroundings from scratch. You might learn what a car looks like from one picture and be able to recognize cars from then on, but a computer cannot do this. A computer needs lots of examples of what a car looks like, from every angle, in every lighting condition. The same goes for bikes, people, trucks, lane markings, signs, construction barriers, and so on. In the real world, each deployed computer builds on what it already knows, reports what it has learned and experienced back to the mothership, and then the whole fleet is updated with more awareness of the world and its possibilities.
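
In code, that learn-report-update loop looks something like this. It's a rough sketch with invented names, the shape of the idea rather than Tesla's actual pipeline:

```python
# A rough sketch of the fleet learning loop -- all names are hypothetical.

def fleet_learning_cycle(fleet, mothership):
    # 1. Each car drives on its current model and flags what confused it
    #    (for example, moments where the driver had to intervene).
    for car in fleet:
        mothership.ingest(car.collect_hard_examples())

    # 2. The mothership folds those examples into its training set and
    #    retrains the network on the ever-growing pool of real-world data.
    new_model = mothership.train(mothership.labelled_dataset())

    # 3. The improved model is pushed back out, so every car benefits
    #    from what any single car encountered.
    for car in fleet:
        car.update_model(new_model)
```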

Some people in the audience wanted to know how Tesla felt about Waymo's suggestion that its self-driving system would be better because of the billions of miles driven in simulation. Elon responded that basing a car's situational awareness on simulations was akin to correcting your own tests: the real world will throw far more unexpected but real situations at the car than could ever be dreamed up in a simulation. So Tesla believes its system will be more street smart (pun intended) than the other systems, which to this day have very little fleet experience, and the gap will only widen as more Tesla cars hit the streets and self-drive. The reason Tesla feels its neural-network computer is best is that it's being trained with lots of data, lots of varied data, and all of it real data. Teslas also have an advantage over humans: a human can only see where their eyes are pointing, while the cameras and sensors give the car a much better vantage point on the world around it, behind, beside, in front, and from higher up than a human's eyes.


The problem gets even more complicated when it comes to object identification and tracking. The neural net might know what a car looks like and what a bike looks like, but what will it do when it sees a bike mounted on a rack on the back of a car? So the computer, using real images from the fleet, looks at many examples of bikes mounted on the backs of cars and learns that this is just a car: one object, not two objects that need to be tracked separately. Now imagine a bike mounted on a car being towed by a motorhome. Do you see why these computers need to be real-world aware? The car needs to know about debris on the road, animals, what construction sites look like, boats being towed, and so on.

So Tesla is notified daily when its existing cars come across things they don't know how to deal with. Every time a driver takes back control from Autopilot, the car sends data about that intervention to the mothership so Tesla can learn what happened and teach the fleet to handle that particular situation. The cars are also asked to watch for very specific situations and send video and sensor data when those situations arise; there's no need to stream video of normal highway driving, because the car already knows how to drive in a lane at speed. Once masses of real data are accumulated, Tesla trains the neural net on what it's looking at and how to deal with it, then uploads that new knowledge to the fleet. Imagine if every time you had a question about something, the answer, once learned, were passed along to everyone else. It almost sounds like an automatic, built-in Google. What's important to note is that the car is already quite capable of driving on its own, but with each new software update it learns how to deal with more and more situations.
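
Here's a guess at what such a "send me this scenario" trigger might look like in code. The names are invented; the point is that the car uploads nothing by default and only sends a short clip when a requested situation actually occurs:

```python
# Hypothetical sketch of scenario-triggered data collection.

ACTIVE_TRIGGERS = {
    "driver_intervention": lambda frame: frame.autopilot_disengaged,
    "cut_in":              lambda frame: frame.vehicle_entered_lane,
}

def maybe_upload(frame, uplink):
    for name, fired in ACTIVE_TRIGGERS.items():
        if fired(frame):
            # Ship a short window of video and sensor data around the
            # event, not a continuous stream of ordinary highway driving.
            uplink.send(name, frame.clip(seconds_before=10, seconds_after=10))
```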

But it gets better. You know how, if you're really paying attention to the cars around you and where their drivers are looking, you can predict with fair accuracy whether they're about to change lanes or cut you off? Well, Tesla cars have been learning this too. A Tesla can react to a car coming into its lane not only because of its processing speed, but also because it recognizes the signs that a lane change is likely to happen soon, such as a car drifting toward the lane marking, regardless of whether the driver is using the turn signal. This training has been happening for years: every Tesla built in the last few years watches the objects around it and predicts what will happen next, even when it's not in self-driving mode. The car then gauges how accurate its predictions were and reports all of the false positives and false negatives back to Tesla, so they can figure out which weaknesses need improving. So, when you get right down to it, Tesla's neural network is learning how to drive from our driving, and better still, Tesla only pays attention to the good driving habits. That's impressive. What's even more impressive is that the car makes path predictions even about things it can't see, like blind corners and curves in the road. This is why back in September 2018 a Tesla could not navigate a cloverleaf interchange, but as of April 2019 it can: a direct result of refined path prediction.
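
A miniature, hypothetical version of that shadow-mode loop (all names invented): the network predicts what will happen, the human actually drives, and only the mismatches get reported back.

```python
# Shadow mode in miniature -- the prediction is never acted on.

def shadow_mode_step(frame, model, uplink):
    predicted_cut_in = model.predicts_cut_in(frame)   # the net's guess
    actual_cut_in = frame.car_entered_our_lane        # what really happened

    if predicted_cut_in and not actual_cut_in:
        uplink.report("false_positive", frame)        # the net cried wolf
    elif actual_cut_in and not predicted_cut_in:
        uplink.report("false_negative", frame)        # the net missed it
    # Correct predictions need no upload; only the weaknesses are interesting.
```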

An audience member asked how the car could possibly figure out when a lane change is safe, considering how unpredictable drivers can be. Again, the answer came down to learning from real-world scenarios. Every time a human driver did or did not change lanes in their Tesla, the neural net learned from it, because it was simultaneously predicting whether a lane change would be safe and observing what the human (and the surrounding cars) actually did. It learns from these experiences and ultimately teaches the rest of the fleet. Tesla is tuning the neural net to make decisions in a conservative driving style, but as it learns more it will offer more aggressive styles while maintaining safety. Owners have reported, for example, that their Model 3 saw cars trying to merge onto the highway from the right and created the gaps they needed to merge safely. As Elon put it, they're training the car to play chicken and win every time. It sounds scary, but in real life, driving is scary. Tesla cars have driven 70+ million miles with Navigate on Autopilot, and the fleet has logged 9+ million successful automated lane changes, with 100,000 more every day, all with no accidents. Those numbers will accelerate rapidly with each passing month.
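
One plausible way the conservative-to-aggressive tuning could be exposed is the same learned gap-acceptance model gated by a different confidence threshold per driving style. This is entirely my own guess at the shape of it:

```python
# Hypothetical driving-style gating; the thresholds are invented.

THRESHOLDS = {
    "conservative": 0.95,   # change lanes only when the net is very sure
    "standard":     0.85,
    "aggressive":   0.70,   # accept tighter gaps, still above a safety floor
}

def should_change_lanes(model, scene, style="conservative"):
    p_safe = model.probability_gap_is_safe(scene)   # learned from fleet data
    return p_safe >= THRESHOLDS[style]
```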

Something I've been curious about, since Tesla's system leverages what the car can see, is what happens when lane markings are hard to see, faded, or non-existent. What happens when the markings are completely covered in snow? The answer is that although Tesla needs to see lane markings some of the time, the car can be trained to figure out where the lanes are even when the markings aren't visible, much as we infer where the lanes are from the width of the visible road and where the other cars are on it. It won't even rely on GPS to assist, because GPS is often wrong about where things are, especially when a road has changed: detours during construction, unplowed lanes, and so on. Tesla even said that although car-to-car communication had been predicted to be necessary for a more intelligent car, the current neural-net intelligence is making it completely unnecessary, in much the same way that humans navigate every driving situation without talking to the other drivers. They simply observe, predict, and react accordingly, and Tesla cars are getting extremely good at the same process.
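
As a toy illustration of the geometry involved in guessing lanes without markings, here's a sketch using nothing but the visible road width and a standard lane width. This is purely illustrative, not Tesla's method, which is learned rather than hand-coded:

```python
# Toy lane inference from drivable-road width alone.

STANDARD_LANE_WIDTH_M = 3.7   # typical highway lane width in meters

def infer_lane_centers(left_edge_m, right_edge_m):
    # Estimate how many lanes fit on the visible road surface...
    road_width = right_edge_m - left_edge_m
    n_lanes = max(1, round(road_width / STANDARD_LANE_WIDTH_M))
    lane_width = road_width / n_lanes
    # ...and place a centerline in the middle of each inferred lane.
    return [left_edge_m + lane_width * (i + 0.5) for i in range(n_lanes)]

print(infer_lane_centers(0.0, 11.1))   # -> approximately [1.85, 5.55, 9.25]
```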

Again, the more cars we drive in the snow, the better the system will get, and in a rather short period of time. In the grand scheme of things, the car is more interested in drivable space than in where the lanes are, because when it comes to accident avoidance, the car needs to know where it can go while maintaining control as it avoids an animal, another car losing control, and so on. Elon said that in the next iteration of the software, people will be amazed at how far the car's driving skills have evolved. The goal is for the car to be a better driver than any human, in all scenarios. Elon expects the system will be good enough that we won't need to monitor the car's driving by mid-2020, and convincing regulators that it's safe should come not long afterward, at least in some jurisdictions. The cars will also park themselves and connect to chargers on their own. Elon went so far as to predict that once Tesla cars prove their mettle, consumers won't be interested in driving much anymore, because it will be more dangerous than letting the car do it.

Now things get really interesting. The next goal is to enable robotaxis sometime in 2020, pending regulatory approval. That's taxis with no driver. Any Tesla owner would be able to add their vehicle to the robotaxi fleet at times they dictate, and could even limit sharing to their social-media friends and co-workers. The money earned (some predictions put this as high as $30,000 annually) would offset some or even all of the monthly car payments. This would also make cars roughly 5 times more useful, considering how many more hours of use they would get. The next generation of battery packs is designed to last 1 million miles before needing replacement, in line with the expected longevity of the rest of the car, which today sells for about USD $38,000. In time, Teslas will ship without steering wheels, pedals, and the other parts only a human driver requires; Elon suggests this could bring the cost down to USD $25,000 while making the cars lighter with better range, perhaps by 2023. AAA indicates that the average all-in cost of owning a gasoline car is $0.62 per mile; Elon predicts the average cost to run a robotaxi will be $0.18 per mile or less.
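
A quick back-of-envelope check on those numbers: the two per-mile costs come from the talk, while the fare and annual mileage below are my own assumptions, chosen to show how a figure near $30,000 a year could arise.

```python
# Back-of-envelope robotaxi economics. COST figures come from the talk;
# the fare and mileage are my own hypothetical assumptions.

COST_PER_MILE = 0.18        # Elon's predicted robotaxi operating cost
AAA_COST_PER_MILE = 0.62    # AAA's all-in figure for a gasoline car

fare_per_mile = 0.50        # assumed fare charged to riders (hypothetical)
annual_miles = 90_000       # assumed paid miles per year (hypothetical)

gross_margin = fare_per_mile - COST_PER_MILE
print(f"Margin per mile:  ${gross_margin:.2f}")
print(f"Annual earnings:  ${gross_margin * annual_miles:,.0f}")
# -> Margin per mile:  $0.32
# -> Annual earnings:  $28,800 (near the $30,000 prediction)
```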
