What do you know about Tesla’s 2020 self-driving plan?

Unlike other self-driving companies, Tesla turns every car it sells into a data harvester, capable of feeding massive datasets. Tesla uses a machine learning technique called imitation learning, and believes that with enough data it can train self-driving software that performs well. Tesla is building its own training pipeline through a series of projects, and its approach to developing autonomous driving may produce a sudden technological breakthrough at some point, with a disruptive impact on the development of the automotive industry.

In November 2018, the technology publication The Information revealed that Tesla was using a machine learning technique called “imitation learning”:

“Unlike other self-driving companies, all cars Tesla sells are data harvesters, and the vehicle sensors work even when Autopilot is turned off.

As a result, the Autopilot team has an inexhaustible amount of data from which they can discover and imitate how human drivers drive in different scenarios.


Once the useful parts are filtered out, this information becomes extra input for how the vehicle should drive in a specific scene, and it can be used to teach the vehicle how to turn or avoid obstacles.”

The Information also relays the views of imitation learning’s “believers”: Tesla engineers believe that as long as the neural network is fed enough driving data, Autopilot can learn steering, braking, and acceleration control in most situations.

They even believe that in the future there will be no need to write code at all: the autonomous driving system will simply know what to do in a given scenario.

These engineers are not blindly optimistic: a few months after they made these claims, Google’s DeepMind used imitation learning to turn its AI into a top-level “StarCraft” player.

Before that, the game was considered one of the great unclimbed peaks for AI.

Even Alex Irpan, a machine learning engineer on Google Brain (Google’s deep learning research team), was surprised:

“We’d always assumed that long-horizon tasks like this were a dead end for imitation learning. I didn’t think it could play StarCraft at all, so the results were quite unexpected. Our concern was that if the AI makes a mistake, it will find itself in situations where there are no human behaviors left to imitate.”

Irpan speculates that with enough data, DeepMind’s AI can clear this hurdle:

“If you have a very large dataset covering different skill levels, such as a log of all the actions of everyone who has played StarCraft, then the AI has enough data to learn how to recover after a bad decision.”

At Tesla’s Autonomy Day conference in April this year, Tesla AI director Andrej Karpathy personally confirmed the existence of imitation learning.

In fact, he also revealed that imitation learning has been used to a certain extent in the production version of Autopilot.

Karpathy also believes that tasks that are hand-coded today will be better learned by imitation in the future.

Obviously, if all goes well, Tesla could sweep the self-driving industry just as DeepMind swept StarCraft.

More importantly, there are no competitors in the industry at all.

Because other than Tesla, no one has 600,000 vehicles on the road diligently collecting data, vehicles that together accumulate 20 million miles of driving every day across all kinds of road conditions.

Even a player as strong as Waymo has no dataset of this size at hand, because Waymo’s test fleet is only about 0.1% the size of Tesla’s.

How to get really valuable data?

Although many vehicles can collect data, it is not easy to find the useful parts in this randomly uploaded mass. Tesla therefore needs ways to capture a variety of different driving behaviors.

For most companies, manual capture is the most straightforward way.

Tesla engineers can plant upload triggers in the system that automatically save and upload data whenever certain conditions are met, such as an unprotected left turn.

And they do: When the vehicle’s visual neural network detects a traffic light, or the driver turns the steering wheel to the left, the vehicle records the clip and sends it back over Wi-Fi.
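A trigger of this kind can be sketched as a simple predicate over the car’s perception output. Everything below is hypothetical: the field names, the threshold, and the `upload_clip` callback are illustrative stand-ins, since Tesla has not published its trigger code.

```python
from dataclasses import dataclass


@dataclass
class FrameState:
    """Hypothetical per-frame snapshot of perception output and driver input."""
    traffic_light_detected: bool
    steering_angle_deg: float  # negative = turning left


def is_unprotected_left_turn(state: FrameState,
                             steering_threshold_deg: float = -15.0) -> bool:
    """Fires when the vision network sees a traffic light while the
    driver is steering noticeably to the left."""
    return (state.traffic_light_detected
            and state.steering_angle_deg < steering_threshold_deg)


def maybe_upload(state: FrameState, upload_clip) -> bool:
    """Save and upload the surrounding clip when the trigger fires."""
    if is_unprotected_left_turn(state):
        upload_clip()
        return True
    return False
```

In practice many such predicates would run in parallel, one per scenario the engineers care about; the point is only that a trigger is cheap to evaluate on-car, while the expensive upload happens only for matching clips.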

But what about behaviors that the engineers didn’t expect? What if the neural network makes a mistake?

In such situations, an intervention, where the driver overrides Autopilot, is the ideal trigger. Tesla’s approach, Musk said, is to “see all of this as wrong input.”

If an intervention marks Autopilot’s behavior as a mistake, then the human driver’s subsequent actions are the best example of correct behavior.

In this way, Autopilot’s intervention can continue to provide useful training data.

Of course, we can also collect training data in driver operating mode. To this end, Tesla has specially designed a “shadow mode”.

The Information describes it this way:

Some in the industry have commented that Tesla’s so-called “shadow mode” is of little value to the development of new Autopilot software. Yet shadow mode lets Tesla run test software without affecting how the vehicle actually drives.

Autopilot engineers can compare the test software’s output with the vehicle’s actual behavior. In shadow mode, the Autopilot team can compare human and Autopilot responses to the same scenarios.

A neural network trained with imitation learning can run passively on the onboard computer while outputting what it considers the best action. If the human driver reacts differently, that difference can trigger an upload of the segment.

In the event of a “disagreement”, the neural network’s decision is judged wrong, and the driver’s actions become the correct demonstration.

In addition to saving data when human and computer disagree, Tesla can also save data when the neural network “doubts” itself.

Many techniques already exist for quantifying a neural network’s uncertainty; when that value exceeds a certain threshold, a save-and-upload is triggered.

So, broadly speaking, Tesla has three ways of collecting valuable data:

Manually set triggers;

Autopilot intervenes;

Neural networks that operate passively, such as “shadow mode”.
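The last two triggers can be sketched together as a single shadow-mode check: compare the passive network’s preferred action with what the driver actually did, and also flag frames where the network’s action distribution has high entropy. The action space, probability format, and threshold below are all hypothetical illustrations of the mechanism, not Tesla’s actual implementation.

```python
import math


def entropy(probs):
    """Shannon entropy (in nats) of the network's action distribution;
    higher means the network is less sure of itself."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def shadow_mode_trigger(action_probs, human_action,
                        uncertainty_threshold=1.0):
    """Decide whether a shadow-mode frame is worth uploading.

    action_probs: the passive network's distribution over discrete
                  actions, e.g. {"left": 0.1, "straight": 0.8, "right": 0.1}
    human_action: what the driver actually did.
    Returns (should_upload, reason).
    """
    network_choice = max(action_probs, key=action_probs.get)
    if network_choice != human_action:
        # Disagreement: the driver's action becomes a correction example.
        return True, "disagreement"
    if entropy(list(action_probs.values())) > uncertainty_threshold:
        # The network agreed with the driver but was unsure; still worth saving.
        return True, "uncertainty"
    return False, "none"
```

For example, a confident `{"left": 0.1, "straight": 0.8, "right": 0.1}` frame where the driver turned left would upload as a disagreement, while the same frame with the driver going straight would be discarded.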

Given that Tesla takes in 20 million miles of driving data per day, it is not difficult to build large datasets that include different driving habits and error-correction demonstrations.

However, for a company like Waymo, which has only driven 20 million miles in two years, this approach is not practical.

If DeepMind only has data on hundreds of players, can its AI still dominate StarCraft?

I’m afraid there is no hope.

This is not to underestimate them; the reality is that today’s neural networks simply have an enormous appetite for data.

A human can learn to play StarCraft passably in an hour, but training a neural network to the same level requires millions or even hundreds of millions of varied examples.

Likewise, even if a neural network can extract some essence of driving from huge amounts of driving data, it still has to push its error rate below 0.1% to be comparable to humans.

Since the marginal benefit of data diminishes as the amount grows, only a truly massive dataset can solve the problem.

However, a massive dataset is not something anyone can build at will. For example, collecting 10,000 examples of a rare edge case requires accumulating 10 billion miles of driving.

For Tesla, there is hope, because as new vehicles continue to be delivered, those 10 billion miles can be covered within the next 17 months.

With the size of Waymo’s current fleet, it will take 800 years to achieve this goal.
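The back-of-envelope arithmetic behind these timelines uses the article’s own figures; the one-event-per-million-miles rate is implied by the pairing of 10,000 examples with 10 billion miles.

```python
# Rare-event data needs, per the article: 10,000 examples of an extreme
# case require 10 billion miles, i.e. roughly one event per million miles.
examples_needed = 10_000
miles_per_example = 1_000_000
miles_needed = examples_needed * miles_per_example     # 10 billion miles

# Tesla's fleet: about 20 million miles of driving per day.
tesla_miles_per_day = 20_000_000
tesla_days = miles_needed / tesla_miles_per_day        # 500 days, ~16.5 months
# (the article's 17-month figure also folds in ongoing deliveries)

# Waymo: about 20 million test miles over two years, ~10 million per year.
waymo_miles_per_year = 10_000_000
waymo_years = miles_needed / waymo_miles_per_year      # 1,000 years at constant pace
# (the article's 800-year estimate presumably allows for some fleet growth)
```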

Here’s Tesla’s solution to Autopilot:

Imitation learning for driving tasks that cannot be hand-coded, deep learning for computer vision and behavior prediction, and the all-encompassing massive dataset.

Is a sudden leap forward realistic?

Musk believes that Tesla will solve the full self-driving problem by the end of 2020. However, the industry generally believes that Musk’s time point is a bit aggressive.

In their view, the development of autonomous driving technology is a human-paced process.

According to this understanding, developing self-driving software requires engineers to make constant changes to the code, so human speed is the ceiling, and even adding more engineers will not be much faster.

But is that still the right way to think about it?

DeepMind spent three years researching StarCraft. Yet with imitation learning, training the AI to human level took only three days.

Why so fast?

Because the training process can now effectively write the “StarCraft”-playing code by itself; the humans’ work over the first three years went mainly into calibrating that training process, especially the neural network architecture and the datasets.

Rocket engineers are also taking this path. The early design and construction of rockets will take them a lot of time, and they will not even see any success within a few years.

However, when everything is ready, the miracle is witnessed after a ten-second countdown. Viewed on a timeline, nothing seems to happen for years, and then everything erupts in the final seconds.

In fact, research and engineering advances are hidden behind the scenes.

Obviously, Tesla is now building its own training process through a series of projects.

For example, Tesla’s new compute hardware is not yet running at full capacity, and the neural networks designed for it are still being developed and tested; Tesla is gathering strength for the future.

As for the full self-driving software demonstrated in April, it reportedly reflected only three months of neural network training, and the scenarios it targeted were fairly simple.

And just a year earlier, Tesla had only just mastered lane keeping on highways, pushing an OTA update to fix an issue where the vehicle swayed repeatedly between lane lines.

With that upgrade, Karpathy noted, Tesla “rewrote a lot of things.” Since then, Karpathy’s team and other technologists in the Autopilot division have been laying the groundwork for Autopilot, particularly computer vision.

If Tesla doesn’t push the upgrade package, the public won’t see the results of their efforts.

As for whether a given urban driving feature is ready to deploy, that is a binary rather than an incremental question. In the end Tesla decides everything, and the incremental development process is invisible from the outside.

Autonomous Driving Economic Model

Fully autonomous vehicles can not only completely subvert the dynasties that Uber, Didi, Lyft and traditional taxi companies have painstakingly built, but also disrupt the traditional private car ownership system.

According to the American Automobile Association, buying and using a new car costs an average of $0.62 per mile. The investment agency ARK estimates that the production and operation cost of self-driving taxis is only $0.26 per mile.

Applying ARK’s economic model:

“As long as robo-taxi operators set the price at $0.45/mile, slightly below the cost of owning a small car ($0.47/mile), the robo-taxi still earns $0.19 per mile.

Assuming a car that operates 127,000 miles per year (almost 1.8 times the mileage of a traditional taxi today) and charges $0.45 per mile, the annual profit is $26,200.

A million such robo-taxis would net $26.2 billion a year.”

But don’t balk at the figure of 1 million vehicles; taxis at that order of magnitude are nowhere near enough:

Assuming travel demand stays at current levels, each robo-taxi can replace 8-9 conventional cars (which average 13,500 miles per year). Given roughly 1 billion conventional passenger cars in the world, that implies 110-125 million autonomous taxis.
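The arithmetic behind ARK’s figures can be laid out directly, using the numbers quoted above; money is kept in integer cents so the calculation stays exact.

```python
# ARK-style robo-taxi unit economics, using the article's figures.
price_per_mile_cents = 45            # $0.45 fare
cost_per_mile_cents = 26             # $0.26 production + operation
margin_per_mile_cents = price_per_mile_cents - cost_per_mile_cents   # 19 cents

annual_miles = 127_000
annual_profit_usd = annual_miles * margin_per_mile_cents // 100      # $24,130
# (ARK's own model quotes $26,200/year; the gap comes from details of
# its model beyond this simple per-mile sketch.)

# Replacing ~1 billion conventional cars at 8-9 cars per robo-taxi:
conventional_cars = 1_000_000_000
fleet_low = conventional_cars // 9   # ~111 million robo-taxis
fleet_high = conventional_cars // 8  # 125 million robo-taxis
```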

Therefore, there is still a lot of room for growth in the autonomous driving industry.

However, some assumptions in ARK Investment’s model need to be adjusted.

For example, it puts the cost of a self-driving taxi at $50,000, much more than the cheapest Model 3 ($39,000).

Even if Tesla’s gross margin on the Model 3 is only 5%, that shaves another $2,000 or so off the cost. For the Tesla Network, roughly $37,000 is the more reasonable number.

As for a self-driving taxi’s annual mileage, it should be about the same as Uber and traditional taxis: roughly 70,000 miles a year.

It’s important to note that even self-driving cars will have idle time.

At Uber’s level, 64% of the vehicle’s operating time is spent with a passenger on board.

In other words, a net profit of about $4,200 per vehicle per year is more realistic; at that rate, 1 million robo-taxis would net $4.2 billion.

Of course, gross margins are closely tied to price, and $0.45/mile is pretty cheap. As mentioned earlier, the average price per mile for a private passenger car is $0.62.

At a rate of $0.60/mile, the same Tesla would instead net $10,800 per vehicle per year.

No matter how you calculate it, self-driving taxis will be a cost killer:

First, the nature of these vehicles means their cost is spread over many passengers.

Second, self-driving cars do not need drivers.

In addition, we have to consider vehicle lifespan.

For a traditional combustion car, maintenance costs rise quickly after 200,000 miles. Electric vehicles have the advantage here; current Tesla battery packs, for example, can last 500,000 miles without trouble.

That means the average private owner would need to drive for 37 years to exhaust a Tesla battery pack. And new Tesla vehicles with a million-mile lifespan are already in development.
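The lifespan arithmetic follows from the 500,000-mile pack life and the mileage figures used earlier in the piece:

```python
# Battery-pack life versus annual mileage, using the article's figures.
pack_life_miles = 500_000            # current Tesla pack lifetime
private_miles_per_year = 13_500      # average private-car mileage
robotaxi_miles_per_year = 70_000     # Uber/taxi-like utilization

private_years = pack_life_miles / private_miles_per_year    # ~37 years
robotaxi_years = pack_life_miles / robotaxi_miles_per_year  # ~7 years
```

A pack that would take a private owner decades to wear out is consumed in about seven years of taxi duty, which is what the “unlocked asset” framing below refers to.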

In short, robo-taxis unlock a latent tangible asset of electric vehicles: a battery pack’s life no longer goes to waste.

Still, the synergy between robo-taxis and electric vehicles may not be obvious at first. Even so, simply extending a vehicle’s life cycle translates into considerable economic value.

If improvements to batteries and electric motors can add 200,000 miles to an electric vehicle’s life, that is economically equivalent to turning one car into two.

For automakers, selling fewer cars sounds like a death sentence. And from a consumer’s perspective, driving the same car for 30 years could be a nightmare: over such a long span, cars will be reinvented many times over.

But for self-driving cars, none of that is a problem. This is why banking and consulting giants (McKinsey, UBS, Morgan Stanley and others) have long been bullish on robo-taxis, with market-size estimates in the hundreds of billions or even trillions of dollars.

If self-driving cars can be deployed as promised, not only will drivers lose their jobs, but the entire auto industry will be gradually swallowed up.

Even now, though, there are doubts about the technology’s viability: especially whether deep learning can rival humans’ ability to see, predict behavior, and drive in the near- and mid-term.

No one can answer this question yet, but the trend offers some clues:

First, if deep learning can solve these problems in the near and mid-term, Tesla will become a real celebrity in self-driving technology.

Secondly, if Tesla achieves full self-driving, from the outside it will look as if Tesla has switched on a cheat code, sweeping aside everything in its path.

In short, Tesla’s new playbook in autonomous driving gives us a new understanding of how technology progresses.

Slow development in the early stages does not imply procrastination later. A neural network that took years to build may be trained in a few days. And full autonomy may need only one more OTA update to arrive on schedule.