Videos about Neural networks
Past Tesletter articles
A few weeks ago, we included some videos with the rendering of the Voxel neural nets. This week the same person, @rice_fry, is comparing the neural network in 10.5 vs. a previous version. It is kind of amazing how much more detail is in there now.
From issue #192
In response to Chuck’s video above, Elon said that the car will move into tighter gaps as they enhance the velocity predictions for crossing traffic in the NN. According to him, next month’s version (10.69.3) has significant improvements there.
From issue #234
If you’ve been following jimmy_d’s posts (and our shares) about Tesla’s Neural Networks (NN), here’s a new entry on vehicle classification and calibration, triggered by software update 2018.18. Enjoy!
Read more: TMC Forum
From issue #8
jimmy_d has recently discovered new neural networks for which there’s no metadata but complete execution code, network weights, and so forth. It looks like these neural networks have been in the car for a long time. “There’s lots of stuff that I don’t know yet. It’s unlikely that all these networks are used in all cars because some of them are redundant,” he says.
Read more: TMC Forums
From issue #4
The wide release of v9 isn’t here yet, but there is a lot of new information about it, as you will see by the number of videos and photos below. Elon Musk said: “Got to make sure we iron out the details though. Long tail of tricky edge cases”. Some news since last week:
- Confirmation that EAP in v9 uses the repeater and B-pillar cameras.
- New neural network for the rearview camera. Rumors before v9 were that it wasn’t going to be used for EAP or FSD since it didn’t have any neural network, but it seems like it is going to be used after all!
- ‘Navigate on Autopilot’ is in, but ‘Automatic Lane Change’ is gone. The driver will have to do a single pull on the AP stalk to confirm each lane change. From there the car will turn on the blinker, change lanes, and turn it off.
- Allows lane changes at any speed (not just above 30 mph), and will slow down and find a place in traffic to merge. appleguru used it today in heavy traffic to merge into traffic moving at 2-3 mph and was really impressed.
- Uses maps to help with navigating tricky road sections.
jimmy_d shared another spectacular analysis of the NN in v9. His first post analyzes the NN called AKNET_V9. Here are some details, but you should go and read the original post on TMC:
- One unified camera network handles all 8 cameras vs. a separate network per camera in previous versions
- The same weight file is used for all cameras (this has pretty interesting implications; previously, v8’s main/narrow networks seem to have had separate weights for each camera)
- Full resolution for the 3 front cameras and the rear camera (1280x960) and 1/2 resolution for the pillar and repeater cameras (640x480)
- All cameras analyze 2 frames at the same time (most likely so the network can see motion from each camera)
- The size of the network makes jimmy_d wonder about the amount of training data Tesla is using on their back end, and whether they are manually tagging all the images, since that seems like a ton of manual labor. In his own words: “there aren’t enough humans to label this much data”
- I really like his closing statement: “As a neural network dork I couldn’t be more pleased.”
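The structure described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Tesla’s code: one shared weight set serves all 8 camera streams, front/rear cameras at full resolution, pillar/repeater at half resolution, with 2 stacked frames per camera. The camera names and the per-pixel linear “backbone” are stand-ins for the real convolutional network.

```python
import numpy as np

# Sketch of the AKNET_V9 ideas described above (assumptions, not Tesla's
# code): one shared weight file serves all 8 cameras, front/rear at
# 1280x960, pillar/repeater at 640x480, 2 stacked frames per camera.
CAMERAS = {
    "main": (960, 1280), "narrow": (960, 1280), "fisheye": (960, 1280),
    "rear": (960, 1280),
    "left_pillar": (480, 640), "right_pillar": (480, 640),
    "left_repeater": (480, 640), "right_repeater": (480, 640),
}
FRAMES = 2              # 2 consecutive frames, presumably for motion cues
CHANNELS = FRAMES * 3   # 2 RGB frames stacked on the channel axis
SCALE = 8               # shrink resolutions 8x just to keep this demo light

rng = np.random.default_rng(0)
# ONE shared weight matrix: a per-pixel projection from 6 input channels
# to 8 feature channels (a stand-in for the real convolutional backbone).
shared_w = rng.standard_normal((8, CHANNELS))

def backbone(frames):
    """Apply the shared weights to a (CHANNELS, H, W) frame stack."""
    return np.tensordot(shared_w, frames, axes=([1], [0]))  # -> (8, H, W)

features = {}
for name, (h, w) in CAMERAS.items():
    stack = rng.standard_normal((CHANNELS, h // SCALE, w // SCALE))
    features[name] = backbone(stack)  # same weights, any resolution

for name, f in features.items():
    print(name, f.shape)  # full-res cameras: (8, 120, 160); half-res: (8, 60, 80)
```

The interesting property is that `shared_w` is the only weight tensor in play: because it is applied per pixel, the same weights work at both resolutions, which is what makes a single shared weight file across all cameras plausible.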
In a later post jimmy_d mentions that AKNET_V9 might not be the network that’s currently driving the car, since he thinks it’s too big to run on HW2 or HW2.5. His best guess is that it can only run at 3 fps, and that doesn’t seem fast enough to be usable. However, it seems possible that this is the network that FSD will use, since on HW3 it could run at 30 fps. The current firmware includes a number of different NNs, and it isn’t easy to tell which are used and which are not; a bunch of them are an evolution of what we got in 8.1, but AKNET_V9 seems like a completely different beast.
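The 3 fps vs. 30 fps reasoning checks out on the back of an envelope. The compute figures below are rough public ballpark numbers I’m assuming (PX2-class throughput for HW2.5, Tesla’s stated figure for HW3), not numbers from jimmy_d’s post:

```python
# Back-of-the-envelope sketch of the "3 fps on HW2.5, ~30 fps on HW3"
# argument. Compute figures are rough public ballpark assumptions.
HW25_TOPS = 10     # assumed HW2.5 (Nvidia Drive PX2-class) throughput
HW3_TOPS = 144     # Tesla's stated figure for the HW3 FSD computer
FPS_ON_HW25 = 3    # jimmy_d's best-guess rate for AKNET_V9 on HW2.5

# If 3 fps saturates HW2.5, each frame implicitly costs this much compute:
ops_per_frame = HW25_TOPS / FPS_ON_HW25
# The same network on ~14x the compute would then run at roughly:
fps_on_hw3 = HW3_TOPS / ops_per_frame
print(f"estimated HW3 frame rate: {fps_on_hw3:.0f} fps")  # ~43 fps
```

Even with generous error bars on the assumed TOPS numbers, the result lands comfortably past the ~30 fps mentioned above, which is why the “too big for HW2.5, fine on HW3” guess is plausible.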
As always, it is a delight to read you, jimmy_d. Keep up the good work!
Read more: TMC Forum
From issue #29
“Lots of exciting recent work in large-scale distributed training of neural nets: (very) large-batch SGD, KFAC, ES, population-based training / ENAS, (online) distillation, …” - said Andrej Karpathy on Twitter.
According to jimmy_d, Karpathy’s tweet refers to recent and substantial advances in techniques that enable efficient partitioning of experiments across thousands of machines.
Read more: TMC Forums
From issue #10
jimmy_d did an awesome job explaining to us mortals how the new neural networks pushed in 2018.10.04 work. The three main types he observed are called main, fisheye, and repeater. “I believe main is used for both the main and narrow forward facing cameras, that fisheye is used for the wide angle forward facing camera, and that repeater is used for both of the repeater cameras,” he says.
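His inferred camera-to-network routing can be written down as a simple table. This is just a sketch: the network names are jimmy_d’s labels, and the mapping is his inference, not something confirmed by Tesla.

```python
# jimmy_d's inferred mapping from camera feed to neural network type in
# 2018.10.04 (his inference, not confirmed by Tesla).
NETWORK_FOR_CAMERA = {
    "main": "main",               # main forward-facing camera
    "narrow": "main",             # narrow forward camera shares the main net
    "wide": "fisheye",            # wide-angle forward-facing camera
    "left_repeater": "repeater",  # both repeaters share one network
    "right_repeater": "repeater",
}

def network_for(camera):
    """Return which of the three network types handles a camera feed."""
    return NETWORK_FOR_CAMERA[camera]

print(network_for("narrow"))  # prints "main"
```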
Read more: Teslamotorsclub.com
From issue #1
Tesla is working on updating all neural networks to surround video using subnets on focal areas, which is delaying a new FSD Beta update. In exchange, “This is evolving into solving a big part of physical world AI,” said Elon Musk. A totally worthwhile tradeoff!
From issue #153
Tesla-related stuff in this talk starts at minute 15:30. Some interesting bits:
- Since he joined - 11 months ago - the neural network has been taking over the AP code base
- Tesla has created a ton of tooling for the people who tag images so they can be more efficient
- Around minute 20 he shows why labeling something seemingly easy, like lane lines, isn’t as easy as it looks
- Min. 22:30 - He shows traffic lights; some are really crazy!
- He talks about how random data collection doesn’t work for Tesla. For instance, if you want to identify when a car changes lanes and you collect images at random, more often than not the blinkers are going to be off
- Min. 25:40 - He mentions auto wipers, how crazy the dataset is, and that it ‘mostly works’