Occupancy maps are not new. ROS' SLAM systems use them. But it seems Tesla has taken the idea and run with it about as far as you can go.
I think there are some ideas here that could be used in our lower-cost, simpler robots. If you use ROS mapping you already have an occupancy map. But Tesla does not use a static map. They do these things (a rough code sketch of the core idea follows the list):
1) build the map on the fly as the vehicle moves
2) because the map is constantly being rebuilt, it includes dynamic obstacles like pedestrians or trash on the road
3) the content of a cell in ROS/gmapping is essentially just occupied-or-not, but it seems a Tesla cell carries a velocity as well
4) with Tesla, multiple sensors can all contribute to the same map
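Here is a minimal sketch of what points 1-3 could look like in Python. To be clear, everything in it (the `DynamicGrid` class, the decay constant, the nearest-neighbor velocity trick) is my own toy construction for illustration, not how Tesla actually does it; their version comes out of a neural network.

```python
import numpy as np

class DynamicGrid:
    """Toy dynamic occupancy grid: each cell holds occupancy evidence
    AND a velocity estimate, instead of a single occupied/free bit."""

    def __init__(self, size=200, res=0.05, decay=0.9):
        self.size, self.res, self.decay = size, res, decay
        self.occ = np.zeros((size, size))      # occupancy evidence, 0..1
        self.vel = np.zeros((size, size, 2))   # per-cell (vx, vy) in m/s
        self.prev_hits = None                  # last frame's obstacle points

    def _cell(self, p):
        i, j = int(p[0] / self.res), int(p[1] / self.res)
        return (i, j) if 0 <= i < self.size and 0 <= j < self.size else None

    def update(self, hits, dt):
        """hits: Nx2 array of obstacle points (x, y), from any sensor."""
        hits = np.asarray(hits, dtype=float)
        self.occ *= self.decay                 # old evidence fades, so the map
                                               # effectively rebuilds every frame
        for p in hits:
            c = self._cell(p)
            if c is None:
                continue
            self.occ[c] = min(1.0, self.occ[c] + 0.5)
            # crude velocity: match this point to its nearest neighbor in
            # the previous frame and difference the positions over dt
            if self.prev_hits is not None and len(self.prev_hits):
                d = np.linalg.norm(self.prev_hits - p, axis=1)
                k = d.argmin()
                if d[k] < 0.5:                 # gate: plausibly the same object
                    self.vel[c] = (p - self.prev_hits[k]) / dt
        self.prev_hits = hits
```

Because the decay term forgets old evidence, a pedestrian who walked through the scene simply fades out of the map; there is no separate "clear dynamic obstacles" step. Point 4 (multi-sensor fusion) falls out for free: any sensor that can produce (x, y) obstacle points can call `update()` on the same grid.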
Some of this is not that hard to do, especially as we amateurs tend to use depth cameras and LIDAR. Depth camera or LIDAR data is MUCH easier to process than the flat 2D cameras Tesla uses: Tesla needs a neural network to infer depth from 2D video, and we can skip that step entirely because our sensors measure depth directly. Why doesn't Tesla use depth cams or LIDAR? Mainly cost, at their production volumes. But if you are only building one robot, cost hardly matters.
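To show just how free the depth step is for us: turning a LIDAR scan into obstacle points for a grid like the one above is a single polar-to-Cartesian conversion. The field names below mirror ROS's sensor_msgs/LaserScan message, but the function itself is just my sketch.

```python
import numpy as np

def scan_to_points(ranges, angle_min, angle_increment, max_range=8.0):
    """Convert a LaserScan-style list of ranges into Nx2 (x, y) points."""
    ranges = np.asarray(ranges, dtype=float)
    angles = angle_min + angle_increment * np.arange(len(ranges))
    valid = np.isfinite(ranges) & (ranges < max_range)   # drop inf/garbage returns
    return np.column_stack((ranges[valid] * np.cos(angles[valid]),
                            ranges[valid] * np.sin(angles[valid])))

# grid.update(scan_to_points(scan.ranges, scan.angle_min,
#                            scan.angle_increment), dt=0.1)
```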
Tesla uses a 3D map with "voxels", not "pixels", but for a simple "floor bot" 2D can work. I think we need 3D for service bots that have arms and need to move objects.
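If you do need 3D, you don't need a dense 3D array either; a sparse dict of voxels goes a long way before you reach for something like OctoMap. Again, a toy sketch with invented names:

```python
RES = 0.05  # meters per voxel (arbitrary choice for this sketch)

voxels = {}  # (i, j, k) -> occupancy evidence; absent key means free/unknown

def voxel_key(x, y, z, res=RES):
    """Quantize a 3D point to integer voxel coordinates."""
    return (int(x // res), int(y // res), int(z // res))

def add_point(x, y, z):
    """Accumulate evidence for the voxel containing this point."""
    k = voxel_key(x, y, z)
    voxels[k] = min(1.0, voxels.get(k, 0.0) + 0.5)
```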
Bottom line: there are a lot of good ideas here that really could be used in hobby robots, and I think the best one is combining navigation and obstacle avoidance. In our ROS nav stacks, path planning and obstacle avoidance are handled by separate components, but with a dynamic occupancy map I think the two can be handled by a single algorithm.
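To make the "one algorithm" idea concrete, here is a hypothetical cost function over the `DynamicGrid` sketched earlier. The per-cell velocity is what unifies the two jobs: the planner can ask for the cost of being at (x, y) at a future time t, with each occupied cell advected along its stored velocity, so a moving pedestrian is just another entry in the same cost lookup the path planner already uses. The thresholds and footprint size are arbitrary.

```python
import numpy as np

def predicted_cost(grid, x, y, t):
    """Occupancy cost of standing at (x, y) a time t in the future,
    with each occupied cell moved along its own velocity estimate."""
    cost = 0.0
    occupied = np.argwhere(grid.occ > 0.3)          # arbitrary evidence threshold
    for i, j in occupied:
        # where this cell's contents will be at time t
        px = i * grid.res + grid.vel[i, j, 0] * t
        py = j * grid.res + grid.vel[i, j, 1] * t
        if abs(px - x) < 0.2 and abs(py - y) < 0.2:  # crude robot footprint
            cost += grid.occ[i, j]
    return cost
```

A planner that scores candidate trajectories with `predicted_cost(grid, x, y, t)` at each waypoint's arrival time is doing navigation and obstacle avoidance in one pass, which is exactly the appeal of the dynamic map.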