Space to Watch: End-to-End Neural Networks in Mobility and Robotics
End-to-End Neural Networks in Mobility and Robotics — an interesting direction

Let’s consider the recent advances in end-to-end neural networks for mobility and robotics applications, a direction that looks promising.
Can raw, vision-only sensor inputs used for direct motion planning completely bypass the need for task-specific models (perception, planning, control), lidar, and high-definition maps? Are single transformer models effective in unseen, challenging scenarios?
Is it possible to overcome the ‘black box’ nature of these models, which makes them hard to interpret and troubleshoot? Can it be statistically proven that they can be trusted in most, if not all, scenarios?
Could end-to-end models with vision-only sensors perform well under various lighting and weather conditions?
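To make the contrast in these questions concrete, here is a deliberately toy sketch (not any company’s actual system; all function names, stand-in logic, and numbers are hypothetical). A modular stack runs separate perception, planning, and control stages, while an end-to-end model is a single learned function mapping raw pixels directly to a control output:

```python
# Hypothetical illustration of "modular pipeline" vs. "end-to-end" control.
# Nothing here is a real autonomy stack; the stages are trivial stand-ins.
import random

random.seed(0)

def modular_pipeline(image):
    """Classical stack: separate perception -> planning -> control stages."""
    detections = [px for px in image if px > 0.5]   # stand-in "perception"
    waypoint = len(detections) / len(image)         # stand-in "planning"
    steering = 0.5 - waypoint                       # stand-in "control"
    return steering

def end_to_end(image, weights):
    """End-to-end: one learned function maps raw pixels to a control value."""
    return sum(w * px for w, px in zip(weights, image))

# Fake 16-pixel camera frame and "learned" weights, for illustration only.
image = [random.random() for _ in range(16)]
weights = [random.uniform(-0.1, 0.1) for _ in range(16)]

print(modular_pipeline(image), end_to_end(image, weights))
```

The design question the papers below wrestle with is whether the second form, trained at sufficient scale, can subsume the hand-decomposed first form, and how you would ever debug it if it fails.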
Interesting papers and articles on the topic:
- End-to-end Autonomous Driving: Challenges and Frontiers
- Recent Advancements in End-to-End Autonomous Driving using Deep Learning: A Survey
- “The Bitter Lesson,” by AI researcher Richard Sutton
Interesting companies in the space that may soon get more attention:
Update 5/8/2024: notably, Wayve raised a $1B round:
https://wayve.ai/thinking/road-to-embodied-ai/
Bogdan Cristei is a Co-Founder & Partner at Shack15 Ventures