In Miami, a small delivery robot got stuck on the tracks and… just stayed there. A moment later, a speeding train hit it, and the clip went viral across the Western Hemisphere. It hardly needs saying that such situations can end not only with the destruction and loss of the robot, but also in a real tragedy in which someone dies. This short video tells the whole truth about autonomy: it works great as long as the world is predictable to the machine. But what happens when things go wrong, Murphy's-law style?
The incident occurred on Thursday, January 15, around 8 p.m. local time. The video was recorded by Guillermo Dapelo, who spotted the robot while walking his dogs. According to him, the device stood on the tracks for roughly 15 minutes — long enough for someone to react, and also long enough for it to be “taken care of” by the oncoming train, which showed the robot no mercy.
The recording shows the sequence of events: the robot stands still, the train moves at cruising speed, and a second later only scrap metal remains. An Uber Eats driver who was nearby contacted Coco Robotics to report the robot’s location. Why did no one react while the robot stood on the tracks for fifteen minutes? Could no one take control of it? Did no one come to help or notify the railway operator about the threat?
Maintenance-free autonomy
Coco Robotics confirmed that the robot belonged to them and that, at the time of the recording, it was not making a delivery. The company explains the incident as a “rare hardware failure”. The statement included the usual corporate talk: safety is a priority, robots drive at walking speed, yield to humans and are monitored in real time by humans. So what went wrong?
Real-time monitoring does not automatically mean “the ability to respond immediately”. If a robot gets stuck on a curb, sure: you can restart it remotely, redirect it, or send someone from the team. If it gets stuck at a railway crossing, space-time bends a bit and there is simply far less time to react. Different protocols are needed, and you do whatever you can to prevent a serious collision.
Coco has been operating in Miami for over a year and, according to the statement, has traveled thousands of miles there without serious incidents, crossing the same tracks multiple times a day. We can therefore assume this was one of those critical exceptions that happen rarely. But we already know, from this eye-opening situation, that when such cases do happen, they end badly. This time for the machine; another time, the injured party may be a human being.
Railway tracks are critical infrastructure
Delivery robots are designed to navigate predictable environments. A railway crossing looks similar – just a few slabs and metal rails. In terms of risk, however, it is a completely different league. The train will not swerve, it will not do the “moose test”, and it will not stop within five meters because it has detected an obstacle. A train is a mass of heavy iron with enormous kinetic energy, a very long braking distance and zero maneuverability.
A freight train traveling at “cruising” speed may need more than a mile to come to a complete stop, even under emergency braking. That is a terrifyingly long distance. We have a clash of two worlds: light urban autonomy and heavy infrastructure that operates according to different rules – and will not adapt just because someone wants to deliver ramen or sushi to their customers faster, more conveniently and in a “fancy” way.
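To see why “more than a mile” is not an exaggeration, the constant-deceleration formula d = v² / (2a) is enough; the deceleration value below is an assumed, illustrative figure for a loaded freight train in emergency braking, not an official number:

```python
# Stopping distance under constant deceleration: d = v^2 / (2 * a).
# The deceleration value is an illustrative assumption for a heavy
# freight train in emergency braking, not an official figure.

MPH_TO_MS = 0.44704  # miles per hour -> meters per second
MILE_M = 1609.34     # meters in one mile

def stopping_distance_m(speed_mph: float, decel_ms2: float) -> float:
    """Distance needed to brake from speed_mph down to zero at decel_ms2."""
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * decel_ms2)

d = stopping_distance_m(speed_mph=60.0, decel_ms2=0.2)
print(f"{d:.0f} m = {d / MILE_M:.2f} miles")
```

At an assumed 0.2 m/s², a 60 mph train needs roughly 1.1 miles of track; even doubling the braking force still leaves over half a mile in which nothing the train does can save whatever is sitting on the rails.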
What should a robot do when it doesn’t know what to do?
When a problem occurs in an autonomous system, what matters is what the device does when it no longer understands the situation. This may be, for example, pulling over to the side of the road, stopping in a place that does not block traffic, sending a high-priority alarm, or automatically withdrawing from the risk area. Railway tracks are one of those places where staying put for a long time usually ends in disaster.
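That “what do I do when I’m lost” logic can be sketched as a simple zone-aware fallback policy; the zone names and escalation order below are assumptions for illustration, not Coco’s actual protocol:

```python
from enum import Enum

class Zone(Enum):
    SIDEWALK = "sidewalk"
    ROAD = "road"
    RAIL_CROSSING = "rail_crossing"

# Hypothetical fallback policy: the riskier the zone, the more
# aggressive the reaction once the robot no longer understands
# the situation.
def fallback_action(zone: Zone, can_move: bool) -> str:
    if zone is Zone.RAIL_CROSSING:
        # Never park on tracks: back out if at all possible,
        # otherwise raise the loudest alarm available.
        return "reverse_out_of_zone" if can_move else "high_priority_alarm"
    if zone is Zone.ROAD:
        return "pull_over" if can_move else "alert_operator"
    # Sidewalks: simply stopping in place is usually safe.
    return "stop_in_place"

print(fallback_action(Zone.RAIL_CROSSING, can_move=False))
```

The key design point is that “stop and wait” – a perfectly sane default on a sidewalk – becomes the worst possible choice on a crossing.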
Geofencing of railway crossings, additional precautionary rules, and even outright entry bans can be applied when the system detects unstable sensor behavior.
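At its core, such a geofence can be as simple as a point-in-polygon test around each crossing; the coordinates below are invented, and a real system would draw the polygons from map data and fuse GPS with odometry:

```python
# Minimal point-in-polygon geofence (ray-casting), assuming flat
# local x/y coordinates in meters. The polygon and positions below
# are invented for illustration.

Point = tuple[float, float]

def inside(point: Point, polygon: list[Point]) -> bool:
    """Ray-casting test: does the point lie inside the polygon?"""
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges crossed by a ray going right from the point.
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                hit = not hit
    return hit

# A hypothetical no-go box drawn around a rail crossing.
crossing = [(0.0, 0.0), (10.0, 0.0), (10.0, 4.0), (0.0, 4.0)]

if inside((5.0, 2.0), crossing):
    print("entry banned: rail crossing geofence")
```

With degraded sensors, the same polygons can simply be widened or flipped from “cross carefully” to “do not enter at all”.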
Human supervision works if the human is available
Coco’s statement includes a phrase about “human safety pilots” monitoring the fleet. The autonomous delivery industry has long operated on an assisted autonomy model: a robot drives itself, but a human can take over or help it when it gets stuck.
If one operator supervises many machines, a queue forms in a critical situation. And if the event is unusual, there is an additional delay in recognizing the problem. The video material suggests the robot stood on the tracks for several minutes. The question is what the machine’s operator was doing during that time.
And the more common these systems become, the less “rare” looks like rare in absolute numbers. Statistics is a ruthless field: even if a breakdown is extremely unlikely, a fleet driving around a large city every day will eventually experience one.
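The arithmetic behind “rare times a large fleet equals near-certain” is short: the chance of at least one failure in n independent trips is 1 − (1 − p)ⁿ. The per-trip failure rate and fleet size below are invented, illustrative numbers:

```python
# Probability of at least one failure across n independent trips:
# P = 1 - (1 - p) ** n. The failure rate and fleet size are assumed,
# illustrative values, not data about any real fleet.

p_per_trip = 1e-5          # assumed chance of a critical failure per trip
trips_per_day = 200 * 20   # assumed: 200 robots, 20 deliveries each per day

def p_at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

daily = p_at_least_one(p_per_trip, trips_per_day)
yearly = p_at_least_one(p_per_trip, trips_per_day * 365)
print(f"per day: {daily:.1%}, per year: {yearly:.2%}")
```

With these assumptions, a one-in-100,000 failure is only about a 4% risk on any given day – and a near-certainty over a year of fleet operation.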
High caliber warning
Leaving anything on the tracks – a car, debris or a robot – can cause damage, emergency situations, and in extreme cases even derailments and huge losses, both to infrastructure and to people. The tech world would do well to treat this as both a warning and a cold shower. The next such failure may end much worse, and then the “virality” will be driven by a grim tally of losses reported in the media, not by the strangeness of the situation.
