Monday, September 12, 2016

Shared Autonomy


A blog post by Henrik Christensen, Director, UC San Diego Contextual Robotics Institute. (The post originally appeared on Christensen's blog. Follow Christensen on Twitter: https://twitter.com/hiskov)

We are currently seeing a great deal of interest in autonomous systems. Many automotive companies are talking about autonomous or driverless cars. GM and Google demonstrated early systems. Google started out by automating regular cars and has also presented a concept car without a steering wheel [URL]. Tesla offers a model where the driver is expected to take over [URL] when the autopilot cannot provide a robust solution. This division of labor, in which well-understood contexts are handled automatically and humans intervene in challenging situations, is a form of shared autonomy: humans and robots collaborate to achieve a mission objective.
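To make that division of labor concrete, here is a minimal sketch, in Python, of the kind of arbitration logic a shared-autonomy controller might use; the names (select_control_mode, scene_confidence, operator_ready) are invented for illustration and do not describe any particular vendor's implementation.

```python
from enum import Enum, auto


class ControlMode(Enum):
    AUTONOMOUS = auto()   # well-understood context, handled by the machine
    HUMAN = auto()        # challenge case, operator has taken over


def select_control_mode(scene_confidence, operator_ready, threshold=0.8):
    """Decide who controls the vehicle for the next planning cycle.

    scene_confidence: the autonomy stack's self-assessed confidence (0..1)
    operator_ready:   True once the human has acknowledged a takeover request
    threshold:        confidence below which human intervention is requested
    """
    if scene_confidence >= threshold:
        return ControlMode.AUTONOMOUS
    # Confidence is low: hand over only once the human is actually ready;
    # until then the autonomy stack should execute a safe fallback
    # (slow down, pull over) rather than simply dropping control.
    return ControlMode.HUMAN if operator_ready else ControlMode.AUTONOMOUS
```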
Tele-operation of robots has existed for a long time. Much of the early work was carried out in the handling of radioactive material, where direct contact by people is not an option. These systems were all purely tele-operated. The same model is applied to medical robots such as minimally invasive surgical systems; Intuitive Surgical's da Vinci system [URL] is a great example. The objective here is to minimize trauma to the body.
In aerospace we have long had the autopilot, which is a shared-autonomy system: the pilots typically handle take-off and landing, whereas cruise flight is handled by the autopilot. For Unmanned Aerial Vehicles (UAVs), the pilots / operators sit on the ground and operate vehicles that may be airborne for as long as 36 hours. We are seeing similar applications for smaller UAVs in commercial and entertainment tasks. New commercial applications include building inspection and mapping of construction sites [URL]. For entertainment, companies such as DJI [URL] build radio-controlled robots. We are gradually seeing small functions such as level keeping (altitude hold) or automatic tracking of skiers, which are examples of shared autonomy: the system is launched, an objective is specified (track me, or maintain level), and that objective is then carried out autonomously.
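As a toy illustration of this "specify an objective, the system does the rest" pattern, level keeping can be reduced to a small feedback law like the sketch below; the function name and gain values are placeholders, and real autopilots are considerably more involved.

```python
def level_keeping_command(target_alt_m, measured_alt_m, climb_rate_mps,
                          kp=0.5, kd=0.2):
    """Return a vertical-velocity setpoint (m/s) that holds target_alt_m.

    The operator supplies only the objective (the target altitude);
    this loop then runs autonomously on the flight controller.
    Gains kp and kd are illustrative placeholders, not values taken
    from any real autopilot.
    """
    error = target_alt_m - measured_alt_m          # how far off altitude we are
    return kp * error - kd * climb_rate_mps        # proportional term + damping
```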
One of the biggest challenges in designing systems with shared autonomy is providing the operator with adequate context to take over when appropriate. A great example of a system that does this in an industrial context comes from the company Aethon [URL] out of Pittsburgh, which provides delivery robots for hospitals and other institutions. The objective is an autonomous system, but when a robot gets caught in an unusual situation, such as a trashcan in the middle of a hallway, it requests assistance from a call center. The operator uses the on-board sensors to understand the problem and drive the robot out of the situation. In a car, taking over control is more of a challenge, especially when you are driving 55 mph down the highway. It takes time to understand the situation and to take over, which complicates the design of systems that rely on the driver taking over. How do we provide the driver with adequate information to take over control of the car? Or is this even an appropriate model for shared control?
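A rough sketch of this escalation pattern might look like the following; the robot and call_center interfaces here are hypothetical and are meant only to illustrate the idea, not Aethon's actual software.

```python
import time


def run_delivery_mission(robot, call_center, stuck_timeout_s=30.0):
    """Execute a delivery autonomously, escalating to a remote operator
    when the robot stops making progress (e.g., a blocked hallway).

    `robot` and `call_center` are hypothetical interfaces used purely
    for illustration; this is not Aethon's actual software.
    """
    last_progress = time.monotonic()
    while not robot.at_goal():
        if robot.making_progress():
            last_progress = time.monotonic()
            robot.follow_planned_path()
        elif time.monotonic() - last_progress > stuck_timeout_s:
            # Escalate: send a snapshot of the on-board sensors so the
            # operator has enough context to understand the situation ...
            session = call_center.request_assistance(robot.sensor_snapshot())
            # ... and let the operator tele-operate the robot past the obstacle.
            while not session.resolved():
                robot.apply_velocity(session.operator_command())
            last_progress = time.monotonic()
        else:
            time.sleep(0.1)  # not yet stuck long enough; wait and re-check
```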
As we explore shared control of systems in which some functions are performed autonomously and others are carried out by an operator, it is essential to consider the fluency of human-robot interaction and the cognitive aspects of these systems, and to ensure that engineers make such models an integral part of their system design. On October 28, 2016, the University of California San Diego will host the annual Contextual Robotics Forum with the theme of “Shared Autonomy: New Directions in Human-Machine Interaction”. Join us for a day focused on the future of robotics and shared autonomy. You'll meet world leaders in robotics and connect with the robotics ecosystem at UC San Diego and in the region at the technology showcase.

Over the next few years we will see tremendous progress in the design of systems that off-load the operator, but we will be challenged to do this in a way that still allows the operator to intervene in challenging cases. So far few systems have managed to do this with a high degree of fluency. We need more research at the intersection of cognitive science, systems engineering, and robotics to fully leverage the next generation of systems with shared autonomy.

