Cloud Robotics is a new paradigm in which distributed robots connect to cloud services over networks to access "unlimited" computational power. Combined with advanced network technologies such as 5G and Wi-Fi 6, it can support service robots operating in unstructured, human-rich environments on a global scale. Cloud Robotics relies on scalable servers that host artificial intelligence, robotic vision, crowd-sourcing, and web-based human-computer interfaces (HCIs). These modular Cloud Robotic infrastructures enable control and monitoring of distributed service robots that require sophisticated physical human-robot interactions (pHRIs) and human-guided teleoperation. Cloud Robotics can also scale robotic service deployments up and down in response to rapid changes in user demand. A similar capability in Cloud-based video conferencing services has shown great value in scaling with user demand during the ongoing COVID-19 pandemic. The ability to match user demand will be an important advantage of Cloud Robotics for keeping operational costs down in service robot applications, where mixed Cloud Robotic modules can be selected for different environments on demand. Beyond these advantages, Cloud Robotic systems pay an additional price in network communication. Three major network communication costs hinder effective deployment of cloud robotics: (1) network bandwidth, (2) privacy and security, and (3) network latency and variability. With emerging high-speed 5G and Wi-Fi 6 technology, the cost of network speed and bandwidth is dropping significantly, so the value of Cloud Robotic services will eventually outweigh the cost of network communication. However, if we want to use Cloud Robotic services to control dynamic, compliant service robots with feedback, the unpredictable, variable delays introduced by network routing protocols over long physical distances present a major obstacle.
In this thesis, we propose a Cloud-Edge hybrid robotic system to enable dynamic, compliant feedback control for physical human-robot interactions (pHRIs). Specifically, we built a framework that (1) moves centralized high-level controllers and computationally intensive perception services to the Cloud; (2) deploys a low-latency, agile Edge robotic controller to handle dynamic and compliant motions; (3) implements a hybrid, two-level feedback controller leveraging both the Cloud and the Edge; and (4) uses robot-learning algorithms to perform motion segmentation and synthesis, mitigating network latencies within the Cloud-Edge perception feedback loop. We demonstrate the robustness of this framework on several robots, including a dual-arm robot (Yumi) from ABB, a dynamic self-balancing robot (Igor) and a compliant 5-degree-of-freedom (DoF) robot arm, both from Hebi Robotics, and a humanoid robot (Pepper) from Softbank Robotics. A copy of the dissertation talk, including video demonstrations, can be found here: https://drive.google.com/drive/folders/1rh8gCydsXCpGJCI6n31mwgTdsJdjJfn-?usp=sharing
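The two-level idea above can be illustrated with a minimal, purely hypothetical sketch (not the thesis's actual implementation): a fast Edge loop tracks a setpoint with a simple proportional controller, while a slower Cloud planner recomputes setpoints whose replies arrive only after a modeled network delay. All class and parameter names here are invented for illustration.

```python
from collections import deque


class EdgeController:
    """Fast inner loop: tracks the latest setpoint with a simple P controller."""

    def __init__(self, kp=0.5):
        self.kp = kp
        self.setpoint = 0.0
        self.state = 0.0

    def step(self):
        # Runs at a high rate on the Edge, independent of Cloud latency.
        error = self.setpoint - self.state
        self.state += self.kp * error
        return self.state


class CloudPlanner:
    """Slow outer loop: recomputes setpoints; replies arrive with delay."""

    def __init__(self, goal, delay_steps=5):
        self.goal = goal
        # A FIFO of "in-flight" plans models round-trip network latency.
        self.delay = deque([None] * delay_steps)

    def step(self, observed_state):
        # Plan an intermediate setpoint halfway toward the goal,
        # then push it through the simulated network delay.
        plan = observed_state + 0.5 * (self.goal - observed_state)
        self.delay.append(plan)
        return self.delay.popleft()  # None while the first plans are in flight


def run(goal=1.0, steps=300):
    """Simulate the hybrid loop: Cloud updates every 5th Edge tick."""
    edge = EdgeController()
    cloud = CloudPlanner(goal)
    for t in range(steps):
        if t % 5 == 0:  # the Cloud loop runs at 1/5 the Edge rate
            delayed_setpoint = cloud.step(edge.state)
            if delayed_setpoint is not None:
                edge.setpoint = delayed_setpoint
        edge.step()
    return edge.state
```

Despite the stale, delayed setpoints, the Edge loop keeps the system stable between Cloud updates, and the state still converges to the goal; this separation of a latency-tolerant outer loop from a latency-sensitive inner loop is the essence of the hybrid design described above.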