I contributed to the humanoid project in three phases:
First, I built an interface to control an IoT haptic sensor (start, stop, record). We planned to mount the haptic sensor on the robot hand, so we ran an experiment in which the sensor was stroked across surfaces of several materials. I applied a self-organizing map (SOM) to cluster the material properties captured in the sensor data; the model grouped readings from the same material into the same cluster and placed similar materials close to each other on the map.
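The exact feature extraction and SOM configuration aren't covered in this write-up, but as a rough illustration of the clustering step, a minimal NumPy SOM over per-swipe feature vectors could look like the sketch below. The grid size, decay schedules, and the idea of one feature vector per swipe are assumptions for the example, not the actual setup.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal SOM on rows of `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    # One prototype vector per grid cell.
    weights = rng.random((grid[0], grid[1], n_features))
    # Grid coordinates, used by the neighbourhood function.
    coords = np.stack(
        np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij"), axis=-1
    )

    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit: the node whose prototype is closest to the sample.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), grid)
            # Gaussian neighbourhood around the BMU on the grid.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            # Pull every prototype toward the sample, scaled by the neighbourhood.
            weights += lr * h[..., None] * (x - weights)
    return weights

def map_sample(weights, x):
    """Return the grid cell (cluster) a feature vector maps to."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), weights.shape[:2])
```

After training, samples from the same material land on the same or neighbouring grid cells, which is the behaviour described above.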
Second, in a group project, I helped build a computer vision module that detects faces in the camera feed and makes the robotic head turn to follow them. More of my work went into designing a dialogue system, built with ROS SMACH, for conversations between the humanoid robot and a human. The dialogue system enabled the robot to answer questions, derive capital cities from country names, and learn object names and their positions as taught by the human; a structural sketch follows.
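As a rough sketch of how such a dialogue flow can be expressed as a SMACH state machine: the state names, outcomes, and scripted intents below are placeholders rather than the actual dialogue design, which combined speech recognition, the capital-city lookup, and object teaching.

```python
import smach

class ListenForIntent(smach.State):
    """Placeholder for speech recognition; cycles through a scripted set of intents."""
    def __init__(self):
        smach.State.__init__(self, outcomes=['question', 'teach_object', 'done'])
        self.script = ['question', 'teach_object', 'done']

    def execute(self, userdata):
        return self.script.pop(0)

class AnswerQuestion(smach.State):
    """e.g. look up the capital city for a country name and speak the answer."""
    def __init__(self):
        smach.State.__init__(self, outcomes=['answered'])

    def execute(self, userdata):
        return 'answered'

class LearnObject(smach.State):
    """e.g. store the taught object name together with its detected position."""
    def __init__(self):
        smach.State.__init__(self, outcomes=['learned'])

    def execute(self, userdata):
        return 'learned'

sm = smach.StateMachine(outcomes=['conversation_over'])
with sm:
    smach.StateMachine.add('LISTEN', ListenForIntent(),
                           transitions={'question': 'ANSWER',
                                        'teach_object': 'LEARN',
                                        'done': 'conversation_over'})
    smach.StateMachine.add('ANSWER', AnswerQuestion(),
                           transitions={'answered': 'LISTEN'})
    smach.StateMachine.add('LEARN', LearnObject(),
                           transitions={'learned': 'LISTEN'})

outcome = sm.execute()
```

Modelling each conversational turn as a state keeps the robot's behaviour inspectable and makes it easy to add new interaction branches as additional states and transitions.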
Third, for my Master's thesis, I enhanced the vision-to-grasp functionality of the humanoid robot. Using a CNN with Grad-CAM, I extended the robot's perception from single-object detection to recognizing multiple objects in a scene. The implementation supports an end-to-end process: the robot recognizes the objects in view, asks the user which one to pick up, identifies the chosen object, and grasps it using predicted arm and hand joint values.
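To illustrate the Grad-CAM step, the PyTorch sketch below localizes the image regions that drive a CNN's prediction for a given class. The ResNet-18 backbone, the choice of target layer, and the normalization are stand-ins for the example, not the network or post-processing used in the thesis.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative backbone and target layer; the robot's actual network is not reproduced here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_layer = model.layer4  # last convolutional block

activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    activations['value'] = output.detach()

def backward_hook(module, grad_input, grad_output):
    gradients['value'] = grad_output[0].detach()

target_layer.register_forward_hook(forward_hook)
target_layer.register_full_backward_hook(backward_hook)

def grad_cam(image, class_idx=None):
    """Return a normalized heatmap over `image` for the predicted or given class."""
    logits = model(image)  # image: (1, 3, H, W), already normalized
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Channel weights: gradients global-average-pooled over the spatial dimensions.
    weights = gradients['value'].mean(dim=(2, 3), keepdim=True)
    # Weighted sum of activation maps, ReLU, then upsample to the input size.
    cam = F.relu((weights * activations['value']).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode='bilinear', align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0], class_idx
```

Running this per candidate class yields one heatmap per recognized object, which can then be thresholded into regions so the robot can ask the user which object to grasp.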