Robots still fall short of humans at many tasks: composing music, judging whether a piece of writing is any good, or sometimes something as simple as grasping an object with their fingers.
A number of factors determine whether a grip succeeds. A general-purpose robot has to use its sensors to figure out what it is trying to grasp and adjust accordingly, applying different strategies for how to hold the object and where to place its fingers. Some robots are already fairly good at the task, but researchers continue to improve the process.
One way to tackle the problem is with artificial intelligence: the learning is built into the process itself, so the robot improves by attempting to pick up different objects, succeeding and failing, and working out an effective method on its own through practice in the lab. Studies suggest this is quite an effective approach. But engineers at Oregon State University set out to find a faster, more efficient way to teach robots to grasp. The solution they came up with was to gather feedback for the robot from strangers online.
The Oregon team paid internet users to rate photos of possible grasps on a scale of 1 to 5, according to how secure each grasp looked. The grasps covered 522 finger configurations on nine everyday objects. The team also used a commercially available three-fingered robot hand to try the grasps physically, shaking the hand with an object in its grip to check how firm the hold was.
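To use crowd ratings as training data, the 1-to-5 scores for each grasp have to be condensed into a label the learner can consume. The sketch below is hypothetical (the function, threshold, and data are invented for illustration, not taken from the Oregon study): it averages each grasp's ratings and labels the grasp secure if the mean clears a cutoff.

```python
# Hypothetical sketch: turn crowd ratings (1-5) into binary labels.
# The threshold and example data are invented for illustration.

def aggregate_ratings(ratings_per_grasp, threshold=3.5):
    """Average each grasp's crowd ratings; label it secure (1)
    if the mean rating reaches the threshold, else insecure (0)."""
    labels = {}
    for grasp_id, ratings in ratings_per_grasp.items():
        mean = sum(ratings) / len(ratings)
        labels[grasp_id] = 1 if mean >= threshold else 0
    return labels

crowd = {
    "mug_grasp_01": [4, 5, 4],  # raters found this grip secure
    "mug_grasp_02": [2, 1, 3],  # raters doubted this grip
}
print(aggregate_ratings(crowd))  # {'mug_grasp_01': 1, 'mug_grasp_02': 0}
```

Averaging is the simplest choice; a real pipeline might instead weight raters by reliability or keep the mean rating as a continuous target.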
According to the Oregon team's report, the robot that learned the old way, by physically picking up objects, did slightly better than the one trained on crowd-sourced ratings. The lab-trained robot scored 0.766 on a standard measure of accuracy, the area under the ROC curve, whereas the crowd-sourced robot scored 0.659. On that scale, 1 is a perfect score.
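The area under the ROC curve (AUC) summarizes how well a model's scores separate successful grasps from failed ones: 0.5 is no better than chance, 1.0 is perfect separation. One way to compute it uses the rank-based identity that AUC equals the probability a random positive example outscores a random negative one. A minimal sketch with invented scores (not the study's data):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney identity: the fraction of
    positive/negative pairs where the positive example is scored
    higher (ties count as half a win)."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Invented example: 1 = grasp held during shaking, 0 = object dropped.
labels = [1, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2]  # model's predicted grip security
print(roc_auc(labels, scores))  # 5 of 6 pairs ordered correctly: 0.833...
```

On this scale the lab-trained robot's 0.766 and the crowd-trained robot's 0.659 both sit well above the 0.5 chance level.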
This isn’t the first time crowd-sourcing has been used in robotics research: it was previously used to teach robots to build cute block shapes using feedback gathered through Mechanical Turk. The Oregon team will present their work at a conference hosted by the Association for the Advancement of Artificial Intelligence in November.