Hmm...
This could lead to better wearable devices where you touch your skin to give input: a projector projects the interface onto your skin, and you touch your skin to make a selection.
Well, let us combine this with Google Glass: give the robotic skin four corner markers that AR technology can track. The glasses recognize the area (its angle and size) and project the menu onto the glasses' display, and you could then touch the skin to make a selection. The difference from AR technology alone is that the touch would be real; there would be no mistake made by the camera.
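The corner-tracking step could be sketched like this: once the glasses detect the four corner markers in the camera frame, a homography maps the menu image onto that quadrilateral, so menu items and touch positions line up. A minimal numpy sketch, with entirely hypothetical corner coordinates standing in for real marker detection:

```python
import numpy as np

def perspective_transform(src, dst):
    """Solve for the 3x3 homography H mapping 4 src points to 4 dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), similarly for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pt):
    """Apply the homography to a 2D point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Hypothetical corner markers detected on the skin patch (camera pixels)
skin_corners = [(120, 80), (420, 95), (400, 310), (140, 300)]
# The menu is a 200x200 image to warp onto that quadrilateral
menu_corners = [(0, 0), (200, 0), (200, 200), (0, 200)]

H = perspective_transform(menu_corners, skin_corners)
# The menu's top-left corner maps onto the first skin marker
print(project(H, (0, 0)))  # → (120.0, 80.0)
```

The same matrix, inverted, converts a detected touch position back into menu coordinates to tell which item was pressed. A real system would get the corner positions from marker detection (e.g. an OpenCV pipeline) rather than hard-coded values.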
Although this requires that the robotic skin can power itself and include a transmitter!