Just by looking at a target image of a word or sketch, a robot can reproduce each stroke as a continuous action, thanks to an algorithm developed by scientists at Brown University.
The result is handwriting so close to a human's that it is very difficult to tell whether a given text was written by the robot or by a person.
The algorithm uses deep learning networks that analyze images of handwritten words or sketches and deduce the likely series of pen strokes that created them. It was trained on a Japanese character set until it could reproduce the characters, and the strokes behind them, with approximately 93% accuracy.
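The core idea of mapping an image to a stroke sequence can be sketched in miniature. Everything below is an assumption for illustration, not the authors' architecture: a stand-in "network" (here just a random linear layer) maps a flattened grayscale image to a fixed number of pen actions, each a hypothetical `(dx, dy, pen_down)` triple.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_SIZE = 28   # assumed input resolution, not from the paper
MAX_STEPS = 16  # assumed maximum number of pen actions per character

# Stand-in for the trained deep network: a random linear layer mapping the
# flattened image to MAX_STEPS actions of (dx, dy, pen_down_logit).
W = rng.normal(scale=0.01, size=(IMG_SIZE * IMG_SIZE, MAX_STEPS * 3))

def predict_actions(image: np.ndarray) -> np.ndarray:
    """Map an image to a (MAX_STEPS, 3) array of pen actions."""
    flat = image.reshape(-1)
    actions = (flat @ W).reshape(MAX_STEPS, 3)
    # Interpret the third channel as a pen-down probability via a sigmoid.
    actions[:, 2] = 1.0 / (1.0 + np.exp(-actions[:, 2]))
    return actions

image = rng.random((IMG_SIZE, IMG_SIZE))
actions = predict_actions(image)
print(actions.shape)  # one row per pen action
```

In the real system a deep network would replace the random matrix `W`, and training would fit it so that replaying the predicted actions redraws the input image.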
The algorithm also ended up reproducing very different types of characters it had never seen before, such as English letters and cursive writing. As the authors explain:
To illustrate how our system works in various robotic environments, we tested our model with two robots, Baxter and Movo. We applied the trained model directly to the real robotic environment, which required preprocessing the original target image to match the image format of our training data.
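The quote above mentions converting real target images into the training format. A minimal sketch of that kind of preprocessing might look like the following; the output size, grayscale conversion, and normalization choices here are assumptions, not values from the paper.

```python
import numpy as np

def preprocess(image: np.ndarray, out_size: int = 28) -> np.ndarray:
    """Grayscale, nearest-neighbor resize, and normalize to [0, 1]."""
    if image.ndim == 3:                       # RGB -> grayscale
        image = image.mean(axis=2)
    h, w = image.shape
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    resized = image[rows][:, cols].astype(np.float64)  # nearest-neighbor
    resized -= resized.min()                  # shift minimum to 0
    if resized.max() > 0:
        resized /= resized.max()              # scale maximum to 1
    return resized

# Example: a fake 120x96 RGB camera frame reduced to the assumed 28x28 format.
raw = np.random.default_rng(1).random((120, 96, 3))
img = preprocess(raw)
print(img.shape)  # (28, 28)
```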
Using a global model that considers the image as a whole, the algorithm identifies a likely starting point for the first stroke. Once the stroke has begun, the algorithm zooms in, examining the image pixel by pixel to determine where that stroke should go and how long it should last. When it reaches the end of the stroke, the algorithm calls the global model again to determine where the next stroke should begin, then hands control back to the zoomed-in local model. And so on.
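The alternation described above can be illustrated with a toy tracer. The functions and stopping rules below are assumptions chosen for clarity, not the published model: a "global" step picks the most strongly inked undrawn pixel as a stroke's start, and a "local" step greedily follows inked neighbors pixel by pixel until the stroke ends.

```python
import numpy as np

INK = 0.5  # assumed threshold separating ink from background

def global_start(image, drawn):
    """Global model stand-in: pick the strongest undrawn pixel."""
    remaining = np.where(drawn, -np.inf, image)
    idx = np.argmax(remaining)
    return tuple(int(i) for i in np.unravel_index(idx, image.shape))

def local_step(image, drawn, pos):
    """Local model stand-in: move to the darkest undrawn 8-neighbor.

    Returns None when no inked neighbor remains, i.e. the stroke ends."""
    h, w = image.shape
    best, best_val = None, INK
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = pos[0] + dr, pos[1] + dc
            if (dr or dc) and 0 <= r < h and 0 <= c < w and not drawn[r, c]:
                if image[r, c] > best_val:
                    best, best_val = (r, c), image[r, c]
    return best

def trace(image, max_strokes=8):
    """Alternate global starts and local steps until the ink is used up."""
    drawn = np.zeros(image.shape, dtype=bool)
    strokes = []
    for _ in range(max_strokes):
        if (image[~drawn] <= INK).all():   # no ink left to trace
            break
        pos = global_start(image, drawn)   # global model: where to start
        stroke = [pos]
        drawn[pos] = True
        while (pos := local_step(image, drawn, pos)) is not None:
            stroke.append(pos)             # local model: follow the stroke
            drawn[pos] = True
        strokes.append(stroke)
    return strokes

# A tiny 5x5 image with a single horizontal line of "ink".
img = np.zeros((5, 5))
img[2, 1:4] = 1.0
strokes = trace(img)
print(strokes)  # one stroke covering the three inked pixels
```

The greedy neighbor rule is of course far simpler than a learned local model, but it shows the control flow: the global pass decides *where* to start, and the local pass decides *how* the stroke unfolds.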