What if we told you that you need not rely on your imagination to picture what your favorite characters from the books you read would look like? Can algorithms be developed to create images of characters from their descriptions? And could such algorithms deliver real value in practical scenarios?
Animesh Karnekar, a Pune-based engineer at Mobiliya, spent about three months on an experiment named Text to Face (T2F). The experiment builds on Generative Adversarial Networks (GANs), pioneered in 2014 by machine-learning researcher Ian Goodfellow. A GAN consists of two networks, a generator and a discriminator, that act against each other: the generator produces an image from a textual annotation, and the discriminator's feedback is used to fine-tune it.
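To make the adversarial idea concrete, here is a minimal sketch of a GAN, not Karnekar's T2F model: a one-parameter-pair generator learns to mimic a 1D Gaussian data distribution (mean 4) by fooling a logistic discriminator. All names, network shapes, and hyperparameters below are illustrative choices, not details from the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# "Real" data: samples from N(4, 1). The generator must learn to mimic it.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = wg*z + bg maps noise z ~ N(0, 1) to fake samples.
wg, bg = 1.0, 0.0
# Discriminator D(x) = sigmoid(wd*x + bd) scores how "real" a sample looks.
wd, bd = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)
    x_real = sample_real(batch)
    x_fake = wg * z + bg

    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    g_real = d_real - 1.0           # gradient of -log D(x_real) w.r.t. logit
    g_fake = d_fake                 # gradient of -log(1 - D(x_fake)) w.r.t. logit
    wd -= lr * np.mean(g_real * x_real + g_fake * x_fake)
    bd -= lr * np.mean(g_real + g_fake)

    # --- Generator update: push D(fake) -> 1 (i.e., fool the discriminator) ---
    x_fake = wg * z + bg
    g_logit = sigmoid(wd * x_fake + bd) - 1.0   # from -log D(G(z))
    # Chain rule through the discriminator into the generator's parameters.
    wg -= lr * np.mean(g_logit * wd * z)
    bg -= lr * np.mean(g_logit * wd)

# After training, generated samples should cluster near the real mean of 4.
samples = wg * rng.normal(0.0, 1.0, 10000) + bg
```

T2F applies this same adversarial loop at far larger scale, with deep convolutional networks and a text encoding conditioning the generator, but the generator-versus-discriminator dynamic is identical.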
Adding precision to real-life scenarios
Animesh believes there is still considerable work to be done to perfect the algorithms. At the same time, the implications point toward a future where this AI-based technology could render 3D scenes and objects at a much larger scale. Film directors would be able to cast more convincing actors, while law enforcement could identify victims and perpetrators with greater ease.