A GUI skeleton is the starting point for implementing a UI design image. To obtain a GUI skeleton from a UI design image, developers have to visually understand UI elements and their spatial layout in the image, and then translate this understanding into proper use of GUI components and their compositions. Automating this translation would be beneficial for bootstrapping mobile GUI implementation, but it is a challenging task due to the diversity of UI designs and the complexity of the GUI skeletons to generate. Existing tools are rigid because they depend on heuristically designed visual understanding and GUI generation rules. In this paper, we present a neural machine translator that combines recent advances in computer vision and machine translation to translate a UI design image into a GUI skeleton. Our translator learns to extract visual features from UI images, encode the spatial layout of these features, and generate GUI skeletons in a unified neural network framework, without requiring manual rule development. To train our translator, we develop an automated GUI exploration method that collects large-scale UI data from real-world applications. We conduct extensive experiments to evaluate the accuracy, generality, and usefulness of our approach.
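To illustrate the general idea of such an image-to-skeleton translator, below is a minimal PyTorch sketch: a CNN extracts visual features from a screenshot, an RNN encodes the spatial layout of those features, and an RNN decoder generates GUI-skeleton tokens. The layer sizes, vocabulary, and class names are illustrative assumptions for exposition, not the exact architecture used in our paper.

import torch
import torch.nn as nn

class UI2CodeSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        # CNN: turns a UI screenshot into a grid of visual feature vectors.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Encoder RNN: reads the feature grid as a sequence to capture spatial layout.
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Decoder RNN: generates GUI-skeleton tokens (component names, opening/closing brackets).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image, target_tokens):
        # image: (B, 3, H, W); target_tokens: (B, T) token ids of the GUI skeleton.
        feats = self.cnn(image)                               # (B, C, h, w)
        B, C, h, w = feats.shape
        seq = feats.permute(0, 2, 3, 1).reshape(B, h * w, C)  # feature grid -> sequence
        _, state = self.encoder(seq)                          # summary of spatial layout
        dec_in = self.embed(target_tokens)                    # teacher forcing during training
        dec_out, _ = self.decoder(dec_in, state)
        return self.out(dec_out)                              # (B, T, vocab_size) logits

# Example: one dummy 3x256x256 screenshot and a short GUI-skeleton token sequence.
model = UI2CodeSketch(vocab_size=50)
logits = model(torch.randn(1, 3, 256, 256), torch.randint(0, 50, (1, 10)))
print(logits.shape)  # torch.Size([1, 10, 50])

In practice the decoder is trained with a cross-entropy loss over the skeleton tokens and run autoregressively at inference time; the full model, training procedure, and data collection method are described in the paper cited below.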

Please let us know what you think by sending an email to Chunyang Chen.



Gallery of example UI screenshots:



The UI design images used in the user study:



The algorithm of this work is described in our publication. When citing this site or the paper, please use the BibTeX entry below:

@inproceedings{chen2018ui2code,
  title={From UI Design Image to GUI Skeleton: A Neural Machine Translator to Bootstrap Mobile GUI Implementation},
  author={Chen, Chunyang and Su, Ting and Meng, Guozhu and Xing, Zhenchang and Liu, Yang},
  booktitle={The 40th International Conference on Software Engineering, Gothenburg, Sweden},
  year={2018},
  organization={ACM}
}