CN109656554A - User interface creating method and device - Google Patents
User interface creating method and device
- Publication number
- CN109656554A (application CN201811425428.5A)
- Authority
- CN
- China
- Prior art keywords
- hand-drawn
- training
- label
- neural network
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/38—Creation or generation of source code for implementing user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The disclosure provides a user interface generation method and device. The method includes: obtaining image feature data of a hand-drawn interface corresponding to a user interface to be generated, and obtaining feature data of a predetermined initial token; determining a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model; and generating the user interface according to the token sequence. Because a token sequence of the hand-drawn interface that can be converted into GUI code is obtained from the image feature data of the hand-drawn interface, the feature data of the initial token, and the pre-trained first recurrent neural network model, a developer only needs to draw a hand-drawn interface for the GUI code to be derived automatically and the user interface development to be completed, without writing GUI code by hand. This greatly helps developers build application user interfaces with ease and improves application development efficiency.
Description
Technical field
This disclosure relates to the field of mobile Internet technologies, and in particular to a user interface generation method and device.
Background art
With the development of the mobile Internet, applications have grown explosively. The user interface of an application, as the medium for interaction and information exchange between the system and the user, directly affects user stickiness through its usability, flexibility, complexity and reliability.
In the related art, developers generate the user interface of an application by writing a large amount of GUI code. When the user interface needs to be updated, developers must again spend considerable time and effort studying how to modify the GUI code. Generating a user interface by writing GUI code therefore requires developers to have strong programming skills and consumes a large amount of their time and effort, which lowers application development efficiency. How to help developers build application user interfaces with ease has thus become an urgent technical problem.
Summary of the invention
The disclosure is intended to solve at least one of the technical problems in the related art.
A first objective of the disclosure is to provide a user interface generation method.
A second objective of the disclosure is to provide a user interface generation device.
A third objective of the disclosure is to provide an electronic device.
A fourth objective of the disclosure is to provide a computer-readable storage medium.
To this end, an embodiment of a first aspect of the disclosure provides a user interface generation method, including:
obtaining image feature data of a hand-drawn interface corresponding to a user interface to be generated, and obtaining feature data of a predetermined initial token;
determining a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model; and
generating the user interface according to the token sequence.
In the user interface generation method of the embodiment of the disclosure, image feature data of a hand-drawn interface corresponding to the user interface to be generated is obtained, together with feature data of a predetermined initial token; the token sequence of the hand-drawn interface is determined according to the image feature data, the feature data of the initial token and the pre-trained first recurrent neural network model; and the user interface is generated according to the token sequence. A token sequence that can be converted into GUI code is thus obtained from the image feature data of the hand-drawn interface, the feature data of the initial token and the pre-trained first recurrent neural network model, so a developer only needs to draw a hand-drawn interface for the GUI code to be derived automatically and the user interface development to be completed, without writing GUI code by hand. This greatly helps developers build application user interfaces with ease and improves application development efficiency.
To this end, an embodiment of a second aspect of the disclosure provides a user interface generation device, including:
an obtaining module, configured to obtain image feature data of a hand-drawn interface corresponding to a user interface to be generated, and to obtain feature data of a predetermined initial token;
a determining module, configured to determine a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model; and
a generating module, configured to convert the token sequence into GUI code and to generate the user interface according to the GUI code.
In the user interface generation device of the embodiment of the disclosure, image feature data of a hand-drawn interface corresponding to the user interface to be generated is obtained, together with feature data of a predetermined initial token; the token sequence of the hand-drawn interface is determined according to the image feature data, the feature data of the initial token and the pre-trained first recurrent neural network model; and the user interface is generated according to the token sequence. A token sequence that can be converted into GUI code is thus obtained from the image feature data of the hand-drawn interface, the feature data of the initial token and the pre-trained first recurrent neural network model, so a developer only needs to draw a hand-drawn interface for the GUI code to be derived automatically and the user interface development to be completed, without writing GUI code by hand. This greatly helps developers build application user interfaces with ease and improves application development efficiency.
To this end, an embodiment of a third aspect of the disclosure provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the user interface generation method described above.
To this end, an embodiment of a fourth aspect of the disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the user interface generation method described above.
Additional aspects and advantages of the disclosure will be set forth in part in the following description, and in part will become apparent from the description or be learned through practice of the disclosure.
Brief description of the drawings
The above and/or additional aspects and advantages of the disclosure will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a user interface generation method according to an embodiment of the disclosure;
Fig. 2 is a schematic flowchart of another user interface generation method according to an embodiment of the disclosure;
Fig. 3 is a schematic flowchart of yet another user interface generation method according to an embodiment of the disclosure;
Fig. 4 is a schematic structural diagram of a user interface generation device according to an embodiment of the disclosure;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed description of embodiments
Embodiments of the disclosure are described in detail below, and examples of the embodiments are shown in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the disclosure, and should not be construed as limiting the disclosure.
The user interface generation method and the user interface generation device of the embodiments of the disclosure are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a user interface generation method according to an embodiment of the disclosure. This embodiment provides a user interface generation method whose execution subject is a user interface generation device composed of hardware and/or software.
As shown in Fig. 1, the user interface generation method includes the following steps.
S101: obtain image feature data of a hand-drawn interface corresponding to a user interface to be generated, and obtain feature data of a predetermined initial token.
In this embodiment, the user interface is the interface presented to the user when an application runs, and the hand-drawn interface is a sketch of that user interface. The hand-drawn interface may be a photograph of a sketch drawn by the developer on paper, or an electronic sketch drawn by the developer with drawing software, but is not limited thereto.
In a possible implementation, obtaining the image feature data of the hand-drawn interface corresponding to the user interface to be generated is carried out as follows: receive the hand-drawn interface corresponding to the user interface to be generated, perform matrixization on the hand-drawn interface to obtain an image matrix of the hand-drawn interface, and input the image matrix into a convolutional neural network model to obtain the image feature data of the hand-drawn image, where the convolutional neural network model is obtained by training a convolutional neural network on training hand-drawn interfaces.
In this embodiment, after the hand-drawn interface is received, it is converted into a matrix to obtain the corresponding image matrix, which can be understood as the digital image data of the hand-drawn interface. The rows of the matrix correspond to the image height (in pixels), the columns correspond to the image width (in pixels), the elements correspond to the image pixels, and the value of each element is the grayscale value of the corresponding pixel.
So that the image matrix represents the hand-drawn interface more accurately, image preprocessing is performed on the hand-drawn interface before the matrixization; the preprocessing includes, but is not limited to, noise removal and image binarization.
In this embodiment, after the image matrix characterizing the hand-drawn interface is obtained, it is input into the pre-trained convolutional neural network model to extract the image feature data of the hand-drawn interface.
Because convolutional neural networks (CNN) excel at image processing, this embodiment uses a convolutional neural network model to extract the image feature data of the hand-drawn interface, which improves both the accuracy and the efficiency of the extraction.
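A minimal sketch of this step, not the patented implementation: a photographed hand-drawn sketch is binarized into an image matrix and passed through a small convolutional encoder that stands in for the trained CNN model. The architecture, sizes, and file name are assumptions for illustration.

```python
import numpy as np
import torch
import torch.nn as nn
from PIL import Image

def sketch_to_matrix(path, size=(256, 256), threshold=128):
    """Grayscale, resize and binarize the hand-drawn sketch (the noise-removal
    and binarization preprocessing mentioned above)."""
    img = Image.open(path).convert("L").resize(size)
    matrix = np.asarray(img, dtype=np.float32)          # rows = height, columns = width
    return (matrix < threshold).astype(np.float32)      # binarize: strokes -> 1

class SketchCNN(nn.Module):
    """Toy convolutional encoder standing in for the trained CNN model."""
    def __init__(self, feature_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feature_dim),
        )

    def forward(self, x):                # x: (batch, 1, H, W)
        return self.features(x)          # -> (batch, feature_dim) image feature data

matrix = sketch_to_matrix("sketch.jpg")                         # hypothetical file name
image_features = SketchCNN()(torch.from_numpy(matrix)[None, None])
```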
The convolutional neural network model in this embodiment is obtained by training a convolutional neural network on training hand-drawn interfaces; for how to train a convolutional neural network, refer to the related art, which is not repeated here. A training hand-drawn interface can be understood as a training sample of the convolutional neural network model; it may be a photograph of a sketch drawn by the developer on paper or an electronic sketch drawn with drawing software, but is not limited thereto. It can be understood that the more training hand-drawn interfaces there are, the higher the precision of the convolutional neural network model; the total number of training hand-drawn interfaces is set by the developer according to the actual situation.
A brief introduction to tokens: a token is a compiler-theory term and can be understood as the smallest unit that makes up source code. In the compilation phase, the process in which the lexical analyzer reads the character stream of the source code and outputs tokens is called tokenization; for more details on tokens, refer to the related art, which is not repeated here.
Taking GUI code for a text label as an example, the whole GUI code can be parsed into the following tokens: <PAD>, <START>, <TEXT>, x, y, width, height, content, </TEXT>, <END>, where <PAD> denotes a blank and serves only as a placeholder, <START> indicates the start of the GUI code, <TEXT> represents a text element, x and y are the abscissa and ordinate of the element in the coordinate system of the application's user interface, width and height are the width and height of the element, content denotes the text content, </TEXT> indicates the end of the element, and <END> indicates the end of the GUI code.
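A minimal sketch of the lexical-analysis step just described: a text-widget GUI snippet is split into the token stream bracketed by <START> and <END>. The snippet syntax and regular expression are assumed examples, not the actual GUI language of the patent.

```python
import re

def tokenize(gui_code: str):
    # One token per angle-bracket tag, quoted string or number.
    pattern = r"</?[A-Za-z]+>|\"[^\"]*\"|\d+"
    return ["<START>"] + re.findall(pattern, gui_code) + ["<END>"]

snippet = '<TEXT> 10 20 120 40 "Login" </TEXT>'
print(tokenize(snippet))
# ['<START>', '<TEXT>', '10', '20', '120', '40', '"Login"', '</TEXT>', '<END>']
```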
In this embodiment, the first recurrent neural network model outputs the token sequence of the hand-drawn interface. To start the output of this token sequence, an initial token must be determined in advance. The initial token may be a system default or one set by the developer, but is not limited thereto; for example, the initial token is START.
In this embodiment, after the predetermined initial token is obtained, the process of extracting the feature data of the initial token is entered.
In a possible implementation, obtaining the feature data of the predetermined initial token is carried out as follows: the predetermined initial token is input into a second recurrent neural network model to obtain the feature data of the initial token. This embodiment extracts the feature data of a token with a pre-trained second recurrent neural network model, which improves the precision of the token feature data.
In a possible implementation, the second recurrent neural network model is trained as follows: the GUI code corresponding to each training hand-drawn interface is obtained and lexically analyzed to obtain the training tokens of each training hand-drawn interface; the second recurrent neural network is then trained with these training tokens to obtain the second recurrent neural network model.
Specifically, the GUI code of each training hand-drawn interface is prepared in advance, and lexical analysis of the GUI code yields the training tokens of each training hand-drawn interface. The training tokens are the training samples of the second recurrent neural network: each training sample is input into the second recurrent neural network and its parameters are adjusted to obtain the second recurrent neural network model.
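A minimal sketch of the second recurrent network: it maps a token (by vocabulary index) to a feature vector. The vocabulary, layer sizes, and the next-token head used during its training are illustrative assumptions, not details disclosed by the patent.

```python
import torch
import torch.nn as nn

vocab = ["<PAD>", "<START>", "<END>", "<TEXT>", "</TEXT>",
         "x", "y", "width", "height", "content"]            # assumed toy vocabulary
tok2id = {t: i for i, t in enumerate(vocab)}

class TokenRNN(nn.Module):
    """Stands in for the trained second recurrent neural network model."""
    def __init__(self, vocab_size, feat_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, feat_dim)
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.out = nn.Linear(feat_dim, vocab_size)           # next-token head used when training it

    def token_features(self, token_ids):                     # token_ids: (batch, seq_len)
        output, _ = self.gru(self.embed(token_ids))
        return output[:, -1, :]                              # feature data of the last token

token_rnn = TokenRNN(len(vocab))
start_features = token_rnn.token_features(torch.tensor([[tok2id["<START>"]]]))
```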
S102: determine the token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and the pre-trained first recurrent neural network model.
Because a recurrent neural network (RNN) can reach very high precision when processing sequential data and is the preferred network for such data, this embodiment uses the pre-trained first recurrent neural network model to output the token sequence of the hand-drawn interface; the token sequence includes N tokens sorted by output order, where N is an integer greater than 1.
In this embodiment, after the image feature data of the hand-drawn interface and the feature data of the initial token are extracted, the first recurrent neural network model can begin outputting tokens one by one; sorting the tokens in output order yields the token sequence.
Taking START as the initial token as an example: the image feature data and the feature data of START are input into the first recurrent neural network model, which outputs the 1st token; the image feature data and the feature data of the 1st token are then input into the model, which outputs the 2nd token; and so on, the image feature data and the feature data of the previous output token are input into the model to obtain the next output token, until the N-th token is obtained. The 1st token, the 2nd token, ..., the N-th token, sorted by output order, form the token sequence.
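A minimal sketch of this decoding loop. DecoderRNN stands in for the trained first recurrent network; token_rnn and image_features are assumed to come from the TokenRNN and SketchCNN sketches above, and all sizes, names, and the greedy choice of the next token are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DecoderRNN(nn.Module):
    """Stands in for the trained first recurrent neural network model."""
    def __init__(self, img_dim=256, tok_dim=128, hidden_dim=256, vocab_size=10):
        super().__init__()
        self.gru = nn.GRU(img_dim + tok_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)          # scores over the token vocabulary

    def forward(self, step_in, hidden=None):                  # step_in: (batch, 1, img_dim + tok_dim)
        output, hidden = self.gru(step_in, hidden)
        return self.out(output), hidden                       # (batch, 1, vocab_size), hidden state

def decode_token_sequence(decoder, token_rnn, image_features, tok2id, id2tok, max_len=100):
    """Emit tokens one by one until <END>, then return them in output order."""
    # id2tok = {i: t for t, i in tok2id.items()} inverts the earlier vocabulary.
    tokens, hidden = [], None
    prev_id = torch.tensor([[tok2id["<START>"]]])             # the 0th token is the initial token
    for _ in range(max_len):
        prev_feat = token_rnn.token_features(prev_id)          # feature data of the previous token
        step_in = torch.cat([image_features, prev_feat], dim=1).unsqueeze(1)
        logits, hidden = decoder(step_in, hidden)
        next_id = logits[:, -1, :].argmax(dim=1)               # greedy choice of the next token
        token = id2tok[int(next_id)]
        tokens.append(token)
        if token == "<END>":                                   # the end token closes the sequence
            break
        prev_id = next_id.unsqueeze(0)                         # feed the i-th token back in
    return tokens
```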
S103: generate the user interface according to the token sequence.
Specifically, the token sequence can be understood as tokenized GUI code that conforms to the lexical rules, and parsing the token sequence yields the corresponding GUI code. After the GUI code is obtained, it is compiled in a mobile application development framework to generate the corresponding user interface.
To compile the GUI code in the mobile application development framework, the GUI code is embedded in a page template provided by the framework, and the framework compiles the page template with the embedded GUI code to generate the corresponding user interface.
Taking Flutter as an example of a mobile application development framework: Flutter is Google's mobile UI framework, which can quickly build high-quality native user interfaces on iOS and Android, and it provides page templates into which the generated GUI code can be embedded.
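A minimal sketch of step S103's assembly stage under stated assumptions: the token sequence is joined back into GUI code and spliced into a page template at a placeholder. The template string and placeholder are stand-ins for illustration, not the actual Flutter page template referred to above.

```python
PAGE_TEMPLATE = """class GeneratedPage {
  // ... framework scaffolding (assumed placeholder template) ...
  {{GUI_CODE}}
}
"""

def tokens_to_gui_code(tokens):
    """Drop the control tokens and join the remaining tokens into GUI code."""
    body = [t for t in tokens if t not in ("<PAD>", "<START>", "<END>")]
    return " ".join(body)

def embed_in_template(tokens):
    return PAGE_TEMPLATE.replace("{{GUI_CODE}}", tokens_to_gui_code(tokens))

print(embed_in_template(["<START>", "<TEXT>", "10", "20", "120", "40",
                         '"Login"', "</TEXT>", "<END>"]))
```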
In the user interface generation method provided by the disclosure, image feature data of a hand-drawn interface corresponding to the user interface to be generated is obtained, together with feature data of a predetermined initial token; the token sequence of the hand-drawn interface is determined according to the image feature data, the feature data of the initial token and the pre-trained first recurrent neural network model; and the user interface is generated according to the token sequence. A token sequence that can be converted into GUI code is thus obtained, so a developer only needs to draw a hand-drawn interface for the GUI code to be derived automatically and the user interface development to be completed, without writing GUI code by hand. This greatly helps developers build application user interfaces with ease and improves application development efficiency.
Further, with reference to Fig. 2 and on the basis of the embodiment shown in Fig. 1, the token sequence includes N tokens sorted by output order, where N is an integer greater than 1, and step S102 is specifically implemented as follows.
S1021: for the i-th token, input the image feature data and the feature data of the (i-1)-th token into the pre-trained first recurrent neural network model, and output the i-th token.
Specifically, i = 1, 2, 3, ..., N, and the 0th token is the initial token. In this embodiment, the image feature data and the feature data of the initial token are input into the first recurrent neural network model, which outputs the 1st token; the image feature data and the feature data of the 1st token are input into the model, which outputs the 2nd token; and so on, the image feature data and the feature data of the (N-1)-th token are input into the first recurrent neural network model, which outputs the N-th token.
In this embodiment, the feature data of the (i-1)-th token can be extracted with the second recurrent neural network model to improve the precision of the token feature data. Specifically, the (i-1)-th token is input into the second recurrent neural network model to obtain the feature data of the (i-1)-th token.
S1022: judge whether the i-th token is the end token; if not, perform step S1023; if so, perform step S1024.
In this embodiment, the end token may be a system default or one set by the developer, but is not limited thereto; for example, the end token is END. When the token output by the first recurrent neural network model is the end token, the token output this time is the last token in the token sequence; when the output token is not the end token, the next token in the token sequence still needs to be output. By judging whether the token output by the first recurrent neural network model is the end token, this embodiment obtains a complete token sequence, from which complete GUI code can subsequently be obtained, which benefits the development of the user interface.
S1023: increment the count of i by one.
For example, if the 1st output token is not END, i is updated to 2; if the 2nd output token is not END, i is updated to 3; and so on, i is updated to i+1 until the i-th token is END.
S1024: sort the 1st token to the i-th token by output order to obtain the token sequence.
For example, when the i-th token is END, the N tokens from the 1st token to the i-th token are sorted to obtain the token sequence.
In the user interface generation method provided by the disclosure, a token sequence of the hand-drawn interface that can be converted into GUI code is obtained from the image feature data of the hand-drawn interface, the feature data of the initial token and the pre-trained first recurrent neural network model, so a developer only needs to draw a hand-drawn interface for the GUI code to be derived automatically and the user interface development to be completed, without writing GUI code by hand. This greatly helps developers build application user interfaces with ease and improves application development efficiency.
Further, with reference to Fig. 3 and on the basis of the embodiment shown in Fig. 1, the method may also include the following steps.
S104: obtain training samples.
S105: train the first recurrent neural network according to the training samples to obtain the first recurrent neural network model.
The first recurrent neural network model in this embodiment is used to obtain the token sequence of the hand-drawn interface, so each training sample obtained needs to include a training hand-drawn interface and a corresponding training token sequence. The training hand-drawn interface serves as the input data of the first recurrent neural network and the training token sequence as its output data, and the first recurrent neural network is trained to obtain the first recurrent neural network model. It can be understood that the more training samples there are, the higher the precision of the first recurrent neural network model; the total number of training samples is set by the developer according to the actual situation.
In practice, the GUI code corresponding to the training hand-drawn interface is prepared in advance, and lexical analysis of the GUI code yields a training token sequence containing M training tokens, where M is an integer greater than 1; that is, the training token sequence is formed by the M training tokens ordered according to the lexical analysis.
Since the training token sequence corresponding to each training sample contains M training tokens ordered by the lexical analysis, M-1 training passes are carried out when training the first recurrent neural network with each training sample.
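A minimal sketch of preparing one such training sample: the GUI code prepared for a training hand-drawn interface is lexically analysed into its M training tokens and paired with the image matrix of that sketch. tokenize, sketch_to_matrix, tok2id and the file name are carried over from the earlier sketches and are assumptions; unknown tokens fall back to <PAD> in this toy vocabulary.

```python
import torch

gui_code = '<TEXT> 10 20 120 40 "Login" </TEXT>'                  # prepared in advance
training_tokens = tokenize(gui_code)                               # the M training tokens, in lexical order
token_ids = torch.tensor([tok2id.get(t, tok2id["<PAD>"]) for t in training_tokens],
                         dtype=torch.long)
training_sample = (torch.from_numpy(sketch_to_matrix("train_sketch.jpg")), token_ids)
```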
Further, the training token sequence corresponding to each training sample includes M sorted training tokens, where M is an integer greater than 1, and step S105 is specifically implemented by performing M-1 training passes on the first recurrent neural network for each training sample.
The m-th training pass, where m = 1, 2, 3, ..., M-1, includes the following steps.
S51: obtain the feature data of the training hand-drawn interface and the feature data of the m-th training token.
In this embodiment, the feature data of the training hand-drawn interface can be obtained with the convolutional neural network model and the feature data of the m-th training token with the second recurrent neural network model, but this is not limiting.
S52: train the first recurrent neural network according to the feature data of the training hand-drawn interface and the feature data of the m-th training token, to obtain the first recurrent neural network model.
Specifically, the feature data of the training hand-drawn interface is used as the input data of the first recurrent neural network and the feature data of the m-th training token as its output data, and the first recurrent neural network is trained to obtain the first recurrent neural network model.
S53: use the (m+1)-th training token as the expected output token of this training pass, and use the token output by the first recurrent neural network model as the actual output token of this training pass.
S54: determine the gap between the actual output token and the expected output token.
In this embodiment, to avoid slowing down the learning of the first recurrent neural network and to speed up its training, a cross-entropy cost function can be used to measure the gap between the actual output token and the expected output token, but this is not limiting. For more on the cross-entropy cost function, refer to the related art, which is not repeated here.
S55: adjust the parameters of the first recurrent neural network model according to the gap.
In this embodiment, to optimize the first recurrent neural network model, the parameters of the first recurrent neural network model can be adjusted according to the gap on the basis of the backpropagation (BP) algorithm, but this is not limiting. For more on the backpropagation algorithm, refer to the related art, which is not repeated here.
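A minimal sketch of one training pass over steps S51-S55 under stated assumptions: the m-th training token is fed in, the (m+1)-th is the expected output, cross-entropy measures the gap, and backpropagation adjusts the parameters. decoder, token_rnn and cnn are the DecoderRNN, TokenRNN and SketchCNN sketches above; optimizer is assumed to be, for example, torch.optim.Adam over their parameters.

```python
import torch
import torch.nn as nn

def train_on_sample(decoder, token_rnn, cnn, optimizer, sketch_matrix, token_ids):
    """sketch_matrix: (H, W) float tensor; token_ids: (M,) long tensor of training tokens."""
    criterion = nn.CrossEntropyLoss()                           # cross-entropy cost function
    image_features = cnn(sketch_matrix[None, None])             # (1, img_dim)
    hidden, total_loss = None, 0.0
    for m in range(len(token_ids) - 1):                         # the M-1 training steps
        prev_feat = token_rnn.token_features(token_ids[m].view(1, 1))
        step_in = torch.cat([image_features, prev_feat], dim=1).unsqueeze(1)
        logits, hidden = decoder(step_in, hidden)               # actual output scores, (1, 1, vocab)
        target = token_ids[m + 1].view(1)                       # expected output token
        total_loss = total_loss + criterion(logits[:, -1, :], target)
    optimizer.zero_grad()
    total_loss.backward()                                       # backpropagation of the gap
    optimizer.step()                                            # adjust the parameters
    return float(total_loss)
```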
In the user interface generation method provided by the disclosure, the first recurrent neural network model is trained in advance, and a token sequence of the hand-drawn interface that can be converted into GUI code is obtained from the image feature data of the hand-drawn interface, the feature data of the initial token and the pre-trained first recurrent neural network model, so that a developer only needs to draw a hand-drawn interface for the GUI code to be derived automatically and the user interface development to be completed, without writing GUI code by hand. This greatly helps developers build application user interfaces with ease and improves application development efficiency.
Fig. 4 is a schematic structural diagram of a user interface generation device according to an embodiment of the disclosure. This embodiment provides a user interface generation device, which is the execution subject of the user interface generation method and is composed of hardware and/or software. As shown in Fig. 4, the user interface generation device includes an obtaining module 11, a determining module 12 and a generating module 13.
The obtaining module 11 is configured to obtain image feature data of a hand-drawn interface corresponding to a user interface to be generated, and to obtain feature data of a predetermined initial token.
The determining module 12 is configured to determine the token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token and the pre-trained first recurrent neural network model.
The generating module 13 is configured to generate the user interface according to the token sequence.
Further, the token sequence includes N tokens sorted by output order, where N is an integer greater than 1, and the determining module 12 is specifically configured, for the i-th token, where i = 1, 2, 3, ..., N and the 0th token is the initial token, to:
input the image feature data and the feature data of the (i-1)-th token into the pre-trained first recurrent neural network model, and output the i-th token;
judge whether the i-th token is the end token, and if not, increment the count of i by one;
if so, sort the 1st token to the i-th token by output order to obtain the token sequence.
Further, the device also includes a first training module.
The first training module is configured to obtain training samples, where each training sample includes a training hand-drawn interface and a corresponding training token sequence, and to train the first recurrent neural network according to the training samples to obtain the first recurrent neural network model.
Further, the training token sequence corresponding to each training sample includes M sorted training tokens, where M is an integer greater than 1, and the first training module is specifically configured to:
perform M-1 training passes on the first recurrent neural network according to each training sample;
obtain the feature data of the training hand-drawn interface and the feature data of the m-th training token, where m = 1, 2, 3, ..., M-1;
train the first recurrent neural network according to the feature data of the training hand-drawn interface and the feature data of the m-th training token, to obtain the first recurrent neural network model;
use the (m+1)-th training token as the expected output token of this training pass, and the token output by the first recurrent neural network model as the actual output token of this training pass;
determine the gap between the actual output token and the expected output token; and
adjust the parameters of the first recurrent neural network model according to the gap.
Further, the obtaining module 11 includes a first obtaining unit 111.
The first obtaining unit 111 is configured to receive the hand-drawn interface corresponding to the user interface to be generated, perform matrixization on the hand-drawn interface to obtain an image matrix of the hand-drawn interface, and input the image matrix into the convolutional neural network model to obtain the image feature data of the hand-drawn image, where the convolutional neural network model is obtained by training a convolutional neural network on training hand-drawn interfaces.
Further, the obtaining module 11 includes a second obtaining unit 112.
The second obtaining unit 112 is configured to input the predetermined initial token into the second recurrent neural network model to obtain the feature data of the initial token.
Further, the device also includes a second training unit.
The second training unit is configured to obtain the GUI code corresponding to each training hand-drawn interface, perform lexical analysis on the GUI code to obtain the training tokens of each training hand-drawn interface, and train the second recurrent neural network with the training tokens to obtain the second recurrent neural network model.
It should be noted that the foregoing explanation of the user interface generation method embodiments also applies to the user interface generation device of this embodiment and is not repeated here.
In the user interface generation device provided by the disclosure, image feature data of a hand-drawn interface corresponding to the user interface to be generated is obtained, together with feature data of a predetermined initial token; the token sequence of the hand-drawn interface is determined according to the image feature data, the feature data of the initial token and the pre-trained first recurrent neural network model; and the user interface is generated according to the token sequence. A token sequence that can be converted into GUI code is thus obtained, so a developer only needs to draw a hand-drawn interface for the GUI code to be derived automatically and the user interface development to be completed, without writing GUI code by hand. This greatly helps developers build application user interfaces with ease and improves application development efficiency.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. The electronic device includes a memory 1001, a processor 1002, and a computer program stored in the memory 1001 and executable on the processor 1002.
The processor 1002, when executing the program, implements the user interface generation method provided in the foregoing embodiments.
Further, the electronic device also includes a communication interface 1003 for communication between the memory 1001 and the processor 1002.
The memory 1001 is configured to store the computer program executable on the processor 1002; it may include a high-speed RAM memory and may also include a non-volatile memory, for example at least one magnetic disk memory.
The processor 1002 is configured to implement, when executing the program, the user interface generation method described in the foregoing embodiments.
If the memory 1001, the processor 1002 and the communication interface 1003 are implemented independently, the communication interface 1003, the memory 1001 and the processor 1002 can be connected to and communicate with each other through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in Fig. 5, but this does not mean that there is only one bus or only one type of bus.
Optionally, in a specific implementation, if the memory 1001, the processor 1002 and the communication interface 1003 are integrated on one chip, they can communicate with each other through an internal interface.
The processor 1002 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the disclosure.
The disclosure also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the user interface generation method described above.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples" and the like means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the disclosure. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where there is no mutual contradiction, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the disclosure, "a plurality of" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the disclosure includes other implementations in which the functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, which should be understood by those skilled in the art to which the embodiments of the disclosure belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, device or equipment (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, device or equipment). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit the program for use by or in combination with the instruction execution system, device or equipment. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable way when necessary, and can then be stored in a computer memory.
It should be understood that each part of the disclosure can be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods can be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any of the following technologies known in the art, or a combination thereof, can be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing relevant hardware through a program, which can be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the disclosure can be integrated in one processing module, or each unit can exist physically on its own, or two or more units can be integrated in one module. The integrated module can be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc or the like. Although the embodiments of the disclosure have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the disclosure, and those of ordinary skill in the art can change, modify, replace and vary the above embodiments within the scope of the disclosure.
Claims (10)
1. A user interface generation method, characterized by comprising:
obtaining image feature data of a hand-drawn interface corresponding to a user interface to be generated, and obtaining feature data of a predetermined initial token;
determining a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model; and
generating the user interface according to the token sequence.
2. The method according to claim 1, characterized in that the token sequence comprises N tokens sorted by output order, N being an integer greater than 1, and determining the token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token and the pre-trained first recurrent neural network model comprises:
inputting the image feature data and the feature data of the (i-1)-th token into the pre-trained first recurrent neural network model, and outputting the i-th token, wherein i = 1, 2, 3, ..., N, and the 0th token is the initial token;
judging whether the i-th token is an end token, and if not, incrementing the count of i by one;
if so, sorting the 1st token to the i-th token by output order to obtain the token sequence.
3. The method according to claim 1, characterized by further comprising:
obtaining training samples, wherein each training sample comprises a training hand-drawn interface and a corresponding training token sequence; and
training the first recurrent neural network according to the training samples to obtain the first recurrent neural network model.
4. The method according to claim 3, characterized in that the training token sequence corresponding to each training sample comprises M sorted training tokens, M being an integer greater than 1, and training the first recurrent neural network according to the training samples to obtain the first recurrent neural network model comprises:
performing M-1 training passes on the first recurrent neural network according to each training sample;
obtaining feature data of the training hand-drawn interface and feature data of the m-th training token, wherein m = 1, 2, 3, ..., M-1;
training the first recurrent neural network according to the feature data of the training hand-drawn interface and the feature data of the m-th training token, to obtain the first recurrent neural network model;
using the (m+1)-th training token as an expected output token of this training pass, and the token output by the first recurrent neural network model as an actual output token of this training pass;
determining a gap between the actual output token and the expected output token; and
adjusting parameters of the first recurrent neural network model according to the gap.
5. The method according to claim 1, characterized in that obtaining the image feature data of the hand-drawn interface corresponding to the user interface to be generated comprises:
receiving the hand-drawn interface corresponding to the user interface to be generated, and performing matrixization on the hand-drawn interface to obtain an image matrix of the hand-drawn interface; and
inputting the image matrix into a convolutional neural network model to obtain the image feature data of the hand-drawn image, wherein the convolutional neural network model is obtained by training a convolutional neural network on training hand-drawn interfaces.
6. The method according to claim 1, characterized in that obtaining the feature data of the predetermined initial token comprises:
inputting the predetermined initial token into a second recurrent neural network model to obtain the feature data of the initial token.
7. The method according to claim 6, characterized by further comprising:
obtaining GUI code corresponding to each training hand-drawn interface, and performing lexical analysis on the GUI code to obtain training tokens of each training hand-drawn interface; and
training a second recurrent neural network with the training tokens to obtain the second recurrent neural network model.
8. A user interface generation device, characterized by comprising:
an obtaining module, configured to obtain image feature data of a hand-drawn interface corresponding to a user interface to be generated, and to obtain feature data of a predetermined initial token;
a determining module, configured to determine a token sequence of the hand-drawn interface according to the image feature data, the feature data of the initial token, and a pre-trained first recurrent neural network model; and
a generating module, configured to generate the user interface according to the token sequence.
9. An electronic device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the user interface generation method according to any one of claims 1-7.
10. A computer-readable storage medium storing a computer program, characterized in that the program, when executed by a processor, implements the user interface generation method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811425428.5A CN109656554B (en) | 2018-11-27 | 2018-11-27 | User interface generation method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811425428.5A CN109656554B (en) | 2018-11-27 | 2018-11-27 | User interface generation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109656554A true CN109656554A (en) | 2019-04-19 |
CN109656554B CN109656554B (en) | 2022-04-15 |
Family
ID=66111548
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811425428.5A Active CN109656554B (en) | 2018-11-27 | 2018-11-27 | User interface generation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109656554B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110262825A (en) * | 2019-06-24 | 2019-09-20 | 北京字节跳动网络技术有限公司 | Hot update method, device, electronic equipment and readable storage medium storing program for executing |
CN110502236A (en) * | 2019-08-07 | 2019-11-26 | 山东师范大学 | Based on the decoded front-end code generation method of Analysis On Multi-scale Features, system and equipment |
CN110968299A (en) * | 2019-11-20 | 2020-04-07 | 北京工业大学 | Front-end engineering code generation method based on hand-drawn webpage image |
CN111176960A (en) * | 2019-10-22 | 2020-05-19 | 腾讯科技(深圳)有限公司 | User operation behavior tracking method, device, equipment and storage medium |
CN112527296A (en) * | 2020-12-21 | 2021-03-19 | Oppo广东移动通信有限公司 | User interface customizing method and device, electronic equipment and storage medium |
CN112685033A (en) * | 2020-12-24 | 2021-04-20 | 北京浪潮数据技术有限公司 | Method and device for automatically generating user interface component and computer readable storage medium |
CN113377356A (en) * | 2021-06-11 | 2021-09-10 | 四川大学 | Method, device, equipment and medium for generating user interface prototype code |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150007067A1 (en) * | 2013-06-28 | 2015-01-01 | Tencent Technology (Shenzhen) Company Limited | Method and system for generating a user interface |
CN105930159A (en) * | 2016-04-20 | 2016-09-07 | 中山大学 | Image-based interface code generation method and system |
CN108288078A (en) * | 2017-12-07 | 2018-07-17 | 腾讯科技(深圳)有限公司 | Character identifying method, device and medium in a kind of image |
CN108664473A (en) * | 2018-05-11 | 2018-10-16 | 平安科技(深圳)有限公司 | Recognition methods, electronic device and the readable storage medium storing program for executing of text key message |
- 2018-11-27: Application CN201811425428.5A filed in China; granted as CN109656554B (status: Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150007067A1 (en) * | 2013-06-28 | 2015-01-01 | Tencent Technology (Shenzhen) Company Limited | Method and system for generating a user interface |
CN105930159A (en) * | 2016-04-20 | 2016-09-07 | 中山大学 | Image-based interface code generation method and system |
CN108288078A (en) * | 2017-12-07 | 2018-07-17 | 腾讯科技(深圳)有限公司 | Character identifying method, device and medium in a kind of image |
CN108664473A (en) * | 2018-05-11 | 2018-10-16 | 平安科技(深圳)有限公司 | Recognition methods, electronic device and the readable storage medium storing program for executing of text key message |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110262825A (en) * | 2019-06-24 | 2019-09-20 | 北京字节跳动网络技术有限公司 | Hot update method, device, electronic equipment and readable storage medium storing program for executing |
CN110262825B (en) * | 2019-06-24 | 2023-12-05 | 北京字节跳动网络技术有限公司 | Thermal updating method, thermal updating device, electronic equipment and readable storage medium |
CN110502236A (en) * | 2019-08-07 | 2019-11-26 | 山东师范大学 | Based on the decoded front-end code generation method of Analysis On Multi-scale Features, system and equipment |
CN110502236B (en) * | 2019-08-07 | 2022-10-25 | 山东师范大学 | Front-end code generation method, system and equipment based on multi-scale feature decoding |
CN111176960A (en) * | 2019-10-22 | 2020-05-19 | 腾讯科技(深圳)有限公司 | User operation behavior tracking method, device, equipment and storage medium |
CN111176960B (en) * | 2019-10-22 | 2022-02-18 | 腾讯科技(深圳)有限公司 | User operation behavior tracking method, device, equipment and storage medium |
CN110968299A (en) * | 2019-11-20 | 2020-04-07 | 北京工业大学 | Front-end engineering code generation method based on hand-drawn webpage image |
CN112527296A (en) * | 2020-12-21 | 2021-03-19 | Oppo广东移动通信有限公司 | User interface customizing method and device, electronic equipment and storage medium |
CN112685033A (en) * | 2020-12-24 | 2021-04-20 | 北京浪潮数据技术有限公司 | Method and device for automatically generating user interface component and computer readable storage medium |
CN113377356A (en) * | 2021-06-11 | 2021-09-10 | 四川大学 | Method, device, equipment and medium for generating user interface prototype code |
CN113377356B (en) * | 2021-06-11 | 2022-11-15 | 四川大学 | Method, device, equipment and medium for generating user interface prototype code |
Also Published As
Publication number | Publication date |
---|---|
CN109656554B (en) | 2022-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109656554A (en) | User interface creating method and device | |
CN109190722A (en) | Font style based on language of the Manchus character picture migrates transform method | |
CN110414519A (en) | A kind of recognition methods of picture character and its identification device | |
CN107492135A (en) | A kind of image segmentation mask method, device and computer-readable recording medium | |
CN109902672A (en) | Image labeling method and device, storage medium, computer equipment | |
KR20200052438A (en) | Deep learning-based webtoons auto-painting programs and applications | |
CN110490182A (en) | A kind of point reads production method, system, storage medium and the electronic equipment of data | |
CN115812221A (en) | Image generation and coloring method and device | |
CN106067019A (en) | The method and device of Text region is carried out for image | |
CN109615671A (en) | A kind of character library sample automatic generation method, computer installation and readable storage medium storing program for executing | |
CN110379251A (en) | Intelligence based on touch-control clipboard assists system of practising handwriting | |
CN110399488A (en) | File classification method and device | |
CN109710258A (en) | WeChat applet interface generation method and device | |
CN113140023A (en) | Text-to-image generation method and system based on space attention | |
CN105426944A (en) | Square lattice anti-counterfeit label group, and method and system for reading square lattice anti-counterfeit label group | |
CN108171650B (en) | Chinese flower water-ink painting style stroke generation method with stroke optimization function | |
CN115908613B (en) | AI model generation method, system and storage medium based on artificial intelligence | |
CN108288064A (en) | Method and apparatus for generating picture | |
CN115687643A (en) | Method for training multi-mode information extraction model and information extraction method | |
CN115454423A (en) | Static webpage generation method and device, electronic equipment and storage medium | |
CN110991303A (en) | Method and device for positioning text in image and electronic equipment | |
CN110516125A (en) | Method, device and equipment for identifying abnormal character string and readable storage medium | |
CN111008531B (en) | Training method and device for sentence selection model, sentence selection method and device | |
CN114548102A (en) | Method and device for labeling sequence of entity text and computer readable storage medium | |
CN115130437B (en) | Intelligent document filling method and device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |