CN113867875A - Method, device, equipment and storage medium for editing and displaying marked object - Google Patents
- Publication number
- Publication number: CN113867875A (application number CN202111165884.2A)
- Authority
- CN
- China
- Prior art keywords
- configuration
- mark
- editing
- information
- markup
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/958—Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
Abstract
Embodiments of the present disclosure disclose a method, an apparatus, a device, and a computer-readable storage medium for editing a marker object. The method includes: displaying an editing interface for a marker object to be edited; in response to an operation of editing the marker object on the editing interface, acquiring configuration information of the edited marker object, generating a configuration file of the marker object based on the configuration information, and sending the configuration file to a client. The client identifies the marker object based on the configuration file and, when the marker object is identified, displays an augmented reality effect associated with the marker object. In this way, personalized configuration of marker objects is realized, the diversity of marker objects is increased, and users' experience requirements in augmented reality scenarios can be better met.
Description
Technical Field
The present disclosure relates to, but is not limited to, the field of augmented reality technology, and in particular to a method, an apparatus, a device, and a computer-readable storage medium for editing and displaying a marker object.
Background
Augmented Reality (AR) is a technology that fuses virtual information with real-world information: virtual objects are rendered into a real-time camera image so that the real environment and the virtual objects are displayed on the same interface in real time, allowing virtual objects to be loaded into, and interacted with in, the real world. In the related art, a marker object can serve as a recognition mark that triggers the display of an augmented reality effect. However, the marker objects supported in the related art are generally simple and cannot well meet user requirements.
Disclosure of Invention
Embodiments of the present disclosure provide a method, an apparatus, a device, and a computer-readable storage medium for editing and displaying a marker object.
The technical solutions of the embodiments of the present disclosure are implemented as follows:
An embodiment of the present disclosure provides a method for editing a marker object, including:
displaying an editing interface for a marker object to be edited;
in response to an operation of editing the marker object on the editing interface, acquiring configuration information of the edited marker object;
generating a configuration file of the marker object based on the configuration information and sending the configuration file to a client, where the client identifies the marker object based on the configuration file and, when the marker object is identified, displays an augmented reality effect associated with the marker object.
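For illustration only, the three-step flow above can be sketched as server-side logic in Python; every name here (the field set, the event format, `send_to_client`) is a hypothetical stand-in, not part of the disclosure:

```python
import json

def edit_marker_object(editing_events, send_to_client):
    """Hypothetical sketch of the claimed editing flow."""
    # Step 1: display an editing interface (represented here by a dict of fields).
    interface = {"name": None, "marker_type": None, "identification_image": None}

    # Step 2: in response to editing operations, collect configuration information.
    config_info = dict(interface)
    for field, value in editing_events:
        config_info[field] = value

    # Step 3: generate a configuration file and send it to the client.
    config_file = json.dumps(config_info)
    send_to_client(config_file)
    return config_file
```

The configuration file is modeled as a JSON string only because the embodiments below mention JSON exchange; any serializable format would fit the claim equally well.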
In some embodiments, acquiring the configuration information of the edited marker object in response to the operation of editing the marker object on the editing interface includes: acquiring the set marker type in response to an operation of setting the type of the marker object in a type setting area of the editing interface; displaying at least one information configuration area on the editing interface based on the marker type; and acquiring the edited configuration information of the marker object in response to an information configuration operation performed in the at least one information configuration area.
In some embodiments, the marker type is a two-dimensional type, the at least one information configuration area includes an image setting area, and the configuration information includes an identification image of the marker object. In this case, acquiring the edited configuration information of the marker object in response to the information configuration operation includes: acquiring the set identification image of the marker object in response to an image setting operation performed in the image setting area.
In some embodiments, the marker type is a three-dimensional type, the at least one information configuration area includes an image setting area and a size setting area, and the configuration information includes an identification image and size parameters of the marker object. In this case, acquiring the edited configuration information of the marker object in response to the information configuration operation includes: acquiring the set identification image of the marker object in response to an image setting operation performed in the image setting area; and acquiring the set size parameters of the marker object in response to a size setting operation performed in the size setting area.
In some embodiments, where the bottom surface of the marker object is circular, the size parameters include a bottom diameter and a height; where the bottom surface of the marker object is rectangular, the size parameters include a bottom length, a bottom width, and a height.
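The shape-dependent size parameters could be modeled, for illustration, with a small container type; the field names and validity rule are invented for the sketch, not taken from the disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MarkerSize:
    """Hypothetical container for the size parameters described above."""
    bottom_shape: str                  # "circle" or "rectangle"
    height: float
    diameter: Optional[float] = None   # circular bottom only
    length: Optional[float] = None     # rectangular bottom only
    width: Optional[float] = None      # rectangular bottom only

    def is_valid(self) -> bool:
        # A circular bottom needs a positive diameter and height; a
        # rectangular bottom needs positive length, width, and height.
        if self.bottom_shape == "circle":
            return self.diameter is not None and self.diameter > 0 and self.height > 0
        if self.bottom_shape == "rectangle":
            return all(v is not None and v > 0
                       for v in (self.length, self.width, self.height))
        return False
```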
In some embodiments, the image setting area includes an image upload control, and acquiring the set identification image of the marker object in response to the image setting operation performed in the image setting area includes: acquiring the uploaded identification image of the marker object in response to an image upload operation performed through the image upload control.
In some embodiments, the configuration information further includes a recognition algorithm for recognizing the marker object, and acquiring the edited configuration information of the marker object in response to the information configuration operation performed in the at least one information configuration area further includes one of: determining the recognition algorithm based on the identification image and the marker type; or determining the recognition algorithm based on a configured algorithm version in response to an algorithm version configuration operation performed in the at least one information configuration area.
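A minimal sketch of the two alternatives described above, with an explicitly configured algorithm version taking precedence over a type-based default; the algorithm names and the type-to-default mapping are assumptions, not from the disclosure:

```python
def choose_recognition_algorithm(marker_type, algorithm_version=None):
    """Hypothetical selection of a recognition algorithm: an explicitly
    configured version wins; otherwise the marker type decides."""
    if algorithm_version is not None:
        return f"recognizer-v{algorithm_version}"
    # Assumed mapping from marker type to a default algorithm name.
    defaults = {"2D": "planar-image-tracker", "3D": "object-tracker"}
    return defaults.get(marker_type, "planar-image-tracker")
```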
In some embodiments, the method further includes: displaying a test address of the marker object on the editing interface, where the test address is used to enter a test environment for the marker object, and the test environment identifies the marker object based on the configuration file and, when the marker object is identified, displays the augmented reality effect associated with the marker object.
In some embodiments, generating the configuration file of the marker object based on the configuration information and sending it to a client includes: performing format verification on the configuration information based on a preset verification rule to obtain a verification result; and, when the verification result indicates that the verification succeeded, generating the configuration file of the marker object based on the configuration information and sending it to the client.
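Format verification against a preset rule might look like the following sketch; the required field set and the error-message format are hypothetical:

```python
def verify_marker_config(config):
    """Hypothetical format verification: required fields must be present
    and non-empty, and the marker type must be a known value."""
    required = ("name", "marker_type", "identification_image")
    missing = [f for f in required if not config.get(f)]
    if missing:
        return False, f"missing fields: {', '.join(missing)}"
    if config["marker_type"] not in ("2D", "3D"):
        return False, f"unknown marker type: {config['marker_type']}"
    return True, "ok"
```

Only when the first element of the result is true would the configuration file be generated and sent, mirroring the verification-then-send order described above.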
An embodiment of the present disclosure provides a display method applied to a client, including:
acquiring a configuration file of a marker object, where the configuration file is generated based on configuration information of the marker object, the configuration information being obtained in response to an operation of editing the marker object on an editing interface of the marker object;
identifying the marker object based on the configuration file;
and, when the marker object is identified, displaying an augmented reality effect associated with the marker object.
An embodiment of the present disclosure provides an apparatus for editing a marker object, including:
a first display module, configured to display an editing interface for a marker object to be edited;
an editing module, configured to acquire the configuration information of the edited marker object in response to an operation of editing the marker object on the editing interface;
a generating module, configured to generate a configuration file of the marker object based on the configuration information and send the configuration file to a client, where the client identifies the marker object based on the configuration file and, when the marker object is identified, displays the augmented reality effect associated with the marker object.
An embodiment of the present disclosure provides a display device, including:
a second acquisition module, configured to acquire a configuration file of a marker object, where the configuration file is generated based on configuration information of the marker object, the configuration information being obtained in response to an operation of editing the marker object on an editing interface of the marker object;
an identification module, configured to identify the marker object based on the configuration file;
and a third display module, configured to display the augmented reality effect associated with the marker object when the marker object is identified.
An embodiment of the present disclosure provides an electronic device, including: a display screen; a memory storing an executable computer program; and a processor configured to, in conjunction with the display screen, implement the above editing method or display method for a marker object when executing the executable computer program stored in the memory.
An embodiment of the present disclosure provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the above editing method or display method for a marker object.
In the embodiments of the present disclosure, an editing interface for a marker object to be edited is displayed; in response to an operation of editing the marker object on the editing interface, the configuration information of the edited marker object is acquired; a configuration file of the marker object is generated based on the configuration information and sent to the client; and the client identifies the marker object based on the configuration file and, when the marker object is identified, displays the augmented reality effect associated with it. In this way, the user can edit the marker object on a visual editing interface according to actual requirements, and the generated configuration file enables the client to identify the marker object and display the associated augmented reality effect. Personalized configuration of marker objects is thus realized, which increases the diversity of marker objects and better meets users' experience requirements in augmented reality scenarios.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic implementation flowchart of a method for editing a marker object according to an embodiment of the present disclosure;
Fig. 2 is a schematic implementation flowchart of a method for editing a marker object according to an embodiment of the present disclosure;
Fig. 3 is a schematic implementation flowchart of a method for editing a marker object according to an embodiment of the present disclosure;
Fig. 4 is a schematic implementation flowchart of a method for editing a marker object according to an embodiment of the present disclosure;
Fig. 5 is a schematic implementation flowchart of a method for editing a marker object according to an embodiment of the present disclosure;
Fig. 6 is a schematic implementation flowchart of a display method according to an embodiment of the present disclosure;
Fig. 7A is a schematic diagram of a marker object management interface according to an embodiment of the present disclosure;
Fig. 7B is a schematic diagram of an editing interface for a marker object according to an embodiment of the present disclosure;
Fig. 8 is a schematic structural diagram of an apparatus for editing a marker object according to an embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of a display device according to an embodiment of the present disclosure;
Fig. 10 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the purpose, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present disclosure; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein is for the purpose of describing embodiments of the disclosure only and is not intended to be limiting of the disclosure.
Before the embodiments of the present disclosure are described in further detail, the terms and expressions used in the embodiments are explained; the following explanations apply to these terms and expressions as used herein.
1) A Mini Program (also called a Web Program or applet) is a program developed in a front-end language (e.g., JavaScript) that implements a service within a Hypertext Markup Language (HTML) page. It is downloaded by a client (e.g., a browser, or any client with an embedded browser core) over a network (e.g., the Internet) and interpreted and executed in the client's browser environment, which saves the step of installing it in the client. For example, an applet implementing a singing service may be downloaded and run in a social-network client.
2) Augmented Reality (AR) is a relatively new technology that promotes the fusion of real-world information and virtual-world content. Entity information that would otherwise be difficult to experience within the spatial range of the real world is simulated with computer and other technologies, and the resulting virtual content is superimposed on the real world, where it can be perceived by the human senses, producing a sensory experience beyond reality. Once the real environment and the virtual object are superimposed, they coexist in the same picture and the same space at the same time.
3) Hypertext Transfer Protocol (HTTP): HTTP is a request-response standard between a client and a server. It runs on top of the Transmission Control Protocol (TCP) and is the most widely used application-layer protocol on the Internet. Because HTTP is simple to implement, HTTP server programs can be small and communication fast: when a client requests a service from a server, it only needs to transmit the request method and path. Moreover, HTTP allows any type of data object to be transferred.
4) Interface: an interface is a standard abstraction that exposes a concrete capability; it is a predefined function with an interface address, incoming parameters, and return parameters and data. Put simply, when some data needs to be accessed, passing qualified parameters to the interface normally yields return parameters within that data range. In an application or web service, interaction between the front end and the back end is generally realized through program interfaces: the front end passes incoming parameters to the back-end server through the server's interface address; the back-end server determines from those parameters which data the front end wants, obtains the data (for example, by querying a database), and returns it; the front end then renders the corresponding page based on the returned data.
5) JavaScript Object Notation (JSON) is a lightweight data-interchange format that stores and represents data as text, completely independent of any programming language. JSON has a simple, clear hierarchical structure, is easy for humans to read and write and for machines to parse and generate, and can effectively improve network transmission efficiency. Data exchanged between the front end and the back end is usually in JSON format.
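As a concrete illustration, a marker-object configuration of the kind described in these embodiments might be exchanged between front end and back end as JSON; all field names in this payload are invented for the example:

```python
import json

# Hypothetical JSON payload a front end might send to the back-end service.
payload = """
{
  "name": "museum-poster",
  "marker_type": "3D",
  "identification_image": "poster.png",
  "size": {"bottom_shape": "rectangle", "length": 30, "width": 20, "height": 40}
}
"""

config = json.loads(payload)      # parse the text into a Python dict
assert config["size"]["height"] == 40
round_trip = json.dumps(config)   # serialize back to text for transmission
```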
6) Marker-based augmented reality effect (Marker-based AR): this approach requires a marker object (Marker) made in advance, such as a template card or two-dimensional code of a given specification and shape. The marker object is placed at a position in the real scene, and a camera recognizes it and/or estimates its pose (Pose Estimation), determining its position. A template coordinate system (Marker Coordinates) is then defined with the center of the marker object as the origin, and a mapping is established between the template coordinate system and the screen coordinate system of the display device, so that a virtual object associated with the marker object can be drawn on the display device according to this mapping, achieving the effect that the virtual object is attached to the marker object. In transforming the coordinates of the marker object from the template coordinate system to the real screen coordinate system, the coordinates in the template coordinate system may first be rotated and translated into the camera coordinate system (Camera Coordinates), and the camera coordinates then mapped to the screen coordinate system.
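The chain of transforms just described (template coordinates, then camera coordinates, then screen coordinates) can be sketched numerically with a pinhole projection; the focal length, principal point, and pose below are illustrative values, not from the disclosure:

```python
def template_to_camera(p, rotation, translation):
    """Apply a 3x3 rotation (list of rows) and a translation to a
    template-coordinate point, yielding camera coordinates."""
    x = sum(rotation[0][i] * p[i] for i in range(3)) + translation[0]
    y = sum(rotation[1][i] * p[i] for i in range(3)) + translation[1]
    z = sum(rotation[2][i] * p[i] for i in range(3)) + translation[2]
    return (x, y, z)

def camera_to_screen(p, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection from camera coordinates to pixel coordinates."""
    x, y, z = p
    return (f * x / z + cx, f * y / z + cy)

# Marker center at the template origin, camera looking at it from 2 units away.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
cam = template_to_camera((0.0, 0.0, 0.0), identity, (0.0, 0.0, 2.0))
screen = camera_to_screen(cam)   # lands at the principal point (320, 240)
```

A real implementation would obtain the rotation and translation from pose estimation rather than hard-coding them; the sketch only shows the order of the two coordinate-system hops.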
Embodiments of the present disclosure provide a method for editing a marker object that enables personalized configuration of marker objects, increases their diversity, and thus better meets users' experience requirements in augmented reality scenarios. The editing method provided by the embodiments of the present disclosure may be executed by an electronic device, which may be any suitable device with data processing capability, such as a server, a notebook computer, a tablet computer, a desktop computer, a smart television, a set-top box, or a mobile device (e.g., a mobile phone, a portable video player, a personal digital assistant, a dedicated messaging device, or a portable game device).
Fig. 1 is a schematic implementation flowchart of a method for editing a marker object provided by an embodiment of the present disclosure. As shown in Fig. 1, the method includes the following steps.
and step S101, displaying an editing interface of the mark object to be edited.
Here, the marker object may be any suitable object that is associated with a particular augmented reality effect and used to trigger its presentation. In practice, the marker object may be a two-dimensional object, such as a graphic drawn to a given specification and shape, a two-dimensional code, a barcode, a photograph containing a specific object, or an image frame containing a specific picture; it may also be a three-dimensional object, such as a stereoscopic object bearing a specific identification image, a three-dimensional exhibit, an animal, a person, a building, or a vehicle, which is not limited herein. In implementation, the marker object to be edited is determined by the user according to actual conditions and may be a newly created marker object or an already created one; this is likewise not limited herein.
The editing interface of the marker object is an interactive interface for performing editing operations on, and displaying information about, the marker object. The editing interface includes at least one region in which editing operations can be performed; such regions may include, but are not limited to, regions for editing one or more of the identification image, size parameters, name, and description information of the marker object. In implementation, the number of editable regions and their specific layout within the editing interface may be determined according to the actual situation, which is not limited herein.
The editing interface may be displayed on any suitable electronic device with an interface interaction function, such as a notebook computer, mobile phone, tablet computer, palmtop computer, personal digital assistant, digital television, or desktop computer. In implementation, the electronic device displaying the editing interface may be the same as or different from the computer device executing the marker object editing method; this is not limited herein. For example, both may be the same notebook computer, with the editing interface being an interactive interface of a client running on the notebook computer or a web page displayed in a browser running on it. As another example, the computer device executing the editing method may be a server, while the editing interface is displayed on a notebook computer, either in a client or in a browser, through which the notebook computer accesses the server.
Step S102: in response to an operation of editing the marker object on the editing interface, acquire the configuration information of the edited marker object.
Here, the configuration information of the marker object may include feature information such as the identification image and size parameters of the marker object, attribute information such as its name and description, and a recognition algorithm for recognizing the marker object; none of this is limited herein. In implementation, a person skilled in the art may determine the content of the configuration information according to actual requirements, and the user may set appropriate configuration information for the marker object on the editing interface.
In implementation, any suitable manner may be adopted to obtain the configuration information that the user configures on the editing interface; this is not limited by the present disclosure. In some embodiments, the front end displaying the editing interface (e.g., a client application, a browser, or an applet) may transmit the configuration information of the edited marker object to a back-end service running on the marker object editing device through a data acquisition interface of that service, and the editing device may then obtain the configuration information through the back-end service. The format of the data exchanged between the front end and the back-end service may include, but is not limited to, one or more of the JSON format, the eXtensible Markup Language (XML) format, and the like.
Step S103: generate a configuration file of the marker object based on the configuration information and send it to a client, where the client identifies the marker object based on the configuration file and, when the marker object is identified, displays the augmented reality effect associated with the marker object.
Here, the configuration file of the marker object may be any suitable file usable to identify the marker object: an executable data package for identifying the marker object, a data set containing the configuration information edited by the user, or a file describing the features of the marker object; this is not limited herein. In implementation, a person skilled in the art may choose the file format according to the actual situation and generate the configuration file from the configuration information of the marker object in a manner appropriate to that format, which is not limited herein.
The augmented reality effect associated with the marker object may be set in advance according to an actual application scenario, and is not limited herein.
The client may be any suitable application, browser, or applet running on a terminal that can display an augmented reality effect. After receiving the configuration file of the marker object, the client identifies the marker object based on the configuration file and, when the marker object is identified, displays the associated augmented reality effect. In some embodiments, when the configuration file is an executable data package for identifying the marker object, the client identifies the marker object by executing the configuration file. In some embodiments, when the configuration file describes the features of the marker object, the client identifies the marker object based on those features; for example, the configuration file may be a .pat format file containing at least one feature point of the marker object, and the client identifies the marker object based on the at least one feature point. In some embodiments, when the configuration file contains the configuration information edited by the user, the client determines the configuration information of the marker object to be recognized by parsing the configuration file and recognizes the marker object based on that information.
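A hypothetical sketch of the last case above, in which the client parses a configuration file containing the edited configuration information and checks an observed candidate against it; the matching criterion (marker type plus image hash) is invented for illustration:

```python
import json

def identify_marker(config_file, observed):
    """Parse a JSON configuration file and decide whether an observed
    candidate matches the described marker object."""
    config = json.loads(config_file)
    return (observed.get("marker_type") == config.get("marker_type")
            and observed.get("image_hash") == config.get("image_hash"))

def show_effect_if_identified(config_file, observed, show_effect):
    # Only display the associated AR effect when the marker is identified.
    if identify_marker(config_file, observed):
        show_effect()
        return True
    return False
```

A production client would compare extracted image features rather than a precomputed hash; the sketch only fixes the control flow: parse, identify, then display on success.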
In some embodiments, the generated configuration file of the markup object may also be stored in any suitable location, such as a database, a cloud, or a file server, which is not limited herein.
In some embodiments, a control for generating the tagged object may be displayed on the editing interface, and by triggering the control, a configuration file of the tagged object may be generated and sent to the client based on the configuration information of the tagged object; or when exiting from the editing interface of the markup object, generating a configuration file of the markup object based on the configuration information of the markup object, and sending the configuration file to the client, which is not limited in the embodiment of the present disclosure.
In the embodiment of the disclosure, an editing interface of a mark object to be edited is displayed; responding to the operation of editing the mark object in the editing interface, and acquiring the configuration information of the edited mark object; generating a configuration file of the mark object based on the configuration information and sending the configuration file to the client; the client is used for identifying the marked object based on the configuration file and displaying the augmented reality effect associated with the marked object under the condition that the marked object is identified. Therefore, the user can edit the marked object in the visual editing interface according to actual requirements, generate the configuration file and send the configuration file to the client, so that the client can identify the marked object based on the configuration file to display the augmented reality effect associated with the marked object, thereby realizing the personalized configuration of the marked object, being beneficial to increasing the diversity of the marked object and further being capable of better meeting the experience requirements of the user in the scene of the augmented reality effect.
In some embodiments, the method may further include: storing the generated configuration file of the mark object in a mark object library. Here, the tagged object library may be any suitable information library configured to store the created tagged objects, and may be a data table, a data file, or the like in a database, which is not limited herein. In some embodiments, for a created tagged object whose configuration file already exists in the tagged object library, when the edited configuration file of the tagged object is stored in the tagged object library, the configuration file already existing in the tagged object library may be updated based on the generated configuration file. In some embodiments, in the case that the configuration file of the markup object does not exist in the markup object library, when the edited configuration file of the markup object is stored in the markup object library, the configuration file of the markup object may be newly added to the markup object library.
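The add-or-update behaviour described above can be sketched with a dict standing in for the mark object library; the function name, the string results, and the byte-valued entries are all hypothetical:

```python
def save_to_library(library: dict, mark_id: str, config_file: bytes) -> str:
    """Store a generated configuration file in the mark object library.

    If the mark object already has an entry, that entry is updated;
    otherwise a new entry is added, as in the embodiments above.
    """
    action = "updated" if mark_id in library else "added"
    library[mark_id] = config_file
    return action
```

A real library would be backed by a database table or file store rather than an in-memory dict; only the upsert decision is the point of the sketch.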
In some embodiments, the step S103 may include the following steps S111 to S112:
and step S111, carrying out format verification on the configuration information based on a preset verification rule to obtain a verification result.
Here, an appropriate verification rule may be set in advance according to the target format of the configuration information of the mark object, and the format of the configuration information of the edited mark object may be verified based on the verification rule to obtain a verification result. For example, the verification rule may include a target format, a target size, a target pixel count, and the like of the identification image of the mark object; in the case that the configuration information includes information of a text type, the verification rule may include one or more of a target text format, white-list characters, black-list characters, and the like; in the case that the configuration information includes the size parameter of the mark object, the verification rule may include one or more of a target value range, a numerical precision, and the like of the size parameter.
The verification result may include, but is not limited to, a verification success, a verification failure, or a verification error, and the like.
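As a sketch of step S111, assuming hypothetical rule names, patterns, and thresholds (the disclosure does not fix any concrete rules):

```python
import re

# Illustrative verification rules; a real editing backend would define its own.
RULES = {
    "image_formats": {"png", "jpg"},                 # allowed identification-image formats
    "name_pattern": re.compile(r"^[\w\- ]{1,64}$"),  # white-list characters for text fields
    "size_range": (0.01, 100.0),                     # target value range for size parameters
}

def check_configuration(info: dict) -> tuple[bool, list[str]]:
    """Format-check the edited configuration info; return (success, errors)."""
    errors = []
    if info.get("image_format") not in RULES["image_formats"]:
        errors.append("unsupported identification-image format")
    if not RULES["name_pattern"].match(info.get("name", "")):
        errors.append("mark name is empty or contains disallowed characters")
    low, high = RULES["size_range"]
    for key in ("bottom_diameter", "height"):
        if key in info and not (low <= float(info[key]) <= high):
            errors.append(f"{key} out of range")
    return (not errors, errors)
```

On success the configuration file would then be generated and sent to the client (step S112); on failure the error list could be surfaced in the editing interface.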
And step S112, under the condition that the verification result represents that the verification is successful, generating a configuration file of the mark object based on the configuration information and sending the configuration file to the client.
In the above embodiment, format verification is performed on the configuration information of the tagged object based on a preset verification rule to obtain a verification result, and a configuration file of the tagged object is generated and sent to the client based on the configuration information of the tagged object under the condition that the verification result represents that verification is successful. Therefore, format errors in the process of editing the marked object can be reduced, and the accuracy of editing the marked object can be improved.
The embodiment of the present disclosure provides a method for editing a mark object, and fig. 2 is a schematic flow chart illustrating an implementation of the method for editing a mark object provided by the embodiment of the present disclosure, as shown in fig. 2, the method includes:
step S201, displaying an editing interface of the mark object to be edited.
Here, the step S201 corresponds to the step S101, and in implementation, reference may be made to a specific embodiment of the step S101.
Step S202, responding to the operation of setting the type of the mark object in the type setting area of the editing interface, and acquiring the set mark type.
Here, the type setting area may be any suitable area on the editing interface for setting the type of the markup object, and may include a text input box for inputting the markup type, and may also include a selection control for selecting the markup type, which is not limited in this disclosure.
In implementation, a person skilled in the art can classify mark objects into different mark types in advance in an appropriate manner according to actual conditions, and a user can set an appropriate mark type for the mark object in the type setting area of the editing interface according to actual requirements. In some embodiments, mark objects may be divided into a plurality of mark types, such as a two-dimensional type and a three-dimensional type, according to the dimension of the mark object; a two-dimensional-type mark object may be a planar object on which a specific mark image is drawn, such as a drawing, a card, a scroll, or a photograph, and a three-dimensional-type mark object may be any suitable three-dimensional object with recognizable features, such as a solid object bearing a specific mark image, a three-dimensional exhibit, an animal, a person, a building, or a vehicle. In some embodiments, mark objects may be classified into a plurality of mark types, such as a grayscale type, a black-and-white type, and a color type, according to the color of the mark object.
Step S203, displaying at least one information configuration area on the editing interface based on the mark type.
Here, for the markup objects of different markup types, the information to be configured in the editing interface may be different, so that at least one corresponding information configuration area may be displayed in the editing interface according to the set markup type.
In some embodiments, in the case that the mark type is a two-dimensional type, the mark object may be a two-dimensional object including a specific mark image, the mark image may be a planar image on a plane where the two-dimensional object is located, an image setting area may be displayed on the editing interface for setting the mark image of the mark object, and the mark object may be recognized based on the set mark image. For example, the mark object may be a card on which a mark image is drawn, and the mark image drawn on the card may be set in an image setting area of the editing interface so that the card can be recognized by the mark image on the card.
In some embodiments, in a case where the mark type is a three-dimensional type, the mark object may be a three-dimensional object including a specific mark image, the mark image may be an image presented on a surface of the three-dimensional object, a form of the mark image presented in a space is related to a shape, a size, and the like of the three-dimensional object, an image setting area for setting the mark image, and a size setting area for setting a size parameter of the mark object may be displayed on the editing interface, and the mark image of the corresponding form on the mark object may be identified based on the set mark image and the size parameter. For example, the marking object may be a cylinder with a side surface painted with a marking image, when the marking object is edited, the marking image may be set in an image setting area of an editing interface, a bottom diameter and a height of the cylinder may be set in a size setting area, and based on the bottom diameter and the height of the cylinder and the marking image, a form of the marking image in space may be determined, so that the marking image painted on the side surface of the cylinder in a corresponding form may be recognized, and thus the marking object may be determined to be recognized.
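A minimal sketch of how the set mark type could select the displayed configuration areas (steps S202 and S203), together with the cylinder geometry from the example above: a mark image painted around the side of a cylinder occupies, when the side surface is unrolled, a rectangle of circumference by height, which is what ties the image's form in space to the size parameters. All names and the "2D"/"3D" strings are illustrative assumptions:

```python
import math

def configuration_areas(mark_type: str) -> list[str]:
    """Information configuration areas shown for a given mark type (illustrative)."""
    areas = ["image_setting"]
    if mark_type == "3D":
        areas.append("size_setting")  # 3D marks additionally need size parameters
    return areas

def cylinder_side_extent(bottom_diameter: float, height: float) -> tuple[float, float]:
    """Unrolled width and height of a cylinder's side surface.

    Width is the circumference (pi * bottom diameter); together with the
    height this fixes the spatial form of a mark image painted on the side.
    """
    return math.pi * bottom_diameter, height
```

A recognizer for the three-dimensional type could use this extent to warp the stored mark image onto the cylinder's surface before matching.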
Step S204, in response to the information configuration operation performed in at least one information configuration area, acquiring the configuration information of the edited tagged object.
Here, the information configuration area may include, but is not limited to, one or more of a text input box for inputting text information, a selection control for selecting information, an image upload control for uploading image information, and the like, and the embodiment of the present disclosure is not limited thereto.
In implementation, the configuration information of the mark object may be set by the user in the at least one information configuration area, or may be automatically determined based on the information set by the user in the at least one information configuration area, which is not limited in this disclosure.
Step S205, based on the configuration information, generating a configuration file of the mark object and sending the configuration file to a client; the client is used for identifying the marked object based on the configuration file and displaying the augmented reality effect associated with the marked object under the condition that the marked object is identified.
Here, the step S205 corresponds to the step S103, and in implementation, reference may be made to a specific embodiment of the step S103.
In the embodiment of the disclosure, a set mark type is obtained in response to an operation of setting the type of a mark object in a type setting area of the editing interface; displaying at least one information configuration area on an editing interface based on the mark type; and acquiring configuration information of the edited marked object in response to the information configuration operation performed in the at least one information configuration area. Therefore, different information configuration areas can be provided for different types of marked objects to perform corresponding information configuration operation, so that the editing requirements of users can be better met, and the operation experience of the users is improved.
In some embodiments, the mark type is a two-dimensional type, the at least one information configuration area includes an image setting area, and the configuration information includes an identification image of the mark object; the step S204 may include:
step S211, in response to the image mark setting operation performed in the image setting area, acquires the set identification image of the mark object.
Here, the identification image of the marker object may be any suitable recognizable image that the marker object has. The image setting area may be any suitable area on the editing interface for setting the identification image of the mark object, and is not limited herein. In some embodiments, the image setting area may include an image upload control for uploading the identification image, and may also include an image capture control for capturing the identification image in real time, which is not limited in this disclosure.
In some embodiments, the mark type is a three-dimensional type, the at least one information configuration area includes an image setting area and a size setting area, and the configuration information includes an identification image and a size parameter of the mark object; the step S204 may include:
step S221, responding to the image setting operation in the image setting area, and acquiring the set identification image of the mark object;
step S222, in response to the size setting operation performed in the size setting area, acquiring the set display size of the mark object.
Here, for the three-dimensional type marker object, in addition to the setting of the identification image of the marker object, the setting of the size parameter of the marker object is also possible. Since the identification image in the three-dimensional type marking object may be an image presented on the surface of the three-dimensional object, and the form presented in the space of the identification image is related to the shape, size, etc. of the three-dimensional object, when the three-dimensional type marking object is identified, the identification image and the size parameter of the marking object may be combined to identify the form presented in the space of the identification image, so that the identification image of the corresponding form on the three-dimensional type marking object may be identified, and the marking object may be identified.
The user can perform size setting operation on the mark object in the size setting area in the at least one information configuration area, so that the size parameter of the mark object edited by the user can be acquired. The size parameters of the marking object may include, but are not limited to, one or more of the size of the bottom surface, the height of the marking object, and the like. In practice, the bottom surface of the marking object may include, but is not limited to, any suitable shape such as a circle, a rectangle, an ellipse, or a diamond, and the size of the bottom surface of the marking object may be an index for measuring the critical dimension of the bottom surface of the marking object, and is not limited herein.
In some embodiments, where the bottom surface of the marking object is circular, the size parameters include a bottom surface diameter and a height.
In some embodiments, where the bottom surface of the marking object is rectangular, the size parameters include a bottom surface length, a bottom surface width, and a height. In some embodiments, the bottom surface of the marking object is square, in which case the bottom surface length is equal to the bottom surface width.
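The shape-dependent size parameters above can be sketched as follows; the shape names and field names are assumptions for illustration only:

```python
def size_parameter_fields(bottom_shape: str) -> list[str]:
    """Size parameters a 3D mark needs, keyed by its bottom-surface shape."""
    if bottom_shape == "circle":
        return ["bottom_diameter", "height"]
    if bottom_shape == "square":
        # bottom length equals bottom width, so one side length suffices
        return ["bottom_side", "height"]
    if bottom_shape == "rectangle":
        return ["bottom_length", "bottom_width", "height"]
    raise ValueError(f"unsupported bottom-surface shape: {bottom_shape!r}")
```

The size setting area of the editing interface could render one input box per returned field.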
In some embodiments, the image setting area comprises an image upload control; the step S211 or the step S221 may include:
step S231, in response to the image uploading operation performed based on the image uploading control, acquires the uploaded identification image of the markup object.
Here, the user may upload an appropriate identification image based on the image upload control of the image setting area according to an actual situation, and a format, a size, and the like of the uploaded identification image may be determined according to an actual application scenario, which is not limited in this disclosure. Therefore, the user can set the identification image more conveniently and more quickly, and the use experience of the user can be further improved.
In some embodiments, the configuration information further comprises a recognition algorithm for recognizing the identification object; the step S204 may further include the following step S241 or step S242:
step S241, determining the recognition algorithm based on the identification image and the mark type.
Here, the recognition algorithm is an algorithm for recognizing the marker object. The recognition algorithm that is applicable may be different for different identification images and different marker types. In some embodiments, different identification images and corresponding relationships between different tag types and identification algorithms may be set in advance according to actual conditions, and the corresponding identification algorithms may be determined by querying the corresponding relationships based on the edited identification images and tag types. In this way, an appropriate recognition algorithm for recognizing the mark object can be automatically determined based on the edited recognition image and mark type of the mark object, so that the user's operation can be simplified and the recognition effect of the mark object can be improved. In some embodiments, each recognition algorithm has a corresponding algorithm version, different recognition images and corresponding relations between different mark types and algorithm versions can be set in advance according to actual conditions, a target algorithm version can be determined by querying the corresponding relations based on the recognition images and the mark types of the edited mark objects, and a corresponding recognition algorithm can be determined based on the target algorithm version.
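Step S241's correspondence lookup might be sketched as a table keyed by the mark type and a trait of the identification image; the table contents, trait names, and version strings below are entirely hypothetical:

```python
# Hypothetical correspondence between (mark type, image trait) and the
# algorithm version to use; the disclosure only states such a mapping exists.
VERSION_TABLE = {
    ("2D", "grayscale"): "plane-recog-1.2",
    ("2D", "color"): "plane-recog-2.0",
    ("3D", "grayscale"): "surface-recog-1.0",
    ("3D", "color"): "surface-recog-1.1",
}

def pick_algorithm_version(mark_type: str, image_trait: str) -> str:
    """Query the correspondence to find the target algorithm version."""
    try:
        return VERSION_TABLE[(mark_type, image_trait)]
    except KeyError:
        raise ValueError(
            f"no recognition algorithm registered for ({mark_type!r}, {image_trait!r})"
        ) from None
```

The resulting version would then select the recognition algorithm, sparing the user from configuring it manually.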
Step S242, in response to the algorithm version configuration operation performed in at least one information configuration area, determining the identification algorithm based on the configured algorithm version.
Here, the at least one information configuration area may include an algorithm configuration area. In implementation, the algorithm configuration area may include a text input box for inputting an algorithm version, and the algorithm configuration operation may include inputting a name, a version number, a type, and the like of a recognition algorithm in the text input box; the algorithm configuration area may also include a selection control for making an algorithm version selection, and the algorithm configuration operation may include selecting a name, a version number, a type, and the like of the recognition algorithm using the selection control. Therefore, the user can configure an appropriate recognition algorithm for recognizing the marked object according to the actual situation, and the operation is simple and intuitive.
The embodiment of the present disclosure provides a method for editing a tagged object, and fig. 3 is a schematic flow chart illustrating an implementation of the method for editing a tagged object provided by the embodiment of the present disclosure, as shown in fig. 3, the method includes:
step S301, displaying an editing interface of the mark object to be edited.
Step S302, responding to the operation of editing the mark object on the editing interface, and acquiring the configuration information of the edited mark object.
Step S303, generating a configuration file of the mark object based on the configuration information and sending the configuration file to a client; the client is used for identifying the marked object based on the configuration file and displaying the augmented reality effect associated with the marked object under the condition that the marked object is identified.
Here, the steps S301 to S303 correspond to the steps S101 to S103, respectively, and when the steps S101 to S103 are performed, specific embodiments of the steps S101 to S103 may be referred to.
Step S304, displaying the test address of the mark object on the editing interface; the test address is used for entering a test environment of the marked object, and the test environment is used for identifying the marked object based on the configuration file and displaying a preview effect of an augmented reality effect associated with the marked object under the condition that the marked object is identified.
Here, the test environment of the markup object may be a preset environment for presenting the augmented reality effect, and in the test environment, the markup object may be recognized based on the generated configuration file of the markup object, and in the case that the markup object is recognized, the augmented reality effect associated with the markup object may be presented.
In the embodiment of the disclosure, after the configuration file of the tagged object is generated, a test address of the tagged object may be displayed on the editing interface, and a user may enter the test environment of the tagged object through the test address to perform an identification test on the tagged object based on the configuration file in the test environment. Therefore, the user can test the use effect of the marked object after editing the marked object, so that the editing requirement of the user can be better met, and the operation experience of the user is improved.
The embodiment of the present disclosure provides a method for editing a tagged object, and fig. 4 is a schematic flow chart illustrating an implementation of the method for editing a tagged object provided by the embodiment of the present disclosure, as shown in fig. 4, the method includes:
step S401, responding to the operation of adding the mark object in the mark object management interface, creating the added mark object and displaying the editing interface of the mark object.
Here, the tagged object management interface may be an interface for performing management operations such as viewing, adding, deleting, and the like on the tagged object.
The user can perform, on the mark object management interface, any suitable operation that can trigger the addition of a mark object; in response to the operation, the newly added mark object can be created and its editing interface displayed. For example, a control for adding a mark object may be displayed in the mark object management interface, and by triggering the control, the newly added mark object may be created and the editing interface of the mark object displayed. In some embodiments, when a new mark object is created, identification information is automatically generated for it, and the identification information can be displayed in the editing interface of the mark object.
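The automatic generation of identification information for a newly added mark object might look like the following sketch; the sequential ID format and the record fields are assumptions, since the disclosure only states that identification information is generated automatically:

```python
import itertools

_ids = itertools.count(1)  # stand-in for the system's ID generator

def create_mark_object() -> dict:
    """Create a newly added mark object with auto-generated identification.

    The returned record would back the editing interface, which displays
    the generated identification information alongside the editable fields.
    """
    return {"id": f"mark-{next(_ids):06d}", "name": "", "configuration": {}}
```
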
Step S402, responding to the operation of editing the mark object on the editing interface, and acquiring the configuration information of the edited mark object.
Step S403, generating a configuration file of the mark object based on the configuration information and sending the configuration file to a client; the client is used for identifying the marked object based on the configuration file and displaying the augmented reality effect associated with the marked object under the condition that the marked object is identified.
Here, the steps S402 to S403 correspond to the steps S102 to S103, respectively, and when the steps S102 to S103 are performed, specific embodiments of the steps S102 to S103 may be referred to.
In the embodiment of the present disclosure, a user may perform an operation of adding a tag object on the tag object management interface, create the added tag object, and display an editing interface of the tag object. Therefore, the user can conveniently add the mark object and edit the added mark object, so that the editing flexibility of the mark object can be further improved, and the experience requirement of the user in the scene of the augmented reality effect can be better met.
The embodiment of the present disclosure provides a method for editing a mark object, and fig. 5 is a schematic flow chart illustrating an implementation of the method for editing a mark object provided by the embodiment of the present disclosure, and as shown in fig. 5, the method includes:
step S501, in response to an editing trigger operation performed on the created target mark object at the mark object management interface, acquires configuration information of the target mark object.
Here, the editing triggering operation may be any suitable operation that the user performs on the markup object management interface and that can trigger editing of a created target markup object. For example, the configuration information of at least one created markup object may be displayed in the markup object management interface together with an editing button for each markup object; clicking any editing button triggers editing of the corresponding created markup object, and the configuration information of that markup object (i.e., the target markup object) may be acquired. For another example, the markup object management interface may include a mark selection operation area for selecting a target markup object to be edited, and the user may trigger editing of the target markup object by selecting the created target markup object in the mark selection operation area.
In implementation, the configuration information of the target markup object may be acquired from the markup object library, or may be directly acquired from the markup object management interface, which is not limited herein.
Step S502, displaying an editing interface of the target mark object based on the configuration information of the target mark object.
Here, based on the configuration information of the target mark object, an editing interface of the target mark object may be displayed, and one or more of identification information, a name, an identification image, a size parameter, a test address, and the like of the edited target mark object may be included in the editing interface.
Step S503, in response to the operation of editing the target mark object on the editing interface, acquiring configuration information of the edited target mark object.
Step S504, based on the configuration information, generating a configuration file of the target mark object and sending the configuration file to a client; the client is used for identifying the target mark object based on the configuration file and displaying the augmented reality effect associated with the target mark object under the condition that the target mark object is identified.
Here, the steps S503 to S504 correspond to the steps S102 to S103, respectively, and in implementation, specific embodiments of the steps S102 to S103 may be referred to.
In some embodiments, the markup object management interface includes a markup list region; before the step S501, the method may further include:
step S511, responding to the mark viewing operation performed on the mark object management interface, and acquiring the configuration information of at least one created mark object;
here, the user may perform a tag viewing operation on the tag object management interface, for example, by clicking a tag viewing control in the tag object management interface, the created tag object may be viewed.
Configuration information for at least one created markup object may be obtained in response to a markup viewing operation performed at a markup object management interface.
Step S512, displaying configuration information of each created markup object in the markup list area.
Here, the markup list area may be a list area on the markup object management interface for displaying configuration information of at least one created markup object. In implementation, the acquired configuration information of each created markup object may be displayed in a form of a list in the markup list region.
In the embodiment of the disclosure, a user may perform an editing trigger operation on the created target markup object at the markup object management interface to display an editing interface of the target markup object. Therefore, the user can conveniently edit the configuration information of the created marked object, so that the editing flexibility of the marked object can be further improved, and the experience requirements of the user in the scene of the augmented reality effect can be better met.
The embodiment of the disclosure provides a display method applied to a client. Fig. 6 is a schematic flow chart of an implementation of a display method provided in the embodiment of the present disclosure, and as shown in fig. 6, the method includes:
step S601, acquiring a configuration file of a mark object; the configuration file is generated based on configuration information of the mark object, and the configuration information of the mark object is acquired in response to the operation of editing the mark object at an editing interface of the mark object.
Here, the client may obtain the configuration file of the markup object in any suitable manner, which is not limited in this disclosure. For example, the configuration file of the markup object sent by other applications or services may be acquired by means of message reception, the configuration file of the created markup object may be acquired by querying a database, and the configuration file of the markup object uploaded by the user may be acquired in response to a configuration file upload operation of the user.
Step S602, identifying the marked object based on the configuration file.
Here, in some embodiments, the configuration file of the markup object is an executable data package for identifying the markup object, and the client may identify the markup object by executing the configuration file. In some embodiments, the configuration file of the markup object is a file for describing characteristics of the markup object, and the client may identify the markup object based on the characteristics described in the configuration file. In some embodiments, the configuration file of the markup object is a data set including the configuration information of the markup object edited by the user, and the client may determine the configuration information of the markup object to be identified by parsing the configuration file and identify the markup object based on that configuration information.
Step S603, in the case that the marker object is identified, displaying an augmented reality effect associated with the marker object.
Here, upon identifying the tagged object, the client may employ any suitable rendering algorithm to render the augmented reality effect associated with the tagged object.
In the embodiment of the disclosure, a configuration file of a markup object is obtained, where the configuration file is generated based on configuration information of the markup object, and the configuration information of the markup object is obtained in response to an operation of editing the markup object at an editing interface of the markup object; identifying the tagged object based on the configuration file; in the event that the tagged object is identified, an augmented reality effect associated with the tagged object is presented. Therefore, the user can edit the marked object in the visual editing interface according to actual requirements and generate the configuration file, the client can acquire the configuration file, the marked object is identified based on the configuration file, and the augmented reality effect associated with the marked object is displayed under the condition that the marked object is identified, so that personalized customization of the marked object can be realized, the diversity of the marked object is favorably increased, and the experience requirement of the user in the scene of the augmented reality effect can be better met.
An exemplary application of the embodiments of the present disclosure in a practical application scenario will be described below. The editing of the guide page in the AR project is taken as an example for explanation.
When an AR project is created, some background configuration may be performed for the front-end presentation of the project, including creating a library of marker objects. The user may select a marker object from the library to serve as an identifier that triggers presentation of the AR content in the project.
The embodiment of the disclosure provides an editing method for marker objects, which uniformly manages and edits the marker objects in an AR project, can directly generate a marker object library, and displays the configuration information edited for each created marker object, facilitating front-end calls. The generated marker object library contains the configuration information of each marker object; meanwhile, the method automatically generates a configuration file in pat format corresponding to each edited marker object, records the applicable version of the recognition algorithm used to recognize that marker object, and further generates a test address that allows staff to test the effect of the marker object. In some embodiments, the marker objects in the library are managed through a marker object management interface, and each marker object is edited through its own editing interface.
Fig. 7A is a schematic diagram of a marker object management interface according to an embodiment of the present disclosure. As shown in Fig. 7A, the marker object management interface 100 includes an add-marker button 110 and a marker list area 120. Clicking the add-marker button 110 creates a new marker object and enters its editing interface. The marker list area 120 displays the configuration information of at least one created marker object in the library, which may include an Identifier (ID) 121 of the marker object, a name 122, an identification image 123, a mark type 124, an algorithm version 125 of the recognition algorithm used to recognize the marker object, and an operation control 126. The ID 121 may be a unique ID automatically generated by the system; the name 122 can be edited and modified in the editing interface of the marker object; the identification image 123 may be shown in the list; the mark type 124 may be a two-dimensional (2D) type or a three-dimensional (3D) type; the algorithm version 125 may be automatically determined from the identification image and the mark type based on preset matching rules; and the operation control 126 is an editing control through which the user can jump to the editing interface of the corresponding marker object.
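The per-marker configuration information listed in Fig. 7A could be modelled as a record like the following. This is a sketch under assumptions: the field names, the counter-based auto-ID scheme, and the `add_marker` helper are illustrative, not taken from the patent.

```python
import itertools
from dataclasses import dataclass, field

# System-generated unique IDs (illustrative: a simple incrementing counter).
_next_id = itertools.count(1)


@dataclass
class MarkerRecord:
    name: str                  # editable in the editing interface (area 122)
    identification_image: str  # e.g. a path/URL to the uploaded image (123)
    mark_type: str             # "2D" or "3D" (124)
    algorithm_version: str = "unset"  # auto-determined by matching rules (125)
    marker_id: int = field(default_factory=lambda: next(_next_id))  # unique ID (121)


library: list[MarkerRecord] = []


def add_marker(name: str, image: str, mark_type: str) -> MarkerRecord:
    # Corresponds to creating a new marker object via the add-marker
    # button and saving it to the marker object library.
    record = MarkerRecord(name=name, identification_image=image, mark_type=mark_type)
    library.append(record)
    return record
```

A front end could then render the marker list area 120 directly from `library`.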
Fig. 7B is a schematic diagram of an editing interface of a marker object provided by an embodiment of the present disclosure. As shown in Fig. 7B, the editing interface 200 may include a display area 210 for the ID of the marker object, an editing area 220 for its name, a selection area 230 for its mark type, an information configuration area 240, a test address display area 250, and a submit control 260. Different mark types correspond to different information configuration areas 240: when the selected mark type is the 2D type, the information configuration area 240 includes an image setting area 241; when the selected mark type is the 3D type, it includes an image setting area 241, an editing area 242 for the bottom surface diameter, and an editing area 243 for the height. The ID display area 210 displays the ID of the current marker object; the name editing area 220 is used to edit its name; the test address display area 250 displays its test address; the image setting area 241 is used to set the identification image of the marker object; the editing areas 242 and 243 are used to edit the bottom surface diameter and the height of the marker object, respectively. Clicking the submit control 260 saves the configuration information of the currently edited marker object to the marker object library, generates a configuration file in pat format corresponding to the marker object, and automatically records the version of the recognition algorithm applicable to it.
In some embodiments, an uploaded identification image of the marker object may be deleted in the image setting area 241; furthermore, the size, dimensions, format, and the like of uploaded identification images may be restricted according to actual needs.
In some embodiments, after the submit control 260 is clicked, the format of the information entered in each editing area is checked. If any entered information has an incorrect format (such as picture format, picture size, or character format), the configuration information of the marker object cannot be saved and the corresponding configuration file is not generated.
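The submit-time format check might look like the sketch below. The accepted image formats, the 5 MB size limit, and the dictionary layout are placeholder assumptions; the patent does not specify concrete verification rules.

```python
ALLOWED_FORMATS = {"png", "jpg", "jpeg"}   # placeholder rule, not from the source
MAX_IMAGE_BYTES = 5 * 1024 * 1024          # placeholder 5 MB limit


def check_format(config: dict) -> list[str]:
    # Return a list of format errors; an empty list means verification succeeded.
    errors = []
    image = config.get("image", {})
    if image.get("format") not in ALLOWED_FORMATS:
        errors.append("unsupported picture format")
    if image.get("size", 0) > MAX_IMAGE_BYTES:
        errors.append("picture too large")
    if not config.get("name", "").strip():
        errors.append("empty marker name")
    return errors


def submit(config: dict) -> bool:
    # Save the configuration and generate the config file only when the check passes.
    if check_format(config):
        return False  # configuration cannot be saved; no configuration file is generated
    # ... save to the marker object library and generate the pat-format file ...
    return True
```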
Based on the foregoing embodiments, the present disclosure provides an editing apparatus for marker objects; the apparatus includes units, and the modules included in those units, which may be implemented by a processor in an electronic device or, of course, by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 8 is a schematic structural diagram of a composition of an editing apparatus for a markup object according to an embodiment of the present disclosure, and as shown in fig. 8, the editing apparatus 800 for a markup object includes: a first display module 810, an editing module 820, and a generating module 830, wherein:
the first display module 810 is configured to display an editing interface of a marker object to be edited;
the editing module 820 is configured to acquire, in response to an operation of editing the marker object in the editing interface, configuration information of the edited marker object; and
the generating module 830 is configured to generate a configuration file of the marker object based on the configuration information and send the configuration file to a client, where the client is configured to identify the marker object based on the configuration file and to display an augmented reality effect associated with the marker object in a case where the marker object is identified.
In some embodiments, the editing module is further configured to: acquire the set mark type in response to an operation of setting the type of the marker object in a type setting area of the editing interface; display at least one information configuration area on the editing interface based on the mark type; and acquire the edited configuration information of the marker object in response to an information configuration operation performed in the at least one information configuration area.
In some embodiments, the mark type is a two-dimensional type, the at least one information configuration area includes an image setting area, and the configuration information includes an identification image of the marker object; the editing module is further configured to acquire the set identification image of the marker object in response to an image setting operation performed in the image setting area.
In some embodiments, the mark type is a three-dimensional type, the at least one information configuration area includes an image setting area and a size setting area, and the configuration information includes an identification image and a size parameter of the marker object; the editing module is further configured to: acquire the set identification image of the marker object in response to an image setting operation performed in the image setting area; and acquire the set size parameter of the marker object in response to a size setting operation performed in the size setting area.
In some embodiments, in a case where the bottom surface of the marker object is circular, the size parameters include a bottom surface diameter and a height; in a case where the bottom surface of the marker object is rectangular, the size parameters include a bottom surface length, a bottom surface width, and a height.
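The shape-dependent size parameters above can be captured with a small helper. The parameter key names (`bottom_diameter`, etc.) are illustrative assumptions.

```python
def required_size_parameters(base_shape: str) -> tuple[str, ...]:
    # Size parameters to collect, depending on the shape of the
    # marker object's bottom surface.
    if base_shape == "circular":
        return ("bottom_diameter", "height")
    if base_shape == "rectangular":
        return ("bottom_length", "bottom_width", "height")
    raise ValueError(f"unsupported bottom-surface shape: {base_shape}")


def size_parameters_complete(base_shape: str, params: dict) -> bool:
    # The size configuration is complete only if every required
    # parameter is present and positive.
    return all(params.get(key, 0) > 0 for key in required_size_parameters(base_shape))
```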
In some embodiments, the image setting area includes an image upload control; the editing module is further configured to: and responding to the image uploading operation based on the image uploading control, and acquiring the uploaded identification image of the mark object.
In some embodiments, the configuration information further comprises a recognition algorithm for recognizing the marker object; the editing module is further configured to perform one of: determining the recognition algorithm based on the identification image and the mark type; or determining the recognition algorithm based on a configured algorithm version in response to an algorithm version configuration operation performed in at least one information configuration area.
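The "preset matching rules" that pick an algorithm version from the identification image and the mark type are not specified in the source; the rule table below is a purely hypothetical example, and the resolution threshold and version strings are invented for illustration.

```python
from typing import Optional

# Hypothetical preset matching rules:
# (mark type, whether the identification image is high-resolution) -> algorithm version.
MATCHING_RULES = {
    ("2D", False): "v1.0",
    ("2D", True): "v1.1",
    ("3D", False): "v2.0",
    ("3D", True): "v2.1",
}


def select_algorithm_version(mark_type: str, image_width: int, image_height: int,
                             configured_version: Optional[str] = None) -> str:
    # An explicit algorithm-version configuration operation takes precedence;
    # otherwise the version is auto-determined from the image and mark type.
    if configured_version is not None:
        return configured_version
    high_res = image_width * image_height >= 1_000_000
    return MATCHING_RULES[(mark_type, high_res)]
```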
In some embodiments, the apparatus further comprises: the second display module is used for displaying the test address of the marked object on the editing interface; the test address is used for entering a test environment of the marked object, and the test environment is used for identifying the marked object based on the configuration file and displaying an augmented reality effect associated with the marked object under the condition that the marked object is identified.
In some embodiments, the generating module is further configured to: perform format verification on the configuration information based on a preset verification rule to obtain a verification result; and, in a case where the verification result indicates that the verification succeeded, generate the configuration file of the marker object based on the configuration information and send it to the client.
The disclosed embodiment provides a display apparatus, which includes units, and the modules included in those units, and may be implemented by a processor in an electronic device or, of course, by specific logic circuits; in implementation, the processor may be a CPU, an MPU, a DSP, an FPGA, or the like.
Fig. 9 is a schematic view of a composition structure of a display device according to an embodiment of the disclosure, and as shown in fig. 9, the display device 900 includes: a second obtaining module 910, an identifying module 920, and a third displaying module 930, wherein:
the second obtaining module 910 is configured to obtain a configuration file of the marker object, where the configuration file is generated based on configuration information of the marker object, the configuration information being obtained in response to an operation of editing the marker object at an editing interface of the marker object;
the identifying module 920 is configured to identify the marker object based on the configuration file; and
the third display module 930 is configured to display the augmented reality effect associated with the marker object in a case where the marker object is identified.
The above description of the apparatus embodiments is similar to the description of the method embodiments, and the apparatus embodiments have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the apparatus embodiments of the present disclosure, reference is made to the description of the method embodiments of the present disclosure.
The disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment, relevant features, states, and attributes of the target object are detected or identified by means of various vision-related algorithms, so as to obtain an AR effect that combines the virtual and the real and matches the specific application. For example, the target object may involve a face, limbs, gestures, or actions associated with a human body, or a marker associated with an object, or a sand table, display area, or display item associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key-point extraction and tracking of objects, and pose or depth detection of objects. Specific applications may involve not only interactive scenarios related to real scenes or articles, such as navigation, explanation, reconstruction, and superimposed display of virtual effects, but also special-effect processing related to people, such as makeup beautification, body beautification, special-effect display, and virtual model display. The detection or identification of the relevant features, states, and attributes of the target object can be realized through a convolutional neural network, that is, a network model obtained by model training based on a deep learning framework.
It should be noted that, in the embodiment of the present disclosure, if the editing method or the display method of the mark object is implemented in the form of a software functional module and is sold or used as a standalone product, the method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the present disclosure provides an electronic device, which includes a display screen; a memory for storing an executable computer program; and the processor is used for realizing the steps in the editing method or the display method of the mark object by combining the display screen when the executable computer program stored in the memory is executed.
Accordingly, the disclosed embodiments provide a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps in the above-described method for editing or displaying a markup object.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
It should be noted that fig. 10 is a schematic diagram of a hardware entity of an electronic device in an embodiment of the present disclosure, and as shown in fig. 10, the hardware entity of the electronic device 1000 includes: a display 1001, a memory 1002, and a processor 1003, wherein the display 1001, the memory 1002, and the processor 1003 are connected by a communication bus 1004; a memory 1002 for storing executable computer programs; the processor 1003 is configured to implement the method provided by the embodiment of the present disclosure, for example, the editing method or the displaying method of the markup object provided by the embodiment of the present disclosure, in combination with the display screen 1001 when executing the executable computer program stored in the memory 1002.
The Memory 1002 may be configured to store instructions and applications executable by the processor 1003, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 1003 and modules in the electronic device 1000, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
The present disclosure provides a computer-readable storage medium, on which a computer program is stored, for causing the processor 1003 to execute the method provided by the present disclosure, for example, the editing method or the displaying method of the mark object provided by the present disclosure.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure. The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit of the present disclosure may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods according to the embodiments of the present disclosure. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only an embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the scope of the present disclosure.
Claims (14)
1. An editing method for a mark object, comprising:
displaying an editing interface of a mark object to be edited;
responding to the operation of editing the mark object on the editing interface, and acquiring the configuration information of the edited mark object;
generating a configuration file of the mark object based on the configuration information and sending the configuration file to a client; the client is used for identifying the marked object based on the configuration file and displaying the augmented reality effect associated with the marked object under the condition that the marked object is identified.
2. The method according to claim 1, wherein the obtaining configuration information of the edited tagged object in response to the operation of editing the tagged object in the editing interface comprises:
responding to the operation of setting the type of the mark object in the type setting area of the editing interface, and acquiring the set mark type;
displaying at least one information configuration area on the editing interface based on the mark type;
and acquiring the edited configuration information of the mark object in response to the information configuration operation performed in at least one information configuration area.
3. The method according to claim 2, wherein the mark type is a two-dimensional type, the at least one information configuration area includes an image setting area, and the configuration information includes an identification image of the mark object;
the acquiring the edited configuration information of the mark object in response to the information configuration operation performed in at least one information configuration area includes:
and acquiring the set identification image of the marking object in response to the image setting operation performed in the image setting area.
4. The method according to claim 2, wherein the mark type is a three-dimensional type, the at least one information configuration area includes an image setting area and a size setting area, and the configuration information includes an identification image and a size parameter of the mark object;
the acquiring the edited configuration information of the mark object in response to the information configuration operation performed in at least one information configuration area includes:
acquiring a set identification image of the marking object in response to an image setting operation performed in the image setting area;
acquiring the set size parameter of the marking object in response to the size setting operation performed in the size setting area.
5. The method of claim 4,
in the case where the bottom surface of the marking object is circular, the size parameters include a bottom surface diameter and a height;
in the case where the bottom surface of the marking object is rectangular, the size parameters include a bottom surface length, a bottom surface width, and a height.
6. The method according to any one of claims 3 to 5, wherein the image setting area includes an image upload control;
the acquiring the set identification image of the marker object in response to the image setting operation performed in the image setting area includes:
and responding to the image uploading operation based on the image uploading control, and acquiring the uploaded identification image of the mark object.
7. The method according to any of claims 3 to 6, wherein the configuration information further comprises a recognition algorithm for recognizing the mark object;
the obtaining of the edited configuration information of the markup object in response to the information configuration operation performed in at least one of the information configuration areas further includes one of:
determining the recognition algorithm based on the identification image and the marker type;
and determining the recognition algorithm based on the configured algorithm version in response to the algorithm version configuration operation performed in at least one information configuration area.
8. The method according to any one of claims 1 to 7, further comprising:
displaying the test address of the marked object on the editing interface; the test address is used for entering a test environment of the marked object, and the test environment is used for identifying the marked object based on the configuration file and displaying an augmented reality effect associated with the marked object under the condition that the marked object is identified.
9. The method according to any one of claims 1 to 8, wherein generating and sending the configuration file of the markup object to a client based on the configuration information comprises:
carrying out format verification on the configuration information based on a preset verification rule to obtain a verification result;
and generating a configuration file of the marked object and sending the configuration file to a client based on the configuration information under the condition that the verification result represents that the verification is successful.
10. A display method is applied to a client and comprises the following steps:
acquiring a configuration file of a marked object; wherein the configuration file is generated based on configuration information of the markup object, the configuration information being obtained in response to an operation of editing the markup object at an editing interface of the markup object;
identifying the tagged object based on the configuration file;
in the event that the tagged object is identified, an augmented reality effect associated with the tagged object is presented.
11. An editing apparatus for marking an object, comprising:
the first display module is used for displaying an editing interface of a mark object to be edited;
the editing module is used for responding to the operation of editing the mark object on the editing interface and acquiring the configuration information of the edited mark object;
the generating module is used for generating a configuration file of the mark object based on the configuration information and sending the configuration file to the client; the client is used for identifying the marked object based on the configuration file and displaying the augmented reality effect associated with the marked object under the condition that the marked object is identified.
12. A display device, comprising:
the second acquisition module is used for acquiring a configuration file of the marked object; wherein the configuration file is generated based on configuration information of the markup object, the configuration information being obtained in response to an operation of editing the markup object at an editing interface of the markup object;
the identification module is used for identifying the marked object based on the configuration file;
and the third display module is used for displaying the augmented reality effect associated with the marked object under the condition that the marked object is identified.
13. An electronic device, comprising:
a display screen; a memory for storing an executable computer program;
a processor for implementing the method of any one of claims 1 to 9 or 10 in conjunction with the display screen when executing an executable computer program stored in the memory.
14. A computer-readable storage medium, having stored thereon a computer program for causing a processor, when executed, to carry out the method of any one of claims 1 to 9 or 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111165884.2A CN113867875A (en) | 2021-09-30 | 2021-09-30 | Method, device, equipment and storage medium for editing and displaying marked object |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113867875A true CN113867875A (en) | 2021-12-31 |
Family
ID=79001498
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115129553A (en) * | 2022-07-04 | 2022-09-30 | 北京百度网讯科技有限公司 | Graph visualization method, device, equipment, medium and product |
CN116166242A (en) * | 2023-03-22 | 2023-05-26 | 广州嘉为科技有限公司 | Canvas-based measurement card implementation method, device and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108983971A (en) * | 2018-06-29 | 2018-12-11 | 北京小米智能科技有限公司 | Labeling method and device based on augmented reality |
CN112070907A (en) * | 2020-08-31 | 2020-12-11 | 北京市商汤科技开发有限公司 | Augmented reality system and augmented reality data generation method and device |
CN112348968A (en) * | 2020-11-06 | 2021-02-09 | 北京市商汤科技开发有限公司 | Display method and device in augmented reality scene, electronic equipment and storage medium |
Legal events: filed 2021-09-30 as CN202111165884.2A; published as CN113867875A; status: not active (withdrawn).
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115129553A (en) * | 2022-07-04 | 2022-09-30 | 北京百度网讯科技有限公司 | Graph visualization method, device, equipment, medium and product |
CN116166242A (en) * | 2023-03-22 | 2023-05-26 | 广州嘉为科技有限公司 | Canvas-based measurement card implementation method, device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10839605B2 | | Sharing links in an augmented reality environment |
US10121099B2 | | Information processing method and system |
EP2508975B1 | | Display control device, display control method, and program |
KR101691903B1 | | Methods and apparatus for using optical character recognition to provide augmented reality |
US20190333478A1 | | Adaptive fiducials for image match recognition and tracking |
US10186084B2 | | Image processing to enhance variety of displayable augmented reality objects |
KR101737725B1 | | Content creation tool |
CN110716645A | | Augmented reality data presentation method and device, electronic equipment and storage medium |
CN112215924A | | Picture comment processing method and device, electronic equipment and storage medium |
CN113934297B | | Interaction method and device based on augmented reality, electronic equipment and medium |
CN113906413A | | Contextual media filter search |
CN113867875A | | Method, device, equipment and storage medium for editing and displaying marked object |
JP2023172893A | | Control method, control device, and recording medium for interactive three-dimensional representation of target object |
CN114116086A | | Page editing method, device, equipment and storage medium |
US10600060B1 | | Predictive analytics from visual data |
US20150185992A1 | | Providing geolocated imagery related to a user-selected image |
CN114363705B | | Augmented reality equipment and interaction enhancement method |
JP2020534590A | | Processing of visual input |
CN113867874A | | Page editing and displaying method, device, equipment and computer readable storage medium |
KR102175519B1 | | Apparatus for providing virtual contents to augment usability of real object and method using the same |
CN115018975A | | Data set generation method and device, electronic equipment and storage medium |
US10354176B1 | | Fingerprint-based experience generation |
KR102167588B1 | | Video producing service device based on contents received from a plurality of user equipments, video producing method based on contents received from a plurality of user equipments and computer readable medium having computer program recorded therefor |
US20240185546A1 | | Interactive reality computing experience using multi-layer projections to create an illusion of depth |
CN118550489A | | Display method, display device, electronic equipment and computer medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 2021-12-31 |