
CN110887194B - Simulation modeling method and device for three-dimensional space - Google Patents

Simulation modeling method and device for three-dimensional space

Info

Publication number
CN110887194B
Authority
CN
China
Prior art keywords
dimensional space
use range
preset use
preset
control terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910979241.8A
Other languages
Chinese (zh)
Other versions
CN110887194A (en)
Inventor
马鑫磊
陈彦宇
谭泽汉
叶盛世
李茹
黎小坚
孙波
蔡琪
曾安福
朱鹏飞
汪立富
邓剑锋
刘郑宇
杜洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai, Zhuhai Lianyun Technology Co Ltd filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN201910979241.8A priority Critical patent/CN110887194B/en
Publication of CN110887194A publication Critical patent/CN110887194A/en
Application granted granted Critical
Publication of CN110887194B publication Critical patent/CN110887194B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F24: HEATING; RANGES; VENTILATING
    • F24F: AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
    • F24F11/00: Control or safety arrangements
    • F24F11/89: Arrangement or mounting of control or safety devices
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00: Systems controlled by a computer
    • G05B15/02: Systems controlled by a computer electric
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/35: Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/36: Indoor scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Combustion & Propulsion (AREA)
  • Chemical & Material Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a simulation modeling method and device for a three-dimensional space, and belongs to the technical field of smart home. The method comprises the following steps: obtaining an image of a preset use range of a target intelligent device; performing image recognition through a machine vision technology to obtain first three-dimensional space data of the preset use range; generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data; and sending the three-dimensional space visualization model to a control terminal of the target intelligent device so that the control terminal displays the three-dimensional space visualization model. By the method and the device, visual information in the process of setting the intelligent device can be enriched, and user experience is improved.

Description

Simulation modeling method and device for three-dimensional space
Technical Field
The application relates to the technical field of smart home, in particular to a three-dimensional space simulation modeling method and device.
Background
With the improvement of people's living standards, more and more users choose to install intelligent devices indoors to adjust the indoor temperature.
At present, when a user uses an intelligent device, the working parameters of the intelligent device generally need to be set manually. Taking an intelligent air conditioner as an example, the user needs to set parameters such as temperature, wind speed and wind direction. The air conditioner can detect the indoor temperature through a temperature sensor arranged on the air conditioner or on a remote controller, and then display the indoor temperature on a display. The user can further adjust the parameters of the air conditioner according to his or her own body feeling and the temperature fed back by the air conditioner, so as to achieve the best user experience.
In the existing scheme, the visual information fed back to the user by the intelligent device is limited, resulting in a poor user experience.
Disclosure of Invention
The embodiment of the application aims to provide a simulation modeling method and device for a three-dimensional space, so as to solve the problems of limited visual information and poor user experience in the process of setting intelligent equipment. The specific technical scheme is as follows:
in a first aspect, a simulation modeling method for a three-dimensional space is provided, the method comprising:
acquiring an image of a preset use range of a target intelligent device;
performing image recognition through a machine vision technology to obtain first three-dimensional space data of the preset use range;
generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data;
and sending the three-dimensional space visualization model to a control terminal of the target intelligent device so that the control terminal displays the three-dimensional space visualization model.
Optionally, the first three-dimensional spatial data of the preset usage range at least includes: the size of the preset use range, the size and the position of each object in the preset use range, the position of the target intelligent device in the preset use range, and the size and the position of a window in the preset use range.
Optionally, the generating a three-dimensional space visualization model corresponding to the preset usage range according to the first three-dimensional space data includes:
acquiring environmental parameters sent by the control terminal, wherein the environmental parameters at least comprise one or more of the height of a floor and the orientation of the preset use range;
and generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data and the environment parameter.
Optionally, the method further includes:
receiving second three-dimensional space data of a preset use range of the target intelligent device, which is sent by the control terminal, wherein the second three-dimensional space data is data set on the control terminal by a user;
the generating of the three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data includes:
calculating modified target three-dimensional space data according to the first three-dimensional space data, the second three-dimensional space data and a preset data modification algorithm;
and generating a three-dimensional space visualization model corresponding to the preset use range according to the target three-dimensional space data.
Optionally, the acquiring an image of a preset use range of the target smart device includes:
receiving an image sent by a control terminal corresponding to target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment through the control terminal by a user; or,
receiving an image sent by target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment by a camera device of the target intelligent equipment.
In a second aspect, there is provided a simulation modeling apparatus for a three-dimensional space, the apparatus comprising:
the acquisition module is used for acquiring an image of a preset use range of the target intelligent equipment;
the identification module is used for carrying out image identification through a machine vision technology to obtain first three-dimensional space data of the preset use range;
the generating module is used for generating a three-dimensional space visualization model corresponding to the preset using range according to the first three-dimensional space data;
and the sending module is used for sending the three-dimensional space visualization model to a control terminal of the target intelligent device so that the control terminal can display the three-dimensional space visualization model.
Optionally, the first three-dimensional spatial data of the preset usage range at least includes: the size of the preset use range, the size and the position of each object in the preset use range, the position of the target intelligent device in the preset use range, and the size and the position of a window in the preset use range.
Optionally, the generating module is specifically configured to:
acquiring environmental parameters sent by the control terminal, wherein the environmental parameters at least comprise one or more of the height of a floor and the orientation of the preset use range;
and generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data and the environment parameter.
Optionally, the apparatus further comprises:
the receiving module is used for receiving second three-dimensional space data of the preset use range of the target intelligent device, which is sent by the control terminal, wherein the second three-dimensional space data is data set on the control terminal by a user;
the generating module is specifically configured to calculate modified target three-dimensional space data according to the first three-dimensional space data, the second three-dimensional space data, and a preset data modification algorithm; and generating a three-dimensional space visualization model corresponding to the preset use range according to the target three-dimensional space data.
Optionally, the obtaining module is specifically configured to:
receiving an image sent by a control terminal corresponding to target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment through the control terminal by a user; or,
receiving an image sent by target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment by a camera device of the target intelligent equipment.
In a third aspect, a cloud server is provided, which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the steps of the method when executing the program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the above-mentioned method steps.
In a fifth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the above described methods of simulation modeling of a three-dimensional space.
The embodiment of the application has the following beneficial effects:
the embodiment of the application provides a simulation modeling method for a three-dimensional space, which can acquire an image of a preset use range of a target intelligent device, perform image recognition through a machine vision technology to obtain first three-dimensional space data of the preset use range, then generate a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data, and send the three-dimensional space visualization model to a control terminal of the target intelligent device, so that the control terminal displays the three-dimensional space visualization model. The three-dimensional space visualization model of the preset application range of the intelligent equipment can be generated through the scheme, the three-dimensional space visualization model can be displayed through the control terminal, the visualization information in the process of setting the intelligent equipment is enriched, and the user experience is improved.
Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below; it is apparent that those skilled in the art can also obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a simulation modeling method for a three-dimensional space according to an embodiment of the present application;
fig. 2 is a schematic diagram of a three-dimensional space visualization model provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a simulation modeling apparatus for a three-dimensional space according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a cloud server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a simulation modeling method, which can be applied to a cloud server of an intelligent device, where the cloud server can be a backend server of the intelligent device. The simulation modeling method provided in the embodiments of the present application is described in detail below with reference to specific embodiments. As shown in fig. 1, the specific steps are as follows:
Step 101: obtaining an image of a preset use range of a target intelligent device.
In the embodiment of the application, the cloud server can establish a communication connection with at least one intelligent device and with a control terminal of the intelligent device, and perform data transmission through the communication connection. The control terminal can be a remote controller of the intelligent device, or a mobile phone, a smart tablet or the like on which an APP for controlling the intelligent device is installed.
For any one smart device (which may be referred to as a target smart device), the target smart device may be used within a preset use range. For example, the target smart device may be installed in a fixed room (i.e., the preset use range of the target smart device) and operate with specified working parameters; for example, a target air conditioner is installed in the living room, and in the dining mode the temperature of the target air conditioner is 24 ℃, the wind speed is 10, and the wind direction is upward. For another example, the target smart device may be used in multiple rooms (for example, an intelligent sweeping robot may move among multiple rooms), in which case the preset use range includes the multiple rooms. The preset use range may be a use range preset by a technician or a use range set by the user.
The cloud server can acquire an image of a preset use range of the target intelligent device. The image comprises images of all directions of the preset use range, and can be a photo or a video. The manner in which the cloud server obtains the image in the preset use range may be various, and the embodiments of the present application provide several feasible implementation manners, which are specifically as follows.
In a first mode, the cloud server can receive an image sent by a control terminal corresponding to the target intelligent device.
The image is obtained by shooting a preset use range of the target intelligent device through the control terminal by the user.
In the embodiment of the application, a user can shoot an image of a preset use range of a target intelligent device through a control terminal, for example, shoot a plurality of pictures of the preset use range, or record a video of the preset use range. For the mode of taking the picture, the user can take a plurality of pictures of the preset use range to cover all directions of the preset use range, and for each object (such as furniture or electric appliances) in the preset use range, the user can take pictures of a plurality of views of each object to improve the accuracy of image recognition. For the video recording mode, a user needs to record a 360-degree video (i.e., a panoramic video) in a preset use range, and the video needs to include objects such as smart devices, windows, or furniture in the preset use range.
After the control terminal obtains the image of the preset use range of the target intelligent device, the image can be sent to the cloud server. The cloud server can receive the image sent by the control terminal and correspondingly store the image and the identification of the target intelligent device.
In the second mode, the cloud server can receive the image sent by the target intelligent device.
The image is obtained by shooting a preset use range of the target intelligent equipment by a camera device of the target intelligent equipment.
In the embodiment of the present application, a camera device, such as a camera, may also be disposed in the target smart device. The target smart device may capture an image of the preset use range through the camera device, for example, take multiple photos of the preset use range, or record a video of the preset use range. For the mode of taking photos, the target smart device can adjust the shooting angle of the camera device to take multiple photos of the preset use range, covering all directions of the preset use range. Similarly, for the mode of recording video, the target smart device can adjust the shooting angle of the camera device to record a 360-degree video (i.e., a panoramic video) of the preset use range, and the video needs to contain objects such as the smart device, windows or furniture within the preset use range.
After the target intelligent device obtains the image of the preset use range of the target intelligent device, the image can be sent to the cloud server. The cloud server may receive the image sent by the target intelligent device, and store the image and the identifier of the target intelligent device correspondingly.
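Purely as an illustrative sketch by the editor (not part of the patent disclosure; the function, variable and device names below are hypothetical), the two acquisition modes above could feed a single server-side store keyed by the identifier of the target smart device, for example:

    # Hypothetical sketch: store received images against the device identifier.
    # An in-memory dict stands in for whatever persistent storage the cloud server uses.
    from collections import defaultdict

    image_store = defaultdict(list)  # device identifier -> list of received images

    def receive_image(device_id: str, image_bytes: bytes, source: str) -> None:
        """Store an image of the preset use range for a target smart device.

        source is "control_terminal" (mode one) or "device_camera" (mode two).
        """
        image_store[device_id].append({"source": source, "data": image_bytes})

    # Example usage:
    receive_image("air_conditioner_001", b"<photo or video frame bytes>", "control_terminal")
    receive_image("air_conditioner_001", b"<photo or video frame bytes>", "device_camera")

In both modes the image ends up stored against the same identifier, which is what allows the later recognition and modeling steps to look it up.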
Step 102: performing image recognition through a machine vision technology to obtain first three-dimensional space data of the preset use range.
In the embodiment of the application, after the cloud server receives the image of the preset use range of the target intelligent device sent by the control terminal, the image can be recognized through a machine vision technology. Specifically, an image recognition and parameter extraction module may extract, from the image, the geometric dimensions and the position information of each object such as the smart device, the window and the furniture, to obtain three-dimensional space data (which may be referred to as first three-dimensional space data for ease of distinction), so as to perform three-dimensional space modeling. The three-dimensional space data of the preset use range at least comprises: the size of the preset use range, the size and the position of each object in the preset use range, the position of the target intelligent device in the preset use range, and the size and the position of a window in the preset use range. Each object in the preset use range refers to furniture, large electronic equipment and other objects in the preset use range, such as a wardrobe, a bed, a screen, a refrigerator and the like. The three-dimensional space data may also include other data, which may be set by a technician as required; the embodiment of the present application is not limited thereto.
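For illustration only (the field names, types and units below are the editor's assumptions, not part of the disclosure), the first three-dimensional space data listed above could be held in a simple structure such as:

    # Hypothetical representation of the first three-dimensional space data
    # produced by image recognition (all names and units are assumptions).
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class ObjectData:
        name: str                               # e.g. "wardrobe", "bed", "refrigerator"
        size: Tuple[float, float, float]        # length, width, height
        position: Tuple[float, float, float]    # position within the preset use range

    @dataclass
    class FirstSpaceData:
        room_size: Tuple[float, float, float]                    # size of the preset use range
        device_position: Tuple[float, float, float]              # position of the target smart device
        window_size: Tuple[float, float]                         # size of the window
        window_position: Tuple[float, float, float]              # position of the window
        objects: List[ObjectData] = field(default_factory=list)  # furniture and large appliances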
Step 103: generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data.
In the embodiment of the application, after the cloud server acquires the three-dimensional space data, a three-dimensional space modeling module in the cloud server constructs a three-dimensional space model corresponding to the preset use range according to the three-dimensional space data. The three-dimensional space modeling module can be realized by three-dimensional modeling software such as PRO/E, Solidworks and UG, which is not described in detail in the embodiment of the present application.
Optionally, the user can also upload the environmental parameters, and the cloud server can optimize the display effect of the three-dimensional space visualization model in combination with the environmental parameters. Correspondingly, the processing procedure of the cloud server may be: acquiring an environmental parameter sent by a control terminal; and generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data and the environment parameter.
In the embodiment of the application, the user can set the environmental parameters at the control terminal. The environmental parameters are parameters capable of assisting three-dimensional modeling, and at least comprise one or more of the floor height and the orientation of the preset use range. The control terminal can send the environmental parameters set by the user to the cloud server. The cloud server can generate a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data, and can further adjust the display effect of the three-dimensional space visualization model according to the environmental parameters uploaded by the user. For example, the orientation of the three-dimensional space visualization model can be adjusted or displayed according to the orientation of the preset use range; for another example, if a window exists in the preset use range, an image corresponding to the floor height can be displayed in the display area corresponding to the window.
In one example, if the preset use range faces south, the three-dimensional space visualization model of the preset use range can be oriented according to this direction. In another example, if the floor height is the 1st floor, images of trees, flowers, plants and the like can be displayed in the display area corresponding to the window; if the floor height is the 20th floor, images of buildings, birds and the like can be displayed in the display area corresponding to the window. Based on this scheme, the display effect of the three-dimensional space visualization model can be optimized, and the user experience is improved. Fig. 2 is a schematic diagram of a three-dimensional space visualization model provided in an embodiment of the present application.
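As a minimal sketch of this display adjustment (the floor threshold and backdrop categories below are the editor's assumptions; the description does not fix them), the environmental parameters could drive the display hints as follows:

    # Hypothetical sketch of adjusting the display effect of the visualization
    # model according to user-uploaded environmental parameters.
    from typing import Optional

    def window_backdrop(floor: int) -> str:
        """Pick an image category for the window area based on the floor height."""
        # Assumed threshold: low floors show greenery, higher floors show skyline.
        return "trees_and_flowers" if floor <= 3 else "buildings_and_birds"

    def apply_environment(model: dict, floor: Optional[int] = None,
                          orientation: Optional[str] = None) -> dict:
        """Attach display hints (orientation label, window backdrop) to the model."""
        if orientation is not None:
            model["orientation"] = orientation            # e.g. "south"
        if floor is not None:
            model["window_backdrop"] = window_backdrop(floor)
        return model

    # Example usage: a south-facing room on the 20th floor gets a skyline backdrop.
    print(apply_environment({"space_data": "..."}, floor=20, orientation="south"))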
Optionally, the three-dimensional space visualization model may be further adjusted by combining with data uploaded by the user, so as to improve the accuracy of the simulation, and accordingly, the processing process of the cloud server may be: receiving second three-dimensional space data of a preset use range of the target intelligent device, which is sent by the control terminal, wherein the second three-dimensional space data is data set on the control terminal by a user; calculating modified target three-dimensional space data according to the first three-dimensional space data, the second three-dimensional space data and a preset data modification algorithm; and generating a three-dimensional space visualization model corresponding to the preset use range according to the target three-dimensional space data.
In the embodiment of the application, a user can input three-dimensional space data (for convenience of distinguishing, may be referred to as second three-dimensional space data) of a preset use range of a target smart device on a control terminal. In one implementation, the control terminal may provide an input interface of three-dimensional space data, and the user may measure second three-dimensional space data of a preset use range of the target smart device, such as the length, width and height of a bedroom, the length, width and height of each piece of furniture in the bedroom, the size and position of a window, the position of the smart device in the bedroom, and the like. The user can input a measurement result in the input interface, and the control terminal can send the second three-dimensional space data input by the user to the cloud server, so that the cloud server stores the second three-dimensional space data in the preset use range of the target intelligent device.
In another implementation, Flash-based three-dimensional visual modeling may be performed. Specifically, the APP of the control terminal may be preset with a three-dimensional virtual scene, furniture models, a window model, a smart device model, and the like. The user can set the specific length, width and height dimensions according to the preset use range of the smart device, and can place the furniture models, the window model, the smart device model and the like into the virtual scene by dragging, so as to set the positions of the furniture, the window and the smart device in the virtual scene; the user can also set the size of the furniture, the size of the window, the size of the smart device and so on. The control terminal obtains the corresponding second three-dimensional space data according to the result set by the user and sends the second three-dimensional space data to the cloud server, so that the cloud server stores the second three-dimensional space data of the preset use range of the target intelligent device. Optionally, the user does not need to input all of the three-dimensional space data and only needs to input specified important data, which may be set by the user or a technician; this will not be described in detail again in this embodiment.
The cloud server can calculate the corrected target three-dimensional space data according to the first three-dimensional space data, the second three-dimensional space data and a preset data correction algorithm. In one implementation, weights may be set for the first three-dimensional space data and the second three-dimensional space data respectively, and the target three-dimensional space data may then be calculated according to the weights. For example, if the room height in the first three-dimensional space data is 2.7 with a weight of 0.4, and the room height in the second three-dimensional space data is 2.5 with a weight of 0.6, then the room height in the target three-dimensional space data is 2.7 × 0.4 + 2.5 × 0.6 = 2.58. The cloud server can generate a three-dimensional space visualization model corresponding to the preset use range according to the target three-dimensional space data.
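A minimal sketch of this weighted correction, assuming the fixed weights from the example above (the function name and data layout are hypothetical), could look like this:

    # Hypothetical sketch of the weighted data correction described above:
    # target = first * w_first + second * w_second, applied field by field.
    def correct(first: dict, second: dict, w_first: float = 0.4, w_second: float = 0.6) -> dict:
        """Merge recognized (first) and user-entered (second) three-dimensional space data."""
        target = dict(first)
        for key, user_value in second.items():
            if key in first:
                target[key] = first[key] * w_first + user_value * w_second
            else:
                target[key] = user_value  # keep user data where recognition produced none
        return target

    # Example from the description: recognized room height 2.7, user-entered 2.5.
    print(correct({"room_height": 2.7}, {"room_height": 2.5}))  # about {'room_height': 2.58}

Any other preset data correction algorithm could be substituted for the weighted average; the sketch only reproduces the numerical example given above.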
Step 104: sending the three-dimensional space visualization model to a control terminal of the target intelligent device so that the control terminal can display the three-dimensional space visualization model.
In the embodiment of the application, the cloud server can send the three-dimensional space visualization model to the control terminal of the target intelligent device in the form of a three-dimensional visualization cloud picture, and the control terminal can display the three-dimensional space visualization model so that the user can control the target intelligent device according to the three-dimensional space visualization model. Optionally, when the control terminal displays the three-dimensional space visualization model, if the user adjusts the temperature, the control terminal can adjust the three-dimensional space visualization model; for example, if the user increases the temperature, the control terminal can display a warm color, and if the user decreases the temperature, the control terminal can display a cool color. The specific scheme can be set by a technician, which is not limited in the embodiment of the present application.
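As an illustrative sketch only (the color scheme below is an assumption; the description leaves the specific scheme to the technician), the terminal-side color adjustment could be as simple as:

    # Hypothetical sketch: tint the displayed model when the user adjusts the temperature.
    def tint_for_temperature(previous_temp: float, new_temp: float) -> str:
        """Return a warm tint when the temperature is raised and a cool tint when it is lowered."""
        if new_temp > previous_temp:
            return "warm"     # e.g. render the model with an orange/red overlay
        if new_temp < previous_temp:
            return "cool"     # e.g. render the model with a blue overlay
        return "neutral"

    # Example usage: raising the set temperature from 24 to 26 gives a warm tint.
    print(tint_for_temperature(24, 26))  # warm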
In the embodiment of the application, the image of the preset use range of the target intelligent device can be acquired, image recognition can be performed through a machine vision technology to obtain the first three-dimensional space data of the preset use range, the three-dimensional space visualization model corresponding to the preset use range can then be generated according to the first three-dimensional space data, and the three-dimensional space visualization model can be sent to the control terminal of the target intelligent device, so that the control terminal can display the three-dimensional space visualization model. Through this scheme, a three-dimensional space visualization model of the preset use range of the intelligent device can be generated and displayed through the control terminal, which enriches the visual information in the process of setting the intelligent device and improves the user experience.
Based on the same technical concept, an embodiment of the present application further provides a simulation modeling apparatus for a three-dimensional space, as shown in fig. 3, the apparatus includes:
an obtaining module 310, configured to obtain an image of a preset use range of a target smart device;
the identification module 320 is configured to perform image identification through a machine vision technology to obtain first three-dimensional space data of the preset use range;
a generating module 330, configured to generate a three-dimensional space visualization model corresponding to the preset usage range according to the first three-dimensional space data;
a sending module 340, configured to send the three-dimensional space visualization model to a control terminal of the target intelligent device, so that the control terminal displays the three-dimensional space visualization model.
Optionally, the first three-dimensional spatial data of the preset usage range at least includes: the size of the preset use range, the size and the position of each object in the preset use range, the position of the target intelligent device in the preset use range, and the size and the position of a window in the preset use range.
Optionally, the generating module is specifically configured to:
acquiring environmental parameters sent by the control terminal, wherein the environmental parameters at least comprise one or more of the height of a floor and the orientation of the preset use range;
and generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data and the environment parameter.
Optionally, the apparatus further comprises:
the receiving module is used for receiving second three-dimensional space data of the preset use range of the target intelligent device, which is sent by the control terminal, wherein the second three-dimensional space data is data set on the control terminal by a user;
the generating module is specifically configured to calculate modified target three-dimensional space data according to the first three-dimensional space data, the second three-dimensional space data, and a preset data modification algorithm; and generating a three-dimensional space visualization model corresponding to the preset use range according to the target three-dimensional space data.
Optionally, the obtaining module is specifically configured to:
receiving an image sent by a control terminal corresponding to target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment through the control terminal by a user; or,
receiving an image sent by target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment by a camera device of the target intelligent equipment.
In the embodiment of the application, the image of the preset use range of the target intelligent device can be acquired, image recognition can be performed through a machine vision technology to obtain the first three-dimensional space data of the preset use range, the three-dimensional space visualization model corresponding to the preset use range can then be generated according to the first three-dimensional space data, and the three-dimensional space visualization model can be sent to the control terminal of the target intelligent device, so that the control terminal can display the three-dimensional space visualization model. Through this scheme, a three-dimensional space visualization model of the preset use range of the intelligent device can be generated and displayed through the control terminal, which enriches the visual information in the process of setting the intelligent device and improves the user experience.
Based on the same technical concept, the embodiment of the present application further provides an electronic device, as shown in fig. 4, including a processor 401, a communication interface 402, a memory 403 and a communication bus 404, where the processor 401, the communication interface 402 and the memory 403 complete mutual communication through the communication bus 404,
a memory 403 for storing a computer program;
the processor 401, when executing the program stored in the memory 403, implements the following steps:
acquiring an image of a preset use range of a target intelligent device;
performing image recognition through a machine vision technology to obtain first three-dimensional space data of the preset use range;
generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data;
and sending the three-dimensional space visualization model to a control terminal of the target intelligent device so that the control terminal displays the three-dimensional space visualization model.
Optionally, the first three-dimensional spatial data of the preset usage range at least includes: the size of the preset use range, the size and the position of each object in the preset use range, the position of the target intelligent device in the preset use range, and the size and the position of a window in the preset use range.
Optionally, the generating a three-dimensional space visualization model corresponding to the preset usage range according to the first three-dimensional space data includes:
acquiring environmental parameters sent by the control terminal, wherein the environmental parameters at least comprise one or more of the height of a floor and the orientation of the preset use range;
and generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data and the environment parameter.
Optionally, the method further includes:
receiving second three-dimensional space data of a preset use range of the target intelligent device, which is sent by the control terminal, wherein the second three-dimensional space data is data set on the control terminal by a user;
the generating of the three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data includes:
calculating modified target three-dimensional space data according to the first three-dimensional space data, the second three-dimensional space data and a preset data modification algorithm;
and generating a three-dimensional space visualization model corresponding to the preset use range according to the target three-dimensional space data.
Optionally, the acquiring an image of a preset use range of the target smart device includes:
receiving an image sent by a control terminal corresponding to target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment through the control terminal by a user; or,
receiving an image sent by target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment by a camera device of the target intelligent equipment.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment provided by the present application, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above simulation modeling methods for three-dimensional space.
In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method for simulation modeling of any of the three-dimensional spaces described above.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may be wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method of simulation modeling of a three-dimensional space, the method comprising:
acquiring an image of a preset use range of a target intelligent device;
performing image recognition through a machine vision technology to obtain first three-dimensional space data of the preset use range;
generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data;
sending the three-dimensional space visualization model to a control terminal of the target intelligent device so that the control terminal displays the three-dimensional space visualization model;
the generating of the three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data includes:
acquiring environmental parameters sent by the control terminal, wherein the environmental parameters at least comprise one or more of the height of a floor and the orientation of the preset use range;
generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data and the environment parameter;
the method further comprises the following steps:
receiving second three-dimensional space data of a preset use range of the target intelligent device, which is sent by the control terminal, wherein the second three-dimensional space data is data set on the control terminal by a user;
the generating of the three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data includes:
calculating modified target three-dimensional space data according to the first three-dimensional space data, the second three-dimensional space data and a preset data modification algorithm;
and generating a three-dimensional space visualization model corresponding to the preset use range according to the target three-dimensional space data.
2. The method according to claim 1, wherein the first three-dimensional spatial data of the preset usage range at least comprises: the size of the preset use range, the size and the position of each object in the preset use range, the position of the target intelligent device in the preset use range, and the size and the position of a window in the preset use range.
3. The method of claim 1, wherein the obtaining the image of the preset use range of the target smart device comprises:
receiving an image sent by a control terminal corresponding to target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment through the control terminal by a user; or,
receiving an image sent by target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment by a camera device of the target intelligent equipment.
4. An apparatus for simulation modeling of a three-dimensional space, the apparatus comprising:
the acquisition module is used for acquiring an image of a preset use range of the target intelligent equipment;
the identification module is used for carrying out image identification through a machine vision technology to obtain first three-dimensional space data of the preset use range;
the generating module is used for generating a three-dimensional space visualization model corresponding to the preset using range according to the first three-dimensional space data;
the sending module is used for sending the three-dimensional space visualization model to a control terminal of the target intelligent device so that the control terminal can display the three-dimensional space visualization model;
the generation module is specifically configured to:
acquiring environmental parameters sent by the control terminal, wherein the environmental parameters at least comprise one or more of the height of a floor and the orientation of the preset use range;
generating a three-dimensional space visualization model corresponding to the preset use range according to the first three-dimensional space data and the environment parameter;
the device further comprises:
the receiving module is used for receiving second three-dimensional space data of the preset use range of the target intelligent device, which is sent by the control terminal, wherein the second three-dimensional space data is data set on the control terminal by a user;
the generating module is specifically configured to calculate modified target three-dimensional space data according to the first three-dimensional space data, the second three-dimensional space data, and a preset data modification algorithm; and generating a three-dimensional space visualization model corresponding to the preset use range according to the target three-dimensional space data.
5. The apparatus according to claim 4, wherein the first three-dimensional spatial data of the preset usage range at least comprises: the size of the preset use range, the size and the position of each object in the preset use range, the position of the target intelligent device in the preset use range, and the size and the position of a window in the preset use range.
6. The apparatus of claim 4, wherein the obtaining module is specifically configured to:
receiving an image sent by a control terminal corresponding to target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment through the control terminal by a user; or,
receiving an image sent by target intelligent equipment, wherein the image is obtained by shooting a preset use range of the target intelligent equipment by a camera device of the target intelligent equipment.
7. A cloud server, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 3 when executing a program stored in the memory.
8. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of the claims 1-3.
CN201910979241.8A 2019-10-15 2019-10-15 Simulation modeling method and device for three-dimensional space Active CN110887194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910979241.8A CN110887194B (en) 2019-10-15 2019-10-15 Simulation modeling method and device for three-dimensional space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910979241.8A CN110887194B (en) 2019-10-15 2019-10-15 Simulation modeling method and device for three-dimensional space

Publications (2)

Publication Number Publication Date
CN110887194A CN110887194A (en) 2020-03-17
CN110887194B true CN110887194B (en) 2020-11-17

Family

ID=69746207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910979241.8A Active CN110887194B (en) 2019-10-15 2019-10-15 Simulation modeling method and device for three-dimensional space

Country Status (1)

Country Link
CN (1) CN110887194B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652969A (en) * 2020-05-28 2020-09-11 京东数字科技控股有限公司 Data 3D visualization display method and device, electronic equipment and storage medium
CN111739154A (en) * 2020-06-26 2020-10-02 深圳全景空间工业有限公司 System and method for building indoor environment automatic modeling
CN113160304A (en) * 2021-01-28 2021-07-23 珠海格力电器股份有限公司 Equipment position identification method and device, storage medium and equipment central control system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1987857A (en) * 2006-12-18 2007-06-27 于慧 Method for realizing digital city system using virtual artificial comprehensive image and text information interaction
CN105378792A (en) * 2013-05-31 2016-03-02 朗桑有限公司 Three-dimensional object modeling
CN108332365A (en) * 2018-01-04 2018-07-27 珠海格力电器股份有限公司 Air conditioner control method and device
CN108648272A (en) * 2018-04-28 2018-10-12 上海激点信息科技有限公司 Three-dimensional live acquires modeling method, readable storage medium storing program for executing and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019002596A (en) * 2017-06-12 2019-01-10 清水建設株式会社 Temperature distribution measurement system, air conditioning system, cooling method, and program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1987857A (en) * 2006-12-18 2007-06-27 于慧 Method for realizing digital city system using virtual artificial comprehensive image and text information interaction
CN105378792A (en) * 2013-05-31 2016-03-02 朗桑有限公司 Three-dimensional object modeling
CN108332365A (en) * 2018-01-04 2018-07-27 珠海格力电器股份有限公司 Air conditioner control method and device
CN108648272A (en) * 2018-04-28 2018-10-12 上海激点信息科技有限公司 Three-dimensional live acquires modeling method, readable storage medium storing program for executing and device

Also Published As

Publication number Publication date
CN110887194A (en) 2020-03-17

Similar Documents

Publication Publication Date Title
CN110910503B (en) Simulation method and device for air conditioning environment
CN110887194B (en) Simulation modeling method and device for three-dimensional space
US10154246B2 (en) Systems and methods for 3D capturing of objects and motion sequences using multiple range and RGB cameras
CN111145352A (en) House live-action picture display method and device, terminal equipment and storage medium
CN108961152B (en) Method and device for generating plane house type graph
WO2019228188A1 (en) Method and apparatus for marking and displaying spatial size in virtual three-dimensional house model
CN114155299B (en) Building digital twinning construction method and system
CN109525674B (en) System and method for making house panorama
WO2017215308A1 (en) Method, device and system for controlling electrical appliance
CN111161336B (en) Three-dimensional reconstruction method, three-dimensional reconstruction apparatus, and computer-readable storage medium
CN111968247B (en) Method and device for constructing three-dimensional house space, electronic equipment and storage medium
US11062422B2 (en) Image processing apparatus, image communication system, image processing method, and recording medium
CN113436311A (en) House type graph generation method and device
CN111932666A (en) Reconstruction method and device of house three-dimensional virtual image and electronic equipment
CN108846899B (en) Method and system for improving area perception of user for each function in house source
US10147240B2 (en) Product image processing method, and apparatus and system thereof
CN106446098A (en) Live action image processing method and server based on location information
CN111028362A (en) Image display method, image annotation processing method, image processing device, image processing program, and storage medium
US20120098967A1 (en) 3d image monitoring system and method implemented by portable electronic device
WO2022101707A1 (en) Image processing method, recording medium, and image processing system
CN110662015A (en) Method and apparatus for displaying image
US12112424B2 (en) Systems and methods for constructing 3D model based on changing viewpiont
US20150138199A1 (en) Image generating system and image generating program product
TW201642158A (en) Interior design system and method
TWI706656B (en) Visualized household appliance control system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 519015 Room 601, Lianshan Lane, Jida Jingshan Road, Zhuhai City, Guangdong Province

Patentee after: Zhuhai Lianyun Technology Co.,Ltd.

Patentee after: GREE ELECTRIC APPLIANCES,Inc.OF ZHUHAI

Address before: 519070, Jinji Hill Road, front hill, Zhuhai, Guangdong

Patentee before: GREE ELECTRIC APPLIANCES,Inc.OF ZHUHAI

Patentee before: Zhuhai Lianyun Technology Co.,Ltd.
