
CN114859739A - Motion adjustment method and device, storage medium, and electronic device - Google Patents

Motion adjustment method and device, storage medium, and electronic device

Info

Publication number
CN114859739A
Authority
CN
China
Prior art keywords
action
wearable device
determining
protection
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210303314.3A
Other languages
Chinese (zh)
Inventor
李雅维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd and Haier Smart Home Co Ltd
Priority claimed from application CN202210303314.3A
Publication of CN114859739A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 15/00: Systems controlled by a computer
    • G05B 15/02: Systems controlled by a computer, electric
    • G05B 19/00: Programme-control systems
    • G05B 19/02: Programme-control systems, electric
    • G05B 19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B 2219/00: Program-control systems
    • G05B 2219/20: Pc systems
    • G05B 2219/26: Pc applications
    • G05B 2219/2642: Domotique, domestic, home control, automation, smart house

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an action adjustment method and device, a storage medium, and an electronic device, and relates to the technical field of smart homes. The action adjustment method includes: determining, by a wearable device of a first object, position information of the first object in a home area; when the first object is determined, according to the position information, to be located in a first protection area within the home area, determining whether the current action of the first object is consistent with a preset action; and, if consistent, playing voice information of a second object through the wearable device to remind the first object to adjust its current action. This technical solution solves the problems in the prior art that managing a pet's behavior greatly restricts the pet's activity and may even harm the pet's physical and mental health.

Description

Motion adjustment method and device, storage medium, and electronic device
Technical Field
The application relates to the technical field of smart homes, in particular to an action adjusting method and device, a storage medium and an electronic device.
Background
As more and more families keep pets, imperfect management of pet behavior has caused more and more problems, for example: a dog bites the sofa in the house, or a cat knocks over a vase. Existing pet behavior management schemes either confine the pet's activity, for example with a dog cage or cat cage, or manage the pet's behavior in ways that may hurt it, such as electric shocks. Such schemes are, on the one hand, harmful to the pet's physical and mental health and, on the other hand, unable to manage the pet's behavior effectively.
No effective solution has yet been proposed for the problems in the prior art that managing a pet's behavior greatly restricts the pet's activity and may even harm the pet's physical and mental health.
Disclosure of Invention
The embodiments of the invention provide an action adjustment method and device, a storage medium, and an electronic device, to at least solve the problems in the prior art that managing a pet's behavior greatly restricts the pet's activity and may even harm the pet's physical and mental health.
According to an embodiment of the present invention, an action adjustment method is provided, including: determining, by a wearable device of a first object, position information of the first object in a home area; when the first object is determined, according to the position information, to be located in a first protection area within the home area, determining whether the current action of the first object is consistent with a preset action; and, if consistent, playing voice information of a second object through the wearable device to remind the first object to adjust its current action.
In an exemplary embodiment, after playing the voice information of the second object through the wearable device to remind the first object to adjust its current action, the method further includes: if the first object does not adjust its current action in response to the voice information, playing, through the wearable device, ultrasonic waves at a preset frequency to cause auditory discomfort to the first object so that it adjusts its current action.
In an exemplary embodiment, after determining the position information of the first object in the home area through the wearable device of the first object, the method further includes: when a plurality of protection areas are configured for the first object and the first object is determined, according to the position information, to be located in a second protection area, determining whether the current action of the first object is consistent with the preset action; and, if consistent, playing, through the wearable device, ultrasonic waves at a preset frequency to cause auditory discomfort to the first object so that it adjusts its current action.
In an exemplary embodiment, after playing the ultrasonic waves at the preset frequency through the wearable device, the method further includes: if the first object does not adjust its current action in response to the ultrasonic waves at the preset frequency, playing the voice information of the second object through the wearable device to again remind the first object to adjust its current action.
In one exemplary embodiment, determining, by a wearable device of a first object, location information of the first object in a home area includes: capturing image information around the first object through a built-in camera of the wearable device; and performing image recognition on the image information to determine the position information of the first object in the home area.
In one exemplary embodiment, before determining, by a wearable device of a first object, location information of the first object in a home area, the method further comprises: receiving a selection operation of the second object; in response to the selection operation, determining a target protection mode currently adopted by the wearable device in a plurality of protection modes corresponding to the wearable device; wherein each of the plurality of protection modes comprises at least one of: a preset action, the voice information of the second object, and the ultrasonic wave of the preset frequency.
In one exemplary embodiment, determining whether the current action of the first object is consistent with a preset action includes: determining one or more devices with an action recognition function in the home area; when the first object is located in the first protection area, determining, from the one or more devices, a first device closest to the current position of the first object, and controlling the first device to perform action recognition on the first object to obtain the current action of the first object; and determining whether the current action of the first object is consistent with the preset action.
According to another embodiment of the present invention, there is also provided a motion adjustment apparatus including: the first determining module is used for determining the position information of a first object in a home area through a wearable device of the first object; the second determining module is used for determining whether the current action of the first object is consistent with a preset action or not under the condition that the first object is determined to be located in a first protection area in the home area according to the position information; and the playing module is used for playing the voice information of the second object through the wearable device under the consistent condition so as to remind the first object to adjust the current action of the first object.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned action adjustment method when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the above-mentioned action adjustment method through the computer program.
In the embodiments of the application, the wearable device of the first object determines the position information of the first object in the home area; whether the first object is located in a first protection area of the home area is determined according to the position information; if the first object is located in the first protection area, whether its current action is consistent with the preset action is determined; and, if consistent, the voice information of the second object is played through the wearable device to remind the first object to adjust its current action. This technical solution solves the problems in the prior art that managing a pet's behavior greatly restricts the pet's activity and may even harm its physical and mental health.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a diagram of a hardware environment of a method for adjusting actions according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative method of action adjustment according to an embodiment of the present invention;
FIG. 3 is a diagram of product application of an alternative pet status reminder approach in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of an alternative motion adjustment method according to an embodiment of the invention;
fig. 5 is a block diagram of an alternative motion adjustment apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present application, an action adjustment method is provided. The action adjustment method is widely applicable to whole-house intelligent digital control scenarios such as the smart home (Smart Home), smart home device ecosystems, and intelligent house (Intelligent House) ecosystems. Optionally, in this embodiment, the action adjustment method may be applied to a hardware environment composed of the smart collar 102, the terminal device 104, and the server 106 shown in fig. 1. As shown in fig. 1, the server 106 is connected to the terminal device 104 through a network and may be configured to provide services (for example, application services) for the terminal or for a client installed on the terminal; a database may be set on the server or independently of the server to provide data storage services for the server 106; and a cloud computing and/or edge computing service may be configured on the server or independently of the server to provide data computing services for the server 106.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network. The wireless network may include, but is not limited to, at least one of: WIFI (Wireless Fidelity), Bluetooth. The terminal device 104 may be, but is not limited to, a PC, a mobile phone, a tablet computer, a smart air conditioner, a smart range hood, a smart refrigerator, a smart oven, a smart stove, a smart washing machine, a smart water heater, a smart washing device, a smart dishwasher, a smart projection device, a smart TV, a smart clothes hanger, a smart curtain, smart audio-visual equipment, a smart socket, a smart sound system, a smart speaker, a smart fresh-air device, smart kitchen and bathroom equipment, a smart bathroom device, a smart floor-sweeping robot, a smart window-cleaning robot, a smart mopping robot, a smart air purification device, a smart steamer, a smart microwave oven, a smart kitchen appliance, a smart purifier, a smart water dispenser, a smart door lock, and the like.
It should be noted that the technical solution of the embodiment of the present application can be applied to a wearable device, in particular to a collar of a pet, that is, the pet collar can be used to manage pet behavior without hurting the pet.
In this embodiment, a motion adjustment method is provided, which is applied to the wearable device, and fig. 2 is a flowchart of an alternative motion adjustment method according to an embodiment of the present invention, where the flowchart includes the following steps:
step S202, determining position information of a first object in a home area through wearing equipment of the first object;
the first object mainly refers to a pet, such as a cat, a dog, etc., but may be other objects besides a pet, which is not limited in this application.
It should be noted that the position information may be an image of an environment around the position of the first object, and may also be a location of the position of the first object, which is not limited in the present application.
Step S204, determining whether the current action of the first object is consistent with a preset action or not under the condition that the first object is determined to be located in a first protection area in the home area according to the position information;
it should be noted that the first protection area is not limited to one area, and the first protection area may include a plurality of areas, for example, the first protection area may be a sofa area of a living room, a bedroom area, a bookcase area of a study room, and the like.
It should be noted that the preset action may be preset by a system or set by a user, and the preset action may be to bite a sofa or push a vase, etc.
Step S206: if consistent, playing voice information of a second object through the wearable device to remind the first object to adjust its current action.
Through the above steps, the wearable device of the first object determines the position information of the first object in the home area; whether the first object is located in a first protection area of the home area is determined according to the position information; if the first object is located in the first protection area, whether its current action is consistent with the preset action is determined; and, if consistent, the wearable device plays the voice information of the second object to remind the first object to adjust its current action. This technical solution solves the problems in the prior art that managing a pet's behavior greatly restricts the pet's activity and may even harm its physical and mental health.
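By way of illustration only, the following minimal Python sketch shows one way steps S202 to S206 could be combined on the wearable device; the function name adjust_action, the play_voice callable, and the area and action labels are assumptions introduced for this example and are not part of the disclosure.

```python
def adjust_action(area: str,
                  current_action: str,
                  protection_areas: set,
                  preset_action: str,
                  play_voice) -> bool:
    """Steps S204-S206: play the second object's voice reminder when needed."""
    # S204: the first object is inside a first protection area and its current
    # action matches the preset action (e.g. biting the sofa)
    if area in protection_areas and current_action == preset_action:
        play_voice()  # S206: remind the pet with the owner's recorded voice
        return True
    return False

# usage: the collar's sensing subsystem supplies the area and action labels (S202)
adjust_action("sofa", "bite_sofa", {"sofa", "bookcase"}, "bite_sofa",
              lambda: print("playing owner's recording: 'A, do not bite the sofa'"))
```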
It should be noted that the voice message of the second object is used to remind the first object of the adjustment action, and optionally, the voice message of the second object may be a sound of the owner of the pet, or a sound of a natural enemy of the pet. Optionally, the voice information is a recording of the host, and the recording content is: a (pet name), do not bite the sofa. Alternatively, in the case where the pet is a cat, the voice message may be a cry of a natural enemy of the cat.
In one exemplary embodiment, before determining the position information of the first object in the home area through the wearable device of the first object, a selection operation of the second object is received; in response to the selection operation, determining a target protection mode currently adopted by the wearable device in a plurality of protection modes corresponding to the wearable device; wherein each of the plurality of protection modes comprises at least one of: a preset action, the voice information of the second object, and the ultrasonic wave of the preset frequency.
It should be noted that "receiving the selection operation of the second object" may be understood as receiving an operation by which the target object selects the target protection mode from a plurality of modes. The plurality of modes may be divided according to the type of the first object; for example, they may include a dog mode, a cat mode, and the like. Different modes correspond to different preset actions, different voice information, and ultrasonic waves at different preset frequencies. For example, the preset action corresponding to the dog mode is biting the sofa, the voice information is "B (dog name), do not bite the sofa", and the preset ultrasonic frequency is 30 kHz; the preset action corresponding to the cat mode is scratching the sofa, the voice information is "B (cat name), do not scratch the sofa", and the preset ultrasonic frequency is 20 kHz.
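A minimal sketch of how such protection modes could be represented is given below, using the dog/cat example above; the class and field names and the clip paths are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtectionMode:
    preset_action: str   # action watched for in the protection area
    voice_clip: str      # recording of the second object (the owner)
    ultrasound_hz: int   # preset ultrasonic frequency for this pet type

# mode table following the dog/cat example above (names and paths are illustrative)
PROTECTION_MODES = {
    "dog": ProtectionMode("bite_sofa", "owner_dog_reminder.wav", 30_000),
    "cat": ProtectionMode("scratch_sofa", "owner_cat_reminder.wav", 20_000),
}

def select_mode(selection: str) -> ProtectionMode:
    """Resolve the owner's selection operation to the target protection mode."""
    return PROTECTION_MODES[selection]
```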
Optionally, in this embodiment, after determining the target protection mode currently adopted by the wearable device from the plurality of protection modes corresponding to the wearable device, the method further includes: receiving image information of the first protection area uploaded by the first object, to determine the characteristics of the first protection area. For example, a video of the first protection area (a sofa) uploaded by the pet's owner is received, and the collar performs image recognition on the video to determine characteristics such as the color of the sofa, the structure of the sofa, and the objects placed on it.
The image information of the first protection area may be a picture of the first protection area, or may be a video of the first protection area.
In one exemplary embodiment, image information around the first object is captured through a built-in camera of the wearable device, and image recognition is performed on the image information to determine the position information of the first object in the home area.
Alternatively, in this embodiment, in addition to acquiring the image information around the first object through the built-in camera of the wearable device, the image information around the first object may also be acquired through a device with an image capturing function in the home area of the first object; for example, an image around the first object is captured by the camera-equipped air conditioner closest to the first object.
Optionally, in an exemplary embodiment, the method for determining the position information of the first object in the home area further includes: determining the position information of the first object in the home area through a positioning function of the wearable device of the first object. For example, a GPS (Global Positioning System) module built into a dog's collar can locate the dog in real time and determine that the dog is currently on the sofa.
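The two positioning routes described above (on-board camera plus image recognition, or a positioning fix matched against known areas) might be sketched as follows; the callables capture_frame, classify_area, and read_fix, and the rectangular area_bounds map, are placeholders assumed for this example.

```python
from typing import Callable, Optional

def locate_by_image(capture_frame: Callable[[], bytes],
                    classify_area: Callable[[bytes], Optional[str]]) -> Optional[str]:
    """Camera route: shoot the pet's surroundings and let an image classifier name the area."""
    return classify_area(capture_frame())

def locate_by_gps(read_fix: Callable[[], tuple],
                  area_bounds: dict) -> Optional[str]:
    """Positioning route: match the collar's (lat, lon) fix against known area bounding boxes."""
    lat, lon = read_fix()
    for name, (lat_min, lat_max, lon_min, lon_max) in area_bounds.items():
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return name
    return None
```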
In one exemplary embodiment, one or more devices with an action recognition function in the home area are determined; when the first object is located in the first protection area, a first device closest to the current position of the first object is determined from the one or more devices, and the first device is controlled to perform action recognition on the first object to obtain its current action; and whether the current action of the first object is consistent with the preset action is determined.
It should be noted that, when the wearable device of the first object has an action recognition function, the one or more devices with an action recognition function in the home area may include the wearable device; a device with an action recognition function may also be, for example, a refrigerator equipped with a camera and an image recognition function.
Optionally, in this embodiment, in addition to the above method of determining the first device from the one or more devices, the first device may also be determined as follows: sending a message to the mobile terminal of the second object, the message being used by the second object to select the first device from the one or more devices with an action recognition function (the message may be, for example: "Please select the device for recognizing the actions of C (pet name)"); or the second object presets the first device that performs action recognition on the first object, and the first device is selected according to that preset. For example, if the pet's owner sets the pet's collar as the device that performs action recognition on the pet, the collar is selected as the first device.
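A minimal sketch of this device-selection step is shown below; the device names, coordinates, and the pick_recognition_device function are assumptions for illustration only.

```python
import math
from typing import Optional

def pick_recognition_device(devices: dict,              # name -> (x, y) position in the home area
                            pet_position: tuple,
                            preset_device: Optional[str] = None) -> str:
    """Choose the device that will perform action recognition on the pet."""
    if preset_device in devices:   # the owner preset a device (e.g. the collar itself)
        return preset_device
    # otherwise pick the action-recognition-capable device closest to the pet
    return min(devices, key=lambda name: math.dist(devices[name], pet_position))

# usage: the collar and a camera-equipped refrigerator both support action recognition
print(pick_recognition_device({"collar": (1.0, 2.0), "refrigerator": (4.0, 6.0)}, (1.2, 2.1)))
```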
In one exemplary embodiment, if the first object does not adjust its current action in response to the voice information, ultrasonic waves at a preset frequency are played through the wearable device to cause auditory discomfort to the first object so that it adjusts its current action.
It should be noted that this embodiment can be understood as follows: after the voice information of the second object is played, if the first object does not adjust its current action, the reminder mode is switched, and the first object is reminded to adjust its current action by playing ultrasonic waves.
Optionally, in this embodiment, whether the first object adjusts its current action in response to the voice information is determined within a preset time period, and if it does not, the wearable device plays the ultrasonic waves at the preset frequency. For example, if the preset time is 1 minute and, after the owner's voice has been played for 1 minute, the pet still has not stopped biting the sofa, the collar starts playing the ultrasonic waves.
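The escalation just described (voice first, then ultrasound after a preset waiting period) could be realized roughly as follows; the callables and the 60-second default are assumptions mirroring the 1-minute example above.

```python
import time

def remind_with_escalation(action_persists,     # callable: True while the pet keeps the action
                           play_voice,          # callable: play the second object's recording
                           play_ultrasound,     # callable: emit ultrasound at the preset frequency
                           wait_s: float = 60.0,
                           poll_s: float = 5.0) -> None:
    """Voice first; escalate to ultrasound if the action is not adjusted within wait_s seconds."""
    play_voice()
    deadline = time.monotonic() + wait_s
    while time.monotonic() < deadline:
        if not action_persists():   # the pet adjusted its current action in time
            return
        time.sleep(poll_s)
    play_ultrasound()               # switch the reminder mode
```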
In one exemplary embodiment, after the position information of the first object in the home area is determined through the wearable device of the first object, when a plurality of protection areas are configured for the first object and the first object is determined, according to the position information, to be located in a second protection area, whether the current action of the first object is consistent with the preset action is determined; if consistent, ultrasonic waves at a preset frequency are played through the wearable device to cause auditory discomfort to the first object so that it adjusts its current action.
It should be noted that the plurality of protection areas may include a first protection area, a second protection area, a third protection area, and so on. The first and second protection areas are listed in this application, but the application does not limit the plurality of protection areas to only these two, and different management methods for the first object may be set for different protection areas.
Optionally, in this embodiment, the method for dividing the first protection area and the second protection area includes, but is not limited to: dividing the protection areas by degree of potential damage, for example, setting the area where the most easily damaged articles (such as porcelain and glassware) are located as the first protection area and a less easily damaged area (such as the living-room table) as the second protection area; or continuously capturing behavior images of the first object, determining from these images how frequently the target object appears in the different protection areas, ranking the areas by frequency, and determining the first and second protection areas according to the ranking. For example, images of the first object's surroundings are continuously captured through the pet's collar, and it is determined that the pet often stays on the sofa and the bed but rarely goes to the bookcase and wardrobe areas; the sofa and bed are then taken as the first protection area, and the bookcase and wardrobe areas as the second protection area.
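The frequency-based division described above might look like the following sketch; the area labels, the observation list, and the first_count threshold are illustrative assumptions.

```python
from collections import Counter

def split_protection_areas(observed_areas: list, first_count: int = 2):
    """Rank candidate areas by how often the pet was observed there and split them."""
    ranked = [area for area, _ in Counter(observed_areas).most_common()]
    # most-visited areas become first protection areas, the rest second protection areas
    return set(ranked[:first_count]), set(ranked[first_count:])

# usage: frames from the collar camera were labelled with the area the pet was in
first, second = split_protection_areas(
    ["sofa", "sofa", "bed", "sofa", "bed", "bookcase", "wardrobe"])
print(first)   # sofa and bed       -> first protection areas (voice reminder first)
print(second)  # bookcase, wardrobe -> second protection areas (ultrasound first)
```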
In this embodiment, dividing the first protection area and the second protection area enables different management of different protection areas; that is, in the first protection area the pet's behavior is managed by first playing the voice of the second object, while in the second protection area it is managed by first playing the ultrasonic waves.
In an exemplary embodiment, if the first object does not adjust its current action in response to the ultrasonic waves at the preset frequency, the voice information of the second object is played through the wearable device to again remind the first object to adjust its current action.
It should be noted that this embodiment can be understood as follows: after the ultrasonic waves are played, if the first object does not adjust its action, the reminder mode is switched and the voice information of the second object is played to remind the first object to adjust its action.
Optionally, in this embodiment, after the wearable device plays the voice information of the second object, the method for determining when the wearable device stops playing the voice information includes, but is not limited to, the following. Before the preset playback termination time of the wearable device is reached, the action information and position information of the first object are recognized, and playback of the voice information of the second object is stopped if the current action of the first object is no longer consistent with the preset action and the position of the first object is not within any of the protection areas. When the preset playback termination time is reached, playback is stopped and, if the current action of the first object is still consistent with the preset action or the position of the first object is still within one of the protection areas, a message is sent to the second object, or to a mobile terminal device held by the second object, to report the state of the first object. For example: the playback time preset in the pet's collar is 5 minutes, the protection area is the sofa, and the pet is biting the sofa. If, before the voice has been played for 5 minutes, the pet leaves the sofa and no longer bites it, playback of the voice information is stopped. If, after 5 minutes of playback, the pet is still biting the sofa or has not left the sofa area, a message is sent to the owner's mobile phone. As shown in fig. 3, which is a product application diagram of an optional pet status reminder according to an embodiment of the invention, a reminder is sent to the pet's owner indicating that the pet is biting the sofa, and the owner is asked to confirm whether to continue playing the voice information.
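A minimal sketch of this stop-and-notify logic, assuming placeholder callables for sensing, playback control, and owner notification and a 5-minute default limit taken from the example above, is given below.

```python
import time

def manage_voice_playback(get_action, get_area,        # callables: current action / current area
                          protection_areas: set,
                          preset_action: str,
                          stop_voice, notify_owner,    # callables: stop playback / message the owner's terminal
                          limit_s: float = 300.0,
                          poll_s: float = 10.0) -> None:
    """Decide when the wearable device stops playing the second object's voice information."""
    deadline = time.monotonic() + limit_s
    while time.monotonic() < deadline:
        if get_action() != preset_action and get_area() not in protection_areas:
            stop_voice()            # the pet stopped the action and left the protection areas
            return
        time.sleep(poll_s)
    stop_voice()                    # preset playback termination time reached
    if get_action() == preset_action or get_area() in protection_areas:
        notify_owner("The pet is still performing the preset action; continue playing the voice?")
```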
In order to better understand the process of the above-mentioned action adjustment method, the following describes a flow of the implementation method of the above-mentioned action adjustment with reference to an optional embodiment, but the invention is not limited to the technical solution of the embodiment of the present invention.
In this embodiment, an action adjustment method is provided. Fig. 4 is a schematic diagram of an alternative action adjustment method according to an embodiment of the present invention; as shown in fig. 4, the method specifically includes the following steps:
Step S402: receiving a protection mode selected by a user (corresponding to a second object);
Step S404: receiving a protection area set by the user;
Step S406: receiving a voice set by the user;
Step S408: recognizing, through image recognition, whether the pet (corresponding to the first object) is in the first protection area or the second protection area; if it is in either, continuing to step S410;
Step S410: identifying, through image recognition, whether the current action of the pet is consistent with the preset action; executing step S412 if the pet is located in the first protection area and its current action is consistent with the preset action, and executing step S414 if the pet is located in the second protection area and its current action is consistent with the preset action;
Step S412: playing the voice of the user (equivalent to the voice information); if the pet does not adjust its current action after the voice is played, executing step S414;
Step S414: playing ultrasonic waves at the preset frequency; if, after the ultrasonic waves are played, the pet in the second protection area does not adjust its action, executing step S412.
Through the above steps, the problems in the prior art that managing a pet's behavior greatly restricts the pet's activity and may even harm its physical and mental health are solved. Image recognition technology is currently used in more and more types of devices, and it has also begun to be applied in pet collars for understanding and recognizing 3D images. At present, however, image recognition in pet collars is mainly used to collect images of the pet's surroundings to help the owner find a lost pet, and is not used for behavior management. This scheme uses image recognition and voice technology to keep the pet away from specific areas or articles and to guide it to stop specific behaviors, so as to protect the articles and manage the pet. Voice and sound waves are used to guide the pet and divert its attention without harming it, and the sound-wave range can be customized for different pets, so that the scheme can be adapted to the behavior management of different types of pets.
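As an illustration of the flow of steps S402 to S414 described above, the following Python sketch ties the two protection areas and the two reminder modes together; the sensing and playback callables are placeholders, and the settings correspond to the user inputs received in steps S402 to S406.

```python
def run_protection_loop(get_area, get_action,            # sensing callables (image recognition)
                        play_voice, play_ultrasound,     # reminder callables
                        first_areas: set, second_areas: set,
                        preset_action: str,
                        wait_for_adjustment) -> None:    # callable: True if the pet adjusted in time
    """One pass over steps S408-S414 for the settings received in steps S402-S406."""
    area = get_area()                                    # S408
    if area not in first_areas and area not in second_areas:
        return
    if get_action() != preset_action:                    # S410
        return
    if area in first_areas:                              # S412, with S414 as the fallback
        play_voice()
        if not wait_for_adjustment():
            play_ultrasound()
    else:                                                # S414, with S412 as the fallback
        play_ultrasound()
        if not wait_for_adjustment():
            play_voice()
```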
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a motion adjustment device is further provided, and the motion adjustment device is used to implement the above embodiments and preferred embodiments, which have already been described and will not be described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
FIG. 5 is a block diagram of an alternative motion adjustment apparatus according to an embodiment of the present invention; as shown in fig. 5, the apparatus includes:
a first determining module 52, configured to determine, by a wearable device of a first object, location information of the first object in a home area;
the first object mainly refers to a pet, such as a cat, a dog, etc., but may be other objects besides a pet, which is not limited in this application.
It should be noted that the position information may be an image of an environment around the position of the first object, or may be a location of the position of the first object, and the like, which is not limited in the present application.
A second determining module 54, configured to determine whether a current action of the first object is consistent with a preset action when it is determined that the first object is located in a first protection area within the home area according to the location information;
it should be noted that the first protection area is not limited to one area, and the first protection area may include a plurality of areas, for example, the first protection area may be a sofa area of a living room, a bedroom area, a bookcase area of a study room, and the like.
It should be noted that the preset action may be preset by a system or set by a user, and the preset action may be to bite a sofa or push a vase, etc.
And the playing module 56 is configured to play the voice message of the second object through the wearable device under the consistent condition so as to remind the first object to adjust the current action of the first object.
It should be noted that the voice message of the second object is used to remind the first object of the adjustment action, and optionally, the voice message of the second object may be a sound of the owner of the pet, or a sound of a natural enemy of the pet. Optionally, the voice information is a recording of the host, and the recording content is: a (pet name), do not bite the sofa. Alternatively, in the case where the pet is a cat, the voice message may be a cry of a natural enemy of the cat.
Through the above apparatus, the wearable device of the first object determines the position information of the first object in the home area; whether the first object is located in a first protection area of the home area is determined according to the position information; if the first object is located in the first protection area, whether its current action is consistent with the preset action is determined; and, if consistent, the wearable device plays the voice information of the second object to remind the first object to adjust its current action. This apparatus solves the problems in the prior art that managing a pet's behavior greatly restricts the pet's activity and may even harm its physical and mental health.
In an exemplary embodiment, the first determining module 52 is further configured to receive a selection operation of the second object before determining, by the wearable device of the first object, the location information of the first object in the home area; in response to the selection operation, determining a target protection mode currently adopted by the wearable device in a plurality of protection modes corresponding to the wearable device; wherein each of the plurality of protection modes comprises at least one of: a preset action, the voice information of the second object, and the ultrasonic wave of the preset frequency.
It should be noted that "receiving the selection operation of the second object" may be understood as receiving an operation by which the target object selects a target protection mode from a plurality of modes. The plurality of modes may be divided according to the type of the first object; for example, they may include a dog mode, a cat mode, and the like. Different modes correspond to different preset actions, different voice information, and ultrasonic waves at different preset frequencies. For example, the preset action corresponding to the dog mode is biting the sofa, the voice information is "B (dog name), do not bite the sofa", and the preset ultrasonic frequency is 30 kHz; the preset action corresponding to the cat mode is scratching the sofa, the voice information is "B (cat name), do not scratch the sofa", and the preset ultrasonic frequency is 20 kHz.
Optionally, in this embodiment, the first determining module 52 is further configured to receive image information of the first protection area uploaded by the first object after determining a target protection mode currently adopted by the wearable device in a plurality of protection modes corresponding to the wearable device, so as to determine location information of the first protection area.
The image information of the first protection area may be a picture of the first protection area, or may be a video of the first protection area.
In an exemplary embodiment, the first determining module 52 is further configured to capture image information around the first object through a built-in camera of the wearable device; performing image recognition on the image information to determine position information of the first object in the home area.
Optionally, in this embodiment, the first determining module 52 is further configured to acquire the image information around the first object not only through the built-in camera of the wearable device but also through a device with an image capturing function in the home area of the first object, for example, through the camera-equipped air conditioner closest to the first object.
Optionally, in an exemplary embodiment, the first determining module 52 is further configured to determine the location information of the first object in the home area by: the position information of the first object in the home area is determined by a positioning function of the wearable device of the first object. For example, a GPS (Global Positioning System) Positioning System built in the collar of the dog can position the real-time position of the dog and determine that the dog is currently on the sofa.
In an exemplary embodiment, the second determining module 54 is further configured to determine one or more devices with motion recognition function in the home region; under the condition that the first object is located in the first protection area, determining a first device closest to the current position of the first object from the one or more devices, and controlling the first device to perform motion recognition on the first object to obtain the current motion of the first object; and determining whether the current action of the first object is consistent with the preset action.
In the case where the wearable device of the first object has a motion recognition function, one or more devices having the motion recognition function in the home area may include the wearable device, and the device having the motion recognition function may be a refrigerator or the like having a camera and an image recognition function.
Optionally, in this embodiment, the second determining module 54 is further configured to determine the first device from the one or more devices as follows: sending a message to the mobile terminal of the second object, the message being used by the second object to select the first device from the one or more devices with an action recognition function (the message may be, for example: "Please select the device for recognizing the actions of C (pet name)"); or the second object presets the first device that performs action recognition on the first object, and the first device is selected according to that preset. For example, if the pet's owner sets the pet's collar as the device that performs action recognition on the pet, the collar is selected as the first device.
In an exemplary embodiment, the playing module 56 is further configured to play, through the wearable device, ultrasonic waves at a preset frequency to cause auditory discomfort to the first object so that it adjusts its current action, if the first object does not adjust its current action in response to the voice information. This can be understood as follows: after the voice information of the second object is played, if the first object does not adjust its action, the reminder mode is switched and the first object is reminded to adjust its action by playing ultrasonic waves.
Optionally, in this embodiment, the playing module 56 is further configured to determine, within a preset time period, whether the first object adjusts its current action in response to the voice information, and to play the ultrasonic waves at the preset frequency through the wearable device if it does not. For example, if the preset time is 1 minute and, after the owner's voice has been played for 1 minute, the pet still has not stopped biting the sofa, the collar starts playing the ultrasonic waves.
In an exemplary embodiment, the playing module 56 is further configured to: after the wearable device of the first object determines the position information of the first object in the home area, when a plurality of protection areas are configured for the first object and the first object is determined, according to the position information, to be located in a second protection area, determine whether the current action of the first object is consistent with the preset action; and, if consistent, play, through the wearable device, ultrasonic waves at a preset frequency to cause auditory discomfort to the first object so that it adjusts its current action.
The plurality of protection regions may include a first protection region, a second protection region, a third protection region, and the like, and the first protection region and the second protection region are listed in this application, but the application is not limited to the plurality of protection regions including only the first protection region and the second protection region.
Optionally, in this embodiment, the apparatus further includes a classification module for dividing the first protection area and the second protection area in ways including, but not limited to: dividing the protection areas by degree of potential damage, for example, setting the area where the most easily damaged articles (such as porcelain and glassware) are located as the first protection area and a less easily damaged area (such as the living-room table) as the second protection area; or continuously capturing behavior images of the first object, determining from these images how frequently the target object appears in the different protection areas, ranking the areas by frequency, and determining the first and second protection areas according to the ranking. For example, images of the first object's surroundings are continuously captured through the pet's collar, and it is determined that the pet often stays on the sofa and the bed but rarely goes to the bookcase and wardrobe areas; the sofa and bed are then taken as the first protection area, and the bookcase and wardrobe areas as the second protection area.
In this embodiment, dividing the first protection area and the second protection area enables different management of different protection areas; that is, in the first protection area the pet's behavior is managed by first playing the voice of the second object, while in the second protection area it is managed by first playing the ultrasonic waves.
In an exemplary embodiment, the playing module 56 is further configured to, in a case that the first object does not adjust the current action of the first object according to the ultrasonic waves in the preset frequency, play the voice message of the second object through the wearable device to remind the first object to adjust the current action of the first object again.
It should be noted that, the above embodiments can be understood as follows: after the ultrasonic wave is played, the first object does not adjust the current action of the first object, the reminding mode is switched, and the voice message of the second object is played to remind the first object to adjust the current action of the first object.
Optionally, in this embodiment, the apparatus further includes a determining module for determining, after the voice information of the second object has been played through the wearable device, when the wearable device stops playing the voice information, in the following way. Before the preset playback termination time of the wearable device is reached, the action information and position information of the first object are recognized, and playback of the voice information of the second object is stopped if the current action of the first object is no longer consistent with the preset action and the position of the first object is not within any of the protection areas. When the preset playback termination time is reached, playback is stopped and, if the current action of the first object is still consistent with the preset action or the position of the first object is still within one of the protection areas, a message is sent to the second object, or to a mobile terminal device held by the second object, to report the state of the first object. For example: the playback time preset in the pet's collar is 5 minutes, the protection area is the sofa, and the pet is biting the sofa. If, before the voice has been played for 5 minutes, the pet leaves the sofa and no longer bites it, playback of the voice information is stopped. If, after 5 minutes of playback, the pet is still biting the sofa or has not left the sofa area, a message is sent to the owner's mobile phone. As shown in fig. 3, which is a product application diagram of an optional pet status reminder according to an embodiment of the invention, a reminder is sent to the pet's owner indicating that the pet is biting the sofa, and the owner is asked to confirm whether to continue playing the voice information.
An embodiment of the present invention further provides a storage medium including a stored program, wherein the program executes any one of the methods described above.
Alternatively, in the present embodiment, the storage medium may be configured to store program codes for performing the following steps:
S1, determining the position information of the first object in the home area through the wearable device of the first object;
S2, determining whether the current action of the first object is consistent with a preset action or not under the condition that the first object is determined to be located in a first protection area in the home area according to the position information;
and S3, playing voice information of a second object through the wearable device under the condition of consistency so as to remind the first object to adjust the current action of the first object.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, determining the position information of the first object in the home area through the wearable device of the first object;
S2, determining whether the current action of the first object is consistent with a preset action or not under the condition that the first object is determined to be located in a first protection area in the home area according to the position information;
and S3, playing voice information of a second object through the wearable device under the condition of consistency so as to remind the first object to adjust the current action of the first object.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An action adjustment method, comprising:
determining, by a wearable device of a first object, location information of the first object in a home area;
determining whether the current action of the first object is consistent with a preset action under the condition that the first object is determined to be located in a first protection area in the home area according to the position information;
and under the condition of consistency, playing voice information of a second object through the wearable device to remind the first object to adjust the current action of the first object.
2. The action adjustment method according to claim 1, wherein, after playing, in the case of consistency, the voice information of the second object through the wearable device to remind the first object to adjust the current action of the first object, the method further comprises:
in a case where the first object does not adjust the current action of the first object according to the voice information, playing, by the wearable device, ultrasonic waves within a preset frequency to cause auditory discomfort to the first object so that the first object adjusts the current action of the first object.
3. The action adjustment method according to claim 1, wherein, after determining the position information of the first object in the home area by the wearable device of the first object, the method further comprises:
in a case where a plurality of protection areas are set for the first object and the first object is determined, according to the position information, to be located in a second protection area, determining whether the current action of the first object is consistent with the preset action;
in the case of consistency, playing ultrasonic waves at a preset frequency through the wearable device to cause auditory discomfort to the first object, so that the first object adjusts the current action of the first object.
4. The action adjustment method according to claim 3, wherein, after playing the ultrasonic waves at the preset frequency through the wearable device in the case of consistency to cause auditory discomfort to the first object so that the first object adjusts the current action of the first object, the method further comprises:
in a case where the first object does not adjust the current action of the first object according to the ultrasonic waves at the preset frequency, playing the voice information of the second object through the wearable device to remind the first object again to adjust the current action of the first object.
5. The action adjustment method according to claim 1, wherein determining, by a wearable device of a first object, position information of the first object in a home area includes:
capturing image information of the surroundings of the first object through a built-in camera of the wearable device;
performing image recognition on the image information to determine the position information of the first object in the home area.
6. The action adjustment method according to claim 2, wherein, before determining the position information of the first object in the home area by the wearable device of the first object, the method further comprises:
receiving a selection operation of the second object;
in response to the selection operation, determining a target protection mode currently adopted by the wearable device from a plurality of protection modes corresponding to the wearable device; wherein each of the plurality of protection modes comprises at least one of: a preset action, the voice information of the second object, and the ultrasonic waves at the preset frequency.
7. The action adjustment method according to claim 1, wherein determining whether the current action of the first object is consistent with the preset action comprises:
determining one or more devices with an action recognition function in the home area;
in a case where the first object is located in the first protection area, determining, from the one or more devices, a first device closest to a current position of the first object, and controlling the first device to perform action recognition on the first object to obtain the current action of the first object;
and determining whether the current action of the first object is consistent with the preset action.
8. An action adjustment device, comprising:
the first determining module is used for determining the position information of a first object in a home area through a wearable device of the first object;
the second determining module is used for determining whether the current action of the first object is consistent with a preset action or not under the condition that the first object is determined to be located in a first protection area in the home area according to the position information;
and the playing module is used for playing the voice information of the second object through the wearable device under the consistent condition so as to remind the first object to adjust the current action of the first object.
9. A computer-readable storage medium, comprising a stored program, wherein the program when executed performs the method of any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 7 by means of the computer program.
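By way of a non-limiting illustration of claims 2 to 4 and claim 7, the following Python sketch shows one possible ordering of the voice and ultrasonic prompts and one possible way to select the nearest device with an action recognition function. The names RecognitionDevice, nearest_device and remind, as well as the callback parameters, are hypothetical assumptions made for this sketch and do not represent the claimed implementation.

```python
# Illustrative sketch only: all names, parameters, and callbacks are hypothetical.
import math
from dataclasses import dataclass
from typing import Callable, Iterable, Optional


@dataclass
class RecognitionDevice:
    """A household device assumed to offer an action recognition function."""
    name: str
    x: float
    y: float


def nearest_device(devices: Iterable[RecognitionDevice],
                   px: float, py: float) -> Optional[RecognitionDevice]:
    """Claim 7 sketch: pick the device closest to the first object's current position."""
    return min(devices, key=lambda d: math.hypot(d.x - px, d.y - py), default=None)


def remind(protection_area: str,
           play_voice: Callable[[], None],
           play_ultrasound: Callable[[], None],
           action_adjusted: Callable[[], bool]) -> None:
    """Claims 1-4 sketch: voice first in the first protection area, ultrasound first
    in the second protection area, escalating only if the action is not adjusted."""
    first, second = ((play_voice, play_ultrasound) if protection_area == "first"
                     else (play_ultrasound, play_voice))
    first()
    if not action_adjusted():
        second()


if __name__ == "__main__":
    devices = [RecognitionDevice("living_room_camera", 1.0, 1.0),
               RecognitionDevice("kitchen_camera", 5.0, 2.0)]
    chosen = nearest_device(devices, 1.5, 1.2)
    print(f"selected device: {chosen.name if chosen else None}")
    remind("first",
           play_voice=lambda: print("playing the second object's voice"),
           play_ultrasound=lambda: print("playing preset-frequency ultrasound"),
           action_adjusted=lambda: False)
```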
CN202210303314.3A 2022-03-25 2022-03-25 Motion adjustment method and device, storage medium, and electronic device Pending CN114859739A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210303314.3A CN114859739A (en) 2022-03-25 2022-03-25 Motion adjustment method and device, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210303314.3A CN114859739A (en) 2022-03-25 2022-03-25 Motion adjustment method and device, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
CN114859739A true CN114859739A (en) 2022-08-05

Family

ID=82628617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210303314.3A Pending CN114859739A (en) 2022-03-25 2022-03-25 Motion adjustment method and device, storage medium, and electronic device

Country Status (1)

Country Link
CN (1) CN114859739A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130169441A1 (en) * 2011-12-28 2013-07-04 Jason Wilson System for repelling a pet from a predetermined area
US20150075446A1 (en) * 2013-09-13 2015-03-19 Jun Hu Pet training system
CN106453622A (en) * 2016-11-21 2017-02-22 深圳市沃特沃德股份有限公司 Pet disease information pushing method, device and system
CN106534477A (en) * 2016-08-30 2017-03-22 深圳市沃特沃德股份有限公司 Method, device and system for managing living habits of pet
CN107372168A (en) * 2017-07-26 2017-11-24 桂林电子科技大学 A kind of pet accessory system based on positional information
CN107960341A (en) * 2017-11-28 2018-04-27 北京小米移动软件有限公司 The method and device for correcting of pet behavior
CN110443976A (en) * 2019-08-14 2019-11-12 深圳市沃特沃德股份有限公司 Safety prompt function method, apparatus and storage medium based on safety cap
CN114019868A (en) * 2021-11-04 2022-02-08 深圳市智宠科技有限公司 Intelligent dog training device electrostatic pulse gear regulation and control system adopting Bluetooth frequency band

Similar Documents

Publication Publication Date Title
JP6923695B2 (en) Electronic devices, electronic device systems, and device control methods
CN105682011B (en) Bluetooth module control method, device and the audio-video frequency playing system of playback equipment
EP3115905A1 (en) Information processing apparatus, information processing method, and program
CN110535735B (en) Internet of things equipment multimedia stream management method and device
US20140095635A1 (en) Operation-assisting apparatus, operation-assisting method, and recording medium containing control program
CN106648524A (en) Audio playing method and audio playing equipment
US20170196195A1 (en) Pet mat
CN107153948A (en) Reminding method and device, storage equipment, mobile terminal and electric appliance
CN109287511B (en) Method and device for training pet control equipment and wearable equipment for pet
CN113375316B (en) Control method and device for air conditioner and air conditioner
CN111650842A (en) Household appliance control method and device
CN113111199B (en) Method and device for continuing playing of multimedia resource, storage medium and electronic device
CN105975079A (en) Information processing method and device for air conditioner
CN110874061A (en) Intelligent household working method and device
CN106375809B (en) Volume adjusting method and device and storage medium
CN114859739A (en) Motion adjustment method and device, storage medium, and electronic device
CN113589699A (en) Intelligent household scene control method and device
JP6765083B2 (en) Action goal achievement support system, action goal achievement support method and action goal achievement support processing program
CN107094641B (en) Pet performance reward and punishment feeding method and system
CN113739373B (en) Control method, server and multimedia air conditioning device
CN113728941B (en) Intelligent pet dog domestication method and system
CN115473755A (en) Control method and device of intelligent equipment based on digital twins
JPWO2019087854A1 (en) Cleanup support system, cleanup support method and program
CN105828135B (en) Control method for playing back, device and playback equipment in audio-video frequency playing system
CN109407843A (en) Method and device for controlling multimedia playing, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination