WO2023221233A1 - Interactive mirror device, system and method - Google Patents
Interactive mirror device, system and method
- Publication number: WO2023221233A1 (PCT/CN2022/100664)
- Authority: WO (WIPO PCT)
- Prior art keywords: user, page, user interface, course, display
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
- A63B71/0619—Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B24/00—Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
- A63B71/0619—Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
- A63B71/0622—Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
Definitions
- the present disclosure relates to the fields of smart home and smart fitness, and specifically to interactive mirror devices, systems and methods.
- The smart fitness mirror is a new type of fitness product that integrates artificial intelligence with hardware and content services.
- For example, Chinese patent application CN202110946212.9, titled Reflective Video Display Device for Interactive Training and Demonstration and Method of Use Thereof, discloses a fitness mirror and a method of using the fitness mirror for training.
- Smart fitness mirrors can be used as ordinary mirrors while also displaying high-quality coaching images.
- the existing smart fitness mirror interaction methods are troublesome. They generally need to be controlled through an external smart device (such as a smartphone) or a touch screen. When controlled through an external smart device, the operation is more complicated, and the smart device and the smart fitness mirror must also be kept connected. When using touch control, it is easy to leave fingerprints and sweat on the mirror surface, affecting the display effect of the screen.
- current fitness mirrors are large and take up considerable home space. In addition, current smart fitness mirrors can only display fitness content and fitness-related data, and are therefore used infrequently in daily life.
- One purpose of the present disclosure is to provide an interactive mirror device, system and method that add a personalized interface and a new interaction method to the smart fitness mirror, optimize the interaction mode between the smart fitness mirror and the user, and increase the frequency of use of the smart fitness mirror in daily life.
- a reflective video display device (also referred to herein as a "smart fitness mirror” and “interactive fitness system”) configured to display video content, such as a pre-recorded or live-streamed workout led by a trainer, to a user and provide an interface that allows users to interact and personalize video content.
- the smart fitness mirror may be a networked device communicatively coupled to a content provider (eg, server, cloud service) and/or a smart device (eg, smartphone, tablet, computer).
- the smart fitness mirror may include a display panel and speakers to output video content and audio to the user.
- the smart fitness mirror can also include cameras and microphones to capture video and audio of the user during exercise. Therefore, this smart fitness mirror enables two-way communication between the user and the coach during exercise. In this way, the smart fitness mirror could provide users with a convenient option to receive guided workouts while enabling a greater degree of personalization and personal guidance similar to workouts provided by a personal trainer or coach at a regular gym.
- An example of a smart fitness mirror includes a communication interface for receiving a video image of a fitness instructor, a display operably coupled to the communication interface to display the video image of the fitness instructor, and a mirror disposed in front of the display to reflect an image of a person opposite the display.
- the mirror has a partially reflective portion to transmit the fitness instructor's video image to a person opposite the display such that the fitness instructor's video image appears superimposed on a portion of the person's image.
- An example of an interactive fitness method includes the following operations: (1) streaming fitness content to an interactive video system including a mirror having a partially reflective portion and a display disposed on a side of the partially reflective portion; (2) displaying the fitness content to the user via the display and the partially reflective portion of the mirror; and (3) utilizing the mirror to reflect the user's image such that the user's image is at least partially superimposed on the fitness content displayed via the display and the partially reflective portion of the mirror.
- An example of a method of using a smart fitness mirror includes the following operations while displaying fitness content to a user on a video display behind a partially transmissive mirror: (1) utilizing the partially transmissive mirror to reflect the user's image; (2) measuring the user's heart rate with a heart rate monitor attached to the user; (3) transmitting the heart rate from the heart rate monitor to an antenna operably coupled to the video display; (4) displaying the user's heart rate on the video display; and (5) displaying the user's target heart rate on the video display.
- the operating system Launcher specifically includes: (1) a GUI interface; (2) a system control method for action control; (3) a system control method for voice control; (4) a multi-modal fusion control method; (5) executable control instructions; (6) data collection and management methods; and (7) management methods for some specific functions.
- the present disclosure also provides a customized system and operation method for a device such as a smart fitness mirror, so that the function of the smart fitness mirror is not limited to watching fitness exercise videos, but can also be used as an important part of the smart home to facilitate users to view more information to improve usage efficiency.
- the customized operation method overcomes the shortcomings of traditional smart fitness mirrors, which require touch control and are prone to leaving fingerprints on the partially reflective mirror surface.
- the functions of smart fitness mirrors include allowing users to compare their own movements in the mirror while watching fitness exercise videos.
- leaving fingerprints on the partially reflective mirror surface greatly affects the functions of the smart fitness mirror, making most users unwilling to use touch interaction, while using a smart terminal to control the smart fitness mirror is unintuitive and troublesome.
- the method of using motion control does not have these problems.
- Although gesture control is already used in some other electronic products, its application on smart fitness mirrors represents a breakthrough.
- Because smart fitness mirrors include a partially reflective mirror, the user can clearly see the actions they are performing during motion control, reducing the possibility of mis-control and mis-operation.
- Since motion recognition is already a built-in function of smart fitness mirrors, higher precision and complexity can be achieved without additional cost.
- Action recognition enables much richer control instructions than existing gesture control.
- Because the main function of smart fitness mirrors is sports and fitness, applying motion control to smart fitness mirrors also needs to avoid control errors caused by fitness actions; this is also significantly different from traditional gesture control.
- the present disclosure enriches the functions of the fitness mirror and improves the user experience of multiple user interface objects by switching user interfaces with one or more user interface objects on the fitness mirror, increasing users' enthusiasm and interest in fitness.
- Figure 1 is a block diagram of an exemplary smart fitness mirror
- Figure 2 is an exemplary GUI page on the smart fitness mirror
- Figure 3 is an exemplary GUI page on the smart fitness mirror
- Figure 4 is an exemplary GUI page on the smart fitness mirror
- Figure 5-a is an exemplary GUI page on the smart fitness mirror
- Figure 5-b is an exemplary GUI page on the smart fitness mirror
- Figure 6 is an exemplary GUI page on the smart fitness mirror
- Figure 7 is an exemplary GUI page on the smart fitness mirror
- Figure 8 is an exemplary GUI page on the smart fitness mirror
- Figure 9 is an exemplary GUI page on the smart fitness mirror
- Figure 10 is a schematic diagram of action recognition of the smart fitness mirror.
- this patent relates to an interactive mirror device (also known as a “smart fitness mirror” and an “interactive training system”) and methods of using interactive training equipment.
- Smart fitness mirrors include displays configured to display exercise content (pre-recorded videos or live streams) and interfaces that enable users to personalize their workouts. Additionally, smart fitness mirrors may allow users and/or trainers to interact with each other during a workout in a manner similar to a regular workout in a gym or small fitness studio where the user and trainer are in the same room (e.g., providing the trainer with feedback on the user's workout rhythm, correcting the user's form during a specific fitness program).
- An exemplary smart fitness mirror.
- FIG. 1 shows an exemplary representation of a smart fitness mirror.
- the smart fitness mirror may include a processor 110 for, in part, controlling the operation of various sub-components in the smart fitness mirror and managing the flow of data to/from the smart fitness mirror (e.g., video content, audio from a trainer or user, biometric feedback analysis).
- a smart fitness mirror may include a display 120 for displaying video content, a graphical user interface (GUI) with which a user can interact and control the smart fitness mirror, biometric feedback data, and/or other visual content.
- Sensor 130 may be coupled to processor 110 to collect user-related data.
- Antenna 140 may be coupled to processor 110 to provide data transmission between the smart fitness mirror and another device (eg, remote control device, biometric sensor, wireless router).
- Antenna 140 may include multiple transmitters and receivers, each transmitter and receiver customized for a specific frequency and/or wireless standard (e.g., Bluetooth, 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 2G, 3G, 4G, 4G LTE, 5G).
- Amplifier 150 may be coupled to processor 110 to receive audio signals from processor 110 for outputting subsequent sounds through left speaker 152 and/or right speaker 154 .
- Smart fitness mirrors may also include additional components not shown in Figure 1.
- a smart fitness mirror may include a switched-mode power supply (SMPS), a switch, and onboard memory and storage (non-volatile and/or volatile memory) including, but not limited to, a hard disk drive (HDD), a solid-state drive (SSD), flash memory, random access memory (RAM), and Secure Digital (SD) cards.
- the onboard memory and/or storage may be used to store firmware and/or software for operation of the smart fitness mirror.
- the onboard memory and/or storage may also be used to store (temporarily and/or permanently) other data, including but not limited to video content, audio, user video, biometric feedback data, and user settings.
- a smart fitness mirror may include various components for mounting and supporting the smart fitness mirror.
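- As a rough illustration of how the Figure 1 components might be composed in software, the following sketch groups the processor-attached parts into a single structure. All class and field names, as well as the default values, are illustrative assumptions and not the patent's actual data model.

```python
from dataclasses import dataclass, field
from typing import List

# A minimal, illustrative sketch of the Figure 1 component layout.
# Names and defaults are assumptions for illustration only.

@dataclass
class Sensor:
    kind: str                      # e.g. "camera", "infrared", "microphone"

@dataclass
class Antenna:
    standards: List[str] = field(default_factory=lambda: ["Bluetooth", "802.11ac", "4G LTE", "5G"])

@dataclass
class SmartFitnessMirror:
    display_resolution: tuple = (1080, 1920)             # display 120
    sensors: List[Sensor] = field(default_factory=list)  # sensor 130
    antenna: Antenna = field(default_factory=Antenna)     # antenna 140
    speakers: tuple = ("left", "right")                   # speakers 152/154, driven by amplifier 150
    storage_gb: int = 64                                   # onboard memory / storage

mirror = SmartFitnessMirror(sensors=[Sensor("camera"), Sensor("infrared"), Sensor("microphone")])
print(len(mirror.sensors), mirror.antenna.standards[0])
```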
- the antenna 140 may include a plurality of antennas, each antenna serving as a receiver and/or transmitter to communicate with various external devices, such as a user's smart terminal (e.g., computer, smartphone, tablet computer), external sensors (e.g., heart rate straps, inertial sensors, body fat scales), and/or remote servers or cloud servers for streaming or playing video content.
- antenna 140 may be compliant with various wireless standards, including, but not limited to, Bluetooth, 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 2G, 3G, 4G, 4G LTE and 5G standards.
- the sensor 130 may include multiple sensors, and the multiple sensors may be one or more of an image recognition sensor (camera), an infrared recognition sensor, or a voice recognition sensor (microphone).
- the image recognition sensor (camera) in the smart fitness mirror can be used to obtain video and/or still images of the user while the user is performing activities (e.g., exercising).
- the user's video can then be shared with the coach to allow the coach to observe and provide guidance to the user during the workout. Videos can also be shared to other users of other smart fitness mirrors for comparison or competition.
- the user's video can also be displayed in real time on the display 120 or stored for later playback. For example, a user's video can be used for self-assessment during or after a workout by providing a visual comparison of the user and the trainer. Stored videos may also allow users to assess their progress or improvements over time while performing similar workouts.
- the image recognition sensor can also be configured to acquire dynamic and/or static images of the user when the user is performing activities (for example, exercising), process them into skeletal point images, determine whether the user's actions match standard actions based on the skeletal point images, and output the judgment result. If the user's action does not match the standard action, action correction information is output. Dynamic and/or static images can also be processed in real time while the user is using the smart fitness mirror or after completing a workout, to control the smart fitness mirror or derive the user's biometric data based on the user's movements and actions.
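- The skeletal-point comparison described above can be sketched as a simple keypoint-distance check. The joint names, coordinate convention, and tolerance below are illustrative assumptions; a real system would normalize for body size and consider motion over time.

```python
import math
from typing import Dict, Tuple

Point = Tuple[float, float]

def action_matches_standard(user_pose: Dict[str, Point],
                            standard_pose: Dict[str, Point],
                            tolerance: float = 0.15) -> bool:
    """Return True if the user's skeletal points are, on average, within
    `tolerance` (in normalized image coordinates) of the standard action."""
    common = set(user_pose) & set(standard_pose)
    if not common:
        return False
    total = 0.0
    for joint in common:
        ux, uy = user_pose[joint]
        sx, sy = standard_pose[joint]
        total += math.hypot(ux - sx, uy - sy)
    return total / len(common) <= tolerance

# Hypothetical normalized keypoints (0..1 image coordinates)
standard = {"left_wrist": (0.30, 0.40), "right_wrist": (0.70, 0.40), "pelvis": (0.50, 0.65)}
user     = {"left_wrist": (0.32, 0.43), "right_wrist": (0.69, 0.41), "pelvis": (0.51, 0.66)}
print(action_matches_standard(user, standard))   # True -> no correction prompt needed
```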
- the infrared recognition sensor in the smart fitness mirror can be used to obtain the user's thermal image when the user is performing activities (for example, exercising), and use the thermal image to determine the body parts the user is exercising, the parts where force is exerted, and the calories consumed during exercise. Based on the parts being exercised and the parts where force is exerted, it is judged whether the user's action matches the standard action and the judgment result is output. If the user's action does not match the standard action, action correction information is output.
- the voice recognition sensor (microphone) in the smart fitness mirror can be used to obtain and process the user's voice information in real time when the user is using the smart fitness mirror and control the smart fitness mirror or have a conversation with the user based on the user's voice information.
- the smart fitness mirror may display a graphical user interface (GUI) through the display 120 to facilitate user interaction with the smart fitness mirror.
- GUI includes but is not limited to wake-up page, home page, exercise page and other pages.
- the wake-up page refers to the initial page when the smart fitness mirror is woken up, which is used to display some basic page information. Only when it detects that the user performs a specified operation, the smart fitness mirror is completely woken up and enters the main page, exercise page or other pages.
- the main page displays at least one piece of page information, and the page information is used to display data, operable interactive objects or interactive guidance to the user.
- the main page is used to display preset page information or page information set by the user to the user.
- the user can control the smart fitness mirror by performing specified operations on the main page.
- the control includes but is not limited to page switching, function switching, or entering third-party access applications.
- the exercise page displays a video playback window for playing exercise videos to guide users to exercise. It can also display at least one of course playback information (such as total course duration, played duration, etc.), accessory connection information (such as heart rate belt connection information, etc.), and the user's physiological information (such as heart rate, blood pressure, calories burned, etc.).
- the exercise page can be entered from the main page or wake-up page and returned to the main page or wake-up page, or it can be entered directly when the smart fitness mirror is turned on.
- Other pages display pre-installed function pages accessed through the main page (such as multimedia player page, weather display page, food recommendation page, etc.) or third-party access application pages (such as WeChat, Douyin, Himalaya, etc.). Other pages are generally entered from the main page or wake-up page and returned to the main page or wake-up page.
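- The page structure described above (wake-up page, main page, exercise page, and other pages) behaves like a small state machine. The sketch below is a minimal, assumed encoding of those navigation rules; the page names and the allowed transitions are illustrative only.

```python
# A sketch of the page navigation rules; not an exhaustive encoding of the patent.
ALLOWED_TRANSITIONS = {
    "standby":  {"wake_up"},
    "wake_up":  {"main", "exercise", "other"},
    "main":     {"exercise", "other", "wake_up"},
    "exercise": {"main", "wake_up"},
    "other":    {"main", "wake_up"},
}

class PageNavigator:
    def __init__(self) -> None:
        self.current = "standby"

    def go(self, target: str) -> str:
        # Only move if the transition is allowed from the current page.
        if target in ALLOWED_TRANSITIONS.get(self.current, set()):
            self.current = target
        return self.current

nav = PageNavigator()
nav.go("wake_up")        # specified operation detected, mirror wakes up
nav.go("exercise")       # enter a course directly from the wake-up page
print(nav.go("main"))    # return to the main page -> "main"
```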
- the graphical user interface can display page information through page objects. Each page object is used to display at least one or a group of page information.
- the displayed page information can be information stored in a local storage device of the smart fitness mirror, or it can be information received by the smart fitness mirror from other devices through the communication interface, including but not limited to switchable display of course preview information, user information, environment information, multimedia information, external device information, etc.
- the course preview information includes but is not limited to coach name, course type, course duration, course difficulty, course label and/or estimated consumption, etc.
- the types of courses include but are not limited to showing users HIIT, aerobic dance, yoga, strength shaping, combat training, barre, Pilates, stretching, dance, meditation, challenges, etc. through smart fitness mirrors.
- the course labels include but are not limited to showing the user through the smart fitness mirror whether it is an AI recognition course, whether it is a parent-child course, the body parts mainly involved in the course (such as the whole body, arms, waist and abdomen, legs, etc.), and the props required by the course (such as no equipment, yoga mats, elastic rings, elastic bands, dumbbells, etc.).
- the page object displaying course preview information is also called a course card.
- the user information includes but is not limited to usage history information, physiological information, social information, calendar information, etc.
- the usage history information includes but is not limited to displaying completed courses, favorite courses, historical best results, frequency of classes, etc. to the user through the smart fitness mirror.
- the physiological information includes but is not limited to displaying height, weight, gender, age, heart rate, blood pressure, injury history, etc. to the user through a smart fitness mirror.
- the social information includes, but is not limited to, information related to the people/friends the user follows that is displayed to the user through the smart fitness mirror, and may also be information related to groups/interest groups that the user has joined. Specifically, this information may be invitations sent to the user.
- the user's activity updates can be synchronized to other social platforms (such as WeChat, Weibo, Facebook, Twitter, etc.).
- the calendar information includes but is not limited to displaying the user's schedule information, to-do items, course plans, etc. to the user through the smart fitness mirror.
- the page object displaying user information is also called a user card.
- the environmental information includes but is not limited to weather information.
- the weather information includes but is not limited to displaying weather forecast, temperature, humidity, clothing recommendations, etc. to users through smart fitness mirrors.
- the page object displaying weather information is also called a weather card.
- the multimedia information includes but is not limited to music and video information played through the smart fitness mirror.
- the page object displaying multimedia information is also called a multimedia card.
- the external device information includes but is not limited to information about other fitness devices connected to the smart fitness mirror, and information about other smart home devices (such as robot vacuums, smart speakers, smart gateways, etc.), displayed through the smart fitness mirror.
- the page object displaying external device information is also called a device card.
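- The cards described above can be thought of as typed page objects. The following sketch shows one possible shape for a course card, weather card, and user card; the field names and sample values are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative card/page-object structures; not the patent's data model.

@dataclass
class CourseCard:
    coach_name: str
    course_type: str          # e.g. "yoga", "HIIT"
    duration_min: int
    difficulty: str
    labels: List[str]         # e.g. ["AI recognition", "no equipment", "whole body"]
    estimated_kcal: Optional[int] = None

@dataclass
class WeatherCard:
    forecast: str
    temperature_c: float
    humidity_pct: float
    clothing_tip: str

@dataclass
class UserCard:
    completed_courses: int
    favorite_courses: List[str]
    heart_rate_bpm: Optional[int] = None

page_objects = [
    CourseCard("Coach Li", "yoga", 30, "beginner", ["no equipment", "whole body"], 150),
    WeatherCard("sunny", 23.5, 40.0, "light sportswear"),
    UserCard(12, ["Morning Stretch"]),
]
print(len(page_objects))
```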
- the graphical user interface can also display page information through interactive objects, which can be icons, text, highlighted selections, other forms that can prompt the user to interact, or a combination thereof.
- the interactive object is used to show the user the controls or operations that the user can perform (such as page switching, selection, pause/play of the course, etc.). Further, the interactive object can also be combined with a graphical user interface (GUI) page or page object to display different interactive functions.
- the page objects and interactive objects can also be displayed by pop-up windows or scrolling playback on each page.
- Option 1 is shown in Figures 2, 3, 4, 5-a, and 5-b.
- Figure 2 is the first main page
- Figure 3 is the page switching page
- Figures 4, 5-a, and 5-b show further main pages, including the second and third main pages.
- the first main page displays multiple page information through multiple page objects, including weather cards, multimedia cards, recommended course cards, etc.
- the first main page also includes an interactive object displayed in the upper right corner for entering the page switching page, and an interactive object combined with the course card.
- the user can enter the second main page by executing the interaction with the interactive object combined with the course card, or enter the page switching page by executing the interaction with the interactive object used to enter the page switching page.
- The page switching page includes previews of the first main page, the second main page, and the third main page, and highlights one of the pages.
- The upper part of the page switching page also displays interactive objects for returning and for confirming entry into the highlighted page, while the bottom part displays an interactive object for switching the highlighted page.
- The user can switch the highlighted page by performing the interaction of the interactive object that switches the highlighted page, and enter the selected page by performing the interaction of the interactive object that confirms entry into the highlighted page.
- the second main page includes multiple course cards and highlights one of them.
- The upper part of the second main page also displays interactive objects for returning and for confirming entry into the course of the highlighted course card.
- The interactive object for switching the highlighted course card is shown at the bottom of the page.
- The user can switch the highlighted course card by performing the interaction of the interactive object for switching the highlighted course card, and enter the exercise page of the selected course card by performing the interaction of the interactive object that confirms entry into the highlighted course card.
- the third main page includes multiple page objects displayed side by side.
- The third main page also displays the selected page object and the interactive object for entering the selected page object, while the interactive object for switching page objects is shown at the bottom of the third main page.
- The user can switch the displayed page object by performing the interaction of the interactive object for switching page objects, and enter the page corresponding to the selected page object by performing the interaction of the interactive object for entering the selected page object.
- Option 2 is shown in Figures 6, 7, 8, and 9.
- Figure 6 is the fifth main page
- Figure 7 is the sixth main page
- Figures 8 and 9 are the seventh and eighth main pages respectively.
- the fifth main page displays multiple page information through multiple page objects, including weather cards, multimedia cards, recommended course cards, etc.
- the fifth main page also includes, displayed at the bottom, interactive objects for switching the main page, as well as an interactive object combined with the course card. The user can enter the eighth main page by performing the interaction of the interactive object combined with the course card, or switch the displayed main page by performing the interaction of the interactive object for switching the main page.
- the sixth main page includes multiple page objects displayed side by side.
- the sixth main page also displays the selected page object and the interactive objects that enter the selected page object.
- the interactive object for switching the main page is also displayed below the sixth main page.
- the user can switch the displayed main page by performing the interaction of the interactive object for switching the main page, and enter the page corresponding to the selected page object by performing the interaction of the interactive object for entering the selected page object.
- the seventh main page includes multiple page objects displayed side by side.
- the seventh main page also displays the selected page object and the interactive objects that enter the selected page object.
- the interactive object for switching the main page is also displayed below the seventh main page.
- the user can switch the displayed main page by performing the interaction of the interactive object for switching the main page, and enter the page corresponding to the selected page object by performing the interaction of the interactive object for entering the selected page object.
- When the user chooses to enter the selected page object and the page corresponding to it is the eighth main page, the eighth main page includes multiple course cards displayed side by side. The eighth main page also displays the selected course card and the interactive object for entering the selected course card, while the interactive object for switching course cards is displayed at the bottom of the eighth main page. The user can switch the displayed course card by performing the interaction of the interactive object for switching course cards, and enter the exercise page corresponding to the selected course card by performing the interaction of the interactive object for entering the selected course card.
- page animation effects include but are not limited to page switching animation effects and page interaction effects.
- page switching animations include page turning animations, returning to the previous level, entering the next level animation, etc.
- Page switching animations are mainly animation effects for page operations and changes; their purpose is to give page operations and changes a distinctive effect when displayed, providing a better user experience.
- page interaction effects include selection confirmation animations, action guidance animations, etc.
- Page interaction effects are mainly aimed at providing guidance to users when interacting, such as prompting users about the current interaction, the progress of the interaction, and the operations required to proceed with the interaction.
- These include, but are not limited to, highlighting or flashing the involved page information when the user interacts, displaying a progress bar on the involved page information, etc. They also include identifying the action the user is doing and displaying prompt actions or suggested actions during action control, as well as completion prompts based on the user's voice commands during voice control.
- Motion command control (including "posture control" and "gesture control")
- Smart fitness mirrors can control the graphical user interface (GUI) based on user actions collected by image recognition sensors.
- actions controlled by action commands include but are not limited to static actions and dynamic actions.
- Static action means that the user makes a specific posture with a specific body part and maintains it for a specified time;
- dynamic action means that the user makes a specific action with a specific body part.
- static actions include but are not limited to raising the left/right hand, raising the left/right leg, raising both hands above the head to form a heart shape, and various static gestures.
- the range of the specified time is generally 0-5 seconds or longer. As long as the action is determined by the positional relationship between specific parts of the human body relative to space, the entire human body, or other specific parts, and is maintained for a specified time, it should fall within the scope of protection of static actions. Further specific examples include: holding the left hand straight at an angle of 15°/30° to the horizontal plane for 1.5 seconds, holding the left hand at a right angle to the right hand for 1 second, extending the left hand forward for 0.1 seconds, etc.
- dynamic actions include but are not limited to sliding the left/right hand horizontally, sliding the left/right hand up and down, high-fiving with both hands, jumping up, squatting, and various dynamic gestures, etc.; as long as an action is determined by the movement trajectory of specific parts of the human body relative to space, the entire human body, or other specific parts, it should fall within the protection scope of dynamic actions.
- Further specific examples include: sliding the left hand horizontally to move the left hand from the left side of the body to the right side of the body; sliding the left hand horizontally to move the left hand from the right side of the body to the left side of the body; sliding the right hand horizontally to move the right hand from the right side of the body to the left side of the body; sliding the left hand up and down to move the left hand from above the shoulder to below the waist; sliding the left hand up and down to move the left hand from above the upper 20% of the body to below the upper 40%; high-fiving both hands above the head; high-fiving both hands above the left shoulder, etc.
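- A dynamic action such as a horizontal hand slide can be detected from the wrist's trajectory across the camera image. The sketch below classifies a left-to-right or right-to-left slide from a short sequence of wrist x-coordinates; the travel threshold and the command names in the comments are illustrative assumptions.

```python
from typing import List

def classify_horizontal_slide(wrist_xs: List[float],
                              min_travel: float = 0.4) -> str:
    """Classify a dynamic action from one wrist's horizontal trajectory
    (normalized 0..1 image coordinates, left to right). `min_travel` is an
    illustrative threshold for how far the hand must move across the body."""
    if len(wrist_xs) < 2:
        return "none"
    travel = wrist_xs[-1] - wrist_xs[0]
    if travel >= min_travel:
        return "slide_left_to_right"     # e.g. could map to "previous page"
    if travel <= -min_travel:
        return "slide_right_to_left"     # e.g. could map to "next page"
    return "none"

# Hypothetical left-wrist x positions sampled over roughly half a second
print(classify_horizontal_slide([0.25, 0.35, 0.50, 0.68, 0.78]))  # slide_left_to_right
```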
- control instructions for controlling execution of action commands include, but are not limited to, control instructions for non-exercise pages (wakeup page, home page, other pages) and control instructions for exercise pages.
- the control instructions for non-exercise pages mainly involve user interaction instructions with non-exercise pages.
- the control instructions for the exercise page mainly involve the user’s control instructions for the playback of exercise courses.
- control instructions for non-exercise pages include but are not limited to switching non-exercise pages, previous page, next page, confirm, return, selecting courses, adding courses to favorites, joining plans, entering the page switching interface, waking up the voice assistant, multimedia volume control, etc.
- control instructions for the exercise page include but are not limited to course pause, course playback, previous link, next link, course background music volume control, course coach volume control, course evaluation, etc.
- users can also achieve different functions through a combination of multiple actions.
- Specific examples include but are not limited to using the first action to enter the first-level control instruction menu, and then using the second action to select instructions in the first-level control instruction menu or enter the second-level control instruction menu.
- the control instruction menu can be displayed to the user on the display.
- Further specific examples include but are not limited to the first-level control instruction menu, which may be a course selection control instruction menu, including instructions for playing courses, adding courses to favorites, displaying course details, reserving courses, and entering the second-level control instruction menu.
- the second-level control instruction menu may be a course evaluation menu, including instructions for positive evaluation, medium evaluation, and negative evaluation.
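- The two-level control instruction menu described above can be modeled as a small menu tree walked by a sequence of recognized actions. The action names and most menu contents below are illustrative assumptions; only the course selection and course evaluation examples come from the text.

```python
# Sketch of a two-level control-instruction menu driven by recognized actions.
FIRST_LEVEL = {                   # first action -> first-level menu
    "raise_left_hand": "course_selection",
}
COURSE_SELECTION = {              # second action -> instruction or second-level menu
    "raise_right_hand": "play_course",
    "slide_left_hand": "favorite_course",
    "raise_both_hands": ("menu", "course_evaluation"),
}
COURSE_EVALUATION = {
    "raise_right_hand": "positive_evaluation",
    "slide_left_hand": "medium_evaluation",
    "slide_right_hand": "negative_evaluation",
}

def resolve(actions):
    """Walk the menu tree with a list of recognized actions."""
    menu = FIRST_LEVEL.get(actions[0])
    if menu != "course_selection":
        return None
    choice = COURSE_SELECTION.get(actions[1])
    if isinstance(choice, tuple) and choice[0] == "menu":
        return COURSE_EVALUATION.get(actions[2])
    return choice

print(resolve(["raise_left_hand", "raise_right_hand"]))                      # play_course
print(resolve(["raise_left_hand", "raise_both_hands", "slide_right_hand"]))  # negative_evaluation
```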
- Specific examples include, but are not limited to, displaying icons of operable actions in the graphical user interface (GUI), marking or highlighting in the GUI the control instruction to which the user's static action points, displaying in the GUI a progress bar of the holding time of the user's static action, and guiding the user's dynamic actions in the GUI (for example, when the completion of the user's dynamic action exceeds a specified ratio), etc.
- smart fitness mirrors can rely on other sensors to provide auxiliary judgment when controlling the graphical user interface (GUI) based on user actions collected by image recognition sensors.
- Specific examples include, but are not limited to, enabling/interrupting/confirming action command control through the voice recognition sensor, determining whether the user is facing forward or backward through the facial recognition sensor (action command control is enabled only when the user faces the smart fitness mirror frontally), using the facial recognition sensor to distinguish between different users so as to load customized action command control instructions, optimizing action/trajectory recognition accuracy through an IMU, etc.
- the actions controlled by the action command can be actions preset in the smart fitness mirror, or actions recorded by the user themselves.
- the control instructions executed by action command control can be pre-set instructions in the smart fitness mirror, or they can be automated instructions set by the user themselves.
- the action controlled by the action command and the executed control instruction may have a preset corresponding relationship, or may be a corresponding relationship set by the user. It may be a one-to-one relationship or a many-to-one relationship.
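- The correspondence between actions and control instructions (preset or user-defined, one-to-one or many-to-one) can be held in a simple lookup table, as sketched below. The specific action and instruction names are assumptions for illustration.

```python
# Sketch of the action -> control-instruction correspondence: preset mappings,
# user-defined overrides, and many-to-one relationships. Names are illustrative.
PRESET_MAP = {
    "raise_left_hand": "enter_page_switching",
    "raise_right_hand": "confirm",
    "slide_left_hand_ltr": "previous_page",
    "slide_right_hand_rtl": "next_page",
    "squat": "pause_course",
    "jump": "pause_course",          # many-to-one: two actions, one instruction
}

def build_action_map(user_overrides=None):
    mapping = dict(PRESET_MAP)
    mapping.update(user_overrides or {})     # user-recorded actions take priority
    return mapping

actions = build_action_map({"clap_overhead": "course_favorite"})
print(actions["squat"], actions["clap_overhead"])
```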
- the turning on or off of the action command control function can be determined by a specific graphical user interface (GUI) or exercise page.
- For example, the action command control function is turned on by default on the main page of the graphical user interface (GUI) and turned off by default on other pages; the turning on or off of the action command control function can also be determined based on data detected by other sensors; and the turning on or off of the action command control function can apply to all functions or only to some functions.
- a specific example of a smart fitness mirror that can control a graphical user interface (GUI) based on user actions collected by an image recognition sensor.
- the smart fitness mirror first displays the first page of the graphical user interface (GUI) through the display 120.
- The first page displays a first prompt icon indicating that raising the left hand enters the page switching mode. The user raises their left hand, the first prompt icon enters the highlighted state, and a progress bar starts; after 1.5 seconds the progress bar is completed, and the smart fitness mirror displays the page switching page through the display 120.
- the switching page also includes the previous page "second page" of the first page and the next page "third page” of the first page.
- The switching page also displays prompt icons for the previous page, next page, exit, and confirmation.
- the prompt icon for the previous page represents the left hand waving horizontally from left to right
- the prompt icon for the next page represents the right hand waving horizontally from right to left.
- the exit prompt icon represents raising the left hand
- the confirmation prompt icon represents raising the right hand.
- The user then waves their left hand horizontally from left to right, and the switching page changes to highlight the second page. The user then raises the right hand; the icon representing confirmation enters the highlighted state and the progress bar starts. After 1.5 seconds, the progress bar is completed and the smart fitness mirror displays the second page through the display 120.
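- The 1.5-second hold with a progress bar in this example can be sketched as a per-frame timer: the progress fills while the static action is detected and resets when it is not. The frame rate and threshold values below are illustrative.

```python
# Sketch of hold-to-confirm: a static action must be held for 1.5 s while a
# progress bar fills. Timing values are illustrative assumptions.
class HoldToConfirm:
    def __init__(self, required_s: float = 1.5) -> None:
        self.required_s = required_s
        self.held_s = 0.0

    def update(self, action_detected: bool, dt_s: float) -> float:
        """Feed one video frame; returns progress in [0, 1]."""
        self.held_s = self.held_s + dt_s if action_detected else 0.0
        return min(self.held_s / self.required_s, 1.0)

    @property
    def confirmed(self) -> bool:
        return self.held_s >= self.required_s

hold = HoldToConfirm()
progress = 0.0
for _ in range(50):                     # ~50 frames at 30 fps while the left hand stays raised
    progress = hold.update(True, 1 / 30)
print(round(progress, 2), hold.confirmed)   # 1.0 True -> switch to the page switching page
```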
- a specific example of a smart fitness mirror that can control a graphical user interface (GUI) based on user actions collected by an image recognition sensor.
- the smart fitness mirror first displays the fourth page of the graphical user interface (GUI) through the display 120.
- The fourth page displays a second prompt icon, representing waving the left hand horizontally from left to right to enter the previous page, and a third prompt icon, representing waving the right hand horizontally from right to left to enter the next page.
- the user waves the right hand horizontally from right to left.
- the smart fitness mirror switches from the fourth page to the fifth page through the display 120.
- a specific example of a smart fitness mirror that can control a graphical user interface (GUI) based on user actions collected by an image recognition sensor.
- the smart fitness mirror first displays the third main page in Figure 5-a through the display 120.
- the third main page includes multiple page objects displayed side by side.
- the third main page also displays the selected page object and the interactive object that enters the selected page object.
- the bottom of the third main page also displays the interactive object for switching page objects.
- the user can switch the displayed page object by performing the interaction of the interactive object for switching page objects, and enter the page corresponding to the selected page object by performing the interaction of the interactive object for entering the selected page object.
- the user can select a page object by raising the right/left hand diagonally upward to the left, diagonally downward to the left, diagonally upward to the right, or diagonally downward to the right. After the selection is completed, the user can keep the right/left hand in that position while making a designated static action with the hand, such as spreading the fingers for 1 second, to enter the page corresponding to the selected page object. The user can also wave the right hand horizontally from left to right to switch the displayed page objects and enter the fourth main page.
- the smart fitness mirror first displays the third main page as shown in Figure 5-a through the display 120.
- the third main page includes page objects arranged in a four-square grid and displayed side by side.
- When the user uses the smart fitness mirror, the page object selected by the user is determined based on the position of the user's right hand: when the user's right hand is above the right shoulder of the human body, the page object "Weather" is selected; when the user's right hand is above the left shoulder of the human body, the page object "Music" is selected; and when the user's right hand is below the right shoulder of the human body, the page object "Body Fat Scale" is selected.
- the smart fitness mirror determines the page object selected by the user based on the recognized right hand position.
- The preset interaction for the selected page object is then executed.
- The interaction for the selected page object can be to open the function of the page object, or to open the contextual menu of the page object.
- the page objects displayed side by side in the four-square grid arrangement may also be arranged in left-right arrangement, six-square grid arrangement, nine-square grid arrangement, etc.
- the actions can be set according to personal preferences (left-handedness, disability, etc.), with different modes for one hand and two hands, including left-hand/right-hand modes and a simultaneous two-hand mode (such as left hand selecting, right hand switching screens).
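- The four-square-grid selection in this example can be sketched as mapping the right hand's position, relative to the shoulders, to a grid quadrant. The coordinate convention and the quadrant-to-card mapping below are assumptions: only "Weather", "Music", and "Body Fat Scale" appear in the text, and the fourth card, "Calendar", is a placeholder.

```python
# Sketch of selecting a page object in a four-square grid from the right hand's
# position relative to the shoulders. Mapping and coordinates are illustrative.
def select_page_object(hand, right_shoulder, left_shoulder):
    """hand / shoulders are (x, y) in image coordinates with y growing downward."""
    hx, hy = hand
    # Which side of the body the hand is on decides the column;
    # above/below shoulder height decides the row.
    on_right_side = hx <= right_shoulder[0]     # the user's right side appears on the camera's left
    shoulder_y = right_shoulder[1] if on_right_side else left_shoulder[1]
    above = hy < shoulder_y
    grid = {
        (True, True): "Weather",          # above right shoulder
        (True, False): "Body Fat Scale",  # below right shoulder
        (False, True): "Music",           # above left shoulder
        (False, False): "Calendar",       # below left shoulder (placeholder assumption)
    }
    return grid[(on_right_side, above)]

print(select_page_object(hand=(0.30, 0.20),
                         right_shoulder=(0.40, 0.35),
                         left_shoulder=(0.60, 0.35)))   # Weather
```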
- the first page, the second page, the third page, the fourth page and the fifth page may be a wake-up page, a home page, an exercise page or other pages respectively.
- the smart fitness mirror can control the graphical user interface (GUI) based on the user's voice collected by the voice recognition sensor.
- the user can replace the actions controlled by the action command with the voice recognition result controlled by the voice command and achieve the same control effect.
- control instructions executed by voice command control include, but are not limited to, control instructions for non-exercise pages (wake-up page, home page, other pages) and control instructions for exercise pages.
- the control instructions for non-exercise pages mainly involve user interaction instructions with non-exercise pages.
- the control instructions for the exercise page mainly involve the user’s control instructions for the playback of exercise courses.
- control instructions for non-exercise pages include but are not limited to switching non-exercise pages, previous page, next page, confirm, return, select courses, course favorites, join plans, enter the page switching interface, wake up the voice assistant, Multimedia volume control, etc.
- control instructions for the exercise page include but are not limited to course pause, course playback, previous link, next link, course background music volume control, course coach volume control, course evaluation, etc.
- users can also achieve different functions through a combination of multiple voice commands.
- Specific examples include but are not limited to using the first voice to enter the first-level control instruction menu, and then using the second voice to select instructions in the first-level control instruction menu or enter the second-level control instruction menu.
- The control instruction menu can be displayed to the user on the display. Further specific examples include but are not limited to the first-level control instruction menu, which may be a course selection control instruction menu, including instructions for playing courses, adding courses to favorites, displaying course details, reserving courses, and entering the second-level control instruction menu.
- the second-level control instruction menu may be a course evaluation menu, including instructions for positive evaluation, medium evaluation, and negative evaluation.
- Specific examples include, but are not limited to, displaying available voice keywords in the graphical user interface (GUI), and indicating in the graphical user interface (GUI) the completion of the user's voice instructions (for example, when the completion exceeds a specified ratio), etc.
- the smart fitness mirror can rely on other sensors to provide auxiliary judgment when controlling the graphical user interface (GUI) based on the user's voice collected by the voice recognition sensor.
- Specific examples include, but are not limited to, enabling/interrupting/confirming voice command control through the motion recognition sensor, determining whether the user is facing forward or backward through the facial recognition sensor (voice command control is enabled only when the user faces the smart fitness mirror frontally), and using the facial recognition sensor to distinguish between different users so as to load customized voice command control instructions, etc.
- the keywords controlled by voice commands can be keywords preset in the smart fitness mirror, or keywords recorded by the user themselves.
- the control instructions executed by voice command control can be pre-set instructions in the smart fitness mirror, or they can be automated instructions set by the user themselves.
- the keywords controlled by voice commands and the executed control instructions can have a pre-set correspondence, or a correspondence set by the user themselves, a one-to-one relationship, or a many-to-one relationship.
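- Voice command control can likewise be sketched as a keyword lookup with user-recorded keywords layered on top, allowing many-to-one mappings. The keyword strings below are illustrative English stand-ins for the spoken commands mentioned in the examples that follow.

```python
# Sketch of voice command control: preset keywords map to control instructions,
# user-recorded keywords take priority, and many-to-one mappings are allowed.
VOICE_COMMANDS = {
    "previous page": "previous_page",
    "turn forward one page": "previous_page",   # many-to-one
    "next page": "next_page",
    "turn one page back": "next_page",
    "enter": "confirm",
    "ok": "confirm",
    "page switching": "enter_page_switching",
    "start training": "play_course",
}

def handle_utterance(text, user_keywords=None):
    """Look up a recognized utterance; user-recorded keywords take priority."""
    table = {**VOICE_COMMANDS, **(user_keywords or {})}
    return table.get(text.strip().lower())

print(handle_utterance("Turn forward one page"))                   # previous_page
print(handle_utterance("let's go", {"let's go": "play_course"}))   # play_course
```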
- the turning on or off of the voice command control function can be determined by a specific graphical user interface (GUI) or exercise page.
- For example, the voice command control function is turned on by default on the main page of the graphical user interface (GUI) and turned off by default on other pages; the voice command control function can also be turned on by recognizing wake words; it can also be turned on or off based on data detected by other sensors; and the turning on or off can apply to all functions or only to some functions.
- a specific example of a smart fitness mirror that can control a graphical user interface (GUI) based on the user's voice collected by the voice recognition sensor.
- the smart fitness mirror first displays the first page of the graphical user interface (GUI) through the display 120, and the user speaks the wake word to enable the voice command control function.
- An icon representing that the voice command control function is on is displayed on the first page.
- the user speaks the instruction of "enter the switching page” or "page switching”
- the smart fitness mirror displays the command for switching pages through the display 120
- the displayed page switching page in addition to the highlighted first page, also includes the previous page "second page” of the first page and the next page "third page” of the first page.
- the switching page also displays There is an icon that represents the activation of the voice command control function.
- the user says the command "previous page” or “turn forward one page”
- the switching page changes to highlight the second page.
- the user says “enter” or " “OK” instruction
- the smart fitness mirror displays the second page through
- a specific example of a smart fitness mirror that can control a graphical user interface (GUI) based on the user's voice collected by the voice recognition sensor.
- the smart fitness mirror first displays the fourth page of the graphical user interface (GUI) through the display 120, and the user speaks the wake word to turn on the voice command control function; an icon representing that the voice command control function is on is displayed on the fourth page.
- the user speaks the instruction of "next page” or "turn one page back”
- the smart fitness mirror displays the command "Next page” or "Turn back one page” through the display 120.
- the fourth page turns to the fifth page.
- a specific example of a smart fitness mirror that can control a graphical user interface (GUI) based on the user's voice collected by the voice recognition sensor.
- the smart fitness mirror first displays the sixth page of the graphical user interface (GUI) through the display 120, and the user turns on the voice command control function through action command control; an icon representing that the voice command control function is on is displayed on the sixth page.
- the user speaks the instruction of "start training" or "confirm”
- the smart fitness mirror displays the transition from the sixth page through the display 120 Enter the seventh page.
- the first page, the second page, the third page, the fourth page and the fifth page may respectively be a wake-up page, a main page, an exercise page or other pages; the sixth page is a page used to display exercise cards, and the seventh page is an exercise page.
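- The three voice-control examples above can be read as a small page-navigation state machine driven by recognized instructions. The sketch below is one possible rendering of that flow under assumed page names and instruction strings; it is not the disclosure's implementation.

```python
# Sketch of voice-driven page navigation, assuming the page names used in the examples above.
PAGES = ["second page", "first page", "third page"]  # ordered: previous, current, next

class MirrorUI:
    def __init__(self):
        self.current = "first page"
        self.in_switch_page = False   # whether the page-switching page is shown
        self.highlighted = self.current

    def on_voice(self, instruction: str) -> None:
        if instruction in ("enter the switching page", "page switching"):
            self.in_switch_page = True
            self.highlighted = self.current
        elif self.in_switch_page and instruction in ("previous page", "turn forward one page"):
            i = PAGES.index(self.highlighted)
            self.highlighted = PAGES[max(i - 1, 0)]
        elif self.in_switch_page and instruction in ("next page", "turn back one page"):
            i = PAGES.index(self.highlighted)
            self.highlighted = PAGES[min(i + 1, len(PAGES) - 1)]
        elif self.in_switch_page and instruction in ("enter", "ok"):
            self.current, self.in_switch_page = self.highlighted, False

ui = MirrorUI()
for cmd in ("enter the switching page", "previous page", "ok"):
    ui.on_voice(cmd)
print(ui.current)  # -> "second page", matching the first example above
```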
- the smart fitness mirror can control the graphical user interface (GUI) based on user touch instructions collected by the touch screen.
- Multimodal command control includes combination control, enhanced control and conflict control.
- the combination control includes but is not limited to action combination control of multiple actions and voice combination control of multiple keywords; more complex control instructions (such as select and add to favorites, select and recommend to a friend, take a screenshot and share, etc.) can be realized through multiple consecutive actions or keywords (with intervals not exceeding a threshold).
- the enhanced control includes but is not limited to enhancing and optimizing the user's action command control or voice command control through facial recognition or scene recognition; for example, when the user's action or voice command is to play music and the user's facial expression is recognized as unhappy, the user is presented with soothing recommendation results and the mirror expresses care for the user; when the user's action or voice command is to recommend a course and the scene temperature is recognized as low, the recommendation of meditation courses is reduced.
- Conflict control includes but is not limited to the case where the user uses multiple different control methods to control the smart fitness mirror at the same time and multiple conflicting control instructions appear; by judging the priority of the different control methods, the control instruction the user intends to execute is determined and executed by the smart fitness mirror.
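- One simple way to realize the conflict control described above is to rank the control methods and keep only the highest-priority instruction among commands arriving close together in time. The priority order and the 0.5 s conflict window below are assumptions chosen for illustration only.

```python
# Sketch: resolving conflicting instructions from different control methods by priority.
# The priority table and the 0.5 s conflict window are illustrative assumptions.
MODALITY_PRIORITY = {"touch": 3, "voice": 2, "action": 1}  # higher wins

def resolve_conflict(pending):
    """pending: list of (modality, instruction, timestamp_seconds).
    Returns the instruction the mirror should execute, or None."""
    if not pending:
        return None
    pending = sorted(pending, key=lambda p: p[2])
    first_t = pending[0][2]
    window = [p for p in pending if p[2] - first_t <= 0.5]  # near-simultaneous commands
    best = max(window, key=lambda p: MODALITY_PRIORITY.get(p[0], 0))
    return best[1]

print(resolve_conflict([("action", "next page", 10.00), ("voice", "pause", 10.20)]))
# -> "pause": voice outranks action in this assumed priority table
```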
- Control instructions include but are not limited to system wake-up instructions, page switching instructions, page interaction instructions and course control instructions. Users can control the smart fitness mirror through these control instructions.
- the system wake-up instructions include instructions to wake up the smart fitness mirror from standby state to the wake-up page, home page, exercise page, and other pages, and instructions to switch the smart fitness mirror from the wake-up page to the home page, exercise page, and other pages.
- the page switching instruction refers to an instruction used to control the smart fitness mirror to switch among multiple different pages.
- the different pages can be pages of different types or pages with different contents.
- Further page switching instructions include but are not limited to page turning instructions (such as next page/previous page) and instructions to control the smart fitness mirror to enter the page switching page.
- the page interaction instructions refer to instructions for controlling the smart fitness mirror to perform interaction or input.
- the page interaction instructions include but are not limited to control interaction instructions and input interaction instructions.
- the control interaction instructions refer to instructions for controlling the graphical user interface (GUI), or the page objects or interactive objects in the graphical user interface (GUI), to realize their interactive functions, such as confirm/return, add to favorites, add to plan, wake up the voice assistant, volume control, switching accounts, switching modes, opening a contextual menu, etc.
- the input interaction instructions refer to instructions for inputting information into the graphical user interface (GUI) or into a page object or interactive object in the graphical user interface (GUI); for example, when rating a course, one heart is added for each instruction input, or when chatting with other users through the smart fitness mirror, a preset expression/reply is switched to each time an instruction is input.
- the course control instructions refer to the control instructions used to control the smart fitness mirror when it plays exercise videos on the exercise page, and are mainly used to control the playback of exercise videos; further, course control instructions include but are not limited to pause/play, previous section/next section, background volume control, coach volume control, course evaluation, etc.
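- To make the four instruction families above concrete, the sketch below routes an already-recognized control instruction to a handler for system wake-up, page switching, page interaction, or course control; the instruction names and handler bodies are placeholders, not an API defined by this disclosure.

```python
# Sketch: dispatching recognized control instructions to the four families described above.
# Instruction names and the returned strings are illustrative placeholders.
SYSTEM_WAKE = {"wake_to_main", "wake_to_exercise"}
PAGE_SWITCH = {"next_page", "prev_page", "open_switch_page"}
PAGE_INTERACT = {"confirm", "back", "add_favorite", "add_plan", "volume_up", "volume_down"}
COURSE_CONTROL = {"pause", "play", "prev_section", "next_section", "rate_course"}

def dispatch(instruction: str) -> str:
    if instruction in SYSTEM_WAKE:
        return f"system wake-up: {instruction}"
    if instruction in PAGE_SWITCH:
        return f"page switching: {instruction}"
    if instruction in PAGE_INTERACT:
        return f"page interaction: {instruction}"
    if instruction in COURSE_CONTROL:
        return f"course control: {instruction}"
    return "ignored: unknown instruction"

print(dispatch("pause"))  # -> "course control: pause"
```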
- the data sources of smart fitness mirrors include but are not limited to sensor data and background data, which are used for inputting user control instructions and displaying page information.
- the sensor data includes but is not limited to on-mirror sensor data and external sensor data.
- the on-mirror sensor data includes but is not limited to data detected by image recognition sensors, infrared recognition sensors, voice recognition sensors and other sensors installed on the smart fitness mirror.
- the data detected by the image recognition sensor includes but is not limited to the images detected by the image recognition sensor and the data obtained after image processing and recognition, such as user action data, user posture data, facial recognition data, user expression data, environment image data, etc.
- the data detected by the infrared recognition sensor includes but is not limited to the thermal imaging images detected by the infrared recognition sensor and the data obtained after processing and identifying the thermal imaging images, such as user thermal imaging data, data on the body parts the user is exercising, environmental thermal imaging data, etc.
- the data detected by the voice recognition sensor includes but is not limited to the voice data detected by the voice recognition sensor and the data obtained after processing and recognizing the voice data, such as voice control instructions, keyword instructions, wake-up word instructions, etc.
- the external sensor data includes but is not limited to inertial sensor data, physiological data sensor data (such as heart rate belt data, body fat scale data, etc.) and other data detected by sensors wirelessly connected to the smart fitness mirror.
- the data detected by the inertial sensor includes but is not limited to speed/acceleration data detected by the inertial sensor and data obtained by processing and identifying the speed/acceleration data, such as user action data.
- the data detected by the physiological data sensor includes but is not limited to the physiological data detected by the physiological data sensor and the data obtained after processing and identifying the physiological data, such as heart rate data, blood pressure data, blood oxygen saturation data, etc.
- the background data includes but is not limited to background user data and other background data.
- the background user data is the data directly related to the user stored by the smart fitness mirror and/or the server connected to the smart fitness mirror.
- the background user data includes but is not limited to user physiological data, user social data, user calendar data, usage history data, etc.
- the usage history data includes, but is not limited to, the user's completed course data, collected course data, historical best performance data, physiological data and performance data during the course, frequency of class data, etc.
- the user's physiological data includes but is not limited to the user's height, weight, gender, age, heart rate, blood pressure, injury history, etc.
- the user social data includes but is not limited to data related to the people/friends the user follows and data related to the groups/interest groups the user joins.
- this information can specifically include class invitation data inviting the user to complete a designated course, challenge invitation data inviting the user to a one-on-one or multi-person challenge, rankings among users, and activity updates uploaded by the user or by other users, etc.
- these activity updates can be synchronized to other social platforms (such as WeChat, Weibo, Facebook, Twitter, etc.).
- the user calendar data includes but is not limited to the user's schedule information data, to-do data, course plan data, etc.
- the other background data is data with low relevance to the user stored by the smart fitness mirror and/or the server connected to the smart fitness mirror.
- the other background data includes but is not limited to IP data, environmental data, course data, third-party data, etc.
- the IP data includes but is not limited to the IP address data, Mac address data, etc. of the smart fitness mirror.
- the environmental data includes but is not limited to time data, positioning data, weather data, temperature data, clothing index data, etc. in the area where the IP of the smart fitness mirror is located.
- the course data includes but is not limited to exercise video data, coach name, course type, course duration, course difficulty, course label and/or estimated consumption, etc.
- the types of courses include but are not limited to showing users HIIT, aerobic dance, yoga, strength shaping, combat training, barre, Pilates, stretching, dance, meditation, challenges, etc. through smart fitness mirrors.
- the course labels include but are not limited to showing the user through the smart fitness mirror whether the course is an AI recognition course, whether it is a parent-child course, the body parts mainly involved in the course (such as the whole body, arms, waist and abdomen, legs, etc.), and the props needed in the course (such as no equipment, yoga mat, resistance ring, resistance band, dumbbells, etc.).
- the third-party data includes but is not limited to data of third-party functions installed on the smart fitness mirror.
- the smart fitness mirror may also store user information locally on the smart fitness mirror and/or a remote storage device (eg, a cloud service) depending on the amount of storage space used.
- user information that uses little storage space can be stored locally on a smart fitness mirror, including but not limited to the user's name, age, height, weight, and gender.
- course data can be stored in the smart fitness mirror to reduce the impact of network latency that can affect video streaming quality.
- the amount of video content stored may be limited by the storage capacity of the smart fitness mirror. In some configurations, video content may be stored temporarily only on a daily or weekly basis, or depending on the percentage of smart fitness mirror capacity being used. Background user data using a large amount of storage space can be stored on remote storage devices.
- This user information includes, but is not limited to, physiological data such as the user's heart rate and calories burned, as well as videos or skeletal point data of the user taken during exercise. Smart fitness mirrors can retrieve this information for subsequent analysis and display.
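- The storage policy just described (small profile fields kept locally, bulky workout media kept remotely, course video cached subject to capacity) can be expressed as a simple placement rule. The size threshold and category names below are assumptions for illustration only.

```python
# Sketch: deciding where a piece of user data is stored, following the policy above.
# The 1 MB threshold and the category names are illustrative assumptions.
LOCAL_FIELDS = {"name", "age", "height", "weight", "gender"}

def storage_target(kind: str, size_bytes: int) -> str:
    if kind in LOCAL_FIELDS:
        return "local"          # tiny profile fields stay on the mirror
    if kind == "course_video":
        return "local_cache"    # cached to reduce network latency, evicted by capacity/time
    if size_bytes > 1_000_000:
        return "remote"         # e.g. workout videos, skeletal point recordings
    return "local"

print(storage_target("workout_video", 250_000_000))  # -> "remote"
```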
- the Bluetooth Low Energy protocol includes built-in security features that can be used by devices that utilize the protocol. However, these security features can only be used if the Bluetooth bonding operation is completed before the encrypted connection is established. In some cases, various security mechanisms may not be implemented or may fail; in that case, application-level security can be achieved in combination with the data segmentation specification described above. For example, Advanced Encryption Standard (AES) encryption of the message may be applied before the message preamble is prepended.
- the Bluetooth Low Energy protocol performs a similar process via built-in security features at the firmware level, and can provide similar protection against human reading of communications between client and server.
- the GATT service added for the client to read/notify messages can be removed from the service record on the server device when the client disconnects from the server. This ensures that no connections are left open and the system does not accidentally leak information to malicious snoopers.
- This connection termination can be triggered by the server or the client and relies on the Bluetooth Low Energy stack to provide notification to both parties that the connection has been closed. If Bluetooth binding is used during the initial connection setup to provide firmware-level cryptographic security, the binding information can be stored on each device so that the binding does not need to be repeated after subsequent connections between the client and the server.
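- Returning to the application-level encryption mentioned above, the sketch below encrypts a message payload with AES (AES-GCM from the `cryptography` package) before any preamble or length header is prepended for transport; the key handling, framing layout and field sizes are assumptions for illustration, not this disclosure's protocol.

```python
# Sketch: application-level AES encryption applied before the message preamble is prepended.
# The framing layout (preamble byte + 2-byte length) is an illustrative assumption.
import os
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # in practice, provisioned and stored securely
aead = AESGCM(key)

def frame_message(payload: bytes) -> bytes:
    nonce = os.urandom(12)                  # unique nonce per message
    ciphertext = aead.encrypt(nonce, payload, None)
    body = nonce + ciphertext               # encrypted first...
    return b"\xAA" + struct.pack(">H", len(body)) + body   # ...then preamble and length prepended

frame = frame_message(b'{"heart_rate": 132}')
```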
- Smart fitness mirrors can also implement the following functions based on the received data: user identification, mode switching, attention judgment, body/emotion assessment, and clothing recommendations.
- the user identification function allows the smart fitness mirror to identify the user who is using the smart fitness mirror through the input data.
- This function is mainly realized as follows: the user inputs the user's characteristic data through a data input device, the smart fitness mirror compares the input characteristic data with the characteristic data in a database, determines the user who is using the smart fitness mirror based on the comparison result, and adjusts the graphical user interface (GUI) based on the recognition result, such as adjusting font size, page information preferences, etc.
- the data input device includes but is not limited to on-mirror sensors and external sensors.
- the user's characteristic data includes but is not limited to the user's skeletal points, voiceprint, weight, resting heart rate and other data collected by on-mirror sensors and external sensors, or obtained by processing the collected data; the database records the mapping relationship between users and characteristic data.
- the user identification function can be used to identify and switch multiple sub-accounts under one main account, and can also be used to identify and switch between multiple main accounts.
- in another example, the user identification function allows the smart fitness mirror to identify the characteristics of the user who is using the smart fitness mirror through input data, such as male users, female users, the elderly, and children, and allows the smart fitness mirror to adjust the graphical user interface (GUI) based on the recognition result, such as adjusting font size, page information, etc.
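- One way to realize the user identification just described is to compare an input feature vector (for example, derived from height, weight and resting heart rate, or from skeletal and voiceprint features) against stored per-user characteristic data and pick the closest match within a tolerance. The feature composition, scaling, distance metric and threshold below are illustrative assumptions.

```python
# Sketch: identifying the active user by nearest match against stored characteristic data.
# Feature layout, per-dimension scales and the 0.2 threshold are illustrative assumptions.
import math

USER_DB = {
    "alice": [1.72, 63.0, 58.0],   # e.g. height (m), weight (kg), resting heart rate (bpm)
    "bob":   [1.85, 82.0, 64.0],
}

def identify_user(features, threshold=0.2):
    def dist(a, b):
        scales = (2.0, 100.0, 100.0)   # rough per-dimension scaling before comparison
        return math.sqrt(sum(((x - y) / s) ** 2 for x, y, s in zip(a, b, scales)))
    best = min(USER_DB, key=lambda u: dist(features, USER_DB[u]))
    return best if dist(features, USER_DB[best]) <= threshold else None  # None -> unknown/guest

print(identify_user([1.71, 64.5, 59.0]))  # -> "alice"
```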
- the mode switching function allows the smart fitness mirror to switch between multiple working modes to adapt to different working scenarios.
- for example, the smart fitness mirror can be switched to youth mode, in which the youth account is bound to a parent account or the main device account.
- in youth mode, the page information displayed by the smart fitness mirror is adaptively adjusted; for example, course recommendations are adjusted to give priority to content suitable for teenagers, such as rope skipping and somatosensory games.
- the real-time situation and sports data in this mode can be synchronized to the smart terminal of the parent account or the device's main account.
- the smart fitness mirror is allowed to switch to guest mode. In guest mode, the smart fitness mirror does not need to be bound to an account to work.
- in guest mode, the page information displayed by the smart fitness mirror can also be adapted; for example, course recommendations are adjusted to give priority to content suitable for novices, such as content with a low-difficulty label. At the same time, even if the smart fitness mirror is bound to an account, the exercise data generated in this mode is not counted toward the bound account and does not affect the bound account's training plan or course completion status.
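- The youth-mode and guest-mode behavior above can be summarized as a small policy: filter course recommendations by mode, and decide whether exercise data is credited to the bound account. The course tag names and policy details below are illustrative assumptions.

```python
# Sketch: mode-dependent recommendations and data attribution for youth/guest modes.
# Course tag names and the exact policy are illustrative assumptions.
def recommend(courses: list[dict], mode: str) -> list[dict]:
    if mode == "youth":
        preferred = [c for c in courses if c.get("tag") in ("rope_skipping", "motion_game")]
    elif mode == "guest":
        preferred = [c for c in courses if c.get("difficulty") == "low"]
    else:
        preferred = courses
    return preferred or courses      # fall back to the full list if nothing matches

def sync_to_parent(workout: dict) -> None:
    """Placeholder: push real-time youth-mode data to the parent/main device account."""

def record_workout(mode: str, bound_account: str | None, workout: dict) -> str | None:
    """Return the account the workout is credited to, or None if it is not recorded."""
    if mode == "guest":
        return None                  # guest workouts never affect the bound account
    if mode == "youth":
        sync_to_parent(workout)      # mirrored to the parent account's smart terminal
    return bound_account

print(record_workout("guest", "family_account", {"calories": 180}))  # -> None
```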
- Attention judgment allows the smart fitness mirror to determine whether the user's attention is on the smart fitness mirror while the device is working, so that the smart fitness mirror can judge the user's intention to interact with the mirror and operably activate or deactivate sensors, other data input devices, and functions. Attention judgment is mainly based on the user's behavior to judge whether the user wants to use the smart fitness mirror, and can be based on, but is not limited to, the following characteristics: distance judgment (the distance between the user and the smart fitness mirror does not exceed a threshold), posture judgment (whether the user is facing the front of the smart fitness mirror), action judgment (whether the user has made a specified action or a common exercise action), and semantic judgment (whether the user's voice recognition result is related to specific keywords).
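- The four attention cues listed above (distance, posture, action, semantics) can be combined with a simple rule. The thresholds and the "at least two cues" decision rule below are assumptions for illustration, not values given by this disclosure.

```python
# Sketch: deciding whether the user's attention is on the mirror, using the cues listed above.
# Thresholds and the "at least two cues" rule are illustrative assumptions.
ATTENTION_KEYWORDS = {"workout", "course", "mirror", "start training"}

def attention_on_mirror(distance_m, facing_mirror, made_known_action, last_utterance):
    cues = [
        distance_m is not None and distance_m <= 2.0,             # distance judgment
        bool(facing_mirror),                                       # posture judgment
        bool(made_known_action),                                   # action judgment
        any(k in (last_utterance or "").lower() for k in ATTENTION_KEYWORDS),  # semantic judgment
    ]
    return sum(cues) >= 2

# Example: close by and facing the mirror, but silent and not gesturing -> attention assumed.
print(attention_on_mirror(1.4, True, False, ""))  # -> True
```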
- the physical/emotional assessment allows the smart fitness mirror to determine the user's physical/emotional status when using the device, so that the smart fitness mirror can operatively adjust page information or other feedback based on the user's physical/emotional status.
- the user's physical/emotional condition can be identified through physiological data sensors and image recognition sensors (face recognition).
- when it is recognized that the user's mood is unhappy, the user is presented with soothing page information recommendations and the mirror expresses care for the user during voice interaction; in a further example, when the user wakes up the voice function of the smart fitness mirror with the wake word and the image recognition sensor (facial recognition) detects that the user's expression appears dejected, the smart fitness mirror initiates the first round of dialogue: "Hey, what's wrong? You don't look too happy. How about a song to relax?"
- the identification of the physical/emotional status can be based on recognizing the user's age, body posture, expression and other information to determine states such as mood, stress and fatigue; it can also be based on blood pressure and heart rate data detected by the physiological data sensor.
- clothing recommendations allow smart fitness mirrors to initiate active interactions with users based on specific conditions.
- the smart fitness mirror determines, based on the collected time data and positioning data combined with user information and the user's actions in front of the smart fitness mirror, whether the user needs the smart fitness mirror to initiate active interaction. For example, when the user appears in front of the smart fitness mirror for the first time in the morning and turns around, the smart fitness mirror turns on the clothing recommendation function and recommends clothes suitable for today's wear at the current location. As another example, when the user appears in front of the smart fitness mirror and turns around on the weekend, the smart fitness mirror turns on the clothing recommendation function and outputs outfit suggestions for the user's current clothing.
- the above functions can also be used for other functions of smart fitness mirrors that actively interact based on users and external information.
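- The trigger condition described for the clothing recommendation (first appearance in front of the mirror in the morning, plus a turn-around action) can be expressed as below; the morning hour window and the once-per-day flag are illustrative assumptions.

```python
# Sketch: triggering the clothing recommendation, following the conditions described above.
# The 5:00-11:00 morning window and the once-per-day flag are illustrative assumptions.
from datetime import datetime

_last_triggered_date = None

def maybe_recommend_clothing(now: datetime, user_turned_around: bool, location: str) -> bool:
    global _last_triggered_date
    is_morning = 5 <= now.hour < 11
    first_time_today = _last_triggered_date != now.date()
    if is_morning and first_time_today and user_turned_around:
        _last_triggered_date = now.date()
        # here the mirror would fetch weather for `location` and show outfit suggestions
        return True
    return False

print(maybe_recommend_clothing(datetime(2023, 5, 15, 7, 30), True, "Chengdu"))  # -> True
```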
- An electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor.
- when the processor executes the computer program, the operations of the multi-modal interactive fitness mirror are implemented.
- the processor may be a central processing unit, or other general-purpose processor, digital signal processor, application-specific integrated circuit, off-the-shelf programmable gate array or other programmable logic device, discrete gate or transistor logic device, discrete hardware components etc.
- a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
- the memory may be used to store the computer program and/or module, and the processor implements various functions of the device of the multi-modal interactive fitness mirror in the present disclosure by running or executing data stored in the memory.
- the memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
- the memory may include high-speed random access memory, and may also include non-volatile memory, such as hard disk, memory, plug-in hard disk, smart memory card, secure digital card, flash memory card, at least one disk storage device, flash memory device, or other volatile solid-state storage devices.
- the smart fitness mirror is configured to display video content from the video production factory on the display panel 120 .
- Video content can be streamed as live content or pre-recorded recordings.
- the live broadcast content can also be recorded and stored in the cloud server, so that users can request and play the video content later, thus becoming recorded content.
- when transmitting a video stream, the smart fitness mirror first sends a request for the specified course to the API server.
- the data returned by the API server is divided into two parts: the first part is an overview of the specified course (including course pictures and a course introduction); the second part is the address (URL) of the course video. The smart fitness mirror then requests the corresponding course video from the OSS server based on the received URL.
- when the course is recorded content, the smart fitness mirror uses the HLS protocol for video playback, that is, the address (URL) of the course video is the M3U8 file of the corresponding course, and the player of the smart fitness mirror then downloads the corresponding course video segments from the OSS server according to the entries of the M3U8 file.
- when the course is live content, the smart fitness mirror uses the RTMP protocol for video playback, that is, the address (URL) of the course video is the RTMP live broadcast address of the corresponding course, and the player of the smart fitness mirror then downloads the corresponding live course video data from the OSS server based on the RTMP live broadcast address.
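- The request flow above (course overview plus video URL from the API server, then HLS or RTMP playback from the OSS server) can be sketched as follows; the endpoint path, JSON field names and host are assumptions, since this disclosure does not fix an API schema.

```python
# Sketch of the course-request flow described above.
# The host, endpoint path and JSON field names are illustrative assumptions.
import requests

API_SERVER = "https://api.example.com"      # placeholder host

def fetch_course(course_id: str) -> dict:
    resp = requests.get(f"{API_SERVER}/courses/{course_id}", timeout=5)
    resp.raise_for_status()
    data = resp.json()
    overview = data["overview"]             # course picture and introduction
    video_url = data["video_url"]           # M3U8 file for recorded content, RTMP address for live
    if video_url.endswith(".m3u8"):
        protocol = "HLS"                    # player downloads the listed video segments from OSS
    elif video_url.startswith("rtmp://"):
        protocol = "RTMP"                   # player pulls live data from the broadcast address
    else:
        protocol = "unknown"
    return {"overview": overview, "url": video_url, "protocol": protocol}
```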
- smart fitness mirrors can be connected to online streaming services that provide users with third-party video content streamed from a server (e.g., directly through a network router or indirectly through the user's smart device) .
- Third-party content may be made available to users on a subscription basis.
- Third parties can provide content to a centralized distribution platform that communicates with the smart fitness mirror over the network.
- One benefit of a centralized distribution platform is that distribution of content to smart fitness mirrors is simpler.
- a third party may develop a separate distribution platform, which may use a separate software application on smart devices for users to access the content.
- a computer-readable storage medium stores a computer program.
- when the computer program is executed by a processor, the operations of the multi-modal interactive fitness mirror are implemented.
- the computer storage media of the embodiments of the present disclosure may be any combination of one or more computer-readable media.
- the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
- the computer-readable storage medium may be, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or device, or any combination thereof.
- a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
- the interactive process of this device specifically includes the following operations:
- S121 fitness mirror collects user images in the area and obtains user action images
- S122 generates the user's skeletal point information based on the action image, and establishes a coordinate system based on the skeletal point information;
- S122.1 the skeletal point information includes the top-of-head bone point, the pelvic bone point, the left hand bone point and the right hand bone point;
- S122.21 obtain the straight-line distance between the top-of-head bone point and the pelvic bone point, and use one-sixth of that straight-line distance as the unit length of the coordinate system, that is, one unit length of the coordinate system equals one-sixth of the straight-line distance;
- S122.3 take the straight line passing through the top-of-head bone point and the pelvic bone point as the vertical axis, take the pelvic bone point as the origin, and construct a rectangular coordinate system with the obtained unit length;
- S123 establishes several interactive areas according to the coordinate system
- the first interactive area is Area A in Figure 3
- the second interaction area is area B in Figure 4
- the third interaction area is area C in Figure 4
- the fourth interaction area is area D in Figure 4
- the fifth interaction area is area E in Figure 4 .
- the first interaction area is centered on the bone point on the top of the head, with a length of 5 units in the vertical direction and 6 units in the horizontal direction. That is, the length of the first interaction area is 12 units in length and the height is 10 units in length.
- the first interaction area is centered on the top-of-head bone point, with 5 unit lengths vertically upward as the upper side and 5 unit lengths vertically downward as the lower side; centered on the top-of-head bone point, 6 unit lengths horizontally to the left form the left side and 6 unit lengths horizontally to the right form the right side.
- the preset control instruction of the first interaction area is OK A, that is, when the interaction gesture is located in the first interaction area, the control instruction corresponding to the first interaction area, namely OK A, is output.
- the control command of the second interactive area is to the left, and the control command of the third interactive area is to the right.
- the second interaction area starts from the pelvic bone point, with 2 unit lengths vertically upward as the upper side and 10 unit lengths vertically downward as the lower side; starting from the top-of-head bone point, 3 unit lengths horizontally to the left form the right side and 16 unit lengths horizontally to the left form the left side, forming an area 13 unit lengths long and 12 unit lengths high.
- the third interaction area starts from the pelvic bone point, with 2 unit lengths vertically upward as the upper side and 10 unit lengths vertically downward as the lower side; starting from the top-of-head bone point, 3 unit lengths horizontally to the right form the left side and 16 unit lengths horizontally to the right form the right side, forming an area 13 unit lengths long and 12 unit lengths high.
- the control instruction of the fourth interactive area is OK D
- the control instruction of the fifth interactive area is OK E.
- the fourth interaction area starts from the top-of-head bone point, with 4 unit lengths vertically upward as the upper side and 5 unit lengths vertically downward as the lower side; starting from the top-of-head bone point, 7 unit lengths horizontally to the left form the right side and 16 unit lengths horizontally to the left form the left side, forming an area 11 unit lengths long and 9 unit lengths high.
- the fifth interaction area starts from the top-of-head bone point, with 4 unit lengths vertically upward as the upper side and 5 unit lengths vertically downward as the lower side; starting from the top-of-head bone point, 7 unit lengths horizontally to the right form the left side and 16 unit lengths horizontally to the right form the right side, forming an area 11 unit lengths long and 9 unit lengths high.
- S124 identifies interactive gestures based on skeletal point information
- S124.2 compare the distance between the left hand bone point and the right hand bone point with a threshold T; if the distance between the left hand bone point and the right hand bone point is less than or equal to the threshold T, the interaction gesture is recognized.
- in this embodiment the threshold T is 15 cm; if the distance between the left hand bone point and the right hand bone point is less than or equal to the threshold T, it is judged that the user has made the interaction gesture and the interaction gesture is recognized.
- in this embodiment the interaction gesture is a clap, that is, the user brings both hands together; if the distance between the left hand bone point and the right hand bone point is greater than the threshold T, the user has not made the clap gesture; the specific interaction gesture in this embodiment is only an example and is not limiting.
- S124.3 if the user makes the interaction gesture, the coordinates of the midpoint between the left hand bone point and the right hand bone point are obtained;
- S125 determine whether the midpoint coordinates are located in an interaction area; if the midpoint coordinates are located in one of the interaction areas, that interaction area is in the activated state and the control instruction corresponding to that interaction area is output, while the other interaction areas are in the inactive state.
- the length of an interaction area in the activated state increases by 1.5 unit lengths and its height increases by 0.5 unit lengths; specifically, in this embodiment the length expands by 0.75 unit lengths toward each of the left and right sides, and the height expands by 0.25 unit lengths toward each of the upper and lower ends.
- the length of an interaction area in the inactive state is reduced by 2 unit lengths and its height is reduced by 1 unit length; specifically, the length shrinks by 1 unit length toward each of the left and right sides, and the height shrinks by 0.5 unit lengths toward each of the upper and lower ends.
- before the control instruction corresponding to an interaction area is output, the area of the interaction area does not change; after the control instruction is output, the area of the interaction area is adjusted; after the control instruction is executed, the area of the interaction area returns to its original state.
- when the user makes the interaction gesture in the first interaction area, the fitness mirror recognizes the interaction gesture and outputs the control instruction corresponding to the first interaction area; the area of the first interaction area then expands, that is, the upper side moves vertically upward by 0.25 unit lengths, the lower side moves vertically downward by 0.25 unit lengths, the left side moves horizontally to the left by 0.75 unit lengths, and the right side moves horizontally to the right by 0.75 unit lengths.
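- A compact sketch of the S121–S125 flow above is given below. For brevity it works in image coordinates (rather than the rotated body-aligned coordinate system of S122.3), models only the first interaction area, and treats the threshold T as a pixel distance; these simplifications, and all helper names and example coordinates, are assumptions for illustration.

```python
# Sketch of the gesture interaction flow (S121-S125), simplified to one interaction area.
# Works in image coordinates; coordinate-system rotation and the other areas are omitted.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def recognize_interaction(head_top, pelvis, left_hand, right_hand, t_threshold=40.0):
    """Skeletal points are (x, y) tuples; t_threshold plays the role of T (here in pixels)."""
    unit = dist(head_top, pelvis) / 6.0                     # S122.21: unit length

    # First interaction area: centered on the top-of-head point, 12 units wide, 10 units high.
    area_a = (head_top[0] - 6 * unit, head_top[0] + 6 * unit,
              head_top[1] - 5 * unit, head_top[1] + 5 * unit)

    # S124: the interaction gesture is both hands brought together (a clap).
    if dist(left_hand, right_hand) > t_threshold:
        return None
    mid = ((left_hand[0] + right_hand[0]) / 2, (left_hand[1] + right_hand[1]) / 2)  # S124.3

    # S125: output the instruction of the area containing the midpoint.
    x_min, x_max, y_min, y_max = area_a
    if x_min <= mid[0] <= x_max and y_min <= mid[1] <= y_max:
        return "OK A"            # the first area's preset control instruction
    return None

# Example with made-up pixel coordinates: hands brought together just above the head.
print(recognize_interaction((320, 100), (320, 400), (300, 80), (330, 85)))  # -> "OK A"
```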
- any combination of two or more such features, systems, articles, materials, kits, and/or methods is included within the scope of the present disclosure if such features, systems, articles, materials, kits, and/or methods are not mutually exclusive.
- other substitutions, modifications, changes and omissions may be made in the design, operating conditions and arrangement of the corresponding elements of the exemplary embodiments without departing from the spirit of the present disclosure.
- the use of numerical ranges does not exclude equivalents falling outside the range that perform the same function in the same way to produce the same results.
- implementations may be implemented using hardware, software, or a combination thereof.
- the software code can execute on a suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
- the computer may be embodied in any of a variety of forms, such as a rack computer, a desktop computer, a laptop computer, or a tablet computer.
- a computer may be embedded in a device that is not typically considered a computer but has suitable processing capabilities, including a personal digital assistant (PDA), a smartphone, or any other suitable portable or stationary electronic device.
- a computer may have one or more input and output devices.
- these devices can be used to present user interfaces.
- output devices that may be used to provide a user interface include a printer or display screen for a visual presentation of output and a speaker or other sound-generating device for an audible presentation of output.
- input devices that may be used for user interfaces include keyboards and pointing devices such as mice, touch pads, and digital tablet computers.
- a computer may receive input information through speech recognition or other audible formats.
- Such computers may be suitably interconnected through one or more networks, including local or wide area networks, such as an enterprise network, an Intelligent Network (IN), or the Internet.
- networks may be based on suitable technology, may operate according to suitable protocols, and may include wireless, wired or fiber optic networks.
- the various methods or processes outlined herein may be encoded as software executable on one or more processors employing any of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and may also be compiled into executable machine language code that executes on a framework or virtual machine, or intermediate code. Some implementations may specifically employ one or more of a specific operating system or platform and a specific programming language and/or scripting tool to facilitate execution.
- various concepts may be embodied in one or more methods, at least one example of which has been provided.
- the actions performed as part of a method can, in some cases, be ordered differently. Accordingly, in some disclosed embodiments, the corresponding actions of a given method may be performed in an order different from that specifically shown, which may include performing some actions concurrently (even if such actions are shown as sequential in the exemplary embodiments).
- a reference to “A and/or B” may in one embodiment refer to only A (optionally including elements other than B); in another embodiment, reference is made to only B (optionally including elements other than A); in yet another embodiment, reference is made to both A and B (optionally including other components); etc.
- the phrase "at least one" shall be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed in the list of elements, and not excluding any combination of elements in the list of elements.
- This definition also allows for the optional presence of elements other than those specifically represented within the list of elements referred to by the phrase "at least one" whether or not related to those specifically represented.
- "at least one of A and B" may, in one embodiment, refer to at least one A, optionally including more than one A, with no B present (and optionally including elements other than B); in another embodiment, to at least one B, optionally including more than one B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one A, optionally including more than one A, and at least one B, optionally including more than one B (and optionally including other elements); etc.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Physical Education & Sports Medicine (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure relates to an interactive mirror device, system and method, comprising: a communication interface for receiving video images; a display operably coupled to the communication interface to display a user interface having one or more user interface objects; a mirror having a partially reflective portion that transmits the user interface to a user facing the display so that the user interface appears superimposed on part of the user's image; and a control device configured to send instructions to the fitness mirror so that the fitness mirror modifies the user interface of the display in response to receiving instructions from the control device. By switching among user interfaces having one or more user interface objects on the fitness mirror, the present disclosure enriches the functions of the fitness mirror, improves the user experience of multiple user interface objects, and increases the user's motivation for and interest in fitness.
Description
本公开涉及智能家居及智能健身领域,具体涉及交互式镜面装置、系统及方法。
随着经济的快速发展,人们生活节奏的加快,高速的生活和高负荷的工作使得城市生活的人们身体逐渐出现了各种健康问题,随之健身成为了人们关注的热点。智能健身镜,是集人工智能、硬件内容服务于一身的新型健身产品。
申请号为CN202110946212.9,名称为用于交互式训练和演示的反射视频显示设备及其使用方法的专利公开了一种健身镜并公开了使用健身镜进行训练的方法。
目前,大部分在家健身人员通过智能健身镜观看视频进行学习训练动作,智能健身镜既可以当作镜子又可以看到高画质的教练图像。但是,现有的智能健身镜交互方式麻烦,一般需要通过外部的智能设备(如智能手机)或者触摸屏进行控制,通过外部智能设备进行控制时操作较为复杂,还需要对智能设备和智能健身镜进行配网,而采用触摸控制则容易在镜面上留下指纹和汗水,影响屏幕的显示效果。同时,现在的健身镜体积较大,占用了较多的家庭空间,但是现在的智能健身镜只能显示健身内容及健身相关的数据,在日常生活中使用频率较低。
发明内容
本公开的一个目的在于提供交互式镜面装置、系统及方法,通过在智能健身镜上增加个性化的界面及新的交互方式,优化智能健身镜与用户的交互方式,提高智能健身镜在日常生活中的使用频率。
该目的采用以下技术方案实现:
一种反射视频显示设备(本文也被称为“智能健身镜”和“交互式健身系统”),其被配置为向用户显示诸如由教练带领的预先录制的或直播的锻炼之类的视频内容并提供允许用户交互和使视频内容个性化的界面。该智能健身镜可以是通信地耦合到内容提供商(例如,服务器、云服务)和/或智能装置(例如,智能手机、平板计算机、计算机)的联网装置。该智能健身镜可以包括显示面板和扬声器以向用户输出视频内容和音频。该智能健身镜还可以包括摄像头和传声器以在锻炼期间捕获用户的视频和音频。因此,该智能健身镜可以在锻炼期间实现用户与教练之间的双向通信。通过这种方式,该智能健身镜可以为用户提供一种接收指导锻炼的便捷选择,同时实现与由常规健身房的私人教练员或教练提供的锻炼类似的更大程度的个性化和个人指导。
智能健身镜的一个例子包括用于接收健身教练的视频影像的通信接口、可操作地耦合到通信接口以显示健身教练的视频影像的显示器、以及设置在显示器前面以反射与显示器相对的人员的图像的镜子。该镜子具有部分反射部分以将健身教练的视频影像传输给与显示器相对的人员使得健身教练的视频影像表现为叠加在人员的图像的一部分上。
交互式健身方法的一个例子包括以下操作:(1)将健身内容流式传输到交互式视频系统,该交互式视频系统包括具有部分反射部分的镜子和设置在部分反射部分的一侧的显示器;(2)经由显示器和镜子的部分反射部分向用户显示健身内容;(3)以及利用镜子反射用户的图像使得用户的图像至少部分地叠加在经由显示器和镜子的部分反射部分显示的健身内容上。
使用智能健身镜的方法的一个例子包括以下操作:在部分透射镜子后面的视频显示器上向用户显示健身内容时:(1)利用部分透射镜反射用户的图像;(2)利用附接到用户的心率监测器测量用户的心率;(3)将心率从心率监测器传输到可操作地耦合到视频显示器的天线;(4)在视频显示器上显示用户的心率;以及(5)在视频显示器上显示用户的目标心率。
为了在智能健身镜上增加个性化的界面及新的交互方式,优化智能健身镜与用户的交互方式,提高智能健身镜在日常生活中的使用频率,本公开在为智能健身镜设计了一套操作系统Launcher,具体包括:(1)GUI界面;(2)动作控制的系统控制方法;(3)语音控制的系统控制方法;(4)多模态融合的控制方法;(5)可执行的控制指令;(6)数据采集管理的方法;(7)一些具体的功能管理方法。本公开还在于,为智能健身镜这种设备定制了一套系统及操作方法,使智能健身镜的功能不止于观看健身锻炼视频,还可以作为智能家居的一个重要组成部分便于用户查看更多的信息,提升使用效率。同时,定制化的操作方法客服了传统智能健身镜需要通过触摸进行控制容易在部分反射镜面上留下指纹的弊病,由于智能健身镜的功能包括可以让用户观看健身锻炼视频时对比自己镜像的动作,部分反射镜面上留下指纹后对智能健身镜的功能影响较大,导致大多数用户不愿意使用触摸交互,而使用智能终端对智能健身镜控制又不直观,比较麻烦。而采用动作控制的方法则没有这些问题,同时虽然手势控制在一些其他电子产品上也有应用,不过在智能健身镜上应用时也有突破性的进展,由于智能健身镜包括部分反射镜面,我们在进行动作控制时,能清楚的看到自己正在做的动作,降低误控制、误操作的可能,同时由于动作识别属于智能健身镜的功能,智能健身镜无需额外增加成本就可以实现更高精度的复杂动作识别,使能够完成的控制指令相对于现有的手势控制也丰富很多,最后由于智能健身镜的主要功能是运动健身,将动作控制应用于智能健身镜还需要避免健身动作才来的控制误触,与传统的手势控制也有明显区别。
以下更详细讨论的前述概念和附加概念的所有组合(假设此类概念并不相互矛盾)可以被预期作为本文公开的主题的一部分。具体地,出现在本公开文本结尾的所要求保护的主题的 所有组合都可以被预期是本文公开的主题的一部分。本文明确采用的并且也可以出现在通过引用并入的任何公开内容中的术语应当被赋予与本文公开的特定概念最一致的含义。
本公开提供的一个或多个技术方案,至少具有如下技术效果或优点:本公开通过健身镜上具有一个或多个用户界面对象的用户界面的切换,丰富健身镜的功能,提升多个用户界面对象的用户体验,增加用户健身的积极性和兴趣。
此处所说明的附图用来提供对本公开实施例的进一步理解,构成本申请的一部分,并不构成对本公开实施例的限定。在附图中:
图1为示例性智能健身镜的框图;
图2为智能健身镜上的示例性GUI页面;
图3为智能健身镜上的示例性GUI页面;
图4为智能健身镜上的示例性GUI页面;
图5-a为智能健身镜上的示例性GUI页面;
图5-b为智能健身镜上的示例性GUI页面;
图6为智能健身镜上的示例性GUI页面;
图7为智能健身镜上的示例性GUI页面;
图8为智能健身镜上的示例性GUI页面;
图9为智能健身镜上的示例性GUI页面;
图10为智能健身镜的动作识别示意图。
为了能够更清楚地理解本公开的上述目的、特征和优点,下面结合附图和具体实施方式对本公开进行进一步的详细描述。需要说明的是,在相互不冲突的情况下,本公开的实施例及实施例中的特征可以相互组合。
在下面的描述中阐述了很多具体细节以便于充分理解本公开,但是,本公开还可以采用其他不同于在此描述范围内的其他方式来实施,因此,本公开的保护范围并不受下面公开的具体实施例的限制。
本领域技术人员应理解的是,在本公开的揭露中,术语“纵向”、“横向”、“上”、“下”、“前”、“后”、“左”、“右”、“竖直”、“水平”、“顶”、“底”“内”、“外”等指示的方位或位置关系是基于附图所示的方位或位置关系,其仅是为了便于描述本公开和简化描述,而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此上述术语不能理解为对本公开的限制。
因此,本专利涉及一种交互式镜面装置(也被称为“智能健身镜”和“交互式训练系统”)以及使用交互式训练设备的方法。智能健身镜包括被配置为显示锻炼内容(预先录制的视频或直播流)的显示器以及使得用户能够个性化锻炼的界面。此外,智能健身镜可以允许用户和/或教练在锻炼期间以类似于在其中用户和教练在同一个房间的健身房或小型健身工作室进行的常规锻炼的方式彼此交互(例如,向教练提供关于锻炼节奏的反馈,校正用户在特定健身计划期间的形式)。
一个示例性的智能健身镜。
图1示出了智能健身镜的示例性表示。智能健身镜可以包括处理器110,其用于部分地控制智能健身镜中的各种子部件的操作并且管理流入/流出智能健身镜的数据流(例如,视频内容、来自教练或用户的音频、生物识别反馈分析)。智能健身镜可以包括用于显示视频内容的显示器120、用户可以与其交互并控制智能健身镜的图形用户界面(GUI)、生物识别反馈数据和/或其他视觉内容。传感器130可以耦合到处理器110以采集用户相关数据。天线140可以耦合到处理器110,以在智能健身镜与另一个装置(例如,遥控装置、生物识别传感器、无线路由器)之间提供数据传输。天线140可以包括多个发射器和接收器,每个发射器和接收器针对特定频率和/或无线标准(例如,蓝牙、802.11a、802.11b、802.11g、802.11n、802.11ac、2G、3G、4G、4G LTE、5G)而定制。放大器150可以耦合到处理器110以从处理器110接收音频信号以便通过左扬声器152和/或右扬声器154输出后续声音。
智能健身镜还可以包括图1中未示出的附加部件。例如,智能健身镜可以包括开关模式电源(SMPS)、开关以及板载存储器和存储装置(非易失性和/或易失性存储器),包括但不限于硬盘驱动器(HDD)、固态驱动器(SDD)、快闪存储器、随机存取存储器(RAM)和安全数字(SD)卡。该板载存储器和/或存储装置可以用于存储用于智能健身镜的操作的固件和/或软件。如上所述,板载存储器和/或存储装置还可以用于(临时和/或永久)存储其他数据,包括但不限于视频内容、音频、用户视频、生物识别反馈数据和用户设置。在另一个例子中,智能健身镜可以包括安装和支撑智能健身镜的各种部件。
天线140可以包括多个天线,每个天线用作接收器和/或发射器以与各种外部装置通信,这些外部装置诸如是用户的智能终端(例如,计算机、智能手机、平板计算机)、外置传感器(例如,心率带、惯性传感器、体脂称)和/或用于流式传输或播放视频内容的远程服务器或云服务器。再一次,天线140可以符合各种无线标准,包括但不限于蓝牙、802.11a、802.11b、802.11g、802.11n、802.11ac、2G、3G、4G、4G LTE和5G标准。
传感器130可以包括多个传感器,多个传感器可以分别是图像识别传感器(摄像头)、红外识别传感器或语音识别传感器(麦克风)中的一个或多个。
智能健身镜中的图像识别传感器(摄像头)可以用于在用户进行活动(例如,锻炼)时获取用户的视频和/或静态图像。然后可以向教练分享用户的视频以允许教练在锻炼期间观察并向用户提供指导。还可以向其他智能健身镜的其他用户分享视频以进行比较或竞争。用户的视频也可以实时显示在显示120上或者存储以供以后回放。例如,通过提供用户与教练的视觉比较,用户的视频可以用于锻炼期间或之后的自我评估。存储的视频还可以允许用户在进行类似健身时评估他们随时间变化的进度或改进。图像识别传感器(摄像头)还可以被配置为在用户进行活动(例如,锻炼)时获取用户的动态和/或静态图像并处理为骨骼点图像,并根据骨骼点图像判断用户动作是否与标准动作匹配并输出判断结果,若用户动作与标准动作不匹配,则输出动作矫正信息。还可以在用户使用智能健身镜时或在锻炼完成之后实时处理动态和/或静态图像,以基于用户的运动和动作来对智能健身镜进行控制或导出用户的生物识别数据。
智能健身镜中的红外识别传感器可以用于在用户进行活动(例如,锻炼)时获取用户的热成像图,并根据热成像图判断用户运动时锻炼的部位、发力的部位及消耗的热量,并根据用户运动时锻炼的部位、发力的部位判断用户动作是否与标准动作匹配并输出判断结果,若用户动作与标准动作不匹配,则输出动作矫正信息
智能健身镜中的语音识别传感器(麦克风)可以用于在用户使用智能健身镜时获取实时处理用户的语音信息并,以基于用户的语音信息来对智能健身镜进行控制或与用户进行对话。
图形用户界面(GUI)
智能健身镜可以通过显示器120来显示图形用户界面(GUI)以促进用户与智能健身镜的交互。
图形用户界面(GUI)包括但不限于唤醒页面、主页面、锻炼页面和其他页面等。
其中唤醒页面是指智能健身镜被唤醒时的初始页面,用于展示一些基础的页面信息,只有检测到用户进行指定操作时,智能健身镜被彻底唤醒进入主页面、锻炼页面或其他页面。
其中主页面展示至少一项页面信息,所述页面信息用于向用户展示数据、可操作的交互对象或交互引导。所述主页面用于向用户展示预设的页面信息或用户的自己设置的页面信息,同时用户可以通过在主页面上进行指定操作对智能健身镜进行控制,所述控制包括但不限于页面切换、功能切换或进入第三方接入的应用程序。
其中锻炼页面展示用于播放锻炼视频指导用户进行锻炼的视频播放窗口,还可以展示课程播放信息(如课程总时长、已播放的时长等)、配件连接信息(如心率带连接信息等)和用户生理信息(如心率、血压、消耗的卡路里等)中的至少一种。锻炼页面可以由主页面进 入或唤醒页面进入并退回到主页面或唤醒页面,也可以在智能健身镜开启时直接进入。
其中其他页面展示通过主页面进入的预安装的功能页面(如多媒体播放器页面、天气展示页面、饮食推荐页面等)或第三方接入的应用程序页面(如微信、抖音、喜马拉雅等),其他页面一般由主页面进入或唤醒页面进入并退回到主页面或唤醒页面。
页面信息
图形用户界面(GUI)可以通过页面对象展示页面信息,每个页面对象用于展示至少一种或一组页面信息,所述展示的页面信息可以是存储在智能健身镜本地的存储装置中的信息,也可以是智能健身镜通过通信接口从其他设备处接收到的信息,包括但不限于可切换展示的课程预览信息、用户信息、环境信息、多媒体信息、外部设备信息等。
所述课程预览信息包括但不限于教练名称、课程类型、课程时长、课程难度、课程标签和/或预计消耗等。所述课程类型包括但不限于是通过智能健身镜向用户展示HIIT、有氧舞、瑜伽、力量塑形、格斗训练、Barre、普拉提、拉伸、舞蹈、冥想、挑战等。所述课程标签包括但不限于是通过智能健身镜向用户展示是否为AI识别课程、是否为亲子课程、课程主要涉及的身体部位(如全身、手臂、腰腹、腿部等)及课程中需要使用的道具(如无器械、瑜伽垫、弹力圈、弹力带、哑铃等)等。所述展示课程预览信息的页面对象也称为课程卡片。
所述用户信息包括但不限于使用历史信息、生理信息、社交信息、日历信息等。所述使用历史信息包括但不限于是通过智能健身镜向用户展示已完成的课程、收藏的课程、历史最佳成绩、上课的频率等。所述生理信息包括但不限于是通过智能健身镜向用户展示身高、体重、性别、年龄、心率、血压、伤病史等。所述社交信息包括但不限于是通过智能健身镜向用户展示与用户关注的人/好友相关的信息,也可以是与用户加入的团体/兴趣组相关的信息,这些信息具体的可以是邀请用户完成指定课程的上课邀请、邀请用户进行一对一或多人的挑战邀请、用户间的排名及用户自己或其他用户上传的动态等。所述动态可以被同步到其他社交平台(如微信、微博、Facebook、Twitter等)。所述日历信息包括但不限于是通过智能健身镜向用户展示用户的日程信息、待办事项、课程计划等。所述展示用户信息的页面对象也称为用户卡片。
所述环境信息包括但不限于天气信息。所述天气信息包括但不限于是通过智能健身镜向用户展示天气预报、温度、湿度、穿衣推荐等。所述展示天气信息的页面对象也称为天气卡片。
所述多媒体信息包括但不限于通过智能健身镜播放的音乐、视频的信息。所述展示多媒体信息的页面对象也称为多媒体卡片。
所述外部设备信息包括但不限于通过智能健身镜展示与智能健身镜建立连接的其他健身装置的信息、其他智能家具的信息(如扫地机器人、智能音箱、智能网关等)。所述展示外部设备信息的页面对象也称为设备卡片。
图形用户界面(GUI)还可以通过交互对象展示页面信息,所述交互对象可以是图标、文字、高亮选择、其他可以提示用户进行交互的形式或其组合。所述交互对象用于向用户展示用户可以执行的控制或操作(如页面切换、选择、课程的暂停/播放等)。进一步的,所述交互对象还可以与图形用户界面(GUI)的页面或页面对象进行结合以展示不同的交互功能。
所述页面对象和交互对象除了可以固定在各个页面上,还可以采用在各个页面进行弹窗或滚动播放的方式进行展示。
一些示例性的页面
方案一如图2、3、4、5-a、5-b所示,其中图2是第一主页面,图3是页面切换页面、图4、5-a、5-b分别是第二主页面、第三主页面和第四主页面。第一主页面通过多个页面对象展示多个页面信息,包括天气卡片、多媒体卡片和推荐的课程卡片等,同时第一主页面上还包括在右上角展示的用于进入页面切换页面的交互对象及与课程卡片组合的交互对象,用户可以通过执行与课程卡片组合的交互对象的交互进入第二主页面,也可以通过执行用于进入页面切换页面的交互对象的交互进入页面切换页面,页面切换页面包括第一主页面、第二主页面、第三主页面的预览图并突出显示其中一个页面,同时页面切换页面上方还展示了返回和确认进入突出显示页面的交互对象及与页面切换页面下方还展示了切换突出显示页面的交互对象。用户可以通过执行切换突出显示页面的交互对象的交互切换突出显示的页面,并通过确认进入突出显示页面的交互对象的交互进入选中的页面。当用户选中并进入第二主页面时,第二主页面包括多个课程卡片并突出显示其中一个课程卡片,同时第二主页面上方还展示了返回和确认进入突出显示课程卡片课程的交互对象及与页面切换页面下方还展示了切换突出显示课程卡片的交互对象,用户可以通过执行切换突出显示课程卡片的交互对象的交互切换突出显示的课程卡片,并通过执行确认进入突出显示课程卡片课程的交互对象的交互进入选中的课程卡片的锻炼页面。当用户选中并进入第三主页面时,第三主页面包括多个并列显示的页面对象,同时第三主页面还展示了选中的页面对象及进入选中的页面对象的交互对象,以及第三主页面下方还展示了切换页面对象的交互对象,用户可以执行通过切换页面对象的交互对象的交互切换显示的页面对象,并通过执行进入选中的页面对象的交互对象的交互进入选中的页面对象对应的页面。
方案二如图6、7、8、9所示,其中图6是第五主页面,图7是第六主页面、图8、9分 别是第七主页面、第八主页面。第五主页面通过多个页面对象展示多个页面信息,包括天气卡片、多媒体卡片和推荐的课程卡片等,同时第五主页面上还包括在下方展示的用于切换主页面的交互对象及与课程卡片组合的交互对象,用户可以通过执行与课程卡片组合的交互对象的交互进入第八主页面,也可以通过执行用于切换主页面的交互对象的交互切换显示的主页面。当用户选择切换显示的主页面为第六主页面时,第六主页面包括多个并列显示的页面对象,同时第六主页面还展示了选中的页面对象及进入选中的页面对象的交互对象,以及第六主页面下方还展示了用于切换主页面的交互对象,用户可以通过执行用于切换主页面的交互对象的交互切换显示的主页面,并通过执行进入选中的页面对象的交互对象的交互进入选中的页面对象对应的页面。当用户选择切换显示的主页面为第七主页面时,第七主页面包括多个并列显示的页面对象,同时第七主页面还展示了选中的页面对象及进入选中的页面对象的交互对象,以及第七主页面下方还展示了用于切换主页面的交互对象,用户可以通过执行用于切换主页面的交互对象的交互切换显示的主页面,并通过执行进入选中的页面对象的交互对象的交互进入选中的页面对象对应的页面。当用户选择进入选中的页面对象对应的页面为第八主页面时,第八主页面包括多个并列显示的课程卡片,同时第八主页面还展示了选中的课程卡片及进入选中的课程卡片的交互对象,以及第八主页面下方还展示了用于切换课程卡片的交互对象,用户可以通过执行用于切换课程卡片的交互对象的交互切换显示的课程卡片,并通过执行进入选中的课程卡片的交互对象的交互进入选中的课程卡片对应的锻炼页面。
页面切换页面包括第一主页面、第二主页面、第三主页面的预览图并突出显示其中一个页面,同时页面切换页面上方还展示了返回和确认进入突出显示页面的交互对象及与页面切换页面下方还展示了切换突出显示页面的交互对象。用户可以通过执行切换突出显示页面的交互对象的交互切换突出显示的页面,并通过确认进入突出显示页面的交互对象的交互进入选中的页面。当用户选中并进入第二主页面时,第二主页面包括多个课程卡片并突出显示其中一个课程卡片,同时第二主页面上方还展示了返回和确认进入突出显示课程卡片课程的交互对象及与页面切换页面下方还展示了切换突出显示课程卡片的交互对象,用户可以通过执行切换突出显示课程卡片的交互对象的交互切换突出显示的课程卡片,并通过执行确认进入突出显示课程卡片课程的交互对象的交互进入选中的课程卡片的锻炼页面。当用户选中并进入第三主页面时,第三主页面包括多个并列显示的页面对象,同时第三主页面还展示了选中的页面对象及进入选中的页面对象的交互对象,以及第三主页面下方还展示了切换页面对象的交互对象,用户可以通过切换页面对象的交互对象的交互切换显示的页面对象,并通过执行进入选中的页面对象的交互对象的交互进入选中的页面对象对应的页面。
页面动效
图形用户界面(GUI)还包括页面动效,所述页面动效包括但不限于页面切换动效和页面交互动效。
其中页面切换动效包括翻页动效、返回上一级、进入下一级动效等,页面切换动效主要是针对页面操作和变化的动效,目的是使页面操作和变化在显示时具有更好的用户体验。
其中页面交互动效包括选择确认动效、动作引导动效等,页面交互动效主要是针对用户在进行交互时为用户提供引导,如提示用户当前正在进行的交互、交互的进度以及提示用户进行交互时需要进行的操作。进一步具体包括但不限于,在用户进行交互时对涉及的页面信息进行高亮或闪烁、在涉及的页面信息上显示进度条等。还包括在进行动作控制时,对用户正在做的动作进行识别并显示提示动作或建议动作等。还包括在进行语音控制时,根据用户的语音指令进行补全提示等。
智能健身镜控制
用户可以直接与智能健身镜对接来控制智能健身镜(例如,动作命令、语音命令、触摸命令等)。可以提供在显示器120上显示的图形用户界面(GUI)以促进用户与智能健身镜的交互。
动作命令控制(包括“体式控制”和“手势控制”)
智能健身镜可以根据图像识别传感器采集到的用户动作对图形用户界面(GUI)进行控制。
具体的,动作命令控制的动作包括但不限于静态动作和动态动作。静态动作是指用户将特定的身体部位做出特定的姿势并维持指定时间;动态动作是指用户将特定的身体部位做出特定的动作。
具体的,静态动作包括但不限于举起左/右手、抬起左/右腿、双手举过头顶比心以及各种静态手势等,维持指定时间的范围一般为0-5秒或更长,只要是通过人体的特定部位相对于空间、人体整体或其他特定部位间的位置关系确定,并有维持的指定时间的动作,均应落入静态动作的保护范围。进一步具体的如:左手打直与水平面呈15°/30°角维持1.5秒、左手与右手呈直角维持1秒、左手向前伸出维持0.1秒等。
具体的,动态动作包括但不限于水平滑动左/右手、上下滑动左/右手、双手击掌、跳起、蹲下以及各种动态手势等,只要是通过人体的特定部位相对于空间、人体整体或其他特定部位的移动轨迹确定的动作均应落入动态动作的保护范围。进一步具体的如:水平滑动左手使左手从身体左侧移动到身体右侧、水平滑动左手使左手从身体右侧移动到身体左侧、水平滑动右手使右手从身体右侧移动到身体左侧、上下滑动左手使左手从肩部以上移动到腰部以下、上下滑动左手使左手从人体整体的上20%处以上移动到上40%处以下、双手在头顶击掌、双手 在左侧肩部以上击掌等。
具体的,动作命令控制执行的控制指令包括但不限于对非锻炼页面(唤醒页面、主页面、其他页面)的控制指令和对锻炼页面的控制指令。对非锻炼页面的控制指令主要涉及用户与非锻炼页面的交互指令。对锻炼页面的控制指令主要涉及用户对锻炼课程播放的控制指令
具体的,对非锻炼页面的控制指令包括但不限于非锻炼页面的切换、上一页、下一页、确定、返回、选择课程、课程收藏、加入计划、进入页面切换界面、唤醒语音助手、多媒体音量控制等。
具体的,对锻炼页面的控制指令包括但不限于课程暂停、课程播放、上一环节、下一环节、课程背景音乐音量控制、课程教练音量控制、课程评价等。
具体的,用户还可以通过多个动作的组合实现不同的功能。具体的包括但不限于使用第一动作进入第一级控制指令菜单,然后使用第二动作在第一级控制指令菜单中进行指令选择或进入第二级级控制指令菜单,所述控制指令菜单可以在显示器上展示给用户。进一步的具体的包括但不限于第一级控制指令菜单可以是课程选择控制指令菜单,包括播放课程、收藏课程、显示课程详情、预约课程和进入第二级级控制指令菜单指令,第二级级控制指令菜单可以是课程评价菜单,包括好评、中评、差评指令。
具体的,还包括在图形用户界面(GUI)对用户动作命令控制进行提示。具体的包括但不限于,在图形用户界面(GUI)显示可操作的动作的图标、在图形用户界面(GUI)对用户静态动作指向的控制指令进行标记或突出显示、在图形用户界面(GUI)对用户静态动作维持时间的进度条进行显示、在图形用户界面(GUI)对用户动态动作进行指引(当用户动态动作完成度超过指定比例时)等。
具体的,还包括智能健身镜可以根据图像识别传感器采集到的用户动作对图形用户界面(GUI)进行控制时依赖其他传感器提供辅助判断。具体的包括但不限于通过语音识别传感器开启/中断/确认动作命令控制、通过面部识别传感器判断用户的正面或背面(仅当用户正面面对智能健身镜时开启动作命令控制)、通过面部识别传感器区分不同用户以加载自定义的动作命令控制指令、通过IMU优化动作/轨迹识别精度等。
具体的,动作命令控制的动作可以是预先在智能健身镜中设定好的动作,也可以是用户自己录制的动作。动作命令控制执行的控制指令可以是预先在智能健身镜中设定好的指令,也可以是用户自己设置的自动化指令。动作命令控制的动作和执行的控制指令可以是预先设置好的对应关系,也可以是用户自己设置的对应关系,可以是一对一的关系,也可以是多对一的关系。
具体的,动作命令控制功能的开启或关闭可以由特定的图形用户界面(GUI)或锻炼页面决 定,如在图形用户界面(GUI)的主页面默认开启动作命令控制功能而在其他页面默认关闭动作命令控制功能;动作命令控制功能的开启或关闭还可以根据其他传感器检测到的数据进行决定;动作命令控制功能的开启或关闭还可以是全部功能的开启/关闭或部分功能的开启/关闭。
一个具体的智能健身镜可以根据图像识别传感器采集到的用户动作对图形用户界面(GUI)进行控制的示例,智能健身镜首先通过显示器120显示图形用户界面(GUI)的第一页面,第一页面上显示有举左手进入页面切换模式的第一提示图标,用户举起左手,第一提示图标进入高亮状态并开启进度条,1.5秒后,进度条完成,智能健身镜通过显示器120显示用于切换页面显示的页面切换页面,切换页面上除突出显示的第一页面,还包括第一页面的上一页“第二页面”和第一页面的下一页“第三页面”,切换页面上还显示有上一页、下一页、退出和确认的提示图标,其中上一页的提示图标代表左手从左往右水平挥动的动作,下一页的提示图标代表右手从右往左水平挥动的动作,退出的提示图标代表举左手,确定的提示图标代表举右手,用户此时做出左手从左往右水平挥动,切换页面变为突出显示第二页面,此时用户举起右手,代表确认的图标进入高亮状态并开启进度条,1.5秒后,进度条完成,智能健身镜通过显示器120显示第二页面。
一个具体的智能健身镜可以根据图像识别传感器采集到的用户动作对图形用户界面(GUI)进行控制的示例,智能健身镜首先通过显示器120显示图形用户界面(GUI)的第四页面,第四页面上显示有代表左手从左往右水平挥动进入上一页的第二提示图标和代表右手从右往左水平挥动进入下一页的第三提示图标,用户此时做出右手从右往左水平挥动的动作,智能健身镜通过显示器120显示由第四页面转入第五页面。
一个具体的智能健身镜可以根据图像识别传感器采集到的用户动作对图形用户界面(GUI)进行控制的示例,智能健身镜首先通过显示器120显示如图5-a的第三主页面,第三主页面包括多个并列显示的页面对象,同时第三主页面还展示了选中的页面对象及进入选中的页面对象的交互对象,以及第三主页面下方还展示了切换页面对象的交互对象,用户可以通过切换页面对象的交互对象的交互切换显示的页面对象,并通过执行进入选中的页面对象的交互对象的交互进入选中的页面对象对应的页面。用户可以通过右手/左手向左斜上方、左斜下方、右斜上方、右斜下方举起来选择选中的页面对象,并在选择完成后维持住右手/左手的姿态,同时使手部做出指定的静态动作如五指张开1秒即可进入选中的页面对象对应的页面。同时,用户可以做出右手从左往右水平挥动的动作切换显示的页面对象,进入第四主页面。
进一步的,一个具体的智能健身镜悬浮动作控制示例,智能健身镜首先通过显示器120显示如图5-a的第三主页面,第三主页面包括四宫格排布并列显示的页面对象,用户使用智能健身镜时,根据用户右手的位置确定用户选中的页面对象,即当用户的右手处于人体右侧 肩部以上时,选中页面对象“天气”、当用户的右手处于人体右侧肩部以下时,选中页面对象“提醒”、当用户的右手处于人体左侧肩部以上时,选中页面对象“音乐”、当用户的右手处于人体右侧肩部以下时,选中页面对象“体脂秤”。当用户右手动作时,智能健身镜根据识别到的右手位置判断用户选择的页面对象,当右手停止运动并超过一定的时间后,执行预设的针对选中的页面对象的交互,所述预设的针对选中的页面对象的交互可以是打开该页面对象的功能,也可以是大概该页面对象的上下文菜单(contextual menu)。所述四宫格排布并列显示的页面对象也可以是左右排布、六宫格排布、九宫格排布等排布方式。
所述动作可以设置有个人偏好(左右撇子、残疾等),同时具备有单手和双手的不同模式,包括左/右手模式、双手同时进行的模式(如左手选择,右手切屏)。
所述第一页面、第二页面、第三页面、第四页面和第五页面可以分别是唤醒页面、主页面、锻炼页面或其他页面。
语音命令控制
智能健身镜可以根据语音识别传感器采集到的用户语音对图形用户界面(GUI)进行控制。
具体的,用户可以通过语音命令控制的语音识别结果来代替动作命令控制的动作并实现相同的控制效果。
具体的,动作命令控制执行的控制指令包括但不限于对非锻炼页面(唤醒页面、主页面、其他页面)的控制指令和对锻炼页面的控制指令。对非锻炼页面的控制指令主要涉及用户与非锻炼页面的交互指令。对锻炼页面的控制指令主要涉及用户对锻炼课程播放的控制指令
具体的,对非锻炼页面的控制指令包括但不限于非锻炼页面的切换、上一页、下一页、确定、返回、选择课程、课程收藏、加入计划、进入页面切换界面、唤醒语音助手、多媒体音量控制等。
具体的,对锻炼页面的控制指令包括但不限于课程暂停、课程播放、上一环节、下一环节、课程背景音乐音量控制、课程教练音量控制、课程评价等。
具体的,用户还可以通过多个动作的组合实现不同的功能。具体的包括但不限于使用第一语音进入第一级控制指令菜单,然后使用第二语音在第一级控制指令菜单中进行指令选择或进入第二级级控制指令菜单,所述控制指令菜单可以在显示器上展示给用户。进一步的具体的包括但不限于第一级控制指令菜单可以是课程选择控制指令菜单,包括播放课程、收藏课程、显示课程详情、预约课程和进入第二级级控制指令菜单指令,第二级级控制指令菜单可以是课程评价菜单,包括好评、中评、差评指令。
具体的,还包括在图形用户界面(GUI)对用户语音命令控制进行提示。具体的包括但不限 于,在图形用户界面(GUI)显示可使用的语音的关键词、在图形用户界面(GUI)对用户语音指令进行补全(当用户动态动作完成度超过指定比例时)等。
具体的,还包括智能健身镜可以根据语音识别传感器采集到的用户语音对图形用户界面(GUI)进行控制时依赖其他传感器提供辅助判断。具体的包括但不限于通过动作识别传感器开启/中断/确认动作命令控制、通过面部识别传感器判断用户的正面或背面(仅当用户正面面对智能健身镜时开启动作命令控制)、通过面部识别传感器区分不同用户以加载自定义的语音命令控制指令等。
具体的,语音命令控制的关键词可以是预先在智能健身镜中设定好的关键词,也可以是用户自己录制的关键词。语音命令控制执行的控制指令可以是预先在智能健身镜中设定好的指令,也可以是用户自己设置的自动化指令。语音命令控制的关键词和执行的控制指令可以是预先设置好的对应关系,也可以是用户自己设置的对应关系,可以是一对一的关系,也可以是多对一的关系。
具体的,语音命令控制功能的开启或关闭可以由特定的图形用户界面(GUI)或锻炼页面决定,如在图形用户界面(GUI)的主页面默认开启语音命令控制功能而在其他页面默认关闭动作命令控制功能;语音命令控制功能的开启还可以通过识别唤醒词进行;语音命令控制功能的开启或关闭还可以根据其他传感器检测到的数据进行决定;语音命令控制功能的开启或关闭还可以是全部功能的开启/关闭或部分功能的开启/关闭。
一个具体的智能健身镜可以根据图像识别传感器采集到的用户动作对图形用户界面(GUI)进行控制的示例,智能健身镜首先通过显示器120显示图形用户界面(GUI)的第一页面,用户通过唤醒词开启语音命令控制功能,第四页面上显示有代表语音命令控制功能开启图标,用户此时说出“进入切换页面”或“页面切换”的指令,智能健身镜通过显示器120显示用于切换页面显示的页面切换页面,切换页面上除突出显示的第一页面,还包括第一页面的上一页“第二页面”和第一页面的下一页“第三页面”,切换页面上还显示有代表语音命令控制功能开启图标,用户此时说出“上一页”或“往前翻一页”的指令,切换页面变为突出显示第二页面,此时用户说出“进入”或“确定”的指令,智能健身镜通过显示器120显示第二页面。
一个具体的智能健身镜可以根据图像识别传感器采集到的用户动作对图形用户界面(GUI)进行控制的示例,智能健身镜首先通过显示器120显示图形用户界面(GUI)的第四页面,用户通过唤醒词开启语音命令控制功能,第四页面上显示有代表语音命令控制功能开启图标,用户此时说出“下一页”或“往后翻一页”的指令,智能健身镜通过显示器120显示由第四页面转入第五页面。
一个具体的智能健身镜可以根据图像识别传感器采集到的用户动作对图形用户界面(GUI)进行控制的示例,智能健身镜首先通过显示器120显示图形用户界面(GUI)的第六页面,用户通过动作命令控制开启语音命令控制功能,第四页面上显示有代表语音命令控制功能开启图标,用户此时说出“开始训练”或“确定”的指令,智能健身镜通过显示器120显示由第六页面转入第七页面。
所述第一页面、第二页面、第三页面、第四页面和第五页面可以分别是唤醒页面、主页面、锻炼页面或其他页面,第六页面是用来展示锻炼卡片的页面,第七页面是锻炼页面。
触摸命令控制
智能健身镜可以根据触摸屏采集到的用户触控指令对图形用户界面(GUI)进行控制。
多模态命令控制
多模态命令控制包括组合控制、增强控制和冲突控制。
其中组合控制包括但不限于多个动作的动作组合控制及多个关键词的语音组合控制,可以通过连续(间隔时间不超过阈值)的多个动作或关键词实现较为复杂的控制指令(如选中并收藏、选中并推荐给好友、截图并分享等)。
其中组合控制包括但不限于通过面部识别或场景识别对用户的动作指令控制或语音指令控制进行增强优化,进一步的包括但不限于用户的动作指令控制或语音指令控制为播放音乐时,识别到用户面部表情为不开心,为用户呈现舒缓心情的推荐结果同时对用户进行关心;用户的动作指令控制或语音指令控制为推荐课程时,识别到场景温度较低,减少冥想类课程的推荐。
其中冲突控制包括但不限于当用户使用多个不同的控制方法同时对智能健身镜进行控制时,出现了多个冲突的控制指令,通过对不同的控制方法进行优先级判断,确定用户希望执行的控制指令并由智能健身镜进行执行。
控制指令
控制指令包括但不限于系统唤醒指令、页面切换指令、页面交互指令和课程控制指令,用户可以通过这些控制指令实现对智能健身镜的控制。
其中系统唤醒指令包括将智能健身镜从待机状态唤醒到唤醒页面、主页面、锻炼页面、其他页面的指令和将智能健身镜从唤醒页面切换到主页面、锻炼页面、其他页面的指令。
其中页面切换指令是指用于控制智能健身镜在多个不同的页面中切换的指令,所述不同 的页面可以是类型不同的页面,也可以是内容不同的页面。进一步的页面切换指令包括但不限于翻页指令(如下一页面/上一页面)和控制智能健身镜进入页面切换页面的指令。
其中页面交互指令是指用于控制智能健身镜执行交互或输入的指令,所述页面交互指令包括但不限于控制交互指令和输入交互指令。进一步的所述控制交互指令是指控制图形用户界面(GUI)或图形用户界面(GUI)中的页面对象或交互对象实现其交互功能的指令,如确认/返回、加入收藏、加入计划、唤醒语音助手、音量控制、切换账号、切换模式、打开上下文菜单(contextual menu)等。进一步的所述输入交互指令是指在图形用户界面(GUI)或图形用户界面(GUI)中的页面对象或交互对象中输入信息的指令,如在进行课程评价时,每输入一次指令加一颗心;或在通过智能健身镜与其他用户进行聊天时,每输入一次指令切换一个预设表情/回复等
其中课程控制指令是指用于控制智能健身镜在锻炼页面播放锻炼视频时的控制指令,主要用于对锻炼视频的播放进行控制。进一步的课程控制指令包括但不限于暂停/播放、上一节/下一节、背景音量控制、教练音量控制、课程评价等。
数据管理
智能健身镜的数据源包括但不限于传感器数据和后台数据,所述传感器数据和后台数据用于用户控制指令的输入和页面信息的展示。
其中传感器数据包括但不限于镜上传感器数据和外置传感器数据,所述镜上传感器数据包括但不限于图像识别传感器、红外识别传感器和语音识别传感器等安装在智能健身镜上的传感器检测到的数据。进一步具体的,图像识别传感器检测到的数据包括但不限于图像识别传感器检测到的图像和对图像处理识别后得到的数据,如用户动作数据、用户姿态数据、面部识别数据、用户表情数据和环境图像数据等。进一步具体的,红外识别传感器检测到的数据包括但不限于红外识别传感器检测到的热成像图及对热成像图处理识别后得到的数据,如用户热成像数据、用户锻炼部位数据、环境热成像数据等。进一步具体的,语音识别传感器检测到的数据包括但不限于音识别传感器检测到的语音数据及对语音数据处理识别后得到的数据,如语音控制指令、关键词指令、唤醒词指令等。所述镜上传感器数据包括但不限于惯性传感器数据、生理数据传感器数据(如心率带数据、体脂称数据等)等与智能健身镜通过无线连接的传感器检测到的数据。进一步具体的,惯性传感器检测到的数据包括但不限于惯性传感器检测到的速度/加速度数据和对速度/加速度数据处理识别后得到的数据,如用户动作数据等。进一步具体的,生理数据传感器检测到的数据包括但不限于生理数据传感器检测到的生理数据和对生理数据处理识别后得到的数据,如心率数据、血压数据、血氧饱和度数 据等。
其中后台数据包括但不限于后台用户数据和后台其他数据,所述后台用户数据为智能健身镜和/或与智能健身镜连接的服务器存储的与用户直接关联的数据,所述后台用户数据包括但不限于用户生理数据、用户社交数据、用户日历数据和使用历史数据等。所述使用历史数据包括但不限于用户展示已完成的课程数据、收藏的课程数据、历史最佳成绩数据、课程过程中的生理数据和表现数据、上课的频率数据等。所述用户生理数据包括但不限于用户的身高、体重、性别、年龄、心率、血压、伤病史等。所述用户社交数据包括但不限于用户关注的人/好友相关的数据和用户加入的团体/兴趣组相关的数据,这些信息具体的可以是邀请用户完成指定课程的上课邀请数据、邀请用户进行一对一或多人的挑战邀请数据、用户间的排名及用户自己或其他用户上传的动态数据等。所述动态可以被同步到其他社交平台(如微信、微博、Facebook、Twitter等)。所述用户日历数据包括但不限于用户的日程信息数据、待办事项数据、课程计划数据等。所述后台其他数据为智能健身镜和/或与智能健身镜连接的服务器存储的与用户低关联度的数据,所述后台其他数据包括但不限于IP数据、环境数据、课程数据和第三方数据等。其中IP数据包括但不限于智能健身镜的IP地址数据、Mac地址数据等。其中环境数据包括但不限于智能健身镜的IP所在的区域的时间数据、定位数据、天气数据、温度数据、穿衣指数数据等。所述课程数据包括但不限于锻炼视频数据、教练名称、课程类型、课程时长、课程难度、课程标签和/或预计消耗等。所述课程类型包括但不限于是通过智能健身镜向用户展示HIIT、有氧舞、瑜伽、力量塑形、格斗训练、Barre、普拉提、拉伸、舞蹈、冥想、挑战等。所述课程标签包括但不限于是通过智能健身镜向用户展示是否为AI识别课程、是否为亲子课程、课程主要涉及的身体部位(如全身、手臂、腰腹、腿部等)及课程中需要使用的道具(如无器械、瑜伽垫、弹力圈、弹力带、哑铃等)等。所述第三方数据包括但不限于安装在智能健身镜上的第三方功能的数据。
数据存储
智能健身镜还可以取决于所使用的存储空间量在智能健身镜和/或远程存储装置(例如,云服务)上本地存储用户信息。例如,使用很少存储空间的用户信息可以本地存储在智能健身镜上,这些用户信息包括但不限于用户的姓名、年龄、身高、体重和性别。另外,课程数据也可以存储在智能健身镜中以降低可能影响视频流质量的网络等待时间的影响。所存储的视频内容量可能受到智能健身镜的存储容量的限制。在一些配置中,视频内容可以仅每天或每周临时存储,或者取决于所使用的智能健身镜容量的百分比。使用大量存储空间的后台用户数据可以存储在远程存储装置上,这些用户信息包括但不限于生理数据,诸如用户的心率和 消耗的卡路里以及在锻炼期间拍摄的用户的视频或骨骼点数据。智能健身镜可以检索该信息以用于后续分析和显示。
可以通过各种方式保护(例如,加密)智能健身镜与远程存储装置之间的数据传输,以防止用户信息的非所需丢失或被盗。例如,低功耗蓝牙协议包括可以由利用该协议的装置使用的内置安全特征。然而,只有在与加密建立连接之前完成蓝牙绑定操作时,才能使用这些安全特征。在一些情况下,可能未实现各种安全机制或各种安全机制可能发生故障,此时可以结合上述数据的分块规范来实现应用级安全性。例如,可以在前置消息的前导码之前应用消息的高级加密标准(AES)加密。在一些方面,低功耗蓝牙协议经由固件级别的内置安全特征执行类似过程,并且可以提供类似保护以防止人员阅读客户端与服务器之间的通信。
当客户端与服务器断开连接时,可以从服务器装置上的服务录制中删除为客户端添加的用于读取/通知消息的GATT服务。这确保了没有任何连接被打开,并且系统不会意外地将信息泄露给邪恶的窥探者。这种连接终止可以由服务器或客户端触发,并依赖于低功耗蓝牙堆栈以向双方提供连接已关闭的通知。如果在初始连接设置中使用蓝牙绑定以提供固件级加密安全性,则可以将绑定信息存储在每个装置上,使得在客户端与服务器之间的后续连接之后不需要重复绑定。
功能管理
智能健身镜还可以根据接收到的数据实现以下功能:用户识别、模式切换、注意力判断、身体/情绪评估、穿衣推荐。
其中用户识别功能允许智能健身镜通过输入的数据识别出正在使用智能健身镜的用户,该功能主要通过以下方法实现:用户通过数据输入装置输入用户的特征数据,智能健身镜将用户输入的特征数据与数据库中的特征数据对比,根据对比结果确定正在使用智能健身镜的用户,根据识别结果允许智能健身镜对图形用户界面(GUI)进行调整,如调整字体大小、页面信息偏好等。其中数据输入装置包括但不限于镜上传感器数据和外置传感器,用户的特征数据包括但不限于用户的骨骼点、声纹、体重、静息心率等镜上传感器数据和外置传感器采集到或采集后经过处理得到的数据,所述数据库中记录了用户和特征数据的映射关系。进一步的,用户识别功能可以被用于一个主账号下的多个子账号的识别和切换,也可以用于多个主账号之间的识别和切换。在另一个示例中,用户识别功能允许智能健身镜通过输入的数据识别出正在使用智能健身镜的用户的特征,如男性用户、女性用户、老人、小孩,根据识别结果允许智能健身镜对图形用户界面(GUI)进行调整,如调整字体大小、页面信息等。
其中模式切换功能允许智能健身镜在多个工作模式中切换以适应不同的工作场景。例如, 智能健身镜被允许切换为青少年模式,青少年模式中,青少年账号与父母账号或设备主账号绑定,在青少年模式下,智能健身镜展示的页面信息会进行适应性的调整,如课程推荐会调整为优先呈现适合青少年的内容,如跳绳、体感游戏等,同时,在该模式下的实时情况和运动数据可以被同步至父母账号或设备主账号的智能终端。在另一个示例中,智能健身镜被允许切换为访客模式,访客模式中,智能健身镜不需要与账号绑定也可以工作,智能健身镜展示的页面信息也可以进行适应性的调整,如课程推荐会调整为优先呈现适合新手的内容,如难度标签为低的内容等,同时,即使智能健身镜与账号绑定,在该模式下的运动数据不会计入绑定的账号,不会影响绑定账号的训练计划或完课情况。
其中注意力判断允许智能健身镜确定设备工作时用户的注意力是否在智能健身镜上,以便于智能健身镜判断用户与镜子的交互意图同时可操作的激活或关闭传感器、其他数输入装置以及功能。注意力判断主要是根据用户的表现来判断用户是否希望使用智能健身镜,可以但不限于通过以下特征进行判断:距离判断(用户与智能健身镜的距离不超过阈值)、体态判断(用户是否正面面对智能健身镜)、动作判断(用户是否做出了指定的动作或锻炼常用的动作)、语义判断(用户语音的识别结果是否与特定关键词相关)等。
其中身体/情绪评估允许智能健身镜确定用户使用设备时的身体/情绪状况,以便于智能健身镜根据用户的身体/情绪状况可操作的调整页面信息或其他反馈。用户的身体/情绪状况可以通过生理数据传感器和图像识别传感器(面部识别)进行识别。当识别到用户情绪为不开心,为用户呈现舒缓心情的页面信息推荐结果同时在语音交互时对用户进行关心;进一步的一个实例,用户通过唤醒词唤醒智能健身镜的语音功能时,智能健身镜通过图像识别传感器(面部识别)识别到用户的表情较为沮丧,智能健身镜发起第一轮对话“hey,怎么啦你看起来不太高兴哦,要不要听一首歌来放松一下”。所述身体/情绪状况的识别可以通过对用户的年龄、身体姿态、表情等信息的识别,来判别用户的情绪、压力、疲劳等身体/情绪的状态;还可以通过生理数据传感器检测到的血压、心率数据来进行识别。
其中穿衣推荐允许智能健身镜基于特定条件向用户发起主动交互。智能健身镜根据采集到的时间数据、定位数据,结合用户信息及用户在智能健身镜前的动作判断用户是否需要智能健身镜发起主动交互。如,在用户早上第一次出现在智能健身镜前并转了一圈时,智能健身镜开启穿衣推荐功能,向用户推荐当前定位适合今天穿着的衣服。进一步的如,在用户周末出现在智能健身镜前并转了一圈时,智能健身镜开启穿衣推荐功能,向用户输出当前衣物的穿搭建议。上述功能还可以用于智能健身镜基于用户和外部信息进行主动互动的其他功能。
一种电子装置,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行 的计算机程序,所述处理器执行所述计算机程序时实现所述多模态交互健身镜的操作。
其中,所述处理器可以是中央处理器,还可以是其他通用处理器、数字信号处理器、专用集成电路、现成可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
所述存储器可用于存储所述计算机程序和/或模块,所述处理器通过运行或执行存储在所述存储器内的数据,实现本公开中多模态交互健身镜的装置的各种功能。所述存储器可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等。此外,存储器可以包括高速随机存取存储器、还可以包括非易失性存储器,例如硬盘、内存、插接式硬盘,智能存储卡,安全数字卡,闪存卡、至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
视频流传输方法
智能健身镜被配置为在显示面器120上显示来自视频制作工厂的视频内容。视频内容可以作为直播内容或预先录制好的录播内容流式传输。也可以将直播内容录制并存储在云服务器中,使得用户可以稍后请求和播放视频内容,因此成为录播内容。在进行视频流传输时,首先智能健身镜向API服务器发送指定课程的请求,API服务器返回的数据分为两部分,第一部分为指定课程的概览(包括课程图片及课程简介);第二部分为课程视频的地址(URL),然后智能健身镜根据收到的URL去OSS服务器上请求对应的课程视频。当课程为录播内容时,智能健身镜使用HLS协议进行视频播放,即课程视频的地址(URL)为对应课程的M3U8文件,然后智能健身镜的播放器根据M3U8文件的记载从OSS服务器上下载对应的课程视频切片。当课程为直播内容时,智能健身镜使用RTMP协议进行视频播放,即课程视频的地址(URL)为对应课程的RTMP直播地址,然后智能健身镜的播放器根据RTMP直播地址从OSS服务器上下载对应的课程视频直播数据。
在一些应用中,智能健身镜可以连接到在线流媒体服务,该在线流媒体服务向用户提供从服务器(例如,直接通过网络路由器或间接地通过用户的智能装置)流式传输的第三方视频内容。可以基于订阅向用户提供第三方内容。第三方可以向集中式分发平台提供内容,该集中式分发平台通过网络与智能健身镜通信。集中式分发平台的一个好处是内容到智能健身镜的分发更简单。可选地,第三方可以开发单独的分发平台,其可以在智能装置上使用单独的软件应用程序以供用户访问内容。
一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程 序被处理器执行时实现所述多模态交互健身镜的操作。
本公开实施例的计算机存储介质,可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质可以是但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ReadOnlyMemory,ROM)、可擦式可编程只读存储器((ErasableProgrammableReadOnlyMemory,EPROM)或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
一种动作识别方法,
如图10所示,在上述实施例的基础上,本装置在交互过程,具体包括以下操作:
S121: The fitness mirror captures an image of the user within the capture area to obtain an action image of the user;
S122: Skeleton point information of the user is generated from the action image, and a coordinate system is established from the skeleton point information;
S122.1: The skeleton point information includes a head-top skeleton point, a pelvic skeleton point, a left-hand skeleton point, and a right-hand skeleton point;
S122.2: The straight-line distance between the head-top skeleton point and the pelvic skeleton point is obtained, and the unit length of the coordinate system is derived from this distance;
S122.21: The straight-line distance between the head-top skeleton point and the pelvic skeleton point is obtained, and one sixth of this distance is taken as the unit length of the coordinate system, i.e., a unit length of 1 in the coordinate system equals one sixth of the straight-line distance;
S122.3: A rectangular coordinate system is constructed with the line through the head-top skeleton point and the pelvic skeleton point as the vertical axis, the pelvic skeleton point as the origin, and the obtained unit length as the scale;
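A minimal sketch of this coordinate-system construction (steps S122.2 to S122.3) follows, assuming the skeleton points arrive as pixel coordinates; the point names and data layout are illustrative assumptions.

```python
# Illustrative sketch of steps S122.2-S122.3: build a body-relative coordinate
# system whose origin is the pelvic point and whose unit length is one sixth
# of the head-to-pelvis distance. Point names and pixel layout are assumptions.
import math

def build_coordinate_system(head: tuple[float, float], pelvis: tuple[float, float]):
    """Return a function mapping pixel coordinates to body-relative units."""
    distance = math.dist(head, pelvis)   # straight-line head-to-pelvis distance
    unit = distance / 6.0                # S122.21: unit length = distance / 6

    def to_units(point: tuple[float, float]) -> tuple[float, float]:
        # Origin at the pelvic point; x grows to the right, y grows upward.
        return ((point[0] - pelvis[0]) / unit, (pelvis[1] - point[1]) / unit)

    return to_units

to_units = build_coordinate_system(head=(320, 80), pelvis=(320, 440))
print(to_units((320, 80)))   # the head maps to roughly (0.0, 6.0)
```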
S123: Several interaction regions are established in the coordinate system;
In this embodiment there are five interaction regions: a first interaction region, a second interaction region, a third interaction region, a fourth interaction region, and a fifth interaction region. As shown in Fig. 4, the first interaction region is region A in Fig. 3, the second interaction region is region B in Fig. 4, the third interaction region is region C in Fig. 4, the fourth interaction region is region D in Fig. 4, and the fifth interaction region is region E in Fig. 4.
The first interaction region is centered on the head-top skeleton point and extends 5 unit lengths vertically and 6 unit lengths horizontally in each direction, i.e., the first interaction region is 12 unit lengths long and 10 unit lengths high. Specifically, taking the head-top skeleton point as the center, its upper edge is 5 unit lengths upward and its lower edge is 5 unit lengths downward; its left edge is 6 unit lengths to the left and its right edge is 6 unit lengths to the right. The control command preset for the first interaction region is Confirm A, i.e., when the interaction gesture is located in the first interaction region, the control command corresponding to the first interaction region, namely the Confirm A control command, is output.
The control command of the second interaction region is Left, and the control command of the third interaction region is Right. The second interaction region takes the pelvic skeleton point as the starting point, with its upper edge 2 unit lengths upward and its lower edge 10 unit lengths downward, and takes the head-top skeleton point as the starting point, with its right edge 3 unit lengths to the left and its left edge 16 unit lengths to the left, forming a region 13 unit lengths long and 12 unit lengths high.
The third interaction region takes the pelvic skeleton point as the starting point, with its upper edge 2 unit lengths upward and its lower edge 10 unit lengths downward, and takes the head-top skeleton point as the starting point, with its left edge 3 unit lengths to the right and its right edge 16 unit lengths to the right, forming a region 13 unit lengths long and 12 unit lengths high.
The control command of the fourth interaction region is Confirm D, and the control command of the fifth interaction region is Confirm E. The fourth interaction region takes the head-top skeleton point as the starting point, with its upper edge 4 unit lengths upward and its lower edge 5 unit lengths downward, and again takes the head-top skeleton point as the starting point, with its right edge 7 unit lengths to the left and its left edge 16 unit lengths to the left, forming a region 11 unit lengths long and 9 unit lengths high.
The fifth interaction region takes the head-top skeleton point as the starting point, with its upper edge 4 unit lengths upward and its lower edge 5 unit lengths downward, and with its left edge 7 unit lengths to the right and its right edge 16 unit lengths to the right, forming a region 11 unit lengths long and 9 unit lengths high.
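These rectangles could be represented as simple data structures in the body-relative coordinate system, as sketched below. The edge coordinates follow the embodiment above under the assumption that the head-top point lies at (0, 6) in pelvis-origin units; the region and command names are illustrative.

```python
# Illustrative sketch of the five interaction regions (step S123) as axis-aligned
# rectangles in the pelvis-origin coordinate system, assuming the head-top point
# is at (0, 6). Region and command names follow the embodiment above.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    command: str
    left: float
    right: float
    bottom: float
    top: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.bottom <= y <= self.top

REGIONS = [
    Region("A", "confirm_A", left=-6,  right=6,  bottom=1,   top=11),  # centered on head
    Region("B", "left",      left=-16, right=-3, bottom=-10, top=2),
    Region("C", "right",     left=3,   right=16, bottom=-10, top=2),
    Region("D", "confirm_D", left=-16, right=-7, bottom=1,   top=10),
    Region("E", "confirm_E", left=7,   right=16, bottom=1,   top=10),
]
```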
S124: An interaction gesture is recognized from the skeleton point information;
S124.1: The distance between the left-hand skeleton point and the right-hand skeleton point is obtained;
S124.2: The distance between the left-hand and right-hand skeleton points is compared with a threshold T. If the distance is less than or equal to the threshold T, an interaction gesture is recognized. In this embodiment the threshold T is 15 cm: if the distance between the left-hand and right-hand skeleton points is less than or equal to T, the user is judged to have made the interaction gesture and the gesture is recognized. In this embodiment the interaction gesture is, for example, a clap, i.e., the user performs a clapping action; if the distance is greater than T, the user has not performed the clapping action. The specific interaction gesture in this embodiment is given only as an example and is not limiting.
S124.3: When the user makes the interaction gesture, the coordinates of the midpoint between the left-hand and right-hand skeleton points are obtained;
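A minimal sketch of steps S124.1 to S124.3 (clap detection and midpoint computation) follows; the hand coordinates are assumed to be given in centimeters for the distance test.

```python
# Illustrative sketch of steps S124.1-S124.3: detect a clap gesture when the
# hands are within threshold T of each other, then return their midpoint.
import math

CLAP_THRESHOLD_CM = 15.0  # threshold T from the embodiment

def detect_clap(left_hand: tuple[float, float],
                right_hand: tuple[float, float]) -> tuple[float, float] | None:
    """Return the hands' midpoint if a clap is detected, otherwise None."""
    if math.dist(left_hand, right_hand) <= CLAP_THRESHOLD_CM:
        return ((left_hand[0] + right_hand[0]) / 2,
                (left_hand[1] + right_hand[1]) / 2)
    return None

print(detect_clap((0.0, 100.0), (10.0, 100.0)))  # -> (5.0, 100.0), clap detected
print(detect_clap((0.0, 100.0), (40.0, 100.0)))  # -> None, hands too far apart
```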
S125: It is determined whether the midpoint coordinates are located in an interaction region;
S125.1: It is determined whether the midpoint coordinates are located in an interaction region;
S125.11: When the midpoint coordinates are located in one of the interaction regions, that interaction region enters the activated state and the control command corresponding to that interaction region is output; the other interaction regions remain in the unactivated state;
S125.12: While the control command corresponding to that interaction region is output, that interaction region is in the activated state and the remaining interaction regions are in the unactivated state; the area of the activated interaction region is enlarged, and the area of the unactivated interaction regions is reduced.
In this embodiment, the length of the activated interaction region increases by 1.5 unit lengths and its height increases by 0.5 unit lengths; specifically, the length of the activated region expands outward by 0.75 unit lengths on each of the left and right sides, and its height expands outward by 0.25 unit lengths at each of the top and bottom. The length of an unactivated interaction region shrinks by 2 unit lengths and its height shrinks by 1 unit length; specifically, the length of an unactivated region contracts inward by 1 unit length on each of the left and right sides, and its height contracts inward by 0.5 unit lengths at each of the top and bottom.
Before the control command corresponding to the interaction region has been output, the area of the interaction region does not change; after the control command is output, the area of the interaction region is adjusted; and after the control command is executed, the area of the interaction region returns to its original state.
When the user makes the interaction gesture in the first interaction region, the fitness mirror recognizes the gesture and outputs the control command corresponding to the first interaction region, and the area of the first interaction region is enlarged, i.e., the upper edge moves up by 0.25 unit lengths, the lower edge moves down by 0.25 unit lengths, the left edge moves left by 0.75 unit lengths, and the right edge moves right by 0.75 unit lengths.
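Putting steps S125 to S125.12 together, the following self-contained sketch dispatches the gesture midpoint to a region, outputs its command, and resizes the regions by the amounts stated above; the region layout (a small subset) and command names follow the earlier sketch and remain illustrative assumptions.

```python
# Illustrative sketch of steps S125-S125.12: find the region containing the
# gesture midpoint, output its command, enlarge it, and shrink the others.
regions = [
    {"command": "confirm_A", "left": -6.0,  "right": 6.0,  "bottom": 1.0,   "top": 11.0},
    {"command": "left",      "left": -16.0, "right": -3.0, "bottom": -10.0, "top": 2.0},
]

def dispatch_gesture(midpoint, regions):
    """Output the command of the activated region and resize all regions."""
    x, y = midpoint
    hit = next((r for r in regions
                if r["left"] <= x <= r["right"] and r["bottom"] <= y <= r["top"]), None)
    if hit is None:
        return None
    for r in regions:
        if r is hit:   # activated region grows 0.75 units per side in x, 0.25 in y
            r["left"] -= 0.75; r["right"] += 0.75; r["bottom"] -= 0.25; r["top"] += 0.25
        else:          # unactivated regions shrink 1 unit per side in x, 0.5 in y
            r["left"] += 1.0; r["right"] -= 1.0; r["bottom"] += 0.5; r["top"] -= 0.5
    return hit["command"]

print(dispatch_gesture((0.0, 6.0), regions))  # midpoint near the head -> "confirm_A"
```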
All parameters, dimensions, materials, and configurations described herein are intended to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend on the specific application or applications for which the teachings of the present disclosure are used. It should be understood that the foregoing embodiments are presented primarily by way of example and that, within the scope of the appended claims and their equivalents, embodiments of the present disclosure may be practiced otherwise than as specifically described and claimed. Embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein.
In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, provided such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure. Other substitutions, modifications, changes, and omissions may be made in the design, operating conditions, and arrangement of the respective elements of the exemplary embodiments without departing from the spirit of the present disclosure. The use of numerical ranges does not preclude equivalents outside the ranges that perform the same function in the same way to produce the same result.
The above embodiments can be implemented in a variety of ways. For example, the embodiments may be implemented using hardware, software, or a combination thereof. When implemented in software, the software code may be executed on a suitable processor or set of processors, whether provided in a single computer or distributed among multiple computers.
Furthermore, a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer. In addition, a computer may be embedded in a device not generally regarded as a computer but having suitable processing capabilities, including a personal digital assistant (PDA), a smartphone, or any other suitable portable or fixed electronic device.
Also, a computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound-generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible formats.
Such computers may be interconnected by one or more networks in a suitable form, including a local area network or a wide area network, such as an enterprise network, an intelligent network (IN), or the Internet. Such networks may be based on a suitable technology, may operate according to a suitable protocol, and may include wireless networks, wired networks, or fiber-optic networks.
The various methods or processes outlined herein may be coded as software executable on one or more processors employing any of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine. Some implementations may specifically employ one or more of a particular operating system or platform and a particular programming language and/or scripting tool to facilitate execution.
Also, various concepts may be embodied as one or more methods, of which at least one example has been provided. The acts performed as part of a method may in some cases be ordered differently. Accordingly, in some disclosed embodiments, the respective acts of a given method may be performed in an order different from that specifically illustrated, which may include performing some acts simultaneously (even though such acts are shown as sequential acts in the illustrative embodiments).
All publications, patent applications, patents, and other references mentioned herein are incorporated by reference in their entirety.
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
The indefinite articles "a" and "an," as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean "at least one."
The phrase "and/or," as used herein in the specification and in the claims, should be understood to mean "either or both" of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with "and/or" should be construed in the same fashion, i.e., "one or more" of the elements so conjoined. Other elements may optionally be present in addition to the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to "A and/or B," when used in conjunction with open-ended language such as "comprising," can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); and so forth.
As used herein in the specification and in the claims, "or" should be understood to have the same meaning as "and/or" as defined above. For example, when separating items in a list, "or" or "and/or" shall be interpreted as being inclusive, i.e., as including at least one, but also more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicating the contrary, such as "only one of" or "exactly one of," or, when used in the claims, "consisting of," will refer to the inclusion of exactly one element of a number or list of elements. In general, the term "or" as used herein shall be interpreted as indicating exclusive alternatives (i.e., "one or the other but not both") only when accompanied by terms of exclusivity, such as "either," "one of," "only one of," or "exactly one of." "Consisting essentially of," when used in the claims, shall have its ordinary meaning as used in the field of patent law.
As used herein in the specification and in the claims, the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements, and not excluding any combination of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, "at least one of A and B" (or, equivalently, "at least one of A or B," or, equivalently, "at least one of A and/or B") can refer, in one embodiment, to at least one A, optionally including more than one A, with no B present (and optionally including elements other than B); in another embodiment, to at least one B, optionally including more than one B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one A, optionally including more than one A, and at least one B, optionally including more than one B (and optionally including other elements); and so forth.
In the claims, as well as in the specification above, all transitional phrases such as "comprising," "including," "carrying," "having," "containing," "involving," "holding," "composed of," and the like are to be understood to be open-ended, i.e., to mean including but not limited to.
Although preferred embodiments of the present disclosure have been described, those skilled in the art, once apprised of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present disclosure.
Obviously, those skilled in the art can make various changes and variations to the present disclosure without departing from the spirit and scope of the present disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their technical equivalents, the present disclosure is also intended to encompass these changes and variations.
Claims (19)
- An interactive mirror device, comprising: a display operable to display a user interface, the user interface having at least one user interface object; a mirror having a partially reflective portion, wherein an image displayed by the display through the partially reflective portion is superimposed, at the partially reflective portion, with a part of the reflected image of a user in front of the mirror; a sensor that collects user data related to the user; and a processor operably coupled to the display and the sensor to control the display to switch from displaying a first user interface to displaying a second user interface based on the user data collected by the sensor.
- The interactive mirror device according to claim 1, further comprising a communication interface for receiving video images; wherein the user interface includes an exercise page for displaying the video images received through the communication interface, and the user interface has at least one user interface object for presenting the user's exercise performance.
- The interactive mirror device according to claim 1 or 2, wherein the sensor includes one or more of an image recognition sensor, an infrared recognition sensor, or a speech recognition sensor.
- The interactive mirror device according to claim 3, wherein the processor generates an action command control instruction based on a user action collected by the image recognition sensor and controls the display to switch from displaying the first user interface to displaying the second user interface.
- The interactive mirror device according to claim 3, wherein the processor controls the display, based on a user action collected by the image recognition sensor, to display a prompt icon corresponding to the action command control instruction of the user action.
- The interactive mirror device according to claim 4 or 5, wherein the user action collected by the image recognition sensor is a static action or a dynamic action; the static action means that the user makes a predetermined posture with a predetermined body part and maintains it for a specified time, and the dynamic action means that the user makes a predetermined movement with a predetermined body part.
- The interactive mirror device according to claim 4 or 5, wherein the action command control instructions include control instructions for an exercise page and control instructions for a user interface of a non-exercise page.
- The interactive mirror device according to claim 7, wherein the user interface of the non-exercise page includes a wake-up page, a home page, or another page; the control instructions for the user interface of the non-exercise page include interaction instructions between the user and the non-exercise page, the interaction instructions including at least one of previous page, next page, confirm, back, select course, favorite course, add to plan, enter the page-switching interface, wake up the voice assistant, and multimedia volume control.
- The interactive mirror device according to claim 7, wherein the control instructions for the exercise page include the user's control instructions for playback of an exercise course, including at least one of course pause, course play, previous segment, next segment, course background music volume control, course coach volume control, and course evaluation.
- A method performed by an interactive mirror device, comprising: displaying, by a display, a user interface having at least one user interface object; superimposing, at a partially reflective portion of a mirror, an image displayed by the display through the partially reflective portion with a part of the reflected image of a user in front of the mirror; collecting user data related to the user by a sensor; and controlling, by a processor operably coupled to the display and the sensor, the display to switch from displaying a first user interface to displaying a second user interface based on the user data collected by the sensor.
- The method according to claim 10, further comprising receiving video images through a communication interface; wherein the user interface includes an exercise page for displaying the video images received through the communication interface, and the user interface has at least one user interface object for presenting the user's exercise performance.
- The method according to claim 10 or 11, wherein the sensor includes one or more of an image recognition sensor, an infrared recognition sensor, or a speech recognition sensor.
- The method according to claim 12, wherein the processor generates an action command control instruction based on a user action collected by the image recognition sensor and controls the display to switch from displaying the first user interface to displaying the second user interface.
- The method according to claim 12, wherein the processor controls the display, based on a user action collected by the image recognition sensor, to display a prompt icon corresponding to the action command control instruction of the user action.
- The method according to claim 13 or 14, wherein the user action collected by the image recognition sensor is a static action or a dynamic action; the static action means that the user makes a predetermined posture with a predetermined body part and maintains it for a specified time, and the dynamic action means that the user makes a predetermined movement with a predetermined body part.
- The method according to claim 13 or 14, wherein the action command control instructions include control instructions for an exercise page and control instructions for a user interface of a non-exercise page.
- The method according to claim 16, wherein the user interface of the non-exercise page includes a wake-up page, a home page, or another page; the control instructions for the user interface of the non-exercise page include interaction instructions between the user and the non-exercise page, the interaction instructions including at least one of previous page, next page, confirm, back, select course, favorite course, add to plan, enter the page-switching interface, wake up the voice assistant, and multimedia volume control.
- The method according to claim 16, wherein the control instructions for the exercise page include the user's control instructions for playback of an exercise course, including at least one of course pause, course play, previous segment, next segment, course background music volume control, course coach volume control, and course evaluation.
- A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the operations of the method performed by an interactive mirror device according to any one of claims 10-18.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210530211 | 2022-05-16 | ||
CN202210530211.0 | 2022-05-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023221233A1 (zh) | 2023-11-23 |
Family
ID=83520535
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/100664 WO2023221233A1 (zh) | 2022-05-16 | 2022-06-23 | 交互式镜面装置、系统及方法 |
Country Status (2)
Country | Link |
---|---|
CN (4) | CN115177938A (zh) |
WO (1) | WO2023221233A1 (zh) |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105117112A (zh) * | 2015-09-25 | 2015-12-02 | 王占奎 | 空中交互式智能全息显示系统 |
KR20180052224A (ko) * | 2016-11-10 | 2018-05-18 | 인천대학교 산학협력단 | 홈 트레이닝 거울 |
CN108664179A (zh) * | 2017-04-01 | 2018-10-16 | 上海卓易科技股份有限公司 | 一种界面打开方法和装置 |
CN110191365A (zh) * | 2019-05-30 | 2019-08-30 | 深圳创维-Rgb电子有限公司 | 一种用于动作模仿的方法、存储介质及系统 |
CN113457105B (zh) * | 2020-03-30 | 2022-09-13 | 乔山健身器材(上海)有限公司 | 具健身选单的智能镜子 |
US20210379447A1 (en) * | 2020-06-09 | 2021-12-09 | Johnson HealthTech. Co., Ltd | Interactive exercise apparatus |
CN215691547U (zh) * | 2021-07-28 | 2022-02-01 | 乔山健身器材(上海)有限公司 | 运动引导设备 |
CN114028794A (zh) * | 2021-11-12 | 2022-02-11 | 成都拟合未来科技有限公司 | 具有互动功能的辅助健身方法及系统 |
CN114399833B (zh) * | 2021-12-01 | 2024-09-27 | 鉴丰电子科技有限公司 | 一种适用于智能健身镜的交互方法及系统 |
2022
- 2022-06-23: WO PCT/CN2022/100664, published as WO2023221233A1 (zh), status unknown
- 2022-08-04: CN CN202210931241.2A, published as CN115177938A (zh), status pending
- 2022-08-04: CN CN202210931230.4A, published as CN115569368A (zh), status pending
- 2022-08-04: CN CN202210931229.1A, published as CN115177937A (zh), status pending
- 2022-08-04: CN CN202210931098.7A, published as CN115212543A (zh), status pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090291805A1 (en) * | 2008-05-23 | 2009-11-26 | Scott Alan Blum | Exercise apparatus and methods |
KR20160016263A (ko) * | 2014-08-04 | 2016-02-15 | 엘지전자 주식회사 | 미러 디스플레이 장치 및 그의 동작 방법 |
CN112007348A (zh) * | 2018-05-29 | 2020-12-01 | 库里欧瑟产品公司 | 用于交互式训练和演示的反射视频显示设备及其使用方法 |
US20200047030A1 (en) * | 2018-08-07 | 2020-02-13 | Interactive Strength, Inc. | Interactive Exercise Machine System With Mirror Display |
CN114073850A (zh) * | 2020-08-14 | 2022-02-22 | 乔山健身器材(上海)有限公司 | 线上同步课程的系统及方法 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117931357A (zh) * | 2024-03-22 | 2024-04-26 | 东莞莱姆森科技建材有限公司 | 基于交互数据处理的智能镜子、镜柜及其控制方法 |
Also Published As
Publication number | Publication date |
---|---|
CN115212543A (zh) | 2022-10-21 |
CN115177938A (zh) | 2022-10-14 |
CN115569368A (zh) | 2023-01-06 |
CN115177937A (zh) | 2022-10-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11986721B2 (en) | Reflective video display apparatus for interactive training and demonstration and methods of using same | |
KR102564949B1 (ko) | 상호작용식 훈련 및 데몬스트레이션을 위한 반사 비디오 디스플레이 장치 및 그 사용 방법들 | |
US11433275B2 (en) | Video streaming with multiplexed communications and display via smart mirrors | |
CN106462283B (zh) | 计算设备上的字符识别 | |
CN109287116A (zh) | 录制和广播应用视觉输出 | |
KR102355008B1 (ko) | 동작 인식 기반 상호작용 방법 및 기록 매체 | |
US20230116624A1 (en) | Methods and systems for assisted fitness | |
WO2021036954A1 (zh) | 一种智能语音播放方法及设备 | |
WO2016136104A1 (ja) | 情報処理装置、情報処理方法及びプログラム | |
JP6040745B2 (ja) | 情報処理装置、情報処理方法、情報処理プログラム及びコンテンツ提供システム | |
WO2023221233A1 (zh) | 交互式镜面装置、系统及方法 | |
KR20210007223A (ko) | 동영상 기반의 맞춤형 피드백 코칭정보 제공 시스템 및 방법 | |
KR102590988B1 (ko) | 아바타와 함께 운동하는 메타버스 서비스 제공 장치, 방법 및 프로그램 | |
Almeida et al. | Gym at Home-A Proof-of-Concept | |
US20240281078A1 (en) | Automatic remote control of computer devices in a physical room | |
Fisk et al. | Implicit and explicit interactions in video mediated collaboration | |
CN116185185A (zh) | 一种数字化体育教学方法及设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22942264; Country of ref document: EP; Kind code of ref document: A1 |