
WO2019111348A1 - Video processing device, video processing method, computer program, and video processing system - Google Patents


Info

Publication number
WO2019111348A1
Authority
WO
WIPO (PCT)
Prior art keywords
image information
frame data
timing
time interval
scene
Prior art date
Application number
PCT/JP2017/043803
Other languages
French (fr)
Japanese (ja)
Inventor
努 松浦
Original Assignee
株式会社典雅
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社典雅 filed Critical 株式会社典雅
Priority to PCT/JP2017/043803 priority Critical patent/WO2019111348A1/en
Publication of WO2019111348A1 publication Critical patent/WO2019111348A1/en


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B22/00 Exercising apparatus specially adapted for conditioning the cardio-vascular system, for training agility or co-ordination of movements
    • A63B22/18 Exercising apparatus specially adapted for conditioning the cardio-vascular system, for training agility or co-ordination of movements with elements, i.e. platforms, having a circulating, nutating or rotating movement, generated by oscillating movement of the user, e.g. platforms wobbling on a centrally arranged spherical support
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B69/00 Training appliances or apparatus for special sports
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes
    • G09B9/02 Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/06 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of ships, boats, or other waterborne vehicles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor
    • H04N5/93 Regeneration of the television signal or of selected parts thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a video processing apparatus, a video processing method, a computer program, and a video processing system.
  • Patent Document 1 proposes a training apparatus incorporating a video processing system. In this training apparatus, shooting frame data captured at every prescribed travel distance are displayed in conjunction with the distance traveled on the treadmill.
  • However, the training apparatus of Patent Document 1 assumes exercise that continues in one direction, such as running, and does not assume reciprocating motion of a specific part of the user's body; it therefore has room for improvement in this respect.
  • The present invention has been made in view of such circumstances, and an object thereof is to realize video processing suitable for reciprocating motion of a specific part of the user.
  • The present invention is a video processing apparatus that processes video based on operation information which is output from an operation information output unit and which changes periodically with the reciprocating motion of a specific part of the user between a first position and a second position. The apparatus includes: an image storage unit that stores a plurality of captured image information captured in time series; an image selection unit that selects the captured image information stored in the image storage unit; and a time interval setting unit that sets a time interval between the captured image information. The captured image information includes first captured image information corresponding to the first position and second captured image information corresponding to the second position. The apparatus further includes a prediction unit that predicts, based on the operation information, the movement time of the specific part between the first position and the second position. The image selection unit selects the first captured image information at a first timing at which the specific part is located at the first position, and selects the second captured image information at a second timing at which the specific part reaches the second position after the movement time predicted by the prediction unit has elapsed from the first timing. The time interval setting unit sets the time interval between the captured image information from the first captured image information to the second captured image information to a time interval based on the movement time predicted by the prediction unit.
  • FIG. 1 is a diagram showing a schematic configuration of a video display system according to an embodiment of the present invention.
  • FIG. 2 is a diagram showing the appearance of an operation information output device.
  • FIG. 3(a) is a diagram showing the overall configuration of a rowing machine; FIG. 3(b) is a diagram showing the user's wrist on the rowing machine at a position away from the user's chest; FIG. 3(c) is a diagram showing the user's wrist on the rowing machine at a position close to the user's chest.
  • FIG. 4(a) is a diagram explaining the hardware configuration of a control device; FIG. 4(b) is a diagram showing the various data stored in the storage unit of the control device; FIG. 4(c) is a diagram showing the functional blocks of the control unit.
  • FIG. 5 is a diagram showing the hardware configuration of the operation information output device.
  • FIG. 6(a) is a diagram showing the axes of a gyro sensor; FIG. 6(b) is a diagram showing the axes of an acceleration sensor; FIG. 6(c) is a diagram explaining the normalization of the sensor output from the gyro sensor.
  • FIG. 7 is a diagram showing the relationship between the normalized sensor output from the gyro sensor and the position of the wrist.
  • FIG. 8 is a diagram showing the switching among the introduction scene, the interlocked scene, and the finish scene.
  • FIG. 9(a) is a diagram showing the types of shooting frame data; FIG. 9(b) is a diagram showing the types of audio data; FIG. 9(c) is a diagram showing the metadata added to the shooting frame data and the audio data; FIG. 9(d) is a diagram showing the relationship between the interlocked scene of each cycle and the reciprocation frequency of the wrist.
  • FIG. 10 is a diagram showing the relationship between the marker data in each cycle of the interlocked scene and the shooting frame data.
  • FIG. 11(a) is a diagram showing the types of audio data; FIG. 11(b) is a diagram showing the generation procedure of the reproduction audio data.
  • FIG. 12(a) is a diagram showing reproduction of the reproduction audio data at the standard time interval; FIG. 12(b) is a diagram showing reproduction at a narrower time interval; FIG. 12(c) is a diagram showing reproduction at a wider time interval.
  • FIG. 13(a) is a diagram showing the calculation procedure of the prediction time; FIG. 13(b) is a diagram showing the variation of the prediction time.
  • FIGS. 14 to 16 are diagrams showing the time intervals of the shooting frame data and the interval adjustment curves.
  • FIG. 17 is a diagram showing the generation of combined frame data F(mix1) to F(mix4); FIGS. 18(a) to 18(d) are diagrams showing the generation of the combined frame data F(mix1) to F(mix4), respectively.
  • FIG. 19(a) is a diagram showing the calculation procedure of the prediction time for the second marker data; FIG. 19(b) is a diagram showing its variation. FIG. 20 is a diagram showing the time intervals of the shooting frame data and the interval adjustment curve. FIG. 21 is a diagram showing the generation of combined frame data F(mix11) to F(mix14); FIGS. 22(a) to 22(d) are diagrams showing the generation of the combined frame data F(mix11) to F(mix14), respectively.
  • FIG. 23(a) is a flowchart showing the signal monitoring process performed by the CPU of the control device; FIG. 23(b) is a flowchart showing the main process performed by the CPU of the control device. A further flowchart shows the interlocked scene reproduction process performed by the CPU of the control device, and a further diagram shows an example in which shooting frame data whose time intervals have been adjusted are made into display frame data.
  • FIG. 1 is a diagram showing a schematic configuration of a video display system 1 according to an embodiment of the present invention.
  • The video display system 1 shown in FIG. 1 includes: a control device 2 that stores video content (a plurality of shooting frame data shot in time series, and audio data to be played back along with the plurality of shooting frame data) and controls playback of the video content; an operation information output device 3 (operation information output unit) that is attached to the user, is communicably connected to the control device 2, and transmits and receives various signals to and from the control device 2; a display device 4 that is communicably connected to the control device 2 and displays images of the video content; and a sound output device 5 that is communicably connected to the control device 2 and is capable of reproducing the audio of the video content.
  • The control device 2 functions as a video processing device that processes the video content to be output to the display device 4 and the sound output device 5, based on the sensor output (operation information, described later) from the operation information output device 3. A general personal computer, for example, can be used as the control device 2. The control device 2 will be described in detail later.
  • FIG. 2 is a view showing the appearance of the operation information output device 3.
  • The operation information output device 3 includes a housing 3a that houses components such as the control unit 31 and the gyro sensor 32 (see FIG. 5), and a band 3b that is attached to the housing 3a and is wound around the user's wrist (specific part).
  • On the surface of the housing 3a are provided a first mode switch 34 and a second mode switch 35 operated when switching the playback scene of the video content, a playback related switch 36 operated for various operations related to playback of the video content, and a menu switch 37 operated when displaying a menu screen related to the video content.
  • The operation information output device 3 transmits, to the control device 2, a sensor output whose signal level changes periodically as the user's wrist reciprocates.
  • Display frame data is wirelessly transmitted and received between the display device 4 and the control device 2.
  • The display frame data is read from the frame buffer 23 (see FIG. 4) included in the control device 2 and transmitted.
  • The display device 4 displays an image according to the display frame data, specifically, an image that can be viewed stereoscopically.
  • The display device 4 is not limited to a head mounted display, and may be a liquid crystal display or a projector. Communication between the display device 4 and the control device 2 is not limited to wireless communication and may be wired communication.
  • For the sound output device 5, headphones are preferably used.
  • An acoustic signal based on audio data (described later) is wirelessly transmitted and received between the sound output device 5 and the control device 2.
  • When the acoustic signal is input to the sound output device 5, the sound output device 5 outputs sound according to the signal.
  • The sound output device 5 is not limited to headphones and may be a speaker.
  • Communication between the sound output device 5 and the control device 2 is not limited to wireless communication and may be wired communication.
  • FIG. 3(a) is a diagram showing the entire configuration of the rowing machine 6. FIG. 3(b) is a diagram showing the user's wrist on the rowing machine 6 at a position away from the user's chest. FIG. 3(c) is a diagram showing the user's wrist on the rowing machine 6 at a position close to the user's chest.
  • The video display system 1 reproduces the video content in conjunction with the motion of the user, for example, when the rowing machine 6 shown in FIGS. 3(b) and 3(c) is used.
  • The illustrated rowing machine 6 has: a prismatic frame 61; a seat member 62 that is movable along the frame 61 and on which the user's buttocks can be placed; a substantially T-shaped handle 63 that can rotate about its proximal end relative to the frame and whose tip end is gripped by the user's hands; a damper 64 whose one end is rotatably attached to the frame 61 and whose other end is rotatably attached to the handle 63; and a footrest member 65 on which the user's soles are placed.
  • The user of the rowing machine 6 wears the operation information output device 3 on the wrist (specific part) at the time of training, and wears the display device 4 (head mounted display) and the sound output device 5 (headphones) on the head.
  • The user uses the rowing machine 6 to perform a bending and stretching exercise between the state in which the body is bent as shown in FIG. 3(b) and the state in which the body is extended as shown in FIG. 3(c). At this time, the user's wrist is positioned at the separated position away from the user's chest with the body bent, and at the proximity position close to the user's chest with the body extended. That is, the user's wrist (specific part) reciprocates along the trajectory of the tip end of the handle 63 between the proximity position and the separated position. Therefore, the operation information output device 3 worn on the wrist of the user also reciprocates between the proximity position and the separated position.
  • FIG. 4(a) is a diagram for explaining the hardware configuration of the control device 2. FIG. 4(b) is a diagram showing the various data stored in the storage unit of the control device 2.
  • The control device 2 includes a control unit 21 serving as the main body of various controls, a storage unit 22 that stores various data, a frame buffer 23 that stores display frame data corresponding to the image to be displayed on the display device 4, and a communication interface (I/F) 24 for transmitting and receiving information to and from other devices.
  • The control unit 21 includes a central processing unit (CPU) 21a that develops a program stored in the storage unit 22 in a random access memory (RAM) 21b and executes the program.
  • The storage unit 22 is, for example, a hard disk drive (HDD) or a solid state drive (SSD), and stores the various data illustrated in FIG. 4(b).
  • The frame buffer 23 is a volatile memory that stores display frame data for one screen displayed by the display device 4. The display frame data is generated by the control unit 21 based on the shooting frame data stored in the storage unit 22, and the control unit 21 writes the generated display frame data into the frame buffer 23. The generation of the display frame data by the control unit 21 will be described later.
  • The communication interface 24 includes a communication interface 24a that transmits and receives information to and from the operation information output device 3, a communication interface 24b that transmits and receives information to and from the display device 4, and a communication interface 24c that transmits and receives information to and from the sound output device 5.
  • As the communication interface 24, for example, one conforming to a communication method or protocol such as Bluetooth (registered trademark) or WirelessUSB (Wireless Universal Serial Bus) can be used. In the case of wired communication, one conforming to a method such as RS-232C or USB (Universal Serial Bus) can be used.
  • FIG. 4(c) is a diagram showing the functional blocks of the control unit 21. The functional blocks shown in FIG. 4(c) are realized by the CPU 21a developing a program stored in the storage unit 22 on the RAM 21b and executing the program.
  • The image selection unit 21c selects shooting frame data stored in the storage unit 22 (image storage unit). The time interval setting unit 21d sets the time interval between the shooting frame data.
  • The prediction unit 21e predicts the movement time of the user's wrist (specific part) between one (first position) and the other (second position) of the proximity position and the separated position, based on the sensor output (operation information) output from the operation information output device 3 (operation information output unit).
  • The display image information generation unit 21f generates display frame data by performing alpha blending processing on a pair of shooting frame data adjacent to each other in accordance with the timing defined by the display device 4.
  • The photographed image information selection unit 21g selects the shooting frame data of any one cycle from among the shooting frame data of a plurality of cycles (a plurality of sets of captured image information), based on the periodic change of the sensor output (operation information).
  • The communication control unit 21h controls the communication interfaces 24a to 24c to communicate with the operation information output device 3, the display device 4, and the sound output device 5.
  • FIG. 5 is a diagram showing the hardware configuration of the operation information output device 3.
  • In addition to the first mode switch 34, the second mode switch 35, the playback related switch 36, and the menu switch 37 described above, the operation information output device 3 includes a control unit 31 serving as the main body of various controls, a communication interface (I/F) 38 for transmitting and receiving information to and from the control device 2, a gyro sensor 32 for detecting angular velocity about reference axes, and an acceleration sensor 33 for detecting acceleration.
  • The control unit 31 includes a CPU 31a (Central Processing Unit) that executes a program stored in the non-volatile memory 31b, and a volatile memory 31c used by the CPU 31a.
  • FIG. 6(a) is a diagram showing the axes of the gyro sensor 32. FIG. 6(b) is a diagram showing the axes of the acceleration sensor 33. FIG. 6(c) is a diagram for explaining the normalization of the sensor output from the gyro sensor 32.
  • The gyro sensor 32 detects angular velocities about the X axis, the Y axis, and the Z axis, which are orthogonal to one another. The acceleration sensor 33 detects acceleration along each of the X axis, the Y axis, and the Z axis, which are orthogonal to one another.
  • The control unit 31 of the operation information output device 3 acquires a sensor output that changes periodically with the reciprocation of the user's wrist, based on the angular velocities detected by the gyro sensor 32 and the accelerations detected by the acceleration sensor 33 while the user exercises on the rowing machine 6 (see FIGS. 3(b) and 3(c)). For example, based on the sensor output of the acceleration sensor 33, the control unit 31 acquires the inclination of the housing 3a of the operation information output device 3 with respect to the direction of gravity. The control unit 31 then obtains the sensor output by normalizing the angular velocities detected by the gyro sensor 32 in accordance with the inclination of the housing 3a with respect to the direction of gravity. Specifically, as shown in FIG. 6(c), the control unit 31 selects, from the angular velocities of the three axes output by the gyro sensor 32, the angular velocities of the two axes with the largest amount of change, and acquires the sensor output by normalizing the selected two axes to one axis according to the inclination of the housing 3a.
  • The control unit 31 transmits the acquired sensor output to the control device 2 via the communication interface 38.
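
A minimal sketch of this two-axis-to-one-axis normalization, assuming the housing tilt is available as a gravity-direction unit vector from the acceleration sensor (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def normalized_sensor_output(gyro_xyz, gravity_dir):
    """Collapse 3-axis gyro samples onto one axis aligned with the
    wrist's reciprocating stroke.

    gyro_xyz    : (N, 3) angular velocities about the X, Y, Z axes
    gravity_dir : (3,) unit vector for the housing tilt (from the
                  acceleration sensor)
    """
    # Pick the two axes whose angular velocity varies the most,
    # i.e. the axes that actually carry the rowing motion.
    variation = gyro_xyz.max(axis=0) - gyro_xyz.min(axis=0)
    a, b = np.argsort(variation)[-2:]

    # Project the two selected axes onto a single axis, weighting
    # them by the housing's inclination with respect to gravity.
    wa, wb = gravity_dir[a], gravity_dir[b]
    norm = np.hypot(wa, wb)
    if norm == 0.0:
        norm = 1.0
    return (gyro_xyz[:, a] * wa + gyro_xyz[:, b] * wb) / norm
```
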
  • FIG. 7 is a diagram showing the relationship between the normalized sensor output NS from the gyro sensor 32 and the position of the wrist.
  • The sensor output NS, indicated by the alternate long and short dash line in FIG. 7, takes a positive (+) value when the wrist moves in the direction away from the chest during exercise on the rowing machine 6, and a negative (-) value when the wrist moves toward the chest. The sensor output NS indicates the value "0" when the movement of the wrist stops.
  • As for the position PW of the wrist indicated by the solid line in FIG. 7, when the sensor output NS changes from a negative value to "0", the wrist can be said to be at the proximity position B closest to the chest (see FIG. 3(c)); conversely, when the sensor output NS changes from a positive value to "0", the wrist can be said to be at the separated position T (see FIG. 3(b)).
  • In this way, the control unit 21 of the control device 2 can obtain, from the sensor output NS, the position of the wrist (specific part) during exercise on the rowing machine 6 (the reciprocation of the wrist).
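
Under this sign convention, the timings at which the wrist reaches B or T can be read off the zero crossings of NS. A sketch (illustrative, not the patent's implementation):

```python
def wrist_events(ns_samples, timestamps):
    """Detect the timings at which the wrist reaches the proximity
    position B or the separated position T from the normalized sensor
    output NS: NS > 0 while the wrist moves away from the chest,
    NS < 0 while it approaches, and NS reaches 0 when the wrist stops
    at either end of the stroke.
    """
    events = []
    for i in range(1, len(ns_samples)):
        prev, cur = ns_samples[i - 1], ns_samples[i]
        if prev < 0 <= cur:        # approaching stroke ends
            events.append((timestamps[i], "B"))   # closest to chest
        elif prev > 0 >= cur:      # receding stroke ends
            events.append((timestamps[i], "T"))   # farthest from chest
    return events
```
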
  • FIG. 8 is a diagram showing the switching among the introduction scene, the interlocked scene, and the finish scene included in the video content.
  • Reproduction of the video content is started by the user's playback instruction. That is, the control unit 21 of the control device 2 starts reproduction of the video content based on the operation information output from the playback related switch 36 when it is operated.
  • The video content includes an introduction scene, an interlocked scene, and a finish scene. The introduction scene is a prologue scene, the interlocked scene is a scene reproduced in conjunction with the user's motion (the reciprocation of the wrist), and the finish scene is an ending scene. For example, the introduction scene is a scene from before the race to just before the start, the interlocked scene is a scene from the start of the race to just before the goal, and the finish scene is a scene from just before the goal to past the goal.
  • Reproduction of the video content starts from the introduction scene. Reproduction of the introduction scene is performed regardless of the sensor output of the operation information output device 3. That is, the control unit 21 of the control device 2 reproduces the shooting frame data at the predetermined interval dt1 (see FIG. 10) and reproduces the audio data regardless of the sensor output of the operation information output device 3.
  • In the interlocked scene, the control device 2 receives the sensor output from the operation information output device 3 and controls the reproduction of the interlocked scene based on the sensor output (operation information).
  • Thereafter, the control unit 21 of the control device 2 starts reproduction of the finish scene. Reproduction of the finish scene is performed regardless of the sensor output of the operation information output device 3, and reproduction of the video content ends with the completion of the reproduction of the finish scene.
  • The video display system 1 acquires the sensor output (the normalized sensor output from the gyro sensor 32) from the operation information output device 3 when reproducing the interlocked scene, and controls the reproduction of the interlocked scene according to the acquired sensor output. In other words, the video display system 1 controls the reproduction of the video content in accordance with the cycle of the user's wrist reciprocation when using the rowing machine 6.
  • Five types of interlocked scenes, from the first cycle to the fifth cycle, are prepared. Immediately after switching to the interlocked scene, the interlocked scene of a specific cycle (for example, the first cycle) is reproduced. Thereafter, in accordance with the periodic change of the sensor output, the reproduction is switched to the interlocked scene of the appropriate cycle among the interlocked scenes of the first to fifth cycles. Further, even during reproduction of the interlocked scene of one cycle, the reproduction of the interlocked scene is controlled in accordance with the reciprocating motion of the user's wrist. The details are described below.
  • FIG. 9(a) is a diagram showing the contents of the shooting frame data. The shooting frame data comprises three types: shooting frame data of the introduction scene, shooting frame data of the interlocked scene, and shooting frame data of the finish scene.
  • In the introduction scene and the finish scene, the video content is reproduced regardless of the sensor output from the operation information output device 3. In the interlocked scene, the reproduction of the video content is controlled in conjunction with the sensor output from the operation information output device 3.
  • The video display system 1 of the present embodiment includes five types of interlocked scenes, from the interlocked scene of the first cycle to the interlocked scene of the fifth cycle, and switches the interlocked scene to be reproduced according to the cycle of the reciprocation of the wrist. The interlocked scene of the first cycle is selected when the cycle of the reciprocating movement of the wrist is the longest, and the interlocked scene of the fifth cycle is selected when the cycle of the reciprocating movement of the wrist is the shortest. For example, the interlocked scene of the first cycle is selected when the cycle of rowing the oar is the longest (when the boat's traveling speed is the slowest), and the interlocked scene of the fifth cycle is selected when the cycle of rowing the oar is the shortest (when the boat's traveling speed is the fastest). The interlocked scenes of the second to fourth cycles are likewise selected according to the length of the cycle of the reciprocating movement of the wrist.
  • The interlocked scene of the first cycle corresponds to the traveling scene of the boat in the case where the section from the start to just before the goal is rowed through at the longest repetition cycle; the interlocked scene of the second cycle corresponds to the traveling scene at the second longest repetition cycle; the interlocked scene of the third cycle corresponds to the traveling scene at the third longest repetition cycle; the interlocked scene of the fourth cycle corresponds to the traveling scene at the second shortest repetition cycle; and the interlocked scene of the fifth cycle corresponds to the traveling scene at the shortest repetition cycle.
  • That is, the interlocked scene of each cycle is a scene shot while changing the traveling speed of the boat for each cycle in the section from the start to just before the goal. The interlocked scenes of the respective cycles have the same content but different traveling speeds.
  • In the example shown in FIG. 9(a), the code F(S) indicates a plurality of shooting frame data belonging to the introduction scene, the code F(E) indicates a plurality of shooting frame data belonging to the finish scene, and the codes F(F1) to F(F5) respectively indicate a plurality of shooting frame data belonging to the interlocked scenes of the first to fifth cycles.
  • FIG. 9(b) is a diagram showing the types of audio data.
  • Like the shooting frame data, the audio data comprises three types of scenes: the introduction scene, the interlocked scene, and the finish scene. Each audio data is paired with the corresponding shooting frame data. Therefore, the audio data of the interlocked scene includes five types, from the interlocked scene of the first cycle to the interlocked scene of the fifth cycle.
  • The audio data may be in any format that the control device 2 can handle; for example, it may be in uncompressed PCM (Pulse Code Modulation) format or compressed MP3 (MPEG-1 Audio Layer-3) format.
  • FIG. 9(c) is a diagram showing the metadata added to the shooting frame data and the audio data.
  • FIG. 9(d) is a diagram showing the relationship between the interlocked scene of each cycle and the reciprocation frequency of the wrist.
  • The metadata includes: the range of the reciprocation frequency of the wrist corresponding to each cycle (first to fifth cycles) of the interlocked scene; marker data added to specific shooting frame data among the plurality of shooting frame data F(F1) to F(F5) belonging to the interlocked scene of each cycle; and start time data indicating the start time of each of the plurality of reproduction audio data.
  • The reciprocation frequency of the wrist is the number of times the reciprocating motion of the wrist is repeated per unit time (for example, one minute). The control unit 21 of the control device 2 acquires the sensor output from the operation information output device 3 and obtains the reciprocation frequency of the wrist from its periodic change.
  • As shown in FIG. 9(d), the range of the reciprocation frequency of the wrist is set to below frequency H2 for the interlocked scene of the first cycle; above frequency H1 and at or below frequency H4 for the interlocked scene of the second cycle; above frequency H3 and below frequency H6 for the interlocked scene of the third cycle; above frequency H5 and below frequency H8 for the interlocked scene of the fourth cycle; and at or above frequency H7 for the interlocked scene of the fifth cycle.
  • Here, the upper limit frequency H2 of the interlocked scene of the first cycle is higher than the lower limit frequency H1 of the interlocked scene of the second cycle; the lower limit frequency H3 of the interlocked scene of the third cycle is lower than the upper limit frequency H4 of the interlocked scene of the second cycle; the upper limit frequency H6 of the interlocked scene of the third cycle is higher than the lower limit frequency H5 of the interlocked scene of the fourth cycle; and the lower limit frequency H7 of the interlocked scene of the fifth cycle is lower than the upper limit frequency H8 of the interlocked scene of the fourth cycle. In other words, the frequency ranges of adjacent cycles overlap each other.
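
One reading of these overlapping bands is hysteresis-style switching: the current cycle is retained while the measured frequency stays inside its own band. A minimal sketch under that assumption (the numeric thresholds are placeholders chosen only so the sketch runs, not values from the patent):

```python
# Frequency thresholds H1..H8 as in FIG. 9(d); placeholder values
# satisfying H1 < H2, H3 < H4, H5 < H6, H7 < H8 and the stated
# overlaps between adjacent bands.
H1, H2, H3, H4, H5, H6, H7, H8 = 18, 22, 20, 26, 24, 30, 28, 34

BANDS = {
    1: (float("-inf"), H2),   # first cycle: below H2
    2: (H1, H4),              # second cycle: above H1, up to H4
    3: (H3, H6),
    4: (H5, H8),
    5: (H7, float("inf")),    # fifth cycle: H7 or above
}

def next_cycle(current_cycle, freq):
    """Select the interlocked-scene cycle (1-5) for the measured wrist
    reciprocation frequency `freq` (round trips per minute).  Because
    the bands of adjacent cycles overlap, the current cycle is kept
    while `freq` stays inside its own band, avoiding rapid
    back-and-forth switching near a boundary.
    """
    lo, hi = BANDS[current_cycle]
    if lo < freq <= hi:            # still inside the current band
        return current_cycle
    for cycle, (lo, hi) in BANDS.items():
        if lo < freq <= hi:        # otherwise take the first matching band
            return cycle
    return current_cycle
```
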
  • The marker data shown in FIG. 9(c) is added to specific shooting frame data. Specifically, the marker data is added to the shooting frame data on which the image displayed on the display device 4 is based at the timing when the user's wrist reaches the proximity position B or the separated position T. Therefore, the reproduction of the video content is controlled so that, at the timing when the user's wrist reaches the proximity position B or the separated position T, the image is displayed based on the shooting frame data to which the marker data is added.
  • For example, marker data B→T is added to the shooting frame data F(F1M1), and marker data T→B is added to the shooting frame data F(F1M2).
  • Here, the shooting frame data F(F1) are the plurality of shooting frame data belonging to the interlocked scene of the first cycle, and the code M1 indicates the first marker data. Therefore, the shooting frame data F(F1M1) is the shooting frame data to which the first marker data is added among the plurality of shooting frame data belonging to the interlocked scene of the first cycle. Similarly, the shooting frame data F(F1M2) to F(F1M4) are the shooting frame data to which the second to fourth marker data are added among the plurality of shooting frame data belonging to the interlocked scene of the first cycle.
  • That is, among the plurality of shooting frame data belonging to the interlocked scene of the first cycle, the shooting frame data F(F1M1) to F(F1M4), to which the respective marker data (B→T, T→B) are added, are the shooting frame data selected when the user's wrist reaches the proximity position B or the separated position T.
  • FIG. 10 is a diagram showing the relationship between the marker data and the shooting frame data in each cycle (first to fifth cycles) of the interlocked scene.
  • As shown in FIG. 10, the plurality of shooting frame data belonging to the interlocked scene of the first cycle are shot at the time interval dt1, and marker data (B→T, T→B) are added to shooting frame data F(F1M1) to F(F1M5).
  • Similarly, the plurality of shooting frame data belonging to the interlocked scene of the second cycle are shot at the time interval dt1, and marker data (B→T, T→B) are added to shooting frame data F(F2M1) to F(F2M5).
  • The plurality of shooting frame data belonging to the interlocked scene of the third cycle are shot at the time interval dt1, and marker data (B→T, T→B) are added to shooting frame data F(F3M1) to F(F3M5).
  • The plurality of shooting frame data belonging to the interlocked scene of the fourth cycle are shot at the time interval dt1, and marker data (B→T, T→B) are added to shooting frame data F(F4M1) to F(F4M5).
  • The plurality of shooting frame data belonging to the interlocked scene of the fifth cycle are shot at the time interval dt1, and marker data (B→T, T→B) are added to shooting frame data F(F5M1) to F(F5M5).
  • The time length TW of the interlocked scene is common to every cycle, and the interlocked scenes of the respective cycles have the same content. The marker data (B→T, T→B) is added to the shooting frame data at the timing when the user's wrist reaches the proximity position B or the separated position T. Therefore, if the ordinal numbers appended after the code M are the same, the contents of the shooting frame data can be said to be common even if the cycles differ.
  • For example, the shooting frame data F(F1M1) belonging to the interlocked scene of the first cycle, to which the first marker data (B→T) is added, can be said to have the same content as the shooting frame data F(F2M1) belonging to the interlocked scene of the second cycle, to which the first marker data (B→T) is added. Likewise, the content of the shooting frame data F(F2M1) is common to that of the shooting frame data F(F3M1) belonging to the interlocked scene of the third cycle.
  • Similarly, the shooting frame data F(F1M2) belonging to the interlocked scene of the first cycle, to which the second marker data (T→B) is added, has content common to the shooting frame data F(F3M2) belonging to the interlocked scene of the third cycle and the shooting frame data F(F5M2) belonging to the interlocked scene of the fifth cycle, to each of which the second marker data (T→B) is added.
  • Switching of the cycle of the interlocked scene is therefore performed between shooting frame data having the same ordinal number appended after the code M.
  • The interlocked scenes have the same content in every cycle, but the traveling speed differs in each cycle.
  • The time interval at the time of shooting between successive shooting frame data is the common time interval dt1 in the interlocked scene of every cycle. Therefore, the number of shooting frame data included between the shooting frame data to which one marker data (B→T or T→B) is added and the shooting frame data to which the next marker data (T→B or B→T) is added differs in each cycle of the interlocked scene.
  • For example, the number N11 of shooting frame data F included between the shooting frame data F(F1M1) and the shooting frame data F(F1M2) in the interlocked scene of the first cycle is larger than the number N21 of shooting frame data F included between the shooting frame data F(F2M1) and the shooting frame data F(F2M2) in the interlocked scene of the second cycle. Similarly, the number N32 of shooting frame data F included between the shooting frame data F(F3M2) and the shooting frame data F(F3M3) in the interlocked scene of the third cycle is larger than the number N42 of shooting frame data F included between the shooting frame data F(F4M2) and the shooting frame data F(F4M3) in the interlocked scene of the fourth cycle.
  • In other words, between the interlocked scenes of the respective cycles, the number of other shooting frame data included between the shooting frame data to which the marker data (B→T) is added (one of the first captured image information and the second captured image information) and the shooting frame data to which the marker data (T→B) is added (the other of the first captured image information and the second captured image information) differs, as the worked example below illustrates.
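
```python
# Worked example with illustrative numbers (not from the patent):
# shooting interval dt1 = 1/30 s.  A 2.0 s stroke in the first cycle
# spans about 60 frames between consecutive markers, while a 1.0 s
# stroke in the second cycle spans about 30; hence N11 > N21 even
# though both scenes show the same content.
dt1 = 1 / 30
n11 = round(2.0 / dt1)   # ~60 frames between F(F1M1) and F(F1M2)
n21 = round(1.0 / dt1)   # ~30 frames between F(F2M1) and F(F2M2)
print(n11, n21)          # 60 30
```
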
  • The control unit 21 of the control device 2 performs reproduction control of sound based on the reproduction audio data.
  • FIG. 11(a) is a diagram showing the types of audio data. FIG. 11(b) is a diagram showing the generation procedure of the reproduction audio data.
  • The storage unit 22 (see FIG. 4(a)) of the control device 2 stores audio data MF1-ALL to MF5-ALL of the interlocked scene of each cycle over the time length TW shown in FIG. 11(a).
  • The control unit 21 of the control device 2 refers to the combination of the audio file and the start time in the metadata shown in FIG. 9(c), and generates reproduction audio data on condition that the elapsed time from the timing of switching to the interlocked scene has reached the start time. For example, the control unit 21 generates reproduction audio data MF1-001 on condition that the elapsed time from the switching timing to the interlocked scene has reached the start time Mt1-001, and generates reproduction audio data MF1-002 on condition that the elapsed time has reached the start time Mt1-002.
  • Hereinafter, the generation of the reproduction audio data MF1-XXX (XXX is a three-digit natural number; the same applies hereinafter) reproduced in the interlocked scene of the first cycle, shown in FIG. 11(b), will be described as an example; the reproduction audio data MF2-XXX to MF5-XXX reproduced in the interlocked scenes of the other cycles can be generated by the same procedure.
  • On condition that the elapsed time from the switching timing to the interlocked scene has reached the start time Mt1-XXX, the control unit 21 of the control device 2 acquires audio data MF1-XXX' by copying the portion of the prescribed time width Mtw starting at the start time Mt1-XXX in the audio data MF1-ALL. The prescribed time width Mtw may be, for example, 0.5 to 1.5 seconds, but is not limited to this range. The control unit 21 then applies fade-in processing and fade-out processing to the copied audio data MF1-XXX' to generate the reproduction audio data MF1-XXX, as sketched below.
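
A sketch of this copy-and-fade procedure, assuming the track is a 1-D sample array; the 50 ms linear ramps are assumptions, since the patent does not specify the fade shape or length:

```python
import numpy as np

def make_clip(mf_all, start_time, mtw, sr, fade=0.05):
    """Cut a clip of width `mtw` seconds from the full-length track
    `mf_all` (samples at rate `sr`) starting at `start_time`, then
    apply linear fade-in/fade-out: MF1-ALL -> MF1-XXX' -> MF1-XXX.
    """
    s = int(start_time * sr)
    clip = mf_all[s:s + int(mtw * sr)].astype(np.float64)  # MF1-XXX'
    n = min(int(fade * sr), len(clip) // 2)
    if n > 0:
        ramp = np.linspace(0.0, 1.0, n)
        clip[:n] *= ramp          # fade-in
        clip[-n:] *= ramp[::-1]   # fade-out
    return clip                   # MF1-XXX
```
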
  • FIG. 12(a) is a diagram showing reproduction of the reproduction audio data at the standard time interval. As shown in FIG. 12(a), suppose that the interlocked scene of the first cycle is selected, that the reproduction audio data MF1-001 starts to be reproduced at the timing when display based on the shooting frame data F(F1L1) is performed, and that the reproduction audio data MF1-002 starts to be reproduced at the timing when display based on the shooting frame data F(F1L2) is performed. In this case, since the fade-out of the reproduction audio data MF1-001 and the fade-in of the reproduction audio data MF1-002 overlap, the reproduction audio data MF1-001 and MF1-002 are reproduced at an appropriate time interval.
  • FIG. 12(b) is a diagram showing reproduction of the reproduction audio data at a time interval narrower than the standard time interval. Even though the reproduction start timing of the reproduction audio data MF1-002 is earlier than the standard, the fade-in of the reproduction audio data MF1-002 suppresses any sense of discomfort at its reproduction start, and the fade-out of the reproduction audio data MF1-001 suppresses any sense of discomfort at its reproduction end.
  • FIG. 12(c) is a diagram showing reproduction of the reproduction audio data at a time interval wider than the standard time interval. In the case of FIG. 12(c) as well, the fade-out of the reproduction audio data MF1-001 and the fade-in of the reproduction audio data MF1-002 suppress the sense of discomfort caused by the widened interval.
  • In order to interlock the reciprocating motion of the specific part with the captured images in the interlocked scene, the control unit 21 calculates, based on the sensor output, the movement time taken for the reciprocation of the specific part as a prediction time. This prediction time may be calculated for each one-way movement based on the sensor output for one direction of the reciprocation, or may be calculated for each moving direction based on the sensor outputs for both directions of the reciprocation. The prediction time is the average movement time of the reciprocating motion. In the present embodiment, the case of calculating the prediction time for one direction is described.
  • Specifically, the control unit 21 calculates the movement time (prediction time) taken for a one-way movement of the specific part in the reciprocation, based on the sensor output (operation information) output from the operation information output device 3 (operation information output unit) for the movements in the same direction performed a plurality of times before this one-way movement (described in detail below).
  • The control unit 21 (time interval setting unit 21d) then sets the time interval from the shooting frame data to which one marker data (B→T or T→B) is added (first captured image information) to the shooting frame data to which the next marker data (T→B or B→T) is added (second captured image information) to a time interval based on the above-mentioned prediction time.
  • The control unit 21 (display image information generation unit 21f) generates display image information by performing alpha blending processing on a pair of shooting frame data adjacent to each other in accordance with the timing defined by the display device 4.
  • FIGS. 13 to 18 show the procedure of generating display image information corresponding to the first marker data F(F1M1) in the interlocked scene of the first cycle. Although the first marker data F(F1M1) is described below, it is merely an example; the same procedure applies to odd-numbered marker data such as the third marker data F(F1M3) and the fifth marker data F(F1M5).
  • FIG. 13(a) is a diagram showing the calculation procedure of the prediction time.
  • The prediction time TS5' shown in FIG. 13(a) is the predicted time from the latest sensor output acquisition timing ts5 until the sensor output corresponding to the shooting frame data F(F1M1) is acquired. In other words, the prediction time TS5' is the movement time, predicted at the timing ts5, until the user's wrist moves from the separated position T to the proximity position B.
  • The user's wrist is located at the separated position T at the timings ts1, ts3, and ts5, and at the proximity position B at the timings ts2 and ts4. The control unit 21 of the control device 2 recognizes the respective timings ts1 to ts5 based on the sensor output from the operation information output device 3.
  • A preparation period is set immediately after switching to the interlocked scene. During the preparation period, reproduction of the video content interlocked with the user's motion is not performed, and the shooting frame data are displayed at the predetermined interval dt1. The preparation period continues, for example, for a predetermined time length or until predetermined shooting frame data is displayed. During the preparation period, preparation for the interlocking control is performed.
  • Specifically, the control unit 21 acquires, at the timing ts2, the movement time TS1 required to move from the separated position T to the proximity position B, and acquires, at the timing ts3, the movement time TS2 required to move from the proximity position B to the separated position T. Similarly, the control unit 21 acquires, at the timing ts4, the movement time TS3 required to move from the separated position T to the proximity position B, and acquires, at the timing ts5, the movement time TS4 required to move from the proximity position B to the separated position T.
  • The control unit 21 of the control device 2 then obtains the prediction time TS5'. The prediction time TS5' is the predicted time for the one-way movement in which the user's wrist moves from the separated position T to the proximity position B, out of the one-way movements in which the wrist moves from one of the proximity position B and the separated position T toward the other. The control unit 21 acquires the prediction time TS5' based on the one-way movements from the separated position T to the proximity position B performed before the timing ts5 at which the latest sensor output was acquired. Specifically, the control unit 21 obtains the average of the time TS1 required from the timing ts1 to the timing ts2 and the time TS3 required from the timing ts3 to the timing ts4 as the prediction time TS5'.
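
A minimal sketch of this averaging-based prediction (the helper name and the numeric values are illustrative):

```python
def predict_travel_time(durations):
    """Predict the next one-way travel time of the wrist from the
    travel times of earlier movements in the same direction.  In the
    embodiment the prediction averages the last two same-direction
    strokes, e.g. TS5' = (TS1 + TS3) / 2.
    """
    recent = durations[-2:]            # the last two same-direction strokes
    return sum(recent) / len(recent)

# Example following FIG. 13(a): TS1 and TS3 are the two previous
# T -> B travel times (seconds); TS5' predicts the next T -> B stroke.
TS1, TS3 = 1.10, 0.94
TS5_pred = predict_travel_time([TS1, TS3])   # (1.10 + 0.94) / 2 = 1.02
```
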
  • FIG. 13(b) is a diagram showing the variation of the prediction time TS5'.
  • As shown in FIG. 13(b), variation may occur in the prediction time TS5'. This is because the respective timings ts1 to ts5 are acquired along with the reciprocating motion of the user's wrist. With the variation of the prediction time TS5', the timing ts6' at which the prediction time TS5' has elapsed from the timing ts5 may also vary, as in the timings ts6a' to ts6c'.
  • Consequently, the time interval between the plurality of shooting frame data from the shooting frame data F(F1M0) displayed at the timing ts5 to the shooting frame data F(F1M1) may deviate from the time interval dt1 between the plurality of shooting frame data in the preparation period. When the prediction time corresponds to the timing ts6', the time interval between the shooting frame data from F(F1M0) to F(F1M1) is equal to the time interval dt1 of the preparation period; when the prediction time is shorter (as at the timing ts6a'), the time interval becomes shorter than dt1; and when the prediction time is longer (as at the timing ts6c'), the time interval becomes longer than dt1.
  • FIG. 14 is a diagram showing the time intervals of the shooting frame data for the prediction time TS5' and the interval adjustment curve SP1.
  • The plurality of shooting frame data from the shooting frame data F(F1M0) to the shooting frame data F(F1M1) are defined at the equal time interval dt2. The time intervals between the shooting frame data are then adjusted by the interval adjustment curve SP1 so that the image is displayed smoothly at the control switching timing (for example, the timing ts5).
  • The interval adjustment curve SP1 is obtained by acquiring the frame rate (the number of frame data per unit time) at each of the timing at which the latest sensor output was acquired and the timing defined by the prediction time, and performing spline interpolation (for example, cubic spline interpolation) between them. Specifically, the control unit 21 of the control device 2 calculates the effective frame rate V0 based on the number of shooting frame data actually displayed by the timing ts5 and the elapsed time up to the timing ts5. Further, the control unit 21 calculates the frame rate V1' based on the number of shooting frame data included from the shooting frame data F(F1M0) to the shooting frame data F(F1M1) and the prediction time from the timing ts5 to the timing ts6'. The control unit 21 then spline-interpolates between the effective frame rate V0 and the frame rate V1' to acquire the interval adjustment curve SP1.
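
A sketch of deriving such a curve with cubic spline interpolation between the two frame rates, here using SciPy; the helper names are illustrative, and the clamped boundary condition (zero slope at both ends) is an assumption chosen to make the rate change smooth at the switching timings:

```python
from scipy.interpolate import CubicSpline
import numpy as np

def interval_adjustment(ts5, ts6_pred, frames_shown, elapsed, frames_to_marker):
    """Derive an interval adjustment curve like SP1.  V0 is the
    effective frame rate up to ts5; V1' is the rate needed to reach the
    marker frame F(F1M1) at the predicted timing ts6'.  A cubic spline
    between the two rates is walked to place each intermediate frame.
    """
    v0 = frames_shown / elapsed                  # effective frame rate V0
    v1 = frames_to_marker / (ts6_pred - ts5)     # target frame rate V1'
    sp1 = CubicSpline([ts5, ts6_pred], [v0, v1], bc_type="clamped")

    # Each display interval is 1 / (local frame rate), so the per-frame
    # intervals taper gradually from about 1/V0 toward 1/V1'.
    times, t = [ts5], ts5
    for _ in range(frames_to_marker):
        t = t + 1.0 / float(sp1(t))
        times.append(t)
    return np.diff(times)                        # adjusted intervals
```
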
  • FIG. 15 is a diagram showing the time intervals between the shooting frame data adjusted by the interval adjustment curve SP1.
  • The interval between the shooting frame data F(F1M0+2) and the shooting frame data F(F1M0+3) is adjusted from the time interval dt2 to the time interval dt2a. Similarly, the interval between the shooting frame data F(F1M0+n) and the shooting frame data F(F1M0+m) is adjusted from the time interval dt2 to the time interval dt2b, and the interval between the shooting frame data F(F1M1-3) and the shooting frame data F(F1M1-2) is adjusted from the time interval dt2 to the time interval dt2c. In this way, the interval adjustment curve SP1 adjusts the time intervals between the shooting frame data in a stepwise manner toward the shooting frame data F(F1M1).
  • FIG. 16 is a diagram showing the time intervals between the shooting frame data adjusted by another interval adjustment curve SP2.
  • The interval between the shooting frame data F(F1M0+2) and the shooting frame data F(F1M0+3) is adjusted from the time interval dt2 to the time interval dt2d. Similarly, the interval between the shooting frame data F(F1M0+n) and the shooting frame data F(F1M0+m) is adjusted from the time interval dt2 to the time interval dt2e, and the interval between the shooting frame data F(F1M1-3) and the shooting frame data F(F1M1-2) is adjusted from the time interval dt2 to the time interval dt2f. In this way, the interval adjustment curve SP2 also adjusts the time intervals between the shooting frame data in a stepwise manner toward the shooting frame data F(F1M1).
  • FIG. 17 is a diagram showing the generation of the combined frame data F(mix1) to F(mix4). FIG. 18(a) is a diagram showing the generation of the combined frame data F(mix1), FIG. 18(b) the generation of F(mix2), FIG. 18(c) the generation of F(mix3), and FIG. 18(d) the generation of F(mix4).
  • The display device 4 performs refresh at the refresh timings Rf1 to Rf4. The refresh timings Rf1 to Rf4 arrive at the fixed time interval dRf, which is defined by the refresh rate.
  • As shown in FIG. 18(a), the control unit 21 of the control device 2 combines the shooting frame data F(F1M0) preceding the refresh timing Rf1 and the shooting frame data F(F1M0+1) following the refresh timing Rf1 into the combined frame data F(mix1) by alpha blending processing, using the division ratio of the time interval between the shooting frame data at the refresh timing Rf1 as the coefficient. Specifically, where α1 is the elapsed time from the display timing of F(F1M0) to the refresh timing Rf1, they are combined so that the ratio of the shooting frame data F(F1M0) is [(dt2-α1)/dt2] and the ratio of the shooting frame data F(F1M0+1) is [α1/dt2].
  • Similarly, as shown in FIG. 18(b), the control unit 21 combines the shooting frame data F(F1M0) preceding the refresh timing Rf2 and the shooting frame data F(F1M0+1) following the refresh timing Rf2 into the combined frame data F(mix2), so that the ratio of F(F1M0) is [(dt2-β1)/dt2] and the ratio of F(F1M0+1) is [β1/dt2].
  • As shown in FIG. 18(c), the control unit 21 combines the shooting frame data F(F1M0+1) preceding the refresh timing Rf3 and the shooting frame data F(F1M0+2) following the refresh timing Rf3 into the combined frame data F(mix3), so that the ratio of F(F1M0+1) is [(dt2-γ1)/dt2] and the ratio of F(F1M0+2) is [γ1/dt2].
  • As shown in FIG. 18(d), the control unit 21 combines the shooting frame data F(F1M0+1) preceding the refresh timing Rf4 and the shooting frame data F(F1M0+2) following the refresh timing Rf4 into the combined frame data F(mix4), so that the ratio of F(F1M0+1) is [(dt2-δ1)/dt2] and the ratio of F(F1M0+2) is [δ1/dt2].
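
A sketch of the per-refresh alpha blend, following the division-ratio reading above (the frame arrays and numeric timings are illustrative):

```python
import numpy as np

def blend_for_refresh(frame_a, frame_b, offset, dt2):
    """Alpha-blend two adjacent shooting frames for one display refresh.
    `offset` is the time from frame_a's display timing to the refresh
    timing (0 <= offset <= dt2); frame_a is weighted (dt2 - offset)/dt2
    and frame_b is weighted offset/dt2.  Frames are float arrays.
    """
    w_b = offset / dt2
    return (1.0 - w_b) * frame_a + w_b * frame_b

# Usage: with dt2 = 40 ms and a refresh 15 ms after F(F1M0) is due,
# F(mix1) weights F(F1M0) by 25/40 and F(F1M0+1) by 15/40.
f_a = np.zeros((4, 4, 3))
f_b = np.ones((4, 4, 3))
f_mix1 = blend_for_refresh(f_a, f_b, offset=0.015, dt2=0.040)
```
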
  • FIGS. 19 to 22 show the procedure of generating display image information corresponding to the second marker data F(F1M2) in the interlocked scene of the first cycle. Although the second marker data F(F1M2) is described below, it is merely an example; the same procedure applies to even-numbered marker data such as the fourth marker data F(F1M4) and the sixth marker data F(F1M6).
  • FIG. 19(a) is a diagram showing the calculation procedure of the prediction time.
  • The prediction time TS6' shown in FIG. 19(a) is the predicted time from the latest sensor output acquisition timing ts6 until the sensor output corresponding to the shooting frame data F(F1M2) is acquired.
  • Following the example described in FIG. 13, the user's wrist is located at the proximity position B at the timing ts6. The control unit 21 of the control device 2 recognizes the timing ts6 based on the sensor output from the operation information output device 3.
  • The control unit 21 of the control device 2 then acquires the prediction time TS6'. The prediction time TS6' is the predicted time for the one-way movement in which the user's wrist moves from the proximity position B to the separated position T, out of the one-way movements in which the wrist moves from one of the proximity position B and the separated position T toward the other. The control unit 21 acquires the prediction time TS6' based on the one-way movements from the proximity position B to the separated position T performed before the timing ts6 at which the latest sensor output was acquired. Specifically, the control unit 21 obtains the average of the time TS2 required from the timing ts2 to the timing ts3 and the time TS4 required from the timing ts4 to the timing ts5 as the prediction time TS6'.
  • FIG. 19(b) is a diagram showing the variation of the prediction time. As shown in FIG. 19(b), variation may occur in the prediction time TS6'. With the variation of the prediction time TS6', the timing ts7' at which the prediction time TS6' has elapsed from the timing ts6 may also vary, as in the timings ts7a' to ts7c'.
  • FIG. 20 is a diagram showing the time intervals of the shooting frame data during the prediction time TS6' and the interval adjustment curve SP3.
  • the plurality of shooting frame data from the shooting frame data F(F1M1) to the shooting frame data F(F1M2) are defined at equal time intervals dt3.
  • the time intervals between the shooting frame data are adjusted by the interval adjustment curve SP3 so that the image is displayed smoothly at the control switching timing (for example, timing ts6).
  • the interval adjustment curve SP3 is created by spline interpolation (for example, cubic spline interpolation) between the effective frame rate V1 and the frame rate V2'.
  • the interval adjustment curve SP3 is equivalent to the interval adjustment curve SP1 described above.
  • the adjustment of the time intervals between the shooting frame data using the interval adjustment curve SP3 is performed in the same manner as the adjustment using the interval adjustment curve SP1 described above, and therefore the description is omitted. A sketch of building such a curve follows.
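A sketch of how an interval adjustment curve such as SP3 could be built by cubic spline interpolation between the effective frame rate V1 and the frame rate V2', as the text describes. The concrete rates, the knot placement, and the use of scipy are assumptions for illustration; the patent does not prescribe them.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Assumed effective frame rates on either side of the control switching
# timing; the curve smoothly bridges V1 (before) and V2' (after).
V1, V2p = 60.0, 45.0
knots_t = np.array([0.0, 0.33, 0.67, 1.0])     # normalized time
knots_v = np.array([V1, 55.0, 50.0, V2p])      # frame-rate values at the knots
sp3 = CubicSpline(knots_t, knots_v)

# The per-frame interval is the reciprocal of the rate, so the interval
# between shooting frame data changes gradually instead of jumping at
# the switching timing.
for t in np.linspace(0.0, 1.0, 5):
    rate = float(sp3(t))
    print(f"t={t:.2f}  rate={rate:5.1f} fps  dt={1000.0 / rate:5.1f} ms")
```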
  • FIG. 21 is a diagram showing the generation of combined frame data F(mix11) to F(mix14).
  • FIG. 22A is a diagram showing the generation of combined frame data F(mix11).
  • FIG. 22B is a diagram showing the generation of combined frame data F(mix12).
  • FIG. 22C is a diagram showing the generation of combined frame data F(mix13).
  • FIG. 22D is a diagram showing the generation of combined frame data F(mix14).
  • the display device 4 performs refresh at refresh timings Rf11 to Rf14.
  • the refresh timings Rf11 to Rf14 arrive at fixed time intervals dRf defined by the refresh rate.
  • the control unit 21 of the control device 2 uses the shooting frame data F(F1M1) before the refresh timing Rf11 and the shooting frame data F(F1M1+1) after the refresh timing Rf11 to generate combined frame data.
  • the combined frame data F(mix11) is combined by alpha blending, using the division ratio of the time interval between the shooting frame data at the refresh timing Rf11 as the coefficient:
  • the ratio of the shooting frame data F(F1M1) is [(1-δ2)/dt3], and
  • the ratio of the shooting frame data F(F1M1+1) is [δ2/dt3].
  • the control unit 21 of the control device 2 likewise uses the shooting frame data F(F1M1) before the refresh timing Rf12 and the shooting frame data F(F1M1+1) after the refresh timing Rf12.
  • the combined frame data F(mix12) is combined by alpha blending, using the division ratio of the time interval between the shooting frame data at the refresh timing Rf12 as the coefficient:
  • the ratio of the shooting frame data F(F1M1) is [(1-δ2)/dt3], and
  • the ratio of the shooting frame data F(F1M1+1) is [δ2/dt3].
  • the control unit 21 of the control device 2 uses the shooting frame data F(F1M1) before the refresh timing Rf13 and the shooting frame data F(F1M1+1) after the refresh timing Rf13.
  • the combined frame data F(mix13) is combined by alpha blending, using the division ratio of the time interval between the shooting frame data at the refresh timing Rf13 as the coefficient. As shown in FIG. 22C,
  • the ratio of the shooting frame data F(F1M1) is [(1-δ2)/dt3], and
  • the ratio of the shooting frame data F(F1M1+1) is [δ2/dt3].
  • the control unit 21 of the control device 2 uses the shooting frame data F(F1M1+1) before the refresh timing Rf14 and the shooting frame data F(F1M1+2) after the refresh timing Rf14.
  • the combined frame data F(mix14) is combined by alpha blending, using the division ratio of the time interval between the shooting frame data at the refresh timing Rf14 as the coefficient. As shown in FIG. 22D,
  • the ratio of the shooting frame data F(F1M1+1) is [(1-δ2)/dt3], and
  • the ratio of the shooting frame data F(F1M1+2) is [δ2/dt3]. The selection of the frame pair around each refresh timing is sketched below.
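A sketch of the frame pairing implied by FIGS. 21 and 22: each refresh timing, arriving at the fixed interval dRf, is matched with the pair of shooting frames that straddle it, and the offset δ within the inter-frame interval dt3 becomes the blend coefficient. The timestamps and intervals below are illustrative assumptions.

```python
import bisect

def frame_pair_for_refresh(frame_times, rf):
    """Return indices (i, i+1) of the shooting frames immediately before
    and after refresh timing rf, as in F(mix11) to F(mix14)."""
    i = bisect.bisect_right(frame_times, rf) - 1
    return i, i + 1

dt3 = 0.040                    # adjusted interval between shooting frames
dRf = 0.0167                   # fixed refresh interval of the display device
frame_times = [k * dt3 for k in range(10)]

for n in range(4):             # refresh timings Rf11 to Rf14
    rf = 0.005 + n * dRf
    i, j = frame_pair_for_refresh(frame_times, rf)
    delta = rf - frame_times[i]        # offset used as the blend coefficient
    print(f"Rf{11 + n}: blend frames {i} and {j}, delta = {delta:.4f} s")
```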
  • FIG. 23A is a flowchart showing a signal monitoring process performed by the CPU 21a.
  • FIG. 23B is a flowchart showing main processing performed by the CPU 21a.
  • <Signal monitoring process> In the signal monitoring process shown in FIG. 23A, the CPU 21a determines whether a predetermined data acquisition timing has come (step S1). If it determines that the data acquisition timing has come, the CPU 21a acquires various data from the operation information output device 3 (step S2); for example, it acquires the sensor output from the operation information output device 3, or acquires the operation information for the reproduction related switch 36. After acquiring the various data in step S2, the CPU 21a returns to step S1 and again determines whether the data acquisition timing has come. This polling loop is sketched below.
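A schematic sketch of the signal monitoring loop of FIG. 23A (steps S1 and S2), with a placeholder data source standing in for the operation information output device 3; the polling period and data format are assumptions.

```python
import time

def signal_monitoring(acquire, period_s=0.01, cycles=5):
    """Loop of steps S1-S2: wait for the data acquisition timing, then
    acquire sensor output and switch operation information."""
    samples = []
    for _ in range(cycles):       # stand-in for an endless monitoring loop
        time.sleep(period_s)      # step S1: data acquisition timing arrives
        samples.append(acquire()) # step S2: acquire various data
    return samples

# Placeholder data source standing in for the operation information
# output device 3 (sensor output plus switch states).
fake = iter([{"sensor": v, "switch": None} for v in (-2, -1, 0, 1, 2)])
print(signal_monitoring(lambda: next(fake)))
```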
  • <Main process> In the main process shown in FIG. 23B, the CPU 21a determines whether a reproduction operation of the video content has been performed with the reproduction related switch 36 (step S11). This determination is made based on the data acquired in step S2 of the signal monitoring process, specifically the operation information for the reproduction related switch 36.
  • when it is determined that the reproduction operation has been performed, the CPU 21a starts reproduction of the introduction scene of the video content (step S12).
  • the CPU 21a reads the shooting frame data of the introduction scene described in FIG. 9A and stores the shooting frame data in the frame buffer 23 as display frame data.
  • the display frame data stored in the frame buffer 23 is transmitted to the display device 4 each time the refresh timing comes.
  • the display device 4 displays an image according to the display frame data.
  • the CPU 21a reads the audio data of the introduction scene described in FIG. 9B, generates an acoustic signal based on the audio data, and outputs the acoustic signal to the sound output device 5.
  • the sound output device 5 outputs a sound according to the sound signal.
  • when it is determined in step S11 that the reproduction operation of the video content has not been performed, or when reproduction of the introduction scene has been started in step S12, the CPU 21a determines whether the introduction scene is being reproduced (step S13). If it determines that the introduction scene is being reproduced, the CPU 21a determines whether the first mode switch 34 has been operated (step S14). This determination is made based on the data acquired in step S2 of the signal monitoring process, specifically the operation information for the first mode switch 34. When the first mode switch 34 has been operated, the CPU 21a performs interlocked scene reproduction processing (step S15). The interlocked scene reproduction processing will be described later.
  • when the first mode switch 34 has not been operated, the CPU 21a determines whether reproduction of the introduction scene has ended (step S16). If it determines that reproduction of the introduction scene has ended, the CPU 21a performs the interlocked scene reproduction processing (step S15). If it determines in step S13 that the introduction scene is not being reproduced, the CPU 21a determines whether the interlocked scene is being reproduced (step S17). If it determines that the interlocked scene is being reproduced, the CPU 21a performs the interlocked scene reproduction processing (step S15). If it determines in step S16 that reproduction of the introduction scene has not ended, or in step S17 that the interlocked scene is not being reproduced, the CPU 21a returns to the process of step S11. This scene dispatch is sketched below.
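A condensed sketch of the scene dispatch of FIG. 23B (steps S11 to S19, including the second mode switch handling described next); the scene flags and the dictionary representation of the operation information are assumptions.

```python
def main_step(state, ops):
    """One pass of the main process: decide which scene to reproduce.
    state["scene"] is None, "intro", "interlocked" or "finish"; ops holds
    the operation information acquired by the signal monitoring process."""
    if state["scene"] is None and ops.get("play"):        # S11 -> S12
        state["scene"] = "intro"
    elif state["scene"] == "intro":
        if ops.get("mode1") or state.get("intro_done"):   # S14 / S16 -> S15
            state["scene"] = "interlocked"
    elif state["scene"] == "interlocked":
        if ops.get("mode2"):                              # S18 -> S19
            state["scene"] = "finish"
    return state

state = {"scene": None, "intro_done": False}
for ops in ({"play": True}, {}, {"mode1": True}, {"mode2": True}):
    state = main_step(state, ops)
    print(state["scene"])        # intro, intro, interlocked, finish
```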
  • during reproduction of the interlocked scene, the CPU 21a determines whether the second mode switch 35 has been operated (step S18). This determination is made based on the data acquired in step S2 of the signal monitoring process, specifically the operation information for the second mode switch 35.
  • when the second mode switch 35 has been operated, the CPU 21a performs finish scene reproduction processing (step S19).
  • after that, the CPU 21a returns to the process of step S11.
  • in the finish scene reproduction processing, the CPU 21a reads the shooting frame data of the finish scene described in FIG. 9A and stores it in the frame buffer 23 as display frame data.
  • the display device 4 displays an image according to the display frame data.
  • the CPU 21a reads the audio data of the finish scene described in FIG. 9(b), generates an acoustic signal based on the audio data, and outputs the acoustic signal to the sound output device 5.
  • the sound output device 5 outputs a sound according to the acoustic signal. The finish scene reproduction processing is performed until reproduction of the series of finish scenes is completed.
  • <Interlocked scene reproduction processing> In the interlocked scene reproduction processing, the CPU 21a first determines whether the cycle of the interlocked scene has not yet been set (step S21). In other words, the CPU 21a determines whether it is immediately after the transition from the introduction scene to the interlocked scene.
  • when the cycle has not been set, the CPU 21a (captured image information selection unit 21g) sets an initial cycle (step S22). In the present embodiment, the CPU 21a sets the first cycle as the initial cycle, and reproduction of the interlocked scene of the first cycle is started accordingly. Next, the CPU 21a determines whether the sensor output from the operation information output device 3 has the value "0" (step S23).
  • in other words, the CPU 21a determines whether the user's wrist is at either the close position B or the separated position T. If the CPU 21a determines that the sensor output is the value "0", the process proceeds to step S24. If the CPU 21a determines that the sensor output is not the value "0", the process proceeds to step S38.
  • in step S24, the CPU 21a determines whether the sensor output has changed from a negative value ("-") to the value "0". In other words, the CPU 21a determines whether the wrist of the user, which had been moving in the direction approaching the user's chest, has stopped. If the sensor output has changed from a negative value to the value "0", the CPU 21a determines that the user's wrist is located at the close position (step S25). In other words, the CPU 21a (image selection unit 21c) determines that the display timing of the shooting frame data to which the marker data (B→T) is added has come.
  • when it is determined in step S24 that the sensor output has not changed from a negative value to the value "0", the CPU 21a determines whether the sensor output has changed from a positive value ("+") to the value "0" (step S26). In other words, the CPU 21a determines whether the wrist of the user, which had been moving in the direction away from the user's chest, has stopped. When the sensor output has changed from a positive value to the value "0", the CPU 21a determines that the user's wrist is located at the separated position (step S27). In other words, the CPU 21a (image selection unit 21c) determines that the display timing of the shooting frame data to which the marker data (T→B) is added has come.
  • when it is determined in step S26 that the sensor output has not changed from a positive value to the value "0" (that is, when the sensor output was already the value "0" in the previous process), the CPU 21a determines whether the sensor output of the value "0" has continued for a prescribed number of times (step S28). If it determines that the sensor output of the value "0" has continued for the prescribed number of times, the CPU 21a determines that the user's wrist is at rest (step S29). If it determines that the sensor output of the value "0" has not continued for the prescribed number of times, the CPU 21a proceeds to the process of step S38. This classification can be sketched as follows.
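A sketch of the sign-change classification of steps S24 to S29, assuming integer sensor outputs whose sign encodes the movement direction as in FIG. 7; the rest-detection count and data representation are illustrative.

```python
def classify(prev, curr, zero_run, rest_count=5):
    """Classify the wrist state from two consecutive sensor outputs.
    Returns (event, zero_run): event is "close_B" (step S25),
    "separated_T" (step S27), "rest" (step S29) or None; zero_run counts
    consecutive zero outputs for the check of step S28."""
    if curr == 0:
        zero_run += 1
        if prev < 0:
            return "close_B", zero_run      # marker data (B -> T) timing
        if prev > 0:
            return "separated_T", zero_run  # marker data (T -> B) timing
        if zero_run >= rest_count:
            return "rest", zero_run
        return None, zero_run
    return None, 0

outputs = [-2, -1, 0, 0, 0, 0, 0, 1, 2, 1, 0]
prev, zero_run = outputs[0], 0
for curr in outputs[1:]:
    event, zero_run = classify(prev, curr, zero_run)
    if event:
        print(event)
    prev = curr
```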
  • in step S30, the CPU 21a determines whether it is necessary to change the cycle of the interlocked scene. As described in FIG. 9D, the CPU 21a acquires the reciprocation frequency of the wrist based on the sensor signal from the operation information output device 3, and selects the interlocked scene of the cycle suited to the acquired reciprocation frequency. If the cycle of the selected interlocked scene differs from the cycle of the current interlocked scene, the CPU 21a determines that a cycle change of the interlocked scene is necessary; if the two cycles are the same, it determines that a cycle change is not necessary. The CPU 21a proceeds to the process of step S34 when determining that a cycle change of the interlocked scene is necessary, and proceeds to the process of step S31 when determining that it is not necessary.
  • when no cycle change is necessary, the CPU 21a predicts the next sensor output timing (step S31). For example, as described in FIG. 13A and FIG. 19A, the CPU 21a acquires the prediction time based on the one-way movements of the wrist performed before the latest sensor output timing, and predicts the next sensor output timing by adding the prediction time to the latest sensor output timing.
  • the CPU 21a (time interval setting unit 21d) sets the time interval between the shooting frame data from the latest sensor output timing to the next sensor output timing to a time interval based on the predicted time (step S32). For example, as described in FIG. 14, the CPU 21a selects the shooting frame data F(F1M0) at timing ts5 and the shooting frame data F(F1M1) at timing ts6', and sets the time interval dt2 between the plurality of shooting frame data belonging between the two.
  • similarly, as described in FIG. 20, the CPU 21a selects the shooting frame data F(F1M1) at timing ts6 and the shooting frame data F(F1M2) at timing ts7', and sets the time interval dt3 between the plurality of shooting frame data belonging between the two.
  • next, the CPU 21a sets an interval adjustment curve (step S33).
  • specifically, the interval adjustment curve SP1 described in FIG. 14, the interval adjustment curve SP2 described in FIG. 16, or the interval adjustment curve SP3 described in FIG. 20 is set.
  • by setting the interval adjustment curve, the time interval between the shooting frame data can be changed from the timing when the user's wrist is located at one of the close position B and the separated position T toward the timing when it reaches the other.
  • when a cycle change is necessary, the CPU 21a (captured image information selection unit 21g) changes the cycle of the interlocked scene (step S34), and then predicts the next sensor output timing (step S35). For example, as described in FIG. 13A and FIG. 19A, the CPU 21a acquires the prediction time based on the one-way movements of the wrist performed before the latest sensor output timing, and predicts the next sensor output timing by adding the prediction time to the latest sensor output timing.
  • the CPU 21a (time interval setting unit 21d) sets the time interval between the shooting frame data from the latest sensor output timing to the next sensor output timing to a time interval based on the predicted time (step S36).
  • the processes of step S35 and step S36 are the same as the processes of step S31 and step S32 described above, and therefore their description is omitted.
  • after executing the process of step S36, the CPU 21a sets a composite curve (step S37).
  • the composite curve is a curve that defines the composite ratio between the shooting frame data belonging to the interlocked scene of the current cycle and the shooting frame data belonging to the interlocked scene of the next cycle.
  • in the combined frame generation processing (step S39) described later, the CPU 21a combines the two shooting frame data at the composite ratio defined by the composite curve to generate combined frame data. As a result, the image can be changed smoothly when the cycle of the interlocked scene is switched. A sketch of such a curve follows.
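A sketch of a composite curve for step S37. The patent states only that the curve defines the composite ratio between the current cycle's and the next cycle's shooting frame data; the linear ramp below is an assumed shape for illustration.

```python
def composite_curve(t, t0, t1):
    """Composite ratio of the next cycle's frame at time t: 0.0 before the
    switch begins (t0), 1.0 after it completes (t1), linear in between."""
    if t <= t0:
        return 0.0
    if t >= t1:
        return 1.0
    return (t - t0) / (t1 - t0)

def mix_cycles(frame_cur, frame_next, t, t0, t1):
    """Blend frames of the current and the next cycle by the curve ratio."""
    r = composite_curve(t, t0, t1)
    return (1.0 - r) * frame_cur + r * frame_next

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, mix_cycles(100.0, 200.0, t, t0=0.2, t1=0.8))
```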
  • after the process of step S33 or step S37 has been executed, or when it is determined in step S23 that the sensor output is not the value "0", the CPU 21a determines whether the refresh timing of the display device 4 has come (step S38).
  • when it is determined that the refresh timing has come, the CPU 21a (display image information generation unit 21f) generates combined frame data by performing alpha blending on the pair of shooting frame data, and writes the combined frame data into the frame buffer 23 as display frame data (step S39).
  • the display frame data (combined frame data) written into the frame buffer 23 is output to the display device 4 at the refresh timing, whereby the display device 4 displays an image based on the display frame data. When it is determined in step S38 that the refresh timing has not come, and after the process of step S39 has been executed, the CPU 21a returns to the main process.
  • by performing the above processing, the video display system 1 according to the present embodiment can reproduce the interlocked scene in interlock with the reciprocating movement of the user's wrist.
  • accordingly, the types of exercise that can be targeted are expanded compared with conventional systems directed at exercise that continues traveling in one direction.
  • in the video display system 1 of the present embodiment, shooting frame data is selected at each of the first timing, at which the user's wrist is located at one of the close position B and the separated position T, and the second timing, at which the wrist reaches the other of the two positions after the prediction time has elapsed from the first timing, and the time interval between the plurality of shooting frame data belonging between the selected shooting frame data is set to a time interval based on the prediction time. Therefore, even if the time of the reciprocating movement of the wrist varies, the reproduction of the video content can follow it.
  • further, in the video display system 1 of the present embodiment, the time interval between the shooting frame data is changed along the way from the first timing to the second timing by using the interval adjustment curves SP1 to SP3. Therefore, even when the selected set of shooting frame data is updated, for example from the set of the shooting frame data F(F1M0) and the shooting frame data F(F1M1) shown in FIG. 14 to the following set, the image displayed on the display device 4 can be changed smoothly.
  • in addition, alpha blending is performed on successive pairs of shooting frame data to generate combined frame data, and the generated combined frame data is displayed on the display device 4 as display frame data; in this respect as well, the image displayed on the display device 4 can be changed smoothly.
  • for example, the shooting frame data F(F1M0) is stored in the frame buffer 23 at timing Rf20,
  • the shooting frame data F(F1M0+1) is stored in the frame buffer 23 at timing Rf21, and
  • the shooting frame data F(F1M0+2) is stored in the frame buffer 23 at timing Rf22.
  • even if the time interval from timing Rf20 to timing Rf21 differs from the time interval from timing Rf21 to timing Rf22, the image can be displayed appropriately because the display device 4 supports a variable refresh rate.
  • the interlocked scene reproduced first is not limited to the interlocked scene of the first cycle.
  • the interlocked scene of the second cycle may be used, or the interlocked scene of the third cycle may be used.
  • in the above embodiment, the interlocking control is performed on specific one-way movements of the reciprocating motion; specifically, the interlocking control is performed using the movement time of the wrist from the close position B to the separated position T and the movement time of the wrist from the separated position T to the close position B.
  • however, the interlocking control is not limited to these. For example, the control may be performed based on the movement time obtained by averaging the movement time of the wrist from the close position B to the separated position T and the movement time of the wrist from the separated position T to the close position B.
  • the interlocking control may also be performed on the basis of the reciprocating motion as a whole. Specifically, the interlocking control may be performed based on the movement time of the wrist from the close position B through the separated position T back to the close position B. Similarly, the interlocking control may be performed using the movement time of the wrist from the separated position T through the close position B back to the separated position T.
  • although the above embodiment illustrates the control device 2 including the CPU 21a, the RAM 21b, and the frame buffer 23, the configuration is not limited thereto.
  • all or part of the processing executed by the CPU 21a may be executed by hardware logic.
  • a plurality of CPUs 21a may be mounted, or a plurality of processors of different architectures, such as the CPU 21a and a GPU (Graphics Processing Unit), may be mounted.
  • the processing performed using the frame buffer 23 may be performed using the RAM 21b, and conversely, the processing performed using the RAM 21b may be performed using the frame buffer 23.
  • although the control device 2, the operation information output device 3, the display device 4, and the sound output device 5 are provided separately in the above embodiment, the present invention is not limited to this configuration.
  • the control device 2 may be incorporated into and integrated with the operation information output device 3, or the control device 2 may be integrated with the display device 4.
  • the control device 2, the display device 4, and the sound output device 5 may also be integrated.
  • although a configuration in which processing is performed by one control device 2 is illustrated, the present invention is not limited to this configuration.
  • processing may instead be performed by a plurality of control devices 2 communicably connected via a communication network.
  • although the motion information output device 3 is illustrated as being worn on the wrist in the above embodiment, it is not limited to being worn on the wrist.
  • the motion information output device 3 may be attached to any part capable of reciprocating movement, such as the user's ankle, head, or waist.
  • although the operation information output device 3 is illustrated as including the control unit 31, it is not limited to this configuration.
  • the processing performed by the control unit 31 may be performed by hardware logic.
  • alternatively, the control unit 31 of the operation information output device 3 may be omitted, and the operation information output device 3 may be controlled by the control unit 21 of the control device 2.
  • in the above embodiment, the sensor output of the gyro sensor 32 included in the operation information output device 3 is normalized by the control unit 31 included in the operation information output device 3.
  • however, the present invention is not limited to this configuration.
  • the normalization processing of the sensor output may instead be performed by the control unit 21 of the control device 2, for example as sketched below.
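A rough sketch of performing the normalization in the control unit 21 of the control device 2 instead: following the description of FIG. 6(c), the two gyro axes with the largest variation are combined into one axis according to the housing tilt. The variance criterion and the cosine/sine projection are assumptions; the patent leaves the concrete normalization method open.

```python
import numpy as np

def normalize_gyro(gyro_xyz, tilt_rad):
    """Collapse 3-axis angular velocities to a single axis.
    gyro_xyz: (N, 3) angular-velocity samples from the gyro sensor 32;
    tilt_rad: housing tilt relative to gravity, obtained from the
    acceleration sensor 33. The two axes with the largest variation are
    combined into one axis according to the tilt."""
    var = gyro_xyz.var(axis=0)
    a, b = np.argsort(var)[-2:]     # indices of the two most active axes
    return np.cos(tilt_rad) * gyro_xyz[:, a] + np.sin(tilt_rad) * gyro_xyz[:, b]

samples = np.array([[0.1, 2.0, -1.5],
                    [0.0, -2.1, 1.4],
                    [0.1, 1.9, -1.6]])
print(normalize_gyro(samples, tilt_rad=np.deg2rad(30)))
```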
  • furthermore, a video camera that captures the motion of the user may be used, and the motion of the user may be extracted as motion information from the video obtained by the video camera.
  • the application of the present invention is not limited to the configuration described above.
  • for example, the present invention may be applied to a sperm collection support system.
  • the sperm collection support system is a system that supports the collection of sperm when male sperm is collected for medical research and therapeutic needs. The sperm collection support system is used, for example, to collect sperm for investigating the cause of infertility between a husband and wife, to treat sexual dysfunction, and to obtain sperm for artificial insemination.
  • the sperm collection support system can also meet various social needs, such as preventing sexual crimes by relieving personal sexual desire, preventing prostitution, and reducing the number of sexually transmitted infections.
  • in this application, adult content obtained by photographing sexual activity between a man and a woman is used as the video content, and the motion information output device 3 is worn on the male user's wrist. The adult content is then reproduced in interlock with the reciprocation of the wrist accompanying the male's masturbation.
  • as a result, compared with a general configuration in which adult content is simply reproduced, the sense of reality can be enhanced and the efficiency of sperm collection can be improved.
  • One aspect is a video processing device that processes video based on motion information (sensor output) that changes periodically with the reciprocation of a specific part (wrist) of a person, using a plurality of pieces of captured image information (shooting frame data F(F1M1) and the like) captured in time series, the device including a prediction unit.
  • the prediction unit predicts the movement time (time interval TS5') of a one-way movement of the reciprocation based on the motion information output from the operation information output unit for other one-way movements performed a plurality of times before that one-way movement (for example, the movement of the wrist performed from timing ts1 to ts2 and the movement performed from timing ts3 to ts4). According to the video processing device of this aspect, even if the movement time of a one-way movement of the reciprocating motion varies, the display of the video can be made to follow the movement of the specific part.
  • the time interval setting unit changes the time interval between the captured image information stepwise (dt2a to dt2c, dt2d to dt2f) from the first timing toward the second timing. According to the video processing device of this aspect, the image can be displayed smoothly at the control switching timing.
  • the video processing device further includes a display image information generation unit (display image information generation unit 21f) that generates display image information (display frame data) to be displayed on a display device (display device 4) based on the captured image information.
  • according to the video processing device of this aspect, an image suitable for the display device can be generated based on the captured image information.
  • the display image information generation unit generates the display image information by performing alpha blending on a pair of successive pieces of captured image information, using as the coefficient the division ratio of the time interval between the captured image information according to the refresh timing of the display device. According to the video processing device of this aspect, the image can be displayed smoothly.
  • the display device is a stereoscopic image display device that displays the captured image information as a stereoscopic image that can be viewed stereoscopically. According to the video processing device of this aspect, the image can be displayed as a stereoscopic image.
  • the captured image information includes a plurality of sets (the shooting frame data of the interlocked scenes of the first to fifth cycles) that differ in the number of pieces of captured image information included between the first captured image information and the second captured image information.
  • the video processing device further includes a captured image information selection unit (captured image information selection unit 21g) that selects any one of the plurality of sets of captured image information based on the operation information. According to the video processing device of this aspect, it is possible to cope with wide-ranging fluctuations in the speed of the reciprocation of the specific part.


Abstract

The present invention achieves video processing suitable for a reciprocating movement of a specific part by a user. A prediction part 21e predicts an amount of time it takes for a wrist of a user to move between a close position B and a distant position T on the basis of a sensor output which is output from a movement information output device 3. An image selection part 21c selects first photographing frame data at a first timing at which the wrist of the user is at the close position B during the reciprocating movement and selects second photographing frame data at a second timing at which the predicted movement time has passed from the first timing. A time interval setting part 21d sets the time interval between the sets of photographing frame data, that is, the time interval from the first photographing frame data to the second photographing frame data, to a time interval based on the predicted movement time.

Description

Video processing apparatus, video processing method, computer program, and video processing system
The present invention relates to a video processing apparatus, a video processing method, a computer program, and a video processing system.
There is known a video processing system that reproduces video content prepared in advance in conjunction with the operation of a user. For example, Patent Document 1 proposes a training apparatus incorporating a video processing system. In this training apparatus, shooting frame data shot every time a prescribed distance is traveled is displayed in conjunction with the traveling distance on a treadmill.
JP-A-9-220308
The training device of Patent Document 1 assumes exercise that continues to run in one direction, such as running, and does not assume reciprocating motion of a specific part by the user. The training device of Patent Document 1 has room for improvement in this respect.
The present invention has been made in view of such circumstances, and an object thereof is to realize video processing suitable for reciprocating motion of a specific part by a user.
In order to solve the above problems, the present invention is a video processing device that processes video based on operation information which is output from an operation information output unit and changes periodically with the reciprocating motion of a specific part of a user between a first position and a second position, the device including: an image storage unit that stores a plurality of pieces of captured image information captured in time series; an image selection unit that selects captured image information stored in the image storage unit; and a time interval setting unit that sets a time interval between the captured image information. The captured image information includes first captured image information corresponding to the first position and second captured image information corresponding to the second position. The device further includes a prediction unit that predicts, based on the operation information, the movement time of the specific part between the first position and the second position. For the reciprocating motion, the image selection unit selects the first captured image information at a first timing at which the specific part is located at the first position, and selects the second captured image information at a second timing at which the specific part reaches the second position after the movement time predicted by the prediction unit has elapsed from the first timing. The time interval setting unit sets the time interval between the captured image information from the first captured image information to the second captured image information to a time interval based on the movement time predicted by the prediction unit.
According to the present invention, it is possible to realize video processing suitable for reciprocating motion of a specific part by a user.
FIG. 1 is a diagram showing a schematic configuration of a video display system according to an embodiment of the present invention.
FIG. 2 is a diagram showing the appearance of an operation information output device.
FIG. 3(a) is a diagram showing the overall configuration of a rowing machine. FIG. 3(b) is a diagram showing the wrist of a user of the rowing machine at a position separated from the user's chest. FIG. 3(c) is a diagram showing the wrist of a user of the rowing machine at a position close to the user's chest.
FIG. 4(a) is a diagram explaining the hardware configuration of a control device. FIG. 4(b) is a diagram showing various data stored in a storage unit of the control device. FIG. 4(c) is a diagram showing functional blocks of a control unit.
FIG. 5 is a diagram showing the hardware configuration of the operation information output device.
FIG. 6(a) is a diagram showing the axes of a gyro sensor. FIG. 6(b) is a diagram showing the axes of an acceleration sensor. FIG. 6(c) is a diagram explaining normalization of the sensor output from the gyro sensor.
FIG. 7 is a diagram showing the relationship between the normalized sensor output from the gyro sensor and the position of the wrist.
FIG. 8 is a diagram showing switching among an introduction scene, an interlocked scene, and a finish scene included in video content.
FIG. 9(a) is a diagram showing the types of shooting frame data. FIG. 9(b) is a diagram showing the types of audio data. FIG. 9(c) is a diagram showing metadata added to shooting frame data and audio data. FIG. 9(d) is a diagram showing the relationship between the interlocked scene of each cycle and the reciprocation frequency of the wrist.
FIG. 10 is a diagram showing the relationship between marker data and shooting frame data in each cycle of the interlocked scene.
FIG. 11(a) is a diagram showing the types of audio data. FIG. 11(b) is a diagram showing a procedure of generating reproduction audio data.
FIG. 12(a) is a diagram showing reproduction of the reproduction audio data at a standard time interval. FIG. 12(b) is a diagram showing reproduction at a time interval narrower than the standard time interval. FIG. 12(c) is a diagram showing reproduction at a time interval wider than the standard time interval.
FIG. 13(a) is a diagram showing a procedure of calculating a prediction time. FIG. 13(b) is a diagram showing variation in the prediction time.
FIG. 14 is a diagram showing the time intervals of the shooting frame data during the prediction time and an interval adjustment curve.
FIG. 15 is a diagram showing the time intervals between shooting frame data adjusted by the interval adjustment curve.
FIG. 16 is a diagram showing the time intervals between shooting frame data adjusted by another interval adjustment curve.
FIG. 17 is a diagram showing generation of combined frame data F(mix1) to F(mix4).
FIG. 18(a) to (d) are diagrams showing generation of the combined frame data F(mix1), F(mix2), F(mix3), and F(mix4), respectively.
FIG. 19(a) is a diagram showing a procedure of calculating a prediction time. FIG. 19(b) is a diagram showing variation in the prediction time.
FIG. 20 is a diagram showing the time intervals of the shooting frame data during the prediction time TS6' and an interval adjustment curve.
FIG. 21 is a diagram showing generation of combined frame data F(mix11) to F(mix14).
FIG. 22(a) to (d) are diagrams showing generation of the combined frame data F(mix11), F(mix12), F(mix13), and F(mix14), respectively.
FIG. 23(a) is a flowchart showing signal monitoring processing performed by the CPU of the control device. FIG. 23(b) is a flowchart showing main processing performed by the CPU of the control device.
FIG. 24 is a flowchart showing interlocked scene reproduction processing performed by the CPU of the control device.
FIG. 25 is a diagram showing an example in which shooting frame data whose time intervals have been adjusted are used as display frame data.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. FIG. 1 is a diagram showing a schematic configuration of a video display system 1 according to an embodiment of the present invention.
<Schematic Configuration of Video Display System 1>
The video display system 1 shown in FIG. 1 includes: a control device 2 that stores video content (a set of a plurality of pieces of shooting frame data shot in time series and audio data reproduced together with the shooting frame data) and controls reproduction of the video content; an operation information output device 3 (operation information output unit) that is worn by the user, is communicably connected to the control device 2, and transmits and receives various signals to and from the control device 2; a display device 4 that is communicably connected to the control device 2 and can display images of the video content; and a sound output device 5 that is communicably connected to the control device 2 and can reproduce the audio of the video content.
The control device 2 functions as a video processing device that processes the video content to be reproduced on the display device 4 and the sound output device 5, based on the sensor output (operation information, described later) from the operation information output device 3. For example, a general personal computer can be used as the control device 2. The control device 2 will be described in detail later.
FIG. 2 is a view showing the appearance of the operation information output device 3. As shown in FIG. 2, the operation information output device 3 includes a housing 3a that houses components such as a control unit 31 and a gyro sensor 32 (see FIG. 5), and a band 3b that is attached to the housing 3a and wound around the user's wrist (specific part). On the surface of the housing 3a are provided: a first mode switch 34 and a second mode switch 35, which are operated to switch the reproduction scene of the video content; a reproduction related switch 36, which is operated to perform various operations related to reproduction of the video content; and a menu switch 37, which is operated to display a menu screen related to the video content. As described later, the operation information output device 3 transmits to the control device 2 a sensor output whose signal level changes periodically with the reciprocating motion of the user's wrist.
As the display device 4, a head mounted display is preferably used, as shown in FIG. 1. Display frame data is transmitted and received wirelessly between the display device 4 and the control device 2. The display frame data is read from the frame buffer 23 (see FIG. 4) included in the control device 2 and transmitted. Upon receiving the display frame data, the display device 4 displays an image according to the display frame data, specifically a stereoscopically viewable stereoscopic image.
The display device 4 is not limited to a head mounted display, and may be a liquid crystal display or a projector. Communication between the display device 4 and the control device 2 is not limited to wireless communication and may be wired communication.
As the sound output device 5, headphones are preferably used, as shown in FIG. 1. An acoustic signal based on audio data (described later) is transmitted and received wirelessly between the sound output device 5 and the control device 2. When the acoustic signal is input to the sound output device 5, the sound output device 5 outputs a sound according to the acoustic signal.
The sound output device 5 is not limited to headphones and may be a speaker. Communication between the sound output device 5 and the control device 2 is not limited to wireless communication and may be wired communication.
FIG. 3(a) is a diagram showing the overall configuration of the rowing machine 6. FIG. 3(b) is a diagram showing the wrist of the user of the rowing machine 6 at a position separated from the user's chest. FIG. 3(c) is a diagram showing the wrist of the user of the rowing machine 6 at a position close to the user's chest.
The video display system 1 of the present embodiment reproduces video content in conjunction with the user's motion, for example when the rowing machine 6 shown in FIGS. 3(b) and 3(c) is used. As the video content reproduced when using the rowing machine 6, for example, video of a boat race corresponding to a rower's field of view is suitable.
The illustrated rowing machine 6 includes: a prismatic frame 61; a seat member 62 that is movable along the frame 61 and on which the user's buttocks are placed; a substantially T-shaped handle 63 that is attached to the frame so as to be rotatable about its base end and whose tip end is gripped by both hands of the user; a damper 64 whose one end is rotatably attached to the frame 61 and whose other end is rotatably attached to the handle 63; and a footrest member 65 on which the soles of the user's feet are placed.
As shown in FIGS. 3(b) and 3(c), during training the user of the rowing machine 6 wears the operation information output device 3 on the wrist (specific part), and wears the display device 4 (head mounted display) and the sound output device 5 (headphones) on the head.
Using the rowing machine 6, the user performs bending and stretching exercises between the bent posture shown in FIG. 3(b) and the extended posture shown in FIG. 3(c). The user's wrist is positioned at a separated position away from the user's chest when the body is bent, and at a close position near the user's chest when the body is extended. That is, the user's wrist (specific part) reciprocates between the close position and the separated position along the trajectory of the other end of the handle 63. Accordingly, the operation information output device 3 worn on the user's wrist also reciprocates between the close position and the separated position.
<Control Device 2>
The control device 2 will be described in detail. FIG. 4(a) is a diagram explaining the hardware configuration of the control device 2. FIG. 4(b) is a diagram showing various data stored in the storage unit of the control device 2.
As shown in FIG. 4(a), the control device 2 includes a control unit 21 that is the main body of various controls, a storage unit 22 that stores various data, a frame buffer 23 that stores display frame data corresponding to an image to be displayed on the display device 4, and a communication interface (I/F) 24 for transmitting and receiving information to and from other devices.
The control unit 21 includes a CPU (Central Processing Unit) 21a that loads a program stored in the storage unit 22 into a RAM (Random Access Memory) 21b and executes it.
The storage unit 22 is, for example, an HDD (Hard Disk Drive) or an SSD (Solid State Drive), and as shown in FIG. 4(b), stores the program executed by the CPU 21a, and the shooting frame data (captured image information), audio data, and metadata that constitute the video content. Since it stores the shooting frame data, the storage unit functions as an image storage unit. Each piece of data will be described in detail later.
Returning to FIG. 4(a), the frame buffer 23 is a volatile memory that stores display frame data for one screen to be displayed by the display device 4. The display frame data is generated by the control unit 21 based on the shooting frame data stored in the storage unit 22. The control unit 21 writes the generated display frame data into the frame buffer 23. The generation of display frame data by the control unit 21 will be described later.
The communication interface 24 includes a communication interface 24a that transmits and receives information to and from the operation information output device 3, a communication interface 24b that transmits and receives information to and from the display device 4, and a communication interface 24c that transmits and receives information to and from the sound output device 5. As the communication interface 24, one conforming to a communication method or protocol such as Bluetooth (registered trademark) or WirelessUSB (Wireless Universal Serial Bus) can be used. In the wired case, one conforming to a standard such as RS-232C or USB (Universal Serial Bus) can be used.
FIG. 4(c) is a diagram showing the functional blocks of the control unit 21. The functional blocks shown in FIG. 4(c) are realized by the CPU 21a loading the program stored in the storage unit 22 into the RAM 21b and executing it.
The image selection unit 21c selects shooting frame data stored in the storage unit 22 (image storage unit). The time interval setting unit 21d sets the time interval between shooting frame data. The prediction unit 21e predicts the movement time of the user's wrist (specific part) between one (first position) and the other (second position) of the close position and the separated position, based on the sensor output (operation information) output from the operation information output device 3 (operation information output unit).
The display image information generation unit 21f generates display frame data by alpha blending a pair of successive shooting frame data in accordance with the timing defined by the display device 4. The captured image information selection unit 21g selects, based on the periodic change of the sensor output (operation information), the shooting frame data of any one cycle among the shooting frame data of a plurality of cycles (a plurality of sets of captured image information). The communication control unit 21h controls the communication interfaces 24a to 24c to communicate with the operation information output device 3, the display device 4, and the sound output device 5.
<Operation Information Output Device 3>
The operation information output device 3 will be described. FIG. 5 is a diagram showing the hardware configuration of the operation information output device 3. As shown in FIG. 5, in addition to the first mode switch 34, the second mode switch 35, the reproduction related switch 36, and the menu switch 37 described above, the operation information output device 3 includes a control unit 31 that is the main body of various controls, a gyro sensor 32 that detects angular velocity about reference axes, an acceleration sensor 33 that detects acceleration, and a communication interface (I/F) 38 for transmitting and receiving information to and from the control device 2.
The control unit 31 includes a CPU (Central Processing Unit) 31a that executes a program stored in a non-volatile memory 31b, and a volatile memory 31c used by the CPU 31a.
FIG. 6(a) is a diagram showing the axes of the gyro sensor 32. FIG. 6(b) is a diagram showing the axes of the acceleration sensor 33. FIG. 6(c) is a diagram explaining normalization of the sensor output from the gyro sensor 32.
As shown in FIG. 6(a), the gyro sensor 32 detects angular velocities about the mutually orthogonal X, Y, and Z axes. As shown in FIG. 6(b), the acceleration sensor 33 detects acceleration along each of the mutually orthogonal X, Y, and Z axes.
When the user is exercising on the rowing machine 6 (see FIGS. 3(b) and 3(c)), the control unit 31 of the operation information output device 3 acquires, based on the angular velocity detected by the gyro sensor 32 and the acceleration detected by the acceleration sensor 33, a sensor output that changes periodically with the reciprocating motion of the user's wrist.
For example, the control unit 31 acquires the inclination of the housing of the operation information output device 3 with respect to the direction of gravity based on the sensor output of the acceleration sensor 33. The control unit 31 then acquires the sensor output by normalizing the angular velocity detected by the gyro sensor 32 during the reciprocating motion of the wrist according to the inclination of the housing 3a with respect to the direction of gravity. For example, as shown in FIG. 6(c), the control unit 31 selects, from the angular velocities of the three axes output by the gyro sensor 32, the angular velocities of the two axes with the largest amount of change, and acquires the sensor output by normalizing the selected two axes' angular velocities to one axis according to the inclination of the housing 3a. The control unit 31 transmits the acquired sensor output to the control device 2 via the communication interface 38.
FIG. 7 is a diagram showing the relationship between the normalized sensor output NS from the gyro sensor 32 and the position of the wrist. The sensor output NS, indicated by the dash-dot line in FIG. 7, takes a positive (+) value while the wrist moves away from the chest during exercise on the rowing machine 6, and a negative (-) value while the wrist moves toward the chest. In addition, the sensor output NS takes the value "0" when the movement of the wrist stops.
As can be seen from the wrist position PW, shown by the solid line in FIG. 7, when the sensor output NS changes from a negative value to "0", the wrist is at the proximity position B, closest to the chest (see FIG. 3C). Conversely, when the sensor output changes from a positive value to "0", the wrist is at the separation position T, farthest from the chest (see FIG. 3B). The signal level of the sensor output NS thus changes periodically as the user's wrist reciprocates between the proximity position B and the separation position T.
In the example of FIG. 7, the wrist position PW leads the sensor output NS in phase by roughly a quarter cycle. The control unit 21 of the control device 2 can therefore obtain the position of the wrist (the specific part) during exercise on the rowing machine 6 (during the reciprocation of the wrist) from the sensor output NS received from the motion information output device 3.
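As a rough illustration of how the control unit 21 could classify these endpoints from the sensor output NS, the sketch below scans for the sign changes just described. It assumes the output is quantized so that a stop registers exactly as the value 0; a practical implementation would more likely use a small dead band. The function and variable names are illustrative.

def detect_endpoints(ns_samples, timestamps):
    """Return (timestamp, 'B' or 'T') events from the normalized output NS:
    negative -> 0 marks the proximity position B (closest to the chest),
    positive -> 0 marks the separation position T (farthest from the chest)."""
    events = []
    for prev, curr, t in zip(ns_samples, ns_samples[1:], timestamps[1:]):
        if curr == 0 and prev < 0:
            events.append((t, "B"))
        elif curr == 0 and prev > 0:
            events.append((t, "T"))
    return events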
<Overview of Video Content Playback>
FIG. 8 is a diagram showing the switching among the introduction scene, the linked scene, and the finish scene included in the video content.
In the video display system 1, the video content is played back in response to a playback instruction from the user. For example, playback of the video content starts when the user operates the playback-related switch 36. That is, the control unit 21 of the control device 2 starts playback of the video content based on the operation information output by the playback-related switch 36 when it is operated.
As shown in FIG. 8, the video content includes an introduction scene, a linked scene, and a finish scene. The introduction scene is a prologue. The linked scene is a scene whose playback is linked to the user's motion (the reciprocation of the wrist). The finish scene is an ending.
For example, when the video content covers a boat race, the introduction scene runs from before the race to just before the start, the linked scene runs from the start of the race to just before the goal, and the finish scene runs from just before the goal to after the goal.
Playback of the video content starts with the introduction scene. The introduction scene is played back regardless of the sensor output of the motion information output device 3. That is, the control unit 21 of the control device 2 plays back the captured frame data at the predetermined interval dt1 (see FIG. 10) and plays back the audio data, independently of the sensor output of the motion information output device 3.
If the user issues a scene-switching instruction during playback of the introduction scene, playback switches to the linked scene. For example, playback switches to the linked scene when the user operates the first mode switch. Playback also switches to the linked scene when the introduction scene finishes. In the linked scene, the control device 2 receives the sensor output from the motion information output device 3 and controls playback of the linked scene based on the sensor output (motion information).
When the user operates the second mode switch 35 of the motion information output device 3 during playback of the linked scene, the control unit 21 of the control device 2 starts playback of the finish scene. The finish scene is played back regardless of the sensor output of the motion information output device 3, and playback of the video content ends when playback of the finish scene ends.
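The scene transitions described above amount to a small state machine. The sketch below restates them in Python; the enum and flag names are illustrative and do not appear in the patent.

from enum import Enum, auto

class Scene(Enum):
    INTRO = auto()   # fixed interval dt1, sensor output ignored
    LINKED = auto()  # playback driven by the sensor output
    FINISH = auto()  # fixed interval, ends the video content

def next_scene(scene, intro_finished, mode1_pressed, mode2_pressed):
    """Transitions: INTRO -> LINKED on the first mode switch or when the
    introduction scene ends; LINKED -> FINISH on the second mode switch 35."""
    if scene is Scene.INTRO and (intro_finished or mode1_pressed):
        return Scene.LINKED
    if scene is Scene.LINKED and mode2_pressed:
        return Scene.FINISH
    return scene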
During exercise on the rowing machine 6, the time each stroke of the user's wrist takes can vary. By linking the playback of the video content to this variation, the user can be given the sensation of actually being in the environment the video content depicts.
The video display system 1 therefore acquires the sensor output from the motion information output device 3 (the normalized output of the gyro sensor 32) while the linked scene is played back, and controls playback of the linked scene according to the acquired sensor output. In other words, the video display system 1 controls playback of the video content according to the period of the reciprocation of the user's wrist while the rowing machine 6 is in use.
For example, five linked scenes are prepared, for a first period through a fifth period. Immediately after switching to the linked scene, the linked scene of a specific period (for example, the first period) is played back. Thereafter, according to the periodic change of the sensor output, playback switches among the linked scenes of the first through fifth periods to the one with the appropriate period. Even while the linked scene of one period is being played back, its playback is controlled to follow the reciprocation of the user's wrist. The details are described below.
<Data Stored in the Storage Unit 22>
First, the various data stored in the storage unit 22 of the control device 2 are described. FIG. 9A is a diagram showing the contents of the captured frame data.
<Captured Frame Data>
As shown in FIG. 9A, the captured frame data comprises three types: captured frame data of the introduction scene, captured frame data of the linked scene, and captured frame data of the finish scene.
As described above, in the introduction scene and the finish scene, the video content is played back regardless of the sensor output from the motion information output device 3. In the linked scene, by contrast, playback of the video content is controlled in linkage with the sensor output from the motion information output device 3.
The video display system 1 of the present embodiment provides five linked scenes, from the linked scene of the first period to the linked scene of the fifth period, and switches the linked scene to be played back according to the period of the reciprocation of the wrist.
In the video content of a boat race, the linked scene of the first period is selected when the period of the wrist's reciprocation is longest, and the linked scene of the fifth period is selected when that period is shortest. For example, the linked scene of the first period is selected when the rowing stroke period is longest (when the boat travels slowest), and the linked scene of the fifth period is selected when the rowing stroke period is shortest (when the boat travels fastest). The linked scenes of the second through fourth periods are likewise selected according to the length of the period of the wrist's reciprocation.
For example, the linked scene of the first period corresponds to a scene of the boat traveling the section from the start to just before the goal with the oars rowed at the longest stroke period; the linked scene of the second period corresponds to the same section rowed at the second-longest stroke period; and the linked scene of the third period corresponds to the same section rowed at the third-longest stroke period. Similarly, the linked scene of the fourth period corresponds to the section rowed at the second-shortest stroke period, and the linked scene of the fifth period corresponds to the section rowed at the shortest stroke period.
In other words, the linked scenes of the respective periods can be regarded as scenes of the section from the start to just before the goal, captured with the traveling speed of the boat varied from period to period. Put differently, the linked scenes of the respective periods share the same content but progress at different speeds.
In the example shown in FIG. 9A, reference sign F(S) denotes the plurality of captured frame data belonging to the introduction scene, and reference sign F(E) denotes the plurality of captured frame data belonging to the finish scene. Similarly, reference signs F(F1) through F(F5) denote the plurality of captured frame data belonging to the linked scenes of the first through fifth periods, respectively.
<Audio Data>
FIG. 9B is a diagram showing the types of audio data. As shown in FIG. 9B, the audio data, like the captured frame data, is provided for three types of scene: the introduction scene, the linked scene, and the finish scene. Each piece of audio data is paired with the corresponding captured frame data. Accordingly, the audio data of the linked scene comprises five pieces of audio data, for the linked scenes of the first through fifth periods.
The audio data may be in any format that the control device 2 can handle. For example, it may be in the uncompressed PCM (Pulse Code Modulation) format or in the compressed MP3 (MPEG-1 Audio Layer-3) format.
<Metadata>
FIG. 9C is a diagram showing the metadata added to the captured frame data and the audio data. FIG. 9D is a diagram showing the relationship between the linked scene of each period and the reciprocation frequency of the wrist.
In the present embodiment, the metadata comprises: the range of the reciprocation frequency of the wrist corresponding to each period (first through fifth) of the linked scene; marker data added to specific captured frame data among the plurality of captured frame data F(F1) through F(F5) belonging to the linked scene of each period; and start time data indicating the start time of each of the plurality of playback audio data.
The reciprocation frequency of the wrist is the number of times the reciprocating motion of the wrist repeats per unit time (for example, one minute). In the present embodiment, the control unit 21 of the control device 2 obtains it based on the sensor output from the motion information output device 3.
As shown in FIG. 9C, the range of the reciprocation frequency of the wrist is set to frequency H2 or lower for the linked scene of the first period, and to frequency H1 or higher and frequency H4 or lower for the linked scene of the second period. Similarly, it is set to frequency H3 or higher and frequency H6 or lower for the linked scene of the third period, to frequency H5 or higher and frequency H8 or lower for the linked scene of the fourth period, and to frequency H7 or higher for the linked scene of the fifth period.
As shown in FIG. 9D, a larger ordinal following the letter H denotes a higher frequency. Accordingly, the upper limit frequency H2 of the linked scene of the first period is higher than the lower limit frequency H1 of the linked scene of the second period, and the lower limit frequency H3 of the linked scene of the third period is lower than the upper limit frequency H4 of the linked scene of the second period.
Similarly, the upper limit frequency H6 of the linked scene of the third period is higher than the lower limit frequency H5 of the linked scene of the fourth period, and the lower limit frequency H7 of the linked scene of the fifth period is lower than the upper limit frequency H8 of the linked scene of the fourth period.
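Because adjacent bands overlap (H1 < H2, H3 < H4, and so on), a stroke frequency near a boundary falls inside two bands. One plausible use of this overlap is hysteresis, so that small fluctuations in the frequency do not cause rapid switching between periods, as in the following sketch; the numeric thresholds and the selection policy are illustrative assumptions, since the patent leaves the values of H1 through H8 unspecified.

# Illustrative thresholds H1..H8 in strokes per minute (not from the patent).
H = {1: 18, 2: 20, 3: 24, 4: 26, 5: 30, 6: 32, 7: 36, 8: 38}

BANDS = {
    1: (0.0, H[2]),           # first period:  up to H2
    2: (H[1], H[4]),          # second period: H1..H4
    3: (H[3], H[6]),          # third period:  H3..H6
    4: (H[5], H[8]),          # fourth period: H5..H8
    5: (H[7], float("inf")),  # fifth period:  H7 and above
}

def select_period(current, frequency):
    """Keep the current period while the frequency stays in its band; the
    overlapping bands then act as hysteresis against rapid switching."""
    lo, hi = BANDS[current]
    if lo <= frequency <= hi:
        return current
    for period, (lo, hi) in BANDS.items():
        if lo <= frequency <= hi:
            return period
    return current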
The marker data shown in FIG. 9C is added to specific captured frame data. Specifically, the marker data is added to the captured frame data on which the image displayed on the display device 4 is based at the timing when the user's wrist reaches the proximity position B or the separation position T. Playback of the video content is therefore controlled so that, at the timing when the user's wrist reaches the proximity position B or the separation position T, an image is displayed based on captured frame data to which marker data has been added.
In the example shown in FIG. 9C, marker data B→T is added to the captured frame data F(F1M1), and marker data T→B is added to the captured frame data F(F1M2). Similarly, marker data B→T is added to the captured frame data F(F1M3), and marker data T→B is added to the captured frame data F(F1M4).
As described above, the captured frame data F(F1) is the plurality of captured frame data belonging to the linked scene of the first period. The sign M1 denotes the first marker data. The captured frame data F(F1M1) therefore denotes, among the plurality of captured frame data belonging to the linked scene of the first period, the captured frame data to which the first marker data is added. Similarly, the captured frame data F(F1M2) through F(F1M4) denote the captured frame data to which the second through fourth marker data are added, respectively.
The marker data B→T indicates that the wrist at the proximity position B moves toward the separation position T. Conversely, the marker data T→B indicates that the wrist at the separation position T moves toward the proximity position B. The captured frame data F(F1M1) through F(F1M4), to which the respective marker data (B→T, T→B) are added, are thus the captured frame data selected, among the plurality of captured frame data belonging to the linked scene of the first period, at the timings when the user's wrist reaches the proximity position B or the separation position T.
The captured frame data of the linked scene of the first period has been described above as an example; marker data is added in the same way to the captured frame data of the linked scenes of the other periods.
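Purely as an illustration, the per-frame marker metadata could be represented as follows; the field names and the index value are hypothetical and are not specified by the patent.

from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class FrameMeta:
    period: int                                       # 1..5: which linked scene
    index: int                                        # position within the scene
    marker: Optional[Literal["B->T", "T->B"]] = None  # set only on endpoint frames

# F(F1M1): first marker of the first-period scene, wrist leaving position B.
f_f1m1 = FrameMeta(period=1, index=120, marker="B->T")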
<Relationship Between Marker Data and Captured Frame Data>
FIG. 10 is a diagram showing the relationship between the marker data and the captured frame data in each period (first through fifth) of the linked scene.
As shown in FIG. 10, the plurality of captured frame data belonging to the linked scene of the first period are captured at time intervals dt1, and marker data (B→T, T→B) is added to the captured frame data F(F1M1) through F(F1M5). The plurality of captured frame data belonging to the linked scene of the second period are likewise captured at time intervals dt1, with marker data (B→T, T→B) added to the captured frame data F(F2M1) through F(F2M5).
The same applies to the remaining periods, each captured at time intervals dt1: the captured frame data of the third period carry marker data (B→T, T→B) on F(F3M1) through F(F3M5), those of the fourth period on F(F4M1) through F(F4M5), and those of the fifth period on F(F5M1) through F(F5M5).
In the present embodiment, the time length TW of the linked scene is common to all periods.
As described above, the linked scenes of the respective periods share the same content, and the marker data (B→T, T→B) is added to the captured frame data at the timings when the user's wrist reaches the proximity position B or the separation position T. Therefore, if the ordinals following the sign M are the same, the contents of the captured frame data are common even across different periods.
For example, the captured frame data F(F1M1), which belongs to the linked scene of the first period and carries the first marker data (B→T), has the same content as the captured frame data F(F2M1), which belongs to the linked scene of the second period and carries the same first marker data (B→T). This captured frame data F(F2M1) in turn has the same content as the captured frame data F(F3M1) belonging to the linked scene of the third period.
Similarly, the captured frame data F(F1M2), which belongs to the linked scene of the first period and carries the second marker data (T→B), has the same content as the captured frame data F(F3M2) belonging to the linked scene of the third period and the captured frame data F(F5M2) belonging to the linked scene of the fifth period, which carry the same second marker data (T→B).
In the present embodiment, switching of the period of the linked scene is performed between captured frame data whose ordinals following the sign M are the same. This configuration suppresses the sense of incongruity that would otherwise accompany switching of the period of the linked scene.
The linked scenes of the respective periods share the same content but progress at different speeds. In addition, the capture-time interval between successive captured frame data is the common interval dt1 in the linked scene of every period. Consequently, the number of captured frame data contained between captured frame data to which marker data (B→T or T→B) is added and captured frame data to which the next marker data (T→B or B→T) is added differs from period to period of the linked scene.
For example, the number N11 of captured frame data F contained between the captured frame data F(F1M1) and F(F1M2) in the linked scene of the first period is larger than the number N21 of captured frame data F contained between the captured frame data F(F2M1) and F(F2M2) in the linked scene of the second period.
Similarly, the number N32 of captured frame data F contained between the captured frame data F(F3M2) and F(F3M3) in the linked scene of the third period is larger than the number N42 of captured frame data F contained between the captured frame data F(F4M2) and F(F4M3) in the linked scene of the fourth period.
In other words, the linked scenes of the respective periods differ from one another in the number of other captured frame data contained between the captured frame data to which the marker data (B→T) is added (one of the first captured image information and the second captured image information) and the captured frame data to which the marker data (T→B) is added (the other of the first captured image information and the second captured image information).
<Generation of Playback Audio Data>
Next, the generation of the playback audio data in the linked scene of each period is described.
As described above, in the linked scene, playback of the video content is controlled according to the period of the reciprocation of the user's wrist. That is, at the timing when the user's wrist reaches the proximity position B, playback of the video content is controlled so that an image is displayed based on the captured frame data corresponding to the marker data (B→T). Similarly, at the timing when the user's wrist reaches the separation position T, playback of the video content is controlled so that an image is displayed based on the captured frame data corresponding to the marker data (T→B).
Under this playback control, if the audio data were played back as recorded, the image and the audio could drift out of step. To suppress this problem, the control unit 21 of the control device 2 performs audio playback control based on playback audio data.
FIG. 11A is a diagram showing the types of audio data. FIG. 11B is a diagram showing the procedure for generating the playback audio data.
The storage unit 22 of the control device 2 (see FIG. 4A) stores the audio data MF1-ALL through MF5-ALL of the linked scenes of the respective periods over the time length TW, as shown in FIG. 11A.
The control unit 21 of the control device 2 refers to the pairs of audio file and start time in the metadata shown in FIG. 9C, and generates playback audio data on condition that the time elapsed since switching to the linked scene has reached the start time. For example, when the linked scene of the first period is selected, the control unit 21 generates the playback audio data MF1-001 on condition that the elapsed time since switching to the linked scene has reached the start time Mt1-001, and likewise generates the playback audio data MF1-002 on condition that the elapsed time has reached the start time Mt1-002.
The following describes, as an example, the generation of the playback audio data MF1-XXX (where XXX is a three-digit natural number; the same applies below) played back in the linked scene of the first period through the playback audio data MF5-XXX played back in the linked scene of the fifth period, as shown in FIG. 11B.
When the linked scene of the first period is selected, the control unit 21 of the control device 2, on condition that the elapsed time since switching to the linked scene has reached the start time Mt1-XXX, obtains audio data MF1-XXX' by copying from the audio data MF1-ALL the portion of the prescribed time width Mtw starting at the start time Mt1-XXX. The prescribed time width Mtw can be, for example, 0.5 to 1.5 seconds, but is not limited to this range. The control unit 21 then applies fade-in and fade-out processing to the copied audio data MF1-XXX' to generate the playback audio data MF1-XXX.
The generation of the playback audio data MF1-XXX has been described above; the playback audio data MF2-XXX through MF5-XXX for the other periods can be generated by the same procedure.
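A minimal sketch of this clip-and-fade step, assuming PCM samples held in a numpy array and linear fades; the fade shape and length are not given in the patent and are illustrative here.

import numpy as np

def make_playback_clip(audio_all, start_time, clip_width, fade, rate):
    """Cut a clip of clip_width seconds from the full linked-scene audio
    (e.g. MF1-ALL) starting at start_time, with linear fade-in/fade-out.
    fade is the fade length in seconds (2 * fade <= clip_width); rate is
    the sampling rate in Hz."""
    begin = int(start_time * rate)
    clip = audio_all[begin:begin + int(clip_width * rate)].astype(float)

    n = int(fade * rate)
    ramp = np.linspace(0.0, 1.0, n)
    clip[:n] *= ramp          # fade-in
    clip[-n:] *= ramp[::-1]   # fade-out
    return clip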
<Playback of the Playback Audio Data>
Playback of the generated playback audio data MF1-XXX through MF5-XXX starts at the timing when display is performed based on the captured frame data corresponding to the start times Mt1-XXX through Mt5-XXX. For example, dividing the start time Mt1-XXX by the time interval dt1 yields the number of the captured data frame F at which playback of the playback audio data MF1-XXX is to start.
FIG. 12A is a diagram showing playback of the playback audio data at the standard time interval. As shown in FIG. 12A, assume that the linked scene of the first period is selected, that the playback audio data MF1-001 starts playing at the timing when display is performed based on the captured data frame F(F1L1), and that the playback audio data MF1-002 starts playing at the timing when display is performed based on the captured data frame F(F1L2). In this case, the fade-out of the playback audio data MF1-001 and the fade-in of the playback audio data MF1-002 overlap, so the playback audio data MF1-001 and MF1-002 are played back at an appropriate time interval.
FIG. 12B is a diagram showing playback of the playback audio data at a time interval narrower than the standard. In the case of FIG. 12B, playback of the playback audio data MF1-002 starts earlier than the reference timing, but because MF1-002 fades in, the sense of incongruity accompanying the start of its playback can be suppressed. Similarly, because the playback audio data MF1-001 fades out, the sense of incongruity accompanying the end of its playback can also be suppressed.
FIG. 12C is a diagram showing playback of the playback audio data at a time interval wider than the standard. In the case of FIG. 12C, a silent interval occurs between the end of playback of the playback audio data MF1-001 and the start of playback of the playback audio data MF1-002; however, because MF1-001 fades out and MF1-002 then fades in, the sense of incongruity when playback switches from MF1-001 to MF1-002 can be suppressed.
<Playback Control in the Linked Scene>
As described above, during playback of the linked scene, at the timings when the user's wrist reaches the proximity position B or the separation position T, images based on the captured frame data to which the respective marker data (B→T, T→B) have been added are displayed on the display device 4.
As also described above, the timing at which the user's wrist reaches the proximity position B or the separation position T varies. To play back the video content smoothly in linkage with the reciprocation of the user's wrist, the image display based on the captured frame data must therefore be controlled according to the reciprocation of the wrist. For this reason, the present embodiment performs the following control in the linked scene.
In the linked scene, the control unit 21 (prediction unit 21e) calculates, as a predicted time based on the sensor output, the travel time taken by the reciprocating motion of the specific part, in order to link the reciprocating motion of the specific part with the captured images.
This predicted time may be calculated for each leg of the reciprocation based on the sensor output for one direction of the reciprocating motion, or it may be calculated irrespective of the direction of movement based on the sensor output for both directions. In the latter case, the predicted time is the average travel time of the reciprocating motion.
The present embodiment describes the case of calculating the predicted time for one direction.
The control unit 21 (prediction unit 21e) calculates the travel time (predicted time) of the one-directional movement of the specific part in the reciprocating motion corresponding to the linked scene, based on the sensor output (motion information) output by the motion information output device 3 (motion information output unit) for movements in the same direction performed a plurality of times before that one-directional movement (described in detail below).
The control unit 21 (time interval setting unit 21d) sets the time interval from the captured frame data (first captured image information) to which marker data (one of B→T and T→B) is added to the captured frame data (second captured image information) to which the next marker data (the other of B→T and T→B) is added, to a time interval based on the predicted time.
The control unit 21 (display image information generation unit 21f) generates display image information by alpha-blending a pair of successive captured frame data in accordance with the timing defined by the display device 4.
<Procedure for Generating Display Image Information While the Wrist Moves from the Separation Position T to the Proximity Position B>
First, the procedure for generating the display image information while the user's wrist moves from the separation position T to the proximity position B is described. FIGS. 13 through 19 show the procedure for generating the display image information corresponding to the first marker data F(F1M1) in the linked scene of the first period. Although the first marker data F(F1M1) is described below, it is merely an example; the same procedure applies to the odd-numbered marker data, such as the third marker data F(F1M3) and the fifth marker data F(F1M5).
<Calculation of the Predicted Time>
FIG. 13A is a diagram showing the procedure for calculating the predicted time.
The predicted time TS5' shown in FIG. 13 is the predicted time from the most recent sensor output acquisition timing ts5 until the sensor output corresponding to the captured frame data F(F1M1) is acquired. In other words, the predicted time TS5' is the travel time, predicted at timing ts5, until the user's wrist moves from the separation position T to the proximity position B.
In the example shown in FIG. 13A, the user's wrist is located at the separation position T at timings ts1, ts3, and ts5, and at the proximity position B at timings ts2 and ts4. The control unit 21 of the control device 2 recognizes the timings ts1 through ts5 based on the sensor output from the motion information output device 3.
As shown in FIG. 13A, a preparation period is set immediately after switching to the linked scene. During the preparation period, the video content is not played back in linkage with the user's motion; the captured frame data is displayed at the predetermined interval dt1. The preparation period continues, for example, for a predetermined length of time or until predetermined captured frame data is displayed.
In the preparation period, preparation for the linkage control is performed. For example, the control unit 21 acquires, at timing ts2, the travel time TS1 required for the movement from the separation position T to the proximity position B, and, at timing ts3, the travel time TS2 required for the movement from the proximity position B to the separation position T. Similarly, the control unit 21 acquires, at timing ts4, the travel time TS3 required for the movement from the separation position T to the proximity position B, and, at timing ts5, the travel time TS4 required for the movement from the proximity position B to the separation position T.
When the preparation period ends, operation shifts to the linkage period. First, the control unit 21 of the control device 2 acquires the predicted time TS5'. The predicted time TS5' is the predicted time for the one-directional movement from the separation position T toward the proximity position B, among the one-directional movements in which the user's wrist moves from one of the proximity position B and the separation position T toward the other.
The control unit 21 acquires the predicted time TS5' based on the one-directional movements from the separation position T to the proximity position B performed before the timing ts5 at which the most recent sensor output was acquired. Specifically, the control unit 21 acquires, as the predicted time TS5', the average of the time TS1 taken from timing ts1 to timing ts2 and the time TS3 taken from timing ts3 to timing ts4.
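A worked sketch of this averaging, with illustrative timestamps in seconds:

def predicted_time(durations):
    """Mean of the previous travel times in the same direction."""
    return sum(durations) / len(durations)

ts1, ts2, ts3, ts4 = 0.0, 1.1, 2.3, 3.3   # T, B, T, B endpoints as in FIG. 13(a)
TS1, TS3 = ts2 - ts1, ts4 - ts3           # earlier T -> B travel times
TS5_pred = predicted_time([TS1, TS3])     # predicted next T -> B time: 1.05 s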
<Variation in the Predicted Time>
FIG. 13B is a diagram showing the variation in the predicted time TS5'. As shown in FIG. 13B, the predicted time TS5' can vary, because the timings ts1 through ts5 are acquired as the user's wrist reciprocates. With the variation in the predicted time TS5', the timing ts6' at which the predicted time TS5' has elapsed from timing ts5 can also vary, as shown by timings ts6a' through ts6c'.
As shown in FIG. 13A, the timing ts6' corresponds to the timing at which the captured frame data F(F1M1) should be displayed. Consequently, the time interval between the plurality of captured frame data contained from the captured frame data F(F1M0) displayed at timing ts5 to the captured frame data F(F1M1) may deviate from the time interval dt1 between captured frame data in the preparation period.
In the case of timing ts6b' shown in FIG. 13B, the time interval between the captured frame data from F(F1M0) to F(F1M1) is equal to the time interval dt1 between captured frame data in the preparation period.
In the case of timing ts6a', on the other hand, the time interval between the captured frame data from F(F1M0) to F(F1M1) is shorter than the time interval dt1 in the preparation period.
Conversely, in the case of timing ts6c', the time interval between the captured frame data from F(F1M0) to F(F1M1) is longer than the time interval dt1 in the preparation period.
<Adjustment of the Time Interval Between Captured Frame Data>
FIG. 14 is a diagram showing the time interval between captured frame data in the predicted time TS5' and the interval adjustment curve SP1. In the example shown in FIG. 14, the plurality of captured frame data from F(F1M0) to F(F1M1) are set at mutually equal time intervals dt2. This allows the video content to be played back in a way that follows the wrist even when the time of the wrist's reciprocation varies.
In the present embodiment, the time interval between captured frame data is adjusted by the interval adjustment curve SP1 so that images are displayed smoothly at the control switching timing (for example, timing ts5). The interval adjustment curve SP1 is obtained by acquiring the frame rate (the number of frame data per unit time) at each of the timing at which the most recent sensor output was acquired and the timing defined by the predicted time, and performing spline interpolation (for example, cubic spline interpolation).
In the example shown in FIG. 14, the control unit 21 of the control device 2 calculates the effective frame rate V0 based on the number of captured frame data actually displayed up to timing ts5 and the time elapsed up to timing ts5. The control unit 21 also calculates the frame rate V1' based on the number of captured frame data contained from the captured frame data F(F1M0) to the captured frame data F(F1M1) and the predicted time from timing ts5 to timing ts6'. The control unit 21 then obtains the interval adjustment curve SP1 by spline-interpolating between the effective frame rate V0 and the frame rate V1'.
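A sketch of this step, with one simplifying assumption: with only the two rates V0 and V1' as knots and zero slope imposed at both ends, a clamped cubic spline reduces to a single smoothstep-shaped cubic, which is what is used below. The frame display times are then recovered by accumulating the rate curve. All names are illustrative.

import numpy as np

def interval_adjustment_curve(v0, v1, t0, t1, n=50):
    """Cubic rate curve from the effective frame rate v0 at the last sensor
    timing t0 to the target rate v1 at the predicted timing t1, with zero
    slope at both ends so the rate changes smoothly."""
    t = np.linspace(t0, t1, n)
    s = (t - t0) / (t1 - t0)        # normalized time in [0, 1]
    ease = 3 * s**2 - 2 * s**3      # cubic with zero slope at both ends
    return t, v0 + (v1 - v0) * ease

def frame_display_times(times, rates):
    """Turn the rate curve into display times: integrate the rate to get a
    cumulative frame count, and emit a time whenever it crosses an integer."""
    frames_done = np.cumsum(rates * (times[1] - times[0]))
    targets = np.arange(1, int(frames_done[-1]) + 1)
    return np.interp(targets, frames_done, times)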
FIG. 15 is a diagram showing the time intervals between captured frame data adjusted by the interval adjustment curve SP1. Using the interval adjustment curve SP1 illustrated in FIG. 15, the interval between the captured frame data F(F1M0+2) and F(F1M0+3) is adjusted from the time interval dt2 to the time interval dt2a. Similarly, the interval between the captured frame data F(F1M0+n) and F(F1M0+m) is adjusted from dt2 to dt2b, and the interval between the captured frame data F(F1M1-3) and F(F1M1-2) is adjusted from dt2 to dt2c.
Here, the time interval dt2b is shorter than dt2a, and dt2c is shorter than dt2b. This interval adjustment curve SP1 can therefore be said to adjust the time interval between captured frame data so that it becomes progressively shorter toward the captured frame data F(F1M1).
FIG. 16 is a diagram showing the time intervals between captured frame data adjusted by another interval adjustment curve SP2. Using the interval adjustment curve SP2 illustrated in FIG. 16, the interval between the captured frame data F(F1M0+2) and F(F1M0+3) is adjusted from the time interval dt2 to the time interval dt2d. Similarly, the interval between the captured frame data F(F1M0+n) and F(F1M0+m) is adjusted from dt2 to dt2e, and the interval between the captured frame data F(F1M1-3) and F(F1M1-2) is adjusted from dt2 to dt2f.
Here, the time interval dt2e is longer than dt2d, and dt2f is longer than dt2e. This interval adjustment curve SP2 can therefore be said to adjust the time interval between captured frame data so that it becomes progressively longer toward the captured frame data F(F1M1).
<Generation of the Composite Frame Data F(mix1) Through F(mix4)>
When the time interval between captured frame data is adjusted by the above processing, it may fall out of synchronization with the refresh timing of the display device 4, with the result that images lacking smoothness are displayed on the display device 4.
In the present embodiment, therefore, in accordance with the refresh timing defined by the display device 4, a pair of successive captured frame data is composited (alpha-blended), and the resulting composite frame is written to the frame buffer 23. The compositing process is described below.
FIG. 17 is a diagram showing the generation of the composite frame data F(mix1) through F(mix4). FIGS. 18A through 18D are diagrams showing the generation of the composite frame data F(mix1), F(mix2), F(mix3), and F(mix4), respectively.
As shown in FIG. 17, the display device 4 refreshes at refresh timings Rf1 through Rf4. The refresh timings Rf1 through Rf4 arrive at a constant time interval dRf, which is defined by the refresh rate.
As shown in FIG. 17, at the refresh timing Rf1, the control unit 21 of the control device 2 generates the composite frame data F(mix1) using the captured frame data F(F1M0) preceding the refresh timing Rf1 and the captured frame data F(F1M0+1) following it. The composite frame data F(mix1) is composited by alpha blending with coefficients given by the ratio in which the refresh timing Rf1 divides the time interval between the captured frame data.
As shown in FIG. 18A, the composite frame data F(mix1) is composited so that the proportion of the captured frame data F(F1M0) is [(1-α1)/dt2] and the proportion of the captured frame data F(F1M0+1) is [α1/dt2].
As shown in FIG. 17, at the refresh timing Rf2, the control unit 21 of the control device 2 generates the composite frame data F(mix2) using the captured frame data F(F1M0) preceding the refresh timing Rf2 and the captured frame data F(F1M0+1) following it. The composite frame data F(mix2) is composited by alpha blending with coefficients given by the ratio in which the refresh timing Rf2 divides the time interval between the captured frame data.
As shown in FIG. 18B, the composite frame data F(mix2) is composited so that the proportion of the captured frame data F(F1M0) is [(1-β1)/dt2] and the proportion of the captured frame data F(F1M0+1) is [β1/dt2].
As shown in FIG. 17, at the refresh timing Rf3, the control unit 21 of the control device 2 generates the composite frame data F(mix3) using the captured frame data F(F1M0+1) preceding the refresh timing Rf3 and the captured frame data F(F1M0+2) following it. The composite frame data F(mix3) is composited by alpha blending with coefficients given by the ratio in which the refresh timing Rf3 divides the time interval between the captured frame data.
As shown in FIG. 18C, the composite frame data F(mix3) is composited so that the proportion of the captured frame data F(F1M0+1) is [(1-γ1)/dt2] and the proportion of the captured frame data F(F1M0+2) is [γ1/dt2].
As shown in FIG. 17, at the refresh timing Rf4, the control unit 21 of the control device 2 generates the composite frame data F(mix4) using the captured frame data F(F1M0+1) preceding the refresh timing Rf4 and the captured frame data F(F1M0+2) following it. The composite frame data F(mix4) is composited by alpha blending with coefficients given by the ratio in which the refresh timing Rf4 divides the time interval between the captured frame data.
As shown in FIG. 18D, the composite frame data F(mix4) is composited so that the proportion of the captured frame data F(F1M0+1) is [(1-δ1)/dt2] and the proportion of the captured frame data F(F1M0+2) is [δ1/dt2].
By performing the above compositing process, the composite frame data F(mix1) through F(mix4) are generated for the respective refresh timings, and smooth images can be displayed on the display device 4.
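The four cases above reduce to a single computation: the refresh timing divides the interval between the bracketing frames, and that division ratio is the blend coefficient. A minimal sketch, assuming decoded frames held as numpy arrays:

import numpy as np

def blend_coefficient(refresh_time, t_prev, t_next):
    """Ratio in which the refresh timing divides the inter-frame interval:
    0 at the preceding frame, 1 at the following frame (α1/dt2 and so on)."""
    return (refresh_time - t_prev) / (t_next - t_prev)

def composite_frame(frame_prev, frame_next, alpha):
    """Per-pixel weighted average: (1 - alpha) * F_prev + alpha * F_next."""
    return (1.0 - alpha) * frame_prev + alpha * frame_next

# Example: refresh Rf1 falls 30% of the way from F(F1M0) to F(F1M0+1).
f_prev, f_next = np.zeros((4, 4, 3)), np.ones((4, 4, 3))
a = blend_coefficient(0.013, 0.010, 0.020)   # -> 0.3
f_mix1 = composite_frame(f_prev, f_next, a)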
<Procedure for Generating Display Image Information While the Wrist Moves from the Proximity Position B to the Separation Position T>
Next, the procedure for generating the display image information while the user's wrist moves from the proximity position B to the separation position T is described. FIGS. 19 through 22 show the procedure for generating the display image information corresponding to the second marker data F(F1M2) in the linked scene of the first period. Although the second marker data F(F1M2) is described below, it is merely an example; the same procedure applies to the even-numbered marker data, such as the fourth marker data F(F1M4) and the sixth marker data F(F1M6).
<Calculation of the predicted time>
FIG. 19(a) shows the procedure for calculating the predicted time.
The predicted time TS6' shown in FIG. 19(a) is the predicted time from the most recent sensor output acquisition timing ts6 until the sensor output corresponding to the shooting frame data F(F1M2) is acquired.
In the example shown in FIG. 19(a), following the example described with FIG. 13(a), the user's wrist is located at the proximity position B at timing ts6. Accordingly, the control unit 21 of the control device 2 recognizes timing ts6 based on the sensor output from the operation information output device 3.
As shown in FIG. 19(a), after recognizing timing ts6, the control unit 21 of the control device 2 acquires the predicted time TS6'. Of the one-way motions in which the user's wrist moves from one of the proximity position B and the separation position T toward the other, the predicted time TS6' relates to the one-way motion from the proximity position B toward the separation position T.
The control unit 21 acquires the predicted time TS6' based on the one-way motions from the proximity position B to the separation position T performed before timing ts6, at which the most recent sensor output was acquired. Specifically, the control unit 21 acquires, as the predicted time TS6', the average of the time TS2 required from timing ts2 to timing ts3 and the time TS4 required from timing ts4 to timing ts5.
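As a sketch, this prediction reduces to averaging the durations of the earlier one-way motions in the same direction. The function below assumes those durations have already been measured; the function name and the sample values are illustrative.

```python
# A minimal sketch of the predicted-time calculation (names illustrative).
def predict_travel_time(previous_durations: list[float]) -> float:
    """Predict the next one-way travel time as the average of earlier
    one-way motions in the same direction, e.g. TS6' = (TS2 + TS4) / 2."""
    if not previous_durations:
        raise ValueError("at least one earlier one-way motion is required")
    return sum(previous_durations) / len(previous_durations)

# Hypothetical example: TS2 = 0.82 s, TS4 = 0.78 s -> TS6' = 0.80 s
ts6_predicted = predict_travel_time([0.82, 0.78])
```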
<Variation in the predicted time>
FIG. 19(b) shows the variation in the predicted time. As shown in FIG. 19(b), the predicted time TS6' can vary. Along with the variation in the predicted time TS6', the timing ts7', at which the predicted time TS6' has elapsed from timing ts6, can also vary, as in the timings ts7a' to ts7c'.
<Adjustment of the time interval between shooting frame data>
FIG. 20 shows the time intervals of the shooting frame data over the predicted time TS6' and the interval adjustment curve SP3. In the example shown in FIG. 20, the plurality of shooting frame data from the shooting frame data F(F1M1) to the shooting frame data F(F1M2) are set to the mutually equal time interval dt3.
In this example too, the time interval between the shooting frame data is adjusted by the interval adjustment curve SP3 so that the image is displayed smoothly at the control switching timing (for example, timing ts6). The interval adjustment curve SP3 is created by spline interpolation (for example, cubic spline interpolation) between the effective frame rate V1 and the frame rate V2'. The interval adjustment curve SP3 is equivalent to the interval adjustment curve SP1 described above, and the adjustment of the time interval between the shooting frame data using the interval adjustment curve SP3 is performed in the same manner as the adjustment using the interval adjustment curve SP1. The description is therefore omitted.
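A minimal sketch of the spline-based interval adjustment follows. It assumes the per-frame interval should ease smoothly from the interval implied by the effective frame rate V1 to the one implied by V2', with a flat (clamped) slope at both ends so playback speed does not jump at the switching timing; the two-knot clamped spline is an assumption, since the text specifies only cubic spline interpolation between the two rates.

```python
# A minimal sketch of the interval adjustment curve (assumptions noted above).
import numpy as np
from scipy.interpolate import CubicSpline

def adjusted_intervals(v1: float, v2: float, n_frames: int) -> np.ndarray:
    """Return the n_frames - 1 time intervals between shooting frames,
    easing from 1/v1 to 1/v2 along a clamped cubic spline."""
    sp = CubicSpline([0.0, 1.0], [1.0 / v1, 1.0 / v2], bc_type='clamped')
    u = np.linspace(0.0, 1.0, n_frames - 1)  # normalized position in the run
    return sp(u)
```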
<Generation of the combined frame data F(mix11) to F(mix14)>
FIG. 21 shows the generation of the combined frame data F(mix11) to F(mix14). FIG. 22(a) shows the generation of the combined frame data F(mix11), FIG. 22(b) that of F(mix12), FIG. 22(c) that of F(mix13), and FIG. 22(d) that of F(mix14).
As shown in FIG. 21, the display device 4 refreshes at the refresh timings Rf11 to Rf14. Each of the refresh timings Rf11 to Rf14 arrives at the fixed time interval dRf defined by the refresh rate.
At the refresh timing Rf11, the control unit 21 of the control device 2 generates combined frame data F(mix11) from the shooting frame data F(F1M1) preceding the refresh timing Rf11 and the shooting frame data F(F1M1+1) following it. The combined frame data F(mix11) is composited by alpha blending, using as the coefficient the ratio in which the refresh timing Rf11 divides the time interval between the shooting frame data.
As shown in FIG. 22(a), the combined frame data F(mix11) is composited so that the proportion of the shooting frame data F(F1M1) is [(dt3 − α2)/dt3] and the proportion of the shooting frame data F(F1M1+1) is [α2/dt3].
As shown in FIG. 21, at the refresh timing Rf12, the control unit 21 of the control device 2 generates combined frame data F(mix12) from the shooting frame data F(F1M1) preceding the refresh timing Rf12 and the shooting frame data F(F1M1+1) following it. The combined frame data F(mix12) is composited by alpha blending, using as the coefficient the ratio in which the refresh timing Rf12 divides the time interval between the shooting frame data.
As shown in FIG. 22(b), the combined frame data F(mix12) is composited so that the proportion of the shooting frame data F(F1M1) is [(dt3 − β2)/dt3] and the proportion of the shooting frame data F(F1M1+1) is [β2/dt3].
As shown in FIG. 21, at the refresh timing Rf13, the control unit 21 of the control device 2 generates combined frame data F(mix13) from the shooting frame data F(F1M1) preceding the refresh timing Rf13 and the shooting frame data F(F1M1+1) following it. The combined frame data F(mix13) is composited by alpha blending, using as the coefficient the ratio in which the refresh timing Rf13 divides the time interval between the shooting frame data.
As shown in FIG. 22(c), the combined frame data F(mix13) is composited so that the proportion of the shooting frame data F(F1M1) is [(dt3 − γ2)/dt3] and the proportion of the shooting frame data F(F1M1+1) is [γ2/dt3].
As shown in FIG. 21, at the refresh timing Rf14, the control unit 21 of the control device 2 generates combined frame data F(mix14) from the shooting frame data F(F1M1+1) preceding the refresh timing Rf14 and the shooting frame data F(F1M1+2) following it. The combined frame data F(mix14) is composited by alpha blending, using as the coefficient the ratio in which the refresh timing Rf14 divides the time interval between the shooting frame data.
As shown in FIG. 22(d), the combined frame data F(mix14) is composited so that the proportion of the shooting frame data F(F1M1+1) is [(dt3 − δ2)/dt3] and the proportion of the shooting frame data F(F1M1+2) is [δ2/dt3].
By performing the above combining process, the combined frame data F(mix11) to F(mix14) are generated for each refresh timing, and a smooth image can be displayed on the display device 4.
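Putting the two steps together, each refresh timing selects the pair of shooting frames that straddle it and blends them. A minimal sketch follows, reusing the blend_pair() helper sketched earlier and assuming `frames` is a list of (timestamp, image) pairs whose intervals have already been adjusted and whose span contains each refresh timing.

```python
# A minimal sketch of the per-refresh composition (assumptions noted above).
import bisect

def frame_for_refresh(frames, t_rf):
    """Pick the shooting frames immediately before and after the refresh
    timing t_rf and alpha-blend them (see blend_pair() above)."""
    times = [t for t, _ in frames]
    i = bisect.bisect_right(times, t_rf)   # index of the first frame after t_rf
    (t_prev, f_prev), (t_next, f_next) = frames[i - 1], frames[i]
    return blend_pair(f_prev, f_next, t_prev, t_next, t_rf)
```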
<Flow of control by the CPU 21a of the control device 2>
Next, the flow of control by the CPU 21a of the control device 2 will be described.
FIG. 23(a) is a flowchart showing the signal monitoring process performed by the CPU 21a. FIG. 23(b) is a flowchart showing the main process performed by the CPU 21a.
<Signal monitoring process>
In the signal monitoring process shown in FIG. 23(a), the CPU 21a determines whether a predetermined data acquisition timing has arrived (step S1). If it determines that the data acquisition timing has arrived, the CPU 21a acquires various data from the operation information output device 3 (step S2), for example the sensor output from the operation information output device 3 and the operation information for the reproduction-related switch 36. After acquiring the various data in step S2, the CPU 21a returns to step S1 and again determines whether the data acquisition timing has arrived.
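As a sketch, the signal monitoring process is a fixed-period polling loop. The device accessor and queue below are hypothetical stand-ins, and the polling period is an assumed value.

```python
# A minimal sketch of the signal monitoring process of FIG. 23(a).
import time

POLL_PERIOD = 0.01  # assumed data acquisition period in seconds

def signal_monitor(device, queue):
    """Step S1: wait for the data acquisition timing; step S2: acquire the
    sensor output and switch operation information, then repeat."""
    while True:
        time.sleep(POLL_PERIOD)       # S1: data acquisition timing arrives
        queue.append(device.read())   # S2: sensor output + switch states
```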
<Main process>
In the main process shown in FIG. 23(b), the CPU 21a determines whether a reproduction operation for the video content has been performed with the reproduction-related switch 36 (step S11). This determination is made based on the data acquired in step S2 of the signal monitoring process, specifically the operation information for the reproduction-related switch 36.
If a reproduction operation has been performed, the CPU 21a starts reproduction of the introduction scene of the video content (step S12). The CPU 21a reads the shooting frame data of the introduction scene described with FIG. 9(a) and stores it in the frame buffer 23 as display frame data. The display frame data stored in the frame buffer 23 is transmitted to the display device 4 each time a refresh timing arrives, and the display device 4 displays an image corresponding to the display frame data.
The CPU 21a also reads the audio data of the introduction scene described with FIG. 9(b), generates an acoustic signal based on the audio data, and outputs it to the sound output device 5. When the acoustic signal is input to the sound output device 5, the sound output device 5 outputs a sound corresponding to the acoustic signal.
If it determines in step S11 that no reproduction operation for the video content has been performed, or once reproduction of the introduction scene has started in step S12, the CPU 21a determines whether the introduction scene is being reproduced (step S13). If it determines that the introduction scene is being reproduced, the CPU 21a determines whether the first mode changeover switch 34 has been operated (step S14). This determination is made based on the data acquired in step S2 of the signal monitoring process, specifically the operation information for the first mode changeover switch 34.
If the first mode changeover switch 34 has been operated, the CPU 21a performs the interlocked scene reproduction process (step S15). The interlocked scene reproduction process is described later.
If the first mode changeover switch 34 has not been operated, the CPU 21a determines whether reproduction of the introduction scene has finished (step S16). If it determines that reproduction of the introduction scene has finished, the CPU 21a performs the interlocked scene reproduction process (step S15).
If it determines in step S13 that the introduction scene is not being reproduced, the CPU 21a determines whether the interlocked scene is being reproduced (step S17). If it determines that the interlocked scene is being reproduced, the CPU 21a performs the interlocked scene reproduction process (step S15).
If it determines in step S16 that reproduction of the introduction scene has not finished, or in step S17 that the interlocked scene is not being reproduced, the CPU 21a returns to step S11.
On returning from the interlocked scene reproduction process of step S15, the CPU 21a determines whether the second mode changeover switch 35 has been operated (step S18). This determination is made based on the data acquired in step S2 of the signal monitoring process, specifically the operation information for the second mode changeover switch 35.
If the second mode changeover switch 35 has been operated, the CPU 21a performs the finish scene reproduction process (step S19). If the second mode changeover switch 35 has not been operated, the CPU 21a returns to step S11.
In the finish scene reproduction process, the CPU 21a reads the shooting frame data of the finish scene described with FIG. 9(a) and stores it in the frame buffer 23 as display frame data, and the display device 4 displays an image corresponding to the display frame data. The CPU 21a also reads the audio data of the finish scene described with FIG. 9(b), generates an acoustic signal based on the audio data, and outputs it to the sound output device 5. When the acoustic signal is input to the sound output device 5, the sound output device 5 outputs a sound corresponding to the acoustic signal. The finish scene reproduction process continues until reproduction of the series of finish scenes is completed.
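One pass of the main process can be sketched as a small state machine over the scene being reproduced. The state and input objects below are hypothetical stand-ins for the data the CPU 21a actually holds.

```python
# A condensed sketch of steps S11-S19 of FIG. 23(b) (names illustrative).
def main_step(state, inputs):
    """state.scene is one of 'idle', 'intro', 'linked', 'finish';
    inputs carries the switch operations read by the signal monitor."""
    if state.scene == 'idle' and inputs.play_pressed:        # S11
        state.scene = 'intro'                                # S12
    if state.scene == 'intro':                               # S13
        if inputs.mode1_pressed or state.intro_finished:     # S14 / S16
            state.scene = 'linked'                           # -> S15
    if state.scene == 'linked':                              # S17
        # S15: interlocked scene reproduction process (next subsection)
        if inputs.mode2_pressed:                             # S18
            state.scene = 'finish'                           # S19
    return state
```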
<Interlocked scene reproduction process>
In the interlocked scene reproduction process shown in FIG. 24, the CPU 21a determines whether the cycle of the interlocked scene is still unset (step S21); in other words, whether this is immediately after the transition from the introduction scene to the interlocked scene. If the cycle of the interlocked scene is unset (immediately after the transition from the introduction scene to the interlocked scene), the CPU 21a (captured image information selection unit 21g) sets an initial cycle (step S22). In the present embodiment, the CPU 21a sets the first cycle as the initial cycle, and reproduction of the interlocked scene of the first cycle starts accordingly.
Next, the CPU 21a determines whether the sensor output from the operation information output device 3 has the value "0" (step S23); in other words, whether the user's wrist is at either the proximity position B or the separation position T. If it determines that the sensor output has the value "0", the CPU 21a proceeds to step S24; if it determines that the sensor output does not have the value "0", it proceeds to step S38.
In step S24, the CPU 21a determines whether the sensor output has changed from a negative value ("−") to the value "0"; in other words, whether the user's wrist, which had been moving toward the user's chest, has stopped. If the sensor output has changed from a negative value to the value "0", the CPU 21a determines that the user's wrist is located at the proximity position (step S25). In other words, the CPU 21a (image selection unit 21c) determines that the display timing of the shooting frame data to which the marker data (B→T) is added has arrived.
If it determines in step S24 that the sensor output has not changed from a negative value to the value "0", the CPU 21a determines whether the sensor output has changed from a positive value ("+") to the value "0" (step S26); in other words, whether the user's wrist, which had been moving away from the user's chest, has stopped. If the sensor output has changed from a positive value to the value "0", the CPU 21a determines that the user's wrist is located at the separation position (step S27). In other words, the CPU 21a (image selection unit 21c) determines that the display timing of the shooting frame data to which the marker data (T→B) is added has arrived.
If it determines in step S26 that the sensor output has not changed from a positive value to the value "0" (that is, the sensor output already had the value "0" in the previous pass), the CPU 21a determines whether the sensor output has held the value "0" for a prescribed number of consecutive passes (step S28). If it determines that the value "0" has continued for the prescribed number of passes, the CPU 21a determines that the user's wrist is at rest (step S29). If it determines that the value "0" has not continued for the prescribed number of passes, the CPU 21a proceeds to step S38.
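The position decision of steps S24 to S29 is a zero-crossing test on the normalized sensor output. The sketch below assumes the output is negative while the wrist approaches the chest and positive while it moves away, and the prescribed count is an assumed value.

```python
# A minimal sketch of steps S24-S29 (direction convention and count assumed).
STILL_COUNT = 5  # assumed "prescribed number" of consecutive zero outputs

def classify_wrist(prev_output: float, output: float, zero_run: int) -> str:
    if output == 0 and prev_output < 0:
        return 'proximity'    # S24/S25: stopped at the proximity position B
    if output == 0 and prev_output > 0:
        return 'separation'   # S26/S27: stopped at the separation position T
    if output == 0 and zero_run >= STILL_COUNT:
        return 'at_rest'      # S28/S29: wrist judged to be at rest
    return 'moving'
```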
After executing step S25, step S27, or step S29, the CPU 21a determines whether the cycle of the interlocked scene needs to be changed (step S30). As described with FIG. 9(d), the CPU 21a acquires the reciprocation frequency of the wrist based on the sensor signal from the operation information output device 3 and selects the interlocked scene whose cycle suits the acquired reciprocation frequency.
If the cycle of the selected interlocked scene differs from the cycle of the current interlocked scene, the CPU 21a determines that the cycle of the interlocked scene needs to be changed; if the two cycles are the same, it determines that no cycle change is needed.
If it determines that the cycle of the interlocked scene needs to be changed, the CPU 21a proceeds to step S34; if it determines that no cycle change is needed, it proceeds to step S31.
In step S31, the CPU 21a (prediction unit 21e) predicts the next sensor output timing. For example, as described with FIGS. 13(a) and 19(a), the CPU 21a acquires the predicted time based on the one-way motions of the wrist performed before the most recent sensor output timing, and predicts the next sensor output timing by adding the predicted time to the most recent sensor output timing.
After executing step S31, the CPU 21a (time interval setting unit 21d) sets the time interval between the shooting frame data from the most recent sensor output timing to the next sensor output timing to a time interval based on the predicted time (step S32).
For example, as described with FIG. 14, the CPU 21a selects the shooting frame data F(F1M0) at timing ts5 and the shooting frame data F(F1M1) at timing ts6', and sets the interval between the plurality of shooting frame data lying between them to the time interval dt2.
Similarly, as described with FIG. 20, the CPU 21a selects the shooting frame data F(F1M1) at timing ts6 and the shooting frame data F(F1M2) at timing ts7', and sets the interval between the plurality of shooting frame data lying between them to the time interval dt3.
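In other words, step S32 reduces to dividing the predicted travel time evenly over the shooting frames between the two marker frames. A minimal sketch, assuming n_intervals counts the gaps between those frames:

```python
# A minimal sketch of step S32 (assumes an even initial division).
def base_interval(predicted_time: float, n_intervals: int) -> float:
    """Equal spacing (e.g. dt2, dt3) before the interval adjustment
    curve of step S33 is applied."""
    return predicted_time / n_intervals
```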
After executing step S32, the CPU 21a (time interval setting unit 21d) sets an interval adjustment curve (step S33), for example the interval adjustment curve SP1 described with FIG. 14, the interval adjustment curve SP2 described with FIG. 16, or the interval adjustment curve SP3 described with FIG. 20. By setting the interval adjustment curve, the time interval between the shooting frame data can be varied as the user's wrist moves from the timing at which it is located at one of the proximity position B and the separation position T toward the timing at which it reaches the other.
In step S34, the CPU 21a (captured image information selection unit 21g) predicts the next sensor output timing. For example, as described with FIGS. 13(a) and 19(a), the CPU 21a acquires the predicted time based on the one-way motions of the wrist performed before the most recent sensor output timing, and predicts the next sensor output timing by adding the predicted time to the most recent sensor output timing.
After executing step S34, the CPU 21a (prediction unit 21e) predicts the next sensor output timing (step S35). After executing step S35, the CPU 21a (time interval setting unit 21d) sets the time interval between the shooting frame data from the most recent sensor output timing to the next sensor output timing to a time interval based on the predicted time (step S36). Steps S35 and S36 are performed in the same manner as steps S31 and S32 described above, so their description is omitted.
After executing step S36, the CPU 21a sets a composite curve (step S37). The composite curve defines the mixing ratio between the shooting frame data belonging to the interlocked scene of the current cycle and the shooting frame data belonging to the interlocked scene of the next cycle. In the combined frame generation process described later (step S39), the CPU 21a combines the two sets of shooting frame data at the mixing ratio defined by the composite curve to generate combined frame data. As a result, the image can change smoothly when the cycle of the interlocked scene is switched.
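A minimal sketch of the composite-curve idea follows: a weight running from 0 to 1 across the switch cross-fades the current cycle's frame into the next cycle's frame. The linear ramp is an assumption; the text only states that a curve defines the mixing ratio.

```python
# A minimal sketch of the period-switch cross-fade of step S37 (ramp assumed).
import numpy as np

def cycle_crossfade(frame_cur: np.ndarray, frame_next: np.ndarray,
                    progress: float) -> np.ndarray:
    """progress runs 0 -> 1 across the cycle switch; the composite curve
    gives the weight of the next cycle's interlocked scene."""
    w = min(max(progress, 0.0), 1.0)
    out = (1.0 - w) * frame_cur.astype(np.float64) \
          + w * frame_next.astype(np.float64)
    return out.astype(frame_cur.dtype)
```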
If it determines in step S23 that the sensor output does not have the value "0", if it determines in step S28 that the value "0" has not continued for the prescribed number of passes, or after executing step S33 or step S37, the CPU 21a determines whether the refresh timing of the display device 4 has arrived (step S38). When the refresh timing arrives, the CPU 21a (display image information generation unit 21f) generates combined frame data by alpha blending a pair of shooting frame data and writes it to the frame buffer 23 as display frame data (step S39).
The display frame data (combined frame data) written to the frame buffer 23 is output to the display device 4 at the refresh timing, and the display device 4 displays an image based on the display frame data.
If it determines in step S38 that the refresh timing has not arrived, or after executing step S39, the CPU 21a returns to the main process.
By performing the above processing, the video display system 1 of the present embodiment can reproduce the interlocked scene in synchronization with the reciprocating motion of the user's wrist, broadening the range of target exercises compared with conventional systems that addressed only motion continuing in a single direction.
Furthermore, in the video display system 1 of the present embodiment, shooting frame data is selected at the first timing, at which the user's wrist is located at one of the proximity position B and the separation position T, and at the second timing, at which the predicted time has elapsed from the first timing and the wrist reaches the other of the proximity position B and the separation position T, and the time interval between the plurality of shooting frame data lying between the selected shooting frame data is set based on the predicted time. Consequently, even if the time taken by the reciprocating motion of the wrist varies, the video content can be reproduced so as to follow it.
In addition, by using the interval adjustment curves SP1 to SP3, the video display system 1 of the present embodiment varies the time interval between the shooting frame data in the course from the first timing to the second timing, so the image changes smoothly even when the selected pair of shooting frame data is updated, for example from the pair of shooting frame data F(F1M0) and F(F1M1) shown in FIG. 13(a) to the pair of shooting frame data F(F1M1) and F(F1M2) shown in FIG. 19(a).
In addition, at each refresh timing of the display device 4, a pair of successive shooting frame data is alpha blended to generate combined frame data, and the generated combined frame data is displayed on the display device 4 as display frame data; in this respect too, the image displayed on the display device 4 changes smoothly.
<Modifications>
In the embodiment described above, after the time intervals of the shooting frame data are adjusted by the interval adjustment curves SP1 to SP3, a pair of successive shooting frame data is composited according to the refresh timing; however, the present invention is not limited to this configuration.
For example, when a display device 4 that supports a variable refresh rate is used, the shooting frame data whose time intervals have been adjusted may, as shown in FIG. 25, be stored in the frame buffer 23 as display frame data to be displayed on the display device 4.
In the example of FIG. 25, the shooting frame data F(F1M0) is stored in the frame buffer 23 at timing Rf20, the shooting frame data F(F1M0+1) at timing Rf21, and the shooting frame data F(F1M0+2) at timing Rf22.
Although the time interval from timing Rf20 to timing Rf21 differs from the time interval from timing Rf21 to timing Rf22, the display device 4 supports a variable refresh rate, so the images can be displayed appropriately.
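With a variable-refresh-rate display, the adjusted timestamps can thus serve directly as presentation times, with no resampling to a fixed refresh grid. The display API below is a hypothetical stand-in used only to illustrate the idea.

```python
# A minimal sketch of the variable-refresh-rate variant of FIG. 25.
def present_adjusted(display, frames):
    """frames: list of (timestamp, image) pairs with non-uniform intervals,
    e.g. the timings Rf20, Rf21, Rf22 of FIG. 25."""
    for t, image in frames:
        display.present_at(image, t)  # hypothetical VRR presentation call
```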
Regarding the reproduction control of the interlocked scene, the interlocked scene of the first cycle is reproduced first in the embodiment described above, but the interlocked scene reproduced first is not limited to that of the first cycle; for example, the interlocked scene of the second cycle or of the third cycle may be reproduced first.
In the embodiment described above, the interlocking control was based on a specific one-way motion of the reciprocating motion, namely the movement time of the wrist from the proximity position B to the separation position T and the movement time of the wrist from the separation position T to the proximity position B. The interlocking control, however, is not limited to these. For example, the control may use a movement time obtained by averaging the movement time of the wrist from the proximity position B to the separation position T and the movement time of the wrist from the separation position T to the proximity position B. The interlocking control may also be based on the full reciprocating motion, that is, on the movement time of the wrist from the proximity position B through the separation position T back to the proximity position B, or likewise from the separation position T through the proximity position B back to the separation position T.
In the embodiment described above, the control device 2 including the CPU 21a, the RAM 21b, and the frame buffer 23 was illustrated, but the present invention is not limited to this configuration. For example, all or part of the processing executed by the CPU 21a may be executed by hardware logic. The control device 2 may also be equipped with a plurality of CPUs 21a, or with a plurality of processors of different architectures, such as the CPU 21a and a GPU (Graphics Processing Unit). Furthermore, processing performed using the frame buffer 23 may be performed using the RAM 21b, and conversely, processing performed using the RAM 21b may be performed using the frame buffer 23.
In the embodiment described above, the control device 2, the operation information output device 3, the display device 4, and the sound output device 5 were provided as separate bodies, but the present invention is not limited to this configuration. For example, the control device 2 may be incorporated into and integrated with the operation information output device 3, or incorporated into and integrated with the display device 4; the control device 2, the display device 4, and the sound output device 5 may also be integrated.
In the embodiment described above, a configuration in which one control device 2 performs the processing was illustrated, but the present invention is not limited to this configuration. For example, the processing may be performed by a plurality of control devices 2 communicably connected via a communication network.
In the embodiment described above, the operation information output device 3 was illustrated as being worn on the wrist, but it is not limited to being worn on the wrist. The operation information output device 3 may be worn on any part capable of reciprocating motion, such as the user's ankle, head, or waist.
In the embodiment described above, the operation information output device 3 included the control unit 31, but the present invention is not limited to this configuration. For example, the processing performed by the control unit 31 may be executed by hardware logic, or the control unit 31 of the operation information output device 3 may be omitted and the operation information output device 3 controlled by the control unit 21 of the control device 2.
In the embodiment described above, the sensor output of the gyro sensor 32 of the operation information output device 3 was normalized by the control unit 31 of the operation information output device 3, but the present invention is not limited to this configuration. For example, the normalization of the sensor output may be performed by the control unit 21 of the control device 2.
Further, instead of the operation information output device 3, a video camera that captures the user's movement may be used, and the user's movement may be extracted as operation information from the video obtained by the video camera.
In the embodiment described above, a video processing system suited to exercise on the rowing machine 6 was described, but the present invention is not limited to this configuration. For example, the present invention may be applied to a sperm collection support system.
A sperm collection support system supports the collection of male sperm for medical research or therapeutic needs. Such a system is used to collect sperm in order to investigate the causes of infertility between spouses, to treat sexual dysfunction, and to secure sperm for artificial insemination. A sperm collection support system can also meet various social needs, such as preventing sexual crimes by relieving personal sexual desire, preventing prostitution, and reducing the number of people infected with sexually transmitted diseases.
When the present invention is applied to a sperm collection support system, for example, adult content filmed during sexual activity between a man and a woman is used as the video content, and the operation information output device 3 is worn on the man's wrist. The adult content is then reproduced in synchronization with the reciprocating motion of the wrist accompanying the man's masturbation.
Compared with a general configuration that simply reproduces adult content, this sperm collection support system can heighten the sense of presence and improve the efficiency of sperm collection.
[Summary of example embodiments of the present invention and their operation and effects]
<First embodiment>
A video processing device that processes video based on operation information (sensor output) output from an operation information output unit (operation information output device 3) and changing periodically with the reciprocating motion of a specific part (wrist) of the user between a first position (one of the proximity position B and the separation position T) and a second position (the other of the proximity position B and the separation position T), the device comprising: an image storage unit (storage unit 22) that stores a plurality of captured image information (shooting frame data) captured in time series; an image selection unit (image selection unit 21c) that selects captured image information stored in the image storage unit; and a time interval setting unit (time interval setting unit 21d) that sets the time interval between the captured image information. The captured image information includes first captured image information (shooting frame data F(F1M0), F(F1M1)) corresponding to the first position and second captured image information (shooting frame data F(F1M1), F(F1M2)) corresponding to the second position. The device further comprises a prediction unit (prediction unit 21e) that predicts, based on the operation information, the movement time of the specific part between the first position and the second position. For the reciprocating motion, the image selection unit selects the first captured image information at a first timing (timings ts5, ts6) at which the specific part is located at the first position, and selects the second captured image information at a second timing (timings ts6', ts7') at which the specific part reaches the second position after the movement time (predicted time) predicted by the prediction unit has elapsed from the first timing. The time interval setting unit sets the time interval between the captured image information from the first captured image information to the second captured image information to a time interval (time intervals dt2, dt3) based on the movement time predicted by the prediction unit.
According to the video processing device of this embodiment, even if the movement time of the specific part between the first position and the second position varies, the time interval between the captured image information from the first captured image information to the second captured image information is set according to the movement time, so the display of the video can follow the movement time of the specific part. As a result, video processing suited to the reciprocating motion of a specific part by the user can be realized.
<Second embodiment>
The prediction unit predicts the movement time (time interval TS5') of a one-way motion of the reciprocating motion based on the operation information output from the operation information output unit for other one-way motions performed a plurality of times before that one-way motion (the wrist movement from timing ts1 to ts2 and the wrist movement from timing ts3 to ts4).
According to the video processing device of this embodiment, even if the movement time of a one-way motion of the reciprocating motion varies, the display of the video can follow the movement time of the specific part.
<Third embodiment>
The time interval setting unit changes the time interval between the captured image information stepwise (dt2a to dt2c, dt2d to dt2f) in the course from the first timing to the second timing.
According to the video processing device of this embodiment, the image can be displayed smoothly at the control switching timing.
<Fourth embodiment>
The device includes a display image information generation unit (display image information generation unit 21f) that generates display image information (display frame data) to be displayed on a display device (display device 4) based on the captured image information.
According to the video processing device of this embodiment, an image suited to the display device can be generated based on the captured image information.
<Fifth embodiment>
The display image information generation unit generates the display image information by alpha blending a pair of successive captured image information, and the alpha blending uses, as its coefficient, the ratio in which the refresh timing of the display device divides the time interval between the captured image information.
According to the video processing device of this embodiment, the image can be displayed smoothly.
<Sixth embodiment>
The display device is a stereoscopic image display device that displays the captured image information as a stereoscopically viewable stereoscopic image.
According to the video processing device of this embodiment, the image can be displayed as a stereoscopic image.
<Seventh embodiment>
The captured image information includes a plurality of sets of other captured image information, differing in number, contained between the first captured image information and the second captured image information (the shooting frame data of the interlocked scenes of the first to fifth cycles), and the device includes a captured image information selection unit (captured image information selection unit 21g) that selects one of the plurality of sets of captured image information based on the operation information.
According to the video processing device of this embodiment, the speed of the reciprocating motion of the specific part can vary over a wide range and still be accommodated.
1: video display system; 2: control device (video processing device); 3: operation information output device; 4: display device; 5: sound output device; 6: rowing machine; 21: control unit; 21a: CPU; 21b: RAM; 21c: image selection unit; 21d: time interval setting unit; 21e: prediction unit; 21f: display image information generation unit; 21g: captured image information selection unit; 22: storage unit (image storage unit); 23: frame buffer; 31: control unit; 32: gyro sensor; 33: acceleration sensor; 34: first mode changeover switch; 35: second mode changeover switch

Claims (10)

1. A video processing device that processes video based on operation information output from an operation information output unit and changing periodically with reciprocating motion of a specific part of a user between a first position and a second position, the video processing device comprising:
an image storage unit that stores a plurality of captured image information captured in time series;
an image selection unit that selects captured image information stored in the image storage unit; and
a time interval setting unit that sets a time interval between the captured image information, wherein
the captured image information includes first captured image information corresponding to the first position and second captured image information corresponding to the second position,
the video processing device further comprises a prediction unit that predicts a movement time of the specific part between the first position and the second position based on the operation information,
the image selection unit selects, for the reciprocating motion, the first captured image information at a first timing at which the specific part is located at the first position, and selects the second captured image information at a second timing at which the specific part reaches the second position after the movement time predicted by the prediction unit has elapsed from the first timing, and
the time interval setting unit sets the time interval between the captured image information from the first captured image information to the second captured image information to a time interval based on the movement time predicted by the prediction unit.
2. The video processing device according to claim 1, wherein the prediction unit predicts the movement time of a one-way motion of the reciprocating motion based on the operation information output from the operation information output unit for other one-way motions performed a plurality of times before the one-way motion.
3. The video processing device according to claim 1 or 2, wherein the time interval setting unit changes the time interval between the captured image information stepwise in a course from the first timing to the second timing.
4. The video processing device according to any one of claims 1 to 3, further comprising a display image information generation unit that generates display image information to be displayed on a display device based on the captured image information.
5. The video processing device according to claim 4, wherein the display image information generation unit generates the display image information by alpha blending a pair of successive captured image information, and the alpha blending uses, as a coefficient, a ratio in which a refresh timing of the display device divides the time interval between the captured image information.
6. The video processing device according to claim 4 or 5, wherein the display device is a stereoscopic image display device that displays the captured image information as a stereoscopically viewable stereoscopic image.
7. The video processing device according to any one of claims 1 to 6, wherein the captured image information includes a plurality of sets of other captured image information, differing in number, contained between the first captured image information and the second captured image information, and the video processing device further comprises a captured image information selection unit that selects any one of the plurality of sets of captured image information based on the operation information.
8. A video processing method for processing video based on operation information output from an operation information output unit and changing periodically with reciprocating motion of a specific part of a user between a first position and a second position, the video processing method comprising the steps of:
storing, in a storage unit, a plurality of captured image information captured in time series and including first captured image information corresponding to the first position and second captured image information corresponding to the second position;
predicting a movement time of the specific part between the first position and the second position based on the operation information;
selecting, for the reciprocating motion, the first captured image information at a first timing at which the specific part is located at the first position, and selecting the second captured image information at a second timing at which the specific part reaches the second position after the predicted movement time has elapsed from the first timing; and
setting a time interval between the captured image information from the first captured image information to the second captured image information to a time interval based on the predicted movement time.
  9.  A computer program that causes a computer to execute the steps of the video processing method according to claim 8.
  10.  A video processing system comprising: a motion information output unit that outputs motion information that changes periodically with the reciprocating motion of a specific part of the user between a first position and a second position; and a video processing apparatus that processes video based on the motion information, wherein
     the video processing apparatus comprises:
     an image storage unit that stores a plurality of pieces of photographed image information captured in time series;
     an image selection unit that selects photographed image information stored in the image storage unit;
     a time interval setting unit that sets the time interval between the pieces of photographed image information; and
     a prediction unit that predicts, based on the motion information, the movement time of the specific part between the first position and the second position,
     the photographed image information includes first photographed image information corresponding to the first position and second photographed image information corresponding to the second position,
     the image selection unit selects, for the reciprocating motion, the first photographed image information at a first timing at which the specific part is located at the first position, and selects the second photographed image information at a second timing at which the specific part reaches the second position, after the movement time predicted by the prediction unit has elapsed from the first timing, and
     the time interval setting unit sets the time interval between the pieces of photographed image information, from the first photographed image information to the second photographed image information, to a time interval based on the movement time predicted by the prediction unit.
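
Claim 10 recasts the method of claim 8 as a division of responsibilities among units. A structural sketch, with every class name and interface assumed (the claim specifies only which unit does what):

    class MotionInfoOutput:
        # Outputs motion information that varies with the reciprocation,
        # e.g. a position sensor on the training apparatus (assumed).
        def read(self):
            raise NotImplementedError

    class VideoProcessor:
        # The claimed units are held as collaborators; each is duck-typed
        # here rather than pinned to a concrete interface.
        def __init__(self, image_storage, predictor, selector, interval_setter):
            self.image_storage = image_storage      # image storage unit
            self.predictor = predictor              # prediction unit
            self.selector = selector                # image selection unit
            self.interval_setter = interval_setter  # time interval setting unit

        def update(self, motion_info):
            # One half-cycle: predict the movement time, select the frames for
            # the two timings, then space them by the predicted time.
            t_move = self.predictor.predict(motion_info)
            frames = self.selector.select(self.image_storage, t_move)
            return self.interval_setter.schedule(frames, t_move)
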
PCT/JP2017/043803 2017-12-06 2017-12-06 Video processing device, video processing method, computer program, and video processing system WO2019111348A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/043803 WO2019111348A1 (en) 2017-12-06 2017-12-06 Video processing device, video processing method, computer program, and video processing system

Publications (1)

Publication Number Publication Date
WO2019111348A1

Family

ID=66751403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/043803 WO2019111348A1 (en) 2017-12-06 2017-12-06 Video processing device, video processing method, computer program, and video processing system

Country Status (1)

Country Link
WO (1) WO2019111348A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06154354A (en) * 1992-11-16 1994-06-03 Kanji Murakami Training device using stereoscopic image
JPH0780096A (en) * 1993-09-14 1995-03-28 Sony Corp Virtual experience device with image display device
JP2007144107A (en) * 2005-10-25 2007-06-14 Vr Sports:Kk Exercise assisting system
JP2011097988A (en) * 2009-11-04 2011-05-19 Pioneer Electronic Corp Training support device
JP2014531142A (en) * 2011-08-16 2014-11-20 デスティニーソフトウェアプロダクションズ インク Script-based video rendering
JP2016062184A (en) * 2014-09-16 2016-04-25 学校法人立命館 Moving image generation system, moving image generation device, moving image generation method, and computer program

Similar Documents

Publication Publication Date Title
JPH1132284A (en) Image photograph reproduction system, method and recording medium recording image reproduction program
JP2015130169A (en) Systems and methods for recording and playing back point-of-view videos with haptic content
US20200388190A1 (en) Information processing apparatus, information processing method, and program
JPWO2018055899A1 (en) Display device and control method of display device
ES2192482A1 (en) Gymnastic and sports apparatus comprising a stereoscopic projection screen
US12023550B2 (en) Timeline and media controller for exercise machine
JP5786892B2 (en) Movie playback device, movie playback method and program
US20160179206A1 (en) Wearable interactive display system
WO2019111348A1 (en) Video processing device, video processing method, computer program, and video processing system
JP2013005041A (en) Reproduction device and reproduction method
TW201509487A (en) Method and system for exercise video training
JP6688378B1 (en) Content distribution system, distribution device, reception device, and program
WO2016003843A1 (en) Interactive game accompaniment music generation based on prediction of user moves durations.
EP4306192A1 (en) Information processing device, information processing terminal, information processing method, and program
JP3694663B2 (en) Mobile simulation experience apparatus and method
JP2010125253A (en) Exercise support system, method, device and teaching information generating device
JPWO2015033446A1 (en) Running support system and head mounted display device used therefor
JP5493362B2 (en) Movie playback apparatus and program
JP2008295037A (en) Moving image playback device and method thereof
WO2022040033A1 (en) Timeline and media controller for exercise machine
JP7049515B1 (en) Information processing equipment, programs and drawing methods
WO2023042436A1 (en) Information processing device and method, and program
JP6995237B1 (en) Semen collection device and data provision system
JP6665273B1 (en) Content distribution system, receiving device, and program
JP5675302B2 (en) Content playback apparatus and content playback method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17933919

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17933919

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP