
WO2007063447A2 - Method of driving an interactive system, and a user interface system - Google Patents

Method of driving an interactive system, and a user interface system Download PDF

Info

Publication number
WO2007063447A2
WO2007063447A2 (PCT/IB2006/054356)
Authority
WO
WIPO (PCT)
Prior art keywords
user interface
stationary base
unit
input
portable user
Prior art date
Application number
PCT/IB2006/054356
Other languages
French (fr)
Other versions
WO2007063447A3 (en)
Inventor
Vasanth Philomin
Original Assignee
Philips Intellectual Property & Standards GmbH
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Philips Intellectual Property & Standards GmbH and Koninklijke Philips Electronics N.V.
Publication of WO2007063447A2
Publication of WO2007063447A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/1633 Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1688 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being integrated loudspeakers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/1626 Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/1632 External expansion units, e.g. docking stations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/1633 Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1686 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/0227 Cooperation and interconnection of the input arrangement with other functional units of a computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes

Definitions

  • the invention relates to a method of driving a dialog system comprising a number of stationary base units and a number of portable user interface units. Moreover, the invention relates to an appropriate user interface system and to a dialog system comprising such a user interface system.
  • Advanced user interface systems no longer rely on display, keyboard, mouse, or remote control, but implement more intuitive input/output modalities like speech, gestural input, etc.
  • With such interactive systems, it is possible to use hands-free and even eyes-free interaction, for example in the home, in the car, or in a work environment.
  • Such a dialog system can offer a user an intuitive means of interacting with a variety of different applications, such as mailbox applications, home entertainment applications, etc. Due to progress in technological trends towards networked environments and miniaturization, dialog systems with various input/output modalities are set to become a part of everyday life.
  • An interactive device for use in such a dialog system can be made to resemble, for example, a human, animal, robot, or abstract figure. A typical example of such a dialog system is shown in DE 102 49 060 A1.
  • An input/output modality can be any hardware and, if required, software which allows a user to communicate with a dialog system by means of the input sensors featured by that system, such as microphone or camera, and/or output devices such as loudspeaker or display for command and information input and output.
  • An input/output modality can be, for example, speech recognition, face recognition, gesture recognition, etc.
  • an input/output modality uses an algorithm, usually comprising one or more computer software modules, to interpret input information and apply this to an algorithmic model in order to identify, for example, words that have been detected by a microphone or a face in an image generated by a camera.
  • Such an algorithmic model can be adaptable, for example in a training procedure, to suit the environment in which it is being used, or to suit the characteristics of the input sensors which deliver input information to the adaptable algorithmic model.
  • the algorithmic model thus adapted can be stored in a memory and retrieved whenever required.
  • Such a portable user interface unit can be, for example, a detachable "head" of an interactive device for attaching to a stationary base unit.
  • the portable user interface unit comprises at least one input sensor such as a microphone or camera, and at least one output element such as a loudspeaker or a display.
  • the user can communicate with the dialog system by means of the input sensors of the portable user interface unit and can receive feedback from the dialog system by means of the output elements of the portable user interface unit.
  • the stationary base unit is a unit which might generally be restricted to use within a particular environment, by being, for example, fixedly located in that environment. Because of the hardware required for performing operations such as speech recognition, face recognition, etc., these functions might often be implemented in the stationary base unit, or in a central unit connected to the different stationary base units of the dialog system, allowing the portable head unit to be simply detached and easily moved from one environment to the next.
  • the input/output modalities of a dialog system might be, for example, speech recognition, speech synthesis, face recognition, gesture recognition etc., with which the dialog system can interpret the user's input.
  • the quality of operation of such an input/output modality can be considerably influenced by, for example, microphone quality and room reverberation in the case of speech recognition, and camera properties and lighting conditions in the case of face and gesture recognition. Therefore, the quality of operation depends on the environment in which a stationary base unit is located, as well as on the characteristics of the input sensors of the portable head unit mounted on that stationary base unit.
  • the present invention provides a method of driving a dialog system comprising a number of stationary base units and a number of portable user interface units, any of which can be connected in a wired or wireless manner to any of the stationary base units, wherein a user interacts with the dialog system through a portable user interface unit connected to a stationary base unit using an input/output modality, which input/output modality utilises an algorithmic model, and a number of adaptable algorithmic models utilised by the input/output modality are assigned to the stationary base units and/or to the portable user interface units.
  • the stationary base unit and/or the portable user interface unit used by the user to interact with the dialog system is detected, and an adapted algorithmic model for the interaction is subsequently allocated to the input/output modality according to the stationary base unit and/or the portable user interface unit used by the user for interaction with the dialog system.
  • An obvious advantage of the method according to the invention is that the adaptable algorithmic model required by the input/output modality used by the portable user interface unit or stationary base unit is automatically implemented. Therefore, regardless of which portable user interface unit is used in connection with a stationary base unit in any environment, the method according to the invention ensures that the input/output modalities utilised in the interaction can avail of the corresponding algorithmic models, optimally adapted to the current constellation, thereby allowing the dialog system to operate with a high degree of robustness in the various environments. It is not necessary for a model to re-adapt to a new environment or input sensor each time the interactive system is used in a new constellation; instead, an adaptive model already adapted to a current constellation can be continually refined to suit this constellation.
  • a corresponding user interface system for a dialog system with a number of stationary base units and portable user interface units comprises an input/output modality using an adaptable model enabling communication between a user and the dialog system by a portable user interface unit connected to a stationary base unit.
  • a detection unit detects which stationary base unit and/or portable user interface unit is currently being utilised by the user to interact with the dialog system, and a memory means stores a number of adapted algorithmic models for the input/output modality, which adapted algorithmic models are each assigned to the stationary base units and/or to the portable user interface units.
  • An allocation unit allocates an adapted algorithmic model to the input/output modality according to the stationary base unit and/or the portable user interface unit utilised by the user in an interaction with the dialog system.
  • the adaptation of the model is carried out to suit the environment.
  • adaptation is carried out to suit characteristics of the input sensors. Should all the stationary base units be used in the same environment, the models only depend on the input sensors of the various portable user interface units. On the other hand, if only one type of portable user interface unit is being used in the dialog system, each one being equipped with the same input sensors, then each portable user interface unit can simply avail of the same models which then only depend on the different stationary base units.
  • the adapted algorithmic models are assigned to different interface/base combinations, where an interface/base combination comprises a specific portable user interface unit attached to a specific stationary base unit, and an adapted algorithmic model is allocated to the input/output modality according to the specific interface/base combination used by the user to interact with the dialog system.
  • Which portable user interface unit is connected to which stationary base unit is determined by the detection unit of the user interface system. This can be determined automatically on connection, or some time later.
  • the corresponding models will be assigned to the input/output modalities available for that portable user interface unit.
  • an adapted algorithmic model which takes into consideration the microphone of a certain portable user interface unit and the environment of the stationary base unit to which that portable user interface unit is attached can be allocated to that interface/base combination.
  • the adaptable algorithmic models might be stored locally, i.e. in a stationary base unit or in a portable user interface unit. This might be advantageous when a model is only dependent on a stationary base unit or a portable head unit.
  • the adapted algorithmic models are stored in a central database and retrieved and allocated to the input/output modality by a central model managing unit, or model manager, which can be realised, for example, as part of a dialog manager.
  • retrieval of the models can be effected by data transfer from the central database to the stationary base unit, for example by a cable connection such as a USB (universal serial bus) connection, or a WLAN (wireless local area network) connection.
  • a stationary base unit can also be realised in a very basic way, for example, with only a requisite connection to a power supply such as the mains power supply or a battery power source, if the input/output modalities are located either in a portable user interface unit or in a central unit.
  • an adapted algorithmic model assigned to a specific portable user interface unit or to an interface/base combination comprising this portable user interface unit should preferably also be stored in a memory of the portable user interface unit or in the central unit as mentioned above.
  • it may be that a portable user interface unit or stationary base unit is used for the first time in the dialog system, or that conditions in the environment of a stationary base unit have changed, or that a portable user interface unit has been equipped with new input sensor hardware.
  • an adapted algorithmic model for the relevant portable user interface unit or stationary base unit may be outdated or unavailable. Therefore, in a preferred embodiment of the invention, if there is no adapted algorithmic model available for a certain input/output modality of a portable user interface unit, a default algorithmic model for that input/output modality is assigned to the stationary base unit and/or to the portable user interface unit. This default model is then adapted to the particular environment, for example in a training process, and stored for further interactive sessions.
  • the model training can occur in the background, without actively involving the user, or the user might be required to perform specific actions, such as saying certain words or phrases, or standing at certain positions in the room, in order for the training to result in a robust adapted model for that environment.
  • Each portable user interface unit, stationary base unit, or interface/base combination can be associated with a number of input/output modalities, and therefore also a number of adapted algorithmic models.
  • a portable user interface unit, a stationary base unit, or a single interface/base combination might have an adapted algorithmic model for face recognition, another for speech recognition, etc.
  • these adapted algorithmic models are, in a further preferred embodiment of the invention, preferably grouped together in a suitable profile associated with the corresponding portable user interface unit, stationary base unit or interface/base-combination.
  • a dialog system can comprise any number of stationary base units, any number of portable user interface units, and a user interface system as described above.
  • the stationary base units can be distributed in various different kinds of environments, and any of the portable user interface units can be attached to any of the stationary base units, as desired.
  • the elements of the user interface system can be divided among the stationary base units, portable user interface units, and, if required, an external model manager, as appropriate.
  • Fig. 1 is a schematic representation of a dialog system comprising a number of portable user interface units and stationary base units according to an embodiment of the invention
  • Fig. 2 shows a block diagram of a user interface system pursuant to a first embodiment of the invention
  • Fig. 3 shows a block diagram of a user interface system pursuant to a second embodiment of the invention
  • Fig. 4 shows a block diagram of a user interface system pursuant to a third embodiment of the invention.
  • a number of portable user interface units H1, H2, H3 of a dialog system are shown connected to a number of stationary base units B1, B2, B3, where each stationary base unit B1, B2, B3 is located in a separate environment, as indicated by the dashed lines.
  • only three combinations of interface/base units are shown, although any number of such combinations is possible.
  • a first interface/base combination consisting of the portable user interface unit H2 and the stationary base unit B1, and therefore called "H2B1" in the following, is shown on the left.
  • the stationary base unit B1 is assigned to a first environment, and is installed, perhaps permanently, in that environment.
  • a user of the dialog system has placed the portable user interface unit H2 on that stationary base unit B1, at least for the time being.
  • Two other interface/base combinations "H1B3" and "H3B2" are shown, where the combination "H1B3" consists of the portable user interface unit H1 attached to the stationary base unit B3, and the combination "H3B2" consists of the portable user interface unit H3 attached to the stationary base unit B2.
  • Each interface/base combination of portable user interface units H1, H2, H3 and stationary base units B1, B2, B3 can avail of different input/output modalities and different hardware elements.
  • the portable user interface unit H2 features a camera 10, a display 11, a pair of microphones 12, and a loudspeaker 13.
  • the stationary base unit B1 to which the portable user interface unit is attached features a number of communication interfaces, here a USB interface 14 and a WLAN interface 15. Using these interfaces, the stationary base unit can communicate with a remote server, not shown in the diagram.
  • the other portable user interface units H1, H3 and stationary base units B2, B3 can avail of the same or similar input/output modalities and communication interfaces for communication with a remote server, as indicated in the diagram.
  • Any of the portable user interface units H1, H2, H3 can be used in conjunction with any of the stationary base units B1, B2, B3.
  • portable user interface unit H2 might be removed from the stationary base unit B1 to which it is connected, and mounted instead onto either of the stationary base units B2, B3, or onto any other stationary base unit not shown in the diagram.
  • the information exchange between a portable user interface unit and a stationary base unit will be explained in detail with the aid of Fig. 2.
  • a user interface system 3 for a dialog system 1 is shown in relation to a user 2 and a number of applications A1, A2, ..., An, such as a mailbox application, home entertainment application, intelligent home management system, etc.
  • the portable user interface unit H2 and stationary base unit B1 are shown in an abstract representation by means of the dashed lines.
  • the user interface system 3 comprises a number of input/output modalities, which are incorporated in the portable user interface unit H2.
  • a speech-based input/output modality, in the form of a speech recognition arrangement 200, uses, on the input side, a microphone 20 for detecting speech input of the user 2.
  • the speech recognition arrangement 200 can comprise the usual speech recognition module and a following language understanding module, so that speech utterances of the user 2 can be converted into digital form.
  • a speech-based input/output modality features a speech synthesis arrangement 210, which can comprise, for example, a language generation unit and a speech synthesis unit. The synthesised speech is then output to the user 2 by means of a loudspeaker 21.
  • a visual input/output modality 230 uses a camera 23 on the input side, and comprises an image analysis unit, here a face recognition unit 230, for processing the images generated by the camera 23.
  • an input/output modality comprises a display driver 220 for rendering visual output signals into a form suitable for displaying on a screen or display 22.
  • the operation of the input/output modalities 200, 210, 220, 230, as described above, depends on the models used, and therefore on the interface/base combination, particularly in the case of the speech recognition arrangement 200 and the face recognition unit 230.
  • a detection unit 4 determines which stationary base unit the portable user interface unit has been connected to.
  • the detection unit 4 informs an allocation unit 6, which can then retrieve the necessary adapted algorithmic models M1, M2 from a memory 5 and allocate them to the appropriate input/output modalities 200, 210, 220, 230.
  • the adaptable algorithmic model M1 is a model for the user's speech adapted to the environment in which the stationary base unit B1 is located and to the microphone of the portable user interface unit H2, so that the speech-based input/output modality 200 can successfully interpret utterances spoken by the user 2 in that environment.
  • the adaptable algorithmic model M2 is a model for the user's appearance and the properties of the visual sensor 23 of the portable user interface unit H2, so that the user 2 can be successfully recognised by the visual input/output modality 230 in the conditions prevalent in that environment.
  • the models M1, M2 are stored in the stationary base unit, and differ only in the characteristics of the various input sensors of the portable user interface unit currently mounted onto the stationary base unit.
  • a dialog manager 7 manages the interaction between the user 2 and the applications A1, A2, ..., An with which the user 2 can communicate in the dialog system 1. Such a dialog manager 7 analyses user input and issues appropriate instructions to the corresponding application, and deals with feedback or requests from the applications A1, A2, ..., An. All of the components of the input/output modalities mentioned here, such as speech recognition 200, speech synthesis 210, face recognition 230 and visual output 220, and the components of the dialog manager 7 and the required interfaces (not shown in the diagram) between the dialog manager 7 and the individual applications A1, A2, ..., An, are known to a person skilled in the art and will not therefore be described in more detail.
  • the detection module 4, allocation module 6 and dialog manager 7 could either be part of the portable user interface unit or of the stationary base unit.
  • Fig. 3 shows a different realisation of the user interface system 3.
  • the user 2 has mounted the portable user interface unit H1 on a stationary base unit B3 which does not avail of any storage capacity.
  • the detection unit 4 determines the stationary base unit to which the portable user interface unit H1 is attached, and notes that this stationary base unit does not store any adaptable algorithmic models.
  • the portable user interface unit H1 in this case is equipped with a memory 5' from which the allocation unit 6 can retrieve the adaptable algorithmic models M1, M2 required for the input/output modalities, which are shown to be the same as those for the portable user interface unit H2 described above, but which need not necessarily be so.
  • the models M1, M2 are stored in the portable user interface unit H1, and differ only in the characteristics of the environment of the stationary base unit currently connected to the portable user interface unit.
  • the detection module 4, allocation module 6 and dialog manager 7 could preferably also be part of the portable user interface unit H1, so that the stationary base unit B3 need only be a sort of base with power supply and a connector for receiving the portable user interface unit H1 and connecting to an external central unit.
  • A further realisation of the user interface system 3 is shown in Fig. 4.
  • the adaptable algorithmic models M, M1, M2, ..., Mn for the different environments of the stationary base units of the user interface system 3 are gathered in a profile manager 8 of a central unit 9, which also comprises the detection module 4, the allocation module 6 and the dialog manager 7 as well as the various input/output modalities 200, 210, 220, 230.
  • the stationary base units and the portable user interface units of the user interface system 3 do not necessarily need to be equipped with storage capabilities for storing adaptable algorithmic models, or it may be that some stationary base units and/or portable user interface units have such storage capabilities, while others do not, so that the profile manager 8 manages the adaptable algorithmic models for those units not availing of storage capabilities.
  • the detection unit 4 determines which interface/base combination is being used, and informs the allocation unit 6.
  • the allocation unit 6 issues appropriate commands to the profile manager 8 in order to retrieve the required models and allocate them to the corresponding input/output modalities 200, 210, 220, 230.
  • the stationary base unit B2 to which the portable user interface unit H3 is attached does not yet avail of an adaptable algorithmic model for one of the input/output modalities of the portable user interface unit H3.
  • it may be that the stationary base unit B2 is new, or has been relocated to a new environment, or that conditions in its environment have changed, so that a new adaptable algorithmic model is required for speech recognition and/or face recognition.
  • a default algorithmic model M is retrieved from the profile manager 8 and allocated to the appropriate input/output modality. Thereafter, this default algorithmic model M can be trained in this environment for this stationary base unit B2, and then stored again in the profile manager 8, so that the next time this portable user interface unit H3 is attached to this particular stationary base unit B2, the adapted algorithmic model for the input/output modality is available.
  • a "unit" or "module" can comprise a number of units or modules, unless otherwise stated.
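The detection and allocation scheme set out in the definitions above (a detection unit reporting the current interface/base combination, an allocation unit retrieving a stored adapted model for it) can be sketched as a simple lookup. This is an illustrative sketch only; the class name, the memory layout, and the model labels are assumptions, not taken from the patent.

```python
# Illustrative sketch of the detection/allocation scheme described above.
# All names (AllocationUnit, memory layout, model labels) are assumptions.

class AllocationUnit:
    """Allocates a stored adapted model to an input/output modality,
    according to the detected interface/base combination."""

    def __init__(self, memory):
        # memory: dict keyed by (interface_id, base_id) -> {modality: model}
        self.memory = memory

    def allocate(self, interface_id, base_id, modality):
        """Return the adapted model for the detected combination, or None."""
        return self.memory.get((interface_id, base_id), {}).get(modality)

# The detection unit reports the current combination, e.g. H2 mounted on B1:
memory = {("H2", "B1"): {"speech": "M1 (speech, adapted)",
                         "face": "M2 (face, adapted)"}}
allocator = AllocationUnit(memory)
model = allocator.allocate("H2", "B1", "speech")
```

A combination with no stored model simply yields `None`, which is the case in which, per the definitions above, a default model would be assigned instead.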

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)
  • Stored Programmes (AREA)

Abstract

The invention describes a method of driving a dialog system (1) comprising a number of stationary base units (B1, B2, B3) and a number of portable user interface units (H1, H2, H3), any of which can be connected to any of the stationary base units (B1, B2, B3), wherein a user (2) interacts with the dialog system (1) through a portable user interface unit (H1, H2, H3) connected to a stationary base unit (B1, B2, B3) using an input/output modality (200, 210, 220, 230). The input/output modality (200, 210, 220, 230) utilises an algorithmic model, and a number of adaptable algorithmic models (M, M1, M2, ..., Mn) utilised by the input/output modality (200, 210, 220, 230) are assigned to the stationary base units (B1, B2, B3) and/or to the portable user interface units (H1, H2, H3). The stationary base unit (B1, B2, B3) and/or the portable user interface unit (H1, H2, H3) used by the user (2) to interact with the dialog system (1) is detected, and an adaptable algorithmic model (M, M1, M2, ..., Mn) for the interaction is allocated to the input/output modality (200, 210, 220, 230) according to the stationary base unit (B1, B2, B3) and/or the portable user interface unit (H1, H2, H3) used by the user (2) for interaction with the dialog system (1). Furthermore, the invention describes an appropriate user interface system (3) and a dialog system (1) comprising such a user interface system (3).

Description

Method of driving an interactive system, and a user interface system
The invention relates to a method of driving a dialog system comprising a number of stationary base units and a number of portable user interface units. Moreover, the invention relates to an appropriate user interface system and to a dialog system comprising such a user interface system. Recent developments in the area of man-machine interfaces have led to widespread use of technical devices which are operated through a dialog or interaction between a device and the user of the device. First developments in such "interactive systems" or "dialog systems" were based on the display of visual information and on manual interaction on the part of the user. For instance, almost every mobile telephone is operated by means of an operating dialog based on showing options in a display of the mobile telephone, and the user's pressing the appropriate button to choose a particular option. Advanced user interface systems no longer rely on display, keyboard, mouse, or remote control, but implement more intuitive input/output modalities like speech, gestural input, etc. With such interactive systems, it is possible to use hands-free and even eyes-free interaction, for example in the home, in the car, or in a work environment. Such a dialog system can offer a user an intuitive means of interacting with a variety of different applications, such as mailbox applications, home entertainment applications, etc. Due to progress in technological trends towards networked environments and miniaturization, dialog systems with various input/output modalities are set to become a part of everyday life. An interactive device for use in such a dialog system can be made to resemble, for example, a human, animal, robot, or abstract figure. A typical example of such a dialog system is shown in DE 102 49 060 A1.
An input/output modality, as referred to in the following, can be any hardware and, if required, software which allows a user to communicate with a dialog system by means of the input sensors featured by that system, such as microphone or camera, and/or output devices such as loudspeaker or display for command and information input and output. An input/output modality can be, for example, speech recognition, face recognition, gesture recognition, etc. Generally, such an input/output modality uses an algorithm, usually comprising one or more computer software modules, to interpret input information and apply this to an algorithmic model in order to identify, for example, words that have been detected by a microphone or a face in an image generated by a camera. Such an algorithmic model can be adaptable, for example in a training procedure, to suit the environment in which it is being used, or to suit the characteristics of the input sensors which deliver input information to the adaptable algorithmic model. The algorithmic model thus adapted can be stored in a memory and retrieved whenever required.
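The idea of an algorithm interpreting input information by applying it to an algorithmic model can be illustrated by a deliberately minimal sketch. This is not part of the disclosure: the "model" here is just a hypothetical lookup table from audio tokens to words, standing in for a trained recognition model.

```python
def interpret(input_tokens, model):
    """Apply raw input tokens to an algorithmic model; tokens the model
    cannot account for are rendered as '<unk>'."""
    return [model.get(tok, "<unk>") for tok in input_tokens]


# Invented toy "speech model": maps acoustic tokens to recognised words.
toy_model = {"a1": "hello", "a2": "world"}
```

An adapted model for a different microphone or room would simply be a different table handed to the same `interpret` routine, which is the separation the description relies on.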
However, the success of interaction between a user and such a dialog system depends to a large extent on the robustness of the input/output modalities and/or the algorithmic models utilised, and how well they deal with the prevalent environmental conditions, and with the technical characteristics of the hardware involved. An input/output modality of a state-of-the-art dialog system using a specific model which operates well in one environment may perform unsatisfactorily, or even fail, when placed in another environment. Such a dialog system is therefore essentially limited to use within a single environment. It would be desirable that such dialog systems could be used in a more portable manner, for example by allowing the user to take with him that part of the dialog system, typically that part which features input/output devices such as camera, loudspeaker, microphone etc., i.e. the user interface. Such a portable user interface unit can be, for example, a detachable "head" of an interactive device for attaching to a stationary base unit. The portable user interface unit comprises at least one input sensor such as a microphone or camera, and at least one output element such as a loudspeaker or a display. The user can communicate with the dialog system by means of the input sensors of the portable user interface unit and can receive feedback from the dialog system by means of the output elements of the portable user interface unit. The stationary base unit is a unit which might generally be restricted to use within a particular environment, by being, for example, fixedly located in that environment. 
Because of the hardware required for performing operations such as speech recognition, face recognition, etc., these functions might often be implemented in the stationary base unit, or in a central unit connected to the different stationary base units of the dialog system, allowing the portable head unit to be simply detached and easily moved from one environment to the next. The input/output modalities of a dialog system, as explained above, might be, for example, speech recognition, speech synthesis, face recognition, gesture recognition etc., with which the dialog system can interpret the user's input. However, the quality of operation of such an input/output modality can be considerably influenced by, for example, microphone quality and room reverberation in the case of speech recognition, and camera properties and lighting conditions in the case of face and gesture recognition. Therefore, the quality of operation depends on the environment in which a stationary base unit is located, as well as on the characteristics of the input sensors of the portable head unit mounted on that stationary base unit.
It is therefore an object of the invention to provide a method of driving such a flexible dialog system as mentioned above which is able to deal, in an economical and uncomplicated manner, with the different conditions prevalent in different operating environments. To this end, the present invention provides a method of driving a dialog system comprising a number of stationary base units and a number of portable user interface units, any of which can be connected in a wired or wireless manner to any of the stationary base units, wherein a user interacts with the dialog system through a portable user interface unit connected to a stationary base unit using an input/output modality, which input/output modality utilises an algorithmic model, and a number of adaptable algorithmic models utilised by the input/output modality are assigned to the stationary base units and/or to the portable user interface units. The stationary base unit and/or the portable user interface unit used by the user to interact with the dialog system is detected, and an adapted algorithmic model for the interaction is subsequently allocated to the input/output modality according to the stationary base unit and/or the portable user interface unit used by the user for interaction with the dialog system.
An obvious advantage of the method according to the invention is that the adaptable algorithmic model required by the input/output modality used by the portable user interface unit or stationary base unit is automatically implemented. Therefore, regardless of which portable user interface unit is used in connection with a stationary base unit in any environment, the method according to the invention ensures that the input/output modalities utilised in the interaction can avail of the corresponding algorithmic models, optimally adapted to the current constellation, thereby allowing the dialog system to operate with a high degree of robustness in the various environments. It is not necessary for a model to re-adapt to a new environment or input sensor each time the interactive system is used in a new constellation; instead, a model already adapted to a current constellation can be continually refined to suit this constellation.
A corresponding user interface system for a dialog system with a number of stationary base units and portable user interface units comprises an input/output modality using an adaptable model enabling communication between a user and the dialog system by a portable user interface unit connected to a stationary base unit. A detection unit detects which stationary base unit and/or portable user interface unit is currently being utilised by the user to interact with the dialog system, and a memory means stores a number of adapted algorithmic models for the input/output modality, which adapted algorithmic models are each assigned to the stationary base units and/or to the portable user interface units. An allocation unit allocates an adapted algorithmic model to the input/output modality according to the stationary base unit and/or the portable user interface unit utilised by the user in an interaction with the dialog system.
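The interplay of the detection unit, the memory means and the allocation unit can be sketched as follows. This is a purely illustrative reading of the system, not an implementation from the disclosure; all class names, unit identifiers and model names are invented.

```python
class DetectionUnit:
    """Detects which stationary base unit a portable user interface
    unit is currently connected to."""

    def __init__(self, connections):
        # connections maps interface-unit id -> base-unit id, e.g. {"H2": "B1"}
        self._connections = connections

    def detect(self, interface_id):
        return self._connections.get(interface_id)


class AllocationUnit:
    """Retrieves the adapted model stored for a detected unit and hands
    it to the input/output modality."""

    def __init__(self, model_store):
        # model_store maps a unit id to its adapted algorithmic model
        self._store = model_store

    def allocate(self, unit_id, default_model="M-default"):
        # fall back to a default model when no adapted one is stored
        return self._store.get(unit_id, default_model)
```

For example, with `DetectionUnit({"H2": "B1"})` and `AllocationUnit({"B1": "M1"})`, detecting interface unit H2 yields base unit B1, and allocation yields the model M1 adapted to that constellation.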
The dependent claims and the subsequent description disclose particularly advantageous embodiments and features of the invention. With regard to the stationary base units, the adaptation of the model is carried out to suit the environment. With regard to the portable user interface units, adaptation is carried out to suit characteristics of the input sensors. Should all the stationary base units be used in the same environment, the models only depend on the input sensors of the various portable user interface units. On the other hand, if only one type of portable user interface unit is being used in the dialog system, each one being equipped with the same input sensors, then each portable user interface unit can simply avail of the same models, which then only depend on the different stationary base units. In order to use different portable user interface units with stationary base units in different environments, the adapted algorithmic models, in a preferred embodiment of the invention, are assigned to different interface/base-combinations, where an interface/base-combination comprises a specific portable user interface unit attached to a specific stationary base unit, and an adapted algorithmic model is allocated to the input/output modality according to the specific interface/base-combination used by the user to interact with the dialog system. Which portable user interface unit is connected to which stationary base unit is determined by the detection unit of the user interface system. This can be determined automatically on connection, or some time later. Thus, no matter which stationary base unit the user avails of for a particular portable user interface unit, the corresponding models will be assigned to the input/output modalities available for that portable user interface unit.
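In the interface/base-combination embodiment, the natural data structure is a table keyed by the pair (portable user interface unit, stationary base unit). The sketch below is hypothetical, with invented unit and model identifiers:

```python
# Adapted models keyed by (interface unit, base unit) pairs, so each
# combination can carry a model tuned to both the unit's sensors and
# the base unit's environment. All names here are illustrative.
models_by_combination = {
    ("H2", "B1"): "M-speech-H2B1",  # H2's microphones in B1's room
    ("H1", "B3"): "M-speech-H1B3",
    ("H3", "B2"): "M-speech-H3B2",
}


def model_for(interface_id, base_id):
    """Return the model adapted to this interface/base-combination,
    or None when no adapted model has been stored yet."""
    return models_by_combination.get((interface_id, base_id))
```

A lookup miss (e.g. H2 newly docked on B2) is exactly the situation in which, as described below in the text, a default model would be assigned and adapted.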
For example, an adapted algorithmic model which takes into consideration the microphone of a certain portable user interface unit and for the environment of the stationary base unit to which that portable user interface unit is attached can be allocated to that interface/base combination.
To this end, the adaptable algorithmic models might be stored locally, i.e. in a stationary base unit or in a portable user interface unit. This might be advantageous when a model is only dependent on a stationary base unit or a portable head unit.
Alternatively, it may be advantageous to store the adaptable algorithmic models centrally, particularly if the input/output modalities are also handled centrally. In one preferred embodiment of the invention, therefore, the adapted algorithmic models are stored in a central database and retrieved and allocated to the input/output modality by a central model managing unit, or model manager, which can be realised, for example, as part of a dialog manager.
If the input/output modalities are located in the stationary base unit or portable user interface unit, retrieval of the models can be effected by data transfer from the central database to the stationary base unit, for example by a cable connection such as a USB (universal serial bus) connection, or a WLAN (wireless local area network) connection.
A stationary base unit can also be realised in a very basic way, for example, with only a requisite connection to a power supply such as the mains power supply or a battery power source, if the input/output modalities are located either in a portable user interface unit or in a central unit. In this case, an adapted algorithmic model assigned to a specific portable user interface unit or to an interface/base-combination comprising this portable user interface unit should preferably also be stored in a memory of the portable user interface unit or in the central unit as mentioned above.
It may be that a portable user interface unit or stationary base unit is used for a first time in the dialog system, or that conditions in the environment of a stationary base unit have changed, or that a portable user interface unit has been equipped with new input sensor hardware. In such cases, an adapted algorithmic model for the relevant portable user interface unit or stationary base unit may be outdated or unavailable. Therefore, in a preferred embodiment of the invention, if there is no adapted algorithmic model available for a certain input/output modality of a portable user interface unit, a default algorithmic model for that input/output modality is assigned to the stationary base unit and/or to the portable user interface unit. This default model is then adapted to the particular environment, for example in a training process, and stored for further interactive sessions. The model training can occur in the background, without actively involving the user, or the user might be required to perform specific actions, such as saying certain words or phrases, or standing at certain positions in the room, in order for the training to result in a robust adapted model for that environment. Each portable user interface unit, stationary base unit, or interface/base combination can be associated with a number of input/output modalities, and therefore also a number of adapted algorithmic models. For example, a portable user interface unit, a stationary base unit, or a single interface/base combination might have an adapted algorithmic model for face recognition, another for speech recognition, etc. 
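The default-model fallback and subsequent adaptation described above can be sketched as follows. The "training" step is represented by a trivial placeholder; the key, factory and model structure are invented for illustration:

```python
def allocate_model(store, key, default_factory):
    """Return the adapted model stored for `key`; when none is
    available yet, install a fresh default model for later adaptation."""
    if key not in store:
        store[key] = default_factory()  # default model, not yet adapted
    return store[key]


def adapt(model, environment_samples):
    """Placeholder for the background or user-assisted training
    procedure: here we only record how much data the model has seen."""
    model["adapted_on"] += len(environment_samples)
    return model


store = {}
model = allocate_model(
    store, ("H3", "B2"),
    lambda: {"name": "M-default", "adapted_on": 0},
)
adapt(model, ["utterance-1", "utterance-2"])
```

Because the adapted model is kept in the store under its combination key, the next session with the same constellation retrieves the already-adapted model rather than starting again from the default.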
To facilitate rapid retrieval of the relevant adapted algorithmic models whenever a portable user interface unit is attached to a stationary base unit, these adapted algorithmic models are, in a further preferred embodiment of the invention, preferably grouped together in a suitable profile associated with the corresponding portable user interface unit, stationary base unit or interface/base-combination.
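Such a profile might be represented, in a hypothetical sketch, as one record grouping all adapted models for a combination, so a single retrieval suffices when the unit is docked. Keys and model names are invented:

```python
# One profile per interface/base-combination, bundling the adapted
# models for every input/output modality of that combination.
profiles = {
    ("H2", "B1"): {
        "speech_recognition": "M-asr-H2B1",
        "face_recognition": "M-face-H2B1",
    },
}


def load_profile(interface_id, base_id):
    """Fetch all adapted models for this combination in one step;
    an empty profile signals that defaults must be assigned."""
    return profiles.get((interface_id, base_id), {})
```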
A dialog system according to the invention can comprise any number of stationary base units, any number of portable user interface units, and a user interface system as described above. The stationary base units can be distributed in various different kinds of environments, and any of the portable user interface units can be attached to any of the stationary base units, as desired. The elements of the user interface system can be divided among the stationary base units, portable user interface units, and, if required, an external model manager, as appropriate.
Other objects and features of the present invention will become apparent from the following detailed description considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for the purposes of illustration and not as a definition of the limits of the invention.
Fig. 1 is a schematic representation of a dialog system comprising a number of portable user interface units and stationary base units according to an embodiment of the invention;
Fig. 2 shows a block diagram of a user interface system pursuant to a first embodiment of the invention;
Fig. 3 shows a block diagram of a user interface system pursuant to a second embodiment of the invention;
Fig. 4 shows a block diagram of a user interface system pursuant to a third embodiment of the invention.
In the diagrams, like numbers refer to like objects throughout.
In Fig. 1, a number of portable user interface units H1, H2, H3 of a dialog system are shown connected to a number of stationary base units B1, B2, B3, where each stationary base unit B1, B2, B3 is located in a separate environment, as indicated by the dashed lines. For the sake of simplicity, only three combinations of interface/base units are shown, although any number of such combinations is possible.
A first interface/base combination, consisting of the portable user interface unit H2 and the stationary base unit B1, and therefore called "H2B1" in the following, is shown on the left. The stationary base unit B1 is assigned to a first environment, and is installed, perhaps permanently, in that environment. A user of the dialog system has placed the portable user interface unit H2 on that stationary base unit B1, at least for the time being. Two other interface/base combinations "H1B3" and "H3B2" are shown, where the combination "H1B3" consists of the portable user interface unit H1 attached to the stationary base unit B3, and the combination "H3B2" consists of the portable user interface unit H3 attached to the stationary base unit B2.
Each interface/base combination of portable user interface units H1, H2, H3 and stationary base units B1, B2, B3 can avail of different input/output modalities and different hardware elements. Here, for example, the portable user interface unit H2 features a camera 10, a display 11, a pair of microphones 12, and a loudspeaker 13. The stationary base unit B1 to which the portable user interface unit is attached features a number of communication interfaces, here a USB interface 14 and a WLAN interface 15. Using these interfaces, the stationary base unit can communicate with a remote server, not shown in the diagram. The other portable user interface units H1, H3 and stationary base units B2, B3 can avail of the same or similar input/output modalities and communication interfaces for communication with a remote server, as indicated in the diagram. Any of the portable user interface units H1, H2, H3 can be used in conjunction with any of the stationary base units B1, B2, B3. For example, portable user interface unit H2 might be removed from the stationary base unit B1 to which it is connected, and mounted instead onto either of the stationary base units B2, B3, or onto any other stationary base unit not shown in the diagram. The information exchange between a portable user interface unit and a stationary base unit will be explained in detail with the aid of Fig. 2. Here, a user interface system 3 for a dialog system 1 is shown in relation to a user 2 and a number of applications A1, A2, ..., An, such as a mailbox application, home entertainment application, intelligent home management system, etc. For the sake of clarity, the portable user interface unit H2 and stationary base unit B1 are shown in an abstract representation by means of the dashed lines.
The user interface system 3 comprises a number of input/output modalities, which are incorporated in the portable user interface unit H2. Here, for example, a speech-based input/output modality 200, in the form of a speech recognition arrangement 200, uses, on the input side, a microphone 20 for detecting speech input of the user 2. The speech recognition arrangement 200 can comprise the usual speech recognition module and a following language understanding module, so that speech utterances of the user 2 can be converted into digital form. On the output side, a speech-based input/output modality features a speech synthesis arrangement 210, which can comprise, for example, a language generation unit and a speech synthesis unit. The synthesised speech is then output to the user 2 by means of a loudspeaker 21. A visual input/output modality 230 uses a camera 23 on the input side, and comprises an image analysis unit, here a face recognition unit 230, for processing the images generated by the camera 23. On the output side, an input/output modality comprises a display driver 220 for rendering visual output signals into a form suitable for displaying on a screen or display 22. The operation of the input/output modalities 200, 210, 220, 230, as described above, depends on the models used, and therefore on the interface/base combination, particularly in the case of the speech recognition arrangement 200 and the face recognition unit 230. A detection unit 4 determines which stationary base unit the portable user interface unit has been connected to. Having established that the stationary base unit in this case is the stationary base unit B1, the detection unit 4 informs an allocation unit 6, which can then retrieve the necessary adapted algorithmic models M1, M2 from a memory 5 and allocate them to the appropriate input/output modalities 200, 210, 220, 230.
Here, the adaptable algorithmic model M1 is a model for the user's speech adapted to the environment in which the stationary base unit B1 is located and to the microphone of the portable user interface unit H2, so that the speech-based input/output modality 200 can successfully interpret utterances spoken by the user 2 in that environment. The adaptable algorithmic model M2 is a model for the user's appearance and the properties of the visual sensor 23 of the portable user interface unit H2, so that the user 2 can be successfully recognised by the visual input/output modality 230 in the conditions prevalent in that environment. In this realisation, the models M1, M2 are stored in the stationary base unit, and differ only in the characteristics of the various input sensors of the portable user interface unit currently mounted onto the stationary base unit.
A dialog manager 7 manages the interaction between the user 2 and the applications A1, A2,..., An with which the user 2 can communicate in the dialog system 1. Such a dialog manager 7 analyses user input and issues appropriate instructions to the corresponding application, and deals with feedback or requests from the applications A1, A2, ..., An. All of the components of the input/output modalities mentioned here, such as speech recognition 200, speech synthesis 210, face recognition 230 and visual output 220, and the components of the dialog manager 7 and the required interfaces (not shown in the diagram) between the dialog manager 7 and the individual applications A1, A2, ..., An, are known to a person skilled in the art and will not therefore be described in more detail. The detection module 4, allocation module 6 and dialog manager 7 could either be part of the portable user interface unit or of the stationary base unit.
Fig. 3 shows a different realisation of the user interface system 3. Here, the user 2 has mounted the portable user interface unit H1 on a stationary base unit B3 which does not avail of any storage capacity. The detection unit 4 determines the stationary base unit to which the portable user interface unit H1 is attached, and notes that this stationary base unit does not store any adaptable algorithmic models.
The portable user interface unit H1 in this case is equipped with a memory 5' from which the allocation unit 6 can retrieve the adaptable algorithmic models M1, M2 required for the input/output modalities, which are shown to be the same as those for the portable user interface unit H2 described above, but which need not necessarily be so.
Once the adaptable algorithmic models M1, M2 have been allocated to the corresponding input/output modalities, interaction between the user 2 and the applications A1, A2, ..., An can take place as described above.
In this realisation, the models M1, M2 are stored in the portable user interface unit H1, and differ only in the characteristics of the environment of the stationary base unit currently connected to the portable user interface unit. Here, the detection module 4, allocation module 6 and dialog manager 7 could preferably also be part of the portable user interface unit H1, so that the stationary base unit B3 need only be a sort of base with power supply and a connector for receiving the portable user interface unit H1 and connecting to an external central unit.
A further realisation of the user interface system 3 is shown in Fig. 4. Here, the adaptable algorithmic models M, M1, M2, ..., Mn for the different environments of the stationary base units of the user interface system 3 are gathered in a profile manager 8 of a central unit 9, which also comprises the detection module 4, the allocation module 6 and the dialog manager 7 as well as the various input/output modalities 200, 210, 220, 230. The stationary base units and the portable user interface units of the user interface system 3 in this case do not necessarily need to be equipped with storage capabilities for storing adaptable algorithmic models, or it may be that some stationary base units and/or portable user interface units have such storage capabilities, while others do not, so that the profile manager 8 manages the adaptable algorithmic models for those units not availing of storage capabilities.
Once a user 2 places a portable user interface unit, in this case portable user interface unit H3, onto a stationary base unit, in this case stationary base unit B2, the detection unit 4 determines which interface/base combination is being used, and informs the allocation unit 6. The allocation unit 6, in turn, issues appropriate commands to the profile manager 8 in order to retrieve the required models and allocate them to the corresponding input/output modalities 200, 210, 220, 230. In this example, it is assumed that the stationary base unit B2 to which the portable user interface unit H3 is attached does not yet avail of an adaptable algorithmic model for one of the input/output modalities of the portable user interface unit H3. It may be that the stationary base unit B2 is new, or has been relocated to a new environment, or that conditions in its environment have changed, so that a new adaptable algorithmic model is required for speech recognition and/or face recognition. To this end, a default algorithmic model M is retrieved from the profile manager 8 and allocated to the appropriate input/output modality. Thereafter, this default algorithmic model M can be trained in this environment for this stationary base unit B2, and then stored again in the profile manager 8, so that the next time this portable user interface unit H3 is attached to this particular stationary base unit B2, the adapted algorithmic model for the input/output modality is available.
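The round trip described above — retrieve a default, adapt it, store it back so the next docking finds the adapted model — can be sketched with a hypothetical profile manager. The class, its methods and all identifiers are invented for illustration:

```python
class ProfileManager:
    """Central store of adapted models, keyed by interface/base-combination.
    Unknown combinations receive a copy of the default model."""

    def __init__(self, default_model):
        self._default = default_model
        self._profiles = {}

    def retrieve(self, combination):
        # hand out a copy of the default so adaptation does not mutate it
        return self._profiles.get(combination, dict(self._default))

    def store(self, combination, model):
        self._profiles[combination] = model


pm = ProfileManager({"name": "M-default", "trained": False})
m = pm.retrieve(("H3", "B2"))  # first docking of H3 on B2: default model
m["trained"] = True            # stands in for the training procedure
pm.store(("H3", "B2"), m)      # persisted for the next session
```

After the store-back, `pm.retrieve(("H3", "B2"))` yields the adapted model, while a previously unseen combination still receives a fresh default.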
Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention.
For the sake of clarity, it is to be understood that the use of "a" or "an" throughout this application does not exclude a plurality, and "comprising" does not exclude other steps or elements. A "unit" or "module" can comprise a number of units or modules, unless otherwise stated.

Claims

CLAIMS:
1. A method of driving a dialog system (1) comprising a number of stationary base units (B1, B2, B3) and a number of portable user interface units (H1, H2, H3), any of which can be connected to any of the stationary base units (B1, B2, B3), wherein a user (2) interacts with the dialog system (1) through a portable user interface unit (H1, H2, H3) connected to a stationary base unit (B1, B2, B3) using an input/output modality (200, 210, 220, 230), which input/output modality (200, 210, 220, 230) utilises an algorithmic model; a number of adaptable algorithmic models (M, M1, M2, ..., Mn) utilised by the input/output modality (200, 210, 220, 230) are assigned to the stationary base units (B1, B2, B3) and/or to the portable user interface units (H1, H2, H3); the stationary base unit (B1, B2, B3) and/or the portable user interface unit (H1, H2, H3) used by the user (2) to interact with the dialog system (1) is detected, and an adaptable algorithmic model (M, M1, M2, ..., Mn) for the interaction is allocated to the input/output modality (200, 210, 220, 230) according to the stationary base unit (B1, B2, B3) and/or the portable user interface unit (H1, H2, H3) used by the user (2) for interaction with the dialog system (1).
2. A method according to claim 1, wherein adapted algorithmic models (M, M1, M2, ..., Mn) are assigned to different interface/base-combinations, each defined by a portable user interface unit (H1, H2, H3) and a stationary base unit (B1, B2, B3), and an adapted algorithmic model (M, M1, M2, ..., Mn) is allocated to the input/output modality (200, 210, 220, 230) according to the specific interface/base-combination used by the user (2) to interact with the dialog system (1).
3. A method according to claim 1 or 2, wherein the adapted algorithmic models (M, M1, M2, ..., Mn) are stored in a central database (5") and retrieved and allocated to the input/output modality (200, 210, 220, 230) by a central model managing unit (8).
4. A method according to any of claims 1 to 3, wherein an adapted algorithmic model (M1, M2, ..., Mn) assigned to a specific stationary base unit (B1, B2, B3) or to an interface/base-combination comprising this stationary base unit (B1, B2, B3) is stored in a memory (5) of this stationary base unit (B1, B2, B3).
5. A method according to any of claims 1 to 4, wherein an adapted algorithmic model (M1, M2, ..., Mn) assigned to a specific portable user interface unit (H1, H2, H3) or to an interface/base-combination comprising this portable user interface unit (H1, H2, H3) is stored in a memory (5') of this portable user interface unit (H1, H2, H3).
6. A method according to any of claims 1 to 5, wherein, if no adapted algorithmic model can be retrieved, a default algorithmic model (M) for the input/output modality (200, 210, 220, 230) is assigned to the stationary base unit (B1, B2, B3) and/or to the portable user interface unit (H1, H2, H3).
7. A method according to any of claims 1 to 6, wherein the dialog system (1) comprises a number of input/output modalities (200, 210, 220, 230), and the adapted algorithmic models (M1, M2, ..., Mn) for each portable user interface unit (H1, H2, H3), stationary base unit (B1, B2, B3), or interface/base-combination utilised for the different input/output modalities (200, 210, 220, 230) are grouped in a profile for the corresponding portable user interface unit (H1, H2, H3), stationary base unit (B1, B2, B3), or interface/base-combination.
8. A user interface system (3) for a dialog system (1), which dialog system (1) comprises a number of stationary base units (B1, B2, B3) and a number of portable user interface units (H1, H2, H3), any of which can be connected to any of the stationary base units (B1, B2, B3), comprising an input/output modality (200, 210, 220, 230) using an algorithmic model enabling communication between a user (2) and the dialog system (1) by a portable user interface unit (H1, H2, H3) connected to a stationary base unit (B1, B2, B3); a detection unit (4) for detecting the stationary base unit (B1, B2, B3) and/or the portable user interface unit (H1, H2, H3) utilised by the user (2) to interact with the dialog system (1); a memory means (5, 5') for storing a number of adaptable algorithmic models (M, M1, M2, ..., Mn) for the input/output modality (200, 210, 220, 230), which adaptable algorithmic models (M, M1, M2, ..., Mn) are each assigned to the stationary base units (B1, B2, B3) and/or to the portable user interface units (H1, H2, H3); and an allocation unit (6) for allocating an adaptable algorithmic model (M, M1, M2, ..., Mn) to the input/output modality (200, 210, 220, 230) according to the stationary base unit (B1, B2, B3) and/or the portable user interface unit (H1, H2, H3) utilised by the user (2) in an interaction with the dialog system (1).
9. A dialog system (1), comprising - a number of stationary base units (B1, B2, B3) and a number of portable user interface units (H1, H2, H3), any of which can be connected to any of the stationary base units (B1, B2, B3), and a user interface system (3) according to claim 8.
10. A computer program product directly loadable into the memory of a programmable dialog system comprising software code portions for performing the steps of a method according to claims 1 to 7, when said program is run on the dialog system.
PCT/IB2006/054356 2005-11-30 2006-11-21 Method of driving an interactive system, and a user interface system WO2007063447A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05111480 2005-11-30
EP05111480.9 2005-11-30

Publications (2)

Publication Number Publication Date
WO2007063447A2 true WO2007063447A2 (en) 2007-06-07
WO2007063447A3 WO2007063447A3 (en) 2008-02-14

Family

ID=38092644

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/054356 WO2007063447A2 (en) 2005-11-30 2006-11-21 Method of driving an interactive system, and a user interface system

Country Status (2)

Country Link
TW (1) TW200802035A (en)
WO (1) WO2007063447A2 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003096171A1 (en) * 2002-05-14 2003-11-20 Philips Intellectual Property & Standards Gmbh Dialog control for an electric apparatus
US20040235463A1 (en) * 2003-05-19 2004-11-25 France Telecom Wireless system having a dynamically configured multimodal user interface based on user preferences


Also Published As

Publication number Publication date
TW200802035A (en) 2008-01-01
WO2007063447A3 (en) 2008-02-14

Similar Documents

Publication Publication Date Title
EP2778865B1 (en) Input control method and electronic device supporting the same
US6988070B2 (en) Voice control system for operating home electrical appliances
US11615792B2 (en) Artificial intelligence-based appliance control apparatus and appliance controlling system including the same
CN104049732B (en) Multiinput control method and system and the electronic device for supporting this method and system
US6052666A (en) Vocal identification of devices in a home environment
US9396728B2 (en) Devices and systems for remote control
CN108605001A (en) Voice control lamp switch
US20180092189A1 (en) Lighting wall control with virtual assistant
JP2022500682A (en) Efficient, low-latency automatic assistant control for smart devices
CN106023995A (en) Voice recognition method and wearable voice control device using the method
CN109062468B (en) Split screen display method and device, storage medium and electronic equipment
CN105609122A (en) Control method and device of terminal device
CN109754795A (en) It is acted on behalf of close to perceptual speech
US11250850B2 (en) Electronic apparatus and control method thereof
CN110737335A (en) Interaction method and device of robot, electronic equipment and storage medium
US20100223548A1 (en) Method for introducing interaction pattern and application functionalities
CN109756825A (en) Classify the position of intelligent personal assistants
CN102033578A (en) Integrated machine system
EP3654170B1 (en) Electronic apparatus and wifi connecting method thereof
WO2007063447A2 (en) Method of driving an interactive system, and a user interface system
KR20170133989A (en) Electronic board and electronic board system having voice recognition function, method for converting mode of electronic board using the same
CN107391015A (en) Control method, device and equipment of intelligent tablet and storage medium
US20070078563A1 (en) Interactive system and method for controlling an interactive system
KR20200137403A (en) Electronic blackboard and electronic blackboard system with voice recognition function
CN114005431A (en) Configuration method, device and equipment of voice system and readable storage medium

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 06821515

Country of ref document: EP

Kind code of ref document: A2

122 EP: PCT application non-entry into the European phase

Ref document number: 06821515

Country of ref document: EP

Kind code of ref document: A2