
TWI796883B - Method, electronic device, and user interface for user interface control - Google Patents

Method, electronic device, and user interface for user interface control

Info

Publication number
TWI796883B
TWI796883B TW110147808A
Authority
TW
Taiwan
Prior art keywords
user interface
functional area
information
gesture
control method
Prior art date
Application number
TW110147808A
Other languages
Chinese (zh)
Other versions
TW202326387A (en)
Inventor
羊振國
Original Assignee
荷蘭商荷蘭移動驅動器公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荷蘭商荷蘭移動驅動器公司
Priority to TW110147808A
Application granted
Publication of TWI796883B
Publication of TW202326387A

Landscapes

  • Input Circuits Of Receivers And Coupling Of Receivers And Audio Equipment (AREA)
  • Communication Control (AREA)
  • Electrophonic Musical Instruments (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application discloses a user interface control method, an electronic device, and a user interface. The user interface comprises at least one first functional area, and the method comprises the following steps: acquiring eye movement information of a user; selecting a corresponding first functional area according to the eye movement information, the first functional area comprising at least one second functional area; collecting sensory input information of the user; selecting the corresponding second functional area according to the sensory input information of the user; and executing the executable function corresponding to the second functional area.

Description

User interface control method, electronic device, and user interface

The present application relates to the technical field of user interfaces, and in particular to a user interface control method, an electronic device, and a user interface.

With economic development and technological progress, automobiles have become an increasingly common means of transportation in daily life. Whether a driver stays attentive while driving has a major impact on driving safety, so driver distraction should be avoided as far as possible. A driver may receive incoming calls or voice messages while driving and may need to answer or reply to them. At present, the process of answering calls or replying to messages while driving is relatively complicated; it distracts the driver and prevents timely awareness of road conditions, which may lead to accidents.

In view of this, the present application provides a user interface control method, an electronic device, and a user interface. Selecting the corresponding functional area according to eye movement information, and then selecting an executable function within that functional area, can improve the controllability of the user interface.

In the user interface control method of the present application, the user interface includes at least one first functional area, and the method includes: acquiring eye movement information of a user; selecting the corresponding first functional area according to the eye movement information, the first functional area including at least one second functional area; collecting sensory input information of the user; and selecting the corresponding second functional area according to the sensory input information of the user and executing the executable function corresponding to the second functional area.
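
The two-level structure described above lends itself to a small data model. The following Python sketch is illustrative only; the class and attribute names (FirstFunctionalArea, SecondFunctionalArea, executable, children) and the example functions are assumptions, not terms used in the patent.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SecondFunctionalArea:
    """A selectable sub-area; each one maps to at least one executable function."""
    number: int                      # digit shown to the user for gesture/voice selection
    name: str                        # e.g. "seat heating"
    executable: Callable[[], None]   # action carried out when the area is chosen

@dataclass
class FirstFunctionalArea:
    """A gaze-selectable region of the user interface."""
    name: str
    bounds: tuple                    # (x, y, width, height) on the display
    children: List[SecondFunctionalArea] = field(default_factory=list)

# Minimal example mirroring FIG. 3/FIG. 4: one first functional area with numbered children.
climate = FirstFunctionalArea(
    name="climate",
    bounds=(0, 0, 400, 300),
    children=[
        SecondFunctionalArea(1, "fan speed", lambda: print("fan speed")),
        SecondFunctionalArea(2, "seat heating", lambda: print("seat heating on")),
    ],
)
```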

The electronic device of the present application includes an eye tracker, a gesture recognition module, a speech recognition module, a display screen, and a processor. The processor is configured to execute the user interface control method described above; the eye tracker is configured to acquire eye movement information; the gesture recognition module is configured to recognize gesture information; and the speech recognition module is configured to recognize voice information.

The user interface of the present application is applied to a display screen that is communicatively connected to an eye tracking module, and the user interface includes the user interface displayed when the electronic device described above executes the user interface control method.

100: Electronic device

110: Display screen

111: User interface

120: Eye tracker

130: Gesture recognition module

140: Speech recognition module

150: Processor

200: Car

210: Vehicle electronic control device

220: Controller area network bus

230: In-vehicle system

1111-1114: First functional areas

11121-11126: Second functional areas

S100-S331: Steps

FIG. 1 is a schematic diagram of the modules of an electronic device provided by an embodiment of the present application.

FIG. 2 is a schematic diagram of the modules of a car provided by an embodiment of the present application.

FIG. 3 is a schematic diagram of a user interface provided by an embodiment of the present application.

FIG. 4 is a schematic diagram of a user interface provided by another embodiment of the present application.

FIG. 5 is a schematic diagram of gesture recognition provided by an embodiment of the present application.

FIG. 6 is a schematic flowchart of a user interface control method provided by an embodiment of the present application.

FIG. 7 is a schematic flowchart of a user interface control method provided by another embodiment of the present application.

FIG. 8 is a schematic flowchart of a user interface control method provided by another embodiment of the present application.

In order to more clearly understand the above objectives, features, and advantages of the present application, the present application is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. Many specific details are set forth in the following description to facilitate a full understanding of the present application; the described embodiments are only some, rather than all, of the embodiments of the present application.

It should be noted that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that in the flowcharts. The methods disclosed in the embodiments of the present application include one or more steps or actions for implementing the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.

Some embodiments of the present application are described in detail below with reference to the accompanying drawings. Where there is no conflict, the following embodiments and the features in the embodiments may be combined with one another.

Among the various in-vehicle devices of a car, the central control screen occupies an important position, and the owner can perform a wide range of operations on the car through it. Existing central control screens are mainly divided into two categories, button operation and screen operation, but regardless of the operation method, central control screens are developing toward larger sizes, integrating and displaying more vehicle status and multimedia information, so their importance during driving keeps increasing. As the central control screen grows larger, operating it while the car is moving poses a safety hazard.

At present, most in-vehicle systems integrate eye tracking technology to operate the central control screen. However, because the functions of the central control screen are becoming more and more complex, it is difficult to perform precise control using eye tracking technology alone.

The user interface control method, user interface, and device provided by the embodiments of the present application can select the corresponding functional area according to eye movement information and then select an executable function within that functional area through voice recognition or gesture recognition, which can improve the controllability of the user interface.

Some embodiments of the present application are described in detail below with reference to the accompanying drawings. Where there is no conflict, the following embodiments and the features in the embodiments may be combined with one another.

FIG. 1 shows an electronic device 100 provided by an embodiment of the present application. As shown in FIG. 1, the electronic device 100 includes a display screen 110, an eye tracker 120, a gesture recognition module 130, a speech recognition module 140, and a processor 150. The display screen 110 is used to display a user interface 111.

It can be understood that the electronic device 100 implements the display function through a GPU, the display screen 110, and an application processor. The GPU is a microprocessor for image processing and connects the display screen 110 and the application processor. The GPU performs mathematical and geometric calculations for graphics rendering. The electronic device 100 may include one or more GPUs that execute program instructions to generate or change display information.

It can be understood that the display screen 110 is used to display images, videos, and the like. The display screen 110 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 110, where N is a positive integer greater than 1.

It can be understood that the display screen 110 is used to display a user interface (UI) 111. The user interface 111 is the medium for interaction and information exchange between an application or the operating system and the user; it converts the internal form of information into a form the user can accept. The user interface 111 of an application is source code written in a specific computer language such as Java or extensible markup language (XML); the interface source code is parsed and rendered on the terminal device and finally presented as content the user can recognize, such as pictures, text, buttons, and other controls. A control, also called a widget, is a basic element of the user interface 111; typical controls include toolbars, menu bars, text boxes, buttons, scrollbars, pictures, and text. The properties and content of the controls in the interface are defined by tags or nodes; for example, XML uses nodes such as <Textview>, <ImgView>, and <VideoView> to specify the controls contained in the interface. A node corresponds to a control or property in the interface, and after being parsed and rendered the node is presented as content visible to the user. In addition, the interfaces of many applications, such as hybrid applications, usually also contain web pages. A web page, also called a page, can be understood as a special control embedded in an application interface; it is source code written in a specific computer language, such as hypertext markup language (HTML), cascading style sheets (CSS), or JavaScript (JS), and the web page source code can be loaded and displayed as user-recognizable content by a browser or by a web page display component with browser-like functionality. The specific content contained in a web page is likewise defined by tags or nodes in the web page source code; for example, HTML uses <p>, <img>, <video>, and <canvas> to define the elements and properties of the web page.

It can be understood that a commonly used form of the user interface 111 is a graphical user interface (GUI), which refers to a user interface 111 related to computer operation that is displayed graphically. It may be an interface element such as an icon, a window, or a control displayed on the display screen 110 of the electronic device 100, where controls may include visible interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.

It can be understood that the eye tracker 120 may include a camera and an infrared device (for example, an infrared emitter) to detect the user's eye movements, such as gaze direction, blinking operations, fixation operations, and so on, thereby implementing eye tracking; the present application is not limited in this respect.

It can be understood that the gesture recognition module 130 may detect the user's gesture through an optical sensor (for example, a camera, millimeter-wave radar, or lidar), compare the gesture model stored in the electronic device 100 with the user's gesture information to determine whether the two match, and, when they are determined to match, execute the corresponding control.
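
A minimal sketch of the matching step, assuming gestures have already been reduced to fixed-length feature vectors. The model names, feature values, and distance threshold are hypothetical; a real module would use whatever representation its optical sensor pipeline produces.

```python
import math

# Hypothetical stored gesture models: each gesture is reduced to a fixed-length
# feature vector (e.g. normalized joint angles). Names and values are illustrative.
GESTURE_MODELS = {
    "V": [0.9, 0.9, 0.1, 0.1, 0.2],
    "fist": [0.1, 0.1, 0.1, 0.1, 0.1],
}

def match_gesture(features, models=GESTURE_MODELS, threshold=0.25):
    """Return the name of the stored gesture model closest to the observed
    features, or None if nothing is within the matching threshold."""
    best_name, best_dist = None, float("inf")
    for name, model in models.items():
        dist = math.dist(features, model)   # Euclidean distance between vectors
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(match_gesture([0.88, 0.92, 0.12, 0.08, 0.18]))  # -> "V"
```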

It can be understood that the speech recognition module 140 may implement speech recognition through a microphone and a neural-network (NN) computing processor, which draws on the structure of biological neural networks, for example the transfer pattern between neurons in the human brain.

It can be understood that the processor 150 is used to identify sensory input information to generate a control signal. Exemplarily, the sensory input information includes any one or a combination of the following: gesture information and voice information. The gesture information includes motion trajectory information, spatial position information, gesture shape information, or pointing information, and combinations thereof.
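
As a rough illustration of how such sensory inputs could be represented and converted into a control signal, the sketch below defines hypothetical GestureInput, VoiceInput, and ControlSignal types; none of these names, fields, or mapping rules come from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple, Union

@dataclass
class GestureInput:
    trajectory: List[Tuple[float, float]]   # motion trajectory information
    position: Tuple[float, float, float]    # spatial position information
    shape: Optional[str] = None             # gesture shape information, e.g. "2" or "V"
    pointing: Optional[Tuple[float, float]] = None  # pointing information

@dataclass
class VoiceInput:
    text: str                               # transcript from the speech recognizer

@dataclass
class ControlSignal:
    action: str                             # e.g. "select", "execute", "cancel"
    target: Optional[int] = None            # number of the second functional area

def to_control_signal(sensory: Union[GestureInput, VoiceInput]) -> Optional[ControlSignal]:
    """Map a sensory input to a control signal (illustrative rules only)."""
    if isinstance(sensory, GestureInput) and sensory.shape and sensory.shape.isdigit():
        return ControlSignal(action="select", target=int(sensory.shape))
    if isinstance(sensory, VoiceInput):
        if "cancel" in sensory.text.lower():
            return ControlSignal(action="cancel")
        for token in sensory.text.split():
            if token.isdigit():
                return ControlSignal(action="select", target=int(token))
    return None
```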

Exemplarily, the processor 150 is configured to control the display screen 110 according to the eye movement information, gesture information, or voice information collected by the eye tracker 120, the gesture recognition module 130, or the speech recognition module 140, so as to adjust the content displayed in the user interface 111, and to control the in-vehicle unit of the car to execute the corresponding function.

It can be understood that the electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device, and/or a smart city device; some embodiments of the present application place no particular restriction on the specific type of the electronic device 100.

FIG. 2 shows a car 200 provided by an embodiment of the present application. As shown in FIG. 2, the car 200 includes a vehicle electronic control device 210, a controller area network bus (CAN bus) 220, and an in-vehicle system 230.

Exemplarily, the in-vehicle system 230 is provided with a central control unit, an in-vehicle camera, a memory, a display screen, a speaker, a microphone, a voice assistant module, and a communication module. The in-vehicle system 230 is connected to the electronic device 100 through the communication module.

Exemplarily, the controller area network bus 220 connects the in-vehicle system 230 and the vehicle electronic control device 210 to transfer commands and information.

Exemplarily, the vehicle electronic control device 210 integrates the power, steering, gear, suspension, brake, and door control systems.

FIG. 3 shows a user interface 111 provided by an embodiment of the present application. As shown in FIG. 3, the user interface 111 includes a plurality of first functional areas; for example, FIG. 3 shows four first functional areas 1111, 1112, 1113, and 1114.

It can be understood that when the user interface 111 is in the on state, the eye tracker 120 continuously detects the gaze range of the human eye. When the eye tracker 120 detects that the human eye is looking at the user interface 111, the processor 150 further determines whether the line of sight falls on the first functional area 1111, 1112, 1113, or 1114.
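
One plausible way to make that determination is a point-in-rectangle test over the screen bounds of the first functional areas. The sketch below is illustrative only, and the layout coordinates are invented.

```python
from typing import Dict, Optional, Tuple

Rect = Tuple[float, float, float, float]  # (x, y, width, height)

# Hypothetical screen layout of the first functional areas shown in FIG. 3.
FIRST_AREAS: Dict[str, Rect] = {
    "1111": (0, 0, 640, 360),
    "1112": (640, 0, 640, 360),
    "1113": (0, 360, 640, 360),
    "1114": (640, 360, 640, 360),
}

def hit_test(gaze: Tuple[float, float], areas: Dict[str, Rect] = FIRST_AREAS) -> Optional[str]:
    """Return the identifier of the first functional area containing the gaze
    point, or None if the line of sight falls outside every area."""
    gx, gy = gaze
    for area_id, (x, y, w, h) in areas.items():
        if x <= gx < x + w and y <= gy < y + h:
            return area_id
    return None

print(hit_test((700, 100)))  # -> "1112"
```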

In some embodiments, the processor 150 numbers the second functional areas within the first functional area, and each second functional area corresponds to at least one executable function.

Referring also to FIG. 4, the first functional area 1112 includes second functional areas 11121, 11122, 11123, 11124, 11125, and 11126.

Exemplarily, when the eye tracker 120 detects that the gaze range of the human eye falls on the first functional area 1112, the processor 150 highlights the first functional area 1112, numbers the functions available for execution in the first functional area 1112, and displays the numbers, so that the driver can use gestures or voice to control the in-vehicle unit to execute the related functions. For example, the executable functions in the first functional area 1112 are numbered as the second functional areas 11121, 11122, 11123, 11124, 11125, and 11126. Exemplarily, as shown in FIG. 4, the second functional areas 11121 to 11126 are numbered with the digits 1 to 6 respectively, and the user can select the second functional areas 11121 to 11126 through the digits 1 to 6. Subsequently, the sensory input information of the user is collected, the corresponding second functional area is selected according to the sensory input information of the user, and the executable function corresponding to the second functional area is executed. Exemplarily, the sensory input information may include gesture information and voice information. The processor 150 may obtain, through the gesture recognition module 130 or the speech recognition module 140, a digit from 1 to 6 expressed by the user, and then select the second functional area corresponding to that digit. The processor 150 controls the in-vehicle system 230 to perform the corresponding operation according to the executable function in the corresponding second functional area, or transmits a control command through the controller area network bus 220 to the vehicle electronic control device 210 to perform the corresponding operation.
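
The following sketch illustrates the numbering-and-dispatch idea in this paragraph: second functional areas are keyed by their displayed digit, and the recognized digit picks the action to run. Apart from the seat heating example, the specific functions and routing strings are assumptions.

```python
from typing import Callable, Dict

# Hypothetical executable functions inside the first functional area 1112;
# the numbering mirrors the digits 1-6 shown in FIG. 4.
SECOND_AREAS: Dict[int, Callable[[], str]] = {
    1: lambda: "in-vehicle system 230: adjust fan speed",
    2: lambda: "in-vehicle system 230: seat heating on",
    3: lambda: "CAN bus 220 -> vehicle ECU 210: unlock doors",
    4: lambda: "in-vehicle system 230: play music",
    5: lambda: "in-vehicle system 230: start navigation",
    6: lambda: "CAN bus 220 -> vehicle ECU 210: adjust suspension",
}

def execute_by_digit(digit: int, areas: Dict[int, Callable[[], str]] = SECOND_AREAS) -> str:
    """Execute the second functional area whose displayed number matches the
    digit recognized from the user's gesture or voice."""
    if digit not in areas:
        return "no such functional area"
    return areas[digit]()

print(execute_by_digit(2))  # -> "in-vehicle system 230: seat heating on"
```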

Exemplarily, when the eye tracker 120 detects that the gaze range of the human eye falls on the first functional area 1112, the processor 150 may also give a selection prompt for the first functional area 1112 in the user interface 111; the selection prompt includes any one or a combination of the following: a highlight prompt, a blinking prompt, a sound prompt, or a vibration prompt.
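
A small sketch of how combinable selection prompts might be modeled; the enum and the print-based handlers are placeholders for the actual display, speaker, and haptic drivers.

```python
from enum import Flag, auto

class SelectionPrompt(Flag):
    """Prompt styles that may be combined when a first functional area is selected."""
    HIGHLIGHT = auto()
    BLINK = auto()
    SOUND = auto()
    VIBRATION = auto()

def prompt_selected(area_id: str, style: SelectionPrompt) -> None:
    # Illustrative only: a real implementation would drive the display, speaker, etc.
    if SelectionPrompt.HIGHLIGHT in style:
        print(f"highlight area {area_id}")
    if SelectionPrompt.BLINK in style:
        print(f"blink area {area_id}")
    if SelectionPrompt.SOUND in style:
        print("play selection sound")
    if SelectionPrompt.VIBRATION in style:
        print("trigger vibration")

prompt_selected("1112", SelectionPrompt.HIGHLIGHT | SelectionPrompt.SOUND)
```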

It can be understood that when the processor 150 identifies an executable function according to the gesture information collected by the gesture recognition module 130, it may recognize the gesture information according to the gesture action, where the gesture information includes motion trajectory information, spatial position information, or pointing information, so as to select, move, and release a functional area, and then select the number corresponding to the gesture shape according to the gesture shape information.

It can be understood that the gesture recognition module 130 may set specific gesture shapes for some second functional areas. When the eye tracker 120 detects that the gaze range of the human eye falls on the user interface 111, the gesture recognition module 130 may detect a specific gesture shape, such as a "V", so that the processor 150 controls the car 200 to execute a specific function.

The voice information includes any one or a combination of the following: the number of the second functional area, the name of the second functional area, the abbreviation of the second functional area, the name of the executable function, the abbreviation of the executable function, confirmation, or cancellation.

Refer also to FIG. 5, which is a schematic diagram of gesture recognition provided by an embodiment of the present application.

Exemplarily, when the eye tracker 120 detects that the gaze range of the human eye falls on the first functional area 1112, the gesture recognition module 130 may identify the digit corresponding to the shape of the user's gesture, and the second functional area bearing the same digit is executed according to the identified digit. For example, when the user chooses to use the second functional area 11122, the user may make the gesture corresponding to "2", so that the processor 150 controls the car 200 to execute the function corresponding to the second functional area 11122 in the first functional area 1112, for example the seat heating function corresponding to the second functional area 11122 shown in FIG. 4.

It can be understood that, because the gestures representing digits may differ between regions, the gesture recognition module 130 may store the digit gestures of different regions.
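
A minimal sketch of a per-region lookup for digit gestures; the region names and hand-shape labels are hypothetical.

```python
from typing import Optional

# Hypothetical lookup of digit gestures by region: the same hand shape can map
# to different digits (or to none) depending on local counting conventions.
DIGIT_GESTURES = {
    "region_a": {"index_finger": 1, "index_and_middle": 2, "open_palm": 5},
    "region_b": {"thumb": 1, "thumb_and_index": 2, "open_palm": 5},
}

def gesture_to_digit(region: str, hand_shape: str) -> Optional[int]:
    """Translate a recognized hand shape into a digit using the gesture set
    stored for the given region; return None if the shape is not a digit there."""
    return DIGIT_GESTURES.get(region, {}).get(hand_shape)

print(gesture_to_digit("region_b", "thumb_and_index"))  # -> 2
```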

Exemplarily, when the eye tracker 120 detects that the gaze range of the human eye falls on the first functional area 1112, the processor 150 may recognize the user's speech through the speech recognition module 140 and identify the corresponding number from the speech. For example, when the user chooses to use the second functional area 11121, the user may say "one", "the first one", "the first function", "the first executable function", or similar, to execute the second functional area 11121. The speech recognition module 140 may extract the keyword "one" through semantic recognition to select the second functional area 11121.
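
The keyword-extraction step could look roughly like the sketch below, which maps number words or spoken digits to an area index. The word table is an assumption; a production system would rely on the semantic recognition the patent refers to rather than a fixed table.

```python
import re
from typing import Optional

# Hypothetical keyword table reducing utterances to an area number.
NUMBER_WORDS = {
    "one": 1, "first": 1, "two": 2, "second": 2, "three": 3, "third": 3,
    "four": 4, "fourth": 4, "five": 5, "fifth": 5, "six": 6, "sixth": 6,
}

def extract_area_number(utterance: str) -> Optional[int]:
    """Pull the functional-area number out of a recognized utterance,
    e.g. 'the first executable function' -> 1, 'number 3 please' -> 3."""
    for token in re.findall(r"[a-z0-9]+", utterance.lower()):
        if token.isdigit():
            return int(token)
        if token in NUMBER_WORDS:
            return NUMBER_WORDS[token]
    return None

print(extract_area_number("the first executable function"))  # -> 1
```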

It can be understood that the speech recognition module 140 may set specific utterances for some second functional areas. Exemplarily, when the eye tracker 120 detects that the gaze range of the human eye falls on the user interface 111, the speech recognition module 140 may detect a specific utterance, such as "play music" or "skip song", to execute functions such as playing music or switching songs.

It can be understood that the speech recognition module 140 obtains a voice control signal according to the voice information; the voice control signal includes any one or a combination of the following: selecting the second functional area, executing an executable function, confirming execution of an executable function, canceling execution of an executable function, or returning to the previous level. The processor 150 then performs the corresponding operation according to the voice control signal.
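
To illustrate the kinds of voice control signals listed here, the sketch below maps recognized utterances to signal kinds (select, execute, confirm, cancel, return to previous level). The phrase table and fallback behavior are assumptions, not the patent's wording.

```python
from typing import Optional, Tuple

# Illustrative phrase table mapping utterances to control-signal kinds.
PHRASES = {
    "confirm": "confirm_execute",
    "cancel": "cancel_execute",
    "go back": "return_previous_level",
}

def voice_to_control(utterance: str) -> Tuple[str, Optional[int]]:
    """Return (signal_kind, optional area number) for a recognized utterance."""
    text = utterance.lower().strip()
    for phrase, kind in PHRASES.items():
        if phrase in text:
            return kind, None
    digits = [int(t) for t in text.split() if t.isdigit()]
    if digits:
        return "select_second_area", digits[0]
    return "execute_function", None   # fall back to treating it as a function name

print(voice_to_control("number 2"))   # -> ("select_second_area", 2)
print(voice_to_control("go back"))    # -> ("return_previous_level", None)
```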

FIG. 6 is a schematic flowchart of a user interface control method provided by an embodiment of the present application. As shown in FIG. 6, the user interface control method includes at least the following steps.

S100: Acquire the eye movement information of the user.

It can be understood that the eye tracker 120 is used to acquire the eye movement information of the user; for the specific acquisition method, refer to FIG. 1 and FIG. 3, which is not repeated here.

S200: Select the corresponding first functional area according to the eye movement information.

It can be understood that the processor 150 determines, according to the eye movement information acquired by the eye tracker 120, whether the user's line of sight falls on a corresponding first functional area, and selects the corresponding first functional area; for the specific selection method, refer to FIG. 1 and FIG. 3, which is not repeated here.

S300: Select the corresponding second functional area according to the user instruction, and execute the executable function corresponding to the second functional area.

It can be understood that the executable function is identified through the gesture recognition module 130 or the speech recognition module 140, so that the processor 150 controls the car 200 to execute the corresponding function. For the specific selection and execution method, refer to FIG. 1 to FIG. 3, which is not repeated here.
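
A minimal loop tying steps S100 to S300 together, with stand-in functions in place of the eye tracker 120, the recognition modules, and the execution path; all bodies are placeholders, not the patent's implementation.

```python
import time
from typing import Optional

def acquire_eye_movement() -> Optional[tuple]:
    """S100: stand-in for the eye tracker 120; returns a gaze point or None."""
    return (700, 100)   # fixed value for illustration

def select_first_area(gaze: Optional[tuple]) -> Optional[str]:
    """S200: stand-in for the gaze hit test over the first functional areas."""
    return "1112" if gaze is not None else None

def collect_user_instruction() -> Optional[int]:
    """Stand-in for gesture/voice recognition returning a displayed digit."""
    return 2

def execute_second_area(first_area: str, digit: int) -> None:
    """S300: stand-in for executing the numbered second functional area."""
    print(f"execute function {digit} inside first functional area {first_area}")

def control_loop(iterations: int = 1) -> None:
    for _ in range(iterations):
        gaze = acquire_eye_movement()            # S100
        area = select_first_area(gaze)           # S200
        if area is None:
            time.sleep(0.05)
            continue
        digit = collect_user_instruction()
        if digit is not None:
            execute_second_area(area, digit)     # S300

control_loop()
```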

FIG. 7 is a schematic flowchart of a user interface control method provided by another embodiment of the present application. As shown in FIG. 7, the user interface control method includes at least the following steps.

S310: Recognize the gesture action using gesture recognition technology.

It can be understood that the gesture recognition module 130 uses gesture recognition technology to recognize the gesture action; for the specific recognition method, refer to FIG. 1 and FIG. 3, which is not repeated here.

S320: Identify the gesture control signal according to the gesture action.

It can be understood that the gesture recognition module 130 identifies the gesture control signal according to the gesture action; for the specific identification method, refer to FIG. 1 and FIG. 3, which is not repeated here.

S330: Select the number according to the gesture control signal.

It can be understood that the processor 150 selects the number according to the gesture control signal identified by the gesture recognition module 130; for the specific selection method, refer to FIG. 1 and FIG. 3, which is not repeated here.

FIG. 8 is a schematic flowchart of a user interface control method provided by another embodiment of the present application. As shown in FIG. 8, the user interface control method includes at least the following steps.

S311: Recognize the voice information using a speech recognition engine.

It can be understood that the speech recognition module 140 uses a speech recognition engine to recognize the voice information; for the specific recognition method, refer to FIG. 1 and FIG. 3, which is not repeated here.

S321: Obtain the voice control signal according to the voice information.

It can be understood that the speech recognition module 140 obtains the voice control signal according to the recognized voice information; for the specific recognition method, refer to FIG. 1 and FIG. 3, which is not repeated here.

S331: Perform the corresponding operation according to the voice control signal.

It can be understood that the processor 150 performs the corresponding operation according to the voice control signal recognized by the speech recognition module 140; for the specific recognition method, refer to FIG. 1 and FIG. 3, which is not repeated here.

It can be understood that the in-vehicle communication method provided by the embodiments of the present application may also be applied to the in-vehicle communication system 10a shown in FIG. 2; for the in-vehicle communication system 10a executing the in-vehicle communication method, refer to FIG. 1 and FIG. 2 and their related descriptions, which are not repeated here.

The embodiments of the present application have been described in detail above with reference to the accompanying drawings, but the present application is not limited to the above embodiments; within the scope of knowledge possessed by those of ordinary skill in the art, various changes may also be made without departing from the purpose of the present application. In addition, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another.

100: Electronic device

110: Display screen

111: User interface

120: Eye tracker

130: Gesture recognition module

140: Speech recognition module

150: Processor

Claims (10)

1. A user interface control method, the user interface comprising at least one first functional area, the improvement being that the method comprises: acquiring eye movement information of a user; selecting the corresponding first functional area according to the eye movement information, the first functional area comprising at least one second functional area; collecting sensory input information of the user, the sensory input information comprising gesture information, and the gesture information comprising gesture shape information; and selecting the corresponding second functional area according to the sensory input information of the user and executing the executable function corresponding to the second functional area.

2. The user interface control method of claim 1, wherein selecting the corresponding second functional area according to the sensory input information of the user and executing the executable function corresponding to the second functional area comprises: identifying the sensory input information to generate a control signal; and selecting the second functional area of the corresponding first functional area according to the control signal; wherein the sensory input information further comprises voice information.

3. The user interface control method of claim 2, wherein the gesture information further comprises any one or a combination of the following: motion trajectory information, spatial position information, or pointing information.

4. The user interface control method of claim 2, wherein the voice information comprises any one or a combination of the following: the number of the second functional area, the name of the second functional area, the abbreviation of the second functional area, the name of an executable function in the second functional area, the abbreviation of an executable function in the second functional area, a confirmation, or a cancellation.

5. The user interface control method of claim 2, wherein the control signal comprises any one or a combination of the following: a control signal for selecting the second functional area, executing an executable function in the second functional area, confirming execution of an executable function in the second functional area, canceling execution of an executable function in the second functional area, or returning to the previous level.

6. The user interface control method of any one of claims 1 to 5, wherein the method further comprises: numbering each second functional area, each second functional area corresponding to at least one executable function.

7. The user interface control method of claim 2, wherein the method further comprises: when the sensory input information comprises the gesture information, collecting the sensory input information of the user by identifying the number corresponding to the gesture shape information.

8. The user interface control method of claim 1, wherein after selecting the first functional area according to the eye movement information, the method further comprises: indicating the selected first functional area with a selection prompt, the selection prompt comprising any one or a combination of the following: a highlight prompt, a blinking prompt, a sound prompt, or a vibration prompt.

9. An electronic device, the improvement being that it comprises an eye tracker, a gesture recognition module, a speech recognition module, a display screen, and a processor; the processor being configured to execute the user interface control method of any one of claims 1 to 8; the eye tracker being configured to acquire eye movement information; the gesture recognition module being configured to recognize gesture information; the speech recognition module being configured to recognize voice information; and the display screen being configured to display a user interface.

10. A user interface applied to a display screen, the display screen being communicatively connected to an eye tracking module, the improvement being that the user interface comprises the user interface displayed when the electronic device of claim 9 executes the user interface control method.
TW110147808A 2021-12-20 2021-12-20 Method, electronic device, and user interface for user interface control TWI796883B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW110147808A TWI796883B (en) 2021-12-20 2021-12-20 Method, electronic device, and user interface for user interface control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW110147808A TWI796883B (en) 2021-12-20 2021-12-20 Method, electronic device, and user interface for user interface control

Publications (2)

Publication Number Publication Date
TWI796883B true TWI796883B (en) 2023-03-21
TW202326387A TW202326387A (en) 2023-07-01

Family

ID=86692466

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110147808A TWI796883B (en) 2021-12-20 2021-12-20 Method, electronic device, and user interface for user interface control

Country Status (1)

Country Link
TW (1) TWI796883B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102467230A (en) * 2010-11-11 2012-05-23 由田新技股份有限公司 Method and system for operating and controlling vernier by human body
CN102822771A (en) * 2010-01-21 2012-12-12 托比技术股份公司 Eye tracker based contextual action
CN103850582A (en) * 2012-11-30 2014-06-11 由田新技股份有限公司 Eye-movement operation password input method and safe using same
CN109918001A (en) * 2019-03-28 2019-06-21 北京小米移动软件有限公司 Interface display method, device and storage medium

Also Published As

Publication number Publication date
TW202326387A (en) 2023-07-01

Similar Documents

Publication Publication Date Title
Ayoub et al. From manual driving to automated driving: A review of 10 years of autoui
US8261190B2 (en) Displaying help sensitive areas of a computer application
US20030046401A1 (en) Dynamically determing appropriate computer user interfaces
DE202017103671U1 (en) Automatic generation of a graphical user interface from notification data
DE102016125760A1 (en) Predicting search queries through a keyboard
DE202017104090U1 (en) Graphical keyboard application with integrated search
DE202011110334U1 (en) System for orthogonal dragging on scrollbars
Eisele et al. Effects of traffic context on eHMI icon comprehension
Ma et al. From action icon to knowledge icon: Objective-oriented icon taxonomy in computer science
Tabone et al. Augmented reality interfaces for pedestrian-vehicle interactions: An online study
Stephanidis Design for all in digital technologies
US20220318485A1 (en) Document Mark-up and Navigation Using Natural Language Processing
Zhang et al. Mid-air gestures for in-vehicle media player: elicitation, segmentation, recognition, and eye-tracking testing
TWI796883B (en) Method, electronic device, and user interface for user interface control
Singh Evaluating user-friendly dashboards for driverless vehicles: Evaluation of in-car infotainment in transition
CN116301509A (en) User interface control method, electronic device and user interface
Krstačić et al. Safety Aspects of In-Vehicle Infotainment Systems: A Systematic Literature Review from 2012 to 2023
Gao et al. Intelligent Cockpits for Connected Vehicles: Taxonomy, Architecture, Interaction Technologies, and Future Directions
Karas et al. Audiovisual Affect Recognition for Autonomous Vehicles: Applications and Future Agendas
CN116661635B (en) Gesture processing method and electronic equipment
Wang et al. Multimodal Interaction Design in Intelligent Vehicles
Chen et al. User Interface Design
Stephanidis User Interface Adaptation and Design for All
Moorhouse Natural user experience in tertiary driver-car interactions
Roider Natural Multimodal Interaction in the Car-Generating Design Support for Speech, Gesture, and Gaze Interaction while Driving