TWI796883B - Method, electronic device, and user interface for user interface control
- Publication number
- TWI796883B (application number TW110147808A)
- Authority
- TW
- Taiwan
- Prior art keywords
- user interface
- functional area
- information
- gesture
- control method
- Prior art date
- 2021-12-20
Landscapes
- Input Circuits Of Receivers And Coupling Of Receivers And Audio Equipment (AREA)
- Communication Control (AREA)
- Electrophonic Musical Instruments (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A user interface control method, an electronic device, and a user interface are provided. The user interface includes at least one first functional area, and each first functional area includes at least one second functional area. Eye movement information of a user is acquired and used to select the corresponding first functional area; sensory input information of the user, such as gesture information or voice information, is then collected and used to select the corresponding second functional area and execute the executable function corresponding to it, which can improve the operability of the user interface.
Description
The present application relates to the technical field of user interfaces, and specifically to a user interface control method, an electronic device, and a user interface.
With economic development and technological progress, automobiles have become an increasingly common means of transportation in daily life. Whether a driver remains attentive while driving has a serious impact on road safety, so driver distraction should be avoided as much as possible. A driver may receive incoming calls or voice messages while driving and may need to answer or reply to them. At present, the process of answering or replying to messages while driving is relatively complicated; it distracts the driver and prevents timely awareness of road conditions, which may lead to accidents.
In view of this, the present application provides a user interface control method, an electronic device, and a user interface. By selecting the corresponding functional area according to eye movement information and then selecting an executable function within that functional area, the operability of the user interface can be improved.
In the user interface control method of the present application, the user interface includes at least one first functional area, and the method includes: acquiring eye movement information of a user; selecting the corresponding first functional area according to the eye movement information, where the first functional area includes at least one second functional area; collecting sensory input information of the user; and selecting the corresponding second functional area according to the sensory input information and executing the executable function corresponding to the second functional area.
The electronic device of the present application includes an eye tracker, a gesture recognition module, a voice recognition module, a display screen, and a processor. The processor is configured to execute the user interface control method described above; the eye tracker is configured to acquire eye movement information; the gesture recognition module is configured to recognize gesture information; and the voice recognition module is configured to recognize voice information.
The user interface of the present application is applied to a display screen that is communicatively connected to an eye tracking module, and the user interface includes the user interface displayed when the electronic device described above executes the user interface control method.
100: electronic device
110: display screen
111: user interface
120: eye tracker
130: gesture recognition module
140: voice recognition module
150: processor
200: automobile
210: vehicle electronic control device
220: Controller Area Network bus
230: in-vehicle system
1111-1114: first functional areas
11121-11126: second functional areas
S100-S331: steps
FIG. 1 is a schematic diagram of the modules of an electronic device provided by an embodiment of the present application.
FIG. 2 is a schematic diagram of the modules of an automobile provided by an embodiment of the present application.
FIG. 3 is a schematic diagram of a user interface provided by an embodiment of the present application.
FIG. 4 is a schematic diagram of a user interface provided by another embodiment of the present application.
FIG. 5 is a schematic diagram of gesture recognition provided by an embodiment of the present application.
FIG. 6 is a schematic flowchart of a user interface control method provided by an embodiment of the present application.
FIG. 7 is a schematic flowchart of a user interface control method provided by another embodiment of the present application.
FIG. 8 is a schematic flowchart of a user interface control method provided by another embodiment of the present application.
In order that the above objects, features, and advantages of the present application may be understood more clearly, the present application is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another. Many specific details are set forth in the following description to facilitate a full understanding of the present application; the described embodiments are only some, rather than all, of the embodiments of the present application.
It should be noted that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that in the flowcharts. The methods disclosed in the embodiments of the present application include one or more steps or actions for realizing the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. Where no conflict arises, the following embodiments and the features in the embodiments may be combined with one another.
Among the various on-board devices of an automobile, the central control screen occupies an important position: through it, the driver can perform a wide range of operations on the vehicle. Existing central control screens are operated mainly in two ways, by physical buttons or by touch, but in either case the trend is toward ever larger screens that integrate and display more vehicle status and multimedia information, so the central control screen is becoming increasingly important during driving. As central control screens grow in size, operating them while the vehicle is moving introduces safety hazards.
At present, many in-vehicle systems integrate eye tracking technology to operate the central control screen. However, because the functions of the central control screen are becoming increasingly complex, it is difficult to achieve precise control using eye tracking alone.
The user interface control method, user interface, and device provided by the embodiments of the present application can select the corresponding functional area according to eye movement information and then select an executable function within that functional area by voice recognition or gesture recognition, which can improve the operability of the user interface.
FIG. 1 shows an electronic device 100 provided by an embodiment of the present application. As shown in FIG. 1, the electronic device 100 includes a display screen 110, an eye tracker 120, a gesture recognition module 130, a voice recognition module 140, and a processor 150. The display screen 110 is used to display a user interface 111.
It can be understood that the electronic device 100 implements its display function by means of a GPU, the display screen 110, an application processor, and the like. The GPU is a microprocessor for image processing that connects the display screen 110 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The electronic device 100 may include one or more GPUs, which execute program instructions to generate or change display information.
It can be understood that the display screen 110 is used to display images, video, and the like. The display screen 110 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 110, where N is a positive integer greater than 1.
It can be understood that the display screen 110 is used to display a user interface (UI) 111. The user interface 111 is the medium for interaction and information exchange between an application program or operating system and the user; it converts between the internal form of information and a form acceptable to the user. The user interface 111 of an application program is source code written in a specific computer language such as Java or extensible markup language (XML); the interface source code is parsed and rendered on the terminal device and finally presented as content the user can recognize, such as pictures, text, and buttons. A control, also called a widget, is a basic element of the user interface 111; typical controls include toolbars, menu bars, text boxes, buttons, scroll bars, pictures, and text. The attributes and content of the controls in an interface are defined by tags or nodes; for example, XML specifies the controls contained in an interface through nodes such as <Textview>, <ImgView>, and <VideoView>. One node corresponds to one control or attribute in the interface, and after parsing and rendering, the node is presented as content visible to the user. In addition, the interfaces of many application programs, such as hybrid applications, usually also contain web pages. A web page, also called a page, can be understood as a special control embedded in an application program interface. A web page is source code written in a specific computer language, such as hypertext markup language (HTML), cascading style sheets (CSS), or JavaScript (JS); web page source code can be loaded and displayed as user-recognizable content by a browser or by a web page display component with functions similar to a browser. The specific content contained in a web page is likewise defined by tags or nodes in the web page source code; for example, HTML defines the elements and attributes of a web page through <p>, <img>, <video>, and <canvas>.
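As a minimal, non-limiting illustration of how such nodes map to controls, the following sketch parses a small XML layout description and lists the controls it defines. The layout string and its node names (borrowed from the <Textview> and <ImgView> examples above) are assumptions made for illustration; the patent itself does not provide layout code.

```python
import xml.etree.ElementTree as ET

# A hypothetical layout description using the node names mentioned above.
LAYOUT_XML = """
<Interface>
    <Textview id="title" text="Seat control"/>
    <ImgView id="icon" src="seat.png"/>
    <Textview id="hint" text="Say a number to select a function"/>
</Interface>
"""

def list_controls(xml_source: str) -> list[tuple[str, dict]]:
    """Parse the layout and return (node name, attributes) for each control node."""
    root = ET.fromstring(xml_source)
    return [(node.tag, dict(node.attrib)) for node in root]

if __name__ == "__main__":
    for tag, attrs in list_controls(LAYOUT_XML):
        print(f"{tag}: {attrs}")
```

Each node is rendered as one visible control; a rendering engine would walk the same tree and instantiate the corresponding widget for each tag.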
It can be understood that a commonly used form of the user interface 111 is a graphical user interface (GUI), which refers to a user interface 111, related to computer operations, that is displayed graphically. It may be an interface element such as an icon, a window, or a control displayed on the display screen 110 of the electronic device 100, where controls may include visible interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
It can be understood that the eye tracker 120 may include a camera and an infrared device (for example, an infrared emitter) to detect the user's eye actions, such as gaze direction, blinking operations, and fixation operations, so as to implement eye tracking; the present application is not limited in this respect.
It can be understood that the gesture recognition module 130 may detect the user's gesture actions by means of an optical sensor (for example, a camera, a millimeter-wave radar, or a lidar), compare the user's gesture model stored in the electronic device 100 with the user's gesture information to determine whether the two match, and, when they are determined to match, execute the corresponding control.
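A minimal sketch of this kind of template matching is shown below: each stored gesture model is represented as a normalized feature vector (for example, flattened hand-landmark coordinates), and a detected gesture matches when its distance to a stored template falls below a threshold. The feature representation, the template values, and the threshold are all illustrative assumptions; the patent does not specify the matching algorithm.

```python
import math

# Hypothetical stored gesture models: gesture name -> feature vector
# (e.g., flattened, normalized hand-landmark coordinates).
GESTURE_TEMPLATES: dict[str, list[float]] = {
    "digit_2": [0.1, 0.9, 0.4, 0.95, 0.2, 0.3],
    "digit_5": [0.1, 0.9, 0.5, 0.9, 0.9, 0.9],
    "V":       [0.15, 0.9, 0.45, 0.9, 0.2, 0.25],
}

MATCH_THRESHOLD = 0.2  # assumed tolerance for a match

def match_gesture(features: list[float]) -> str | None:
    """Return the name of the closest stored gesture model, or None if nothing matches."""
    best_name, best_dist = None, float("inf")
    for name, template in GESTURE_TEMPLATES.items():
        dist = math.dist(features, template)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= MATCH_THRESHOLD else None

if __name__ == "__main__":
    observed = [0.12, 0.88, 0.41, 0.94, 0.21, 0.28]
    print(match_gesture(observed))  # -> "digit_2"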
It can be understood that the voice recognition module 140 may implement voice recognition by means of a microphone and a neural-network (NN) computing processor, which draws on the structure of biological neural networks, for example the transfer pattern between neurons in the human brain.
It can be understood that the processor 150 is used to recognize sensory input information so as to generate control signals. Exemplarily, the sensory input information includes any one of the following or a combination thereof: gesture information and voice information. The gesture information includes motion trajectory information, spatial position information, gesture shape information, pointing information, or a combination thereof.
Exemplarily, the processor 150 is used to control the display screen 110 according to the eye movement information, gesture information, or voice information collected by the eye tracker 120, the gesture recognition module 130, or the voice recognition module 140, so as to adjust the content displayed in the user interface 111 and to control the in-vehicle equipment of the automobile to execute the corresponding functions.
It can be understood that the electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device, and/or a smart city device; some embodiments of the present application place no particular restriction on the specific type of the electronic device 100.
FIG. 2 shows an automobile 200 provided by an embodiment of the present application. As shown in FIG. 2, the automobile 200 includes a vehicle electronic control device 210, a Controller Area Network bus (CAN bus) 220, and an in-vehicle system 230.
Exemplarily, the in-vehicle system 230 is provided with a central control unit, an in-cabin camera, a memory, a display screen, a speaker, a microphone, a voice assistant module, and a communication module. The in-vehicle system 230 is connected to the electronic device 100 through the communication module.
Exemplarily, the Controller Area Network bus 220 connects the in-vehicle system 230 and the vehicle electronic control device 210 to transfer instructions and information.
Exemplarily, the vehicle electronic control device 210 integrates the power, steering, gear, suspension, brake, and door control systems.
FIG. 3 shows a user interface 111 provided by an embodiment of the present application. As shown in FIG. 3, the user interface 111 includes a plurality of first functional areas; for example, FIG. 3 shows four first functional areas 1111, 1112, 1113, and 1114.
It can be understood that, when the user interface 111 is in the on state, the eye tracker 120 continuously detects the gaze range of the human eye. When the eye tracker 120 detects that the human eye is gazing at the user interface 111, the processor 150 further determines whether the line of sight falls on the first functional area 1111, 1112, 1113, or 1114.
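A minimal sketch of this hit test is shown below: each first functional area is modeled as a rectangle in screen coordinates, and a gaze point reported by the eye tracker is mapped to the area that contains it. The rectangle coordinates are made-up values for illustration; the patent does not specify any screen geometry.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

# Hypothetical screen layout of the four first functional areas of FIG. 3.
FIRST_FUNCTIONAL_AREAS = {
    1111: Rect(0, 0, 640, 360),
    1112: Rect(640, 0, 640, 360),
    1113: Rect(0, 360, 640, 360),
    1114: Rect(640, 360, 640, 360),
}

def hit_test(gaze_x: float, gaze_y: float) -> int | None:
    """Return the ID of the first functional area containing the gaze point, if any."""
    for area_id, rect in FIRST_FUNCTIONAL_AREAS.items():
        if rect.contains(gaze_x, gaze_y):
            return area_id
    return None  # gaze falls outside every functional area

if __name__ == "__main__":
    print(hit_test(700.0, 120.0))  # -> 1112
```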
In some embodiments, the processor 150 numbers the second functional areas within a first functional area, and each second functional area corresponds to at least one executable function.
Referring also to FIG. 4, the first functional area 1112 includes second functional areas 11121, 11122, 11123, 11124, 11125, and 11126.
Exemplarily, when the eye tracker 120 detects that the gaze range of the human eye falls on the first functional area 1112, the processor 150 highlights the first functional area 1112, numbers the functions available for execution in the first functional area 1112, and displays the numbers, so that the driver can conveniently control the vehicle to execute the related functions by gesture or voice. For example, the executable functions in the first functional area 1112 are numbered as second functional areas 11121, 11122, 11123, 11124, 11125, and 11126. Exemplarily, as shown in FIG. 4, the second functional areas 11121 to 11126 are numbered with the digits 1 to 6 respectively, and the user can select the second functional areas 11121 to 11126 by means of the digits 1 to 6. Subsequently, the sensory input information of the user is collected, the corresponding second functional area is selected according to the sensory input information, and the executable function corresponding to the second functional area is executed. Exemplarily, the sensory input information may include gesture information and voice information. The processor 150 may obtain the digit, from 1 to 6, expressed by the user through the gesture recognition module 130 or the voice recognition module 140, and then select the second functional area corresponding to that digit. The processor 150 controls the in-vehicle system 230 to perform the corresponding operation according to the executable function in the selected second functional area, or transmits a control command through the Controller Area Network bus 220 to the vehicle electronic control device 210 to perform the corresponding operation.
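The following sketch illustrates this numbering-and-dispatch step, assuming a simple table that maps the displayed digits 1 to 6 to the executable functions of the second functional areas 11121 to 11126. Apart from the seat heating function of area 11122 (named in the FIG. 4 example below), the function names and the dispatch interface are hypothetical placeholders, not the patent's actual implementation.

```python
# Hypothetical mapping from displayed digit to (second functional area ID, function).
SECOND_AREA_FUNCTIONS = {
    1: (11121, "adjust seat position"),
    2: (11122, "seat heating"),       # the example function shown in FIG. 4
    3: (11123, "seat ventilation"),
    4: (11124, "lumbar support"),
    5: (11125, "seat massage"),
    6: (11126, "save seat profile"),
}

def execute_by_digit(digit: int) -> str:
    """Select the second functional area by its displayed number and dispatch it."""
    if digit not in SECOND_AREA_FUNCTIONS:
        raise ValueError(f"no second functional area is numbered {digit}")
    area_id, function = SECOND_AREA_FUNCTIONS[digit]
    # In the real device this would go to the in-vehicle system 230 or,
    # via the CAN bus 220, to the vehicle electronic control device 210.
    return f"area {area_id}: executing '{function}'"

if __name__ == "__main__":
    print(execute_by_digit(2))  # -> area 11122: executing 'seat heating'
```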
Exemplarily, when the eye tracker 120 detects that the gaze range of the human eye falls on the first functional area 1112, the processor 150 may also present a selection prompt for the first functional area 1112 in the user interface 111, where the selection prompt includes any one of the following or a combination thereof: a highlight prompt, a blinking prompt, a sound prompt, or a vibration prompt.
It can be understood that, when recognizing the functions available for execution according to the gesture information collected by the gesture recognition module 130, the processor 150 may recognize the gesture information according to the gesture action, where the gesture information includes motion trajectory information, spatial position information, or pointing information, so as to select, move, and release a functional area, and may then select the number corresponding to the gesture shape according to the gesture shape information.
It can be understood that the gesture recognition module 130 may set specific gesture shapes for some second functional areas. When the eye tracker 120 detects that the gaze range of the human eye falls on the user interface 111, the gesture recognition module 130 may detect a specific gesture shape, such as a "V", so that the processor 150 controls the automobile 200 to execute a specific function.
The voice information includes any one of the following or a combination thereof: the number of the second functional area, the name of the second functional area, the abbreviation of the second functional area, the name of the executable function, the abbreviation of the executable function, confirmation, or cancellation.
Please refer also to FIG. 5, which is a schematic diagram of gesture recognition provided by an embodiment of the present application.
Exemplarily, when the eye tracker 120 detects that the gaze range of the human eye falls on the first functional area 1112, the gesture recognition module 130 may recognize the digit sign corresponding to the shape of the user's gesture, and the second functional area having the same digit sign is executed according to the recognized digit sign. For example, when the user chooses to use the second functional area 11122, the user may make the gesture corresponding to "2", so that the processor 150 controls the automobile 200 to execute the function corresponding to the second functional area 11122 within the first functional area 1112, for example the seat heating function corresponding to the second functional area 11122 shown in FIG. 4.
It can be understood that, since the gestures representing digits may differ from region to region, gestures representing digits in different regions may be stored in the gesture recognition module 130.
Exemplarily, when the eye tracker 120 detects that the gaze range of the human eye falls on the first functional area 1112, the processor 150 may recognize the user's voice through the voice recognition module 140 and identify the corresponding number from the voice. For example, when the user chooses to use the second functional area 11121, the user may say "one", "the first one", "the first function", "the first executable function", or the like to execute the second functional area 11121. The voice recognition module 140 may extract the keyword "one" through semantic recognition so as to select the second functional area 11121.
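A minimal sketch of this keyword extraction step, assuming a simple pattern table for English utterances (the patent's examples are spoken commands, and a production system would use a real semantic recognizer), is shown below. The word list and the tokenization are illustrative assumptions.

```python
import re

# Hypothetical utterance vocabulary mapping spoken words to a selection index.
NUMBER_WORDS = {
    "one": 1, "two": 2, "three": 3, "four": 4, "five": 5, "six": 6,
    "first": 1, "second": 2, "third": 3, "fourth": 4, "fifth": 5, "sixth": 6,
}

def extract_selection(utterance: str) -> int | None:
    """Extract a selection index from phrases like 'the first function' or 'one'."""
    for token in re.findall(r"[a-z]+|\d", utterance.lower()):
        if token.isdigit():
            return int(token)
        if token in NUMBER_WORDS:
            return NUMBER_WORDS[token]
    return None  # no recognizable number keyword

if __name__ == "__main__":
    for phrase in ("one", "the first one", "the first executable function", "2"):
        print(phrase, "->", extract_selection(phrase))
```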
It can be understood that the voice recognition module 140 may set specific voice commands for some second functional areas. Exemplarily, when the eye tracker 120 detects that the gaze range of the human eye falls on the user interface 111, the voice recognition module 140 may detect specific voice commands, such as "play music" or "skip song", so as to execute functions such as playing music or switching songs.
It can be understood that the voice recognition module 140 obtains a voice control signal according to the voice information, where the voice control signal includes any one of the following or a combination thereof: selecting a second functional area, executing an executable function, confirming execution of an executable function, canceling execution of an executable function, or returning to the previous level; the processor 150 then performs the corresponding operation according to the voice control signal.
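The following sketch shows one way such voice control signals might be dispatched, modeled as a small command enumeration plus a handler. The signal names mirror the list above; the keyword table and handler actions are placeholders assumed for illustration, not the patent's implementation.

```python
from enum import Enum, auto

class VoiceControlSignal(Enum):
    SELECT_AREA = auto()   # select a second functional area
    EXECUTE = auto()       # execute an executable function
    CONFIRM = auto()       # confirm execution of an executable function
    CANCEL = auto()        # cancel execution of an executable function
    BACK = auto()          # return to the previous level

# Hypothetical keyword table; in practice this would come from the recognizer.
KEYWORD_TO_SIGNAL = {
    "confirm": VoiceControlSignal.CONFIRM,
    "cancel": VoiceControlSignal.CANCEL,
    "back": VoiceControlSignal.BACK,
}

def dispatch(signal: VoiceControlSignal) -> str:
    """Placeholder for 'the processor 150 then performs the corresponding operation'."""
    actions = {
        VoiceControlSignal.SELECT_AREA: "highlighting the named second functional area",
        VoiceControlSignal.EXECUTE: "executing the named function",
        VoiceControlSignal.CONFIRM: "executing the pending function",
        VoiceControlSignal.CANCEL: "discarding the pending function",
        VoiceControlSignal.BACK: "returning to the previous level",
    }
    return actions[signal]

if __name__ == "__main__":
    print(dispatch(KEYWORD_TO_SIGNAL["confirm"]))
```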
FIG. 6 is a schematic flowchart of a user interface control method provided by an embodiment of the present application. As shown in FIG. 6, the user interface control method includes at least the following steps:
S100: acquire eye movement information of the user.
It can be understood that the eye tracker 120 is used to acquire the eye movement information of the user; for the specific acquisition manner, please refer to FIG. 1 and FIG. 3 together, and details are not repeated here.
S200: select the corresponding first functional area according to the eye movement information.
It can be understood that the processor 150 determines, according to the eye movement information acquired by the eye tracker 120, whether the user's line of sight falls on the corresponding first functional area, and selects the corresponding first functional area; for the specific selection manner, please refer to FIG. 1 and FIG. 3 together, and details are not repeated here.
S300: select the corresponding second functional area according to the user's instruction, and execute the executable function corresponding to the second functional area.
It can be understood that the functions available for execution are recognized by the gesture recognition module 130 or the voice recognition module 140, so that the processor 150 controls the automobile 200 to execute the corresponding function. For the specific selection and execution manner, please refer to FIG. 1 to FIG. 3 together, and details are not repeated here.
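Putting steps S100 to S300 together, a highly simplified control loop might look like the sketch below. The three stubs stand in for the eye tracker 120, the gaze hit test, and the gesture/voice input; their return values are hard-coded assumptions so the example runs on its own, and none of this is the patent's actual control flow.

```python
def acquire_eye_movement() -> tuple[float, float]:
    """Stub for S100: the eye tracker 120 would report a gaze point here."""
    return (700.0, 120.0)

def select_first_area(gaze: tuple[float, float]) -> int | None:
    """Stub for S200: hit-test the gaze point against the first functional areas."""
    return 1112 if gaze[0] >= 640 and gaze[1] < 360 else None

def acquire_sensory_input() -> int:
    """Stub for S300 input: a digit obtained from gesture or voice recognition."""
    return 2

def run_once() -> None:
    gaze = acquire_eye_movement()            # S100
    first_area = select_first_area(gaze)     # S200
    if first_area is None:
        return                               # gaze not on any functional area
    digit = acquire_sensory_input()          # S300
    print(f"first area {first_area}: executing function #{digit}")

if __name__ == "__main__":
    run_once()  # -> first area 1112: executing function #2
```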
FIG. 7 is a schematic flowchart of a user interface control method provided by another embodiment of the present application. As shown in FIG. 7, the user interface control method includes at least the following steps:
S310: apply gesture recognition technology to recognize the gesture action.
It can be understood that the gesture recognition module 130 applies gesture recognition technology to recognize the gesture action; for the specific recognition manner, please refer to FIG. 1 and FIG. 3 together, and details are not repeated here.
S320: recognize the gesture control signal according to the gesture action.
It can be understood that the gesture recognition module 130 recognizes the gesture control signal according to the gesture action; for the specific recognition manner, please refer to FIG. 1 and FIG. 3 together, and details are not repeated here.
S330: select the number according to the gesture control signal.
It can be understood that the processor 150 selects the number according to the gesture control signal recognized by the gesture recognition module 130; for the specific selection manner, please refer to FIG. 1 and FIG. 3 together, and details are not repeated here.
FIG. 8 is a schematic flowchart of a user interface control method provided by another embodiment of the present application. As shown in FIG. 8, the user interface control method includes at least the following steps:
S311: apply a voice recognition engine to recognize the voice information.
It can be understood that the voice recognition module 140 applies a voice recognition engine to recognize the voice information; for the specific recognition manner, please refer to FIG. 1 and FIG. 3 together, and details are not repeated here.
S321: obtain the voice control signal according to the voice information.
It can be understood that the voice recognition module 140 obtains the voice control signal according to the recognized voice information; for the specific manner, please refer to FIG. 1 and FIG. 3 together, and details are not repeated here.
S331: perform the corresponding operation according to the voice control signal.
It can be understood that the processor 150 performs the corresponding operation according to the voice control signal recognized by the voice recognition module 140; for the specific manner, please refer to FIG. 1 and FIG. 3 together, and details are not repeated here.
It can be understood that the user interface control method provided by the embodiments of the present application can also be applied to the automobile 200 shown in FIG. 2; for the manner in which the in-vehicle system 230 of the automobile 200 executes the user interface control method, please refer to FIG. 1 and FIG. 2 together with their related description, and details are not repeated here.
The embodiments of the present application have been described in detail above with reference to the accompanying drawings, but the present application is not limited to the above embodiments; within the scope of knowledge possessed by those of ordinary skill in the art, various changes can also be made without departing from the purpose of the present application. In addition, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW110147808A TWI796883B (en) | 2021-12-20 | 2021-12-20 | Method, electronic device, and user interface for user interface control |
Publications (2)
Publication Number | Publication Date |
---|---|
TWI796883B true TWI796883B (en) | 2023-03-21 |
TW202326387A TW202326387A (en) | 2023-07-01 |
Family
ID=86692466