CN116208742A - Signal processing method, intelligent cat eye and intelligent door - Google Patents
Signal processing method, intelligent cat eye and intelligent door Download PDFInfo
- Publication number
- CN116208742A CN116208742A CN202211436337.8A CN202211436337A CN116208742A CN 116208742 A CN116208742 A CN 116208742A CN 202211436337 A CN202211436337 A CN 202211436337A CN 116208742 A CN116208742 A CN 116208742A
- Authority
- CN
- China
- Prior art keywords
- video
- thread
- processing
- interaction
- cat eye
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- E—FIXED CONSTRUCTIONS
- E06—DOORS, WINDOWS, SHUTTERS, OR ROLLER BLINDS IN GENERAL; LADDERS
- E06B—FIXED OR MOVABLE CLOSURES FOR OPENINGS IN BUILDINGS, VEHICLES, FENCES OR LIKE ENCLOSURES IN GENERAL, e.g. DOORS, WINDOWS, BLINDS, GATES
- E06B7/00—Special arrangements or measures in connection with doors or windows
- E06B7/28—Other arrangements on doors or windows, e.g. door-plates, windows adapted to carry plants, hooks for window cleaners
- E06B7/30—Peep-holes; Devices for speaking through; Doors having windows
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Civil Engineering (AREA)
- Structural Engineering (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention provides a signal processing method for intelligent cat eye man-machine interaction, an intelligent cat eye, a computer readable storage medium and an intelligent door, wherein the signal processing method comprises the following steps: running or starting a service processing process, wherein the service processing process comprises a video processing thread, and the video processing thread is configured to acquire and process video acquired by a camera; running or starting a UI interaction process, wherein the UI interaction process comprises an interface display thread, and the interface display thread displays a preset image and/or a video acquired by a camera through a display interface; and transmitting information between the business processing process and the UI interaction process through the communication library, wherein the business processing process and the UI interaction process are mutually independent. The embodiment of the invention not only ensures that the intelligent cat eye can realize the functions of displaying video images and explaining user instructions, but also enables the business processing process and the UI interaction process to be mutually independent, is convenient for realizing targeted adjustment aiming at specific functions, and improves the stability of an intelligent cat eye system.
Description
Technical Field
The invention relates to the technical field of intelligent equipment, in particular to a signal processing method for intelligent cat eye man-machine interaction, an intelligent cat eye and an intelligent door.
Background
The intelligent cat eye is the intelligent important embodiment in the intelligent door, and common intelligent cat eye generally possesses the basic functions of obtaining video or image and displaying video or image, for example towards the camera of intelligent door outside and towards the inboard display screen of intelligent door. In order to simplify the structure of the intelligent door, the interface for displaying the video generally has the functions of displaying the state of the intelligent door and realizing user interaction, for example, the functions of displaying outdoor environment information according to various sensors, controlling to start a corresponding camera through key or screen touch, selecting video or image storage and playback, controlling unlocking of the intelligent door and the like.
However, a mature intelligent cat eye system is lacking at present, the existing system is too simple and lacks flexibility, wherein the process of displaying video images and the process of realizing man-machine interaction are not distinguished or deeply bundled, dynamic adjustment cannot be performed according to corresponding services, for example, an interactive interface of a display screen is upgraded to add functions or adapt hardware, but the function of displaying video images is not affected, in this case, the complete intelligent cat eye system can only be updated, the process is complex and the compatibility is low, so that the subsequent system updating and repairing of the intelligent cat eye are difficult to ensure, and the adjustment of the process of displaying video images is the same.
The matters in the background section are only those known to the inventors and do not, of course, represent prior art in the field.
Disclosure of Invention
Aiming at one or more defects in the prior art, the embodiment of the invention provides a signal processing method for intelligent cat eye man-machine interaction, which comprises the following steps:
running or starting a service processing process, wherein the service processing process comprises a video processing thread, and the video processing thread is configured to collect and process video acquired by a camera;
running or starting a U I (user interface) interaction process, wherein the U I interaction process comprises an interface display thread, and the interface display thread displays a preset image and/or a video acquired by a camera through a display interface; and
information is transferred between the business process and the U I interaction process through a communication library,
wherein the business processing process and U I interaction process are independent of each other.
According to one aspect of the invention, the U I interactive process further comprises an instruction processing thread receiving instructions issued by a user, the instruction processing thread being configured to identify the instructions issued by the user and pass to the business processing process via the communication library for processing.
According to one aspect of the invention, the business process further comprises a message processing thread configured to be able to acquire the status of the camera and the video processing thread and to send instructions to the video processing thread.
According to one aspect of the invention, the communication library includes a message lookup table that includes a mapping of messages I D (identities) and processing functions, through which user issued instructions and/or states of video processing threads are interpreted.
According to one aspect of the invention, the mapping of the message I D and the processing function is established by a hash algorithm.
According to one aspect of the invention, the signal processing method further comprises:
establishing a shared memory for the business processing process and the U I interaction process; the video processing thread processes the acquired video and stores the processed video in the shared memory for calling by the U I interaction process.
According to one aspect of the invention, wherein the business processing process further comprises an information storage thread, the information storage thread comprising:
initializing a shared memory and creating a data queue;
acquiring and obtaining a video by a camera, and storing the video in a data queue of a shared memory;
the data queue is updated.
According to one aspect of the invention, wherein the U I interaction process further comprises an information reading thread comprising:
acquiring a shared memory address;
acquiring the information data state in the shared memory;
video data is read.
According to one aspect of the present invention, the present invention also includes a smart cat eye comprising:
a camera;
a display screen; and
and the processor is in signal connection with the camera and the display screen and is configured to be capable of executing the signal processing method for intelligent cat eye man-machine interaction.
According to one aspect of the present invention, the present invention further includes a computer-readable storage medium including computer-executable instructions stored thereon that, when executed by a processor, implement a signal processing method for smart cat eye human-machine interaction as described above.
According to one aspect of the present invention, the present invention also includes a smart door comprising:
a door body;
the intelligent peephole is arranged on the door body.
Compared with the prior art, the embodiment of the invention provides a signal processing method for intelligent cat eye man-machine interaction, wherein a business processing process and a U I interaction process are independently operated, and information transmission between the business processing process and the U I interaction process is realized through a communication library, so that the intelligent cat eye can realize the functions of displaying video images and explaining user instructions, the business processing process and the U I interaction process can be mutually independent, the specific function can be conveniently adjusted, and the stability of an intelligent cat eye system is improved. The invention also includes an embodiment of a smart cat eye, a computer readable storage medium, and a smart door, wherein the smart cat eye performs signal processing using the method described above, and the computer readable storage medium is capable of implementing the method described above when executed by a processor, and the smart door uses the smart cat eye described above.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of a signal processing method for intelligent cat eye human-computer interaction according to an embodiment of the invention;
FIG. 2 is an architecture diagram of a business processing process and U I interaction process in one embodiment of the invention;
FIG. 3 is a schematic diagram of a process for communicating information by an instruction processing thread to a message processing thread in one embodiment of the invention;
FIG. 4 is a flow chart of a signal processing method including a process of establishing a shared memory according to an embodiment of the invention;
FIG. 5 is a flow diagram of a business process in one embodiment of the invention;
FIG. 6 is a flow diagram of a U I interaction process in one embodiment of the invention;
fig. 7 is a block diagram of a smart cat eye in accordance with one embodiment of the present invention.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be fixedly connected, detachably connected, or integrally connected, and may be mechanically connected, electrically connected, or may communicate with each other, for example; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature "above" or "below" a second feature may include both the first and second features being in direct contact, as well as the first and second features not being in direct contact but being in contact with each other through additional features therebetween. Moreover, a first feature being "above," "over" and "on" a second feature includes the first feature being directly above and obliquely above the second feature, or simply indicating that the first feature is higher in level than the second feature. The first feature being "under", "below" and "beneath" the second feature includes the first feature being directly above and obliquely above the second feature, or simply indicating that the first feature is less level than the second feature.
The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. In order to simplify the present disclosure, components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit the invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples, which are for the purpose of brevity and clarity, and which do not themselves indicate the relationship between the various embodiments and/or arrangements discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art will recognize the application of other processes and/or the use of other materials.
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Fig. 1 shows a specific flow of a signal processing method 100 for intelligent cat eye man-machine interaction according to an embodiment of the present invention, fig. 2 shows an architecture relationship between a business processing process and a U I interaction process according to an embodiment of the present invention, and is described below in conjunction with fig. 1 and fig. 2.
In the embodiment, the intelligent cat eye has a function of displaying the video image acquired by the camera in the display interface, as shown in fig. 1, in step S101, a service processing process is run or started, where the service processing process includes a video processing thread, and the video processing thread is used for acquiring and processing the video acquired by the camera. The video images acquired by the cameras in the intelligent cat eye can be preprocessed and displayed in the display interface, for example, the video images are cut in size, the resolution of the region of interest is optimized, the video images acquired by the cameras are integrated, and the like.
In step S102, a U I interaction procedure is run or initiated, wherein the U I interaction procedure includes an interface display thread for displaying a preset image and/or video acquired by the camera through the display interface. The preset image may be an image containing information such as instructions, trademarks, etc. as a user interface. The video acquired by the camera can be directly displayed in the display interface, or can be pre-processed in the service processing process and then displayed in the display interface through the U I interaction process.
According to different embodiments of the invention, when the function of the intelligent cat eye is to directly display the video acquired by the camera through the display interface, the video acquired by the camera can be directly called through the U I interaction interface and displayed on the display interface, and when the intelligent cat eye has the function of processing the video image acquired by the camera, the video image acquired by the camera can be called through the business processing process, and the processing is completed and then is called by the U I interaction process and displayed in the display interface.
In this embodiment, the function of displaying images on the display interface in the intelligent cat eye is commonly implemented through the service processing process and the U I interaction process, and the service processing process and the U I interaction process are independent of each other, and in step S103, the service processing process and the U I interaction process can implement information transfer through the communication library, but the operation process can be independently implemented. Preferably, the communication library is, for example, nng library (a high performance communication library), and nng library has mechanisms such as message queues, error retransmission, etc., and can be used for inter-process and intra-process communication. When one of the processes needs to be dynamically adjusted, for example, an updating function, an error repairing function and the like, the two processes can be operated respectively, mutual influence is avoided, a channel of a communication library is reserved, and information transmission can be achieved. The method of the embodiment improves the stability and usability of the intelligent cat eye system and provides a basis for updating the intelligent cat eye system.
According to a preferred embodiment of the present invention, as shown in fig. 2, the U I interaction process further comprises an instruction processing thread for receiving an instruction issued by a user, wherein the instruction processing thread is capable of recognizing the instruction issued by the user and passing the instruction to the service processing process for processing via the communication library. The instruction sent by the user in this embodiment may be received through a display interface, an entity key, a voice receiving device, etc., specifically determined according to hardware and software support. The instruction processing thread can identify instructions sent by a user, for example, the user inputs instructions to the hardware in a pressing mode, a gesture mode, a voice mode and the like, electric signal changes occur, the instruction processing thread can identify the electric signal changes, further the instructions sent by the user are identified, and further the instruction processing thread can be used for filtering signal noise generated by non-user instructions. In some embodiments of the present invention, the instruction processing thread is configured to identify whether the signal change is an instruction issued by a user, where the specific correspondence between the instruction issued by the user and the control of the smart cat eye or the smart door is completed by a service processing process.
As shown in fig. 2, in the preferred embodiment of the present invention, the service processing process further includes a message processing thread, where the message processing thread can obtain the states of the camera and the video processing thread, and can send an instruction to the video processing thread, for example, the message processing thread can obtain the on-off state, the focal length state, the coordination state of the plurality of cameras, and the like of the camera, and can also obtain the states of the video processing thread, for example, the video processing operation that is being executed or is about to be executed by the video processing thread, and corresponding information such as video. Meanwhile, the message processing thread can also send instructions to the video processing thread, for example, an instruction for performing edge clipping on video acquired by a camera is sent to the video processing thread.
Further in accordance with a preferred embodiment of the present invention, the communication library includes a message lookup table, wherein the message lookup table includes a mapping of messages I D and processing functions, and wherein the user issued instructions and/or the state of the video processing threads are interpreted by the message lookup table. Specifically, after the instruction sent by the user is identified by the instruction processing thread, a corresponding message I D is obtained, the message I D is converted into a corresponding processing function through a message comparison table in the communication library, and the corresponding processing function is sent to the camera or the video processing thread through the message processing thread, so as to control the camera and perform a corresponding processing process on the video image acquired by the camera. For example, after a user inputs an instruction for starting the camera through the hardware device and is identified by an instruction processing thread in the U I interaction process, the instruction for starting the camera is translated into a corresponding processing function through a message comparison table in a communication library, and the camera is controlled to be started according to the processing function through the message processing thread.
For the states of the camera and the video processing thread, a user can input an instruction for calling the states of the camera and the video processing thread, after the instruction is received by the instruction processing thread, a corresponding message I D is obtained and transmitted to the message processing thread through a communication library, the message I D is translated into a corresponding processing function in the transmission process, the states of the camera and the video processing thread are obtained through the message processing thread, and then the states are transmitted to the U I interaction process through the communication library and are displayed in a display interface.
According to the preferred embodiment of the invention, the corresponding messages I D are distributed to the instruction sent by the user and the states of the camera and the video processing thread received by the instruction processing thread, so as to ensure that the information is kept accurate in the process of transmission.
Specifically, in the preferred embodiment of the present invention, the mapping relationship between the message I D and the processing function is established by a hash algorithm, which can map binary values with arbitrary lengths to binary values with shorter fixed lengths, and the mapping relationship between the message I D and the processing function is established by the hash algorithm, so that the communication pressure of information transfer can be reduced, and quick searching and corresponding between the message I D and the processing function can be realized.
Fig. 3 shows a specific process 200 for transferring information from an instruction processing thread to a message processing thread according to a preferred embodiment of the present invention, wherein in step S201, the instruction processing thread and the message processing thread are run or started, reflecting that in the hardware level, the start of a smart cat eye, the start of a business processing process and a U I interaction process may be performed automatically.
In step S202, a message comparison table is initialized, a mapping between the message I D and the processing function is established, and in this step, a set value can be provided when the smart cat eye leaves the factory, or a user-defined message comparison table can be provided, for example, a user provides an input mode and a situation of an instruction, and a message I D is allocated, where an action or operation executed by the smart cat eye has a corresponding processing function, and the user-defined message comparison table can define a mapping relationship between the message I D and the processing function by the user. Wherein the mapping of the message I D and the processing function may be established by a hashing algorithm.
In step S203, after waiting for the user to issue an instruction, the intelligent cat eye is started, where the camera can be directly started to work, or can be in a closed state, and when the user needs to use, the intelligent cat eye is started by issuing the instruction. And receiving instructions sent by the user by utilizing hardware equipment capable of realizing user interaction in the intelligent cat eye, such as a touch screen, physical keys and other structures. In this step, the instruction processing thread receives the electrical signal change of the hardware device, and recognizes therefrom an instruction issued by the user, for example, an instruction for the user to press a physical key to turn on the camera, and after receiving the instruction, the instruction processing thread gives a message I D corresponding to the electrical signal change. For example, after the intelligent cat eye is started, the instruction processing thread continuously monitors the change of the electrical signal of the hardware device for realizing the user interaction, and in step S204, it is determined whether an instruction is received, and if the instruction is not received, the process returns to step S203, and the hardware device is continuously monitored. If an instruction is received, in step S205, the received instruction is parsed, specifically, according to the message lookup table in the communication library, a corresponding processing function is obtained according to the message I D given with the instruction, for example, the message I D is received by the message processing thread, and the message lookup table in the communication library is called, so as to obtain the processing function corresponding to the message I D.
Further, in this embodiment, the processing result may also be sent to the instruction processing thread through the communication library in step S206, and the feedback is returned to step S203 to confirm that the instruction has been executed or that an error has occurred and has not been executed. When the instruction is not executed, it may be further determined whether to resend the message I D, ensure instruction execution, or send an error report.
Fig. 4 shows a specific flow of a signal processing method 300 according to a preferred embodiment of the present invention, including a process of establishing a shared memory, which is described below in conjunction with fig. 2 and 3.
As shown in fig. 2, the architecture of the service processing process and the U I interaction process further includes a shared memory, where both the service processing process and the U I interaction process can access the shared memory, and in the signal processing method 300, steps S302, S303, and S304 are substantially the same as steps S101, S102, and S103 in the signal processing method 100, which are not described herein again. In step S301, a shared memory is established for the service processing process and the U I interaction process, the video processing thread processes the acquired video and stores the processed video in the shared memory for the U I interaction process to call, the shared memory can be used for data transmission between processes, and for larger data such as video pictures, efficiency can be improved through the transmission of the shared memory. For example, a part of area is independently divided in the memory of the intelligent cat eye system to be used as a shared memory, the business processing process and the U I interaction process can be independently operated by the respective operation memories, the business processing process stores the video image in the shared memory area after processing the video image, and when the part of video image needs to be displayed, the U I interaction process calls the video image in the shared memory area, so that the data transmission times are reduced, and the timeliness of data transmission is ensured.
For example, for android application layer, a piece of shared memory can be described by memryfi e, and in different embodiments of the present invention, the shared memory is established in a corresponding manner according to the system of the intelligent cat eye, so as to realize data transfer between the service processing process and the U I interaction process.
The step of establishing the shared memory in this embodiment may be performed before or after other steps, and is not limited to the sequence shown in fig. 4, and the step of initializing the shared memory according to the preferred embodiment of the present invention includes defining a portion of a memory area with a fixed size, for example. Meanwhile, a service processing process, such as a video processing thread in the service processing process, can be used for obtaining the address of the shared memory as a storage address after the video processing is completed, and the U I interaction process can read the address of the shared memory, for example, the information of the address of the shared memory is transferred to the U I interaction process through a communication library. When the video image stored in the shared memory needs to be displayed on the display interface, the video image in the shared memory is called through U I interaction process, for example, an interface display thread in the interaction process, and is displayed on the display interface. In particular, the processing, saving, delivering and displaying of video images therein is described in connection with fig. 5 and 6.
According to a preferred embodiment of the present invention, the service processing process further includes an information storage thread, which is used for storing the video acquired by the camera in the shared memory, and the specific process is as shown in fig. 5, in the information storage thread, first in step S401, the shared memory is initialized, and a data queue is created, where the data queue is used for sorting the data in the shared memory, for example, the video images acquired by different cameras are classified according to the corresponding cameras, and so on. The data queue may include information such as number, data size, data type, acquisition time, etc.
In step S402, video is acquired by the camera and stored in the shared memory. The video processing thread in the service processing process can read the video from the shared memory; for example, after receiving an instruction to process the video, the video processing thread reads the video from the shared memory and writes it back once processing is complete. In step S403, the data queue is updated: after the information storage thread stores the video in the shared memory, the corresponding entry in the data queue is updated and it is recorded whether the video is ready.
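Steps S401 to S403 of the information storage thread can be sketched as below. The field names (`number`, `size`, `type`, `acquired_at`, `ready`) are assumptions matching the queue contents described above, and the byte buffer merely stands in for the initialized shared memory; this is not the patent's code.

```python
# Sketch of the information-storage thread's bookkeeping:
# S401 - initialize shared memory and create the data queue;
# S402 - store a captured frame in shared memory;
# S403 - update the queue entry and mark the video as ready.
import time
from collections import deque

data_queue = deque()                 # S401: create the data queue
shared_buffer = bytearray(1024)      # stand-in for the initialized shared memory

def store_frame(number, payload, frame_type="rgb"):
    """Copy a captured frame into shared memory (S402), then append a queue
    entry describing it and flag it ready for the UI side (S403)."""
    shared_buffer[:len(payload)] = payload
    entry = {
        "number": number,
        "size": len(payload),
        "type": frame_type,
        "acquired_at": time.time(),
        "ready": True,               # S403: video is ready to be displayed
    }
    data_queue.append(entry)
    return entry

entry = store_frame(1, b"\x10\x20\x30")
```

The `ready` flag is what the information reading thread in the UI interaction process later checks before pulling the bytes out of the shared region.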
Fig. 6 shows the specific flow of the information reading thread in the UI interaction process according to the preferred embodiment of the present invention, where the UI interaction process includes an information reading thread. Specifically, in step S501, the information reading thread reads the address of the shared memory; this address is obtained when the shared memory is established, and the information reading thread receives it, for example, through the communication library, so that it can access the data stored in the shared memory.
In step S502, the state of the data in the shared memory is obtained; for example, the information reading thread receives the data queue of the shared memory, whose entries include information such as the data state of each video. After the data state in the shared memory has been acquired, the corresponding video data is read in step S503 according to the instruction issued by the user, and the interface display thread may then be executed to show the read video data on the display interface.
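The reading side (steps S501 to S503) can be sketched as follows, complementing the storage-thread sketch: attach to the shared region, inspect the newest queue entry's state, and read the video bytes only when they are marked ready. The names (`read_frame`, `ready`) are illustrative assumptions, not identifiers from the patent.

```python
# Sketch of the information-reading thread in the UI interaction process:
# S501 - attach to the shared region via its address/name;
# S502 - obtain the data state from the shared memory's data queue;
# S503 - read the video data for the interface display thread.
shared_buffer = bytearray(b"\xAA\xBB\xCC" + bytes(1021))  # S501: attached region
data_queue = [{"number": 1, "size": 3, "ready": True}]    # state written by storage thread

def read_frame(queue, buffer):
    """Return the latest frame's bytes if its state is ready, else None."""
    if not queue:
        return None
    entry = queue[-1]
    if not entry["ready"]:                 # S502: data not yet prepared
        return None
    return bytes(buffer[:entry["size"]])   # S503: read for interface display

frame = read_frame(data_queue, shared_buffer)
```

Because the storage and reading threads live in different processes, a production version would also need synchronization (e.g. a lock or semaphore) around the queue; that detail is omitted here for brevity.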
As shown in fig. 7, the present invention further includes an embodiment of the smart cat eye 1. The smart cat eye 1 includes a camera 10, a display screen 20 and a processor 30, where the processor 30 is in signal connection with the camera and the display screen and is capable of executing the aforementioned signal processing method for man-machine interaction of the smart cat eye. Further, the present invention also includes an embodiment of an intelligent door. The intelligent door includes a door body and the aforementioned smart cat eye 1 installed on the door body: specifically, the camera 10 is installed on the side of the door body facing outdoors, the display screen 20 is installed on the side of the door body facing indoors, and the processor 30 is located inside the door body, executes the aforementioned signal processing method, and displays the video signal acquired by the camera 10 on the display screen 20.
The present invention also includes an embodiment of a computer-readable storage medium comprising computer-executable instructions stored thereon that, when executed by a processor, implement the signal processing method for smart cat eye human-machine interaction described above.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the present invention, and the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of the technical features. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
Claims (11)
1. A signal processing method for intelligent cat eye human-computer interaction, comprising:
running or starting a service processing process, wherein the service processing process comprises a video processing thread, and the video processing thread is configured to acquire and process the video captured by a camera;
running or starting a UI interaction process, wherein the UI interaction process comprises an interface display thread, and the interface display thread displays a preset image and/or a video acquired by a camera through a display interface; and
information is transferred between the business process and the UI interaction process through a communication library,
wherein the business processing process and the UI interaction process are independent from each other.
2. The signal processing method of claim 1, wherein the UI interaction process further comprises an instruction processing thread that receives instructions issued by a user, the instruction processing thread being configured to identify the instructions issued by the user and pass them to the business processing process for processing via the communication library.
3. The signal processing method of claim 2, wherein the business processing process further comprises a message processing thread configured to obtain the status of the camera and the video processing thread and to send instructions to the video processing thread.
4. The signal processing method of claim 3, wherein the communication library comprises a message look-up table comprising a mapping of message IDs to processing functions, the instructions issued by the user and/or the status of the video processing thread being interpreted through the message look-up table.
5. The signal processing method of claim 4, wherein the mapping of the message IDs and processing functions is established by a hash algorithm.
6. The signal processing method of claim 1, further comprising:
establishing a shared memory for the business processing process and the UI interaction process; and the video processing thread processes the acquired video and stores the processed video in the shared memory so as to be called by the UI interaction process.
7. The signal processing method of claim 6, wherein the business processing process further comprises an information storage thread, the information storage thread performing:
initializing a shared memory and creating a data queue;
acquiring a video by a camera and storing the video in the shared memory; and
the data queue is updated.
8. The signal processing method of claim 7, wherein the UI interaction process further comprises an information reading thread comprising:
acquiring a shared memory address;
acquiring the information data state in the shared memory;
video data is read.
9. A smart cat eye comprising:
a camera;
a display screen; and
a processor in signal communication with the camera and the display screen and configured to perform the signal processing method for intelligent cat eye human-machine interaction of any one of claims 1-8.
10. A computer readable storage medium comprising computer executable instructions stored thereon, which when executed by a processor implement the signal processing method for smart cat eye human machine interaction of any one of claims 1-8.
11. A smart door, comprising:
a door body;
the smart cat eye of claim 9, wherein the smart cat eye is mounted to the door body.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211436337.8A CN116208742A (en) | 2022-11-16 | 2022-11-16 | Signal processing method, intelligent cat eye and intelligent door |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116208742A true CN116208742A (en) | 2023-06-02 |
Family
ID=86515281
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211436337.8A Pending CN116208742A (en) | 2022-11-16 | 2022-11-16 | Signal processing method, intelligent cat eye and intelligent door |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116208742A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110888615B (en) | Multi-input equipment interaction method, device and medium for double-screen different display of Android system | |
CN102223436B (en) | Mobile terminal and controlling method thereof | |
CN116360725B (en) | Display interaction system, display method and device | |
EP2478434A1 (en) | Method and apparatus for providing application interface portions on peripheral computer devices | |
EP4246957A1 (en) | Photographing method, system, and electronic device | |
CN110413383B (en) | Event processing method, device, terminal and storage medium | |
KR20150051640A (en) | Method and apparatus for checking status of message in a electronic device | |
CN113628304B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN111526232B (en) | Camera control method based on double-screen terminal and double-screen terminal | |
CN110991368A (en) | Camera scene recognition method and related device | |
CN114489350B (en) | Input method calling method and related equipment | |
CN116208742A (en) | Signal processing method, intelligent cat eye and intelligent door | |
CN114115673A (en) | Control method of vehicle-mounted screen | |
CN116866545B (en) | Mapping relation adjustment method, equipment and storage medium of camera module | |
CN104754195A (en) | Electronic Apparatus And Method For Controlling The Same | |
EP2624176A1 (en) | Menu assembly method, menu assembly system and terminal | |
WO2024139364A1 (en) | Connection method for bluetooth hearing aid, and electronic device | |
CN115981576B (en) | Method for sharing data, electronic device and storage medium | |
CN115686338B (en) | Screen splitting method and electronic equipment | |
CN214851559U (en) | Microscope intelligent terminal system | |
KR100265077B1 (en) | Method and apparatus of generating interactive services which uses short message service | |
CN115002821B (en) | Call state monitoring method, device, equipment and storage medium | |
CN115334239B (en) | Front camera and rear camera photographing fusion method, terminal equipment and storage medium | |
CN111211964B (en) | Instruction transmission method, device, storage medium and server | |
CN111479075B (en) | Photographing terminal and image processing method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||