US20150235540A1 - Voice alert methods and systems - Google Patents
Voice alert methods and systems
- Publication number
- US20150235540A1 (application US14/633,709)
- Authority
- US
- United States
- Prior art keywords
- remote electronic
- digitized voice
- sensor
- network
- electronic device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B25/00—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
- G08B25/01—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
- G08B25/012—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium using recorded signals, e.g. speech
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B25/00—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
- G08B25/01—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
- G08B25/10—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium using wireless transmission systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B7/00—Radio transmission systems, i.e. using radiation field
- H04B7/14—Relay systems
- H04B7/15—Active relay systems
- H04B7/185—Space-based or airborne stations; Stations for satellite systems
- H04B7/18502—Airborne stations
- H04B7/18504—Aircraft used as relay or high altitude atmospheric platform
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H20/00—Arrangements for broadcast or for distribution combined with broadcast
- H04H20/38—Arrangements for distribution where lower stations, e.g. receivers, interact with the broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/10—Network architectures or network communication protocols for network security for controlling access to devices or network resources
- H04L63/105—Multiple levels of security
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/2866—Architectures; Arrangements
- H04L67/30—Profiles
- H04L67/306—User profiles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/55—Push-based network services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/12—Messaging; Mailboxes; Announcements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/12—Messaging; Mailboxes; Announcements
- H04W4/14—Short messaging services, e.g. short message services [SMS] or unstructured supplementary service data [USSD]
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M11/00—Telephonic communication systems specially adapted for combination with other electrical systems
- H04M11/04—Telephonic communication systems specially adapted for combination with other electrical systems with alarm systems, e.g. fire, police or burglar alarm systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W84/00—Network topologies
- H04W84/02—Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
- H04W84/04—Large scale networks; Deep hierarchical networks
- H04W84/042—Public Land Mobile systems, e.g. cellular systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W84/00—Network topologies
- H04W84/02—Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
- H04W84/10—Small scale networks; Flat hierarchical networks
- H04W84/12—WLAN [Wireless Local Area Networks]
Definitions
- Embodiments are generally related to the provision of instant voice alerts sent automatically to remote mobile electronic devices such as cellular telephones, computers, Smartphones, tablet computing devices, televisions, remote electronic devices in automobiles, etc.
- Embodiments are also related to wireless communications networks such as cellular telephone networks and wireless LAN type networks.
- Embodiments are additionally related to emergency services and security monitoring of residences, businesses, and government and military facilities.
- It is one aspect of the disclosed embodiments to provide for the transmission of instant voice alerts automatically to remote electronic devices such as, for example, cellular telephones, computers, Smartphones, tablet computing devices, televisions, remote electronic devices in automobiles, etc.
- an activity can be detected utilizing one or more sensors.
- a text message indicative of the activity can be generated and converted into a digitized voice alert.
- the digitized voice alert can then be transmitted through a network for broadcast to one or more remote electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the one or more remote electronic devices.
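The detect-activity, generate-text, convert-to-voice, broadcast pipeline described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patented implementation: `SensorEvent`, `text_to_digitized_voice`, and `broadcast` are hypothetical stand-ins for a real sensor driver, a text-to-speech engine, and a network transport.

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    sensor_id: str
    description: str      # e.g. "front door opened"

def generate_text_message(event: SensorEvent) -> str:
    """Step 1: build a text message indicative of the detected activity."""
    return f"Alert from {event.sensor_id}: {event.description}"

def text_to_digitized_voice(text: str) -> bytes:
    """Step 2: convert the text message into a digitized voice alert.
    Placeholder for a TTS engine; here we simply encode the text."""
    return text.encode("utf-8")   # stand-in for synthesized audio

def broadcast(voice_alert: bytes, devices: list[str]) -> dict[str, bytes]:
    """Step 3: transmit the digitized voice alert to each remote device,
    which would then auto-announce it through its speaker."""
    return {device: voice_alert for device in devices}

event = SensorEvent("door-1", "front door opened while armed")
message = generate_text_message(event)
delivered = broadcast(text_to_digitized_voice(message), ["phone-A", "tablet-B"])
```

In a deployed system the broadcast step would ride on a cellular or WLAN network rather than returning a dictionary; the sketch only fixes the ordering of the three steps.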
- an “activity” as utilized herein may be, for example, any number of different actions or events.
- a sensor can detect an activity or condition, such as a door entry security sensor that may detect that a door has opened while the occupants of the home are away.
- the opening of the door would constitute an “activity”.
- a live utterance, such as a live speech given by, for example, the President of the United States, could also constitute an “activity” as discussed in more detail herein.
- the digitized voice message can be instantly and automatically broadcast through the one or more remote electronic devices in one or more languages based on a language setting in a user profile.
- the one or more languages can be pre-selected in the user profile (e.g., during set-up of the user profile or during later changes to the user's profile).
- the user profile can be established as a user preference via a server during a set up (or at a later time) of the one or more remote electronic devices.
- the user profile can be established as a user preference via an intelligent router during a set up of the one or more remote electronic devices.
- the one or more languages can be selected from a plurality of different languages.
- the digitized voice message can be converted into the particular language specified by the remote electronic device(s).
- digitized voice message can be converted into more than one language from among a plurality of languages for broadcast of the digitized voice alert in consecutively different languages through the one or more remote electronic devices.
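As a rough sketch of the profile-driven language handling described above, the following illustrative code returns the alert once per pre-selected language, in profile order, so a device can announce each version consecutively. The `PROFILE` schema and the `TRANSLATIONS` table are invented for the example; a real system would consult a translation service and a text-to-speech engine.

```python
# user profile established at device set-up (hypothetical schema)
PROFILE = {"device-7": {"languages": ["en", "es"]}}

# toy translation table standing in for a translation service
TRANSLATIONS = {
    ("Smoke detected", "en"): "Smoke detected",
    ("Smoke detected", "es"): "Humo detectado",
}

def render_alert(device_id: str, text: str) -> list[str]:
    """Return the alert once per pre-selected language, in profile order,
    falling back to the original text when no translation is available."""
    langs = PROFILE.get(device_id, {}).get("languages", ["en"])
    return [TRANSLATIONS.get((text, lang), text) for lang in langs]

playlist = render_alert("device-7", "Smoke detected")
```

A device with no profile entry simply receives the alert in the default language.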
- a wireless data network can be provided, which includes one or more sensors that communicate with the wireless data network within a location (e.g., a residence, building, business, government facility, military facility, etc.).
- An activity/condition can be detected utilizing one or more sensors associated with the location.
- a text message indicative of the activity can be generated and converted into a digitized voice alert.
- the digitized voice alert can be transmitted through a network for broadcast to one or more electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the remote electronic devices (e.g., a speaker associated with or integrated with such devices such as the speaker in a mobile phone).
- Methods, systems, and processor-readable media are also disclosed for providing emergency voice alerts to wireless hand held device users in a specified region.
- An emergency situation can be detected affecting a specified region and requiring notification of wireless hand held device users in the specified region.
- a text message indicative of the emergency situation can be generated and converted into a digitized voice alert.
- the digitized voice alert can be transmitted through specific towers of a cellular communications network in the specified region for distribution of an automatic audio announcement of the digitized voice alert to all remote electronic devices in communication with the specific towers in the specified region.
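The region-targeted delivery described above amounts to filtering the carrier's towers against the affected area before rebroadcasting, so that only devices attached to the selected towers receive the alert. The tower coordinates and bounding-box geometry below are invented for this sketch; a real deployment would use the carrier's cell-site database and proper geofencing.

```python
# hypothetical tower locations: tower id -> (latitude, longitude)
TOWERS = {
    "tower-1": (40.1, -74.2),
    "tower-2": (40.5, -74.0),
    "tower-3": (42.0, -71.1),   # outside the affected region
}

def towers_in_region(bbox) -> list[str]:
    """Select towers inside bbox = (lat_min, lat_max, lon_min, lon_max)."""
    lat_min, lat_max, lon_min, lon_max = bbox
    return [tid for tid, (lat, lon) in TOWERS.items()
            if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max]

def broadcast_to_region(voice_alert: bytes, bbox) -> list[str]:
    """Push the digitized voice alert through each in-region tower; each
    selected tower would relay it to every attached device."""
    selected = towers_in_region(bbox)
    return selected

reached = broadcast_to_region(b"evacuate now", (40.0, 41.0, -75.0, -73.0))
```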
- Methods, systems, and processor-readable media are also disclosed for providing an instant voice announcement automatically to remote electronic devices.
- a live announcement (e.g., an announcement from the city Mayor or the President of the United States) can be automatically converted into a digitized voice message indicative of the live announcement.
- the digitized voice message can be associated with a text message to be transmitted through a network to a plurality of remote electronic devices that communicate with the network.
- the text message with the digitized voice message can be transmitted through a network (e.g., a cellular communications network, the Internet, etc.) for broadcast to the plurality of remote electronic devices; upon receipt of the text message at one or more of those devices, the digitized voice message is automatically played back through the receiving device(s).
- a current call taking place at one or more of the remote electronic devices can be automatically interrupted in order to push the text message with the digitized voice message through to each of the plurality of remote electronic devices for automatic playing of the digitized voice message via a remote electronic device.
- operations can be implemented for automatically opening the digitized voice message, in response to receipt of the text message with the digitized voice message at the one or more remote electronic devices among the plurality of remote electronic devices, and automatically playing the digitized voice message through a speaker associated with the one or more remote electronic devices in response to automatically opening the digitized voice message.
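A sketch of the receiving device's auto-open/auto-play behavior described above might look like the following. `play_through_speaker` and the message schema are hypothetical stand-ins for the platform's audio API and messaging format.

```python
played = []   # records what was rendered (stand-in for actual audio output)

def play_through_speaker(audio: bytes) -> None:
    """Stand-in for the device audio API: a real device would render audio."""
    played.append(audio)

def on_message_received(message: dict) -> bool:
    """On arrival of a text message carrying a digitized voice payload,
    automatically open the payload and play it without user interaction."""
    voice = message.get("voice_payload")
    if voice is None:
        return False          # ordinary text message: no auto-play
    play_through_speaker(voice)
    return True

handled = on_message_received({"text": "Tornado warning",
                               "voice_payload": b"<audio>"})
```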
- the identity of the speaker associated with the live announcement can be authenticated prior to automatically converting the live announcement into the digitized voice message indicative of the live announcement.
- the digitized voice message can be broadcast through the one or more remote electronic devices in one or more languages based on a language setting in a user profile.
- one or more languages can be pre-selected in the user profile.
- the user profile can be established in some embodiments as a user preference via a server during a setup of one or more of the remote electronic devices.
- the user profile can be established as a user preference via an intelligent router during a setup of the one or more remote electronic devices.
- one or more languages can be selected from a plurality of different languages.
- the digitized voice message (e.g., an announcement from the President) can be converted into the selected language or languages prior to broadcast.
- the civil communications hub can allow users to forward messages to other recipients and the forwarded messages can include sending user annotations together with captured data sent by authorities.
- FIG. 1 illustrates a first exemplary schematic/flow chart in accordance with an embodiment
- FIG. 2 illustrates a second exemplary schematic/flow chart in accordance with an embodiment
- FIGS. 3(a) to 3(d) illustrate exemplary screen shots of a user interface in accordance with one or more embodiments
- FIG. 4 illustrates a high-level flow chart of operations depicting logical operations of a method for automatically providing instant voice alerts to remote electronic devices, in accordance with an embodiment
- FIG. 5 illustrates a high-level flow chart of operations depicting logical operations of a method for automatically providing instant voice alerts to remote electronic devices regarding incidents detected by a security system, in accordance with an embodiment
- FIG. 6 illustrates a high-level flow chart of operations depicting logical operations of a method for automatically providing instant emergency voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment
- FIG. 7 illustrates a block diagram of a system for automatically providing instant voice alerts to remote electronic devices, in accordance with an embodiment
- FIG. 8 illustrates a block diagram of a system for automatically providing instant voice alerts to remote electronic devices from incidents detected within a security system, in accordance with an embodiment
- FIG. 9 illustrates a block diagram of a system for automatically providing emergency instant voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment
- FIG. 10 illustrates a block diagram of a processor-readable medium that can store code representing instructions to cause a processor to perform a process to, for example, provide automatic and instant voice alerts to remote electronic devices, in accordance with an embodiment
- FIG. 11 illustrates a block diagram of a processor-readable medium that can store code representing instructions to cause a processor to, for example, perform a process to automatically provide instant voice alerts to remote electronic devices from incidents detected within a security system, in accordance with an embodiment
- FIG. 12 illustrates a block diagram of a processor-readable medium that can store code representing instructions to cause a processor to perform, for example, a process to automatically provide instant emergency voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment
- FIG. 13 illustrates a block diagram of a system for providing automatic and instant voice alerts through a network, in accordance with an embodiment
- FIG. 14 illustrates a high-level flow chart of logical operations for providing automatic and instant digitized voice alerts, and converting such digitized voice alerts into more than one language for broadcast of the digitized voice alert in consecutively different languages through one or more remote electronic devices, in accordance with an embodiment
- FIG. 15 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment
- FIG. 16 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment
- FIG. 17 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment
- FIG. 18 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment
- FIG. 19 illustrates a block diagram of a system for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment
- FIG. 20 illustrates a block diagram of a processor-readable medium for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment
- FIG. 21 illustrates an exemplary data processing system which may be included in devices operating in accordance with some embodiments
- FIG. 22 illustrates an exemplary environment for operations and devices according to some embodiments of the present invention
- FIG. 23 illustrates a block diagram of an unmanned vehicle system for monitoring using sensors and providing an instant voice announcement from the unmanned vehicle automatically to remote electronic devices, in accordance with an embodiment
- FIG. 24 illustrates a block diagram of an unmanned vehicle system for providing data in the form of instant voice announcements based on a condition from the unmanned vehicle automatically to remote electronic devices, in accordance with an embodiment
- FIG. 25 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement, based on a sensed condition, automatically to remote electronic devices, in accordance with an embodiment
- FIG. 26 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement, based on a sensed condition, automatically to remote electronic devices, in accordance with an embodiment
- FIG. 27 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement, in accordance with an embodiment.
- the present invention can be embodied as a method, system, and/or a processor-readable medium. Accordingly, the embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, all generally referred to herein as a “circuit” or “module.” Furthermore, the embodiments may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer-readable medium or processor-readable medium may be utilized including, for example, but not limited to, hard disks, USB flash drives, DVDs, CD-ROMs, optical storage devices, magnetic storage devices, etc.
- Computer program code for carrying out operations of the disclosed embodiments may be written in an object oriented programming language (e.g., Java, C++, etc.).
- the computer program code, however, for carrying out operations of the disclosed embodiments may also be written in conventional procedural programming languages such as the “C” programming language, HTML, XML, etc., or in a visually oriented programming environment such as, for example, Visual Basic.
- the program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer.
- the remote computer may be connected to a user's computer through a local area network (LAN) or a wide area network (WAN), wireless data network, e.g., WiFi, Wimax, 802.xx, and cellular network or the connection may be made to an external computer via most third party supported networks (for example, through the Internet using an Internet Service Provider).
- These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the block or blocks.
- the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block or blocks.
- FIG. 1 illustrates an overview of a system 200 according to embodiments of the present invention.
- System 200 broadly includes a server 205 or central computer, a web service tool 210, a runtime tool 215, a voice recognition engine 220, a text-to-speech engine 225, and one or more databases 230.
- the server 205 may include each of the web service tool 210, runtime tool 215, voice recognition engine 220, text-to-speech engine 225, and one or more databases 230.
- alternatively, one or more of the web service tool 210, runtime tool 215, voice recognition engine 220, text-to-speech engine 225, and one or more databases 230 may be remote and in communication with the server 205 or central computer.
- server refers generally to one of three possible implementations or combinations thereof.
- the server can be a computer program running as a service to serve the needs or requests of other programs (referred to in this context as “clients”) which may or may not be running on the same computer.
- the server can be a physical computer dedicated to running one or more such services to serve the needs of programs running on other computers on the same network.
- a server can be a software/hardware system (i.e., a software service running on a dedicated computer) such as a database server, file server, mail server, enterprise server, print server, etc.
- the server can be a program that operates as a socket listener.
- a server can be a host that is deployed to execute one or more such programs.
- the server can be a server computer implemented as a single computer or a series of computers that link other computers or electronic devices together.
- Such a server implementation can provide essential services across a network, either to private users inside a large organization (e.g., Intranet) or to public users via the internet. For example, when one enters a query in a search engine, the query is sent from a user's computer over the internet to the servers that store all the relevant web pages. The results are sent back by the server to the user's computer.
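The "socket listener" sense of a server above can be demonstrated with a few lines of standard-library code: one program accepts a connection, receives a query, and returns a result, much like the search-engine example. The reply format and loopback setup are invented for the demo.

```python
import socket
import threading

def serve_once(server_sock: socket.socket) -> None:
    """Accept one client connection, read its query, answer, and close."""
    conn, _addr = server_sock.accept()
    with conn:
        query = conn.recv(1024)
        conn.sendall(b"results for: " + query)   # answer the client's query

# the listener: bind to an ephemeral loopback port and accept in a thread
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# the client: may run on the same computer or elsewhere on the network
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"voice alerts")
reply = client.recv(1024)
client.close()
server.close()
```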
- the server 205 can communicate with one or more substantially real-time services 235 operated by any number of entities such as, for example, security companies (e.g., Sonitrol, Brinks, etc.) or government agencies (e.g., U.S. Department of Homeland Security, government contractors, etc.) operating, for example, particular web sites.
- the services or informational feeds 235 may include websites offered by government agencies such as the Homeland Security Department, local 911 organizations, private companies or non-profit agencies, FEMA (Federal Emergency Management Agency), and so forth. As shown in FIG. 1, these services can provide information via, for example, Feed 1, Feed 2, Feed 3, and so forth.
- Feed 1 may provide a series of emergency announcements.
- Feed 2 may provide, for example, information related to construction on highways in a particular geographical region, whereas Feed 3 may provide updated weather information in a particular area.
- a user 240 can initially make a request 242 for specific and/or general voice alerts (e.g., text to voice) and/or other information via a remote electronic device such as a smartphone 199, 201, a tablet 202, a television 203, or an automobile Bluetooth® type system 204.
- the user can make the request 242 in a text format guided by prompts or a template displayed on, for example, a display of smartphone 199, 201, tablet 202, etc.
- FIGS. 3(a) to 3(d) illustrate exemplary screen shots of such prompts.
- FIG. 3(a) depicts a home screen shot 105 comprising a list of topical icons from which the user may select using various user interfaces including touch screen display, trackball, buttons, and the like.
- Five selectable icons 106, 107, 108, 109, and 110 are shown in FIG. 3(a).
- a user can select one of the icons 106 , 107 , 108 , 109 , and 110 . If a user selects icon 106 , for example, the user will tap into an emergency informational feed. The user would then be taken to other screens which would allow a user to set up an emergency informational feed that is ultimately fed to his or her device (e.g., Smartphones 199 , 201 , tablet 202 , automobile 204 , etc.) and provided according to particular preselected criteria in the form of text-to-voice informational emergency announcements.
- if a user selects icon 107 , the user will tap into a weather informational feed that the user preselects and is again provided with particular voice alerts (e.g., text-to-voice) regarding important weather announcements.
- Road condition voice alerts can also be provided by selecting, for example, icon 108 .
- a user can additionally configure text-to-voice alerts with respect to his or her business or home, as shown by selectable icons 109 and 110 .
- FIG. 3( b ) depicts a residential screen shot 115 responsive to the user selecting “Home” in accordance with an embodiment.
- the user would see next the screen shot 115 and one or more icons 116 , 117 , 118 , and 119 , respectively labeled, for example, Sensor 1 , Sensor 2 , Sensor 3 , and Sensor 4 .
- Such sensor icons are associated with, for example, sensors (e.g., security/surveillance sensors, smoke detectors, fire detectors, carbon monoxide detectors, energy usage monitoring, door or window opening sensors, etc.) located in, for example, a residence of a user.
- FIG. 3( c ) depicts a screen shot 120 that includes example icons 121 , 122 , and 123 .
- the user can select particular conditions to monitor in the house. For example, selection of condition 1 may be the temperature inside the house or a particular zone of the house.
- Condition 2 may be, for example, energy usage monitored by an energy usage sensor in the house. The user may also set how often the user wishes to receive updates.
- FIG. 3( d ) depicts a screen shot 125 responsive to a user selecting, for example, an update (i.e., icon 123 in FIG. 3( c )).
- the screen shot 125 depicts available time frames 126 for which the user may receive substantially real-time alerts.
- a user can select how often the substantially real-time alerts or other informational alerts are received.
- the user may make a live voice request for specific voice alert information.
- a voice recognition engine 220 is responsible for converting a live voice or verbal command or input into text.
- the text may be in the form of XML or another appropriate language.
- the text can be a proprietary language.
- the XML or other programming or mark-up language can provide a communications protocol between the user and the server 205 , namely the web service tool 210 .
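- By way of a non-limiting sketch, such an XML request payload might resemble the following; the element and attribute names here are illustrative assumptions and are not part of the disclosed protocol:

```python
# Hypothetical XML request payload sent from a remote electronic device
# to the web service tool 210; names are assumptions for illustration.
import xml.etree.ElementTree as ET

def build_alert_request(device_id: str, alert_type: str, interval_minutes: int) -> str:
    """Serialize a voice-alert request as XML for transmission to the server 205."""
    root = ET.Element("alertRequest", attrib={"deviceId": device_id})
    ET.SubElement(root, "type").text = alert_type
    ET.SubElement(root, "intervalMinutes").text = str(interval_minutes)
    return ET.tostring(root, encoding="unicode")

print(build_alert_request("device-199", "weather", 30))
```

The web service tool 210 could parse such a payload with any standard XML parser before authenticating the request.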
- the web service tool 210 can act as the gate keeper for the system 200 and authenticates the request 244 . This authentication process can determine whether or not the request emanates from a device registered or otherwise permitted to make the request.
- the user may need to input a pin or code, which would then be authenticated by the web service tool 210 . If the request is not authenticated, an error message 246 can be transmitted to the user 240 via the device. Optionally, instructions on remedying the underlying basis for the error response can also be transmitted to the device.
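- The authentication step described above can be sketched as follows; the device registry, PIN scheme, and error texts are hypothetical placeholders for whatever mechanism the web service tool 210 actually employs:

```python
# Sketch of the gatekeeper check performed by the web service tool 210.
# The registry structure and error messages are assumptions.
REGISTERED_DEVICES = {"device-199": "4821", "device-202": "7103"}  # device id -> PIN

def authenticate(device_id: str, pin: str):
    """Return (ok, message); ok is False, with an error message suitable
    for transmission back to the device, if the request does not emanate
    from a registered device or the PIN does not match."""
    if device_id not in REGISTERED_DEVICES:
        return False, "Error: device is not registered. Register the device before requesting alerts."
    if REGISTERED_DEVICES[device_id] != pin:
        return False, "Error: incorrect PIN. Re-enter the PIN supplied at registration."
    return True, "authenticated"

print(authenticate("device-199", "4821"))
```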
- the request type can be checked (e.g., text or voice/verbal 248 ). If verbal, the web service tool 210 can transmit the live voice request to the voice recognition engine 220 , which is configured to convert the voice request into a text request 250 .
- the voice request can be saved into an audio file prior to being serviced by the voice recognition engine 220 .
- voice recognition engines, including proprietary engines, are suitable for the embodiments discussed herein. For example, a live voice or verbal request in the form "Need voice alert for residence" may be converted to "Residence Alert" or similar text containing the required terms to locate the desired information.
- a verbal request in the form of “How do I set up voice alerts?” may be converted to “Set Voice Alert” to locate the desired information.
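- A minimal sketch of such a conversion from transcribed verbal requests to canonical query terms follows; the keyword table is an illustrative assumption, not the engine 220's actual vocabulary:

```python
# Illustrative mapping of free-form verbal requests (already transcribed
# by the voice recognition engine 220) to canonical query terms.
KEYWORD_MAP = {
    "residence": "Residence Alert",
    "home": "Residence Alert",
    "set up": "Set Voice Alert",
    "weather": "Weather Alert",
}

def normalize_request(transcript: str) -> str:
    """Return the canonical query for a transcript, or an error message
    when the request lacks enough information to identify an alert."""
    text = transcript.lower()
    for keyword, canonical in KEYWORD_MAP.items():
        if keyword in text:
            return canonical
    return "ERROR: not enough information to identify the desired alert"

print(normalize_request("Need voice alert for residence"))  # Residence Alert
```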
- the system 200 may also teach users how to best phrase verbal requests to most efficiently allow the system 200 to locate the desired information. For example, in one embodiment, after downloading application software from, for example, a server, users can be provided with access to a tutorial or similar feature which assists users in phrasing verbal requests directed to, for example, particular types of alerts such as, for example, emergency alerts, weather, business alerts, alerts based on home sensors (entry sensors, smoke detectors, fire detectors, carbon monoxide detectors, energy usage, etc.). Any improper verbal request (e.g., not enough information to identify desired information or improper format) may be met with a general error message or specific error message detailing required information necessary to identify the desired information.
- the runtime application 215 can be an executable program, which handles various functions associated with system 200 as described herein.
- the runtime application 215 can be, for example, code comprising instructions to perform particular steps or operations of a process.
- the runtime application 215 can make a request 254 to the one or more substantially real-time feeds 235 .
- the request to one or more feeds 235 can result in the runtime application 215 obtaining a key corresponding to the request. That is, the one or more feeds 235 can assign keys to each source of desired information which is being tracked.
- the runtime application 215 can cause the request and the key to be stored as shown as arrow 256 in one or more databases 230 thereby linking the device to the feed 235 within the one or more databases 230 .
- the one or more databases 230 can maintain each user's profile of desired alert information. Accordingly, users can track, if desired, multiple types of information via the system 200 .
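- The linkage the one or more databases 230 maintain between a device, its request, and the key assigned by a feed 235 can be sketched with an in-memory database; the table layout and key values below are assumptions for illustration:

```python
# Minimal sketch of the device/request/key linkage in databases 230.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE requests (
    device_id TEXT, feed_key TEXT, alert_type TEXT, status TEXT)""")

def store_request(device_id, feed_key, alert_type):
    """Store the request and its feed-assigned key, linking the device
    to the feed and marking the request active."""
    db.execute("INSERT INTO requests VALUES (?, ?, ?, 'active')",
               (device_id, feed_key, alert_type))

# Multiple users requesting the same alert type can share one active key.
store_request("device-199", "key-tornado-KS", "tornado")
store_request("device-202", "key-tornado-KS", "tornado")

rows = db.execute("SELECT device_id FROM requests WHERE feed_key = ?",
                  ("key-tornado-KS",)).fetchall()
print([r[0] for r in rows])
```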
- the runtime application 215 can queue, for example, emergency information related to multiple requests to be transmitted to the user to prevent any interruption thereof. Once the key is obtained and it is determined that, for example, a particular emergency or a particular activity is in progress, the one or more databases 230 can maintain a corresponding request as active.
- the one or more databases 230 store the key and maintain the request as temporarily active until a particular status (e.g., tornado activity is confirmed over or tornado activity has resumed) is transmitted to the user. Responsive to final information being transmitted to the user, the temporarily active status can be changed to inactive.
- the runtime application 215 can be configured to poll the one or more databases 230 to determine the status of each request. Any inactive request (e.g., tornado activity has ended and it is now safe to go outside) can be removed from the one or more databases 230 by the runtime application 215 .
- the one or more databases 230 may link multiple users with the same active key when those multiple users have requested the same type of alert information (e.g., tornados, weather, national alerts, Homeland Security alerts, information from home sensors, etc.).
- Text requests can be unpacked 252 and handed directly to the runtime application 215 . From that point, the process is similar to the verbal requests converted to text as described above.
- the open communication link between the database 230 and the information feed 235 can provide a conduit for the requested information to be transmitted to the one or more databases 230 at any desired interval. For example, if a user has selected alert information every 30 minutes, the runtime application 215 determines that the request is active every 30 minutes by polling the one or more databases 230 . Polling can occur at any necessary interval, including continuously, to allow all users to receive alerts at their user-selected time periods. If active, the runtime application 215 can pull, grab, or obtain the desired substantially real-time alert information from the feed 235 (or information may be pushed from the feed 235 ) using the previously obtained key, and transmits the alert information to the one or more databases 230 and eventually to the user as described.
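- The polling behavior of the runtime application 215 can be sketched as follows; the stubbed feed and in-memory request table stand in for the real feed 235 and databases 230:

```python
# Sketch of the runtime application 215 servicing active requests on a
# polling pass. The feed lookup is a stub; a deployed system would pull
# from (or be pushed by) the real feed 235 using the stored key.
FEED = {"key-weather": "Severe thunderstorm warning until 6 PM"}  # stub feed 235

def poll_once(requests):
    """Return {device_id: alert_text} for every request still marked active;
    inactive requests are skipped (and may be purged from the database)."""
    delivered = {}
    for device_id, (feed_key, status) in requests.items():
        if status == "active":
            delivered[device_id] = FEED[feed_key]
    return delivered

requests = {"device-199": ("key-weather", "active"),
            "device-202": ("key-weather", "inactive")}
print(poll_once(requests))  # only the active request is serviced
```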
- the alert information can be stored in the one or more databases 230 either long term or short term depending on the needs of the operator of system 200 and its users.
- a text file can be handed to the text-to-speech engine 225 depicted in FIG. 1 .
- a text file containing the emergency or other alert information can be converted into an audio file such as, for example, a MP3 or similar audio file.
- the text-to-speech (also text-to-voice) engine 225 discussed herein can be implemented with natural speech features to avoid "robotic voice" text-to-speech synthesis, which is important for broadcasting or sending voice alerts in a more "human" type voice audio, which is more receptive to listeners than the more "robotic voice" text-to-speech applications.
- Using a more natural sounding text-to-speech engine for engine 225 ensures that voice alerts are actually heard by listeners, which is particularly important during emergency situations.
- the text-to-speech engine 225 can be configured to offer text-to-speech conversion in multiple languages. Such a text-to-speech engine 225 can also be configured to convert the digitized voice message into more than one language from among a plurality of languages for broadcast of the digitized voice alert in consecutively different languages through the remote electronic devices (e.g., devices 198 , 199 , 201 , 202 , 203 , 204 ).
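- Broadcast of a voice alert in consecutively different languages can be sketched as below; the translation table and the synthesize placeholder are assumptions standing in for the text-to-speech engine 225's actual conversion:

```python
# Sketch of rendering one digitized voice alert in consecutive languages.
TRANSLATIONS = {  # hypothetical pre-translated alert texts
    "en": "Tornado warning: take shelter immediately.",
    "es": "Alerta de tornado: busque refugio de inmediato.",
}

def synthesize(text, language):
    """Placeholder for the text-to-speech engine 225."""
    return f"[{language} audio] {text}"

def broadcast(language_order):
    """Render the alert once per language, in the order pre-selected in
    the user profile, for consecutive playback on the remote device."""
    return [synthesize(TRANSLATIONS[lang], lang) for lang in language_order]

for clip in broadcast(["en", "es"]):
    print(clip)
```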
- Orpheus is a multilingual text-to-speech synthesizer from Meridian One for laptop, notebook, and desktop computers running Microsoft Windows 7, Vista, or Windows XP.
- Orpheus is available as Orpheus TTS Plus or Orpheus TTS.
- Orpheus TTS Plus and Orpheus TTS speak 25 languages with synthetic voices capable of high intelligibility at the fastest talking rates.
- Orpheus TTS Plus adds natural sounding voices for UK English, US English, and Swedish.
- the audio file can then be transmitted to devices such as, for example, devices 199 , 201 , 202 , 203 , 204 , etc.
- the application software causes the audio file to automatically play upon receipt by the device.
- users can receive automatic alert-related information in substantially real-time based on user-selected parameters.
- the text file can be transmitted to the device in the form of a text or an instant message without the need for converting the text file to an audio file.
- runtime application 215 can send the text alert to the user device and the text alert can be converted to a voice alert (i.e., text-to-voice alert) at the device itself.
- a community of users can receive substantially real-time alert information.
- users simply identify particular desired information (e.g., emergency announcements, weather, road conditions, road construction, etc.) and become part of a community of other users interested in receiving substantially real-time alert-related information in text and/or audio format.
- users belonging to a community interested in emergency announcements receive the same substantially real-time alerts.
- Default settings may be used with this particular embodiment such that each user receives alerts at the same time over the same staggered time period (e.g., once an hour, every thirty minutes, once per day, etc.).
- Single users may also utilize default settings without joining a community of users. Users wanting a different scheme can customize the alerts as shown via the example screen shots illustrated in FIGS. 3( a )- 3 ( d ).
- system 200 can be configured to allow a user to send a message to a social media account (e.g., Twitter®, Facebook®, etc.) along with an attachment with an audio message from the user.
- the user may send an alert to one or more friends with an audio message (e.g., tornados in southwest Kansas, watch out!).
- the system 200 may prompt the user and/or a home page may depict an icon which allows the user to verbalize a message for delivery to one or more intended recipients along with an alert.
- the voice recognition engine 220 can generate an audio file representing the user's message, which can be an actual voice or a computer-generated voice, and store the audio file in the one or more databases 230 , linking it to the other user's remote electronic device.
- System 200 can then transmit the audio file along with the alert (or another alert) to one or more intended recipients via a social media account.
- the intended recipients may be stored by the system 200 previously, or may be inputted at the time the message is to be sent.
- the user is able to select from a list of friends established within the application software by the user.
- the personal message can be saved in, for example, database 230 and linked to the user.
- the alert (or other information) can be transmitted along with the personal message.
- FIG. 4 illustrates a high-level flow chart of operations depicting logical operations of a method 400 for automatically providing instant voice alerts to remote electronic devices, in accordance with an embodiment.
- the process can be initiated.
- an activity can be detected utilizing one or more sensors.
- a text message indicative of such activity can be generated. For example, a message indicating that a particular sensor has determined that the backdoor of a particular house has been opened would generate text stating "Backdoor is open".
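- The text-generation step at block 406 can be sketched as follows; the sensor names and message templates are illustrative assumptions:

```python
# Sketch of generating the text message for a detected sensor activity.
TEMPLATES = {  # hypothetical (sensor, event) -> message templates
    ("backdoor", "open"): "Backdoor is open",
    ("smoke", "detected"): "Smoke Detected in Living Room",
}

def activity_to_text(sensor: str, event: str) -> str:
    """Return the alert text for a sensor event, falling back to a
    generic message when no template is defined."""
    return TEMPLATES.get((sensor, event), f"Sensor '{sensor}' reported: {event}")

print(activity_to_text("backdoor", "open"))  # Backdoor is open
```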
- such a text message can be converted, as depicted at block 408 , into a digitized voice alert via, for example, the text-to-speech engine 225 shown in FIG. 1 .
- a test can be performed, as indicated at block 410 , to determine if the digitized voice message should be broadcast in another language. For example, if it is determined that the voice alert should be broadcast in another language (e.g., following broadcast of the message in the initial language), then as described at block 411 , the digitized voice message can be converted into a pre-selected or specified language and then, as indicated at block 412 , transmitted through a network (e.g., network 501 shown in FIG. 13 ) for broadcast to one or more remote electronic devices.
- the digitized voice message is transmitted in the original language through the network (e.g., network 501 shown in FIG. 13 ) for broadcast to one or more remote electronic devices that communicate with the network for the playing of the automatic audio announcement (e.g., voice alert) through the remote electronic device(s).
- the process can then terminate, as indicated at block 414 .
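- The overall flow of method 400 (blocks 404 through 413) can be sketched end to end as follows; every stage is a stub standing in for the real sensors, text-to-speech engine 225, and network 501:

```python
# End-to-end sketch of method 400: detect activity, generate the text
# message, convert it to a digitized voice alert, optionally convert
# the language, and hand it off for broadcast. All stages are stubs.
def method_400(sensor_event, target_language=None):
    text = f"{sensor_event['sensor']}: {sensor_event['state']}"   # block 406
    voice = f"[voice] {text}"                                      # block 408 (engine 225 stub)
    if target_language:                                            # block 410 test
        voice = f"[{target_language}] {voice}"                     # block 411 conversion stub
    return {"broadcast": voice}                                    # blocks 412/413 transmission stub

result = method_400({"sensor": "Backdoor", "state": "open"}, target_language="es")
print(result["broadcast"])
```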
- the aforementioned digitized voice message can be broadcast through the one or more remote electronic devices in one or more languages based on a language setting in a user profile.
- the one or more languages can be pre-selected in the user profile.
- the user profile can be established as a user preference via a service during a set up of the one or more remote electronic devices.
- the user profile can, in some embodiments, be established as a user preference via an intelligent router during a set up of the one or more remote electronic devices.
- the one or more languages can be selected from a plurality of different languages.
- the digitized voice message can be converted into the particular language specified by a user via the one or more remote electronic devices.
- the disclosed embodiments including the methods, systems, and processor-readable media discussed herein, when implemented, will vocalize, for example, regional, national, government, presidential, and other alerts instantly and automatically and in various languages which would automatically follow the base language (e.g., English in the United States, Spanish in Mexico, French in France, etc.) utterance.
- the aforementioned one or more sensors can communicate with a server that communicates with the network (e.g., network 501 shown in FIG. 13 ).
- the one or more sensors can communicate with an intelligent router (e.g., a server, a packet router, etc.) that communicates with the network.
- many types of intelligent routers (e.g., intelligent or smart wireless routers) can be utilized in accordance with the disclosed embodiments.
- Examples of intelligent routers 233 , 235 are shown in FIG. 13 .
- the server or servers can communicate with the one or more sensors through the network.
- each of the one or more sensors can comprise a self-contained computer that communicates with the network (e.g., network 501 shown in FIG. 13 ).
- sensors can be located in, for example, a residence, a business, enterprise, a government entity (e.g., a secure facility, military base, etc.), and so forth.
- FIG. 5 illustrates a high-level flow chart of operations depicting logical operations of a method 420 for automatically providing instant voice alerts to remote electronic devices from incidents detected within a security system, in accordance with an embodiment.
- the process can be initiated.
- a wireless data network can be provided which includes and/or communicates with one or more of the sensors in communication with the wireless data network (e.g., network 501 shown in FIG. 13 ).
- the sensors can be located within, for example, a residence, a building, government agency, secure military facility, etc.
- the one or more sensors in and/or associated with the residence can detect an activity (e.g., window opens, door opens, smoke detected, etc.).
- a text message can be generated, which is indicative of the activity (e.g., “Smoke Detected in Living Room”).
- the text message can be converted into a digitized voice alert via, for example, the text-to-speech engine 225 shown in FIG. 1 .
- the digitized voice alert can be transmitted through a network (e.g., a cellular communications network) for broadcast to one or more remote electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the one or more remote electronic devices (e.g., a speaker integrated with a Smartphone, laptop computer, automobile, etc.).
- the aforementioned operations involving language pre-selection, language conversion, etc., shown in FIG. 4 can be adapted for use with the methodology shown in FIG. 5 .
- the process shown in FIG. 5 can then terminate, as depicted at block 434 .
- FIG. 6 illustrates a high-level flow chart of operations depicting logical operations of a method 440 for providing automatic and instant emergency voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment.
- the method 440 provides for an instant automatic delivery of a voice alert to one or more remote electronic devices via a network such as, for example, network 501 discussed herein.
- Method 440 takes into account several scenarios. The first scenario involves those who are unable to look at an instant text alert, such as when driving, or who otherwise must not be distracted. This is not possible with the current PLAN (e.g., see the description of PLAN in greater detail herein), which sends text only to wireless carriers, whereas, with the approach of the disclosed embodiments, users can hear the message without doing anything.
- the disclosed embodiments handle the situation of those that are without a phone, who are reading the TEXT on their computers, and so forth.
- Such individuals are now able to HEAR the PLAN Alert via an approach such as that of method 440 . They can hear the voice alert without doing anything and, as also indicated herein, hear the voice alert in sequential languages without doing anything.
- the process can be initiated.
- an operation can be implemented for determining an emergency situation affecting a specified region and requiring emergency notification of the emergency to wireless hand held device users in the specified region.
- a step can be implemented for generating a text message indicative of the emergency situation (e.g., “Flooding, Leave to Higher Ground!”).
- an operation can be implemented for converting a text message indicative of the emergency situation into a digitized voice alert (e.g., text-to-voice).
- the conversion operation depicted at block 448 can be provided by, for example, the text-to-speech engine 225 shown in FIG. 1 .
- the digitized voice alert can be transmitted, as depicted at block 450 , through specific towers of a cellular communication network (e.g., network 501 shown in FIG. 13 ) in the specified region for distribution, as shown next at block 452 , of an automatic audio announcement of the digitized voice alert to all remote electronic devices in communication with the specific towers in the specified region.
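- The tower-targeted distribution at blocks 450 and 452 can be sketched as follows; the tower records and region labels are hypothetical placeholders for the cellular network's actual topology:

```python
# Sketch of selecting the specific cell towers serving a specified
# region so the digitized voice alert reaches every device in
# communication with those towers. Records are illustrative.
TOWERS = [
    {"id": "T1", "region": "southwest-KS"},
    {"id": "T2", "region": "southwest-KS"},
    {"id": "T3", "region": "northeast-KS"},
]

def towers_for_region(region):
    """Return the ids of the towers serving the specified region."""
    return [t["id"] for t in TOWERS if t["region"] == region]

def broadcast_regional_alert(region, voice_alert):
    """Map each in-region tower to the alert it should distribute."""
    return {tower: voice_alert for tower in towers_for_region(region)}

print(broadcast_regional_alert("southwest-KS", "Flooding, Leave to Higher Ground!"))
```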
- the aforementioned operations involving language pre-selection, language conversion, etc., shown in FIG. 4 can be adapted for use with the methodology shown in FIG. 6 .
- the process shown in FIG. 6 can then terminate, as depicted at block 454 .
- any other processes described herein can be implemented in the context of hardware and/or software.
- such operations/instructions of the methods described herein can be implemented as, for example, computer-executable instructions such as program modules being executed by a single computer or a group of computers or other processors and processing devices.
- a “module” constitutes a software application.
- program modules include, but are not limited to, routines, subroutines, software applications, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types and instructions.
- module may refer to a collection of routines and data structures that perform a particular task or implement a particular abstract data type. Modules may be composed of two parts: an interface, which lists the constants, data types, variables, and routines that can be accessed by other modules or routines; and an implementation, which is typically private (accessible only to that module) and which includes source code that actually implements the routines in the module.
- the term module may also simply refer to an application such as a computer program designed to assist in the performance of a specific task such as word processing, accounting, inventory management, etc. Additionally, the term “module” can also refer in some instances to a hardware component such as a computer chip or other hardware.
- FIG. 7 illustrates a block diagram of a system 490 for automatically providing instant voice alerts to remote electronic devices, in accordance with an embodiment.
- system 490 includes a processor 480 and a data bus 481 coupled to the processor 480 .
- System 490 can also include a computer-usable medium 482 embodying, for example, computer code 484 (e.g., in the form of a software module or group of software modules).
- the computer-usable medium 482 is generally coupled to or can communicate with the data bus 481 .
- the computer program code or module 484 can be configured to comprise instructions executable by the processor and configured for implementing, for example, the method 400 described above.
- Such a method 400 can include detecting an activity utilizing at least one sensor, generating and converting a text message indicative of the activity into a digitized voice alert; and transmitting the digitized voice alert through a network (e.g., network 501 shown in FIG. 13 ) for broadcast to one or more remote electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the one or more remote electronic devices.
- FIG. 8 illustrates a block diagram of a system 492 for automatically providing instant voice alerts to remote electronic devices from incidents detected within a security system, in accordance with an embodiment.
- system 492 includes a processor 480 and a data bus 481 coupled to the processor 480 .
- the system 492 can also include a computer-usable medium 482 embodying, for example, computer code 484 (e.g., in the form of a module or group of modules).
- the computer-usable medium 482 is also generally coupled to or in communication with the data bus 481 .
- the computer program code or module 484 can be configured to comprise instructions executable by the processor and configured for implementing, for example, the method 420 described above.
- Such a method 420 can include, for example, providing a wireless data network (e.g., a cellular network, a WLAN, etc.) including one or more sensors in communication with the wireless data network within a location (e.g., residence, building, military facility, government location, etc.); detecting an activity utilizing one or more sensors associated with the location; generating and converting a text message indicative of the activity into a digitized voice alert; and transmitting the digitized voice alert through a network (e.g., network 501 shown in FIG. 13 ) for broadcast to one or more remote electronic devices that communicate with the network (e.g., network 501 ) for an automatic audio announcement of the digitized voice alert through the remote electronic device(s).
- FIG. 9 illustrates a block diagram of a system 494 for automatically providing instant emergency voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment.
- system 494 includes a processor 480 and a data bus 481 coupled to the processor 480 .
- the system 494 can also include a computer-usable medium 482 embodying, for example, computer code 484 (e.g., in the form of a module or group of modules).
- the computer-usable medium 482 is also generally coupled to or in communication with the data bus 481 .
- the computer program code or module 484 can be configured to comprise instructions executable by the processor and configured for implementing, for example, the method 440 described above.
- Such a method 440 can include, for example, determining an emergency situation affecting a specified region and requiring emergency notification of the emergency to wireless hand held device users in the specified region; generating and converting a text message indicative of the emergency situation into a digitized voice alert; and transmitting the digitized voice alert through specific towers of a cellular communications network in the specified region for distribution of an automatic audio announcement of the digitized voice alert to all remote electronic devices in communication with the specific towers in the specified region.
- the computer-usable medium 482 discussed herein can be, for example, an application such as a downloadable software which may be in the form of a downloadable application software (“app”) retrieved from a server such as, for example, server, 231 shown in FIG. 13 , and then stored in a memory of a user device such as, for example, remote electronic devices such as computer 198 , Smartphones 199 , 201 , Tablet 202 , television 203 , automobile 204 , etc.
- the computer-usable medium 482 may be a computer chip or other electronic module that can actually be incorporated into or added to a remote electronic devices such as computer 198 , Smartphones 199 , 201 , Tablet 202 , television 203 , automobile 204 , etc., either during manufacture or as after-market type modules.
- FIG. 10 illustrates a block diagram of a processor-readable medium 490 that can store code 484 representing instructions to cause a processor to perform a process to, for example, provide automatic and instant voice alerts to remote electronic devices, in accordance with an embodiment.
- the code 484 can comprise code (e.g., module or group of modules) to perform the instructions of, for example, method 400 including code to detect an activity utilizing one or more sensors; generate and convert a text message indicative of the activity into a digitized voice alert; and transmit the digitized voice alert through a network (e.g., network 501 shown in FIG. 13 ) for broadcast to one or more remote electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the one or more remote electronic devices.
- FIG. 11 illustrates a block diagram of a processor-readable medium 492 that can store code representing instructions to cause a processor to, for example, perform a process to provide automatic and instant voice alerts to remote electronic devices from incidents detected within a security monitoring system, in accordance with an embodiment.
- Such a code can comprise code 484 (e.g., module or group of modules, etc.) to perform the instructions of method 420 such as, for example, to provide a wireless data network including one or more sensors in communication with the wireless data network within a location such as a residence, building, business, government facility, etc.; detect an activity utilizing one or more sensors associated with the location; generate and convert a text message indicative of the activity into a digitized voice alert; and transmit the digitized voice alert through a network (e.g., network 501 shown in FIG. 13 ) for broadcast to one or more remote electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the one or more remote electronic devices.
- FIG. 12 illustrates a block diagram of a processor-readable medium 494 that can store code representing instructions to cause a processor to perform, for example, a process to automatically provide instant emergency voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment.
- Such code 484 can comprise code to perform the instructions of, for example, method 440, including code to determine an emergency situation affecting a specified region and requiring notification of wireless hand held device users in the specified region; generate and convert a text message indicative of the emergency situation into a digitized voice alert; and transmit the digitized voice alert through specific towers of a cellular communications network in the specified region for distribution of an automatic audio announcement of the digitized voice alert to all remote electronic devices in communication with the specific towers in the specified region.
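- The region-targeted distribution of method 440 can be sketched as follows; the `CellTower` structure and the device identifiers are illustrative assumptions, not part of the specification:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CellTower:
    region: str
    connected_devices: List[str] = field(default_factory=list)
    delivered: List[str] = field(default_factory=list)

    def broadcast(self, voice_alert: str) -> None:
        # Every device currently in communication with this tower
        # receives the digitized voice alert.
        for device in self.connected_devices:
            self.delivered.append(f"{device}<-{voice_alert}")

def alert_region(towers: List[CellTower], region: str,
                 voice_alert: str) -> int:
    # Transmit the digitized voice alert only through the specific
    # towers serving the specified region; returns devices reached.
    reached = 0
    for tower in towers:
        if tower.region == region:
            tower.broadcast(voice_alert)
            reached += len(tower.connected_devices)
    return reached
```

Only devices attached to towers inside the affected region receive the announcement; towers elsewhere are untouched.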
- The processor-readable media 490 , 492 , and 494 discussed herein can be provided, for example, as downloadable application software (an "app") retrieved from a server such as, for example, server 231 shown in FIG. 13 , and then stored in a memory of a user device such as, for example, one of the remote electronic devices (computer 198 , Smartphones 199 , 201 , Tablet 202 , television 203 , automobile 204 , etc.).
- the processor-readable media 490 , 492 , 494 , etc. may each be provided as a computer chip or other electronic module that can actually be incorporated into or added to remote electronic devices such as computer 198 , Smartphones 199 , 201 , Tablet 202 , television 203 , automobile 204 , etc., either during manufacture or as after-market type modules.
- FIG. 13 illustrates a voice alert system 500 that can be implemented in accordance with the disclosed embodiments. It can be appreciated that one or more of the disclosed embodiments can be utilized to implement various aspects of system 500 shown in FIG. 13 .
- System 500 generally includes a network 501 that can communicate with one or more of the remote electronic devices such as computer 198 , Smartphones 199 , 201 , etc., tablet computing device 202 , a television 203 , an automobile 204 , etc.
- One or more servers, such as server 231 can also communicate with network 501 .
- The database 230 (and other databases) can communicate with server 231 via a network connection or other communication means, or, preferably, can be stored in a memory of server 231 .
- server 231 may be a standalone computer server or may be composed of multiple servers that communicate with one another and with network 501 . Also, in some embodiments server 231 of FIG. 13 and server 205 of FIG. 1 may actually be the same server/computer, depending upon design considerations and goals.
- one or more sensors 512 located in, for example, a residence 511 can communicate with the network 501 individually or may be interlinked with one another in the context of a home based network (e.g., a Wireless LAN) that communicates with the network 501 .
- One or more sensors 514 can be located at key positions within a building 513 . Such sensors 514 may be interlinked with one another or communicate individually with the network 501 , either directly or via a network located in the building 513 , such as a Wireless LAN.
- the one or more sensors 512 can communicate with an intelligent router 233 via, for example, a WLAN.
- the one or more sensors 514 can also communicate with an intelligent router 235 via communications means 239 , similar to the communications configuration involving the intelligent router 233 , one or more sensors 512 , and communications means 237 .
- each of the intelligent routers 233 and/or 235 can also communicate with the network 501 .
- server 231 (or other servers in communication with network 501 ) can function as an intelligent router, depending upon design considerations.
- A variety of enterprises, businesses, government agencies, and so forth can also communicate with network 501 .
- For example, local or state emergency services 510 (e.g., a Fire Department, Police Department, etc.) and a Homeland Security agency 502 (e.g., including FEMA, etc.) can communicate with network 501 .
- A 911 organization 504 can additionally communicate with network 501 , as can a military organization (e.g., the U.S. Air Force, U.S. Army, U.S. Navy, Department of Defense, etc.) and a security monitoring enterprise 508 (e.g., Sonitrol, Brinks, etc.).
- The security monitoring enterprise 508 may monitor house 511 and/or building 513 via one or more sensors 512 and/or 514 , respectively, depending upon the implemented embodiment.
- Network 501 can be, for example, a network such as the Internet, which is the well-known global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless, and optical networking technologies.
- the Internet carries a vast range of information resources and services such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail.
- Network 501 can also be, for example, a wireless communications network such as, for example, a cellular communications network.
- a cellular communications network is a radio network distributed over land areas called cells, each served by one or more fixed-location transceivers known as a cell site or base station. When joined together, these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission.
- Network 501 may also be implemented as a Wi-Fi network such as, for example, an IEEE 802.11 type network or WLAN (Wireless Local Area Network), or as so-called "Super Wi-Fi," a term coined by the U.S. Federal Communications Commission (FCC) to describe proposed networking in the UHF TV band in the US, and so forth.
- Network 501 can also be configured to operate as, for example, a PLAN (Personal Localized Alert Network) for the transmission of local emergency services, Amber alerts, Presidential messages, government notices, etc. Assuming network 501 is either a configured PLAN or equipped with PLAN capabilities, authorized government officials can utilize network 501 as a PLAN to send emergency text messages to participating wireless companies, which will then use their cell towers to forward the messages to subscribers in the affected area.
- Such text messages can be converted to synthesize voice/speech via, for example, text-to-speech engine 225 either before being sent through the network 501 or via a server such as server 231 (and/or other services) or via the receiving remote electronic device such as, for example, remote electronic devices 198 , 199 , 201 , 202 , 203 , 204 , etc., that communicate with the network 501 .
- a variety of different types of text message alerts can be generated and converted to synthesized speech (e.g., “natural” voice) as indicated herein.
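- The three possible conversion points described above (before sending, at a server such as server 231, or at the receiving device) can be sketched as follows. The `deliver_alert` and `synthesize` names are hypothetical, and `synthesize` is a stand-in for a real text-to-speech engine such as engine 225:

```python
from typing import Callable, Dict, List, Tuple

def deliver_alert(text: str,
                  synthesize: Callable[[str], bytes],
                  devices: List[str],
                  convert_at: str = "server") -> Dict[str, Tuple[str, object]]:
    # Choose where the text-to-speech conversion happens.
    if convert_at == "device":
        # Ship the raw text; each receiving device runs its own
        # text-to-speech conversion locally.
        return {d: ("text", text) for d in devices}
    # "sender" or "server": convert once, ship audio to every device.
    audio = synthesize(text)
    return {d: ("audio", audio) for d in devices}
```

Converting once at the sender or server trades per-device flexibility (e.g., per-device voices or languages) for doing the synthesis work a single time.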
- Most security system sensors provide a simple switched output that changes state based on whether or not the sensor has been tripped. When connected in a circuit, such a sensor behaves just like a switch that is activated automatically, which makes it extremely easy to connect to the same text-to-speech technology.
- The resulting alert can be sent to a remote electronic device such as, for example, a smartphone, computer, or iPad, and/or to a security center (e.g., security monitoring enterprise 508 ) or directly to a security patrol car.
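- Treating a switched sensor output as a stream of boolean samples, an off-to-on transition is what feeds the text-to-speech chain. A minimal sketch (the function name and sampling model are assumptions):

```python
from typing import Iterable, List

def edge_alerts(samples: Iterable[bool], sensor_name: str) -> List[str]:
    # Convert a switched sensor output (True when tripped) into one
    # text alert per off-to-on transition; steady-state "on" readings
    # do not generate duplicate alerts.
    alerts: List[str] = []
    previous = False
    for tripped in samples:
        if tripped and not previous:
            alerts.append(f"{sensor_name} activated")
        previous = tripped
    return alerts
```

Each alert string could then be handed to the text-to-speech conversion described above.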
- An airline/travel alert might read, for example: "Jet Blue Air Flight 355 JFK to Burbank has JUST arrived AT four twenty seven pm BAGGAGE CLAIM 3."
- the transmission of the voice alerts can be rendered in, for example, a dozen languages and also different voices.
- A Bluetooth® application (e.g., a Bluetooth® connection) can connect the user's remote electronic device (e.g., Smartphone) to the stereo of the automobile for playing of the voice alert.
- The Bluetooth® connection in the automobile would allow the user/driver to instantly hear the President (in some embodiments, in consecutive multiple languages) without visually distracting the user/driver while he or she continues to operate the automobile.
- FIG. 14 illustrates a high-level flow chart of logical operations of a method 401 for providing automatic and instant digitized voice alerts, and converting such digitized voice alerts into more than one language for broadcast of the digitized voice alert in consecutively different languages through one or more remote electronic devices, in accordance with an embodiment.
- the operational steps shown in FIG. 14 are similar to those depicted in FIG. 4 , except for differences shown at blocks 411 and 413 . That is, assuming it is determined to convert the digitized voice alert into other languages, an operation can be implemented, as indicated at block 411 , to convert the digitized voice alert into multiple languages (e.g., English to Spanish, Italian, Vietnamese, etc.).
- the voice alert can be instantly broadcast consecutively in different languages (e.g., English followed by Spanish, Italian, Vietnamese, and then back to English again).
- a loop of voice alerts in different languages can be provided.
- a live utterance can be instantly converted into a digitized voice alert for automatic delivery in a selected series of languages following the base language (e.g., English).
- the combined digitized voice alert can then be instantly transmitted through, for example, network 501 for broadcast through one or more of the remote electronic devices 198 , 199 , 201 , 202 , 203 , 204 , etc.
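- The consecutive-language broadcast of blocks 411 and 413 can be sketched as a simple loop; `translate` and `speak` are stand-ins for a translation service and the text-to-speech/playback chain, which the specification leaves abstract:

```python
from typing import Callable, Iterable, List

def broadcast_consecutive(text: str,
                          languages: Iterable[str],
                          translate: Callable[[str, str], str],
                          speak: Callable[[str], None]) -> List[str]:
    # Render one alert consecutively in several languages; the
    # returned list records the order in which utterances were spoken.
    spoken: List[str] = []
    for language in languages:
        utterance = translate(text, language)
        speak(utterance)
        spoken.append(utterance)
    return spoken
```

Looping back to the first language in the sequence would yield the repeating multi-language loop of voice alerts mentioned above.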
- Alert messages can also originate as a live speech or live announcement. For example, the President can use a telephone (e.g., a cell phone, landline, Internet Telephony based phone, etc.) to speak an utterance or announcement such as "This is a national emergency."
- the voice of the President can thus be captured and converted into a digitized voice alert (e.g., a wave file or other audio file) and then transmitted through, for example, network 501 to one or more of devices 198 , 199 , 201 , 202 , 203 , 204 , etc.
- FIG. 15 illustrates a high-level flow chart of operations depicting logical operations of a method 530 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment.
- the methodology shown in FIG. 15 does not utilize text-to-speech conversion, but actually relies on the original live voice/utterance itself.
- A speaker (e.g., the President) speaks directly into a voice capturing device such as, for example, a cell phone, landline phone, etc., as indicated at block 536. The voice of the speaker (e.g., a live announcement) can then be captured and converted into a digitized voice message.
- the digitized voice message (of the captured utterance) is associated with a text message, which may or may not contain text.
- the digitized voice message can be attached to the text message or may be bundled with the text message.
- The digitized voice message can be automatically transmitted through network 501 to one or more remote electronic devices, such as devices 198 , 199 , 201 , 202 , 203 , 204 , etc., that communicate with the network 501 .
- a test can be performed to automatically confirm if the text message (which includes the digitized voice message) has been received at a device such as one or more of devices 198 , 199 , 201 , 202 , 203 , 204 , etc.
- Such a test can include, in some embodiments, automatically detecting header information (e.g., a packet header) to determine point of origin and point of transmission (e.g., the remote electronic device) to assist in determining if the text message (with the digitized voice message attached) has been received at the device. If so, then the process continues, as indicated at block 550. If not, a test determines whether or not to transmit again or "try again," as shown at block 543, and the operation is repeated. Assuming it is determined not to "try again" (e.g., after a certain amount of time or a certain number of repeat transmissions), the process can then terminate, as described at block 556.
- The text message, with its attached/associated digitized voice message, can be transmitted through network 501 for broadcast to the one or more remote electronic devices, for automatic playback of the digitized voice message through the one or more remote electronic devices upon receipt of the text message with the digitized voice message at the device(s).
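- The transmit/confirm/retry loop of blocks 543-556 can be sketched as follows; `send` and `confirm_receipt` are hypothetical callables (a real implementation might, as noted above, inspect packet headers to confirm delivery):

```python
from typing import Callable

def transmit_with_confirmation(send: Callable[[], None],
                               confirm_receipt: Callable[[], bool],
                               max_attempts: int = 3) -> int:
    # Transmit the text message carrying the digitized voice message
    # and confirm delivery, retrying a bounded number of times.
    # Returns the attempt number on success, or 0 if the process
    # terminates without confirmation (block 556).
    for attempt in range(1, max_attempts + 1):
        send()
        if confirm_receipt():
            return attempt
    return 0
```

Bounding the retries mirrors the "try again" decision at block 543: after a certain number of repeat transmissions the process simply terminates.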
- FIG. 16 illustrates a high-level flow chart of operations depicting logical operations of a method 531 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment.
- the method 531 shown in FIG. 16 is similar to the method 530 depicted in FIG. 15 , the difference being in the addition of a test to determine if a call (e.g., phone call) or other activity is in progress at the device at the time of receipt of the text message (with its attached/associated digitized voice message).
- the call can be interrupted and the text message with its attached/associated digitized voice message (e.g., announcement from the President) pushed ahead of the current call to allow the digitized voice message to be automatically opened via the device, as shown at block 550 .
- the digitized voice message can be automatically played, as indicated at block 554 , via the device and in the case of an interrupted call, takes precedence over the interrupted call.
- The operations shown in FIG. 16 allow for an automatic interruption of a current call in each remote electronic device in order to push the text message with the digitized voice message through to each remote electronic device for automatic playback of the digitized voice message.
- the digitized voice message can, in some embodiments, be automatically opened in response to receipt of the text message with the digitized voice message at the one or more remote electronic devices, and automatically played through respective speakers associated with each remote electronic device in response to automatically opening the digitized voice message.
- the identity of the speaker (e.g., the President) associated with the live announcement can be authenticated via, for example, the voice recognition engine 220 shown in FIG. 1 , prior to automatically converting the live announcement into the digitized voice message indicative of the live announcement.
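- The authentication gate described above can be sketched as follows; `recognize` is a stand-in for a voice recognition engine such as engine 220, and the voiceprint set and tag format are illustrative assumptions:

```python
from typing import Callable, Dict, Set

def digitize_announcement(audio: bytes,
                          authorized_speakers: Set[str],
                          recognize: Callable[[bytes], str]) -> Dict[str, object]:
    # Authenticate the identity of the speaker BEFORE converting the
    # live announcement into a digitized voice message.
    speaker = recognize(audio)
    if speaker not in authorized_speakers:
        raise PermissionError(f"unauthenticated speaker: {speaker!r}")
    return {"speaker": speaker, "voice_message": b"DIGITIZED:" + audio}
```

Rejecting unrecognized speakers before conversion keeps forged announcements from ever entering the broadcast pipeline.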
- FIG. 17 illustrates a high-level flow chart of operations depicting logical operations of a method 533 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment.
- The methodology of FIG. 17 is similar to that of FIGS. 15-16 , the difference being that the method 533 of FIG. 17 does not utilize a text message transmission. Instead, in method 533 , the original voice announcement or utterance is captured and configured in a digitized voice alert format and transmitted and pushed through via network 501 to devices 198 , 199 , 201 , 202 , 203 , 204 , etc.
- FIG. 18 illustrates a high-level flow chart of operations depicting logical operations of a method 535 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment.
- the methodology of FIG. 18 is similar to that of FIGS. 15-17 , the difference being that the method 535 shown in FIG. 18 includes a language conversion and broadcast feature, as indicated by blocks 547 and 551 . This is similar to the language features discussed earlier herein. Note that the actual language conversion can take place at the mobile device via, for example, a language conversion module, or may take place earlier in the process prior to transmission of the live announcement but after capturing the announcement or utterance from the speaker.
- FIG. 19 illustrates a block diagram of a system 560 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment.
- System 560 generally includes a processor 480 and a data bus 481 coupled to the processor 480 .
- System 560 can also include a computer-usable medium 482 embodying computer code 484 (or a module or group of modules).
- the computer-usable medium 482 is generally coupled to the data bus 481
- The computer program code 484 comprises instructions executable by the processor 480 and configured for performing the instructions/operations of, for example, methods 401 , 530 , 531 , 533 , and/or 535 , respectively illustrated and discussed herein with respect to FIGS. 14-18 .
- the computer-program code 484 of FIG. 19 can comprise instructions executable by processor 480 and configured for capturing a live announcement; automatically converting the live announcement into a digitized voice message indicative of the live announcement, in response to capturing the live announcement; associating the digitized voice message with a text message to be transmitted through network 501 to a plurality of remote electronic devices that communicate with the network 501 ; and transmitting the text message with the digitized voice message through network 501 for broadcast to the plurality of electronic devices for automatic playback of the digitized voice message through at least one remote electronic device among the plurality of remote electronic devices upon receipt of the text message with the digitized voice message at the at least one remote electronic device among the plurality of remote electronic devices.
- the code 484 may comprise instructions configured for automatically interrupting a current call in each remote electronic device among the plurality of remote electronic devices in order to push the text message with the digitized voice message through to each of the plurality of remote electronic devices for automatic playback of the digitized voice message via the plurality of remote electronic devices.
- the code 484 may comprise instructions for automatically opening the digitized voice message in response to receipt of the text message with the digitized voice message at the at least one remote electronic device among the plurality of remote electronic devices; and automatically playing the digitized voice message through a speaker associated with the at least one remote electronic device in response to automatically opening the digitized voice message.
- the code 484 may comprise instructions configured for authenticating an identity of a speaker associated with the live announcement prior to automatically converting the live announcement into the digitized voice message indicative of the live announcement. Authentication may occur, for example, automatically utilizing a voice recognition engine.
- instructions of the code 484 can be further configured for broadcasting the digitized voice message through the at least one remote electronic device in at least one language based on a language setting in a user profile. In yet other embodiments, instructions of the code 484 can be further configured for pre-selecting the at least one language in the user profile. In other embodiments, instructions of the code 484 can be configured for establishing the user profile as a user preference via a server during a set up of the at least one remote electronic device. Additionally, in other embodiments, instructions of the code 484 can be configured for establishing the user profile as a user preference via an intelligent router during a set up of the at least one remote electronic device.
- The code 484 can include instructions configured for selecting, during a set up of the at least one remote electronic device, the at least one language from a plurality of different languages. In other embodiments, the code 484 can include instructions configured for converting the digitized voice message into more than one language from among a plurality of languages for broadcast of the digitized voice alert in consecutively different languages through the at least one remote electronic device.
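- The user-profile language setting described in the preceding embodiments can be sketched as follows; the `UserProfile` structure and the bracketed translation format are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class UserProfile:
    # Per-device preference established during set up (e.g., via a
    # server or an intelligent router, per the embodiments above).
    languages: List[str] = field(default_factory=lambda: ["en"])

def playback_sequence(profile: UserProfile,
                      voice_message: str,
                      translate: Callable[[str, str], str]) -> List[str]:
    # Broadcast the digitized voice message consecutively in each
    # pre-selected language from the profile's language setting.
    return [translate(voice_message, lang) for lang in profile.languages]
```

A device with no explicit selection simply falls back to the single default language.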
- FIG. 20 illustrates a block diagram of a processor-readable medium 562 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment.
- Processor-readable medium 562 can store code representing instructions to cause the processor 480 to perform a process to automatically provide an instant voice announcement to remote electronic devices.
- The code 484 can comprise code to implement the instructions/operations of, for example, methods 401 , 530 , 531 , 533 , and/or 535 , respectively, as illustrated and discussed herein with respect to FIGS. 14-18 .
- Such a code 484 can comprise code to, for example, capture a live announcement, automatically convert the live announcement into a digitized voice message indicative of the live announcement in response to capturing the live announcement; associate the digitized voice message with a text message to be transmitted through network 501 to a plurality of remote electronic devices that communicate with the network; and transmit the text message with the digitized voice message through network 501 for broadcast to the plurality of electronic devices for automatic playback of the digitized voice message through at least one remote electronic device among the plurality of remote electronic devices upon receipt of the text message with the digitized voice message at the at least one remote electronic device among the plurality of remote electronic devices.
- such a code 484 can further comprise code to automatically interrupt a current call in each remote electronic device among the plurality of remote electronic devices in order to push the text message with the digitized voice message through to each of the plurality of remote electronic devices for automatic playback of the digitized voice message via the plurality of remote electronic devices.
- such a code 484 can comprise code to automatically open the digitized voice message in response to receipt of the text message with the digitized voice message at the at least one remote electronic device among the plurality of remote electronic devices; and automatically play the digitized voice message through a speaker associated with the at least one remote electronic device in response to automatically opening the digitized voice message.
- the code 484 can also in some embodiments comprise code to authenticate an identity of a speaker associated with the live announcement prior to automatically converting the live announcement into the digitized voice message indicative of the live announcement. In other embodiments, the code 484 can comprise code to authenticate the identity of the speaker further utilizing a voice recognition engine. In other embodiments, the code 484 can comprise code to broadcast the digitized voice message through the at least one remote electronic device in at least one language based on a language setting in a user profile.
- the code 484 can comprise code to pre-select at least one language in the user profile, and/or to establish the user profile as a user preference via a server during a set up of the at least one remote electronic device, and/or to establish the user profile as a user preference via an intelligent router during a set up of the at least one remote electronic device.
- the code 484 can comprise code during a set up of the at least one remote electronic device to select at least one language from a plurality of different languages.
- the code 484 can comprise code to convert the digitized voice message into more than one language from among a plurality of languages for broadcast of the digitized voice alert in consecutively different languages through the at least one remote electronic device.
- an exemplary data processing system 600 may be included in devices operating in accordance with some embodiments.
- the data processing system 600 generally includes a processor 480 , a memory 636 , and input/output circuits 646 .
- the data processing system 600 may be incorporated in, for example, the personal or laptop computer 198 , portable wireless hand held devices (e.g., Smartphone, etc.) 199 , 201 , tablet 202 , television 203 , automobile 204 , or a router, server, or the like.
- An example of such a server is, for example, server 205 shown in FIG. 1 , server 231 shown in FIG. 13 , and so forth.
- the processor 480 can communicate with the memory 636 via an address/data bus 648 and can communicate with the input/output circuits 646 via, for example, an address/data bus 649 .
- the input/output circuits 646 can be used to transfer information between the memory 636 and another computer system or a network using, for example, an Internet Protocol (IP) connection and/or wireless or wired communications.
- These components may be conventional components such as those used in many conventional data processing systems, which may be configured to operate as described herein.
- the processor 480 can be any commercially available or custom microprocessor, microcontroller, digital signal processor, or the like.
- the memory 636 may include any memory devices containing the software and data used to implement the functionality circuits or modules used in accordance with embodiments of the present invention.
- the memory 636 can include, for example, but is not limited to, the following types of devices: cache, ROM, PROM, EPROM, EEPROM, flash memory, SRAM, DRAM, and magnetic disk.
- the memory 636 may be, for example, a content addressable memory (CAM).
- the memory 636 may include several categories of software and data used in the data processing system 600 : an operating system 652 ; application programs 654 ; input/output device drivers 658 ; and data 656 .
- the operating system 652 may be any operating system suitable for use with a data processing system such as, for example, Linux, Windows XP, Mac OS, Unix, operating systems for Smartphones, tablet devices, etc.
- the input/output device drivers 658 typically include software routines accessed through the operating system 652 by the application programs 654 to communicate with devices such as the input/output circuits 646 and certain memory 636 components.
- the application programs 654 are illustrative of the programs that implement the various features of the circuits and modules according to some embodiments of the present invention.
- the data 656 represents static and dynamic data that can be used by the application programs 654 , the operating system 652 , the input/output device drivers 658 , and other software programs that may reside in the memory 636 .
- the data 656 may include, for example, user profile data 628 and other information 630 for use by the circuits and modules of the application programs 654 according to some embodiments of the present invention as discussed further herein.
- Application programs 654 can include, for example, one or more modules 622 , 624 , 626 , etc. While the present invention is illustrated with reference to the modules 622 , 624 , 626 , etc., being application programs in FIG. 21 , as will be appreciated by those skilled in the art, other configurations fall within the scope of the disclosed embodiments. For example, rather than being application programs 654 , these modules may also be incorporated into the operating system 652 or other such logical division of the data processing system 600 . Modules 622 , 624 , and 626 can include instructions/code and/or processor-readable media for performing the various operations/instructions and methods discussed herein.
- modules 622 , 624 , and/or 626 , etc. can be utilized to store the instructions of, for example, the methods and processes shown in FIGS. 1-2 , 4 - 12 , and 15 - 18 , depending upon design considerations.
- modules 622 , 624 , and 626 are illustrated in a single data processing system, as will be appreciated by those skilled in the art, such functionality may be distributed across one or more data processing systems.
- the disclosed embodiments should not be construed as limited to the configuration illustrated in FIG. 21 , but may be provided by other arrangements and/or divisions of functions between data processing systems.
- FIG. 21 is illustrated as having various circuits/modules, one or more of these circuits may be combined without departing from the scope of the embodiments, preferred or alternative.
- "Module" generally refers to a collection of routines (and/or subroutines) and/or data structures that perform a particular task or implement a particular abstract data type. Modules usually include two parts: an interface, which lists the constants, data types, variables, and routines that can be accessed by other modules or routines; and an implementation, which is typically, but not always, private (accessible only to the module) and which contains the source code that actually implements the routines in the module.
- module may also refer to a self-contained component that can provide a complete function to a system and can be interchanged with other modules that perform similar functions.
- the environment 705 may include a communication/computing device 710 , the data communications network 501 as discussed earlier, a first server 740 , and a second server 745 . It can be appreciated that additional servers may be utilized with respect to network 501 . It can also be appreciated that in some embodiments, only a single server such as server 740 may be required. Note that servers 745 and 740 shown in FIG. 22 are analogous or similar to server 205 shown in FIG. 1 and server 231 depicted in FIG. 13 . Similarly, databases 730 and 735 are analogous or similar to database 230 shown in FIGS.
- the communication device 710 allows a user of the communication device 710 to communicate via bi-directional communication with one or more servers 740 , 745 , 205 , 231 , etc., over the data communication network 501 .
- the communication device 710 depicted in FIG. 22 may include one or more modules 622 , 624 , 626 , etc., or system 600 according to some embodiments.
- the application programs 654 discussed above with respect to FIG. 21 can be included in system 600 of the communication device 710 .
- the communication device 710 may be, for example, devices such as devices 198 , 199 , 201 , 202 , 203 , 204 , etc., that communicate with network 501 .
- the communication device 710 can include, for example, a user interface 744 and/or a web browser 715 that may be accessible through the user interface 744 , according to some embodiments.
- the first server 740 may include a database 730 and the second server 745 may include a database 735 .
- the communication device 710 may communicate over the network 501 , for example, the Internet through a wireless communications link, an Ethernet connection, a telephone line, a digital subscriber line (DSL), a broadband cable link, cellular communications means or other wireless links, etc.
- the first and second servers 740 and 745 may also communicate over the network 501 .
- the network 501 may convey data between the communication device 710 and the first and second servers 740 and 745 .
- PLAN (Personal Localized Alerting Network)
- PLAN authenticates the alert, verifies that the sender is authorized, and then PLAN sends the alert to participating wireless carriers. Participating wireless carriers push the alerts from, for example, cell towers to mobile telephones and other mobile electronic devices in the affected area.
- the alerts appear similar to text messages on mobile devices.
- Such “text-like messages” are geographically targeted. For example, a customer living in downtown New York would not receive a threat alert if they happen to be in Chicago when the alert is sent. Conversely, someone visiting downtown New York from Chicago on that same day would receive the alert.
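As a non-limiting illustration, the geographic targeting described above can be sketched as follows. All names (`Alert`, `in_affected_area`, `target_alert`) and the bounding-box representation are hypothetical; actual carrier-side targeting is performed at the cell-tower level.

```python
# Hypothetical sketch of geographic alert targeting: an alert is delivered
# only to devices whose *current* location falls inside the affected area,
# regardless of the subscriber's home address.

def in_affected_area(device_lat, device_lon, area):
    """Return True if the device's position lies inside the rectangular
    affected area (a dict of bounding-box coordinates)."""
    return (area["lat_min"] <= device_lat <= area["lat_max"]
            and area["lon_min"] <= device_lon <= area["lon_max"])

def target_alert(devices, area):
    """Select only the devices currently located in the affected area."""
    return [d["id"] for d in devices
            if in_affected_area(d["lat"], d["lon"], area)]

# A New York resident currently in Chicago is skipped; a Chicago
# visitor currently in New York receives the alert.
nyc = {"lat_min": 40.5, "lat_max": 41.0, "lon_min": -74.3, "lon_max": -73.7}
devices = [
    {"id": "ny_resident_in_chicago", "lat": 41.88, "lon": -87.63},
    {"id": "chicago_visitor_in_ny", "lat": 40.71, "lon": -74.00},
]
print(target_alert(devices, nyc))  # ['chicago_visitor_in_ny']
```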
- Alerts from PLAN include alerts issued by the President, alerts involving imminent threats to safety of life, and Amber alerts.
- Voice alerts (e.g., a digitized voice alert from the President, which the public would recognize) can also be provided.
- Such messages can be transmitted in different languages or in different sequences of languages.
- the digitized voice alert of an announcement from the President for example, can be automatically converted into one or more other languages.
- Push technology also known as server push, describes a style of Internet-based communication where the request for a given transaction is initiated by the publisher or central server. It is contrasted with pull technology, where the request for the transmission of information is initiated by the receiver or client.
- Synchronous conferencing and instant messaging are typical examples of push services. Chat messages, and sometimes files, are pushed to the user as soon as they are received by the messaging service. Both decentralized peer-to-peer programs (such as WASTE) and centralized programs (such as IRC or XMPP) allow pushing files, which means the sender initiates the data transfer rather than the recipient.
- Email is also a type of push system: the SMTP protocol on which it is based is a push protocol (see Push e-mail).
- the last step from mail server to desktop computer, typically uses a pull protocol like POP3 or IMAP.
- Modern e-mail clients make this step seem instantaneous by repeatedly polling the mail server, frequently checking it for new mail.
- the IMAP protocol includes the IDLE command, which allows the server to tell the client when new messages arrive.
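The pull-versus-push contrast above can be sketched as a toy model. This is an illustrative simulation with an in-memory "mail server" object; no real POP3/IMAP traffic is involved, and all class and method names are hypothetical.

```python
# Toy contrast between pull (client polls the server) and push (server
# notifies the client, much like the IMAP IDLE command allows).

class MailServer:
    def __init__(self):
        self.messages = []
        self.listeners = []            # push-style (IDLE-like) subscribers

    def deliver(self, msg):
        self.messages.append(msg)
        for notify in self.listeners:  # push: server initiates transfer
            notify(msg)

    def poll(self, since):
        return self.messages[since:]   # pull: client initiates transfer

server = MailServer()

# Pull: the client repeatedly asks whether anything new has arrived.
seen = 0
server.deliver("msg 1")
new = server.poll(seen)
seen += len(new)

# Push: the client registers once and is told about new mail immediately.
pushed = []
server.listeners.append(pushed.append)
server.deliver("msg 2")

print(new)     # ['msg 1']
print(pushed)  # ['msg 2']
```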
- the original BlackBerry was the first popular example of push technology for email in a wireless context.
- PointCast Network Another popular type of Internet push technology was PointCast Network, which gained popularity in the 1990s. It delivered news and stock market data. Both Netscape and Microsoft integrated it into their software at the height of the browser wars, but it later faded away and was replaced in the 2000s with RSS (a pull technology). Other uses are push enabled web applications including market data distribution (stock tickers), online chat/messaging systems (webchat), auctions, online betting and gaming, sport results, monitoring consoles, and sensor network monitoring.
- UAVs (Unmanned Aerial Vehicles) are used by federal and state agencies (e.g., U.S. Military, FBI, local and state police, U.S. Forest Service, U.S. Border Patrol, etc.).
- Private commercial applications are also feasible and foreseeable (e.g., large private land holdings or leased open space, environmental and geographical data gathering, university research).
- UAVs have the distinctive capability of providing better-than-human, aerial, visual information to ground units that may not have the time or means to use a manned plane for their surveillance/reconnaissance.
- a ground control operator can remotely fly and control an unmanned aerial vehicle (UAV), also known as a pilotless drone.
- Land- and maritime-based vehicles are similarly controlled.
- These unmanned vehicles are equipped with camera equipment and are best known for capturing real-time images during warfare. Such drones have now become increasingly affordable for use in civilian high-risk incidents such as search missions, border security, wildfire and oil spill detection, police tracking, weather monitoring, and natural disasters.
- the airborne drone acquires image data from the camera and flight parameters from onboard systems.
- the aerial footage captured by the camera onboard the UAV is transmitted to the Ground Control Station, which transfers it to its work station for analysis and possible enhancement.
- a push notification can arrive in a manner comprised of separate technologies such as cellular/Internet voice (voice to text, voice recognition), video stills (embedded with personalized iconographic identifiers), and can further include the capability of a secondary purpose of allowing notified recipients to engage others by retransmitting the message received, along with their own typed notations, so as to create their own real-time civil communications hub for ongoing situational awareness (a system that currently doesn't exist, but can be achievable by software applications running on servers).
- Once software is in place within a system (e.g., including servers), the only major expense can be largely limited to yearly system maintenance and data management.
- Data collected by the remote unmanned vehicle is identified as restricted data and public data, and the public data is provided to mobile devices registered with the server.
- up-to-the-minute UAV aerial imagery as selected by drone ground-based commanders, to be automatically transmitted to subscribed end-users via the current mobile operating systems for smartphones, iPads, laptops, and web-enabled devices in a manner comprised of separate technologies such as voice (voice to text, voice recognition), video stills, and data that can be embedded with personalized iconographic identifiers and messages.
- a system can be adapted to enable civil UAV authorities to transmit UAV video along with their voice-and-text notations to the public via their smartphones, iPads, laptops, and web-enabled devices, thus enabling these application registrants to form a civil awareness hub that would allow them to stay connected in times of emergency.
- the unmanned vehicle aspect of the present invention differs from city websites and telephone-based emergency notification systems inasmuch as the SkySpeak application can deploy a software-centric web platform to automatically transmit instant voice notifications and enriched data to those who have installed the application onto their smartphone and Internet devices.
- the SkySpeak Application can automatically voice its message and display the video stills (embedded with personalized iconographic identifiers) on user handheld devices (e.g., smartphones, iPads, etc.) and can automatically voice its message as a multilingual transmission without the recipients having to do anything to the devices in use on their end.
- an unmanned aerial vehicle (UAV) system 800 in accordance with an embodiment of the invention, is illustrated that includes an avionics and guidance module 801 , a motor 803 , propeller hardware 805 , and a fuel source 807 .
- Reference to an unmanned aerial vehicle (UAV) is not meant to limit application of features of the present invention to a particular vehicle system. It should be appreciated that the vehicle is unmanned, but can also be land-based or maritime-based. Reference to an unmanned vehicle (UV) can more accurately set the scope for vehicles that can be used to collect data for the present invention.
- the UV is managed by a controller 810 .
- An onboard controller can also manage sensors 811 , imaging equipment 813 , and location/GPS modules 815 engaged in navigation and data collection within the unmanned vehicle.
- Data collected by the UV can be separated into restricted data 821 and public data 823 . Separation into these categories can occur onboard the UV or after transmission to a server (to be discussed in FIG. 24 ).
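A non-limiting sketch of the restricted/public separation (elements 821 and 823) follows. The classification rule used here, a per-record "sensitivity" tag, is an assumption for illustration only; the disclosed embodiments leave the classification criterion to the implementer.

```python
# Illustrative separation of UV-collected records into restricted data
# (element 821) and public data (element 823). The "sensitivity" tag on
# each record is a hypothetical classification mechanism.

def separate(records):
    restricted, public = [], []
    for rec in records:
        (restricted if rec.get("sensitivity") == "restricted"
         else public).append(rec)
    return restricted, public

collected = [
    {"id": 1, "sensitivity": "restricted", "payload": "target coordinates"},
    {"id": 2, "sensitivity": "public", "payload": "wildfire still image"},
    {"id": 3, "sensitivity": "public", "payload": "flood-level reading"},
]
restricted, public = separate(collected)
print(len(restricted), len(public))  # 1 2
```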
- a communications module 825 enables communication of the UV with remote resources (e.g., servers) via any means of wireless communications (e.g., cellular, microwave, satellite, etc.) reasonably available in the unmanned vehicle field.
- UVs 800 are shown transmitting data through wireless communications means 831 (e.g., cellular transmission) through a data network 835 wherein data can be received and managed by a server 837 .
- the server 837 can organize data into restricted data and public data. Restricted data can go to clients 832 controlled by authorities (e.g., police, government operators), wherein public data can be provided to mobile devices 830 (e.g., smartphones) that are registered with the server to receive public data.
- Data collected by a remote unmanned vehicle can be transmitted to be received by a server, as shown in step 841 .
- Data can then be identified as restricted data and public data at the server, as shown in step 842 .
- public data can be provided to users registered at the server to receive the public data.
- Restricted data can be accessed by cleared civil personnel such as police or government operators (e.g., homeland security, ICE, FBI), while public data can be received by civilians and reporters and the cleared civil personnel.
- a flow diagram is shown in accordance with features of the invention.
- users can register their mobile devices with a server to receive data collected by remote unmanned vehicles.
- users can request data from the server, wherein the data can be collected by an unmanned vehicle and identified as public data by the server.
- the server as shown in step 853 , can then provide public data to registered user mobile devices.
- FIG. 27 another flow diagram is shown wherein users can register their mobile devices with a server to receive data collected by remote unmanned vehicles, as shown in step 861 . Then, as shown in step 862 , the server can automatically provide public data to registered user mobile devices.
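The registration and automatic-push flows of FIGS. 26 and 27 can be sketched as follows. This is a minimal, non-limiting illustration; `AlertServer`, its methods, and the "category" field are hypothetical names, not elements of the disclosed embodiments.

```python
# Minimal sketch of the flows above: users register devices with a server
# (step 861), and the server automatically provides public data to every
# registered device (step 862), while withholding restricted data.

class AlertServer:
    def __init__(self):
        self.registered = {}    # device_id -> inbox of delivered payloads

    def register(self, device_id):
        self.registered[device_id] = []

    def push_public(self, data):
        # Only data already identified as public is distributed.
        if data.get("category") != "public":
            return 0
        for inbox in self.registered.values():
            inbox.append(data["payload"])
        return len(self.registered)

server = AlertServer()
server.register("phone-1")
server.register("tablet-2")
delivered = server.push_public({"category": "public", "payload": "UAV still"})
blocked = server.push_public({"category": "restricted", "payload": "ops data"})
print(delivered, blocked)  # 2 0
```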
- The present invention can be used to instantly inform authorities and members of a community with instant voice notifications. It can also supplement other emergency services such as the FEMA National Radio System (FNARS); the Emergency Alert System (EAS), a national warning system in the United States that uses AM, FM, and Land Mobile Radio Service as well as broadcasts via VHF, UHF, and cable television, including low-power stations, together with the EAN (Emergency Action Notification); AMBER Alerts; and existing robo-calling, telephone-based centers serving Reverse 911 and NG 911.
- Robo-callers are often connected to a public switched telephone network by multiple phone lines because they can only send out one message at a time per phone line.
- the advantage of the robo-caller is that it is compatible with the most basic phone service. That very basic service has essentially stayed unchanged for a century because it is just a simple phone on a landline.
- The present invention does not make phone calls. It cannot get a busy signal because it is not making a phone call. It receives the alert as data regardless of whether the alert is vocal or text; an application operating on the user's handheld device then plays the message. The recipient simply gets the message. Text can be transmitted to the user's handheld device, where it can also be converted to speech. One benefit is lower bandwidth, which means more people can be alerted more quickly. The other is that the text goes through a non-voice channel to the phone.
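The device-side handling described above can be sketched as follows. This is a non-limiting illustration: `synthesize_speech()` is a hypothetical stand-in for a real text-to-speech engine, and the alert payload format is assumed for the example.

```python
# Sketch of the device-side handler: the alert arrives as data (not as a
# phone call), and a text alert is converted to speech locally.

def synthesize_speech(text):
    # Hypothetical TTS hook; returns a token representing rendered audio.
    return f"<audio:{text}>"

def handle_alert(alert):
    """Play a vocal alert directly, or convert a text alert to speech."""
    if alert["kind"] == "voice":
        return alert["audio"]
    return synthesize_speech(alert["text"])

print(handle_alert({"kind": "text", "text": "Evacuate now"}))
# <audio:Evacuate now>
```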
- the present invention can use communications methods other than the phone's voice channel. Alerts can be received by people already talking on their smartphones. Alerts can be somewhat intrusive in that they can nag recipients until they at least acknowledge the alert.
- the registration process can be far simpler in that the user only needs to download the application on their mobile device; everything else (e.g., communications with a data-providing server) can be automated.
- the present invention can be fully capable of delivering vocally recorded alerts, visuals, text alerts, and supplemental information.
- a data recipient should not need to answer the telephone in order to receive basic alert information because a message can be played on their handheld device display and/or announced via their handheld device speaker with the present invention. Spoken data is especially important for drivers and similarly occupied people that cannot take a moment to read a display.
- a UAV ground base station notifier can select a drone image and enter it onto the application's screen display.
- the notifier can then use the application's voice recognition to dictate an accompanying voice-activated message that is typed and that can be uttered automatically.
- the combined content can be transmitted to selected recipients who can then type their own comments to other recipients thus forming an ongoing web-enabled hub for the constant updating of information over OS mobile operating systems for smartphones, iPads, and laptops.
- the UAV Ground Base Station (land, maritime, or air) notifier selects a screen image and enters it onto the interface of a server-based application.
- the notifier can have the ability to modify his notifications with a voice-activated message that is automatically typed as text and/or uttered via speaker when transmitted to end-user handheld devices.
- recipients in turn can use the present system to type their own comments and forward them to other recipients, thus forming an ongoing web-enabled hub for the constant updating of information.
- the system can also recognize that notification is not communication and that the notification, in itself, does not guarantee an ongoing communication.
- The system can, for example, allow the imagery expert at a drone base station's video terminal to quickly transmit a still frame captured from the incoming video and automatically resize it, such as to 460 kb. The frame can be attached to the application's user interface (UI), such as a display screen on which a voice and text symbol can appear, so that the imagery expert can easily dictate the text caption to be submitted with the photo (such as using Google HTML+CSS code for implementation). The notification can then be automatically submitted to the registered recipients' smartphones or web-enabled devices along with the expert's voice.
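The notifier workflow just described can be sketched as a non-limiting example. `resize_to()` is a hypothetical stand-in (a real implementation would re-encode the image rather than truncate bytes), and the 460 kb target comes from the example above; all other names are illustrative.

```python
# Rough sketch of the notifier workflow: capture a still frame, resize it
# toward a target size (e.g., 460 kB), attach a dictated caption and voice
# clip, and build the notification to submit to registered recipients.

TARGET_KB = 460

def resize_to(image_bytes, target_kb):
    # Hypothetical resize: byte truncation stands in for real re-encoding.
    return image_bytes[: target_kb * 1024]

def build_notification(still, caption, voice_clip):
    return {
        "image": resize_to(still, TARGET_KB),
        "caption": caption,
        "voice": voice_clip,
    }

note = build_notification(b"\x00" * (600 * 1024),
                          "Fire line advancing northeast",
                          "clip-017")
print(len(note["image"]) // 1024)  # 460
```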
- data can be provided to the public using automatic instant voice alerts to mobile devices registered with the system. Notifications can be sent to registered users along with the authorities' desired voice/text/map additions without the registered citizens having to do anything. Registered users can also send the notification and their own notes to other recipients using the system or other communications (e.g., SMS) and form a community awareness hub.
- circuits and other means supported by each block and combinations of blocks can be implemented by special purpose hardware, software or firmware operating on special or general-purpose data processors, or combinations thereof. It should also be noted that, in some alternative implementations, the operations noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, or the varying embodiments described herein can be combined with one another or portions of such embodiments can be combined with portions of other embodiments in another embodiment.
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Computer Security & Cryptography (AREA)
- Medical Informatics (AREA)
- Astronomy & Astrophysics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Alarm Systems (AREA)
- Telephonic Communication Services (AREA)
Abstract
Methods, systems, and processor-readable media for providing instant/real-time voice alerts automatically to remote electronic devices. An activity can be detected utilizing one or more sensors. A text message indicative of the activity can be generated and converted into a digitized voice alert. The activity can also be a live utterance (e.g., a live announcement), which can then be instantly converted into a digitized voice alert for automatic delivery in a selected series of languages following the base language (e.g., English). The combined digitized voice alert can then be instantly transmitted through a network for broadcast of consecutive alerts (e.g., English followed by Spanish followed by Vietnamese, etc.) to one or more remote electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the one or more remote electronic devices.
Description
- This patent application claims priority as a continuation to U.S. Nonprovisional patent application Ser. No. 13/361,409, entitled “Unmanned Vehicle Civil Communications Systems and Methods”, which also claims priority as a continuation-in-part nonprovisional patent application to U.S. Nonprovisional patent application Ser. No. 13/324,118, entitled “Voice Alert Methods, Systems and Processor-readable Media”, which further claims priority as a continuation patent application of Provisional Application Ser. No. 61/489,621 entitled, “Voice Alert Methods, Systems and Processor-Readable Media,” which was filed on May 24, 2011. All references are incorporated herein by reference in their entirety.
- Embodiments are generally related to the provision of instant voice alerts sent automatically to remote mobile electronic devices such as cellular telephones, computers, Smartphones, tablet computing devices, televisions, remote electronic devices in automobiles, etc. Embodiments are also related to wireless communications networks such as cellular telephone networks and wireless LAN type networks. Embodiments are additionally related to emergency services and security monitoring of residences, businesses, and government and military facilities.
- In today's highly mobile society, there are increasing numbers of people who work at locations other than their homes or who are away from home for long periods of time. There are also a growing number of people who have elderly parents living alone. Additionally, there are also many businesses, enterprises, government agencies, and so forth with offices, buildings, and other facilities that require constant monitoring, particularly during times when no one is available on-site. Finally, many emergency situations are such that immediate and quick notification to the public of such emergencies will save lives and resources.
- Accordingly, a need exists for an improved and efficient approach for transmitting or broadcasting instant voice alerts to remote electronic devices automatically during times of emergencies or as a part of security monitoring systems.
- The following summary is provided to facilitate an understanding of some of the innovative features unique to the disclosed embodiment and is not intended to be a full description. A full appreciation of the various aspects of the embodiments disclosed herein can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
- It is, therefore, one aspect of the disclosed embodiments to provide for the transmission of instant voice alerts automatically to remote electronic devices such as, for example, cellular telephones, computers, Smartphones, tablet computing devices, televisions, remote electronic devices in automobiles, etc.
- It is another aspect of the disclosed embodiments to provide for text-to-voice alerts to be transmitted instantly and automatically to remote electronic devices such as, for example, cellular telephones, computers, Smartphones, tablet computing devices, televisions, remote electronic devices in automobiles, etc.
- It is yet another aspect of the disclosed embodiments to provide methods, systems, and processor-readable media for the generation and conversion of alerts from text messages to synthesized speech to be instantly and automatically transmitted as instant voice alerts to remote electronic devices.
- The aforementioned aspects and other objectives and advantages can now be achieved as described herein. Methods, systems, and processor-readable media are disclosed for automatically providing instant voice alerts to remote electronic devices. In some embodiments, an activity can be detected utilizing one or more sensors. A text message indicative of the activity can be generated and converted into a digitized voice alert. The digitized voice alert can then be transmitted through a network for broadcast to one or more remote electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the one or more remote electronic devices. Note that an “activity” as utilized herein may be, for example, any number of different actions or events. In the context of a home security/monitoring system, a sensor can detect an activity or condition, such as a door entry security sensor that may detect that a door has opened while the occupants of the home are away. The opening of the door would constitute an “activity”. In other situations, a live utterance such as a live speech given by, for example, the President of the United States could constitute an “activity” as discussed in more detail herein.
- In some embodiments, the digitized voice message can be instantly and automatically broadcast through the one or more remote electronic devices in one or more languages based on a language setting in a user profile. In some embodiments, the one or more languages can be pre-selected in the user profile (e.g., during a set-up of the user profile or during changes to the user's profile). In some embodiments, the user profile can be established as a user preference via a server during a set up (or at a later time) of the one or more remote electronic devices. In other embodiments, the user profile can be established as a user preference via an intelligent router during a set up of the one or more remote electronic devices. In other embodiments, during a set up of the one or more remote electronic devices, the one or more languages can be selected from a plurality of different languages. In still other embodiments, the digitized voice message can be converted into the particular language specified by the remote electronic device(s). In yet other embodiments, the digitized voice message can be converted into more than one language from among a plurality of languages for broadcast of the digitized voice alert in consecutively different languages through the one or more remote electronic devices.
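The consecutive multilingual broadcast just described can be sketched as a non-limiting example. `translate()` is a hypothetical stand-in for a real machine-translation or multilingual TTS service, and the profile's language list is assumed to have been pre-selected at setup.

```python
# Sketch of consecutive multilingual playback based on a user profile:
# the alert is rendered once per language, base language (English) first.

def translate(message, language):
    # Hypothetical translation hook; a tag stands in for rendered speech.
    return f"[{language}] {message}"

def broadcast_sequence(message, profile):
    """Render the alert once per pre-selected language, base language first."""
    languages = ["en"] + [l for l in profile["languages"] if l != "en"]
    return [translate(message, lang) for lang in languages]

profile = {"languages": ["en", "es", "vi"]}
print(broadcast_sequence("Tornado warning", profile))
# ['[en] Tornado warning', '[es] Tornado warning', '[vi] Tornado warning']
```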
- Methods, systems, and processor-readable media are also disclosed for automatically providing instant voice alerts to remote electronic devices from incidents detected within a security system (e.g., a home security system, a military security monitoring system, an enterprise/business building security monitoring system, etc.). A wireless data network can be provided, which includes one or more sensors that communicate with the wireless data network within a location (e.g., a residence, building, business, government facility, military facility, etc.). An activity/condition can be detected utilizing one or more sensors associated with the location. A text message indicative of the activity can be generated and converted into a digitized voice alert. The digitized voice alert can be transmitted through a network for broadcast to one or more electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the remote electronic devices (e.g., a speaker associated with or integrated with such devices such as the speaker in a mobile phone).
- Methods, systems, and processor-readable media are also disclosed for providing emergency voice alerts to wireless hand held device users in a specified region. An emergency situation can be detected affecting a specified region and requiring emergency notification of the emergency to wireless hand held device users in the specified region. A text message indicative of the emergency situation can be generated and converted into a digitized voice alert. The digitized voice alert can be transmitted through specific towers of a cellular communications network in the specified region for distribution of an automatic audio announcement of the digitized voice alert to all remote electronic devices in communication with the specific towers in the specified region.
- Method, systems, and processor-readable media are also disclosed for providing an instant voice announcement automatically to remote electronic devices. In such an approach, a live announcement (e.g., an announcement from the city Mayor, or the President of the United States) can be captured and then automatically converted into a digitized voice message indicative of the live announcement. The digitized voice message can be associated with a text message to be transmitted through a network to a plurality of remote electronic devices that communicate with the network. The text message with the digitized voice message can be transmitted through a network (e.g., cellular communications network, the Internet, etc.) for broadcast to the plurality of electronic devices for automatic playback of the digitized voice message through one or more remote electronic devices among the plurality of remote electronic devices upon receipt of the text message with the digitized voice message at the one or more remote electronic devices among the plurality of remote electronic devices.
- In some embodiments, a current call taking place at one or more of the remote electronic devices can be automatically interrupted in order to push the text message with the digitized voice message through to each of the plurality of remote electronic devices for automatic playing of the digitized voice message via a remote electronic device. In other embodiments, operations can be implemented for automatically opening the digitized voice message, in response to receipt of the text message with the digitized voice message at the one or more remote electronic devices among the plurality of remote electronic devices, and automatically playing the digitized voice message through a speaker associated with the one or more remote electronic devices in response to automatically opening the digitized voice message.
- In other embodiments, the identity of the speaker associated with the live announcement can be authenticated prior to automatically converting the live announcement into the digitized voice message indicative of the live announcement. In some embodiments, authentication of the speaker (e.g., the President or other official) can be authenticated utilizing a voice recognition engine. In still other embodiments, the digitized voice message can be broadcast through the one or more remote electronic devices in one or more languages based on a language setting in a user profile.
- As indicated previously, one or more languages can be pre-selected in the user profile. Additionally, the user profile can be established in some embodiments as a user preference via a server during a setup of one or more of the remote electronic devices. In some embodiments, the user profile can be established as a user preference via an intelligent router during a setup of the one or more remote electronic devices. In other embodiments, during a setup of the one or more remote electronic devices, one or more languages can be selected from a plurality of different languages. In yet another embodiment, the digitized voice message (e.g., an announcement from the President) can be converted into more than one language from among a plurality of languages for broadcast of the digitized voice alert in consecutively different languages through the one or more remote electronic devices.
- It is also a feature of the present invention to provide a method for providing public users with data collected by an unmanned vehicle, comprising registering mobile devices authorized to receive data collected by said remote unmanned vehicle at a server, wherein data collected by the remote unmanned vehicle is identified as restricted data and public data, and providing the public data to mobile devices registered with the server.
- It is yet another feature of the invention to provide a mass notification push application and a civic-communication application with a secondary purpose of allowing the notified recipients to engage others by retransmitting the message received, along with their own typed notations, so as to create their own real-time civic communications hub for ongoing situational awareness. The civil communications hub can allow users to forward messages to other recipients and the forwarded messages can include sending user annotations together with captured data sent by authorities.
- The accompanying figures, in which like reference numerals refer to identical or functionally-similar elements throughout the separate views and which are incorporated in and form a part of the specification, further illustrate the present invention and, together with the detailed description herein, serve to explain the principles of the disclosed embodiments.
- FIG. 1 illustrates a first exemplary schematic/flow chart in accordance with an embodiment;
- FIG. 2 illustrates a second exemplary schematic/flow chart in accordance with an embodiment;
- FIGS. 3(a) to 3(d) illustrate exemplary screen shots of a user interface in accordance with one or more embodiments;
- FIG. 4 illustrates a high-level flow chart of operations depicting logical operations of a method for automatically providing instant voice alerts to remote electronic devices, in accordance with an embodiment;
- FIG. 5 illustrates a high-level flow chart of operations depicting logical operations of a method for automatically providing instant voice alerts to remote electronic devices regarding incidents detected by a security system, in accordance with an embodiment;
- FIG. 6 illustrates a high-level flow chart of operations depicting logical operations of a method for automatically providing instant emergency voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment;
- FIG. 7 illustrates a block diagram of a system for automatically providing instant voice alerts to remote electronic devices, in accordance with an embodiment;
- FIG. 8 illustrates a block diagram of a system for automatically providing instant voice alerts to remote electronic devices from incidents detected within a security system, in accordance with an embodiment;
- FIG. 9 illustrates a block diagram of a system for automatically providing emergency instant voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment;
- FIG. 10 illustrates a block diagram of a processor-readable medium that can store code representing instructions to cause a processor to perform a process to, for example, provide automatic and instant voice alerts to remote electronic devices, in accordance with an embodiment;
- FIG. 11 illustrates a block diagram of a processor-readable medium that can store code representing instructions to cause a processor to, for example, perform a process to automatically provide instant voice alerts to remote electronic devices from incidents detected within a security system, in accordance with an embodiment;
- FIG. 12 illustrates a block diagram of a processor-readable medium that can store code representing instructions to cause a processor to perform, for example, a process to automatically provide instant emergency voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment;
- FIG. 13 illustrates a block diagram of a system for providing automatic and instant voice alerts through a network, in accordance with an embodiment;
- FIG. 14 illustrates a high-level flow chart of logical operations for providing automatic and instant digitized voice alerts, and converting such digitized voice alerts into more than one language for broadcast of the digitized voice alert in consecutively different languages through one or more remote electronic devices, in accordance with an embodiment;
- FIG. 15 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment;
- FIG. 16 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment;
- FIG. 17 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment;
- FIG. 18 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment;
- FIG. 19 illustrates a block diagram of a system for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment;
- FIG. 20 illustrates a block diagram of a processor-readable medium for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment;
- FIG. 21 illustrates an exemplary data processing system which may be included in devices operating in accordance with some embodiments;
- FIG. 22 illustrates an exemplary environment for operations and devices according to some embodiments of the present invention;
- FIG. 23 illustrates a block diagram of an unmanned vehicle system for monitoring using sensors and providing an instant voice announcement from the unmanned vehicle automatically to remote electronic devices, in accordance with an embodiment;
- FIG. 24 illustrates a block diagram of an unmanned vehicle system for providing data in the form of instant voice announcements based on a condition from the unmanned vehicle automatically to remote electronic devices, in accordance with an embodiment;
- FIG. 25 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement, based on a sensed condition, automatically to remote electronic devices, in accordance with an embodiment;
- FIG. 26 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement, based on a sensed condition, automatically to remote electronic devices, in accordance with an embodiment; and
- FIG. 27 illustrates a high-level flow chart of operations depicting logical operations of a method for providing an instant voice announcement, in accordance with an embodiment.
- The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate at least one embodiment and are not intended to limit the scope thereof.
- The embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which illustrative embodiments are shown. The embodiments disclosed herein can be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
- The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosed embodiments. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
- Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which disclosed embodiments belong. It will be further understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
- As will be appreciated by one skilled in the art, the present invention can be embodied as a method, system, and/or processor-readable medium. Accordingly, the embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, all generally referred to herein as a “circuit” or “module.” Furthermore, the embodiments may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer-readable medium or processor-readable medium may be utilized including, for example, but not limited to, hard disks, USB flash drives, DVDs, CD-ROMs, optical storage devices, magnetic storage devices, etc.
- Computer program code for carrying out operations of the disclosed embodiments may be written in an object oriented programming language (e.g., Java, C++, etc.). The computer program code, however, for carrying out operations of the disclosed embodiments may also be written in conventional procedural programming languages such as the “C” programming language, HTML, XML, etc., or in a visually oriented programming environment such as, for example, Visual Basic.
- The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer. In the latter scenario, the remote computer may be connected to a user's computer through a local area network (LAN) or a wide area network (WAN), wireless data network, e.g., WiFi, Wimax, 802.xx, and cellular network or the connection may be made to an external computer via most third party supported networks (for example, through the Internet using an Internet Service Provider).
- The disclosed embodiments are described in part below with reference to flowchart illustrations and/or block diagrams of methods, systems, computer program products, and data structures according to embodiments of the invention. It will be understood that each block of the illustrations, and combinations of blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block or blocks.
- These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the block or blocks.
- The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block or blocks.
-
FIG. 1 illustrates an overview of a system 200 according to embodiments of the present invention. System 200 broadly includes a server 205 or central computer, web service tool 210, runtime tool 215, voice recognition engine 220, text-to-speech engine 225, and one or more databases 230. The server 205 may include each of the web service tool 210, runtime tool 215, voice recognition engine 220, text-to-speech engine 225, and one or more databases 230. Alternatively, one or more of the web service tool 210, runtime application 215, voice recognition engine 220, text-to-speech engine 225, and one or more databases 230 may be remote and in communication with the server 205 or central computer. - Note that as utilized herein the term “server” (e.g., server 205 shown in FIG. 1, server 231 shown in FIG. 13, etc.) refers generally to one of three possible implementations or combinations thereof. First, the server can be a computer program running as a service to serve the needs or requests of other programs (referred to in this context as “clients”) which may or may not be running on the same computer. Second, the server can be a physical computer dedicated to running one or more such services to serve the needs of programs running on other computers on the same network. Finally, a server can be a software/hardware system (i.e., a software service running on a dedicated computer) such as a database server, file server, mail server, enterprise server, print server, etc. - In some embodiments, the server can be a program that operates as a socket listener. In other embodiments, a server can be a host that is deployed to execute one or more such programs. In still other embodiments, the server can be a server computer implemented as a single computer or a series of computers that link other computers or electronic devices together. Such a server implementation can provide essential services across a network, either to private users inside a large organization (e.g., an Intranet) or to public users via the internet. For example, when one enters a query in a search engine, the query is sent from a user's computer over the internet to the servers that store all the relevant web pages. The results are sent back by the server to the user's computer.
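The socket-listener sense of the term can be illustrated with a minimal, self-contained sketch (a toy example for illustration only, not the alert server described herein):

```python
# Toy sketch of "a program that operates as a socket listener": a thread
# accepts one client connection, acknowledges the request, and exits.
# Binding port 0 lets the operating system pick a free port.
import socket
import threading

def serve_once(srv):
    conn, _ = srv.accept()          # wait for one client
    data = conn.recv(1024)
    conn.sendall(b"ACK:" + data)    # acknowledge the "request"
    conn.close()
    srv.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=serve_once, args=(srv,))
t.start()

# A "client" program making a request of the listening server.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"alert-request")
REPLY = cli.recv(1024)
cli.close()
t.join()
```

The client and the service here happen to run in one process, which matches the first definition above: the serving program may or may not run on the same computer as its clients.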
- The
server 205 can communicate with one or more substantially real-time services 235 being operated by any number of entities such as, for example, security companies (e.g., Sonitrol, Brinks, etc.) or government agencies (e.g., U.S. Department of Homeland Security, government contractors, etc.) operating, for example, particular web sites. In some embodiments, the services or informational feeds 235 may include websites offered by government agencies such as the Homeland Security Department, local 911 organizations, private companies or non-profit agencies, FEMA (Federal Emergency Management Agency), and so forth. As shown in FIG. 1, these services can provide information via, for example, Feed 1, Feed 2, Feed 3, and so forth. In some embodiments, Feed 1 may provide a series of emergency announcements. Feed 2 may provide, for example, information related to construction on highways in a particular geographical region, whereas Feed 3 may provide updated weather information in a particular area. - In practice, as depicted in FIG. 1 and FIG. 2, a user 240 can initially make a request 242 for specific and/or general voice alerts (e.g., text to voice) and/or other information via a remote electronic device such as a smartphone or tablet 202, television 203, or automobile Bluetooth® type system 204. In one embodiment, the user can make the request 242 in a text format guided by prompts or a template displayed on, for example, a display of the smartphone or tablet 202, etc. -
FIGS. 3(a) to 3(d) illustrate exemplary screen shots of such prompts. FIG. 3(a), for example, depicts a home screen shot 105 comprising a list of topical icons from which the user may select using various user interfaces including a touch screen display, trackball, buttons, and the like. Five selectable icons are depicted in FIG. 3(a). - A user can select one of the icons utilizing various well-known selection techniques. By selecting icon 106, for example, the user will tap into an emergency informational feed. The user would then be taken to other screens which allow the user to set up an emergency informational feed that is ultimately fed to his or her device (e.g., smartphone or tablet 202, automobile 204, etc.) and provided, according to particular preselected criteria, in the form of text-to-voice informational emergency announcements. Similarly, if a user selects icon 107, the user will tap into a weather informational feed that the user preselects and is again provided with particular voice alerts (e.g., text-to-voice) regarding important weather announcements. Road condition voice alerts can also be provided by selecting, for example, icon 108. A user can additionally configure text-to-voice alerts with respect to his or her business or home, as shown by the corresponding selectable icons. -
FIG. 3(b) depicts a residential screen shot 115 responsive to the user selecting “Home” in accordance with an embodiment. In the example screen shot 115 shown in FIG. 3(b), assuming the user has selected icon 110 (“Home”) shown in FIG. 3(a), the user would next see the screen shot 115 and one or more icons labeled Sensor 1, Sensor 2, Sensor 3, and Sensor 4. Such sensor icons are associated with, for example, sensors (e.g., security/surveillance sensors, smoke detectors, fire detectors, carbon monoxide detectors, energy usage monitoring, door or window opening sensors, etc.) located in, for example, a residence of a user. In this case, the user can select each sensor and set up voice alerts (e.g., text-to-voice) related to particular conditions or activities that such sensors may detect. For example, if a sensor detects that a particular window in a user's home opens while the user is away, information related to this condition will be transmitted as a text-to-voice alert to the user's device (e.g., smartphone, automobile, tablet computer, etc.). -
FIG. 3(c) depicts a screen shot 120 that includes example icons representing monitored conditions. For example, condition 1 may be the temperature inside the house or a particular zone of the house. Condition 2 may be, for example, energy usage monitored by an energy usage sensor in the house. The user may also set how often the user wishes to receive updates.
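The preference set-up shown in these screens can be thought of as a small per-user data structure; the following is a hypothetical sketch, with all names and time frames assumed purely for illustration:

```python
# Hypothetical sketch of a per-user alert preference profile as suggested
# by the screens of FIGS. 3(a)-3(d): monitored conditions plus an update
# interval chosen from the offered time frames.

AVAILABLE_TIME_FRAMES = (15, 30, 60)  # minutes; stand-ins for frames 126

def make_profile(conditions, interval_minutes):
    """Build a profile; the interval must be one of the offered frames."""
    if interval_minutes not in AVAILABLE_TIME_FRAMES:
        raise ValueError("interval not offered")
    return {"conditions": list(conditions), "interval": interval_minutes}

PROFILE = make_profile(["temperature", "energy usage"], 30)
```

A profile of this shape is what the one or more databases 230, described below, would persist per user.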
FIG. 3(d) depicts a screen shot 125 responsive to a user selecting, for example, an update (i.e., icon 123 in FIG. 3(c)). The screen shot 125 depicts available time frames 126 for which the user may receive substantially real-time alerts. Thus, a user can select how often the substantially real-time alerts or other informational alerts are received. - In another embodiment, the user may make a live voice request for specific voice alert information. In this embodiment, a voice recognition engine 220 is responsible for converting a live voice or verbal command or input into text. In one embodiment, the text may be in the form of XML or another appropriate language. In another embodiment, the text can be a proprietary language. The XML or other programming or mark-up language can provide a communications protocol between the user and the server 205, namely the web service tool 210. The web service tool 210 can act as the gatekeeper for the system 200 and authenticates the request 244. This authentication process can determine whether or not the request emanates from a device registered or otherwise permitted to make the request. For example, the user may need to input a pin or code, which would then be authenticated by the web service tool 210. If the request is not authenticated, an error message 246 can be transmitted to the user 240 via the device. Optionally, instructions on remedying the underlying basis for the error response can also be transmitted to the device. - Once authenticated, the request type can be checked (e.g., text or voice/verbal 248). If verbal, the web service tool 210 can transmit the live voice request to the voice recognition engine 220, which is configured to convert the voice request into a text request 250. Optionally, the voice request can be saved into an audio file prior to being serviced by the voice recognition engine 220. It can be appreciated that a number of different types of voice recognition engines, including proprietary engines, are suitable for the embodiments discussed herein. For example, a live voice or verbal request in the form “Need voice alert for residence” may be converted to “Residence Alert” or similar text containing the required terms to locate the desired information. In another example, a verbal request in the form of “How do I set up voice alerts?” may be converted to “Set Voice Alert” to locate the desired information. - The system 200 may also teach users how to best phrase verbal requests to most efficiently allow the system 200 to locate the desired information. For example, in one embodiment, after downloading application software from, for example, a server, users can be provided with access to a tutorial or similar feature which assists users in phrasing verbal requests directed to particular types of alerts such as, for example, emergency alerts, weather alerts, business alerts, and alerts based on home sensors (entry sensors, smoke detectors, fire detectors, carbon monoxide detectors, energy usage, etc.). Any improper verbal request (e.g., one with not enough information to identify the desired information, or in an improper format) may be met with a general error message or a specific error message detailing the information necessary to identify the desired information. - Once a request representing the desired types of information is converted into text, the request is unpacked 252 and handed to a runtime application 215. The runtime application 215 can be an executable program which handles various functions associated with system 200 as described herein. The runtime application 215 can be, for example, code comprising instructions to perform particular steps or operations of a process. - Initially, based on the converted text request, the runtime application 215 can make a request 254 to the one or more substantially real-time feeds 235. The request to one or more feeds 235 can result in the runtime application 215 obtaining a key corresponding to the request. That is, the one or more feeds 235 can assign keys to each source of desired information which is being tracked. Once the key is obtained, the runtime application 215 can cause the request and the key to be stored, as shown by arrow 256, in one or more databases 230, thereby linking the device to the feed 235 within the one or more databases 230. - The one or more databases 230 can maintain each user's profile of desired alert information. Accordingly, users can track, if desired, multiple types of information via the system 200. In one embodiment, the runtime application 215 can queue, for example, emergency information related to multiple requests to be transmitted to the user to prevent any interruption thereof. Once the key is obtained and it is determined that, for example, a particular emergency or a particular activity is in progress, the one or more databases 230 can maintain a corresponding request as active. - Should information relating to a particular emergency or activity no longer be needed because the particular emergency or particular activity has ended (e.g., tornado activity in a particular region has ended), the one or more databases 230 store the key and maintain the request as temporarily active until a particular status (e.g., tornado activity is confirmed over or tornado activity has resumed) can be transmitted to the user. Responsive to final information being transmitted to the user, the temporarily active status can be changed to inactive. - The runtime application 215 can be configured to poll the one or more databases 230 to determine the status of each request. Any inactive request (e.g., tornado activity has ended and it is now safe to go outside) can be removed from the one or more databases 230 by the runtime application 215. To alleviate backlog, the one or more databases 230 may link multiple users with the same active key when those multiple users have requested the same type of alert information (e.g., tornados, weather, national alerts, Homeland Security alerts, information from home sensors, etc.). - Text requests can be unpacked 252 and handed directly to the runtime application 215. From that point, the process is similar to that for the verbal requests converted to text as described above. - The open communication link between the database 230 and information feed 235 can provide a conduit for the requested information to be transmitted to the one or more databases 230 at any desired interval. For example, if a user has selected alert information every 30 minutes, the runtime application 215 determines whether the request is active every 30 minutes by polling the one or more databases 230. Polling can occur at any necessary interval, including continuously, to allow all users to receive alerts at their selected time periods. If the request is active, the runtime application 215 can pull, grab, or otherwise obtain the desired substantially real-time alert information from the feed 235 (or the information may be pushed from the feed 235) using the previously obtained key, and transmits the alert information to the one or more databases 230 and eventually to the user as described. The alert information can be stored in the one or more databases 230 either long term or short term depending on the needs of the operator of system 200 and its users. Once obtained from the feed 235, a text file can be handed to the text-to-speech engine 225 depicted in FIG. 1.
- In general, the text-to-speech (also text-to-voice)
engine 225 discussed herein can be implemented with natural-sounding speech features rather than “robotic voice” text-to-speech synthesis, which is important because voice alerts broadcast or sent in a more “human” type of voice audio are better received by listeners than output from the more “robotic voice” text-to-speech applications. Using a more natural sounding text-to-speech engine for engine 225 helps ensure that voice alerts are actually heard by listeners, which is particularly important during emergency situations. - It can be appreciated that the text-to-speech engine 225 can be configured to offer text-to-speech conversion in multiple languages. Such a text-to-speech engine 225 can also be configured to convert the digitized voice message into more than one language from among a plurality of languages for broadcast of the digitized voice alert in consecutively different languages through the remote electronic devices (e.g., devices 202, 203, and 204). An example of a text-to-speech engine 225 discussed herein is “Orpheus,” a multilingual text-to-speech synthesizer from Meridian One for laptop, notebook, and desktop computers running Microsoft Windows 7, Vista, or Windows XP. Orpheus is available as Orpheus TTS Plus or Orpheus TTS. Orpheus TTS Plus and Orpheus TTS speak 25 languages with synthetic voices capable of high intelligibility at the fastest talking rates. Orpheus TTS Plus adds natural sounding voices for UK English, US English, and Swedish.
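The consecutive-language broadcast behavior can be sketched as follows; the translation table and synthesis stub are illustrative assumptions and do not represent a real engine such as Orpheus:

```python
# Hypothetical sketch of consecutive multi-language broadcast: the alert
# text is converted once per language in the user's preference list and
# the resulting audio segments are played back-to-back.

TRANSLATIONS = {
    ("Smoke detected", "es"): "Humo detectado",
    ("Smoke detected", "fr"): "Fumée détectée",
}

def translate(text, language):
    # Unknown pairs fall through to the base-language text.
    return TRANSLATIONS.get((text, language), text)

def synthesize(text):
    return f"<audio:{text}>"  # stand-in for a generated MP3 segment

def broadcast_sequence(text, languages):
    """Return the ordered audio segments for consecutive playback."""
    return [synthesize(translate(text, lang)) for lang in languages]

SEGMENTS = broadcast_sequence("Smoke detected", ["en", "es", "fr"])
```

The ordering of the language list is what produces the base-language utterance first, followed automatically by each additional language.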
devices runtime application 215 can send the text alert to the user device and the text alert can be converted to a voice alert (i.e., text-to-voice alert) at the device itself. - In another embodiment, a community of users can receive substantially, real-time alert information. In such an embodiment, users simply identify particular desired information (e.g., emergency announcements, weather, road conditions, road construction, etc.) and become part of a community or other users interested in receiving substantially, real-time alert related information alerts in text and/or audio format. For example, users belonging to a community interested in emergency announcements receive the same substantially, real-time alerts. Default settings may be used with this particular embodiments such that each user receives alerts at the same time over the same staggered time period (e.g., once an hour, every thirty minutes, once per day, etc.). Single users may also utilize default settings without joining a community of users. Users wanting a different scheme can customize the alerts as shown via the example screen shots illustrated in
FIGS. 3( a)-3(d). - In another embodiment, the
system 200 can be configured to allow a user to send a message to a social media account (e.g., Twitter®, Facebook®, etc.) along with an attached audio message from the user. In another embodiment, the user may send an alert to one or more friends with an audio message (e.g., “tornados in southwest Kansas, watch out!”). In this embodiment, the system 200 may prompt the user and/or a home page may depict an icon which allows the user to verbalize a message for delivery to one or more intended recipients along with an alert. The voice recognition engine 220 can generate an audio file representing the user's message, which can be an actual voice or a computer-generated voice, and store the audio file in the one or more databases 230, linking it to the other user's remote electronic device. System 200 can then transmit the audio file along with the alert (or another alert) to one or more intended recipients via a social media account. - The intended recipients may be stored by the system 200 previously, or may be inputted at the time the message is to be sent. In one embodiment, the user is able to select from a list of friends established within the application software by the user. Once a voice or verbal personal message is created, the personal message can be saved in, for example, database 230 and linked to the user. When the runtime application 215 next communicates with the database 230, the alert (or other information) can be transmitted along with the personal message. -
FIG. 4 illustrates a high-level flow chart of operations depicting logical operations of a method 400 for automatically providing instant voice alerts to remote electronic devices, in accordance with an embodiment. As indicated at block 402, the process can be initiated. Thereafter, as illustrated at block 404, an activity can be detected utilizing one or more sensors. Then, as indicated at block 406, a text message indicative of such activity can be generated. For example, a determination by a particular sensor that the backdoor of a particular house has been opened would generate text stating “Backdoor is open”. Following the generation of such text, typically in the form of a text message or other appropriate text data file, the text message can be converted, as depicted at block 408, into a digitized voice alert via, for example, the text-to-speech engine 225 shown in FIG. 1. - Following the processing of the operation shown at block 408, a test can be performed, as indicated at block 410, to determine if the digitized voice message should be broadcast in another language. If it is determined that the voice alert should be broadcast in another language (e.g., following broadcast of the message in the initial language), then, as described at block 411, the digitized voice message can be converted into a pre-selected or specified language and, as indicated at block 412, transmitted through a network (e.g., network 501 shown in FIG. 13) for broadcast to one or more remote electronic devices which communicate with such a network for automatic audio announcement of the digitized voice alert (e.g., in one or multiple languages) through the remote electronic device (e.g., a speaker integrated with a Smartphone). If, however, it is determined that conversion of the digitized voice message into another language is not necessary, the digitized voice message is transmitted in the original language through the network (e.g., network 501 shown in FIG. 13) for broadcast to one or more remote electronic devices that communicate with the network for the playing of the automatic audio announcement (e.g., the voice alert) through the remote electronic device(s). The process can then terminate, as indicated at block 414. - In some embodiments, the aforementioned digitized voice message can be broadcast through the one or more remote electronic devices in one or more languages based on a language setting in a user profile. The one or more languages can be pre-selected in the user profile. In other embodiments, the user profile can be established as a user preference via a service during a set up of the one or more remote electronic devices. The user profile can, in some embodiments, be established as a user preference via an intelligent router during a set up of the one or more remote electronic devices.
In some embodiments, during a set up of the one or more remote electronic devices, the one or more languages can be selected from a plurality of different languages.
- In general, the digitized voice message can be converted into the particular language specified by a user via the one or more remote electronic devices. The disclosed embodiments, including the methods, systems, and processor-readable media discussed herein, when implemented, will vocalize, for example, regional, national, government, presidential, and other alerts instantly and automatically, and in various languages that automatically follow the base-language utterance (e.g., English in the United States, Spanish in Mexico, French in France, etc.).
- Note that in some embodiments, the aforementioned one or more sensors can communicate with a server that communicates with the network (e.g.,
network 501 shown in FIG. 13). In other embodiments, the one or more sensors can communicate with an intelligent router (e.g., a server, a packet router, etc.) that communicates with the network. It can be appreciated that many types of intelligent routers (e.g., intelligent or smart wireless routers) can be implemented in accordance with an embodiment. Examples of intelligent routers are shown in FIG. 13. - In yet other embodiments, the sensor or sensors (e.g., a group of networked sensors) can communicate with one another through the network. In other embodiments, each of the one or more sensors can comprise a self-contained computer that communicates with the network (e.g.,
network 501 shown in FIG. 13). Note that such sensors can be located in, for example, a residence, a business, an enterprise, a government entity (e.g., a secure facility, military base, etc.), and so forth. -
FIG. 5 illustrates a high-level flow chart of operations depicting logical operations of a method 420 for automatically providing instant voice alerts to remote electronic devices from incidents detected within a security system, in accordance with an embodiment. As indicated at block 422, the process can be initiated. Thereafter, as illustrated at block 424, a wireless data network can be provided which includes and/or communicates with one or more of the sensors in communication with the wireless data network (e.g., network 501 shown in FIG. 13). The sensors can be located within, for example, a residence, a building, a government agency, a secure military facility, etc. Next, as depicted at block 426, the one or more sensors in and/or associated with the residence can detect an activity (e.g., a window opens, a door opens, smoke is detected, etc.). - Assuming that the sensor or sensors detect an activity, then as illustrated at
block 428, a text message can be generated, which is indicative of the activity (e.g., "Smoke Detected in Living Room"). Thereafter, as illustrated at block 430, the text message can be converted into a digitized voice alert via, for example, the text-to-speech engine 225 shown in FIG. 1. Next, as depicted at block 432, the digitized voice alert can be transmitted through a network (e.g., a cellular communications network) for broadcast to one or more remote electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the one or more remote electronic devices (e.g., a speaker integrated with a Smartphone, laptop computer, automobile, etc.). Note that the aforementioned operations involving language pre-selection, language conversion, etc., shown in FIG. 4 can be adapted for use with the methodology shown in FIG. 5. The process shown in FIG. 5 can then terminate, as depicted at block 434. -
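Blocks 426-432 describe a small event pipeline: a sensor event yields alert text, the text is synthesized to audio, and the audio is handed to the network for broadcast. The sketch below is a hedged illustration, not the disclosed system itself; `text_to_speech` is a mock stand-in for an engine such as text-to-speech engine 225, and the callback-based broadcast is an assumed design.

```python
def text_to_speech(text):
    """Stand-in for a real text-to-speech engine; returns mock audio bytes."""
    return f"<audio:{text}>".encode()

def on_sensor_event(location, event, broadcast):
    """Blocks 428-432: generate alert text, convert it to a digitized
    voice alert, and hand the audio to the network for broadcast."""
    text = f"{event} in {location}"        # block 428: text message
    voice_alert = text_to_speech(text)     # block 430: text-to-speech
    broadcast(voice_alert)                 # block 432: transmit to devices
    return text
```

For example, `on_sensor_event("Living Room", "Smoke Detected", sent.append)` produces the alert text "Smoke Detected in Living Room" and appends its mock audio rendering to `sent`.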
FIG. 6 illustrates a high-level flow chart of operations depicting logical operations of a method 440 for providing automatic and instant emergency voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment. The method 440 provides for an instant automatic delivery of a voice alert to one or more remote electronic devices via a network such as, for example, network 501 discussed herein. Method 440 takes into account several scenarios. The first scenario involves those who are unable to look at their instant text alert, such as when driving, or who are otherwise unable to do so without being distracted. This is not possible with the current PLAN (e.g., see the description of PLAN in greater detail herein), which sends text only to wireless carriers, whereas, with the approach of the disclosed embodiments, users can hear the message without doing anything. They can also hear the voice alert in sequential languages, again without doing anything, as described further herein. Second, the disclosed embodiments, such as that of method 440, handle the situation of those who are without a phone and are reading the TEXT on their computers, and so forth. Such individuals are now able to HEAR the PLAN Alert via an approach such as that of method 440. They can hear the voice alert without doing anything and, as also indicated herein, hear the voice alert in sequential languages without doing anything. Additionally, a live utterance (e.g., announcement) can be instantly converted into a digitized voice alert for automatic delivery in the manner indicated above, and also in the manner described herein with respect to, for example, the methodology of FIGS. 14-15. - As indicated at
block 442, the process can be initiated. Next, as described at block 444, an operation can be implemented for determining an emergency situation affecting a specified region and requiring emergency notification of the emergency to wireless hand held device users in the specified region. Thereafter, as illustrated at block 446, a step can be implemented for generating a text message indicative of the emergency situation (e.g., "Flooding, Leave to Higher Ground!"). Then, as indicated at block 448, an operation can be implemented for converting the text message indicative of the emergency situation into a digitized voice alert (e.g., text-to-voice). The conversion operation depicted at block 448 can be provided by, for example, the text-to-speech engine 225 shown in FIG. 1. - Following the processing of the operation shown at
block 448, the digitized voice alert can be transmitted, as depicted at block 450, through specific towers of a cellular communication network (e.g., network 501 shown in FIG. 13) in the specified region for distribution, as shown next at block 452, of an automatic audio announcement of the digitized voice alert to all remote electronic devices in communication with the specific towers in the specified region. Note that the aforementioned operations involving language pre-selection, language conversion, etc., shown in FIG. 4 can be adapted for use with the methodology shown in FIG. 6. The process shown in FIG. 6 can then terminate, as depicted at block 454. - Note that the instructions described herein such as, for example, the operations/instructions depicted in
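The regional targeting at blocks 450-452 can be modeled as filtering towers by the affected region and fanning the alert out to every device attached to those towers. This is a hypothetical sketch; the tower/device dictionary layout is an assumption for illustration, not a carrier's actual data model.

```python
def broadcast_regional_alert(alert_audio, towers, region):
    """Blocks 450-452: transmit the digitized voice alert only through
    towers serving the specified region, reaching every device in
    communication with those towers."""
    deliveries = []
    for tower in towers:
        if tower["region"] != region:      # skip towers outside the region
            continue
        for device in tower["devices"]:
            deliveries.append((device, alert_audio))
    return deliveries
```

Devices attached to towers outside the specified region receive nothing, mirroring the geographically scoped distribution the flow chart describes.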
FIGS. 4, 5, 6, 14, 15, and 16, and any other processes described herein (e.g., processes shown in FIGS. 1-2) can be implemented in the context of hardware and/or software. In the context of software, such operations/instructions of the methods described herein can be implemented as, for example, computer-executable instructions such as program modules being executed by a single computer or a group of computers or other processors and processing devices. In most instances, a "module" constitutes a software application. - Generally, program modules include, but are not limited to, routines, subroutines, software applications, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types and instructions. Moreover, those skilled in the art will appreciate that the disclosed method and system may be practiced with other computer system configurations such as, for example, hand-held devices, multi-processor systems, data networks, microprocessor-based or programmable consumer electronics, networked PCs, minicomputers, mainframe computers, servers, and the like.
- Note that the term module as utilized herein may refer to a collection of routines and data structures that perform a particular task or implement a particular abstract data type. Modules may be composed of two parts: an interface, which lists the constants, data types, variables, and routines that can be accessed by other modules or routines; and an implementation, which is typically private (accessible only to that module) and which includes source code that actually implements the routines in the module. The term module may also simply refer to an application, such as a computer program designed to assist in the performance of a specific task such as word processing, accounting, inventory management, etc. Additionally, the term "module" can also refer in some instances to a hardware component such as a computer chip or other hardware.
-
FIG. 7 illustrates a block diagram of a system 490 for automatically providing instant voice alerts to remote electronic devices, in accordance with an embodiment. In general, system 490 includes a processor 480 and a data bus 481 coupled to the processor 480. System 490 can also include a computer-usable medium 482 embodying, for example, computer code 484 (e.g., in the form of a software module or group of software modules). The computer-usable medium 482 is generally coupled to or can communicate with the data bus 481. The computer program code or module 484 can be configured to comprise instructions executable by the processor and configured for implementing, for example, the method 400 described above. Such a method 400 can include detecting an activity utilizing at least one sensor; generating and converting a text message indicative of the activity into a digitized voice alert; and transmitting the digitized voice alert through a network (e.g., network 501 shown in FIG. 13) for broadcast to one or more remote electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the one or more remote electronic devices. -
FIG. 8 illustrates a block diagram of a system 492 for automatically providing instant voice alerts to remote electronic devices from incidents detected within a security system, in accordance with an embodiment. In general, system 492 includes a processor 480 and a data bus 481 coupled to the processor 480. The system 492 can also include a computer-usable medium 482 embodying, for example, computer code 484 (e.g., in the form of a module or group of modules). The computer-usable medium 482 is also generally coupled to or in communication with the data bus 481. The computer program code or module 484 can be configured to comprise instructions executable by the processor and configured for implementing, for example, the method 420 described above. Such a method 420 can include, for example, providing a wireless data network (e.g., a cellular network, a WLAN, etc.) including one or more sensors in communication with the wireless data network within a location (e.g., residence, building, military facility, government location, etc.); detecting an activity utilizing one or more sensors associated with the location; generating and converting a text message indicative of the activity into a digitized voice alert; and transmitting the digitized voice alert through a network (e.g., network 501 shown in FIG. 13) for broadcast to one or more remote electronic devices that communicate with the network (e.g., network 501) for an automatic audio announcement of the digitized voice alert through the remote electronic device(s). -
FIG. 9 illustrates a block diagram of a system 494 for automatically providing instant emergency voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment. In general, system 494 includes a processor 480 and a data bus 481 coupled to the processor 480. The system 494 can also include a computer-usable medium 482 embodying, for example, computer code 484 (e.g., in the form of a module or group of modules). The computer-usable medium 482 is also generally coupled to or in communication with the data bus 481. The computer program code or module 484 can be configured to comprise instructions executable by the processor and configured for implementing, for example, the method 440 described above. Such a method 440 can include, for example, determining an emergency situation affecting a specified region and requiring emergency notification of the emergency to wireless hand held device users in the specified region; generating and converting a text message indicative of the emergency situation into a digitized voice alert; and transmitting the digitized voice alert through specific towers of a cellular communications network in the specified region for distribution of an automatic audio announcement of the digitized voice alert to all remote electronic devices in communication with the specific towers in the specified region. - It can be appreciated that in some embodiments, the computer-usable medium 482 discussed herein can be, for example, downloadable application software ("app") retrieved from a server such as, for example, server 231 shown in
FIG. 13, and then stored in a memory of a user device such as, for example, a remote electronic device such as computer 198, the Smartphones, Tablet 202, television 203, automobile 204, etc. In other embodiments, the computer-usable medium 482 may be a computer chip or other electronic module that can actually be incorporated into or added to remote electronic devices such as computer 198, the Smartphones, Tablet 202, television 203, automobile 204, etc., either during manufacture or as after-market type modules. -
FIG. 10 illustrates a block diagram of a processor-readable medium 490 that can store code 484 representing instructions to cause a processor to perform a process to, for example, provide automatic and instant voice alerts to remote electronic devices, in accordance with an embodiment. The code 484 can comprise code (e.g., a module or group of modules) to perform the instructions of, for example, method 400, including code to detect an activity utilizing one or more sensors; generate and convert a text message indicative of the activity into a digitized voice alert; and transmit the digitized voice alert through a network (e.g., network 501 shown in FIG. 13) for broadcast to one or more remote electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the one or more remote electronic devices. -
FIG. 11 illustrates a block diagram of a processor-readable medium 492 that can store code representing instructions to cause a processor to, for example, perform a process to provide automatic and instant voice alerts to remote electronic devices from incidents detected within a security monitoring system, in accordance with an embodiment. Such code can comprise code 484 (e.g., a module or group of modules, etc.) to perform the instructions of method 420 such as, for example, to provide a wireless data network including one or more sensors in communication with the wireless data network within a location such as a residence, building, business, government facility, etc.; detect an activity utilizing one or more sensors associated with the location; generate and convert a text message indicative of the activity into a digitized voice alert; and transmit the digitized voice alert through a network (e.g., network 501 shown in FIG. 13) for broadcast to one or more remote electronic devices that communicate with the network for an automatic audio announcement of the digitized voice alert through the one or more remote electronic devices. -
FIG. 12 illustrates a block diagram of a processor-readable medium 494 that can store code representing instructions to cause a processor to perform, for example, a process to automatically provide instant emergency voice alerts to wireless hand held device users in a specified region, in accordance with an embodiment. Such code 484 (e.g., a module) can comprise code to perform the instructions of, for example, method 440, including code to determine an emergency situation affecting a specified region and requiring emergency notification of the emergency to wireless hand held device users in the specified region; generate and convert a text message indicative of the emergency situation into a digitized voice alert; and transmit the digitized voice alert through specific towers of a cellular communications network in the specified region for distribution of an automatic audio announcement of the digitized voice alert to all remote electronic devices in communication with the specific towers in the specified region. - It can be appreciated that in some embodiments, the processor-
readable media 490, 492, and/or 494 discussed herein can be, for example, downloadable application software ("app") retrieved from a server such as, for example, server 231 shown in FIG. 13, and then stored in a memory of a user device such as, for example, a remote electronic device such as computer 198, the Smartphones, Tablet 202, television 203, automobile 204, etc. In other embodiments, the processor-readable media 490, 492, and/or 494 may be a computer chip or other electronic module that can actually be incorporated into or added to remote electronic devices such as computer 198, the Smartphones, Tablet 202, television 203, automobile 204, etc., either during manufacture or as after-market type modules. -
FIG. 13 illustrates a voice alert system 500 that can be implemented in accordance with the disclosed embodiments. It can be appreciated that one or more of the disclosed embodiments can be utilized to implement various aspects of system 500 shown in FIG. 13. System 500 generally includes a network 501 that can communicate with one or more of the remote electronic devices such as computer 198, the Smartphones, a tablet computing device 202, a television 203, an automobile 204, etc. One or more servers, such as server 231, can also communicate with network 501. The database 230 (and other databases) can communicate with server 231 (via a network connection or other communication means) or is preferably stored in a memory of server 231. It can be appreciated that server 231 may be a standalone computer server or may be composed of multiple servers that communicate with one another and with network 501. Also, in some embodiments server 231 of FIG. 13 and server 205 of FIG. 1 may actually be the same server/computer, depending upon design considerations and goals. - Additionally, one or
more sensors 512 located in, for example, a residence 511, can communicate with the network 501 individually or may be interlinked with one another in the context of a home based network (e.g., a Wireless LAN) that communicates with the network 501. Similarly, one or more sensors 514 can be located at key positions within a building 513. Such sensors 514 may be interlinked with one another or communicate individually with the network 501, either directly or via a network located in the building 513, such as a Wireless LAN. In some cases, the one or more sensors 512 can communicate with an intelligent router 233 via, for example, a WLAN. The communications arrows 237 and 239 shown in FIG. 13 represent, for example, wireless communications means (e.g., a WLAN or other appropriate wireless network) or direct (e.g., Ethernet) communications means, depending on particular implementations. The one or more sensors 514 can also communicate with an intelligent router 235 via communications means 239, similar to the communications configuration involving the intelligent router 233, the one or more sensors 512, and communications means 237. Although not specifically shown in FIG. 13, it can be appreciated that each of the intelligent routers 233 and/or 235 can also communicate with the network 501. In some cases, for example, server 231 (or other servers in communication with network 501) can function as an intelligent router, depending upon design considerations. - A variety of enterprises, businesses, government agencies, and so forth can also communicate with
network 501. For example, local or state emergency services 510 (e.g., Fire Department, Police Department, etc.) can communicate with network 501. A Homeland Security Agency 502 (e.g., including FEMA, etc.) can also communicate with network 501. A 911 Organization 504 can additionally communicate with network 501. A military organization (e.g., U.S. Air Force, U.S. Army, U.S. Navy, Department of Defense, etc.) can also communicate with network 501. Additionally, a security monitoring enterprise 508 (e.g., Sonitrol, Brinks, etc.) can also communicate with network 501. In some embodiments, the security monitoring enterprise 508 may monitor house 511 and/or building 513, respectively, via one or more sensors 512 and/or 514, depending upon the implemented embodiment. -
Network 501 can be, for example, a network such as the Internet, which is the well-known global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail. -
Network 501 can also be, for example, a wireless communications network such as, for example, a cellular communications network. A cellular communications network is a radio network distributed over land areas called cells, each served by one or more fixed-location transceivers known as a cell site or base station. When joined together, these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission. In some embodiments, such as in a limited geographical area, network 501 may be implemented as a WiFi network such as, for example, an IEEE 802.11 type network or WLAN (Wireless Local Area Network), or as so-called Super Wi-Fi, a term coined by the U.S. Federal Communications Commission (FCC) to describe proposed networking in the UHF TV band in the US, and so forth. -
Network 501 can also be configured to operate as, for example, a PLAN (Personal Localized Alert Network) for the transmission of local emergency services alerts, Amber alerts, Presidential messages, government notices, etc. Assuming network 501 is either a configured PLAN or equipped with PLAN capabilities, authorized government officials can utilize network 501 as a PLAN to send emergency text messages to participating wireless companies, which will then use their cell towers to forward the messages to subscribers in the affected area. Such text messages can be converted to synthesized voice/speech via, for example, text-to-speech engine 225, either before being sent through the network 501, or via a server such as server 231 (and/or other servers), or via the receiving remote electronic device such as, for example, the remote electronic devices that communicate with network 501. - A variety of different types of text message alerts can be generated and converted to synthesized speech (e.g., "natural" voice) as indicated herein. Most security system sensors provide a simple switched output that changes state based on whether the sensor has been tripped or not. When connected in a circuit, such sensors behave just like a switch that is activated automatically, which makes them extremely easy to connect to the same (text-to-speech) technology. Below is a sampling of "Instant Voiced Alerts" that can be sent directly to a remote electronic device such as, for example, a smartphone, computer, or iPad, and/or to a security center (e.g., security monitoring 508) or directly to a security patrol car.
- Home: “Activity has just been detected behind your back kitchen door.” Warehouse: “Motion has been detected in
Area 4. Camera has now been triggered for recording.” - Bank: “
Wired Sensor 3 has lost its signal. Parking Entrance has now been permanently disarmed.” - School: “Campus Motion Detector has just been triggered outside the windows of the Female Lounge Area.”
- Restaurant: “Freezer Window Alarm has triggered. Please call ADT Home Security 505-717-0000 if accidental.”
- Airport: “Infra-red beam on incoming oversized baggage belt 8 has been broken and then manually reset.”
- Police: “Danger: Road Closing Alert for Bryn Mawr Drive between Silver Avenue and Coal Avenue.”
- Public Service: “Skywarn Alert—Tornado has moved east toward Albuquerque and stalled over the area. Winds 40 mph.”
- Hospital: “Smoke is being detected in the Seniors Ward. Automatic alarm has not sounded.”
- Medical: “This is your Medical Monitoring System informing you that help is on the way.”
- Military: “Kirtland underground weapons sensors not complying with commands from the 377th Air Base Wing.”
- Retail: “EAS merchandise tag #Slk221 on Armani Suit has not been deactivated.”
- Airline/Travel: “Jet Blue Air Flight 355 JFK to Burbank has JUST arrived AT four twenty seven
pm BAGGAGE CLAIM 3." - The voice alerts can be rendered in, for example, a dozen languages and also in different voices. In the context of an automobile scenario, for example, once the alert is routed to a Bluetooth® application (e.g., a Bluetooth® connection), it connects the user's remote electronic device (e.g., Smartphone) to a stereo of the automobile for playing of the voice alert. In the same automobile scenario, and accessing a PLAN network as described earlier herein, if a user/driver is driving during, for example, a national emergency in which the President of the United States addresses the nation, the Bluetooth® connection in the automobile would allow the user/driver to instantly hear the President, in some embodiments in consecutive multiple languages, and without visually distracting the user/driver while he or she continues to operate the automobile.
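Because most sensors expose a simple switched output, alerts like those sampled above can be generated by watching for the off-to-on transition and filling a per-sensor message template. The sketch below is a minimal illustration under assumed conventions (the template format and class layout are hypothetical), not the patented implementation.

```python
class SwitchedSensor:
    """Models a sensor's simple switched output: the alert text is emitted
    only on the transition from 'not tripped' to 'tripped'."""

    def __init__(self, alert_template, location):
        self.alert_template = alert_template
        self.location = location
        self.tripped = False

    def update(self, tripped):
        """Return alert text on a rising edge, otherwise None."""
        fired = tripped and not self.tripped
        self.tripped = tripped
        if fired:
            return self.alert_template.format(location=self.location)
        return None
```

A door sensor configured with the template "Activity has just been detected behind your {location}." emits that message once when first tripped, and stays silent while the state is unchanged.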
- In general, it can be appreciated that the disclosed embodiments, including the methods, systems, and processor-readable media discussed herein, when implemented, will vocalize, for example, regional, national, government, presidential, and other alerts instantly and automatically, and in various languages which would automatically follow the base language (e.g., English) utterance.
-
FIG. 14 illustrates a high-level flow chart of logical operations of a method 401 for providing automatic and instant digitized voice alerts and converting such digitized voice alerts into more than one language for broadcast of the digitized voice alert in consecutively different languages through one or more remote electronic devices, in accordance with an embodiment. Note that the operational steps shown in FIG. 14 are similar to those depicted in FIG. 4, except for the differences shown at blocks 411 and 413. That is, an operation can be implemented, as shown at block 411, to convert the digitized voice alert into multiple languages (e.g., English to Spanish, Italian, Vietnamese, etc.). - Then, as indicated at
block 413, the voice alert can be instantly broadcast consecutively in different languages (e.g., English followed by Spanish, Italian, Vietnamese, and then back to English again). Thus, a loop of voice alerts in different languages can be provided. In some embodiments, a live utterance can be instantly converted into a digitized voice alert for automatic delivery in a selected series of languages following the base language (e.g., English). The combined digitized voice alert can then be instantly transmitted through, for example, network 501 for broadcast through one or more of the remote electronic devices. - Note that the transmission of text messages and text-to-speech conversion is one approach for broadcasting voice alerts. Another approach, and thus another embodiment, involves alert messages (e.g., a live speech or live announcement) sent directly from a phone call. For example, in the case of a national emergency or national announcement, the President can speak directly into a telephone (e.g., cell phone, landline, Internet Telephony based phone, etc.) and speak an utterance or announcement such as "This is a national emergency". The voice of the President can thus be captured and converted into a digitized voice alert (e.g., a wave file or other audio file) and then transmitted through, for example,
network 501 to one or more of the remote electronic devices that communicate with the network 501. -
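The consecutive-language broadcast of blocks 411 and 413 can be sketched as concatenating per-language renderings of the same alert into one playlist, base language first. This is an illustrative sketch only; the `synthesize` callback is an assumed stand-in for the combined translation and text-to-speech services.

```python
def sequential_language_playlist(alert_text, languages, synthesize,
                                 base_language="en"):
    """Blocks 411 and 413: render the alert in the base language first,
    then in each additional pre-selected language, so the combined alert
    plays consecutively in different languages."""
    ordered = [base_language] + [l for l in languages if l != base_language]
    return [synthesize(alert_text, language) for language in ordered]
```

Playing the returned list in order (and optionally repeating it) yields the loop of voice alerts in different languages described above.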
FIG. 15 illustrates a high-level flow chart of operations depicting logical operations of a method 530 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment. The methodology shown in FIG. 15 does not utilize text-to-speech conversion, but instead relies on the original live voice/utterance itself. In general, a speaker (e.g., the President) speaks directly into a voice capturing device such as, for example, a cell phone, landline phone, etc., as indicated at block 536. Then, as illustrated at block 538, the voice of the speaker (e.g., a live announcement) is captured. Thereafter, as shown at block 540, the captured utterance (e.g., live announcement) is automatically converted into a digitized voice message that is indicative of the live announcement (e.g., a digital audio recording of the live announcement) in response to capturing the live announcement. - Next, as depicted at
block 542, the digitized voice message (of the captured utterance) is associated with a text message, which may or may not contain text. In some embodiments, the digitized voice message can be attached to the text message or may be bundled with the text message. Thereafter, as described at block 544, the digitized voice message can be automatically transmitted through network 501 to one or more remote electronic devices that communicate with the network 501. Then, as shown at block 546, a test can be performed to automatically confirm if the text message (which includes the digitized voice message) has been received at one or more of the remote electronic devices. - Such a test can include, in some embodiments, automatically detecting header information (e.g., a packet header) to determine point of origin and point of transmission (e.g., the remote electronic device) to assist in determining if the text message (with the digitized voice message attached) is received at the device. If so, then the process continues, as indicated at
block 550. If not, a test determines whether or not to transmit again or "try again", as shown at block 543, and the operation is repeated. Assuming it is determined not to "try again" (e.g., after a certain amount of time or a certain number of repeat transmissions), the process can then terminate, as described at block 556. Assuming, however, that the answer is "Yes" in response to the test indicated at block 546 and it is confirmed that the text message is received at the device, then as depicted at block 550, the digitized voice message associated with and/or attached to the text message is automatically opened, and then, as indicated at block 554, the digitized voice message is automatically played (e.g., via a speaker) via the device. The process can then terminate, as shown at block 556. - Thus, the text message (with the attached/associated digitized voice message) can be transmitted with the digitized voice message through
network 501 for broadcast to the one or more electronic devices for automatic playback of the digitized voice message through the one or more remote electronic devices upon receipt of the text message with the digitized voice message at the device(s). -
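The end-to-end flow of FIG. 15 (capture at blocks 536-540, bundling at block 542, transmit/confirm/retry at blocks 544-556) can be sketched as below. This is a hedged illustration: the mock audio encoding, the message dictionary layout, and the callback-based delivery are all assumptions for the sketch, not the disclosed system's actual formats.

```python
def digitize_announcement(utterance):
    """Blocks 536-540: capture a live utterance and convert it into a
    digitized voice message (mock audio bytes here)."""
    return f"<recording:{utterance}>".encode()

def bundle_with_text(voice_message, text=""):
    """Block 542: associate the digitized voice message with a text
    message, which may or may not contain text."""
    return {"text": text, "voice": voice_message}

def deliver(message, send, confirm_receipt, max_attempts=3):
    """Blocks 544-556: transmit, confirm receipt (e.g., via header
    checks), and retry a bounded number of times before giving up."""
    for _ in range(max_attempts):
        send(message)
        if confirm_receipt(message):
            return True     # device opens and plays (blocks 550, 554)
    return False            # terminate without confirmation (block 556)
```

The bounded retry mirrors the "try again" branch at block 543: after a set number of repeat transmissions without confirmation, the process gives up and terminates.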
FIG. 16 illustrates a high-level flow chart of operations depicting logical operations of a method 531 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment. Note that the method 531 shown in FIG. 16 is similar to the method 530 depicted in FIG. 15, the difference being the addition of a test to determine if a call (e.g., a phone call) or other activity is in progress at the device at the time of receipt of the text message (with its attached/associated digitized voice message). If a call is in progress, as shown at block 548, then as indicated at block 549, the call can be interrupted and the text message with its attached/associated digitized voice message (e.g., an announcement from the President) pushed ahead of the current call to allow the digitized voice message to be automatically opened via the device, as shown at block 550. Assuming a call is not in progress, the digitized voice message is simply opened and then automatically played, as shown at block 554, via the device and, in the case of an interrupted call, takes precedence over the interrupted call. Thus, the operations shown in FIG. 16 allow for an automatic interruption of a current call in each remote electronic device in order to push the text message with the digitized voice message through to each remote electronic device for automatic playback of the digitized voice message. - The digitized voice message can, in some embodiments, be automatically opened in response to receipt of the text message with the digitized voice message at the one or more remote electronic devices, and automatically played through respective speakers associated with each remote electronic device in response to automatically opening the digitized voice message. In other embodiments, the identity of the speaker (e.g., the President) associated with the live announcement can be authenticated via, for example, the
voice recognition engine 220 shown in FIG. 1, prior to automatically converting the live announcement into the digitized voice message indicative of the live announcement. -
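As a concrete illustration of the FIG. 16 flow, the interrupt-and-play test can be sketched as follows. This is a minimal sketch in Python; the `Device` class and its fields are illustrative assumptions, not part of the disclosed system:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Device:
    """Illustrative stand-in for a remote electronic device."""
    call_in_progress: bool = False
    played: List[str] = field(default_factory=list)

def deliver_voice_alert(device: Device, digitized_voice_message: str) -> str:
    """Sketch of the FIG. 16 test: if a call is in progress, interrupt it and
    push the message ahead of the call; either way, the digitized voice
    message is automatically opened and played through the device."""
    interrupted = device.call_in_progress
    if interrupted:
        device.call_in_progress = False  # the alert takes precedence over the call
    device.played.append(digitized_voice_message)  # auto-open and auto-play
    return "interrupted-then-played" if interrupted else "played"
```

In this sketch the return value only records which branch of the flow chart was taken; a real device would resume or drop the interrupted call according to carrier policy.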
FIG. 17 illustrates a high-level flow chart of operations depicting logical operations of a method 533 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment. Note that the methodology of FIG. 17 is similar to that of FIGS. 15-16, the difference being that the method 533 of FIG. 17 does not utilize a text message transmission. Instead, in method 533, the original voice announcement or utterance is captured and configured in a digitized voice alert format and transmitted and pushed through via network 501 to the remote electronic devices. -
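The speaker-authentication step mentioned above (via, e.g., a voice recognition engine such as engine 220) can be sketched as a gate in front of the conversion step. This is a toy illustration only: the hash-based "voiceprint" and all class and method names are assumptions standing in for a real voice recognition engine, which would compare speaker features rather than raw bytes:

```python
import hashlib

def voiceprint(audio: bytes) -> str:
    """Toy 'voiceprint' for illustration: a stable fingerprint of the audio.
    A real engine would extract and match speaker features instead."""
    return hashlib.sha256(audio).hexdigest()

class SpeakerAuthenticator:
    def __init__(self):
        self.enrolled = {}  # speaker name -> enrolled voiceprint

    def enroll(self, speaker: str, sample: bytes) -> None:
        self.enrolled[speaker] = voiceprint(sample)

    def authenticate_and_digitize(self, speaker: str, audio: bytes) -> dict:
        """Convert the live announcement into a digitized voice message only
        after the claimed speaker's identity checks out."""
        if self.enrolled.get(speaker) != voiceprint(audio):
            raise PermissionError("speaker not authenticated")
        return {"speaker": speaker, "digitized_voice_message": audio}
```

The design point is simply that authentication happens before conversion and transmission, so an unauthenticated utterance never enters the broadcast path.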
FIG. 18 illustrates a high-level flow chart of operations depicting logical operations of a method 535 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment. The methodology of FIG. 18 is similar to that of FIGS. 15-17, the difference being that the method 535 shown in FIG. 18 includes a language conversion and broadcast feature, as indicated by the corresponding blocks depicted in FIG. 18. -
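The language conversion and broadcast feature of method 535 can be sketched as follows. The lookup-table "translation" is purely illustrative, standing in for whatever conversion engine an implementation would use; per-device languages come from each user profile:

```python
# Illustrative translation table; a deployed system would call a
# machine-translation or language-conversion service instead.
TRANSLATIONS = {
    ("Evacuate now", "es"): "Evacúe ahora",
    ("Evacuate now", "fr"): "Évacuez maintenant",
}

def broadcast_converted(message: str, device_languages: dict) -> dict:
    """Sketch of the language conversion and broadcast feature: each device
    receives the alert rendered in the language set in its user profile,
    falling back to the original text when no conversion is available."""
    return {
        device_id: TRANSLATIONS.get((message, lang), message)
        for device_id, lang in device_languages.items()
    }
```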
FIG. 19 illustrates a block diagram of a system 560 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment. System 560 generally includes a processor 480 and a data bus 481 coupled to the processor 480. System 560 can also include a computer-usable medium 482 embodying computer code 484 (or a module or group of modules). The computer-usable medium 482 is generally coupled to the data bus 481, and the computer program code 484 comprises instructions executable by the processor 480 and configured for performing the instructions/operations of, for example, the methods described herein with respect to FIGS. 14-18. - In some embodiments, the computer program code 484 of FIG. 19 can comprise instructions executable by processor 480 and configured for capturing a live announcement; automatically converting the live announcement into a digitized voice message indicative of the live announcement, in response to capturing the live announcement; associating the digitized voice message with a text message to be transmitted through network 501 to a plurality of remote electronic devices that communicate with the network 501; and transmitting the text message with the digitized voice message through network 501 for broadcast to the plurality of electronic devices for automatic playback of the digitized voice message through at least one remote electronic device among the plurality of remote electronic devices upon receipt of the text message with the digitized voice message at the at least one remote electronic device among the plurality of remote electronic devices. - In other embodiments, the
code 484 may comprise instructions configured for automatically interrupting a current call in each remote electronic device among the plurality of remote electronic devices in order to push the text message with the digitized voice message through to each of the plurality of remote electronic devices for automatic playback of the digitized voice message via the plurality of remote electronic devices. In other embodiments, the code 484 may comprise instructions for automatically opening the digitized voice message in response to receipt of the text message with the digitized voice message at the at least one remote electronic device among the plurality of remote electronic devices; and automatically playing the digitized voice message through a speaker associated with the at least one remote electronic device in response to automatically opening the digitized voice message. - In yet other embodiments, the
code 484 may comprise instructions configured for authenticating an identity of a speaker associated with the live announcement prior to automatically converting the live announcement into the digitized voice message indicative of the live announcement. Authentication may occur, for example, automatically utilizing a voice recognition engine. - In still other embodiments, instructions of the
code 484 can be further configured for broadcasting the digitized voice message through the at least one remote electronic device in at least one language based on a language setting in a user profile. In yet other embodiments, instructions of the code 484 can be further configured for pre-selecting the at least one language in the user profile. In other embodiments, instructions of the code 484 can be configured for establishing the user profile as a user preference via a server during a set up of the at least one remote electronic device. Additionally, in other embodiments, instructions of the code 484 can be configured for establishing the user profile as a user preference via an intelligent router during a set up of the at least one remote electronic device. In still other embodiments, the code 484 can include instructions configured during a set up of the at least one remote electronic device for selecting the at least one language from a plurality of different languages. In other embodiments, the code 484 can include instructions configured for converting the digitized voice message into more than one language from among a plurality of languages for broadcast of the digitized voice alert in consecutively different languages through the at least one remote electronic device. -
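Broadcasting the alert in consecutively different languages, per the languages pre-selected in a user profile, reduces to producing one rendering per language and playing them in order. A minimal sketch, where `translate` stands in for an assumed conversion routine and is supplied by the caller:

```python
def consecutive_language_renderings(message: str, preselected_languages, translate) -> list:
    """Sketch of broadcasting the digitized voice alert in consecutively
    different languages: one rendering per pre-selected language, in order.
    `translate` is an assumed callable (message, lang) -> str."""
    return [translate(message, lang) for lang in preselected_languages]
```

A device would then play the returned renderings back-to-back through its speaker, so a bilingual profile hears the same alert twice, once per language.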
FIG. 20 illustrates a block diagram of a processor-readable medium 562 for providing an instant voice announcement automatically to remote electronic devices, in accordance with an embodiment. Processor-readable medium 562 can store code representing instructions to cause the processor 480 to perform a process to automatically provide an instant voice announcement to remote electronic devices. The code 484 can comprise code to implement the instructions/operations of, for example, the methods described herein with respect to FIGS. 14-18. - Such a code 484 (or a module or group of modules, routines, subroutines, etc.) can comprise code to, for example, capture a live announcement; automatically convert the live announcement into a digitized voice message indicative of the live announcement in response to capturing the live announcement; associate the digitized voice message with a text message to be transmitted through
network 501 to a plurality of remote electronic devices that communicate with the network; and transmit the text message with the digitized voice message through network 501 for broadcast to the plurality of electronic devices for automatic playback of the digitized voice message through at least one remote electronic device among the plurality of remote electronic devices upon receipt of the text message with the digitized voice message at the at least one remote electronic device among the plurality of remote electronic devices. - In some embodiments, such a
code 484 can further comprise code to automatically interrupt a current call in each remote electronic device among the plurality of remote electronic devices in order to push the text message with the digitized voice message through to each of the plurality of remote electronic devices for automatic playback of the digitized voice message via the plurality of remote electronic devices. In other embodiments, such a code 484 can comprise code to automatically open the digitized voice message in response to receipt of the text message with the digitized voice message at the at least one remote electronic device among the plurality of remote electronic devices; and automatically play the digitized voice message through a speaker associated with the at least one remote electronic device in response to automatically opening the digitized voice message. - The
code 484 can also in some embodiments comprise code to authenticate an identity of a speaker associated with the live announcement prior to automatically converting the live announcement into the digitized voice message indicative of the live announcement. In other embodiments, the code 484 can comprise code to authenticate the identity of the speaker further utilizing a voice recognition engine. In other embodiments, the code 484 can comprise code to broadcast the digitized voice message through the at least one remote electronic device in at least one language based on a language setting in a user profile. In still other embodiments, the code 484 can comprise code to pre-select at least one language in the user profile, and/or to establish the user profile as a user preference via a server during a set up of the at least one remote electronic device, and/or to establish the user profile as a user preference via an intelligent router during a set up of the at least one remote electronic device. In yet other embodiments, the code 484 can comprise code, during a set up of the at least one remote electronic device, to select at least one language from a plurality of different languages. In yet other embodiments, the code 484 can comprise code to convert the digitized voice message into more than one language from among a plurality of languages for broadcast of the digitized voice alert in consecutively different languages through the at least one remote electronic device. - Referring now to
FIG. 21, an exemplary data processing system 600 may be included in devices operating in accordance with some embodiments. As illustrated, the data processing system 600 generally includes a processor 480, a memory 636, and input/output circuits 646. The data processing system 600 may be incorporated in, for example, the personal or laptop computer 198, portable wireless handheld devices (e.g., Smartphone, etc.) 199, 201, tablet 202, television 203, automobile 204, or a router, server, or the like. An example of such a server is, for example, server 205 shown in FIG. 1, server 231 shown in FIG. 13, and so forth. - The
processor 480 can communicate with the memory 636 via an address/data bus 648 and can communicate with the input/output circuits 646 via, for example, an address/data bus 649. The input/output circuits 646 can be used to transfer information between the memory 636 and another computer system or a network using, for example, an Internet Protocol (IP) connection and/or wireless or wired communications. These components may be conventional components such as those used in many conventional data processing systems, which may be configured to operate as described herein. - Note that the
processor 480 can be any commercially available or custom microprocessor, microcontroller, digital signal processor, or the like. The memory 636 may include any memory devices containing the software and data used to implement the functionality circuits or modules used in accordance with embodiments of the present invention. The memory 636 can include, for example, but is not limited to, the following types of devices: cache, ROM, PROM, EPROM, EEPROM, flash memory, SRAM, DRAM, and magnetic disk. In some embodiments of the present invention, the memory 636 may be, for example, a content addressable memory (CAM). - As further illustrated in
FIG. 21, the memory 636 may include several categories of software and data used in the data processing system 600: an operating system 652; application programs 654; input/output device drivers 658; and data 656. As will be appreciated by those skilled in the art, the operating system 652 may be any operating system suitable for use with a data processing system such as, for example, Linux, Windows XP, Mac OS, Unix, operating systems for Smartphones, tablet devices, etc. The input/output device drivers 658 typically include software routines accessed through the operating system 652 by the application programs 654 to communicate with devices such as the input/output circuits 646 and certain memory 636 components. The application programs 654 are illustrative of the programs that implement the various features of the circuits and modules according to some embodiments of the present invention. The data 656 represents static and dynamic data that can be used by the application programs 654, the operating system 652, the input/output device drivers 658, and other software programs that may reside in the memory 636. As illustrated in FIG. 21, the data 656 may include, for example, user profile data 628 and other information 630 for use by the circuits and modules of the application programs 654 according to some embodiments of the present invention as discussed further herein. - In the embodiment shown in
FIG. 21, application programs 654 can include, for example, one or more modules. While such modules are illustrated with respect to FIG. 21, as will be appreciated by those skilled in the art, other configurations also fall within the scope of the disclosed embodiments. For example, rather than being application programs 654, these modules may also be incorporated into the operating system 652 or other such logical division of the data processing system 600. Such modules can implement, for example, the operations described herein with respect to FIGS. 1-2, 4-12, and 15-18, depending upon design considerations. - Furthermore, while
modules are illustrated with respect to FIG. 21, they may be provided by other arrangements and/or divisions of functions between data processing systems. For example, although FIG. 21 is illustrated as having various circuits/modules, one or more of these circuits may be combined without departing from the scope of the embodiments, preferred or alternative. - Note that as discussed earlier herein, the term "module" generally refers to a collection of routines (and/or subroutines) and/or data structures that perform a particular task or implement a particular abstract data type. Modules usually include two parts: an interface, which lists the constants, data types, variables, and routines that can be accessed by other modules or routines; and an implementation, which is typically, but not always, private (accessible only to the module) and which contains the source code that actually implements the routines in the module. The term "module" may also refer to a self-contained component that can provide a complete function to a system and can be interchanged with other modules that perform similar functions.
- Referring now to
FIG. 22, an exemplary environment 705 for operations and devices according to some embodiments of the present invention will be discussed. As illustrated in FIG. 22, the environment 705 may include a communication/computing device 710, the data communications network 501 as discussed earlier, a first server 740, and a second server 745. It can be appreciated that additional servers may be utilized with respect to network 501. It can also be appreciated that in some embodiments, only a single server such as server 740 may be required. Note that servers 740 and 745 shown in FIG. 22 are analogous or similar to server 205 shown in FIG. 1 and server 231 depicted in FIG. 13. Similarly, databases 730 and 735 are analogous to, for example, database 230 shown in FIGS. 1 and 13, etc. In general, the communication device 710 allows a user of the communication device 710 to communicate via bi-directional communication with one or more servers such as servers 740 and 745 through the data communication network 501. - As illustrated, the
communication device 710 depicted in FIG. 22 may include one or more modules of the data processing system 600 according to some embodiments. For example, the application programs 654 discussed above with respect to FIG. 21 can be included in the system 600 of the communication device 710. The communication device 710 may be, for example, one of the devices discussed earlier herein that communicate with network 501. - The
communication device 710 can include, for example, a user interface 744 and/or a web browser 715 that may be accessible through the user interface 744, according to some embodiments. The first server 740 may include a database 730 and the second server 745 may include a database 735. The communication device 710 may communicate over the network 501, for example, the Internet, through a wireless communications link, an Ethernet connection, a telephone line, a digital subscriber link (DSL), a broadband cable link, cellular communications means or other wireless links, etc. The first and second servers 740 and 745 can also communicate with the network 501. Thus, the network 501 may convey data between the communication device 710 and the first and second servers 740 and 745. - The various embodiments of methods, systems, processor-readable media, etc., that are described herein can be utilized in the context of the PLAN system discussed above. In general, authorized national, state, or local government officials can send alerts to PLAN. PLAN authenticates the alert, verifies that the sender is authorized, and then PLAN sends the alert to participating wireless carriers. Participating wireless carriers push the alerts from, for example, cell towers to mobile telephones and other mobile electronic devices in the affected area. The alerts appear similar to text messages on mobile devices. Such "text-like messages" are geographically targeted. For example, a customer living in downtown New York would not receive a threat alert if they happen to be in Chicago when the alert is sent. Similarly, someone visiting downtown New York from Chicago on that same day would receive the alert. Users can receive three types of alerts from PLAN: alerts issued by the President, alerts involving imminent threats to safety of life, and AMBER alerts.
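The geographic targeting described above (an alert for downtown New York reaches whoever is currently there, not whoever merely lives there) can be sketched as a simple filter on current device location. The record fields below are illustrative assumptions:

```python
def targeted_recipients(devices, affected_area: str) -> list:
    """Sketch of PLAN-style geographic targeting: the alert goes to devices
    currently located in the affected area, regardless of home address.
    Each device record is assumed to carry 'id', 'home', and 'current_area'."""
    return [d["id"] for d in devices if d["current_area"] == affected_area]
```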
The approach described herein, however, if adapted to PLAN, would allow for actual voice alerts (e.g., digitized voice alert from the President, which the public would recognize) to be pushed through to mobile devices in communication with, for example,
network 501. Additionally, as indicated earlier, such messages can be transmitted in different languages or in different sequences of languages. The digitized voice alert of an announcement from the President, for example, can be automatically converted into one or more other languages. - Note that the various methods, systems, and processor-readable media discussed herein can be implemented in the context of, for example, push technology such as, for example, instant push notification. Push technology, also known as server push, describes a style of Internet-based communication where the request for a given transaction is initiated by the publisher or central server. It is contrasted with pull technology, where the request for the transmission of information is initiated by the receiver or client.
- Synchronous conferencing and instant messaging are typical examples of push services. Chat messages, and sometimes files, are pushed to the user as soon as they are received by the messaging service. Both decentralized peer-to-peer programs (such as WASTE) and centralized programs (such as IRC or XMPP) allow pushing files, which means the sender initiates the data transfer rather than the recipient.
- Email is also a type of push system: the SMTP protocol on which it is based is a push protocol (see Push e-mail). However, the last step, from mail server to desktop computer, typically uses a pull protocol like POP3 or IMAP. Modern e-mail clients make this step seem instantaneous by repeatedly polling the mail server, frequently checking it for new mail. The IMAP protocol includes the IDLE command, which allows the server to tell the client when new messages arrive. The original BlackBerry was the first popular example of push technology for email in a wireless context.
- Another popular type of Internet push technology was PointCast Network, which gained popularity in the 1990s. It delivered news and stock market data. Both Netscape and Microsoft integrated it into their software at the height of the browser wars, but it later faded away and was replaced in the 2000s with RSS (a pull technology). Other uses are push enabled web applications including market data distribution (stock tickers), online chat/messaging systems (webchat), auctions, online betting and gaming, sport results, monitoring consoles, and sensor network monitoring.
- Unmanned Aerial Vehicles (UAVs) have become the leaders in persistent surveillance over the past several years for federal and state agencies (e.g., U.S. Military, FBI, local and state police, U.S. Forest Service, U.S. Border Patrol, etc.). Private commercial applications are also feasible and foreseeable (e.g., large private land holdings or leased open space, environmental and geographical data gathering, university research). UAVs have the distinctive capability of providing better-than-human, aerial, visual information to ground units that may not have the time or means to use a manned plane for their surveillance/reconnaissance.
- A ground control operator can remotely fly and control an unmanned aerial vehicle (UAV), also known as a pilotless drone. Land- and maritime-based vehicles are similarly controlled. These unmanned vehicles are equipped with camera equipment and are best known for capturing real-time images during warfare, but such drones have now become increasingly affordable for use in civilian high-risk incidents such as search missions, border security, wildfire and oil spill detection, police tracking, weather monitoring, and natural disasters. During its mission, the airborne drone acquires image data from the camera and flight parameters from onboard systems. The aerial footage captured by the camera onboard the UAV is transmitted to the Ground Control Station, which transfers it to a workstation for analysis and possible enhancement.
- There is clearly a growing civilian need for improved emergency applications that provide citizens with selected unmanned vehicle images through push notifications via a data communications network such as the Internet, and that are not dependent on an aging public switched telephone network (PSTN), which is known to fail during certain crises. A push notification can arrive in a manner comprised of separate technologies such as cellular/Internet voice (voice to text, voice recognition) and video stills (embedded with personalized iconographic identifiers), and can further include the capability of a secondary purpose of allowing notified recipients to engage others by retransmitting the message received, along with their own typed notations, so as to create their own real-time civil communications hub for ongoing situational awareness (a system that currently does not exist, but that is achievable by software applications running on servers). Once the software is in place within a system (e.g., including servers), the only major expense can be largely limited to yearly system maintenance and data management.
- It is another feature of the present invention to provide a method for providing public users with data collected by an unmanned vehicle, in which mobile devices authorized to receive data collected by said remote unmanned vehicle are registered at a server, wherein data collected by the remote unmanned vehicle is identified as restricted data and public data, and the public data is provided to mobile devices registered by the server. For example, up-to-the-minute UAV aerial imagery, as selected by drone ground-based commanders, can be automatically transmitted to subscribed end-users via the current mobile operating systems for smartphones, iPads, laptops, and web-enabled devices in a manner comprised of separate technologies such as voice (voice to text, voice recognition), video stills, and data that can be embedded with personalized iconographic identifiers and messages. In accordance with a feature of the present invention, a system can be adapted to enable civil UAV authorities to transmit UAV video along with their voice-and-text notations to the public via their smartphones, iPads, laptops, and web-enabled devices, thus enabling these application registrants to form a civil awareness hub that would allow them to stay connected in times of emergency.
- The unmanned vehicle aspect of the present invention (which can also be referred to herein as "SkySpeak") differs from city websites and telephone-based emergency notification systems inasmuch as the SkySpeak application can deploy a software-centric web platform to automatically transmit instant voice notifications and enriched data to those who have installed the application onto their smartphone and Internet devices. Unlike being notified by an incoming phone call, the SkySpeak application can automatically voice its message and display the video stills (embedded with personalized iconographic identifiers) on user handheld devices (e.g., smartphones, iPads, etc.), and can automatically voice its message as a multilingual transmission without the recipients having to do anything to the devices in use on their end.
- Referring to
FIG. 23, an unmanned aerial vehicle (UAV) system 800, in accordance with an embodiment of the invention, is illustrated that includes an avionics and guidance module 801, a motor 803, propeller hardware 805, and a fuel source 807. Reference to an unmanned aerial vehicle (UAV) is not meant to limit application of features of the present invention to a particular vehicle system. It should be appreciated that the vehicle is unmanned, but can also be land-based or maritime-based. Reference to an unmanned vehicle (UV) can more accurately set the scope for vehicles that can be used to collect data for the present invention. The UV is managed by a controller 810. An onboard controller can also manage sensors 811, imaging equipment 813, and location/GPS modules 815 engaged in navigation and data collection within the unmanned vehicle. Data collected by the UV can be separated into restricted data 821 and public data 823. Separation into these categories can occur onboard the UV or after transmission to a server (to be discussed in FIG. 24). A communications module 825 enables communication of the UV with remote resources (e.g., servers) via any means of wireless communications (e.g., cellular, microwave, satellite, etc.) reasonably available in the unmanned vehicle field. - Referring to
FIG. 24, a system 830 in accordance with features of the present invention is shown. UVs 800 are shown transmitting data through wireless communications means 831 (e.g., cellular transmission) through a data network 835, wherein data can be received and managed by a server 837. The server 837 can organize data into restricted data and public data. Restricted data can go to clients 832 controlled by authorities (e.g., police, government operators), wherein public data can be provided to mobile devices 830 (e.g., smartphones) that are registered with the server to receive public data. - Referring to
FIG. 25, a flow chart of a method in accordance with features of the present invention is shown. Data collected by a remote unmanned vehicle can be transmitted to and received by a server, as shown in step 841. Data can then be identified as restricted data and public data at the server, as shown in step 842. Then, as shown in step 843, public data can be provided to users registered at the server to receive the public data. Restricted data can be accessed by cleared civil personnel such as police or government operators (e.g., homeland security, ICE, FBI), while public data can be received by civilians and reporters as well as the cleared civil personnel. - Referring to
FIG. 26, a flow diagram is shown in accordance with features of the invention. As shown in step 851, users can register their mobile devices with a server to receive data collected by remote unmanned vehicles. Then, as shown in step 852, users can request data from the server, wherein the data can be collected by an unmanned vehicle and identified as public data by the server. The server, as shown in step 853, can then provide public data to registered user mobile devices. - Referring to
FIG. 27, another flow diagram is shown wherein users can register their mobile devices with a server to receive data collected by remote unmanned vehicles, as shown in step 861. Then, as shown in step 862, the server can automatically provide public data to registered user mobile devices. - Instant knowledge is king in times of emergency. The present invention can be used to instantly inform authorities and members of a community with instant voice notifications, which can also supplement other emergency services such as the FEMA National Radio System (FNARS) and the Emergency Alert System (EAS), a national warning system in the United States that uses AM, FM, and Land Mobile Radio Service, as well as broadcasts via VHF, UHF, and cable television including low-power stations, along with EAN (Emergency Action Notification), AMBER Alerts, and existing robo-calling, telephone-based centers serving Reverse 911 and
NG 911. - Robo-callers are often connected to a public switched telephone network by multiple phone lines because they can only send out one message at a time per phone line. The advantage of the robo-caller is that it is compatible with the most basic phone service. That very basic service has essentially stayed unchanged for a century because it is just a simple phone on a landline.
- On the other hand, the present invention does not make phone calls. It cannot get a busy signal because it is not making a phone call. It receives the alert as data regardless of whether the alert is vocal or text; an application operating on a user's handheld device then plays the message. The recipient simply gets the message. Text can be transmitted to a user's handheld device, where it can also be converted to speech. One benefit is lower bandwidth, which means more people can be alerted more quickly. The other is that the text goes through a non-voice channel to the phone.
- The present invention can use communications methods other than the phone's voice channel. Alerts can be received by people already talking on their smartphones. Alerts can be somewhat intrusive in that they can nag recipients until they at least acknowledge the alert.
- The registration process can be far simpler in that the user only needs to download the application onto their mobile device; everything else (e.g., communications with a data-providing server) can be automated. The present invention can be fully capable of delivering vocally recorded alerts, visuals, text alerts, and supplemental information.
- A data recipient should not need to answer the telephone in order to receive basic alert information, because with the present invention a message can be shown on their handheld device display and/or announced via their handheld device speaker. Spoken data is especially important for drivers and similarly occupied people who cannot take a moment to read a display.
- As an example of the invention's use, a UAV ground base station notifier can select a drone image and enter it onto the application's screen display. The notifier can then use the application's voice recognition to dictate an accompanying voice-activated message that is typed automatically and that can be uttered aloud. The combined content can be transmitted to selected recipients, who can then type their own comments to other recipients, thus forming an ongoing web-enabled hub for the constant updating of information over mobile operating systems for smartphones, iPads, and laptops.
- Once the UAV Ground Base Station (land, maritime, or air) notifier selects a screen image and enters it onto the interface of a server-based application, the notifier can have the ability to modify his notifications with a voice-activated message that is automatically typed as text and/or uttered via speaker when transmitted to end-user handheld devices.
- In accordance with an optional feature, once the notification is received, recipients in turn can use the present system to type their own comments and forward them to other recipients, thus forming an ongoing web-enabled hub for the constant updating of information. The system can also recognize that notification is not communication and that the notification, in itself, does not guarantee an ongoing communication. The system can, for example, allow the imagery expert at a drone base station's video terminal to quickly transmit a still frame captured from the incoming video and automatically resize it, such as to 460 kb, and attach it to the application's user interface (UI), such as a display screen on which a voice and text symbol can appear, so that the imagery expert can easily dictate the text caption to be submitted with a photo (such as using Google HTML+CSS code for implementation) and then automatically submit the notification to the registered recipients' smartphones or web-enabled devices along with the expert's voice.
- In light of the foregoing and using the forest fire example, suppose that a sheriff who spots a fire uses an application to notify UAV Control to send up a drone. When the drone takes flight, incoming video from the UAV can be sent automatically to all authorities over a data communication network (wired or wireless). In the aforementioned Las Conchas Fire, it is conceivable that a forest ranger in such a position could have mitigated the extent of damages by quickly providing more information to the public. Authorities can analyze the data and determine a risk assessment for the situation. Authorities can then decide whether to send a new request for more data and whether the data should be shared publicly. If data (e.g., video, still images) is authorized for public dissemination by authorities, the data can be provided to the public using automatic instant voice alerts to mobile devices registered with the system. Notifications can be sent to registered users along with the authorities' desired voice/text/map additions without the registered citizens having to do anything. Registered users can also send the notification and their own notes to other recipients using the system or other communications (e.g., SMS), forming a community awareness hub.
- It will be understood that the circuits and other means supported by each block and combinations of blocks can be implemented by special purpose hardware, software, or firmware operating on special or general-purpose data processors, or combinations thereof. It should also be noted that, in some alternative implementations, the operations noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order. The varying embodiments described herein can also be combined with one another, or portions of such embodiments can be combined with portions of other embodiments in another embodiment.
- It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may also be subsequently made by those skilled in the art, which are likewise intended to be encompassed by the following claims.
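The notification workflow described above (capture a still frame from incoming UAV video, resize it toward a target payload size, attach the notifier's dictated caption and voice, and fan the bundle out to registered recipients) can be sketched as follows. This is an illustrative sketch only; the class and function names (`Notification`, `dispatch`, etc.) and the byte-truncation "resizer" are hypothetical stand-ins, not an implementation from the specification.

```python
# Illustrative sketch of the notification flow: capture, resize, caption, dispatch.
from dataclasses import dataclass, field

TARGET_BYTES = 460_000  # e.g., the ~460 kb resize target mentioned above

@dataclass
class Notification:
    image: bytes            # still frame captured from incoming UAV video
    caption: str            # text transcribed from the notifier's dictation
    voice_clip: bytes       # the notifier's recorded voice for playback
    notes: list = field(default_factory=list)  # recipients' follow-up comments

def resize_image(image: bytes, target: int = TARGET_BYTES) -> bytes:
    """Stand-in for a real image resizer: truncate to the target size."""
    return image[:target]

def dispatch(notification: Notification, recipients: list) -> dict:
    """Deliver the resized bundle to every registered recipient's device."""
    notification.image = resize_image(notification.image)
    return {device: notification for device in recipients}

inboxes = dispatch(
    Notification(image=b"\x00" * 1_000_000,
                 caption="Smoke at ridge line",
                 voice_clip=b"<audio>"),
    recipients=["sheriff-phone", "ranger-tablet"],
)
print(len(inboxes["sheriff-phone"].image))  # resized payload size
```

Recipients forwarding the bundle with their own notes (the "community awareness hub" above) would amount to appending to `notes` and re-dispatching to a new recipient list.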
Claims (20)
1. A method for providing instant voice alerts automatically to remote electronic devices, said method comprising:
detecting a condition or activity at a premises utilizing at least one sensor via a monitoring system also located at the premises and connected to a data network;
generating a message indicative of said condition or activity into a data file that can be rendered on a remote electronic device as a digitized voice alert; and
transmitting the data file through the data network for receipt by at least one remote electronic device that is registered to communicate remotely with the monitoring system and receive messages over the data network for rendering of the digitized voice alert from the data file.
2. The method of claim 1 , further comprising configuring the at least one sensor to communicate via wire or wirelessly with the monitoring system and the monitoring system communicates with the at least one remote electronic device over the data network.
3. The method of claim 1 , wherein the monitoring system includes an intelligent router and the at least one sensor is configured to communicate via a wire or wirelessly with the intelligent router that thereafter communicates with said network.
4. The method of claim 1 , wherein the at least one sensor includes at least one of: a security sensor, a surveillance sensor, a smoke detector, a fire detector, a carbon monoxide detector, an energy usage monitoring sensor, a door or window opening sensor, and a flood sensor.
5. The method of claim 1 further comprising configuring said at least one sensor to comprise a self-contained computer that communicates with said network.
6. The method of claim 1 further comprising broadcasting said digitized voice message through said at least one remote electronic device in at least one language based on a language setting in a user profile.
7. The method of claim 6 further comprising pre-selecting said at least one language in said user profile.
8. The method of claim 6 further comprising establishing said user profile as a user preference via a server during a setup of said at least one remote electronic device.
9. The method of claim 6 further comprising establishing said user profile as a user preference via an intelligent router during a setup of said at least one remote electronic device.
10. The method of claim 6 further comprising during a setup of said at least one remote electronic device, selecting said at least one language from a plurality of different languages.
11. The method of claim 1 further comprising converting said digitized voice message into more than one language from among a plurality of languages for broadcast of said digitized voice alert in consecutively different languages through said at least one remote electronic device.
12. A system for providing instant voice alerts automatically to remote electronic devices, said system comprising:
a processor;
a data bus coupled to the processor; and
a computer-usable medium embodying computer code, the computer-usable medium being coupled to said data bus, the computer program code comprising instructions executable by said processor and configured for:
detecting a condition or activity at a premises utilizing at least one sensor via a monitoring system also located at the premises and connected to a data network;
generating a message indicative of said condition or activity into a data file that can be rendered on a remote electronic device as a digitized voice alert; and
transmitting the data file through the data network for receipt by at least one remote electronic device that is registered to communicate remotely with the monitoring system and receive messages over the data network for rendering of the digitized voice alert from the data file.
13. The system of claim 12 , wherein the at least one sensor communicates with at least one of a monitoring system, intelligent router, and server, that in-turn communicates with said network.
14. The system of claim 12 , wherein the at least one sensor includes at least one of: a security sensor, a surveillance sensor, a smoke detector, a fire detector, a carbon monoxide detector, an energy usage monitoring sensor, a door or window opening sensor, and a flood sensor.
15. The system of claim 13 , wherein the at least one sensor includes at least one of: a security sensor, a surveillance sensor, a smoke detector, a fire detector, a carbon monoxide detector, an energy usage monitoring sensor, a door or window opening sensor, and a flood sensor.
16. The system of claim 12 , wherein the instructions are further configured for broadcasting the digitized voice message through the at least one remote electronic device in at least one language based on a language setting in a user profile.
17. The system of claim 16 , wherein the instructions are further configured for allowing a pre-selection of the at least one language in said user profile.
18. The system of claim 17 , wherein said instructions are further configured during a setup of said at least one remote electronic device for selecting said at least one language from a plurality of different languages.
19. The system of claim 12 , wherein the instructions are further configured for converting the digitized voice message into more than one language from among a plurality of languages for broadcast of the digitized voice alert in consecutively different languages through the at least one remote electronic device.
20. A processor-readable medium storing code representing instructions to cause a processor to perform a process to automatically provide an instant voice announcement to remote electronic devices, said code comprising code to:
detect a condition or activity at a premises utilizing at least one sensor via a monitoring system also located at the premises and connected to a data network;
automatically convert the detected condition or activity into a digitized voice message indicative of the detected condition or activity in response to detection of the condition or activity;
generate a message indicative of said condition or activity into a data file that can be rendered on a remote electronic device as a digitized voice alert; and
transmit the data file through the data network for receipt by at least one remote electronic device that is registered to communicate remotely with the monitoring system and receive messages over the data network for rendering of the digitized voice alert from the data file.
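The claimed method (detect a condition, render it as a digitized voice alert data file, and transmit it to registered devices, optionally in the languages chosen in each user profile per claims 6 and 11) can be sketched as below. The TTS stub, message catalog, and registry shape are assumptions for illustration, not structures defined in the claims.

```python
# Minimal sketch of claims 1, 6, and 11: sensor condition -> per-language
# digitized voice alert data files -> delivery keyed by device registry.
MESSAGES = {  # hypothetical per-language alert templates
    "en": "Alert: {condition} detected at the premises.",
    "es": "Alerta: se detect\u00f3 {condition} en el inmueble.",
}

def synthesize(text: str) -> bytes:
    """Stand-in for a text-to-speech engine producing a renderable data file."""
    return text.encode("utf-8")

def build_alert(condition: str, languages: list) -> list:
    """Render the alert consecutively in each profile-selected language."""
    return [synthesize(MESSAGES[lang].format(condition=condition))
            for lang in languages]

def transmit(condition: str, registry: dict) -> dict:
    """registry maps a registered device id to its profile's language list."""
    return {device: build_alert(condition, langs)
            for device, langs in registry.items()}

alerts = transmit("smoke", {"phone-1": ["en"], "phone-2": ["en", "es"]})
print(len(alerts["phone-2"]))  # two consecutive language renderings
```

A device registered with two languages receives the alert rendered consecutively in both, matching the "consecutively different languages" broadcast of claims 11 and 19.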
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/633,709 US20150235540A1 (en) | 2011-05-24 | 2015-02-27 | Voice alert methods and systems |
US15/224,930 US9883001B2 (en) | 2011-05-24 | 2016-08-01 | Digitized voice alerts |
US15/822,600 US10282960B2 (en) | 2011-05-24 | 2017-11-27 | Digitized voice alerts |
US16/371,595 US10769923B2 (en) | 2011-05-24 | 2019-04-01 | Digitized voice alerts |
US16/985,041 US11403932B2 (en) | 2011-05-24 | 2020-08-04 | Digitized voice alerts |
US17/855,016 US20230130701A1 (en) | 2011-05-24 | 2022-06-30 | Digitized voice alerts |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161489621P | 2011-05-24 | 2011-05-24 | |
US13/324,118 US8265938B1 (en) | 2011-05-24 | 2011-12-13 | Voice alert methods, systems and processor-readable media |
US13/361,409 US8970400B2 (en) | 2011-05-24 | 2012-01-30 | Unmanned vehicle civil communications systems and methods |
US14/633,709 US20150235540A1 (en) | 2011-05-24 | 2015-02-27 | Voice alert methods and systems |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/361,409 Continuation US8970400B2 (en) | 2011-05-24 | 2012-01-30 | Unmanned vehicle civil communications systems and methods |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/224,930 Continuation US9883001B2 (en) | 2011-05-24 | 2016-08-01 | Digitized voice alerts |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150235540A1 true US20150235540A1 (en) | 2015-08-20 |
Family
ID=47218860
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/361,409 Expired - Fee Related US8970400B2 (en) | 2011-05-24 | 2012-01-30 | Unmanned vehicle civil communications systems and methods |
US14/633,709 Abandoned US20150235540A1 (en) | 2011-05-24 | 2015-02-27 | Voice alert methods and systems |
US15/224,930 Active US9883001B2 (en) | 2011-05-24 | 2016-08-01 | Digitized voice alerts |
US15/822,600 Active US10282960B2 (en) | 2011-05-24 | 2017-11-27 | Digitized voice alerts |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/361,409 Expired - Fee Related US8970400B2 (en) | 2011-05-24 | 2012-01-30 | Unmanned vehicle civil communications systems and methods |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/224,930 Active US9883001B2 (en) | 2011-05-24 | 2016-08-01 | Digitized voice alerts |
US15/822,600 Active US10282960B2 (en) | 2011-05-24 | 2017-11-27 | Digitized voice alerts |
Country Status (1)
Country | Link |
---|---|
US (4) | US8970400B2 (en) |
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160023665A1 (en) * | 2014-07-22 | 2016-01-28 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method for remote communication with and through a vehicle |
US20160165600A1 (en) * | 2014-12-04 | 2016-06-09 | Samsung Electronics Co., Ltd. | Apparatus and method for transmitting and receiving message |
US9547306B2 (en) * | 2014-09-30 | 2017-01-17 | Speak Loud SPA | State and context dependent voice based interface for an unmanned vehicle or robot |
US9749664B1 (en) * | 2016-06-01 | 2017-08-29 | Panasonic Avionics Corporation | Methods and systems for public announcements on a transportation vehicle |
US9883001B2 (en) | 2011-05-24 | 2018-01-30 | Verna Ip Holdings, Llc | Digitized voice alerts |
US20180077646A1 (en) * | 2015-05-18 | 2018-03-15 | Humberto Jose Moran-Cirkovic | Interoperating sensing devices and mobile devices |
US9942733B1 (en) * | 2016-12-21 | 2018-04-10 | Globestar, Inc. | Responding to a message generated by an event notification system |
US10078630B1 (en) * | 2017-05-09 | 2018-09-18 | International Business Machines Corporation | Multilingual content management |
US10249174B2 (en) * | 2015-07-31 | 2019-04-02 | Siemens Industry, Inc. | Wireless emergency alert notifications |
US20190200099A1 (en) * | 2016-07-05 | 2019-06-27 | Sharp Kabushiki Kaisha | Systems and methods for communicating user settings in conjunction with execution of an application |
US10390160B2 (en) * | 2017-06-12 | 2019-08-20 | Tyco Fire & Security Gmbh | System and method for testing emergency address systems using voice recognition |
US20190391734A1 (en) * | 2018-06-20 | 2019-12-26 | Jesse Baptiste | System for control of public announcements |
US10607461B2 (en) * | 2017-01-31 | 2020-03-31 | Albert Williams | Drone based security system |
US10764763B2 (en) * | 2016-12-01 | 2020-09-01 | T-Mobile Usa, Inc. | Tactical rescue wireless base station |
US10769923B2 (en) | 2011-05-24 | 2020-09-08 | Verna Ip Holdings, Llc | Digitized voice alerts |
US11475884B2 (en) * | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11630525B2 (en) | 2018-06-01 | 2023-04-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US11636870B2 (en) | 2020-08-20 | 2023-04-25 | Denso International America, Inc. | Smoking cessation systems and methods |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
US11760169B2 (en) | 2020-08-20 | 2023-09-19 | Denso International America, Inc. | Particulate control systems and methods for olfaction sensors |
US11760170B2 (en) | 2020-08-20 | 2023-09-19 | Denso International America, Inc. | Olfaction sensor preservation systems and methods |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11809886B2 (en) | 2015-11-06 | 2023-11-07 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US11813926B2 (en) | 2020-08-20 | 2023-11-14 | Denso International America, Inc. | Binding agent and olfaction sensor |
US11828210B2 (en) | 2020-08-20 | 2023-11-28 | Denso International America, Inc. | Diagnostic systems and methods of vehicles using olfaction |
US11838579B2 (en) | 2014-06-30 | 2023-12-05 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11837237B2 (en) | 2017-05-12 | 2023-12-05 | Apple Inc. | User-specific acoustic models |
US11862151B2 (en) | 2017-05-12 | 2024-01-02 | Apple Inc. | Low-latency intelligent automated assistant |
US11862186B2 (en) | 2013-02-07 | 2024-01-02 | Apple Inc. | Voice trigger for a digital assistant |
US11881093B2 (en) | 2020-08-20 | 2024-01-23 | Denso International America, Inc. | Systems and methods for identifying smoking in vehicles |
US11893992B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Multi-modal inputs for voice commands |
US11907436B2 (en) | 2018-05-07 | 2024-02-20 | Apple Inc. | Raise to speak |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US11932080B2 (en) | 2020-08-20 | 2024-03-19 | Denso International America, Inc. | Diagnostic and recirculation control systems and methods |
US11954405B2 (en) | 2015-09-08 | 2024-04-09 | Apple Inc. | Zero latency digital assistant |
US11979836B2 (en) | 2007-04-03 | 2024-05-07 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US12001933B2 (en) | 2015-05-15 | 2024-06-04 | Apple Inc. | Virtual assistant in a communication session |
US12017506B2 (en) | 2020-08-20 | 2024-06-25 | Denso International America, Inc. | Passenger cabin air control systems and methods |
US12026197B2 (en) | 2017-05-16 | 2024-07-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US12067985B2 (en) | 2018-06-01 | 2024-08-20 | Apple Inc. | Virtual assistant operations in multi-device environments |
US12067990B2 (en) | 2014-05-30 | 2024-08-20 | Apple Inc. | Intelligent assistant for home automation |
US12118999B2 (en) | 2014-05-30 | 2024-10-15 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US12136419B2 (en) | 2019-03-18 | 2024-11-05 | Apple Inc. | Multimodality in digital assistant systems |
US12148277B2 (en) | 2021-04-30 | 2024-11-19 | Arlo Technologies, Inc. | Electronic monitoring system using push notifications with custom audio alerts |
Families Citing this family (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB201210596D0 (en) | 2012-06-14 | 2012-08-01 | Microsoft Corp | Notification of communication events |
GB201210600D0 (en) | 2012-06-14 | 2012-08-01 | Microsoft Corp | Call invites |
GB2504461B (en) | 2012-06-14 | 2014-12-03 | Microsoft Corp | Notification of communication events |
GB201210598D0 (en) | 2012-06-14 | 2012-08-01 | Microsoft Corp | Notification of communication events |
CN103324203A (en) * | 2013-06-08 | 2013-09-25 | 西北工业大学 | Unmanned airplane avionics system based on intelligent mobile phone |
CN113489676A (en) * | 2013-11-14 | 2021-10-08 | Ksi数据科技公司 | System for managing and analyzing multimedia information |
US9311760B2 (en) | 2014-05-12 | 2016-04-12 | Unmanned Innovation, Inc. | Unmanned aerial vehicle authorization and geofence envelope determination |
US9273981B1 (en) | 2014-05-12 | 2016-03-01 | Unmanned Innovation, Inc. | Distributed unmanned aerial vehicle architecture |
US10163164B1 (en) * | 2014-09-22 | 2018-12-25 | State Farm Mutual Automobile Insurance Company | Unmanned aerial vehicle (UAV) data collection and claim pre-generation for insured approval |
US9754496B2 (en) | 2014-09-30 | 2017-09-05 | Elwha Llc | System and method for management of airspace for unmanned aircraft |
US9802701B1 (en) | 2014-10-21 | 2017-10-31 | Joshua Hawes | Variable elevation signal acquisition and data collection system and method |
US9858478B2 (en) | 2014-12-19 | 2018-01-02 | Intel Corporation | Bi-directional community information brokerage |
CN104648685B (en) * | 2015-02-12 | 2016-08-24 | 武汉科技大学 | Quadrotor specified path based on smart mobile phone is taken photo by plane system and method |
WO2016131005A1 (en) | 2015-02-13 | 2016-08-18 | Unmanned Innovation, Inc. | Unmanned aerial vehicle sensor activation and correlation |
CN104808678B (en) * | 2015-02-17 | 2017-11-03 | 珠海磐磊智能科技有限公司 | Flying vehicles control device and control method |
FR3033470B1 (en) * | 2015-03-02 | 2017-06-30 | Clement Christomanos | METHOD FOR TRANSMITTING CONTROLS AND A VIDEO STREAM BETWEEN A TELE-PILOT DEVICE AND A GROUND STATION, AND TOGETHER SUCH A DEVICE AND A SUCH STATION |
CN108334109B (en) * | 2015-03-30 | 2021-02-12 | 绵阳硅基智能科技有限公司 | Voice control device |
EP3254404A4 (en) | 2015-03-31 | 2018-12-05 | SZ DJI Technology Co., Ltd. | Authentication systems and methods for generating flight regulations |
DK3158553T3 (en) * | 2015-03-31 | 2019-03-18 | Sz Dji Technology Co Ltd | Authentication systems and methods for identifying authorized participants |
CN107408352B (en) | 2015-03-31 | 2021-07-09 | 深圳市大疆创新科技有限公司 | System and method for geo-fencing device communication |
CN104809918B (en) * | 2015-05-27 | 2017-07-28 | 张忠义 | A kind of unmanned plane during flying management method |
CN105187715A (en) * | 2015-08-03 | 2015-12-23 | 杨珊珊 | Method and device for sharing aerial photography content, and unmanned aerial vehicle |
CN105430322B (en) * | 2016-01-22 | 2019-02-01 | 深圳市星网信通科技有限公司 | A kind of unmanned plane access video-meeting method and system |
US10853756B2 (en) * | 2016-03-02 | 2020-12-01 | International Business Machines Corporation | Vehicle identification and interception |
US9773419B1 (en) | 2016-03-24 | 2017-09-26 | International Business Machines Corporation | Pre-positioning aerial drones |
DE102016212645B4 (en) | 2016-07-12 | 2018-06-14 | Minimax Gmbh & Co. Kg | Unmanned vehicle, system and method for initiating a fire-extinguishing action |
DE102017204261A1 (en) * | 2016-07-12 | 2018-01-18 | Minimax Gmbh & Co. Kg | Procedure and unmanned vehicle for checking fire protection components |
CN106231184A (en) * | 2016-08-01 | 2016-12-14 | 苏州倍声声学技术有限公司 | A kind of unmanned plane phonetic warning system |
CN106683291B (en) * | 2016-12-12 | 2020-05-15 | 深圳怡化电脑股份有限公司 | Banknote taking method and system |
CN107087440B (en) * | 2016-12-27 | 2019-04-19 | 深圳市大疆创新科技有限公司 | The control method and equipment of information processing method and system and unmanned plane |
CN106713491A (en) * | 2017-01-20 | 2017-05-24 | 亿航智能设备(广州)有限公司 | Cloud-based flight data management method and device |
US10223892B2 (en) | 2017-02-21 | 2019-03-05 | Ford Global Technologies, Llc | Civil-defense system |
US10969777B2 (en) * | 2017-06-23 | 2021-04-06 | Qualcomm Incorporated | Local drone identification verification |
US10923104B2 (en) * | 2017-06-30 | 2021-02-16 | Ademco Inc. | Systems and methods for customizing and providing automated voice prompts for text displayed on a security system keypad |
US20200241572A1 (en) * | 2017-08-11 | 2020-07-30 | Beijing Xiaomi Mobile Software Co., Ltd. | Drone control method and device, drone and core network device |
CN107380443A (en) * | 2017-09-08 | 2017-11-24 | 深圳市道通智能航空技术有限公司 | Unmanned aerial vehicle control system and implementation method, GCU and relay station |
US10491778B2 (en) | 2017-09-21 | 2019-11-26 | Honeywell International Inc. | Applying features of low-resolution data to corresponding high-resolution data |
CN108475342B (en) * | 2017-10-09 | 2021-04-16 | 深圳市大疆创新科技有限公司 | Registration information processing method and system, terminal device and control device |
WO2019090438A1 (en) * | 2017-11-13 | 2019-05-16 | Yoppworks Inc. | Vehicle enterprise fleet management system and method |
WO2019120573A1 (en) * | 2017-12-22 | 2019-06-27 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and arrangement for providing autonomous emergency assistance |
ES1209941Y (en) * | 2018-02-23 | 2018-07-04 | Del Castillo Igareda Jesus Antonio | AIR DEVICE FOR EMERGENCY NOTICE IN AREAS WITH CRITICAL INFRASTRUCTURES. |
US10580283B1 (en) * | 2018-08-30 | 2020-03-03 | Saudi Arabian Oil Company | Secure enterprise emergency notification and managed crisis communications |
US20200106818A1 (en) * | 2018-09-28 | 2020-04-02 | Quoc Luong | Drone real-time interactive communications system |
US10778916B2 (en) | 2018-10-24 | 2020-09-15 | Honeywell International Inc. | Applying an annotation to an image based on keypoints |
US11163434B2 (en) | 2019-01-24 | 2021-11-02 | Ademco Inc. | Systems and methods for using augmenting reality to control a connected home system |
CN109979170A (en) * | 2019-03-28 | 2019-07-05 | 山东中烟工业有限责任公司 | Wisdom cigarette power-equipment alarm system and method |
CN110444049A (en) * | 2019-07-05 | 2019-11-12 | 视联动力信息技术股份有限公司 | A kind of winged method and device of unmanned plane limit based on view networking |
CN112688979B (en) * | 2019-10-17 | 2022-08-16 | 阿波罗智能技术(北京)有限公司 | Unmanned vehicle remote login processing method, device, equipment and storage medium |
US11671797B2 (en) | 2020-05-11 | 2023-06-06 | Apple Inc. | Techniques for relaying audio messages to devices |
CN114531196A (en) * | 2022-03-04 | 2022-05-24 | 大连理工大学 | Long-distance covert communication method under relay assistance of unmanned aerial vehicle |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4692742A (en) * | 1985-10-21 | 1987-09-08 | Raizen David T | Security system with correlated signalling to selected satellite stations |
US4897630A (en) * | 1987-01-21 | 1990-01-30 | Electronic Security Products Of California, Inc. | Programmable alarm system having proximity detection with vocal alarm and reporting features |
US7561517B2 (en) * | 2001-11-02 | 2009-07-14 | Internap Network Services Corporation | Passive route control of data networks |
US8265938B1 (en) * | 2011-05-24 | 2012-09-11 | Verna Ip Holdings, Llc | Voice alert methods, systems and processor-readable media |
Family Cites Families (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6151385A (en) | 1998-07-07 | 2000-11-21 | 911 Notify.Com, L.L.C. | System for the automatic notification that a 9-1-1 call has occurred |
KR100343165B1 (en) | 1998-09-04 | 2002-08-22 | 삼성전자 주식회사 | Computer having the function of emergency call and emergency calling method using a computer |
US7162528B1 (en) * | 1998-11-23 | 2007-01-09 | The United States Of America As Represented By The Secretary Of The Navy | Collaborative environment implemented on a distributed computer network and software therefor |
US6510207B1 (en) | 1999-06-29 | 2003-01-21 | Agere Systems Inc. | Voice messaging system storage of emergency alert system warnings |
US7356129B1 (en) | 1999-08-18 | 2008-04-08 | Moody Martin D | Apparatus for locating a station initiating transmission of an emergency message in a network having multiple transmission sources |
US6509833B2 (en) | 2001-05-18 | 2003-01-21 | Siemens Information And Communication Networks, Inc. | Method and system for providing a warning alert |
US7016477B2 (en) | 2001-11-27 | 2006-03-21 | Bellsouth Intellectual Property Corporation | Method of notifying a party of an emergency |
US8077877B1 (en) | 2002-01-31 | 2011-12-13 | Mitek Corp., Inc. | Courtesy announcement system and method of using |
AU2003295779A1 (en) | 2002-11-20 | 2004-06-15 | Corybant, Inc. | Interactive voice enabled email notification and alert system and method |
US7321712B2 (en) | 2002-12-20 | 2008-01-22 | Crystal Fibre A/S | Optical waveguide |
US7664233B1 (en) | 2003-06-25 | 2010-02-16 | Everbridge, Inc. | Emergency and non-emergency telecommunications notification system |
US9830623B2 (en) | 2004-06-07 | 2017-11-28 | Keal, Inc. | System and method for managing numerous facets of a work relationship |
CN100349474C (en) | 2004-07-09 | 2007-11-14 | 华为技术有限公司 | Method for processing push notification in multimedia message service |
US7173525B2 (en) | 2004-07-23 | 2007-02-06 | Innovalarm Corporation | Enhanced fire, safety, security and health monitoring and alarm response method, system and device |
US7890586B1 (en) | 2004-11-01 | 2011-02-15 | At&T Mobility Ii Llc | Mass multimedia messaging |
US7617162B2 (en) | 2005-03-04 | 2009-11-10 | Atul Saini | Real time push notification in an event driven network |
US7885817B2 (en) | 2005-03-08 | 2011-02-08 | Microsoft Corporation | Easy generation and automatic training of spoken dialog systems using text-to-speech |
US7756539B2 (en) | 2005-05-27 | 2010-07-13 | Microsoft Corporation | Push-to-talk event notification |
US7933385B2 (en) | 2005-08-26 | 2011-04-26 | Telecommunication Systems, Inc. | Emergency alert for voice over internet protocol (VoIP) |
US7391314B2 (en) | 2005-10-31 | 2008-06-24 | Honeywell International Inc. | Event communication system for providing user alerts |
US20070097993A1 (en) | 2005-11-02 | 2007-05-03 | Bojahra Richard D | System and method for remote control of local devices over a wide area network |
US20090248398A1 (en) | 2005-11-03 | 2009-10-01 | Elta Systems Ltd | Vocal Alert Unit Having Automatic Situation Awareness |
WO2007084960A2 (en) | 2006-01-19 | 2007-07-26 | Vigicomm, Inc. | Location specific communications |
US7920679B1 (en) | 2006-02-02 | 2011-04-05 | Sprint Communications Company L.P. | Communication system and method for notifying persons of an emergency telephone call |
US7671732B1 (en) | 2006-03-31 | 2010-03-02 | At&T Mobility Ii Llc | Emergency alert notification for the hearing impaired |
US7862569B2 (en) | 2006-06-22 | 2011-01-04 | Kyphon Sarl | System and method for strengthening a spinous process |
US20080037753A1 (en) | 2006-07-07 | 2008-02-14 | Lucent Technologies Inc. | Call priority management system for communication network |
WO2008112932A2 (en) * | 2007-03-13 | 2008-09-18 | Mywaves, Inc. | An apparatus and method for sending video content to a mobile device |
US7312712B1 (en) * | 2007-04-11 | 2007-12-25 | Douglas Bevan Worrall | Traveler safety notification system |
US8199885B2 (en) * | 2007-05-21 | 2012-06-12 | At&T Intellectual Property I, L.P. | Method and apparatus for transmitting emergency messages |
US7907930B2 (en) | 2007-07-16 | 2011-03-15 | Cisco Technology, Inc. | Emergency alert system distribution to mobile wireless towers |
US7873520B2 (en) | 2007-09-18 | 2011-01-18 | Oon-Gil Paik | Method and apparatus for tagtoe reminders |
US20090313020A1 (en) | 2008-06-12 | 2009-12-17 | Nokia Corporation | Text-to-speech user interface control |
US7995487B2 (en) | 2009-03-03 | 2011-08-09 | Robert Bosch Gmbh | Intelligent router for wireless sensor network |
US20100261448A1 (en) | 2009-04-09 | 2010-10-14 | Vixxi Solutions, Inc. | System and method for emergency text messaging |
US9761219B2 (en) | 2009-04-21 | 2017-09-12 | Creative Technology Ltd | System and method for distributed text-to-speech synthesis and intelligibility |
US8468344B2 (en) * | 2009-05-26 | 2013-06-18 | Raytheon Company | Enabling multi-level security in a single-level security computing system |
US20110111805A1 (en) | 2009-11-06 | 2011-05-12 | Apple Inc. | Synthesized audio message over communication links |
US8970400B2 (en) * | 2011-05-24 | 2015-03-03 | Verna Ip Holdings, Llc | Unmanned vehicle civil communications systems and methods |
- 2012-01-30: US 13/361,409, patent US8970400B2 (not active, Expired - Fee Related)
- 2015-02-27: US 14/633,709, patent US20150235540A1 (not active, Abandoned)
- 2016-08-01: US 15/224,930, patent US9883001B2 (active)
- 2017-11-27: US 15/822,600, patent US10282960B2 (active)
Cited By (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11979836B2 (en) | 2007-04-03 | 2024-05-07 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US9883001B2 (en) | 2011-05-24 | 2018-01-30 | Verna Ip Holdings, Llc | Digitized voice alerts |
US10282960B2 (en) | 2011-05-24 | 2019-05-07 | Verna Ip Holdings, Llc | Digitized voice alerts |
US10769923B2 (en) | 2011-05-24 | 2020-09-08 | Verna Ip Holdings, Llc | Digitized voice alerts |
US11403932B2 (en) | 2011-05-24 | 2022-08-02 | Verna Ip Holdings, Llc | Digitized voice alerts |
US11862186B2 (en) | 2013-02-07 | 2024-01-02 | Apple Inc. | Voice trigger for a digital assistant |
US12009007B2 (en) | 2013-02-07 | 2024-06-11 | Apple Inc. | Voice trigger for a digital assistant |
US12118999B2 (en) | 2014-05-30 | 2024-10-15 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US12067990B2 (en) | 2014-05-30 | 2024-08-20 | Apple Inc. | Intelligent assistant for home automation |
US11838579B2 (en) | 2014-06-30 | 2023-12-05 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9440660B2 (en) * | 2014-07-22 | 2016-09-13 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method for remote communication with and through a vehicle |
US20160023665A1 (en) * | 2014-07-22 | 2016-01-28 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method for remote communication with and through a vehicle |
US9547306B2 (en) * | 2014-09-30 | 2017-01-17 | Speak Loud SPA | State and context dependent voice based interface for an unmanned vehicle or robot |
US10044483B2 (en) * | 2014-12-04 | 2018-08-07 | Samsung Electronics Co., Ltd. | Apparatus and method for transmitting and receiving message |
US20160165600A1 (en) * | 2014-12-04 | 2016-06-09 | Samsung Electronics Co., Ltd. | Apparatus and method for transmitting and receiving message |
US12001933B2 (en) | 2015-05-15 | 2024-06-04 | Apple Inc. | Virtual assistant in a communication session |
US20180077646A1 (en) * | 2015-05-18 | 2018-03-15 | Humberto Jose Moran-Cirkovic | Interoperating sensing devices and mobile devices |
US10249174B2 (en) * | 2015-07-31 | 2019-04-02 | Siemens Industry, Inc. | Wireless emergency alert notifications |
US11954405B2 (en) | 2015-09-08 | 2024-04-09 | Apple Inc. | Zero latency digital assistant |
US11809886B2 (en) | 2015-11-06 | 2023-11-07 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US9749664B1 (en) * | 2016-06-01 | 2017-08-29 | Panasonic Avionics Corporation | Methods and systems for public announcements on a transportation vehicle |
US20190200099A1 (en) * | 2016-07-05 | 2019-06-27 | Sharp Kabushiki Kaisha | Systems and methods for communicating user settings in conjunction with execution of an application |
US11206461B2 (en) * | 2016-07-05 | 2021-12-21 | Sharp Kabushiki Kaisha | Systems and methods for communicating user settings in conjunction with execution of an application |
US10764763B2 (en) * | 2016-12-01 | 2020-09-01 | T-Mobile Usa, Inc. | Tactical rescue wireless base station |
US9942733B1 (en) * | 2016-12-21 | 2018-04-10 | Globestar, Inc. | Responding to a message generated by an event notification system |
US11790741B2 (en) | 2017-01-31 | 2023-10-17 | Albert Williams | Drone based security system |
US10607461B2 (en) * | 2017-01-31 | 2020-03-31 | Albert Williams | Drone based security system |
US10078630B1 (en) * | 2017-05-09 | 2018-09-18 | International Business Machines Corporation | Multilingual content management |
US11837237B2 (en) | 2017-05-12 | 2023-12-05 | Apple Inc. | User-specific acoustic models |
US11862151B2 (en) | 2017-05-12 | 2024-01-02 | Apple Inc. | Low-latency intelligent automated assistant |
US12026197B2 (en) | 2017-05-16 | 2024-07-02 | Apple Inc. | Intelligent automated assistant for media exploration |
US10390160B2 (en) * | 2017-06-12 | 2019-08-20 | Tyco Fire & Security Gmbh | System and method for testing emergency address systems using voice recognition |
US11907436B2 (en) | 2018-05-07 | 2024-02-20 | Apple Inc. | Raise to speak |
US12061752B2 (en) | 2018-06-01 | 2024-08-13 | Apple Inc. | Attention aware virtual assistant dismissal |
US12067985B2 (en) | 2018-06-01 | 2024-08-20 | Apple Inc. | Virtual assistant operations in multi-device environments |
US11630525B2 (en) | 2018-06-01 | 2023-04-18 | Apple Inc. | Attention aware virtual assistant dismissal |
US20190391734A1 (en) * | 2018-06-20 | 2019-12-26 | Jesse Baptiste | System for control of public announcements |
US11893992B2 (en) | 2018-09-28 | 2024-02-06 | Apple Inc. | Multi-modal inputs for voice commands |
US12136419B2 (en) | 2019-03-18 | 2024-11-05 | Apple Inc. | Multimodality in digital assistant systems |
US11475884B2 (en) * | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11790914B2 (en) | 2019-06-01 | 2023-10-17 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11914848B2 (en) | 2020-05-11 | 2024-02-27 | Apple Inc. | Providing relevant data items based on context |
US11838734B2 (en) | 2020-07-20 | 2023-12-05 | Apple Inc. | Multi-device audio adjustment coordination |
US11750962B2 (en) | 2020-07-21 | 2023-09-05 | Apple Inc. | User identification using headphones |
US11696060B2 (en) | 2020-07-21 | 2023-07-04 | Apple Inc. | User identification using headphones |
US12017506B2 (en) | 2020-08-20 | 2024-06-25 | Denso International America, Inc. | Passenger cabin air control systems and methods |
US11881093B2 (en) | 2020-08-20 | 2024-01-23 | Denso International America, Inc. | Systems and methods for identifying smoking in vehicles |
US11760169B2 (en) | 2020-08-20 | 2023-09-19 | Denso International America, Inc. | Particulate control systems and methods for olfaction sensors |
US11760170B2 (en) | 2020-08-20 | 2023-09-19 | Denso International America, Inc. | Olfaction sensor preservation systems and methods |
US11813926B2 (en) | 2020-08-20 | 2023-11-14 | Denso International America, Inc. | Binding agent and olfaction sensor |
US11636870B2 (en) | 2020-08-20 | 2023-04-25 | Denso International America, Inc. | Smoking cessation systems and methods |
US11828210B2 (en) | 2020-08-20 | 2023-11-28 | Denso International America, Inc. | Diagnostic systems and methods of vehicles using olfaction |
US11932080B2 (en) | 2020-08-20 | 2024-03-19 | Denso International America, Inc. | Diagnostic and recirculation control systems and methods |
US12148277B2 (en) | 2021-04-30 | 2024-11-19 | Arlo Technologies, Inc. | Electronic monitoring system using push notifications with custom audio alerts |
Also Published As
Publication number | Publication date |
---|---|
US20170034295A1 (en) | 2017-02-02 |
US20120299751A1 (en) | 2012-11-29 |
US20180152532A1 (en) | 2018-05-31 |
US9883001B2 (en) | 2018-01-30 |
US10282960B2 (en) | 2019-05-07 |
US8970400B2 (en) | 2015-03-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150235540A1 (en) | Voice alert methods and systems | |
US11403932B2 (en) | Digitized voice alerts | |
US8265938B1 (en) | Voice alert methods, systems and processor-readable media | |
US11915579B2 (en) | Apparatus and methods for distributing and displaying communications | |
US11943693B2 (en) | Providing status of user devices during a biological threat event | |
CN111279730B (en) | Emergency alert user system and method | |
CN108140298B (en) | Emergency alert system and method | |
US8009035B1 (en) | Alert warning system | |
US8928478B2 (en) | Emergency alert system and method | |
US8013733B1 (en) | Alert warning method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
AS | Assignment |
Owner name: VERNA IP HOLDINGS, LLC, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VERNA IP HOLDINGS, LLC;REEL/FRAME:065802/0898 Effective date: 20231011 |