
US10268689B2 - Providing media content based on user state detection - Google Patents

Providing media content based on user state detection

Info

Publication number
US10268689B2
Authority
US
United States
Prior art keywords
user
data
keywords
media content
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/008,543
Other versions
US20170223092A1
Inventor
Prakash Subramanian
Nicholas Brandon Newell
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dish Technologies LLC
Original Assignee
Dish Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dish Technologies LLC
Priority to US15/008,543
Assigned to ECHOSTAR TECHNOLOGIES L.L.C. reassignment ECHOSTAR TECHNOLOGIES L.L.C. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SUBRAMANIAN, PRAKASH, NEWELL, NICHOLAS BRANDON
Publication of US20170223092A1
Priority to US16/296,970 (patent US10719544B2)
Assigned to DISH Technologies L.L.C. reassignment DISH Technologies L.L.C. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: ECHOSTAR TECHNOLOGIES L.L.C.
Application granted
Publication of US10268689B2
Assigned to U.S. BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT reassignment U.S. BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DISH BROADCASTING CORPORATION, DISH NETWORK L.L.C., DISH Technologies L.L.C.
Legal status: Active (expiration adjusted)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/535 Tracking the activity of the user
    • G06F17/30032
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/435 Filtering based on additional data, e.g. user or group profiles
    • G06F16/436 Filtering based on additional data, e.g. user or group profiles using biological or physiological data of a human being, e.g. blood pressure, facial expression, gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/438 Presentation of query results
    • G06F16/4387 Presentation of query results by the use of playlists
    • G06F16/4393 Multimedia presentations, e.g. slide shows, multimedia albums
    • G06F17/30056
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G06Q30/0269 Targeted advertisements based on user profile or attribute
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066 Session management
    • H04L65/1083 In-session procedures
    • H04L65/1089 In-session procedures by adding media; by removing media
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H04L67/22

Definitions

  • Users of media content lack mechanisms for selecting appropriate items of media content, e.g., programming that may include movies, sports, live events, etc.
  • Present media content delivery devices, e.g., set-top boxes and the like, lack the ability to detect various user attributes, e.g., to interpret data indicating a user's mental state and personal events.
  • FIG. 1 is a diagram of an exemplary media system for providing media content based on user state detection.
  • FIG. 2 is a diagram of an exemplary user device for the media system of FIG. 1 .
  • FIG. 3 is a diagram of an exemplary media device for the media system of FIG. 1 .
  • FIGS. 4A and 4B are a diagram of an exemplary process for providing media content based on a user state.
  • a user device computer collects data that can be used to predict a user's mental state.
  • collected data may indicate a user's mental state by providing values for attributes such as a user's physical condition, recent events affecting a user (sometimes referred to as a user's “personal circumstances” or “situation”), events possibly affecting a user's mental state, etc.
  • Physical condition data that can be used to determine a mental state can include, for example voice samples, facial expressions, biometric data (respiration, heartrate, body temperature, etc.), etc.
  • Situation data may be collected related to events (received promotion, had a child, bought a new home, etc.), locations (at work, in Hawaii, in a bar, in the living room, etc.), demographic characteristics (age, gender, religion, marital status, financial circumstances, etc.), company (people user is with, their gender, age, relationship, etc.) time (date, day of the week, time of day, user's birthday, holiday) and any other information that may be used to understand the current situation of the user.
  • Sources for collected data indicating a user's mental state may include communications (emails, texts, conversations, etc.), documents (tax returns, income statements, etc.), global positioning (GPS) data, calendar data (past, present and future events), internet browsing history, mobile purchases, input provided by the user, etc.
  • the user device computer is typically programmed to provide the collected data to a server of a media content provider.
  • the media server can store and maintain the data about the user.
  • the media content provider computer is programmed to assign one or more predetermined keywords describing the user's situation and one or more predetermined keywords describing the user's mood.
  • Predetermined keywords to describe a user's situation may include, e.g., “inspirational,” “comedy,” “jimcarrey,” “family vacation,” “goofy,” “grief,” etc.
  • Predetermined keywords to describe a user's mood may include, e.g., “happy,” “sad,” “excited,” “bored,” “frustrated,” “relaxed,” etc.
  • the media server is programmed to generate and provide to the user a set of the assigned keywords.
  • the set of assigned keywords may include two predetermined keywords selected according to a user situation or situations and one predetermined keyword selected according to one or more detected user physical conditions.
  • the media provider computer may provide the set of assigned keywords, e.g., in response to a request from the user, or for example, on a regular basis (e.g., once per hour), or for example, based on the location of the user (when the user arrives at home, arrives in the living room, etc.).
  • based on the set of assigned keywords, the user may identify and request a media content item that the media provider offers for selection according to a current predicted user mental state, e.g., based on the user's situation, physical condition(s), etc.
  • the user may send the request to the media content provider computer, either via the user device or via a media device available to the user.
  • the media content provider may then provide the requested media content to the user via, e.g., the user device or the media device.
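The overall exchange described in this overview can be pictured with a minimal sketch in Python. Every structure, name, and hard-coded value below is an illustrative assumption for exposition, not part of the patent disclosure:

```python
# Hypothetical end-to-end flow: the user device collects data, the server
# assigns a keyword set (e.g., two situation keywords and one mood keyword),
# and the user then requests content matched to those keywords.
from dataclasses import dataclass, field

@dataclass
class CollectedData:
    voice_samples: list = field(default_factory=list)  # physical-condition data
    biometrics: dict = field(default_factory=dict)     # heartrate, temperature, ...
    documents: list = field(default_factory=list)      # emails, calendar entries, ...
    location: str = ""                                 # e.g., from GPS data

def assign_keyword_set(data: CollectedData) -> list:
    """Return, e.g., two situation keywords and one mood keyword."""
    situation = ["family vacation", "inspirational"]   # derived from documents/location
    mood = ["relaxed"]                                 # derived from voice/biometric data
    return situation + mood

collected = CollectedData(location="home", biometrics={"heartrate": 62})
print(assign_keyword_set(collected))  # the user then requests content matching these
```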
  • an exemplary media system 10 includes one or more user devices 12 , one or more media devices 13 , a network 14 , and a media server 16 .
  • the media device 13 may be communicatively coupled to a display device 22 .
  • the user device 12 , media device 13 and display device 22 may be included in a customer premises 11 .
  • a user may be a consumer of media content who provides the media system 10 with access to various user data.
  • the user operates a user device 12 which collects some or all of the data related to the user, and provides the data to the media server 16 .
  • the server 16 may be provided by a media content provider such as are known, e.g., a cable or satellite media provider, an internet site, etc.
  • the user device 12 may collect data regarding the user and provide the collected data to the media server 16 .
  • a media server 16 may assign to the user one or more keywords describing a user mental state, e.g., based on a mood of the user indicated by collected data concerning physical attributes of the user, and/or one or more keywords describing a situation of the user indicated by data collected from documentation concerning the user, e.g., from an audio conversation, e-mail, text messages, calendar entries, etc.
  • the media server 16 may recommend or provide one or more items of media content to the user via, e.g., the media device 13 .
  • the user device 12 may be a known device such as a mobile telephone, tablet, smart wearable (smart watch, fitness band, etc.), other portable computing device, etc.
  • the user device 12 may include one or more applications such as email, a calendar, web browser, social media interfaces, etc., and one or more data collectors such as a video camera, biometric sensors, a global positioning system, etc.
  • the user device 12 may additionally include an application for collecting data related to the user from the one or more applications and one or more data collectors, and providing the collected data to the media server 16 computer or to another computing device.
  • the media device 13 receives and displays media content, and is typically a known device such as a set-top box, a laptop, desktop, tablet computer, game box, etc.
  • media content refers to digital audio and/or video data received in the user device 12 computer and/or in the media device 13 .
  • the media content may be received, for example, from the media server 16 via the network 14 .
  • the media device 13 is connected to or could include a display device 22 .
  • the display device 22 may be, for example, a television receiver, a monitor, a desktop computer, a laptop computer, a tablet, a mobile telephone, etc.
  • the display device 22 may include one or more displays and one or more speakers for outputting respectively the video and audio portions of media content and advertisement content received from the media device 13 .
  • the network 14 represents one or more mechanisms for providing communications, including the transfer of media content items, between the user device 12 , media device 13 , and the media server 16 .
  • the network 14 may be one or more of various wired or wireless communication mechanisms, including any desired combination of wired (e.g., cable and fiber) and/or wireless (e.g., cellular, wireless, satellite, microwave, and radio frequency) communication mechanisms and any desired network topology (or topologies when multiple communication mechanisms are utilized).
  • Exemplary communication networks include wireless communication networks, local area networks (LAN) and/or wide area networks (WAN), including the Internet, etc.
  • the media server 16 may be, for example, a known computing device included in one or more of a cable or satellite television headend, a video streaming service (which generally includes a multimedia web server or some other computing device), etc.
  • the media server 16 may provide media content, e.g., a movie, live event, audio, to the user device 12 and/or media device 13 .
  • the media content is typically delivered as compressed audio and/or video data.
  • the data may be formatted according to known standards such as MPEG or H.264.
  • MPEG refers to a set of standards generally promulgated by the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group (MPEG).
  • H.264 refers to a standard promulgated by the International Telecommunications Union (ITU).
  • media content may be provided to a media device 13 in a format such as the MPEG-1, MPEG-2, or H.264/MPEG-4 Advanced Video Coding (AVC) standards (H.264 and MPEG-4 at present being consistent), and HEVC/H.265.
  • MPEG and H.264 data include metadata, audio, and video components.
  • media content and advertisement content in the media system 10 could alternatively or additionally be provided according to some other standard or standards.
  • media content and advertisement content could be audio data formatted according to standards such as MPEG-2 Audio Layer III (MP3), Advanced Audio Coding (AAC), etc.
  • the user device 12 includes a computer 32 , a communications element 34 and a user interface 36 . Additionally, the user device 12 may include and/or be communicatively coupled, e.g., in a known manner, with one or more data collectors 30 .
  • the data collectors 30 may include, for example, cameras, microphones, biometric sensors, accelerometers, gyroscopes, a global positioning system, and other types of sensors for collecting data regarding the respective user of the user device 12.
  • the data collectors 30 are communicatively coupled to the computer 32 , and may be included in or remote to the user device 12 .
  • the data collectors 30 may be used to collect data related to the user and other people and objects proximate to the user device 12 .
  • Proximate to the user device 12 may be defined, for example, to be within a detection range of a respective sensor, within a same room as the user device 12 , or within a fixed distance, for example 20 meters, of the user device 12 .
  • the computer 32 may be authorized to collect data via the data collectors 30 at any time, or based on conditions established, for example, by the user of the user device 12 .
  • a microphone in the user device 12 may listen, on a substantially continuous basis, to the surroundings, record conversations and other received sounds, and provide audio data to the computer 32.
  • the computer 32 may determine whether the received data may be useful in determining a situation and/or mood of the user.
  • the computer 32 may store data determined to be useful in determining a situation and/or mood of the user and discard data which is determined not to be useful in determining a situation and/or mood of the user. For example, the content of a conversation that includes only an exchange of greetings and small talk may be discarded, whereas the tone quality of the same conversation may be determined to be indicative of mood, and may be stored.
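A minimal sketch of this keep-or-discard decision, in Python. The small-talk phrase list and the decision to always retain tone features are assumed heuristics for illustration only:

```python
# Keep tone features (which may indicate mood) unconditionally; keep the
# transcript text only when it goes beyond greetings and small talk.
SMALL_TALK = {"hello", "hi", "how are you", "nice weather"}

def is_small_talk(transcript: str) -> bool:
    phrases = [p.strip() for p in transcript.lower().split(",")]
    return all(p in SMALL_TALK for p in phrases)

def filter_conversation(transcript: str, tone_features: dict) -> dict:
    kept = {"tone": tone_features}  # tone quality may indicate mood; always keep
    if not is_small_talk(transcript):
        kept["text"] = transcript   # substantive content is worth storing
    return kept

print(filter_conversation("hello, how are you", {"pitch": "flat"}))
# -> {'tone': {'pitch': 'flat'}}: the greeting text is discarded, the tone kept
```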
  • the data collectors 30 may further be used to collect biometric data related to the user.
  • the data collectors 30 may measure the user's blood pressure, heartrate, body temperature, etc.
  • the communications element 34 may include hardware, software, firmware, etc., such as are known, and may be configured for one or more types of wireless communications.
  • the hardware may include, e.g., one or more transceivers, one or more receivers, one or more transmitters, one or more antennas, one or more microcontrollers, one or more memories, one or more electronic components etc.
  • the software may be stored on a memory, and may include, e.g., one or more encoders, one or more decoders, etc. for converting messages from one protocol to another protocol. Some functions, e.g., encoding functions, may be realized via firmware.
  • the types of wireless communications may include cellular communications, WiFi communications, two-way satellite communications (e.g., emergency services), one way satellite communications (e.g., receiving digital audio radio broadcasts), AM/FM radio, etc.
  • the user interface 36 may include one or more input elements such as buttons, a keyboard, a touchscreen, a microphone, a touchpad, etc. for receiving input from a user.
  • the user interface 36 may further include one or more display elements such as an LCD display, speaker, light emitting diodes, buzzers, etc. for outputting data to the user.
  • the computer 32 includes a memory and one or more processors, the memory storing program code, i.e., computer-executable instructions, executable by the processor.
  • the computer 32 is operable to receive input from a user and transmit the input to another computing device such as the media device 13 or the media server 16 .
  • the computer 32 further may include one or more applications such as are known for email, a calendar, texting, social media interfaces, web browsers, etc., and may send data to and receive data from remote computers, including without limitation the media server 16 , for such applications.
  • the computer 32 is programmed to collect data related to the user and provide the collected data to another computing device such as the media server 16 .
  • the data may be collected from other applications installed on the computer 32 , or from the data collectors 30 .
  • the collected data may include, e.g., data which may be useful for determining the mood and the situation of the user.
  • the collected data may include words or phrases parsed from documents or files, e.g., from monitoring or recording voice communications, parsing e-mails, text messages, calendar entries, etc.
  • the collected data could include data concerning current physical attributes of a user, e.g., heartrate, respiration, skin color, body temperature, etc.
  • the media device 13 includes a computer 42 , a communications element 44 , and a user interface 46 .
  • the media device 13 may further include one or more data collectors 40 .
  • the computer 42 is communicatively coupled with each of the data collectors 40 , communications element 44 and user interface 46 .
  • the data collectors 40 may include, for example, cameras, microphones, motion detectors, infrared sensors, ultrasonic sensors, and other types of sensors for collecting data regarding users proximate to the media device 13.
  • Proximate to the media device 13 may be defined, e.g., as within a range to be detected by the data collectors 40 .
  • proximate to the media device 13 may be defined to be within a fixed distance, e.g., 20 meters, of the media device 13 , within a range to view a display device 22 included in the media device 13 , within a room including the media device 13 , etc.
  • the data collectors 40 are communicatively coupled to the computer 42 , and may be included in or remote to the media device 13 .
  • the data collectors 40 may be used to collect visual data, audio data, motion data, biometric data, etc. related to one or more users proximate to (e.g., in a room with and/or within a predetermined distance of) the media device 13 .
  • the communications element 44 may include hardware, software, firmware, etc., such as are known, and may be configured for one or more types of wireless communications.
  • the hardware may include, e.g., one or more transceivers, one or more receivers, one or more transmitters, one or more antennas, one or more microcontrollers, one or more memories, one or more electronic components etc.
  • the software may be stored on a memory, and may include, e.g., one or more encoders, one or more decoders, etc. for converting messages from one protocol to another protocol. Some functions, e.g., encoding functions, may be realized via firmware.
  • the types of wireless communications may include cellular communications, WiFi communications, two-way satellite communications (e.g., emergency services), one way satellite communications (e.g., receiving digital audio radio broadcasts), AM/FM radio, etc.
  • the user interface 46 may include one or more input elements such as buttons, a keyboard, a touchscreen, a roller ball, a mouse, a microphone, switches, etc. for receiving input from a user.
  • the user interface 46 may further include one or more display elements such as an LCD display, plasma display, speaker, lamps, light emitting diodes, buzzers, etc. for outputting data to the one or more users.
  • the computer 42 includes a memory and one or more processors, the memory storing program code, i.e., computer-executable instructions, executable by the processor.
  • the computer 42 is operable to receive media content from the media server 16 and display received media content on the display device 22 .
  • the computer 42 may be programmed to collect data regarding the users proximate to the media device 13.
  • the data may be collected, via, e.g., the data collectors 40 .
  • the collected data may include, e.g., data which may be useful for determining the mood and the situation of the user.
  • the media server 16 may provide media content to the user device 12 and/or media device 13 .
  • the media server 16 may include one or more processors and memories as is known, as well as known mechanisms for communicating via the network 14 .
  • the memory of the server 16 can store program code, i.e., computer-executable instructions, executable by the processor.
  • the server 16 is programmed to provide media content to the user device 12 and/or media device 13 , via the network 14 .
  • the server 16 may be programmed to receive data related to the situation and mood of the user. Based on the data, and as described in additional detail below, the server 16 may be programmed to assign one or more predetermined keywords describing the user's situation and/or one or more predetermined keywords describing the user's mood to the user. The server 16 may further be programmed to provide the keywords to the user. The server 16 may yet further be programmed to provide one or more media content items to the user based on the assigned keywords. In some cases the server 16 may recommend one or more media content items to the user based on the assigned keywords to the user, e.g., via the user device 12 . The server 16 may then receive a request for a media content item selected by the user from the one or more recommended media content items, and provide the media content to the user based on the request.
  • the communications element 54 may include hardware, software, firmware, etc., such as are known, and may be configured for one or more types of wireless communications.
  • the hardware may include, e.g., one or more transceivers, one or more receivers, one or more transmitters, one or more antennas, one or more microcontrollers, one or more memories, one or more electronic components etc.
  • the software may be stored on a memory, and may include, e.g., one or more encoders, one or more decoders, etc. for converting messages from one protocol to another protocol. Some functions, e.g., encoding functions, may be realized via firmware.
  • the communications element may be programmed to transmit and receive media content, e.g., via satellite and/or wired (cable) communications. Additionally, the communications element may be programmed for wireless communications such as cellular communications and WiFi communications.
  • the device 12 computer 32 may collect various types of data related to the user.
  • the collected data may be categorized, e.g., as mood data and situation data.
  • Mood data may include, e.g., audio data and body data.
  • the audio data may include, e.g., voice samples from the user.
  • the body data may include visual data of the user, e.g., (facial expression, posture, etc.) and biometric data (heart rate, blood pressure, body temperature, pupil dilation, etc.)
  • Situation data may be collected related to events (received a promotion, bought a new home, birth of a child, etc.), locations (at work, in Hawaii, in a bar, in the living room, etc.), circumstances (age, gender, religion, marital status, financial circumstances, etc.), company (people user is with, their gender, age, relationship, etc.) time (date, day of the week, time of day, user's birthday, holiday) and other information that may be used to understand the situation of the user.
  • the computer 32 may collect the data via data collectors 30 included in or communicatively coupled to the computer 32 .
  • the computer 32 may receive audio data.
  • the computer 32 may detect, for example, when the user is speaking with another person, and record the speech. Additionally or alternatively, the computer 32 may, for example, when the user indicates that the user would like to select a media content item, engage the user in conversation. For example, the computer 32 may ask the user “What is your current mood?” or “How are you feeling today?”
  • the computer 32 may collect samples of speech for use to analyze the mood and/or situation of the user.
  • the computer 32 may further collect visual data from the data collectors 30 .
  • the data collectors 30 may include one or more cameras included in or communicatively coupled to the computer 32 .
  • the cameras may collect visual data related to the user's facial expressions, posture, etc.
  • the cameras may further collect data indicating, for example, a degree of dilation of the user's pupils, a degree of coloration of the skin (for example, blushing), etc.
  • the data collectors 30 may include sensors for measuring blood pressure, pulse, body temperature, etc. of the user.
  • the data collectors 30 may still further be used to collect situational data related to the user.
  • an audio data collector (microphone) 30 may detect the voices of other people in conversation with or otherwise proximate to the user.
  • a camera 30 may receive images of people proximate to the user, or of the environment of the user.
  • the computer 32 may collect data related to the user from other sources.
  • the data sources may include communications (emails, texts, conversations, etc.), documents (tax returns, income statements, etc.), global positioning data (GPS), calendar or other scheduling application (past, present and future appointments, events, etc.), internet browsing history, mobile purchases, input provided by the user, etc.
  • a time stamp may be associated with some or all of the data.
  • the computer 32 may note the time that the voice sample was collected, and store the time along with the voice sample.
  • data such as voice data may be correlated in time to other data such as biometric data or visual data related to the user.
  • a date and time that a communication was sent may be extracted from the communication and stored with the data extracted from a text of the email.
  • a time can be associated with a text of the communication. For example, an email from a user complaining about a decision by a colleague may be associated with a time of the communication.
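The time-stamping and time-correlation just described can be sketched in a few lines of Python. The record structure and the five-minute correlation window are assumptions chosen for illustration:

```python
# Stamp each collected item on arrival; later, pull all items recorded
# within the same window so voice, visual, and biometric data can be combined.
import time

records = []  # (timestamp, kind, payload) triples

def store(kind: str, payload: dict) -> None:
    records.append((time.time(), kind, payload))

def correlated(t: float, window: float = 300.0) -> list:
    """All records stamped within `window` seconds of time t."""
    return [r for r in records if abs(r[0] - t) <= window]

store("voice", {"pitch": "high"})
store("biometric", {"heartrate": 110})
print(correlated(time.time()))  # voice and biometric data from the same period
```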
  • the computer 32 may collect data from the user interactively. For example, the computer 32 may (e.g., via a speech program) ask the user how the user is feeling today, or how the user's day is going. The computer 32 may record the user's response and use the recorded response as a voice sample.
  • the computer 32 may collect data from the user interactively. For example, the computer 32 may (e.g., via a speech program) ask the user how the user is feeling today, or how the user's day is going.
  • the computer 32 may record the user's response and use the recorded response as a voice sample.
  • the computer 32 may provide the data to, for example, the media server 16, which may use the data to determine the user's mood and situation. Based on the determined mood and situation of the user, the media server 16 (or other computing device) may assign one or more keywords to the user, indicating the mood and situation of the user.
  • the media server 16 may, as indicated above, determine the user's mood based on the collected data. For example, the server 16 may evaluate voice data and body data related to the user.
  • the user device 12 computer 32 may collect voice samples related to the user. For example, as described above, a microphone 30 may be activated while the user is conducting conversations with other people, and the computer 32 may collect one or more samples of the user's voice. As another example, when the user indicates that the user would like to select an item of media content for viewing, the computer 32 may engage the user in conversation. The computer 32 may ask questions of the user such as “What is your mood at the moment?” or “How has your day gone?” The computer 32 may collect the responses as current samples of the user's voice. The computer 32 may provide the collected data to, e.g., the media server 16, which may analyze the voice samples to determine the user's mood.
  • the server 16 may, e.g., analyze the voice samples for qualities such as tone, volume, voice inflection, pitch, speed, contrast, etc.
  • the server 16 may compare the data to data for the general population, or for a demographic segment thereof, e.g., according to age, gender, etc., of the user.
  • Such a comparison, using the general population data as a baseline, can be used to evaluate a user mental state; e.g., a user may have a positive or negative mental state that could be quantified as a percentage worse or better than the general population. For example, speech at a speed within a particular range may indicate that the user is excited or agitated. Speech at a speed below a speed threshold may indicate that the user is relaxed. Speech in a monotone voice may indicate that the user is bored.
  • the server 16 may analyze the voice samples in comparison to one or more baseline samples of the user. For example, the server 16 may request a baseline voice sample from the user when the user is happy and another baseline sample when the user is angry, etc. The server 16 may analyze the current voice samples by comparing the samples to the one or more baseline samples.
  • the server 16 could, upon receiving a voice sample, ask the user to describe the user's mood. In this manner the server 16 could build a table of voice samples reflecting different user moods.
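A sketch of comparing a current voice sample against per-mood baseline samples collected from the user. The feature names, numeric values, and the simple distance measure are all assumed for illustration; they are not specified by the patent:

```python
# Classify a voice sample by its distance to per-mood baseline samples the
# server has gathered (e.g., by asking the user to describe their mood).
def distance(sample: dict, baseline: dict) -> float:
    """Simple feature-space distance between two voice samples."""
    return sum(abs(sample[k] - baseline[k]) for k in baseline)

baselines = {
    "happy": {"pitch": 220.0, "speed": 160.0, "volume": 0.7},
    "angry": {"pitch": 260.0, "speed": 190.0, "volume": 0.9},
    "bored": {"pitch": 180.0, "speed": 110.0, "volume": 0.4},
}

def closest_mood(sample: dict) -> str:
    return min(baselines, key=lambda mood: distance(sample, baselines[mood]))

print(closest_mood({"pitch": 230.0, "speed": 150.0, "volume": 0.6}))  # "happy"
```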
  • the user device 12 computer 32 may collect data related to the user's facial expressions, biometrics, body language, etc.
  • the computer 32 may collect, via a camera 30 or video recorder 30 included in or communicatively coupled with the user device 12, facial expressions of the user and body language of the user.
  • the computer 32 may further collect, via, e.g., biometric sensors 30 as are known, biometric data such as heart rate, blood pressure, body temperature, pupil dilation, etc. related to the user.
  • the computer 32 may collect the data on an on-going basis as the data is available. Additionally or alternatively, the computer 32 may, for example, collect biometric data, visual data, etc. at a time when the user indicates that the user would like to select an item of media content.
  • the computer 32 may provide the collected data, e.g., to the media server 16 .
  • the server 16 may analyze the data, and determine whether the body data indicates a particular mental state (e.g., mood), e.g., that the user is sad, angry, agitated, calm, sleepy, etc.
  • the server 16 may recognize that the user is smiling, or frowning, and determine respectively that the user is happy or sad.
  • the server 16 may recognize that the user's eyes are opened widely, and determine that the user is surprised.
  • the server 16 may determine, based on skin temperature, or skin color that the user is flushed, and further determine that the user is embarrassed.
  • the server 16 may further combine voice data, visual data and/or biometric data to more precisely identify the mood of the user.
  • the voice data, visual data, biometric data, etc. may be collected and stored together with time stamps when the data was received (or generated). Data from a same or similar time period may be combined to identify a mood (or situation) of the user.
  • high paced speech may indicate either that the user is angry or that the user is excited. Combining this data with facial data indicating that the user is smiling at a same time that the high paced speech is observed may assist the server 16 in accurately identifying that the user is excited.
  • elevated blood pressure together with high pitched speech may be an indication that the user is frightened.
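These disambiguation rules can be made concrete with a short sketch; the rule set simply mirrors the examples above and is an assumption, not a disclosed algorithm:

```python
# Combine time-correlated voice, visual, and biometric cues to resolve
# a mood that any single modality would leave ambiguous.
def infer_mood(speech: dict, face: str, biometrics: dict) -> str:
    if speech.get("pace") == "high":
        # High-paced speech alone is ambiguous: the user may be angry or excited.
        return "excited" if face == "smiling" else "angry"
    if speech.get("pitch") == "high" and biometrics.get("blood_pressure") == "elevated":
        return "frightened"
    return "calm"

print(infer_mood({"pace": "high"}, "smiling", {}))  # "excited"
```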
  • the computer 32 may collect what is sometimes referred to as document data, e.g., text files, audio files, video files, etc., related to the user's situation via the data collectors 30 .
  • the computer 32 may determine a user's identity and/or location, etc., based on data from a camera and/or microphone 30 .
  • the data sources may include communications (emails, texts, conversations, etc.), documents (tax returns, income statements, etc.), global positioning (GPS) data, calendar or other scheduling application (past, present and future appointments, events, etc.), Internet browsing history, mobile purchases, input provided by the user, etc.
  • the server 16 may analyze the user's current situation based on events in the user's life (just completed a stressful meeting, planning to meet an old friend, a relative recently died, just received a raise, etc.), the user's company (people with the user), the user's location (the user's living room, in a hotel, etc.), the user's circumstances (financial status, age, religion, politics, etc.), and the day and time (year, month, week, weekday, weekend, holiday, user's birthday, birthday of the user's spouse, anniversary, morning, evening, etc.).
  • the computer 32 may collect situation data related to the user, and determine the situation of the user based on the data.
  • the server 16 may identify past, present and future events related to the user. For example, based on data entered in, e.g., a calendar, the server 16 may determine that the user has a business trip scheduled for the following week. Based on email exchanges, the server 16 may determine that the business trip will require extensive preparation, and that the user will be travelling with his boss.
  • the server 16 may determine that a relative of the user recently died.
  • the server 16 may determine, based on collected data, other people who are with the user at a current time. For example, calendar data (email exchanges, text message exchanges, etc.) may indicate that the user is scheduled to meet with friends to watch a sporting event at the current time, or to meet with colleagues to discuss a business issue. Visual and audio data, collected by cameras 30 and microphones 30, may be used to identify people proximate to the user, using image recognition and audio recognition techniques, as are known. The server 16 may determine general demographic information about the people with the user such as their gender, approximate age, etc. Additionally or alternatively, the server 16 may identify specific people, and associate the specific people with stored data about the people. The stored data may indicate, for example, the situation of the people.
  • the server 16 may determine, based on collected data, a past, present or future location of the user. For example, the server 16 may determine, based on global positioning data, visual data, audio data, calendar data, etc., that the user is in the user's living room, in the user's office, in Hawaii, etc.
  • the server 16 may determine, based on collected data, both long and short term circumstances related to the user. Circumstances may include age, gender, employment status, marital status, political views, religion, financial status, health status, place of residence, etc. For example, the server 16 may determine data related to financial status from documents such as electronic tax returns or W2 forms, or from communications such as emails, texts and conversations. Data related to internet browsing history may, e.g., provide an indication of political views or health status.
  • the computer 32 may collect and provide, e.g., to the server 16, day and time data related to the user. For example, the computer 32 may determine, based on documents, communications, etc., dates important to the user such as the user's birthday, the birthday of the user's spouse, the user's anniversary, etc. The computer 32 may further determine, and provide to the server 16, routines of the user such as work schedule, times when the user is at home, days and times when the user attends a fitness center, etc. The server 16 may, based on this data, determine what a user is most likely doing at a particular time on a particular day of the week, date of the month, etc. Additionally, the server 16 may, for example, take into consideration that a religious holiday important to the user, or the user's birthday, will soon occur.
  • the server 16 may accumulate situation data related to the user on an on-going basis.
  • the accumulated data 52 may be considered together with current data for determining the situation of the user.
  • the server 16 may assign or update keywords related to a user in response to a trigger event.
  • the server 16 may be programmed to assign or update keywords related to the user every evening at a particular time, such as 7 pm.
  • the time may be selected, for example, to correspond to a time when the user generally watches television.
  • the user may select the time via input to the server 16 , or the server 16 may determine the time based on routines of the user.
  • the server 16 may receive a request from the user for updated situation keywords.
  • the user may input a request via the user device 12 for updated keywords, in preparation for selecting a movie to watch.
  • the server 16 may receive an indication, for example, from the media device 13 , that the user has turned the media device 13 on, and is (apparently) planning to watch television.
  • the server 16 may assign or update one or more keywords for the user.
  • the keywords may be related to the user's mental state at a current time.
  • the current time may be, for example, the time the trigger event is received, or within an hour before the trigger was received, etc.
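The trigger conditions above (a scheduled evening time, an explicit user request, or the media device being switched on) reduce to a simple check; this sketch and its parameter names are illustrative assumptions only:

```python
# Decide whether a trigger event warrants assigning/updating keywords.
import datetime

def should_update_keywords(now: datetime.datetime,
                           user_requested: bool,
                           device_turned_on: bool,
                           scheduled_hour: int = 19) -> bool:
    if user_requested or device_turned_on:
        return True
    # e.g., every evening at 7 pm, a time when the user generally watches TV
    return now.hour == scheduled_hour and now.minute == 0
```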
  • the server 16 may assign to a user, based on data related to the user, one or more keywords selected from a list of predetermined keywords.
  • the predetermined keywords may be single words such as “home”, “travelling”, “Republican”, “married”, etc., and/or phrases such as “preparing for a difficult task”, or “going out with friends”, “planning a trip”, “relative died”, etc.
  • a set of predetermined keywords may be available to the server 16 , which allow the server 16 to characterize a wide range of situations which may occur in the life of the user.
  • the server 16 may identify one or more keywords which describe a current situation of the user. For example, based on global positioning data, the server 16 may identify that the user is home. Based on data extracted from a calendar, the server 16 may know that the user just completed entertaining a client for two days. Based on the available data, the server 16 may assign a first keyword “home” and a second keyword “completed demanding task” to the user.
  • the server 16 may assign keywords to the user based on sets of sub-keywords or watchwords associated with each keyword.
  • the watchwords may be single words or phrases.
  • the server 16 may associate a set of watchwords with each keyword.
  • the set of watchwords for a particular keyword may contain any number of watchwords from just one or two watchwords to hundreds or even thousands of watchwords.
  • the server 16 may create a list of watchwords associated with the user.
  • the watchwords may be, e.g., extracted from user communications, determined based on user physical conditions, or associated with the user based on the user location, people together with the user, demographics of the user, etc.
  • the server 16 may match the watchwords assigned to the user with the watchwords assigned to each keyword. The server 16 may then assign to the user the two keywords with the highest number of matches between the watchwords assigned to the user and the watchwords assigned to the keyword.
  • the server 16 may associate watchwords “flushed face”, “clenched teeth” and “red face” with the user.
  • the watchword “flushed face” may be included in the set of watchwords for both of the keywords “embarrassed” and “angry”.
  • the watchwords “clenched teeth” and “red face” may however, only be included in the set of watchwords for the keyword “angry”. Because more watchwords are matched for “angry” than for “embarrassed”, the server 16 may assign the keyword “angry” to the user.
  • the server 16 may ask the user to select a keyword. The question may ask the user to choose between two different keywords. For example, the server 16 may ask “Would you say that angry or embarrassed better describes your mental state at this time?”
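The watchword-matching idea above, including the “angry” versus “embarrassed” example, can be expressed as a short sketch. The watchword sets here come straight from the example; the function names are assumptions:

```python
# Count overlaps between the user's watchwords and each keyword's watchword
# set, then assign the keywords with the most matches.
keyword_watchwords = {
    "angry":       {"flushed face", "clenched teeth", "red face"},
    "embarrassed": {"flushed face", "averted eyes"},
}

def assign_keywords(user_watchwords: set, top_n: int = 2) -> list:
    scores = {kw: len(ww & user_watchwords)
              for kw, ww in keyword_watchwords.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_n]

user = {"flushed face", "clenched teeth", "red face"}
print(assign_keywords(user, top_n=1))  # ['angry']: 3 matches vs. 1 for 'embarrassed'
# On a tie, the server could instead ask the user to choose between keywords.
```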
  • Priorities may be assigned to different types of data when determining keywords.
  • the data may be, for example, assigned a priority value in a range from 1 to 100, with 100 being the highest priority.
  • a death in the immediate family or being diagnosed with a serious illness may be assigned a value of 100.
  • a change in employment situation or recently moving may be assigned a value of 75.
  • a planned, routine business trip may be assigned a value of 54.
  • the priority values may be determined based on when the event occurred (or is scheduled to occur) relative to the current time. For example, a death in the family that occurred within the last three months may have a priority value of 100. The priority value may be reduced slowly as time passes following the initial three months. With regard to a planned event, the priority value may increase as the event comes closer.
  • time-based algorithms may be used. For example, following the death of an immediate family member, this event may receive an increased priority value during holidays, and on the anniversary date of the event.
  • the user's company, i.e., the people with the user, may be given a high priority value. For example, when the user is together with the user's spouse, this may outweigh events related to the user, and be given a high priority value.
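A sketch of time-adjusted priority values. The base values follow the examples above (100, 75, 54); the decay and anniversary-boost rules are assumed for illustration, since the patent does not specify them numerically:

```python
# Priority decays slowly after three months for past events, and can be
# boosted on anniversaries or holidays related to the event.
def priority(base: int, months_since_event: float,
             is_anniversary_or_holiday: bool = False) -> float:
    p = float(base)
    if months_since_event > 3:
        p -= 2.0 * (months_since_event - 3)   # slow reduction after 3 months
    if is_anniversary_or_holiday:
        p += 10.0   # e.g., a death in the family weighs more on its anniversary
    return max(0.0, min(100.0, p))

print(priority(100, months_since_event=1))   # 100.0: recent death in the family
print(priority(54, months_since_event=0))    # 54.0: planned, routine business trip
print(priority(100, 12, True))               # 92.0: boosted on the anniversary
```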
  • the server 16 determines data that may be most relevant to the current situation of the user.
  • the server 16 selects one or more situation keywords which may best describe the user.
  • the server 16 may select the keywords by selecting the data (event, company, location, circumstances, time, etc.) with the highest priority values as described above, and selecting the predetermined keywords which best match the selected data.
  • the predetermined keyword may be a direct match with the data.
  • a set of predetermined keywords may include “death in the immediate family” which may directly match to high priority data related to the user.
  • the server 16 may need to identify a keyword which best correlates to data related to the user. For example, several different types of situations, such as changing a job, moving to a new location, graduating from a school, etc. may all be associated with a situation keyword “major transition”. In order to establish this correlation, the server 16 may, for example, be provided with a list of examples of events that qualify as a “major transition”, and the server 16 may search for the event in the list. Additionally or alternatively, the server 16 may analyze the received data and determine a meaning as is known, and compare the meaning to a meaning of the situation keyword.
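The “major transition” correlation above amounts to a lookup in a list of qualifying examples, with semantic analysis as a fallback; the data and names in this sketch are assumed:

```python
# Map an event to a predetermined situation keyword via its example list.
KEYWORD_EXAMPLES = {
    "major transition": {"changed job", "moved to a new location", "graduated"},
    "death in the immediate family": {"relative died"},
}

def situation_keyword(event: str):
    for keyword, examples in KEYWORD_EXAMPLES.items():
        if event in examples:      # direct lookup in the example list
            return keyword
    return None                    # fall back to semantic (meaning) analysis

print(situation_keyword("changed job"))  # "major transition"
```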
  • the server 16 may assign, based on data collected concerning a user's physical state, one or more keywords to a user.
  • predetermined keywords may indicate a user's mood, and may be words or expressions such as “joyful,” “relieved,” “sad,” “excited,” “overwhelmed,” “at peace,” “happily surprised,” etc.
  • a set of predetermined keywords may be available to the server 16, which allow the server 16 to characterize a user's mood based on various possible detected physical attributes of the user.
  • the server 16 may measure voice parameters such as rate of speech, pitch, contrast, volume, tone, etc. Speech at a high rate of speed, e.g., may indicate that the user is excited. Speech with a low contrast, i.e., in a monotone voice, may indicate that the user is bored. Sentences with an upward inflection at the end may indicate that the user is feeling insecure.
  • the server 16 may further analyze data related to body language of the user. For example, visual data may indicate that the user is standing erect with the head facing forward, and the server 16 may determine that the user is feeling confident.
  • the server 16 may analyze the user's face, detect tightness and determine that the user is feeling tense.
  • the server 16 may analyze biometric data such as the heart rate of the user or blood pressure of the user.
  • a high pulse rate may indicate that the user is frightened or excited.
  • a low pulse rate may indicate that the user is relaxed.
  • the server 16 may consider a combination of voice data, visual data and biometric data to determine a user's mental state. For example, a high pitched voice, combined with a high pulse rate and a smile detected on the user's face may indicate that the user is “happily surprised.” Also as discussed above, the server 16 may use time stamps associated with the voice, visual and biometric data in order to combine data associated with a particular time.
  • Different types of data may be given higher priority than other types of data.
  • facial expressions such as a smile or frown may be strong indications of a particular mood
  • voice data or biometric data may be associated with multiple moods.
  • a facial expression which is a strong indicator of a particular mood may be given priority over the voice and biometric data.
  • a particular facial expression may be indicative of two or more possible moods.
  • Voice and biometric data may be used in order to select between the possible moods.
  • Current data, for example voice samples or visual data from the previous five minutes, may be given priority over data which is older.
  • the server 16 may assign one or more mood keywords to the user which may best characterize the mood of the user at the current time.
  • the server 16 may assign one or more keywords to the user at a current time. In some cases, however, the server 16 may wish to exchange an assigned keyword with a complementary keyword prior to using the keywords for selecting a media content item. This may particularly be the case when the keyword indicates a strong mental state, e.g., very sad, very happy, highly valued, etc.
  • the server 16, when it determines that the user may be experiencing a negative mental state, may assign a mood keyword describing a type of media content which will encourage the user.
  • when the user is feeling a positive mental state such as joy, happiness, gratitude, feeling encouraged, valued, loved, capable, etc., the user may wish to engage in more challenging entertainment, and watch a serious movie about social injustice.
  • the server 16, when it determines that the user may be experiencing a positive mental state, may assign a mood keyword describing a type of media content which will challenge the user.
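This exchange of a strong mood keyword for a complementary content keyword can be sketched as a simple mapping; the specific pairs below are assumed examples consistent with the description, not disclosed values:

```python
# Swap a strong mood keyword for a complementary content keyword before
# selecting media content.
COMPLEMENT = {
    "very sad":   "uplifting",          # negative state -> encouraging content
    "very happy": "thought-provoking",  # positive state -> challenging content
}

def content_keyword(mood_keyword: str) -> str:
    return COMPLEMENT.get(mood_keyword, mood_keyword)

print(content_keyword("very sad"))  # "uplifting"
```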
  • the server 16 may further select one or more media content items to recommend and/or provide to the user.
  • the server 16 may compare keywords assigned to the user with keywords assigned to the media content item, e.g., a media content item may include metadata specifying one or more keywords.
  • the server 16 may prepare a list of media content items to recommend to the user based on a number of matches found between the user keywords and the media content item keywords.
  • the server 16 may assign one mood keyword and two situation keywords to the user.
  • the server 16 may give a highest ranking to media content items which have keywords matching all three of the keywords assigned to the user.
  • the server 16 may give a second highest ranking to media content items which have a matching mood keyword, and one matching situation keyword.
  • the server 16 may give a third highest ranking to a media content item having two matching situation keywords, and not having a matching mood keyword, etc.
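The three-tier ranking just described (one mood keyword and two situation keywords assigned to the user) can be written directly as a scoring function. The item names and keyword sets here are illustrative assumptions:

```python
# Rank 1: mood + both situation keywords match; rank 2: mood + one situation
# keyword; rank 3: both situation keywords but no mood match; rank 4: rest.
def rank(item_keywords: set, mood: str, situations: set) -> int:
    mood_match = mood in item_keywords
    sit_matches = len(situations & item_keywords)
    if mood_match and sit_matches == 2:
        return 1
    if mood_match and sit_matches == 1:
        return 2
    if not mood_match and sit_matches == 2:
        return 3
    return 4

items = {
    "Movie A": {"relaxed", "home", "family vacation"},
    "Movie B": {"relaxed", "home"},
    "Movie C": {"home", "family vacation"},
}
user_situations = {"home", "family vacation"}
for title in sorted(items, key=lambda t: rank(items[t], "relaxed", user_situations)):
    print(title)  # Movie A, then Movie B, then Movie C
```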
  • the server 16 may rank media content items based on the ratings the media content items received from other users which had been assigned the same set of keywords. As described below, the server 16 may receive a rating from users following the viewing of a media content item. The server 16 may store the rating, along with keywords assigned to the user, and the media content item which was viewed by the user. In this way, the server 16 can recommend media content items which received high ratings from other users when they were assigned the same set of keywords.
  • the server 16 may only consider other users identified as friends of the user, or may only consider other users that viewed the media content item within a predetermined period of time of the current time, etc.
  • the server 16 may present a list of the media content items to the user.
  • the media content items with the highest ranking may appear at the top of the list, and media content items with lower rankings may appear lower in the list.
  • the server 16 may transmit the list to the user via, e.g., the user device 12 or the media device 13 .
  • the user may select a media content item for view, and send a request for the media content item to the server 16 .
  • the server 16 may, e.g., stream the media content item to the media device 13 .
  • the server 16 may further present a list of media content viewed by friends while having been assigned the same set of keywords, or, e.g., at least one of the same keywords.
  • the list may be arranged in chronological order, and indicate the media content viewed by the friend, together with the name of the friend and, if available, the rating provided by the friend.
  • the server 16 may select a media content item, for example the media content item at the top of the list, and present the media content item to the user via a computing device such as the user device 12 or the media device 13 .
  • the server 16 may also identify other users that currently or recently (e.g., within the last week), have been assigned the same set of keywords.
  • the server 16 may, e.g., prioritize friends of the user. In this manner, the user may contact the other user with the same set of keywords, and ask, e.g., for a recommendation for media content, or otherwise strike up a conversation.
  • the server 16 may receive a rating of the media content item from the user. For example, the server 16 may send a request, via the user device 12 , or via the media device 13 , for the user to rate the media content item.
  • the rating may be, for example, a numerical value between zero and five, with five being the highest rating and zero being the lowest rating.
  • the user may provide the rating, using for example, the user interface 36 on the user device 12 , or the user interface 46 on the media device 13 .
  • the server 16 may store the rating received from the user, along with the identity of the media content item, and the keywords assigned to the user at the time the media content item was provided. In this manner, the server 16 can develop statistical data indicating the responses of users with a particular set of keywords, to a particular media content item.
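Storing ratings keyed by the keyword set active at viewing time supports the statistical recommendations described above. A minimal sketch, with assumed structures and names:

```python
# Accumulate ratings per (keyword set, content item), then recommend items
# rated highly by users who had the same keyword set.
from collections import defaultdict

ratings = defaultdict(lambda: defaultdict(list))  # {keyword set: {item: [scores]}}

def record_rating(user_keywords: set, item: str, score: int) -> None:
    assert 0 <= score <= 5                         # zero to five, five is best
    ratings[frozenset(user_keywords)][item].append(score)

def recommend(user_keywords: set) -> list:
    """Items ordered by average rating from users with the same keyword set."""
    by_item = ratings[frozenset(user_keywords)]
    avg = {item: sum(s) / len(s) for item, s in by_item.items()}
    return sorted(avg, key=avg.get, reverse=True)

record_rating({"relaxed", "home"}, "Movie A", 5)
record_rating({"relaxed", "home"}, "Movie B", 3)
print(recommend({"relaxed", "home"}))  # ['Movie A', 'Movie B']
```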
  • the user may create new keywords and associate the new keywords with the media content item.
  • the user in addition to assigning a rating of five out of five to a media content item, the user may describe the media content with the keywords “life-changing”, “drama”, “thriller” and “cerebral”.
  • if a keyword such as “life-changing” is not already included in the set of predetermined keywords, the server 16 may add this keyword.
  • the server 16 may further associate the watchwords associated with the user, e.g., at the time of the rating, to the keyword “life-changing”.
  • the list of watchwords associated with the keyword “life-changing” will grow by association, as other users are then assigned the keyword “life-changing” or otherwise participate in the conversation around the media content item.
  • FIGS. 4A and 4B are a diagram of an exemplary process 400 for providing media content to a user based on a user mental state.
  • the process 400 begins in a block 405 .
  • a computer 32 in the user device 12 determines if the computer 32 is authorized to collect data related to the user.
  • the computer 32 may have previously received and stored authorization from the user.
  • the user may have, e.g., authorized the computer 32 to collect all available data.
  • the user may have restricted the types of data which may be collected (only email data, only voice data, all types of data except voice data, etc.), or the times and/or situations when data may be collected (at particular times of the day, when the user is watching media content, when the user is at home, etc.).
  • the authorization may have been input by the user and stored by the user device 12 .
  • the user device 12 may query the stored data from time to time to determine authorization. In the case that the user device 12 is not authorized to collect data, the process continues in the block 405. In the case that the user device 12 has received authorization from the user, the process 400 continues in a block 410.
In the block 410, the computer 32 of the user device 12 collects data related to the user. For example, the computer 32 may collect data such as audio data, visual data and biometric data from the data collectors 30. Additionally, the computer 32 may collect data related to the user from other sources. The data sources may include, e.g., communications (emails, texts, conversations, etc.), documents (tax returns, income statements, etc.), global positioning (GPS) data, calendar or other scheduling applications (past, present and future appointments, events, etc.), internet browsing history, mobile purchases, input provided by the user, etc. The computer 32 may provide the collected data to another computing device such as the media server 16. Additionally or alternatively, the media device 13 may collect data. For example, the computer 42 of the media device 13 may collect data related to the user, via the data collectors 40, and provide the collected data to the server 16. Upon receipt of the data by the server 16, the process 400 continues in a block 415.
In the block 415, the server 16 may sort the data as short term data and long term data. Short term data may be data that is determined not to have a long term impact on the mood or situation of the user. For example, data indicating that the user followed a regular routine and met a colleague for lunch may be considered short term data which does not have long term significance. Data indicating that the user is expecting a child may be considered to be data that has long term significance. The short term data may be stored for a short period of time, e.g., 24 hours. The long term data may be added to long term data storage, where it is stored, for example, indefinitely. A time stamp may be associated with some or all of the stored data. Based on the type of data, the weighting of the data when determining the mood and the situation of the user may be adjusted as time passes. After storing the data, the process 400 continues in a block 420.
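One possible (assumed) implementation of the sorting and the time-based weighting is an exponential decay whose half-life depends on whether the datum has long term significance. The half-life values below are illustrative only:

```python
import math
import time

DAY = 24 * 3600  # seconds

def storage_class(datum):
    """Route a datum to short term storage (kept, e.g., 24 hours) or long
    term storage, based on a significance flag set during analysis."""
    return "long_term" if datum["long_term_significance"] else "short_term"

def weight(datum, now=None):
    """Weight a stored datum for mood/situation determination; older data
    count less. Half-life values are assumptions, not from the patent."""
    now = now if now is not None else time.time()
    half_life = 365 * DAY if datum["long_term_significance"] else DAY
    return math.exp(-math.log(2) * (now - datum["timestamp"]) / half_life)

lunch = {"timestamp": time.time() - 2 * DAY, "long_term_significance": False}
baby = {"timestamp": time.time() - 2 * DAY, "long_term_significance": True}
print(storage_class(lunch), round(weight(lunch), 3))  # short_term 0.25
print(storage_class(baby), round(weight(baby), 3))    # long_term 0.996
```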
In the block 420, the server 16 determines whether a trigger event has occurred to provide keywords to the user. As described above, the server 16 may be programmed to assign or update keywords related to the user every evening at a particular time, such as 7 pm. Alternatively, the server 16 may receive a request from the user for updated keywords. For example, the user may input a request via the user device 12 for updated keywords, in preparation for selecting, e.g., a movie to watch. As yet another example, the server 16 may receive an indication, for example, from the media device 13, that the user has turned the media device 13 on, and infer that the user is planning to watch television. The server 16 may recognize the turning on of the media device 13 as a trigger to provide keywords to the user. In the case that the server 16 recognizes a trigger event, the process 400 continues in a block 425. In the case that the server 16 does not recognize a trigger event, the process 400 continues in the block 405.
In the block 425, the server 16 assigns, as described above, one or more situation and/or mood keywords to the user. For example, the server 16 may select situation and mood keywords from sets of predetermined situation and mood keywords, based on data related to the user. In some cases, as described above, when for example the selected keywords associated with the user indicate strong feelings, complementary mood keywords may be exchanged for the originally selected keywords. The process 400 continues in a block 430.

In the block 430, the server 16 may provide the assigned keywords to the user. For example, the server 16 may transmit the keywords to the user device 12 computer 32 and instruct the computer 32 to display the keywords on the user interface 36. As another example, the server 16 may transmit the keywords to the media device 13 computer 42 and instruct the computer 42 to display the keywords on the user interface 46. The process 400 continues in a block 435.
In the block 435, the server 16 determines whether the user has requested a recommended list of media content. For example, the user may, in response to receiving the assigned keywords, request from the server 16, via the user device 12 computer 32, a list of recommended content based on the keywords assigned to the user. Alternatively, the server 16 may receive, for example, a request for an electronic programming guide (EPG) from the media device 13. In the case that the server 16 receives (or otherwise identifies) a request for a recommended list of media content, the process 400 continues in a block 440. Otherwise, the process 400 continues in the block 405.

In the block 440, the server 16 provides a list of recommended media content to the user via, e.g., the user device 12 computer 32, or the media device 13 computer 42. The media content may be ranked based on the keywords assigned to the user and the keywords assigned to each media content item, as described above. The process 400 continues in a block 445.
In the block 445, the server 16 receives a selection of a media content item from the user. The selection may be received, for example, via the user interface 36 of the user device 12, or the user interface 46 of the media device 13. The process 400 continues in a block 450.

In the block 450, the server 16 provides the media content to the user. For example, the server 16 may stream the media content to the media device 13 computer 42. The process 400 continues in a block 455.

In the block 455, the server 16 may request feedback from the user regarding the media content. For example, during, or upon finishing, the streaming of the media content item, the server 16 may send a request to the user, via the user device 12 computer 32, requesting that the user rate the media content item. The process 400 continues in a block 460.
In the block 460, the server 16 determines whether the server 16 has received feedback from the user. In the case that the server 16 has received feedback, the process 400 continues in a block 465. In the case that, after waiting a predetermined time from the request for feedback, e.g., 10 minutes, the server 16 does not receive feedback, the process 400 continues in the block 405.
In the block 465, the server 16 updates metadata related to the media content item. For example, the metadata may include a rating of the media content item based on other users that had the same assigned keywords when viewing the media content item. The server 16 may update the keyword specific rating to take into account the rating from the user that provided the feedback. As another example, the metadata may include a rating of the media content item based on other users that had the same mood keyword as the user and are indicated to be friends of the user. Upon updating the metadata associated with the media content item, the process 400 continues in a block 470.
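The keyword specific rating update could, for example, be kept as a running sum and count per keyword set. The layout below is an assumption, reusing the idea from the rating store sketched earlier:

```python
# Hypothetical per-item metadata: keyword set -> [rating sum, rating count].
item_metadata = {frozenset({"home", "relaxed"}): [9.0, 2]}

def update_keyword_specific_rating(user_keywords, rating):
    """Fold a new feedback rating into the rating kept for viewers who had
    the same assigned keywords when viewing this media content item."""
    key = frozenset(user_keywords)
    total, count = item_metadata.get(key, (0.0, 0))
    item_metadata[key] = [total + rating, count + 1]
    return item_metadata[key][0] / item_metadata[key][1]  # updated average

print(round(update_keyword_specific_rating({"home", "relaxed"}, 5), 2))  # 4.67
```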
In the block 470, the server 16 determines whether the process 400 should continue. For example, the server 16 may be programmed to continue to receive data related to the user from the user device 12 on an on-going basis. In this case, the process 400 may continue in the block 405. Additionally or alternatively, the server 16 may ask the user, e.g., via the user device 12, to confirm that the process 400 should continue. In the case that the server 16 receives confirmation, the process 400 may continue in the block 405. Otherwise, the process 400 may end. The server 16 may, for example, send an instruction to the user device 12 computer 32 to discontinue collecting data, or may discontinue receiving data from the user device 12. In this case, the process 400 ends.
The descriptions of operations performed by the one or more user devices 12, one or more media devices 13, and the media server 16 are exemplary and non-limiting. The one or more user devices 12, one or more media devices 13, and media server 16 are communicatively coupled computing devices. Accordingly, computing operations may be distributed among them. Operations such as collecting data, selecting data to be stored, assigning keywords to a user, and comparing user keywords to media content keywords may be performed in any one of the computing devices, or distributed over any combination of the computing devices.
As used herein, the adverb "substantially" means that a shape, structure, measurement, quantity, time, etc. may deviate from an exactly described geometry, distance, measurement, quantity, time, etc., because of imperfections in materials, machining, manufacturing, etc. The term "exemplary" is used herein in the sense of signifying an example; e.g., a reference to an "exemplary widget" should be read as simply referring to an example of a widget.
Networked devices such as those discussed herein generally each include instructions executable by one or more networked devices such as those identified above, and for carrying out blocks or steps of processes described above. For example, process blocks discussed above may be embodied as computer-executable instructions.

Computer-executable instructions may be compiled or interpreted from computer programs created using a variety of programming languages and/or technologies, including, without limitation, and either alone or in combination, Java™, C, C++, Visual Basic, JavaScript, Perl, HTML, etc. In general, a processor (e.g., a microprocessor) receives instructions, e.g., from a memory, a computer-readable medium, etc., and executes these instructions, thereby performing one or more processes, including one or more of the processes described herein. Such instructions and other data may be stored and transmitted using a variety of computer-readable media. A file in a networked device is generally a collection of data stored on a computer-readable medium, such as a storage medium, a random access memory, etc.

A computer-readable medium includes any medium that participates in providing data (e.g., instructions), which may be read by a computer. Such a medium may take many forms, including, but not limited to, non-volatile media, volatile media, etc. Non-volatile media include, for example, optical or magnetic disks and other persistent memory. Volatile media include dynamic random access memory (DRAM), which typically constitutes a main memory. Computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read.


Abstract

A system includes a computing device including a processor programmed to receive data identifying a mental state of a user, the data including at least one of a user physical condition and a user communication. Based on the mental state data, the processor is programmed to assign one or more stored keywords to the user, and to provide media content to the user based on the keywords assigned according to the mental state data.

Description

BACKGROUND
Users of media content lack mechanisms for selecting appropriate items of media content, e.g., programming that may include movies, sports, live events, etc. For example, present media content delivery devices (e.g., set-top boxes and the like) are lacking in the ability to detect various user attributes, e.g., to interpret data indicating a user's mental state and events.
DRAWINGS
FIG. 1 is a diagram of an exemplary media system for providing media content based on user state detection.
FIG. 2 is a diagram of an exemplary user device for the media system of FIG. 1.
FIG. 3 is a diagram of an exemplary media device for the media system of FIG. 1.
FIGS. 4A and 4B are a diagram of an exemplary process for providing media content based on a user state.
DETAILED DESCRIPTION
Exemplary System
A user device computer, with authorization from the user, collects data that can be used to predict a user's mental state. For example, collected data may indicate a user's mental state by providing values for attributes such as a user's physical condition, recent events affecting a user (sometimes referred to as a user's "personal circumstances" or "situation"), events possibly affecting a user's mental state, etc. Physical condition data that can be used to determine a mental state can include, for example, voice samples, facial expressions, biometric data (respiration, heartrate, body temperature, etc.), etc. Situation data may be collected related to events (received promotion, had a child, bought a new home, etc.), locations (at work, in Hawaii, in a bar, in the living room, etc.), demographic characteristics (age, gender, religion, marital status, financial circumstances, etc.), company (people the user is with, their gender, age, relationship, etc.), time (date, day of the week, time of day, user's birthday, holiday), and any other information that may be used to understand the current situation of the user. Sources for collected data indicating a user's mental state may include communications (emails, texts, conversations, etc.), documents (tax returns, income statements, etc.), global positioning (GPS) data, calendar data (past, present and future events), internet browsing history, mobile purchases, input provided by the user, etc.
The user device computer typically is programmed to provide the collected data to a server of a media content provider. The media server can store and maintain the data about the user. Based on the stored collected data, the media content provider computer is programmed to assign one or more predetermined keywords describing the user's situation and one or more predetermined keywords describing the user's mood. Predetermined keywords to describe a user's situation may include, e.g., "inspirational," "comedy," "jimcarrey," "family vacation," "goofy," "grief," etc. Predetermined keywords to describe a user's mood may include, e.g., "happy," "sad," "excited," "bored," "frustrated," "relaxed," etc.
The media server is programmed to generate and provide to the user a set of the assigned keywords. For example, the set of assigned keywords may include two predetermined keywords selected according to a user situation or situations and one predetermined keyword selected according to one or more detected user physical conditions. The media provider computer may provide the set of assigned keywords, e.g., in response to a request from the user, on a regular basis (e.g., once per hour), or based on the location of the user (when the user arrives at home, arrives in the living room, etc.).
As described in additional detail below, the user may identify and request a media content item, which is provided by the media provider for selection according to a current predicted user mental state, e.g., based on a user's situation, physical condition(s), etc., as reflected in the set of assigned keywords. The user may send the request to the media content provider computer, either via the user device or via a media device available to the user. The media content provider may then provide the requested media content to the user via, e.g., the user device or the media device.
As shown in FIG. 1, an exemplary media system 10 includes one or more user devices 12, one or more media devices 13, a network 14, and a media server 16. The media device 13 may be communicatively coupled to a display device 22. The user device 12, media device 13 and display device 22 may be included in a customer premises 11.
A user may be a consumer of media content who provides the media system 10 with access to various user data. Generally, the user operates a user device 12 which collects some or all of the data related to the user, and provides the data to the media server 16. The server 16 may be provided by a media content provider such as is known, e.g., a cable or satellite media provider, an internet site, etc.
As described in additional detail below, the user device 12, data collectors associated with the user device 12, and/or other data collectors communicatively coupled to the user device 12, media device 13 or media server 16, may collect data regarding the user and provide the collected data to the media server 16. Based on the collected data, a media server 16 may assign to the user one or more keywords describing a user mental state, e.g., based on a mood of the user indicated by collected data concerning physical attributes of the user, and/or one or more keywords describing a situation of the user indicated by data collected from documentation concerning the user, e.g., from an audio conversation, e-mail, text messages, calendar entries, etc. Further, based on the keywords associated with the user, the media server 16 may recommend or provide one or more items of media content to the user via, e.g., the media device 13.
The user device 12 may be a known device such as a mobile telephone, tablet, smart wearable (smart watch, fitness band, etc.), other portable computing device, etc. As described in additional detail below, the user device 12 may include one or more applications such as email, a calendar, web browser, social media interfaces, etc., and one or more data collectors such as a video camera, biometric sensors, a global positioning system, etc. The user device 12 may additionally include an application for collecting data related to the user from the one or more applications and one or more data collectors, and providing the collected data to the media server 16 computer or to another computing device.
The media device 13 receives and displays media content, and is typically a known device such as a set-top box, a laptop, desktop, tablet computer, game box, etc. The term “media content” as used herein, refers to digital audio and/or video data received in the user device 12 computer and/or in the media device 13. The media content may be received, for example, from the media server 16 via the network 14.
The media device 13 is connected to or could include a display device 22. The display device 22 may be, for example, a television receiver, a monitor, a desktop computer, a laptop computer, a tablet, a mobile telephone, etc. The display device 22 may include one or more displays and one or more speakers for outputting respectively the video and audio portions of media content and advertisement content received from the media device 13.
The network 14 represents one or more mechanisms for providing communications, including the transfer of media content items, between the user device 12, media device 13, and the media server 16. Accordingly, the network 14 may be one or more of various wired or wireless communication mechanisms, including any desired combination of wired (e.g., cable and fiber) and/or wireless (e.g., cellular, wireless, satellite, microwave, and radio frequency) communication mechanisms and any desired network topology (or topologies when multiple communication mechanisms are utilized). Exemplary communication networks include wireless communication networks, local area networks (LAN) and/or wide area networks (WAN), including the Internet, etc.
The media server 16 may be, for example, a known computing device included in one or more of a cable or satellite television headend, a video streaming service that generally includes a multimedia web server (or some other computing device), etc. The media server 16 may provide media content, e.g., a movie, live event, audio, to the user device 12 and/or media device 13.
The media content is typically delivered as compressed audio and/or video data. For example, the data may be formatted according to known standards such as MPEG or H.264. MPEG refers to a set of standards generally promulgated by the International Standards Organization/International Electrical Commission Moving Picture Experts Group (MPEG). H.264 refers to a standard promulgated by the International Telecommunications Union (ITU). Accordingly, by way of example and not limitation, media content may be provided to a media device 13 in a format such as the MPEG-1, MPEG-2 or the H.264/MPEG-4 Advanced Video Coding (AVC) standards (H.264 and MPEG-4 at present being consistent) and HEVC/H.265. As is known, MPEG and H.264 data include metadata, audio, and video components. Further, media content and advertisement content in the media system 10 could alternatively or additionally be provided according to some other standard or standards. For example, media content and advertisement content could be audio data formatted according to standards such as MPEG-2 Audio Layer III (MP3), Advanced Audio Coding (AAC), etc.
As shown in FIG. 2, the user device 12 includes a computer 32, a communications element 34 and a user interface 36. Additionally, the user device 12 may include and/or be communicatively coupled, e.g., in a known manner, with one or more data collectors 30.
The data collectors 30 may include, for example, cameras, microphones, biometric sensors, accelerometers, gyroscopes, a global positioning system and other types of sensors for collecting data regarding the respective user of the user device 12. The data collectors 30 are communicatively coupled to the computer 32, and may be included in or remote to the user device 12.
The data collectors 30 may be used to collect data related to the user and other people and objects proximate to the user device 12. Proximate to the user device 12 may be defined, for example, to be within a detection range of a respective sensor, within a same room as the user device 12, or within a fixed distance, for example 20 meters, of the user device 12. As discussed below, the computer 32 may be authorized to collect data via the data collectors 30 at any time, or based on conditions established, for example, by the user of the user device 12.
For example, a microphone in the user device 12 may listen, on a substantially continuous basis, to the surroundings, and record conversations and other received sounds and provide audio data to the computer 32. The computer 32 may determine whether the received data may be useful in determining a situation and/or mood of the user. The computer 32 may store data determined to be useful in determining a situation and/or mood of the user and discard data which is determined not to be useful in determining a situation and/or mood of the user. For example, the content of a conversation that includes only an exchange of greetings and small talk may be discarded, whereas the tone quality of the same conversation may be determined to be indicative of mood, and may be stored.
The data collectors 30 may further be used to collect biometric data related to the user. For example, the data collectors 30 may measure the user's blood pressure, heartrate, body temperature, etc.
The communications element 34 may include hardware, software, firmware, etc., such as are known, and may be configured for one or more types of wireless communications. The hardware may include, e.g., one or more transceivers, one or more receivers, one or more transmitters, one or more antennas, one or more microcontrollers, one or more memories, one or more electronic components etc. The software may be stored on a memory, and may include, e.g., one or more encoders, one or more decoders, etc. for converting messages from one protocol to another protocol. Some functions, e.g., encoding functions, may be realized via firmware.
The types of wireless communications may include cellular communications, WiFi communications, two-way satellite communications (e.g., emergency services), one way satellite communications (e.g., receiving digital audio radio broadcasts), AM/FM radio, etc.
The user interface 36 may include one or more input elements such as buttons, a keyboard, a touchscreen, a microphone, a touchpad, etc. for receiving input from a user. The user interface 36 may further include one or more display elements such as an LCD display, speaker, light emitting diodes, buzzers, etc. for outputting data to the user.
The computer 32 includes a memory, and one or more processors, the memory storing program code, i.e., computer-executable instructions, executable by the processor. The computer 32 is operable to receive input from a user and transmit the input to another computing device such as the media device 13 or the media server 16. The computer 32 further may include one or more applications such as are known for email, a calendar, texting, social media interfaces, web browsers, etc., and may send data to and receive data from remote computers, including without limitation the media server 16, for such applications.
Additionally, the computer 32 is programmed to collect data related to the user and provide the collected data to another computing device such as the media server 16. The data may be collected from other applications installed on the computer 32, or from the data collectors 30. The collected data may include, e.g., data which may be useful for determining the mood and the situation of the user. For example, the collected data may include words or phrases parsed from documents or files, e.g., from monitoring or recording voice communications, parsing e-mails, text messages, calendar entries, etc. Alternatively or additionally, the collected data could include data concerning current physical attributes of a user, e.g., heartrate, respiration, skin color, body temperature, etc.
As shown in FIG. 3, the media device 13 includes a computer 42, a communications element 44, and a user interface 46. The media device 13 may further include one or more data collectors 40. The computer 42 is communicatively coupled with each of the data collectors 40, communications element 44 and user interface 46.
The data collectors 40 may include, for example, cameras, microphones, motion detectors, infrared sensors, ultrasonic sensors, and other types of sensors for collecting data regarding users proximate to the media device 13. Proximate to the media device 13 may be defined, e.g., as within a range to be detected by the data collectors 40. As other examples, proximate to the media device 13 may be defined to be within a fixed distance, e.g., 20 meters, of the media device 13, within a range to view a display device 22 included in the media device 13, within a room including the media device 13, etc.
The data collectors 40 are communicatively coupled to the computer 42, and may be included in or remote to the media device 13. The data collectors 40 may be used to collect visual data, audio data, motion data, biometric data, etc. related to one or more users proximate to (e.g., in a room with and/or within a predetermined distance of) the media device 13.
The communications element 44 may include hardware, software, firmware, etc., such as are known, and may be configured for one or more types of wireless communications. The hardware may include, e.g., one or more transceivers, one or more receivers, one or more transmitters, one or more antennas, one or more microcontrollers, one or more memories, one or more electronic components etc. The software may be stored on a memory, and may include, e.g., one or more encoders, one or more decoders, etc. for converting messages from one protocol to another protocol. Some functions, e.g., encoding functions, may be realized via firmware.
The types of wireless communications may include cellular communications, WiFi communications, two-way satellite communications (e.g., emergency services), one way satellite communications (e.g., receiving digital audio radio broadcasts), AM/FM radio, etc.
The user interface 46 may include one or more input elements such as buttons, a keyboard, a touchscreen, a roller ball, a mouse, a microphone, switches, etc. for receiving input from a user. The user interface 46 may further include one or more display elements such as an LCD display, plasma display, speaker, lamps, light emitting diodes, buzzers, etc. for outputting data to the one or more users.
The computer 42 includes a memory, and one or more processors, the memory storing program code, i.e., computer-executable instructions, executable by the processor. The computer 42 is operable to receive media content from the media server 16 and display received media content on the display device 22.
Additionally, the computer 42 may be programmed to collect data regarding the users proximate to the media device 13. The data may be collected via, e.g., the data collectors 40. The collected data may include, e.g., data which may be useful for determining the mood and the situation of the user.
The media server 16 may provide media content to the user device 12 and/or media device 13. The media server 16 may include one or more processors and memories as is known, as well as known mechanisms for communicating via the network 14.
The memory of the server 16 can store program code, i.e., computer-executable instructions, executable by the processor. The server 16 is programmed to provide media content to the user device 12 and/or media device 13, via the network 14.
Additionally, the server 16 may be programmed to receive data related to the situation and mood of the user. Based on the data, and as described in additional detail below, the server 16 may be programmed to assign one or more predetermined keywords describing the user's situation and/or one or more predetermined keywords describing the user's mood to the user. The server 16 may further be programmed to provide the keywords to the user. The server 16 may yet further be programmed to provide one or more media content items to the user based on the assigned keywords. In some cases the server 16 may recommend one or more media content items to the user based on the assigned keywords to the user, e.g., via the user device 12. The server 16 may then receive a request for a media content item selected by the user from the one or more recommended media content items, and provide the media content to the user based on the request.
The communications element 54 may include hardware, software, firmware, etc., such as are known, and may be configured for one or more types of wireless communications. The hardware may include, e.g., one or more transceivers, one or more receivers, one or more transmitters, one or more antennas, one or more microcontrollers, one or more memories, one or more electronic components etc. The software may be stored on a memory, and may include, e.g., one or more encoders, one or more decoders, etc. for converting messages from one protocol to another protocol. Some functions, e.g., encoding functions, may be realized via firmware.
The communications element may be programmed to transmit and receive media content, e.g., via satellite and/or wired (cable) communications. Additionally, the communications element may be programmed for wireless communications such as cellular communications and WiFi communications.
Processes
Collecting User Data
As described above, the user device 12 computer 32 may collect various types of data related to the user. The collected data may be categorized, e.g., as mood data and situation data.

Mood data may include, e.g., audio data and body data. The audio data may include, e.g., voice samples from the user. The body data may include visual data of the user (e.g., facial expression, posture, etc.) and biometric data (heart rate, blood pressure, body temperature, pupil dilation, etc.).

Situation data may be collected related to events (received a promotion, bought a new home, birth of a child, etc.), locations (at work, in Hawaii, in a bar, in the living room, etc.), circumstances (age, gender, religion, marital status, financial circumstances, etc.), company (people the user is with, their gender, age, relationship, etc.), time (date, day of the week, time of day, user's birthday, holiday), and other information that may be used to understand the situation of the user.
The computer 32 may collect the data via data collectors 30 included in or communicatively coupled to the computer 32. For example, the computer 32 may receive audio data. The computer 32 may detect, for example, when the user is speaking with another person, and record the speech. Additionally or alternatively, the computer 32 may, for example, when the user indicates that the user would like to select a media content item, engage the user in conversation. For example, the computer 32 may ask the user "What is your current mood?" or "How are you feeling today?" The computer 32 may collect samples of speech to use in analyzing the mood and/or situation of the user.

The computer 32 may further collect visual data from the data collectors 30. The data collectors 30 may include one or more cameras included in or communicatively coupled to the computer 32. The cameras may collect visual data related to the user's facial expressions, posture, etc. The cameras may further collect data indicating, for example, a degree of dilation of the user's pupils, a degree of coloration of the skin (for example, blushing), etc.
Still further, the data collectors 30 may include sensors for measuring blood pressure, pulse, body temperature, etc. of the user.
The data collectors 30 may still further be used to collect situational data related to the user. For example, an audio data collector (microphone) 30 may detect the voices of other people in conversation with or otherwise proximate to the user. A camera 30 may receive images of people proximate to the user, or of the environment of the user.
In addition to collecting data via the data collectors 30, the computer 32 may collect data related to the user from other sources. The data sources may include communications (emails, texts, conversations, etc.), documents (tax returns, income statements, etc.), global positioning data (GPS), calendar or other scheduling application (past, present and future appointments, events, etc.), internet browsing history, mobile purchases, input provided by the user, etc.
A time stamp may be associated with some or all of the data. For example, when collecting a voice sample, the computer 32 may note the time that the voice sample was collected, and store the time along with the voice sample. In this way, data such as voice data may be correlated in time to other data such as biometric data or visual data related to the user.

Similarly, a date and time that a communication was sent may be extracted from the communication and stored with the data extracted from the text of the communication. In this manner, a time can be associated with the text of the communication. For example, an email from a user complaining about a decision by a colleague may be associated with the time of the communication.
In addition to monitoring the user, the user's surroundings, the user's communications, etc., the computer 32 may collect data from the user interactively. For example, the computer 32 may (e.g., via a speech program) ask the user how the user is feeling today, or how the user's day is going. The computer 32 may record the user's response and use the recorded response as a voice sample.
The computer 32 may provide the data to, for example, the media server 16, which may use the data to determine the user's mood and situation. Based on the determined mood and situation of the user, the media server 16 (or other computing device) may assign one or more keywords to the user, indicating the mood and situation of the user.
Evaluating User Data
The media server 16 may, as indicated above, determine the user's mood based on the collected data. For example, the server 16 may evaluate voice data and body data related to the user.
Voice Data:
The user device 12 computer 32 may collect voice samples related to the user. For example, as described above, a microphone 30 may be activated while the user is conducting conversations with other people, and the computer 32 may collect one or more samples of the user's voice. As another example, when, for example, the user indicates that the user would like to select an item of media content for viewing, the computer 32 may engage the user in conversation. The computer 32 may ask questions of the user such as "What is your mood at the moment?" or "How has your day gone?" The computer 32 may collect the responses as current samples of the user's voice. The computer 32 may provide the collected data to, e.g., the media server 16, which may analyze the voice samples to determine the user's mood.
The server 16 may, e.g., analyze the voice samples for qualities such as tone, volume, voice inflection, pitch, speed, contrast, etc. The server 16 may compare the data to data for the general population, or for a demographic segment thereof, e.g., according to age, gender, etc., of the user. Such comparison, using the general population data as a baseline, can be used to evaluate a user mental state, e.g., a user may have a positive or negative mental state that could be quantified as a percentage worse or better than the general population. For example, speech at a speed within a particular range may indicate that the user is excited or agitated. Speech at a speed below a speed threshold may indicate that the user is relaxed. Speech in a monotone voice may indicate that the user is bored.
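A toy illustration of such baseline comparisons follows. The feature names, population statistics, and thresholds are assumptions chosen only to mirror the speed and monotone examples in this paragraph, not measured values:

```python
def z_score(value, population_mean, population_std):
    """How far a measured voice quality sits from the general-population
    baseline, in standard deviations."""
    return (value - population_mean) / population_std

def classify_speech(words_per_minute, pitch_variance):
    """Map voice qualities to the moods suggested above: fast speech ->
    excited/agitated, slow speech -> relaxed, monotone -> bored."""
    if pitch_variance < 5.0:
        return "bored"
    if words_per_minute > 170:
        return "excited or agitated"
    if words_per_minute < 110:
        return "relaxed"
    return "neutral"

print(round(z_score(185, population_mean=150, population_std=20), 2))  # 1.75
print(classify_speech(185, pitch_variance=25.0))  # excited or agitated
print(classify_speech(140, pitch_variance=2.0))   # bored
```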
Additionally or alternatively, the server 16 may analyze the voice samples in comparison to one or more baseline samples of the user. For example, the server 16 may request a baseline voice sample from the user when the user is happy and another baseline sample when the user is angry, etc. The server 16 may analyze the current voice samples by comparing the samples to the one or more baseline samples.
Still further, the server 16 could, upon receiving a voice sample, ask the user to describe the user's mood. In this manner the server 16 could build a table of voice samples reflecting different user moods.
Body Data:
The user device 12 computer 32 may collect data related to the facial expressions, biometrics, body language, etc., of the user. For example, the computer 32 may collect, via a camera 30 or video recorder 30 included in or communicatively coupled with the user device 12, facial expressions of the user and body language of the user. The computer 32 may further collect, via, e.g., biometric sensors 30 as are known, biometric data such as heart rate, blood pressure, body temperature, pupil dilation, etc. related to the user. The computer 32 may collect the data on an on-going basis as the data is available. Additionally or alternatively, the computer 32 may, for example, collect biometric data, visual data, etc. at a time when the user indicates that the user would like to select an item of media content. The computer 32 may provide the collected data, e.g., to the media server 16. The server 16 may analyze the data, and determine whether the body data indicates a particular mental state (e.g., mood), e.g., that the user is sad, angry, agitated, calm, sleepy, etc.
For example, using facial recognition techniques as are known, the server 16 may recognize that the user is smiling, or frowning, and determine respectively that the user is happy or sad. The server 16 may recognize that the user's eyes are opened widely, and determine that the user is surprised. The server 16 may determine, based on skin temperature or skin color, that the user is flushed, and further determine that the user is embarrassed.
The server 16 may further combine voice data, visual data and/or biometric data to more precisely identify the mood of the user. As discussed above, the voice data, visual data, biometric data, etc. may be collected and stored together with time stamps when the data was received (or generated). Data from a same or similar time period may be combined to identify a mood (or situation) of the user.
For example, high paced speech may indicate either that the user is angry or that the user is excited. Combining this data with facial data indicating that the user is smiling at a same time that the high paced speech is observed may assist the server 16 in accurately identifying that the user is excited. As another example, elevated blood pressure together with high pitched speech may be an indication that the user is frightened.
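A sketch of combining time-stamped observations from a same or similar time period, mirroring the fast-speech-plus-smile and blood-pressure-plus-pitch examples above; the window size and thresholds are assumptions:

```python
def fuse_mood(signals, t, window=60.0):
    """Combine observations whose timestamps fall within `window` seconds
    of time t; a smile disambiguates fast speech toward 'excited'."""
    nearby = {s["type"]: s for s in signals if abs(s["t"] - t) <= window}
    fast_speech = nearby.get("voice", {}).get("wpm", 0) > 170
    smiling = nearby.get("face", {}).get("smiling", False)
    high_bp = nearby.get("biometric", {}).get("systolic", 0) > 140
    high_pitch = nearby.get("voice", {}).get("pitch_hz", 0) > 250
    if fast_speech and smiling:
        return "excited"
    if high_bp and high_pitch:
        return "frightened"
    if fast_speech:
        return "angry or excited (ambiguous)"
    return "undetermined"

signals = [
    {"type": "voice", "t": 100.0, "wpm": 185, "pitch_hz": 200},
    {"type": "face", "t": 110.0, "smiling": True},
]
print(fuse_mood(signals, t=105.0))  # excited
```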
As described above, the computer 32 may collect what is sometimes referred to as document data, e.g., text files, audio files, video files, etc., related to the user's situation via the data collectors 30. For example, the computer 32 may determine a user's identity and/or location, etc., based on data from a camera and/or microphone 30.
Additionally, data from other sources may be used to determine the situation of the user. The data sources may include communications (emails, texts, conversations, etc.), documents (tax returns, income statements, etc.), global positioning (GPS) data, calendar or other scheduling application (past, present and future appointments, events, etc.), Internet browsing history, mobile purchases, input provided by the user, etc.
The server 16 may analyze the user's current situation based on events in the user's life (just completed a stressful meeting, planning to meet an old friend, a relative recently died, just received a raise, etc.), the user's company (people with the user), the user's location (the user's living room, in a hotel, etc.), the user's circumstances (financial status, age, religion, politics, etc.), and the day and time (year, month, week, weekday, weekend, holiday, user's birthday, birthday of the user's spouse, anniversary, morning, evening, etc.). The computer 32 may collect situation data related to the user, and determine the situation of the user based on the data.
Events:
The server 16 may identify past, present and future events related to the user. For example, based on data entered in, e.g., a calendar, the server 16 may determine that the user has a business trip scheduled for the following week. Based on email exchanges, the server 16 may determine that the business trip will require extensive preparation, and that the user will be travelling with his boss.
As another example, based on text messages, emails, and conversations, etc., the server 16 may determine that a relative of the user recently died.
Company:
The server 16 may determine, based on collected data, other people who are with the user at a current time. For example, calendar data, email exchanges, text message exchanges, etc., may indicate that the user is scheduled to meet with friends to watch a sporting event at the current time, or to meet with colleagues to discuss a business issue. Visual and audio data collected by cameras 30 and microphones 30 may be used to identify people proximate to the user, using image recognition and audio recognition techniques, as are known. The server 16 may determine general demographic information about the people with the user such as their gender, approximate age, etc. Additionally or alternatively, the server 16 may identify specific people, and associate the specific people with stored data about the people. The stored data may indicate, for example, the situation of the people.
Location:
The server 16 may determine, based on collected data, a past, present or future location of the user. For example, the server 16 may determine, based on global positioning data, visual data, audio data, calendar data, etc., that the user is in the user's living room, in the user's office, in Hawaii, etc.
Circumstances:
The server 16 may determine, based on collected data, both long and short term circumstances related to the user. Circumstances may include age, gender, employment status, marital status, political views, religion, financial status, health status, place of residence, etc. For example, the server 16 may determine data related to financial status from documents such as electronic tax returns or W2 forms, or from communications such as emails, texts and conversations. Data related to internet browsing history may, e.g., provide an indication of political views or health status.
Day and Time:
The computer 32 may collect and provide, e.g., to the server 16, day and time data related to the user. For example, the computer 32 may determine, based on documents, communications, etc., dates important to the user such as the user's birthday, the birthday of the user's spouse, the user's anniversary, etc. The computer 32 may further determine, and provide to the server 16, routines of the user such as work schedule, times when the user is at home, days and times when the user attends a fitness center, etc. The server 16 may, based on this data, determine what a user is most likely doing at a particular time on a particular day of the week, date of the month, etc. Additionally, the server 16 may, for example, take into consideration that a religious holiday important to the user, or the birthday of the user, will soon occur.
The server 16 may accumulate situation data related to the user on an on-going basis. The accumulated data 52 may be considered together with current data for determining the situation of the user.
Assigning Keywords to the User According to a Determined Mental State
The server 16 may assign or update keywords related to a user in response to a trigger event. For example, the server 16 may be programmed to assign or update keywords related to the user every evening at a particular time, such as 7 pm. The time may be selected, for example, to correspond to a time when the user generally watches television. The user may select the time via input to the server 16, or the server 16 may determine the time based on routines of the user.
As another example, the server 16 may receive a request from the user for updated situation keywords. For example, the user may input a request via the user device 12 for updated keywords, in preparation for selecting a movie to watch.
As yet another example, the server 16 may receive an indication, for example, from the media device 13, that the user has turned the media device 13 on, and is (apparently) planning to watch television.
Upon identifying a trigger event, the server 16 may assign or update one or more keywords to the user. The keywords may be related to the user's mental state at a current time. The current time may be, for example, the time the trigger event is received, or within an hour before the trigger was received, etc.
The server 16 may assign, based on data related to the user, one or more keywords to a user selected from a list of predetermined keywords. The predetermined keywords may be single words such as “home”, “travelling”, “Republican”, “married”, etc., and/or phrases such as “preparing for a difficult task”, or “going out with friends”, “planning a trip”, “relative died”, etc. A set of predetermined keywords may be available to the server 16, which allow the server 16 to characterize a wide range of situations which may occur in the life of the user.
Based on the data collected, as described above, the server 16 may identify one or more keywords which describe a current situation of the user. For example, based on global positioning data, the server 16 may identify that the user is home. Based on data extracted from a calendar, the server 16 may know that the user just completed entertaining a client for two days. Based on the available data, the server 16 may assign a first keyword “home” and a second keyword “completed demanding task” to the user.
The server 16 may assign keywords to the user based on sets of sub-keywords or watchwords associated with each keyword. The watchwords may be single words or phrases. For example, the server 16 may associate a set of watchwords with each keyword. The set of watchwords for a particular keyword may contain any number of watchwords from just one or two watchwords to hundreds or even thousands of watchwords.
Further, while analyzing user mental state data, the server 16 may create a list of watchwords associated with the user. The watchwords may be, e.g., extracted from user communications, determined based on user physical conditions, or associated with the user based on the user location, people together with the user, demographics of the user, etc.
At a time, e.g., when the server 16 is triggered to assign keywords to the user, the server 16 may match the watchwords assigned to the user with the watchwords assigned to each keyword. The server 16 may then assign to the user the two keywords with the highest number of matches between the watchwords assigned to the user and the watchwords assigned to each keyword.

For example, based on sensor data, the server 16 may associate the watchwords "flushed face", "clenched teeth" and "red face" with the user. The watchword "flushed face" may be included in the set of watchwords for both of the keywords "embarrassed" and "angry". The watchwords "clenched teeth" and "red face" may, however, only be included in the set of watchwords for the keyword "angry". Because more watchwords are matched for "angry" than for "embarrassed", the server 16 may assign the keyword "angry" to the user. In some cases, for example, when there are an equal number of matches for two different keywords, the server 16 may ask the user to select a keyword. The question may ask the user to choose between two different keywords. For example, the server 16 may ask "Would you say that angry or embarrassed better describes your mental state at this time?"
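In code, the matching step might look like the following sketch, seeded with the watchword sets from the example above (the "averted gaze" entry is invented filler so that "embarrassed" has more than one watchword):

```python
# Watchword sets per keyword, abbreviated from the example above.
KEYWORD_WATCHWORDS = {
    "embarrassed": {"flushed face", "averted gaze"},
    "angry": {"flushed face", "clenched teeth", "red face"},
}

def best_keywords(user_watchwords, n=1):
    """Count matches between the user's watchwords and each keyword's
    watchword set, and return the n keywords with the most matches.
    A tie would instead prompt the user to choose, as described above."""
    scores = {
        keyword: len(watchwords & set(user_watchwords))
        for keyword, watchwords in KEYWORD_WATCHWORDS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(best_keywords({"flushed face", "clenched teeth", "red face"}))  # ['angry']
```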
Priorities may be assigned to different types of data when determining keywords. The data may be, for example, assigned a priority value in a range from 1 to 100, with 100 being the highest priority. A death in the immediate family or being diagnosed with a serious illness may be assigned a value of 100. A change in employment situation or recently moving may be assigned a value of 75. A planned, routine business trip may be assigned a value of 54.
Further, the priority values may be determined based on when the event occurred (or is scheduled to occur) relative to the current time. For example, a death in the family that occurred within the last three months may have a priority value of 100. The priority value may be reduced slowly as time passes following the initial three months. With regard to a planned event, the priority value may increase as the event comes closer.
Other types of time-based algorithms may be used. For example, following the death of an immediate family member, this event may receive an increased priority value during holidays, and on the anniversary date of the event.
The user's company, i.e., the people with the user, may be given a high priority value. For example, when the user is together with the user's spouse, this may outweigh events related to the user, and be given a high priority value.
The server 16, based, e.g., on assigned priorities, determines data that may be most relevant to the current situation of the user. The server 16 then selects one or more situation keywords which may best describe the user. The server 16 may select the keywords by selecting the data (event, company, location, circumstances, time, etc.) with the highest priority values as described above, and selecting the predetermined keywords which best match the selected data. In some cases, the predetermined keyword may be a direct match with the data. For example, a set of predetermined keywords may include “death in the immediate family” which may directly match to high priority data related to the user.
In other cases, the server 16 may need to identify a keyword which best correlates to data related to the user. For example, several different types of situations, such as changing a job, moving to a new location, graduating from a school, etc. may all be associated with a situation keyword “major transition”. In order to establish this correlation, the server 16 may, for example, be provided with a list of examples of events that qualify as a “major transition”, and the server 16 may search for the event in the list. Additionally or alternatively, the server 16 may analyze the received data and determine a meaning as is known, and compare the meaning to a meaning of the situation keyword.
As stated above, the server 16 may assign, based on data collected concerning a user's physical state, one or more keywords to a user. Such predetermined keywords may indicate a user's mood, and may be words or expressions such as "joyful," "relieved," "sad," "excited," "overwhelmed," "at peace," "happily surprised," etc. A set of predetermined keywords may be available to the server 16, which allows the server 16 to characterize a user's mood based on various possible detected physical attributes of the user.

For example, based on a recorded voice sample, the server 16 may measure voice parameters such as rate of speech, pitch, contrast, volume, tone, etc. Speech at a high rate of speed, e.g., may indicate that the user is excited. Speech with a low contrast, i.e., in a monotone voice, may indicate that the user is bored. Sentences with an upward inflection at the end may indicate that the user is feeling insecure.
The server 16 may further analyze data related to body language of the user. For example, visual data may indicate that the user is standing erect with the head facing forward, from which the server 16 may determine that the user is feeling confident. The server 16 may analyze the user's face, detect tightness, and determine that the user is feeling tense.
Still further, the server 16 may analyze biometric data such as the heart rate of the user or blood pressure of the user. A high pulse rate may indicate that the user is frightened or excited. A low pulse rate may indicate that the user is relaxed.
As discussed above, the server 16 may consider a combination of voice data, visual data and biometric data to determine a user's mental state. For example, a high pitched voice, combined with a high pulse rate and a smile detected on the user's face may indicate that the user is “happily surprised.” Also as discussed above, the server 16 may use time stamps associated with the voice, visual and biometric data in order to combine data associated with a particular time.
Different types of data may be given higher priority than other types of data. For example, facial expressions, such as a smile or frown may be strong indications of a particular mood, whereas voice data or biometric data may be associated with multiple moods. In such cases, a facial expression which is a strong indicator of a particular mood may be given priority over the voice and biometric data. In other cases, for example, a particular facial expression may be indicative of two or more possible moods. Voice and biometric data may be used in order to select between the possible moods.
Current data, for example, voice samples or visual data from the previous five minutes, may be given priority over data which is older.
Based on the visual data, voice data and biometric data related to the user, the server 16 may assign one or more mood keywords to the user which may best characterize the mood of the user at the current time.
Assigning Complementary Keywords to the User
As described above, the server 16 may assign one or more keywords to the user at a current time. In some cases, however, the server 16 may wish to exchange an assigned keyword with a complementary keyword prior to using the keywords for selecting a media content item. This may particularly be the case when the keyword indicates a strong mental state, e.g., very sad, very happy, highly valued, etc.
For example, when a user is experiencing a negative mental state such as depression, loneliness, anger, fear, etc., the user may wish to view media content which is uplifting and cheerful. Accordingly, the server 16, when it determines that the user may be experiencing a negative mental state, may assign a mood keyword describing a type of media content which will encourage the user. Similarly, when the user is feeling a positive mental state such as joy, happiness, gratitude, encouraged, valued, loved, capable, etc., the user may wish to engage in more challenging entertainment, and watch a serious movie about social injustice. Accordingly, the server 16, when it determines that the user may be experiencing a positive mental state, may assign a mood keyword describing a type of media content which will challenge the user.
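A minimal sketch of such a complementary exchange; the mood-to-content mapping below is an assumption illustrating the behavior described here, not a mapping from the patent:

```python
# Assumed mapping from strong detected mood keywords to complementary
# content mood keywords.
COMPLEMENTARY = {
    "depressed": "uplifting",
    "lonely": "cheerful",
    "very sad": "uplifting",
    "joyful": "challenging",
    "very happy": "challenging",
}

def content_mood_keyword(detected_mood):
    """Exchange a strong mood keyword for a complementary one before
    selecting media content; otherwise keep the detected keyword."""
    return COMPLEMENTARY.get(detected_mood, detected_mood)

print(content_mood_keyword("depressed"))  # uplifting
print(content_mood_keyword("relaxed"))    # relaxed (no exchange needed)
```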
Selecting a Media Content Item Based on Assigned Keywords
Based on the one or more situation keywords and one or more mood keywords assigned to the user, the server 16 may further select one or more media content items to recommend and/or provide to the user.
As one example algorithm, the server 16 may compare keywords assigned to the user with keywords assigned to the media content item, e.g., a media content item may include metadata specifying one or more keywords. The server 16 may prepare a list of media content items to recommend to the user based on the number of matches found between the user keywords and the media content item keywords.
For example, the server 16 may assign one mood keyword and two situation keywords to the user. The server 16 may give a highest ranking to media content items which have keywords matching all three of the keywords assigned to the user. The server 16 may give a second highest ranking to media content items which have a matching mood keyword, and one matching situation keyword. The server 16 may give a third highest ranking to a media content item having two matching situation keywords, and not having a matching mood keyword, etc.
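This tiered ranking can be expressed compactly as a tuple sort, as in the following sketch (the item representation is an assumption): a mood match outranks any number of situation matches, and situation matches break ties.

```python
def rank_items(mood_kw: str, situation_kws: set[str],
               items: list[dict]) -> list[dict]:
    """Sort items by (mood keyword matched, number of situation
    keywords matched), highest first.

    items: [{'title': ..., 'keywords': set_of_keywords}, ...]
    """
    def sort_key(item: dict) -> tuple[int, int]:
        kws = item["keywords"]
        return (1 if mood_kw in kws else 0, len(situation_kws & kws))
    return sorted(items, key=sort_key, reverse=True)
```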
As another example, the server 16 may rank media content items based on the ratings the media content items received from other users who had been assigned the same set of keywords. As described below, the server 16 may receive a rating from users following the viewing of a media content item. The server 16 may store the rating, along with the keywords assigned to the user and the identity of the media content item which was viewed. In this way, the server 16 can recommend media content items which received high ratings from other users when they were assigned the same set of keywords.
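A minimal sketch of this rating lookup, assuming the server keeps rows of (keywords at viewing time, item, rating); the storage layout is an assumption.

```python
from statistics import mean

def keyword_set_rating(item_id: str, user_kws: frozenset,
                       history: list[tuple[frozenset, str, int]]) -> float | None:
    """Average rating an item received from users who held the same
    keyword set when they watched it; None if no such ratings exist."""
    matching = [rating for kws, item, rating in history
                if item == item_id and kws == user_kws]
    return mean(matching) if matching else None
```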
Additional or alternative criteria may be used, together with the keyword data, to select media content to recommend or provide to the user. For example, when applying ratings to combinations of keywords and media content, the server 16 may only consider other users identified as friends of the user, or may only consider other users that viewed the media content item within a predetermined period of time of the current time, etc.
Upon determining a ranking of media content items to recommend or provide to the user, the server 16 may present a list of the media content items to the user. The media content items with the highest ranking may appear at the top of the list, and media content items with lower rankings may appear lower in the list. The server 16 may transmit the list to the user via, e.g., the user device 12 or the media device 13. The user may select a media content item for viewing, and send a request for the media content item to the server 16. The server 16 may, e.g., stream the media content item to the media device 13.
The server 16 may further present a list of media content viewed by friends while they had been assigned the same set of keywords, or, e.g., at least one of the same keywords. The list may be arranged in chronological order, and indicate the media content viewed by the friend, together with the name of the friend and, if available, the rating provided by the friend.
Alternatively, the server 16 may select a media content item, for example the media content item at the top of the list, and present the media content item to the user via a computing device such as the user device 12 or the media device 13.
In addition to recommending and/or providing media content based on the keywords assigned to the user, the server 16 may also identify other users that currently or recently (e.g., within the last week) have been assigned the same set of keywords. The server 16 may, e.g., prioritize friends of the user. In this manner, the user may contact another user with the same set of keywords and ask, e.g., for a recommendation for media content, or otherwise strike up a conversation.
Ranking a Media Content Item
As described above, during, or after providing the media content item to the user, the server 16 may receive a rating of the media content item from the user. For example, the server 16 may send a request, via the user device 12, or via the media device 13, for the user to rate the media content item. The rating may be, for example, a numerical value between zero and five, with five being the highest rating and zero being the lowest rating. Based on the request, the user may provide the rating, using for example, the user interface 36 on the user device 12, or the user interface 46 on the media device 13.
The server 16 may store the rating received from the user, along with the identity of the media content item and the keywords assigned to the user at the time the media content item was provided. In this manner, the server 16 can develop statistical data indicating the responses of users with a particular set of keywords to a particular media content item.
Additionally, the user may create new keywords and associate the new keywords with the media content item. For example, in addition to assigning a rating of five out of five to a media content item, the user may describe the media content with the keywords “life-changing”, “drama”, “thriller” and “cerebral”. In the case that the keyword “life-changing” had not already been assigned to the media content item, the server 16 may add this keyword. The server 16 may further associate the watchwords associated with the user, e.g., at the time of the rating, with the keyword “life-changing”. The list of watchwords associated with the keyword “life-changing” will then grow by association, as other users are assigned the keyword “life-changing” or otherwise participate in the conversation around the media content item.
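A sketch of how such keyword and watchword associations might accumulate, using in-memory dictionaries as stand-ins for whatever store the server actually uses; the data shapes are assumptions.

```python
item_keywords: dict[str, set[str]] = {}
keyword_watchwords: dict[str, set[str]] = {}

def apply_feedback(item_id: str, new_keywords: set[str],
                   user_watchwords: set[str]) -> None:
    """Add user-created keywords to the item and link the user's
    watchwords to each of those keywords."""
    item_keywords.setdefault(item_id, set()).update(new_keywords)
    for kw in new_keywords:
        keyword_watchwords.setdefault(kw, set()).update(user_watchwords)

apply_feedback("movie-42", {"life-changing"}, {"promotion", "marathon"})
print(keyword_watchwords["life-changing"])  # {'promotion', 'marathon'}
```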
Example Process
FIGS. 4A and 4B are a diagram of an exemplary process 400 for providing media content to a user based on a user mental state. The process 400 begins in a block 405.
In the block 405, a computer 32 in the user device 12 determines if the computer 32 is authorized to collect data related to the user. For example, the computer 32 may have previously received and stored authorization from the user. The user may have, e.g., authorized the computer 32 to collect all available data. Alternatively, the user may have restricted the types of data which may be collected (only email data, only voice data, all types of data except voice data, etc.), or the times and/or situations when data may be collected (at particular times of the day, when the user is watching media content, when the user is at home, etc.). The authorization may have been input by the user and stored by the user device 12.
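A sketch of the kind of stored authorization record the computer 32 might consult; the field names and the hour-range representation are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class CollectionAuthorization:
    """Stored user authorization consulted before collecting data."""
    allowed_types: set[str] = field(default_factory=set)  # e.g. {"voice", "visual"}
    allowed_hours: tuple[int, int] | None = None          # e.g. (8, 22), local time

    def permits(self, data_type: str, hour: int) -> bool:
        if data_type not in self.allowed_types:
            return False
        if self.allowed_hours is not None:
            start, end = self.allowed_hours
            if not (start <= hour < end):
                return False
        return True

auth = CollectionAuthorization(allowed_types={"voice", "visual"},
                               allowed_hours=(8, 22))
print(auth.permits("voice", hour=20))  # True
print(auth.permits("email", hour=20))  # False
```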
The user device 12 may query the stored data from time to time to determine authorization. In the case that the user device 12 is not authorized to collect data, the process continues in the block 405. In the case that the user device 12 has received authorization from the user, the process 400 continues in a block 410.
In the block 410, as described above, the computer 32 of the user device 12 collects data related to the user. The computer 32 may collect data such as audio data, visual data and biometric data from the data collectors 30. Additionally, the computer 32 may collect data related to the user from other sources. The data sources may include, e.g., communications (emails, texts, conversations, etc.), documents (tax returns, income statements, etc.), global positioning system (GPS) data, a calendar or other scheduling application (past, present and future appointments, events, etc.), internet browsing history, mobile purchases, input provided by the user, etc.
The computer 32 may provide the collected data to another computing device such as the media server 16.
Additionally, other computing devices, such as the media device 13, may collect data. For example, the computer 42 of the media device 13 may collect data related to the user via the data collectors 40. The computer 42 may provide the collected data, e.g., to the server 16.
Upon receipt of the data by the server 16, the process 400 continues in a block 415.
In the block 415, the server 16 (or a computer communicatively coupled to the server 16, such as the computer 32) may sort the data into short term data and long term data. Short term data is data that is determined not to have a long term impact on the mood or situation of the user. For example, data indicating that the user followed a regular routine and met a colleague for lunch may be considered short term data which does not have long term significance. Data indicating that the user is expecting a child may be considered data that has long term significance. The short term data may be stored for a short period of time, e.g., 24 hours. The long term data may be added to long term data storage, where it may be stored, for example, indefinitely.
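A minimal sketch of this short-term/long-term split; which events count as long term, and the 24-hour retention window, are illustrative assumptions.

```python
import time

# Which events are "long term" is an assumption for the example.
LONG_TERM_EVENTS = {"expecting_child", "new_job", "moved_home"}

def store(record: dict, short_store: list, long_store: list) -> None:
    """Route a record ({'t': unix_seconds, 'event': ...}) to the right store."""
    target = long_store if record["event"] in LONG_TERM_EVENTS else short_store
    target.append(record)

def purge_short_term(short_store: list, max_age_s: int = 24 * 3600) -> list:
    """Drop short term records older than the retention window."""
    cutoff = time.time() - max_age_s
    return [r for r in short_store if r["t"] >= cutoff]
```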
Additionally or alternatively, as described above, a time stamp may be associated with some or all of the stored data. Based on the type of data, the weighting of the data when determining the mood and the situation of the user may be adjusted as time passes. After storing the data, the process 400 continues in a block 420.
In the block 420, the server 16 determines if a trigger event has occurred to provide keywords to the user. As described above, the server 16 may be programmed to assign or update keywords related to the user every evening at a particular time, such as 7 pm.
Additionally or alternatively, the server 16 may receive a request from the user for updated keywords. For example, the user may input a request via the user device 12 for updated keywords, in preparation for selecting, e.g., a movie to watch.
As another example, the server 16 may receive an indication, for example, from the media device 13, that the user has turned the media device 13 on, and infer that the user is planning to watch television. The server 16 may recognize the turning on of the media device 13 as a trigger to provide keywords to the user.
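The trigger check of block 420 might be sketched as follows, covering the three triggers discussed above; the event representation and the encoding of the 7 pm schedule are assumptions.

```python
import datetime

def is_trigger(event_type: str | None, now: datetime.datetime) -> bool:
    """Recognize a trigger: an explicit user request, the media device
    being turned on, or the scheduled daily update (assumed 7 pm)."""
    if event_type in ("user_request", "device_power_on"):
        return True
    return (now.hour, now.minute) == (19, 0)
```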
In the case that the server 16 recognizes a trigger event, the process 400 continues in a block 425. In the case that the server 16 does not recognize a trigger event, the process 400 continues in the block 405.
In the block 425, the server 16 assigns, as described above, one or more situation and/or mood keywords to the user. The server 16 may select situation and mood keywords from sets of predetermined situation and mood keywords, based on data related to the user. In some cases, as described above, when, for example, the selected keywords associated with the user indicate strong feelings, complementary mood keywords may be exchanged for the originally selected keywords. The process 400 continues in a block 430.
In the block 430, the server 16 may provide the assigned keywords to the user. For example, the server 16 may transmit the keywords to the user device 12 computer 32, and instruct the computer 32 to display the keywords on the user interface 36. As another example, the server 16 may transmit the keywords to the media device 13 computer 42 and instruct the computer 42 to display the keywords on the user interface 46. The process 400 continues in a block 435.
In the block 435, the server 16 determines whether the user has requested a recommended list of media content. For example, the user may, in response to receiving the assigned keywords, request from the server 16, via the user device 12 computer 32, a list of recommended content based on the keywords assigned to the user. Alternatively, the server 16 may receive, for example, a request for an electronic programming guide (EPG) from the media device 13. In the case that the server 16 receives (or otherwise identifies) a request for a recommended list of media content, the process 400 continues in a block 440. Otherwise, the process 400 continues in the block 405.
In the block 440, the server 16 provides a list of recommended media content to the user via, e.g., the user device 12 computer 32, or the media device 13 computer 42. The media content may be ranked based on the keywords assigned to the user and the keywords assigned to each media content item, as described above. The process 400 continues in a block 445.
In the block 445, the server 16 receives a selection for a media content item from the user. The selection may be received, for example, via the user interface 36 of the user device 12, or the user interface 46 of the media device 13. The process 400 continues in a block 450.
In the block 450, the server 16 provides the media content to the user. For example, the server 16 may stream the media content to the media device 13 computer 42. The process 400 continues in a block 455.
In the block 455, the server 16 may request feedback from the user regarding the media content. For example, during, or upon finishing the streaming of the media content item, the server 16 may send a request to the user, via the user device 12 computer 32, requesting that the user rate the media content item. The process 400 continues in a block 460.
In the block 460, the server 16 determines whether the server 16 has received feedback from the user. In the case that the server 16 has received feedback, the process 400 continues in a block 465. In the case that, after waiting a predetermined time from the request for feedback, the server 16 does not receive feedback, the process 400 continues in the block 405. The predetermined time may be, e.g., 10 minutes.
In the block 465, the server 16 updates metadata related to the media content item. For example, the metadata may include a rating of the media content item based on other users that had the same assigned keywords when viewing the media content item. The server 16 may update the keyword specific rating to take into account the rating from the user that provided the feedback. As another example, the metadata may include a rating of the media content item based on other users that had the same mood keyword as the user and are indicated to be friends of the user.
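A sketch of such a metadata update, assuming a per-keyword-set running average is kept with each media content item; the storage shape is an assumption.

```python
def update_rating(meta: dict, user_kws: frozenset, rating: int) -> None:
    """Fold a new rating into the running average kept per keyword set.

    meta: {'ratings': {frozenset_of_keywords: {'sum': int, 'count': int}}}
    """
    slot = meta["ratings"].setdefault(user_kws, {"sum": 0, "count": 0})
    slot["sum"] += rating
    slot["count"] += 1

def keyword_set_average(meta: dict, user_kws: frozenset) -> float | None:
    slot = meta["ratings"].get(user_kws)
    return slot["sum"] / slot["count"] if slot else None
```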
Upon updating the metadata associated with the media content item, the process 400 continues in a block 470.
In the block 470, the server 16 determines whether the process 400 should continue. For example, the server 16 may be programmed to continue to receive data related to the user from the user device 12 on an on-going basis. In this case, the process 400 may continue in the block 405. Additionally or alternatively, the server 16 may ask the user, e.g., via the user device 12, to confirm that the process 400 should continue. In the case that the server 16 receives confirmation, the process 400 may continue in the block 405. Otherwise, the process 400 may end. The server 16 may, for example, send an instruction to the user device 12 computer 32, to discontinue collecting data, or, may discontinue receiving data from the user device 12. In this case, the process 400 ends.
The descriptions of operations performed by the one or more user devices 12, one or more media devices 13, and the media server 16 are exemplary and non-limiting. The one or more user devices 12, one or more media devices 13, and media server 16 are communicatively coupled computing devices. Accordingly, computing operations may be distributed among them. Operations such as collecting data, selecting data to be stored, assigning keywords to a user, and comparing user keywords to media content keywords may be performed in any one of the computing devices, or distributed over any combination of the computing devices.
CONCLUSION
As used herein, the adverb “substantially” means that a shape, structure, measurement, quantity, time, etc. may deviate from an exact described geometry, distance, measurement, quantity, time, etc., because of imperfections in materials, machining, manufacturing, etc.
The term “exemplary” is used herein in the sense of signifying an example, e.g., a reference to an “exemplary widget” should be read as simply referring to an example of a widget.
Networked devices such as those discussed herein generally each include instructions executable by one or more networked devices such as those identified above, for carrying out the blocks or steps of the processes described above. For example, the process blocks discussed above may be embodied as computer-executable instructions.
Computer-executable instructions may be compiled or interpreted from computer programs created using a variety of programming languages and/or technologies, including, without limitation, and either alone or in combination, Java™, C, C++, Visual Basic, JavaScript, Perl, HTML, etc. In general, a processor (e.g., a microprocessor) receives instructions, e.g., from a memory, a computer-readable medium, etc., and executes these instructions, thereby performing one or more processes, including one or more of the processes described herein. Such instructions and other data may be stored and transmitted using a variety of computer-readable media. A file in a networked device is generally a collection of data stored on a computer-readable medium, such as a storage medium, a random access memory, etc.
A computer-readable medium includes any medium that participates in providing data (e.g., instructions), which may be read by a computer. Such a medium may take many forms, including, but not limited to, non-volatile media, volatile media, etc. Non-volatile media include, for example, optical or magnetic disks and other persistent memory. Volatile media include dynamic random access memory (DRAM), which typically constitutes a main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
In the drawings, the same reference numbers indicate the same elements. Further, some or all of these elements could be changed. With regard to the media, processes, systems, methods, etc. described herein, it should be understood that, although the steps of such processes, etc. have been described as occurring according to a certain ordered sequence, such processes could be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps could be performed simultaneously, that other steps could be added, or that certain steps described herein could be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain embodiments, and should in no way be construed so as to limit the claimed invention.
Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent to those of skill in the art upon reading the above description. The scope of the invention should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the arts discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In sum, it should be understood that the invention is capable of modification and variation and is limited only by the following claims.
All terms used in the claims are intended to be given their plain and ordinary meanings as understood by those skilled in the art unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.

Claims (19)

The invention claimed is:
1. A system comprising:
a computing device including a processor and a memory, the memory storing instructions executable by the processor such that the processor is programmed to:
receive at least one user communication;
extract, from the at least one user communication, a first set of watchwords;
identify one or more matches between the first set of watchwords, and a second set of watchwords, each of the second watchwords assigned to a keyword in a set of stored keywords;
determine a ranking for the stored keywords based on a number of matches associated with the respective keywords;
assign one or more stored keywords to the user based on the determined ranking;
identify one or more keywords related to a user physical condition as a first type of keyword;
replace, prior to providing the media content to the user, the one or more keywords related to the user physical condition identified as a first type of keyword with one or more predetermined complementary keywords identified as a second type of keyword;
wherein keywords are identified based at least in part on an analysis of data from at least one of a voice communication, an email, a text message, an event on a user calendar, user transaction records, and user on-line browsing history;
wherein the first type of keyword is associated with a negative mental state and the second type of keyword is associated with a positive mental state; and
provide media content based at least in part on the one or more keywords assigned to the user.
2. The system of claim 1, wherein the mental state data is received from a second computing device associated with the user.
3. The system of claim 2, wherein the second computing device includes one of a mobile telephone, a tablet and a smart wearable device.
4. The system of claim 1, wherein the data identifying the user mental state further includes user location data.
5. The system of claim 1, wherein the data identifying the user mental state further includes other people proximate to the user.
6. The system of claim 1, wherein the data identifying the user mental state further includes user demographic data.
7. The system of claim 1, wherein the processor is further programmed to:
receive metadata including keywords related respectively to one or more items of media content available to the user; and
compare the one or more keywords assigned to the user with the keywords related respectively to the one or more items of media content, wherein providing the media content to the user is based at least in part on the comparison.
8. The system of claim 7, wherein the processor is further programmed to:
recommend, based on the comparison, one or more media content items to the user; and
receive an input from the user selecting one of the one or more recommended media content items, wherein, providing the media content is based on the selection.
9. The system of claim 8, wherein assigning the at least one keyword related to the mental state of the user to the user is based at least in part on an analysis of the voice of the user.
10. The system of claim 8, wherein assigning the at least one keyword related to the mental state of the user to the user is based at least in part on an analysis of the body language of the user.
11. The system of claim 8, wherein the processor is further programmed to:
ask the user, via an interface, a question;
receive an answer via an audio sensor; and
associate at least one keyword related to the mental state of the user with the user based at least in part on a response to the question.
12. The system of claim 1, wherein the mental state data includes at least one of audio data related to the user, visual data related to the user and biometric data related to the user.
13. The system of claim 1, wherein at least one of the one or more keywords assigned to the user is based at least in part on the user communication, and at least one of the keywords associated with the user is based at least in part on the user physical condition.
14. The system of claim 1, wherein the processor is further programmed to:
receive a numerical rating of the media content item from the user;
store the numerical rating, together with the keywords associated with the user; and
generate a ranking of the media content item relative to other media content items, taking into account the numerical rating together with the keywords associated with the user.
15. The system of claim 1, wherein the processor is further programmed to:
receive a keyword attributed by the user to the media content item; and
include the keyword in metadata associated with the media content item.
16. The system of claim 1, further comprising a second computing device associated with the user, the second computing device including a second processor and a second memory, the second memory storing instructions executable by the second processor such that the second processor is programmed to:
collect at least a portion of the data identifying the user mental state; and
provide the portion of the data to the processor.
17. The system of claim 1, wherein the second processor is further programmed to:
receive a recommendation for media content items from the processor;
display the recommendation to the user;
receive an input from the user selecting media content; and
transmit the input to the first processor.
18. A method comprising:
receiving, by a processor, at least one user communication;
extracting, from the at least one user communication, a first set of watchwords;
identifying one or more matches between the first set of watchwords, and a second set of watchwords, each of the second watchwords assigned to a keyword in a set of stored keywords;
determining a ranking for the stored keywords based on a number of matches associated with the respective keywords;
assigning one or more stored keywords to the user based on the determined ranking;
replacing, prior to providing the media content to the user, the one or more keywords related to the user physical condition identified as a first type of keyword with one or more predetermined complementary keywords identified as a second type of keyword;
wherein keywords are identified based at least in part on an analysis of data from at least one of a voice communication, an email, a text message, an event on a user calendar, user transaction records, and user on-line browsing history;
wherein the first type of keyword is associated with a negative mental state and the second type of keyword is associated with a positive mental state; and
providing media content to the user based at least in part on the keywords assigned to the user.
19. The method of claim 18, further comprising:
receiving data identifying a user mental state; and
associating at least one keyword with the user based at least in part on an analysis of data from at least one of a voice communication, an email, a text message, an event on a user calendar, user transaction records, and user on-line browsing history.
US15/008,543 2016-01-28 2016-01-28 Providing media content based on user state detection Active 2036-10-03 US10268689B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/008,543 US10268689B2 (en) 2016-01-28 2016-01-28 Providing media content based on user state detection
US16/296,970 US10719544B2 (en) 2016-01-28 2019-03-08 Providing media content based on user state detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/008,543 US10268689B2 (en) 2016-01-28 2016-01-28 Providing media content based on user state detection

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/296,970 Continuation US10719544B2 (en) 2016-01-28 2019-03-08 Providing media content based on user state detection

Publications (2)

Publication Number Publication Date
US20170223092A1 US20170223092A1 (en) 2017-08-03
US10268689B2 true US10268689B2 (en) 2019-04-23

Family

ID=59387319

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/008,543 Active 2036-10-03 US10268689B2 (en) 2016-01-28 2016-01-28 Providing media content based on user state detection
US16/296,970 Active US10719544B2 (en) 2016-01-28 2019-03-08 Providing media content based on user state detection

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/296,970 Active US10719544B2 (en) 2016-01-28 2019-03-08 Providing media content based on user state detection

Country Status (1)

Country Link
US (2) US10268689B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11942194B2 (en) 2018-06-19 2024-03-26 Ellipsis Health, Inc. Systems and methods for mental health assessment

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11270699B2 (en) * 2011-04-22 2022-03-08 Emerging Automotive, Llc Methods and vehicles for capturing emotion of a human driver and customizing vehicle response
US9493130B2 (en) * 2011-04-22 2016-11-15 Angel A. Penilla Methods and systems for communicating content to connected vehicle users based detected tone/mood in voice input
US10838967B2 (en) * 2017-06-08 2020-11-17 Microsoft Technology Licensing, Llc Emotional intelligence for a conversational chatbot
JP2021529382A (en) 2018-06-19 2021-10-28 エリプシス・ヘルス・インコーポレイテッド Systems and methods for mental health assessment
CN109448735B (en) * 2018-12-21 2022-05-20 深圳创维-Rgb电子有限公司 Method and device for adjusting video parameters based on voiceprint recognition and read storage medium
US11138981B2 (en) * 2019-08-21 2021-10-05 i2x GmbH System and methods for monitoring vocal parameters
US10958758B1 (en) 2019-11-22 2021-03-23 International Business Machines Corporation Using data analytics for consumer-focused autonomous data delivery in telecommunications networks
JP2021193511A (en) * 2020-06-08 2021-12-23 富士フイルムビジネスイノベーション株式会社 Information processing device and program
EP3996030A1 (en) 2020-11-06 2022-05-11 Koninklijke Philips N.V. User interface system for selecting learning content
US20220223140A1 (en) 2021-01-08 2022-07-14 Samsung Electronics Co., Ltd. Structural assembly for digital human interactive display

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8744237B2 (en) 2011-06-20 2014-06-03 Microsoft Corporation Providing video presentation commentary

Patent Citations (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4870579A (en) 1987-10-01 1989-09-26 Neonics, Inc. System and method of predicting subjective reactions
US6321221B1 (en) 1998-07-17 2001-11-20 Net Perceptions, Inc. System, method and article of manufacture for increasing the user value of recommendations
US6774926B1 (en) 1999-09-03 2004-08-10 United Video Properties, Inc. Personal television channel system
US20060159109A1 (en) 2000-09-07 2006-07-20 Sonic Solutions Methods and systems for use in network management of content
US20030063222A1 (en) 2001-10-03 2003-04-03 Sony Corporation System and method for establishing TV setting based on viewer mood
US8561095B2 (en) 2001-11-13 2013-10-15 Koninklijke Philips N.V. Affective television monitoring and control in response to physiological data
US6978470B2 (en) 2001-12-26 2005-12-20 Bellsouth Intellectual Property Corporation System and method for inserting advertising content in broadcast programming
US20040001616A1 (en) 2002-06-27 2004-01-01 Srinivas Gutta Measurement of content ratings through vision and speech recognition
US7958525B2 (en) 2002-12-11 2011-06-07 Broadcom Corporation Demand broadcast channels and channel programming based on user viewing history, profiling, and requests
US20050132401A1 (en) 2003-12-10 2005-06-16 Gilles Boccon-Gibod Method and apparatus for exchanging preferences for replaying a program on a personal video recorder
US20050144064A1 (en) 2003-12-19 2005-06-30 Palo Alto Research Center Incorporated Keyword advertisement management
US20090144075A1 (en) 2004-11-04 2009-06-04 Manyworlds Inc. Adaptive Social Network Management
US20070288987A1 (en) 2006-04-18 2007-12-13 Samsung Electronics Co., Ltd. Device and method for editing channel list of digital broadcasting service
US20080046917A1 (en) 2006-07-31 2008-02-21 Microsoft Corporation Associating Advertisements with On-Demand Media Content
US8768744B2 (en) 2007-02-02 2014-07-01 Motorola Mobility Llc Method and apparatus for automated user review of media content in a mobile communication device
US20100324992A1 (en) 2007-03-02 2010-12-23 Birch James R Dynamically reactive response and specific sequencing of targeted advertising and content delivery system
US8973022B2 (en) 2007-03-07 2015-03-03 The Nielsen Company (Us), Llc Method and system for using coherence of biological responses as a measure of performance of a media
US8782681B2 (en) 2007-03-08 2014-07-15 The Nielsen Company (Us), Llc Method and system for rating media and events in media based on physiological data
US20090030792A1 (en) 2007-07-24 2009-01-29 Amit Khivesara Content recommendation service
US8327395B2 (en) 2007-10-02 2012-12-04 The Nielsen Company (Us), Llc System providing actionable insights based on physiological responses from viewers of media
US8332883B2 (en) 2007-10-02 2012-12-11 The Nielsen Company (Us), Llc Providing actionable insights based on physiological responses from viewers of media
US20090234727A1 (en) 2008-03-12 2009-09-17 William Petty System and method for determining relevance ratings for keywords and matching users with content, advertising, and other users based on keyword ratings
US20110238495A1 (en) 2008-03-24 2011-09-29 Min Soo Kang Keyword-advertisement method using meta-information related to digital contents and system thereof
US20140108142A1 (en) 2008-04-25 2014-04-17 Cisco Technology, Inc. Advertisement campaign system using socially collaborative filtering
US8682666B2 (en) 2008-06-17 2014-03-25 Voicesense Ltd. Speaker characterization through speech analysis
US8195460B2 (en) 2008-06-17 2012-06-05 Voicesense Ltd. Speaker characterization through speech analysis
US20100114937A1 (en) * 2008-10-17 2010-05-06 Louis Hawthorne System and method for content customization based on user's psycho-spiritual map of profile
US20100138416A1 (en) * 2008-12-02 2010-06-03 Palo Alto Research Center Incorporated Context and activity-driven content delivery and interaction
US8654952B2 (en) 2009-08-20 2014-02-18 T-Mobile Usa, Inc. Shareable applications on telecommunications devices
US20160034970A1 (en) 2009-10-13 2016-02-04 Luma, Llc User-generated quick recommendations in a media recommendation system
US8849649B2 (en) 2009-12-24 2014-09-30 Metavana, Inc. System and method for determining sentiment expressed in documents
US8589968B2 (en) 2009-12-31 2013-11-19 Motorola Mobility Llc Systems and methods providing content on a display based upon facial recognition of a viewer
US8180804B1 (en) 2010-04-19 2012-05-15 Facebook, Inc. Dynamically generating recommendations based on social graph information
US20110282947A1 (en) 2010-05-17 2011-11-17 Ifan Media Corporation Systems and methods for providing a social networking experience for a user
US20110320471A1 (en) 2010-06-24 2011-12-29 Hitachi Consumer Electronics Co., Ltd. Movie Recommendation System and Movie Recommendation Method
US20120005224A1 (en) 2010-07-01 2012-01-05 Spencer Greg Ahrens Facilitating Interaction Among Users of a Social Network
US8849199B2 (en) 2010-11-30 2014-09-30 Cox Communications, Inc. Systems and methods for customizing broadband content based upon passive presence detection of users
US20120266191A1 (en) 2010-12-17 2012-10-18 SONY ERICSSON MOBILE COMMUNICATIONS AB (A company of Sweden) System and method to provide messages adaptive to a crowd profile
US9026476B2 (en) 2011-05-09 2015-05-05 Anurag Bist System and method for personalized media rating and related emotional profile analytics
US20120311618A1 (en) 2011-06-06 2012-12-06 Comcast Cable Communications, Llc Asynchronous interaction at specific points in content
US9679570B1 (en) 2011-09-23 2017-06-13 Amazon Technologies, Inc. Keyword determinations from voice data
US9009024B2 (en) 2011-10-24 2015-04-14 Hewlett-Packard Development Company, L.P. Performing sentiment analysis
US20130145385A1 (en) 2011-12-02 2013-06-06 Microsoft Corporation Context-based ratings and recommendations for media
US20150112918A1 (en) 2012-03-17 2015-04-23 Beijing Yidian Wangju Technology Co., Ltd. Method and system for recommending content to a user
US20130297638A1 (en) 2012-05-07 2013-11-07 Pixability, Inc. Methods and systems for identifying distribution opportunities
US20140036022A1 (en) 2012-05-31 2014-02-06 Volio, Inc. Providing a conversational video experience
US20150294221A1 (en) 2012-07-13 2015-10-15 Telefonica, S.A. A method and a system for generating context-based content recommendations to users
US9454519B1 (en) 2012-08-15 2016-09-27 Google Inc. Promotion and demotion of posts in social networking services
US20140067953A1 (en) 2012-08-29 2014-03-06 Wetpaint.Com, Inc. Personalization based upon social value in online media
US20140089801A1 (en) 2012-09-21 2014-03-27 Comment Bubble, Inc. Timestamped commentary system for video content
US20140088952A1 (en) 2012-09-25 2014-03-27 United Video Properties, Inc. Systems and methods for automatic program recommendations based on user interactions
US9306989B1 (en) 2012-10-16 2016-04-05 Google Inc. Linking social media and broadcast media
US20140173653A1 (en) 2012-12-19 2014-06-19 Ebay Inc. Method and System for Targeted Commerce in Network Broadcasting
US20140188997A1 (en) 2012-12-31 2014-07-03 Henry Will Schneiderman Creating and Sharing Inline Media Commentary Within a Network
US20140195328A1 (en) 2013-01-04 2014-07-10 Ron Ferens Adaptive embedded advertisement via contextual analysis and perceptual computing
US20140201125A1 (en) 2013-01-16 2014-07-17 Shahram Moeinifar Conversation management systems
US20140244636A1 (en) 2013-02-28 2014-08-28 Echostar Technologies L.L.C. Dynamic media content
US20140279751A1 (en) 2013-03-13 2014-09-18 Deja.io, Inc. Aggregation and analysis of media content information
US20140337427A1 (en) 2013-05-07 2014-11-13 DeNA Co., Ltd. System for recommending electronic contents
US20140344039A1 (en) 2013-05-17 2014-11-20 Virtual Agora, Ltd Network Based Discussion Forum System and Method with Means for Improving Post Positioning and Debater Status by Trading Arguments, Purchasing, Acquiring and Trading Points
US20140365349A1 (en) 2013-06-05 2014-12-11 Brabbletv.Com Llc System and Method for Media-Centric and Monetizable Social Networking
US20150020086A1 (en) 2013-07-11 2015-01-15 Samsung Electronics Co., Ltd. Systems and methods for obtaining user feedback to media content
US20150026706A1 (en) 2013-07-18 2015-01-22 Comcast Cable Communications, Llc Content rating
US20150039549A1 (en) 2013-07-30 2015-02-05 Reccosend LLC System and method for computerized recommendation delivery, tracking, and prioritization
CN104038836A (en) 2014-06-03 2014-09-10 四川长虹电器股份有限公司 Television program intelligent pushing method
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9832619B2 (en) 2014-07-31 2017-11-28 Samsung Electronics Co., Ltd. Automated generation of recommended response messages
US20160147767A1 (en) 2014-11-24 2016-05-26 RCRDCLUB Corporation Dynamic feedback in a recommendation system
US9712587B1 (en) * 2014-12-01 2017-07-18 Google Inc. Identifying and rendering content relevant to a user's current mental state and context
US20160239547A1 (en) 2015-02-17 2016-08-18 Samsung Electronics Co., Ltd. Method and apparatus for recommending content based on activities of a plurality of users
US20160259797A1 (en) 2015-03-05 2016-09-08 Google Inc. Personalized content sharing
US20160277787A1 (en) 2015-03-16 2016-09-22 Sony Computer Entertainment Inc. Information processing apparatus, video recording reservation supporting method, and computer program
US20170048184A1 (en) 2015-08-10 2017-02-16 Google Inc. Privacy aligned and personalized social media content sharing recommendations
US20170134803A1 (en) 2015-11-11 2017-05-11 At&T Intellectual Property I, Lp Method and apparatus for content adaptation based on audience monitoring
US20170169726A1 (en) 2015-12-09 2017-06-15 At&T Intellectual Property I, Lp Method and apparatus for managing feedback based on user monitoring
US20170322947A1 (en) 2016-05-03 2017-11-09 Echostar Technologies L.L.C. Providing media content based on media element preferences
US20170339467A1 (en) 2016-05-18 2017-11-23 Rovi Guides, Inc. Recommending media content based on quality of service at a location
US20170366861A1 (en) 2016-06-21 2017-12-21 Rovi Guides, Inc. Methods and systems for recommending to a first user media assets for inclusion in a playlist for a second user based on the second user's viewing activity
US20180040019A1 (en) 2016-08-03 2018-02-08 Facebook, Inc. Recommendation system to enhance online content creation

Non-Patent Citations (20)

* Cited by examiner, † Cited by third party
Title
Advisory Action dated Jan. 24, 2019 for Appl. No. 15/008,540.
Bublitz et al., "Using Statistical Data for Context Sensitive Pervasive Advertising," IEEE, 2014, pp. 41-44.
Final Office Action dated Feb. 21, 2019 for U.S. Appl. No. 15/145,060.
Final Office Action dated Feb. 21, 2019 for U.S. Appl. No. 15/389,730.
Final Office Action dated Feb. 8, 2019 for U.S. Appl. No. 15/389,718.
Final Office Action dated May 1, 2018 for U.S. Appl. No. 15/378,950 (58 pages).
Final Office Action dated Nov. 2, 2018 for U.S. Appl. No. 15/008,540 (68 pages).
Hong et al., "A Comparative Study of Video Recommender Systems in Big Data Era," IEEE, 2016, pp. 125-127.
Kompan et al., "Context-based Satisfaction Modelling for Personalized Recommendations," 8th International Workshop on Semantic and Social Media Adaptation and Personalization, IEEE, 2013, pp. 33-38.
Mao et al., "Multirelational Social Recommendations via Multigraph Ranking," IEEE, 2016, pp. 1-13.
Non-Final Office Action dated Aug. 24, 2018 for U.S. Appl. No. 15/389,730 (60 pages).
Non-Final Office Action dated Feb. 8, 2017 in U.S. Appl. No. 15/289,585 (16 pages).
Non-Final Office Action dated Jan. 2, 2019 for U.S. Appl. No. 15/389,694.
Non-Final Office Action dated Jul. 13, 2018 for U.S. Appl. No. 15/389,718 (57 pages).
Non-Final Office Action dated Jun. 13, 2018 for U.S. Appl. No. 15/145,060 (40 pages).
Non-Final Office Action dated Mar. 27, 2018 for U.S. Appl. No. 15/008,540 (34 pages).
Non-Final Office Action dated Nov. 17, 2017 for U.S. Appl. No. 15/378,950 (53 pages).
Notice of Allowance dated Aug. 26, 2016 in U.S. Appl. No. 14/802,842 (26 pages).
Notice of Allowance dated Jun. 6, 2017 in U.S. Appl. No. 15/289,585 (11 pages).
Sato et al., "Recommender System by Grasping Individual Preference and Influence from other users," 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM'13), ACM, 2013, pp. 1345-1351.

Also Published As

Publication number Publication date
US10719544B2 (en) 2020-07-21
US20190205329A1 (en) 2019-07-04
US20170223092A1 (en) 2017-08-03

Similar Documents

Publication Publication Date Title
US10719544B2 (en) Providing media content based on user state detection
US12135942B1 (en) Conversation facilitation system for mitigating loneliness
Thorson Attracting the news: Algorithms, platforms, and reframing incidental exposure
US11989223B2 (en) Providing media content based on media element preferences
CN108351870B (en) Computer speech recognition and semantic understanding from activity patterns
US8612866B2 (en) Information processing apparatus, information processing method, and information processing program
US9026476B2 (en) System and method for personalized media rating and related emotional profile analytics
US20180082313A1 (en) Systems and methods for prioritizing user reactions to content for response on a social-media platform
CN114556354A (en) Automatically determining and presenting personalized action items from an event
WO2018183019A1 (en) Distinguishing events of users for efficient service content distribution
US12120193B2 (en) Communications channels in media systems
CN111310019A (en) Information recommendation method, information processing method, system and equipment
US11631401B1 (en) Conversation system for detecting a dangerous mental or physical condition
US11483409B2 (en) Communications channels in media systems
US9489626B2 (en) Systems and methods for identifying and notifying users of electronic content based on biometric recognition
US9710567B1 (en) Automated content publication on a social media management platform
Wicks et al. Partisan media selective exposure during the 2012 presidential election
US20180184157A1 (en) Communications channels in media systems
US20240256620A1 (en) Systems and methods for processing subjective queries
JP6767808B2 (en) Viewing user log storage system, viewing user log storage server, and viewing user log storage method
KR102642589B1 (en) Personalized content recommendation system by diary analysis
WO2022251378A1 (en) Method of matching analytics and communication establishment
Yuan Predicting Mobile Interruptibility

Legal Events

Date Code Title Description
AS Assignment

Owner name: ECHOSTAR TECHNOLOGIES L.L.C., COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SUBRAMANIAN, PRAKASH;NEWELL, NICHOLAS BRANDON;SIGNING DATES FROM 20160125 TO 20160127;REEL/FRAME:037624/0685

AS Assignment

Owner name: DISH TECHNOLOGIES L.L.C., COLORADO

Free format text: CHANGE OF NAME;ASSIGNOR:ECHOSTAR TECHNOLOGIES L.L.C.;REEL/FRAME:048544/0071

Effective date: 20180202

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: U.S. BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT, MINNESOTA

Free format text: SECURITY INTEREST;ASSIGNORS:DISH BROADCASTING CORPORATION;DISH NETWORK L.L.C.;DISH TECHNOLOGIES L.L.C.;REEL/FRAME:058295/0293

Effective date: 20211126

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4