WO2022147146A4 - Devices, methods, and graphical user interfaces for interacting with three-dimensional environments - Google Patents
- Publication number
- WO2022147146A4 (PCT application PCT/US2021/065537)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- dimensional environment
- user interface
- displaying
- representation
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0485—Scrolling or panning
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/038—Indexing scheme relating to G06F3/038
- G06F2203/0381—Multimodal input, i.e. interface arrangements enabling the user to issue commands by simultaneous use of input devices of different nature, e.g. voice plus gesture on digitizer
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04804—Transparency, e.g. transparent or translucent windows
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A computer system displays a first view of a three-dimensional environment. While displaying the first view, the computer system detects a gaze input directed to a first position in the three-dimensional environment corresponding to a location of a user's hand in a physical environment in conjunction with detecting a movement of the user's hand in the physical environment that meets preset criteria. In response, the computer system displays a plurality of user interface objects at respective second positions that are away from the first position in the three-dimensional environment corresponding to the location of the user's hand in the physical environment, wherein a respective user interface object, when activated, causes display of a corresponding computer-generated experience in the three-dimensional environment.
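For orientation, the conditional flow described in the abstract can be summarized in pseudocode. The sketch below is purely illustrative; the type names, the "wrist_flick" gesture label, and the position offsets are assumptions of this summary, not terminology from the application.

```python
# Hypothetical sketch of the flow in the abstract: show launcher objects only
# when a gaze at the hand coincides with a hand movement that meets preset
# criteria. All names, types, and offsets are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class GazeInput:
    position: Vec3  # position in the three-dimensional environment

def handle_hand_movement(gaze: Optional[GazeInput], hand_position: Vec3,
                         gesture: str) -> List[Vec3]:
    """Return display positions for the user interface objects, or [] to
    forgo display (movement did not meet the first criteria)."""
    gaze_at_hand = gaze is not None and gaze.position == hand_position
    if gaze_at_hand and gesture == "wrist_flick":  # assumed "first criteria"
        return _positions_away_from(hand_position)
    return []

def _positions_away_from(origin: Vec3) -> List[Vec3]:
    # Second positions: offset away from the hand; offsets are made up.
    offsets = [(-0.3, 0.2, -0.5), (0.0, 0.2, -0.5), (0.3, 0.2, -0.5)]
    return [tuple(o + d for o, d in zip(origin, off)) for off in offsets]
```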
Claims
1. A method, including: at a computer system that is in communication with a first display generation component and one or more input devices: displaying, via the first display generation component, a first view of a three-dimensional environment; while displaying the first view of the three-dimensional environment, detecting a gaze input directed to a respective first position in the three-dimensional environment that corresponds to a location of a user’s hand in a physical environment; while displaying the first view of the three-dimensional environment, detecting movement of the user’s hand in the physical environment, including detecting at least an initial portion of the movement of the user’s hand while detecting the gaze input at the respective first position in the three-dimensional environment that corresponds to the location of the user’s hand; and in response to detecting the movement of the user’s hand in the physical environment: in accordance with a determination that the movement of the user’s hand in the physical environment meets first criteria, wherein the first criteria are met when the movement of the user’s hand corresponds to a first hand gesture, displaying a plurality of user interface objects at respective second positions in the three-dimensional environment that are away from the respective first position in the three-dimensional environment that corresponds to the location of the user’s hand in the physical environment, wherein a respective user interface object of the plurality of user interface objects, when activated, initiates a process for causing display of a corresponding computer-generated experience in the three-dimensional environment.
2. The method of claim 1, including: in response to detecting the movement of the user’s hand in the physical environment, in accordance with a determination that the movement of the user’s hand in the physical environment does not meet the first criteria, forgoing displaying the plurality of user interface objects at the respective second positions in the three-dimensional environment that are away from the respective first position that corresponds to the location of the user’s hand in the physical environment.
3. The method of any of claims 1-2, including: in response to detecting the gaze input directed to the respective first position in the three-dimensional environment that corresponds to the location of the user’s hand in the physical environment, displaying, via the first display generation component, a first user interface object at the respective first position in the three-dimensional environment that corresponds to the location of the user’s hand in the physical environment.
4. The method of claim 3, wherein detecting the movement of the user’s hand in the physical environment includes detecting at least the initial portion of the movement of the user’s hand while displaying the first user interface object at the respective first position in the three-dimensional environment that corresponds to the location of the user’s hand.
5. The method of any of claims 1-4, wherein the first criteria include a requirement that is met when the movement of the user’s hand includes a rotation of the user’s hand around a wrist connected to the user’s hand.
6. The method of any of claims 1-5, wherein displaying the plurality of user interface objects at the respective second positions in the three-dimensional environment includes displaying an animated transition that includes movement of a first user interface object that corresponds to the plurality of user interface objects from the respective first position that corresponds to the location of the user’s hand toward the respective second positions and transformation of the first user interface object into the plurality of user interface objects prior to their arrival at the respective second positions in the three-dimensional environment.
7. The method of any of claims 1-6, including: generating a first audio output in conjunction with displaying the plurality of user interface objects at the respective second positions in the three-dimensional environment.
8. The method of any of claims 1-7, wherein the respective second positions in the three-dimensional environment are within a predefined distance threshold from the respective first position in the three-dimensional environment.
9. The method of any of claims 1-8, including: while displaying the plurality of user interface objects in accordance with the determination that the movement of the user’s hand meets the first criteria, detecting first further movement of the user’s hand in the physical environment; and
in response to detecting the first further movement of the user’s hand in the physical environment: in accordance with a determination that the first further movement of the user’s hand in the physical environment meets second criteria, wherein the second criteria are met when the first further movement of the user’s hand in the physical environment corresponds to a second hand gesture different from the first hand gesture, ceasing display of the plurality of user interface objects at the respective second positions in the three-dimensional environment.
10. The method of claim 9, including: in response to detecting the first further movement of the user’s hand in the physical environment: in accordance with a determination that the first further movement of the user’s hand in the physical environment meets second criteria, displaying a first user interface object at a respective third position in the three-dimensional environment that corresponds to the location of the user’s hand in the physical environment.
11. The method of any of claims 3-10, including: in response to detecting at least an initial portion of the movement of the user’s hand in the physical environment: in accordance with a determination that the first criteria have not been met by the at least the initial portion of the movement of the user’s hand that has been detected, moving the first user interface object in the three-dimensional environment in accordance with the at least the initial portion of the movement of the user’s hand in the physical environment to maintain a respective spatial relationship between the first user interface object and a position that corresponds to a current location of the user’s hand; and in accordance with a determination that the first criteria have been met by the at least the initial portion of the movement of the user’s hand that has been detected, maintaining display of the plurality of user interface objects at the respective second positions in the three-dimensional environment, irrespective of further movement of the user’s hand after the first criteria have been met by the at least the initial portion of the movement of the user’s hand.
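Claim 11 distinguishes hand-anchored placement (before the first criteria are met) from world-anchored placement (after). A minimal sketch of that branch, with all names assumed for illustration:

```python
# Illustrative sketch of the branch in claim 11: before the first criteria are
# met, the first user interface object follows the hand; once they are met,
# the displayed objects stay world-anchored. Names are assumptions.
def updated_object_position(current_pos, hand_pos, hand_offset, criteria_met):
    if not criteria_met:
        # Maintain the spatial relationship to the current hand location.
        return tuple(h + o for h, o in zip(hand_pos, hand_offset))
    # Criteria met: keep the position irrespective of further hand movement.
    return current_pos
```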
12. The method of any of claims 3-11, including: while displaying, via the first display generation component, the first user interface object at the respective first position in the three-dimensional environment that corresponds
to the location of the user’s hand in the physical environment, detecting an input directed to the first user interface object; and in response to detecting the input directed to the first user interface object, in accordance with a determination that the input directed to the first user interface object meets third criteria and that a location of the input corresponds to the respective user interface object of the plurality of user interface objects, initiating a process for causing display of the corresponding computer-generated experience in the three-dimensional environment.
13. The method of any of claims 1-12, wherein displaying the plurality of the user interface objects at the respective second positions in the three-dimensional environment includes moving the plurality of user interface objects from the respective first position in the three-dimensional environment that corresponds to the location of the hand in the physical environment to the respective second positions in the three-dimensional environment.
14. The method of claim 13, wherein displaying the plurality of the user interface objects at the respective second positions in the three-dimensional environment includes expanding respective sizes of the plurality of user interface objects as the plurality of user interface objects move from the respective first position in the three-dimensional environment that corresponds to the location of the hand in the physical environment to the respective second positions in the three-dimensional environment.
15. The method of any of claims 1-14, including: while displaying the plurality of user interface objects at the respective second positions in the three-dimensional environment, detecting an event that meets preset criteria; and in response to detecting the event that meets the preset criteria, transitioning from displaying the first view of the three-dimensional environment to displaying a second view of the three-dimensional environment that is different from the first view of the three-dimensional environment, wherein displaying the second view of the three-dimensional environment includes maintaining display of the plurality of user interface objects at the respective second positions in the three-dimensional environment as a viewpoint of a currently displayed view of the three-dimensional environment changes.
16. The method of any of claims 1-14, including:
while displaying the plurality of user interface objects at the respective second positions in the three-dimensional environment, detecting an event that meets preset criteria; and in response to detecting the event that meets the preset criteria, transitioning from displaying the first view of the three-dimensional environment to displaying a second view of the three-dimensional environment that is different from the first view of the three-dimensional environment, wherein displaying the second view of the three-dimensional environment includes moving the plurality of user interface objects to respective third positions in the three-dimensional environment such that a spatial relationship between the plurality of user interface objects and a viewpoint corresponding to a currently displayed view of the three-dimensional environment is maintained.
17. The method of any of claims 15-16, including: in response to detecting the event that meets the preset criteria, in accordance with a determination that the event meets a preset threshold, ceasing to display the plurality of user interface objects in the three-dimensional environment.
18. The method of any of claims 1-17, including: while displaying the plurality of user interface objects at the respective second positions in the three-dimensional environment, detecting a user input that is directed to the user’s hand; and in response to detecting the user input that is directed to the user’s hand, in accordance with a determination that the user input corresponds to a request to interact with the plurality of user interface objects, performing a respective operation in the three-dimensional environment that changes or activates at least one of the plurality of user interface objects.
19. The method of claim 18, including: while displaying the plurality of user interface objects at the respective second positions in the three-dimensional environment, displaying a first user interface object at the respective first position in the three-dimensional environment that corresponds to the location of the user’s hand.
20. The method of any of claims 1-19, including: while displaying the plurality of user interface objects at the respective second positions in the three-dimensional environment, detecting a user input that corresponds to a
request to select a respective user interface object of the plurality of user interface objects; and in response to detecting the user input that corresponds to the request to select the respective user interface object, changing a size of the respective user interface object relative to one or more user interface objects of the plurality of user interface objects.
21. The method of any of claims 1-20, including: displaying a first user interface object that corresponds to the plurality of user interface objects at the respective first position in the three-dimensional environment that corresponds to the location of the user’s hand in the physical environment, both while the plurality of user interface objects are not displayed at the respective second positions in the three-dimensional environment and while the plurality of user interface objects are displayed at the respective second positions in the three-dimensional environment.
22. The method of claim 21, including: while displaying the first user interface object at the respective first position that corresponds to the location of the user’s hand, detecting a change in hand posture of the user’s hand that corresponds to a transition between an open posture and a closed posture of the user’s hand; and in response to detecting the change in hand posture that corresponds to the transition between the open posture and the closed posture of the user’s hand, changing a size of the first user interface object in accordance with the change in hand posture of the user’s hand.
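Claim 22 ties object size to hand posture. A minimal sketch, assuming an openness metric from 0 (closed) to 1 (open) and an arbitrary scale range; neither appears in the application:

```python
# A minimal mapping from hand openness to object size for claim 22; the
# openness metric (0 = closed, 1 = open) and scale range are assumptions.
def first_object_scale(openness: float, min_scale: float = 0.5,
                       max_scale: float = 1.0) -> float:
    openness = max(0.0, min(1.0, openness))
    return min_scale + (max_scale - min_scale) * openness
```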
23. A method, comprising: at a computer system that is in communication with a first display generation component and one or more input devices: displaying, via the first display generation component, a first view of a three-dimensional environment, including displaying at least a first user interface object at a first position in the three-dimensional environment and a second user interface object at a second position different from the first position in the three-dimensional environment; while displaying the first view of the three-dimensional environment via the first display generation component, detecting a first gaze input directed to a first region in the three-dimensional environment that corresponds to the first position in the three-dimensional environment; and
while detecting the first gaze input directed to the first region in the three-dimensional environment, detecting a first movement of a hand that meets first gesture criteria; and in response to detecting the first movement of the hand that meets the first gesture criteria while detecting the first gaze input directed to the first region in the three-dimensional environment: in accordance with a determination that the first movement of the hand that meets the first gesture criteria is detected after first gaze criteria have been met, wherein the first gaze criteria require that the first gaze input is held at the first region for at least a first preset amount of time in order for the first gaze criteria to be met, selecting the first user interface object between the first user interface object and the second user interface object; and in accordance with a determination that the first movement of the hand that meets the first gesture criteria is detected before the first gaze criteria have been met, forgoing selection of the first user interface object.
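The gating recited in claim 23 (the gesture selects only after the gaze has dwelled long enough) can be sketched as follows; the 0.5 s threshold and the region and label names are assumptions:

```python
# Sketch of the dwell gate in claim 23: the hand gesture selects the gazed
# object only if the gaze has already been held at its region for a preset
# time. The 0.5 s threshold and names are assumptions.
from typing import Optional

FIRST_PRESET_DWELL_S = 0.5

def on_first_gesture(gazed_region: str, gaze_hold_s: float) -> Optional[str]:
    if gazed_region == "first_object" and gaze_hold_s >= FIRST_PRESET_DWELL_S:
        return "select_first_object"
    return None  # gesture arrived before the gaze criteria were met
```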
24. The method of claim 23, wherein selecting the first user interface object includes performing a first operation corresponding to the first user interface object.
25. The method of claim 23, including: while the first user interface object remains selected, detecting a first user input that meets preset criteria for activating a respective one of the first user interface object and the second user interface object while the respective one of the first user interface object or the second user interface object is selected; and in response to detecting the first user input, performing a first operation corresponding to the first user interface object.
26. The method of any of the claims 23-25, including: displaying a position indicator at a respective position in the three-dimensional environment that corresponds to a current position of the first gaze input, wherein the respective position of the position indicator is updated in accordance with movement of the first gaze input before the first gaze input is detected at the first region in the three- dimensional environment as well as while the first gaze input is detected at the first region in the three-dimensional environment.
27. The method of any of the claims 23-26, wherein:
the first movement of the hand that meets the first gesture criteria includes pinching two or more fingers of the hand together and moving the hand while the two or more fingers are pinched together, and in response to detecting the movement of the hand while the two or more fingers are pinched together, in accordance with a determination that the first user interface object has been selected, scrolling content displayed within the first user interface object in accordance with the movement of the hand while the two or more fingers are pinched together.
28. The method of any of the claims 23-26, wherein: the first movement of the hand that meets the first gesture criteria includes pinching two or more fingers of the hand together and moving the hand while the two or more fingers are pinched together, and in response to detecting the movement of the hand while the two or more fingers are pinched together, in accordance with a determination that the first user interface object has been selected, moving the first user interface object in the three-dimensional environment in accordance with the movement of the hand while the two or more fingers of the hand are pinched together.
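Claims 27 and 28 apply the same pinch-and-drag movement to different operations depending on the selection state; a hypothetical dispatcher illustrating the distinction (all labels invented):

```python
# Claims 27 and 28 route the same pinch-and-drag input to different operations
# depending on the selected target; a hypothetical dispatcher.
def on_pinch_drag(selection: str, delta: tuple) -> str:
    if selection == "first_object_content":
        return f"scroll content by {delta}"   # claim 27: scroll within object
    if selection == "first_object":
        return f"move object by {delta}"      # claim 28: reposition object
    return "no-op"
```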
29. The method of any of the claims 23-28, wherein the first gesture criteria are capable of being met in response to the first movement of the hand irrespective of whether or not the first movement of the hand occurs in a region of a physical environment that corresponds to the first region of the three-dimensional environment.
30. The method of any of the claims 23-29, including: in response to detecting the first gaze input directed to the first region in the three-dimensional environment, changing a value of a first visual property of the first user interface object at the first position in the three-dimensional environment.
31. The method of any of the claims 23-29, including: in response to detecting the first gaze input directed to the first region in the three-dimensional environment: in accordance with a determination that the first gaze input meets the first gaze criteria, changing a value of a first visual property of the first user interface object at the first position in the three-dimensional environment; and
in accordance with a determination that the first gaze input does not meet the first gaze criteria, forgoing changing the value of the first visual property of the first user interface object in the three-dimensional environment.
32. The method of any of the claims 23-29, including: in response to detecting the first movement of the hand that meets the first gesture criteria while detecting the first gaze input directed to the first region in the three-dimensional environment: in accordance with a determination that the first movement of the hand that meets the first gesture criteria is detected while the first gaze input is detected, changing a value of a first visual property of the first user interface object at the first position in the three-dimensional environment.
33. The method of any of the claims 23-32, including: in accordance with a determination that the first gesture criteria and the first gaze criteria are both met, providing a first visual feedback in the three-dimensional environment; and in accordance with a determination that the first gesture criteria are met and the first gaze criteria are not met, providing a second visual feedback in the three-dimensional environment, the second visual feedback being different from the first visual feedback.
34. A method, comprising: at a computer system that is in communication with a first display generation component and one or more input devices: displaying, via the first display generation component, a first view of a three-dimensional environment, including displaying, at a first position in the three-dimensional environment, a first group of selectable user interface objects from multiple different groups of selectable user interface objects; while displaying the first group of selectable user interface objects at the first position in the three-dimensional environment, detecting a user input that corresponds to a request to display a second group of selectable user interface objects from the multiple different groups of selectable user interface objects at the first position in the three-dimensional environment, the second group of selectable user interface objects being different from the first group of selectable user interface objects; and
in response to detecting the user input that corresponds to a request to display the second group of selectable user interface objects at the first position in the three-dimensional environment: moving the first group of selectable user interface objects in the three-dimensional environment away from the first position in the three-dimensional environment and reducing visual prominence of the first group of selectable user interface objects in the three-dimensional environment as the first group of selectable user interface objects are moved away from the first position in the three-dimensional environment; and moving the second group of selectable user interface objects in the three-dimensional environment toward the first position in the three-dimensional environment and increasing visual prominence of the second group of selectable user interface objects as the second group of selectable user interface objects are moved to the first position in the three-dimensional environment.
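The swap recited in claim 34 pairs movement with a change in visual prominence. A sketch that models prominence as opacity (an assumption; the claim does not specify the visual property):

```python
# Sketch of the group swap in claim 34: the outgoing group moves away from the
# first position and loses visual prominence while the incoming group moves in
# and gains it. Modeling prominence as opacity is an assumption.
def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def step_group_swap(t, outgoing, incoming, first_pos, away_pos):
    """t in [0, 1] is animation progress; groups are mutable dicts."""
    outgoing["position"] = lerp(first_pos, away_pos, t)
    outgoing["opacity"] = 1.0 - t   # reduce visual prominence
    incoming["position"] = lerp(away_pos, first_pos, t)
    incoming["opacity"] = t         # increase visual prominence
```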
35. The method of claim 34, wherein the first group of selectable user interface objects and the second group of selectable user interface objects are concurrently displayed in the first view of the three-dimensional environment at a time when the user input that corresponds to the request to display the second group of selectable user interface objects at the first position in the three-dimensional environment is detected.
36. The method of claim 34, wherein the second group of selectable user interface objects are not displayed in the first view of the three-dimensional environment at a time when a start of the user input that corresponds to the request to display the second group of selectable user interface objects at the first position in the three-dimensional environment is detected.
37. The method of any of claims 34-36, including: while displaying the first group of selectable user interface objects at the first position in the three-dimensional environment, displaying respective group representations corresponding to the multiple different groups of user interface objects in the first view of the three-dimensional environment, the respective group representations including a first group representation that corresponds to the first group of selectable user interface objects and a second group representation that corresponds to the second group of selectable user interface objects; and while moving the second group of selectable user interface objects toward the first position in the three-dimensional environment, maintaining display of the respective group
representations corresponding to the multiple different groups of user interface objects in the first view of the three-dimensional environment.
38. The method of claim 37, wherein detecting the user input that corresponds to a request to display the second group of selectable user interface objects at the first position in the three-dimensional environment includes: detecting a selection input directed to the second group representation while the first group of selectable user interface objects are displayed at the first position in the three-dimensional environment.
39. The method of claim 38, wherein detecting the selection input directed to the second group representation displayed in the three-dimensional environment includes: detecting a gaze input directed to the second group representation in conjunction with detecting a preset gesture input in a physical environment.
40. The method of any of claims 38-39, including: while displaying a respective group of selectable user interface objects at the first position in the three-dimensional environment, detecting a respective selection input directed to a respective group representation of the respective group of selectable user interface objects; and in response to detecting the respective selection input directed to the respective group representation of the respective group of selectable user interface objects, ceasing to display the respective group of selectable user interface objects at the first position in the three-dimensional environment, without moving another group of selectable user interface objects to the first position.
41. The method of any of claims 37-40, including: while displaying the first group of selectable user interface objects at the first position in the three-dimensional environment, displaying the first group representation with greater visual prominence than the second group representation; and while moving the second group of selectable user interface objects toward the first position in the three-dimensional environment, transitioning from displaying the first group representation with greater visual prominence than the second group representation to displaying the second group representation with greater visual prominence than the first group representation.
42. The method of any of claims 37-41, wherein an area occupied by the respective group representations corresponding to the multiple different groups of user interface objects changes in the first view of the three-dimensional environment in response to a gaze input being directed to the area occupied by the respective group representations.
43. The method of any of the claims 34-42, wherein the different groups of selectable user interface objects include two or more groups from: a first group of selectable user interface objects corresponding to different applications that, when a respective user interface object from the first group of selectable user interface objects is selected in accordance with first criteria, cause the computer system to display a corresponding application in the three-dimensional environment, a second group of selectable user interface objects corresponding to different users that, when a respective user interface object from the second group of selectable user interface objects is selected in accordance with the first criteria, cause the computer system to initiate a process to establish communication with one or more corresponding users in the three-dimensional environment, and a third group of selectable user interface objects corresponding to different computer-generated experiences that, when a respective user interface object from the third group of selectable user interface objects is selected in accordance with the first criteria, cause the computer system to initiate a process to display a corresponding computer-generated experience in the three-dimensional environment (e.g., an augmented reality atmosphere that changes lighting of the physical environment, or a virtual environment that includes an immersive or partially immersive simulated three-dimensional space).
44. The method of any of the claims 34-43, including: before displaying the first group of selectable user interface objects at the first position in the three-dimensional environment, detecting a first gaze input directed to a first region in the three-dimensional environment that corresponds to the location of a user’s hand in a physical environment; and in response to detecting the first gaze input directed to the first region in the three-dimensional environment, displaying a first user interface object at the first region in the three-dimensional environment that corresponds to the location of the user’s hand in the physical environment.
45. The method of any of claims 34-44, including:
before displaying the first group of selectable user interface objects at the first position in the three-dimensional environment, detecting a first gaze input directed to a first region in the three-dimensional environment that corresponds to the location of a user’s hand in a physical environment in conjunction with detecting a preset gesture input on the hand, wherein the first group of selectable user interface objects are displayed at the first position in the three-dimensional environment in response to detecting the first gaze input directed to the first region in the three-dimensional environment in conjunction with detecting the preset gesture input on the hand.
46. The method of claim 45, wherein: the first group of selectable user interface objects are displayed at the first position in the three-dimensional environment in response to detecting the first gaze input directed to the first region in the three-dimensional environment that corresponds to the user’s hand in conjunction with detecting the preset gesture input at a first location on the hand; and the user input that corresponds to the request to display the second group of selectable user interface objects at the first position in the three-dimensional environment includes a second gaze input directed to the first region in the three-dimensional environment that corresponds to the user’s hand detected in conjunction with the preset gesture input at a second location on the hand.
47. The method of any of the claims 34-46, wherein the user input that corresponds to a request to display the second group of selectable user interface objects at the first position in the three-dimensional environment includes a gaze input that is directed to a respective position in the three-dimensional environment of a respective group representation of the second group of selectable user interface objects.
48. The method of claim 47, including: in response to detecting the gaze input that is directed to the respective position of the respective group representation of the second group of selectable user interface objects, applying a visual effect to visually distinguish the respective group representation of the second group of selectable user interface objects from the respective group representation of the first group of selectable user interface objects.
49. The method of any of the claims 34-48, including: while displaying a first subset of the first group of selectable user interface objects at the first position in the three-dimensional environment, detecting a user input that includes
movement of a user’s hand in a first direction in the physical environment while the user’s hand is in a first preset posture; and in response to detecting the user input that includes movement of the user’s hand in the first direction in the physical environment while the user’s hand is in the first preset posture, displaying a second subset of the first group of selectable user interface objects at the first position in the three-dimensional environment, wherein the second subset of the first group of selectable objects are different from the first subset of the first group of selectable objects.
50. The method of claim 49, wherein the user input that corresponds to the request to display the second group of selectable user interface objects includes movement of the user’s hand in a second direction different from the first direction in the physical environment while the user’s hand is in the first preset posture.
51. The method of any of the claims 34-50, wherein the first group of selectable user interface objects include a first subset of selectable user interface objects that correspond to a first set of users with which communication has occurred in a preset time period preceding the current time, and wherein the first subset of selectable user interface objects include at least a respective user interface object that corresponds to a first user with which communication has occurred in the preset time period via a first communication mode, and a respective user interface object that corresponds to a second user with which communication has occurred in the preset time period via a second communication mode different from the first communication mode.
52. The method of any of the claims 34-51, wherein displaying the first group of selectable user interface objects at the first position in the three-dimensional environment includes displaying representations of a plurality of users in different subgroups that are organized in accordance with respective status information associated with the plurality of users, including a first subgroup including representations of a first subset of the plurality of users that are associated with a first type of status information, and a second subgroup including representations of a second subset of the plurality of users that are associated with a second type of status information different from the first type of status information.
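Claim 52's organization of user representations into status-based subgroups amounts to a keyed grouping; an illustrative example with invented status values:

```python
# Illustrative grouping of user representations into subgroups keyed by status
# type, as in claim 52; the status values and users are invented.
from collections import defaultdict

def group_users_by_status(users):
    subgroups = defaultdict(list)
    for user in users:
        subgroups[user["status"]].append(user["name"])
    return dict(subgroups)

# group_users_by_status([{"name": "A", "status": "recently_contacted"},
#                        {"name": "B", "status": "active"}])
# -> {"recently_contacted": ["A"], "active": ["B"]}
```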
53. The method of any of the claims 34-52, wherein displaying the first group of selectable user interface objects at the first position in the three-dimensional environment includes displaying respective representations of a first subset of users with a higher priority
than respective representations of a second subset of users in accordance with a determination that the first subset of users have an active status and the second subset of users do not have an active status.
54. The method of any of the claims 52-53, wherein displaying the first group of selectable user interface objects at the first position in the three-dimensional environment includes displaying a representation for a respective user of the plurality of users that is associated with two or more subgroups according to the respective status information associated with the respective user, in a single subgroup of the two or more subgroups.
55. The method of any of the claims 34-54, wherein displaying the first group of selectable user interface objects at the first position in the three-dimensional environment includes displaying representations of a plurality of users in different subgroups, wherein a respective representation of a respective user of the plurality of users is displayed with a visual indication of a type of association between the respective user and a respective subgroup that caused assignment of the respective user to the respective subgroup.
56. The method of any of the claims 34-55, wherein displaying the first group of selectable user interface objects at the first position in the three-dimensional environment includes displaying representations of a plurality of users in different subgroups, wherein respective representations of a first subset of the plurality of users included in a first subgroup are reordered based on changed communication timing information associated with the first subset of the plurality of users, and respective representations of a second subset of the plurality of users included in a second subgroup are not reordered.
57. The method of any of the claims 34-56, wherein displaying the first group of selectable user interface objects includes displaying respective selectable user interface objects in the first group with characteristic values of a first display property that are determined based on respective positions of the respective selectable user interface objects among the first group of selectable user interface objects.
58. The method of any of the claims 34-57, wherein displaying the first group of selectable user interface objects includes displaying the first group of selectable user interface objects in a respective region that spans a predefined viewing angle in the three-dimensional environment.
59. A method, comprising:
at a computer system that is in communication with a first display generation component and one or more input devices: displaying, via the first display generation component, a first view of a three-dimensional environment, including concurrently displaying, at a first position in the three-dimensional environment, a plurality of user interface objects corresponding to different operations, and at a second position in the three-dimensional environment, background content, wherein the plurality of user interface objects includes a first user interface object and a second user interface object, and wherein the background content has a first value for a respective visual property relative to corresponding values of the respective visual property of the plurality of user interface objects in the first view of the three-dimensional environment; while concurrently displaying the plurality of user interface objects with the background content that has the first value for the respective visual property relative to corresponding values for the respective visual property of the plurality of user interface objects in the first view of the three-dimensional environment, detecting a user input that activates the first user interface object in accordance with first activation criteria; and in response to detecting the user input that activates the first user interface object in accordance with the first activation criteria: initiating a communication session with a first user corresponding to the first user interface object; and adjusting a value of the respective visual property of at least a first portion of the background content from the first value to a second value that is different from the first value.
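Claim 59 couples session initiation with an adjustment of a background visual property. A sketch modeling that property as opacity, with assumed values and names:

```python
# Sketch of claim 59's response to activation: start a communication session
# with the represented user and change the background's visual property from
# a first value to a second value (modeled here as opacity; values assumed).
def activate(user_object, background):
    start_communication_session(user_object["user"])
    background["opacity"] = 0.4   # second value; first value was, say, 1.0

def start_communication_session(user: str) -> None:
    print(f"initiating communication session with {user}")
```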
60. The method of claim 59, including: in response to detecting the user input that activates the first user interface object in accordance with the first activation criteria, reducing visibility of one or more user interface objects of the plurality of user interface objects other than the first user interface object, in conjunction with adjusting visibility of the at least the first portion of the background content from a first level of visibility to a second level of visibility.
61. The method of any of claims 59-60, wherein the background content includes a representation of a physical environment surrounding the first display generation component, and wherein adjusting the value of the respective visual property of the at least the first portion of the background content from the first value to the second value includes reducing
visibility of at least a first portion of the representation of the physical environment in the first view of the three-dimensional environment.
62. The method of any of claims 59-61, including: while concurrently displaying the plurality of user interface objects with the background content that has the first value for the respective visual property in the first view of the three-dimensional environment, detecting a user input that activates the second user interface object in accordance with second activation criteria; and in response to detecting the user input that activates the second user interface object in accordance with the second activation criteria, performing an operation that corresponds to the second user interface object, while maintaining the first value of the respective visual property of the first portion of the background content.
63. The method of any of claims 59-62, wherein the plurality of user interface objects represent a plurality of different users, the first user interface object represents the first user, and the second user interface object represents a second user different from the first user.
64. The method of any of claims 59-63, wherein initiating the communication session with the first user corresponding to the first user interface object includes: displaying a first set of user interface objects corresponding to the communication session with the first user in the first view of the three-dimensional environment, wherein the at least the first portion of the background content has the second value for the respective visual property relative to values of the respective visual property of the first set of user interface objects.
65. The method of claim 64, wherein displaying the first set of user interface objects corresponding to the communication session with the first user in the first view of the three-dimensional environment includes displaying a first affordance that corresponds to a first communication mode of a plurality of different communication modes, wherein activation of the first affordance in accordance with preset criteria corresponding to the first affordance causes the computer system to initiate the communication session with the first user using the first communication mode.
66. The method of claim 65, including: providing access to selectable options corresponding to a plurality of different communication modes in which the communication session can be conducted, including one
or more second communication modes different from the first communication mode, wherein access to the selectable options corresponding to the one or more second communication modes is deprioritized below access to the first affordance corresponding to the first communication mode.
67. The method of any of claims 64-66, including: while displaying the first set of user interface objects corresponding to the communication session with the first user in the first view of the three-dimensional environment, wherein the at least the first portion of the background content has the second value for the respective visual property and wherein the first set of user interface objects includes a representation of the first user, detecting an event that causes the computer system to send out a request to communicate with the first user and wait for a response to the request; and while waiting for the response to the request, reducing visibility of one or more of the first set of user interface objects relative to the representation of the first user.
68. The method of claim 67, including: while waiting for the response to the request and displaying the representation of the first user in the first view of the three-dimensional environment, detecting a user input that corresponds to a request to move the representation of the first user from a first position to a second position in the three-dimensional environment; in response to detecting the user input that corresponds to the request to move the representation of the first user from the first position to the second position in the three-dimensional environment, displaying the representation of the first user at the second position in the three-dimensional environment; and in response to detecting that the communication session is established in accordance with the response to the request, updating the representation of the first user at the second position in accordance with data received from the first user.
69. The method of claim 68, including: during the communication session with the first user, in accordance with a determination that the first user has joined the communication session using a first communication mode, updating a position of the representation of the first user in the three-dimensional environment in accordance with data received from the first user.
70. The method of any of claims 68-69, including:
during the communication session with the first user, in accordance with a determination that the first user has joined the communication session using a second communication mode, updating a position of the representation of the first user in the three-dimensional environment in accordance with user input received from a user of the computer system.
71. The method of any of claims 67-70, including: while waiting for the response to the request, maintaining display of a respective user interface object that includes an indication that the request has not been accepted in a view of the three-dimensional environment until the request is canceled or accepted.
72. The method of claim 71, including: while displaying the respective user interface object that includes the indication that the request has not been accepted in the view of the three-dimensional environment, detecting a user input that corresponds to a request to move the respective user interface object from a first position to a second position in the three-dimensional environment; and in response to detecting the user input that corresponds to the request to move the respective user interface object from the first position to the second position in the three-dimensional environment, displaying the respective user interface object at the second position in the three-dimensional environment.
73. The method of any of claims 59-70, including: while displaying at least the first portion of the background content with the second level of visibility, detecting a gaze input directed to the first portion of the background content; and in response to detecting the gaze input directed to the first portion of the background content, and in accordance with a determination that the gaze input meets first preset criteria, restoring the visibility of the at least the first portion of the background content to the first level of visibility.
74. The method of any of claims 65-73, including: after the communication session with the first user is started, displaying a control user interface object that corresponds to the communication session with the first user, wherein the control user interface object includes one or more affordances for displaying additional content related to the communication session.
75. The method of any of claims 65-74, including: after the communication session with the first user is started, displaying a control user interface object that corresponds to the communication session with the first user; and after the control user interface object is displayed in the three-dimensional environment for at least a first preset amount of time, in accordance with a determination that a gaze input has not been detected in a region of the control user interface object in the three-dimensional environment for at least a second preset amount of time, reducing visibility of the control user interface object in the three-dimensional environment.
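Claim 75 involves two timing conditions before the control fades; a sketch with assumed durations:

```python
# Sketch of the two timers in claim 75: fade the control only after it has
# been shown for a first preset time and no gaze has been detected on it for
# a second preset time. Both durations are assumptions.
def should_reduce_visibility(shown_s: float, since_gaze_s: float,
                             min_shown_s: float = 2.0,
                             gaze_timeout_s: float = 5.0) -> bool:
    return shown_s >= min_shown_s and since_gaze_s >= gaze_timeout_s
```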
76. The method of claim 74, wherein displaying the control user interface object that corresponds to the communication session with the first user includes, for at least a first period of time, displaying the control user interface object at a position in the three-dimensional environment that is anchored to the three-dimensional environment.
77. The method of any of claims 74-76, wherein displaying the control user interface object that corresponds to the communication session with the first user includes, for at least a second period of time, displaying the control user interface object at a position in the three-dimensional environment that is anchored to a position of a first portion of the user of the computer system.
78. The method of any of claims 74-77, including: while displaying the control user interface object that corresponds to the communication session with the first user, detecting a user input that corresponds to a request to display additional information related to the communication session; and in response to detecting the user input that corresponds to the request to display the additional information related to the communication session, displaying the additional information within the control user interface object.
79. The method of any of claims 65-78, including: after the communication session with the first user is started, displaying a first affordance for terminating the communication session, wherein activation of the first affordance causes the computer system to terminate the communication session with the first user.
80. The method of any of claims 65-79, including:
after the communication session with the first user is started, displaying a second affordance for switching a currently used communication mode of the communication session, wherein activation of the second affordance causes the computer system to change a communication mode of the communication session with the first user from a first communication mode to a second communication mode different from the first communication mode.
81. A method, comprising: at a computer system that is in communication with a first display generation component and one or more input devices: displaying, via the first display generation component, a first view of a three-dimensional environment; while displaying the first view of the three-dimensional environment, detecting a first event that corresponds to receipt of a first notification; in response to detecting the first event: displaying a first representation of the first notification in the three-dimensional environment at a position that is selected based on a position of a first portion of a first user in a physical environment; and after displaying the first representation of the first notification at the position in the three-dimensional environment that is selected based on a position of the first portion of the first user in the physical environment, displaying a second representation of the first notification at a second position in the three-dimensional environment that is selected based on a position of a second portion of the first user in the physical environment, wherein the second portion of the first user is different from the first portion of the first user and can be moved relative to the first portion of the first user; and while displaying the second representation of the first notification at the position in the three-dimensional environment that is selected based on the position of the second portion of the first user in the physical environment, detecting movement of the second portion of the first user in the physical environment relative to the first portion of the first user; and in response to detecting movement of the second portion of the first user in the physical environment relative to the first portion of the first user, moving the second representation of the first notification in the three-dimensional environment based on the movement of the second portion of the first user in the physical environment.
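Claim 81 re-anchors the notification from one portion of the user to another. A sketch assuming the first portion is the head/viewpoint and the second is a hand; the anchor choices and offset are illustrative, not disclosed:

```python
# Sketch of the re-anchoring in claim 81: the first representation is placed
# relative to one portion of the user (e.g., the head/viewpoint) and the
# second relative to another that moves independently (e.g., a hand).
def notification_position(stage: str, head_pos, hand_pos,
                          offset=(0.0, -0.1, -0.6)):
    anchor = head_pos if stage == "first" else hand_pos
    return tuple(a + o for a, o in zip(anchor, offset))
```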
82. The method of claim 81, wherein the first representation of the first notification is different from the second representation of the first notification.
83. The method of claim 81, wherein the first representation of the first notification is the same as the second representation of the first notification.
84. The method of any of claims 81-83, including: while displaying the first representation of the first notification at the position in the three-dimensional environment that is selected based on the position of the first portion of the first user in the physical environment, detecting first movement of the first portion of the first user in the physical environment; and in response to detecting the first movement of the first portion of the first user in the physical environment, updating the position of the first representation of the first notification in a currently displayed view of the three-dimensional environment in accordance with the first movement of the first portion of the first user in the physical environment.
85. The method of any of claims 81-84, including: while displaying the first representation of the first notification at the position in the three-dimensional environment that is selected based on the position of the first portion of the first user in the physical environment, detecting second movement of the second portion of the first user in the physical environment relative to the first portion of the first user; and in response to detecting the second movement of the second portion of the first user in the physical environment relative to the first portion of the first user, maintaining the position of the first representation of the first notification in a currently displayed view of the three-dimensional environment.
86. The method of any of claims 81-85, including: while displaying the second representation of the first notification at the position in the three-dimensional environment that is selected based on the position of the second portion of the first user in the physical environment, detecting third movement of the first portion of the first user in the physical environment without detecting movement of the second portion of the first user in the physical environment; and in response to detecting the third movement of the first portion of the first user in the physical environment, maintaining the position of the second representation of the first notification in a currently displayed view of the three-dimensional environment.
87. The method of any of claims 81-86, wherein displaying the first representation of the first notification in the three-dimensional environment at the position that is selected based on the position of the first portion of the first user in the physical environment includes displaying the first representation of the first notification in a peripheral portion of a field of view provided via the first display generation component.
88. The method of claim 87, including: displaying transformation of the first representation of the first notification into the second representation of the first notification and movement of the second representation of the first notification to the position in the three-dimensional environment that is selected based on the position of the second portion of the first user in the physical environment.
89. The method of any of claims 81-88, including: while displaying the second representation of the first notification in the three-dimensional environment at the position that is selected based on the position of the second portion of the first user in the physical environment, detecting movement of a hand of the first user in the physical environment; and in response to detecting the movement of the hand of the first user in the physical environment: in accordance with a determination that the movement of the hand of the first user in the physical environment meets first preset criteria, displaying a third representation of the first notification at a position in the three-dimensional environment that corresponds to a position of the hand of the first user.
90. The method of claim 89, wherein the first preset criteria require that the movement of the hand of the first user is detected in conjunction with a first gaze input being directed to a preset position in the three-dimensional environment in order for the first preset criteria to be met, and the method includes: in response to detecting the movement of the hand of the first user in the physical environment: in accordance with a determination that the movement of the hand of the first user in the physical environment fails to meet the first preset criteria due to absence of the first gaze input directed to the preset position in the three-dimensional environment, providing visual feedback indicating a requirement of the first gaze input at a position in the three-dimensional environment that corresponds to the hand of the first user.
91. The method of any of claims 89-90, wherein the first preset criteria require that the movement of the hand of the first user is detected in conjunction with a first gaze input being directed to a preset position in the three-dimensional environment for at least a first threshold amount of time in order for the first preset criteria to be met, and the method includes: in response to detecting the movement of the hand of the first user in the physical environment: in accordance with a determination that the first preset criteria are not met due to the first gaze input not being held within a threshold range of the preset position in the three-dimensional environment for at least the first threshold amount of time, forgoing display of the third representation of the first notification.
92. The method of any of claims 89-91, wherein the first preset criteria require that the movement of the hand of the first user includes turning a palm of the hand of the first user toward a preset direction in order for the first preset criteria to be met.
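Claims 90-92 together recite preset criteria combining a palm turn with a held gaze. The sketch below evaluates one plausible version of such criteria: a palm normal within an assumed angular tolerance of a preset direction, together with a gaze that stays within a given range of a preset position for a minimum dwell time. The 30-degree tolerance and all names are illustrative assumptions, not the patent's definitions.

```swift
import Foundation

struct Vec3 { var x, y, z: Double }

func distance(_ a: Vec3, _ b: Vec3) -> Double {
    let dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z
    return (dx * dx + dy * dy + dz * dz).squareRoot()
}

func angleDegrees(_ a: Vec3, _ b: Vec3) -> Double {
    let dot = a.x * b.x + a.y * b.y + a.z * b.z
    let na = (a.x * a.x + a.y * a.y + a.z * a.z).squareRoot()
    let nb = (b.x * b.x + b.y * b.y + b.z * b.z).squareRoot()
    let cosine = max(-1.0, min(1.0, dot / (na * nb)))
    return acos(cosine) * 180.0 / .pi
}

struct GazeSample { var target: Vec3; var time: TimeInterval }

// The preset criteria: palm turned toward a preset direction AND gaze held
// within `range` of a preset position for at least `dwell` seconds.
func presetCriteriaMet(palmNormal: Vec3, presetDirection: Vec3,
                       gaze: [GazeSample], presetPosition: Vec3,
                       range: Double, dwell: TimeInterval) -> Bool {
    // Assumed tolerance: palm within 30 degrees of the preset direction.
    guard angleDegrees(palmNormal, presetDirection) <= 30 else { return false }
    // Look for an unbroken run of gaze samples near the preset position
    // lasting at least `dwell` seconds.
    var runStart: TimeInterval? = nil
    for sample in gaze {
        if distance(sample.target, presetPosition) <= range {
            let start = runStart ?? sample.time
            runStart = start
            if sample.time - start >= dwell { return true }
        } else {
            runStart = nil   // gaze left the region; dwell timer resets
        }
    }
    return false
}
```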
93. The method of any of claims 81-92, including: while displaying a respective representation of the first notification in the three-dimensional environment, detecting a gaze input directed to a region in the three-dimensional environment that corresponds to the respective representation of the first notification; and in response to detecting the gaze input directed to the region in the three-dimensional environment that corresponds to the respective representation of the first notification, changing one or more display properties of the respective representation of the first notification in a currently displayed view of the three-dimensional environment.
94. The method of any of claims 81-93, including: while displaying a respective representation of the first notification in the three-dimensional environment, detecting a first gesture performed by a hand of the first user in conjunction with a gaze input directed to a region in the three-dimensional environment that corresponds to the respective representation of the first notification; and in response to detecting the first gesture in conjunction with the gaze input directed to the region in the three-dimensional environment that corresponds to the respective representation of the first notification: in accordance with a determination that the first gesture and the gaze input meet activation criteria, performing an operation in the three-dimensional environment corresponding to the first notification in accordance with the first gesture, and dismissing the first notification.
95. The method of claim 94, wherein the activation criteria are met by the first gesture in accordance with a determination that the first gesture includes movement of a first preset portion of the hand of the first user relative to a second preset portion of the hand of the first user that starts from or ends in contact between the first portion of the hand and the second portion of the hand.
96. The method of any of claims 94-95, wherein the activation criteria are met by the first gesture in accordance with a determination that the first gesture includes a touch input performed by a first hand of the first user directed to a second hand of the first user.
97. The method of any of claims 81-96, including: in response to detecting the first event: outputting a first spatial audio alert with an associated spatial location in the physical environment that corresponds to the position of the second portion of the first user in the physical environment; and in accordance with a determination that a gaze input is detected in a preset range of the associated spatial location of the first spatial audio alert while the first spatial audio alert is being output, displaying a third representation of the first notification at a position in the three-dimensional environment that corresponds to the second portion of the first user in the physical environment.
98. The method of any of claims 81-97, wherein: the first view of the three-dimensional environment includes a first three-dimensional region that is shared between the first user and a second user that is engaged in a shared experience with the first user, and while displaying the first representation of the first notification in the first view of the three-dimensional environment, forgoing providing notification content of the first notification in the first three-dimensional region that is shared between the first user and the second user.
99. The method of any of claims 81-98, including: in response to detecting the first event, generating a first audio alert in conjunction with displaying the first representation of the first notification.
100. The method of claim 99, wherein generating the first audio alert in conjunction with displaying the first representation of the first notification includes generating a first spatial audio alert with a simulated spatial location in the physical environment that corresponds to the position of the first representation of the first notification in the three-dimensional environment that is selected based on the position of the first portion of the first user in the physical environment.
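Claims 97 and 100 recite spatial audio alerts whose simulated source location matches a position associated with the notification. A minimal sketch of that pairing follows, with a hypothetical SpatialAudioOutput protocol standing in for whatever audio back end the system actually uses; none of these names come from the patent.

```swift
struct Point3 { var x, y, z: Double }

// Hypothetical stand-in for the real spatial audio engine: the claims only
// require that an alert can be given a simulated source location.
protocol SpatialAudioOutput {
    func play(alertNamed name: String, at simulatedLocation: Point3)
}

// Emit the alert from the physical-environment location that corresponds to
// the notification's position, so the sound appears to originate where the
// notification is (or will be) displayed.
func presentNotificationAlert(audio: SpatialAudioOutput,
                              notificationPosition: Point3) {
    audio.play(alertNamed: "notification-arrived", at: notificationPosition)
}
```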
101. The method of any of claims 81-100, including: displaying an animated transition between displaying the first representation of the first notification at the position in the three-dimensional environment that is based on the position of the first portion of the first user in the physical environment and displaying the second representation of the first notification at the position in the three-dimensional environment that is based on the position of the second portion of the first user in the physical environment; and generating an audio output in conjunction with displaying the animated transition.
102. The method of any of claims 81-101, including: while displaying a respective representation of the first notification, detecting movement of a first hand that corresponds to the second portion of the first user in the physical environment; and in response to detecting the movement of the first hand that corresponds to the second portion of the first user, in accordance with a determination that the movement of the first hand includes opening the first hand, displaying a third representation of the first notification while the first hand is open.
103. The method of claim 102, including: in response to detecting the movement of the first hand that corresponds to the second portion of the first user, in accordance with a determination that the movement of the first hand includes closing of the first hand after opening the first hand, ceasing to display the third representation of the first notification.
104. The method of claim 102, including: in response to detecting the movement of the first hand that corresponds to the second portion of the first user, in accordance with a determination that the movement of the first hand includes closing of the first hand after opening the first hand, ceasing to display the third representation of the first notification and dismissing the first notification.
105. The method of any of claims 102-104, including: while displaying the third representation of the first notification in the three-dimensional environment, detecting movement of a second hand of the first user, the second hand being different from the first hand; and in response to detecting the movement of the second hand of the first user, in accordance with a determination that the movement of the second hand corresponds to a drag and drop gesture that starts at a location in the physical environment that corresponds to a position of a first content item in the three-dimensional environment and ends at a location in the physical environment that corresponds to the position of the third representation of the first notification, adding a representation of the first content item in an input area within the third representation of the first notification.
106. The method of any of claims 102-105, including: in response to detecting the movement of the first hand of the first user: in accordance with a determination that the movement of the first hand includes first movement of the first hand away from the first user during a first period of time, changing a size of the third representation of the first notification in accordance with the first movement of the first hand away from the first user during the first period of time; and in accordance with a determination that the movement of the first hand includes second movement of the first hand toward the first user during a second period of time, changing the size of the third representation of the first notification in accordance with the second movement of the first hand toward the first user during the second period of time.
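Claim 106 recites growing and shrinking the hand-locked representation as the hand moves away from or toward the user. One plausible mapping is sketched below; the linear scale rule and the clamp range are assumptions, not values from the claims.

```swift
// Scale the expanded notification with the hand's distance from the user,
// clamped to sane bounds so it never vanishes or fills the view.
func representationScale(handDistance: Double,
                         referenceDistance: Double,
                         minScale: Double = 0.5,
                         maxScale: Double = 2.0) -> Double {
    guard referenceDistance > 0 else { return 1.0 }
    let raw = handDistance / referenceDistance
    return min(max(raw, minScale), maxScale)
}

// Moving the hand away from the user (larger distance) grows the
// representation; moving it toward the user shrinks it.
let grown = representationScale(handDistance: 0.6, referenceDistance: 0.4)  // 1.5
let shrunk = representationScale(handDistance: 0.2, referenceDistance: 0.4) // 0.5
```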
107. The method of any of claims 102-106, including: while displaying the third representation of the first notification in the three-dimensional environment, detecting a first user input that corresponds to a request to drag and drop a first object from the three-dimensional environment to the third representation of the first notification; and in response to detecting the first user input, adding the first object in an input area within the third representation of the first notification.
108. The method of any of claims 102-107, including: while displaying the third representation of the first notification in the three-dimensional environment, wherein the third representation of the first notification includes a second object, detecting a second user input that corresponds to a request to drag and drop the second object from the third representation of the first notification to the three-dimensional environment; and in response to detecting the second user input, adding the second object to a position in the three-dimensional environment outside of the third representation of the first notification.
109. A method, comprising: at a computer system that is in communication with a first display generation component and one or more input devices: displaying, via the first display generation component, a first view of a three-dimensional environment; while displaying the first view of the three-dimensional environment, detecting a first event that corresponds to an incoming request for establishing a communication session between a first user corresponding to the computer system and a second user different from the first user; in response to detecting the first event, displaying a first notification at a first position in the three-dimensional environment, wherein the first position is selected based on a respective position in the three-dimensional environment that corresponds to a location of at least a first portion of the first user in a physical environment; while displaying the first notification at the first position in the three-dimensional environment that is selected based on the respective position in the three-dimensional environment that corresponds to the location of at least the first portion of the first user in the physical environment, detecting a sequence of one or more user inputs that correspond to acceptance of the incoming request for establishing the communication session; and in response to detecting the sequence of one or more user inputs that correspond to the acceptance of the incoming request for establishing the communication session, displaying a plurality of user interface objects associated with the communication session at respective second positions in the three-dimensional environment that are at least a threshold distance away from a characteristic position of the first user in the three-dimensional environment.
110. The method of claim 109, wherein the location of the at least the first portion of the first user includes a characteristic location of the first user as a whole in the physical environment, and the method includes: while the first notification is displayed at the first position that is selected based on the respective position in the three-dimensional environment that corresponds to the characteristic location of the first user as a whole in the physical environment, detecting movement of the characteristic location of the first user as a whole from a first location to a second location in the physical environment; and in response to detecting the movement of the characteristic location of the first user as a whole from the first location to the second location in the physical environment: displaying a second view of the three-dimensional environment, wherein a viewpoint of the second view corresponds to the second location in the physical environment; and displaying the first notification in the second view of the three-dimensional environment at an updated first position that is selected based on the second location in the physical environment.
111. The method of claim 109, wherein the location of the at least the first portion of the first user includes a characteristic location and orientation of a head of the first user in the physical environment, and the method includes: while the first notification is displayed at the first position that is selected based on the respective position in the three-dimensional environment that corresponds to the characteristic location and orientation of the head of the first user in the physical environment, detecting a change in the characteristic location and orientation of the head of the first user from a first state to a second state in the physical environment; and in response to detecting the change in the characteristic location and orientation of the head of the first user in the physical environment: displaying a third view of the three-dimensional environment, wherein a viewpoint of the third view corresponds to the second state of the head of the first user in the physical environment; and displaying the first notification in the third view of the three-dimensional environment at an updated first position that is selected based on the second state of the head of the first user in the physical environment.
112. The method of claim 111, wherein movement of the first notification from the first position in the first view of the three-dimensional environment to the updated first position in the third view of the three-dimensional environment lags behind a change from displaying the first view of the three-dimensional environment to displaying the third view of the three-dimensional environment.
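The lag recited in claim 112 is the familiar "lazy follow" behavior: the notification interpolates toward its head-anchored target each frame rather than snapping, so it trails quick viewpoint changes and catches up once the head settles. A sketch follows; the smoothing constant is an assumed tuning value.

```swift
struct P3 { var x, y, z: Double }

// Per-frame update: move only a fraction of the remaining distance toward
// the target, so the notification visibly lags the viewpoint change.
func lazyFollow(current: P3, target: P3, smoothing: Double = 0.15) -> P3 {
    P3(x: current.x + smoothing * (target.x - current.x),
       y: current.y + smoothing * (target.y - current.y),
       z: current.z + smoothing * (target.z - current.z))
}
```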
113. The method of any of claims 109-112, wherein: the location of the at least the first portion of the first user includes a characteristic location of a hand of the first user in the physical environment, and displaying the first notification at the first position that is selected based on the respective position in the three-dimensional environment that corresponds to the location of at least the first portion of the first user includes displaying the first notification at a position that is selected based on the characteristic location of the hand of the first user in the physical environment.
114. The method of claim 113, wherein the position that is selected based on the characteristic location of the hand of the first user in the physical environment includes a respective position at or near a representation of the hand of the first user, in accordance with a determination that the representation of the hand of the first user is within a currently displayed view of the three-dimensional environment.
115. The method of any of claims 113-114, wherein the position that is selected based on the characteristic location of the hand of the first user in the physical environment includes a respective position at or near an edge portion of the currently displayed view of the three-dimensional environment, in accordance with a determination that the representation of the hand of the first user is outside the currently displayed view of the three-dimensional environment.
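Claims 114 and 115 describe a placement fallback: at or near the hand's representation when the hand is in view, otherwise at or near an edge of the view. A sketch of that rule in normalized view coordinates follows; the specific offsets and the choice of edge are assumptions.

```swift
// Normalized view coordinates: (0, 0) is one corner of the view, (1, 1) the
// opposite corner.
struct ViewPoint { var x, y: Double }

func notificationViewPosition(handInView: ViewPoint?) -> ViewPoint {
    if let hand = handInView {
        // Hand representation is visible: display at or near the hand.
        return ViewPoint(x: hand.x, y: hand.y + 0.05)
    }
    // Hand is outside the current view: park the notification near an edge
    // portion of the view (here, arbitrarily, the right edge).
    return ViewPoint(x: 0.95, y: 0.5)
}
```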
116. The method of any of claims 109-115, including: while displaying the first notification at the position that is selected based on the characteristic location of the hand of the first user in the physical environment, detecting movement of the hand of the first user from a third location to a fourth location in the physical environment; and in response to detecting the movement of the hand of the first user from the third location to the fourth location in the physical environment, moving the first notification in the three-dimensional environment from a third position that is selected based on the third location in the physical environment to a fourth position that is selected based on the fourth location in the physical environment.
117. The method of claim 116, wherein: the third position in the three-dimensional environment is at or near a representation of the hand of the first user in a currently displayed view of the three-dimensional environment when the hand of the first user is at the third location in the physical environment, and the fourth position in the three-dimensional environment is at or near the representation of the hand of the first user in the currently displayed view of the three-dimensional environment when the hand of the first user is at the fourth location in the physical environment.
118. The method of any of claims 109-117, including: in response to detecting the first event, displaying movement of the first notification from an initial display position of the first notification toward the first position in the three-dimensional environment that is selected based on the respective position in the three-dimensional environment that corresponds to the location of at least the first portion of the first user in the physical environment.
119. The method of claim 118, wherein the first position in the three-dimensional environment is selected based on a respective position in the three-dimensional environment that corresponds to the location of a hand of the first user in the physical environment, and the movement of the first notification starts from the initial display position of the first notification and ends at or near the respective position in the three-dimensional environment that corresponds to the location of the hand of the first user in the physical environment.
120. The method of claim 118, wherein the first position in the three-dimensional environment is selected based on a respective position in the three-dimensional environment that corresponds to the location of a head of the first user in the physical environment, and the movement of the first notification starts from the initial display position of the first notification and ends at or near the respective position in the three-dimensional environment that corresponds to the location of the hand of the first user in the physical environment.
121. The method of any of claims 109-120, including: in response to detecting the first event, moving the first notification from a first depth from a viewpoint of a currently displayed view of the three-dimensional environment to a second depth from the viewpoint of the currently displayed view of the three-dimensional environment.
122. The method of any of claims 109-121, including: in response to detecting the first event, generating a first audio alert in conjunction with displaying the first notification at the first position in the three-dimensional environment that is selected based on the respective position in the three-dimensional environment that corresponds to the location of at least the first portion of the first user in the physical environment.
123. The method of claim 122, wherein the first audio alert is started after an initial appearance of the first notification in a currently displayed view of the three-dimensional environment.
124. The method of any of claims 109-123, including: while displaying the first notification in a currently displayed view of the three-dimensional environment, detecting movement of a hand of the first user toward a respective location in the physical environment that corresponds to a current position of the first notification in the three-dimensional environment; and in response to detecting the movement of the hand of the first user toward the respective location in the physical environment that corresponds to the current position of the first notification in the three-dimensional environment, changing an appearance of the first notification.
125. The method of any of claims 109-124, including: in response to detecting the first event, displaying movement of the first notification into a currently displayed view of the three-dimensional environment from a side edge or corner of the currently displayed view.
126. The method of any of claims 109-125, wherein the first notification identifies the second user.
127. The method of any of claims 109-126, including: in response to detecting the first event, displaying the first notification with notification content for a first period of time; and in accordance with a determination that the sequence of one or more user inputs that correspond to the acceptance of the incoming request for establishing the communication session has not been received after a first preset period of time of initial display of the first notification with the notification content, ceasing to display at least a portion of the notification content and moving the first notification to a different position in the three-dimensional environment.
128. The method of any of claims 109-127, wherein displaying the plurality of user interface objects associated with the communication session at the respective second positions in the three-dimensional environment includes maintaining a spatial relationship between the plurality of user interface objects and a representation of a physical environment in the three-dimensional environment during movement of a viewpoint of a currently displayed view of the three-dimensional environment.
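Claim 128 recites keeping the session's user interface objects in a fixed spatial relationship to the represented physical environment while the viewpoint moves, i.e., world-locking. One plausible structure, sketched below, stores each object in environment (world) coordinates and derives view-relative positions per frame; the translation-only transform is a simplification (a real system would apply a full rotation and translation), and all names are hypothetical.

```swift
struct W3 { var x, y, z: Double }

struct SessionObject {
    let id: String
    var worldPosition: W3   // fixed relative to the represented physical environment
}

// Because only the view transform changes when the viewpoint moves, the
// objects' spatial relationship to the physical environment is preserved.
func viewPosition(of object: SessionObject, viewpoint: W3) -> W3 {
    W3(x: object.worldPosition.x - viewpoint.x,
       y: object.worldPosition.y - viewpoint.y,
       z: object.worldPosition.z - viewpoint.z)
}
```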
129. A method, comprising: at a first computer system that is in communication with a first display generation component and one or more first input devices: displaying, via the first display generation component, a first user interface of a first communication session between a first user corresponding to the first computer system and a second user corresponding to a second computer system different from the first computer system, wherein the second user is participating in the first communication session within a first three-dimensional environment, and wherein displaying the first user interface of the first communication session includes displaying a respective representation of the second user that indicates a spatial positioning of the second user relative to a respective representation of at least one other user in the first communication session in the first three-dimensional environment; while displaying the first user interface of the first communication session including the respective representation of the second user that indicates the spatial positioning of the second user relative to the respective representation of the at least one other user in the first communication session in the first three-dimensional environment, detecting a first user input that corresponds to a request to display the respective representation of the second user without indicating the spatial positioning of the second user relative to the respective representation of the at least one other user in the first communication session in the first three-dimensional environment; and in response to detecting the first user input, displaying the respective representation of the second user without indicating the spatial positioning of the second user relative to the respective representation of the at least one other user in the first communication session in the first three-dimensional environment.
130. The method of claim 129, wherein: while the respective representation of the second user is displayed, via the first display generation component, without indicating the spatial positioning of the second user relative to the respective representation of the at least one other user in the first communication session in the first three-dimensional environment: a second user interface of the first communication session is displayed via a second display generation component of the second computer system, wherein displaying the second user interface includes displaying a first view of the first three-dimensional environment via the second display generation component that is in communication with the second computer system, and displaying a respective representation of the first user that indicates a spatial positioning of the first user relative to the second user in the first three-dimensional environment.
131. The method of claim 130, wherein: in accordance with a second user input from the second user that moves the respective representation of the first user in the first view of the first three-dimensional environment, the respective representation of the first user in the first view of the first three-dimensional environment is moved by the second computer system to indicate an update to the spatial positioning of the first user relative to the second user in the first three-dimensional environment, without causing a corresponding change in the first user interface of the first communication session displayed via the first display generation component.
132. The method of any of claims 130-131, wherein: while the respective representation of the second user is displayed via the first display generation component with the indication of the spatial positioning of the second user relative to the respective representation of the at least one other user in the first communication session in the first three-dimensional environment, the second user interface of the first communication session is displayed by the second computer system, wherein displaying the second user interface of the first communication session includes displaying the first view of the first three-dimensional environment via the second display generation component that is in communication with the second computer system, and displaying the respective representation of the first user that indicates the spatial positioning of the first user relative to the second user in the first three-dimensional environment, and wherein the respective representation of the first user is not controlled by inputs received from the second user.
133. The method of any of claims 129-132, including: while displaying the first user interface of the first communication session including the respective representation of the second user that indicates the spatial positioning of the second user relative to the representation of the at least one other user in the first communication session in the first three-dimensional environment, detecting a third user input that corresponds to a request to move a virtual position of the first user in the first three-dimensional environment; and in response to detecting the third user input that corresponds to the request to move the virtual position of the first user in the first three-dimensional environment, changing the virtual position of the first user in the first three-dimensional environment.
134. The method of claim 133, wherein detecting the third user input that corresponds to the request to move the virtual position of the first user in the first three-dimensional environment includes detecting first movement of the first display generation component in a first physical environment of the first user.
135. The method of any of claims 133-134, wherein detecting the third user input that corresponds to the request to move the virtual position of the first user in the first three-dimensional environment includes detecting a predefined gesture input provided by the first user that includes movement of a hand of the first user.
136. The method of any of claims 133-135, including: while displaying the first user interface of the first communication session including the respective representation of the second user that indicates the spatial positioning of the second user relative to the representation of the at least one other user in the first communication session in the first three-dimensional environment, and in response to detecting the third user input that corresponds to the request to move the virtual position of the first user in the first three-dimensional environment, updating the first user interface of the first communication session, including updating the respective representation of the second user to indicate a changed spatial positioning of the second user relative to the virtual position of the first user in the first three-dimensional environment.
137. The method of any of claims 133-136, wherein in accordance with a change in the virtual position of the first user in the first three-dimensional environment made in response to the third user input that corresponds to the request to move the virtual position of the first user in the first three-dimensional environment, a respective representation of the first user is moved from a first position to a second position in a first view of the first three-dimensional environment shown by the second computer system.
138. The method of any of claims 129-136, wherein the first user interface of the first communication session includes at least a respective representation of a third user that is different from the first user and the second user.
139. The method of claim 138, wherein: displaying the respective representation of the second user that indicates the spatial positioning of the second user relative to the respective representation of the at least one other user in the first communication session in the first three-dimensional environment includes displaying the respective representation of the second user with a first orientation and displaying the respective representation of the third user with a second orientation different from the first orientation; and displaying the respective representation of the second user without indicating the spatial positioning of the second user relative to the respective representation of the at least one other user in the first communication session in the first three-dimensional environment includes displaying the respective representation of the second user and the respective representation of the third user with a third orientation.
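Claim 139 contrasts the two presentation modes: the spatial mode preserves each participant's own scene-derived orientation, while the non-spatial mode displays every representation with one common orientation. A minimal sketch follows; the mode names and the yaw convention (0 meaning "facing the viewer") are illustrative assumptions.

```swift
enum PresentationMode { case spatial, nonSpatial }

// Given each participant's yaw (rotation about the vertical axis, radians),
// return the yaw to render each representation with.
func renderedOrientations(participantYaws: [Double],
                          mode: PresentationMode) -> [Double] {
    switch mode {
    case .spatial:
        return participantYaws                    // distinct, scene-derived orientations
    case .nonSpatial:
        return participantYaws.map { _ in 0.0 }   // one common orientation for all
    }
}
```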
140. The method of any of claims 129-139, wherein displaying the respective representation of the second user without indicating the spatial positioning of the second user relative to the representation of the at least one other user in the first communication session in the first three-dimensional environment includes displaying the respective representation of the second user in a virtual environment.
141. The method of any of claims 129-140, wherein displaying the respective representation of the second user indicating the spatial positioning of the second user relative to the respective representation of the at least one other user in the first communication session in the first three-dimensional environment includes displaying the respective representation of the second user at a first position in a first augmented reality environment that includes a representation of the first physical environment.
142. The method of any of claims 129-141, wherein the second computer system is in communication with a second display generation component, and a second user interface of the first communication session between the first user and the second user is displayed via the second display generation component of the second computer system, wherein displaying the second user interface includes displaying a respective representation of the first user that indicates a spatial positioning of the first user relative to a respective representation of at least one other user in the first communication session in the first three-dimensional environment.
143. The method of claim 142, wherein the respective representation of the first user in the second user interface of the first communication session includes a video feed of the first user.
144. The method of any of claims 142-143, wherein: the second computer system detects a fourth user input from the second user that corresponds to a request to move the respective representation of the first user in the first three-dimensional environment; and in response to detecting the fourth user input from the second user, the second computer system moves the respective representation of the first user in the second user interface of the first communication session to change the spatial positioning of the first user relative to the respective representation of the at least one other user in the first communication session in the first three-dimensional environment.
145. The method of any of claims 129-144, wherein displaying, via the first display generation component, the first user interface of the first communication session between the first user and the second user includes displaying a respective representation of the first user at a preset position in the first user interface.
146. The method of any of claims 129-145, wherein the respective representation of the second user includes an avatar of the second user that corresponds to a preset camera angle and/or imaging distance.
147. The method of any of claims 129-146, including: displaying a media player object in a shared region of the first three-dimensional environment, wherein the media player object is shown via the first display generation component in the first user interface of the first communication session, wherein the media player object includes a video that is playing; and in response to receiving, via the second computer system, a fifth user input that corresponds to a request to move the media player object from a third position to a fourth position within the shared region of the first three-dimensional environment, moving the media player object in the first user interface of the first communication session to indicate the changed position of the media player object in the first three-dimensional environment.
148. The method of any of claims 129-147, wherein the first user interface of the first communication session includes a first affordance that, when activated in accordance with preset criteria, causes display of the respective representation of the second user that indicates the spatial positioning of the second user relative to the representation of the at least one other user in the first communication session in the first three-dimensional environment.
149. The method of any of claims 129-148, wherein: displaying the respective representation of the second user that indicates the spatial positioning of the second user relative to the representation of the at least one other user in the first communication session in the first three-dimensional environment includes displaying the respective representation of the second user at a position that is chosen automatically by a computer system.
150. The method of any of claims 129-149, wherein: displaying the respective representation of the second user that indicates the spatial positioning of the second user relative to the respective representation of the at least one other user in the first communication session in the first three-dimensional environment includes displaying the respective representation of the second user at a position that is chosen in accordance with an input from the second user provided through the second computer system.
151. The method of any of claims 129-150, wherein: displaying the respective representation of the second user that indicates the spatial positioning of the second user relative to the respective representation of the at least one other user in the first communication session in the first three-dimensional environment includes displaying the respective representation of the second user and a respective representation of a third user to indicate a first spatial relationship between the second user and the third user in the first three-dimensional environment.
152. The method of any of claims 129-151, including: while displaying the respective representation of the second user that indicates the spatial positioning of the second user relative to the representation of the at least one other user in the first communication session in the first three-dimensional environment, detecting a sixth user input that corresponds to a request to resize the respective representation of the second user; and in response to detecting the sixth user input that corresponds to the request to resize the respective representation of the second user, changing a size of the respective representation of the second user in the first user interface of the first communication session in accordance with the sixth user input.
153. A method, including: at a first computer system that is in communication with a first display generation component and one or more first input devices: displaying, via the first display generation component, a first user interface of a first communication session between a first user and a second user, wherein the first user interface includes a representation of the second user; while displaying the first user interface of the first communication session between the first user and the second user, detecting a first movement of the first display generation component in a first physical environment; and in response to detecting the first movement of the first display generation component in the first physical environment: in accordance with a determination that the first display generation component is in a first orientation during the first movement of the first display generation component, updating an appearance of the representation of the second user based on the first movement of the first display generation component in the first physical environment and a simulated viewpoint of the first user into a respective three-dimensional environment in which the representation of the second user is displayed; and in accordance with a determination that the first display generation component is in a second orientation, different from the first orientation, during the first movement of the first display generation component, maintaining display of the representation of the second user at a respective location in the first user interface of the first communication session as the first display generation component moves in the first physical environment.
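Claim 153's branch can be read as an orientation gate: in one device orientation, device movement drives a simulated viewpoint so the remote representation updates with parallax; in the other, the representation stays fixed in the interface. The sketch below assumes the two orientations are landscape and portrait, which the claims deliberately do not specify.

```swift
enum DeviceOrientation { case landscape, portrait }

struct V3 { var x, y, z: Double }

func updatedViewpoint(current: V3, deviceDelta: V3,
                      orientation: DeviceOrientation) -> V3 {
    switch orientation {
    case .landscape:
        // First orientation (assumed): device movement moves the simulated
        // viewpoint, so the representation's appearance updates with parallax.
        return V3(x: current.x + deviceDelta.x,
                  y: current.y + deviceDelta.y,
                  z: current.z + deviceDelta.z)
    case .portrait:
        // Second orientation (assumed): the representation stays at a fixed
        // location in the user interface regardless of device movement.
        return current
    }
}
```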
154. The method of claim 153, including: after updating the first user interface of the first communication session in response to detecting the first movement of the first display generation component in the first physical environment: detecting a first change in orientation of the first display generation component, and, after detecting the first change in orientation of the first display generation component, detecting second movement of the first display generation component in the first physical environment; and in response to detecting the second movement of the first display generation component in the first physical environment: in accordance with a determination that the first display generation component is in the first orientation during the second movement of the first display generation component, updating the appearance of the representation of the second user based on the second movement of the first display generation component in the first physical environment and the simulated viewpoint of the first user into the respective three-dimensional environment in which the representation of the second user is displayed; and in accordance with a determination that the first display generation component is in the second orientation during the second movement of the first display generation component, maintaining display of the representation of the second user at the respective location in the first user interface of the first communication session as the first display generation component moves in the first physical environment.
155. The method of any of claims 153-154, wherein the first communication session is between at least the first user, the second user, and a third user that is different from the first user and the second user, and wherein the first user interface of the first communication session further includes a representation of the third user, and the method includes: in response to detecting the first movement of the first display generation component in the first physical environment: in accordance with the determination that the first display generation component is in the first orientation during the first movement of the first display generation component, updating an appearance of the representation of the third user based on the first movement of the first display generation component in the first physical environment and based on the simulated viewpoint of the first user into the respective three-dimensional environment in which the representation of the second user is displayed; and in accordance with the determination that the first display generation component is in the second orientation during the first movement of the first display generation component, maintaining display of the representation of the third user at a second respective location in the first user interface of the first communication session as the first display generation component moves in the first physical environment.
156. The method of claim 155, wherein, in accordance with the determination that the first movement does not cause the first display generation component to transition between the first orientation and the second orientation, a three-dimensional spatial relationship between the representation of the second user and the representation of the third user in the respective three-dimensional environment is maintained independent of movement of the first display generation component in the first physical environment.
157. The method of any of claims 153-156, wherein the first computer system is in communication with a first set of cameras facing a first direction relative to the first display generation component, and in communication with a second set of cameras facing a second direction relative to the first display generation component, and the method includes: determining, in accordance with first image data received from the first set of cameras and second image data received from the second set of cameras, a position of the simulated viewpoint of the first user in the respective three-dimensional environment in which the representation of the second user is displayed.
158. The method of any of claims 153-157, wherein: a representation of the first user is displayed in a second user interface of the first communication session shown via a second display generation component used by the second user, and in accordance with a determination that the first display generation component is in the first orientation during the first movement of the first user in the first physical environment, a position of the representation of the first user in a view of the respective three-dimensional environment shown in the second user interface is updated in accordance with the first movement of the first user in the first physical environment.
159. The method of any of claims 153-158, wherein displaying, via the first display generation component, the first user interface of the first communication session between the first user and the second user includes: in accordance with a determination that the first display generation component is in the first orientation, displaying the representation of the second user in a first view of the respective three-dimensional environment, wherein the first view of the respective three-dimensional environment includes a representation of the first physical environment, and the representation of the second user is displayed at a position in the first view of the respective three-dimensional environment with a preset spatial relationship to the representation of the first physical environment.
160. The method of claim 159, wherein displaying, via the first display generation component, the first user interface of the first communication session between the first user and the second user includes: in accordance with a determination that the first display generation component is in the first orientation, displaying a representation of the first user at a respective position in the first user interface of the first communication session that does not have a three-dimensional spatial relationship to the representation of the first physical environment in the first view of the respective three-dimensional environment.
161. The method of any of claims 153-160, wherein displaying, via the first display generation component, the first user interface of the first communication session between the first user and the second user includes: in accordance with the determination that the first display generation component is in the first orientation, displaying the representation of the second user with at least a first amount of depth difference relative to the simulated viewpoint of the first user into the respective three-dimensional environment in which the representation of the second user is displayed; and in accordance with the determination that the first display generation component is in the second orientation, displaying the representation of the second user with less than the first amount of depth difference.
162. The method of any of claims 153-161, wherein the simulated viewpoint of the first user into the respective three-dimensional environment in which the representation of the second user is displayed has a position in the respective three-dimensional environment that is determined based on a vertical position of the first display generation component in the first physical environment.
163. The method of any of claims 153-162, wherein a second display generation component used by the second user displays a second user interface of the first communication session, and wherein the second user interface displays a representation of the first user that includes an indication of a device type of the first computer system.
164. The method of claim 163, wherein the indication of the device type of the first display generation component is a frame that surrounds an image of the first user.
165. The method of claim 164, wherein an orientation of the frame corresponds to the orientation of the first display generation component in the first physical environment.
166. A method, comprising: at a first computer system that is in communication with a first display generation component, a first audio output component, and one or more input devices: presenting first computer-generated content in a computer-generated three-dimensional environment, wherein the computer-generated three-dimensional environment reduces a respective user’s perception of a respective physical environment surrounding the respective user while the respective user is in a position to perceive the first computer-generated content in the computer-generated three-dimensional environment; while presenting the first computer-generated content in the computer-generated three-dimensional environment, detecting first movement of a first user that is in a position to perceive the first computer-generated content in the computer-generated three-dimensional environment, wherein the first movement of the first user is in a first physical environment of the first user; and in response to detecting the first movement of the first user in the first physical environment of the first user: in accordance with a determination that the first movement of the first user in the first physical environment of the first user changes a spatial relationship between the first user and a second user in the first physical environment of the first user from a first spatial relationship to a second spatial relationship and a determination that the change in spatial relationship from the first spatial relationship to the second spatial relationship meets first criteria, changing one or more output properties of the first computer-generated content while presenting the first computer-generated content in the computer-generated three-dimensional environment; and in accordance with the determination that the first movement of the first user in the first physical environment of the first user changes the spatial relationship between the first user and the second user in the first physical environment of the first user from the first spatial relationship to the second spatial relationship and a determination that the change in spatial relationship from the first spatial relationship to the second spatial relationship does not meet the first criteria, presenting the first computer-generated content in the computer-generated three-dimensional environment without changing the one or more output properties of the first computer-generated content.
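The gate in claim 166 turns on how the movement changes the spatial relationship between the two co-located users. The claims do not define the first criteria; the sketch below assumes one plausible reading, "moved close and turned to face the other person", with illustrative thresholds and a simplified two-dimensional pose.

```swift
import Foundation

// Position on the floor plane plus a heading angle (radians, measured from
// the +z axis, matching the atan2(dx, dz) bearing below).
struct Pose2D { var x: Double; var z: Double; var headingRadians: Double }

func changeMeetsFirstCriteria(first: Pose2D, second: Pose2D) -> Bool {
    let dx = second.x - first.x, dz = second.z - first.z
    let distance = (dx * dx + dz * dz).squareRoot()
    // Angle from the first user's heading to the direction of the second user.
    let bearing = atan2(dx, dz)
    var delta = abs(bearing - first.headingRadians)
    if delta > .pi { delta = 2 * .pi - delta }
    let facing = delta < .pi / 6       // within ~30 degrees (assumed tolerance)
    return distance < 1.5 && facing    // close enough and turned toward them
}
```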
167. The method of claim 166, wherein presenting the first computer-generated content in the computer-generated three-dimensional environment includes displaying first computer-generated visual content of the first computer-generated content in the computer-generated three-dimensional environment via the first display generation component.
168. The method of claim 167, wherein changing the one or more output properties of the first computer-generated content while presenting the first computer-generated content in the computer-generated three-dimensional environment includes reducing visual prominence of at least a first portion of the first computer-generated visual content to reveal a respective representation of the second user in a first region of a field of view provided via the first display generation component that was previously occupied by the at least the first portion of the first computer-generated visual content.
169. The method of any of claims 166-168, wherein presenting the first computer-generated content in the computer-generated three-dimensional environment includes outputting first computer-generated audio content of the first computer-generated content via the first audio output component.
170. The method of claim 169, wherein changing the one or more output properties of the first computer-generated content while presenting the first computer-generated content in the computer-generated three-dimensional environment includes reducing audio prominence of the first computer-generated audio content relative to sound from the physical environment of the first user.
171. The method of any of claims 166-170, wherein the first computer-generated content includes first computer-generated visual content and first computer-generated audio content that accompanies the first computer-generated visual content, and wherein changing the one or more output properties of the first computer-generated content while presenting the first computer-generated content in the computer-generated three-dimensional environment includes reducing visual prominence of the first computer-generated visual content relative to a representation of the second user, while maintaining audio prominence of the first computer-generated audio content relative to sound from the physical environment of the first user.
172. The method of any of claims 166-170, wherein the first computer-generated content includes first computer-generated visual content and first computer-generated audio content that accompanies the first computer-generated visual content, and wherein changing the one or more output properties of the first computer-generated content while presenting the first computer-generated content in the computer-generated three-dimensional environment includes reducing audio prominence of the first computer-generated audio content relative to sound from the physical environment of the first user, while maintaining visual prominence of the first computer-generated visual content relative to a representation of the second user.
173. The method of any of claims 166-172, wherein the first criteria require that the first movement of the first user in the first physical environment of the first user that changes the spatial relationship between the first user and the second user in the first physical environment of the first user is detected in conjunction with detection of speech in order for the first criteria to be met.
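Claim 173 conditions the criteria on movement detected "in conjunction with" speech. One plausible reading, sketched below, is a time-window join between the movement event and the most recent speech detection; the two-second window and the function are hypothetical.

```python
import time
from typing import Optional

SPEECH_WINDOW_S = 2.0  # hypothetical: speech within 2 s of the movement qualifies

def first_criteria_met(movement_time: float,
                       last_speech_time: Optional[float]) -> bool:
    # The spatial-relationship change only qualifies when accompanied by
    # speech, modeled here as speech within a short window of the movement.
    if last_speech_time is None:
        return False
    return abs(movement_time - last_speech_time) <= SPEECH_WINDOW_S

now = time.monotonic()
print(first_criteria_met(now, now - 1.0))  # True: movement accompanied by speech
print(first_criteria_met(now, None))       # False: movement alone does not qualify
```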
174. The method of any of claims 166-173, wherein the first criteria require that the first movement of the first user in the first physical environment of the first user that changes the spatial relationship between the first user and the second user in the first physical environment of the first user includes the first user turning to face the second user.
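Claim 174's "turning to face the second user" can be tested geometrically: compare the user's facing direction with the direction from the user to the second user. The tolerance cone below is a hypothetical 20-degree half-angle; the claim does not fix a value.

```python
import math

def is_facing(facing_dir: tuple[float, float, float],
              to_other: tuple[float, float, float],
              max_angle_deg: float = 20.0) -> bool:
    # True if the facing vector points at the other user within the cone.
    fx, fy, fz = facing_dir
    tx, ty, tz = to_other
    dot = fx * tx + fy * ty + fz * tz
    norm = math.hypot(fx, fy, fz) * math.hypot(tx, ty, tz)
    if norm == 0.0:
        return False
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= max_angle_deg

# The first user turns so their facing vector roughly aligns with the
# direction from their head to the second user.
print(is_facing((0.0, 0.0, -1.0), (0.1, 0.0, -2.0)))  # True (~3 degrees off)
print(is_facing((1.0, 0.0, 0.0), (0.0, 0.0, -1.0)))   # False (90 degrees off)
```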
175. The method of any of claims 166-174, wherein the first computer-generated content includes first computer-generated audio content, and wherein changing the one or more output properties of the first computer-generated content while presenting the first computer-generated content in the computer-generated three-dimensional environment includes reducing respective volumes of a first subset of frequency bands in the first computer-generated audio content without reducing respective volumes of a second subset of frequency bands in the first computer-generated audio content.
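Band-selective reduction, as in claim 175, is naturally expressed as a per-band gain table. A plausible (and purely illustrative) choice is to duck only the bands overlapping human speech so a nearby speaker stays intelligible; the band boundaries and gains below are assumptions, not claim limitations.

```python
# Hypothetical band split and gains, in dB.
BAND_GAINS_DB = {
    "low (<300 Hz)": 0.0,            # second subset: left unchanged
    "speech (300-3400 Hz)": -12.0,   # first subset: reduced
    "high (>3400 Hz)": 0.0,          # second subset: left unchanged
}

def apply_band_gain(level_db: float, band: str) -> float:
    # Add the band's gain to the content's level within that band.
    return level_db + BAND_GAINS_DB[band]

for band in BAND_GAINS_DB:
    print(band, apply_band_gain(-20.0, band))
```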
176. The method of any of claims 166-175, including: in response to detecting the first movement of the first user in the first physical environment of the first user: in accordance with the determination that the first movement of the first user in the first physical environment of the first user changes the spatial relationship between the first user and the second user in the first physical environment of the first user from the first spatial relationship to the second spatial relationship and the determination that the change in spatial relationship from the first spatial relationship to the second spatial relationship meets the first criteria, enhancing sound from the first physical environment of the first user; and in accordance with the determination that the first movement of the first user in the first physical environment of the first user changes the spatial relationship between the first user and the second user in the first physical environment of the first user from the first spatial relationship to the second spatial relationship and the determination that the change in spatial relationship from the first spatial relationship to the second spatial relationship does not meet the first criteria, forgoing enhancing the sound from the first physical environment of the first user.
177. The method of any of claims 166-176, including: while presenting the first computer-generated content in the computer-generated three-dimensional environment, detecting first audio input from the first user and detecting second audio input from the second user; and generating first virtual audio reverberations for the first audio input in accordance with simulated acoustic properties of a first three-dimensional environment and generating second virtual audio reverberations for the second audio input in accordance with the simulated acoustic properties of the first three-dimensional environment.
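Claim 177 runs both users' audio through the same "simulated acoustic properties." A real system would use designed or measured room impulse responses; the simplest stand-in is a feedback comb filter, where the delay and feedback parameters play the role of the simulated room's size and absorption. Applying identical parameters to both inputs places both voices in one simulated acoustic space. All values below are illustrative.

```python
def comb_reverb(dry: list[float], delay_samples: int, feedback: float) -> list[float]:
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay].
    Using the same (delay_samples, feedback) for every speaker's audio makes
    their reverberations consistent with one shared simulated environment."""
    out = [0.0] * len(dry)
    for n, x in enumerate(dry):
        echo = out[n - delay_samples] if n >= delay_samples else 0.0
        out[n] = x + feedback * echo
    return out

# An impulse through the shared "room": equally spaced, decaying echoes.
impulse = [1.0] + [0.0] * 15
print([round(v, 3) for v in comb_reverb(impulse, delay_samples=4, feedback=0.5)])
```

Per claim 178, these simulated properties need not match the first user's actual room; per claim 179, shared audio content from a third user's communication session can be fed through the same filter.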
178. The method of claim 177, wherein the simulated acoustic properties of the first three-dimensional environment are different from actual acoustic properties of the first physical environment of the first user.
179. The method of any of claims 177-178, wherein presenting the first computer-generated content in the computer-generated three-dimensional environment includes outputting second audio content that is shared between the first user and a third user that is different from the first user and the second user in a communication session between the first user and the third user, wherein outputting the second audio content includes generating third virtual audio reverberations for the second audio content in accordance with the simulated acoustic properties of the first three-dimensional environment.
180. The method of any of claims 166-179, wherein the first computer-generated content includes first computer-generated visual content, and wherein changing the one or more output properties of the first computer-generated content while presenting the first computer-generated content in the computer-generated three-dimensional environment includes changing one or more display properties of the first computer-generated visual content in the computer-generated three-dimensional environment.
181. The method of claim 180, wherein changing the one or more display properties of the first computer-generated visual content in the computer-generated three-dimensional environment includes increasing visual prominence of a representation of the first physical environment of the first user in the computer-generated three-dimensional environment.
182. The method of claim 180, wherein changing the one or more display properties of the first computer-generated visual content in the computer-generated three-dimensional environment includes reducing a level of immersion provided by the first computer-generated visual content in the computer-generated three-dimensional environment.
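The claims do not define discrete immersion levels, but one common way to model claim 182's "reducing a level of immersion" is as a step down an ordered scale, where each level determines how much of the view is devoted to virtual content versus the surrounding environment. The levels and mapping below are assumptions.

```python
# Hypothetical immersion levels and the fraction of the view given to content.
IMMERSION_LEVELS = {"full": 1.0, "partial": 0.5, "minimal": 0.2}
ORDER = ["full", "partial", "minimal"]

def reduce_immersion(current: str) -> str:
    # Step one level toward less immersion, clamping at the lowest level.
    i = ORDER.index(current)
    return ORDER[min(i + 1, len(ORDER) - 1)]

print(reduce_immersion("full"))     # partial
print(reduce_immersion("minimal"))  # minimal (already lowest)
```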
183. The method of any of claims 166-182, including: while presenting the first computer-generated content in the computer-generated three-dimensional environment, detecting second movement of the second user that changes the spatial relationship between the first user and the second user in the first physical environment of the first user from a third spatial relationship to a fourth spatial relationship; and in response to detecting the second movement of the second user: in accordance with a determination that the change in spatial relationship from the third spatial relationship to the fourth spatial relationship meets second criteria, changing the one or more output properties of the first computer-generated content while presenting the first computer-generated content in the computer-generated three-dimensional environment; and in accordance with a determination that the change in spatial relationship from the third spatial relationship to the fourth spatial relationship does not meet the second criteria, presenting the first computer-generated content without changing the one or more output properties of the first computer-generated content in the computer-generated three-dimensional environment.
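Claim 183 extends claim 166 to movement by the second user. If the trigger is expressed over the users' relative positions, as in the hypothetical proximity check below, it fires identically no matter which user moved.

```python
import math

def proximity_trigger(pos_a: tuple[float, float, float],
                      pos_b: tuple[float, float, float],
                      previous_distance: float,
                      threshold: float = 1.0) -> bool:
    # Depends only on relative positions, so it handles the first user moving
    # (claim 166) and the second user moving (claim 183) the same way.
    return previous_distance > threshold >= math.dist(pos_a, pos_b)

# Second user walks toward a stationary first user.
print(proximity_trigger((0, 0, 0), (0, 0, 0.8), previous_distance=2.0))  # True
```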
184. The method of any of claims 166-182, wherein the first computer-generated content includes a view of a three-dimensional environment that is shared by the first user and a third user, and wherein the view of the three-dimensional environment includes a representation of the third user that indicates a spatial positioning of the third user relative to the first user in the three-dimensional environment, the method including: in response to detecting the first movement of the first user in the first physical environment of the first user: in accordance with a determination that the first movement of the first user in the first physical environment of the first user changes a spatial relationship between a respective representation of the first user and a respective representation of the third user in the three-dimensional environment from a fifth spatial relationship to a sixth spatial relationship and a determination that the change in spatial relationship from the fifth spatial relationship to the sixth spatial relationship meets third criteria, changing the one or more output properties of the first computer-generated content while presenting the first computer-generated content in the computer-generated three-dimensional environment; and in accordance with a determination that the first movement of the first user in the first physical environment of the first user changes the spatial relationship between the respective representation of the first user and the respective representation of the third user in the three-dimensional environment from the fifth spatial relationship to the sixth spatial relationship and a determination that the change in spatial relationship from the fifth spatial relationship to the sixth spatial relationship does not meet the third criteria, presenting the first computer-generated content without changing the one or more output properties of the first computer-generated content in the computer-generated three-dimensional environment.
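In claim 184 the check runs on positions of the users' representations in the shared virtual environment rather than in the physical room: the first user's physical movement is first mapped onto their representation, then compared against the third user's representation. A sketch with hypothetical avatar coordinates and threshold:

```python
import math

def avatar_distance(avatar_a: tuple[float, float, float],
                    avatar_b: tuple[float, float, float]) -> float:
    # Distance in the shared three-dimensional environment, in virtual meters.
    return math.dist(avatar_a, avatar_b)

first_avatar_before = (0.0, 0.0, 3.0)
first_avatar_after = (0.0, 0.0, 0.9)   # after the physical movement is applied
third_avatar = (0.0, 0.0, 0.0)

THIRD_CRITERIA_M = 1.0  # hypothetical proximity threshold for the third criteria
met = (avatar_distance(first_avatar_before, third_avatar) > THIRD_CRITERIA_M
       and avatar_distance(first_avatar_after, third_avatar) <= THIRD_CRITERIA_M)
print("third criteria met:", met)  # True: the avatars came within threshold
```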
185. A computer system, comprising: a first display generation component; one or more input devices; one or more processors; and memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 1-22.
186. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system that includes a first display generation component and one or more input devices, cause the computer system to perform any of the methods of claims 1-22.
187. A graphical user interface on a computer system that includes a first display generation component, one or more input devices, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 1-22.
188. A computer system, comprising: a first display generation component; one or more input devices; and means for performing any of the methods of claims 1-22.
189. An information processing apparatus for use in a computer system that includes a first display generation component and one or more input devices, the information processing apparatus comprising: means for performing any of the methods of claims 1-22.
190. A computer system, comprising: a first display generation component; one or more input devices; one or more processors; and memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 23-33.
191. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system that includes a first display generation component and one or more input devices, cause the computer system to perform any of the methods of claims 23-33.
192. A graphical user interface on a computer system that includes a first display generation component, one or more input devices, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 23-33.
193. A computer system, comprising: a first display generation component; one or more input devices; and means for performing any of the methods of claims 23-33.
194. An information processing apparatus for use in a computer system that includes a first display generation component and one or more input devices, the information processing apparatus comprising: means for performing any of the methods of claims 23-33.
195. A computer system, comprising: a first display generation component; one or more input devices; one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 34-58.
196. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system that includes a first display generation component and one or more input devices, cause the computer system to perform any of the methods of claims 34-58.
197. A graphical user interface on a computer system that includes a first display generation component, one or more input devices, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 34-58.
198. A computer system, comprising: a first display generation component; one or more input devices; and means for performing any of the methods of claims 34-58.
199. An information processing apparatus for use in a computer system that includes a first display generation component and one or more input devices, the information processing apparatus comprising: means for performing any of the methods of claims 34-58.
200. A computer system, comprising: a first display generation component; one or more input devices; one or more processors; and memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 59-80.
201. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system that includes a first display generation component and one or more input devices, cause the computer system to perform any of the methods of claims 59-80.
202. A graphical user interface on a computer system that includes a first display generation component, one or more input devices, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 59-80.
203. A computer system, comprising: a first display generation component; one or more input devices; and means for performing any of the methods of claims 59-80.
204. An information processing apparatus for use in a computer system that includes a first display generation component and one or more input devices, the information processing apparatus comprising: means for performing any of the methods of claims 59-80.
205. A computer system, comprising: a first display generation component; one or more input devices; one or more processors; and memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 81-108.
206. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system that includes a first display generation component and one or more input devices, cause the computer system to perform any of the methods of claims 81-108.
207. A graphical user interface on a computer system that includes a first display generation component, one or more input devices, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 81-108.
208. A computer system, comprising: a first display generation component; one or more input devices; and means for performing any of the methods of claims 81-108.
209. An information processing apparatus for use in a computer system that includes a first display generation component and one or more input devices, the information processing apparatus comprising: means for performing any of the methods of claims 81-108.
210. A computer system, comprising: a first display generation component; one or more input devices; one or more processors; and memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 109-128.
211. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system that includes a first display generation component and one or more input devices, cause the computer system to perform any of the methods of claims 109-128.
212. A graphical user interface on a computer system that includes a first display generation component, one or more input devices, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 109-128.
213. A computer system, comprising: a first display generation component; one or more input devices; and means for performing any of the methods of claims 109-128.
214. An information processing apparatus for use in a computer system that includes a first display generation component and one or more input devices, the information processing apparatus comprising: means for performing any of the methods of claims 109-128.
215. A first computer system, comprising: a first display generation component; one or more input devices; one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 129-152.
216. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a first computer system that includes a first display generation component and one or more input devices, cause the computer system to perform any of the methods of claims 129-152.
217. A graphical user interface on a first computer system that includes a first display generation component, one or more input devices, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 129-152.
218. A first computer system, comprising: a first display generation component; one or more input devices; and means for performing any of the methods of claims 129-152.
219. An information processing apparatus for use in a first computer system that includes a first display generation component and one or more input devices, the information processing apparatus comprising: means for performing any of the methods of claims 129-152.
220. A first computer system, comprising: a first display generation component; one or more input devices; one or more processors; and memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 153-165.
221. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a first computer system that includes a first display generation component and one or more input devices, cause the computer system to perform any of the methods of claims 153-165.
222. A graphical user interface on a first computer system that includes a first display generation component, one or more input devices, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 153-165.
223. A first computer system, comprising: a first display generation component; one or more input devices; and means for performing any of the methods of claims 153-165.
224. An information processing apparatus for use in a first computer system that includes a first display generation component and one or more input devices, the information processing apparatus comprising: means for performing any of the methods of claims 153-165.
225. A computer system, comprising: a first display generation component; a first audio output component; one or more input devices; one or more processors; and memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 166-184.
226. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system that includes a first display generation component, a first audio output component, and one or more input devices, cause the computer system to perform any of the methods of claims 166-184.
227. A graphical user interface on a computer system that includes a first display generation component, a first audio output component, one or more input devices, memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 166-184.
228. A computer system, comprising: a first display generation component;
a first audio output component; one or more input devices; and means for performing any of the methods of claims 166-184.
229. An information processing apparatus for use in a computer system that includes a first display generation component, a first audio output component, and one or more input devices, the information processing apparatus comprising: means for performing any of the methods of claims 166-184.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21854915.2A EP4256422A1 (en) | 2021-01-04 | 2021-12-29 | Devices, methods, and graphical user interfaces for interacting with three-dimensional environments |
CN202311747041.2A CN117724630A (en) | 2021-01-04 | 2021-12-29 | Apparatus, method and graphical user interface for interacting with a three-dimensional environment |
CN202180095195.0A CN117043713A (en) | 2021-01-04 | 2021-12-29 | Apparatus, method and graphical user interface for interacting with a three-dimensional environment |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163133742P | 2021-01-04 | 2021-01-04 | |
US63/133,742 | 2021-01-04 | ||
US17/563,864 US11954242B2 (en) | 2021-01-04 | 2021-12-28 | Devices, methods, and graphical user interfaces for interacting with three-dimensional environments |
US17/563,864 | 2021-12-28 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2022147146A1 WO2022147146A1 (en) | 2022-07-07 |
WO2022147146A4 true WO2022147146A4 (en) | 2022-08-25 |
Family
ID=80445590
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2021/065537 WO2022147146A1 (en) | 2021-01-04 | 2021-12-29 | Devices, methods, and graphical user interfaces for interacting with three-dimensional environments |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2022147146A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10860096B2 (en) | 2018-09-28 | 2020-12-08 | Apple Inc. | Device control using gaze information |
US20240070299A1 (en) * | 2022-08-31 | 2024-02-29 | Youjean Cho | Revealing collaborative object using countdown timer |
US12019773B2 (en) * | 2022-08-31 | 2024-06-25 | Snap Inc. | Timelapse of generating a collaborative object |
US20240070302A1 (en) * | 2022-08-31 | 2024-02-29 | Youjean Cho | Collaborative object associated with a geographical location |
US12079395B2 (en) * | 2022-08-31 | 2024-09-03 | Snap Inc. | Scissor hand gesture for a collaborative object |
US20240094882A1 (en) * | 2022-09-21 | 2024-03-21 | Apple Inc. | Gestures for selection refinement in a three-dimensional environment |
WO2024064930A1 (en) * | 2022-09-23 | 2024-03-28 | Apple Inc. | Methods for manipulating a virtual object |
WO2024064280A1 (en) * | 2022-09-23 | 2024-03-28 | Apple Inc. | User interfaces for managing live communication sessions |
WO2024163798A1 (en) * | 2023-02-03 | 2024-08-08 | Apple Inc. | Devices, methods, and graphical user interfaces for device position adjustment |
WO2024226681A1 (en) * | 2023-04-25 | 2024-10-31 | Apple Inc. | Methods for displaying and rearranging objects in an environment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002073246A (en) * | 2000-08-25 | 2002-03-12 | Minolta Co Ltd | Video display system |
JP2012155655A (en) * | 2011-01-28 | 2012-08-16 | Sony Corp | Information processing device, notification method, and program |
WO2019067470A1 (en) * | 2017-09-29 | 2019-04-04 | Zermatt Technologies Llc | Physical boundary guardian |
2021
- 2021-12-29: PCT/US2021/065537 (WO2022147146A1, en) — active, Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2022147146A1 (en) | 2022-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022147146A4 (en) | Devices, methods, and graphical user interfaces for interacting with three-dimensional environments | |
JP7008681B2 (en) | Dynamic switching and merging of heads, gestures, and touch inputs in virtual reality | |
US11138809B2 (en) | Method and system for providing an object in virtual or semi-virtual space based on a user characteristic | |
US10788927B2 (en) | Electronic communication based on user input and determination of active execution of application for playback | |
DK179412B1 (en) | Context-Specific User Interfaces | |
NL2021036B1 (en) | Electronic touch communication | |
DK180081B1 (en) | Access to system user interfaces on an electronic device | |
EP4038472B1 (en) | Moving applications on multi-screen computing device | |
US9542010B2 (en) | System for interacting with objects in a virtual environment | |
US20220291806A1 (en) | Device, method, and graphical user interface for composing cgr files | |
DK201970533A1 (en) | Methods and user interfaces for sharing audio | |
TW201042531A (en) | Multi tasking views for small screen devices | |
KR101986781B1 (en) | Method and apparatus for accommodating display migration among a plurality of physical displays | |
JP2012048725A (en) | Portable electronic device having multi-touch input | |
TW201104555A (en) | Navigating a plurality of instantiated virtual desktops | |
WO2016099563A1 (en) | Collaboration with 3d data visualizations | |
JP4912377B2 (en) | Display device, display method, and program | |
US12086407B2 (en) | Intelligent keyboard attachment for mixed reality input | |
WO2020171907A1 (en) | Locating slicing planes or slicing volumes via hand locations | |
WO2016141597A1 (en) | Touch control method, device, terminal and graphical user interface thereof | |
US20240273838A1 (en) | System and method of augmented representation of an electronic device | |
AU2022201369A1 (en) | Methods and user interfaces for sharing audio | |
EP3791253B1 (en) | Electronic device and method for providing virtual input tool | |
EP4324192A1 (en) | Adaptive video conference user interfaces | |
WO2023049063A1 (en) | Content manipulation via a computer-generated representation of a trackpad |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21854915; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2021854915; Country of ref document: EP; Effective date: 20230705 |
NENP | Non-entry into the national phase |
Ref country code: DE |
WWE | Wipo information: entry into national phase |
Ref document number: 202180095195.0; Country of ref document: CN |