
WO2024206378A1 - Methods, architectures, apparatuses and systems for proximity-aware federated learning with interim model aggregation in future wireless - Google Patents


Info

Publication number
WO2024206378A1
Authority
WO
WIPO (PCT)
Prior art keywords
wtru
local model
information
model
aggregation
Prior art date
Application number
PCT/US2024/021596
Other languages
French (fr)
Inventor
Chonggang Wang
Xu Li
Robert Gazda
Ulises Olvera-Hernandez
Original Assignee
Interdigital Patent Holdings, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Interdigital Patent Holdings, Inc.
Publication of WO2024206378A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/098: Distributed learning, e.g. federated learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 8/00: Network data management
    • H04W 8/22: Processing or transfer of terminal data, e.g. status or physical capabilities
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 88/00: Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W 88/02: Terminal devices
    • H04W 88/04: Terminal devices adapted for relaying to or from another terminal or user

Definitions

  • the present disclosure is generally directed to the fields of communications, software and/or encoding, including, for example, to methods, architectures, apparatuses, and/or systems related to proximity-aware federated learning (FL) with interim model aggregation in wireless networks.
  • Federated learning is a framework for distributed machine learning.
  • training data may be maintained locally at multiple distributed Federated Learning Clients (FLCs), such as user devices or mobile devices.
  • a FLC may perform local training, generate local model updates, and/or send local model updates to a Federated Learning Server (FLS).
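The round just described (local training at distributed FLCs, followed by aggregation of the local model updates at the FLS) can be sketched in a few lines. The FedAvg-style sample-count weighting and the least-squares local objective below are illustrative assumptions for the sketch, not details taken from the disclosure:

```python
import numpy as np

def local_update(model, data, lr=0.1):
    """One round of local training at an FLC (sketch: a single gradient
    step on a least-squares objective; real clients run many steps)."""
    X, y = data
    grad = X.T @ (X @ model - y) / len(y)
    return model - lr * grad

def fedavg(updates, weights):
    """FLS-side aggregation: weighted average of the local model updates."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

rng = np.random.default_rng(0)
global_model = np.zeros(3)
# Training data stays local to each of four FLCs.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

for _ in range(5):  # federated rounds
    updates = [local_update(global_model, d) for d in clients]
    global_model = fedavg(updates, weights=[len(d[1]) for d in clients])
```

Weighting by local sample count is the common FedAvg choice; any other weighting supplied by the server would slot into `fedavg` the same way.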
  • An embodiment may be directed to a first wireless transmit/receive unit (WTRU) that includes circuitry, including any of a processor, memory, transmitter and/or receiver.
  • the circuitry is configured to receive, from a network node, first information indicating a request to serve as a relay node for at least one other WTRU and, based on the first information, to determine that the at least one other WTRU is in proximity of the first WTRU and/or to determine to agree to serve as the relay node for the at least one other WTRU.
  • the circuitry is configured to transmit second information, to the network node, indicating that the first WTRU agrees to serve as the relay node, to receive, from the network node, third information indicating one or more interim model aggregation instructions, to establish a direct link with the at least one other WTRU, and to receive, via the direct link, a local model update from one or more of the at least one other WTRU.
  • the circuitry is configured to aggregate, according to the interim model aggregation instructions, the received local model updates with a local model update generated at the first WTRU to generate an aggregated local model update and an associated model aggregation record, and to send the aggregated local model update and the associated model aggregation record to the network node.
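A minimal sketch of the relay-side step above, assuming the interim model aggregation instructions call for sample-count-weighted averaging; the function and record fields are hypothetical names for illustration, not identifiers defined in the disclosure:

```python
import numpy as np

def interim_aggregate(own_update, received):
    """Relay-WTRU-side interim aggregation (sketch). `own_update` is
    (update vector, local sample count); `received` maps a WTRU id to
    (local model update, sample count) obtained over the direct link.
    The relay folds these into one aggregated local model update and
    a model aggregation record naming the contributors, so the FLS
    can weight the result against non-relayed updates."""
    entries = [("relay", own_update[0], own_update[1])]
    entries += [(wtru_id, upd, n) for wtru_id, (upd, n) in received.items()]
    total = sum(n for _, _, n in entries)
    aggregated = sum((n / total) * upd for _, upd, n in entries)
    record = {
        "contributors": [wid for wid, _, _ in entries],
        "total_samples": total,
    }
    return aggregated, record

agg, rec = interim_aggregate((np.zeros(2), 10), {"wtru-b": (np.ones(2), 30)})
```

Here the relay contributes 10 of 40 samples, the relayed WTRU 30 of 40, so the aggregated update is 0.75 times the relayed update.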
  • An embodiment is directed to a method, implemented in a first wireless transmit/receive unit (WTRU).
  • the method may include receiving, from a network node, first information indicating a request to serve as a relay node for at least one other WTRU and, based on the first information, determining that the at least one other WTRU is in proximity of the first WTRU and determining to agree to serve as the relay node for the at least one other WTRU.
  • An embodiment may be directed to an apparatus comprising circuitry, including any of a processor, memory, transmitter and receiver.
  • the circuitry is configured to send first information indicating a first request to a network function to retrieve proximity information and context information associated with one or more wireless transmit/receive units (WTRUs), to receive the proximity information from the network function, to select one of the one or more WTRUs to serve as a relay node, and to send, to the selected WTRU, second information indicating a request for the selected WTRU to serve as the relay node for at least one other WTRU.
  • the circuitry may be configured to receive third information, from the selected WTRU, indicating that the selected WTRU agrees to serve as the relay node, to send, to the selected WTRU, fourth information indicating one or more interim model aggregation instructions, and to receive, from the selected WTRU, an aggregated local model update and associated model aggregation record.
  • An embodiment may be directed to a method comprising sending first information indicating a first request to a network function to retrieve proximity information and context information associated with one or more wireless transmit/receive units (WTRUs), receiving the proximity information from the network function, selecting one of the one or more WTRUs to serve as a relay node, and sending, to the selected WTRU, second information indicating a request for the selected WTRU to serve as the relay node for at least one other WTRU.
  • the method may also include receiving third information, from the selected WTRU, indicating that the selected WTRU agrees to serve as the relay node, sending, to the selected WTRU, fourth information indicating one or more interim model aggregation instructions, and receiving, from the selected WTRU, an aggregated local model update and associated model aggregation record.
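The relay-selection step in this flow can be illustrated as follows. The scoring rule (direct-link coverage first, then a context score such as remaining battery or compute) is an assumed policy for the sketch only; the disclosure leaves the selection criteria to the implementation:

```python
def select_relay(candidates, proximity, context):
    """Server/network-function-side relay selection (sketch).
    `proximity` maps a candidate WTRU id to the set of other WTRUs it
    can reach over a direct link (the retrieved proximity information);
    `context` maps it to a numeric suitability score (the retrieved
    context information). Prefer the candidate covering the most WTRUs,
    breaking ties by context score."""
    def score(wtru_id):
        return (len(proximity.get(wtru_id, ())), context.get(wtru_id, 0.0))
    return max(candidates, key=score)

chosen = select_relay(
    candidates=["wtru-a", "wtru-b"],
    proximity={"wtru-a": {"wtru-c"}, "wtru-b": {"wtru-c", "wtru-d"}},
    context={"wtru-a": 1.0, "wtru-b": 0.5},
)
```

The selected WTRU would then be sent the request to serve as the relay node, per the flow above.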
  • FIG. 1A is a system diagram illustrating an example communications system;
  • FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A;
  • FIG. 1C is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 1A;
  • FIG. 1D is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 1A;
  • FIG. 2 is a diagram illustrating an example of a federated learning (FL) process, according to an embodiment
  • FIG. 3 is a system diagram illustrating an example of FL learning in wireless networks, according to an embodiment.
  • FIG. 4 is a system diagram illustrating an example of a 5G system architecture, according to various embodiments.
  • FIG. 5 is a system diagram illustrating some problems relating to FL in wireless networks;
  • FIG. 6 is an architectural design for proximity-aware FL with interim model aggregation, according to various embodiments;
  • FIG. 7 is a signaling diagram illustrating server-controlled proximity-aware FL with interim model aggregation, according to various embodiments
  • FIG. 8 is a signaling diagram illustrating a network function-controlled proximity-aware FL with interim model aggregation, according to various embodiments
  • FIG. 9 is a signaling diagram illustrating relay-node-controlled proximity-aware FL with interim model aggregation, according to various embodiments.
  • FIG. 10 is a system diagram illustrating an example of native FL in beyond 5G systems, such as 6G, according to some embodiments.
  • FIG. 11 illustrates an example flow diagram of a method, according to one example embodiment.
  • the methods, apparatuses and systems provided herein are well-suited for communications involving both wired and wireless networks.
  • An overview of various types of wireless devices and infrastructure is provided with respect to FIGs. 1A-1D, where various elements of the network may utilize, perform, be arranged in accordance with and/or be adapted and/or configured for the methods, apparatuses and systems provided herein.
  • FIG. 1A is a system diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented.
  • the communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users.
  • the communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth.
  • the communications system 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word discrete Fourier transform spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
  • the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a radio access network (RAN) 104/113, a core network (CN) 106/115, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements.
  • Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment.
  • the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include (or be) a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like.
  • the communications systems 100 may also include a base station 114a and/or a base station 114b.
  • Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d, e.g., to facilitate access to one or more communication networks, such as the CN 106/115, the Internet 110, and/or the networks 112.
  • the base stations 114a, 114b may be any of a base transceiver station (BTS), a Node-B (NB), an eNode-B (eNB), a Home Node-B (HNB), a Home eNode-B (HeNB), a gNode-B (gNB), a NR Node-B (NR NB), a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
  • the base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc.
  • the base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum.
  • a cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors.
  • the cell associated with the base station 114a may be divided into three sectors.
  • the base station 114a may include three transceivers, i.e., one for each sector of the cell.
  • the base station 114a may employ multiple-input multiple-output (MIMO) technology and may utilize multiple transceivers for each or any sector of the cell.
  • beamforming may be used to transmit and/or receive signals in desired spatial directions.
  • the base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.).
  • the air interface 116 may be established using any suitable radio access technology (RAT).
  • the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like.
  • the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA).
  • WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+).
  • HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies.
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles.
  • the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (Wi-Fi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
  • the base station 114b in FIG. 1A may be a wireless router, Home Node-B, Home eNode-B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like.
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN).
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
  • the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR, etc.) to establish any of a small cell, picocell or femtocell.
  • the base station 114b may have a direct connection to the Internet 110.
  • the base station 114b may not be required to access the Internet 110 via the CN 106/115.
  • the RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d.
  • the data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like.
  • the CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.
  • the RAN 104/113 and/or the CN 106/115 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT.
  • the CN 106/115 may also be in communication with another RAN (not shown) employing any of a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or Wi-Fi radio technology.
  • the CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or other networks 112.
  • the PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS).
  • the Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite.
  • the networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers.
  • the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
  • Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links).
  • the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
  • FIG. 1B is a system diagram illustrating an example WTRU 102.
  • the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other elements/peripherals 138, among others.
  • the processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment.
  • the processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122.
  • the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116.
  • the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
  • the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
  • the WTRU 102 may include any number of transmit/receive elements 122.
  • the WTRU 102 may employ MIMO technology.
  • the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
  • the transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122.
  • the WTRU 102 may have multi-mode capabilities.
  • the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
  • the processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
  • the processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128.
  • the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132.
  • the non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
  • the processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102.
  • the power source 134 may be any suitable device for powering the WTRU 102.
  • the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
  • the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
  • the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
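As a toy illustration of the timing-based location determination mentioned above, propagation delays from base stations at known positions can be converted to ranges and solved by linearized least squares. This is a sketch only; real systems (e.g., OTDOA) work with time differences and many error sources this model ignores:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def locate(stations, delays):
    """Estimate a 2-D position from one-way propagation delays to base
    stations at known positions. Converts delays to ranges, then
    linearizes ||x - p_i||^2 = r_i^2 by subtracting the first
    station's equation and solves the result by least squares."""
    p = np.asarray(stations, dtype=float)
    r = C * np.asarray(delays, dtype=float)
    A = 2 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
truth = np.array([30.0, 40.0])
delays = [np.linalg.norm(truth - np.array(s)) / C for s in stations]
estimate = locate(stations, delays)  # recovers (30, 40) with exact delays
```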
  • the processor 118 may further be coupled to other elements/peripherals 138, which may include one or more software and/or hardware modules/units that provide additional features, functionality and/or wired or wireless connectivity.
  • the elements/peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (e.g., for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands-free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a virtual reality and/or augmented reality (VR/AR) device, an activity tracker, and the like.
  • the elements/peripherals 138 may include one or more sensors; the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
  • the WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the uplink (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent and/or simultaneous.
  • the full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118).
  • the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the uplink (e.g., for transmission) or the downlink (e.g., for reception)).
  • FIG. 1C is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment.
  • the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, and 102c over the air interface 116.
  • the RAN 104 may also be in communication with the CN 106.
  • the RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment.
  • the eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the eNode-Bs 160a, 160b, 160c may implement MIMO technology.
  • the eNode-B 160a for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102a.
  • Each of the eNode-Bs 160a, 160b, and 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink (UL) and/or downlink (DL), and the like. As shown in FIG. 1C, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.
  • the CN 106 shown in FIG. 1C may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (PGW) 166. While each of the foregoing elements are depicted as part of the CN 106, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the CN operator.
  • the MME 162 may be connected to each of the eNode-Bs 160a, 160b, and 160c in the RAN 104 via an S1 interface and may serve as a control node.
  • the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like.
  • the MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
  • the SGW 164 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via the S1 interface.
  • the SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c.
  • the SGW 164 may perform other functions, such as anchoring user planes during inter-eNode-B handovers, triggering paging when DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
  • the SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the CN 106 may facilitate communications with other networks. For example, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.
  • the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108.
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
  • although the WTRU is described in FIGs. 1A-1D as a wireless terminal, it is contemplated that in certain representative embodiments such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
  • the other network 112 may be a WLAN.
  • a WLAN in infrastructure basic service set (BSS) mode may have an access point (AP) for the BSS and one or more stations (STAs) associated with the AP.
  • the AP may have an access or an interface to a distribution system (DS) or another type of wired/wireless network that carries traffic into and/or out of the BSS.
  • Traffic to STAs that originates from outside the BSS may arrive through the AP and may be delivered to the STAs.
  • Traffic originating from STAs to destinations outside the BSS may be sent to the AP to be delivered to respective destinations.
  • Traffic between STAs within the BSS may be sent through the AP, for example, where the source STA may send traffic to the AP and the AP may deliver the traffic to the destination STA.
  • the traffic between STAs within a BSS may be considered and/or referred to as peer-to-peer traffic.
  • the peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS).
  • the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS).
  • a WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other.
  • the IBSS mode of communication may sometimes be referred to herein as an "ad-hoc" mode of communication.
  • the AP may transmit a beacon on a fixed channel, such as a primary channel.
  • the primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width via signaling.
  • the primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP.
  • Carrier sense multiple access with collision avoidance (CSMA/CA) may be implemented, for example, in 802.11 systems.
  • the STAs (e.g., every STA, including the AP) may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off.
  • One STA (e.g., only one station) may transmit at any given time in a given BSS.
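The contention behavior described above can be sketched in Python. This is an illustrative model of CSMA/CA with binary exponential backoff; the `contention_step` helper and its parameters are assumptions for illustration, not text from the 802.11 specification:

```python
import random

# Illustrative (non-normative) sketch of CSMA/CA contention on the primary
# channel: a STA that senses the channel busy defers for a random number of
# slots drawn from its contention window (CW), and the window doubles (up to
# a cap) after each deferral.
def contention_step(channel_busy, cw, cw_max=1024):
    if not channel_busy:
        return "transmit", cw  # channel idle: the STA may transmit now
    backoff_slots = random.randint(0, cw - 1)
    new_cw = min(2 * cw, cw_max)  # binary exponential backoff
    return f"defer {backoff_slots} slots", new_cw

action, cw = contention_step(channel_busy=False, cw=16)
print(action)
```

Because only the STA that wins contention transmits, at most one STA transmits at any given time in a given BSS.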
  • High throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or non-adjacent 20 MHz channel to form a 40 MHz wide channel.
  • VHT STAs may support 20 MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels.
  • the 40 MHz, and/or 80 MHz, channels may be formed by combining contiguous 20 MHz channels.
  • a 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two non-contiguous 80 MHz channels, which may be referred to as an 80+80 configuration.
  • the data, after channel encoding may be passed through a segment parser that may divide the data into two streams.
  • Inverse fast Fourier transform (IFFT) processing and time domain processing may be done on each stream separately.
  • the streams may be mapped on to the two 80 MHz channels, and the data may be transmitted by a transmitting STA.
  • the above-described operation for the 80+80 configuration may be reversed, and the combined data may be sent to a medium access control (MAC) layer, entity, etc.
  • Sub 1 GHz modes of operation are supported by 802.11af and 802.11ah.
  • the channel operating bandwidths, and carriers, are reduced in 802.11af and 802.11ah relative to those used in 802.11n and 802.11ac.
  • 802.11af supports 5 MHz, 10 MHz and 20 MHz bandwidths in the TV white space (TVWS) spectrum.
  • 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum. According to a representative embodiment, 802.11ah may support meter type control/machine-type communications (MTC).
  • MTC devices may have certain capabilities, for example, limited capabilities including support for (e.g., only support for) certain and/or limited bandwidths.
  • the MTC devices may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
  • WLAN systems which may support multiple channels, and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, include a channel which may be designated as the primary channel.
  • the primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS.
  • the bandwidth of the primary channel may be set and/or limited by a STA, from among all STAs in operating in a BSS, which supports the smallest bandwidth operating mode.
  • the primary channel may be 1 MHz wide for STAs (e.g., MTC type devices) that support (e.g., only support) a 1 MHz mode, even if the AP and other STAs in the BSS support 2 MHz, 4 MHz, 8 MHz, 16 MHz, and/or other channel bandwidth operating modes.
  • Carrier sensing and/or network allocation vector (NAV) settings may depend on the status of the primary channel. If the primary channel is busy, for example, due to a STA (which supports only a 1 MHz operating mode) transmitting to the AP, the entire available frequency bands may be considered busy even though a majority of the frequency bands remain idle and available.
  • in the United States, the available frequency bands which may be used by 802.11ah are from 902 MHz to 928 MHz. In Korea, the available frequency bands are from 917.5 MHz to 923.5 MHz. In Japan, the available frequency bands are from 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11ah is 6 MHz to 26 MHz depending on the country code.
  • FIG. ID is a system diagram illustrating the RAN 113 and the CN 115 according to an embodiment.
  • the RAN 113 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 113 may also be in communication with the CN 115.
  • the RAN 113 may include gNBs 180a, 180b, 180c, though it will be appreciated that the RAN 113 may include any number of gNBs while remaining consistent with an embodiment.
  • the gNBs 180a, 180b, 180c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the gNBs 180a, 180b, 180c may implement MIMO technology.
  • gNBs 180a, 180b may utilize beamforming to transmit signals to and/or receive signals from the WTRUs 102a, 102b, 102c.
  • the gNB 180a may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • the gNBs 180a, 180b, 180c may implement carrier aggregation technology.
  • the gNB 180a may transmit multiple component carriers to the WTRU 102a (not shown). A subset of these component carriers may be on unlicensed spectrum while the remaining component carriers may be on licensed spectrum.
  • the gNBs 180a, 180b, 180c may implement Coordinated Multi-Point (CoMP) technology.
  • WTRU 102a may receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum.
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., including a varying number of OFDM symbols and/or lasting varying lengths of absolute time).
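The scalable numerology described above follows the pattern standardized for NR, where subcarrier spacing is 15 kHz scaled by a power of two and slot duration shrinks accordingly. A minimal sketch (the helper name is illustrative):

```python
# Sketch of NR scalable numerology (per 3GPP TS 38.211): subcarrier spacing
# is 15 kHz * 2**mu, and a 1 ms subframe holds 2**mu slots of 14 OFDM
# symbols each, so slot duration shrinks as mu grows.
def numerology(mu):
    scs_khz = 15 * (2 ** mu)           # subcarrier spacing in kHz
    slots_per_subframe = 2 ** mu       # slots per 1 ms subframe
    slot_ms = 1.0 / slots_per_subframe # slot duration in ms
    return scs_khz, slot_ms

for mu in range(4):
    scs, slot = numerology(mu)
    print(f"mu={mu}: {scs} kHz SCS, {slot} ms slot")
```

This is why OFDM symbol spacing, subcarrier spacing, and TTI length can vary per transmission, per cell, or per portion of the spectrum.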
  • the gNBs 180a, 180b, 180c may be configured to communicate with the WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c without also accessing other RANs (e.g., such as eNode-Bs 160a, 160b, 160c).
  • WTRUs 102a, 102b, 102c may utilize one or more of gNBs 180a, 180b, 180c as a mobility anchor point.
  • WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band.
  • WTRUs 102a, 102b, 102c may communicate with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN such as eNode-Bs 160a, 160b, 160c.
  • WTRUs 102a, 102b, 102c may implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously.
  • eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
  • Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, dual connectivity, interworking between NR and E-UTRA, routing of user plane data towards user plane functions (UPFs) 184a, 184b, routing of control plane information towards access and mobility management functions (AMFs) 182a, 182b, and the like. As shown in FIG. ID, the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
  • the CN 115 shown in FIG. ID may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one session management function (SMF) 183a, 183b, and at least one Data Network (DN) 185a, 185b. While each of the foregoing elements are depicted as part of the CN 115, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
  • the AMF 182a, 182b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N2 interface and may serve as a control node.
  • the AMF 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, support for network slicing (e.g., handling of different protocol data unit (PDU) sessions with different requirements), selecting a particular SMF 183a, 183b, management of the registration area, termination of NAS signaling, mobility management, and the like.
  • Network slicing may be used by the AMF 182a, 182b, e.g., to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by the WTRUs 102a, 102b, 102c.
  • different network slices may be established for different use cases such as services relying on ultra-reliable low latency (URLLC) access, services relying on enhanced massive mobile broadband (eMBB) access, services for MTC access, and/or the like.
  • the AMF 182a, 182b may provide a control plane function for switching between the RAN 113 and other RANs (not shown) that employ other radio technologies, such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
  • the SMF 183a, 183b may be connected to an AMF 182a, 182b in the CN 115 via an N11 interface.
  • the SMF 183a, 183b may also be connected to a UPF 184a, 184b in the CN 115 via an N4 interface.
  • the SMF 183a, 183b may select and control the UPF 184a, 184b and configure the routing of traffic through the UPF 184a, 184b.
  • the SMF 183a, 183b may perform other functions, such as managing and allocating UE IP addresses, managing PDU sessions, controlling policy enforcement and QoS, providing downlink data notifications, and the like.
  • a PDU session type may be IP-based, non-IP based, Ethernet-based, and the like.
  • the UPF 184a, 184b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, e.g., to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the UPF 184a, 184b may perform other functions, such as routing and forwarding packets, enforcing user plane policies, supporting multihomed PDU sessions, handling user plane QoS, buffering downlink packets, providing mobility anchoring, and the like.
  • the CN 115 may facilitate communications with other networks.
  • the CN 115 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 115 and the PSTN 108.
  • the CN 115 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
  • the WTRUs 102a, 102b, 102c may be connected to a local Data Network (DN) 185a, 185b through the UPF 184a, 184b via the N3 interface to the UPF 184a, 184b and an N6 interface between the UPF 184a, 184b and the DN 185a, 185b.
  • one or more, or all, of the functions described herein with regard to any of: WTRUs 102a-d, base stations 114a- b, eNode-Bs 160a-c, MME 162, SGW 164, PGW 166, gNBs 180a-c, AMFs 182a-b, UPFs 184a- b, SMFs 183a-b, DNs 185a-b, and/or any other element(s)/device(s) described herein, may be performed by one or more emulation elements/devices (not shown).
  • the emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein. For example, the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
  • the emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment.
  • the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network.
  • the one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
  • the one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network.
  • the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components.
  • the one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
  • RF circuitry e.g., which may include one or more antennas
  • Embodiments disclosed herein are representative and do not limit the applicability of the apparatus, procedures, functions and/or methods to any particular wireless technology, any particular communication technology and/or other technologies.
  • the term network in this disclosure may generally refer to one or more base stations or gNBs or other network entity which in turn may be associated with one or more Transmission/Reception Points (TRPs), or to any other node in the radio access network.
  • the terms base station, serving base station, and gNB may be used interchangeably to designate any network element such as, e.g., a network element acting as a serving base station.
  • Embodiments described herein are not limited to gNBs and are applicable to any other type of base stations.
  • Federated Learning is a framework for distributed machine learning.
  • training data is maintained locally at multiple distributed Federated Learning Clients (FLCs) (e.g., mobile devices).
  • Each FLC performs local training (e.g., deep learning), generates local model updates, and sends local model updates to a Federated Learning Server (FLS) which could be an application server function or a network function in the cloud or edge, for example.
  • the FLS aggregates local model updates received from FLCs and generates global model updates, which will be sent to the participating FLCs for the next training round.
  • Some advantages of federated learning may include: (1) improved data privacy-preservation since training data stays at FLCs; (2) reduced communication overhead since it is not required to collect/transmit training data to a central entity; and (3) improved learning speed since model training now leverages distributed computation resources at FLCs.
  • FL involves the transmission of model updates between the FLS and FLCs, which introduces additional communication overhead compared to centralized machine learning.
  • FL inherits some potential security issues and threats such as data poisoning and model poisoning attacks.
  • FIG. 2 illustrates an example of the general federated learning process, according to an embodiment.
  • the FLS and FLCs may jointly take the illustrated steps to perform a FL task.
  • the FLS may select a set of FLCs to participate in a FL task.
  • the FLS may configure the FL task to each selected FLC.
  • the FLS may send an initial global model to each selected FLC.
  • one or more of the FLCs (e.g., each FLC) may independently train the global model based on the received initial global model and its local data.
  • one or more of the FLCs may generate a local model update and send it to the FLS.
  • the FLS may receive local model updates from one or more of the FLCs (e.g., all FLCs), aggregate them, and generate a new global model update.
  • the FLS may need to wait to receive local model updates from all FLCs before performing the model aggregation (i.e., synchronous FL) or the FLS may start the aggregation after receiving the local model updates from some of FLCs (i.e., asynchronous FL).
  • the FLS may (re)select some new FLCs for next training round.
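The round structure above can be sketched as a FedAvg-style weighted average. All names (`local_train`, `fedavg`) and the toy one-dimensional "training" rule are illustrative assumptions, not part of this disclosure; a real FLC would run local SGD on a neural network:

```python
# Hedged sketch of one synchronous FL round: the FLS sends the global model
# to the selected FLCs, each FLC trains locally and returns a local update,
# and the FLS aggregates the updates weighted by local sample counts.
def local_train(global_model, local_data):
    # stand-in for local training: nudge each weight toward the local mean
    mean = sum(local_data) / len(local_data)
    return [w + 0.1 * (mean - w) for w in global_model]

def fedavg(updates, sample_counts):
    # aggregate local updates, weighted by each FLC's sample count
    total = sum(sample_counts)
    return [sum(u[i] * n for u, n in zip(updates, sample_counts)) / total
            for i in range(len(updates[0]))]

global_model = [0.0, 0.0]
clients = {"FLC1": [1.0, 2.0], "FLC2": [3.0], "FLC3": [5.0, 5.0, 5.0]}
# each selected FLC independently trains on its local data (steps 4-5)
updates = [local_train(global_model, data) for data in clients.values()]
# synchronous FL: the FLS waits for all local updates before aggregating
global_model = fedavg(updates, [len(d) for d in clients.values()])
print(global_model)
```

In the asynchronous variant, `fedavg` would instead be invoked as soon as some subset of the updates has arrived.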
  • FL can be leveraged in wireless networks.
  • the FLS can be deployed in the core and/or edge, while FLCs may be end devices and/or UEs.
  • FIG. 3 illustrates an example FL application for wireless networks, where the UEs (e.g., each UE in this example) hosts an FLC, which collaboratively participates in training a global model.
  • FL may be used for spectrum management.
  • the UEs (i.e., UE-1, UE-2, UE-3 and UE-4) may each host an FLC to generate a local model update; local model updates may be sent to an edge server, which has an FLS mainly responsible for aggregating local model updates from UEs to generate a global model update.
  • the global model update may be sent to one or more of the UEs (e.g., all UEs) to continue the next training round until the global model converges. Then, the converged final global model may be transmitted to one or more of the UEs, which can use the final global model to manage their spectrum access.
  • the 5G system architecture includes one or more UEs, a Radio Access Network (RAN), and a Core Network [1].
  • one of the design principles for the 5G system architecture is that it is service-centric or service-based.
  • a 5G Core Network may contain a variety of network functions, which work together to fulfill and provide needed services to the RAN, UEs, and Application Server s/Service Providers.
  • a network function can access other network functions in request/response mode or subscription/notification mode. Before two network functions interact with each other, they first need to register with the Network Repository Function (NRF) so that they can discover each other via the NRF.
  • the Access and Mobility Management Function (AMF) is dedicated to managing a UE’s access to the 5G system and its mobility.
  • the Session Management Function (SMF) is responsible for establishing sessions between a UE and the 5G core network.
  • other control plane functions include the Authentication Server Function (AUSF) and the Policy Control Function (PCF).
  • the PCF provides policy rules for other control plane network functions and UEs; the PCF assigns an identifier for each created policy rule, which other control plane network functions and UEs use to refer to the corresponding policy rule.
  • the User Plane Function (UPF) is the only core network function in the data plane; it facilitates monitoring, managing, controlling, and redirecting user plane traffic flows, such as between a UE and an Application Server (AS).
  • the Network Exposure Function (NEF) enables access to 5G control plane functions for entities, such as network applications and ASs, which are outside of the 5G System (5GS) and not in the same trusted domain.
  • the 5G core network also provides data storage and analytics services through functions such as Unified Data Management (UDM), Unified Data Repository (UDR), Unstructured Data Storage Function (UDSF), and the Network Data Analytics Function (NWDAF).
  • Another critical feature of 5G system is network slicing, which is facilitated by Network Slice Selection Function (NSSF).
  • 3GPP TS 23.288 [2] defines stage-2 architecture enhancements for the 5GS to support network data analytics services via the Network Data Analytics Function (NWDAF), a network function in the 5G core network.
  • multiple NWDAF instances could be deployed to edge networks in future wireless systems, such as 6G.
  • the NWDAF provides a set of AI-related functionalities and services, some of which include: (1) data collection based on subscription to events of other network and/or application functions; (2) retrieval of data and information from other network functions; and (3) provision of on-demand data analytics to consumers (i.e., network and/or application functions).
  • the services provided by NWDAF can be exposed to and leveraged by other network functions in 5G core network and application functions (i.e., application servers).
  • 3GPP TR 23.700-80 [4] describes key issues and solutions for supporting AI/ML-based services in 5GS. The following seven key issues have been defined in 3GPP TR 23.700-80: monitoring of network resource utilization for support of application AI/ML operations; 5GC information exposure to UE; 5GC information exposure to authorized 3rd party for application layer AI/ML operation; enhancing external parameter provisioning; 5GC enhancements to enable application AI/ML traffic transport; Quality of Service (QoS) and policy enhancements; and 5GS assistance to federated learning operation.
  • 3GPP TS 23.304 V17.4.0 (2022-09) [6] defines an architecture for Proximity based Services (ProSe), where one UE acting as a relay (i.e., a UE-to-Network relay or UE-to-NW relay) can connect other remote UEs in proximity to the network. In other words, a remote UE leverages the UE-to-NW relay (another UE) to access the 5GS.
  • 5GS ProSe functions defined in [6] include 5G ProSe direct discovery, 5G direct communication, and 5G ProSe UE-to-Network Relay.
  • 5G ProSe direct discovery describes the process for nearby UEs (remote UEs and UE-to-NW relays) to use direct radio transmissions to discover each other.
  • 5G ProSe direct communication refers to the process where multiple UEs in proximity communicate with each other directly without going through any other network nodes (e.g., a base station).
  • 5G ProSe UE-to-NW relay provides functions to support connecting one or multiple remote UEs to the network via a UE-to-Network relay.
  • a UE-to-UE relay is a 5G ProSe-enabled UE that provides functions to and connects a remote/end UE to another remote/end UE.
  • a typical FL deployment in future wireless networks may include “an FLS at the edge/core network” and “FLCs at UEs”.
  • FIG. 5 illustrates an example of the communication- related issues that may arise in this FL over wireless deployment.
  • the wireless connectivity between an FLC (e.g., FLC2, FLC3) and the FLS may have too limited capacity and might not support timely transmission of a local model update from this FLC to the FLS, while other nearby FLCs (e.g., FLC1) may have sufficient connectivity to the FLS.
  • UEs as FLCs may lose uplink connectivity to the FLS, while other FLCs may still have connectivity to the FLS.
  • An FLC may have limited residual energy and might not be able to finish transmitting the local model update directly to the FLS.
  • UEs within the same proximity could directly communicate with each other, which can in turn be leveraged to improve the FL process.
  • one UE/FLC could help to relay local model updates from another UE/FLC to the FLS.
  • certain specific technical issues need to be solved. These issues may include that, during FL training, each FLC needs to send the new local model update to the FLS repeatedly after completing each local training round, which causes high communication overhead especially when the number of FLCs and/or the number of required local training rounds are large.
  • various embodiments provide apparatuses, systems, architectures and/or methods of proximity-aware interim model aggregation.
  • a FLC may send its local model updates to a nearby FLC, e.g., over a direct link.
  • the nearby FLC may act as a relay node and can receive local model updates from multiple other FLCs, aggregate the received local model updates optionally with its own local model update according to pre-configured interim model aggregation instructions, generate aggregated local model updates and aggregation records, and/or forward the aggregated local model updates and the aggregation records to the FLS.
  • FIG. 6 illustrates an example of the architectural design for proximity-aware FL with interim model aggregation, according to various embodiments.
  • an existing FLC (FLC1/UE1) may act as a UE-to-NW relay for receiving local model updates from other nearby FLCs (e.g., FLC2/UE2), and aggregating those local model updates (which may be referred to as interim model aggregation). Then, this existing FLC (FLC1/UE1) may forward the aggregated local model updates to the FLS.
  • such interim model aggregation can reduce uplink traffic from FLCs to the FLS, for example, from (n+1) * size-of-model-update to 1 * size-of-model-update, where n is the number of other FLCs that are relayed by this existing FLC.
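A minimal sketch of interim aggregation at the relay FLC, assuming sample-count-weighted averaging and an illustrative aggregation-record layout (the helper name and record fields are assumptions, not formats defined by this disclosure):

```python
# Hedged sketch of interim model aggregation at a relay FLC (FLC1): it
# combines n received local model updates with its own local update, keeps
# a record of which FLCs (and how many samples) contributed, and uploads a
# single aggregate to the FLS instead of n+1 separate model updates.
def interim_aggregate(own_update, own_count, received):
    # received: list of (flc_id, update_vector, sample_count) tuples
    weighted = [(own_update, own_count)] + [(u, n) for _, u, n in received]
    total = sum(n for _, n in weighted)
    agg = [sum(u[i] * n for u, n in weighted) / total
           for i in range(len(own_update))]
    record = {                       # aggregation record sent alongside agg
        "aggregator": "FLC1",
        "members": ["FLC1"] + [fid for fid, _, _ in received],
        "total_samples": total,
    }
    return agg, record

agg, record = interim_aggregate([1.0, 1.0], 10,
                                [("FLC2", [3.0, 3.0], 10),
                                 ("FLC3", [5.0, 5.0], 20)])
print(agg, record["members"])
```

The aggregation record lets the FLS know which FLCs the single uploaded model represents, so the final aggregation can weight it correctly.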
  • UE1 may be a UE-to-UE relay.
  • the FLS can select and configure FLC1/UE1 as a UE-to-NW relay for other FLCs (e.g., FLC2/UE2) within a proximity; for this purpose, the FLS may need to check proximity information about existing FLCs/UEs from the Proximity Management Function (PMF).
  • FLC1/UE1 can directly request to become a UE-to-NW relay (or a UE-to-UE relay) for other FLCs.
  • FLC2/UE2 may request the FLS to select FLC1/UE1 as its relay node.
  • a Proximity Management Function (PMF) can establish and configure a computation-aware relay relationship between FLC1 and other FLCs. Computation-aware relay enables other FLCs to not only request existing communication-focused ProSe relay services from FLC1 but also request computation-oriented services (i.e., interim model aggregation) from FLC1.
  • the PMF could be a ProSe Function, a 5G Direct Discovery Name Management Function (DDNMF), a Policy Control Function (PCF), an Access and Mobility Function (AMF), another existing network function, a new network function, and/or a combination of those functions.
  • other FLCs (e.g., FLC2) may send their local model update over a direct link to FLC1 when they generate a new local model update.
  • FLC1 may receive local model updates from other FLCs (e.g., FLC2) and may optionally aggregate them with its own local model update to generate an aggregated local model update. Then, FLC1 may forward the aggregated local model update to the FLS.
  • FLC1 might not perform local training but may just aggregate local model updates from other FLCs and forward the aggregated model update to the FLS.
  • FIG. 7 illustrates an example signaling diagram of a procedure for network node or server- controlled proximity-aware FL interim model aggregation, according to various embodiments.
  • the network node or server may be a FLS or an application server, for example.
  • the network node or server is depicted as an FLS; however, it should be understood that this is provided as one example and that other types of nodes or servers may also be used.
  • FIG. 7 is provided as one example of a method or procedure according to some embodiments, and that various modifications or changes may be made while remaining within the scope of example embodiments of the present disclosure. For example, one or more of the steps or procedures depicted in the example of FIG. 7 may be performed in a different order from that which is illustrated, may be omitted, and/or may be combined with one or more steps or procedures discussed elsewhere herein.
  • an existing FL task (e.g., FL-Task-A) is in progress between the FLS and a set of FLCs.
  • the FLS may select FLC1 as a relay node for two other FLCs (i.e., FLC2 and FLC3); note that FLC1 could be a relay node for more than two FLCs (not illustrated in the example of FIG. 7).
  • FLC2 and FLC3 may send their local model updates to FLC1 instead of sending them to the FLS.
  • FLC1 may aggregate local model updates received from FLC2 and FLC3 with its own local model update to generate an aggregated local model update, which is referred to as interim model aggregation.
  • FLC1 may send the aggregated local model update to the FLS, where the final model aggregation will be performed.
  • the FLS receives (e.g., only receives) one model (i.e., the aggregated local model update) from FLC1, instead of three local model updates respectively from FLC1, FLC2, and FLC3; thus, the uplink traffic from FLCs to the FLS is greatly reduced.
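The final model aggregation at the FLS can be sketched as another weighted average, assuming each interim aggregate is weighted by the combined sample count carried in its aggregation record; this weighting convention and the numbers below are illustrative assumptions:

```python
# Hedged sketch of final aggregation at the FLS: each incoming item is
# either a plain local update (weight = its own sample count) or an interim
# aggregate whose record carries the combined sample count, so the FLS can
# weight it as if the member FLCs' updates had arrived individually.
def final_aggregate(items):
    # items: list of (update_vector, effective_sample_count) pairs
    total = sum(n for _, n in items)
    dim = len(items[0][0])
    return [sum(u[i] * n for u, n in items) / total for i in range(dim)]

# one interim aggregate from FLC1 (covering FLC1-FLC3, 40 samples total)
# plus a direct local update from a hypothetical FLC4 (10 samples)
new_global = final_aggregate([([3.5, 3.5], 40), ([1.5, 1.5], 10)])
print(new_global)
```

Because the weighting uses the recorded sample counts, the result equals what the FLS would have computed from the three individual updates, while only one model traverses the uplink.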
  • the procedures 701 to 712 in FIG. 7 may also be performed before a FL task is executed; in other words, 701-712 of FIG. 7 can be used to determine the relay FLC and install interim model aggregation instructions to the relay FLC before the FL task is installed to each FLC.
  • UE1 in the example of FIG. 7 will be a UE-to-UE relay, but the same procedure depicted in the example of FIG. 7 may apply.
  • the FLS may send a request to a network element, such as a Proximity Management Function (PMF), to retrieve proximity information and UE context information about some existing FLCs.
  • This request may serve one or more of the following purposes:
  • this request may contain (e.g., may only contain) the identifier of UE1;
  • this request may be used to retrieve the proximity and UE context information about those FLCs (e.g., all of those FLCs), which are target UEs. As such, this request may contain the identifiers of all those FLCs (e.g., FLCI, FLC2, and FLC3);
  • this request may contain the identifier of any number of existing FLCs as selected by the FLS, which are target UEs; and/or
  • the FLS may retrieve the proximity information about a specific type of UEs in a region (e.g., within a building, on a segment of a highway, etc.), which are target UEs. Then, this request may contain the region information and the type of UEs (e.g., robots, vehicles, smart phones, etc.).
• the request in step 701 may also contain the identifier of the FLS and the identifier of the existing FL task (e.g., FL-Task-A) so that the PMF can use them to verify whether the FLS has the access rights to retrieve proximity information about existing FLCs.
  • either FLC2 or FLC3 may send a request to the FLS asking the FLS to select a relay node with interim model aggregation function for them; thus, this request can trigger the FLS to start step 701.
  • the PMF may process the request from step 701. If the FLS is allowed to retrieve the proximity information of the target UEs, the PMF may look up corresponding proximity information and may return it in a response. The PMF may send the response to the FLS. In general, the response may contain the proximity information and UE context information about target UEs as indicated in step 701 and their identifiers. The PMF may return and expose the following proximity information to the FLS:
  • UE1/FLC1, UE2/FLC2, and/or UE3/FLC3 are currently within a proximity and could reach each other directly;
• UE1/FLC1, UE2/FLC2, and/or UE3/FLC3 have been authorized to use ProSe services.
  • the FLS may preselect FLC1/UE1 as the relay node (e.g., if FLC1/UE1 is in the proximity of both UE2 and UE3).
• the FLS may send a message to FLC1 to request that FLC1 serve as a relay node for FLC2 and FLC3.
• This message may contain the identifier of the existing FL task (e.g., FL-Task-A), the identifiers of FLC2/UE2 and FLC3/UE3, and/or the identifier of the FLS.
• One or more of the following parameters may also be contained in the message for FLC1 to know or estimate how long it needs to act as a relay node: Relaying Time Window, which may indicate the start time and end time of the relay services that FLC1 will provide to FLC2 and FLC3; Current Global Model Accuracy, which indicates the accuracy of the current global model; and/or Remaining Training Rounds, which indicates a number of remaining training rounds to be completed.
• the FLC1 may check if FLC2 and FLC3 are in its proximity. For example, FLC1 may broadcast a short message containing the identifiers of FLC2 and FLC3 over the local direct link; when FLC2 and FLC3 receive the short message, they may each send, and FLC1 may receive, an acknowledgement containing their identifier and indicating their reachability. Alternatively, if FLC1 does not receive an acknowledgement, FLC1 may determine that it cannot reach FLC2 and/or FLC3; as a result, FLC1 may simply include a message such as “Cannot reach FLC2 and/or FLC3” in the response to the FLS depicted at 707.
• FLC1 may determine if it agrees to be a relay node for FLC2 and FLC3. As a relay node, FLC1 not only may need to receive and/or store local model updates from FLC2 and FLC3, but may also need to aggregate them. As a result, FLC1 may need to spend both computation and storage resources for processing local model updates from FLC2 and FLC3. From the message received at 704, FLC1 may know the FL task and can estimate the size of a local model update (i.e., the required storage resource, processing CPU load, processing time, etc.).
• From the message received at 704 (e.g., Relaying Time Window or Current Global Model Accuracy), FLC1 can also estimate the required computation resource. To make the correct decision, FLC1 may also consider the extra energy consumption from providing relay services to FLC2 and FLC3. Based on one or more of these metrics (e.g., the required storage resource, the required computation resource, the required energy consumption, etc.) that may result from providing relaying services and interim model aggregation to other FLCs (e.g., FLC2 and FLC3), FLC1 may agree or refuse to be a relay node for the other FLCs (e.g., FLC2 and FLC3), for example, based on its local policies. As an example, if the required computation resource exceeds a threshold or what FLC1 can afford, FLC1 may reject being a relay node; as another example, if each of those metrics is below a threshold or FLC1 can afford it, FLC1 may agree to be a relay node and provide interim model aggregation.
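The threshold-based accept/reject decision described above can be sketched as follows. This is an illustrative sketch only, under the assumption that FLC1 compares each estimated cost metric against a locally configured budget; the function name, metric names, and numbers are hypothetical, not part of the claimed procedure.

```python
# Hypothetical sketch of FLC1's relay-acceptance decision (step 706): FLC1
# agrees to act as a relay node only if every estimated cost metric stays
# within its local budget. All field names and numbers are illustrative.

def accept_relay_request(required: dict, budget: dict) -> bool:
    """Return True if all required metrics fit within the local budget."""
    for metric in ("storage_bytes", "compute_ops", "energy_mj"):
        if required.get(metric, 0) > budget.get(metric, float("inf")):
            return False  # one metric over budget -> reject (step 707)
    return True

budget = {"storage_bytes": 10_000_000, "compute_ops": 5_000_000, "energy_mj": 200}
fits = accept_relay_request(
    {"storage_bytes": 4_000_000, "compute_ops": 1_000_000, "energy_mj": 50}, budget)
too_big = accept_relay_request({"storage_bytes": 20_000_000}, budget)
```

Other local policies (e.g., weighted combinations of the metrics) could replace the simple per-metric threshold without changing the surrounding procedure.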
• FLC1 may send a response to the FLS containing the decision made in step 706.
• this response may contain a list of FLCs that FLC1 agreed to be a relay node for and provide interim model aggregation to.
  • this response may simply contain “a rejection”; in this case, the procedure may end without the execution of the following steps.
  • the FLS may send a confirmation to FLC1.
  • the FLS may select a subset from the list of those FLCs as contained in the response at 707; the identifiers of selected FLCs in the subset may be contained or indicated in the confirmation message sent at 708. In this example, for purposes of illustration, it may be assumed that FLC2 and FLC3 are selected FLCs in the subset.
• the confirmation sent at 708 may also contain one (or multiple non-conflicting) interim model aggregation instructions such as one or more of the following example instructions: a) One example of interim model aggregation instruction: treat local model updates from FLC2 and FLC3 equally; aggregate them without considering FLC1’s local model update or without FLC1’s local training; send the aggregated local model update to the FLS. b) Another example of interim model aggregation instruction: treat local model updates from FLC1, FLC2, and FLC3 equally; aggregate them all together; send the aggregated local model update to the FLS.
• Another example of interim model aggregation instruction: treat local model updates from FLC1, FLC2, and FLC3 differently (e.g., assign different weights to each: w1 to FLC1, w2 to FLC2, w3 to FLC3); aggregate local model updates from FLC1, FLC2, and FLC3 according to their weights; send the aggregated local model update to the FLS.
• Another example of interim model aggregation instruction: once there are two local model updates available (e.g., from FLC1 and FLC2, from FLC1 and FLC3, or from FLC2 and FLC3), aggregate them equally with the same weight or proportionally with different weights; send the aggregated local model update to the FLS.
  • each Interim Model Aggregation Instruction j may contain, for example, one or more of the following parameters:
• the list of target FLCs whose Local Model Updates (LMUs) are to be aggregated;
• the aggregation mode (synchronous or asynchronous): for synchronous aggregation, FLC1 shall receive LMUs from all target FLCs before aggregating them; for asynchronous aggregation, FLC1 does not have to wait for LMUs from all target FLCs, but only for some of them as specified in the aggregation conditions;
• the aggregation conditions for FLC1 to execute the interim model aggregation according to IMALj (e.g., once two LMUs are available from any two FLCs from the list of target FLCs, when FLCs are at or near a particular location, within a time window, etc.);
  • the interim model aggregation algorithm for IMALj (e.g., average, weighted average, etc.) including weights for each FLC from the list of target FLCs; and/or
• the destination address to which the aggregated local model update shall be sent (e.g., the FLS).
  • Each interim model aggregation instruction may have a unique identifier within the FLS and FLC1.
• FLC1 may store the interim model aggregation instructions.
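The weighted-aggregation option in instruction example (c) above can be sketched as a weighted average of same-length model updates. This is an illustrative sketch, not the claimed implementation: the function name and the representation of LMUs as flat parameter lists are assumptions.

```python
# Illustrative sketch of interim model aggregation at FLC1 per instruction
# example (c): a weighted average of local model updates (LMUs), which are
# represented here (as an assumption) as flat lists of parameters.

def aggregate_lmus(lmus: dict, weights: dict) -> list:
    """Weighted average of same-length LMUs; weights are normalized."""
    total = sum(weights[c] for c in lmus)
    dim = len(next(iter(lmus.values())))
    agg = [0.0] * dim
    for client, lmu in lmus.items():
        w = weights[client] / total
        for i, v in enumerate(lmu):
            agg[i] += w * v
    return agg

lmus = {"FLC1": [1.0, 2.0], "FLC2": [3.0, 4.0], "FLC3": [5.0, 6.0]}
weights = {"FLC1": 0.5, "FLC2": 0.25, "FLC3": 0.25}
agg_lmu = aggregate_lmus(lmus, weights)  # later sent to the FLS
```

Setting all weights equal reduces this to instruction example (b); restricting the `lmus` dict to FLC2 and FLC3 reduces it to example (a).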
  • the FLS may send a notification to FLC2 as shown at 709a, and to FLC3 as shown at 709b.
  • the notification(s) may contain or indicate the identifier of FLC1/UE1 and the identifier of the existing FL task (FL-Task-A).
  • the notification(s) may also contain or indicate the identifier of the FLS.
• the notification(s) may also contain or indicate a value k, which indicates that FLC2 and FLC3 only need to send the first k layers (or the last k layers, or k layers in the middle of a deep neural network) of their local model update to FLC1, assuming the trained model is a deep neural network. It is noted that the notifications 709a and/or 709b are optional.
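The "first k layers" option above can be sketched as a simple truncation of a layer-ordered model update. This sketch assumes the update is keyed by layer in order; the layer names are hypothetical.

```python
# Hedged sketch of the optional value k from notifications 709a/709b:
# FLC2/FLC3 send only the first k layers of their deep-neural-network
# local model update to FLC1. Layer names below are hypothetical.

def first_k_layers(update: dict, k: int) -> dict:
    """Keep the first k entries of a layer-ordered model update
    (dict insertion order is preserved in Python 3.7+)."""
    return dict(list(update.items())[:k])

full_update = {"layer1": [0.1], "layer2": [0.2], "layer3": [0.3], "layer4": [0.4]}
partial_update = first_k_layers(full_update, k=2)  # what FLC2 would send
```

Sending a k-layer partial update trades some model information for a further reduction in direct-link traffic between the FLCs and the relay.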
• FLC1 and FLC2 may discover each other via step 710a, and FLC1 and FLC3 may discover each other via step 710b (e.g., using 3GPP direct device discovery). Since the relay services to be provided by FLC1 are not only communication-related but also require computation, FLC1 and FLC2 (and FLC1 and FLC3) may exchange a Relaying Computation Requirement (RCR) during the process of discovering each other.
  • RCR may contain any one or more of the following information:
• Computation Frequency: the frequency of the computation operation (e.g., one local model update aggregation per second).
• Computation Size: the number of operations for each computation (e.g., for this case, the product of the size of a local model update and the number of models being aggregated).
• Storage Size: the size of the resulting storage. For this case, it is the size of the local model updates that FLC1 needs to collect and store before it can aggregate them, which depends on the interim model aggregation instructions that the FLS configured to FLC1 in the message sent at 708.
• Computation Time Window: the time window for performing the type of computation as indicated by “Computation Type”.
• Computation Waiting Time: the maximum waiting time or latency for a requested computation to be performed.
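The RCR parameters listed above can be pictured as a small record exchanged during discovery. This is a minimal sketch: the field names and units are assumptions derived from the list, not a normative encoding.

```python
# Minimal sketch of a Relaying Computation Requirement (RCR) record as
# exchanged during discovery at 710a/710b; field names and units are
# assumptions derived from the parameters listed above.
from dataclasses import dataclass

@dataclass
class RCR:
    computation_frequency_hz: float    # e.g., one LMU aggregation per second
    computation_size_ops: int          # operations per computation
    storage_size_bytes: int            # LMUs held before aggregation
    computation_time_window: tuple     # (start, end) of the computation
    computation_waiting_time_s: float  # max latency for a requested computation

rcr = RCR(1.0, 2_000_000, 8_000_000, ("t_start", "t_end"), 0.5)
```

A receiving FLC1 could compare such a record against its own budget when deciding whether to accept a computation-aware direct link request at 711a/711b.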
  • the discovery at 710a and/or 710b can be skipped. However, if FLCI and FLC2 (and/or FLCI and FLC3) need to exchange RCR, the discovery at 710a and/or 710b may still be performed since they did not exchange RCR at 705; optionally, FLCI and FLC2 (and/or FLCI and FLC3) may exchange RCR at 711.
• FLC1 and FLC2 may establish a computation-aware direct link (e.g., using 3GPP direct link establishment), during which FLC1 and FLC2 may exchange RCR to achieve computation awareness as a part of direct link establishment; similarly, FLC1 and FLC3 may establish a computation-aware direct link at 711b.
• RCR, if it was not exchanged in step 710, may be exchanged between FLC1 and FLC2/FLC3.
• FLC2 may send a direct link establishment request to FLC1 containing FLC2’s RCR on FLC1, and FLC1 may receive FLC2’s RCR.
• FLC1 may reject FLC2’s direct link establishment (e.g., if the computation size and/or storage size is over what FLC1 can or is willing to afford). If FLC1 rejects FLC2’s direct link request, FLC2 may need to find another relaying UE as the relaying FLC to perform interim model aggregation, and FLC1 may send a rejection notification to FLC2 and the FLS.
• FLC3 may send a direct link establishment request to FLC1 containing FLC3’s RCR on FLC1, and FLC1 may receive FLC3’s RCR.
• FLC1 may reject FLC3’s direct link establishment (e.g., if the computation size and/or storage size is over what FLC1 can or is willing to afford). If FLC1 rejects FLC3’s direct link request, FLC3 may need to find another relaying UE as the relaying FLC to perform interim model aggregation, and FLC1 may send a rejection notification to FLC3 and the FLS.
• FLC1 may send a notification to the FLS (at 712a) and the PMF (at 712b).
• the notification may contain information about each established direct link (e.g., RCR, the identifier of the sender and the receiver of the direct link (e.g., FLC1 and FLC2, or FLC1 and FLC3)).
• FLC1 may generate a new local model update LMU1. If LMU1 needs to be aggregated with model updates from FLC2 and/or FLC3 according to the interim model aggregation instructions received at 708, FLC1 may store LMU1 locally and/or may send LMU1 to the FLS.
  • FLC2 may generate a new local model update LMU2 (at 714a) and FLC3 may also generate a new local model update LMU3 (at 714b).
• FLC2 may send LMU2 to FLC1 via direct link (at 715a) and FLC3 may send LMU3 to FLC1 via direct link (at 715b).
• FLC2 and FLC3 may also send the following additional information together with local model updates to FLC1:
  • step 713 may occur after step 714. Also, step 715a could take place before step 715b.
• FLC1 may receive LMU2 and LMU3, respectively, from FLC2 and FLC3.
• FLC1 may aggregate those local model updates (e.g., LMU1, LMU2, LMU3) according to the interim model aggregation instructions received at 708. For example, if the aggregation instructions specify that FLC1 just needs to aggregate local model updates from some (not all) of the other FLCs, FLC1 might not need to wait to receive local model updates from all other FLCs before performing interim model aggregation.
• an aggregation instruction may say “FLC1 only needs to aggregate LMU1 and LMU2”; as such, in some embodiments, step 716 could occur right after step 715a but before step 715b; then, when FLC1 receives LMU3 at 715b, it can simply forward LMU3 to the FLS or aggregate LMU3 with another local model update (e.g., LMU4 received from another FLC4/UE4) to generate an aggregated “LMU3+LMU4” to be sent to the FLS.
• FLC1 can generate an aggregated local model update (i.e., aggLMU) and an associated model aggregation record.
• the model aggregation record may contain any one or more of the following information: the identifier of the corresponding interim model aggregation instruction used to generate aggLMU; the identifiers of FLC1 and the other FLCs whose local model updates have been aggregated in aggLMU; the creation time of each LMU that has been aggregated into aggLMU; the model accuracy of each local model being aggregated in aggLMU; the number of data samples used to generate each local model being aggregated in aggLMU; and/or the creation time of aggLMU.
• FLC1 may send aggLMU and the model aggregation record to the FLS.
• the FLS is able to know how aggLMU was generated and based on which interim model aggregation instruction. Then, the FLS may decide how aggLMU will be further aggregated with other aggLMUs from other FLCs acting as a relay node and/or LMUs from other FLCs that send local model updates directly to the FLS.
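The model aggregation record described above can be sketched as a plain key/value record accompanying aggLMU. This is an illustrative construction: the helper name, key names, and metadata layout are assumptions, chosen only to mirror the listed fields.

```python
# Illustrative construction of the model aggregation record that FLC1
# sends with aggLMU; keys mirror the fields listed above, while the
# helper name and metadata layout are assumptions.
import time

def make_aggregation_record(instruction_id, contributors, lmu_meta):
    """contributors: FLCs whose LMUs were aggregated into aggLMU.
    lmu_meta: per-FLC creation time, model accuracy, and sample count."""
    return {
        "instruction_id": instruction_id,
        "contributors": list(contributors),
        "lmu_creation_times": {c: lmu_meta[c]["created"] for c in contributors},
        "lmu_accuracies": {c: lmu_meta[c]["accuracy"] for c in contributors},
        "lmu_sample_counts": {c: lmu_meta[c]["samples"] for c in contributors},
        "agg_creation_time": time.time(),
    }

meta = {
    "FLC1": {"created": 100.0, "accuracy": 0.91, "samples": 500},
    "FLC2": {"created": 101.0, "accuracy": 0.88, "samples": 300},
}
record = make_aggregation_record("IMALj", ["FLC1", "FLC2"], meta)
```

The per-FLC sample counts and accuracies are what would let the FLS weight aggLMU appropriately during final aggregation.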
  • FIG. 8 illustrates an example signaling diagram of a procedure for core network node or network function-coordinated proximity-aware FL interim model aggregation, according to various embodiments.
  • the core network node or network function may be a PMF or other NF, for example.
• the core network node or NF is depicted as a PMF; however, it should be understood that this is provided as one example and that other types of nodes or servers may also be used while remaining within the scope of example embodiments.
• It is noted that FIG. 8 is provided as one example of a method or procedure according to some embodiments, and that various modifications or changes may be made while remaining within the scope of example embodiments of the present disclosure. For example, one or more of the steps or procedures depicted in the example of FIG. 8 may be performed in a different order from that which is illustrated, may be omitted, and/or may be combined with one or more steps or procedures discussed elsewhere herein (e.g., may be combined with or modified by one or more elements of FIG. 7). Additionally, it is noted that, although the example of FIG. 8 may depict the PMF as the entity coordinating the model aggregation, the PMF may be replaced by other network elements or nodes (e.g., core network nodes) in some embodiments. As such, the PMF is provided as one example.
  • an existing FL task (e.g., FL-Task-A) is in progress between the FLS and a set of FLCs (e.g., FLCI, FLC2, and FLC3).
• the FLS may sense the increased latency in receiving local model updates from FLCs, and interim model aggregation can help to reduce uplink traffic and latency in transmitting local model updates from FLCs to the FLS.
  • the FLS may request the PMF to group FLCs based on their proximity information that the PMF maintains.
• the PMF may group FLC1, FLC2, and FLC3 together; the PMF may also select FLC1 as the relay node for both FLC2 and FLC3.
• the PMF may instruct FLC1 and FLC2/FLC3 to discover each other and establish direct links between FLC2 and FLC1, and between FLC3 and FLC1.
• FLC2 and FLC3 may send their local model updates to FLC1, which as an example may be aggregated by FLC1 with FLC1’s local model updates, according to interim model update instructions that the FLS configures to the relay node FLC1.
  • Steps 801 to 812 in FIG. 8 may also be performed before a FL task is executed; in other words, steps 801-812 of FIG. 8 can be used to determine the relay FLC and install interim model aggregation instructions to the relay FLC before the FL task is installed to each FLC.
  • UE1 in the example of FIG. 8 will be a UE-to-UE relay, but the same procedure in FIG. 8 can apply.
  • the FLS may send a message to the PMF to request the PMF to group FLCs.
  • This message may contain or indicate the identifier of the FLS, the identifiers of UEs that host FLCs, the identifiers of FLCs, the identifier of the corresponding FL task (e.g., FL-Task-A), and/or the size of a local model, etc.
• the PMF may check the proximity information of the UEs/FLCs contained in the request at 801. Based on the proximity information, the PMF may group the FLCs within the same proximity and the PMF may select a UE/FLC as a relay node for each group. As one example, FLC1, FLC2, and FLC3 may be selected by the PMF to form a group with FLC1 as the relay node separately for FLC2 and FLC3, because FLC2/FLC3 can reach FLC1 over direct link.
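The grouping step at 802 can be sketched as clustering FLCs by direct-link reachability and picking one relay per cluster. This is a hedged sketch under two assumptions not stated in the procedure: reachability is given as an adjacency map, and the relay is the member with the most direct links.

```python
# Hedged sketch of the PMF's grouping at 802: cluster FLCs that can reach
# each other over direct links and select one relay per group. The
# adjacency input and the "most direct links" selection rule are
# illustrative assumptions.

def group_and_select_relays(reachable: dict) -> list:
    """reachable maps each FLC to the set of FLCs it can reach directly.
    Returns (relay, members) pairs, one per connected group."""
    groups, seen = [], set()
    for flc in reachable:
        if flc in seen:
            continue
        # gather the connected component via a simple traversal
        group, frontier = set(), [flc]
        while frontier:
            cur = frontier.pop()
            if cur in group:
                continue
            group.add(cur)
            frontier.extend(reachable.get(cur, ()))
        seen |= group
        # relay = member that directly reaches the most other members
        relay = max(group, key=lambda f: len(reachable.get(f, ())))
        groups.append((relay, sorted(group)))
    return groups

links = {"FLC1": {"FLC2", "FLC3"}, "FLC2": {"FLC1"}, "FLC3": {"FLC1"}}
# FLC1 directly reaches both other FLCs, so it becomes the relay.
```

In a real PMF, the relay choice could additionally weigh the candidates' computation and storage budgets (the same metrics FLC1 evaluates at 804).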
• the PMF may send a request to FLC1 asking FLC1 to be the relay node for FLC2 and FLC3.
  • this request may contain the same information included in the request at 704 of FIG. 7.
• FLC1 may determine if it agrees to be a relay node for FLC2 and FLC3.
  • the decision at 804 may be the same or similar to that discussed above with respect to step 706.
• FLC1 may send a response to the PMF indicating it agrees to be the relay node for FLC2, FLC3, or both of them.
• the response may contain or indicate the identifiers of the UEs/FLCs for which FLC1 agreed to be the relay node.
  • the PMF may send a notification to FLC2 at 806a and to FLC3 at 806b.
  • the notification(s) may contain or indicate the same or similar information as contained in step 709 of FIG. 7.
  • the discovery steps illustrated at 807a and 807b may be the same as or similar to that of steps 710a and 710b, respectively, in FIG. 7.
• FLC1 and FLC2 may establish a computation-aware direct link and, at 808b, FLC1 and FLC3 may establish a computation-aware direct link.
  • Steps 808a and 808b may be the same or similar to that of steps 711a and 711b discussed above with respect to FIG. 7.
• FLC1 may send a notification to the PMF.
  • the notification sent at 809 may be the same or similar to that of step 712b discussed above with respect to FIG. 7.
  • steps 803-809 may be repeated for each group of UEs/FLCs which the PMF determined at 802.
  • the PMF may send a response to the FLS indicating what the FLS requested in the request at 801 has been processed.
  • this response may contain the identifiers of all UEs/FLCs in each group and indicate which UE/FLC is the relay node for each group.
• the FLS may send a notification to FLC1, which may contain or indicate the same or similar information as step 708 of FIG. 7 (e.g., the interim model aggregation instructions discussed above).
• the FLS may embed or indicate “interim model aggregation instructions” in step 801, which will then be forwarded to each relay node (e.g., FLC1) by the PMF in the request sent at 803.
• the FLS may send a notification to FLC2 as shown at 812a, and to FLC3 as shown at 812b. These notifications may be the same or similar to that of notifications 709a and 709b, respectively, discussed above regarding FIG. 7. Steps 812a and/or 812b may not be needed if FLC2 and FLC3 have obtained sufficient information from step 806 (e.g., if step 806 and step 812 contain the same information). Steps 813, 814, 815, 816, and 817 of FIG. 8 may be the same as or similar to steps 713, 714, 715, 716, and 717, respectively, as discussed above in connection with FIG. 7.
  • FIG. 9 illustrates an example signaling diagram for a procedure of relay-node-initiated proximity-aware FL interim model aggregation, according to various embodiments. It is noted that FIG. 9 is provided as one example of a method or procedure according to some embodiments, and that various modifications or changes may be made while remaining within the scope of example embodiments of the present disclosure. For example, one or more of the steps or procedures depicted in the example of FIG. 9 may be performed in a different order from that which is illustrated, may be omitted, and/or may be combined with one or more steps or procedures discussed elsewhere herein (e.g., may be combined with or modified by one or more elements of FIGs. 7 and/or 8).
  • an existing FL task (e.g., FL-Task-A) is in progress between the FLS and a set of FLCs (e.g., FLCI, FLC2, and FLC3).
• FLC1 may find nearby UEs/FLCs (e.g., FLC2 and FLC3) that participate in the same FL task.
• FLC1 has sufficient computation and storage resources and may decide to be a relay node with an interim model aggregation function for FLC2 and FLC3.
• FLC1 may send a request to the FLS to get its approval.
• the FLS will contact the PMF, which authorizes whether FLC1 can be a relay node for UE2/FLC2 and UE3/FLC3. After the authorization, the FLS may send interim model aggregation instructions to FLC1. Then, FLC1 may establish a direct link with FLC2/FLC3, through which FLC2/FLC3 can send their local model updates to FLC1. FLC1 may aggregate local model updates from FLC2/FLC3 with its own local model update, may generate an aggregated local model update, and may send the aggregated local model update to the FLS. Steps 901 to 909 in FIG. 9 may also be performed before a FL task is executed; in other words, steps 901-909 of FIG. 9 may be used to determine the relay FLC and install interim model aggregation instructions to the relay FLC before the FL task is installed to each FLC.
  • UE1 in the example of FIG. 9 will be a UE-to-UE relay, but the same procedure in FIG. 9 can apply.
• FLC1 may discover UE2/FLC2 (step 901a) and UE3/FLC3 (step 901b) participating in the same FL task (e.g., FL-Task-A).
• FLC1 may announce and/or broadcast a device request message over the radio link, which may contain the identifier of UE1/FLC1, the identifier of the FL task, the identifier of the FLS, and/or FLC1’s willingness to be a relay node for aggregating local model updates.
• When UE2/FLC2 (or UE3/FLC3) receives the announced device request message, it may send a device response message directly to FLC1 if UE2/FLC2 (or UE3/FLC3) participates in the same FL task and would like to be relayed by FLC1 (subject to authorization by the FLS); this device response message may contain the identifier of UE2/FLC2.
  • Steps 901a and/or 901b may contain similar parameters as contained in step 701 of FIG. 7 discussed above.
  • UE2/FLC2 in 901a may actively request UE1/FLC1 to be its relay node for interim model aggregation; as an example, when UE1/FLC1 receives a good number of such requests from other FLCs (e.g., FLC2 and FLC3), UE1/FLC1 decides to be a relay node for those other FLCs (e.g., FLC2 and FLC3).
• FLC1 may decide to be a relay node for a selected number of discovered FLCs (e.g., FLC2 and FLC3).
• FLC1 may send a message to the FLS requesting to be a relay node with a model aggregation function for FLC2 and FLC3. This message may contain the identifiers of the FLCs selected at step 902 (e.g., FLC2 and FLC3) and FLC1’s identifier.
  • the FLS may receive the request message sent at 903.
• the FLS may, at 904, authenticate whether FLC1 can be a relay node for FLC2 and FLC3 from an FL perspective, and may forward the message to the PMF for connectivity-level and computation-level authorization.
• the PMF may receive the request message sent at 904 and authorize whether FLC1 can provide proximity services and whether FLC2 and FLC3 can use proximity services from FLC1.
• the PMF may check the AUSF to retrieve UE1/FLC1’s, UE2/FLC2’s, and UE3/FLC3’s subscription data and check the PCF for any proximity-related policies for those UEs/FLCs. Based on their subscription data and proximity-related policies, the PMF may approve whether FLC1 can be a relay node for interim model aggregation for FLC2 and/or FLC3.
  • the PMF may, at 905, send a response to the FLS indicating an approval or a rejection.
• the FLS may send a response to FLC1. If the response from the PMF at 905 shows an approval, the response at 906 may also contain interim model aggregation instructions, similar to step 708 of FIG. 7 discussed above.
• the FLS may send a notification to FLC2 as shown at 907a, and to FLC3 as shown at 907b. These notifications may be the same or similar to that of notifications 709a and 709b, respectively, discussed above regarding FIG. 7.
  • Steps 908, 909, 910, 911, 912, 913 and 914 of FIG. 9 may be the same as or similar to steps 711, 712, 713, 714, 715, 716, and 717, respectively, as discussed above in connection with FIG. 7.
• Certain embodiments may provide for native FL with interim model aggregation in 3GPP systems such as, but not limited to, a 6G system.
  • FL may become a native Al function or service of a next generation system, such as a 6G system (6GS), which can be leveraged by other 6G network functions (e.g., an SMF) for more efficient 6G network management and automation.
  • FIG. 10 illustrates an example of native FL in 6G.
• a network data analytics function (NWDAF) (or at least its model training logical function) may be pushed from the 3GPP core network to UEs, e.g., to avoid collecting data from UEs to the 3GPP core network.
• NWDAF-C may be a NWDAF instance located in the 6G core network (or even in a 6G edge network), which acts as an FLS to coordinate and work with other FLCs (e.g., NWDAF1, NWDAF2, NWDAF3).
• NWDAF1 may be located in UE1 and acts as a federated learning client (i.e., FLC1).
  • NWDAF2 may be located in UE2 and acts as a federated learning client (i.e., FLC2).
  • NWDAF3 may be located in UE3 and acts as a federated learning client (i.e., FLC3). It is noted that there could be more NWDAF instances located in other UEs as federated learning clients.
  • UE1 in FIG. 10 may be a UE-to-UE relay, but the same procedure in FIG. 10 may apply.
• NWDAF-C may send an initial global model to NWDAF1, NWDAF2, and NWDAF3.
• NWDAF-C (and/or a 6G NF such as a 6G-version DDNMF) may configure interim model aggregation instructions to NWDAF1.
• NWDAF2 may perform local training, generate a local model update, and send its local model update to NWDAF1.
• NWDAF3 may also perform local training, generate a local model update, and send its local model update to NWDAF1.
• NWDAF1 may also perform local training and may generate a local model update.
• NWDAF1 may perform the proposed interim model aggregation to aggregate local model updates received from NWDAF2 and NWDAF3, optionally with NWDAF1’s local model update, according to interim model aggregation instructions configured by NWDAF-C and/or another NF (e.g., a 6G NF).
• NWDAF-C can control NWDAF1 to perform interim model aggregation using the procedure in FIG. 7, for example, where NWDAF-C is the FLS, NWDAF1 is the UE1/FLC1, NWDAF2 is the UE2/FLC2, and NWDAF3 is the UE3/FLC3.
• another 6G proximity-management-related NF can also request and coordinate NWDAF1 to perform interim model aggregation using the procedure in FIG. 8, for example, where NWDAF-C is the FLS, NWDAF1 is the UE1/FLC1, NWDAF2 is the UE2/FLC2, and NWDAF3 is the UE3/FLC3.
• NWDAF1 can initiate interim model aggregation using the procedure in FIG. 9, for example, where NWDAF-C is the FLS, NWDAF1 is the UE1/FLC1, NWDAF2 is the UE2/FLC2, and NWDAF3 is the UE3/FLC3.
• NWDAF1 can also be located within a 6G edge network which is close to NWDAF2 and NWDAF3.
  • FIG. 11 illustrates an example flow diagram of a method 1100, which may be implemented in a first wireless transmit/receive unit (WTRU). It should be understood that the method 1100 may include any one or more of the steps performed by or associated with FLCI and/or UE1 as discussed elsewhere herein, such as described in or with respect to FIGs. 7-9. It should also be understood that one or more of the steps of the method may be optional, may be omitted, and/or may be performed in a different order.
  • the method 1100 may include, at 1105, receiving, from a network node (e.g., a server or FLS), a first information indicating a request to serve as a relay node for at least one other WTRU (e.g., for a second WTRU and a third WTRU, or any number of WTRUs). Based on the first information, the method 1100 may include, at 1110, determining that the at least one other WTRU is in proximity of the first WTRU and/or determining to agree to serve as the relay node for the at least one other WTRU.
  • the method may include, at 1115, transmitting a second information, to the network node, indicating that the first WTRU agrees to serve as the relay node.
• the method 1100 may include, at 1120, receiving, from the network node, a third information indicating one or more interim model aggregation instructions.
  • the method 1100 may include, at 1125, establishing a direct link with the at least one other WTRU.
  • the direct link may be a computation-aware direct link as described elsewhere herein.
• the first information received from the network node may further indicate any one or more of: (i) an identifier associated with the at least one other WTRU, (ii) an identifier associated with an existing federated learning (FL) task, (iii) an identifier associated with the network node, (iv) information indicating a relaying time window, (v) information indicating an accuracy of a current global model, and/or (vi) information indicating a number of remaining training rounds to be completed.
  • determining that the at least one other WTRU is in proximity of the first WTRU may include broadcasting a message indicating an identifier of the at least one other WTRU over a local direct radio link, and receiving an acknowledgement indicating the identifier of the at least one other WTRU and indicating a reachability of the at least one other WTRU.
  • the second information sent to the network node may include a list or other indication or information indicating and/or identifying the WTRU(s) for which the first WTRU agrees or accepts to be a relay node.
  • the method 1100 may include, at 1130, receiving, via the direct link (e.g., a computation-aware direct link), a local model update from one or more of the at least one other WTRU.
  • the method 1100 may include, at 1135, aggregating, according to the interim model aggregation instructions, the received local model updates with a local model update generated at the first WTRU to generate any of an aggregated local model update and/or an associated model aggregation record.
  • the method 1100 may then include, at 1140, sending any of the aggregated local model update and/or the associated model aggregation record to the network node.
  • the interim model aggregation instructions may indicate or include any one or more of the following: (i) to treat local model updates from the second WTRU and the third WTRU equally, to aggregate them without considering the first WTRU’s local model update or without the first WTRU’s local training, and to send the aggregated local model update to the FLS; (ii) to treat local model updates from the first WTRU, the second WTRU, and the third WTRU equally, to aggregate them all together, and to send the aggregated local model update to the network node; (iii) to treat local model updates from the first WTRU, the second WTRU, and the third WTRU differently, to aggregate local model updates from the first WTRU, the second WTRU, and the third WTRU according to their weights, and to send the aggregated local model update to the FLS; and/or (iv) once there are two local model updates available, to aggregate them equally with a same weight or proportionally with different weights, and to send the aggregated local model update to the network node.
  • the interim model aggregation instructions may indicate or include an indication of any one or more of the following: (i) a unique identifier associated with a respective one of the interim model aggregation instructions; (ii) a list of target WTRUs whose local model updates are to be aggregated; (iii) an aggregation mode associated with a respective one of the interim model aggregation instructions; (iv) aggregation conditions for the first WTRU to execute the interim model aggregation according to the interim model aggregation instructions; (v) an interim model aggregation algorithm associated with a respective one of the interim model aggregation instructions; and/or (vi) a destination address to which the aggregated local model update is to be sent.
  • the method 1100 may include receiving a discovery message or the like from the at least one other WTRU.
  • the discovery message may indicate a relaying computation requirement (RCR), where the RCR indicates any of: a computation type, a computation size, a storage size, a computation frequency, a computation time window, and/or a computation waiting time.
  • the establishing of the direct link may include receiving, from the at least one other WTRU, a direct link establishment message or request indicating a relaying computation requirement (RCR) associated with the at least one other WTRU.
  • the method 1100 may include sending a notification, to the network node, which indicates information associated with, or identifying, the established direct link.
  • the method 1100 may include receiving, e.g., with the local model update from one or more of the at least one other WTRU, information that indicates or includes any one or more of the following: (i) an accuracy of the local model update; (ii) a model compression scheme and related parameters used to compress the local model update; and/or (iii) data distribution properties of training data that was used to generate the local model update.
  • the model aggregation record may indicate or include any one or more of the following: (i) an identifier associated with the interim model aggregation instruction used to generate the aggregated local model update; (ii) an identifier of the first WTRU and the one or more other WTRUs whose local model updates have been aggregated to generate the aggregated local model update; (iii) a creation time of the local model updates used to generate the aggregated local model update (e.g., a time (or times) at which the local model updates were created); and/or (iv) a creation time of the aggregated local model update (e.g., a time at which the aggregated local model update was created or aggregated).
  • Various embodiments may be directed to a method, which may be implemented in an apparatus or server, such as a FLS. It should be understood that the method may include any one or more of the steps performed by or associated with a FLS as discussed elsewhere herein, such as described in or with respect to FIGs. 7-9. It should also be understood that one or more of the steps of the method may be optional, may be omitted, and/or may be performed in a different order.
  • the method may include sending first information indicating a first request to a network function (e.g., PMF) to retrieve proximity information and context information associated with one or more wireless transmit/receive units (WTRUs), and receiving the proximity information from the network function.
  • the method may include selecting one of the one or more WTRUs to serve as a relay node, and sending, to the selected WTRU, second information indicating a request for the selected WTRU to serve as the relay node for at least one other WTRU.
  • the method may also include receiving third information, from the selected WTRU, indicating that the selected WTRU agrees to serve as the relay node.
  • the method may include sending, to the selected WTRU, fourth information indicating one or more interim model aggregation instructions, and receiving, from the selected WTRU, an aggregated local model update and associated model aggregation record.
  • (e.g., configuration) information may be described as received by a WTRU from the network, for example, through system information or via any kind of protocol message.
  • the same (e.g., configuration) information may be pre-configured in the WTRU (e.g., via any kind of pre-configuration methods such as e.g., via factory settings), such that this (e.g., configuration) information may be used by the WTRU without being received from the network.
  • Any characteristic, variant or embodiment described for a method is compatible with an apparatus comprising means for performing the disclosed method, such as with a device comprising a processor configured to perform the disclosed method, a computer program product comprising program code instructions, and a non-transitory computer-readable storage medium storing program instructions.
  • infrared-capable devices, i.e., infrared emitters and receivers.
  • the embodiments discussed are not limited to these systems but may be applied to other systems that use other forms of electromagnetic waves or non-electromagnetic waves such as acoustic waves.
  • the term “video” or the term “imagery” may mean any of a snapshot, a single image and/or multiple images displayed over a time basis.
  • the terms “user equipment” and its abbreviation “UE”, the term “remote” and/or the terms “head mounted display” or its abbreviation “HMD” may mean or include (i) a wireless transmit and/or receive unit (WTRU); (ii) any of a number of embodiments of a WTRU; (iii) a wireless-capable and/or wired-capable (e.g., tetherable) device configured with, inter alia, some or all structures and functionality of a WTRU; (iv) a wireless-capable and/or wired-capable device configured with less than all structures and functionality of a WTRU; or (v) the like.
  • Details of an example WTRU, which may be representative of any WTRU recited herein, are provided herein with respect to FIGs. 1A-1D.
  • various disclosed embodiments herein supra and infra are described as utilizing a head mounted display.
  • a device other than the head mounted display may be utilized and some or all of the disclosure and various disclosed embodiments can be modified accordingly without undue experimentation. Examples of such other device may include a drone or other device configured to stream information for providing the adapted reality experience.
  • the methods provided herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor.
  • Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media.
  • Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).
  • a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
  • processing platforms, computing systems, controllers, and other devices that include processors are noted. These devices may include at least one Central Processing Unit (“CPU”) and memory.
  • In accordance with the practices of persons skilled in the art of computer programming, reference to acts and symbolic representations of operations or instructions may be performed by the various CPUs and memories. Such acts and operations or instructions may be referred to as being “executed,” “computer executed” or “CPU executed.”
  • an electrical system represents data bits that can cause a resulting transformation or reduction of the electrical signals and the maintenance of data bits at memory locations in a memory system to thereby reconfigure or otherwise alter the CPU's operation, as well as other processing of signals.
  • the memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, optical, or organic properties corresponding to or representative of the data bits. It should be understood that the embodiments are not limited to the above-mentioned platforms or CPUs and that other platforms and CPUs may support the provided methods.
  • the data bits may also be maintained on a computer readable medium including magnetic disks, optical disks, and any other volatile (e.g., Random Access Memory (RAM)) or non-volatile (e.g., Read-Only Memory (ROM)) mass storage system readable by the CPU.
  • the computer readable medium may include cooperating or interconnected computer readable medium, which exist exclusively on the processing system or are distributed among multiple interconnected processing systems that may be local or remote to the processing system. It should be understood that the embodiments are not limited to the above-mentioned memories and that other platforms and memories may support the provided methods.
  • any of the operations, processes, etc. described herein may be implemented as computer-readable instructions stored on a computer-readable medium.
  • the computer-readable instructions may be executed by a processor of a mobile unit, a network element, and/or any other computing device.
  • Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a CD, a DVD, a digital tape, a computer memory, etc., and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
  • a typical data processing system may generally include one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity, control motors for moving and/or adjusting components and/or quantities).
  • a typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
  • the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
  • the terms “any of” followed by a listing of a plurality of items and/or a plurality of categories of items, as used herein, are intended to include “any of,” “any combination of,” “any multiple of,” and/or “any combination of multiples of” the items and/or the categories of items, individually or in conjunction with other items and/or other categories of items.
  • the term “set” is intended to include any number of items, including zero.
  • the term “number” is intended to include any number, including zero.
  • the term “multiple”, as used herein, is intended to be synonymous with “a plurality”.
  • a range includes each individual member.
  • a group having 1-3 cells refers to groups having 1, 2, or 3 cells.
  • a group having 1-5 cells refers to groups having 1, 2, 3, 4, or 5 cells, and so forth.
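The interim model aggregation behaviors listed above (treating local model updates equally with a same weight, or proportionally with different weights, and producing an associated model aggregation record) can be sketched as follows. This is an illustrative example only, not part of the disclosure: the function names, the list-of-floats model representation, and the record fields are assumptions chosen for clarity.

```python
import time

def interim_aggregate(updates, weights=None):
    """Aggregate local model updates (lists of floats) at a relay WTRU.

    With weights=None the updates are treated equally (same weight);
    otherwise they are combined proportionally to the given weights,
    mirroring the aggregation modes described above.
    """
    if weights is None:
        weights = [1.0] * len(updates)
    total = sum(weights)
    return [sum(w * u[i] for w, u in zip(weights, updates)) / total
            for i in range(len(updates[0]))]

# Local model updates from the first, second and third WTRUs (illustrative).
updates = {"wtru1": [1.0, 2.0], "wtru2": [3.0, 4.0], "wtru3": [5.0, 6.0]}

aggregated = interim_aggregate(list(updates.values()))           # equal weights
weighted = interim_aggregate(list(updates.values()), [2, 1, 1])  # proportional

# An associated model aggregation record, with fields drawn from the list
# above (instruction identifier, aggregated WTRUs, creation time); the
# values are hypothetical.
record = {
    "instruction_id": "ima-1",
    "aggregated_wtrus": list(updates),
    "aggregation_time": time.time(),
}
print(aggregated)  # → [3.0, 4.0]
```

With equal weights the result is the element-wise mean of the three updates; with weights [2, 1, 1] the first WTRU's update contributes twice as much, giving [2.5, 3.5].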

Abstract

Procedures, apparatuses, systems, devices, and computer program products for proximity-aware federated learning (FL) are described. One method may include a first wireless transmit/receive unit (WTRU) receiving a request to serve as a relay node for at least one other WTRU, determining that the at least one other WTRU is in proximity of the first WTRU and/or determining to agree to serve as the relay node, receiving one or more model aggregation instructions, establishing a direct link with the at least one other WTRU, and receiving, via the direct link, a local model update from one or more of the other WTRU(s). The method may also include aggregating, according to the model aggregation instructions, the received local model updates with a local model update generated at the first WTRU to generate an aggregated local model update and an associated model aggregation record, which may be sent to a network node.

Description

METHODS, ARCHITECTURES, APPARATUSES AND SYSTEMS FOR PROXIMITY-AWARE FEDERATED LEARNING WITH INTERIM MODEL AGGREGATION IN
FUTURE WIRELESS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application No. 63/456,101, filed March 31, 2023, which is incorporated herein by reference in its entirety.
FIELD
[0002] The present disclosure is generally directed to the fields of communications, software and/or encoding, including, for example, to methods, architectures, apparatuses, and/or systems related to proximity-aware federated learning (FL) with interim model aggregation in wireless networks.
BACKGROUND
[0003] Federated learning (FL) is a framework for distributed machine learning. In Federated learning (FL), training data may be maintained locally at multiple distributed Federated Learning Clients (FLCs), such as user devices or mobile devices. A FLC may perform local training, generate local model updates, and/or send local model updates to a Federated Learning Server (FLS).
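As a concrete illustration of the FL workflow described above, the following sketch shows one training round with two FLCs and a FLS. The list-of-floats model representation and the toy "training" step are simplifying assumptions for illustration only, not part of the disclosure.

```python
# Illustrative sketch of one federated learning (FL) round: each FL client
# (FLC) trains locally on data that never leaves the device and sends only a
# local model update; the FL server (FLS) averages the updates into a new
# global model. Plain lists of floats stand in for real model weights.

def local_training(global_model, local_data):
    # Placeholder for local training: nudge each weight toward the mean of
    # the client's local data (a stand-in for gradient descent).
    target = sum(local_data) / len(local_data)
    return [w + 0.5 * (target - w) for w in global_model]

def fls_aggregate(local_updates):
    # Federated averaging: element-wise mean over all clients' updates.
    n = len(local_updates)
    return [sum(ws) / n for ws in zip(*local_updates)]

global_model = [0.0, 0.0]
client_data = {"flc1": [1.0, 3.0], "flc2": [5.0, 7.0]}  # stays on-device

updates = [local_training(global_model, d) for d in client_data.values()]
global_model = fls_aggregate(updates)
print(global_model)  # → [2.0, 2.0]
```

Only the updates cross the network; the raw training data remains local to each FLC, which is the defining property of FL.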
SUMMARY
[0004] An embodiment may be directed to a first wireless transmit/receive unit (WTRU) that includes circuitry, including any of a processor, memory, transmitter and/or receiver. The circuitry is configured to receive, from a network node, first information indicating a request to serve as a relay node for at least one other WTRU and, based on the first information, to determine that the at least one other WTRU is in proximity of the first WTRU and/or to determine to agree to serve as the relay node for the at least one other WTRU. The circuitry is configured to transmit second information, to the network node, indicating that the first WTRU agrees to serve as the relay node, to receive, from the network node, third information indicating one or more interim model aggregation instructions, to establish a direct link with the at least one other WTRU, and to receive, via the direct link, a local model update from one or more of the at least one other WTRU. The circuitry is configured to aggregate, according to the interim model aggregation instructions, the received local model updates with a local model update generated at the first WTRU to generate an aggregated local model update and an associated model aggregation record, and to send the aggregated local model update and the associated model aggregation record to the network node. [0005] An embodiment is directed to a method, implemented in a first wireless transmit/receive unit (WTRU). The method may include receiving, from a network node, first information indicating a request to serve as a relay node for at least one other WTRU and, based on the first information, determining that the at least one other WTRU is in proximity of the first WTRU and determining to agree to serve as the relay node for the at least one other WTRU. 
The method may include transmitting second information, to the network node, indicating that the first WTRU agrees to serve as the relay node, receiving, from the network node, third information indicating one or more interim model aggregation instructions, establishing a direct link with the at least one other WTRU, and receiving, via the direct link, a local model update from one or more of the at least one other WTRU. The method may also include aggregating, according to the interim model aggregation instructions, the received local model updates with a local model update generated at the first WTRU to generate an aggregated local model update and an associated model aggregation record, and sending the aggregated local model update and the associated model aggregation record to the network node.
[0006] An embodiment may be directed to an apparatus comprising circuitry, including any of a processor, memory, transmitter and receiver. The circuitry is configured to send first information indicating a first request to a network function to retrieve proximity information and context information associated with one or more wireless transmit/receive units (WTRUs), to receive the proximity information from the network function, to select one of the one or more WTRUs to serve as a relay node, and to send, to the selected WTRU, second information indicating a request for the selected WTRU to serve as the relay node for at least one other WTRU. The circuitry may be configured to receive third information, from the selected WTRU, indicating that the selected WTRU agrees to serve as the relay node, to send, to the selected WTRU, fourth information indicating one or more interim model aggregation instructions, and to receive, from the selected WTRU, an aggregated local model update and associated model aggregation record.
[0007] An embodiment may be directed to a method comprising sending first information indicating a first request to a network function to retrieve proximity information and context information associated with one or more wireless transmit/receive units (WTRUs), receiving the proximity information from the network function, selecting one of the one or more WTRUs to serve as a relay node, and sending, to the selected WTRU, second information indicating a request for the selected WTRU to serve as the relay node for at least one other WTRU. The method may also include receiving third information, from the selected WTRU, indicating that the selected WTRU agrees to serve as the relay node, sending, to the selected WTRU, fourth information indicating one or more interim model aggregation instructions, and receiving, from the selected WTRU, an aggregated local model update and associated model aggregation record.
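The server-side procedure outlined above (retrieving proximity information from a network function and selecting one WTRU to serve as a relay node) can be sketched as follows. The selection criterion and field names here are assumptions for illustration; the disclosure leaves the selection policy open.

```python
# Illustrative sketch of FLS-side relay selection: given proximity/context
# information retrieved from a network function (e.g., a PMF), pick the
# candidate WTRU that can reach the most other WTRUs over a direct link.
# The "in_proximity_of" field name is a hypothetical choice.

def select_relay(candidates):
    # candidates: WTRU id -> context dict listing the ids it is in proximity of.
    return max(candidates, key=lambda w: len(candidates[w]["in_proximity_of"]))

proximity_info = {
    "wtru1": {"in_proximity_of": ["wtru2", "wtru3"]},
    "wtru2": {"in_proximity_of": ["wtru1"]},
    "wtru3": {"in_proximity_of": []},
}
print(select_relay(proximity_info))  # prints wtru1
```

After selecting the relay, the FLS would send it the relay request and, once the relay agrees, the interim model aggregation instructions, per the method above.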
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] A more detailed understanding may be had from the detailed description below, given by way of example in conjunction with drawings appended hereto. Figures in such drawings, like the detailed description, are examples. As such, the Figures (FIGs.) and the detailed description are not to be considered limiting, and other equally effective examples are possible and likely. Furthermore, like reference numerals ("ref.") in the FIGs. indicate like elements, and wherein: [0009] FIG. 1A is a system diagram illustrating an example communications system;
[0010] FIG. 1B is a system diagram illustrating an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 1A;
[0011] FIG. 1C is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 1A;
[0012] FIG. 1D is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 1A;
[0013] FIG. 2 is a diagram illustrating an example of a federated learning (FL) process, according to an embodiment;
[0014] FIG. 3 is a system diagram illustrating an example of FL in wireless networks, according to an embodiment; and
[0015] FIG. 4 is a system diagram illustrating an example of a 5G system architecture, according to various embodiments;
[0016] FIG. 5 is a system diagram illustrating some problems relating to FL in wireless networks; [0017] FIG. 6 is an architectural design for proximity-aware FL with interim model aggregation, according to various embodiments;
[0018] FIG. 7 is a signaling diagram illustrating server-controlled proximity-aware FL with interim model aggregation, according to various embodiments;
[0019] FIG. 8 is a signaling diagram illustrating a network function-controlled proximity-aware FL with interim model aggregation, according to various embodiments;
[0020] FIG. 9 is a signaling diagram illustrating relay-node-controlled proximity-aware FL with interim model aggregation, according to various embodiments; [0021] FIG. 10 is a system diagram illustrating an example of native FL in beyond 5G systems, such as 6G, according to some embodiments; and
[0022] FIG. 11 illustrates an example flow diagram of a method, according to one example embodiment.
DETAILED DESCRIPTION
[0023] In the following detailed description, numerous specific details are set forth to provide a thorough understanding of embodiments and/or examples disclosed herein. However, it will be understood that such embodiments and examples may be practiced without some or all of the specific details set forth herein. In other instances, well-known methods, procedures, components and circuits have not been described in detail, so as not to obscure the following description. Further, embodiments and examples not specifically described herein may be practiced in lieu of, or in combination with, the embodiments and other examples described, disclosed or otherwise provided explicitly, implicitly and/or inherently (collectively "provided") herein. Although various embodiments are described and/or claimed herein in which an apparatus, system, device, etc. and/or any element thereof carries out an operation, process, algorithm, function, etc. and/or any portion thereof, it is to be understood that any embodiments described and/or claimed herein assume that any apparatus, system, device, etc. and/or any element thereof is configured to carry out any operation, process, algorithm, function, etc. and/or any portion thereof.
[0024] The methods, apparatuses and systems provided herein are well-suited for communications involving both wired and wireless networks. An overview of various types of wireless devices and infrastructure is provided with respect to FIGs. 1A-1D, where various elements of the network may utilize, perform, be arranged in accordance with and/or be adapted and/or configured for the methods, apparatuses and systems provided herein.
[0025] FIG. 1A is a system diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented. The communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. The communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail (ZT) unique-word (UW) discrete Fourier transform (DFT) spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
[0026] As shown in FIG. 1A, the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a radio access network (RAN) 104/113, a core network (CN) 106/115, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs 102a, 102b, 102c, 102d, any of which may be referred to as a "station" and/or a "STA", may be configured to transmit and/or receive wireless signals and may include (or be) a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in an industrial and/or an automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like. Any of the WTRUs 102a, 102b, 102c and 102d, or any other WTRU mentioned or described herein, may be interchangeably referred to as a UE.
[0027] The communications systems 100 may also include a base station 114a and/or a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d, e.g., to facilitate access to one or more communication networks, such as the CN 106/115, the Internet 110, and/or the networks 112. By way of example, the base stations 114a, 114b may be any of a base transceiver station (BTS), a Node-B (NB), an eNode-B (eNB), a Home Node-B (HNB), a Home eNode-B (HeNB), a gNode-B (gNB), a NR Node-B (NR NB), a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
[0028] The base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum. A cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in an embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In an embodiment, the base station 114a may employ multiple-input multiple-output (MIMO) technology and may utilize multiple transceivers for each or any sector of the cell. For example, beamforming may be used to transmit and/or receive signals in desired spatial directions.
[0029] The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).
[0030] More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA).
[0031] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE- Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
[0032] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
[0033] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies. For example, the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles. Thus, the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
[0034] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (Wi-Fi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
[0035] The base station 114b in FIG. 1A may be a wireless router, Home Node-B, Home eNode-B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like. In an embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In an embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In an embodiment, the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR, etc.) to establish any of a small cell, picocell or femtocell. As shown in FIG. 1A, the base station 114b may have a direct connection to the Internet 110. Thus, the base station 114b may not be required to access the Internet 110 via the CN 106/115.
[0036] The RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. The data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, mobility requirements, and the like. The CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in FIG. 1A, it will be appreciated that the RAN 104/113 and/or the CN 106/115 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT. For example, in addition to being connected to the RAN 104/113, which may be utilizing an NR radio technology, the CN 106/115 may also be in communication with another RAN (not shown) employing any of a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or Wi-Fi radio technology. [0037] The CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or other networks 112. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers.
For example, the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
[0038] Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links). For example, the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
[0039] FIG. 1B is a system diagram illustrating an example WTRU 102. As shown in FIG. 1B, the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other elements/peripherals 138, among others. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
[0040] The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together, e.g., in an electronic package or chip. [0041] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116. For example, in an embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In an embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In an embodiment, the transmit/receive element 122 may be configured to transmit and/or receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
[0042] Although the transmit/receive element 122 is depicted in FIG. IB as a single element, the WTRU 102 may include any number of transmit/receive elements 122. For example, the WTRU 102 may employ MIMO technology. Thus, in an embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
[0043] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as NR and IEEE 802.11, for example.
[0044] The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
[0045] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
[0046] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
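By way of illustration only, the timing-based location determination mentioned above may be sketched as follows. This simplified, hypothetical example (the function name and the idealized noise-free measurements are illustrative assumptions, not part of any positioning standard) converts one-way signal travel times from three base stations at known coordinates into distances and solves the resulting circle equations as a linear system:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def trilaterate(bs, tof):
    """Estimate a 2D position from one-way time-of-flight to three base stations.

    bs  -- list of three (x, y) base-station coordinates in meters
    tof -- list of three one-way signal travel times in seconds
    """
    d = [C * t for t in tof]  # convert travel time to distance
    (x1, y1), (x2, y2), (x3, y3) = bs
    d1, d2, d3 = d
    # Subtracting pairs of circle equations (x-xi)^2 + (y-yi)^2 = di^2
    # eliminates the quadratic terms, leaving a 2x2 linear system.
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    e, f = 2 * (x3 - x2), 2 * (y3 - y2)
    g = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a * f - b * e  # Cramer's rule denominator
    return ((c * f - b * g) / det, (a * g - c * e) / det)
```

In practice, timing measurements are noisy and more than three base stations would typically be combined with a least-squares fit; the closed-form solution above only illustrates the geometric principle.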
[0047] The processor 118 may further be coupled to other elements/peripherals 138, which may include one or more software and/or hardware modules/units that provide additional features, functionality and/or wired or wireless connectivity. For example, the elements/peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (e.g., for photographs and/or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, a virtual reality and/or augmented reality (VR/AR) device, an activity tracker, and the like. The elements/peripherals 138 may include one or more sensors, which may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
[0048] The WTRU 102 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the uplink (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent and/or simultaneous. The full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 118). In an embodiment, the WTRU 102 may include a half-duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for either the uplink (e.g., for transmission) or the downlink (e.g., for reception)) may be non-concurrent.
[0049] FIG. 1C is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment. As noted above, the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, and 102c over the air interface 116. The RAN 104 may also be in communication with the CN 106.
[0050] The RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In an embodiment, the eNode-Bs 160a, 160b, 160c may implement MIMO technology. Thus, the eNode-B 160a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102a.
[0051] Each of the eNode-Bs 160a, 160b, and 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink (UL) and/or downlink (DL), and the like. As shown in FIG. 1C, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface. [0052] The CN 106 shown in FIG. 1C may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (PGW) 166. While each of the foregoing elements are depicted as part of the CN 106, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the CN operator.
[0053] The MME 162 may be connected to each of the eNode-Bs 160a, 160b, and 160c in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like. The MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
[0054] The SGW 164 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via the S1 interface. The SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c. The SGW 164 may perform other functions, such as anchoring user planes during inter-eNode-B handovers, triggering paging when DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
[0055] The SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices. [0056] The CN 106 may facilitate communications with other networks. For example, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. For example, the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108. In addition, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
[0057] Although the WTRU is described in FIGs. 1A-1D as a wireless terminal, it is contemplated that in certain representative embodiments that such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network. [0058] In representative embodiments, the other network 112 may be a WLAN.
[0059] A WLAN in infrastructure basic service set (BSS) mode may have an access point (AP) for the BSS and one or more stations (STAs) associated with the AP. The AP may have an access or an interface to a distribution system (DS) or another type of wired/wireless network that carries traffic into and/or out of the BSS. Traffic to STAs that originates from outside the BSS may arrive through the AP and may be delivered to the STAs. Traffic originating from STAs to destinations outside the BSS may be sent to the AP to be delivered to respective destinations. Traffic between STAs within the BSS may be sent through the AP, for example, where the source STA may send traffic to the AP and the AP may deliver the traffic to the destination STA. The traffic between STAs within a BSS may be considered and/or referred to as peer-to-peer traffic. The peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS). In certain representative embodiments, the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS). A WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other. The IBSS mode of communication may sometimes be referred to herein as an "ad-hoc" mode of communication.
[0060] When using the 802.11ac infrastructure mode of operation or a similar mode of operations, the AP may transmit a beacon on a fixed channel, such as a primary channel. The primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width via signaling. The primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP. In certain representative embodiments, carrier sense multiple access with collision avoidance (CSMA/CA) may be implemented, for example, in 802.11 systems. For CSMA/CA, the STAs (e.g., every STA), including the AP, may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off. One STA (e.g., only one station) may transmit at any given time in a given BSS.
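For illustration only, the sense-and-back-off behavior described above may be sketched as a simplified slotted contention model. This hypothetical example (the function name and the deterministic tie handling are illustrative assumptions, not the full 802.11 distributed coordination function) has each STA hold a backoff counter that decrements during idle slots; the STA reaching zero transmits while the others, sensing a busy channel, freeze their counters:

```python
def contend(backoffs):
    """Simplified slotted CSMA/CA contention.

    backoffs -- dict mapping STA name to its current backoff counter.
    Returns the order in which the STAs gain the channel. Each idle slot
    decrements every counter; the STA whose counter reaches zero first
    transmits while the others freeze (channel sensed busy), after which
    contention resumes. (Ties are resolved deterministically here for
    simplicity; in a real 802.11 BSS simultaneous expiry would collide.)
    """
    remaining = dict(backoffs)
    order = []
    while remaining:
        winner = min(remaining, key=remaining.get)
        slots_used = remaining.pop(winner)
        # The other STAs decremented alongside the winner, then froze.
        for sta in remaining:
            remaining[sta] -= slots_used
        order.append(winner)
    return order
```

For example, counters {A: 3, B: 1, C: 2} yield the channel-access order B, C, A, matching the rule that only one STA transmits at any given time in the BSS.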
[0061] High throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
[0062] Very high throughput (VHT) STAs may support 20 MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels. The 40 MHz, and/or 80 MHz, channels may be formed by combining contiguous 20 MHz channels. A 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two non-contiguous 80 MHz channels, which may be referred to as an 80+80 configuration. For the 80+80 configuration, the data, after channel encoding, may be passed through a segment parser that may divide the data into two streams. Inverse fast Fourier transform (IFFT) processing, and time domain processing, may be done on each stream separately. The streams may be mapped onto the two 80 MHz channels, and the data may be transmitted by a transmitting STA. At the receiver of the receiving STA, the above-described operation for the 80+80 configuration may be reversed, and the combined data may be sent to a medium access control (MAC) layer, entity, etc.
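By way of illustration only, the 80+80 split-and-recombine flow may be sketched as follows. This simplified example (the function names and the round-robin parsing rule are illustrative assumptions, not the exact 802.11ac segment parser) alternates frequency-domain symbols into two segments, applies a per-segment IFFT at the transmitter, and reverses both operations at the receiver:

```python
import numpy as np

def tx_80p80(symbols):
    """Parse frequency-domain symbols into two segments (round-robin, an
    illustrative stand-in for the 802.11ac segment parser) and apply a
    per-segment IFFT, as for the two 80 MHz channels."""
    seg0, seg1 = symbols[0::2], symbols[1::2]
    return np.fft.ifft(seg0), np.fft.ifft(seg1)

def rx_80p80(t0, t1):
    """Receiver side: per-segment FFT, then re-interleave the two streams
    to recover the original symbol order."""
    seg0, seg1 = np.fft.fft(t0), np.fft.fft(t1)
    out = np.empty(seg0.size + seg1.size, dtype=complex)
    out[0::2], out[1::2] = seg0, seg1
    return out
```

Because the receiver applies the exact inverse of each transmitter step, a round trip through `tx_80p80` and `rx_80p80` recovers the original symbols (up to floating-point precision), mirroring how the combined data reaches the MAC layer.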
[0063] Sub 1 GHz modes of operation are supported by 802.11af and 802.11ah. The channel operating bandwidths, and carriers, are reduced in 802.11af and 802.11ah relative to those used in 802.11n and 802.11ac. 802.11af supports 5 MHz, 10 MHz and 20 MHz bandwidths in the TV white space (TVWS) spectrum, and 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum. According to a representative embodiment, 802.11ah may support meter type control/machine-type communications (MTC), such as MTC devices in a macro coverage area. MTC devices may have certain capabilities, for example, limited capabilities including support for (e.g., only support for) certain and/or limited bandwidths. The MTC devices may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
[0064] WLAN systems, which may support multiple channels, and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, include a channel which may be designated as the primary channel. The primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS. The bandwidth of the primary channel may be set and/or limited by a STA, from among all STAs operating in a BSS, which supports the smallest bandwidth operating mode. In the example of 802.11ah, the primary channel may be 1 MHz wide for STAs (e.g., MTC type devices) that support (e.g., only support) a 1 MHz mode, even if the AP and other STAs in the BSS support 2 MHz, 4 MHz, 8 MHz, 16 MHz, and/or other channel bandwidth operating modes. Carrier sensing and/or network allocation vector (NAV) settings may depend on the status of the primary channel. If the primary channel is busy, for example, due to a STA (which supports only a 1 MHz operating mode) transmitting to the AP, the entire available frequency bands may be considered busy even though a majority of the frequency bands remains idle and may be available.
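The "largest common operating bandwidth" rule above can be expressed as a small helper. This is a hypothetical sketch (the function name and input representation are illustrative): each STA's supported bandwidth modes are intersected, and the widest mode common to all STAs becomes the primary channel bandwidth, so a single limited-capability STA constrains the whole BSS:

```python
def primary_channel_bandwidth(sta_modes):
    """Largest channel bandwidth (MHz) supported by every STA in the BSS.

    sta_modes -- iterable of sets, each listing the bandwidth modes (MHz)
                 that one STA supports. The STA with the most limited
                 modes constrains the primary channel.
    """
    common = set.intersection(*map(set, sta_modes))
    if not common:
        raise ValueError("no common operating bandwidth in the BSS")
    return max(common)
```

For example, if the AP supports {1, 2, 4, 8, 16} MHz but one MTC-type STA supports only the 1 MHz mode, the primary channel is 1 MHz wide, matching the 802.11ah example in the text.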
[0065] In the United States, the available frequency bands, which may be used by 802.11ah, are from 902 MHz to 928 MHz. In Korea, the available frequency bands are from 917.5 MHz to 923.5 MHz. In Japan, the available frequency bands are from 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11ah is 6 MHz to 26 MHz depending on the country code.
[0066] FIG. 1D is a system diagram illustrating the RAN 113 and the CN 115 according to an embodiment. As noted above, the RAN 113 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 113 may also be in communication with the CN 115.
[0067] The RAN 113 may include gNBs 180a, 180b, 180c, though it will be appreciated that the RAN 113 may include any number of gNBs while remaining consistent with an embodiment. The gNBs 180a, 180b, 180c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In an embodiment, the gNBs 180a, 180b, 180c may implement MIMO technology. For example, gNBs 180a, 180b may utilize beamforming to transmit signals to and/or receive signals from the WTRUs 102a, 102b, 102c. Thus, the gNB 180a, for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a. In an embodiment, the gNBs 180a, 180b, 180c may implement carrier aggregation technology. For example, the gNB 180a may transmit multiple component carriers to the WTRU 102a (not shown). A subset of these component carriers may be on unlicensed spectrum while the remaining component carriers may be on licensed spectrum. In an embodiment, the gNBs 180a, 180b, 180c may implement Coordinated Multi-Point (CoMP) technology. For example, WTRU 102a may receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
[0068] The WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum. The WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., including a varying number of OFDM symbols and/or lasting varying lengths of absolute time).
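The scalable-numerology relationship above may be illustrated with a short sketch. This hypothetical example (the function name is illustrative) uses the standard NR relation in which the subcarrier spacing scales as 15 · 2^μ kHz for numerology index μ, so a 14-symbol slot (normal cyclic prefix assumed) shrinks proportionally and 2^μ slots fit in a 1 ms subframe:

```python
def nr_numerology(mu):
    """Derived parameters for NR numerology index mu (normal cyclic prefix).

    Subcarrier spacing scales as 15 * 2**mu kHz; OFDM symbols shorten
    proportionally, so a 14-symbol slot shrinks and 2**mu slots fit in
    each 1 ms subframe.
    """
    scs_khz = 15 * 2**mu
    slots_per_subframe = 2**mu
    return {
        "scs_khz": scs_khz,
        "slots_per_subframe": slots_per_subframe,
        "slot_duration_ms": 1 / slots_per_subframe,
        "symbols_per_slot": 14,
    }
```

For example, μ = 0 gives the LTE-like 15 kHz spacing with 1 ms slots, while μ = 1 gives 30 kHz spacing with 0.5 ms slots, illustrating how TTI length varies across transmissions and cells.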
[0069] The gNBs 180a, 180b, 180c may be configured to communicate with the WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration. In the standalone configuration, WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c without also accessing other RANs (e.g., such as eNode-Bs 160a, 160b, 160c). In the standalone configuration, WTRUs 102a, 102b, 102c may utilize one or more of gNBs 180a, 180b, 180c as a mobility anchor point. In the standalone configuration, WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band. In a non-standalone configuration WTRUs 102a, 102b, 102c may communicate with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN such as eNode-Bs 160a, 160b, 160c. For example, WTRUs 102a, 102b, 102c may implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously. In the non-standalone configuration, eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
[0070] Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, dual connectivity, interworking between NR and E-UTRA, routing of user plane data towards user plane functions (UPFs) 184a, 184b, routing of control plane information towards access and mobility management functions (AMFs) 182a, 182b, and the like. As shown in FIG. ID, the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
[0071] The CN 115 shown in FIG. 1D may include at least one AMF 182a, 182b, at least one UPF 184a, 184b, at least one session management function (SMF) 183a, 183b, and at least one Data Network (DN) 185a, 185b. While each of the foregoing elements are depicted as part of the CN 115, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
[0072] The AMF 182a, 182b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N2 interface and may serve as a control node. For example, the AMF 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, support for network slicing (e.g., handling of different protocol data unit (PDU) sessions with different requirements), selecting a particular SMF 183a, 183b, management of the registration area, termination of NAS signaling, mobility management, and the like. Network slicing may be used by the AMF 182a, 182b, e.g., to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by the WTRUs 102a, 102b, 102c. For example, different network slices may be established for different use cases such as services relying on ultra-reliable low latency (URLLC) access, services relying on enhanced mobile broadband (eMBB) access, services for MTC access, and/or the like. The AMF 182a, 182b may provide a control plane function for switching between the RAN 113 and other RANs (not shown) that employ other radio technologies, such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
[0073] The SMF 183a, 183b may be connected to an AMF 182a, 182b in the CN 115 via an N11 interface. The SMF 183a, 183b may also be connected to a UPF 184a, 184b in the CN 115 via an N4 interface. The SMF 183a, 183b may select and control the UPF 184a, 184b and configure the routing of traffic through the UPF 184a, 184b. The SMF 183a, 183b may perform other functions, such as managing and allocating UE IP addresses, managing PDU sessions, controlling policy enforcement and QoS, providing downlink data notifications, and the like. A PDU session type may be IP-based, non-IP based, Ethernet-based, and the like.
[0074] The UPF 184a, 184b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, e.g., to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices. The UPF 184a, 184b may perform other functions, such as routing and forwarding packets, enforcing user plane policies, supporting multihomed PDU sessions, handling user plane QoS, buffering downlink packets, providing mobility anchoring, and the like.
[0075] The CN 115 may facilitate communications with other networks. For example, the CN 115 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 115 and the PSTN 108. In addition, the CN 115 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers. In an embodiment, the WTRUs 102a, 102b, 102c may be connected to a local Data Network (DN) 185a, 185b through the UPF 184a, 184b via the N3 interface to the UPF 184a, 184b and an N6 interface between the UPF 184a, 184b and the DN 185a, 185b.
[0076] In view of FIGs. 1A-1D, and the corresponding description of FIGs. 1A-1D, one or more, or all, of the functions described herein with regard to any of: WTRUs 102a-d, base stations 114a-b, eNode-Bs 160a-c, MME 162, SGW 164, PGW 166, gNBs 180a-c, AMFs 182a-b, UPFs 184a-b, SMFs 183a-b, DNs 185a-b, and/or any other element(s)/device(s) described herein, may be performed by one or more emulation elements/devices (not shown). The emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein. For example, the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
[0077] The emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment. For example, the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network. The emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
[0078] The one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network. For example, the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components. The one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
[0079] Embodiments disclosed herein are representative and do not limit the applicability of the apparatus, procedures, functions and/or methods to any particular wireless technology, any particular communication technology and/or other technologies. The term network in this disclosure may generally refer to one or more base stations or gNBs or other network entity which in turn may be associated with one or more Transmission/Reception Points (TRPs), or to any other node in the radio access network.
[0080] It is noted that, throughout example embodiments described herein, the terms “serving base station”, “base station”, “gNB”, collectively “gNB” may be used interchangeably to designate any network element such as, e.g., a network element acting as a serving base station. Embodiments described herein are not limited to gNBs and are applicable to any other type of base stations.
[0081] Federated Learning (FL) is a framework for distributed machine learning. In FL, training data is maintained locally at multiple distributed Federated Learning Clients (FLCs) (e.g., mobile devices). Each FLC performs local training (e.g., deep learning), generates local model updates, and sends local model updates to a Federated Learning Server (FLS), which could be an application server function or a network function in the cloud or edge, for example. The FLS aggregates local model updates received from FLCs and generates global model updates, which are sent to the participating FLCs for the next training round. Some advantages of federated learning may include: (1) improved data privacy-preservation since training data stays at FLCs; (2) reduced communication overhead since it is not required to collect/transmit training data to a central entity; and (3) improved learning speed since model training now leverages distributed computation resources at FLCs. However, FL involves the transmission of model updates between the FLS and FLCs, which introduces additional communication overhead compared to centralized machine learning. In addition, FL inherits some potential security issues and threats such as data poisoning and model poisoning attacks.
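For illustration only, the FLS aggregation step described above is commonly realized as federated averaging (FedAvg): the global model update is a weighted mean of the client models, with each client weighted by its local training-set size. The sketch below is a simplified, hypothetical example (the function name and list-of-parameters representation are illustrative):

```python
def fedavg(local_models, sample_counts):
    """Federated averaging of local model updates at the FLS.

    local_models  -- list of parameter vectors (one list per FLC)
    sample_counts -- list of local training-sample counts (one per FLC)

    Each client's parameters are weighted by its local sample count, so
    clients holding more training data influence the global model more.
    """
    total = sum(sample_counts)
    dim = len(local_models[0])
    return [
        sum(m[i] * n for m, n in zip(local_models, sample_counts)) / total
        for i in range(dim)
    ]
```

Note that only model parameters cross the network; the training data itself never leaves the FLCs, which is the privacy-preservation property cited above.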
[0082] FIG. 2 illustrates an example of the general federated learning process, according to an embodiment. As illustrated in the example of FIG. 2, the FLS and FLCs may jointly take the illustrated steps to perform a FL task. At 210, the FLS may select a set of FLCs to participate in a FL task. At 220, the FLS may configure the FL task to each selected FLC. At 230, the FLS may send an initial global model to each selected FLC. At 240, one or more of the FLCs (e.g., each FLC) may independently train the global model based on the received initial global model and its local data. At 250, after each training round, one or more of the FLCs (e.g., each FLC) may generate a local model update and send it to the FLS. At 260, the FLS may receive local model updates from one or more of the FLCs (e.g., all FLCs), aggregate them, and generate a new global model update. In some embodiments, the FLS may need to wait to receive local model updates from all FLCs before performing the model aggregation (i.e., synchronous FL) or the FLS may start the aggregation after receiving the local model updates from some of FLCs (i.e., asynchronous FL). The FLS may (re)select some new FLCs for next training round. At 270, similar to step 230, the FLS may send the global model updates to one or more of the FLCs (e.g., all FLCs). At 280, similar to step 240, one or more of the FLCs (e.g., each FLC) may start the next local training.
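The server-side aggregation at 260 can be sketched with a minimal FedAvg-style example. This sketch is a generic illustration rather than code from any embodiment; local model updates are assumed here to be dictionaries mapping parameter names to lists of floats.

```python
# Minimal FedAvg-style aggregation sketch (illustrative only).
# Each local model update is assumed to be a dict: parameter name -> list of floats.

def aggregate(local_updates):
    """Element-wise average of local model updates to form the global update."""
    n = len(local_updates)
    return {
        name: [sum(update[name][i] for update in local_updates) / n
               for i in range(len(local_updates[0][name]))]
        for name in local_updates[0]
    }

# Example: three FLCs report local updates for a single-layer model.
lmu1 = {"layer1": [1.0, 2.0]}
lmu2 = {"layer1": [3.0, 4.0]}
lmu3 = {"layer1": [5.0, 6.0]}
global_update = aggregate([lmu1, lmu2, lmu3])
# global_update["layer1"] == [3.0, 4.0]
```

In synchronous FL, such an aggregation would run only after all selected FLCs report their updates; in asynchronous FL, it could run over whichever updates have already arrived.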
[0083] FL can be leveraged in wireless networks. For example, the FLS can be deployed in the core and/or edge, while FLCs may be end devices and/or UEs. FIG. 3 illustrates an example FL application for wireless networks, where each UE in this example hosts an FLC, which collaboratively participates in training a global model.
[0084] In some embodiments, FL may be used for spectrum management. For instance, as shown in the example of FIG. 3, FL may be used to learn an accurate spectrum utilization model. In an embodiment, as illustrated in the example of FIG. 3, the UEs (i.e., UE-1, UE-2, UE-3 and UE-4) may each host an FLC to generate a local model update; local model updates may be sent to an edge server, which has an FLS mainly responsible for aggregating local model updates from UEs to generate a global model update. The global model update may be sent to one or more of the UEs (e.g., all UEs) to continue the next training round until the global model converges. Then, the converged final global model may be transmitted to one or more of the UEs, which can use the final global model to manage their spectrum access.
[0085] The 5G system architecture includes one or more UEs, a Radio Access Network (RAN), and a Core Network [1]. One of the design principles for the 5G system architecture is service-centric or service-based. As shown in the example of FIG. 4, a 5G Core Network may contain a variety of network functions, which work together to fulfill and provide needed services to the RAN, UEs, and Application Servers/Service Providers. A network function can access other network functions in request/response mode or subscription/notification mode. Before two network functions interact with each other, they first need to register with the Network Repository Function (NRF) so that they can discover each other via the NRF. Among these network functions, the Access and Mobility Management Function (AMF) is dedicated to managing the UE's access to the 5G system and its mobility, the Session Management Function (SMF) is responsible for establishing sessions between a UE and the 5G core network, and the Authentication Server Function (AUSF) takes charge of UE authentication. In addition, the Policy Control Function (PCF) provides policy rules for other control plane network functions and UEs; the PCF assigns an identifier for each created policy rule, which other control plane network functions and UEs use to refer to the corresponding policy rule. The User Plane Function (UPF) is the only core network function in the data plane that facilitates monitoring, managing, controlling, and redirecting user plane traffic flows, such as between a UE and an Application Server (AS). The Network Exposure Function (NEF) enables entities such as network applications and ASs, which are outside of the 5G System (5GS) and not in the same trusted domain, to access 5G control plane functions. The 5G core network also provides data storage and analytics services through functions like the Unified Data Management (UDM), Unified Data Repository (UDR), Unstructured Data Storage Function (UDSF) and Network Data Analytics Function (NWDAF).
Another critical feature of the 5G system is network slicing, which is facilitated by the Network Slice Selection Function (NSSF). Although these network functions are defined as separate logical entities, a particular scenario may require multiple network functions; for instance, UE mobility will need not only the AMF, but also the AUSF and SMF. For a given type of network function, multiple instances could be instantiated, and the NRF will maintain the information of each instantiated network function instance. With the emergence of edge computing, some network functions in the 5G Core Network, such as the UPF and NEF, could be deployed in an edge network that is much nearer to, and potentially co-located with, the RAN.
[0086] Some Artificial Intelligence (AI)-related functions and services specified by 3GPP are summarized as follows. 3GPP TS 23.288 [2] defines stage-2 architecture enhancements for the 5GS to support network data analytics services via the Network Data Analytics Function (NWDAF), a network function in the 5G core network. Note that multiple NWDAF instances could be deployed to edge networks in future wireless systems, such as 6G. Interacting with other network functions, the NWDAF provides a set of AI-related functionalities and services, some of which include: (1) data collection based on subscription to events of other network and/or application functions; (2) retrieval of data and information from other network functions; and (3) provision of on-demand data analytics to consumers (i.e., network and/or application functions). The services provided by the NWDAF can be exposed to and leveraged by other network functions in the 5G core network and by application functions (i.e., application servers).
[0087] 3GPP TS 22.261 [3] specifies Al model transfer requirements for three types of Al operations in 5GS: (1) Al operation splitting between Al endpoints; (2) Al model/data distribution and sharing over 5GS; and (3) distributed/federated learning over 5GS. 3GPP TS 22.261 also specifies Key Performance Indicators (KPIs) for AI/ML model transfer in 5GS, specifically: (1) uplink and downlink KPIs for split AI/ML inference between UE and network server/application functions; (2) KPIs for AI/ML model downloading; and (3) KPIs for federated learning between UE and network server/application functions.
[0088] 3GPP TR 23.700-80 [4] describes key issues and solutions for supporting AI/ML-based services in 5GS. The following seven key issues have been defined in 3GPP TR 23.700-80: monitoring of network resource utilization for support of application AI/ML operations; 5GC information exposure to UE; 5GC information exposure to authorized 3rd party for application layer AI/ML operation; enhancing external parameter provisioning; 5GC enhancements to enable application AI/ML traffic transport; Quality of Service (QoS) and policy enhancements; and 5GS assistance to federated learning operation.
[0089] 3GPP has an ongoing SA1 release-19 study item TR 22.876 [5] for the phase-2 study of AI/ML model transfer in future wireless systems. TR 22.876 has three main objectives: (1) identify the use cases for distributed AI inference; (2) identify the use cases for distributed/decentralized model training; and (3) analyze potential gaps in existing 5GS mechanisms to support distributed AI inference and model training. In particular, TR 22.876 will study and define the following aspects of distributed AI/ML: split AI/ML operation between AI/ML endpoints for AI inference by leveraging direct device connection; AI/ML model/data distribution and sharing by leveraging direct device connection; and distributed/federated learning by leveraging direct device connection.
[0090] 3GPP TS 23.304 V17.4.0 (2022-09) [6] defines an architecture for Proximity based Services (ProSe), where one UE acting as a relay (i.e., a UE-to-Network relay or UE-to-NW relay) can connect other remote UEs in its proximity to the network. In other words, a remote UE leverages the UE-to-NW relay (another UE) to access the 5GS. The 5GS ProSe functions defined in [6] include 5G ProSe direct discovery, 5G ProSe direct communication, and 5G ProSe UE-to-Network Relay. 5G ProSe direct discovery describes the process for nearby UEs (remote UEs and UE-to-NW relays) to use direct radio transmissions to discover each other. 5G ProSe direct communication refers to the process where multiple UEs in proximity communicate with each other directly without going through any other network nodes (e.g., a base station). 5G ProSe UE-to-NW relay provides functions to support connecting one or multiple remote UEs to the network via a UE-to-Network relay.
[0091] 3GPP TR 23.700-33 V1.1.0 (2022-10) [7] studies architecture enhancements to [6], such as UE-to-UE relay for unicast, enhancements of 5G ProSe UE-to-Network relay functionality, and path switching between a direct New Radio (NR) Uu communication path and a direct NR PC5 communication path.
[0092] A UE-to-UE relay is a 5G ProSe-enabled UE that provides functions to and connects a remote/end UE to another remote/end UE.
[0093] Several key issues and corresponding solutions have been described in [7]. These key issues include: support of UE-to-UE relay, support of path switching between two indirect network communication paths for UE-to-Network relaying with service continuity consideration, support of direct communication path switching between PC5 and Uu, support of path switching between a direct network communication path and an indirect network communication path for layer-2 UE-to-Network relay with session continuity consideration, support of multi-path transmission for UE-to-Network relay, support of PC5 service authorization and policy/parameter provisioning, and support of emergency for UE-to-Network relaying.
[0094] A typical FL deployment in future wireless networks may include "an FLS at the edge/core network" and "FLCs at UEs". FIG. 5 illustrates an example of the communication-related issues that may arise in this FL over wireless deployment. As illustrated in the example of FIG. 5, the wireless connectivity between an FLC (e.g., FLC2, FLC3) and the FLS may have insufficient capacity and might not support timely transmission of a local model update from this FLC to the FLS. However, in the example of FIG. 5, other nearby FLCs (e.g., FLC1) may have enough wireless capacity and operate correctly with the FLS. Thus, some UEs acting as FLCs may lose uplink connectivity to the FLS, while other FLCs may still have connectivity to the FLS. In addition, an FLC may have limited residual energy and might not be able to finish transmitting the local model update directly to the FLS.
[0095] UEs within the same proximity could directly communicate with each other, which can in turn be leveraged to improve the FL process. For example, one UE/FLC could help to relay local model updates from another UE/FLC to the FLS. To fully leverage direct communication links among UEs/FLCs, certain specific technical issues need to be solved. One such issue is that, during FL training, each FLC needs to send its new local model update to the FLS repeatedly after completing each local training round, which causes high communication overhead, especially when the number of FLCs and/or the number of required local training rounds is large. [0096] In view of the above, various embodiments provide apparatuses, systems, architectures and/or methods of proximity-aware interim model aggregation. According to an embodiment, instead of uploading local model updates to the FLS, an FLC may send its local model updates to a nearby FLC, e.g., over a direct link. The nearby FLC may act as a relay node and can receive local model updates from multiple other FLCs, aggregate the received local model updates optionally with its own local model update according to pre-configured interim model aggregation instructions, generate aggregated local model updates and aggregation records, and/or forward the aggregated local model updates and the aggregation records to the FLS.
[0097] FIG. 6 illustrates an example of the architectural design for proximity-aware FL with interim model aggregation, according to various embodiments. As illustrated in the example of FIG. 6, an existing FLC (FLC1/UE1) may act as a UE-to-NW relay for receiving local model updates from other nearby FLCs (e.g., FLC2/UE2), and aggregating those local model updates (which may be referred to as interim model aggregation). Then, this existing FLC (FLC1/UE1) may forward the aggregated local model updates to the FLS. Such interim model aggregation can reduce uplink traffic from FLCs to the FLS, for example, from (n+1) * size-of-model-update to 1 * size-of-model-update, where n is the number of other FLCs that are relayed by this existing FLC. [0098] For the scenario where the FLS is hosted at a UE, in the example of FIG. 6, UE1 may be a UE-to-UE relay. As an example, in the architectural design illustrated in the example of FIG. 6, the FLS can select and configure FLC1/UE1 as a UE-to-NW relay for other FLCs (e.g., FLC2/UE2) within a proximity; for this purpose, the FLS may need to check proximity information about existing FLCs/UEs from the Proximity Management Function (PMF). Alternatively, or additionally, FLC1/UE1 can directly request to become a UE-to-NW relay (or a UE-to-UE relay) for other FLCs. In addition, FLC2/UE2 (and other FLCs) may request the FLS to select FLC1/UE1 as its relay node. In some embodiments, a Proximity Management Function (PMF) can establish and configure a computation-aware relay relationship between FLC1 and other FLCs. Computation-aware relay enables other FLCs not only to request existing communication-focused ProSe relay services from FLC1 but also to request computation-oriented services (i.e., interim model aggregation) from FLC1.
The PMF could be a ProSe Function, a 5G Direct Discovery Name Management Function (DDNMF), a Policy Control Function (PCF), an Access and Mobility Management Function (AMF), another existing network function, a new network function, and/or a combination of those functions. According to certain embodiments, other FLCs (e.g., FLC2) may send their local model update over a direct link to FLC1 when they generate a new local model update. FLC1 may receive local model updates from other FLCs (e.g., FLC2) and may optionally aggregate them with its own local model update to generate an aggregated local model update. Then, FLC1 may forward the aggregated local model update to the FLS. It is noted that, in some embodiments, FLC1 might not perform local training but may just aggregate local model updates from other FLCs and forward the aggregated model update to the FLS.
[0099] FIG. 7 illustrates an example signaling diagram of a procedure for network node or server- controlled proximity-aware FL interim model aggregation, according to various embodiments. In some embodiments, the network node or server may be a FLS or an application server, for example. In the example of FIG. 7, the network node or server is depicted as an FLS; however, it should be understood that this is provided as one example and that other types of nodes or servers may also be used.
[0100] It is also noted that FIG. 7 is provided as one example of a method or procedure according to some embodiments, and that various modifications or changes may be made while remaining within the scope of example embodiments of the present disclosure. For example, one or more of the steps or procedures depicted in the example of FIG. 7 may be performed in a different order from that which is illustrated, may be omitted, and/or may be combined with one or more steps or procedures discussed elsewhere herein.
[0101] In the scenario illustrated in the example of FIG. 7, an existing FL task (e.g., FL-Task-A) is in progress between the FLS and a set of FLCs. The FLS may select FLC1 as a relay node for two other FLCs (i.e., FLC2 and FLC3); note that FLC1 could be a relay node for more than two FLCs (not illustrated in the example of FIG. 7). With FLC1 as the relay node, FLC2 and FLC3 may send their local model updates to FLC1 instead of sending them to the FLS. Then, FLC1 may aggregate local model updates received from FLC2 and FLC3 with its own local model update to generate an aggregated local model update, which is referred to as interim model aggregation. FLC1 may send the aggregated local model update to the FLS, where the final model aggregation will be performed. With the proposed interim model aggregation, the FLS receives (e.g., only receives) one model (i.e., the aggregated local model update) from FLC1, instead of three local model updates respectively from FLC1, FLC2, and FLC3; thus, the uplink traffic from FLCs to the FLS is greatly reduced. Furthermore, the traffic due to the model exchange from FLC2/FLC3 to FLC1 is throttled within the proximity and over the direct link, which alleviates the traffic pressure from FLCs/UEs to their base station. The procedures 701 to 712 in FIG. 7 may also be performed before an FL task is executed; in other words, 701-712 of FIG. 7 can be used to determine the relay FLC and install interim model aggregation instructions at the relay FLC before the FL task is installed at each FLC.
[0102] For the scenario where the FLS is hosted at a UE, UE1 in the example of FIG. 7 will be a UE-to-UE relay, but the same procedure depicted in the example of FIG. 7 may apply.
[0103] As illustrated in the example of FIG. 7, at 701, the FLS may send a request to a network element, such as a Proximity Management Function (PMF), to retrieve proximity information and UE context information about some existing FLCs. This request may serve one or more of the following purposes:
• Retrieve the proximity information and UE context information about UE1 and its nearby UEs, which are target UEs. For this purpose, this request may contain (e.g., may only contain) the identifier of UE1;
• If the FLS knows that FLC1 and other FLCs (e.g., FLC2 and FLC3) are likely within the proximity, this request may be used to retrieve the proximity and UE context information about those FLCs (e.g., all of those FLCs), which are target UEs. As such, this request may contain the identifiers of all those FLCs (e.g., FLC1, FLC2, and FLC3);
• If the FLS does not know the location of existing FLCs, this request may contain the identifier of any number of existing FLCs as selected by the FLS, which are target UEs; and/or
• The FLS may retrieve the proximity information about a specific type of UEs in a region (e.g., within a building, on a segment of a highway, etc.), which are target UEs. Then, this request may contain the region information and the type of UEs (e.g., robots, vehicles, smart phones, etc.).
[0104] In addition, the request in step 701 may also contain the identifier of the FLS and the identifier of the existing FL task (e.g., FL-Task-A) so that the PMF can use them to authenticate if the FLS has the access rights to retrieve proximity information about existing FLCs.
[0105] According to an embodiment, prior to step 701, either FLC2 or FLC3 may send a request to the FLS asking the FLS to select a relay node with an interim model aggregation function for them; thus, this request can trigger the FLS to start step 701. [0106] As illustrated in the example of FIG. 7, at 702, the PMF may process the request from step 701. If the FLS is allowed to retrieve the proximity information of the target UEs, the PMF may look up the corresponding proximity information and may return it in a response. The PMF may send the response to the FLS. In general, the response may contain the proximity information and UE context information about the target UEs as indicated in step 701, together with their identifiers. The PMF may return and expose the following proximity information to the FLS:
• The current location of a UE (e.g., UE1/FLC1, UE2/FLC2, or UE3/FLC3);
• If two or more UEs (e.g., UE1/FLC1, UE2/FLC2, and/or UE3/FLC3) are currently within a proximity and could reach each other directly;
• If a UE (e.g., UE1/FLC1, UE2/FLC2, or UE3/FLC3) has been authorized to use ProSe services;
• If two UEs (e.g., UE1/FLC1 and UE2/FLC2) have discovered each other using direct device discovery;
• If two UEs (e.g., UE1/FLC1 and UE3/FLC3) have established a direct communication link;
• A list of other UEs that a particular UE (e.g., UE1/FLC1) has discovered using direct device discovery since a time tl;
• A list of other UEs that a particular UE (e.g., UE1/FLC1) has established a direct communication link with since a time t2;
• A list of other UEs that a particular UE (e.g., UE1/FLC1) had direct communications with within a time interval in the past; and/or
• A list of other UEs that a particular UE (e.g., UE1/FLC1) discovered within a time interval in the past.
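For illustration, the proximity information listed above could be represented as a structure such as the following; every key name and value here is a hypothetical choice made for this sketch, not a field of any 3GPP-defined interface.

```python
# Hypothetical representation of the PMF's response at 702 (names are assumptions).
proximity_response = {
    "target_ues": {
        "UE1": {"location": "cell-17", "prose_authorized": True},
        "UE2": {"location": "cell-17", "prose_authorized": True},
        "UE3": {"location": "cell-18", "prose_authorized": True},
    },
    # Pairs of UEs currently within a proximity that could reach each other directly.
    "in_proximity": [["UE1", "UE2"], ["UE1", "UE3"]],
    # Pairs of UEs that have already established a direct communication link.
    "direct_links": [["UE1", "UE3"]],
}

def reachable_via(relay, targets, pairs):
    """Check whether each target UE is in direct proximity of the candidate relay."""
    normalized = [sorted(p) for p in pairs]
    return all(sorted([relay, t]) in normalized for t in targets)

# The FLS could use such a check when preselecting UE1/FLC1 as the relay (step 703).
ok = reachable_via("UE1", ["UE2", "UE3"], proximity_response["in_proximity"])
# ok == True
```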
[0107] As illustrated in the example of FIG. 7, at 703, according to the response received at 702, the FLS may preselect FLC1/UE1 as the relay node (e.g., if FLC1/UE1 is in the proximity of both UE2 and UE3). At 704, the FLS may send a message to FLC1 to request that FLC1 serve as a relay node for FLC2 and FLC3. This message may contain the identifier of the existing FL task (e.g., FL-Task-A), the identifiers of FLC2/UE2 and FLC3/UE3, and/or the identifier of the FLS. One or multiple of the following parameters may also be contained in the message for FLC1 to know or estimate how long it needs to act as a relay node: Relaying Time Window, which may indicate the start time and end time of the relay services that FLC1 will provide to FLC2 and FLC3; Current Global Model Accuracy, which indicates the accuracy of the current global model; and/or Remaining Training Rounds, which indicates a number of remaining training rounds to be completed.
[0108] As illustrated in the example of FIG. 7, at 705, FLC1 may check if FLC2 and FLC3 are in its proximity. For example, FLC1 may broadcast a short message containing the identifiers of FLC2 and FLC3 over the local direct link; when FLC2 and FLC3 receive the short message, they may send, and FLC1 may receive, an acknowledgement containing their identifiers and indicating their reachability. Alternatively, if FLC1 does not receive an acknowledgement, FLC1 may determine that it cannot reach FLC2 and/or FLC3; as a result, FLC1 may simply include a message such as: "Cannot reach FLC2 and/or FLC3" in the response to the FLS depicted at 707. In this case, the FLS might not send the confirmation at 708 and the whole procedure may end. [0109] As illustrated in the example of FIG. 7, at 706, FLC1 may determine if it agrees to be a relay node for FLC2 and FLC3. As a relay node, FLC1 not only may need to receive and/or store local model updates from FLC2 and FLC3, but may also need to aggregate them. As a result, FLC1 may need to spend both computation and storage resources for processing local model updates from FLC2 and FLC3. From the message received at 704, FLC1 may know the FL task and can estimate the size of a local model update (i.e., the required storage resource, processing CPU load, processing time, etc.). From the message received at 704 (e.g., Relaying Time Window or Current Global Model Accuracy), FLC1 can also estimate the required computation resource. To make the correct decision, FLC1 may also consider the extra energy consumption from providing relay services to FLC2 and FLC3. Based on one or more of these multiple metrics (e.g., the required storage resource, the required computation resource, the required energy consumption, etc.)
that may result from providing relaying services and interim model aggregation to other FLCs (e.g., FLC2 and FLC3), FLC1 may agree or refuse to be a relay node for other FLCs (e.g., FLC2 and FLC3), for example, based on its local policies; as an example, if the required computation resource exceeds a threshold or what FLC1 can afford, FLC1 may refuse to be a relay node; as another example, if each of those metrics is below a threshold or FLC1 can afford it, FLC1 may agree to be a relay node and provide interim model aggregation.
[0110] As illustrated in the example of FIG. 7, at 707, FLC1 may send a response to the FLS containing the decision made in step 706. As an example, this response may contain a list of FLCs that FLC1 agreed to be a relay node for and provide interim model aggregation to. Alternatively, if FLC1 does not agree to be a relay node for any FLC, this response may simply contain "a rejection"; in this case, the procedure may end without the execution of the following steps. [0111] As illustrated in the example of FIG. 7, at 708, the FLS may send a confirmation to FLC1. Even if FLC1 agreed to be a relay node for multiple or all FLCs, the FLS may select a subset from the list of those FLCs as contained in the response at 707; the identifiers of the selected FLCs in the subset may be contained or indicated in the confirmation message sent at 708. In this example, for purposes of illustration, it may be assumed that FLC2 and FLC3 are the selected FLCs in the subset. In addition, the confirmation sent at 708 may also contain one (or multiple non-conflicting) interim model aggregation instructions, such as one or more of the following example instructions: a) One example of an interim model aggregation instruction: treat local model updates from FLC2 and FLC3 equally; aggregate them without considering FLC1's local model update or without FLC1's local training; send the aggregated local model update to the FLS. b) Another example of an interim model aggregation instruction: treat local model updates from FLC1, FLC2 and FLC3 equally; aggregate them all together; send the aggregated local model update to the FLS. c) Another example of an interim model aggregation instruction: treat local model updates from FLC1, FLC2 and FLC3 differently (e.g., assign different weights to each: w1 to FLC1, w2 to FLC2, w3 to FLC3); aggregate local model updates from FLC1, FLC2, FLC3 according to their weights; send the aggregated local model update to the FLS.
d) Another example of an interim model aggregation instruction: once there are 2 local model updates available (e.g., from FLC1 and FLC2, or from FLC1 and FLC3, or from FLC2 and FLC3), aggregate them equally with the same weight or proportionally with different weights; send the aggregated local model update to the FLS. e) Another example of an interim model aggregation instruction: if the model being trained is a deep neural network, aggregate the first k layers (or the last k layers, or the k layers in the middle of the deep neural network) of local model updates from FLC1, FLC2, and FLC3, equally with the same weight or proportionally with different weights; combine the aggregated first k layers (or the last k layers, or the k layers in the middle of the deep neural network) with the other layers from FLC1 to form an aggregated local model update; send the aggregated local model update to the FLS.
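Instructions (c) and (e) above can be sketched as follows; the representation of a local model update as a list of per-layer parameter lists, and the function names, are assumptions made only for this illustration.

```python
# Illustrative sketches of interim model aggregation instructions (c) and (e).
# A local model update (LMU) is assumed to be a list of per-layer parameter lists.

def weighted_aggregate(lmus, weights):
    """Instruction (c): weighted average of complete local model updates."""
    total = sum(weights)
    num_layers = len(lmus[0])
    return [
        [sum(w * lmu[layer][i] for lmu, w in zip(lmus, weights)) / total
         for i in range(len(lmus[0][layer]))]
        for layer in range(num_layers)
    ]

def partial_aggregate(lmus, weights, k, lmu1_full):
    """Instruction (e): aggregate only the first k layers; keep FLC1's other layers."""
    aggregated_head = weighted_aggregate([lmu[:k] for lmu in lmus], weights)
    return aggregated_head + lmu1_full[k:]

lmu1 = [[1.0], [10.0]]   # FLC1's update: two layers
lmu2 = [[3.0], [30.0]]   # FLC2's update
agg = weighted_aggregate([lmu1, lmu2], [1.0, 1.0])
# agg == [[2.0], [20.0]]
partial = partial_aggregate([lmu1, lmu2], [1.0, 1.0], k=1, lmu1_full=lmu1)
# partial == [[2.0], [10.0]]
```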
[0112] According to some embodiments, each Interim Model Aggregation Instruction j (IMA j) may contain, for example, one or more of the following parameters:
• A unique identifier of IMA j ;
• The list of target FLCs whose Local Model Updates (LMUs) shall be aggregated (e.g., FLC1, FLC2, and/or FLC3);
• The aggregation mode (synchronous or asynchronous). For synchronous aggregation, FLC1 shall receive LMUs from all target FLCs before aggregating them; for asynchronous aggregation, FLC1 does not have to wait for LMUs from all target FLCs, but only for some of them, as specified in the aggregation conditions;
• The aggregation conditions for FLC1 to execute the interim model aggregation according to IMA j (e.g., once two LMUs are available from any two FLCs from the list of target FLCs, when FLCs are at or near a particular location, a time window, etc.);
• The interim model aggregation algorithm for IMA j (e.g., average, weighted average, etc.) including weights for each FLC from the list of target FLCs; and/or
• The destination address which the aggregated local model update shall be sent to (e.g., the FLS).
[0113] Each interim model aggregation instruction may have a unique identifier within the FLS and FLC1. FLC1 may store the interim model aggregation instructions.
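As one possible encoding, an interim model aggregation instruction carrying the parameters listed above might be represented as follows; all field names and example values are assumptions made for this sketch and are not drawn from any specification.

```python
# Hypothetical encoding of an Interim Model Aggregation Instruction (IMA j).
from dataclasses import dataclass, field

@dataclass
class InterimModelAggregationInstruction:
    instruction_id: str          # unique identifier within the FLS and FLC1
    target_flcs: list            # FLCs whose LMUs shall be aggregated
    aggregation_mode: str        # "synchronous" or "asynchronous"
    aggregation_condition: str   # e.g., "two LMUs available from any target FLCs"
    algorithm: str = "weighted_average"
    weights: dict = field(default_factory=dict)  # per-FLC aggregation weights
    destination: str = "FLS"     # where the aggregated LMU shall be sent

# Example instruction the FLS might configure at FLC1 in the confirmation at 708.
ima = InterimModelAggregationInstruction(
    instruction_id="IMA-1",
    target_flcs=["FLC1", "FLC2", "FLC3"],
    aggregation_mode="asynchronous",
    aggregation_condition="two LMUs available from any target FLCs",
    weights={"FLC1": 0.5, "FLC2": 0.25, "FLC3": 0.25},
)
```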
[0114] As illustrated in the example of FIG. 7, the FLS may send a notification to FLC2 as shown at 709a, and to FLC3 as shown at 709b. The notification(s) may contain or indicate the identifier of FLC1/UE1 and the identifier of the existing FL task (FL-Task-A). The notification(s) may also contain or indicate the identifier of the FLS. The notification(s) may also contain or indicate a value k, which indicates that FLC2 and FLC3 only need to send the first k layers (or the last k layers, or the k layers in the middle of a deep neural network) of their local model update to FLC1, assuming the trained model is a deep neural network. It is noted that the notifications 709a and/or 709b are optional.
[0115] As illustrated in the example of FIG. 7, FLC1 and FLC2 may discover each other via step 710a, and FLC1 and FLC3 may discover each other via step 710b (e.g., using 3GPP direct device discovery). Since the relay services to be provided by FLC1 are not only communication-related, but also require computation, FLC1 and FLC2 (and FLC1 and FLC3) may exchange a Relaying Computation Requirement (RCR) during the process of discovering each other. For example, the RCR may contain any one or more of the following information:
• Computation Type: “Local Model Aggregation” for this case.
• Computation Frequency: The frequency of the computation operation (e.g., one local model update aggregation per second).
• Computation Size: number of operations for each computation (e.g., the product of the size of local model update and the number of models being aggregated for this case).
• Storage Size: size of resulting storage. For this case, it is the size of the local model updates that FLC1 needs to collect and store before it can aggregate them, which is dependent on the interim model aggregation instructions that the FLS configured at FLC1 in the message sent at 708.
• Computation Time Window: The time window for performing the type of computation as indicated by "Computation Type".
• Computation Waiting Time: The maximum waiting time or latency for a requested computation to be performed.
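An RCR carrying the information above might look like the following sketch; the key names, numeric values, and admission-check logic are all assumptions made for illustration.

```python
# Hypothetical RCR that FLC2 might send to FLC1 during discovery or
# direct link establishment (all names and values are illustrative).
MODEL_SIZE_BYTES = 4_000_000  # assumed size of one local model update

rcr_from_flc2 = {
    "computation_type": "Local Model Aggregation",
    "computation_frequency": 1.0,              # aggregations per second
    "computation_size": MODEL_SIZE_BYTES * 3,  # model size x number of models
    "storage_size": MODEL_SIZE_BYTES * 2,      # LMUs buffered before aggregation
    "computation_waiting_time_ms": 500,        # maximum aggregation latency
}

def accept_rcr(rcr, max_compute, max_storage):
    """FLC1's admission check: reject an RCR that exceeds what it can afford."""
    return (rcr["computation_size"] <= max_compute
            and rcr["storage_size"] <= max_storage)

# FLC1 may reject the direct link establishment if this check fails (see 711).
accepted = accept_rcr(rcr_from_flc2, max_compute=20_000_000, max_storage=10_000_000)
# accepted == True
```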
[0116] If FLC1 has already discovered FLC2 and FLC3 at 705, the discovery at 710a and/or 710b can be skipped. However, if FLC1 and FLC2 (and/or FLC1 and FLC3) need to exchange RCR, the discovery at 710a and/or 710b may still be performed since they did not exchange RCR at 705; optionally, FLC1 and FLC2 (and/or FLC1 and FLC3) may exchange RCR at 711.
[0117] As illustrated in the example of FIG. 7, at 711a, FLC1 and FLC2 may establish a computation-aware direct link (e.g., using 3GPP direct link establishment), during which FLC1 and FLC2 may exchange RCR to achieve computation awareness as a part of direct link establishment; similarly, FLC1 and FLC3 may establish a computation-aware direct link at 711b. During this process, RCR, if it was not exchanged in step 710, may be exchanged between FLC1 and FLC2/FLC3.
[0118] For example, FLC2 may send a direct link establishment request to FLC1 containing FLC2's RCR on FLC1, and FLC1 may receive FLC2's RCR. In one example, FLC1 may reject FLC2's direct link establishment (e.g., if the computation size and/or storage size is over what FLC1 can or is willing to afford). If FLC1 rejects FLC2's direct link request, FLC2 may need to find another relaying UE as the relaying FLC to perform interim model aggregation, and FLC1 may send a rejection notification to FLC2 and the FLS. Similarly, FLC3 may send a direct link establishment request to FLC1 containing FLC3's RCR on FLC1, and FLC1 may receive FLC3's RCR. In one example, FLC1 may reject FLC3's direct link establishment (e.g., if the computation size and/or storage size is over what FLC1 can or is willing to afford). If FLC1 rejects FLC3's direct link request, FLC3 may need to find another relaying UE as the relaying FLC to perform interim model aggregation, and FLC1 may send a rejection notification to FLC3 and the FLS.
[0119] As illustrated in the example of FIG. 7, at 712, after the direct link is established between FLC1 and FLC2/FLC3, FLC1 may send a notification to the FLS (at 712a) and the PMF (at 712b). The notification may contain information about each established direct link (e.g., RCR, the identifiers of the sender and the receiver of the direct link (e.g., FLC1 and FLC2, or FLC1 and FLC3)).
[0120] As illustrated in the example of FIG. 7, at 713, FLC1 may generate a new local model update LMU1. LMU1 may need to be aggregated with model updates from FLC2 and/or FLC3 according to the interim model aggregation instructions received at 708. FLC1 may store LMU1 locally and/or may send LMU1 to the FLS. At 714, FLC2 may generate a new local model update LMU2 (at 714a) and FLC3 may also generate a new local model update LMU3 (at 714b). At 715, FLC2 may send LMU2 to FLC1 via direct link (at 715a) and FLC3 may send LMU3 to FLC1 via direct link (at 715b). FLC2 and FLC3 may also send the following additional information together with local model updates to FLC1:
• The accuracy (or the loss function value) of LMU2 and LMU3;
• The model compression scheme and related parameters if used to compress LMU2 and LMU3; for example, LMU2 could be just the first k layers (or the last k layers, or k layers in the middle of a deep neural network) of the original local model update generated by FLC2; in that case, the value k will be sent together with LMU2 to FLC1; and
• The data distribution properties of the training data that was used to generate LMU2 and LMU3, if FLC2 and FLC3 would like to expose such information to FLC1.
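The accompanying items above could be bundled with each local model update as a small metadata record, sketched below; the message layout and names are illustrative assumptions only, not defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LocalModelUpdateMessage:
    """Illustrative message a client (e.g., FLC2) might send to the relay FLC1."""
    sender_id: str
    layers: list                              # the (possibly partial) local model update
    accuracy: Optional[float] = None          # accuracy or loss-function value of the update
    first_k_layers: Optional[int] = None      # k, if only the first k layers were sent
    data_distribution: Optional[dict] = None  # exposed only if the sender chooses to

# FLC2 sends only the first layer of a two-layer model, so k = 1 travels with it
msg = LocalModelUpdateMessage("FLC2", layers=[[0.1, 0.2]], accuracy=0.91, first_k_layers=1)
```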
[0121] In some embodiments, depending on how fast FLC1, FLC2, and FLC3 can generate new local model updates, step 713 may occur after step 714. Also, step 715a could take place before step 715b.
[0122] As illustrated in the example of FIG. 7, at 716, FLC1 may receive LMU2 and LMU3, respectively, from FLC2 and FLC3. FLC1 may aggregate those local model updates (e.g., LMU1, LMU2, LMU3) according to the interim model aggregation instructions received at 708. For example, if the aggregation instructions specify that FLC1 just needs to aggregate local model updates from some (not all) of the other FLCs, FLC1 might not need to wait to receive local model updates from all other FLCs before performing interim model aggregation. For example, an aggregation instruction may say “FLC1 only needs to aggregate LMU1 and LMU2”; as such, in some embodiments, step 716 could occur right after step 715a but before step 715b; then, when FLC1 receives LMU3 at 715b, it can simply forward LMU3 to the FLS or aggregate LMU3 with another local model update (e.g., LMU4 received from another FLC4/UE4) to generate an aggregated “LMU3+LMU4” to be sent to the FLS.
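The forward-or-aggregate behavior described in this paragraph can be sketched as follows, with model updates represented as flat weight lists; the instruction encoding (a list of target clients) and the helper names are illustrative assumptions.

```python
def handle_incoming_update(sender, update, targets, pending, local_update, relay_id="FLC1"):
    """Relay-side sketch: aggregate an update if the instruction targets its
    sender; otherwise forward it onward (e.g., LMU3 when the instruction says
    only LMU1 and LMU2 are to be aggregated)."""
    if sender not in targets:
        return ("forward_to_fls", update)
    pending[sender] = update
    needed = [t for t in targets if t != relay_id]
    if all(t in pending for t in needed):
        # all targeted updates are present; average them with the relay's own update
        updates = [local_update] + [pending[t] for t in needed]
        aggregated = [sum(ws) / len(updates) for ws in zip(*updates)]
        return ("send_aggregated", aggregated)
    return ("wait", None)
```

Here an instruction such as “FLC1 only needs to aggregate LMU1 and LMU2” would be encoded as `targets = ["FLC1", "FLC2"]`, so an arriving LMU3 is simply forwarded.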
[0123] By performing interim model aggregation, FLC1 can generate an aggregated local model update (i.e., aggLMU) and an associated model aggregation record. For example, the model aggregation record may contain any one or more of the following information: the identifier of the corresponding interim model aggregation instruction used to generate aggLMU; the identifiers of FLC1 and other FLCs whose local model updates have been aggregated in aggLMU; the creation time of each LMU that has been aggregated into aggLMU; the model accuracy of each local model being aggregated in aggLMU; the number of data samples used to generate each local model being aggregated in aggLMU; and/or the creation time of aggLMU.
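Such a record might be assembled alongside aggLMU as sketched below; the field names are illustrative, and `lmu_metadata` is an assumed mapping from each contributor to its (creation time, accuracy, number of samples).

```python
import time

def build_aggregation_record(instruction_id, contributors, lmu_metadata):
    """Sketch of the model aggregation record FLC1 might attach to aggLMU."""
    return {
        "instruction_id": instruction_id,    # which instruction produced aggLMU
        "contributors": list(contributors),  # FLC1 plus the FLCs it aggregated
        "lmu_creation_times": {c: lmu_metadata[c][0] for c in contributors},
        "lmu_accuracies": {c: lmu_metadata[c][1] for c in contributors},
        "lmu_num_samples": {c: lmu_metadata[c][2] for c in contributors},
        "agg_creation_time": time.time(),    # when aggLMU itself was created
    }
```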
[0124] As illustrated in the example of FIG. 7, at 717, FLC1 may send aggLMU and the model aggregation record to the FLS. From the model aggregation record, the FLS is able to know how aggLMU was generated and which interim model aggregation instruction it was based on. Then, the FLS may decide how aggLMU will be further aggregated with other aggLMUs from other FLCs acting as relay nodes and/or LMUs from other FLCs that send local model updates directly to the FLS.
[0125] FIG. 8 illustrates an example signaling diagram of a procedure for core network node or network function-coordinated proximity-aware FL interim model aggregation, according to various embodiments. In some embodiments, the core network node or network function may be a PMF or other NF, for example. In the example of FIG. 8, the core network node or NF is depicted as a PMF; however, it should be understood that this is provided as one example and that other types of nodes or servers may also be used while remaining within the scope of example embodiments.
[0126] It is also noted that FIG. 8 is provided as one example of a method or procedure according to some embodiments, and that various modifications or changes may be made while remaining within the scope of example embodiments of the present disclosure. For example, one or more of the steps or procedures depicted in the example of FIG. 8 may be performed in a different order from that which is illustrated, may be omitted, and/or may be combined with one or more steps or procedures discussed elsewhere herein (e.g., may be combined with or modified by one or more elements of FIG. 7). Additionally, it is noted that, although the example of FIG. 8 (and other examples herein) may depict the PMF as the entity coordinating the model aggregation, the PMF may be replaced by other network elements or nodes (e.g., core network nodes) in some embodiments. As such, the PMF is provided as one example.
[0127] In the scenario depicted in the example of FIG. 8, an existing FL task (e.g., FL-Task-A) is in progress between the FLS and a set of FLCs (e.g., FLC1, FLC2, and FLC3). The FLS may sense the increased latency in receiving local model updates from FLCs, and interim model aggregation can help to reduce uplink traffic and latency in transmitting local model updates from FLCs to the FLS. As a result, the FLS may request the PMF to group FLCs based on the proximity information that the PMF maintains. As an example, the PMF may group FLC1, FLC2, and FLC3 together; the PMF may also select FLC1 as the relay node for both FLC2 and FLC3. Then, the PMF may instruct FLC1 and FLC2/FLC3 to discover each other and establish direct links between FLC2 and FLC1, and between FLC3 and FLC1. Then, FLC2 and FLC3 may send their local model updates to FLC1, which as an example may be aggregated by FLC1 with FLC1’s local model update, according to interim model aggregation instructions that the FLS configures to the relay node FLC1. Steps 801 to 812 in FIG. 8 may also be performed before a FL task is executed; in other words, steps 801-812 of FIG. 8 can be used to determine the relay FLC and install interim model aggregation instructions to the relay FLC before the FL task is installed to each FLC.
[0128] For the scenario where the FLS is hosted at a UE, UE1 in the example of FIG. 8 will be a UE-to-UE relay, but the same procedure in FIG. 8 can apply.
[0129] As illustrated in the example of FIG. 8, at 801, the FLS may send a message to the PMF to request the PMF to group FLCs. This message may contain or indicate the identifier of the FLS, the identifiers of UEs that host FLCs, the identifiers of FLCs, the identifier of the corresponding FL task (e.g., FL-Task-A), and/or the size of a local model, etc.
[0130] In the example of FIG. 8, at 802, the PMF may check the proximity information of the UEs/FLCs contained in the request at 801. Based on the proximity information, the PMF may group the FLCs within the same proximity and the PMF may select a UE/FLC as a relay node for each group. As one example, FLC1, FLC2, and FLC3 may be selected by the PMF to form a group with FLC1 as the relay node separately for FLC2 and FLC3, because FLC2/FLC3 can reach FLC1 over a direct link.
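One plausible (greedy) realization of this grouping step, assuming the PMF's proximity information is available as a direct-link reachability map, is sketched below; the function is an illustrative assumption, not a procedure defined by the disclosure.

```python
def group_by_reachability(clients, reachable):
    """Greedy sketch: each group is a candidate relay plus the clients that can
    reach it over a direct link. `reachable` maps a client to the set of peers
    it can reach directly."""
    groups = []
    ungrouped = set(clients)
    for relay in clients:                 # iterate in the given (priority) order
        if relay not in ungrouped:
            continue
        members = {c for c in ungrouped
                   if c != relay and relay in reachable.get(c, set())}
        if members:
            groups.append({"relay": relay, "members": members})
            ungrouped -= members | {relay}
    return groups
```

With FLC2 and FLC3 each able to reach FLC1, this sketch yields a single group with FLC1 as the relay, matching the example above.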
[0131] As illustrated in the example of FIG. 8, at 803, the PMF may send a request to FLC1 asking FLC1 to be the relay node for FLC2 and FLC3. In an embodiment, this request may contain the same information included in the request at 704 of FIG. 7.
[0132] At 804, as discussed above with respect to step 706 in the example of FIG. 7, FLC1 may determine if it agrees to be a relay node for FLC2 and FLC3. Thus, the decision at 804 may be the same or similar to that discussed above with respect to step 706.
[0133] As illustrated in the example of FIG. 8, at 805, FLC1 may send a response to the PMF indicating it agrees to be the relay node for FLC2, FLC3, or both of them. For this purpose, the response may contain or indicate the identifiers of the UEs/FLCs for which FLC1 agrees to be the relay node.
[0134] In the example of FIG. 8, the PMF may send a notification to FLC2 at 806a and to FLC3 at 806b. The notification(s) may contain or indicate the same or similar information as contained in step 709 of FIG. 7.
[0135] In the example of FIG. 8, the discovery steps illustrated at 807a and 807b may be the same as or similar to that of steps 710a and 710b, respectively, in FIG. 7. At 808a, FLC1 and FLC2 may establish a computation-aware direct link and, at 808b, FLC1 and FLC3 may establish a computation-aware direct link. Steps 808a and 808b may be the same or similar to that of steps 711a and 711b discussed above with respect to FIG. 7. At 809, FLC1 may send a notification to the PMF. The notification sent at 809 may be the same or similar to that of step 712b discussed above with respect to FIG. 7. It is noted that, in certain embodiments, steps 803-809 may be repeated for each group of UEs/FLCs which the PMF determined at 802.
[0136] As illustrated in the example of FIG. 8, at 810, the PMF may send a response to the FLS indicating that the request sent by the FLS at 801 has been processed. As an example, this response may contain the identifiers of all UEs/FLCs in each group and indicate which UE/FLC is the relay node for each group. At 811, the FLS may send a notification to FLC1, which may contain or indicate the same or similar information as step 708 of FIG. 7 (e.g., the interim model aggregation instructions discussed above). As an alternative to this step, the FLS may embed or indicate “interim model aggregation instructions” in step 801, which will then be forwarded to each relay node (e.g., FLC1) by the PMF in the request sent at 803.
[0137] As further illustrated in the example of FIG. 8, the FLS may send a notification to FLC2 as shown at 812a, and to FLC3 as shown at 812b. These notifications may be the same as or similar to that of notifications 709a and 709b, respectively, discussed above regarding FIG. 7. Steps 812a and/or 812b may not be needed if FLC2 and FLC3 have obtained sufficient information from step 806 (e.g., if step 806 and step 812 contain the same information). Steps 813, 814, 815, 816, and 817 of FIG. 8 may be the same as or similar to steps 713, 714, 715, 716, and 717, respectively, as discussed above in connection with FIG. 7.
[0138] FIG. 9 illustrates an example signaling diagram for a procedure of relay-node-initiated proximity-aware FL interim model aggregation, according to various embodiments. It is noted that FIG. 9 is provided as one example of a method or procedure according to some embodiments, and that various modifications or changes may be made while remaining within the scope of example embodiments of the present disclosure. For example, one or more of the steps or procedures depicted in the example of FIG. 9 may be performed in a different order from that which is illustrated, may be omitted, and/or may be combined with one or more steps or procedures discussed elsewhere herein (e.g., may be combined with or modified by one or more elements of FIGs. 7 and/or 8).
[0139] In the scenario depicted in the example of FIG. 9, an existing FL task (e.g., FL-Task-A) is in progress between the FLS and a set of FLCs (e.g., FLC1, FLC2, and FLC3). Through direct discovery, FLC1 may find nearby UEs/FLCs (e.g., FLC2 and FLC3) that participate in the same FL task. FLC1 has sufficient computation and storage resources and may decide to be a relay node with an interim model aggregation function for FLC2 and FLC3. In order to become a relay node for FLC2 and FLC3, FLC1 may send a request to the FLS to get its approval. The FLS may contact the PMF, which authorizes whether FLC1 can be a relay node for UE2/FLC2 and UE3/FLC3. After the authorization, the FLS may send interim model aggregation instructions to FLC1. Then, FLC1 may establish a direct link with FLC2/FLC3, through which FLC2/FLC3 can send their local model updates to FLC1. FLC1 may aggregate local model updates from FLC2/FLC3 with its own local model update, may generate an aggregated local model update, and may send the aggregated local model update to the FLS. Steps 901 to 909 in FIG. 9 may also be performed before a FL task is executed; in other words, steps 901-909 of FIG. 9 may be used to determine the relay FLC and install interim model aggregation instructions to the relay FLC before the FL task is installed to each FLC.
[0140] For the scenario where the FLS is hosted at a UE, UE1 in the example of FIG. 9 will be a UE-to-UE relay, but the same procedure in FIG. 9 can apply.
[0141] As illustrated in the example of FIG. 9, at 901, e.g., using direct device discovery, FLC1 may discover UE2/FLC2 (step 901a) and UE3/FLC3 (step 901b) participating in the same FL task (e.g., FL-Task-A). For example, FLC1 may announce and/or broadcast a device request message over the radio link, which may contain the identifier of UE1/FLC1, the identifier of the FL task, the identifier of the FLS, and/or FLC1’s willingness to be a relay node for aggregating local model updates. When UE2/FLC2 (or UE3/FLC3) receives the announced device request message, it may send a device response message directly to FLC1 if UE2/FLC2 (or UE3/FLC3) participates in the same FL task and would like to be relayed by FLC1 (subject to authorization by the FLS); this device response message may contain the identifier of UE2/FLC2. Steps 901a and/or 901b may contain similar parameters as contained in step 701 of FIG. 7 discussed above. Alternatively, UE2/FLC2 in 901a (or UE3/FLC3 in 901b) may actively request UE1/FLC1 to be its relay node for interim model aggregation; as an example, when UE1/FLC1 receives a sufficient number of such requests from other FLCs (e.g., FLC2 and FLC3), UE1/FLC1 may decide to be a relay node for those other FLCs (e.g., FLC2 and FLC3).
[0142] In the example of FIG. 9, at 902, based on the number of FLCs discovered in step 901 and FLC1’s capability (e.g., available computation resource, available storage resource, residual energy, etc.), FLC1 may decide to be a relay node for a selected number of discovered FLCs (e.g., FLC2 and FLC3). At 903, FLC1 may send a message to the FLS requesting to be a relay node with a model aggregation function for FLC2 and FLC3. This message may contain the identifiers of the FLCs selected at step 902 (e.g., FLC2 and FLC3) and FLC1’s identifier.
[0143] As further illustrated in the example of FIG. 9, the FLS may receive the request message sent at 903. The FLS may, at 904, authenticate whether FLC1 can be a relay node for FLC2 and FLC3 from an FL perspective, and may forward the message to the PMF for connectivity-level and computation-level authorization.
[0144] In the example of FIG. 9, the PMF may receive the request message sent at 904 and authorize whether FLC1 can provide proximity service and whether FLC2 and FLC3 can use proximity service from FLC1. For this purpose, the PMF may check the AUSF to retrieve UE1/FLC1’s, UE2/FLC2’s, and UE3/FLC3’s subscription data and check the PCF for any proximity-related policies for those UEs/FLCs. Based on their subscription data and proximity-related policies, the PMF may approve whether FLC1 can be a relay node for interim model aggregation for FLC2 and/or FLC3. Then, the PMF may, at 905, send a response to the FLS indicating an approval or a rejection. At 906, the FLS may send a response to FLC1. If the response from the PMF at 905 shows an approval, the response at 906 may also contain interim model aggregation instructions, similar to step 708 of FIG. 7 discussed above.
[0145] As further illustrated in the example of FIG. 9, the FLS may send a notification to FLC2 as shown at 907a, and to FLC3 as shown at 907b. These notifications may be the same as or similar to that of notifications 709a and 709b, respectively, discussed above regarding FIG. 7. Steps 908, 909, 910, 911, 912, 913 and 914 of FIG. 9 may be the same as or similar to steps 711, 712, 713, 714, 715, 716, and 717, respectively, as discussed above in connection with FIG. 7.
[0146] Certain embodiments may provide for native FL with interim model aggregation in 3GPP systems such as, but not limited to, a 6G system. In the future, FL may become a native AI function or service of a next generation system, such as a 6G system (6GS), which can be leveraged by other 6G network functions (e.g., an SMF) for more efficient 6G network management and automation.
[0147] FIG. 10 illustrates an example of native FL in 6G. In this example, a network data analytics function (NWDAF) (or at least its model training logical function) may be pushed from the 3GPP core network to UEs, e.g., to avoid collecting data from UEs to the 3GPP core network. As a result, there may be multiple distributed NWDAF instances, which collaboratively train an AI model by joining a federated learning task. For instance, in the example of FIG. 10, NWDAF-C may be a NWDAF instance located in the 6G core network (or even in a 6G edge network), which acts as an FLS to coordinate and work with other FLCs (e.g., NWDAF1, NWDAF2, NWDAF3). NWDAF1 may be located in UE1 and acts as a federated learning client (i.e., FLC1). NWDAF2 may be located in UE2 and acts as a federated learning client (i.e., FLC2). NWDAF3 may be located in UE3 and acts as a federated learning client (i.e., FLC3). It is noted that there could be more NWDAF instances located in other UEs as federated learning clients.
[0148] It is noted that, for the scenario where the NWDAF-C is hosted at a UE, UE1 in FIG. 10 may be a UE-to-UE relay, but the same procedure in FIG. 10 may apply.
[0149] According to various embodiments, the procedures in FIG. 6, FIG. 7, FIG. 8, and FIG. 9 can be directly applied to the example shown in FIG. 10. For example, NWDAF-C may send an initial global model to NWDAF1, NWDAF2, and NWDAF3. NWDAF-C (and/or a 6G NF such as a 6G-version DDNMF) may configure interim model aggregation instructions to NWDAF1. NWDAF2 may perform local training, generate a local model update, and send its local model update to NWDAF1. NWDAF3 may also perform local training, generate a local model update, and send its local model update to NWDAF1. NWDAF1 may also perform local training and may generate a local model update. NWDAF1 may perform the proposed interim model aggregation to aggregate local model updates received from NWDAF2 and NWDAF3, optionally with NWDAF1’s local model update, according to interim model aggregation instructions configured by NWDAF-C and/or another NF (e.g., a 6G NF).
[0150] According to some embodiments, NWDAF-C can control NWDAF1 to perform interim model aggregation using the procedure in FIG. 7, for example. In this case, according to certain embodiments, NWDAF-C is the FLS, NWDAF1 is the UE1/FLC1, NWDAF2 is the UE2/FLC2, and NWDAF3 is the UE3/FLC3.
[0151] In certain embodiments, another 6G proximity-management-related NF can also request and coordinate NWDAF1 to perform interim model aggregation using the procedure in FIG. 8, for example. In this case, NWDAF-C is the FLS, NWDAF1 is the UE1/FLC1, NWDAF2 is the UE2/FLC2, and NWDAF3 is the UE3/FLC3.
[0152] NWDAF1 can initiate interim model aggregation using the procedure in FIG. 9, for example. In this case, NWDAF-C is the FLS, NWDAF1 is the UE1/FLC1, NWDAF2 is the UE2/FLC2, and NWDAF3 is the UE3/FLC3.
[0153] According to some embodiments, NWDAF1 can also be located within a 6G edge network that is close to NWDAF2 and NWDAF3.
[0154] FIG. 11 illustrates an example flow diagram of a method 1100, which may be implemented in a first wireless transmit/receive unit (WTRU). It should be understood that the method 1100 may include any one or more of the steps performed by or associated with FLC1 and/or UE1 as discussed elsewhere herein, such as described in or with respect to FIGs. 7-9. It should also be understood that one or more of the steps of the method may be optional, may be omitted, and/or may be performed in a different order.
[0155] In an embodiment, the method 1100 may include, at 1105, receiving, from a network node (e.g., a server or FLS), a first information indicating a request to serve as a relay node for at least one other WTRU (e.g., for a second WTRU and a third WTRU, or any number of WTRUs). Based on the first information, the method 1100 may include, at 1110, determining that the at least one other WTRU is in proximity of the first WTRU and/or determining to agree to serve as the relay node for the at least one other WTRU. The method may include, at 1115, transmitting a second information, to the network node, indicating that the first WTRU agrees to serve as the relay node. At 1120, the method 1100 may include receiving, from the network node, a third information indicating one or more interim model aggregation instructions. In some examples, the method 1100 may include, at 1125, establishing a direct link with the at least one other WTRU. For example, the direct link may be a computation-aware direct link as described elsewhere herein. [0156] In some examples, the first information received from the network node may further indicate any one or more of: (i) an identifier associated with the at least one other WTRU, (ii) an identifier associated with an existing federated learning (FL) task, (iii) an identifier associated with the network node, (iv) information indicating a relaying time window, (v) information indicating an accuracy of a current global model, and/or (vi) information indicating a number of remaining training rounds to be completed.
[0157] In one example, determining that the at least one other WTRU is in proximity of the first WTRU may include broadcasting a message indicating an identifier of the at least one other WTRU over a local direct radio link, and receiving an acknowledgement indicating the identifier of the at least one other WTRU and indicating a reachability of the at least one other WTRU.
[0158] In one example, determining to agree to serve as the relay node for the at least one other WTRU may include estimating, based on the first information received from the network node, any of a size of the local model updates and/or a required computation resource.
[0159] In some examples, the second information sent to the network node may include a list or other indication or information indicating and/or identifying the WTRU(s) for which the first WTRU agrees or accepts to be a relay node.
[0160] According to certain embodiments, the method 1100 may include, at 1130, receiving, via the direct link (e.g., a computation-aware direct link), a local model update from one or more of the at least one other WTRU. The method 1100 may include, at 1135, aggregating, according to the interim model aggregation instructions, the received local model updates with a local model update generated at the first WTRU to generate any of an aggregated local model update and/or an associated model aggregation record. The method 1100 may then include, at 1140, sending any of the aggregated local model update and/or the associated model aggregation record to the network node.
[0161] In some examples, the interim model aggregation instructions may indicate or include any one or more of the following: (i) to treat local model updates from the second WTRU and the third WTRU equally, to aggregate them without considering the first WTRU’s local model update or without the first WTRU’s local training, and to send the aggregated local model update to the FLS; (ii) to treat local model updates from the first WTRU, the second WTRU, and the third WTRU equally, to aggregate them all together, and to send the aggregated local model update to the network node; (iii) to treat local model updates from the first WTRU, the second WTRU, and the third WTRU differently, to aggregate local model updates from the first WTRU, the second WTRU, and the third WTRU according to their weights, and to send the aggregated local model update to the FLS; (iv) once there are two local model updates available, to aggregate them equally with a same weight or proportionally with different weights, and to send the aggregated local model update to the network node; and/or (v) if the model being trained is a deep neural network, to aggregate the first k layers of local model updates from the first WTRU, the second WTRU, and the third WTRU equally with the same weight or proportionally with different weights, to combine the aggregated first k layers with the other layers from the first WTRU to form an aggregated local model update, and to send the aggregated local model update to the network node.
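The equal-weight, weighted, and first-k-layers modes enumerated above can all be expressed as variations of one layer-wise averaging routine. The sketch below represents a model as a list of per-layer weight lists; the function names and representation are illustrative assumptions, not part of the disclosure.

```python
def aggregate(models, weights=None):
    """Weighted layer-wise average of local model updates; equal weights
    (as in modes (i), (ii), and (iv)) when `weights` is None."""
    if weights is None:
        weights = [1.0] * len(models)
    total = sum(weights)
    return [
        [sum(w * layer[i] for w, layer in zip(weights, layers)) / total
         for i in range(len(layers[0]))]
        for layers in zip(*models)  # `layers` = the same layer from each model
    ]

def aggregate_first_k_layers(models, own_model, k, weights=None):
    """Mode (v): aggregate only the first k layers across models, then keep the
    relay's own remaining layers to form the aggregated local model update."""
    return aggregate([m[:k] for m in models], weights) + own_model[k:]
```

Mode (iii)'s unequal treatment corresponds to passing a non-uniform `weights` list (e.g., proportional to each client's number of training samples).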
[0162] According to some examples, the interim model aggregation instructions may indicate or include an indication of any one or more of the following: (i) a unique identifier associated with a respective one of the interim model aggregation instructions; (ii) a list of target WTRUs whose local model updates are to be aggregated; (iii) an aggregation mode associated with a respective one of the interim model aggregation instructions; (iv) aggregation conditions for the first WTRU to execute the interim model aggregation according to the interim model aggregation instructions; (v) an interim model aggregation algorithm associated with a respective one of the interim model aggregation instructions; and/or (vi) a destination address to which the aggregated local model update is to be sent.
[0163] In an embodiment, the method 1100 may include receiving a discovery message or the like from the at least one other WTRU. For example, the discovery message may indicate a relaying computation requirement (RCR), where the RCR indicates any of: a computation type, a computation size, a storage size, a computation frequency, a computation time window, and/or a computation waiting time.
[0164] According to an example, the establishing of the direct link (e.g., the computation-aware direct link) at 1125 may include receiving, from the at least one other WTRU, a direct link establishment message or request indicating a relaying computation requirement (RCR) associated with the at least one other WTRU.
[0165] In some examples, the method 1100 may include sending a notification, to the network node, which indicates information associated with, or identifying, the established direct link.
[0166] According to an embodiment, the method 1100 may include receiving, e.g., with the local model update from one or more of the at least one other WTRU, information that indicates or includes any one or more of the following: (i) an accuracy of the local model update; (ii) a model compression scheme and related parameters used to compress the local model update; and/or (iii) data distribution properties of training data that was used to generate the local model update. [0167] In some examples, the model aggregation record may indicate or include any one or more of the following: (i) an identifier associated with the interim model aggregation instruction used to generate the aggregated local model update; (ii) an identifier of the first WTRU and the one or more other WTRUs whose local model updates have been aggregated to generate the aggregated local model update; (iii) a creation time of the local model updates used to generate the aggregated local model update (e.g., a time (or times) at which the local model updates were created); and/or (iv) a creation time of the aggregated local model update (e.g., a time at which aggregated local model update was created or aggregated).
[0168] Various embodiments may be directed to a method, which may be implemented in an apparatus or server, such as a FLS. It should be understood that the method may include any one or more of the steps performed by or associated with a FLS as discussed elsewhere herein, such as described in or with respect to FIGs. 7-9. It should also be understood that one or more of the steps of the method may be optional, may be omitted, and/or may be performed in a different order.
[0169] In an embodiment, the method may include sending first information indicating a first request to a network function (e.g., PMF) to retrieve proximity information and context information associated with one or more wireless transmit/receive units (WTRUs), and receiving the proximity information from the network function. The method may include selecting one of the one or more WTRUs to serve as a relay node, and sending, to the selected WTRU, second information indicating a request for the selected WTRU to serve as the relay node for at least one other WTRU. The method may also include receiving third information, from the selected WTRU, indicating that the selected WTRU agrees to serve as the relay node. The method may include sending, to the selected WTRU, fourth information indicating one or more interim model aggregation instructions, and receiving, from the selected WTRU, an aggregated local model update and associated model aggregation record.
[0170] Although features and elements are provided above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations may be made without departing from its spirit and scope, as will be apparent to those skilled in the art. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly provided as such. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The present disclosure is to be limited only by the terms of the appended claims, along with the full scope of equivalents to which such claims are entitled. It is to be understood that this disclosure is not limited to particular methods or systems.
[0171] In some example embodiments described herein, (e.g., configuration) information may be described as received by a WTRU from the network, for example, through system information or via any kind of protocol message. Although not explicitly mentioned throughout embodiments described herein, the same (e.g., configuration) information may be pre-configured in the WTRU (e.g., via any kind of pre-configuration methods such as e.g., via factory settings), such that this (e.g., configuration) information may be used by the WTRU without being received from the network.
[0172] Any characteristic, variant or embodiment described for a method is compatible with an apparatus device comprising means for processing the disclosed method, such as with a device comprising a processor configured to process the disclosed method, a computer program product comprising program code instructions and a non-transitory computer-readable storage medium storing program instructions.
[0173] The foregoing embodiments are discussed, for simplicity, with regard to the terminology and structure of infrared capable devices, i.e., infrared emitters and receivers. However, the embodiments discussed are not limited to these systems but may be applied to other systems that use other forms of electromagnetic waves or non-electromagnetic waves such as acoustic waves. [0174] It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. As used herein, the term "video" or the term "imagery" may mean any of a snapshot, single image and/or multiple images displayed over a time basis. As another example, when referred to herein, the terms "user equipment" and its abbreviation "UE", the term "remote" and/or the terms "head mounted display" or its abbreviation "HMD" may mean or include (i) a wireless transmit and/or receive unit (WTRU); (ii) any of a number of embodiments of a WTRU; (iii) a wireless-capable and/or wired-capable (e.g., tetherable) device configured with, inter alia, some or all structures and functionality of a WTRU; (iii) a wireless-capable and/or wired-capable device configured with less than all structures and functionality of a WTRU; or (iv) the like. Details of an example WTRU, which may be representative of any WTRU recited herein, are provided herein with respect to FIGs. 1 A-1D. As another example, various disclosed embodiments herein supra and infra are described as utilizing a head mounted display. Those skilled in the art will recognize that a device other than the head mounted display may be utilized and some or all of the disclosure and various disclosed embodiments can be modified accordingly without undue experimentation. Examples of such other device may include a drone or other device configured to stream information for providing the adapted reality experience.
[0175] In addition, the methods provided herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media. Examples of computer- readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
[0176] Variations of the method, apparatus and system provided above are possible without departing from the scope of the invention. In view of the wide variety of embodiments that can be applied, it should be understood that the illustrated embodiments are examples only, and should not be taken as limiting the scope of the following claims. For instance, the embodiments provided herein include handheld devices, which may include or be utilized with any appropriate voltage source, such as a battery and the like, providing any appropriate voltage.
[0177] Moreover, in the embodiments provided above, processing platforms, computing systems, controllers, and other devices that include processors are noted. These devices may include at least one Central Processing Unit ("CPU") and memory. In accordance with the practices of persons skilled in the art of computer programming, reference to acts and symbolic representations of operations or instructions may be performed by the various CPUs and memories. Such acts and operations or instructions may be referred to as being "executed," "computer executed" or "CPU executed."
[0178] One of ordinary skill in the art will appreciate that the acts and symbolically represented operations or instructions include the manipulation of electrical signals by the CPU. An electrical system represents data bits that can cause a resulting transformation or reduction of the electrical signals and the maintenance of data bits at memory locations in a memory system to thereby reconfigure or otherwise alter the CPU's operation, as well as other processing of signals. The memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, optical, or organic properties corresponding to or representative of the data bits. It should be understood that the embodiments are not limited to the above-mentioned platforms or CPUs and that other platforms and CPUs may support the provided methods.
[0179] The data bits may also be maintained on a computer readable medium including magnetic disks, optical disks, and any other volatile (e.g., Random Access Memory (RAM)) or non-volatile (e.g., Read-Only Memory (ROM)) mass storage system readable by the CPU. The computer readable medium may include cooperating or interconnected computer readable medium, which exist exclusively on the processing system or are distributed among multiple interconnected processing systems that may be local or remote to the processing system. It should be understood that the embodiments are not limited to the above-mentioned memories and that other platforms and memories may support the provided methods.
[0180] In an illustrative embodiment, any of the operations, processes, etc. described herein may be implemented as computer-readable instructions stored on a computer-readable medium. The computer-readable instructions may be executed by a processor of a mobile unit, a network element, and/or any other computing device.
[0181] There is little distinction left between hardware and software implementations of aspects of systems. The use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software may become significant) a design choice representing cost versus efficiency tradeoffs. There may be various vehicles by which processes and/or systems and/or other technologies described herein may be effected (e.g., hardware, software, and/or firmware), and the preferred vehicle may vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle. If flexibility is paramount, the implementer may opt for a mainly software implementation. Alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
[0182] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples include one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples may be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In an embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), and/or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, may be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein may be distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. 
Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a CD, a DVD, a digital tape, a computer memory, etc., and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
[0183] Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use engineering practices to integrate such described devices and/or processes into data processing systems. That is, at least a portion of the devices and/or processes described herein may be integrated into a data processing system via a reasonable amount of experimentation. Those having skill in the art will recognize that a typical data processing system may generally include one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity, control motors for moving and/or adjusting components and/or quantities). A typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
[0184] The herein described subject matter sometimes illustrates different components included within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact many other architectures may be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality may be achieved. Hence, any two components herein combined to achieve a particular functionality may be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated may also be viewed as being "operably connected", or "operably coupled", to each other to achieve the desired functionality, and any two components capable of being so associated may also be viewed as being "operably couplable" to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
[0185] With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
[0186] It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, where only one item is intended, the term "single" or similar language may be used. As an aid to understanding, the following appended claims and/or the descriptions herein may include usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim including such introduced claim recitation to embodiments including only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should be interpreted to mean "at least one" or "one or more"). The same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, means at least two recitations, or two or more recitations). 
Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" will be understood to include the possibilities of "A" or "B" or "A and B." Further, the terms "any of" followed by a listing of a plurality of items and/or a plurality of categories of items, as used herein, are intended to include "any of," "any combination of," "any multiple of," and/or "any combination of multiples of" the items and/or the categories of items, individually or in conjunction with other items and/or other categories of items. Moreover, as used herein, the term "set" is intended to include any number of items, including zero. Additionally, as used herein, the term "number" is intended to include any number, including zero. And the term "multiple", as used herein, is intended to be synonymous with "a plurality".
[0187] In addition, where features or aspects of the disclosure are described in terms of Markush groups, those skilled in the art will recognize that the disclosure is also thereby described in terms of any individual member or subgroup of members of the Markush group.
[0188] As will be understood by one skilled in the art, for any and all purposes, such as in terms of providing a written description, all ranges disclosed herein also encompass any and all possible subranges and combinations of subranges thereof. Any listed range can be easily recognized as sufficiently describing and enabling the same range being broken down into at least equal halves, thirds, quarters, fifths, tenths, etc. As a non-limiting example, each range discussed herein may be readily broken down into a lower third, middle third and upper third, etc. As will also be understood by one skilled in the art, all language such as "up to," "at least," "greater than," "less than," and the like includes the number recited and refers to ranges which can be subsequently broken down into subranges as discussed above. Finally, as will be understood by one skilled in the art, a range includes each individual member. Thus, for example, a group having 1-3 cells refers to groups having 1, 2, or 3 cells. Similarly, a group having 1-5 cells refers to groups having 1, 2, 3, 4, or 5 cells, and so forth.
[0189] Moreover, the claims should not be read as limited to the provided order or elements unless stated to that effect. In addition, use of the terms "means for" in any claim is intended to invoke 35 U.S.C. §112, ¶6 or means-plus-function claim format, and any claim without the terms "means for" is not so intended.
[0190] Although various embodiments have been described in terms of communication systems, it is contemplated that the systems may be implemented in software on microprocessors/general purpose computers (not shown). In certain embodiments, one or more of the functions of the various components may be implemented in software that controls a general-purpose computer.
[0191] In addition, although some example embodiments are illustrated and described herein, the invention is not intended to be limited to the details shown. Rather, various modifications and variations may be made in the details within the scope and range of equivalents of the claims and without departing from the spirit or scope of the invention.
REFERENCES
[0192] The following references may have been referred to hereinabove, each of which is incorporated herein by reference in its entirety.
[0193] [1] 3GPP TS 23.501 V16.4.0 (2020-03); System architecture for the 5G System (5GS); Stage 2 (Release 16);
[0194] [2] 3GPP TS 23.288 V17.5.0 (2022-06), “Architecture Enhancements for 5G System (5GS) to Support Network Data Analytics Services (Release 17),” June 2022;
[0195] [3] 3GPP TS 22.261 V19.0.0 (2022-09), “Service Requirements for the 5G System; Stage 1 (Release 19),” September 2022;
[0196] [4] 3GPP TR 23.700-80 V1.0.0 (2022-09), “Study on 5G System Support for AI/ML-based Services (Release 18),” September 2022;
[0197] [5] 3GPP TR 22.876 V0.1.0 (2022-09), “Study on AI/ML Model Transfer-Phase 2 (Release 19),” September 2022;
[0198] [6] 3GPP TS 23.304 V17.4.0 (2022-09), Proximity based Services (ProSe) in the 5G System (5GS) (Release 17);
[0199] [7] 3GPP TR 23.700-33 V1.1.0 (2022-10), Study on system enhancement for Proximity based Services (ProSe) in the 5G System (5GS); Phase 2 (Release 18).
LISTING OF POSSIBLE ABBREVIATIONS AND TERMS
[0200] 3GPP 3rd Generation Partnership Project
[0201] 5G 5th Generation
[0202] 5G DDNMF 5G Direct Discovery Name Management Function
[0203] 5GC 5G Core Network
[0204] 5GS 5G System
[0205] 6G 6th Generation
[0206] 6GC 6G Core Network
[0207] 6GS 6G System
[0208] AF Application Function
[0209] AMF Access and Mobility Management Function
[0210] AUSF Authentication Server Function
[0211] FL Federated Learning
[0212] FLC Federated Learning Client
[0213] FLS Federated Learning Server
[0214] LMU Local Model Update
[0215] NF Network Function
[0216] NRF Network Repository Function
[0217] NW Network
[0218] NWDAF Network Data Analytics Function
[0219] PCF Policy Control Function
[0220] PMF Proximity Management Function
[0221] ProSe Proximity based Service
[0222] SA Service Architecture
[0223] UDM Unified Data Management
[0224] UDR Unified Data Repository
[0225] UDSF Unstructured Data Storage Function
[0226] UE User Equipment

Claims

What is claimed is:
1. A first wireless transmit/receive unit (WTRU), comprising: circuitry, including any of a processor, memory, transmitter and receiver, configured to: receive, from a network node, first information indicating a request to serve as a relay node for at least one other WTRU; based on the first information, determine that the at least one other WTRU is in proximity of the first WTRU and determine to agree to serve as the relay node for the at least one other WTRU; transmit second information, to the network node, indicating that the first WTRU agrees to serve as the relay node; receive, from the network node, third information indicating one or more interim model aggregation instructions; establish a direct link with the at least one other WTRU; receive, via the direct link, a local model update from one or more of the at least one other WTRU; aggregate, according to the interim model aggregation instructions, the received local model updates with a local model update generated at the first WTRU to generate an aggregated local model update and an associated model aggregation record; and send the aggregated local model update and the associated model aggregation record to the network node.
2. The WTRU of claim 1, wherein the at least one other WTRU comprises a second WTRU and a third WTRU.
3. The WTRU of any of claims 1-2, wherein the first information further indicates any of: (i) an identifier associated with the at least one other WTRU, (ii) an identifier associated with an existing federated learning (FL) task, (iii) an identifier associated with the network node, (iv) information indicating a relaying time window, (v) information indicating an accuracy of a current global model, and (vi) information indicating a number of remaining training rounds to be completed.
4. The WTRU of any of claims 1-3, wherein, to determine that the at least one other WTRU is in proximity of the first WTRU, the circuitry is configured to: broadcast a message indicating an identifier of the at least one other WTRU over a local direct radio link; and receive an acknowledgement indicating the identifier of the at least one other WTRU and indicating a reachability of the at least one other WTRU.
5. The WTRU of any of claims 1-4, wherein, to determine to agree to serve as the relay node for the at least one other WTRU, the circuitry is configured to: estimate, based on the first information, any of a size of the local model updates and a required computation resource.
6. The WTRU of any of claims 1-5, wherein the second information comprises a list indicating the at least one other WTRU that the first WTRU agrees to be a relay node for.
7. The WTRU of any of claims 1-6, wherein the interim model aggregation instructions indicate any of:
(i) to treat local model updates from the second WTRU and the third WTRU equally, to aggregate them without considering the first WTRU’s local model update or without the first WTRU’s local training, and to send the aggregated local model update to the network node;
(ii) to treat local model updates from the first WTRU, the second WTRU, and the third WTRU equally, to aggregate them all together, and to send the aggregated local model update to the network node;
(iii) to treat local model updates from the first WTRU, the second WTRU, and the third WTRU differently, to aggregate local model updates from the first WTRU, the second WTRU, and the third WTRU according to their weights, and to send the aggregated local model update to the network node;
(iv) once there are two local model updates available, to aggregate them equally with a same weight or proportionally with different weights, and to send the aggregated local model update to the network node; and
(v) if the model being trained is a deep neural network, to aggregate the first k layers of local model updates from the first WTRU, the second WTRU, and the third WTRU equally with the same weight or proportionally with different weights, to combine the aggregated first k layers with the other layers from the first WTRU to form an aggregated local model update, and to send the aggregated local model update to the network node.
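The aggregation modes enumerated in (i)-(v) can be illustrated with a short, non-normative sketch of how a relay WTRU might combine local model updates. The function names, the list-of-layers model representation, and the normalization of weights are all assumptions for illustration only, not part of the claimed subject matter.

```python
# Illustrative, non-normative sketch of the interim aggregation modes.
# A local model update is represented as a list of layers, each layer a
# list of float parameters; this representation is an assumption.

def weighted_aggregate(updates, weights=None):
    """Aggregate local model updates layer by layer.

    Equal weights (as in modes (i)/(ii)) when `weights` is omitted;
    otherwise proportional weights (mode (iii)), normalized to sum to 1.
    """
    if weights is None:
        weights = [1.0 / len(updates)] * len(updates)
    else:
        total = sum(weights)
        weights = [w / total for w in weights]
    aggregated = []
    for layer in range(len(updates[0])):
        size = len(updates[0][layer])
        aggregated.append([
            sum(w * u[layer][i] for w, u in zip(weights, updates))
            for i in range(size)
        ])
    return aggregated


def partial_layer_aggregate(relay_update, other_updates, k, weights=None):
    """Mode (v): aggregate only the first k layers across all updates and
    keep the relay WTRU's own parameters for the remaining layers."""
    all_updates = [relay_update] + other_updates
    first_k = weighted_aggregate([u[:k] for u in all_updates], weights)
    return first_k + relay_update[k:]
```

For example, with toy updates `[[1.0, 2.0], [3.0]]` and `[[3.0, 4.0], [5.0]]`, equal-weight aggregation yields `[[2.0, 3.0], [4.0]]`; with `k=1`, `partial_layer_aggregate` averages only the first layer and keeps the relay WTRU's second layer.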
8. The WTRU of any of claims 1-7, wherein the interim model aggregation instructions indicate any of: a unique identifier associated with a respective one of the interim model aggregation instructions; a list of target WTRUs whose local model updates are to be aggregated; an aggregation mode associated with a respective one of the interim model aggregation instructions; aggregation conditions for the first WTRU to execute the interim model aggregation according to the interim model aggregation instructions; an interim model aggregation algorithm associated with a respective one of the interim model aggregation instructions; and a destination address to which the aggregated local model update is to be sent.
9. The WTRU of any of claims 1-8, wherein the circuitry is configured to: receive a discovery message, from the at least one other WTRU, the discovery message indicating a relaying computation requirement (RCR), wherein the RCR indicates any of: a computation type, a computation size, a storage size, a computation frequency, a computation time window, and a computation waiting time.
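One purely illustrative way to encode the relaying computation requirement (RCR) fields listed in this claim is as a data structure; the field names, types, and units below are assumptions, not claim language.

```python
# Hypothetical encoding of the RCR carried in a discovery message.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RelayComputationRequirement:
    """Illustrative container for RCR fields; names and units assumed."""
    computation_type: str                            # e.g. "interim-model-aggregation"
    computation_size: int                            # e.g. estimated operation count
    storage_size: int                                # bytes needed to buffer local model updates
    computation_frequency: Optional[float] = None    # aggregations per training round
    computation_time_window: Optional[float] = None  # seconds the computation may span
    computation_waiting_time: Optional[float] = None # max seconds an update may be queued
```

A discovering WTRU could populate only the fields it knows, leaving the optional ones unset.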
10. The WTRU of any of claims 1-9, wherein, to establish the direct link with the at least one other WTRU, the circuitry is configured to: receive, from the at least one other WTRU, a direct link establishment request indicating a relaying computation requirement (RCR) associated with the at least one other WTRU.
11. The WTRU of any of claims 1-10, wherein the circuitry is configured to: send a notification, to the network node, indicating information associated with the established direct link.
12. The WTRU of any of claims 1-11, wherein the circuitry is configured to receive, with the local model update from one or more of the at least one other WTRU, information indicating any of:
(i) an accuracy of the local model update;
(ii) a model compression scheme and related parameters used to compress the local model update; and (iii) data distribution properties of training data that was used to generate the local model update.
13. The WTRU of any of claims 1-12, wherein the model aggregation record indicates any of the following information:
(i) an identifier associated with the interim model aggregation instruction used to generate the aggregated local model update;
(ii) an identifier of the first WTRU and the one or more other WTRUs whose local model updates have been aggregated to generate the aggregated local model update;
(iii) a creation time of the local model updates used to generate the aggregated local model update; and
(iv) a creation time of the aggregated local model update.
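The model aggregation record of items (i)-(iv) above can likewise be sketched as a simple structure. This is an assumed, non-normative encoding; the field names are hypothetical.

```python
# Illustrative model aggregation record covering fields (i)-(iv).
import time
from dataclasses import dataclass, field

@dataclass
class ModelAggregationRecord:
    """Hypothetical record fields; names are assumptions."""
    instruction_id: str        # (i) interim aggregation instruction used
    aggregated_wtru_ids: list  # (ii) relay WTRU plus contributing WTRUs
    local_update_times: dict   # (iii) WTRU id -> local update creation time
    aggregation_time: float = field(default_factory=time.time)  # (iv)

record = ModelAggregationRecord(
    instruction_id="agg-instr-01",
    aggregated_wtru_ids=["wtru-1", "wtru-2", "wtru-3"],
    local_update_times={"wtru-2": 1700000000.0, "wtru-3": 1700000004.0},
)
```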
14. A method, implemented in a first wireless transmit/receive unit (WTRU), the method comprising: receiving, from a network node, first information indicating a request to serve as a relay node for at least one other WTRU; based on the first information, determining that the at least one other WTRU is in proximity of the first WTRU and determining to agree to serve as the relay node for the at least one other WTRU; transmitting second information, to the network node, indicating that the first WTRU agrees to serve as the relay node; receiving, from the network node, third information indicating one or more interim model aggregation instructions; establishing a direct link with the at least one other WTRU; receiving, via the direct link, a local model update from one or more of the at least one other WTRU; aggregating, according to the interim model aggregation instructions, the received local model updates with a local model update generated at the first WTRU to generate an aggregated local model update and an associated model aggregation record; and sending the aggregated local model update and the associated model aggregation record to the network node.
15. The method of claim 14, wherein the at least one other WTRU comprises a second WTRU and a third WTRU.
16. The method of any of claims 14-15, wherein the first information further indicates any of: (i) an identifier associated with the at least one other WTRU, (ii) an identifier associated with an existing federated learning (FL) task, (iii) an identifier associated with the network node, (iv) information indicating a relaying time window, (v) information indicating an accuracy of a current global model, and (vi) information indicating a number of remaining training rounds to be completed.
17. The method of any of claims 14-16, wherein determining that the at least one other WTRU is in proximity of the first WTRU comprises: broadcasting a message indicating an identifier of the at least one other WTRU over a local direct radio link; and receiving an acknowledgement indicating the identifier of the at least one other WTRU and indicating a reachability of the at least one other WTRU.
18. The method of any of claims 14-17, wherein determining to agree to serve as the relay node for the at least one other WTRU comprises: estimating, based on the first information, any of a size of the local model updates and a required computation resource.
19. The method of any of claims 14-18, wherein the second information comprises a list indicating the at least one other WTRU that the first WTRU agrees to be a relay node for.
20. The method of any of claims 14-19, wherein the interim model aggregation instructions indicate any of:
(i) to treat local model updates from the second WTRU and the third WTRU equally, to aggregate them without considering the first WTRU’s local model update or without the first WTRU’s local training, and to send the aggregated local model update to the network node;
(ii) to treat local model updates from the first WTRU, the second WTRU, and the third WTRU equally, to aggregate them all together, and to send the aggregated local model update to the network node;
(iii) to treat local model updates from the first WTRU, the second WTRU, and the third WTRU differently, to aggregate local model updates from the first WTRU, the second WTRU, and the third WTRU according to their weights, and to send the aggregated local model update to the network node;
(iv) once there are two local model updates available, to aggregate them equally with a same weight or proportionally with different weights, and to send the aggregated local model update to the network node; and
(v) if the model being trained is a deep neural network, to aggregate the first k layers of local model updates from the first WTRU, the second WTRU, and the third WTRU equally with the same weight or proportionally with different weights, to combine the aggregated first k layers with the other layers from the first WTRU to form an aggregated local model update, and to send the aggregated local model update to the network node.
21. The method of any of claims 14-20, wherein the interim model aggregation instructions indicate any of: a unique identifier associated with a respective one of the interim model aggregation instructions; a list of target WTRUs whose local model updates are to be aggregated; an aggregation mode associated with a respective one of the interim model aggregation instructions; aggregation conditions for the first WTRU to execute the interim model aggregation according to the interim model aggregation instructions; an interim model aggregation algorithm associated with a respective one of the interim model aggregation instructions; and a destination address to which the aggregated local model update is to be sent.
22. The method of any of claims 14-21, comprising: receiving a discovery message, from the at least one other WTRU, the discovery message indicating a relaying computation requirement (RCR), wherein the RCR indicates any of: a computation type, a computation size, and a storage size.
23. The method of any of claims 14-22, wherein establishing the direct link with the at least one other WTRU comprises: receiving, from the at least one other WTRU, a direct link establishment request indicating a relaying computation requirement (RCR) associated with the at least one other WTRU.
24. The method of any of claims 14-23, comprising sending a notification, to the network node, indicating information associated with the established direct link.
25. The method of any of claims 14-24, comprising receiving, with the local model update from one or more of the at least one other WTRU, information indicating any of:
(i) an accuracy of the local model update;
(ii) a model compression scheme and related parameters used to compress the local model update; and
(iii) data distribution properties of training data that was used to generate the local model update.
26. The method of any of claims 14-25, wherein the model aggregation record indicates any of the following information:
(i) an identifier associated with the interim model aggregation instruction used to generate the aggregated local model update;
(ii) an identifier of the first WTRU and the one or more other WTRUs whose local model updates have been aggregated to generate the aggregated local model update;
(iii) a creation time of the local model updates used to generate the aggregated local model update; and
(iv) a creation time of the aggregated local model update.
27. An apparatus, comprising: circuitry, including any of a processor, memory, transmitter and receiver, configured to: send first information indicating a first request to a network function to retrieve proximity information and context information associated with one or more wireless transmit/receive units (WTRUs); receive the proximity information from the network function; select one of the one or more WTRUs to serve as a relay node; send, to the selected WTRU, second information indicating a request for the selected WTRU to serve as the relay node for at least one other WTRU; receive third information, from the selected WTRU, indicating that the selected WTRU agrees to serve as the relay node; send, to the selected WTRU, fourth information indicating one or more interim model aggregation instructions; and receive, from the selected WTRU, an aggregated local model update and associated model aggregation record.
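The claim above leaves the relay-selection criterion open. One simple heuristic, stated here only as an assumption for illustration, is to select the candidate WTRU reported in proximity of the most peers:

```python
def select_relay(proximity_info):
    """Pick as relay the WTRU that is in proximity of the most other WTRUs.

    proximity_info: dict mapping a WTRU id to the set of WTRU ids reported
    in its proximity. The most-neighbors criterion is an assumption; the
    claim does not mandate any particular selection rule.
    """
    return max(proximity_info, key=lambda w: len(proximity_info[w]))
```

For example, if "wtru-2" is in proximity of both other candidates while each of them reaches only "wtru-2", the heuristic selects "wtru-2".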
28. The apparatus of claim 27, wherein the first request indicates any of:
(i) a request to retrieve the proximity information and the context information associated with a first WTRU and WTRUs in proximity of the first WTRU, the first request indicating an identifier associated with the first WTRU;
(ii) a request to retrieve the proximity information and the context information associated with WTRUs in the proximity of the first WTRU, the first request indicating an identifier associated with each of the WTRUs in the proximity of the first WTRU;
(iii) a request to retrieve the proximity information and the context information associated with selected WTRUs; and
(iv) a request to retrieve the proximity information and the context information associated with a type of WTRU in a region, the first request indicating information associated with the region and the type of WTRU.
29. A method, comprising:
sending first information indicating a first request to a network function to retrieve proximity information and context information associated with one or more wireless transmit/receive units (WTRUs);
receiving the proximity information from the network function;
selecting one of the one or more WTRUs to serve as a relay node;
sending, to the selected WTRU, second information indicating a request for the selected WTRU to serve as the relay node for at least one other WTRU;
receiving third information, from the selected WTRU, indicating that the selected WTRU agrees to serve as the relay node;
sending, to the selected WTRU, fourth information indicating one or more interim model aggregation instructions; and
receiving, from the selected WTRU, an aggregated local model update and associated model aggregation record.
30. The method of claim 29, wherein the first request indicates any of:
(i) a request to retrieve the proximity information and the context information associated with a first WTRU and WTRUs in proximity of the first WTRU, the first request indicating an identifier associated with the first WTRU;
(ii) a request to retrieve the proximity information and the context information associated with WTRUs in the proximity of the first WTRU, the first request indicating an identifier associated with each of the WTRUs in the proximity of the first WTRU;
(iii) a request to retrieve the proximity information and the context information associated with selected WTRUs; and
(iv) a request to retrieve the proximity information and the context information associated with a type of WTRU in a region, the first request indicating information associated with the region and the type of WTRU.
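The relay-side step implied by claims 27/29 — producing an aggregated local model update together with its model aggregation record — can be sketched as follows. Plain unweighted federated averaging is assumed here; the claims do not fix a particular aggregation rule, and all names are illustrative.

```python
import time

def interim_aggregate(updates, instruction_id):
    """Element-wise average of local model updates from proximate WTRUs,
    returning the aggregated update and its aggregation record."""
    n = len(updates)
    dim = len(updates[0]["weights"])
    # Interim aggregation: mean of each parameter across the received updates.
    aggregated = [sum(u["weights"][i] for u in updates) / n for i in range(dim)]
    # Build the record described in claim 26 alongside the aggregated update.
    record = {
        "instruction_id": instruction_id,                        # instruction applied
        "aggregated_wtru_ids": [u["wtru_id"] for u in updates],  # contributors
        "update_creation_times": [u["created_at"] for u in updates],
        "record_creation_time": time.time(),
    }
    return aggregated, record

updates = [
    {"wtru_id": "wtru-1", "weights": [0.2, 0.4], "created_at": 1000.0},
    {"wtru_id": "wtru-2", "weights": [0.6, 0.0], "created_at": 1001.0},
]
aggregated, record = interim_aggregate(updates, "instr-42")
```

The relay would then report `aggregated` and `record` to the network node in a single message, rather than forwarding each local update individually.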
PCT/US2024/021596 2023-03-31 2024-03-27 Methods, architectures, apparatuses and systems for proximity-aware federated learning with interim model aggregation in future wireless WO2024206378A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363456101P 2023-03-31 2023-03-31
US63/456,101 2023-03-31

Publications (1)

Publication Number Publication Date
WO2024206378A1 true WO2024206378A1 (en) 2024-10-03

Family

ID=90811212

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/021596 WO2024206378A1 (en) 2023-03-31 2024-03-27 Methods, architectures, apparatuses and systems for proximity-aware federated learning with interim model aggregation in future wireless

Country Status (1)

Country Link
WO (1) WO2024206378A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022060748A1 (en) * 2020-09-18 2022-03-24 Google Llc User equipment-coordination set federated learning for deep neural networks
US20220237507A1 (en) * 2021-01-28 2022-07-28 Qualcomm Incorporated Sidelink-supported federated learning for training a machine learning component


Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Study on AI/ML Model Transfer-Phase 2 (Release 19)", V1.0.0, 10 March 2023, pages 1-35, XP052283939, Retrieved from the Internet <URL:https://ftp.3gpp.org/Specs/archive/22_series/22.876/22876-100.zip 22876-100.doc> [retrieved on 20230310] *
"Architecture Enhancements for 5G System (5GS) to Support Network Data Analytics Services (Release 17)", 3GPP TS 23.288 V17.5.0, June 2022
"Proximity based Services (ProSe) in the 5G System (5GS) (Release 17)", 3GPP TS 23.304 V17.4.0
"Service Requirements for the 5G System; Stage 1 (Release 19)", 3GPP TS 22.261 V19.0.0, September 2022
"Study on 5G System Support for AI/ML-based Services (Release 18)", 3GPP TR 23.700-80 V1.0.0, September 2022
"Study on AI/ML Model Transfer-Phase 2 (Release 19)", 3GPP TR 22.876 V0.1.0, September 2022
"Study on system enhancement for Proximity based Services (ProSe) in the 5G System (5GS); Phase 2 (Release 18)", 3GPP TR 23.700-33 V1.1.0
"System architecture for the 5G System (5GS); Stage 2 (Release 16)", 3GPP TS 23.501 V16.4.0, March 2020

Similar Documents

Publication Publication Date Title
WO2021155311A1 (en) Methods, architectures, apparatuses and systems directed to improved service continuity for out of range proximity wireless transmit/receive devices
JP2024508460A (en) Methods, apparatus, and systems for integrating constrained multi-access edge computing hosts into multi-access edge computing systems
US20240129968A1 (en) Methods, architectures, apparatuses and systems for supporting multiple application ids using layer-3 relay
EP4500928A1 (en) Methods, apparatus, and systems for providing information to wtru via control plane or user plane
EP4264928A1 (en) Methods, apparatuses and systems directed to wireless transmit/receive unit based joint selection and configuration of multi-access edge computing host and reliable and available wireless network
WO2024206378A1 (en) Methods, architectures, apparatuses and systems for proximity-aware federated learning with interim model aggregation in future wireless
WO2024206381A1 (en) Methods, architectures, apparatuses and systems for leveraging direct links to improve federated learning training process in future wireless
US20250071034A1 (en) Method and apparatus for real-time qos monitoring and prediction
US20240107602A1 (en) Methods, architectures, apparatuses and systems for service continuity for premises networks
US20230308840A1 (en) Multicast-broadcast services support for network relay
WO2024206203A1 (en) Device discovery for aggregated wtru
WO2023146777A1 (en) Method and apparatus for real-time qos monitoring and prediction
WO2024233897A1 (en) Methods, architectures, apparatuses and systems for establishing policy charging and control rules
WO2024233904A1 (en) Methods, architectures, apparatuses and systems for managing resource conflicts
WO2024072719A1 (en) Methods, architectures, apparatuses and systems for device association over direct communication for aggregated devices
WO2024168121A1 (en) Methods, architectures, apparatuses and systems for artificial intelligence based alter ego functionality in a communications system
WO2023167979A1 (en) Methods, architectures, apparatuses and systems for multi-modal communication including multiple user devices
WO2022221321A1 (en) Discovery and interoperation of constrained devices with mec platform deployed in mnos edge computing infrastructure
WO2025019551A1 (en) Methods, architectures, apparatuses and systems for path switching
WO2024211765A1 (en) Deregistration of inactive wtru of ai/ml network slice
WO2024097408A1 (en) System and methods to improve the performance of federated learning via sidelink communications
EP4500911A1 (en) Methods and apparatus for enhancing 3gpp systems to support federated learning application intermediate model privacy violation detection
WO2024147975A1 (en) Method and apparatus for integrated discovery support with ue-to-ue relay
WO2025024278A1 (en) METHODS, ARCHITECTURES, APPARATUSES AND SYSTEMS FOR INFORMING CHANGED QUALITY OF SERVICE INFORMATION IN NETWORK ELEMENTS IN IoT NETWORKS
WO2023147049A1 (en) Personal internet of things network connectivity

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application.
Ref document number: 24720387
Country of ref document: EP
Kind code of ref document: A1