Please refer to RP-213599 for detailed scope of the SI.
R1-2205695 Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface) Ad-hoc Chair (CMCC) (rev of R1-2205572)
R1-2205021 Work plan for Rel-18 SI on AI and ML for NR air interface Qualcomm Incorporated
R1-2205022 TR skeleton for Rel-18 SI on AI and ML for NR air interface Qualcomm Incorporated
[109-e-R18-AI/ML-01] – Juan (Qualcomm)
Email discussion and approval of TR skeleton for Rel-18 SI on AI/ML for NR air interface by May 13
R1-2205478 [109-e-R18-AI/ML-01] Email discussion and approval of TR skeleton for Rel-18 SI on AI/ML for NR air interface Moderator (Qualcomm Incorporated)
R1-2205476 TR 38.843 skeleton for Rel-18 SI on AI and ML for NR air interface Qualcomm Incorporated
Decision: As per email decision posted on May 22nd, the revised skeleton in R1-2205478 is still not stable. Discussion to continue in the next meeting.
Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.
R1-2203280 General aspects of AI PHY framework Ericsson
· Proposal 5: Study the following three collaboration cases:
o Single-sided ML functionality at the gNB/NW only,
o Single-sided ML functionality at the UE only,
o Dual-sided joint ML functionality at both the UE and gNB/NW (joint operation).
Decision: The document is noted.
R1-2204570 ML terminology, descriptions, and collaboration framework Nokia, Nokia Shanghai Bell
· Proposal 1: RAN1 maintains a list of ML-related terms and definitions. Terminology in Annex A could be used as a starting point.
· Proposal 2: RAN1 agrees that the terms used in this study are valid only for the air interface; at the final stage, some adjustments in terminology may be needed to align with other 3GPP groups.
· Proposal 3: RAN1 at least to differentiate RL-based algorithms from other types of ML algorithms.
· Proposal 4: RAN1 will support collaboration-based solutions only if they outperform implementation-based ML solutions and/or non-ML baselines.
· Proposal 5: RAN1 defines and maintains possible collaboration options and uses them to map the collaboration in the use-cases under study.
· Proposal 6: RAN1 to adopt a high-level description of the ML-based solutions using a defined set of processing blocks, including at least the description of their input and output data, type of algorithm, hyperparameters, and control mechanisms used.
· Proposal 7: The RAN1 complexity comparison is to be performed between the different ML-enabled solutions proposed for the same function (sub-use case).
· Proposal 8: The RAN1 complexity estimation of an ML-enabled function should include the analysis of both training and inference operating modes.
· Proposal 9: RAN1 to consider including at least the following items in the complexity analysis of ML-enabled solutions:
o Training (or initial training/exploration for RL)
§ Number of floating-point operations required for one iteration (forward-backward) of the ML-algorithm
§ Number of required training iterations (steps and epochs) to reach the training performance/accuracy
§ As an alternative to the two items above, the floating-point operations per second needed to run the training
§ Memory footprint of the ML algorithm (Mbit)
§ Memory footprint of the potentially required input and output data storage (Gbit)
§ Number of floating-point operations required to prepare (and format, convert) the input data in case these are not direct measurements or estimates readily available in the radio entity executing the ML-enabled function
§ Estimated number and payload (bytes) of additional signalling messages required to convey the ML-input and ML-output information between the involved radio entities (gNB and UE)
· This might be complemented by the estimated required ML-input and ML-output data rates, i.e., factoring in the acceptable transmission delays
o Inference (or exploration/exploitation for RL)
§ Number of floating point operations required for one forward pass of the ML-algorithm
§ As an alternative to the item above, the floating-point operations per second needed to run the ML algorithm for (X) seconds
§ Number of floating-point operations required to prepare (and format, convert) the input data in case these are not direct measurements or estimates readily available in the radio entity executing the ML-enabled function
§ Estimated number and payload (bytes) of additional signalling messages required to convey the ML-input and ML-output information between the involved radio entities (gNB, UE)
· This might be complemented by the estimated required ML-input and ML-output data rates, i.e., factoring in the acceptable transmission delays
· Proposal 10: RAN1 to use simulator data for the study; after sufficient progress and convergence on the solutions, evaluation with field data can be discussed.
Decision: The document is noted.
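As a rough, hypothetical illustration of the complexity items listed in Proposal 9 above (FLOPs for one forward pass, FLOPs for one training iteration, and parameter memory footprint), the following Python sketch counts these quantities for a small fully-connected model; the layer sizes, the 3x forward-backward rule of thumb, and the 32-bit parameter format are assumptions for illustration only.

```python
# Hypothetical complexity bookkeeping for a small fully-connected model, in the
# spirit of Proposal 9 (training/inference FLOPs, memory footprint).  Layer sizes,
# the 3x forward-backward rule of thumb, and 32-bit parameters are assumptions.

layers = [(256, 512), (512, 512), (512, 64)]  # (input_dim, output_dim) per dense layer

def forward_flops(layers):
    """FLOPs for one forward pass: 2 FLOPs per multiply-accumulate, plus bias adds."""
    return sum(2 * n_in * n_out + n_out for n_in, n_out in layers)

def train_iteration_flops(layers):
    """Rough rule of thumb: one forward-backward iteration costs about 3x a forward pass."""
    return 3 * forward_flops(layers)

def param_memory_mbit(layers, bits_per_param=32):
    """Memory footprint of weights and biases in Mbit."""
    n_params = sum(n_in * n_out + n_out for n_in, n_out in layers)
    return n_params * bits_per_param / 1e6

print(f"Inference FLOPs per forward pass: {forward_flops(layers):,}")
print(f"Training FLOPs per iteration (approx.): {train_iteration_flops(layers):,}")
print(f"Parameter memory: {param_memory_mbit(layers):.2f} Mbit")
```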
R1-2205023 General aspects of AIML framework Qualcomm Incorporated
· Proposal 15: Consider the role of model performance monitoring in relation to RAN4 tests.
Decision: The document is noted.
R1-2204416 General aspects of AI/ML framework Lenovo
· Proposal 1: A general framework for this study on AI/ML for NR air interface enhancement is needed to align the understanding on the relevant functions for future investigation.
· Proposal 2: Define and construct different data sets for different purposes, such as for model training and for model validation.
· Proposal 3: Use Option 1a or 1b, i.e., simulation-data based, to construct the data set at least for model training; data set construction for other purposes needs further discussion.
· Proposal 4: The acquisition on ground-truth data for supervised learning needs to be workable in practice for any proposed AI/ML approach.
· Proposal 5: Define three categories of gNB-UE collaboration levels as listed in Table 1, according to the AI/ML operation-related information that is exchanged.
· Proposal 6: Adopt the AI Model Characterization Card (MCC) of an AI/ML model in Table 2 as a starting point for further discussion and refinement.
· Proposal 7: Consider the KPIs/Metrics (if applicable) in Table 4 as a starting point for the common aspects of an evaluation methodology of a proposed AI/ML model for any of the agreed use cases.
Decision: The document is noted.
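Table 2 of the Lenovo contribution is not reproduced in these notes, so the following is only a hypothetical sketch of what a Model Characterization Card could capture, using fields that appear elsewhere in this discussion (use case, model structure, input/output types, collaboration level, complexity); all field names and values are illustrative assumptions, not the contribution's actual table.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCharacterizationCard:
    """Hypothetical AI/ML Model Characterization Card (illustrative fields only)."""
    use_case: str                      # e.g., "CSI compression", "beam prediction"
    model_structure: str               # e.g., "CNN", "Transformer"
    input_type: str                    # e.g., "eigenvectors of the estimated channel"
    output_type: str                   # e.g., "reconstructed eigenvectors"
    collaboration_level: str           # "x", "y", or "z" per the RAN1 working levels
    flops_per_inference: int = 0
    model_size_mbit: float = 0.0
    notes: list = field(default_factory=list)

# Example usage with placeholder values.
card = ModelCharacterizationCard(
    use_case="CSI compression",
    model_structure="CNN autoencoder",
    input_type="eigenvector(s) of the estimated DL channel",
    output_type="reconstructed eigenvector(s)",
    collaboration_level="y",
    flops_per_inference=850_000,
    model_size_mbit=4.2,
)
print(card)
```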
R1-2203067 Discussion on common AI/ML characteristics and operations FUTUREWEI
R1-2203139 Discussion on general aspects of AI/ML framework Huawei, HiSilicon
R1-2203247 Discussion on common AI/ML framework ZTE
R1-2203404 Discussions on AI-ML framework New H3C Technologies Co., Ltd.
R1-2203450 Discussion on AI/ML framework for air interface CATT
R1-2203549 General discussions on AI/ML framework vivo
R1-2203656 Discussion on general aspects of AI/ML for NR air interface China Telecom
R1-2203690 Discussion on general aspects of AI ML framework NEC
R1-2203728 Consideration on common AI/ML framework Sony
R1-2203807 Initial views on the general aspects of AI/ML framework xiaomi
R1-2203896 General aspects of AI ML framework and evaluation methodology Samsung
R1-2204014 On general aspects of AI/ML framework OPPO
R1-2204062 Evaluating general aspects of AI-ML framework Charter Communications, Inc
R1-2204077 General aspects of AI/ML framework Panasonic
R1-2204120 Considerations on AI/ML framework SHARP Corporation
R1-2204148 General aspects on AI/ML framework LG Electronics
R1-2204179 Views on general aspects on AI-ML framework CAICT
R1-2204237 Discussion on general aspect of AI/ML framework Apple
R1-2204294 Discussion on general aspects of AI/ML framework CMCC
R1-2204374 Discussion on general aspects of AI/ML framework NTT DOCOMO, INC.
R1-2204498 Discussion on general aspects of AIML framework Spreadtrum Communications
R1-2204650 Discussion on AI/ML framework for NR air interface ETRI
R1-2204792 Discussion of AI/ML framework Intel Corporation
R1-2204839 On general aspects of AI and ML framework for NR air interface NVIDIA
R1-2204859 General aspects of AI/ML framework for NR air interface AT&T
R1-2204936 General aspects of AI/ML framework Mavenir
R1-2205065 AI/ML Model Life cycle management Rakuten Mobile
R1-2205075 Discussions on general aspects of AI/ML framework Fujitsu Limited
R1-2205099 Overview to support artificial intelligence over air interface MediaTek Inc.
[109-e-R18-AI/ML-02] – Taesang (Qualcomm)
Email discussion on general aspects of AI/ML by May 20
- Check points: May 18
R1-2205285 Summary#1 of [109-e-R18-AI/ML-02] Moderator (Qualcomm)
From May 13th GTW session
Agreement
· Use 3GPP channel models (TR 38.901) as the baseline for evaluations.
· Note: Companies may submit additional results based on datasets other than those generated by 3GPP channel models
R1-2205401 Summary#2 of [109-e-R18-AI/ML-02] Moderator (Qualcomm)
From May 17th GTW session
Working Assumption
Include the following into a working list of terminologies to be used for RAN1 AI/ML air interface SI discussion.
The description of the terminologies may be further refined as the study progresses.
New terminologies may be added as the study progresses.
It is FFS which subset of terminologies to capture into the TR.
Terminology |
Description |
Data collection |
A process of collecting data by the network nodes, management entity, or UE for the purpose of AI/ML model training, data analytics and inference |
AI/ML Model |
A data driven algorithm that applies AI/ML techniques to generate a set of outputs based on a set of inputs. |
AI/ML model training |
A process to train an AI/ML Model [by learning the input/output relationship] in a data driven manner and obtain the trained AI/ML Model for inference |
AI/ML model Inference |
A process of using a trained AI/ML model to produce a set of outputs based on a set of inputs |
AI/ML model validation |
A subprocess of training, to evaluate the quality of an AI/ML model using a dataset different from one used for model training, that helps selecting model parameters that generalize beyond the dataset used for model training. |
AI/ML model testing |
A subprocess of training, to evaluate the performance of a final AI/ML model using a dataset different from one used for model training and validation. Differently from AI/ML model validation, testing does not assume subsequent tuning of the model. |
UE-side (AI/ML) model |
An AI/ML Model whose inference is performed entirely at the UE |
Network-side (AI/ML) model |
An AI/ML Model whose inference is performed entirely at the network |
One-sided (AI/ML) model |
A UE-side (AI/ML) model or a Network-side (AI/ML) model |
Two-sided (AI/ML) model |
A paired AI/ML Model(s) over which joint inference is performed, where joint inference comprises AI/ML Inference whose inference is performed jointly across the UE and the network, i.e., the first part of inference is firstly performed by the UE and then the remaining part is performed by the gNB, or vice versa. |
AI/ML model transfer |
Delivery of an AI/ML model over the air interface, either parameters of a model structure known at the receiving end or a new model with parameters. Delivery may contain a full model or a partial model. |
Model download |
Model transfer from the network to UE |
Model upload |
Model transfer from UE to the network |
Federated learning / federated training |
A machine learning technique that trains an AI/ML model across multiple decentralized edge nodes (e.g., UEs, gNBs) each performing local model training using local data samples. The technique requires multiple interactions of the model, but no exchange of local data samples. |
Offline field data |
The data collected from field and used for offline training of the AI/ML model |
Online field data |
The data collected from field and used for online training of the AI/ML model |
Model monitoring |
A procedure that monitors the inference performance of the AI/ML model |
Supervised learning |
A process of training a model from input and its corresponding labels. |
Unsupervised learning |
A process of training a model without labelled data. |
Semi-supervised learning |
A process of training a model with a mix of labelled data and unlabelled data |
Reinforcement Learning (RL) |
A process of training an AI/ML model from input (a.k.a. state) and a feedback signal (a.k.a. reward) resulting from the model’s output (a.k.a. action) in an environment the model is interacting with. |
Model activation |
Enable an AI/ML model for a specific function |
Model deactivation |
Disable an AI/ML model for a specific function |
Model switching |
Deactivating a currently active AI/ML model and activating a different AI/ML model for a specific function |
Conclusion
As indicated in SID, although specific AI/ML algorithms and models may be studied for evaluation purposes, AI/ML algorithms and models are implementation specific and are not expected to be specified.
Observation
Where AI/ML functionality resides depends on specific use cases and sub-use cases.
Conclusion
· RAN1 discussion should focus on network-UE interaction.
o AI/ML functionality mapping within the network (such as gNB, LMF, or OAM) is up to RAN2/3 discussion.
R1-2205474 Summary#3 of [109-e-R18-AI/ML-02] Moderator (Qualcomm)
R1-2205522 Summary#4 of [109-e-R18-AI/ML-02] Moderator (Qualcomm)
From May 20th GTW session
Take the following network-UE collaboration levels as one aspect for defining collaboration levels
1. Level x: No collaboration
2. Level y: Signaling-based collaboration without model transfer
3. Level z: Signaling-based collaboration with model transfer
Note: Other aspect(s) for defining collaboration levels (e.g., with/without model updating, support of training/inference) are not precluded and will be discussed in later meetings
FFS: Clarification is needed for Level x-y boundary
Note: Extended email discussion focusing on evaluation assumptions to take place
· Dates: May 23 – 24
Including evaluation methodology, KPI, and performance evaluation results.
R1-2203897 Evaluation on AI ML for CSI feedback enhancement Samsung
· Proposal 1-1: For CSI prediction, to model user mobility, consider the link-level channel model with Doppler information in Section 7.5 of TR 38.901.
· Proposal 1-2: For CSI prediction, consider Rel-16 CSI feedback and Rel-17 CSI feedback, as benchmark schemes.
· Proposal 1-3: For CSI predictions, reuse channel models in TR 38.901 to generate datasets for training/testing/validation in this sub-use case.
· Proposal 1-4: For KPIs in CSI prediction, proxy metrics such as NMSE and cosine similarity can be considered as intermediate KPIs, and system-level metrics such as UPT can be used as general KPIs.
· Proposal 1-5: For CSI prediction, consider capability-related KPIs such as computational complexity, power consumption, memory storage, and hardware requirements.
· Proposal 2-1: Consider an auto-encoder as a baseline AI/ML model for CSI feedback compression and reconstruction tasks. Further study is needed to select the baseline type of neural network (e.g. CNN, RNN, LSTM).
· Proposal 2-2: For calibration in CSI compression, consider both performance-related KPIs (e.g., reconstruction accuracy) and capability-related KPIs (e.g., computational complexity) for the baseline AI/ML model.
· Proposal 2-3: Only for the model calibration in CSI compression, aligned loss function, hyper-parameter values, and details of the AI model are considered together.
· Proposal 2-4: For CSI compression, consider intermediate performance metrics (e.g., NMSE, CS) and UPT as final metric.
· Proposal 2-5: Consider various aspects of AI/ML models including computational complexity and the model size to study the AI processing burden and requirement at the UE.
· Proposal 2-6: To evaluate the capability of model generalization concerning various channel parameters (e.g., Rician K factor, path loss, angles, delays, powers, etc.), consider datasets from mixed scenarios or different distributions of channel parameters in a single scenario.
· Proposal 3-1: Consider a two-phased approach for evaluation: Phase I to compare various AI/ML models and their gain for representative sub-use case selection, and Phase II to evaluate the gain of AI/ML schemes as compared to conventional benchmark schemes in communication systems.
· Proposal 3-2: Strive to reuse the evaluation assumptions of Rel. 16/17 codebook enhancement as much as possible with additional mobility modeling. FFS: mobility modeling, and other additional considerations to model time-correlated CSI.
· Proposal 3-3: Target moderate UE mobility, e.g., up to 30 km/h, for joint CSI prediction and compression.
· Proposal 3-4: Consider either Rel-16 or Rel-17 CBs as a benchmark conventional scheme for performance comparison purposes. The selection of a benchmark conventional scheme could be based on whether angle-delay reciprocity is exploited in the channel measurement.
· Proposal 3-5: Consider an autoencoder-based AI/ML solution for joint CSI compression and prediction.
· Proposal 3-6: Consider simpler performance metrics, e.g., NMSE, cosine similarity, for Phase I of evaluation. Traditional performance metrics employed for codebook performance evaluation, such as UPT vs. feedback overhead, can be considered for Phase II.
· Proposal 3-7: Consider UE capability-related KPIs for AI/ML-based CSI compression and prediction, including computational complexity, memory storage, inference latency, model/training data transfer overhead, if applicable.
Decision: The document is noted.
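A minimal NumPy sketch of the intermediate/proxy metrics named in Proposals 1-4 and 2-4 above (NMSE and cosine similarity between the target CSI and the AI/ML output CSI); the vector length and the noise level are placeholders.

```python
import numpy as np

def nmse_db(target, estimate):
    """Normalized MSE in dB between the target CSI and the AI/ML output CSI."""
    err = np.linalg.norm(estimate - target) ** 2
    return 10 * np.log10(err / np.linalg.norm(target) ** 2)

def cosine_similarity(target, estimate):
    """Cosine similarity between two (vectorized, possibly complex) CSI samples."""
    return np.abs(np.vdot(target, estimate)) / (np.linalg.norm(target) * np.linalg.norm(estimate))

# Illustrative data: one placeholder CSI vector and a noisy "prediction" of it.
rng = np.random.default_rng(0)
h_true = rng.standard_normal(256) + 1j * rng.standard_normal(256)
h_pred = h_true + 0.2 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
print(f"NMSE: {nmse_db(h_true, h_pred):.2f} dB, cosine similarity: {cosine_similarity(h_true, h_pred):.3f}")
```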
R1-2203550 Evaluation on AI/ML for CSI feedback enhancement vivo
Proposal 1: The dataset for AI-model training, validation and testing can be constructed mainly based on the channel model(s) defined in TR 38.901, namely, UMi, UMa, and Indoor scenarios in system level simulation, and optionally on CDL in link level simulation.
Proposal 2: Consider both cases with same or different input data dimensions for data set construction to verify generalization performance.
Proposal 3: For CSI enhancement, the data set should be constructed in a way that data samples across different UEs, different cells, different drops, different scenarios are all included.
Proposal 4: Both the following two cases should be considered for generalization performance verification
a) Case1: the training data set is constructed by mixing data from different setups
b) Case2: training set and testing data set are from different setups
Proposal 5: For the case that the training data set is constructed by mixing data from different setups, the dataset for generalization can be constructed based on the combination of different scenarios and configurations. Different ratios of data mixture can be evaluated with the same total sample number for each dataset.
Proposal 6: For AI model calibration, the parameters used to construct the dataset need to be aligned.
Proposal 7: Companies are encouraged to share the data set and model files in a publicly accessible way for cross-check purposes. Our initial data set files for CSI compression and CSI prediction are available at links [5] and [6].
Proposal 8: Ideal downlink channel estimation is assumed as the starting point for the performance evaluation.
Proposal 9: Use ideal UCI feedback for the performance evaluation.
Proposal 10: The evaluation assumption in Table 2 is used as the SLS assumptions for both non-AI and AI-based performance evaluations.
Proposal 11: Parameter perturbation based on the basic parameter in Table 2 can be conducted to verify generalization performance of each case.
Proposal 12: The evaluation assumption in Table 3 is used as the LLS assumptions for AI-based CSI prediction evaluations.
Proposal 13: Study the performance loss caused by the n-bits quantization of AI model parameters with the float number AI model parameters as baseline.
Proposal 14: Clarify the quantization level of the AI model for evaluation.
Proposal 15: Spectral efficiency [bits/s/Hz] can be used for the final evaluation metric while absolute or square of cosine similarity and NMSE can be used to measure the similarity and difference between input and output as an intermediate metric.
Proposal 16: Generalization performance is also used as one KPI to verify whether AI/ML can work across multiple setups.
Proposal 17: The complexity, parameter sizes, quantization, latencies and power consumption of models need to be considered.
Proposal 18: The impact of the type of historical CSI inputs should be studied for the AI-based CSI prediction.
Proposal 19: The choice of number of historical CSI inputs should be studied for the AI-based CSI prediction.
Proposal 20: The study on the prediction of multiple future CSIs has high priority.
Proposal 21: The generalization performance across frequency domain should be studied.
Proposal 22: The generalization capability with respect to scenarios should be studied.
Proposal 23: Finetuning of AI-based CSI prediction should be studied.
Decision: The document is noted.
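A minimal sketch, assuming per-scenario datasets are already available as NumPy arrays, of the two generalization cases in Proposals 4-5 above: Case 1 mixes data from different setups with configurable ratios while keeping the total sample number fixed, and Case 2 trains and tests on different setups. Scenario names, sample counts, and ratios are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder per-scenario datasets (e.g., generated with TR 38.901 UMa/UMi/Indoor setups).
datasets = {
    "UMa":    rng.standard_normal((10_000, 64)),
    "UMi":    rng.standard_normal((10_000, 64)),
    "Indoor": rng.standard_normal((10_000, 64)),
}

def mixed_training_set(datasets, ratios, total_samples):
    """Case 1: build one training set by mixing scenarios with the given ratios,
    keeping the total sample number fixed (in the spirit of Proposal 5)."""
    parts = []
    for name, ratio in ratios.items():
        n = int(round(ratio * total_samples))
        idx = rng.choice(len(datasets[name]), size=n, replace=False)
        parts.append(datasets[name][idx])
    return np.concatenate(parts, axis=0)

# Case 1: train on a mixture, e.g., 50/30/20, with a fixed total of 12,000 samples.
train_case1 = mixed_training_set(datasets, {"UMa": 0.5, "UMi": 0.3, "Indoor": 0.2}, 12_000)

# Case 2: train on one setup and test on a different one.
train_case2, test_case2 = datasets["UMa"], datasets["UMi"]

print(train_case1.shape, train_case2.shape, test_case2.shape)
```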
R1-2203650 Evaluation on AI-based CSI feedback SEU
R1-2204041 Considerations on AI-enabled CSI overhead reduction CENC
R1-2204606 Discussion on the AI/ML methods for CSI feedback enhancements Fraunhofer IIS, Fraunhofer HHI
R1-2203068 Discussion on evaluation of AI/ML for CSI feedback enhancement use case FUTUREWEI
R1-2203140 Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2203248 Evaluation assumptions on AI/ML for CSI feedback ZTE
R1-2203281 Evaluations on AI-CSI Ericsson
R1-2203451 Discussion on evaluation on AI/ML for CSI feedback CATT
R1-2203808 Discussion on evaluation on AI/ML for CSI feedback enhancement xiaomi
R1-2204015 Evaluation methodology and preliminary results on AI/ML for CSI feedback enhancement OPPO
R1-2204050 Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.
R1-2204055 Evaluation of CSI compression with AI/ML Beijing Jiaotong University
R1-2204063 Performance evaluation of ML techniques for CSI feedback enhancement Charter Communications, Inc
R1-2204149 Evaluation on AI/ML for CSI feedback enhancement LG Electronics
R1-2204180 Some discussions on evaluation on AI-ML for CSI feedback CAICT
R1-2204238 Initial evaluation on AI/ML for CSI feedback Apple
R1-2204295 Discussion on evaluation on AI/ML for CSI feedback enhancement CMCC
R1-2204375 Discussion on evaluation on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2204417 Evaluation on AI/ML for CSI feedback Lenovo
R1-2204499 Discussion on evaluation on AI/ML for CSI feedback enhancement Spreadtrum Communications, BUPT
R1-2204571 Evaluation on ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2204793 Evaluation for CSI feedback enhancements Intel Corporation
R1-2204840 On evaluation assumptions of AI and ML for CSI feedback enhancement NVIDIA
R1-2204860 Evaluation of AI/ML for CSI feedback enhancements AT&T
R1-2205024 Evaluation on AIML for CSI feedback enhancement Qualcomm Incorporated
R1-2205076 Evaluation on AI/ML for CSI feedback enhancement Fujitsu Limited
R1-2205100 Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.
[109-e-R18-AI/ML-03] – Yuan (Huawei)
Email discussion on evaluation of AI/ML for CSI feedback enhancement by May 20
- Check points: May 18
R1-2205222 Summary#1 of [109-e-R18-AI/ML-03] Moderator (Huawei)
R1-2205223 Summary#2 of [109-e-R18-AI/ML-03] Moderator (Huawei)
From May 13th GTW session
Agreement
For the performance evaluation of the AI/ML based CSI feedback enhancement, system level simulation approach is adopted as baseline
· Link level simulation is optionally adopted
R1-2205224 Summary#3 of [109-e-R18-AI/ML-03] Moderator (Huawei)
Decision: As per email decision posted on May 19th,
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, for the purpose of calibrating the dataset and/or AI/ML model across companies, consider aligning the parameters (e.g., for scenarios/channels) for generating the dataset in the simulation as a starting point.
Decision: As per email decision posted on May 20th,
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, for ‘Channel estimation’, ideal DL channel estimation is optionally taken into the baseline of EVM for the purpose of calibration and/or comparing intermediate results (e.g., accuracy of AI/ML output CSI, etc.)
· Note: Eventual performance comparison with the benchmark release and drawing SI conclusions should be based on realistic DL channel estimation.
· FFS: the ideal channel estimation is applied for dataset construction, or performance evaluation/inference.
· FFS: How to model the realistic channel estimation
· FFS: Whether ideal channel is used as target CSI for intermediate results calculation with AI/ML output CSI from realistic channel estimation
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, companies can consider performing intermediate evaluation on AI/ML model performance to derive the intermediate KPI(s) (e.g., accuracy of AI/ML output CSI) for the purpose of AI/ML solution comparison.
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, Floating point operations (FLOPs) is adopted as part of the ‘Evaluation Metric’, and reported by companies.
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, AI/ML memory storage in terms of AI/ML model size and number of AI/ML parameters is adopted as part of the ‘Evaluation Metric’, and reported by companies who may select either or both.
· FFS: the format of the AI/ML parameters
Agreement
For the evaluation of the AI/ML based CSI compression sub use cases, a two-sided model is considered as a starting point, including an AI/ML-based CSI generation part to generate the CSI feedback information and an AI/ML-based CSI reconstruction part which is used to reconstruct the CSI from the received CSI feedback information.
· At least for inference, the CSI generation part is located at the UE side, and the CSI reconstruction part is located at the gNB side.
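A minimal PyTorch sketch of the two-sided structure in the agreement above: an AI/ML-based CSI generation part (UE side) and an AI/ML-based CSI reconstruction part (gNB side), with joint inference performed UE-first. The dimensions, layer choices, feedback size, and MSE loss are illustrative assumptions, not agreed values.

```python
import torch
import torch.nn as nn

CSI_DIM = 2 * 13 * 32      # e.g., real/imag x 13 sub-bands x 32 ports (illustrative only)
FEEDBACK_DIM = 64          # size of the compressed CSI feedback (illustrative only)

class CsiGenerationPart(nn.Module):
    """UE-side part: compresses the measured CSI into the feedback report."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(CSI_DIM, 256), nn.ReLU(),
            nn.Linear(256, FEEDBACK_DIM),
        )
    def forward(self, csi):
        return self.encoder(csi)

class CsiReconstructionPart(nn.Module):
    """gNB-side part: reconstructs the CSI from the received feedback."""
    def __init__(self):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(FEEDBACK_DIM, 256), nn.ReLU(),
            nn.Linear(256, CSI_DIM),
        )
    def forward(self, feedback):
        return self.decoder(feedback)

# Joint inference across the two sides (UE part first, then gNB part).
ue_part, gnb_part = CsiGenerationPart(), CsiReconstructionPart()
csi = torch.randn(8, CSI_DIM)                       # a batch of vectorized target CSI samples
reconstructed = gnb_part(ue_part(csi))
loss = nn.functional.mse_loss(reconstructed, csi)   # an MSE-style training loss, as one possible choice
print(loss.item())
```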
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, if SLS is adopted, the following table is taken as a baseline of EVM
· Note: the following table captures the common parts of the R16 CSI enhancement EVM table and the R17 CSI enhancement EVM table, while the different parts are FFS.
· Note: the baseline EVM is used to compare the performance with the benchmark release, while the AI/ML related parameters (e.g., dataset construction, generalization verification, and AI/ML related metrics) can be of additional/different assumptions.
o The conclusions for the use cases in the SI should be drawn based on generalization verification over potentially multiple scenarios/configurations.
· FFS: modifications on top of the following table for the purpose of AI/ML related evaluations.
Parameter |
Value |
|
Duplex, Waveform |
FDD (TDD is not precluded), OFDM |
|
Multiple access |
OFDMA |
|
Scenario |
Dense Urban (Macro only) is a baseline. Other scenarios (e.g. UMi@4GHz 2GHz, Urban Macro) are not precluded. |
|
Frequency Range |
FR1 only, FFS 2GHz or 4GHz as a baseline |
|
Inter-BS distance |
200m |
|
Channel model |
According to TR 38.901 |
|
Antenna setup and port layouts at gNB |
Companies need to report which option(s) are used between - 32 ports: (8,8,2,1,1,2,8), (dH,dV) = (0.5, 0.8)λ - 16 ports: (8,4,2,1,1,2,4), (dH,dV) = (0.5, 0.8)λ Other configurations are not precluded. |
|
Antenna setup and port layouts at UE |
4RX: (1,2,2,1,1,1,2), (dH,dV) = (0.5, 0.5)λ for (rank 1-4) 2RX: (1,1,2,1,1,1,1), (dH,dV) = (0.5, 0.5)λ for (rank 1,2) Other configuration is not precluded. |
|
BS Tx power |
41 dBm for 10MHz, 44dBm for 20MHz, 47dBm for 40MHz |
|
BS antenna height |
25m |
|
UE antenna height & gain |
Follow TR36.873 |
|
UE receiver noise figure |
9dB |
|
Modulation |
Up to 256QAM |
|
Coding on PDSCH |
LDPC, max code-block size = 8448 bits |
|
Numerology |
Slot/non-slot: 14 OFDM symbol slot; SCS: 15kHz for 2GHz, 30kHz for 4GHz |
|
Simulation bandwidth |
FFS |
|
Frame structure |
Slot Format 0 (all downlink) for all slots |
|
MIMO scheme |
FFS |
|
MIMO layers |
For all evaluation, companies to provide the assumption on the maximum MU layers (e.g. 8 or 12) |
|
CSI feedback |
Feedback assumption at least for baseline scheme |
|
Overhead |
Companies shall provide the downlink overhead assumption (i.e., whether the CSI-RS transmission is UE-specific or not and take that into account for overhead computation) |
|
Traffic model |
FFS |
|
Traffic load (Resource utilization) |
FFS |
|
UE distribution |
- 80% indoor (3km/h), 20% outdoor (30km/h) FFS whether/what other indoor/outdoor distribution and/or UE speeds for outdoor UEs needed |
|
UE receiver |
MMSE-IRC as the baseline receiver |
|
Feedback assumption |
Realistic |
|
Channel estimation |
Realistic as a baseline FFS ideal channel estimation |
|
Evaluation Metric |
Throughput and CSI feedback overhead as baseline metrics. Additional metrics, e.g., ratio between throughput and CSI feedback overhead, can be used. Maximum overhead (payload size for CSI feedback) for each rank at one feedback instance is the baseline metric for CSI feedback overhead, and companies can provide other metrics. |
|
Baseline for performance evaluation |
FFS |
R1-2205491 Summary#4 of [109-e-R18-AI/ML-03] Moderator (Huawei)
Decision: As per email decision posted on May 22nd,
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, as a starting point, take the intermediate KPIs of GCS/SGCS and/or NMSE as part of the ‘Evaluation Metric’ to evaluate the accuracy of the AI/ML output CSI
· For GCS/SGCS,
o FFS: how to calculate GCS/SGCS for rank>1
o FFS: whether GCS or SGCS is adopted
· FFS other metrics, e.g., equivalent MSE, received SNR, or numerical spectral efficiency gap.
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, if LLS is preferred, the following table is taken as a baseline of EVM
· Note: the baseline EVM is used to compare the performance with the benchmark release, while the AI/ML related parameters (e.g., dataset construction, generalization verification, and AI/ML related metrics) can be of additional/different assumptions.
o The conclusions for the use cases in the SI should be drawn based on generalization verification over potentially multiple scenarios/configurations.
· FFS: modifications on top of the following table for the purpose of AI/ML related evaluations.
· FFS: other parameters and values if needed
Parameter |
Value |
Duplex, Waveform |
FDD (TDD is not precluded), OFDM |
Carrier frequency |
2GHz as baseline, optional for 4GHz |
Bandwidth |
10MHz or 20MHz |
Subcarrier spacing |
15kHz for 2GHz, 30kHz for 4GHz |
Nt |
32: (8,8,2,1,1,2,8), (dH,dV) = (0.5, 0.8)λ |
Nr |
4: (1,2,2,1,1,1,2), (dH,dV) = (0.5, 0.5)λ |
Channel model |
CDL-C as baseline, CDL-A as optional |
UE speed |
3km/h, 10km/h, 20km/h or 30km/h to be reported by companies |
Delay spread |
30ns or 300ns |
Channel estimation |
Realistic channel estimation algorithms (e.g. LS or MMSE) as a baseline, FFS ideal channel estimation |
Rank per UE |
Rank 1-4. Companies are encouraged to report the Rank number, and whether/how rank adaptation is applied |
Agreement (modified by May 23rd post)
For the evaluation of the AI/ML based CSI feedback enhancement, study the verification of generalization. Companies are encouraged to report how they verify the generalization of the AI/ML model, including:
· The training dataset of configuration(s)/ scenario(s), including potentially the mixed training dataset from multiple configurations/scenarios
· The configuration(s)/ scenario(s) for testing/inference
· The detailed list of configuration(s) and/or scenario(s)
· Other details are not precluded
Note: Above agreement is updated as follows
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, study the verification of generalization. Companies are encouraged to report how they verify the generalization of the AI/ML model, including:
· The configuration(s)/ scenario(s) for training dataset, including potentially the mixed training dataset from multiple configurations/scenarios
· The configuration(s)/ scenario(s) for testing/inference
· Other details are not precluded
Agreement
For the evaluation of the AI/ML based CSI compression sub use cases, companies are encouraged to report the details of their models, including:
· The structure of the AI/ML model, e.g., type (CNN, RNN, Transformer, Inception, …), the number of layers, branches, real valued or complex valued parameters, etc.
· The input CSI type, e.g., raw channel matrix estimated by UE, eigenvector(s) of the raw channel matrix estimated by UE, etc.
o FFS: the input CSI is obtained from the channel with or without analog BF
· The output CSI type, e.g., channel matrix, eigenvector(s), etc.
· Data pre-processing/post-processing
· Loss function
· Others are not precluded
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, if SLS is adopted, the following parameters are taken into the baseline of EVM
· Note: The 2nd column applies if R16 TypeII codebook is selected as baseline, and the 3rd column applies if R17 TypeII codebook is selected as baseline.
o Additional assumptions from R17 TypeII EVM: the same consideration with respect to utilizing angle-delay reciprocity should be taken for the AI/ML based CSI feedback and the baseline scheme if the R17 TypeII codebook is selected as baseline
o FFS baseline for potential sub use cases involving CSI enhancement on time domain
· Note: the baseline EVM is used to compare the performance with the benchmark release, while the AI/ML related parameters (e.g., dataset construction, generalization verification, and AI/ML related metrics) can be of additional/different assumptions.
o The conclusions for the use cases in the SI should be drawn based on generalization verification over potentially multiple scenarios/configurations.
· FFS: modifications on top of the following table for the purpose of AI/ML related evaluations.
Parameter |
Value (if R16 as baseline) |
Value (if R17 as baseline) |
Frequency Range |
FR1 only, 2GHz as baseline, optional for 4GHz. |
FR1 only, 2GHz with duplexing gap of 200MHz between DL and UL, optional for 4GHz |
Simulation bandwidth |
10 MHz for 15kHz as a baseline, and configurations which emulate larger BW, e.g., same sub-band size as 40/100 MHz with 30kHz, may be optionally considered. Above 15kHz is replaced with 30kHz SCS for 4GHz. |
20 MHz for 15kHz as a baseline (optional for 10 MHz with 15kHz), and configurations which emulate larger BW, e.g., same sub-band size as 40/100 MHz with 30kHz, may be optionally considered. Above 15kHz is replaced with 30kHz SCS for 4GHz |
MIMO scheme |
SU/MU-MIMO with rank adaptation. Companies are encouraged to report the SU/MU-MIMO with RU |
SU/MU-MIMO with rank adaptation. Companies are encouraged to report the SU/MU-MIMO with RU |
Traffic load (Resource utilization) |
20/50/70% Companies are encouraged to report the MU-MIMO utilization. |
20/50/70% Companies are encouraged to report the MU-MIMO utilization. |
Decision: As per email decision posted on May 25th,
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, if SLS is adopted, the ‘Baseline for performance evaluation’ in the baseline of EVM is captured as follows
Baseline for performance evaluation |
Companies need to report which option is used between - Rel-16 TypeII Codebook as the baseline for performance and overhead evaluation. - Rel-17 TypeII Codebook as the baseline for performance and overhead evaluation. - FFS: Whether Type I Codebook can be optionally considered at least for performance evaluation |
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, if the GCS/SGCS is adopted as the intermediate KPI as part of the ‘Evaluation Metric’ for rank>1 cases, companies to report the GCS/SGCS calculation/extension methods, including:
· Method 1: Average over all layers
o SGCS = E{ (1/N) Σ_{i=1..N} (1/K) Σ_{k=1..K} |v_{i,k}^H v̂_{i,k}|^2 / ( ||v_{i,k}||^2 ||v̂_{i,k}||^2 ) }, where GCS is defined without the square
Note: v_{i,k} is the eigenvector of the target CSI at resource unit i for layer k, and K is the rank. v̂_{i,k} is the output vector of the output CSI of resource unit i for layer k. N is the total number of resource units. E{·} denotes the average operation over multiple samples.
· Method 2: Weighted average over all layers
o Note: Companies to report the formula (e.g., whether normalization is applied for eigenvalues)
· Method 3: GCS/SGCS is separately calculated for each layer (e.g., for K layers, K GCS/SGCS values are derived respectively, and comparison is performed per layer)
· Other methods are not precluded
· FFS: Further down-selection among the above options or take one/a subset of the above methods as baseline(s).
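A NumPy sketch of the SGCS intermediate KPI for rank > 1, computed per layer (Method 3) and averaged over all layers (Method 1) following the formula above; array shapes and the test data are illustrative.

```python
import numpy as np

def sgcs_per_layer(target, output):
    """Squared generalized cosine similarity per layer (Method 3).

    target, output: complex arrays of shape (N, K, P) = (resource units, layers, ports),
    holding the target eigenvectors and the output CSI vectors.  Returns shape (K,)."""
    inner = np.abs(np.sum(np.conj(target) * output, axis=-1)) ** 2
    norms = (np.linalg.norm(target, axis=-1) ** 2) * (np.linalg.norm(output, axis=-1) ** 2)
    return np.mean(inner / norms, axis=0)          # average over the N resource units

def sgcs_method1(target, output):
    """Method 1: average the per-layer SGCS over all K layers."""
    return float(np.mean(sgcs_per_layer(target, output)))

# Illustrative example: N=52 resource units, K=2 layers, P=32 gNB ports.
rng = np.random.default_rng(0)
tgt = rng.standard_normal((52, 2, 32)) + 1j * rng.standard_normal((52, 2, 32))
out = tgt + 0.1 * (rng.standard_normal((52, 2, 32)) + 1j * rng.standard_normal((52, 2, 32)))
print("Per-layer SGCS (Method 3):", sgcs_per_layer(tgt, out))
print("Layer-averaged SGCS (Method 1):", sgcs_method1(tgt, out))
```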
Final summary in R1-2205492.
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2203069 Discussion on sub use cases of AI/ML for CSI feedback enhancement use case FUTUREWEI
R1-2203141 Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2203249 Discussion on potential enhancements for AI/ML based CSI feedback ZTE
R1-2203282 Discussions on AI-CSI Ericsson
R1-2203452 Discussion on other aspects on AI/ML for CSI feedback CATT
R1-2203551 Other aspects on AI/ML for CSI feedback enhancement vivo
R1-2203614 Discussion on AI/ML for CSI feedback enhancement GDCNI (Late submission)
R1-2203729 Considerations on CSI measurement enhancements via AI/ML Sony
R1-2203809 Discussion on AI for CSI feedback enhancement xiaomi
R1-2203898 Representative sub use cases for CSI feedback enhancement Samsung
R1-2203939 Discussion on AI/ML for CSI feedback enhancement NEC
R1-2204016 On sub use cases and other aspects of AI/ML for CSI feedback enhancement OPPO
R1-2204051 Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.
R1-2204057 CSI compression with AI/ML Beijing Jiaotong University
R1-2204150 Other aspects on AI/ML for CSI feedback enhancement LG Electronics
R1-2204181 Discussions on AI-ML for CSI feedback CAICT
R1-2204239 Discussion on other aspects on AI/ML for CSI feedback Apple
R1-2204296 Discussion on other aspects on AI/ML for CSI feedback enhancement CMCC
R1-2204376 Discussion on other aspects on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2204418 Further aspects of AI/ML for CSI feedback Lenovo
R1-2204500 Discussion on other aspects on AI/ML for CSI feedback Spreadtrum Communications
R1-2204568 Discussions on Sub-Use Cases in AI/ML for CSI Feedback Enhancement TCL Communication
R1-2204572 Other aspects on ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2204659 Discussion on AI/ML for CSI feedback enhancement Panasonic
R1-2204794 Use-cases and specification for CSI feedback Intel Corporation
R1-2204841 On other aspects of AI and ML for CSI feedback enhancement NVIDIA
R1-2204861 CSI feedback enhancements for AI/ML based MU-MIMO scheduling and parameter configuration AT&T
R1-2204937 AI/ML for CSI feedback enhancement Mavenir
R1-2205025 Other aspects on AIML for CSI feedback enhancement Qualcomm Incorporated
R1-2205077 Views on sub-use case selection and STD impacts on AI/ML for CSI feedback enhancement Fujitsu Limited
R1-2205101 On the challenges of collecting field data for training and testing of AI/ML for CSI feedback enhancement MediaTek Inc.
[109-e-R18-AI/ML-04] – Huaning (Apple)
Email discussion on other aspects of AI/ML for CSI feedback enhancement by May 20
- Check points: May 18
R1-2205467 Email discussion on other aspects of AI/ML for CSI enhancement Moderator (Apple)
Decision: As per email decision posted on May 20th,
Agreement
Spatial-frequency domain CSI compression using two-sided AI model is selected as one representative sub use case.
· Note: Study of other sub use cases is not precluded.
· Note: All pre-processing/post-processing, quantization/de-quantization are within the scope of the sub use case.
Conclusion
· Further discuss temporal-spatial-frequency domain CSI compression using two-sided model as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion.
· Further discuss improving the CSI accuracy based on traditional codebook design using one-sided model as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion.
· Further discuss CSI prediction using one-sided model as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion
· Further discuss CSI-RS configuration and overhead reduction as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion
· Further discuss resource allocation and scheduling as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion
· Further discuss joint CSI prediction and compression as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion.
Final summary in R1-2205556.
Including evaluation methodology, KPI, and performance evaluation results.
R1-2204377 Discussion on evaluation on AI/ML for beam management NTT DOCOMO, INC.
· Proposal 1: Time-domain beam prediction should be studied as a sub use-case of beam management in Rel-18 AI/ML for AI.
· Proposal 2: 3GPP statistical channel models are considered in the evaluation for representative sub use-case selection.
· Proposal 3: Discuss and decide whether and which deterministic channel models should be used to capture the final evaluation results of selected sub use-cases.
· Proposal 4: Spatial-domain beam estimation should be studied as a sub use-case of beam management in Rel-18 AI/ML for AI.
Decision: The document is noted.
R1-2203250 Evaluation assumptions on AI/ML for beam management ZTE
Proposal 1: Due to stronger computing power and comprehensive awareness of the surrounding environment, AI inference is performed on the gNB side to ensure high prediction accuracy and low processing delay.
Proposal 2: Top-K candidate beams with higher predicted RSRP can be filtered out for refined small-range beam sweeping, resulting in a relatively good trade-off between training overhead and performance.
Proposal 3: A deep neural network is exploited for spatial-domain beam prediction due to its excellent ability in classification tasks and in learning complex nonlinear relationships.
Proposal 4: AI/ML based spatial-domain beam prediction can significantly reduce the beam training overhead by avoiding exhaustive beam sweeping.
Proposal 5: Beam prediction accuracy can be used as a performance indicator at the early stage, which may include top-1/top-K beam prediction accuracy, average RSRP difference, and the CDF of the RSRP difference between the AI-predicted beam and the ideal beam.
Proposal 6: Since the data sets and AI models used by different companies are different, it is necessary to provide common data sets and baseline models for simulation calibration and performance cross-validation.
Proposal 7: AI/ML based solutions are expected to be studied and evaluated to do beam prediction so as to reduce beam tracking latency and RS overhead in high mobility scenarios.
Proposal 8: Consider predictable mobility for beam management as an enhancement aspect for improving UE experience in FR2 high-mobility scenarios (e.g., high-speed train and highway).
- Study and evaluate the feasibility and potential system level gain on predictable mobility for beam management based on the identified scenario(s).
Decision: The document is noted.
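A small NumPy sketch, assuming per-sample predicted and genie-aided (ideal) L1-RSRP values over a candidate beam set, of the early-stage indicators mentioned in Proposal 5 above: Top-1 accuracy, "Top-1 predicted beam within the Top-K genie-aided beams" accuracy, and the average RSRP difference. The beam-set size, K, and the noise model are placeholders.

```python
import numpy as np

def beam_prediction_kpis(pred_rsrp, ideal_rsrp, k=4):
    """pred_rsrp, ideal_rsrp: arrays of shape (num_samples, num_beams) in dBm.
    Returns Top-1 accuracy, 'Top-1 predicted in Top-K genie-aided' accuracy,
    and the average L1-RSRP difference of the Top-1 predicted beam."""
    top1_pred = np.argmax(pred_rsrp, axis=1)
    top1_genie = np.argmax(ideal_rsrp, axis=1)
    topk_genie = np.argsort(ideal_rsrp, axis=1)[:, -k:]

    top1_acc = np.mean(top1_pred == top1_genie)
    topk_acc = np.mean([p in row for p, row in zip(top1_pred, topk_genie)])

    n = np.arange(len(top1_pred))
    # Ideal-RSRP gap between the genie-aided best beam and the predicted best beam (>= 0).
    rsrp_diff = ideal_rsrp[n, top1_genie] - ideal_rsrp[n, top1_pred]
    return top1_acc, topk_acc, float(np.mean(rsrp_diff))

# Illustrative data: 1000 samples, 32 candidate beams, noisy "AI" RSRP prediction.
rng = np.random.default_rng(0)
ideal = rng.uniform(-120, -70, size=(1000, 32))
pred = ideal + rng.normal(0, 3, size=ideal.shape)
print(beam_prediction_kpis(pred, ideal, k=4))
```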
R1-2203142 Evaluation on AI/ML for beam management Huawei, HiSilicon
R1-2203255 Model and data-driven beam predictions in high-speed railway scenarios PML
R1-2203283 Evaluations on AI-BM Ericsson
R1-2203374 Discussion for evaluation on AI/ML for beam management InterDigital, Inc.
R1-2203453 Discussion on evaluation on AI/ML for beam management CATT
R1-2203552 Evaluation on AI/ML for beam management vivo
R1-2203810 Evaluation on AI/ML for beam management xiaomi
R1-2203899 Evaluation on AI ML for Beam management Samsung
R1-2204017 Evaluation methodology and preliminary results on AI/ML for beam management OPPO
R1-2204059 Evaluation methodology of beam management with AI/ML Beijing Jiaotong University
R1-2204102 Discussion on evaluation of AI/ML for beam management use case FUTUREWEI
R1-2204151 Evaluation on AI/ML for beam management LG Electronics
R1-2204182 Some discussions on evaluation on AI-ML for Beam management CAICT
R1-2204240 Evaluation on AI based Beam Management Apple
R1-2204297 Discussion on evaluation on AI/ML for beam management CMCC
R1-2204419 Evaluation on AI/ML for beam management Lenovo
R1-2204573 Evaluation on ML for beam management Nokia, Nokia Shanghai Bell
R1-2204795 Evaluation for beam management Intel Corporation
R1-2204842 On evaluation assumptions of AI and ML for beam management NVIDIA
R1-2204862 Evaluation methodology aspects on AI/ML for beam management AT&T
R1-2205026 Evaluation on AIML for beam management Qualcomm Incorporated
R1-2205078 Evaluation on AI/ML for beam management Fujitsu Limited
R1-2205102 AI-assisted Target Cell Prediction for Inter-cell Beam Management MediaTek Inc.
[109-e-R18-AI/ML-05] – Feifei (Samsung)
Email discussion on evaluation of AI/ML for beam management by May 20
- Check points: May 18
R1-2205269 Feature lead summary #1 evaluation of AI/ML for beam management Moderator (Samsung)
From May 17th GTW session
Agreement
· For dataset construction and performance evaluation (if applicable) for the AI/ML in beam management, system level simulation approach is adopted as baseline
o Link level simulation is optionally adopted
Agreement
· At least for temporal beam prediction, companies report which one of the following spatial consistency procedures is used:
o Procedure A in TR38.901
o Procedure B in TR38.901
Agreement
· At least for temporal beam prediction, Dense Urban (macro-layer only, TR 38.913) is the basic scenario for dataset generation and performance evaluation.
o Other scenarios are not precluded.
· For spatial-domain beam prediction, Dense Urban (macro-layer only, TR 38.913) is the basic scenario for dataset generation and performance evaluation.
o Other scenarios are not precluded.
Agreement
· At least for spatial-domain beam prediction in the initial phase of the evaluation, the UE trajectory model does not necessarily need to be defined.
Agreement
· At least for temporal beam prediction in initial phase of the evaluation, UE trajectory model is defined. FFS on the details.
R1-2205270 Feature lead summary #2 evaluation of AI/ML for beam management Moderator (Samsung)
R1-2205271 Feature lead summary #3 evaluation of AI/ML for beam management Moderator (Samsung)
Decision: As per email decision posted on May 20th,
Agreement
· UE rotation speed is reported by companies.
o Note: UE rotation speed = 0, i.e., no UE rotation, is not precluded.
Agreement
· For AI/ML in beam management evaluation, RAN1 does not attempt to define any common AI/ML model as a baseline.
Conclusion
Further study AI/ML model generalization in beam management by evaluating the inference performance of beam prediction under multiple different scenarios/configurations.
· FFS on different scenarios/configurations
· Companies report the training approach, at least including the dataset assumption for training
Agreement
· For evaluation of AI/ML in BM, the KPI may include the model complexity and computational complexity.
o FFS: the details of model complexity and computational complexity
Agreement
· For spatial-domain beam prediction, further study the following options as baseline performance
o Option 1: Select the best beam within Set A of beams based on the measurement of all RS resources or all possible beams of beam Set A (exhaustive beam sweeping)
§ FFS CSI-RS/SSB as the RS resources
o Option 2: Select the best beam within Set A of beams based on the measurement of RS resources from Set B of beams
§ FFS: Set B is a subset of Set A and/or Set A consists of narrow beams and Set B consists of wide beams
§ FFS: how conventional scheme to obtain performance KPIs
§ FFS: how to determine the subset of RS resources is reported by companies
o Other options are not precluded.
Decision: As per email decision posted on May 22nd,
Agreement
· For dataset generation and performance evaluation for AI/ML in beam management, take the parameters (if applicable) in Table 1.2-1b for Dense Urban scenario for SLS
Table 1.2-1b Assumptions for Dense Urban scenario for AI/ML in beam management
Parameters |
Values |
Frequency Range |
FR2 @ 30 GHz · SCS: 120 kHz |
Deployment |
200m ISD, · 2-tier model with wrap-around (7 sites, 3 sectors/cells per site) Other deployment assumption is not precluded |
Channel model |
UMa with distance-dependent LoS probability function defined in Table 7.4.2-1 in TR 38.901. |
System BW |
80MHz |
UE Speed |
· For spatial domain beam prediction, 3km/h · For time domain beam prediction: 30km/h (baseline), 60km/h (optional) · Other values are not precluded |
UE distribution |
· FFS UEs per sector/cell for evaluation. More UEs per sector/cell for data generation is not precluded. · For spatial domain beam prediction: FFS: o Option 1: 80% indoor, 20% outdoor as in TR 38.901 o Option 2: 100% outdoor · For time domain prediction: 100% outdoor |
Transmission Power |
Maximum Power and Maximum EIRP for base station and UE as given by corresponding scenario in 38.802 (Table A.2.1-1 and Table A.2.1-2) |
BS Antenna Configuration |
· [One panel: (M, N, P, Mg, Ng) = (4, 8, 2, 1, 1), (dV, dH) = (0.5, 0.5) λ as baseline] · [Four panels: (M, N, P, Mg, Ng) = (4, 8, 2, 2, 2), (dV, dH) = (0.5, 0.5) λ. (dg,V, dg,H) = (2.0, 4.0) λ as optional] · Other assumptions are not precluded.
Companies to explain TXRU weights mapping. Companies to explain beam selection. Companies to explain number of BS beams |
BS Antenna radiation pattern |
TR 38.802 Table A.2.1-6, Table A.2.1-7 |
UE Antenna Configuration |
[Panel structure: (M,N,P) = (1,4,2)] · 2 panels (left, right) with (Mg, Ng) = (1, 2) as baseline · Other assumptions are not precluded
Companies to explain TXRU weights mapping. Companies to explain beam and panel selection. Companies to explain number of UE beams |
UE Antenna radiation pattern |
TR 38.802 Table A.2.1-8, Table A.2.1-10 |
Beam correspondence |
Companies to explain beam correspondence assumptions (in accordance with the two types agreed in RAN4) |
Link adaptation |
Based on CSI-RS |
Traffic Model |
FFS: · Option 1: Full buffer · Option 2: FTP model Other options are not precluded |
Inter-panel calibration for UE |
Ideal, non-ideal following 38.802 (optional) – Explain any errors |
Control and RS overhead |
Companies report details of the assumptions |
Control channel decoding |
Ideal or Non-ideal (Companies explain how it is modelled) |
UE receiver type |
MMSE-IRC as the baseline, other advanced receiver is not precluded |
BF scheme |
Companies explain what scheme is used |
Transmission scheme |
Multi-antenna port transmission schemes. Note: Companies explain details of the transmission scheme used. |
Other simulation assumptions |
Companies to explain serving TRP selection Companies to explain scheduling algorithm |
Other potential impairments |
Not modelled (assumed ideal). If impairments are included, companies will report the details of the assumed impairments |
BS Tx Power |
[40 dBm] |
Maximum UE Tx Power |
23 dBm |
BS receiver Noise Figure |
7 dB |
UE receiver Noise Figure |
10 dB |
Inter site distance |
200m |
BS Antenna height |
25m |
UE Antenna height |
1.5 m |
Car penetration Loss |
38.901, sec 7.4.3.2: μ = 9 dB, σp = 5 dB |
Agreement
· For temporal beam prediction, the following options can be considered as a starting point for UE trajectory model for further study. Companies report further changes or modifications based on the following options for UE trajectory model. Other options are not precluded.
o Option #2: Linear trajectory model with random direction change.
§ UE moving trajectory: the UE moves in a straight line along the selected direction to the end of a time interval, where the length of the time interval is drawn from an exponential distribution with an average interval length, e.g., 5s, with a granularity of 100 ms.
· UE moving direction change: At the end of the time interval, UE will change the moving direction with the angle difference A_diff from the beginning of the time interval, provided by using a uniform distribution within [-45°, 45°].
· The UE moves in a straight line within the time interval at a fixed speed.
o Option #3: Linear trajectory model with random and smooth direction change.
§ UE moving trajectory: the UE changes the moving direction in multiple steps within a time interval, where the length of the time interval is drawn from an exponential distribution with an average interval length, e.g., 5s, with a granularity of 100 ms.
· UE moving direction change: At the end of the time interval, UE will change the moving direction with the angle difference A_diff from the beginning of the time interval, provided by using a uniform distribution within [-45°, 45°].
· The time interval is further broken into N sub-intervals, e.g., 100ms per sub-interval, and at the end of each sub-interval, the UE changes the direction by an angle of A_diff/N.
· The UE moves in a straight line within each sub-interval at a fixed speed.
o Option #4: Random direction straight-line trajectories.
§ Initial UE location, moving direction and speed: UE is randomly dropped in a cell, and an initial moving direction is randomly selected, with a fixed speed.
· The initial UE location should be randomly dropped within the cell area (shown as the blue area in the original figure), where d1 is the minimum distance that the UE should be away from the BS.
o Each sector is a cell and that the cell association is geometry based.
o During the simulation, inter-cell handover or switching should be disabled.
For training data generation
§ For each UE moving trajectory: the total length of the UE trajectory can be set as T seconds if defined in time, or as D meters if defined in distance.
· The value of T (or D) can be further discussed
· The trajectory sampling interval granularity depends on UE speed and it can be further discussed.
§ The UE can move in a straight line along the entire trajectory, or
§ The UE can move in a straight line during the time interval, where the length of the time interval is drawn from an exponential distribution with an average interval length.
· UE may change the moving direction at the end of the time interval. UE will change the moving direction with the angle difference A_diff from the beginning of the time interval, provided by using a uniform distribution within [-45°, 45°]
§ If the UE trajectory hits the cell boundary (the red line in the original figure), the trajectory should be terminated.
· If the trajectory length (in time) is less than the length of observation window + prediction window, the trajectory should be discarded.
· At the current stage, the length of observation window + prediction window is not fixed and the companies can report their values.
· Generalization issue is FFS
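A minimal sketch of trajectory Option #2 above (straight-line segments with exponentially distributed interval lengths at 100 ms granularity and direction changes drawn uniformly from [-45°, 45°]); the UE speed, average interval length, and total duration are illustrative values.

```python
import numpy as np

def option2_trajectory(speed_mps=8.33, avg_interval_s=5.0, total_s=60.0,
                       step_s=0.1, seed=0):
    """Generate a 2-D UE trajectory per Option #2: the UE moves in a straight line
    for an exponentially distributed interval (100 ms granularity), then changes
    direction by an angle drawn uniformly from [-45, 45] degrees."""
    rng = np.random.default_rng(seed)
    n_total = int(round(total_s / step_s))
    pos = np.zeros(2)
    heading = rng.uniform(0, 2 * np.pi)
    points, steps_done = [pos.copy()], 0
    while steps_done < n_total:
        # Interval length from an exponential distribution, at 100 ms granularity.
        interval_steps = max(1, int(round(rng.exponential(avg_interval_s) / step_s)))
        interval_steps = min(interval_steps, n_total - steps_done)
        for _ in range(interval_steps):
            pos = pos + speed_mps * step_s * np.array([np.cos(heading), np.sin(heading)])
            points.append(pos.copy())
        steps_done += interval_steps
        heading += np.deg2rad(rng.uniform(-45.0, 45.0))   # direction change at interval end
    return np.array(points)

traj = option2_trajectory()   # e.g., 30 km/h is roughly 8.33 m/s
print(traj.shape, traj[-1])
```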
Agreement
· For temporal beam prediction, further study the following options as baseline performance
o Option 1a: Select the best beam for T2 within Set A of beams based on the measurements of all the RS resources or all possible beams from Set A of beams at the time instants within T2
o Option 2: Select the best beam for T2 within Set A of beams based on the measurements of all the RS resources from Set B of beams at the time instants within T1
§ Companies explain the details of how to select the best beam for T2 from Set A based on the measurements in T1
o Where T2 is the time duration for the best beam selection, and T1 is a time duration to obtain the measurements of all the RS resources from Set B of beams.
§ T1 and T2 are aligned with those for AI/ML based methods
o Whether Set A and Set B are the same or different depends on the sub-use case
o Other options are not precluded.
Agreement
· For dataset generation and performance evaluation for AI/ML in beam management, take the following assumption for LLS as optional methodology
Parameter |
Value |
Frequency |
30GHz. |
Subcarrier spacing |
120kHz |
Data allocation |
[8 RBs] as baseline, companies can report a larger number of RBs. First 2 OFDM symbols for PDCCH, and the following 12 OFDM symbols for the data channel |
PDCCH decoding |
Ideal or Non-ideal (Companies explain how it is modelled) |
Channel model |
FFS: LOS channel: CDL-D extension, DS = 100ns NLOS channel: CDL-A/B/C extension, DS = 100ns Companies explain details of the extension methodology considering spatial consistency
Other channel models are not precluded. |
BS antenna configurations |
· One panel: (M, N, P, Mg, Ng) = (4, 8, 2, 1, 1), (dV, dH) = (0.5, 0.5) λ as baseline · Other assumptions are not precluded.
Companies to explain TXRU weights mapping. Companies to explain beam selection. Companies to explain number of BS beams |
BS antenna element radiation pattern |
Same as SLS |
BS antenna height and antenna array downtile angle |
25m, 110° |
UE antenna configurations |
Panel structure: (M, N, P) = (1, 4, 2), · 2 panels (left, right) with (Mg, Ng) = (1, 2) as baseline · 1 panel as optional · Other assumptions are not precluded
Companies to explain TXRU weights mapping. Companies to explain beam and panel selection. Companies to explain number of UE beams |
UE antenna element radiation pattern |
Same as SLS |
UE moving speed |
Same as SLS |
Raw data collection format |
Depends on sub-use case and companies’ choice. |
Decision: As per email decision posted on May 25th,
Agreement
· For UE trajectory model, UE orientation can be independent from UE moving trajectory model. FFS on the details.
o Other UE orientation model is not precluded.
Agreement
· Companies are encouraged to report the following aspects of the AI/ML model in RAN1 #110. FFS whether some of the aspects need to be defined or reported.
o Description of the AI/ML model, e.g., NN architecture type
o Model inputs/outputs (per sub-use case)
o Training methodology, e.g.
§ Loss function/optimization function
§ Training/validation/testing dataset:
· Dataset size, number of training/validation/test samples
· Model validity area: e.g., whether model is trained for single sector or multiple sectors
· Details on Model monitoring and model update, if applicable
o Other related aspects are not precluded
Agreement
· To evaluate the performance of AI/ML in beam management, further study the following KPI options:
o Beam prediction accuracy related KPIs, which may include the following options:
§ Average L1-RSRP difference of Top-1 predicted beam
§ Beam prediction accuracy (%) for Top-1 and/or Top-K beams, FFS the definition:
· Option 1: The beam prediction accuracy (%) is the percentage of “the Top-1 predicted beam is one of the Top-K genie-aided beams”
· Option 2: The beam prediction accuracy (%) is the percentage of “the Top-1 genie-aided beam is one of the Top-K predicted beams”
§ CDF of L1-RSRP difference for Top-1 predicted beam
§ Beam prediction accuracy (%) with 1dB margin for Top-1 beam
· The beam prediction accuracy (%) with 1dB margin is the percentage of the Top-1 predicted beam “whose ideal L1-RSRP is within 1dB of the ideal L1-RSRP of the Top-1 genie-aided beam”
§ the definition of L1-RSRP difference of Top-1 predicted beam:
· the difference between the ideal L1-RSRP of Top-1 predicted beam and the ideal L1-RSRP of the Top-1 genie-aided beam
§ Other beam prediction accuracy related KPIs are not precluded and can be reported by companies.
o System performance related KPIs, which may include the following options:
§ UE throughput: CDF of UE throughput, avg. and 5%ile UE throughput
§ RS overhead reduction, at least for spatial-domain beam prediction and at least for the Top-1 beam:
· 1-N/M,
o where N is the number of beams (with reference signal (SSB and/or CSI-RS)) required for measurement
o where (FFS) M is the total number of beams
o Note: Non-AI/ML approach based on the measurement of these M beams may be used as a baseline
· FFS on whether to define a proper value for M for evaluation.
§ Other system performance related KPIs are not precluded and can be reported by companies.
o Other KPIs are not precluded and can be reported by companies, for example:
§ Reporting overhead reduction: (FFS) the number of UCI reports and UCI payload size, for temporal/spatial prediction
§ Latency reduction:
· (FFS) (1 – [Total transmission time of N beams] / [Total transmission time of M beams])
o where N is the number of beams (with reference signal (SSB and/or CSI-RS)) in the input beam set required for measurement
o where M is the total number of beams
§ Power consumption reduction: FFS on details
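The beam prediction accuracy and overhead KPI options listed above can be illustrated with a short Python sketch on synthetic data; the noise model standing in for the AI/ML predictor, the total beam count M, the measured-beam count N and K = 4 are arbitrary assumptions of this sketch, and the L1-RSRP difference follows the sign convention in the definition above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic example: per sample, ideal L1-RSRP (dBm) of all M beams in Set A,
# plus a "predicted" score that is a noisy copy of the ideal values.
num_samples, m_beams, k = 500, 32, 4
ideal = rng.uniform(-100.0, -60.0, size=(num_samples, m_beams))
predicted_score = ideal + rng.normal(0.0, 3.0, size=ideal.shape)   # stand-in model output

top1_pred = np.argmax(predicted_score, axis=1)
top1_genie = np.argmax(ideal, axis=1)
topk_pred = np.argsort(-predicted_score, axis=1)[:, :k]
topk_genie = np.argsort(-ideal, axis=1)[:, :k]

# Option 1: Top-1 predicted beam is one of the Top-K genie-aided beams.
acc_opt1 = np.mean([top1_pred[i] in topk_genie[i] for i in range(num_samples)])
# Option 2: Top-1 genie-aided beam is one of the Top-K predicted beams.
acc_opt2 = np.mean([top1_genie[i] in topk_pred[i] for i in range(num_samples)])

# L1-RSRP difference of the Top-1 predicted beam (ideal RSRP in both terms),
# and Top-1 accuracy with a 1 dB margin.
rows = np.arange(num_samples)
rsrp_diff = ideal[rows, top1_pred] - ideal[rows, top1_genie]
acc_1db = np.mean(np.abs(rsrp_diff) <= 1.0)

# RS overhead reduction for spatial-domain prediction: 1 - N/M, where N is the
# number of measured beams (Set B) and M the total number of beams (M is FFS).
n_measured = 8
overhead_reduction = 1.0 - n_measured / m_beams

print(f"Top-{k} accuracy: Option 1 {acc_opt1:.2f}, Option 2 {acc_opt2:.2f}")
print(f"Avg L1-RSRP difference {rsrp_diff.mean():.2f} dB, 1 dB-margin accuracy {acc_1db:.2f}")
print(f"RS overhead reduction 1 - N/M = {overhead_reduction:.2f}")
```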
Final summary in R1-2205641.
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2203143 Discussion on AI/ML for beam management Huawei, HiSilicon
R1-2203251 Discussion on potential enhancements for AI/ML based beam management ZTE
R1-2203284 Discussions on AI-BM Ericsson
R1-2203375 Discussion for other aspects on AI/ML for beam management InterDigital, Inc.
R1-2203454 Discussion on other aspects on AI/ML for beam management CATT
R1-2203553 Other aspects on AI/ML for beam management vivo
R1-2203691 Discussion on other aspects on AI/ML for beam management NEC
R1-2203730 Consideration on AI/ML for beam management Sony
R1-2203811 Other aspects on AI/ML for beam management xiaomi
R1-2203900 Representative sub use cases for beam management Samsung
R1-2204018 Other aspects of AI/ML for beam management OPPO
R1-2204060 Beam management with AI/ML Beijing Jiaotong University
R1-2204078 Discussion on sub use cases of beam management Panasonic
R1-2204103 Discussion on sub use cases of AI/ML for beam management use case FUTUREWEI
R1-2204152 Other aspects on AI/ML for beam management LG Electronics
R1-2204183 Discussions on AI-ML for Beam management CAICT
R1-2204241 Enhancement on AI based Beam Management Apple
R1-2204298 Discussion on other aspects on AI/ML for beam management CMCC
R1-2204378 Discussion on other aspects on AI/ML for beam management NTT DOCOMO, INC.
R1-2204420 Further aspects of AI/ML for beam management Lenovo
R1-2204501 Discussion on other aspects on AI/ML for beam management Spreadtrum Communications
R1-2204569 Discussions on Sub-Use Cases in AI/ML for Beam Management TCL Communication
R1-2204574 Other aspects on ML for beam management Nokia, Nokia Shanghai Bell
R1-2204796 Use-cases and specification for beam management Intel Corporation
R1-2204843 On other aspects of AI and ML for beam management NVIDIA
R1-2204863 System performance aspects on AI/ML for beam management AT&T
R1-2204938 AI/ML for beam management Mavenir
R1-2205027 Other aspects on AIML for beam management Qualcomm Incorporated
R1-2205079 Sub-use cases and spec impact on AI/ML for beam management Fujitsu Limited
R1-2205094 Discussion on Codebook Enhancement with AI/ML Charter Communications, Inc
[109-e-R18-AI/ML-06] – Zhihua (OPPO)
Email discussion on other aspects of AI/ML for beam management by May 20
- Check points: May 18
R1-2205252 Summary#1 for other aspects on AI/ML for beam management Moderator (OPPO)
R1-2205253 Summary#2 for other aspects on AI/ML for beam management Moderator (OPPO)
From May 17th GTW session
Agreement
For AI/ML-based beam management, support BM-Case1 and BM-Case2 for characterization and baseline performance evaluations
· BM-Case1: Spatial-domain DL beam prediction for Set A of beams based on measurement results of Set B of beams
· BM-Case2: Temporal DL beam prediction for Set A of beams based on the historic measurement results of Set B of beams
· FFS: details of BM-Case1 and BM-Case2
· FFS: other sub use cases
Note: For BM-Case1 and BM-Case2, Beams in Set A and Set B can be in the same Frequency Range
Agreement
Regarding the sub use case BM-Case2, the measurement results of K (K>=1) latest measurement instances are used for AI/ML model input:
· The value of K is up to companies
Agreement
Regarding the sub use case BM-Case2, AI/ML model output should be F predictions for F future time instances, where each prediction is for each time instance.
· At least F = 1
· The other value(s) of F is up to companies
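As a minimal sketch of how the K-instance input and F-instance output of BM-Case2 can be formed from a measurement history (with example values K = 4 and F = 2, and, for simplicity only, labels taken from the same measured beam set rather than a separate Set A):

```python
import numpy as np

# Toy history of L1-RSRP measurements for the beams in Set B (one row per
# measurement instance). K and F below are example choices only.
num_instances, num_beams_b = 100, 8
k_past, f_future = 4, 2
rsrp_history = np.random.default_rng(3).uniform(
    -100.0, -60.0, size=(num_instances, num_beams_b))

# Sliding window: each sample uses the K latest measurement instances as model
# input; here it is labelled with the best-beam index at each of the F future
# instances (taken from the same measured set for simplicity).
inputs, labels = [], []
for t in range(k_past, num_instances - f_future + 1):
    inputs.append(rsrp_history[t - k_past:t])                        # shape (K, beams)
    labels.append(np.argmax(rsrp_history[t:t + f_future], axis=1))   # F future best beams

inputs, labels = np.stack(inputs), np.stack(labels)
print(inputs.shape, labels.shape)   # (samples, K, beams), (samples, F)
```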
Agreement
For the sub use case BM-Case1, consider both Alt.1 and Alt.2 for further study:
· Alt.1: AI/ML inference at NW side
· Alt.2: AI/ML inference at UE side
Agreement
For the sub use case BM-Case2, consider both Alt.1 and Alt.2 for further study:
· Alt.1: AI/ML inference at NW side
· Alt.2: AI/ML inference at UE side
R1-2205453 Summary#3 for other aspects on AI/ML for beam management Moderator (OPPO)
Decision: As per email decision posted on May 20th,
Conclusion
For the sub use case BM-Case1, consider the following alternatives for further study:
· Alt.1: Set B is a subset of Set A
o FFS: the number of beams in Set A and B
o FFS: how to determine Set B out of the beams in Set A (e.g., fixed pattern, random pattern, …)
· Alt.2: Set A and Set B are different (e.g. Set A consists of narrow beams and Set B consists of wide beams)
o FFS: the number of beams in Set A and B
o FFS: QCL relation between beams in Set A and beams in Set B
o FFS: construction of Set B (e.g., regular pre-defined codebook, codebook other than regular pre-defined one)
· Note1: Set A is for DL beam prediction and Set B is for DL beam measurement.
· Note2: The narrow and wide beam terminology is for SI discussion only and has no specification impact
· Note3: The codebook constructions of Set A and Set B can be clarified by the companies.
Conclusion
Regarding the sub use case BM-Case1, further study the following alternatives for AI/ML input:
· Alt.1: Only L1-RSRP measurement based on Set B
· Alt.2: L1-RSRP measurement based on Set B and assistance information
o FFS: Assistance information. The following were mentioned by companies in the discussion: Tx and/or Rx beam shape information (e.g., Tx and/or Rx beam pattern, Tx and/or Rx beam boresight direction (azimuth and elevation), 3dB beamwidth, etc.), expected Tx and/or Rx beam for the prediction (e.g., expected Tx and/or Rx angle, Tx and/or Rx beam ID for the prediction), UE position information, UE direction information, Tx beam usage information, UE orientation information, etc.
§ Note: The provision of assistance information may be infeasible due to the concern of disclosing proprietary information to the other side.
· Alt.3: CIR based on Set B
· Alt.4: L1-RSRP measurement based on Set B and the corresponding DL Tx and/or Rx beam ID
· Note1: It is up to companies to provide other alternative(s) including the combination of some alternatives
· Note2: All the inputs are “nominal” and only for discussion purpose.
Conclusion
For the sub use case BM-Case2, further study the following alternatives with potential down-selection:
· Alt.1: Set A and Set B are different (e.g. Set A consists of narrow beams and Set B consists of wide beams)
o FFS: QCL relation between beams in Set A and beams in Set B
· Alt.2: Set B is a subset of Set A (Set A and Set B are not the same)
o FFS: how to determine Set B out of the beams in Set A (e.g., fixed pattern, random pattern, …)
· Alt.3: Set A and Set B are the same
· Note1: Predicted beam(s) are selected from Set A and measured beams used as input are selected from Set B.
· Note2: It is up to companies to provide other alternative(s)
· Note3: The narrow and wide beam terminology is for SI discussion only and has no specification impact
Conclusion
Regarding the sub use case BM-Case2, further study the following alternatives of measurement results for AI/ML input (for each past measurement instance):
· Alt.1: Only L1-RSRP measurement based on Set B
· Alt 2: L1-RSRP measurement based on Set B and assistance information
o FFS: Assistance information. The following were mentioned by companies in the discussion: Tx and/or Rx beam angle, position information, UE direction information, positioning-related measurement (such as Multi-RTT), expected Tx and/or Rx beam/occasion for the prediction (e.g., expected Tx and/or Rx beam angle for the prediction, expected occasions of the prediction), Tx and/or Rx beam shape information (e.g., Tx and/or Rx beam pattern, Tx and/or Rx beam boresight directions (azimuth and elevation), 3dB beamwidth, etc.), increase ratio of L1-RSRP for best N beams, UE orientation information
§ Note: The provision of assistance information may be infeasible due to the concern of disclosing proprietary information to the other side.
· Alt.3: L1-RSRP measurement based on Set B and the corresponding DL Tx and/or Rx beam ID
· Note1: It is up to companies to provide other alternative(s) including the combination of some alternatives
· Note2: All the inputs are “nominal” and only for discussion purpose.
Final summary in R1-2205454.
Including evaluation methodology, KPI, and performance evaluation results.
R1-2203554 Evaluation on AI/ML for positioning accuracy enhancement vivo
· Select the InF-DH scenario with clutter parameter {density 60%, height 6m, size 2m} as a typical scenario for positioning accuracy enhancement evaluation.
· Dataset and AI model sharing among different companies should be encouraged.
· For the purpose of link level and system level evaluation, statistical models (from TR 38.901 and TR 38.857) are utilized to generate dataset for AI/ML based positioning for model training/validation and testing.
o Field data measured in actual deployment for AI/ML model performance testing should be allowed and encouraged
· The positioning accuracy performance of AI/ML based positioning should be evaluated under all scenarios.
· Spatial consistency assumption should be adopted for performance evaluation.
· Performance related KPIs, such as @50%, @90% positioning accuracy defined in TR 38.857, can be used directly to evaluate the performance gain of AI/ML based positioning.
· Consider the following different levels of generalization performance for performance evaluation.
o Generalization performance from one cell to another
o Generalization performance from one drop to another
o Generalization performance from one scenario to another
· Computational complexity, parameter quantity and training data requirement are three crucial cost-related KPIs for AI/ML based positioning, and should be considered with high priority at the beginning of this study.
· Support time domain CIR as the model input for AI/ML based positioning.
· Study further on the benefits of two-step positioning for AI/ML based positioning in terms of positioning accuracy and AI model generalization.
· Study further on the benefits of fine-tuning for AI/ML based positioning in terms of positioning accuracy and AI model generalization.
Decision: The document is noted.
R1-2203144 Evaluation on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
Proposal 1: For AI/ML-based LOS/NLOS identification evaluation, adopt the normalized Power Delay Profile as the training inputs.
Proposal 2: For AI/ML-based fingerprint positioning evaluation, adopt the Channel Impulse Response as the training inputs.
Proposal 3: For AI/ML-based positioning evaluation, adopt the positioning accuracy and model complexity as the KPIs.
Proposal 4: For heavy NLOS scenarios, spatial consistent channel modeling shall be employed for the evaluation of AI/ML-based fingerprint positioning. Adopt one or both of the following concepts:
· 2D-Filtering method.
· Interpolation method.
Proposal 5: For AI/ML-based positioning evaluation, adopt IIoT scenario as baseline.
· A small number of gNB antennas should be evaluated.
Proposal 6: For AI/ML-based LOS/NLOS Identification evaluation, the baseline solution should be aligned with an existing traditional algorithm.
Proposal 7: For AI/ML-based positioning evaluation, training inputs generated from simulation platform should be a baseline.
Proposal 8: AI/ML-based fingerprint positioning should be studied for positioning accuracy enhancements under heavy NLOS conditions in Rel-18.
Proposal 9: For the evaluation of AI/ML-based fingerprint positioning, study the generalization of the AI/ML model for varying environments.
Decision: The document is noted.
R1-2203252 Evaluation assumptions on AI/ML for positioning ZTE
R1-2203285 Evaluations on AI-Pos Ericsson
R1-2203455 Discussion on evaluation on AI/ML for positioning CATT
R1-2203812 Initial views on the evaluation on AI/ML for positioning accuracy enhancement xiaomi
R1-2203901 Evaluation on AI ML for Positioning Samsung
R1-2204019 Evaluation methodology and preliminary results on AI/ML for positioning accuracy enhancement OPPO
R1-2204104 Discussion on evaluation of AI/ML for positioning accuracy enhancements use case FUTUREWEI
R1-2204153 Evaluation on AI/ML for positioning accuracy enhancement LG Electronics
R1-2204159 Evaluation assumptions and results for AI/ML based positioning InterDigital, Inc.
R1-2204184 Some discussions on evaluation on AI-ML for positioning accuracy enhancement CAICT
R1-2204242 Evaluation on AI/ML for positioning accuracy enhancement Apple
R1-2204299 Discussion on evaluation on AI/ML for positioning accuracy enhancement CMCC
R1-2204421 Discussion on AI/ML Positioning Evaluations Lenovo
R1-2204575 Evaluation on ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2204837 Evaluation on AI/ML for positioning accuracy enhancement Fraunhofer IIS, Fraunhofer HHI
R1-2204844 On evaluation assumptions of AI and ML for positioning enhancement NVIDIA
R1-2205028 Evaluation on AIML for positioning accuracy enhancement Qualcomm Incorporated
R1-2205066 Initial view on AI/ML application to positioning use cases Rakuten Mobile
R1-2205080 Discussion on Evaluation related issues for AI/ML for positioning accuracy enhancement Fujitsu Limited
[109-e-R18-AI/ML-07] – Yufei (Ericsson)
Email discussion on evaluation of AI/ML for positioning accuracy enhancement by May 20
- Check points: May 18
R1-2205217 Summary #1 of [109-e-R18-AI/ML-07] Email discussion on evaluation of AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2205218 Summary #2 of [109-e-R18-AI/ML-07] Email discussion on evaluation of AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2205219 Summary #3 of [109-e-R18-AI/ML-07] Email discussion on evaluation of AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From May 17th GTW session
Agreement
The IIoT indoor factory (InF) scenario is a prioritized scenario for evaluation of AI/ML based positioning.
Agreement
For evaluation of AI/ML based positioning, at least the InF-DH sub-scenario is prioritized in the InF deployment scenario for FR1 and FR2.
Agreement
For InF-DH channel, the prioritized clutter parameters {density, height, size} are:
· {60%, 6m, 2m};
· {40%, 2m, 2m}.
o Note: an individual company may treat {40%, 2m, 2m} as optional in their evaluation considering their specific AI/ML design.
Agreement
For evaluation of AI/ML based positioning, reuse the common scenario parameters defined in Table 6-1 of TR 38.857.
Agreement
For evaluation of InF-DH scenario, the parameters are modified from TR 38.857 Table 6.1-1 as shown in the table below.
· The parameters in the table are applicable to InF-DH at least. If another InF sub-scenario is prioritized in addition to InF-DH, some parameters in the table below may be updated.
Parameters common to InF scenario (Modified from TR 38.857 Table 6.1-1) | FR1 Specific Values | FR2 Specific Values
Channel model | |
Layout – Hall size | InF-DH: (baseline) 120x60 m; (optional) 300x150 m
Layout – BS locations | 18 BSs on a square lattice with spacing D, located D/2 from the walls. For the small hall (L=120m x W=60m): D=20m. For the big hall (L=300m x W=150m): D=50m
Room height | 10m
Total gNB TX power, dBm | 24dBm | 24dBm; EIRP should not exceed 58 dBm
gNB antenna configuration | (M, N, P, Mg, Ng) = (4, 4, 2, 1, 1), dH=dV=0.5λ – Note 1. Note: Other gNB antenna configurations are not precluded for evaluation | (M, N, P, Mg, Ng) = (4, 8, 2, 1, 1), dH=dV=0.5λ – Note 1. One TXRU per polarization per panel is assumed
gNB antenna radiation pattern | Single sector – Note 1 | 3-sector antenna configuration – Note 1
Penetration loss | 0dB
Number of floors | 1
UE horizontal drop procedure | Uniformly distributed over the horizontal evaluation area for obtaining the CDF values for positioning accuracy. The evaluation area should be selected from: the convex hull of the horizontal BS deployment; or the whole hall area if the CDF values for positioning accuracy are obtained from the whole hall area. FFS: which of the above should be baseline. FFS: if an optional evaluation area is needed
UE antenna height | Baseline: 1.5m. (Optional): uniformly distributed within [0.5, X2]m, where X2 = 2m for scenario 1 (InF-SH) and X2 = FFS: if the optional UE antenna height is needed
UE mobility | 3km/h
Min gNB-UE distance (2D), m | 0m
gNB antenna height | Baseline: 8m. (Optional): two fixed heights, either {4, 8} m, or {max(4, FFS: if the optional gNB antenna height is needed
Clutter parameters: {density, height, size} | High clutter density: {40%, 2m, 2m}; {60%, 6m, 2m}. Note: an individual company may treat {40%, 2m, 2m} as optional in their evaluation considering their specific AI/ML design.
Note 1: According to Table A.2.1-7 in TR 38.802
Agreement
For AI/ML-based positioning evaluation, the baseline performance to compare against is that of existing Rel-16/Rel-17 positioning methods.
· As a starting point, each participating company report the specific existing positioning method (e.g., DL-TDOA, Multi-RTT) used as comparison.
Agreement
For all scenarios and use cases, the main KPI is the CDF percentiles of horizontal accuracy.
· Companies can optionally report vertical accuracy.
Agreement
The CDF percentiles to analyse are: {50%, 67%, 80%, 90%}.
· 90% is the baseline. {50%, 67%, 80%} are optional.
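A small sketch of how the agreed CDF percentiles of horizontal accuracy can be computed from a set of positioning errors; the error samples here are synthetic placeholders.

```python
import numpy as np

# Synthetic 2D positioning errors (metres) standing in for an evaluation run.
rng = np.random.default_rng(4)
offsets = rng.normal(0.0, 0.5, size=(10000, 2))        # estimated minus true position
horizontal_error = np.linalg.norm(offsets, axis=1)

# Agreed CDF percentiles; 90% is the baseline, the others are optional.
for p in (50, 67, 80, 90):
    print(f"{p}% horizontal accuracy: {np.percentile(horizontal_error, p):.2f} m")
```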
Agreement
Target positioning requirements for horizontal accuracy and vertical accuracy are not defined for AI/ML-based positioning evaluation.
Agreement
For evaluation of AI/ML based positioning, the KPI include the model complexity and computational complexity.
· FFS: the details of model complexity and computational complexity
Agreement
Synthetic dataset generated according to the statistical channel models in TR38.901 is used for model training, validation, and testing.
Agreement
The dataset is generated by a system level simulator based on 3GPP simulation methodology.
Agreement
As a starting point, the training, validation and testing dataset are from the same large-scale and small-scale propagation parameters setting. Subsequent evaluation can study the performance when the training dataset and testing dataset are from different settings.
Agreement
For AI/ML-based positioning evaluation, RAN1 does not attempt to define any common AI/ML model as a baseline.
R1-2205480 Summary #4 of [109-e-R18-AI/ML-07] Email discussion on evaluation of AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2205481 Summary #5 of [109-e-R18-AI/ML-07] Email discussion on evaluation of AI/ML for positioning accuracy enhancement Moderator (Ericsson)
Decision: As per email decision posted on May 20th,
Agreement
The entry “UE horizontal drop procedure” in the simulation parameter table for InF is updated to the following.
UE horizontal drop procedure | Uniformly distributed over the horizontal evaluation area for obtaining the CDF values for positioning accuracy. The evaluation area should be selected from: (baseline) the whole hall area, with the CDF values for positioning accuracy obtained from the whole hall area; (optional) the convex hull of the horizontal BS deployment, with the CDF values for positioning accuracy obtained from the convex hull.
Agreement
The entries “UE antenna height” and “gNB antenna height” in the simulation parameter table for InF are updated to the following.
UE antenna height | Baseline: 1.5m. (Optional): uniformly distributed within [0.5, X2]m, where X2 = 2m for scenario 1 (InF-SH) and X2 =
…
gNB antenna height | Baseline: 8m. (Optional): two fixed heights, either {4, 8} m, or {max(4,
Agreement
If spatial consistency is enabled for the evaluation, companies model at least one of: large scale parameters, small scale parameters and absolute time of arrival, where
· the large scale parameters are according to Section 7.5 of TR 38.901 and correlation distance = for InF (Section 7.6.3.1 of TR 38.901)
· the small scale parameters are according to Section 7.6.3.1 of TR 38.901
· the absolute time of arrival is according to Section 7.6.9 of TR 38.901
Agreement
If spatial consistency is enabled for the evaluation of AI/ML based positioning, the baseline evaluation does not incorporate spatially consistent UT/BS mobility modelling (Section 7.6.3.2 of TR 38.901).
· It is optional to implement spatially consistent UT/BS mobility modelling (Section 7.6.3.2 of TR 38.901).
Agreement
For evaluation of AI/ML based positioning, companies are encouraged to evaluate the model generalization.
· FFS: the metrics for evaluating the model generalization (e.g., model performance based on agreed KPIs under different settings)
Decision: As per email decision posted on May 25th,
Agreement
Companies are encouraged to provide evaluation results for:
Agreement
When reporting evaluation results with direct AI/ML positioning and/or AI/ML assisted positioning, proponent company is expected to describe if a one-sided model or a two-sided model is used.
· If a one-sided model (i.e., UE-side model or network-side model) is used, the proponent company reports which side performs the model inference (e.g., UE, network), and any details specific to the side that performs the AI/ML model inference.
· If a two-sided model is used, the proponent company reports which side (e.g., UE, network) performs the first part of the inference, and which side (e.g., network, UE) performs the remaining part of the inference.
Agreement
For evaluation of AI/ML based positioning, the computational complexity can be reported via the metric of floating point operations (FLOPs).
· Note: For AI/ML assisted methods, computational complexity for the AI/ML model is only one component of the overall complexity for estimating the UE’s location.
· Note: Other metrics to measure the computational complexity are not precluded.
Agreement
For evaluation of AI/ML based positioning, details of the training dataset generation are to be reported by proponent company. The report may include (in addition to other selected settings, if applicable):
· The size of training dataset, for example, the total number of UEs in the evaluation area for generating training dataset;
· The distribution of UE location for generating the training dataset may be one of the following:
o Option 1: grid distribution, i.e., one training data sample is collected at the center of one small square grid cell, where, for example, the width of the square grid can be 0.25/0.5/1.0 m.
o Option 2: uniform distribution, i.e., the UE location is randomly and uniformly distributed in the evaluation area.
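The two UE location distributions for training data generation can be illustrated as follows; the 0.5 m grid width and the 120 m x 60 m hall footprint are example choices taken from the options discussed above, not mandated values.

```python
import numpy as np

# Baseline InF-DH hall footprint (120 m x 60 m) as the evaluation area.
length, width = 120.0, 60.0
rng = np.random.default_rng(5)

# Option 1: grid distribution, one training location at the centre of each
# small square grid cell (0.5 m grid width chosen here as an example).
grid = 0.5
xs = np.arange(grid / 2, length, grid)
ys = np.arange(grid / 2, width, grid)
grid_locations = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)

# Option 2: uniform distribution, UE locations drawn uniformly over the area,
# using the same dataset size as Option 1 for comparison.
uniform_locations = rng.uniform([0.0, 0.0], [length, width],
                                size=(grid_locations.shape[0], 2))

print(grid_locations.shape, uniform_locations.shape)
```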
Final summary in R1-2205633.
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2203145 Discussion on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
R1-2203253 Discussion on potential enhancements for AI/ML based positioning ZTE
R1-2203286 Discussions on AI-Pos Ericsson
R1-2203456 Discussion on other aspects on AI/ML for positioning CATT
R1-2203555 Other aspects on AI/ML for positioning accuracy enhancement vivo
R1-2203692 Discussion on other aspects on AI/ML for positioning accuracy enhancement NEC
R1-2203731 Considerations on AI/ML for positioning accuracy enhancement Sony
R1-2203813 Initial views on the other aspects of AI/ML-based positioning accuracy enhancement xiaomi
R1-2203902 Representative sub use cases for Positioning Samsung
R1-2204020 On sub use cases and other aspects of AI/ML for positioning accuracy enhancement OPPO
R1-2204105 Discussion on sub use cases of AI/ML for positioning accuracy enhancements use case FUTUREWEI
R1-2204154 Other aspects on AI/ML for positioning accuracy enhancement LG Electronics
R1-2204160 Potential specification impacts for AI/ML based positioning InterDigital, Inc.
R1-2204185 Discussions on AI-ML for positioning accuracy enhancement CAICT
R1-2204243 Discussion on other aspects on AI/ML for positioning accuracy enhancement Apple
R1-2204300 Discussion on other aspects on AI/ML for positioning accuracy enhancement CMCC
R1-2204422 AI/ML Positioning use cases and Associated Impacts Lenovo
R1-2204576 Other aspects on ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2204798 Use-cases and specification for positioning Intel Corporation
R1-2204838 On potential specification impact of AI/ML for positioning Fraunhofer IIS, Fraunhofer HHI
R1-2204845 On other aspects of AI and ML for positioning enhancement NVIDIA
R1-2205029 Other aspects on AIML for positioning accuracy enhancement Qualcomm Incorporated
R1-2205081 Sub-use cases and spec impacts for AI/ML for positioning accuracy enhancement Fujitsu Limited
[109-e-R18-AI/ML-08] – Huaming (vivo)
Email discussion on other aspects of AI/ML for positioning accuracy enhancement by May 20
- Check points: May 18
R1-2205229 Discussion summary #1 of [109-e-R18-AI/ML-08] Moderator (vivo)
From May 18th GTW session
Agreement
Study further on sub use cases and potential specification impact of AI/ML for positioning accuracy enhancement considering various identified collaboration levels.
· Companies are encouraged to identify positioning specific aspects on collaboration levels if any in agenda 9.2.4.2.
· Note1: terminology, notation and common framework of Network-UE collaboration levels are to be discussed in agenda 9.2.1 and expected to be applicable to AI/ML for positioning accuracy enhancement.
· Note2: not every collaboration level may be applicable to an AI/ML approach for a sub use case
Agreement
For further study, at least the following aspects of AI/ML for positioning accuracy enhancement are considered.
· Direct AI/ML positioning: the output of AI/ML model inference is UE location
o E.g., fingerprinting based on channel observation as the input of AI/ML model
o FFS the details of channel observation as the input of AI/ML model, e.g. CIR, RSRP and/or other types of channel observation
o FFS: applicable scenario(s) and AI/ML model generalization aspect(s)
· AI/ML assisted positioning: the output of AI/ML model inference is new measurement and/or enhancement of existing measurement
o E.g., LOS/NLOS identification, timing and/or angle of measurement, likelihood of measurement
o FFS the details of input and output for corresponding AI/ML model(s)
o FFS: applicable scenario(s) and AI/ML model generalization aspect(s)
· Companies are encouraged to clarify all details/aspects of their proposed AI/ML approaches/sub use case(s) of AI/ML for positioning accuracy enhancement
Agreement
Companies are encouraged to study and provide inputs on potential specification impact at least for the following aspects of AI/ML approaches for sub use cases of AI/ML for positioning accuracy enhancement.
· AI/ML model training
o training data type/size
o training data source determination (e.g., UE/PRU/TRP)
o assistance signalling and procedure for training data collection
· AI/ML model indication/configuration
o assistance signalling and procedure (e.g., for model configuration, model activation/deactivation, model recovery/termination, model selection)
· AI/ML model monitoring and update
o assistance signalling and procedure (e.g., for model performance monitoring, model update/tuning)
· AI/ML model inference input
o report/feedback of model input for inference (e.g., UE feedback as input for network side model inference)
o model input acquisition and pre-processing
o type/definition of model input
· AI/ML model inference output
o report/feedback of model inference output
o post-processing of model inference output
· UE capability for AI/ML model(s) (e.g., for model training, model inference and model monitoring)
· Other aspects are not precluded
· Note: not all aspects may apply to an AI/ML approach in a sub use case
· Note2: the definitions of common AI/ML model terminologies are to be discussed in agenda 9.2.1
Final summary in R1-2205498.
R1-2203254 Discussion on other use cases for AI/ML ZTE
R1-2203405 Discussions on AI-ML challenges and limitations New H3C Technologies Co., Ltd.
R1-2203457 Views on UE capability of AI/ML for air interface CATT
R1-2203556 Discussions on AI/ML for DMRS vivo
R1-2203670 Draft skeleton of TR 38.843 Ericsson
R1-2204577 On ML capability exchange, interoperability, and testability aspects Nokia, Nokia Shanghai Bell
R1-2204846 GPU hosted 5G virtual RAN baseband processing and AI applications NVIDIA
R1-2204911 Discussion on other potential use cases of AI/ML for NR air interface Huawei, HiSilicon
R1-2205067 Consideration on UE processing capability for AI/ML utilization Rakuten Mobile
Please refer to RP-221348 for detailed scope of the SI.
R1-2208145 Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface) Ad-hoc Chair (CMCC)
[110-R18-AI/ML] Email to be used for sharing updates on online/offline schedule, details on what is to be discussed in online/offline sessions, tdoc number of the moderator summary for online session, etc – Taesang (Qualcomm)
R1-2207222 Technical report for Rel-18 SI on AI and ML for NR air interface Qualcomm Incorporated
TR 38.843
Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.
R1-2205752 Continued discussion on common AI/ML characteristics and operations FUTUREWEI
R1-2205830 General aspects of dataset construction Keysight Technologies UK Ltd
R1-2205889 Discussion on general aspects of AI/ML framework Huawei, HiSilicon
R1-2205966 Discussions on Common Aspects of AI/ML Framework TCL Communication
R1-2206031 Discussions on AI/ML framework vivo
R1-2206067 Discussion on general aspects of common AI PHY framework ZTE
R1-2206113 Considerations on common AI/ML framework Sony
R1-2206163 Discussion on general aspects of AI/ML framework Fujitsu
R1-2206194 On General Aspects of AI/ML Framework Google
R1-2206314 On general aspects of AI/ML framework OPPO
R1-2206390 AI/ML framework for air interface CATT
R1-2206466 Discussion on general aspects of AI ML framework NEC
R1-2206507 General aspects of AI and ML framework for NR air interface NVIDIA
R1-2206509 General aspects of AI/ML framework Lenovo
R1-2206577 General aspects of AI/ML framework Intel Corporation
R1-2206603 Discussion on general aspects of AIML framework Spreadtrum Communications
R1-2206634 Views on the general aspects of AL/ML framework Xiaomi
R1-2206674 Considerations on general aspects on AI-ML framework CAICT
R1-2206686 Discussion on general aspects of AI/ML for NR air interface China Telecom
R1-2206819 General aspects of AI ML framework and evaluation methodogy Samsung
R1-2206873 General aspects on AI/ML framework LG Electronics
R1-2206885 Discussion on general aspects of AI/ML framework Ericsson
R1-2206901 Discussion on general aspects of AI/ML framework CMCC
R1-2206952 Discussion on general aspects of AI/ML framework for NR air interface ETRI
R1-2206967 Further discussion on the general aspects of ML for Air-interface Nokia, Nokia Shanghai Bell
R1-2206987 General aspects of AI/ML framework MediaTek Inc.
R1-2207117 Discussion on AI/ML Model Life Cycle Management Rakuten Mobile, Inc
R1-2207223 General aspects of AI/ML framework Qualcomm Incorporated
R1-2207293 Discussion on general aspects of AI/ML framework Panasonic
R1-2207327 General aspect of AI/ML framework Apple
R1-2207400 Discussion on general aspects of AI/ML framework NTT DOCOMO, INC.
R1-2207457 Observation of Channel Matrix Sharp
R1-2207459 Discussion on general aspects of AI/ML framework KDDI Corporation
R1-2207879 Summary#1 of General Aspects of AI/ML Framework Moderator (Qualcomm)
From Monday session
Agreement
Study the following aspects, including the definition of components (if needed) and necessity, in Life Cycle Management
Note: Some aspects in the list may not have specification impact.
Note: Aspects with square brackets are tentative and pending terminology definition.
Note: More aspects may be added as study progresses.
R1-2207932 Summary#2 of General Aspects of AI/ML Framework Moderator (Qualcomm)
R1-2208063 Summary#3 of General Aspects of AI/ML Framework Moderator (Qualcomm)
Agreement
The following is an initial list of common KPIs (if applicable) for evaluating performance benefits of AI/ML
· Performance
o Intermediate KPIs
o Link and system level performance
o Generalization performance
· Over-the-air Overhead
o Overhead of assistance information
o Overhead of data collection
o Overhead of model delivery/transfer
o Overhead of other AI/ML-related signaling
· Inference complexity
o Computational complexity of model inference: FLOPs
o Computational complexity for pre- and post-processing
o Model complexity: e.g., the number of parameters and/or size (e.g. Mbyte)
· Training complexity
· LCM related complexity and storage overhead
o FFS: specific aspects
· FFS: Latency, e.g., Inference latency
Note: Other aspects may be added in the future, e.g. training related KPIs
Note: Use-case specific KPIs may be additionally considered for the given use-case.
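As a rough illustration of the inference-complexity KPIs (FLOPs, number of parameters, model size), the sketch below counts them for a toy fully connected model; the layer widths, the FP32 storage assumption and the multiply-plus-add FLOP convention are assumptions of this sketch only.

```python
# Back-of-the-envelope inference-complexity KPIs for a toy fully connected
# model. Layer widths are arbitrary; FLOPs are counted as one multiply plus
# one add per weight, and model size assumes FP32 (4 bytes per parameter).
layer_sizes = [256, 512, 512, 64]    # example input/hidden/output widths

params = sum(n_in * n_out + n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
flops = sum(2 * n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
size_mbyte = params * 4 / 1e6

print(f"parameters: {params}, FLOPs per inference: {flops}, "
      f"model size: {size_mbyte:.2f} Mbyte (FP32)")
```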
Working Assumption
Terminology | Description
Online training | An AI/ML training process where the model being used for inference is (typically continuously) trained in (near) real-time with the arrival of new training samples. Note: the notion of (near) real-time vs. non-real-time is context-dependent and is relative to the inference time-scale. Note: This definition only serves as a guidance. There may be cases that may not exactly conform to this definition but could still be categorized as online training by commonly accepted conventions. Note: Fine-tuning/re-training may be done via online or offline training. (This note could be removed when we define the term fine-tuning.)
Offline training | An AI/ML training process where the model is trained based on a collected dataset, and where the trained model is later used or delivered for inference. Note: This definition only serves as a guidance. There may be cases that may not exactly conform to this definition but could still be categorized as offline training by commonly accepted conventions.
Note: It is encouraged for the 3GPP discussion to proceed without waiting for the online/offline training terminologies.
R1-2208178 Summary#4 of General Aspects of AI/ML Framework Moderator (Qualcomm)
Working Assumption
Include the following into a working list of terminologies to be used for RAN1 AI/ML air interface SI discussion.
Terminology | Description
AI/ML model delivery | A generic term referring to delivery of an AI/ML model from one entity to another entity in any manner. Note: An entity could mean a network node/function (e.g., gNB, LMF, etc.), UE, proprietary server, etc.
Note: Companies are encouraged to bring discussions on various options and their views on how to define Level y/z boundary in the next RAN1 meeting.
Including evaluation methodology, KPI, and performance evaluation results.
R1-2205890 Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2206032 Evaluation on AI/ML for CSI feedback enhancement vivo
R1-2206068 Evaluation on AI for CSI feedback enhancement ZTE
R1-2206164 Evaluation on AI/ML for CSI feedback enhancement Fujitsu
R1-2206195 On Evaluation of AI/ML based CSI Google
R1-2206315 Evaluation methodology and preliminary results on AI/ML for CSI feedback enhancement OPPO
R1-2206334 Evaluation on AI/ML-based CSI feedback enhancement BJTU
R1-2206336 Continued discussion on evaluation of AI/ML for CSI feedback enhancement FUTUREWEI
R1-2206391 Evaluation on AI/ML for CSI feedback CATT
R1-2206510 Evaluation on AI/ML for CSI feedback Lenovo
R1-2206520 Evaluation of AI and ML for CSI feedback enhancement NVIDIA
R1-2206578 Evaluation for CSI feedback enhancements Intel Corporation
R1-2206604 Discussion on evaluation on AIML for CSI feedback enhancement Spreadtrum Communications, BUPT
R1-2206635 Discussion on evaluation on AI/ML for CSI feedback enhancement Xiaomi
R1-2206675 Some discussions on evaluation on AI-ML for CSI feedback CAICT
R1-2206820 Evaluation on AI ML for CSI feedback enhancement Samsung
R1-2206874 Evaluation on AI/ML for CSI feedback enhancement LG Electronics
R1-2206902 Discussion on evaluation on AI/ML for CSI feedback enhancement CMCC
R1-2206953 Evaluation on AI/ML for CSI feedback enhancement ETRI
R1-2206968 Evaluation of ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2206988 Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.
R1-2207063 On evaluation of AI/ML based methods for CSI feedback enhancement Fraunhofer IIS, Fraunhofer HHI (Late submission)
R1-2207081 Views on Evaluation of AI/ML for CSI Feedback Enhancement Mavenir
R1-2207152 Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.
R1-2207224 Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated
R1-2207328 Evaluation on AI/ML for CSI feedback Apple
R1-2207401 Discussion on evaluation on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2207475 Evaluation on AI/ML for CSI feedback enhancement in spatial-frequency-time domain SEU (rev of R1-2205824)
R1-2207720 Evaluations of AI-CSI Ericsson (rev of R1-2206883)
R1-2207836 Summary#1 for CSI evaluation of [110-R18-AI/ML] Moderator (Huawei)
From Monday session
Agreement
The following cases are considered for verifying the generalization performance of an AI/ML model over various scenarios/configurations as a starting point:
R1-2207837 Summary#2 for CSI evaluation of [110-R18-AI/ML] Moderator (Huawei)
From Tuesday session, the previous agreement is completed as follows
Agreement
The following cases are considered for verifying the generalization performance of an AI/ML model over various scenarios/configurations as a starting point:
· Case 1: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model performs inference/test on a dataset from the same Scenario#A/Configuration#A
· Case 2: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model performs inference/test on a different dataset than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B
· Case 3: The AI/ML model is trained based on training dataset constructed by mixing datasets from multiple scenarios/configurations including Scenario#A/Configuration#A and a different dataset than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B, and then the AI/ML model performs inference/test on a dataset from a single Scenario/Configuration from the multiple scenarios/configurations, e.g., Scenario#A/Configuration#A, Scenario#B/Configuration#B, Scenario#A/Configuration#B.
o Note: Companies to report the ratio for dataset mixing
o Note: number of the multiple scenarios/configurations can be larger than two
· FFS the detailed set of scenarios/configurations
· FFS other cases for generalization verification, e.g.,
o Case 2A: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model is updated based on a fine-tuning dataset different than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B. After that, the AI/ML model is tested on a different dataset than Scenario#A/Configuration#A, e.g., subject to Scenario#B/Configuration#B, Scenario#A/Configuration#B.
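The generalization cases above essentially define a train/test matrix over scenario/configuration datasets. The sketch below shows such a harness with a toy linear model and synthetic per-scenario datasets standing in for an AI/ML model and real channel data; only the case structure, not the numbers, is meaningful.

```python
import numpy as np

def make_dataset(seed, scale):
    """Toy per-scenario/configuration dataset; 'scale' mimics a scenario change."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((2000, 16))
    y = scale * (x @ rng.standard_normal((16, 4))) + 0.1 * rng.standard_normal((2000, 4))
    return x, y

def train(dataset):
    """Stand-in 'training': least-squares linear map from input to target."""
    x, y = dataset
    w, *_ = np.linalg.lstsq(x, y, rcond=None)
    return w

def test(model, dataset):
    """Stand-in intermediate KPI: normalised mean squared error."""
    x, y = dataset
    return float(np.mean((x @ model - y) ** 2) / np.mean(y ** 2))

scenario_a = make_dataset(seed=0, scale=1.0)
scenario_b = make_dataset(seed=1, scale=3.0)

model_a = train(scenario_a)                                        # trained on A only
mixed = tuple(np.concatenate(parts) for parts in zip(scenario_a, scenario_b))
model_mix = train(mixed)                                           # trained on mixed data

print(f"Case 1 (train A, test A):     {test(model_a, scenario_a):.3f}")
print(f"Case 2 (train A, test B):     {test(model_a, scenario_b):.3f}")
print(f"Case 3 (train mixed, test B): {test(model_mix, scenario_b):.3f}")
```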
R1-2207838 Summary#3 for CSI evaluation of [110-R18-AI/ML] Moderator (Huawei)
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, if the GCS/SGCS is adopted as the intermediate KPI as part of the ‘Evaluation Metric’, between GCS and SGCS, SGCS is adopted.
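For reference, a commonly used formulation of SGCS between a target precoding/eigenvector and its reconstruction is sketched below; how SGCS is averaged over layers, subbands and samples is part of the EVM discussion and is not fixed by this code.

```python
import numpy as np

def sgcs(w_target, w_output):
    """Squared generalized cosine similarity between two complex vectors,
    e.g. the target and reconstructed eigenvector of one subband/layer."""
    num = np.abs(np.vdot(w_target, w_output)) ** 2
    den = (np.linalg.norm(w_target) ** 2) * (np.linalg.norm(w_output) ** 2)
    return float(num / den)

rng = np.random.default_rng(6)
w = rng.standard_normal(32) + 1j * rng.standard_normal(32)       # target eigenvector
w_hat = w + 0.3 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))
print(f"SGCS = {sgcs(w, w_hat):.3f}")   # 1.0 corresponds to perfect reconstruction
```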
Agreement
For CSI enhancement evaluations, to verify the generalization performance of an AI/ML model over various scenarios, the set of scenarios are considered focusing on one or more of the following aspects as a starting point:
· Various deployment scenarios (e.g., UMa, UMi, InH)
· Various outdoor/indoor UE distributions for UMa/UMi (e.g., 10:0, 8:2, 5:5, 2:8, 0:10)
· Various carrier frequencies (e.g., 2GHz, 3.5GHz)
· Other aspects of scenarios are not precluded, e.g., various antenna spacing, various antenna virtualization (TxRU mapping), various ISDs, various UE speeds, etc.
· Companies to report the selected scenarios for generalization verification
Conclusion
If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, consider CSI prediction involving temporal domain as a starting point.
R1-2207839 Summary#4 for CSI evaluation of [110-R18-AI/ML] Moderator (Huawei)
Agreement
For CSI enhancement evaluations, to verify the generalization/scalability performance of an AI/ML model over various configurations (e.g., which may potentially lead to different dimensions of model input/output), the set of configurations are considered focusing on one or more of the following aspects as a starting point:
· Various bandwidths (e.g., 10MHz, 20MHz) and/or frequency granularities, (e.g., size of subband)
· Various sizes of CSI feedback payloads, FFS candidate payload number
· Various antenna port layouts, e.g., (N1/N2/P) and/or antenna port numbers (e.g., 32 ports, 16 ports)
· Other aspects of configurations are not precluded, e.g., various numerologies, various rank numbers/layers, etc.
· Companies to report the selected configurations for generalization verification
· Companies are encouraged to report the method to achieve generalization over various configurations to achieve scalability of the AI/ML input/output, including pre-processing, post-processing, etc.
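One simple pre-processing technique for scalability over antenna port numbers is zero-padding/truncation to a fixed input size, sketched below; this is only one of the methods companies may report (per-configuration model heads, subband segmentation, etc. are equally possible), and the dimensions used are arbitrary.

```python
import numpy as np

def pad_or_truncate_ports(csi, target_ports):
    """Toy pre-processing for scalability over antenna port numbers: zero-pad
    (or truncate) the last dimension so one model input size serves several
    configurations."""
    ports = csi.shape[-1]
    if ports >= target_ports:
        return csi[..., :target_ports]
    pad = np.zeros(csi.shape[:-1] + (target_ports - ports,), dtype=csi.dtype)
    return np.concatenate([csi, pad], axis=-1)

csi_16 = np.ones((13, 16))   # e.g., 13 subbands x 16 ports
csi_32 = np.ones((13, 32))   # e.g., 13 subbands x 32 ports
print(pad_or_truncate_ports(csi_16, 32).shape, pad_or_truncate_ports(csi_32, 32).shape)
```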
Conclusion
For the evaluation of the AI/ML based CSI feedback enhancement, for ‘Channel estimation’, it is up to companies to choose the error modeling method for realistic channel estimation and report by willingness.
· Note: It is not precluded that companies use ideal channel to calibrate
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, the throughput in the ‘Evaluation Metric’ includes average UPT, 5%ile UE throughput, and CDF of UPT.
Agreement
For the evaluation of the AI/ML based CSI compression sub use cases, companies are encouraged to report the specific quantization/dequantization method, e.g., vector quantization, scalar quantization, etc.
Agreement
For the evaluation of the AI/ML based CSI compression sub use cases, the capability/complexity related KPIs, including FLOPs as well as AI/ML model size and/or number of AI/ML parameters, are to be reported separately for the CSI generation part and the CSI reconstruction part.
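As an example of the quantization reporting above, the sketch below implements plain uniform scalar quantization of the CSI generation output and the matching dequantization at the reconstruction side; the clipping range, 2-bit resolution and 16-element latent are arbitrary assumptions, and vector quantization or trained codebooks would look different.

```python
import numpy as np

NUM_BITS = 2                               # example per-element resolution
Z_MIN, Z_MAX = -1.0, 1.0                   # assumed clipping range of the latent

def scalar_quantize(z):
    """Uniform scalar quantization of each latent element to NUM_BITS bits."""
    levels = 2 ** NUM_BITS
    step = (Z_MAX - Z_MIN) / levels
    return np.clip(np.floor((z - Z_MIN) / step), 0, levels - 1).astype(int)

def scalar_dequantize(idx):
    """Reconstruction-side mapping back to the quantization cell centres."""
    step = (Z_MAX - Z_MIN) / (2 ** NUM_BITS)
    return Z_MIN + (idx + 0.5) * step

z = np.random.default_rng(7).uniform(-1.0, 1.0, size=16)   # toy CSI-generation output
idx = scalar_quantize(z)
z_hat = scalar_dequantize(idx)
print(f"feedback payload: {idx.size * NUM_BITS} bits, "
      f"max abs quantization error: {np.abs(z - z_hat).max():.3f}")
```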
Conclusion
If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, a one-sided structure is considered as a starting point, where the AI/ML inference is performed at either gNB or UE.
Conclusion
If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, for evaluation,
· 100% outdoor UE is assumed for UE distribution.
o FFS: whether to add O2I car penetration loss per TS 38.901 if the simulation assumes UEs inside vehicles
· UE speeds of 10, 20, 30, 60, 120 km/h are assumed for evaluation

o Note: Companies to report the set/subset of speeds
· 5ms CSI feedback periodicity is taken as baseline, while other CSI feedback periodicity values can be reported for the EVM
Conclusion
If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, companies are encouraged to report the details of their models for evaluation, including:
· The structure of the AI/ML model, e.g., type (FCN, RNN, CNN,…), the number of layers, branches, format of parameters, etc.
· The input CSI type, e.g., raw channel matrix, eigenvector(s) of the raw channel matrix, feedback CSI information, etc.
· The output CSI type, e.g., channel matrix, eigenvector(s), feedback CSI information, etc.
· Data pre-processing/post-processing
· Loss function
· Others are not precluded
Final summary in R1-2207840.
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2205891 Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2205967 Discussions on Sub-Use Cases in AI/ML for CSI Feedback Enhancement TCL Communication
R1-2206033 Other aspects on AI/ML for CSI feedback enhancement vivo
R1-2206069 Discussion on other aspects for AI CSI feedback enhancement ZTE
R1-2206114 Considerations on CSI measurement enhancements via AI/ML Sony
R1-2206165 Discussion on other aspects of AI/ML for CSI feedback enhancement Fujitsu
R1-2206185 Discussion on AI/ML for CSI feedback enhancement Panasonic
R1-2206196 On Enhancement of AI/ML based CSI Google
R1-2206241 Discussion on AI/ML for CSI feedback enhancement NEC
R1-2206316 On sub use cases and other aspects of AI/ML for CSI feedback enhancement OPPO
R1-2206337 Continued discussion on other aspects of AI/ML for CSI feedback enhancement FUTUREWEI
R1-2206392 Other aspects on AI/ML for CSI feedback CATT
R1-2206511 Further aspects of AI/ML for CSI feedback Lenovo
R1-2206521 AI and ML for CSI feedback enhancement NVIDIA
R1-2206579 Use-cases and specification for CSI feedback Intel Corporation
R1-2206605 Discussion on other aspects on AIML for CSI feedback Spreadtrum Communications
R1-2206636 Discussion on potential specification impact for CSI feedback based on AI/ML Xiaomi
R1-2206676 Discussions on AI-ML for CSI feedback CAICT
R1-2206687 Discussion on AI/ML for CSI feedback enhancement China Telecom
R1-2206821 Representative sub use cases for CSI feedback enhancement Samsung
R1-2206875 Other aspects on AI/ML for CSI feedback enhancement LG Electronics
R1-2206884 Discussion on AI-CSI Ericsson
R1-2206903 Discussion on other aspects on AI/ML for CSI feedback enhancement CMCC
R1-2206954 Discussion on other aspects on AI/ML for CSI feedback enhancement ETRI
R1-2206969 Other aspects on ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2206989 Other aspects on AI/ML for CSI feedback enhancement MediaTek Inc.
R1-2207153 Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.
R1-2207225 Other aspects on AI/ML for CSI feedback enhancement Qualcomm Incorporated
R1-2207329 Other aspects on AI/ML for CSI Apple
R1-2207370 Sub-use cases for AI/ML feedback enhancements AT&T
R1-2207402 Discussion on other aspects on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2207780 Summary #1 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
R1-2207853 Summary #2 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From Tuesday session
Agreement
In CSI compression using two-sided model use case, the following AI/ML model training collaborations will be further studied:
· Type 1: Joint training of the two-sided model at a single side/entity, e.g., UE-sided or Network-sided.
· Type 2: Joint training of the two-sided model at network side and UE side, respectively.
· Type 3: Separate training at network side and UE side, where the UE-side CSI generation part and the network-side CSI reconstruction part are trained by UE side and network side, respectively.
· Note: Joint training means the generation model and reconstruction model should be trained in the same loop for forward propagation and backward propagation. Joint training could be done either at a single node or across multiple nodes (e.g., through gradient exchange between nodes).
· Note: Separate training includes sequential training starting with UE side training, or sequential training starting with NW side training [, or parallel training] at UE and NW
· Other collaboration types are not excluded.
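To make the distinction concrete, the sketch below uses a toy linear encoder/decoder as a stand-in for the two-sided model: the Type 1 flavour designs both parts in one place, while the Type 3 flavour lets the network side fit its own reconstruction part from a shared (feedback, target) dataset only. In this linear toy the two coincide; with non-linear models and mismatched datasets they generally do not. All dimensions and the PCA/least-squares stand-ins are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.standard_normal((2000, 32))        # toy stand-in for (centred) CSI samples
X = X - X.mean(axis=0)
k = 4                                      # latent "CSI feedback" dimension

# Type 1 flavour: both parts designed together at a single entity. A PCA
# encoder plus least-squares decoder stands in for a jointly trained
# two-sided autoencoder.
_, _, vt = np.linalg.svd(X, full_matrices=False)
w_enc = vt[:k].T                           # CSI generation part (UE side)
z = X @ w_enc                              # compressed feedback
w_dec_joint, *_ = np.linalg.lstsq(z, X, rcond=None)

# Type 3 flavour (separate training, UE-first sequential): only (feedback,
# target) pairs are shared, and the network side fits its own CSI
# reconstruction part without access to the UE-side training loop.
w_dec_nw, *_ = np.linalg.lstsq(z, X, rcond=None)

err_joint = np.linalg.norm(X - z @ w_dec_joint) / np.linalg.norm(X)
err_sep = np.linalg.norm(X - z @ w_dec_nw) / np.linalg.norm(X)
print(f"relative reconstruction error: joint {err_joint:.3f}, separate {err_sep:.3f}")
```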
R1-2207854 Summary #3 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
Conclusion
CSI-RS configuration and overhead reduction is NOT selected as one representative sub-use case for CSI feedback enhancement use case.
Conclusion
Resource allocation and scheduling is NOT selected as one representative sub-use case for CSI feedback enhancement use case.
Agreement
In CSI compression using two-sided model use case, further study potential specification impact on CSI report, including at least
· CSI generation model output and/or CSI reconstruction model input, including configuration(size/format) and/or potential post/pre-processing of CSI generation model output/CSI reconstruction model input.
· CQI determination
· RI determination
R1-2208077 Summary #4 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
Agreement
In CSI compression using two-sided model use case, further study potential specification impact on output CSI, including at least
· Model output type/dimension/configuration and potential post processing
Agreement
In CSI compression using two-sided model use case, further discuss at least the following aspects, including their necessity/feasibility/potential specification impact, for data collection for AI/ML model training/inference/update/monitoring:
· Assistance signaling for UE’s data collection
· Assistance signaling for gNB’s data collection
· Delivery of the datasets
Including evaluation methodology, KPI, and performance evaluation results.
R1-2205753 Continued discussion on evaluation of AI/ML for beam management FUTUREWEI
R1-2205892 Evaluation on AI/ML for beam management Huawei, HiSilicon
R1-2206034 Evaluation on AI/ML for beam management vivo
R1-2206070 Evaluation on AI for beam management ZTE
R1-2206166 Evaluation on AI/ML for beam management Fujitsu
R1-2206181 Discussion for evaluation on AI/ML for beam management InterDigital, Inc.
R1-2206197 On Evaluation of AI/ML based Beam Management Google
R1-2206250 Evaluation of AI/ML based beam management Rakuten Mobile, Inc
R1-2206317 Evaluation methodology and preliminary results on AI/ML for beam management OPPO
R1-2206393 Evaluation on AI/ML for beam management CATT
R1-2206512 Evaluation on AI/ML for beam management Lenovo
R1-2206522 Evaluation of AI and ML for beam management NVIDIA
R1-2206580 Evaluation for beam management Intel Corporation
R1-2206637 Evaluation on AI/ML for beam management Xiaomi
R1-2206677 Some discussions on evaluation on AI-ML for Beam management CAICT
R1-2206688 Evaluation on AI/ML for beam management China Telecom
R1-2206822 Evaluation on AI ML for Beam management Samsung
R1-2206876 Evaluation on AI/ML for beam management LG Electronics
R1-2206904 Discussion on evaluation on AI/ML for beam management CMCC
R1-2206938 Evaluation on AI/ML for beam management Ericsson
R1-2206970 Evaluation of ML for beam management Nokia, Nokia Shanghai Bell
R1-2206990 Evaluation on AI/ML for beam management MediaTek Inc.
R1-2207068 Evaluation on AI/ML for beam management CEWiT
R1-2207226 Evaluation on AI/ML for beam management Qualcomm Incorporated
R1-2207330 Evaluation on AI/ML for beam management Apple
R1-2207403 Discussion on evaluation on AI/ML for beam management NTT DOCOMO, INC.
R1-2207774 Feature lead summary #1 evaluation of AI/ML for beam management Moderator (Samsung)
From Monday session
Agreement
· The following update based on the agreements in RAN1 #109-e is adopted
Parameters | Values
UE distribution | · Other values are not precluded
UE Antenna Configuration | · Antenna setup and port layouts at UE: [1,2,1,4,2,1,1], 2 panels (left, right) · Other assumptions are not precluded. Companies to explain TXRU weights mapping. Companies to explain beam and panel selection. Companies to explain number of UE beams
R1-2207775 Feature lead summary #2 evaluation of AI/ML for beam management Moderator (Samsung)
From Wed session
Agreement
The following update based on the agreements in RAN1 #109-e is adopted:
Parameters | Values
UE Speed | · For spatial domain beam prediction: 3km/h · For time domain beam prediction: 3km/h (optional), 30km/h (baseline), 60km/h (optional), 90km/h (optional), 120km/h (optional) · Other values are not precluded
UE distribution | · For spatial domain beam prediction: Option 1: 80% indoor, 20% outdoor as in TR 38.901; Option 2: 100% outdoor · For time domain prediction: 100% outdoor
R1-2207776 Feature lead summary #3 evaluation of AI/ML for beam management Moderator (Samsung)
R1-2208104 Feature lead summary #3 evaluation of AI/ML for beam management Moderator (Samsung)
R1-2208105 Feature lead summary #4 evaluation of AI/ML for beam management Moderator (Samsung)
Agreement
Agreement
Agreement
Final summary in R1-2208106.
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2205754 Continued discussion on other aspects of AI/ML for beam management FUTUREWEI
R1-2205893 Discussion on AI/ML for beam management Huawei, HiSilicon
R1-2205968 Discussions on Sub-Use Cases in AI/ML for Beam Management TCL Communication
R1-2206035 Other aspects on AI/ML for beam management vivo
R1-2206071 Discussion on other aspects for AI beam management ZTE
R1-2206115 Considerations on AI/ML for beam management Sony
R1-2206167 Sub use cases and specification impact on AI/ML for beam management Fujitsu
R1-2206182 Discussion for other aspects on AI/ML for beam management InterDigital, Inc.
R1-2206198 On Enhancement of AI/ML based Beam Management Google
R1-2206251 Other aspects on AI/ML for beam management Rakuten Mobile, Inc
R1-2206318 Other aspects of AI/ML for beam management OPPO
R1-2206332 Beam management with AI/ML in high-speed railway scenarios BJTU
R1-2206394 Other aspects on AI/ML for beam management CATT
R1-2206472 Discussion on AI/ML for beam management NEC
R1-2206513 Further aspects of AI/ML for beam management Lenovo
R1-2206523 AI and ML for beam management NVIDIA
R1-2206581 Use-cases and specification for beam management Intel Corporation
R1-2206606 Discussion on other aspects on AIML for beam management Spreadtrum Communications
R1-2206638 Discussion on other aspects on AI/ML for beam management Xiaomi
R1-2206678 Discussions on AI-ML for Beam management CAICT
R1-2206823 Representative sub use cases for beam management Samsung
R1-2206877 Other aspects on AI/ML for beam management LG Electronics
R1-2206905 Discussion on other aspects on AI/ML for beam management CMCC
R1-2206940 Discussion on AI/ML for beam management Ericsson
R1-2206971 Other aspects on ML for beam management Nokia, Nokia Shanghai Bell
R1-2206991 Other aspects on AI/ML for beam management MediaTek Inc.
R1-2207227 Other aspects on AI/ML for beam management Qualcomm Incorporated
R1-2207331 Other aspects on AI/ML for beam management Apple
R1-2207404 Discussion on other aspects on AI/ML for beam management NTT DOCOMO, INC.
R1-2207506 Discussion on sub use cases of AI/ML beam management Panasonic
R1-2207551 Discussion on Performance Related Aspects of Codebook Enhancement with AI/ML Charter Communications, Inc
R1-2207590 Discussion on other aspects on AI/ML for beam management KT Corp.
R1-2207871 Summary#1 for other aspects on AI/ML for beam management Moderator (OPPO)
R1-2207872 Summary#2 for other aspects on AI/ML for beam management Moderator (OPPO)
Agreement
For the sub use case BM-Case1, support the following alternatives for further study:
· Alt.1: Set A and Set B are different (Set B is NOT a subset of Set A)
· Alt.2: Set B is a subset of Set A
· Note1: Set A is for DL beam prediction and Set B is for DL beam measurement.
· Note2: The beam patterns of Set A and Set B can be clarified by the companies.
Agreement
For the data collection for AI/ML model training (if supported), study the following aspects as a starting point for potential necessary specification impact:
· Signaling/configuration/measurement/report for data collection, e.g., signaling aspects related to assistance information (if supported), Reference signals
· Content/type of the collected data
· Other aspect(s) is not precluded
Agreement
At least for the sub use case BM-Case1 and BM-Case2, support both Alt.1 and Alt.2 for the study of AI/ML model training:
· Alt.1: AI/ML model training at NW side;
· Alt.2: AI/ML model training at UE side.
Note: Whether it is online or offline training is a separate discussion.
Agreement
For the sub use case BM-Case1 and BM-Case2, further study the following alternatives for the predicted beams:
· Alt.1: DL Tx beam prediction
· Alt.2: DL Rx beam prediction
· Alt.3: Beam pair prediction (a beam pair consists of a DL Tx beam and a corresponding DL Rx beam)
· Note1: DL Rx beam prediction may or may not have spec impact
R1-2207873 Summary#3 for other aspects on AI/ML for beam management Moderator (OPPO)
Agreement
For the sub use case BM-Case2, further study the following alternatives:
· Alt.1: Set A and Set B are different (Set B is NOT a subset of Set A)
· Alt.2: Set B is a subset of Set A (Set A and Set B are not the same)
· Alt.3: Set A and Set B are the same
· Note1: The beam pattern of Set A and Set B can be clarified by the companies.
Agreement
Regarding the model monitoring for BM-Case1 and BM-Case2, to investigate specification impacts from the following aspects
· Performance metric(s)
· Benchmark/reference for the performance comparison
· Signaling/configuration/measurement/report for model monitoring, e.g., signaling aspects related to assistance information (if supported), Reference signals
· Other aspect(s) is not precluded
R1-2207874 Summary#4 for other aspects on AI/ML for beam management Moderator (OPPO)
Agreement
In order to facilitate the AI/ML model inference, study the following aspects as a starting point:
· Enhanced or new configurations/UE reporting/UE measurement, e.g., Enhanced or new beam measurement and/or beam reporting
· Enhanced or new signaling for measurement configuration/triggering
· Signaling of assistance information (if applicable)
· Other aspect(s) is not precluded
Agreement
Regarding the sub use case BM-Case1 and BM-Case2, study the following alternatives for AI/ML output:
· Alt.1: Tx and/or Rx Beam ID(s) and/or the predicted L1-RSRP of the N predicted DL Tx and/or Rx beams
o E.g., N predicted beams can be the top-N predicted beams
· Alt.2: Tx and/or Rx Beam ID(s) of the N predicted DL Tx and/or Rx beams and other information
o FFS: other information (e.g., probability for the beam to be the best beam, the associated confidence, beam application time/dwelling time, Predicted Beam failure)
o E.g., N predicted beams can be the top-N predicted beams
· Alt.3: Tx and/or Rx Beam angle(s) and/or the predicted L1-RSRP of the N predicted DL Tx and/or Rx beams
o E.g., N predicted beams can be the top-N predicted beams
o FFS: details of Beam angle(s)
· FFS: how to select the N DL Tx and/or Rx beams (e.g., L1-RSRP higher than a threshold, a sum probability of being the best beams higher than a threshold, RSRP corresponding to the expected Tx and/or Rx beam direction(s))
· Note1: It is up to companies to provide other alternative(s)
· Note2: Beam ID is only used for discussion purpose
· Note3: All the outputs are “nominal” and only for discussion purpose
· Note4: The value of N is up to each company.
· Note5: All of the outputs in the above alternatives may vary based on whether the AI/ML model inference is at UE side or gNB side.
· Note 6: The Top-N beam IDs might have been derived via post-processing of the ML-model output
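For illustration of the output alternatives above, the following is a minimal Python sketch of Top-N beam selection from a nominal per-beam probability output, covering both a fixed N and the sum-probability-threshold option mentioned in the FFS; the probability-based input, the fixed N, the threshold value, and the function name select_top_n are illustrative assumptions, not agreed behaviour.

```python
import numpy as np

def select_top_n(beam_probs, n=4, prob_threshold=None):
    """Pick predicted (nominal) beam IDs from per-beam probabilities.

    beam_probs: 1-D array, one probability per candidate DL Tx beam ID.
    If prob_threshold is given, keep the smallest set of beams whose
    summed probability exceeds the threshold; otherwise keep the top-N.
    """
    order = np.argsort(beam_probs)[::-1]              # beams sorted by probability, best first
    if prob_threshold is not None:
        cum = np.cumsum(beam_probs[order])
        n = int(np.searchsorted(cum, prob_threshold) + 1)
    top_ids = order[:n]
    return top_ids, beam_probs[top_ids]

# Example: 32 candidate Tx beams with a dummy model output
probs = np.random.dirichlet(np.ones(32))
ids, p = select_top_n(probs, prob_threshold=0.9)
print(ids, p.sum())
```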
Including evaluation methodology, KPI, and performance evaluation results.
R1-2205894 Evaluation on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
R1-2205915 Evaluation on AI/ML for positioning accuracy enhancement PML
R1-2206036 Evaluation on AI/ML for positioning accuracy enhancement vivo
R1-2206072 Evaluation on AI for positioning enhancement ZTE
R1-2206168 Preliminary evaluation results and discussions of AI positioning accuracy enhancement Fujitsu
R1-2206199 On Evaluation of AI/ML based Positioning Google
R1-2206224 Evaluation method on AI/ML for positioning accuracy enhancement PML
R1-2206248 Evaluation of AI/ML for Positioning Accuracy Enhancement Ericsson
R1-2206252 Evaluation on AI/ML for positioning accuracy enhancement Rakuten Mobile, Inc
R1-2206319 Evaluation methodology and preliminary results on AI/ML for positioning accuracy enhancement OPPO
R1-2206395 Evaluation on AI/ML for positioning CATT
R1-2206514 Discussion on AI/ML Positioning Evaluations Lenovo
R1-2206524 Evaluation of AI and ML for positioning enhancement NVIDIA
R1-2206639 Evaluation on AI/ML for positioning accuracy enhancement Xiaomi
R1-2206679 Some discussions on evaluation on AI-ML for positioning accuracy enhancement CAICT
R1-2206689 Evaluation on AI/ML for positioning accuracy enhancement China Telecom
R1-2206824 Evaluation on AI ML for Positioning Samsung
R1-2206878 Evaluation on AI/ML for positioning accuracy enhancement LG Electronics
R1-2206906 Discussion on evaluation on AI/ML for positioning accuracy enhancement CMCC
R1-2206972 Evaluation of ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2207094 Evaluation on AI/ML for positioning accuracy enhancement InterDigital, Inc.
R1-2207123 Evaluation on AI/ML for positioning accuracy enhancement Fraunhofer IIS, Fraunhofer HHI
R1-2207228 Evaluation on AI/ML for positioning accuracy enhancement Qualcomm Incorporated
R1-2207862 Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Monday session
Agreement
For AI/ML-based positioning, both approaches below are studied and evaluated by RAN1:
· Direct AI/ML positioning
· AI/ML assisted positioning
Agreement
For AI/ML-based positioning, study impact from implementation imperfections.
Agreement
For evaluation of AI/ML based positioning, the model complexity is reported via the metric of “number of model parameters”.
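As an illustration of reporting this metric, a minimal PyTorch sketch is given below; the model architecture, layer sizes, and input dimensions are placeholders and do not correspond to any agreed AI/ML model.

```python
import torch.nn as nn

# Hypothetical positioning model, used only to illustrate the reporting metric.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 18 * 256, 2),   # placeholder: e.g., 18 TRPs x 256 CIR taps -> (x, y)
)

# "Number of model parameters" as agreed; FLOPs (computational complexity)
# would typically be obtained with a profiler and are not shown here.
num_params = sum(p.numel() for p in model.parameters())
print(f"number of model parameters: {num_params}")
```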
R1-2207863 Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Wed session
Agreement
To investigate the model generalization capability, at least the following aspect(s) are considered for the evaluation of AI/ML based positioning:
Note: It’s up to participating companies to decide whether to evaluate one aspect at a time, or evaluate multiple aspects at the same time.
Agreement
When providing evaluation results for AI/ML based positioning, participating companies are expected to describe data labelling details, including:
· Meaning of the label (e.g., UE coordinates; binary identifier of LOS/NLOS; ToA)
· Percentage of training data without label, if incomplete labeling is considered in the evaluation
· Imperfection of the ground truth labels, if any
Agreement
For evaluation of AI/ML based positioning, study the performance impact from availability of the ground truth labels (i.e., some training data may not have ground truth labels). The learning algorithm (e.g., supervised learning, semi-supervised learning, unsupervised learning) is reported by participating companies.
R1-2207864 Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
Agreement
For AI/ML-based positioning, for evaluation of the potential performance benefits of model finetuning, report at least the following:
· training dataset setting (e.g., training dataset size necessary for performing model finetuning)
· horizontal positioning accuracy (in meters) before and after model finetuning.
Agreement
For both direct AI/ML positioning and AI/ML assisted positioning, the following table is adopted for reporting the evaluation results.
Table X. Evaluation results for AI/ML model deployed on [UE or network]-side, [with or without] model generalization, [short model description]
Model input | Model output | Label | Clutter param | Dataset size (Training / test) | AI/ML complexity (Model complexity / Computational complexity) | Horizontal positioning accuracy at CDF=90% (meters), AI/ML
(entries to be filled in per evaluated model)
To report the following in table caption:
· Which side the model is deployed
· Model generalization investigation, if applied
· Short model description: e.g., CNN
Further info for the columns:
· Model input: input type and size
· Model output: output type and size
· Label: meaning of ground truth label; percentage of training data set without label if data labeling issue is investigated (default = 0%)
· Clutter parameter: e.g., {60%, 6m, 2m}
· Dataset size, both the size of training/validation dataset and the size of test dataset
· AI/ML complexity: both model complexity in terms of “number of model parameters”, and computational complexity in terms of FLOPs
· Horizontal positioning accuracy: the accuracy (in meters) of the AI/ML based method
Note: To report other simulation assumptions, if any.
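To illustrate the last column, a minimal NumPy sketch of the horizontal positioning accuracy at CDF=90% (i.e., the 90th percentile of the 2-D positioning error over the test set); the dummy prediction data and deployment-area size are illustrative only.

```python
import numpy as np

def horizontal_accuracy_cdf90(pred_xy, true_xy):
    """Horizontal positioning error (meters) at the 90% point of the CDF."""
    err = np.linalg.norm(pred_xy - true_xy, axis=1)   # 2-D Euclidean error per test sample
    return np.percentile(err, 90)

# Dummy test-set example (illustrative only)
rng = np.random.default_rng(1)
true_xy = rng.uniform(0, 120, size=(1000, 2))
pred_xy = true_xy + rng.normal(scale=0.5, size=(1000, 2))
print(f"horizontal accuracy @ CDF=90%: {horizontal_accuracy_cdf90(pred_xy, true_xy):.2f} m")
```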
R1-2208160 Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
Agreement
For evaluation of AI/ML assisted positioning, an intermediate performance metric of model output is reported.
· FFS: Detailed definition of the intermediate performance metric of the model output
Agreement
To investigate the model generalization capability, the following aspect is also considered for the evaluation of AI/ML based positioning:
· UE/gNB RX and TX timing error.
o The baseline non-AI/ML method may enable the Rel-17 enhancement features (e.g., UE Rx TEG, UE RxTx TEG).
Final summary in R1-2208161.
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2205895 Discussion on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
R1-2206037 Other aspects on AI/ML for positioning accuracy enhancement vivo
R1-2206073 Discussion on other aspects for AI positioning enhancement ZTE
R1-2206116 Considerations on AI/ML for positioning accuracy enhancement Sony
R1-2206169 Discussions on sub use cases and spec impacts for AIML for positioning accuracy enhancement Fujitsu
R1-2206200 On Enhancement of AI/ML based Positioning Google
R1-2206249 Other Aspects of AI/ML Based Positioning Enhancement Ericsson
R1-2206253 Other aspects on AI/ML based positioning Rakuten Mobile, Inc
R1-2206320 On sub use cases and other aspects of AI/ML for positioning accuracy enhancement OPPO
R1-2206396 Other aspects on AI/ML for positioning CATT
R1-2206477 Discussion on AI/ML for positioning accuracy enhancement NEC
R1-2206515 AI/ML Positioning use cases and Associated Impacts Lenovo
R1-2206525 AI and ML for positioning enhancement NVIDIA
R1-2206607 Discussion on other aspects on AIML for positioning accuracy enhancement Spreadtrum Communications
R1-2206640 Views on the other aspects of AI/ML-based positioning accuracy enhancement Xiaomi
R1-2206680 Discussions on AI-ML for positioning accuracy enhancement CAICT
R1-2206825 Representative sub use cases for Positioning Samsung
R1-2206879 Other aspects on AI/ML for positioning accuracy enhancement LG Electronics
R1-2206907 Discussion on other aspects on AI/ML for positioning accuracy enhancement CMCC
R1-2206973 Other aspects on ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2207093 Designs and potential specification impacts of AIML for positioning InterDigital, Inc.
R1-2207122 On potential specification impact of AI/ML for positioning Fraunhofer IIS, Fraunhofer HHI
R1-2207229 Other aspects on AI/ML for positioning accuracy enhancement Qualcomm Incorporated
R1-2207333 Other aspects on AI/ML for positioning accuracy enhancement Apple
R1-2207754 FL summary #1 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
R1-2207880 FL summary #2 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
From Wed session
Agreement
For characterization and performance evaluations of AI/ML based positioning accuracy enhancement, the following two AI/ML based positioning methods are selected.
Conclusion
Defer the discussion of prioritization of AI/ML positioning based on collaboration level until more progress on collaboration level discussion in agenda 9.2.1.
Agreement
Regarding data collection for AI/ML model training, to study and provide inputs on potential specification impact at least for the following aspects of AI/ML based positioning accuracy enhancement
Agreement
Regarding AI/ML model monitoring and update, to study and provide inputs on potential specification impact at least for the following aspects of AI/ML based positioning accuracy enhancement
R1-2208049 FL summary #3 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
Agreement
Study aspects in terms of potential benefit(s) and requirement(s)/specification impact(s) of AI/ML model training and inference in AI/ML for positioning accuracy enhancement considering at least
· UE-side or Network-side training
· UE-side or Network-side inference
o Note: model inference at both UE and network side is not precluded where proponent(s) are encouraged to clarify their AI/ML approaches
Note: companies are encouraged to clarify aspects of their proposed AI/ML approaches for positioning when AI/ML model training and inference are not performed at the same entity
Conclusion
To use the following terminology defined in TS 38.305 when describing their proposed positioning methods:
· UE-based
· UE-assisted/LMF-based
· NG-RAN node assisted
Note: companies are required to clarify their positioning method(s) when their approaches do not fall into one of the above.
Please refer to RP-221348 for detailed scope of the SI.
R1-2210690 Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface) Ad-hoc Chair (CMCC)
R1-2209974 Technical report for Rel-18 SI on AI and ML for NR air interface Qualcomm Incorporated
Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.
R1-2208365 Continued discussion on common AI/ML characteristics and operations FUTUREWEI
R1-2208428 Discussion on general aspects of AI/ML framework Huawei, HiSilicon
R1-2208520 Discussion on general aspects of common AI PHY framework ZTE
R1-2208546 Discussion on general aspects of AIML framework Spreadtrum Communications
R1-2208633 Discussions on AI/ML framework vivo
R1-2208739 Discussion on general aspects of AI/ML framework SEU
R1-2208768 Discussion on general aspects of AI/ML for NR air interface China Telecom
R1-2208849 On general aspects of AI/ML framework OPPO
R1-2208877 On General Aspects of AI/ML Framework Google
R1-2208898 General aspects on AI/ML framework LG Electronics
R1-2208908 Discussion on general aspects of AI/ML framework Ericsson
R1-2208966 General aspects of AI/ML framework for NR air interface CATT
R1-2209010 Discussion on general aspects of AI/ML framework Fujitsu
R1-2209046 Discussion on general aspects of AI/ML framework Intel Corporation
R1-2209088 General aspects of AI/ML framework AT&T
R1-2209094 Considerations on common AI/ML framework Sony
R1-2209119 General aspects of AI/ML framework Lenovo
R1-2209145 Discussion on general aspects of AI ML framework NEC
R1-2209229 Considerations on general aspects on AI-ML framework CAICT
R1-2209276 Views on the general aspects of AL/ML framework xiaomi
R1-2209327 Discussion on general aspects of AI/ML framework CMCC
R1-2209366 Further discussion on the general aspects of ML for Air-interface Nokia, Nokia Shanghai Bell
R1-2209389 Discussions on Common Aspects of AI/ML Framework TCL Communication
R1-2209399 Discussion on general aspects of AI/ML framework for NR air interface ETRI
R1-2209505 General aspects of AI/ML framework MediaTek Inc.
R1-2209575 General aspect of AI/ML framework Apple
R1-2209624 General aspects of AI and ML framework for NR air interface NVIDIA
R1-2209639 Discussion on general aspects of AI ML framework InterDigital, Inc.
R1-2209721 General aspects of AI ML framework and evaluation methodology Samsung
R1-2209764 Discussion on AI/ML framework Rakuten Mobile, Inc
R1-2209813 Discussion on general aspects of AI/ML framework Panasonic
R1-2209865 Discussion on general aspects of AI/ML framework KDDI Corporation
R1-2209895 Discussion on general aspects of AI/ML framework NTT DOCOMO, INC.
R1-2209975 General aspects of AI/ML framework Qualcomm Incorporated
[110bis-e-R18-AI/ML-01] – Taesang (Qualcomm)
Email discussion on general aspects of AI/ML by October 19
- Check points: October 14, October 19
R1-2210396 Summary#1 of General Aspects of AI/ML Framework Moderator (Qualcomm Incorporated) (rev of R1-2210375)
From Oct 11th GTW session
Working Assumption
· Define Level y-z boundary based on whether model delivery is transparent to 3GPP signalling over the air interface or not.
· Note: Procedures other than model transfer/delivery are decoupled from collaboration level y-z.
· Clarifying note: Level y includes cases without model delivery.
R1-2210472 Summary#2 of General Aspects of AI/ML Framework Moderator (Qualcomm Incorporated)
From Oct 13th GTW session
Agreement
Clarify Level x/y boundary as:
· Level x is implementation-based AI/ML operation without any dedicated AI/ML-specific enhancement (e.g., LCM related signalling, RS) collaboration between network and UE.
(Note: The AI/ML operation may rely on future specification not related to AI/ML collaboration. The AI/ML approaches can be used as baseline for performance evaluation for future releases.)
Agreement
Study LCM procedure on the basis that an AI/ML model has a model ID with associated information and/or model functionality at least for some AI/ML operations when the network needs to be aware of UE AI/ML models.
· FFS: Detailed discussion of model ID with associated information and/or model functionality.
· FFS: usage of model ID with associated information and/or model functionality based LCM procedure
· FFS: whether support of model ID
· FFS: the detailed applicable AI/ML operations
Agreement
For model selection, activation, deactivation, switching, and fallback at least for UE sided models and two-sided models, study the following mechanisms:
· Decision by the network
o Network-initiated
o UE-initiated, requested to the network
· Decision by the UE
o Event-triggered as configured by the network, UE’s decision is reported to network
o UE-autonomous, UE’s decision is reported to the network
o UE-autonomous, UE’s decision is not reported to the network
FFS: for network sided models
FFS: other mechanisms
R1-2210661 Summary#3 of General Aspects of AI/ML Framework Moderator (Qualcomm)
From Oct 18th GTW session
Conclusion
Data collection may be performed for different purposes in LCM, e.g., model training, model inference, model monitoring, model selection, model update, etc.; each may be done with different requirements and potential specification impact.
FFS: Model selection refers to the selection of an AI/ML model among models for the same functionality. (Exact terminology to be discussed/defined)
Agreement
Study potential specification impact needed to enable the development of a set of specific models, e.g., scenario-/configuration-specific and site-specific models, as compared to unified models.
Note: User data privacy needs to be preserved. The provision of assistance information may need to consider feasibility of disclosing proprietary information to the other side.
Agreement
Study the specification impact to support multiple AI models for the same functionality, at least including the following aspects:
· Procedure and assistance signaling for the AI model switching and/or selection
FFS: Model selection refers to the selection of an AI/ML model among models for the same functionality. (Exact terminology to be discussed/defined)
Agreement
Study AI/ML model monitoring for at least the following purposes: model activation, deactivation, selection, switching, fallback, and update (including re-training).
FFS: Model selection refers to the selection of an AI/ML model among models for the same functionality. (Exact terminology to be discussed/defined)
Agreement
Study at least the following metrics/methods for AI/ML model monitoring in lifecycle management per use case:
Note: Model monitoring metric calculation may be done at NW or UE
From Oct 19th GTW session
Agreement
Study performance monitoring approaches, considering the following model monitoring KPIs as general guidance
· Accuracy and relevance (i.e., how well does the given monitoring metric/methods reflect the model and system performance)
· Overhead (e.g., signaling overhead associated with model monitoring)
· Complexity (e.g., computation and memory cost for model monitoring)
· Latency (i.e., timeliness of monitoring result, from model failure to action, given the purpose of model monitoring)
· FFS: Power consumption
· Other KPIs are not precluded.
Note: Relevant KPIs may vary across different model monitoring approaches.
FFS: Discussion of KPIs for other LCM procedures
Agreement
Study various approaches for achieving good performance across different scenarios/configurations/sites, including
· Model generalization, i.e., using one model that is generalizable to different scenarios/configurations/sites
· Model switching, i.e., switching among a group of models where each model is for a particular scenario/configuration/site
o [Models in a group of models may have varying model structures, share a common model structure, or partially share a common sub-structure. Models in a group of models may have different input/output format and/or different pre-/post-processing.]
· Model update, i.e., using one model whose parameters are flexibly updated as the scenario/configuration/site that the device experiences changes over time. Fine-tuning is one example.
Agreement
The following are additionally considered for the initial list of common KPIs (if applicable) for evaluating performance benefits of AI/ML
Conclusion
This RAN1 study considers ML TOP/FLOP/MACs as KPIs for computational complexity for inference. However, there may be a disconnection between the actual complexity and the complexity evaluated using these KPIs due to platform dependency and implementation (hardware and software) optimization solutions, which are out of the scope of 3GPP.
Final summary in R1-2210708.
Including evaluation methodology, KPI, and performance evaluation results.
R1-2208366 Continued discussion on evaluation of AI/ML for CSI feedback enhancement FUTUREWEI
R1-2208429 Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2208521 Evaluation on AI for CSI feedback enhancement ZTE
R1-2208547 Discussion on evaluation on AIML for CSI feedback enhancement Spreadtrum Communications, BUPT
R1-2208634 Evaluation on AI/ML for CSI feedback enhancement vivo
R1-2208729 Evaluations on AI-CSI Ericsson
R1-2208769 Evaluation on AI/ML for CSI feedback enhancement China Telecom
R1-2208850 Evaluation methodology and preliminary results on AI/ML for CSI feedback enhancement OPPO
R1-2208878 On Evaluation of AI/ML based CSI Google
R1-2208899 Evaluation on AI/ML for CSI feedback enhancement LG Electronics
R1-2208967 Evaluation on AI/ML for CSI feedback enhancement CATT
R1-2209011 Evaluation on AI/ML for CSI feedback enhancement Fujitsu
R1-2209047 Evaluation for CSI feedback enhancements Intel Corporation
R1-2209120 Evaluation on AI/ML for CSI feedback Lenovo
R1-2209131 Discussion on evaluation methodology and KPI on AI/ML for CSI feedback enhancement Panasonic
R1-2209230 Some discussions on evaluation on AI-ML for CSI feedback CAICT
R1-2209277 Discussion on evaluation on AI/ML for CSI feedback enhancement xiaomi
R1-2209328 Discussion on evaluation on AI/ML for CSI feedback enhancement CMCC
R1-2209367 Evaluation of ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2209386 GRU for Historical CSI Prediction Sharp
R1-2209400 Evaluation on AI/ML for CSI feedback enhancement ETRI
R1-2209506 Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.
R1-2209548 Evaluation of AI/ML based methods for CSI feedback enhancement Fraunhofer IIS, Fraunhofer HHI
R1-2209576 Evaluation on AI/ML for CSI feedback Apple
R1-2209625 Evaluation of AI and ML for CSI feedback enhancement NVIDIA
R1-2210272 Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc. (rev of R1-2209640)
R1-2209652 Evaluation on AI/ML for CSI Feedback Enhancement Mavenir
R1-2209722 Evaluation on AI ML for CSI feedback enhancement Samsung
R1-2209794 Discussion on AI/ML for CSI feedback enhancement AT&T
R1-2209896 Discussion on evaluation on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2209976 Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated
[110bis-e-R18-AI/ML-02] – Yuan (Huawei)
Email discussion on evaluation on CSI feedback enhancement by October 19
- Check points: October 14, October 19
R1-2210365 Summary#1 of [110bis-e-R18-AI/ML-02] Moderator (Huawei)
From Oct 10th GTW session
Conclusion
For the evaluation of the AI/ML based CSI feedback enhancement, if SLS is adopted, the ‘Traffic model’ in the baseline of EVM is captured as follows:
Traffic model | At least FTP model 1 with packet size 0.5 Mbytes is assumed. Other options are not precluded.
R1-2210366 Summary#2 of [110bis-e-R18-AI/ML-02] Moderator (Huawei)
R1-2210367 Summary#3 of [110bis-e-R18-AI/ML-02] Moderator (Huawei)
From Oct 13th GTW session
Agreement
In the evaluation of the AI/ML based CSI feedback enhancement, for ‘Channel estimation’, if realistic DL channel estimation is considered, regarding how to calculate the intermediate KPI of CSI accuracy,
· Use the target CSI from ideal channel and use output CSI from the realistic channel estimation
o The target CSI from ideal channel equally applies to AI/ML based CSI feedback enhancement, and the baseline codebook
Note: there is no restriction on model training
R1-2210368 Summary#4 of [110bis-e-R18-AI/ML-02] Moderator (Huawei)
Decision: As per email decision posted on Oct 17th,
Agreement
In the evaluation of the AI/ML based CSI feedback enhancement, for “Baseline for performance evaluation” in the EVM table, Type I Codebook (if it outperforms Type II Codebook) can optionally be considered for comparison with AI/ML schemes, up to companies
· Note: Type II Codebook is baseline as agreed
Conclusion
If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, for the outdoor UEs, add O2I car penetration loss per TS 38.901 if the simulation assumes UEs inside vehicles.
Conclusion
If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, no explicit trajectory modeling is considered for evaluation
Conclusion
If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, and if the AI/ML model outputs multiple predicted instances, the intermediate KPI is calculated for each prediction instance
Conclusion
If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, both of the following types of AI/ML model input are considered for evaluations:
· Raw channel matrices
· Eigenvector(s)
Conclusion
If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, for the evaluation of CSI prediction:
· Companies are encouraged to report the assumptions on the observation window, including number/time distance of historic CSI/channel measurements as the input of the AI/ML model, and
· Companies to report the assumptions on the prediction window, including number/time distance of predicted CSI/channel as the output of the AI/ML model
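As an illustration of the observation/prediction window assumptions to be reported, a minimal NumPy sketch that slices a CSI/channel time series into model input/output pairs; the function name and the default window parameters are illustrative assumptions.

```python
import numpy as np

def make_prediction_samples(csi_seq, n_obs=5, obs_step=1, n_pred=1, pred_step=1):
    """Slice a time series of CSI/channel snapshots into (input, target) pairs.

    csi_seq: array (T, ...) of historic CSI ordered in time.
    n_obs/obs_step:   number and time spacing of observed snapshots (model input).
    n_pred/pred_step: number and time spacing of predicted snapshots (model output).
    """
    X, Y = [], []
    span_in = (n_obs - 1) * obs_step
    span_out = n_pred * pred_step
    for t in range(span_in, len(csi_seq) - span_out):
        X.append(csi_seq[t - span_in : t + 1 : obs_step])            # observation window
        Y.append(csi_seq[t + pred_step : t + span_out + 1 : pred_step])  # prediction window
    return np.stack(X), np.stack(Y)
```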
R1-2210369 Summary#5 of [110bis-e-R18-AI/ML-02] Moderator (Huawei)
From Oct 18th GTW session
Conclusion
If ideal DL channel estimation is considered (which is optional) for the evaluations of CSI feedback enhancement, there is no consensus on how to use the ideal channel estimation for dataset construction, or performance evaluation/inference.
· It is up to companies to report whether/how ideal channel is used in the dataset construction as well as performance evaluation/inference.
Conclusion
For the evaluation of Type 2 (Joint training of the two-sided model at network side and UE side, respectively), following procedure is considered as an example:
· For each FP/BP loop,
o Step 1: UE side generates the FP results (i.e., CSI feedback) based on the data sample(s), and sends the FP results to NW side
o Step 2: NW side reconstructs the CSI based on FP results, trains the CSI reconstruction part, and generates the BP information (e.g., gradients), which are then sent to UE side
o Step 3: UE side trains the CSI generation part based on the BP information from NW side
· Note: the dataset between UE side and NW side is aligned.
· Other Type 2 training approaches are not precluded and reported by companies
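For illustration of the Type 2 FP/BP loop above, a minimal PyTorch sketch emulating both sides in one process; the toy dimensions, MSE loss, Adam optimizers, and random placeholder data are illustrative assumptions and not part of the agreed evaluation methodology.

```python
import torch, torch.nn as nn

# Toy CSI dimension and feedback size; real values depend on the EVM assumptions.
D_IN, D_FB = 256, 48
ue_enc = nn.Sequential(nn.Linear(D_IN, 128), nn.ReLU(), nn.Linear(128, D_FB))  # UE-side CSI generation part
nw_dec = nn.Sequential(nn.Linear(D_FB, 128), nn.ReLU(), nn.Linear(128, D_IN))  # NW-side CSI reconstruction part
opt_ue = torch.optim.Adam(ue_enc.parameters(), 1e-3)
opt_nw = torch.optim.Adam(nw_dec.parameters(), 1e-3)

for step in range(100):                                  # one FP/BP loop per step
    target = torch.randn(32, D_IN)                       # placeholder for aligned target CSI samples

    # Step 1: UE side forward pass; the feedback tensor is what crosses to the NW side
    fb_ue = ue_enc(target)
    fb_nw = fb_ue.detach().requires_grad_(True)          # "transmitted" FP results

    # Step 2: NW side reconstructs, trains its part, and derives gradients w.r.t. the feedback
    loss = nn.functional.mse_loss(nw_dec(fb_nw), target)
    opt_nw.zero_grad(); loss.backward(); opt_nw.step()
    grad_fb = fb_nw.grad                                  # BP information sent back to the UE side

    # Step 3: UE side trains its CSI generation part using the received gradients
    opt_ue.zero_grad(); fb_ue.backward(grad_fb); opt_ue.step()
```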
Conclusion
For the evaluation of an example of Type 3 (Separate training at NW side and UE side), the following procedure is considered for the sequential training starting with NW side training (NW-first training):
· Step1: NW side trains the NW side CSI generation part (which is not used for inference) and the NW side CSI reconstruction part jointly
· Step2: After NW side training is finished, the NW side shares with the UE side a set of information (e.g., dataset) that the UE side uses to train the UE side CSI generation part
· Step3: UE side trains the UE side CSI generation part based on the received set of information
· Other Type 3 NW-first training approaches are not precluded and reported by companies
Conclusion
For the evaluation of an example of Type 3 (Separate training at NW side and UE side), the following procedure is considered for the sequential training starting with UE side training (UE-first training):
· Step1: UE side trains the UE side CSI generation part and the UE side CSI reconstruction part (which is not used for inference) jointly
· Step2: After UE side training is finished, the UE side shares with the NW side a set of information (e.g., dataset) that the NW side uses to train the NW side CSI reconstruction part
· Step3: NW side trains the NW side CSI reconstruction part based on the received set of information
· Other Type 3 UE-first training approaches are not precluded and reported by companies
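For illustration of the sequential Type 3 procedure (shown here for NW-first training; UE-first is the mirror image), a minimal PyTorch sketch in which the shared information is assumed to be an (input CSI, feedback) dataset; dimensions, losses, and training lengths are illustrative only.

```python
import torch, torch.nn as nn

D_IN, D_FB = 256, 48
nw_enc = nn.Sequential(nn.Linear(D_IN, 128), nn.ReLU(), nn.Linear(128, D_FB))  # NW-side generation part (training only)
nw_dec = nn.Sequential(nn.Linear(D_FB, 128), nn.ReLU(), nn.Linear(128, D_IN))  # NW-side reconstruction part
ue_enc = nn.Sequential(nn.Linear(D_IN, 128), nn.ReLU(), nn.Linear(128, D_FB))  # UE-side generation part

csi = torch.randn(4096, D_IN)                     # placeholder training CSI samples

# Step 1: NW side trains its generation + reconstruction parts jointly
opt = torch.optim.Adam(list(nw_enc.parameters()) + list(nw_dec.parameters()), 1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(nw_dec(nw_enc(csi)), csi)
    opt.zero_grad(); loss.backward(); opt.step()

# Step 2: NW side shares a dataset of (input CSI, produced feedback) with the UE side
with torch.no_grad():
    shared_fb = nw_enc(csi)

# Step 3: UE side trains its own generation part to reproduce the shared feedback
opt_ue = torch.optim.Adam(ue_enc.parameters(), 1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(ue_enc(csi), shared_fb)
    opt_ue.zero_grad(); loss.backward(); opt_ue.step()
```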
Working assumption
In the evaluation of the AI/ML based CSI feedback enhancement, if SGCS is adopted as the intermediate KPI for the rank>1 situation, companies to ensure the correct calculation of SGCS and to avoid ordering issues among the output eigenvectors
· Note: Eventual KPI can still be used to compare the performance
Agreement
For the evaluation of the AI/ML based CSI feedback enhancement, if the SGCS is adopted as the intermediate KPI as part of the ‘Evaluation Metric’ for rank>1 cases, at least Method 3 is adopted; FFS whether to additionally adopt a down-selected metric between Method 1 and Method 2.
· Method 1: Average over all layers
  $\mathrm{SGCS} = E\!\left\{ \frac{1}{N}\sum_{i=1}^{N} \frac{1}{K}\sum_{j=1}^{K} \frac{\left|\mathbf{v}_{i,j}^{H}\hat{\mathbf{v}}_{i,j}\right|^{2}}{\lVert\mathbf{v}_{i,j}\rVert^{2}\,\lVert\hat{\mathbf{v}}_{i,j}\rVert^{2}} \right\}$
· Method 2: Weighted average over all layers
  $\mathrm{SGCS} = E\!\left\{ \frac{1}{N}\sum_{i=1}^{N} \sum_{j=1}^{K} \frac{\lambda_{i,j}}{\sum_{k=1}^{K}\lambda_{i,k}} \cdot \frac{\left|\mathbf{v}_{i,j}^{H}\hat{\mathbf{v}}_{i,j}\right|^{2}}{\lVert\mathbf{v}_{i,j}\rVert^{2}\,\lVert\hat{\mathbf{v}}_{i,j}\rVert^{2}} \right\}$
  where $\mathbf{v}_{i,j}$ is the jth eigenvector of the target CSI at resource unit i and K is the rank, $\hat{\mathbf{v}}_{i,j}$ is the jth output vector of the output CSI of resource unit i, N is the total number of resource units, $E\{\cdot\}$ denotes the average operation over multiple samples, and $\lambda_{i,j}$ is an eigenvalue of the channel covariance matrix corresponding to $\mathbf{v}_{i,j}$.
· Method 3: SGCS is separately calculated for each layer (e.g., for K layers, K SGCS values are derived respectively, and comparison is performed per layer)
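For illustration of Methods 1–3, a minimal NumPy sketch computing SGCS from target and output eigenvectors; the array shapes and the function signature are illustrative assumptions consistent with the definitions above.

```python
import numpy as np

def sgcs(target_v, output_v, eigvals=None, method=3):
    """Squared generalized cosine similarity for rank-K CSI.

    target_v, output_v: complex arrays of shape (N, K, P) -- N resource units,
    K layers, P ports (eigenvectors of the target / output CSI).
    eigvals: shape (N, K), needed only for the weighted average (Method 2).
    """
    num = np.abs(np.sum(np.conj(target_v) * output_v, axis=-1)) ** 2
    den = (np.linalg.norm(target_v, axis=-1) ** 2) * (np.linalg.norm(output_v, axis=-1) ** 2)
    per_unit_layer = num / den                         # shape (N, K)
    if method == 1:                                    # plain average over layers and resource units
        return per_unit_layer.mean()
    if method == 2:                                    # eigenvalue-weighted average over layers
        w = eigvals / eigvals.sum(axis=1, keepdims=True)
        return (w * per_unit_layer).sum(axis=1).mean()
    return per_unit_layer.mean(axis=0)                 # Method 3: one SGCS value per layer
```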
Agreement
In CSI compression using two-sided model use case, evaluate and study quantization of CSI feedback, including at least the following aspects:
· Quantization non-aware training
· Quantization-aware training
· Quantization methods including uniform vs non-uniform quantization, scalar versus vector quantization, and associated parameters, e.g., quantization resolution, etc.
· How to use the quantization methods
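As an illustration of the quantization-aware vs. quantization-non-aware distinction, a minimal PyTorch sketch of a uniform scalar quantizer with a straight-through estimator; the bit width, value range, and function names are illustrative assumptions.

```python
import torch

def uniform_quantize(x, bits=2, lo=-1.0, hi=1.0):
    """Uniform scalar quantizer applied element-wise to the CSI feedback vector."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    return torch.clamp(torch.round((x - lo) / step), 0, levels - 1) * step + lo

def quantize_ste(x, bits=2):
    """Quantization-aware training trick: quantize in the forward pass,
    pass gradients straight through in the backward pass."""
    return x + (uniform_quantize(x, bits) - x).detach()

# Quantization-non-aware training would simply train with the float feedback
# and apply uniform_quantize() only at inference/test time.
```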
R1-2210752 Summary#6 of [110bis-e-R18-AI/ML-02] Moderator (Huawei)
From Oct 19th GTW session
Agreement
For evaluating the performance impact of ground-truth quantization in the CSI compression, study high resolution quantization methods for ground-truth CSI, e.g., including at least the following options
· High resolution scalar quantization, e.g., Float32, Float16, etc.
o FFS select one of the scalar quantization resolutions as baseline
· High resolution codebook quantization, e.g., R16 Type II-like method with new parameters
o FFS new parameters
· Other quantization methods are not precluded
Agreement
For the evaluation of the potential performance benefits of model fine-tuning of CSI feedback enhancement which is optionally considered by companies, the following case is taken
· The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model is updated based on a fine-tuning dataset different than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B. After that, the AI/ML model is tested on a different dataset than Scenario#A/Configuration#A, e.g., subject to Scenario#B/Configuration#B, Scenario#A/Configuration#B
· Company to report the fine-tuning dataset setting (e.g., size of dataset) and the improvement of performance
Agreement
For the evaluation of an example of Type 3 (Separate training at NW side and UE side), the following cases are considered for evaluations:
· Case 1 (baseline): Aligned AI/ML model structure between NW side and UE side
· Case 2: Not aligned AI/ML model structures between NW side and UE side
o Companies to report the AI/ML structures for the UE part model and the NW part model, e.g., different backbone (e.g., CNN, Transformer, etc.), or same backbone but different structure (e.g., number of layers)
· FFS different sizes of datasets between NW side and UE side
· FFS aligned/different quantization/dequantization methods between NW side and UE side
· FFS: whether/how to evaluate the case where the input/output types and/or pre/post-processing are not aligned between NW part model and UE part model
Agreement
For the evaluation of Type 2 (Joint training of the two-sided model at network side and UE side, respectively), the following evaluation cases are considered for multi-vendors,
· Case 1 (baseline): Type 2 training between one NW part model to one UE part model
· Case 2: Type 2 training between one NW part model and M>1 separate UE part models
o Companies to report the AI/ML structures for the UE part model and the NW part model
o FFS Companies to report the dataset used at UE part models, e.g., whether the same or different dataset(s) are used among M UE part models
· Case 3: Type 2 training between one UE part model and N>1 separate NW part models
o Companies to report the AI/ML structures for the UE part model and the NW part model
o FFS Companies to report the dataset used at NW part models, e.g., whether the same or different dataset(s) are used among N NW part models
· FFS N NW part models to M UE part models
· FFS different quantization/dequantization methods between NW and UE
· FFS: whether/how to evaluate the case where the input/output types and/or pre/post-processing are not aligned between NW part model and UE part model
· FFS: companies to report the training order of UE-NW pair(s) in case of M UE part models and/or N NW part models
· FFS: whether/how to report overhead
Agreement
For the evaluation of the AI/ML based CSI compression sub use cases, at least the following types of AI/ML model input (for CSI generation part)/output (for CSI reconstruction part) are considered for evaluations
· Raw channel matrix, e.g., channel matrix with the dimensions of Tx, Rx, and frequency unit
o Companies to report whether the raw channel is in the frequency domain or the delay domain
· Precoding matrix
o Companies to report whether the precoding matrix is a group of eigenvector(s) or an eType II-like reporting (i.e., eigenvectors with angular-delay domain representation)
· Other input/output types are not precluded
· Companies to report the combination of input (for CSI generation part) and output (for CSI reconstruction part),
o Note: the input and output may be of different types
Conclusion
If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, for SLS, spatial consistency procedure A with 50 m decorrelation distance from TR 38.901 is used (if not used, companies should state this in their simulation assumptions)
· UE velocity vector is assumed as fixed over time in Procedure A modeling
Agreement
In the evaluation of the AI/ML based CSI feedback enhancement, for the calculation of intermediate KPI, the following is considered as the granularity of the frequency unit for averaging operation
· For 15kHz SCS: For 10MHz bandwidth: 4 RBs; for 20MHz bandwidth: 8 RBs
· For 30kHz SCS: For 10MHz bandwidth: 2 RBs; for 20MHz bandwidth: 4 RBs
· Note: Other frequency unit granularity is not precluded and reported by companies
Final summary in R1-2210753.
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2208367 Continued discussion on other aspects of AI/ML for CSI feedback enhancement FUTUREWEI
R1-2208430 Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2208522 Discussion on other aspects for AI CSI feedback enhancement ZTE
R1-2208548 Discussion on other aspects on AIML for CSI feedback Spreadtrum Communications
R1-2208635 Other aspects on AI/ML for CSI feedback enhancement vivo
R1-2208728 Discussions on AI-CSI Ericsson
R1-2208770 Discussion on AI/ML for CSI feedback enhancement China Telecom
R1-2208851 On sub use cases and other aspects of AI/ML for CSI feedback enhancement OPPO
R1-2208879 On Enhancement of AI/ML based CSI Google
R1-2208900 Other aspects on AI/ML for CSI feedback enhancement LG Electronics
R1-2208968 Discussion on AI/ML for CSI feedback enhancement CATT
R1-2209012 Views on specification impact for CSI compression with two-sided model Fujitsu
R1-2209048 Use-cases and specification for CSI feedback Intel Corporation
R1-2209095 Considerations on CSI measurement enhancements via AI/ML Sony
R1-2209121 Further aspects of AI/ML for CSI feedback Lenovo
R1-2209161 Discussion on AI/ML for CSI feedback enhancement Panasonic
R1-2209231 Discussions on AI-ML for CSI feedback CAICT
R1-2209278 Discussion on specification impact for AI/ML based CSI feedback xiaomi
R1-2209329 Discussion on other aspects on AI/ML for CSI feedback enhancement CMCC
R1-2209368 Other aspects on ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2209390 Discussions on Sub-Use Cases in AI/ML for CSI Feedback Enhancement TCL Communication
R1-2209401 Discussion on other aspects on AI/ML for CSI feedback enhancement ETRI
R1-2209424 Discussion on AI/ML for CSI feedback enhancement NEC
R1-2209507 Other aspects on AI/ML for CSI feedback enhancement MediaTek Inc.
R1-2209577 Other aspects on AI/ML for CSI Apple
R1-2209626 AI and ML for CSI feedback enhancement NVIDIA
R1-2209641 Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.
R1-2209723 Representative sub use cases for CSI feedback enhancement Samsung
R1-2209795 Discussion on AI/ML for CSI feedback enhancement AT&T
R1-2209897 Discussion on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2209977 Other aspects on AI/ML for CSI feedback enhancement Qualcomm Incorporated
[110bis-e-R18-AI/ML-03] – Huaning (Apple)
Email discussion on other aspects on AI/ML for CSI feedback enhancement by October 19
- Check points: October 14, October 19
R1-2210319 Summary #1 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
R1-2210320 Summary #2 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From Oct 14th GTW session
Conclusion
Joint CSI prediction and CSI compression is NOT selected as one representative sub-use case for CSI feedback enhancement use case.
Conclusion
CSI accuracy enhancement based on traditional codebook design is NOT selected as one representative sub-use case for CSI feedback enhancement use case.
Conclusion
Temporal-spatial-frequency domain CSI compression using two-sided model is NOT selected as one representative sub-use case for CSI enhancement use case.
· Up to each company to report whether past CSI is used as model input for spatial-frequency domain CSI compression
R1-2210321 Summary #3 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
Presented in Oct 18th GTW session
R1-2210611 Summary #4 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From Oct 19th GTW session
Agreement
In CSI compression using two-sided model use case, study potential specification impact for performance monitoring including:
Agreement
In CSI compression using two-sided model use case, further study potential specification impact related to assistance signaling and procedure for model performance monitoring.
Agreement
In CSI compression using two-sided model use case, further study potential specification impact related to potential co-existence and fallback mechanisms between AI/ML-based CSI feedback mode and legacy non-AI/ML-based CSI feedback mode.
Agreement
In CSI compression using two-sided model use case, further study at least the following options for performance monitoring metrics/methods:
· Intermediate KPIs as monitoring metrics (e.g., SGCS)
· Eventual KPIs (e.g., Throughput, hypothetical BLER, BLER, NACK/ACK).
· Legacy CSI based monitoring: schemes using additional legacy CSI reporting
· Other monitoring solutions, at least including the following option:
o Input or Output data based monitoring: such as data drift between training dataset and observed dataset and out-of-distribution detection
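As an illustration of the input-data-based monitoring option, a minimal sketch using a two-sample Kolmogorov–Smirnov test to flag drift between the training and observed distributions of one scalar input feature; the choice of test, feature, and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alarm(train_feature, observed_feature, p_threshold=0.01):
    """Flag possible data drift between the training and observed input
    distributions of one scalar feature (e.g., per-sample CSI energy)."""
    stat, p_value = ks_2samp(train_feature, observed_feature)
    return p_value < p_threshold, stat, p_value

# Dummy example with shifted observed statistics (likely to trigger the alarm)
rng = np.random.default_rng(0)
train_e = rng.normal(0.0, 1.0, 5000)
obs_e = rng.normal(0.4, 1.2, 500)
print(drift_alarm(train_e, obs_e))
```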
Agreement
In CSI compression using two-sided model use case, further study at least use cases of the following potential specification impact on quantization method alignment between CSI generation part at UE and CSI reconstruction part at gNB:
· Alignment of the quantization/dequantization method and the feedback message size between Network and UE
Including evaluation methodology, KPI, and performance evaluation results.
R1-2208368 Continued discussion on evaluation of AI/ML for beam management FUTUREWEI
R1-2208431 Evaluation on AI/ML for beam management Huawei, HiSilicon
R1-2208523 Evaluation on AI for beam management ZTE
R1-2208549 Evaluation on AI for beam management Spreadtrum Communications
R1-2208636 Evaluation on AI/ML for beam management vivo
R1-2210240 Discussion for evaluation on AI/ML for beam management InterDigital, Inc. (rev of R1-2208682)
R1-2208771 Evaluation on AI/ML for beam management China Telecom
R1-2208852 Evaluation methodology and preliminary results on AI/ML for beam management OPPO
R1-2210327 On Evaluation of AI/ML based Beam Management Google (rev of R1-2208880)
R1-2208901 Evaluation on AI/ML for beam management LG Electronics
R1-2208906 Evaluation on AI/ML for beam management Ericsson
R1-2208969 Evaluation on AI/ML for beam management CATT
R1-2209013 Evaluation on AI/ML for beam management Fujitsu
R1-2209049 Evaluations for AI/ML beam management Intel Corporation
R1-2209122 Evaluation on AI/ML for beam management Lenovo
R1-2209232 Some discussions on evaluation on AI-ML for Beam management CAICT
R1-2209279 Evaluation on AI/ML for beam management xiaomi
R1-2209330 Discussion on evaluation on AI/ML for beam management CMCC
R1-2209369 Evaluation of ML for beam management Nokia, Nokia Shanghai Bell
R1-2209508 Evaluation on AI/ML for beam management MediaTek Inc.
R1-2209578 Evaluation on AI/ML for beam management Apple
R1-2209613 Evaluation of AI/ML based beam management Rakuten Symphony
R1-2209627 Evaluation of AI and ML for beam management NVIDIA
R1-2209724 Evaluation on AI ML for Beam management Samsung
R1-2209898 Discussion on evaluation on AI/ML for beam management NTT DOCOMO, INC.
R1-2209978 Evaluation on AI/ML for beam management Qualcomm Incorporated
R1-2210107 Evaluation on AI/ML for beam management CEWiT
[110bis-e-R18-AI/ML-04] – Feifei (Samsung)
Email discussion on evaluation on AI/ML for beam management by October 19
- Check points: October 14, October 19
R1-2210359 Feature lead summary #0 evaluation of AI/ML for beam management Moderator (Samsung)
From Oct 10th GTW session
Working Assumption
The following cases are considered for verifying the generalization performance of an AI/ML model over various scenarios/configurations as a starting point:
· Case 1: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model performs inference/test on a dataset from the same Scenario#A/Configuration#A
· Case 2: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model performs inference/test on a different dataset than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B
· Case 3: The AI/ML model is trained based on training dataset constructed by mixing datasets from multiple scenarios/configurations including Scenario#A/Configuration#A and a different dataset than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B, and then the AI/ML model performs inference/test on a dataset from a single Scenario/Configuration from the multiple scenarios/configurations, e.g., Scenario#A/Configuration#A, Scenario#B/Configuration#B, Scenario#A/Configuration#B.
o Note: Companies to report the ratio for dataset mixing
o Note: number of the multiple scenarios/configurations can be larger than two
· FFS the detailed set of scenarios/configurations
· FFS other cases for generalization verification, e.g.,
o Case 2A: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model is updated based on a fine-tuning dataset different than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B. After that, the AI/ML model is tested on a different dataset than Scenario#A/Configuration#A, e.g., subject to Scenario#B/Configuration#B, Scenario#A/Configuration#B.
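For illustration of how the training datasets for Cases 1–3 could be assembled, a minimal NumPy sketch; the scenario key "A", the equal default mixing ratio, and the function name are illustrative assumptions (the mixing ratio is to be reported by companies per the note above).

```python
import numpy as np

def build_training_set(datasets, case, mix_ratio=None, rng=None):
    """Assemble the training set for the generalization cases.

    datasets: dict of {scenario_name: (X, y)} arrays; "A" plays the role of
    Scenario#A/Configuration#A. Cases 1 and 2 train on scenario A only
    (they differ in the test set); Case 3 mixes several scenarios."""
    rng = rng or np.random.default_rng(0)
    if case in (1, 2):
        return datasets["A"]
    names = list(datasets)
    mix_ratio = mix_ratio or [1.0 / len(names)] * len(names)   # ratio to be reported
    parts_X, parts_y = [], []
    for name, r in zip(names, mix_ratio):
        X, y = datasets[name]
        sel = rng.choice(len(X), size=int(r * len(X)), replace=False)
        parts_X.append(X[sel]); parts_y.append(y[sel])
    return np.concatenate(parts_X), np.concatenate(parts_y)
```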
Conclusion
For system performance related KPI (if supported) evaluation (model inference), companies report either of the following traffic models:
· Option 1: Full buffer
· Option 2: FTP model with detail assumptions (e.g., FTP model 1, FTP model 3)
Agreement
· BS antenna configuration:
o antenna setup and port layouts at gNB: (4, 8, 2, 1, 1, 1, 1), (dV, dH) = (0.5, 0.5) λ
o Other assumptions are not precluded
· BS Tx power for evaluation:
o 40dBm (baseline)
o Other values (e.g. 34 dBm) are not precluded and can be reported by companies
· UE antenna configuration (Clarification of agreement in RAN 1 #110):
o antenna setup and port layouts at UE: (1, 4, 2, 1, 2, 1, 1), 2 panels (left, right)
o Other assumptions are not precluded
Agreement
· For the evaluation of both BM-Case1 and BM-Case2, consider 32 or 64 downlink Tx beams (maximum number of available beams) at NW side.
o Other values, e.g., 256, etc., are not precluded and can be reported by companies.
· For the evaluation of both BM-Case1 and BM-Case2, consider 4 or 8 downlink Rx beams (maximum number of available beams) per UE panel at UE side.
o Other values, e.g., 16, etc., are not precluded and can be reported by companies.
R1-2210360 Feature lead summary #1 evaluation of AI/ML for beam management Moderator (Samsung)
From Oct 14th GTW session
Agreement
The options to evaluate beam prediction accuracy (%):
· Top-1 (%): the percentage of “the Top-1 genie-aided beam is Top-1 predicted beam”
· Top-K/1 (%): the percentage of “the Top-1 genie-aided beam is one of the Top-K predicted beams”
· Top-1/K (%) (Optional): the percentage of “the Top-1 predicted beam is one of the Top-K genie-aided beams”
· Where K >1 and values can be reported by companies.
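For illustration of the agreed options, a minimal NumPy sketch computing Top-1 (%) and Top-K/1 (%) over a test set (Top-1/K follows analogously by swapping the roles of predicted and genie-aided beams); the array layout is an illustrative assumption.

```python
import numpy as np

def beam_prediction_accuracy(pred_rank, genie_best, k=4):
    """Top-1 (%) and Top-K/1 (%) over a test set.

    pred_rank:  (N, B) predicted beam IDs, best first, per test sample.
    genie_best: (N,)   Top-1 genie-aided beam ID per test sample.
    """
    top1 = np.mean(pred_rank[:, 0] == genie_best)
    topk1 = np.mean([g in row[:k] for row, g in zip(pred_rank, genie_best)])
    return 100 * top1, 100 * topk1
```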
Agreement
For DL Tx beam prediction, the definition of Top-1 genie-aided Tx beam considers the following options
· Option A, the Top-1 genie-aided Tx beam is the Tx beam that results in the largest L1-RSRP over all Tx and Rx beams
· Option B, the Top-1 genie-aided Tx beam is the Tx beam that results in the largest L1-RSRP over all Tx beams with specific Rx beam(s)
o FFS on specific Rx beam(s)
o Note: specific Rx beams are subset of all Rx beams
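As an illustration of Options A and B, a minimal NumPy sketch selecting the Top-1 genie-aided Tx beam from a per-sample L1-RSRP matrix; the matrix layout and function name are illustrative assumptions.

```python
import numpy as np

def genie_top1_tx_beam(l1_rsrp, rx_subset=None):
    """Top-1 genie-aided DL Tx beam from an L1-RSRP matrix of shape (n_tx, n_rx).

    Option A: search over all Tx and Rx beams (rx_subset=None).
    Option B: restrict the search to specific Rx beam(s) via rx_subset."""
    if rx_subset is not None:
        l1_rsrp = l1_rsrp[:, rx_subset]
    return int(np.argmax(l1_rsrp.max(axis=1)))
```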
R1-2210361 Feature lead summary #2 evaluation of AI/ML for beam management Moderator (Samsung)
From Oct 18th GTW session
Agreement
For DL Tx-Rx beam pair prediction, the definition of Top-1 genie-aided Tx-Rx beam pair considers the following options:
· Option A: The Tx-Rx beam pair that results in the largest L1-RSRP over all Tx and Rx beams
· Option B: The Tx-Rx beam pair that results in the largest L1-RSRP over all Tx beams with specific Rx beam(s)
o FFS on specific Rx beam(s)
o Note: specific Rx beams are subset of all Rx beams
R1-2210362 Feature lead summary #3 evaluation of AI/ML for beam management Moderator (Samsung)
From Oct 19th GTW session
Agreement
· Companies to report the selected scenarios/configurations for generalization verification
· Note: other approaches for achieving good generalization performance for AI/ML-based schemes are not precluded.
Working Assumption
For both BM-Case1 and BM-Case 2, the following table is adopted as working assumption for reporting the evaluation results.
Table X. Evaluation results for [BM-Case1 or BM-Case2] without model generalization for [DL Tx beam prediction or Tx-Rx beam pair prediction or Rx beam prediction]
Columns: one column per company (Company A, ……). Rows:
· Assumptions
o Number of [beams/beam pairs] in Set A
o Number of [beams/beam pairs] in Set B
o Baseline scheme
· AI/ML model input/output
o Model input
o Model output
· Data Size
o Training
o Testing
· AI/ML model
o [Short model description]
o Model complexity
o Computational complexity
· Evaluation results [With AI/ML / baseline]
o [Beam prediction accuracy (%)]: [KPI A], [KPI B] …
o [L1-RSRP Diff]: [Average L1-RSRP diff] …
o [System performance]: [RS overhead Reduction (%)/ RS overhead], [UCI report], [UPT] …
To report the following in table caption:
· Which side the model is deployed
Further info for the columns:
· Assumptions
o Number of beams/beam pairs in Set A
o Number of beams/beam pairs in Set B
o Baseline scheme, e.g., Option 1 (exhaustive beam sweeping), Option 2 (based on measurements of Set B), or baseline described by companies
o Other assumptions can be added later based on agreements
· Model input: input type(s)
· Model output: output type(s), e.g., the best DL Tx and/or Rx beam ID, and/or L1-RSRPs of N beams (pairs)
· Dataset size, both the size of training/validation dataset and the size of test dataset
· Short model description: e.g., CNN, LSTM
· Model complexity, in terms of “number of model parameters” and/or size (e.g., Mbyte)
· Computational complexity in terms of FLOPs
· Evaluation results: agreed KPIs, with AI/ML / with baseline scheme (if applicable)
· Note: To report other simulation assumptions, if any.
Agreement
· Study the following options on the selection of Set B of beams (pairs)
Working assumption
Agreement
· At least for BM-Case 2, consider the following assumptions for evaluation
o Periodicity of time instance for each measurement/report in T1:
§ 20ms, 40ms, 80ms, [100ms], 160ms, [960ms]
§ Other values can be reported by companies.
o Number of time instances for measurement/report in T1 can be reported by companies.
o Time instance(s) for prediction can be reported by companies.
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2208369 Continued discussion on other aspects of AI/ML for beam management FUTUREWEI
R1-2208432 Discussion on AI/ML for beam management Huawei, HiSilicon
R1-2208524 Discussion on other aspects for AI beam management ZTE
R1-2208550 Discussion on other aspects on AIML for beam management Spreadtrum Communications
R1-2208637 Other aspects on AI/ML for beam management vivo
R1-2208683 Discussion for other aspects on AI/ML for beam management InterDigital, Inc.
R1-2208853 Other aspects of AI/ML for beam management OPPO
R1-2208881 On Enhancement of AI/ML based Beam Management Google
R1-2208902 Other aspects on AI/ML for beam management LG Electronics
R1-2208907 Discussion on AI/ML for beam management Ericsson
R1-2208970 Discussion on AI/ML for beam management CATT
R1-2209014 Sub use cases and specification impact on AI/ML for beam management Fujitsu
R1-2209050 Use-cases and Specification Impact for AI/ML beam management Intel Corporation
R1-2209096 Consideration on AI/ML for beam management Sony
R1-2209123 Further aspects of AI/ML for beam management Lenovo
R1-2209146 Discussion on AI/ML for beam management NEC
R1-2209233 Discussions on AI-ML for Beam management CAICT
R1-2209280 Discussion on other aspects on AI/ML for beam management xiaomi
R1-2209331 Discussion on other aspects on AI/ML for beam management CMCC
R1-2209370 Other aspects on ML for beam management Nokia, Nokia Shanghai Bell
R1-2209391 Discussions on Sub-Use Cases in AI/ML for Beam Management TCL Communication
R1-2209402 Discussion on other aspects on AI/ML for beam management ETRI
R1-2209509 Other aspects on AI/ML for beam management MediaTek Inc.
R1-2209579 Other aspects on AI/ML for beam management Apple
R1-2209614 Discussion on AI/ML for beam management Rakuten Symphony
R1-2209628 AI and ML for beam management NVIDIA
R1-2209725 Representative sub use cases for beam management Samsung
R1-2209899 Discussion on AI/ML for beam management NTT DOCOMO, INC.
R1-2209979 Other aspects on AI/ML for beam management Qualcomm Incorporated
R1-2210085 Discussion on sub use cases of AI/ML beam management Panasonic
R1-2210086 Discussion on other aspects on AI/ML for beam management KT Corp.
[110bis-e-R18-AI/ML-05] – Zhihua (OPPO)
Email discussion on other aspects of AI/ML for beam management by October 19
- Check points: October 14, October 19
R1-2210353 Summary#1 for other aspects on AI/ML for beam management Moderator (OPPO)
R1-2210354 Summary#2 for other aspects on AI/ML for beam management Moderator (OPPO)
From Oct 14th GTW session
Conclusion
For AI/ML based beam management, RAN1 has no consensus to support studying any other sub use case in addition to BM-Case1 and BM-Case2.
Note: this conclusion is independent of the discussion on the alternatives of AI/ML model inputs for BM-Case1 and BM-Case2.
Conclusion
For the sub use case BM-Case1 and BM-Case2, Set B is a set of beams whose measurements are taken as inputs of the AI/ML model,
R1-2210355 Summary#3 for other aspects on AI/ML for beam management Moderator (OPPO)
R1-2210356 Summary#4 for other aspects on AI/ML for beam management Moderator (OPPO)
Presented in Oct 18th GTW session
R1-2210357 Summary#5 for other aspects on AI/ML for beam management Moderator (OPPO)
From Oct 19th GTW session
Agreement
For BM-Case1 with a UE-side AI/ML model, study the potential specification impact of L1 signaling to report the following information of AI/ML model inference to NW
· The beam(s) that is based on the output of AI/ML model inference
· FFS: Predicted L1-RSRP corresponding to the beam(s)
· FFS: other information
Agreement
For BM-Case2 with a UE-side AI/ML model, study the potential specification impact of L1 signaling to report the following information of AI/ML model inference to NW
· The beam(s) of N future time instance(s) that is based on the output of AI/ML model inference
o FFS: value of N
· FFS: Predicted L1-RSRP corresponding to the beam(s)
· Information about the timestamp corresponding to the reported beam(s)
o FFS: explicit or implicit
· FFS: other information
Agreement
For BM-Case1 and BM-Case2 with a UE-side AI/ML model, study the following alternatives for model monitoring with potential down-selection:
· Alt1. UE-side Model monitoring
o UE monitors the performance metric(s)
o UE makes decision(s) of model selection/activation/deactivation/switching/fallback operation
· Alt2. NW-side Model monitoring
o NW monitors the performance metric(s)
o NW makes decision(s) of model selection/activation/deactivation/switching/fallback operation
· Alt3. Hybrid model monitoring
o UE monitors the performance metric(s)
o NW makes decision(s) of model selection/activation/deactivation/switching/fallback operation
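For illustration only, the sketch below shows how Alt1 (UE-side model monitoring) could be realized in principle: the UE compares a monitoring metric against a threshold and decides between keeping the model active and falling back. The choice of metric (L1-RSRP prediction error of the selected beam) and the 3 dB threshold are assumptions, not agreed values.
```python
# Illustrative sketch of Alt1 (UE-side model monitoring) for BM-Case1/2.
# The monitoring metric (L1-RSRP gap of the selected beam) and the 3 dB
# threshold are assumptions for illustration, not agreed values.

def ue_side_monitoring(predicted_rsrp_dbm, measured_rsrp_dbm, threshold_db=3.0):
    """Return an LCM decision string based on the prediction error."""
    error_db = abs(predicted_rsrp_dbm - measured_rsrp_dbm)
    if error_db <= threshold_db:
        return "keep model active"
    return "fallback to non-AI/ML beam management"

if __name__ == "__main__":
    print(ue_side_monitoring(-78.0, -79.5))   # small error -> keep
    print(ue_side_monitoring(-78.0, -90.0))   # large error -> fallback
```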
Decision: As per email decision posted on Oct 19th,
Working Assumption
For BM-Case1 and BM-Case2 with a network-side AI/ML model, study the following L1 beam reporting enhancement for AI/ML model inference
· UE to report the measurement results of more than 4 beams in one reporting instance
· Other L1 reporting enhancements can be considered
Agreement
For BM-Case1 and BM-Case2 with a network-side AI/ML model, study the NW-side model monitoring:
· NW monitors the performance metric(s) and makes decision(s) of model selection/activation/ deactivation/switching/ fallback operation
Agreement
Regarding NW-side model monitoring for a network-side AI/ML model of BM-Case1 and BM-Case2, study the potential specification impacts from the following aspects
· Beam measurement and report for model monitoring
· Note: This may or may not have specification impact.
Final summary in R1-2210764.
Including evaluation methodology, KPI, and performance evaluation results.
R1-2208399 Evaluation of AI/ML for Positioning Accuracy Enhancement Ericsson
R1-2208433 Evaluation on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
R1-2208525 Evaluation on AI for positioning enhancement ZTE
R1-2208638 Evaluation on AI/ML for positioning accuracy enhancement vivo
R1-2208772 Evaluation on AI/ML for positioning accuracy enhancement China Telecom
R1-2208854 Evaluation methodology and preliminary results on AI/ML for positioning accuracy enhancement OPPO
R1-2208882 On Evaluation of AI/ML based Positioning Google
R1-2208903 Evaluation on AI/ML for positioning accuracy enhancement LG Electronics
R1-2208971 Evaluation on AI/ML for positioning enhancement CATT
R1-2209015 Discussions on evaluation of AI positioning accuracy enhancement Fujitsu
R1-2209124 Discussion on AI/ML Positioning Evaluations Lenovo
R1-2209234 Some discussions on evaluation on AI-ML for positioning accuracy enhancement CAICT
R1-2209281 Evaluation on AI/ML for positioning accuracy enhancement xiaomi
R1-2209332 Discussion on evaluation on AI/ML for positioning accuracy enhancement CMCC
R1-2209371 Evaluation of ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2209484 Evaluation on AI/ML for positioning accuracy enhancement InterDigital, Inc.
R1-2209510 Evaluation on AI/ML for positioning accuracy enhancement MediaTek Inc.
R1-2209537	Evaluation on AI/ML for positioning accuracy enhancement	Fraunhofer IIS, Fraunhofer HHI
R1-2209580 Evaluation on AI/ML for positioning accuracy enhancement Apple
R1-2209615 Evaluation of AI/ML based positioning accuracy enhancement Rakuten Symphony
R1-2209629 Evaluation of AI and ML for positioning enhancement NVIDIA
R1-2209726 Evaluation on AI ML for Positioning Samsung
R1-2209980 Evaluation on AI/ML for positioning accuracy enhancement Qualcomm Incorporated
[110bis-e-R18-AI/ML-06] – Yufei (Ericsson)
Email discussion on evaluation on AI/ML for positioning accuracy enhancement by October 19
- Check points: October 14, October 19
R1-2210385 Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2210386 Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Oct 14th GTW session
Agreement
To investigate the model generalization capability, the following aspect is also considered for the evaluation of AI/ML based positioning:
· InF scenarios, e.g., training dataset from one InF scenario (e.g., InF-DH), test dataset from a different InF scenario (e.g., InF-HH)
Agreement
For both direct AI/ML positioning and AI/ML assisted positioning, if fine-tuning is not evaluated, the template agreed in RAN1#110 is updated to the following for reporting the evaluation results.
Table X. Evaluation results for AI/ML model deployed on [UE or network]-side, [short model description]
| Model input | Model output | Label | Settings (e.g., drops, clutter param, mix) | | Dataset size | | AI/ML complexity | | Horizontal pos. accuracy at CDF=90% (m) |
| | | | Train | Test | Train | Test | Model complexity | Computation complexity | AI/ML |
| | | | | | | | | | |
Agreement
For both direct AI/ML positioning and AI/ML assisted positioning, if fine-tuning is evaluated, the template agreed in RAN1#110 is updated to the following for reporting the evaluation results.
Table X. Evaluation results for AI/ML model deployed on [UE or network]-side, [short model description]
| Model input | Model output | Label | Settings (e.g., drops, clutter param, mix) | | | Dataset size | | | AI/ML complexity | | Horizontal pos. accuracy at CDF=90% (m) |
| | | | Train | Fine-tune | Test | Train | Fine-tune | Test | Model complexity | Computation complexity | AI/ML |
| | | | | | | | | | | | |
Agreement
For AI/ML-assisted positioning, companies report which construction is applied in their evaluation:
· Single-TRP construction: the input of the ML model is the channel measurement between the target UE and a single TRP, and the output of the ML model is for the same pair of UE and TRP.
· Multi-TRP construction: the input of the ML model contains N sets of channel measurements between the target UE and N (N>1) TRPs, and the output of the ML model contains N sets of values, one for each of the N TRPs.
Note: For a measurement (e.g., RSTD) which is a relative value between a given TRP and a reference TRP, the TRP in “single-TRP” and “multi-TRP” refers to the given TRP only.
Note: For single-TRP construction, companies report whether they consider same model for all TRPs or N different models for TRPs
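A minimal sketch of the two constructions in terms of input/output dimensions is given below; the per-TRP measurement shape (Nport, Nt) and the example dimensions are assumptions for illustration, and the placeholder functions stand in for actual AI/ML models.
```python
# Illustrative shapes for the two AI/ML-assisted positioning constructions.
# A per-TRP channel measurement of shape (Nport, Nt) is assumed here only for
# illustration (cf. the CIR/PDP model-input discussion in this agenda item).
import numpy as np

N_TRP, N_PORT, N_T = 18, 2, 256          # example dimensions, not agreed values

def single_trp_model(x_single):
    """Input: one TRP's measurement (Nport, Nt) -> one output (e.g., ToA) for that TRP."""
    return float(x_single.mean())         # placeholder for a real ML model

def multi_trp_model(x_all):
    """Input: N TRPs' measurements (N_TRP, Nport, Nt) -> N outputs, one per TRP."""
    return x_all.mean(axis=(1, 2))        # placeholder: one value per TRP

if __name__ == "__main__":
    x = np.random.randn(N_TRP, N_PORT, N_T)
    # Single-TRP construction: the same (or N different) model(s) applied per TRP.
    toa_single = [single_trp_model(x[i]) for i in range(N_TRP)]
    # Multi-TRP construction: one model consuming all N TRPs jointly.
    toa_multi = multi_trp_model(x)
    print(len(toa_single), toa_multi.shape)   # 18, (18,)
```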
Conclusion
For evaluation of AI/ML based positioning, suspend the discussion on intra-site (or zone-specific) variations until concepts and channel model construction not in TR38.901 (e.g., “intra-site” or “zone”) are clarified under AI 9.2.1.
Note: An individual company can still submit evaluation results for intra-site variation.
Conclusion
For evaluation of AI/ML based positioning, the sampling period is selected by proponent companies. Each company report the sampling period used in their evaluation.
Agreement
For evaluation of AI/ML assisted positioning, the following intermediate performance metrics are used:
· LOS classification accuracy, if the model output includes LOS/NLOS indicator of hard values, where the LOS/NLOS indicator is generated for a link between UE and TRP;
· Timing estimation accuracy (expressed in meters), if the model output includes timing estimation (e.g., ToA, RSTD).
· Angle estimation accuracy (in degrees), if the model output includes angle estimation (e.g., AoA, AoD).
· Companies provide info on how LOS classification accuracy and timing/angle estimation accuracy are estimated, if the ML output is a soft value that represents a probability distribution (e.g., probability of LOS, probability of timing, probability of angle, mean and variance of timing/angle, etc.)
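The sketch below illustrates how the two hard-output intermediate metrics could be computed from evaluation data; reporting the 90th percentile of the absolute timing error (converted to meters via the speed of light) is an illustrative choice rather than an agreed definition.
```python
# Illustrative computation of the agreed intermediate metrics for AI/ML
# assisted positioning: LOS/NLOS classification accuracy (hard labels) and
# timing estimation accuracy expressed in meters. Using the 90th percentile
# of the absolute error is an illustrative choice.
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def los_classification_accuracy(pred_los, true_los):
    pred_los, true_los = np.asarray(pred_los), np.asarray(true_los)
    return float((pred_los == true_los).mean())

def timing_accuracy_meters(pred_toa_s, true_toa_s, percentile=90):
    err_m = np.abs(np.asarray(pred_toa_s) - np.asarray(true_toa_s)) * C
    return float(np.percentile(err_m, percentile))

if __name__ == "__main__":
    print(los_classification_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))        # 0.75
    print(timing_accuracy_meters([3.34e-9, 10.1e-9], [3.0e-9, 10.0e-9]))  # sub-metre scale
```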
R1-2210387 Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2210388 Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2210650 Summary #5 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Oct 18th GTW session
Conclusion
For evaluation of AI/ML based positioning, it’s up to each company to take into account the channel estimation error in their evaluation. Companies describe the details of their simulation assumption, e.g., realistic or ideal channel estimation, error models, receiver algorithms.
R1-2210651 Summary #6 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2210652 Final Summary of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Oct 19th GTW session
Agreement
For AI/ML assisted positioning, when single-TRP construction is used for the AI/ML model, companies report at least the AI/ML complexity (Model complexity, Computation complexity) for N TRPs, which are used to determine the position of a target UE.
Table. Model complexity and computation complexity to support N TRPs for a target UE
| | Model complexity to support N TRPs | Computation complexity to process N TRPs |
| Single-TRP, same model for N TRPs | When the model is at UE-side, where FFS: if the model is at network-side | Where |
| Single-TRP, N models for N TRPs | When the model is at UE-side, where FFS: if the model is at network-side | Where |
| Multi-TRP (i.e., one model for N TRPs) | Where | Where |
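The agreed calculation is given by the table above; purely as a rough illustration of the distinction between the three constructions, the sketch below uses one plausible accounting, assuming a single per-TRP model with P parameters and F FLOPs per inference. It is not the agreed formula set.
```python
# Hedged illustration only: one plausible accounting of complexity scaling
# with N TRPs, assuming a single model has P parameters and costs F FLOPs
# per inference. Not the agreed calculation from the table above.

def complexity_for_n_trps(P, F, N, construction):
    """Return (model_complexity, computation_complexity) to support/process N TRPs."""
    if construction == "single-trp-same-model":
        return P, N * F          # one stored model, run N times
    if construction == "single-trp-n-models":
        return N * P, N * F      # N stored models, each run once
    if construction == "multi-trp":
        return P, F              # one (typically larger) model, run once for all N TRPs
    raise ValueError(construction)

if __name__ == "__main__":
    print(complexity_for_n_trps(P=1_000_000, F=2_000_000, N=18,
                                construction="single-trp-same-model"))
```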
Agreement
For AI/ML based positioning, if an InF scenario different from InF-DH is evaluated for the model generalization capability, the selected parameters (e.g., clutter parameters) are compliant with TR 38.901 Table 7.2-4 (Evaluation parameters for InF).
· Note: In TR 38.857 Table 6.1-1 (Parameters common to InF scenarios), InF-SH scenario uses the clutter parameter {20%, 2m, 10m} which is compliant with TR 38.901.
Agreement
For the model input used in evaluations of AI/ML based positioning, if time-domain channel impulse response (CIR) or power delay profile (PDP) is used as model input in the evaluation, companies report the input dimension NTRP * Nport * Nt, where NTRP is the number of TRPs, Nport is the number of transmit/receive antenna port pairs, Nt is the number of time domain samples.
· Note: CIR and PDP may have different dimensions.
· Note: Companies provide details on their assumption on how PDP is constructed and how (if applicable) it is mapped to Nt samples.
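As an illustration of this reporting convention, the sketch below builds a PDP model input from a complex CIR and reports the dimension NTRP * Nport * Nt; the example dimensions are placeholders, and the truncation to the first Nt time-domain samples follows the related RAN1#111 agreement recorded later in this document.
```python
# Illustrative construction of a PDP model input from a complex CIR and the
# reported dimension N_TRP * N_port * N_t. Example dimensions are placeholders.
import numpy as np

N_TRP, N_PORT, N_T = 18, 2, 256                  # reported dimensions (illustrative)

def cir_to_pdp(cir, n_t=N_T):
    """cir: complex array (N_TRP, N_port, L); keep first n_t taps, return power."""
    truncated = cir[..., :n_t]                   # first Nt consecutive time-domain samples
    return np.abs(truncated) ** 2                # power delay profile

if __name__ == "__main__":
    cir = (np.random.randn(N_TRP, N_PORT, 512)
           + 1j * np.random.randn(N_TRP, N_PORT, 512))
    pdp = cir_to_pdp(cir)
    print("Model input dimension (N_TRP, N_port, N_t):", pdp.shape)
```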
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2208400 Other Aspects of AI/ML Based Positioning Enhancement Ericsson
R1-2208434 Discussion on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
R1-2208526 Discussion on other aspects for AI positioning enhancement ZTE
R1-2208551 Discussion on other aspects on AIML for positioning accuracy enhancement Spreadtrum Communications
R1-2208639 Other aspects on AI/ML for positioning accuracy enhancement vivo
R1-2208855 On sub use cases and other aspects of AI/ML for positioning accuracy enhancement OPPO
R1-2208883 On Enhancement of AI/ML based Positioning Google
R1-2208904 Other aspects on AI/ML for positioning accuracy enhancement LG Electronics
R1-2208972 Discussion on AI/ML for positioning enhancement CATT
R1-2209016 Discussions on sub use cases and specification impacts for AIML positioning Fujitsu
R1-2209097 Discussion on AI/ML for positioning accuracy enhancement Sony
R1-2209125 AI/ML Positioning use cases and Associated Impacts Lenovo
R1-2209147 Other aspects on AI/ML for positioning NEC
R1-2209235 Discussions on AI-ML for positioning accuracy enhancement CAICT
R1-2209282 Views on the other aspects of AI/ML-based positioning accuracy enhancement xiaomi
R1-2209333 Discussion on other aspects on AI/ML for positioning accuracy enhancement CMCC
R1-2209372 Other aspects on ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2209485 Designs and potential specification impacts of AIML for positioning InterDigital, Inc.
R1-2209538	On potential specification impact of AI/ML for positioning	Fraunhofer IIS, Fraunhofer HHI
R1-2209581 Other aspects on AI/ML for positioning accuracy enhancement Apple
R1-2209616 Discussion on AI/ML for positioning accuracy enhancement Rakuten Symphony
R1-2209630 AI and ML for positioning enhancement NVIDIA
R1-2209727 Representative sub use cases for Positioning Samsung
R1-2209900 Discussion on AI/ML for positioning accuracy enhancement NTT DOCOMO, INC.
R1-2209981 Other aspects on AI/ML for positioning accuracy enhancement Qualcomm Incorporated
[110bis-e-R18-AI/ML-07] – Huaming (vivo)
Email discussion on other aspects of AI/ML for positioning accuracy enhancement by October 19
- Check points: October 14, October 19
R1-2210308 FL summary #1 of [110bis-e-R18-AI/ML-07] Moderator (vivo)
R1-2210427 FL summary #2 of [110bis-e-R18-AI/ML-07] Moderator (vivo)
From Oct 14th GTW session
Conclusion
· Defer the discussion of prioritization of online/offline training for AI/ML based positioning until more progress on online vs. offline training discussion in agenda 9.2.1.
Agreement
· Study and provide inputs on benefit(s) and potential specification impact at least for the following cases of AI/ML based positioning accuracy enhancement
o Case 1: UE-based positioning with UE-side model, direct AI/ML or AI/ML assisted positioning
o Case 2a: UE-assisted/LMF-based positioning with UE-side model, AI/ML assisted positioning
o Case 2b: UE-assisted/LMF-based positioning with LMF-side model, direct AI/ML positioning
o Case 3a: NG-RAN node assisted positioning with gNB-side model, AI/ML assisted positioning
o Case 3b: NG-RAN node assisted positioning with LMF-side model, direct AI/ML positioning
Agreement
Regarding AI/ML model indication[/configuration], to study and provide inputs on potential specification impact at least for the following aspects on conditions/criteria of AI/ML model for AI/ML based positioning accuracy enhancement
· Validity conditions, e.g., applicable area/[zone/]scenario/environment and time interval, etc.
· Model capability, e.g., positioning accuracy quality and model inference latency
· Conditions and requirements, e.g., required assistance signalling and/or reference signals configurations, dataset information
· Note: other aspects are not precluded
Agreement
Regarding AI/ML model monitoring for AI/ML based positioning, to study and provide inputs on potential specification impact for the following aspects
R1-2210565 FL summary #3 of [110bis-e-R18-AI/ML-07] Moderator (vivo)
Presented in Oct 18th GTW session
R1-2210669 FL summary #4 of [110bis-e-R18-AI/ML-07] Moderator (vivo)
From Oct 19th GTW session
Agreement
Regarding data collection for AI/ML model training for AI/ML based positioning, at least for each of the agreed cases (Case 1 to Case 3b)
Please refer to RP-221348 for detailed scope of the SI.
R1-2212845 Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface) Ad-hoc Chair (CMCC)
Endorsed and contents incorporated below.
[111-R18-AI/ML] – Taesang (Qualcomm)
To be used for sharing updates on online/offline schedule, details on what is to be discussed in online/offline sessions, tdoc number of the moderator summary for online session, etc
R1-2212106 Technical report for Rel-18 SI on AI and ML for NR air interface Qualcomm Incorporated
Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.
R1-2210840 Continued discussion on common AI/ML characteristics and operations FUTUREWEI
R1-2210884 Discussion on general aspects of AI/ML framework Huawei, HiSilicon
R1-2210997 Discussions on AI/ML framework vivo
R1-2211056 Discussion on general aspects of common AI PHY framework ZTE
R1-2211072 Discussion on general aspects of AI/ML framework Fujitsu
R1-2211123 On General Aspects of AI/ML Framework Google
R1-2211188 General aspects of AI/ML framework CATT
R1-2211215 Discussion on general aspects of AI/ML framework KDDI Corporation
R1-2211226 Discussion on general aspects of AIML framework Spreadtrum Communications
R1-2211287 Discussion on general aspects of AI/ML framework Ericsson
R1-2211354 Views on the general aspects of AI/ML framework xiaomi
R1-2211392 Discussion on general aspects of AI/ML framework Intel Corporation
R1-2211477 On general aspects of AI/ML framework OPPO
R1-2211508 Discussions on Common Aspects of AI/ML Framework TCL Communication
R1-2211555 Discussion on general aspects of AI/ML framework for NR air interface ETRI
R1-2211606 Considerations on common AI/ML framework Sony
R1-2211671 Discussion on general aspects of AI/ML framework CMCC
R1-2211714 General aspects of AI and ML framework for NR air interface NVIDIA
R1-2211729 Discussion on general aspects of AI/ML framework InterDigital, Inc.
R1-2211772 General aspects of AI/ML framework Lenovo
R1-2211804 Discussion on general aspect of AI/ML framework Apple
R1-2211866 General aspects on AI/ML framework LG Electronics
R1-2211910 Considerations on general aspects on AI-ML framework CAICT
R1-2211933 Discussion on general aspects of AI/ML framework Panasonic
R1-2211934 General aspects of AI/ML framework AT&T
R1-2211976 Discussion on general aspects of AI/ML framework NTT DOCOMO, INC.
R1-2212035 General aspects of AI ML framework and evaluation methodology Samsung
R1-2212107 General aspects of AI/ML framework Qualcomm Incorporated
R1-2212225 General aspects of AI/ML framework MediaTek Inc.
R1-2212312 Discussion on AI/ML Model Life Cycle Management Rakuten Mobile, Inc
R1-2212326 Further discussion on the general aspects of ML for Air-interface Nokia, Nokia Shanghai Bell
R1-2212355 Discussion on general aspects of AI ML framework NEC
R1-2212654 Summary#1 of General Aspects of AI/ML Framework Moderator (Qualcomm)
From Nov 14th session
Agreement
For UE-part/UE-side models, study the following mechanisms for LCM procedures:
R1-2212655 Summary#2 of General Aspects of AI/ML Framework Moderator (Qualcomm)
From Nov 15th session
Working Assumption
Consider “proprietary model” and “open-format model” as two separate model format categories for RAN1 discussion:
| Proprietary-format models | ML models of vendor-/device-specific proprietary format, from 3GPP perspective. NOTE: An example is a device-specific binary executable format |
| Open-format models | ML models of specified format that are mutually recognizable across vendors and allow interoperability, from 3GPP perspective |
From RAN1 discussion viewpoint, RAN1 may assume that:
· Proprietary-format models are not mutually recognizable across vendors, hide model design information from other vendors when shared.
· Open-format models are mutually recognizable between vendors, do not hide model design information from other vendors when shared
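Purely to illustrate the distinction, the sketch below contrasts a toy "open-format" model description that another vendor could parse with an opaque proprietary binary; neither representation is implied by the working assumption.
```python
# Illustration only: contrasting the two model-format categories in the
# working assumption. The JSON "open format" and the opaque byte blob are
# stand-ins; no specific 3GPP or industry format is implied.
import json

# Open-format model: structure and parameters are mutually recognizable.
open_format_model = json.dumps({
    "layers": [{"type": "dense", "in": 64, "out": 16, "activation": "relu"},
               {"type": "dense", "in": 16, "out": 4, "activation": "linear"}],
    "weights_uri": "..."                      # parameters shared in a specified format
})

# Proprietary-format model: a device-specific binary; design info is hidden.
proprietary_format_model = bytes.fromhex("7f454c46")  # e.g., an executable header

print(json.loads(open_format_model)["layers"][0]["type"])    # recognizable by others
print(proprietary_format_model)                              # opaque to other vendors
```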
R1-2212656 Summary#3 of General Aspects of AI/ML Framework Moderator (Qualcomm)
Presented in Nov 16th session.
R1-2212657 Summary#4 of General Aspects of AI/ML Framework Moderator (Qualcomm)
From Nov 17th session
Working Assumption
| Terminology | Description |
| Model identification | A process/method of identifying an AI/ML model for the common understanding between the NW and the UE. Note: The process/method of model identification may or may not be applicable. Note: Information regarding the AI/ML model may be shared during model identification. |

| Terminology | Description |
| Functionality identification | A process/method of identifying an AI/ML functionality for the common understanding between the NW and the UE. Note: Information regarding the AI/ML functionality may be shared during functionality identification. FFS: granularity of functionality |
Note: whether and how to indicate Functionality will be discussed separately.
R1-2212658 Final summary of General Aspects of AI/ML Framework Moderator (Qualcomm)
From Nov 18th session
Working Assumption
| Terminology | Description |
| Model update | Process of updating the model parameters and/or model structure of a model |
| Model parameter update | Process of updating the model parameters of a model |
Final summary in R1-2213003.
Including evaluation methodology, KPI, and performance evaluation results.
R1-2210841 Continued discussion on evaluation of AI/ML for CSI feedback enhancement FUTUREWEI
R1-2210885 Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2210954 Evaluation of AI-CSI Ericsson
R1-2210998 Evaluation on AI/ML for CSI feedback enhancement vivo
R1-2211057 Evaluation on AI for CSI feedback enhancement ZTE
R1-2211073 Evaluation on AI/ML for CSI feedback enhancement Fujitsu
R1-2211124 On Evaluation of AI/ML based CSI Google
R1-2211189 Evaluation methodology and results on AI/ML for CSI feedback enhancement CATT
R1-2211227 Discussion on evaluation on AIML for CSI feedback enhancement Spreadtrum Communications, BUPT
R1-2211258 Evaluation on AI/ML for CSI feedback enhancement Comba
R1-2211355 Discussion on evaluation on AI/ML for CSI feedback enhancement xiaomi
R1-2211393 Evaluation for CSI feedback enhancements Intel Corporation
R1-2211478 Evaluation methodology and preliminary results on AI/ML for CSI feedback enhancement OPPO
R1-2211525 Evaluation on AI/ML for CSI feedback enhancement China Telecom
R1-2211556 Evaluation on AI/ML for CSI feedback enhancement ETRI
R1-2211589 Evaluation of AI/ML based methods for CSI feedback enhancement Fraunhofer IIS, Fraunhofer HHI
R1-2211672 Discussion on evaluation on AI/ML for CSI feedback enhancement CMCC
R1-2211716 Evaluation of AI and ML for CSI feedback enhancement NVIDIA
R1-2211731 Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.
R1-2211773 Evaluation on AI/ML for CSI feedback Lenovo
R1-2211805 Evaluation for AI/ML based CSI feedback enhancement Apple
R1-2211867 Evaluation on AI/ML for CSI feedback enhancement LG Electronics
R1-2211892 Model Quantization for CSI feedback Sharp
R1-2211911 Some discussions on evaluation on AI-ML for CSI feedback CAICT
R1-2211977 Discussion on evaluation on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2212036 Evaluation on AI ML for CSI feedback enhancement Samsung
R1-2212108 Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated
R1-2212226 Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.
R1-2212327 Evaluation of ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2212452 Discussion on AI/ML for CSI feedback enhancement AT&T
R1-2212669 Summary#1 for CSI evaluation of [111-R18-AI/ML] Moderator (Huawei)
R1-2212670 Summary#2 for CSI evaluation of [111-R18-AI/ML] Moderator (Huawei)
From Nov 16th session
Working Assumption
The following initial template is considered for companies to report the evaluation results of AI/ML-based CSI compression without generalization/scalability verification
· FFS the description and results for generalization/scalability may need a separate table
· FFS the value or range of payload size X/Y/Z
· FFS the description and results for different training types/cases may need a separate table
· FFS: training related overhead
Table X. Evaluation results for CSI compression without model generalization/scalability, [traffic type], [Max rank value], [RU] [training type/case]
| | | Source 1 | … |
| CSI generation part | AI/ML model backbone | | |
| | Pre-processing | | |
| | Post-processing | | |
| | FLOPs/M | | |
| | Number of parameters/M | | |
| | [Storage /Mbytes] | | |
| CSI reconstruction part | AI/ML model backbone | | |
| | [Pre-processing] | | |
| | [Post-processing] | | |
| | FLOPs/M | | |
| | Number of parameters/M | | |
| | [Storage /Mbytes] | | |
| Common description | Input type | | |
| | Output type | | |
| | Quantization /dequantization method | | |
| Dataset description | Train/k | | |
| | Test/k | | |
| | Ground-truth CSI quantization method | | |
| | [Other assumptions/settings agreed to be reported] | | |
| Benchmark | | | |
| Intermediate KPI I#1 of benchmark, [layer 1] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| Intermediate KPI I#1 of benchmark, [layer 2] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| Gain for intermediate KPI I#1, [layer 1] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| Gain for intermediate KPI#1, [layer 2] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| … | | | |
| Intermediate KPI I#2 of benchmark, [layer 1] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| Intermediate KPI I#2 of benchmark, [layer 2] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| Gain for intermediate KPI I#2, [layer 1] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| Gain for intermediate KPI#2, [layer 2] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| … | | | |
| Gain for Mean UPT | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| Gain for 5% UPT | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| … | | | |
| FFS others | | | |
Agreement
For the evaluation of an example of Type 3 (Separate training at NW side and UE side), the following evaluation cases for sequential training are considered for multi-vendors
· Case 1 (baseline): Type 3 training between one NW part model and one UE part model
o Note 1: Case 1 can be naturally applied to the NW-first training case where 1 NW part model to M>1 separate UE part models
§ Companies to report the dataset used between the NW part model and the UE part model, e.g., whether dataset for training UE part model is the same or a subset of the dataset for training NW part model
o Note 2: Case 1 can be naturally applied to the UE-first training case where 1 UE part model to N>1 separate NW part models
§ Companies to report the dataset used between the NW part model and the UE part model, e.g., whether dataset for training NW part model is the same or a subset of the dataset for training UE part model
o Companies to report the AI/ML structures for the combination(s) of UE part model and NW part model, which can be the same or different
o FFS: different quantization methods between NW side and UE side
· Case 2: For UE-first training, Type 3 training between one NW part model and M>1 separate UE part models
o Note: Case 2 can be also applied to the M>1 UE part models to N>1 NW part models
o Companies to report the AI/ML structures for the M>1 UE part models and the NW part model
o Companies to report the dataset used at UE part models, e.g., same or different dataset(s) among M UE part models
· Case 3: For NW-first training, Type 3 training between one UE part model and N>1 separate NW part models
o Note: Case 3 can be also applied to the N>1 NW part models to M>1 UE part models
o Companies to report the AI/ML structures for the UE part model and the N>1 NW part models
o Companies to report the dataset used at NW part models, e.g., same or different dataset(s) among N NW part models
· FFS: whether/how to report overhead of dataset
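As a hedged illustration of the Case 1 NW-first sequential flow, the numpy sketch below trains a toy NW-side encoder/decoder pair, shares a dataset of (CSI sample, latent) pairs, and then fits a UE-side encoder to reproduce the shared latent. Linear least squares stands in for real gradient-based training, and all dimensions are placeholders.
```python
# Minimal numpy sketch of NW-first sequential separate training (Type 3,
# Case 1): the NW side trains an encoder/decoder pair, shares a dataset of
# (CSI sample, latent) pairs with the UE side, and the UE side trains its own
# encoder to reproduce the latent. Linear least squares stands in for real
# gradient-based training; dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
D, Z, N = 32, 8, 1000                  # CSI dim, latent (feedback) dim, dataset size
csi = rng.standard_normal((N, D))      # training dataset of CSI samples

# Step 1: NW-side training of its CSI generation part (a fixed random
# projection standing in for a trained encoder) and matching reconstruction part.
nw_encoder = rng.standard_normal((D, Z)) / np.sqrt(D)
latent = csi @ nw_encoder
nw_decoder, *_ = np.linalg.lstsq(latent, csi, rcond=None)   # decoder fit at NW

# Step 2: NW shares (csi, latent) pairs; Step 3: UE side trains its own
# CSI generation part to reproduce the shared latent for the same inputs.
ue_encoder, *_ = np.linalg.lstsq(csi, latent, rcond=None)

# Check: UE encoder + NW decoder approximately reconstruct the CSI.
recon = (csi @ ue_encoder) @ nw_decoder
print("reconstruction NMSE:", float(np.mean((recon - csi) ** 2) / np.mean(csi ** 2)))
```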
R1-2212671 Summary#3 for CSI evaluation of [111-R18-AI/ML] Moderator (Huawei)
R1-2212672 Summary#4 for CSI evaluation of [111-R18-AI/ML] Moderator (Huawei)
From Nov 17th session
Working Assumption
For the AI/ML based CSI prediction sub use case, the nearest historical CSI w/o prediction as well as non-AI/ML/collaboration level x AI/ML based CSI prediction approach are both taken as baselines for the benchmark of performance comparison, and the specific non-AI/ML/collaboration level x AI/ML based CSI prediction is reported by companies.
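For illustration, the sketch below computes the "nearest historical CSI without prediction" baseline on a toy time-correlated channel: the most recent observed CSI is reused as the prediction for the next slot. The AR(1) channel model and the NMSE comparison metric are assumptions made only for this example.
```python
# Illustrative sketch of the "nearest historical CSI without prediction"
# baseline for the CSI prediction sub use case. The AR(1) channel evolution
# and NMSE metric are illustrative assumptions, not agreed evaluation settings.
import numpy as np

rng = np.random.default_rng(1)
T, D, rho = 200, 16, 0.95                      # slots, CSI dimension, time correlation
h = np.zeros((T, D))
h[0] = rng.standard_normal(D)
for t in range(1, T):                          # simple AR(1) channel evolution
    h[t] = rho * h[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal(D)

pred_baseline = h[:-1]                         # reuse nearest historical CSI for next slot
target = h[1:]
nmse = np.mean((pred_baseline - target) ** 2) / np.mean(target ** 2)
print("nearest-historical-CSI baseline NMSE:", float(nmse))
```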
Agreement
For evaluating the generalization/scalability over various configurations for CSI compression, to achieve the scalability over different input dimensions of CSI generation part (e.g., different bandwidths/frequency granularities, or different antenna ports), the generalization cases are elaborated as follows
· Case 1: The AI/ML model is trained based on training dataset from a fixed dimension X1 (e.g., a fixed bandwidth/frequency granularity, and/or number of antenna ports), and then the AI/ML model performs inference/test on a dataset from the same dimension X1.
· Case 2: The AI/ML model is trained based on training dataset from a single dimension X1, and then the AI/ML model performs inference/test on a dataset from a different dimension X2.
· Case 3: The AI/ML model is trained based on training dataset by mixing datasets subject to multiple dimensions of X1, X2,..., Xn, and then the AI/ML model performs inference/test on a single dataset subject to the dimension of X1, or X2,…, or Xn.
· Note: For Case 2/3, the solutions to achieve the scalability between Xi and Xj, are reported by companies, including, e.g., pre-processing to angle-delay domain, padding, additional adaptation layer in AI/ML model, etc.
· FFS the verification of fine-tuning
· FFS other additional cases
Agreement
For evaluating the generalization/scalability over various configurations for CSI compression, to achieve the scalability over different output dimensions of CSI generation part (e.g., different generated CSI feedback dimensions), the generalization cases are elaborated as follows
· Case 1: The AI/ML model is trained based on training dataset from a fixed output dimension Y1 (e.g., a fixed CSI feedback dimension), and then the AI/ML model performs inference/test on a dataset from the same output dimension Y1.
· Case 2: The AI/ML model is trained based on training dataset from a single output dimension Y1, and then the AI/ML model performs inference/test on a dataset from a different output dimension Y2.
· Case 3: The AI/ML model is trained based on training dataset by mixing datasets subject to multiple dimensions of Y1, Y2,..., Yn, and then the AI/ML model performs inference/test on a single dataset of Y1, or Y2,…, or Yn.
· Note: For Case 1/2/3, companies to report whether the output of the CSI generation part is before quantization or after quantization.
· Note: For Case 2/3, the solutions to achieve the scalability between Yi and Yj, are reported by companies, including, e.g., truncation, additional adaptation layer in AI/ML model, etc.
· FFS the verification of fine-tuning
· FFS other additional cases
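As an illustration of the scalability solutions mentioned in the notes above (e.g., padding and truncation), the sketch below zero-pads a smaller input dimension up to the trained dimension and truncates the generated feedback to a smaller payload; the dimensions used are placeholders.
```python
# Illustrative pre-/post-processing for scalability (Case 2/3): zero-padding
# a smaller input dimension X2 up to the trained dimension X1, and truncating
# the generated feedback from dimension Y1 down to a smaller payload Y2.
# Padding and truncation are two of the example solutions listed above; the
# dimensions below are placeholders.
import numpy as np

X1, Y1 = 64, 16        # dimensions the model was trained on
X2, Y2 = 48, 10        # dimensions seen at inference

def pad_input(x, target_dim=X1):
    """Zero-pad the CSI input along its last dimension up to target_dim."""
    pad = target_dim - x.shape[-1]
    return np.pad(x, [(0, 0)] * (x.ndim - 1) + [(0, pad)]) if pad > 0 else x

def truncate_feedback(z, payload_dim=Y2):
    """Keep only the first payload_dim entries of the generated feedback."""
    return z[..., :payload_dim]

if __name__ == "__main__":
    x = np.random.randn(4, X2)                 # batch of 4 CSI samples of dimension X2
    z = np.random.randn(4, Y1)                 # feedback generated at dimension Y1
    print(pad_input(x).shape, truncate_feedback(z).shape)   # (4, 64) (4, 10)
```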
R1-2212673 Summary#5 for CSI evaluation of [111-R18-AI/ML] Moderator (Huawei)
From Nov 17th session
Agreement
For the evaluation of the high resolution quantization of the ground-truth CSI in the CSI compression, Float32 is adopted as the baseline/upper-bound of performance comparison.
Agreement
For the evaluation of quantization aware/non-aware training, the following cases are considered and reported by companies:
Agreement
For the evaluation of an example of Type 3 (Separate training at NW side and UE side) with sequential training, companies to report the set of information (e.g., dataset) shared in Step 2
Working Assumption
For the AI/ML based CSI prediction sub use case, the following initial template is considered for companies to report the evaluation results of AI/ML-based CSI prediction for the case without generalization/scalability verification
· FFS the description and results for generalization/scalability may need a separate table
· FFS whether/how to capture the multiple predicted CSI instances and their mapping to slots
Table X. Evaluation results for CSI prediction without model generalization/scalability, [traffic type], [Max rank value], [RU]
| | | Source 1 | … |
| AI/ML model description | AI/ML model backbone | | |
| | [Pre-processing] | | |
| | [Post-processing] | | |
| | FLOPs/M | | |
| | Parameters/M | | |
| | [Storage /Mbytes] | | |
| | Input type | | |
| | Output type | | |
| Assumption | UE speed | | |
| | CSI feedback periodicity | | |
| | Observation window (number/distance) | | |
| | Prediction window (number/distance) | | |
| | Whether/how to adopt spatial consistency | | |
| Dataset size | Train/k | | |
| | Test/k | | |
| Benchmark 1 | | | |
| Intermediate KPI #1 of Benchmark 1 | | | |
| Gain for intermediate KPI#1 over Benchmark 1 | | | |
| Intermediate KPI #2 of Benchmark 1 | | | |
| Gain for intermediate KPI#2 over Benchmark 1 | | | |
| Gain for eventual KPI (Benchmark 1) | Mean UPT | | |
| | 5% UPT | | |
| Benchmark 2 | | | |
| Intermediate KPI #1 of Benchmark 2 | | | |
| Gain for intermediate KPI#1 over Benchmark 2 | | | |
| Intermediate KPI #2 of Benchmark 2 | | | |
| Gain for intermediate KPI#2 over Benchmark 2 | | | |
| Gain for eventual KPI (Benchmark 2) | Mean UPT | | |
| | 5% UPT | | |
| FFS others | | | |
Agreement
For evaluating the generalization/scalability over various configurations for CSI compression, to achieve the scalability over different input/output dimensions, companies to report which case(s) in the following are evaluated
· Case 0 (benchmark for comparison): One CSI generation part with fixed input and output dimensions to 1 CSI reconstruction part with fixed input and output dimensions for each of the different input and/or output dimensions.
· Case 1: One CSI generation part with scalable input and/or output dimensions to N>1 separate CSI reconstruction parts each with fixed and different output and/or input dimensions
· Case 2: M>1 separate CSI generation parts each with fixed and different input and/or output dimensions to one CSI reconstruction part with scalable output and/or input dimensions
· Case 3: A pair of CSI generation part with scalable input/output dimensions and CSI reconstruction part with scalable output and/or input dimensions
Agreement
For the evaluation of the high resolution quantization of the ground-truth CSI in the CSI compression, if R16 Type II-like method is considered, companies to report the R16 Type II parameters with specified or new/larger values to achieve higher resolution of the ground-truth CSI labels, e.g., L, p_v, beta, reference amplitude, differential amplitude, phase, etc.
Final summary in R1-2212966.
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2210842 Continued discussion on other aspects of AI/ML for CSI feedback enhancement FUTUREWEI
R1-2210886 Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2210955 Discussion on AI-CSI Ericsson
R1-2210999 Other aspects on AI/ML for CSI feedback enhancement vivo
R1-2211058 Discussion on other aspects for AI CSI feedback enhancement ZTE
R1-2211074 Views on specification impact for CSI compression with two-sided model Fujitsu
R1-2211125 On Enhancement of AI/ML based CSI Google
R1-2211133 Discussion on AI/ML for CSI feedback enhancement Panasonic
R1-2211190 Other aspects on AI/ML for CSI feedback enhancement CATT
R1-2211228 Discussion on other aspects on AIML for CSI feedback Spreadtrum Communications
R1-2211356 Views on potential specification impact for CSI feedback based on AI/ML xiaomi
R1-2211394 Use-cases and specification for CSI feedback Intel Corporation
R1-2211479 On sub use cases and other aspects of AI/ML for CSI feedback enhancement OPPO
R1-2211509 Discussions on Sub-Use Cases in AI/ML for CSI Feedback Enhancement TCL Communication
R1-2211526 Discussion on AI/ML for CSI feedback enhancement China Telecom
R1-2212542 Discussion on other aspects on AI/ML for CSI feedback enhancement ETRI (rev of R1-2211557)
R1-2211607 Considerations on CSI measurement enhancements via AI/ML Sony
R1-2211673 Discussion on other aspects on AI/ML for CSI feedback enhancement CMCC
R1-2211718 AI and ML for CSI feedback enhancement NVIDIA
R1-2211733 Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.
R1-2211750 Discussion on AI/ML for CSI feedback enhancement NEC
R1-2211774 Further aspects of AI/ML for CSI feedback Lenovo
R1-2211806 Discussion on other aspects of AI/ML for CSI enhancement Apple
R1-2211868 Other aspects on AI/ML for CSI feedback enhancement LG Electronics
R1-2211912 Discussions on AI-ML for CSI feedback CAICT
R1-2211978 Discussion on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2212037 Representative sub use cases for CSI feedback enhancement Samsung
R1-2212109 Other aspects on AI/ML for CSI feedback enhancement Qualcomm Incorporated
R1-2212227 Other aspects on AI/ML for CSI feedback enhancement MediaTek Inc.
R1-2212328 Other aspects on ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2212453 Discussion on AI/ML for CSI feedback enhancement AT&T
R1-2212641 Summary #1 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
R1-2212642 Summary #2 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From Nov 16th session
Agreement
Time domain CSI prediction using a UE-side model is selected as a representative sub-use case for CSI enhancement.
Note: Continue evaluation discussion in 9.2.2.1.
Note: RAN1 defer potential specification impact discussion at 9.2.2.2 until the RAN1#112b-e, and RAN1 will revisit at RAN1#112b-e whether to defer further till the end of R18 AI/ML SI.
Note: LCM related potential specification impact follow the high level principle of other one-sided model sub-cases.
R1-2212643 Summary #3 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
R1-2212644 Summary #4 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
R1-2212909 Summary #5 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From Nov 18th session
Conclusion
In CSI compression using two-sided model use case, training collaboration type 2 over the air interface for model training (not including model update) is deprioritized in R18 SI.
Note:
· To align terminology, output CSI assumed at UE in previous agreement will be referred as output-CSI-UE.
· To align terminology, input-CSI-NW is the input CSI assumed at NW.
Including evaluation methodology, KPI, and performance evaluation results.
R1-2210843 Continued discussion on evaluation of AI/ML for beam management FUTUREWEI
R1-2210887 Evaluation on AI/ML for beam management Huawei, HiSilicon
R1-2211000 Evaluation on AI/ML for beam management vivo
R1-2211059 Evaluation on AI for beam management ZTE
R1-2211075 Evaluation on AI/ML for beam management Fujitsu
R1-2211126 On Evaluation of AI/ML based Beam Management Google
R1-2211191 Evaluation methodology and results on AI/ML for beam management CATT
R1-2211229 Evaluation on AI for beam management Spreadtrum Communications
R1-2211288 Evaluation of AIML for beam management Ericsson
R1-2211315 Discussion for evaluation on AI/ML for beam management InterDigital, Inc.
R1-2211357 Evaluation on AI/ML for beam management xiaomi
R1-2211395 Evaluations for AI/ML beam management Intel Corporation
R1-2211480 Evaluation methodology and preliminary results on AI/ML for beam management OPPO
R1-2211527 Evaluation on AI/ML for beam management China Telecom
R1-2211674 Discussion on evaluation on AI/ML for beam management CMCC
R1-2211719 Evaluation of AI and ML for beam management NVIDIA
R1-2211775 Evaluation on AI/ML for beam management Lenovo
R1-2211807 Evaluation on AI/ML for beam management Apple
R1-2211869 Evaluation on AI/ML for beam management LG Electronics
R1-2211913 Some discussions on evaluation on AI-ML for Beam management CAICT
R1-2211979 Discussion on evaluation on AI/ML for beam management NTT DOCOMO, INC.
R1-2212038 Evaluation on AI ML for Beam management Samsung
R1-2212110 Evaluation on AI/ML for beam management Qualcomm Incorporated
R1-2212228 Evaluation on AI/ML for beam management MediaTek Inc.
R1-2212329 Evaluation of ML for beam management Nokia, Nokia Shanghai Bell
R1-2212423 Evaluation on AI/ML for beam management CEWiT
R1-2212591 Feature lead summary #0 evaluation of AI/ML for beam management Moderator (Samsung)
R1-2212592 Feature lead summary #1 evaluation of AI/ML for beam management Moderator (Samsung)
From Nov 15th session
Agreement
The following cases are considered for verifying the generalization performance of an AI/ML model over various scenarios/configurations as a starting point:
Agreement
R1-2212593 Feature lead summary #2 evaluation of AI/ML for beam management Moderator (Samsung)
From Nov 16th session
Agreement
Agreement
For BM-Case1 and BM-Case2, to verify the generalization performance of an AI/ML model over various scenarios/configurations, additionally considering
· Various Set B of beam(pairs)
Agreement
At least for evaluation on the performance of DL Tx beam prediction, consider the following options for Rx beam for providing input for AI/ML model for training and/or inference if applicable
R1-2212594 Feature lead summary #3 evaluation of AI/ML for beam management Moderator (Samsung)
From Nov 17th session
Agreement
· For generalization performance verification, consider the following
o Scenarios
§ Various deployment scenarios,
· e.g., UMa, UMi and others,
· e.g., 200m ISD or 500m ISD and others
· e.g., same deployment, different cells with different configuration/assumption
· e.g., gNB height and UE height
· FFS: e.g., Carrier frequencies
§ Various outdoor/indoor UE distributions, e.g., 100%/0%, 20%/80%, and others
§ Various UE mobility,
· e.g., 3km/h, 30km/h, 60km/h and others
o Configurations (parameters and settings)
§ Various UE parameters, e.g., number of UE Rx beams (including number of panels and UE antenna array dimensions)
§ Various gNB settings, e.g., DL Tx beam codebook (including various Set A of beam(pairs) and gNB antenna array dimensions)
§ Various Set B of beam (pairs)
§ T1 for measurement /T2 for prediction for BM-Case2
o Other scenarios/configurations(parameters and settings) are not precluded and can be reported by companies.
R1-2212904 Feature lead summary #4 evaluation of AI/ML for beam management Moderator (Samsung)
From Nov 18th session
Agreement
· For the evaluation of the overhead for BM-Case2, adopt the following metrics:
o RS overhead reduction,
§ Option 2:
· where N is the total number of beams (pairs) (with reference signal (SSB and/or CSI-RS)) required for measurement for AI/ML, including the beams (pairs) required for additional measurements before/after the prediction if applicable
· where M is the total number of beams (pairs) (with reference signal (SSB and/or CSI-RS)) required for measurement for baseline scheme
· Companies report the assumption on additional measurements
§ FFS: Option 3:
· where N is the number of beams (pairs) (with reference signal (SSB and/or CSI-RS)) required for measurement for AI/ML in each time instance
· where M is the total number of beams (pairs) to be predicted for each time instance
· where L is ratio of periodicity of time instance for measurements to periodicity of time instance for prediction
§ Companies report the assumption on T1 and T2 patterns
§ Other options are not precluded and can be reported by companies.
Final summary in R1-2212905.
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2210844 Continued discussion on other aspects of AI/ML for beam management FUTUREWEI
R1-2210888 Discussion on AI/ML for beam management Huawei, HiSilicon
R1-2211001 Other aspects on AI/ML for beam management vivo
R1-2211038 Discussion on other aspects of AI/ML beam management New H3C Technologies Co., Ltd.
R1-2211060 Discussion on other aspects for AI beam management ZTE
R1-2211076 Sub use cases and specification impact on AI/ML for beam management Fujitsu
R1-2211127 On Enhancement of AI/ML based Beam Management Google
R1-2211192 Other aspects on AI/ML for beam management CATT
R1-2211230 Discussion on other aspects on AIML for beam management Spreadtrum Communications
R1-2211289 Discussion on AI/ML for beam management Ericsson
R1-2211316 Discussion for other aspects on AI/ML for beam management InterDigital, Inc.
R1-2211358 Potential specification impact on AI/ML for beam management xiaomi
R1-2211396 Use-cases and Specification Impact for AI/ML beam management Intel Corporation
R1-2211481 Other aspects of AI/ML for beam management OPPO
R1-2211510 Discussions on Sub-Use Cases in AI/ML for Beam Management TCL Communication
R1-2211528 Other aspects on AI/ML for beam management China Telecom
R1-2211558 Discussion on other aspects on AI/ML for beam management ETRI
R1-2211590 Discussion on sub use cases of AI/ML beam management Panasonic
R1-2211608 Consideration on AI/ML for beam management Sony
R1-2211675 Discussion on other aspects on AI/ML for beam management CMCC
R1-2211721 AI and ML for beam management NVIDIA
R1-2211776 Further aspects of AI/ML for beam management Lenovo
R1-2211808 Discussion on other aspects of AI/ML for beam management Apple
R1-2211870 Other aspects on AI/ML for beam management LG Electronics
R1-2211914 Discussions on AI-ML for Beam management CAICT
R1-2211980 Discussion on AI/ML for beam management NTT DOCOMO, INC.
R1-2212039 Representative sub use cases for beam management Samsung
R1-2212111 Other aspects on AI/ML for beam management Qualcomm Incorporated
R1-2212150 Discussion on other aspects on AI/ML for beam management KT Corp.
R1-2212229 Other aspects on AI/ML for beam management MediaTek Inc.
R1-2212320 Other aspects on AI/ML for beam management Rakuten Symphony
R1-2212330 Other aspects on ML for beam management Nokia, Nokia Shanghai Bell
R1-2212372 Discussion on AI/ML for beam management NEC
R1-2212718 Summary#1 for other aspects on AI/ML for beam management Moderator (OPPO)
From Nov 15th session
Agreement
For the sub use case BM-Case1 and BM-Case2, at least support Alt.1 and Alt.2 for AI/ML model training and inference for further study:
· Alt.1. AI/ML model training and inference at NW side
· Alt.2. AI/ML model training and inference at UE side
· The discussion on Alt.3 for BM-Case1 and BM-Case2 is dependent on the conclusion/agreement of Agenda item 9.2.1 of RAN1 and/or RAN2 on whether to support model transfer for UE-side AI/ML model or not
o Alt.3. AI/ML model training at NW side, AI/ML model inference at UE side
R1-2212719 Summary#2 for other aspects on AI/ML for beam management Moderator (OPPO)
From Nov 16th session
Agreement
For BM-Case1 and BM-Case2 with a network-side AI/ML model, study potential specification impact on the following L1 reporting enhancement for AI/ML model inference
· UE to report the measurement results of more than 4 beams in one reporting instance
· Other L1 reporting enhancements can be considered
Agreement
Regarding the data collection for AI/ML model training at UE side, study the potential specification impact considering the following additional aspects.
· Whether and how to initiate data collection
· Configurations, e.g., configuration related to set A and/or Set B, information on association/mapping of Set A and Set B
· Assistance information from Network to UE (If supported)
· Other aspect(s) is not precluded
R1-2212720 Summary#3 for other aspects on AI/ML for beam management Moderator (OPPO)
Presented in Nov 17th session.
R1-2212927 Summary#4 for other aspects on AI/ML for beam management Moderator (OPPO)
From Nov 18th session
Agreement
Regarding NW-side model monitoring for a network-side AI/ML model of BM-Case1 and BM-Case2, study the necessity and the potential specification impacts from the following aspects:
· UE reporting of beam measurement(s) based on a set of beams indicated by gNB.
· Signaling, e.g., RRC-based, L1-based.
· Note: Performance and UE complexity, power consumption should be considered.
Including evaluation methodology, KPI, and performance evaluation results.
R1-2210854 Evaluation of AI/ML for Positioning Accuracy Enhancement Ericsson
R1-2210889 Evaluation on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
R1-2211002 Evaluation on AI/ML for positioning accuracy enhancement vivo
R1-2211061 Evaluation on AI for positioning enhancement ZTE
R1-2211077 Further evaluation results and discussions of AI positioning accuracy enhancement Fujitsu
R1-2211128 On Evaluation of AI/ML based Positioning Google
R1-2211193 Evaluation methodology and results on AI/ML for positioning enhancement CATT
R1-2211359 Evaluation on AI/ML for positioning accuracy enhancement xiaomi
R1-2211482 Evaluation methodology and preliminary results on AI/ML for positioning accuracy enhancement OPPO
R1-2211529 Evaluation on AI/ML for positioning accuracy enhancement China Telecom
R1-2211676 Discussion on evaluation on AI/ML for positioning accuracy enhancement CMCC
R1-2211715 Evaluation on AI/ML for positioning accuracy enhancement InterDigital, Inc.
R1-2211722 Evaluation of AI and ML for positioning enhancement NVIDIA
R1-2211777 Discussion on AI/ML Positioning Evaluations Lenovo
R1-2211809 On Evaluation on AI/ML for positioning accuracy enhancement Apple
R1-2211871 Evaluation on AI/ML for positioning accuracy enhancement LG Electronics
R1-2211915 Some discussions on evaluation on AI-ML for positioning accuracy enhancement CAICT
R1-2212040 Evaluation on AI ML for Positioning Samsung
R1-2212112 Evaluation on AI/ML for positioning accuracy enhancement Qualcomm Incorporated
R1-2212230 Evaluation on AI/ML for positioning accuracy enhancement MediaTek Inc.
R1-2212331 Evaluation of ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2212382 Evaluation on AI/ML for positioning accuracy enhancement Fraunhofer IIS, Fraunhofer HHI
R1-2212610 Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Nov 15th session
Agreement
Study how AI/ML positioning accuracy is affected by: user density/size of the training dataset.
Note: details of user density/size of training dataset to be reported in the evaluation.
Agreement
For reporting the model input dimension NTRP * Nport * Nt of CIR and PDP, Nt refers to the first Nt consecutive time domain samples.
R1-2212611 Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Nov 16th session
Agreement
For reporting the model input dimension NTRP * Nport * Nt:
Agreement
At least for model inference of AI/ML assisted positioning, evaluate and report the AI/ML model output, including (a) the type of information (e.g., ToA, RSTD, AoD, AoA, LOS/NLOS indicator) to use as model output, (b) soft information vs hard information, (c) whether the model output can reuse existing measurement report (e.g., NRPPa, LPP).
Agreement
For AI/ML assisted positioning, evaluate the three constructions:
Note: Individual company may evaluate one or more of the three constructions.
Agreement
For AI/ML assisted approach, study the performance of model monitoring metrics at least where the metrics are obtained from inference accuracy of model output.
Agreement
For both direct and AI/ML assisted positioning methods, investigate at least the impact of the amount of fine-tuning data on the positioning accuracy of the fine-tuned model.
Agreement
For the RAN1#110bis agreement on the calculation of model complexity, the FFS are resolved with the following update:
| | Model complexity to support N TRPs |
| Single-TRP, same model for N TRPs | where |
| Single-TRP, N models for N TRPs | where |
Note: The reported model complexity above is intended for inference and may not be directly applicable to complexity of other LCM aspects.
Observation
Direct AI/ML positioning can significantly improve the positioning accuracy compared to existing RAT-dependent positioning methods when the generalization aspects are not considered.
· For InF-DH with clutter parameter setting {60%, 6m, 2m}, evaluation results submitted to RAN1#111 indicate that the direct AI/ML positioning can achieve horizontal positioning accuracy of <1m at CDF=90%, as compared to >15m for conventional positioning method.
R1-2212612 Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Nov 17th session
Agreement
For AI/ML based positioning, company optionally evaluate the impact of at least the following issues related to measurements on the positioning accuracy of the AI/ML model. The simulation assumptions reflecting these issues are up to companies.
R1-2212816 Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Nov 18th session
Conclusion
Companies describe how their computational complexity values are obtained.
· It is out of 3GPP scope to consider computational complexity values that have platform-dependency and/or use implementation (hardware and software) optimization solutions.
Observation
AI/ML assisted positioning can significantly improve the positioning accuracy compared to existing RAT-dependent positioning methods when the generalization aspects are not considered.
Note: how to capture the observation(s) into TR is separate discussion.
Agreement
· For AI/ML assisted approach, for a given AI/ML model design (e.g., input, output, single-TRP vs multi-TRP), identify the generalization aspects where model fine-tuning/mixed training dataset/model switching is necessary.
Final summary in R1-2212817.
Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.
R1-2210855 Other Aspects of AI/ML Based Positioning Enhancement Ericsson
R1-2210890 Discussion on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
R1-2211003 Other aspects on AI/ML for positioning accuracy enhancement vivo
R1-2211062 Discussion on other aspects for AI positioning enhancement ZTE
R1-2211078 Discussions on spec impacts of model training, data collection, model identification and model monitoring for AIML for positioning accuracy enhancement Fujitsu
R1-2211129 On Enhancement of AI/ML based Positioning Google
R1-2211194 Other aspects on AI/ML for positioning enhancement CATT
R1-2211231 Discussion on other aspects on AIML for positioning accuracy enhancement Spreadtrum Communications
R1-2211360 Views on the other aspects of AI/ML-based positioning accuracy enhancement xiaomi
R1-2211483 On sub use cases and other aspects of AI/ML for positioning accuracy enhancement OPPO
R1-2211609 On AI/ML for positioning accuracy enhancement Sony
R1-2211677 Discussion on other aspects on AI/ML for positioning accuracy enhancement CMCC
R1-2211717 Designs and potential specification impacts of AIML for positioning InterDigital, Inc.
R1-2211725 AI and ML for positioning enhancement NVIDIA
R1-2211778 AI/ML Positioning use cases and Associated Impacts Lenovo
R1-2211810 On Other aspects on AI/ML for positioning accuracy enhancement Apple
R1-2211872 Other aspects on AI/ML for positioning accuracy enhancement LG Electronics
R1-2211916 Discussions on AI-ML for positioning accuracy enhancement CAICT
R1-2211981 Discussion on AI/ML for positioning accuracy enhancement NTT DOCOMO, INC.
R1-2212041 Representative sub use cases for Positioning Samsung
R1-2212113 Other aspects on AI/ML for positioning accuracy enhancement Qualcomm Incorporated
R1-2212214 Other aspects on AI-ML for positioning accuracy enhancement Baicells
R1-2212231 Other aspects on AI/ML for positioning accuracy enhancement MediaTek Inc.
R1-2212332 Other aspects on ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2212358 Discussion on AI/ML for positioning accuracy enhancement NEC
R1-2212383 On potential AI/ML solutions for positioning Fraunhofer IIS, Fraunhofer HHI
R1-2212549 FL summary #1 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
From Nov 15th session
Agreement
For the study of benefit(s) and potential specification impact for AI/ML based positioning accuracy enhancement, one-sided model whose inference is performed entirely at the UE or at the network is prioritized in Rel-18 SI.
Agreement
Regarding AI/ML model inference, to study and provide inputs on potential specification impact (including necessity and applicability of specifying AI/ML model input and/or output) at least for the following aspects for each of the agreed cases (Case 1 to Case 3b) in AI/ML based positioning accuracy enhancement
R1-2212742 FL summary #2 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
From Nov 16th session
Agreement
Regarding data collection for AI/ML model training for AI/ML based positioning,
R1-2212783 FL summary #3 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
From Nov 17th session
Agreement
Regarding data collection for AI/ML model training for AI/ML based positioning, study benefits, feasibility and potential specification impact (including necessity) for the following aspects
R1-2212877 FL summary #4 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
From Nov 18th session
Agreement
Regarding AI/ML model monitoring for AI/ML based positioning, to study and provide inputs on feasibility, potential benefits (if any) and potential specification impact at least for the following aspects
Agreement
For AI/ML based positioning accuracy enhancement, direct AI/ML positioning and AI/ML assisted positioning are selected as representative sub-use cases.
Please refer to RP-221348 for detailed scope of the SI.
R1-2302063 Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface) Ad-hoc Chair (CMCC)
[112-R18-AI/ML] – Taesang (Qualcomm)
To be used for sharing updates on online/offline schedule, details on what is to be discussed in online/offline sessions, tdoc number of the moderator summary for online session, etc
R1-2301402 Technical report for Rel-18 SI on AI and ML for NR air interface Qualcomm Incorporated
Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.
R1-2300043 Discussion on common AI/ML characteristics and operations FUTUREWEI
R1-2300107 Discussion on general aspects of AI/ML framework Huawei, HiSilicon
R1-2300170 Discussion on general aspects of common AI PHY framework ZTE
R1-2300178 Discussion on general aspects of AIML framework Ericsson
R1-2300210 Discussion on general aspects of AI/ML framework Spreadtrum Communications
R1-2300279 On general aspects of AI/ML framework OPPO
R1-2300396 On General Aspects of AI/ML Framework Google
R1-2300443 Discussions on AI/ML framework vivo
R1-2300529 General aspects on AI/ML framework LG Electronics
R1-2300566 Views on the general aspects of AI/ML framework xiaomi
R1-2300603 Further discussion on the general aspects of ML for Air-interface Nokia, Nokia Shanghai Bell
R1-2300670 Discussion on general aspects of AI/ML framework CATT
R1-2300743 Discussion on general aspects of AI/ML framework Fujitsu
R1-2300823 Discussion on general aspects of AI ML framework NEC
R1-2300840 Considerations on general aspects on AI-ML framework CAICT
R1-2300868 Considerations on common AI/ML framework Sony
R1-2300906 Discussion on general aspects of AI/ML framework KDDI Corporation
R1-2300940 Discussion on general aspects of AI/ML framework Intel Corporation
R1-2300989 Discussion on general aspects of AI/ML framework CMCC
R1-2301040 Discussion on general aspects of AI/ML framework for NR air interface ETRI
R1-2301139 General aspects of AI/ML framework Fraunhofer IIS, Fraunhofer HHI
R1-2301147 Discussion on general aspects of AI/ML framework Panasonic
R1-2301155 Discussion on general aspects of AI/ML framework InterDigital, Inc.
R1-2301160 Discussion on AI/ML Framework Rakuten Mobile, Inc
R1-2301177 General aspects of AI and ML framework for NR air interface NVIDIA
R1-2301198 General aspects of AI/ML framework Lenovo
R1-2301220 General aspects of AI/ML Framework AT&T
R1-2301254 General aspects of AI ML framework and evaluation methodology Samsung
R1-2301336 Discussion on general aspect of AI/ML framework Apple
R1-2301403 General aspects of AI/ML framework Qualcomm Incorporated
R1-2301484 Discussion on general aspects of AI/ML framework NTT DOCOMO, INC.
R1-2301586 Discussion on general aspects of AI/ML LCM MediaTek Inc.
R1-2301663 Discussions on Common Aspects of AI/ML Framework TCL Communication Ltd.
R1-2301664 Identifying Procedures for General Aspects of AI/ML Frameworks Indian Institute of Tech (M), CEWiT, IIT Kanpur
R1-2301863 Summary#1 of General Aspects of AI/ML Framework Moderator (Qualcomm)
From Monday session
Agreement
To facilitate the discussion, consider at least the following Cases for model delivery/transfer to UE, training location, and model delivery/transfer format combinations for UE-side models and UE-part of two-sided models.
| Case | Model delivery/transfer | Model storage location | Training location |
| y | model delivery (if needed) over-the-top | Outside 3GPP Network | UE-side / NW-side / neutral site |
| z1 | model transfer in proprietary format | 3GPP Network | UE-side / neutral site |
| z2 | model transfer in proprietary format | 3GPP Network | NW-side |
| z3 | model transfer in open format | 3GPP Network | UE-side / neutral site |
| z4 | model transfer in open format of a known model structure at UE | 3GPP Network | NW-side |
| z5 | model transfer in open format of an unknown model structure at UE | 3GPP Network | NW-side |
Note: The Case definition is only for the purpose of facilitating discussion and does not imply applicability, feasibility, entity mapping, architecture, signalling nor any prioritization.
Note: The Case definition is NOT intended to introduce sub-levels of Level z.
Note: Other cases may be included further upon interest from companies.
FFS: Z4 and Z5 boundary
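For readers tracking these Cases across later agenda items, the table above can be summarized as a simple lookup structure. The sketch below is purely illustrative and, consistent with the notes above, implies no applicability, feasibility, entity mapping, architecture, signalling nor prioritization.
```python
# Illustrative summary of Cases y/z1-z5 above (discussion aid only).
MODEL_DELIVERY_CASES = {
    "y":  ("model delivery (if needed) over-the-top", "Outside 3GPP Network",
           "UE-side / NW-side / neutral site"),
    "z1": ("model transfer in proprietary format", "3GPP Network", "UE-side / neutral site"),
    "z2": ("model transfer in proprietary format", "3GPP Network", "NW-side"),
    "z3": ("model transfer in open format", "3GPP Network", "UE-side / neutral site"),
    "z4": ("model transfer in open format of a known model structure at UE", "3GPP Network", "NW-side"),
    "z5": ("model transfer in open format of an unknown model structure at UE", "3GPP Network", "NW-side"),
}

for case, (delivery, storage, training) in MODEL_DELIVERY_CASES.items():
    print(f"Case {case}: {delivery} | storage: {storage} | training: {training}")
```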
R1-2301864 Summary#2 of General Aspects of AI/ML Framework Moderator (Qualcomm)
Presented in Tuesday session
R1-2301865 Summary#3 of General Aspects of AI/ML Framework Moderator (Qualcomm)
From Wednesday session
Agreement
For UE-side models and UE-part of two-sided models:
FFS: Relationship between functionality identification and model identification
FFS: Performance monitoring and RAN4 impact
FFS: detailed understanding on model
Agreement
· AI/ML-enabled Feature refers to a Feature where AI/ML may be used.
Agreement
· For functionality identification, there may be either one or more than one Functionalities defined within an AI/ML-enabled feature.
Agreement
For 3GPP AI/ML for PHY SI discussion, when companies report model complexity, the complexity shall be reported in terms of “number of real-value model parameters” and “number of real-value operations” regardless of underlying model arithmetic.
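As an illustration of this reporting convention (a minimal sketch, not part of the agreement; the layer sizes are hypothetical), the snippet below counts real-value parameters and real-value operations for a small fully connected encoder, treating one complex parameter as two real parameters and one complex multiply-accumulate as roughly four real multiply-accumulates, so that the reported numbers do not depend on the underlying model arithmetic.
```python
# Minimal sketch (hypothetical layer sizes): report complexity as
# "number of real-value model parameters" and "number of real-value operations".

def dense_layer_counts(n_in, n_out, complex_valued=False):
    params = n_in * n_out + n_out            # weights + biases
    ops = 2 * n_in * n_out                   # one multiply + one add per weight
    if complex_valued:
        params *= 2                          # re/im parts counted as real values
        ops *= 4                             # complex MAC ~ 4 real MACs
    return params, ops

layers = [(832, 512), (512, 256), (256, 64)]  # hypothetical encoder dimensions
total_params = total_ops = 0
for n_in, n_out in layers:
    p, o = dense_layer_counts(n_in, n_out, complex_valued=False)
    total_params += p
    total_ops += o

print(f"real-value parameters: {total_params / 1e6:.3f} M")
print(f"real-value operations (per inference): {total_ops / 1e6:.3f} M")
```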
Final summary in R1-2301868 Final Summary of General Aspects of AI/ML Framework Moderator (Qualcomm)
Including evaluation methodology, KPI, and performance evaluation results.
R1-2300044 Discussion and evaluation of AI/ML for CSI feedback enhancement FUTUREWEI
R1-2300108 Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2300154 Evaluations of AI-CSI Ericsson
R1-2300171 Evaluation on AI CSI feedback enhancement ZTE
R1-2300211 Discussion on evaluation on AI/ML for CSI feedback enhancement Spreadtrum Communications, BUPT
R1-2300280 Evaluation methodology and results on AI/ML for CSI feedback enhancement OPPO
R1-2300348 Evaluation on AI ML for CSI feedback enhancement Mavenir
R1-2300397 On Evaluation of AI/ML based CSI Google
R1-2300444 Evaluation on AI/ML for CSI feedback enhancement vivo
R1-2300501 Evaluation of AI/ML based methods for CSI feedback enhancement Fraunhofer IIS, Fraunhofer HHI
R1-2300530 Evaluation on AI/ML for CSI feedback enhancement LG Electronics
R1-2300567 Discussion on evaluation on AI/ML for CSI feedback enhancement xiaomi
R1-2300604 Evaluation of ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2300671 Evaluation on AI/ML for CSI feedback enhancement CATT
R1-2300716 Evaluation on AI/ML for CSI feedback enhancement China Telecom
R1-2300744 Evaluation on AI/ML for CSI feedback enhancement Fujitsu
R1-2300841 Some discussions on evaluation on AI-ML for CSI feedback CAICT
R1-2300941 Evaluation for CSI feedback enhancements Intel Corporation
R1-2300990 Discussion on evaluation on AI/ML for CSI feedback enhancement CMCC
R1-2301031 Evaluation on AI/ML for CSI feedback enhancement Indian Institute of Tech (H)
R1-2301041 Evaluation on AI/ML for CSI feedback enhancement ETRI
R1-2301097 Evaluation of joint CSI estimation and compression with AI/ML BJTU
R1-2301156 Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.
R1-2301178 Evaluation of AI and ML for CSI feedback enhancement NVIDIA
R1-2301199 Evaluation on AI/ML for CSI feedback Lenovo
R1-2301223 Discussion on AI/ML for CSI feedback enhancement AT&T
R1-2301255 Evaluation on AI/ML for CSI feedback enhancement Samsung
R1-2301337 Evaluation for AI/ML based CSI feedback enhancement Apple
R1-2301404 Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated
R1-2301466 Evaluation of AI/ML based methods for CSI feedback enhancement SEU (Late submission)
R1-2301485 Discussion on evaluation on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2301587 Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.
R1-2301666 Discussion on AI/ML based CSI Feedback Enhancement Indian Institute of Tech (M), CEWiT, IIT Kanpur
R1-2301805 Evaluation of AI and ML for CSI feedback enhancement CEWiT (rev of R1-2301688)
R1-2301936 Summary#1 for CSI evaluation of [112-R18-AI/ML] Moderator (Huawei)
From Monday session
Conclusion
For the evaluation of the AI/ML based CSI feedback enhancement, if SGCS is adopted as the intermediate KPI as part of the ‘Evaluation Metric’ for rank>1 cases, there is no consensus on whether to adopt an additional method beyond Method 3, which has already been supported.
Agreement
Confirm the following working assumption of RAN1#110bis-e:
Working assumption
In the evaluation of the AI/ML based CSI feedback enhancement, if SGCS is adopted as the intermediate KPI for the rank>1 situation, companies to ensure the correct calculation of SGCS and to avoid disorder issue of the output eigenvectors
· Note: Eventual KPI can still be used to compare the performance
Conclusion
For the intermediate KPI for evaluating the accuracy of the AI/ML output CSI, apart from SGCS and NMSE, which have been agreed as the baseline metrics, no additional intermediate KPI is adopted as mandatory.
· It is up to companies to optionally report other intermediate KPIs, e.g., Relative achievable rate (RAR)
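For concreteness, one common per-layer formulation of the two baseline intermediate KPIs is sketched below (a hedged illustration; the exact averaging over samples, subbands and configurations follows each company's own evaluation assumptions). The eigenvectors are kept in the same per-layer order for target and output, which is the point of the "disorder issue" noted in the working assumption above.
```python
import numpy as np

def sgcs_per_layer(v_target, v_output):
    """Squared generalized cosine similarity per layer.
    v_target, v_output: complex arrays of shape (n_samples, n_layers, n_tx),
    target vs. reconstructed/predicted eigenvectors in the same layer order."""
    inner = np.abs(np.sum(np.conj(v_target) * v_output, axis=-1)) ** 2
    norm = (np.linalg.norm(v_target, axis=-1) ** 2) * (np.linalg.norm(v_output, axis=-1) ** 2)
    return (inner / norm).mean(axis=0)          # one SGCS value per layer

def nmse_per_layer(v_target, v_output):
    """Normalized mean squared error per layer (linear scale)."""
    err = np.linalg.norm(v_target - v_output, axis=-1) ** 2
    ref = np.linalg.norm(v_target, axis=-1) ** 2
    return (err / ref).mean(axis=0)

# Hypothetical example: 1000 samples, rank 2, 32 Tx ports.
rng = np.random.default_rng(0)
v = rng.standard_normal((1000, 2, 32)) + 1j * rng.standard_normal((1000, 2, 32))
v /= np.linalg.norm(v, axis=-1, keepdims=True)
v_hat = v + 0.1 * (rng.standard_normal(v.shape) + 1j * rng.standard_normal(v.shape))
print("SGCS per layer:", sgcs_per_layer(v, v_hat))
print("NMSE per layer:", nmse_per_layer(v, v_hat))
```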
Agreement
For the evaluation of CSI enhancements, companies can optionally provide the additional throughput baseline based on CSI without compression (e.g., eigenvector from measured channel), which is taken as an upper bound for performance comparison.
R1-2301937 Summary#2 for CSI evaluation of [112-R18-AI/ML] Moderator (Huawei)
From Tuesday session
Agreement
· Confirm the following WA on the benchmark for CSI prediction achieved in RAN1#111:
Working Assumption
For the AI/ML based CSI prediction sub use case, the nearest historical CSI w/o prediction as well as non-AI/ML/collaboration level x AI/ML based CSI prediction approach are both taken as baselines for the benchmark of performance comparison, and the specific non-AI/ML/collaboration level x AI/ML based CSI prediction is reported by companies.
· Note: the specific non-AI/ML based CSI prediction is compatible with R18 MIMO; collaboration level x AI/ML based CSI prediction could be implementation based AI/ML compatible with R18 MIMO as an example
o It does not imply any restriction on future specification for CSI prediction
· FFS how to model the simulation cases for collaboration level x CSI prediction and LCM for collaboration level y/z CSI prediction
Agreement
The CSI prediction-specific generalization scenario of various UE speeds (e.g., 10km/h, 30km/h, 60km/h, 120km/h, etc.) is added to the list of scenarios for performing the generalization verification.
· FFS various frequency PRBs (e.g., trained based on one set of PRBs, inference on the same/different set of PRBs)
Agreement
For how to separate the templates for different training types/cases for AI/ML-based CSI compression without generalization/scalability verification, the following is considered:
· The template determined in the RAN1#111 working assumption is titled “1-on-1 joint training”
· A second separate template is introduced to capture the evaluation results for “multi-vendor joint training”
o Note: this table captures the results for the joint training cases of 1 NW part model to M>1 UE part models, N>1 NW part models to 1 UE part model, or N>1 NW part models to M>1 UE part models. An example is multi-vendor Type 2 training.
· A third separate template is introduced to capture the evaluation results for “separate training”
· FFS: additional KPIs for each template, e.g., overhead, latency, etc.
Agreement
For the evaluation of training Type 3 under CSI compression, besides the 3 cases considered for multi-vendors, add one new Case (1-on-1 training with joint training) as benchmark/upper bound for performance comparison.
· FFS the relationship between the pair(s) of models for Type 3 and the pair(s) of models for new Case
R1-2301938 Summary#3 for CSI evaluation of [112-R18-AI/ML] Moderator (Huawei)
From Wednesday session
Agreement
For the evaluation of the AI/ML based CSI compression sub use cases with rank >=1, companies to report the specific option adopted for AI/ML model settings to adapt to ranks/layers.
Agreement
The CSI feedback overhead is calculated as the weighted average of CSI payload per rank and the distribution of ranks reported by the UE.
Working Assumption
For the initial template for AI/ML-based CSI compression without generalization/scalability verification achieved in the working assumption in the RAN1#111 meeting, X, Y and Z are determined as:
· X is <=80bits
· Y is 100bits-140bits
· Z is >=230bits
Working Assumption
X, Y and Z are applicable per layer
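To illustrate how the two items above combine in reporting, the short worked example below (hypothetical numbers; only the X/Y/Z ranges come from the working assumptions) computes the CSI feedback overhead as the rank-distribution-weighted average of the per-rank payload, where the per-rank payload is the per-layer payload multiplied by the number of layers.
```python
# Hypothetical example: per-layer payload of 60 bits falls in range X (<= 80 bits).
# For a max-rank-2 report, rank-1 CSI carries 1 * 60 bits and rank-2 CSI 2 * 60 bits.
per_layer_payload_bits = 60            # assumed per-layer payload (within range X)
rank_distribution = {1: 0.4, 2: 0.6}   # assumed fraction of reports per rank

overhead_bits = sum(prob * rank * per_layer_payload_bits
                    for rank, prob in rank_distribution.items())
print(f"weighted-average CSI feedback overhead: {overhead_bits:.1f} bits")
# -> 0.4*60 + 0.6*120 = 96.0 bits
```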
R1-2301939 Summary#4 for CSI evaluation of [112-R18-AI/ML] Moderator (Huawei)
From Friday session
Working assumption
The following initial template is considered to replace the template achieved in the working assumption in the RAN1#111 meeting, for companies to report the evaluation results of AI/ML-based CSI compression of 1-on-1 joint training without generalization/scalability verification
Table X. Evaluation results for CSI compression of 1-on-1 joint training without model generalization/scalability, [traffic type], [Max rank value], [RU]
| | | Source 1 | … |
| CSI generation part | AI/ML model backbone | | |
| | Pre-processing | | |
| | Post-processing | | |
| | FLOPs/M | | |
| | Number of parameters/M | | |
| | [Storage /Mbytes] | | |
| CSI reconstruction part | AI/ML model backbone | | |
| | [Pre-processing] | | |
| | [Post-processing] | | |
| | FLOPs/M | | |
| | Number of parameters/M | | |
| | [Storage /Mbytes] | | |
| Common description | Input type | | |
| | Output type | | |
| | Quantization /dequantization method | | |
| | Rank/layer adaptation settings for rank>1 | | |
| Dataset description | Train/k | | |
| | Test/k | | |
| | Ground-truth CSI quantization method (including scalar/codebook based quantization, and the parameters) | | |
| | Overhead reduction compared to Float32 if high resolution quantization of ground-truth CSI is applied | | |
| | [Other assumptions/settings agreed to be reported] | | |
| Benchmark | | | |
| Benchmark assumptions, e.g., CSI overhead calculation method (Optional) | | | |
| SGCS of benchmark, [layer 1] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| SGCS of benchmark, [layer 2] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| Gain for SGCS, [layer 1] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| Gain for SGCS, [layer 2] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| … (other layers) | | | |
| NMSE of benchmark, [layer 1] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| NMSE of benchmark, [layer 2] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| Gain for NMSE, [layer 1] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| Gain for NMSE, [layer 2] | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| … (other layers) | | | |
| Other intermediate KPI (description/value) (optional) | | | |
| Gain for other intermediate KPI (description/value) (optional) | | | |
| Gain for Mean UPT (for a specific CSI feedback overhead) | [CSI feedback payload X*Max rank value] | | |
| | [CSI feedback payload Y*Max rank value] | | |
| | [CSI feedback payload Z*Max rank value] | | |
| Gain for 5% UPT | [CSI feedback payload X*Max rank value] | | |
| | [CSI feedback payload Y*Max rank value] | | |
| | [CSI feedback payload Z*Max rank value] | | |
| Gain for upper bound without CSI compression over Benchmark – Mean UPT (Optional) | [CSI feedback payload X*Max rank value] | | |
| | [CSI feedback payload Y*Max rank value] | | |
| | [CSI feedback payload Z*Max rank value] | | |
| Gain for upper bound without CSI compression over Benchmark – 5% UPT (Optional) | [CSI feedback payload X*Max rank value] | | |
| | [CSI feedback payload Y*Max rank value] | | |
| | [CSI feedback payload Z*Max rank value] | | |
| [CSI feedback reduction (%)] | | | |
| … | | | |
| FFS others | | | |
Note: “Benchmark” means the type of Legacy CB used for comparison.
Note: “Quantization/dequantization method” includes the description of training awareness (Case 1/2-1/2-2), type of quantization/dequantization (SQ/VQ), etc.
Note: “Input type” means the input of the CSI generation part. “output type” means the output of the CSI reconstruction part.
Working assumption
A separate table to capture the evaluation results of generalization/scalability verification for AI/ML-based CSI compression is given in the following initial template
· To be collected before 112bis-e meeting
· FFS whether the intermediate KPI results are gain over benchmark or absolute values
· FFS whether the intermediate KPI results are in forms of linear or dB
Table X. Evaluation results for CSI compression with model generalization/scalability, [Max rank value], [Scenario/configuration]
| | | Source 1 | … |
| CSI generation part | AI/ML model backbone | | |
| | Pre-processing | | |
| | Post-processing | | |
| | FLOPs/M | | |
| | Number of parameters/M | | |
| | [Storage /Mbytes] | | |
| CSI reconstruction part | AI/ML model backbone | | |
| | [Pre-processing] | | |
| | [Post-processing] | | |
| | FLOPs/M | | |
| | Number of parameters/M | | |
| | [Storage /Mbytes] | | |
| Common description | Input type | | |
| | Output type | | |
| | Quantization /dequantization method | | |
| | Generalization/Scalability method description if applicable, e.g., truncation, adaptation layer, etc. | | |
| | Input/output scalability dimension if applicable, e.g., N>=1 NW part model(s) to M>=1 UE part model(s) | | |
| Dataset description | Ground-truth CSI quantization method | | |
| | [Other assumptions/settings agreed to be reported] | | |
| Generalization Case 1 | Train (setting#A, size/k) | | |
| | Test (setting#A, size/k) | | |
| SGCS, layer 1 | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| SGCS, layer 2 | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| NMSE, layer 1 | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| NMSE, layer 2 | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| … (other settings for Case 1) | | | |
| … | | | |
| Generalization Case 2 | Train (setting#A, size/k) | | |
| | Test (setting#B, size/k) | | |
| | … (results for Case 2) | | |
| | … (other settings for Case 2) | | |
| Generalization Case 3 | Train (setting#A+#B, size/k) | | |
| | Test (setting#A/#B, size/k) | | |
| | … (results for Case 3) | | |
| | … (other settings for Case 3) | | |
| Fine-tuning case (optional) | Train (setting#A, size/k) | | |
| | Fine-tune (setting#B, size/k) | | |
| | Test (setting#B, size/k) | | |
| | … (results for Fine-tuning) | | |
| | … (other settings for Fine-tuning) | | |
| FFS others | | | |
Note: “Quantization/dequantization method” includes the description of training awareness (Case 1/2-1/2-2), type of quantization/dequantization (SQ/VQ), etc.
Note: “Input type” means the input of the CSI generation part. “output type” means the output of the CSI reconstruction part.
Working Assumption
The following initial template is considered for companies to report the evaluation results of AI/ML-based CSI prediction with generalization verification
· To be collected before 112bis-e meeting
· FFS whether the intermediate KPI results are gain over benchmark or absolute values
· FFS whether the intermediate KPI results are in forms of linear or dB
Table X. Evaluation results for CSI prediction with model generalization, [Max rank value]
| | | Source 1 | … |
| AI/ML model description | AI/ML model description (e.g., backbone, structure) | | |
| | [Pre-processing] | | |
| | [Post-processing] | | |
| | FLOPs/M | | |
| | Parameters/M | | |
| | [Storage /Mbytes] | | |
| | Input type | | |
| | Output type | | |
| Assumption | CSI feedback periodicity | | |
| | Observation window (number/distance) | | |
| | Prediction window (number/distance between prediction instances/distance from the last observation instance to the 1st prediction instance) | | |
| | Whether/how to adopt spatial consistency | | |
| Generalization Case 1 | Train (setting#A, size/k) | | |
| | Test (setting#A, size/k) | | |
| | SGCS (1,…N, N is number of prediction instances) | | |
| | NMSE (1,…N, N is number of prediction instances) | | |
| | … (other settings and results for Case 1) | | |
| Generalization Case 2 | Train (setting#A, size/k) | | |
| | Test (setting#B, size/k) | | |
| | SGCS (1,…N, N is number of prediction instances) | | |
| | NMSE (1,…N, N is number of prediction instances) | | |
| | … (other settings and results for Case 2) | | |
| Generalization Case 3 | Train (setting#A+#B, size/k) | | |
| | Test (setting#A/#B, size/k) | | |
| | SGCS (1,…N, N is number of prediction instances) | | |
| | NMSE (1,…N, N is number of prediction instances) | | |
| | … (other settings and results for Case 3) | | |
| Fine-tuning case (optional) | Train (setting#A, size/k) | | |
| | Fine-tune (setting#B, size/k) | | |
| | Test (setting#B, size/k) | | |
| | SGCS (1,…N, N is number of prediction instances) | | |
| | NMSE (1,…N, N is number of prediction instances) | | |
| | … (other settings and results for Fine-tuning) | | |
| FFS others | | | |
Working Assumption
The following initial template is considered for companies to report the evaluation results of AI/ML-based CSI compression for multi-vendor joint training and without generalization/scalability verification
· To be collected before 112bis-e meeting
· FFS whether the intermediate KPI results are gain over benchmark or absolute values
· FFS whether the intermediate KPI results are in forms of linear or dB
· FFS case of multiple layers
Table X. Evaluation results for CSI compression of multi-vendor joint training without model generalization/scalability, [Max rank value]
| | | Source 1 | … |
| Common description | Input type | | |
| | Output type | | |
| | [Training method] | | |
| | Quantization /dequantization method | | |
| Dataset description | Train/k | | |
| | Test/k | | |
| | Ground-truth CSI quantization method | | |
| Case 1 (baseline): NW#1-UE#1 | UE part AI/ML model backbone/structure | | |
| | Network part AI/ML model backbone/structure | | |
| | ... (other NW-UE combinations for Case 1) | | |
| Case 2 (1 NW part to M>1 UE parts) | NW part model backbone/structure | | |
| | UE#1 part model backbone/structure | | |
| | UE#1 part training dataset description and size | | |
| | … | | |
| | UE#M part model backbone/structure | | |
| | UE#M part training dataset description and size | | |
| Case 3 (N>1 NW parts to 1 UE part) | UE part model backbone/structure | | |
| | NW#1 part model backbone/structure | | |
| | NW#1 part training dataset description and size | | |
| | … | | |
| | NW#N part model backbone/structure | | |
| | NW#N part training dataset description and size | | |
| Intermediate KPI type (SGCS/NMSE) | | | |
| FFS other cases | | | |
| Case 1: NW#1-UE#1: Intermediate KPI | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| | … (results for other NW-UE combinations for Case 1) | | |
| Case 2: Intermediate KPI | CSI feedback payload X, NW-UE#1 | | |
| | … | | |
| | CSI feedback payload X, NW-UE#M | | |
| | CSI feedback payload Y … | | |
| | CSI feedback payload Z … | | |
| Case 3: Intermediate KPI | CSI feedback payload X, NW#1-UE | | |
| | … | | |
| | CSI feedback payload X, NW#N-UE | | |
| | CSI feedback payload Y … | | |
| | CSI feedback payload Z … | | |
| FFS other cases | | | |
| FFS others | | | |
Note: “Quantization/dequantization method” includes the description of training awareness (Case 1/2-1/2-2), type of quantization/dequantization (SQ/VQ), etc.
Note: “Input type” means the input of the CSI generation part. “output type” means the output of the CSI reconstruction part.
Working Assumption
The following initial template is considered for companies to report the evaluation results of AI/ML-based CSI compression for sequentially separate training and without generalization/scalability verification
· To be collected before 112bis-e meeting
· FFS whether the intermediate KPI results are gain over benchmark or absolute values
· FFS whether the intermediate KPI results are in forms of linear or dB
· FFS case of multiple layers
Table X. Evaluation results for CSI compression of separate training without model generalization/scalability, [Max rank value]
| | | Source 1 | … |
| Common description | Input type | | |
| | Output type | | |
| | Quantization /dequantization method | | |
| | Shared output of CSI generation part/input of reconstruction part is before or after quantization | | |
| Dataset description | Test/k | | |
| | Ground-truth CSI quantization method | | |
| [Benchmark: NW#1-UE#1 joint training] | UE part AI/ML model backbone/structure | | |
| | Network part AI/ML model backbone/structure | | |
| | Training dataset size | | |
| | ... (other NW-UE combinations for benchmark) | | |
| Case 1-NW first training | NW part AI/ML model backbone/structure | | |
| | UE#1 part model backbone/structure | | |
| | UE#1 part training dataset description and size | | |
| | … | | |
| | UE#M part model backbone/structure | | |
| | UE#M part training dataset description and size | | |
| | [air-interface overhead of information (e.g., dataset) sharing] | | |
| Case 1-UE first training | NW#1 part model backbone/structure | | |
| | NW#1 part training dataset description and size | | |
| | … | | |
| | NW#N part model backbone/structure | | |
| | NW#N part training dataset description and size | | |
| | UE part model backbone/structure | | |
| | [air-interface overhead of information (e.g., dataset) sharing] | | |
| Case 2-UE first training | UE#1 part model backbone/structure | | |
| | … | | |
| | UE#M part model backbone/structure | | |
| | UE part AI/ML model backbone/structure | | |
| | NW part training dataset description and size (e.g., description/size of dataset from M UEs and how to merge) | | |
| Case 3-NW first training | NW#1 part model backbone/structure | | |
| | … | | |
| | NW#N part model backbone/structure | | |
| | UE part model backbone/structure | | |
| | UE part training dataset description and size (e.g., description/size of dataset from N NWs and how to merge) | | |
| Intermediate KPI type (SGCS/NMSE) | | | |
| FFS other cases | | | |
| NW#1-UE#1 joint training: Intermediate KPI | CSI feedback payload X | | |
| | CSI feedback payload Y | | |
| | CSI feedback payload Z | | |
| | … (results for other 1-on-1 NW-UE joint training combinations) | | |
| Case 1-NW first training: Intermediate KPI | CSI feedback payload X, NW-UE#1 | | |
| | … | | |
| | CSI feedback payload X, NW-UE#M | | |
| | CSI feedback payload Y … | | |
| | CSI feedback payload Z … | | |
| Case 1-UE first training: Intermediate KPI | CSI feedback payload X, NW#1-UE | | |
| | … | | |
| | CSI feedback payload X, NW#N-UE | | |
| | CSI feedback payload Y … | | |
| | CSI feedback payload Z … | | |
| Case 2-NW first training: Intermediate KPI | CSI feedback payload X, NW#1-UE | | |
| | … | | |
| | CSI feedback payload X, NW#N-UE | | |
| | CSI feedback payload Y … | | |
| | CSI feedback payload Z … | | |
| Case 3-NW first training: Intermediate KPI | CSI feedback payload X, NW-UE#1 | | |
| | … | | |
| | CSI feedback payload X, NW-UE#M | | |
| | CSI feedback payload Y … | | |
| | CSI feedback payload Z … | | |
| FFS other cases | | | |
| FFS others | | | |
Note: “Quantization/dequantization method” includes the description of training awareness (Case 1/2-1/2-2), type of quantization/dequantization (SQ/VQ), etc.
Note: “Input type” means the input of the CSI generation part. “output type” means the output of the CSI reconstruction part.
Final summary in R1-2301940.
Including potential specification impact.
R1-2300045 Discussion on other aspects of AI/ML for CSI feedback enhancement FUTUREWEI
R1-2300071 Further discussions of AI/ML for CSI feedback enhancement Keysight Technologies UK Ltd, Universidad de Málaga
R1-2300109 Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2300153 Discussion on AI-CSI Ericsson
R1-2300172 Discussion on other aspects for AI CSI feedback enhancement ZTE
R1-2300212 Discussion on other aspects on AI/ML for CSI feedback Spreadtrum Communications
R1-2300281 On sub use cases and other aspects of AI/ML for CSI feedback enhancement OPPO
R1-2300398 On Enhancement of AI/ML based CSI Google
R1-2300445 Other aspects on AI/ML for CSI feedback enhancement vivo
R1-2300531 Other aspects on AI/ML for CSI feedback enhancement LG Electronics
R1-2300568 Discussion on potential specification impact for CSI feedback based on AI/ML xiaomi
R1-2300605 Other aspects on ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2300672 Potential specification impact on AI/ML for CSI feedback enhancement CATT
R1-2300717 Discussion on AI/ML for CSI feedback enhancement China Telecom
R1-2300745 Views on specification impact for CSI feedback enhancement Fujitsu
R1-2300767 Discussion on AI/ML for CSI feedback enhancement NEC
R1-2300842 Discussions on AI-ML for CSI feedback CAICT
R1-2300863 Discussion on AI/ML for CSI feedback enhancement Panasonic
R1-2300869 Considerations on CSI measurement enhancements via AI/ML Sony
R1-2300942 On other aspects on AI/ML for CSI feedback Intel Corporation
R1-2300991 Discussion on other aspects on AI/ML for CSI feedback enhancement CMCC
R1-2301042 Discussion on other aspects on AI/ML for CSI feedback enhancement ETRI
R1-2301098 Joint CSI estimation and compression with AI/ML BJTU
R1-2301157 Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.
R1-2301179 AI and ML for CSI feedback enhancement NVIDIA
R1-2301200 Further aspects of AI/ML for CSI feedback Lenovo
R1-2301224 Discussion on AI/ML for CSI feedback enhancement AT&T
R1-2301256 Representative sub use cases for CSI feedback enhancement Samsung
R1-2301313 Discussion on AI/ML for CSI Feedback Enhancement III
R1-2301338 Discussion on other aspects of AI/ML for CSI enhancement Apple
R1-2301405 Other aspects on AI/ML for CSI feedback enhancement Qualcomm Incorporated
R1-2301486 Discussion on other aspects on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2301588 Other aspects on AI/ML for CSI feedback enhancement MediaTek Inc.
R1-2301665 Discussions on Sub-Use Cases in AI/ML for CSI Feedback Enhancement TCL Communication Ltd.
R1-2301910 Summary #1 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From Monday session
Agreement
In CSI compression using two-sided model use case, further study potential specification impact of the following output-CSI-UE and input-CSI-NW at least for Option 1:
· Option 1: Precoding matrix
o 1a: The precoding matrix in spatial-frequency domain
o 1b: The precoding matrix represented using angular-delay domain projection
· Option 2: Explicit channel matrix (i.e., full Tx * Rx MIMO channel)
o 2a: raw channel is in spatial-frequency domain
o 2b: raw channel is in angular-delay domain
· Note: Whether Option 2 is also studied depends on the performance evaluations in 9.2.2.1.
· Note: RI and CQI will be discussed separately
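As background for Option 1b in the list above, one commonly used formulation (a sketch under assumed dimensions; it is not a specification of the model input) projects the spatial-frequency precoding matrix onto the angular-delay domain via unitary DFT bases, which tends to concentrate the precoder energy into a few coefficients.
```python
import numpy as np

# Hypothetical dimensions: 32 Tx ports, 13 subbands, one layer.
n_tx, n_sb = 32, 13
F_tx = np.fft.fft(np.eye(n_tx)) / np.sqrt(n_tx)   # unitary spatial (angular) DFT basis
F_sb = np.fft.fft(np.eye(n_sb)) / np.sqrt(n_sb)   # unitary frequency (delay) DFT basis

def to_angular_delay(W_sf):
    """Project a spatial-frequency precoder W_sf (n_tx x n_sb, one layer) onto the
    angular-delay domain: C = F_tx^H W_sf F_sb, so that W_sf = F_tx C F_sb^H."""
    return F_tx.conj().T @ W_sf @ F_sb

rng = np.random.default_rng(1)
W = rng.standard_normal((n_tx, n_sb)) + 1j * rng.standard_normal((n_tx, n_sb))
C = to_angular_delay(W)
W_back = F_tx @ C @ F_sb.conj().T                  # exact inverse, since the bases are unitary
print("reconstruction error:", np.linalg.norm(W - W_back) / np.linalg.norm(W))
```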
R1-2301911 Summary #2 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From Tuesday session
Agreement
In CSI compression using two-sided model use case, further study the following options for CQI determination in CSI report, if CQI in CSI report is configured.
Conclusion
In CSI compression using two-sided model use case, further discuss the pros/cons of different offline training collaboration types including at least the following aspects:
· Whether model can be kept proprietary
· Requirements on privacy-sensitive dataset sharing
· Flexibility to support cell/site/scenario/configuration specific model
· gNB/device specific optimization – i.e., whether hardware-specific optimization of the model is possible, e.g. compilation for the specific hardware
· Model update flexibility after deployment
· feasibility of allowing UE side and NW side to develop/update models separately
· Model performance based on evaluation in 9.2.2.1
· Whether gNB can maintain/store a single/unified model
· Whether UE device can maintain/store a single/unified model
· Extendability: to train new UE-side model compatible with NW-side model in use; Or to train new NW-side model compatible with UE-side model in use
· Whether training data distribution can be matched to the device that will use the model for inference
· Whether device capability can be considered for model development
· Other aspects are not precluded
· Note: training data collection and dataset/model delivery will be discussed separately
R1-2301912 Summary #3 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From Wednesday session
Agreement
R1-2301913 Summary #4 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From Friday session
Agreement
In CSI compression using two-sided model use case, further study the following aspects for CSI configuration and report:
Agreement
In CSI compression using two-sided model use case, further study the feasibility and methods to support the legacy CSI reporting principles including at least:
Agreement
In CSI compression using two-sided model use case, further study the necessity, feasibility, and potential specification impact for intermediate KPIs based monitoring including at least:
· UE-side monitoring based on the output of the CSI reconstruction model, subject to the aligned format, associated to the CSI report, indicated by the NW or obtained from the network side.
o Network may configure a threshold criterion to facilitate UE to perform model monitoring.
· UE-side monitoring based on the output of the CSI reconstruction model at the UE-side
o Note: CSI reconstruction model at the UE-side can be the same or different comparing to the actual CSI reconstruction model used at the NW-side.
o Network may configure a threshold criterion to facilitate UE to perform model monitoring.
· FFS: Other solutions, e.g., UE-side uses a model that directly outputs intermediate KPI. Network-side monitoring based on target CSI measured via SRS from the UE.
Note: Monitoring approaches not based on intermediate KPI are not precluded
Note: the study of intermediate KPIs based monitoring should take into account the monitoring reliability (accuracy), overhead, complexity, and latency.
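As an illustration of the threshold-based intermediate-KPI monitoring listed above (hedged: the KPI choice, averaging window and threshold value are all hypothetical and would be up to configuration), a UE-side monitor could track a windowed average of SGCS between the target CSI and the output of a (possibly different) CSI reconstruction model and flag an event when it falls below a network-configured threshold.
```python
from collections import deque

class IntermediateKpiMonitor:
    """Sliding-window monitor on an intermediate KPI (e.g., SGCS in [0, 1])."""
    def __init__(self, threshold, window=100):
        self.threshold = threshold            # e.g., criterion configured by the network
        self.samples = deque(maxlen=window)

    def update(self, kpi_value):
        self.samples.append(kpi_value)
        avg = sum(self.samples) / len(self.samples)
        return avg < self.threshold           # True -> candidate monitoring event/report

monitor = IntermediateKpiMonitor(threshold=0.8, window=50)   # hypothetical values
for sgcs in [0.92, 0.90, 0.75, 0.70, 0.65]:                  # per-report SGCS estimates
    if monitor.update(sgcs):
        print("intermediate KPI below threshold -> candidate monitoring event")
```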
Including evaluation methodology, KPI, and performance evaluation results.
R1-2300046 Discussion and evaluation of AI/ML for beam management FUTUREWEI
R1-2300110 Evaluation on AI/ML for beam management Huawei, HiSilicon
R1-2300173 Evaluation on AI beam management ZTE
R1-2300179 Evaluations of AIML for beam management Ericsson
R1-2300213 Evaluation on AI/ML for beam management Spreadtrum Communications
R1-2300282 Evaluation methodology and results on AI/ML for beam management OPPO
R1-2300399 On Evaluation of AI/ML based Beam Management Google
R1-2300446 Evaluation on AI/ML for beam management vivo
R1-2300532 Evaluation on AI/ML for beam management LG Electronics
R1-2300569 Evaluation on AI/ML for beam management xiaomi
R1-2300593 Discussion for evaluation on AI/ML for beam management InterDigital, Inc.
R1-2300606 Evaluation of ML for beam management Nokia, Nokia Shanghai Bell
R1-2300673 Evaluation on AI/ML for beam management CATT
R1-2300718 Evaluation on AI/ML for beam management China Telecom
R1-2300746 Evaluation on AI/ML for beam management Fujitsu
R1-2300843 Some discussions on evaluation on AI-ML for Beam management CAICT
R1-2300943 Evaluations for AI/ML beam management Intel Corporation
R1-2300992 Discussion on evaluation on AI/ML for beam management CMCC
R1-2301180 Evaluation of AI and ML for beam management NVIDIA
R1-2301201 Evaluation on AI/ML for beam management Lenovo
R1-2301257 Evaluation on AI/ML for Beam management Samsung
R1-2301339 Evaluation for AI/ML based beam management enhancements Apple
R1-2301406 Evaluation on AI/ML for beam management Qualcomm Incorporated
R1-2301487 Discussion on evaluation on AI/ML for beam management NTT DOCOMO, INC.
R1-2301589 Evaluation on AI/ML for beam management MediaTek Inc.
R1-2301689 Evaluation on AI/ML for beam management CEWiT
R1-2301956 Feature lead summary #1 evaluation of AI/ML for beam management Moderator (Samsung)
From Monday session
Agreement
Agreement
R1-2301957 Feature lead summary #2 evaluation of AI/ML for beam management Moderator (Samsung)
From Tuesday session
Agreement
o Option A (baseline): the Top-1 genie-aided Tx beam is the Tx beam that results in the largest L1-RSRP over all Tx and Rx beams
o Option B(optional), the Top-1 genie-aided Tx beam is the Tx beam that results in the largest L1-RSRP over all Tx beams with specific Rx beam(s)
§ FFS on specific Rx beam(s)
§ Note: specific Rx beams are subset of all Rx beams
Agreement
· For AI/ML models, which provide L1-RSRP as the model output, to evaluate the accuracy of predicted L1-RSRP, companies optionally report average (absolute value)/CDF of the predicted L1-RSRP difference, where the predicted L1-RSRP difference is defined as:
o The difference between the predicted L1-RSRP of Top-1[/K] predicted beam and the ideal L1-RSRP of the same beam.
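To make the two beam-prediction KPIs above concrete, the minimal sketch below (hypothetical RSRP values; shapes and the per-Tx-beam "ideal" definition are illustrative assumptions) selects the Top-1 genie-aided Tx beam per Option A, i.e. the Tx beam with the largest L1-RSRP over all Tx and Rx beams, and computes the predicted L1-RSRP difference for the Top-1 predicted beam.
```python
import numpy as np

rng = np.random.default_rng(2)
n_tx, n_rx = 32, 8
ideal_rsrp = rng.uniform(-110, -70, size=(n_tx, n_rx))   # ideal L1-RSRP per (Tx, Rx) beam pair, dBm

# Option A: Top-1 genie-aided Tx beam = Tx beam with the largest L1-RSRP over all Tx and Rx beams.
genie_tx = np.unravel_index(np.argmax(ideal_rsrp), ideal_rsrp.shape)[0]

# Hypothetical model output: predicted L1-RSRP per Tx beam (here taken as best over Rx beams, plus noise).
predicted_rsrp = ideal_rsrp.max(axis=1) + rng.normal(0, 1.5, size=n_tx)
top1_pred_tx = int(np.argmax(predicted_rsrp))

# Predicted L1-RSRP difference: predicted L1-RSRP of the Top-1 predicted beam
# minus the ideal L1-RSRP of the same beam (same per-Tx-beam definition assumed).
rsrp_diff = predicted_rsrp[top1_pred_tx] - ideal_rsrp[top1_pred_tx].max()
print("Top-1 genie-aided Tx beam:", genie_tx)
print("Top-1 predicted Tx beam:", top1_pred_tx, "| abs L1-RSRP diff [dB]:", abs(rsrp_diff))
```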
R1-2301958 Feature lead summary #3 evaluation of AI/ML for beam management Moderator (Samsung)
From Thursday session
Agreement
Agreement
· Additionally study the following option on the selection of Set B of beams (pairs) (for Option 2: Set B is variable)
Final summary in R1-2301959.
Including potential specification impact.
R1-2300047 Discussion on other aspects of AI/ML for beam management FUTUREWEI
R1-2300111 Discussion on AI/ML for beam management Huawei, HiSilicon
R1-2300174 Discussion on other aspects for AI beam management ZTE
R1-2300180 Discussion on AIML for beam management Ericsson
R1-2300195 Discussion on other aspects of AI/ML beam management New H3C Technologies Co., Ltd.
R1-2300214 Other aspects on AI/ML for beam management Spreadtrum Communications
R1-2300283 Other aspects of AI/ML for beam management OPPO
R1-2300400 On Enhancement of AI/ML based Beam Management Google
R1-2300447 Other aspects on AI/ML for beam management vivo
R1-2300533 Other aspects on AI/ML for beam management LG Electronics
R1-2300570 Potential specification impact on AI/ML for beam management xiaomi
R1-2300594 Discussion for other aspects on AI/ML for beam management InterDigital, Inc.
R1-2300607 Other aspects on ML for beam management Nokia, Nokia Shanghai Bell
R1-2300674 Potential specification impact on AI/ML for beam management CATT
R1-2300747 Sub use cases and specification impact on AI/ML for beam management Fujitsu
R1-2300824 Discussion on AI/ML for beam management NEC
R1-2300844 Discussions on AI-ML for Beam management CAICT
R1-2300870 Consideration on AI/ML for beam management Sony
R1-2300944 Other aspects on AI/ML for beam management Intel Corporation
R1-2300993 Discussion on other aspects on AI/ML for beam management CMCC
R1-2301043 Discussion on other aspects on AI/ML for beam management ETRI
R1-2301181 AI and ML for beam management NVIDIA
R1-2301197 Discussion on AI/ML for beam management Panasonic
R1-2301202 Further aspects of AI/ML for beam management Lenovo
R1-2301258 Representative sub use cases for beam management Samsung
R1-2301340 Discussion on other aspects of AI/ML for beam management Apple
R1-2301407 Other aspects on AI/ML for beam management Qualcomm Incorporated
R1-2301488 Discussion on other aspects on AI/ML for beam management NTT DOCOMO, INC.
R1-2301539 Discussion on other aspects on AI/ML for beam management KT Corp.
R1-2301590 Other aspects on AI/ML for beam management MediaTek Inc.
R1-2301685 Discussions on Sub-Use Cases in AI/ML for Beam Management TCL Communication Ltd.
R1-2301894 Summary#1 for other aspects on AI/ML for beam management Moderator (OPPO)
From Monday session
Conclusion
For the sub use case BM-Case1 and BM-Case2, “Alt.2: DL Rx beam prediction” is deprioritized.
Agreement
Regarding the performance metric(s) of AI/ML model monitoring for BM-Case1 and BM-Case2, study the following alternatives (including feasibility/necessity) with potential down-selection:
R1-2301895 Summary#2 for other aspects on AI/ML for beam management Moderator (OPPO)
From Tuesday session
Conclusion
Regarding the explicit assistance information from UE to network for NW-side AI/ML model, RAN1 has no consensus to support the following information
· UE location
· UE moving direction
· UE Rx beam shape/direction
R1-2301896 Summary#3 for other aspects on AI/ML for beam management Moderator (OPPO)
From Thursday session
Agreement
For BM-Case1 and BM-Case2 with a UE-side AI/ML model, study the necessity, feasibility and the potential specification impact (if needed) of the following information reported from UE to network:
Agreement
For BM-Case1 and BM-Case2 with a UE-side AI/ML model, study potential specification impact of AI model inference from the following additional aspects on top of previous agreements:
Conclusion
Regarding the explicit assistance information from network to UE for UE-side AI/ML model, RAN1 has no consensus to support the following information
Agreement
For BM-Case1 and BM-Case2 with a UE-side AI/ML model, regarding NW-side performance monitoring, study the following aspects as a starting point including the study of necessity:
R1-2301897 Summary#4 for other aspects on AI/ML for beam management Moderator (OPPO)
From Friday session
Agreement
For BM-Case1 and BM-Case2 with a UE-side AI/ML model, regarding UE-side performance monitoring, study the following aspects as a starting point including the study of necessity and feasibility:
· Indication/request/report from UE to gNB for performance monitoring
o Note: The indication/request/report may be not needed in some case(s)
· Configuration/Signaling from gNB to UE for performance monitoring
· Other aspect(s) is not precluded
Including evaluation methodology, KPI, and performance evaluation results.
R1-2300112 Evaluation on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
R1-2300141 Evaluation of AI/ML for Positioning Accuracy Enhancement Ericsson Inc.
R1-2300175 Evaluation on AI positioning enhancement ZTE
R1-2300284 Evaluation methodology and results on AI/ML for positioning accuracy enhancement OPPO
R1-2300401 On Evaluation of AI/ML based Positioning Google
R1-2300448 Evaluation on AI/ML for positioning accuracy enhancement vivo
R1-2300534 Evaluation on AI/ML for positioning accuracy enhancement LG Electronics
R1-2300571 Evaluation on AI/ML for positioning accuracy enhancement xiaomi
R1-2300608 Evaluation of ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2300675 Evaluation on AI/ML for positioning enhancement CATT
R1-2300719 Evaluation on AI/ML for positioning accuracy enhancement China Telecom
R1-2300748 Discussions on evaluation results of AIML positioning accuracy enhancement Fujitsu
R1-2300845 Some discussions on evaluation on AI-ML for positioning accuracy enhancement CAICT
R1-2300994 Discussion on evaluation on AI/ML for positioning accuracy enhancement CMCC
R1-2301101 Evaluation on AI/ML for positioning accuracy enhancement InterDigital, Inc.
R1-2301182 Evaluation of AI and ML for positioning enhancement NVIDIA
R1-2301203 Discussion on AI/ML Positioning Evaluations Lenovo
R1-2301259 Evaluation on AI/ML for Positioning Samsung
R1-2301341 Evaluation on AI/ML for positioning accuracy enhancement Apple
R1-2301408 Evaluation on AI/ML for positioning accuracy enhancement Qualcomm Incorporated
R1-2301591 Evaluation of AIML for Positioning Accuracy Enhancement MediaTek Inc.
R1-2301806 Evaluation on AI/ML for Positioning Accuracy Enhancement CEWiT (rev of R1-2301690)
R1-2301946 Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Monday session
Agreement
For both direct AI/ML positioning and AI/ML assisted positioning, companies include the evaluation area in their reporting template, assuming the same evaluation area is used for training dataset and test dataset.
Note:
· Baseline evaluation area for InF-DH = 120x60 m.
· if different evaluation areas are used for training dataset and test dataset, they are marked out separately under “Train” and “Test” instead.
Table X. Evaluation results for AI/ML model deployed on [UE or network]-side, [with or without] model generalization, [short model description], UE distribution area = [e.g., 120x60 m, 100x40 m]
| Model input | Model output | Label | Clutter param | Dataset size | | AI/ML complexity | | Horizontal positioning accuracy at CDF=90% (meters) |
| | | | | Train | Test | Model complexity | Computation complexity | AI/ML |
| | | | | | | | | |
Table X. Evaluation results for AI/ML model deployed on [UE or network]-side, [short model description], UE distribution area = [e.g., 120x60 m, 100x40 m]
| Model input | Model output | Label | Settings (e.g., drops, clutter param, mix) | | Dataset size | | AI/ML complexity | | Horizontal pos. accuracy at CDF=90% (m) |
| | | | Train | Test | Train | Test | Model complexity | Computation complexity | AI/ML |
| | | | | | | | | | |
Agreement
The agreement made in RAN1#110 AI 9.2.4.1 is updated by adding additional note:
Note: if complex values are used in the modelling process, the number of model parameters is doubled, which is also applicable to the other agenda items (AIs) of AI/ML.
R1-2301947 Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Tuesday session
Agreement
For both the direct AI/ML positioning and AI/ML assisted positioning, study the model input, considering the tradeoff among model performance, model complexity and computational complexity.
· The type of information to use as model input. The candidates include at least: time-domain CIR, PDP.
· The dimension of model input in terms of NTRP, Nt, and Nt’.
· Note: For the direct AI/ML positioning, model input size has impact to signaling overhead for model inference.
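As a rough illustration of the model-input trade-off listed above (a sketch under assumed meanings of the dimensions, namely NTRP TRPs, Nt time-domain samples per TRP and Nt' retained samples; these interpretations are assumptions for illustration only), the snippet below converts a complex time-domain CIR into a PDP and keeps the Nt' strongest taps per TRP, reducing model input size at the cost of discarding phase information.
```python
import numpy as np

def cir_to_truncated_pdp(cir, n_keep):
    """cir: complex array of shape (n_trp, n_t) with time-domain channel impulse responses.
    Returns a real PDP of shape (n_trp, n_keep) keeping the n_keep strongest taps per TRP."""
    pdp = np.abs(cir) ** 2                                   # power delay profile (phase dropped)
    idx = np.argsort(pdp, axis=-1)[:, ::-1][:, :n_keep]      # indices of strongest taps per TRP
    return np.take_along_axis(pdp, idx, axis=-1)

# Hypothetical example: 18 TRPs, 256 time-domain samples, keep Nt' = 64 taps.
rng = np.random.default_rng(3)
cir = rng.standard_normal((18, 256)) + 1j * rng.standard_normal((18, 256))
model_input = cir_to_truncated_pdp(cir, n_keep=64)
print("model input shape (NTRP, Nt'):", model_input.shape)
```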
Agreement
For direct AI/ML positioning, study the performance of model monitoring methods, including:
· Label based methods, where ground truth label (or its approximation) is provided for monitoring the accuracy of model output.
· Label-free methods, where model monitoring does not require ground truth label (or its approximation).
Agreement
For AI/ML assisted approach, study the performance of label-free model monitoring methods, which do not require ground truth label (or its approximation) for model monitoring.
R1-2301948 Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2301949 Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2302169 Summary #5 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From Thursday session
Conclusion
· No dedicated evaluation is needed for the positioning accuracy performance of model switching
· It does not preclude future discussion on model switching related performance
Agreement
For direct AI/ML positioning, study the impact of labelling error to positioning accuracy
· The ground truth label error in each dimension of x-axis and y-axis can be modeled as a truncated Gaussian distribution with zero mean and standard deviation of L meters, with truncation of the distribution to the [-2*L, 2*L] range.
o Value L is up to sources.
· Other models are not precluded
· [Whether/how to study the impact of labelling error to label-based model monitoring methods]
· [Whether/how to study the impact of labelling error for AI/ML assisted positioning.]
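A minimal sketch of the labelling-error model agreed above (rejection sampling of a zero-mean Gaussian with standard deviation L meters, truncated to [-2*L, 2*L], applied independently to the x and y dimensions; the value of L and the UE drop area are illustrative assumptions):
```python
import numpy as np

def truncated_gaussian(n, sigma, rng):
    """Zero-mean Gaussian with std sigma, truncated to [-2*sigma, 2*sigma], via rejection sampling."""
    out = np.empty(0)
    while out.size < n:
        s = rng.normal(0.0, sigma, size=2 * n)
        out = np.concatenate([out, s[np.abs(s) <= 2 * sigma]])
    return out[:n]

rng = np.random.default_rng(4)
L = 1.0                                                      # label error std in meters (up to sources)
n_ue = 10000
true_xy = rng.uniform([0, 0], [120, 60], size=(n_ue, 2))     # hypothetical UE drops in a 120x60 m area
err_x = truncated_gaussian(n_ue, L, rng)
err_y = truncated_gaussian(n_ue, L, rng)
noisy_labels = true_xy + np.stack([err_x, err_y], axis=-1)
print("max |label error| [m]:", float(np.abs(noisy_labels - true_xy).max()))  # bounded by 2*L
```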
Observation
Evaluation of the following generalization aspects show that the positioning accuracy of direct AI/ML positioning deteriorates when the AI/ML model is trained with dataset of one deployment scenario, while tested with dataset of a different deployment scenario.
Note: ideal model training and switching may provide the upper bound of achievable performance when the AI/ML model needs to handle different deployment scenarios.
Final summary in R1-2302170.
Including potential specification impact.
R1-2300113 Discussion on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
R1-2300142 Other Aspects of AI/ML Based Positioning Enhancement Ericsson Inc.
R1-2300176 Discussion on other aspects for AI positioning enhancement ZTE
R1-2300215 Discussion on other aspects on AI/ML for positioning accuracy enhancement Spreadtrum Communications
R1-2300285 On sub use cases and other aspects of AI/ML for positioning accuracy enhancement OPPO
R1-2300402 On Enhancement of AI/ML based Positioning Google
R1-2300449 Other aspects on AI/ML for positioning accuracy enhancement vivo
R1-2300535 Other aspects on AI/ML for positioning accuracy enhancement LG Electronics
R1-2300572 Views on the other aspects of AI/ML-based positioning accuracy enhancement xiaomi
R1-2300602 Other aspects on AI-ML for positioning accuracy enhancement Baicells
R1-2300609 Other aspects on ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2300676 Potential specification impact on AI/ML for positioning enhancement CATT
R1-2300749 Discussions on specification impacts for AIML positioning accuracy enhancement Fujitsu
R1-2300831 Discussion on AI/ML for positioning accuracy enhancement NEC
R1-2300846 Discussions on AI-ML for positioning accuracy enhancement CAICT
R1-2300871 On Other Aspects on AI/ML for Positioning Accuracy Enhancement Sony
R1-2300995 Discussion on other aspects on AI/ML for positioning accuracy enhancement CMCC
R1-2301115 Designs and potential specification impacts of AIML for positioning InterDigital, Inc.
R1-2301140 On potential AI/ML solutions for positioning Fraunhofer IIS, Fraunhofer HHI
R1-2301183 AI and ML for positioning enhancement NVIDIA
R1-2301204 AI/ML Positioning use cases and associated Impacts Lenovo
R1-2301260 Representative sub use cases for Positioning Samsung
R1-2301342 On Other aspects on AI/ML for positioning accuracy enhancement Apple
R1-2301409 Other aspects on AI/ML for positioning accuracy enhancement Qualcomm Incorporated
R1-2301489 Discussion on other aspects on AI/ML for positioning accuracy enhancement NTT DOCOMO, INC.
R1-2301592 Other Aspects on AI ML Based Positioning Enhancement MediaTek Inc.
R1-2301667 Contributions on AI/ML based Positioning Accuracy Enhancement Indian Institute of Tech (M), CEWiT, IIT Kanpur
R1-2301847 FL summary #1 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
From Monday session
Agreement
Regarding training data generation for AI/ML based positioning,
R1-2301996 FL summary #2 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
From Tuesday session
Agreement
Regarding training data collection for AI/ML based positioning, study benefit(s) and potential specification impact (including necessity) at least for the following aspects
R1-2302019 FL summary #3 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
From Thursday session
Agreement
Regarding AI/ML model monitoring for AI/ML based positioning, to study and provide inputs on benefit(s), feasibility, necessity and potential specification impact for the following aspects
Agreement
Regarding AI/ML model inference, to study the potential specification impact (including the feasibility, and the necessity of specifying AI/ML model input and/or output) at least for the following aspects for AI/ML based positioning accuracy enhancement
Note: Companies are encouraged to report their assumption of functionality and their assumption of information element(s) of AI/ML functionality identification for AI/ML based positioning with UE-side model (Case 1 and 2a).
Please refer to RP-221348 for detailed scope of the SI.
R1-2304168 Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface) Ad-hoc Chair (CMCC)
R1-2303580 Technical report for Rel-18 SI on AI and ML for NR air interface Qualcomm Incorporated
R1-2304148 TR38.843 v0.1.0: Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface Rapporteur (Qualcomm)
Note: This TR for the SI on AI/ML for NR air interface captures all the RAN1 agreements made up to RAN1#112. It is not formally endorsed and is provided for RAN1 review and comments. A new version of the TR, capturing the agreements from this meeting, is to be prepared as input to RAN1#113.
Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.
R1-2302318 Discussion on common AI/ML characteristics and operations FUTUREWEI
R1-2302357 Discussion on general aspects of AI/ML framework Huawei, HiSilicon
R1-2302436 Discussion on general aspects of common AI PHY framework ZTE
R1-2302476 Discussions on AI/ML framework vivo
R1-2302539 On general aspects of AI/ML framework OPPO
R1-2302592 Discussion on general aspects of AIML framework Spreadtrum Communications
R1-2302627 Further discussion on the general aspects of ML for Air-interface Nokia, Nokia Shanghai Bell
R1-2302694 Discussion on AI/ML framework for NR air interface CATT
R1-2302789 General aspects of AI/ML framework for NR air interface Intel Corporation
R1-2302821 Discussion on general aspects of AI/ML framework InterDigital, Inc.
R1-2302841 Considerations on common AI/ML framework Sony
R1-2302877 Discussion on general aspects of AIML framework Ericsson
R1-2302903 Discussion on general aspects of AI/ML framework Fujitsu
R1-2302974 Views on the general aspects of AI/ML framework xiaomi
R1-2303041 Discussion on general aspects of AI/ML framework Panasonic
R1-2303049 On General Aspects of AI/ML Framework Google
R1-2303075 General aspects on AI/ML framework LG Electronics
R1-2303119 General aspects of AI ML framework and evaluation methodogy Samsung
R1-2303182 Considerations on general aspects on AI-ML framework CAICT
R1-2303193 Discussion on general aspects of AI/ML framework for NR air interface ETRI
R1-2303223 Discussion on general aspects of AI/ML framework CMCC
R1-2303335 Discussion on general aspects of AI/ML LCM MediaTek Inc.
R1-2303412 General aspects of AI/ML framework Fraunhofer IIS, Fraunhofer HHI
R1-2303434 General aspects of AI and ML framework for NR air interface NVIDIA
R1-2303474 Discussion on general aspect of AI/ML framework Apple
R1-2303523 General aspects of AI/ML framework Lenovo
R1-2303581 General aspects of AI/ML framework Qualcomm Incorporated
R1-2303630 Discussion on general aspects of AI/ML framework KDDI Corporation
R1-2303648 Discussion on AI/ML framework Rakuten Mobile, Inc
R1-2303649 General Aspects of AI/ML framework AT&T
R1-2303668 Discussion on general aspects of AI ML framework NEC
R1-2303704 Discussion on general aspects of AI/ML framework NTT DOCOMO, INC.
R1-2303809 Discussions on Common Aspects of AI/ML Framework TCL Communication Ltd.
[112bis-e-R18-AI/ML-01] – Taesang (Qualcomm)
Email discussion on general aspects of AI/ML by April 26th
- Check points: April 21, April 26
R1-2304049 Summary#1 of General Aspects of AI/ML Framework Moderator (Qualcomm)
Presented in April 18th GTW session.
R1-2304050 Summary#2 of General Aspects of AI/ML Framework Moderator (Qualcomm)
From April 21st GTW session
Agreement
· For AI/ML functionality identification and functionality-based LCM of UE-side models and/or UE-part of two-sided models:
o Functionality refers to an AI/ML-enabled Feature/FG enabled by configuration(s), where configuration(s) is(are) supported based on conditions indicated by UE capability.
o Correspondingly, functionality-based LCM operates based on, at least, one configuration of AI/ML-enabled Feature/FG or specific configurations of an AI/ML-enabled Feature/FG.
§ FFS: Signaling to support functionality-based LCM operations, e.g., to activate/deactivate/fallback/switch AI/ML functionalities
§ FFS: Whether/how to address additional conditions (e.g., scenarios, sites, and datasets) to aid UE-side transparent model operations (without model identification) at the Functionality level
§ FFS: Other aspects that may constitute Functionality
o FFS: which aspects should be specified as conditions of a Feature/FG available for functionality will be discussed in each sub-use-case agenda.
· For AI/ML model identification and model-ID-based LCM of UE-side models and/or UE-part of two-sided models:
o model-ID-based LCM operates based on identified models, where a model may be associated with specific configurations/conditions associated with UE capability of an AI/ML-enabled Feature/FG and additional conditions (e.g., scenarios, sites, and datasets) as determined/identified between UE-side and NW-side.
o FFS: Which aspects should be considered as additional conditions, and how to include them into model description information during model identification will be discussed in each sub-use-case agenda.
o FFS: Relationship between functionality and model, e.g., whether a model may be identified referring to functionality(s).
o FFS: relationship between functionality-based LCM and model-ID-based LCM
· Note: Applicability of functionality-based LCM and model-ID-based LCM is a separate discussion.
R1-2304051 Summary#3 of General Aspects of AI/ML Framework Moderator (Qualcomm)
From April 25th GTW session
Conclusion
From RAN1 perspective, it is clarified that an AI/ML model identified by a model ID may be logical, and how it maps to physical AI/ML model(s) may be up to implementation.
· When distinction is necessary for discussion purposes, companies may use the term a logical AI/ML model to refer to a model that is identified and assigned a model ID, and physical AI/ML model(s) to refer to an actual implementation of such a model.
R1-2304052 Summary#4 of General Aspects of AI/ML Framework Moderator (Qualcomm)
From April 26th GTW session
Agreement
· Study necessity, mechanisms, after functionality identification, for UE to report updates on applicable functionality(es) among [configured/identified] functionality(es), where the applicable functionalities may be a subset of all [configured/identified] functionalities.
· Study necessity, mechanisms, after model identification, for UE to report updates on applicable UE part/UE-side model(s), where the applicable models may be a subset of all identified models.
Decision: As per email decision posted on April 28th,
Working Assumption
The definition of ‘AI/ML model transfer’ is revised (marked in red) as follows:
AI/ML model transfer | Delivery of an AI/ML model over the air interface in a manner that is not transparent to 3GPP signaling, either parameters of a model structure known at the receiving end or a new model with parameters. Delivery may contain a full model or a partial model.
Working Assumption
Model selection | The process of selecting an AI/ML model for activation among multiple models for the same AI/ML enabled feature. Note: Model selection may or may not be carried out simultaneously with model activation.
Final summary in R1-2304054.
Including evaluation methodology, KPI, and performance evaluation results.
R1-2302319 Discussion and evaluation of AI/ML for CSI feedback enhancement FUTUREWEI
R1-2302358 Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2302437 Evaluation on AI CSI feedback enhancement ZTE
R1-2302477 Evaluation on AI/ML for CSI feedback enhancement vivo
R1-2302540 Evaluation methodology and results on AI/ML for CSI feedback enhancement OPPO
R1-2302593 Discussion on evaluation on AIML for CSI feedback enhancement Spreadtrum Communications, BUPT
R1-2302628 Evaluation of ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2302637 Evaluation of AI/ML based methods for CSI feedback enhancement Fraunhofer IIS
R1-2302695 Evaluation on AI/ML-based CSI feedback enhancement CATT
R1-2302790 Evaluation for CSI feedback enhancements Intel Corporation
R1-2302822 Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.
R1-2302904 Evaluation on AI/ML for CSI feedback enhancement Fujitsu
R1-2302918 Evaluations of AI-CSI Ericsson
R1-2302975 Discussion on evaluation on AI/ML for CSI feedback enhancement xiaomi
R1-2303050 On Evaluation of AI/ML based CSI Google
R1-2303076 Evaluation on AI/ML for CSI feedback enhancement LG Electronics
R1-2303087 Evaluation on AI for CSI feedback enhancement Mavenir
R1-2303120 Evaluation on AI ML for CSI feedback enhancement Samsung
R1-2303174 Evaluation of AI and ML for CSI feedback enhancement Comba
R1-2303183 Some discussions on evaluation on AI-ML for CSI feedback CAICT
R1-2303194 Evaluation on AI/ML for CSI feedback enhancement ETRI
R1-2303224 Discussion on evaluation on AI/ML for CSI feedback enhancement CMCC
R1-2303336 Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.
R1-2303435 Evaluation of AI and ML for CSI feedback enhancement NVIDIA
R1-2303475 Evaluation for AI/ML based CSI feedback enhancement Apple
R1-2303524 Evaluation on AI/ML for CSI feedback Lenovo
R1-2303582 Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated
R1-2303654 Discussion on AI/ML for CSI feedback enhancement AT&T
R1-2303705 Discussion on evaluation on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2303776 Evaluation on AI/ML for CSI feedback enhancement Indian Institute of Tech (H)
[112bis-e-R18-AI/ML-02] – Yuan (Huawei)
Email discussion on evaluation on CSI feedback enhancement by April 26th
- Check points: April 21, April 26
R1-2303988 Summary#1 for [112bis-e-R18-AIML-02] Moderator (Huawei)
From April 18th GTW session
Agreement
For the rank >1 options under AI/ML-based CSI compression, for a given configured Max rank = K, the complexity in FLOPs is reported as the maximum FLOPs over all ranks, where each rank includes the summation of FLOPs for inference per layer, if applicable, e.g.,
· Option 1-1 (rank specific): Max FLOPs over K rank specific models.
· Option 1-2 (rank common): FLOPs of the rank common model.
· Option 2-1 (layer specific and rank common): Sum of the FLOPs of K models (for the rank=K).
· Option 2-2 (layer specific and rank specific): Max of the FLOPs over K ranks, k=1,…K, each with a sum of k models.
· Option 3-1 (layer common and rank common): K * FLOPs of the common model.
· Option 3-2 (layer common and rank specific): Max of the FLOPs over K ranks, k=1,…K, each with k * FLOPs of the layer common model.
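For illustration only (not part of the agreement), a minimal sketch of how the reported FLOPs number could be derived under each option; the model structures and FLOPs counts below are hypothetical placeholders.
```python
# Illustrative sketch of the rank>1 FLOPs reporting options above.
# All per-model FLOPs values are hypothetical placeholders.
K = 4  # configured Max rank

flops_rank_specific = {1: 10e6, 2: 12e6, 3: 14e6, 4: 16e6}  # one model per rank
flops_rank_common = 15e6                                     # single rank-common model
flops_layer_specific = [10e6, 11e6, 12e6, 13e6]              # one model per layer
flops_layer_common = 12e6                                    # single layer-common model

# Option 1-1 (rank specific): max FLOPs over the K rank-specific models
opt_1_1 = max(flops_rank_specific.values())
# Option 1-2 (rank common): FLOPs of the rank-common model
opt_1_2 = flops_rank_common
# Option 2-1 (layer specific, rank common): sum of the FLOPs of the K per-layer models
opt_2_1 = sum(flops_layer_specific)
# Option 2-2 (layer specific, rank specific): max over ranks k=1..K of the sum of k per-layer models
opt_2_2 = max(sum(flops_layer_specific[:k]) for k in range(1, K + 1))
# Option 3-1 (layer common, rank common): K * FLOPs of the common model
opt_3_1 = K * flops_layer_common
# Option 3-2 (layer common, rank specific): max over ranks k=1..K of k * FLOPs of the layer-common model
opt_3_2 = max(k * flops_layer_common for k in range(1, K + 1))
```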
Agreement
For the rank >1 options under AI/ML-based CSI compression, the memory storage/number of parameters is reported as the summation of memory storage/number of parameters over all models potentially used for any layer/rank, e.g.,
· Option 1-1 (rank specific)/Option 3-2 (layer common and rank specific): Sum of memory storage/number of parameters over all rank specific models.
· Option 1-2 (rank common): A single memory storage/number of parameters for the rank common model.
· Option 2-1 (layer specific and rank common): Sum of memory storage/number of parameters over all layer specific models.
· Option 2-2 (layer specific and rank specific): Sum of memory storage/number of parameters for the specific models over all ranks and all layers per rank.
· Option 3-1 (layer common and rank common): A single memory storage/number of parameters for the common model.
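Analogously, a hypothetical sketch of the storage/parameter reporting options above; all parameter counts and model groupings are placeholders, not agreed values.
```python
# Hypothetical per-model parameter counts for the rank>1 storage reporting options.
params_per_rank = {1: 1.0e6, 2: 1.2e6, 3: 1.4e6, 4: 1.6e6}        # rank-specific models
params_per_layer = {1: 0.9e6, 2: 1.0e6, 3: 1.1e6, 4: 1.2e6}       # layer-specific models
params_per_rank_layer = {(k, l): 1.0e6                             # (rank, layer)-specific models
                         for k in range(1, 5) for l in range(1, k + 1)}
params_common = 1.3e6                                              # single common model

reported = {
    "Option 1-1 / 3-2 (rank specific)": sum(params_per_rank.values()),
    "Option 1-2 (rank common)": params_common,
    "Option 2-1 (layer specific, rank common)": sum(params_per_layer.values()),
    "Option 2-2 (layer and rank specific)": sum(params_per_rank_layer.values()),
    "Option 3-1 (layer and rank common)": params_common,
}
```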
R1-2303989 Summary#2 for [112bis-e-R18-AIML-02] Moderator (Huawei)
From April 20th GTW session
Working assumption
For the forms of the intermediate KPI results for the following templates:
Table 2. Evaluation results for CSI compression with model generalization; Table 3. Evaluation results for CSI compression with model scalability; Table 4. Evaluation results for CSI compression of multi-vendor joint training without model generalization/scalability; Table 5. Evaluation results for CSI compression of separate training without model generalization/scalability; Table 7. Evaluation results for CSI prediction with model generalization
· The intermediate KPI results are in the form of absolute values and the gain over the benchmark, e.g., in terms of "absolute value (gain over benchmark)"
· The intermediate KPI results are in the form of linear values for SGCS and dB values for NMSE
Working Assumption
For the per layer CSI payload size X/Y/Z in the templates of CSI compression, as a clarification, the X/Y/Z ranges in the working assumption achieved in the RAN1#112 meeting are applicable to Max rank = 1/2. For Max rank = 3/4, the per layer basis X/Y/Z ranges are re-determined as:
· X is <= bits
· Y is bits - bits
· Z is >= bits
Working Assumption
For the template of Table 1. Evaluation results for CSI compression of 1-on-1 joint training without model generalization/scalability, the CSI feedback reduction is provided for 3 CSI feedback overhead ranges, where for each CSI feedback overhead range of the benchmark, it is calculated as the gap between the CSI feedback overhead of benchmark and the CSI feedback overhead of AI/ML corresponding to the same mean UPT.
· Note: the CSI feedback overhead reduction and gain for mean/5%tile UPT are determined at the same payload size for benchmark scheme
CSI feedback reduction (%) (for a given CSI feedback overhead in the benchmark scheme):
· [X*Max rank value], RU<=39%
· [Y*Max rank value], RU<=39%
· [Z*Max rank value], RU<=39%
· [X*Max rank value], RU 40%-69%
· [Y*Max rank value], RU 40%-69%
· [Z*Max rank value], RU 40%-69%
· [X*Max rank value], RU >=70%
· [Y*Max rank value], RU >=70%
· [Z*Max rank value], RU >=70%
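As an illustration of the calculation described in the working assumption above, the following sketch interpolates hypothetical benchmark and AI/ML (overhead, mean UPT) curves to obtain the AI/ML overhead that reaches the same mean UPT as a given benchmark overhead; all numbers are invented for the example.
```python
import numpy as np

# Hypothetical (CSI feedback overhead [bits], mean UPT) sample points; placeholders only.
bench_overhead = np.array([60.0, 120.0, 240.0])
bench_upt      = np.array([10.0, 11.5, 12.5])
ai_overhead    = np.array([40.0, 80.0, 160.0])
ai_upt         = np.array([10.3, 11.8, 12.7])

def feedback_reduction(bench_bits):
    """CSI feedback reduction (%) at a given benchmark overhead point: gap between
    the benchmark overhead and the (interpolated) AI/ML overhead at the same mean UPT."""
    target_upt = np.interp(bench_bits, bench_overhead, bench_upt)
    ai_bits = np.interp(target_upt, ai_upt, ai_overhead)  # invert the AI/ML curve by interpolation
    return 100.0 * (bench_bits - ai_bits) / bench_bits

print(feedback_reduction(120.0))  # ~40% with the placeholder curves above
```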
Note: for result collection for the generalization verification of AI/ML based CSI compression over various deployment scenarios, till the RAN1#112bis-e meeting,
Agreement
For the AI/ML based CSI prediction, add an entry for “Table 6. Evaluation results for CSI prediction without model generalization/scalability” to report the Codebook type for CSI report.
Assumption:
· UE speed
· CSI feedback periodicity
· Observation window (number/distance)
· Prediction window (number/distance [between prediction instances/distance from the last observation instance to the 1st prediction instance])
· Whether/how to adopt spatial consistency
· Codebook type for CSI report
R1-2303990 Summary#3 for [112bis-e-R18-AIML-02] Moderator (Huawei)
R1-2303991 Summary#4 for [112bis-e-R18-AIML-02] Moderator (Huawei)
From April 24th GTW session
Agreement
To evaluate the performance of the intermediate KPI based monitoring mechanism for CSI compression, the model monitoring methodology is considered as:
Agreement
To evaluate the performance of the intermediate KPI based monitoring mechanism for CSI compression, for Step2 of the model monitoring methodology, the per sample is considered for
R1-2303992 Summary#5 for [112bis-e-R18-AIML-02] Moderator (Huawei)
R1-2303993 Summary#6 for [112bis-e-R18-AIML-02] Moderator (Huawei)
Decision: As per email decision posted on April 26th,
Conclusion
For the evaluation of CSI enhancements, when reporting the computational complexity including the pre-processing and post-processing, the complexity metric of FLOPs may be reported separately for the AI/ML model and the pre/post processing.
· How to calculate the FLOPs for pre/post processing is up to companies.
· While reporting the FLOPs of pre-processing and post-processing, the following boundaries are considered.
o Estimated raw channel matrix per each frequency unit as an input for pre-processing of the CSI generation part
o Precoding vectors per each frequency unit as an output of post-processing of the CSI reconstruction part
Agreement
For the evaluation of CSI compression, companies are allowed to report (by introducing an additional field in the template to describe) the specific CQI determination method(s) for AI/ML, e.g.,
· Option 2a: CQI is calculated based on CSI reconstruction output, if CSI reconstruction model is available at the UE and UE can perform reconstruction model inference with potential adjustment
o Option 2a-1: The CSI reconstruction part for CQI calculation at the UE is the same as the actual CSI reconstruction part at the NW
o Option 2a-2: The CSI reconstruction part for CQI calculation at the UE is a proxy model, which is different from the actual CSI reconstruction part at the NW
· Option 2b: CQI is calculated using a two-stage approach: UE derives CQI using precoded CSI-RS transmitted with a reconstructed precoder
· Option 1a: CQI is calculated based on the target CSI from the realistic channel estimation
· Option 1b: CQI is calculated based on the target CSI from the realistic channel estimation and potential adjustment
· Option 1c: CQI is calculated based on traditional codebook
· Other options if adopted, to be described by companies
Agreement
For the AI/ML based CSI prediction sub use case, if collaboration level x is reported as the benchmark, the EVM to distinguish level x and level y/z based AI/ML CSI prediction is considered from the generalization aspect.
· E.g., collaboration level y/z based CSI prediction is modeled as the fine-tuning case or generalization Case 1, while collaboration level x based CSI prediction is modeled as generalization Case 2 or Case 3.
From April 26th GTW session
Agreement
To evaluate the performance of the intermediate KPI based monitoring mechanism for CSI compression, is in forms of
Working Assumption
For the template of Table 1. Evaluation results for CSI compression of 1-on-1 joint training without model generalization/scalability, the CSI feedback overhead for the metric of eventual KPI (e.g., mean/5% UPT) is re-determined as:
· CSI feedback overhead A: <= β*80 bits.
· CSI feedback overhead B: β*(100 bits - 140 bits).
· CSI feedback overhead C: >= β*230 bits.
· Note: β=1 for max rank = 1, and β=1.5 for max rank = 2/3/4.
· FFS for rank 2/3/4, whether to add an additional CSI feedback overhead D: >= γ*230 bits, γ = [1.9], and limit the range of CSI feedback overhead C as: β*230 bits - γ*230 bits.
· Note: companies additionally report the exact CSI feedback overhead they considered
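A worked example of the re-determined ranges above for max rank = 2 (β = 1.5), using the FFS value γ = 1.9 for the optional range D; this is plain arithmetic on the stated values, not additional agreed numbers.
```python
# Worked example of the CSI feedback overhead ranges for max rank = 2.
beta, gamma = 1.5, 1.9
overhead_A_max = beta * 80             # A: <= 120 bits
overhead_B = (beta * 100, beta * 140)  # B: 150 - 210 bits
overhead_C_min = beta * 230            # C: >= 345 bits
overhead_D_min = gamma * 230           # FFS D: >= 437 bits
```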
Observation
For the scalability verification of AI/ML based CSI compression over various CSI payload sizes, till the RAN1#112bis-e meeting, compared to the generalization Case 1 where the AI/ML model is trained with a dataset subject to a certain CSI payload size #B and applied for inference with the same CSI payload size #B,
Observation
For the AI/ML based CSI prediction, till the RAN1#112bis-e meeting,
Agreement
For the AI/ML based CSI compression, for the submission of simulation results to the RAN1#113 meeting, for Table 1. Evaluation results for CSI compression of 1-on-1 joint training without model generalization/scalability, companies are encouraged to take the following assumptions as baseline for the calibration purpose:
Agreement
For the AI/ML based CSI prediction, for the submission of simulation results to the RAN1#113 meeting,
Final summary in R1-2304247.
Including potential specification impact.
R1-2302320 Discussion on other aspects of AI/ML for CSI feedback enhancement FUTUREWEI
R1-2302359 Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon
R1-2302438 Discussion on other aspects for AI CSI feedback enhancement ZTE
R1-2302478 Other aspects on AI/ML for CSI feedback enhancement vivo
R1-2302541 On sub use cases and other aspects of AI/ML for CSI feedback enhancement OPPO
R1-2302594 Discussion on other aspects on AIML for CSI feedback Spreadtrum Communications
R1-2302629 Other aspects on ML for CSI feedback enhancement Nokia, Nokia Shanghai Bell
R1-2302696 Discussion on AI/ML-based CSI feedback enhancement CATT
R1-2302750 Discussion on AI/ML for CSI feedback enhancement NEC
R1-2302791 On other aspects on AI/ML for CSI feedback Intel Corporation
R1-2302823 Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.
R1-2302842 Considerations on CSI measurement enhancements via AI/ML Sony
R1-2302905 Views on specification impact for CSI feedback enhancement Fujitsu
R1-2302919 Discussion on AI-CSI Ericsson
R1-2302976 Discussion on specification impact for CSI feedback based on AI/ML Xiaomi
R1-2303026 Discussion on AI/ML for CSI feedback enhancement China Telecom
R1-2303038 Discussion on AI/ML for CSI feedback enhancement Panasonic
R1-2303051 On Enhancement of AI/ML based CSI Google
R1-2303077 Other aspects on AI/ML for CSI feedback enhancement LG Electronics
R1-2303121 Discussion on potential specification impact for CSI feedback enhancement Samsung
R1-2303184 Discussions on AI-ML for CSI feedback CAICT
R1-2303195 Discussion on other aspects on AI/ML for CSI feedback enhancement ETRI
R1-2303225 Discussion on other aspects on AI/ML for CSI feedback enhancement CMCC
R1-2303337 Other aspects on AI/ML for CSI feedback enhancement MediaTek Inc.
R1-2303436 AI and ML for CSI feedback enhancement NVIDIA
R1-2303476 Discussion on other aspects of AI/ML for CSI enhancement Apple
R1-2303525 Further aspects of AI/ML for CSI feedback Lenovo
R1-2303583 Other aspects on AI/ML for CSI feedback enhancement Qualcomm Incorporated
R1-2303655 Discussion on AI/ML for CSI feedback enhancement AT&T
R1-2303706 Discussion on other aspects on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.
R1-2303810 Discussions on CSI measurement enhancement for AI/ML communication TCL Communication Ltd.
[112bis-e-R18-AI/ML-03] – Huaning (Apple)
Email discussion on other aspects on AI/ML for CSI feedback enhancement by April 26th
- Check points: April 21, April 26
R1-2303979 Summary #1 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From April 18th GTW session
Agreement
The study of AI/ML based CSI compression should be based on the legacy CSI feedback signaling framework. Further study potential specification enhancement on
· CSI-RS configurations (No discussion on CSI-RS pattern design enhancements)
· CSI reporting configurations
· CSI report UCI mapping/priority/omission
· CSI processing procedures.
· Other aspects are not precluded.
Agreement
In CSI compression using two-sided model use case, for UE-side monitoring, further study potential specification impact on triggering and means for reporting the monitoring metrics, including periodic/semi-persistent and aperiodic reporting, and other reporting initiated from UE.
R1-2303980 Summary #2 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From April 20th GTW session
Agreement
In CSI prediction using UE-side model use case, whether to address the potential spec impact of CSI prediction depends on RAN#100 final conclusion, focusing on the following
R1-2303981 Summary #3 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From April 24th GTW session
Agreement
In CSI compression using two-sided model use case, for NW-side monitoring, further study the necessity, feasibility and potential specification impact to enable performance monitoring using an existing CSI feedback scheme as the reference.
R1-2303982 Summary #4 on other aspects of AI/ML for CSI enhancement Moderator (Apple)
From April 26th GTW session
Conclusion
In CSI compression using two-sided model use case, gradient-exchange based sequential training over the air interface is deprioritized in R18 SI.
Agreement
In CSI compression using two-sided model use case, further study the necessity and potential specification impact of the following aspects related to the ground truth CSI format for NW side data collection for model training:
· Scalar quantization for ground-truth CSI
o FFS: any processing applied to the ground-truth CSI before scalar quantization, based on evaluation results in 9.2.2.1
· Codebook-based quantization for ground-truth CSI
o FFS: Parameter set enhancement of existing eType II codebook, based on evaluation results in 9.2.2.1
· Number of layers for which the ground truth data is collected, and whether UE or NW determines the number of layers for ground-truth CSI data collection.
Agreement
In CSI compression using two-sided model use case, further study the necessity and potential specification impact on quantization alignment, including at least:
· For vector quantization scheme,
o The format and size of the VQ codebook
o Size and segmentation method of the CSI generation model output
· For scalar quantization scheme,
o Uniform and non-uniform quantization
o The format, e.g., quantization granularity, the distribution of bits assigned to each float.
· Quantization alignment using 3GPP aware mechanism.
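As a purely illustrative sketch of one element listed above, the following shows a uniform scalar quantization of the CSI generation model output; the bit width, value range and mid-rise reconstruction are assumptions, not agreed parameters.
```python
import numpy as np

def uniform_scalar_quantize(latent, n_bits=2, lo=-1.0, hi=1.0):
    """Map each float of the CSI generation model output to one of 2^n_bits uniform levels
    (assumed range [lo, hi]); returns the quantization indices and the dequantized values."""
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((latent - lo) / step), 0, levels - 1).astype(int)
    dequant = lo + (idx + 0.5) * step  # reconstruct at bin centers
    return idx, dequant

idx, dq = uniform_scalar_quantize(np.array([-0.8, 0.1, 0.9]))
```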
Final summary in R1-2303983.
Including evaluation methodology, KPI, and performance evaluation results.
R1-2302321 Discussion and evaluation of AI/ML for beam management FUTUREWEI
R1-2302360 Evaluation on AI/ML for beam management Huawei, HiSilicon
R1-2302439 Evaluation on AI beam management ZTE
R1-2302479 Evaluation on AI/ML for beam management vivo
R1-2302542 Evaluation methodology and results on AI/ML for beam management OPPO
R1-2302595 Evaluation on AI/ML for beam management Spreadtrum Communications
R1-2302630 Evaluation of ML for beam management Nokia, Nokia Shanghai Bell
R1-2302697 Evaluation on AI/ML-based beam management CATT
R1-2302792 Evaluations for AI/ML beam management Intel Corporation
R1-2302825 Discussion for evaluation on AI/ML for beam management InterDigital, Inc.
R1-2302878 Evaluation of AIML for beam management Ericsson
R1-2302906 Evaluation on AI/ML for beam management Fujitsu
R1-2302977 Evaluation on AI/ML for beam management xiaomi
R1-2303052 On Evaluation of AI/ML based Beam Management Google
R1-2303078 Evaluation on AI/ML for beam management LG Electronics
R1-2303122 Evaluation on AI ML for Beam management Samsung
R1-2303185 Some discussions on evaluation on AI-ML for Beam management CAICT
R1-2303226 Discussion on evaluation on AI/ML for beam management CMCC
R1-2303301 Evaluation on AI/ML for beam management CEWiT
R1-2303338 Evaluation on AI/ML for beam management MediaTek Inc.
R1-2303437 Evaluation of AI and ML for beam management NVIDIA
R1-2303477 Evaluation for AI/ML based beam management enhancements Apple
R1-2303526 Evaluation on AI/ML for beam management Lenovo
R1-2303584 Evaluation on AI/ML for beam management Qualcomm Incorporated
R1-2303707 Discussion on evaluation on AI/ML for beam management NTT DOCOMO, INC.
[112bis-e-R18-AI/ML-04] – Feifei (Samsung)
Email discussion on evaluation on AI/ML for beam management by April 26th - extended to April 28th
- Check points: April 21, April 26
R1-2303994 Feature lead summary #0 evaluation of AI/ML for beam management Moderator (Samsung)
From April 18th GTW session
Agreement
Agreement
R1-2303995 Feature lead summary #1 evaluation of AI/ML for beam management Moderator (Samsung)
From April 20th GTW session
Conclusion
Agreement
At least for evaluation on the performance of DL Tx beam prediction, consider the following options for Rx beam for providing input for AI/ML model for training and/or inference if applicable
Other options are not precluded and can be reported by companies.
Observation
· At least for BM-Case1 for inference of DL Tx beam with L1-RSRPs of all beams in Set B, existing quantization granularity of L1-RSRP (i.e., 1dB for the best beam, 2dB for the difference to the best beam) causes [a minor loss x%~y%, if applicable] in beam prediction accuracy compared to unquantized L1-RSRPs of beams in Set B.
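For reference, a rough sketch of the differential L1-RSRP report quantization referred to in the observation (1 dB step for the best beam, 2 dB differential step for the other beams); the exact report ranges and clipping of the specification tables are omitted/simplified here.
```python
def quantize_l1_rsrp(rsrp_dbm):
    """Simplified L1-RSRP report quantization: best beam on a 1 dB grid,
    other beams as a 2 dB-step differential to the best beam."""
    best = max(rsrp_dbm)
    best_q = round(best)                                     # 1 dB step for the best beam
    diff_q = [2 * round((best - r) / 2) for r in rsrp_dbm]   # 2 dB differential steps
    return best_q, diff_q

best_q, diff_q = quantize_l1_rsrp([-70.3, -75.8, -81.1])
```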
R1-2303996 Feature lead summary #2 evaluation of AI/ML for beam management Moderator (Samsung)
From April 24th GTW session
Agreement
R1-2303997 Feature lead summary #3 evaluation of AI/ML for beam management Moderator (Samsung)
From April 26th GTW session
Observation
Agreement
For performance evaluation of AI/ML based DL Tx beam prediction for BM-Case1 and BM-Case2, optionally study the performance with a quasi-optimal Rx beam (i.e., not all the measurements as inputs of AI/ML are from the “best” Rx beam) with less measurement/RS overhead compared to exhaustive Rx beam sweeping.
o Opt A: Identify the quasi-optimal Rx beams to be utilized for measuring Set B/Set C based on the previous measurements.
§ Companies can report the time information and beam type (e.g., whether the same Tx beam(s) in Set B) of the reference signal to use.
§ Companies report how to find the quasi-optimal Rx beam with “previous measurement”
o FFS: Opt B: The Rx beams for measuring Set B/Set C consist of the X% of “best” Rx beam exhaustive Rx beam sweeping and (1-X%) of random Rx beams [or the adjacent Rx beam to the “best” Rx beam].
§ X%= 80% or 90%, or other values reported by companies.
§ Note: X% is the percentage of measurements with “best” Rx beams out of all measurements
o Other options are not precluded.
· Companies report the measurement/RS overhead together with beam prediction accuracy.
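A minimal sketch of how the FFS Opt B measurement set in the agreement above could be constructed in an evaluation, assuming hypothetical Tx/Rx beam indices, a configurable X%, and random (rather than adjacent) Rx beams for the remaining measurements.
```python
import random

def build_rx_beams(best_rx_per_tx, n_rx_beams=8, x_percent=90):
    """For each Tx beam in Set B, use the previously found 'best' Rx beam with
    probability X% and a random Rx beam otherwise (Opt B, simplified)."""
    rx_choices = []
    for tx_beam, best_rx in best_rx_per_tx.items():
        if random.random() < x_percent / 100.0:
            rx_choices.append((tx_beam, best_rx))                       # "best" Rx beam
        else:
            rx_choices.append((tx_beam, random.randrange(n_rx_beams)))  # random Rx beam
    return rx_choices

# e.g. best Rx beam index previously found for three Tx beams in Set B (placeholders)
rx_choices = build_rx_beams({0: 3, 5: 1, 9: 6}, x_percent=80)
```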
Conclusion
To evaluate the performance of BM-Case1 for both DL Tx beam and pair prediction, aiming to analyse the following aspects:
Decision: As per email decision posted on April 28th,
Observation
Conclusion
To evaluate the performance of BM-Case2 for both DL Tx beam and pair prediction, aiming to analyse the following aspects:
Including potential specification impact.
R1-2302322 Discussion on other aspects of AI/ML for beam management FUTUREWEI
R1-2302361 Discussion on AI/ML for beam management Huawei, HiSilicon
R1-2302432 Discussion on other aspects of AI/ML beam management New H3C Technologies Co., Ltd.
R1-2302440 Discussion on other aspects for AI beam management ZTE
R1-2302480 Other aspects on AI/ML for beam management vivo
R1-2302543 Other aspects of AI/ML for beam management OPPO
R1-2302596 Other aspects on AI/ML for beam management Spreadtrum Communications
R1-2302631 Other aspects on ML for beam management Nokia, Nokia Shanghai Bell
R1-2302698 Discussion on AI/ML-based beam management CATT
R1-2302793 Other aspects on AI/ML for beam management Intel Corporation
R1-2302826 Discussion for other aspects on AI/ML for beam management InterDigital, Inc.
R1-2302843 Consideration on AI/ML for beam management Sony
R1-2302868 Discussion on AI/ML for beam management Panasonic
R1-2302883 Discussion on AI/ML for beam management Ericsson
R1-2302907 Discussion for specification impacts on AI/ML for beam management Fujitsu
R1-2302978 Potential specification impact on AI/ML for beam management xiaomi
R1-2303053 On Enhancement of AI/ML based Beam Management Google
R1-2303079 Other aspects on AI/ML for beam management LG Electronics
R1-2303123 Discussion on potential specification impact for beam management Samsung
R1-2303186 Discussions on AI-ML for Beam management CAICT
R1-2303196 Discussion on other aspects on AI/ML for beam management ETRI
R1-2303227 Discussion on other aspects on AI/ML for beam management CMCC
R1-2303339 Other aspects on AI/ML for beam management MediaTek Inc.
R1-2303438 AI and ML for beam management NVIDIA
R1-2303478 Discussion on other aspects of AI/ML for beam management enhancement Apple
R1-2303527 Further aspects of AI/ML for beam management Lenovo
R1-2303585 Other aspects on AI/ML for beam management Qualcomm Incorporated
R1-2303669 Discussion on AI/ML for beam management NEC
R1-2303708 Discussion on other aspects on AI/ML for beam management NTT DOCOMO, INC.
[112bis-e-R18-AI/ML-05] – Zhihua (OPPO)
Email discussion on other aspects of AI/ML for beam management by April 26th
- Check points: April 21, April 26
R1-2303966 Summary#1 for other aspects on AI/ML for beam management Moderator (OPPO)
From April 18th GTW session
Agreement
Regarding the data collection at UE side for UE-side AI/ML model, study the potential specification impact of UE reporting to network from the following aspect
· Supported/preferred configurations of DL RS transmission
· Other aspect(s) is not precluded
R1-2303967 Summary#2 for other aspects on AI/ML for beam management Moderator (OPPO)
From April 20th GTW session
Agreement
Regarding the data collection at UE side for UE-side AI/ML model, study the potential specification impact (if any) to initiate/trigger data collection from RAN1 point of view by considering the following options as a starting point
R1-2303968 Summary#3 for other aspects on AI/ML for beam management Moderator (OPPO)
Presented in April 24th GTW session.
R1-2303969 Summary#4 for other aspects on AI/ML for beam management Moderator (OPPO)
From April 26th GTW session
Agreement
Regarding data collection for NW-side AI/ML model, study the following options (including the combination of options) for the contents of collected data,
Agreement
Regarding data collection for NW-side AI/ML model, study necessity, benefits and beam-management-specific potential specification impact from RAN1 point of view on the following additional aspects
Decision: As per email decision posted on April 28th,
Agreement
For AI/ML performance monitoring for BM-Case1 and BM-Case2, study potential specification impact of at least the following alternatives as the benchmark/reference (if applicable) for performance comparison:
· Alt.1: The best beam(s) obtained by measuring beams of a set indicated by gNB (e.g., Beams from Set A)
o FFS: gNB configures one or multiple sets for one or multiple benchmarks/references
· Alt.4: Measurements of the predicted best beam(s) corresponding to model output (e.g., Comparison between actual L1-RSRP and predicted RSRP of predicted Top-1/K Beams)
· FFS:
o Alt.3: The beam corresponding to some or all the indicated/activated TCI state(s)
· Other alternatives are not precluded.
Final summary in R1-2303970.
Including evaluation methodology, KPI, and performance evaluation results.
R1-2302335 Evaluation of AI/ML for Positioning Accuracy Enhancement Ericsson
R1-2302362 Evaluation on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
R1-2302441 Evaluation on AI positioning enhancement ZTE
R1-2302481 Evaluation on AI/ML for positioning accuracy enhancement vivo
R1-2302544 Evaluation methodology and results on AI/ML for positioning accuracy enhancement OPPO
R1-2302632 Evaluation of ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2302699 Evaluation on AI/ML-based positioning enhancement CATT
R1-2302908 Discussions on evaluation results of AIML positioning accuracy enhancement Fujitsu
R1-2302979 Evaluation on AI/ML for positioning accuracy enhancement xiaomi
R1-2303054 On Evaluation of AI/ML based Positioning Google
R1-2303080 Evaluation on AI/ML for positioning accuracy enhancement LG Electronics
R1-2303124 Evaluation on AI ML for Positioning Samsung
R1-2303187 Some discussions on evaluation on AI-ML for positioning accuracy enhancement CAICT
R1-2303228 Discussion on evaluation on AI/ML for positioning accuracy enhancement CMCC
R1-2303340 Evaluation of AIML for Positioning Accuracy Enhancement MediaTek Inc.
R1-2303439 Evaluation of AI and ML for positioning enhancement NVIDIA
R1-2303450 Evaluation on AI/ML for positioning accuracy enhancement InterDigital, Inc.
R1-2303926 Evaluation on AI/ML for positioning accuracy enhancement Apple (rev of R1-2303479)
R1-2303528 Discussion on AI/ML Positioning Evaluations Lenovo
R1-2303586 Evaluation on AI/ML for positioning accuracy enhancement Qualcomm Incorporated
[112bis-e-R18-AI/ML-06] – Yufei (Ericsson)
Email discussion on evaluation on AI/ML for positioning accuracy enhancement by April 26th - extended till April 28th
- Check points: April 21, April 26
R1-2304016 Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2304017 Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From April 18th GTW session
Agreement
For evaluation of both the direct AI/ML positioning and AI/ML assisted positioning, companies optionally adopt delay profile (DP) as a type of information for model input.
· DP is a degenerated version of PDP, where the path power is not provided.
Agreement
For the evaluation of AI/ML based positioning, the study of model input due to different numbers of TRPs includes the following approaches. Proponents of each approach provide analysis of model performance, signaling overhead (including training data collection and model inference), model complexity and computational complexity.
Agreement
In the evaluation of AI/ML based positioning, if N’TRP<18, the set of N’TRP TRPs that provide measurements to the model input of an AI/ML model is reported using the TRP indices shown below.
R1-2304018 Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2304019 Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From April 20th GTW session
Agreement
For AI/ML assisted positioning with TOA as model output, study the impact of labelling error to TOA accuracy and/or positioning accuracy.
o Value L is up to sources.
Agreement
For AI/ML assisted positioning with LOS/NLOS indicator as model output, study the impact of labelling error to LOS/NLOS indicator accuracy and/or positioning accuracy.
o Value m and n are up to sources.
R1-2304103 Summary #5 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From April 24th GTW session
Agreement
For the evaluation of the AI/ML based positioning method, the measurement size and signalling overhead for the model input are reported.
Observation
For AI/ML based positioning method, companies have submitted evaluation results to show that for their evaluated cases, for a given company’s model design, a lower complexity (model complexity and computational complexity) model can still achieve acceptable positioning accuracy (e.g., <1m), albeit degraded, when compared to a higher complexity model.
Note: For easy reference, sources include CMCC (R1-2303228), InterDigital (R1-2303450), Ericsson (R1-2302335), Huawei/HiSilicon (R1-2302362), CATT (R1-2302699), Nokia (R1-2302632).
Observation
For direct AI/ML positioning, for L in the range of 0.25m to 5m, the positioning error increases approximately in proportion to L, where L (in meters) is the standard deviation of truncated Gaussian Distribution of the ground truth label error.
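For evaluation purposes, ground-truth label error with the characteristics described in the observation above could be drawn as in the following sketch: a 2D offset per training sample from a truncated Gaussian with standard deviation L (meters). The ±2L truncation used here is an assumption; the exact truncation range follows the agreed evaluation assumptions.
```python
from scipy.stats import truncnorm

def label_error_samples(L, n_samples, trunc=2.0):
    """Draw 2D ground-truth label error offsets from a zero-mean truncated Gaussian
    with standard deviation L (meters), truncated at +/- trunc standard deviations."""
    a, b = -trunc, trunc  # truncation bounds in units of the standard deviation
    return truncnorm.rvs(a, b, loc=0.0, scale=L, size=(n_samples, 2))

errors = label_error_samples(L=1.0, n_samples=1000)
```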
R1-2304104 Summary #6 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
R1-2304105 Summary #7 of Evaluation on AI/ML for positioning accuracy enhancement Moderator (Ericsson)
From April 26th GTW session
Observation
For AI/ML assisted positioning, evaluation results have been provided by sources for label-based model monitoring methods. With TOA and/or LOS/NLOS indicator as model output, the estimated ground truth label (i.e., TOA and/or LOS/NLOS indicator) is provided by the location estimation from the associated conventional positioning method. The associated conventional positioning method refers to the method which utilizes the AI/ML model output to determine target UE location.
Note: Sources include vivo (R1-2302481), MediaTek (R1-2303340), Ericsson (R1-2302335)
Observation
For both direct AI/ML and AI/ML assisted positioning, evaluation results have been provided by sources to demonstrate the feasibility of label-free model monitoring methods.
Note: Sources include vivo (R1-2302481), CATT (R1-2302699), MediaTek (R1-2303340), Ericsson (R1-2302335), Nokia (R1-2302632).
Decision: As per email decision posted on April 28th,
Observation
For both direct AI/ML and AI/ML assisted positioning, evaluation results submitted to RAN1#112bis show that with CIR model input for a trained model,
Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.
Observation
For direct AI/ML positioning, based on evaluation results of timing error in the range of 0-50 ns, when the model is trained by a dataset with UE/gNB RX and TX timing error t1 (ns) and tested in a deployment scenario with UE/gNB RX and TX timing error t2 (ns), for a given t1,
Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.
Observation
For direct AI/ML positioning, based on evaluation results of network synchronization error in the range of 0-50 ns, when the model is trained by a dataset with network synchronization error t1 (ns) and tested in a deployment scenario with network synchronization error t2 (ns), for a given t1,
Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.
Final summary in R1-2304106.
Including potential specification impact.
R1-2302336 Other Aspects of AI/ML Based Positioning Enhancement Ericsson
R1-2302363 Discussion on AI/ML for positioning accuracy enhancement Huawei, HiSilicon
R1-2302442 Discussion on other aspects for AI positioning enhancement ZTE
R1-2302482 Other aspects on AI/ML for positioning accuracy enhancement vivo
R1-2302545 On sub use cases and other aspects of AI/ML for positioning accuracy enhancement OPPO
R1-2302597 Discussion on other aspects on AIML for positioning accuracy enhancement Spreadtrum Communications
R1-2302633 Other aspects on ML for positioning accuracy enhancement Nokia, Nokia Shanghai Bell
R1-2302700 Discussion on AI/ML-based positioning enhancement CATT
R1-2302739 Other aspects on AI-ML for positioning accuracy enhancement Baicells
R1-2302844 Discussions on AI-ML for positioning accuracy enhancement Sony
R1-2302909 Discussions on specification impacts for AIML positioning accuracy enhancement Fujitsu
R1-2302980 Views on the other aspects of AI/ML-based positioning accuracy enhancement xiaomi
R1-2303055 On Enhancement of AI/ML based Positioning Google
R1-2303081 Other aspects on AI/ML for positioning accuracy enhancement LG Electronics
R1-2303125 Discussion on potential specification impact for Positioning Samsung
R1-2303188 Discussions on AI-ML for positioning accuracy enhancement CAICT
R1-2303229 Discussion on other aspects on AI/ML for positioning accuracy enhancement CMCC
R1-2303341 Other Aspects on AI ML Based Positioning Enhancement MediaTek Inc.
R1-2303413 On potential AI/ML solutions for positioning Fraunhofer IIS, Fraunhofer HHI
R1-2303440 AI and ML for positioning enhancement NVIDIA
R1-2303451 Designs and potential specification impacts of AIML for positioning InterDigital, Inc.
R1-2303480 On Other aspects on AI/ML for positioning accuracy enhancement Apple
R1-2303529 AI/ML Positioning use cases and associated Impacts Lenovo
R1-2303587 Other aspects on AI/ML for positioning accuracy enhancement Qualcomm Incorporated
R1-2303675 Discussion on AI/ML for positioning accuracy enhancement NEC
R1-2303709 Discussion on other aspects on AI/ML for positioning accuracy enhancement NTT DOCOMO, INC.
[112bis-e-R18-AI/ML-07] – Huaming (vivo)
Email discussion on other aspects of AI/ML for positioning accuracy enhancement by April 26th
- Check points: April 21, April 26
R1-2303940 FL summary #1 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
From April 18th GTW session
Agreement
Regarding monitoring for AI/ML based positioning, at least the following entities are identified to derive monitoring metric
· UE at least for Case 1 and 2a (with UE-side model)
· gNB at least for Case 3a (with gNB-side model)
· LMF at least for Case 2b and 3b (with LMF-side model)
R1-2304056 FL summary #2 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
From April 20th GTW session
Working Assumption
Regarding data collection at least for model training for AI/ML based positioning, at least the following information of data with potential specification impact are identified.
Agreement
Regarding monitoring for AI/ML based positioning, at least the following aspects are identified for further study on benefit(s), feasibility, necessity and potential specification impact for each case (Case 1 to 3b)
R1-2304102 FL summary #3 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
Presented in April 24th GTW session.
R1-2304177 FL summary #4 of other aspects on AI/ML for positioning accuracy enhancement Moderator (vivo)
From April 26th GTW session
Agreement
Regarding LCM of AI/ML based positioning accuracy enhancement, at least for Case 1 and Case 2a (model is at UE-side), further study the following aspects on information related to the conditions