Sunday, June 6, 2010

Troubleshooting Tips

Even if you have complete knowledge of this technology, it is impossible to implement a protocol stack or test cases that do not need any troubleshooting. In reality, most engineers have relatively good knowledge of a specific area or a specific layer, but much less knowledge of the other layers. However, when a problem happens, we usually have to analyze across multiple layers, meaning that we need knowledge of several layers and of the detailed interrelations between those layers.

In a word, there is no way to troubleshoot in a single shot and no shortcut for it. A third of it comes from knowledge, a third comes from experience, and the other third comes from the combination of the two.

In this section, I will try to put down some troubleshooting tips, mostly based on my experience.

Tools for Troubleshooting

The more tools you have, the easier it generally is to troubleshoot. I hope to have at least the following tools as a minimum (in many cases even this minimum is not met, giving me more headaches):

i) Logging tool on the network emulator (it should provide not only the signaling log (L3 and above) but also all the lower layer logs)
ii) Logging tool on the UE (here as well, we need not only the signaling log but also all the lower layer logs)
iii) RF vector spectrum analyzer (it should have a good-quality zero-span mode with triggering capability; this helps a lot when troubleshooting the RACH process or handover process).

The most important initial 5 steps

The most important 5 steps for registration are as follows.
We have to know every detail of these steps and all the factors influencing them.

i) RACH Preamble
ii) RACH Response (Msg 2)
iii) RRC Connection Request (Msg 3)
iv) RRC Connection Setup
v) RRC Connection Setup Complete

The first thing we have to consider is the timing requirement between each step and the following step. The time interval between i) and ii) is 0~12 subframes. The requirement between ii) and iii) is 6 subframes. The network should complete the lower layer configuration for Msg3 reception at least 4 subframes before Msg3 arrives at the network.
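To make these checks mechanical, here is a minimal Python sketch that flags violations of the timing rules quoted above. The log format (absolute subframe indices per message) and the function name are hypothetical; the window constants are just the numbers from this paragraph.

MSG2_WINDOW = range(0, 13)   # Msg2 expected 0~12 subframes after the preamble
MSG3_DELAY = 6               # Msg3 expected 6 subframes after Msg2

def check_rach_timing(t_preamble, t_msg2, t_msg3):
    # Returns a list of timing violations (an empty list means timing looks OK)
    problems = []
    if (t_msg2 - t_preamble) not in MSG2_WINDOW:
        problems.append(f"Msg2 came {t_msg2 - t_preamble} subframes after the preamble")
    if (t_msg3 - t_msg2) != MSG3_DELAY:
        problems.append(f"Msg3 came {t_msg3 - t_msg2} subframes after Msg2")
    return problems

print(check_rach_timing(100, 105, 111))   # [] -> timing OK
print(check_rach_timing(100, 115, 121))   # Msg2 arrived too late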


"No Service" on Power On

When you turn on a UE connected to the network simulator, you will see the "Searching Network..." message for several seconds, and you will be sweating a lot during this period if you are a protocol stack developer or a test case developer.

If it goes to the next step and the UE starts registration, you will be happy. The problem happens when it stops searching and the "No Service" message pops up.

The first step would be to read sections 5.1 and 5.2 of 36.331 and get a clear understanding of the expected procedure on the UE and network side.

If I have the UE logging tool, I would first check it to see whether the UE correctly decoded at least MIB, SIB1 and SIB2. When the UE side log is not available, or the UE log shows that any one of these was not received, we have to look at the network side log, or at the protocol stack source code if it is available. In most cases you will see that MIB, SIB1 and SIB2 are not missing. Then why does the UE fail to decode them?

Two possibilities that I can think of:
i) The scheduling information in SIB1 for the other SIBs is wrong, so that multiple SIBs overwrite each other (see the sketch below).
ii) There is no problem with the scheduling, but the UE has some issue tuning to the specific schedule. (This kind of situation would not happen when the technology is mature, but it is possible in the initial phase of a technology like LTE, and I have experienced it.)
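For case i), it helps to compute where each SI message is supposed to be transmitted. Below is a minimal sketch of the SI-window rule in 36.331 section 5.2.3 (x = (n-1)*w; the window of the n-th SI message starts at subframe x mod 10 of the radio frame where SFN mod T equals floor(x/10)). The schedulingInfoList contents here are made-up example values.

def si_window_start(n, si_window_len):
    # Start of the SI window for the n-th SI message (n = 1, 2, ...)
    x = (n - 1) * si_window_len
    return x // 10, x % 10          # (frame offset within the period, subframe)

si_window_len = 20                                  # si-WindowLength = ms20
sched = [(8, "SI-1 (SIB2)"), (8, "SI-2 (SIB3)")]    # si-Periodicity = rf8 for both
for n, (period, label) in enumerate(sched, start=1):
    frame_off, sf = si_window_start(n, si_window_len)
    print(f"{label}: window starts at subframe {sf} of frames where SFN mod {period} == {frame_off}")

If the emulator configuration ignores this rule and puts two SI messages into the same occasion, the later one overwrites the earlier one, which is exactly the symptom described in case i).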

Network detected but no further progress

"No Service" message shown on UE screen but no registration process starts. The first item you have to check at this stage is to check whether UE sent RACH or not. How do we verify this ?
i) Check UE log if the log says "RACH" get transmitted
ii) Check Network emulator log if it received "PRACH" signal (You need to have Network emulator which has very detailed logging capability to show this).
iii) Use spectrum analyzer to detect PRACH from UE (Since the signal analyzer does not know exactly the PRACH signal comes in and the PRACH is a burst type of signal, put the spectrum analyzer in zero-span mode and set the proper trigger for it).

UE keeps sending PRACH

In the normal case, the UE sends PRACH, the network sends a RACH Response, and the UE is supposed to stop sending PRACH and initiate an RRC session by sending 'RRC Connection Request'. If the UE keeps sending PRACH, it means there is some issue with processing the 'RACH Response'.

i) Check the network emulator log to see whether it received the "PRACH" signal (you need a network emulator with very detailed logging capability to show this).
ii) Check the network emulator log to see whether it sent the RA Response.
iii) Check the network emulator log to see whether the timing requirement between PRACH reception and RA Response has been satisfied. (Even though the network sent the RA Response, the UE keeps retrying the RACH process if the network sent it too late.)
iv) Check the UE log to see whether it says "RACH Response" was received.
v) Check the UE log to see whether the PRACH transmission and RA Response happened within the timing requirement.

Unfortunately, in this case it is hard to use a spectrum analyzer, because the downlink signal carries so many trains of other signals that it is hard to set a trigger on the spectrum analyzer to detect the RACH Response, unless the spectrum analyzer has specific decoding capability so that it can use the RACH Response itself as a trigger.

Another possibility would be the following case:
i) UE transmits PRACH preamble
ii) Network sends RACH Response
iii) UE properly decodes RACH Response
iv) UE sends 'RRC Connection Request'
v) Network fails to decode 'RRC Connection Request' and does not send 'RRC Connection Setup'
vi) (Timeout for 'RRC Connection Setup') UE reinitiates the PRACH process

If you look at the network side log, you will not see 'RRC Connection Request' even though the UE log says it sent the message.

The most common cause of this situation is related to steps iii) and iv). If you read the 'Understand RACH !' section, you will remember that the RACH Response message carries a 'UL Grant', which basically carries the resource allocation for the 'RRC Connection Request' message. If the UE incorrectly decoded the 'RA Response' message, it will send the 'RRC Connection Request' message in the wrong location, and the network will fail to decode it even though the UE sent the message. Another possibility is on the network side. If the network sent a wrong RACH Response message (wrong UL Grant), different from its own MAC layer setting for the UL CCCH, it will fail to decode Msg3. This kind of problem happens pretty often when you create test cases for UE testing. If you have a working test scenario on a certain system bandwidth and then just change the system bandwidth, and all of a sudden the RACH process fails, the first place to check is the UL Grant field of the RA Response message.
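To cross-check steps iii) and iv) quickly, you can unpack the UL Grant fields by hand. The sketch below follows the 20-bit RAR UL grant layout of 36.213 section 6.2 (Release 8: hopping flag, 10-bit RB assignment, truncated MCS, TPC, UL delay, CSI request); the example grant value is hypothetical.

def parse_rar_ul_grant(grant):
    # grant: the 20-bit UL grant integer taken from the RA Response payload
    return {
        "hopping_flag":  (grant >> 19) & 0x1,
        "rb_assignment": (grant >> 9)  & 0x3FF,   # fixed-size RB assignment, 10 bits
        "truncated_mcs": (grant >> 5)  & 0xF,
        "tpc_for_msg3":  (grant >> 2)  & 0x7,
        "ul_delay":      (grant >> 1)  & 0x1,
        "csi_request":   grant         & 0x1,
    }

print(parse_rar_ul_grant(0x12345))   # hypothetical grant value, for illustration

If the RB assignment decoded here does not match the PUSCH resource where the network MAC actually listens for Msg3, you have found the mismatch described above.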




Wednesday, June 2, 2010

DCI

When you study the physical frame structure of LTE, you may be impressed by the flexibility (meaning, in another way, the complexity) of all the possible ways of resource allocation. It is a combination of the time domain, the frequency domain and the modulation scheme. Especially in the frequency domain, you have so many resource blocks you can use (100 resource blocks in the case of 20 MHz bandwidth), and if you think of all the possible permutations of these variables, the number is huge. Then you would have this question (at least I had this question): how can the other party (the receiving side) figure out exactly where in the slot, and with which modulation scheme, the sender (transmitter) transmitted the data (subframe)? I just captured the physical signal, but how can I (the receiver) decode this signal? This is where the term 'DCI (Downlink Control Information)' comes in.

It is the DCI which carries this detailed information, like which resource blocks carry your data, what kind of demodulation scheme you have to use to decode the data, and some other additional information. It means you (the receiver) first have to decode the DCI, and based on the information you get from the DCI you can decode the real data. Without DCI, decoding the data delivered to you is impossible. Not only in LTE, but in most wireless communication, the receiver has the same problem (the same question). In WCDMA R99, the slot format and TFCI carry this information; in HSDPA, HS-SCCH carries it; and in HSUPA, E-TFCI carries it.

In terms of protocol implementation with respect to carrying this information, R99 seems to be the most complicated one. You had to define all the possible combinations of resource allocation in the form of a TFCS (a kind of look-up table for TFCI), you had to convey this information through L3 messages (e.g., the Radio Bearer Setup message and RRC Connection Setup message), and the transmitter also had to configure itself according to the table. A lot of errors, meaning headaches, came from mismatches between the TFCS information you configured in the L3 message and the configuration the transmitter applied to itself (the transmitter's lower layer configuration). It was too much headache for me. HSDPA relieved the headache a lot, since it carries this information directly on HS-SCCH and this job is done by the MAC layer. The resource allocation information carried by HS-SCCH is called 'TFRI'. So I don't have to care much about the L3 message, but I still need to jump around multiple different 3GPP documents to define any meaningful TFRIs. Another complication is that even in HSDPA we still use the R99 DPCH for power control and signaling purposes, so I cannot completely remove the headache of handling TFCS.

Now in LTE, this information is carried by DCI as I explained above, and we only have to care about a couple of parameters like the number of RBs, the starting point of the RBs and the modulation scheme; I don't have to care about configuring these things in RRC messages. This is a kind of blessing to me.

As one example showing how/when DCI is used, refer to http://jaekuryu.blogspot.com/2010/01/lte-signalinig-essentials.html section "Uplink Data Transmission Scheduling - Persistent Scheduling"

Types of DCIs

DCI carries the following information :
i) UL resource allocation (persistent and non-persistent)
ii) Descriptions about DL data transmitted to the UE.

L1 signaling is done by DCI, and up to 8 DCIs can be configured in the PDCCH. These DCIs can have 6 formats: 1 format for UL scheduling, 2 formats for non-MIMO DL scheduling, 1 format for MIMO DL scheduling and 2 formats for UL power control.

Format 0 : UL SIMO and UL Power Control. This functions as a Grant for UL transmission
Format 1, 1A : DL SIMO and UL Power Control
Format 2 : DL MIMO and UL Power Control
Format 3 : UL Power Control Only (for multiple UEs)
Format 3A : UL Power Control Only (for multiple UEs)


DCI has various formats for the information sent to define resource allocations. The resource allocation information contains the following items:
i) number of resource blocks being used
ii) duration of allocation in TTI
iii) support for multiple antenna transmission

What determines a DCI Format for the specific situation ?

There are two major factors that determine the DCI format for a specific situation:
i) RNTI Type
ii) Transmission Mode

This means that you cannot change just one of these parameters arbitrarily; you always have to think of the relationships among them when you change one of these parameters. Otherwise you will spend a long time troubleshooting -:)

The tables from 3GPP 36.213 show the relationships between RNTI type, transmission mode and DCI format.





Any relation between DCI formats and Layer 3 signaling messages ?

Yes, there is a relationship. You have to know which DCI format is required for which RRC message. The following tables from 3GPP 36.321 show the relationship between RNTI and logical channel, and from them you know which RRC message is carried by which logical channel. So with a two-step induction, you can figure out the link between an RRC message and its corresponding DCI format.


For example, if you look at the "Security Mode Command" message in section 6.2.2 of 36.331, it says

Signalling radio bearer: SRB1
RLC-SAP: AM
Logical channel: DCCH
Direction: E-UTRAN to UE

If you look at the table, you will see that this message uses C-RNTI, and you can then figure out the possible candidates from Table 7.1-5 of 36.213; if you also have the details of the transmission mode, you can pinpoint exactly which DCI format to use for this message in a specific case. Assuming the TM mode in this case is TM1 and the scheduling is dynamic, Table 7.1-2 shows that this case uses C-RNTI, and with this RNTI type and TM mode, Table 7.1-5 shows that this case uses DCI Format 1 or DCI Format 1A.
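If you do this lookup often, it is worth capturing the table in code. The dictionary below is a partial reconstruction of the C-RNTI rows of Table 7.1-5 as I recall them; verify it against your version of 36.213 before relying on it.

# Partial reconstruction (assumed, from memory) of 36.213 Table 7.1-5:
# DCI formats a UE monitors on C-RNTI for dynamic scheduling, per TM mode.
DCI_BY_TM_CRNTI = {
    "TM1": ["1A", "1"],
    "TM2": ["1A", "1"],
    "TM3": ["1A", "2A"],
    "TM4": ["1A", "2"],
}

def candidate_dci_formats(tm):
    return DCI_BY_TM_CRNTI.get(tm, [])

print(candidate_dci_formats("TM1"))   # ['1A', '1'] -> matches the example above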

DCI Decoding Examples

Example 1 > DCI Format 0, value = 0x2584A800

You can figure out Start_RB and N_RB (the number of allocated RBs) from the RIV value.

How can I calculate Start_RB and N_RB from the RIV? The simple calculation is as follows:
i) N_RB = Floor(RIV / MAX_N_RB) + 1 = Floor(1200 / 50) + 1 = 25, where MAX_N_RB = 50 in this case since this is a 10 MHz system BW.
ii) Start_RB = RIV mod MAX_N_RB = 1200 mod 50 = 0
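The same arithmetic in code form. Note that this simple inversion applies to the non-mirrored RIV case (N_RB - 1 <= Floor(MAX_N_RB / 2), which holds for this example); the spec defines a mirrored encoding for the other case.

def decode_riv(riv, max_n_rb):
    n_rb = riv // max_n_rb + 1     # number of allocated RBs
    start_rb = riv % max_n_rb      # first allocated RB
    return start_rb, n_rb

print(decode_riv(1200, 50))        # (0, 25) for a 10 MHz (50 RB) system, as above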

Saturday, March 13, 2010

Protocol Sequence : Typical Packet Call

1) MIB
2) SIB 1
3) SIB 2
4) RRC : PRACH Preamble
5) RRC : RACH Response
6) RRC : RRC Connection Request
7) RRC : RRC Connection Setup
8) RRC : RRC Connection Setup Complete + NAS : Attach Request
9) RRC : DL Information Transfer + NAS : Authentication Request
10) RRC : UL Information Transfer + NAS : Authentication Response
11) RRC : DL Information Transfer + NAS : Security Mode Command
12) RRC : UL Information Transfer + NAS : Security Mode Complete
13) RRC : Security Mode Command
14) RRC : Security Mode Complete
15) RRC : RRC Connection Reconfiguration + NAS : Attach Accept
16) RRC : RRC Connection Reconfiguration Complete + NAS : Attach Complete
17) RRC : RRC Connection Release
18) RRC : PRACH Preamble
19) RRC : RACH Response
20) RRC : RRC Connection Request
21) RRC : RRC Connection Setup
22) RRC : RRC Connection Setup Complete + NAS : Service Request
23) RRC : Security Mode Command
24) RRC : Security Mode Complete
25) RRC : RRC Connection Reconfiguration + NAS : Activate Dedicated EPS Bearer Context Request
26) RRC : RRC Connection Reconfiguration Complete + NAS : Activate Dedicated EPS Bearer Context Accept
27) RRC : UL Information Transfer + NAS : Deactivate Dedicated EPS Bearer Context Accept
28) RRC : RRC Connection Release

LTE Unique Sequences

Even though the overall sequence is pretty similar to the WCDMA sequence, there are a couple of points that differ from WCDMA.

The first point to look at is that in LTE the 'RACH Preamble' is shown as part of the RRC message sequence. As you know, the RACH process existed in WCDMA as well, but there it was part of the physical layer process.

Another part I notice is that RRC Connection Setup Complete and Attach Request are carried in a single step.

These are the differences you can notice just by looking at the message types; there are more differences you will find when you go into the information elements of each message, as you will see in the following sections.

Overall Comparison with WCDMA

The first thing you will notice is that there are far fewer SIBs being transmitted in LTE compared to WCDMA. Of course there are more SIBs that are not transmitted in this sequence (LTE has 10 SIBs in total), but with only these two SIBs the network can transmit all the information needed to let the UE camp on it. In WCDMA there are 18 SIBs in total, and in most cases we used at least SIB 1, 3, 5, 7 and 11 even in very basic configurations. Some of the WCDMA SIBs, like SIB 5 and 11, have multiple segments. In LTE, the number of SIBs is small and none of them are segmented.

1) MIB

The MIB in LTE carries very minimal information (this is a big difference from the WCDMA MIB). The only information it carries is

i) Bandwidth
ii) PHICH configuration
iii) SystemFrameNumber

Of course, the most important information is the "Bandwidth".

According to 36.331 section 5.2.1.2, the MIB scheduling is as follows:
The MIB uses a fixed schedule with a periodicity of 40 ms and repetitions made within 40 ms. The first transmission of the MIB is scheduled in subframe #0 of radio frames for which the SFN mod 4 = 0, and repetitions are scheduled in subframe #0 of all other radio frames.


2) SIB 1

SIB1 in LTE contains information like that found in the WCDMA MIB, SIB1 and SIB3. The important information in SIB1 is

i) PLMN
ii) Tracking Area Code
iii) Cell Selection Info
iv) Frequency Band Indicator
v) Scheduling information (periodicity) of other SIBs

You may notice that LTE SIB1 is very similar to the WCDMA MIB.
Especially during initial test case development, you have to be very careful about item v). If you set this value incorrectly, none of the other SIBs will be decoded by the UE, and as a result the UE will not recognize the cell and will show the "No Service" message.

According to 36.331 section 5.2.1.2, the SIB1 scheduling is as follows:
The SystemInformationBlockType1 uses a fixed schedule with a periodicity of 80 ms and repetitions made within 80 ms. The first transmission of SystemInformationBlockType1 is scheduled in subframe #5 of radio frames for which the SFN mod 8 = 0, and repetitions are scheduled in subframe #5 of all other radio frames for which SFN mod 2 = 0.

This means that even though the SIB1 periodicity is 80 ms, different copies (redundancy versions, RV) of SIB1 are transmitted every 20 ms. In other words, at L3 you will see SIB1 every 80 ms, but at the PHY layer you will see it every 20 ms. For the detailed RV assignment for each transmission, refer to 36.321 section 5.3.1 (the last part of the section).
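Putting the quoted MIB and SIB1 schedules together, a small sketch like the following tells you which broadcast transmissions to expect at a given SFN and subframe; this is handy when you are staring at a PHY-level log.

def broadcast_occasions(sfn, subframe):
    hits = []
    if subframe == 0:   # MIB: subframe #0 of every frame, new TB when SFN mod 4 == 0
        hits.append("MIB (new TB)" if sfn % 4 == 0 else "MIB (repetition)")
    if subframe == 5 and sfn % 2 == 0:   # SIB1: subframe #5 of even frames
        hits.append("SIB1 (new TB)" if sfn % 8 == 0 else "SIB1 (repetition, new RV)")
    return hits

for sfn in range(8):
    print(sfn, broadcast_occasions(sfn, 0), broadcast_occasions(sfn, 5))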

One example of LTE SIB1 is as follows :

RRC_LTE:BCCH-DL-SCH-Message
BCCH-DL-SCH-Message ::= SEQUENCE
+-message ::= CHOICE [c1]
+-c1 ::= CHOICE [systemInformationBlockType1]
+-systemInformationBlockType1 ::= SEQUENCE [000]
+-cellAccessRelatedInfo ::= SEQUENCE [0]
+-plmn-IdentityList ::= SEQUENCE OF SIZE(1..6) [1]
+-PLMN-IdentityInfo ::= SEQUENCE
+-plmn-Identity ::= SEQUENCE [1]
+-mcc ::= SEQUENCE OF SIZE(3) OPTIONAL:Exist
+-MCC-MNC-Digit ::= INTEGER (0..9) [0]
+-MCC-MNC-Digit ::= INTEGER (0..9) [0]
+-MCC-MNC-Digit ::= INTEGER (0..9) [1]
+-mnc ::= SEQUENCE OF SIZE(2..3) [2]
+-MCC-MNC-Digit ::= INTEGER (0..9) [0]
+-MCC-MNC-Digit ::= INTEGER (0..9) [1]
+-cellReservedForOperatorUse ::= ENUMERATED [notReserved]
+-trackingAreaCode ::= BIT STRING SIZE(16) [0000000000000001]
+-cellIdentity ::= BIT STRING SIZE(28) [0000000000000000000100000000]
+-cellBarred ::= ENUMERATED [notBarred]
+-intraFreqReselection ::= ENUMERATED [notAllowed]
+-csg-Indication ::= BOOLEAN [FALSE]
+-csg-Identity ::= BIT STRING OPTIONAL:Omit
+-cellSelectionInfo ::= SEQUENCE [0]
+-q-RxLevMin ::= INTEGER (-70..-22) [-53]
+-q-RxLevMinOffset ::= INTEGER OPTIONAL:Omit
+-p-Max ::= INTEGER OPTIONAL:Omit
+-freqBandIndicator ::= INTEGER (1..64) [7]
+-schedulingInfoList ::= SEQUENCE OF SIZE(1..maxSI-Message[32]) [2]
+-SchedulingInfo ::= SEQUENCE
+-si-Periodicity ::= ENUMERATED [rf8]
+-sib-MappingInfo ::= SEQUENCE OF SIZE(0..maxSIB-1[31]) [0]
+-SchedulingInfo ::= SEQUENCE
+-si-Periodicity ::= ENUMERATED [rf8]
+-sib-MappingInfo ::= SEQUENCE OF SIZE(0..maxSIB-1[31]) [1]
+-SIB-Type ::= ENUMERATED [sibType3]
+-tdd-Config ::= SEQUENCE OPTIONAL:Omit
+-si-WindowLength ::= ENUMERATED [ms20]
+-systemInfoValueTag ::= INTEGER (0..31) [0]
+-nonCriticalExtension ::= SEQUENCE OPTIONAL:Omit


3) SIB 2

The important information on SIB2 is

i) RACH Configuration
ii) bcch, pcch, pdsch, pusch, pucch configuration
iii) sounding RS Configuration
iv) UE Timers

The following is one example of SIB2. It looks to me that LTE SIB2 is similar to WCDMA SIB5, configuring the various common channels.

RRC_LTE:BCCH-DL-SCH-Message
BCCH-DL-SCH-Message ::= SEQUENCE
+-message ::= CHOICE [c1]
+-c1 ::= CHOICE [systemInformation]
+-systemInformation ::= SEQUENCE
+-criticalExtensions ::= CHOICE [systemInformation-r8]
+-systemInformation-r8 ::= SEQUENCE [0]
+-sib-TypeAndInfo ::= SEQUENCE OF SIZE(1..maxSIB[32]) [1]
+- ::= CHOICE [sib2]
+-sib2 ::= SEQUENCE [00]
+-ac-BarringInfo ::= SEQUENCE OPTIONAL:Omit
+-radioResourceConfigCommon ::= SEQUENCE
+-rach-Config ::= SEQUENCE
+-preambleInfo ::= SEQUENCE [0]
+-numberOfRA-Preambles ::= ENUMERATED [n52]
+-preamblesGroupAConfig ::= SEQUENCE OPTIONAL:Omit
+-powerRampingParameters ::= SEQUENCE
+-powerRampingStep ::= ENUMERATED [dB2]
+-preambleInitialReceivedTargetPower ::= ENUMERATED [dBm-104]
+-ra-SupervisionInfo ::= SEQUENCE
+-preambleTransMax ::= ENUMERATED [n6]
+-ra-ResponseWindowSize ::= ENUMERATED [sf10]
+-mac-ContentionResolutionTimer ::= ENUMERATED [sf48]
+-maxHARQ-Msg3Tx ::= INTEGER (1..8) [4]
+-bcch-Config ::= SEQUENCE
+-modificationPeriodCoeff ::= ENUMERATED [n4]
+-pcch-Config ::= SEQUENCE
+-defaultPagingCycle ::= ENUMERATED [rf128]
+-nB ::= ENUMERATED [oneT]
+-prach-Config ::= SEQUENCE
+-rootSequenceIndex ::= INTEGER (0..837) [22]
+-prach-ConfigInfo ::= SEQUENCE
+-prach-ConfigIndex ::= INTEGER (0..63) [3]
+-highSpeedFlag ::= BOOLEAN [FALSE]
+-zeroCorrelationZoneConfig ::= INTEGER (0..15) [5]
+-prach-FreqOffset ::= INTEGER (0..94) [2]
+-pdsch-Config ::= SEQUENCE
+-referenceSignalPower ::= INTEGER (-60..50) [18]
+-p-b ::= INTEGER (0..3) [0]
+-pusch-Config ::= SEQUENCE
+-pusch-ConfigBasic ::= SEQUENCE
+-n-SB ::= INTEGER (1..4) [1]
+-hoppingMode ::= ENUMERATED [interSubFrame]
+-pusch-HoppingOffset ::= INTEGER (0..98) [4]
+-enable64QAM ::= BOOLEAN [FALSE]
+-ul-ReferenceSignalsPUSCH ::= SEQUENCE
+-groupHoppingEnabled ::= BOOLEAN [TRUE]
+-groupAssignmentPUSCH ::= INTEGER (0..29) [0]
+-sequenceHoppingEnabled ::= BOOLEAN [FALSE]
+-cyclicShift ::= INTEGER (0..7) [0]
+-pucch-Config ::= SEQUENCE
+-deltaPUCCH-Shift ::= ENUMERATED [ds2]
+-nRB-CQI ::= INTEGER (0..98) [2]
+-nCS-AN ::= INTEGER (0..7) [6]
+-n1PUCCH-AN ::= INTEGER (0..2047) [0]
+-soundingRS-UL-Config ::= CHOICE [setup]
+-setup ::= SEQUENCE [0]
+-srs-BandwidthConfig ::= ENUMERATED [bw3]
+-srs-SubframeConfig ::= ENUMERATED [sc0]
+-ackNackSRS-SimultaneousTransmission ::= BOOLEAN [TRUE]
+-srs-MaxUpPts ::= ENUMERATED OPTIONAL:Omit
+-uplinkPowerControl ::= SEQUENCE
+-p0-NominalPUSCH ::= INTEGER (-126..24) [-85]
+-alpha ::= ENUMERATED [al08]
+-p0-NominalPUCCH ::= INTEGER (-127..-96) [-117]
+-deltaFList-PUCCH ::= SEQUENCE
+-deltaF-PUCCH-Format1 ::= ENUMERATED [deltaF0]
+-deltaF-PUCCH-Format1b ::= ENUMERATED [deltaF3]
+-deltaF-PUCCH-Format2 ::= ENUMERATED [deltaF0]
+-deltaF-PUCCH-Format2a ::= ENUMERATED [deltaF0]
+-deltaF-PUCCH-Format2b ::= ENUMERATED [deltaF0]
+-deltaPreambleMsg3 ::= INTEGER (-1..6) [4]
+-ul-CyclicPrefixLength ::= ENUMERATED [len1]
+-ue-TimersAndConstants ::= SEQUENCE
+-t300 ::= ENUMERATED [ms1000]
+-t301 ::= ENUMERATED [ms1000]
+-t310 ::= ENUMERATED [ms1000]
+-n310 ::= ENUMERATED [n1]
+-t311 ::= ENUMERATED [ms1000]
+-n311 ::= ENUMERATED [n1]
+-freqInfo ::= SEQUENCE [00]
+-ul-CarrierFreq ::= INTEGER OPTIONAL:Omit
+-ul-Bandwidth ::= ENUMERATED OPTIONAL:Omit
+-additionalSpectrumEmission ::= INTEGER (1..32) [1]
+-mbsfn-SubframeConfigList ::= SEQUENCE OF OPTIONAL:Omit
+-timeAlignmentTimerCommon ::= ENUMERATED [sf750]
+-nonCriticalExtension ::= SEQUENCE OPTIONAL:Omit

4) RRC : PRACH Preamble


5) RRC : RACH Response

6) RRC : RRC Connection Request

Interim Comments

From this point on, the L3 messages carry both RRC and NAS messages. So you need an overall understanding of NAS messages as well as RRC messages.
You need to understand the details of TS 29.274 to handle the data-traffic-related IEs in NAS messages. Of course it would be impossible to understand all those details within a day; my approach is to go through the following tables as often as possible until I get some big picture in my mind. You may have to go back and forth between 36.331 and 29.274.

* Table 7.2.2-1: Information Elements in a Create Session Response
* Table 7.2.3-1: Information Elements in a Create Bearer Request
* Table 7.2.3-2: Bearer Context within Create Bearer Request
* Table 7.2.5-1: Information Elements in a Bearer Resource Command
* Table 7.2.7-1: Information Elements in a Modify Bearer Request
* Table 7.2.8-1: Information Elements in a Modify Bearer Response
* Table 7.2.9.1-1: Information Elements in a Delete Session Request
* Table 7.2.9.2-1: Information Elements in a Delete Bearer Request
* Table 7.2.10.2-1: Information Elements in Delete Bearer Response
* Table 7.3.5-1: Information Elements in a Context Request
* Table 7.3.6-2: MME/SGSN UE EPS PDN Connections within Context Response
* Table 7.3.8-1: Information Elements in an Identification Request


7) RRC : RRC Connection Setup

As you see in the following diagram, the most important IE (information element) in the RRC Connection Setup message is "RadioResourceConfigDedicated", under which you can set up the SRB, DRB, MAC and PHY config. Even though there are IEs related to the DRB, in most cases we set up only SRBs in RRC Connection Setup. It is similar to the WCDMA RRC Connection Setup message, in which you usually set up only the SRB (control channel part) even though there are IEs for the RB (data traffic).

One thing you have to notice is that you will find the "RadioResourceConfigDedicated" IE not only in the RRC Connection Setup message but also in the RRC Connection Reconfiguration message. In that case, you have to be careful that the one you set in the RRC Connection Reconfiguration message properly matches the one you set in the RRC Connection Setup message. It means you have to understand the correlation between the RRC Connection Setup message and the RRC Connection Reconfiguration message very clearly. This is also very similar to the WCDMA case.



8) RRC : RRC Connection Setup Complete + NAS : Attach Request

15) RRC : RRC Connection Reconfiguration + NAS : Attach Accept

An important procedure done in this step is "ESM : Activate Default EPS Bearer Context Request".

One thing you notice here is that in LTE the packet call is initiated by the network, whereas in UMTS most packet calls are initiated by the UE. The network specifies an IP address for the UE here.



16) RRC : RRC Connection Reconfiguration Complete + NAS : Attach Complete



An important procedure done in this step is "ESM : Activate Default EPS Bearer Context Accept".

25) RRC : RRC Connection Reconfiguration + NAS : Activate Dedicated EPS Bearer Context Request








Sunday, February 28, 2010

From R99 to LTE


I have been working on creating various test cases on the UMTS side for a couple of years. During this period, I saw the technology change from R99 to HSDPA, to HSUPA, and now to LTE. But in terms of RRC and the layers above it, which I have mostly been working on, I haven't seen many differences. Sometimes I got the impression that the higher layer signaling (RRC and above) gets even simpler as the technology moves from one generation to the next. If you look at the higher layer technologies of LTE, you will feel that LTE signaling looks simpler than the other existing technologies.

Then how could we get higher data rates, lower latencies and more effective usage of radio channels with simpler signaling? The secret is that as the technology evolves, the higher layer signaling stays similar or even gets simpler, while the lower layers (PHY and MAC) get more complicated, and these lower layers are the ones that enable us to enjoy all those evolved features, especially high data rates and low latencies. So to understand the details of the evolved technologies, we have to understand the details of the lower layers, e.g., PHY and MAC.

Here is a list of questions you should have when you want to study a new technology:
i) What kind of additional PHY channels have been added compared to R99?
ii) What kind of information is carried by the additional physical channels?
iii) What kind of MAC entities have been added compared to R99?
iv) What is the role of the new MAC entities, especially in terms of scheduling?

Why we needed HSDPA ?

Before I start on the questions listed above, let's think about why we wanted a new technology called HSDPA. In any communication technology, the biggest motivation for a new technology is to increase the data rate. Then a question arises: how can we increase the data rate? Regardless of the communication type, we have usually taken similar approaches, as follows:

i) Change modulation scheme
ii) Decrease the latency between the communicating parties
iii) Optimize at the multi-user level rather than at the single-user level

Let's take some examples. The evolution path of Bluetooth was from the standard rate to 2 Mb EDR (Enhanced Data Rate) to 3 Mb EDR, and the biggest change at each step was the change of modulation scheme. What happened in the GSM evolutionary path, GSM to GPRS to EGPRS (EDGE) to EDGE Evolution? The biggest changes on this path were modulation scheme changes as well. From R99 to HSDPA, we also introduced a new modulation scheme called 16 QAM. The advantage of using a new modulation scheme to increase the data rate is the simplicity of the concept; a disadvantage is that it requires hardware changes.

The next step would be to decrease the latency between the communicating parties. How can we achieve this? Increasing the physical propagation speed between the two communicating parties? That is almost impossible, because the propagation speed is already the speed of light. Then what is another option? It is improving the scheduling algorithm of the communication. What does that mean? It is a little hard to explain in a simple way, so I will talk about this in a separate section.

Lastly, let's think about optimization at the multi-user level rather than at the single-user level. Suppose a situation where 10 users are communicating with one Node B. In R99, each user has a separate and independent communication path to the Node B via a special channel called DPCH (Dedicated Physical Channel). Optimization in this case means ten separate optimization processes, one for each user. OK, now let's say each of the users is getting the maximum data rate for the specific UE in the specific environment to which that UE is exposed. Does this guarantee that the whole resource of the Node B is fully utilized? It is hard to say "yes" to this question. Isn't there a possibility that some of the resources are being wasted? It would be hard to say "no" to that question. We will think about this issue in the next section.

Introduction of new Channels in HSDPA

In HSDPA, four new physical channels were introduced:

i) HS-DSCH
ii) HS-SCCH
iii) HS-DPCCH
iv) F-DPCH

With the introduction of these four channels, we could implement many of the methods to improve the data rate that were briefly described in the previous section.

The most important channel is definitely HS-DSCH (High Speed Downlink Shared Channel). As the name implies, it is a SHARED channel, whereas in R99 we used a DEDICATED channel. It means all the users within a cell share a single channel, which is one big pipe, rather than each user having its own dedicated channel, which is a small pipe. With this, the network can optimize the resource allocation among multiple users more efficiently. As an extreme example, the network can allocate 91% of the resources to a single UE and only 1% to each of the remaining 9 users, when those nine users do not require much resource or are in such a poor environment that they can utilize only a small fraction of the transmission capacity. With dedicated channels, we cannot do this kind of extreme resource allocation, because each dedicated channel requires a certain minimum resource allocation even when the real utilization is lower than that minimum.

I said HS-DSCH is a shared channel. It means the whole data on the channel is received by all users. Then how can a UE figure out whether the data is for that UE or for some other UE? I also said that in HSDPA multiple modulation schemes are used, QPSK and 16 QAM. Then how can a UE know whether the data is QPSK modulated or 16 QAM modulated? To carry all this information, another new channel was introduced: HS-SCCH (High Speed Shared Control Channel). The information carried by HS-SCCH is as follows:
i) Transport format information - the code tree for the data, the modulation scheme, the transport block size
ii) Hybrid-ARQ related information

I said at the beginning that HSDPA uses a shared channel and tries to achieve optimal resource allocation at the multi-user level. To do this, the network needs to know the exact status of the UE, and it needs to know whether the data it sent successfully reached its destination (a specific UE). To enable this, the UE repeatedly reports its channel quality and data reception status to the network. To send this information, the UE uses a special channel called HS-DPCCH. This channel carries CQI (Channel Quality Indicator) and Ack/Nack information.

So far so good. It seems there are only advantages to introducing these new channels, but nothing gains 100% without losing anything. There is a drawback to relying on this shared channel method: the power control issue. You know that one of the critical requirements of WCDMA technology is very sophisticated power control. If the UE power is too low, the Node B has difficulties decoding it, and if the power is too strong, it acts as noise to the other UEs communicating with the Node B. For this purpose, the Node B sends each UE power control commands periodically, and these commands have to be different for every UE, because each UE may be in a different channel condition; in other words, power control is inherently a "dedicated" message. But as I explained, HS-DSCH is a shared channel. Then how can the Node B deliver the power control commands to each specific UE? The solution was to use the R99 dedicated channel (DPCH) carrying only the power control commands. But using a full DPCH only for carrying small power control commands is a waste of resources. To improve this situation, a new channel called F-DPCH (Fractional DPCH) was introduced in Release 6. The details of F-DPCH are out of the scope of this section, so I will not explain this channel any further.

Improved scheduling in HSDPA

The whole purpose of improving the scheduling is to decrease the latency between the communicating parties, in this case a UE and the network. The basic idea of this improvement is to refine the granularity of the scheduling period.

In a WCDMA network, this scheduling happens every TTI (Transmission Time Interval), and in R99 the common TTI is 10 ms (sometimes a 20 ms or 40 ms TTI is used). In HSDPA, the TTI has been changed to 2 ms. Why 2 ms? Why can't it be 1 ms or 4 ms? It is just the result of a trade-off of various factors. If the TTI were longer, like 4 ms or 6 ms, the effect of refining the scheduling interval would not be significant. However, if the TTI were too short, the scheduling overhead would grow relative to the gain from the refinement, because executing the scheduling algorithm requires a certain amount of time and resources.

Another means of decreasing latency comes from the way data with errors is handled. In R99, those errors can only be detected by RLC, via Ack/Nack from the other party, and whether a retransmission is requested is determined by an even higher layer. But in HSDPA, the errors are detected at the physical layer. When a UE receives data, it checks the CRC and sends an Ack or Nack on HS-DPCCH, transmitted 5 ms after it received the data. If the UE sends a Nack, the network retransmits the data. This error detection and retransmission mechanism is called H-ARQ (Hybrid ARQ).
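As a toy illustration of this control flow (CRC check at the receiver, Nack triggering a retransmission), here is a sketch. Real HSDPA HARQ also soft-combines the retransmissions, which is omitted here, and the function names are made up.

import zlib

def crc_ok(payload, expected_crc):
    return zlib.crc32(payload) == expected_crc

def receive_with_harq(transmissions, expected_crc, max_retx=3):
    # `transmissions` plays the role of the original block plus its retransmissions
    for attempt, payload in enumerate(transmissions[: max_retx + 1]):
        if crc_ok(payload, expected_crc):
            return f"ACK after {attempt + 1} transmission(s)"
        # CRC failed -> send Nack; the network retransmits (next element)
    return "HARQ failure, left to RLC / higher layers"

good = b"data block"
bad = b"data blosk"                                       # corrupted copy
print(receive_with_harq([bad, good], zlib.crc32(good)))   # ACK after 2 transmission(s)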

Another mechanism for improved scheduling adopted in HSDPA is to allocate optimized resources to each UE. How can this be achieved? The network needs some information to make the best decision for each UE. The important information for this decision making is


i) CQI
ii) Buffer Status
iii) Priority of the data

CQI is calculated by the UE based on the signal-to-noise ratio of the received common pilot. If you look into the details of TFRI determination by the MAC layer, you will notice that CQI is the only parameter used to determine the TFRI. (What is TFRI? I will talk about this later in this article or somewhere else. It is very important for implementing a test case for maximum throughput testing.)

The buffer status shows how much data is stored in the buffer for each UE. If there is no data in the buffer, the Node B should not allocate any resources to that UE. So checking the buffer status is also important for optimal resource allocation.

The overall scheduling algorithm is to allocate more resources to UEs that report higher CQI, but there are some cases where the Node B should allocate a certain amount of resources to a specific UE even when it reports a poor CQI. Common examples of this situation are an RRC message with a tight timeout value, or streaming data with an expiration time. To handle these situations, the scheduler (the Node B MAC layer, MAC-hs) assigns a priority to each data block and puts the blocks into separate priority queues.

What I have explained so far is just a brief overview, meant to motivate further study. If you are involved in test case creation or protocol stack development, this level of understanding will not help much. If you want to study further, so that it gives you practical help for test case development or protocol stack optimization, I recommend studying the details of MAC-hs and the TFRI selection mechanism.

Why we needed HSUPA ?

In the previous section, we talked about why we needed HSDPA and how HSDPA improved the data throughput. But HSDPA improved only the downlink throughput and did nothing about the uplink. So the natural next step of the evolution was improvement on the uplink side. This is how we came up with another technology called HSUPA.

The overall mechanism by which HSUPA improves the uplink throughput is similar to the one used in HSDPA. So if you have become familiar with the HSDPA mechanism, you will not have difficulties understanding the HSUPA mechanism.

Introduction of new Channels in HSUPA

As in HSDPA, several new channels were introduced to implement HSUPA. They are as follows:
i) E-DPDCH
ii) E-DPCCH
iii) E-HICH
iv) E-RGCH
v) E-AGCH

Briefly speaking, E-DPDCH is the equivalent of HS-DSCH, E-DPCCH is the equivalent of HS-SCCH, and E-HICH is the equivalent of HS-DPCCH. But there is a main difference between these HSUPA channels and the HSDPA channels: E-DPDCH and E-DPCCH are dedicated channels, whereas HS-DSCH and HS-SCCH are shared channels. This is understandable, because in the HSDPA case the data transmission is one-to-many, while in the HSUPA case it is one-to-one, so it makes sense to use dedicated channels in HSUPA.

There is another big difference between HSDPA and HSUPA: the scheduling. Regardless of whether it is HSDPA or HSUPA, the scheduler (the decision maker) is in the Node B, not in a UE. For scheduling, we need two very important pieces of information: the channel quality and the buffer status. In HSDPA, the only information the scheduler needs to get from the target of the transmission is the channel quality, which is provided via HS-DPCCH; the buffer status is already available to the scheduler, because the transmission buffer is located in the same place (the Node B) as the scheduler. So in HSDPA the transmitter (Node B) can send data anytime the situation allows, but in HSUPA the transmitter (UE) cannot send data anytime it wants. Before the UE sends data, it has to check whether the target (the receiver, the Node B) is ready and has enough resources to receive the data. For the UE to check the status of the receiver (Node B) and get approval from the Node B, E-AGCH (Absolute Grant Channel) and E-RGCH (Relative Grant Channel) are used. The Node B (the scheduler) sends scheduling grants to the UE indicating when, and at what data rate, the UE can transmit data.

The differences between E-AGCH and E-RGCH are:
i) E-AGCH is a shared channel and E-RGCH is a dedicated channel
ii) E-AGCH is typically used for large changes in the data rate, and E-RGCH is used for smaller adjustments.

Scheduling for HSUPA

HSUPA scheduling is quite a complex process, but the overall process in simple form is as follows:
i) UE sends a grant request to the Node B
ii) Node B sends an Absolute Grant (AGCH) and Relative Grants (RGCH) to the UE
iii) UE sets the Serving Grant value based on the AGCH and RGCH values
iv) Based on the Serving Grant value, the UE sets the E-TFC value for the specific moment of the transmission.

For further details, we need to study the detailed mechanism of MAC-e.
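As a taste of step iii), here is a heavily simplified sketch of how a serving grant could be maintained from absolute and relative grants. The real MAC-e procedure works on an indexed grant table with per-HARQ-process handling; the step size and units here are made up.

def update_serving_grant(sg, ag=None, rg=None, step_db=1.0):
    # One update of the serving grant from an absolute/relative grant
    if ag is not None:       # an absolute grant overrides the current value
        return ag
    if rg == "UP":           # relative grants nudge the current value
        return sg + step_db
    if rg == "DOWN":
        return sg - step_db
    return sg                # "HOLD" / nothing received

sg = 10.0                                     # hypothetical starting grant
for event in ({"ag": 15.0}, {"rg": "UP"}, {"rg": "DOWN"}):
    sg = update_serving_grant(sg, **event)
    print(sg)                                 # 15.0, 16.0, 15.0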

Why we needed HSPA+

We have seen the evolutionary path from R99 to HSDPA and HSUPA, and now we have speed improvements on both the uplink and the downlink side. We call the combination of HSDPA and HSUPA "HSPA". Going forward, HSPA evolved further, and this evolved version is called HSPA+. Now a question arises: what are the factors that improve HSPA in terms of speed? The following are the key items for HSPA+.

i) CPC - DL DRX/UL DTX, HS-SCCH less operation, Enhanced F-DPCH
ii) Layer 2 Improvement
iii) 64 QAM for HSDPA
iv) 16 QAM for HSUPA
v) Enhanced Cell_FACH

You may notice right away what 64 QAM and 16 QAM are for: these are mainly for increasing the size of the transmission pipe in the physical layer, and I will not explain them any further. Up until HSPA, most of the effort to increase the throughput was made in the physical layer or MAC layer, but there are bottlenecks at every layer. If we removed all the bottlenecks from every layer, we would get the ideal maximum throughput, but this kind of bottleneck removal cannot be done in a single shot. With HSPA+, a big bottleneck at layer 2 (RLC) was removed. The RLC PDU size in HSDPA was 320 bits or 640 bits. Suppose you send one IP packet of 1.5 KB: it has to be split into multiple RLC PDUs and sent over multiple transmissions. But in HSPA+, the maximum RLC PDU size can be over 3 KB, so even the largest IP packet can be transmitted at once. This is done by the "L2 Improvement".
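The segmentation arithmetic behind this argument is simple enough to show directly (ignoring RLC header overhead):

import math

def pdus_needed(ip_packet_bits, rlc_pdu_bits):
    return math.ceil(ip_packet_bits / rlc_pdu_bits)

packet = 1500 * 8                   # a 1,500-byte IP packet = 12,000 bits
print(pdus_needed(packet, 320))     # 38 PDUs with 320-bit RLC PDUs
print(pdus_needed(packet, 640))     # 19 PDUs with 640-bit RLC PDUs
print(pdus_needed(packet, 12000))   # 1 PDU once the PDU can hold the whole packet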

Let's consider another situation, say web browsing. While you are reading a page, you are not downloading any data, and there is no data communication between the UE and the network. During this time, the RRC state is usually moved to Cell_FACH or Cell_PCH. When you finish reading the page and try to go to the next page, the RRC state has to change back to Cell_DCH. CPC is a mechanism to reduce the time for these state changes and give the user the experience of a "continuous connection".

Another way to improve the problems related to RRC state changes is to increase the data rate in Cell_FACH. Theoretically you could transmit data in Cell_FACH in the previous technology, and ideally the throughput is around 34 kbps. But if you really try it, you will notice the real throughput is much less than this. In HSPA+, the throughput in Cell_FACH has been increased considerably by Enhanced Cell_FACH.

Finally LTE !


I will not talk much about LTE here, because this whole blog is about LTE. Just a couple of quick comments in terms of the evolutionary path. In LTE, both uplink and downlink use shared channels only; there is no dedicated channel. In terms of modulation scheme, it can have QPSK, 16 QAM and 64 QAM on the downlink, and QPSK and 16 QAM on the uplink side. One TTI became 1 ms, which means PHY/MAC layer scheduling has to be much faster than in the previous technologies. To make the best use of these features, MAC layer scheduling has become much more sophisticated (implying more complicated), and it uses more information from the UE to allocate resources dynamically. It uses CQI (in non-MIMO) as in HSDPA, and it also uses PMI (Precoding Matrix Indicator) and RI (Rank Indicator) in MIMO conditions.

The latency at almost every layer became much shorter than in the previous technologies (e.g., the UE to eNode B latency should be less than 5 ms). There are only two call states, "Idle" and "Connected", whereas the previous technology had multiple states (Idle, DCH, FACH, PCH) and the transitions among them took a long time. If you look at the section in this blog dealing with LTE signaling, you will see that the number of message transactions for registration and call setup has become smaller.

If you go a little deeper into the signaling side, you will notice that a single reconfiguration message, "RRC Connection Reconfiguration", does all kinds of dynamic reconfiguration from the higher layer, whereas there were three different types of reconfiguration in WCDMA/HSPA, called "Radio Bearer Reconfiguration", "Transport Channel Reconfiguration" and "Physical Channel Reconfiguration". (Much less headache for the test case developer -:)

Simply put, in LTE the PHY layer capacity has been increased with higher modulation schemes, the latency has become shorter, and the signaling has been simplified. Does everything sound too fancy? Superficially, yes. But I am not sure how much headache I will have when it comes to MAC layer scheduling for optimal use of the resources and best performance. We will see.

Sunday, February 7, 2010

LTE RF Test and Measurement

For any wireless communication device, we have to go through two large groups of testing: one for testing the transmit path and the other for testing the receive path.

For a wireless communication device to work properly, it should meet the following hardware requirements:

i) The device should transmit a signal strong enough to make sure it reaches the other party of the communication.
ii) The device should not transmit a signal so strong that it interferes with the communication between other parties.
iii) The device should transmit a signal of good enough quality that it can be decoded/corrected by the other party.
iv) The device should transmit the signal at the exact frequency that has been allocated for the communication.
v) The device should not generate any noise outside of the frequency range that has been allocated for the device.

If any of these conditions deviates too much from the specification, the device cannot communicate with the other party, or it prevents other devices from communicating. In terms of measurement, items i) and ii) belong to "power measurement", item iii) is related to "modulation analysis", and item iv) falls into "frequency error measurement". Item v) is also a kind of "power measurement", but the measurement area in the frequency domain is different from items i) and ii). In any case, if you have equipment that can perform the following three measurements for your communication technology, you can cover the most critical part of transmit path testing.

a) Power Measurement
b) Modulation Analysis
c) Frequency Error Measurement

Now let's think about the receive path measurement. What are the most important receiver characteristics for a communication device?

i) The receiver must be able to successfully decode the signal coming from a transmitter even when the signal strength is very low.
ii) The receiver must be able to successfully decode the signal coming from a transmitter even when there is a certain level of noise around the signal.

In terms of measurement logic, items i) and ii) are the same. The equipment sends a known signal pattern, lets the receiver decode it, and then compares the original signal from the equipment with the signal decoded by the receiver to see how much they differ. The more they differ, the poorer the receiver quality. We call this method "BER (Bit Error Rate) measurement". Item i) measures the BER when the input signal to the device is very low, and item ii) measures the BER when there is noise on the input signal.
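The comparison logic itself is trivial, which is the point: all the difficulty of receiver testing is in generating the controlled signal conditions, not in the counting. A minimal sketch:

def bit_error_rate(sent_bits, decoded_bits):
    assert len(sent_bits) == len(decoded_bits)
    errors = sum(s != d for s, d in zip(sent_bits, decoded_bits))
    return errors / len(sent_bits)

sent    = [1, 0, 1, 1, 0, 0, 1, 0]
decoded = [1, 0, 0, 1, 0, 1, 1, 0]       # two bits flipped by the channel
print(bit_error_rate(sent, decoded))     # 0.25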

Before we go on to LTE measurement, pick any technology you are already familiar with, make a list of the measurements in your test plan, and try to map those items onto the measurement principles I described above. Once you are familiar with this mapping, you will understand the LTE measurement items more easily.

LTE RF Measurement Items

Now let's look in a little more detail at LTE RF measurement. The first thing I did was to make a list of measurement items from 3GPP 36.521-1 and try to map my measurement principles onto each of the measurement items.

Here are the transmitter measurement items first. You see a lot of "power measurement" and some "modulation analysis". Why do we have so many different power measurements and so many different modulation analyses? How do they differ from each other? This is a question you have to answer on your own. The answer itself is described in 3GPP 36.521-1, but the question is how much of what is described there I can understand just by reading it.

The first step would be to read "Test Purpose", "Initial Condition", "Test Procedure" section of each test case as often as possible and try at least to be familiar to each test case.


Here are the receiver measurement items.


Snapshots of LTE Uplink Signals for RF Testing

As I mentioned earlier, it is not easy to understand all the details of LTE RF measurement just by reading the specification. I have read the test case purpose, "Initial Condition" and "Test Procedure" over and over, but still everything is vague. As I try to get more into the details, the first obstacle that blocks me is the large number of complicated tables describing the test conditions. Of course, we saw this kind of table in other technology specifications like CDMA and WCDMA, but the tables for LTE measurement look bigger and more complicated. So I decided to look at some of the signal patterns described in the specification on a spectrum analyzer, so that I can get some intuitive idea of the overall RF characteristics of each condition.








Even though we get a new technology every couple of years and LTE is new to many people, RF test and measurement technology has a lot in common across wireless communication technologies. If you have experience with any wireless technology, e.g., CDMA, GSM, WCDMA, Bluetooth or WLAN, you will find the common logic in LTE.

Challenges for LTE RF Testing

One of the biggest challenges in LTE measurement for UE development or test engineers is that there are too many sub-tests with too many different parameter settings. Before I get into details, I want to briefly skim through the overall RF measurements, starting from C2K.

I don't have much experience with C2K measurement, but even with only a little experience I could tell there are far fewer measurement items in this area compared to WCDMA/HSDPA, and even compared to GSM/GPRS. As far as I remember, the following is almost all that I did for C2K.

i) Total Channel Power
ii) CDP (Code Domain Power)
iii) Rho
iv) Spectrum Emission
v) ACLR
vi) OBW (Occupied Bandwidth)

But the items listed above are more than what I experienced in C2K. For conformance, I think we may have to go through all of these items. But since C2K is a very mature technology now, in the RF development stage we wouldn't go through all of them. An extreme case I heard of was "just measure the total power; if there is no problem with it, usually there is no problem with the other parts".

Now let's look into WCDMA. For WCDMA R99 (non-HSPA), briefly, the list is:

i) Max Power
ii) Min Power
iii) On/Off Power
iv) RACH Power
v) EVM
vi) Spectrum Emission
vii) ACLR
viii) OBW (Occupied Bandwidth)

Just in terms of the list, it doesn't look much different from C2K. But in practice the engineer will meet various characteristics that look quite different from C2K. The first thing we can think of is that the channel bandwidth roughly triples compared to C2K, and this introduces a lot of complications in RF design. Another issue is that the RACH process in WCDMA is more complicated than the probing process in C2K and adds a couple of important test steps.

Now let's look further into HSDPA. You may think HSDPA would not be much different from R99 in terms of uplink measurement, because HSDPA is only about the downlink data rate. That is true in terms of the high level protocol, but in the physical/RF layer an important factor was added to the uplink in HSDPA: HS-DPCCH. HS-DPCCH is for the UE to report CQI and ACK/NACK to the BTS. The problem is that even with this additional channel, the UE has to maintain the total uplink power as before. So the UE recalculates/rearranges each of the physical channel powers. If you look at the RF conformance test case list, you will not find much difference in terms of test case items, but you will find that quite a few sub-items were added to the existing test cases due to the introduction of HS-DPCCH. (If you want to go into further detail, open up 3GPP 34.121 and find the test cases with the keyword "HS-DPCCH" in the test title.)

Going one step further into HSUPA, you also find no big difference in terms of measurement items. But as in the HSDPA case, a new physical channel was introduced, called E-DPCH. Even with this additional channel, the UE still has to maintain the total channel power as in R99. So, as you may guess, the UE has to recalculate/rearrange each of the physical channel powers. As a result, we get a couple of additional sub-items added to the RF testing.

Finally, let's think about LTE. What is the biggest difference between LTE and C2K/WCDMA/HSPA in terms of the PHY/RF layer? It would be OFDM. What kind of additional measurement items are introduced to RF testing due to OFDM? Since OFDM is made up of a lot of subcarriers with very narrow bandwidth, we have to measure most of the characteristics listed above for each OFDM subcarrier. But if we did all of the items for each of the subcarriers, it would take one full day just for one item.

Another big difference is that the LTE specification allows many different system bandwidths, whereas in C2K/WCDMA the system bandwidth is always the same. It means you have to measure the whole set of test items for multiple different system bandwidths, which multiplies the measurement time and the parameter settings in the measurement equipment. Based on the LTE specification, an LTE system bandwidth can be any of 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz or 20 MHz, whereas C2K has only a single chip rate of 1.2288 Mcps and WCDMA has only a single chip rate of 3.84 Mcps. Of course, a specific operator will use only one of these bandwidths in their network, but a mobile device manufacturer has to design a UE which supports all of them.

On top of this, there is another factor that makes LTE testing even more complex, especially for mobile phone design/test: the fact that the bandwidth actually being used at a specific time can change dynamically.

One intuitive example is shown in the following measurement screen. This is the RF signal captured for an LTE call connection and data transfer. When you initiate a call, the mobile device goes through the protocol sequence for call setup, and then data traffic starts. If you look at the bottom of the measurement screen (the spectrogram), you will notice that the frequency allocation (the bandwidth being used) changes during this period. In this screen, the frequency allocation for the data traffic does not change, but in a live network this bandwidth would change dynamically.


What is the implication of these multiple system bandwidths and dynamic bandwidth changes for the mobile phone designer and the test engineer? For designers, the biggest issue is how to optimize various design parameters to best fit all of these bands. For test engineers, the biggest issue is the huge number of test cases they have to go through.
The final outcome of all these considerations on multiple bandwidths and dynamic bandwidth changes can be exemplified by a table like the one shown below. This is a table for only one test case. See all those different system bandwidths you have to cover. The different RB allocations are for the dynamic frequency allocation that I mentioned above. In LTE, for every test case you will have this kind of table, and this is a huge headache for designers and test engineers.
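To get a feel for the numbers, here is a tiny sketch of how the sub-test count multiplies. The RB allocation set is hypothetical, not the conformance table; multiply further by modulation schemes and frequency bands and the count grows accordingly.

from itertools import product

bandwidths_mhz = [1.4, 3, 5, 10, 15, 20]
rb_allocations = ["1 RB", "partial allocation", "full allocation"]   # hypothetical set

test_points = list(product(bandwidths_mhz, rb_allocations))
print(len(test_points), "sub-tests for a single test case")          # 18
for bw, rb in test_points[:3]:
    print(f"{bw} MHz, {rb}")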