Saturday, March 13, 2010

Protocol Sequence : Typical Packet Call

1) MIB
2) SIB 1
3) SIB 2
4) RRC : PRACH Preamble
5) RRC : RACH Response
6) RRC : RRC Connection Request
7) RRC : RRC Connection Setup
8) RRC : RRC Connection Setup Complete + NAS : Attach Request
9) RRC : DL Information Transfer + NAS : Authentication Request
10) RRC : UL Information Transfer + NAS : Authentication Response
11) RRC : DL Information Transfer + NAS : Security Mode Command
12) RRC : UL Information Transfer + NAS : Security Mode Complete
13) RRC : Security Mode Command
14) RRC : Security Mode Complete
15) RRC : RRC Connection Reconfiguration + NAS : Attach Accept
16) RRC : RRC Connection Reconfiguration Complete + NAS : Attach Complete
17) RRC : RRC Connection Release
18) RRC : PRACH Preamble
19) RRC : RACH Response
20) RRC : RRC Connection Request
21) RRC : RRC Connection Setup
22) RRC : RRC Connection Setup Complete + NAS : Service Request
23) RRC : Security Mode Command
24) RRC : Security Mode Complete
25) RRC : RRC Connection Reconfiguration + NAS : Activate Dedicated EPS Bearer Context Request
26) RRC : RRC Connection Reconfiguration Complete + NAS : Activate Dedicated EPS Bearer Context Accept
27) RRC : UL Information Transfer + NAS : Deactivate Dedicated EPS Bearer Context Accept
28) RRC : RRC Connection Release

LTE Unique Sequences

Even though the overall sequence is pretty similar to the WCDMA sequence, there are a couple of points that differ from WCDMA.

The first point to notice is that in LTE the 'RACH Preamble' is shown as part of the RRC message flow. As you know, the RACH process existed in WCDMA as well, but there it was purely a physical layer procedure.

Another difference I notice is that RRC Connection Setup Complete and Attach Request are carried in a single step.

These are the differences you can notice just by looking at the message types; you will find more differences when you go into the information elements of each message, as you will see in the following sections.

Overall Comparison with WCDMA

The first thing you will notice is that far fewer SIBs are transmitted in LTE compared to WCDMA. Of course, there are more SIBs that are simply not transmitted in this sequence (LTE has 10 SIBs in total), but with only these two SIBs the network can transmit all the information a UE needs to camp on it. In WCDMA there are 18 SIBs in total, and in most cases we used at least SIB 1, 3, 5, 7 and 11 even in very basic configurations. Some of the WCDMA SIBs, like SIB 5 and SIB 11, have multiple segments. In LTE the number of SIBs is small and none of them are segmented.

1) MIB

The MIB in LTE carries very minimal information (this is a big difference from the WCDMA MIB). The only information it carries is:

i) BandWidth
ii) PHICH
iii) SystemFrameNumber

Of course the most important information is "BandWidth".

According to 36.331 section 5.2.1.2, the MIB scheduling is as follows :
The MIB uses a fixed schedule with a periodicity of 40 ms and repetitions made within 40 ms. The first transmission of the MIB is scheduled in subframe #0 of radio frames for which the SFN mod 4 = 0, and repetitions are scheduled in subframe #0 of all other radio frames.
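The scheduling rule quoted above can be sketched in a few lines. This is just an illustration of the 36.331 text, with invented helper names: a new MIB version starts when SFN mod 4 = 0, and every frame carries a copy in subframe #0.

```python
def mib_occasions(sfn_range):
    """Yield (sfn, subframe, is_new_version) for each MIB transmission,
    following 36.331 section 5.2.1.2."""
    for sfn in sfn_range:
        # MIB is always in subframe #0; a new 40 ms period (new MIB content)
        # starts when SFN mod 4 == 0, all other frames carry repetitions.
        yield sfn, 0, (sfn % 4 == 0)

for sfn, sf, new in mib_occasions(range(8)):
    print(sfn, sf, "new" if new else "repetition")
```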


2) SIB 1

SIB 1 in LTE contains information like that found in the WCDMA MIB, SIB1 and SIB3. The important information in SIB 1 is:

i) PLMN
ii) Tracking Area Code
iii) Cell Selection Info
iv) Frequency Band Indicator
v) Scheduling information (periodicity) of other SIBs

You may notice that LTE SIB1 is very similar to the WCDMA MIB.
Especially during initial test case development, you have to be very careful about item v). If you set this value incorrectly, none of the other SIBs will be decoded by the UE. As a result, the UE will not recognize the cell and will show a "No Service" message.

According to 36.331 section 5.2.1.2, the SIB1 scheduling is as follows :
The SystemInformationBlockType1 uses a fixed schedule with a periodicity of 80 ms and repetitions made within 80 ms. The first transmission of SystemInformationBlockType1 is scheduled in subframe #5 of radio frames for which the SFN mod 8 = 0, and repetitions are scheduled in subframe #5 of all other radio frames for which SFN mod 2 = 0.

This means that even though the SIB1 periodicity is 80 ms, different copies (redundancy versions, RV) of the SIB1 are transmitted every 20 ms. In other words, at L3 you will see SIB1 every 80 ms, but at the PHY layer you will see it every 20 ms. For the detailed RV assignment for each transmission, refer to 36.321 section 5.3.1 (the last part of the section).
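The SIB1 timing just described can be sketched in the same style as the MIB rule (illustrative helper, names invented): a new SIB1 every 80 ms (SFN mod 8 = 0), with RV repetitions in subframe #5 of every other even frame, i.e. a PHY occasion every 20 ms.

```python
def sib1_occasions(sfn_range):
    """Yield (sfn, subframe, kind) for SIB1 per 36.331 section 5.2.1.2."""
    for sfn in sfn_range:
        if sfn % 8 == 0:
            yield sfn, 5, "new"          # start of an 80 ms period (new SIB1)
        elif sfn % 2 == 0:
            yield sfn, 5, "repetition"   # another RV of the same SIB1

# SFN 0..15 -> occasions at SFN 0,2,4,6,8,10,12,14: one every 20 ms,
# but a "new" SIB1 only at SFN 0 and 8 (every 80 ms).
for occ in sib1_occasions(range(16)):
    print(occ)
```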

One example of LTE SIB1 is as follows :

RRC_LTE:BCCH-DL-SCH-Message
BCCH-DL-SCH-Message ::= SEQUENCE
+-message ::= CHOICE [c1]
+-c1 ::= CHOICE [systemInformationBlockType1]
+-systemInformationBlockType1 ::= SEQUENCE [000]
+-cellAccessRelatedInfo ::= SEQUENCE [0]
+-plmn-IdentityList ::= SEQUENCE OF SIZE(1..6) [1]
+-PLMN-IdentityInfo ::= SEQUENCE
+-plmn-Identity ::= SEQUENCE [1]
+-mcc ::= SEQUENCE OF SIZE(3) OPTIONAL:Exist
+-MCC-MNC-Digit ::= INTEGER (0..9) [0]
+-MCC-MNC-Digit ::= INTEGER (0..9) [0]
+-MCC-MNC-Digit ::= INTEGER (0..9) [1]
+-mnc ::= SEQUENCE OF SIZE(2..3) [2]
+-MCC-MNC-Digit ::= INTEGER (0..9) [0]
+-MCC-MNC-Digit ::= INTEGER (0..9) [1]
+-cellReservedForOperatorUse ::= ENUMERATED [notReserved]
+-trackingAreaCode ::= BIT STRING SIZE(16) [0000000000000001]
+-cellIdentity ::= BIT STRING SIZE(28) [0000000000000000000100000000]
+-cellBarred ::= ENUMERATED [notBarred]
+-intraFreqReselection ::= ENUMERATED [notAllowed]
+-csg-Indication ::= BOOLEAN [FALSE]
+-csg-Identity ::= BIT STRING OPTIONAL:Omit
+-cellSelectionInfo ::= SEQUENCE [0]
+-q-RxLevMin ::= INTEGER (-70..-22) [-53]
+-q-RxLevMinOffset ::= INTEGER OPTIONAL:Omit
+-p-Max ::= INTEGER OPTIONAL:Omit
+-freqBandIndicator ::= INTEGER (1..64) [7]
+-schedulingInfoList ::= SEQUENCE OF SIZE(1..maxSI-Message[32]) [2]
+-SchedulingInfo ::= SEQUENCE
+-si-Periodicity ::= ENUMERATED [rf8]
+-sib-MappingInfo ::= SEQUENCE OF SIZE(0..maxSIB-1[31]) [0]
+-SchedulingInfo ::= SEQUENCE
+-si-Periodicity ::= ENUMERATED [rf8]
+-sib-MappingInfo ::= SEQUENCE OF SIZE(0..maxSIB-1[31]) [1]
+-SIB-Type ::= ENUMERATED [sibType3]
+-tdd-Config ::= SEQUENCE OPTIONAL:Omit
+-si-WindowLength ::= ENUMERATED [ms20]
+-systemInfoValueTag ::= INTEGER (0..31) [0]
+-nonCriticalExtension ::= SEQUENCE OPTIONAL:Omit
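The schedulingInfoList in the log above is the "item v)" to be careful about. As a hypothetical decoding aid (field names invented for the example): si-Periodicity "rf8" means 8 radio frames of 10 ms each, and the first SI message always carries SIB2 implicitly, which is why its sib-MappingInfo list is empty while the second one explicitly lists sibType3.

```python
RF_MS = 10  # one radio frame = 10 ms

def si_periodicity_ms(enum_value):
    # enum values look like "rf8", "rf16", ... (a count of radio frames)
    return int(enum_value[2:]) * RF_MS

# Mirror of the schedulingInfoList in the SIB1 log above
scheduling_info_list = [
    {"si_periodicity": "rf8", "sib_mapping": []},            # SI-1 -> SIB2 (implicit)
    {"si_periodicity": "rf8", "sib_mapping": ["sibType3"]},  # SI-2 -> SIB3
]

for i, si in enumerate(scheduling_info_list, start=1):
    sibs = si["sib_mapping"] or ["sibType2 (implicit)"]
    print(f"SI-{i}: every {si_periodicity_ms(si['si_periodicity'])} ms, carries {sibs}")
```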


3) SIB 2

The important information in SIB2 is:

i) RACH Configuration
ii) bcch, pcch, pdsch, pusch, pucch configuration
iii) sounding RS Configuration
iv) UE Timers

The following is an example of SIB2. It looks to me that LTE SIB2 is similar to WCDMA SIB5, configuring the various common channels.

RRC_LTE:BCCH-DL-SCH-Message
BCCH-DL-SCH-Message ::= SEQUENCE
+-message ::= CHOICE [c1]
+-c1 ::= CHOICE [systemInformation]
+-systemInformation ::= SEQUENCE
+-criticalExtensions ::= CHOICE [systemInformation-r8]
+-systemInformation-r8 ::= SEQUENCE [0]
+-sib-TypeAndInfo ::= SEQUENCE OF SIZE(1..maxSIB[32]) [1]
+- ::= CHOICE [sib2]
+-sib2 ::= SEQUENCE [00]
+-ac-BarringInfo ::= SEQUENCE OPTIONAL:Omit
+-radioResourceConfigCommon ::= SEQUENCE
+-rach-Config ::= SEQUENCE
+-preambleInfo ::= SEQUENCE [0]
+-numberOfRA-Preambles ::= ENUMERATED [n52]
+-preamblesGroupAConfig ::= SEQUENCE OPTIONAL:Omit
+-powerRampingParameters ::= SEQUENCE
+-powerRampingStep ::= ENUMERATED [dB2]
+-preambleInitialReceivedTargetPower ::= ENUMERATED [dBm-104]
+-ra-SupervisionInfo ::= SEQUENCE
+-preambleTransMax ::= ENUMERATED [n6]
+-ra-ResponseWindowSize ::= ENUMERATED [sf10]
+-mac-ContentionResolutionTimer ::= ENUMERATED [sf48]
+-maxHARQ-Msg3Tx ::= INTEGER (1..8) [4]
+-bcch-Config ::= SEQUENCE
+-modificationPeriodCoeff ::= ENUMERATED [n4]
+-pcch-Config ::= SEQUENCE
+-defaultPagingCycle ::= ENUMERATED [rf128]
+-nB ::= ENUMERATED [oneT]
+-prach-Config ::= SEQUENCE
+-rootSequenceIndex ::= INTEGER (0..837) [22]
+-prach-ConfigInfo ::= SEQUENCE
+-prach-ConfigIndex ::= INTEGER (0..63) [3]
+-highSpeedFlag ::= BOOLEAN [FALSE]
+-zeroCorrelationZoneConfig ::= INTEGER (0..15) [5]
+-prach-FreqOffset ::= INTEGER (0..94) [2]
+-pdsch-Config ::= SEQUENCE
+-referenceSignalPower ::= INTEGER (-60..50) [18]
+-p-b ::= INTEGER (0..3) [0]
+-pusch-Config ::= SEQUENCE
+-pusch-ConfigBasic ::= SEQUENCE
+-n-SB ::= INTEGER (1..4) [1]
+-hoppingMode ::= ENUMERATED [interSubFrame]
+-pusch-HoppingOffset ::= INTEGER (0..98) [4]
+-enable64QAM ::= BOOLEAN [FALSE]
+-ul-ReferenceSignalsPUSCH ::= SEQUENCE
+-groupHoppingEnabled ::= BOOLEAN [TRUE]
+-groupAssignmentPUSCH ::= INTEGER (0..29) [0]
+-sequenceHoppingEnabled ::= BOOLEAN [FALSE]
+-cyclicShift ::= INTEGER (0..7) [0]
+-pucch-Config ::= SEQUENCE
+-deltaPUCCH-Shift ::= ENUMERATED [ds2]
+-nRB-CQI ::= INTEGER (0..98) [2]
+-nCS-AN ::= INTEGER (0..7) [6]
+-n1PUCCH-AN ::= INTEGER (0..2047) [0]
+-soundingRS-UL-Config ::= CHOICE [setup]
+-setup ::= SEQUENCE [0]
+-srs-BandwidthConfig ::= ENUMERATED [bw3]
+-srs-SubframeConfig ::= ENUMERATED [sc0]
+-ackNackSRS-SimultaneousTransmission ::= BOOLEAN [TRUE]
+-srs-MaxUpPts ::= ENUMERATED OPTIONAL:Omit
+-uplinkPowerControl ::= SEQUENCE
+-p0-NominalPUSCH ::= INTEGER (-126..24) [-85]
+-alpha ::= ENUMERATED [al08]
+-p0-NominalPUCCH ::= INTEGER (-127..-96) [-117]
+-deltaFList-PUCCH ::= SEQUENCE
+-deltaF-PUCCH-Format1 ::= ENUMERATED [deltaF0]
+-deltaF-PUCCH-Format1b ::= ENUMERATED [deltaF3]
+-deltaF-PUCCH-Format2 ::= ENUMERATED [deltaF0]
+-deltaF-PUCCH-Format2a ::= ENUMERATED [deltaF0]
+-deltaF-PUCCH-Format2b ::= ENUMERATED [deltaF0]
+-deltaPreambleMsg3 ::= INTEGER (-1..6) [4]
+-ul-CyclicPrefixLength ::= ENUMERATED [len1]
+-ue-TimersAndConstants ::= SEQUENCE
+-t300 ::= ENUMERATED [ms1000]
+-t301 ::= ENUMERATED [ms1000]
+-t310 ::= ENUMERATED [ms1000]
+-n310 ::= ENUMERATED [n1]
+-t311 ::= ENUMERATED [ms1000]
+-n311 ::= ENUMERATED [n1]
+-freqInfo ::= SEQUENCE [00]
+-ul-CarrierFreq ::= INTEGER OPTIONAL:Omit
+-ul-Bandwidth ::= ENUMERATED OPTIONAL:Omit
+-additionalSpectrumEmission ::= INTEGER (1..32) [1]
+-mbsfn-SubframeConfigList ::= SEQUENCE OF OPTIONAL:Omit
+-timeAlignmentTimerCommon ::= ENUMERATED [sf750]
+-nonCriticalExtension ::= SEQUENCE OPTIONAL:Omit
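The RACH configuration in the SIB2 log above directly drives PRACH preamble power ramping. As a sketch (not the full 36.321 procedure): with preambleInitialReceivedTargetPower = -104 dBm, powerRampingStep = 2 dB and preambleTransMax = 6 attempts, each failed attempt raises the target received power by one ramping step.

```python
def preamble_target_powers(initial_dbm=-104, step_db=2, trans_max=6):
    """Target received power (dBm) for each PRACH preamble attempt,
    using the values configured in the SIB2 example above."""
    return [initial_dbm + step_db * n for n in range(trans_max)]

print(preamble_target_powers())  # [-104, -102, -100, -98, -96, -94]
```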

4) RRC : PRACH Preamble

Text

5) RRC : RACH Response
Text

6) RRC : RRC Connection Request
Text

Interim Comments

From this point on, the L3 messages carry both RRC and NAS messages, so you need an overall understanding of NAS messages as well as RRC messages.
You need to understand all the details of TS 29.274 to handle the data-traffic-related IEs in NAS messages. Of course, it would be impossible to understand all those details within a day. My approach is to go through the following tables as often as possible until I get a big picture in my mind. You may have to go back and forth between 36.331 and 29.274.

* Table 7.2.2-1: Information Elements in a Create Session Response
* Table 7.2.3-1: Information Elements in a Create Bearer Request
* Table 7.2.3-2: Bearer Context within Create Bearer Request
* Table 7.2.5-1: Information Elements in a Bearer Resource Command
* Table 7.2.7-1: Information Elements in a Modify Bearer Request
* Table 7.2.8-1: Information Elements in a Modify Bearer Response
* Table 7.2.9.1-1: Information Elements in a Delete Session Request
* Table 7.2.9.2-1: Information Elements in a Delete Bearer Request
* Table 7.2.10.2-1: Information Elements in Delete Bearer Response
* Table 7.3.5-1: Information Elements in a Context Request
* Table 7.3.6-2: MME/SGSN UE EPS PDN Connections within Context Response
* Table 7.3.8-1: Information Elements in an Identification Request


7) RRC : RRC Connection Setup

As you see in the following diagram, the most important IE (information element) in the RRC Connection Setup message is "RadioResourceConfigDedicated", under which you can set up the SRB, DRB, MAC and PHY configuration. Even though there are IEs related to DRBs, in most cases we set up only SRBs in RRC Connection Setup. It is similar to the WCDMA RRC Connection Setup message, in which you usually set up only SRBs (the control channel part) even though there are IEs for RBs (data traffic).

One thing to notice is that you will find the "RadioResourceConfigDedicated" IE not only in the RRC Connection Setup message but also in the RRC Connection Reconfiguration message. In that case, you have to be careful that what you set in the RRC Connection Reconfiguration message properly matches what you set in the RRC Connection Setup message. This means you have to understand the correlation between these two messages very clearly. This is also very similar to the WCDMA case.



8) RRC : RRC Connection Setup Complete + NAS : Attach Request
Text

15) RRC : RRC Connection Reconfiguration + NAS : Attach Accept

An important procedure done in this step is "ESM : Activate Default EPS Bearer Context Request".

One thing you notice here is that in LTE the packet call is initiated by the network, whereas in UMTS most packet calls are initiated by the UE. The network assigns an IP address to the UE here.



16) RRC : RRC Connection Reconfiguration Complete + NAS : Attach Complete

The overall protocol sequence of the typical packet call is as follows (you will notice the overall sequence is very similar to the WCDMA sequence):


An important procedure done in this step is "ESM : Activate Default EPS Bearer Context Accept".

25) RRC : RRC Connection Reconfiguration + NAS : Activate Dedicated EPS Bearer Context Request








Sunday, February 28, 2010

From R99 to LTE

From R99 to LTE

I have been working on creating various test cases on the UMTS side for a couple of years. During this period, I saw the technology change from R99 to HSDPA, HSUPA and now to LTE. But in terms of RRC and the layers above it, which I have mostly been working on, I haven't seen many differences. Sometimes I get the impression that the higher-layer signaling (RRC and above) gets even simpler as the technology moves from one generation to the next. If you look at the higher layers of LTE, you will feel that LTE signaling looks simpler than the other existing technologies.

Then how could we get higher data rates, lower latency and more effective use of the radio channel with simpler signaling? The secret is that as the technology evolves, the higher-layer signaling stays similar or even gets simpler, while the lower layers (PHY and MAC) get more complicated, and these lower layers are what enable all those evolved features, especially high data rates and low latency. So to understand the details of the evolved technologies, we have to understand the details of the lower layers, e.g. PHY and MAC.

Here is a list of questions you should ask when you want to study a new technology.
i) What kind of additional PHY channels have been added compared to R99?
ii) What kind of information is carried by the additional physical channels?
iii) What kind of MAC entities have been added compared to R99?
iv) What is the role of the new MAC entities, especially in terms of scheduling?

Why we needed HSDPA ?

Before I start discussing the questions listed above, let's think about why we wanted a new technology called HSDPA. In any communication technology, the biggest motivation for a new technology has been to increase the data rate. Then a question arises: how can we increase the data rate? Regardless of the communication type, we have usually taken similar approaches, as follows:

i) Change modulation scheme
ii) Decrease the latency between communicating party
iii) Optimization at multi user level rather than optimization at single user level

Let's take some examples. The evolution path of Bluetooth was from the standard rate to 2 Mb EDR (Enhanced Data Rate) to 3 Mb EDR, and the biggest change at each step was the modulation scheme. What happened in the GSM evolutionary path, GSM to GPRS to EGPRS (EDGE) to EDGE Evolution? The biggest changes along this path were also modulation scheme changes. From R99 to HSDPA, we likewise introduced a new modulation scheme called 16 QAM. The advantage of using a new modulation scheme to increase the data rate is the simplicity of the concept; the disadvantage is that it requires hardware changes.

The next step would be to decrease the latency between the communicating parties. How can we achieve this? By increasing the physical propagation speed between the two parties? That is practically impossible, because the signal already propagates at the speed of light. Then what is the other option? It is improving the scheduling algorithm of the communication. What does that mean? It is a little hard to explain simply, so I will cover it in a separate section.

Lastly, let's think about optimization at the multi-user level rather than at the single-user level. Suppose 10 users are communicating with one Node B. In R99, each user has a separate and independent communication path to the Node B via a special channel called DPCH (Dedicated Physical Channel). Optimization in this case means ten separate optimization processes, one per user. OK, now suppose each user is getting the maximum data rate for that specific UE in the specific environment to which it is exposed. Does this guarantee that the whole resource of the Node B is fully utilized? It is hard to say "yes". Isn't there a possibility that some of the resources are being wasted? It would be hard to say "no". We will think about this issue in the next section.

Introduction of new Channels in HSDPA

In HSDPA, four new physical channels were introduced:

i) HS-DSCH
ii) HS-SCCH
iii) HS-DPCCH
iv) F-DPCH

With the introduction of these four channels, we could implement many of the methods for improving the data rate that were briefly described in the previous section.

The most important channel is definitely HS-DSCH (High Speed Downlink Shared Channel). As the name implies, it is a SHARED channel, whereas in R99 we used a DEDICATED channel. All the users within a cell share a single big pipe rather than each user having its own small dedicated pipe. With this, the network can optimize resource allocation among multiple users much more efficiently. As an extreme example, the network can allocate 91% of the resources to a single UE and only 1% to each of the remaining 9 users, when those nine users do not need much resource or are in such a poor environment that they can utilize only a small fraction of the transmission capacity. With dedicated channels, we cannot do this kind of extreme resource allocation, because each dedicated channel requires a certain minimum resource allocation even when the real utilization is lower than that minimum.
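The extreme allocation above can be sketched as a toy model (all names and numbers invented for illustration): with one shared pipe, the scheduler can hand the lightly loaded users exactly what they ask for and let the heaviest user absorb everything that remains.

```python
def shared_allocation(demands, total=100):
    """Toy shared-channel scheduler: give each UE its demand (in % of
    resources), then let the heaviest UE absorb whatever is left over."""
    alloc = dict(demands)
    heaviest = max(alloc, key=alloc.get)
    alloc[heaviest] += total - sum(alloc.values())
    return alloc

# Nine lightly loaded UEs ask for 1% each; UE0 ends up with the other 91%.
demands = {"UE0": 10, **{f"UE{i}": 1 for i in range(1, 10)}}
print(shared_allocation(demands))
```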

I said HS-DSCH is a shared channel, which means the data in the channel is received by all users. Then how can a UE figure out whether the data is for itself or for some other UE? I also said that HSDPA uses multiple modulation schemes, QPSK and 16 QAM. Then how does a UE know whether the data is QPSK modulated or 16 QAM modulated? To carry all this information, another new channel was introduced: HS-SCCH (High Speed Shared Control Channel). The information carried by HS-SCCH is as follows:
i) Transport format information - channelization codes for the data, modulation scheme, transport block size
ii) Hybrid-ARQ related information

As I said at the beginning, HSDPA uses a shared channel and tries to achieve optimum resource allocation at the multi-user level. To do this, the network needs to know the exact status of each UE, and whether the data it sent successfully reached its destination (a specific UE). To enable this, the UE repeatedly reports its channel quality and data reception status to the network. To send this information, the UE uses a special channel called HS-DPCCH. This channel carries CQI (Channel Quality Indicator) and Ack/Nack information.

So far so good. It seems there are only advantages to introducing these new channels, but nothing gains 100% without losing anything. There is a drawback to this shared-channel method, and it is about power control. You know that one of the critical requirements of WCDMA technology is very sophisticated power control. If the UE power is too low, the Node B will have difficulty decoding it; if the power is too strong, it acts as noise to the other UEs communicating with the Node B. For this purpose, the Node B sends each UE power control commands periodically, and these commands must be different for each UE because each UE may be in a different channel condition, meaning power control has to be "dedicated". But as I explained, HS-DSCH is a shared channel. Then how can the Node B deliver the power control commands to each specific UE? The initial solution was to use an R99 dedicated channel (DPCH) carrying only the power control commands. But using a full DPCH just for small power control commands is a waste of resources. To improve this situation, a new channel was introduced in Release 6: F-DPCH (Fractional DPCH). The details of F-DPCH are out of the scope of this section, so I will not explain it further.

Improved scheduling in HSDPA

The whole purpose of improving scheduling is to decrease the latency between the communicating parties, in this case a UE and the network. The basic idea of this improvement is to refine the granularity of the scheduling period.

In a WCDMA network, scheduling happens every TTI (Transmission Time Interval), and in R99 the common TTI is 10 ms (sometimes 20 ms or 40 ms). In HSDPA, this TTI has been shortened to 2 ms. Why 2 ms? Why can't it be 1 ms or 4 ms? It is simply the result of a trade-off among various factors. If the TTI were longer, like 4 ms or 6 ms, the benefit of refining the scheduling interval would not be significant. But if the TTI were too short, the scheduling overhead would start to outweigh the benefit of the refinement, because executing the scheduling algorithm requires a certain amount of time and resources.

Another means of decreasing latency came from the way data errors are handled. In R99, those errors could only be detected by RLC via Ack/Nack from the other party, and whether to request retransmission is determined by an even higher layer. But in HSDPA, errors are detected at the physical layer. When a UE receives data, it checks the CRC and sends an Ack or Nack on HS-DPCCH, transmitted 5 ms after it received the data. If the UE sends a Nack, the network retransmits the data. This error detection and retransmission mechanism is called H-ARQ (Hybrid ARQ).
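The Ack/Nack loop just described can be sketched as a minimal stop-and-wait model (illustrative only, not the actual H-ARQ state machine, which also soft-combines retransmissions): the receiver CRC-checks each block and the transmitter resends on Nack.

```python
def harq_send(blocks, crc_ok, max_retx=4):
    """Stop-and-wait H-ARQ sketch. crc_ok(block, attempt) -> bool models
    the CRC check at the receiver; resend the same block on NACK."""
    log = []
    for block in blocks:
        for attempt in range(1, max_retx + 1):
            if crc_ok(block, attempt):
                log.append((block, attempt, "ACK"))
                break
            log.append((block, attempt, "NACK"))
    return log

# Example: block "B" fails its first attempt, succeeds on the second.
log = harq_send(["A", "B"], crc_ok=lambda b, a: not (b == "B" and a == 1))
print(log)
```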

Another mechanism for improved scheduling adopted in HSDPA is to allocate the optimal resources for each UE. How can this be achieved? The network needs certain information to make the best decision for each UE. The important information for this decision-making is


i) CQI
ii) Buffer Status
iii) Priority of the data

CQI is calculated by the UE based on the signal-to-noise ratio of the received common pilot. If you look into the details of TFRI determination by the MAC layer, you will notice that CQI is the only parameter used to determine the TFRI. (What is TFRI? I will talk about this later in this article or somewhere else. It is very important for implementing a test case for maximum throughput testing.)

Buffer status shows how much data is stored in the buffer for each UE. If there is no data in the buffer, the Node B should not allocate any resources to that UE. So checking the buffer status is also important for optimum resource allocation.

The overall scheduling algorithm is to allocate more resources to UEs that report higher CQI, but there are cases where the Node B should allocate a certain amount of resources to a specific UE even when it reports a poor CQI. Common examples are an RRC message with a tight timeout value, or streaming data with an expiration time. To handle these situations, the scheduler (the Node B MAC layer, MAC-hs) assigns a priority to each data block and puts the blocks into separate priority queues.
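A minimal sketch of that trade-off (field names invented for the example; the real MAC-hs scheduler is far more elaborate): the best-CQI queue normally wins, but a queue flagged high-priority, such as an RRC message close to timeout, preempts the CQI ordering.

```python
def pick_next(queues):
    """queues: list of dicts with 'ue', 'cqi', 'high_priority'.
    High-priority queues preempt; otherwise the best CQI wins."""
    urgent = [q for q in queues if q["high_priority"]]
    pool = urgent if urgent else queues
    return max(pool, key=lambda q: q["cqi"])["ue"]

queues = [
    {"ue": "UE1", "cqi": 25, "high_priority": False},
    {"ue": "UE2", "cqi": 7,  "high_priority": True},   # poor CQI, urgent data
]
print(pick_next(queues))  # UE2 is served despite the lower CQI
```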

What I have explained so far is just a brief overview, meant to motivate further study. If you are involved in test case creation or protocol stack development, this level of understanding will not help much. If you want to study further so that it gives you practical help with test case development or protocol stack optimization, I recommend studying the details of MAC-hs and the TFRI selection mechanism.

Why we needed HSUPA ?

In the previous section, we talked about why we needed HSDPA and how HSDPA improved the data throughput. But HSDPA improved only downlink throughput and did nothing about the uplink. So the natural next step in the evolution was improvement on the uplink side. This is how we came up with another technology called HSUPA.

The overall mechanism by which HSUPA improves uplink throughput is similar to the one used in HSDPA. So if you are familiar with the HSDPA mechanism, you will not have difficulty understanding HSUPA.

Introduction of new Channels in HSUPA

As in HSDPA, several new channels were introduced to implement HSUPA. They are as follows:
i) E-DPDCH
ii) E-DPCCH
iii) E-HICH
iv) E-RGCH
v) E-AGCH

Briefly speaking, E-DPDCH is the uplink counterpart of HS-DSCH, E-DPCCH is the counterpart of HS-SCCH, and E-HICH is the counterpart of HS-DPCCH. But there is a major difference between these HSUPA channels and the HSDPA channels: E-DPDCH and E-DPCCH are dedicated channels, whereas HS-DSCH and HS-SCCH are shared channels. This is understandable, because in HSDPA the data transmission is one-to-many (the Node B transmits to many UEs), whereas in HSUPA it is one-to-one (one UE transmits to the Node B), so it makes sense to use dedicated channels in HSUPA.

There is another big difference between HSDPA and HSUPA, and it is about scheduling. Whether it is HSDPA or HSUPA, the scheduler (the decision maker) is in the Node B, not in the UE. For scheduling, we need two very important pieces of information: channel quality and buffer status. In HSDPA, the only information the scheduler needs from the target of the transmission is channel quality, which is provided via HS-DPCCH; the buffer status is already available to the scheduler because the transmission buffer is located in the same place (the Node B) as the scheduler. So in HSDPA the transmitter (Node B) can send data whenever the situation allows, but in HSUPA the transmitter (UE) cannot send data whenever it wants. Before the UE sends data, it has to check whether the target (the receiver, the Node B) is ready and has enough resources to receive the data. For the UE to check the status of the receiver and get approval from the Node B, E-AGCH (Absolute Grant Channel) and E-RGCH (Relative Grant Channel) are used. The Node B (the scheduler) sends scheduling grants to the UE specifying when and at what data rate the UE can transmit.

The differences between E-AGCH and E-RGCH are:
i) E-AGCH is a shared channel and E-RGCH is a dedicated channel
ii) E-AGCH is typically used for large changes in the data rate, and E-RGCH is used for smaller adjustments.

Scheduling for HSUPA

HSUPA scheduling is quite a complex process, but in simplified form the overall process is as follows:
i) UE sends a grant request to the Node B
ii) Node B sends an Absolute Grant (E-AGCH) and Relative Grants (E-RGCH) to the UE
iii) UE sets the Serving Grant value based on the E-AGCH and E-RGCH values
iv) Based on the Serving Grant value, the UE selects the E-TFC for the specific transmission.
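Steps ii)-iv) above can be sketched as follows, with made-up numeric conventions (the real serving-grant and E-TFC tables live in 25.321): the absolute grant sets the base serving grant, each relative grant ("UP"/"DOWN"/"HOLD") nudges it one step, and the UE then picks the largest E-TFC its serving grant allows.

```python
def apply_grants(absolute_grant, relative_grants, step=1):
    serving_grant = absolute_grant              # step ii)/iii): E-AGCH sets the base
    for rg in relative_grants:                  # step iii): E-RGCH adjustments
        serving_grant += {"UP": step, "DOWN": -step, "HOLD": 0}[rg]
    return serving_grant

def select_etfc(serving_grant, etfc_table):
    # step iv): pick the largest E-TFC the serving grant allows
    return max(t for t in etfc_table if t <= serving_grant)

sg = apply_grants(absolute_grant=10, relative_grants=["UP", "UP", "DOWN"])
print(sg, select_etfc(sg, etfc_table=[2, 5, 8, 11, 14]))  # 11 11
```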

For further details, we need to study the detailed mechanism of MAC-e.

Why we needed HSPA+

We have seen the evolutionary path from R99 to HSDPA and HSUPA, and now we have speed improvements on both the uplink and downlink sides. We call the combination of HSDPA and HSUPA "HSPA". Going forward, HSPA evolved further, and this evolved version is called HSPA+. Now a question arises: what were the factors that improved HSPA in terms of speed? The following are the key items for HSPA+.

i) CPC - DL DRX/UL DTX, HS-SCCH less operation, Enhanced F-DPCH
ii) Layer 2 Improvement
iii) 64 QAM for HSDPA
iv) 16 QAM for HSUPA
v) Enhanced Cell_FACH

You may notice right away what 64 QAM and 16 QAM are for: they mainly increase the size of the transmission pipe at the physical layer, so I will not explain them further. Up until HSPA, most of the effort to increase throughput was made at the physical layer or MAC layer, but there are bottlenecks at every layer. If we removed all the bottlenecks from every layer, we would get the ideal maximum throughput, but this kind of bottleneck removal cannot be done in a single shot. With HSPA+, a big bottleneck at Layer 2 (RLC) was removed. The RLC PDU size in HSDPA was 320 or 640 bits. Suppose you sent one IP packet of 1.5 KB: it would have to be split into multiple RLC PDUs and sent over multiple transmissions. But in HSPA+, the maximum RLC PDU size can be over 3 KB, so even the largest IP packet can be transmitted at once. This is what the "L2 Improvement" does.
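The segmentation cost above is easy to quantify (assuming the 1.5 KB packet means 1500 bytes, and ignoring RLC header overhead for simplicity):

```python
import math

def rlc_pdus_needed(packet_bits, pdu_bits):
    """Number of fixed-size RLC PDUs needed to carry one packet."""
    return math.ceil(packet_bits / pdu_bits)

packet = 1500 * 8  # a 1500-byte IP packet = 12000 bits
print(rlc_pdus_needed(packet, 320))    # 38 PDUs with fixed 320-bit PDUs
print(rlc_pdus_needed(packet, 640))    # 19 PDUs with fixed 640-bit PDUs
print(rlc_pdus_needed(packet, 12000))  # 1 PDU with a flexible size (HSPA+)
```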

Let's consider another situation, say web browsing. While you are reading a page, you are not downloading any data, and there is no data communication between the UE and the network. During this time, the RRC state is usually moved to Cell_FACH or Cell_PCH. When you finish reading the page and go to the next one, the RRC state has to change back to Cell_DCH. CPC is a mechanism that reduces the time for these state changes and gives the user the experience of a "continuous connection".

Another way to alleviate the problem of RRC state changes is to increase the data rate in Cell_FACH. Theoretically you could transmit data in Cell_FACH in the previous technology, with an ideal throughput of around 34 kbps. But if you actually try it, you will notice the real throughput is much less than this. In HSPA+, the Cell_FACH throughput has been greatly increased by Enhanced Cell_FACH.

Finally LTE !


I will not talk much about LTE here, because this whole blog is about LTE. Just a couple of quick comments in terms of the evolutionary path. In LTE, both the uplink and downlink use shared channels; there are no dedicated channels. In terms of modulation schemes, LTE can use QPSK, 16 QAM and 64 QAM on the downlink, and QPSK and 16 QAM on the uplink. One TTI became 1 ms, which means PHY/MAC scheduling has to be much faster than in previous technologies. To make the best use of these features, MAC layer scheduling has become much more sophisticated (implying more complicated) and uses more information from the UE to allocate resources dynamically. It uses CQI (in non-MIMO) as in HSDPA, and it also uses PMI (Precoding Matrix Indicator) and RI (Rank Indicator) under MIMO conditions.

The latency at almost every layer became much shorter than in previous technologies (e.g., the UE to eNode B latency should be less than 5 ms). There are only two call states, "Idle" and "Connected", whereas in previous technologies there were multiple states (Idle, DCH, FACH, PCH) and transitions among them took a long time. If you look at the other sections of this blog dealing with LTE signaling, you will find that the number of message transactions for registration and call setup has decreased.

If you go a little deeper into the signaling side, you will notice that a single reconfiguration message, "RRC Connection Reconfiguration", handles all kinds of dynamic reconfiguration from the higher layer, whereas there were three different types of reconfiguration in WCDMA/HSPA: "Radio Bearer Reconfiguration", "Transport Channel Reconfiguration" and "Physical Channel Reconfiguration". (Much less headache for the test case developer :-)

Simply put, in LTE the PHY layer capacity has been increased with higher-order modulation schemes, latency has become shorter, and signaling has been simplified. Does everything sound too fancy? Superficially, yes. But I am not sure how much headache I will have when it comes to MAC layer scheduling for optimal use of resources and best performance. We will see.

Sunday, February 7, 2010

LTE RF Test and Measurement

For any wireless communication device, we have to go through two large groups of testing: one for testing the transmit path and the other for testing the receive path.

For a wireless communication device to work properly, it should meet the following hardware requirements:

i) The device should transmit a signal strong enough to make sure it reaches the other party of the communication.
ii) The device should not transmit a signal so strong that it interferes with communication between other parties.
iii) The device should transmit a signal of good enough quality that it can be decoded/corrected by the other party.
iv) The device should transmit the signal at the exact frequency that has been allocated for the communication.
v) The device should not generate any noise outside of the frequency range that has been allocated for the device.

If any of these conditions deviates too much from the specification, the device cannot communicate with the other party, or it prevents other devices from communicating. In terms of measurement, items i) and ii) belong to "power measurement", item iii) is related to "modulation analysis", and item iv) falls into "frequency error measurement". Item v) is also a kind of "power measurement", but the measurement area in the frequency domain is different from items i) and ii). In any case, if you have equipment that can perform the following three measurements for your communication technology, you can cover the most critical part of the transmit path.

a) Power Measurement
b) Modulation Analysis
c) Frequency Error Measurement
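To make two of these measurement categories concrete, here is a minimal sketch (my own helper names, idealized math) of how a modulation analyzer computes RMS EVM from reference vs. measured constellation symbols, and how a power measurement reduces to a mean of squared sample magnitudes:

```python
import cmath
import math

def evm_percent(ref, meas):
    """RMS EVM (%) between reference and measured complex symbols."""
    err = sum(abs(m - r) ** 2 for r, m in zip(ref, meas))
    p_ref = sum(abs(r) ** 2 for r in ref)
    return 100.0 * math.sqrt(err / p_ref)

def mean_power_dbm(samples, full_scale_dbm=0.0):
    """Mean power of complex samples, relative to a full-scale level."""
    p = sum(abs(s) ** 2 for s in samples) / len(samples)
    return full_scale_dbm + 10.0 * math.log10(p)

# Ideal QPSK symbols vs. slightly distorted measurements:
ref = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
meas = [r * 1.02 * cmath.exp(1j * 0.01) for r in ref]  # gain + phase error

print(round(evm_percent(ref, meas), 2))   # a few percent EVM
print(round(mean_power_dbm(meas), 2))     # mean power in dB vs. full scale
```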

Now let's think about the receive path measurement. What would be the most important receiver characteristics for a communication device?

i) The receiver must be able to successfully decode the signal coming from a transmitter even when the signal strength is very low.
ii) The receiver must be able to successfully decode the signal coming from a transmitter even when there is a certain level of noise around the signal.

In terms of measurement logic, items i) and ii) are the same. The equipment sends a known signal pattern, lets the receiver decode it, and compares the original signal from the equipment with the signal decoded by the receiver to see how different they are. The more different they are, the poorer the receiver quality. We call this method "BER (Bit Error Rate) measurement". Item i) measures BER when the input signal to the device is very low, and item ii) measures BER when there is noise on the input signal.
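The BER measurement logic above can be sketched in a few lines. The helper names and the 1% bit-flip probability are illustrative only:

```python
import random

def ber(sent, received):
    """Bit Error Rate: fraction of bits decoded differently from what was sent."""
    assert len(sent) == len(received)
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

random.seed(0)
tx = [random.randint(0, 1) for _ in range(10_000)]
# Simulate a receiver that flips ~1% of the bits (e.g. very weak input signal):
rx = [b ^ (random.random() < 0.01) for b in tx]
print(ber(tx, rx))  # roughly 0.01
```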

Before we move on to LTE measurement, pick any technology you are already familiar with, make a list of the measurements in your test plan, and try to map those items to the measurement principles I described above. Once you are familiar with this mapping, you will understand the LTE measurement items more easily.

LTE RF Measurement Items

Now let's look in a little more detail at LTE RF measurement. The first thing I have done is make a list of measurement items from 3GPP 36.521-1 and try to map my measurement principles to each of the measurement items.

Here are the transmitter measurement items first. You see a lot of "Power Measurement" and some "Modulation Analysis". Why do we have so many different power measurements and so many different modulation analyses? How do they differ from each other? These are questions you have to answer on your own. The answers are described in 3GPP 36.521-1, but the question is how much you can understand of what is described there just by reading it.

The first step would be to read the "Test Purpose", "Initial Condition" and "Test Procedure" sections of each test case as often as possible and try at least to become familiar with each test case.


Here are the receiver measurement items.


Snapshots of LTE Uplink Signals for RF Testing

As I mentioned earlier, it is not easy to understand all the details of LTE RF measurement just by reading the specification. I have read the "Test Purpose", "Initial Condition" and "Test Procedure" sections over and over, but still everything is vague. As I try to get into more detail, the first obstacle that blocks me is the large number of complicated tables describing the test conditions. Of course, we saw this kind of table in other technology specifications like CDMA and WCDMA, but the tables for LTE measurement look bigger and more complicated. So I decided to look at some of the signal patterns described in the specification on a spectrum analyzer, to get an intuitive idea of the overall RF characteristics of each condition.








Even though we get a new technology every couple of years and LTE is new to many people, RF test and measurement technology has a lot in common across wireless communication technologies. If you have experience with any wireless technology, e.g., CDMA, GSM, WCDMA, Bluetooth or WLAN, you will find the common logic in LTE.

Challenges for LTE RF Testing

One of the biggest challenges in LTE measurement for a UE development or test engineer is that there are too many sub-tests with too many different parameter settings. Before I get into details, I want to briefly skim through the overall RF measurements for C2K.

I don't have much experience with C2K measurement, but even with only a little experience I could tell there are far fewer measurement items in this area compared to WCDMA/HSDPA, and even compared to GSM/GPRS. As far as I remember, the following is almost all that I did for C2K.

i) Total Channel Power
ii) CDP (Code Domain Power)
iii) Rho
iv) Spectrum Emission
v) ACLR
vi) OBW (Occupied Bandwidth)

But the items listed above are more than what I actually experienced in C2K. For conformance, I think we may have to go through all of these items. But since C2K is a very mature technology now, at the RF development stage we wouldn't go through all of them. An extreme case that I heard of was: "just measure total power; if there is no problem with it, usually there is no problem with the other parts".

Now let's look at WCDMA. For WCDMA R99 (non-HSPA), briefly, the list is:

i) Max Power
ii) Min Power
iii) On/Off Power
iv) RACH Power
v) EVM
vi) Spectrum Emission
vii) ACLR
viii) OBW (Occupied Bandwidth)

Just in terms of the list, it doesn't look much different from C2K. But in practice the engineer will meet various characteristics which may look quite different from C2K. The first thing we can think of is that the chip rate roughly tripled compared to C2K (3.84 Mcps vs. 1.2288 Mcps), and this introduces a lot of complication in RF design. Another issue is that the RACH process in WCDMA is more complicated than the probing process in C2K, which adds a couple of important test steps.

Now let's look further into HSDPA. You may think HSDPA would not be much different from R99 in terms of uplink measurement, because HSDPA only increases the downlink data rate. That is true in terms of the high-level protocol, but in the physical/RF layer an important factor was added to the uplink in HSDPA: HS-DPCCH. HS-DPCCH is the channel the UE uses to report CQI and ACK/NACK to the BTS. The problem is that even with this additional channel, the UE has to maintain the same total uplink power as before, so the UE recalculates/rearranges the power of each physical channel. If you look at the RF conformance test case list, you will not find much difference in terms of test case items, but you will find that quite a few sub-items were added to the existing test cases due to the introduction of HS-DPCCH. (If you want to go into further detail, open up 3GPP 34.121 and find the test cases with the keyword "HS-DPCCH" in the test title.)
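The power recalculation idea can be illustrated with a toy model. This is NOT the actual gain-factor procedure from the 3GPP specs; it only shows the principle that adding HS-DPCCH while keeping the total power constant forces the existing channels to give up some power. All numbers and names below are made up:

```python
def rebalance(weights, total_mw):
    """Scale per-channel powers so that their sum stays at total_mw.

    weights: relative linear power weights per physical channel (made-up values).
    """
    s = sum(weights.values())
    return {ch: total_mw * w / s for ch, w in weights.items()}

# R99 uplink: control + data channel sharing a fixed 100 mW budget.
before = rebalance({"DPCCH": 1.0, "DPDCH": 4.0}, total_mw=100.0)

# HSDPA adds HS-DPCCH: the total stays 100 mW, so the existing
# channels must each give up a share of their power.
after = rebalance({"DPCCH": 1.0, "DPDCH": 4.0, "HS-DPCCH": 1.5}, total_mw=100.0)

print(before)
print(after)
```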

Going one step further into HSUPA, you will also find no big difference in terms of measurement items. But as in the HSDPA case, new physical channels were introduced: E-DPDCH and its control channel E-DPCCH. Even with these additional channels, the UE still has to maintain the total channel power as in R99. So, as you may guess, the UE has to recalculate/rearrange each of the physical channel powers. As a result, we get a couple of additional sub-items added to RF testing.

Finally, let's think about LTE. What is the biggest difference between LTE and C2K/WCDMA/HSPA in terms of the PHY/RF layer? It is OFDM. What kind of additional measurement items does OFDM introduce to RF testing? Since OFDM is made up of a lot of subcarriers with very narrow bandwidth, we have to measure most of the characteristics listed above across the OFDM subcarriers. If we did every item for each individual subcarrier, it would take a full day just for one item.

Another big difference is that the LTE specification allows many different system bandwidths, whereas in C2K/WCDMA the system bandwidth is always the same. This means you have to measure the whole set of test items for multiple system bandwidths, which multiplies the measurement time and the parameter settings on the measurement equipment. Based on the LTE specification, an LTE system bandwidth can be any of 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz or 20 MHz, whereas C2K has only a single chip rate of 1.2288 Mcps and WCDMA only a single chip rate of 3.84 Mcps. Of course, a specific operator would use only one of these bandwidths in their network, but a mobile device manufacturer should design the UE to support all of them.

On top of this, there is another factor that makes LTE testing even more complex, especially for mobile phone design/test: the bandwidth actually being used at a specific time can change dynamically.
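The bandwidth options can be summarized numerically. The bandwidth-to-resource-block mapping below is the standard one from 36.101; the short script just shows how much of each channel bandwidth is actually occupied by subcarriers:

```python
# LTE channel bandwidth (MHz) -> number of resource blocks, per 36.101.
N_RB = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

RB_BW_KHZ = 180  # one resource block = 12 subcarriers x 15 kHz

for bw_mhz, n_rb in N_RB.items():
    occupied_mhz = n_rb * RB_BW_KHZ / 1000
    print(f"{bw_mhz:>4} MHz system BW -> {n_rb:>3} RBs, "
          f"{occupied_mhz:.2f} MHz occupied by subcarriers")
```

Note that the occupied bandwidth is always a bit less than the nominal channel bandwidth; the remainder is guard band.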

One intuitive example is shown in the following measurement screen. This is the RF signal captured for an LTE call connection and data transfer. When you initiate a call, the mobile device goes through the protocol sequence for call setup, and then data traffic starts. If you look at the bottom (spectrogram) of the measurement screen, you will notice that the frequency allocation (bandwidth being used) changes during this period. In this screen, the frequency allocation for data traffic does not change, but in a live network this bandwidth would change dynamically.


What is the implication of these multiple system bandwidths and dynamic bandwidth changes for the mobile phone designer and the test engineer? For designers, the biggest issue is how to optimize the various design parameters to best fit all of these bandwidths. For test engineers, the biggest issue is the huge number of test cases they have to go through.
The final outcome of all these considerations on multiple bandwidths and dynamic bandwidth change can be exemplified by the table shown below. This is a table for only one test case. See all those different system bandwidths you have to cover. The different RB allocations are for the dynamic frequency allocation that I mentioned above. In LTE, for every test case you will have this kind of table, and it is a huge headache for designers and test engineers.





Tuesday, January 12, 2010

Tips for utilizing 3GPP specification


If you run into something you have no idea about, you ask somebody else for an explanation... and usually the person you think is an expert gives you a very brief explanation that does not help you much and says, "Refer to 3GPP spec AA.BBB for the detailed explanation". So you download and open specification AA.BBB and start reading. Does this help? In most cases, NO. Spec AA.BBB reads the same way your expert talked: it gives you a minimal description and says "refer to spec BB.CCC", and when you get into spec BB.CCC you end up in the same situation. This is the first frustration when you try to learn from the 3GPP specifications.

Is there any easy solution for this? Honestly and unfortunately, NO. But one thing that may help you in the long term is to understand the relationships among multiple specifications.
Pick a specific area you are especially interested in, make a list of the relevant specifications, and define the relationships among them. I have a couple of examples here.

Tips for myself.

Currently I am pretty heavily involved in writing test cases for the Physical Layer and Higher Layer Signaling (RRC/NAS), and the following are the 3GPP standards that I frequently refer to. This may or may not help you much, but I keep it for my own quick reference.

  • Overall RRC Protocol - TS36.331
  • NAS : EPS - TS 24.301, TS 29.274
  • NAS : QoS Related - TS 23.203 (Policy and Charging Control Architecture)
  • DCI Format - 36.213 7.1 UE procedure for receiving the physical downlink shared channel
  • Transport Block Size and Throughput Calculation - TS36.213 Table7.1.7.1-1
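As a quick illustration of the TBS-to-throughput calculation mentioned in the last item: with a 1 ms TTI, peak throughput is simply the transport block size delivered once per TTI. The TBS value used below is just an example number; look up the real value for your I_TBS and N_PRB in Table 7.1.7.1-1:

```python
def throughput_mbps(tbs_bits: int, tti_ms: float = 1.0) -> float:
    """Peak MAC-layer throughput if one transport block of tbs_bits
    is delivered every TTI (no HARQ retransmissions assumed)."""
    return tbs_bits / (tti_ms / 1000.0) / 1e6

# Hypothetical example: a transport block of 36696 bits per 1 ms TTI.
print(throughput_mbps(36696))  # ~36.7 Mbps
```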

PUCCH Reference Signal

  • TS 36.211 - 5.4 Physical uplink control channel
  • TS 36.211 - 5.4.1 PUCCH formats 1, 1a and 1b
  • TS 36.211 - 5.4.3 Mapping to physical resources
  • TS 36.211 - 5.5.1.3 Group hopping
  • TS 36.211 - 5.5.2.2 Demodulation reference signal for PUCCH
  • TS 36.211 - 5.5.2.2.1 Reference signal sequence
  • TS 36.212 - 5.2 Uplink transport channels and control information
  • TS 36.212 - 5.2.3 Uplink control information on PUCCH
  • TS 36.213 - 10.1 UE procedure for determining physical uplink control channel assignment
  • TS 36.331 - 6.3.2 Radio resource control information elements - PUCCH-Config

PUSCH

  • TS 36.211 - 5.5.2.1 Demodulation reference signal for PUSCH
  • TS 36.212 - 5.2 Uplink transport channels and control information
  • TS 36.212 - 5.2.2 Uplink shared channel
  • TS 36.212 - 5.2.3 Uplink control information on PUCCH
  • TS 36.331 - 6.3.1 System information blocks – SystemInformationBlockType2
  • TS 36.331 - 6.3.2 Radio resource control information elements – PUSCH-Config

PHICH

  • TS 36.211 - 6.9 Physical hybrid ARQ indicator channel
  • TS 36.213 - 8.3 UE ACK/NACK procedure
  • TS 36.213 - 9.1.2 PHICH Assignment Procedure
RACH Procedure

  • TS36.211 - 5.7 Physical random access channel
  • TS36.211 - Table 5.7.1-2 Frame structure type 1 random access configuration for preamble format 0-3.
  • TS36.331 - 6.3.1 System information blocks – SystemInformationBlockType2
  • TS36.331 - 6.3.2 Radio resource control information elements - PRACH-Config
  • TS36.331 - 6.3.2 Radio resource control information elements - RACH-ConfigCommon
  • TS36.331 - 6.3.2 Radio resource control information elements - RACH-ConfigDedicated
Paging Procedure

  • TS 36.304 - 7 Paging

CCE Index Calculation
  • TS 36.213 - 9.1.1 PDCCH Assignment Procedure

Specifications for UE conformance Testing

If you want to know the details of conformance testing for LTE, you need to refer to the specifications listed at http://www.3gpp.org/ftp/Specs/html-info/36-series.htm

Even if you are not doing conformance-type testing, a lot of IOT (Inter-Operability Test) and user-defined test cases have concepts similar to conformance. So trying to understand conformance testing will help you understand any type of testing.

The first set of specifications you have to be familiar with is as follows:


  • 36.521-1 : RF Conformance (Transmitter Test, Receiver Test and Performance Test)
  • 36.521-3 : RF Conformance (RRM Test)
  • 36.523 : Protocol Conformance
    • When you read these specifications, they describe only the test purpose/expected result and the overall procedure, but do not describe the details of the protocol sequence and the IEs (information elements) for each of the Layer 3 messages. For these, you have to refer to:


      • 36.508 : UE Test Environment
      • 36.101 : UE TxRx

      For example, if you want to understand the procedure for the "LTE Transmit Power" measurement, the first thing you will look into is the following description in 36.521.

      ** 6.2.2.4.1 Initial Condition **
      1. Connect the SS to the UE antenna connectors as shown in TS 36.508 [7] Annex A Figure A3.
      2. The parameter settings for the cell are set up according to TS 36.508 [7] subclause 4.4.3.
      3. Downlink signals are initially set up according to Annex C.0, C.1 and C.3.0, and uplink signals according to Annex H.1 and H.3.0.
      4. The UL Reference Measurement Channel is set according to Table 6.2.2.4.1-1.
      5. Propagation conditions are set according to Annex B.0.
      6. Ensure the UE is in State 3A according to TS 36.508 [7] clause 4.5.3A. Message contents are defined in clause 6.2.2.4.3.

      ** 6.2.2.4.2 Test procedure **
      1. SS sends uplink scheduling information every TTI via PDCCH DCI format 0 for C_RNTI to schedule the UL RMC according to Table 6.2.2.1.4.1-1. Since the UE has no payload and no loopback data to send, the UE sends uplink MAC padding bits on the UL RMC.
      2. Send continuously uplink power control "up" commands in the uplink scheduling information to the UE until the UE transmits at its maximum output power state according to the test configuration from Table 6.2.2.4.1-1.
      3. Measure the mean power of the UE in the channel bandwidth of the radio access mode. The period of measurement shall be one sub-frame (1 ms).

      Just for the overall test procedure, the information described above would be enough even without referring to other specifications or tables. But if you have to troubleshoot a failed test or a test error, or if you are an engineer who has to develop the test cases, you should not miss even the tiniest detail of the following items, which are referred to in the procedure.


      • TS 36.508 Annex A Figure A3
      • TS 36.508 subclause 4.4.3.
      • TS 36.521 Annex C.0, C.1, and C.3.0
      • TS 36.521 Annex H.1 and H.3.0
      • TS 36.521 Table 6.2.2.4.1-1
      • TS 36.508 clause 4.5.3A
      • TS 36.521 Table 6.2.2.1.4.1-1

      If you want to know the protocol sequence during test case execution, you have to refer to TS 36.508, section "4.5 Generic Procedure".

      Then you need to know the contents (IEs: Information Elements) of each message, which are described in 36.508 sections 4.6, 4.7 and 4.7A.

      Isn't this complicated enough to make your head spin? :-) But once you get into this area, you will never survive without becoming familiar with these documents and struggling with them.



      Friday, January 8, 2010

      Tips for LTE TTCN Source


      Where can I get TTCN and Viewer ?

      This is where you can download the TTCN source code from
      http://www.3gpp.org/ftp/tsg_ran/WG5_Test_ex-T1/TTCN/Deliveries/LTE_SAE/

      You can download free TTCN-3 Viewer from
      http://www.eu.anritsu.com/ttcn

      Where do I have to start ?

      If you open up the TTCN source, it looks extremely complicated, and you don't know where to start. Where do I have to start?

      If you want an overall understanding of the test procedure/protocol sequence of each test case, I think it is better to look at the test case descriptions in 3GPP 36.523-1. But once you have gone over the test procedures several times, I would move on to further details of the parameters and the default values of the IEs (information elements) for the test cases. I would start with EUTRA_RRC_ASN1_Definitions.asn in the CommonEUTRA_Def folder.


      Understand RACH !

      What is the trickiest part in device troubleshooting? My experience says: "If a problem happens in the middle of doing something, it is relatively easy to find the root cause and troubleshoot it (I might be over-simplifying the situation :-), but if something happens before anything has started, it is a nightmare." For example, you set all the parameters on the network emulator for the UE you want to test and then turn on the UE. Within a few seconds the UE starts booting, and then a couple of seconds later you see a few antenna bars at the top of the UE screen... and then several seconds later you see 'SOS' or 'Service Not Available' instead of your network operator name and normal antenna bars. This is what I mean by a "problem in the middle of doing something". In this case, if you collect the UE log and the equipment log, you can at least easily pinpoint where the problem happens and start from there for further details. But what if you are in this situation: you set all the parameters on the network emulator side and turn on the UE... the UE starts booting up, shows the message "Searching Network ...." and gets stuck there, with no antenna bars, not even 'SOS', just saying "No service". And you collect the UE-side log and the network emulator-side log, but there are no signaling messages at all. This is where our headache starts.


      As examples,


      i) What if you don't see 'RRC Connection Request' when you turn on a WCDMA UE?


      ii) What if you don't see 'Channel Request' when you turn on a GSM UE?


      iii) What if you don't see a 'RACH Preamble' when you turn on an LTE UE?


      The first thing you have to do is understand every detail of this procedure, not only at the higher signaling layers but all the way down to the physical-layer steps related to this first step. You also have to use proper equipment which can show this detailed process. If you have equipment that does not provide logging, or provides only higher-layer signaling logs, it will be extremely difficult to troubleshoot. Given that you have the proper tools, the next thing you need is detailed knowledge of the process. Without the knowledge, however good the tools are, they don't mean anything. So? I want to teach myself here about the first step of LTE signaling, which is the RACH process. (Somebody might say there are many other steps even before the RACH, like frequency sync, time sync and MIB/SIB decoding, but I will put those aside for now, since they are more like baseband processing.)


      When does the RACH process occur?


      It is helpful to think about when 'RRC Connection' setup happens in WCDMA (or when the PRACH process happens, if you are interested in lower-layer stuff). It is also helpful to think about when 'Channel Request' happens in a GSM UE.


      My impression of the LTE RACH process is that it is like a combination of the PRACH process (WCDMA) and Channel Request (GSM). It may not be a 100% correct analogy, but anyway that is the impression I got. In LTE, the RACH process happens in the following cases. (The following list came from an LTE training manual by Award Solutions; of course, this description is also based on the 3GPP specifications.)


      i) During initial access to register or RRC Connection Request


      ii) When re-establishing the RRC connection after a radio link failure


      iii) During the Handover to allocate the scheduling grant from the target eNB


      iv) When UL synchronization is lost


      v) When there are no dedicated resources (no PUCCH resources have been assigned to the UE)


      Two types of RACH process : Contention-based and Contention-free


      When a UE transmits a PRACH preamble, it transmits a specific pattern, and this specific pattern is called a "signature". In each LTE cell, a total of 64 preamble signatures are available, and the UE randomly selects one of these signatures.


      The UE selects one of these signatures "randomly"?


      Does this mean there is some possibility that multiple UEs send PRACH with identical signatures?


      Yes.


      There is such a possibility. It means the same PRACH preamble from multiple UEs can reach the network at the same time. This kind of PRACH collision is called "contention", and a RACH process that allows this type of contention is called a "contention-based" RACH process. In a contention-based RACH process, the network goes through an additional step later to resolve the contention, and this step is called "contention resolution".


      But there are some cases where this kind of contention is not acceptable for some reason (e.g., timing restrictions), and the contention can be prevented. In this case, the network informs each UE of exactly when and which preamble signature it has to use. Of course, the network allocates these preamble signatures so that they do not collide. This kind of RACH process is called a "contention-free" RACH procedure. To initiate a contention-free RACH process, the UE should already be in connected mode before the RACH process, as in the handover case.


      A typical 'Contention Based' RACH procedure is as follows:


      i) UE --> NW : RACH Preamble (RA-RNTI, indication for L2/L3 message size)


      ii) UE <-- NW : Random Access Response (Timing Advance, T_C-RNTI, UL grant for L2/L3 message)


      iii) UE --> NW : L2/L3 message

      iv) Message for early contention resolution


      Now let's assume that a contention happened at step i): for example, two UEs sent the same PRACH preamble. In this case, both UEs receive the same T_C-RNTI and resource allocation at step ii). As a result, both UEs send their L2/L3 message on the same resource allocation (the same time/frequency location) to the network at step iii). What happens when both UEs transmit at the exact same time/frequency location? One possibility is that the two signals act as interference to each other and the network decodes neither of them. In this case, neither UE gets any response (HARQ ACK) from the network, so both conclude that the RACH process has failed and go back to step i). The other possibility is that the network successfully decodes the message from one UE and fails to decode the other. In this case, the UE whose L2/L3 message was successfully decoded on the network side gets the HARQ ACK from the network. This HARQ ACK process for the step iii) message is part of the "contention resolution" process.
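To get a feel for how often the contention in step i) actually happens, here is a small Monte-Carlo sketch (illustrative only) of the probability that at least two UEs attempting RACH in the same opportunity pick the same signature out of the 64 available in a cell:

```python
import random

def collision_prob(n_ues, n_signatures=64, trials=100_000):
    """Monte-Carlo estimate: probability that at least two UEs doing RACH
    in the same opportunity pick the same preamble signature."""
    collisions = 0
    for _ in range(trials):
        picks = [random.randrange(n_signatures) for _ in range(n_ues)]
        if len(set(picks)) < n_ues:  # at least one duplicate signature
            collisions += 1
    return collisions / trials

random.seed(1)
print(collision_prob(2))  # analytically 1/64, about 0.0156
print(collision_prob(5))  # grows quickly with more simultaneous UEs
```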


      A typical 'Contention Free' RACH procedure is as follows:


      i) UE <--NW : RACH Preamble Assignment


      ii) UE --> NW : RACH Preamble (RA-RNTI, indication for L2/L3 message size)


      iii) UE <--NW : Random Access Response (Timing Advance, C-RNTI, UL grant for L2/L3 message)


      Exactly when does a UE transmit the RACH?



      To answer this question, you need to refer to 3GPP TS 36.211, Table 5.7.1-2.


      Have you opened the specification? It shows exactly when a UE is supposed to send the RACH, depending on a parameter called the "PRACH Configuration Index".


      For example, if the UE is using "PRACH Configuration Index 0", it may transmit the RACH only in EVEN-numbered SFNs (System Frame Numbers). Is this a good enough answer? Does it mean that the UE can transmit the RACH at any time within the specified SFN? The answer to this question is in the "Subframe Number" column of the table. It says "1" for "PRACH Configuration Index 0", which means the UE is allowed to transmit the RACH only in subframe number 1 of every even SFN.


      To check your understanding of the table, here is one question: with which "PRACH Configuration Index" would it be easiest for the network to detect the RACH from the UE, and why?


      The answer is 14, because the UE can then send the RACH in any SFN and any subframe within the frame.
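The table lookup can be sketched as code. The three entries below are illustrative samples of Table 5.7.1-2 (FDD, preamble format 0) reconstructed from the discussion above; check the actual table for the full set of indices:

```python
# Illustrative entries from 36.211 Table 5.7.1-2 (FDD, preamble format 0).
# Each entry: (allowed SFN parity, set of allowed subframe numbers).
# "any" means every system frame number is allowed.
PRACH_CONFIG = {
    0:  ("even", {1}),              # RACH only in subframe 1 of even SFNs
    3:  ("any",  {1}),              # subframe 1 of every SFN
    14: ("any",  set(range(10))),   # every subframe of every SFN
}

def is_rach_opportunity(config_index, sfn, subframe):
    """Is (sfn, subframe) a valid RACH opportunity for this config index?"""
    parity, subframes = PRACH_CONFIG[config_index]
    if parity == "even" and sfn % 2 != 0:
        return False
    return subframe in subframes

print(is_rach_opportunity(0, 2, 1))   # True: even SFN, subframe 1
print(is_rach_opportunity(0, 3, 1))   # False: odd SFN
print(is_rach_opportunity(14, 3, 7))  # True: any SFN, any subframe
```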


      How does the network know exactly when the UE will transmit the RACH?


      It is simple. The network knows when the UE will send the RACH even before the UE sends it, because the network tells the UE when it is supposed to transmit the RACH. (If the UE fails to properly decode the network information about the RACH, the network will fail to detect the RACH even though the UE sends it.)


      The following section describes the network information on RACH.


      Which RRC message contains the RACH configuration?


      It is in SIB2, and you can find the details in 3GPP 36.331. (Click the image to enlarge it so that you can read it clearly.)



      Exactly when does the network transmit the RACH Response?


      We all know that the network should transmit the RACH Response after it receives a RACH preamble from the UE, but do we know exactly when, in exactly which subframe, the network should transmit the RACH Response? The following is what 3GPP 36.321 (section 5.1.4) describes.


      Once the Random Access Preamble is transmitted and regardless of the possible occurrence of a measurement gap, the UE shall monitor the PDCCH for Random Access Response(s) identified by the RA-RNTI defined below, in the RA Response window which starts at the subframe that contains the end of the preamble transmission [7] plus three subframes and has length ra-ResponseWindowSize subframes.


      It means the earliest time the network can transmit the RACH Response is 3 subframes after the end of the RACH preamble. Then what is the latest time? It is determined by ra-ResponseWindowSize, which can be between 2 and 10 subframes. This means the maximum time difference between the end of the RACH preamble and the RACH Response is only 12 subframes (12 ms), which is a pretty tight timing requirement.
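The timing rule above can be sketched as follows. The RA-RNTI formula (1 + t_id + 10 * f_id, from 36.321 section 5.1.4) is included as well, since it is tied to the same preamble timing; the function names are my own:

```python
def ra_response_window(preamble_end_sf, window_size):
    """Absolute subframe numbers in which the UE monitors PDCCH for the
    Random Access Response (36.321 5.1.4): the window starts 3 subframes
    after the end of the preamble and lasts ra-ResponseWindowSize subframes."""
    start = preamble_end_sf + 3
    return list(range(start, start + window_size))

def ra_rnti(t_id, f_id=0):
    """RA-RNTI for a preamble sent in subframe t_id (0..9); f_id is 0 for FDD."""
    return 1 + t_id + 10 * f_id

# Preamble ends in subframe 1; with the largest window (10), the RAR must
# arrive no later than 12 subframes after the preamble ended.
print(ra_response_window(1, 10))  # subframes 4..13
print(ra_rnti(1))                 # 2
```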


      RACH Procedure during Initial Registration - RACH Procedure Summary


      The following is an example of the RACH procedure which happens during initial registration. If you are going to be an engineer working on protocol stack development or test case development, you should be very familiar with all the details of this process.


      Again, we have to know every detail of every step without missing anything to be a developer, but of course it is not easy to understand everything in a single shot. So let's start with the most important part, which I think is the details of the RACH Response. The following diagram shows one example of a RACH Response with 5 MHz bandwidth. We don't have to memorize the detailed values themselves, but we should be familiar with the data format and understand which part of this bit string means what. If you decode the UL Grant part, you will get the following result. You will notice that the information it carries is very similar to DCI format 0, which carries the resource allocation for uplink data. The information in the UL Grant of the RACH Response message is the resource allocation for msg3 (e.g., RRC Connection Request).
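Decoding the UL Grant can be sketched as a simple bit-field split. The field widths are the 20-bit RAR grant layout from 36.213 section 6.2; the example bit string itself is entirely hypothetical:

```python
def decode_rar_ul_grant(grant_bits: str) -> dict:
    """Split the 20-bit UL Grant of a Random Access Response into its fields
    (36.213 section 6.2): hopping flag, fixed-size RB assignment,
    truncated MCS, TPC command, UL delay, CSI request."""
    assert len(grant_bits) == 20 and set(grant_bits) <= {"0", "1"}
    fields = [("hopping_flag", 1), ("rb_assignment", 10),
              ("truncated_mcs", 4), ("tpc", 3), ("ul_delay", 1),
              ("csi_request", 1)]
    out, pos = {}, 0
    for name, width in fields:
        out[name] = int(grant_bits[pos:pos + width], 2)
        pos += width
    return out

# Hypothetical 20-bit grant, just to show how the fields split out:
decoded = decode_rar_ul_grant("0" + "0000000011" + "0011" + "100" + "0" + "0")
print(decoded["rb_assignment"], decoded["truncated_mcs"])  # 3 3
```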



      Let me describe this procedure in verbal form again.


      i) The UE initiates a random access procedure on the (uplink) Random Access Channel (RACH). (The location of the RACH in the frequency/time resource grid is known to the mobile via the (downlink) Broadcast Channel (BCH). The random access message itself consists of only 6 bits, and the main content is a random 5-bit identity.)


      ii) The network sends a Random Access Response Message (RARM) at a time and location on the Physical Downlink Shared Channel (PDSCH). (The time and location of the RARM on the PDSCH can be calculated from the time and location at which the random access message was sent. This message contains the random identity sent by the device, a temporary Cell Radio Network Temporary Identity (T_C-RNTI) which will be used for all further bandwidth assignments, and an initial uplink bandwidth assignment.)


      iii) The mobile device then uses the bandwidth assignment to send a short (around 80-bit) RRC Connection Request message, which includes the identity previously assigned to it by the core network.


      Only step i) uses physical-layer processing specifically designed for random access. The remaining steps use the same physical-layer processing as normal uplink and downlink data transmission.


      3GPP Standard for RACH Process


      3GTS 36.300 (10.1.5) : Overall description of the RACH process. Read this first.


      3GTS 36.211 (5.7) : Physical random access channel (PRACH preamble structure and time/frequency mapping).


      3GTS 36.213 (6) : Physical-layer random access procedure.