Sunday, February 28, 2010

From R99 to LTE

I have been working on creating various test cases on the UMTS side for a couple of years. During this period, I saw the technology change from R99 to HSDPA, HSUPA and now to LTE. But in terms of RRC and the layers above it, which I have mostly been working on, I haven't seen many differences. Sometimes I even got the impression that the higher layer signaling (RRC and above) gets simpler as we move from one generation to the next. If you look at the higher layers of LTE, you will feel that LTE signaling looks simpler than the other existing technologies.

Then how could we get higher data rates, lower latency and more effective usage of the radio channels with simpler signaling? The secret is that as the technology evolves, the higher layer signaling stays similar or even gets simpler, while the lower layers (PHY and MAC) get more complicated, and these lower layers are the ones that enable us to enjoy all those evolved features, especially high data rates and low latency. So to understand the details of the evolved technologies, we have to understand the details of the lower layers, e.g. PHY and MAC.

Here is a list of questions you should keep in mind when you want to study a new technology.
i) What kind of additional PHY channels have been added compared to R99?
ii) What kind of information is carried by the additional physical channels?
iii) What kind of MAC entities have been added compared to R99?
iv) What is the role of the new MAC entities, especially in terms of scheduling?

Why did we need HSDPA?

Before I start answering the questions listed above, let's think about why we wanted a new technology called HSDPA. In any communication technology, the biggest motivation for a new technology has been to increase the data rate. Then a question arises: how can we increase the data rate? Regardless of the type of communication, we have usually taken similar approaches, which are as follows:

i) Change the modulation scheme
ii) Decrease the latency between the communicating parties
iii) Optimize at the multi-user level rather than at the single-user level

Let's take some examples. The evolution path of Bluetooth was from the standard rate to 2 Mbps EDR (Enhanced Data Rate) to 3 Mbps EDR, and the biggest change at each step was the modulation scheme. What happened along the GSM evolutionary path, from GSM to GPRS to EGPRS (EDGE) to EDGE Evolution? The biggest changes along that path were also changes of modulation scheme. From R99 to HSDPA, we likewise introduced a new modulation scheme called 16 QAM. The advantage of using a new modulation scheme to increase the data rate is the simplicity of the concept; the disadvantage is that it requires hardware changes.

The next step would be to decrease the latency between the communicating parties. How can we achieve this? By increasing the physical propagation speed between the two parties? That is practically impossible because the signal already propagates at the speed of light. Then what is another option? It is improving the scheduling algorithm of the communication. What does that mean? It is a little hard to explain in a simple way, so I will talk about it in a separate section.

Lastly, let's think about optimization at the multi-user level rather than at the single-user level. Suppose a situation where ten users are communicating with one Node B. In R99, each user has a separate and independent communication path to the Node B via a special channel called DPCH (Dedicated Physical Channel). Optimization in this case means ten separate optimization processes, one for each user. OK, now let's assume each of the users is getting the maximum data rate for that specific UE in the specific environment to which the UE is exposed. Does this guarantee that the whole resource of the Node B is fully utilized? It is hard to say "Yes" to this question. Isn't there a possibility that some of the resources are being wasted? It would be hard to say "No" to that one. We will think about this issue in the next section.

Introduction of new Channels in HSDPA

In HSDPA, four new physical channels were introduced:

i) HS-DSCH
ii) HS-SCCH
iii) HS-DPCCH
iv) F-DPCH

With the introduction of these four channels, we could implement many of the methods to improve the data rate that were briefly described in the previous section.

The most important channel is definitely HS-DSCH (High Speed Downlink Shared Channel). As the name implies, it is a SHARED channel, whereas in R99 we used a DEDICATED channel. It means that all the users within a cell share a single channel, one big pipe, rather than each user having its own dedicated channel, a small pipe. With this, the network can optimize the resource allocation among multiple users much more efficiently. As an extreme example, the network can allocate 91% of the resources to a single UE and only 1% to each of the remaining nine users, when those nine users do not require much resource or are in such a poor environment that they can only utilize a small fraction of the transmission capacity. With dedicated channels we cannot do this kind of extreme resource allocation, because each dedicated channel requires a certain minimum resource allocation even when the real utilization is lower than that minimum.
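To make this concrete, here is a toy comparison in Python. The demand figures and the "share in proportion to demand" rule are my own illustration, not anything from the spec; they only show why a shared pipe can be kept full while fixed dedicated pipes cannot.

# Toy comparison of dedicated vs shared downlink resource allocation.
# Demands and allocation rules are illustrative assumptions only.

def dedicated_allocation(demands, n_users):
    # Each DPCH reserves an equal slice regardless of actual demand.
    slice_per_user = 1.0 / n_users
    return [min(d, slice_per_user) for d in demands]

def shared_allocation(demands):
    # HS-DSCH style: the whole pipe is split in proportion to demand.
    total = sum(demands)
    return [d / total for d in demands] if total else [0.0] * len(demands)

demands = [0.91] + [0.01] * 9                    # one heavy user, nine light users
print(sum(dedicated_allocation(demands, 10)))    # ~0.19: most of the cell capacity sits idle
print(sum(shared_allocation(demands)))           # ~1.0: the shared pipe is fully used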

I said HS-DSCH is a shared channel. It means that the whole data stream on the channel is received by all users. Then how can a UE figure out whether a given piece of data is for that UE or for some other UE? I also said that in HSDPA multiple modulation schemes are used, QPSK and 16 QAM. Then how does a UE know whether the data is QPSK modulated or 16 QAM modulated? To carry all this information, another new channel was introduced: HS-SCCH (High Speed Shared Control Channel). The information carried by HS-SCCH is as follows (a rough sketch of these fields appears after the list):
i) Transport format information - the channelization codes carrying the data, the modulation scheme and the transport block size
ii) Hybrid-ARQ related information
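Below is a rough Python sketch of the control information a UE has to decode from HS-SCCH before it can receive anything on HS-DSCH. The field names and groupings are my own illustration; the exact bit-level layout is defined in TS 25.212.

# Rough sketch of the information a UE decodes from HS-SCCH before it can
# demodulate HS-DSCH. Field names are illustrative; see TS 25.212 for the
# exact bit layout.

from dataclasses import dataclass

@dataclass
class HsScchInfo:
    channelization_code_set: int     # which codes of the code tree carry the data
    modulation_is_16qam: bool        # False -> QPSK, True -> 16 QAM
    transport_block_size_index: int  # points into a transport block size table
    harq_process_id: int             # Hybrid-ARQ process this block belongs to
    redundancy_version: int          # Hybrid-ARQ redundancy/constellation version
    new_data_indicator: bool         # new block or retransmission
    # In addition, the HS-SCCH CRC is masked with the UE's H-RNTI, which is how
    # a UE recognizes that a given TTI on HS-DSCH is addressed to it.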

I said at the beginning that HSDPA uses a shared channel and tries to achieve optimum resource allocation at the multi-user level. To do this, the network needs to know the exact status of the UE, and it needs to know whether the data it sent successfully reached its destination (a specific UE). To enable this, the UE repeatedly reports its channel quality and its data reception status to the network. To send this information, the UE uses a special channel called HS-DPCCH. This channel carries CQI (Channel Quality Indicator) and Ack/Nack information.
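As a small companion sketch, here is what that uplink feedback could look like in code. The report structure and the SIR-to-CQI formula are made-up illustrations (real UEs calibrate their CQI reporting against the tables in TS 25.214); only the idea of "one CQI value plus one Ack/Nack/DTX per report" comes from the text above.

# Minimal sketch of the feedback a UE sends on HS-DPCCH.
# Field names and the SIR-to-CQI mapping are illustrative only.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class HarqFeedback(Enum):
    ACK = "ACK"
    NACK = "NACK"
    DTX = "DTX"          # nothing was received, so nothing is acknowledged

@dataclass
class HsDpcchReport:
    cqi: int                  # 0..30, derived from the measured pilot quality
    feedback: HarqFeedback    # outcome of the CRC check on the last HS-DSCH block

def build_report(measured_sir_db: float, crc_ok: Optional[bool]) -> HsDpcchReport:
    # Hypothetical linear SIR-to-CQI mapping, clamped to the 0..30 range.
    cqi = max(0, min(30, round(measured_sir_db + 16)))
    if crc_ok is None:
        fb = HarqFeedback.DTX
    else:
        fb = HarqFeedback.ACK if crc_ok else HarqFeedback.NACK
    return HsDpcchReport(cqi, fb)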

So far so good. It seems there are only advantages to introducing these new channels, but you never gain 100% without losing anything. There is a drawback to relying on this shared channel method, and it is about power control. You know that one of the critical requirements of WCDMA technology is very sophisticated power control. If the UE power is too low, the Node B will have difficulty decoding it, and if the power is too strong it acts as noise to the other UEs communicating with the Node B. For this purpose, the Node B sends each UE power control commands periodically, and these commands are different for every UE because each UE may be in a different channel condition, meaning that power control has to be carried on a "Dedicated" channel. But as I explained, HS-DSCH is a shared channel. Then how can the Node B deliver the power control commands to each specific UE? The original solution was to keep an R99 dedicated channel (DPCH) carrying little more than the power control commands. But using a full DPCH only to carry small power control commands is a waste of resources. To improve this situation, a new channel was introduced in Release 6: F-DPCH (Fractional DPCH). The details of F-DPCH are out of the scope of this section, so I won't explain this channel any further.
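To make the power control idea concrete, here is a toy sketch of the closed loop that the (F-)DPCH exists to carry. The target SIR, step size and starting power are arbitrary assumptions; the real WCDMA inner loop runs once per slot, i.e. 1500 times per second.

# Toy inner-loop power control: the Node B compares the received SIR against a
# target and sends a one-bit TPC command back to the UE on the (F-)DPCH.
# Step size, target and starting power are simplified assumptions.

TPC_STEP_DB = 1.0
SIR_TARGET_DB = 6.0

def tpc_command(measured_sir_db: float) -> str:
    return "DOWN" if measured_sir_db > SIR_TARGET_DB else "UP"

def apply_tpc(ue_tx_power_dbm: float, command: str) -> float:
    return ue_tx_power_dbm + (TPC_STEP_DB if command == "UP" else -TPC_STEP_DB)

# One iteration of the loop:
power = 10.0                            # dBm, arbitrary starting point
cmd = tpc_command(measured_sir_db=4.2)  # measured SIR below target -> "UP"
power = apply_tpc(power, cmd)           # UE steps its power up by 1 dB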

Improved scheduling in HSDPA

The whole purpose of improving the scheduling is to decrease the latency between the communicating parties. In this case, the communicating parties are a UE and the network. The basic idea of this improvement is to refine the granularity of the scheduling period.

In a WCDMA network, this scheduling happens every TTI (Transmission Time Interval), and in R99 the common TTI is 10 ms (sometimes a 20 ms or 40 ms TTI is used). In HSDPA, the TTI has been shortened to 2 ms. Why 2 ms? Why can't it be 1 ms or 4 ms? It is just the result of a trade-off between various factors. If the TTI were longer, say 4 ms or 6 ms, the effect of refining the scheduling period would not be significant. However, if the TTI is too short, the gain from finer scheduling no longer keeps up with the scheduling overhead, because executing the scheduling algorithm requires a certain amount of time and resources.

Another means of decreasing latency came from the way data errors are handled. In R99, those errors can only be detected at RLC level via Ack/Nack from the other party, and whether a retransmission is requested or not is determined at that layer or even higher. But in HSDPA, errors are detected at the physical layer. When a UE receives data, it checks the CRC and sends an Ack or Nack on HS-DPCCH about 5 ms after it received the data. If the UE sends a Nack, the network retransmits the data. This error detection and retransmission mechanism is called H-ARQ (Hybrid ARQ).
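Here is a minimal stop-and-wait sketch of that idea. Real HSDPA runs several H-ARQ processes in parallel and soft-combines retransmissions; both are left out, and the transmit function is just a stand-in for the radio channel.

# Minimal stop-and-wait Hybrid-ARQ sketch for one HARQ process.
# Parallel processes and soft combining are omitted for clarity.

import zlib

def ue_receive(payload: bytes, received_crc: int) -> str:
    # The UE checks the CRC and answers on HS-DPCCH roughly 5 ms later.
    return "ACK" if zlib.crc32(payload) == received_crc else "NACK"

def node_b_send(block: bytes, transmit) -> int:
    # 'transmit' models the radio channel: it may corrupt the payload.
    crc = zlib.crc32(block)
    attempts = 0
    while True:
        attempts += 1
        payload, rx_crc = transmit(block, crc)
        if ue_receive(payload, rx_crc) == "ACK":
            return attempts
        # NACK came back: retransmit from the MAC-hs buffer; RLC and higher
        # layers are not involved, unlike in R99.

# Example with a perfect channel: delivered on the first attempt.
print(node_b_send(b"transport block", lambda blk, c: (blk, c)))   # -> 1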

Another mechanism for improved scheduling adopted in HSDPA is to allocate optimized resources to each UE. How can this be achieved? To do this, the network needs some information to make the best decision for each UE. The important pieces of information for this decision making are:


i) CQI
ii) Buffer Status
iii) Priority of the data

The CQI is calculated by the UE based on the signal-to-noise ratio of the received common pilot. If you look into the details of TFRI determination by the MAC layer, you will notice that CQI is the primary parameter used to determine the TFRI. (What is TFRI? I will talk about it later in this article or somewhere else. It is very important when implementing a maximum throughput test case.)
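To give a feeling for what "CQI determines the TFRI" means in practice, here is a toy selection function. The table values below are placeholders I made up; the real CQI-to-transport-block-size mapping depends on the UE category and is tabulated in TS 25.214.

# How a MAC-hs scheduler might turn a reported CQI into a transport format.
# The table below holds made-up placeholder values; the real CQI tables depend
# on the UE category (TS 25.214).

CQI_TABLE = {
    # cqi: (transport_block_bits, num_hs_pdsch_codes, modulation)
    5:  (2_000,  1,  "QPSK"),
    10: (7_000,  3,  "QPSK"),
    15: (14_000, 5,  "16QAM"),
    22: (22_000, 10, "16QAM"),
}

def select_tfri(reported_cqi: int):
    # Pick the largest table entry not exceeding the reported CQI.
    usable = [c for c in CQI_TABLE if c <= reported_cqi]
    if not usable:
        return None                  # channel too poor, skip this UE this TTI
    return CQI_TABLE[max(usable)]

print(select_tfri(17))               # -> (14000, 5, '16QAM') with this placeholder table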

The buffer status shows how much data is stored in the buffer for each UE. If there is no data in the buffer, the Node B should not allocate any resources to that UE. So checking the buffer status is also important for optimum resource allocation.

The overall scheduling algorithm is to allocate more resources to UEs that report a higher CQI, but there are cases where the Node B should allocate a certain amount of resources to a specific UE even when it reports a poor CQI. Common examples are an RRC message with a tight timeout value and streaming data with an expiration time. To handle these situations, the scheduler (the Node B MAC layer, MAC-hs) assigns a priority to each data block and puts the blocks into separate priority queues.
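Here is a toy per-TTI decision combining the three inputs above. The selection rule is my own made-up illustration; 3GPP deliberately does not standardize the MAC-hs scheduler, so every Node B vendor has its own algorithm.

# Toy per-TTI scheduling decision combining CQI, buffer status and priority.
# The selection rule is illustrative only.

from dataclasses import dataclass

@dataclass
class UeState:
    ue_id: str
    cqi: int               # last reported channel quality
    buffer_bits: int       # data waiting in the Node B buffer for this UE
    ttis_to_deadline: int  # shortest remaining lifetime over this UE's priority queues

def pick_ue_for_tti(ues):
    candidates = [u for u in ues if u.buffer_bits > 0]   # empty buffer -> nothing to send
    if not candidates:
        return None
    urgent = [u for u in candidates if u.ttis_to_deadline <= 2]
    if urgent:
        # A deadline is about to expire (e.g. an RRC message with a tight timer):
        # serve it even if its CQI is poor.
        return min(urgent, key=lambda u: u.ttis_to_deadline)
    # Otherwise favour the UE that can make the best use of the shared channel.
    return max(candidates, key=lambda u: u.cqi)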

What I have explained so far is just a brief overview, meant to motivate further study. If you are involved in test case creation or protocol stack development, this level of understanding will not help much. If you want to study further so that it gives you practical help for test case development or protocol stack optimization, I recommend studying the details of MAC-hs and the TFRI selection mechanism.

Why did we need HSUPA?

In the previous section, we talked about why we needed HSDPA and how HSDPA improved the data throughput. But HSDPA improved only the downlink throughput and did nothing about the uplink. So the natural next step of the evolution was an improvement on the uplink side. This is how we came up with another technology called HSUPA.

The overall mechanism by which HSUPA improves the uplink throughput is similar to the one used in HSDPA. So if you have become familiar with the HSDPA mechanism, you will not have difficulty understanding the HSUPA mechanism.

Introduction of new Channels in HSUPA

As in HSDPA, several new channels were introduced to implement HSUPA, and they are as follows:
i) E-DPDCH
ii) E-DPCCH
iii) E-HICH
iv) E-RGCH
v) E-AGCH

Briefly speaking, E-DPDCH is the uplink equivalent of HS-DSCH, E-DPCCH is the equivalent of HS-SCCH, and E-HICH is the equivalent of HS-DPCCH. But there is a major difference between these HSUPA channels and the HSDPA channels: E-DPDCH and E-DPCCH are dedicated channels, whereas HS-DSCH and HS-SCCH are shared channels. This is understandable, because in the HSDPA case the data transmission is one-to-many (one Node B to many UEs), while in the HSUPA case it is one-to-one (one UE to one Node B), so it makes sense to use dedicated channels in HSUPA.

There is another big difference between HSDPA and HSUPA, and it is about scheduling. Regardless of whether it is HSDPA or HSUPA, the scheduler (the decision maker) is in the Node B, not in the UE. For scheduling, we need two very important pieces of information: the channel quality and the buffer status. In HSDPA, the only information the scheduler needs to get from the target of the transmission is the channel quality, which is provided via HS-DPCCH; the buffer status is already available to the scheduler because the transmission buffer is located in the same place (the Node B) as the scheduler. So in HSDPA the transmitter (the Node B) can send data whenever the situation allows, but in HSUPA the transmitter (the UE) cannot send data whenever it wants. Before the UE sends data, it has to check whether the target (the receiver, the Node B) is ready and has enough resources to receive the data. For the UE to check the status of the receiver (the Node B) and get approval from it, E-AGCH (Absolute Grant Channel) and E-RGCH (Relative Grant Channel) are used. The Node B (the scheduler) sends scheduling grants to the UE, telling it when and at what data rate it may transmit.

The differences between E-AGCH and E-RGCH are:
i) E-AGCH is a shared channel, whereas E-RGCH is a dedicated channel
ii) E-AGCH is typically used for large changes in the data rate, and E-RGCH is used for smaller adjustments.

Scheduling for HSUPA

HSUPA scheduling is quite a complex process, but in simplified form the overall procedure is as follows (a small sketch of steps ii) to iv) follows the list):
i) The UE sends a grant request to the Node B
ii) The Node B sends an Absolute Grant (on E-AGCH) and Relative Grants (on E-RGCH) to the UE
iii) The UE sets its Serving Grant value based on the E-AGCH and E-RGCH values
iv) Based on the Serving Grant value, the UE selects the E-TFC for that particular transmission.
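Here is the small sketch of steps ii) to iv) promised above. The grant values, the relative-grant step and the E-TFC table are placeholders; the real serving grant and E-TFC selection procedures are defined in TS 25.321 for MAC-e/es.

# Minimal sketch of steps ii)-iv) above. Grant values and the E-TFC table are
# placeholders; the real procedure (serving grant tables, happy bit,
# non-serving relative grants, ...) is defined in TS 25.321.

RG_STEP = 1.26   # illustrative multiplicative step for a relative grant

def update_serving_grant(serving_grant, absolute_grant=None, relative_grant="HOLD"):
    if absolute_grant is not None:     # E-AGCH: large, absolute change
        return absolute_grant
    if relative_grant == "UP":         # E-RGCH: small relative adjustment
        return serving_grant * RG_STEP
    if relative_grant == "DOWN":
        return serving_grant / RG_STEP
    return serving_grant               # HOLD

def select_etfc(serving_grant, etfc_table):
    # Pick the largest E-TFC whose power ratio fits under the serving grant.
    allowed = [(size, ratio) for size, ratio in etfc_table if ratio <= serving_grant]
    return max(allowed) if allowed else None

# Placeholder E-TFC table: (transport block bits, required E-DPDCH/DPCCH power ratio)
etfc_table = [(1_000, 0.5), (4_000, 2.0), (8_000, 4.0), (16_000, 8.0)]
sg = update_serving_grant(serving_grant=2.0, relative_grant="UP")   # -> 2.52
print(select_etfc(sg, etfc_table))                                  # -> (4000, 2.0)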

For further details, we need to study the detailed mechanism of MAC-e.

Why did we need HSPA+?

We have seen the evolutionary path from R99 to HSDPA and HSUPA, and now we have speed improvements on both the uplink and the downlink side. We call the combination of HSDPA and HSUPA "HSPA". Going forward, HSPA evolved further, and this evolved version of HSPA is called HSPA+. Now a question arises: what are the factors that improved HSPA in terms of speed? The following are the key items of HSPA+.

i) CPC - DL DRX/UL DTX, HS-SCCH-less operation, Enhanced F-DPCH
ii) Layer 2 Improvement
iii) 64 QAM for HSDPA
iv) 16 QAM for HSUPA
v) Enhanced Cell_FACH

Now you may see right away what 64 QAM and 16 QAM are for: they mainly increase the size of the transmission pipe at the physical layer, so I will not explain them any further. Up until HSPA, most of the effort to increase throughput was spent at the physical layer and the MAC layer, but there are bottlenecks at every layer. If we removed all the bottlenecks from every layer, we would get the ideal maximum throughput, but this kind of bottleneck removal cannot be done in a single shot. With HSPA+, a big bottleneck at layer 2 (RLC) was removed. The RLC PDU size in HSDPA was 320 or 640 bits. Suppose you send one IP packet of 1.5 KB: it has to be split into multiple RLC PDUs and sent over multiple transmissions. But in HSPA+, the RLC PDU size becomes flexible and much larger, so even the largest IP packet can be transmitted at once. This is what the "L2 Improvement" does.
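The arithmetic behind this is simple enough to show directly. The header size and the flexible PDU size below are simplified assumptions; the point is only the number of fragments, not exact RLC framing.

# Rough fragmentation arithmetic for the paragraph above.
# Header and flexible PDU sizes are simplified, illustrative values.

import math

IP_PACKET_BITS = 1500 * 8        # a typical full-size IP packet
RLC_AM_HEADER_BITS = 16          # simplified fixed AM header

def pdus_needed(packet_bits: int, rlc_pdu_bits: int) -> int:
    payload_per_pdu = rlc_pdu_bits - RLC_AM_HEADER_BITS
    return math.ceil(packet_bits / payload_per_pdu)

print(pdus_needed(IP_PACKET_BITS, 320))      # -> 40 PDUs with the fixed HSDPA size
print(pdus_needed(IP_PACKET_BITS, 12_024))   # -> 1 PDU with a flexible size (value illustrative)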

Let's consider another situation, web browsing for example. While you are reading a page, you are not downloading anything, and there is no data exchanged between the UE and the network. During this time, the RRC state is usually moved to Cell_FACH or Cell_PCH. When you finish reading the page and move on to the next one, the RRC state has to change back to Cell_DCH. CPC is a mechanism to reduce the time spent on these state changes and to give the user the experience of a "Continuous Connection".

Another way to alleviate the problems related to RRC state changes is to increase the data rate in Cell_FACH. Theoretically you could transmit data in Cell_FACH in the previous technologies, and ideally the throughput was around 34 kbps. But if you actually try it, you will notice that the real throughput is much lower than that. In HSPA+, the Cell_FACH throughput has been greatly increased by Enhanced Cell_FACH.

Finally, LTE!


I will not talk much about LTE here because this whole blog is about LTE. Just a couple of quick comments in terms of the evolutionary path. In LTE, both the uplink and the downlink use shared channels; there is no dedicated channel. In terms of modulation scheme, the downlink can use QPSK, 16 QAM and 64 QAM, and the uplink QPSK and 16 QAM. One TTI became 1 ms, which means the PHY/MAC layer scheduling has to be much faster than in the previous technologies. To make the best use of these features, the MAC layer scheduling became much more sophisticated (implying more complicated), and it uses more information from the UE to allocate resources dynamically. It uses CQI (in non-MIMO operation) as in HSDPA, and it also uses PMI (Precoding Matrix Indicator) and RI (Rank Indicator) in MIMO operation.

Latency at almost every layer became much shorter than in the previous technologies (e.g. the UE to eNode B latency should be less than 5 ms). There are only two call states, "Idle" and "Connected", whereas the previous technologies had multiple states (Idle, Cell_DCH, Cell_FACH, Cell_PCH) and the transitions among these states took a long time. If you look at the other sections of this blog dealing with LTE signaling, you will find that the number of message transactions for registration and call setup has decreased.

If you go a little deeper into the signaling side, you will notice that a single reconfiguration message, "RRC Connection Reconfiguration", does all kinds of dynamic reconfiguration from the higher layers, whereas there were three different types of reconfiguration in WCDMA/HSPA, called "Radio Bearer Reconfiguration", "Transport Channel Reconfiguration" and "Physical Channel Reconfiguration". (Much less headache for the test case developer :-)

Simply put, in LTE the PHY layer capacity has been increased with higher-order modulation schemes, latency has become shorter, and signaling has been simplified. Does everything sound too fancy? Superficially, yes. But I am not sure how much headache I will have when it comes to MAC layer scheduling for optimal use of the resources and best performance. We will see.