Sunday, February 28, 2010

From R99 to LTE

I have been working on creating various test cases on the UMTS side for a couple of years. During this period, I saw the technology change from R99 to HSDPA, to HSUPA, and now to LTE. But in RRC and the layers above it, which I have mostly been working on, I haven't seen much difference. Sometimes I even got the impression that the higher layer signaling (RRC and above) gets simpler as one technology evolves into the next. If you look at the higher layers of LTE, you will feel that LTE signaling looks simpler than the existing technologies.

Then how could we get higher data rates, lower latency and more efficient use of the radio channel with simpler signaling? The secret is that as the technology evolves, the higher layer signaling stays similar or even gets simpler, while the lower layers (PHY and MAC) get more complicated, and it is these lower layers that let us enjoy all the evolved features, especially high data rate and low latency. So to understand the details of the evolved technologies, we have to understand the details of the lower layers, i.e. PHY and MAC.

Here is a list of questions to keep in mind when you study a new technology.
i) What additional PHY channels have been added compared to R99?
ii) What kind of information is carried by the additional physical channels?
iii) What MAC entities are added compared to R99?
iv) What is the role of the new MAC entities, especially in terms of scheduling?

Why did we need HSDPA?

Before I get to the questions listed above, let's think about why we wanted a new technology called HSDPA in the first place. In any communication technology, the biggest motivation for a new generation has been to increase the data rate. Then a question arises: how can we increase the data rate? Regardless of the type of communication, we have usually taken similar approaches, which are as follows:

i) Change the modulation scheme
ii) Decrease the latency between the communicating parties
iii) Optimize at the multi-user level rather than at the single-user level

Let's take some examples. The evolution path of Bluetooth was from the basic rate to 2 Mbps EDR (Enhanced Data Rate) to 3 Mbps EDR, and the biggest change at each step was the modulation scheme. What happened in the GSM evolutionary path, GSM to GPRS to EGPRS (EDGE) to EDGE Evolution? The biggest changes along this path were modulation scheme changes as well. From R99 to HSDPA, we also introduced a new modulation scheme, 16 QAM. The advantage of using a new modulation scheme to increase the data rate is the simplicity of the concept; the disadvantage is that it requires hardware changes.

The next approach is to decrease the latency between the communicating parties. How can we achieve this? By increasing the physical propagation speed between the two parties? That is practically impossible, because the signal already propagates at the speed of light. Then what is the other option? It is improving the scheduling of the communication. What does that mean? It is a little hard to explain briefly, so I will cover it in a separate section.

Lastly, let's think about optimization at the multi-user level rather than at the single-user level. Suppose ten users are communicating with one Node B. In R99, each user has a separate and independent path to the Node B via a dedicated channel called the DPCH (Dedicated Physical Channel). Optimization in this case means ten separate optimization processes, one per user. Now suppose each user is getting the maximum data rate possible for that specific UE in the specific radio environment it is exposed to. Does this guarantee that the whole resource of the Node B is fully utilized? It is hard to say "yes". Isn't there a possibility that some of the resources are being wasted? It would be hard to say "no". We will think about this issue in the next section.

Introduction of new Channels in HSDPA

In HSDPA, four new physical channels were introduced:

i) HS-DSCH
ii) HS-SCCH
iii) HS-DPCCH
iv) F-DPCH

With the introduction of these four channels, we could implement many of the data-rate improvement methods briefly described in the previous section.

The most important channel is definitely HS-DSCH (High Speed Downlink Shared Channel). As the name implies, it is a SHARED channel, whereas in R99 we used a DEDICATED channel. It means all the users within a cell share a single channel, one big pipe, rather than each user having its own dedicated channel, a small pipe. With this, the network can optimize the resource allocation among multiple users much more efficiently. As an extreme example, the network can allocate 91% of the resources to a single UE and only 1% to each of the remaining nine users, when those nine users do not need much resource or are in such a poor radio environment that they can only use a small fraction of the transmission capacity. With dedicated channels we cannot do this kind of extreme allocation, because each dedicated channel requires a certain minimum resource reservation even when its real utilization is lower than that minimum.
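To make the contrast concrete, here is a minimal Python sketch, purely illustrative and not taken from any specification, of how a shared-channel scheduler could split one TTI's worth of capacity according to demand, while a dedicated-channel setup keeps a fixed reservation per user:

# Illustrative only: compares a dedicated-channel setup (fixed reservation
# per user) with a shared-channel scheduler that splits capacity according
# to actual demand. All numbers are made up for the example.

TOTAL_CAPACITY = 100.0          # abstract "resource units" per TTI

def dedicated_allocation(demands, min_reservation=10.0):
    """Each user always holds its reserved slice, even if it needs less."""
    return {ue: min_reservation for ue in demands}

def shared_allocation(demands):
    """Split the whole pipe in proportion to what each user actually wants."""
    total_demand = sum(demands.values())
    if total_demand == 0:
        return {ue: 0.0 for ue in demands}
    return {ue: TOTAL_CAPACITY * d / total_demand for ue, d in demands.items()}

# One heavy user and nine nearly idle users, as in the text above.
demands = {"UE0": 91.0, **{f"UE{i}": 1.0 for i in range(1, 10)}}

print(dedicated_allocation(demands))   # every UE pinned to its reservation
print(shared_allocation(demands))      # UE0 receives ~91% of the cell capacity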

I said HS-DSCH is a shared channel. This means the data on the channel is received by all users. Then how can a UE figure out whether a given piece of data is for it or for some other UE? I also said that HSDPA uses multiple modulation schemes, QPSK and 16 QAM. Then how does a UE know whether the data is QPSK modulated or 16 QAM modulated? To carry all this information, another new channel was introduced: HS-SCCH (High Speed Shared Control Channel). The information carried by HS-SCCH is as follows (a simplified model is sketched after the list):
i) Transport format information - the channelization codes carrying the data, the modulation scheme and the transport block size
ii) Hybrid ARQ related information
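As a rough illustration, the kind of information HS-SCCH delivers every 2 ms subframe could be modelled as a record like the one below. The field names and example values are my own shorthand, not the exact bit fields defined in the specification:

from dataclasses import dataclass

@dataclass
class HsScchInfo:
    """Per-subframe control information signalled on HS-SCCH (simplified)."""
    ue_identity: int            # identity telling which UE the data is for
    channelization_codes: list  # which codes of the code tree carry the data
    modulation: str             # "QPSK" or "16QAM"
    transport_block_size: int   # bits in the accompanying HS-DSCH block
    harq_process_id: int        # which HARQ process this transmission belongs to
    new_data_indicator: bool    # first transmission or a retransmission
    redundancy_version: int     # coding version used for this (re)transmission

# Example: the UE compares ue_identity against its own identity to decide
# whether the data in this subframe is addressed to it.
info = HsScchInfo(0x1A2B, [1, 2, 3, 4, 5], "16QAM", 7168, 3, True, 0)
print(info)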

As I said at the beginning, HSDPA uses a shared channel and tries to achieve optimal resource allocation at the multi-user level. To do this, the network needs to know the exact status of each UE, and it needs to know whether the data it sent actually reached its destination (a specific UE). To enable this, the UE repeatedly reports its channel quality and its data reception status to the network on a special channel called HS-DPCCH. This channel carries CQI (Channel Quality Indicator) and ACK/NACK information.

So far so good. It may seem that introducing these new channels has only advantages, but nothing gains 100% without losing anything. There is a drawback of relying on this shared channel method, and it is about power control. One of the critical requirements of WCDMA technology is very sophisticated power control. If a UE's power is too low, the Node B has difficulty decoding it; if it is too strong, it acts as noise to the other UEs communicating with the Node B. For this purpose, the Node B sends each UE power control commands periodically, and these commands have to differ per UE because each UE may be in a different channel condition, meaning power control is inherently a "dedicated" message. But as I explained, HS-DSCH is a shared channel. Then how can the Node B deliver power control commands to each specific UE? The initial solution was to keep an R99 dedicated channel (DPCH) carrying only the power control commands. But using a full DPCH just to carry a small power control command is a waste of resources. To improve this situation, a new channel was introduced in Release 6: F-DPCH (Fractional DPCH). The details of F-DPCH are out of the scope of this section, so I won't explain this channel any further.

Improved scheduling in HSDPA

The whole purpose of improving scheduling is to decrease the latency between the communicating parties. In this case, the communicating parties are a UE and a Network. The basic idea of this improvement is to refine the granularity of the scheduling period.

In a WCDMA network, this scheduling happens every TTI (Transmission Time Interval), and in R99 the common TTI is 10 ms (sometimes 20 ms or 40 ms). In HSDPA, the TTI was shortened to 2 ms. Why 2 ms? Why can't it be 1 ms or 4 ms? It is simply the result of trading off various factors. If the TTI were longer, say 4 ms or 6 ms, the benefit of the finer-grained scheduling would not be significant. However, if the TTI were too short, the scheduling overhead would grow relative to the benefit, because executing the scheduling algorithm itself takes a certain amount of time and resources.

Another means of decreasing latency came from how erroneous data is handled. In R99, such errors can only be detected at the RLC layer through ACK/NACK status reports from the other party, and whether a retransmission is requested is decided at that higher layer. In HSDPA, the error is detected at the physical layer. When a UE receives data, it checks the CRC and sends an ACK or NACK on HS-DPCCH about 5 ms after it received the data. If the UE sends a NACK, the network retransmits the data. This error detection and retransmission mechanism is called HARQ (Hybrid ARQ).
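The interaction can be sketched as a simple stop-and-wait loop. This is a toy model, not the real MAC-hs implementation; the retry limit and error rate are invented, and the soft combining that makes it "hybrid" is reduced to a comment:

import random

MAX_RETRANSMISSIONS = 4   # illustrative limit, not a value from the spec

def crc_ok():
    """Stand-in for the UE's CRC check on a received transport block."""
    return random.random() > 0.3   # assume a 30% block error rate for the example

def harq_transmit(block_id):
    """Toy stop-and-wait HARQ: retransmit until ACK or the retry limit."""
    for attempt in range(1, MAX_RETRANSMISSIONS + 1):
        # Node B sends the block on HS-DSCH; the UE checks the CRC and answers
        # on HS-DPCCH roughly 5 ms later with ACK or NACK.
        if crc_ok():
            print(f"block {block_id}: ACK after attempt {attempt}")
            return True
        # In real HARQ the UE keeps the corrupted soft bits and combines them
        # with the retransmission instead of throwing them away.
        print(f"block {block_id}: NACK, retransmitting")
    print(f"block {block_id}: giving up, left to RLC retransmission")
    return False

harq_transmit(1)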

Another mechanism of the improved scheduling adopted in HSDPA is to allocate the optimal resources to each UE. How can this be achieved? The network needs some information to make the best decision for each UE. The important pieces of information for this decision are:


i) CQI
ii) Buffer Status
iii) Priority of the data

CQI is calculated by the UE based on the signal-to-noise ratio of the received common pilot. If you look into the details of TFRI determination by the MAC layer, you will notice that CQI is the only parameter used to determine the TFRI. (What is TFRI? I will come back to it later in this article or elsewhere; it is very important when implementing a test case for maximum throughput testing.)
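A crude sketch of the idea: MAC-hs takes the reported CQI and looks up the largest transport format the UE says it can receive. The table values below are invented for illustration only; the real CQI mapping tables are defined per UE category in the 3GPP specifications:

# Invented CQI -> (transport block size, modulation, number of codes) table.
# Real tables are defined per UE category; these numbers only show the
# shape of the lookup, not actual values.
CQI_TABLE = {
    5:  (2279, "QPSK", 1),
    10: (4664, "QPSK", 3),
    15: (9719, "16QAM", 4),
    20: (14411, "16QAM", 5),
}

def select_tfri(reported_cqi):
    """Pick the largest entry whose CQI requirement the report satisfies."""
    usable = [cqi for cqi in CQI_TABLE if cqi <= reported_cqi]
    if not usable:
        return None                      # channel too poor, skip this UE
    return CQI_TABLE[max(usable)]

print(select_tfri(17))   # -> (9719, '16QAM', 4) with the toy table above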

Buffer status shows how much data is queued in the buffer for each UE. If there is no data in the buffer, the Node B should not allocate any resources to that UE. So checking the buffer status is also important for optimal resource allocation.

The overall scheduling algorithm is to allocate more resources to UEs that report higher CQI, but there are cases where the Node B must allocate a certain amount of resources to a specific UE even when it reports a poor CQI. Common examples are an RRC message with a tight timeout and streaming data with an expiration time. To handle these situations, the scheduler (the Node B MAC layer, MAC-hs) assigns a priority to each data block and puts the blocks into separate priority queues.
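A toy version of this combination of CQI-based scheduling and priority handling might look like the following. The weighting rule is my own invention and far simpler than any real MAC-hs scheduler:

import heapq

def schedule_tti(users):
    """
    Toy MAC-hs style decision for one 2 ms TTI.
    users: list of dicts with 'name', 'cqi', 'buffer_bits', 'deadline_tti'.
    Returns the user served this TTI, or None.
    """
    candidates = []
    for u in users:
        if u["buffer_bits"] == 0:
            continue                      # nothing to send, allocate nothing
        # Urgent data (e.g. an RRC message about to time out) overrides CQI.
        urgent = u["deadline_tti"] <= 2
        score = (0 if urgent else 1, -u["cqi"])   # lower tuple = served first
        heapq.heappush(candidates, (score, u["name"]))
    return heapq.heappop(candidates)[1] if candidates else None

users = [
    {"name": "UE_A", "cqi": 25, "buffer_bits": 50000, "deadline_tti": 100},
    {"name": "UE_B", "cqi": 8,  "buffer_bits": 300,   "deadline_tti": 1},
    {"name": "UE_C", "cqi": 15, "buffer_bits": 0,     "deadline_tti": 100},
]
print(schedule_tti(users))   # UE_B wins despite the poor CQI: its data expires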

What I have explained so far is just a brief overview, meant to motivate further study. If you are involved in test case creation or protocol stack development, this level of understanding will not help much. If you want to go further so that it gives you practical help with test case development or protocol stack optimization, I recommend studying the details of MAC-hs and the TFRI selection mechanism.

Why did we need HSUPA?

In the previous section, we talked about why we needed HSDPA and how HSDPA improved the data throughput. But HSDPA improved only downlink throughput and did nothing about the uplink. So the natural next step in the evolution was to improve the uplink side. This is how we came up with another technology called HSUPA.

The overall mechanism by which HSUPA improves the uplink throughput is similar to the one used in HSDPA, so if you are familiar with the HSDPA mechanism you will have no difficulty understanding HSUPA.

Introduction of new Channels in HSUPA

As in HSDPA, several new channels were introduced to implement HSUPA. They are as follows:
i) E-DPDCH
ii) E-DPCCH
iii) E-HICH
iv) E-RGCH
v) E-AGCH

Briefly speaking, E-DPDCH is the uplink equivalent of HS-DSCH, E-DPCCH is the equivalent of HS-SCCH, and E-HICH is the equivalent of HS-DPCCH. But there is one main difference between these HSUPA channels and the HSDPA channels: E-DPDCH and E-DPCCH are dedicated channels, whereas HS-DSCH and HS-SCCH are shared channels. This is understandable, because in the HSDPA case the transmission goes from one source (the Node B) to many targets (the UEs), while in the HSUPA case each transmission goes from one source (a UE) to one target (the Node B), so using dedicated channels in HSUPA makes sense.

There is another big difference between HSDPA and HSUPA, and it concerns scheduling. Whether it is HSDPA or HSUPA, the scheduler (the decision maker) sits in the Node B, not in the UE. For scheduling we need two very important pieces of information: channel quality and buffer status. In HSDPA, the only information the scheduler needs from the target of the transmission is the channel quality, which is provided via HS-DPCCH; the buffer status is already available to the scheduler because the transmission buffer sits in the same place (the Node B) as the scheduler. So in HSDPA the transmitter (the Node B) can send data whenever the situation allows. In HSUPA, however, the transmitter (the UE) cannot send data whenever it wants. Before the UE sends data, it has to check whether the receiver (the Node B) is ready and has enough resources to receive it. For the UE to learn the status of the Node B and get its approval, E-AGCH (Absolute Grant Channel) and E-RGCH (Relative Grant Channel) are used: the Node B (the scheduler) sends scheduling grants to the UE telling it when and at what data rate it may transmit.

The differences between E-AGCH and E-RGCH are:
i) E-AGCH is a shared channel, while E-RGCH is a dedicated channel
ii) E-AGCH is typically used for large changes in the data rate, while E-RGCH is used for smaller adjustments.

Scheduling for HSUPA

HSUPA scheduling is quite a complex process, but in simplified form it goes as follows (a rough sketch in code follows the list):
i) The UE sends a grant request to the Node B
ii) The Node B sends an Absolute Grant (E-AGCH) and Relative Grants (E-RGCH) to the UE
iii) The UE sets its Serving Grant value based on the E-AGCH and E-RGCH values
iv) Based on the Serving Grant value, the UE selects the E-TFC for that particular transmission.
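Very roughly, the grant handling on the UE side could be sketched like this. It is a heavy simplification of MAC-e behaviour: the grant steps, the power-to-bits conversion and the E-TFC table are all invented for illustration, and the real rules live in the MAC specification:

# Toy model of the HSUPA grant flow on the UE side. The grant "steps" and the
# E-TFC table below are invented; they only show the shape of the procedure.

E_TFC_TABLE = [120, 354, 1026, 2798, 5772, 11484]   # candidate TB sizes (bits)

class HsupaUe:
    def __init__(self):
        self.serving_grant = 0.0     # power ratio the UE may spend on E-DPDCH

    def on_absolute_grant(self, grant_value):
        """E-AGCH: the scheduler sets the serving grant to an explicit value."""
        self.serving_grant = grant_value

    def on_relative_grant(self, command):
        """E-RGCH: small UP/DOWN/HOLD adjustments around the current grant."""
        step = {"UP": 1.25, "DOWN": 0.8, "HOLD": 1.0}[command]
        self.serving_grant *= step

    def select_e_tfc(self, buffered_bits):
        """Pick the largest transport block the grant (and the buffer) allows."""
        max_bits_from_grant = int(self.serving_grant * 1000)  # toy conversion
        allowed = [s for s in E_TFC_TABLE
                   if s <= max_bits_from_grant and s <= buffered_bits]
        return max(allowed) if allowed else 0

ue = HsupaUe()
ue.on_absolute_grant(3.0)        # Node B answers the grant request on E-AGCH
ue.on_relative_grant("UP")       # later fine-tuned on E-RGCH
print(ue.select_e_tfc(buffered_bits=6000))   # -> 2798 with these toy numbers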

For further details, we need to study the detailed mechanism of MAC-e.

Why did we need HSPA+?

We have seen the evolutionary path from R99 to HSDPA and HSUPA, and now we have speed improvements on both the uplink and the downlink. We call the combination of HSDPA and HSUPA simply HSPA. Going forward, HSPA evolved further, and this evolved version is called HSPA+. Now the question arises: what are the factors that improve HSPA in terms of speed? The following are the key items of HSPA+.

i) CPC - DL DRX/UL DTX, HS-SCCH less operation, Enhanced F-DPCH
ii) Layer 2 Improvement
iii) 64 QAM for HSDPA
iv) 16 QAM for HSUPA
v) Enhanced Cell_FACH

You may see right away what 64 QAM and 16 QAM are for: they mainly increase the size of the transmission pipe at the physical layer, so I won't explain them further. Up to HSPA, most of the effort to increase throughput was spent at the physical and MAC layers, but there are bottlenecks at every layer. If we removed all the bottlenecks from every layer we would get the ideal maximum throughput, but that kind of bottleneck removal cannot be done in a single shot. Starting with HSPA+, a big bottleneck at layer 2 (RLC) was addressed. The RLC PDU size in HSDPA was 320 or 640 bits. Suppose you send one IP packet of 1.5 KB; it has to be split into many RLC PDUs and sent over many transmissions. In HSPA+, however, the RLC PDU size is flexible and can be much larger, so even the largest IP packet can be transmitted at once. This is what the "Layer 2 Improvement" refers to.
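The arithmetic behind this Layer 2 improvement is simple enough to show directly. A rough sketch, ignoring RLC header overhead, and assuming (for the last line) a flexible PDU just big enough to hold the whole packet:

import math

def rlc_pdus_needed(ip_packet_bytes, rlc_pdu_bits):
    """How many RLC PDUs a single IP packet is split into (headers ignored)."""
    return math.ceil(ip_packet_bytes * 8 / rlc_pdu_bits)

ip_packet = 1500                               # bytes, a typical large IP packet
print(rlc_pdus_needed(ip_packet, 320))         # R99/HSDPA small PDU  -> 38 PDUs
print(rlc_pdus_needed(ip_packet, 640))         # R99/HSDPA large PDU  -> 19 PDUs
print(rlc_pdus_needed(ip_packet, 1500 * 8))    # HSPA+ flexible PDU   ->  1 PDU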

Let's consider another situation, web browsing for example. While you are reading a page, you are not downloading anything and there is no data traffic between the UE and the network. During this time the RRC state is usually moved to Cell_FACH or Cell_PCH. When you finish reading and move on to the next page, the RRC state has to change back to Cell_DCH. CPC is a mechanism to reduce the cost of these transitions and give the user the experience of a "continuous connection".

Another way to relieve the problems around RRC state changes is to increase the data rate in Cell_FACH. In the earlier technology you could theoretically transmit data in Cell_FACH at around 34 kbps, but if you actually try it you will notice the real throughput is much lower. In HSPA+, the Cell_FACH throughput is greatly increased by Enhanced Cell_FACH.

Finally, LTE!


I will not say much about LTE here, because this whole blog is about LTE. Just a couple of quick comments on the evolutionary path. In LTE, both uplink and downlink use shared channels only; there is no dedicated traffic channel. In terms of modulation, it can use QPSK, 16 QAM and 64 QAM in the downlink, and QPSK and 16 QAM in the uplink. One TTI became 1 ms, which means PHY/MAC scheduling has to be much faster than in the previous technologies. To make the best use of these features, MAC layer scheduling became much more sophisticated (implying more complicated), and it uses more information from the UE to allocate resources dynamically: CQI (as in HSDPA) in the non-MIMO case, plus PMI (Precoding Matrix Indicator) and RI (Rank Indicator) in MIMO conditions.

Latency at almost every layer became much shorter than in the previous technologies (e.g. UE to eNode B latency should be less than 5 ms). There are only two RRC states, Idle and Connected, whereas the previous technologies have multiple states (Idle, Cell_DCH, Cell_FACH, Cell_PCH) and the transitions among them take a long time. If you look at the other sections of this blog dealing with LTE signaling, you will find that the number of message transactions for registration and call setup has also gone down.

If you go a little deeper into the signaling side, you will notice that a single message, "RRC Connection Reconfiguration", handles all kinds of dynamic reconfiguration from the higher layers, whereas WCDMA/HSPA has three different reconfiguration messages: "Radio Bearer Reconfiguration", "Transport Channel Reconfiguration" and "Physical Channel Reconfiguration". (Much less headache for the test case developer :-)

Simply put, in LTE the PHY layer capacity has been increased with higher-order modulation, the latency has become shorter and the signaling has been simplified. Sounds too good to be true? Superficially, yes. But I am not sure how much headache I will have when it comes to MAC layer scheduling for optimal use of the resources and best performance. We will see.

Sunday, February 7, 2010

LTE RF Test and Measurement

For any wireless communication device, we have to go through two large groups of tests: one for the transmit path and the other for the receive path.

For a wireless communication device to work properly, it should meet the following hardware requirements:

i) The device should transmit a signal strong enough to be sure it reaches the other party of the communication.
ii) The device should not transmit a signal so strong that it interferes with the communication between other parties.
iii) The device should transmit a signal of good enough quality that it can be decoded (and error-corrected) by the other party.
iv) The device should transmit on exactly the frequency that has been allocated for the communication.
v) The device should not generate noise outside the frequency range that has been allocated to it.

If any of these conditions deviates too much from the specification, the device cannot communicate with the other party, or it prevents some other device from communicating. In terms of measurement equipment, items i) and ii) belong to "power measurement", item iii) is related to "modulation analysis" and item iv) falls under "frequency error measurement". Item v) is also a kind of power measurement, but the region of the frequency domain being measured is different from items i) and ii). In any case, if you have equipment that can perform the following three measurements for your communication technology, you can cover the most critical part of the transmit path (a simple mapping of the five requirements to these categories is sketched after the list):

a) Power Measurement
b) Modulation Analysis
c) Frequency Error Measurement
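Put as a simple mapping (my own grouping, just restating the text above in code form):

# The five transmitter requirements above, grouped by the measurement
# category that verifies each of them.
TX_REQUIREMENT_TO_MEASUREMENT = {
    "i)   strong enough to reach the other party":   "Power Measurement",
    "ii)  not so strong that it disturbs others":    "Power Measurement",
    "iii) good enough quality to be decoded":        "Modulation Analysis",
    "iv)  exactly on the allocated frequency":       "Frequency Error Measurement",
    "v)   no noise outside the allocated frequency": "Power Measurement (out-of-band)",
}

for requirement, measurement in TX_REQUIREMENT_TO_MEASUREMENT.items():
    print(f"{requirement:48s} -> {measurement}")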

Now let's think about the receive path measurements. What are the most important receiver characteristics for a communication device?

i) The receiver must be able to successfully decode the signal coming from a transmitter even when the signal strength is very low.
ii) The receiver must be able to successfully decode the signal coming from a transmitter even when there is a certain level of noise around the signal.

In terms of measurement logic, items i) and ii) are the same. The equipment sends a known signal pattern, lets the receiver decode it, then compares the original pattern with what the receiver decoded and checks how different they are. The more they differ, the poorer the receiver quality. We call this method "BER (Bit Error Rate) measurement". Item i) measures BER when the input signal to the device is very weak, and item ii) measures BER when there is noise added to the input signal.
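The comparison itself is trivial to express. Here is a minimal sketch of the BER computation; the hard part in practice, generating and recovering the known pattern, is not shown, and the bit values are made up:

def bit_error_rate(sent_bits, received_bits):
    """Fraction of bits that differ between the known pattern and what the
    receiver under test decoded."""
    assert len(sent_bits) == len(received_bits)
    errors = sum(s != r for s, r in zip(sent_bits, received_bits))
    return errors / len(sent_bits)

# Toy example: a known pattern from the test equipment and the bits the
# device under test decoded at very low input level.
sent     = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
received = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(bit_error_rate(sent, received))   # 0.2 -> 20% of the bits were wrong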

Before we move on to LTE measurement, pick any technology you are already familiar with, make a list of the measurements in your test plan, and try to map those items to the measurement principles I described above. Once you are familiar with this mapping, you will understand the LTE measurement items more easily.

LTE RF Measurement Items

Now let's look at LTE RF measurement in a little more detail. The first thing I did was to make a list of the measurement items from 3GPP 36.521-1 and try to map my measurement principles to each of them.

Here are the transmitter measurement items first. You see a lot of "Power Measurement" and some "Modulation Analysis". Why do we have so many different power measurements and so many different modulation analyses, and how do they differ from each other? These are questions you have to answer on your own. The answers themselves are described in 3GPP 36.521-1, but the question is how much of it one can really understand just by reading it.

The first step would be to read the "Test Purpose", "Initial Condition" and "Test Procedure" sections of each test case as often as possible and at least become familiar with each test case.


Here are the receiver measurement items.


Snapshots of LTE Uplink Signals for RF Testing

As I mentioned earlier, it is not easy to understand all the details of LTE RF measurement just by reading the specification. I have read the test purpose, "Initial Condition" and "Test Procedure" sections over and over, but everything still felt vague. As I tried to get further into the details, the first obstacle that blocked me was the large number of complicated tables describing the test conditions. Of course, we saw this kind of table in other technology specifications like CDMA and WCDMA, but the tables for LTE measurement look bigger and more complicated. So I decided to look at some of the signal patterns described in the specification on a spectrum analyzer, to get an intuitive idea of the overall RF characteristics of each condition.

Even though a new technology appears every couple of years and LTE is new to many people, RF test and measurement have a lot in common across wireless communication technologies. If you have experience with any wireless technology, e.g. CDMA, GSM, WCDMA, Bluetooth or WLAN, you will find the same logic in LTE.

Challenges for LTE RF Testing

One of the biggest challenges in LTE measurement for a UE development or test engineer is that there are so many sub-tests with so many different parameter settings. Before I get into details, I want to briefly skim through RF measurement starting from C2K.

I don't have much experience with C2K measurement, but even with only a little experience I could tell there are far fewer measurement items in this area compared to WCDMA/HSDPA, and even compared to GSM/GPRS. As far as I remember, the following is almost everything I did for C2K.

i) Total Channel Power
ii) CDP (Code Domain Power)
iii) Rho
iv) Spectrum Emission
v) ACLR
vi) OBW (Occupied Bandwidth)

Actually, the items listed above are more than what I personally went through in C2K. For conformance, I think we may have to cover all of them, but since C2K is a very mature technology now, we wouldn't go through all of them during the RF development stage. In one extreme case I heard of, the advice was: "just measure total power; if there is no problem with it, there is usually no problem with the other parts".

Now let's look at WCDMA. For WCDMA R99 (non-HSPA), the list briefly is:

i) Max Power
ii) Min Power
iii) On/Off Power
iv) RACH Power
v) EVM
vi) Spectrum Emission
vii) ACLR
viii) OBW (Occupied Bandwidth)

Just in terms of the list, it doesn't look much different from C2K. But in practice the engineer will meet various characteristics that look quite different from C2K. The first thing that comes to mind is that the channel bandwidth roughly triples compared to C2K, which introduces a lot of complication in RF design. Another issue is that the RACH process in WCDMA is more complicated than the probing process in C2K and adds a couple of important test steps.

Now let's look at HSDPA. You might think HSDPA is not much different from R99 in terms of uplink measurement, because HSDPA only concerns the downlink data rate. That is true in terms of the higher layer protocol, but at the physical/RF layer an important factor was added to the uplink in HSDPA: HS-DPCCH. HS-DPCCH is the channel on which the UE reports CQI and ACK/NACK to the base station. The issue is that even with this additional channel the UE has to keep the total uplink power as before, so the UE has to recalculate and redistribute the power of each physical channel. If you look at the RF conformance test case list, you won't find much difference in terms of test case items, but you will find that quite a few sub-items were added to the existing test cases due to the introduction of HS-DPCCH. (If you want to go into further detail, open up 3GPP 34.121 and look for the test cases with "HS-DPCCH" in the title.)

Going one step further into HSUPA, you again find no big difference in terms of measurement items. But as in the HSDPA case, new physical channels were introduced: E-DPDCH and E-DPCCH. Even with these additional channels, the UE still has to maintain the total channel power as in R99, so, as you may guess, it has to recalculate and redistribute the individual physical channel powers. As a result, a couple of additional sub-items are added to the RF testing.

Finally, let's think about LTE. What is the biggest difference between LTE and C2K/WCDMA/HSPA in terms of the PHY/RF layer? It is OFDM. What additional measurement items does OFDM introduce to RF testing? Since OFDM is made up of many subcarriers with very narrow bandwidth, we have to measure most of the characteristics listed above for each OFDM subcarrier, and if we did every item for every subcarrier it could take a full day for a single item. Another big difference is that the LTE specification allows many different system bandwidths, whereas in C2K/WCDMA the system bandwidth is always the same. This means you have to measure the whole set of test items for multiple system bandwidths, which multiplies the measurement time and the parameter settings in the measurement equipment. Based on the specification, an LTE system bandwidth can be any of 1.4, 3, 5, 10, 15 or 20 MHz, whereas C2K always uses a single 1.25 MHz carrier (1.2288 Mcps) and WCDMA a single 5 MHz carrier (3.84 Mcps). Of course, a specific operator will use only one of these bandwidths in its network, but a mobile device manufacturer has to design a UE that supports all of them. On top of this, there is another factor that makes LTE testing even more complex, especially for mobile phone design and test: the bandwidth actually used at any specific moment can change dynamically.
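To get a feel for the size of the test matrix this creates, here is a back-of-the-envelope calculation. The bandwidth list is from the specification, but the RB-allocation and modulation lists are placeholders; the actual sub-test configurations are listed per test case in 36.521-1:

from itertools import product

# LTE channel bandwidths from the specification (MHz).
bandwidths = [1.4, 3, 5, 10, 15, 20]

# Placeholder sub-test dimensions; the real lists differ per test case
# and are given in the tables of 3GPP 36.521-1.
rb_allocations = ["full RB", "partial RB (low)", "partial RB (high)", "1 RB"]
modulations = ["QPSK", "16QAM"]

configs = list(product(bandwidths, rb_allocations, modulations))
print(len(configs), "sub-test configurations for a single measurement item")
# -> 48 combinations, before bands, uplink/downlink and temperature are added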

One intuitive example is shown in the following measurement screen. This is the RF signal captured during an LTE call connection and data transfer. When you initiate a call, the mobile device goes through the protocol sequence for call setup, and then the data traffic starts. If you look at the bottom of the measurement screen (the spectrogram), you will notice that the frequency allocation (the bandwidth being used) changes during this period. In this capture the frequency allocation for the data traffic does not change, but in a live network this bandwidth would change dynamically.


What do these multiple system bandwidths and dynamic bandwidth changes imply for the mobile phone designer and the test engineer? For designers, the biggest issue is how to optimize the various design parameters so that they fit all of these bandwidths. For test engineers, the biggest issue is the huge number of test cases they have to go through.
The final outcome of all these considerations on multiple bandwidths and dynamic bandwidth change can be seen in a table like the one shown below. This is the table for only one test case. Look at all the different system bandwidths you have to cover; the different RB allocations correspond to the dynamic frequency allocation I mentioned above. In LTE, every test case comes with this kind of table, and this is a huge headache for designers and test engineers.