OFDM is an important feature of LTE, but not the only one, so this post goes into some more detail about what makes LTE the standard it needs to be.
LTE Bandwidth
As already mentioned briefly, LTE is designed to operate in frequency bands between 700 and 2600 MHz. Parts of the spectrum within that range may be allocated to LTE, and regulations specify how those blocks of bandwidth may be used. The spectrum is used in blocks of 1.4, 3, 5, 10, 15 or 20 MHz bandwidth. The advantage of these block sizes is that a small block can be used when the surrounding frequencies are already occupied. When a large block of bandwidth can be allocated, spectral efficiency increases: for a larger bandwidth, the relative overhead of guard bands (which separate channels) and control channels is smaller. Fading is the attenuation of a signal as it travels through the ether, for instance because of obstruction of the signal or interference from reflections. If part of the spectrum fades, this has more impact on smaller bandwidths; a large block of bandwidth is less likely to experience fading of the complete channel. It is therefore advisable to run LTE on as large a bandwidth as possible.
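The guard-band overhead argument can be made concrete with a rough sketch. The resource-block counts per channel bandwidth below are the standard LTE values (each resource block occupies 180 kHz); the calculation ignores control-channel overhead and only shows the guard-band part of the picture.

```python
# Rough illustration of why larger LTE channel bandwidths are more
# spectrally efficient: the fraction of the channel lost to guard bands
# shrinks as the channel gets wider.
RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}
RB_BANDWIDTH_MHZ = 0.18  # 12 subcarriers x 15 kHz per resource block

for channel_mhz, n_rb in RESOURCE_BLOCKS.items():
    occupied = n_rb * RB_BANDWIDTH_MHZ       # bandwidth carrying data
    overhead = 1 - occupied / channel_mhz    # fraction lost to guard bands
    print(f"{channel_mhz:>4} MHz channel: {occupied:5.2f} MHz occupied, "
          f"{overhead:4.0%} guard-band overhead")
```

A 1.4 MHz channel loses roughly a quarter of its bandwidth to guard bands, while a 20 MHz channel loses only about a tenth.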
Error correction codes
Since it is to be expected that not all bits of information sent by the transmitter will arrive at the receiver as intended, error correcting codes are implemented. Error correcting codes have the ability to reconstruct the original data when parts of it are lost or damaged (too noisy, never received, jammed). How that works exactly for the latest ‘Turbo coding’ techniques is a bit too much technical detail for now, but a short example might help to see that it is possible. Suppose the sender has 1 bit (0 or 1) to send; a very simple and redundant code would be to just send the bit 3 times. If the communication channel introduces some errors, the receiver might see one of 8 versions of the message.
Received data | Decoded as
--- | ---
000 | 0 (error free)
001 | 0
010 | 0
100 | 0
111 | 1 (error free)
110 | 1
101 | 1
011 | 1
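The table above can be reproduced with a few lines of code: a minimal sketch of the 3x repetition code, where the receiver decodes each group of three received bits by majority vote.

```python
# Minimal 3x repetition code: every bit is sent three times, and the
# receiver 'averages' each group of three by majority vote.

def encode(bits):
    """Repeat every bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority-vote each group of three received bits."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

sent = encode([0, 1])           # -> [0, 0, 0, 1, 1, 1]
corrupted = [0, 1, 0, 1, 0, 1]  # one bit flipped in each group of three
print(decode(corrupted))        # -> [0, 1]: both single errors corrected
```

One flipped bit per group is always corrected; two flipped bits in the same group fool the decoder, which is exactly why bursts are the dangerous case.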
So instead of taking 1 bit and deciding what it is, the receiver takes 3 bits and ‘averages’ them into 1. This is just an example of a very redundant error correction code; it sends 3 times the data needed. More sophisticated codes use more ‘tricks’ to be able to correct more difficult errors. As long as there are not too many errors in a row (called a burst), they can mostly be corrected. This is where frequency jumps in again. Error correcting codes in LTE can distribute a stream of data over multiple subcarriers, a technique called frequency interleaving. If a single subcarrier fades, the correction mechanism can still repair the bits that were lost, since that subcarrier only carried scattered ‘random’ bits from the multiple data streams. The same technique is used in the time domain: the data stream is ‘scrambled’ so that an error burst during transmission is not a burst anymore once decoding puts the bits back in the right order.
Picture explaining how interleaving helps with burst errors. Source: Link
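The ‘scrambling’ idea can be sketched with a simple block interleaver: bits are written into a matrix row by row and read out column by column, so a burst of consecutive channel errors turns into isolated single errors after de-interleaving. The 4x4 size below is an arbitrary choice for illustration, not the interleaver LTE actually uses.

```python
# Block-interleaver sketch: a burst on the channel is spread out into
# isolated errors after de-interleaving, which a code like the 3x
# repetition example can then correct.

COLS = 4

def interleave(bits):
    """Write row by row, read column by column."""
    rows = [bits[i:i + COLS] for i in range(0, len(bits), COLS)]
    return [rows[r][c] for c in range(COLS) for r in range(len(rows))]

def deinterleave(bits):
    """Inverse: write column by column, read row by row."""
    n_rows = len(bits) // COLS
    cols = [bits[i:i + n_rows] for i in range(0, len(bits), n_rows)]
    return [cols[c][r] for r in range(n_rows) for c in range(COLS)]

data = list(range(16))      # stand-in for 16 data bits
tx = interleave(data)
tx[4:7] = ['X', 'X', 'X']   # a 3-bit error burst on the channel
rx = deinterleave(tx)
# After de-interleaving, the burst is spread into isolated errors:
print([i for i, b in enumerate(rx) if b == 'X'])  # -> [1, 5, 9]
```

Each corrupted position now sits in a different part of the de-interleaved stream, so no single codeword has to absorb the whole burst.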
Architecture
Besides higher data rates, LTE has the goals of low latency and of realizing an All IP Network (AIPN). These goals result in a simpler network with fewer elements. In the 2G and 3G networks, most activity is done close to the core of the network. The access points (NodeB) together with the Radio Network Controllers (RNC) form the Radio Access Network. The NodeBs are implemented as relatively simple elements in the network. The real intelligence is at the high capacity RNCs; they support resource and traffic management (for instance switching a call from one NodeB to another) and various other essential radio protocols. In LTE, the NodeB has evolved into the (not very originally named) evolved NodeB (eNodeB). This eNodeB is the only element in the evolved UMTS Terrestrial Radio Access Network (eUTRAN) for LTE, and connects directly to the core network (Evolved Packet Core – EPC). With more intelligence at the eNodeBs, they handle parts of traffic management themselves. Handovers (switching calls to other access points) can be done without bothering the core network, since eNodeBs communicate with each other and handle those handovers themselves. This is a good step forward toward lower latency in the access network.
LTE Architecture. Source: Link
The LTE architecture also satisfies the goal of realizing an All IP Network. In UMTS, extra protocols, like the GPRS Tunneling Protocol (GTP), are needed to handle voice in combination with data packets on the same network. Historically, the telephone system uses a circuit switched network: for a call, a dedicated connection path is set up, and that circuit is ‘ours’ to use for the duration of the phone call. The internet is a packet switched network, meaning that data packets are individually routed over the network; all packets combined assemble the website or whatever you requested from the internet. The main difference is resource usage: the packet switched network is not being used while I have downloaded the website and am reading the text from my screen, whereas the circuit switched network is loaded with our call whether we are talking or not. In GSM and GPRS, these services were combined; in LTE, the switch is made to a completely packet switched network. To integrate with the rest of the internet, the Internet Protocol (IP) is used.