
Making NS-2 simulate an 802.11b link
Last update: 4-29-05

jpr -at- rice.edu

For a research project I worked on in 2004 (see the resulting paper), I used two Netgear MA311 cards to create a simple ad hoc connection. These are 802.11b PCI cards, stuck in the back of brand-new Dell workstations. I did some simple throughput tests and found that my results differed significantly from what NS told me.

So I set about figuring out why the results were different and what needed to be done to bring them into agreement. I'll quickly explain my findings, as it wasn't a complex process. Skip to the Results section below if you just want to see the numbers.

First, I'm assuming version 2.27 or newer here. If you're using older versions, some things will be different and I'll try to point them out if I can.

Data Rate
NS, by default, has the data rate for the MAC set at 2 Mbps. But cards are faster now. My cards are 802.11b, which means they're 11 Mbps, and so we need to change this. Add the following to the beginning of your simulation script:
Mac/802_11 set dataRate_ 11Mb

The card can send at 1, 2, 5.5, or 11 Mbps. Most cards support some kind of ARF (Auto-Rate Fallback) for automatic rate selection between these choices. ARF basically seems to be a slow-timescale feedback mechanism. If there are a lot of packet errors, ARF will step down the rate, and conversely, if there are no errors then the rate will be increased. I'm not explaining this in detail because NS doesn't support any multi-rate functionality by default. That means mobile nodes will always send their packets at dataRate_. So if you really want to be realistic, you need to support this somehow. I didn't.
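One related knob: alongside dataRate_, ns-2.27+ also appears to expose basicRate_ (the rate used for control frames and broadcasts) in ns-default.tcl, so the rate block at the top of my script looks roughly like the sketch below. Treat basicRate_ as an assumption and check your own tree if it complains.

# Rate settings for an 802.11b-like MAC (sketch, assuming ns-2.27+ bound variables).
Mac/802_11 set dataRate_  11Mb   ;# unicast data frames at 11 Mbps
Mac/802_11 set basicRate_ 1Mb    ;# RTS/CTS/ACK and broadcast frames at the basic rate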

RTS Threshold
Almost all commercial 802.11b cards have the RTS/CTS exchange turned off by default. This is not a bad decision, since I think most people's home WLANs are simple enough that RTS/CTS really is just unnecessary overhead. NS has this feature turned on by default, so we probably want to tell NS not to use it. Add this line to the beginning of your script:
Mac/802_11 set RTSThreshold_ 3000

This means that an RTS will only be sent for packets that are bigger than 3000 bytes, which should be never. Note: if you want RTS/CTS on, then set this value to zero.
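And if you do want the exchange (as in the "RTS on" runs in the results below), the same variable works in the other direction:

Mac/802_11 set RTSThreshold_ 0    ;# RTS/CTS precedes every unicast data frame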

Preamble
I think this is probably the least obvious modification so I'll try to be a little more detailed. Every packet is sent with a preamble, which is just a known pattern of bits at the beginning of the packet so that the receiver can sync up and be ready for the real data. This preamble must be sent at the basic rate (1 Mbps), according to the official standard. But there are two different kinds of preambles, short and long - referring to the length of the sync field. The long preamble has a field size of 128 bits, while the short preamble is only 56 bits. I would guess this short preamble option came about as hardware progressed and transceivers got better at locking on to a signal. NS is set by default to use the long preamble. My cards use the short preamble by default, and unfortunately, I don't know a good way to determine if your card is using long or short preambles. Email me if you have any ideas.

To support short preambles in NS, add the following line at the beginning of your script:
Mac/802_11 set PreambleLength_ 72
Note: there are 16 other bits in the preamble that aren't affected by the short/long distinction. To go back to long, change this value to 144.
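To get a feel for why this matters, here is a rough sketch of the PLCP-level settings and timing. PLCPHeaderLength_ and PLCPDataRate_ are the names I recall from ns-default.tcl, so treat them as assumptions and check your own copy:

Mac/802_11 set PreambleLength_   72      ;# short preamble: 56-bit sync + 16-bit SFD
Mac/802_11 set PLCPHeaderLength_ 48      ;# PLCP header, same for short and long
Mac/802_11 set PLCPDataRate_     1.0e6   ;# preamble always goes out at 1 Mbps
# At 1 Mbps, 144 bits take 144 us and 72 bits take 72 us, so a short preamble
# saves roughly 72 us per frame -- about the airtime of a 100-byte payload at
# 11 Mbps, which is why small packets are so sensitive to this setting.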

The Channel
Above is everything you need to simulate an 802.11b card accurately (at least more accurately than NS does by default), but there's still a big assumption left in NS: the wireless channel model. Currently, the received power of a packet depends only on the distance between sender and receiver. But in real life there are a lot of other factors influencing received power, and if you want a realistic simulation, you need to model them. I would suggest reading up on more realistic channel fading models.
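If you want to experiment, NS does ship a log-normal shadowing model that adds random variation on top of path loss. A minimal sketch of switching to it follows; the parameter names are from the ns manual's Propagation/Shadowing class, and the values are placeholders, not measurements.

# Log-normal shadowing instead of the default distance-only model (sketch).
Propagation/Shadowing set pathlossExp_ 3.0   ;# path-loss exponent (placeholder)
Propagation/Shadowing set std_db_      4.0   ;# shadowing deviation in dB (placeholder)
Propagation/Shadowing set dist0_       1.0   ;# reference distance in meters
Propagation/Shadowing set seed_        0     ;# RNG seed
$ns node-config -propType Propagation/Shadowing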

Packet Size
There is a slightly annoying default setting in many versions of ns that makes your packet size not what you think it is. The default setting is this:
Agent/UDP set packetSize_ 1000
This means that if you try to set your UDP packet size to anything greater than this, it will actually split each packet into two smaller ones. You really want this line:
Agent/UDP set packetSize_ 1500
If you are not sure if this is a problem, I would recommend checking the packet sizes in your trace file. If you see the wrong packet sizes, this is most likely the problem.
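To make this concrete, here is a minimal sketch of a saturating UDP/CBR flow with the sizes lined up. It assumes $ns and the mobile nodes $node_(0) and $node_(1) are already created earlier in the script.

Agent/UDP set packetSize_ 1500              ;# raise the UDP fragmentation limit
set udp  [new Agent/UDP]
set sink [new Agent/Null]
$ns attach-agent $node_(0) $udp
$ns attach-agent $node_(1) $sink
$ns connect $udp $sink
set cbr [new Application/Traffic/CBR]
$cbr set packetSize_ 1440                   ;# actual payload size on the air
$cbr set rate_ 11Mb                         ;# send faster than the link can carry
$cbr attach-agent $udp
$ns at 1.0 "$cbr start"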

Results
The table below shows achieved UDP throughput in Mbps.

Packet Size (Bytes) | Simulation, RTS off | Experimental, RTS off | Simulation, RTS on | Experimental, RTS on
128                 | 1.28                | 1.2                   | 0.75               | 0.76
256                 | 2.03                | 2.08                  | 1.4                | 1.42
512                 | 3.67                | 3.58                  | 2.48               | 2.5
1024                | 5.49                | 5.38                  | 4.03               | 4.05
1440                | 6.41                | 6.35                  | 4.93               | 4.96

As you can see, the simulation results are very close to the real results I obtained. In fact, I believe that they are close enough so that the difference can be entirely accounted for by the randomness of the CSMA MAC.

Please send me any comments/questions/corrections. Thanks.
