Tuesday, November 16, 2010

Week 11: ICEM

Integrated Concurrency and Energy Management (ICEM)

ICEM is a device driver architecture that enables simple, energy-efficient wireless sensornet applications. The authors challenge the idea that sensornets cannot have a standardized OS with a simple API for managing power and concurrency. The main idea is to deviate from the standard approach of hand-tuning the entire nesC application for power optimizations. Instead, ICEM offers a set of driver classes that let you write much shorter code while still managing power efficiently: virtualized, dedicated, and shared drivers. These drivers leverage the concurrency knowledge that the application provides. TinyOS's event-driven design uses split-phase operations, which provide exactly this information: each operation signals its completion through a callback, which is the main difference from the asynchronous I/O used in threaded operating systems. With such knowledge, ICEM drivers can turn the underlying devices on and off without wasting energy. A complication arises, however, when an operation needs to perform a series of atomic steps or requires access to several peripherals at the same time. To handle these situations, ICEM creates a hardware lock abstraction for a non-blocking (non-threaded) environment using "power locks." These locks are implemented by the shared drivers, which buffer "lock" requests, while virtualized drivers buffer "functional" requests. Thus, both driver classes provide concurrency and energy management to their clients. Since dedicated drivers lack such support, ICEM also provides a set of interfaces for building them: StdControl, SplitControl, and AsyncStdControl (where Std is used for immediate access, Split for deferred access, and AsyncStd for immediate access from hardware interrupts). As for the shared drivers' power locks, ICEM provides a whole component library to implement them: arbiters, power managers, and configurators.
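The interplay between a shared driver's arbiter, its power manager, and the lock queue can be sketched in a few lines. This is a toy model under assumed names (PowerLockArbiter, PowerManager, and the method names are all illustrative, not the actual TinyOS/ICEM component API): a first-come first-served arbiter grants the lock, and when the queue empties, the power manager, acting as the default owner, switches the device off.

```python
from collections import deque

class PowerManager:
    """Records device power transitions (illustrative stand-in)."""
    def __init__(self):
        self.on = False
        self.transitions = []

    def device_on(self):
        if not self.on:
            self.on = True
            self.transitions.append("on")

    def device_off(self):
        self.on = False
        self.transitions.append("off")

class PowerLockArbiter:
    """Toy sketch of an ICEM-style shared-driver power lock.

    Pending "lock" requests are buffered FCFS; when no client holds the
    lock, control falls back to the default owner, which the power
    manager uses to turn the device off.
    """
    def __init__(self, power_manager):
        self.queue = deque()      # buffered lock requests (FCFS)
        self.holder = None        # current lock holder
        self.power_manager = power_manager

    def request(self, client):
        if self.holder is None:
            self._grant(client)
        else:
            self.queue.append(client)

    def release(self, client):
        assert client == self.holder
        if self.queue:
            self._grant(self.queue.popleft())
        else:
            self.holder = None
            self.power_manager.device_off()  # DefaultOwner takes over

    def _grant(self, client):
        self.holder = client
        self.power_manager.device_on()
```

Note how the device stays powered across back-to-back grants: only when the queue drains does the default owner get the lock and power the device down.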

The strength of the ICEM paper is definitely the depth of coverage of all the design components. Since the complexity of energy management is hidden from the application by ICEM, it is important to understand how all of the drivers and components behind it work. A thorough explanation of why certain drivers were created and how they build their abstractions is therefore necessary. With such knowledge in mind, a nesC developer can understand why a hand-tuned implementation is superior to an ICEM implementation in some cases and equivalent in others. For example, ICEM's flexibility comes at the price of having to configure some peripherals before arbiters can grant access to them, to make sure the drivers can interact with them. Since not all peripherals require configuration, a future improvement to ICEM could be to make configuration optional by providing a flag for it in its library components.

The other strong aspect of this paper is the scope of the evaluation performed to prove ICEM's efficiency at energy management. It is imperative for a sensornet developer to understand the ROM and RAM overhead the library components introduce, and this is carefully quantified by measuring the current draw at different power states and peripheral operations. With this experiment, the authors identified the most expensive operation, which was, unsurprisingly, granting a request. Energy consumption was then obtained by simply integrating the current draw over time. Using this measurement model, several implementations were built to compare ICEM against blocking threaded versions with different sampling rates and against a hand-tuned implementation. As predicted, ICEM scored within 1.6% of the best hand-tuned implementation of the example application. What makes this result powerful is that it took 68 lines of nesC instead of 400, making ICEM a strong candidate for any application where a 1.6% deviation from optimal is acceptable. The only drawback of ICEM, left as an open question in the paper, is deadlock handling. Failure to address this issue might cause a system design to fail when peripherals issue many parallel requests to acquire several resources or to perform a series of atomic operations. The question of how to handle deadlock situations will be asked in class.
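The "integral of current draw" step is simple enough to sketch. Assuming timestamped current samples and a supply voltage (the 3.0 V default and the sample values below are illustrative, not numbers from the paper), energy is V times the time integral of current, here approximated with the trapezoid rule:

```python
def energy_mj(samples, voltage=3.0):
    """Estimate energy (mJ) from timestamped current samples.

    samples: list of (time_s, current_mA) pairs, sorted by time.
    Energy = V * integral(I dt); the integral is approximated with
    the trapezoid rule, so mA*s gives milli-coulombs, times volts
    gives millijoules.
    """
    total_mc = 0.0  # accumulated charge in milli-coulombs
    for (t0, i0), (t1, i1) in zip(samples, samples[1:]):
        total_mc += 0.5 * (i0 + i1) * (t1 - t0)
    return voltage * total_mc
```

For example, a steady 10 mA draw for 2 s at 3 V yields 60 mJ.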

Class discussions:

One of the first discussions concerned ICEM's ability to deal with deadlock situations. Students suggested that static analysis would be the only feasible way to prevent deadlocks. One of the main arguments was that nesC itself is built around static analysis, so a deadlock-prevention mechanism working in a similar fashion would not be ugly in terms of overall system design.

The next discussion questioned the completeness of the power-lock queuing logic. A student asked whether power locks can be given priorities. Since there is no direct control for prioritizing locks, it was concluded that it would be a matter of creating a workaround via the ordering scheme inside the queue, such as a round-robin or first-come first-served approach.

A basic Lock interface question was asked about the difference between StdControl and AsyncStdControl: what distinguishes them, and which one is invoked in which situations. A simple explanation was given by drawing a parallel with the need for hardware interrupts. For example, when bits are received on the wire, a hardware interrupt is needed to handle them immediately, so that the next set of bits does not overwrite them and cause a loss. Accordingly, AsyncStdControl is used for hardware interrupt contexts, while StdControl is used for devices that can be controlled immediately without startup latencies. After that, another interface question was asked about the role of the DefaultOwner, which is simply the default client that receives the power lock when no one else needs it. This way, ICEM controls energy management by turning devices off while the DefaultOwner holds the lock and turning them back on when a client acquires it.

The major drawback the whole class agreed upon is the need for configuration before any power lock can be granted to a client. One of the students argued that such an approach proves inflexible in real-life implementations. However, the presenter countered that the extensive evaluation showed ICEM working well on the most popular platforms, without problems configuring the motes (for example, to switch their buses into certain modes) before obtaining locks.

Overall, the class was pleased with the quality of the paper and with the graph comparing ICEM's energy-management results to the perfect hand-tuned implementation. The general impression was that the semester-long discussion about optimizing idle listening was a complete waste of time, since ICEM was able to score within 1.6% of the best implementation.

Friday, November 5, 2010

IP is Dead, Long Live IP for Wireless Sensor networks

The wireless networks community has seen many research efforts to devise protocols specific to application domains, with energy efficiency as the primary concern. There has been a growing consensus in the community toward building a generalized WSN architecture, so as to combine these disjoint protocols under a common protocol layer, address WSN problems in a broader networking sense, and let designers and implementers work toward the common goal of improving the system as a whole.

Since the nascent days of WSNs, IPv6 was seen as a candidate to provide such an architecture, but the majority of researchers dismissed its use in WSNs over several concerns: the lack of resources such as memory, energy, processing power, and bandwidth. The large IP headers and payloads and the end-to-end protocols were deemed unnecessary for WSNs.

Since then, IP and WSNs have evolved through a lot of research and increased practical deployment experience. It is worthwhile to look again at the IP protocol and assess whether it could provide sensor networks the much-coveted backbone architecture.

It is impractical to use the IPv6 architecture in its current form, for the obvious reasons of its relatively huge header as well as its assumptions about link connectivity. This paper proposes an IPv6-based architecture for WSNs which conforms to the layering abstractions, services, and interfaces of the protocol stack, yet allows the integration of existing duty-cycled MAC-layer protocol concepts that have already proven quite effective and efficient.

IPv6 architecture:

The architecture places WSNs at the edge of IP networks, with border routers communicating with the external IP network.

The IPv6 network for WSNs emulates the link-layer, network-layer, and transport-layer abstractions of the IPv6 architecture in wired networks. A combination of sampled listening and localized schedule synchronization is used to emulate the IP link.

Sampled listening uses short chirps as a wakeup signal and appends the destination address as well as schedule-synchronization information to them, both to avoid the overhearing problem and to synchronize a rendezvous time for sender and receiver with minimal chirp transmission. The scheduled time lets the receiver sleep and then wake up exactly at the point of data transfer. We'll discuss the problems with this approach in the next section.
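The schedule-synchronization part can be illustrated with a toy timing model (the function name and timing constants are assumptions for illustration, not values from the paper): the sender chirps until the receiver's next channel-sample instant, and the agreed rendezvous falls a short turnaround after that instant, so the receiver can sleep for the entire gap.

```python
def next_rendezvous(now, receiver_phase, sample_period, turnaround=0.005):
    """Toy model of sampled listening with schedule synchronization.

    now: current time (s); receiver_phase: offset of the receiver's
    periodic channel sample within its period; sample_period: how often
    the receiver samples the channel; turnaround: small fixed delay
    between hearing a chirp and the data transfer (illustrative).
    Returns the agreed data-transfer time.
    """
    # Time until the receiver next samples the channel:
    wait = (receiver_phase - now) % sample_period
    wakeup = now + wait            # receiver hears a chirp here
    return wakeup + turnaround     # agreed rendezvous for the data
```

With a 1 s sample period and a receiver that samples at phase 0.3 s, a sender starting at t = 0.5 s would chirp until t = 1.3 s and transfer data just after.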

Given the size of IP packets, there exists a layer 2.5 between the link layer and the network layer, which compresses and fragments data packets as necessary. The compression scheme works by eliding obvious information, such as the Version and Traffic Class fields, as well as removing redundant information like link-level identifiers, since they can be derived from the link-layer header. The 6LoWPAN IPHC compression scheme uses 1 bit of compression state and is able to compress a 48-byte UDP/IPv6 header down to 6 bytes.
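The eliding idea is easy to demonstrate with a toy dictionary-based compressor (the real IPHC encoding is bit-packed; the field names and values here are illustrative assumptions): fields that carry a well-known constant, or addresses derivable from the link-layer header, are simply dropped and restored by the decompressor.

```python
# Fields whose common values can be elided entirely (toy subset):
STATIC_FIELDS = {"version": 6, "traffic_class": 0, "flow_label": 0}

def compress(header, link_src, link_dst):
    """Toy 6LoWPAN-style header compression.

    Drops fields matching well-known constants and addresses that are
    derivable from the link-layer source/destination identifiers.
    """
    out = {}
    for field, value in header.items():
        if STATIC_FIELDS.get(field) == value:
            continue              # elide well-known constant value
        if field == "src" and value == link_src:
            continue              # derivable from link-layer header
        if field == "dst" and value == link_dst:
            continue
        out[field] = value
    return out
```

Everything predictable vanishes; only genuinely variable fields (e.g. a nonstandard hop limit) survive on the wire.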

The network layer's duties include neighbor discovery, autoconfiguration, forwarding, and routing. The border routers propagate Router Advertisements (RAs) to announce optimal routes as well as network parameters. The Trickle algorithm is applied to keep the overhead of RA transmission minimal, and increasing sequence numbers ensure that only the most recent RA is applied by the nodes. The autoconfiguration feature of IPv6 allows unattended nodes to self-configure an identifier by combining the link-layer identifier with the link-local prefix; alternatively, DHCP can be employed to let a centralized server allocate identifiers to the nodes.
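The sequence-number rule can be sketched as follows (class and field names are illustrative, and wraparound is ignored for simplicity): a node adopts an RA only when its sequence number is strictly newer, which keeps Trickle-suppressed rebroadcasts of old RAs from reverting state, and an adoption is exactly the "inconsistency" event that would reset a Trickle timer.

```python
class RAState:
    """Toy model of Router Advertisement versioning at one node."""

    def __init__(self):
        self.seq = -1       # highest RA sequence number seen so far
        self.params = None  # network parameters from that RA

    def on_ra(self, seq, params):
        """Process a received RA; return True if it was adopted."""
        if seq > self.seq:
            self.seq, self.params = seq, params
            return True     # newer info: inconsistency, reset Trickle
        return False        # old or duplicate RA: suppress
```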

Forwarding and routing are the other two features of the network layer. Each intermediate node acts as a default router: it maintains next hops for a subset of its neighbors, plus a default route to the border router for the rest of the packets. The border routers learn the topology from these default routes and reverse the links to compute optimal paths to all the nodes; the chosen path is injected into the datagram's header to make sure it reaches the destination. Loops and inefficient links are identified by stamping the expected hop count and ETX for the next hop onto the datagram: the receiving hop suspects a loop if its hop count is not lower than expected, and an ETX higher than expected indicates that an increased path cost has not yet been propagated.
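These per-hop checks can be written out as a small validation function. This is a sketch of the logic as described above, under assumed names and labels (the real protocol acts on these conditions rather than returning strings):

```python
def validate_hop(stamped_hops, stamped_etx, my_hops, my_etx):
    """Toy datapath check at a forwarding hop.

    stamped_hops / stamped_etx: the sender's expectations carried in
    the datagram header. my_hops / my_etx: the receiving hop's own
    routing state. Labels returned are illustrative.
    """
    if my_hops >= stamped_hops:
        return "loop-suspected"   # packet is not getting closer
    if my_etx > stamped_etx:
        return "cost-stale"       # increased path cost not yet advertised
    return "ok"
```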

We had thorough discussions of many issues related to the paper:

Experimental setup and testbed: The experimental deployment used to evaluate the protocol doesn't really prove a great deal, as it considers a very mild data rate of 1 packet per minute, which doesn't test network congestion at all. The slow data rate perfectly explains the low duty cycle of 0.65%! The general consensus was that the tests seemed a bit hollow, and that the protocol in its current state would struggle in harsher environments with unpredictable data rates.

Congestion control: The congestion-control algorithm used in the protocol looks very inefficient, as it waits for congestion to happen and only then starts applying backpressure to decrease the data rate. This can lead to extensive link-level retransmissions, which may themselves be lost since they only add to the congestion, and eventually the links will drop packets after exceeding the retransmission count.

Rendezvous time setup and synchronization: With low or no contention it seems easy to set up a rendezvous time and transfer data, but think about a huge network with many nodes contending to send data to one receiver! With so many nodes going through sleep/wakeup schedules and transmitting chirps, it is inevitable that rendezvous times will collide, and it would be very hard to reach consensus without extended wakeups that hurt overall energy efficiency.

Frame-pending bit and starvation: The advantage of this approach is that the receiver doesn't have to wait for a timeout to decide that no data is pending, since the sender explicitly sets the frame-pending bit. The issue of starvation was discussed in class, and it seems impossible to avoid in the current approach if some node decides to send forever by always setting the frame-pending bit. This would starve the other motes as well as worsen energy efficiency.

An issue was pointed out in a review about the approach of using the IPv6 data-packet structure for ACKs. Small ACKs can comfortably be sent between the chirps and data streams, but how the extended ACK structure would affect this is debatable.

Another interesting thought was the possibility of a WSN acting as a bridge between IP networks. Would it then route IP packets for other IP networks? That would severely affect the energy efficiency of the WSN.

The introduction of security into WSNs, courtesy of IPv6, is definitely appreciated. Given the sensitivity of many WSN deployments, security surely is an implicit requirement.

The discussion ended with probably the most basic yet debatable question:

Do we really need IPv6, with all its complexity and extra baggage (long headers), in WSNs at all? If it is only about connecting to the outside network, why not let the border routers speak IP and let the WSN speak its own language, in its own efficient ways?

The answer is the same one given when different hardware and software organizations agreed on common protocols and platforms for the Internet, and it remains the same now! The long-term future of WSNs calls for a generalized architecture and for sensor nodes that speak the same language (IP) as the rest of the IP world. One interesting thought is that the IPv6 architecture would give each sensor node an IP address, making it possible to access every "dust" around us!

Wednesday, October 27, 2010

RCRT: Rate-Controlled Reliable Transport Protocol for Wireless Sensor Networks

The paper presents a protocol built for applications that require the transport of high-rate data, a class that has received little attention so far. Supporting these applications is hard because radio bandwidth is limited and the applications are loss-intolerant. The application data might be compressed to reduce bandwidth requirements, but it still requires reliable delivery. Protocols that handle such voluminous data are normally seen to cause poor network performance simply because of the volume of data they handle; an experiment performed in Los Angeles is cited in the paper as an example. This happens because of congestion when the protocol design is not handled carefully: congestion in the network leads to serious latency and eventually to packet losses.

The RCRT protocol differs from the existing protocols studied so far, all of which address only one of two issues: end-to-end reliable delivery or congestion control. This is strange, since both issues seem interlinked in producing good network performance. RCRT also requires the network to support multiple concurrent streams from each sensor node, which seems to be the future and should help the protocol see wide use. RCRT additionally separates the capacity-allocation policy from the underlying transport mechanisms, which makes it stand out.

End-to-end delivery of data and congestion avoidance are the two basic features of this protocol. The entire protocol revolves around the sink, the destination, which also discovers missing packets and explicitly requests them from the sensors. The sink decides that the network is congested when the time to repair a loss is significantly higher than a round-trip time. Six goals motivate the design of RCRT: reliable end-to-end transmission, network efficiency, support for concurrent applications, flexibility, minimal sensor functionality, and robustness.

Three distinct components set RCRT apart from other protocols: the congestion-detection component, the rate-adaptation component, and the rate-allocation component. End-to-end reliability is one of the main features of RCRT and is implemented with NACKs (negative acknowledgements). The design leverages the fact that the sink has enough memory to keep track of all missing packets. The sink tracks the sequence numbers of received packets, and a gap in the sequence indicates a packet loss. NACKs (which avoid ACK implosion and thus reduce overhead) are used to notify the respective sensor node of the lost packets. The source node frees entries from its retransmit buffer once it receives cumulative ACKs in a feedback packet. The sink also maintains a list of out-of-order packets for each flow to provide in-order delivery of data packets to the application.
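The sink-side bookkeeping described above can be sketched in a few lines. This is a toy model with assumed names, not RCRT's actual data structures: sequence gaps become the NACK list, and the highest in-order sequence number is the cumulative ACK.

```python
class SinkFlowState:
    """Toy per-flow loss tracking at an RCRT-style sink."""

    def __init__(self):
        self.received = set()
        self.cum_ack = 0   # every seqno <= cum_ack arrived in order

    def on_packet(self, seq):
        """Record a received sequence number, advancing the cumulative ACK."""
        self.received.add(seq)
        while self.cum_ack + 1 in self.received:
            self.cum_ack += 1

    def nack_list(self):
        """Sequence numbers with a gap below the highest one received."""
        if not self.received:
            return []
        top = max(self.received)
        return [s for s in range(self.cum_ack + 1, top)
                if s not in self.received]
```

After packets 1, 2, 5, 7 arrive, the cumulative ACK is 2 and the NACK list is [3, 4, 6]; both would ride in the next feedback packet.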

Feedback packets are structured to be self-contained, so even if one of them is lost, the next feedback packet will adjust the node's rate. A feedback packet contains the assigned rate r, a list of NACKed sequence numbers, a cumulative ACK, and the average RTT for that node. To limit overhead, feedback packets are sent only when one of the following conditions holds:

  • one or more missing packets have been detected;

  • the node is sending at a rate different from the assigned rate;

  • a duplicate packet with an already acknowledged sequence number has been received;

  • a feedback packet has been explicitly requested by the node (generally after half of its retransmit queue is filled).

RCRT also tackles some issues not addressed by previous protocols, such as detecting the loss of the last packet during connection tear-down, where the sender and the sink synchronize their sequence numbers. RCRT's adaptation to network conditions will be delayed in the case of bursty losses and lost consecutive feedback packets, but it will eventually bring the situation under control once it receives a packet from at least one of the congested nodes.

The drawbacks of this protocol are not addressed in the paper; all of them are left for future work. RCRT supports multiple concurrent applications, but coordinating rate allocation across multiple sinks has yet to be taken care of. RCRT adjusts the total flow in the network, so unconstrained bottleneck regions are also affected. Since RCRT makes decisions on RTT timescales, convergence time can be large in networks with a high maximum RTT. Also, since each node experiences congestion for a different fraction of its lifetime, the demand-proportional allocation policy used by RCRT is possible but difficult to implement over the long term, which is left to future work. Finally, a real-world implementation needs to deal with issues such as network partitions, which the authors do not address here.

According to an argument in the paper, "if it takes more than a few RTTs to recover the losses, then the network is more likely congested." But the paper doesn't explain much about why this is the case; to my knowledge, other causes could also delay packets, about which the paper provides little information. The paper also does not test the protocol with asymmetric nodes: if nodes have different-sized retransmit buffers, nodes with smaller buffers may "hog" the traffic and essentially get higher priority than nodes capable of buffering more packets. This could hurt application performance if the application expects to receive traffic from multiple nodes for time-sensitive analysis. In a future paper, the authors could test the lifetime of the sources as well as the sinks. I would assume the sinks would either have larger batteries or be connected to a constant power source. Although reliability is an important factor in transporting large blocks of data, long battery life is an important feature that was not addressed.


Tuesday, October 12, 2010

Week 6: Typhoon

Typhoon: A Reliable Data Dissemination Protocol for Wireless Sensor Networks

Chieh-Jan Mike Liang, Razvan Musaloiu-E., and Andreas Terzis

We review Typhoon, a protocol developed at Johns Hopkins for disseminating large objects quickly and reliably. Typhoon reduces contention and promotes spatial reuse by combining spatially-tuned timers, prompt retransmissions, and frequency diversity. The cornerstone of the Typhoon protocol is its use of snooping and channel switching to minimize (re)transmissions and reduce collisions and contention, respectively. Channel switching is especially effective for avoiding traffic generated on the default channel by hand-shaking and by Trickle, the protocol used on top of Typhoon to disseminate metadata. Typhoon is ACK-based, using the stop-and-wait ARQ protocol, which is effectively a sliding-window protocol with a window size of one. Typhoon is compared against Deluge, a NACK-based protocol that is the de-facto standard for disseminating large objects in TinyOS. In the authors' tests, Typhoon proved more efficient than Deluge, reducing dissemination time and energy consumption by up to a factor of three.

The general purpose of Typhoon is to disseminate large objects, for example source-code updates, which are usually 50-100 KB in size. Typhoon handles these large objects by first breaking them into 1 KB pages. The pages are then split into packets and sent over the air. The stop-and-wait ARQ protocol means an individual ACK is sent for every packet, so the time during which a lost packet goes unretransmitted is small. Deluge uses a NACK-based protocol and waits until an entire page has been sent before NACKing individual packets. Contrasting the two, the class found that Typhoon benefits greatly from its quick ACKs as opposed to Deluge's slow NACKs. Typhoon also uses Trickle to disseminate metadata on the default channel. Typhoon's only other transfers on the default channel are the node handshakes, after which nodes move to a different, random channel to begin transmitting data. This is the channel switching the paper introduces. The class noted that some channels have different properties, such as link quality and contention, so channel randomness plays a critical role in Typhoon's success. There is only one transmitter and one explicit receiver for each data transfer, although there may be other implicit receivers that snoop on it. They hear the PageOffer with the channel number, silently switch to that channel, and collect the data that is sent. If any packets are lost, though, the entire page is discarded (since implicit receivers don't send ACKs). This is referred to as Typhoon's all-or-nothing snooping approach.
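The stop-and-wait behavior (window size one, immediate retransmission on a missing ACK) can be sketched as a toy simulation. The function name, the loss model, and the retry limit are assumptions for illustration, not Typhoon's actual constants:

```python
import random

def send_page(packets, loss_prob, rng, max_tries=10):
    """Toy stop-and-wait ARQ over a lossy channel.

    Each packet is transmitted and retransmitted immediately until its
    ACK arrives (loss_prob models the combined packet/ACK loss).
    Returns the total number of transmissions for the page, or None if
    any packet exceeds max_tries (page transfer fails).
    """
    transmissions = 0
    for _ in packets:
        for _attempt in range(max_tries):
            transmissions += 1
            if rng.random() >= loss_prob:  # packet and ACK got through
                break                      # move on to the next packet
        else:
            return None                    # this packet never got ACKed
    return transmissions
```

On a lossless channel a 32-packet page costs exactly 32 transmissions; the key point is that each loss is repaired within one packet time rather than after the whole page, which is the contrast with Deluge's page-level NACKs.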

The class discussed the pros and cons behind Typhoon's all-or-nothing snooping approach. Some students believed that Typhoon could avoid a great number of transmissions if snooping nodes were able to temporarily store pages with missing packets instead of unequivocally discarding them. In scenarios where a snooping node is missing only a small number of packets from a page, the students argued, it is very likely that ensuing snooping opportunities would involve that same page. This is because of Typhoon's wave-like dispersal, where nodes in close proximity are likely to require the same pages. A node with an unfinished page could then snoop on these ensuing transmissions (which would take place anyway) and acquire the missing packets. This would minimize the number of PageOffers and subsequent transmissions, and hopefully the number of snoops needed to acquire a full page. The downside of this approach is that nodes would have to store unfinished pages in their memory, which is minimal in the first place, and keep track of which pages are unfinished and which packets are missing from each. The class concluded that the added complexity of this protocol modification is small compared to the gains in speed and data reduction that Typhoon would receive from it.

The class also criticized the comparison of Typhoon in four modes:

  • snooping and channel switching

  • channel switching only

  • snooping only

  • neither snooping nor channel switching

The class observed that Typhoon was specifically designed to gain maximum benefit from snooping and channel switching, so turning either of them off and comparing the outcome is of limited use. The class did find the first two cases useful, as they show the benefit of snooping for this particular protocol, which can be weighed against the high data overhead that snooping mostly incurs. On the other hand, the class did not find the last two configurations useful at all. The class felt that snooping was more of an enhancement to channel switching, and that channel switching was essentially the backbone of the protocol, so presenting data for Typhoon with snooping only, or with neither snooping nor channel switching, was pointless and misleading: without channel switching all data is sent on the same channel, and given Typhoon's aggressive behavior this results in a disastrous protocol. The class argued that instead of showing the snooping-only configuration, the authors should have compared the results against other full protocols that do not use channel switching.

The class discussed the Trickle protocol in depth because the authors did not go into it much, since it was not the primary topic of the paper. The class commented on the fact that Typhoon waits some amount of time for Trickle to finish before starting up. The class believed it makes sense to wait until Trickle has done its job, because the nodes need to know what data they need in order to work properly in Typhoon (i.e., a node must know to send a PageRequest when it receives a particular PageOffer). The class also wondered whether any other metadata could be passed around in the Trickle phase, such as network topology data. The class believes that if network topology data could be quickly pushed to each node (perhaps only data regarding upstream links, since Trickle disseminates downstream), then each node in Typhoon could acquire the data it needs more optimally. An example of such an ideal scenario would be a node avoiding a connection with a weak-link neighbor because it has many other stronger-link neighbors at the same level of dissemination.

The class also noted that the testbed experiments were very limited in scope. The authors compared Typhoon against Deluge with the CSMA MAC protocol using default values. The experiments were carried out in an office building with a notable bottleneck in the middle. The class argued that Deluge may not have been tuned well for that environment, whereas Typhoon was most definitely tuned to showcase its best results.

Friday, October 1, 2010

Week 5: Presentation and Discussion of "Collection Tree Protocol"

O. Gnawali, R. Fonseca, K. Jamieson, D. Moss, and P. Levis
SenSys 2009


This paper presents a collection tree protocol for wireless sensor networks, CTP Noe, which makes three contributions: first, it introduces two key principles, adaptive beaconing and datapath validation, to solve two tricky routing problems in sensor networks, link dynamics and transient loops; second, it describes the design and implementation of CTP Noe; third, it provides a comparative study by evaluating CTP Noe on 12 different testbeds.


We discussed a key question first: why does collection perform well in controlled situations yet poorly in practice? To find the reason, the authors did prior research on the subject, e.g., "An Empirical Study of Low Power Wireless" and "Four-bit Wireless Link Estimation". In short, the problem is that we don't understand the low-power link layer well, or our model of it is impractical. On the one hand, current protocols typically use periodic beacons, which cannot adapt to dynamic low-power wireless links. On the other hand, rapid link-topology changes cause transient loops, which lead to losses in the data plane or long periods of disconnection. Previous solutions had to discard packets or pause data transmission until the next beaconing period, because in those solutions topology repair operates at a time scale orders of magnitude longer than the data plane. So the authors' goal is a collection tree protocol that adapts its beaconing rate efficiently and agilely, and repairs the topology rapidly enough to avoid discarding packets or pausing data transmission for a long time.


CTP Noe gives a good solution with two mechanisms: datapath validation and adaptive beaconing. Datapath validation lets the routing protocol update stale datapath information in good time by using data packets. Adaptive beaconing increases agility and efficiency, sending very few control beacons when the topology is consistent and adapting quickly when the datapath discovers a possible problem. With these two mechanisms, CTP Noe does not need to drop packets or pause for a long time when it meets loops. How does it achieve that? We discussed the reason in class. When a transient loop occurs, CTP Noe immediately decreases its beaconing interval to the minimum, which lets it repair the datapath at once, so packets only need to be paused for the length of the minimum beacon interval. Former protocols such as MultihopLQI and DSDV cannot achieve this, because fixing the datapath takes them a long time; they have no choice but to drop packets or overflow their queues.
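The adaptive-beaconing interval logic can be sketched as a small state machine (class name and interval constants are assumptions for illustration, not CTP Noe's actual values): the interval doubles, Trickle-style, while the topology looks consistent, and resets to the minimum the moment the datapath signals a problem, which bounds how long a looping packet must wait for a repair beacon.

```python
class AdaptiveBeaconTimer:
    """Toy sketch of CTP-style adaptive beaconing."""

    def __init__(self, i_min=0.064, i_max=500.0):
        self.i_min, self.i_max = i_min, i_max
        self.interval = i_min

    def on_beacon_sent(self):
        # Topology looks consistent: back off exponentially.
        self.interval = min(self.interval * 2, self.i_max)

    def on_inconsistency(self):
        # P bit set, significant cost drop, or a loop detected:
        # reset so repair information flows almost immediately.
        self.interval = self.i_min
```

In steady state beacons become rare (interval pinned at i_max), yet a detected loop pulls the next beacon forward to within i_min, which is why packets only pause for roughly the minimum beacon interval.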


Going into the details of the implementation, we talked about several components of the control plane and the data plane. CTP Noe defines three events that trigger a beacon-interval reset: 1. the P bit is set; 2. the cost drops significantly; 3. a topology inconsistency has been detected. We discussed the usage of these three events. In the data-plane design, some students misunderstood the concept of client queues. The client here is an upper-layer application or component on the current sensor node, while the pool is used to store packets arriving from other sensors. Duplicate detection is important here because it avoids many useless transmissions of duplicate packets caused by looping and by false-negative ACKs. For the transmit timer, we discussed why it needs at least two packet times between transmissions.
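The duplicate check can be sketched as a small cache keyed on (origin, sequence number, THL), under the assumption, as in CTP, that including THL lets a genuinely looping packet (whose THL has changed) pass through while a retransmission caused by a false-negative ACK (same THL) is suppressed. Names and the cache size are illustrative:

```python
class DupFilter:
    """Toy duplicate-suppression cache for a collection data plane."""

    def __init__(self, size=4):
        self.size = size
        self.cache = []   # recent (origin, seqno, thl) signatures

    def is_duplicate(self, origin, seqno, thl):
        """True if this exact packet instance was recently seen."""
        key = (origin, seqno, thl)
        if key in self.cache:
            return True
        self.cache.append(key)
        if len(self.cache) > self.size:
            self.cache.pop(0)  # evict oldest entry
        return False
```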


Finally, from all the figures and descriptions in the evaluation, we can see that CTP Noe achieves reliability, robustness, efficiency, and hardware independence. But in Figure 9, it is not clear how to conclude that Channels 1 and 11 are the most heavily used by the building occupants.


Here are a few short excerpts from our reviews:

  • The authors performed a good bit of research for the experiment, using a variety of testbeds and modifying many of the parameters. In future tests, the authors could vary parameters that remained fixed over long periods in their tests. For example, the IPI could be varied instead of held at a fixed value. Also, nodes could be removed and then reintroduced, instead of just removed (Section 6.7). Another option is to add new nodes over some of the longer tests and look at the delivery ratios.
  • What happens in the event that the sink itself is intermittent? Perhaps in a mobile ad hoc network, the sink leaves the network. How do the nodes re-route to a different sink?
  • It achieves the four major goals well, but the question of whether this protocol generalizes to other distance-vector algorithms remains open; if it is answered in favor of collection protocols, it would extend these protocols to much wider applications.

Week 4: A-MAC

Presentation Summary for “Design and Evaluation of a Versatile and Efficient Receiver-Initiated Link Layer for Low Power Wireless”


The paper proposes a new prototype link-layer protocol for low-power wireless communication. The key objective of the design is to minimize power use. The central idea behind the protocol is that receivers send periodic probes. When a sender has a packet for a receiver, it responds to the probe with an auto-ACK packet, which is generated by the hardware. The sender then waits a short, random period of time and sends over the data payload. Once the packet has been received, the receiver sends another probe, which doubles as an acknowledgment that the previous packet was successfully received.
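The exchange above can be laid out as an event sequence. This is an ordering illustration only, not a timing-accurate model; the node names and trace representation are ours, not from the paper.

```python
def amac_unicast(receiver, sender):
    """Return the ordered (node, frame) events of one A-MAC unicast exchange."""
    return [
        (receiver, "probe"),     # receiver wakes up and probes the channel
        (sender, "auto-ack"),    # hardware-generated ack, ~192 us turnaround
        (sender, "data"),        # payload sent after a short random backoff
        (receiver, "probe"),     # next probe doubles as the acknowledgment
    ]

frames = amac_unicast("R", "S1")
assert [kind for _, kind in frames] == ["probe", "auto-ack", "data", "probe"]
```

The key point the sequence makes visible is that the receiver never stays awake waiting for traffic: every step it takes is a reaction to something already on the air.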

The novelty of this design is that it exploits the hardware auto-ack feature of the IEEE 802.15.4 standard to improve the protocol's efficiency. Using the hardware ack mechanism, A-MAC reduces the turnaround time from approximately 3.75 ms (in RI-MAC) to exactly 192 µs, preventing irrelevant nodes from overhearing. In addition, the authors present their finding that if multiple senders transmit acks simultaneously, the acks do not collide destructively and can be received successfully as long as the path delay difference is less than 500 ns. This second advantage improves efficiency and interference resistance when environmental wireless interference sources, such as an 802.11 AP, are nearby.
They also implemented variants of the protocol for broadcast and network wakeup. For broadcast, a sender waits for and acknowledges the probe from each of its neighbors and then sends the packet to each in turn. For wakeup, two approaches are given. The first is to use the same protocol to send the wakeup messages; however, this process is much slower, as nodes send probes at a much lower rate when not awake. The second approach is to use a reserved broadcast address. This, however, violates the 802.15.4 standard and is not supported by all hardware. The paper also presents a pollcast implementation.
The paper then presents extensive experimental results showing that A-MAC is indeed power efficient and reliable, including multiple-ack robustness experiments, 802.11 interference experiments, and micro- and macro-benchmarks. The results show that its performance is reasonable.

Students of CS450 discussed multiple topics about this paper:

1. When multiple senders want to send multiple data packets to the same receiver, contention may repeat. For example, when nodes S1 and S2 both want to send a packet to node R, they both wait for R's periodic probe, answer it with auto-acks, and send their data packets. The data packets will certainly collide, so R announces a probe carrying a contention window. S1 and S2 each flip coins and choose their backoff times. Suppose S1's timer expires first and it resends its data packet. When S1 finishes sending the first data packet, R sends a probe in response to S1. It is unclear what S2 does at this point. If S2 resets itself, answers the probe with an auto-ack, and then sends its packet, it is sure to cause contention once again, which means each data packet from either S1 or S2 triggers a new contention round or, even worse, an increasing contention window. On the other hand, if S2 does not reset when R's probe (acknowledging S1's data) arrives, S2 will wait until its own backoff timer expires, which is a suboptimal solution.
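The contention-resolution step in that scenario can be sketched as follows: each pending sender independently picks a slot in the announced contention window, and only a unique lowest slot wins the channel. The window size and the "collide again, grow the window" rule are assumptions for illustration, not A-MAC's exact parameters.

```python
import random

def resolve_contention(senders, cw, rng=random):
    """Each sender draws a backoff slot in [0, cw); a unique minimum wins."""
    slots = {s: rng.randrange(cw) for s in senders}
    winner_slot = min(slots.values())
    winners = [s for s, slot in slots.items() if slot == winner_slot]
    if len(winners) == 1:
        return winners[0], slots    # unique winner transmits first
    return None, slots              # tie: collision repeats, receiver could grow cw

winner, slots = resolve_contention(["S1", "S2"], cw=8)
# winner is "S1", "S2", or None if both happened to pick the same slot
```

With two senders and a window of 8, the chance of an immediate tie is 1/8 per round, which is why repeated per-packet contention, as the discussion notes, can still be costly.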

2. A disadvantage of receiver-initiated protocols is that, because of the periodic probing, their channel utilization scales with node density and probe frequency rather than strictly with traffic. The packet delivery ratio drops noticeably as density increases and probe intervals decrease.

Wednesday, September 22, 2010

An Empirical Study of Low Power Wireless

Presentation Summary for “An Empirical Study of Low Power Wireless”

The paper presented a study of packet-delivery performance in 802.15.4 sensor networks. The tests were aimed at the common conceptual model generally used by protocol designers when developing communication protocols. The authors conducted experiments using two different kinds of motes and three different testbeds that varied in the number of motes, the size of the network, and the environment: the first in a 6000 sq ft office, the next in a 2500 sq ft space on a university campus, and the final one on a dry outdoor lakebed. Based on the results of the experiments, the authors developed theories on network performance under internal and external conditions, with the environment playing a large role in packet performance.

The first test studied the effects of modifying the inter-packet interval (IPI) between packet transmissions. The results showed that over time the number of intermediate links increased and the numbers of poor and good links were reduced. An intermediate link is defined as a link with a packet reception ratio (PRR) between 10% and 90%.

The next test was an observation of PRRs across all receivers observing each channel from a single transmitter. The authors were able to conclude that reception rates vary across channels. For example, a single receiver may have excellent packet reception on channel 26 but poor to no reception on channel 17.

A statistical analysis of packet successes and failures was performed to identify any predictability in packet performance. Plotting consecutive successes and failures with the conditional packet delivery function (CPDF), the authors found that losses are independent over long time scales but temporally correlated over short periods. Further tests showed this was because the received signal strength (RSSI) maintains similar values over short periods but varies greatly over longer ones.
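The CPDF statistic mentioned above can be computed from a binary reception trace. The sketch below conditions only on runs of consecutive failures (the paper also conditions on runs of successes); the example trace is made up for illustration.

```python
from collections import defaultdict

def cpdf_after_failures(trace):
    """trace: list of 1 (received) / 0 (lost).
    Returns {n: P(next packet received | n immediately preceding failures)}."""
    attempts = defaultdict(int)
    successes = defaultdict(int)
    run = 0                        # length of the current failure run
    for pkt in trace:
        attempts[run] += 1
        if pkt == 1:
            successes[run] += 1
            run = 0
        else:
            run += 1
    return {n: successes[n] / attempts[n] for n in attempts}

trace = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1]
print(cpdf_after_failures(trace))
```

If losses were truly independent, the CPDF would be flat in n; values that shift with n are exactly the short-term temporal correlation the authors observed.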
The authors also investigated external noise (802.11, Bluetooth) and link asymmetries as reasons for poor packet reception.

During the discussion, several topics came up, including the strengths and weaknesses of the article. One observation concerned external noise. The authors provided the frequency spectra of 802.15.4, 802.11, and Bluetooth, which explained the different reception rates across channels. A related question was how and why channel (frequency) hopping affects performance. Going back to the frequency spectrum, we could see that Bluetooth completely overlaps the 802.15.4 frequency range but did not cause as much interference as 802.11, due to the fact that it moves across the frequency spectrum very quickly (every 625 µs). A question on link asymmetries was asked regarding noise-floor differences, RSSI asymmetries, and their effects on asymmetric links. After quickly perusing the paper, we concluded that the authors skimmed over this topic and did not provide much insight into the reasons behind the occurrence.