Typhoon: A Reliable Data Dissemination Protocol for Wireless Sensor Networks
Chieh-Jan Mike Liang, Razvan Musaloiu-E., and Andreas Terzis
We review Typhoon, a protocol developed at Johns Hopkins for disseminating large objects quickly and reliably. Typhoon reduces contention and promotes spatial reuse by combining spatially-tuned timers, prompt retransmissions, and frequency diversity. The cornerstone of the protocol is its use of snooping and channel switching: snooping minimizes redundant transmissions and retransmissions, while channel switching reduces collisions and contention. Channel switching proved especially effective at avoiding the traffic generated on the default channel by handshaking and by Trickle, the protocol Typhoon uses to disseminate metadata. Typhoon is ACK-based, using the stop-and-wait ARQ protocol, which is effectively a sliding-window protocol with a window size of one. Typhoon is compared against Deluge, a NACK-based protocol and the de facto standard for disseminating large objects in TinyOS. In testbed experiments, Typhoon proved more efficient than Deluge, reducing dissemination time and energy consumption by up to a factor of three.
The general purpose of Typhoon is to disseminate large objects, for example source code updates, which are usually 50-100 KB in size. Typhoon handles these large objects by first breaking them into 1 KB pages; each page is then split into packets that are transmitted over the radio. Because stop-and-wait ARQ requires an individual ACK for every packet, the window during which a lost packet goes unnoticed and unretransmitted is small. Deluge, by contrast, is NACK-based and waits until an entire page has been sent before NACKing individual packets. The class found that Typhoon benefits greatly from its prompt ACKs compared to Deluge's delayed NACKs. Typhoon also uses Trickle to disseminate metadata on the default channel. Typhoon's only other transfers on the default channel are the node handshakes, after which nodes move to a different, randomly chosen channel to transmit data. This is the channel switching that the paper introduces. The class noted that channels differ in properties such as link quality and contention, so the channel randomness plays a critical role in Typhoon's success. Each data transfer has one transmitter and one explicit receiver, though other implicit receivers may snoop on it: they hear the PageOffer carrying the channel number, silently switch to that channel, and collect the data that is sent. If any packet is lost, however, the entire page is discarded, since implicit receivers do not send ACKs. This is referred to as Typhoon's all-or-nothing snooping approach.
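The per-packet ACK behavior described above can be sketched as a small simulation. This is an illustration of stop-and-wait ARQ over a lossy link, not Typhoon's actual implementation; the packets-per-page count, loss rate, and retry limit are assumptions chosen for the example (the paper specifies 1 KB pages, but the rest is invented here).

```python
import random

PACKETS_PER_PAGE = 16  # assumed: a 1 KB page split into 16 packets of 64 B


def send_page_stop_and_wait(packets, loss_rate=0.1, max_retries=8):
    """Stop-and-wait ARQ: send one packet, wait for its ACK, and
    retransmit promptly on loss (a sliding window of size one).
    Returns the total number of transmissions, or None on failure."""
    transmissions = 0
    for _pkt in packets:
        for _attempt in range(max_retries):
            transmissions += 1
            # Simulated lossy link: delivery succeeds with prob. 1 - loss_rate.
            if random.random() >= loss_rate:
                break  # ACK received; move on to the next packet
        else:
            return None  # packet never got through; abort the page
    return transmissions


random.seed(1)  # deterministic link behavior for the example
page = [bytes(64)] * PACKETS_PER_PAGE
total = send_page_stop_and_wait(page)
```

Because each loss is detected at the very next ACK timeout, a dropped packet costs only one extra round trip, which is the property the class credited for Typhoon's advantage over Deluge's per-page NACKs.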
The class discussed the pros and cons of Typhoon's all-or-nothing snooping approach. Some students believed that Typhoon could eliminate a great number of transmissions if snooping nodes temporarily stored pages with missing packets instead of unequivocally discarding them. When a snooping node is missing only a few packets from a page, the students argued, it is very likely that ensuing snooping opportunities will involve that same page: because of Typhoon's wave-like dispersal, nodes in close proximity tend to require the same pages. A node holding an unfinished page could then snoop on these subsequent transmissions, which would take place anyway, and acquire the missing packets. This would reduce the number of PageOffers and subsequent transmissions, and ideally the number of snoops needed to acquire a full page. The downside of this approach is that nodes must store unfinished pages in memory, which is scarce on sensor nodes to begin with, and must track which pages are unfinished and which packets each is missing. The class concluded that the added complexity of this modification is small compared to the speed and transmission savings Typhoon would gain from it.
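The bookkeeping the class proposed can be sketched as a small buffer of unfinished pages. This is hypothetical code illustrating the class's suggestion, not anything in the Typhoon paper; the `SnoopBuffer` name and the per-page packet count are invented for the example.

```python
class SnoopBuffer:
    """Hypothetical buffer for the class's proposed change: instead of
    discarding a snooped page with missing packets (Typhoon's
    all-or-nothing rule), keep the partial page and fill in the gaps
    from later overheard transmissions of the same page."""

    def __init__(self, packets_per_page=16):
        self.packets_per_page = packets_per_page
        self.partial = {}  # page_id -> {packet_seq: payload}

    def overhear(self, page_id, seq, payload):
        """Record one snooped packet; return the full page if complete."""
        page = self.partial.setdefault(page_id, {})
        page[seq] = payload
        if len(page) == self.packets_per_page:
            return self.partial.pop(page_id)  # page complete
        return None


buf = SnoopBuffer(packets_per_page=4)
# First overheard transfer of page 7: packet 2 is lost in transit.
for seq in (0, 1, 3):
    buf.overhear(7, seq, b"data")
# A later transfer of the same page (likely, given Typhoon's wave-like
# dispersal) supplies the missing packet and completes the page.
completed = buf.overhear(7, 2, b"data")
```

The memory cost the class worried about is visible here: each entry in `partial` holds up to a full page of payload plus a sequence-number map, on nodes where RAM is measured in kilobytes.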
The class also criticized the comparison of Typhoon in four modes:
snooping and channel switching
snooping only
channel switching only
neither snooping nor channel switching
The class discussed the Trickle protocol in depth, since the authors did not cover it much; it was not the primary topic of the paper. The class commented on the fact that Typhoon waits some amount of time for Trickle to finish before starting. The class believed this makes sense, because nodes need to know what data they require for Typhoon to work properly (i.e., a node will know to send a PageRequest when it receives a matching PageOffer). The class also wondered whether other metadata could be passed around during the Trickle phase, such as network topology data. If topology data could be quickly pushed to each node (perhaps only data about upstream links, since Trickle disseminates downstream), then each node in Typhoon could acquire the data it needs more optimally. For example, a node could avoid connecting to a weak-link neighbor when it has many stronger-link neighbors at the same level of dissemination.
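Since the discussion leaned on how Trickle behaves, a minimal sketch of its timer logic may help: the interval doubles while the network is consistent (so a quiescent network stays nearly silent) and resets to the minimum when new metadata is heard (so updates spread quickly). The parameter values below are illustrative, not the ones Typhoon uses.

```python
import random


class TrickleTimer:
    """Minimal sketch of the Trickle timer that Typhoon relies on for
    metadata dissemination. `k` is the suppression constant: a node
    stays silent if it has already heard k consistent advertisements
    in the current interval."""

    def __init__(self, i_min=1.0, i_max=64.0, k=1):
        self.i_min, self.i_max, self.k = i_min, i_max, k
        self.interval = i_min
        self.start_interval()

    def start_interval(self):
        self.counter = 0
        # Pick the transmission time uniformly from the interval's second half.
        self.t = random.uniform(self.interval / 2, self.interval)

    def hear_consistent(self):
        self.counter += 1  # a neighbor already advertised the same metadata

    def hear_inconsistent(self):
        # New metadata in the network: fall back to the minimum interval.
        if self.interval != self.i_min:
            self.interval = self.i_min
            self.start_interval()

    def should_transmit(self):
        # Suppress our own broadcast if k neighbors already covered it.
        return self.counter < self.k

    def interval_expired(self):
        # Network looks consistent: double the interval, capped at i_max.
        self.interval = min(2 * self.interval, self.i_max)
        self.start_interval()
```

This exponential back-off is why the class's idea of piggybacking extra metadata (such as topology hints) on the Trickle phase is cheap: once the network is consistent, Trickle's maintenance traffic is already near zero.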
The class also noted that the testbed experiments were limited in scope. The authors compared Typhoon against Deluge using the CSMA MAC protocol with default values, in an office building with a notable bottleneck in the middle. The class argued that Deluge may not have been optimally tuned for that environment, whereas Typhoon was certainly tuned to showcase its best results.