Digest of Proceedings

2nd IEEE Workshop on
Mobile Computing Systems and Applications

New Orleans, Louisiana, USA
February 25-26, 1999

Ramón Cáceres
Program Chair


Contents


Introduction

Following the example of the first WMCSA in December 1994, the goal of this workshop was to foster the exchange of ideas in mobile computing among workers in the field. Attendance was limited to maintain an informal atmosphere and thereby encourage interaction among participants. The program included presentations of refereed papers, short talks on work in progress, and a break-out panel.

WMCSA '99 took place February 25 and 26, 1999, in New Orleans, Louisiana. It was held at the New Orleans Marriott Hotel, co-located with and immediately following the 3rd USENIX Symposium on Operating Systems Design and Implementation. The workshop was sponsored by the IEEE Computer Society's Technical Committee on the Internet and Technical Committee on Operating Systems, in cooperation with the USENIX Association and ACM SIGMOBILE.

This digest summarizes the discussions that took place during the workshop. It is intended as a supplement to the printed proceedings and the electronic copy of the program, both available through the World Wide Web as described at the end of this document. This document is patterned after the digest of proceedings of the first WMCSA, written by M. Satyanarayanan. The following is based on detailed notes taken by two student volunteers, Terry Duchastel and Tiki Suarez. I would like to thank them for their diligent work. Any errors or omissions are my own.


Tools and Applications

The opening session, chaired by Karin Petersen, dealt with tools and applications for mobile computing.

PowerScope: A Tool for Profiling the Energy Usage of Mobile Applications

In the first talk, Jason Flinn presented PowerScope, a tool for profiling the energy usage of mobile applications. PowerScope helps identify components of an application that consume the most energy and are thus targets for optimization. The tool is made up of three parts: (1) The System Monitor, which monitors the program counter and process ID. (2) The Energy Monitor, which collects measurements from a digital multimeter. (3) The Energy Analyzer, which processes the output of the two monitors and associates energy usage with application components. In the case study presented, PowerScope helped reduce the energy consumption of an adaptive video application by 46 percent.
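The attribution step can be sketched roughly as follows. This is an illustrative reconstruction, not PowerScope's actual code: the supply voltage, sampling interval, and sample layout are all assumptions.

```python
# Illustrative sketch of energy attribution (not PowerScope's actual code).
# Assumes the two monitors produce time-aligned samples: for each sampling
# interval, the process ID that was running and the measured current draw.

VOLTAGE = 5.0      # assumed supply voltage, in volts
INTERVAL = 0.001   # assumed sampling interval, in seconds

def attribute_energy(pid_samples, current_samples):
    """Return a dict mapping process ID to estimated energy in joules."""
    energy = {}
    for pid, current in zip(pid_samples, current_samples):
        # Energy for one interval: E = V * I * dt
        energy[pid] = energy.get(pid, 0.0) + VOLTAGE * current * INTERVAL
    return energy
```

A real analyzer would also map program-counter samples to procedures within each process; this sketch stops at the per-process level.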

Discussion after the talk centered around two main themes. These questions and answers are paraphrased here:

Q. (Mary Baker, Ramón Cáceres, and Carla Ellis) Is it possible to accurately assign the energy usage of different hardware components to particular applications?
A. This is a difficult problem. For example, the disk can spin up to serve requests from more than one application, and network interrupts are often serviced in the context of a process different from the one to which they logically correspond. Until these issues are addressed, the application under study should be the only one running for PowerScope to be effective.

Q. (Anthony Joseph and David Steere) Can PowerScope be used dynamically as an application is running?
A. There is no fundamental reason for a tool like PowerScope to run off-line, but PowerScope would need to be modified in several ways before it could be used on-line.

Caches in the Air: Disseminating Tourist Information in the GUIDE System

In the second talk of the session, Keith Mitchell presented the GUIDE project. The project aims to provide a context-sensitive tourist guide for the city of Lancaster, England. The guide application runs on a portable device running Windows 95 and connected to a 2Mbps WaveLAN wireless network. The application is written in Java and the tourist information is represented in HTML with custom tags. Information is tailored to the preferences and location of the user, and downloaded to the user as needed. Location is deduced from the identity of the wireless access point a device is using. Ongoing work includes evaluating whether the current user interface is sufficiently intuitive, developing a city editor tool, and writing a data dissemination tool.
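The cell-based location scheme can be illustrated with a toy lookup: the access point a device currently hears identifies the city cell, which selects the content to serve. The access-point names and attractions below are invented for illustration, not taken from the GUIDE deployment.

```python
# Hypothetical sketch of GUIDE-style cell-based location. Location is known
# only to the granularity of one wireless cell, so content is keyed by the
# identity of the access point the device is using. All names are invented.

CELL_CONTENT = {
    "ap-castle": ["Lancaster Castle", "Priory Church"],
    "ap-quay": ["Maritime Museum"],
}

def attractions_for(access_point_id):
    # An unknown cell (or a dead zone) yields no location-specific content.
    return CELL_CONTENT.get(access_point_id, [])
```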

Q. (M. Satyanarayanan) Are there any dead zones in the wireless coverage?
A. Yes. We wanted cells to be distinct, so there is no overlap.

Q. (Mary) Won't people get lost when they are in a dead zone?
A. There's a map stored locally on the device.

Q. (unknown) What is the radio range?
A. 300 meters in line-of-sight conditions. Inside buildings it is more like 25-30 meters.

Q. (unknown) What is the granularity of the location information?
A. Only to within one wireless cell.

Q. (Satya) Does the system ever push information to the user instead of waiting for a request?
A. Yes, it can make suggestions, e.g., it will suggest alternatives if the castle is closed.

Q. (unknown) Are the suggestions randomized, for example to avoid everyone going to the same restaurant?
A. It could be done easily, but currently they are static.

Q. (Carla Ellis) Can you order what is displayed according to proximity?
A. Yes, starting with what is in the current wireless cell.

Q. (Anthony) Why not pre-load everything and download only when needed, thus reducing response time and bandwidth consumption?
A. We could have used more pre-fetching, but for flexibility we wanted to make things more dynamic.

Q. (unknown) What are the minimum hardware requirements?
A. The device costs US$2,000, but will get cheaper. An i486 processor is sufficient since the device is just a Web client.

Q. (Satya) Why didn't you use something like a Libretto, which is cheaper and includes a more powerful Pentium processor?
A. We didn't want a keyboard.

Q. (Y.C. Tay) Can you bill for tourist services? How about access to external Web sites?
A. Currently there is no billing. In addition, users can't enter an arbitrary URL so they are restricted to the content provided.

Q. (unknown) Can you offer general Internet access?
A. We already have a wireless network covering the city and providing network access to schools. That network can also provide access for everyone interested.

Q. (unknown) Have you gathered traces of user mobility patterns? Making anonymized traces available would be a valuable contribution to the community.
A. The system can record movements, but so far that information has only been used internally to the project.

Adaptive Groupware for Wireless Networks

In the third talk of the session, Tara Whalen presented an analysis of Calliope, a groupware document editor, together with suggestions on how to adapt the behavior of Calliope and similar applications to improve their performance when running over a wireless network. Since wireless connectivity is often intermittent as well as slow, applications should take advantage of the network when it is available in addition to reducing overall bandwidth consumption. The analysis concentrated on Calliope's use of the network. Observations included that block updates and locking were very efficient, but that immediate updates were very expensive. The talk closed with several suggestions for adapting such applications.

An example of an adaptive strategy is to disable telepointing (seeing other people's mouse cursors) when connectivity degrades since telepointing is an expensive feature. Applying their suggestions resulted in a bandwidth reduction of 90% in one scenario.
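The strategy of shedding expensive features as connectivity degrades can be sketched as a simple feature-selection function. The thresholds and feature names below are invented for illustration and are not Calliope's.

```python
# Sketch of bandwidth-driven feature adaptation (not Calliope's code).
# Thresholds are invented; the point is the ordering: cheap features stay
# on, expensive awareness features are shed first as bandwidth drops.

def select_features(bandwidth_kbps):
    """Return the set of groupware features to enable at a given bandwidth."""
    features = {"block_updates", "locking"}       # observed to be cheap
    if bandwidth_kbps >= 64:
        features.add("telepointers")              # expensive: remote cursors
    if bandwidth_kbps >= 128:
        features.add("immediate_updates")         # observed to be the costliest
    return features
```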

Q. (Satya) What about incorporating user feedback to guide adaptability?
A. That would definitely be useful, but it raises a lot of design issues. How does the user interact with the system? Does the system present the user with a list of choices?

Q. (Brian Noble) How do you evaluate the usefulness/desirability of application features?
A. We haven't done much work in that area, for example in prioritizing features to choose which to disable first.

Q. (Sandeep Gupta) What are typical group sizes for communication among Calliope users?
A. It currently uses peer-to-peer communication. It therefore does not scale very well.

Q. (unknown) Was the application usable after your changes?
A. Subjectively, yes. In my trials I was using large documents.

Q. (Thomas Kunz) Could one instance of Calliope give feedback to other instances?
A. Yes, it could be done, but it could be expensive depending on how you implement it, e.g., n-1 updates.

Q. (unknown) Would the feature set have changed if the application had been designed initially for low-bandwidth, intermittent connectivity?
A. Yes, for example, another package uses a different kind of telepointer.


Protocols and Handoffs

The second session was chaired by Joe Duran and dealt with computer networking in mobile and wireless environments.

RAT: A Quick (And Dirty?) Push for Mobility Support

Rhandeev Singh gave the first talk of the session on a mobility support scheme called RAT. RAT stands for Reachability using twice-network Address Translation. It uses existing protocols and relies on address translation hardware to handle mobile clients. It works at the application level, so it is network-stack and operating-system independent. Two disadvantages of RAT are that it doesn't provide end-to-end security at the network level and that it assumes the RAT device is trusted. Its main advantages are that it can be deployed now and that it provides an upgrade path towards Mobile IP. The table below summarizes the differences between RAT and Mobile IP:
                      RAT   Mobile IP
End-to-end security   No    Yes
Seamless mobility     No    Yes
Route optimization    No    Yes
Reachability          Yes   Yes
OS independence       Yes   No
Immediate utility     Yes   No
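As a rough illustration of the twice-NAT idea, the box at the home network rewrites both addresses of packets destined for a roaming host. The addresses, bindings, and packet representation below are invented; this is not the RAT implementation.

```python
# Invented sketch of twice network address translation for mobility. The
# RAT box redirects traffic sent to a mobile host's stable home address to
# its current (topologically correct) address, and rewrites the source so
# that replies return through the box.

NAT_ADDRESS = "203.0.113.1"                # the RAT box itself (invented)
BINDINGS = {"10.0.0.5": "192.168.7.20"}    # home address -> current address

def rewrite_inbound(packet):
    """packet: dict with 'src' and 'dst' addresses. Returns the rewritten
    packet, or the packet unchanged if it is not for a registered host."""
    current = BINDINGS.get(packet["dst"])
    if current is None:
        return packet
    return dict(packet, dst=current, src=NAT_ADDRESS)
```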

Q. (Mary) Do you have to buy a NAT box to deploy RAT?
A. Yes, but you only need it at your home network. You don't have to depend on it elsewhere, as you depend on foreign agents in Mobile IP.

Q. (Mary) Is the real issue that Mobile IP features are not widely implemented yet, for instance in Microsoft software? Mobile IP foreign agents do not need to be deployed after all.
A. No. RAT is easier to deploy, and Mobile IP is much harder to implement.

Q. (Ramón) Can you more easily go through firewalls with RAT?
A. Yes, it is easier because RAT uses topologically correct IP addresses.

Q. (unknown) But you may get no more people using RAT than are using Mobile IP?
A. Yes, that may be true.

Comment: (Satya) Is seamless mobility a good thing? We may be solving the wrong problem.

TCP Performance in Wireless Multihop Networks

Ken Tang presented the next paper, a simulation study of TCP in ad-hoc networking environments with multiple hops. The study used the GloMoSim simulator to compare three different MAC protocols: CSMA, FAMA, and MACAW. Additionally, it evaluated different TCP window sizes and the impact of link-layer acknowledgements. The medium was a hypothetical radio link not based on existing products such as WaveLAN. Briefly, the results were as follows. In a string topology with small packets, CSMA performed best, then FAMA, and last MACAW. CSMA degraded badly with large packets due to collisions. FAMA performed well when the TCP path was shorter than 5 hops. MACAW was best with more than 5 hops because it does not cause TCP timeouts. FAMA resulted in the best throughput in a ring topology. MACAW was the most fair but unfortunately had the lowest throughput. In a grid topology, MACAW was again the most fair. In general, there was a significant trade-off between throughput and fairness. The overall conclusions were that CSMA and FAMA suffered from the capture effect, that an adaptive TCP window was counterproductive in FAMA and MACAW, and that link-layer acknowledgements do help in reducing capture.

Q. (Brian) What if I'm running something other than TCP? How does TCP encourage capture?
A. Actually, it's not really TCP dependent. Capturing is due more to MAC backoff.

Comment: (Ramón) Capture definitely occurs in real wireless networks, for example WaveLAN.

Q. (Cormac Sreenan) What about 802.11? Does it avoid capture?
A. Yes, because each host takes a turn.

Q. (Tiki) Do link-layer retransmissions affect the application? What about the transport protocol?
A. No, other than slightly degrading performance when there's no contention.

Q. (Brian) What if the transport layer does not need reliability, for example UDP? Do link-layer acks help in that case?
A. We haven't done any tests with UDP.

Policy-Enabled Handoffs Across Heterogeneous Wireless Networks

Helen Wang presented the last paper of the session, which argued for building into systems support for policies governing handoffs between different wireless networks. The killer app for wireless networks is access. Therefore, systems should aim to provide the best connectivity at any given moment, where best is user-subjective. Goals include seamlessness, low latency, and minimal user interaction. The choices include the different wireless networks available at the current location, as well as using Mobile IP or a local address. Policies must accommodate cost, network conditions, power consumption, and connection setup time, as well as other factors such as user speed and user activity. There is a need for a uniform interface for accessing system dynamics.
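One way to picture such a policy is as a weighted score over network attributes, where the weights express the user's subjective notion of "best". The attributes, weights, and network parameters below are invented for illustration, not taken from the paper.

```python
# Sketch of policy-driven network selection. Each candidate network is
# scored by user-supplied weights: more bandwidth is good; monetary cost
# and power consumption count against a network. All values are invented.

def best_network(networks, weights):
    """networks: dict name -> {'bandwidth', 'cost', 'power'} attributes.
    weights: dict with the same keys giving relative importance.
    Returns the name of the highest-scoring network."""
    def score(attrs):
        return (weights["bandwidth"] * attrs["bandwidth"]
                - weights["cost"] * attrs["cost"]
                - weights["power"] * attrs["power"])
    return max(networks, key=lambda name: score(networks[name]))
```

Changing the weights changes the answer, which is the point: the same mechanism serves a user who cares only about cost and one who cares only about battery life.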

Q. (Karin) How would you infer the bandwidth requirements based on the app? Can you change the policy dynamically?
A. You can load profiles, and in that way change the policy more easily.

Q. (Rhandeev) If you have a variety of needs, then why not use multiple networks at the same time? For example send the text over IR, and the images over radio?
A. That would require application knowledge, which is tricky.

Q. (unknown) Policies are typically very convoluted. Have you looked at what people pick for policies?
A. I can imagine simple policies such as 100% cost, or 50%/50% if you don't care about one of the attributes.

Comment: (David) You can automate policy decision making. Also, take a look at previous work on determining which route is best. There could also be some interesting military applications. Could it be used for encryption, for example?

Q. (unknown) Is it possible to have conflicting policies, for example cost vs. latency?
A. That's a good question. We will look into this in future work.

Q. (Karin) You mentioned a period of instability. Have you seen that in practice? Is it really that bad?
A. That may not be something that happens that often.

Q. (unknown) How frequently do you evaluate network conditions? What is the overhead?
A. We haven't measured the overhead yet.

Comment: (David) Monitoring can be passive, you don't always have to probe the network. You can also adaptively adjust your monitoring.

Comment: (Mary) In Odyssey, the overhead was not significant.

Comment: (David) Looking at the network may not be enough. There are sometimes other considerations, such as reliability.

Q. (Ramón) This is a software engineering question. You mentioned using both C++ and OTcl. How hard was it to code and debug, particularly across the interface between the two languages?
A. I liked the language environment. It was easier to prototype things in OTcl than it is in lower-level languages.


Work in Progress

The last session on Thursday consisted of a series of short talks describing work still in progress. Cormac Sreenan served as session chair.

The Iceberg Project

Anthony Joseph started things off by talking about the Iceberg Project at UC Berkeley. The general aim is to take communication from Telco time to Internet time, and in particular to use new devices as they are introduced, allowing user control over information. However, they also want to provide Telco levels of availability. They will develop and introduce a universal inbox (similar to a PIM or personal information manager). The plan is to have a high-speed backbone with diverse access networks. The testbed will include a 400-processor node, gateways to a variety of networks, and a GSM base station that will cover the campus. The general architecture is to have lots of access devices that connect to one PIM, which then relays information to any of many receptive devices. They are also looking into speech-enabled control. The key will be Dynamic Content Transcoding. For example, a typical transaction might involve Cell Phone -> PIM -> Directory Service -> GSM to PCM Conversion -> Voicemail Service -> Speech to Text Conversion -> Email. Finally, they have developed a profiling tool to monitor the various networks.
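The transcoding path quoted above is essentially a composition of stages. A toy sketch follows; the stage functions in the usage example are invented placeholders, not Iceberg services.

```python
# Toy sketch of dynamic content transcoding as a pipeline: each stage
# transforms the message representation, and a path through the system is
# just the composition of its stages.

def run_pipeline(message, stages):
    """Apply each transcoding stage in order and return the result."""
    for stage in stages:
        message = stage(message)
    return message
```

For example, a voicemail-to-email path would chain a GSM-to-PCM stage and a speech-to-text stage in this fashion.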

Adaptive Mobile Applications Based on Mobile Code

Thomas Kunz next described work going on in his research group at Carleton University. Most people focus on bandwidth variation. However, in a mobile environment there are other scarce resources (power, memory, CPU). Additionally, most solutions introduce proxies to move the data closer to the user. Some other solutions include teleporting, where only the user interface is on the on the user's device. Their research is geared more towards adapting the application. They are looking towards 3rd generation cell phones, where video conferencing is feasible. A good example would be playing an MPEG movie on a handheld device. Is it better to send the compressed MPEG and then have the end device decode it, or is better to send a huge raw video file, but have very little decoding? They are envisioning two proxies: a low-level proxy that supports the protocol stack and a high-level proxy that supports applications. They are currently working with WaveLAN and WindowsCE devices. Future work includes collecting traces from a real Canadian wireless service. They will also be looking into whether they can deduce something about user mobility. Furthermore, they want to look at active networks, push vs. pull, and merging IETF standards with Telco standards.

The Mobile People Architecture Project

Mary Baker gave the next talk on the Mobile People Architecture project at Stanford University. Their overall goal is to make it possible to always reach people. The future will include multiple communication devices, but the sender really wants to reach a person, not some device or application. At the same time, the receiver should have control over what gets delivered when and where. For example, a user may want no phone calls during dinner. Privacy of receiver location would also be nice. Their approach is to add another layer to the network stack: the People layer on top of the Application layer. The design is to create a personal proxy. It will be both a tracking agent and a dispatcher. Note that the personal proxy must be trusted, either an ISP or some other service provider. All a user needs to know to reach someone is their unique name. The device will then contact a Directory Service, which returns the address of the personal proxy. The dispatcher then transforms and converts the information as needed. Currently they have conversion working between email and voicemail. They are working on live voice and interactive text messaging.
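The dispatcher's receiver-controlled behavior can be sketched as a first-match list of preference rules. The rules, device names, and fallback below are invented for illustration, not part of the Mobile People Architecture.

```python
# Invented sketch of personal-proxy dispatch: the proxy applies the
# receiver's preference rules in order and delivers a message via the
# first device whose rule matches.

def dispatch(message, preferences, hour):
    """message: dict describing the incoming communication.
    preferences: list of (predicate, device) pairs; first match wins.
    hour: current hour of day, so rules can express times like dinner."""
    for predicate, device in preferences:
        if predicate(message, hour):
            return device
    return "voicemail"  # invented fallback when no rule matches
```

A "no phone calls during dinner" rule is then just a predicate on the hour that routes everything to voicemail for that interval.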

A Case for Nomadic Computing

Next, David Steere presented work on nomadic computing going on at the Oregon Graduate Institute. He defined nomadic computing as periods of stationary work interspersed with periods of movement when no active work is taking place. He contrasted this with seamless mobile computing, where work continues even during periods of movement. He argued that seamless mobility as provided by Mobile IP comes at a high cost, is seldom needed, is not supported by many computing platforms (e.g., WINSOCK), and in some situations is simply inappropriate (e.g., while driving a car). In fact, the one important application he sees for Mobile IP is Telnet, since Telnet maintains an open connection for long periods, is often inactive, and has modest performance needs. He instead proposes Location Independent Naming (LIN), which allows a mobile device to keep its name but change its address as it changes location. LIN extends Dynamic DNS and DHCP to support cross-domain registration and secure updates across insecure networks. The drawbacks of LIN are that you lose transparent mobility at the IP layer as well as authentication based on IP addresses.
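A minimal sketch of the LIN idea follows, with an invented in-memory registry standing in for the secure Dynamic DNS updates described above.

```python
# Minimal sketch of Location Independent Naming: a device keeps a stable
# name while its address changes with location, and peers resolve the
# name immediately before connecting. A dict stands in for Dynamic DNS.

REGISTRY = {}

def register(name, address):
    """Called by a device after moving: bind its stable name to the
    address it obtained (e.g. via DHCP) at the new location."""
    REGISTRY[name] = address

def resolve(name):
    """Called by a peer just before opening a connection."""
    return REGISTRY.get(name)
```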

The Milly Watt Project

Carla Ellis then presented the Milly Watt project at Duke University. The overall goal is to reduce energy consumption in mobile computing devices. This would extend battery life, reduce heat production and fan noise, and reduce environmental impact by conserving energy resources. Energy is a critical, shared, and limited resource. Systems should have a global view to resolve conflicts. Application-specific knowledge can help, and the operating system can help. Therefore the solution should include application and operating system collaboration. Energy should become a "first class" performance issue. What is needed are appropriate abstractions (a power state model) and empirical measurements of power costs (energy consumption in each state). There are two ways to reduce the energy used for a task: reduce the power cost of a state, or reduce the time spent in a state. What is needed is some type of architectural support, algorithms and system support, and APIs allowing application direction. She also presented measurements of power consumption on a Palm Pilot running Hiker's Buddy, a GPS application.
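The power-state model described above reduces to a simple calculation: a task's energy is the sum, over states, of the power cost of each state times the time spent in it. A sketch with invented numbers:

```python
# Sketch of the power-state energy model. Either knob reduces a task's
# energy: lower the power cost of a state, or shorten its residency time.

def task_energy(power_costs, residencies):
    """power_costs: dict state -> watts (empirically measured per state).
    residencies: dict state -> seconds spent in that state.
    Returns total energy in joules."""
    return sum(power_costs[state] * seconds
               for state, seconds in residencies.items())
```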

Q. (David) How did you determine power utilization of the Palm Pilot?
A. I put the machine into a stable state, then measured it with a multimeter.

Q. (David) Is the power drain the same regardless of how much power remains in the battery?
A. I don't believe so, but that should be examined further.

Mackinac: Bridging the Application/Channel Gap

The final speaker of the session was Brian Noble, who presented the Mackinac project at the University of Michigan. The motivation for the project is that there isn't much cooperation between Computer Science and Computer Engineering, and that if they worked together there would be significant benefits. In CS there is Mobile Computing, which looks at adaptive software layers and protocol-level modifications. In CE there is Wireless Computing, which involves understanding communication channels in fine detail. What if the two were to merge? There is a lot of potential that is not being realized. Full transparency, or exposing all the channel details to the application and vice versa, would solve some problems. However, that would be too monolithic and dependent on specific hardware and protocols. They propose instead to provide a translucent interface, where applications advise devices, for example by coloring packets based on application semantics. At the same time, lower levels provide hints to higher levels, for example by signaling that the channel has deteriorated. They hope to answer the following two questions: (1) how much can you benefit from simple translucency?; and (2) how much would full transparency buy you? They plan to combine CMU's Odyssey system with ITT's handheld multimedia terminal. They hope to implement a video conferencing system that is fully integrated with a smart radio, and compare it to the fully opaque case.
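The packet-coloring example can be sketched as follows. The color names and the drop policy are invented for illustration; this is not Mackinac's design.

```python
# Invented illustration of a "translucent" interface: the application
# colors packets by semantic importance, and when the lower layer signals
# that the channel has degraded, low-importance packets are shed first.

def transmit(packets, channel_good):
    """packets: list of dicts with a 'color' key. Returns those to send."""
    if channel_good:
        return packets
    # Degraded channel: keep only packets the application marked essential.
    return [p for p in packets if p["color"] == "essential"]
```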

Q. (Peter Honeyman) So, you're going to define some kind of API?
A. Yes, but I'm not sure what kind of API to write. That's something we have yet to figure out.

Q. (Peter) Won't that become the QoS API that we've all been looking for?
A. Not really. The difference is that we have a lot more control over a radio link than we have over a whole network.


Workflow and Databases

The first paper session on Friday was chaired by Nigel Davies and dealt with workflow and database issues in mobile environments.

Workflow and Application Adaptations in Mobile Environments

Jin Jing gave the first talk of the session on a proposed approach to workflow and application adaptation in mobile computing environments. Workflow management systems manage a sequence of activities and assign resources to those activities. Workflow resources include software, hardware, and human resources. Applications initiate and execute activities, for example location-dependent work. Traditional approaches to workflow management have included pessimistic and optimistic assignment of resources, while approaches to application data access have involved tradeoffs between performance and data quality. The approach suggested here is a complementary and collaborative adaptation by both workflow management and application data access.

Q. (David) How often would the workers have to communicate with their home base? Constantly or once they get back home?
A. You want the workers to receive their assignments in real-time. For example, you can guarantee that you can reply to a query within the hour.

Q. (Brian) Regarding text, what is the relative proportion compared to video, etc.?
A. It depends on the application and we haven't looked at that yet.

DataX: An Approach to Ubiquitous Database Access

Hui Lei gave the next talk on the DataX package. DataX is middleware for connecting to databases from mobile computers using thin-client technology. About 1 in 4 workers will be mobile by 2002, and therefore there will be a large demand for software like this. DataX provides disconnected database access through a client-proxy architecture.

The proxy uses ODBC to connect to the server, and the rendering is done on the client. Data is sent as XML, and is therefore device independent. DataX defines an architecture to subset the database and create policies. Subsetting is done by creating "folders". A folder defines an encapsulation of data: its name and parameters, its access, and its operations. Targeting is done using filtering and transcoding, based on rules. Synchronization is used to update the server. It's weakly consistent.
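A folder might be modeled roughly like this. The class shape and query style are assumptions for illustration, not the DataX API.

```python
# Hypothetical model of a DataX-style "folder": a named, parameterized
# subset of a database that a client can hoard for disconnected use.

class Folder:
    def __init__(self, name, query, params):
        self.name = name        # the folder's name
        self.query = query      # parameterized query defining the subset
        self.params = params    # the folder's parameters

    def materialize(self, execute):
        """Run the folder's query via an execute callable (e.g. at the
        proxy) and return the subset for hoarding on the client."""
        return execute(self.query, self.params)
```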

Q. (Satya) What is the status of the work?
A. This project was started only recently. At this point we've finished targeting and synchronization. We're working on rendering and subsetting.

Q. (Karin) Are you planning to do more with folders and XML?
A. We plan to allow users to customize the interface.

Q. (Karin) How do you expect different devices to render the information? Tags don't really help with this problem.
A. The renderer is device-specific. Each renderer will render the data as best it can on that device.

Q. (Sumi) What if I change devices? How would that affect me?
A. All the preferences are stored on the proxy. The only thing you need to do is hoard the appropriate folders.

Q. (David) The database creator might have different expectations of consistency than what your system provides the user.
A. We're working with customers currently. They're both building the database and using it, so they know what kind of consistency they want.

Broadcast of Consistent Data to Read-Only Transactions from Mobile Clients

Mei-Wai Au gave the last presentation of the session on a new algorithm to address the inconsistency problem in data broadcast. While data items in a mobile computing system are being broadcast, update transactions may install new values for the data items. If the executions of update transactions and the broadcast of data items are interleaved without any control, mobile transactions may obtain inconsistent data values. This work proposes a new algorithm, called Update-First with Order (UFO), for concurrency control among the mobile transactions and update transactions. The mobile transactions are assumed to be read-only. Under the UFO algorithm, all schedules among these transactions are serializable. Two important properties of the UFO algorithm are that (1) the mobile transactions do not need to set any lock before they read the data items from the "air"; and (2) its impact on the adopted broadcast algorithm, which has been shown to be an efficient method for data dissemination in mobile computing systems, is minimal.
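Property (1), together with the speaker's later remark that a client can stop listening once it has everything it needs, suggests the following client-side reading pattern. This sketch shows only the lock-free reader, not the UFO concurrency control itself.

```python
# Sketch of a read-only mobile transaction reading from the broadcast
# channel: no locks are set, and the client stops listening as soon as
# it has gathered all the items it needs.

def read_transaction(broadcast, needed):
    """broadcast: iterable of (item, value) pairs as they appear on the air.
    needed: the items this transaction reads.
    Returns a dict of the collected values."""
    values = {}
    remaining = set(needed)
    for item, value in broadcast:
        if item in remaining:
            values[item] = value
            remaining.discard(item)
            if not remaining:
                break  # no need to wait for the end of the broadcast cycle
    return values
```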

Q. (Y.C.) Do you always have to wait until the end of the current broadcast to make sure information doesn't change?
A. No, once you have all the items you need, then you can just stop listening.

Q. (Karin) What kind of applications will you be supporting with this?
A. Stock quotes. Stock prices change constantly. If you query something, you can have a conflict with the update.

Q. (Karin) Do the values returned have to be up to date?
A. Yes, for example if you're ready to buy a stock.

Q. (Nigel) Have you looked at how your algorithm interacts with existing broadcast algorithms?
A. Our aim is to develop an algorithm that can be used with any kind of broadcast cycle. However, there may be some problems, for example with algorithms that use indexing.


Ad Hoc and Multicast

The last paper session was chaired by John Zavgren. It dealt with ad hoc and multicast routing in wireless networks.

Ad-Hoc On Demand Distance Vector Routing

Elizabeth Royer gave the first presentation on the AODV routing algorithm. In AODV, each mobile host operates as a specialized router, and routes are obtained on demand with little or no reliance on periodic advertisements. The algorithm is suitable for dynamic self-starting networks like ad-hoc networks. AODV provides loop-free routes even while repairing broken links. Because the protocol does not require global periodic routing advertisements, its bandwidth demands are substantially less than for protocols that use such advertisements. Nevertheless, AODV maintains most of the advantages of basic distance-vector routing mechanisms. Simulation was used to verify the operation of AODV. The algorithm scales to large populations of mobile nodes wishing to form ad-hoc networks.
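On-demand route discovery can be sketched as a breadth-first flood over the current topology. This is a simplification for illustration (no sequence numbers, timeouts, or route maintenance), not the AODV protocol itself.

```python
from collections import deque

def discover_route(neighbors, source, dest):
    """neighbors: dict node -> list of nodes in radio range (the ad-hoc
    topology). Flood a route request breadth-first and return one
    loop-free route from source to dest, or None if unreachable."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == dest:
            route = []
            while node is not None:   # walk parents back to the source
                route.append(node)
                node = parent[node]
            return route[::-1]
        for nxt in neighbors.get(node, []):
            if nxt not in parent:     # each node rebroadcasts a request once
                parent[nxt] = node
                queue.append(nxt)
    return None
```

Because each node forwards the request at most once and routes follow the parent pointers of the flood, the returned route cannot contain a loop.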

Q. (Mary) Is 200 milliseconds good or bad?
A. We're assuming the host moves at 0.4 to 0.8 meters per second, so a handoff time of 200 milliseconds is good.

Q. (Mary) Have you compared AODV with other techniques?
A. The DSR findings in their MobiCom paper were different because they used a different simulation package and different simulation parameters.

Q. (David) Is the loss model a cliff? Do you suddenly lose all connectivity?
A. Yes, it's a cliff.

Q. (David) What would happen with a more realistic loss model?
A. I'm not sure. We haven't modeled what happens with fading. We did model collisions and assumed no capture.

Q. (Satya) What happens if you remove the HELLO messages?
A. We're being pessimistic and assume that if we haven't heard from a host in a timeout period, then we invalidate that route.

Q. (Satya) Isn't that overly pessimistic?
A. Possibly. However, we're assuming that hosts move.

Q. (Satya) Have you looked into intermediary nodes repairing the route?
A. Yes, that is one of our future goals. That's what we call local route repair.

Q. (Satya) What about asymmetrical connections? How would you handle that?
A. We would like to look into that. Likely, the destination would have to start route discovery back to the source.

Geocasting in Mobile Ad Hoc Networks: Location-Based Multicast Algorithms

Nitin Vaidya gave the next talk on a new routing technique called geocasting. Geocasting involves sending a packet to all hosts in a certain geographic region. The difference, compared to multicasting, is that you don't have to subscribe to and unsubscribe from groups. In geocasting, a node automatically becomes a member of a group based on its geographic location. In other words, geocast is a special case of multicast. DSR and AODV both have high overhead because they flood the network in all directions. LAR (Location Aided Routing) is an optimization of DSR/AODV. It tries to avoid flooding the entire network: hosts only propagate route-discovery packets if they are physically closer to the destination region than the previous node. A combination of LAR and multicast flooding yields geocasting.
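The LAR-style forwarding rule can be sketched as a simple distance test. Representing the destination region by a single center point is a simplification for illustration.

```python
import math

def should_forward(my_pos, prev_pos, region_center):
    """Simplified LAR-style heuristic: a node rebroadcasts a
    route-discovery packet only if it is closer to the destination region
    than the node it heard the packet from. Positions are (x, y) pairs."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(my_pos, region_center) < dist(prev_pos, region_center)
```

Nodes that fail this test stay silent, which is how LAR prunes the flood to roughly the direction of the destination.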

Q. (Y.C.) What happens if the path goes outside the forwarding zone?
A. Then your accuracy is very low.

Q. (Y.C.) How do you define accuracy?
A. The chance that the desired host is in the expected zone.

Q. (David) How does node density affect your results?
A. We didn't look at that. We used random placement and varied the number of hosts.

Q. (Elizabeth) How do you know what geographic region to send a packet to?
A. You have to include some location information in each packet.

Q. (Elizabeth) How do nodes know what geographic region they are in?
A. Each node needs to know its physical location, through GPS for example.

Q. (Karin) How did you define overhead?
A. It's the ratio of all packets sent to packets received.

An Adaptive Protocol for Reliable Multicast in Multi-Hop Radio Networks

Sandeep Gupta gave the final talk on a reliable multicast protocol for wireless networks. Reliable multicast requires validity, agreement, and integrity. The work also observes that nodes tend to move in clusters. The assumptions made include a neighbor-discovery protocol at the data link layer, no permanent node failures, a reliable unicast protocol, and static multicast groups. The general idea is that when a link breaks, the node broadcasts a discovery packet to locate the rest of the route.
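The hop-bounded discovery idea can be sketched with a breadth-first flood over an adjacency list. This is an illustrative sketch only; the graph representation, function name, and BFS formulation are assumptions for the example, not the protocol's actual mechanism.

```python
from collections import deque

def discover_route(graph, source, target, max_hops):
    """Bounded flooding (a sketch): propagate a discovery packet
    breadth-first, limiting propagation by hop count, and return the
    first route found to the target, or None if none exists within
    the bound."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        if len(path) - 1 >= max_hops:
            continue  # hop bound reached; drop the packet here
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None
```

Bounding the flood by hop count, as suggested in the discussion, keeps discovery traffic local: a two-hop route is found with `max_hops=2` but not with `max_hops=1`.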

Q. (Elizabeth) How dependent are you on the source/core node?
A. We assume that it's always available.

Q. (unknown) Do you use flooding to discover routes?
A. Yes, but you can bound it, for example by the number of hops.

Q. (Mary) You're assuming that group membership is static?
A. Yes, it is not necessarily a realistic model.

Q. (Mary) In what situations does this assumption make sense?
A. For example, if you're collaborating on a document.

Q. (Rhandeev) How does clustering help?
A. We're not doing clustering at all. It's just conceptual. I'm not maintaining that info anywhere.

Q. (Jason) Have you looked into previous work done in fault-tolerance?
A. I did look into using self-stabilization for the tree.


Break-Out Panel

The final session of the workshop consisted of a break-out panel. The program chair posed three questions at the beginning of the workshop and asked the attendees to think about them over the course of the workshop. In the final session, the attendees separated into four teams to prepare answers to these questions. The workshop then reconvened, and a representative from each team presented the team's positions, which are summarized below.

Blue Team

The Blue Team was led by Mary Baker and also included Jason Flinn, Jin Jing, Larry Klos, Dushyanth Narayanan, Tiki Suarez, Y.C. Tay, and K. Vibhatavanij.

In their view, current problems include a mismatch between mobile computing and wireless expectations, and a lack of profit mechanisms like billing. The inconvenience of current devices is also a big deterrent, even if ubiquitous network access becomes a reality. We are also missing context-dependent input/output modalities: in a conference you need a keyboard for typing, while in a car you need speech because your hands are occupied.

Their solution, presented in jest, is to equip all drug pushers and addicts with wireless mobile devices. This will provide a profit motive, true mobility, and bottom-up adoption of the technology. It will also ensure that the necessary, practical pieces work well before requiring that the complete solution work well.

In five years we'll see mostly incremental improvements, with perhaps dramatic improvement in batteries or input/output options, and we'll see a PDA/phone combination.

Green Team

The Green Team was led by Nigel Davies and also included Mei-Wai Au, Barbara Hohlt, Anthony Joseph, Ahmed Karmouch, Emre Kiciman, Keith Mitchell, and Helen Wang.

Vertical applications such as email, health care, and telephony are already very successful. The question is why those have worked so well. It is because these applications were designed, usability trials were done, and zero configuration is needed. In contrast, there has been no overall design or accountability for horizontal applications like Web browsing, Lotus Notes, and general data processing.

In the next few years, there will be progress in receiving data on mobile devices without the user even realizing it. The killer app will be providing local services and information, and it will be location-dependent. It would be designed, tested, and easy to use. The next 5 years will include many more vertical applications, much better infrastructure, and a personal home base to centralize data. Furthermore, a Bluetooth-like shift will be a big hit, but it will have to be well designed.

Red Team

The Red Team was led by Sumi Helal and also included Terry Duchastel, Rasit Eskicioglu, Thomas Kunz, Golden Richard, Tara Whalen, and John Zavgren.

Current problems include a mismatch between computer science and computer engineering, and also a mismatch between cost and power. Additionally, there is a general lack of innovative marketing strategies and appealing flat-rate tariffs. There is also a lack of killer apps and of telephony support in PDAs. Also missing is a universally accepted PDA format (palmtop, sub-notebook, notebook?). Power will of course always be a problem. The reality is that there will always be significant constraints compared to a wired environment, which leads to unreasonably high expectations from users, who are ultimately disappointed. A lack of support for ubiquitous computing is a problem: currently you're a slave to your devices, which you have to frequently plug into their cradles. Security and privacy also need to be addressed better. Finally, we badly need a wireless networking standard.

Answers must be found for the problems of cost, ease of use, performance, availability, and security. We need a killer app, but it has to be commercially viable. In five years we'll see IP over everything, billing systems will have evolved, and ad hoc networks will emerge.

Black Team

The Black Team was led by Cormac Sreenan and also included Joe Duran, Sandeep Gupta, Karin Petersen, Elizabeth Royer, and Rhandeev Singh.

Why is it taking so long to integrate? That's actually not true for vertical applications, for example parcel pickup/delivery and rental car return. One of the problems is that we're still discovering the right amount of connectivity needed, for example on a Palm Pilot. Also, the information needs of mobile users are not understood. Finally, the devices are costly and hard to use.

Solutions will include identifying niche roles for mobility. Things must be kept simple! We need ease of integration and interaction, and we need standards and lower costs. The future will include lots of special-purpose devices that will be small, have great style, and have higher-bandwidth connectivity.


Conclusion

Judging from comments I received during and after the workshop, WMCSA '99 was a success. Many of the participants found it both productive and enjoyable. They liked its small, informal nature and the active participation of so many of the attendees. They also praised the quality of the program. A number of them asked if there would be another WMCSA, and several volunteered to help organize it.

Many people contributed to this success. The general chair, Sumi Helal, and the other organizers, Mukesh Singhal, Joe Duran, and Jin Jing, deserve thanks for putting together a high-quality event. My colleagues on the program committee also did an excellent job reviewing and choosing papers on a very tight schedule. They were Mary Baker, B. Badrinath, Nigel Davies, Dave Johnson, Anthony Joseph, Jay Kistler, Karin Petersen, Steve Pink, Srini Seshan, and Cormac Sreenan. The session chairs, identified earlier in this document, helped to keep things moving while at the same time encouraging interaction. The local arrangements chair, Golden Richard, assisted by Larry Klos, Richard Miller, and Stefan Rahm, welcomed us to the wonderful city of New Orleans, handled registrations, provided Internet access, and was extremely helpful throughout. Finally, the authors, speakers, and attendees gave the workshop its character through their enthusiastic participation.


Additional information

More information about the workshop is available from the WMCSA '99 home page, including electronic copies of the accepted papers and work-in-progress talks, as well as a complete list of attendees and their affiliations. In addition, the full printed proceedings can be ordered from the IEEE Computer Society online catalog. This digest is also available on the Web at http://www.research.att.com/conf/wmcsa99/digest.html.


Ramón Cáceres <ramon@research.att.com>