Tuesday, March 3, 2009

Assignment 6: Tom Ternquist

Over the past few days, I’ve taken a good, hard look at my iPhone. Admittedly, I think it’s a great device, but there are still some things I have problems with.

One of my chief complaints, and certainly one I’m not alone on, is the iPhone’s inability to keep multiple device profiles: rules for when the phone should ring and when it should only vibrate. With all its connectivity via calendars, email, location-awareness, and so on, the iPhone should be able to estimate my social context with relatively high certainty and adjust its settings accordingly.

I would like to focus on using the contextual data the iPhone constantly receives to design an auto-sensing device profile mode. One obvious design feature is the ability to create and name these profiles in various ways. Beyond creating the profiles, the phone needs to be able to process contextual information to judge which profile should be active. An obvious source of information is the phone’s calendar: tagging events with different profiles is a relatively straightforward way to enable this auto-switching.
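
To make the calendar-tagging idea concrete, here is a rough Swift sketch of how profile-tagged events could drive auto-switching. Everything here (Profile, TaggedEvent, activeProfile) is a name I’m making up for illustration, not part of any real iPhone API:

```swift
import Foundation

// Hypothetical sketch: Profile, TaggedEvent, and activeProfile(at:events:)
// are illustrative names, not part of any real iPhone framework.
enum Profile: String {
    case ring, vibrate, silent
}

struct TaggedEvent {
    let title: String
    let start: Date
    let end: Date
    let profile: Profile   // the profile the user tagged this event with
}

/// Returns the profile of whichever tagged event covers `now`,
/// or nil if no tagged event is currently in progress.
func activeProfile(at now: Date, events: [TaggedEvent]) -> Profile? {
    return events.first { $0.start <= now && now <= $0.end }?.profile
}

// Example: a lecture tagged "silent" that is currently in progress.
let lecture = TaggedEvent(title: "CS Lecture",
                          start: Date().addingTimeInterval(-600),
                          end: Date().addingTimeInterval(3000),
                          profile: .silent)
print(activeProfile(at: Date(), events: [lecture]) ?? .ring)   // prints "silent"
```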

Taking it a step further, the iPhone’s GPS and location-aware technology could be used to define different zones of behavior that suggest which mode should be active, based on user input at those specific locations. This would give the phone a guide for behavior when none is specified by a calendar event.
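
A similar sketch for the location-zone idea, reusing the hypothetical Profile type from the sketch above. Zone and suggestedProfile are again made-up names, and a real implementation would get coordinates from the phone’s location services, which aren’t modeled here:

```swift
import Foundation

// Hypothetical sketch: Zone and suggestedProfile(latitude:longitude:zones:)
// are illustrative names. Profile comes from the previous sketch.
struct Zone {
    let name: String
    let latitude: Double
    let longitude: Double
    let radiusMeters: Double
    let profile: Profile        // the profile the user chose for this location
}

/// Great-circle distance in meters between two coordinates (haversine formula).
func distanceMeters(_ lat1: Double, _ lon1: Double,
                    _ lat2: Double, _ lon2: Double) -> Double {
    let r = 6_371_000.0
    let dLat = (lat2 - lat1) * Double.pi / 180
    let dLon = (lon2 - lon1) * Double.pi / 180
    let a = sin(dLat / 2) * sin(dLat / 2) +
            cos(lat1 * Double.pi / 180) * cos(lat2 * Double.pi / 180) *
            sin(dLon / 2) * sin(dLon / 2)
    return r * 2 * atan2(sqrt(a), sqrt(1 - a))
}

/// Suggests the profile of the first zone containing the current position;
/// falls back to ring when no zone (and no calendar event) applies.
func suggestedProfile(latitude: Double, longitude: Double, zones: [Zone]) -> Profile {
    let hit = zones.first {
        distanceMeters(latitude, longitude, $0.latitude, $0.longitude) <= $0.radiusMeters
    }
    return hit?.profile ?? .ring
}
```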

However, location-aware technology brings with it some socio-technical gaps. Knowing your coordinates on earth can still be too vague to be useful in a social context: you may be in an academic building, where you’d normally want your phone on silent, but the actual room you’re in is a relatively loud lounge where it would be fine for your phone to ring, and your device may not be able to pick up on this.

To overcome this gap, the device would need to pull data from more sources, such as Google Maps or other services that describe what is at a given coordinate at a very fine level of detail. Having the device learn behavior at very precise coordinates through user input could also lessen this gap.
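
One last sketch for the learning piece: remembering the profile the user manually picked at a precise coordinate and repeating that choice when the phone is back at roughly the same spot. This reuses Profile and distanceMeters from the earlier sketches; ProfileOverride and the 30-meter radius are arbitrary assumptions of mine:

```swift
import Foundation

// Hypothetical sketch: ProfileOverride and learnedProfile(...) are illustrative
// names. Profile and distanceMeters come from the previous sketches; real place
// lookups (e.g. Google Maps) are not modeled here.
struct ProfileOverride {
    let latitude: Double
    let longitude: Double
    let profile: Profile     // what the user manually switched to at this spot
}

/// Returns the profile of the closest remembered override within `radiusMeters`,
/// so the phone can repeat a choice the user made at nearly the same place.
func learnedProfile(latitude: Double, longitude: Double,
                    overrides: [ProfileOverride],
                    radiusMeters: Double = 30) -> Profile? {
    let candidates = overrides
        .map { ($0, distanceMeters(latitude, longitude, $0.latitude, $0.longitude)) }
        .filter { $0.1 <= radiusMeters }
        .sorted { $0.1 < $1.1 }
    return candidates.first?.0.profile
}
```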

3 comments:

  1. Having your phone automatically switch the ringer on and off at the right times seems to be a common theme in this assignment. The idea of using your location and contextual data for this purpose is very intriguing. The phone would almost certainly make some mistakes like the lounge example you gave, but I'm sure the technology would improve over time. Maybe the phone would be able to pick up sound, so when there was a lot of noise it would assume that it's okay to have the ringer on. Artificial intelligence is quite impressive when it works well.

  2. The idea of your phone essentially having the level of knowledge about your life that a top-level secretary would have is interesting - I agree that it would be great to not have to worry about programming your phone and to just trust it to know best.
    However, the trade-off is that you _are_ trusting a device to know best, yet you still have to deal with the consequences of the mistakes it can make.

  3. I think it would be super difficult to have your phone determine what state you are in when deciding on a ringer. This is one socio-technical gap that seems very, very big. That requires a lot of intelligence that technology does not have. Ackerman said that although it is difficult to impose human intelligence on technology, it's best to complement our intelligence with technology. I would have three modes set up on your phone, and then pick the one that best fits your state. I cannot think of signals for a phone to use when determining its ringer; that represents a long road for technology to take.
