So today at Access 2011 it’s hackfest day, with roughly 60–70 people, quite a big group!
I decided to work on the augmented library topic with 5 others. We discussed two different software products currently out there and possible implementations.
Layar allows for mobile app development using GPS/geolocation to provide extra information, plus image recognition to make objects and the environment more interactive. Layar is available on the Apple App Store and on Android.
Advantages: Drupal module, centralized database for searching all layers
Disadvantage: not available on the iPod Touch (and presumably not on the iPad either).
Developed by Georgia Tech, Argon allows mobile app development using KML to serve additional information based on GPS/geolocation.
- open source
- works on iPod Touch
- in development (can be buggy)
- non-centralized (need exact link)
- only available on iOS products (Android in development, but no timeline)
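Since Argon content is just KML, it's easy to generate programmatically. Here's a quick sketch in Python; the point of interest, coordinates, and description are placeholders, not anything we actually built at the hackfest:

```python
# Sketch: generate a minimal KML placemark that a KML-based AR browser
# like Argon could load. Name, description, and coordinates are placeholders.

KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <description>{description}</description>
    <Point>
      <coordinates>{lon},{lat},0</coordinates>
    </Point>
  </Placemark>
</kml>"""

def make_placemark(name, description, lon, lat):
    """Return a KML string marking one point of interest."""
    return KML_TEMPLATE.format(name=name, description=description,
                               lon=lon, lat=lat)

if __name__ == "__main__":
    print(make_placemark("Reference Desk", "Ask us anything",
                         -123.3117, 48.4634))
```

Because Argon has no central directory, you'd still need to hand users the exact link to whatever file this produces.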
Some of the ideas we brainstormed:
- shelf/branch location of an item
- scan book covers to bring up book info, reviews/ratings, etc. (this would probably work better in a public library setting)
- locate subject areas, with maps displaying where each subject sits
- reference/info desk locator
- interactive pop-ups, e.g. ask what the user wants to do, or scan a room number to bring up the booking system
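The scan-to-info idea boils down to resolving a scanned identifier (a barcode/ISBN, or a Layar Vision image match) to a catalogue record. A toy sketch of that lookup, with a made-up record standing in for a real catalogue or API call:

```python
# Sketch of scan-to-info: a scanned identifier resolves to a catalogue
# record with shelf location and ratings. The data below is a
# hypothetical stand-in for a real catalogue lookup.

CATALOGUE = {
    "9780000000000": {
        "title": "Example Title",
        "shelf": "3rd floor, range 12",
        "rating": 4.2,
    },
}

def lookup(scanned_id):
    """Return the record for a scanned identifier, or None if unknown."""
    return CATALOGUE.get(scanned_id)

record = lookup("9780000000000")
if record:
    print("{title}, {shelf} (rating {rating})".format(**record))
```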
To try our demos:
- Layar: search for "Access 2011" (it should be public in a day or two)
- Argon: "Access 2011" hosted by UVic (you'll need the Argon browser installed)
I think the ideal would really be a mobile app that helps the user do just about everything: wayfinding, searching, finding general information (such as hours), finding item information (including reviews/ratings), checking computer availability, and so on.
What was interesting about our discussions was talking about how this might best be implemented with the technology we have today. Apparently, the University of Illinois developed an app that tells users where to find an item on the shelf using signal-strength positioning, but we could imagine that going very wrong, especially around a lot of metal shelving. Would it be better to have nothing at all than to direct a user to the wrong place? I imagine many would say yes.
Obviously, there are pros and cons to every method, but my conclusion was that if you were to develop a mobile app with current technology without spending an enormous amount of time on it, the app would work better with image recognition (something à la Layar Vision or QR codes) combined with input from the user.
For example, if a user wanted to find books on a particular subject, the app would ask which subject, use GPS to direct them to the right branch (on multi-branch campuses) if applicable, and then, once inside, pop up a mini-map directing them to that subject on the shelf. If they get lost at any point, they just scan the nearest appropriate image and the app draws a new mini-map with a path from their current position to the shelf they're looking for.
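To make that concrete: every scan gives the app a known position, so it only has to re-plan a route on a floor-plan grid rather than track the user continuously. A minimal sketch, with a made-up 5x5 floor plan and breadth-first search standing in for a real pathfinder:

```python
# Sketch of the "mini-map with a path" idea: re-plan a route on a small
# floor-plan grid whenever the user scans a marker. 0 = walkable,
# 1 = shelving/wall. The grid and positions are made up for illustration.
from collections import deque

FLOOR = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def path(start, goal, grid=FLOOR):
    """Breadth-first search for a shortest route between two grid cells."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), route = queue.popleft()
        if (r, c) == goal:
            return route
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), route + [(nr, nc)]))
    return None  # no route found

# e.g. the user scanned a marker at (0, 0); the subject shelf is at (4, 4)
print(path((0, 0), (4, 4)))
```

Each new scan just calls `path` again from the freshly known position, which sidesteps the accuracy problem of continuous indoor positioning entirely.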
The advantage of a dynamic path map over real-time positioning is that positioning technology is still not very accurate, and most users won't give an app more than one or two chances before deciding whether it's useful.
Hopefully we can get the Layar one public, and then, rather than simply showing a short video, we can let people try the app themselves.