The Vision and the Reality

Published on Thursday, December 20th, 2012

Augmenting Reality with the Kinect

Rob Miles, University of Hull – Robmiles.com

The Kinect includes two cameras, an infrared sensor and four microphones

The further you are from the sensor, the further apart the infrared dots appear to the Kinect. When you are about 80 cm away it has very good depth perception. However, objects such as tables cast shadows. The software can track six people – two in detail and four by position only. Multiple Kinect sensors can work together because each one moves slightly eccentrically.
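The note above describes how the sensor reads depth from the apparent displacement of its projected infrared dots. A structured-light sensor of this kind recovers depth by triangulation from that displacement (the disparity). A minimal sketch in Python – the focal length and baseline values are illustrative only, roughly in the range reported for the original Kinect, not official specifications:

```python
def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Triangulate depth from the observed shift of a projected dot.

    focal_px  : camera focal length in pixels (illustrative value)
    baseline_m: distance between IR projector and IR camera (illustrative)
    Returns depth in metres via the standard relation depth = f * b / d.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With these made-up constants, a dot shifted by 58 pixels would triangulate to 0.75 m – about the 80 cm range the talk described as the sweet spot.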

A Kinect body has 19 bones and 20 joints. The new sensors will be able to track fingers.

The Vision and the Reality – discussion

Who is augmenting reality? Mainly marketing and the military.

From an educational perspective, the army is using it for drill and skill; it can also be used for surgery training – for example, to let you see where a tumour is.

There is a tendency to use AR for its novelty value – Wikipedia with a shiny wrapping.

QR codes are not necessarily used in a thoughtful fashion http://wtfqrcodes.com/

Augmented reality has the potential to be used to support situated learning and enhance a sense of space. Microsoft’s HoloDesk is a place to go to carry out premeditated activities.

Newcastle University Rock Art on Mobile Phones project http://rockartmob.ncl.ac.uk/indexD.php

History pin – pin your history to a collaborative map http://www.historypin.com

The SecondSight app can provide a premeditated way of taking another look at National Trust properties http://www.mysecondsight.com/experiences/index.php

Augmented reality can provide a connection to data that isn’t available through a keyboard

Steve Boneham – Conclusion

If we are going to use AR, we need a reason for doing so.

We want to go beyond a shallow marketing experience and build activities that allow for prompts, collaboration, investigation and interactivity. We shouldn’t use AR just to replicate what we could do already.


Augmenting Education – The Past, Present and Future of AR

Published on Thursday, December 20th, 2012

Luke Robert Mason, Director of Virtual Futures

We can now use devices to deposit versions of ourselves into an online environment.

Vision is proving to be a limited way to augment our perception. AR can pollute our visual senses.

Neil Harbisson is a colour-blind artist. He wears a prosthesis that allows him to hear colour.

http://www.kuriositas.com/2010/06/i-am-borg-worlds-first-recognized.html

Data gloves let us feel virtual objects

http://www.vrealities.com/glove.html

Olly, the web-connected smelly robot, gives you smell notifications – and you can make one yourself if you have a 3D printer

http://www.ollyfactory.com/

Link to scent-based AR: http://www.virtualworldlets.net/Shop/ProductsDisplay/VRInterface.php?ID=29

If we are to start navigating information environments, we need to look at things that are already good at navigating those environments, such as machines and robots. Digital alter egos can help us to navigate these information environments.

Kanye West: Media Cyborg http://snarkmarket.com/2010/6262

Telepod allows us to have telepresence – talk to a 3D holographic representation of someone rather than just a screen image http://www.hml.queensu.ca/telehuman


Augmented Reality in Action

Published on Thursday, December 20th, 2012

Lester Madden, Augmented Planet

http://www.augmentedplanet.com/

AR is a collection of technologies that can be used together in certain ways to produce what we call augmented reality. Virtual reality was about entering a computer-generated world, isolated from physical reality. AR is about bridging that gap.

AR provides an immersive experience. You have a phone in your pocket; it recognises where you are and starts to give you an audio tour. If you stand still, that suggests you are interested, so it carries on.

A barcode can be thought of as an early form of augmented reality, interacting with print through technology. After barcodes we had QR codes, and Microsoft tags.

Markers are high-contrast images that enable tracking and are used for 3D. The computer picks up where it is in relation to the marker, and also your orientation to the marker. You cannot add markers retrospectively.

Natural feature tracking allows you to overlay 3D graphics. Software picks out high-contrast areas of an image and builds an image map that is uploaded to a server. The machine then looks for matches. Natural markers can be applied to most things, and can be applied retrospectively.
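The keypoint step described above – picking out high-contrast areas of an image – can be illustrated with a toy sketch. This is a crude stand-in for real natural-feature tracking (which uses far more robust detectors), scoring each interior pixel of a grayscale image by its local gradient magnitude and keeping the strongest:

```python
def find_features(image, top_k=3):
    """Pick the highest-contrast pixels of a grayscale image.

    image: list of rows of integer intensities (0-255).
    Scores each interior pixel by squared central-difference gradient
    magnitude and returns the (x, y) positions of the top_k scorers.
    """
    scores = []
    for y in range(1, len(image) - 1):
        for x in range(1, len(image[0]) - 1):
            gx = image[y][x + 1] - image[y][x - 1]  # horizontal contrast
            gy = image[y + 1][x] - image[y - 1][x]  # vertical contrast
            scores.append((gx * gx + gy * gy, (x, y)))
    scores.sort(reverse=True)
    return [pos for _, pos in scores[:top_k]]
```

On an image with a sharp vertical edge, all the returned features cluster along that edge – which is exactly why low-contrast surfaces track poorly.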

Markerless AR uses the camera to provide context and also uses information from the GPS, accelerometer and other sensors for location. It can, for example, track where you are in relation to the white line in the road, or to the car in front.
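At its simplest, the location side of markerless AR reduces to asking whether a known point of interest falls inside the camera’s current field of view, given a GPS fix and a compass heading. A minimal Python sketch – the function names, thresholds and coordinates here are illustrative, not from the talk:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def poi_in_view(device, poi, heading_deg, fov_deg=60, max_m=500):
    """True if the point of interest is near enough and within the
    camera's horizontal field of view around the device heading."""
    if haversine_m(*device, *poi) > max_m:
        return False
    diff = abs((bearing_deg(*device, *poi) - heading_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2
```

A browser-style app would run a check like this for every nearby point of interest on each sensor update, then overlay labels only for the ones in view.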

Other techniques are object recognition and face recognition. Both Google and Apple are investing in face recognition. An app could recognise someone’s face and then display their social profile.

Demos of some augmented reality apps developed using ‘String’ http://www.poweredbystring.com/showcase

Scrawl lets you do 3D drawing in augmented reality
http://www.youtube.com/watch?v=GRM68MiEixU

Interact with NASA spacecraft in AR
https://itunes.apple.com/us/app/spacecraft-3d/id541089908?mt=8

The Google Goggles app lets you do visual search – for example, it will identify a famous building or picture – or give you reviews of a bottle of wine based on its label
http://www.google.com/mobile/goggles/

Aurasma is a visual browser
http://www.aurasma.com/


AR: A view of the future

Published on Thursday, December 20th, 2012

Lee Stott, Microsoft @lee_stott

AR is a method of looking at the world through a different lens.

Several current issues: connectivity, the need for Internet access and an app store, device ownership, and user interfaces. Either you build new interfaces for every device, and people keep buying new devices, or you go for the lowest common denominator.

Commercial AR: Kogan AR app allows you to see what a piece of tech such as a television would look like in your room.

Games-based AR: finding objects in your house that increase your points in a game that you are playing.

Sensor-based AR: looks at input from device sensors. Examples include Photosynth and the Live Butterflies viewer http://www.youtube.com/watch?v=gO43NOXgzyE that uses the iPhone gyroscope.  You can download a free toolkit to build these (although I didn’t get a link to that, and I can’t find it)

Geo-AR: takes real-world objects and adds information from the augmented world. It takes into account your location and your direction. Intel have a device that pulls information from your Facebook account and advertises directly to you when you walk into a shop such as Top Shop (again, I can’t track down a link to this, though Intel are obviously doing things with AR http://venturebeat.com/2012/09/11/intels-checklist-of-innovations-coming-the-next-18-months-on-the-pc/ )

Wii as AR – the Wii is augmented reality in that you are doing things and the computer knows about it. Microsoft is about to launch Nike Plus Sports, which will allow you to get awards and recognition for games but also to build up stamina and fitness.

Physical-interaction-based AR: two million sensors embedded in a table, which is thin like an LED screen and can be mounted at any angle.

Microsoft is working with Guide Dogs for the Blind to transform the ways in which blind people get out and about. Good technology is almost invisible to the user – they view it as part of their physicality. The work in this area is not solely for people with a visual impairment; these technologies have the potential to benefit everybody.

Cool video – well worth watching www.guidedogs.org.uk/inspiring-future-technologies

Personal AR – an evolution of personal AI, and of decision engines. Getting an AI agent to do things like make appointments for you and to make predictive suggestions.

AR Browser: Nokia City Lens is built into the Nokia Lumia phone. It knows where you are and can display local information http://www.youtube.com/watch?v=vMdNtVqYJIw

What lies ahead: more cultural heritage in digital form, AR more accessible to more people, people better equipped with tools to add creatively to the AR resources available, an exponential growth in mass cultural expression, and a cloud culture.

Graphene is making all this possible – it conducts electricity and can be embedded into any material http://en.wikipedia.org/wiki/Graphene

Also important is near field communication (NFC), a set of standards for smartphones and similar devices to establish communication when near each other or touching. This enables, for example, contactless payment.
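The messages NFC tags typically carry are NDEF records. As a hedged illustration of what the standard mentioned above actually standardises, here is a toy encoder for a single short NDEF text record (field layout following the NFC Forum Text record type; this is a sketch, not a production implementation):

```python
def ndef_text_record(text, lang="en"):
    """Build a single short NDEF 'Text' record as raw bytes.

    Layout: header flags 0xD1 (message begin | message end | short
    record | TNF=well-known), type length, payload length, type 'T',
    then a status byte holding the language-code length, the language
    code, and the UTF-8 text.
    """
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    if len(payload) > 255:
        raise ValueError("short-record payload limit exceeded")
    return bytes([0xD1, 0x01, len(payload), 0x54]) + payload
```

Writing these few bytes to a tag is what lets a phone tapped against a poster open a label, a URL or – with other record types – trigger a payment flow.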


scARlet

Published on Thursday, December 20th, 2012

Matthew Ramirez, University of Manchester

Prezi: http://prezi.com/k4rkzzlqgvkt/augmented-reality-in-education-event-2012/

Blog: http://teamscarlet.wordpress.com/

Wiki: scarlet.mimas.ac.uk/mediawiki – this contains lots of useful info and links

Project SCARLET uses computer graphics to add a layer of information to the real world. As part of their course, students need to consult books under controlled conditions in library study rooms. In these rooms, objects are isolated from secondary and digital resources. AR helps students to look at primary sources surrounded by contextual materials. The experience is led by an academic and built into the aims and objectives of the module. It required a multidisciplinary team: a special collections manager, the student voice, academics and developers.

They worked with ten key editions of Dante and with the oldest surviving fragment of the Gospel of St John. Around the fragment, the project provides its original context in the document, a peer-reviewed video, the English translation, and a mobile-optimised page with information links and a bibliography. With the Dante editions you get a commentary by the academic, and can link to a mobile-optimised set of resources. Students were able to book out iPads in order to view the content, and could also view it elsewhere on a standard computer.

They used Junaio – GLUE-based recognition http://www.junaio.com/develop/docs/glue/

A problem for AR is that there are no ratified standards, so there is no code base on which to build AR. The upside of this is that competing companies are driving innovation.

Students liked going to the library, seeing the artefacts and having all the information gathered together. It could be used not only for content delivery but also to get students to develop content themselves. The project was also a way of surfacing library content not only to students but also to academics and the wider public, providing access to underused resources.

First-year students found it a good way of establishing basic knowledge; it enthused and encouraged them. Final-year students found the basic content less relevant, but liked the video introductions to specific objects, and liked having a central reference point they could use for the initial planning of essays.

It is important that students are immersed in the activities; otherwise they ask why they can’t just view the material in the VLE.

Use of AR needs to be contextual, closely linked both to the objectives and to the learning. It shouldn’t be a generic resource. It is important not to underestimate the time needed to create it.

For the future, they are looking at the possibilities for medical learning or for hairdresser training. AR works very well in situational learning, where you don’t have access to computers but you do have access to mobile devices. For example, you could link to a video on how to cut a certain style.

They also run a service called Landmap that allows you to access topographical data from across the UK http://www.landmap.ac.uk/index.php/About/ This could be used, for example, to create 3D models showing the type of housing in an area, and these models could then be linked to 3D printers.

The Team Scarlet wiki gives you access to a toolkit that helps you to align augmented reality, technology and pedagogy with specific aims and objectives.

They are currently working with the University of Sussex and the University for the Creative Arts. They have a video of an interview with Lucy Robinson at the University of Sussex, who is leading a project on the eighties and is excited that you can take objects and ephemera and set them in their wider context – the world that they spoke from and that they spoke to.

You can put Google Analytics on to AR resources in order to monitor how many times resources are accessed.


Augmented reality in education

Published on Thursday, December 20th, 2012

City University, London, 21 October 2012 – #AREE2012

http://blogs.city.ac.uk/care/ar-event/

http://blogs.city.ac.uk/care/2012/11/23/ar-in-education-event/

Videos http://blogs.city.ac.uk/care/?p=238

Augmented reality in education http://blogs.city.ac.uk/care/

cARe, Creating Augmented Reality for Education

Farzana Latif, City University

Video overview of project http://www.youtube.com/watch?v=kMWdFadqjg0

Describes a public health walk around east London. When students find a marker, they can access resources about the history of the area, and a series of reflective tasks. They are encouraged to use the technology they have with them in order to tweet and take photos. In each area, you get a video with subtitles.

Issues included vandalism (markers were obscured or defaced), device availability (not everyone had a device with a suitable camera), variable GPS accuracy, and personal safety.

The app is available free from the iTunes store, and requires a built-in camera


Assessing learner analytics

Published on Sunday, October 30th, 2011

Learner analytics use the experiences of a community [network?] of learners to help individual learners in that community to identify more effectively learning content from a potentially overwhelming set of choices.

Analytics and recommendations have generative power. The object recommended may not yet exist – it may be something that the learner must construct, or that is constructed from the recommendation.

Analytics can be assessed from numerous perspectives, including: accuracy, adaptivity, behaviour, confidence, coverage, diversity, drop-out rate, effectiveness of learning, efficiency of learning, learner improvement, novelty, precision (comparing resources selected by the user with those selected by the algorithm), prediction accuracy, privacy, reaction of learners, recall (ratio of relevant results to those presented by the algorithm), risk, robustness, satisfaction, scalability, serendipity, trust, user preference, utility.
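Precision and recall, two of the measures listed above, are the standard information-retrieval pair. A minimal sketch of how they would be computed for a learning recommender (the resource names in the example are made up):

```python
def precision_recall(recommended, selected):
    """Precision and recall of a recommender's output against the
    resources the learner actually selected.

    precision = relevant recommendations / all recommendations
    recall    = relevant recommendations / all relevant resources
    """
    rec, sel = set(recommended), set(selected)
    hits = len(rec & sel)
    precision = hits / len(rec) if rec else 0.0
    recall = hits / len(sel) if sel else 0.0
    return precision, recall
```

If the algorithm recommends four resources and the learner independently picked three, two of which overlap, precision is 0.5 and recall is two-thirds – a compact way to see the trade-off between recommending narrowly and recommending broadly.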

(MUPPLE seminar – Hendrik Drachsler)


Personal environments for learning

Published on Sunday, October 30th, 2011

Philippe distinguishes between a personal information environment and a personal learning environment. [I know that in this case a personal learning environment isn’t just everything around me when I’m learning, but is a personalised form of a VLE. In that case, what is a personal information environment? Is it all the sources from which I gain information? In which case it seems to me the same as my learning network.]

Personal information environment = learning network?

Twitter supports a read/write loop. It shows us what we have done and what we can do next. The function of a re-tweet is to spread information to another community. When we retweet, are we spreading the information or are we aiming at reducing the information gap in our own community? [I’m not sure what an information gap is. Also, I think this view assumes a particular type of Twitter user, who has selected and weeded both the people they follow and their followers. Other Twitter users have different models – they follow everyone who follows them, or they try to collect as many followers as possible. One function of a retweet is to spread the information, another is to establish yourself as a good source of information, another is to open up the possibility of new ties between your readers and the writer you are retweeting.]

(MUPPLE seminar – Philippe Dessus, Grenoble)


Augmented history

Published on Sunday, October 30th, 2011

The Civil War app changes in real time and plays out over four and a half years, producing a daily casualty count. This has been criticised as being too immersive – uncomfortable for many people.

The Iraq War Memorial app superimposes a war memorial on a real scene

http://gamesalfresco.com/2011/02/21/augmented-reality-u-s-iraq-war-memorial/

The war memorial app brings a virtual representation into the physical world. Street Museum brings the past of the physical world into the present by superimposing pictures of London’s past over London’s present. For example, it shows pictures of London during the Blitz.

Suchtweetsorrow.com presents a modern retelling and reworking of the story of Romeo and Juliet

http://www.flickr.com/photos/garyhayes/5778206030/ looks at the building blocks of experiential media – including the physical, mental, social, emotional and spiritual experiential building blocks. The spiritual level involves the experience prompting those involved to change their belief system.

(Notes from Creating Second Lives in Bangor 2011)


Learning in context

Published on Sunday, October 30th, 2011

Learning occurs in contexts – it also creates contexts.

Context is not fixed – it is an emergent property of interaction.

The challenge is to go beyond modeling human activity in context, in order to augment it.

(Take-away from Liz FitzGerald technology coffee morning, 19 October)