Institute of Educational Technology
Just published, Learning Analytics for Open and Distance Education, an edition of CEMCA EdTech Notes. This is a topical start-up guide series on emerging topics in the field of educational media. CEMCA, based in New Delhi, is the Commonwealth Educational Media Centre for Asia. Its mission is to assist governments and institutions to expand the scale, efficiency and quality of learning by using multiple media in open, distance and technology-enhanced learning.
Ferguson, R. (2013). Learning Analytics for Open and Distance Education. In S. Mishra (Ed.), CEMCA EdTech Notes. New Delhi, India: Commonwealth Educational Media Centre for Asia (CEMCA).

Report introduction

Learning analytics make use of large datasets in order to improve learning and the environments in which it takes place. Students and educators can use these analytics to review work that has been done in the past, to support current study and to access recommendations for future activities. In the context of open and distance education, analytics are able to draw on information about online learning activity in order to support teachers and to help guide learners. Many of the major learning management systems (LMSs) used to support education worldwide currently have basic-level learning analytics built into them, and new tools are currently under development that will expand the use of analytics within open and distance education.
Last week I went to the 6th International Conference in Critical Theory, based at The John Felice Rome Center of Loyola University Chicago in Rome. I got some useful stuff out of my presentation, both in terms of some headspace to work on the essence of what I wanted to say in some of my PhD work and in that there were some useful comments to come out of it too. I’m newly confident that there’s a reasonable journal article in there somewhere.
It’s been a while since I spent a full three days listening to philosophy papers. No doubt I’m a bit rusty in terms of my ability to listen, but I repeatedly found myself thinking that reading out from a prepared manuscript is rarely the most stimulating or pedagogically effective way to present material.
The idea that we should aspire to be innovative in how we present information is kind of a given in the environment I work in at present. Experiments in form have value in themselves. But philosophers tend to stick to a long-established way of doing things. You write a paper in advance, print it out, and then read it out loud for anywhere between 50% and 99% of the time available for the session, with any leftover time devoted to discussion.
Not all philosophers do this, and for those that do, the benefit is that you say exactly what you want to say, no matter how vexed or convoluted. When you’re trying to explain something complicated, it’s easy to get it wrong, especially when you’re presenting to a room full of people who are very keen to point out any mistakes. For many presenters who are shy, a script to hide behind can be a comfort.
However, there are a few things that have come to irk me about this way of doing things. Firstly, there aren’t many concessions being made to the audience when you are effectively asking them to digest something like a journal article or book chapter in one go, often without a paper copy of your own to follow. It can be hard to keep a question in mind if you want to keep up with the presentation. Perhaps this is exacerbated when you’re trying to listen in a language that is not your own; I was chairing at a conference recently and one audience member complained loudly to this effect. A whole day of passively listening to people speak is fairly draining no matter how interesting the presentation, and it’s hard to think that people can sustain this for a number of days.
You have a great deal of collective intelligence in the room at seminars like these, but it’s hard to see how reading is a good use of that time. At the conference in Rome there were academics and graduate students from around the world. Lots of resources have gone into putting these people in the same room. It’s a chance to have a really good discussion – or at least it would be if everyone had the materials in advance.
This got me thinking along the lines of the ‘flipped’ classroom, where you do the information delivery (lecture, video, etc) outside of the class and keep the precious (expensive) contact time for discussion and activities. Students can digest the material over time, through multiple viewings if need be. If we were to do the same thing with conference sessions you could have all kinds of new formats, or work towards producing something tangible. (It’s quite ironic that the complaint about technology creating barriers to human interaction is used to defend reading your paper at an audience.) It also relies on people being organised enough to produce materials in advance, but there’s no reason why it need be compulsory.
I’m coming out in favour of the flipped conference. I understand that a similar call has been made by Alan Levine and Audrey Watters. There are issues, though, especially to do with recording unfinished or progressing work. I can’t see many people in the humanities going for that.
They’re a conservative (with a small ‘c’) bunch, really, philosophers. I imagine most of the humanities are the same, however: they like the old ways of doing things and that’s partly how they ended up where they are. It’s quite telling that at an educational technology conference everybody is sitting on their devices, tweeting, checking things, looking things up and so on. At a philosophy conference very few do this. When I went on Twitter there was only one other person there and we were both looking for some sort of hashtag or conversation to follow. There’s a sense of defence of a sanctified space among these communities, but I wonder how much of that is about the most effective use of that space.
OK… things aren’t that bad! But I was sad to leave Houston after a great week with the Connexions conference team and attendees. Thanks to everyone for making me feel so welcome and for taking time out to chat. I’ve been back in the UK for over a week now, so – without procrastinating further – here’s the final blog post.
As part of my schedule, and with the aim of building up a “snapshot” of different stakeholder perspectives on OER, I interviewed a number of people who were participating in the Connexions conference. As befits a project about “openness” we are aiming to release as much of this footage as we can “in the open”, e.g. under open licenses. I’ll be working on the footage over the coming weeks, so stay tuned for more on this. In the meantime, extra special thanks to all those who participated in interviews with me: Richard Baraniuk (Rice/Connexions), David Harris (Open Stax College), Sara Frank Bristow and Pete Forsyth from Communicate OER, Sidney Burrus (Rice/Connexions), Provost George McLendon (Rice) and Dr Mark Morvant (Oklahoma). It was a privilege to speak with you all.
The rest of the conference…
The final conference session I attended, prior to Richard Baraniuk’s closing remarks, was “Impact: Faculty and Student Perspectives” with Mark Morvant (Oklahoma), Heather Wylie (Shasta College), and Erik Christensen (College of South Florida). It was an exciting session with speakers discussing the impact of OpenStax on learning and teaching practice. Great footage of students talking about OpenStax too!
You can watch Erik’s Opentextbook testimonial here and more on this session is available via Twitter (see @BeckPitt).
Sprinting toward the Finish Line…
Content and coding sprints took place on the Wednesday and Thursday. I joined Pete Forsyth and Sara Frank Bristow of Communicate OER to find out more about how to contribute to Wikipedia. This was followed by working with Daniel and others to create the initial OpenStax College Wikipedia entry. There was also much debate and discussion around the Open Educational Resources article: whilst a lot of work has taken place recently to improve this entry, more input is needed, particularly on the OER Policy section.
The self-reflective bit
As part of the project, we’ve been asked to reflect on our research trips, perhaps from a more personal perspective. Here’s what I noted (in no particular order) over the duration of the visit. Definitely no deep thoughts or surprises here (sorry!):
1) Going well-prepared for different scenarios and activities is vital. You can’t just “pop back” and grab something vital that you forgot when you’re 5,000 miles from home. It’s a cliché, but “best laid plans…” means that you need to be flexible and able to adjust to different circumstances seamlessly (or at least give the impression that this is the case!) Be ready to freestyle, change your plans and beware of the “one-size-fits-all” approach/activity. For example, what worked well at one conference might not work so well elsewhere.
2) Interviewing conditions vary when you’re on the road: you will need to accept that the ideal set-up will remain just that… Speaking with people in a variety of locations (e.g. vacant classrooms, interviewees’ offices etc.) means that, for example, the position and height of the camera will be determined by the furniture set-up and what you can feasibly move around, backgrounds are inconsistent and lighting varies etc. At certain events it may be appropriate and feasible to organise a specific space to carry out interviews in. Otherwise, be prepared to improvise! Oh, and (at the risk of stating the obvious) remember to pack an audio recorder as a back-up to your video recordings.
3) If something goes awry, make a joke out of it. Something happen that shouldn’t have? Glaringly obvious mistake that somehow missed the numerous proof readings you did? Touting UK-centric lists because you forgot to make them relevant for a US audience? Don’t sweat it. Keep smiling and don’t worry, it’s usually not a big deal. Plus the more different perspectives/feedback you can get on something the better.
4) Be prepared to jump in, seize every opportunity that comes your way and speak to as many people as you can (in the lunch queue? somebody on their own? person sitting next to you? Go forth and converse!) … and fight that jetlag/lack of sleep! Or, if things are getting too full on, just tweak your original plans. For example, my plan to live-blog earlier in the week (which I never quite managed, as all my blog posts were slightly later than the events they were referring to; a bit like this rather overdue post, ahem!) was replaced by tweeting midway through on Monday. A good experience (as I’d never tried to live-blog before), but juggling drafting a blog post with taking photos and tweeting for extended periods is possibly a bit overkill and I was certainly feeling it after a few hours.
5) Power-hungry appliances or lots of tech (e.g. laptop, mobile for tweeting and photos, and a video camera)? Or maybe you haven’t used the equipment for extended periods yet? Bring more than one travel adapter plug with you and carry at least one with you at all times.
6) Tweeting from the conference sessions has an added bonus: people know who you are through the conference Twitter feed. Although on the couple of occasions when I accidentally referred to somebody as a speaker who was no longer attending (as the programme had changed), I probably wished that people didn’t know who I was! An example of multi-tasking gone bad and over-reliance on the programme – beware!
#oerrhub #cnxcon #cnxsprint #self-reflection #wikipedia
My slides from today’s presentation…
I’ve just had notification that Perspectives on Open and Distance Learning: Open Educational Resources: Innovation, Research and Practice (for which I co-wrote a chapter) has now been published… you can download directly from here.
I just saw this quote over at Radical Cartography and thought it was really interesting to think about in relation to data visualization, which is essentially also making spatial representations of information.
Information is already an abstraction from experience, because we regard it as knowledge rather than immediate sensation. So, creating representations of information moves away from the referent and towards the ‘hyperreal’. This is compounded when we visualize data in order to inform decision making, as the ‘map that precedes the territory’.
At the same time, there is something organic and biopolitical about the growth, flourishing and decline of different representations of the world, which inevitably reflect and express surrounding power structures.

If we were able to take as the finest allegory of simulation the Borges tale where the cartographers of the Empire draw up a map so detailed that it ends up exactly covering the territory (but where the decline of the Empire sees this map become frayed and finally ruined, a few shreds still discernible in the deserts — the metaphysical beauty of this ruined abstraction, bearing witness to an Imperial pride and rotting like a carcass, returning to the substance of the soil, rather as an aging double ends up being confused with the real thing) — then this fable has come full circle for us, and now has nothing but the discrete charm of second-order simulacra. Abstraction today is no longer that of the map, the double, the mirror or the concept. Simulation is no longer that of a territory, a referential being or substance. It is the generation of models of a real without origin or reality: a hyperreal. The territory no longer precedes the map, nor survives it. Henceforth, it is the map that precedes the territory — PRECESSION OF SIMULACRA — it is the map that engenders the territory and if we were to revive the fable today, it would be the territory whose shreds are slowly rotting across the map. It is the real, and not the map, whose vestiges subsist here and there, in the deserts which are no longer those of the Empire but our own: The desert of the real itself.
Jean Baudrillard (1981) “The Precession of Simulacra” in Simulacra and Simulation.
There’s some quite interesting stuff over there, in fact.
Tuesday’s Connexions conference (see here for the full programme) and the content and coding sprint days which followed were a fantastic experience: there were some great talks (both the keynote, Susan Badger, and the session on “Impact: Faculty and Student Perspectives” were particular highlights) with much discussion about the “next steps” needed to take OER, and in particular open textbooks, to the next level. A big shout out to anybody I connected with over the duration of the conference – thank you for taking the time out to chat with me, and find out more about what we are doing on the OER Research Hub project. The content and coding days were equally productive: my time was spent interviewing, finding out more about Wikipedia articles, and working with others on them. More on both these activities below, or in a future post.
My tweets from the conference are available here: @BeckPitt. However, I’ve also put together some supplemental thoughts/scribblings from the sessions. Whilst by no means complete or representative of entire talks or sessions, these notes complement/overlap with my tweets. I hope they’ll be useful…
Publisher Engagement with OER: “A pat on the head for Open…”
Susan Badger gave the conference keynote: “OER: If Not Now, When?” Susan described how she moved from thinking that innovation in OER would originate from corporations to eventually leaving the publishing industry (Pearson) disillusioned. For Susan the “crim[inal]” strategy of publishers (whose motive is profit) is one which “pacif[ies] the OER movement” by perpetuating the idea that OER is a “supplemental resource” in learning and teaching. For Susan, publisher involvement in OER/open textbooks is a “trojan horse” due to commercial interest. Susan was clear that remixing content is critical, that “complete solutions” are missing, and that the “visibility” of OER remains a key issue for the movement. Content is needed which is “ready-to-go with tweaking.”
Susan’s talk highlighted many of the 2012 Babson Report (funded by Hewlett and Pearson) findings, which look at “barriers to widespread adoption of OER material in Higher Education.” You can read an overview of the findings here. Key areas to address include raising “awareness” of OER, enabling it to be found more easily (“search-ability”) and the “perception of quality” of OER. Susan also stressed that there is “confusion over what OER is” which, as faculty have “less help” than previously, is being exploited by publishers: “Big publishers live on faculty apathy.”
For Barbara the way forward is to understand and respond to student needs (e.g. they “like print,” used textbooks are “convenient and collateral” and maintaining “choices” is central), ensure that educators are “empowered to do remixing” and look for “big piece solutions” (e.g. whole courses which are easily remixable). She also suggested that Student PIRGs should, as Pearson do with their own advocates, look to “recruit an army” to help raise awareness of OER and open textbooks. From Barbara’s perspective “adaptive learning is the future of the industry” and OER must take appropriate action to ensure that it has a “big seat at the table.”
“Transforming the OER User Experience”
Some notes on this panel session, which unveiled both a new YouTube video for OpenStax (OS) and more information on the Connexions platform rework.
Richard Baraniuk (Rice/Connexions) opened this session with some great news: OpenStax College Physics now has over 3% of the physics book market! Since its launch in June 2012, OpenStax has saved students $2.3 million to date (as David Harris would later note, this launch was “outside the adoption” timeframe for educators, a fact which makes the impact of OS all the more impressive). Moreover, OpenStax has 1.2 million unique users (from 200 countries) and 160 schools have formally adopted the textbooks. You can view the new video on OpenStax, which was launched at the conference, here.
Further detail on the new Connexions platform was also unveiled during this session: as Daniel Williamson (Rice/Connexions) described it the new platform “unlock[s] the promise of remix” with “semantic content” extended and a new Editor tool.
“Opening Up the MOOC: Different Perspectives”
The panel session with Andrew Ng (Coursera) and Don Johnson (Rice), described as a “seasoned Coursera instructor” by Richard Baraniuk, was illuminating.
Coursera have 300 courses and over 3.3 million students. Both Andrew and Don reflected on the ways in which “high quality content” was produced through the Coursera Wiki system, student feedback and collaboration on lecture notes. Later Andrew would note that 40% of Coursera users are from the “developing world” in comparison to Khan Academy, who have 80% of their users based in the US. Coursera are currently in discussion with NGOs regarding use of their materials in countries where learners have limited or no internet access.
Andrew stressed that it was the personal relationships formed with educators and fellow students which were of “real value” and unique to University study. As he would later describe it: there is “something almost sacred about the student/professor relation”. From this perspective, and in order to maximise student/faculty contact time, he proposed that MOOCs could be an effective enabler: giving people the space to “have those amazing conversations.” Proposing a “flipped learning” approach, Andrew noted that using a MOOC to provide the lecture content etc. outside of the classroom environment “preserves valuable classroom time.”
Both Don and Andrew, as educators, stated that they were not interested in the potential money to be made from their course materials. For Andrew “do[ing] what’s best for the student is Coursera’s number one rule.” Andrew also made a number of remarks concerning certification; including describing technology which can verify a student’s identity through one’s unique “tapping rhythm” (e.g. keystrokes).
I have a copy of the following book available for anyone who would like to review it for JiME. Just let me know if you are interested…
Jenkins, H., Kelley, W., Clinton, K., McWilliams, J., Pitts-Wiley, R. and Reilly, E. (eds.) (2013). Reading in a Participatory Culture: Remixing Moby Dick in the English Classroom. Teachers College Press: New York.
Despite appearances, I argue that oEmbed is alive and kicking.
Brian Mearns has put together a document describing some concerns around the oEmbed standard, partly around security. It's generally very useful. The discussion on the oEmbed Google Group goes on to whether oEmbed is dead, and what the alternatives are.
I welcome the discussion, particularly the notes on alternatives. However, I'll add my voice to that of Sean Creeley, CEO of Embed.ly, and say that yes the specification has languished (though it is on Github - this should help keep it alive). But oEmbed isn't dead!
I concur that the benefits of oEmbed include:
I also second Sean's wish list.
I should also probably apologize and pick up on Ross' point that "Though there are some large implementations, new ones aren't really appearing…". So, I'm standing up to be counted…

OU Embed
We've been developing an oEmbed service here at The Open University for several years. We are using it as the vehicle to ease deployment of our OU Media Player, which is accessible (to those with disabilities), easy to embed (largely because of oEmbed), and OU branded.
OU Embed is also a vehicle to:
OU Embed is written in PHP, based on CodeIgniter, and I hope to release (most of) it open-source in due course.

Conclusion
Having thought some more, I suggest that precisely because the oEmbed specification is simple, already widely deployed, works "under the hood", and lets people do what they want with it (it just works), there is perhaps a perception among stakeholders that we don't need to refresh the specification and deal with issues.
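To show just how small the moving parts are, here's a rough sketch of the consumer side of the spec in Python: build a GET request URL against a provider's endpoint, then pull the embeddable markup out of the JSON response. The endpoint and URLs below are made up for illustration, and this is not OU Embed's actual code (which is PHP).

```python
import json
from urllib.parse import urlencode

def oembed_request_url(endpoint, content_url, maxwidth=None, fmt="json"):
    """Build a consumer's oEmbed request URL, per the spec:
    GET {endpoint}?url={content_url}&format=json[&maxwidth=...]"""
    params = {"url": content_url, "format": fmt}
    if maxwidth:
        params["maxwidth"] = maxwidth
    return endpoint + "?" + urlencode(params)

def extract_embed_html(response_body):
    """Pull embeddable markup from a 'video' or 'rich' response.
    Every oEmbed response carries 'type' and 'version'; the 'html'
    field is required for the video and rich types (a 'photo'
    response carries 'url' instead)."""
    data = json.loads(response_body)
    if data.get("type") in ("video", "rich"):
        return data["html"]
    raise ValueError("no embeddable html for type %r" % data.get("type"))
```

That really is most of what a consumer needs, which is a large part of why the spec "just works" despite languishing.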
As the Google Group discussion reveals though, we need to both nurture oEmbed and ensure that it is seen to be alive.

Useful links:
Greetings from Houston! The First Personalized Learning Workshop at Rice University is underway… I’ve been tweeting from the conference (check out @BeckPitt), but also wrote a blog post to cover the first of the morning sessions. So (better late than never) here goes…
Remarking on the “personalized Houston weather” as fitting for the conference, Vice Provost Caroline Levander welcomed us to Rice with an overview of activities across campus: from STEMscopes, Connexions and OpenStax to Rice “experimenting with” Coursera and EdX… Richard Baraniuk (Rice) followed with an overview of the workshop theme “Scaling Up Success.” Highlighting the problem of “one-size-fits-all learning,” Richard noted how the “globally exploding phenomenon” of Ed Tech (with $1 billion worth of investment in the last year) was perceived as a solution to this issue. However, despite this, “What is different this time?” According to Richard, the shift away from “broadcast technologies” has heralded the opportunity to get hold of data about how students use course materials online, analyse it and “close the learning feedback loop”. This is the difference between the past and present. Richard closed his introduction by remarking on issues which need to be addressed as part of this process: 1) Cost: currently prohibitive (citing Inquire’s attempt to personalise learning); 2) Adoption: training for educators and students (“people want learning to be quick and easy” whilst there is often “disappointment”); 3) What do we mean by “optimising learning”?; 4) “Privacy issues” (e.g. FERPA), which need to be addressed in order for research to be carried out effectively. Handing over to Reid Whitaker of STEMscopes (who introduced us to the K12 science program started in 2010, which is used by 1.2 million students and ca. 40K schools) to close this intro, it was notable that in a poll of the audience composition, 2/3 of attendees are linked to K12 learning.
The “Firehose of Data”
David Kuntz has been at Knewton for 4 out of its 5 years. He is interested in how we can use data on student learning to get “big data for what and how you learn…” Knewton is built on Amazon Web Services (AWS). David described how Amazon created an infrastructure which enables others to easily build and create educational apps. There are 60,000 courses with “personalisation services” across a range of subjects, with more forthcoming. Wiley, Pearson and others are using the Knewton API (application programming interface), which works across programming languages. The Knewton API aims to identify the next steps for students so that they can achieve, and to understand individual students’ needs so that they have the eureka moment (“continuously personalised, differentiated learning experiences for each student”).
David noted that there are “management problems” potentially for educators, in addition to state requirements, that need to be taken into account. “Targeted recommendations” were noted as one solution (e.g. focusing in on particular solutions that addressed common student concerns). David described a scenario of engagement with one billion students (and the potential for “a fire-hose of data”): a question of how to scale and use this data effectively… and give student what they need immediately.
“Psychometrics”: content/students. There are “assumptions which are violated” in the “test environment”: “students learn and forget” things, and usually this isn’t taken into account. How do you sort information coming from different sources, “standardise” it, then understand it? The system and solution need to be durable. Four stages: “investigate, innovate, iterate, automate”. David described how the final stage is key. A general and specific picture of student learning: what does it mean for students to “succeed”? The relation of success in one area to another, etc. David ended his talk by asking educators: What are the core questions which YOU would like answering? Today there is a new opportunity to “better our understanding of what really goes on when students are learning…” Answering these questions, David concluded, is “critical to scaling up student success”.
MOOCs as a “Superior Research Venue”
The second audience poll of the day was instigated by Dave Prichard (MIT) who asked whether everyone knew what a MOOC was. 1 person responded in the negative (not clear if they were joking or not though!) According to Dave, MOOCs provide a “Superior Research Venue.”
How can we answer the question: “What does learning mean?” Dave compared a ”traditional course” with “6.002x online“. Observations on the latter included: the discussion forum becoming more popular as course went on. 60% of time on course spent by “certificate earners”. Peaks at weekend: “more time/assignments due” issue. Discussions “used to help with homework”. “Time/Activity” relation of particular note: whilst videos have the most time spent on them there is more use of discussion forums (although less time spent using them by students). Dave noted research (Sadler and Tye?) on textbook use and success: less student success the more one uses a textbook. Research still needed on lecture videos and success relation. Dave walked us through a slide showing that different use of course components depends on what student aim is (e.g. different types of exam or homework). Textbooks get more use for exams rather than homework. Dave noted the “irony” of “utiliz[ing] large size of MOOC to individualise”.
“We can’t cut open the mind of a student…” thus we “have to infer” and work out how much people are using from different resources. “Hidden Markov Model of Capability” Not using it for “knowledge tracing” but about “learning value for each of these resources”. Dave presented two slides of note RE: MOOCs. One “Sprouting the Seeds of Self-Destruction” and its counterpart “Why they are good for you.” Mark argued that “digitising the dinosaur” doesn’t work. Described how one semester at ASU gives out as much accreditation as the total of MOOC certificates given out to date. Lots of PR generated by MOOCs but difficult to “self-sustain” even with pay-for certification. Certain subjects, in this instance physics, benefit more from MOOCs – provides an opportunity for Flipped Learning and what Mark described as a SOOC (Small Open Online Course). Fundamental issue is value for money: “What are students getting for their $40,000?” Second most important question and focus should be research.
And finally, a bonus snapshot of the start of the second session of this morning where I managed to capture the majority of Steve’s presentation.
Steve Ritter: “The Six Million Dollar Man” (Rich Baraniuk)
Steve Ritter (Carnegie) took 3.091x: Intro to Solid State Chemistry to see how it worked in practice. A Herb Simon quotation sums up what Carnegie is trying to do: “Learning results from what the student does and thinks and only from what the student does and thinks. The teacher can advance learning only by influencing what the student does to learn.” Steve stressed the need in education to acknowledge this, arguing that “theory, model and evidence” are needed. Carnegie “tries to change system from within”. Its theory is based on ACT-R (John Anderson, Mike Byrne). Steve described how learning is quantified as “knowledge components” in relation to the “Skillometer”, which charts students’ understanding and is the basis for feedback on their progress. Steve also described how eye tracking revealed different student approaches to a math problem: Student B, in comparison to Student A, did not look at the formula provided by the instructor to solve a math puzzle. This is one example of “real life” vs “algebra class”: a formula is “another way of saying what you know”, but the benefits of being able to use a formula only become apparent later on, when trying to solve more difficult math problems. For Steve, “model tracing” is understanding different ways of solving a problem whilst “knowledge tracing” is understanding what the student knows…
Feel free to chip in with corrections/additions to the above. You can also follow the conference via the webcast.
#HCI #oerrhub #CNXCON #STEMscopes #RicePLW #flippedlearning
Just a few photos from our (me and @beckpitt) recent trip to S. Korea…
On 12 April, Bieke Schreurs presented a paper I had co-authored at the annual conference on learning analytics and knowledge, LAK13, which took place in Leuven, Belgium.
Schreurs, Bieke; Teplovs, Chris; Ferguson, Rebecca; De Laat, Maarten and Buckingham Shum, Simon (2013). Visualizing social learning ties by type and topic: rationale and concept demonstrator. In: Third Conference on Learning Analytics and Knowledge (LAK 2013), 8-12 April 2013, Leuven, Belgium.
Doug Clow liveblogged the presentation.
This paper builds on the ideas of Social Learning Analytics and focuses on the question of how people develop and maintain a ‘web’ of social relations that support their learning. We describe how we visualise learning ties in SocialLearn, an online learning space in use at the UK’s Open University. To gain more insight into the networked learning processes, we constructed a theoretical framework with the intention of identifying what counts as a learning tie by classifying the online interactions that promote the learning process. Based on this model we created a plug-in, based on the Network Awareness Tool (NAT), to visualise learning ties within SocialLearn. NAT visualises socio-material networks by identifying relationships between people who interact with the same learning topics. The tool serves different goals for different target groups. It has been shown to provoke learning-centric reflection by learners on how they use their peers for learning. Learners can also use it as a Social Learning Browser to locate others who are dealing with the same learning topics. Educators can use it to guide students in their networked learning competences and to gain insight into the ability of groups of students to learn collectively over time. For researchers, the analysis of learning ties and networks helps clarify how professionals engage in learning relationships and the value of this engagement. This work informs the field of learning analytics by identifying ways of making networked learning activities more explicit and therefore more accessible for professionals to share and analyse.
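As a toy illustration of the shared-topic idea described in the abstract (not NAT’s actual implementation), ties between learners can be derived by projecting a log of person/topic interactions onto weighted pairs of people, where the weight counts how many topics both interacted with:

```python
from collections import defaultdict
from itertools import combinations

def learning_ties(interactions):
    """Project a person-topic bipartite graph onto person-person ties.
    `interactions` is a list of (person, topic) pairs; the result maps
    each pair of people to the number of topics they share."""
    # Group people by the topic they interacted with
    by_topic = defaultdict(set)
    for person, topic in interactions:
        by_topic[topic].add(person)
    # Every pair of people sharing a topic gains one unit of tie weight
    ties = defaultdict(int)
    for people in by_topic.values():
        for a, b in combinations(sorted(people), 2):
            ties[(a, b)] += 1
    return dict(ties)
```

Feeding the resulting weighted edges into any graph-drawing tool gives a simple version of the kind of visualisation the paper discusses.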
Photo by gr0uch0 on Flickr.
On 11 April, I presented a full paper at the learning analytics and knowledge conference, LAK13, in Leuven, Belgium.
The paper, ‘An Evaluation of Learning Analytics To Identify Exploratory Dialogue in Online Discussions’, was co-authored by Zhongyu Wei of the Chinese University of Hong Kong, Yulan He, now at Aston University, and Simon Buckingham Shum from The Open University.
Social learning analytics are concerned with the process of knowledge construction as learners build knowledge together in their social and cultural environments. One of the most important tools employed during this process is language. In this paper we take exploratory dialogue, a joint form of co-reasoning, to be an external indicator that learning is taking place. Using techniques developed within the field of computational linguistics, we build on previous work using cue phrases to identify exploratory dialogue within online discussion. Automatic detection of this type of dialogue is framed as a binary classification task that labels each contribution to an online discussion as exploratory or non-exploratory. We describe the development of a self-training framework that employs discourse features and topical features for classification by integrating both cue-phrase matching and k-nearest neighbour classification. Experiments with a corpus constructed from the archive of a two-day online conference show that our proposed framework outperforms other approaches. A classifier developed using the self-training framework is able to make useful distinctions between the learning dialogue taking place at different times within an online conference as well as between the contributions of individual participants.
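The cue-phrase matching step at the heart of this approach can be illustrated in a few lines. The cue phrases below are invented stand-ins (the paper's actual cue-phrase list is not reproduced here), and this sketch omits the k-nearest-neighbour and self-training stages that the full framework adds on top:

```python
# Hypothetical cue phrases signalling exploratory (co-reasoning) dialogue.
# The real classifier uses an empirically derived list; these are examples.
EXPLORATORY_CUES = ["because", "i think", "what if", "do you agree", "for example"]

def label_post(text, cues=EXPLORATORY_CUES):
    """Binary classification by cue-phrase matching: a contribution is
    labelled 'exploratory' if it contains any cue phrase, otherwise
    'non-exploratory'."""
    lowered = text.lower()
    return "exploratory" if any(c in lowered for c in cues) else "non-exploratory"

print(label_post("I think that works because the sample is larger."))  # exploratory
print(label_post("See you at the session tomorrow."))                  # non-exploratory
```

In the self-training framework described in the paper, high-confidence matches like these seed a labelled set, which a second classifier then uses to label the remaining contributions iteratively.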
Doug Clow liveblogged the presentation.
Photo from gr0uch0’s excellent LAK13 conference set.
Together with Davide Taibi, Ágnes Sándor, Duygu Simsek, Simon Buckingham Shum and Anna Deliddo, I entered the LAK Data Challenge 2013, associated with the LAK13 conference.
The challenge was phrased as “What do analytics on learning analytics tell us? How can we make sense of this emerging field’s historical roots, current state, and future trends, based on how its members report and debate their research?”
Our paper focused on ‘Visualizing the LAK/EDM Literature Using Combined Concept and Rhetorical Sentence Extraction’.
Davide produced a video explaining our paper.
All entries are available on the challenge website, including the winner, ‘Linked Data based applications for Learning Analytics Research: faceted searches, enriched contexts, graph browsing and dynamic graphic visualisation of data’.
On 8 April I co-chaired the 1st International Workshop on Discourse-centric Learning Analytics, which took place alongside the third Learning Analytics and Knowledge conference (LAK13) in Leuven, Belgium.
The workshop began with a keynote by David Williamson Shaffer on ‘How Research into Epistemics Might Inform DCLA’.
This was followed by six papers – one of which I had co-authored.
The workshop concluded with a discussion session.
The event was liveblogged by Doug Clow.
I’ve been meaning to spend a bit of time trying to better understand the open education movement of the 1970s and how it relates to contemporary developments in academia. A useful summary of some key texts is over at infed.org but I’ve copied the bibliographic details here just in case it goes down or I can’t find it again. I’m particularly interested in getting my hands on the Nyberg (for obvious reasons).
Easthope, G. (1975) Community, Hierarchy and Open Education, London: Routledge and Kegan Paul.
Nyberg, D. (ed.) (1975) The Philosophy of Open Education, London: Routledge and Kegan Paul.
Puckrose, H. (1975) Open School, Open Society, London: Evans.
Sharp, J. (1973) Open School. The experience of 1964-70 at Wyndham School, Egremont, Cumberland, London: Dent.
I’ve just filed my copy for a review of Martin Weller’s book, The Digital Scholar: How Technology is Changing Academic Practice (which, incidentally, you can buy online, but if I were you I would just grab the free version, because there’s less chance of that getting wet and ultimately crispy like my copy did). Hopefully it will be forthcoming in JiME fairly soon.
It’s a bit of a strange experience to review someone’s work when you work for them – normally this happens behind a veneer of relative anonymity – but I hope I’ve managed to find the golden mean between obsequiousness and being critical just for the sake of it…
Anyway, the point of this post is to capture something that I was thinking about a long time ago and in the course of writing the review I was reminded of it. It goes back to the following passage near the start of Martin’s book:
A simple definition of digital scholarship should probably be resisted, and below it is suggested that it is best interpreted as a shorthand term. As Wittgenstein argued with the definition of ‘game’ such tight definitions can end up excluding elements that should definitely be included or including ones that seem incongruous. A digital scholar need not be a recognised academic, and equally does not include anyone who posts something online. For now, a definition of someone who employs digital, networked and open approaches to demonstrate specialism in a field is probably sufficient to progress.
Weller, M. (2011:4)
A couple of years ago I was a researcher on the Digital Scholarship project and read Martin’s book in manuscript form. I recall thinking at the time that the whole idea of digital scholarship was a bit sketchy. After all, who isn’t ‘digital’ these days? The whole thing seemed to me to need much more precise definition (which Martin always resisted for reasons I’ve never been entirely clear on, but which seem to have to do with something traumatic in his past around learning objects). For what it’s worth, I think I understand his perspective a bit better now.
Anyway, re-reading this section got me thinking again and I had another look at the Wittgenstein. The discussion of ‘games’ comes from the later part of Wittgenstein’s work; Wittgenstein is unusual among philosophers in that he produced two distinct and original philosophies during his life, both of which are primarily concerned with our relation to language.
The so-called ‘early’ Wittgenstein, he of the forbidding Tractatus Logico-Philosophicus, argued that most philosophical confusion results from failing to respect the sense-making limits of language. Only certain kinds of propositional utterances – descriptions of states of affairs (facts) or relations of ideas (definitions) – make any sense, and the rest is just confusion. I’m oversimplifying. But the general idea is expressed in the seven ‘basic’ propositions of the Tractatus.
There are of course problems with this, but the idea that philosophy is an activity which is fundamentally therapeutic (or even quietist) is one that has stuck around. In his later (posthumously published) work, Wittgenstein’s attempts to make sense of linguistic meaning moved away from logic in the direction of ordinary language. I won’t go into the reasons for his development in this direction here, but trying to find absolute definitions is replaced by looking at how language is used in practical social contexts (like working on a building site, acting in a play, cracking a joke or playing a game), since “the speaking of language is part of an activity, or a form of life” (Wittgenstein, 1953:§23). Wittgenstein termed the relationship between utterances and contexts ‘language games’ to reflect the idea that the ‘rules’ language follows are less like axioms of logic and more to do with making sense in a particular situation.
If we want to resist giving final definitions of (especially new) concepts, we shouldn’t talk so much about ‘games’ but instead in terms of family resemblances between uses of language. Games are just the example Wittgenstein uses to illustrate the point about family resemblances, since there are lots of things we call ‘games’ but there are often lots of differences between them (competitiveness, equipment, purpose, etc.). The thing that binds them all together is our use of the same word to describe them: “what is common to all these activities and what makes them into language or parts of language” (Wittgenstein, 1953:§65).
The implications of this are more significant for philosophy than they might at first appear.
But to my mind the idea is not that we should give up on the idea of tight or final definitions. Rather, we just need to be aware of the fact that ‘defining’ is also a language game and one that is often of great use (such as in taxonomy).
When it comes to a neologism like ‘digital scholarship’ we aren’t necessarily looking at a referent which already exists in common usage. Wittgenstein’s point about language use must be taken in conjunction with the idea of the impossibility of private language. Language doesn’t enable forms of life, but forms of life enable language. It isn’t through the definition of ‘game’ that Wittgenstein shows this, but through the idea of a ‘family resemblance’ between different practical uses of the same word.
It’s understandable that we should strive not to get bogged down in trying to define things but we should also recognise that in itself this can be an incredibly valuable activity, particularly when sketching out new developments in existing fields, or indeed when identifying new domains of study.
And that’s the point I struggled to make even this concisely two years ago. But that’s philosophy for ya. Or maybe just me.
Weller, M. (2011). The Digital Scholar: How Technology is Changing Academic Practice. Bloomsbury Academic.
Wittgenstein, L. (1922). Tractatus Logico-Philosophicus. C.K. Ogden (trans.), London: Routledge & Kegan Paul.
Wittgenstein, L. (1953). Philosophical Investigations. G.E.M. Anscombe and R. Rhees (eds.), G.E.M. Anscombe (trans.), Oxford: Blackwell.
An interesting post by Young Hahn on map hacking has given me some food for thought with respect to the redesign of the OER Evidence Hub. This article led me to another, Take Control of Your Maps by Paul Smith.
If I were a better programmer I could probably put some of the ideas to work right away, but as I am not I’ll have to be content with trying to draw some general principles out instead:
It’s worth considering making use of the tools provided by OGR to translate and filter data. And here are some mapping libraries to check out:
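As a rough illustration of the translate-and-filter step that OGR’s ogr2ogr command automates for real spatial formats, here is a pure-Python sketch of attribute filtering on a GeoJSON FeatureCollection. The data and function names are my own inventions for illustration, not part of OGR:

```python
import json

# A tiny hand-made GeoJSON FeatureCollection, standing in for a real dataset.
data = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature",
         "properties": {"name": "A", "population": 500},
         "geometry": {"type": "Point", "coordinates": [0.0, 51.5]}},
        {"type": "Feature",
         "properties": {"name": "B", "population": 5000},
         "geometry": {"type": "Point", "coordinates": [-1.9, 52.5]}},
    ],
}

def filter_features(collection, predicate):
    """Return a new FeatureCollection keeping only the features whose
    properties satisfy the predicate -- the kind of attribute filter
    that ogr2ogr's -where option applies to real spatial data."""
    return {
        "type": "FeatureCollection",
        "features": [f for f in collection["features"]
                     if predicate(f["properties"])],
    }

big = filter_features(data, lambda p: p["population"] > 1000)
print(json.dumps([f["properties"]["name"] for f in big["features"]]))  # → ["B"]
```

For anything beyond toy data you would reach for OGR itself (or its Python bindings) rather than hand-rolling this, since it also handles format translation and reprojection.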