Do I win the “eeeuuuwww” blog post award? There’s a concept in web design about stickiness, i.e. content that keeps people returning or spending longer. So in web design this might be having up-to-date content, nice design, etc. In light of my previous post about OER (read the comments by the way, some great stuff from Pat Lockley, Jim Groom, Lorna Campbell and Alan Levine in there) I’ve been thinking about why we like blogs and are a bit meh about OER sometimes (some OER is great of course, and many blogs are woeful, but you get my drift).
Stickiness, for want of a better, less punchable phrase, may be the answer. Blogs are generally more personal, social content. People are sticky – we like reading certain people’s take on a subject precisely because it is human. I don’t want the BBC interpretation of a new technology, I want to know what Audrey Watters thinks about it. Two things about stickiness: it’s a continuum, not a binary; you don’t always want or need something to be sticky.
On the first point, people are good at being sticky (I’m already annoying myself with the term, so I can imagine how you feel). Indeed in a world where our jobs may be taken by robots, stickiness may be one of our defining attributes. It’s nebulous, shifting, personal and rooted in thousands of years of culture and millions of years of evolution. But a newspaper, project, organisation or website can be sticky (because it is made up of good contributors). Some things are more sticky than others, and to different people, so it’s a hard quality to pin down and provide a reproducible template for.
On the second point, you need to determine if stickiness is an attribute that is important. For example, if I’m creating an open textbook, it needs to be great for that course, but it doesn’t really need to be something that people want to come back repeatedly. This may get at the distinction Jim was making in the comments about why he likes people and not resources. So, “how much stickiness do we want?” is now a valid project question.
I work a lot in OER, and I do a lot of blogging, and I often blog about OER. But I don’t blog as OER. In this post I’m going to compare two things that are completely different – OER repositories and blogs – and so you can’t make any valid comparisons. But that’s the point of the post really, to see if there is a different way of looking at a topic.
I’ve been looking at the stats for various repositories recently, both OA publishing ones and OER ones. Thanks to David Kernohan for pointing me at JISC’s IRUS service, which provides a breakdown of publication repositories from UK universities. You need a login from a UK university to access it, so I’m not sure how public the data is. But it does provide you with a breakdown across all unis. The figures vary wildly, e.g. the number of deposits per institution ranges from just six to over 37,000. Average monthly downloads range from 0 to 174,000. But in general most institutions have a total number of deposits in the low 1000s, and monthly download figures between 5K and 20K.
If we look at the UK’s now retired nationwide OER repository, JORUM, the stats are quite strange. They vary wildly by month, e.g. 9K in Feb 2015 and 463K just a few months later in June. They list “views” and “downloads” – my guess would have been that views would always exceed downloads (people tend to look at an item to assess it rather than download it, I thought). But this shows wide variation also – sometimes views far outstrip downloads (e.g. Sept 2015: 285K vs 80K), but at other times the opposite occurs (e.g. Sept 2014: 8K vs 351K). It would be interesting if anyone has theories about this, but that’s not really the point of my post.
I’ve also seen the stats on a few institutional repositories (which I won’t name) – some are impressive with millions of hits and others really don’t get much traffic at all. I was thinking about this in relation to blog stats. This blog has reasonably high traffic, whereas my new blogs have zero visitors. Partly that is a function of having built up enough content in here that others have linked to, so it has some SEO juice. It is also a function of being caught by lots of bots, so the stats are not always reliable. Visitors (which I think is the more reliable figure) over the past year was 214K and visits (probably mainly bots) 3.3 million.
I offer these figures up not as a poorly disguised humble brag (ok, not that poorly disguised), but just because they’re the ones I have. I know plenty of other bloggers who far outstrip these. The point is, they are the type of access figures that are comparable to many big projects and which would happily be reported in impact statements. Now, as I said, I am deliberately comparing things which are not alike – a blog visit is not the same as an article download.
But the thing it set me thinking about was that the figures are in the same sort of league. And blogging is done in spare time, at little or zero cost to the institution. What if we started envisaging projects more in terms of the blog as the core element rather than the dissemination or engagement channel? When a project or an institution is tasked with building an OER repository, we all know what that looks like, and our default mode is to produce content, build a database, recruit a technical team, etc. But what if we said instead, we’re going to employ four bloggers (say), who will write engaging posts about the topics rather than produce academic content? Are those posts better accessed and used than formal OER?
I’m pretty sure someone (Jim Groom? Alan Levine?) has written on this before. And I’m not quite sure I know what I mean by it. But I think there is something in there about rethinking what we mean by OER to be content that is more socially embedded and personal. The impact stats suggest it might be a more successful route if number of eyeballs is our measure.
A couple of posts coming up about every blogger’s two favourite subjects: themselves and blogs. Since moving to Reclaim Hosting (slogan: We put the host in hosting) I’ve started creating blogs willy-nilly. Partly this is because I can, and it’s a fun thing to do on a rainy Saturday afternoon when you live on your own and have no friends. But I think it also reflects that I have a number of discrete interests now that qualify for blogs of their own.
It started when Blipfoto, where I posted my photo a day, began having financial difficulties. I didn’t like the thought of losing that three year catalogue of memories. They seem to have sorted themselves out now (and I recently stopped doing the photo a day thing anyway), but I liked creating a backup that I owned and could control.
Then last year I set myself the goal of seeing a current film every week. I decided to continue that this year, but also set up a blog to record it. I don’t exactly review the films, I go on the basis that people know the plot, but rather I use it to talk about my personal reaction to a film. It’s quite fun, but I’m well aware it’s not that great. Writing about movies is tough beyond “I liked it/I didn’t like it”.
Last week I created (still messing with the themes) a new blog for the upcoming Cardiff Devils ice hockey season. This will be even harder to write about than films I predict. It’s very difficult to write about sport without sinking into a quagmire of cliche, sentimentality and melodrama. Plus I’m not really grounded in hockey knowledge.
So why do it? I don’t really promote these other blogs (alright, this post is doing that I confess, but I don’t tweet them often or seek out traffic). I don’t particularly want anything from them – the sports and movies blogosphere is a crowded place, so you’re not going to make a dent there. It is this very difficulty with writing for these last two blogs in particular that is the point of it really. I think it improves my writing overall to stretch myself beyond the usual topic (I mean, I can write about OER until everyone starts crying). Blogging is how I get to grips with a subject. Making myself write about it, in a public forum (even if no-one beyond Jim Groom actually reads it), forces me to think about ways in which I can frame it, respond to it and analyse it, be that a game, a film or anything.
This is exactly what I did with ed tech blogging at the start. Blogging is a key aspect of how I engage with a topic and come to understand it. That is allied to twitter and other forms of social media also, but blogging is at the centre of it. Some of you will have read that piece in the Guardian about how using social media was not serious academic work. Although the writer is mainly sniffy about twitter and instagram, I imagine they lump blogging in there too. My feeling is the opposite – I can’t imagine being a serious (or otherwise) academic without blogging.
Here’s a fun thing to try if you’ve been blogging for a while (Warning: may not actually be fun). Get a random date from when you started blogging until present (eg using this random date generator), find the post nearest that date and revisit it. The date I got was 27th October 2010 (remember those crazy days?). Luckily I had a post on that very date: An unbundled publishing business proposal.
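If you’d rather script this than use an online generator, a minimal sketch looks like the following (the start date is a placeholder – swap in the date of your own first post):

```python
import datetime
import random

def random_blog_date(start: datetime.date, end: datetime.date) -> datetime.date:
    """Return a uniformly random date between start and end (inclusive)."""
    span = (end - start).days
    return start + datetime.timedelta(days=random.randint(0, span))

# Hypothetical example: a blog that started on 1 January 2006
date = random_blog_date(datetime.date(2006, 1, 1), datetime.date.today())
print(date)  # then find the post nearest this date and revisit it
```
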
In revisiting it I set myself four questions:
1) What, if anything, is still relevant?
2) What has changed?
3) Does this reveal anything more generally about my discipline?
4) What is my personal reaction to it?
Answering questions 1) and 2) first, I was proposing an academic publishing model that allowed self publishing, but with a set of services. Authors paid for peer review and copy-editing, and perhaps most importantly, the prestige of it being ‘approved’ by a publisher. But they could then own the rights and distribute freely. I would suggest this is still relevant, and we haven’t really seen a model this ‘unbundled’ take off. Publishers such as Ubiquity offer a range of services, and they publish the book under a CC license, which is pretty close to the model I was suggesting (except I removed the publishing costs and used external services). Not much has changed really, except I think we have seen a gradual development of such models, and wider acceptance. But the traditional academic publishers still dominate and not owning your own work is still the norm for academics.
In terms of what it reveals about ed tech, I think it shows that change happens slowly. There are lots of cultural issues around processes such as publishing and dissemination that are deeply embedded. The point I was trying to make was less about new publishing models and more about how we can rethink traditional academic practices by considering the core functions they provide. We publish books because we want to share knowledge, but we use publishers partly to handle the logistics and partly to give legitimacy to the work (it has passed a “is it worthy of publication?” test). Six years on I think we are probably as, if not more, conservative in our approach to publishing in academia.
In terms of my personal reaction, I was pleased it wasn’t too embarrassing (there are lots of such posts in my back catalogue). But I do think I was still a bit enamoured of the whole new shiny digital thing, and it might be a bit more nuanced if I wrote it today. I think I overlooked the value of marketing and the lock big publishers have on many channels. But generally, I found the lack of exciting new, viable publishing models emerging in academia in the six years since I wrote it kind of depressing.
Anyway, revisiting your past posts is the equivalent of those episodes in long running serials that consist of flashbacks. It’s cheap, but sort of fun.
While I was in Montevideo, at the invitation of Plan Ceibal, I was interviewed about learning analytics. This playlist of four short videos (subtitled in Spanish) deals with the potential of Big Data to improve learning, how The Open University has used learning analytics, and the work of the LACE and LAEP projects.
I talk about how analytics can be used to identify when students are dropping behind, how they can be used to identify successful routes through courses, and how they can identify types of learning design that lead to student success.
I note that the supply of learning analytics is growing, but it is not clear that the demand is growing in the same way. Researchers and developers need to engage more with educators at every stage in order to identify the problems that need to be solved and the questions that need to be answered.
I also talk about the need to align learning analytics with strategic priorities for education and training, not only at institutional level, but also at national and international level.
My videos are followed in the playlist by videos from Professor Dragan Gašević, chair of the Society for Learning Analytics Research (SoLAR).
As ed-tech social media fills up with rapid-response pieces on what Pokémon Go could mean for education, I thought it was time to refer back to work with a more solid basis. And what could be a better starting point than our 2014 book on Augmented Education?
Augmented Education explores the implications and challenges of augmented learning – learning at the frontiers of reality – and the ways in which we can understand it, structure it, develop it and employ it. It investigates what we can do now that we could not do before, and asks whether these new possibilities could fundamentally affect how people approach and benefit from learning. For example, can augmented learning create the social, affective and cognitive conditions that will allow individuals and groups of people not only to approach learning in a meaningful way, but also to engage with it more deeply?
To encourage people to read the book, I wrote a piece for the OU News and OpenLearn on Pokémon Go, and how the game aligns with the four approaches to augmented education that we identify in the book.
The book provides a detailed overview of the newest possibilities in education and shows how technological developments can be harnessed to support inclusive and collaborative knowledge building through formal and informal learning.
In order to do this, we employ a broad definition of augmented learning.
“Augmented learning uses electronic devices to extend learners’ interactions with and perception of their current environment to include and bring to life different times, spaces, characters and possibilities. It offers possibilities for the transformation of learners and their learning contexts.”
Using this definition, the book extends beyond the augmentation of teaching, learning and schools to include informal subject-based learning, learning using social media, collaborative informal learning and educating the transhuman.
I wrote a piece for the Journal of Learning for Development recently, which expanded on an idea in a blog post, called the Open Flip. The basic idea is quite simple really (I’m a simple kinda guy) – it is that under certain conditions, there is an economic argument for shifting costs from purchasing copyrighted goods to producing openly licensed ones. Open Textbooks are an obvious example. This is a bit ‘no shit Sherlock’, but I think it’s worth exploring as a model in its own right. The paper only starts to do this really.
My argument is that most of the digital economic models, theories and ideologies haven’t really transferred across to education very successfully. This is either because the ideas themselves are rather poor (hello disruption) and don’t really transfer anywhere, or because the nature of education is different from a very straightforward consumer model. Education is structured differently, and is characterised by large grant or budget spends. In these circumstances that money can be reallocated, often leading to savings overall, and openly licensed content that can be adapted and used by all. The mythical win-win.
Apart from not being very good, one of my gripes with digital economic models is that they are often over-applied, way beyond the context where they might be suitable. So I wanted to set out some conditions as to when the open flip might be applicable. My list of conditions is:
- There is large scale spending on the purchasing of resources that can be practically refocused through single channels. This does not apply to standard consumer purchases, for instance.
- The resources are largely digital in nature, or production can be cheap. The main component in the purchase price relates not to the physical aspect but to the intellectual property. For instance, the wide range in prices for academic textbooks is not related to any physical characteristics of their production, which varies only by a small degree.
- The initial production of the content is a task that can be financed. With open source software and many community-driven approaches, it has been found that money is not an effective incentive – such peer-based models are more adequately explained by Benkler’s model of commons-based peer production.
- Open licencing offers a particular benefit beyond just cost. While cost savings may be the initial driver, it is the advantages offered by openly licensed material that often sustains a movement. For example, the pedagogic advantages of adapting open textbooks.
With these in mind, the open flip model I propose could have applications beyond education – for example, GM crops. I don’t want to go into the whole GM debate here, but beyond some of the irrational fears (“playing God”) I think a very real concern about GM is that large corporations will own the genetic code for useful crops. An open flip model could spend money on developing certain crops (for example, ones that might better survive extreme weather in developing nations) and release that code openly. Producing the seeds then is relatively cheap. The same is true for certain medicines – increasingly drug companies are reluctant to invest in drugs that actually cure people, since that’s a one-off purchase. Those that help ameliorate chronic conditions represent a better market. The current model puts the research costs onto Big Pharma, who will then recoup those costs through sales. But for some desired drugs, different agencies might contribute to the research to produce an openly licensed drug, which is then cheap to produce. And so on. It won’t be applicable everywhere, but for certain problems, the open flip represents an economic model that utilises the advantages of the internet, digital solutions and open licences. That’s my argument anyway.
I have joined the Executive Board of the Society for Learning Analytics Research (SoLAR) as a ‘member at large’! The current president of the society is Dragan Gašević from the University of Edinburgh, UK.
I’m a keen reader of the fabulous Mim’s Crinoline Robot blog for lots of West Country vintage goodness and a reminder of home. Inspired by her What’s on in Vintage Wiltshire section, rather than just email a pal a list of vintage and dance events in the MK area over the next couple of months I could… blog them instead!
So, here we go. Needless to say the list is incomplete… so if you know of an event coming up in the area (surely there must be something the weekend of 13/14 August?), fill us in and leave a comment with the details!
- Dollar Shake @Bogota Milton Keynes, RnB, Surf, Cuban, Soul… 30 July 2016
- Swingsters Balboa workshop followed by tea dance, Bushey. 31 July 2016
- Death Do Us Part Dangershow, Craufurd Arms, Wolverton: 2 August 2016
- Swingsters Black Cat Friday Freestyle, Great Barford: 5 August 2016 (not the 12th as on website)
- ATOMIC, “The New Look Mid-Century Vintage Festival” Northants: 20 & 21 August 2016
- Ramsey 1940s weekend, Huntingdon: 20 & 21 August 2016
- Twinwood, “The Original and Best Vintage Music & Dance Festival” Bedford: August Bank Holiday weekend: 26-29 August 2016
- Hot 8 Brass Band with electro swing after party, MK11, 28 August 2016
Hedna’s Vintage Nightclub is back at the Stables for a Christmas special on 10 December 2016.
Featured image picture credit: “Vintage Pinball” by Eric Wittman is licensed CC BY-ND 2.0 Generic and “Toward the Workers’ Institute” by Beck Pitt is licensed CC BY 2.0
Experimenting with GIFs this afternoon. Began by using The Flirtations 1968 classic Nothing but a Heartache…
Then a little snippet from Ziegfeld Follies Here’s to the Girls (the bit where Lucille Ball twirls round was another possibility along with the ballet dancing at around 1:40… future GIF alert!!)
I visited the University of Deusto in Bilbao, Spain, to give a keynote at the learning analytics summer institute there (LASI Bilbao 2016) on 28 June 2016. The event brought people together from the Spanish Network of Learning Analytics (SNOLA), which was responsible for organising the event, in conjunction with the international Society for Learning Analytics Research (SoLAR).
What does the future hold for learning analytics? In terms of Europe’s priorities for learning and training, they will need to support relevant and high-quality knowledge, skills and competences developed throughout lifelong learning. More specifically, they should improve the quality and efficiency of education and training, enhance creativity and innovation, and focus on learning outcomes in areas such as employability, active citizenship and well-being. This is a tall order and, in order to achieve it, we need to consider how our work fits into the larger picture. Drawing on the outcomes of two recent European studies, Rebecca will discuss how we can avoid potential pitfalls and develop an action plan that will drive the development of analytics that enhance both learning and teaching.
The series of LACE workshops on Ethics and Privacy in Learning Analytics (EP4LA) keeps expanding.
I worked with María Jesús Rodríguez-Triana on the programme for one of these events, which she ran with Denis Gillet at the 12th Joint European Summer School on Technology Enhanced Learning (JTEL Summer School) in Estonia, on 20 June.

Workshop outline
This 90-minute workshop aims to give participants an overview of the ethical and privacy issues in Learning Analytics. It also aims to increase participants’ awareness of how to implement LA solutions, whether as researchers, practitioners or developers. It will consist of three parts:
Part 1 – Introduction: presentation of LA frameworks and guidelines for Learning Analytics regarding ethics and privacy.
Part 2 – Framework analyses: participants will be grouped to work on a specific framework. The teams will categorise the ethical and privacy issues that participants are currently addressing in their practice, those that could be covered with low-to-medium effort, and those that constitute a challenge.
Part 3 – Discussion: an open discussion will follow, exploring the complexity of each framework and looking for potential ways of addressing these issues.
We had been asked to validate the Cert Ed and the Professional Graduate Certificate in Education courses run by the college, which does not currently have the authority to award qualifications at this level. Like many other colleges in England, it asked the OU to validate its courses, so that students completing those courses could receive certificates from The Open University.
Through its Royal Charter, the OU is able to validate the programmes of institutions that do not have their own degree-awarding powers or that wish to offer OU awards. Validation is an iterative process, carried out over a period of time, culminating in an event that brings together participants. The process covers ten areas:
- Rationale, aims and intended learning outcomes of the programme of study
- Curriculum and structure of the programme of study
- Teaching and learning
- Admissions and transfer
- Staffing, staff development and research
- Teaching and learning resources
- Other resources for students
- Programme management and monitoring
- Programme specification and handbook
The process requires close scrutiny of relevant documentation, discussions with staff and students involved with the programmes, and tours of the facilities. A very interesting day, and a chance to get a detailed overview of how two qualifications work in practice.
I was invited to write a paper for Distance Education in China, a journal which reaches out to Western academics and is willing to take on the task of translating papers from English. My paper was based on work in Augmented Education, written by me, Kieron Sheehy and Gill Clough and published by Palgrave in 2014.

Abstract
Digital technologies are becoming cheaper, more powerful and more widely used in daily life. At the same time, opportunities are increasing for making use of them to augment learning by extending learners’ interactions with and perceptions of their environment. Augmented learning can make use of augmented reality and virtual reality, as well as a range of technologies that extend human awareness. This paper introduces some of the possibilities opened up by augmented learning and examines one area in which they are currently being employed: the use of virtual realities and tools to augment formal learning. It considers the elements of social presence that are employed when augmenting learning in this way, and discusses different approaches to augmentation.
数字化技术的价格越来越便宜，功能越来越强大，在日常生活中用途越来越广泛。与此同时，利用数字化技术进一步促进学习者与他们所处环境的互动以及对环境的感知以增强学习的机会也越来越多。增强学习可以利用增强现实和虚拟现实以及许多能提高人类意识的技术。本文介绍增强学习的一些可能性并讨论目前正在应用增强学习的一个领域：运用虚拟现实和工具增强正式学习。文章分析了基于虚拟现实和工具的增强学习所需的社交临场成分，并讨论不同的增强方法。
Ferguson, Rebecca (2016). 增强学习的可能性与挑战 [Possibilities and challenges of augmented learning]. Distance Education in China, 6, pp. 5–13.
Next stop after the LAK conference was Kuala Lumpur in Malaysia. There I took part in an expert workshop from 2-3 May 2016, organised by the Commonwealth of Learning, developing guidelines for the quality assurance and accreditation of massive open online courses.
The purpose of this review is to identify quality measures and to highlight some of the tensions surrounding notions of quality, as well as the need for new ways of thinking about and approaching quality in MOOCs. It draws on the literature on both MOOCs and quality in education more generally in order to provide a framework for thinking about quality and the different variables and questions that must be considered when conceptualising quality in MOOCs. The review adopts a relativist approach, positioning quality as a measure for a specific purpose. The review draws upon Biggs’s (1993) 3P model to explore notions and dimensions of quality in relation to MOOCs — presage, process and product variables — which correspond to an input–environment–output model. The review brings together literature examining how quality should be interpreted and assessed in MOOCs at a more general and theoretical level, as well as empirical research studies that explore how these ideas about quality can be operationalised, including the measures and instruments that can be employed. What emerges from the literature are the complexities involved in interpreting and measuring quality in MOOCs and the importance of both context and perspective to discussions of quality.

Workshop participants
Australia: Adam Brimo
Japan: Paul Kawachi
New Zealand: Nina Hood
My final presentation at the LAK16 conference was another session organised by the Learning Analytics Community Exchange (LACE) project that built on our Visions of the Future work. This panel session brought participants together to discuss the next steps for learning analytics and where we are heading as a community.

Abstract
It is important that the LAK community looks to the future, in order that it can help develop the policies, infrastructure and frameworks that will shape its future direction and activity. Taking as its basis the Visions of the Future study carried out by the Learning Analytics Community Exchange (LACE) project, the panelists will present future scenarios and their implications. The session will include time for the audience to discuss both the findings of the study and actions that could be taken by the LAK community in response to these findings.
Ferguson, Rebecca; Brasher, Andrew; Clow, Doug; Griffiths, Dai and Drachsler, Hendrik (2016). Learning Analytics: Visions of the Future. In: 6th International Learning Analytics and Knowledge (LAK) Conference, 25-29 April 2016, Edinburgh, Scotland.
This paper explores the potential of analytics for improving accessibility of e-learning and supporting disabled learners in their studies. A comparative analysis of completion rates of disabled and non-disabled students in a large five-year dataset is presented and a wide variation in comparative retention rates is characterized. Learning analytics enable us to identify and understand such discrepancies and, in future, could be used to focus interventions to improve retention of disabled students. An agenda for onward research, focused on Critical Learning Paths, is outlined. This paper is intended to stimulate a wider interest in the potential benefits of learning analytics for institutions as they try to assure the accessibility of their e-learning and provision of support for disabled students.
Cooper, Martyn; Ferguson, Rebecca and Wolff, Annika (2016). What Can Analytics Contribute to Accessibility in e-Learning Systems and to Disabled Students’ Learning? In: 6th International Learning Analytics and Knowledge (LAK) Conference, 25-29 April 2016, Edinburgh, Scotland.
Our second LACE workshop of LAK16 was the highly successful Failathon. The idea for this workshop emerged from an overview of learning analytics evidence provided by the LACE Evidence Hub. This suggested that the published evidence is skewed towards positive results, so we set out to find out whether this is the case.
A packed workshop discussed past failures. All accounts were governed by the Chatham House Rule – they could be reported outside the workshop as long as the source of the information was neither explicitly nor implicitly identified.

Abstract
As in many fields, most papers in the learning analytics literature report success or, at least, read as if they are reporting success. This is almost certainly not because learning analytics research and activity are always successful. Generally, we report our successes widely, but keep our failures to ourselves. As Bismarck is alleged to have said: it is wise to learn from the mistakes of others. This workshop offers an opportunity for researchers and practitioners to share their failures in a lower-stakes environment, to help them learn from each other’s mistakes.
Clow, Doug; Ferguson, Rebecca; Macfadyen, Leah and Prinsloo, Paul (2016). LAK Failathon. In: 6th International Learning Analytics and Knowledge (LAK) Conference, 25-29 April 2016, Edinburgh, Scotland.
A busy week at the Learning Analytics and Knowledge 2016 (LAK16) conference began with a workshop on Ethics and Privacy Issues in the Design of Learning Analytics. The workshop formed part of the international EP4LA series run by the LACE project.
The workshop included a series of presentations, and I talked briefly about findings related to ethics and privacy that had emerged from the LACE Visions of the Future study.

Abstract
Issues related to Ethics and Privacy have become a major stumbling block in application of Learning Analytics technologies on a large scale. Recently, the learning analytics community at large has more actively addressed the EP4LA issues, and we are now starting to see learning analytics solutions that are designed not as an afterthought, but with these issues in mind. The 2nd EP4LA@LAK16 workshop will bring the discussion on ethics and privacy for learning analytics to the next level, helping to build an agenda for organizational and technical design of LA solutions, addressing the different processes of a learning analytics workflow.
Drachsler, Hendrik; Hoel, Tore; Cooper, Adam; Kismihók, Gábor; Berg, Alan; Scheffel, Maren; Chen, Weiqin and Ferguson, Rebecca (2016). Ethical and Privacy Issues in the Design of Learning Analytics Applications. In: 6th International Learning Analytics and Knowledge (LAK) Conference, 25-29 April 2016, Edinburgh, Scotland.
Learning at Scale: Using an Evidence Hub To Make Sense of What We Know

Abstract
The large datasets produced by learning at scale, and the need for ways of dealing with high learner/educator ratios, mean that MOOCs and related environments are frequently used for the deployment and development of learning analytics. Despite the current proliferation of analytics, there is as yet relatively little hard evidence of their effectiveness. The Evidence Hub developed by the Learning Analytics Community Exchange (LACE) provides a way of collating and filtering the available evidence in order to support the use of analytics and to target future studies to fill the gaps in our knowledge.
Ferguson, Rebecca (2016). Learning at Scale: Using an Evidence Hub To Make Sense of What We Know. In: L@S ’16 Proceedings of the Third (2016) ACM Conference on Learning @ Scale, ACM, New York.